234951034
pes2o/s2orc
v3-fos-license
Regional approaches to solving the problems of inclusive education. The article deals with the integration processes in the Russian education system. In recent years they have become increasingly active in educational practice and have become a reality. It is important to emphasize the relative novelty of this component of the domestic education system. And since we are only taking the first steps in this direction, questions inevitably arise about the ways, means and possibilities of solving this problem in the socio-economic and socio-cultural conditions that have developed in modern Russian society.

Introduction

In the 1990s, the Ministry of Education of the Russian Federation quite clearly defined its position of supporting the inclusion of children with various developmental disabilities in the environment of normally developing peers, thereby effectively legalizing integration trends. Moreover, the subjects of the Federation were urged to act more boldly in this direction [1]. On the other hand, the regions had to seek and find opportunities for integrated education and upbringing of children with special educational needs on their own, for quite objective reasons, one of which was a new social demand coming from the parents of such children.

The peculiarities of the integration process in the Russian education system

The main ideas reflecting the strategic line of the integration processes were as follows:
- The development of the idea of integration as one of the leading trends of the modern stage in the development of the domestic system of special education does not in any way mean that the system of differentiated education for different categories of children should be curtailed. A well-thought-out state policy is important, one that does not allow "distortions" and "excesses" and balances the principles of integration and professional influence in specially organized conditions [2].
- Every person, regardless of health status or physical or mental disability, has the right to receive an education whose quality does not differ from that received by healthy people.
- The most important developmental periods for children with disabilities are infancy and the early and preschool ages. These periods in the lives of children with disabilities require increased attention from the state and society.
- Medical, social and educational institutions provide parents with full information.
- Work with the family begins from the moment a child is found to have a particular physical or intellectual impairment. Specialists involve parents as full-fledged partners in compiling an individual program of habilitation and rehabilitation [3]. The role of parents changes qualitatively: they are included in the life of the children's collective and of the collective of teachers and parents.
- From an early age, specialists help parents to include children with mental and physical disabilities in the communication space of healthy children.
- At the request of parents, children with mental and physical disabilities are enrolled in educational institutions at their place of residence, and parents have the right to attend all classes given by specialists.
- One of the ways to establish a qualitatively new interaction between special and mass education is the creation and development of fundamentally new educational institutions: institutions of a combined type, including preschool groups or classes for both normally developing children and children with a certain developmental disability [4].
- Each child should be given the right to develop at their own personal pace.
- Effective integrated education is possible only on condition of special training and retraining of personnel, both teachers of general education and of special (correctional) institutions.
- Taking into account the novelty, social significance and complexity of the problems solved in the framework of integrated education, it is necessary to provide for fundamental and applied scientific research of an interdisciplinary nature.

One of the directions of the developed strategy provided for the creation of experimental sites that would allow both teachers and education organizers to accumulate a methodological base for integrated learning and gain experience in new conditions, as well as to test variable integration models. There are three main forms of children's education in Russia (Fig. 1), among them special schools and home education. The nosology groups are shown in Fig. 2.

Today we all quite clearly understand that the integration processes are conditioned by the socio-political changes taking place in our society [5]. The need to provide all children with equal rights to receive educational services, regardless of the state of their mental and physical health, is increasingly recognized not only in the field of special education but also by teachers of mass preschool and school educational institutions. Looking ahead also fosters awareness of the inevitability and necessity of developing inclusive education. An analysis of the health status of children in Russia suggests that the number of children with "special educational needs" is unlikely to decrease in the near future; rather the opposite. And if all these children join the system of differentiated education, such large-scale differentiation can give rise to a critical situation in the system itself. Therefore, many children, especially children with mild developmental disabilities, will inevitably be accepted by mass kindergartens and schools. Building the educational process in the context of inclusive education requires a verified, scientifically grounded approach that would absorb all the available foreign and domestic experience of integrated education. In addition, the importance of a flexible approach to the development of integration processes should be noted. It is hard to imagine that, without creative reworking, the existing models of integrated learning (domestic and foreign) would function equally successfully in various socio-economic and socio-cultural conditions. As for the foreign experience of integration, in each country this experience has its own historical and socio-cultural roots. If we take, for example, the Norwegian model of integrated education, then despite the fact that integration in Norway has a state legislative basis, it cannot be said that the practice of raising and educating children with special educational needs is free of serious problems there.
The experience of professional communication with Norwegian specialists allows us to say that quite often integration is of a formal nature, and children with developmental disabilities, while within the confines of a mass school, are no less isolated than before. As a rule, models of partial or temporary integration are used more often. As for full integration (such a variant does occur, although it is far from being represented everywhere), everything depends on the ability of an ordinary teacher and a special teacher to build their professional relationship and organize joint work. Sometimes the teacher working with the class simply ignores both the special teacher and his recommendations, and remains indifferent to the special child included in his class [6]. As for the passive presence of a child with special educational needs in a regular classroom (which, as a rule, is what including him in the educational process amounts to), it can hardly be considered sufficiently productive in terms of the effect of correctional and developmental education. At the same time, there is no point in denying the humanistic value of such inclusion and the importance of communication between the "special" child and normally developing peers, as well as the possibility and importance of normally developing children becoming aware that children can be different and have different learning opportunities. The humanistic value of the attitude that Norwegian teachers often demonstrate in interaction with their pupils should also be noted. This model of interaction, imbued with love for the child and demonstrating complete acceptance of the child, deserves special attention.

As for the construction of models of integration in the Russian education system, the path associated with the parallel study of not only foreign and domestic but also regional experience seems more promising. It would be more correct to speak about the need to develop regional concepts of inclusive education, or of including children with disabilities in the educational space of the region. The experience of social and educational integration in the Rostov region shows that the regional education system implements and tests various integration models, although many of them do not have official experimental status, and some arise spontaneously [7]. The existing spectrum of integration models can be presented hierarchically, from well-known, scientifically and methodologically verified and substantiated models to singular and specifically regional ones. The emergence of the latter is due to the special conditions prevailing in one or another settlement of the region, and they can be considered the only possible way out in the specific situation of the regional educational space. However, the experience of such rather forced searches and finds is especially valuable because it has its own regional origins and arises in the very depths of the national education system. Purposeful study and generalization of this experience could certainly contribute to a more effective development of integration processes on the ground. The conditions of temporary, partial, combined or full integration that exist in ordinary kindergartens and schools can serve as examples of well-known models for the inclusion of children with special educational needs in the educational space of the region [8].
Many educational institutions have accumulated more than ten years of experience proving the effectiveness of an inclusive approach to teaching and raising children with disabilities. At the same time, in most cases it is not so much educational as social integration that is involved. In addition, these models are represented mainly in the large settlements of the region. As for rural areas, practice shows that in a number of cases teachers of rural correctional schools and orphanages successfully carry out the social integration of their pupils. For example, mentally retarded schoolchildren are included in amateur art circles, in which they study together with students of public schools and then give concerts for the residents of nearby villages. According to the observations of teachers, such cooperation has a positive effect not only on the pupils of the correctional school. Pupils of the mass school, in the process of communication, cooperation and co-creation with mentally retarded schoolchildren, go through a special school of personal growth: they assert themselves by helping the weaker ones and learn to be attentive, patient and caring, at the same time becoming teachers' assistants [9].

Of particular interest are examples of educational integration that emerged due to emergency situations in the educational space of a particular settlement. It makes sense to give several typical examples that demonstrate how this problem is solved in the real conditions of the regional education system. Several years ago, the parents of a thirteen-year-old girl with Down syndrome from the Pomor village of Nizhnyaya Zolotitsa applied to the Center for Psychological, Pedagogical and Medical and Social Assistance of the Pomor State University named after M.V. Lomonosov. The girl had studied from the age of eight in a local small mass school according to an individual program, in a mixed-age class, for five years, being constantly present at all lessons. All the specialists of the center noted the high level of her socialization: the girl communicated willingly and noted that she loved Russian, reading and drawing more than other subjects. She is happy if she gets a "five" or a "four" and upset if she gets a "three". A decisive role in the successes achieved in the girl's schooling was played by the well-coordinated interaction of her parents and school teachers, who professionally solved the issue of organizing psychological and pedagogical support for the child along her individual educational route [10].

Then a seven-year-old boy came to his first consultation. The village where the family lived was located 80 kilometers from the district center. The boy's parents had higher education, and his grandmother had been a kindergarten teacher. The family members had a great desire to help the child, but they lacked the knowledge and practical skills to provide adequate assistance in learning. The boy did not speak and had a pronounced mental retardation. The grandmother gave all her strength to organizing speech therapy classes, which for the most part were reduced to the production of sounds, guided by purchased textbooks on speech therapy. All this secondarily complicated the child's developmental situation. The regional psychological and pedagogical commission, taking into account the wishes of the parents, sent the child to study at his place of residence.
Local education authorities met the family halfway and made it possible for the boy to study according to an individual program in the local rural mass school. At the same time, the grandmother became the boy's first teacher. The other teachers, having no experience of teaching such children, at first did not contest her right to do so. The boy studied for two years under the first-grade program, then for one year under the second-grade program. Now he is being taught by the mass school's teachers. The boy is calm and flexible and easily builds relationships with peers and adults, which allows him to be present at school almost all the time. In addition, for several years the family has been continuously advised by the specialists of the center. The last consultation at the center, to which the family once again came, showed that the boy has achieved a good level of social adaptation; he feels much more confident in situations of interaction in an unfamiliar environment, and he speaks, reads and writes. Most importantly, during this time he acquired everyday practical skills in the family, which to a large extent gave him a sense of self-confidence. Great changes also took place in the mood of the boy's mother: her level of anxiety decreased, and she became calmer and more confident in the future of her family and child [11].

It should be noted that such an approach to solving the problem of educating children with special educational needs can be implemented only if two factors are present: the desire of the parents and the ability of teachers to carry out inclusive education. In this case, not only the child, who begins to feel self-sufficient and protected, is the winner, but also the family, in which peace and confidence in the future are restored to a certain extent. In general, in recent years the issues of including children living in the districts of the region in the mass educational environment have begun to be resolved more successfully. Educational authorities grant such a child the right to individual education in a mass school at the place of residence [12]. However, it must be emphasized that a referral to a mass school alone is not always enough for a child to be accepted, or for a teacher in a mass school to accept his special educational needs. Therefore, where the processes of "inclusion" are successful in such cases, great credit goes to local teachers, who show selflessness and who, without special training, often relying only on pedagogical intuition, self-education and love for the child, find content adequate to the child's capabilities. The growing number of examples of regional integration processes every year testifies to the fact that integration is becoming more and more familiar and, most importantly, a component of the single organism of the regional educational system that is accepted by teachers and by society [13]. The grains of integration experience accumulated in the region require deeper understanding and generalization, and the selection of its positive elements, which can help avoid mistakes in the future and prepare the development of the most acceptable regional models of social and educational integration [14]. The purpose of this work is to determine the optimal forms and content of education and training that would allow any child with special educational needs to join society at the level of his psychophysical capabilities and achieve his functional norm.
Conclusion

With this approach, the main goal of the integration processes, namely their humanistic orientation, is achieved. The principle of variability in the creation and implementation of regional models of integration, taking into account specific socio-economic and socio-cultural conditions, presupposes the possibility of realizing the right of every child to receive an education and contributes to a real awareness by society of the value of every child [15]. The upbringing and educational process in this case, both organizationally and in content, can most closely approach the individual educational needs and capabilities of students, and will be less subject to unification. At the same time, the implementation of regional integration models is possible only if there is an appropriate legislative framework and the professional readiness of teachers of mass and special educational institutions to work in the new conditions.
2020-12-10T09:05:02.343Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "a97debb3c753e28685edd24c0a4a6c142d2f609e", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/70/e3sconf_itse2020_18045.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "6bb06c43e8cc71fe1cc65c03187399e445d63d9d", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [ "Computer Science" ] }
220844126
pes2o/s2orc
v3-fos-license
Placebos without deception reduce self-report and neural measures of emotional distress

Several recent studies suggest that placebos administered without deception (i.e., non-deceptive placebos) can help people manage a variety of highly distressing clinical disorders and nonclinical impairments. However, whether non-deceptive placebos represent genuine psychobiological effects is unknown. Here we address this issue by demonstrating across two experiments that during a highly arousing negative picture viewing task, non-deceptive placebos reduce both a self-report and neural measure of emotional distress, the late positive potential. These results show that non-deceptive placebo effects are not merely a product of response bias. Additionally, they provide insight into the neural time course of non-deceptive placebo effects on emotional distress and the psychological mechanisms that explain how they function.

Placebo interventions offer a cost-effective tool to manage a host of clinical disorders and nonclinical symptoms [1][2][3]. However, an important ethical issue prevents their widespread use: the ubiquitous belief that for placebos to be effective, a person needs to be deceived into believing they are taking an active treatment 4,5. Recently, researchers have begun to examine whether the beneficial effects of placebos can be harnessed without deception by communicating to participants what placebos are, explaining the science behind how they work, and highlighting how placebos can still provide beneficial effects even if people know they are taking them 4,5. This verbal suggestion approach leverages one of the primary psychological mechanisms through which placebos operate: a person's expectation that their condition will improve after receiving a treatment [6][7][8]. Guided by this approach, researchers have demonstrated the beneficial effects of non-deceptive placebos for a variety of conditions, including irritable bowel syndrome 5, chronic back pain 9, experimental pain 10,11, and emotional distress, psychological well-being, and sleep quality 12 (see Supplementary Note 1 for the distinction between open-label and non-deceptive placebos). However, these studies have primarily documented the benefits of non-deceptive placebos using self-report measures [13][14][15][16]. Out of twenty-six published non-deceptive placebo studies to date, eight included objective behavioral or biological measures. Only one of these eight studies showed an effect on behavioral outcomes, and no direct effects on biological outcomes have been documented 10,17-21 (see Supplementary Table 1 for a current list of non-deceptive placebo studies). Therefore, it remains unclear whether the beneficial effects associated with non-deceptive placebos represent genuine psychobiological effects 2,3. Here, we argue that prior research may have failed to observe non-deceptive placebo effects on objective biological measures because they focused on domains (e.g., wound healing recovery rate or physical skin reaction) that do not reliably respond to deceptive placebos induced through verbal suggestion [22][23][24][25]. Put simply, if a deceptive placebo induced through verbal suggestion does not reliably impact biological outcomes in these contexts, there is no reason to expect a non-deceptive placebo should either.
Guided by this logic, we examine whether non-deceptive placebos can reduce self-report measures and objective biological markers in a context that is responsive to deceptive placebo effects: emotional distress [26][27][28][29][30][31][32][33]. In Experiment 1 (n = 68), we examine the effect of a non-deceptive placebo manipulation on self-report emotional distress in response to viewing negative emotional images (see Fig. 1a for task sequence). In Experiment 2 (n = 218), using a similar image viewing paradigm (see Fig. 2a for task sequence), we examine the effect of the same non-deceptive placebo manipulation on a neural biomarker of emotional distress: the late positive potential (LPP). The LPP is an electroencephalogram (EEG) derived event-related brain potential (ERP) response that measures millisecond changes in the neural activity involved in emotional processing 34. The early time window of the LPP (400-1000 ms) indexes attention allocation 34; the sustained time window (1000-6000 ms) indexes conscious appraisals and meaning-making mechanisms involved in emotion processing 34,35 and is consistently downregulated by cognitive emotion regulation strategies [36][37][38][39]. Consistent with its role in immediate attentional orienting responses to emotional stimuli and later appraisal processes, neural sources of the LPP include both the amygdala 35,40 and the dorsolateral prefrontal cortex 41. Thus, the LPP is ideally suited to help examine the neural mechanisms and time course of non-deceptive placebo effects on emotional distress.

In both experiments, we randomly assigned participants to either a non-deceptive placebo group or a control group. Participants in the non-deceptive placebo group read about placebo effects and were then asked to inhale a nasal spray consisting of saline solution. They were told that the nasal spray was a placebo that contained no active ingredients, but would help reduce their negative emotional reactions to viewing distressing images if they believed it would. Participants in the control group read about the neural processes underlying the experience of pain and were also asked to inhale the same saline solution spray; however, they were told that the purpose of the nasal spray was to improve the clarity of the physiological readings we were recording in the study. The articles were matched for narrative structure, emotional content, and length (see "Methods" section, Supplementary Methods 1 and 2 for details). Consistent with the idea that non-deceptive placebos reflect genuine psychobiological effects, we hypothesized that the non-deceptive placebo group (vs. control) would report less negative affect and exhibit lower neural activity during the sustained LPP time window. Given conflicting evidence concerning how deceptive placebos influence attentional processes, we were agnostic about how non-deceptive placebos would influence early LPP amplitude. In Experiment 1, we find that non-deceptive placebos (vs. control) reduce self-report measures of emotional distress. Moreover, in Experiment 2, we demonstrate that non-deceptive placebos (vs. control) also reduce neural activity during the sustained LPP time window that indexes meaning-making stages of emotional reactivity. We do not find any effects of non-deceptive placebos (vs. control) on attentional processes as indexed by the early LPP time window.
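Because both LPP indices above are defined purely by time windows (400-1000 ms for the early LPP, 1000-6000 ms for the sustained LPP), each measure reduces to a mean amplitude over a slice of an averaged ERP waveform. The sketch below illustrates that computation; the 256 Hz sampling rate, the epoch span, and the random stand-in waveform are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical trial-averaged ERP at a centro-parietal site such as CPz:
# a 7 s epoch (1 s pre-stimulus + 6 s picture presentation) sampled at 256 Hz.
fs = 256
t = np.arange(-1.0, 6.0, 1.0 / fs)   # time relative to picture onset (s)
erp = np.random.randn(t.size)        # stand-in for a real waveform (microvolts)

def window_mean(signal, times, start, stop):
    """Mean amplitude of `signal` between `start` (inclusive) and `stop` (exclusive) seconds."""
    mask = (times >= start) & (times < stop)
    return signal[mask].mean()

early_lpp = window_mean(erp, t, 0.4, 1.0)      # 400-1000 ms: attention allocation
sustained_lpp = window_mean(erp, t, 1.0, 6.0)  # 1000-6000 ms: appraisal / meaning-making
print(early_lpp, sustained_lpp)
```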
In summary, non-deceptive placebos can downregulate both self-report and neural measures of emotional distress, providing evidence that they are more than response bias.

These main effects were qualified by a significant condition by picture type interaction, F(1, 60) = 12.41, p < 0.001, ηp² = 0.171. As Fig. 1b illustrates, participants in the non-deceptive placebo group reported less distress after viewing negative pictures compared to participants in the control group, t(60) = 3.94, p = 0.0002, d = 1.00. There was no non-deceptive placebo effect on neutral pictures, t(60) = −0.36, p = 0.72, d = −0.09. Supplementary Table 2 reports exploratory correlational analyses regarding beliefs, expectations, and self-reported emotional distress. These findings demonstrate that the non-deceptive placebo manipulation we administered is effective at reducing subjective emotional distress. Experiment 2 examined whether this emotion-dampening effect generalizes to an objective neural biomarker of emotional reactivity.

Sustained LPP in Experiment 2. We tested our predictions concerning whether non-deceptive placebos would influence an objective neural biomarker of emotional distress by performing a mixed-factorial ANOVA on the sustained LPP using a broad set of topographically organized clusters of electrodes that have been the focus of prior work 38,42,43 (see "Methods" section for details of our preregistered data analytic approach). As expected, the non-deceptive placebo manipulation led to a significant reduction in a neural biomarker of emotional distress, as evidenced by a main effect of condition on sustained LPP amplitude, F(1, 194) = 8.98, p = 0.003, ηp² = 0.044. This main effect was qualified by a significant condition by time interaction, F(1.62, 314.94) = 4.58, p = 0.017, ηp² = 0.023. As Fig. 2b illustrates, participants in the non-deceptive placebo group showed a gradual reduction in sustained LPP amplitude throughout the picture presentation, as shown by a significant time effect in the non-deceptive placebo group, F(1.73, 167.98) = 6.38, p = 0.003, ηp² = 0.062, followed by a significant within-subjects linear contrast, F(1, 97) = 6.83, p = 0.01, ηp² = 0.066. In comparison, the sustained LPP amplitude for participants in the control group did not change in magnitude throughout the picture presentation, F(1.53, 148.63) = 0.41, p = 0.61, ηp² = 0.004. Figure 2c shows topographic headmaps of the sustained LPP activity across the scalp (with amplitude from neutral and negative images collapsed) broken down by condition and time. Figure 2d illustrates that the magnitude of the difference between the control group and the non-deceptive placebo group increased at 2000-3000 ms, then peaked and plateaued at ~3000-4000 ms. See Supplementary Table 3 for detailed independent pairwise comparison statistics. To corroborate these findings, we performed a similar analysis at CPz, where the sustained LPP is typically maximal. We found similar patterns for the main effect of condition and the condition by time interaction (see Supplementary Fig. 1, Supplementary Table 4). We also report other significant interactions with condition in Supplementary Fig. 2, any condition by sample interactions in Supplementary Note 2, and exploratory correlational analyses regarding beliefs, expectations, and sustained LPP activity in Supplementary Table 5.
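The condition by time interaction reported above comes from a mixed-design ANOVA with condition varying between subjects and time within subjects. The sketch below reproduces that core design on synthetic data with the pingouin library; the paper's full preregistered model adds further factors (sample, picture type, and electrode topography) and was run in SPSS, and every number here is a fabricated stand-in.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # pip install pingouin

rng = np.random.default_rng(0)
conditions = ["control", "placebo"]
windows = ["1-2 s", "2-3 s", "3-4 s", "4-5 s", "5-6 s"]

# Long-format synthetic data: one sustained-LPP mean amplitude per
# subject x time window; the placebo group drifts downward over time.
rows = []
for subj in range(40):
    cond = conditions[subj % 2]
    for i, win in enumerate(windows):
        drift = -0.4 * i if cond == "placebo" else 0.0
        rows.append({"subject": subj, "condition": cond, "time": win,
                     "lpp": 5.0 + drift + rng.normal(scale=1.5)})
df = pd.DataFrame(rows)

# Mixed-design ANOVA: condition (between) x time (within), with a
# sphericity correction applied to the within-subjects terms.
aov = pg.mixed_anova(data=df, dv="lpp", within="time", subject="subject",
                     between="condition", correction=True, effsize="np2")
print(aov[["Source", "F", "p-unc", "np2"]])
```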
There was no significant three-way interaction among condition, picture type, and time, F(1.88, 364.35) = 0.08, p = 0.91, ηp² < 0.001, nor were there any other significant interactions involving condition and picture type (p > 0.05). These null interactions involving condition and picture type suggest that the non-deceptive placebo manipulation exerted a general dampening effect on emotional reactivity in response to both neutral and negative images. As we elaborate in more detail in the "Discussion" section, although this pattern is inconsistent with the self-report findings we observed in Experiment 1, it is consistent with several placebo studies that have shown a main effect of deceptive placebos across neutral and negative stimuli for autonomic and neural measures 27,28.

Early LPP in Experiment 2. The early LPP (400-1000 ms) indexes attention allocation to incoming emotional stimuli 33,44. As noted earlier, we did not have strong predictions regarding how the non-deceptive placebo we administered would affect the early LPP because prior research provides mixed evidence regarding how deceptive placebos influence attention allocation processes. While some studies suggest that deceptive placebos amplify attention to negative stimuli, others suggest the opposite 27,33,45,46. We examined the effects of non-deceptive placebos on the attentional stages of emotional processing by performing a mixed-factorial ANOVA on the same broad set of topographically organized clusters of electrodes, but focused on the early LPP time window (400-1000 ms; see "Methods" section for details of our preregistered data analytic approach) 34. This analysis revealed a complicated set of interactions, such as a three-way condition by time by anterior/posterior interaction, F(1, 194) = 4.00, p = 0.047, ηp² = 0.02, as well as a five-way condition by picture type by time by hemisphere by anterior/posterior interaction, F(1, 194) = 4.46, p = 0.036, ηp² = 0.022. Probing these interactions did not, however, reveal any consistent condition effects (see Supplementary Table 6 and Supplementary Table 7 for contrasts). Moreover, we did not detect any condition effect at CPz, where the early LPP is typically maximal (all p > 0.05). We did not further probe any significant interactions with sample and condition for the early LPP. In summary, we found no reliable non-deceptive placebo effect on the early LPP.

Discussion

The beneficial effects of non-deceptive placebos have been established in self-report measures for a host of clinical conditions and nonclinical impairments 4,47. However, it is unclear whether these benefits represent genuine psychobiological effects.

[Fig. 1 caption (fragment): ratings were made on a scale from 1 (not at all negative) to 9 (very negative). b A mixed-factorial ANOVA (condition by picture type) was conducted, followed up by independent pairwise comparisons for relevant contrasts. All tests were two-tailed, and follow-up tests were not adjusted for multiple comparisons. Bars represent the mean self-report ratings calculated for condition (control group, n = 33, and non-deceptive placebo group, n = 29) per picture type (neutral and negative). Error bars represent ±1 SEM. Dots represent mean values for each participant per picture type. There was a significant interaction between condition and picture type (p = 0.0008). Follow-up tests showed no difference in emotional distress ratings between the control group and non-deceptive placebo group for neutral pictures (p = 0.72); however, the non-deceptive placebo group, compared to the control group, reported less emotional distress when viewing negative pictures (p = 0.0002). No asterisk = not significant, ***p < 0.001.]

In the present experiments, non-deceptive placebos reduced self-reported emotional distress, in line with prior placebo research on emotional reactivity 26,27,[29][30][31][32][33]46. More importantly, we demonstrate that non-deceptive placebos decreased an objective neural marker of emotional distress during the appraisal stages of emotional processing: the sustained LPP. This finding provides initial support that non-deceptive placebos, at least in the domain of emotional distress, are not merely a product of response bias, but represent genuine psychobiological effects. These findings also help illuminate the neural time course of non-deceptive placebo effects on emotional distress. It seems that non-deceptive placebos do not exert their regulatory effects immediately and require some time to reduce emotional reactivity (Fig. 2). This pattern of gradual decreases in sustained LPP amplitude throughout the picture presentation is consistent with the time course of deceptive placebo effects on pain processing, where participants in the placebo condition initially experience levels of pain similar to the control condition before the placebo intervention modulates them 48. A gradual decrease in sustained LPP amplitude appears to happen at ~2000-3000 ms and then plateaus in the 3000-4000 ms range, following a non-deceptive placebo intervention. This time course pattern in neural activity suggests that non-deceptive placebos are likely acting on appraisal and meaning-making mechanisms 49,50. Moreover, it raises questions about what type of appraisal processes occur when someone receives a non-deceptive placebo intervention and the degree to which these appraisals are conscious or unconscious. Consistent with prior research 27,30, we observe an asymmetry in how our non-deceptive placebo manipulation impacted participants' self-report and EEG measures of emotional distress. In Experiment 1, we show a non-deceptive placebo effect for negative stimuli but not neutral stimuli; however, in Experiment 2, we observe a non-deceptive placebo effect for both neutral and negative stimuli. One explanation for this asymmetry may have to do with the temporal features of self-report and the sustained LPP. The sustained LPP measures online reactions to the images, while self-report is assessed retrospectively, 4000 ms after picture offset. It may be that by the time participants are asked about the neutral images, any small negative emotional distress they experienced has returned to baseline levels. More broadly, these findings are compatible with a large body of research suggesting that self-report, behavioral, peripheral physiological, and neural measures are not redundant and often do not cohere 51,52. Taken together, these findings underscore the importance of examining the effects of non-deceptive placebos across multiple levels of analysis. Our findings also have important translational implications. Acute episodes of emotional distress have relevance not only for daily emotional life, but for many physical and psychiatric conditions 1,53,54. In terms of physical conditions, emotional distress is associated with increased chances of chronic pain onset and with amplifying the existing pain experience 55.
As such, non-deceptive placebos can help manage the emotional aspect of many medical conditions that have a pain component. For psychiatric conditions, non-deceptive placebos may be used as co-interventions with existing therapies, especially for disorders in which emotion dysregulation is a core feature, such as depression and anxiety 54. From a nonclinical standpoint, non-deceptive placebos also offer an alternative emotion regulation strategy that some researchers believe is distinct from internally generated reappraisal strategies, which often require intact cognitive control mechanisms and available cognitive resources 50,56-58. We believe these various clinical and nonclinical areas provide important translational research directions.

It is important to acknowledge that we did not find a significant relationship between beliefs and expectations and either self-report (Supplementary Table 2) or neural (Supplementary Table 5) measures of emotional distress. This lack of a relationship seems to be consistent with the broader non-deceptive placebo literature, since associations between expectations and outcome measures are not commonly documented. In fact, of the twenty-six non-deceptive placebo studies published thus far, eleven report asking expectation questions, and only two show a relationship 12,19. These inconsistencies may be due to the well-established finding that people frequently lack direct access to internal states, making it difficult to provide an accurate account of their expectations 59. Future theoretical and empirical work is needed to delineate the factors that influence the associations between expectations and outcome measures. Future research is also needed to examine how these findings generalize to other demographics. We sampled from a population of college students, limited in age range and not ethnically diverse. Moreover, because of sex differences in emotional reactivity, Experiment 2 recruited only female participants to minimize the confounding effect of sex 60. An important question for future research is whether the sex of the participant influences the efficacy of non-deceptive placebos on emotional distress and in other domains.

Non-deceptive placebos may offer a cost-effective intervention to help manage a host of clinical disorders and nonclinical symptoms 4,61; however, it is important first to establish that their beneficial effects go beyond self-report measures and lead to positive changes on objective biological markers 47. Our findings demonstrate an objective non-deceptive placebo effect on a neural biomarker that is relevant for emotion regulation and conditions characterized by emotional distress. Future research should examine the generalizability of these findings to other populations, domains, and biomarkers.

Methods

Participants. For Experiment 1, participants were recruited from a nonclinical sample at a large university in the Midwest. They were compensated with course credit. Sixty-eight participants took part in the study, but six were removed due to experimenter error or substantial deviation from the protocol (n = 3), a participant indicating they were a non-native English speaker at the exit survey (n = 1), a participant indicating that they misread the self-report scale (n = 1), and a software error resulting in no self-report affective ratings (n = 1). Four were removed from the non-deceptive placebo group, and two were removed from the control group.
The final sample submitted to analyses included 62 participants, with n = 33 in the control group (M age = 18.61, SD = 0.83; 39.4% female; 60.6% European American) and n = 29 in the non-deceptive placebo group (M age = 18.76, SD = 0.74; 34.5% female; 75.9% European American). Experiment 1 complied with all relevant ethical guidelines and regulations involving human participants, and was approved by the University of Michigan's Institutional Review Board. All participants provided informed consent before participating.

For Experiment 2, participants were recruited from a nonclinical sample at another large university in the Midwest. Two samples were collected, sample 1 (n = 115) and sample 2 (n = 103). They were compensated with course credit (n = 110) or $20 (n = 108) for their time. All participants were female to control for sex differences in brain structure and function, emotion processing, and emotion regulation ability 52,62,63; moreover, all participants were right-handed, between the ages of 18 and 30, and native English speakers. Participants who reported a history of severe mental illness or seizures were excluded. Participants recruited through the course credit system who did not meet all of our criteria were automatically filtered out and were not able to sign up for the study. Participants recruited through the payment system were sent a screening survey, and eligible participants were scheduled to come into the lab. A total of 218 people participated in Experiment 2. Twenty participants were removed from analysis due to reporting that English was not their native language at the exit survey (n = 1), software error (n = 4), and excessive artifacts due to eye and body movements (n = 15). One hundred and ninety-eight participants were submitted to analyses, with n = 99 in the control group (M age = 19.92, SD = 2.14; 78.8% European American) and n = 99 in the non-deceptive placebo group (M age = 19.78, SD = 2.36; 80.8% European American). Experiment 2 complied with all relevant ethical guidelines and regulations involving human participants, and was approved by Michigan State University's Institutional Review Board. All participants provided informed consent before participating.

Experimental design. For both experiments, participants were told that the study was on cognitive processing, memory, and emotion. Participants were randomly assigned to a control or non-deceptive placebo group (see Supplementary Fig. 3 for a design diagram). Those in the control group read an article on the neurological processes of pain and how to treat it (Supplementary Methods 1). Those in the non-deceptive placebo group read an article on the placebo effect, how powerful it is for some conditions, and how it can still work even without deception (Supplementary Methods 1). After reading the articles, the experimenter delivered different nasal spray instructions to the control and non-deceptive placebo participants. For the non-deceptive placebo group, the experimenter summarized the main points of the reading, emphasized in a positive frame that placebos can still work if the participant believes they will, and administered a saline nasal spray once to each nostril. For the control group, the experimenter explained that the saline nasal spray was designed to help obtain better physiological readings (Supplementary Methods 2).
The articles were matched for narrative structure, negatively valenced words (control = 62, non-deceptive placebo = 58), and length (control = 1287 words, non-deceptive placebo = 1270 words; see Supplementary Methods 1 and 2 for details). Participants in the control and non-deceptive placebo groups did not differ in terms of reading duration, writing duration, perception of article quality (all p > 0.05), or mood after reading the article (Experiment 1, p > 0.05; see Supplementary Tables 8 and 9 for details). For the non-deceptive placebo group, the experimenter and the participant were not blind to the condition, since our manipulation involved honestly telling participants they were receiving a placebo. Nevertheless, it is important to highlight that those in the non-deceptive placebo group were not aware they were receiving a placebo nasal spray until the actual nasal spray administration. This feature of our design reduces the bias from participants knowing they are participating in a study involving placebos before coming into the lab. Equally important, unlike previous work on non-deceptive placebos, the control group was blind to their condition and was not aware they were participating in a placebo study or that they were in the control group 5. This feature of our experiments reduces the bias that stems from a participant knowing they are in the control group and will not receive the experimental treatment 64.

Image viewing task. After the nasal spray administration, participants engaged in an image viewing task. For Experiment 1, participants viewed one block of forty images (30 negative and 10 neutral; see Supplementary Table 10 for a complete list of these images) selected on the basis of their normative valence and arousal ratings. The block design was based on a previous placebo study on emotional distress 29. The negative images were considered high intensity, with M valence = 2.30 (1 = very unpleasant; 9 = highly pleasant) and M arousal = 6.37 (1 = low; 9 = high) 65,66. The images were presented in a randomized order in forty trials using E-Prime (version 2.0; Psychology Software Tools, Pittsburgh, PA). For each image, participants viewed a fixation cross (4000 ms), a random image (6000 ms), and another fixation cross (4000 ms), followed by an affective rating period (5000 ms or less, depending on when the participant chose their response). For each image, the participant rated how the picture made them feel on a nine-point Likert scale from 1 (not at all negative) to 9 (very negative; see Fig. 1a). For Experiment 2, images were presented in blocks (see Supplementary Table 11 for a complete list of these images). The nasal spray was administered twice to each nostril before each block. The mean valence and arousal ratings for each block were matched and did not significantly differ from each other (p > 0.05). The pictures were presented in a randomized order using E-Prime (version 2.0; Psychology Software Tools, Pittsburgh, PA). For each image, participants viewed a blank screen (500 ms), a fixation cross (500 ms), a random image (6000 ms), and a relaxation prompt instructing them to relax and clear their mind (4000 ms; see Fig. 2a for trial sequence). Critically, participants did not self-report their negative feelings after each trial or after each block, in order to obtain pure neural signals of emotional reactivity without intervening introspective questions 37,42.
Data analytic strategy for Experiment 1. All statistical analyses for Experiment 1 were performed with SPSS (version 26), and the Fig. 1b bar graph was created with R Studio (version 3.6.1) and ggplot2 (version 3.3.0) 67. For the primary analysis, we performed a mixed-factorial ANOVA with condition (control and non-deceptive placebo) as a between-subjects factor and picture type (neutral and negative) as a within-subjects factor. A significant interaction between condition and picture type was followed by independent pairwise comparisons contrasting control minus non-deceptive placebo for neutral and negative pictures. Follow-up comparisons did not use any adjustments for multiple comparisons. For preliminary analyses, separate independent-samples t-tests were conducted for each respective variable (Supplementary Table 8). All tests were two-tailed and used a significance level of p < 0.05. Partial eta squared was calculated for all ANOVA results, and Cohen's d was calculated for all t-tests.

Psychophysiological recording and data reduction for Experiment 2. Continuous EEG activity was recorded using the ActiveTwo Biosemi system (Biosemi, Amsterdam, the Netherlands) from a 64-electrode cap arranged according to the International 10-20 system. Two additional electrodes were placed on the left and right mastoids for use as offline references. Three additional electrodes, placed inferior to the left pupil and at the left and right outer canthi, were used to record blinks and eye movements. A common mode sense active electrode and a driven right leg passive electrode formed a ground specified by the Biosemi system; this limited the amount of current that could return to the participant. Bioelectric signals were sampled at 1024 Hz. EEG signal processing and the creation of the topographic headmaps for Fig. 2c were performed using BrainVision Analyzer (version 2.2; BrainProducts, Gilching, Germany). Each electrode recording was referenced to the mean of the mastoids, band-pass filtered (cutoffs: 0.01-20 Hz; 24 dB/oct roll-off), and subjected to ocular artifact correction 68. Each picture trial was subjected to standard artifact rejection procedures using a computer-based algorithm with the following criteria: a voltage step exceeding 50 μV between contiguous sampling points, a voltage difference exceeding 400 μV within a trial, and a maximum voltage difference of less than 0.5 μV within 100 ms intervals. The average activity 500 ms before picture onset served as a baseline and was subtracted from each data point after picture onset.

We elected to preregister our data analytic plan on AsPredicted.org (http://aspredicted.org/blind.php?x=ie6r5j). All analyses for Experiment 2 were performed with SPSS (version 26), and Fig. 2b, d, Supplementary Fig. 1, and Supplementary Fig. 2 were created with SigmaPlot (version 14). Combining the two samples, we first examined the effects of non-deceptive placebos on the sustained LPP by performing a 2 (condition: control and non-deceptive placebo) × 2 (sample: sample 1 and sample 2) × 2 (picture type: neutral and negative) × 5 (time: 1000-2000 ms, 2000-3000 ms, 3000-4000 ms, 4000-5000 ms, and 5000-6000 ms) × 2 (hemisphere: left and right) × 2 (anterior/posterior: anterior and posterior) × 2 (inferior/superior: inferior and superior) mixed-factorial ANOVA with condition and sample as between-subjects factors and the other variables as within-subjects factors. We focused on the main effect of condition and any interaction effects involving condition that were robust against sample type.
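The preprocessing chain described above (mastoid reference, 0.01-20 Hz band-pass, 500 ms baseline subtraction, amplitude-based trial rejection) was carried out in BrainVision Analyzer; a rough open-source analogue can be sketched with MNE-Python, as below. The file name and mastoid channel labels are assumptions, MNE's reject/flat thresholds only approximate the paper's voltage-step and 100 ms flatness criteria, and the ocular correction step is omitted.

```python
import mne  # pip install mne

# Biosemi ActiveTwo data are stored as .bdf; the file name is hypothetical.
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)
raw.set_eeg_reference(["M1", "M2"])   # mean-of-mastoids reference (labels assumed)
raw.filter(l_freq=0.01, h_freq=20.0)  # band-pass as in the paper

# Epoch from 500 ms before picture onset to picture offset at 6 s.
events = mne.find_events(raw, stim_channel="Status")
epochs = mne.Epochs(
    raw, events, tmin=-0.5, tmax=6.0,
    baseline=(None, 0),        # subtract the 500 ms pre-onset mean
    reject=dict(eeg=400e-6),   # drop trials exceeding 400 uV peak-to-peak
    flat=dict(eeg=0.5e-6),     # drop near-flat trials (coarser than the 100 ms criterion)
    preload=True,
)

# Trial-averaged waveform; mean amplitude over the sustained window at CPz.
evoked = epochs.average()
sustained = evoked.copy().crop(tmin=1.0, tmax=6.0).get_data(picks="CPz").mean()
print(sustained)  # in volts; multiply by 1e6 for microvolts
```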
Greenhouse-Geisser corrections were applied to relevant interaction analyses. We report outlier detection procedures and additional robust analyses in Supplementary Table 12. Moreover, to corroborate this analysis, we performed a 2 (condition: control and non-deceptive placebo) × 2 (sample: sample 1 and sample 2) × 2 (picture type: neutral and negative) × 5 (time: 1000-2000 ms, 2000-3000 ms, 3000-4000 ms, 4000-5000 ms, and 5000-6000 ms) mixed-factorial ANOVA at CPz, where the sustained LPP is typically maximal. We report the analysis at CPz in Supplementary Fig. 1. We also report any significant interactions with condition in Supplementary Fig. 2 and any condition by sample interaction in Supplementary Note 2. Next, we tested the effect of non-deceptive placebos on the early LPP (400-1000 ms) by performing a 2 (condition: control and non-deceptive placebo) × 2 (sample: sample 1 and sample 2) × 2 (picture type: neutral and negative) × 2 (time: 400-700 ms and 700-1000 ms) × 2 (laterality: left and right) × 2 (anterior/posterior: anterior and posterior) × 2 (inferior/superior: inferior and superior) mixed-factorial ANOVA with condition and sample as between-subjects factors and the other variables as within-subjects factors. We focused on the main effect of condition and any interaction effects involving condition that were robust against sample type. To corroborate this analysis, we performed a 2 (condition: control and non-deceptive placebo) × 2 (sample: sample 1 and sample 2) × 2 (picture type: neutral and negative) × 2 (time: 400-700 ms and 700-1000 ms) mixed-factorial ANOVA at CPz, where the early LPP is typically maximal. Any significant interactions with condition were probed further until they could be followed up with independent pairwise comparisons. Follow-up comparisons did not use any adjustments unless otherwise stated. For preliminary analyses, separate independent-samples t-tests were conducted for each respective variable (Supplementary Table 9). All tests were two-tailed and used a significance level of p < 0.05. Partial eta squared was calculated for all ANOVA tests, and Cohen's d was calculated for all t-tests.

Questionnaires. For both experiments, participants completed additional measures, such as duration of reading and writing time, quality of the article readings, and belief in the effectiveness of placebos without deception (see Supplementary Methods 3 and 4). Participants in Experiment 2 completed extra measures, such as their perception of the experimenters and individual difference measures, such as the tendency to worry, trait anxiety, levels of optimism, and proneness to social desirability responding (see Supplementary Methods 4). Preliminary analyses and results for Experiment 1 are reported in Supplementary Table 8, and those for Experiment 2 are reported in Supplementary Table 9.

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

Data supporting these findings can be found at the Open Science Framework (https://osf.io/s3b8d/). SPSS (version 26) was used for all statistical analyses. Data and R code underlying Fig. 1b can be found in the Experiment 1 data files. Data and SPSS syntax underlying Fig. 2d, Supplementary Fig. 1b, and Supplementary Figs. 2a, b can be found in the Experiment 2 data files. A reporting summary for this article is available as a Supplementary Information file. Additional data from these studies are available from the corresponding author upon request.
2020-07-29T14:58:41.192Z
2020-07-29T00:00:00.000
{ "year": 2020, "sha1": "e27f5d21b6487c2ab3f68ece224fdfddd0430254", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-020-17654-y.pdf", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "e27f5d21b6487c2ab3f68ece224fdfddd0430254", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
18255596
pes2o/s2orc
v3-fos-license
Dynamics and bifurcations of nonsmooth systems: A survey

In this survey we discuss current directions of research in the dynamics of nonsmooth systems, with emphasis on bifurcation theory. An introduction to the state of the art (also for non-specialists) is complemented by a presentation of the main open problems. We illustrate the theory by means of elementary examples. The main focus is on piecewise smooth systems, which have recently attracted a lot of attention, but we also briefly discuss other important classes of nonsmooth systems such as nowhere differentiable ones and differential variational inequalities. This extended framework allows us to put the diverse range of papers and surveys in this special issue in a common context. A dedicated section is devoted to concrete applications that stimulate the development of the field. The survey is concluded by an extensive bibliography.

Introduction

Nonsmooth dynamical systems have received increased attention in recent years, motivated in particular by engineering applications, and this survey aims to present a compact introduction to this subject as a background for the other articles in this special issue of Physica D. In the field of smooth dynamical systems, many results rely on (or have been derived under) certain smoothness assumptions. In this context the question arises to what extent nonsmooth dynamical systems have (or don't have) different dynamical behaviour than their smooth counterparts. As nonsmooth dynamical systems naturally arise in the context of many applications, this question is not merely academic. One may be tempted to argue that nonsmoothness is a modelling issue that can be circumvented by a suitable regularisation procedure, but there are some fundamental and practical obstructions. Firstly, regularisation is not always possible. For instance, Kolmogorov's classical theory of incompressible fluids [200] asserts that the dependence of the velocity vector v(x) on the spatial coordinate x is of order 1/3, leaving no sensible way to smoothen the continuous map x → v(x) in order to render it differentiable everywhere [362]. Secondly, even if regularisation is possible, it may yield a smooth dynamical system that is very difficult to analyse (both numerically and analytically), obscuring certain important dynamical properties (often referred to as discontinuity-induced phenomena) that may feature more naturally in the nonsmooth model, see e.g. [168,233]. Finally, mechanical systems with dry friction display nonuniqueness of the limit when the stiffnesses of the regularisation springs approach infinity. Regularisation in mechanical models with friction is often accomplished by introducing virtual springs of large stiffnesses at the points of contact [364,331,261]. The specific configuration of the springs is assumed to be unknown, which accounts for the nonsmoothness of the original (rigid) system. Also, nonuniqueness in some control models cannot be suppressed (a situation known as the reverse-Zeno phenomenon) and needs a theory to deal with it; see Stewart [335]. For more on these, and other applications that require nonsmooth modelling, see Section 5. Elementary stability theory for nonsmooth systems was first motivated by the need to establish stability for nonsmooth engineering devices; see for instance Barbashin [25], Leine-Van de Wouw [227], and Brogliato [57].
A significant growth of the subject has been due to the understanding that nonsmooth systems display a wealth of complex dynamical phenomena that must not be disregarded in applications. Some applications that illustrate the relevance of nonsmooth dynamics include the squealing noise in car brakes [20,177] (linked to regimes that stick to the switching manifold determined by the discontinuous dry friction characteristics), loss of image quality in atomic force microscopy [357,382,263,293] (caused by new transitions that an oscillator can undergo under perturbations when it just touches an elastic obstacle), and, on a more microscopic scale, the absence of a thermal equilibrium in gases modelled by scattering billiards [360,197,198] (whose ergodicity can be broken by a small perturbation as soon as the unperturbed system possesses a closed orbit that touches the boundary of the billiard). The main focus of this survey is on aspects of dynamics involving bifurcations (transitions between different types of dynamical behaviour). In Section 2 we review general (generic) bifurcation scenarios, while in Section 3 we review the literature on bifurcation problems posed in the context of explicit perturbations to (simple) nonsmooth systems with known solutions. Section 4 is devoted to nonsmooth systems that include a variational inequality and do not readily appear as a dynamical system. This very important class of nonsmooth systems (also known as differential variational inequalities) originates from optimisation [287] and nonsmooth mechanics [57]. In order to access the dynamics of differential variational inequalities, the questions of the existence, uniqueness and dependence of solutions on initial conditions have been actively investigated in the literature. The engineering applications that stimulated the interest in the analysis of the dynamics of nonsmooth systems are discussed in Section 5. An extensive bibliography concludes this survey. Despite our best efforts to present a balanced overview, this survey is of course not without bias, and we apologise to colleagues who will find their interests and results perhaps underrepresented.

Bifurcation theory

A precise analysis of the dynamics of an arbitrarily chosen dynamical system is rarely possible. A common approach to the study of dynamical systems is to divide the majority of dynamical systems into equivalence classes so that the dynamics of any two systems from each such class are similar (with respect to specific criteria). Usually (but not always) the equivalence classes are chosen to be open in a suitably defined space of dynamical systems. Bifurcation theory concerns the study of transitions between these classes (as one varies parameters, for instance), and the transition points are often referred to as singularities. For an elementary non-technical introduction to bifurcation theory, see Mees [262]. Many technical books on bifurcation theory have appeared over the years, see for instance [222]. We present an elementary example to illustrate the concept of bifurcation. Consider a ball in a pipe that is attached by a spring to one end of the pipe and subject to gravity and viscous friction. If the ends of the pipe are bent upwards the system has a unique stable equilibrium. However, if the ends of the pipe are bent down, the pipe-ball system may exhibit three equilibria, one unstable and two stable (see Fig. 1). There is a transition where the unique stable equilibrium splits into three co-existing equilibria (see Fig. 2).
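In normal-form language (a standard textbook sketch, not derived from the specific pipe-ball model), this transition is captured by the pitchfork

$$\dot{x} = \mu x - x^{3},$$

whose unique equilibrium x = 0 is stable for µ ≤ 0, while for µ > 0 it is unstable and coexists with the two stable equilibria x = ±√µ, matching the three co-existing equilibria of Fig. 2.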
It can be shown rigorously that this pitchfork bifurcation is typical (and robust) in this type of model, and also that generically the equilibrium cannot undergo a Hopf bifurcation (where stability is transferred to a limit cycle).

Border-collision bifurcations

If the friction characteristic in the above-mentioned example has a discontinuity along the pipe, the oscillator may exhibit new dynamical behaviour. For example, a stable equilibrium can lose stability under emission of a stable limit cycle (Hopf bifurcation) when the position of the discontinuity in the friction law moves (as a function of a changing parameter) past the equilibrium (see Fig. 3). This situation is modelled by the equation of motion (1). When µ < 0 there is one stable equilibrium (x, ẋ) = (µ, 0) that persists until µ = 0. As µ increases further and becomes positive, the equilibrium loses its stability and a stable limit cycle arises from (0, 0) (see Fig. 4). This bifurcation is characterised by the collision of the equilibrium with the switching manifold {µ} × R (defined by the discontinuity at x = µ), and is known as a border-collision bifurcation of the equilibrium. Meiss and Simpson [326] have proposed sufficient conditions for border-collision bifurcations in which an equilibrium in R^n transforms into a limit cycle. Some other scenarios have been investigated in di Bernardo-Nordmark-Olivar [47] and in the paper by Rossa-Dercole [307] in this special issue. The paper by Hosham-Kuepper-Weiss [373] of this special issue provides conditions that guarantee that the dynamics near an equilibrium on the border develops along so-called invariant cones, providing a possible framework for further analysis of border-collision of an equilibrium in R^n. From a mechanical point of view, we note that negative friction plays a crucial role in example (1). Another example of a border-collision bifurcation, where negative friction is essential, can be found in a paper by Kuepper [403].

Figure 3: The pipe-ball system where the friction characteristic of the boundary changes discontinuously. The two parts where the friction characteristic is smooth are coloured in black and grey respectively. The distance between the friction discontinuity and the equilibrium of the ball is denoted by µ. The two figures at the bottom illustrate the co-existence of an unstable equilibrium and a stable limit cycle.

Although not standard, negative parts in the friction characteristics can appear in real mechanical devices because of the so-called Stribeck effect (see [227, §4.2]). Border-collision bifurcations caused by negative friction are also discussed in Leine-Brogliato-Nijmeijer [229]. Classifications of bifurcations from an equilibrium on a switching manifold of a discontinuous system have been derived by Guardia-Seara-Teixeira [154] and Kuznetsov-Rinaldi-Gragnani [223]. They show that the possible scenarios include homoclinic solutions and non-local transitions; e.g. a stable equilibrium can bifurcate to a cycle that does not lie in a neighbourhood of this equilibrium. In the case where the differential equations are nonsmooth but continuous along the switching manifold, some non-standard border-collision bifurcations have been reported in Leine [233] and Leine-Van Campen [228]. Properties of the Clarke generalised Jacobian (versus the classical Fréchet derivative) proved to be conclusive here.
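The simplest caricature of an equilibrium colliding with a switching manifold (our illustration; this is not equation (1)) is the scalar piecewise linear system

$$\dot{x} = \mu - |x|,$$

for which the stable equilibrium x = µ and the unstable equilibrium x = −µ merge on the switching manifold {x = 0} as µ ↓ 0 and disappear for µ < 0; in contrast with the smooth fold, where the equilibria scale like ±√µ, the border-collision scenario is linear in µ.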
A point on the discontinuity (i.e. switching) manifold between two smooth systems can attract solutions while not being an equilibrium of either of these systems. An elementary illustration of this arises in equation (2), which comes from an analogue of the pipe-ball system whose boundary is straight, but undergoes a discontinuity at a point (see Fig. 5).

Figure 5: A ball attached through a spring to an immovable wall, resting in a corner of a piecewise flat surface.

This point is the position of an asymptotically stable equilibrium, as the mechanical setup suggests (a proof can be found in Barbashin [25] and Leine-Van-de-Wouw [227]). In particular, small perturbations of the second-order differential equation (2) do not lead to bifurcations. This equation, therefore, serves as an example of the situation where a point on the switching manifold is an attractive equilibrium while not being an equilibrium of either of the two smooth components. This example also highlights that not all bifurcations that are generic from the point of view of bifurcation theory are physically possible. In fact, the point (0, 0) of the two-dimensional version (3) of (2) is attractive when µ = 0. However, the phase portraits for µ < 0 and µ > 0 are drastically different, see Fig. 6. We thus see that only particular perturbations of system (3) with µ = 0 preserve the attractive properties of the point (0, 0). What those particular perturbations are has not yet been understood. Perhaps symmetry plays an important role here, as the perturbations of equation (2) always lead to a two-dimensional system that is symmetric in the ẋ coordinate. A result in this direction is presented by Jacquemard and Teixeira in this special issue [186]. Example (3) also illustrates the phenomenon of sticking in nonsmooth systems. Fig. 6 suggests that all the solutions of (3) with µ negative approach the interval [−|µ|, |µ|] of the vertical axis and do not leave it in the future. The definition of how trajectories of (4) behave within this interval is usually given by the Filippov convention [125], which has recently been further developed by Broucke-Pugh-Simic [59]. The Filippov convention and the corresponding Filippov systems are discussed in several papers in this special issue. Biemond, Van de Wouw and Nijmeijer [51] introduce the classes of perturbations that preserve an interval of equilibria lying on the discontinuity threshold and discuss the situations where such perturbations lead to bifurcations coming from the end points of the interval. Another approach disregards the dynamics inside [−|µ|, |µ|] and treats this interval as an attractive equilibrium set of an associated differential inclusion. For more on the latter approach, we refer the reader to the book by Leine and Van de Wouw [227] and references therein.
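For reference, the Filippov convention just mentioned can be summarised as follows (standard notation, ours): across a switching manifold Σ = {h(x) = 0} separating smooth vector fields f₋ and f₊, trajectories on Σ obey the differential inclusion

$$\dot{x} \in \overline{\operatorname{co}}\,\{ f_-(x),\, f_+(x) \}, \qquad x \in \Sigma,$$

and on the sliding region, where both fields point towards Σ, the motion follows the unique convex combination tangent to Σ,

$$f_s(x) = \lambda f_+(x) + (1-\lambda) f_-(x), \qquad \lambda = \frac{\nabla h(x)\cdot f_-(x)}{\nabla h(x)\cdot\bigl(f_-(x) - f_+(x)\bigr)},$$

which, on the sticking interval of example (3), reproduces the segment of equilibria described above.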
An attractive point on the discontinuity threshold can also be structurally stable. We refer the reader to the aforementioned papers Guardia-Seara-Teixeira [154] and Kuznetsov-Rinaldi-Gragnani [223] for a classification of these points in R^2. As for higher-dimensional studies, much attention has recently been given to the analysis of the dynamics near a point in R^3 where the smooth vector fields on the two sides of the switching manifold are tangent to this manifold simultaneously. Such equilibria were first described by Teixeira [353] and Filippov [125] and are known as Teixeira singularities or U-singularities. Teixeira [353] gave conditions under which such a singularity is asymptotically stable. Colombo and Jeffrey [92,189] showed that the Teixeira singularity can be a simultaneous attractor and repeller of local and global dynamics, where the orbits flow into the singularity from one side and out from the other. Chillingworth [84] analyses scenarios in which a Teixeira singularity loses and gains stability, following the sketch in Fig. 7. An example of the occurrence of the Teixeira singularity in the context of an application has been discussed by Colombo, di Bernardo, Fossas and Jeffrey [90]. Nonsmooth systems with switching manifolds causing trajectories to jump, according to a so-called impact law, have become known as impact systems. Border-collision bifurcations of an equilibrium lying on a switching manifold of an impact system are classified in [47], but little has been done yet towards applications of these results. An equilibrium crossing the switching manifold is not the only transition that causes qualitative changes to the dynamics near the equilibrium. Motivated by applications in control, the next section discusses transitions that occur when a switching manifold (with an equilibrium on it) splits into several sheets. The Teixeira singularity may no longer be structurally stable under this type of perturbation, which we refer to as border-splitting.

Border-splitting bifurcations

This type of bifurcation allows one to prove the existence of limit cycles in so-called switching systems studied in the context of control theory. The illustration in Fig. 8 provides a simple example of a switching system. Two contacts are built into a pipe with a metal ball inside; these contacts are connected to the two magnets. The differential equation (5) models this setup, where µ is the coordinate of the position of the white contact point and −µ the coordinate of the black contact point, i.e. k = ±d depending on whether the right or the left magnet is activated. The existence of limit cycles in systems of this form has been known since Barbashin [25], but the fact that this cycle can be seen as a bifurcation from (0, 0), as a parameter indicating the distance of the black and white contact points from the centre crosses zero (see Fig. 9), has not yet been pointed out in the literature. In some situations the aforementioned switching law can be replaced by a more general switching manifold (see the bold curve in the right graph of Fig. 9) that is nonsmooth. This point of view was proposed by Barbashin [25] for switching systems involving second-order differential equations, but no general results about its validity are available.

Figure 9: Phase portraits of the switching system of Fig. 8. The trajectories escape from the local neighbourhood of (0, 0) and converge to one of the two stable equilibria if µ < 0 (left graph), and converge to a limit cycle if µ > 0 (right graph). The middle graph illustrates that the radius of the limit cycle approaches 0 as µ → 0. The right graph also features Barbashin's discontinuity surface, which is drawn in bold.

The interest in switching systems has been increased by new applications in control, where switching is used to achieve closed-loop control strategies. For instance, Tanelli et al. [347] designed a switching system to achieve closed-loop control for anti-lock braking systems (ABS). This example exhibits a nontrivial cycle and four switching thresholds. The classification of bifurcations in switching systems that are induced by changes in the switching threshold (splitting or the breaking of smoothness) is a largely open question that has not yet been systematically addressed in the literature.
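The switching mechanism just described is easy to experiment with numerically. The sketch below is an illustrative stand-in for (5) (the right-hand side and all parameter values are ours, not the model of the text): the relay force k flips between ±d at the contacts x = ±µ, and a small viscous damping term −cẋ is added so that the trajectories settle onto a bounded regime.

```python
# Event-driven simulation of a relay switching system, an illustrative
# stand-in for (5):  x'' = -c x' + k,  where k flips to -d when x reaches +mu
# (moving right) and to +d when x reaches -mu (moving left).
import numpy as np
from scipy.integrate import solve_ivp

def late_amplitude(mu, d=1.0, c=0.5, t_end=60.0):
    k, state, t0, xs = d, np.array([0.0, 0.0]), 0.0, []
    while t0 < t_end:
        target = mu if k > 0 else -mu
        contact = lambda t, y: y[0] - target
        contact.terminal, contact.direction = True, 1 if k > 0 else -1
        sol = solve_ivp(lambda t, y: [y[1], -c * y[1] + k],
                        (t0, t_end), state, events=contact, max_step=0.01)
        xs.append(sol.y[0])
        t0, state = sol.t[-1], sol.y[:, -1]
        if sol.status == 1:
            k = -k               # a contact was hit: flip the relay
        else:
            break
    x = np.concatenate(xs)
    return np.max(np.abs(x[x.size // 2:]))   # amplitude after transients

for mu in (0.4, 0.2, 0.1):
    print(f"mu = {mu}: limit-cycle amplitude ~ {late_amplitude(mu):.3f}")
# The cycle shrinks as mu decreases, in the spirit of the middle graph of Fig. 9.
```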
Studying a natural 3-dimensional extension of system (5) leads to the problem of the response to the splitting of the switching manifold in a Teixeira singularity (see Fig. 10). Where the switching manifold does not just cause a discontinuity in the vector field of the ODE under consideration, but introduces jumps into the solutions of these ODEs, the nonsmooth system is called a nonsmooth system with impacts, or an impact system. No paper about border-collision of equilibria in such systems is available in the literature. The paper by Leine and Heimsch [226] in this special issue discusses sufficient conditions for stability of such an equilibrium (absence of bifurcation). This paper may play the same instructive role in the development of the theory of border-collision bifurcations of equilibria in impact systems as the result about the structural stability of an equilibrium in second-order discontinuous ODEs sketched in Fig. 5.

Figure 10: A partial sketch of trajectories of a 3-dimensional switching system (right graph). The limit of this sketch when the distance between the switching thresholds approaches 0 (left graph).

Grazing bifurcations

It appears that only smooth bifurcations can happen to a closed orbit that intersects the switching manifold transversally, although the proof is not always straightforward, e.g. in the case of a homoclinic orbit as discussed by Battelli and Feckan [29] in this special issue. The intrinsically nonsmooth transitions occurring near closed orbits (or tori) that touch the switching manifold (non-transversally) are known as grazing bifurcations [49] or C-bifurcations [121]. This type of bifurcation is very common in applications. It takes place, for instance, when a mechanical system transits from a smooth regime to one that allows for collisions. A simple example is that of a church bell rocked by a periodic external force. A grazing bifurcation occurs when the amplitude of the driving increases to the point where the clapper hits the bell, see Fig. 11. Somewhat surprisingly, the dynamical behaviour close to the grazing bifurcation associated with a low-velocity chime appears to be chaotic, following Whiston [377], Nordmark [275] and, more recently, Budd-Piiroinen [64].

Figure 11: Three different types of relative oscillations of the clapper upon the bell: (a) no contact with the bell; (b) the clapper touches the bell periodically; (c) motion after the grazing bifurcation occurs.

The simplest model of the bell-clapper system has the bell in a fixed position with only the clapper moving. Shaw and Holmes [317] pioneered the modelling of this situation by a single-degree-of-freedom impact oscillator (Fig. 12) with a linear restitution law:

ü = f(t, u, u̇, µ) for u < c,    u̇(t + 0) = −k u̇(t − 0) when u(t) = c.    (6)

The impact rule on the second line is such that the velocity of each trajectory changes instantaneously from u̇(t − 0) to −k u̇(t − 0) when u(t) = c. Though realistic restitution laws are known to be nonlinear (see e.g. Davis-Virgin [102]), Piiroinen, Virgin and Champneys [297] conclude that (6) models the actual dynamics of a constrained pendulum reasonably well. A more general mathematical model of the impact oscillator of Fig. 12 can be found in Schatzman [314].

Figure 12: Impact oscillator: a ball attached to an immovable beam via a spring, oscillating upon an obstacle at u = c and subject to a vertical force f(t, u, u̇).
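A minimal event-driven integration of an oscillator of the form (6) may be sketched as follows (the particular forcing f(t, u, u̇) = −u − 0.1u̇ + A cos(2t) and the parameter values are our illustrative choices): the integrator stops at each event u = c and restarts with the reflected velocity −k u̇.

```python
# Event-driven simulation of an impact oscillator of the form (6), with the
# illustrative forcing f(t, u, u') = -u - 0.1 u' + A cos(2 t) and obstacle u = c.
import numpy as np
from scipy.integrate import solve_ivp

def count_impacts(A, c=1.0, k=0.8, t_end=100.0):
    def rhs(t, y):
        u, v = y
        return [v, -u - 0.1 * v + A * np.cos(2 * t)]
    def hit(t, y):               # impact event: the mass reaches the obstacle
        return y[0] - c
    hit.terminal, hit.direction = True, 1
    t0, state, impacts = 0.0, np.array([0.0, 0.0]), 0
    while t0 < t_end and impacts < 10_000:
        sol = solve_ivp(rhs, (t0, t_end), state, events=hit,
                        max_step=0.01, rtol=1e-9)
        t0, state = sol.t[-1], sol.y[:, -1]
        if sol.status != 1:      # no impact occurred before t_end
            break
        state[1] *= -k           # restitution law: u'(t+0) = -k u'(t-0)
        impacts += 1
    return impacts

for A in (1.0, 3.0, 5.0):
    print(f"A = {A}: {count_impacts(A)} impacts on [0, 100]")
# For small A the motion stays below the obstacle (no impacts); as A increases
# the orbit reaches u = c, and the transition happens through a grazing contact.
```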
We now consider the natural grazing bifurcation in this model. We start by considering the system (in the parameter regime µ < 0) with a (stable) periodic cycle that does not impact the u = c line. By increasing µ smoothly we envisage that the amplitude of the periodic cycle changes smoothly to touch u = c precisely at µ = 0. This implies that in the phase space the trajectory is tangent to the line u = c, since the orbit has zero velocity at the extremal point of the cycle at u = c. We now present a simple argument to explain the at first sight somewhat surprising fact that the tangent (also known as grazing) periodic trajectories of generic impact oscillators (6) are unstable. Indeed, fix an arbitrary τ ∈ R and consider the trajectory of system (6) with the initial condition (u(τ − 0), u̇(τ − 0)) = (c, u̇₀(τ)), where u₀(t) denotes the grazing orbit, see Fig. 13. If f(t₀, c, 0, 0) ≠ 0, it is a consequence of the fact that the grazing orbit impacts with zero velocity (u̇₀(t₀) = 0, where t₀ denotes the time of the grazing impact) that there is always a trajectory that escapes from an arbitrarily small neighbourhood of the grazing trajectory u₀. For a complete proof of the instability see Nordmark [275].

Figure 13: Periodic trajectory u₀ (bold curve) of system (6) in cylindrical coordinates, i.e. a point ζ is assigned to u₀(t) in such a way that u₀(t) is the distance from ζ to the vertical axis of the cylinder, u̇₀(t) is the vertical coordinate of ζ and t is the angle measured from a fixed hyperplane containing the axis of the cylinder. The surface of the cylinder is given by u = c, so that the trajectory u₀ grazes the cylinder at the point "•". The curve u is a part of the trajectory that originates from ζ.

It has been noticed by Nordmark [275] that shortly after grazing there remains a trapping region R in its (former) neighbourhood, so that all the trajectories that originate in R do not leave this region. An important step in studying the response of the dynamics in R to varying µ and c is due to Chillingworth [85], who introduced a so-called impact surface. The work [82] by Chillingworth, Nordmark and Piiroinen relates the Morse transitions of this surface (investigated in [85]) to possible global bifurcations. Insightful numerical simulations in relation to the dynamics on this impact surface have been carried out by Humphries-Piiroinen [173] in this special issue. Kryzhevich [213] has studied topological features of the attractor in R. Luo and colleagues [242,243] have published many numerical results about the dynamics in R when the ODE in (6) is a linear oscillator. Nordmark has introduced the general notion of a discontinuity mapping, which is a method for deriving an asymptotic description of the Poincaré map at a grazing point of any piecewise smooth system. This method enables a generalisation of these concepts to study which periodic orbits exist, and their stability types, in a neighbourhood of a grazing bifurcation in arbitrary N-dimensional dynamical systems [278]. Using this approach, it can be shown [277] that the leading-order expression for the Poincaré map at a grazing bifurcation in an impacting system contains a square-root singularity and can be written in the form (7) [275, p. 290], where λ₁ and λ₂ are constants representing details of f. For µ < 0 the point (0, 0) is a fixed point of the map (7), reflecting the fact that the oscillator (6) has a T-periodic solution that doesn't collide with the obstacle.
When µ increases through zero, this fixed point collides with the border and complicated dynamics emerges. This intrinsically nonsmooth bifurcation is known as a border-collision of a fixed point. Many authors have investigated border-collision bifurcations through two-dimensional maps of the form (7), see e.g. Nordmark [275,278], Chin-Ott-Nusse-Grebogi [88], Feigin [121], Dutta-Dea-Banerjee-Roy [112], and Di Bernardo-Budd-Champneys-Kowalczyk [49]. One of the central conclusions of this collaborative effort is the assertion that the impact oscillator (6) typically has no stable near-T-periodic solutions near u₀ after the occurrence of grazing. In addition, Nordmark [278] gives conditions for the existence of periodic solutions which not only have arbitrarily large periods, but which also have a prescribed binary symbolic representation (a 0 representing a revolution after which the orbit does not hit the cylinder, and a 1 one after which it hits the cylinder). A geometric impact surface approach [85] is used in Chillingworth-Nordmark [83] to reveal the geometry behind the bifurcation of impacting periodic orbits from u₀. The map (7) can be viewed as a generalization of the piecewise smooth Lozi map, but the results known for the Lozi map are normally formulated in terms of one-sided derivatives [140,384] that do not exist for (7) at (0, 0). Several papers (e.g. Thota-Dankowicz [357], Dankowicz-Jerrelind [100], Thota-Zhao-Dankowicz [358], Rom-Kedar-Turaev [360,306], Janin-Lamarque [187]) discuss non-generic situations (with more structure), where a stable T-periodic solution is not destroyed and keeps its stability after grazing. The first result in this direction is due to Ivanov [181], who related the phenomenon of the persistence of a periodic solution under grazing to a resonance between the periodic force and the eigenfrequency of the oscillator in (6). Budd and Dux [62] relate intermittent chaotic behaviour after grazing bifurcations to resonance conditions. The map (7) is derived by truncation of a certain Taylor series. In fact, arbitrary higher-order terms in such maps can be derived using Nordmark's discontinuity mapping approach [274]. The need for higher-order terms to detect certain bifurcation scenarios is discussed in Molenaar-De Weger-Van de Water [269], see also Zhao [389]. The one-dimensional map (8) can be viewed as a generalized version of the familiar tent map (see e.g. the book [146] by Glendinning), but with a fixed point in its corner (when µ = 0). Based on Lagrangian equations of motion, Nordmark [128] shows that a map of the form (8) can model the dynamics of several-degrees-of-freedom impact oscillators. In particular, by using a suitable one-dimensional map of the form (8), Nordmark [128] recaptures the bifurcation scenarios that he found earlier in the two-dimensional map (7) [275]. However, the validity of the proposed reduction of the two-dimensional dynamics of maps of the form (7) to one-dimensional maps of the form (8) is a largely open question. This dimension-reduction issue is also discussed in the survey by Simpson and Meiss [325] in this special issue. That the aforementioned reduction is not always possible, even for piecewise linear two-dimensional maps, follows from the fact that the attractors of two-dimensional piecewise linear maps similar to (7) (i.e. with a linear term in place of the square-root one in (7)) are sometimes truly two-dimensional, see Glendinning-Wong [143].
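The flavour of the dynamics of square-root maps such as (7) can be reproduced in a few lines; the normal form and all coefficients below are our illustrative choices (in the spirit of [275,49]), not the specific map of the text.

```python
# Crude scan of a Nordmark-type square-root map (coefficients are illustrative):
#   (x, y) -> (tau*x + y + mu, -delta*x)    for x <= 0,
#   (x, y) -> (-sqrt(x) + y + mu, -delta*x) for x > 0.
import numpy as np

def step(x, y, mu, tau=0.5, delta=0.2):
    x_new = tau * x + y + mu if x <= 0 else -np.sqrt(x) + y + mu
    return x_new, -delta * x

for mu in (-0.10, -0.01, 0.01, 0.10):
    x, y = -0.01, 0.0
    samples = set()
    for n in range(4000):
        x, y = step(x, y, mu)
        if abs(x) > 1e6:
            break                      # safety guard against divergence
        if n >= 3500:
            samples.add(round(x, 6))   # sample the attractor after transients
    tag = "fixed point" if len(samples) == 1 else f"{len(samples)} sampled states"
    print(f"mu = {mu:+.2f}: {tag}")
# For mu < 0 the orbit settles on the fixed point x = mu/(1 - tau + delta) < 0;
# for mu > 0 the square-root branch is visited, the stable fixed point ceases
# to exist, and a high-period or chaotic attractor is typically observed.
```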
Another intrinsically nonsmooth phenomenon happens when the function (t, x, ẋ) → f(t, x, ẋ, 0) vanishes at the point where a closed orbit u₀ grazes the switching manifold. Increasing µ through 0 can here lead to the bifurcation of orbits with chattering, where an infinite number of impacts occurs in a finite time interval, see Fig. 16. Chillingworth [86] was the first to establish a precise understanding of the local dynamics near such a grazing bifurcation with chattering, asserting that all the chattering trajectories from a neighbourhood of the original grazing orbit u₀ hit the switching manifold along their own stable manifolds (one such manifold is represented by a dotted curve in Fig. 16), which are all bounded by a stable manifold that is tangent to ẋ = 0 (represented by a dashed bold curve in Fig. 16). Any trajectory that hits the switching manifold (cylinder) within the region surrounded by the dashed curves (the stable manifold that approaches ẋ = 0 at E, and ẋ = 0 itself) leads to chattering that accumulates on ẋ = 0 (in the same way as the sample trajectory of Fig. 16 accumulates to the point D). The trajectory then stays quiescent until it gets released on reaching the discontinuity arc (the point E). This (Chillingworth-Budd-Dux) region shrinks to a point (i.e. the two white points A and E converge to the point where u₀ grazes) as µ approaches 0. A formula for the map g that maps one collision on the dotted curve into another one (e.g. the point B into the point C) was proposed by Budd and Dux in [63] and revised by Chillingworth [86]. If the obstacle in the impact oscillator of Fig. 12 is not absolutely elastic, two model situations are often studied. In the first one, the obstacle is another spring attached to an immovable wall that constrains the motion of the mass from one side (Fig. 17a). This obstacle determines a switching manifold along which the right-hand sides of the equations of motion are continuous but not differentiable. Much attention has recently been devoted to grazing bifurcations in oscillators with a so-called preloaded or prestressed spring; Fig. 17b contains an illustration. A preloaded spring doesn't create impacts, but defines a switching manifold where the equations of motion are discontinuous, see Duan-Singh [111]. Also in this context, grazing doesn't necessarily imply bifurcation. However, in numerical simulations, Ma-Agarwal-Banerjee [253] have found that grazing of a periodic orbit in the prototypical preloaded oscillator (9) leads to bifurcation for a large set of parameters. The same paper [253] also suggests that a grazing bifurcation of a periodic solution of (9) can be modelled by a border-collision bifurcation of a fixed point in a suitable two-dimensional piecewise linear continuous map (the map (7) with the square-root term √(µ − ξ) replaced by µ − ξ). A theoretical justification of this assertion can be found in Di Bernardo-Budd-Champneys [46] under the assumption that system (9) does not possess sliding solutions (i.e. solutions that stick to the switching manifold for positive time intervals, see Fig. 6 for an illustration). Numerical confirmation can be found in Leine [140] (snap-back repellers) and Glendinning [144] (Markov partitions). Not all conclusions achieved for the piecewise linear category remain valid for nearby piecewise smooth nonlinear maps, see e.g. Simpson-Meiss [328]. The question whether equation (9) has sliding solutions has not yet been rigorously answered, and is only assumed to be true in [253]. An important role here might be played by the symmetry in u̇.
Indeed, a non-symmetric perturbation (10) of (9) evidently does have sliding solutions. Specifically, Fig. 18 illustrates that a non-sliding periodic solution of system (10) transforms into a sliding one (through grazing) as µ changes sign from negative to positive. Although absent in the second-order differential equation modelling the preloaded oscillator of Fig. 17, grazing bifurcations of solutions with a sliding component (also known as grazing-sliding bifurcations) play a very important role in many other applications in mechanics and control. A prototypical example is a dry friction oscillator, where the switching manifold is horizontal and where the occurrence of periodic solutions with sliding is a well-known phenomenon (due to the pioneering work of Den Hartog [158], it is sometimes referred to as the Den Hartog problem). For further study of grazing-sliding bifurcations in dry friction oscillators and general discontinuous systems (Filippov systems, see the previous section) we refer the reader to Luo-Gegg [247,248,249,250], Kowalczyk-Piiroinen [204], Kowalczyk-di Bernardo [205], Galvanetto [132,133,134], Nordmark-Kowalczyk [279], di Bernardo-Kowalczyk-Nordmark [43], Svahn-Dankowicz [344,345], di Bernardo-Hogan [45], Guardia-Hogan-Seara [155], Jeffrey [190], Kuznetsov-Rinaldi-Gragnani [223], Szalai-Osinga [346], Teixeira [352], Benmerzouk-Barbot [35], and to Jeffrey-Hogan [191] and Colombo-di Bernardo-Hogan-Jeffrey [89] in this volume for a review of sliding bifurcations. More numerical results can be found in Sieber-Krauskopf [318], Cone-Zadoks [93], and Dercole-Gragnani-Kuznetsov-Rinaldi [105]. In addition to the two types of nonlinear springs depicted in Fig. 17, the spring characteristic may include so-called hysteresis loops. In the simplest case the stiffness of the spring depends not only on its extension, but also on whether it is stretched or compressed. More generally, hysteresis may refer to various types of memory, see Krasnoselski-Pokrovski [207]. We refer the reader to Babitsky [19] for a discussion of mechanical models. Grazing bifurcations in systems with hysteresis have been investigated in Dankowicz-Paul [99], and in this special issue Dankowicz-Katzenbach [98] introduce a general framework for studying grazing bifurcations in nonsmooth systems that can contain, in particular, hysteretic nonlinearities. The dynamics of a system of two coupled pendulums (similar to that of the bell-clapper system of Fig. 11) reveals an essential novelty. It was reported already in 1875, see Veltmann [365,366], that the famous Emperor's bell in the Cathedral of Cologne occasionally failed to chime as the clapper stuck to the bell. It appears that, in contrast with individual oscillators, chattering becomes generic and even intrinsic for grazing bifurcations in coupled impact oscillators. In one of the scenarios for this bifurcation there is an emergence of periodic orbits with chattering followed by a sticking phase, see Wang [371,372] and Luo-Xie-Zhu-Zhang [252] for the linear restitution law and Davis-Virgin [102] for a more realistic restitution law derived from experiments.

Figure 19: A typical Newton cradle, a system of n balls suspended from an immovable beam.

A familiar realization of higher-dimensional impact oscillators is known as the Newton cradle, see Fig. 19.
The discrete dynamical system that arises from the analysis of grazing bifurcations in the model of Fig. 19 with a linear restitution law resembles that of a so-called billiard flow, whose border-collision bifurcations are investigated in papers by Rom-Kedar and Turaev [360,306]. However, various studies (see e.g. [77,147]) suggest that the nonlinear nature of the restitution law in the real mechanical setup of Fig. 19 is crucial for understanding the phenomena that the Newton cradle exhibits. Little is known about the consequences of grazing bifurcations in these nonlinear settings. One of the open conjectures is: for almost all initial data and whatever the dissipation, the Newton cradle converges asymptotically towards a rocking collective motion with all the balls in contact (Brogliato, personal communication).

Specific perturbative results

Perturbative results are inherent to the methodology of bifurcation theory, when used to gain insight into the generic unfolding of all possible responses of a given trajectory to perturbations, often with a focus on a particular type of dynamics, e.g. on periodic solutions of a certain period. Sometimes, perturbative results may yield local results in the sense that they capture all the dynamics in a (sufficiently small) neighbourhood of an original trajectory. If we consider small perturbations of a dynamical system whose solutions are known in the whole phase space, then perturbation theory may provide more global information about the dynamics of the perturbed system (e.g. it can help to determine how fast the convergence of the trajectories to a periodic solution of the perturbed system is). An introductory discussion of perturbation theory can be found in Guckenheimer-Holmes [153, Ch. 4]. In simple mechanical systems, exact solutions are often known if the friction or the magnitude of some excitatory forces are neglected. The latter type of effects may then be modelled as small perturbations. For example, the existence of the limit cycle for equation (1), identified in the previous section by increasing µ through zero (see Fig. 4), can be detected for any fixed µ by varying the friction coefficients c₁ and c₂ through zero. An added benefit of this kind of perturbative approach is that it yields information about the domain of attraction of the aforementioned limit cycle. All this can be achieved in principle along the classical lines of the proof of the existence of limit cycles of Van der Pol oscillators, by averaging, and does not necessarily require any specific nonsmooth theory (see Andronov-Vitt-Khaikin [1, Ch. IX]). A new type of problem arises if one attempts to apply the perturbation approach to analyse the asymptotic behaviour of switching systems. Indeed, the solutions of (5) are known completely when k = 0, but their norms approach infinity as time goes to infinity. Consequently, the limit cycle displayed in Fig. 9 can be seen, for any fixed µ, as a bifurcation from infinity when k crosses zero (see Fig. 20). The global attractivity properties of the latter cycle can be understood by a suitable modification of standard perturbative approaches for studying perturbations of infinity. Although this problem is essentially a smooth one, the class of switching systems serves as a rich source of open problems. The development of intrinsically nonsmooth perturbation methods is required for the analysis of grazing bifurcations. The continuous differentiability of the solutions in linear or Hamiltonian systems with impacts has been largely unexplored.
This property stands in contrast with that of generic impact systems with square-root type singularities, but provides an opportunity for the development of a perturbation theory for trajectories that graze an impact manifold. To illustrate this, let us consider the following elementary example of an impact oscillator, cf. (6):

ẍ + εkẋ + x = Aε cos(ωt) for x < c,    ẋ(t + 0) = −ẋ(t − 0) when x(t) = c.    (11)

The solutions of the unperturbed system (with ε = 0)

ẍ + x = 0 for x < c,    ẋ(t + 0) = −ẋ(t − 0) when x(t) = c,    (12)

form a family of closed orbits (see Fig. 21). The authors of [123] and Philipchuk [296] used a so-called method of discontinuous transformation to remove the impacts and transform equations of the form (11) into nonsmooth differential equations where the switching manifold causes discontinuities only. Perturbation methods for differential equations with discontinuous right-hand sides have been developed in Fidlin [122,124], Li-Du-Zhang [109] and, more recently, Granados-Hogan-Seara [152]. Where the obstacle in the impact oscillator is not absolutely elastic (Fig. 17), the perturbation methods of Samoylenko [309] and Samoilenko-Perestyuk [310,311] (for prestressed oscillators with small jumps in the stiffness characteristics) and of X. Liu and M. Han [238] and Lazer-Glover-McKenna [148] (for piecewise smooth continuous stiffness characteristics) can be employed. However, none of these methods apply to the unperturbed trajectory that touches the line x = c (the bold cycle in Fig. 21). Again, the theory of discontinuity mappings due to Nordmark (see [49, Ch. 2] for more details) can be fruitful here. In contrast with the generic impact situation, grazing periodic solutions in linear or Hamiltonian systems may well gain stability under perturbations. Fig. 21 illustrates this assertion for the particular example (11). The significantly better stability properties of the grazing-induced resonance solutions with respect to the unperturbed ones are not seen in smooth perturbation theory. Numerical results in Leine-van Campen [230,231,232] and Kahraman-Blankenship [192] suggest that the grazing-induced resonances may also have nonsmooth scenarios (jumps of multipliers) in non-impacting discontinuous, and even in nondifferentiable continuous, differential equations (see an earlier footnote about Levinson's change of variables). Theoretical and experimental evidence of non-standard resonances in coupled nonsmooth oscillators is discussed in the paper by Casini-Giannini-Vestroni [79] in this special issue. Another new class of problems relates to perturbations of a closed orbit in the case where this orbit transits into a (resonance) solution that intersects the switching manifold an infinite number of times. One important example is the development of Melnikov perturbation theory for homoclinic orbits by Battelli-Feckan [30,31] (see also their paper in this special issue), Du-Zhang [108], Xu-Feng-Rong [380], and Kukucka [218]. Another example is the analysis of the response of periodic orbits to almost periodic perturbations initiated by Burd [71] (see also his paper [70] in this volume). A common ingredient of these studies is the ability to control the aforementioned infinite number of intersections, which has so far only been achieved in non-grazing situations. One of the central approaches within the theory of perturbations is the study of the contraction properties of finite-dimensional or integral operators associated to the perturbed system, based on contraction properties of a so-called bifurcation function.
The particular choice of the operator depends on the type of dynamics one wants to access (periodic, almost periodic, chaotic). This approach was initiated by the classical second Bogolyubov theorem ([55], [153, Theorem 4.1.1(ii)]), which has recently started to be developed for grazing situations by Feckan [119] (discontinuous ODEs) and Buica-Llibre-Makarenkov [67,68,69] (continuous nondifferentiable ODEs). Though the development of the second Bogolyubov theorem for single-degree-of-freedom impact oscillators of the form (11) near grazing solutions looks manageable, accessing higher-dimensional prototypical mechanical systems may be challenging. Indeed, the coupling of even linear impact oscillators leads to complex behaviour where chattering trajectories may occupy a set of nonzero measure in the phase space, see Valente-McClamroch-Mezic [361]. Another approach, which has its roots in the first Bogolyubov theorem ([55], [153, Theorem 4.1.1(i)]), discusses the dynamics on a finite time interval of the order of the amplitude of the perturbation. This approach has been extended to differential inclusions in papers by Plotnikov, Filatov, Samoylenko and Perestyuk, and the survey by Skripnik [199] in this special issue provides an overview of this research direction. Resonances in impact oscillators formulated in the form of differential inclusions are investigated by Paoli and Schatzman in [288]. Versions of the first Bogolyubov theorem for differential equations with bounded-variation right-hand sides are developed in Iannelli-Johansson-Jonsson-Vasca [174,175,176] in the context of control systems subject to dither noise. The response of a piecewise linear FitzHugh-Nagumo model to white noise is investigated in Simpson-Kuske [320]. However, the research on the response of nonsmooth systems to random perturbations has the potential for a great deal of strengthening. The part of perturbation theory that is based on versions of the first and second Bogolyubov theorems is commonly known as the averaging principle. Though differential inclusions form a very broad class of nonsmooth dynamical systems, and even include a class of switching systems (if the Barbashin switching manifold is used, see the previous section), some important problems in nonsmooth mechanics are most conveniently formulated in terms of even more general equations called measure differential inclusions (see the books by Moreau [271], Monteiro Marques [270], and Leine-Van-de-Wouw [227]). An averaging principle for measure differential inclusions appears within reach, but has not yet been developed. As for nonsmooth systems with hysteresis, we refer the reader to the book by Babitsky [19] and the survey by Brokate-Pokrovskii-Rachinskii-Rasskazov [58] for the perturbation theory that is currently available for this class of systems. A largely open question within the theory of perturbations of nonsmooth systems is the persistence of KAM-tori in nonsmooth Hamiltonian systems under perturbations. Numerical simulations by Nordmark [276] suggest that KAM-tori in Hamiltonian systems with impacts are destroyed under grazing incidents. However, a theoretical clarification is unknown even for the simplest examples of the form (11) (with k = 0). Adiabatic perturbation theory for Hamiltonian systems with impacts is developed in Gorelyshev-Neishtadt [149,150], who introduced an adiabatic invariant that preserves the required accuracy near grazing orbits as well.
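For orientation, we recall the classical smooth statements that the nonsmooth versions of the Bogolyubov theorems cited above generalise (standard formulation, ours): for a T-periodic system

$$\dot{x} = \varepsilon f(t, x), \qquad f(t+T, x) = f(t, x),$$

the first Bogolyubov theorem guarantees that solutions stay O(ε)-close, on time intervals of length of order 1/ε, to those of the averaged system

$$\dot{y} = \varepsilon f_0(y), \qquad f_0(y) = \frac{1}{T}\int_0^T f(s, y)\, ds,$$

while the second theorem asserts that a hyperbolic equilibrium of the averaged system gives rise to a T-periodic solution of the original system with the same stability type. The nonsmooth results above relax the smoothness of f required by the classical proofs.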
Pioneered by Mawhin [260], who worked with linear unperturbed systems, topological degree theory is often used in the literature to relate the topological degree of various operators associated with the perturbed system to the topological degree of the averaging function. Several advances have been made in this direction since then. For example, Feckan [119] (see also his book [120]) generalised Mawhin's concept to nonlinear unperturbed systems, focusing the evaluation of the topological degree on neighbourhoods of certain points. Working in R^2, Henrard-Zanolin [385], Makarenkov-Nistri [258] and Makarenkov [257] developed similar results in more global settings (these methods can eventually be used to evaluate the topological degree of the Poincaré map of (11) with respect to the interior of the circle of radius c). Though topological degree theory has the reputation of being capable of dealing with nonsmooth systems, the grazing of an orbit poses challenging questions here as well. One such question is how to evaluate the topological degree of the 2π-return map of the unperturbed system (12) with respect to a neighbourhood of the interior of the disk of radius c (which grazes the switching manifold), see Fig. 22, and what the analogues are of the results of Krasnoselskii [206, Lemma 6.1] and Capietto-Mawhin-Zanolin [75] known in the non-grazing situation. Another question is whether the topological index of a grazing periodic solution of a generic impact system is always 0. Answers to these questions should lead to topological-degree-based conditions for grazing bifurcations of periodic solutions that do not rely on any genericity (and e.g. apply in the case of zero acceleration at grazing). The work by Kamenski-Makarenkov-Nistri [194] initiates the development of perturbation theory in settings where the only available knowledge about the perturbation is continuity. This problem falls into a different class of systems than piecewise smooth ones, as the perturbation is allowed to be differentiable nowhere. The interest in considering nowhere smooth dynamical systems comes from applications in fluid dynamics, where Kolmogorov's conjecture [200] states that the order of the dependence of the velocity vector v(x) of a wide class of fluids on the coordinate x does not exceed 1/3, so that the continuous map x → v(x) cannot be differentiable anywhere. The solutions of the initial-value problems of the relevant differential equations are nonunique and form so-called integral funnels (see E-Vanden-Eijden [362] and Pugh [302]). To cope with the problem of nonuniqueness, the authors of [194] operate with integral operators and prove bifurcations of sets that are mapped into themselves under the action of these operators. Further discussion of the mathematical methods available for Kolmogorov's fluid model can be found in the recent survey by Falkovich-Gawedzki-Vergassola [116].

Differential variational inequalities

Important classes of nonsmooth systems are not readily formulated as dynamical systems, and the mere existence, uniqueness and dependence of solutions on initial conditions represent one of the active directions of research within the nonsmooth community. One of the most general classes of these nonsmooth systems is that of differential variational inequalities, formulated as

ẋ(t) = f(t, x(t), u(t)),
(ξ − u(t))ᵀ F(t, x(t), u(t)) ≥ 0 for all ξ ∈ K,    (13)

where f ∈ C⁰(R^n × R^m, R^n), F ∈ C⁰(R^n × R^m, R^m) and K ⊂ R^m is a nonempty closed convex set.
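As an elementary illustration of how (13) can be time-stepped (the data below are our toy choices, not taken from the cited references), take m = 1, K = [0, ∞) and F(t, x, u) = u − g(x); the variational inequality is then solved pointwise by the projection u = max(0, g(x)).

```python
# Explicit Euler time-stepping of a toy differential variational inequality:
#   x'(t) = f(t, x, u),  with K = [0, inf) and F(t, x, u) = u - g(x),
# whose inner VI has the solution u = max(0, g(x)).  All data are illustrative.
import numpy as np

def g(x):                   # "contact gap" style quantity (toy choice)
    return x - 1.0

def f(t, x, u):             # drift that is pushed back once the constraint acts
    return 1.0 - u

def solve_dvi(x0=0.0, dt=1e-3, t_end=3.0):
    x, traj = x0, []
    for n in range(int(t_end / dt)):
        u = max(0.0, g(x))           # solve the inner variational inequality
        x += dt * f(n * dt, x, u)    # advance the differential part
        traj.append(x)
    return np.array(traj)

traj = solve_dvi()
print(f"x(3) ~ {traj[-1]:.4f}")
# x approaches 2, where u = g(x) = 1 exactly balances the drift f = 1 - u.
```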
Where K is a cone, the inequality in (13) is called a complementarity condition. Differential variational inequalities provide a convenient formalism for optimal control problems (see Pang-Stewart [287], Kwon-Friesz-Mookherjee-Yao-Feng [224]) and frictional contact problems (see Brogliato [57], Pang [286]). Various other formalisms (coming from control, mechanics and biology) and their relationships are discussed in the survey by Georgescu, Brogliato and Acary [141] in this special issue. The central framework for dealing with (13) lies in transforming (13) (using so-called convex analysis) into differential inclusions for which the properties of the solutions are well understood. The details of this transformation can be found in the aforementioned papers [286,57], and the current state of the art of the corresponding results on the existence of solutions (in the sense of Caratheodory) for both initial-value and boundary-value problems for (13) has been developed in Pang-Stewart [287]. However, there are important situations where the differential inclusions approach doesn't offer uniqueness of solutions and a direct analysis of the DVIs is needed, see Stewart [334,335]. We refer the reader to the book [336] by Stewart for further reading on differential variational inequalities and their applications. Where the inequality in (13) models a mechanical contact, one can approximately investigate the solutions of (13) by replacing one of the surfaces of the contact by an array of springs. This approach, called regularization in the mechanics literature, takes the differential variational inequality (13) to a system of ODEs. Several experiments suggest that the true dynamical behaviour is that of the regularized ODEs, which can deviate from the dynamics of the original differential variational inequality, see e.g. Hinrichs-Oestreich-Popp [164] and Liang-Feeny [237] (yet other experiments show that nonsmooth models also compare very well with observations). We refer the reader to the pioneering paper [364] by Vielsack and to the more recent development [331] by Stamm and Fidlin. A mathematical theory to study the dynamics of the regularised systems in the infinite-stiffness limit of the springs has recently been developed in Nordmark-Dankowicz-Champneys [272]. In addition, nonuniqueness of solutions of the initial-value problem for (13) is a common phenomenon in contact mechanics (called static indeterminacy, see e.g. [261]). The aforementioned paper [272] identifies the situations where the regularised ODEs resolve the ambiguity and where they do not. Sufficient conditions for robustness of the regularisation of piecewise smooth ODEs are discussed in Fridman [130] and in the survey by Teixeira and da Silva [351] in this special issue. The paper [319] by Sieber and Kowalczyk suggests that the class of piecewise smooth ODEs for which this robustness takes place is rather limited. Regularisation of impact oscillators is discussed in Ivanov [182,183]. Bastien and Schatzman [26] discuss the differential inclusions that occur in the limit of the regularisation processes for dry friction oscillators and analyse the size of the integral funnels of these inclusions. Another class of nonsmooth systems where the properties of the solutions arise as a major problem is the class of systems with hysteresis. In their most general form these systems can be described as (14), where P is a so-called hysteresis operator, see the pioneering work by Krasnoselski-Pokrovski [207].
A survey by Krejci-O'Kane-Pokrovskii-Rachinskii [208] in this special issue discusses the existence, uniqueness, dependence on initial conditions and other properties of solutions of systems with hysteresis of the aforementioned general form, focusing on the rightmost equation of (14).

Applications

In this section we discuss applications that have stimulated the development of mathematical methods for the analysis of nonsmooth systems. We focus on the mathematical problems around applications and highlight their place in the theory of nonsmooth systems, as just presented. Border-collision of an equilibrium with a smooth switching manifold of discontinuous systems has been used to explain fundamental paradoxes in mechanical devices with friction. The situation where the switching manifold is discontinuous has received much attention in the closed-loop control of car braking systems.

Car braking systems. Tanelli, Osorio, di Bernardo, Savaresi and Astolfi [347] use a two-dimensional switching system with four switching manifolds (which switch the actions of the charging and discharging valves in the hydraulic actuator) to design closed-loop control strategies for anti-lock braking systems (ABS). The dynamics of this model exhibits a border-splitting bifurcation: one first squeezes the parallel thresholds together and then observes how the dynamics responds to the increase of the gap between these thresholds. The dynamics of the brakes themselves can be adequately described by a dry friction oscillator, i.e. a second-order differential equation involving a sign function. The time periods that stable regimes spend sticking to the switching manifold appear to be in direct relation to the brake squeal level, see Badertscher-Cunefare-Ferri [20] and Ibrahim [177]. Studying grazing bifurcations in dry friction oscillators is a possible way to understand the properties of such sticking phases. This direction of research is explored in Zhang-Yang-Hu [387] and Luo-Thapa [244]. When the viscous friction is small, sticking phases can be investigated by a suitable perturbation approach, as the paper by Hetzler-Schwarzer-Seemann [163] asserts. However, the recent survey by Cantoni-Cesarini-Mastinu-Rocca-Sicigliano [73] suggests that more work is necessary to completely understand the connection of brake squeal with sliding solutions of an appropriate mathematical model. Periodic solutions with sliding phases also play a pivotal role in the Burridge-Knopoff mathematical model of earthquakes, see Xu-Knopoff [381], Mitsui-Hirahara [268], Ryabov-Ito [308], Galvanetto [135], and Galvanetto-Bishop [136]. But grazing-sliding bifurcations of these solutions have not yet been addressed in the literature. Grazing-sliding bifurcations in a superconducting resonator are discussed in the paper by Jeffrey [190] in this special issue.

Atomic force microscopy. According to Hansma-Elings-Marti-Bracker [157], the AFM cantilever-sample interaction can be modelled by a piecewise linear continuous spring (see also Sebastian-Salapaka-Chen [315]). The switch from one linear stiffness characteristic to another happens at the moment when the cantilever enters into contact with the sample. As the cantilever is designed to oscillate (the cantilever tapping mode prevents damaging the sample), the free motions of the cantilever are separated from those touching the sample by a periodic solution that grazes the switching manifold.
The corresponding grazing bifurcations turn out to be related to the loss of image quality, as shown in the analysis of Misra-Dankowicz-Paul [267], Dankowicz-Zhao-Misra [101], and Van de Water-Molenaar [369]. Under certain typical circumstances, and away from the grazing regimes, the occurrence of subharmonic and chaotic solutions has been investigated using perturbation theory by Yagasaki [382,383] and Ashhab-Salapaka-Dahleh-Mezic [4,5].

Drilling. Mass-spring oscillators with piecewise linear stiffness characteristics play an important role in the modelling of drilling. Similarly to AFM, the switch in the stiffness coefficient corresponds to the moment when the drill enters the sample. A difference with respect to the AFM model is that the position of the whole system moves over time due to periodic (percussive) forcing from a periodically excited slider (reflecting the fact that the drill penetrates into the sample). Dry friction resists the penetration of the drill into the sample. The model can therefore be seen as a combination of a dry friction oscillator with a soft-impact one. Progressive motion with repeating sticking phases is the most useful regime of this setup. Analytic results about the properties of the sticking phases have been obtained in Besselink-van de Wouw-Nijmeijer [50], Germay-Van de Wouw-Nijmeijer-Sepulchre [142], and Cao-Wiercigroch-Pavlovskaia-Yang [74] by averaging methods, under the assumption that the generating solution does not graze the switching manifolds. A numerical approach to the bifurcation analysis was followed in Luo-Lv [251]. In similarity to the modelling of drilling, Zimmermann-Zeidis-Bolotnik-Pivovarov [401] discuss how a two-module vibration-driven system moving along a rough horizontal plane describes the behaviour of biomimetic systems.

Neuron models. Predominantly unexplored challenges in nonsmooth bifurcation theory can be found in neuroscience applications, where the switching manifold sends any trajectory of integrate-and-fire or resonate-and-fire models to the same point of the phase space. A grazing bifurcation here corresponds to the transition from sub-threshold to firing oscillations. This special volume contains a survey by Coombes-Thul-Wedgwood [94] of the new phenomena and open problems that stem from the presence of nonsmoothness in neuron models. New perturbation methods applicable near grazing solutions can be useful to reduce the dimension of networks of coupled neurons of integrate-and-fire or resonate-and-fire type. Such an approach has been employed in a series of recent papers by Holmes (see e.g. [363]) to investigate the dynamics of weakly coupled FitzHugh-Nagumo, Hindmarsh-Rose, Morris-Lecar and other smooth neuron models.

Hard ball gas. Rom-Kedar and Turaev [360,306] have recently shown that grazing periodic trajectories of scattering billiards (two-degree-of-freedom Hamiltonian systems with impacts) can transform into an island of asymptotically stable periodic solutions under perturbations that regularise the nonsmooth impact into a smooth one. Though a higher-dimensional generalization of this observation is still an open problem, this result may potentially help to examine the boundaries of applicability of the Boltzmann ergodic hypothesis (asserting that the hard ball gas is ergodic). These islands of stability have later been seen in experiments with an atom-optic system by Kaplan-Friedman-Andersen-Davidson [197].
A similar phenomenon, known as absence of thermal equilibrium, has been experimentally observed in one-dimensional Bose gases by Kinoshita-Wenger-Weiss [198]. Periodic orbits that graze the boundary of focusing billiards play an important role in the context of Tethered Satellite Systems, see Beletsky [33] and Beletsky-Pankova [34].

Electrochemical waves in the heart. Employing the mathematical modelling from Sun-Amellal-Glass-Billette [337], an unfolded border-collision bifurcation in a tent-like piecewise linear continuous map has been used to explain the transition from long to short periods (alternans) in electrochemical waves in the heart (linked to ventricular fibrillation and sudden cardiac death), see Zhao-Schaeffer [390], Berger-Zhao-Schaeffer-Dobrovolny-Krassowska-Gauthier [36], Hassouneh-Abed [159,160], and Chen-Wang-Chin [81]. However, only particular forms of perturbations have been analysed, and the question of a complete unfolding of the dynamics of this map is explicitly posed in [390]. As a possible route to chaos in the propagation of light in a circular laser-diaphragm-prism system, the border-collision bifurcation in a nonsmooth logistic map was discussed in the pioneering paper [165]. The book by Banerjee-Verghese [23] and papers by Zhusubaliyev-Mosekilde [398,399] and Zhusubaliyev-Soukhoterin-Mosekilde [400] discuss the role of border-collision bifurcations in tent-like maps in the context of power electronic circuits such as boost converters and buck converters. Collision of a fixed point with a border in more general piecewise smooth maps appears in the analysis of inverse problems (Ayon-Beato, Garcia, Mansilla, Terrero-Escalante [18]), a forest fire competition model (Dercole-Maggi [106], Colombo-Dercole [91]), and mutualistic interactions (see Dercole [104]).

Incompressible fluids. The classical theory by Kolmogorov [200] asserts that the order of the dependence of the velocity vector v(x) of incompressible fluids on the coordinate x does not exceed 1/3 at any point x of the phase space. The relevant differential equations are, therefore, not piecewise smooth and in fact nowhere differentiable. This implies non-uniqueness of the flow starting from any point of the phase space. Kolmogorov's fluid model challenges the development of bifurcation and perturbation theory to study transitions of the funnels of flows. Despite potential novel insights towards the understanding of the nature of turbulence, little has been developed in this direction, and the approach commonly used so far is based on embedding (known as stochastic approximation) the given deterministic ODEs into a more general class of stochastic differential equations, see e.g. Falkovich-Gawedzki-Vergassola [116] and E-Vanden-Eijden [362].

Disk clutches. Static indeterminacy is a phenomenon caused by the presence of dry friction in mechanical devices, where the static equations of forces do not lead to a unique solution. This phenomenon represents one of the main motivating problems behind the field of Nonsmooth Mechanics (see Brogliato [57]). One of the methods to cope with the non-uniqueness of solutions is known as regularization [364], the development of which has recently been reinforced by applications to disk clutches by Stamm-Fidlin [331,332]. This method is based on the approximation of rough surfaces by springs and leads to a singularly perturbed system where the so-called reduced system turns out to be degenerate.
This concept is ideologically similar to smoothing (or softening) the given nonsmooth problem and challenges further development of Fenichel's singular perturbation theory [379]. One of the problems in relation to disk clutches is how well the regularised system approximates the moment of time (known as cut-off) when the initially motionless clutch disk starts moving, as a function of the parameters of the applied torque. A theory for a similar phenomenon in wave front propagation has been developed in a paper by Popovic [299] in this special issue. A regularization procedure has also been proposed in McNamara [261] to resolve the nonuniqueness problem in the context of granular material. Wave propagation through the Earth. The need to gain a deeper understanding of the topological properties of grazing orbits (in particular, the topological index of grazing orbits) has recently been underlined by the problem of geophysical wave propagation. According to De Hoop-Hormann-Oberguggenberger [170], this process is modelled by hyperbolic PDEs with piecewise smooth coefficients (the switching manifold corresponds to the lowermost mantle layer). Attempts to apply the Buffoni-Dancer-Toland global analytic bifurcation theory (see Dancer [95] and Buffoni-Toland [65]) proved to be effective for studying the existence of steady waves of the Euler equation (see Buffoni-Dancer-Toland [66]) and for constructing solutions of these partial differential equations, starting from convenient ordinary differential equations. The challenge of extending global analytic bifurcation theory to piecewise analytic differential equations is relevant in this context. Discussion This survey aims to sketch the central directions of research concerning the dynamics of nonsmooth systems. In this final section we briefly summarise our conclusions. The need to develop new mathematical methods to study the dynamics of nonsmooth systems is motivated by real world applications. For example, existing smooth methods do not provide a mechanism for understanding how switching manifolds generate cycles or chattering in control. In mechanics, new methods have been required to understand bifurcations initiated by oscillations that touch elastic limiters at zero speed (e.g. when a cantilever of an atomic force microscope or a drill starts to penetrate into a sample). A similar grazing problem appears in neuroscience when subthreshold oscillations transit into firing ones. In hydrodynamics, the Kolmogorov model of turbulence leads to differential equations that are non-Lipschitz everywhere (thus not piecewise smooth), and smooth methods cannot be applied because of the non-uniqueness of solutions. Finally, the mere existence, uniqueness and dependence on initial conditions of solutions is a challenge for nonsmooth systems coming from optimisation theory and nonsmooth mechanics. For nonsmooth systems given in the form of differential equations with piecewise smooth right-hand sides and impacts (which cause trajectories to jump according to an impact law upon approaching a switching manifold), the new phenomena can be identified and understood by a local analysis of the consequences of the collision of a simple invariant object (like an equilibrium, a periodic solution or a torus) with switching manifolds. Here a collision for periodic solutions and tori is meant in a broader sense and stands for a non-transversal intersection with a switching manifold. 
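In the map setting, the simplest instance of such a collision is a fixed point of a piecewise linear continuous map crossing the border. The sketch below iterates the one-dimensional border-collision normal form, x -> a*x + mu for x < 0 and x -> b*x + mu for x >= 0; the slopes are illustrative choices for which a stable fixed point is instantly replaced by a stable period-two orbit as mu crosses zero, a transition that cannot occur in a smooth one-parameter family.

```python
def bc_step(x, mu, a=0.5, b=-1.7):
    """One step of the 1D border-collision normal form: a continuous,
    piecewise linear map with its border at x = 0."""
    return a * x + mu if x < 0.0 else b * x + mu

# For mu < 0 the attractor is the fixed point x* = mu/(1 - a) < 0; at
# mu = 0 it collides with the border, and for mu > 0 (with these slopes,
# |a*b| < 1) a stable period-two orbit is born directly in the collision.
for mu in (-0.1, -0.01, 0.01, 0.1):
    x = 0.1
    for _ in range(1000):                 # discard the transient
        x = bc_step(x, mu)
    sample = []
    for _ in range(6):
        x = bc_step(x, mu)
        sample.append(round(x, 4))
    print(f"mu = {mu:+.2f}: {sample}")
```

Repeating the scan over a fine grid of mu values and plotting the sampled points against mu reproduces the familiar border-collision bifurcation diagram; other slope choices realise the other classified scenarios.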
Despite useful applications of the recently discovered classifications of border-collision bifurcations of an equilibrium in control (see e.g. [347]), the role of these phenomena in other applied sciences is, in our view, still largely underestimated. For example, it has not yet been explained which of the discovered scenarios of border-collision bifurcations can be realised in dry friction or impact mechanical oscillators. A significantly greater number of papers has been published on applications of the scenarios of grazing bifurcations of closed orbits (i.e. phenomena coming from collisions of closed orbits with the switching manifold). Yet the role of this fundamental phenomenon remains unexplored in many important applied problems (e.g. in integrate-and-fire and resonate-and-fire neuron models and atom billiards). The available knowledge about bifurcations of trajectories with chattering has not yet found common points with control, where these trajectories correspond to so-called Zenoness (we refer the reader to Sussmann [343] and Zhang-Johansson-Lygeros-Sastry [386] for known alternative results). The analysis of the collision of an invariant object with a switching manifold in piecewise smooth systems often leads to the study of the collision of a fixed point with a switching manifold in maps, otherwise known as border-collision in maps. Because of applications in medicine and electrical engineering (as discussed in Section 5), border-collision bifurcations in maps have received independent interest in the literature. The two most fundamental maps of this type are the tent and square-root ones. Some examples show that the dynamics of a skew product of two such maps is not reducible to one dimension, but general results have not been obtained. Much less is known about nonsmooth systems that are not piecewise smooth. Partial results are available in the case where a nowhere Lipschitz continuous system is smooth for some value of the parameter. These results suggest that studying bifurcations of trapping regions, rather than bifurcations of solutions, is a potentially fruitful approach to access the dynamics. As for more general nonsmooth systems like differential variational inequalities, a complete understanding of the dynamics has been achieved only in the case where the nonsmooth system is reducible to a convergent differential inclusion. Though the classes of differential variational inequalities that lead to piecewise smooth differential equations have been well identified in the literature, the piecewise smooth bifurcation and perturbation theories have not yet been applied in this context. Also, the possibilities of relaxing the requirement of convergence of the aforementioned differential inclusions based on perturbation theory (which is partially developed for these systems already) have not yet been explored. We hope this survey, and this special volume of Physica D, will facilitate the joining of efforts of researchers interested in different aspects of the dynamics of nonsmooth systems.
Facial Plastic Training During Residency Program and the Factors Affecting it-A Descriptive Study of Saudi Residents Otorhinolaryngology-Head and Neck Surgery (ORL-H&N) in the Kingdom of Saudi Arabia is a five-year structured training program, upon completion of which trainees will have gained fundamental knowledge, clinical skills, and an understanding of professional behaviour; it is considered the largest training program in the Gulf region. The Saudi Commission for Health Specialties (SCFHS) has adopted the Canadian Medical Education Directives for Specialists (CanMEDS) framework to set up the core curriculum of all training programs, including the Saudi Board Certification in Otorhinolaryngology-Head and Neck Surgery. Upon completion of the residency training program, graduating residents will be able to function as independent otolaryngologist-head and neck surgeons, enabling them to pursue careers in general otolaryngology successfully or to proceed with subspecialty fellowship training [1]. The residency training program is an important factor affecting the resident's choice of study area and the need for fellowship training. The impact of residency training on an otolaryngology subspecialty may be positive or negative; thus, it is crucial to receive balanced training in all divisions. Many factors may play a role in training quality, for example, the number of trainers, the training hospital's specialty, or restrictive policies in the selection of cases. The number of facial plastic surgery cases, particularly cosmetic cases, has recently increased. This underscores the need to improve facial plastic training during the residency program [3]. A study undertaken by Osguthorpe et al. found no significant differences in cosmetic surgery results between surgeries performed by residents and those performed by the staff who trained them, except that residents required more time [4]. However, residents need sufficient training and increased exposure to these procedures to reach this goal [3]. In the Saudi (ORL-H&N) residency program, the facial plastic rotation is a mandatory three-month rotation at the R3-R5 levels, with a minimum number of cases for each procedure and recommended courses to be taken [1]. Objectives No recent study has assessed residents' exposure to facial plastic surgery during the residency program, especially in Saudi Arabia. Therefore, it was found worthwhile to analyse the change in the knowledge and attitudes of (ORL-H&N) program residents after introducing the rotation on facial plastic surgery in the new curriculum. Methods A cross-sectional study was conducted at King Saud University, Saudi Arabia after the approval of the institutional review board committee of the same institute. The study included all male and female otorhinolaryngology-head and neck surgery residents in the four approved residency programs in the kingdom, as well as plastic surgery residents rotating in facial plastic surgery in Saudi Arabia, from July to September 2018 (the end of the training year); a questionnaire was emailed to all of them. Data regarding the residents' demographics and the duration of the facial plastic surgery rotation during the residency program were collected, along with a self-reported evaluation of facial plastic surgery training: some residents did not think their exposure was adequate but were nevertheless satisfied, and some (n=18, 20.69%) were not exposed to facial plastic surgery at all (Table 3). 
A majority felt that their training was affected by consultant concern about the outcome (n=57, 65.52%). Discussion Residency programs are critical in developing the competencies that residents are expected to demonstrate to help them in patient care, improve their knowledge and communication skills, and competently perform all medical and invasive procedures essential to their area of practice [5]. It is much easier to acquire new surgical skills in a residency program than after the completion of residency training [6]. Safely learning the necessary surgical skills is considered the most significant challenge and requires a training period [6]; thus, residents must be provided with reliable and effective methods to help them achieve such skills. Having adequate knowledge, undergoing a period of preceptorship with multiple courses, and working as members of a high-performance team all play a role in gaining the necessary experience to ensure optimum outcomes [6]. The Accreditation Council for Graduate Medical Education (ACGME) suggests viewing a videotape of one's performance, receiving oral feedback, keeping logs and learning plans, performing self-assessments and quality improvement projects, and developing resident-initiated projects such as activities involving active experimentation for practice-based learning and improvement [7]. However, inadequate training during residency is one of the reasons why residents seek fellowship training [8]. At the same time, the lack of appropriate training in basic procedures during the residency program may prevent residents from gaining additional skills and may affect their drive to complete fellowship training as well [8]. Surgical training in otolaryngology has characteristics that could minimize or enhance the magnitude of this task. These factors include the many subspecialties and varied operative modes, including open vs. endoscopic, microscopic, and office-based procedures, as well as the issues of medical otolaryngology and the comfort zone [9]. Most of our residents thought that consultant concern about the result was the main reason for insufficient training. Conversely, studies have shown that resident participation in surgery is not associated with an increase in morbidity or mortality [9][10][11][12][13][14][15][16]; based on the American College of Surgeons NSQIP Database, operations with resident involvement had similar morbidity and lower mortality than those without resident participation, and cosmetic surgery in particular showed no difference [4,17]. Limitations of this study include the relatively small sample size, the lack of objective tools for assessing resident knowledge and skills, the different levels of resident training in facial plastic surgery rotations, potential recall bias attributable to inaccurate self-reporting, and response bias due to nonresponders. Likewise, the current study is one of the first in our region to explore this problem amid the growth of facial plastic surgery cases [18].
Prediction model for the leakage rate in a water distribution system Leakages cause real losses in water distribution systems (WDSs) from transmission lines, storage tanks, networks, and service connections. In particular, the amount of leakage increases in aging networks due to pressure effects, resulting in severe water losses. In this study, various artificial neural network (ANN) models are considered for determining monthly leakage rates and the variables that affect leakage. The monthly data, which are standardized by Z-score for the years 2016-2019, are used in these models by selecting four independent variables that affect the leakage rate regarding district metered areas and pressure metered areas in WDSs. The pressure effects are taken into consideration directly as input. The model accuracy is determined by comparing the predicted and measured data. Furthermore, the leakage rates are estimated by directly modelling the actual data with ANNs. Consequently, it is found that the model results after data standardization are somewhat better than the original nonstandardized data model results when 30 neurons are used in a single hidden layer. The reason for the higher accuracy in the standardized case compared with previous modelling studies is that the pressure effect is taken into consideration. The suggested models improve the model accuracy, and hence, the methodology of this paper supports an improved pressure management system and leakage reduction. INTRODUCTION Water losses in water distribution systems cause an increase in the operational costs of water utilities and, in turn, increase the water price. It is predicted that the amount of water leakage in water distribution systems (WDSs) is 48 billion m³ per year around the world (Kingdom et al. 2006). Water utilities try to apply certain techniques, besides modernization programs in networks, to control and reduce high levels of water losses. Each water utility should prioritize reliable water loss studies by modernizing its water distribution system. Reliable water loss method developments that use modern techniques will help to reduce losses on a planned basis, save energy, reduce water production costs, improve water quality and increase investments. According to the American Water Works Association (AWWA) and the International Water Association (IWA) Water Balance and Terminology, water loss consists of apparent losses (non-physical losses and management losses) and real losses (physical losses) (AWWA 2003). Real losses consist of leakage on the transmission and/or distribution mains, real losses from raw water mains and treatment works, leakage and overflows at transmission and/or distribution storage tanks, and leakage on service connections up to the point of customer metering (Alegre et al. 2016). Leakage is a key parameter for water loss. Leakages in WDSs can be categorized as reported leakages, unreported leakages and background leakages (Lambert 2003). Reported leakages can be defined as emerging and visible leakages; unreported leakages as non-surface leakages that are detectable by acoustic devices; and background leakages as non-surface leakages that are acoustically undetectable. Leakage removal by timely detection is also significant for water loss levels. A literature review concerning this subject shows that various methods have been preferred in several studies (Xue et al. 2020; Hu et al. 2021). 
To reduce water leakage, a pressure management system, a well-known low-cost approach, is implemented in WDSs (Kanakoudis & Gonelas 2016; Samir et al. 2017). High leakage rates are observable at high pressure levels because the leakage rate is a function of pressure (Kanakoudis & Muhammetoglu 2014). Pressure management is achieved by dividing the WDS into smaller and more manageable district metered areas (DMAs) (Kanakoudis & Muhammetoglu 2014). The pressure is reduced and controlled by installing pressure reducing valves (PRVs) at the critical points in the DMA. Using a pressure management system (PMS) and DMAs, it is possible to monitor a system 24 hours per day via the supervisory control and data acquisition (SCADA) system, which can prevent losses by reducing leakages and breaks. Leakage reduction helps to protect limited water sources, minimize the quantity of refined water, pump less water and minimize power consumption. The leakage rate (LR) is the ratio of the water loss to the total system input water volume. The leakage rate varies depending on the pipe age, material quality, hole geometry on the pipe surface, operating pressure and similar factors. Marchis & Milici (2019) examined leakages in laboratory environments by using rectangular and circular cracks in polyethylene pipes of different sizes at various pressure levels and then evaluated the experimental results with the Torricelli and International Water Association (IWA) formulations and their modifications, as well as the Cassa formulations. Niu et al. (2018) modelled the leakage rate in Tianjin water supply networks through the principal component regression method. The researchers took network factors into account in their studies, such as maintenance cost, annual average water pressure, pipe material, valve replacement cost, pipe age, and pipe diameter. They obtained an adjusted R² value of 0.72 through the developed leakage rate-leakage factors model. AL-Washali et al. (2018) analysed the leakage rate by using minimum night flow analysis in the Zarqa intermittent supply system. The researchers indicated that the one-day minimum night flow analysis should not be used to predict the leakage rate because customer tanks fill overnight. Leu & Bui (2016) developed, through the Bayesian method, a leakage prediction model for the Taipei WDS. According to the model results, the pipe age, construction activity, ground movement and pressure fluctuation have significant roles in leakage. Jang et al. (2018) predicted the leakage rates in WDSs by using certain statistical analysis methods, such as ANNs, Z-scores and principal components. The pipe length/junction, demand energy rate, number of water leaks, mean diameter, pipe deterioration rate, water supply quantity and junction parameters were used as input variables in the model. The best determination coefficient (0.55) was obtained by an ANN model with multiple hidden layers and 24 neurons (Jang et al. 2018). However, the pressure effect, the most important parameter regarding leakage, was not taken into consideration in that modelling. The present study aims to predict the leakage rate through artificial neural networks, which are applied today in many scientific fields owing to their ability to solve complex problems successfully. 
For this purpose: (i) in this study, the pressure parameter and network age, which are directly related to leakage, have been taken into consideration for the first time as model inputs for the LR prediction; (ii) the ideal ANN architecture has been developed by analysing the effect of each parameter one by one; (iii) the combination that provides the highest model accuracy using the fewest input parameters has been sought; (iv) and finally, the original data have been standardized through the Z-score technique, as in similar studies, to increase the prediction model accuracy, and the calculations have been repeated for the determined ANN model combinations. METHODOLOGY In this study, artificial neural networks, one of the artificial intelligence methods, have been used to predict the leakage rate according to the following steps: 1. The parameters of İzmit's water distribution system that were measured and recorded monthly between 2016 and 2019 have been collected for the model study. 2. The original data have been standardized through the Z-score technique to increase the prediction performance of the ANN models. 3. ANN models with a single input have been constructed to determine the model input parameters. In these models, an ideal ANN structure has been designed for the LR prediction by increasing the number of neurons in the single hidden layer from 5 to 30. 4. The effective parameters, such as system input volume (TSIV), total network length (TNL), mean age of networks (MAN), mean diameter of networks (MDN), and average network pressure (ANP), have been selected using single-input models for the LR prediction. 5. Various model combinations have been developed to predict the LR with minimal input by increasing the number of model inputs. The best prediction model has been obtained as the TSIV/TNL-ANP-MAN-MDN combination. 6. Performance criteria such as R², SI, and the G-value have been used to analyse the model accuracy. 7. The LR has been predicted using the original data in the TSIV/TNL-ANP-MAN-MDN combination. 8. The prediction accuracy of the models obtained through the original data has been evaluated with the same performance criteria. 9. The ideal LR prediction model has been identified by comparing all model performance results. 10. The accuracy of all prediction models developed through the applied methodology and selected parameters is higher than that of the prediction models with six inputs suggested by Jang et al. (2018). The methodology suggested in the study is summarized in Figure 1. Data standardization via the Z-score The Z-score can be used when performing analysis to distinguish the differences and the distributions of variables (Jang & Choi 2017). The variable values, x, can be standardized by subtracting the mean from each variable value and dividing by the standard deviation, z = (x − m)/s. In this equation, z is the standardized data value, m is the mean and s is the standard deviation. The Z-score technique allows the variables in all data sets to be brought into a common variable range. In addition, this technique indicates by how many standard deviations the variables deviate from the mean. By means of this technique, the raw data are converted to standardized values with a standard deviation of 1 and a mean of 0. Hence, comparing the standardized values and variables becomes easier. Artificial neural networks In recent years, the development of ANNs has accelerated to help cognitive science by imitating the working principle of nervous systems. 
ANNs can be classified in accordance with their topologies (e.g., single-layer and multilayer feed-forward networks). Single- and multilayer feed-forward networks have been widely used in studies to better understand hydraulic engineering problems (Kizilöz et al. 2015) and to determine the complex structures of WDS components (Jang et al. 2018). The architectural structure of an ANN is composed of artificial neurons, which allow data transfer between layers in the forward and backward directions. Each neuron in the network is connected to the others by weights. The weights are the parameters used to establish the effects of inputs on outputs. The key of the network is to calculate the required optimum weight values by propagating the error in accordance with the training algorithm of the given weights. The transfer function used in this study calculates the combined effect of all inputs and weights, i.e., the net neuron input. The total net input collected in a neuron (net) is obtained by the following expression: net_j = Σ_i w_ji x_i + b, where x_i is the neuron input value, w_ji is the weight coefficient, n is the total number of input neurons, and b is the threshold value. In addition, the activation function helps to determine the neuron output by processing the net input obtained from the transfer function. Selecting the correct activation function significantly affects the network performance and the success rate. The sigmoid function is generally selected as the activation function for multilayer perceptron models. The neuron output calculated by means of this function is given as follows: y = f(net) = 1/(1 + e^(−net)). In this study, a three-layer feed-forward back-propagation (FFBP) network model is considered, with a sigmoid function in the hidden layer and a linear function in the output layer. While the forward transfer of data through the model to obtain a result in the output layer is called feed-forward, the backward transfer of the error through the network towards the input layer, when there is a discrepancy between the actual and target outputs, is called back-propagation. In the back-propagation stage, all weights are readjusted according to the error correction rule (Haykin 1999). When the error falls below an acceptable level, the iteration ends, and the target value is calculated as the output. Each model needs to be trained before estimation. In this study, the Levenberg-Marquardt back-propagation (trainlm) algorithm is used, as it is a fast and precise algorithm for the training process (Kizilöz et al. 2015). The Levenberg-Marquardt algorithm is based mainly on the least-squares method using the maximum neighbourhood. This method includes the best features of the Gauss-Newton and gradient descent algorithms for adjusting the weights, and, after various approximations and optimizations, the weight update is expressed by the following equation: w_(k+1) = w_k − (J^T J + mI)^(−1) J^T e, where w is the weighting factor, J is the Jacobian matrix, I is the unit matrix, m is a constant coefficient larger than zero and e is the error. If m is too large, the method behaves like the gradient-descent method; if it is too small, it behaves like the Gauss-Newton method. In this study, the MATLAB software was used to build the ANN prediction models and calculate the results. The model inputs included the TSIV/TNL, ANP, MAN, and MDN variables, and the model output was the LR. For each model application, the data were randomly divided into training (55%), validation (35%), and test (10%) data through the algorithm defined in the MATLAB program (Kizilöz et al. 2015; Sisman & Kizilöz 2020). 
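The network equations above condense into a short forward pass. The sketch below (Python with NumPy and random illustrative weights) mirrors the paper's architecture, namely four inputs, one sigmoid hidden layer and one linear output neuron; the training itself, by Levenberg-Marquardt, is left to a toolbox such as MATLAB's trainlm.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass of the three-layer FFBP network: transfer function
    net_j = sum_i w_ji * x_i + b_j, sigmoid activation in the hidden
    layer, linear activation in the output layer."""
    net_hidden = W1 @ x + b1                     # net input of hidden layer
    hidden = 1.0 / (1.0 + np.exp(-net_hidden))   # sigmoid activation
    return W2 @ hidden + b2                      # linear output (the LR)

rng = np.random.default_rng(1)
n_in, n_hidden = 4, 30                           # TSIV/TNL, ANP, MAN, MDN
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(1, n_hidden))
b2 = np.zeros(1)

x = rng.normal(size=n_in)                        # one standardized record
print("predicted (standardized) LR:", forward(x, W1, b1, W2, b2))
```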
The most important issue in the ANN application is to decide on the numbers of hidden layers and neurons. Many studies in the literature have preferred a single hidden layer, because a higher number of hidden layers does not improve the model performance (Kizilöz et al. 2015). All ANN models in this study were built with a single hidden layer (Sisman & Kizilöz 2020). There is no mathematical test to determine the number of neurons in the hidden layer for ANN design. Generally, the numbers are determined through trial-and-error methods. The ANN models suggested in this study are chosen on the basis of various numbers of neurons, such as 5, 10, 20 and 30, in one hidden layer. A typical FFBP network consists of an input layer, one or more hidden layers and an output layer, as shown in Figure 2. Evaluation of the ANN model performance Accuracy evaluation is possible through a comparison of the model estimation value using ANNs and the measured value in estimating the LR. In this study, certain performance functions, such as the coefficient of determination (R²), the scatter index (SI), and the G-value, have been used to determine the model accuracy. For all the validation data sets, these performance functions are calculated as R² = [Σ(x_i − x̄)(y_i − ȳ)]² / [Σ(x_i − x̄)² Σ(y_i − ȳ)²], SI = (1/x̄) √((1/n) Σ(x_i − y_i)²), and G = [1 − Σ(x_i − y_i)² / Σ(x_i − x̄)²] × 100, where x_i is a data value, y_i is the estimated data value and n is the number of validation data values. Finally, x̄ and ȳ are the means of the measurement and estimation data. İzmit, the second largest district of Kocaeli, is selected as the study area. As of 2018, the district had 363,416 people, 160,135 water consumers and 30,840,477 m³ of water supply. Here, 67 sections of DMAs and 84 sections of PMAs, as shown in Figure 3, were installed in 2014 to reduce the water loss rate of 45.40%. While the total network length of the district is 1,114 km (Kizilöz & Sisman 2021), the network length in the DMAs is 56,639 km. In addition, all water meters in the DMAs have been replaced entirely by smart water meters to remove the apparent loss effect. As a result of WDS hydraulic model studies of the district at the end of 2018, the water loss rate was reduced to 29.70% by dividing the WDS into DMAs and PMAs. In particular, the pressure management system has been very useful in the WDS, where the losses were minimized, reducing the leakages of mains and service connections that could not be detected. To analyse the leakage rate in the modelling study, 1,357 data measurements were taken on a monthly basis between 2016 and 2019 in the DMAs and PMAs. The effective factors affecting leakage in a WDS divided into DMAs are as follows: the average pipe diameter, water supply quantity, district characteristics, pipe length, frequency of leaks, water pressure in the pipes and network configuration (Jo et al. 2016). In this study, certain variables are used for modelling that directly express the real losses in the DMAs and PMAs, such as the total system input volume (TSIV), total network length (TNL), mean age of networks (MAN), mean diameter of networks (MDN), average network pressure (ANP), and leakage rate (LR). The TSIV and TNL represent the total monthly measured values, and the MAN, MDN and ANP represent the average monthly measurements. The descriptive summary statistics of the variables used in the prediction models are given in Table 1. The DMA comparisons were made using the average monthly variable measurements in each DMA. 
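For reference, the three performance criteria just defined can be computed in a few lines. The sketch below follows the formulas as reconstructed above; the exact G-value convention is an assumption inferred from the values reported later, and the sample arrays are illustrative, not the study's data.

```python
import numpy as np

def performance(measured, predicted):
    """Accuracy criteria for a validation set: coefficient of
    determination R^2, scatter index SI (RMSE normalised by the mean of
    the measurements) and G-value (an efficiency score scaled to 100)."""
    x = np.asarray(measured, dtype=float)
    y = np.asarray(predicted, dtype=float)
    xm, ym = x.mean(), y.mean()
    r2 = np.sum((x - xm) * (y - ym)) ** 2 / (
        np.sum((x - xm) ** 2) * np.sum((y - ym) ** 2))
    si = np.sqrt(np.mean((x - y) ** 2)) / xm
    g = (1.0 - np.sum((x - y) ** 2) / np.sum((x - xm) ** 2)) * 100.0
    return r2, si, g

r2, si, g = performance([0.20, 0.40, 0.50, 0.30], [0.22, 0.37, 0.52, 0.31])
print(f"R2 = {r2:.4f}, SI = {si:.4f}, G = {g:.2f}")
```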
The leakage rates (LRs) were calculated by dividing the water losses by the TSIV. The largest rate was 0.64 in DMA No. 35, while the smallest rate was 0.05 in DMA No. 63 (Figure 4). An analysis of the LR values for the DMAs has shown that the rate is above 0.50 in eleven of the DMAs, between 0.3 and 0.5 in twenty-eight and between 0.2 and 0.3 in fourteen. It is necessary to identify the detection failures in DMAs with very high LR values by means of active leakage control activities using acoustic devices, to replace aging networks that break down frequently and to revise the ideal operating pressure after these studies. The LR in fourteen DMAs was successfully maintained under 0.2. The LR may be minimized by reducing the pressure at regular intervals in accordance with the minimum night flow, thanks to 24-hour monitoring by the SCADA system. In the study area, the network pressure of fourteen DMAs is above 50 m. The mean pipe age of all the DMAs is 11.54 years; the greatest pipe age is 25.71 years in DMA No. 27, and the least is 4.88 years in DMA No. 23. While DMA No. 6 has the greatest network length, 38.18 km, DMA No. 35 has the smallest length, 0.43 km. By generating smaller DMA areas, the LR can be controlled and reduced. The maximum mean system input volume, 60,882 m³, is that of DMA No. 2. The maximum mean pipe diameter is that of DMA No. 16, 237.14 mm, and the minimum diameter is that of DMA No. 54, 81.25 mm. The average data regarding the dependent and independent variables that affect the LR are shown in Figure 4. Z-score analysis The standardized data were obtained by means of the Z-score method in the estimation of the LR by the ANN method. A standardized analysis was implemented using a total of 1,357 monthly data points for the various variables that affected the leakage in the 67 DMAs. The analysis results indicated that the Z-scores of 66 data points were outside the range of ±3; that is, these data were outliers from the average and were removed before the analysis. When analysing the distribution of the removed data, it was found that there were 27 data points from the MDN, 20 from the MAN, 3 from the ANP, 15 from the TSIV/TNL (km) and 1 from the LR. As a result, 1,291 pieces of monthly data were used in this study for LR estimation after standardizing the 1,357 pieces of raw data in the DMAs. The Z-score results regarding all variables in the DMAs and PMAs are shown in Figure 5. Artificial neural networks To identify the effective variables in the LR estimates, single-input single-output ANN models were established by using the standardized data. The monthly data collected from the DMAs and PMAs were randomly divided into 55% for training (710 data points), 35% for validation (452 data points) and 10% for testing (129 data points). Similar training, validation and testing data sets were used for all models. The Levenberg-Marquardt method of back-propagation was selected as the training algorithm by using the Neural Net Fitting toolbox in MATLAB. Before each training process, the models were initialized with random initial weights and biases (Kizilöz et al. 2015). In this study, different numbers of neurons (such as 5, 10, 20 and 30) were used in the hidden layer of the models. The best model with four inputs was developed by means of the best model variables with a single input. The performance of the prediction models is given in Tables 1 and 2. 
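The preprocessing and model-selection loop described in this section can be sketched compactly. The snippet below standardizes synthetic records, drops rows with any |z| > 3 (mirroring the ±3 screening above), and scans hidden-layer sizes of 5, 10, 20 and 30 neurons. scikit-learn's MLPRegressor with a logistic hidden layer stands in for the MATLAB trainlm model, and the data, target and split are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X = rng.normal(size=(1357, 4))                    # TSIV/TNL, ANP, MAN, MDN
y = X @ np.array([0.2, 0.8, 0.4, -0.3]) + 0.1 * rng.normal(size=1357)

# Z-score standardization and +/-3 outlier screening on the inputs.
z = (X - X.mean(axis=0)) / X.std(axis=0)
keep = np.all(np.abs(z) <= 3.0, axis=1)
z, y = z[keep], y[keep]

# Hold out 45% of the data (the paper uses 35% validation + 10% test).
z_tr, z_va, y_tr, y_va = train_test_split(z, y, test_size=0.45,
                                          random_state=0)
for n in (5, 10, 20, 30):
    model = MLPRegressor(hidden_layer_sizes=(n,), activation="logistic",
                         solver="lbfgs", max_iter=2000, random_state=0)
    model.fit(z_tr, y_tr)
    print(f"{n:2d} neurons: R2 = {r2_score(y_va, model.predict(z_va)):.4f}")
```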
Subsequently, the same model with four inputs was established by using the same ANN methodology on the original data, and finally, the best prediction model was determined as a result of the performance evaluation of the models obtained from the original and standardized data. ANN model performance and optimal model selection To separately analyse the effects of the physical parameters related to the LR, such as the MDN, MAN, TNL, ANP and TSIV, ANN models with a single input and single output were established by using the data with outliers removed. The model accuracy was evaluated by comparing the predicted LR values with the measured LR values. The performance functions given in Table 2 were used for the model accuracy evaluations. The best criterion for how well the model results fit a linear curve is the coefficient of determination, R², in the regression analysis process. A higher R² value means that the prediction models are more accurate. If the SI is small, the model predictions scatter less around the measurements. The single-input ANN models for LR prediction are available in Table 2. When the first models are analysed, it is seen that the performances of the MDN, MAN, and TNL are better than those of the ANP and TSIV. The performances of the single-input models suggested in this study are higher than the ones given in the study conducted by Jang et al. (2018). In addition, the neuron numbers were increased from 5 up to 30 in the hidden layer, and the model performances were evaluated accordingly. The highest accuracy was obtained by using 30 neurons in the hidden layer, as described in Table 2, for prediction models with a single input using different neuron numbers (such as 5, 10, 20, and 30). If more than 30 neurons are used in the hidden layer, the model performance decreases, so such models are not included in this study. The model performances based on the neuron numbers in the hidden layer can be seen in Table 3. The model results indicate that pressure, diameter, and age are the effective parameters for the leakage rates. The TSIV/TNL-ANP-MAN-MDN prediction models with four inputs and a single output were obtained with a higher accuracy by using the independent variables that are effective on leakage rates, as shown in Table 2. According to the performance evaluations of R², the SI and the G-value, the prediction model has the highest accuracy among the applied neuron numbers when 30 neurons are used in a single hidden layer with data eliminated and standardized by the Z-score (Table 3). The ANN [30] prediction model has the lowest scattering value, 15.223, among all models. The LR prediction models that use elimination of the outlier data through the Z-score technique are shown in Figure 6. The LR prediction model with 30 neurons in the hidden layer in Figure 6 has the highest coefficient of determination, 0.8658, and the highest G-value, 86.506. The most accurate results were obtained by means of the ANN [30] model in comparison with other neuron numbers when the same number of original data values was used as the input in TSIV/TNL-ANP-MAN-MDN, the best model for LR prediction (see Table 3). This LR prediction model has a higher accuracy than the other applied neuron numbers provided that 30 neurons are used in the hidden layer, in accordance with the R², SI and G-value performance evaluations. Different LR prediction models based on the original data with various numbers of neurons in the hidden layer are shown in Figure 7. 
The ANN [30] prediction model has the highest R² value, 0.8586, and the lowest scatter index, 18.160. The most accurate model results corresponding to the measured LR were achieved when there were 30 neurons in the hidden layer of the suggested models for both the original and standardized data sets. In the case of using 30 neurons in the hidden layer with the outlier data removed, the G-value, scatter index (SI) and coefficient of determination, R², are slightly better than for the original data. When comparing the model results with the study of Jang et al. (2018), it was found that the prediction accuracy was higher. They obtained their best model result by using 24 neurons in multiple hidden layers with 6 principal component analysis data inputs (R² = 0.5516 and G-value = 52.4). In this study, the monthly leakage rates were predicted with higher accuracy through ANN models with comparatively fewer inputs and fewer neurons. The prediction models with a pressure variable have higher model accuracy, which derives from the effect of pressure on the leakage rate being higher than that of the other variables. Various examples from the literature are as follows: the leakage in water distribution systems changes directly with pressure (Bonthuys et al. 2020); while a small amount of leakage occurs at low pressure, excessive leakage occurs at high pressure (Marchis & Milici 2019); reducing the leakage in WDSs can be achieved by controlling the pressure through a pressure management system (Jafari-Asl et al. 2020); pressure management can reduce the system input volume (SIV) through reduced water loss and a decrease in demand (Kravvari et al. 2018); and pressure regulation and replacing old water supply networks in a planned way prevents leakages (Leu & Bui 2016). CONCLUSIONS In this study, the monthly leakage rate in the water distribution system (WDS) of İzmit district (Kocaeli/Turkey) was predicted through artificial neural network (ANN) models. The model input variables were determined to be the ANP, TSIV/TNL, MAN and MDN, the goal being to achieve the highest prediction accuracy with the fewest inputs. The pressure effect was considered as an input for the first time, yielding a model performance improvement of up to 57.41% relative to the previous studies in the literature. In this study, the model performance improvement was achieved with data standardization by suitable methods and with an increase in the preferred neuron numbers. Also, higher prediction accuracies can be obtained through the model structure with one hidden layer designed in this study. The developed models clearly revealed the relationship between leakage and pressure. It is understood from the study that pressure is a significant factor for modelling and that pressure management should be taken into account by water utilities to reduce water losses by preventing leakages in water distribution systems (WDSs). According to the models, the other factor influencing leakages is the network age. An increase in the leakage rates has been observed in old networks under high pressure effects due to their reduced resistance to pressure. It is necessary to control the operating pressure to certain levels by taking the network age into consideration to reduce leakage rates. 
In conclusion, the leakage rates can be predicted through the suggested models by taking the network pressure and network age into consideration as a reference, and these models provide important information for water utilities. In the suggested models, pressure and network age appear to be the variables with the greatest effect on the leakage rate.
Head computed tomography findings in relation to red flag signs among patients presenting with non-traumatic headache in the emergency services Introduction: Non-traumatic headaches are a common presentation in emergency services. A non-contrast computed tomography (NCCT) scan of the head is done when there is suspicion of intracranial abnormalities. Such intracranial abnormalities are indicated by "red flag" signs. This study aimed to determine the prevalence of intracranial abnormalities in patients with non-traumatic headaches and its association with the red flag signs. Method: A total of 106 patients presenting with a non-traumatic headache to the emergency services of TUTH from Aug 2019 to Aug 2020 who underwent head CT were included in the study. The association of head CT positivity with the presence of red flag signs was studied by bivariate analysis using the chi-square test or Fisher exact test. Result: Among the 106 patients, 46 (43.4%) were male and the rest were female. The mean age of the patients was 43.69 ± 17.46 years. All the patients who had positive findings on head CT had at least one red flag sign. Out of the 16 red flag signs included in this study, 10 signs showed a significant association (p<0.05) with head CT positivity. These are sudden onset of headache, age of onset >50 years, significant change in pattern or severity of headache, "worst headache ever", vomiting, neck stiffness, seizures, altered sensorium, papilledema, and focal neurological deficits. Conclusion: Red flag signs of headache are helpful to determine whether a head CT is needed to look for significant intracranial abnormalities in a patient presenting with a non-traumatic headache in the emergency. Introduction Headache is localized or diffuse pain in various parts of the head, sometimes radiating to the face or the neck. 1 Headache is one of the most common complaints of patients presenting to the emergency services, accounting for 2-3% of all emergency visits. 2,3 Headaches affect people across all ethnic, geographic, and economic levels, with an estimated global prevalence of 50% in adults. 4 Headaches are classified into primary and secondary depending on the absence or presence of an underlying cause, respectively. 5 Recognizing headaches secondary to intracranial pathology is critical, not only because such headaches may be life-threatening but also because the treatment of the underlying problem usually cures the headache. The initial imaging in patients presenting with non-traumatic headache in the emergency setting is head CT. 6,7 Head CT may relieve the patient's anxiety about having an underlying pathology 8 but is a costly investigation and poses a radiation hazard. 9 A study showed that a third of projected cancers due to radiation from CT scans were from scans taken in adults between the ages of 35 and 54. 10 While there are guidelines for performing head CT for headaches in trauma patients, there are no clear guidelines for the same in non-trauma patients. "Red flag signs" help clinicians identify headaches secondary to significant intracranial abnormalities. 1,[11][12][13][14] This study was done to show the relation of red flag signs as clinical predictors of significant intracranial abnormalities on head CT in patients with non-traumatic headaches in the emergency services. 
Method This was an observational cross-sectional study of individuals of different age groups who presented to the emergency department of Tribhuvan University Teaching Hospital with a non-traumatic headache and underwent non-contrast head CT for diagnosis from Aug 2019 to Aug 2020. The decision to order head CT was taken by the treating doctor responsible for the patient in the emergency, and the researcher was not involved in this decision-making. Approval of ethical clearance was obtained from the Institutional Review Committee, Institute of Medicine. The individuals were informed about the study by the researcher and were included after providing written informed consent. Patients more than 16 y of age presenting to the emergency and undergoing head CT for the evaluation of non-traumatic headache were included. Patients having headaches after head trauma, patients already diagnosed with a known intracranial pathology before presentation, and patients unable to answer the structured questionnaire were excluded. Individuals meeting the inclusion criteria were interviewed and examined to determine the presence or absence of red flag signs, and their head CT findings were noted. This information was recorded in a pre-structured proforma. The proforma was pretested to check its validity and reliability. Non-probability convenience sampling was done. The data obtained were compiled and analyzed using standard statistical analysis. SPSS Statistics software version 21 was utilized for data analysis and presentation. Continuous variables are presented as mean or median depending on the presence or absence of normal distribution, respectively, and categorical variables are presented as absolute numbers and percentages. Bivariate analysis was done to test the association of individual red flag signs of headache with the presence of significant intracranial abnormalities using the chi-square test or Fisher exact test. A p-value less than 0.05 was considered statistically significant. Result A total of 106 participants were included in the study. Regarding the sociodemographic findings, the mean age of the participants was 43.69 ± 17.46 y. The minimum age was 17 y and the maximum age was 98 y. Among the 106 participants, 46 (43.4%) were males and 60 (56.6%) were females. The age distribution showed that 28 (26.42%) were in the age group of <30 y, 17 (16.04%) in the age group of 30-39 y, 21 (19.81%) in the age group of 40-49 y, 23 (21.70%) in the age group of 50-59 y and 17 (16.4%) in the age group of >60 y. Analysis of the association of gender with head CT showed that a higher percentage of males, i.e. 15 (32%) of 46 males, as compared to 16 (26%) of 60 females, had positive findings on head CT. However, this difference was not statistically significant (p-value = 0.5). At least one or more red flag signs of headache were present in 67 (63%) participants, Table 1. Thirty-one (29%) of the participants had significant intracranial abnormalities on head CT, Figure 1. Upon evaluation of the frequency of the different significant intracranial abnormalities, subarachnoid hemorrhage (SAH) was the commonest finding, present in 16 (15%) cases, Figure 2. Among the 16 red flag signs, 10 signs were found to have a positive association with positive head CT findings, Table 2. 
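The bivariate analysis described in the Method can be reproduced in a few lines; the sketch below uses SciPy on a 2x2 contingency table for one red flag sign versus head CT positivity. The counts are purely illustrative and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: red flag sign present / absent; columns: CT positive / negative.
table = np.array([[20, 15],
                  [11, 60]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square test: p = {p:.4f}")

# The Fisher exact test is preferred when expected cell counts are small.
if (expected < 5).any():
    odds_ratio, p_exact = fisher_exact(table)
    print(f"Fisher exact test: p = {p_exact:.4f}")
```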
Red flag signs like sudden onset of headache, altered sensorium, and focal neurological deficit were found to have a high positive predictive value for the diagnosis of secondary non-traumatic headache, Table 3. Discussion This study included 106 patients with non-traumatic headaches who underwent head CT. Most of the time, a detailed history and physical examination are all that are required to differentiate primary and secondary headaches, and they are the most important part of the assessment of a patient with headache. 15 When a secondary cause is suspected, a head NCCT is ordered, as urgent intervention is required in such cases. However, in the absence of significant findings in the history and clinical examination, head CT is usually unnecessary. Of the 106 patients in this study, 60 (57%) were females and 46 (43%) were males, i.e. a female preponderance was seen. This is similar to a study 16 done on patients with non-traumatic headaches presenting to the emergency in which 190 (77.8%) were female. The higher percentage of females presenting with non-traumatic headaches may be because females tend to be more sensitive to their symptoms and seek consultation more often than men do. However, head CT positivity was higher among males, i.e. 15 out of 46 (32%), than among females, i.e. 16 out of 60 (26%), in our study. This is in concordance with a study in which the percentage of positive neuroimaging outcomes was higher among males than among females. 12 However, there was no statistically significant association between the gender of the patient and positive head CT findings. In this study, the mean age of the patients was 43.69 ± 17.46 y, ranging from 17 y to 98 y. Age of onset of headache of more than 50 y had a significant association with head CT positivity (p<0.021). A similar result was seen in a study done in Minnesota, USA, in which age of onset of more than 55 y was found to be significantly associated with positive neuroimaging findings. 17 The similar findings from two different geographical locations signify that geography and lifestyle do not affect the occurrence of head CT positivity. In our study, 31 (29%) cases with non-traumatic headaches presenting to the emergency services had significant intracranial abnormalities on head CT. In a study done in Chitwan Medical College, Nepal, on patients with headache referred from the out-patient department and emergency to radiology for head CT, 26 (10.1%) patients with headache showed some form of brain parenchymal pathology on head CT. 18 Higher positivity on head CT is seen in our study, probably because our study only included patients presenting with headache to the emergency services. In our study, 67 (63%) patients had one or more red flag signs of headache and the remaining did not have any red flag signs. Among those who had red flag signs, 31 cases were found to have positive findings on head CT. Among those who had no red flag signs, none of the cases had positive findings on head CT. Though not all patients with red flag signs had a positive head CT, all cases that had a positive head CT had at least one red flag sign. The presence of at least one red flag sign was found to have a significant association with head CT positivity (p<0.01). This finding is in concordance with a study done in California 19 in which all of the patients with significant head CT findings had an abnormal physical or neurologic exam or unusual clinical symptoms. 
Similarly, in a study done in Cameroon, Central Africa, abnormal results in the neurological examination were found to be the best clinical predictors of structural intracranial pathology on head CT in adult patients experiencing a headache disorder. 20 It was concluded that routine computed tomography of the brain in headache patients with normal physical and neurologic exams and no unusual clinical symptoms has a low likelihood of discovering significant intracranial disease. Among the 16 red flag signs, 10 signs had a statistically significant association with positive findings on head CT on bivariate analysis. They were sudden onset of headache, worst headache ever, onset of headache after the age of 50 y, neck stiffness, vomiting, seizures, altered sensorium, presence of focal neurological deficits, papilledema, and worsening of headache with coughing, straining, sneezing, or bending. However, a multivariate analysis could not be done because of the small sample size of our study. In the study done in Malaysia 12, the presence of 3 red flag signs proved to be statistically significant, with a p-value of less than 0.05, on both univariate and multivariate analysis. These were paralysis, papilledema, and altered sensorium. However, the other red flags, when individually analyzed, were not found to be significant. This difference in results could be because only bivariate analysis was done in our study. Similarly, a retrospective chart review study conducted from 2013 to 2018 in Thailand in acute non-traumatic headache patients who visited the emergency department concluded that abrupt onset, awakening pain, duration of headache >1 week, fever, worst headache ever, alteration of consciousness, and localizing neurological deficit were the significant predictive factors for a serious intracranial cause of acute non-traumatic headache. 21 However, in our study we did not study the duration of headache, and pain awakening the patient from sleep was not included as a red flag sign. Red flag signs like 'known case of HIV/cancer with a headache', 'patient under thrombolytic or anticoagulant therapy', 'headache associated with rash', and 'headache associated with personality change' had a very low prevalence in our study population. So even though they were found to be statistically insignificant in our study, their presence in a patient with a non-traumatic headache needs to be taken seriously. Significant intracranial abnormalities which may not be evident on head CT may also have been missed. Conclusion Patients presenting with non-traumatic headaches in the emergency services may have one or more red flag signs of headache, which can be identified by a proper history and physical examination. Patients who do not have any of the red flag signs of headache usually do not require non-contrast head CT to rule out significant intracranial abnormalities.
Numerical evaluation of convex-roof entanglement measures with applications to spin rings We present two ready-to-use numerical algorithms to evaluate convex-roof extensions of arbitrary pure-state entanglement monotones. Their implementation leaves the user merely with the task of calculating derivatives of the respective pure-state measure. We provide numerical tests of the algorithms and demonstrate their good convergence properties. We further employ them in order to investigate the entanglement in particular few-spin systems at finite temperature. Namely, we consider ferromagnetic Heisenberg exchange-coupled spin-1/2 rings subject to an inhomogeneous in-plane field geometry obeying full rotational symmetry around the axis perpendicular to the ring through its center. We demonstrate that highly entangled states can be obtained in these systems at sufficiently low temperatures and by tuning the strength of a magnetic field configuration to an optimal value which is identified numerically. I. INTRODUCTION Entanglement, one of the most intriguing features of quantum mechanics [1,2], is undoubtedly an indispensable ingredient as a resource for any quantum computation or quantum communication scheme [3]. The ability to (sometimes drastically) outperform classical computations using multipartite quantum correlations has been demonstrated in various theoretical proposals which by now have become well-known standard examples [4,5,6,7]. Due to the rapid progress in the fields of quantum computation, communication, and cryptography, both on the theoretical and the experimental side, it has become a necessity to quantify and study the production, manipulation and evolution of entangled states theoretically. However, this has turned out to be a rather difficult task, as the dimension of the state space of a quantum system grows exponentially with the number of qudits and thus permits the existence of highly nontrivial quantum correlations between parties. While bipartite entanglement is rather well understood (see, e.g., [8]), the study of multipartite states (with three or more qudits) is an active field of research. Several different approaches towards the study of entanglement exist. Bell's original idea [9], that certain quantum states can exceed classically strict upper bounds on expressions of correlators between measurement outcomes of different parties sharing the same state, has been widely extended and improved to detect entanglement in a great variety of states. Entanglement between photons persisting over large distances has been demonstrated with the use of Bell-type inequalities (see, e.g., Ref. [10] and references therein). Another more recent approach is the concept of entanglement witnesses [11,12]. These are observables whose expectation value is non-negative for separable states and negative for some entangled states. Thirdly, the concept of entanglement measures focuses more on the quantification of entanglement: if state A has lower entanglement than state B, then A cannot be converted into B by means of local operations and classical communication. Remarkably, there exist interesting relations between entanglement measures and Bell inequalities [13] on the one hand, and entanglement witnesses [14,15] on the other hand. In this work, we focus on the direct evaluation of entanglement measures. 
Among the many features one can demand of such a measure, monotonicity is arguably the most important one: an entanglement measure should be non-increasing under local operations and classical communication (reflecting the fact that it is impossible to create entanglement in a separable state by these means). A measure exhibiting this property is called an entanglement monotone, with prominent examples being, e.g., the entanglement of formation [16], the tangle [17], the concurrence [18], or the measure by Meyer and Wallach [19]. While one measure captures certain features of some states especially well, other measures focus on different aspects of different states. Often, entanglement monotones are defined only for pure states and are given as analytical expressions of the state's components in a standard basis.

Unfortunately, quantifying mixed-state entanglement is more involved. This is somewhat intuitive, since the measure needs to be capable of distinguishing quantum from classical correlations. A manifestation of this difficulty is the fact that the problem of determining whether a given density matrix is separable or not is apparently very hard and has no known general solution for an arbitrary number of subsystems with arbitrary dimensions. The ability to study mixed-state entanglement is, however, highly desirable, since mixed states appear naturally due to various coupling mechanisms of the system under examination to its environment. There exists a standard way to construct a mixed-state entanglement monotone from a pure-state monotone, the so-called convex-roof construction [20], but the evaluation of functions obtained in this way requires the solution of a rather involved constrained optimization problem (see Sec. II).

In this paper, we present two algorithms targeted at solving this optimization problem numerically for any given convex-roof entanglement measure. In principle, these algorithms can also be applied to any optimization problem subjected to the same kind of constraints. The first algorithm is an extension of a procedure originally used to calculate the entanglement of formation [21]. It is a conjugate gradient method exploiting the geometric structure of the nonlinear search space emerging from the optimization constraint. The second algorithm is based on a real parametrization of the search space, which allows one to carry out the optimization in the more familiar Euclidean space using standard techniques.

In the second part of the paper, we use these algorithms in order to study the entanglement properties of a certain type of spin rings. These systems form a generalization to N qubits of our previous study, where we had only considered the case N = 3 [22]. In the presence of an isotropic and ferromagnetic Heisenberg interaction and local in-plane magnetic fields obeying a radial symmetry, it can be argued (see Sec. IV and Ref. [22]) that the ground state becomes a local unitary equivalent of an almost perfect N-partite Greenberger-Horne-Zeilinger (GHZ) state [23],

|GHZ±_N⟩ = (|00…0⟩ ± |11…1⟩)/√2.    (1)

Such a system could hence be used for the production of highly entangled multipartite states merely by cooling it down to low temperatures. One finds, however, that the energy splitting between the ground and first excited state vanishes in the same limit as the N-partite approximate GHZ states become perfect, namely for the magnetic field strength going to zero.
Therefore, in order to quantitatively identify the magnetic field strengths yielding maximal entanglement at finite temperature, one has to study the system in terms of a suitable mixed-state entanglement measure.

The outline of the paper is as follows: In Sec. II we review how the evaluation of a convex-roof entanglement measure is related to a constrained optimization problem. We then develop and describe the numerical algorithms capable of tackling this problem in Sec. III. We also present some benchmark tests, comparing our methods to another known algorithm. In Sec. IV, we describe the spin rings mentioned earlier and study their entanglement properties in terms of a convex-roof entanglement measure evaluated using our algorithms. We conclude our work in Sec. V.

II. CONVEX-ROOF ENTANGLEMENT MEASURES AS CONSTRAINED OPTIMIZATION PROBLEMS

Given a pure-state entanglement monotone m, the most reasonable properties one can demand of a generalization of m to mixed states are that this generalization is itself an entanglement monotone, and that it properly reduces to m for pure states. A standard procedure which achieves this is the so-called convex-roof construction [20,24]. Given a mixed state ρ acting on a Hilbert space H of finite dimension d, it is defined as

M(ρ) = inf_{ {p_i, |ψ_i⟩} ∈ D(ρ) } Σ_i p_i m(|ψ_i⟩),    (2)

where D(ρ) = { {p_i, |ψ_i⟩} : ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|, p_i ≥ 0, Σ_i p_i = 1 } is the set of all pure-state decompositions of ρ. Note that the pure states |ψ_i⟩ are understood to be normalized. The numerical value of M(ρ) is hence defined as an optimization problem over the set D(ρ). In order to apply numerical algorithms to this problem, D(ρ) must be accessible in a parametric way. This parametrization is well known and is often referred to as the Schrödinger-HJW theorem [25,26], which we briefly outline here for the sake of completeness.

Let St(k, r) denote the set of all k × r matrices U ∈ C^{k×r} with the property U†U = 1_{r×r}, i.e., matrices with orthonormal column vectors (hence we have k ≥ r). The first part of the Schrödinger-HJW theorem states that every U ∈ St(k, r) yields a pure-state decomposition {p_i, |ψ_i⟩}_{i=1}^k ∈ D(ρ) of the density matrix ρ by the following construction. Let λ_i, |χ_i⟩, i = 1, …, r = rank ρ, denote the eigenvalues and corresponding normalized eigenvectors of ρ, i.e., ρ = Σ_{i=1}^r λ_i |χ_i⟩⟨χ_i|. Note that we have λ_i > 0 since ρ is a density matrix and as such a positive semi-definite operator. Given a matrix U ∈ St(k, r), define the auxiliary states

|ψ̃_i⟩ = Σ_{j=1}^r U_ij √λ_j |χ_j⟩,  with p_i = ⟨ψ̃_i|ψ̃_i⟩ and |ψ_i⟩ = |ψ̃_i⟩/√p_i.

It is then readily checked that {p_i, |ψ_i⟩}_{i=1}^k is indeed a valid decomposition of ρ into a convex sum of k projectors. The second part of the theorem states that for any given pure-state decomposition {p_i, |ψ_i⟩}_{i=1}^k of ρ, there exists a U ∈ St(k, r) realizing the decomposition by the above construction. This guarantees that by searching over the set St(k, r) and obtaining the decompositions according to the Schrödinger-HJW theorem, we do not 'miss out' on any part of the subset of D(ρ) with a fixed number of states k. The parametrization is thus complete, i.e., searching the infimum over St(k, r) is equivalent to searching over all decompositions with fixed so-called cardinality k. This allows us to reformulate the optimization problem Eq. (2) as

M(ρ) = inf_{U ∈ St(k,r)} h(U),    (8)

where h(U) is the sum on the right-hand side of Eq. (2) obtained via the matrix U from ρ, i.e.,

h(U) = Σ_{i=1}^k p_i(U) m(|ψ_i(U)⟩).    (9)

Note that we have dropped the ρ-dependence in the above expressions, since ρ is fixed within a particular calculation and only the dependence of h on U is of relevance in the following. It is clear that in a numerical calculation only a finite number of different values for k can be investigated.
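To make the Schrödinger-HJW construction concrete, the following is a minimal NumPy sketch of the decomposition generated by a point of St(k, r), together with the objective h(U) of Eq. (9). The function names and tolerance handling are our own illustrative choices, not taken from the authors' code; the conjugate in the auxiliary states is one of two equivalent conventions (immaterial, since U ranges over all of St(k, r)).

```python
import numpy as np

def random_stiefel(k, r, rng=np.random.default_rng()):
    """Random element of St(k, r) via reduced QR of a complex Gaussian."""
    A = rng.standard_normal((k, r)) + 1j * rng.standard_normal((k, r))
    Q, _ = np.linalg.qr(A)          # Q is k x r with Q^dag Q = 1_{r x r}
    return Q

def decomposition_from_stiefel(rho, U, tol=1e-12):
    """Pure-state decomposition {p_i, |psi_i>} of rho generated by
    U in St(k, r), following the Schroedinger-HJW construction."""
    lam, chi = np.linalg.eigh(rho)
    keep = lam > tol                # the r = rank(rho) positive eigenvalues
    lam, chi = lam[keep], chi[:, keep]
    # auxiliary states |psi~_i> = sum_j U*_ij sqrt(lam_j) |chi_j> (columns)
    psi_tilde = (chi * np.sqrt(lam)) @ U.conj().T
    p = np.sum(np.abs(psi_tilde) ** 2, axis=0)      # p_i = <psi~_i|psi~_i>
    psi = psi_tilde / np.sqrt(np.where(p > tol, p, 1.0))
    return p, psi

def h(U, rho, m):
    """Objective of Eq. (9): h(U) = sum_i p_i m(|psi_i>)."""
    p, psi = decomposition_from_stiefel(rho, U)
    return sum(pi * m(psi[:, i]) for i, pi in enumerate(p) if pi > 1e-12)

# sanity check: the decomposition reassembles rho for any U in St(k, r)
rho = np.diag([0.5, 0.3, 0.2, 0.0])
p, psi = decomposition_from_stiefel(rho, random_stiefel(7, 3))
assert np.allclose(sum(pi * np.outer(v, v.conj())
                       for pi, v in zip(p, psi.T)), rho)
```

The assert encodes the first part of the theorem: U†U = 1 makes Σ_i |ψ̃_i⟩⟨ψ̃_i| collapse back to the eigendecomposition of ρ.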
However, it is also intuitive to expect that for some large enough value of k, increasing the latter even further has only marginal effects. In fact, we have observed numerically that already k = rank ρ + 4 yields very accurate results in all tests we have performed (also in the ones presented in Sec. III C), and we have used this choice throughout all numerical calculations within this work. Note that for a fixed value of k, all decompositions with cardinality smaller than k are considered as well, since the probabilities p_i in the elements of D(ρ) are allowed to go to zero (with the convention that the corresponding states |ψ_i⟩ are then discarded).

Since the algorithms presented in the next section will both be gradient-based, the derivatives of Eq. (9) with respect to the real and imaginary parts of U will be required at some point. We state them in Eqs. (10) and (11) for the convenience of the reader; there, superscripts such as in ψ^(i) denote the ith component of the state |ψ⟩ in an arbitrary but fixed basis.

As a last remark, we would like to point out that the constraint set St(k, r) is, in fact, a closed embedded submanifold of C^{k×r}, called the complex Stiefel manifold [27]. The geometric structure emerging thereof is exploited in one of the two algorithms following shortly. The dimension of the Stiefel manifold is dim St(k, r) = 2kr − r² [27]. Since we have k ≥ r, we can set k = r + n, n = 0, 1, …. The number of free parameters N in the optimization is thus N = r² + 2nr. Hence, N grows linearly with n, but quadratically with r. Numerical evaluation in larger systems will thus be restricted to low-rank density matrices. The flexibility of choosing n is, however, less restricted. As mentioned above, n = 4 already yields satisfying results.

III. NUMERICAL ALGORITHMS

The study of optimization problems on matrix manifolds is a rather new and still active field of research (see [27,28] and references therein). Only recently, two ready-to-use algorithms for minimization over the complex Stiefel manifold have been presented [29]. To our knowledge, these are the only general-purpose algorithms applicable to generic target functions over St(k, r) found in the literature. One is a steepest-descent-type method, the other one is of Newton type. We will compare the performance of the modified steepest descent algorithm, as it is referred to in the original work, with the methods presented in this section. We have found that our algorithms generally show better convergence properties in the cases we have examined.

We will, however, not make use of the modified Newton algorithm for the following reasons. The second derivatives (as required by any Newton-type algorithm) of the function h(U) [Eq. (9)] are in general quite involved and their number grows quadratically with the size of U. Hence, they are very expensive to evaluate, even if one resorts to numerical finite differences. Moreover, the good convergence properties of Newton-type methods may only be expected in the very proximity of a local minimum. One therefore typically first employs gradient-based techniques to approach a minimum sufficiently closely. However, what 'sufficiently closely' means in a particular case is often not known beforehand. We will later make use of a quasi-Newton algorithm, which approaches local minima satisfyingly and automatically shows strong convergence similar to Newton methods when close enough to a minimum.

A. Generalized Conjugate-Gradient Method
In Ref. [21], a conjugate-gradient algorithm on the unitary group U(k) = St(k, k) was presented. The goal there was to calculate the entanglement of formation also for systems with dimensions different from 2 × 2 [30]. Here, we extend this result by noting that the method is applicable to any optimization problem on St(k, k), particularly to the evaluation of entanglement measures other than the entanglement of formation, and we calculate the required general expression of the gradient of h(U). Optimizing over St(k, k) instead of St(k, r) comes at the cost of over-parameterizing the search space. When using this algorithm to calculate convex-roof entanglement measures, we simply took into account only the first r columns of the matrix obtained at every iteration. This is certainly an aspect one could improve upon in future research.

The algorithm presented here is a conjugate gradient-type method, meaning that instead of simply going downhill, i.e., in the direction of steepest descent, previous search directions are taken into account at the current iteration step. Once the search direction X_i at iteration step i, a skew-Hermitian k × k matrix, is known, a line search along the geodesic U_i exp(tX_i) is performed, where U_i is the current iteration point. In particular, one iteration step of the algorithm may be described as follows [21]:

1. Perform a line minimization, i.e., set t_i = argmin_t h(U_i exp(tX_i)), and set U_{i+1} = U_i exp(t_i X_i).
2. Compute the new gradient G_{i+1} at U_{i+1} and set γ_i = ⟨G_{i+1} − T(G_i), G_{i+1}⟩ / ⟨G_i, G_i⟩, where T(G_i) is the gradient G_i parallel-transported to the new point U_{i+1}.
3. Parallel-transport the old search direction X_i to U_{i+1} in the same way, yielding T(X_i).
4. Set the new search direction to X_{i+1} = −G_{i+1} + γ_i T(X_i).
5. i ← i + 1.

The starting point U_0 can be chosen arbitrarily, and the initial search direction is set to X_0 = −G_0. In order to find a good approximation to the global minimum, one should restart the procedure several times using random initial conditions. For the line search in step 1, we utilized the derivative-free algorithm linmin described in Ref. [31].

In the following, we calculate the general expression for the gradient G of the function h, evaluated at the point U (we drop iteration indices for simplicity). The gradient G is defined in terms of the directional derivative of h, namely as

⟨G, X⟩ = (d/dε) h(U(ε)) |_{ε=0},    (19)

where U(ε) = V exp(εX) is the geodesic in direction X (a skew-Hermitian matrix) passing through V. Here and in step 2 of the algorithm, ⟨A, B⟩ = Re Tr(A†B) denotes the real inner product on the space of k × k matrices. We will eventually read off the gradient G from its definition in Eq. (19).

Treating h(U) as a function of the real and imaginary matrix elements of U, Re U_ik and Im U_ik, respectively, the chain rule expresses the directional derivative, Eq. (20), through the partial derivatives of h with respect to Re U_ik and Im U_ik, which have already been stated in Eqs. (10, 11). Inserting the derivatives of U(ε)_ik into Eq. (20) and sorting all terms with respect to Re X and Im X, we obtain Eqs. (21)-(23). Taking into account the symmetry conditions on X by using the relations Re X = (X − X^T)/2 and Im X = −i(X + X^T)/2, we further obtain Eq. (24). By comparing this to the right-hand side of Eq. (19), i.e., Eq. (25), we finally obtain the desired expression, Eq. (26), for the matrix elements of the gradient G. One readily sees that G is skew-Hermitian, as required. By this, we have completed the description of the conjugate gradient algorithm capable of evaluating any convex-roof entanglement measure presented in the form of Eq. (8); a minimal numerical sketch of a single iteration is given below.

B. Parametrization with Euler-Hurwitz angles

Here we present an alternative approach to optimization problems over the Stiefel manifold St(k, r).
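Before turning to the angle parametrization, here is a stripped-down sketch of a single geodesic step of the method of Sec. III A. To stay self-contained it assembles the gradient from finite differences over a basis of the Lie algebra u(k) instead of the analytic expression Eq. (26), and it performs plain steepest descent rather than the full conjugate-direction bookkeeping; `h` is any objective of the form of Eq. (9). All of this is an illustrative simplification, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

def u_k_basis(k):
    """Orthonormal basis (w.r.t. <X, Y> = Re Tr(X^dag Y)) of the Lie
    algebra u(k) of skew-Hermitian k x k matrices."""
    basis = []
    for a in range(k):
        D = np.zeros((k, k), complex); D[a, a] = 1j
        basis.append(D)
        for b in range(a + 1, k):
            X = np.zeros((k, k), complex); X[a, b], X[b, a] = 1.0, -1.0
            basis.append(X / np.sqrt(2))
            Y = np.zeros((k, k), complex); Y[a, b] = Y[b, a] = 1j
            basis.append(Y / np.sqrt(2))
    return basis

def geodesic_descent_step(h, U, basis, eps=1e-6):
    """One steepest-descent step along a geodesic of U(k): build the
    gradient from directional derivatives, then line-search along
    U exp(t X) as in step 1 of the algorithm."""
    coeff = [(h(U @ expm(eps * X)) - h(U @ expm(-eps * X))) / (2 * eps)
             for X in basis]
    G = sum(c * X for c, X in zip(coeff, basis))   # skew-Hermitian gradient
    X = -G                                         # steepest-descent direction
    t = minimize_scalar(lambda s: h(U @ expm(s * X)),
                        bounds=(0.0, 1.0), method='bounded').x
    return U @ expm(t * X)
```

Because every update multiplies U by a unitary exponential, the iterate never leaves U(k), which is the whole point of searching along geodesics rather than in the ambient space.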
We will obtain a parametrization of St(k, r) in terms of a set of real numbers which we will call Euler-Hurwitz angles, thereby unconstraining the optimization problem and mapping it to Euclidean space, where optimization problems have been investigated for much longer. We will therefore be able to employ a standard algorithm to tackle the transformed problem Eq. (8) [32].

The idea of parameterizing St(k, r) is somewhat motivated by a theorem known in classical mechanics, where it is stated that any rotation in three-dimensional Euclidean space can be written as a sequence of three elementary rotations described by three angles, the Euler angles. In other words, any orthogonal 3 × 3 matrix is parameterized by three real numbers. It was already Euler himself who generalized this idea to arbitrary k × k orthogonal matrices [33], and Hurwitz [34] extended the parametrization to unitary matrices. We remark that ideas in a similar fashion to the ones promoted here have been used to calculate an entanglement measure for Werner states [35] but were not discussed in greater detail.

We now derive the parametrization of St(k, r). Let A ∈ St(k, r). The basic idea is to generate zeroes in A and bring it to upper triangular form by applying so-called (complex) Givens rotations G_s(ϑ, φ) [36], defined in Eq. (27). Multiplying A from the left with G_s(ϑ, φ), i.e., Ã = G_s(ϑ, φ)A, mixes only rows s and s + 1 of A according to Eqs. (28). Let us write the matrix elements A_{s,j} and A_{s+1,j}, with j arbitrary but fixed, in polar form, i.e., A_{s,j} = x e^{iφ_x} and A_{s+1,j} = y e^{iφ_y}, with x, y ≥ 0. We stick to the convention that the phases φ_x and φ_y be in the interval ]−π, π] in order to make this representation unique. It is now easy to see that by choosing ϑ and φ according to Eqs. (29) and (30), i.e., such that tan ϑ = y/x and the relative phase between the two entries is compensated, we obtain Ã_{s+1,j} = 0, while all the other entries in the sth and (s + 1)th rows change according to Eqs. (28). In the case x = 0, we set ϑ = π/2 and φ = 0. In the case y = 0, we have ϑ = 0, and we choose to set φ = 0 as well. The angles ϑ and φ are thus restricted to the intervals ϑ ∈ [0, π/2] and φ ∈ ]−π, π[.

By successively applying Givens rotations with appropriately chosen angles according to Eqs. (29) and (30), we may now generate zeroes in A column by column, from left to right, bottom to top. In greater detail, we first erase the whole first column, except for the top entry, which will generally remain non-zero. Continuing at the bottom of the second column, we may generate zeros up to (and including) the third entry from the top of the column. If we tried to make the second entry zero, we would in general generate a non-zero entry in the second row of the first column according to the transformation Eqs. (28). It is convenient to label the angles calculated during this process by two indices, and to use the abbreviation G_s(i, j) = G_s(ϑ_ij, φ_ij). Eventually, we obtain a matrix R̃ given by Eq. (32); the inner of the two products appearing there generates zeros in column r − i from the bottom upwards. The upper block of R̃ consisting of the first r rows is of upper triangular form, while the lower block is zero. As a product of unitary Givens rotations, Q̃⁻¹ is itself unitary and in particular invertible. Hence, Q̃ always exists and is unitary. We may therefore write A = QR, where Q ∈ St(k, r) consists of the first r columns of Q̃ and R is the upper r × r block of R̃. Since we assumed that A ∈ St(k, r), we have 1_{r×r} = A†A = R†Q†QR = R†R, and hence R is unitary. It is straightforward to see that a unitary upper triangular matrix can only be of the form R = diag(e^{iχ_1}, …, e^{iχ_r}), i.e., a diagonal matrix with only phases on the diagonal. Again, we may choose χ_i ∈ ]−π, π].
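The elimination step just described can be sketched numerically. The 2 × 2 rotation block below is one plausible convention consistent with the text (the paper's exact G_s(ϑ, φ) of Eq. (27) is not reproduced here), chosen so that the lower of two stacked entries is zeroed; the branch choices at x = 0 and y = 0 follow the conventions stated above.

```python
import numpy as np

def givens_angles(top, bottom):
    """Angles (theta, phi) that zero `bottom` when the block in
    apply_givens acts on the column entries (top, bottom)."""
    x, phix = abs(top), np.angle(top)
    y, phiy = abs(bottom), np.angle(bottom)
    if x == 0:
        return np.pi / 2, 0.0
    if y == 0:
        return 0.0, 0.0
    theta = np.arctan2(y, x)                       # tan(theta) = y/x
    phi = np.angle(np.exp(1j * (phix - phiy)))     # relative phase, wrapped
    return theta, phi

def apply_givens(A, s, theta, phi):
    """Left-multiply rows s, s+1 of A by the rotation block; for the
    column used in givens_angles, the (s+1)-entry becomes zero while
    the other entries of the two rows mix as in Eqs. (28)."""
    c, sn = np.cos(theta), np.sin(theta)
    G = np.array([[c, sn * np.exp(1j * phi)],
                  [-sn * np.exp(-1j * phi), c]])
    B = A.copy()
    B[s:s + 2, :] = G @ B[s:s + 2, :]
    return B
```

Sweeping such rotations over the columns as described, and reading off the r residual diagonal phases χ_i at the end, produces the full angle tuple (ϑ, φ, χ).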
We have thus achieved a unique parametrization of an arbitrary matrix A ∈ St(k, r) by a tuple of Euler-Hurwitz angles (ϑ, φ, χ) ∈ S, where S is the corresponding product of angle intervals, Eq. (36). As required, we find that the number of free parameters in this representation is equal to the dimension of the Stiefel manifold, i.e., dim St(k, r) = 2kr − r². It is clear that the procedure described above is fully invertible. Hence, we have obtained a one-to-one mapping F : S → St(k, r). In detail, this mapping, for a vector (ϑ, φ, χ) ∈ S, is carried out by filling an otherwise empty k × r matrix B with the entries B_ii = e^{iχ_i}, i = 1, …, r. Then, we apply inverse Givens rotations (specified by the Euler-Hurwitz angles ϑ and φ) from the left to B, in inverse order with respect to Eq. (32).

In conclusion, we have transformed the optimization problem Eq. (8) into the unconstrained problem over the angles, Eq. (37). Due to the periodic dependence of F(s) on the angles s, it is practical to expand the search space from S to the whole Euclidean space, making Eq. (37) a completely unconstrained optimization problem (at the cost of over-parameterizing the search space [37]). This problem can then be solved using standard numerical techniques. In all our calculations, we have used a quasi-Newton algorithm [32] together with the line search linmin mentioned earlier. This method requires first derivatives of the target function with respect to the angles. The derivatives with respect to F have already been stated in Eqs. (10, 11), and the derivatives of F with respect to the angles are obtained straightforwardly, since each angle appears only once in the product representation presented above. In order to find a good approximation to the global minimum, one should restart with random initial conditions several times and take the overall minimum.

C. Test Cases

Here, we briefly present some performance results of the two algorithms presented above. We have applied them to the evaluation of two different convex-roof entanglement measures for which the numerical data can be verified by analytically known results. Although our algorithms show comparatively good performance in these cases, we would like to stress that the efficiency of a certain method depends strongly on the type of problem present, and may even be related to the particular instance of the problem (see the GHZ/W example below). We have, for instance, also studied certain matrix approximation problems, in some of which the parameterized quasi-Newton method converged very poorly, whereas the modified steepest descent and the generalized conjugate gradient method were equally strong and very efficient. One thus cannot generically claim one algorithm to be better than the other. It is just beneficial to have several different techniques at hand, out of which one can choose the best-performing one when applied to a particular given problem.

Entanglement of formation of random 2 × 2 states

The entanglement of formation [16] is a popular entanglement measure for bipartite mixed states. It is defined as the convex roof of the entropy of entanglement [38], which is, for a state |ψ⟩, the von Neumann entropy S(ρ) = −Tr ρ log₂ ρ of the reduced density matrix ρ = Tr_B |ψ⟩⟨ψ|, Tr_B denoting the partial trace over the second subsystem. Figure 1 shows the convergence behavior of the algorithms applied to ten random full-rank two-qubit density matrices. Displayed is the error at each step of the iteration between the respective iteration value and the true result. The latter is known analytically from Ref. [30].
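For reference, the analytic two-qubit result of Ref. [30] that serves as ground truth in this benchmark is Wootters' concurrence formula; a compact sketch (the function names are ours):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix [30]."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    # square roots of the eigenvalues of R, in decreasing order
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def entanglement_of_formation(rho):
    """Analytic two-qubit EoF: the binary entropy of
    x = (1 + sqrt(1 - C^2)) / 2."""
    C = concurrence(rho)
    if C == 0.0:
        return 0.0
    x = (1.0 + np.sqrt(1.0 - C ** 2)) / 2.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))
```

Comparing the convex-roof optimizers against this closed form, state by state, is exactly what the error curves in Fig. 1 quantify.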
Compared to the algorithms described here, the modified steepest descent algorithm due to Ref. [29] (top panel) performs rather poorly. We are aware of the fact that we are comparing here a steepest descent algorithm with two superlinear algorithms. However, apart from presenting convergence properties, we would like to point out that the modified steepest descent algorithm often converges to imprecise solutions, i.e., it gets stuck in undesirable local minima. Rather than on the starting point, this phenomenon seems to depend more on the actual density matrix itself. The conjugate gradient algorithm due to Ref. [21] (middle panel) also shows some dependence on the form of the density matrix, but always reaches satisfactory accuracy. The results for the parameterized quasi-Newton method (bottom panel) do not, at first glance, show the typical fast drop to the solution when close to a good local minimum. This is due to the effect that changing the starting point seems to have more influence on the number of required iterations in the case of the quasi-Newton method (see insets in Fig. 1). When considering single (non-averaged) runs of the algorithm, the fast convergence to the minimum becomes visible. In conclusion, the conjugate gradient and the parameterized quasi-Newton methods perform best in this case, the latter even slightly better than the former.

FIG. 1. Convergence plots of the algorithms used to evaluate the entanglement of formation on ten random full-rank two-qubit states (each plot was done using the same ten states), showing the difference between the numerical data and the analytical result as a function of the iteration number. Top: the modified steepest descent algorithm from Ref. [29]; middle: the generalized conjugate gradient method from Sec. III A; bottom: quasi-Newton on the parameterized search space, Sec. III B. Each curve in the main plot is averaged over ten randomly chosen initial points. The typical behavior of the algorithms for a single fixed density matrix, but with varying initial points of the iteration, is displayed in the insets.

Tangle of GHZ/W mixtures

The second test case we present here is concerned with the evaluation of the tangle of the rank-2 mixed states

ρ(η) = η |GHZ⁺₃⟩⟨GHZ⁺₃| + (1 − η) |W⟩⟨W|,

where |GHZ⁺₃⟩ has been defined in Eq. (1), |W⟩ = (|001⟩ + |010⟩ + |100⟩)/√3 is the three-qubit W state [39], and 0 ≤ η ≤ 1. The tangle τ_p [17] is an entanglement measure for pure states of three qubits and is known to be an entanglement monotone [39]. It can hence be generalized to mixed states by the convex-roof construction (2). We will denote the mixed-state tangle by τ, in contrast to the pure-state version τ_p. The definition of τ_p is given in Eqs. (40)-(42) as a polynomial in the components ψ_1, ψ_2, …, ψ_8 of the state |ψ⟩ represented in an arbitrary product basis. In this form, the derivatives of τ_p with respect to the real and imaginary parts of the components of |ψ⟩, as required by the gradient Eqs. (10, 11), can be read off most easily. The tangle takes values between 0 and 1 and is maximal for GHZ states. The tangle of the states ρ(η) has been studied in Ref. [40], where analytical expressions as a function of η were presented. In particular, it was found that the tangle vanishes for all 0 ≤ η ≤ η₀, where η₀ = 4·2^{1/3} / (3 + 4·2^{1/3}) ≈ 0.6269, and then continuously increases to unity at η = 1. In Figure 2 we plot the error between the numerically obtained and analytically calculated values of τ(ρ(η)) as a function of the iteration number for four particular values of η (see caption of the figure).
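The pure-state tangle entering this benchmark can be evaluated from the standard closed form of Ref. [17], Cayley's hyperdeterminant; a sketch, where the component ordering |000⟩, …, |111⟩ is our convention:

```python
import numpy as np

def pure_tangle(psi):
    """Three-tangle tau_p of a three-qubit pure state via Cayley's
    hyperdeterminant (the standard closed form of Ref. [17]); psi is
    an 8-vector in the product basis |000>, |001>, ..., |111>."""
    a = {format(i, '03b'): psi[i] for i in range(8)}
    d1 = (a['000']**2 * a['111']**2 + a['001']**2 * a['110']**2 +
          a['010']**2 * a['101']**2 + a['100']**2 * a['011']**2)
    d2 = (a['000']*a['111']*a['011']*a['100'] +
          a['000']*a['111']*a['101']*a['010'] +
          a['000']*a['111']*a['110']*a['001'] +
          a['011']*a['100']*a['101']*a['010'] +
          a['011']*a['100']*a['110']*a['001'] +
          a['101']*a['010']*a['110']*a['001'])
    d3 = (a['000']*a['110']*a['101']*a['011'] +
          a['111']*a['001']*a['010']*a['100'])
    return 4.0 * abs(d1 - 2.0 * d2 + 4.0 * d3)

# pure_tangle gives 1 for the GHZ state and 0 for the W state,
# consistent with the limits eta = 1 and eta = 0 of rho(eta)
```

Being a polynomial in the state components, its derivatives with respect to Re ψ_i and Im ψ_i (needed in Eqs. (10, 11)) follow directly.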
Only the results of the generalized conjugate gradient (top panel) and the parameterized quasi-Newton (bottom panel) method are shown. The modified steepest descent algorithm from Ref. [29] did not succeed in converging to a reasonable local minimum for the lowest three values of η considered. In these cases, we empirically find the success rate, which we define as the relative number of final errors smaller than 10⁻⁶, to be 0.1%. For the largest value of η examined, the algorithm showed typical linear convergence behavior and arrived at a precision around 10⁻¹²-10⁻⁶ after 1000 iterations with a rather high success rate of about 60%. Similarly, the generalized conjugate gradient algorithm failed to obtain reasonable results for the value of η slightly below the threshold value η₀ in most attempts, and we find a success rate of 0.2%. The success probabilities for the other three values of η are between 12% and 95%, whereas they are between 25% and 80% for the parameterized quasi-Newton algorithm. One can see, with the help of a more detailed look into the behavior of single runs (see insets), that the averaged convergence plots are slightly flattened out due to some rather rare occurrences of slow convergence. Still, one can observe that the parameterized quasi-Newton method converges faster to good local minima.

D. Local unitary equivalence

We would like to remark here that the parameterized quasi-Newton method is also capable of determining whether two arbitrary mixed states are equivalent up to local unitary transformations. While this problem has an operational solution in some special cases (see, e.g., Ref. [41] and references therein), there is no generally applicable operational criterion known capable of making this decision. Using the parametrization developed in Sec. III B, one can express each local unitary transformation U_i in the matrix U = U_1 ⊗ U_2 ⊗ … ⊗ U_n by its Euler-Hurwitz angles and optimize over the whole set of all angles simultaneously. Furthermore, one can study in this way how 'close' two mixed states are with respect to local unitary equivalence. Note that such analyses are not possible with the modified steepest descent or the generalized conjugate gradient methods, since, as there is no parametrization, one can optimize over only one unitary matrix at a time.

IV. PHYSICAL APPLICATION

In this section, we use the algorithms developed and described above to evaluate a multipartite mixed-state entanglement measure of a concrete physical system.

A. Exchange-coupled spin rings with inhomogeneous magnetic field geometry

In the following, we consider the Hamiltonian of Eq. (44), where S_i = (S^x_i, S^y_i, S^z_i), S^k_i = σ^k/2 with σ^k being the standard Pauli matrices acting on the ith spin, S_{N+1} ≡ S_1, and the angles α_k = 2π(k−1)/N, k = 1, …, N. Equation (44) describes a closed ring of N ≥ 2 equidistant exchange-coupled spin qubits with local in-plane magnetic fields b_i ≡ (b cos α_i, b sin α_i, 0)^T, which are chosen such that the system is invariant under rotations by multiples of 2π/N about the center of the ring. The exchange coupling J is throughout assumed to be ferromagnetic (i.e., J > 0). The fields in Eq. (44) are chosen to point radially outwards, but the following discussion and results also hold for any other local in-plane field configuration possessing the same rotational symmetry, since all these systems are local unitary equivalents. The system is depicted schematically in Fig. 3(a) for three spins.
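A small NumPy sketch of this Hamiltonian, under the assumption that Eq. (44) has the common form H = −J Σ_i S_i·S_{i+1} + Σ_i b_i·S_i (the overall signs of the exchange and Zeeman terms are our reading; the radial field geometry and periodic boundary are from the text):

```python
import numpy as np
from functools import reduce

# spin-1/2 operators S^k = sigma^k / 2
sx = np.array([[0, 1], [1, 0]], complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], complex) / 2

def op_on(site, op, N):
    """Embed a single-spin operator at position `site` into N spins."""
    mats = [np.eye(2)] * N
    mats[site] = op
    return reduce(np.kron, mats)

def ring_hamiltonian(N, J, b):
    """H = -J sum_i S_i.S_{i+1} + b sum_i (cos a_i S^x_i + sin a_i S^y_i),
    a_i = 2 pi (i-1)/N, with S_{N+1} = S_1 (assumed reading of Eq. (44))."""
    H = np.zeros((2**N, 2**N), complex)
    for i in range(N):
        nxt = (i + 1) % N                  # periodic boundary: ring geometry
        for s in (sx, sy, sz):
            H -= J * op_on(i, s, N) @ op_on(nxt, s, N)
        a = 2 * np.pi * i / N              # radial in-plane field direction
        H += b * (np.cos(a) * op_on(i, sx, N) + np.sin(a) * op_on(i, sy, N))
    return H
```

The 2π/N rotational symmetry of the field pattern is visible directly in the loop: each site sees the same field strength b, rotated by one step around the ring.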
In fact, we are considering here a generalization of one of the N = 3 cases studied in Ref. [22]. There, the particular field configuration resulted from semiclassical considerations with the goal of obtaining a state which is close to a GHZ state [see Eq. (1)] as the ground state of the system. In that case, entanglement can be created by merely cooling the system to low enough temperatures. In principle, the argumentation for the occurrence of a GHZ ground state presented in Ref. [22] can be extended to a number of qubits N > 3. However, it can be expected that for N → ∞, the lowest-lying multiplet becomes a continuous spectrum. Hence, the question arises up to which numbers of spins N this setup still allows generating GHZ-type entanglement. Before further investigating this question, we briefly restate the arguments of Ref. [22] for the convenience of the reader.

We start from the fact that in the ground state of the classical analog of the Hamiltonian (44), all spins are aligned for b = 0. However, no direction of alignment is favored, reflecting the full rotational symmetry of the system in spin space. Small local magnetic fields (b ≪ J), applied in the way described above, break this symmetry and one is left with the two degenerate ground states ↑↑…↑ and ↓↓…↓, where the representation ('quantization') axis is the usual z-direction. In fact, each spin is slightly tilted against its local magnetic field, but there is no globally favored direction of orientation, such as with, e.g., a global spatially uniform magnetic field. Note that this effect of tilting vanishes as b → 0. Due to the Zeeman term in Eq. (44), there is an energy barrier along any path connecting the two degenerate minima. In the quantum case, tunneling through this barrier lifts the degeneracy between the ground states and one obtains a tunnel doublet. Thus, in the limit b → 0⁺, the two lowest-lying states are the generalized GHZ states given in Eq. (1).

As an illustration, we plot the energy surface of the classical three-spin system corresponding to Eq. (44) in Fig. 3(b). We have previously argued [see Ref. [22], especially the discussion leading to Eq. (2) therein] that this energy can be expressed in terms of two 'mean' spherical angles φ̄ and θ̄ [cf. Fig. 3(a)], since all spins will basically align in the present limit b ≪ J, up to small fluctuations which sum to zero and are chosen to minimize the total energy. One can nicely see how the out-of-plane configurations at θ̄ = 0 and θ̄ = π are energetically favored. For any value of φ̄, a path connecting the two minima has to overcome an energy barrier which scales as O(b²). In the figure, this barrier is displayed by the superimposed white line for the specific value φ̄ = π/2.

Independently of N, we are generally confronted with the following problem if we want the systems considered here to be in a highly entangled state at nonzero temperature. On the one hand, the energy splitting between the ground state and the first excited state vanishes as b goes to zero. On the other hand, a perfect GHZ state is obtained exactly in this limit. For increasing magnetic field, the states continuously deviate from the maximally entangled GHZ state, as can be imagined with the help of the classical picture, where the spins start to tilt. One therefore has to choose the strength of b as a tradeoff between having a highly entangled ground state and separating this state in energy from the next higher state.
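This tradeoff is easy to probe numerically: using the `ring_hamiltonian` sketch above, the tunnel splitting can be scanned as a function of b (illustrative only; exact diagonalization limits this to small N):

```python
def splitting_vs_b(N, J, b_values):
    """Gap between ground and first excited state as a function of b,
    illustrating the tunnel splitting that closes as b -> 0."""
    gaps = []
    for b in b_values:
        w = np.linalg.eigvalsh(ring_hamiltonian(N, J, b))
        gaps.append(w[1] - w[0])
    return gaps

# example: gaps = splitting_vs_b(4, 1.0, np.linspace(0.01, 0.5, 20))
```

Plotting such a scan against the ground-state entanglement (see the sketches below) makes the competition between a large gap and a GHZ-like ground state explicit.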
In order to find this optimal magnetic field strength at a given temperature T ≠ 0, we evaluate a suitable mixed-state entanglement measure on the system's canonical density matrix ρ = exp(−βH)/Tr exp(−βH), where β = 1/k_B T and k_B is Boltzmann's constant. When we studied the case N = 3 in Ref. [22], we used the tangle [see Eq. (40)] as our pure-state measure of choice, since it is an entanglement measure for three qubits. The generalization to mixed states was done via the convex-roof construction Eq. (2). Here, however, we need a pure-state entanglement measure which is defined for any N ≥ 2.

FIG. 3. (a) Local radial in-plane magnetic fields b_i (shown as green arrows in the xy-plane) point radially outwards. As discussed in the text, any other in-plane field geometry obeying the same radial symmetry (such as, e.g., a 'chiral' field looping around the triangle) leads to equivalent results. (b) Classical energy surface E_c of the system shown in the top panel. The 'mean' angles θ̄ and φ̄ (introduced in the top panel) are well suited to characterize the state of the system, since fluctuations around these angles are small for b ≪ J and sum to zero. The superimposed white line shows the perturbatively calculated energy barrier at φ̄ = π/2 [see Eq. (2) in Ref. [22]], whereas the crosses are due to a corresponding numerical minimization of the energy.

B. Entanglement measure

In principle, an exponentially increasing number of distinct entanglement measures is required to capture all possible quantum correlations in a general pure state of N qudits. This may be viewed as the reason for the rather large number of proposals for multipartite entanglement measures that have been put forward over the last years. Various insights about the structure and characterization of multipartite entanglement have been gained by studying such measures. For our purpose, we want a measure that is easy (and fast) to compute (in particular, that is an analytic function whose complexity grows at most polynomially with N), that captures the type of entanglement present in our system well, and that possibly has a nice (physical) interpretation. We found that the Meyer-Wallach measure [19], defined for an arbitrary number of qubits, fulfills all these criteria. According to Ref. [42], it can be written in the compact form

γ(|ψ⟩) = 2 (1 − (1/N) Σ_{k=1}^N Tr ρ_k²),    (45)

where ρ_k is the density matrix obtained by tracing all but the kth qubit out of |ψ⟩⟨ψ|. This is simply the subsystem linear entropy averaged over all bipartite partitions involving one qubit and the rest [43]. Moreover, it was shown that this entanglement measure is experimentally observable by determining a set of parameters that grows linearly with N, in contrast to the exponentially increasing complexity of quantum state tomography [42]. We note at this point that the Meyer-Wallach entanglement has been generalized to a broader family of entanglement measures [44] that might give deeper insight into the structure of multipartite entanglement. However, we stick to the simple form (45) for our numerical calculations, as this measure turns out to describe our type of entanglement well. The Meyer-Wallach measure is an entanglement monotone (and can thus be extended to mixed states via the convex-roof construction), lies between zero and one, vanishes only for full product states (i.e., states of the form |ψ⟩ = ⊗_{i=1}^N |ψ_i⟩), and is maximal for generalized GHZ states, Eq. (1). The upper bound is, however, also reached by other states, for instance by the so-called cluster states [42,45].
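Both ingredients of this section, the canonical density matrix and Eq. (45), are straightforward to code; a sketch (the reshape-based partial trace is our own, standard NumPy idiom):

```python
def thermal_state(H, beta):
    """Canonical density matrix rho = e^{-beta H} / Tr e^{-beta H},
    assembled in the eigenbasis for numerical stability."""
    w, V = np.linalg.eigh(H)
    g = np.exp(-beta * (w - w.min()))       # shift avoids overflow
    return (V * (g / g.sum())) @ V.conj().T

def meyer_wallach(psi, N):
    """gamma(psi) = 2 (1 - (1/N) sum_k Tr rho_k^2), Eq. (45); rho_k is
    the reduced state of qubit k of the pure state psi (length 2^N)."""
    T = np.asarray(psi).reshape([2] * N)
    purity_sum = 0.0
    for k in range(N):
        M = np.moveaxis(T, k, 0).reshape(2, -1)   # qubit k vs. the rest
        rho_k = M @ M.conj().T
        purity_sum += np.trace(rho_k @ rho_k).real
    return 2.0 * (1.0 - purity_sum / N)

# check: meyer_wallach returns 1 for (|0...0> + |1...1>)/sqrt(2),
# since every single-qubit reduced state is then maximally mixed
```

For the mixed-state results below, γ is extended by the convex roof, i.e., this pure-state function plays the role of m in Eq. (9) and is fed to the optimizers of Sec. III.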
A drawback of the Meyer-Wallach measure is that it can also be maximized by partially separable states. For example, the state |Ψ⟩ = |Φ⟩ ⊗ |Φ⟩, where |Φ⟩ = (|↑↑⟩ + |↓↓⟩)/√2 is a bipartite Bell state, gives γ(|Ψ⟩) = 1, although it is clearly not globally entangled [42]. This is, however, not a problem in our study for two reasons. First of all, we can check by numerical diagonalization that the ground state of our systems indeed converges to a multipartite GHZ state (at least for the first few N, up to about N = 20). Secondly, comparing the data for N = 3 with our earlier study in Ref. [22], where we had employed the tangle, we find the same qualitative behavior of both entanglement measures. Moreover, the optimal values of b for which the measures reach their maxima at a given temperature coincide almost perfectly. It is thus reasonable to assume that the Meyer-Wallach entanglement measure is well suited for quantifying entanglement in our systems. The numerical evaluation of the Meyer-Wallach measure extended to mixed states via the convex-roof construction requires the derivatives of γ(|ψ⟩) with respect to the real and imaginary components of |ψ⟩ [see Eqs. (10, 11)]. Due to the partial traces, these expressions are a bit cumbersome. However, exploiting the rotational symmetry of the Hamiltonian studied here, they can be considerably simplified (see Appendix).

C. Results

Before we present and discuss our numerical results, we would like to mention that studying the system Eq. (44) analytically for arbitrary N is rather difficult. An exact diagonalization of the Hamiltonian is not known for arbitrary N, and perturbation theory to constant order in b (independent of N) is not suitable to study the ground-state properties of the system, since the ground-state splitting is lifted only in Nth order. One can thus generally expect that the ground-state splitting scales with the number of spins as b^N. Since we must always have b ≪ J, this goes to zero for large N, as discussed in Sec. IV A above. Obtaining highly entangled states at finite temperature with this approach will thus become increasingly difficult for an increasing number of spins N.

Our numerical results are presented in Figs. 4 and 5. Figure 4 shows the Meyer-Wallach measure for N = 2, 3, 4, and 5 spins at four different temperatures (see caption of the figure). Each data point is the result of whichever of the two algorithms described in Sec. III performed better in a few trials with random initial conditions. For a fixed number of spins, the entanglement as a function of the magnetic field strength b assumes a maximum. This maximal entanglement γ_max(T) increases, and its position shifts to smaller magnetic field values, as the temperature is lowered. This is due to the fact that at low temperatures, only a small magnetic field is required in order to make the ground-state splitting sufficiently large compared with temperature. Since these small field values only slightly disturb the ideal GHZ configuration, almost maximal values of the entanglement measure (corresponding to almost perfect GHZ states) are observed. With higher temperature, larger field values are required to protect the ground state. Consistent with the semiclassical picture, this perturbs the desired spin configuration and leads to a lower amount of entanglement. For large magnetic fields, all curves eventually coincide, as the system is then always found in the ground state.
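At T = 0 the ground state is pure and no convex roof is needed, so the b-dependence underlying these curves can be sketched by combining the pieces above (illustrative; the finite-temperature points in Fig. 4 additionally require the convex-roof optimizers of Sec. III):

```python
def ground_state_gamma_vs_b(N, J, b_values):
    """Meyer-Wallach measure of the exact ground state versus b: the
    T = 0 limit of the curves in Fig. 4.  Note that at very small b the
    ground doublet is near-degenerate, so the returned eigenvector may
    be an arbitrary combination within the doublet."""
    out = []
    for b in b_values:
        w, V = np.linalg.eigh(ring_hamiltonian(N, J, b))
        out.append(meyer_wallach(V[:, 0], N))
    return out
```

Such a scan also reproduces the kind of width data shown in the inset of Fig. 5, namely the b-values at which the ground-state measure drops to 0.5.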
Figure 5 gives more insight into the dependence of the maximal entanglement γ_max(T) on temperature and the number of particles. The plot was obtained by maximizing the Meyer-Wallach measure over the magnetic field strength b while holding the temperature fixed. Displayed is the difference between the resulting data and the zero-temperature maximum (being equal to 1) as a function of temperature for different numbers of particles (see caption of the figure). Clearly, the maximally achievable entanglement γ_max(T) decreases for both increasing temperature and increasing number of particles. The qualitative dependence on the temperature was discussed already above. Here we additionally see an almost linear behavior on a log-log scale at low temperatures, suggesting a power-law decay of the maximal entanglement of the form 1 − γ_max(T) ∝ T^α, with an exponent α depending on the number of particles N. The decrease of γ_max(T) with the number of spins N at fixed temperature is due to the fact that the energy splitting between the ground and first excited state scales as b^N. With a larger number of particles, a higher magnetic field is required to achieve a sufficiently large splitting. This in turn lowers the entanglement in the ground state, due to its b-dependence, resulting in a lowered maximum of the Meyer-Wallach measure. As an additional obstacle, the ground-state entanglement as a function of b decays even more rapidly as the number of particles is increased. This can be seen from the inset of Figure 5, where, at T = 0, the b-values yielding the Meyer-Wallach measure 0.5 (full width at half maximum, since the maximum at T = 0 is always 1) are shown as a function of N.

V. CONCLUSIONS

We have presented two ready-to-use numerical algorithms to evaluate any generic convex-roof entanglement measure. While one is based on a conjugate gradient algorithm operating directly on the search space, the other one is a quasi-Newton procedure performing the search in the transformed, unconstrained Euclidean space. All required formulas to implement either of the two algorithms have been stated explicitly, which, in order to calculate different convex-roof extended pure-state measures, merely leaves the user with the task of calculating their derivatives with respect to the real and imaginary components of the pure-state argument. The relatively different nature of the two procedures increases the chances that at least one of them performs well in a concrete application. In a series of numerical tests, we have found that the algorithms perform well, and in particular significantly better than the previously presented (non-Newton-type) ready-to-use algorithms for optimization problems on the Stiefel manifold. However, we found that the convergence properties, as is often the case in involved optimization problems, depend on the cost function. This suggests trying different techniques on a particular optimization problem and examining which one performs best in that case.

Further, we have applied our algorithms to evaluate a multipartite entanglement measure on density matrices originating from a real physical system. The latter consists of N ferromagnetically exchange-coupled spin-1/2 particles placed on the edges of a regular polygon with N edges. We have argued that a particular local magnetic field geometry, namely radially symmetric in-plane fields, favors a highly entangled ground-state configuration.
We have confirmed this argumentation by evaluating the mixed-state Meyer-Wallach entanglement measure, defined for an arbitrary number of qubits, and indeed found high values of entanglement at low temperatures and specific magnetic field strengths. This not only quantifies the entanglement properties present in this system, but also serves more generally as a proof-of-principle for the usefulness and applicability of our algorithms.

APPENDIX

In the case of the systems studied in Sec. IV A, the computation of the Meyer-Wallach measure (45) and its derivatives can be greatly simplified by exploiting the rotational symmetry of the Hamiltonian H, for 2^{N−1} ≤ i ≤ 2^N − 1. In practice, we first diagonalize H numerically [46], subsequently diagonalize further any degenerate spaces with respect to R, and then apply the simplified formulas above.
Ectopic ACTH syndrome of different origin—Diagnostic approach and clinical outcome. Experience of one Clinical Centre

Purpose: Ectopic Cushing Syndrome (EAS) is a rare condition responsible for about 5-20% of all Cushing syndrome cases. It increases the mortality of affected patients; thus, finding and removing the ACTH-producing source allows for curing or reduction of symptoms and serum cortisol levels. The aim of this study is to present a 20-year experience in the diagnosis and clinical course of patients with EAS in a single Clinical Centre in Southern Poland, as well as a comparison of clinical course and outcomes depending on the source of ectopic ACTH production, especially neuroendocrine tumors compared with other neoplasms.

Methods: Twenty-four patients with EAS diagnosed at the Department of Endocrinology between the years 2000 and 2018 were involved in the clinical study. The diagnosis of EAS was based on the clinical presentation, hypercortisolemia with high ACTH levels, the high-dose dexamethasone suppression test and/or corticotropin-releasing hormone tests. Various imaging studies were performed to find the source of ACTH.

Results: Half of the patients were diagnosed with neuroendocrine tumors, in whom muscle weakness was the leading symptom. A typical cushingoid appearance was seen in merely a few patients, and weight loss was more common than weight gain. Patients with neuroendocrine tumors had significantly higher midnight cortisol levels than the rest of the group. Among patients with infections, we observed significantly higher midnight (24:00) cortisol concentrations in gastroenteropancreatic neuroendocrine tumors. Chromogranin A correlated significantly with potassium in patients with neuroendocrine tumors, and there was a significant correlation between ACTH level and severity of hypokalemia.

Conclusion: EAS is not common, but when it occurs it increases the mortality of patients; therefore, it should be taken into consideration in the case of coexistence of severe hypokalemia with hypertension and muscle weakness, especially when weight loss occurs. Because the diagnosis of a gastroenteropancreatic neuroendocrine tumor worsens the prognosis, special attention should be paid to these patients.

Introduction

Hypercortisolemia and the set of symptoms caused by it is defined as Cushing Syndrome (CS). In most cases the source of CS lies in the excessive administration of glucocorticoids for various medical reasons [1]. As regards endogenous causes, they are divided into two groups: adrenocorticotropic hormone (ACTH)-dependent and ACTH-independent CS, responsible for about 70-80% and 20-30% of cases, respectively [2]. An adenoma of the pituitary gland producing ACTH, i.e., Cushing disease (CD), is the most common source of endogenous hypercortisolemia, accounting for about 60-70% of all CS cases [3], whereas less common are adenomas of the suprarenal gland, a condition known as ACTH-independent CS (10-20% of all CS patients) [2]. Ectopic Cushing Syndrome (EAS) is a rare condition, responsible for about 5-20% of all CS cases and ca. 10-20% of ACTH-dependent CS patients [2, 4-11]. EAS was first named and extensively studied in the early 1960s by Liddle and soon after by Meador [12,13]. It is pivotal to distinguish the group of EAS patients from all CS patients due to the different management: removal of the ACTH-producing source allows for curing or a significant reduction of symptoms and serum cortisol levels.
It is crucial to search actively for the tumor that produces ACTH; malignant, aggressive tumors can hide behind it, and failure to recognize them may result in a poor prognosis [8,14,15]. In most cases the source of ectopic production of ACTH is located in the lungs and mediastinum, but it can also be produced by tumors originating from other parts of the body, such as gastroenteropancreatic neuroendocrine tumors (GepNETs), pheochromocytomas and others [4]. The objective of this study is to present 20 years of experience in the diagnosis and prognosis of patients with EAS in a single Clinical Centre in Southern Poland. To the best of our knowledge, this is the first work on the Polish population of patients with EAS studying diagnostics and clinical course depending on the type of tumor producing ACTH. We aimed to analyze the course of EAS in NET, and especially in GepNET patients, compared to other locations.

Methods

We retrospectively reviewed the records of patients with EAS diagnosed at the Department of Endocrinology between 2000 and 2018, including routine (but typical for CS) and endocrine biochemical tests: ACTH was measured by immunoradiometric assay (Brahms, Henningsdorf, Germany), whereas plasma cortisol was measured by an electrochemiluminescence method (Roche Diagnostics GmbH, Mannheim, Germany). The diagnosis of EAS was based on the clinical presentation, hypercortisolemia with high ACTH levels, and the high-dose dexamethasone suppression test (HDDST) and/or corticotropin-releasing hormone (CRH) tests, because bilateral inferior petrosal sinus sampling (BIPSS) was not available, mostly due to the poor condition of the patients. Based on imaging techniques (CT or MR), visible pituitary focal lesions were excluded as a cause of the high level of ACTH.

Statistical methods

Statistical analysis was performed using STATISTICA 13.1 software (StatSoft, Inc., Tulsa, USA). The normality of the data distribution was assessed using the Kolmogorov-Smirnov test with the Lilliefors correction. Non-parametric tests were applied due to rejection of the normality hypothesis for most of the analyzed parameters. The level of significance for all tests was set at 0.05. Differences in median values were tested using the Mann-Whitney U test. Differences in the numbers of patients, categorized by any criterion defined in this study, were tested using contingency tables, and the results were assessed based on a chi-square test with the Yates correction. The Kaplan-Meier plot and Cox proportional hazards model with F-Cox statistics were used to assess differences in mortality between patients dichotomized by any criterion in this study, where patients who were alive were classified as censored observations. The Spearman correlation was used to test relationships between parameters, and logistic regression was used to build a model for predicting mortality. The selection of significant predictors was based on the probability of the likelihood-ratio statistic. Finally, statistical significance was assessed using the chi-square test for the overall model and the Wald test statistic for the predictors.

Ethics statement

The study was approved by the Bioethics Committee of the Jagiellonian University (reference no.: 1072.6120.213.2019) and was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.
Prior to performing any procedure, and after obtaining comprehensive information, each patient signed an informed consent form, which is included in the patient's medical history.

Results

Twenty-four patients with EAS were involved in the study: 14 women and 10 men (female-to-male ratio 1.4:1), with a median age at the time of diagnosis of 61 years. Persistent hypokalemia with high suspicion of hypercortisolism was the reason for referral to our Clinical Center in most of the cases. Merely 8 patients presented a typical cushingoid appearance, and one patient had previously been treated for Guillain-Barre syndrome (due to sudden onset of muscle weakness). Half of the patients were diagnosed with neuroendocrine neoplasms: 6 females and 6 males (GepNETs, thymic and pulmonary carcinoids). Among non-NET patients, two were found with pheochromocytoma, one with esthesioneuroblastoma, two with medullary thyroid carcinoma and two with carcinoma of the ovary, while the remaining patients were single cases of small-cell lung carcinoma (SCLC), papilloma of the maxillary sinus and adenocarcinoma of the stomach. In two patients in a terminal state, who were treated only palliatively, a tissue specimen for histopathological examination was not available, although the potential source of ACTH was found (a tumor of the pancreas and of the lung in imaging studies). The characteristics of the patients are summarized in Table 1.

The most common clinical findings are shown in Table 2. There were no significant gender differences in clinical presentation, although there was a higher percentage of hypertension and peripheral oedema in females, while facial plethora and psychiatric disorders were more common in males. In general, muscle weakness was the leading symptom, and a typical cushingoid appearance (facial plethora, easy bruising, redistribution of fat tissue, weight gain and hirsutism) was present in merely a few patients, mostly NET individuals. Weight loss was more prevalent than weight gain (11 vs 7 patients). Redistribution of fat tissue characteristic of the cushingoid appearance and peripheral oedema were considerably more common in NET patients (p = 0.041). Facial plethora was present in 13 patients (8 being NET patients), whereas easy bruising was seen in more than 50% of patients (13 of 24); in this group, 9 patients had NET. Clinical data on osteoporosis were available only in 9 cases: in 7, osteoporosis was diagnosed (in 5 patients with NET, and in single cases of carcinoma of the ovary and tumor of the lung). Diabetes or a pre-diabetic state was present in 75% of patients (18 of 24), 10 being NET patients. As regards infections, they were documented in 16 patients (7 GepNET, 1 pheochromocytoma, 1 esthesioneuroblastoma, 1 thymic carcinoid, 1 SCLC, 1 pulmonary carcinoid, 1 ovarian carcinoma, 1 medullary carcinoma, 1 papilloma and 1 lung tumor); 8 of them had an infection in more than one anatomic site (Table 3).
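The group comparisons and survival analyses behind these results (described in Methods; the study itself used STATISTICA 13.1) can be re-expressed with standard Python libraries. The following is a minimal sketch only, with illustrative column names ('group', 'cortisol_2400', 'time_months', 'died', 'is_gepnet'), not the authors' actual code or data:

```python
import pandas as pd
from scipy.stats import mannwhitneyu, spearmanr, chi2_contingency
from lifelines import KaplanMeierFitter, CoxPHFitter

def compare_groups(df):
    net = df[df.group == 'NET'].cortisol_2400
    other = df[df.group != 'NET'].cortisol_2400
    _, p_median = mannwhitneyu(net, other)          # difference in medians
    rho, p_rho = spearmanr(df.acth, df.potassium)   # ACTH vs. hypokalemia
    chi2, p_chi, _, _ = chi2_contingency(           # Yates-corrected test
        pd.crosstab(df.group, df.hypokalemia), correction=True)
    return p_median, (rho, p_rho), p_chi

def survival_analysis(df):
    km = KaplanMeierFitter()
    km.fit(df.time_months, event_observed=df.died)  # survivors = censored
    cox = CoxPHFitter()
    cox.fit(df[['time_months', 'died', 'is_gepnet']],
            duration_col='time_months', event_col='died')
    return km, cox
```

Treating surviving patients as censored observations in the Kaplan-Meier and Cox fits mirrors the handling stated in the Methods section.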
Not surprisingly, most of the patients had low TSH levels with a median concentration of 0.36 mIU/ml (in 21 of 24 it was lower than 1.0mIU/ml, 1 patient had TSH level of 3.23mIU/ ml, for the remaining 2 there was no data) and he range of sodium (Na) concentrations varied from 135mmol/l up to 153 mmol/l (median-145mmol/l). In 15 patients from our group, we measured the acid-base balance. Among 10 patients with metabolic alkalosis, 9 were hypokalemic, while on the other hand, correct acid-base balance was observed only in normokalemic patients. All subjects had elevated morning serum cortisol levels and midnight plasma cortisol levels, with a median morning and midnight plasma cortisol level 1655.17 nmol/l and 1434.48 nmol/l respectively. Twenty of 24 patients had lost their cortisol circadian rhythm, whereby of the remaining 4, all were non-NET female patients, with lung tumor, papilloma, gastric Table 4 also shows that the majority of patients suffered from hypokalemia with median potassium (K) concentrations 2.65mmol/l, which affected 17 of 24 individuals (70%; all but one had potassium levels lower than 3.0 mmol/l). However, all patients had an evidence of prior hypokalemia in medical history. Furthermore, there was a significant correlation between ACTH level and severity of hypokalemia (p<0.05) despite the source of EAS. Table 5. We found that ACTH was significantly positively correlated with Na and negatively correlated with phosphate in EAS patients without NET, whereas there was not any substantial correlation of those electrolytes in patients with NET. Moreover, ACTH considerably negatively correlated with K both in EAS patients with and without NET (as mentioned above). Chromogranin A correlated significantly with K (positively) in EAS patients with NET. EAS patients who died had significantly higher values of early morning cortisol levels (n = 18; 1779.31± 468.97 [nmol/l]) than patients, who survived (n = 6; 1020.69± 965.52 [nmol/l]): p = 0.028) and when assessing patients with and without NET, we observed a significantly higher concentration of midnight cortisol levels (cortisol 2400) in NET patients (p = 0.024). Table 6. The levels of ACTH did not vary (p = 0.37) in patient with and without distant metastases (median 75.64 pmol/l; IQR 58.30pmol/l vs 45.80 pmol/l; IQR 37.44pmol/l), although if outliers are omitted (1421pg/ml in one case of esthesioneuroblastoma without metastases), the Concerning other laboratory findings: serum calcitonin was elevated in both patients with medullary thyroid cancer, while chromogranin A was elevated in 6 patients: in 4 patients with NET (2 with pulmonary carcinoid and 2 with pancreatic NET) and 1 with pheochromocytoma and 1 with ovarian carcinoma. For 6 NET patients and for 7 non-NET patients there was no data. Radiological data of the patients is presented in Table 8. In most cases, a single imaging study allowed to detect the primary change: In 2 patients, MRI was the first-choice examination, and it was positive, whereas CT was performed in 22 patients, giving a negative result in 4. MRI examination revealed a source of ectopic ACTH in one case. Among 3 patients with negative both CT and MRI-in 2 SRS or FDG-PET revealed a lesion, while the last one had positive ultrasound examination. Concerning CT, hyperplasia of the adrenal glands (AH) was present in 75% of our patients (18 of 24); no data was available for one patient. 
In our study, 10 patients diagnosed with NET underwent SRS; in all of them, we observed high radionuclide uptake in the ACTH-producing lesions. Five patients with NET underwent FDG-PET, which was positive. SRS performed in 7 non-NET patients was positive in only 4 cases, all with tumors derived from the diffuse neuroendocrine system (2 pheochromocytomas, one esthesioneuroblastoma and one medullary thyroid carcinoma). Concerning treatment, the tumor responsible for ectopic secretion was resected in 12 patients (in 2, the surgery was not radical); in all patients who underwent radical tumorectomy, the signs and symptoms of Cushing's syndrome resolved, and normalization of cortisol and ACTH levels was observed. Adjuvant treatment included chemotherapy (1/24), radiotherapy (2/24), and Peptide Receptor Radionuclide Therapy (PRRT) and treatment with a long-acting somatostatin analogue for all NET patients. To control hypercortisolemia, adrenal steroidogenesis inhibitors (ketoconazole, metyrapone, mitotane) were used. Three patients underwent only palliative treatment due to a poor general condition, and two patients (one with MTC and one with thymic NET) underwent bilateral adrenalectomy. Of the whole group of patients, 18 died, most due to widely disseminated disease (Table 1). Among them, one patient with co-existing viral hepatitis C died with symptoms of acute liver failure due to rapid hypercortisolemia, and three patients died of complications after surgery. The median duration of follow-up was 8.5 months (range 1-86 months), with a median survival of 9.7 months. Patients diagnosed with GepNET had higher mortality compared to the rest of the group, despite targeted treatment; these were the patients who, at the time of the diagnosis of EAS, already had disseminated disease, or who had been treated with long-acting somatostatin analogs for many years until progression coincided with the appearance of EAS (Fig 2). Similarly, the presence of metastases significantly worsened the probability of survival.

Discussion

EAS is a rare disease, with an incidence rate of 1 up to 3 new cases per 1 million people per year [1,3-5,11]. However, when it occurs, it increases the mortality of patients [8,15,16]. It is well known that hypercortisolemia itself increases the mortality of affected patients, as also shown in recently published work [14]. In our work, patients who died had significantly higher cortisol concentrations than patients who survived. During our research, we focused on EAS in the course of NET and non-NET tumors. We did not compare CD with EAS, but presented a group of patients with EAS, focusing on the differences between the course of EAS in NET and non-NET. To our knowledge, no work on this subject has been published so far. The major finding of our work is that NET, and especially GepNET, patients form a special group of EAS patients. As more and more patients with GepNET are being diagnosed, more and more EAS cases in the course of NET will appear. In our study, compared to other tumors, NETs had higher ACTH and cortisol concentrations with lower potassium and TSH levels. Our data are in contrast to those analyzed by Isidori, where NET patients had lower cortisol and ACTH concentrations [4]. Furthermore, when the GepNET subgroup was analyzed versus the rest of the tumors, the differences were the largest. This may explain another important finding of our study: GepNET patients have significantly worse overall survival compared with patients with a different source of EAS. GepNET patients usually have distant metastases at initial diagnosis [17].
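The survival comparisons summarized above (GepNET versus other sources, and patients with versus without metastases) are the kind of analysis usually presented as Kaplan-Meier curves with a log-rank test. Below is a hedged sketch using the lifelines package; the data layout (eas_followup.csv with months, died and is_gepnet columns) is an assumption for illustration, not taken from the paper.

```python
# Sketch of a Kaplan-Meier / log-rank comparison of GepNET vs other EAS sources.
# File and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("eas_followup.csv")
gep = df[df["is_gepnet"] == 1]
other = df[df["is_gepnet"] == 0]

kmf = KaplanMeierFitter()
kmf.fit(gep["months"], event_observed=gep["died"], label="GepNET")
print("GepNET median survival (months):", kmf.median_survival_time_)

result = logrank_test(gep["months"], other["months"],
                      event_observed_A=gep["died"],
                      event_observed_B=other["died"])
print("log-rank p-value:", result.p_value)
```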
In our group, all GepNET patients had disseminated disease at the time of the diagnosis. On the other hand, the survival difference can be partly explained by the fact that the non-GepNET group includes patients with a poor prognosis, such as carcinomas (ovary, lung, stomach), as well as those with a better prognosis, such as medullary thyroid carcinoma, pheochromocytoma, esthesioneuroblastoma and lung carcinoid tumors. ACTH and CRH can be produced by almost all tumors, both malignant and benign, of endocrine and non-endocrine origin [3,18-23]. In the last decades, we have observed a shift in the spectrum of EAS towards more often diagnosed neuroendocrine tumors [8,19,23,24]. Most cases in the first decades after the definition of EAS was established were caused by small cell lung carcinoma [12,13,23]. Still, almost half of the tumors can be found in the thoracic cavity, mostly bronchial carcinoids and SCLCs [4,8,11,24-26]. In our study, on the contrary, most were located in the abdomen or pelvis. Compared with other large studies, we observed a higher proportion of GepNETs (37.5% vs 3.0-18.3%) [3,4,9,14,15,17,21,26] (Table 9). This can be partly explained by the fact that in our medical center we mostly diagnose and treat endocrinological disorders, whereas most patients with SCLC are under care in oncological centers. Another explanation for the different percentage distribution of tumors in our study could be the fact that patients with rapidly progressing malignancies do not have time to develop typical cushingoid features, and their severe hypokalemia is usually treated symptomatically, without any further diagnostic procedures. Typical cushingoid features are more often seen in latent tumors than in malignant neoplasms; in the latter, due to the rapid progression of the underlying disease, typical symptoms of hypercortisolemia may not be revealed. Symptoms of hypercortisolism often appear in advanced stages in non-NET patients, especially in SCLC, when cachexia and electrolyte disturbances, related to the terminal state of the patient or to treatment, dominate. Concerning neuropsychiatric disorders (observed in 42% of patients in our group), they are not common in EAS patients, most likely due to a poor general condition, though in some cases they can be the leading symptom of ectopic ACTH production [27-29]. What is more, there was a higher female to male ratio in our study, in contrast to other analyses, where there is a male predominance in EAS; this is probably because most lung cancer patients are males, while GepNET patients are predominantly women [25,30]. Furthermore, compared to CD, EAS patients are more likely to experience severe hypokalemia, which has been broadly studied previously: the higher the plasma cortisol concentration, the more severe the hypokalemia [14,25,31]. Our results are similar. The authors of these earlier publications have proposed an explanation for this phenomenon. They suggested that excessive production of cortisol induces a state in which cortisol itself acts as a mineralocorticoid, regardless of ACTH, by saturating 11beta-hydroxysteroid dehydrogenase [25]. In comparison to other series, our group had a similar prevalence of hypokalemia (70%). Concerning infections in EAS patients: it is well known that high levels of cortisol predispose to infections [32-34]. We found an interesting correlation between the level of hypercortisolemia, the predisposition to infections and the primary site of the tumor responsible for EAS.
NET and GepNET patients (with a worse prognosis of survival) with infections had significantly higher levels of cortisol. As presented in previously reported research, the higher the hypercortisolemia, the higher the prevalence of infections in affected patients [33]. Our study confirmed that EAS patients most often had muscle weakness and hypertension. Similarly, proximal muscle weakness as the leading symptom of hypercortisolemia of ectopic origin is observed in studies from various centers [3,11,35-38]. Also well known is the influence of glucocorticoids on thyroid function (suppression of the hypothalamic-pituitary-thyroid axis) [39-42]. Here, the assessment of TSH by primary care providers can be of great value, and practitioners should be aware that, in the case of coexisting severe hypokalemia, muscle weakness with low TSH (in patients with no history or signs of thyroid disorders) can strongly suggest hypercortisolemia and EAS. We therefore propose that TSH may be a simple blood test that could improve the diagnosis of hypercortisolemia. Concerning imaging techniques: in our work, in all cases the possible source of ectopic ACTH was found by at least one of the imaging techniques. No single imaging study gives 100% sensitivity, but combining different techniques allows it to be increased [7,43-47]. On top of that, because more and more cases of EAS originate from NETs, SRS begins to play an increasingly important role, especially in the case of negative CT or MRI, and should be taken into account at an early stage of the diagnostic algorithm [45,48-52]. In EAS non-NET patients with tumors originating from the diffuse endocrine system (DES), such as esthesioneuroblastoma, pheochromocytoma or medullary thyroid carcinoma, diagnostic procedures can be based on SRS. In those patients, lesions localized by SRS were confirmed by other imaging studies, similarly to NET patients. In other patients, with tumors not originating from the DES (such as carcinomas or papillomas), SRS seems to be of lower diagnostic significance. In those cases, imaging procedures should first be based on CT, MR or FDG-PET. FDG-PET has a much lower sensitivity in detecting the primary tumor in NETs [47,48,50,53,54]. As explained by Adams et al., this is mostly due to their limited metabolic activity [55]. In general, hyperplasia of the adrenal glands (AH) is often seen in CS [56,57]. In our work it was observed in 18 patients (75%); these results are similar to those achieved by Imaki et al., where AH was seen in 75% of EAS patients (3 of 4) and in 54% of CD patients [56]. An even higher frequency was reported by Sohaib et al. (90% in EAS patients (9 of 10), compared with 62% in CD patients) [57]. What is common across all studies is that AH was more often seen in EAS patients than in other causes of ACTH-dependent CS, probably due to the extremely high levels of ACTH in those patients [17,56,57]. Confirmation of EAS is challenging. The gold standard in the diagnosis of EAS is BIPSS, and confirmation of EAS requires positive staining for ACTH or CRH in tumor cells [4,10,20,54,58]. In our patients, due to their poor condition, BIPSS was not available; in addition, when patients could not undergo tumorectomy, no immunohistochemical evaluation was available. Pituitary MR examination cannot be used to unequivocally exclude or confirm EAS, because false negative results are also observed in pituitary CS, and this is an important limitation of the method.
HDDST alone is also of limited value [10]. However, in all relevant EAS publications, the ACTH concentration was shown to be significantly higher than in CD patients. In his article, Aron focused on the value of the HDDST in the diagnosis of ACTH-dependent CS. He showed that the HDDST has limited value in differentiating the source of ACTH-dependent CS. We fully agree with this statement. In our patients, it was one of the tests that, in combination with other laboratory tests and the clinical presentation, allowed EAS to be suspected. What is more, in his publication, Aron showed that compared to CD patients, EAS patients had a significantly higher mean ACTH concentration (47 vs. 17 pmol/l), as well as a smaller percentage of patients with suppression by 50% or more of the baseline in the HDDST (33.3 vs. 81.0%). MR imaging was also not a differentiating criterion [10]. The source of ectopic ACTH is neoplastic tissue, which is usually confirmed by immunohistochemical examination. In our study, immunohistochemical staining of tumor tissue for ACTH was available in only 3 patients: in two it was positive, and in the other one it was negative. Concerning the last patient, as proposed by Isidori and Lenzi, only a subpopulation of cells may actually secrete ACTH, which could explain our finding [4]. There are some limitations to this study. Firstly, we mainly used archival data. Secondly, the patients came from a single center focused on endocrine diseases. Most of them had at least mild features of hypercortisolemia and/or signs of elevated ACTH; patients with lung neoplasms are diagnosed and treated in oncological centers. Because BIPSS, considered the diagnostic gold standard, was not available, the diagnosis of EAS was based on laboratory findings, clinical symptoms, imaging techniques and/or the resolution of hypercortisolemia after removal of the tumor responsible for EAS. A further limitation of this study is the fact that we failed to distinguish between ectopic CRH and ACTH secretion. Notably, Muller et al. suggested that a non-excessive elevation of serum ACTH and a partial response to the high-dose dexamethasone test, with negative imaging, can imply ectopic production of CRH [59].

Conclusions

1. The occurrence of hypokalemia in GepNET patients should prompt suspicion of EAS, especially when other symptoms such as hypertension, muscle weakness or weight loss appear.
2. GepNET patients usually do not have time to develop typical cushingoid features because of the rapid progression of EAS.
3. A diagnosis of GepNET in EAS patients significantly worsens the probability of survival.
4. When actively searching for the source of ectopic ACTH production, combining different imaging techniques increases sensitivity. In patients with NET, SRS should be the test of choice.
Integration of Carbon Nanotubes in an HFCVD Diamond Synthesis Process in a Methane-Rich H2/CH4 Gas Mixture

In this work, we present experimental data on carbon nanotube integration during diamond synthesis. Carbon nanotube layers were preliminarily deposited on silicon and diamond substrates, after which the substrates were loaded into the HFCVD reactor for further growth of the diamond phase. The CVD process was carried out in an argon-free H2/CH4 working gas mixture, without the use of a catalyst for carbon nanotube growth. It is shown that over the whole studied range of working gas compositions (CH4 concentration up to 28.6 vol.%), the nanotubes were etched from the substrate surface before the diamond growth process began.

Introduction

Carbon nanotubes (CNTs) are an outstanding material combining great mechanical, electrical and thermal properties [1-7]. CNTs are the material of choice for the reinforcement of composite materials [8,9]. There are three main methods for the synthesis of CNTs: laser ablation, arc discharge and chemical vapor deposition (CVD) [10-12]. CVD is the most promising method, due to its scalability to industrial production, the high purity of the synthesized nanotubes, its high yield and its relatively low cost [13]. A key feature of the CVD synthesis of CNTs is that metal catalyst particles must be involved, such as iron, nickel, cobalt or their mixtures. The most common working gas mixture for the synthesis of CNTs is H2/CH4, and there are also reports on using additions of argon (C2H2/Ar/H2) [14] and nitrogen (CH4/N2/H2) [15].

Diamond, another allotrope of carbon, is widely used in industry as a protective coating for tools and mechanisms subject to abrasive wear [16,17]. Diamond coatings are also synthesized using the CVD method. Moreover, the deposition parameters of these two materials are quite similar: a substrate temperature of 700-1000 °C for diamond [18] and 500-1200 °C for CNTs [13], a chamber pressure of 10-100 Torr for diamond [18] and 10-750 Torr for CNTs [13], and CH4/H2 working gas mixtures of 1-30% CH4 for diamond and 3-80% for CNTs [19-21]. Despite the excellent physical and mechanical properties of diamond (high hardness, elastic modulus and wear resistance), the disadvantage of diamond coatings is their brittleness. A possible way to solve this problem is to create a diamond-CNT composite.

Recently, diamond-CNT composites have received attention among researchers [21-25]. A material combining diamond and CNTs can exhibit interesting properties due to the synergy of these two materials, with improved mechanical, thermal and electrical properties. Possible benefits of diamond-CNT composites have been reported or speculated for electrical, electronic and thermal applications (electrodes, field emission devices, MEMS/NEMS, thermionic energy generation, thermal interfaces) [24-26]. Moreover, the mechanical properties of protective diamond coatings can be improved by incorporating CNTs into their structure, which can increase the service life of diamond-coated tools and mechanisms.
In most of the works devoted to the combination of CNTs and diamond, these materials were grown simultaneously with the CVD method, using a window in the deposition parameters, with catalyst nanoparticles for CNT growth. Table 1 presents the deposition parameters used in various studies of composite growth. However, if catalyst nanoparticles are involved, it is only possible to synthesize a porous volume of diamond-CNT material, while the growth of a dense nonporous diamond film with CNTs distributed in its volume is impossible, since the catalyst, upon interaction with diamond, causes its graphitization, which inevitably degrades the mechanical properties of the diamond film and significantly decreases its adhesion to the substrate [27,28].

One possible solution to avoid the problem of diamond graphitization is the pre-deposition of CNTs on the surface of the substrate and the subsequent growth of diamond over this layer. N. Shankar et al. [29] tested this approach with the following deposition parameters: HFCVD method, H2/CH4 atmosphere (1-5% CH4), substrate temperature 800 °C and chamber pressure 25 Torr. The authors report the etching of CNTs at 1% CH4, but with an increase in the methane concentration to 2-5% they observed a window in the diamond growth parameters wherein CNTs were not destroyed. This reveals another problem of diamond-CNT composite synthesis, which is H-etching of CNTs. F.-B. Rao et al. [30] treated CNTs at high temperatures in a thermal CVD reactor with pure hydrogen, and found that CNTs were preserved after hydrogen treatment at 800 °C, but were etched at 900 °C. The etching of CNTs has been reported even under less harsh conditions. G. Zhang et al. [31] treated a pre-deposited CNT layer using RF plasma in a hydrogen atmosphere (1 Torr) at temperatures from 200 to 400 °C. As a result, at all studied parameters they observed intensive CNT etching.

In any case, a thick (at least ~2 µm), dense and nonporous diamond-CNT composite film has not yet been presented in the literature. Moreover, the conditions under which CNTs are preserved in a hydrogen environment remain unclear. The purpose of this work was to study further the processes occurring with pre-deposited CNT layers in an HFCVD reactor under the conditions of diamond deposition at various methane contents in the working mixture (5-28 vol.% CH4).

Materials and Methods

We used two types of substrates in this work: (1) mirror-polished single crystal Si (100) plates of 10 × 10 × 0.38 mm size; (2) microcrystalline diamond films grown on Si substrates similar to those described above (film thickness ~5 µm).

Microcrystalline diamond films were grown with an HFCVD system. Prior to the diamond synthesis, substrates were ultrasonicated first in acetone for 10 min, then in an aqueous suspension of nanodiamond (diamond particle size 5-9 nm, mass fraction 0.035 wt.%) (FRPC "Altai", Biysk, Russia) for 10 min, and again in acetone for 5 min. The substrate temperature during diamond synthesis was maintained at 850 ± 20 °C, and the operating pressure was maintained at 20 ± 1 Torr using a needle valve. The substrate-to-filament distance was 10 ± 1 mm. An H2/CH4 (5.6 vol.% CH4) working gas composition was used for microcrystalline diamond synthesis, with a total flow of 106 mL/min. The diamond film deposition time was 6 h.
Prior to CNT deposition, all substrates were ultrasonicated in acetone for 10 min. CNTs were deposited with a self-made installation, shown in Figure 1a. The reservoir of the CNT deposition installation was equipped with an ultrasonic membrane. The reservoir was filled with a mixture of distilled water (98 vol.%) and a suspension of single-wall carbon nanotubes (SWCNTs) (SWCNT concentration in suspension 0.1%, carrier liquid H2O) (2 vol.%), TUBALL BATT (OCSiAl, Novosibirsk, Russia), which vaporized under the ultrasonic action of the membrane. Next, the CNT-containing vapor flow was directed to the substrates placed on the substrate holder. The substrate holder was equipped with a heating system, and the temperature of the substrates was held at 85 ± 2 °C. The CNT deposition time was 20 min. After the CNT layer deposition, the substrates were treated with HFCVD (Figure 1b). The substrates were heated up to ~850 °C during the CVD treatment, the operating pressure was 20 ± 1 Torr and the substrate-to-filament distance was 12 ± 1 mm. An H2/CH4 mixture was used as the working gas; the flow rate of hydrogen was held constant at 100 mL/min, while methane varied from 6 to 40 mL/min (5.6-28.6 vol.%).
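The quoted vol.% values follow directly from the flow rates, assuming simple volumetric mixing of the two gases. A quick worked check (the function name is ours):

```python
# Worked check of the quoted gas compositions: vol.% CH4 = CH4 flow / total flow.
def ch4_vol_percent(ch4_ml_min: float, h2_ml_min: float = 100.0) -> float:
    return 100.0 * ch4_ml_min / (ch4_ml_min + h2_ml_min)

for flow in (6.0, 40.0):
    print(f"{flow:.0f} mL/min CH4 -> {ch4_vol_percent(flow):.1f} vol.%")
# 6 mL/min  -> 5.7 vol.% (quoted in the text as 5.6 vol.%)
# 40 mL/min -> 28.6 vol.%
```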
Structure and morphology were investigated using an Apreo S LoVac (Thermo Fisher Scientific, Waltham, MA, USA) scanning electron microscope and a JEM-2100F (JEOL, Tokyo, Japan) transmission electron microscope. Raman spectra were acquired using a confocal Raman microscope coupled with the Scanning Probe Optical Unit NTEGRA Spectra (NT-MDT, Moscow, Russia). Cumulative mass loss (CML) measurements were performed using an MXA 21 microbalance (RADWAG, Radom, Poland). For the CML measurements, three samples were obtained for each methane concentration.

The nanotube diameters were measured using ImageJ software. A sample of 40 measurements was taken to calculate the arithmetic average diameter. Since the measurement error of the nanotube diameter depends mainly on the SEM image resolution, it was constant and amounted to ±1.2 nm.

Samples were prepared for TEM using the carbon extraction replica method. First, a thin layer of amorphous carbon was deposited on the samples, and then gelatin was applied over the amorphous carbon layer. When dried, the gelatin tore off part of the coating from the sample surface. Next, the detached films were immersed in distilled water to dissolve the gelatin, after which they were dried and were ready for TEM analysis.

Raman ratios were calculated from the Raman mapping data as the arithmetic average of a sample of 10 spectra.
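The diameter statistics described above reduce to an arithmetic mean over 40 ImageJ measurements, with a fixed ±1.2 nm resolution-limited uncertainty. A minimal sketch (the input file is hypothetical):

```python
# Sketch of the diameter statistics: mean of 40 SEM measurements with a
# fixed instrumental uncertainty of +/-1.2 nm. File name is hypothetical.
import numpy as np

diameters_nm = np.loadtxt("cnt_diameters.txt")  # 40 values exported from ImageJ
print(f"average diameter: {diameters_nm.mean():.0f} +/- 1.2 nm (n = {diameters_nm.size})")
```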
Characterization of the As-Deposited CNT Layer

Figure 2 shows SEM images of as-deposited CNTs on silicon (a,b) and diamond (c,d) substrates. The image settings were adjusted in order to improve the contrast of the CNTs, which may cause the CNTs to appear dark or bright in some images. SEM scanning reveals that the nanotube deposition method used in this work allows nanotubes to be deposited in a uniform layer over the entire surface of the samples. The CNT deposition time was selected experimentally to ensure both that the structure of the CNT web was sufficiently dense and that the substrate was not completely covered by a layer of nanotubes, to allow diamond nucleation on the surface of the substrate. It is known that silicon is carbonized during CVD and forms a SiC surface layer [32,33], and there is a possibility of dissolution of carbon from the CNTs in this process, so we used diamond substrates to find out whether this process occurs or not.
Figure 3 shows some of the features of the deposited nanotube layer observed during the analysis of the SEM images. In particular, agglomerations with a high packing density of nanotubes (HDA) were found (Figure 3a). The nanotube layer also contained residual catalyst nanoparticles, observed as bright globes in Figure 3b, which were a feature of the CNT suspension used. A small amount of amorphous carbon can also be observed in the form of dark globes in Figure 3b,c. The Id/Ig Raman peak ratio (Table 2) indicates the high quality and phase purity of the carbon nanotubes.

SEM Observations

Figure 4 shows SEM images of CNTs after exposure to an H2/CH4 environment under the conditions of microcrystalline diamond synthesis (5.6 vol.% CH4) on Si (a) and diamond (b) substrates. In the deposition of diamond and/or carbon nanotubes by the HFCVD method, an important role is played by atomic hydrogen, which is formed upon activation of the gas mixture by the hot filaments, because it acts as an etchant. Interaction with such a high proportion of atomic hydrogen in the gas mixture causes rapid etching of nanotubes from the surface of the sample.

After 10 s of operation of the HFCVD (5.6 vol.% CH4) reactor, the density of the CNT layer was greatly reduced, and visible deformation and cutting of the tubes was observed. The calculated average diameter of the as-deposited nanotubes was 27 nm, while after 10 s of etching under such harsh conditions it decreased to 17 and 18 nm on the Si and diamond substrates, respectively, and the largest CNT diameter decreased from 45.1 ± 1.2 to 32.4 ± 1.2 and 35.2 ± 1.2 nm on the Si and diamond substrates, respectively. This observation indicates the absence of a significant influence of the substrate material on the CNT etching, and hence no carbonization of the Si substrate by the CNTs.
This indicates, firstly, that the diameter of a nanotube significantly affects its etching time, and secondly, that the diameter of large nanotubes decreases during etching. It will be shown later, in the TEM observations section, that the visible "nanotube" is in fact a bundle of nanotubes, and the thinning process is explained by the fact that nanotubes on the surface of the bundle are etched, while nanotubes inside the bundle are mostly preserved. Hydrogen etching also causes the cutting of nanotubes, which begins at defect sites [31]. Further, after 30 s of CVD exposure, the CNTs almost completely disappeared from the surface of the sample (not presented). The hydrogen etching tendency of the CNTs was traced for all studied experimental conditions (CH4 concentrations up to 28.6 vol.%).

Figure 5 shows SEM images of CNTs after exposure to a methane-rich H2/CH4 environment (28.6 vol.% CH4) on Si and diamond substrates.
After 30 s of operation of the HFCVD (28.6 vol.% CH4) reactor, the deposition of amorphous carbon in the form of globes on the surface of the substrates could be observed (Figure 5a,d). This is explained by the fact that, with a methane-rich gas composition and a short operating time (the reactor does not enter the stable operating mode), the low temperature of the substrate and the predominance of methyl radicals over atomic hydrogen in the activated gas mixture ensure the growth of the amorphous carbon phase and a low rate of H-etching of both the CNTs and the grown amorphous carbon. Further etching of both nanotubes and amorphous carbon was observed after 150 s of CVD exposure (Figure 5b,e). The average CNT diameter increased from 19 nm (at 30 s) to 24 nm, while the number of nanotubes was significantly reduced. The reason for the increase in the average diameter is that smaller CNT bundles have lower volume, and hence significantly fewer nanotubes per bundle, than larger bundles. The density of the amorphous carbon was significantly lower than that at 30 s. The results observed after 300 s of methane-rich environment treatment (Figure 5c,f) were close to those after 10 s in a 5.6 vol.% CH4 environment. The same defects were observed, in the form of thinning of the nanotube walls and cuts.

The CML of the samples was measured at various methane concentrations (Figure 6). The etching rate decreased with decreasing hydrogen concentration in the gas phase. Based on the CML data, it was calculated that an increase in the methane concentration by 10% decreases the CNT etching rate by ~18%.
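One way to arrive at a figure like the ~18% change is to fit CML against CVD time for each methane concentration and compare the resulting slopes (etching rates). The sketch below uses invented illustrative numbers, not the measured CML data:

```python
# Estimate etching rates as the least-squares slope of cumulative mass loss
# (CML) versus CVD time, then compare rates between two CH4 concentrations.
# The CML values below are illustrative placeholders, not measured data.
import numpy as np

def etch_rate(t_s, cml_ug):
    slope, _ = np.polyfit(t_s, cml_ug, 1)  # linear fit: CML = rate * t + b
    return slope

t = np.array([10.0, 30.0, 150.0, 300.0])        # CVD operating time, s
cml_lo_ch4 = np.array([2.0, 6.0, 29.0, 58.0])   # placeholder, lower CH4 content
cml_hi_ch4 = np.array([1.6, 4.9, 24.0, 48.0])   # placeholder, higher CH4 content

r_lo, r_hi = etch_rate(t, cml_lo_ch4), etch_rate(t, cml_hi_ch4)
print(f"relative change in etching rate: {100.0 * (r_hi - r_lo) / r_lo:.0f}%")
```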
TEM Observations

TEM analysis (Figure 7) was performed for the samples shown in Figure 5a,b, i.e., CVD treated at 28.6 vol.% CH4 for 30 and 150 s (silicon substrates). Figure 7a shows the typical structure of a CNT bundle containing SWCNTs; the same bundle structure has been reported in other works [34,35]. In the sample in Figure 7b, both single nanotubes (Figure 7 (1)) and CNT bundles (Figure 7, dashed line) can be observed, and at 30 s a distinct crystal structure could still be traced in the CNTs. It should be noted that single CNTs were observed on the sample treated for 30 s (Figure 7b), while on the sample treated for 150 s (Figure 7c) only CNT bundles were present. It is shown that thinning occurred unevenly along the length of the nanotubes. For the same CNT bundle, the diameter was measured to change from 22 to 14 nm. Cutting (Figure 7 (2)) was shown to be relatively rare at 30 s of CVD treatment. A change was observed after 150 s of CVD treatment (Figure 7c). It can be seen that the crystal structure of the nanotubes becomes less distinct, with frequent cuts along the nanotubes' length. Additionally, a substantial amount of CNT material was removed by H-etching. A decrease in the intensity of CNT structures was observed with DFTEM (Figure 7e,g). Since the DFTEM images are acquired by selecting a certain reflex of the SAED pattern, we chose the reflections (red circles in the SAED pattern insets in Figure 7) on the ring characterizing the carbon nanotube (graphene) structure, and observed that with an increase in the CVD treatment time, the intensity of the structures decreased, which indicated a violation of the structural integrity of the nanotubes and, therefore, their amorphization. The sharpness of the SAED rings corresponding to the CNTs also decreased, which additionally indicated a decrease in the crystallinity of the structure of the remaining nanotubes. Both the thinning and the cutting of the nanotubes are accompanied by the amorphization process.

Raman Observations

To obtain statistical data on the Raman spectra, we performed Raman mapping. Raman spectroscopy was carried out only for the Si substrate samples. In the Raman spectra maps (not presented), two typical spectra were observed: the spectrum of an evenly distributed network of carbon nanotubes (Figure 8a), and the spectrum of high-density agglomerations of nanotubes (Figure 8b).
Spectra of the as-deposited CNT layers are shown by the black curves. The spectra show the second-order Raman peaks of the silicon substrate (943 and 978 cm−1) [36]; they are shown for the purpose of comparing their intensity with the peaks of the deposited material. The peak of the CNT structure consisted of a high-intensity G-band peak (1590 cm−1) and its left shoulder, i.e., the G− peak at 1570 cm−1, which is characteristic of SWCNTs [37]. The G-band peak shape was the same in both the evenly distributed CNT web and the HDA. There was also a low-intensity D-band peak at 1335 cm−1. This peak is associated both with the presence of amorphous carbon and with structural defects of the nanotubes [37].
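Peak-intensity ratios such as the Id/Ig and Ig/Isi values analyzed below are straightforward to extract once the peak positions are fixed (D near 1335 cm−1, G near 1590 cm−1, second-order Si at 943 and 978 cm−1). A minimal Python sketch, assuming a two-column shift/intensity text file:

```python
# Sketch: extract D, G and Si peak intensities from a Raman spectrum and
# form the Id/Ig and Ig/Isi ratios discussed in the text. The file name is
# hypothetical; peak positions follow the text.
import numpy as np

def peak_intensity(shift_cm1, counts, center, half_window=15.0):
    """Maximum intensity within +/- half_window of the nominal peak position."""
    mask = np.abs(shift_cm1 - center) <= half_window
    return counts[mask].max()

shift, counts = np.loadtxt("raman_spectrum.txt", unpack=True)
i_d = peak_intensity(shift, counts, 1335.0)
i_g = peak_intensity(shift, counts, 1590.0)
i_si = np.mean([peak_intensity(shift, counts, c) for c in (943.0, 978.0)])
print(f"Id/Ig = {i_d / i_g:.2f},  Ig/Isi = {i_g / i_si:.2f}")
```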
The blue curve spectra in Figure 8 correspond to the sample presented in Figure 5a. The deposition of amorphous carbon on the surface of the CNTs was observed as an increase in the D peak, and the Id/Ig ratio (D-band peak intensity/G-band peak intensity) changed from 0.06 ± 0.02 to 0.81 ± 0.08 and from 0.02 ± 0.01 to 0.14 ± 0.02 for the CNT web and the HDA, respectively. Despite the predominance of methyl radicals over atomic hydrogen in the gas mixture, which caused the deposition of amorphous carbon, the Raman spectroscopy data indicated a decrease in the thickness of the CNT/amorphous carbon layer, which can be seen from the decrease in the Ig/Isi ratio (G-band peak intensity/mean Si peak intensity) from 2.49 ± 0.7 to 1.67 ± 0.33 and from 38.56 ± 10.7 to 15.51 ± 6.98 for the CNT web and the HDA, respectively. In addition, after 30 s of CVD treatment, a D' peak (1620 cm−1) appeared near the G peak. The D' peak is associated with defects in the graphitic crystal structure [38]. The red curve spectra in Figure 8 correspond to the sample presented in Figure 5b. A further decrease in the thickness of the CNT/amorphous carbon layer could be observed, characterized by a decrease in the Ig/Isi ratio to 0.6 ± 0.11 and 5.1 ± 1.39 for the CNT web and the HDA, respectively.

Based on the analysis of the Id/Ig ratios for the 30 s and 150 s samples, we assume that the etching rate of amorphous carbon is higher than that of the nanotubes. These ratios, coupled with the measured CML of the carbon nanotubes, suggest the theoretical possibility of a parameter window in which hydrogen could selectively etch the amorphous phase formed during diamond synthesis while preserving the nanotubes. However, a further increase in the methane concentration in the HFCVD method is limited by the fact that at high methane contents, intensive carburization of the hot tungsten filaments begins, which leads to their rapid destruction [39].

Conclusions

In this work, we studied the possibility of integrating preliminarily deposited CNTs into the CVD diamond synthesis process and the dynamics of the processes occurring with the nanotubes under these conditions. In the entire studied range of parameters, intense hydrogen etching of the CNTs was observed, which led to the complete disappearance of the CNT layer from the substrate surface before the diamond growth process began. This indicates the impossibility of synthesizing a diamond-CNT composite under the studied conditions. It was calculated that with a decrease in the hydrogen concentration by 10 vol.%, the etching rate of the CNTs decreases by ~18%. Two main mechanisms of CNT destruction are shown, thinning and cutting, which were accompanied by amorphization. Thinning is the main mechanism of etching of small-diameter CNT bundles. Cutting is mostly found in thicker bundles, where it starts at defect sites and then propagates along the length of a tube. Based on the results of the CML and Raman analyses, we hypothesize that it may be possible to achieve a parameter window in which the H-etching rate is reduced just enough to etch the amorphous carbon and preserve the CNTs. However, the HFCVD method is limited by a maximum methane concentration. We suggest two possible solutions to the problem of hydrogen etching. The first is a change in the gas composition to a hydrogen-free mixture, for example CH4/Ar. The second is a change in the gas activation system from hot filaments to microwave plasma or arc discharge, as this would allow the use of higher methane concentrations.
Figure 1. Carbon nanotube deposition setup (a); bubble inserts show a schematic view of the CNT-containing liquid vaporization and an SEM image of the as-deposited CNT layer. HFCVD setup (b); the bubble insert shows a cross-sectional view of the hot filaments, explaining the gas activation chemistry.

Figure 2. SEM images of as-deposited CNTs on Si (a,b) and diamond (c,d) substrates.

Figure 3. SEM images of high-density agglomerations of carbon nanotubes (a), catalytic nanoparticles present in the sample (b) and amorphous carbon (b,c).

Figure 4. SEM images of CNTs after exposure to an H2/CH4 environment (5.6 vol.% CH4) for 10 s on Si (a) and diamond (b) substrates.

Figure 6. Plot of cumulative mass loss versus CVD operating time at various methane concentrations.

Figure 8. Typical Raman spectra of the evenly distributed CNT web (a) and HDA (b) taken from Raman mapping: black curves, as-deposited CNTs; blue curves, after 30 s of CVD (28.6 vol.% CH4); red curves, after 150 s of CVD (28.6 vol.% CH4).

Table 1. Deposition parameters used for simultaneous diamond-CNT composite growth.

Table 2. Peak intensity ratios derived from Raman spectra.
Catechins: a natural blessing in breast cancer treatment

Introduction

Breast cancer is a common cancer in women. There were an estimated 1.7 million new cases (25% of all cancers in women) and 0.5 million cancer deaths (15% of all cancer deaths in women) in 2012 [1]. Although there have been great advances in the treatment of breast cancer, mortality from breast cancer is still high, and it is the second leading cause of cancer-related death among women in the United States [2]. Several reproductive and lifestyle factors are also associated with the development of breast cancer. Among the reproductive factors are a long menstrual history, nulliparity, extended use of oral contraceptives, and giving birth to a child at a later age [3]. Lifestyle factors, including low physical activity, consumption of unhealthy diets, cigarette smoking and alcohol consumption, are strongly associated with an increased risk of breast cancer development [4]. By 2030, the WHO expects that 25% of people around the world will have at least one cancer type, and over 60% of new cases are expected in low- and middle-income regions [5].

The present review will highlight recent advances concerning the effects of tea and its catechins on breast cancer, including epidemiological, in vivo and in vitro investigations. Tea is one of the most popular beverages consumed all over the world. Tea leaves are rich in catechins, a group of polyphenols that provide tea with many of its health benefits.

Epidemiological Evidence

As early as 1997, an epidemiological study carried out in Japan showed that drinking green tea had a potentially preventive effect on breast cancer, especially among women who drank more than 10 cups of green tea every day [11]. Many cohort or case-control studies on the relationship between tea consumption and breast cancer risk have been carried out since then. Cohort studies in China and the USA showed that habitual drinking of green tea was weakly associated with a decreased risk of breast cancer [12].
The relationship between tea consumption and a decreased risk of breast cancer was also confirmed by population-based case-control studies carried out in China [13], the USA [14] and Singapore [15].

Mechanism of Tea Catechins in Suppressing Breast Cancer

Tea catechins have been linked to a decreased prevalence of oxidative stress-related conditions in women [20]. They have long been regarded as highly effective in the prevention of certain diseases, especially cancer [21]. Chemical, in vitro and biological assays have shown green tea polyphenols to be strong antioxidants, active against iodophenol-derived phenoxyl radicals, superoxide anion radicals, and lipid peroxidation in rat liver microsomes [22]. EGCG is a significant scavenger of reactive oxygen species and has strong antioxidant activity [23]. Several experimental studies have found that green tea has anticarcinogenic effects against breast cancer, and EGCG is effective in this respect. In vitro, epigallocatechin, another significant catechin in green tea, likewise has strong effects in inducing apoptosis and inhibiting the growth of breast cancer cells [31]. Epidemiologic studies have suggested that the regular consumption of tea, particularly green tea, moderately decreases the risk of cancer. These results were strongly supported by the meta-analyses carried out by Sun et al. [32] and Zhang et al. [10], which provided conclusive insights regarding the prevention and reduction of breast cancer risk. This effect is likely to be the result of a decrease in the auto-phosphorylative capacity of EGFR and a consequent reduction in the activity of the intracellular signaling cascades that are activated by EGFR.

Conclusion

These changes may lead to alterations in the expression of proteins governing the cell cycle. The primary objective of cancer chemoprevention studies with dietary constituents is to identify active ingredients and to clarify their underlying mechanisms, in order to design a better regimen or strategy for intervention trials. Tea catechins are among the most widely explored and well-characterized dietary chemopreventives.
Fragmentation of jets containing a prompt J/ψ meson in PbPb and pp collisions at √sNN = 5.02 TeV

Jets containing a prompt J/ψ meson are studied in lead-lead collisions at a nucleon-nucleon center-of-mass energy of 5.02 TeV, using the CMS detector at the LHC. Jets are selected to be in the transverse momentum range of 30 < pT < 40 GeV. The J/ψ yield in these jets is evaluated as a function of the jet fragmentation variable z, the ratio of the J/ψ pT to the jet pT. The nuclear modification factor, RAA, is then derived by comparing the yield in lead-lead collisions to the corresponding expectation based on proton-proton data, at the same nucleon-nucleon center-of-mass energy. The suppression of the J/ψ yield shows a dependence on z, indicating that the interaction of the J/ψ with the quark-gluon plasma formed in heavy ion collisions depends on the fragmentation that gives rise to the J/ψ meson.

Introduction

Dissociation of quarkonium states in nucleus-nucleus collisions is one of the best studied signatures of the formation of the quark-gluon plasma (QGP), a deconfined state of quarks and gluons. Although other nuclear effects have been identified, it is widely accepted that at least part of the suppression of the various quarkonium states in central lead-lead (PbPb) collisions, at the collision energies probed at the CERN LHC, is indeed coming from Debye-like screening of heavy-quark pairs in the QGP, as anticipated in the seminal paper by Matsui and Satz [1]. This picture may be probed with quarkonia produced at rest. With the emergence of competing mechanisms, however, it becomes interesting to study the momentum dependence of the nuclear suppression. In particular, this is true for regeneration [2], wherein quarkonia may be created from heavy quarks produced independently. This effect is expected to become more relevant with increased collision energy, as more heavy-quark pairs are produced. This explains the relatively low nuclear suppression at low transverse momentum (pT) that is observed at the LHC [3], as compared to data at lower collision energies [4,5].

Independent of these nuclear modifications, the interpretation of quarkonium results, and heavy-flavor results in general, is typically based on the assumption that quarkonia are formed at early times compared to the formation time of the QGP (on the order of 1 fm/c). However, the formation time estimate of quarkonia is based on general arguments rather than on a detailed calculation. Despite decades of theoretical developments, models generally are not able to describe the entirety of the quarkonium data. In particular, they are not able to simultaneously describe the polarization [6] and the pT-differential cross section [7]. A recent measurement by the LHCb Collaboration that looked at hadrons produced at small angles with respect to prompt J/ψ mesons (those not produced in b-hadron decays) in proton-proton (pp) collisions [8] gives some new insight into this puzzle. The observable is the J/ψ-jet fragmentation function, which corresponds to the distribution of z, the ratio of the J/ψ pT to the pT of the jet into which it is clustered. For jets in pseudorapidity 2.5 < η < 4.0 and pT > 20 GeV, LHCb observes that prompt J/ψ mesons are produced with far more in-jet associated hadroproduction than predicted by models, i.e., J/ψ mesons tend towards lower values of z.
Models of J/ψ production typically couple fixed-order perturbative quantum chromodynamics calculations with nonperturbative matrix elements that describe hadronization of the charm quark pair into a color-neutral state. A solution to this discrepancy was proposed in Ref. [9], where the evolution of the parton shower prior to the formation of the J/ψ is accounted for. By including this parton shower contribution, which is not adequately described in hadronization generators, such as PYTHIA [10], the authors were able to successfully describe the data. Although the LHCb measurement only concerns the subset of J/ψ mesons found inside a relatively high-p_T jet, a recent measurement by CMS indicates that, for J/ψ mesons with energy larger than 15 GeV, nearly all of them are produced in association with a significant degree of jet activity [11]. Assuming this explanation of the LHCb data is correct, this paradigm shift in our understanding of J/ψ production has important implications for the interpretation of J/ψ data in nucleus-nucleus collisions. It implies that J/ψ are not exclusively produced at short times, but may also be produced in the course of the interaction of a hard-scattered parton with the QGP. Hence, the suppression of the yield, as quantified by the nuclear modification factor (R_AA), may be sensitive to parton energy loss, the same phenomenon that gives rise to jet quenching. There are already some hints in this direction. First, as observed in Ref. [12], the J/ψ R_AA in PbPb collisions [13,14] appears to exhibit the same rise with p_T as light hadrons show [15], which for the case of light hadrons is well described by parton energy loss models [16-21]. Second, in mid-central collisions the J/ψ show a significant magnitude of elliptic anisotropy in their azimuthal angle (v_2) [22,23] at large p_T, a region where the hydrodynamical effects that produce such an anisotropy are expected to die out. We know of no explanation for this high-p_T v_2 feature other than path-length-dependent parton energy loss. The goal of the current measurement is to investigate the z dependence of the nuclear modification factor of jets containing a J/ψ meson, i.e.,

R_AA = N_AA / (T_AA σ_pp).

This ratio compares the per-event yield in PbPb collisions (N_AA) to the expectation based on pp collisions, by scaling the cross section of the latter (σ_pp) by T_AA. The factor T_AA is the average effective nucleon-nucleon luminosity delivered by a single heavy ion collision for a given centrality selection (a quantity related to the collision impact parameter) [24]. In the absence of nuclear effects, R_AA = 1. This Letter constitutes the first direct study of the nuclear modification of J/ψ mesons inside jets. Jets with a constituent J/ψ meson are studied in the jet p_T range of 30 < p_T < 40 GeV for 0-90%, as well as 0-20% and 20-90%, collision centralities. The jets are required to be in the pseudorapidity range |η| < 2, such that they are completely contained in the tracker acceptance, with no explicit selection on the rapidity of the J/ψ. The p_T of the J/ψ is measured down to a threshold of 6.5 GeV. This gives a range of z from 0.22 to 1. We investigate to what extent the R_AA varies with z, and, indirectly, with the formation time of the J/ψ. These data potentially constrain the roles of the different QGP interaction mechanisms that may be at play, in particular parton energy loss and Debye screening.
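As a quick illustration of this definition, the following minimal Python sketch computes R_AA per z bin. All inputs are illustrative placeholders, not the measured values; only the T_AA value for the 0-90% centrality selection is taken from the text, and the units are chosen so that R_AA comes out dimensionless (T_AA in mb^-1, σ_pp in mb).

import numpy as np

def r_aa(n_aa, t_aa, sigma_pp):
    # R_AA = N_AA / (T_AA * sigma_pp), per z bin: the per-event PbPb yield
    # divided by the T_AA-scaled pp cross section in the same bin.
    return np.asarray(n_aa) / (t_aa * np.asarray(sigma_pp))

t_aa_0_90 = 6.27                                   # mb^-1, 0-90% centrality (from the text)
n_aa      = [1.2e-6, 0.9e-6, 0.7e-6, 0.5e-6]       # PbPb per-event yields, per z bin (illustrative)
sigma_pp  = [0.50e-6, 0.35e-6, 0.22e-6, 0.12e-6]   # pp cross sections in mb (illustrative)
print(r_aa(n_aa, t_aa_0_90, sigma_pp))             # values below 1 indicate suppression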
The CMS detector The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the η coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid. Muons are measured in the range |η| < 2.4, with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive plate chambers. The efficiency to reconstruct and identify muons is greater than 96% over the full range of η. Matching muons to tracks measured in the silicon tracker results in a relative transverse momentum resolution, for muons with p T up to 100 GeV, of 1% in the barrel and 3% in the endcaps [25]. The forward hadron (HF) calorimeter uses steel as an absorber and quartz fibers as the sensitive material. The two halves of the HF are located 11.2 m from the interaction region, one on each end, and together they provide coverage in the range 3.0 < |η| < 5.2. Events of interest are selected using a two-tiered trigger system [26]. The first level (L1), composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100 kHz for high luminosity pp collisions and 30 kHz for PbPb collisions. The second level, known as the high-level trigger (HLT), consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1 kHz before data storage, for both pp and PbPb collisions. The particle-flow algorithm [27] aims to reconstruct and identify each individual particle in an event, with an optimized combination of information from the various elements of the CMS detector. The energy of photons is obtained from the ECAL measurement. The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies. Collision centrality is determined from the total transverse energy deposited in both of the HF calorimeters. The centrality is expressed as a percentage of the total inelastic hadronic cross section, with 0% representing the most head-on (central) collisions and 100% the most peripheral collisions. Hadronic events are selected by requiring at least three towers in each half of the HF calorimeter with an energy larger than 4 GeV. In this analysis, we restrict to the 90% most central events, where the hadronic event selection is fully efficient. 
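The centrality definition just described lends itself to a compact illustration: an event's centrality is, in effect, the fraction of the minimum bias sample with a larger total HF transverse energy. The sketch below is a toy version only, with a made-up stand-in for the HF spectrum; the actual calibration uses minimum bias collision data and accounts for selection efficiency.

import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for the minimum bias HF transverse-energy spectrum (arbitrary units).
hf_et_minbias = rng.gamma(shape=2.0, scale=1.0, size=100_000) ** 2

def centrality_percent(hf_et, reference=hf_et_minbias):
    # Centrality = fraction of minimum bias events with HF ET *larger* than
    # this event's value: 0% is the most central (highest ET) collision.
    reference = np.sort(reference)
    frac_below = np.searchsorted(reference, hf_et) / len(reference)
    return 100.0 * (1.0 - frac_below)

print(centrality_percent(hf_et_minbias.max()))  # ~0%  (very central)
print(centrality_percent(0.0))                  # ~100% (very peripheral)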
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [28]. Simulation Monte Carlo simulations are used as a baseline for the J/ψ efficiency and acceptance, as well as the detector response to jets. Correction factors to account for data-to-simulation discrepancies are discussed in the following section. Events are generated with PYTHIA 8 (version 8.230) [10], using the CP5 underlying event tune [29]. Prompt J/ψ are produced using all color-singlet and color-octet modes with the default matrix elements [30]. Simulation of PbPb collisions is done by embedding PYTHIA 8 events into PbPb collision events produced with the HYDJET generator (version 1.9) [31]. The event activity at mid-rapidity is matched to that of minimum bias collision data via an analysis of the energy deposited in randomly distributed cones. The response of the CMS detector is simulated using the GEANT4 package [32]. Prompt J/ψ meson signal extraction J/ψ mesons are measured via their decays into oppositely charged muon pairs. Events are selected with a trigger that requires that at least two muon candidates are reconstructed in the muon subsystem, first at L1, and then using refined information at HLT. In PbPb collisions, the rate is reduced at HLT by requiring one of the two muons to match a track from the silicon tracking subsystem. The invariant mass of the pair (m µµ ) is required to be within the range of 1 to 5 GeV. A set of offline muon selection criteria that is optimized for low p T muons is applied [13]. The J/ψ yield is obtained from a fit to the m µµ distribution in bins of jet p T and z. An example of a m µµ fit is shown in the left panel of Fig. 1. The signal is modeled as the sum of two Crystal Ball functions [33] with different widths, but common mean and tail parameters. The tail parameters are obtained from simulation. The dimuon background is modeled by a Chebyshev polynomial. Nonprompt J/ψ mesons, i.e., those that are produced in b hadron decays, are separated from the prompt component by exploiting the long lifetime of these decays. A fit is performed to the distribution of the pseudo-proper decay length, l J/ψ = Lm/|p|, where L is the distance along the beam axis between the J/ψ vertex and the nearest primary collision vertex, and m and p are the world-average J/ψ mass (3.097 GeV) [34] and the J/ψ candidate momentum, respectively. An example of a l J/ψ fit is shown in the right panel of Fig. 1. In order to parameterize the l J/ψ distribution, the sPlot technique [35] is used to decompose it into the J/ψ signal and non-J/ψ background components, using m µµ as the discriminating variable. Prompt J/ψ mesons can have nonzero values of l J/ψ , both positive and negative, because of the finite detector resolution. The resolution is modeled as a sum of two Gaussian functions, which are constrained by fitting the negative part of the l J/ψ distribution. The nonprompt component is modeled as an exponential function, with the lifetime treated as a free parameter, convolved with the same double-Gaussian resolution function. The combinatorial background is itself well described by prompt-and nonprompt-like components, again using the same resolution function. The nonprompt component of the background is described by an empirical sum of exponential functions. To extract the yield of prompt J/ψ mesons a two-dimensional fit to the joint m µµ and l J/ψ distribution is performed. 
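The pseudo-proper decay length defined above, l_J/ψ = Lm/|p|, is straightforward to compute. The sketch below uses the world-average mass quoted in the text and a signed vertex separation along the beam axis, which is a simplification of the full vertexing; prompt candidates then scatter around zero because of resolution, while nonprompt candidates populate a positive tail.

M_JPSI = 3.097  # GeV, world-average J/psi mass [34]

def pseudo_proper_decay_length(vz_jpsi, vz_primary, p_mag):
    # l_J/psi = L * m / |p|, with L the (signed) separation along the beam
    # axis between the J/psi decay vertex and the nearest primary vertex.
    L = vz_jpsi - vz_primary                 # cm
    return L * M_JPSI / p_mag                # cm, for p_mag in GeV

# A candidate 0.02 cm downstream of the primary vertex with |p| = 25 GeV:
print(pseudo_proper_decay_length(0.12, 0.10, 25.0))   # ~0.0025 cm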
All parameters are fixed by the one-dimensional fits, aside from the prompt and nonprompt J/ψ signal normalizations, and the background normalizations, as well as the nonprompt signal lifetime parameter. A more detailed description of the fitting procedure can be found in Ref. [13]. The J/ψ meson acceptance, and the efficiency of reconstruction and identification, are determined in simulation in finely binned histograms of the p T and η of the J/ψ. For PbPb collisions, the efficiency is also determined in bins of collision centrality. The correction for acceptance and efficiency is applied as a weight factor to each J/ψ candidate prior to the signal extraction. Differences between data and simulation are evaluated in-situ, from events collected with a single-muon trigger, using the tag-and-probe technique [36]. These additional corrections are applied to the simulation in relatively coarse bins. Tag-and-probe corrections are derived separately for trigger, identification, and reconstruction efficiency. Jet p T determination Jets are clustered using the anti-k T algorithm [37,38] with a distance parameter of R = 0.3. The constituents of the jets are the output of the particle-flow algorithm, described in Section 2, with the exception of the J/ψ candidate. Muon pairs with an invariant mass consistent with a J/ψ decay are replaced by the reconstructed J/ψ candidate. Jet energy corrections are derived from simulation as a function of p T and η using the framework described in Ref. [39], in which datato-simulation scale factors are obtained from Z+jet, γ+jet and dijet p T balancing studies. In the case of PbPb collisions, these corrections are derived from peripheral events to avoid additional p T imbalance from jet quenching. The jet energy correction procedure is only applied to the non-J/ψ component of the jet, as the momentum of the J/ψ is determined with high precision from its dimuon decay. In PbPb collisions, the constituent subtraction method [40] is employed to subtract the contribution to the jet momentum from the underlying event. As the signal particle of interest, the J/ψ meson is excluded from the subtraction, as it would be incorrect to attribute any fraction of its energy to the underlying event. In addition to the energy scale, the energy resolution of jets is somewhat degraded in data compared to simulation. The corresponding scale factors are derived from dijet balancing studies performed in pp collisions at √ s = 13 TeV in 2017 and 2018 [39]. The resolution is found to be between 10 and 20% worse than the simulation, depending on η. In order to incorporate this effect, a Gaussian smearing is applied to the measured jet p T values in simulation to match the resolution in data. This smeared mapping is then applied in the unfolding procedure, as described below. The finite p T resolution of jets causes migration of jets across z bins. It also causes migration of jets into and out of the nominal p T selection of 30-40 GeV. In order to capture the full jet response, we also perform the measurement in underflow and overflow bins of 10-30 GeV and 40-50 GeV, respectively. With the current data, a reliable yield cannot be extracted for p T intervals larger than the selected overflow bin, which motivates the choice of the nominal p T interval. To correct for these bin migration effects, we unfold the data simultaneously in these two dimensions (jet p T and z) using the iterative D'Agostini method [41], as implemented in the ROOUNFOLD package [42]. 
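The iterative D'Agostini method referenced above has a compact form. The sketch below is a toy numpy version, not the ROOUNFOLD implementation used in the analysis; it assumes unit efficiency for every true bin (each response column sums to 1) and includes the "super-iteration" re-seeding of the prior described below.

import numpy as np

def dagostini_unfold(measured, response, prior, n_iter=3):
    # Iterative D'Agostini (Bayesian) unfolding.
    # response[i, j] = P(measured bin i | true bin j).
    u = np.asarray(prior, dtype=float)
    for _ in range(n_iter):
        folded = response @ u                                  # expected measured spectrum
        ratio = np.where(folded > 0, np.asarray(measured) / folded, 0.0)
        u = u * (response.T @ ratio)                           # Bayes update of each true bin
    return u

# Toy 4-bin example; the response smears neighbouring bins into each other.
response = np.array([[0.7, 0.2, 0.0, 0.0],
                     [0.3, 0.6, 0.2, 0.0],
                     [0.0, 0.2, 0.6, 0.3],
                     [0.0, 0.0, 0.2, 0.7]])
measured = np.array([30.0, 50.0, 80.0, 40.0])
flat_prior = np.full(4, measured.sum() / 4)                    # flattened prior, as in the text

# "Super-iterations": re-seed the prior with the previous unfolded result.
# The iteration counts would be tuned as in the text, by folding a known
# truth in simulation and minimizing the chi-squared of the unfolded result.
unfolded = flat_prior
for _ in range(2):
    unfolded = dagostini_unfold(measured, response, unfolded, n_iter=3)
print(unfolded)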
The detector response matrix, which defines the relationship between the true and measured values, is taken from the PYTHIA 8 simulation, embedded into HYDJET for the case of PbPb collisions, aside from the aforementioned data-to-simulation scale factors for the jet energy scale and resolution. The response matrices for pp and PbPb collisions are shown in Fig. 2. The coarse bins show the jet p T dependence, whereas the finer inset bins show the dependence on z. Each column of measured bins is normalized to unity, such that each bin represents the fractional contribution of the given true p T and z values to the measured values for that column. The PbPb data exhibit substantially larger off-diagonal contributions than for pp, as expected from the larger underlying event, which drives the bin migration in PbPb. In this method, the unfolding is initialized with a "prior" distribution in the two variables, which is taken from simulation. To avoid bias from the presumed shape of the z distribution in simulation, which is known to be inaccurate [8], the prior distribution is flattened in z. After a tunable number of iterations, which corresponds to the degree of regularization, the procedure is truncated. To improve the performance of the unfolding, we run the full set of iterations multiple times. The prior of each such "super-iteration" is obtained using the z distribution that is output from the previous super-iteration. The numbers of iterations and super-iterations are optimized using simulated prompt J/ψ events, by applying a random Gaussian smearing to the measured z distributions according to the relative statistical uncertainties in data, in order to emulate the effect of statistical fluctuations. The optimization criterion is the minimum χ 2 between the unfolded and true z distributions. The use of three iterations is found to give the best performance in both pp and PbPb simulated samples. However, while only one superiteration is sufficient for pp, 25 super-iterations are required for PbPb collisions. Since the optimal settings depend not only on the statistical uncertainties of the measured z distribution, but also on its shape, we additionally evaluated them in nonprompt J/ψ meson simulation, where the measured distribution is more similar to the data than in the prompt J/ψ meson simulation, by unfolding the nonprompt z distributions with the nonprompt response matrices and using the settings with the best performance as a systematic variation in the regularization procedure. Systematic uncertainties The systematic uncertainties may be divided into three categories: J/ψ signal extraction, jet energy scale and resolution, and normalization uncertainties. J/ψ signal extraction: Uncertainties in the extraction of the J/ψ yields arise from the signal and background shape modeling, as well as from the acceptance and efficiency of muon reconstruction and identification. The signal shape of the m µµ distribution is varied from the two Crystal Ball parameterization to a single Crystal Ball function convoluted with a Gaussian function. The radiative tail of the signal model is also varied by treating the tail parameters of the Crystal Ball function as free parameters, rather than taking their values from simulation. The relative uncertainty in the J/ψ yield coming from the signal modeling is less than 1% in pp and less than 2% in PbPb collisions. 
The dependence of the signal shape on l J/ψ is evaluated by switching from a parameterization of the lifetime distribution of nonprompt J/ψ signal to a template built from simulation. The relative uncertainty from this variation is less than 1.5% in both pp and PbPb collisions, except at low J/ψ p T (<10 GeV) in pp collisions, where it reaches 4.5%, and where it dominates the uncertainties in the J/ψ signal extraction. The Gaussian function describing the lifetime resolution is varied to use the value obtained from simulation, rather than obtaining it from the data. This variation affects both the signal and background modeling, and results in a relative uncertainty of less than 2% in pp and PbPb collisions. The uncertainty in the description of the background is obtained as follows. Instead of a Chebyshev polynomial, an exponential of a Chebyshev polynomial is used to describe the m µµ distribution. The range of m µµ used in the fit is varied as well, to reduce the contamination from the ψ(2S) state. This results in an uncertainty of less than 1% in pp and PbPb collisions, except at low J/ψ p T (< 10 GeV) in PbPb collisions, where it reaches 6% and dominates the uncertainties in the J/ψ signal extraction. As was done for the signal, a template is used to describe the l J/ψ distribution, rather than a parameterization. In this case, the template is obtained from a background-like event sample using sPlot. The source results in a relative uncertainty of < 2% in the J/ψ yields in both pp and PbPb collisions. The acceptance and efficiency corrections lead to an associated relative uncertainty in the J/ψ yields in pp collisions of 1.5-2.5%, and 3-4% in PbPb collisions, depending on the value of J/ψ p T , and comprise the dominant source of uncertainty in the J/ψ signal extraction for J/ψ p T > 10 GeV. The largest component of the uncertainty is statistical in nature, coming from the finite size of the single-muon trigger samples used in the tag-and-probe method. The statistical uncertainty of the simulation samples is also taken into account, but is generally subdominant. A systematic component is evaluated by variations of the signal and background modeling in the extraction of the J/ψ yield from fits to the m µµ distribution, similar to the procedure for the J/ψ yield extraction in the main analysis, described above. The relative uncertainty from this source is around 1% in both pp and PbPb collisions, with no strong dependence on p T . Further details on the procedure used to evaluate systematic uncertainties for muons from the tag-and-probe method are reported in Refs. [25,36]. Jet energy scale and resolution: The uncertainty in the jet energy scale is evaluated from dijet and γ+jet balancing methods, as described in Ref. [39]. The uncertainty in the jet energy scale in pp collisions is around 3-4%, increasing as a function of |η|. In PbPb collisions, the uncertainty is around 4%, except for the barrel-endcap transition region (1.3 < |η| < 1.6), where it can become as large as 10%. An additional source of uncertainty in the jet energy scale is considered in PbPb collisions to take into account the differences in the particle mixture in simulation and data due to jet quenching, as described in Ref. [43]. This uncertainty is around 5% for the 30% most central events and 2.5% for all other centralities. The uncertainty in the jet energy resolution is evaluated from a dijet balancing method, as also described in Ref. [39]. 
In pp collisions, the uncertainty in the resolution is in the range of 2-4% in the barrel region, but is larger in the endcap and transition regions, where it varies in the range of 10-20%, depending on η. For PbPb collisions, the uncertainty is evaluated from peripheral data, as well as from pp data recorded in the same yearly running period, and varies from 3% in the barrel to 7% in the endcap and transition regions. In PbPb, there is an additional contribution to the uncertainty because of the modeling of the underlying event in simulation with HYDJET. This uncertainty is evaluated by comparing the energy in randomly distributed cones in data and simulation. The difference between the random cone distributions in data and simulation is used to estimate the effect on the jet resolution. Unfolding: The uncertainty in the regularization applied is estimated by varying the number of iterations used in the unfolding, from the settings found to be optimal for prompt J/ψ to those found to be optimal for the nonprompt J/ψ signal, which has a very different signal shape as a function of z. For nonprompt simulations in pp and PbPb collisions, the use of ten iterations is found to give the best performance, in contrast to three iterations for the prompt simulation. In both cases, the statistical precision of the data is emulated in simulation. The assumption of a prior that is flat in z is also relaxed, by instead initializing the prior to match the shape of the truth z distributions in simulated nonprompt J/ψ signal samples, which are more similar in shape to the measured z distributions. Finally, the statistical uncertainty of the simulated samples used in the unfolding is taken into account, and is generally subdominant. Systematic uncertainties related to the jet energy scale and resolution are propagated by producing variations of the detector response matrix and repeating the unfolding procedure for each variation. Normalization uncertainties: Cross section measurements in pp collisions have an overall uncertainty from the integrated luminosity of 1.9% [44] that is obtained from an analysis of data from a van der Meer scan [45]. The PbPb data are normalized by the equivalent number of hadronic nucleon-nucleon interactions in the data sample, which has an uncertainty of 1.3% coming from the selection of such events, taking into account possible contamination from electromagnetic interactions and beam backgrounds. To compare to data from pp collisions, the PbPb data are normalized by T AA , which is determined from the Monte Carlo implementation of the Glauber model described in Ref. [46]. For the centrality intervals used in this analysis, the corresponding values of T AA are 6.27 ± 0.14, 18.79 ± 0.36, and 2.717 ± 0.098 mb −1 , for the 0-90, 0-20 and 20-90% centrality intervals, respectively. The various sources of systematic uncertainty are shown in Fig. 3. Sources related to the J/ψ yield extraction are combined into a single component, for visibility. The total uncertainty is the quadrature sum of all the sources. Some sources of systematic uncertainty are largely correlated bin-to-bin, notably the uncertainty in the jet energy scale, jet energy resolution, and prior. No cancellation of systematic uncertainties is assumed, however, when comparing pp to PbPb collisions. Figure 4 shows the distribution in pp data of the fragmentation variable z for prompt J/ψ mesons. Its shape is compared to generator-level predictions from PYTHIA 8 for prompt and nonprompt J/ψ signals. 
In contrast to the PYTHIA 8 simulation, where prompt J/ψ are produced directly in the matrix element partonic scattering, the data show a relatively large degree of surrounding jet activity, indicative of J/ψ production inside of parton showers. The z distribution in data more closely resembles that of the nonprompt J/ψ PYTHIA 8 simulation, which contains a larger jet-like component from fragmentation, as well as other products of the b-hadron decay. The data confirm the trends observed in Ref. [8], but in a different rapidity range and at a different nucleon-nucleon center-of-mass energy.

Results

The PbPb data show a suppression level that is generally comparable to that observed for "inclusive" prompt J/ψ production, i.e., without an explicit jet requirement [13]. This is quantified by the ratio of these two distributions, R_AA, shown in Fig. 5 (right). The data show a slight rising trend as a function of z, with a significance of around two standard deviations. Fig. 6 shows the R_AA for two centrality selections, 0-20 and 20-90%. A larger degree of suppression is suggested for the more central selection, as expected for final-state effects related to the QGP. The rising trend with increasing z is somewhat more pronounced in central events. In the largest z bin, where the J/ψ is produced with fewer associated particles, the suppression is significantly reduced as compared to lower values of z, most importantly in the centrality-integrated results. Such a reduction of suppression at large z has a natural interpretation in terms of the jet quenching phenomenon. Lower values of z should be populated by jets with a J/ψ produced late in the parton shower. Such a parton cascade is expected to have a large degree of interaction with the QGP in the form of subsequent medium-induced emissions, as compared to a jet with a small partonic multiplicity [47]. In this picture, the rising trend observed for inclusive prompt J/ψ production would be explained by the same mechanism, as z tends to increase with increasing p_T.

Summary

Jets containing a prompt J/ψ meson were studied in proton-proton (pp) and lead-lead (PbPb) collisions at √s_NN = 5.02 TeV, for jets with transverse momentum 30 < p_T < 40 GeV and pseudorapidity |η| < 2. The distribution of the fragmentation variable z, the ratio of the J/ψ p_T to that of the jet, was measured in both systems. In pp collisions, prompt J/ψ mesons were found to have more surrounding jet activity, i.e., to populate lower values of z, than predicted by PYTHIA 8 simulations, suggesting that J/ψ production late in the parton shower is underestimated. The pp and PbPb distributions were compared by calculating the nuclear modification factor, R_AA, the ratio of PbPb data to the expectation based on pp data. The value of R_AA as a function of z shows a rising trend. The suppression at low z is found to be larger in the 20% most central (i.e., most "head-on") events, as compared to the less central selection. The results show explicitly that J/ψ produced with a large degree of surrounding jet activity are more highly suppressed than those produced in association with fewer particles. This finding emphasizes the importance of incorporating the jet quenching mechanism in models of J/ψ suppression.

[6] LHCb Collaboration, "Measurement of J/ψ polarization in pp collisions at √s = 7 TeV", Eur. Phys. J. C 73 (2013) 2631, doi:10.1140/epjc/s10052-013-2631-3, arXiv:1307.6379.
2021-06-28T01:16:06.069Z
2021-06-23T00:00:00.000
{ "year": 2021, "sha1": "d75a8b11049f3eb868fd2bcc665317e4fc1e0487", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2021.136842", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "d75a8b11049f3eb868fd2bcc665317e4fc1e0487", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
211099676
pes2o/s2orc
v3-fos-license
Evaluation of Acetaminophen Release from Biodegradable Poly (Vinyl Alcohol) (PVA) and Nanocellulose Films Using a Multiphase Release Mechanism Biodegradable polymers hold great therapeutic value, especially through the addition of additives for controlled drug release. Nanocellulose has shown promise in drug delivery, yet usually requires chemical crosslinking with harsh acids and solvents. Nanocellulose fibrils (NFCs) and 2,2,6,6-tetramethylpiperidine-N-oxyl (TEMPO)-mediated oxidized nanocellulose fibrils (TNFCs) with poly (vinyl alcohol) (PVA) could be aqueously formulated to control the release of model drug acetaminophen over 144 h. The release was evaluated with a multiphase release mechanism to determine which mechanism(s) contribute to the overall release and to what degree. Doing so indicated that the TNFCs in PVA control the release of acetaminophen more than NFCs in PVA. Modeling showed that this release was mostly due to burst release—drug coming off the immediate surface, rather than diffusing out of the matrix. Introduction Controlled drug delivery is of high therapeutic value because it can extend the release time of a single dosage. Popular drug delivery platforms include nanoparticles, tablets, films, and transdermal patches. In the case of films and patches, biodegradable polymers such as poly (lactic-co-glycolic acid) (PLGA), poly (vinyl alcohol) (PVA), polylactide (PLA), polyglycolide (PGA), and poly (ε-caprolactone) (PCL) have been used as biocompatible matrices [1]. Copolymers, additives, and plasticizers are often formulated into these matrices to increase characteristics such as solubility, stability, and mechanical properties [2]. Nanocellulose as a drug delivery platform has been used in the form of hydrogels, where NFCs and CNCs can be formulated as suspensions, particles, gels, or composite biopolymer delivery systems [4, 6,14,16,17]. Drug entrapment using NFCs or CNCs commonly involves crosslinking and/or surface modifications [4-6, [16][17][18]. Drug-loaded matrices include cellulose as suitable tablets for oral administration. In many of these formulations, nanocellulose is able to control the rate of drug release and deliver the appropriate drug concentration over time [19]. Mathematical models have long been used to predict the release behavior of drugs in order to aid optimal formulations and the design of new systems [1,[20][21][22][23][24][25]. Many existing models focus on diffusional release as described by Fick's law of diffusion [20,22,23]. Although applicable to drug transport through thick slabs, cylinders, and spheres, this approach accounts for one simple mechanism of release that is highly dependent on the matrix structure. As precise experimental data and observations are applied to diffusional models, the need for more sophisticated mathematical models becomes apparent. In addition to diffusional release, several other controlled release mechanisms include chemically controlled, osmotically controlled, and swelling and/or dissolution controlled [20,22]. More specific types of controlled release mechanisms incorporate polymer morphology and the internal structure through which the drug moves to reach the surrounding environment as well as properties specific to the chosen material (i.e., polarity, crystallinity, viscosity, molecular weight, additives) [2,20,23,25]. 
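Among these mechanisms, the purely diffusional baseline has a well-known closed series form for a thin slab (Crank's plane-sheet solution, assuming a uniformly loaded film, a perfect sink, and a constant diffusivity, all of which are idealizations). The sketch below uses the 0.2 cm film thickness from the Methods; the diffusivity value is illustrative only.

import numpy as np

def fickian_slab_release(t, D, L, n_terms=50):
    # Crank's series solution for fractional release from a plane sheet:
    # M_t/M_inf = 1 - sum_n 8/((2n+1)^2 pi^2) exp(-D (2n+1)^2 pi^2 t / L^2)
    n = np.arange(n_terms)[:, None]
    terms = 8.0 / ((2 * n + 1) ** 2 * np.pi ** 2) * np.exp(
        -D * (2 * n + 1) ** 2 * np.pi ** 2 * np.asarray(t) / L ** 2)
    return 1.0 - terms.sum(axis=0)

t_h = np.linspace(0, 144, 7)                      # hours, matching the 144 h study
print(fickian_slab_release(t_h, D=1e-4, L=0.2))   # D in cm^2/h (illustrative), L in cm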
Chemically controlled release accounts for erodible and pendant chain systems, in which the drug release rate is dependent on polymer degradation and on hydrolytic or enzymatic degradation, respectively. Erodible systems can dominate over diffusional release through surface or bulk erosion. This is when polymer degradation occurs because water moves quickly into a matrix of less-hydrophobic polymers (bulk erosion) or because water is excluded from the bulk of a hydrophobic polymer matrix (surface erosion) [20,22]. Osmotically controlled release occurs when local osmotic pressure becomes sufficiently high to cause the system to rupture. Polymers are typically homogenously loaded with highly soluble drugs to be released from encapsulated spheres in a single pulse [20,22]. In swelling-controlled release, the polymer is placed into water or buffer and the solvent diffuses into the polymer, causing volume expansion that releases water-soluble drug from the matrix [20,22]. However, there are two interfaces within this system: the polymer interface contacting water, moving outward, and the swelling interface, moving inward as the matrix swells. At this swelling interface, polymer chains undergo relaxation, which affects drug diffusion (Fickian or non-Fickian) through the polymer [20]. Acknowledging the effect of this on overall diffusion constructs a more realistic model of drug release. During swelling and relaxation, polymer chains uncrosslink, become disentangled, and dissolve in the surrounding solvent [20,22,24-26]. This is especially the case for biodegradable materials such as those mentioned above. With biodegradable polymers incorporating cellulose, a majority of the literature reports biphasic release profiles, specifically for water-soluble drugs and small molecules [4]. The first phase is a quick initial burst in the first few hours, followed by a prolonged diffusional release for up to 72 h [4,23,25,27-29]. However, incorporating the dissolution/relaxation mechanism would allow mathematical models to more accurately reflect the true driving forces that control the flux of drug through and out of the delivery platform [1]. In this research, an independent evaluation of NFC and TNFC as drug delivery platforms for acetaminophen (a model drug) was performed. The main hypothesis of this work was that the carboxylic groups present in TNFC will improve the availability of the drug to be released. Biodegradable composite films using polyvinyl alcohol in combination with NFC and TNFC and acetaminophen, as purely aqueous formulations, were compared based on their prolonged release of acetaminophen. No chemical linkers or surface modifiers were used to prepare the suspensions and corresponding films. Collected experimental data were then evaluated with a tri-phasic release model based upon Lao, Venkatraman, and Peppas 2008 to determine the potential mechanisms and the degree to which these mechanisms contributed based on the formulation.

Film Formulations

Four stock solutions of acetaminophen were made in deionized H2O based on acetaminophen's maximum solubility: 100% at 14.0 g/L, 75% at 10.5 g/L, 50% at 7.0 g/L, and 25% at 3.5 g/L (Table 1). To form the films, NFC and TNFC were separately added to acetaminophen solutions. PVA was then slowly added to the cellulose-drug mixtures in a 1:4 ratio (PVA:cellulose) with the specified type of cellulose (Figure 1) [28]. The mixtures were stirred for 20 min unheated, then heated to 90 °C and stirred for an additional 20 min.
Heat was turned off for a final 10 min of mixing. Films were produced through solvent casting (7.5 cm × 2.5 cm × 0.2 cm) and oven-dried overnight at 40 °C.

Drug Release Studies

Release studies were conducted using the SR-8 Plus Dissolution System from Hanson Research. Conditions were set at 37 °C and an agitation speed of 25 rpm. Samples with 2.5 cm × 2.5 cm dimensions were cut from film strips (7.5 cm × 2.5 cm × 0.2 cm) and placed in 1 L of phosphate-buffered saline (PBS) in individual dissolution cells. Samples (1 mL) were taken at 0 and 30 min, then every hour for the first 24 h. Samples were then taken every 12 h until day 6. The total sample (1 mL) was replaced with fresh filtered PBS buffer after every sampling (Millex-GN 0.20 µm Nylon Membrane Filter Unit). Samples were quantified using high-performance liquid chromatography (Shimadzu, reversed-phase C18) at 254 nm.

Drug Release Quantification

High-performance liquid chromatography (HPLC) (Shimadzu UFLC, Japan) was used with a premier C18 reverse-phase column (Shimadzu 50 mm × 4.6 mm, 3 µm particle diameter) to quantify acetaminophen in phosphate-buffered saline (PBS) from the release samples. Injection volume was 20 µL, with a detection wavelength of 254 nm and a flow rate of 2.0 mL/min.

Drug Release Modeling

Mathworks MATLAB R2018a was used to model drug release from the prepared biodegradable films. As described by Lao et al. [1], this mechanistic and modeling approach follows three steps: 1) solvent (water) penetration into the matrix causing a burst release; 2) a degradation-dependent "relaxation of the network" that creates more free volume for drug dissolution; and 3) drug removal to the surrounding medium, usually by diffusion [1]. Together, these three mechanisms combine into a triphasic release comprised of burst release, relaxation-induced drug dissolution release, and diffusion-controlled release. Each step is incorporated into Equation (1).
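One practical detail of the release studies above feeds into this modeling step: each 1 mL sample withdrawn from the 1 L vessel removes drug that must be added back when computing cumulative release. The paper does not spell out this bookkeeping, so the sketch below assumes the standard replacement correction for sampled dissolution data.

def cumulative_release(concs_mg_per_ml, v_vessel_ml=1000.0, v_sample_ml=1.0):
    # Total drug released (mg) at each time point: what is currently in the
    # vessel plus everything already carried away by earlier 1 mL samples.
    released, withdrawn = [], 0.0
    for c in concs_mg_per_ml:
        released.append(c * v_vessel_ml + withdrawn)
        withdrawn += c * v_sample_ml
    return released

# Concentrations (mg/mL) measured by HPLC at three successive time points:
print(cumulative_release([0.0010, 0.0020, 0.0025]))  # [1.0, 2.001, 2.503]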
Each term includes a fraction of drug released (Φ) and a constant value (k), designated by the subscripts b for burst release, r for relaxation-induced dissolution release, and d for diffusion-controlled release. Experimental data were initially fit to the burst term (Equation (2)), the first term in Equation (1). The imported data were calculated as the average values of each time point from six identical studies (n = 6). The fraction of burst release (Φ_b), the constant of burst release (k_b), and R² values were calculated based on this application.

Results and Discussion

Repeated release studies were conducted for all formulations (n = 6). Average values and standard deviations were calculated and plotted under the estimation of 75% drug entrapment in the matrix. Figure 2 illustrates the varied acetaminophen concentration in NFC/PVA films. As expected, the highest concentration had the highest percent of acetaminophen released (15.5% ± 3.7%). Lower concentrations released acetaminophen in descending order: 15.5% ± 3.7%, 12.0% ± 4.4%, and 1.3% ± 1.3%. This demonstrates that when there is more drug in the formulation, more drug is released over time. The same trends were also seen in the control (PVA) and TNFC/PVA formulations (Table 2). However, the lack of release near 100% of drug could indicate that the drug was stuck in the polymeric matrix. This could be due to the innate fibril structure of alternating amorphous and crystalline regions entangling and trapping drug.

Figure 3 compares the three formulations (PVA, NFC/PVA, and TNFC/PVA) based on drug concentration. Each plot shows the same release trend based on formulation: control (PVA) had the highest release, followed by TNFC/PVA, then NFC/PVA (above 100% release likely due to experimental error). Adding either type of cellulose to PVA films decreased drug release due to added resistance in the matrix. However, adding TNFC showed a greater release than adding NFC. The addition of TNFC increased the release of acetaminophen by 11.4% with 14 mg/mL, 17.8% with 10.5 mg/mL, 0.8% with 7.0 mg/mL, and 6.2% with 3.5 mg/mL compared to the addition of NFC (Table 2). The increased release with TNFC versus NFC was likely due to their innate structures and interactions with acetaminophen. TNFC is the product of TEMPO-oxidation of NFC, a common modification to cellulose during its processing to nanofibrils [12,15,29-32]. TEMPO is selective for primary alcohols (OH) and converts them to carboxylic acids (COOH) to change the surface reactivity. In acetaminophen, the hydrogen on the amine group has a pKa of 7.2. In the release environment of pH 7.4, this hydrogen leads to a partial positive charge (δ+) on the nitrogen and a partial negative charge (δ−) on the neighboring oxygen. The distribution of partial charges surrounded by the hydroxyl groups in PVA, and the combination of innate structures and interactions with additional hydroxyl and carboxylic groups, support the idea of better physical crosslinking in the matrix between TNFC and acetaminophen, leading to higher loading and higher release. Chemical crosslinking was disproved with FTIR spectra.

To further evaluate and understand these differences in release between the formulations, experimental data sets were fit to the first term of Equation (1) (simplified to Equation (2)). The imported data were the average values of each time point from identical studies of control, NFC/PVA, and TNFC/PVA, fitting for just the burst duration of the mechanism. Fits produced an average R² value of 0.98 ± 0.01. These significant values indicate the occurrence of a burst release during this 24-h period (Figure 4a). From this, the fraction of burst release (Φ_b) and burst constant (k_b) were determined as well (Table 3).

Table 3. Values related to the burst release of each formulation, determined by fitting Equation (2) to the experimental data in MATLAB.

As presented in Figure 4 and Table 3, the fraction of burst release increased with increasing drug concentration in the initial formulations. This accounts for drug attached to the surface of the film and released, rather than entrapped deeper in the polymer matrix. Therefore, even if more drug is released, it is not necessarily embedded into the polymer matrix and could instead simply be attached to the surface. The burst constant (k_b) value remained constant between formulations at 0.10 ± 0.01. The increased burst is not unexpected and, depending on the application, may be of clinical value. However, when fitting the entire 6-day sampling period to Equation (2), the modeled lines shift slightly, indicating the possible presence of another release mechanism (Figure 4b). The same trends were observed when fitting control (PVA) and TNFC/PVA over the same time periods.

Physically, this indicates that drug was being released from the outer layer of the film as the solution penetrated the matrix. Since, at most, only about 14% (NFC) and about 28% (TNFC) of the drug was released in this way (as seen in Figure 2a), the remaining amount must have been released through a secondary and possibly a third mechanism, as proposed. This would include the drug moving through the matrix via diffusion and/or through a relaxation-induced dissolution mechanism. Another possibility is that the drug was entangled in the matrix, attributed to the alternating amorphous and crystalline regions characteristic of NFC. This entanglement could affect the drug's rate of diffusion through the film, the penetration of water into the film, the amount of free volume for the drug to maneuver, and the dissolution rate of the film.
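Equation (2) itself is not reproduced in this extraction; a common assumption for such a burst term is the first-order form M(t)/M∞ = Φ_b(1 − exp(−k_b t)). The sketch below fits that assumed form with scipy, using illustrative data rather than the measured values, and reports Φ_b, k_b, and R² in the same spirit as Table 3.

import numpy as np
from scipy.optimize import curve_fit

def burst(t, phi_b, k_b):
    # Assumed first-order burst term: fraction released = phi_b * (1 - exp(-k_b * t))
    return phi_b * (1.0 - np.exp(-k_b * t))

# Average fractional release over the first 24 h (illustrative values, n = 6 means):
t_h  = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
frac = np.array([0.0, 0.005, 0.010, 0.020, 0.040, 0.070, 0.090, 0.120])

(phi_b, k_b), _ = curve_fit(burst, t_h, frac, p0=[0.15, 0.1])
resid = frac - burst(t_h, phi_b, k_b)
r2 = 1.0 - np.sum(resid ** 2) / np.sum((frac - frac.mean()) ** 2)
print(f"phi_b = {phi_b:.3f}, k_b = {k_b:.3f} 1/h, R^2 = {r2:.3f}")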
When fitting the entirety of Equation (1) to the experimental release data (Figure 5), the remaining fractions (Φ_r, Φ_d) and constants (k_r, D) can be determined (Table 4).

Figure 5. Fits of the full model (Equation (1)) to experimental data. MATLAB inputs were average values from each time point.

The quantification of three fractional releases indicates that burst release, relaxation-induced dissolution release, and diffusion-controlled release were present in all three formulations. In all the formulations, burst release dominated the mechanism, which is characteristic of acetaminophen. The diffusion-controlled release showed the second-highest fraction, followed by relaxation-induced dissolution. Although the dissolution fractions were fairly small compared to the other two fractions, they were still present and indicative of dissolution being a contributing mechanism to drug release.

Conclusions

Polyvinyl alcohol film formulations were varied in terms of both material (NFC vs. TNFC) and drug concentration, without using any chemical linkers. Physical crosslinking between nanocellulose and PVA proved to create a functional matrix for the release of acetaminophen. Release profiles for each followed the same trend: the more drug incorporated into the formulation, the greater the percent of drug released. The control PVA formulation had the highest release due to the lack of cellulose and resistance in the matrix. TNFC had a higher release when added to PVA compared to NFC. NFC had about 50% less drug released compared to TNFC films for most of the concentrations. This shows that TNFC/PVA created less resistance in the matrix and a more controlled release of acetaminophen when evaluated based on percent drug released. A triphasic mathematical model was applied to determine the presence and degree of the mechanisms occurring within these films. Control PVA and TNFC/PVA films demonstrated mainly diffusion-controlled and burst release, with extremely small fractions of relaxation-induced dissolution and prolonged diffusional release. NFC/PVA films also showed mainly diffusion-controlled and burst release, but with a more significant presence of the burst and relaxation-induced dissolution mechanisms.
The presence of each mechanism supports the need to incorporate additional phases into the mathematical model and to expand on the traditionally accepted diffusional model of solely burst and diffusion-based release for nanocellulose composites. Interpreting both the drug release percentages and the mechanisms, it is proposed that the drug is entangled in the matrix, attributed to the alternating amorphous and crystalline regions characteristic of NFC. This entanglement could be greater with NFC than TNFC and could affect the drug's rate of diffusion through the film, the penetration of water into the film, the amount of free volume for the drug to maneuver, and the dissolution rate of the film. This creates a basis for incorporating other molecules in the same formulations, evaluating their release mechanisms, and quantifying the amount released over time. Funding: This research was partially funded by USDA-NIFA 1007636 "Advanced applications for nanomaterials from lignocellulosic sources".
2020-02-13T09:24:21.478Z
2020-02-01T00:00:00.000
{ "year": 2020, "sha1": "0c7e43a8badda17dbfce68d3a85a31441b1bb3c2", "oa_license": "CCBY", "oa_url": "https://res.mdpi.com/d_attachment/nanomaterials/nanomaterials-10-00301/article_deploy/nanomaterials-10-00301-v2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ab75efc6f4e4ba0564e4cbe79c1305be5575a816", "s2fieldsofstudy": [ "Materials Science", "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
17473313
pes2o/s2orc
v3-fos-license
The Ins and Outs of Maternal-fetal Fatty Acid Metabolism

Fatty acids (FAs) are among the most essential substances in intrauterine human growth. They are involved in a number of energetic and metabolic processes, including the growth of cell membranes, the retina and the nervous system. Fatty acid deficiency and disruptions in the maternal-placental-fetal metabolism of FAs lead to malnutrition of the fetus, hypotrophy and preterm birth. What is more, metabolic diseases and cardiovascular conditions may appear later in life. Meeting a fetus' need for FAs is dependent on maternal diet and on the efficiency of the placenta in transporting FAs to fetal circulation. "Essential fatty acids" are among the most important FAs during the intrauterine growth period. These are α-linolenic acid, which is a precursor of the n-3 series, linoleic acid, which is a precursor of the n-6 series, and their derivatives, represented by docosahexaenoic acid and arachidonic acid. The latest studies have shown that medium-chain fatty acids also play a significant role in maternal-fetal metabolism. These FAs have a significant effect on the transformation of the precursors into DHA, which may contribute to a relatively stable supply of DHA, even in pregnant women whose diet is low in FAs. The review discusses the problem of fatty acid metabolism at the interface between a pregnant woman and her child with reference to physiological pregnancy, giving birth to a healthy child, intrauterine growth restriction, preterm birth and giving birth to a small for gestational age child.

INTRODUCTION

Intrauterine fetal development is a critical period in human development that greatly affects the quality of life in the postnatal period. If development proceeds properly, the usual outcome is that healthy babies are born at term and are not prone to metabolic or cardiovascular diseases in adult life. The basic precondition of proper intrauterine growth is an appropriate supply of nutrients transported across the placenta. Placental transfer is determined by numerous factors, such as the mother's health, the condition of the fetus, the transport efficiency of the placenta and the diet during pregnancy (Haggarty, 2002; Cetin & Alvino, 2009; Cetin et al., 2009). Among the most important nutritional substances are fatty acids (FAs), which are involved in a number of key energy, metabolic and structural processes.
The role of FAs in fetal metabolism can be analysed at two levels: the cellular level and the tissue level. At the cellular level, FAs are responsible for the proper development and metabolism of cell membranes as well as for maintaining their appropriate fluidity and permeability. In addition, they are involved in energy processes, in the metabolism of proteins and sugars and in the regulation of gene expression. They are also precursors of prostacyclins, prostaglandins, thromboxanes and leukotrienes (Haggarty, 2004). At the tissue level they are responsible for the development of the retina, nervous tissue and the brain, which is reflected in children's increased intellectual capabilities when measured later with the use of IQ tests (Gale et al., 2008; Helland et al., 2003). Disturbances in placental transport of FAs usually lead to premature deliveries, which have become an increasingly serious medical and social problem. According to the World Health Organisation (WHO), an estimated 15 million babies are born preterm. They require time-consuming, specialist care in the first weeks of life and, in the long term, they also need treatment for many medical conditions, including chronic diseases such as diabetes, cardiovascular conditions, etc.

THE INFLUENCE OF A PREGNANT WOMAN'S DIETARY FAT ON FETAL DEVELOPMENT

During pregnancy, a mother's body deposits fat in an amount which corresponds approximately to the baby's weight (3500 g) (Hytten, 1974). Fat deposition is most intense during the first and second trimesters of pregnancy (the anabolic period). The main purpose of maternal fat deposition is to transfer some of the deposits to the developing fetus. The body weight of a pregnant woman and the fat mass in her adipose tissue increase, even if the mother is malnourished (Prentice & Goldberg, 2000; Herrera, 2002; Herrera et al., 2006). During fat deposition the levels of phospholipids, non-esterified fatty acids and triglycerides (TG) increase in the maternal circulation. This mechanism is associated with an insulin-dependent decrease in lipoprotein lipase activity in adipose tissue and subsequent insulin resistance. The nutritional requirements of the fetus increase considerably during the third trimester of pregnancy, reflecting the fetus' substantial growth. This is the catabolic period for fat metabolism, including the mother's FAs, due to maternal lipolysis. Increased lipolysis results from the decreased sensitivity of insulin receptors, which are hormonally controlled by progesterone, cortisol, prolactin and leptin (Cousins, 1991; Catov et al., 2007). Oestrogens also promote high levels of lipids in the blood circulation of pregnant women. They inhibit the activity of hepatic lipoprotein lipase and increase intestinal absorption of dietary fats (Cetin & Alvino, 2009). These physiological changes in the maternal metabolism increase the concentration of circulating free fatty acids (FFAs) and glycerol, which are substrates for the hepatic biosynthesis of TG-rich very low density lipoproteins (VLDL). During the catabolic phase the TG concentration in the fasted state is twice as high as the peak postprandial TG concentration recorded in women who are not pregnant (Cetin & Alvino, 2009). The dynamics of changes in the fat content of fetal tissue is different from that found in the mother. Firstly, there is no catabolic period. Secondly, the anabolic period starts much later than it starts for the mother, that is, around weeks 20-22 of pregnancy. The increase in fetal fat occurs gradually over the following 10-12 weeks (Fig. 1)
DIETARY FATTY ACIDS

According to the "Dietary guidelines for Americans" (U.S. Department of Health, 2005), fats should constitute about 20-35% of calories consumed (Cetin & Alvino, 2009; FAO, 2010). In line with current recommendations and expert opinions, fatty acids should be a key component of the diet of pregnant women. From the physiological point of view, the most important role in maternal-fetal metabolism is performed by long chain polyunsaturated fatty acids (LC-PUFA) (Cetin et al., 2005; Koletzko et al., 2008; Innis, 2007b; Innis, 2007c; Smithers et al., 2008); the most important of these include the so-called essential fatty acids, α-linolenic acid (C18:3 n-3; ALA) and linoleic acid (C18:2 n-6; LA) (Fig. 2). These FAs are not synthesized by the body, and thus their only source is the mother's diet. ALA and LA are precursors for other, biologically important, long chain polyunsaturated fatty acids (LC-PUFA). Derivatives of ALA are represented by docosahexaenoic acid (C22:6 n-3; DHA), necessary for brain development (Innis, 2005), and eicosapentaenoic acid (C20:5 n-3; EPA), a precursor of numerous prostanoids and leukotrienes. LA is converted to dihomo-gamma-linolenic acid (C20:3 n-6; DGLA) and arachidonic acid (C20:4 n-6; AA), which are later converted to subsequent derivatives fundamental for immune response, such as prostaglandins, thromboxanes and leukotrienes (Haggarty, 2002; Cetin & Alvino, 2009; Haggarty, 2004). The recommended daily intake of DHA, EPA and AA for pregnant women is as follows: DHA = 200 mg/d, DHA+EPA = 300 mg/d and AA = 800 mg/d (FAO, 2008). These amounts should meet the needs of the fetus and promote proper nervous system development as well as decrease the risk of preterm birth and/or low birth weight. The dynamics of changes in fetal n-3 and n-6 composition are similar to the dynamics of changes in total fat deposition in the fetus. The fat content, including that of n-3 and n-6, increases in the maternal organism during the final ten weeks of pregnancy (Haggarty, 2004). A specific gradient of LC-PUFA is created between the mother and the fetus as a result of the increase in placental transport activity. The process is reflected in the difference in DHA and AA concentrations in the blood of the mother versus the fetus. During the final weeks of pregnancy, the DHA and AA content in fetal plasma is almost twice as high as in the mother's blood. The FAs are absorbed from fetal circulation and stored in fetal adipose tissue. As a result, towards the end of the pregnancy, their levels are several times higher in fetal adipose tissue than in the maternal adipose tissue - sixteen times higher in the case of DHA and over ninety times higher in the case of AA (Fig. 3) (Haggarty, 2002; Leaf et al., 1995; Otto et al., 1997; Lakin et al., 1998; Clandinin et al., 1981; Jansson et al., 2006).
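The FAO (2008) figures quoted above lend themselves to a simple computational check. The sketch below is illustrative only; the example intake values are invented.

```python
# Recommended daily intakes for pregnant women cited above (FAO, 2008), in mg/day.
RECOMMENDED = {"DHA": 200, "DHA+EPA": 300, "AA": 800}

def check_intake(dha_mg, epa_mg, aa_mg):
    """Return, for each recommendation, whether the day's intake meets it."""
    return {
        "DHA": dha_mg >= RECOMMENDED["DHA"],
        "DHA+EPA": dha_mg + epa_mg >= RECOMMENDED["DHA+EPA"],
        "AA": aa_mg >= RECOMMENDED["AA"],
    }

# Hypothetical one-day intake estimated from a food diary.
print(check_intake(dha_mg=150, epa_mg=100, aa_mg=850))
# -> {'DHA': False, 'DHA+EPA': False, 'AA': True}
```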
The core issue surrounding n-3 and n-6 intake in a mother's diet is not only the amount of the fatty acids (FA) but also the relative proportions of these FAs. Currently, pregnant women are advised to consume oily fish, rich in n-3 FAs, because of its beneficial effects on vascular function. In addition, it is generally assumed that the consumption of oily fish may also be beneficial for the development of the fetus's brain and retina. It is worth mentioning, however, that very high intakes from marine sources - particularly in the form of supplements - may not be beneficial for the developing fetus and may not be entirely free of risk (Haggarty, 2004). This problem stems from the common metabolic pathway of n-3 FAs and n-6 FAs in the processes of desaturation and carbon chain extension. The reactions are catalysed by two desaturases: Δ-5 and Δ-6. If the diet of a pregnant woman is abundant in fish and seafood, the increased amount of EPA may - through inhibition of Δ-5 desaturase - slow the creation of AA and its derivatives (Koletzko et al., 2008; Lafond et al., 2000; Llanos et al., 2005). If the maternal diet is rich in plant oils, such as sunflower-seed oil, safflower oil or corn oil, which contain large amounts of LA, then less DHA is produced from ALA as a result of Δ-6 desaturase inhibition, leading to decreased EPA biosynthesis. The imbalance between dietary n-3 and n-6 FAs may lead to structural changes in cell membranes, in which the composition of the LCPUFA lipid fraction depends on the current LCPUFA concentration in maternal blood. The cell membrane AA content decreases in women consuming foods rich in EPA and DHA. This may have an impact on the duration of pregnancy and on intrauterine fetal development. Studies have shown that n-3 and n-6 deficiencies and changes in the relative proportions of n-3 and n-6 also correlate with placental mass and a low value of the fetal/placental mass quotient (Cetin et al., 2002; Cetin et al., 2001). An evaluation of the mutual influence of EPA and DHA contained in the maternal diet and in the maternal-fetal metabolism was performed using laboratory animals. To this end, the diets of two groups of pregnant female rats were enriched with fish oil and olive oil, respectively. Lower AA, lower vitamin E and delays in development were found in the offspring of the former group (Smithers et al., 2008; Pardi et al., 2002). Similar changes were not observed in the offspring of the latter group. The diet of the first group of females was then enriched with dihomo-gamma-linolenic acid (DGLA), which is a precursor of AA. As a result of this modification, both the AA levels in the offspring of the subsequent litter and their overall development were normalised. This research demonstrates that the mere presence of LCPUFA in the maternal diet is insufficient to guarantee proper development of the fetus, which is, in fact, highly dependent on the composition and relative proportions of n-3 and n-6. Optimum intake of the latter reduces the risk of preterm birth and intrauterine fetal underdevelopment, as well as lowering the chances of major changes in the child's nervous system, which can have long-term negative consequences.
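The shared desaturation pathway described above behaves like textbook competitive inhibition. The sketch below uses standard Michaelis-Menten kinetics with a competitive inhibitor; every constant is hypothetical and serves only to show the direction of the effect (more EPA, slower AA synthesis).

```python
def competitive_rate(s, km, i, ki, vmax=1.0):
    """Michaelis-Menten rate in the presence of a competitive inhibitor.

    s, km: substrate concentration and its Michaelis constant
    i, ki: inhibitor concentration and its inhibition constant
    (all in arbitrary units; values are hypothetical)
    """
    return vmax * s / (km * (1.0 + i / ki) + s)

# Rate of AA formation on Delta-5 desaturase at a fixed DGLA level,
# for a low-fish diet versus a fish/supplement-rich diet (more EPA).
for epa in (0.1, 2.0):
    v = competitive_rate(s=1.0, km=1.0, i=epa, ki=0.5)
    print(f"EPA = {epa:.1f} -> relative AA synthesis rate = {v:.2f}")
```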
LCPUFAs VERSUS MCFAs

The influence of FAs of the n-3 and n-6 series on fetal development is currently the subject of intensive research. The role of ALA, LA, DHA, AA and others of the n-3 and n-6 series in the maternal diet and in maternal-fetal metabolism has been reasonably well recognised. Insufficient consumption of these fatty acids (below recommended standards) during pregnancy may disrupt the progression of the pregnancy and is often correlated with preterm birth or intrauterine growth restriction (IUGR) (Cetin et al., 2002; Cetin et al., 2001). Recent studies have shown that medium-chain fatty acids (MCFAs) also play an important role in maternal-fetal metabolism (Bobiński et al., 2015a; Bobiński et al., 2015b; Bobiński et al., 2015c; Nasser et al., 2010; Bobiński et al., 2013a; Bobiński et al., 2013b). Changes in MCFA content have been observed in the cord blood, breast milk and diet of pregnant women who gave birth at the borderline between physiology and pathology (Bobiński, 2015a). Studies of the diets of mothers who gave birth to "late" preterm neonates (weeks 35-37), or who gave birth to full-term infants who were small for gestational age (SGA), where APGAR scores (Appearance, Pulse, Grimace, Activity, Respiration) in both groups were 9-10, have found a smaller intake of medium-chain fatty acids (MCFAs) and short-chain fatty acids (SCFAs) (Bobiński, 2015a) compared to women who gave birth to healthy neonates on time (AGA) (Table 1). In contrast, the breast milk of women who gave birth to preterm or SGA neonates contained more MCFAs compared to women who had healthy full-term neonates (Bobiński et al., 2015a; Garg et al., 2005). Hence, there is a negative correlation between the amount of MCFAs consumed and their content in breast milk. These research results raise the question of the physiological role of MCFAs in prenatal and postnatal child development. Are there any relationships between maternal-fetal metabolism of the n-3 and n-6 series of FAs and MCFAs? The physiological role of MCFAs is connected mainly with energetic and metabolic processes. MCFAs constitute an optimal substrate in the mitochondrial process of energy production, which is particularly important for neonates - especially for immature neonates whose enzymatic systems are inefficient and whose demand for energy is very high. MCFAs are preferentially hydrolysed in the intestines; transporting them to the mitochondria does not require the carnitine shuttle, so that ATP molecules, precious for a neonate, are not consumed (Bobiński et al., 2013a). The metabolic role of MCFAs is broader and includes a number of processes. Using animal models, investigators have established that lauric acid (C12:0) may affect n-3 metabolism (Legrand, 2010). Under certain conditions this FA may be a precursor of LCPUFA of the n-3 series (Fig. 4).
It has been observed that the liver of rats is capable of slow conversion of C12:0 to the mono-unsaturated C12:1 n-3. This may lead to conversion of C12:1 to ALA by Δ6-desaturation, elongation, Δ5-desaturation and two final elongations (Legrand et al., 2002; Jan et al., 2004), especially in extreme physiological circumstances such as a prolonged lack of n-3 in the diet. If such processes take place in humans, there is a possibility that DHA will form from lauric acid, which would substantially change many fundamental issues in maternal-fetal metabolism and the nourishment of pregnant women. What is more, it also extends our knowledge of the physiological role of MCFAs. Studies have shown that myristic acid (C14:0) also affects n-3 and n-6 metabolism and may activate the conversion of ALA to DHA (Legrand et al., 2002). In cultured rat hepatocytes, myristic acid had a specific and dose-dependent effect on Δ6-desaturase activity (Rioux et al., 2005). Based on in vivo tests, Rioux showed that when myristic acid was supplied for two months in the diet of rats (0.2-1.2% of dietary energy), with a similar level of dietary ALA (1.6% of FA, 0.3% of energy), a dose-dependent accumulation of EPA was observed in the liver and plasma (Dabadie et al., 2005). Similar results were obtained in research on human diets. Compared with a diet containing 0.6% of myristic acid, a diet containing 1.2% of myristic acid, consumed over a 5-week period, significantly enhanced the EPA and DHA levels in the plasma phospholipid fraction (PL) and the DHA level in the plasma cholesteryl esters (Dabadie et al., 2006; Sola et al., 2007). A further increase in myristic acid consumption, from 1.2% to 1.8%, resulted in a decrease in the EPA level of the plasma PL fraction. This result suggests that the effect of myristic acid on circulating n-3 LCPUFA follows a U-shaped curve with a favourable turning point at around 1.2% of total daily energy (Rioux et al., 2005). In addition to participating in fat metabolism, myristic acid is involved in regulating protein activation by N-myristoylation. The myristoyl moiety has been shown to mediate protein subcellular localisation, protein-protein interaction or protein-membrane interaction (Rioux et al., 2005; Jan et al., 2004). Myristoylation of histone proteins in a chromatin area may regulate the transcription of genes located in that area. This process may therefore affect the expression of genes in the fetus, as well as its development. These processes, however, are currently poorly understood.

The data presented suggest a relationship between n-3 and n-6 metabolism and MCFAs, while also demonstrating the significant role of MCFAs in fetal development (Fig. 4). MCFAs also have a beneficial effect on the metabolism of maternal fat because they undergo fast liver oxidation and are not stored in adipose tissue. According to one hypothesis, medium-chain TGs have an inhibitory effect on apoB synthesis and reduce VLDL secretion by hepatocytes (Geliebter et al., 1983; Tachibana et al., 2005). While the currently valid nutrition standards for pregnant women include recommendations on the daily consumption of selected n-3 and n-6 FAs, there are no such guidelines for the intake of MCFAs. It would seem that these recommendations should be reviewed and that at least three MCFAs - capric acid, lauric acid and myristic acid - should potentially be added to the recommended daily intake.
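The dose-response relation reported by Rioux et al. can be caricatured as a parabola peaking at the favourable turning point of ~1.2% of daily energy. In the sketch below, the curvature (width) is a hypothetical parameter; the sketch reproduces only the qualitative shape, not the measured data.

```python
def circulating_n3_response(myristic_pct_energy, optimum=1.2, width=1.0):
    """Toy parabolic response of circulating n-3 LCPUFA to dietary
    myristic acid (% of daily energy): rises to a favourable turning
    point near 1.2% and falls beyond it. Optimum and width are
    hypothetical parameters."""
    return max(0.0, 1.0 - ((myristic_pct_energy - optimum) / width) ** 2)

for dose in (0.6, 1.2, 1.8):
    print(f"{dose:.1f}% of energy -> relative n-3 index "
          f"{circulating_n3_response(dose):.2f}")
```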
THE ROLE OF THE PLACENTA IN FATTY ACID METABOLISM

The deposition of fatty acids in the fetus does not depend solely on the levels of these FAs in the mother's diet. The placenta has a significant impact on the transport of FAs from the maternal to the fetal circulation (Larqué et al., 2014; Brett et al., 2014). Fatty acids have to pass through the villous trophoblast, which consists of two membranes: the microvillous, facing the maternal bloodstream, and the basal, facing the fetal bloodstream (Haggarty, 2002; Duttaroy et al., 2009a). The difference in FA concentrations between the maternal blood and the cord blood creates a gradient enabling the transfer of FAs to the fetus by simple diffusion. Specialized fatty acid binding proteins (FABPs), which are located in the microvillous and basal membranes of syncytiotrophoblast cells (Fig. 5) (Haggarty, 2002), also contribute to the placental transfer of FAs. Three types of FABPs can be found in the microvillous membrane, which directly faces the maternal bloodstream. The first type is the plasma membrane fatty-acid binding protein (FABPpm), which has a molecular mass of approximately 40 kDa and can be found throughout the body. One of the ways in which the placental FABPpm isoform differs from the other types is its selectivity for fatty acid binding in maternal blood (Kaufman & Scheffen, 1998; Campbell et al., 1998). FABPpm binds only 10% of total fatty acids - mainly AA (98%), DHA (87%) and smaller quantities of LA and OA (oleic acid) (Schmitz & Ecker, 2008). FABPpm acts as an extracellular acceptor of non-esterified fatty acids: it binds FAs from the maternal cardiovascular system and enables their diffusion through the lipid membrane by creating a local gradient of FAs between the intracellular and extracellular spaces. Fatty acid translocase (FAT/CD36) is the second type of protein involved in placental transfer of FAs. The sequence of FAT/CD36 is 85% homologous with that of glycoprotein IV (CD36). FAT is a highly glycosylated polypeptide chain with an apparent molecular mass of 88 kDa, which is present in both of the placental membranes, microvillous and basal. Unlike FABPpm and FATP, FAT is a multifunctional protein that interacts with a number of ligands, such as free fatty acids (FFAs), collagen, thrombospondin, oxidized LDL and others (Thorburn, 1991; Challis et al., 1998; Challis et al., 2002; Helliwel et al., 2004; Lundin-Schiller & Mitchel, 1990; Alvino et al., 2008). FAT participates not only in FA metabolism but also in angiogenesis, atherosclerosis and inflammation (Kaufman & Scheffen, 1998). FAT is a transmembrane protein functioning as a system which transports or translocates fatty acids to the cytoplasm of syncytiotrophoblast cells in a process that has yet to be explained (Cetin et al., 2009; Cetin et al., 2005; Schmitz & Ecker, 2008; Cetin & Alvino, 2009; Cross, 2006; Duttaroy, 2000; Duttaroy, 2009).
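Before turning to the remaining transport proteins, the simple-diffusion component mentioned at the start of this section can be approximated by Fick's first law. All numbers in the sketch below (concentrations, diffusivity, membrane thickness) are hypothetical placeholders chosen only to show the form of the calculation.

```python
def fick_flux(c_maternal, c_fetal, diffusivity, thickness):
    """Fick's first law across a thin membrane: J = D * dC / dx.
    A positive flux means net maternal-to-fetal transfer."""
    return diffusivity * (c_maternal - c_fetal) / thickness

# Hypothetical numbers, purely for illustration (SI units).
J = fick_flux(c_maternal=5.0e-3,   # mol/m^3 in the intervillous space
              c_fetal=1.0e-3,      # mol/m^3 on the fetal side
              diffusivity=1.0e-10, # m^2/s across the membrane
              thickness=5.0e-6)    # m, syncytiotrophoblast layer
print(f"net flux toward the fetus: {J:.2e} mol m^-2 s^-1")
```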
The third protein involved in placental FA metabolism is the fatty acid transporter protein (FATP), which can be found in the microvillous and basal membranes. So far, six isoforms of FATP have been identified, each having a different tissue expression pattern. Although the structure of FATP1, one of the best understood placental FATPs, has not been fully elucidated, it is proposed to have only one membrane-spanning region and several membrane-associated regions (Lewis et al., 2001). The role of FATP is to enhance fatty acid internalisation through cooperation with acyl-CoA synthetase: acyl-CoA derivatives are created, and fatty acid uptake consequently becomes unidirectional. Unlike FABPpm, FATP does not have specific preferences for fatty acid uptake.

The cytoplasm of syncytiotrophoblast cells contains intracellular fatty acid binding proteins (FABPs), such as H-FABP (heart), L-FABP (liver), A-FABP (adipose) and E-FABP (epidermal). The roles of these proteins are not yet fully understood. However, they participate in the intracellular transport and metabolism of fatty acids - especially in the conversion of the n-3 and n-6 series to their respective derivatives.

Fatty acids can only be transported via the microvillous membrane in their non-esterified form. However, due to their hydrophobicity, they are not present as free fatty acids in the mother's bloodstream. The vast majority of FAs are transported in the form of triglycerides inside the very low density lipoprotein fraction or bound to albumin (Auestad et al., 2003). VLDL fractions moving near the microvillous membrane are recognised by lipoprotein lipase attached to the membrane surface and hydrolysed to fatty acids, which bind to FABPpm, FAT and FATP and are transported in this form into the cytoplasm. Not all fatty acids are transported into the cell by means of these specific carriers (FABPs); this kind of transport applies mainly to LCPUFAs, which are the first to be released from TG. Other FAs, especially saturated ones, penetrate into the interior of the cell by free diffusion. Irrespective of the cellular transport mechanism, FAs are bound inside the cell by cytoplasmic FABPs occurring in two variants: heart-type H-FABP and liver-type L-FABP. The choice of the cytoplasmic carrier determines the later metabolism of LCPUFAs. Fatty acids combined with L-FABP undergo esterification and are stored in the cell, or are transported to the basal membrane of syncytiotrophoblast cells and transferred to the transport proteins located there, which are the same as those present on the microvillous membrane (FAT and FATP). Other fatty acids combine with H-FABP and are transformed into eicosanoids or, as in the case mentioned above, combine with the FAT and FATP of the basal membrane of syncytiotrophoblast cells before entering fetal circulation, where they combine with albumin or alpha-fetoprotein (Enke et al., 2008). Owing to the symmetrical distribution of FABP on the microvillous and basal membranes, fatty acids can be transported from maternal to fetal circulation and vice versa. In fact, the dynamics of FA transport in the two directions are not the same and are subject to various mechanisms and regulatory factors. For example, the transport of arachidonic acid (AA) into the cell from maternal and fetal circulation is an ATP-dependent process, yet its transport across the basal membrane requires Na+ ions in addition to ATP (Gale et al., 2008).
The regulation of FA transport from the mother to the fetus leads to the emergence of differences in the levels of particular FAs in the blood of the mother and of the fetus. These differences are particularly evident for the n-3 and n-6 long-chain polyunsaturated fatty acids (LCPUFAs). The differences stem from a certain preference of the placenta for LCPUFAs, involving both their uptake from the mother's bloodstream and their transport to the fetal circulation. Studies have shown that the concentration of arachidonic acid (AA) and docosahexaenoic acid (DHA) in the inter-microvillous space is already three to four times higher than in the mother's blood taken from outside the placenta (Schmiters et al., 2008; Cetin et al., 2002; Gauster et al., 2007). This concentration gradient does not result from the release of LCPUFAs from the placenta to maternal circulation but comes into existence because of the lipoprotein lipase discussed above. The enzyme preferentially hydrolyses TG at the 2-position, which is most commonly occupied by unsaturated fatty acids. The released AA and DHA are then transported, also on a priority basis, by FABPpm according to a specific hierarchy - DHA > AA > LA > ALA - defining the order of transport of these acids across the placental barrier. This hierarchy may change depending on the trimester of pregnancy, the content of these FAs in the placenta and their concentration in maternal and fetal blood. Because placental desaturases show very little or no activity, there is no biosynthesis of DHA and AA in the placenta from their precursors, LA and ALA (Cetin & Alvino, 2009). The source of placental DHA and AA is maternal plasma. Having penetrated into the syncytiotrophoblast, DHA is further transported to the fetal circulation, where it becomes a physiologically important component necessary for the nervous system to function. A part of the placental AA is used in the biosynthesis of prostanoids and leukotrienes, while the remainder, as in the case of DHA, enters fetal circulation. An important factor conditioning the penetration of n-3 and n-6 into the syncytiotrophoblast cells is the concentration of these FAs and, more precisely, their mutual proportions in the inter-microvillous space and in the syncytiotrophoblast cells. The competition for FABP begins as early as in the inter-microvillous space. It results from a particular property of LCPUFA and the law of mass action. The amount of LCPUFA in placental transport is also dependent on the content of trans fatty acid isomers in the maternal blood. These isomers, whose only source is the mother's diet, also compete with LCPUFAs for FABP binding sites, which reduces the placental uptake of LCPUFAs - including those acids that are most important in terms of biology: n-3 and n-6.
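The competition for FABP binding sites invoked above follows directly from the law of mass action. A minimal sketch with hypothetical relative affinities (chosen to mirror the DHA > AA > LA > ALA hierarchy) shows how raising the concentration of one competitor, here trans isomers, depresses the bound fraction of the others.

```python
def bound_fractions(conc, affinity):
    """Fraction of binding protein occupied by each ligand under simple
    competitive equilibrium: f_i = K_i*C_i / (1 + sum_j K_j*C_j)."""
    weights = {fa: affinity[fa] * c for fa, c in conc.items()}
    denom = 1.0 + sum(weights.values())
    return {fa: w / denom for fa, w in weights.items()}

# Hypothetical relative affinities reflecting the transport hierarchy.
K = {"DHA": 8.0, "AA": 5.0, "LA": 2.0, "ALA": 1.0, "trans": 3.0}

low_trans = bound_fractions({"DHA": 1, "AA": 1, "LA": 1, "ALA": 1, "trans": 0.1}, K)
high_trans = bound_fractions({"DHA": 1, "AA": 1, "LA": 1, "ALA": 1, "trans": 2.0}, K)
print(f"DHA bound fraction: {low_trans['DHA']:.2f} -> {high_trans['DHA']:.2f}")
```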
Inside the cell, cytosolic fatty acids, both n-3 and n-6, are subject to enzymatic treatment which results in bioactive derivatives belonging to the groups of prostacyclins, prostaglandins, thromboxanes and leukotrienes. The synthesis reactions of the derivatives are catalysed by a complex of oxygenases common to n-3 and n-6. Both groups of fatty acids, while being substrates for the same enzymes, show a mutual inhibitory effect resulting from the competition for an enzyme active site. EPA and AA provide an example of mutual placental inhibition of n-3 and n-6 acids. In the process of the biosynthesis of derivatives, EPA and its metabolites - eicosanoids - compete with AA and its derivatives in the placenta for access to cyclooxygenases and lipoxygenases. As a result of this process, the placental content of prostanoids and leukotrienes originating from the n-6 group decreases. Alpha-linolenic acid, the precursor of EPA, has similar inhibitory properties towards AA and LA transport (Haggarty, 2002; Haggarty, 2004; Burdge & Calder, 2005).

FATTY ACID CHANGES IN PRETERM, SGA AND IUGR INFANTS

According to WHO data, premature birth and the subsequent complications caused by irregularities in the course of pregnancy are a growing medical and social problem. Each year approximately 15 million children around the world are born preterm. They require cost-intensive and specialized medical care in the first few weeks of life. Maintaining an optimised supply of FAs during the period of fetal and infant life reduces the risk of IUGR, preterm birth and SGA neonates and, in the long run, reduces the risk of developing diseases such as diabetes, cardiovascular conditions and other chronic conditions later in life (Cetin et al., 2005; de Rooij et al., 2007). If pregnancy takes its proper course, the total plasma fatty acid concentration is higher in the mother than in the fetus (Cetin et al., 2009). This maternal-fetal profile of fatty acids is maintained by specialized placental systems that transport FAs according to a particular hierarchy. In this way a specific FA gradient is created between the blood of the mother and child. As a result of this physiological mechanism, the content of DHA and AA increases in cord blood relative to the levels of their precursors, LA and ALA, in the maternal blood. The maternal-fetal proportions of a number of FAs change in the course of IUGR. The fetal/maternal (F/M) ratio for LA increases, while it decreases for DHA and AA (Cetin et al., 2002). As a result of these disorders there is a quantitative reduction in the amount of DHA and AA available for proper fetal development and for important metabolic processes required for a successful pregnancy. Changes in the F/M ratio of MCFAs can also be observed during IUGR. Studies have shown that rearrangement of maternal-fetal FAs is already visible in the course of a small degree of pathology, such as late prematurity (weeks 35-37) and the birth of a term SGA child (Bobiński et al., 2013b). The cord blood of premature and SGA infants has also been identified as containing a higher percentage of lauric acid (C12:0), which is one of the MCFAs. Analysis of the n-3 and n-6 FA content in cord blood reveals differences when AGA neonates are compared with preterm and SGA neonates. Among the n-6 placental FAs of the AGA group, a higher content of dihomo-γ-linolenic (DGLA, 20:3 n-6), eicosatrienoic (C20:3 n-6) and arachidonic acids can be observed. No statistical differences are observed among the n-3 FAs. This result may indicate that in the case of prematurity and SGA, the n-6 FAs are preferentially transmitted via protein transport systems to fetal circulation.
There is thus a breach of the physiological hierarchy of placental FA transport - DHA > AA > LA > ALA - with the n-3 acid DHA on top, probably resulting in AA moving to the top of the hierarchy. Evidence for these changed preferences in the placental transport of DHA is the variation in the ratio of the DHA concentration in cord blood to that in maternal blood (DHA_F/DHA_M) among the groups of AGA, preterm (weeks 35-37) and SGA neonates. The value of this ratio is roughly twofold lower in the group of mothers whose children were born prematurely, and in the group of mothers with SGA children, than it is in the AGA group (Bobiński et al., 2013b). This means that the amount of DHA transported across the placenta is smaller in preterm and small-for-gestational-age neonates than in AGA ones. Consequently, a lower amount of DHA is available for fetal developmental processes, including the creation of the nervous system, for which DHA is an essential polyunsaturated fatty acid. Changes in the maternal-placental relationships of FA content also apply to AA, LA and ALA, which suggests that the placental transport of fatty acids that are essential from the biological point of view changes in the course of slight prematurity or hypotrophy.

RECOMMENDATIONS

One of the basic preconditions for the proper development of human beings is that an appropriate FA profile is provided to the organism during the intrauterine development, neonatal and infant periods. There are a number of research papers containing dietary recommendations for n-3 and n-6 during pregnancy and lactation (Haggarty, 2004; FAO, 2010; Duttaroy, 2009b). Analysis of the range of FAs shows that MCFAs play an important role in fetal development. Changes in the levels of these acids can be observed in cases of IUGR, prematurity and SGA in the diet during pregnancy (Pardi et al., 2002), in maternal blood (Cetin et al., 2002; Bobiński et al., 2013b), in cord blood (Cetin et al., 2002; Bobiński et al., 2013b) and in breast milk (Nasser et al., 2010; Bobiński et al., 2013a). Physiologically, these FAs fulfil many significant energetic and metabolic functions, which are especially important for the fetus and neonate. In the digestive system of an enzymatically underdeveloped child, TGs containing FAs with carbon numbers C10-C12 are, following birth, preferentially degraded by pancreatic lipase and absorbed directly into the blood circulation, bypassing the incorporation of FAs into chylomicrons (Schmeits et al., 1999). The bactericidal effect they exert on microorganisms in the digestive tract is also an important aspect. The specific structure of medium-chain fatty acids allows MCFAs to penetrate relatively easily into the cell in an undissociated form, where they then undergo dissociation. Dissociation disturbs the delicate pH balance inside the bacterium. To maintain a neutral pH, the bacterial cell begins to consume large quantities of ATP to preserve the proper acid-base balance.
As a result, the excessive demand for ATP limits and finally inhibits other metabolic processes of the bacterium (e.g. protein synthesis), which eventually leads to its necrosis (Rickie, 2003). The case described has been observed mainly in the intestines of both humans and animals, where MCFAs impeded the growth of gram-positive and gram-negative bacteria (Nakai & Siebert, 2002). It has also been observed that antibacterial activity decreases as the MCFA carbon chain is extended. The inhibitory properties of MCFAs with regard to Clostridium, Salmonella, Escherichia and Helicobacter are now reasonably well documented (Nakai & Siebert, 2002; Mauronek et al., 2003; Skrivanowa et al., 2006; Szewczyk & Hanczakowska, 2010). In the case of the latter bacterium, high activity is demonstrated mainly by lauric acid and medium-chain monoacylglycerols.

MCFAs perform another important role in acylation. This is especially true in the case of the myristoylation of proteins and in the conversion of the essential unsaturated fatty acids alpha-linolenic acid (ALA, C18:3 n-3) and linoleic acid (LA, C18:2 n-6) to their physiologically most important derivatives: docosahexaenoic acid (DHA, C22:6 n-3) and arachidonic acid (AA, C20:4 n-6) (Legrand et al., 2002) (Fig. 4). Taking into account the physiological and biochemical role of MCFAs, recommended norms for their consumption by pregnant women, along with norms for the enrichment of breast-milk substitutes in IUGR, SGA and preterm cases, should be elaborated. Studies have established that the daily intake of capric acid, lauric acid and myristic acid should not be less than 1.05 g/day, 1.45 g/day and 4.8 g/day, respectively (Bobiński et al., 2015a). Clarification of the optimum amount of MCFAs for the mother and for the fetus will require further research on a larger population of pregnant women.

Figure 2. Scheme of the metabolism of linoleic (n-6) and alpha-linolenic (n-3) acids. The figure presents only the most important changes connected with the extension of carbon chains that lead to the creation of LCPUFAs and their metabolites, prostanoids and leukotrienes, which are essential in foetal development. Owing to the lack of some elongases and desaturases in the placenta, the biosynthesis of the most important LCPUFAs, such as AA, DHA or EPA, takes place in the mother and partly in the liver of the foetus.

Figure 3. Percentage level of selected fatty acids in maternal diet, adipose tissue of mother and foetus, brain of foetus, maternal blood and cord blood. The content of all of the fatty acids presented in the figure constitutes proportions of total fatty acids in maternal diet (Otto et al., 1997), maternal adipose tissue (FAO, 2008), foetal brain and adipose tissue (Clandinin et al., 1981), and maternal and cord blood plasma (Bobiński et al., 2013b).

Figure 4. The influence of medium-chain fatty acids (MCFAs) on systemic metabolism.
Study on the influence of test solution concentration on the chemical corrosion resistance of ceramic tiles: Chemical resistance is one of the important evaluation factors of ceramic tiles. This article describes the method for determining the chemical resistance of ceramic tiles, discusses the influence of the acid-base solution concentration on the test results, and proposes corresponding improvement measures.

The principle and equipment of the experiment on the chemical corrosion resistance of ceramic tiles

In the experiment, the sample is directly exposed to the test solution, and the degree of chemical corrosion is observed and determined after a certain period of time under defined conditions. The equipment used in the experiment mainly includes: a balance (precision 0.05 g); an oven (operating temperature 110 ℃ ± 5 ℃); a biochemical incubator (20 ℃ ± 2 ℃); cylinders (made of borosilicate glass); suede cloth, etc.

1 Test method for chemical corrosion resistance of unglazed brick

Three kinds of unglazed bricks produced by different manufacturers were selected and named sample 1, sample 2 and sample 3. After cleaning the surface of each unglazed brick sample (size 50 mm × 50 mm) and measuring its weight, the samples were immersed in a container with test solution to a vertical depth of 25 mm. The non-cut edge of the sample must be completely immersed in the solution; the container was then covered with a lid and kept at 20 ℃ ± 2 ℃ for 12 days. After 12 days, the samples were rinsed with flowing water for 5 days. After rinsing, the samples were completely immersed in water and boiled for 30 minutes. They were then removed from the water, gently wiped with a wrung-out but still wet suede cloth, and dried in a drying oven at 110 ℃ ± 5 ℃.

2 Experimental method for chemical corrosion resistance of glazed tiles

Glazed bricks produced by three different manufacturers on the market were likewise selected for the experiment and named samples 4, 5 and 6. The cylinder used to hold the reagent was placed on the surface of each glazed ceramic tile sample, which had been cleaned and sealed effectively, and each was marked separately. The test liquid was injected at the opening to a liquid level of 20 mm ± 1 mm, so that the sample was in contact with the test liquid for 4 days. The device was gently shaken once a day to ensure that the liquid level of the test liquid remained unchanged. After 2 days the solution was replaced; after another 2 days the cylinder was removed and the sealing material on the glaze was thoroughly cleaned off with an appropriate solvent.

Influence of Test Solution Concentration on the Test Results of Ceramic Tiles' Chemical Resistance

According to the test standard GB/T 3810.13-2016, hydrochloric acid test solutions with volume fractions of 0.03 and 0.18 and potassium hydroxide test solutions with concentrations of 30 g/L and 100 g/L, respectively, are usually selected as the strong acid and strong base test solutions for the test of the chemical corrosion resistance of ceramic bricks. In this experiment, based on the standard concentrations, a series of acid test solutions deviating from the standard acid test solution by ±0.01 and ±0.02 (volume fraction), and a series of alkaline test solutions deviating from the standard alkali test solution by ±5 g/L and ±10 g/L, were selected as the experimental solutions.
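For reference, the full set of test solutions can be enumerated explicitly. The short sketch below simply generates the standard and deliberately deviated concentrations listed above.

```python
# Standard test solutions from GB/T 3810.13-2016:
# HCl as a volume fraction, KOH in g/L.
HCL_STD, KOH_STD = 0.03, 30.0      # low-concentration pair
HCL_HIGH, KOH_HIGH = 0.18, 100.0   # high-concentration pair

def series(standard, deviations):
    """Standard value plus symmetric +/- deviations, sorted."""
    values = {standard}
    for d in deviations:
        values.update({standard - d, standard + d})
    return sorted(values)

print("HCl series:", [round(v, 2) for v in series(HCL_STD, (0.01, 0.02))])
print("KOH series (g/L):", series(KOH_STD, (5.0, 10.0)))
```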
The detection situation arising when a deviation in the test solution concentration occurs was simulated in the experiment, and the influence of different strong acid and strong alkali test solution concentrations on the chemical corrosion resistance of ceramic tiles was studied. Graded according to the grading method of the test standard, the experimental results are shown in Tables 1-4.

1 Analysis of experimental results

From the experimental results in Table 1

2 Improvement measures

According to the inspection standards, the concentration of the strong acid and strong base test solutions in the experiment is mainly affected by the preparation steps of the solution and the operation of the inspectors. Therefore, we propose the following improvements:

1: When preparing the test solution, a standard solution within its validity period and calibrated measuring instruments shall be used, in strict accordance with the operating instructions for the preparation of the test solution. After preparation, the test solution shall be stored in strict accordance with the required preservation conditions to ensure the accuracy of the test solution concentration.

2: The inspection personnel shall be trained and shall take up their posts only after passing the operation-level examination. During the experiment, attention shall be paid to controlling the volume of the test solution to ensure its accuracy.
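Improvement measure 2 can be made quantitative. For a test solution prepared by dilution (C1V1 = C2V2), the relative error of the final concentration is, to first order, the sum of the relative errors of the two measured volumes. The volumes and tolerances below are illustrative values, not figures from the standard.

```python
def dilution_conc(c_stock, v_stock, v_final):
    """Final concentration from C1 * V1 = C2 * V2."""
    return c_stock * v_stock / v_final

def relative_conc_error(dv_stock, v_stock, dv_final, v_final):
    """First-order error propagation: dC/C ~ dV1/V1 + dV2/V2."""
    return dv_stock / v_stock + dv_final / v_final

c = dilution_conc(c_stock=300.0, v_stock=100.0, v_final=1000.0)  # 30 g/L KOH
err = relative_conc_error(dv_stock=0.5, v_stock=100.0, dv_final=5.0, v_final=1000.0)
print(f"target {c:.0f} g/L, worst-case deviation ~{c * err:.2f} g/L")
```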
Fluid catalytic cracking: recent developments on the grand old lady of zeolite catalysis. Fluid catalytic cracking (FCC) is one of the major conversion technologies in the oil refinery industry. FCC currently produces the majority of the world's gasoline, as well as an important fraction of propylene for the polymer industry. In this critical review, we give an overview of the latest trends in this field of research. These trends include ways to make it possible to process either very heavy or very light crude oil fractions, as well as to co-process biomass-based oxygenates with regular crude oil fractions, and to convert these more complex feedstocks into an increasing amount of propylene and diesel-range fuels. After providing some general background on the FCC process, including a short history as well as details on the process, reactor design, chemical reactions involved and catalyst material, we will discuss several trends in FCC catalysis research by focusing on ways to improve the zeolite structure stability, propylene selectivity and the overall catalyst accessibility by (a) the addition of rare earth elements and phosphorus, (b) constructing hierarchical pore systems and (c) the introduction of new zeolite structures. In addition, we present an overview of the state-of-the-art micro-spectroscopy methods for characterizing FCC catalysts at the single particle level. These new characterization tools are able to explain the influence of the harsh FCC processing conditions (e.g. steam) and the presence of various metal poisons (e.g. V, Fe and Ni) in the crude oil feedstocks on the 3-D structure and accessibility of FCC catalyst materials.

Introduction

Fluid catalytic cracking (FCC) is one of the major conversion technologies in the oil refinery industry and produces the majority of the world's gasoline. The process is in operation at over 300 out of a total of 646 refineries, as of the beginning of 2014. It is important to note that FCC is not the only conversion process used in oil refineries, as there are also e.g. hydrocracking units. Fig. 1 provides an overview of the different conversion processes in use in oil refineries as of the beginning of 2014, expressed as both the number of barrels of crude oil processed per day and the number of refineries utilizing the processes. 1 A number of oil refineries use multiple conversion technologies, and some refineries even have more than one FCC unit. Apart from producing gasoline, the FCC unit is also a major producer of propylene and, to a lesser extent, raw materials for petrochemical processes. It is estimated that ~2300 metric tons of FCC catalyst are produced per day, 2 or ~840 000 metric tons per year. This implies that, on average, approximately 0.16 kg of FCC catalyst is used for the conversion of a barrel of feedstock. This equals about 0.35 lbs per bbl, in the units more conventionally used in the field, when making use of vacuum gas oil (VGO). Heavier feedstocks, such as resid, require more catalyst material (0.4 lbs per bbl), while lighter feedstocks, such as heavy gas oil (HGO), require less catalyst (~0.15 lbs per bbl). 2 The leading worldwide FCC catalyst producers are W. R. Grace, Albemarle and BASF, while local producers like CCIC in Japan and Sinopec and PetroChina in China have smaller market shares. In this review article, we will demonstrate that, in spite of the fact that FCC has been practiced for almost 75 years already, the field is still very active and still central in many research activities of both academia and industry.
New developments in the availability of feedstocks, such as shale oil and gas and tight oil, the quest to increase the use of renewable resources, as well as changes in the demand for gasoline, result in a desire to change the selectivity of the FCC process. This development has led to a renewed interest in new molecular sieves, zeolites with hierarchical pore structures, and the stabilization of the zeolites used in FCC. At the same time, a rapid development in analytical tools has recently led to a substantial increase in the fundamental understanding of the integral FCC catalyst particle at sub-micrometer resolution. Reports on new spectroscopic tools used in the analysis of FCC catalyst materials are published in rapid succession. All in all, research in the field of FCC, the grand old lady of zeolite catalysis, is very much alive.

A short history

Commercial production of petroleum dates back to 1859, when Colonel Edwin L. Drake found "rock oil" in Titusville (PA, USA). The initial petroleum products were refined in very simple refineries without conversion capability. At the beginning of the 20th century, the number of cars propelled by an internal combustion engine sharply increased, and a shortage of gasoline developed. Thermal cracking, in which the unused fractions in the higher boiling range were converted to gasoline-range molecules, was first introduced in 1913, by Burton at Standard Oil of Indiana. 3,4 However, the gasoline produced by this process was of relatively poor quality. Additives like tetra-ethyl lead, discovered in the 1920's by Midgley, could improve the "octane number" of gasoline, 5 but other solutions were required. The first technical embodiment of catalytic cracking was introduced in 1915, when McAfee at Gulf Refining Company developed a catalyst based on aluminum chloride. 6 However, this process was not economically feasible, 7 and was abandoned. In the 1920's, the French engineer Houdry experimented with the conversion of lignite to useful products, and found that clay minerals could convert his lignite-based oil to a fuel similar to gasoline. 3,8 This was the advent of catalytic cracking as we know it today. Houdry moved to the USA and developed his process with the Socony-Vacuum Oil Company (which later became Mobil Oil Company), and eventually the first catalytic cracker operating the Houdry process, which processed 15 000 barrels of petroleum per day, was started up in 1936 in Paulsboro (NJ, USA). The first full-scale commercial plant went on-stream in 1937 at Sun Oil's refinery in Marcus Hook (PA, USA). 8 The catalyst was replaced by a synthetic silica-alumina already in the early 1940's, and the process, which produced very high quality fuels, was very quickly developed to produce aviation fuel for the allied war effort in the Second World War. The original Houdry process made use of a fixed bed reactor. In 1938, a consortium called Catalytic Research Associates (originally Standard Oil of New Jersey, Standard Oil of Indiana, M. W. Kellogg Co., and I. G. Farben) set out to develop a new cracking process. 9 At the beginning of the Second World War, I. G. Farben was dropped from the consortium, and Anglo-Iranian Oil Co. Ltd, Royal Dutch-Shell Co., The Texas Co. and Universal Oil Products Co. (UOP) joined. A pilot plant based on a powdered catalyst moving through a pipe coil reactor and a regenerator was built in Baton Rouge (LA, USA). The 100 barrels per day unit was called PECLA-1 (Powdered Experimental Catalyst, Louisiana).
In about a year, the system was developed to commercial stage, and in mid-1942 the first commercial FCC unit (PCLA-1) was started up. 9,10 This system was based on an up-flow reactor and regenerator 11 and used a clay-based catalyst. 9 It was based on the work of Lewis and Gilliland, 12 working with Standard Oil Company of New Jersey, who suggested that a low velocity gas flow through a powder might "lift" it enough to cause it to flow in a manner similar to a liquid. 13 The system was extremely successful, and with ongoing developments, 14 at the end of the war 34 FCC units were in operation in the USA. PCLA No. 3, which was the second unit at Baton Rouge, was started up in June 1943. This unit is still in operation today, and is the oldest operating FCC unit, as the PCLA-1 unit was shut down in 1963. As mentioned before, the initial FCC process used clay-based catalysts. Improvements were soon made, and synthetic amorphous SiO2-Al2O3 or SiO2-MgO-based catalysts were developed already in the 1940's. 17 The reason for this was an improved selectivity to the desired products, 18 as can be observed in Fig. 2, using data from ref. 15 and 16. The graph shows a combined effect of activity increase and selectivity improvement. In the early 1960's and 1970's, synthetic crystalline microporous aluminosilicates (i.e. zeolites) were invented at the laboratories of Union Carbide and Mobil Oil Corporation. The first of these relevant to FCC was synthetic faujasite (IUPAC structure code FAU 19), or zeolite Y (Linde Y), invented by Breck at Union Carbide. 20 Zeolite Y in various improved forms has been the main cracking component of FCC catalysts since 1964. 21 The initial embodiment was Mg-stabilized, while the currently used rare earth (RE)-stabilized zeolite Y was introduced fairly quickly after that. 21 A second zeolite that has found large-scale application in FCC is zeolite ZSM-5 (IUPAC structure code MFI 19), which was invented in 1973 by Argauer and Landolt at Mobil Oil Corporation. 22 The main application of zeolite ZSM-5 has been in FCC operation targeting an increased propylene yield. Fig. 2 clearly shows that the introduction of zeolite materials in FCC catalyst formulations resulted in a drastic increase in the gasoline yield in the 1970s and 1980s. The books by Venuto and Habib 23 and Scherzer 24 give good accounts of the history and background of the FCC process up to the 1980's.

Process and reactor design

Although a number of different designs exist for the FCC process, 25,26 a number of general principles can be described on the basis of Fig. 3. FCC, or at least the cracking reaction, is an endothermic process. The heat required for cracking is produced by sacrificing a small portion of the feedstock and burning it in the regenerator. Hot catalyst material is combined with pre-heated feedstock at the bottom of the riser reactor. The catalyst-to-oil ratio at the bottom of the riser is larger than one, and a typical ratio is 5.5. The temperature at the bottom of the riser is typically in the range of about 550 °C. The reactant mixture expands due to the cracking reaction as gases are formed, and the catalyst/feedstock mixture is rapidly transported up the riser reactor, at speeds approaching 40 m/s. The typical contact time in a riser is therefore in the order of seconds. At the top of the riser reactor, the temperature has dropped to about 500 °C, as catalytic cracking is an endothermic process.
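The figures quoted above allow a back-of-the-envelope check of the riser. The sketch below assumes a riser height of ~35 m and a catalyst heat capacity of ~1.1 kJ/(kg K) - both hypothetical values - and ignores the considerable heat consumed by feed vaporization at the riser bottom; it merely shows how the contact time and the heat carried by the catalyst follow from the numbers given.

```python
def riser_contact_time(height_m, velocity_m_s):
    """Rough gas/catalyst residence time in the riser."""
    return height_m / velocity_m_s

def heat_from_catalyst(cto, cp_kj_per_kg_k, delta_t_k):
    """Heat given up per kg of oil feed by the cooling catalyst:
    Q = CTO * cp * dT (kJ per kg of feed)."""
    return cto * cp_kj_per_kg_k * delta_t_k

t = riser_contact_time(height_m=35.0, velocity_m_s=40.0)   # height assumed
q = heat_from_catalyst(cto=5.5, cp_kj_per_kg_k=1.1, delta_t_k=50.0)
print(f"contact time ~{t:.1f} s; heat delivered ~{q:.0f} kJ per kg of feed")
```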
The catalyst is separated from the product mixture and stripped of remaining useful product by steam treatment. The products are further refined downstream. The catalyst material, on which a certain amount of carbon, better known as coke, has been deposited during the cracking process, is transported to the regenerator, where the coke is burned off. The catalyst is thus regenerated and re-used continuously. Depending on the exact conditions (such as the oxygen availability), the regenerator temperature can reach up to 760 °C. 16 The selectivity to gasoline is in the order of 50% (see also Fig. 2). The catalyst temperature cycles between about 500 °C and about 760 °C, while it is moving at great speed. It is clear that this means the catalyst is exposed to harsh reaction conditions. As a result, the catalyst deactivates. A conservative estimate is that a typical FCC catalyst particle has an average lifetime in the order of about 1 month. Since it is not possible in the present process to selectively remove the deactivated catalyst, refiners remove a small portion of the complete inventory of the regenerator at fixed intervals (typically daily), and replace the removed catalyst with fresh catalyst. When this practice is performed for a longer period, a more or less steady state is reached in the catalyst life-time distribution, which is called equilibrium catalyst, or E-cat. Depending on the size of the FCC unit and the operational parameters, catalyst withdrawal rates can be between 1 and 30 tons per day.
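The steady state described above follows from elementary probability: if a fixed fraction of a well-mixed inventory is withdrawn and replaced every day, the age distribution of the equilibrium catalyst converges to a geometric (discretized exponential) form. The inventory size and withdrawal rate in the sketch are hypothetical, though the withdrawal rate lies within the 1-30 tons-per-day range quoted above; with these numbers the mean particle age comes out near the one-month lifetime estimated earlier.

```python
def ecat_age_distribution(replacement_fraction, max_age_days):
    """P(particle age = n days) when a fixed fraction f of a
    well-mixed inventory is replaced daily: P(n) = f * (1 - f)^n."""
    f = replacement_fraction
    return [f * (1.0 - f) ** n for n in range(max_age_days)]

# Hypothetical unit: ~7.5 tons/day withdrawn from a 250 ton inventory.
p = ecat_age_distribution(7.5 / 250.0, max_age_days=365)
mean_age = sum(n * pn for n, pn in enumerate(p)) / sum(p)
print(f"mean particle age ~{mean_age:.0f} days")   # ~32 days, i.e. ~1 month
```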
The FCC unit in the oil refinery

The function of the FCC unit in an oil refinery is to convert heavy gas oil (HGO), vacuum gas oil (VGO) or residue feedstocks into useful products. Fig. 4, based on models by Fu et al. and Ma et al., 27 provides an artist's impression of molecules such as could be found in an FCC feedstock, depicting larger aromatic structures with alkyl side-chains, as well as sulfur and nitrogen impurities (oxygen would be present in similar molecules), while Fig. 5a illustrates the complexity of a typical VGO feedstock with a GC × GC plot. When applying a zeolite Y-containing FCC catalyst material, the wide variety of molecules present in the VGO feedstock is converted into molecules with, on average, a lower molecular weight, as illustrated in Fig. 5b, including molecules in the gasoline range (i.e., the ~150 °C boiling temperature range). A typical molecule in the gasoline range would be 2,2,4-trimethylpentane (i.e., isooctane). VGO feedstocks typically boil at 340-540 °C, 30 while resid has a higher boiling range (>540 °C) and contains multi-layered systems of poly-aromatic rings. In addition to multi-aromatic ring structures, both VGO and resid also contain impurities, such as sulfur and nitrogen, and Ni, Fe and V. These are typically remainders from the plant or animal life forms that originally made up the organic matter that decayed into fossil fuels over millions of years, although they can also originate from the interaction of the oil fractions with rock formations. Interestingly, by comparing the GC × GC plots of Fig. 5b and c, one can appreciate the influence of the addition of zeolite ZSM-5 to an FCC catalyst material. A more schematic way of illustrating the FCC conversion process is shown in Fig. 6. 31 Approximately 45% of the original feedstock (i.e., middle distillates, naphtha, and C2-C4-range molecules) can be further processed without conversion, e.g. in reforming and isomerization, to increase its value, and will likely require some form of hydrotreatment (e.g. HDS) to remove impurities. A major part of the remaining, relatively low-value bottom-of-the-barrel fractions (HGO and VGO in this example) is converted to desired products by the actions of the FCC catalyst, in which molecules are cracked to form high-octane-rating products. The residue is not converted by the FCC catalyst in this particular example, 31 although present-day FCC catalyst materials can certainly convert resid, and resid FCC has now become an important process; consequently, a vast amount of research is focused on resid conversion.

Structure and composition

The FCC process as described above sets a number of demands for catalyst parameters: 16

- Activity, selectivity and accessibility: first of all, the catalytic properties to convert the large feedstock molecules to the desired molecules;
- Attrition resistance: the catalyst particles must be able to withstand the impacts with each other and the unit walls during circulation;
- Hydrothermal stability: the catalyst must be able to withstand the temperature and steam partial pressure in the regenerator;
- Metals tolerance: the catalyst must be able to withstand the actions of poisons in the (heavier) feedstock;
- Coke selectivity: the catalyst must make the minimum amount of coke at high cracking activity, especially when processing heavier feedstocks, such as resids; and
- Fluidizability: the catalyst components must be available in a form that allows fluidization in the regenerator.

Fig. 4 Typical molecules that could be found in an FCC feedstock, depicting larger aromatic structures with alkyl side-chains, as well as impurities: in this case sulfur (yellow) and nitrogen (blue). Structures based on VGO-molecule cores described in Fu et al. and Ma et al. 27 The structures were sketched in the ADF-builder and energy-minimized using the built-in UFF force field in ADF. 28 The resulting atomic positions were rendered with POV-Ray 3.6. 29

The above demands can be met in a catalyst system that combines a number of components, as depicted in Fig. 7. As described above, the main active component is a zeolite, usually a stabilized form of zeolite Y. This material contains an internal porous structure in which acid sites are present, which can convert larger molecules to the desired gasoline-range molecules. Clay is added as a filler, but also for heat-capacity reasons. Various alumina and silica sources are used to produce a meso- and macroporous matrix that allows access to, and pre-cracks, the larger molecules in the feedstocks. In addition, these components are used to bind the system together. Additional components may comprise specific metal traps for trapping Ni and V. The components are typically mixed in an aqueous slurry and then spray-dried to form more or less uniform spherical particles that can be fluidized in the regenerator. 32

Fig. 8 provides a schematic overview of the reactions occurring in the conversion of FCC feedstocks to gasoline-range or gas products. It is clear that the conversion occurs in stages, and gasoline is not the primary reaction product, which should be obvious, since the large molecules in the feedstock cannot enter the (~7.3 Å) pores of zeolite Y. Rather, the large molecules are pre-cracked in the matrix on their way to the zeolites.
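The staged conversion summarized above can be caricatured in a few lines of code. The sketch below is emphatically not a kinetic model: the carbon-number cutoffs are hypothetical, and it serves only to show how repeated cracking of a matrix pre-cracked fragment yields several light olefins plus a gasoline-range product (the β-scission and hydride-transfer steps behind this picture are detailed in the next section).

```python
def crack(n_carbons, smallest_crackable=10, olefin_size=4):
    """Toy cascade of staged cracking: a large carbenium ion repeatedly
    sheds a small olefin by beta-scission until it is too small to
    crack, then desorbs (via hydride transfer) as a paraffin.
    Cutoffs are hypothetical, not derived from kinetics."""
    olefins = []
    while n_carbons >= smallest_crackable:
        olefins.append(olefin_size)   # light olefin released
        n_carbons -= olefin_size      # remaining, smaller carbenium ion
    return olefins, n_carbons         # paraffin that finally desorbs

olefins, paraffin = crack(20)         # a C20 fragment pre-cracked in the matrix
print(f"C20 -> olefins {olefins} + gasoline-range paraffin C{paraffin}")
# prints: C20 -> olefins [4, 4, 4] + gasoline-range paraffin C8
```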
The cracking reactions are likely a combination of thermal and catalytic reactions, in which the catalytic reaction becomes more important as the molecules get smaller. The catalytic cracking reaction is acid-catalyzed.

Reactions

Acidity can be found both at the surface of matrix particles (for instance, Brønsted acidity at silica-alumina interfaces, or Lewis acidity at Al2O3 surfaces) and in the zeolite. The basic structure of zeolites is a tetrahedrally linked silicate. In some lattice positions, the silicon is replaced by aluminum. Since aluminum is present as a trivalent cation, this induces a local negative charge in the lattice, which can be compensated with a proton to form a Brønsted acid site. Lewis acid sites can be formed when the aluminum sites are coordinatively unsaturated, when the framework is damaged (e.g. by steaming). The subject of the cracking mechanism was discussed from the early days of catalytic cracking. 33 It is now generally accepted that catalytic cracking involves the formation of carbenium ions. 34 As depicted in Fig. 9, there is a variety of ways these can be created: 35,36

(1) Brønsted acid sites can donate a proton to an alkene. This alkene must then have been formed by thermal cracking beforehand. Dupain et al. describe that the initial stages of the FCC process involve mostly thermal (radical) cracking on the outer surface. 32

(2) Lewis acid sites can abstract a hydride from an alkane, and the same can occur on strong Brønsted sites (forming dihydrogen).

(3) Alternatively, a Brønsted acid site can donate a proton to an alkane, forming a penta-coordinated carbonium ion. When the carbonium ion cracks protolytically (monomolecular, Haag-Dessau), an alkane and a carbenium ion remain. 37

Fig. 6 The effect of FCC conversion on total refinery product. Left: Atmospheric distillation frees up about 50% of the feedstock (middle distillates, gasoline and light gases). Heavy gas oil (HGO) and vacuum gas oil (VGO) are converted in the FCC unit. The products from FCC are combined with the initial products from crude distillation in the column on the right. More recent FCC processes will also convert part of the residue. Data from ref. 31.

Isomerization reactions can yield branched molecules, in which the tertiary carbenium ions are more stable. The carbenium ions formed in steps 1-3 crack through β-scission, forming a smaller alkene and a smaller carbenium ion. Hydride abstraction from a larger alkane molecule allows the smaller carbenium ion to desorb from the acid site as an alkane, leaving a new, larger carbenium ion on the zeolite acid site to propagate the reaction. Alternatively, the carbenium ion can donate the proton back to the acid site and desorb as an alkene. Corma et al. 34 conclude that both pathways, involving initial carbenium ion formation on Lewis sites and initial carbonium ion formation on Brønsted sites, occur in parallel.

FCC catalyst testing

One of the major problems in designing improved FCC catalysts is that it is very difficult to scale down the commercial FCC process, with its short residence time and rapid deactivation processes. The feedstocks are complex and contain various impurities that can have a major effect on performance, such as Conradson carbon, metals like Ni and V, oxygenates, and nitrogen- and sulfur-containing molecules. Resid feedstocks require a different operation than VGO, and diesel- or propylene-selective applications again are completely different. 38
Over the years, various more or less standard methods have been developed for testing FCC catalysts. The first was the 'MAT' test, or Micro Activity Test, according to ASTM D-3907. In this test, a small sample of catalyst is tested in a fixed bed. Conversion can be influenced by changing the catalyst-to-oil (CTO) ratio (a minimal conversion-bookkeeping sketch follows below). The test has various drawbacks, 38 but has nevertheless been very popular over the years. The test contacts the catalyst and feed for prolonged periods, during which deactivation of the FCC catalyst proceeds, and coke and temperature profiles may develop over the catalyst bed. As a result of the prolonged exposure to feedstock, the amount of coke deposited on the catalyst material may also be unrealistic. The same holds for the observed gas selectivities. The major drawbacks, concerning contact time and feed vaporization, were addressed in various protocols. 39,40 Kayser 41 developed the so-called ACE (Advanced Cracking Evaluation) units, a catalytic fixed fluid bed system, in which a small catalyst sample (typically about 1 g) is fluidized in a gas stream, and a brief pulse of atomized VGO is passed through the fluidized bed at 538 °C (1000 °F). Another solution capable of handling the heavier feedstocks is the Short Contact Time Resid Test, described by Imhof et al. 42 MAT and its refinements (e.g. SCT-MAT and AUTOMAT 43) and ACE protocols can show ranking differences among each other, but also with pilot plant results. To overcome this, more realistic simulations or even downscaled versions of the riser reactor, like Pilot Riser Units (PRU), have to be applied. The closest approximation on lab scale may be the Micro-riser simulation based on a coiled reactor developed by Dupain et al., 32 a moving bed system with short contact time, which also allows testing with heavier feedstocks.

[Fig. 8: Reaction network in zeolite-assisted cracking of hydrocarbon molecules. Reaction 1: proton transfer from a zeolite Brønsted site to an alkane to form a carbonium ion. Reaction 2: proton transfer from the zeolite to an alkene to form a carbenium ion. Reaction 3: hydride transfer from an alkane to the zeolite to form a carbenium ion. Reaction 4: β-scission of a carbenium ion to form a new carbenium ion and an alkene.]

While FCC catalyst testing is already complicated, the protocol will also have to take into account the deactivation of the catalyst during its lifetime of cracking and regeneration cycles. The deactivation of the catalyst is caused by steaming during the regeneration, and is assisted by the presence of metals like Ni and V (but also Fe, Na and Ca). Deactivated commercial catalysts may contain thousands of ppm of Ni and V, depending on the operation. Mitchell Impregnation (MI) 45 is used to deposit Ni and V on the catalyst particle, usually prior to steaming. The metals are impregnated throughout the catalyst particle, which may be (in part) correct for V, but certainly not for Ni. Simple steaming of the catalyst (with or without metals) at increased temperatures mimics the effect of the regenerator in a very crude way. More realistic procedures mimic the cracking-regeneration cycles, e.g. cyclic propylene steaming (CPS), 48 in which the catalysts are exposed to multiple cycles of (propylene) cracking, stripping and steaming prior to the actual activity tests.
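The conversion bookkeeping promised above: the sketch below reduces hypothetical MAT/ACE-style yields to conversion and to the second-order 'kinetic conversion' X/(100−X) that is often used to compare activity at different CTO ratios. The yield numbers are invented for illustration, and the conversion definition (100% minus LCO and bottoms) is one common convention among several.

```python
# Reduce hypothetical MAT/ACE-style yields to conversion and 'kinetic conversion'.
# Yields are wt% of feed; the numbers below are invented for illustration only.

yields_wtpct = {
    "dry_gas": 2.5, "LPG": 14.0, "gasoline": 48.0,
    "LCO": 17.0, "bottoms": 14.5, "coke": 4.0,
}

# Conversion is commonly defined as 100% minus the unconverted fractions
# (LCO and bottoms); exact definitions vary between labs.
conversion = 100.0 - yields_wtpct["LCO"] - yields_wtpct["bottoms"]

# Second-order ('kinetic') conversion, often used to compare activity
# across catalyst-to-oil (CTO) ratios, assuming roughly second-order behavior.
kinetic_conversion = conversion / (100.0 - conversion)

print(f"conversion = {conversion:.1f} wt%")              # 68.5 wt%
print(f"kinetic conversion = {kinetic_conversion:.2f}")  # ~2.17
```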
A more elaborate deactivation procedure is the cyclic deactivation (CD) procedure, 49 in which actual feedstock cracking, depositing metals every cycle, is combined with regeneration for many (up to over 50) cycles to create a more realistic metals profile. Improvements are the two-step CD (2s-CD) and advanced CPS protocols, as described by Psarras et al. 50

Zeolite framework stabilization

As mentioned above, the main cracking component in FCC catalysts responsible for the production of gasoline-range molecules is zeolite Y. 19 The structure of zeolite Y, shown in Fig. 10, has a 3-D pore system, in which pores of ≈7.3 Å connect larger (13 Å in diameter) cages, which are known as the supercages of this zeolite. The addition of solid acids to the catalyst improves both the conversion and the product selectivity towards gasoline. The original FCC catalyst contained clay, and later amorphous silica-alumina and silica-magnesia. The advent of zeolite-based catalytic cracking came shortly after the discovery of these zeolites at Union Carbide, 20,21 in the early 1960s. Zeolite Y combines high surface area/pore volume and solid acidity (both Brønsted and Lewis) with sufficient room to allow bimolecular (carbenium ion) cracking. The preparation of the zeolite is relatively simple: no organic Structure Directing Agents (SDAs) or even autoclaves are required. However, the as-prepared zeolite is not very stable towards hydrothermal conditions. The stability can be improved by controlled steaming and washing/leaching cycles (to make the so-called ultra-stable Y, or US-Y). A well-known way to improve the effectiveness of the zeolite (i.e. to retain activity longer) is to exchange part of the counterions with rare earth (RE) ions. There is a lot of literature on the effect of RE ions on zeolite stability and reaction characteristics; a large body of work in this area was already performed in the 1970s and 1980s. For example, Rees et al. 51 show that the exothermic peak in differential thermal analysis, which is interpreted as a collapse of the framework, shifts towards higher temperature for RE-exchanged faujasite versus Na-exchanged faujasite. This framework collapse occurs in the range of 800-1000 °C, so outside of the temperature range relevant for FCC. Nevertheless, the effect is an indication of increased lattice thermal stability. Flanigen et al. 52 provide an assignment of the IR vibrations observed for zeolite Y. Roelofsen et al. 46 explain that the symmetric stretch vibration at around 790 cm⁻¹ is the most suited to derive the framework silicon-to-aluminum ratio, shortened as SAR, because other IR peaks are more sensitive to the type and amount of cations in the framework, crystallinity, as well as water content. The peak frequency of the IR band at around 790 cm⁻¹ has been found to vary linearly with the Al/(Al + Si) ratio. Rabo et al. 53 describe two IR peaks related to hydroxyl groups in RE-Y. The first peak, at 3640 cm⁻¹, shows strong hydrogen bonding with water, benzene and ammonia, and can thus be interpreted as a Brønsted acid site exposed in the supercage. The other OH-vibration, centered at 3524 cm⁻¹, does not bind with ammonia or benzene, and is thus hidden inside the sodalite cage. Rabo et al. assume these hydroxyls are associated with OH-groups retained between two RE-cations as an electrostatic shield. Roelofsen et al. 46 investigated the dealumination of zeolite Y with varying loadings of RE2O3 (mixed rare earths) with IR, XRD, and ²⁹Si MAS NMR.
They find a good correlation between the framework SAR derived from IR and that derived from ²⁹Si MAS NMR. However, the correlation with the SAR derived from the unit cell size using the Breck-Flanigen relation 54 does not hold in this case (a short numerical sketch of this relation is given below). The unit cell size is significantly larger than would be expected from the Breck-Flanigen relation. This indicates that the unit cell size is not a good indicator for lattice stabilization. A variety of authors studied the stability of RE-exchanged zeolite Y in the 1960s and 1970s, mostly based on IR analyses. Scherzer et al. 47 conclude that framework vibrations shift to higher frequencies, and the XRD unit cell size decreases, upon increased severity of the thermal treatment. In both cases there is a more or less linear dependence of the effect on the RE-loading.

[Fig. 10: The structure of zeolite Y (faujasite), with the most relevant ion-exchange sites highlighted. The effect of RE-introduction: XRD: comparing RE-stabilized (blue) with non-stabilized Y-zeolite (red), we observe a shift to lower angles (i.e. larger unit cell size, lower SAR, less dealumination), as well as higher crystallinity in the RE-stabilized material; IR: we observe a shift to lower frequency (lower SAR, less dealumination) for the RE-stabilized form; NMR: we observe larger contributions from Si-species with multiple Al-neighbors (i.e., a lower SAR, less dealumination). All spectra are simulated based on literature data from Roelofsen 46 and Scherzer et al. 47]

In a subsequent paper, Scherzer and Bass 55 look at the OH-stretching region of the same samples. They conclude that bands at 3600 and 3700 cm⁻¹ indicate that the framework is dealuminated. Bands at 3650 and 3600 cm⁻¹ are shown to be acidic (from interaction with ammonia, pyridine, and sodium hydroxide). They also observed a band at 3540 cm⁻¹, which they ascribe to OH groups attached to lanthanum ions, although there also appears to be a framework band in the same IR region. Fallabella et al. 56 study the effects of using different RE-ions in the ion exchange process of zeolite Y. The introduction of RE cations brought about no significant changes in the structural region of the zeolites. However, in the hydroxyl region, a band ranging from 3530 to 3498 cm⁻¹ was observed. This band, attributed to OH groups interacting with RE-cations (see also Scherzer and Bass 55), shifts to higher wavenumbers as the ionic radius of the cations increases. This hydroxyl is not acidic (or at least not active in catalysis), as it resides in the sodalite cages. The authors do note a clear effect of the radius of the RE-ion on the acidity as probed with pyridine and lutidine. Pyridine is capable of detecting both Brønsted and Lewis acid sites, whereas lutidine can only interact with Brønsted acid sites due to the steric hindrance generated by its methyl groups. In their study, dysprosium falls outside the plotted correlation, possibly because remaining chloride ions create extra activity. Van Bokhoven et al. 57 report that high-charge octahedral extra-framework Al in US-Y, as well as La³⁺ ions in the ion exchange positions in La(x)NaY, induce local polarization of the Al-atoms in the lattice. In addition, a long-range effect is observed which causes the T-O-T angles to increase (and thus the unit cell size to increase). The authors thus assume that although the type of ion is different, the origin of the enhanced activity in US-Y and RE-Y is identical.
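The numerical sketch promised above: it implements the commonly quoted form of the Breck-Flanigen relation (a0 = 24.191 + 0.00868·N_Al, with 192 T-atoms per faujasite unit cell) together with a linear IR calibration of the kind just described. The IR slope and intercept used here are hypothetical placeholders; the actual fit parameters are in the cited papers.

```python
# Estimate the framework Si/Al ratio of zeolite Y from the XRD unit cell size
# via the commonly quoted Breck-Flanigen relation, and from a linear IR
# calibration. The faujasite unit cell contains 192 tetrahedral (T) atoms.

def si_al_from_unit_cell(a0_angstrom: float) -> float:
    """Breck-Flanigen: a0 = 24.191 + 0.00868 * N_Al (a0 in Angstrom)."""
    n_al = (a0_angstrom - 24.191) / 0.00868
    return (192.0 - n_al) / n_al

def si_al_from_ir(peak_cm1: float, slope: float, intercept: float) -> float:
    """Linear IR calibration: peak position ~ slope * Al/(Al+Si) + intercept.
    The peak shifts to lower frequency with higher Al content, so slope < 0.
    slope/intercept here are hypothetical placeholders for the literature fit."""
    al_fraction = (peak_cm1 - intercept) / slope
    return (1.0 - al_fraction) / al_fraction

# Example: a0 = 24.56 Angstrom -> N_Al ~ 42.5 per unit cell -> Si/Al ~ 3.5
print(f"Si/Al from XRD: {si_al_from_unit_cell(24.56):.2f}")
# Placeholder IR calibration: 830 cm-1 at zero Al, -200 cm-1 per unit Al fraction
print(f"Si/Al from IR:  {si_al_from_ir(790.0, slope=-200.0, intercept=830.0):.2f}")  # 4.00
```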
Most authors claim that rare earth elements stabilize the zeolites by moving into the hexagonal prisms (site S-I), and retaining the framework Al by some form of electrostatic interaction. Excess rare earth migrates from the hexagonal prism into the supercage (site S-II), and forms strong Brønsted acid sites in connection with framework Al. Du et al. 58 claim that the ionic radius of the different RE elements has an effect on the stability of the RE-Y zeolite, and that the framework stability increases with decreasing ionic radius for the set Ho³⁺, Dy³⁺, Nd³⁺, La³⁺. Ce³⁺ does not seem to move into the S-I positions, because under the conditions applied by Du et al. the cerium gets oxidized to Ce⁴⁺, and forms a larger complex that cannot migrate into the sodalite cages. Schüßler et al. 59 investigated the nature and location of La-species in faujasite with a combination of techniques, including DFT calculations. In order to make full periodic calculations possible, they selected the rhombohedral primitive cell of faujasite. This reduces the number of framework atoms by a factor of 4, from 576 to 144. The authors find small amounts of [La(OH)]²⁺ and [La(OH)2]⁺ species in the S-II sites, but claim the majority of the La³⁺ is present in the sodalite cages in multinuclear OH-bridged aggregates. The formation of the hydroxylated clusters leads to the formation of Si-OH-Al groups at a distance from the La-clusters. However, the authors claim that isolated La³⁺ species in the S-II site are also able to polarize secondary and tertiary C-H bonds and thus activate alkanes, and point to these species as responsible for the enhanced activity and hydrogen transfer of RE-exchanged zeolites. Noda et al. 60 performed a combination of temperature-programmed desorption (TPD) of NH3 with DFT cluster calculations. They examined Ba-, Ca-, and La-exchanged zeolite Y and observe an increase in catalytic activity for all ion-exchanged zeolites, with the Ba ones producing the lowest activity. They ascribe the formation of stronger acid sites to a removal of OH-sites in the sodalite cages and hexagonal prisms, and a strengthening of the supercage-OH sites by a polarization effect induced by the cations. From the above, it is clear that the presence of RE-cations in the structure provides some form of stabilization, to the extent that more aluminum is retained in the lattice, as observed with IR and NMR. XRD unit cell size analysis does not correlate with IR and NMR measurements in the normal way for RE-containing zeolite Y. The effect of the presence of RE in the lattice on performance is dramatic. Plank et al. 61 already noted in the early 1960s an appreciable increase in activity (more than 100 times as active as amorphous silica-aluminas) when using RE-stabilized Y zeolites, although they compared their materials to amorphous SiO2-Al2O3 and Na-Y. Although the activity increase is desirable, the incorporation of RE also increases the rate of hydrogen transfer, which leads to a less desirable drop in research octane number and olefinicity in the LPG range. Fallabella et al. 62 define a hydrogen transfer (HT) index, derived from the ratio of different reaction rate constants in the cracking of cyclohexene, which correlates with the atomic ratio of the RE-ion and the acidity. Lemos et al. 63 studied heptane cracking on RE-exchanged Y-zeolites, and observed mainly paraffinic cracking products. The cracking activity seems to correlate with strong protonic acidity, as derived by reactivity comparison.
General trends

Even though the FCC process has been with us for over 70 years now, the process is still being developed further. Changes in the demand for products, and changes in the feedstock, drive constant development. Fletcher 64 lists the following challenges for FCC catalysis:

- LCO maximization (i.e. diesel flexibility);
- Petrochemical feedstock maximization (i.e. propylene);
- Flue gas emissions control; and
- Enhanced metals tolerance.

On the one hand, conventional feedstocks are becoming heavier. Resid cracking in FCC gained popularity in the early 1990s, and has grown in importance since. Heavier feedstocks imply that larger, more aromatic molecules need to be cracked, which calls for improved accessibility and improved metals tolerance. There is also a drive to increase activity while limiting the amount of coke produced to the absolute minimum required for the heat balance of the unit. This has been a continuous challenge in FCC since the early days, and various improvements have been made over the decades, as illustrated in Fig. 11. Apart from the conversion of heavier feedstocks, we have recently also seen an increased application of relatively light, paraffinic shale oil as the feedstock to the cracker. So the traditional feedstock of FCC, namely VGO, is replaced more and more by both heavier and lighter feedstocks. At the same time, a similar effect can be observed on the product side. Where (aviation) gasoline was the desired product for the initial FCC units, we have seen an increased demand for propylene over the last two decades. Propylene is the raw material for polypropylene, and the FCC unit can be one of the main sources of propylene (the other would be steam cracking of naphtha). Propylene can be produced in the FCC unit, mostly as a product of secondary cracking of gasoline-range molecules, usually by specific additives containing ZSM-5 zeolite. Fig. 12 shows the market size for FCC catalysts specifically targeting propylene production, which has risen from about 10 000 metric tons per year in 2005 to almost 90 000 metric tons of catalyst in 2014 (a quick growth-rate estimate is sketched below). The development of specific FCC-propylene capacity follows the demand for olefins. 66 It illustrates the clearly expected increase in propylene demand, which cannot be met by steam cracking alone, and has to come from the FCC unit. On the other hand, as shown in Fig. 13, the world market for gasoline seems to flatten out, and developing countries and even the USA show an increasing demand for diesel as a transportation fuel. The compiled information, based on data from the OPEC World Oil Outlook 2013, 67 shows that the ratio between gasoline and diesel demand over the next decades is projected to change in favor of diesel. Historically, the USA had a surplus in diesel, and the EU had a surplus in gasoline, which could be traded. 68 With the new gasoline/diesel demand ratios predicted for the next decades, this is no longer possible, and this will no doubt have an impact on the desired products from the FCC unit as the main conversion process. The two developments combined require a shift from gasoline as the main product to both higher- and lower-boiling products, which is not possible at the same time.

Increasing propylene selectivity

Propylene is a minor product (<5% product yield) in normal FCC operation, but the selectivity towards propylene can be enhanced by selectively cracking gasoline-range molecules.
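The growth-rate estimate promised above: before turning to how propylene selectivity is achieved in practice, a two-line sketch turns the quoted 2005 and 2014 market sizes into an implied compound annual growth rate. The endpoint figures are from the text; the constant-growth assumption is ours.

```python
# Compound annual growth rate of the propylene-targeted FCC catalyst market,
# using the 2005 and 2014 endpoints quoted in the text and assuming
# constant year-on-year growth (an illustrative simplification).

tons_2005, tons_2014 = 10_000, 90_000
years = 2014 - 2005
cagr = (tons_2014 / tons_2005) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")  # about 27.7% per year
```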
Although increasing the riser temperature, increasing the catalyst-to-oil ratio, and increasing the residence time will all increase the propylene yield, 69 these options are limited. Corma et al. 70 show for cracking over Y-zeolites that although the propylene yield increases with conversion, the propane yield increases faster, so the alkene/alkane ratio decreases at higher conversion. So rather than olefins, coke, dry gas and paraffinic LPG will be produced preferentially through so-called over-cracking. This is because the wide pore system of FAU allows for bimolecular cracking and hydrogen transfer reactions. In order to selectively produce lower olefins, refiners apply additives containing zeolite ZSM-5. 66,71,72 These additives, complete FCC catalysts in themselves, usually contain ZSM-5 as the only active zeolite, in loadings of 25-50% (a small blending calculation is sketched below). Combination of Y-zeolites and ZSM-5 in one catalyst is also possible, but removes (some) flexibility for the refiner. It is also possible to base the entire conversion on ZSM-5-based catalysts in dedicated processes, such as DCC, 73 which operates at higher temperature than the conventional FCC process, and converts heavy feedstocks such as VGO, vacuum resid, or VGO mixed with DeAsphalted Oil, into light olefins or iso-olefins. In this review paper, we will exclusively focus on ZSM-5-containing additives. Argauer and Landolt first reported ZSM-5, the structure of which is shown in Fig. 14, as a synthetic molecular sieve in 1972. 22 Although Kokotailo et al. solved the structure of ZSM-5 already in 1978, 74 recent work has shed new light on this material. Even though zeolite ZSM-5 was first described as a synthetic material, a natural mineral form (named mutinaite) also exists; it was discovered in Antarctica adjacent to deposits of natural zeolite Beta. 75 Zeolite ZSM-5 can be prepared both in the presence and absence of organic SDAs. The typical SDA is the tetrapropylammonium cation (TPA), which can be located in the pores of the synthetic material. 76 Materials with a silica-to-alumina ratio (i.e., molar SiO2/Al2O3 ratio) up to about 25 can be synthesized without an SDA; for higher silica-to-alumina ratios an SDA is typically required. The essentially all-silica form, known as silicalite, has a slightly different structure than the low-SAR material: it has a monoclinic unit cell, whereas the low-SAR material crystallizes in an orthorhombic cell. The framework topology is exactly the same for both phases. The structure of ZSM-5 consists of a 3D pore system circumscribed by 10 T-atoms. The pores are slightly elliptical and have diameters of 5.1-5.6 Å. The structure has a straight 10-MR pore along the [010] direction, and a zig-zag 10-MR pore along the [100] direction. The pores intersect, and molecules (of the correct dimensions) can reach any point in the pore system from any other point. ZSM-5 normally crystallizes in lozenge- or coffin-shaped crystals that are frequently twinned. The limited room in the pore system of zeolite ZSM-5 compared to the supercages in zeolite Y implies that it is much more difficult to accommodate the larger bimolecular transition states. As a result, the secondary cracking of gasoline-range molecules in ZSM-5 will produce more olefins. This is illustrated in Fig. 15. Just like the primary cracking zeolite, zeolite ZSM-5 is unstable towards the harsh environments of the FCC process.
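The blending calculation promised above, as a brief aside: a hedged sketch of how much ZSM-5 crystal ends up in the circulating inventory for a given additive addition rate. The 25-50% zeolite loading range is from the text; the 5 wt% additive addition rate is a hypothetical example value.

```python
# Estimate the ZSM-5 crystal content of the circulating catalyst inventory
# when a ZSM-5 additive is blended in. The additive addition rate below is a
# hypothetical example; the 25-50% zeolite loading range is from the text.

def zsm5_in_inventory(additive_wtpct: float, zeolite_loading_wtpct: float) -> float:
    """wt% ZSM-5 crystal in the total inventory."""
    return additive_wtpct * zeolite_loading_wtpct / 100.0

for loading in (25.0, 50.0):
    print(f"5 wt% additive at {loading:.0f}% ZSM-5 -> "
          f"{zsm5_in_inventory(5.0, loading):.2f} wt% ZSM-5 in inventory")
# 5 wt% additive at 25% ZSM-5 -> 1.25 wt%; at 50% -> 2.50 wt%
```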
Dealumination by repeated contact with steam in the regenerator dislodges the aluminum from its framework position, thus removing the active acid sites, and in the process destroying the zeolite lattice. Although a partial destruction of the zeolite lattice may improve the diffusion characteristics of the zeolite by creating access to the interior through mesopores, it also creates larger pores, and hence the opportunity for bimolecular cracking. To increase the stability of zeolite ZSM-5, a treatment with phosphorus is often applied. The trick has been used in ZSM-5 for various applications apart from FCC, such as methanol-to-olefins (MTO) conversion, alkylation, and ethanol dehydration. 77 A very recent review on phosphorus promotion of zeolites covers the synthesis, characterization and catalysis aspects of phosphated ZSM-5. 78 A variety of phosphorus sources has been used over the years to achieve the desired stabilization. For example, Xue et al. 77 mention organo-phosphorus compounds, such as trimethyl phosphite ((CH3O)3P) and others, and inorganic compounds, such as phosphoric acid (H3PO4) and ammonium phosphates ((NH4)3PO4, (NH4)2HPO4, (NH4)H2PO4) and others. Given the scale of the operation and ease of handling (e.g. of by-products), especially the inorganic compounds are of relevance to FCC catalyst manufacturing. The overall interaction between the phosphorus species and the zeolite lattice seems to be relatively independent of the phosphate source, although the overall effect of the treatment on activity strongly depends on parameters like the Al/P ratio, Si/Al ratio, zeolite crystal size, and activation conditions. 77,79 When phosphate species are introduced in a way that allows them to enter the pores and react with the bridging hydroxyls of the Si-OH-Al active sites in the zeolites, an adduct forms in which the phosphate ions force the aluminum into an octahedral coordination. This process, which is reversible under treatment with hot water, eliminates the bridging hydroxyls and thus the Brønsted acidity of the zeolite. 78 However, when elution with hot water can be avoided during heat treatment, the octahedral Al-POx species is more stable, and the lattice integrity is maintained to a larger extent than for untreated zeolites. Excess phosphorus used during the treatment will deposit on the external surface of the zeolite ZSM-5 crystals as a polyphosphate. If any aluminum is dislodged during the thermal treatment, it will very likely react with the available phosphate, and form an amorphous aluminophosphate. It should be noted that the ZSM-5-containing additive will generally also contain an alumina binder, which will react with excess phosphate to form an aluminophosphate species that may be beneficial for binding the system. The effect of the treatment with phosphate on macroscopically observable parameters is: (1) enhanced stability of the zeolite lattice; (2) a decrease in the formation of bulky isomers; (3) formation of increased amounts of light olefins in FCC, but also in MTO and ethanol dehydration; and (4) a decrease in coke formation. In view of the mechanistic relations described above, the latter two seem to hint at decreased hydrogen transfer, and the second seems to indicate decreased room in the lattice for the formation of bulky intermediates.
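As an illustration of the dosage arithmetic behind such treatments, the sketch below estimates how much H3PO4 is needed per gram of zeolite for a target Al/P ratio around unity (the optimum suggested further below). The SAR value used is a hypothetical example, and the calculation ignores phosphate lost to the binder or the external surface.

```python
# Estimate the H3PO4 dose needed to phosphate a ZSM-5 sample at a target
# Al/P ratio of ~1. SAR here is the molar SiO2/Al2O3 ratio; the example SAR
# is hypothetical, and losses to binder/external surface are ignored.

M_SIO2, M_ALO15, M_H3PO4 = 60.08, 50.98, 98.0  # g/mol (AlO1.5 = half Al2O3)

def h3po4_per_gram_zeolite(sar: float, al_to_p: float = 1.0) -> float:
    """Grams of H3PO4 per gram of (dry, H-form approximated) zeolite."""
    # moles of Al per gram of zeolite, from the framework composition:
    # each mole of Al brings 0.5 * SAR moles of SiO2
    mol_al = 1.0 / (M_ALO15 + 0.5 * sar * M_SIO2)
    return (mol_al / al_to_p) * M_H3PO4

print(f"{h3po4_per_gram_zeolite(sar=40):.3f} g H3PO4 per g zeolite")  # ~0.078 g
```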
The phosphate treatment usually involves impregnation with solutions of phosphate sources, typically H3PO4 or the less acidic ammonium phosphates, followed by drying (70-120 °C) and calcination (450-650 °C) for 1 to 6 h. Although generally the catalysts will be exposed to steam after they have been stabilized, some authors describe phosphate treatment after initial steaming. This may lead to the formation of extra-framework aluminum (EFAL) and hydroxyl nests, and dislodged aluminum still partially connected to the lattice. Various characterization techniques have been used to study phosphated zeolites. 78 XPS shows an enhanced P-concentration at the surface of larger zeolite crystals, possibly because the initial-stage P-species react with surface Si-OH groups before they can enter the pores. The optimal loading for the phosphate treatment seems to be an Al/P ratio of about unity, although care must be taken to avoid diffusion problems during the impregnation stage. Excess phosphate will remain on the external surface of the zeolite ZSM-5 crystals. Upon P-treatment, a decrease of porosity/surface area is observed, which correlates with the P-content. A decrease in surface area/porosity can be attributed to pore blockage by P-species, aggregation of zeolite crystals by the action of external polyphosphate, or dealumination. Although porosity and accessibility are initially decreased, the bridging hydroxyl groups (and thus the acidity) appear to remain available at this stage. The zeolite crystals appear to lose some crystallinity after the calcination treatment following the phosphorus impregnation, but this could be due to scattering of the X-rays by P-species in the pores. Although the Si/Al ratio as observed with ²⁹Si MAS NMR seems to increase, this may just be caused by changes in the coordination spheres of the Si- or Al-species in the lattice, and not necessarily by removal of the framework Al. Depending on the conditions of the calcination, the phosphate species may coordinate to the aluminum, and thus break the Si-OH-Al bridges. Although this would lower the number of strong acid sites, the Al-O-P(OH)3 and Si-OH species formed when this happens may lead to new acid sites, and partially connected Al may form additional Lewis acid sites. Upon phosphate treatment, the typical resonances for tetrahedral framework Al seem to decrease in ²⁷Al MAS NMR. This does not necessarily mean that the Al is dislodged from its framework position. By using combined spectroscopy and scanning transmission X-ray microscopy (STXM), van der Bij et al. observed that there are two different interactions between the phosphate and the aluminum (Fig. 16). Extra-framework aluminum reacts with the P-sources to form an extra-framework crystalline AlPO phase. When there is no EFAL to react with, the P reacts with framework Al, seriously distorting its coordination, but without forming EFAL species. These distorted sites were more or less immune to hydrothermal treatment. Excess phosphate was found on the external surface of the zeolite crystals. Upon heat treatment, van der Bij et al. observed the formation of stable -(SiO)3−x-Al-(PO)x-type species (see Fig. 17); in other words, SAPO species are connected to the framework, but with Al no longer in its original framework position.
The exact structure and position of these clusters, as well as the mechanisms that form acid sites around them, remain as yet unresolved, although it is suggested that the bulky SAPO-species impede the formation of carbenium ions, and thus successfully suppress the bimolecular mechanism, resulting in an improved propylene selectivity for the treated samples.

Zeolites with hierarchical pore systems

Until recently, the job of operators working with an FCC unit was to make gasoline. Improved vehicle efficiency has led to a drop in the demand for gasoline in the USA, a trend that is more than likely to continue in view of the expected further efficiency increases demanded by greenhouse gas emission reduction limits.

[Fig. 15: A comparison of the total products from the same runs as depicted in Fig. 4. The graphs are combinations of the GC × GC plot for total liquid product, PIANO analysis of the naphtha fraction, and GC analysis of the gases. Top: products from a normal cracking run. Bottom: products with ZSM-5-containing additive added to the catalyst. Color coding: n-paraffins: dark blue; iso-paraffins: red; naphthenes and olefins: green; naphthenes: purple; olefins: blue; aromatics: orange.]

This implies that the gasoline-to-diesel ratio in the refined product changes in favor of diesel, and the FCC unit, the main conversion unit in a large number of refineries, will have to respond. Hansen et al. 82 describe that this can be tackled by a number of operational changes, such as minimizing the diesel fraction in the FCC feedstock, changing cutpoints and reducing the cracking severity. It is also possible to change the FCC catalyst to a more diesel-selective catalyst. Hansen et al. 82 describe that one option is to lower the zeolite content and increase the matrix activity. However, this leads to increased coke formation. They describe a series pathway as one of the cracking pathways: as conversion increases, first LCO, then gasoline, and finally LPG reach a maximum yield, and they propose that mass transfer limitations determine the outcome of this complex inter-conversion network to a great extent. There are a number of ways to introduce a hierarchical pore structure, in which mesopores and micropores are connected, in zeolites. A review on hierarchical zeolites is presented by Li et al., 83 while other recent reviews on hierarchical zeolites are those by Na et al., Moliner, and Serrano et al. 84-86 Li et al. 83 describe, as summarized in Fig. 18, two approaches: bottom-up, in which the hierarchical zeolite is synthesized directly from a silica-alumina gel, and top-down, in which existing zeolites are post-treated. In the bottom-up approach, extra-crystalline hard templates, such as carbon black, 3-D ordered mesoporous carbon, or carbon aerogel, can be used (e.g., ref. 87 and 88). The zeolites form within the structure of the hard template, which is then burnt off to create mesoporosity. Adaptations of the more standard templates, which introduce mesopore-structure direction in the same molecule, are called soft templating. Here, different functionalities that direct for micropores and for mesopores are combined in one template molecule. For instance, Ryoo et al. 89 describe hierarchical zeolites from randomly stacked MFI nanolayers, which are created by using special bifunctional organic structure directing agents. Rimer et al. 90 influence the crystallization kinetics by applying zeolite growth modifiers (ZGM), organic molecules that impede the growth of specific zeolite crystal planes.
The conversion of the amorphous cell walls of MCM-41 or SBA-15 type mesoporous materials towards crystalline zeolite structures (such as TUD-C, 91 or zeolite Y encapsulated in TUD-1 92) is also considered bottom-up. In the top-down approach to hierarchical zeolites, the zeolites are post-treated after synthesis. The easiest way to introduce mesoporosity is by dealumination, which can be achieved by steaming and by chemical treatments, such as acid leaching or reaction with EDTA or other chelating agents that remove the resulting extra-framework alumina. This approach was used in the development of Dow's 3DDM mesoporous mordenite catalyst for the production of cumene, 93 and is also the basis of the US-Y zeolites that are used in many applications nowadays. Clearly, dealumination leads to a lower number of acid sites and at least an initial loss of framework integrity.

[Fig. 16: Chemical maps of phosphate-activated zeolite clusters, constructed from Al and P K-edge spectra stacks for two different samples. Blue denotes Al, red denotes P; the resolution is 60 × 60 nm.]

However, these disadvantages are more than offset by the creation of new types of acidic sites and enhanced diffusion properties. 94 The increased mesoporosity may give rise to increasing rates in bimolecular and oligomeric reaction pathways that require large transition states. 95 Separating this effect from the modified acidity per site in explaining activity and selectivity differences can be a challenge. Janssen et al. provide good insight into the formation of the mesopores in zeolite Y by applying 3D transmission electron microscopy (TEM) in combination with nitrogen physisorption and mercury porosimetry measurements. 96 They find a large part of the mesopores in cavities within the crystal, and the creation of an interconnecting system of cylindrical mesopores requires special treatments. Another way of producing mesopores is by desilication. Initial work in this field was published by Groen et al. 97,98 and expanded upon by Pérez-Ramírez et al. 99,100 The authors stress the need for a sustainable route, and note that most bottom-up approaches make use of exotic ingredients or larger amounts of organic templates than the original materials. Top-down approaches typically have low yields because they leach away either alumina or silica, and thus give rise to waste streams. The authors note that a typical base leaching may remove as much as 30% of the parent material. 101 They propose to use the silica-rich waste stream as a raw material in the original synthesis of the zeolite, thereby closing the material loop. 99 Li et al. 102 compare mesoporous mordenites made with different synthesis methods. They applied soft and hard templating, as well as a combination of acid leaching and base treatment. Only the combination of acid leaching and base leaching yielded a material with improved accessibility and strong acidity, leading to optimal performance in the isomerization of 2-methyl-2-pentene and the alkylation of benzene with benzyl alcohol. Park et al. 103 describe ZSM-5-based catalysts with hierarchical pore systems prepared by soft templating. When compared to normal ZSM-5 catalysts in the cracking of gas oil, they observe higher overall activity, and higher yields of lower olefins like propylene and butylene. The catalysts contain intracrystalline mesopores.
The authors assume that pre-cracking of larger molecules inside the mesopores provides the molecules that can be cracked inside the MFI micropores to give the desired products. Normal ZSM-5 would require conversion of gasoline-range molecules to form the desired olefins, whereas the mesoporous catalysts described by the authors have similar or better gasoline yields compared to normal ZSM-5. However, the catalytic performance was tested on pure zeolite samples. The addition of matrix and binder, as well as the presence of a main Y-zeolite-based FCC catalyst in the catalyst system, may cause the observed benefits to change, among other reasons because this would supply a large concentration of gasoline molecules. The conversion and selectivity to propylene observed for the hierarchical ZSM-5 samples described by the authors are not high enough to warrant use by itself (see e.g. the performance characteristics of the DCC process 104). Hansen et al. 82 describe the introduction of uniform mesopores in the size range of about 4 nm, or about 6 times larger than the micropores in the host lattice of the zeolite (see Fig. 19), by a post-synthesis chemical treatment. 83,105 We will expand a bit on this work, as it directly concerns an application in FCC. The authors observe a lower bottoms yield at constant coke, and improved middle-distillate-over-bottoms selectivity in ACE testing. A similar effect is seen for the gasoline-over-LPG selectivity, since the optimum in the series pathway network is shifted to higher molecular weight. The post-synthesis treatment in this technology appears to amount to a re-crystallization of the zeolite in alkali (pH 9-11) in the presence of cetyltrimethylammonium bromide (CTAB) at 150 °C. The starting zeolite in the original process already has a quite high silica-to-alumina ratio of about 30; lower-SAR zeolites apparently need an acid pretreatment before they are suitable for post-treatment. 106 Carbon residue from the template is removed by careful calcination at 550 °C. Following the treatment, the authors do not observe any octahedrally coordinated Al in the NMR spectrum, and terminal silanol vibrations at 3740 cm⁻¹ also disappear, both indicating a lattice without too many irregularities. The vibration of the Brønsted acid site at 3640 cm⁻¹ seems to increase compared to the parent material, as does a vibration at ≈3540 cm⁻¹, on which the authors do not comment. TPD of ammonia shows that the mesostructured material has about the same number of acid sites as normal zeolite US-Y. The zeolites were tested after being introduced in FCC matrices, and steam-deactivated. At constant conversion, lower bottoms- and coke-make, and higher gasoline and middle distillate yields are observed. García-Martínez et al. 107 describe a test with a commercial quantity of the material in a refinery. They tested the E-cat from the refinery in an FCC test unit before and at the end of the trial, and report lower coke make, higher LCO make and lower bottoms for the catalyst containing hierarchical zeolites.

New zeolites in fluid catalytic cracking

Although it is clear that improved mesoporosity in FCC catalysts improves the performance, this does not imply that ultra-large-pore zeolites are necessarily good active ingredients in FCC catalysts. 108 In the cracking of model reactants like n-hexane, for instance, MCM-41 performs much more poorly than zeolite US-Y.
In the cracking of larger molecules, as in gas oil cracking, the difference is smaller, but the low thermal stability of MCM-41 prevents its application under the severe FCC process conditions. Early work by Derouane and co-workers 109 explains this effect. The authors describe the role of the curvature of the zeolite pore surface and explain that the interaction between molecules and the zeolite surface is strongest when the radius of the molecule and the surface curvature are similar. At this exact fit, a number of phenomena are described that have a direct effect on the performance, e.g. a supermobility instead of Knudsen diffusion. The increased interaction leads to an increased concentration of reactants near the acid sites, and expresses itself macroscopically as increased apparent acid strength. This implies that the 3D structure of the zeolite and its effect on sorption equilibria can play a large role in reaction kinetics; they directly influence the observed rate of reaction, especially when the sorption energetics are magnified by the surface curvature. 110 This implies that the decreased rate of cracking of n-hexane in MCM-41 as compared to zeolite US-Y does not necessarily mean that the acid sites in MCM-41 are weaker than those in zeolite US-Y. Apart from zeolite Y and ZSM-5, other zeolites have been tested in FCC catalysis. Zeolite Beta, for instance, has been studied extensively. Although economics and thermal stability have thus far prevented the application of zeolite Beta in large-scale FCC processes, it is known 111-113 that (P-stabilized 111) zeolite Beta improves C4 yields. Bonetto et al. describe an optimal crystallite size for stability, activity and selectivity for zeolite Beta in gas oil cracking. 112 Mavrovouniotis et al. ascribe the higher olefinicity in the gases for zeolite Beta to a lower hydrogen transfer activity. 113 The issue of cost and stability returns for many of the new structures proposed for FCC applications. Quite often, complicated organic SDAs, exotic framework constituents (e.g. Ge and Ga), or fluoride-assisted syntheses are required to even synthesize (new) zeolite structures. These do not translate well to the scale of operation, catalyst consumption and severity of the FCC process. Nevertheless, we will discuss some recent developments in the paragraphs below. Fig. 20 gives an overview of some of the new zeolites tested in FCC as a function of their pore diameters; comparing pore ring size with the kinetic diameter of probe molecules is the usual way to reason about which reactants can enter these structures, as sketched below. When examining the medium-pore zeolite MCM-22, 114 Corma et al. observed little activity in the cracking of larger molecules. When using it in an additive similar to zeolite ZSM-5 additives, zeolite MCM-22 produces less gas (a lower loss in gasoline yield), but with higher olefinicity (so higher propylene and butylene selectivity than ZSM-5). ZSM-5 is more active, though. ITQ-13, 115 with a 3D 9-MR × 10-MR pore system, presents acid sites that are similar in strength to those of ZSM-5, or stronger. The specific pore structure induces an increased yield of propylene in VGO cracking. Zeolite ITQ-7 116 has a pore system similar to zeolite Beta, yet a higher gasoline yield and improved olefin selectivity are observed in FCC cracking where an ITQ-7-containing additive was used. 117 The authors conclude that the specific structure and tortuosity of the pore system favor β-scission over protolytic cracking and limit hydrogen transfer reactions.
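The size-exclusion reasoning referred to above can be written down as a simple lookup, as in the sketch below. The pore diameters for the ring sizes and the kinetic diameters of the probe molecules are rough illustrative values, not authoritative data, and real accessibility also depends on pore shape and molecular flexibility; the DIPB/TIPB behavior quoted in the next paragraph follows the same logic.

```python
# Size-exclusion reasoning for zeolite pores: can a probe molecule enter?
# All diameters below are rough illustrative values (in Angstrom), not
# authoritative data; real accessibility also depends on pore shape and
# molecular flexibility.

PORE_DIAMETER = {"8-MR": 4.0, "10-MR": 5.5, "12-MR": 7.4, "18-MR": 12.0}
KINETIC_DIAMETER = {"n-hexane": 4.3, "1,3-DIPB": 7.1, "1,3,5-TIPB": 9.5}

def can_enter(molecule: str, ring: str, slack: float = 0.3) -> bool:
    """Allow a small slack since molecules are flexible and pores breathe."""
    return KINETIC_DIAMETER[molecule] <= PORE_DIAMETER[ring] + slack

for mol in KINETIC_DIAMETER:
    access = [ring for ring in PORE_DIAMETER if can_enter(mol, ring)]
    print(f"{mol}: fits {access}")
# With these values, 1,3-DIPB fits 12-MR and 18-MR pores while 1,3,5-TIPB
# fits only 18-MR pores, consistent with the cracking experiments below.
```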
Zeolite ZSM-20 118 and ITQ-21 119 both have structures that resemble zeolite Y, and pore openings that are similar in size to those of zeolite Y. Their cracking characteristics are similar to those of zeolite Y, except for a higher gas (LPG) and propylene yield, but lower gasoline olefinicity, in ITQ-21. Zeolite ZSM-20 shows good thermal stability compared to zeolite Y, but this does not directly translate into higher activity. In their description of the new zeolite IM-5, Corma et al. 120,121 apply various cracking and isomerization tests (i.e., n-decane hydroisomerization-cracking, m-xylene isomerization-disproportionation, and n-hexadecane isodewaxing) and adsorption tests to study the pore morphology and the suitability of the structure for cracking reactions. The structure is described as having 10-MR pores with side pockets, and the performance of the material is in some cases close to ZSM-5, possibly with improved thermal stability. Moliner et al. 122 describe the synthesis of ITQ-39, a new zeolite with a three-directional channel system with interconnected large (12-MR) and medium (10-MR) pores. The zeolite performs well in the alkylation of benzene to cumene. The authors claim the material would be a good additive for FCC, since its pore system behaves as an intermediate between zeolites ZSM-5 and Beta. The silica-germanate ITQ-33, from the same group, 123 is another zeolite with a mixed pore system, in this case an intersecting 18-MR/10-MR system. This material was compared to ITQ-17, a material with the same composition but only 12-MR pores, as well as to zeolite Beta (3D 12-MR pores). Cracking experiments were performed with 1,3-diisopropylbenzene (DIPB) and 1,3,5-triisopropylbenzene (TIPB), i.e. relatively large molecules that do not easily fit in small pores (DIPB can diffuse through 12-MR pores, TIPB cannot). The authors conclude the material behaves like a 12-MR, i.e. it has a medium-strong acid site strength. The material was also tested in VGO cracking, yielding more middle distillates than zeolite US-Y or Beta at the same conversion. 124 A catalyst mixture of ITQ-33 and ZSM-5 yielded more middle distillate as well as significantly more propylene than zeolite US-Y, even when the US-Y was also tested with a ZSM-5 additive. The economics and stability of the material may impede its widespread application, though. There are other new or fairly recently described materials that could be of use for FCC, but these have not been extensively discussed, such as the 11-MR systems JU-64 (JSR 125) and EMM-25. 126

Co-processing biomass-derived oxygenates with FCC catalysts

Due to the growing awareness of depleting crude oil resources, rising CO2 levels, global warming and the need to secure the energy supply, it would be advantageous to use biomass-derived feedstock in existing petroleum refineries. 133 As petroleum refineries are already in place, the use of this infrastructure for the production of fuels and base chemicals, such as propylene, from biomass requires, in principle, relatively little investment. An attractive, and already explored, option is the co-processing of biomass-derived oxygenates with petroleum-derived fractions, such as VGO. FCC of biomass-derived oxygenates gives products with a higher hydrogen content than the starting biomass-based feedstock by removing oxygen as carbon monoxide and carbon dioxide, next to an increased amount of water. In addition, higher amounts of carbon deposits are found on the FCC catalyst material, which can then be burned off in the regeneration to produce process heat.
Alternatively, the coke deposits formed during the co-processing of biomass with VGO in FCC can be converted into synthesis gas (CO + H2), which can be used elsewhere in the oil refinery. Another important issue relates to the significant content of water in biomass-derived oxygenates, which may not dissolve into VGO, although some options have been discussed by Corma and co-workers. 134 In this article we focus on the catalytic cracking of biomass-derived feedstocks mixed with petroleum-derived feedstocks making use of real-life FCC catalyst materials. This topic has been the subject of several review articles, and we refer the reader to the excellent articles of Huber & Corma and Stocker for the required background and the various possibilities for the catalytic cracking of lignocellulosic- and triglyceride-based feedstocks. 135,136 Examples include sugars (glucose and xylose), sugar alcohols (e.g. xylitol and glycerol), lignin, as well as vegetable oils. Other, more recent review papers are by Al-Sabawi and co-workers 137 and Kubicka and Kikhtyanin. 138

[Figure: overview of the reaction classes involved in co-processing, including (c) hydrogen-producing reactions; (d) hydrogen-consuming reactions; and (e) the production of larger molecules by carbon-carbon bond formation.]

Although the FCC process in principle does not require hydrogen, it can be produced through steam reforming and water-gas shift reactions, as well as dehydrogenation and decarbonylation of biomass-derived molecules. A seminal mechanistic contribution in this field of research originates from the group of Schuurman and Mirodatos, who have explored the ¹⁴C technique, known to discriminate fossil carbon from bio-carbon since fossil fuel is virtually free of ¹⁴C while biofuel contains the present-day amount of ¹⁴C, to determine how the carbon from the co-processed biomass re-distributes over the range of FCC products formed. This has been done by co-processing hydro-deoxygenated (HDO) bio-oil with VGO feedstock over an E-cat FCC catalyst. It was found that the bio-carbon was mainly concentrated in the gas fraction (10.6%) and in the coke deposits (15.8%), while the gasoline produced contains only around 7% of the bio-carbon. In other words, it was found that co-processing leads to a bio-carbon-impoverished gasoline, and a bio-carbon-enriched LPG product slate. Such an uneven bio-carbon distribution can be explained by the changes in the cracking routes during co-processing, arising essentially from the competitive adsorption of the polar oxygenated molecules and non-polar hydrocarbon molecules in the mesopore space of the FCC catalyst material. The HDO bio-oil molecules are preferentially cracked and deoxygenated into light gases, which seems to inhibit the production of bottoms, LCO and gasoline from the VGO feedstock. The larger coke formation, which was noted to be richer in bio-carbon, could originate from the re-polymerization of phenolic compounds. Another part of the increased coke formation may originate from the depletion of hydrogen due to water formation. A detailed study on the catalytic cracking of various bio-oil model compounds, which could be co-processed in an FCC unit, has been performed by Sedran and co-workers. 151,152 This group has studied, making use of an E-cat FCC catalyst, the influence of various functional groups in biomass-derived molecules on the catalytic conversion, selectivity and coke levels, and compared them to those obtained for the thermal cracking of the very same model compounds.
In particular, they have investigated the following biomass-derived model compounds: methanol (MET), acetic acid (ACET), methyl acetate (MACET), furfural (FUR), 3-methyl-2-pentanone (MP), 2-hydroxy-3-methylcyclopentenone (HMCP), phenol (PHE), 2,6-dimethoxyphenol (SYR) and 1,2,4-trimethoxybenzene (TMB). Table 1 summarizes the results of the thermal and catalytic cracking, including the conversion, as well as the yields of hydrocarbons, oxygenates, H2, CO2, CO, H2O and coke deposits. It can be concluded from Table 1 that the catalytic cracking activity decreases in the order: TMB > MACET > HMCP > FUR > SYR > MET > PHE > ACET. These conversion levels are, with some exceptions (e.g. TMB), always higher for catalytic cracking as compared to thermal cracking. Deoxygenation reactions, taking place via dehydration and decarboxylation, result in the production of CO2/CO and H2O; they were important in all cases, but dominant for ACET, MACET, HMCP, SYR and TMB. Deoxygenation reactions were always much less pronounced in the thermal conversions than in the catalytic conversions. The reverse was (almost) the case for the formation of coke deposits. The reaction products were also very different, ranging from mainly aromatics in the gasoline range for methanol and TMB, to C4 hydrocarbons of olefinic nature for PHE and SYR. In a follow-up study, Bertero and Sedran converted a raw and a thermally processed pine sawdust bio-oil over an E-cat FCC catalyst and compared their findings with a synthetic bio-oil composed of MET, ACET, MACET, FUR, HMCP, PHE, SYR and TMB. 153 It was found that with this biomass-derived feedstock mainly C4 olefins, oxygenates and coke were formed. In contrast, the synthetic bio-oil produced fewer hydrocarbons and more oxygenates and coke than the sawdust-derived feedstock. Thermal treatment of the raw bio-oil led to an increased amount of hydrocarbons, and a decreased amount of coke deposits. As a side conclusion it was stated that the behavior of bio-oils over FCC-based catalysts could not be well described by using mixtures of model compounds, indicating the need for real-life testing, including the use of a commercial FCC catalyst.

[Table 1: Thermal and catalytic conversion of various biomass-derived model compounds when using an E-cat FCC catalyst and a reaction temperature of 500 °C in a fixed bed laboratory reactor for 60 s. The selectivity is expressed as a distribution in wt% of the hydrocarbon products analysed. SiC was used as inert material in the reactor to simulate thermal cracking.]

Within this context, it is important to refer to the very recent work of Petrobras, who have co-processed raw bio-oil and gasoil in an FCC unit making use of an E-cat FCC catalyst. De Rezende Pinho and co-workers have made use of a bio-oil, produced from the fast pyrolysis of pine woodchips, together with standard VGO in a 150 kg h⁻¹ FCC demonstration-scale unit. 154 When 10% bio-oil was co-processed, LPG, gasoline and LCO were obtained with product yields similar to those of the base VGO feedstock. Increasing this co-processing to levels of 20% bio-oil led to some deterioration of product quality. Very interesting experiments, complementing those of Fogassy and co-workers, 142 were conducted with ¹⁴C isotopically labeled feedstock to determine the bio-carbon content of the FCC products.
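As a numerical aside, the arithmetic behind such ¹⁴C balances is simple: since fossil carbon is essentially ¹⁴C-free, the bio-carbon fraction of any product cut is its ¹⁴C activity relative to the modern (biogenic) reference, and comparing that fraction with the bio-carbon share of the feed shows whether the cut is enriched or impoverished in bio-carbon. The sketch below encodes this bookkeeping; the feed share and activity values are hypothetical illustrations, not measured data.

```python
# Bookkeeping behind 14C-based bio-carbon tracing in co-processing.
# Fossil carbon is essentially 14C-free, so the biogenic carbon fraction of a
# product cut equals its 14C activity relative to the modern (biogenic) level.
# All numbers below are hypothetical illustrations, not measured data.

def bio_fraction(activity_sample: float, activity_modern: float = 1.0) -> float:
    """Fraction of the carbon in a sample that is biogenic."""
    return activity_sample / activity_modern

def enrichment(bio_fraction_product: float, bio_fraction_feed: float) -> float:
    """>1: the product cut is enriched in bio-carbon relative to the feed."""
    return bio_fraction_product / bio_fraction_feed

feed = 0.10                    # hypothetical: 10% of the feed carbon is biogenic
gasoline = bio_fraction(0.07)  # hypothetical activity ratio measured on gasoline
coke = bio_fraction(0.16)      # hypothetical activity ratio measured on coke

print(f"gasoline: {enrichment(gasoline, feed):.2f}x")  # 0.70x -> impoverished
print(f"coke:     {enrichment(coke, feed):.2f}x")      # 1.60x -> enriched
```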
As mentioned above, the ¹⁴C isotopic analysis performed allowed distinguishing biomass-derived carbon from fossil carbon in the catalytic cracking products: naphtha (gasoline), LCO (diesel range) and bottoms. It was found that for 10% bio-oil in the feed, 2% bio-carbon was found in the total liquid product, while for 20% bio-oil in the feed, between 3 and 5% was found; the bio-carbon content of the LCO and bottoms was 5 and 6%, respectively. Furthermore, a high amount of phenolic compounds was detected in the naphtha produced by the FCC process, while most of the oxygen present in the bio-oil was removed as water. Another interesting observation was that fewer coke deposits were formed at this demonstration-scale production plant than anticipated on the basis of literature data from lab-scale testing units. These differences should be attributed to the differences in scale, as larger scales tend to improve the product qualities due to a better contact between catalyst and feedstock. These results show that the co-processing of bio-oils in FCC is technically feasible at the level of a demonstration plant making use of a practical E-cat FCC catalyst. The KiOR company has recently explored the commercialization of their catalytic pyrolysis technology making use of FCC catalysts intimately mixed with finely ground lignocellulosic biomass at a production facility in Columbus (MS, USA). 155-157 KiOR uses equipment from the paper industry to dry and grind the biomass, which is fed into a modified FCC reactor loaded with a zeolite-based catalyst, performing the actual pyrolysis process. KiOR then separates the pyrolysis oil from the other reaction products and removes oxygen by hydrogenation, using purchased H2. The resulting product is then distilled into fuels through standard oil refining technology. Unfortunately, the industrial activities of KiOR, which started around 2012, had to be stopped due to financial problems at the end of 2014. 158 Within this context it is important to mention other commercial approaches to producing biomass-based fuels in a refinery, such as the one developed by Neste/Albemarle. The Neste process (NexBTL) deoxygenates fatty acids to yield alkanes and propane (i.e. no fatty acid esters or glycerol are produced) in a catalytic hydrogenolysis process using a proprietary set of catalysts. The process is used in plants in Porvoo, Singapore and Rotterdam, at nearly 2 million tpa capacity. 159,160 Kalnes et al. describe a process which combines a deoxygenation stage with an isomerization stage to achieve the same conversion of fatty acids to diesel-range alkanes. 161

Processing tight oil and shale oil

One of the challenges facing especially USA-based FCC units is the increasing use of tight oils. These oils are generally relatively light, and contain low amounts of sulfur, nitrogen, nickel and vanadium. Although all these parameters are generally good for FCC operation, there are also a number of drawbacks to using tight oils. Tight oil cracking gives high naphtha and LPG yields, but these are generally paraffinic, which makes it more difficult to reach octane number targets for the gasoline pool. This needs to be corrected by units outside the FCC complex, such as isoparaffin-olefin alkylation and reforming. 162 Because the feedstock contains little Conradson carbon residue, and few aromatics, it is also more difficult to control the coke deposition and thus the unit heat balance (a rough heat-balance sketch is given below). In some respects, one could say that the feed is too easy to crack.
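To see why low coke make is a problem rather than a blessing, recall that in a heat-balanced FCC unit the coke burned in the regenerator must supply the heat for feed preheat, vaporization and the endothermic cracking. The promised sketch below runs that bookkeeping with round, hypothetical numbers; the heats and yields are illustrative assumptions only, not design values.

```python
# Rough FCC heat-balance bookkeeping: does the coke yield supply enough heat?
# All numbers are round, hypothetical illustrations, not design values.

COKE_HEAT_MJ_PER_KG = 32.0           # assumed heat of coke combustion
FEED_DUTY_MJ_PER_KG = 1.3            # assumed: preheat + vaporize 1 kg of feed
CRACKING_ENDOTHERM_MJ_PER_KG = 0.4   # assumed reaction endotherm per kg feed

def required_coke_yield_wtpct() -> float:
    """Coke yield (wt% of feed) needed to close the heat balance."""
    duty = FEED_DUTY_MJ_PER_KG + CRACKING_ENDOTHERM_MJ_PER_KG
    return 100.0 * duty / COKE_HEAT_MJ_PER_KG

need = required_coke_yield_wtpct()
actual = 3.0  # hypothetical coke yield for a light, paraffinic tight oil
print(f"required coke yield: {need:.1f} wt%; actual: {actual:.1f} wt%")
print("unit runs cold; extra firing or heavier co-feed needed" if actual < need
      else "heat balance closes")
```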
The high naphtha and LPG yields can disturb the distillation units and gas plants after the FCC unit, thus limiting the FCC throughput. Although the feed is low in the normal impurities (S, N, Ni and V), tight oils generally contain increased levels of Na, Ca, and Fe. 163 Especially the iron can lead to problems. 164,165 Na will of course potentially block the acid sites in the zeolite. The challenges described above can partly be addressed through process changes, and by adding additional VGO or resid feedstock to the unit. It is also evident that catalyst flexibility and tolerance to the specific contaminants need to be built into the FCC catalysts. 162,163

FCC catalysts in other refinery applications

Thermal coking is a process used in refineries to convert the heaviest part of the crude, the 'bottom of the barrel', into useful products. Three major processes exist in present-day refineries: delayed coking, fluid coking, and flexicoking. Because the feedstocks used by refineries are getting heavier, the coker units are utilized more heavily, and may become limiting in refinery operations. Therefore, improvements to these processes have been developed. One of these is to introduce a catalyst-containing additive to the coker feed, and thus transform the thermal coking into (at least partially) a catalytic process. The catalysts applied in this technology are derived from FCC catalysts, since these show a high activity and selectivity to hydrocarbons under similar operating conditions (low pressure, low hydrogen partial pressure), the size requirements are similar, and they can easily and cost-effectively be tailored to meet a variety of demands. Catalysts proposed for this application thus comprise a combination of the building blocks found in FCC catalysts (i.e., zeolite, matrix, clay and binder), but not necessarily all of them. This allows control of the cracking/coking ratio and the quality of both the hydrocarbon products as well as the coke. 166,167

Developments in FCC catalyst characterization

During the last two decades we have seen the merging of spectroscopy and microscopy, leading to the conception of different micro-spectroscopy approaches. 168 These promising characterization techniques are now finding their way into the field of heterogeneous catalysis, including the characterization of FCC catalysts at the single particle level. It is interesting to observe that at various synchrotron radiation facilities, single catalyst particle characterization tools are now becoming available, making it possible to compare bulk characterization data with single catalyst particle analysis. A similar trend can be found at analytical companies, which have introduced in recent years a variety of Raman, UV-Vis, IR and fluorescence microscopes, often in combination with microscopy accessories, including scanning/transmission electron microscopy (SEM/TEM) and atomic force microscopy (AFM). In the following paragraphs, we describe the possibilities and limitations of these micro-spectroscopy techniques, as applied to FCC catalyst materials, for shedding new light on the 2D and/or 3D distribution of (a) metal poisons, (b) acid sites, (c) pore network accessibility and connectivity and (d) ultra-structures, including the zeolite and the different matrix components (i.e., clay, silica and alumina).

X-ray-based characterization methods

Ruiz-Martinez et al.
This set-up allows the 2D distribution of metal poisons such as Ni and V, their oxidation state, and the presence of the different ultra-structures embedded in the FCC catalyst particle to be investigated for one and the same particle, with a 2D spatial resolution of 5 μm. The approach is illustrated in Fig. 22 for a fresh FCC and an E-cat catalyst particle. As one would expect, the fresh FCC catalyst particle did not contain any appreciable amount of Ni and V detectable by the μ-XRF method. Interestingly, high-quality XRD patterns could be obtained by the μ-XRD approach, which allowed the diffraction patterns of zeolite Y, clay and boehmite to be distinguished. Moreover, the relative intensity of the diffraction peaks, as well as their exact position, can be used to determine the relative contribution of zeolite Y, as well as the Si/Al ratio of the embedded zeolite aggregates. From Fig. 22c one can conclude that zeolite Y is randomly distributed, although the embedded zeolite material is not entirely homogeneously present in the catalyst matrix, and some hot spots with high amounts of zeolite Y are found. Furthermore, the Si/Al ratio is rather homogeneous across the entire catalyst particle. In strong contrast, the results for the E-cat catalyst particle are entirely different from those of the fresh FCC particle. First of all, it is obvious from Fig. 22g and h that both Ni and V are present as metal poisons, and that Ni (in green) shows an egg-shell distribution, whereas V (in blue) penetrates deeper into the inner parts of the FCC catalyst particle. μ-XANES confirmed that V and Ni were mainly present in their 5+ and 2+ oxidation states, respectively. Finally, μ-XRD revealed that the diffraction patterns of zeolite Y were much less intense than in the case of the fresh FCC catalyst particle. Furthermore, a distorted egg-shell distribution could be observed for both the relative intensities of zeolite Y (Fig. 22i) and the Si/Al ratio (Fig. 22l). The Si/Al ratio values are also much higher than those observed for the fresh FCC catalyst particle (Fig. 22f), indicating that severe dealumination has taken place during metal poisoning and subsequent catalyst regeneration.
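The link between the measured zeolite Y lattice parameter and the degree of dealumination can be made quantitative. One commonly used empirical correlation for faujasite-type zeolites is the Breck–Flanigen relation (given here for illustration only; the exact calibration used in the studies above may differ):

\[ a_0\ (\text{\AA}) \;\approx\; 24.191 + 0.00868\,N_{\mathrm{Al}}, \qquad \mathrm{Si/Al} \;=\; \frac{192 - N_{\mathrm{Al}}}{N_{\mathrm{Al}}} \]

where \( N_{\mathrm{Al}} \) is the number of framework aluminum atoms per unit cell (out of 192 tetrahedral sites). A shrinking unit cell thus translates directly into a lower framework Al content and a higher Si/Al ratio.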
Meirer and co-workers 170 have used element-specific X-ray nano-tomography to investigate the 3D structure of a whole individual FCC catalyst particle at high spatial resolution and in a non-invasive manner. This was done using a full-field X-ray absorption mosaic nano-tomography set-up at beamline 6.2 of the Stanford Synchrotron Radiation Lightsource, providing better than 30 nm 2D spatial resolution. With this instrumentation it was possible to map the relative spatial distribution of the metal contaminants Ni and Fe, and to correlate these distributions with porosity and permeability changes of an E-cat catalyst particle. Both Ni and Fe were found to accumulate in the outer layers of the catalyst particle, although Ni penetrated into deeper layers than Fe, effectively decreasing the porosity by clogging the macropores and thereby restricting access into the catalyst particle. This is illustrated in Fig. 23, which shows the permeability calculation for a sub-volume of the E-cat particle of 16.6 × 16.6 × 10 μm³ in size, in which Fe is found in lower concentrations than at the outer catalyst surface, while Ni is more concentrated at the top of the selected sub-volume (Fig. 23b). By simulating the fluid flow through this sub-volume, two distinct effects could be revealed. First, the authors observed a constriction of flow where Ni is present, indicated by high-velocity fluid flow (red area, Fig. 23c) through small cross-sectional areas. Elsewhere in the region, with little to no Ni, flow is less inhibited (blue streamlines, Fig. 23c). Secondly, there were areas with a large Ni content that were totally inaccessible because the Ni clogs some macropores completely. Another observation made possible by this X-ray nano-tomography study was that valleys and nodules occur at the outer surface of the FCC catalyst particle, similar to those seen in surface topography studies of E-cat samples. Fe was found to be distributed along these valleys and nodules, with the largest amounts in the top 1 μm outer layer of the E-cat catalyst particle. The X-ray nano-tomography work on FCC catalyst particles was recently expanded in two papers correlating 3D Fe and Ni contamination with porosity and pore connectivity in individual FCC particles, showing how gradual pore clogging can explain the progressive deactivation of FCC catalysts by metal poisoning. 165 This was done by making use of a unique set of four FCC catalyst particles: a fresh FCC catalyst particle and three E-cat catalyst particles with increasing metal loading. The latter three particles were obtained by performing a density separation step on a well-characterized E-cat sample. In a subsequent step, X-ray nano-tomography was used to quantify the changes in single-particle macroporosity and pore connectivity for these four FCC catalyst particles and to correlate them with Fe and Ni deposition. Both Fe and Ni were found to be gradually incorporated almost exclusively in the near-surface regions of the FCC catalyst particles, severely limiting the macropore accessibility as metal concentrations increase. Because macropore channels can be regarded as the ''transportation highways'' of the pore network, blocking them prevents crude oil feedstock molecules from reaching the catalytically active domains. Consequently, metal deposition reduces the catalytic conversion with increasing time on stream because the internal pore volume, although itself unobstructed, becomes largely inaccessible. Furthermore, it was found that metal accumulation at the near-surface regions plays a role in FCC catalyst particle agglutination. 171 This was concluded from a detailed analysis of the concentration distribution of Fe and Ni in a system of agglutinated FCC catalyst particles. It was found that the interfaces between the agglutinated catalyst particles have metal concentrations above the average near-surface concentrations, suggesting that the surface accumulation of Fe and Ni could lead to increased particle clustering and hence decreased cracking activity.
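For context, permeability values in flow simulations such as the one discussed above are typically extracted from the computed velocity field via Darcy's law; the exact solver and boundary conditions used by the authors are not repeated here, so the following is only the generic relation:

\[ k \;=\; \frac{\mu\,\langle u \rangle\, L}{\Delta p} \]

where \( \langle u \rangle \) is the average (superficial) flow velocity through a sub-volume of length \( L \), \( \mu \) the fluid viscosity and \( \Delta p \) the applied pressure drop. Local Ni deposits that force the same flow through a smaller cross-section show up as high-velocity channels and a reduced effective \( k \).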
Bare and co-workers 172 have studied E-cat catalyst particles at both the ensemble and the single-particle level, making use of a combination of X-ray micro- and nano-tomography as well as μ-XRF and μ-XRD. The X-ray micro- and nano-tomography were performed at beamline 2-BM of the Advanced Photon Source and the X8C beamline of the National Synchrotron Light Source, respectively, whereas the μ-XRF and μ-XRD data were acquired at beamline ID-D of the Advanced Photon Source. X-ray micro-tomography was used to determine the average size and shape, and their respective distributions, of over 1200 individual E-cat catalyst particles. As shown in Fig. 24a, it was found that a large fraction of the E-cat particles contained large internal voids, which certainly affect the particles' density, accessibility and catalytic activity. Fig. 24d shows the equivalent diameter of the internal voids within these E-cat catalyst particles, illustrating that most of them are in the range of 5–15 μm, whereas several can even exceed 25 μm in size. 2D transmission X-ray microscopy images of both situations are shown in Fig. 24b and c, respectively. X-ray nano-tomography revealed, in addition to these large micrometer-sized pores, voids in the sub-micrometer range, with macropores as small as 100 nm in diameter. Furthermore, the method was able to resolve different ultra-structures, such as clay, TiO₂ and La-stabilized zeolite Y. The μ-XRF measurements performed by the authors revealed that Ni was preferentially located on the exterior of the E-cat particles, while V was deposited more evenly throughout the catalyst particle, confirming the observations of Ruiz-Martinez et al. 169 Finally, the measured μ-XRD patterns allowed the identification of the zeolite La-Y aggregates present, including the determination of the lattice parameters of zeolite Y, providing direct insight into the dealumination degree of the material. Da Silva et al. 173 used a combination of phase-contrast X-ray micro-tomography and high-resolution ptychographic X-ray tomography to investigate a model FCC catalyst body, consisting of a mixture of 5% La₂O₃-exchanged zeolite Y and metakaolin, at the single-particle level. The two types of tomographic methods were performed at the TOMCAT and cSAXS beamlines of the Swiss Light Source. Fig. 25 illustrates the results obtained with ptychographic X-ray tomography operating at a 3D spatial resolution of 39 nm. Fig. 25a shows a vertical slice of the electron-density tomogram of the FCC catalyst body. The two distinct material phases present can be clearly distinguished, and the upper and some lateral parts that appear brighter indicate some re-deposition of material during the focused ion beam (FIB) milling of this model FCC catalyst particle. Fig. 25b presents a selection of axial sections at different vertical positions of the tomograms obtained, with the differently colored squares reflecting different heights in the catalyst body shown in Fig. 25a. The top region is mostly metakaolin with only a few spots of zeolite material, whereas the deeper parts contain more zeolite material. The zeolites are round and porous, whereas the metakaolin is more square-shaped, and these differences in morphology also lead to the formation of interparticle pores of irregular shapes. By taking into account the mass-density differences between 5% La₂O₃-exchanged zeolite Y and metakaolin it was possible to obtain a 3D rendering of both components, as shown in Fig. 25c–f, in which the metakaolin clay, zeolite and pores are colored in blue, red and light blue, respectively. As the spatial resolution of the ptychographic X-ray tomography method is in the same range as that of mercury porosimetry, the latter was also applied to the model FCC catalyst material under investigation, and a fairly good agreement was found between the pore diameter ranges probed by the local and global characterization methods.
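A note on how the two phases can be told apart in such data: ptychographic tomography reconstructs the local electron density \( \rho_e \), which for a material of known stoichiometry maps onto the mass density via the generic relation (the exact compositions assumed by the authors are not repeated here):

\[ \rho_e \;=\; \rho_m\, N_A\, \frac{\sum_i Z_i}{\sum_i M_i} \]

where the sums run over the atoms in the formula unit (\( Z_i \) atomic numbers, \( M_i \) molar masses) and \( N_A \) is Avogadro's number. Because zeolite Y and metakaolin differ in mass density, they appear as distinguishable grey levels in the electron-density tomogram, which is what enables a segmentation of the kind shown in Fig. 25c–f.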
Kalirai et al. have recently used synchrotron-based multi-element XRF tomography with a large-array Maia detector to investigate the 3D distributions of metal poisons (i.e., Fe, Ni, V and Ca) and structural markers (i.e., La and Ti) within individual, intact and industrially deactivated FCC catalyst particles at two different catalytic life-stages. 174 This study was performed using the recently developed set-up at the P06 beamline of the Petra III synchrotron (DESY, Hamburg, Germany). It was found that all metal poisons under study show a radial concentration gradient with a maximum near the surface of the catalyst particle, gradually decreasing towards the particle's interior. Correlation analysis of the metal poisons revealed that Fe, Ni and Ca are highly correlated, particularly at the particle's exterior, where they form a shell around the FCC catalyst particle. V clearly penetrates further into the particle. This is illustrated in Fig. 26. However, no spatial correlation was found between V and La, hinting that V does not specifically interact with the zeolite domains and is present near the Al₂O₃-based matrix components of the catalyst particle.

Visible and infrared light-based characterization methods

Buurmans, Ruiz-Martinez and co-workers 175 have developed a set of selective staining reactions, catalyzed by Brønsted acid sites, that make it possible to localize zeolite aggregate domains with a confocal fluorescence microscope. More specifically, various thiophene and styrene derivatives could be oligomerized selectively within the pores of zeolites Y and ZSM-5 embedded in an FCC catalyst matrix. As the resulting probe-molecule oligomers have a distinct optical spectrum in the visible region, it was possible to excite the oligomers formed with an appropriate laser in a confocal fluorescence microscope with a 2D spatial resolution of 500 nm. This selective staining approach is illustrated in Fig. 27 for two fresh FCC catalyst particles, one containing zeolite Y and the other devoid of zeolite material. Clearly, green spots of 2 μm or smaller could be discerned, which were absent in the FCC catalyst particle containing no zeolite Y. Based on this selective staining approach, three industrially relevant deactivation methods, namely steaming (ST), two-step cyclic deactivation (CD) and Mitchell impregnation-steam deactivation (MI), have been evaluated and compared with both fresh FCC and E-cat samples. A statistical analysis of the fluorescence microscopy data obtained for the two sets of FCC catalyst materials (labeled FCC1 and FCC2), originating from two different manufacturing routes, is shown in Fig. 28. By comparing the fluorescence intensity values for the FCC1 and FCC2 catalyst batches, the reliability of the staining method was verified: the fluorescence microscopy images for the fresh and deactivated FCC1 and FCC2 catalyst batches were very similar, as were their average fluorescence intensities. Interestingly, the fluorescence intensity trend follows the order Fresh > ST > CD > MI.
Because sufficiently strong Brønsted acid sites are needed for the formation of fluorescent carbocations upon e.g. thiophene oligomerization, this means that the amount of strong acid sites decreases with the severity of the applied deactivation method. The sustained accessibility of the internal volume was confirmed with a staining reaction with the unreactive dye Nile Blue, which is too large to enter the zeolite pores. Furthermore, it was found that the E-cat sample has a fluorescence intensity, hence acidity, in between that of the CD and MI FCC catalyst particles. These findings have been corroborated by catalytic cracking activity measurements, as well as by bulk XRD, IR spectroscopy after pyridine adsorption, TPD of ammonia and N₂ physisorption measurements. These additional bulk characterization data on the two sets of FCC catalyst batches confirmed that the confocal fluorescence microscopy data are in line with the observed Brønsted acidity trends. Finally, the advantage of the developed single-particle analysis approach is that the average fluorescence intensity per individual FCC catalyst particle can be determined. This is shown in Fig. 29. It was found that the range of fluorescence intensities observed for the E-cat sample is wider than for CD and MI combined, reflecting the large interparticle heterogeneity in terms of age, Brønsted acidity and catalytic activity within an industrial E-cat sample. Interestingly, a similarly wide range in fluorescence intensity, hence Brønsted acidity, was also observed for fresh FCC catalyst particles. In a follow-up manuscript, Buurmans and co-workers 176 extended their selective staining approach to the determination of reactive zeolite ZSM-5 and Y aggregates within single FCC catalyst particles, making use of 4-fluorostyrene and 4-methoxystyrene as probe molecules. The two styrene derivatives have different oligomerization reactivities: 4-fluorostyrene preferentially visualizes strong Brønsted acid sites, whereas 4-methoxystyrene as staining molecule reveals both weak and strong Brønsted acid sites. It was found that the zeolite ZSM-5 aggregates are 1–5 μm in size and are inhomogeneously distributed within the FCC matrix. Fresh as well as three distinct laboratory-deactivated catalyst particles were studied upon reaction with the two styrene derivatives. When comparing these four sets of zeolite ZSM-5-containing FCC catalyst particles, the Brønsted acidity decreases in the order fresh > steamed > CD > MI. In the case of ZSM-5 as zeolite material, the same activity trends were obtained for 4-methoxy- and 4-fluorostyrene as probe molecules. However, this was not the case for the zeolite Y-containing FCC catalyst particles. These particles showed lower fluorescence intensity values upon reaction with 4-fluorostyrene, suggesting that not every acid site within the catalyst particles has sufficient strength to convert 4-fluorostyrene into fluorescent carbocations. In other words, a large fraction of the Brønsted acid sites could be visualized with the less-demanding 4-methoxystyrene, while only a small fraction is observed upon reaction with 4-fluorostyrene. This observation is indicative of a higher Brønsted acid site diversity in FCC catalyst particles containing zeolite Y as compared to FCC catalyst particles containing zeolite ZSM-5. This conclusion is corroborated by ammonia TPD results on the same set of FCC catalyst materials.
Indeed, the ammonia TPD curve for the FCC catalyst containing ZSM-5 shows a more well-defined pattern, which corresponds well with the more homogeneous Brønsted acid strength deduced from the confocal fluorescence microscopy experiments. Sprung and Weckhuysen 177 refined the confocal fluorescence microscopy approach for single FCC catalyst particles containing ZSM-5, and determined the size, distribution, orientation and amount of zeolite ZSM-5 aggregates within the binder material. More specifically, by exploiting the anisotropic nature of zeolite ZSM-5 crystals and their interaction with plane-polarized laser light, it proved possible to distinguish between zeolite ZSM-5 aggregates and the various binder materials (i.e., alumina, silica and clay) after staining the FCC catalyst particles with 4-fluorostyrene as a probe molecule. It was found that the amount of detected fluorescent light corresponds to about 15 wt% of zeolite material, whereas statistical analysis of the emitted fluorostyrene light indicated that a large number of ZSM-5 domains appear at small sizes of ~0.015–0.25 μm², representing single zeolite crystals or small aggregates thereof. On the other hand, the largest amount of the zeolite ZSM-5 material within the FCC matrix was aggregated into larger domains (ca. 1–5 μm²) with more or less similarly oriented zeolite crystallites. Unfortunately, the confocal fluorescence microscopy approach described above cannot resolve sub-micrometer zeolite domains in great detail, nor does it provide any quantitative information about the catalytic activity of the individual zeolite aggregates dispersed within the FCC matrix. For this purpose, single-molecule fluorescence microscopy can be called upon as a very sensitive and informative method. Ristanovic and co-workers 178 have very recently reported the first application of single-molecule fluorescence microscopy, and the required data analysis methods, to quantitatively investigate Brønsted acidity and related catalytic activity in a single FCC catalyst particle containing ZSM-5. The approach developed in this work has a 2D spatial resolution of 30 nm; the approach and related results are summarized in Fig. 30. To selectively study the zeolite ZSM-5 domains within the FCC catalyst particles, the oligomerization of furfuryl alcohol was used as a selective probe reaction. The 532 nm laser light can excite the fluorescent oligomers formed from the non-fluorescent furfuryl alcohol monomers when brought into contact with Brønsted acid sites. The individual fluorescent reaction products are detected with a very sensitive detector, and a typical fluorescence trajectory of an individual hotspot can be visualized (Fig. 30e). Fig. 30d shows a 2D wide-field fluorescence micrograph of an FCC particle, in which the white circles indicate localized fluorescence bursts originating from the fluorescent oligomerization products. In a next step, a statistical analysis procedure was developed to obtain accumulated reactivity maps of the Brønsted acid domains present within the FCC catalyst particles. An example of such a so-called SOFI map is shown in Fig. 30f, obtained at a focal depth of 1 μm. Interestingly, these SOFI maps allow the aggregate sizes of the zeolite ZSM-5 domains to be determined, and Fig. 30g shows a histogram of the zeolite domain size distribution. It can be seen that most of the zeolite ZSM-5 domains are well dispersed and range in size between 0 and 0.2 μm².
This is in good agreement with the zeolite ZSM-5 dispersion data described earlier by Sprung and Weckhuysen, 177 although the single-molecule fluorescence microscopy study achieves a much higher spatial resolution, as illustrated in Fig. 30h. Another advantage of the single-molecule spectroscopy approach is that it makes it possible to put reactivity numbers on the visualized zeolite ZSM-5 aggregates. More specifically, the intensity map in Fig. 30h reveals zeolite aggregate domains with almost no reactivity towards furfuryl alcohol, whereas other zeolite aggregate domains are much more reactive. As a consequence, one can plot the SOFI brightness of distinct zeolite ZSM-5 domains as a function of their measured turnover rates. This is shown in Fig. 30i, which indicates that the most active zeolite ZSM-5 domains differ by approximately an order of magnitude in reactivity from the less active ones. These reactivity differences should be related to differences in the framework aluminum content of the embedded zeolite domains and/or to local accessibility differences. Another powerful chemical imaging technique, namely IR microscopy with a 2D spatial resolution of 10 μm, has been explored by Buurmans et al. 179 and applied to single FCC catalyst particles containing zeolite Y, either in their fresh or deactivated state. By investigating a population of 15 FCC catalyst particles after adsorption of pyridine, it was possible to evaluate both Brønsted and Lewis acidity at the single-particle level. Comparable acidity trends were obtained as with bulk pyridine transmission IR spectroscopy, and they follow the same reactivity order as described before for confocal fluorescence microscopy, i.e., Fresh > ST > CD > MI. In other words, the IR microscopy data corroborate the data obtained with confocal fluorescence microscopy, indicating that it is now possible to reliably assess Brønsted acidity within FCC catalyst particles at the single-particle level. Moreover, the single-particle IR study revealed large interparticle heterogeneities in both Brønsted and Lewis acidity within the different FCC catalyst batches. Interestingly, and in line with the data described in Fig. 29, E-cat catalyst particles possess a significantly wider variety in Brønsted acidity compared to CD and MI FCC catalyst particles, which may be explained by the age distribution in the E-cat.

Correlative characterization methods

Ideally one would like to link reactivity differences to information on the ultra-structures of the different components present within a single FCC catalyst particle. This has recently become possible by using an integrated laser and electron microscope (iLEM) as a chemical imaging method, as demonstrated by Karreman and co-workers, 180 and by combining fluorescence microscopy with focused ion beam-scanning electron microscopy (FIB-SEM) imaging, as explored by Buurmans, De Winter and co-workers. 181,182 The iLEM technique combines the strength of a regular fluorescence microscope with that of a transmission electron microscope (TEM) in a single set-up. It enables the rapid identification of fluorescent domains after applying a selective staining procedure, and the subsequent investigation of these regions with the superior spatial resolution provided by TEM.
To make this possible, FCC catalyst particles containing zeolite Y were first stained with 4-fluorostyrene as probe molecule, after which the stained FCC catalyst particles were embedded in a resin. Next, thin sections of the FCC material were made and placed on a TEM grid. Fluorescence microscopy images were first collected to determine the relative fluorescence of specific domains in the sliced FCC catalyst particles, followed by a detailed TEM analysis of the ultra-structures, which show fluorescence upon staining (to different degrees) or simply are not fluorescent. The iLEM method, as applied to a fresh FCC catalyst particle, allowed the identification of areas mainly consisting of zeolite Y, present as structures with dimensions of a few hundred nanometers that display a very strong structural resemblance to pure zeolite Y and that became fluorescent after catalyst staining. Electron diffraction patterns confirmed the presence of zeolite Y. Other areas, which did not show any appreciable fluorescence upon catalyst staining, consisted of matrix material; here, for example, plate-like components assumed to be clay particles, as well as amorphous material, could be observed. Finally, areas with intermediate fluorescence intensity upon catalyst staining revealed the presence of a mixture of zeolite, clay and amorphous material. To further explore the iLEM method, an FCC catalyst material hydrothermally deactivated with steam was investigated. In contrast to fresh FCC catalyst particles, steamed FCC catalyst particles showed much lower fluorescence intensity. Furthermore, the steam treatment also affected the zeolite Y structure, as a large fraction of the crystals was damaged and appeared macroporous. Electron diffraction experiments indicated that, in addition to the diffraction maxima of zeolite Y, small γ-Al₂O₃ crystallites were also present. In a follow-up study, Karreman et al. 183 systematically investigated a large set of iLEM images obtained for stained FCC catalyst particles, either fresh or steamed, as well as stained E-cat catalyst particles. This enabled the identification of a wide variety of ultra-structures, which are summarized in Fig. 31. In addition to the expected undamaged zeolite and clay domains, which are exclusively present in the fresh FCC catalyst particles, mesoporous, macroporous, clotted and severed zeolite domains could also be discerned. Furthermore, it was found that next to amorphous regions, fragments and damaged clay domains can also be present. Fig. 32 illustrates the advantage of the iLEM method by focusing on the presence of clotted zeolites, which occur almost exclusively in E-cat catalyst particles. One can notice that in the outer surface region of the red-colored fluorescent E-cat catalyst particle there is a region with a clear ultra-structure, but which is non-fluorescent. This non-fluorescent layer is about 1 μm thick, and a further zoom-in revealed that it consists of 200–500 nm-sized crystallites. It was assumed that these clotted zeolite crystals, with very low or no Brønsted acidity, are poisoned with metals such as Fe, Ni or V, as one may expect from the X-ray micro- and nano-tomography studies described above.
In an analogous manner, Buurmans, De Winter and co-workers first stained the zeolite domains within individual FCC catalyst particles using 4-fluorostyrene as probe molecule, then FIB-milled the catalyst particle and imaged the porosity network with SEM. 181,182 Fig. 33 shows a snapshot of the porous network of an FCC catalyst particle, including a fluorescence microscopy image, illustrating that both zeolite domains and porosity can be imaged and that both types of information can be correlated. From the detailed iLEM characterization study, Karreman and co-workers have proposed a model for the functional and structural degradation of zeolite Y crystals in FCC and E-cat catalyst particles, summarized in Fig. 34. FCC catalyst particles have a different deactivation route than E-cat catalyst particles. In the case of an E-cat catalyst particle, deactivation occurs mostly through fragmentation and/or decrystallization of zeolite crystals and the formation of meso- and macropores (Fig. 34, right-hand side). Furthermore, the observed ''clotting'' of mesoporous zeolites (Fig. 34, left-hand side) is limited to E-cat materials. In contrast, steam deactivation strongly induces the severing of the zeolite Y crystals, while the formation of fragments in these samples is very rarely observed. The latter indicates that the formation of fragments is a phenomenon reserved for the harsh industrial catalytic cracking conditions. Although the zeolite degradation model described in Fig. 34 suggests a continuous process of FCC catalyst particle degradation, various degradation stages were observed throughout the sections of the E-cat samples investigated. The structural diversity within the E-cat catalyst particles was not limited to the presence of different domains, as fragments, macroporous and mesoporous zeolites, as well as amorphous material, were found scattered throughout the thin sections of the catalyst particles. Different stages of catalyst deactivation occurred in close proximity to each other, as zeolite degradation can occur either in homogeneous domains within the catalyst particle or in a more heterogeneous manner, where neighboring zeolite crystals can vary in their exact structure. In other words, the degradation of zeolite Y crystals is a non-synchronous process, which varies in onset and progression from zeolite to zeolite within a single E-cat catalyst particle.

Concluding remarks and look into the future

The review presented here indicates that research into FCC catalysts and processes is very much alive, in spite of the fact that this important catalytic process is close to 75 years old. Recent changes in the availability of feedstocks, including renewables and tight oils, and longer-term trends in the demand for propylene, gasoline and middle distillates require further developments in both catalyst and process. A plethora of new spectroscopic and microscopic tools utilizing ultraviolet-visible and infrared light, hard and soft X-rays, electrons and, very importantly, combinations of these techniques on the exact same sample spots in integral catalyst particles have recently increased fundamental understanding of the catalyst considerably. Detailed analyses of the pore structure, metal deposition and zeolite deactivation have been published.
Resolution in the analytical techniques applied to integral particles is approaching the nanometer range, which will allow a more detailed analysis of the interaction between the matrix and the zeolite, and a complete analysis of the pore system in the micro- and mesopore range. In situ characterization techniques will yield fundamental insights into the chemistry and dynamics of the process during individual cracking and regeneration cycles, and will increase our understanding of the deactivation of the catalyst as a function of metal deposition, steaming and coke laydown. It is extremely important to create a clear connection between the macroscopic world of catalyst testing and real-unit performance and the microscopic world. The present microscopic and spectroscopic tools rely on the analysis of a limited number, often not more than a handful, of FCC catalyst particles, whereas industrial performance takes place at a scale many orders of magnitude larger. This gap needs to be bridged, and proper tools will have to be developed to make this possible. These insights will allow us to fine-tune the catalyst performance in the directions required by the large-scale trends in raw material availability and product demands, as summarized in Table 2 (which lists, among other requirements, resistance to specific poisons such as alkali metals and to the high acidity of bio-oils for bio-oil co-processing, and resistance to specific poisons and fracking additives for tight oil conversion). Since FCC is one of the largest catalytic processes in the world, any improvement in process efficiency (e.g. coke make) will be multiplied by extremely large factors. For instance, burning coke from FCC amounts to at least 100 million tons of CO₂ per year. Research into FCC catalysts and processes, as well as analytical methods, especially those that investigate real commercial catalysts, preferably under operando conditions, therefore remains highly relevant.
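As a rough order-of-magnitude check of this CO₂ figure (the capacity and coke-yield numbers below are round assumed values, not taken from the review): with a worldwide FCC capacity on the order of 14 million barrels per day (≈ 700 Mt of feed per year) and a typical coke yield of ~5 wt%, some 35 Mt of coke are burned annually. Approximating the coke as carbon,

\[ 35\ \text{Mt C} \times \frac{44}{12} \;\approx\; 130\ \text{Mt CO}_2\ \text{per year}, \]

which is consistent with the ''at least 100 million tons'' quoted above.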
Approach to hair transplantation in advanced grade baldness by follicular unit extraction: A retrospective analysis of 820 cases

Background: In advanced grade baldness (Norwood 5-7), hair restoration has been considered difficult due to the donor-recipient area mismatch. In this article, we give a comprehensive, methodical approach to managing these cases. Objective: To assess the outcome and challenges faced with follicular unit extraction (FUE) and to plan successful management of advanced grade baldness in 820 cases of androgenic alopecia. Materials and Methods: A retrospective analysis of 820 male patients with an advanced grade of baldness (grade 5-7) treated by FUE. The patients were divided into five groups based on the extent of scalp coverage, for example, frontal coverage, frontal + mid-front coverage, vertex, full coverage, and frontal forelock only. The results were analyzed at 6, 9, 12, and 24 months. Results: At 12 months, 94% of patients were satisfied with the results, whereas 62% wanted another sitting to increase the coverage area/density. Conclusion: Hair transplantation can give natural and aesthetic results even in advanced baldness. Beard and body hairs can be used to augment results in cases with limited donor supply. A mature hairline with adequate density in a gradient from front to back helps achieve a satisfactory response even in extensive cases of advanced baldness.

Introduction

Patients with hair loss have an increased risk of psychosocial and psychiatric morbidity. [1] For androgenic alopecia (AGA), the only permanent available solution is hair transplant. Follicular unit extraction (FUE) has evolved dramatically as the most recent advancement in minimally invasive surgical hair restoration. Dr. Orentreich, a dermatologist, was the first to describe surgical hair restoration in the 1950s. [2,3] FUE was first described in 2002 by Inaba, who noted that only the upper third of the follicular unit (FU), with the arrector pili muscle attachment, needs to be freed by a punch for effective FUE. [2,4,5] Instead of removing a large area of scalp skin for FU harvest, in FUE individual FUs are removed from the donor area and prepared for transplantation into the recipient scalp. This process leaves little scarring and thus creates a natural, aesthetically pleasing result. [6] Thus, FUE is a simple surgical technique that serves as an important alternative in the management of advanced AGA where other methods have proved ineffective.

Materials and Methods

A retrospective analysis of 820 male patients who underwent FUE for AGA (Norwood G5-7) between 2012 and 2017 at our center was conducted. Inclusion criteria: all patients with Norwood grade 5-7 AGA, diagnosed by a dermatologist (outside as well as at our center), who underwent hair transplant surgery at our center. Patients were provided with written information regarding both preoperative and postoperative instructions. The patients were divided into five groups based on the scalp coverage required, for instance, patients requiring frontal coverage, frontal + mid-front coverage, vertex, full coverage, and frontal forelock only. The donor area was assessed to establish its adequacy in the permanent donor zone, and extraction was limited to 25% of the permanent hair zone available. In cases where the donor area was limited, beard and body extraction was planned.
Hence, the basis for categorizing these patients into the various groups depended on two factors: the availability of a suitable donor area and the desired coverage area as expressed by the patient. For instance, (1) patients with limited donor supply from scalp, beard, and body were offered frontal forelock only; (2) patients with limited scalp donor supply but adequate beard and body were offered frontal coverage; (3) patients with average donor supply were offered front + mid-front coverage; (4) patients with limited donor supply specifically desiring vertex coverage were offered vertex only; and (5) patients with adequate density and quality of donor supply were offered full coverage. The patients also underwent certain laboratory blood investigations, and their list of regular medications was reviewed to rule out any drugs that may affect bleeding time. The patient's hair was trimmed short before the surgery. Anxiolytics, painkillers, and antibiotics were administered at the start of surgery. Strict surgical asepsis was maintained, and supraorbital and supratrochlear nerve blocks were given to anesthetize the recipient area. Ring block anesthesia was administered to the recipient area using a combination of xylocaine and bupivacaine. This was followed by the tumescent injection, a mixture of lignocaine, bupivacaine with adrenaline, saline/Ringer lactate, and triamcinolone acetate. Slits were made using Cut-To-Size blades (Robbins Instruments, NJ 07928, USA) of 0.9-1 mm width. To determine the appropriate depth of the blade, we extracted 2-3 hairs from different areas and set the blade length accordingly. Similarly, ring block anesthesia followed by tumescent solution was administered to the donor area. After this, follicles were extracted from the safe donor area using micromotor punches. FUs were implanted into the slits using two forceps. All patients received postoperative antibiotic therapy, oral corticosteroids, and analgesic medications. The dressing on the donor area was removed after 3 days. Patients were followed up in the immediate postoperative period; at 3, 7, and 14 days; and then monthly up to 1 year postoperatively. [7] The follow-up period was 24 months, to try to assess the longevity of the transplanted hair. Patients were asked to fill out a questionnaire at 12 months, which covered their satisfaction level on a five-point scale, any visible reduction in the donor area or need for a second sitting, and side effects, if any. The patients were required to subjectively rate their satisfaction level using a five-point scale as follows: 1: very unsatisfied, 2: satisfied, 3: good, 4: very good, and 5: excellent.

Challenges faced during hair transplantation by FUE in G5-7 male pattern hair loss

Successful hair restoration in advanced grade baldness has been considered difficult due to the donor-recipient area mismatch and various other limitations. Thus, we discuss in detail the major pitfalls and an efficient strategy to overcome these inadequacies under the following headings [Table 1]:

Donor area management and planning of distribution

Plan a mature hairline and avoid succumbing to impracticable demands from patients requesting further lowering of the hairline. Give patients realistic expectations based on the available donor area. A safe donor area is measured roughly 6 cm from the external occipital protuberance. Thus, the average permanent donor zone contains about 10,000-15,000 FU. If the rule of 1:4 (25%) extraction is followed, this should yield 2500-3500 FU. [8-10]
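As a worked illustration of this 25% rule (the per-FU hair count below is derived from this study's own averages, not an external standard): for a donor zone of, say, 12,000 FU,

\[ 0.25 \times 12{,}000\ \text{FU} \;\approx\; 3{,}000\ \text{FU}, \]

and at the ≈2.1 hairs per FU implied by this series (6320 hairs from 2956 scalp FU on average), a harvest of ~3000 FU corresponds to roughly 6300 hairs, matching the per-patient scalp averages reported in the Results.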
Assessment of the donor area consists of gauging the hair follicle thickness, quantity, quality, and caliber, as well as documenting the presence/absence of retrograde alopecia or previous scarring of the donor area. An area of 1 × 1 cm was marked out in the donor area and a trichoscan was performed to evaluate the following parameters: hair growth, hair density, hair diameter, and hair quality [Figure 1]. Assessments such as hair count and hair thickness in the donor area help to plan the surgery. On the basis of the availability of donor grafts, we can plan the coverage area that is feasible. For full coverage, a minimum of approximately 6000 grafts is required; for front and mid-front coverage, 4500-5000 grafts; for frontal coverage, 3000-3500 grafts; and for a frontal forelock, a minimum of 1500-2000 grafts. In patients with advanced grade baldness and a limited safe donor area, we need to augment the donor zone with alternate donor areas (e.g., beard/body hair). In most cases, 1000-1500 FU can be extracted from the beard. Body donor hair assessment includes the quality, density, thickness, and caliber of the hair. For instance, in a case where the donor scalp area yields 3000-3500 grafts, we can augment it with 1000 grafts from the beard and thus plan frontal and mid-frontal coverage. The best method of utilizing scalp, beard, and body hair in advanced baldness cases is as follows [Figure 2]: [11-14]
• The frontal hairline is designed using scalp hair
• The mid-front area is designed using a mixture of scalp and beard/body hair
• The vertex is designed using a mixture of scalp, beard, and body hair
• Body hairs can be used to soften the appearance of the hairline and temporal triangles

Time management

Motorized FUE allows an extraction rate of >1000 FU/h with a <5% transection rate. Sharp punches with rounded edges of 0.8-0.9 mm size are preferred over blunt punches. [15] Good tumescence, followed by continuous scoring by the surgeon in sessions and then extraction of the FUs, helps hasten the process without compromising graft survival. Good-quality loupes and adequate lighting are other essential requirements.

Graft management

Simultaneous scoring, extraction, and implantation can be carried out in an organized manner to save time. The extracted grafts need to be constantly hydrated and stored in specialized solutions at all stages. Body donor hair in particular tends to be fragile, finer, shorter, and subject to desiccation injury; these grafts therefore require constant vigorous hydration along with less "out of body" storage time. The "no touch to root" method of implantation is followed at all steps by the team. The implantation density should ideally be maintained at 35-40 FU/cm² for the frontal area, with a reducing gradient from the frontal hairline toward the vertex (20-25 FU/cm²). However, situations in which we ought to be wary of dense packing include chronic smokers, long-standing grade 7 cases, chronic hypertensives, old age, and female hair transplantation. The hair follicles are distributed in a gradient from front to back, with reducing density from the frontal hairline to the vertex.

Anesthesia management

The anesthetic agents used by us in FUE were 2% lignocaine and 0.5% bupivacaine, given in the form of nerve blocks, ring block, and tumescent anesthesia. The maximum safe dose for lignocaine is 3 mg/kg (7 mg/kg with adrenaline), whereas for bupivacaine it is 2 mg/kg.
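A worked dose calculation for an assumed 70 kg patient (the body weight is illustrative only):

\[ \text{lignocaine + adrenaline: } 7\ \tfrac{\text{mg}}{\text{kg}} \times 70\ \text{kg} = 490\ \text{mg} \;\approx\; 24.5\ \text{mL of a 2\% (20 mg/mL) solution} \]
\[ \text{bupivacaine: } 2\ \tfrac{\text{mg}}{\text{kg}} \times 70\ \text{kg} = 140\ \text{mg} \;\approx\; 28\ \text{mL of a 0.5\% (5 mg/mL) solution} \]

Dilution into the tumescent mixture does not change these ceilings, which is one reason the totals must be tracked across nerve blocks, ring blocks, and tumescence combined.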
To further reduce the adverse effects related to anesthesia, certain modifications have proved useful, such as planning the surgery over 2-3 days and using plain normal saline to reinforce tumescence.

Doctor and staff management

As hair transplantation is a team effort, it cannot be accomplished single-handedly. Rotation of doctors and technicians is key to preventing fatigue. Regular ergonomic muscle and back stretching exercises need to be practiced by both doctors and technicians.

Postoperative management

Postoperatively, the patients were empirically started on a broad-spectrum antibiotic, a painkiller, a short course of oral steroid, and a topical antibiotic. They were instructed to sleep with the head slightly elevated to prevent dislodging of the grafts, and to spray normal saline regularly on the recipient grafts. They were also asked to remove the donor area dressing after 2 days, followed by daily gentle shampooing from day 4 to 10. There was no dietary restriction; however, patients were asked to refrain from heavy exercise, smoking, and drinking for a couple of weeks. Swelling over the forehead occurs in approximately 25% of patients on day 3 after surgery because of postoperative edema and the large amount of saline injected during anesthesia. This is temporary and can be managed with ice compresses, forehead massages, and a short course of oral steroids. Infection in the form of folliculitis can be managed by empirical use of a systemic/topical antibiotic. In case of persistent crusting, moist saline compresses and frequent shampooing help dislodge the crusts. Necrosis is best managed conservatively with a topical antibiotic, topical nitroglycerin ointment, and moist dressings to facilitate separation of the overlying crust.

Results

The implantation density was between 30 and 40 FU/cm² for the frontal area, with a reducing gradient from the frontal hairline toward the vertex (20-25 FU/cm²). The average number of grafts transplanted was 2982 FU for frontal coverage [Figure 3A and B], 4164 FU for frontal + mid-front coverage [Figures 4 and 5], 2770 FU for vertex coverage, 6237 FU for full coverage [Figure 6A and B], and 1240 FU for frontal forelock only. The average number of grafts extracted from the scalp per patient was 2956 FU (6320 hair follicles). For groups requiring greater coverage, for instance frontal + mid-front coverage and full coverage, beard extraction was carried out by FUE, with an average of 1100 FU (1500 hair follicles); for the full coverage group, body hairs were also extracted [Figure 6A and B], with an average of 1500 FU (1650 hair follicles) [Tables 2 and 3]. As far as the patient satisfaction score was concerned, it was highest in the front + mid-front coverage group, followed by frontal coverage (3.8) and frontal forelock (3.4), closely followed by full coverage (3.3); the minimum satisfaction score was seen in the vertex-only group. Most patients demanded full coverage in one sitting, which was not possible in most cases because of the limited donor area; we attempted full coverage in one sitting in 100 cases with very dense donor areas, thick beard hair (>70 μm), and body hair (>40 μm). At 12 months post surgery, 94% of the patients were satisfied with the overall results.
Also, 88% of the patients felt that the donor area looked completely normal, and about 62% of the patients opted for a second sitting of hair transplant surgery in order to increase the coverage area/density. A total of 207 patients completed 24 months of follow-up, and 9% reported a decrease in density in the transplanted area. The most common complications noticed in this study were postoperative pain, periorbital edema, folliculitis, ingrown hairs, cysts, telogen effluvium (shock loss), excessive persistent crusting, and, rarely, necrosis. We also had a few patients with seborrheic dermatitis and scalp psoriasis in whom we advised withholding minoxidil a few weeks before the surgery while advocating topical ketoconazole lotion for the seborrheic dermatitis. This is essential, as these patients face excessive crusting, which may affect the results. After surgery, they were asked to restart minoxidil after 2 weeks, along with the treatment for seborrheic dermatitis.

Discussion

Recent advances in FUE have made the management of even grade 7 baldness a reality, as highlighted in this article [Figure 7: algorithm to plan the area of coverage based on donor area availability]. However, the focus of this article has been the multiple complexities faced and a structured protocol to deal with these difficulties during surgery. The essence of a good result after hair transplant is planning a mature and realistic hairline based on the donor supply and recipient demand. [2] The donor hair follicles are assessed for their density, thickness, and caliber, and, depending on whether the source is scalp, beard, or body, they are distributed accordingly. Motorized FUE using sharp punches, in the setting of good magnification, tumescence, and lighting, provides faster extraction rates with minimum transection. Constant hydration of grafts, reduction of their "out of body time," and the "no root touch" method of implantation enhance their survival rate. Adhering to the maximum dose, keeping in mind the duration of action of the anesthetic agents used, and limiting the use of anesthesia whenever possible help prevent several complications. Furthermore, ergonomic exercises and rotation of the staff help reduce fatigue among the team. These key steps, if kept in mind, can definitely help develop an easy approach to managing these challenging advanced cases while reducing complications [Figure 7].

Conclusion

Hair restoration can give better than satisfactory results even in advanced baldness. Beard and body hair can be used to augment results in cases with an insufficient donor area; mixing them with scalp hair for mid-front and vertex coverage utilizes them best. A mature hairline with cosmetic density in a gradient from front to back helps in achieving a satisfactory response in cases of advanced baldness.

Declaration of patient consent

The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
Immunomodulation with IL-7 and IL-15 in HIV-1 infection

Immunomodulating agents are substances that modify host immune responses in diseases such as infections, autoimmune conditions and cancers. Immunomodulators can be divided into two main groups: 1) immunostimulators that activate the immune system, such as cytokines, toll-like receptor agonists and immune checkpoint blockers; and 2) immunosuppressors that dampen an overactive immune system, such as corticosteroids and cytokine-blocking antibodies. In this review, we have focussed on the two primarily T and natural killer (NK) cell homeostatic cytokines: interleukin-7 (IL-7) and -15 (IL-15). These cytokines are immunostimulators which act on immune cells independently of the presence or absence of antigen. In vivo studies have shown that IL-7 administration enhances proliferation of circulating T cells, whereas IL-15 agonists enhance the proliferation and function of NK and CD8+ T cells. Both IL-7 and IL-15 therapies have been tested as single interventions in HIV-1 cure-related clinical trials. In this review, we explore whether IL-7 and IL-15 could be part of the therapeutic approaches towards HIV-1 remission.

Immunomodulation in the presence or absence of antigen

HIV-1 disease is a chronic condition treatable with antiretroviral therapy (ART). Due to the persistence of a stable latent virus reservoir that is unaffected by ART, lifelong treatment is needed for people with HIV-1. 1 To achieve HIV-1 remission, defined as sustained ART-free virological control, this latent HIV-1 reservoir needs to be eradicated, reduced, silenced and/or contained by potent HIV-1-specific immune responses. In this review, we will focus on two immunostimulatory homeostatic cytokines, IL-7 and IL-15, which both act on immune cells independently of the presence or absence of antigen. Other reviews focus on interventions that rely on the presence of HIV antigen, such as broadly neutralizing anti-HIV-1 antibodies (bNAbs) and bispecific molecules. 2,3 In HIV-1 disease, immunomodulatory interventions are usually administered during suppressive ART, when the majority of HIV-1 proviruses remain in a latent state and the HIV-1 antigen burden is low. However, recent studies have begun to explore immunomodulatory interventions given during the viremic state 4 or prior to and at the start of an ART interruption (ATI) period. 5-8

Interleukin-7 and -15

IL-7 and IL-15 are members of the four α-helix bundle family of cytokines, which also includes another four interleukins: IL-2, IL-4, IL-9 and IL-21. These cytokines share the common cytokine receptor γ-chain (γc, also known as CD132), while only IL-2 and IL-15 share the other part of the heterodimeric receptor, the IL-2/IL-15Rβ-chain, also known as CD122. IL-15 first binds to the membrane-bound IL-15Rα, which is highly expressed by monocytes and dendritic cells, and is then trans-presented in a cell-cell contact-dependent manner to responder cells expressing the IL-2/IL-15Rβ-γc heterodimer. IL-15 signalling can also occur by cis-presentation or through soluble IL-15Rα. 10,11 IL-7 binds and signals via IL-7Rα (also known as CD127) as the last part of the heterodimeric receptor, and exists in membrane-bound and soluble forms. The IL-7Rα is mainly expressed on the least differentiated T cell subsets, while the IL-2/IL-15Rβ is primarily expressed on the more differentiated ones. 10,12,13 Both IL-7 and IL-15 signal downstream through Janus kinases (JAK)/signal transducer and activator of transcription (STAT) pathways. 9,11
Modifications have improved the pharmacokinetic profile of recombinant human IL-7 and IL-15, especially cytokine half-life. 11,14,15 A long-acting IL-7 was made by fusing recombinant human IL-7 with a hybrid immunoglobulin D/G4 Fc (rhIL-7-hyFc). 16 Two key modifications of IL-15 (heterodimerization and a superagonist) have increased its binding to IL-2/IL-15Rβ-γc and thus its immunostimulatory activity: 1) a recombinant, heterodimeric form of IL-15 bound to IL-15Rα (het-IL-15); 17 and 2) a superagonist mutein containing amino acid substitutions, including IL-15N72D, 18 and a superagonist fusion in which IL-15N72D is bound to a sushi domain of IL-15Rα fused with immunoglobulin G1 Fc, also known as N-803 (previously ALT-803). 19 Under normal homeostatic conditions without lymphopenia, plasma levels of IL-7 and IL-15 are tightly regulated, being either undetectable or very low by standard assays. 10,20-23 In turn, elevated plasma IL-7 and IL-15 levels are associated with downregulation of their cellular receptors. 7,12 The effect of homeostatic cytokines on CD4+ T cell proliferation in HIV-1 disease depends on the setting: 1) during initial untreated disease, the levels of homeostatic cytokines are high, reflecting the 'lack' of consumption due to the high turnover of CD4+ T cells; followed by 2) a progressive decrease in the levels of homeostatic cytokines and an increase in CD4+ T cell counts after ART initiation, 13,20-23 although responding receptor expression on T cells is not fully restored within 2 years. 12,24 In an earlier study using a single structured ATI as a way to augment autologous HIV-1 immunity, individuals with delayed/absent viral rebound during ATI had significantly increased levels of plasma IL-15 prior to and during ATI compared to individuals with rapid viral rebound. 25 In another study, the expression of IL-17 mRNA in, and IL-15Rα on, monocytes was higher among long-term non-progressors than progressors. 7 In cancer therapy, repeated doses of IL-7 were shown to increase numbers of circulating T cells in a dose-dependent manner, 26 while the different IL-15 therapies increased the proliferation and function of circulating NK and CD8+ T cells. 9 Collectively, these findings suggest that there is a therapeutic rationale for investigating treatment with IL-7 and IL-15 in HIV-1. 27,28

In vitro and ex vivo effects of IL-7 or IL-15

The cell types and tissues assayed, as well as the results from in vitro and ex vivo experiments with IL-7 or IL-15, are summarized in Table 1. We focus on the immune-stimulating effect of IL-7 or IL-15 administration on virus-specific CD8+ T cells as well as NK cells, and briefly describe their latency-reversing potential.

Immune-modulatory effects: IL-7 stimulation of peripheral blood mononuclear cells (PBMCs) ex vivo enhanced virus-specific CD8+ T cell cytotoxicity 29 and function 30 (against influenza and cytomegalovirus). IL-15 stimulation ex vivo of isolated CD8+ T cells enhanced activation, but this was only observed in the most differentiated subsets. 31 During untreated HIV-1 infection, activated CD8+ T cells are more susceptible to apoptosis 32 ex vivo, which can be inhibited by IL-15. 31 IL-15 stimulation of PBMCs also increased the numbers, 33 activation status 34 and functional capacity 33 of HIV-1-specific CD8+ T cells, whilst their susceptibility to apoptosis was again reduced. 34
IL-15 stimulation of pre-activated PBMCs enhanced the frequency of HIV-1-specific T cells 31 and non-specific IFNγ secretion in the supernatant 34 several fold compared with IL-15 stimulation without pre-activation. Ex vivo stimulation with the IL-15 superagonist N-803 also enhanced HIV-1-specific T-cell cytotoxicity 35,36 and function. 36,37 After IL-15 stimulation, NK cells had higher expression of the activation receptor NKG2D and the cytotoxicity receptor NKp30 39,40,42 (also seen after in vivo N-803 administration 43 ), while expression of the cytotoxicity receptor NKp46 was decreased. 40,42 IL-15 stimulation and NKG2D engagement were needed for vaccine-primed NK cells to control HIV-1 in autologous CD4+ T cells. 44 IL-7 stimulation ex vivo induced homeostatic proliferation of CD4+ T cells, including latently infected cells. 45-55 Additionally, IL-7 stimulation of T cells resulted in a shift of the subset distribution towards the memory compartment, 48,52 which supports the critical role of IL-7 in the generation of long-lived memory T cells. 56 IL-15 may also work as a latency-reversing agent. 37,38,42 More importantly, IL-7 and IL-15 primed latently infected cells for increased CD8+ T cell recognition and killing, which is an important finding since latently infected cells stimulated with other latency-reversing agents seem to be resistant to killing. 37,40,57,58 IL-15 and other latency-reversing agents in combination have been shown to abrogate each other's latency-reversing effects, although the combination of IL-15 and a protein kinase C agonist potentiated viral (re)activation and showed sustained NK cell function. 42

In summary, ex vivo studies indicate that IL-7 stimulation may maintain or even expand the size of the HIV-1 reservoir due to homeostatic proliferation of infected CD4+ T cells. In contrast to IL-7, IL-15 stimulation appears to enhance cellular responses against HIV-1 in vitro and ex vivo without expanding the size of the HIV-1 reservoir.

Studies are listed chronologically. Two studies are not mentioned in the text but are included in this table [111-112]. Abbreviations: ADCC, antibody-dependent cellular cytotoxicity; B-LCL, lymphoblastoid B-cell line; CCR5, C-C chemokine receptor type 5; ELISA, enzyme-linked immunosorbent assay; ELISPOT, enzyme-linked immunospot; IFN, interferon; IL, interleukin; LTR, long terminal repeat; NK, natural killer; PBMCs, peripheral blood mononuclear cells; PCR, polymerase chain reaction; RT, reverse transcription; 7-AAD, 7-aminoactinomycin D.

Effects of IL-7 and IL-15 in HIV-1 animal models

Of note, in murine 59-62 and NHP models, 63-65 the use of IL-7 or IL-15 as vaccine adjuvants enhanced the immune responses to HIV/SIV/SHIV vaccines, but this vaccine adjuvant effect could not be repeated in humans. 66 IL-15-adjuvanted SIV vaccination prior to SIV challenge preserved CD4+ T cell numbers in the tissue to a greater extent than vaccination without IL-15 as adjuvant. 67 Further, some non-human primates (NHPs) primed with both TLR agonists and IL-15 prior to SIV vaccination were protected against SIV infection. 68

Immune-modulatory effects: IL-7 administration in simian immunodeficiency virus (SIV)-infected NHPs enhanced proliferation 69-72 and activation 69,71 of CD4+ and CD8+ T cells, leading to a transient increase in numbers 69-72 (Table 2). The expression of IL-7Rα on these cells was transiently downregulated following IL-7 administration. 70,71 The proliferative effects were more sustained with repeated than with single IL-7 administration. 73
IL-7 stimulation also enhanced HIV-1-specific CD8+ T cell cytotoxicity, 74 and lymphadenopathy occurred following subcutaneous (SC) IL-7 injection due to cell migration. 69,70 IL-15 administration in viremic NHPs led to only modest increases in CD8+ T cell and NK cell numbers, 6,75,76 as opposed to what was observed in SIV-uninfected NHPs (reviewed elsewhere 27 ) as well as after ex vivo IL-15 stimulation of pre-activated PBMCs, as described above. 31,34 In one of the studies, IL-15 administration led to an increased viral set-point (as seen in an HIV mouse model 77 ) and accelerated disease progression. The modest effect of additional exogenous IL-15 on CD8+ T and NK cells during untreated infection could thus be due to already high levels of endogenous IL-15 78 as well as the short half-life of responder cells. 27 Whether or not IL-15 stimulation during the viremic phase of HIV-1 infection might provide superior effects on T cell function remains to be determined; however, one concern would be that of excessive CD8+ T cell activation, which is known to be associated with worse outcomes in people with HIV-1. 79

IL-15 administration at ART initiation in SIV-infected NHPs increased the number of CD8+ T cells. 80 More importantly, following IL-15 administration the virus-specific CD8+ T cells proliferated, 80 but no effect was observed on IFNγ responses or cytotoxicity 75,80 or on virological control during ATI. In non-progressors and ART-suppressed SIV-infected NHPs, IL-15 increased the proliferative activity of T cells 6,81 and enhanced the frequency of T effector memory cells expressing granzyme B. 81 N-803 administration was also tested in ART-treated, virally suppressed NHPs, which generally confirmed the findings described above. 82,83 Studies have also tested whether N-803 could benefit virus-specific immunity in more favourable clinical phenotypes (SIV controllers) in the absence of ART. 85 After N-803 and het-IL-15 administration, there was increased proliferation of NK and CD8+ T cells. 81,83-85 There are conflicting findings on whether the cytotoxicity of virus-specific CD8+ T cells was increased 81 or not 85 following N-803 administration. The majority of the NK and CD8+ T cells that increased in numbers following N-803 expressed CD16 and had an effector memory phenotype, respectively. 81,83-85 Virological control during ATI was not observed in SIV/SHIV-infected NHPs after N-803 administration. 82,83 Of note, prior to N-803 administration, SIV-specific CD8+ T cells primarily localized in the extrafollicular space of lymph nodes, but after N-803 administration the numbers of SIV-specific CD8+ T cells significantly increased within B-cell follicles. 84 Migration of CD8+ T cells to the lymph nodes occurred via upregulation of CXC chemokine receptor 5 (CXCR5). 84 The expression of IL-2/IL15Rβ-γc on central memory CD8+ T cells transiently declined during weekly N-803 administration, but was somewhat restored by deferring administration; 85 thus, spacing administrations seems optimal due to refractoriness. 6,75,85-88 Multiple administrations of N-803 led to increased PD-1 expression on CD8+ T cells, which might be due to activation rather than exhaustion. 85 In two HIV mouse models, IL-15-primed CD8+ T cells showed enhanced in vivo activity against initial viremia when given on the day of infection, but administration of an IL-15 superagonist led to an increased plasma viral set-point. 77
In the other model, N-803 administration 3 days after infection induced NK cell-mediated inhibition of acute infection. 36 Thus, interventional approaches within the first days of infection in these models do not mirror what is feasible in people with HIV-1.

Latency reversing potential: Following N-803 administration, plasma viral loads decreased, indicating that the in vivo latency-reversal effect of N-803 might be masked by antiviral effector functions, 37,82,84,85 which also explains the diverse findings from the other studies using IL-15 therapies. 6,75,80,81,83 Several studies have assessed the effect of IL-15 therapies on the size of the viral reservoir, with mixed results (Table 2), as it was found to either decrease, 81,83 remain unchanged 82,84 or increase. 65

In summary, in multiple studies across several animal models, IL-7 administration increased absolute T cell numbers, and N-803 administration increased the total numbers of NK and CD8+ T cells, with administrations spaced more than one week apart being most optimal. Despite these proliferative effects of IL-15, more information is needed on the HIV-1-specific effects as well as the antiviral functions of the effector cells following the interventions, as there was no apparent impact of IL-15 on virological control among animals that had interrupted ART.

Effects of IL-7 and IL-15 therapy in people with HIV-1

Several clinical trials have tested IL-7 therapy in people with HIV-1 on suppressive ART. 89-92 The maximum tolerated dose (MTD) was 30 μg/kg for single subcutaneous (SC) injections 89 and 20 μg/kg for repeated injections. 91,92 The results were overall similar to the findings from NHPs: IL-7 administration enhanced proliferation, as measured by the expression of Ki67 on CD4+ and CD8+ T cells, leading to increased cell numbers, 89-91 mainly in the central memory (CD45RA-CCR7+ 90,91,93 or CD45RA-CD62L- 89 ) but also the naïve (CD45RA+CCR7+) 90,91,93 subsets, confirming prevalent IL-7Rα expression on these subsets. Frequencies of HIV-1-specific CD8+ T cells were unchanged after IL-7 administration, but one study observed a trend towards enhanced proliferation as measured by Ki67 expression. 89 IL-7 administration did not affect T cell subset distribution, PD-1 frequencies or IL-7Rα expression. 89

Several clinical cancer trials have tested the safety and effect of IL-15 therapies. Pharmacokinetics depended on the exact IL-15 compound tested as well as on the route of administration. SC administration of IL-15 gave a longer half-life due to a slower release, 94 but intravenous (IV) administration resulted in higher plasma concentrations, which explains the difference in SC versus IV toxicity. Of note, a greater mean fold increase in circulating numbers of NK and CD8+ T cells was observed after SC compared with IV administration of N-803, 43,94 and doses up to 20 μg/kg were tested IV without significant toxicities.
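The SC-versus-IV difference just described (slower release giving a longer apparent half-life but a lower peak concentration) follows from basic one-compartment pharmacokinetics. The sketch below is a minimal illustration with hypothetical parameter values; it is not fitted to, or taken from, any of the cited trials.

```python
import math

# Hypothetical one-compartment model (all values assumed, for illustration).
DOSE = 6.0   # ug/kg, hypothetical dose
V = 0.05     # L/kg, apparent volume of distribution (assumed)
KE = 0.7     # 1/h, elimination rate constant (assumed)
KA = 0.1     # 1/h, SC absorption rate constant (assumed; ka < ke -> flip-flop)
F = 1.0      # bioavailability of the SC route (assumed)

def conc_iv(t: float) -> float:
    """Plasma concentration after an IV bolus: C = (D/V) * exp(-ke*t)."""
    return (DOSE / V) * math.exp(-KE * t)

def conc_sc(t: float) -> float:
    """First-order absorption (Bateman equation) after SC injection."""
    coef = F * DOSE * KA / (V * (KE - KA))
    return coef * (math.exp(-KA * t) - math.exp(-KE * t))

for t in (0.5, 2, 8, 24, 48):
    print(f"t={t:>5} h  IV={conc_iv(t):9.3f}  SC={conc_sc(t):9.3f}  ug/L")
# IV gives a much higher early peak; the SC curve decays on the slower
# absorption rate (flip-flop kinetics), i.e. a longer apparent half-life.
```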
In 2022, a phase 1 dose-escalation study of N-803 was conducted among 16 people with HIV-1 on suppressive ART. 95 Five different dosing schemes were planned, with three individuals per scheme receiving three doses once weekly. Two individuals received the lowest dose of 0.3 μg/kg IV, but due to data from cancer trials showing an increased risk of cytokine release syndrome with IV administration, the following doses were given by SC administration. The MTD was found to be 6.0 μg/kg, and of note, no anti-N-803 antibodies have been observed in the 6.0 μg/kg cohorts. 94,96,97 Every individual experienced Grade 1 injection-site erythema, and 22 (65%) of the 34 injections were associated with adenopathy, but none of these events were Grade 3. There were differences in pharmacodynamics at 6.0 μg/kg between the cancer and HIV trials (Table 3), which have been ascribed to target-mediated drug disposition. 15 In the HIV trial, three N-803 administrations increased the numbers of circulating NK cells more than CD8+ T cells, with enhanced proliferation as measured by Ki67 expression on both cell types. N-803 also increased expression of the activation markers CD69 and HLA-DR on both NK and CD8+ T cells, but the study could not demonstrate increased HIV-1-specific T cell responses.

Latency reversing potential: Mixed results were seen on whether IL-7 could work as a latency-reversing agent, but where plasma viremia increased, this was ascribed to activation of already transcriptionally active cells rather than reactivation of de novo viral transcription, 53,98 as observed with other latency-reversing agents. 99 In the phase 1 clinical trial using N-803, there was evidence of a latency-reversal effect, in that 91% and 100% of the individuals had detectable plasma viral loads and increased HIV-1 mRNA transcription in memory CD4+ T cells during the interventional period, respectively. 95 The impact of N-803 on the latent HIV-1 reservoir was investigated using two assays. The frequency of memory CD4+ T cells that could be activated to viral transcription was significantly reduced over the 6-month course of the trial (P < 0.001). By contrast, the level of intact proviral HIV-1 DNA per million CD4+ T cells, measured by the intact proviral DNA assay (IPDA), actually increased over the interventional period (P = 0.098). The levels of defective proviral HIV-1 DNA per million CD4+ T cells did not change over the interventional period, so the authors have speculated that the intact reservoir increased due to the expansion rather than the infection of new cells, but only integration-site analyses can ultimately clarify this point.

(Table 3 compares pharmacodynamics between the cancer trial 43 and the HIV-1 trial. 95 )

In summary, IL-7 and N-803 have overall been shown to be safe and well tolerated in people with HIV-1 on suppressive ART. Whereas the overall number of T cells increased with IL-7 therapy, including those that were latently infected, N-803 primarily led to increases in CD8+ T and NK cells. Additional trials are needed to address whether more potent cellular responses against HIV-1 develop following the interventions, and their effect on ART-free virological control during an ATI (NCT04808908, NCT04505501).
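To make the divergence between the two reservoir assays concrete, the toy calculation below uses invented numbers (only the reported directions of change are taken from the trial) to illustrate the pattern: inducible reservoir down, intact provirus up, defective provirus unchanged.

```python
# Hypothetical pre/post values per million CD4+ T cells (invented numbers,
# chosen only to reproduce the reported directions of change).
inducible_pre, inducible_post = 1.20, 0.60    # cells inducible to viral transcription
intact_pre,    intact_post    = 50.0, 65.0    # intact proviruses by IPDA
defective_pre, defective_post = 300.0, 300.0  # defective proviruses by IPDA

def fold_change(pre: float, post: float) -> float:
    return post / pre

print(f"inducible reservoir: {fold_change(inducible_pre, inducible_post):.2f}x")  # down
print(f"intact provirus:     {fold_change(intact_pre, intact_post):.2f}x")        # up
print(f"defective provirus:  {fold_change(defective_pre, defective_post):.2f}x")  # flat
# Intact up while defective stays flat is the pattern that led the authors to
# favour expansion of already-infected cells over new infection events.
```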
Future directions for IL-7 or IL-15 therapy in HIV-1

Modern ART regimens are highly effective at suppressing plasma viremia, achieving undetectable levels within months of treatment. However, restoration of CD4+ T cell counts takes considerably longer, in some cases years. Central to this immune reconstitution is endogenous IL-7 and IL-15 production, which regulates CD4+ T cell homeostasis (Fig. 1); could interventions with IL-7 or IL-15 therefore help the restoration of CD4+ T cells? There are two issues with the homeostatic cytokines that should be considered prior to any intervention: 1) the cytokine-induced CD4+ T cell expansion of the HIV-1 reservoir, 50,100 which might be overcome by making the interventional homeostatic cytokines CD8-targeted (though the restoration of the CD4+ T cell compartment would then be missed) or by simultaneously inhibiting cell-intrinsic anti-apoptosis pathways to diminish infected CD4+ T cells 101,102 ; and 2) the increased susceptibility to infection 103,104 through enhanced CCR5 expression on CD4+ T cells. 51,82,100,105 Importantly, a recent study found that IL-15 stimulation in vitro and in humanized mice promoted proliferation and survival of the HIV-1 target cells (CCR5-expressing CD4+ T cells); furthermore, the life span of infected CCR5-expressing CD4+ T cells was prolonged and their virus production was increased. 100

IL-15 agonists could have a role in HIV-1 cure-related strategies. N-803 induced proliferation of NK and CD8+ T cells in ART-suppressed people with HIV-1, 95 and enhanced migration of these cells to the tissues in vivo. 6,84 The impact of N-803 on T cell migration and homing is important given that homeostatic proliferation of infected cells also occurs in tissues. 106 Expression of IL-15Rα on responder cells is somewhat restored by ART, but inclusion of a TLR agonist could further induce IL-15Rα expression 67 and thus enhance the effect of N-803. As stated in the introduction, the timing of immunotherapies relative to viremia versus viral suppression is currently being explored, but interventions with homeostatic cytokines in the viremic setting seem ill-suited, since cytokine levels are already elevated.

Despite some latency-reversal effect of N-803 on plasma viremia, the available data indicate that more effective latency reversal is needed to decrease the HIV-1 reservoir. 107
One interventional strategy to bypass pharmacological latency reversal is to pause ART during an ATI and let the virus rebound. Two clinical trials are testing this strategy: administration of N-803 in combination with two broadly neutralizing antibodies (bNAbs) during suppressive ART followed by an ATI at a later time point (NCT04340596), or N-803 in combination with two bNAbs during suppressive ART but administered 2 days prior to an ATI (NCT05245292) (Fig. 2). The ex vivo findings of enhanced antibody-dependent cellular cytotoxicity 39-41 and the enhanced expression of CD16 on NK cells in animal studies 81,83-85 have not been confirmed in people with HIV-1, but in theory the combinatorial approach with bNAbs will result in killing of infected cells by different Fc-mediated mechanisms (Fig. 2). The presence of bNAbs at therapeutic levels will also limit the infection of new cells by direct neutralization of cell-free virions. Another interventional combination with N-803 could include an immune checkpoint inhibitor. A mathematical model based upon one of the NHP studies 85 has shown that co-administration of an immune checkpoint inhibitor may improve N-803 efficacy, 108 which has been confirmed in a phase 1b cancer trial. 97 Other combinatorial approaches with N-803 being tested in cancer research include antibodies, 109 cell 110 or gene therapy (NCT05618925, NCT04847466). Notably, as for other interventions tested in HIV-1 research, 99 N-803 has also induced person-specific changes (NK cell functionality) in cancer trials, moving curative interventions towards a personalized medicine approach. 43

In conclusion, IL-7 has been shown to be safe and well tolerated in people with HIV-1, leading to increased absolute T cell numbers, including latently infected cells. Prior to future trials using IL-7, this expansion of the HIV-1 reservoir needs to be hindered. IL-15 therapies have also been shown to be safe and well tolerated in people with HIV-1, leading to increased total numbers of primarily CD8+ T and NK cells. Thus, IL-15 administration could be part of therapeutic approaches towards HIV-1 remission, but future trials should broaden the immunological assessments of both the HIV-1 reservoir and the antiviral responses: HIV-1-specific immunity and the effect on virological control during ATI. Since HIV-1 infection causes widespread dysregulation of the host immune system that is slow to recover, even after successful ART, the interventional timing is another important aspect to consider.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 2. Effects of IL-15 stimulation on antiretroviral therapy (ART) and in a combinatorial approach with broadly neutralizing anti-HIV-1 antibodies (bNAbs) off ART. (a) Illustration of the effects of IL-15 superagonist N-803 administration during suppressive ART (on ART), as shown in Fig. 1.
(b) Illustration of the effects of N-803 in combination with bNAbs prior to (NCT04340596) and into (NCT05245292) an analytical treatment interruption (ATI; off ART). During ATI, infected CD4+ T cells with an inducible HIV-1 reservoir might be (re)activated to initiate transcription/translation due to immune activation, which can lead to antibody-dependent cellular cytotoxicity by the NK and CD8+ T cells. Viral particles produced by the (re)activated infected CD4+ T cells can form bNAb-antigen complexes that bind to plasmacytoid dendritic cells (pDCs). This cross-presents viral antigens, leading to the development of HIV-1-specific CD8+ T cells and enhanced killing of infected cells: a vaccinal effect. 5

Fig. 1. Effects of IL-7 and IL-15 therapies in vivo during suppressive antiretroviral therapy (ART). (a) Illustration of the effects of IL-7 administration during suppressive ART (on ART). IL-7 binds to the IL-7Rα receptor, here on the CD4+ and CD8+ T cells (mainly naïve and central memory subsets). Cells undergo homeostatic proliferation with expansion of T cell counts. In the CD4+ T cell compartment, both infected and uninfected cells proliferate, leading to the expansion of the HIV-1 reservoir. During the proliferation of the infected CD4+ T cells, some degree of latency reversal occurs and a transient increase in plasma HIV-1 RNA levels can be observed. The CD8+ T cell compartment also proliferates, which might also expand the HIV-1-specific CD8+ T cells. The expression of IL-7Rα is downregulated following IL-7 stimulation of T cells. 22,37,42 (b) Illustration of the effects of IL-15 superagonist N-803 administration during suppressive ART (on ART). N-803 binds to membrane IL-15Rα on a trans-presenting cell and, in a cell-cell contact-dependent manner, to responder cells expressing IL-2/IL15Rβ-γc. Upon stimulation with N-803, the responder cells undergo homeostatic proliferation. As seen with administration of IL-7, during proliferation of the infected CD4+ T cells some degree of latency reversal occurs and a transient increase in plasma HIV-1 RNA levels can be observed. In the CD4+ T compartment, both infected and uninfected cells expand. The NK cells and CD8+ effector memory (EM) T cells also expand. Expansion of the CD8+ EM T cells might also lead to the expansion of the HIV-1-specific CD8+ T cells as well as upregulation of the expression of CXCR5, leading to tissue migration.

Table 1. Summary of the in vitro and ex vivo effects of IL-7 or IL-15 therapies.

Table 2. Summary of animal studies using IL-7 or IL-15 therapies.
Strategic Management Key to Success for Kosovo Companies: Expansion into the International European Market

The recently signed Stabilisation and Association Agreement (SAA), a platform that promotes harmonious economic relations and the gradual development of a free trading area between the EU and Kosovo, 1 encouraged many companies from this country to start thinking big about international business expansion in Europe. Access to Europe via free trade is a gold mine of opportunity, since it allows a company to reach a large number of customers in a vast and broad market. Entering, and even more so competing with European and other international companies for even a slight market share, is obviously no "piece of cake". The path towards attaining business objectives can be paved more easily by applying an appropriate management strategy. Strategic thinking, strategic planning, strategic marketing and international managerial strategy are pillars that can support any company in crafting a proper approach and a successful performance in the European market environment.

Introduction

From the early stages of open international markets, the collision between businesses that seek to enter new markets and those that attempt to protect their current marketplace and market shares has been ongoing and endless. The impact of global competition is being felt in every industry. Firms and countries long used to dominating their respective international markets must reckon with aggressive and innovative competitors from all corners of the globe (Inkpen & Ramanswamy, 2006). Open economic policies in the region of the Western Balkans, in particular the European Union policy for the Stabilization and Association Agreement (SAA), 2 are seen as an opportunity enabling, and at the same time encouraging, many companies from Kosovo to take a big leap by joining the European free-trade market. "This agreement is a milestone for the EU-Kosovo relationship. It will help Kosovo make much needed reforms and will create trade and investment opportunities. It will put Kosovo on the path of a sustainable economic growth" (Hahn, 2015). Given that the majority of businesses were for a long time accustomed to the circumstances and conditions of an almost closed market within the boundaries of their native country, with some exceptions under the CEFTA agreement bound among South Eastern European countries, it is now obvious that the boundless and large European free market requires almost totally different operating commitments. Hence, the path to accessing the big free European market encompasses, besides strict standardized rules and regulations, an "unwritten law" of which companies themselves are aware if they are to succeed: a change of their current operating strategies. The clue to conquering obstacles and progressing continuously towards settling into the new, boundless European market for Kosovo businesses, as novice market players, is the modification of current strategic management practices into much more contemporary and advanced strategic managerial approaches. The EU market can indeed be seen as a gate of many opportunities for business expansion, but entrance and expansion aimed at sustainable operation abroad are achieved with an appropriately selected strategy, even though the way forward obviously has a number of significant hindrances to overcome. The importance of strategic management in business is proven: it is an important contributing factor and a necessary tool for moving from situations of vagueness and ambiguity towards a bright, sustainable and successful
business operation. "The goal of strategy is to beat the competition, but before you test yourself against the competition, strategy takes shape in the determination to create value for the customer" (Ohmae, in Abraham, 5, 2012).

This research paper, offering an original description derived from qualitative methods supported by abundant literature and quantitative data, has the purpose of presenting the best ideas for making companies understand and utilize the strategies available for new market expansion. The paper can also serve as a useful guide and primer for many businesses that aspire to enter the EU and other major markets worldwide, while, by providing practical examples, it can doubtless benefit Kosovo businesses seeking to prosper in the EU and other international, cross-cultural markets.

Strategic Thinking

Access to the EU market may seem a new venture and an ongoing challenge for many businesses in Kosovo, so participating in such a vast trading environment will most probably require the application of an up-to-date strategy: a compulsory shift from local traditional, or rather, in many cases, conservative business strategies to ones suitable for the new market environment. Any initiative for strategy modification, improvement or even change is firmly rooted in strategic thinking as the foundation of solid business operations. According to DiVanna, strategy development is shifting from a function traditionally restricted to an elite group within a company to a process in which strategic thinking must now aggregate across an organization's many levels into a cohesive set of strategic initiatives that are driven by sensing changes in the business environment. To make this transition successful, corporations must integrate the act of strategy development into the fabric of their business processes, making them able to sense changes in the business climate and initiate tactical adjustments based on preconceived scenarios (DiVanna & Austin, 2004).

Strategic thinking is the mindset, frame of reference or paradigm that takes an initial focus on mega results (that is, positive societal impact) and defines the future we want to help create. Using it allows for continuous adjustment and adaptation to changing realities, and thus creates the future instead of simply reacting to it (Kaufman, Browne, Watkins & Leigh, 16, 2003). Strategy at its heart is about positioning for future competitive advantage. That is its essence, and any strategic thinking must reflect it. It is purpose that drives strategy; accordingly, the purpose of strategic thinking, as part of overall strategy, is gaining and sustaining a competitive advantage. Devising a sound strategy is impossible without strategic thinking. Coming up with different, plausible strategic alternatives is both creative and conceptual, but must also be grounded in a broad knowledge of the relevant industries, competitors, markets, technologies and other trends. Strategic thinking should not be done only when a firm engages in strategic planning, but rather all the time. It requires a deep understanding of how markets and competitors are changing, and of where opportunities may lie, in order to determine whether a better strategic alternative exists and what it is (Abraham, 2012).
Sloan adds that once we have an understanding of what strategic thinking is, we can proceed with endless options for development. The purpose of strategic thinking is to suspend problem solving, engage in a rigorous process of examination, exploration and challenge of the underlying premises of the strategy, and generate new options as a means to create a winning, innovative and sustainable strategy (Sloan, 2014).

Offering another useful depiction of strategic thinking based on her personal experience, Ann Herrmann-Nehdi, CEO of Herrmann International, states that strategic thinking is a mindset that allows one to:
1) anticipate future events and issues;
2) create alternative scenarios;
3) understand one's options;
4) decide on one's objectives;
5) determine the direction to achieve those objectives on a winning basis.
She also adds that once the latter has been accomplished, a plan may be developed. Without a strategic thinking approach as the foundation, so-called strategic plans frequently end up becoming operational, practical plans in disguise (Herrmann-Nehdi).

Strategic Planning

Obviously, a great deal of strategic thinking must go into developing a strategic plan and, once it is developed, a great deal of strategic management is required to bring its aims to fruition. But, as several authors have pointed out, the objective is indeed to think and manage strategically, not to blindly engage in strategic planning for the sake of strategic planning (Nickols, 2016). This could be essential advice for Kosovo companies: avoid creating a mere blueprint and instead make a firm strategic plan with clear goals for realistic future achievements. On the significance of strategic planning, Simerson (2011) counsels that it requires consideration of external and internal factors: evaluate what and where the business currently stands and where it hopes to go; consider alternate futures, various intents and goals; recognize resource limitations and uncertainties, and therefore formulate contingencies; and prioritize the options for future action that are likely to yield the results and outcomes most consistent with the organization's mission and vision.

A consistent and well-prepared strategic plan must have five major components that provide neat and precise answers:
1. Analysis: Where are we now? What are our internal strengths and weaknesses and our external opportunities and threats?
2. Formulation: What are our mission and vision? What is our sweet spot? What are our strategic intents and goals? By what means will the strategy accomplish the goals?
3. Action Planning: How can strategic goals be translated into specific and concrete tactics? What kinds of obstacles are most likely to be faced, and how can unexpected occurrences be resolved?
4. Execution: What steps can be taken to ensure subsequent execution throughout the entire organization?
5. Continuous Improvement: What can the company do to constantly and continuously improve its strategic planning processes through ongoing operations? (Simerson, 10, 2011).
In essence, strategic planning is the backbone of strategic management. It is not, of course, the entirety of strategic management, but it is a major process in its conduct. Strategic planning is part of the total planning process that includes management and operational planning; precisely, it consists of developing concepts, ideas and plans for achieving business success and for meeting and beating the competition (Steiner, 1979).

Strategic Marketing

The primary role of strategic marketing is to identify and create value for the business through strongly differentiated positioning. Businesses achieve this by influencing the strategy and culture of the organization in order to ensure that both have a strong consumer focus. Moreover, strategic marketing is about the choices that consumer-focused organizations make on where and how to compete, and with what assets. It is also about developing a specific competitive position using tools from the marketing armoury, including brands, innovation, customer relationships and services, alliances, channels and communications, as well as a very well defined price strategy. The concept of strategic marketing draws heavily on the theory and practice of strategic management, not just of marketing. This is an important distinction, since strategic marketing is as much a part of directing how the organization competes as it is a part of marketing itself. In other words, strategic marketing is the "glue" that connects many aspects of the business to the achievement of major company goals (Ranchhod & Marandi, 2005).

According to Sahaf, strategic marketing seeks to address two main issues, which markets to enter and how to compete in those markets, both of which fall within the strategic dimension of an organization or business. Bearing in mind that strategic marketing is a very vast and immense field, and given the connection of issues, we base our focus on market orientation as the strategic perspective that best suits businesses of Kosovo aiming to enter the EU and wider markets. Basically, this choice is supported by the fact that the market orientation perspective asserts that marketing must be concerned with making available what consumers want, rather than with trying to persuade people to buy what the firm finds convenient, congenial or simply profitable to make. Thus, market-oriented businesses are characterized by a consistent focus by employees in all departments and at all levels on consumers' needs and on competitive circumstances in the market environment (Sahaf, 2013). The prerequisites for a successful and strong market orientation are portrayed in the figure below by Narver and Slater (Narver & Slater, in Sahaf, 2013).
International Strategic Management

For decades, businesses have realized that emerging markets have become increasingly important for international companies, not only as a source of inexpensive labour but also as a source of market growth. At the present time, moreover, we witness an increasing number of companies founded in emerging countries accelerating their efforts to integrate into the global economy (Hoskisson, in Tallman, 2007). International strategic management is seen in the environment-driven strategies of successful businesses competing in diverse markets. Due to the dramatically and continuously changing global market environment, businesses should take into account some characteristics that will inescapably be encountered along the way, for instance: A) strategic management is a necessary process for gaining competitive advantage, requiring the active participation of all functional areas; B) the environmental, ethical, product quality and integrity aspects of business practice are critical concerns requiring the active support, commitment and involvement of top management; C) the development of international strategies can on some occasions be a complex process because of the existence of trade blocs such as the EU, ASEAN and NAFTA; D) strategies increasingly involve inter-organizational teams and strategic alliances on a global scale, redirecting the company's focus to the customer and to global competition (Alkhafaji, 2003).

Developing the right business strategy, according to Aaker & McLoughlin (2010), is a basic goal, but it is not the end of the story. With a proper business strategy in hand, the task is to continuously challenge the strategy in order to make sure that it remains relevant to the changing marketplace and responsive to emerging opportunities. Meanwhile, the organization has to ensure that it develops and retains the skills and competencies necessary to make the strategy succeed. When accessing new markets, especially those with strict requirements and policies, the most secure path with the fewest strategic concerns, the international strategic alliance, is perceived as a vivid prospect for international expansion and a sustainable presence for businesses worldwide. Given the experience and capacity of Kosovo companies in international markets, the best and most appropriate strategy would surely be a strategic alliance with companies in the same field of operation from Europe. Strategic alliances are seen as a proper mechanism for hedging risks; thus, Contractor and Lorange (in Buckley, 1998) identified ways in which alliances reduce risk: enabling product diversification, enabling faster market entry and quicker establishment of a presence in the market, and lowering the total investment cost or the risk of a particular project by combining expertise and slack facilities. Meanwhile, international strategic alliances can reduce costs by using the comparative advantage of each partner. Where, for instance, partners belong to different locations or countries, production can be transferred to the lower-cost location or country; hence, this strategy certainly creates greater comparative advantage. Cost lowering, in addition, is an incentive for companies to focus on economies of scale, even if demand for some products in a particular country may be limited (Mariti and Smiley, in Buckley, 1998). A strategic alliance, designated as part of an international managerial strategy, doubtless accelerates the way into, and the presence in, the market of the ally partner, and consequently provides businesses with a golden opportunity for
achieving long-term goals by expanding into new markets with products or services traded internationally, which in essence is the real goal of the better part of companies from Kosovo.

Conclusion

The majority of companies, while preparing their business plans, point out some distinguished aims among their other objectives. Indisputably, gaining a certain market share, creating operational stability and achieving business expansion are the goals most commonly found in business plans. The same can be encountered at many companies around the world, including companies from Kosovo. Ever since the Stabilization and Association Agreement (SAA) was signed between the EU and Kosovo, many companies have started modifying, or even creating anew, an expansion strategy for entering the gate of a large market such as the EU. As a result, some of them may struggle and face a hard time finding the proper strategies, due to their long-term operation in an almost closed market, apart from some regional deals a few of the companies may have made in the past. Hardly any company can, in no time, apply the right and most suitable strategy for entering a large free market; hence, the clue is to embed a strategy that starts with thinking outside the box, that is, with strategic thinking, adding strategic planning, the proper strategic marketing that is most needed for new market entrance and, above all, an international managerial strategy. Together, these encompass an appropriate business management strategy that stands for expansion and successful business operation worldwide.

1 Council of the European Union: Stabilisation and Association Agreement between the European Union and the European Atomic Energy Community, of the one part, and Kosovo*, of the other part, Article 1, page 10.
2 Council of the European Union: Stabilisation and Association Agreement between the European Union and the European Atomic Energy Community, of the one part, and Kosovo*, of the other part.

Figure 1. A model of Market Orientation. Source: "Strategic Marketing: Making Decisions for Strategic Advantage".
Comparison of respiratory pathogens in children with community-acquired pneumonia before and during the COVID-19 pandemic

Background: Multifaceted non-pharmaceutical interventions during the COVID-19 pandemic have not only reduced the transmission of SARS-CoV-2 but have also had an effect on the prevalence of other pathogens. This retrospective study aimed to compare and analyze the changes in respiratory pathogens in hospitalized children with community-acquired pneumonia. Methods: From January 2019 to December 2020, children with community-acquired pneumonia were enrolled from the Department of Respiratory Medicine, Shanghai Children's Medical Center. On the first day of hospitalization, sputum, throat swabs and venous blood samples were collected for the detection of pathogens. Results: A total of 2596 children with community-acquired pneumonia were enrolled, including 1871 patients in 2019 and 725 in 2020. The detection rate in 2020 was lower than in 2019, whether for single or multiple pathogens. Compared with 2019, the detection rates of viruses, especially parainfluenza virus, influenza virus and respiratory syncytial virus, all decreased in 2020. On the contrary, the prevalence of human rhinovirus was much higher than in 2019. In addition, the positivity rate for bacteria did not change much over the two years and seemed to be less affected by COVID-19. Mycoplasma pneumoniae, which broke out in 2019, has remained at low prevalence since March 2020, even following the reopening of schools. Conclusions: Strict public health interventions for COVID-19 in China have effectively suppressed the spread not only of SARS-CoV-2 but of parainfluenza virus, influenza virus and Mycoplasma pneumoniae as well. However, they had a much more limited effect on bacteria and rhinovirus. Therefore, more epidemiological surveillance of respiratory pathogens will help improve early preventive measures. Supplementary Information: The online version contains supplementary material available at 10.1186/s12887-023-04246-0.
Background

In December 2019, coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), broke out in Wuhan, Hubei, China [1]. On 31 January 2020, COVID-19 was declared a Public Health Emergency of International Concern (PHEIC) by the World Health Organization (WHO). According to the Johns Hopkins Coronavirus Resource Center, by the end of October 2021 more than 230 million people had been infected with this virus, of whom 4.8 million had died [2]. COVID-19 is mainly transmitted through contact and droplets, and the population is generally susceptible [3-5]. Since February 2020 [6], China has taken a variety of non-pharmaceutical intervention (NPI) measures to curb the spread of this virus, such as wearing masks, washing hands frequently, paying attention to indoor ventilation, maintaining social distance, and supporting employees to work and study at home. Since the outbreak of the epidemic, Shanghai has entered a stage of normalized epidemic prevention and control. The government postponed the start of the spring semester of 2020 in primary and secondary schools and cancelled all offline training courses. Consequently, during the COVID-19 pandemic, students had to take online courses at home until schools reopened in early June 2020. During this time, Shanghai had strict border controls. Those entering through the Shanghai port were transferred shortly to appointed hotels for at least 14 days of quarantine, and if they tested positive by SARS-CoV-2 PCR during this period, they had to be moved to designated hospitals for further treatment. As of December 31, 2020, confirmed cases of COVID-19 in Shanghai amounted to 349 indigenous cases and 1167 imported cases.

Lower respiratory tract infections (LRTIs), for instance bronchiolitis and pneumonia, remain a dominant public health problem and a major cause of morbidity and mortality in children under 5 years old [7]. Since common childhood respiratory pathogens, such as respiratory syncytial virus (RSV) and Mycoplasma pneumoniae (M. pneumoniae, MP), share similar routes of transmission with SARS-CoV-2, these multifaceted NPIs not only diminish the spread of COVID-19 but also influence the epidemiology of common childhood respiratory pathogens to a certain extent [8]. In this paper, we aimed to observe the epidemiological characteristics of common respiratory pathogens in children with community-acquired pneumonia (CAP) in 2020 (post-pandemic) and 2019 (pre-pandemic) in Shanghai, China.

Study population

We conducted a retrospective study of children aged 1 month to 16 years with radiologically confirmed community-acquired pneumonia. Venous blood, throat swab and sputum specimens were obtained from these patients on the day of hospitalization at the Department of Respiratory Medicine, Shanghai Children's Medical Center (SCMC), from January 1, 2019, to December 31, 2020. These specimens were tested by particle agglutination (PA), real-time polymerase chain reaction (RT-PCR), simultaneous amplification and testing (SAT), and bacterial culture of sputum. The study was approved by the Institutional Review Board and the Ethics Committee of Shanghai Children's Medical Center (SCMCIRB-K2019060-1), and written informed consent was obtained from the parents of each patient.
Particle agglutination (PA)

Particle agglutination antibody titres for Mycoplasma pneumoniae were assayed using SERODIA-MYCO II (Fujirebio Ltd., Tokyo, Japan), which uses artificial gelatine particles sensitized with cell membrane components of M. pneumoniae. The result was considered positive if the titre was 1:160 or more (≥ 1:160).

Real-time polymerase chain reaction (RT-PCR)

Respiratory secretions from the patient's throat were collected, sealed and sent for testing by RT-PCR. The detection reagents for Mycoplasma pneumoniae and Legionella pneumophila were provided by Shanghai Zhijiang Biotechnology Co., Ltd., and the human rhinovirus (HRV) detection reagents were provided by Hubei Langde Medical Technology Co., Ltd.

Simultaneous amplification and testing (SAT)

Throat swab samples were collected to identify, within a short period of time, the RNAs of seven common respiratory pathogens, including influenza A, influenza B, respiratory syncytial virus (RSV), human parainfluenza virus (HPIV), adenovirus (ADV), Mycoplasma pneumoniae (MP) and Chlamydia pneumoniae (CP), based on the double-amplification method of RNA isothermal amplification and multiple biotin signals (Zhongzhi Biotechnologies, Wuhan, China).

Bacterial culture of sputum

On the day of hospitalization, samples were collected using a sterile suction tube attached to a special suction device at one end, with the other end inserted into the child's nasal cavity; passing from the nasopharynx into the airway, negative-pressure suction was used to draw sputum out of the respiratory tract. Sputum was sent to the examination room for screening and pre-treatment before inoculation for culture. The bacterial species mainly linked to community-acquired pneumonia in clinical practice were selected as the target species, including Streptococcus pneumoniae, Staphylococcus aureus, Escherichia coli, Haemophilus influenzae, Pseudomonas aeruginosa, Klebsiella sp., Acinetobacter sp., etc.

Statistical analysis

The SPSS software package v25.0 was used for all statistical analyses. Categorical variables were expressed as frequencies and percentages. Proportions of categorical variables were compared using the chi-square test or Fisher's exact test. P < 0.05 was considered statistically significant.
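As a small illustration of the serological cut-off described in the PA method above (a reciprocal titre of 1:160 or more counted as positive), the helper below parses titre strings and applies that rule. It is a hypothetical utility written for this description, not code used in the study.

```python
def titre_value(titre: str) -> int:
    """Parse a reciprocal titre string such as '1:160' into the integer 160."""
    numerator, denominator = titre.split(":")
    assert numerator.strip() == "1", "expected a titre of the form 1:N"
    return int(denominator)

def is_pa_positive(titre: str, cutoff: int = 160) -> bool:
    """Positive when the reciprocal titre is at or above the 1:160 cut-off."""
    return titre_value(titre) >= cutoff

print(is_pa_positive("1:80"))    # False
print(is_pa_positive("1:160"))   # True
print(is_pa_positive("1:1280"))  # True
```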
General description

A total of 2596 patients diagnosed with community-acquired pneumonia, aged 1 month to 16 years, were enrolled in the present study: 1871 in 2019 and 725 in 2020. We divided the children into three groups on the basis of age, as follows: infants (age < 3 years), preschoolers (age 3-5 years) and school-aged children (age 6-16 years). The proportion of infants was higher in 2020 than in 2019, whereas the proportions of preschoolers and school-aged children decreased. The proportions of patients with underlying diseases, especially congenital heart disease, and of children with severe pneumonia requiring oxygen, were significantly higher in 2020 than in 2019. In addition, there was no significant difference in gender, liver function damage, myocardial ischemia or other complications between 2019 and 2020 (Table 1).

In 2019, of 1871 specimens, 1451 (77.55%) tested positive for at least one pathogen; a single pathogen was detected in 1082 (57.83%) of the patients, and two or more pathogens were detected in 369 (19.72%). In 2020, of 725 specimens, 406 (56.00%) tested positive for at least one pathogen; a single pathogen was detected in 325 (44.83%) of the patients, whereas two or more pathogens were detected in 81 (11.17%). The detection rate in 2020 was substantially lower than that in 2019 (Table 1).

Comparison of positive rates of pathogens between 2019 and 2020

The top three viruses in both 2019 and 2020 were RSV, HRV and HPIV. Except for the significant decrease in the detection rate of HPIV, the viruses showed similar detection rates between the two years. In terms of bacteria, the detection rate of Haemophilus influenzae was 7.00% in 2019 but decreased to 1.93% in 2020. In contrast, the detection rates of Staphylococcus aureus and Escherichia coli increased in 2020 compared with 2019. In addition, Mycoplasma pneumoniae, one of the most common causes of community-acquired pneumonia in children, showed the most significant decrease, from 48.42% to 16.97% (Table 2).

Changes in specific pathogens by month

Compared with 2019, the detection rates of viruses decreased after March 2020, but the seasonality in 2020 did not change, and rates again peaked in winter (Fig. 1A). RSV was at low prevalence after April 2020 and gradually increased after October to a peak in December (Fig. 1B). In contrast to RSV, HRV increased in prevalence after schools reopened in June 2020, to levels much higher than in the same period of 2019 (Fig. 1C). Influenza showed seasonal prevalence, with high incidence in winter and spring (Fig. 1D). HPIV was almost undetected in the first half of 2020, with a significant increase in detection rates after September, which was opposite to its seasonal distribution in 2019 (Fig. 1E). ADV showed a small peak in early 2020 and remained at a low detection rate thereafter (Fig. 1F). Interestingly, the circulation and seasonality of bacteria appeared to remain the same during the two years, being less affected by the COVID-19 epidemic (Fig. 1G). Mycoplasma pneumoniae was detected throughout the year, with a high prevalence in 2019 that peaked in autumn. Nevertheless, it was barely detected after March 2020, and its prevalence remained low even after the reopening of schools (Fig. 1H).

Changes in the number of positive detections in different age groups

In 2019, school-aged children (6-16 years) had the highest positive detection rate of common respiratory pathogens (85.25%), while the positive detection rates of the < 3 years and 3-5 years age groups were similar (72.10% and 79.41%, respectively). Since the outbreak of the epidemic in January 2020, the detection rate in all age groups has decreased, most obviously in the school-aged group (56.03%). In terms of monthly trends, however, the rates of positive tests were similar in the three age groups (Fig. 2).
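As a worked illustration of the chi-square comparison named in the statistical methods, the sketch below tests the overall detection rates reported above (1451/1871 positive in 2019 versus 406/725 in 2020). It assumes SciPy is available and is illustrative only; it is not the study's own analysis code.

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table of overall detection: [positive, negative] per year.
table = [
    [1451, 1871 - 1451],  # 2019: 77.55% positive
    [406,  725 - 406],    # 2020: 56.00% positive
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# p << 0.05: the drop in the overall detection rate in 2020 is very unlikely
# to be due to chance, consistent with the comparison reported in the paper.
```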
Discussion

In this retrospective study, we assessed the epidemiological characteristics of common respiratory pathogens in children with community-acquired pneumonia before and during the local COVID-19 pandemic. Our study showed an important influence of the COVID-19 epidemic on the spread of common respiratory pathogens in Shanghai, China. Through a series of strict NPIs, such as wearing masks, closing schools and maintaining social distance, not only was the diffusion of SARS-CoV-2 reduced, but so was the epidemic circulation of other common pathogens [9], especially respiratory viruses and Mycoplasma pneumoniae.

Compared with 2019, the number of hospitalized children with pneumonia in our department diminished by 61.25% in 2020, and the total detection rate of respiratory pathogens also fell significantly (from 77.55% to 56.00%), whether for single or mixed infections. In one study conducted in New Zealand, the incidence of severe acute respiratory illness among hospitalized patients was very low in 2020 owing to the use of strict NPIs such as lockdown and border closures [10]. However, our findings showed that the prevalence of pneumonia in children with congenital heart disease and the proportion of children with severe pneumonia requiring oxygen inhalation increased compared with 2019, with bacterial or RSV infection as the main cause. In terms of age, the proportion of infants increased, probably because it is difficult for children under the age of 3 to wear masks. Perhaps NPIs could not reduce the incidence among infants, children with underlying diseases and children with severe pneumonia.

Compared with 2019, the overall detection rate of viruses decreased in 2020, but the rate of RSV increased (8.93% vs. 9.38%), and in particular the winter peak of RSV reappeared as usual. RSV disease occurs in all age groups, but the incidence is higher under 2 years of age [11]. RSV infection has been linked to asthma and acute lower respiratory tract infection, increasing mortality and morbidity in children [12-14]. In this study, RSV was still the most common source of respiratory viral infection in infants (age < 3 years), children with congenital heart disease, and children with severe pneumonia. Therefore, further research is needed on preventive measures against RSV. There was a small peak in the positive detection rate of HPIV in the spring and summer of 2019, but it did not appear in the same period of 2020.
Instead, the number of HPIV detections increased dramatically after September. Human behavior is one of the main factors influencing the seasonality of respiratory virus infections. As a matter of fact, in the context of the easing of the domestic COVID-19 situation in China, people in low-risk areas had basically resumed their normal work and life, which might be the reason for the surge in RSV and HPIV infections from September to December 2020 [15]. These results show that outbreaks may take place outside the typical season during the COVID-19 pandemic. As NPIs are relaxed, healthcare systems need to prepare for future outbreaks of common respiratory viruses in children. Many studies have shown that influenza spreads in a similar way to COVID-19, for example by droplet and contact transmission [16,17]. Therefore, non-pharmaceutical interventions aimed at reducing the spread of COVID-19 may also significantly reduce influenza [18,19]. Despite the return to school, the resumption of work and the usual seasonal epidemics, the detection rate of influenza in 2020 remained low. First, the Shanghai government increased the scope of influenza vaccination, especially for young children. Next, the COVID-19 pandemic changed health-seeking behavior and increased the focus on non-pharmacological interventions that decrease the risk of infection with, and the spread of, influenza [20]. Meanwhile, viral-viral interactions may also affect the incidence of respiratory viral infections. Interferon-stimulated immunity caused by infection with one virus can provide nonspecific interference that makes it difficult for other viruses to establish themselves in a population [21]. Increased circulating levels of influenza A virus have been shown to limit rhinovirus epidemics, possibly through an interferon-mediated mechanism [22]. Interestingly, despite the adoption of NPIs in 2020, the detection rate of HRV increased significantly, a trend not seen with other viruses after the restarting of schools in June. A former study showed that surgical masks could prevent the transmission of human coronaviruses and influenza viruses, but not of rhinoviruses, in respiratory droplets and aerosols from symptomatic patients with acute respiratory disease [23]. In addition, rhinoviruses are non-enveloped viruses and so might be inherently less readily inactivated by washing hands with soap and water or by ethanol-containing disinfectant [24,25]. Furthermore, the quality of children's hand washing may be poor. These factors may explain why rhinovirus infection remained at its usual circulation level.
In terms of bacteria, the most common ones in 2019 were Streptococcus pneumoniae and Haemophilus influenzae, which are common bacteria in children with community-acquired pneumonia. Notably, the Global Action Plan for the Prevention and Control of Pneumonia issued by the World Health Organization in 2008 listed immunization coverage for Haemophilus influenzae and Streptococcus pneumoniae, and immunization against pertussis and measles, as primary prevention strategies. Given that vaccines covering either pathogen were not routinely used in China, it was not surprising that the rates of pneumococcal and Haemophilus influenzae type b infection in children were relatively high. However, by June 2020, the detection rate of bacteria increased, dominated by Staphylococcus aureus and Escherichia coli. The reason was that in the late stage of the epidemic, congenital heart disease complicated with pneumonia increased among children hospitalized in the respiratory department, whose sputum cultures mainly yielded Staphylococcus aureus, Escherichia coli and Klebsiella pneumoniae, considering the large-scale use of antibiotics, variation in pathogenic bacteria, regional differences, changes in pathogenic bacteria and other factors. Moreover, children with congenital heart disease are more likely than ordinary children to be infected with Staphylococcus aureus in infancy or in winter, which may be related to factors such as their hemodynamic characteristics and low immunity.

Mycoplasma pneumoniae is one of the most common pathogens of community-acquired pneumonia, occurring especially in school-aged children. It can cause obvious disturbance of immune function in children, and if treatment is not timely, it can cause breathing difficulties, heart failure and, in severe cases, even death [26]. Mycoplasma pneumoniae pneumonia occurs in regional outbreaks every 3 to 7 years, and each outbreak may last 1 to 1.5 years. The last two epidemics of MP were in 2013 and 2016 [26, 27]. In epidemic years, the infection rate of MP increases by 3 to 4 times in children and adolescents. Our study showed that the detection rate of Mycoplasma pneumoniae was close to 50% in 2019, based on a combination of molecular assays and serology, which was considered an outbreak of MP infection. This might also be the reason why the positive rate in school-aged children in 2019 was markedly higher than that of the other two age groups. Climatic conditions, such as humidity and temperature, have been reported to significantly affect the survival and spread of airborne M. pneumoniae [28, 29].
37 °C is the optimum growth temperature for MP, which grows best in China's hottest months, such as July, August and September. However, in 2020, fewer patients visited clinicians following the outbreak of the COVID-19 pandemic, and restrictive measures against COVID-19 cut down the incidence of respiratory infections; as a result, there was a considerable reduction in the positive rate of MP from March onwards, which remained at a comparatively low level afterwards, consistent with previous findings in other studies [30-32]. At the start of the new term, the "Guidelines for the Prevention and Control of the Novel Coronavirus Pneumonia in Primary and Secondary Schools" were issued by the Ministry of Education to provide guidance and assistance on the prevention and control of the epidemic in schools. These restrictive measures against COVID-19 could effectively reduce the transmission of Mycoplasma pneumoniae, which led to a rapid decline in the positive rate among school-aged children in 2020 as well. It might also be that older children were better able to comply with the various defensive measures.

This paper compared the epidemiological features not only of common respiratory viruses in children, but also of bacteria and Mycoplasma pneumoniae during the COVID-19 pandemic in China. However, there are some limitations. First, this study was conducted in a single center and all of the patients were hospitalized, which might lead to a preselection bias. Second, the methods used to detect respiratory pathogens such as viruses and bacteria were relatively simple, which might lead to false negatives. Third, during the pandemic, many public health interventions were enforced and some measures (such as wearing masks) persisted afterwards. Consequently, the sample sizes should be further expanded and pathogens should be evaluated for at least two years before and after SARS-CoV-2 to examine which of these measures may be the most powerful in preventing the spread of respiratory pathogens.

Conclusions

Strict public health interventions for COVID-19 in China have effectively suppressed the spread of SARS-CoV-2. We observed unprecedented reductions in human parainfluenza virus, influenza and Mycoplasma pneumoniae, most likely due to the role of NPIs. However, the interventions had a much more limited effect on infants and on other pathogens such as bacteria and rhinovirus. With the introduction of mass vaccination against COVID-19 and the relaxation of control measures, infection rates in younger age groups are expected to return to previous levels. Therefore, it is necessary to obtain more epidemiological surveillance of respiratory pathogens, which will help improve early preventive measures.

Fig. 1 Monthly activity of pathogens during the COVID-19 pandemic year of 2020 (gray line) compared with the previous year of 2019 (blue line)

Fig. 2 Positive detection rates of pathogens in children of different ages from January 1, 2019 to December 31, 2020

Table 1 General characteristics of the patients

Table 2 Comparison of positive rates of pathogens in 2019 and 2020
Investigating Teacher Candidates' Beliefs about Standardized Testing

The purpose of this study is to examine the beliefs of prospective teachers about standardized testing in terms of several variables. The research follows a survey model. The study was carried out with 442 randomly selected prospective teachers registered in different departments at Dicle University in Turkey during the 2015-2016 academic year. The selected teacher candidates were grouped under 5 fields according to the branches in which they were registered (social sciences, language education, mathematics and sciences, fields requiring special ability, basic education). The Beliefs about Standardized Testing Scale (BAST), developed by Magee and Jones (2012) and adapted into the Turkish language by İlhan, Çetin and Kinay (2015), was used as the data collection tool. As a result of the study, teacher candidates were found to have moderate beliefs about standardized tests. There is a significant difference in the candidate teachers' beliefs about standardized tests in terms of their gender and fields. Teacher candidates' beliefs about standardized tests do not differ significantly according to the grade variable.

Introduction

There are many factors that direct human behavior. One of these is the beliefs of the individual. While making meaning of their environment and life, people analyze certain events and phenomena through beliefs, and in parallel with this belief system they develop certain forms of behavior. Belief is defined as a representation of one's knowledge about an object, or the individual's understanding of himself and his environment [1]. Beliefs and belief systems serve as a guide to help one know and understand the world [2]. Considering the determinative role of beliefs in human behavior, understanding the educational beliefs of teachers and teacher candidates is important for the development of teaching practices and for the potential success of educational reforms [3], because the primary factor that directs teachers' in-class practices is their beliefs. The educational beliefs that teachers hold provide a window on their decision-making, their practice and the effectiveness of teaching practices [4], [2]. In this context, one set of beliefs that teachers and teacher candidates possess and that needs special attention during the teaching and learning process is their beliefs about standardized testing [5].

According to [6], the origins of standardized tests go back to ancient China, where government posts were mainly assigned according to the results of tests on Confucian philosophy and poetry. The use of standardized tests for children is seen in the IQ tests of the French psychologist Alfred Binet. With the development of Cliff Stone's first standardized achievement test in 1908, standardized assessment started to focus on achievement rather than intelligence. Tests were also used by the US Army at the end of World War I to assess the capabilities of potential officers. In 1958, to compete with Soviet technological developments, and in particular the launch of Sputnik, American President Eisenhower implemented the National Defense Education Act, which allowed state funds to be used to raise student test scores, educational performance, and test success [7]. According to the A Nation at Risk report published in 1983, in the period from 1963 to 1980 the mean scores of students in the United States on math and verbal tests dropped about 40-50 points [8].
After George W. Bush's No Child Left Behind Act of 2001, the Obama administration's Race to the Top initiative of 2009 set performance standards for teachers and administrators, linking state funds to standardized test results [7]. The No Child Left Behind Act requires students to achieve sufficient success in reading, mathematics and science. Student achievement is measured by standardized tests that are applied to all schools in the same way. The main reason for this is to establish uniform success standards for measuring student achievement [9].

Standardized tests are assessment instruments that are administered and scored according to predetermined standards [10], with a test manual that explains in detail the application conditions, the scoring principles and the methods to be followed in interpreting the scores obtained [11]. Standardized tests are developed according to accredited test development standards. Everyone who takes the test is asked the same questions, and all participants take the test under the same conditions. The same instructions are given to the students about the test, and they are given the same duration for the test. The scoring method is also the same [8]. Those in charge of administering a standardized test must follow the same directives, materials and processes, and carry out assessment and evaluation according to the criteria specified in the test manual [12]. The purpose of standardized tests, which differ in aim and design from other tests, is to evaluate and compare the skills and competencies of individuals in a diverse community (e.g. individuals with different educational backgrounds who have studied in different institutions) [13]. Today, the most commonly used type of standardized test is the multiple-choice test, which requires students to find, among the given choices, the correct answers to questions asked in a particular order within a given time period. All variables in standardized tests are predetermined and the same for all students. High reliability and practicality are among the advantages of standardized tests [14]. Other advantages of standardized tests include:

- Standardized tests can lead to positive changes in the classroom or school setting by making clearer for teachers the curriculum objectives they need to focus on [15].
- They are more objective and reliable than other assessment tools in measuring student attainment in a given area or areas. They also help teachers and administrators monitor the effectiveness of teaching methods, instructional materials and new programs [16].
- As they are developed by experts, the quality of the tests and test items is high [11].
- Standardized tests allow international comparisons between students who are at the same level of knowledge and skill [17].
- Standardized tests allow student progress to be monitored objectively and effectively at the end of a specified period [18].

Apart from these positive aspects, standardized tests also have some negative aspects. According to [16], preparing for standardized tests and the emphasis on succeeding in them negatively affect the natural learning process and cause students to focus only on the parts of the subject that will be included in the test. The process of preparing students for standardized tests reduces the amount of time available for teaching, and narrows the content of instructional programs and teaching methods [9].
According to [19], focusing on the results of standardized tests can reflect mere "teaching to the test" instead of more important gains in learning, thus narrowing the focus of the curriculum to the tests alone. When the curriculum is narrowed further, content and skills that are not included in the standardized tests are removed from the program. Teachers feel pressure to ensure that classroom activities match the materials included in the standardized assessment [20], even if they believe that other materials would better prepare students for success in real life. Further disadvantages of standardized tests include:

- Standardized tests require a lot of time and money. Standardized tests that punish variety in the teaching environment can cause students and teachers to hate school [21].
- Standardized tests provide very limited information about a student's past or future talent and status. They also have negative effects on teachers' use of varied and differentiated techniques in teaching and evaluation [7].
- Assessments based on standardized tests cause inequality by increasing the gap between rich and poor students in the long run [22].
- Standardized tests unintentionally encourage students to become superficial thinkers by ignoring qualities that the tests cannot accurately assess, such as creativity, motivation, perseverance and curiosity, and by seeking quick, easy and clear answers [23].
- Standardized tests overlook the internal and mental processes that students experience by focusing only on the correct or incorrect answer [24].

Standardized tests continue to be used to assess students in many countries despite these advantages and disadvantages. It is important to examine teacher candidates' beliefs about standardized tests in order to better understand how the standardized tests that have emerged from countries' educational policies are reflected in the teaching-learning process. Beliefs about standardized tests can be defined as beliefs about the objectivity of standardized assessments, beliefs about high-stakes decisions and beliefs about bonus money [25]. The following questions have been investigated for this purpose:

1. What are the teacher candidates' belief levels regarding standardized tests?
2. Is there a meaningful difference between teacher candidates' beliefs about standardized tests in terms of their gender, field and grade levels?

Research Model

This study follows a general survey model. Survey models are research approaches aiming to depict conditions in the past or present as they exist [26]. In this study, the survey model is used because the aim is to describe teacher candidates' beliefs about standardized tests.

Study Group

This research was carried out with 442 randomly selected teacher candidates registered in 12 different departments at Dicle University in Turkey during the 2015-2016 academic year. The selected teacher candidates were grouped under 5 fields (social sciences, language education, science and mathematics, special abilities and primary education) according to the branch they study. The demographic information for the study group is presented in Table 1.

Data Collection Tool

The Beliefs about Standardized Testing scale (BAST), developed by Magee and Jones [25] and adapted to Turkish by İlhan, Çetin and Kinay [27], was used as the data collection tool. BAST was developed to determine the beliefs of university students about standardized tests. BAST uses a 5-point Likert-type rating and contains 9 items.
The BAST consists of three dimensions: beliefs about the objectivity of standardized tests, beliefs about high-stakes decisions and beliefs about bonus money. The reliability coefficient of the scale, obtained from the data collected from the teacher candidates in the Turkish adaptation study, was found to be .77 [27]. The Cronbach's alpha reliability coefficient is .65 in this study. [28] stated that reliability coefficients of .50 and above are acceptable. Accordingly, it can be said that the reliability coefficient of the scale is sufficient.

Data Analysis

Data obtained from the study were analyzed using the SPSS 20.0 software program. Percentages and frequencies were calculated for the demographic information of the teacher candidates in the study group. Mean and standard deviation values were calculated to determine the level of the teacher candidates' beliefs about standardized tests. Mean scores were interpreted by considering the score ranges and levels presented in Table 2. Parametric tests were used in this study because the data have a normal distribution and a homogeneous structure for the three variables examined. An independent (unpaired) samples t-test was used to determine whether the teacher candidates' beliefs about standardized tests differ according to the gender variable, and one-way analysis of variance (ANOVA) was used to see whether they change according to the field and grade variables. The significance level was set at 0.05. The LSD test was used to determine the source of any difference, and the effect value was calculated to determine the effect size of significant results. According to Cohen (1988), an eta squared between .01 and .06 is small, between .06 and .14 is moderate, and .14 and above is interpreted as a large effect (cited in [29]; [30]).
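As an illustration of the analysis pipeline described above (scale reliability, independent-samples t-test, one-way ANOVA and eta squared), a minimal sketch follows. The data are randomly generated placeholders, and the scoring is simplified to item means; this is not the study's SPSS output.

# Minimal sketch of the analyses described above: Cronbach's alpha for
# scale reliability, an independent-samples t-test for gender, and a
# one-way ANOVA with eta squared for field. Data are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 9-item, 5-point Likert responses for 442 candidates
items = rng.integers(1, 6, size=(442, 9)).astype(float)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

scores = items.mean(axis=1)
gender = rng.integers(0, 2, size=442)          # 0 = female, 1 = male (hypothetical)
t, p_gender = stats.ttest_ind(scores[gender == 1], scores[gender == 0])

field = rng.integers(0, 5, size=442)           # five study fields (hypothetical)
groups = [scores[field == g] for g in range(5)]
f, p_field = stats.f_oneway(*groups)

# Eta squared = between-groups sum of squares / total sum of squares
grand = scores.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_total = ((scores - grand) ** 2).sum()
eta_sq = ss_between / ss_total
print(f"alpha={alpha:.2f}, t={t:.2f} (p={p_gender:.3f}), "
      f"F={f:.2f} (p={p_field:.3f}), eta^2={eta_sq:.3f}")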
Findings

When Table 4 is examined, it can be seen that the teacher candidates' beliefs about standardized tests differ significantly according to gender (p < .05), in favor of male teacher candidates. When the effect value is examined, the significant difference is found to be large (.141 > .140). According to this finding, male teacher candidates have stronger beliefs in standardized tests than female teacher candidates. Findings on whether the teacher candidates' beliefs about standardized tests differ according to their fields of study are presented in Table 5. When Table 5 is examined, a significant difference is found between the teacher candidates' beliefs about standardized tests and the fields they study. When mean scores are taken into consideration, teacher candidates studying in the field of mathematics and science have the highest mean scores, and teacher candidates studying in the primary education field have the lowest. Furthermore, when the effect value is taken into consideration, the significant difference is large (.171 > .140). Teacher candidates studying in special ability, language education and social sciences fields have stronger beliefs in standardized tests than their primary education counterparts. Findings on whether the teacher candidates' beliefs about standardized tests differ according to their grades are presented in Table 6. When Table 6 is examined, no meaningful difference is found between teacher candidates' beliefs about standardized tests in terms of the grade variable. The mean scores according to grade level are very close to each other. As the candidates move from one grade to the next, their beliefs increase slightly; however, the change is not statistically significant.

Conclusions, Discussion and Recommendations

When the findings related to the first question of the research are examined, it is concluded that teacher candidates' beliefs about standardized tests are at a moderate level. It can be said that the teacher candidates participating in the research are neither against these tests nor fully supportive of their use. In addition to research suggesting that teachers' beliefs about standardized tests are mostly negative, there are also studies showing that teachers' beliefs are both positive and negative [31]. [32] found that teachers considered it useful to make comparisons between students via standardized tests; however, the limitations of these tests far outweighed the benefits. Teachers state that standardized tests cause stress for them and that these tests are not a valid way to measure students' learning. When the relevant literature is examined, teachers' beliefs about standardized tests appear negative as well [33]. [34] state that primary and secondary school teachers believe standardized tests are a waste of time, do not comply with the goals of the curriculum, and are very poor at reflecting students' knowledge and skills. According to [35], a large number of teachers believe that standardized tests do not point schools in the right direction and that the results of one-off tests are not an accurate indicator of student learning and development. In addition, teachers believe that standardized tests have negative effects on the curriculum, on learning and teaching, and on student and teacher motivation. [36] indicate that teachers believe their own tests provide more information than standardized tests in terms of assessment and evaluation. According to [17], teachers express that standardized tests impose "teaching of test techniques" on teachers. Teachers also believe that using standardized testing leads to the labeling of students. Although teachers find standardized tests valid and useful in the classroom context, when the focus shifts away from the classroom and towards decisions regarding students' futures or future school life, very few teachers show the same supportive attitude towards these tests. According to [37], 95% of teachers state that standardized tests cause more stress, and they feel pressure because of increasing test practices and accountability [38], [39], [40]. In addition, the use of standardized tests reduces teachers' job satisfaction and simplifies teaching and learning activities. Standardized tests require the teaching of test technique, force teachers to rethink important content, and cause some content to be preferred over other content. [41] report that with the use of standardized tests, teachers change their teaching style and are forced to adopt a more teacher-centered approach. Besides, student interest and motivation decrease, and these tests create a less inclusive classroom environment for students from disadvantaged groups. For teachers, the use of standardized tests limits professionalism and their positive influence on instructional decisions. In addition, based on the result of a single exam, teachers are forced to defend their in-class actions [42].
Standardized tests used in determining teacher competencies do not provide excellence in accountability or teaching. It is stated that the standardized tests used in teacher selection have the potential to exclude visible minorities and alternative thinkers whose world views could enrich the education system [43]. When the findings are examined, a significant difference is found among the teacher candidates' beliefs about standardized tests in terms of the gender variable. As a result of this research, male teacher candidates' beliefs regarding the objectivity of standardized tests, high-stakes decisions and bonus money based on standardized tests were found to be stronger than those of female teacher candidates. [17] indicated that male teachers believe more strongly than female teachers that intelligence tests, a type of standardized test, give good indications of students' intelligence. [44] stated that teachers' beliefs about the effects of standardized tests do not show any significant difference according to the gender variable. As being for or against standardized tests is highly influenced by an individual's epistemological beliefs, it is important to consider how differences in worldviews shape people's assumptions about tests while discussing them [26].

Another finding of the study is that there is a meaningful difference between teacher candidates' beliefs in terms of the branch variable. The teacher candidates with stronger beliefs in standardized tests were those studying in the mathematics-science and social sciences fields. The teacher candidates with the weakest beliefs in standardized tests were those studying in special skills (e.g. art, music) and primary education fields. Teacher candidates in the primary education field are expected to work in elementary schools and kindergartens in the future. In their professional lives, they use process assessment tools such as developmental portfolios much more than standardized tests, as they have no testing obligations. Teacher candidates who study in areas requiring special skills enrol in their programs through a special aptitude test in addition to standardized tests; thus the importance of these tests is secondary for them. Besides, they will not be using any standardized test with their students, and they have not experienced the effects of these tests to the extent that teacher candidates in other fields have. It is therefore not surprising that the beliefs of teacher candidates in these two areas about standardized tests are weak. In the field of mathematics and science, teacher candidates enrol in their programs only by succeeding on standardized tests, and they have been exposed to these tests more in their educational backgrounds. Teacher candidates studying in these areas also experience such tests much more and are aware that they will use standardized tests frequently in their professional lives. It can be argued that this is one of the reasons why science and mathematics and social sciences teacher candidates have stronger beliefs in standardized tests than those in other fields. According to [45], it is difficult to separate teachers from their past and school experiences. At this point, teacher candidates' internal dialogue on teaching and their beliefs about learning and teaching should be heard as soon as they enrol in the education faculty. [46] indicated that teacher candidates' beliefs can change or be shaped within certain parameters.
In this direction, it is possible for a teacher candidate with a student-centered teaching tendency to develop more sensitive ideologies with special support. [47] pointed out that the beliefs of unconventional and successful students following a different career are highly influenced by past experiences, while the beliefs of traditional students are shaped more by past school experiences. From this point, it can be said that teacher candidates' educational experiences and backgrounds are very important in terms of their beliefs about standardized tests. Thus, teacher education programs should take more responsibility in helping teacher candidates develop more original perspectives on new philosophies and paradigms. When the findings of the second question of the research are examined, it is determined that the grade variable has no significant effect on the teacher candidates' beliefs about standardized tests. In other words, it can be said that the candidates' education in the education faculty has no effect on their beliefs about standardized tests, which are mostly related to the traditional learning and teaching process. [48] reported that individual experiences and the experiences gained during the student years influence teachers' beliefs. Besides, teaching-learning practices are influential in shaping students' beliefs about learning [49]. Accordingly, because the teacher candidates participating in the research study at the same faculty, they have had similar learning experiences even though they are in different grades. Therefore, it is expected that teacher candidates who have similar experiences are close in their belief levels regarding standardized tests.

The items of the Beliefs about Standardized Testing scale are as follows:
1. Standardized tests are the best way to objectively measure how much a student knows.
2. Students are too different for a single standardized test to really be useful in measuring their abilities.
3. A good standardized test can provide a fair (unbiased) indication of the quality of education a student receives in school.
4. True knowledge is too complex to be measured by a standardized test.
5. It is impossible for a standardized test to really be unbiased.
6. It is a good idea for states to require all high school students to pass a standardized test or set of standardized tests in order to graduate from high school.
7. Students, in any grade above third grade, who do not pass their grade-level standardized test should have to repeat that grade level.
8. Schools whose students have the highest scores on standardized tests should receive bonus money.
9. Teachers whose students score the highest on standardized tests should receive bonus money.
Comprehensive reference intervals for white blood cell counts during pregnancy

Background: White blood cell (WBC) count increases during pregnancy, necessitating reliable reference intervals for assessing infections and pregnancy-related complications. This study aimed to establish comprehensive reference intervals for WBC counts during pregnancy.

Methods: The analysis included 17,737 pregnant women, with weekly WBC count measurements from pre-pregnancy to postpartum. A threshold linear regression model determined reference intervals, while Harris and Boyd's test partitioned the intervals.

Results: WBC count exhibited a significant increase during pregnancy, characterized by a rapid rise before 7 weeks of gestation, followed by a plateau. Neutrophils primarily drove this increase, showing a similar pattern. The threshold regression model and Harris and Boyd's test supported partitioned reference intervals for WBC counts: 4.0-10.0 × 10^9/L for <= 2 weeks, 4.7-11.9 × 10^9/L for 3-5 weeks, and 5.7-14.4 × 10^9/L for >= 6 weeks of gestation. These reference intervals identified pregnant women with high WBC counts, who had a higher incidence of pregnancy-related complications including placenta previa, oligohydramnios, secondary uterine inertia, and intrauterine growth restriction.

Conclusion: This study establishes comprehensive reference intervals for WBC counts during pregnancy. Monitoring WBC counts is clinically relevant, as elevated levels are associated with an increased risk of infection and pregnancy-related complications.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12884-023-06227-8.

Introduction

The white blood cell (WBC) count, also known as the leukocyte count, undergoes significant changes during pregnancy and the initial postpartum period. These changes are part of the body's adaptation to the demands of the developing fetus. Specifically, the WBC count tends to increase from the first to the third trimester and peaks during the initial postpartum period [1]. Understanding these normal variations in WBC counts can assist clinicians in distinguishing between normal leukocytosis (an increase in the number of white blood cells) and pathological elevation of the WBC count during pregnancy and the initial postpartum period. This differentiation is crucial, as it can prevent misdiagnosis of physiological leukocytosis as a bacterial infection, which could lead to unnecessary medication use that may harm the fetus.

In addition, a study found that leukocytosis (> 13.8 × 10^9/L) during the first trimester of pregnancy is significantly associated with an increased risk of obstetrical complications. These complications include preterm delivery before 37 weeks, hypertensive disorders, gestational diabetes mellitus, and cesarean section. Furthermore, women with leukocytosis during the first trimester had significantly higher rates of fetuses who were small for gestational age and with birth weight less than 2,500 g [2].
The findings of these studies indicate that the reference interval (RI) currently used for WBC count during pregnancy in China, based on the non-pregnant range (4.0-10.0 × 10^9/L) [3], is inadequate for distinguishing infection and alerting clinicians to pregnancy-related complications. Previous studies have reported varying upper limits of the RI during pregnancy, ranging from 13.8 to 19.6 × 10^9/L. However, these studies were conducted in smaller populations, with differences in ethnicity and gestational age at the time of sampling [4-8]. A large-population study of 24,318 pregnant women in Oxford, UK, mapped the trajectory of WBC between 8 and 40 weeks of gestation and defined the 95% RI for total WBC as 5.7-15 × 10^9/L [9]. However, the RI within 0-7 weeks of gestation was not investigated. The main objective of this study was to define pregnancy-specific RIs for WBC count and assess their ability to detect pregnancy-related complications.

Study population and design

A retrospective longitudinal study was conducted at Shenzhen Longgang Maternity and Child Health Hospital, utilizing data from deliveries that occurred between June 2020 and March 2022. During this period, the equipment and reagents for the blood routine test were consistent, and testing was recommended at under 13 weeks, 16-20 weeks, 24-28 weeks, 30-32 weeks, and around 37 weeks of pregnancy. The inclusion criteria were as follows: patients who were registered, examined, and delivered at our hospital and had at least one blood routine test, not in the week before labor. Blood routine tests conducted within the week before labor were probably affected by conditions that can initiate labor, such as premature rupture of membranes, or by the administration of certain medications such as steroids. Exclusion criteria included: age < 18 years; lasting infectious diseases such as human immunodeficiency virus infection, syphilis, and hepatitis B or C; immune diseases such as systemic lupus erythematosus, Sjögren's syndrome, ankylosing spondylitis, and antiphospholipid syndrome; a history of malignant or borderline tumors; other diseases affecting blood cells such as thalassemia and glucose-6-phosphate dehydrogenase deficiency; more than ten blood tests during pregnancy, which increased the likelihood that these were taken to investigate an abnormality; incomplete blood routine data; and extreme outliers that lay more than three times the interquartile range below the first quartile or above the third quartile [10].
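The extreme-outlier rule quoted above (values more than 3 × IQR beyond the quartiles) can be made concrete with a short sketch; the data frame and column name below are hypothetical placeholders, not the study's data.

# Minimal sketch of the extreme-outlier rule described above: drop values
# more than 3 * IQR below Q1 or above Q3.
import pandas as pd

def drop_extreme_outliers(df: pd.DataFrame, col: str, k: float = 3.0) -> pd.DataFrame:
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return df[(df[col] >= lower) & (df[col] <= upper)]

# Example usage with a hypothetical WBC column (x10^9/L)
df = pd.DataFrame({"wbc": [5.2, 6.8, 9.1, 38.0, 7.4]})
clean = drop_extreme_outliers(df, "wbc")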
Data collection

Demographic, clinical, and laboratory data were collected from electronic hospital records. Demographic data encompassed information such as ethnicity, age, height, and weight. Height and weight measurements were taken during registration, primarily between 6 and 12 gestational weeks, and were used to calculate body mass index (BMI). Clinical data included variables such as blood pressure, gestational age at delivery, baby sex, gravidity, parity, delivery style, and labor-related diagnoses. Laboratory data consisted of blood routine tests performed as part of routine clinical care. Venous blood samples were collected using 4.5 mL potassium EDTA tubes and analyzed using the Sysmex XS-500i hematology analyzer (Sysmex Corporation, Kobe, Japan). The same analytical method was consistently employed throughout the study period. Gestational age was recorded in weeks, including the corresponding weeks and days (e.g., 37 weeks represents a period from 37 to 37 + 6 weeks), and categorized into three stages: first trimester (0 to 13 weeks of gestation), second trimester (14 to 27 weeks), and third trimester (28 to 42 weeks). Additionally, samples from the prepregnancy stage (0 to 11 weeks prior to pregnancy) and the postpartum stage (0 to 11 weeks following delivery) were collected for comparison with WBC trends during pregnancy.

Statistical analysis

Statistical analysis was performed using R software. Descriptive statistics were used to summarize the demographic and clinical characteristics of the study samples. Continuous variables were presented as means with standard deviations (SD) or medians with interquartile ranges (IQR), depending on their distribution. Categorical variables were reported as frequencies and percentages.

In the analysis of the trend of WBC count, the median was used due to the non-normal distribution of the data (Figure S2), and Wilcoxon tests with Bonferroni correction were used to compare WBC levels between different gestational stages.

To achieve a normal distribution while maintaining the trend of WBC count across pregnancy, WBC count was log-transformed (Figure S3). A threshold regression model with a segmented-type change point was used to fit the log-transformed WBC count against gestational age. Due to the uneven sample size across different gestational ages, bootstrapping with 100 replications was employed for resampling. Each replication included 120 samples per gestational week. The median values of the model parameters were extracted from the bootstrap results to construct the regression equation, and the mean log-transformed WBC count for each gestational week was calculated. The residuals of the model across different gestational ages showed minimal deviation (Figure S4), allowing us to use the residual standard deviation (RSD) as the standard deviation of WBC count for each gestational week. The gestational age-specific RIs for log-transformed WBC count were calculated as the mean ± 1.96 standard deviations. Finally, the RIs for log-transformed WBC count were transformed back to RIs for WBC count.

Harris and Boyd's test, which is recommended by the National Committee for Clinical Laboratory Standards (NCCLS), was used to determine the partitioning of RIs during pregnancy [11]. Harris and Boyd's test is composed of two independent partitioning tests. The first uses the standard normal deviate score z as a test parameter: z = (x̄1 - x̄2) / √(s1²/n1 + s2²/n2), where x̄i, si, and ni are the mean, standard deviation, and sample size, respectively, of subgroup i. The means and standard deviations were taken from the threshold regression model. When the z score >= 5[(n1 + n2)/240]^(1/2), reference interval partitioning was required. The second test recommends separate reference intervals for the subgroups if the ratio between the standard deviations is >= 1.5, even if the means are equal. In this study, the second test was not applicable because the standard deviations were equal.
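A minimal sketch of this two-part partitioning criterion follows; the subgroup summaries are hypothetical placeholders, and the cut-off constant is the one quoted above.

# Minimal sketch of Harris and Boyd's partitioning criterion as described
# above: partition if z >= 5 * sqrt((n1 + n2) / 240), or if the ratio of
# standard deviations is >= 1.5. Subgroup summaries are hypothetical.
from math import sqrt

def needs_partition(mean1, sd1, n1, mean2, sd2, n2):
    z = abs(mean1 - mean2) / sqrt(sd1**2 / n1 + sd2**2 / n2)
    z_crit = 5 * sqrt((n1 + n2) / 240)
    sd_ratio = max(sd1, sd2) / min(sd1, sd2)
    return z >= z_crit or sd_ratio >= 1.5

# Example with hypothetical log-WBC summaries for two gestational subgroups
print(needs_partition(mean1=1.76, sd1=0.23, n1=120, mean2=2.21, sd2=0.23, n2=120))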
In the analysis of the association between high WBC count and pregnancy-related complications, we analyzed the diagnoses of all subjects and selected the most prevalent pregnancy-related complications for further investigation. Utilizing the RIs proposed in our study, we identified pregnant women with high WBC counts. Women with at least one test result above the gestational age-specific RI were categorized into the High group, while those with all test results within the RI were classified into the non-High group. The association between WBC count and pregnancy-related complications was assessed using chi-square tests or Fisher's exact test, depending on the expected frequencies.

All statistical tests were two-tailed, and a significance level of P < 0.05 was considered statistically significant.

Overview of the study population

In this study, a total of 19,748 deliveries were initially identified as meeting the inclusion criteria. Subsequently, women aged less than 18 years and those with infectious diseases, immune diseases, a history of malignant or borderline tumors, other diseases impacting blood cells, incomplete blood routine data, or extreme outliers were excluded from the analysis. As a result, 17,737 pregnancies were included in the analysis of the WBC trend, as illustrated in Fig. 1. The majority of the study population consisted of ethnic Han individuals (97.1%), and the average age was 30.2 ± 4.48 years. Further characteristics of the population are detailed in Table S1.

White blood cell count increased during pregnancy

The WBC count was assessed at a median of five times per woman, with an interquartile range of 3-6. It is widely recognized that WBC levels increase during pregnancy. This study confirms this trend by analyzing data from the prepregnancy period to the postpartum phase. The median WBC count remained relatively stable at around 6 × 10^9/L before pregnancy and increased after conception (Fig. 2a). A notable surge in WBC count was observed during the first 7 weeks of gestation, followed by a relatively steady trend until 15 weeks. Subsequently, it gradually increased, reaching a peak of 9.9 × 10^9/L at 25 weeks, followed by a gradual decline until 40 weeks. The increase in WBC count during the first and second trimesters was statistically significant (all P values < 0.001), rising from 6.25 × 10^9/L in the prepregnancy phase to 8.73 × 10^9/L in the first trimester and 9.33 × 10^9/L in the second trimester (Fig. 2b). The WBC count in the third trimester was similar to that in the second trimester, at 9.35 × 10^9/L. An intriguing observation was made during the postpartum period, where the WBC count rapidly increased to its highest level of 11.3 × 10^9/L in the immediate postnatal week (0 to 6 days). Following this peak, the WBC count gradually declined, reaching 7.0 × 10^9/L at 4 weeks postnatally, similar to the WBC count in the prepregnancy phase. Subsequently, the WBC count remained relatively consistent.
The elevation of white blood cell (WBC) count during pregnancy is primarily attributed to an increase in neutrophils, which exhibit a similar trend throughout gestation. Monocytes also increase during pregnancy, albeit to a lesser extent, as their absolute count is relatively low. Interestingly, lymphocytes display the opposite pattern to neutrophils. They decline during the first trimester and remain relatively stable during the second and third trimesters. Following childbirth, lymphocytes experience a sharp decline within the initial week, reaching their lowest level, and subsequently begin to increase, gradually returning to prepregnancy levels during the 3-4 weeks postnatally. Eosinophils and basophils, on the other hand, do not exhibit significant variations during pregnancy (Fig. 3).

Estimation of the reference interval of white blood cell count

To determine the RI for WBC count, we included samples obtained from 0 to 40 weeks of gestation, comprising a total of 16,230 pregnant women. Notably, a turning point was observed at 7 weeks of gestation. The regression model likewise identified a threshold at 7 weeks, prior to which the WBC count exhibited a rapid increase from 5.8 to 9.1 × 10^9/L, followed by a slower increase from 9.1 to 9.5 × 10^9/L. The upper limit progressively increased from 9.3 to 14.4 to 15 × 10^9/L at 0, 7, and 40 weeks, respectively. Likewise, the lower limit increased from 3.7 to 5.7 to 6 × 10^9/L at 0, 7, and 40 weeks, respectively (Fig. 4). The upper and lower limits for all gestational weeks are presented in Table S2.

As Table 1 shows, we partitioned the reference intervals at 4 and 7 weeks of gestation. However, the difference in RIs between 7 and 40 weeks of gestation was insufficient for further partitioning, indicating that the reference interval of 5.7-14.4 × 10^9/L is suitable for the gestational period from 7 to 40 weeks. Within the first 7 weeks of gestation, an increase of one or two weeks did not warrant partitioning of the RIs according to the Harris and Boyd criteria.

Sensitivity analysis

To allow a more comprehensive evaluation of the RI for WBC count during pregnancy, four different approaches to establishing RIs were compared in this study: the non-pregnant 95% RI, the pregnant 95% RI using threshold regression, the parametric pregnant 95% RI, and the non-parametric pregnant 95% RI (Table 2). The RI currently used for pregnancy is based on the non-pregnant range of 4-10 × 10^9/L, with both the upper and lower limits lower than those established by the other three methods. The new reference interval for >= 6 weeks of gestation based on threshold regression (5.7-14.4 × 10^9/L) was similar to those from the parametric (5.9-14.5 × 10^9/L) and non-parametric (5.9-14.4 × 10^9/L) methods, indicating the robustness of the results. However, the RI for 3-5 weeks had a smaller upper limit (4.7-11.9 vs. 4.6-13.2 vs. 4.6-12.9 × 10^9/L), probably due to the impact of the low sample size. For <= 2 weeks of gestation, the parametric and non-parametric estimates were 3.8-9.8 and 3.9-9.1 × 10^9/L, respectively (Table 2).

Additionally, the study compared the difference between threshold and linear regressions for establishing RIs. Given the limited sample size in the 0 to 6 weeks gestation period and a change point at the 7th week, this comparison was conducted using samples from 7 to 40 weeks of gestation. As depicted in Figure S5, the upper limits in the threshold regression model were only slightly higher than those obtained through linear regression.
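To illustrate how the partitioned intervals proposed above would be applied in practice, a small lookup sketch follows; the interval values are those reported in this study, while the function names are hypothetical.

# Minimal sketch applying the partitioned reference intervals proposed
# above: <= 2 weeks -> 4.0-10.0; 3-5 weeks -> 4.7-11.9; >= 6 weeks ->
# 5.7-14.4 (all x10^9/L).

def wbc_reference_interval(gestational_week: int) -> tuple[float, float]:
    if gestational_week <= 2:
        return 4.0, 10.0
    if gestational_week <= 5:
        return 4.7, 11.9
    return 5.7, 14.4

def flag_wbc(wbc: float, gestational_week: int) -> str:
    low, high = wbc_reference_interval(gestational_week)
    if wbc > high:
        return "high"
    return "low" if wbc < low else "within interval"

print(flag_wbc(12.3, 4))   # -> "high" (4.7-11.9 applies at 4 weeks)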
High white blood cell count is associated with pregnancy-related complications

As demonstrated in Table 3, women in the high WBC group exhibited a significantly increased risk of placenta previa by 111% (P = 0.003), oligohydramnios by 46% (P = 0.029), secondary uterine inertia by 32% (P = 0.027), and intrauterine growth restriction by 73% (P = 0.032). Furthermore, among the complicated cases, the High group exhibited a higher proportion of women experiencing one, three, and four complications (Figure S6).

Discussion

This study analyzed white blood cell (WBC) trends in a large population of 17,737 pregnant women, confirming increased WBC levels during pregnancy, primarily due to neutrophils. In a subpopulation of 16,230 pregnant women, the results suggested that the RIs for <= 2 weeks, 3-5 weeks, and >= 6 weeks of gestation can utilize the ranges of 4-10 × 10^9/L, 4.7-11.9 × 10^9/L, and 5.7-14.4 × 10^9/L, respectively. Pregnant women with WBC counts over the upper limits had a higher risk of certain pregnancy complications. These insights can help improve health monitoring and risk assessment during pregnancy.

WBC count, particularly neutrophil count, is known to increase during pregnancy, a phenomenon termed "physiologic leukocytosis of pregnancy". Several factors contribute to this phenomenon. Hormones such as estrogen and cortisol, elevated during pregnancy, stimulate the bone marrow to produce more WBCs. Additionally, these hormones prolong the lifespan of neutrophils by inhibiting their apoptosis, leading to an increased number of circulating neutrophils. Pregnancy itself induces a stress state, triggering the release of stress hormones like cortisol and catecholamines, which can further stimulate WBC production and release from the bone marrow. The mild systemic inflammatory state associated with pregnancy also promotes the production of certain cytokines that drive WBC production [12, 13]. Following labor, the rise in WBC count is a normal and beneficial response to the stress of childbirth. This increase serves to protect the mother from infections and support the healing process. The release of inflammatory mediators during labor and the tissue trauma associated with childbirth contribute to the elevation in WBC count. Moreover, the presence of bacteria in the birth canal can also trigger this response. The rapid increase in WBC count is considered a protective mechanism [14, 15]. Notably, the phenomenon of WBC increase during pregnancy and its subsequent peak after delivery has been observed in previous studies as well, confirming its consistency and significance [1, 9]. Although several studies have explored the RI of WBC count during pregnancy [1, 5, 7-9, 16-20], there are notable limitations that cannot be overlooked. Firstly, many of these studies were based on small populations, which may not provide robust and reliable results. Secondly, some studies only sampled specific gestational weeks or trimesters, failing to capture the entire gestational process. Thirdly, most studies did not consider the necessity of RI partitioning, instead focusing solely on providing the 95% confidence interval.
Akkaya et al. conducted a study involving 40,325 pregnant women with 82,786 complete blood count evaluations from 6 weeks of gestation onward. They reported the 3rd, 5th, 10th, 50th, 95th and 99th percentile values for total and differential leukocyte counts according to trimester. While this study encompassed a large-scale population and a wide range of gestational ages, the clinical applicability of the results may be limited because the percentile values chosen were not the 2.5th and 97.5th percentiles [1]. Another large-scale study, conducted by Dockree et al., included 24,318 pregnant women with 80,637 samples from 8 to 40 weeks of gestation, and the RI was determined as 5.7-15.0 × 10^9/L. The authors confirmed the need for a pregnancy-specific RI, but refuted the need for partitioned, gestational-age-specific limits [9]. The results were similar to ours, in which the RI was suggested as 5.7-14.4 × 10^9/L for gestational age >= 6 weeks.

In previous studies, the estimation of WBC count before 6 weeks of gestation was often neglected due to the low likelihood of detecting pregnancy during this early period. In our study, which involved a large population, the sample size for gestational age < 5 weeks did not reach the minimum requirement of at least 120 participants needed for accurate estimation of reference limits [21]. Therefore, we employed bootstrapping to even out the sample size to 120 at each gestational week and a threshold regression model to fit the means, using the residual standard deviation as the standard deviation for calculating the RIs. These approaches help reduce the variation resulting from the small sample size. The threshold regression model proved to be as robust as linear regression (Figure S5), while also accommodating data with change points. The impact of the low sample size is evident in Table 2, where the RIs for gestational age >= 6 weeks estimated by three different methods were similar; however, for the 2-5 weeks gestation period, the upper limits varied among the methods. Dockree et al., who only included women with gestational age over 8 weeks, concluded that RI partitioning was unnecessary [9]. However, when we included gestational ages before 8 weeks, an intriguing finding emerged. A significant turning point was identified in the 7th week, with the WBC count increasing from a median of 5.8 to 9.1 × 10^9/L before that point (Fig. 4). According to Harris and Boyd's test, RI partitioning was warranted at the 4th week, resulting in the range of 4.7-11.9 × 10^9/L. Furthermore, our analysis revealed that within the first 7 weeks of gestation, the progression of one or two weeks did not justify RI partitioning.

Based on these observations, we propose the following RIs: <= 2 weeks of gestation can utilize the non-pregnancy reference interval of 4-10 × 10^9/L; for 3-5 weeks of gestation, the reference interval can be set as 4.7-11.9 × 10^9/L; and >= 6 weeks of gestation can utilize the range of 5.7-14.4 × 10^9/L. These recommendations provide more accurate and appropriate RIs for WBC count during the different weeks of gestation.
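The resampling step described above (drawing 120 observations per gestational week, with replacement, in each of 100 bootstrap replications before fitting the regression) can be sketched as follows; the data frame here is a hypothetical placeholder, and the threshold-regression fit itself is omitted.

# Minimal sketch of the bootstrap resampling described above: in each of
# 100 replications, draw 120 observations per gestational week (with
# replacement). Data are synthetic placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "week": rng.integers(0, 41, size=5000),
    "log_wbc": rng.normal(2.1, 0.2, size=5000),   # hypothetical log WBC values
})

replications = []
for _ in range(100):
    parts = []
    for week, g in df.groupby("week"):
        parts.append(g.sample(n=120, replace=True,
                              random_state=int(rng.integers(10**9))))
    replications.append(pd.concat(parts, ignore_index=True))
# Each element of `replications` would then be fed to the threshold
# regression model, with median parameter values taken across replications.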
During pregnancy, the information carried by blood cells goes beyond that found in non-pregnant individuals. Elevated haemoglobin concentration has been associated with adverse maternal and neonatal outcomes [22], and similarly, a high WBC count is also linked to adverse outcomes. A retrospective study showed that total leukocyte count, neutrophil count, and neutrophil-to-lymphocyte ratio were significantly higher in the placenta previa group compared with controls, suggesting a valuable predictor for placenta previa [23]. Another study suggests that higher total WBC and absolute neutrophil counts in the third trimester are associated with small-for-gestational-age birth. These associations may indicate a cycle of inflammation and placental dysfunction contributing to fetal growth restriction [24]. In our study, we likewise found that a WBC count over the upper limit was associated with placenta previa and fetal growth restriction. Moreover, we discovered additional associations between high WBC count and complications such as oligohydramnios and secondary uterine inertia, which have not been previously reported. The potential mechanisms for these observations might be attributed to systemic chronic inflammation, which can lead to alterations in the uterine environment [25]. These novel findings suggest that establishing an RI for WBC during pregnancy is crucial not only for detecting infections but also for identifying and monitoring various pregnancy-related complications.

In conclusion, our study examined WBC trends in a large population of pregnant women and confirmed the well-known increase in WBC levels during pregnancy, primarily driven by neutrophils. Our findings suggest that different RIs should be applied based on gestational age, with RI partitioning necessary for specific periods. We also identified associations between high WBC count and various pregnancy-related complications, including placenta previa, fetal growth restriction, oligohydramnios, secondary uterine inertia, and shoulder presentation. These results highlight the importance of using appropriate RIs for WBC count during pregnancy to enhance health monitoring and risk assessment.

Strengths and limitations

The strengths of this study lie in its extensive coverage of a large population, encompassing the prepregnancy, pregnancy, and postpartum periods. This comprehensive monitoring of WBC count provides a panoramic view of WBC trends throughout the entire pregnancy journey, ensuring robust and generalizable findings. The use of a threshold regression model to establish gestational week-specific RIs for WBC count addresses the limitations of previous studies and improves the accuracy of interpretation. The implementation of partitioned RIs for WBC count demonstrates high clinical applicability and translational potential. Furthermore, the identification of associations between high WBC count and various pregnancy-related complications contributes valuable insights to the existing scientific knowledge. Nonetheless, limitations of the study include its single-center nature and the homogeneity of the racial composition, which may introduce biases. Additionally, the limited sample size of fewer than 120 participants per week before 5 weeks of gestation may constrain the generalizability of the RI established for early pregnancy. Finally, our study excluded many, but not all, diseases and medical conditions known to affect WBC count.

Fig. 1 Flowchart of the study design
Fig. 2 Trend of white blood cell count across prepregnancy, pregnancy, and postpartum. (a) Displays the median and 95% confidence interval (light blue zone) for white blood cell count in each week. Gestational age was recorded in weeks, representing the corresponding weeks and days (e.g., 37 weeks means 37 to 37 + 6 weeks). (b) Comparison of the median white blood cell count between each pair of adjacent stages. The Wilcoxon test was used for the comparison; ns means P >= 0.05 and **** means P < 0.0001

Fig. 4 Means and reference intervals for each week of gestation. Means and reference intervals were calculated based on a threshold regression model. The grey zone shows the reference intervals. Values of the means and reference intervals for the 0th, 7th, and 40th weeks are annotated

Table 1 Application of the Harris and Boyd partition criteria for pregnancy age

Table 2 Reference intervals for white blood cell count during pregnancy

Table 3 Comparisons of complications between pregnant women with high and non-high white blood cell count
Downy mildew symptoms on grapevines can be reduced by volatile organic compounds of resistant genotypes

Volatile organic compounds (VOCs) play a crucial role in the communication of plants with other organisms and are possible mediators of plant defence against phytopathogens. Although the role of non-volatile secondary metabolites has been largely characterised in resistant genotypes, the contribution of VOCs to grapevine defence mechanisms against downy mildew (caused by Plasmopara viticola) has not yet been investigated. In this study, more than 50 VOCs from grapevine leaves were annotated/identified by headspace solid-phase microextraction gas chromatography-mass spectrometry analysis. Following P. viticola inoculation, the abundance of most of these VOCs was higher in resistant (BC4, Kober 5BB, SO4 and Solaris) than in susceptible (Pinot noir) genotypes. The post-inoculation mechanism included the accumulation of 2-ethylfuran, 2-phenylethanol, β-caryophyllene, β-cyclocitral, β-selinene and trans-2-pentenal, all of which demonstrated inhibitory activity against downy mildew infections in water suspensions. Moreover, the development of downy mildew symptoms was reduced on leaf disks of susceptible grapevines exposed to air treated with 2-ethylfuran, 2-phenylethanol, β-cyclocitral or trans-2-pentenal, indicating the efficacy of these VOCs against P. viticola in receiver plant tissues. Our data suggest that VOCs contribute to the defence mechanisms of resistant grapevines and that they may inhibit the development of downy mildew symptoms in both emitting and receiving tissues.

Figure S1. Overview of the experimental design. Leaf samples of the susceptible Vitis vinifera cultivar Pinot noir and four resistant Vitis spp. hybrids (BC4, Kober 5BB, SO4 and Solaris) were collected immediately before inoculation (0 dpi) and six days post inoculation (6 dpi) with Plasmopara viticola. Ground leaves were subjected to headspace solid-phase microextraction gas chromatography-mass spectrometry analysis (HS-SPME/GC-MS), and two independent experimental repetitions were analysed to annotate/identify volatile organic compounds (VOCs). VOCs were selected according to their different levels in resistant and susceptible genotypes after pathogen inoculation, and they were tested as single pure compounds in the functional assays. Two protocols were used to assess the effect of pure VOCs against P. viticola: (i) in water suspension and (ii) in an air volume without direct contact with the leaf tissue.

Figure S2. Comparison of the measured mass spectra of the volatile organic compounds (VOCs) in grapevine leaf samples with those of the corresponding pure VOCs: 2-phenylethanol (A), γ-cadinene (B), δ-cadinene (C), β-caryophyllene (D), trans-2-pentenal (E), 2-ethylfuran (F), and β-cyclocitral (G). The mass spectrum similarity score and retention index values are reported for each VOC.

Table Legends

Table S1. Volatile organic compounds (VOCs) detected by headspace solid-phase microextraction gas chromatography-mass spectrometry from five grapevine genotypes in the first experiment.

Table S2. Volatile organic compounds (VOCs) detected by headspace solid-phase microextraction gas chromatography-mass spectrometry from five grapevine genotypes in the second experiment.
Column A. VOCs were grouped into six metabolite groups according to their profiles:
- Group 1: VOCs with a higher abundance in all resistant genotypes compared with Pinot noir in both experiments at one or more time points.
- Group 2: VOCs with a higher abundance in two or more resistant genotypes compared with Pinot noir in both experiments at one or more time points.
- Group 3: VOCs with a higher abundance in only one resistant genotype compared with Pinot noir in both experiments at one or more time points.
- Group 4: VOCs with a lower abundance in at least one resistant genotype compared with Pinot noir in both experiments at one or more time points.
- Group 5: VOCs with different abundance profiles in the two experiments.
- Group 6: VOCs found only in the first or in the second experiment.
Column B. Names of VOCs found in grapevine leaves using HS-SPME/GC-MS analysis. Green cells represent VOCs with increased abundance consistent across the two experiments; orange cells represent VOCs with decreased abundance consistent across the two experiments; white cells represent VOCs with increased or decreased abundance in only one of the two experiments.
Column C. CAS Registry Numbers. Source: http://webbook.nist.gov/chemistry/
Column D. Measured retention index (Measured RI).
Column E. Retention index measured from an in-house library of authentic reference standards (Reference RI).
Column F. Measured retention time (Measured RT).
Columns G, M, W, AG, AQ. Mean absolute peak area (abundance), expressed as counts per second (cps), of five biological replicates (plants) at 0 dpi.
Columns H, N, X, AH, AR. Standard error of the absolute peak area (cps) of five biological replicates at 0 dpi.
Columns I, O, Y, AI, AS. Mean absolute peak area (cps) of five biological replicates at 6 dpi.
Columns J, P, Z, AJ, AT. Standard error of the absolute peak area (cps) of five biological replicates at 6 dpi.
Columns K, Q, AA, AK, AU. Fold change (FC) values between 0 and 6 dpi for each genotype. Values are reported for significant changes (p ≤ 0.05, Kruskal-Wallis test, and FC > 1.5). Coloured cells represent consistent statistical differences in the two experiments (green and orange for VOCs with increased or decreased peak area, respectively).
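As a hedged illustration of the selection rule described for Columns K, Q, AA, AK, AU, the following Python sketch flags a VOC as significantly changed between 0 and 6 dpi when the Kruskal-Wallis test yields p ≤ 0.05 and the fold change exceeds 1.5. The peak areas and the direction-agnostic fold-change definition are assumptions for illustration; the original analysis pipeline is not published in this excerpt.

```python
# Illustrative sketch of the reported VOC selection rule:
# significant if Kruskal-Wallis p <= 0.05 AND fold change > 1.5.
# Peak areas (cps) for five biological replicates are simulated.
import numpy as np
from scipy.stats import kruskal

def significant_change(area_0dpi, area_6dpi, alpha=0.05, fc_cut=1.5):
    """Apply the two-part criterion to one VOC in one genotype."""
    p_value = kruskal(area_0dpi, area_6dpi).pvalue
    means = np.mean(area_0dpi), np.mean(area_6dpi)
    fold_change = max(means) / min(means)  # assumed direction-agnostic FC
    return p_value <= alpha and fold_change > fc_cut, p_value, fold_change

rng = np.random.default_rng(1)
before = rng.normal(1.0e5, 1.5e4, 5)   # 0 dpi, five replicates
after = rng.normal(2.2e5, 3.0e4, 5)    # 6 dpi, five replicates
flag, p, fc = significant_change(before, after)
print(f"significant={flag}, p={p:.3f}, FC={fc:.2f}")
```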
THE ANALYSIS OF AIRPORTS' PHYSICAL FACTORS IMPACTS ON WILDLIFE

The impacts of manmade structures on wildlife are often underestimated due to the misbelief that wild animals avoid living in close proximity to any kind of technogenic object. However, such objects may offer a range of benefits to animals and thus become points of attraction, while remaining a source of hazards for these living organisms. Airports are considered dangerous industrial facilities, for they create chemical and physical pollution and host a variety of biohazards originating from transported items and dense concentrations of people. Meanwhile, they are often located outside urban areas, on previously pristine land specially allocated for the purpose, and the animals whose habitat they occupy are exposed to all these impacts just as passengers and staff are. The aim of the research is to conduct a differential analysis of the physical factors of influence within the airport impact area and to evaluate the negative trends for exposed animals. The physical factors were divided into physical objects and physical fields. The assessment of these factors was based on data obtained using special metering equipment for measuring the levels of noise, light and electromagnetic pollution, while the intensity of visual pollution and the fragmentation effects of airport infrastructure were evaluated using a qualitative approach. The airport facilities themselves and the ground access infrastructure are shown to be causes of habitat destruction through barrier and edge effects, as well as structural transformations of landscapes, in particular of relief and phytocenosis. The impact of physical fields coming from the airport territory is formed by light, vibration and electromagnetic pollution. The intensity of the considered factors differs, but the sensitivity of laboratory animals to these factors is high enough to cause a range of effects. Moreover, the methods used to mitigate some other airport impacts can exacerbate the effects of the existing sources. Light pollution is measured and defined as the most significant and damaging factor. Thus, there is a clear need to pay attention to the interactions between an airport and wildlife in order to reduce the intensity of negative effects. The predicted and described effects on wildlife could be very diverse, but the need to verify them through field surveys in the impact areas of airports is highlighted.

Introduction

The work of civil aviation facilities has a very strong impact on societal development trends, as it is able to shape trade, tourism and education patterns. It demands active development of infrastructure, which raises a range of social and economic controversies and concerns, including environmental ones. However, these concerns are mostly connected with potential human health issues, neglecting the impacts of civil aviation facilities on wildlife. The object of research is the interaction of wildlife with manmade structures. The subject of research is the physical effects of airports on wildlife. The aim of the research is to consider the potential stress factors, aggregated here as physical impacts, to which wildlife is exposed due to the activity of airports and their infrastructure.
In order to reach the set aim, the following research tasks were formulated:
• analyze the effects of airport structures on the environment;
• measure the intensity of physical factors: vibration, visual, electromagnetic and light pollution;
• compare the results with available data on the threshold levels of pathological effects observed in wildlife.
The scientific novelty of the research lies in presenting for the first time the effects of civil aviation facilities on animals, with assumptions about the potential pathological processes resulting from exposure to their physical factors. The practical significance of the research results is that the obtained data should be applied in amending nature protection activity to improve the environmental performance of airports.

Analysis of recent research and publications. There is a wide range of medical research results by L. Tauri, J. Nriagu, B. S. Cohen, K. H. Jung, A. Kobayashi, D. Westerdahl and others showing that airport emissions provoke respiratory effects in humans, and probably the same results are valid for air-breathing animals. The health effects of aircraft noise are also intensively studied; in particular, in the works of S. Morell and D. Huang the connection of vascular disorders to the activities of airports was well substantiated. Similarly, airport noise can generate stress and jeopardize wildlife reproduction, as shown in the works of R. D. Alquezar, P. A. Anderson, J. R. Barber, E. M. Bayne and A. E. Bowles. However, few studies mention the possible effects of other physical factors on animals. The impacts of radar systems are known to exist, as shown by E. Sheridan, T. L. DeVault, B. Bruderer and J. Everaert, but light pollution and vibration from airports are not yet well studied.

Problem statement: airports as sources of physical impacts for wildlife

Any modern airport is a system of manmade objects with an elaborate structure and hierarchy. Being such a complicated industrial facility, an airport includes numerous sources of impacts to which both living and non-living environments are exposed. The assessment of the physical factors of airport influence on wildlife should be conducted in two fields. The first is the direct impact of the physical bodies or material objects an airport is made of. The other is the physical pollution spread beyond the borders of an airport facility.

Structural elements of airports. Airports require vast territories, mostly occupied by two main components, runways and terminal buildings, as well as maintenance hangars, parking, and other facilities. The runway remains the most important organizing element, taking up at least 500 hectares of land, depending on the scale of an airport's operations. This is the first disturbing element for wildlife, and the intensity of disturbance depends on the location of the airport. The most important aspect is the location in relation to urban centers: normally airports have been placed outside cities to reduce discomfort for residents from aviation noise and to provide efficient maneuvering for aircraft. But this increases the possibility of contact between animals and airport facilities, because it is an intrusion into natural areas previously untransformed and unvisited by people. After the beginning of construction, wildlife is stressed by the noise and pollution and thus forced to move to other territories.
But this is the case for larger animals, while smaller animals, like rodents, may find such changes useful, as they receive new forage areas with reduced predator pressure. This leads to overpopulation and disease propagation and creates threats to equipment integrity. Nevertheless, many airports built over 30-40 years ago on city outskirts are now located at the edge of, or even inside, developed urban areas. In such situations an airport becomes a new spot of nature for the scanty urban fauna, in particular birds: they use airfields to look for food, becoming a problem for flight safety. At the same time, some animals feel uncomfortable in huge open spaces, and this also prevents their normal activity. Leveling of relief and drainage of territories is also a problem, especially for wetlands. Forced to leave traditional areas, some birds move to airport areas and try to find new residence there, causing problems for air traffic and operations.

Airport location issues. The specific location of an airport is chosen taking into account a multitude of factors, each of which has certain interactions with wildlife:
1. The demand for a particular airport's services defines the types of aircraft accepted by the airport and thus affects the number and length of runways and the size of airport terminals, and therefore the physical size of the airport itself. The larger the airport, the greater the need for territory transformation and the larger the area seized from wildlife habitats. This seems obvious, but it exacerbates the above-described problem of open spaces.
2. Runway configuration affects the choice of airport placement in terms of the possibility of building intersecting or parallel runways. The better option for an efficient airport is parallel runways, but they need 30% more territory. Under such conditions more populations and habitats will be affected. Moreover, intensive traffic and big aircraft create an additional cumulative deterrent effect on animals due to increased noise and the movement of huge objects.
3. Altitude affects the diversity and composition of biota in the area of airport location. When an airport is located at higher altitudes it demands a longer runway, while the area available for construction is limited. As a result, the needs of local wildlife are not accounted for in the decision-making process.
4. Climate conditions, in particular humidity and temperature, are the most important factors in terms of the species composition typical of the territory of airport placement. Local variations in prevailing winds affect bird migration and food relocation. From a more local point of view, the combination of climatic parameters defines how attractive the airfield will be for animals and which of them are attracted. For instance, the airfield may be drier or, vice versa, richer in greenery compared to the adjoining territory, and thus attract certain species and favor or threaten their survival, depending on the level of animal control activity at an airport.
5. Topography is important for the choice of airport placement, as it requires flat relief. If the latter is created by human intervention, this heavily affects local habitat quality and microclimate conditions, which finally leads to the transformation of the local biocenosis.
6. Environmental considerations are partially accounted for in the choice of airport location, and normally airports are located away from sensitive areas.
However, if an airport was built a long time ago, its location could have been chosen without regard to the specifics of local ecosystems, as they were not known.
7. Adjacent land uses affect the activity and expansion of an airport. If there is a choice of territory to be added to an airport, then the competition between valuable agricultural land and natural areas may not favor the natural zones.
8. Operational processes, namely aircraft flight, are affected by the presence of certain obstructions, like mountains, hills, and heavily built-up areas. Additionally, flying over residential areas may be limited to certain hours by noise restrictions. The same may apply to flying over protected areas; however, this does not cover natural ecosystems without protected status.
9. The intensity of flights is an important issue in terms of the limits of available airspace and constraints on new airport operations. The same factor affects the risks of bird collisions and increases the pressure on living organisms from a busy sky producing noise and pollution. This is especially true for those metropolitan areas served by several airports with overlapping airspace, as in London, Moscow, San Francisco, Paris, New York, Seoul, Tokyo, Shanghai, and Washington.

Airport ground access. A crucial feature of civil aviation infrastructure is that an airport must be accessible to the communities it serves. As a result, any airport needs dedicated linear infrastructure, namely highways, railways or subway lines, to provide access to it. The connecting infrastructure to airports located more than 10 km outside cities has a range of serious implications for wildlife. Roads dissect pristine natural areas and cause habitat fragmentation, a common problem of modern times. Traffic intensity is the decisive parameter here: a highway carrying more than 10,000 vehicles per day becomes an impermeable barrier for almost all species [4]. If an airport is located less than 5 km from an urban area, its effect is not very profound, as the territory within the suburban zone has already been changed and sensitive animals have moved away. Our research has shown that the diversity of the biocenosis in suburban areas is at least 2.5 times lower than in comparable territories without human intervention. This is especially true for railway and metro connections. Highways give rise to the same problems, but they are also associated with a higher incidence of wildlife-vehicle collisions, which put both people and non-human animals at risk. Basically, fragmentation affects all types of land animal movement: moving in search of food and shelter, mate search and territory care. This puts animals under lethal risks due to starvation and lack of inter-individual communication or protection from predators. Seasonal migrations and the availability of land for young animals are also affected, with consequences for genetic diversity [6]. Additionally, the impacts of roads, as well as of airport facilities, spread far beyond the immediate borders of the facility, creating an edge effect, whereby wider areas around and along these objects are uncomfortable for animals due to pollution and changes in plant diversity, distorted by the intrusion of ruderal and alien species. This threatens the food reserves of animals and imposes risks, as some of the newcomers can be dangerous for animals, cause allergic reactions in humans and create problems for agricultural fields.
Injuries to animals attract less attention and are less noticeable, but mortality is a problem visible to anybody. Unfortunately, there is no good method to mitigate this problem, and all of those applied have serious shortcomings. For instance, the major method of reducing the possibility of collisions with animals is setting fences along the road, but this aggravates the fragmentation effect and ruins the connection between separated areas for all animals except birds. Another example is cutting tall trees along roads to keep wildlife away from the road edge; this makes it possible to avoid serious collisions with large animals, but simultaneously increases habitat destruction, the edge effect and the loss of food and shelter for birds. Most airports possess two types of ground access: a highway in combination with a metro or railway line. Placing two or more forms of transport infrastructure along the same corridor (in immediate vicinity) may be positive for some species, since only one barrier is created. But, as in the case of fences, such a solution increases the barrier effect for other species [13]. Separate attention should be paid to birds. The primary concern is the minimization of collision risks, which is handled by the ornithological control department of airports. The use of deterring methods has a negative impact on birds, but it is considered acceptable compared to the possibility of aircraft accidents. Nevertheless, there is also a problem of collisions with airport towers. This results from temporary meteorological conditions that either reduce visibility or lower the cloud ceiling, hiding the stellar cues birds use for orientation [4]. On the other hand, birds are not directly disturbed by the barrier effects of airport facilities and ground access roads. However, the indirect impacts can be considerable, as their food base can shrink; they can also be limited in their habitat area and lack shelter and nesting places due to deforestation.

Methods and materials. The physical factors of airports differ in nature and need to be evaluated using different approaches. The intensity of physical fields was measured using special equipment. The light pollution was measured via standard photometry with a Yu-116 luxmeter and complemented with visual observations. The level of vibration was measured with a Wintact wt63B vibrometer, which was placed on bare soil rather than on solid covers like concrete or asphalt. Electromagnetic field intensity was tested with the П3-31 electromagnetic field meter, which is used to detect and control biologically hazardous electromagnetic radiation and works within the high-frequency ranges typical of airports. In all cases, measurements were conducted around the airport, outside its industrial area, at points in four geographical directions from the airport where there are no artificial structures and the plant cover is well preserved. The measurements were conducted in summer 2020. A serious issue for the assessment is the absence of any regulations or threshold values for physical impacts on animals. In order to process the results obtained in this research, experimental results from open-access publications were used.

Research Results

Traditionally, airports are analyzed in terms of the noise pollution they produce. In this research we decided to cover the other important components of physical pollution: light, vibration and electromagnetic fields.

Light pollution from airports.
Due to the peculiarities of aviation service provision, intensive illumination at airports is a matter of safety and control over operations. Illumination in the vicinity of Kyiv Boryspil and Kyiv Sikorsky airports was measured with the standard luxmeter in summer and autumn 2020. The values varied from 690 to 1125 lux, depending on the location (the highest were at the international terminal entrance and airfield facilities). As a result, the level of light pollution at night around the busiest airports is almost equal to the level of light at sunrise; this phenomenon of over-illumination is typical of all airfields. For those airports located in the vicinity of settlements, a specific problem is light trespass, which affects the living activity of people in the adjoining areas. The light pollution from distant airports is better described as clutter: excessive groupings of light sources which confuse organisms and distract them from obstacles, leading to accidents. Light pollution impacts wildlife by complicating orientation in space, changing intraspecific interactions, altering predator-prey relations, and affecting animal physiology. But the primary effects of light pollution are observed in plants, whose living processes are extremely dependent on light and cycles of illumination. The most prominent consequences of exposure to light pollution are disruptions of flowering and developmental patterns: plants fail to start flowering, to defoliate and to enter dormancy on time [3]. The resulting winter damage to plants and the reduced reproduction of vegetation species threaten animals with food shortage and lack of shelter. Birds are actually the most affected by airport light pollution, as it disrupts normal navigation, circadian rhythms and mating processes. Insects, which make up a considerable part of birds' diets, are also strongly affected by airport illumination, but this may have a double effect on bird populations: airports attract food for birds in this way, so birds penetrate the territory of airports in search of easy food and thus increase the incidence of both injuries and accidents. Hydrobionts in water bodies within the airport impact area can also be affected by light pollution; in particular, over-illumination of the water surface prevents zooplankton, such as Daphnia, from eating surface algae, which eventually contributes to algal blooms and the elimination of fish and water plants due to reduced water quality [11]. Finally, it must be noted that light pollution is also a powerful deterrent for nocturnal animals and forms a sort of non-material barrier, contributing to habitat fragmentation. This is especially true for ground access roads.

Vibration effects on wildlife. Vibration caused by an airport is rarely considered a serious negative environmental factor. However, the intensity of vibration from a landing aircraft or a working engine is considerable enough to be felt by a living organism. Moreover, the sources of vibration at airport facilities are tightly bound to the sources of noise formation: high-level, short-term sources of vibration are run-ups, engine start-ups, take-off and landing, and thrust reversers; high-level, long-term sources of vibration are taxiing and idling, working auxiliary power units, maintenance equipment, and ground access transport.
The highest impact on wildlife arises from long-term sources, as they create a hazardous background for living organisms in the airport area. Still, it is necessary to account for the attenuation of vibration over the distances at which wildlife can be found: research shows that vibration velocity decreases by at least an order of magnitude over 100 m [5]. Vibration plays a considerable role in the lives of the whole spectrum of wildlife, from the simplest to the most complex organisms, as alongside sound it is involved in such vital processes as communication, inter-individual relations (especially mating and parent-young relations), population processes (territory occupation), interspecies (predator-prey) relations, foraging and food storage, survival strategies, etc. [7]. Nevertheless, vibration effects are highly understudied, although certain facts are known from simple observations, such as the responses of various domesticated and other species prior to and during earthquakes. The impact of vibration on an organism depends on a whole range of equally important factors, such as amplitude, frequency and acceleration rate. Yet one of the crucial parameters when studying biological effects is resonance frequency. Since biological systems possess a certain tolerance to factors of influence, a few ranges of near-resonance frequency effects are used to characterize the possibility of physiological disorders: the resonance frequency range (RFR), the range with the highest potential for the most adverse physical effects, and the sensitivity frequency range (SFR), the levels at which vibration is still perceived and may cause distress. These values are poorly known for most wildlife species; those that are known exist mostly for domesticated or highly synanthropic species [9]. For instance, resonance frequencies for rats are 27-29 Hz (abdomen), 225-230 Hz (thorax), and 75-80 Hz (head) [14]. In piglets, vibration sensitivity manifests in increased stress hormones and behavior alterations at an acceleration of 1 m/s² and a frequency of 2-18 Hz (in the case of whole-body vibration) [10]. Similar values caused avoidance behavior in chickens [1]. Among other adverse effects, alterations of cardiovascular processes, fertility decline, stress and aversion, and other neural and muscle alterations in mice, rats, pigs, dogs and rabbits can be mentioned, as well as mortality in mice (at extreme values of 10-25 Hz and more than 140 m/s² in the case of whole-body vibration for 5-10 minutes). Alternatively, it has also been shown that exposure to vibration can have potentially positive implications for organisms. Examples are the same mice and rats, which exhibited, among other things, increased fat and bone formation, decreased bone volume loss, alterations in serotonin levels, improved metabolism, improved healing, etc. [12]. Yet such therapeutic effects are possible only at certain specific vibration values in highly controlled environments, and it is highly likely that the most common reaction to vibration would be aversion (although in certain a posteriori laboratory research [15] mice responded to earthquake vibrations with decreased activity).
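The cited attenuation rule (at least one order of magnitude per 100 m [5]) can be sketched as a simple upper bound. The exponential-decay form below is an assumed model for illustration, not the formula used in reference [5]:

```python
# Hedged sketch of the cited attenuation rule: vibration velocity
# drops by at least one order of magnitude per 100 m [5]. The
# exponential-decay form used here is an assumption for illustration.
def attenuated_velocity(v0_mm_s, distance_m, decade_distance_m=100.0):
    """Upper-bound vibration velocity at a given distance from the source."""
    return v0_mm_s * 10 ** (-distance_m / decade_distance_m)

for d in (0, 50, 100, 200):
    # 10 mm/s at the source: 10.0, ~3.16, 1.0, 0.1 mm/s respectively
    print(d, "m:", round(attenuated_velocity(10.0, d), 4), "mm/s")
```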
Measurements of aircraft vibration show that it lies within the range of 216-256 Hz at a level equivalent to 92 dB [8], while the vibration measured with the standard vibrometer during aircraft landing outside the airport territory was quite low, corresponding to a level of 13 dB within the frequency range of 8-16 Hz. At the given levels, the possible effects are avoidance behavior, stress (increased blood cortisol concentration) with digestive disorders, and reduced fertility; these assumptions are based on the results of testing under laboratory conditions.

Effects of electromagnetic fields on wildlife. The sources of electromagnetic fields (EMF) on the territory of an airport are diverse and numerous. The most prominent emitters are radiolocation and navigation equipment (screens and antennas), control towers, battery and transformer stations, as well as other electrical equipment. Measurements conducted by the research group at a range of Ukrainian airports (Kyiv Sikorsky, Kyiv Boryspil, Odesa) show that the levels of EMF created by the above-mentioned sources stay within the hygienic standards (they normally fall within the range of 1.2-1.6 V/m), or radiation attenuation reduces the elevated levels (the highest values under antennas are from 32 to 60 V/m) to acceptable ones. However, the sensitivity of wildlife to EMF is higher than that of humans. Thus, studies show that nesting density, the number of young, and the overall population density of birds decrease by 50-80% in areas exposed to electromagnetic field strengths of 3-3.5 V/m [2]. Laboratory mammals (rats, mice and rabbits) demonstrate behavioral disruption (active avoidance, panic reactions, disorientation and a greater degree of anxiety) even under the influence of a power density as low as 0.1-0.4 mW/cm² at 1.5 GHz. Reproductive disorders are also common among mammals exposed to the high-frequency EMF typical of airports, including miscarriages, a progressive drop in the number of pregnancies, embryo defects and spatial memory impairment [2]. The aggregated effects bring populations to dangerously low numbers and threaten their extinction without signs of a direct increase in mortality. Moreover, animals have no mechanisms for reacting to EMF of artificial origin, as it is a new environmental factor, and they stay within the exposure area, exacerbating the negative trends in the population. Unfortunately, because the EMF parameters formally meet sanitary standards, there is no possibility of introducing any additional protective measures.

Discussion of the obtained results. Among the known negative airport impacts on the environment, physical factors other than noise pollution are underestimated. The intensity of these factors and the magnitude of their consequences depend on the exact location of an airport: enterprises located in close proximity to an urban area produce lower individual effects, because such territories have already been damaged by settlement activity, and in that case the exact effects of airports cannot be separated from those of urban areas. Another important issue is the absence of any threshold values appropriate for drawing conclusions about the real magnitude of negative impacts. The given research relies on data obtained under controlled environmental conditions, which makes the possible range of deviation under natural conditions quite wide.
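To make the comparison between measured values and wildlife-relevant levels explicit, the following Python sketch aggregates the figures reported above (illumination in lux, EMF in V/m) and flags the factors whose measured levels reach the sensitivity values cited for wildlife. It is an illustrative aggregation of the numbers in this text, not part of the original study's processing; the natural night illumination reference is an assumed value, not taken from this study.

```python
# Illustrative aggregation of the measured values and wildlife
# sensitivity levels cited in the text; not the authors' pipeline.
# Natural night illumination (~0.3 lux under moonlight) is an
# assumed reference value, not taken from this study.

measurements = {
    # factor: ((measured range around airports), wildlife-relevant level)
    "light, lux": ((690.0, 1125.0), 0.3),      # vs. moonlit night (assumed)
    "EMF, V/m": ((1.2, 1.6), 3.0),             # bird effects at 3-3.5 V/m [2]
    "EMF under antennas, V/m": ((32.0, 60.0), 3.0),
}

for factor, ((low, high), threshold) in measurements.items():
    status = "exceeds" if high >= threshold else "below"
    ratio = high / threshold
    print(f"{factor}: measured {low}-{high}, "
          f"{status} wildlife-relevant level ({ratio:.1f}x)")
```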
Conclusions

1. Airports' activity gives rise to a wide variety of environmental externalities for which only humans are usually considered the injured party, but wildlife in the areas adjoining airport facilities is also seriously affected.
2. The analysis of airport facility structure showed that there are two groups of physical disruptors: physical bodies (airfields and ground infrastructure) and physical processes, such as vibration and the propagation of light and electromagnetic fields.
3. The airfield as an open space may both attract and repel wildlife, leading to increased mortality or the relocation of animals. The same set of problems is created by ground access roads and parking areas. The most prominent impact of all the structural objects of airports and highways is the instant and gradual destruction of habitats through, respectively, the construction of facilities and the fragmentation of natural ecosystems.
4. Light pollution from airports is very intensive and can provoke a range of negative effects on mobility, nutrition, reproduction, physiological processes and biorhythms for all groups of wildlife. Measurements in the airport areas (Kyiv Boryspil airport and Kyiv Sikorsky airport) at night show values over 1000 lux, several orders of magnitude above natural illumination.
5. Vibration was measured to be quite insignificant outside airport territories, but animals are known to be more sensitive to vibration, and laboratory experiments demonstrate a range of behavioral disorders among animals subjected to its constant effects.
6. EMF at the airports turned out to be low enough to meet the requirements of sanitary standards. However, as with the other physical factors, animals are more sensitive to the low-level EMF that accompanies airport activity, and numerous research works prove animal health risks from exposure to EMF.
7. The predicted negative consequences of airport activity for living organisms are derived from laboratory experiments and must be supported by field data, which are currently unavailable and will be the subject of the next stage of research.
8. Methods and equipment used to prevent animal contact with sources of hazard at airports and access roads are often themselves a source of additional negative pressure and need to be improved. However, light pollution and fragmentation, the most significant consequences of airports' physical impacts, can be efficiently mitigated without causing harm to animals. Simultaneously, there is a need to reconsider the necessity and parameters of buffer zones around airport territories in order both to safeguard technological processes and to reduce the negative impacts of physical factors on animals. This task, however, still lacks reasonable solutions due to habitat fragmentation issues.
The effect of reproductive state on activity budget, feeding behavior, and urinary C-peptide levels in wild female Assamese macaques

The source of maternal energy supporting reproduction (i.e., stored or incoming) is an important factor determining different breeding strategies (capital, income or mixed) in female mammals. Key periods of energy storage and allocation might induce behavioral and physiological shifts in females, and investigating their distribution throughout reproduction helps in determining vulnerable phases shaping female reproductive success. Here, we examined the effects of reproductive state on activity budget, feeding behavior, and urinary C-peptide (uCP) levels, a physiological marker of energy balance, in 43 wild female Assamese macaques (Macaca assamensis). Over a 13-month study period, we collected 96,266 instantaneous records of activity and 905 urine samples. We found that early lactating females and non-gestating-non-lactating females follow an energy-saving strategy consisting of resting more at the expense of feeding and consuming mostly fruits, which enhanced their energy intake and feeding efficiency. We found an opposite pattern in gestating and late lactating females, who fed more at the expense of resting and consumed mostly seeds, providing a fiber-rich diet. Storing food in the cheek pouches increased throughout gestation, while it decreased throughout lactation. Lastly, we found the highest uCP levels during late gestation. Our results reflect different feeding adaptations in response to the energetic costs of reproduction and suggest a critical role of fat accumulation before conception and of fat metabolization during gestation and lactation. Overall, our study provides an integrative picture of the energetics of reproduction in a seasonal species and contributes to our understanding of the diversity of behavioral and physiological adaptations shaping female reproductive success.

To offset their substantial energetic investment in reproduction, mammalian females may modify their behavior and the way they extract energy from their environment. In addition, as a result of heightened energy expenditure, female reproduction might trigger physiological shifts. To date, most studies have investigated the energetic costs of female reproduction using either a behavioral or a physiological approach. To arrive at a more comprehensive picture, we combined behavioral data with a physiological marker of energy balance, i.e., urinary C-peptide, in a seasonal primate species in its natural habitat. Our results indicate that throughout the reproductive cycle, behavioral and physiological adaptations operate concomitantly, inducing modifications in female activity budget and feeding behavior and suggesting shifts in fat use. Overall, our results illustrate the relevance of combining data on behavior and hormones to investigate breeding strategies for coping with the energetic costs of reproduction.

Introduction

In comparison to males, female mammals bear most of the energetic costs of reproduction, as they have to meet the nutritional requirements of gestation and lactation (Gittleman and Thompson 1988; Prentice and Prentice 1988). This major maternal investment allocated to reproduction can lead to a negative energy balance, i.e., energy expenditure exceeding energy input (Hall et al. 2012), with subsequent drastic repercussions on the mother's fitness. For example, females lose weight during lactation in numerous species (humans: Guillermo-Tuazon et al.
1992; Columbian ground squirrels (Spermophilus columbianus): Neuhaus 2000; domestic pigs (Sus domesticus): Tantasuparuk et al. 2001; bonnet macaques (Macaca radiata): Cooper et al. 2004). This energy deficiency can make reproducing females more vulnerable when facing adverse situations, such as food shortages, infections, or injuries (Festa-Bianchet 1989; Gould et al. 2003; Archie et al. 2014; East et al. 2015). As a result of the energetic expense, reproducing females may decrease their investment in future reproduction or may forego it altogether (Koivula et al. 2003). As stated by life history theory (Stearns 1992), reproduction is, therefore, a trade-off between current reproductive investment and future fitness (Clutton-Brock et al. 1989; Koivula et al. 2003; Altmann and Alberts 2005; Festa-Bianchet et al. 2019). Various adaptations evolved in response to the energetic load of reproduction to minimize its fitness-threatening consequences. Reproducing females may modify their activity budget, for example by increasing the time they spend resting in order to save energy and offset the costs of reproduction (Goldberg et al. 1991; Barrett et al. 2006; Gamo et al. 2013). Additionally, the increased energy requirements of reproduction can be met through modifications in female feeding behavior. Three main feeding strategies to maximize intake have been described: females can feed longer, faster, or more selectively (Lee 1987). A female can devote more time to feeding, at the expense of other behaviors, in order to support her current requirements (Dunbar and Dunbar 1988). Besides increasing feeding time, a female can also increase her feeding efficiency, i.e., the nutrient intake per unit of feeding time. To do so, a female can either increase her ingestion rate for the same food (Schülke et al. 2006) or select food with higher nutrient density, modifying diet composition toward specific nutrient requirements (Mellado et al. 2005). These three feeding strategies have been observed in gestating and/or lactating females in various mammalian species (Logan and Sanson 2003; McCabe and Fedigan 2007; Clutton-Brock et al. 2009). Overall, reproducing females increase their energy intake, reduce their energy expenditure, or both, depending on whether their strategy consists of building up fat stores or reinvesting incoming energy immediately into reproduction. Another major adaptation to cope with the energetic costs of reproduction relies on the timing of reproductive events relative to variation in food abundance. Mammals living in habitats with seasonally fluctuating resources show temporal patterns in the scheduling of breeding events (Jönsson 1997; introduced first in birds: Drent and Daan 1980). At one extreme of the seasonal breeding spectrum are capital breeders that rely on endogenous condition, building up fat stores during the rich season before they breed and using that maternal capital to invest in offspring during the lean season (e.g., bighorn ewes (Ovis canadensis): Festa-Bianchet et al. 1998). At the other extreme, income breeders mate prior to the peak of food availability to synchronize lactation with the period of the highest environmental energy supply, so that income can be immediately reinvested in the offspring (e.g., Antarctic fur seals (Arctocephalus gazella): Oftedal et al. 1987). These strategies are not mutually exclusive, as some species use a mixed strategy when timing their reproductive events (Koenig et al. 1997; Wheatley et al. 2008).
Investing or not investing in reproductive effort depending on current and/or future food resources is thus an adaptation to maximize reproductive success by offsetting the costs of reproduction with stored and/or incoming energy sources. In contrast to strict income or capital breeders, relatively little is known about the suite of adaptations to the energetic costs of reproduction in mammals following a mixed breeding strategy. With our study, we aim to contribute to filling this gap. Mixed breeding strategies have been described in several primate species (Brockman and van Schaik 2005), making primates a suitable model for our study on the energetics of mammalian female reproductive strategies. Primates are also interesting in this respect as they experience comparatively high costs of reproduction, which should be reflected physiologically and behaviorally. Reproduction is particularly costly in primates, as females give birth to well-developed and large-brained neonates relative to maternal body size and birth weight, making infants energetically more expensive to produce compared to non-primate taxa (Bennett and Harvey 1985). Once born, neonates require considerable maternal investment before the main phases of brain development are achieved and before the infant reaches a critical mass to ensure independence at weaning age (Lee et al. 1991; Martin 1996). To account for these costs, gestation and lactation are longer in primates, with slower rates of fetal and neonatal growth compared to other mammals of similar body size (Dufour and Sauther 2002). Spreading gestation and lactation over time lowers the daily energetic load and allows more metabolic and physiological flexibility when coping with reproduction and its energy requirements (Payne and Wheeler 1968; Dufour and Sauther 2002). Still, the costs of reproduction are substantial in primates, especially during lactation, which is associated with important maternal energy constraints to support milk production and infant carrying (Tardiff 1997; Hinde and Milligan 2011). Above all, the extended durations of gestation and lactation in primates allow the investigation of potentially critical periods, in terms of energetic costs, within reproductive states. The energetic costs of female reproduction are reflected physiologically and can be assessed from concentrations of urinary C-peptide of insulin (uCP), a non-invasive marker of energy balance in primates (Sherry and Ellison 2007; Deschner et al. 2008; Emery et al. 2008; Girard-Buttoz et al. 2011; Fürtbauer et al. 2020). C-peptide originates from the pancreas, where it is cleaved from the inactive proinsulin and released into the bloodstream together with active insulin. Thus, measuring C-peptide concentration in urine provides an indirect assessment of insulin production (Melani et al. 1970; Meistas et al. 1982). The few studies linking uCP and female reproduction in primates found uCP levels to be unchanged across the reproductive cycle (Grueter et al. 2014; Bergstrom et al. 2020), higher during gestation (McCabe and Emery Thompson 2013; Nurmi et al. 2018; Fürtbauer et al. 2020), or lower during lactation (Ellison and Valeggia 2003; Emery Thompson et al. 2012). Female energy balance, therefore, seems to be affected differentially by reproductive states across species. The discrepancy among results might be explained by distinct breeding strategies and associated differences in behavioral adaptations throughout the reproductive cycle.
Combining uCP levels and behavioral measures of energy balance helps provide a comprehensive picture of the different ways females cope with reproduction. This may lead to a better understanding of the interrelationship between behavioral and physiological adaptations related to female reproductive energetics across the various reproductive strategies. Here, we investigate the coping mechanisms in the face of the energetic costs of female reproduction in a wild primate population following a mixed breeding strategy. To do so, we compared potential behavioral and physiological differences both between and within reproductive states. To our knowledge, to date only one study has investigated the effect of female reproductive state on both behavioral and uCP measures (Cano-Huertes et al. 2017). Such an integrated approach not only sheds more light on behavioral adaptations when coping with gestation or lactation but also provides an estimation of the physiological costs of reproduction by inspecting whether or not gestation and lactation are associated with changes in energy balance. Assamese macaques (Macaca assamensis) are reproductively seasonal and follow a mixed breeding strategy, as they have been classified as relaxed-income breeders, with food abundance prior to the mating season mediating conception rate and the birth peak occurring during a period of high food availability (Brockman and van Schaik 2005; Heesen et al. 2013). Although 79% of births occur in a 3-month period (JO and OS, unpublished data), in a given year births often are spread over five months. The habitat does not allow females to reproduce every single year. Females who conceive early in the year are able to reproduce again late in the following year; otherwise, they typically skip a year (Fürtbauer et al. 2010). In any case, the interbirth interval is longer than 12 but shorter than 24 months, which prohibits a strict income breeding strategy of aligning peak needs with peak resource abundance. As a result, in mixed breeders such as Assamese macaques, strategies to support reproduction cannot be perfectly tied to the environment but rather to the respective reproductive schedule of each female. The study population of Assamese macaques offers two advantages for the purpose of our study. First, as a consequence of their mixed breeding strategy and the lack of strong synchrony in conceptions during the mating season, reproductive events and seasonality in food availability are not completely confounded. This is particularly well illustrated by visualizing the temporal succession of reproductive states for each female throughout a year (Fig. 1). Note, for example, that all possible reproductive states overlap in November, December, and May (gestating, lactating, and non-gestating-non-lactating), allowing us to study the energetic costs of reproduction in a seasonal primate species while controlling for ecological factors. This is critical since food availability induces changes in feeding behavior (Knott 1998; Sherry and Ellison 2007; Tsuji et al. 2008; Harris et al. 2009; Heesen et al. 2013; Lambert and Rothman 2015). Second, cercopithecine primates, such as macaques, have cheek pouches used for storing intact food items for later consumption. Filling the pouches is a feeding strategy that ensures rapid harvesting of a large quantity of food items that can be carried, processed, and ingested away from co-feeding sites, thereby increasing feeding efficiency (Lambert 2005).
The use of cheek pouches has been investigated particularly with regard to feeding competition avoidance in primates (Lambert 2005; Buzzard 2006), including the study population (with increased cheek pouch use in low-ranking females; Heesen et al. 2014). Surprisingly, to date very few studies have examined a potential effect of female reproductive state on cheek pouch use. Using cheek pouches to store valuable, contestable food items may be a coping mechanism to offset, for example, the substantial costs of lactation (Hayes et al. 1992) in the face of feeding competition. We investigated the coping mechanisms used to offset the energetic costs of different female reproductive states in Assamese macaques by comparing different metrics. We first investigated female activity budgets, as reproductive-state-dependent trade-offs in activity have been observed in female primates (Muruthi et al. 1991). Second, we examined how females spend their feeding time in terms of diet composition and feeding efficiency across reproductive states. Specifically, we quantified female diet composition in order to evaluate whether females feed more selectively at certain stages of their reproductive cycle by selecting specific nutrients from their habitat, such as digestible carbohydrates (e.g., fruits; Murray et al. 2009) and/or proteins (Herrera and Heymann 2004; Miller et al. 2006). Additionally, we considered the proportion of time females used their cheek pouches as a complementary feeding strategy to store specific and valuable nutrients. Third, we further dissected the nutritional impact of diet by calculating female energy and protein intake to capture potential differences between reproductive states (Muruthi et al. 1991; Miller et al. 2006). Relating energy intake to total feeding observations, we compared feeding efficiency between reproductive states (Muruthi et al. 1991; Serio-Silva et al. 1999). Lastly, we focused on female physiology and assessed female energy balance during the reproductive cycle using measurements of uCP levels. Controlling for the potential effects of ecological factors (fruit availability in the habitat) and travel distance (to account for potential behavioral and/or physiological shifts relative to physical activity), we tested the following predictions. We predicted that gestating and lactating females would exhibit different activity budgets than non-gestating-non-lactating females (prediction 1); lactating females in particular were expected to allocate more time to resting at the expense of feeding to conserve energy (Heesen et al. 2013). Gestating and lactating females were also predicted to change their diet composition to support their higher energetic requirements (prediction 2), feeding selectively by consuming more food items rich in sugar (e.g., fruits) and biasing their diet towards food items with high protein content (e.g., young leaves). We also expected gestating and lactating females to use their cheek pouches more than non-gestating-non-lactating females (prediction 3). We predicted that gestating and lactating females would have higher energy intake (prediction 4) and higher protein intake (prediction 5) than non-gestating-non-lactating females. Additionally, we expected gestating and lactating females to optimize their energy intake per unit of feeding time and therefore to be more efficient than females outside of gestation or lactation when feeding in general (prediction 6) and when feeding on proteins (prediction 7).
Finally, we expected to find the lowest uCP levels in lactating females, as their behavioral strategies to overcome the lactation load would not preclude them from losing weight (prediction 8; Heesen et al. 2013). To account for variation in reproductive costs within a reproductive state, we divided the gestation and lactation periods into early and late phases of equal length. We expected the second phase of gestation (Durnin 1991) and particularly the first phase of lactation, which comprises the peak of lactation in our study population (Berghänel et al. 2016), to be the costliest periods. Thus, the patterns predicted above should be stronger in the early phase of lactation and the late phase of gestation, and early lactation should have the strongest effect of all reproductive states. Overall, as a consequence of their mixed breeding strategy, female Assamese macaques should go through periods of energy storage, energy saving, and energy loss, and we aim to understand the timing of these periods relative to reproduction.

Study site and subjects

The study was conducted at Phu Khieo Wildlife Sanctuary (PKWS) in Northeastern Thailand (16° 05′ to 16° 35′ N, 101° 20′ to 101° 55′ E). The sanctuary covers an area of more than 1650 km², and the study site (Huai Mai Sot Yai, 16° 27′ N, 101° 38′ E, 600 to 800 m above sea level) comprises dry evergreen forest with patches of dry dipterocarp forest and bamboo stands (Borries et al. 2002). The vegetation is dense, the terrain is hilly, and the habitat exhibits two distinct seasons, with a rainy season from March to October and a dry season from November to February (Richter et al. 2016). The mean annual temperature is 21.2 °C, with daily minimum temperatures ranging from 5.4 °C in the dry season to 23.2 °C in the rainy season (Borries et al. 2011; Richter et al. 2016). The annual rainfall averages 1140 mm (Borries et al. 2011). The study covered a 13-month period (July 2017 to July 2018) and was conducted on 43 adult females belonging to three neighboring study groups. One adult female died in September 2017. At the onset of the study, each group was composed of several adult males (9, 6, and 3, respectively), several adult females (20, 13, and 9, respectively), and a large number of immatures. Females in the study population breed seasonally, with a mating season from October to February and a birth season from April to July (Fürtbauer et al. 2010). A female was considered adult at the onset of the mating season of her first conception (which usually occurs at about 5.5 years of age). Three females that were nulliparous at the onset of the study period (and therefore, strictly speaking, not yet categorized as adult at this time prior to the mating season) were included as focal "adult" females, as they were about to conceive for the first time (and to become adult) during the 2017-2018 mating season. Two old females were excluded from the study, as they had not conceived for at least the last 6 years and thus seemed to have become reproductively inactive.

Reproductive states

We considered five reproductive states: early gestation (EG), late gestation (LG), early lactation (EL), late lactation (LL), and a non-gestating-non-lactating state (NGNL). Lactation lasts a year, and we differentiated EL (first 6 months) from LL (last 6 months), as we observed more frequent infant suckling during the daytime in EL (Berghänel et al. 2016), making this early phase of lactation a potentially more energetically costly period for the mother.
Behavioral data collection

We followed a group for several days in a row (a "sampling block"). As the number of focal females differed substantially between groups, we accordingly attributed different sampling effort to each of the three groups (on average 16, 13, and 8 consecutive sampling days per respective group, from the largest to the smallest). The interval between two consecutive sampling blocks for the same group was on average 28.5 days (range: 18 to 48 days). We evenly distributed those sampling blocks for each group across the study period (9 to 10 per group). Behavioral data were collected on female subjects from dawn to dusk and from sleeping tree to sleeping tree (average ± SE: 8.1 ± 0.4 focal animal sampling hours collected per day). It was not possible to record data blind as our study involved focal animals. We performed 40-min focal animal sampling during which instantaneous records were collected at 2-min intervals (96,266 data points in total and 232.5 ± 4.8 (average ± SE) data points per female and sampling block). An effort was made to evenly distribute focal sampling across females and time of day. During the instantaneous data collection, we recorded the focal female's activity, i.e., whether she was feeding (ingesting, chewing food), traveling (vertical or horizontal locomotion), resting (being stationary and not doing anything else), or involved in a social activity (grooming, being groomed, mating, etc.). Assessing such a complete activity budget allows the determination of potential trade-offs between behaviors during the female reproductive cycle. For each instantaneous record, we also recorded whether the focal female was feeding from her cheek pouches (typically recognizable by the female pressing the outer part of her cheek with her hand or shoulder to move food items from the cheek pouch into the mouth) or not. Putting food into the cheek pouches was recorded as feeding, whereas feeding from the cheek pouches was not, to avoid redundancy.

Diet composition

For each feeding event, we recorded the food item ingested. Assamese macaques are mainly frugivorous, with fruits (pulp or full fruit) and seeds representing up to 59% of their diet (Heesen et al. 2013). Additionally, they also feed on animal matter (insects, spiders, mollusks, small reptiles, etc.), leaves, and more rarely on flowers, bark, mushrooms, and roots (Heesen et al. 2013). For each feeding event involving a plant food item, we recorded the species ingested if known (88% of cases) and the plant part (fruit, pulp, seed, leaf, or flower) together with its state of maturity.
We summarized per sampling block the proportion of all feeding records spent on consuming the main food item categories: fruits, seeds, young leaves, and animal matter, leaving the category "other items" for rarely consumed food items.

Ingestion rate, nutritional analysis, and energy intake calculation

From the instantaneous records, we used the relative frequency of feeding per female, among all other activities, which we multiplied by 630 min (i.e., the average active day length) to estimate the average duration of feeding on each plant food item in a day. When possible, we recorded ingestion rate data in order to assess the average quantity of plant food units (full fruit, seed, bite, handful of leaves, etc.) ingested per minute. Plant food items were collected, processed in the camp (to keep only the part consumed by the monkeys), and their respective units (the same as those used to calculate ingestion rate) were weighed to estimate the wet unit weight ingested per minute. Additional plant samples were kept frozen (− 19 °C) until transport to the Animal Nutrition Laboratory of the Department of Animal Science, Kasetsart University, Bangkok, where their nutritional content was measured. From the nutritional analyses, the proportions, in dry matter, of crude protein, crude fat, neutral detergent fiber (NDF), total non-structural carbohydrates, and ash were determined together with the percentage of moisture. Considering that carbohydrates and protein provide 4 kcal/g, fat 9 kcal/g, and NDF 3 kcal/g with 52% being transformed into energy (Conklin-Brittain et al. 2006; Sawada et al. 2010; Heesen et al. 2013), we were able to assess the energy content of each analyzed plant food item in kJ/g of dry matter (with 1 kcal = 4.184 kJ). We used the percentage of moisture to calculate the dry unit weight out of the wet unit weight. The dry unit weight of one unit was then multiplied by the respective food item's ingestion rate and energy content to get the energy yield (in kJ/min) of the item. In total, we were able to determine the energy yield for 43 different plant food items. We multiplied energy yield by the estimated duration (in min/day) during which the female was feeding on the respective item during a sampling block to obtain the energy intake (in kJ/day) coming from each important plant food item, which we further summed up to assess the energy intake for each female within each sampling block. We additionally followed the exact same procedure but focusing only on the protein part of each plant food item to get an approximation of the protein intake (in g/day) of each female within each sampling block. We considered our estimation of energy intake (and therefore protein intake) reliable when it was calculated from at least 60% of the total feeding observations per female and per sampling block. In cases where energy data were available for less than 60%, the respective female in the respective sampling block was discarded from the analysis on nutritional intake. In the remaining cases, our assessments of energy and protein intakes relied on about 75% of a female's feeding observations. For further details on our energy intake calculation, see Touitou et al. (2021). Among the 32 important plant food items consumed during the study period (items which respectively represented at least 5% of the total feeding observations in a sampling block), we were able to get the energy yield for 24 of them (i.e., 75%). The other 8 items were consumed for more than 5% of feeding time (range 5-15%) only during a single sampling block each.
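The chain of conversions above (nutrient proportions → energy content → energy yield → daily intake) can be condensed into a few lines. The sketch below uses hypothetical nutrient values for a single food item and is only an illustration of the arithmetic, not the authors' code.

```r
# Minimal sketch of the energy intake arithmetic (hypothetical item values).
# Nutrient proportions are per g dry matter; NDF contributes 3 kcal/g, of
# which 52% is assumed transformed into energy; 1 kcal = 4.184 kJ.
energy_content_kj_per_g <- function(carb, protein, fat, ndf) {
  kcal <- 4 * carb + 4 * protein + 9 * fat + 3 * 0.52 * ndf
  kcal * 4.184
}

item <- list(carb = 0.45, protein = 0.10, fat = 0.05, ndf = 0.30,  # dry-matter fractions
             moisture = 0.70,          # proportion of wet weight that is water
             wet_unit_g = 2.1,         # wet weight of one food unit (e.g., one fruit)
             units_per_min = 6)        # ingestion rate

dry_unit_g <- item$wet_unit_g * (1 - item$moisture)
kj_per_g   <- energy_content_kj_per_g(item$carb, item$protein, item$fat, item$ndf)
energy_yield_kj_min <- dry_unit_g * item$units_per_min * kj_per_g

feeding_min_per_day  <- 0.12 * 630    # share of feeding records on this item x active day
energy_intake_kj_day <- energy_yield_kj_min * feeding_min_per_day
```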
Feeding efficiency for energy and protein

In order to assess the amount of energy a female ingested per unit of feeding time within one sampling block, and thus her feeding efficiency, we divided energy intake by the number of feeding observations from which we calculated the respective energy intake (referred to as energetic feeding efficiency in the following). We also considered a female's feeding efficiency when ingesting plant protein matter (protein feeding efficiency). To do so, instead of energy intake, we used protein intake in our feeding efficiency calculation.

Dominance hierarchy

We established a female dominance hierarchy based on focal and ad libitum data of clearly submissive behaviors (bare teeth, give ground, make room) during decided agonistic interactions, i.e., spontaneous submission or aggression followed by submission by only one individual (Ostner et al. 2008). Ranks of all adult females were calculated as the standardized normalized David's score using the "DS" function of the "EloRating" package (Neumann et al. 2011) in R (version 3.5.3; R Core Team 2020).

Phenology

At the middle of each month, ecological data were collected through phenology records on 45 botanical plots (32 plots of 50 × 50 m and 13 of 100 × 100 m), covering a total area of 21 ha of forest. We monitored 673 trees (≥ 10 cm diameter at breast height (DBH)), shrubs, and climbers (≥ 5 cm DBH) of 55 different fruit species, including 94% of the 35 important fruit items for which we had energy data. Abundance of fruits (unripe, ripe, and old) was visually recorded using binoculars and scored on a logarithmic scale (1 = 1-9; 2 = 10-99; 3 = 100-999; 4 = 1000-9999; 5 = 10,000-99,999; Janson and Chapman 2000). Moreover, for each fruit species, density per hectare was calculated (based on all botanical plots). From the abundance and density data, we calculated a fruit availability index (FrAI) for each calendar month, using the following formula:

FrAI_m = Σ_{i=1}^{n} (A_i × D_i)

where FrAI_m is the total fruit availability index of the month m for n fruit species, A_i the mean fruit abundance score for species i, and D_i the mean density of species i. Using the FrAI values from two consecutive months, we were able to estimate a daily increase or decrease in FrAI between those 2 months. Thus, we assigned a fruit availability index for each day between the middle of two consecutive months with the following calculation:

FrAI_x = FrAI_m + x × (FrAI_{m+1} − FrAI_m) / n

where FrAI_x is the daily FrAI of the x-th day between months m and m + 1, FrAI_m is the FrAI of month m, FrAI_{m+1} is the FrAI of month m + 1, and n the total number of days between months m and m + 1 (i.e., n = 28, 30, or 31). As previously done in Heesen et al. (2013), we only based this index on fruit items, as Assamese macaques are mainly frugivorous (Schülke et al. 2011; Heesen et al. 2013) and we expected fruit items to have a prevailing role in a female's diet and energy balance. We attributed an individual FrAI to each female within a sampling block by averaging the daily FrAI values from the days during which this female had been observed within the sampling block (i.e., days for which we have feeding data for this female).
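A compact illustration of the two formulas, with made-up abundance scores and densities (not field data):

```r
# Minimal sketch: monthly fruit availability index and its daily linear
# interpolation between two mid-month censuses (hypothetical values).
frai_month <- function(abundance, density) sum(abundance * density)

frai_jul <- frai_month(abundance = c(2.1, 3.4, 1.2), density = c(5.0, 1.8, 9.6))
frai_aug <- frai_month(abundance = c(2.8, 3.0, 1.9), density = c(5.0, 1.8, 9.6))

# FrAI on day x of the n days separating the two censuses
frai_day <- function(x, frai_m, frai_m1, n) frai_m + x * (frai_m1 - frai_m) / n
frai_day(x = 10, frai_m = frai_jul, frai_m1 = frai_aug, n = 31)
```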
Travel distance

We recorded GPS data at the beginning and at the end of each 40-min focal sampling session and at the sleeping trees (GPS device: Garmin GPSMAP 64s). We calculated the shortest distance between two consecutive GPS records to assess the distance the group traveled on a given day. In a few cases (N = 36/363) where sleeping tree information was missing and/or fewer than five GPS coordinates were recorded during a day, the subsequent travel distance calculation was considered not reliable and no travel distance value was attributed to the respective day. For each sampling block, we attributed to each female the mean of the group travel distances during the days on which the respective female had been observed within the sampling block (i.e., days for which we have behavioral data for this female). On average, we used 25.8 ± 0.3 (average ± SE) GPS coordinates to calculate the distance traveled by a group in 1 day (range: 12-50). Our estimation of daily travel distance was independent of the number of GPS points collected per day (linear regression: t = 1.76, N = 283, P = 0.08).

Urine sample collection

We opportunistically collected urine samples throughout the day using disposable mini-pipettes. The first morning void was not collected. Urine contaminated by fecal matter was not collected either, as this could affect uCP measures (Higham et al. 2011a). We pipetted urine from clean disposable plastic bags placed underneath the urinating female or directly from vegetation. Urine was transferred into 2 mL Eppendorf vials and labelled with date, time of collection, and female ID. Right after urine collection, a few drops were pipetted onto the lens of a handheld refractometer (Atago PAL-10S) to record specific gravity for adjusting uCP concentrations (see below), after which they were pipetted back into the vial. The vial was sealed with Parafilm and the refractometer lens was cleaned with Kimtech wipes. The urine samples were kept out of sunlight and cooled in a Thermos flask containing ice cubes until they were stored at − 12 °C in a camp freezer for a few days. They were later transported and placed into a bigger freezer (− 19 °C) in a nearby village, where they remained until export on dry ice to the endocrinology laboratory for hormone analysis.

Urinary C-peptide analysis

Frozen samples were thawed at room temperature and we assessed urinary C-peptide of insulin (uCP) via enzyme immunoassay using a commercial C-Peptide ELISA kit from IBL International GmbH, Hamburg, Germany (RE 53011), which has been validated and used successfully for uCP measurements in other macaques (Girard-Buttoz et al. 2011; Higham et al. 2011a, b), as well as baboons (Fürtbauer et al. 2020). Prior to analysis, urine samples were diluted 1:2 to 1:20 (depending on C-Peptide concentration and urine volume available) with IBL sample diluent (RE 53017-20) to bring the concentration into the linear range of the standard curve (0.2-16 ng/mL). In a few cases, pure urine was used (1:1). Serial dilutions of urine samples resulted in displacement curves running parallel to the uCP standard curve, indicating no matrix interference. Assay sensitivity was 0.064 ng/mL, and this threshold value was assigned to samples for which the uCP level was below assay sensitivity, as done in several uCP studies (Deschner et al. 2008; Girard-Buttoz et al. 2011; Higham et al. 2011a). The inter-assay coefficients of variation (CV), calculated from high and low value quality controls assessed in each plate, were 11.1% and 15.3%, respectively (N = 68). The intra-assay coefficient of variation, calculated as the average of the individual CVs for all sample duplicates, was below 10%. To adjust for urinary concentration, we corrected uCP levels by the specific gravity (SG) of each sample (Miller et al. 2004; Anestis et al. 2009) and reported all corrected (corr.) uCP values in ng/mL corr. SG.
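The SG adjustment referenced above follows the standard correction of Miller et al. (2004); a minimal sketch is below. The population mean SG used here (1.015) is a hypothetical placeholder, not a value reported in this study.

```r
# Minimal sketch of the specific-gravity correction of uCP concentrations
# (Miller et al. 2004). conc is the raw assay value in ng/mL; samples with
# sg_sample below 1.002 are discarded because the denominator approaches zero.
sg_correct <- function(conc, sg_sample, sg_pop_mean = 1.015) {
  conc * (sg_pop_mean - 1) / (sg_sample - 1)
}

sg_correct(conc = 2.4, sg_sample = 1.008)  # 4.5 ng/mL corr. SG
```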
Highly diluted samples (SG below 1.002; N = 27) were discarded, as correction with very low SG values might overestimate uCP concentrations. In total, the uCP concentrations from 905 urine samples were used in this study, corresponding to an average of 2.2 ± 0.05 (average ± SE) urine samples per female and per sampling block.

Activity budget (model 1) and diet composition (model 2)

In order to investigate whether female reproductive state (5 different states) influenced the female's activity budget and/or diet composition, we fitted two linear mixed models (LMMs; i.e., with Gaussian error distribution; Baayen et al. 2008) in R (versions 3.5.3 to 4.0.3; R Core Team 2020) using the function "lmer" of the package "lme4" (versions 1.2-21 to 1.1-25; Bates et al. 2015) and "lmer" of the package "lmerTest" (version 3.1-3; Kuznetsova et al. 2017) for testing individual effects. Proportions of time allocated to each activity (4 categories: feeding, traveling, resting, social) as well as proportions of plant food items in the diet (5 categories: fruits, seeds, young leaves, animal matter, others) were calculated. To do so, for each female and within each sampling block, we (i) divided the number of instantaneous points recorded for each of the four activities by the total number of instantaneous points recorded, and (ii) divided the number of instantaneous points at which the female was feeding on each of the five respective categories of food items by the total number of feeding points. When a female's activity was unknown or when a female was feeding on an unknown food item, these observations were discarded, and therefore subtracted from the total number of instantaneous or feeding observations, respectively. All proportions calculated to describe a female's activity budget or her diet in a sampling block are not independent, as they sum up to one. Therefore, the activity budget and diet composition data were respectively modelled all at once, using two compositional models. In addition, proportions range between zero and one. To account for this particular nature of the response variables, we transformed both response variables using a centered log-ratio (CLR), which is the log of the ratio between the observed proportions and their geometric mean per observation period and which removes the range restriction (Xia et al. 2018). To account for the zeros in the dataset, and therefore the impossibility of implementing the CLR transformation in these cases, we first rescaled our proportions using the following formula, recommended for models to be fitted using a beta error distribution:

x′ = (x × (length(x) − 1) + 0.5) / length(x)

where x is the observed proportion and length(x) our sample size (N = 1612 or 2015 for the activity budget or diet composition analysis, respectively; Smithson and Verkuilen 2006).
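Both transformations are one-liners; the sketch below applies them to one illustrative composition (hypothetical proportions). Note that in the study length(x) refers to the full sample size, which is passed explicitly here as n.

```r
# Minimal sketch of the zero-avoiding rescaling (Smithson & Verkuilen 2006)
# and the centered log-ratio (CLR) transform.
rescale <- function(x, n) (x * (n - 1) + 0.5) / n   # n = total sample size

clr <- function(p) log(p / exp(mean(log(p))))       # log ratio to geometric mean

props <- c(feeding = 0.42, traveling = 0.18, resting = 0.31, social = 0.09)
clr(rescale(props, n = 1612))
```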
We used the CLR-transformed proportions accounting for activity budget (model 1) and diet composition (model 2) as compositional response variables in our two LMMs. We included as fixed effects activity (4 categories, in model 1) or food item (5 categories, in model 2) and its respective two-way interaction with reproductive state, which accounted for our main hypothesis, namely, that female activity budget and diet composition vary with reproductive state. In addition to group, we controlled for the potential effects of FrAI and travel distance (to account for fruit abundance and physical expense, respectively) in explaining female activity budget and diet composition by including their two-way interactions with activity (model 1) and food item (model 2). We included two random intercepts in both LMMs: female ID and female ID nested in sampling block, with the latter accounting for the non-independence of the proportions within each sampling block. In order to avoid overconfident models, reproductive state and activity, or food item, were dummy coded and centered to be added as random slopes within female ID, with activity, or food item, interacting with FrAI and with travel distance (Schielzeth and Forstmeier 2009; Barr 2013). We weighted the two models with the total number of instantaneous points (model 1) or feeding points (model 2) to account for a likely link between response accuracy and sampling size. The samples analyzed comprised a total of 1612 or 2015 transformed proportions on activity budget and diet composition, respectively, obtained for 43 adult females during 9 or 10 sampling blocks, depending on the group (403 observations comprising 4 or 5 proportions each).

Cheek pouch use (model 3)

We tested whether females in different reproductive states differed in their cheek pouch use. To do so, we fitted a model with a beta error distribution (Bolker 2008) and logit link function (McCullagh and Nelder 1989) with the function "glmmTMB" of the package "glmmTMB" (version 1.0.2.1; Brooks et al. 2017). The response variable was the proportion of time spent using cheek pouches per female and per sampling block (N = 403 observations). The proportion of time using cheek pouches was calculated by dividing the number of observations during which the focal female was feeding from her cheek pouches by the total number of observations within a sampling block for the respective focal female. To avoid proportions being exactly zero, we rescaled our proportions using the same formula as described above. Reproductive state was the predictor variable. Additionally, we included as control variables FrAI, travel distance, group, and female dominance rank (which was found to predict female cheek pouch use in a previous study in this population; Heesen et al. 2014). Female dominance rank was not included in any other models, as female activity budget and energy intake were found to be independent of it in this population (Heesen et al. 2013). Female ID was added as a random intercept effect, and within it, we included random slopes of reproductive state (dummy coded and centered), FrAI, and travel distance. The number of instantaneous points from which the response was calculated was further included as a weight to account for a potential link between response reliability and sampling size.

Energy intake, protein intake, and feeding efficiency (models 4 to 7)

To test whether females' energy intake, protein intake, and feeding efficiency differed between reproductive states, we fitted four LMMs. The four responses were energy intake, protein intake, energetic feeding efficiency, and protein feeding efficiency (models 4, 5, 6, and 7, respectively). Reproductive state was the fixed effect predictor variable, while FrAI, travel distance, and group were added as additional fixed effects to control for their potential influence. Female ID was included as a random intercept effect. FrAI and travel distance were added as random slopes within focal individual. We further weighted the models with the number of feeding points from which the four responses were calculated. Some energy calculations could not be considered reliable enough (not representing at least 60% of a female's feeding observations within one sampling block) and were therefore discarded. Consequently, our sample size for these models was smaller than for the previous ones (N = 177 instead of 403, N = 43 females).
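As an illustration of this model structure, a minimal lme4 call for model 4 might look as follows; the data frame and variable names are hypothetical stand-ins for the authors' actual objects.

```r
# Minimal sketch of model 4 (energy intake), assuming a data frame 'd' with
# one row per female x sampling block; z_* denotes z-transformed predictors.
library(lme4)

m4 <- lmer(
  log(energy_intake) ~ repro_state + z_frai + z_log_travel + group +
    (1 + z_frai + z_log_travel | female_id),
  data    = d,
  weights = n_feeding_points   # more feeding points = more reliable response
)
```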
uCP levels (model 8)

We fitted an additional LMM to investigate how reproductive state affected female uCP levels. The response variable was the uCP level in each urine sample (N = 905). The reproductive state of the female was our predictor variable, and we added FrAI, daily travel distance, and group as fixed effects. Moreover, as in several primate species uCP levels have been found to depend on time of day (chimpanzees (Pan troglodytes): Georgiev 2012; gorillas (Gorilla beringei beringei): Grueter et al. 2014; blue monkeys (Cercopithecus mitis): Thompson et al. 2020), we added sampling time as an additional fixed effect in the model. Female ID was included as a random intercept effect, as was collection day, in order to account for multiple urine samples per day. Reproductive state (dummy coded and centered), FrAI, daily travel distance, and time of day were included as random slopes within focal ID.

Model assumptions and additional information

Prior to fitting the models, we inspected the quantitative predictors (FrAI and travel distance) to make sure their distributions were roughly symmetric. To achieve a more symmetric distribution, distance traveled was log-transformed (in models 3 to 8). The responses energy intake, protein intake, and feeding efficiency (energetic and protein) were log-transformed. In all models, we z-transformed FrAI and travel distance (as well as dominance rank in model 3 and time of day in model 8) to make the models more likely to converge and easier to interpret (Schielzeth 2010). We also investigated potential under-/overdispersion in the beta model (model 3). The dispersion parameter was 1.24; the response was therefore slightly overdispersed, which could be an issue in case a p value fell only slightly below the 0.05 threshold. After fitting the models, we visually inspected a QQ-plot of the residuals and the residuals plotted against fitted values to check whether the residuals were normally distributed and homogeneous (LMMs). These indicated no severe deviations from the assumptions. Using the function "vif" of the package "car" (Fox and Weisberg 2018), we checked for potential collinearity between predictor variables by inspecting the Variance Inflation Factors (VIF) derived from models lacking the random effects and interactions (the maximum VIF across all models was 1.6, indicating no issues with collinearity). Furthermore, we assessed the models' stability by dropping the levels of the random intercepts one at a time (Nieuwenhuis et al. 2012) using a function provided by Roger Mundry; this procedure revealed the models to be of good stability. We bootstrapped the models' estimates and confidence limits of model predictions using the functions "bootMer" of the package "lme4" or "simulate" of the package "glmmTMB", respectively. To test the overall effect of reproductive state, and the interaction it was involved in, on females' activity budget and diet composition, we compared the full models (as described above) with respective null models lacking reproductive state in the fixed effects part (Forstmeier and Schielzeth 2011). These comparisons were based on a likelihood ratio test (Dobson 2002). Apart from testing the general effect of reproductive state on our various responses, we also performed post hoc Tukey's pairwise comparisons among the reproductive states for models 3 to 8. We tested the individual effects of each fixed effect using the function "drop1" from the package "lme4." This function executes likelihood ratio tests comparing the full models with reduced ones lacking one fixed effect at a time.
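In code, the full-null comparison and the per-term tests reduce to two calls, sketched here for the hypothetical model 4 from the previous sketch:

```r
# Minimal sketch of the full-null comparison and per-term likelihood ratio
# tests (model objects are the hypothetical ones from the previous sketch).
m4_null <- update(m4, . ~ . - repro_state)  # null model without reproductive state
anova(m4_null, m4)                          # likelihood ratio test (refits with ML)

drop1(m4, test = "Chisq")                   # LRT for each fixed effect in turn
```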
Results

Activity budget (model 1)

Overall, the null model was significantly different from the full model (χ² = 216.0, df = 16, P < 0.001). The interaction between activity and reproductive state was significant (F(12, 827.5) = 19.6, P < 0.001), indicating that the activity budget of females differed between reproductive states. Plotting the results and inspecting the fitted values together with their confidence intervals revealed that social time and traveling time were similar across reproductive states (Fig. 2; supplementary Table S1), while there was variation in feeding and resting time. Specifically, EL and NGNL females fed less and rested more than females in other reproductive states.

Diet composition (model 2)

The null model was significantly different from the full model (χ² = 278.4, df = 20, P < 0.001). The interaction between food item category and reproductive state revealed a significant result (F(16, 893.9) = 18.8, P < 0.001), indicating that female diet composition differed between reproductive states. Plotting the results and inspecting the fitted values together with their confidence intervals revealed considerable variation with regard to the proportions of fruit and seeds in the diet (Fig. 3; supplementary Table S2). Specifically, EL and NGNL females fed more on fruits and less on seeds compared to females at other reproductive stages.

Cheek pouch use (model 3)

The full-null model comparison revealed a significant result and therefore a significant effect of reproductive state on the frequency of cheek pouch use (χ² = 54.6, df = 4, P < 0.001; Fig. 4; Table 1). As the p value was very small, we were confident that the significance could be trusted despite the slight overdispersion. Post hoc pairwise comparisons revealed that EG females used their cheek pouches the least. EG females used their cheek pouches less than LG (t = − 2.7, P = 0.050), EL (t = − 8.6, P < 0.001), LL (t = − 4.0, P = 0.001), and NGNL females (t = − 3.3, P = 0.009). Additionally, EL females used their cheek pouches more than LL females (t = 5.7, P < 0.001).

Energy intake (model 4)

Null and full models differed significantly from each other, suggesting that reproductive state had a significant effect on a female's energy intake (χ² = 23.2, df = 4, P < 0.001; Fig. 5a; Table 2a). Energy intake in EL and NGNL females was higher than in EG females (z = 4.5, P < 0.001 and z = 3.6, P = 0.003, respectively). Additionally, energy intake in EL females was higher than in LL females (z = 3.1, P = 0.02).

uCP levels (model 8)

The last model revealed a significant difference between the null and the full model, meaning that uCP levels significantly differed between reproductive states (χ² = 18.3, df = 4, P = 0.001; Fig. 6; Table 3). Gestating females exhibited the highest uCP levels, with uCP levels in LG females being higher than in EL (z = 3.8, P = 0.001), LL (z = 3.1, P = 0.016), and NGNL females (z = 2.7, P = 0.048). Additionally, there was a trend toward higher uCP levels in EG than in EL females (z = 2.7, P = 0.055).
Table 1 Results of the beta model on cheek pouch use. This model tested the effect of reproductive state (EG, early gestation; LG, late gestation; EL, early lactation; LL, late lactation; NGNL, non-gestating-non-lactating) on the proportion of time a female used her cheek pouches while controlling for the potential effects of FrAI, daily travel distance, dominance rank, and group. Female ID was added as a random intercept, and each reproductive state was centered to be included with FrAI and daily travel distance as random slopes within female ID. The number of instantaneous points from which the response was calculated was further included as a weight. N = 43 females, N = 403 observations. The table shows model estimates, standard errors (SE), confidence intervals (CI), and the min and max range of the estimates obtained when dropping one female from the random intercept at a time (to assess model stability).
(a) Dummy coded, with "EG" being the reference category; the indicated test refers to the overall effect of reproductive state.
(b) z-transformed; mean and standard error of the original variable were 36.131 and 0.679, respectively.
(c) log- and z-transformed; mean and standard error of the original variable were 1755.073 and 15.091, respectively (in meters).
(d) z-transformed; mean and standard error of the original variable were 7.719 and 0.157, respectively.
(e) Dummy coded, with group "MOT" being the reference category.

Discussion

Species living in a seasonal habitat typically cope with the energetic costs of reproduction by timing their reproductive events with fluctuations of food abundance. In income breeders, females invest incoming energy in reproduction and synchronize lactation with the peak of food abundance (Jönsson 1997). In capital breeders, females conceive after a peak of food abundance, so that they store fat that will be metabolized during the reproductive cycle (Jönsson 1997). Relaxed income breeders, such as Assamese macaques, follow a mixed strategy consisting of fat accumulation before conception and a birth season timed around a peak of food availability (Brockman and van Schaik 2005; Heesen et al. 2013). Relatively little is known regarding how this mixed breeding strategy translates into behavioral and physiological shifts during reproduction. We characterized these shifts by investigating female behavior and physiology in Assamese macaques throughout the reproductive cycle. To do so, we used an integrative and multivariate approach, assessing female activity budget, diet composition, cheek pouch use, energy intake, protein intake, feeding efficiency, and urinary C-peptide (uCP) levels. We compared these metrics both between and within gestation and lactation, using non-gestating-non-lactating (NGNL) females as a reference group.

Gestation

Gestating females behaved differently than NGNL females. First, and as predicted, gestating females exhibited different activity budgets than NGNL females. More specifically, they spent more time feeding and less time resting than NGNL females. In our study population, resource abundance prior to the mating season modulates the subsequent conception rate (Heesen et al. 2013), in line with the general pattern in female primates that females in better physical condition (more fat reserves) are more likely to conceive than females in worse physical condition (McFarland 1996; Ziegler et al. 2000).
Table 2 Results of the (a) energy intake, (b) energetic feeding efficiency, and (c) protein feeding efficiency models. Each model tested the effect of reproductive state (EG, early gestation; LG, late gestation; EL, early lactation; LL, late lactation; NGNL, non-gestating-non-lactating) on the respective response variable while controlling for the potential effects of FrAI, daily travel distance, and group. In the four models, female ID was included as a random intercept. FrAI and daily travel distance were added as random slopes within female ID. The number of feeding points from which the response was calculated was further included as a weight in the four models. N = 43 females, N = 177 observations.

Additionally, being relaxed-income breeders, female Assamese macaques barely accumulate fat during gestation (Brockman and van Schaik 2005) and therefore rely mostly on their pre-mating fat stores to support the energetic costs of gestation. Together, this suggests that gestating females, by feeding more at the expense of resting, do not follow an energy-conserving strategy, as their fat stores can be drawn upon to support the energy requirements of gestation. Second, and as expected, diet composition differed between gestating and NGNL females, but contrary to our prediction, gestating females consumed more seeds and fewer fruits than NGNL females. More generally, while fruits are richer in readily available energy (carbohydrates in the form of soluble sugar), seeds contain more protein and more fibers (Table S3; Fig. S1). Although not predicted specifically, fibers might be a valuable nutrient during gestation. Fibers refer to plant cell wall components, such as cellulose and hemicellulose, which need to be fermented in the gastrointestinal tract to provide energy. A fiber-rich diet during gestation increases female reproductive success through modulation of her gut microbiota composition, with consequences for the offspring's immune development (T cell maturation, antioxidant defense) and birth weight (mice (Mus musculus): Nakajima et al. 2017; humans: Hu et al. 2019; domestic pigs: Li et al. 2019; Weng 2019; Zhuo et al. 2020). Consuming more fibers during gestation may be selected in Assamese macaques for its beneficial effect on female reproductive performance. Contrary to our predictions, gestating and NGNL females did not differ in the other investigated metrics. For example, gestating females did not consume more proteins than NGNL females, as there was no difference in young leaf consumption or in protein intake. Note that our protein intake calculation came exclusively from plant food items, as we could not account for the nutrient content of animal matter, and thus excluded a probably important source of protein (Bergstrom et al. 2019), one which can represent up to 24% of feeding time, for example, in early lactating females of our study. We cannot discuss the proportion of animal matter in the diet, as we were unable to control for animal prey availability in the habitat. Although gestating and NGNL females consumed the same quantity of (plant) protein, the proportion of protein in the diet may differ. We thus ran a post hoc model similar to the protein intake model (model 5), but with the protein intake ratio as the response variable (dividing the quantity of protein intake by the total quantity of plant food items consumed in a day; Table S4; Fig. S2). The proportion of protein was indeed significantly higher in early gestating compared to NGNL females.
Consequently, as the proportion of protein is higher in early gestating than in NGNL females, while the quantity of protein is the same between these two categories of females, early gestating females may consume a lower total amount of food per day than NGNL females. Still, early gestating females manage to achieve a protein intake similar to NGNL females, probably by consuming more seeds. The higher proportion of protein in the diet of early gestating females may have beneficial effects on the offspring (Langley-Evans et al. 1996). Levels of uCP were the highest during gestation, especially in late gestation, in line with results in other primate species (bonobos (Pan paniscus): Nurmi et al. 2018; chacma baboons (Papio ursinus): Fürtbauer et al. 2020). Elevated uCP levels during gestation can reflect maternal insulin resistance induced by a shift in maternal energy metabolism from carbohydrate to lipid oxidation (Cianni et al. 2003). Consequently, we cannot reliably assess energy balance from uCP levels during gestation because of the maternal change in insulin sensitivity at this stage of the reproductive cycle. However, we can infer that insulin resistance and the associated metabolic shift in the mothers' energy source allow a redirection of carbohydrates to support the fetus' needs (Butte 2000) and reflect a physiological adaptation prioritizing the energetic requirements of the fetus via the most readily available source of maternal energy. Among gestating females, our results emphasize differences between early and late gestation. We found that only early (and not late) gestating females have lower energy intake and lower feeding efficiency than NGNL females. Although in contrast with our predictions, these results make sense in light of the results discussed above. Feeding mainly on seeds during gestation does not provide many energy-rich carbohydrates and may yield a low energy intake and feeding efficiency. The fact that late gestating females were not significantly different from NGNL females in terms of energy intake and feeding efficiency may suggest a slight change in feeding behavior over the course of gestation. This is also consistent with late gestating females relying less on protein in their diet than early gestating females (Fig. S2) and using their cheek pouches more often. Storing more food items in their cheek pouches might contribute to a slightly better (although non-significant) feeding efficiency in the later stages of gestation. More data are needed to investigate further feeding behavior shifts during gestation.

Lactation

Unexpectedly, early lactation did not exhibit the strongest behavioral differences compared to NGNL females. On the contrary, early lactation was very similar to NGNL with regard to all metrics analyzed. Early lactating and NGNL females spent more time resting and less time feeding than females at other stages, indicative of an energy-conserving strategy in times of energetic constraints (Dasilva 1992; Rose 1994). More resting at the expense of feeding in early lactating females (Heesen et al. 2013) is potentially associated with several, non-exclusive benefits. Firstly, this energy-conserving strategy could be a way to offset the energy expenses induced by the substantial suckling frequency in daytime during early lactation (Berghänel et al. 2016). As some maternal fat stores have been depleted during pregnancy (as reported in other species: Cothran et al. 1987; Lassek and Gaulin 2006),
an energy-conserving strategy after parturition allows mothers to "save what is left" in order to allocate it to lactation. Secondly, while the mother is sitting and resting, the infant is in an upright position, which may increase its efficiency when suckling and digesting milk. Moreover, nursing a newborn might involve an additional activity such as vigilance, which is compatible with resting but not with feeding (Barrett et al. 2006; Dias et al. 2018). Lastly, while mothers in the study population typically rest together without grooming, the infants play nearby, which promotes their motor development (Berghänel et al. 2016; Ostner and Schülke 2018). Therefore, an energy-conserving strategy would have been selected for its contribution to increased female performance during the peak of lactation. Surprisingly, NGNL females follow the same energy-conserving strategy as early lactating females. It might be that NGNL females are not as free from energy requirements as we expected. NGNL females need to prepare for their next conceptions and thus have to store as much fat as possible before the mating season to be able to reproduce (McFarland 1996; Brockman and van Schaik 2005; Heesen et al. 2013). The NGNL period may therefore be seen as setting the stage for subsequent reproduction, as fat accumulated during this period has an essential role to play in the subsequent reproductive states and thus shapes female reproductive success. Early lactating and NGNL females feed more on fruits than on seeds, and therefore seem to select items with higher carbohydrate content (Table S3; Fig. S1). Early lactating females are going through substantial energy expenditure, and carbohydrates provide a readily available source of energy, necessary to support maternal body function and maintenance. Fruit consumption is probably responsible for the high energy intake and feeding efficiency of early lactating and NGNL females. As these females cannot allocate as much time to feeding, in order to rest and save energy to support lactation (EL females) or fat storage (NGNL females), when they do feed they need to do so efficiently to optimize their time. Our results show that this is indeed the case: similarly to other primate species, early lactating females exhibit a specific feeding strategy of increased energy intake and feeding efficiency, potentially helping them to compensate for the energetic costs of milk production (Kirkwood and Underwood 1984; Muruthi et al. 1991; Nievergelt and Martin 1999; McCabe and Fedigan 2007). Contrary to our predictions, and as found in gestating females, lactating females had a protein intake similar to NGNL females. Here, our post hoc model with the protein ratio as response again becomes useful to investigate whether these comparable quantities of ingested protein represent similar proportions in female diet. The results of the post hoc model showed that the proportion of protein was significantly higher in lactating females compared to NGNL females (Table S4; Fig. S2). Therefore, lactating females rely more on protein than NGNL females, as this nutrient intake is likely associated with reproductive benefits (Kanakis et al. 2020). uCP levels in early lactating females were unexpectedly similar to those of NGNL females (as found in Grueter et al. 2014; Cano-Huertes et al. 2017; Bergstrom et al. 2020; Fürtbauer et al. 2020),
suggesting that females of these two reproductive stages, who have similar energy intake and a carbohydrate-rich diet, have similar energy balance, although early lactating females have substantial energy expenses. Some studies have shown that even under favorable nutritional conditions, uCP levels still decrease in situations of high energy expenditure requiring a rapid and immediate energy supply, such as in a period of infection or mating (Emery Thompson et al. 2009; Higham et al. 2011b). However, in our case, the energy expenditure induced by early lactation is likely supported, at least in part, by lipid oxidation (maternal fat stores), reducing the reliance on immediate energy intake. Additionally, the carbohydrate-rich diet of early lactating females is likely to induce high levels of uCP (Buyken et al. 2006). Together, fat oxidation and carbohydrate intake might compensate for the costs of early lactation and participate in maintaining female uCP at levels similar to those of NGNL females (who store and save fat for later use). Lastly, as predicted, early and late lactation patterns differed within lactating females, which reflects that the energetic costs of lactation are not static, but rather vary according to the infant's growth rate (Lee 1987). Late lactating females were very similar to gestating females in not following an energy-conserving strategy. Potentially, as the infant becomes older and more autonomous during the day, it frees up time for the mother, who can therefore allocate this extra time to feeding. Additionally, as females are in low physical condition when entering late lactation (Heesen et al. 2013), they might need to devote more time to feeding in order to regain body mass. Early lactating females also differ from late lactating ones in their cheek pouch use, with early lactating females using their cheek pouches more often than late lactating ones, probably because of their feeding time constraints. Moreover, when nursing a very young infant, early lactating females might try to avoid aggression induced by feeding competition in a food patch by filling up their cheek pouches and feeding in a safer place (Heesen et al. 2014). Therefore, in late lactation, females become relieved from some of the nursing load and allocate more time to feeding, as they do not have to rest as extensively anymore.

Adaptive strategies or environmental effects?

Across analyses, the most similar patterns occurred in females that had the strongest temporal overlap, i.e., early-lactating and NGNL females on one side and gestating and late-lactating females on the other (Fig. 1). Importantly, all analyses controlled for variation in fruit availability, yet the nutritional quality of food may have changed over time and may account for the similar feeding behavior of females of different reproductive states co-occurring in the same months. Thus, females may either actively change their feeding behavior in response to their reproductive state, or may opportunistically consume what is available, resulting in comparable feeding patterns across females within a month, irrespective of their reproductive state. We cannot completely discard the possibility that some periods of the year provide nutrient access that matches specific nutrient requirements during the reproductive cycle, which may have led to the temporal timing of reproductive events in this population.
To further investigate this possible environmental effect, we plotted for each model and for each female the residuals against months and visually inspected them. If temporal variation existed and was missing from our models, then the residuals, i.e., the differences between the fitted and the observed values, would be very similar from one month to the next. None of the plots revealed such clustering of residuals through consecutive months, which indicates that there is no obvious monthly modulation in the residuals and thus no important monthly parameter that we missed (Fig. S3). This suggests that our results are not explained by the temporal variation of a parameter (such as the nutritive quality of the month) that was not included in our models and thus most likely reflect active changes of female feeding strategies. Our data show, however, large stochasticity, indicated by large confidence intervals. Therefore, some uncertainty still remains, and further investigation is needed to fully confirm active, rather than passive, shifts in feeding behavior during the reproductive cycle. Overall, Assamese macaque females coped with the variation in energetic costs of reproduction by actively or passively shifting behavioral patterns. They modified their activity budget, diet composition (leading to changes in energy intake and feeding efficiency), use of cheek pouches, and sensitivity to insulin. Considering the consequences of reproductive state both on female feeding behavior, assessed from different and complementary perspectives, and on female physiology, our study allowed us to address several questions simultaneously and provided a comprehensive picture of the energetics of reproduction in a seasonal species with a mixed breeding strategy. Our results provide evidence that characteristics typical of strict income breeders (no fat accumulation during gestation and high energy intake during lactation) and of capital breeders (fat accumulation before conception) co-emerge in a species following a mixed breeding strategy. Further integrative studies will be helpful in determining whether different sets of such mixed characteristics are exhibited in other seasonal mammals that meet halfway on the spectrum of breeding strategies.
Development and Characterization of a Hydrogel Containing Curcumin-Loaded Nanoemulsion for Enhanced In Vitro Antibacteria and In Vivo Wound Healing

Curcumin (CUR) is a natural compound extracted from turmeric (Curcuma longa L.) used to treat acne, promote wound healing, etc. Its disadvantages, such as poor solubility and permeability, limit its efficacy. Nanoemulsion (NE)-based drug delivery systems have gained popularity due to their advantages. This study aimed to optimize a CUR-NE-based gel and evaluate its physicochemical and biological properties. A NE was prepared using the catastrophic phase inversion method and optimized using the Design Expert 12.0 software. The CUR-NE gel was characterized in terms of visual appearance, pH, drug release, and antibacterial and wound healing effects. The optimal formulation contained CUR, Capryol 90 (oil), Labrasol:Cremophor RH40 (1:1) (surfactants), propylene glycol (co-surfactant), and water. The NE had a droplet size of 22.87 nm and a polydispersity index of 0.348. The obtained CUR-NE gel had a soft, smooth texture and a pH of 5.34 ± 0.05. The in vitro release of CUR from the NE-based gel was higher than that from a commercial gel with nanosized CUR (21.68 ± 1.25 µg/cm2 and 13.62 ± 1.63 µg/cm2 after 10 h, respectively). The CUR-NE gel showed accelerated in vitro antibacterial and in vivo wound healing activities as compared to other CUR-loaded gels. The CUR-NE gel has potential for transdermal applications.

Introduction

Traditional medicine has historically utilized turmeric powder to treat a number of conditions, including rheumatic diseases, diabetes, cancer, liver diseases, infectious diseases, and digestive problems such as flatulence, dyspepsia, and gastric/duodenal ulcers. Curcumin (CUR) research has revealed a wide range of intriguing biological and therapeutic functions, including anti-microbial, anti-cancer, anti-inflammatory, and anti-diabetic qualities [1]. On the other hand, CUR has low solubility, permeability, and stability, making it difficult to administer the chemical efficiently via the skin. To overcome this issue, recent research has looked at the use of nanosystems, such as polymeric nanosystems and liposomes, to increase CUR's distribution and effectiveness [2]. Nanoparticle-based drug delivery approaches offer promising ways to increase the efficacy of CUR in treating a range of diseases, especially those that are infectious. In multiple in vitro and in vivo studies, CUR nanoparticles have demonstrated greater therapeutic outcomes compared to free CUR. Furthermore, the nano-sized formulation of CUR enhances its solubility in water and boosts its antimicrobial activity [1]. Nanocurcumin, produced from CUR extracted from turmeric rhizome, exhibited greater antibacterial action against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli) bacteria compared to bulk curcumin. The nanocurcumin-loaded cream was more antimicrobial than bulk curcumin cream after a month of storage [3]. Wet-milled curcumin nanoparticles were tested for antibacterial activity against four bacteria by Adahoun et al. in 2017; nanocurcumin inhibited all tested bacteria better than bulk curcumin [4]. Because of the potency of CUR's antibacterial effect, recent research has focused on developing nanocarriers as CUR drug delivery vehicles [1,5]. Numerous studies have incorporated CUR into various nanoformulations, including polymeric nanoparticles [6][7][8], liposomes [9], solid lipid nanoparticles [10], and micelles [11], with the aim of enhancing CUR's antibacterial efficacy.
Nanoemulsion (NE)-based technologies have received a lot of interest in recent decades for overcoming the stratum corneum barrier for effective transdermal drug delivery. A colloidal system composed of immiscible liquids, that is, oil and water stabilized by emulsifiers, is known as an NE [12]. NEs offer a wide range of possible uses due to their tiny droplet size, including drug delivery, food manufacturing, and cosmetics [13]. NEs have been widely employed for the transdermal administration of hydrophobic and hydrophilic molecules that have difficulties with solubility, lipophilicity, and bioavailability. Nowadays, therapeutic plant formulations might be advantageous because they have biocompatible, wound-healing, and antibacterial qualities. Their use can help to reduce cytotoxicity, improve wound healing, and minimize antibiotic resistance. Gel networks allow nanovesicles to migrate from the matrix to the skin surface easily and in a controlled manner, and hydrated skin allows for higher payloads. Biocompatible polymers such as Carbomers, Pluronics, xanthan gum, and carrageenan have also been employed as gel systems to deliver emulsified drugs transdermally [12]. A piroxicam oil/water (O/W) NE gel was made by Dhawan et al. (2014) from oleic acid, Tween 80, ethanol (Smix: a surfactant and co-surfactant mixture), and water; the marketed product had lower skin retention, greater skin permeation flux, and a longer lag time [14]. In 2017, a nanoemulgel outperformed its NE counterpart (18% Capryol 90, 30% OP-10, 15% 1,2-propylene glycol) in terms of triptolide release and pharmacokinetic profile. The gel sustained the drug release due to the presence of 1.5% w/v Carbomer 940, which provided a 2.93-fold greater systemic circulation AUC than a drug solution [15]. CUR encapsulation in NEs has been studied to improve its pharmacological activities. Ahamad N. et al. selected clove oil, Tween 80, and PEG 400 as excipients because they could dissolve the drug ingredient and create the biggest emulsion area. The NE healed wounds quicker than the control samples [16]. In 2019, Kole et al. prepared an O/W NE containing tetrahydroxy CUR by the high-pressure homogenization technique. The results of the antimicrobial activity assay showed good effects against E. coli and Bacillus subtilis [17]. A novel NE technology was developed in order to evaluate its effectiveness in diabetic rats with open incision wounds. The NE system significantly decreased oxidative stress, expedited collagen deposition, and prevented bacterial infection of the wound, all of which sped up the regeneration of skin tissue [18]. A CUR NE prepared by low-energy emulsification was gelled with crosslinked polyacrylic acid (Carbopol 934) to form a nanoemulgel. In psoriatic mice, those treated with the nanoemulgel healed faster than those treated with a CUR and betamethasone-17-valerate gel [19]. High-energy ultrasonic emulsification was used to encapsulate CUR in an NE system. A 0.5% Carbopol® 940 hydrogel was used to topically apply the optimized CUR-loaded NE. The CUR nanoemulgel had a thixotropic rheology, increased skin penetrability, and improved effectiveness in in vivo wound healing [20].
For ease of scale-up, the goal of this work was to optimize and analyze the physicochemical characteristics of CUR NE formulations fabricated by the phase inversion method. The optimized CUR NE was then introduced into a gel to evaluate its in vitro antibacterial activity and in vivo wound healing properties, as compared to a pure material-loaded gel and a commercial gel containing nanosized CUR.

The Saturation Solubility of CUR in Different Excipients

Table 1 shows the findings of the assessment of CUR's solubility in different excipients. Because of its high solubilizing capacity, Capryol 90 was chosen as the oil phase among the oil excipients. Among the surfactants, Labrasol exhibited the greatest saturation solubility of CUR, followed by the LS:CRH40 mixture. Cremophor RH40 was not chosen since its CUR became dark red during testing. CUR was shown to dissolve better in Lauroglycol as a co-surfactant than in propylene glycol. The findings of the solubility studies would be used in combination with the emulsification study to identify suitable Smix components [21].

Emulsification Efficiency

Table 2 shows the findings of an investigation into the emulsification potential of surfactants and co-surfactants. Because of their capacity to emulsify oil to generate a clear emulsion, the LS:CRH40 mixture and propylene glycol were selected as the surfactant and co-surfactant, respectively [21].

Construction of Ternary Phase Diagrams

The phase diagrams for various Smix ratios are illustrated in Figure 1 below. For future investigations, the phase diagram corresponding to Smix 2:1, with the greatest clear emulsion zone, was chosen [21].

Optimization of CUR-Loaded NE

Based on the phase diagram of Smix 2:1, the range of variation for the input variables was selected, as shown in Table 3 [21]. The experimental design and the results obtained from the experiments during the optimization process are presented in Table 4 [21]. The translucent emulsion droplets varied in size from 17.73 to 69.12 nm, which was within the acceptable size range of 10-100 nm. The polydispersity index (PDI) values of 0.318-0.658 implied a wide droplet size dispersion [21].

The Effects of Factors on Droplet Size and Droplet Size Distribution

Figure 2 shows the effect of input parameters on droplet size and PDI of CUR-NE [21].
The Effects of Factors on Droplet Size and Droplet Size Distribution

Figure 2 shows the effect of the input parameters on the droplet size and PDI of the CUR NE [21]. In general, droplet size was larger in the region with high water content and low oil content, as well as in the region with high oil content and low water content. When the oil ratio was 5%, increasing the amount of Smix led to a smaller droplet size, while at a 15% oil ratio, increasing the amount of Smix resulted in a larger droplet size.

Overall, increasing the water ratio resulted in an increase in PDI. With a fixed water ratio, PDI increased as the Smix ratio increased and the oil ratio decreased. Similarly, when the oil ratio was fixed, an increase in the water ratio led to an increase in PDI.

The equations used to predict the relationship between droplet size, PDI, and the input variables, along with the statistical parameters of the prediction models, are shown in Table 5 [21]. The F-value of Equation (1) is 4.35 and its p-value is 0.0443, indicating that the prediction model was significant. The negative R²_pre value suggests that the overall mean would predict the droplet size response better than the current model. Equation (2) has an F-value of 10.55 and a p-value of 0.0044, demonstrating that this prediction model was also significant. The difference between R²_adj and R²_pre was less than 0.2, indicating the validity of the model. The optimized formula had a desirability coefficient of 0.798, with X1 (% Capryol 90) at 11.83%, X2 (% Smix) at 58.17%, and X3 (% water) at 30%. For the output variables (Y1, Y2), the predicted values were 25.17 nm and 0.384, respectively; the measured values were 22.87 nm and 0.348. Both the droplet size and the PDI were within the desired ranges (droplet size 10-100 nm; PDI < 0.5).
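The composite desirability quoted here (0.798) is the kind of figure produced by Derringer-style optimization, in which each response is mapped onto a 0-1 desirability scale and their geometric mean is maximized. Below is a minimal sketch, assuming two minimize-type responses with equal weight, as in the optimization settings described in the methods; the prediction functions, component ranges, and desirability limits are invented stand-ins, since the fitted equations of Table 5 are not reproduced in this text.

import numpy as np

# Hypothetical stand-ins for the fitted models of Table 5 (the real
# coefficients are not reproduced here). Each takes the mixture
# percentages (oil, smix, water, summing to 100) and returns a prediction.
def predict_size(oil, smix, water):
    return 20.0 + 0.8 * oil - 0.1 * smix + 0.05 * water  # assumed

def predict_pdi(oil, smix, water):
    return 0.30 + 0.004 * oil + 0.001 * smix + 0.002 * water  # assumed

def desirability_minimize(y, y_best, y_worst):
    """Derringer-Suich desirability for a 'smaller is better' response:
    1 at y_best or below, 0 at y_worst or above, linear in between."""
    return float(np.clip((y_worst - y) / (y_worst - y_best), 0.0, 1.0))

best = (None, -1.0)
# Grid-search feasible mixtures: oil 5-15%, water 30-50%, Smix = remainder
# (ranges assumed for illustration).
for oil in np.arange(5, 15.5, 0.5):
    for water in np.arange(30, 50.5, 0.5):
        smix = 100.0 - oil - water
        d_size = desirability_minimize(predict_size(oil, smix, water), 10, 100)
        d_pdi = desirability_minimize(predict_pdi(oil, smix, water), 0.1, 0.7)
        d = (d_size * d_pdi) ** 0.5   # geometric mean = equal weights
        if d > best[1]:
            best = ((oil, smix, water), d)

print("best composition (oil, Smix, water):", best[0], "desirability:", best[1])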
Characterization of CUR-Loaded NE and CUR-NE Gel

The physicochemical properties of the optimized NE were evaluated, and the results are displayed in Figure 3 [21]. The obtained CUR NE was yellow-orange in color, transparent, and homogeneous. The TEM images showed spherical particles with sizes ranging from 15 to 25 nm, with some particles clumped together. The emulsion was stable, with no phase separation and no drug precipitate after centrifugation, satisfying the kinetic stability criteria.

Compared with the commercial gel, the CUR-NE gel released more drug at each sampling time point. After 10 h, the cumulative amount of CUR released from the CUR-NE gel was 21.68 ± 1.25 µg/cm², 1.59 times greater than that of the commercial gel (13.62 ± 1.63 µg/cm²). This difference was statistically significant (p < 0.05, n = 3).

Additionally, the CUR-NE gel was soft and smooth, with a yellow color and a pH of 5.34 ± 0.05, which is in line with the physiological pH of the skin (4.2-5.6), thus minimizing the risk of irritation [21].
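As a quick plausibility check of the significance claim for the 10 h release comparison above, the snippet below recomputes an unpaired two-tailed t-test from the reported means, standard deviations, and n = 3 per group; whether the authors used exactly this test (rather than a paired or Welch variant) is an assumption.

from scipy.stats import ttest_ind_from_stats

# Reported 10 h cumulative release (µg/cm²), mean ± SD, n = 3 per group.
t, p = ttest_ind_from_stats(mean1=21.68, std1=1.25, nobs1=3,
                            mean2=13.62, std2=1.63, nobs2=3)
print(f"t = {t:.2f}, p = {p:.4f}")  # p comes out well below 0.05, consistent with the text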
In Vitro Antimicrobial Activity

The antibacterial efficacies of the various samples are displayed in Table 6 and Figure 4. Ampicillin (10 µg), used as the positive control, demonstrated the largest zones of inhibition against S. aureus and E. coli (21.0 mm and 18.4 mm, respectively). The aqueous suspension of CUR and the CUR gel did not exhibit any efficacy against the tested bacteria at the tested concentrations. The commercial gel displayed antibacterial activity against S. aureus, whereas its efficacy against E. coli was observed only at the higher concentration (700 µg/mL). In contrast, the CUR NE showed antibacterial efficacy against both S. aureus and E. coli at both concentrations. The inhibition zone results also demonstrated the antibacterial efficacy of the CUR-NE-based gel against both tested bacteria.

In Vivo Wound Healing Study

Figure 5 displays the variations in wound size. In comparison with the control groups, mice treated with the CUR-loaded NE-based gel showed faster wound healing. Based on the macroscopic findings, the CUR-loaded NE-based gel was found to be more potent than the commercial formulation and the formulation containing pure CUR.

Histopathological Study

According to the histological studies (Figure 6), the CUR formulations improved epidermal and dermal regeneration. Re-epithelization was found in the formulation-treated mice after 5 days of therapy. When re-epithelization and wound healing were examined in the 14-day groups, the CUR-NE gel and commercial gel groups performed significantly better than the CUR gel and Placebo gel groups, and the CUR-NE gel group produced superior outcomes compared with the commercial gel group. Wound healing was observed in all groups, but the CUR-NE gel formulation enabled the fastest re-epithelization and wound healing.

Discussion

The investigation of a drug's solubility in excipients is necessary to select the components of an NE formula, especially the oil phase. A poorly water-soluble drug such as CUR distributes mostly into the oil phase, so an oil phase that dissolves CUR well helps to increase the drug-loading capacity of the NE [22]. Additionally, evaluating the emulsification ability is an important basis for selecting suitable surfactants and co-surfactants for the formula. This is a key test to ensure that the NE meets the prerequisite requirements of uniformity and clarity, especially for NEs prepared using low-energy methods. Meanwhile, constructing a phase diagram is a method used to roughly determine the relationship between the ratios of the three basic components of the formula (oil, Smix, and water) in forming the emulsion [23].
Initial investigations help to determine the appropriate ranges of variation of the three components, that is, the region of the phase diagram that forms an emulsion, as well as the water content needed to direct the system into a hydrogel form. Droplet size and PDI are two basic parameters for evaluating nanoscale particles and their distribution, respectively, with droplet size playing an important role in the stability and skin penetration ability of the NE [12]. They were therefore selected as the output variables for optimizing the NE formulation. The droplet sizes obtained were uniformly within the NE size range (10-100 nm), while a PDI < 0.5 indicated that the size distribution was not excessively broad. These results are consistent with the study of Ahmad N. et al. (2019) on a CUR NE prepared by ultrasonication, with droplet sizes of 50.85 to 188.60 nm and PDI values of 0.256 to 0.559 [16].

Regarding the effect of the input variables on droplet size, the droplet size was significantly larger when the water ratio was high and the oil content low, as well as in the region of lowest water content and highest oil content, similar to the study of Kumar N. et al. (2015) [24]. In addition, the prediction equations of Kumar N. et al. (2015) and Fouad S. A. et al. (2013) showed that the independent oil, water, and Smix ratio terms increased droplet size, while the interactions between these variables decreased it. Equation (1) in this study yielded similar results [24,25].
When the oil ratio was fixed at its lowest level (5%), a greater amount of Smix led to smaller droplet sizes, whereas at the 15% oil level, a greater amount of Smix led to larger droplet sizes. This could be explained by the fact that a greater amount of Smix helps to emulsify the oil well, reducing the size of the oil droplets, but when the oil ratio is high, the Smix no longer emulsifies effectively [24].

Regarding the influence of the input variables on PDI, Equation (2) showed that all the independent factors increased the spread of the droplet size distribution. As the water ratio increased, the interfacial surface area between the oil and water phases increased. This could reduce the emulsifying ability of the Smix and result in O/W emulsion droplets with less uniform sizes. Meanwhile, when the Smix ratio was too high, micelles or aggregates could form from the excess surfactant molecules. O/W emulsion droplets with different surfactant concentrations at the interfacial layer could also be formed, leading to non-uniform sizes [26]. With a relatively high desirability coefficient of 0.798, the optimized formula included 11.13% oil, 58.87% Smix, and 30% water. This composition could ensure the formation of an O/W emulsion suitable for application in hydrogel form for external use, which was superior to the formulation reported by Fouad et al. (2013) [25].

The O/W emulsion met the appearance requirements, did not undergo phase separation after centrifugation, and had a spherical droplet shape as observed by TEM. Incorporating the O/W emulsion into a gel form with Carbopol gave better in vitro drug release performance than a commercial gel product containing an equivalent amount of nanosized CUR. This suggests that the O/W emulsion enhanced the solubility and dissolution rate of the drug, as well as improving its diffusion and permeability. This drug delivery system therefore has potential applications in gel form for increasing the bioavailability of CUR in topical use.

The present study assessed the antibacterial efficacy of the CUR NE and the CUR-NE-based gel against S. aureus and E. coli using the agar well diffusion method. At most of the tested sample concentrations (CUR NE, CUR-NE-based gel, commercial gel), the antimicrobial effects against the Gram-positive bacterium (S. aureus) were greater than those against the Gram-negative bacterium (E. coli), which may be due to the different compositions and structures of the bacterial cell envelopes. This outcome is similar to that reported by Asabuwa Ngwabebhoh F. et al. (2018) [27]. Comparing the CUR NE with the CUR aqueous suspension, the CUR NE showed antibacterial efficacy against both the Gram-positive bacterium (S. aureus) and the Gram-negative bacterium (E. coli) at both concentrations, whereas the CUR aqueous suspension had no antibacterial efficacy against any of the tested bacterial species.
This could be attributed to the increased diffusion of CUR in the NE droplets into the agar medium and the microbial membrane, as compared with the aqueous suspension, owing to their smaller size. Because the oil-phase droplets can successfully penetrate the bacterial cell wall, the CUR encapsulated in them was able to exert antibacterial activity. This led to lysis of the peptidoglycan layer, which ultimately resulted in deformation and lysis of the bacterial cell [18]. The antibacterial activity of CUR may also be due to its interaction with the cell-division-initiating protein FtsZ, which is related to its methoxy and hydroxyl groups [28]. These results confirm that an oil-in-water NE is a potent delivery system for improving the antibacterial activity of CUR.

In the case of the CUR-NE-based gel, the diameters of its inhibition zones were smaller than those of the CUR NE. This could be explained by the elevated viscosity of the gel compared with the NE. However, the observations implied that the antibacterial efficacy was maintained when the NE was transformed into gel form. On the other hand, the CUR-NE-based gel showed greater efficacy against the tested bacteria than the commercial gel, while the CUR gel showed no effect at the tested CUR concentrations of 350 and 700 µg/mL. These results indicate that the CUR-NE-based gel had more antibacterial activity than the commercial gel with nanosized CUR and enhanced the antibacterial property compared with the conventional gel. Additionally, the antibacterial effectiveness of CUR encapsulated in the NE assisted in lowering the bacterial load and improving wound healing [29].
Wound healing is known to be a complicated and dynamic process that generally comprises discrete phases denoting the healing stages and requires the participation of various cell types in a range of cellular activities [30]. Hemostasis is the initial step of wound healing, followed by the inflammation, proliferative, and maturation phases. The inflammation phase is a critical and necessary step for wound healing. During this phase, cells associated with inflammation (neutrophils, macrophages, and so on) move to the wound site and ensure the removal of germs and tissue remnants. Inflammatory cells also stimulate fibroblasts and epithelial cells [31]. Inflammation may be either acute or chronic. Acute inflammation helps the wound heal, whereas chronic inflammation slows the healing process and makes it take longer [32]. CUR was demonstrated in previous studies to have wound-healing, antibacterial, and anti-inflammatory effects [3,33]. The wound-healing effect of CUR is also attributed to its antioxidant activity [34,35]. In the study of Ouyang et al., the free radical scavenging ability of a novel multifunctional hydrogel promoted hemostatic function in wound management [36]. The antioxidant activities of CUR were preserved in the CUR-loaded NE and CUR-NE gels [35,37]. Analysis of the findings of the wound healing trial revealed that the CUR-NE-based gel formulation was more effective than the CUR gel and commercial gel. Tissue regeneration in the epithelial and dermal tissues was more considerable than in the CUR gel and commercial gel groups, particularly in the histopathological alterations. Histological tests revealed that the CUR-NE-based gel formulation enhanced re-epithelization and vascularization in the injured tissue, hence expediting wound healing. Recovery was also quicker when the shrinkage of the wound region's diameter was assessed [38]. For further use in clinical trials, the biocompatibility of the CUR NE and CUR-NE gel should be evaluated [39,40]. However, the excipients used in the investigated formulations for preparing the topical CUR-NE gels (Capryol 90, Labrasol, Cremophor RH40, propylene glycol, Carbopol 940, etc.) are used in approved pharmaceutical products and/or are generally recognized as safe (GRAS) by the Food and Drug Administration [41,42].

Materials

CUR (purity of 95%) was purchased from India. Standard CUR (99.56%) was obtained from the Vietnam Institute of Quality Control. Capryol 90, Labrafil, Labrasol, and Lauroglycol were obtained from Gattefossé (Paris, France). All other chemicals were of analytical grade. The commercial gel (containing nanosized CUR) is a marketed product that was purchased at a pharmacy in Vietnam.
Assessment of the Saturation Solubility of CUR in Different Excipients

The saturation solubility of CUR was investigated in several excipients, including oils (Capryol 90 and oleic acid), surfactants (Labrafil, Labrasol, and Cremophor RH40), and co-surfactants (propylene glycol and Lauroglycol). A centrifuge tube containing 1 mL of excipient and an excess of CUR was agitated to disperse the drug. To reach equilibrium, the tube was placed in a thermostatically controlled bath shaker at 37 °C and 100 rpm for 72 h. The tube was then centrifuged at 10,000 rpm (Hermle Z32 HK, Wehingen, Germany) for 10 min. The supernatant was collected and filtered through a 0.45 µm membrane filter. To estimate the saturation solubility of CUR in each excipient, the sample was diluted in methanol and analyzed using a UV-Vis spectrophotometer at 421 nm (Jasco V-630, Tokyo, Japan).

Evaluation of Emulsification Efficiency

The water emulsification ability of 0.1 g of each excipient (Labrafil, Labrasol, the LS:CRH40 mixture, propylene glycol, and Lauroglycol) was tested in 2 mL of distilled water. Then, 10 µL portions of oil were progressively added to the solution while stirring, and the amount added was recorded. The end point was reached when a turbid emulsion developed [19].

Optimization of CUR-Loaded NE

The Smix was created by mixing the surfactant and co-surfactant in the proper proportions. The oil was added and agitated until a uniform oil phase was produced. CUR was added to the oil phase (at a concentration of 0.5% in the NE formulations) and stirred until fully dissolved. Distilled water was gently added to the mixture with magnetic stirring to create a uniform emulsion [19].

A D-optimal design with 12 runs was created using the Design Expert 12.0 software. The three primary components of the excipient system (oil, Smix, and water) were the input variables (X1, X2, and X3, respectively). The component ranges for the oil and Smix were identified from the phase diagram analysis. The droplet size (Y1) and the PDI (Y2) were the output variables. The optimization was run using the Design Expert 12.0 software with the following settings: the three input variables varied "in range", and Y1 and Y2 were minimized with equal weight.

Formulation of CUR-NE-Based Gel

Carbopol 940 was dispersed in water and allowed to swell overnight to form a gel, then neutralized with triethanolamine. The gel was homogeneously mixed with the NE using a magnetic stirrer [19]. The components were mixed in appropriate proportions to achieve a final gel formula containing 0.035% (w/w) of CUR in 1% (w/v) of Carbopol 940. The NE was yellow to orange, clear, and homogeneous; any cloudiness, phase separation, active component crystallization, or discoloration was noted.

Droplet Size and Size Distribution

A diluted NE sample was prepared at a suitable ratio to achieve a count rate of 200-400 kcps. The sample was measured in a plastic cuvette using a Zetasizer Lab instrument (Malvern, UK).

Particle Morphology

A high-resolution transmission electron microscope (HR-TEM, JEM 2100, Jeol, Tokyo, Japan) was used to examine the morphology of the CUR NE.

Dynamic Stability

The centrifugation technique was used to assess dynamic stability. Two milliliters of the emulsion was placed in a Falcon tube and centrifuged for 30 min at 5000 rpm (Hermle Z32 HK, Wehingen, Germany) [43].

Assay for Drug Content

The concentration of CUR was measured by UV-visible spectrophotometry (Jasco V-630, Tokyo, Japan) at the wavelength of maximum absorbance, 421 nm, after dilution with methanol (1:1000) (Figures S1-S3) [21].
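To illustrate how such a UV-Vis reading is turned into a concentration, the sketch below applies a linear calibration and the dilution factor. The slope, intercept, and example absorbance used here are placeholders; the actual standard curve is only given in the supplementary figures.

# Hypothetical linear calibration at 421 nm: A = slope * C + intercept,
# with C in µg/mL (the real standard-curve parameters are in Figures S1-S3).
SLOPE = 0.155      # assumed absorbance per (µg/mL)
INTERCEPT = 0.002  # assumed

def cur_concentration(absorbance: float, dilution: float = 1000.0) -> float:
    """Back-calculate the CUR concentration (µg/mL) of the undiluted sample
    from the absorbance of a 1:dilution methanol dilution, using the
    Beer-Lambert calibration line."""
    c_diluted = (absorbance - INTERCEPT) / SLOPE
    return c_diluted * dilution

# Example: an absorbance of 0.45 measured on a 1:1000 dilution
print(f"concentration = {cur_concentration(0.45):.0f} µg/mL")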
Visual Appearance and pH

The appearance, the presence of particulate matter, and the homogeneity of the gel formulations containing the CUR NE were visually assessed. The pH of each hydrogel formulation (1 g) was determined using a pH meter (pH Sension PH3, HACH, Loveland, CO, USA) [44].

In Vitro Drug Release

A Hanson Research release apparatus with a receptor volume of 7.0 mL and a working surface of 1.767 cm² was used for the in vitro drug release study. A dialysis cellulose membrane (12-14 kDa, Visking tube, Medicell, London, UK) served as the release membrane. After the membrane had equilibrated with the receptor medium (ethanol and distilled water in a 1:1 (v/v) ratio) for 1 h, 0.3 g of the CUR-loaded NE gel was gently applied to it. A commercial gel was used as the control. The release medium was maintained at 37 ± 0.5 °C and 350 rpm. At set times, 2 mL aliquots were removed and replaced with the same amount of fresh medium [40]. The CUR concentration in each sample was measured by UV-visible spectrophotometry (Jasco V-630, Tokyo, Japan) at the wavelength of maximum absorbance, 431 nm (Figure S4) [21].
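Because each 2 mL aliquot withdrawn during the release test is replaced with fresh medium, later concentration readings understate the total amount released. The sketch below shows the standard sampling correction used to construct a cumulative-release-per-area profile such as the µg/cm² values reported earlier; the example concentration readings are placeholders.

# Standard correction for sample-and-replace release testing: the amount
# released by time t_i is C_i * V_receptor plus everything carried away in
# the aliquots withdrawn at earlier time points.
V_RECEPTOR = 7.0   # mL, receptor volume of the release cell
V_SAMPLE = 2.0     # mL, aliquot withdrawn (and replaced) at each time point
AREA = 1.767       # cm², working (diffusion) surface of the cell

def cumulative_release_per_area(concentrations_ug_per_ml):
    """Convert the measured receptor concentrations (µg/mL) at successive
    sampling times into cumulative amounts released per unit area (µg/cm²)."""
    released = []
    withdrawn = 0.0  # µg removed with all previous aliquots
    for c in concentrations_ug_per_ml:
        q = c * V_RECEPTOR + withdrawn
        released.append(q / AREA)
        withdrawn += c * V_SAMPLE
    return released

# Placeholder readings (µg/mL) at, say, 1, 2, 4, 6, 8, and 10 h:
print(cumulative_release_per_area([0.5, 1.1, 1.9, 2.9, 4.0, 4.6]))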
In Vitro Antimicrobial Activity

The antibacterial efficacies of the CUR NE and the CUR-NE-based gel were tested against S. aureus ATCC 25923 (Gram-positive) and E. coli ATCC 25922 (Gram-negative) (MicroBiologics, St Cloud, MN, USA). The agar well diffusion method was performed based on the study of Hettiarachchi, with some modifications [3]. Before the experiments, Petri dishes were prepared by pouring Mueller-Hinton (MH) agar (Merck, Darmstadt, Germany) and allowing it to solidify. The bacterial cultures were adjusted to the 0.5 McFarland standard by dilution with 0.9% sterile sodium chloride solution [45]. The suspensions of the tested bacteria were swabbed onto the surface of the MH agar plates using a sterilized cotton swab to maintain a uniform distribution of bacteria across the plate surface. Wells were then created using the sterile back of a Pasteur pipette. The samples (CUR NE, CUR aqueous suspension (CUR), CUR-NE-based gel, CUR gel, and commercial gel) were dispersed in distilled water at a CUR concentration of 700 µg/mL or 350 µg/mL. About 50 µL of each prepared suspension was then added to a well. A placebo (the vehicle without CUR) was used as the negative control. A standard antibiotic disc containing ampicillin (10 µg, Liofilchem srl, Roseto degli Abruzzi (TE), Italy) served as the positive control for both S. aureus and E. coli. The plates were then incubated at 37 °C for 24 h. After the incubation, the inhibition zones were determined by measuring their diameters with a caliper (RS PRO, Shanghai, China).

In Vivo Wound Healing Study

Swiss albino mice (weight 25-30 g) were housed in cages of six, given unrestricted access to food and water, and kept at a room temperature of 25.0 ± 2.0 °C with a natural light/dark cycle. The mice were anesthetized by intraperitoneal injection of xylazine (8 mg/kg, Xyla, Interchemie, Venray, The Netherlands) and Zoletil (64 mg/kg, Zoletil 100, Valdepharm, Val-de-Reuil, France). The mice's dorsal hair was shaved, and two circular incisions (10 mm each) were made on the dorsal interscapular area of each animal by excising the skin with surgical scissors. The animal treatment groups for the left wound on each mouse (treated every 2 days, six mice per group) were as follows: group CUR-NE gel (CUR-loaded NE-based gel), group CUR gel (CUR-loaded gel), and group Commercial gel (commercial gel). The right wound on each animal was treated with the vehicle (placebo gel) every 2 days. A camera was used to document the wounds' progression at the beginning, middle, and end of the 14-day therapy. ImageJ software (version 1.54d, NIH, Bethesda, MD, USA) was used to determine the level of wound closure by measuring the wound diameter, and the wound diameter ratio (cm/cm) of the groups treated with the different CUR-loaded formulations and the Placebo gel at day 14 was calculated [38]. The in vivo experiments were approved by the Institutional Ethics Committee of the University of Medicine and Pharmacy, Hue University (approval number H2022/034, dated 20 May 2022).

Histopathological Study

On the 5th, 10th, and 14th days of the study, the mice were anesthetized and the wound area was excised for histological investigation. The extracted tissues were stained with hematoxylin and eosin (H&E). A microscope (Nikon SMZ745T, Tokyo, Japan) was used to identify histopathological alterations in the dermis and epidermis [38].

Statistical Analysis

The Origin 9.0.0 software (MA, USA) was used for the statistical analysis. The data are displayed as mean ± standard deviation (SD) and were analyzed using Student's t-test or one-way ANOVA followed by Tukey's multiple comparisons test. Statistical significance was defined as p < 0.05.

Conclusions

This study reported the successful preparation of an NE containing CUR using the emulsion inversion method, with the following optimal formula obtained using the Design Expert 12.0 software: 0.5% CUR, 11.77% Capryol 90, 19.29% Labrasol, 19.29% Cremophor RH40, 19.29% propylene glycol, and water. The resulting NE had a yellow-orange color and was transparent, uniform, and stable during centrifugation, with a droplet size of 22.87 nm and a PDI of 0.348. The TEM analysis showed that the particle shape was spherical. Furthermore, when incorporated into a gel, the CUR-NE gel showed a smooth texture, a pH of 5.34 ± 0.05, and better in vitro drug release than a commercial gel with nanosized CUR (21.68 ± 1.25 µg/cm² versus 13.62 ± 1.63 µg/cm² after 10 h, respectively). The CUR aqueous suspension and the CUR gel did not show any antibacterial efficacy against S. aureus and E. coli at the tested concentrations of 350 and 700 µg/mL.
However, the CUR NE and CUR-NE gel showed better antibacterial activity than the CUR gel and the commercial gel. Our CUR-NE gel formulation significantly accelerated wound healing in vivo as compared with the CUR gel and the commercial gel with nanosized CUR. Therefore, the CUR-containing NE has great potential for use in the development of topical gels to improve the bioavailability of CUR.

Figure 1. A phase diagram constructed for various Smix (surfactant and co-surfactant mixture) ratios.
Figure 2. The effect of the input parameters on the droplet size and PDI of the CUR NE [21].
Figure 5. (A) Imaging of the circular wound areas on the treatment days for the CUR gel, the CUR-NE gel, and a commercial gel with nanosized CUR (left wound: treatment with the different CUR-loaded gels; right wound: Placebo gel treatment); (B) the change in wound diameter after treatment with the different CUR-loaded gels; and (C) the wound diameter ratio (cm/cm) of the groups treated with the different CUR-loaded gels and the Placebo gel at day 14 (*: p < 0.05 vs. mice treated with CUR gel, n = 6).
Table 1. Saturation solubility of CUR in different excipients.
Table 2. Emulsification capability of surfactants and co-surfactants.
Table 3. The input and output variables for the optimization of the CUR-loaded NE.
Table 4. The physicochemical results of the designed experiments.
Table 5. The equations and statistical parameters of the predicted models.
Table 6. Inhibition zones of the different formulations containing CUR.
The DZERO DAQ/Online Monitoring System and Applications, Including an Active Auto-recovery Tool

The DZERO experiment, located at the Fermi National Accelerator Laboratory, has recently started the Run 2 physics program. The detector upgrade included a new Data Acquisition/Level 3 Trigger system. Part of the design for the DAQ/Trigger system was a new monitoring infrastructure. The monitoring was designed to satisfy real-time requirements with 1-second resolution as well as to handle non-real-time data. It was also designed to support a large number of displays without putting undue load on the sources of monitoring information. The resulting protocol is based on XML, is easily extensible, and has spawned a large number of displays, clients, and other applications. It is also one of the few sources of detector performance information available outside the Online System's security wall. A tool based on this system, which provides for auto-recovery of DAQ errors, has been designed. This paper includes a description of the DZERO DAQ/Online monitor server, based on the ACE framework, the protocol, the auto-recovery tool, and several of the unique displays, which include an ORACLE-based archiver and numerous GUIs.

INTRODUCTION

In March 2001 the Fermilab Tevatron proton-antiproton collider started Run II with a center-of-mass collision energy of 1.96 TeV. Both the CDF and DØ detectors and their trigger/readout electronics underwent extensive upgrades to take advantage of the increased center-of-mass energy and luminosity. The DØ L3 Trigger/DAQ group designed and implemented an Ethernet-based L3 Trigger/DAQ system (L3DAQ) capable of reading out the DØ detector at a rate of 1 kHz [1]. This paper details two projects that grew out of the L3DAQ upgrade: a monitor data server and a DAQ auto-recovery tool.

All DAQ/Trigger systems must have close to 100% uptime. A great deal of effort goes into a system design to achieve this, but problems inevitably occur during operation. Many problems that stop an experiment's DAQ system are external to the DAQ itself: a digitizer card hangs, for example. In order to diagnose these problems quickly, a responsive monitoring system containing complete status data is required.

The monitor system must be able to display system data for both experts and non-experts in a timely fashion to allow quick problem diagnosis and debugging. The system must be flexible enough to handle the dual tasks of debugging and commissioning as well as production running. It should minimally impact the performance of the system and also be fairly simple to extend with new monitor data as the need arises.

The monitor system described in this paper successfully met these goals. It is easy to use and has been slowly spreading beyond the L3DAQ project. It has spawned a large range of monitor-data related tools, some of which are described in section 2.4. The monitor system is described in section 2, where the program structure, communication protocols, performance, and future directions are discussed.
One of the tools spawned from the monitor project is an auto-recovery system called daqAI. This program gathers monitor information from several different detector components and makes a once-per-second decision about the health of the system. A rule-based expert system uses the monitor data to make the decision and informs the control room of a problem via a text-to-speech synthesizer. In some cases daqAI can also issue an init or reset to fix the problem. For these classes of problems daqAI has dramatically reduced the time to detect and recover from a problem. Section 3 describes this tool, including the expert system, the programming model, and possible future modifications.

THE MONITOR SERVER

The L3DAQ system contains over 150 separate software and hardware components. Understanding the health of the system requires monitor data from all of them. In turn, we have a large number of displays, many designed to address a different audience (experts or shift personnel) or a particular task, such as flagging a rare error condition.

The monitor system's initial design was based on the following requirements, many of which were based on our Run I experience:

• The addition of new data types to the system must be easy.
• Arbitrarily complex data types must be permitted.
• Allow many copies of the same source object (i.e., 67 readout crates, 82 nodes, etc.).
• Allow a large number of displays all querying for data.
• Do not make excessive requests for the same data from the same monitor source in short periods of time. In particular, if several copies of the same display are running, they should share similar data.

Monitor System Design

We chose to base our system around a Monitor Server. Figure 1 is a block diagram of the system. Clients furnish data, and Displays request the data. The Monitor Server (MS) sits between the two. The displays do not make direct connections to the clients. All requests in the system are driven by the displays: if a particular client's monitor data is not requested, the Monitor Server will never request it from the client.

Figure 1: A high-level diagram of the monitor system. Monitor data flows from left to right, and requests for particular monitor data from right to left. The Monitor Server (MS) caches replies from the clients.

Monitor data is indexed by three keys: the machine name, the monitor type, and the item name. The machine name is the DNS name of the source machine. The monitor type is the class of monitor client; for example, an l3xnode is a collection of items from an L3 Trigger farm node. Finally, the item name refers to a particular data item. The data returned for an item is arbitrary and can be as large or small as desired (see below). However, the finest-grained monitor data request is a monitor item.

The MS stores the most recent reply from each client in a data cache. When a display requests data, the MS first checks the cache. If there is a match, the cached data is returned instead of making a new request to the client. The display's request may optionally specify a staleness time. If the cached data is older than the staleness time, the cache is refreshed with a request to the client. If no staleness time is specified in the request, a default of one second is used.

All communication between monitor system components is over TCP/IP sockets. We use the ACE framework for all sockets programming [2]. This has the added benefit of making the code cross-platform (the MS is designed to run on both Windows and Linux).
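A minimal sketch of the cache-with-staleness policy just described, keyed by (machine name, monitor type, item name); the class and method names are invented for illustration, since the production MS is a C++/ACE program.

import time

class MonitorCache:
    """Caches the most recent client reply per (machine, type, item) key and
    refreshes it only when a display's staleness budget is exceeded."""
    def __init__(self, fetch_from_client, default_staleness=1.0):
        self._fetch = fetch_from_client   # callable: key -> data (XML string)
        self._default = default_staleness # seconds, matching the MS default
        self._cache = {}                  # key -> (timestamp, data)

    def get(self, machine, mon_type, item, staleness=None):
        key = (machine, mon_type, item)
        budget = self._default if staleness is None else staleness
        hit = self._cache.get(key)
        if hit is not None and (time.time() - hit[0]) <= budget:
            return hit[1]                 # fresh enough: serve from cache
        data = self._fetch(key)           # otherwise re-query the client
        self._cache[key] = (time.time(), data)
        return data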
We have also taken advantage of ACE's multithreaded programming paradigms (see section 2.2). The threading was specifically added to handle timeouts in the clients with minimal extra programming work. While a client timeout will hang a single display's request for data, it will not hang other displays' requests.

All connections to the MS are persistent. This avoids the overhead involved in setting up new connections. There is a web-accessible daemon that acts as a go-between to the MS for displays that require a short-lived connection (see section 2.4.2).

The system is designed to recover from crashes and power outages. The protocol requires both the displays and the clients to initiate their connections to the MS. If the connection is dropped for any reason, the display or client immediately tries to reconnect. It is possible for a TCP/IP connection to be broken without being closed; this occurs most frequently when the MS suffers a power outage. A ping message is sent by the MS to each client every 10 seconds if there has been no data request to that client. If the client doesn't see a ping message every 25 seconds, it drops the connection. Without this feature all clients would have to be restarted in the case of an MS failure.

2.1.1. Data Format

All data between the MS and the clients and displays is XML based. The XML structure is shown in Figure 2. At its core, the XML consists of a monitor item name as the XML tag. The reply from the client contains the data as the contents of the tag. We do not maintain a DTD.

Figure 2: Sample client XML request and reply. The upper block contains the XML query sent by the MS to the client, and the lower block represents the reply from the client with the data fields filled in.

The data format has been extremely helpful in debugging and commissioning the system: one can easily read the text that comes back from a monitor item request from a python program or similar.

Each monitor item can contain arbitrary data. The client programmer is encouraged to provide the data in an XML format, but that is not a requirement. While binary data is not legal, it is possible to include almost any arbitrary character using the CDATA XML construct.

All TCP/IP messages are a 32-bit network-ordered length word followed by the contents of the XML in ASCII. No binary data is sent in either direction.
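The wire format is thus simple length-prefixed framing. A sketch of the send and receive sides follows; the helper names are illustrative, as the actual implementation is C++ on ACE.

import socket
import struct

def send_message(sock: socket.socket, xml_text: str) -> None:
    """Frame a message as a 32-bit network-ordered (big-endian) length word
    followed by the ASCII XML payload."""
    payload = xml_text.encode("ascii")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> str:
    """Read one length-prefixed XML message off the socket."""
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return recv_exactly(sock, length).decode("ascii")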
Program Design

The block diagram for the MS is shown in Figure 3. The display handlers feed requests to a processing queue. The dispatcher takes the requests off the queue, parses them, and sends them to the clients for processing. Once all the data has been received back by the dispatcher, it is sent back to the displays.

The display handlers, the dispatcher, and the receiver parts of the client handlers each have an associated thread. The display requests are linearly queued. The dispatcher removes them one at a time from the queue and parses the XML. As it parses the display's request, the dispatcher builds a request for each client. Once built, the requests are handed off to the client handlers. After the client handlers have assembled all the requested data and handed it back to the dispatcher, the dispatcher builds the complete reply message and sends it directly back to the waiting display. If a client handler cannot find the data in the cache, it requests it directly from the client. Most requests take less than 150 ms to complete, and much less if they involve only cache hits.

All components of the monitor system make connections to the MS. If the MS is not available, the client or display keeps attempting to reconnect.

The protocol for the display is very simple. After making the connection to the MS, XML-formatted requests are sent, similar to Figure 2. After the MS retrieves the data from its cache or requests it from the clients, it returns a similarly formatted XML document that contains both the data and a list of all the machines of that monitor type. If the data requested is from a machine type with many copies, like an l3xnode, then a copy of the data is returned for each machine. Data from a specific machine can also be requested.

The client communication protocol is very similar. The MS sends the client an XML request very similar in format to the display's request. The XML is designed so that the client can simply fill in the monitor items one at a time and reply with that information (using the XML Document Object Model (DOM)).

There are numerous timeouts in the system to keep it well behaved even when a client or display misbehaves. If a request cannot be queued by the display handler, the display gets an error message. If the request sits on the internal queue for longer than one second, a timeout message is sent back to the display. A client has 3 seconds to reply to a request for data; if it fails to reply in time 10 times in a row, it is disconnected. The dispatcher thread allows 2 seconds for all clients to return their data, and if a client is still busy processing the previous request when a new one starts, the dispatcher marks that client as having timed out in the display's reply.

In order to correctly put monitor data in the cache, the MS must parse the reply from the client. This is done with a high-speed, zero-copy, hand-coded parser.
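Since the MS only needs the boundaries of each item tag, not its contents, a full XML parse is unnecessary. The sketch below illustrates the kind of scan such a hand-coded parser can perform, yielding index spans into the original buffer rather than copies; it is a toy that ignores nested same-name tags, not the actual C++ parser.

def scan_items(reply: str):
    """Scan a client reply for top-level <item>...</item> spans and yield
    (item_name, start, end) index triples into the original string, so the
    cache can store slices without re-parsing the item contents."""
    pos = 0
    while True:
        open_start = reply.find("<", pos)
        if open_start < 0:
            return
        open_end = reply.find(">", open_start)
        if open_end < 0:
            return
        name = reply[open_start + 1 : open_end].split()[0]
        close_tag = f"</{name}>"
        close_start = reply.find(close_tag, open_end)
        if close_start < 0:
            return
        yield name, open_end + 1, close_start  # contents span, uncopied
        pos = close_start + len(close_tag)

# Example: list the item names and contents in a small reply
reply = "<rate>33</rate><queue depth='2'>ok</queue>"
for name, start, end in scan_items(reply):
    print(name, "->", reply[start:end])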
Client and Display Design

It was recognized early in the monitor system project that simple interfaces would make for wider adoption. The TCP/IP client and display protocols were designed with this in mind, and we have written APIs and libraries to implement them. We currently have APIs implemented in C++ and python for the client-side protocol, and implementations in C++, python, java, and C# for the display-side protocol.

When a client first connects, it advertises its type and machine name by sending an initial XML message. Clients must have a thread listening on the port for incoming messages and must serve them as fast as possible. If the client takes longer than 3 seconds to respond, the MS flags an error. Repeated failure to respond in time causes the MS to drop the client's connection.

We have clients in the system that implement the TCP/IP and XML protocol directly. We also have a collection of objects that take care of all the required XML parsing and data conversion. In fact, it is possible to declare an arbitrary instance of a data type to be monitored. Using the common C++ template traits technique, the underlying code renders the data to XML whenever a request for it arrives. Integer counters, for example, can be declared via a template and then used as normal integers in most cases. We have also written a python compiled module that uses a simple name-value pairing to set monitoring variables; the servicing of monitor requests from the MS is invisible to the user. Both API implementations use the Xerces XML parser [3].

We have created a similar set of libraries for the display writer. The request to the MS is usually part of the display's main program loop. Displays often vary which MS items they request depending upon the view the user has chosen. The libraries all incorporate XML parsing of one sort or another, though further parsing of a complex monitor data item is left entirely up to the display writer. The package most appropriate to the language the user is working in is generally chosen.

A small set of monitor displays are also clients. These frequently collate large amounts of information and publish it back in collated form. This reduces the amount of data that has to be sent over the wire, especially to a display on the other end of a low-end DSL line. The daqAI auto-recovery program, described in Section 3, is one such display/client combination.
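A sketch of the kind of name-value client API described above, in which user code simply assigns values and a registry renders the requested item to XML on demand; the API names are invented, as the real module is a compiled python extension.

class MonitorRegistry:
    """Toy version of a name-value monitoring API: user code sets values by
    name, and a render step answers an MS item request with XML on demand."""
    def __init__(self):
        self._items = {}

    def set(self, name: str, value) -> None:
        self._items[name] = value   # as cheap as a dict assignment

    def render_item(self, name: str) -> str:
        # Fill the requested item tag with the current value, XML-escaping
        # the text so arbitrary strings stay legal.
        from xml.sax.saxutils import escape
        value = self._items.get(name, "")
        return f"<{name}>{escape(str(value))}</{name}>"

# In user code, monitoring is a single assignment per update:
mon = MonitorRegistry()
mon.set("events_processed", 123456)
mon.set("rate_hz", 33.2)
print(mon.render_item("rate_hz"))   # -> <rate_hz>33.2</rate_hz>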
Security

Fermilab is a National Lab, and, as such, all computer systems critical for the operation of the accelerator and the taking of data must be protected by a firewall. The MS is no exception, and thus there is no way to directly contact the MS from outside the firewall. Early on it was recognized that this made the system less useful for remote debugging if displays could not connect. We have received permission to open a single port to a specific machine across the firewall. This second machine receives MS requests, relays them to the MS, and relays the answers back. The relay contains no intelligence, but does do careful buffer-length checks, illegal-character checks, etc. (a minimal sketch of such a relay is given below, after the display descriptions). The relay system is a Windows XP system. All clients must be inside the firewall.

Monitor Displays and Clients

This section contains a brief description of a number of the monitor displays and clients we have running in production.

Monitor Clients

The L3DAQ's readout crates contain a Single Board Computer (SBC) that runs the VME readout. The system supplies monitor information on the readout state of each crate, CPU usage statistics, and data transmission failures. The statistics furnished by the SBC to the monitor system require traversing fairly complex data structures in the program. We have had to use a fast mutex to protect against modification by the main SBC program while the monitor data is being collected. The performance of the SBC is not noticeably affected by the locking because the caching feature in the MS reduces the monitor requests to about two per second.

The Level 3 farm nodes are another component for which CPU is a valuable resource. Currently, information on incomplete events and CPU usage is generated. There are plans to convert trigger pass statistics and physics performance from another monitor system to the one described in this paper.

The DØ trigger framework (TFW), a non-L3DAQ system, also generates extensive information. This includes all the scalars for the Level 1 and Level 2 triggers and configuration information.

There are also a number of monitor repeaters. For example, we have one system that monitors a web page generated by the accelerator division and scrapes the CDF and DØ luminosity, the anti-proton stack size, and even the temperature.

Monitor Displays

The principal shift monitor displays for the L3DAQ are written in Java. The designs are based upon the principles outlined in Tufte's books on the display of graphical information [4]. The main L3DAQ display, uMon, contains a relatively large amount of densely packed information arranged for interpretation by both experts and non-experts. In general we find that though non-expert shifters require about a week to familiarize themselves with the display, they can then diagnose a large range of L3DAQ and other subsystem problems with just a glance. Figure 4 shows a portion of the uMon display. A similar display for the L3 CPU farm also exists. The displays were carefully prototyped with simple drawing programs (PowerPoint, xfig) and handed around to a small group of experts and non-experts before programming began. The displays' designs and usability benefited from this process. This set of displays runs on both Linux and Windows.

Figure 4: A small portion of the uMon shifter-monitor display. Each large box represents a single readout crate. The % shows the incomplete event rate for the crate, and below it is the status of the L3DAQ route and event queues (on the left, in the white area). The yellow area shows the status of every connection the SBC maintains (there are three farm nodes down). The white area on the right is a rate plot; one small downtime is visible as an inverse white spike.

The L3DAQ also has an expert display based on the freeware version of Qt [5]. This display has a fairly simple main window from which further dialog boxes can be opened. This drill-down approach has worked well for showing progressively more detailed information. The display alters its monitor data requests to suit the information it needs to show; thus it can request detailed, expensive-to-generate information for one or two particular monitor system clients. This display also runs on both Linux and Windows.

We have also written a small Windows systray monitor. This puts a small 32x32 pixel icon in the Windows taskbar that displays the system's health continuously. It has only a rate meter and two green/red circles that indicate general system health. Moving the mouse pointer over the icon displays a small popup with further information. This small display was inspired by Quiet Computing principles and has proved a useful way for experts to watch the L3DAQ while doing other work.

The systray monitor is often run on a portable, which isn't always connected to the internet, so it is more convenient to use an http-based interface for this monitor tool. There is a web site that acts as a front end for the monitor server. The web site, called l3mq, also allows developers debugging the system to issue monitor queries without having to write code. It is also possible to store a query and reissue it by accessing a single URL. Finally, the web site collects statistics from the MS about which items have been requested and maintains a database. The web site user can then add documentation. In the future this will automatically be turned into a manual of all the monitor items available in the system.
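As promised in the security discussion above, here is a minimal single-connection sketch of the firewall relay: a dumb forwarder that only sanity-checks the length word. The host names, ports, size limit, and strict request/reply alternation are all assumptions; the production relay is a Windows service, not this script.

import socket
import struct

MS_HOST, MS_PORT = "ms.inside.example", 9090   # illustrative addresses
LISTEN_PORT = 9091
MAX_LEN = 1 << 20                              # assumed 1 MB sanity limit

def forward(src: socket.socket, dst: socket.socket) -> None:
    """Forward one length-prefixed message, enforcing the length check."""
    header = src.recv(4, socket.MSG_WAITALL)
    if len(header) < 4:
        raise ConnectionError("connection closed")
    (length,) = struct.unpack(">I", header)
    if length > MAX_LEN:
        raise ValueError("oversized message rejected")
    body = src.recv(length, socket.MSG_WAITALL)
    dst.sendall(header + body)

with socket.create_server(("", LISTEN_PORT)) as server:
    outside, _addr = server.accept()           # one display at a time
    with outside, socket.create_connection((MS_HOST, MS_PORT)) as inside:
        while True:
            forward(outside, inside)           # display request -> MS
            forward(inside, outside)           # MS reply -> display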
We also have an archiver program that issues a query once every 15 seconds and writes the results to a large data file. A web interface allows one to make time-based queries and plots online. The storage format was originally a root file, but indexing proved difficult. The database was converted to Oracle, but the amount of stored data proved to be too large for that system. We are planning to switch to a mixed system: root to store the raw data, and Oracle to store the index information.

Performance

The MS has been in operation for almost 2 years. The monitor system currently runs on a dual 1.2 GHz CPU Linux-based system with 0.5 GB of RAM. In typical usage it queries clients for 0.5 MB/sec and replies to displays with 1-2 MB/sec of data. The CPU is usually 25% busy. The MS typically has 150 clients connected and over 70 displays.

The data format is ASCII and not compressed. During our initial running we had a system-wide query that was delivering 1 MB/sec to each copy of a particular display type. We developed an ASCII encoding to compress the data so that less than 100 KB was delivered to each display.

The MS can have more than 100 threads executing at a time. We have not noticed degradation of the CPU or of performance as a function of the number of threads running on the machine.

Our original implementation of the MS used Xerces to parse the replies from the clients. This proved to be a CPU bottleneck. Since the MS doesn't care about the contents of a data item, it didn't make sense to spend CPU cycles parsing it. We wrote our own custom parser that takes advantage of the known format of the XML replies. This reduced the CPU utilization by an order of magnitude.

Future Directions

The MS is a stable product and rarely crashes or has modifications made to its source base. We have altered some of the communication timeouts as the rest of the system has grown more stable (lengthening them).

In the future we may need to support more monitor displays or clients. One possibility is to build a hierarchy of caching monitor servers, in which each MS queries the MS below it for information. This is particularly attractive if there are a large number of a particular client type with a fairly stable query.

It is also possible to run with multiple MSs, each one devoted to a particularly large sub-system. Implementing this is a matter of configuring which machines/ports the MSs run on.

THE DAQAI AUTO RECOVERY UTILITY

There are small classes of DAQ problems in a large experiment like DØ that are easy to recover from but cause significant downtime. For example, DØ had a bug in some readout crate code. The programmer was unable to fix the bug immediately because they were stuck outside the country (visa difficulties, post 9/11). The result was 30-120 seconds of downtime every 10 minutes. The length of the downtime was a strong function of the wakefulness of the shift personnel. The problem was easily recognizable and also easy to recover from: a single init command needed to be issued. This experience and several similar ones were daqAI's genesis. The utility is designed to recognize a number of specific problems and, if possible, recover the DAQ system so it continues taking data without shifter intervention. The daqAI utility also informs the control room via a text-to-speech interface of what it is doing and what problem it has found, and keeps a shift summary. It is important that the control room be informed of daqAI's actions; otherwise the shifter and the program could work at odds.
The system is designed around CLIPS, a fact-based, embeddable expert system built around a rule-inference engine [6]. Monitoring data is collected from the MS and converted to facts. The facts drive the rule engine, which in turn executes embedded subroutines and functions or defines new facts, which will, in turn, cause more rules to execute. Functions are defined that can effect the desired changes. Figure 5 shows a high-level architectural diagram of the system.

Figure 5: The daqAI architecture. A C++ shell mediates the actions and inputs of the CLIPS script and all components external to the system. Connections to the Logger, Run Control, the text-to-speech synthesizer, and the monitor server are all over TCP/IP.

daqAI is both an MS client and a display. It uses the display features to gather the data it uses to make its decisions, and the client features to publish its internal state, actions, and log.

Program Structure

daqAI, the C++ program, is a shell. Embedded in the shell is the CLIPS system. At runtime a CLIPS rule script is loaded and run, which allows us to change daqAI's behavior without having to rebuild the system. Figure 5 shows the general design. On each iteration, the system resets the CLIPS engine to its cleared state and defines facts corresponding to the monitor data. The translation algorithm is a fairly simple text-based one. The CLIPS inference engine is then run. During the engine's execution, rules make callbacks into the daqAI shell to request logging output or to request an init of the L3DAQ system. The daqAI shell does nothing during the callbacks other than to mark that they have occurred. Once the inference engine has run to completion, the daqAI shell examines the DAQ init requests, log requests, text-to-speech requests, etc., for new ones that weren't present on the previous iteration. The new requests are acted upon; the old ones are ignored. Any requests that were made on the last iteration but not the current one are noted, though no external action is taken. Finally, monitor variables and information are generated and published for any MS requests. The loop then repeats.

The CLIPS rule engine is started fresh for each iteration through the main loop. All previous knowledge is erased from the engine at the start of the loop. Thus, the system arrives at the same set of conclusions each time through the loop as long as the inputs remain the same. Of course, if a problem occurs requiring a reset to be issued, one would expect the system to request a reset on each iteration. As shown in Figure 5, the daqAI C++ shell watches the requests made by the CLIPS system and takes action only when it observes a change. So the first time a reset request is made, the shell will actually issue the reset. If the same request is made on the next iteration, no reset will be issued to the DAQ system. This pattern is followed for all actions.

The system could have been designed to remember facts from iteration to iteration. A fact-based system is well suited to noticing a set of facts in combination and flagging them. However, the code to recognize that this set of conditions no longer exists, but did just before, is not clear or easy to write. The problem domain is also well suited to the idea of a fresh start each iteration: this version of the system isn't designed to watch for patterns in the time domain, just for the presence of a set of conditions.
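The "act only on new requests" behavior is essentially edge-triggered: an action fires on the iteration in which a request first appears and is suppressed while the request persists. A compact sketch of that pattern, with invented names:

class EdgeTriggeredActions:
    """Issue an external action only on the iteration in which a request
    first appears; repeated identical requests are ignored until the
    request disappears and then reappears."""
    def __init__(self, issue_action):
        self._issue = issue_action   # callable taking the request name
        self._previous = set()

    def end_of_iteration(self, requested: set) -> None:
        for req in requested - self._previous:
            self._issue(req)         # rising edge: act once
        # Requests that vanished are noted but trigger no external action.
        self._previous = set(requested)

# Example over three iterations with a persistent reset request:
actions = EdgeTriggeredActions(lambda r: print("issuing:", r))
actions.end_of_iteration({"reset_l3daq"})   # issues the reset
actions.end_of_iteration({"reset_l3daq"})   # same request: suppressed
actions.end_of_iteration(set())             # request cleared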
There are situations that require some memory from previous iterations. For example, a number of monitor variables tend to give false readings for very short periods of time; daqAI must make sure a monitor variable is out of range for an extended period before acting. daqAI therefore contains a number of constructs that give the script a crude model of time. There are timers that count up as long as a special function is called each iteration. If it isn't called, the timer resets to zero. The timer's value is available as an input to rules as a fact. There are also counters, and even arbitrary facts can be set and thus remembered from iteration to iteration.

The main loop's plug-in architecture allows communication with an arbitrary set of external devices. Currently these include the main DØ Run Control program, log files, a control-room text-to-speech synthesizer, the official control-room logbook, and email. Each time daqAI identifies an error it assigns it a name. Once a downtime condition has been resolved, a complete report is added to the official online logbook, where it can be viewed by any member of DØ. At the end of each shift daqAI reports what errors occurred and their total downtime. This gives an accurate accounting of downtime at DØ.

3.1.2. The CLIPS Language

CLIPS rules are stored as a text file. The structure of the script is completely up to the programmer. The language is rich, containing not only rule constructs but also objects; the daqAI script makes use only of the rule style. For complete documentation see reference [6]; this section contains a very brief introduction to rule-based programming in CLIPS.

The runtime environment contains a list of active facts. Each fact has a name and an arbitrary number of arguments. For example, (daq_rate 33) might indicate the L3DAQ rate is currently 33 Hz, and (bad_muon_roc) might indicate that a muon readout crate has gone bad. In the daqAI program, the C++ shell defines a list of facts that are a direct translation of the monitor data.

The CLIPS program is a list of rules. Rules have preconditions and actions. A rule fires when its preconditions are met. Preconditions are usually pattern matches involving facts. Figure 6 is a simple example:

    (defrule b_daq_rate_low "Is rate too low?"
      (daq_rate ?rt&:(< ?rt 50))
      =>
      (assert (b_daq_rate_low ?rt))
    )

Figure 6: A simple CLIPS rule. This rule fires only if the daq_rate is less than 50 Hz; given the initial presence of the (daq_rate 33) fact, it asserts a new fact, (b_daq_rate_low 33).

Very powerful programs can be built out of this simple set of constructs. Facts represent the current environment and rules represent the knowledge.

The CLIPS daqAI Script

The daqAI CLIPS script is the heart of daqAI. Its rules contain the hand-coded knowledge of the problems it recognizes and the actions it should perform upon their recognition.

The script is laid out in several tiers, each tier feeding the next. The lowest tier contains the facts that are directly converted from the monitor data by the daqAI C++ shell. The second tier contains very basic inferences from the raw data; for example, it contains a rule testing for a low L3DAQ event rate. The third tier contains problem-recognition rules and often involves several second-tier inputs.
The third tier also ensures that the existence of a problem is actually worrisome; this prevents daqAI from trying to control the system during commissioning, for example. The fourth tier contains the action rules. These issue commands to run control, log messages, and send text to the speech synthesizer (an old DECtalk DTC01 machine).

Performance

The current version of daqAI has been in operation for almost a year without major modifications. When the daqAI program first started running, the DØ DAQ system had a number of problems that daqAI was able to fix much faster and more consistently than most shifters. As a result, data-taking efficiency went from about 75% to 85%. Currently, in June of 2003, the DØ DAQ system is much more stable, and daqAI's direct impact is correspondingly smaller. One of its more important functions now is to create the shift summaries and list the individual downtimes.

The current version of the daqAI CLIPS script uniquely recognizes 8 different problems, in addition to a general Unknown downtime. For 4 of them there are established automatic recovery procedures. The Unknown classification indicates a problem that daqAI doesn't explicitly recognize.

One key to the success of a system like daqAI is the wealth of monitor information available to it. Adding new data sources to our MS system only increases the potential of a tool like daqAI.

Comments on Usage

Though daqAI has proved quite useful, it is not without its problems. In particular, the amount of work required to identify a common problem and implement an automated fix can be daunting, especially considering that in a system as complex as the DØ DAQ the same symptoms can indicate different problems over time.

Finding a new problem isn't difficult; daqAI leaves behind enough logging information to make this easy. A key indication is the Unknown category of downtime becoming quite large. Log-file investigations and some time in the control room on shift will quickly point out the class of problem. Unfortunately, the symptoms of the problem are often duplicated during normal running, and it can take a few days of testing to get the rules just right. Once that is correctly implemented, an automated fix can be added. It was often found that what looked like a single problem was, in fact, two types of problem requiring different fixes. This process can again take several days to sort out correctly.

In the long term, the inability of the script to respond to changing conditions can lead to problems. If the underlying problem is fixed but daqAI's script is not changed, it can introduce dead time into the system by issuing run-control commands where they are not required.

Lastly, we were perhaps naive in thinking that the sociology of the experiment was not something we would have to deal with. Many detector groups were reluctant to have anything but a shifter control their detector. daqAI quickly gained acceptance as a tool to identify problems, but it took longer before people were comfortable with it sending direct commands and manipulating the system.

Future Directions

There are two possible improvements to daqAI based on current experience: sensitivity to the time domain, and automatic run-condition classification.
daqAI is not sensitive to the sequence in which things happen without resorting to the timer-counters mentioned above. The symptoms of a problem often evolve over time, or the differentiating fact is what happens in the initial 10 seconds after the data rate begins to fall. Some of the monitor data has only coarse time resolution (more than 5 seconds), but much of it has 1-second resolution.

The second improvement addresses the most time-consuming aspect of daqAI problem identification. There are many automatic classification schemes for physics variables based on various figures of merit. Something similar could be designed for a daqAI-like system. One could imagine adding the ability to monitor shifter actions: when the shifter took an action that clearly changed the state of the system, it would be recorded along with the current system state. With enough statistics, the system might be able to begin building a model.

Both of these approaches, though interesting, would require a good deal of effort. Their implementation schedule has not yet been decided.

CONCLUSIONS

The monitor system based on a caching monitor server has proved to be a simple, robust, and easy-to-use monitor system for the DØ DAQ and other parts of the DØ online system. The key to its adoption by the rest of DØ was the ease with which one could communicate with it. The protocol was designed to be as simple as possible and thus gained an acceptance that other monitor systems didn't as readily. We believe it is important to have as few monitor systems in an experiment as possible, as a monitor system is only as powerful as the data it is serving.

Many clients and displays were discussed in this paper. In particular, the daqAI auto-recovery program has proved to be a unique use of this monitor data. When first implemented it helped DØ gain over 10% data-taking uptime, and in that sense was very successful. It correctly identified and fixed the most vexing problems. It continues to function, though DØ's DAQ system is much more stable than before and thus makes use of daqAI's auto-recovery features less frequently.

The most important lesson learned by our group during the design and implementation of these projects was the value of having monitor information easily and quickly available. The design allowed us to quickly add new monitor items in even some of the busiest environments. Rich and prompt monitor data is a good start toward better experiment uptimes.

Figure 2: Sample client XML request and reply. The upper block contains the XML query sent by the MS to the client, and the lower block represents the reply from the client with the data fields filled in.

Figure 3: Block diagram of the MS's object structure. The display handlers feed requests to a processing queue. The dispatcher takes the requests off the queue, parses them, and sends them to the clients for processing. Once all the data has been received back by the dispatcher, the data is sent back to the displays.

3.1.1. The C++ Shell. daqAI is designed around a loop that executes forever. The loop is repeated approximately once per second, beginning with a MS request to gather the monitor data.
2017-09-27T12:43:53.977Z
2003-06-29T00:00:00.000
{ "year": 2003, "sha1": "b77f92b9725b846bba4b15ae62bdfa45ef31aa94", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b77f92b9725b846bba4b15ae62bdfa45ef31aa94", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [] }
237714362
pes2o/s2orc
v3-fos-license
Assessments of Solar, Thermal and Net Irradiance from Simple Solar Geometry and Routine Meteorological Measurements in the Pannonian Basin

In this paper, we discuss several different procedures for calculating irradiance from routine weather measurements and observations. There are between four and eight frequently used parameterizations of each radiation balance component in meteorological preprocessors, and we investigated them. First, the estimated and measured solar and net irradiance were compared; afterwards, the estimated and measured longwave irradiance were investigated. Then, we recalculated the net irradiance as the sum of the global solar irradiance, the longwave downwelling irradiance, the reflected solar irradiance and the upwelling longwave irradiance. Statistical estimates of the described methods were also recalculated and compared with each shortwave and longwave radiation budget component measured separately with WMO first-class radiation instruments (Kipp&Zonen CMP6 and CMP11, and CGR3 and CGR4) at the Agrometeorological Observatory Debrecen, Hungary, during a one-year period. Finally, we compared the calculated and measured values for longer periods (2008-2010 and 2008-2017) through statistical errors. The suggested parameterizations of the net radiation based on the separately parameterized radiation balance components were: Foken's calculation for the clear-sky solar global irradiance, the Beljaars and Bosveld parameterization for the albedo, the Dilley and O'Brien methodology for the clear-sky incoming longwave (LW) irradiance, and the Holtslag and Van Ulden cloudiness correction for the all-sky incoming LW and for the LW outgoing irradiance.

Introduction

In meteorology and climatology, a routine meteorological observation program generally selects only state variables of the atmosphere for measurement. At the same time, the growing research on climate change, air pollution dispersion and wind energy resources has increased the demand for the best possible information about surface layer parameters (SLP): the turbulent fluxes of momentum (τ), sensible (H) and latent heat (LE), trace gases and aerosol particles near the ground [1-4]. It has been known for some time already that the surface fluxes of momentum, heat and moisture are essential for determining atmospheric steady states [5,6]. These fluxes can be directly measured or calculated by, e.g., eddy covariance, profile, gradient and Bowen ratio methods, or calculated from routine measurements [7-9]. Therefore, the development and checking of models for these calculations is an important area of research, where the Monin-Obukhov similarity theory (MOST) is a generally accepted framework for describing interactions between the Earth's surface and the atmosphere [10]. When dealing with problems inside the surface layer using MOST, the Monin-Obukhov (MO) scaling variables are the key variables in a dimensional analysis. These are: the Obukhov length scale (L), the friction velocity (u*), the velocity scale of the convective planetary boundary layer (PBL) dependent on the sensible heat flux (w*), the roughness length (z0) and the temperature scale (T*). The most important scaling variable in MOST is the turbulent length scale L, which dictates the stratification of the atmosphere. In Western Europe, many authors have investigated processes inside the surface and boundary layer, using the Cabauw experiment and data, and suggesting ten classes of atmospheric stability depending on the turbulent length scale L.
These classes are: extremely stable, very stable, moderately stable, lightly stable, neutral, lightly unstable, moderately unstable, very unstable and free local convection. It has been found that the universal functions of MOST do not work properly in the case of extremely stable stratification of the atmosphere. Comparing the number of hours with extremely stable stratification at the Cabauw location and at the locations of Novi Sad and Debrecen, both in the Pannonian region (Figure 1d), enormous differences were noticed: while the Cabauw location has about 3% of these situations, both stations in the Pannonian Basin have more than 30%.

Because of the significance of L, the authors who offered calculations of this parameter from routine meteorological measurements gave huge support to the investigation of the PBL. The modeling of these scales from standard weather observations (FM12-SYNOP) was developed and described by Holtslag, Van Ulden and De Bruin from 1982 to 1988 and partly modified by Foken and Göckede more recently [1,5,11-15]. Their model needs only astronomical conditions and routine SYNOP measurements for determining the SLP, so it can be widely applied where no special flux or profile measurements exist. The first step in these models, which still determine the development of the parameterization methods [16-18], is assessing the global solar radiation (K↓) in cases when this parameter is not measured on a routine basis. The next steps are the assessment of the albedo (A) and, finally, of the crucial parameter, the net radiation (Q*), which is usually not included in standard meteorological measurement programs. However, these models/parameterizations can be used only after careful consideration of all the relevant facts. Namely, the parameters used were determined empirically, and they are a function of the place, time and weather situation of the period in which the experiment was done. Therefore, these models may have different representativeness in different regions.

Since the calculation of L from routine SYNOP measurements essentially depends on the sensible heat flux, and the sensible heat flux essentially depends on the upwelling and downwelling shortwave and longwave radiation, the basic question is which radiation models are the most suitable in the Pannonian Basin. This is the first step in the classification of stability. The stability class is crucially important in a region that has a specific distribution of atmospheric stability in comparison with other regions.

Several papers have checked the parameterization procedures of the individual radiation balance components in recent years, especially for extreme geographical locations. For example, longwave parameterization procedures were investigated over the Tibetan Plateau by Zhu et al. [19] and Liu et al. [20]. Short- and longwave radiation parameterizations were tested against satellite measurements over an alpine glacier in Italy (Senese et al. [21]). Stettz et al. [22] analyzed shortwave radiation parameterizations in a tropical area. Lindauer et al. [23] developed a new general parameterization for incoming solar radiation dependent only on the screen-level relative humidity together with site-specific astronomical information. In addition to surface measurements, satellite observations are increasingly used in radiation balance modeling.
Our work is different, as we take into account all components of the radiation balance, focusing on the modeling methods used in meteorology for a lowland area of Europe where a high-density surface measurement network is available over eight countries of the Carpathian Basin. The results of previous radiation and surface energy budget investigations in Southeast Europe and the Carpathian region are illustrated by a few examples [24-28]. Two of the SVAT models are highlighted. The land-surface flux model (PROGSURF) was designed jointly at the Universities of Vienna and Budapest [24]. This model solves the surface energy budget equation using the Penman-Monteith approach. Net radiation is modeled separately for bare and vegetation-covered soil through the radiation balance components. The model comprises one vegetation layer and three soil layers. Surface temperature prediction is made by the heat conduction equation in conjunction with the force-restore method. The ground-air parameterization system (LAPS) also relies on data from synoptic stations. It was developed at the University of Novi Sad [25] and is structured according to similar principles as the previous one [24]. The model is suitable for calculations over both heterogeneous and non-heterogeneous surfaces. It follows the methodology of De Bruin and Holtslag [10] in the modeling of the radiation balance components. The parameterization of the radiation balance components also plays an important role in the meteorological preprocessor prepared for Hungary, which determines the daily course of the PBL thickness and the components of the surface energy budget for air quality purposes [26]. That development is based on the methodology of Holtslag and Van Ulden [5] and COST Action 710 [1].

For many tasks, e.g., forecasting the energy production of solar collectors, a detailed shortwave radiation estimate is required. Two further articles on this topic are significant in the Carpathian Basin. Based on radiation measurements in Romania, methods for computing global and diffuse solar hourly irradiation under a clear sky were reviewed and tested using 54 parameterizations on daily and hourly scales [27]. Empirical models for estimating solar insolation using meteorological data on cloudiness were developed in Serbia [28]. Solar energy utilization has been a priority in the countries of the Carpathian Basin in recent years. This requires reliable global radiation estimates at least at hourly time resolution. Here, the primary data sources are standard meteorological measurements, and it is important to select the optimal parameterization method for the Carpathian Basin and to quantify the application uncertainties. Modeling of the radiation balance components on daily and longer time scales, as well as applications of GIS-based modeling for solar energy estimation in different time scales and climate regions, are beyond the scope of this paper (see references [29-33] for an overview).

This paper aims to contribute to the present understanding of how successful the methods for the assessment of shortwave and longwave radiation are in the Pannonian region. Two solar radiation models often used in meteorology are described. Further, we investigate the relationship between the downwelling shortwave radiation and the meteorological parameters that modify it.
Besides that, several methods that calculate the upwelling shortwave radiation hourly, as well as the longwave radiation components, as functions of the state of the ground and the state of the atmosphere, are studied. This type of approximation has become common in SVAT (Soil-Vegetation-Atmosphere-Transfer) modeling, and such radiation parameterization procedures are also often used in one-dimensional PBL models. We believe that the results of our study are significant for data quality control. Further, we believe they may help in understanding and bridging the differences between point measurements and atmospheric models for all radiation components, SLP, and the state variables, especially temperature and wind, in our region.

Terminology

Solar radiation is electromagnetic energy originating from the Sun [34,35]. Of the light that reaches the Earth's surface, infrared radiation makes up around 50%, visible light about 42% and ultraviolet radiation just over 8%. Thermal or terrestrial radiation emitted by the Earth's surface and atmosphere is in the range of 4-100 μm.

Irradiation is the energy received per unit area; it is the process by which an object is exposed to radiation, in this case coming from the Sun, from the atmosphere or from the Earth's surface. Its SI unit is J m⁻², including time-averaged values (for example, hourly or daily irradiation). Irradiance is the radiant flux received by a surface per unit area (W m⁻²).

Description of Key Variables

Although solar radiation has a leading role in almost all processes in the atmosphere, it was not part of standard reports. During the last century, the instruments that measure irradiance were rare and usually placed on high mountains or in deserts, so as to minimize the atmospheric influence. The growing interest in renewable energy sources is making these measurements more and more common. In this paper, the following irradiance measurements on a horizontal plane (without surrounding obstacles and buildings) at the Earth's surface were used:

Global solar irradiance (K↓) is the radiant flux emitted by the Sun and received at the Earth's surface, separated into two basic components: direct and diffuse. It is a measure of the rate of total incoming solar energy, both direct and diffuse, on a horizontal plane at the Earth's surface. It depends on the position of the Sun in the sky, the season, the time of day and the turbidity of the atmosphere. Turbidity mostly depends on cloudiness, humidity, the content of aerosol particles and, of course, on the pressure (the amount of the air column).

Reflected solar irradiance (K↑) is the part of the global solar irradiance that is reflected from the Earth's surface. It depends on the global solar irradiance and on the surface albedo (a function of the angle of solar elevation and the characteristics of the ground surface).

Incoming longwave (LW) irradiance (L↓) is the downward flux of thermal radiation emitted by atmospheric molecules (such as H2O, CO2 and O3), aerosol particles and clouds per unit horizontal area in a given time period. It depends, first of all, on cloudiness, temperature, precipitable water and the turbidity of the atmosphere.

Outgoing (upwelling) longwave irradiance (L↑) represents a redistribution of the absorbed global solar irradiance. The power of this energy emitted by the Earth's surface per unit area in a given time is called thermal (terrestrial) irradiance.
Besides the global solar irradiance, it depends on the temperature of the Earth's surface (or the atmospheric temperature).

Net irradiance (Q*) is conveniently split into four components: K↓, K↑, L↓ and L↑. The net irradiance is the sum of these components:

Q* = K↓ − K↑ + L↓ − L↑ (1)

In our case, the signs of all the surface radiation balance components were positive or zero. The net irradiance is the necessary fuel for all motion processes in the atmosphere. The energy budget at the Earth's surface in the case of a quasi-steady-state PBL, when there is no storage heating or cooling, balances the net irradiance against the sum of the sensible heat flux H, the latent heat flux LE, and the flux into or from the ground G:

Q* = H + LE + G (2)

Accordingly, Q* is the key connection between radiation and the stability of the atmosphere. To estimate the above irradiances, the following variables are measured or derived from routine SYNOP reports and from the metadata of the measurement location: the angle of solar elevation φ (rad) and the hour angle (rad), the latter in the two variants used below, h1 as in [5] and h2 as in [11].

Description of the Key Method

The measurements mentioned in Section 1 help in better understanding how solar energy changes the state of the atmosphere and how the state of the atmosphere influences the global solar and thermal irradiance at the Earth's surface. Some investigations try to track the interaction between the Sun and the Earth at particular locations on an hourly basis through the relationships between the global solar irradiance K↓, the reflected solar irradiance K↑, the incoming LW irradiance L↓ and the outgoing LW irradiance L↑ at the surface and the meteorological parameters.

The methodologies that estimate the net radiation and all of its components used in this paper are listed in Table 1. Each methodology is represented by its source (reference), abbreviation and equation number, describing the components of irradiance as functions of the variables given in the last column. The clear-sky downwelling LW parameterizations are collected in Table 11:

- row (1): L↓ = f(T, e), reference [44];
- row (2): L↓ = f(T, e), reference [45];
- row (3): L↓ = f(T), reference [46];
- row (4): L↓ = f(T, e), reference [47];
- row (5): L↓ = f(T), reference [48];
- row (6): L↓ = f(T, e), reference [43];
- row (7): L↓ = f(T, e), reference [46].

The cloudiness parameterizations for the downwelling LW radiation are based on Iziomon et al. [51], on Swinbank [47] together with Dilley and O'Brien [48], and on Niemelä et al. [43].

Each estimated radiation component is compared with the measurements and discussed through statistical errors: BIAS, MAE, RMSE and the correlation coefficient (r). For modeled hourly values Pi and measured values Oi, these are calculated as:

BIAS = (1/n) Σ (Pi − Oi) (3)
MAE = (1/n) Σ |Pi − Oi| (4)
RMSE = [(1/n) Σ (Pi − Oi)²]^(1/2) (5)
r = Σ (Pi − P̄)(Oi − Ō) / [Σ (Pi − P̄)² Σ (Oi − Ō)²]^(1/2) (6)

where P̄ and Ō are the mean values of the modeled and measured hourly values, respectively, and n is the number of cases (measured hours).
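The four verification scores of Equations (3)-(6) are simple to compute; a minimal reference implementation (variable names are ours) could look as follows:

    import math

    def scores(modeled, measured):
        """BIAS, MAE, RMSE and correlation r for paired hourly values,
        following the definitions of Section 2.3."""
        n = len(modeled)
        diffs = [p - o for p, o in zip(modeled, measured)]
        bias = sum(diffs) / n
        mae = sum(abs(d) for d in diffs) / n
        rmse = math.sqrt(sum(d * d for d in diffs) / n)
        pm = sum(modeled) / n
        om = sum(measured) / n
        cov = sum((p - pm) * (o - om) for p, o in zip(modeled, measured))
        norm = math.sqrt(sum((p - pm) ** 2 for p in modeled)
                         * sum((o - om) ** 2 for o in measured))
        r = cov / norm if norm else float("nan")
        return bias, mae, rmse, r

    print(scores([410.0, 300.0, 120.0], [400.0, 310.0, 100.0]))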
The Description of the Datasets

The Agrometeorological Observatory in Debrecen is on flat, mostly homogeneous terrain in agricultural surroundings, with geographic coordinates 47.5291° N (latitude) and 21.6397° E (longitude). There is a 10-m meteorological mast with profile measurements of ten-minute average values of:

- temperature, relative humidity and wind speed at 1, 2, 4 and 10 m;
- the infrared ground surface temperature;
- soil temperature and humidity at two levels (5 and 10 cm); and
- all radiation balance components: the global solar radiation K↓, the reflected solar radiation K↑, the incoming longwave (LW) radiation L↓ and the outgoing LW radiation L↑.

Together with the station pressure, precipitation and snow cover obtained from routine meteorological measurements and observations, we used the 2-m temperature and relative humidity and the 10-m wind speed from the mast. The momentum, CO2 and sensible and latent heat fluxes were also measured directly using CSAT-3 and LI-7500 instruments with the Campbell data acquisition program. Standard meteorological measurements, including cloudiness and visibility, were provided by the SYNOP station at Debrecen Airport (WMO 12882), 8.5 km from the Agrometeorological Observatory [54,55].

The new generation of micrometeorological measurements started in 2008. The instruments were initially factory-calibrated, and a year-long dataset from the year 2009 was investigated; that is why the first full year was chosen for model development. For the long-term comparative study of the developed calculation methods, two test periods were selected. The first was the 3 years between 2008 and 2010, when cloud and present-weather observations were made hourly at Debrecen Airport (SYNOP station No. 12882). The second, longer test period was the 7 years between 2011 and 2017. From 2011 onward, only partial cloud observations took place at night, so we checked the parameterization of the shortwave radiation balance components and, where possible, the longwave radiation balance components. The entity responsible for the operation of Agrometeorological Observatory Debrecen changed in 2017, and quality-controlled data have been available to us up to that year. Since 2018, the measuring site has been operating as a "backup" site with limited data access, because a new measuring site has been installed. Long-term agroclimatological investigations and a reorganization of the high-precision radiation and micrometeorological measurements are in progress at Agrometeorological Observatory Debrecen. The location of the observation site in the Pannonian Basin, the main measurement items and the instrumentation are presented in Figure 1 and Table 2.

The measurement of the solar radiation balance components is part of the measuring program of the surface observing system of the Hungarian Meteorological Service. The operation of the observing system and the calibration of sensors are regulated by the ISO 9001:2015 quality control system. The calibration of the solar radiation sensors of Agrometeorological Observatory Debrecen is done according to the relevant working instructions in the same way. Regarding the shortwave sensors, the reference instrument is an HF-type absolute cavity pyrheliometer (s.n. 19746) that has been calibrated at the International Pyrheliometer Comparisons (IPCs) at the World Radiation Centre (WRC) in Davos every five years since 1980. In the case of the longwave sensors, the reference sensors are an Eppley PIR (s.n. 29582F3) that was calibrated twice at WRC Davos between 2006 and 2014 and a Kipp&Zonen CGR4 (s.n. 080066) that was calibrated during IPC-XII in 2015. The calibration of the radiation sensors was done by parallel measurements between the reference and site instruments, which took place every two years between 2008 and 2013 and every three years after 2013. The history of the calibrations shows that, in the case of the shortwave downward and upward sensors, the averaged differences between the values of the reference and site instruments did not exceed ±1.5% and ±1%, respectively, if the level of irradiance was over 200 W m⁻².
Regarding the downward and upward longwave sensors, the differences were between ±3.5% and ±2%, respectively, the larger value because the downward sensor was not shaded and ventilated. In addition, a personal check of the measuring site was done every 3 months, when the leveling of the sensors was verified and cleaning was done. A detailed calculation of the uncertainty of the calibration factors of the solar sensors is not part of the relevant working instructions.

Assessments of Solar Irradiance

The Earth's surface, the basic energy transfer area for atmospheric processes, is heated by the absorption of the shortwave solar radiation K↓. A small part of this radiation, depending on the complexity of the surface, the state of the ground, the Sun's position in the sky, visibility and cloudiness, is reflected back to the sky as K↑. The mean annual daily variations of the calculated K↓ and K↑, as well as of the measured irradiances, are represented for clear and cloudy skies using the different calculation procedures. The statistical errors (see Section 2.3) represent the reliability of these calculations.

Assessments of Downwelling Shortwave Solar Radiation for Clear Sky

The astronomical quantities and the downwelling solar radiation are calculated using the Foken [11] and the Holtslag and Van Ulden [5] methodologies. Both well-known methods estimate the downwelling solar radiation as a function of φ, the angle of solar elevation. This angle is a function of the solar declination δ, the latitude ϕ and the hour angle h (all angles in radians):

sin φ = sin ϕ sin δ + cos ϕ cos δ cos h (7)

where the solar declination δ is

δ = 0.409 cos[2π(d − 173)/365.25] (8)

and d is the day of the year.

Holtslag and Van Ulden [5] used the hour angle as a function of λ, the east longitude in radians, and of the time t as UTC time. Their hour angle h1 is, in essence, the local solar time obtained from UTC and longitude alone:

h1 = π (t/12 − 1) + λ (10)

Foken [11] took the hour angle h2 as a function of Δt, the duration of a full rotation of the Earth (between two midnights, 86,400 s), and of tc, the time distance from the culmination of the Sun in seconds:

h2 = 2π tc / Δt (11)

Here the EQT (equation of time) is the difference between the true and the averaged local time, and MEZ is Central European Time. From tables of time equations [56] for 15° E and 50° N, Göckede [57] calculated an approximation equation for the EQT; its coefficients a0, a1, ..., are listed in Foken [11].

After checking the agreement between Equations (10) and (11), one notices that the resulting angles of solar elevation (see also Equation (7)) are very similar but not the same. They have their maximum within the same hour but obviously not at the same moment. At the hour with maximum solar elevation, Foken's value of the solar elevation angle is about 0.6° greater than Holtslag's on the day of the summer solstice and about 0.2° greater on the winter solstice. Concerning the daily cycle, Foken's solar elevation lags Holtslag's by about 20-25 min. This deviation was estimated visually from Figure 2a, in which the x-axis represents hours in the year and the y-axis the solar elevation. The deviation was not the same every day, and 20-25 min was the annual maximum. It occurred on August 2, when the differences between the angles of solar elevation were a little smaller than 6.5°. In 285 out of 8760 h these differences were greater than 6°; in 1104 h they were greater than 5°; and in 4145 h they were greater than 3°.
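The simple solar geometry above can be condensed into a few lines. The sketch below uses the declination of Equation (8) and deliberately ignores the equation of time, which is exactly the simplification the comparison above quantifies; the function name and the noon example are ours:

    import math

    def sin_solar_elevation(doy, hour_utc, lat_deg, lon_deg):
        """sin(phi) from simple solar geometry: declination from the day
        of the year (Eq. (8)), hour angle from UTC time and longitude
        alone, i.e. without the equation of time, which can shift the
        daily solar cycle by tens of minutes."""
        lat = math.radians(lat_deg)
        decl = 0.409 * math.cos(2.0 * math.pi * (doy - 173) / 365.25)
        solar_hour = hour_utc + lon_deg / 15.0      # crude local solar time
        hour_angle = math.pi * (solar_hour - 12.0) / 12.0
        return (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))

    # Debrecen, around local solar noon on the summer solstice: ~66 deg.
    print(math.degrees(math.asin(sin_solar_elevation(172, 10.6, 47.53, 21.64))))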
Using the angle of solar elevation φ calculated with the hour angles described by Equations (10) and (11), the clear-sky downwelling solar radiation K↓ following Holtslag's calculations, based on reference [5], is

K↓ = a1 sin φ + a2 (12)

with the empirical constants a1 = 990 W m⁻² and a2 = −30 W m⁻², while Foken's calculation [11] uses the solar constant and the Sun-Earth distance:

K↓ = S (R̄/R)² sin φ (13)

where R̄ and R are the mean and actual distances from the Earth to the Sun. Their ratio can be determined according to Hartmann (1994) (see reference [11]) as the sum of the first three components of a Fourier series. The argument of the Fourier series is 2πd/365 (the denominator is 366 in the case of a leap year), and the coefficients of the Fourier series are listed in reference [11]. In Equation (13), the angle of solar elevation is calculated with the hour angle described by Equation (11), and S is the solar constant (S = 1368 W m⁻²).

Although during the year 2009 there were only 201 h with both positive solar elevation and a clear sky, based on the SYNOP station at Debrecen Airport (12882) (Figure 2), the statistical errors for this dataset have also been investigated (Table 3). Over the 201 clear-sky hours, Foken's calculation made an error over 100 W m⁻² in 11 hourly data, while Holtslag's calculation had nine such situations. Errors over 50 W m⁻² occurred in about one-quarter of the hourly data, mostly during the winter period, and were fewer for Holtslag's than for Foken's calculations.

Assessments of Downwelling Shortwave Solar Irradiance for All-Sky Conditions

To include cloudiness in the calculation of the shortwave incoming solar irradiance, we applied two widely used methodologies, one based on Kasten and Czeplak [39]:

K↓ = K↓,clear [1 − 0.75 (N/8)^3.4] (14)

and one based on Burridge and Gadd [40]:

K↓ = K↓,clear (1 − 0.4 NCH/8)(1 − 0.7 NCM/8)(1 − 0.4 NCL/8) (15)

where N is the total cloud cover, going from 0 for a clear sky to 8 octa for a totally cloudy sky, and NCH, NCM and NCL are the cloud covers of high, middle and low clouds, respectively.

The first method was described by Holtslag and Van Ulden [5] and Nyren and Gryning [37]. It has also been popular in climatological investigations and in applications requiring rough estimates of the solar radiation on horizontal surfaces [58-60]. The second method was described by Stull [61] and used for radiation parameterization in different micrometeorological modeling works, such as surface hydrological modeling or snow evolution system modeling [62,63].

The number of SYNOP stations with complex cloud observations, especially of the cloudiness and cloud types, has decreased with the installation of automatic surface weather stations since the mid-1990s [64]. Observations of cloud types are still generally performed by human observers and naturally have a few limitations [65]. Although the second method clearly includes cloudiness in a more sophisticated way than the first, the lack of information about cloudiness at the different levels (NCH, NCM, NCL) makes the first method more practical. Namely, when the total cloud cover was 8 octa and the upper cloud levels were invisible, we had to approximate NCM and NCH and took NCM = NCH = 5. Besides that, when N = 8 and NCL < 8, we supposed that NCM = 8 or NCH = 8, depending on the type of clouds. On the other hand, instead of the total cloud cover in the first method, the effective cloudiness Neff = (N + NCL)/2 was used as an alternative in the calculation of K↓ whenever NCL was available from SYNOP.

The statistical errors of the global solar irradiance for the four investigated models, compared over the 4022 h of the year 2009 when all data necessary for the calculations were available (K↓ > 0 W m⁻²), are shown in Table 4. The errors were the same using either the total or the effective cloud cover.
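The two cloud corrections, as reconstructed in Equations (14) and (15), are one-liners in code. The sketch below is ours; the 0.75/3.4 and 0.4/0.7/0.4 factors are the values commonly quoted for these schemes:

    def kasten_czeplak(k_clear, n_octa):
        """Eq. (14): total-cloud correction of the clear-sky global
        irradiance (W m-2); n_octa is the total cloud cover in octa (0..8)."""
        return k_clear * (1.0 - 0.75 * (n_octa / 8.0) ** 3.4)

    def burridge_gadd(k_clear, n_h, n_m, n_l):
        """Eq. (15): layer-wise cloud correction with high, middle and
        low cloud covers in octa."""
        return (k_clear * (1.0 - 0.4 * n_h / 8.0)
                        * (1.0 - 0.7 * n_m / 8.0)
                        * (1.0 - 0.4 * n_l / 8.0))

    # Overcast: total cover leaves 25%; overcast low cloud alone leaves 60%.
    print(kasten_czeplak(600.0, 8.0), burridge_gadd(600.0, 0.0, 0.0, 8.0))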
About 3000 of the 4022 hourly data had absolute estimation errors below 100 W m⁻², but about 500 of them had errors greater than 200 W m⁻². We do not judge here whether these errors resulted from the crude method of estimating the incoming solar irradiance or from sporadic errors in the measurements. The radiation balance components are measured with first-class radiation sensors separately, using a Campbell CR1000 datalogger, so the measurements are quality-controlled and mostly reliable. Therefore, without analyzing the reason, all situations when the differences between the measured and estimated solar radiation were smaller than 50, 100, 200, ..., 700 W m⁻² were counted, and the results are shown in Table 5. After this counting, we recalculated the statistical errors for the 3500 hourly data for which all methods had absolute errors below 200 W m⁻². In that case the MAE became smaller by around 25% and the RMSE by around 50%, the coefficient of correlation grew to 0.96, while the BIAS stayed similar. The few greatest absolute errors (>200 W m⁻²) appeared to be connected with the Kasten and Czeplak [39] method of describing cloudiness, because after excluding these situations, the RMSE and MAE became 10% smaller for this method than for the Burridge and Gadd [40]-type cloudiness parameterization, for both clear-sky calculations. Relative errors were below 10% for only 18-25% of the hourly data.

Both methods look very similar through the statistical errors (Table 4). Looking at the absolute errors, one notices that the Kasten and Czeplak [39] cloud parameterization has a greater number of hours with errors smaller than 50 W m⁻², although the few greatest absolute errors relate to this parameterization. All methods have relative errors below 25% in about 45% of the data hours. The usual errors are connected with the short periods when the sky quickly becomes either overcast or clear, or when the solar elevation is low, especially during the winter period. Although it is not obvious which clear-sky model and which cloudiness model is better (Figure 3 and Tables 4 and 5), we clearly notice that: (1) Foken's clear-sky model, together with the Kasten and Czeplak [39] methodology, was the best during unstable stratification when the solar elevation was high, although a few of the greatest absolute errors happened then (Figure 3b); (2) Holtslag's clear-sky model and the Burridge and Gadd [40] cloudiness parameterization were the best for low solar elevation, when, especially during sunset, Foken's calculation permanently overestimated the measurements. One can conclude that both clear-sky methods and both cloudiness methods are very useful tools for data quality control through intercomparison tests that check the relationship between cloudiness and global solar radiation at midday and, separately, at sunset or sunrise.

Assessments of Upwelling Shortwave Solar Irradiance

The upwelling solar radiation is the part of the downwelling solar radiation that is reflected from the Earth's surface and the atmosphere. The ratio of the reflected to the incoming shortwave radiation is called the albedo. It depends on the angle of solar elevation, on cloudiness, and on the composition and state of the Earth's surface. It is not measured on a routine basis. In the Debrecen dataset, hourly values of the albedo are available as the fraction of the measured incoming shortwave radiation that is reflected.

Assessments of Albedo

In 1995, Geiger et al.
[36] offered a classification in which the high-solar-elevation albedo depends on the surface type and varies from 0.024 for rough seas to 0.98 for clean snow. Since there were no Debrecen data for the "state of the ground with/without snow or measurable ice cover" (WMO SYNOP code tables 0901 E and 0975 E'), the hourly data on precipitation, present and past weather, and temperature had to be used in order to classify the state of the ground. The nearest stations that included the state of the ground in their SYNOP reports at 06 UTC were WMO 13174 Kikinda (in Serbia) and WMO 11968 Kosice (in Slovakia). In this paper, the albedo A was calculated for three different values of A0: A0 = 0.11 for wet soil (precipitation in present or past weather), A0 = 0.25 for dry soil (nice weather) and A0 = 0.85 for snow (snow cover). The equation adopted after reference [66] and used by Nyren and Gryning [37] expresses the albedo A as a function of A0 and of the solar elevation angle in degrees (Equation (16)). A similar empirical equation was used in references [67,68] (Equation (17)). This type of methodology for the calculation of the albedo, depending on the solar elevation and the soil wetness, has been applied up to the present day [41].

The statistical errors of the proposed albedos are the same to the first three decimal places using either Equation (16) or (17); therefore, only one set of results is presented. The calculated albedo compared with the measurements is illustrated in Table 6. Without the hours with precipitation and snow cover, 3473 out of 4022 in total, the statistical errors were smaller, while the 126 h with precipitation and the 423 h with snow cover without precipitation had, as expected, greater errors.

Table 6. Statistical errors of the Nyren and Gryning [37] model of albedo for all data and separately for nice weather without snow cover (A0 = 0.25), for weather with snow cover and without precipitation (A0 = 0.85) and, finally, for precipitation weather (A0 = 0.11).

Besides this calculation, we tried to find the albedo as a function of the solar elevation in Debrecen in a similar way to Duynkerke [69] and to Beljaars and Bosveld [38]. They suggest that the albedo depends on the angle of solar elevation and on the relation between the diffuse and the total incoming solar radiation, i.e., on the cloud cover.

Statistical Error for Albedo

Following their idea about the dependence of the albedo on the cloud cover, a linear function A = f(φ) for clear skies was first fitted, using observations of the albedo and Foken's calculation of the solar elevation during clear-sky conditions (N = 0). For the 178 h without snow cover, this linear dependency is given as Equation (18). Following the idea described in reference [38], alongside this equation, cloudiness is included to describe the albedo in non-clear-sky conditions using the least squares method for different forms of the functions without and with a snow-covered surface. Beljaars and Bosveld [38] introduced the ratio of the downwelling diffuse incoming solar radiation Ds to the total downwelling solar radiation Sg as the factor with the leading influence on the albedo. Since there is no solar radiation instrument equipped with a shadow band at Agrometeorological Observatory Debrecen, taking the diffuse radiation as 10% of the total downward solar radiation for clear skies (N = 0) and 100% for overcast skies (N = 8), we supposed that their ratio rises linearly between 0.1 and 1.0 with rising cloudiness. Therefore, (0.125 N − 0.1) was used instead of the (Ds/Sg − 0.1) given by reference [38].
For the 3473 situations without snow cover and precipitation, when φ ≥ 0, we obtained the regression given as Equation (20); for the 423 data hours with snow cover, Equation (21); and for the hourly data with both precipitation and snow we took A = 0.2. The statistical errors obtained from these calculations were significantly smaller than in the previous calculations, except in the case of precipitation (Table 7). A useful, statistically significant correlation was found for the dataset as a whole (r = 0.79), but in all three independent cases the absolute values of the correlation coefficients were below 0.3, so there were no practically useful relationships between the measured and calculated albedo. The absolute values of the BIAS, except for the few precipitation cases, were below 0.025, which is an acceptably low value.

The shortwave reflected solar irradiance is computed as the product of the global solar radiation and the albedo. The precision of the evaluated albedo is very important, and we represent this fact by comparing the mean annual daily variation of the calculated and measured upwelling shortwave radiation and through the statistical errors of the two mentioned methods for the assessment of the albedo: first, Nyren and Gryning [37], Equation (16), and second, Beljaars and Bosveld [38], Equations (20) and (21). Through all the statistical errors, the evaluation of the albedo that includes cloudiness is much better than the other methods. The advantage of the Beljaars and Bosveld [38] method is also clearly visible in Figure 4, which represents the mean annual daily variation of the measured and calculated reflected solar irradiance.

Assessments of Net Irradiance

The net radiation of the ground surface is given by the sum of the shortwave and longwave radiation components. However, contrary to its importance, it is very rarely included in standard meteorological measurements. In this section, the mean annual daily variation and the statistical errors of the measured and estimated Q* are discussed, where the estimated value depends on the global solar radiation, the albedo, the cloudiness and, in one parameterization, on the temperature. We analyze the frequently used classical parameterizations of the SVAT (soil-vegetation-atmosphere transfer) models in the daytime (K↓ > 0) and in the nighttime (K↓ = 0) separately, for a better understanding of the accuracy of the parameterizations in the Pannonian Basin.

Daytime

To estimate the net irradiance Q* (W m⁻²) from routine meteorological measurements, an empirical equation was taken from Holtslag and Van Ulden [5]:

Q* = [(1 − A) K↓ + c1 T⁶ − σ T⁴ + c2 N] / (1 + c3) (22)

where T is the atmospheric temperature (K) from the 2-m measuring height, σ is the Stefan-Boltzmann constant (σ = 5.670 × 10⁻⁸ W m⁻² K⁻⁴), c1 = 5.3 × 10⁻¹³ W m⁻² K⁻⁶, c2 = 60 W m⁻², c3 = 0.12 and N is the cloud cover. In this equation we used Holtslag's clear-sky calculation and varied both cloudiness corrections and both albedos to obtain the statistical errors.
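Equation (22), with the constants quoted above, translates directly into code. The sketch below is ours; in particular, treating the cloud cover N as a fraction (octa/8) is our reading:

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

    def net_radiation_hvu(k_down, albedo, t_air, n_octa):
        """Daytime net irradiance after Eq. (22): c1 = 5.3e-13 W m-2 K-6,
        c2 = 60 W m-2, c3 = 0.12, with the 2-m air temperature t_air in K
        and the total cloud cover given in octa and used as a fraction."""
        c1, c2, c3 = 5.3e-13, 60.0, 0.12
        return ((1.0 - albedo) * k_down
                + c1 * t_air ** 6
                - SIGMA * t_air ** 4
                + c2 * n_octa / 8.0) / (1.0 + c3)

    # Around 350 W m-2 for a bright, slightly cloudy midday hour.
    print(net_radiation_hvu(600.0, 0.23, 293.15, 2.0))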
Besides that, an empirical equation following the idea of Göckede and Foken [14] was used; it takes the global solar irradiance as an input parameter [11] (Equation (23)). In this equation, Rg is the measured global solar irradiance (it can be substituted by the estimated K↓), A is the albedo as before, K↓ is the clear-sky downwelling solar radiation based on Foken [11] as described in Sections 2 and 4.1, and ρ is the air density (kg m⁻³). Based on the ideal gas law,

ρ = 100 p / (Rd Tv)

where p is the station pressure in hPa, Rd = 287.06 J kg⁻¹ K⁻¹ is the specific gas constant of dry air and Tv is the virtual temperature in K,

Tv = T (1 + 0.608 q)

where q is the specific humidity (kg kg⁻¹), calculated as

q = 0.622 e / (p − 0.378 e)

Here e is the water vapor pressure in the same units as the pressure (hPa), which is calculated from the relative humidity rh and the saturated water vapor pressure es at the temperature t; es is given by a Magnus-type formula (Equation (27)), with a separate form for ice. The water vapor pressure is then e = es · rh/100, and, finally, the specific heat of moist air at constant pressure is given by Equation (28).

Furthermore, we compared the Holtslag and Van Ulden [5] and Foken [11] equations for Q*, using the measured global solar radiation and the Beljaars and Bosveld [38] methodology for the calculation of the albedo. The following statistical errors were calculated and are shown in Table 9: BIAS, MAE and RMSE, together with the percentage of cases in which the relative errors were below 25% (Rerr < 25%), for each clear-sky global solar radiation calculation, cloudiness calculation and albedo, where the albedo A′ does not depend on the cloudiness, whereas A does. The second numbers in the columns represent the statistical errors of the net radiation only for the periods with high solar elevation, when the sensible heat flux (H) was positive. The moment when the sensible heat flux changes its direction was calculated, according to De Bruin and Holtslag [10], from the equation

H = {[(1 − α) + γ/s] / (1 + γ/s)} (Q* − G) − α β (29)

where β = 20 W m⁻² is a constant, G is the ground heat flux transported into the soil, s = des/dT, and the psychrometric constant is γ = cp/λ, where λ is the latent heat of vaporization and α is the Priestley-Taylor coefficient. A rough estimate of the ground heat flux transported into the soil is G = 0.1 Q* [70]. There are a few approximate relationships between s and T [11,12,37]. We checked all the approximations on our dataset and chose the second method [12].

Table 9. Statistical errors and percentages of cases in which the relative errors were smaller than 25% (Rerr < 25%) in the calculations of the net irradiance Q*. The Foken [11] net radiation model and the Holtslag and Van Ulden [5] net radiation model are distinguished, with a subscript denoting whether the global solar radiation was measured or assessed with Foken's or Holtslag's clear-sky calculation and a cloudiness correction. A′ and A represent the Nyren and Gryning [37]- and the Beljaars and Bosveld [38]-type parameterizations of the albedo. The second column inside the main columns represents the statistical errors of the net radiation only for the periods with high solar elevation, when the estimated sensible heat flux based on De Bruin and Holtslag [10] was positive. Gray: methods with higher correlation coefficients.

Although no significant difference was noticed between the Holtslag and Van Ulden [5] and Foken [11] calculations of the net radiation, or between the Kasten and Czeplak [39] and Burridge and Gadd [40] treatments of cloudiness, it is clear that including cloudiness in the calculation of the albedo, as recommended by Beljaars and Bosveld [38], improves the estimation of the net radiation. It is also confirmed that including the measured global solar irradiance, as suggested by Göckede and Foken [14], significantly improves the estimates of the net radiation made without these measurements, as expected. Furthermore, a successful estimation of the net radiation can help the estimation of the daytime sensible heat flux from routine meteorological measurements. This is especially useful at locations where special profile or flux measurements do not exist. In addition, these methods can be very useful tools for data quality control of special micrometeorological measurements.
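The chain from relative humidity to air density used above can be sketched as follows. The Magnus coefficients below are typical literature values over water, not necessarily those behind Equation (27):

    import math

    R_D = 287.06  # specific gas constant of dry air, J kg-1 K-1

    def saturation_vapor_pressure(t_c):
        """Saturation vapor pressure over water in hPa from a common
        Magnus-type formula (assumed coefficients; Eq. (27) also has a
        separate form over ice)."""
        return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

    def air_density(p_hpa, t_c, rh):
        """Moist-air density (kg m-3): e from relative humidity, the
        specific humidity q, the virtual temperature Tv and the ideal
        gas law, as in the text."""
        e = saturation_vapor_pressure(t_c) * rh / 100.0   # hPa
        q = 0.622 * e / (p_hpa - 0.378 * e)               # kg kg-1
        tv = (t_c + 273.15) * (1.0 + 0.608 * q)           # K
        return 100.0 * p_hpa / (R_D * tv)

    print(air_density(1013.0, 20.0, 60.0))  # ~1.20 kg m-3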
Nighttime

The nighttime net thermal radiation depends on the temperature and the cloudiness. We used an equation adapted from Burridge and Gadd [40] (Equation (30)), where T is the measured temperature at a 2-m height (K), T0 = 285 K, Q*0 = −91 W m⁻² and Neff = (N + NCL)/16 is the effective cloud cover, with N in octa (0, 1, ..., 8). Although the nighttime net thermal radiation is not usually used for the calculation of the sensible heat flux from routine measurements, we give the statistical errors of its estimation: BIAS = −24 W m⁻², MAE = 26 W m⁻², RMSE = 31 W m⁻², and the coefficient of correlation was 0.74. The small difference between the absolute values of the BIAS and the MAE indicates a systematic underestimation, as can be seen in Figure 5.

In Figure 5 we present only the net radiation models with Foken's clear-sky calculation, the net radiation models with Holtslag's clear-sky calculation, and both models with the measured K↓, emphasizing the differences between the A′ albedo based on Nyren and Gryning [37] and the A albedo based on Beljaars and Bosveld [38] separately. Figure 5 represents the mean annual daily variation of the net radiation, which clearly confirms the facts discussed through the statistical errors and suggests the following conclusions: (1) during the midday period, when the sensible heat flux is directed upwards, Foken's calculation is clearly better than Holtslag's; (2) Holtslag's method is slightly better for low solar elevation, when the sensible heat flux is directed downwards; (3) the difference between the measured and the calculated net irradiance is very small when comparing the two methods of mathematically describing cloudiness; (4) an albedo that includes cloudiness (A) is much better than one (A′) that does not; (5) the measured global solar radiation makes the estimation of the net irradiance significantly better in comparison to when this value is estimated; (6) using the same value of the global solar radiation, Foken's estimation of the net irradiance is slightly better than Holtslag's; (7) the estimation of the nighttime net thermal radiation is crude and has systematic errors.

Assessment of Longwave Irradiance

The solar energy absorbed at the Earth's surface is emitted into the atmosphere as upwelling LW radiation (L↑). Part of this radiation, and part of the solar radiation absorbed by clouds and by atmospheric particles and gases, goes back to the Earth's surface as downwelling LW radiation (L↓). The fluxes of longwave radiation (>4 μm) are key terms in the surface energy budget and vitally important in applied meteorology; their knowledge is essential for the forecasting of frosts, fogs, temperature variations, etc. [71,72]. Although parameterization schemes for longwave radiation have their background in radiative transfer theory, most of them, as in the case of shortwave radiation, are based on empirical relationships derived from observed fluxes. Empirical relations usually describe LW radiation well, especially in climatic conditions similar to those from which they were derived and in clear-sky conditions. A cloudiness correction, again as in the case of SW radiation, has to be included in the calculations.

Assessment of Upwelling LW Irradiance

The outgoing longwave radiation from the Earth's surface is driven by the effective radiative (infrared) surface temperature Ts:

L↑ = εs σ Ts⁴ (31)

where εs ≈ 0.98 is the Earth's surface emissivity and σ is the Stefan-Boltzmann constant. Ts is very rarely available from routine observations.
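Equation (31) and the screen-level substitution developed in the next subsection can be sketched as follows. The second function uses the constant 0.12 quoted below and is our reading of the Holtslag and Van Ulden style approximation, not their exact equation:

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

    def lw_up_from_surface(t_surf, emissivity=0.98):
        """Eq. (31): outgoing longwave irradiance (W m-2) from the
        infrared surface temperature t_surf in K."""
        return emissivity * SIGMA * t_surf ** 4

    def lw_up_from_screen(t_air, q_star, c=0.12):
        """Substitute form when t_surf is unavailable: screen-level air
        temperature plus a net-radiation correction with c = 0.12
        (assumed functional form)."""
        return SIGMA * t_air ** 4 + c * q_star

    print(lw_up_from_surface(295.0), lw_up_from_screen(293.15, 400.0))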
Due to this, it is necessary to substitute this value with the screen-level atmospheric temperature T, which is always lower than Ts during the day and higher during the night. Sellers [52] and Holtslag and Van Ulden [5] described the upwelling LW radiation using a Taylor series centered at T:

L↑ ≈ σ T⁴ + 4 σ T³ (Ts − T) (32)

Since we consider the above equation over short grass, we assume εs = 1; hence, the gray-body effect is neglected. Since Ts responds to changes in K↓ with a delay, especially during the day, we used the atmospheric temperature with a one-hour delay when the solar elevation was positive. For the daytime case, the last term can be approximated using either K↓ or Q*. Since more and more routine meteorological measurement programs include the global solar radiation, the approximation based on it can be applied more often; following Foken [11] and Offerle et al. [53], the first form of the Holtslag and Van Ulden [5] approximation was modified by including the albedo A (Equation (33)). We also used the same mathematical expression approximated by Q*, according to Holtslag and Van Ulden [5]:

L↑ = σ T⁴ + c Q* (34)

with c = 0.12. Instead of the constant 0.12 in this equation, the same authors used values that depend on the atmospheric and surface conditions, especially the soil moisture (Equation (35)), where s and γ were mentioned in the previous sections and rH is the modified resistance for the sensible heat flux, with a default value of rH = 80 s m⁻¹ as noted in reference [5]; γ is the psychrometric constant, and α is the Priestley-Taylor coefficient, which is a function of the soil moisture, as described in reference [73]. With cG = 0.1, G = cG Q* represents the part of the net radiation that goes into the soil.

Although the soil moisture, as well as the surface soil temperature, is not part of routine weather observations, in this paper the measured outgoing LW radiation was compared with the estimated L↑ in a few different cases. There were 7933 data hours calculated and compared. Contrary to expectations, we found that the calculations with the measured Ts did not give the best evaluation of L↑. Unexpectedly, including soil moisture measurements through α did not improve our calculations either (Table 10). In that case, three numerical experiments were done: in the first, α = 1.0; in the second, α = 0.95 from April to September and α = 0.85 from October to March; and in the third, α = 0.65. The last was a little better than the first and the second, but worse than using the constant c = 0.12.

The assessment of the upwelling LW from routine measurements was then compared using the Offerle et al. [53] parameterization (Equation (33)) with the measured and with the calculated K↓, and Holtslag's parameterization (Equation (34)) with Q*, where the net radiation was calculated again with the measured and with the calculated K↓. It is evident that the Holtslag and Van Ulden [5] parameterization with the net radiation is the best in both cases, with and without the measured K↓ (Figure 6 and Table 10).

Figure 6. Mean annual daily variation of L↑ for four models and the measurements. The models that used routine meteorological measurements included, separately, K↓ or Q*. The abbreviations of the models are the same as in Table 9.

In this paper, eight methods for modeling the clear-sky downwelling LW radiation as functions of the screen-level temperature T, expressed in K, and the water vapor pressure e, expressed in hPa, are compared with the measurements through statistical errors. The applied parameterizations are presented in Table 11. Among them, the Niemelä et al. [43] parameterization was the best (Table 12).
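As one representative of the L↓ = f(T, e) family in Table 11, a classic Brunt-type clear-sky formula can be sketched; the coefficients below are frequently quoted literature values, not the ones fitted in the cited papers:

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

    def lw_down_clear_brunt(t_air, e_hpa, a=0.52, b=0.065):
        """Clear-sky downwelling longwave irradiance (W m-2) of the
        Brunt form, with an effective atmospheric emissivity
        eps = a + b*sqrt(e); a and b are assumed typical values."""
        eps = a + b * e_hpa ** 0.5
        return eps * SIGMA * t_air ** 4

    print(lw_down_clear_brunt(283.15, 9.0))  # ~260 W m-2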
Parameterizations from Idso [45] gave similar errors to the best one, although their errors were the smallest during the day period and almost the worst during the night, when clear-sky conditions were more frequent. First, in order to include cloudiness in our calculations, the eight clear-sky parameterizations were combined with three different methods that describe the effects of cloudiness. The first group of parameterizations was constructed on the basis of a cloudiness correction following Niemelä et al. [43]; there are three such parameterizations, based on Jacobs [49] and on Maykut and Church [50] separately. The third cloudiness correction method, developed for a lowland site, follows Iziomon et al. [51]. All the clear-sky downwelling LW radiation parameterizations (see Section 6.1.1) were applied with each of the three corrections, so there were 3 · 8 = 24 different cases for the hourly downwelling LW irradiance calculations under all sky conditions. The downwelling LW radiation described by Holtslag and Van Ulden [5] was calculated using the clear-sky parameterizations of Swinbank [47] and of Dilley and O'Brien [48] separately. Finally, following Niemelä et al. [43], the downwelling LW irradiance was calculated for all sky conditions with a parameterization developed from the clear-sky downwelling irradiance, the cloudiness and the upwelling LW irradiance; this gave a further eight cases for each hour. Based on the intention to work with routine measurement data, and considering the results of the previous sections, the upwelling LW irradiance in Equation (41) was calculated as a function of T and Q*, while the net irradiance was calculated with Foken's clear-sky model, Kasten and Czeplak [39]-type cloudiness parameterizations, the Beljaars and Bosveld [38] albedo and the Göckede and Foken [14] calculations for the net irradiance.

In general, comparing the statistical errors of the cloudiness corrections alone, we concluded that the Maykut and Church [50] method was slightly better than that of Jacobs [49]. At the same time, the cloudiness correction described by Iziomon et al. [51] lay between them, with a BIAS between 0 and 17 W m^-2, an MAE between 18 and 26 W m^-2 and an RMSE between 22 and 32 W m^-2, depending on the clear-sky parameterization, and a correlation coefficient of 0.91 for almost all parameterizations, two of which were lower. Slightly better statistical errors than all three above-mentioned assessments of the downwelling LW irradiance were obtained with the formulation of Niemelä et al. [43], shown in the grey column of Table 13. The number of hours with absolute errors greater than 100 W m^-2 for the Niemelä et al. [43] parameterization was very stable, varying between 24 for the Dilley and O'Brien [48] and 29 for the Swinbank [47] and Iziomon et al. [51] clear-sky downwelling LW parameterizations. According to the relative errors, for almost all parameterizations the relative errors were smaller than 25%; however, in order to distinguish slight differences between them, the percentages of relative errors smaller than 10% are offered in Table 14, where the columns represent the different clear-sky parameterizations and the rows the additional cloudiness corrections. It is noticeable that the downwelling irradiance treated according to Niemelä et al. [43] under all-sky cloudiness conditions, together with the Dilley and O'Brien [48], Niemelä et al. [43], Iziomon et al.
[51] or Prata [5] parameterizations of the clear-sky LW irradiance, gave an excellent estimate (Table 12 and Figure 7). Figure 7 (caption, partially recovered): abbreviations of the models are the same as in Table 8; the two parameterizations based on Holtslag and Van Ulden [5] are represented by colored lines. See also Tables 11, 12 and 14.

The Statistical Errors and Validation of the Results. The stability of our previous results was validated by comparing the statistical errors calculated for one year with the same values calculated for longer periods. Namely, we compared the BIAS, MAE, RMSE and the coefficient of correlation calculated for the year 2009 with the same errors for two longer periods. The first period, from 2008 to 2010, included 17,964 data hours for net radiation and 9814 data hours with K↓ > 1 W m^-2. The second period, from 2008 to 2017, included 61,880 data hours for LW radiation and 40,869 data hours with K↓ > 1 W m^-2 (Table 15). We used the parameterizations with the proven best results. The interpretation of the results is split at 2010 because of the transition from manual to automatic measurements at Debrecen Airport after that year, which had the consequence of missing cloudiness measurements during the night: hours lacking cloudiness data had to be excluded from the calculations, as did hours with accidentally missing data. During the period 2008-2017, nighttime cloudiness measurements exist only in the first half of the period, completely at the beginning and only partly later. Since 2018, the measuring site Debrecen Airport (WMO 12882) has been operating as a "backup" site with limited data access. Finally, the comparison between the estimated and the measured irradiance was conducted in four different cases, and the results are shown in Tables 15 and 16. Comparing the statistical errors for the 10-year period 2008-2017, the stability of the presented calculations was clearly confirmed (Table 17). The same statistical errors for the same variables for the period 2008-2010 are shown in Table 16 and Figure 8. The correlation coefficient is better for the longer period for all variables, while the statistical errors MAE and RMSE are clearly better only for the Q* variants. Again, this emphasizes the significance of measurements of the global solar irradiance for the estimation of the net irradiance: the Q* estimates based on measured K↓ have smaller statistical errors and better coefficients of correlation than those based on estimated K↓. We believe, moreover, that the estimation of the global solar radiation itself can be improved by including the astronomical parameters of the EQT (equation of time) for our geographical location instead of the nominal 15° and 50°. We recalculated the solar elevation angles using the EQT (in minutes) from reference [74] and compared the original Foken [11] and Holtslag [5] methodologies with Foken's methodology modified through the application of the EQT from reference [74] in Equation (11), over the year with an hourly time step. The differences between the solar elevation angles of this new calculation and the original Foken methodology were much smaller than those with respect to Holtslag's methodology: the absolute differences were 0.84° ± 0.72° and 3.0° ± 1.6°, respectively. This underlines the importance of precise solar geometry calculations. Besides that, it was shown that the net radiation Q* is slightly better when calculated as the sum of its components than when parameterized directly in one step (Tables 16 and 17 and Figure 9).
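To illustrate the solar-geometry point, the following sketch computes the solar elevation angle from the day of year, clock time, latitude and longitude; it uses textbook approximations for the declination and the equation of time, not the EQT of reference [74], so it should be read as a sketch under those assumptions.

```python
import math

def solar_elevation(day_of_year, clock_hour, lat_deg, lon_deg,
                    std_meridian_deg=15.0):
    """Solar elevation angle (degrees) from textbook approximations:
    Cooper's declination formula and a common three-term equation-of-time
    fit (illustrative, not the formulation of reference [74])."""
    n = day_of_year
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))
    b = math.radians(360.0 * (n - 81) / 364.0)
    eqt_min = 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)
    # Local solar time: 4 minutes per degree of longitude offset, plus EQT.
    solar_time = clock_hour + (4.0 * (lon_deg - std_meridian_deg) + eqt_min) / 60.0
    hour_angle = math.radians(15.0 * (solar_time - 12.0))
    lat, d = math.radians(lat_deg), math.radians(decl)
    sin_h = (math.sin(lat) * math.sin(d)
             + math.cos(lat) * math.cos(d) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_h))

# Debrecen lies near 47.5 N, 21.6 E; compare with the nominal 50 N, 15 E setting.
print(solar_elevation(172, 12.0, 47.5, 21.6))
print(solar_elevation(172, 12.0, 50.0, 15.0))
```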
Conclusions. Comparisons of widely used parameterizations of the shortwave and longwave radiation balance components in micrometeorology were made on the basis of long-term measurements at the Agrometeorological Observatory Debrecen, in the northeastern part of the Pannonian region. Quality-controlled datasets from WMO first-class radiation sensors were used for the period 2008-2017. The method for estimating the global solar irradiance (K↓) can be used for data quality control of the measured parameters and for intercomparison tests between the global solar irradiance (K↓) and the cloudiness (N). All the methods had relative errors below 25% in around 45% of the data hours. The usual errors were connected with short periods in which the sky quickly became either overcast or clear, or with low solar elevation, especially during winter. An individual review of the data, the synoptic situation and the weather and cloudiness measurements provided by Debrecen Airport (12882), 8.5 km from the Agrometeorological Observatory, is necessary in situations where the errors between the measured and calculated K↓ are significant. In these cases, a decision can be made as to whether the measured or the modeled data are acceptable. Comparing nearly 62,000 h of observations under all sky conditions, we concluded that:
• Foken's clear-sky calculations [11], together with the Kasten and Czeplak [39] methodology, are the best during unstable stratification, when the solar elevation is high.
• Measured global solar irradiance makes the estimation of the net irradiance significantly better in comparison to when this value is estimated.
• The estimation of the nighttime net thermal irradiance is crude and has systematic errors.
• Dilley and O'Brien's [48] clear-sky parameterization with the Holtslag and Van Ulden [5] cloudiness corrections had the smallest statistical errors for the estimation of the downwelling longwave irradiance.
• Holtslag and Van Ulden [5] as modified by Offerle [53] had the smallest statistical errors for the assessment of the upwelling LW irradiance.
The method for estimating the net irradiance is significant because this variable dictates the sensible heat flux and the momentum flux when the measured surface wind is taken into consideration in the surface layer of the PBL. Processes inside this layer are typically described by the laws of similarity theory, which lean on the Buckingham π-theorem. The most famous of these, the Monin-Obukhov similarity theory, uses the mentioned fluxes to determine the characteristic turbulent length scale on which classes of atmospheric stability are typically based. It would be valuable to compare this scale, the radiation components and the other surface layer parameters (SLP) with the same values from numerical models, and to discuss the observed differences. We believe that the observed differences emphasize the importance of dedicated flux and radiation measurements in the Pannonian Plain, where the wind is weak and extremely unstable and extremely stable stratifications of the PBL (their duration is still a topic of discussion) are far more common than in Western Europe. The chosen methods can be used for data quality control and for filling the gaps in these measurements.
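For reproducibility, the four statistics used throughout the comparisons can be computed as in the following generic sketch; the function is our own helper, not the study's code.

```python
import math

def error_statistics(modeled, measured):
    """BIAS, MAE, RMSE and Pearson correlation for paired hourly series."""
    n = len(modeled)
    diffs = [m - o for m, o in zip(modeled, measured)]
    bias = sum(diffs) / n
    mae = sum(abs(d) for d in diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mean_m = sum(modeled) / n
    mean_o = sum(measured) / n
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(modeled, measured))
    var_m = sum((m - mean_m) ** 2 for m in modeled)
    var_o = sum((o - mean_o) ** 2 for o in measured)
    r = cov / math.sqrt(var_m * var_o)
    return bias, mae, rmse, r
```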
2021-09-27T20:55:49.771Z
2021-07-21T00:00:00.000
{ "year": 2021, "sha1": "dbf83fc2a242c80ed27e7bb8a9c5c4cf375ba964", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4433/12/8/935/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "2916d5e350fea49791550068ea950ec45188cb44", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
110756348
pes2o/s2orc
v3-fos-license
Experimental Research on Properties of Materials of Grounding Resistor

In this study, we carried out experimental research on the properties of grounding resistor materials. Experimental tests of the grounding resistor under a simulated ground fault were performed, and the mechanical, thermal and electrical performance parameters of alloy materials of different kinds and different specifications were obtained. By analysis and processing, the performance characteristics of the alloy materials under a simulated ground fault were established. The research results are of significance for the material selection and structural design of low-resistance grounding resistors.

INTRODUCTION In high-voltage distribution systems, the choice of the neutral-point grounding method is an all-around technical problem. "With the increase of load, the overhead line is substituted by cable line gradually, the grounding way by small resistance is more and more adopted" (Zhao et al., 2007; Ming-Yan, 2004). But because the short-circuit current is high, "(100 A-2000 A)" (National Technical Supervision Bureau, 2001), the resulting high temperature has an obvious impact on the performance of the alloy material. In order to ensure that the low-resistance grounding resistor can run safely, stably and economically, test experiments on alloy materials under a simulated earth-fault condition were carried out and the experimental results analyzed. The research results are of significance for the choice of material and the structural design.

CURRENT DENSITY AND MAXIMUM TEMPERATURE RISE OF ALLOY MATERIALS Test data: Alloy materials of different kinds and different specifications were tested under a simulated ground fault, with a current duration of 10 s. The highest temperature rise and the maximum current density were obtained. The results of the through-flow tests of alloy materials of different kinds and sizes are shown in Table 1. Data processing and basic conclusions: The so-called maximum temperature rise is the highest temperature change of the resistance chip without permanent deformation; the current per unit cross-sectional area at this point is the maximum current density that this material specification can withstand. From Table 1 we can see that the maximum temperature rise and the maximum current density withstood by resistance chips of different materials are related to the structure, the specification and the material properties. For alloy materials such as Cr20Al3 and Cr15Al5, the relations between the maximum current density and the cross-sectional area are shown in Fig. 1 and 2; the figures show that the maximum current density of a material increases as its cross-sectional area increases (Dae-Jung et al., 2009; Da-Jiang et al., 2011).
The highest temperature rise of different sizes of the same alloy is related to the current density and to the duration of the current. According to heat-balance theory, the relationship between the temperature rise of an alloy material and its current density is as follows (Dae-Jung et al., 2009), calculated under the adiabatic condition and without consideration of the heat absorbed by ceramics and other non-metallic insulation materials:

I²Rt = cmΔT (1)

and, writing R = ρ₀(1 + αΔT)l/S and m = γlS for an element of length l and neglecting the temperature dependence of the resistivity,

ΔT ≈ ρ₀tJ²/(cγ), with J = I/S (2)

In the formulas, ΔT represents the temperature rise of the material; I, R and t represent the current, the resistance and the current duration, respectively; c, m and S represent the specific heat, the mass and the cross-sectional area; and ρ₀, α and γ express the resistivity, the temperature coefficient and the density of the alloy material at 20°C, respectively. In this test, t was set at 10 s; from expression (2) it can be seen that the maximum temperature rise of the alloy material is approximately proportional to the square of the maximum current density. The variation of the maximum temperature rise with the square of the maximum current density for alloy Cr15Al5 of different specifications is shown in Fig. 3, where we can see that the experimental results are basically consistent with the theoretical analysis; the error is mainly due to heat dissipation from the alloy material. In order to save alloy material and reduce the size of the resistor, the temperature rise of the material should be raised as close as possible to the highest value it can tolerate. From formulas (1) and (2) we get formula (3):

ΔT = ρ₀tI²/(cγS²) (3)

It can be seen from formula (3) that the temperature rise of the alloy material is inversely proportional to the square of the cross-sectional area. When the failure response time and the fault current are fixed, the smaller the cross-sectional area of the material, the higher the temperature rise the material must withstand. Therefore, from the perspective of saving alloy material, the design of the grounding resistor should allow the temperature rise of the alloy material to be as high as possible. Of course, setting the temperature rise of the alloy material too high will affect the stability and the safe operation of the grounding resistor to some extent. So, in order to achieve safe and economic operation of the grounding resistor, various factors, such as the cost of the material, the stability of the resistor and its security, need to be considered comprehensively in the design of the resistors (Zhen-Dong and Gong, 2006; Yu-Shi et al., 2005).

RESISTIVITY OF ALLOY MATERIAL Test data of alloy material resistivity: The resistance values of different materials at different current densities were tested, with a current duration of 10 s. After reduction, we obtained the resistance values and temperature coefficients of the various alloy materials at 20°C, shown in Table 2.
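A minimal numerical illustration of formulas (2) and (3) follows; the material constants below are typical handbook magnitudes for an Fe-Cr-Al resistance alloy and are assumptions for illustration, not the measured values of Tables 1 and 2.

```python
# Adiabatic temperature rise of a resistor element, formulas (2)-(3):
#   dT = rho0 * t * J**2 / (c * gamma), with J = I / S.
# The property values are typical handbook magnitudes for an Fe-Cr-Al
# resistance alloy (illustrative assumptions, not the paper's test data).
RHO_0 = 1.4e-6   # resistivity at 20 C, ohm * m
C = 460.0        # specific heat, J / (kg * K)
GAMMA = 7100.0   # density, kg / m^3

def temperature_rise(current_a: float, section_mm2: float,
                     duration_s: float = 10.0) -> float:
    """Adiabatic temperature rise (K) for a fault current lasting duration_s."""
    j = current_a / (section_mm2 * 1e-6)   # current density, A / m^2
    return RHO_0 * duration_s * j ** 2 / (C * GAMMA)

# Example: a 1000 A fault for 10 s through a 100 mm^2 element (~430 K rise).
print(round(temperature_rise(1000.0, 100.0)))
# Halving the cross-section quadruples the rise, as formula (3) states.
print(round(temperature_rise(1000.0, 50.0)))
```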
Data processing and basic conclusions: In general, the resistivity of the nickel-chromium-iron alloys is lower than that of the radiohm (Fe-Cr-Al) alloys with higher percentages of Al and Cr. Considering cost savings and size reduction, the radiohm alloy materials should be selected for the low-resistance grounding resistor. Chromium and aluminum are the main elements that raise the resistivity in the Fe-Cr-Al alloy series. It can be seen from the test results that the resistivity increases as the Al and Cr contents increase. The resistance-temperature coefficient of an alloy material is related to the material composition, especially the content of the main constituents such as nickel, chromium and aluminum. From the experimental test results we can see that, for the Fe-Cr-Al alloy series, the resistance-temperature coefficient decreases as the Al content increases. Alloy materials such as Cr19Al2, Cr19Al3 and Cr20Al3 have a higher resistance-temperature coefficient because their Al content is lower; for this reason their resistivity increases markedly during current operation. When the voltage is maintained, the current will then continue to decrease, so the operating power is unstable. But stable heating power is not necessary for a grounding resistor, which is used to provide an instantaneous energy release channel. Conversely, in the design of the grounding resistor, the nominal resistance value can adopt a tight design scheme, which not only saves resistance material but also essentially reaches the rated value of the earth-fault current.

Table 3 shows the results of the tensile tests of the materials. The tensile strength at room temperature and the plasticity are related to the content of elements such as Al and Cr, most obviously Al. The room-temperature tensile results for Cr20Al3 and Cr15Al5 are shown in Fig. 4 and 5, respectively. By comparison of Fig. 4 and 5, it can be seen that as the aluminum content increases, the plasticity of the alloy decreases. Table 4 shows the thermal expansion coefficients of the alloy materials (unit: 10^-6/°C). The tensile curve of Cr20Al3 at high temperature (730°C) is shown in Fig. 6. Comparing Fig. 4 and 6, it can be seen that under the condition of high temperature the tensile strength and the plasticity of the Fe-Cr-Al alloy material decline rapidly, while the brittleness increases. It can also be seen from the figures that the thermal expansion coefficient of the alloy materials increases with increasing temperature; the higher the aluminum and chromium content, the more markedly the thermal expansion coefficient increases with temperature.

CONCLUSION The test results show that when the grounding resistor is in a ground-fault condition, its mechanical, thermal and electrical performance characteristics are related not only to the type of material but also to the specifications and the component content of the materials. For the radiohm alloy materials, the chromium and aluminum contents have the most obvious influence on the material properties; in order to achieve safe, stable and economic operation of the low-resistance grounding resistor, the various factors should be considered comprehensively.

Fig. 1: The relation of the maximum current density and the cross-sectional area of alloy material Cr20Al3. Fig. 2: The relation of the maximum current density and the cross-sectional area of alloy material Cr15Al5. Fig. 3: The relation of the maximum temperature rise and the square of the maximum current density for alloy Cr15Al5.
Fig. 4: The relation of stress and strain of Cr20Al3 material at normal temperature. Fig. 6: The tensile curve of Cr20Al3 material at high temperature (730°C). Table 2: The test results of the resistivity of Aludirome materials with different specifications.
2019-01-02T09:10:54.401Z
2013-03-20T00:00:00.000
{ "year": 2013, "sha1": "8dfa5b0f4946275054131da5977c79e1ad03a7d1", "oa_license": "CCBY", "oa_url": "https://www.maxwellsci.com/announce/RJASET/5-2858-2862.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "8dfa5b0f4946275054131da5977c79e1ad03a7d1", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Engineering" ] }
233209949
pes2o/s2orc
v3-fos-license
Lp-asymptotic stability of 1D damped wave equations with localized and linear damping

In this paper, we study the $L^p$-asymptotic stability of the one-dimensional linear damped wave equation with Dirichlet boundary conditions in $[0,1]$, with $p\in (1,\infty)$. The damping term is assumed to be linear and localized to an arbitrary open sub-interval of $[0,1]$. We prove that the semi-group $(S_p(t))_{t\geq 0}$ associated with this equation is well-posed and exponentially stable. The proof relies on the multiplier method and depends on whether $p\geq 2$ or $1<p<2$.

In the linear localized damping case in higher dimensions, exponential stability has been established several times using different tools, in particular the multiplier method, which is the method relevant to the context of this paper. We refer the reader to [12] for a complete presentation of the method as well as of the tools associated with it. As for the stability results obtained by this method in this case, we refer for instance to [2] and [14] for detailed proofs and extended references. The non-linear problem, on the other hand, has been studied (for instance) in [15] with no localization and in [11] for a localized damping. We refer the reader to the excellent survey [2] for more references in the Hilbertian framework, i.e., when $p = 2$. As for more general functional frameworks, in particular $L^p$-based spaces with $p \neq 2$, few results exist, and one reason is probably the fact that, in such a non-Hilbertian framework, the semigroup associated with the d'Alembertian (i.e., the linear operator defining the wave equation) is in general not defined as soon as the space dimension is larger than or equal to two; see, e.g., [16]. This is why most of the existing results focus on several stabilization issues in one spatial dimension only. Well-posedness results as well as important $L^p$ estimates have been shown in [8], in particular the introduction of a $p$-th energy of a solution as a generalization of the standard $E_2(t) = \int_0^1 \frac{z_x^2 + z_t^2}{2}\,dx$. Some of these results have been used recently in [3,5]. The latter reference relies on Lyapunov techniques for linear time-varying systems to prove $L^p$ exponential stability for the nonlinear problem, under the hypothesis that the initial data live in $L^\infty$ functional spaces and with $p \geq 2$ only; other stability results, in particular $L^\infty$ stability, have been shown in the same reference, but always with extra conditions on the initial data, which creates a mismatch between the norms of the trajectories and the norms of the initial data used in their decay estimates.

In this paper we extend the results existing in the case $p = 2$ to the case $p \in (1, \infty)$ by adapting the multiplier method to this issue. We first state the problem and define the appropriate $L^p$ functional framework as well as the notion of solutions. We prove the well-posedness of the corresponding $C^0$ semi-group of solutions using an argument inspired by [9] and [5]. As for the stability issue, we prove that these semi-groups are indeed exponentially stable. Even though the argument depends on whether $p \geq 2$ or $p \in (1, 2)$, it is another instance of the multiplier method, where the multipliers are expressed in terms of the Riemann invariant coordinates $\rho = z_x + z_t$ and $\xi = z_x - z_t$. In particular, one of the multipliers in the case $p = 2$ is equal to $\phi(x)z$, with $\phi$ a non-negative function used to localize estimates inside $\omega$. If $p \geq 2$, this multiplier is replaced by the pair of functions $\phi(x)z|\rho|^{p-2}$ and $\phi(x)z|\xi|^{p-2}$.
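For the reader's convenience, the change of coordinates and the localized multipliers just mentioned can be summarized as follows (our own recap of the notation, under the normalizations used in the paper):

```latex
% Riemann invariants and the localized multipliers (summary of notation).
\[
  \rho = z_x + z_t, \qquad \xi = z_x - z_t .
\]
% For p = 2 the localized multiplier is \phi(x) z; for p \ge 2 it becomes the pair
\[
  \phi(x)\, z\, |\rho|^{p-2}, \qquad \phi(x)\, z\, |\xi|^{p-2},
\]
% with \phi \ge 0 smooth and supported near the damping region \omega.
```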
Clearly, such multipliers cannot be used directly if $p \in (1, 2)$ and must be modified, which yields a more delicate treatment. In both cases, energy integral estimates are established following the standard strategy of the multiplier method, and exponential stability is proved. For the two extreme cases $p = 1$ and $p = \infty$, we are able to prove that the corresponding semi-groups are exponentially stable only in particular cases of global constant damping. However, we conjecture that this should be true for any localized damping. The paper is divided into four sections, the first one being the introduction and the second one devoted to the main notations used throughout the paper. Section 3 deals with the well-posedness issue and Section 4 contains the main result of the paper, i.e., exponential stability of the $C^0$ semi-group of solutions for $p \in (1, \infty)$, as well as the partial result for $p = 1$ and $p = \infty$. We gather several technical results in an appendix.

2 Statement and main notations of the problem

Consider Problem (1), where we assume the following hypothesis to be satisfied: (H$_1$) $a : [0, 1] \to \mathbb{R}$ is a non-negative continuous function such that $a \geq a_0 > 0$ on $\omega = (c, d)$, where $\omega$ is a non-empty interval such that $c = 0$ or $d = 1$, i.e., $\omega$ contains a neighborhood of $0$ or $1$. There is no loss of generality in assuming $d = 1$ and taking $0$ as an observation point.

Remark 2.1 The results of this paper still hold if the assumption that $c = 0$ or $d = 1$ is removed, by using a piecewise multiplier method: one can use both $0$ and $1$ as observation points (instead of simply $0$ here) to obtain the required energy estimate.

For $p \in [1, \infty)$, consider the function spaces $X_p = W_0^{1,p}(0,1) \times L^p(0,1)$ and $Y_p = \big(W^{2,p}(0,1) \cap W_0^{1,p}(0,1)\big) \times W_0^{1,p}(0,1)$, each equipped with its natural norm. Initial conditions $(z_0, z_1)$ for weak (resp. strong) solutions of (1) are taken in $X_p$ (resp. in $Y_p$), where the two concepts of solutions are precisely defined later in Definition 3.1. For all $(t, x) \in \mathbb{R}_+ \times (0, 1)$, define the Riemann invariants $\rho = z_x + z_t$ and $\xi = z_x - z_t$. Along strong solutions of (1), we deduce that
\[ \rho_t - \rho_x = -\tfrac{a}{2}(\rho - \xi), \qquad \xi_t + \xi_x = \tfrac{a}{2}(\rho - \xi), \tag{9} \]
with $(\rho_0, \xi_0) \in W^{1,p}(0, 1) \times W^{1,p}(0, 1)$. We define the $p$th-energy of a (weak) solution of (1) as the function $E_p$ defined on $\mathbb{R}_+$ by
\[ E_p(t) = \frac{1}{2p} \int_0^1 \big( |z_x + z_t|^p + |z_x - z_t|^p \big)\, dx, \]
and $E_p$ can be expressed in terms of $\xi$ and $\rho$ as $E_p(t) = \frac{1}{2p} \int_0^1 (|\rho|^p + |\xi|^p)\, dx$. For $r \geq 0$, we introduce the odd power notation $|x|^r \mathrm{sgn}(x)$, where $\mathrm{sgn}(x) = \frac{x}{|x|}$ for nonzero $x \in \mathbb{R}$ and $\mathrm{sgn}(0) = [-1, 1]$. Obvious formulas attached to this notation will be used all over the paper.

Before we state our results, we provide the following proposition (essentially taken from [9]).

Proposition 2.1 Let $p \in [1, \infty)$ and suppose that a strong solution $z$ of (9) is defined on a non-trivial interval $I \subset \mathbb{R}_+$ containing $0$, for some initial conditions. Set $\Phi(t) = \int_0^1 \big( F(\rho(t,x)) + F(\xi(t,x)) \big)\, dx$, where $F$ is a $C^1$ convex function. Then $\Phi$ is well defined for $t \in I$ and satisfies $\frac{d\Phi}{dt} \leq 0$.

Proof. By the regularity assumptions, $\rho(t, \cdot)$ and $\xi(t, \cdot)$ are absolutely continuous functions. Formal differentiation, easy to justify a posteriori by the regularity of the data, yields an expression for $\frac{d\Phi}{dt}$; using (9), one reduces it to boundary terms and a damping term. Since $F$ is convex, $F'$ is non-decreasing, implying that $(\rho - \xi)(F'(\rho) - F'(\xi)) \geq 0$, which gives the conclusion when combined with (18).

Corollary 2.1 Suppose that the solution $z$ of (1) exists on $\mathbb{R}_+$. Then the energy $t \mapsto E_p(t)$ is non-increasing and estimate (19) holds for $t \geq 0$. Proof. For $(z_0, z_1) \in Y_p$ and $p > 1$, we apply Proposition 2.1 with $F(s) = \frac{|s|^p}{p}$, which proves (19).

3 Well-posedness

We start by recalling the classical representation formula for regular solutions of (1) given by the d'Alembert formula, cf. [17, Equation 8, page 36].
Proposition 3.1 Consider the following problem, with an arbitrary source term $g \in C^2(\mathbb{R}_+ \times \mathbb{R}, \mathbb{R})$ and initial data $z_0 \in C^2(\mathbb{R})$ and $z_1 \in C^1(\mathbb{R})$. Then the unique solution $z$ of this problem belongs to $C^2(\mathbb{R}_+ \times \mathbb{R}, \mathbb{R})$ and is given, for all $(t, x) \in \mathbb{R}_+ \times \mathbb{R}$, by the d'Alembert formula
\[ z(t,x) = \frac{z_0(x+t) + z_0(x-t)}{2} + \frac{1}{2}\int_{x-t}^{x+t} z_1(s)\, ds + \frac{1}{2}\int_0^t \int_{x-(t-s)}^{x+(t-s)} g(s, y)\, dy\, ds. \tag{21} \]

In order to apply the above proposition to (1), we extend, by a standard procedure (cf. [7, Exercise 4, Section 4.3]), the partial differential equation defined on $\mathbb{R}_+ \times (0, 1)$ to an equivalent one defined on $\mathbb{R}_+ \times \mathbb{R}$. We first extend the data of the problem by considering $\tilde z_0$, $\tilde z_1$ and $\tilde g$, the 2-periodic extensions to $\mathbb{R}$ of the odd extensions of $z_0$, $z_1$ and $g$ to $[-1, 1]$. Using (21), we then obtain the expression of the solution $z$ of the problem on $\mathbb{R}_+ \times (0, 1)$, which clearly provides, for every $t \geq 0$, a 2-periodic odd function $z(t, \cdot)$ on $\mathbb{R}$. We also have the corresponding expressions for the derivatives $z_t$ and $z_x$. Before we proceed to the well-posedness of (1) in $X_p$ (resp. $Y_p$), we need to define the notion of its weak and strong solutions.

Theorem 3.1 (Well-posedness) Let $p \in [1, \infty)$. For any initial data $(z_0, z_1) \in X_p$ (resp. $Y_p$), there exists a unique weak (resp. strong) solution $z$ of (1). Moreover, in both cases, the energy function $t \mapsto E_p(t)$ associated with a solution is non-increasing.

Proof. The arguments for both items are adapted from those of [5, Theorem 1]. We prove the existence of an appropriate solution $y$ of (25) by a standard fixed-point argument. We proceed on some interval $[0, T]$ for $T > 0$ small enough, independent of the initial condition. We can then reproduce the reasoning on $[T, 2T]$, starting from the solution at $t = T$, and so on, to establish well-posedness for all $t \geq 0$. Since $\tilde g$ is a 2-periodic function in space, it is natural to work in a space of functions that have the same features. Hence we denote by $B_T$ the space of functions that are defined on $[0, T] \times \mathbb{R}$, odd on $[-1, 1]$, 2-periodic in space and $p$-integrable. The space $B_T$ is equipped with a norm which makes it a Banach space. We define the mapping $F_T$ on $B_T$; since $a$ is bounded, it is clear that $F_T$ is a contraction on $B_T$ for $T > 0$ small enough, hence the existence of a fixed point of $F_T$, which is a (weak) solution of (1). It is also clear that $T$ does not depend on the initial condition $(z_0, z_1) \in X_p$. As explained previously, this enables one to prove well-posedness in $X_p$. As for the part regarding $Y_p$, the argument is similar to the previous one, after replacing $B_T$ by the space $D_T$ consisting of the functions defined on $[0, T] \times \mathbb{R}$ which are odd on $[-1, 1]$ and 2-periodic in space with $p$-integrable derivative with respect to $x$, equipped with its corresponding norm. For $(z_0, z_1) \in X_p$, $p > 1$, we get that $t \mapsto E_p(t)$ is non-increasing by the fact that $Y_p$ is dense in $X_p$. For $p = 1$, we use the facts that $X_p$ is dense in $X_1$ for $p > 1$ and that the map $p \mapsto E_p(t)$, for a fixed trajectory and a fixed positive time $t$, is right-continuous. Since (1) is linear and $t \mapsto E_p(t)$ is non-increasing, the flow of its weak solutions defines a $C^0$-semigroup $(S_p(t))_{t \geq 0}$ of contractions of $X_p$, for every $p \in [1, \infty)$.

4 Exponential Stability

In this section, we aim to establish exponential stability of the $C^0$-semigroup $(S_p(t))_{t \geq 0}$ defining the weak solutions of (1) for every $p \in (1, \infty)$. The argument relies on the multiplier method and differs slightly depending on whether $p \geq 2$ or not. Indeed, two multipliers involve the exponent $p - 2$, which becomes negative if $p \in (1, 2)$.
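The obstruction can be seen directly on the weight entering these multipliers (our remark):

```latex
% For p >= 2 the weight s \mapsto |s|^{p-2} is bounded near s = 0, so the
% multipliers \phi z |\rho|^{p-2} and \phi z |\xi|^{p-2} are well defined.
% For 1 < p < 2, however,
\[
  |s|^{p-2} \longrightarrow +\infty \quad \text{as } s \to 0,
\]
% so the weight blows up wherever \rho or \xi vanishes, and bounded
% substitutes (the functions g and G introduced below) are needed.
```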
In the latter case, one must modify all the multipliers to handle that situation. Before describing these results, we state the following weaker general stability result for $p \in (1, \infty)$.

Proposition 4.1 (Strong stability) Fix $p \in [1, \infty)$ and suppose that Hypothesis (H$_1$) is satisfied. Then, for every $(z_0, z_1) \in X_p$, the solution $z(t, \cdot)$ of (1) starting at $(z_0, z_1)$ tends to zero as $t$ tends to infinity.

Proof. We follow the proof provided in the case $p = 2$ in [6]: by a standard density argument, it is enough to establish the result for strong solutions of (1). The latter is obtained by a LaSalle type of argument using the energy function $E_p$ and the fact that the set $\{z(t, \cdot),\ t \geq 0\}$ is relatively compact in $W_0^{1,p}(0, 1)$, which is itself obtained by noticing that $z_t$ is a weak solution of (1) with bounded energy $E_p$.

We next introduce some functions and notations which are common to the handling of both cases. Recalling that we have chosen $x_0 = 0$ as an observation point, we consider suitably nested cutoffs adapted to $\omega$. More precisely, the functions $\psi$, $\phi$ and $\beta$ are smooth with compact support and defined as in (31).

Remark 4.1 In the sequel, we will denote by $C_p$ positive constants depending only on $p$, and by $C$ positive constants depending on $a(\cdot)$ (typically through its upper bound $A$ on $[0, 1]$ and its lower bound $a_0$ on $\omega$) and on $\psi$, $\phi$ and $\beta$ (through bounds on their first derivatives over their supports).

Our main result is the following theorem.

Theorem 4.1 Fix $p \in (1, \infty)$ and suppose that Hypothesis (H$_1$) is satisfied. Then the $C^0$-semigroup $(S_p(t))_{t \geq 0}$ is exponentially stable.

As usual, it is enough to prove Theorem 4.1 for strong solutions and then extend the result to weak solutions by a density argument. In turn, the theorem for strong solutions classically follows from the next proposition, cf. [2, Theorem 1.4.2] for instance.

Proposition 4.2 Fix $p \in [2, \infty)$ and suppose that Hypothesis (H$_1$) is satisfied. Then there exist positive constants $C$ and $C_p$ such that every $(z_0, z_1) \in Y_p$ satisfies the energy estimate (32), where $E_p(\cdot)$ denotes the energy of the solution of (1) starting at $(z_0, z_1)$.

The proof is divided into four steps in Subsections 4.1.1-4.1.4. We fix an arbitrary pair of times $0 \leq S \leq T$ and a strong solution $z(\cdot, \cdot)$ of (1) starting at $(z_0, z_1) \in Y_p$, and we consider three sets of multipliers: (m1) $x\psi(x) f(\rho)$ and $x\psi(x) f(\xi)$; (m2) $\phi(x) f'(\rho) z$ and $\phi(x) f'(\xi) z$; (m3) the auxiliary function $v$ defined in (33); where the function $f$ is defined by $f(s) = |s|^{p-2} s$. Note that we use the usual notation $q = \frac{p}{p-1}$ for the conjugate exponent of $p$.

Remark 4.2 In the Hilbertian case $p = 2$, the classical multipliers as given in [2] are $x\psi(x) z_x(t, x)$, $x\phi(x) z(t, x)$ and the $v$ associated with $p = 2$ (i.e., $v_{xx} = \beta z$). While our third multiplier $v$ is clearly a straightforward extension of the Hilbertian case to any $p \in [1, \infty)$, the two sets of multipliers given in Items (m1) and (m2) seem to be new, even though those of Item (m2) reduce to the classical ones when $p = 2$.

First set of multipliers. The first step toward an energy estimate consists in obtaining an inequality that contains the expression of the energy $E_p$; for this purpose, we use the first set of multipliers of Item (m1). We obtain the following lemma, whose conclusion is the estimate (36), where $C_p$ denotes constants that depend on $p$ only.

Proof.
Multiplying the first equation of (9) by $x\psi f(\rho)$ and integrating over $[S, T] \times [0, 1]$, we treat the resulting terms one by one. Regarding $-\int_S^T \int_0^1 x\psi f(\rho) \rho_x\, dx\, dt$, we use an integration by parts with respect to $x$ and obtain (39). By combining (38) and (39), we get (40). We proceed similarly by multiplying the second equation of (9) by $x\psi f(\xi)$ and, following the same steps that yielded (40), we obtain (41). Summing up (40) and (41) and using the definition of $\psi$, we then complete the expression of the energy $E_p$ on the left-hand side of the resulting equality and are left with estimating the remaining terms, where $S_4$ has been defined in (36). As for $S_2$, using the fact that $|x\psi| < 1$ and the fact that $t \mapsto E_p(t)$ is non-increasing, one gets the upper bound (46) for $S_2$. We finally estimate $S_3$. Recall that $q := \frac{p}{p-1}$ denotes the conjugate exponent of $p$. Using (152) in Lemma A.1 with $A = a(x)|\rho - \xi|$, $B = |f(\xi)| + |f(\rho)|$ and $\eta = \eta_1$, where $\eta_1 > 0$ is an arbitrary constant, it follows that (47) holds. Set $R = \max(|\rho|, |\xi|)$. Then, for every $0 < \mu_1 < 1$, one has (48). For the first integral term in (48), we have directly from Lemma A.3 with $a = \rho$, $b = \xi$ the bound (49); as for the second integral term in (48), we have (50). Combining (48), (49) and (50), we obtain (51), and combining (51) with (47) yields (52). Gathering (44), (45), (46) and (52), we can choose $\eta_1 > 0$ and $\mu_1 > 0$ so as to absorb the corresponding terms, which proves (36).

Second pair of multipliers. The second set of multipliers given in Item (m2) is used to handle the term $S_4$ in (36), and it leads us to the following lemma, whose conclusion is the estimate (55), where $\eta_2$ is an arbitrary constant in $(0, 1)$ and $C$ and $C_p$ are positive constants whose dependence is specified in Remark 4.1.

Proof. We multiply the first equation of (9) by $\phi f'(\rho) z$, where $z$ is the solution of (1), and integrate over $[S, T] \times [0, 1]$. On the one hand, we have (56); on the other hand, an integration by parts with respect to $x$ yields (57), and then (58) and (59). Putting together (56), (57), (58) and (59), it follows that (62) holds. We proceed similarly after multiplying the second equation of (9) by $\phi f'(\xi) z$ and, following the same steps that led to (62), we obtain (63). We take the sum of (62) and (63) and get (64). Using the definition of $\phi$ in (31) and the fact that $2 z_t = \rho - \xi$, we derive (65). We start by estimating $T_1$. We have $T_1 \leq \tilde T_1$, where $\tilde T_1$ is defined in (66); this gives (67) when using (152) in Lemma A.1 with $A = |z|$ and $B \in \{|f(\rho)|, |f(\xi)|\}$, where $\eta_2 > 0$ is arbitrary. To estimate $T_2$, we have, by Young's inequality as recalled in Lemma A.1, the bound (68), and, using Poincaré's inequality, the bound (69). Combining (68) and (69) with the fact that $t \mapsto E_p(t)$ is non-increasing, the required estimate for $T_2$ follows. As for $T_3$, we first notice an elementary pointwise inequality valid for every $(\rho, \xi) \in \mathbb{R}^2$; it follows that $T_3 \leq C_p \tilde T_1$, where $\tilde T_1$ has been defined in (66) and is upper bounded in (67).

Third multiplier. It remains to tackle the term $T_5$ appearing in (55). To handle it, we consider the multiplier introduced in Item (m3) and, in order to obtain the required upper bounds, we will need estimates of the $L^q$-norms of $v$ and $v_t$, where $q = \frac{p}{p-1}$, given in the following lemma.

Lemma 4.3 For $v$ as defined in (33), we have the estimates (73) and (74), where $\sigma > 0$ is an arbitrary positive constant and $C$ and $C_p$ are positive constants whose dependence is specified in Remark 4.1.

Proof.
For $p > 2$, we apply Young's inequality with the pair of conjugate exponents $\big(p - 1, \frac{p-1}{p-2}\big)$ and conclude as for (73). The next lemma shows the use of the third multiplier $v$.

Lemma 4.4 Under the hypotheses of Proposition 4.2, with $v$ as defined in (33), we have the estimate (79), where $r = \frac{2p^2 - p - 2}{p}$, $\eta$ is any real number in $(0, 1)$ and $C$ and $C_p$ are positive constants whose dependence is specified in Remark 4.1.

Proof. We multiply the first equation of (9) by $v$. First, an integration by parts with respect to $t$ gives (80); then, an integration by parts with respect to $x$ yields (81), and we also have the identity (84). Combining (80), (81) and (84), we obtain (85). We next multiply the second equation of (9) by $v$ and, following the same steps that yielded (85), we get (86). Now taking the sum of (85) and (86) and using the definition of $\beta$, we obtain (88). We start by estimating $V_1$. For fixed $t \in [S, T]$, using (73), and since $E_p(T) \leq E_p(S)$, we get (90). Using Young's inequality, we have (91) for every $\eta > 0$. From (74) and the fact that the definition of $\beta$ implies $\beta \leq C a$, we get a bound for every $\sigma > 0$ and, using (51), we obtain (93) for every $\sigma, \mu_1 > 0$. Combining (91) and (93), we obtain a bound valid for every $\eta, \sigma, \mu_1 > 0$; choosing $\sigma = \eta^2$ and $\mu_1 = \eta^{2\frac{p-1}{p}}$, one gets (95) for every $\eta > 0$. Finally, we estimate $V_3$ in (88). Using Young's inequality, we have, for every $\nu > 0$, a bound which yields, by using (73) and (51), an estimate valid for every $\nu, \mu > 0$; choosing $\mu^p = \nu$, one gets (98) for every $\eta > 0$. Combining (88), (90), (95) and (98) and taking $\nu = \eta < 1$, we obtain (79).

End of the proof of Proposition 4.2. Collecting (36), (55) and (79), we obtain an inequality valid for every positive $\eta_2$, $\eta_3$ and $\eta \in (0, 1)$. Taking $\eta = \eta_2^{p+q}$ and fixing $\eta_2$ so that $2 C C_p \eta_2^q = \frac{1}{2}$, we immediately get (32). It is then standard to deduce that there exists $\gamma_p > 0$ such that, for every $(z_0, z_1) \in X_p$, the energy $E_p$ associated with the solution $z(t)$ of (1) starting at $(z_0, z_1)$ satisfies the exponential decay estimate (100). That concludes the proof of Proposition 4.2.

Case where $1 < p < 2$. The main issue in proving Theorem 4.1 in the case $p \in (1, 2)$ (with respect to the case $p \in [2, \infty)$) is the trivial fact that $p - 2 < 0$, and hence the weights $f'(\rho)$ and $f'(\xi)$ used in the multipliers of Items (m2) and (m3) may not be defined on sets of positive measure. As a consequence, we cannot use these multipliers directly and we have to modify the functions $f$ and $F$. This is why we consider, for $p \in (1, 2)$, the functions $g$ and $G$ defined on $\mathbb{R}$ by (101) and (102). It is clear that $|g(y)| \leq |f(y)|$ and $|G(y)| \leq |F(y)|$ for every $y \in \mathbb{R}$. Finally, using the function $g$, we also modify the energy $E_p$ by considering, for every $t \in \mathbb{R}_+$ and every solution of (1), the modified energy $\tilde E_p$. We start with an extension of Proposition 2.1.

Lemma 4.5 For every $p \in (1, 2)$: • the function $g$ is an odd bijection from $\mathbb{R}$ to $\mathbb{R}$ with a continuous first derivative which is decreasing on $\mathbb{R}_+$; • the function $G$ is even, of class $C^2$ and strictly convex; • the energy $t \mapsto \tilde E_p(t)$ is non-increasing on $\mathbb{R}_+$.

Proof. One computes, for every $x \in \mathbb{R}$, the derivative $g'$ explicitly; it is clear that $g'$ is continuous and positive, which proves the strict convexity. The last item follows after using Proposition 2.1 with $F = G$, which admits a continuous first derivative by what precedes.

We define now the convex conjugate (cf.
[10]) of $G$, which we denote from now on by $H$ and which is defined as the Legendre transform of $G$, i.e., $H(s) = \sup_{v \in \mathbb{R}} \big( s v - G(v) \big)$. Since $G$ is of class $C^2$ with invertible first derivative $g$, one has the explicit expressions (106) and (107). The second equality in (106) is obtained using the change of variable $v = g^{-1}(s)$, and (107) follows (for instance) by integration by parts of the right-hand side of (106).

The proof of Theorem 4.1 in the case $p \in (1, 2)$ relies on the following proposition, which gives an estimate of the modified energy $\tilde E_p$ of a strong solution and which is similar to Proposition 4.2.

Proposition 4.3 Fix $p \in (1, 2)$ and suppose that Hypothesis (H$_1$) is satisfied. Then there exist positive constants $C$ and $C_p$ such that, for every $(z_0, z_1) \in Y_p$ verifying (108), we have the energy estimate (109).

We next develop an argument for Proposition 4.3 which follows the lines of the proof of Proposition 4.2. The main idea consists in replacing $f, F$ by $g, G$ and in controlling all the constants $C_p$ involved in these estimates in terms of $p \in (1, \infty)$. We provide only a sketchy presentation, in which we make precise only the details specific to the present case. As a consequence of (108), Corollary 2.1 and standard estimates (such as the fact that $\tilde E_p \leq E_p$), one deduces (110), where $C_p$ is a positive constant depending on $p$ only.

First pair of multipliers. For the first pair of multipliers, we replace the function $f$ in Item (m1) by the function $g$ and hence use $x\psi g(\rho)$ and $x\psi g(\xi)$, where $\psi$ is defined in (31).

Lemma 4.6 Under the hypotheses of Proposition 4.3, we have the estimate (111).

Proof. Estimate (111) is obtained by following exactly the same steps as those used to derive (36), with the difference that we use the function $g$ instead of the function $f$. By multiplying the first equation of (9) by $x\psi g(\rho)$ and the second one by $x\psi g(\xi)$, we perform the integrations by parts described to obtain (43) with the function $f$, and we are led to a similar equation. Using the fact that $\psi_x$ is bounded, we get at once a bound in terms of $\tilde S_4$, where $\tilde S_4$ has been defined in (111). Using now the fact that $|x\psi| \leq 1$ and the fact that $t \mapsto \tilde E_p(t)$ is non-increasing, the corresponding boundary terms are controlled. As for $\tilde S_3$, we proceed as for the estimate of $S_3$, using (167) and Lemma A.6 instead of Lemma A.1 and Lemma A.3, respectively. In particular, we have the estimate (116), which extends (51) to the case $p \in (1, 2)$ and which holds for every $\mu_1 \in (0, 1)$, where the modified energy appearing in (116) is defined in (117).

Second pair of multipliers. The goal of this subsection is to estimate $\tilde S_4$. To do so, we replace the function $f$ in Item (m2) by the function $g$ and hence define the pair of multipliers $\phi g'(\rho) z$ and $\phi g'(\xi) z$, where $\phi$ is defined in (31).

Lemma 4.7 Under the hypotheses of Proposition 4.3 and for $1 < p < 2$, with $\phi$ as defined in (31), we have the estimate (118), where $\eta_2$ is an arbitrary constant in $(0, 1)$ and $C$ and $C_p$ are positive constants whose dependence is specified in Remark 4.1.

Proof. Estimate (118) is obtained by following the same steps as those used to derive (55), with $g$ instead of $f$. By multiplying the first equation of (9) by $\phi g'(\rho) z$ and the second one by $\phi g'(\xi) z$, where $z$ is the solution of (1), we perform the integrations by parts described to obtain (64) with the function $f$, and we are led to the equation (119). According to (159), one has a pointwise bound; hence, also using the definition of $\phi$, it follows from (119) that (121) holds for some positive constant $C$. The equation (121) must be put in parallel with (65), where the term $\tilde T_j$, $1 \leq j \leq 4$, in (121) corresponds to the term $T_j$ in (65).
The term $\tilde T_1$ is handled exactly as the term $T_1$, using (167) instead of Lemma A.1, in order to obtain the analogue of (67), where $\eta_2 > 0$ is arbitrary. We proceed similarly for the term $\tilde T_2$, using (167) instead of Young's inequality and Corollary A.1 instead of the standard Poincaré inequality. The term $\tilde T_4$ can be treated identically to the term $T_4$. We now turn to an estimate of $\tilde T_3$, which differs slightly from that of $T_3$ because of the appearance of the function $g'$. Using (110) and the second equation in (163), one deduces a bound with a positive constant $C_p$ depending only on $p$; applying (167) to it, we end up with an estimate of $\tilde T_3$ by exactly the right-hand side of (122), and one concludes.

Third multiplier. We finally turn to an estimate of the term $\tilde T_5$ and, relying on the multiplier defined in Item (m3), after changing the function $f$ into the function $g$, we get the multiplier (still denoted) $v$, the solution of the elliptic problem (127) defined at every $t \geq 0$, where $\beta$ is defined in (31). We will need the following estimates of $v$ and $v_t$, given in the next lemma.

Lemma 4.8 For $v$ as defined in (127), we have the estimates (128) and (129), where the modified energy is defined in (117).

Proof. From the definition of $v$, one gets an explicit representation; it immediately follows that a first bound holds, where we have used (163) according to whether $C_p \int_0^1 \beta |g(z)|\, ds \geq M$ or not. Since $H$ is (strictly) convex, one can apply Jensen's inequality to the right-hand side of that bound, and one derives (128) by using (163) together with (110). Similarly, upper bounding $|g'(z)|$ by $1$, one deduces a bound for $v_t$, where we used the fact that $\beta(x) \leq C a(x)$ on $[0, 1]$ and the convexity of $H$. Since $\int_0^1 a(x) |z_t|\, dx = \int_{|z_t| \leq M} + \int_{|z_t| > M}$, we obtain (129) according to (163), (164) and Hölder's inequality.

We now use the multiplier $v$ in (9) and get the following result: for $v$ as defined in (127), the estimate (139) holds, where $C_p$ is a positive number depending only on $p$ and $\eta$ is any real number in $(0, 1)$.

Proof. Proceeding as in the proof of Lemma 4.4 to derive (88), we obtain the analogous identity. After using Fenchel's inequality (153) and (128), one gets an estimate for $V_1$. As for $V_2$, we first apply (166) (corresponding to the adaptation to the case $p \in (1, 2)$ of the use of Young's inequality in (91)) to get a bound valid for every $0 < \eta < 1$. To handle the first integral term in the right-hand side of that bound, we use (129) and (116), valid for every $0 < \eta, \mu < 1$. For $V_3$, we apply Fenchel's inequality, (128) and (116) to get a bound valid for every $0 < \lambda, \sigma < 1$. One chooses $\lambda$, $\mu$ and $\sigma$ appropriately in terms of $\eta$ to conclude the proof of (139) easily.

It is immediate to derive (109) by gathering (111), (118) and (139), with a constant $C_p$ depending only on $p$. One deduces exponential decay of $\tilde E_p$, exactly of the type (100), with a constant $\gamma_p > 0$ depending only on $p$, for weak solutions whose initial conditions verify (108). Pick now any $(z_0, z_1) \in X_p$ such that $E_p(0) = 1$. One deduces a decay estimate for every $t \geq 0$, since $\tilde E_p \leq E_p$. Let $c_p$ be a positive constant realizing the comparison between the two energies on the relevant set; such a constant $c_p > 0$ exists according to the second equation in (164) and can be taken equal to $\frac{p-1}{2}$. For every $t \geq 0$ and $x \in [0, 1]$, let $R(t, x) = \max(|\rho(t, x)|, |\xi(t, x)|)$. Elementary computations then give a pointwise comparison, from which one deduces (142). Setting $t_p$ accordingly and using (142), it follows that $E_p(t) \leq \frac{1}{2}$ if $t \geq t_p$, which implies that the $C^0$-semi-group $(S_p(t))_{t \geq 0}$ is exponentially stable for $p \in (1, 2)$.
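The step from the integral energy estimate to exponential decay, used for both ranges of $p$, is the classical criterion of Komornik type; we believe the version being invoked reads as follows (our formulation, under the stated monotonicity assumptions):

```latex
% Classical integral inequality criterion (see, e.g., Komornik's book):
% if E is non-negative, non-increasing and there exists C > 0 with
\[
  \int_{S}^{\infty} E(t)\,dt \;\le\; C\,E(S) \qquad \text{for every } S \ge 0,
\]
% then E decays exponentially:
\[
  E(t) \;\le\; E(0)\, e^{\,1 - t/C} \qquad \text{for every } t \ge 0 .
\]
```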
Remark 4.3 From the argument, it is not difficult to see that $\gamma_p$ is bounded above and that $c_p$ must tend to zero as $p$ tends to infinity. This yields that our estimate for $t_p$ tends to infinity as $p$ tends to one. Hence it is not obvious how to use our line of proof to get exponential stability for $p = 1$.

Case of a global constant damping. Suppose now that we are dealing with a global constant damping, in other words $\omega = (0, 1)$ and $a \equiv \alpha$, where $\alpha$ is a positive constant. We then prove the following proposition.

We can now state a lemma which is basic for our subsequent work.

Lemma A.4 Let $p \in (1, 2)$. Then the functions $g$, $G$ and $H$ defined in (101), (102) and (105) satisfy the following relations: (i) for every $x \in \mathbb{R}$, the estimate (158) holds; (ii) for every $x \in \mathbb{R}$, the estimate (159) holds; (iii) there exists a positive constant $C_p$ depending only on $p$ such that, for every $x \in \mathbb{R}$, one has $C_p^{-1}\, x g(x) \leq H(g(x)) \leq C_p\, x g(x)$. (160)

Since both $H$ and $G$ are even functions, we assume with no loss of generality that both $a$ and $x$ are non-negative. Using the estimates for $G$ and $H$ given in (163), (164) and (168), (169), respectively, we deduce that there exists a positive constant $C_p$ depending only on $p \in (1, 2)$ such that the required bound holds for every $a \geq 0$ and $\eta \in (0, 1)$, and one immediately gets (165) from (170) and (171). On the other hand, (167) follows from (165) and (171) after setting $x = g(y)$ and using (161). Similarly, to get (166), we start from the corresponding product estimate and proceed as above to get the conclusion.

As a corollary of the previous lemma, we have the following Poincaré-type result.

Proof. With no loss of generality, we can assume that the right-hand side of (173) is finite. One has, for every $x \in [0, 1]$, $G(z(x)) = \int_0^x z'(s)\, g(z(s))\, ds$. By applying (167), one gets that, for every $x \in [0, 1]$,
\[ G(z(x)) \leq C_p\, \eta^p \int_0^1 g(z'(s))\, ds + C_p\, \eta \int_0^1 g(z(s))\, ds, \]
for every $\eta > 0$ and positive constants $C_p$ depending only on $p$. By integrating between $0$ and $1$ and then choosing $\eta$ appropriately, one concludes.

The following lemma is a useful extension of Lemma A.3 with $f, F$ replaced by $g, G$.

Lemma A.6 For $p > 1$, there exists a positive constant $C_p$ such that, for all real numbers $a, b$ and $\mu \in (0, 1)$ subject to $|a - b| \geq \mu \max(|a|, |b|)$, the corresponding comparison estimate holds.

Proof. Thanks to (159), it is enough to prove the existence of $C_p > 0$ such that (177) holds for every $a \geq b$ and $\mu \in (0, 1)$ with $|a - b| \geq \mu R$, where $R = \max(|a|, |b|)$. Assume first that $ab \leq 0$. Then the left-hand side of (177) is smaller than $g(2R)$, while $|g(a) - g(b)| \geq g(R)$. Clearly $g(2R) \leq 2 g(R)$ since $g$ is concave, and hence (177) holds in that case for any $C_p \geq 2$. We next assume that $a \geq b \geq 0$ and consider $c = a - b$ instead of $b$. The assumption on $a, b$ reads $c \geq \mu a$. Equation (177) becomes
\[ g(c) \leq C_p\, \mu^{2-p} \big( g(a) - g(a - c) \big). \]
Note that the right-hand side of the above inequality defines a decreasing function of $a$ once the other parameters are fixed. It is therefore enough to consider the case $a = \frac{c}{\mu}$. Substituting $a = \frac{c}{\mu}$ in the explicit expression of $g$, we are led to prove the existence of $C_p > 0$ such that (179) holds for every $c > 0$ and $\mu \in (0, 1)$. Applying the mean value theorem to both sides and reordering the terms, (179) reads as an inequality involving the ratio $\frac{\mu + \mu\eta_2}{\eta_1 + 1}$, for some $\eta_1 \in (0, \mu c)$ and $\eta_2 \in ((1 - \mu) c, c)$, both depending on $c > 0$ and $\mu$. Assume first that $\mu c \leq 1$. Then clearly (179) holds for any $C_p \geq 2^{2-p}$, according to (180).
If now $\mu c > 1$, then $c > 1$ and, since the left-hand side of (179) is smaller than $(\mu c)^{p-1}$, we are left to find $C_p > 0$ such that the corresponding inequality, now involving the ratio with $\mu c$, holds. The left-hand side of that inequality is again smaller than $2^{2-p}$, and one concludes.
2021-04-13T01:15:58.134Z
2021-04-12T00:00:00.000
{ "year": 2021, "sha1": "95ba85ecbe36d5d5f3776dccb1898df026493f65", "oa_license": null, "oa_url": "https://www.esaim-cocv.org/articles/cocv/pdf/forth/cocv210066.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "95ba85ecbe36d5d5f3776dccb1898df026493f65", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
71147542
pes2o/s2orc
v3-fos-license
Evaluation of endoscopic visible light spectroscopy: comparison with microvascular oxygen tension measurements in a porcine model

Background Visible light spectroscopy (VLS) is a technique used to measure the mucosal oxygen saturation during upper gastrointestinal endoscopy in order to evaluate mucosal ischemia; however, in vivo validation is lacking. We aimed to compare VLS measurements with a validated quantitative microvascular oxygen tension (μPO2) measurement technique. Methods Simultaneous VLS measurements and μPO2 measurements were performed on the small intestine of five pigs. First, simultaneous measurements were performed at different FiO2 values (18%-100%). Thereafter, the influence of bile was assessed by comparing VLS measurements in the presence of bile and without bile. Finally, simultaneous VLS and μPO2 measurements were performed from the moment a lethal dose of potassium chloride was injected intravenously. Results In contrast to the μPO2 values, which increased with increasing FiO2, the VLS values decreased. The two measurements correlated poorly, with R2 = 0.39, intercept 18.5, slope 0.41 and a bias of −16%. Furthermore, the presence of bile influenced the VLS values significantly (median (IQR): before bile application 57.5% (54.8-59.0%) versus with bile mixture from the stomach 73.5% (66.8-85.8%), p < 2.2 × 10−16; with bile mixture from the small bowel 47.6% (41.8-50.8%) versus after bile removal 57.0% (54.7-58.6%), p < 2.2 × 10−16). Finally, the VLS mucosal oxygen saturation values did not decrease towards a value of 0 in the first 25 min of asystole, in contrast to the μPO2 values. Conclusions These results suggest that VLS measures the mixed venous oxygen saturation rather than the mucosal capillary hemoglobin oxygen saturation. Further research is needed to establish whether the mixed venous compartment is optimal for assessing gastrointestinal ischemia.

Background Visible light spectroscopy (VLS) is a technique used to measure the mucosal capillary hemoglobin oxygen saturation based on reflectance spectrophotometry [1]. The mucosal oxygen saturation can be calculated from the marked difference between the absorption spectra of oxygenated and deoxygenated hemoglobin. Endoscopic VLS measurements are performed during upper GI endoscopy [2-4]. As determined previously by van Noord et al., measurements are defined as positive for ischemia if the measured saturation is lower than 63% in the antrum of the stomach, lower than 62% in the duodenal bulb and lower than 58% in the descending duodenum [4]. VLS is used in clinical practice in the diagnostic work-up of chronic mesenteric ischemia (CMI). CMI is defined as ischemic symptoms caused by insufficient blood supply to the gastrointestinal (GI) tract [5]. The main cause of CMI is stenosis of one or more mesenteric arteries due to atherosclerosis [6]. Other occlusive causes are external compression of the celiac artery and/or the celiac ganglion by the median arcuate ligament and diaphragmatic crura (median arcuate ligament syndrome, MALS) and mesenteric artery stenosis due to vasculitis. However, CMI can also exist in the absence of mesenteric artery stenosis: non-occlusive mesenteric ischemia (NOMI) is caused by hypo-oxygenation due to underlying conditions such as cardiac and pulmonary insufficiency, spasms of small arteries, shunts, occlusion of smaller arteries, e.g., by micro-emboli, and autonomic dysfunction [7]. The diagnosis of CMI is a clinical challenge because of its diverse presentation.
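Before turning to the diagnostic work-up, the VLS decision rule described above can be written down in a few lines; the thresholds are those reported by van Noord et al. [4], while the function and location names are ours.

```python
# VLS ischemia thresholds from van Noord et al. [4]; a measurement below the
# location-specific cutoff is considered positive for mucosal ischemia.
THRESHOLDS_PCT = {
    "antrum": 63.0,
    "duodenal_bulb": 62.0,
    "descending_duodenum": 58.0,
}

def vls_positive_for_ischemia(location: str, saturation_pct: float) -> bool:
    """True if a VLS mucosal saturation reading is below the cutoff."""
    return saturation_pct < THRESHOLDS_PCT[location]

print(vls_positive_for_ischemia("antrum", 59.0))  # True
```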
Symptoms overlap largely with those of many other disorders, and there is a high prevalence of asymptomatic mesenteric artery stenosis in the general population (3-29% [8,9]) due to the existence of an extensive collateral circulation. However, a mesenteric artery stenosis can become symptomatic if this collateral circulation is insufficient and/or the extent of the stenosis becomes significant. Accurate identification of patients with CMI is important in order to select those patients who will benefit from therapy and to withhold invasive therapy from those who will not. Treatment consists of endovascular revascularization with expandable metal stents or surgical revascularization of the obstructed vessels, both methods that are invasive, costly and not without side-effects. A functional test to determine mucosal ischemia of the GI tract is therefore essential. In the absence of a single specific test for the diagnosis of CMI [10], the diagnosis is established by consensus in a multidisciplinary meeting attended by gastroenterologists, vascular surgeons and interventional radiologists. Symptoms alone do not accurately predict the diagnosis of CMI [7,11,12]. Therefore, the consensus diagnosis is based on the combination of symptoms, imaging of the mesenteric vasculature and functional assessment of mucosal ischemia with gastric-jejunal tonometry [13,14] or VLS [1,4]. The diagnosis is confirmed if successful therapy results in symptom relief. This method for the diagnosis of CMI has an acceptable diagnostic yield [15] and is accepted in the absence of a gold-standard test [10]. Endoscopic mucosal oxygen saturation measurements with VLS are already used in clinical practice to evaluate CMI; however, no extensive validation studies have been performed for this intended use. In the current study, the VLS mucosal oxygen saturation is compared with a validated microvascular oxygen tension (μPO2) measurement technique [16,17]. The microvascular oxygen tension technique used in this study is a palladium (Pd) porphyrin phosphorescence lifetime technique that measures oxygen tension, introduced by Van der Kooi at the end of the 1980s [18]. Palladium porphine (Pd-porphyrin) bound to albumin has become a standard phosphorescent dye for μPO2 measurements in vivo [16,17]. This quantitative measurement is also located in the microcirculation, making it a convenient comparator for the mucosal oxygen saturations measured with VLS. The objective of this study was to validate the VLS technique. This validation consisted of three experiments in a porcine model: (1) comparison of VLS mucosal oxygen saturation and μPO2 measurements at different levels of FiO2, (2) VLS mucosal oxygen saturation measurements in the presence of bile and (3) comparison of VLS mucosal oxygen saturation and μPO2 measurements during asystole.

Ethical statement This study was approved by the local Animal Research Committee of the Erasmus MC University Medical Center in accordance with the National Guidelines for Animal Care and Handling (protocol number DEC 129-13-06 EMC3185). To enhance transparency, this article is written according to the ARRIVE guidelines for animal research [19].

Experimental animals In total, 5 female crossbred Landrace x Yorkshire pigs, with a mean body weight of 28.1 ± 0.6 kg (mean ± standard error of the mean) and an age of 2-3 months, were used for the experiments.
Sample size calculation determined that 5 animals were sufficient to detect a difference of at least 5% in mucosal saturation measured with VLS before and after bile per location, with an alpha of 0.05 and a power of 90% [20]. Pressure-controlled mechanical ventilation (Servo 300; Siemens-Elema, Solna, Sweden) was performed with a FiO2 of 24% and a positive end-expiratory pressure of 5 cm H2O while no intervention was being performed. Normothermia, measured nasally, was maintained between 38 and 39 °C with two heating pads underneath and an electric heating blanket above the animal. Furthermore, heart rate, MAP, SpO2, and temperature were monitored continuously throughout the entire experiment. Arterial blood samples were collected to determine the arterial oxygen pressure and arterial oxygen saturation (ABL 800Flex; Radiometer, Denmark). A 4F thermodilution catheter (Pulsion Medical Systems AG, München, Germany) was placed in the left femoral artery for arterial blood sampling. A 9Fr introducer sheath (Arrow International Inc., USA) was placed in the right jugular vein for infusion of palladium porphyrin. Both catheters were placed using the Seldinger technique. A lower midline abdominal incision was made to insert a cystostomy tube into the urinary bladder with purse-string sutures for urine collection. The animals were placed in the supine position and an incision was made to open the abdomen. A small intestinal loop was dissected and a small incision was made at the non-vascularized side to expose the intestinal mucosa (Fig. 1). Mucosal oxygen saturation measurements were performed with a fiberoptic probe (Endoscopic T-Stat Sensor; Spectros, Portola Valley, California, USA) connected to the VLS oximeter (T-Stat 303 Microvascular Oximeter, Spectros, Portola Valley, California). Microvascular oxygen tension measurements were done with the oxygen-dependent phosphorescent dye palladium porphine (Pd-porphyrin). Palladium porphyrin is a large molecule with optical properties that allow it to absorb energy and react with oxygen. In the absence of oxygen, it releases the energy absorbed from an excitation source via phosphorescent light with a specific decay time, i.e. lifetime. The lifetime is related to the amount of oxygen surrounding the Pd-porphyrin, as described by the Stern-Volmer relation [18]. It has been tested for pH, temperature, and diffusivity dependency [17]. Calibration experiments have established an O2 accuracy of 5%, independent of the phosphorescence intensity itself [17]. For the laboratory experimental setup of the μPO2 measurements, the excitation source was an Opolette 355-I tunable laser (Opotek, Carlsbad, CA, USA) set to a wavelength of 524 nm. An optical fiber developed by TNO and produced by Light Guide Optics was used that fits through the working channel of a gastroduodenal endoscope. It has one centrally located excitation fiber with several surrounding detection fibers. The phosphorescence was collected with a gated micro channel plate photomultiplier tube (MCP-PMT R5916U series, Hamamatsu Photonics, Hamamatsu, Japan). Phosphorescence lifetime analysis was done with custom software written in LabVIEW (version 13.0, National Instruments, Austin, TX, USA). For a detailed description of the setup we refer the reader elsewhere [21]. The probe palladium porphyrin was Pd(II) meso-Tetra (4-carboxyphenyl)porphine (80 mg/animal) (Frontier Scientific, Logan, USA) dissolved in 1 ml DMSO and TRIS Trisma ® Base (Sigma, St.
Louis, MO), and was combined with a 4% bovine serum albumin solution dissolved in phosphate-buffered saline. This method has been validated in vitro and in vivo [17]. Pd-porphyrin bound to albumin forms a high-molecular-weight complex, confining it mainly to the vascular compartment when infused intravenously. Both optical fibers were fixed together to perform stable simultaneous mucosal oxygen saturation and μPO2 measurements of the same mucosal spot of the small intestine (Fig. 1). Mucosal oxygen saturation versus μPO2 measurements at different FiO2 values Simultaneous VLS mucosal oxygen saturation and μPO2 measurements were performed at different FiO2 values ranging from 18 to 100%. The mucosal oxygen saturation and μPO2 measurements were performed simultaneously for 2 min at a specific FiO2 value. When a new FiO2 value was set, 2 min were allowed to elapse before a new set of measurements was started. To compare the two measurement techniques, the μPO2 was converted into a corresponding saturation: for every measured value in mmHg, the corresponding percentage, termed the converted microvascular oxygen saturation (μSO2.converted), was calculated. The conversion can be found in Fig. 2. Influence of bile on mucosal oxygen saturation Furthermore, the influence of bile on mucosal oxygen saturation values measured with VLS was assessed. Mucosal oxygen saturation measurements were performed on the small intestinal mucosa in the presence of bile. Two different types of bile were used: fluid obtained during upper GI endoscopy from the stomach of the animal and fluid obtained from the small intestine of the animal. The sticky viscosity of the bile ensured its fixation on the measurement area, and continuous visual confirmation ensured that the bile measurements were performed on a surface covered with bile. The amount of bile applied to the mucosa, the thickness of the applied layer, and the exact content of the applied bile were not controlled. The mucosal oxygen saturations in the presence of bile were compared with the mucosal oxygen saturations before the bile was applied to the mucosa (baseline) and with the mucosal oxygen saturations after each removal of the bile with saline, as a control. For every step, approximately 30 measurements were done. Mucosal oxygen saturation versus μPO2 during asystole Finally, simultaneous mucosal oxygen saturation and μPO2 measurements were performed from the moment a lethal dose of potassium chloride was injected intravenously. A measurement period of 25 min after injection was considered long enough to ensure a steady state, since Benaron et al. showed detection of local ischemia with VLS within 120 s [22]. Experimental outcomes Mucosal oxygen saturation values were expressed as percentage tissue hemoglobin saturation. The μPO2 measurements were expressed in mmHg. Analytical and statistical methods Statistical analysis was performed with R Statistics software (v3.2.4). Normal distribution was assessed visually and with the Shapiro-Wilk normality test. Normally distributed data are presented as mean ± standard deviation (SD), and non-normally distributed data are presented as median with interquartile range (IQR). A linear regression model was used for the FiO2, mucosal oxygen saturations, and μPO2. A scatter plot was used to show the mucosal oxygen saturation versus the μPO2 measurements at different FiO2 values. To compare the two measurement techniques, the μPO2 was converted from mmHg to % porcine hemoglobin saturation.
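The exact conversion used in this study is shown in Fig. 2. Purely as an illustration of the principle, a Hill-type oxyhemoglobin dissociation curve could be sketched in R as follows; the P50 and Hill coefficient below are generic placeholders, not the porcine-specific values applied in the study:

```r
# Sketch: map an oxygen tension (mmHg) to a hemoglobin saturation (%) via a
# Hill-type dissociation curve. p50 and n are illustrative placeholders only.
po2_to_so2 <- function(po2, p50 = 35, n = 2.8) {
  100 * po2^n / (po2^n + p50^n)
}

po2_to_so2(c(20, 40, 60, 100))  # higher oxygen tension yields higher saturation
```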
To determine the saturation, a porcine-specific hemoglobin saturation formula published by Serianni et al. was used [23]. To compare the saturations, the difference in measurement frequency had to be overcome: the mucosal oxygen saturation has a fixed measurement interval, whereas the μPO2 is measured on demand. To compare the two measurements equally, the mucosal oxygen saturation was averaged over the same period as each μPO2 measurement. Thereafter, these results were visualized with linear regression and with a Bland-Altman comparison plot [24]. The Wilcoxon signed-rank test was used to compare the measurements before, during, and after application of bile. A two-tailed p value of < 0.05 was considered significant. After the potassium chloride injection, mucosal oxygen saturation measurements were compared with μPO2. Because VLS measures every second, a symmetrical moving average of 20 samples was taken to smooth the data; for example, the eleventh sample is the average of the 20 samples centered on it.
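As a compact sketch of the comparison steps just described (the paired vectors vls and upo2_sat below are hypothetical stand-ins for the real measurement series):

```r
# Hypothetical paired measurement series (% saturation) for illustration.
set.seed(1)
vls      <- rnorm(30, mean = 55, sd = 5)
upo2_sat <- vls + rnorm(30, mean = 16, sd = 6)

# Linear regression of one technique on the other
fit <- lm(vls ~ upo2_sat)
summary(fit)$r.squared

# Bland-Altman bias and 95% limits of agreement
d <- vls - upo2_sat
c(bias = mean(d), lower = mean(d) - 1.96 * sd(d), upper = mean(d) + 1.96 * sd(d))

# Symmetric 20-sample moving average, as used to smooth the 1-Hz VLS trace
smooth_vls <- stats::filter(vls, rep(1 / 20, 20), sides = 2)

# Paired Wilcoxon signed-rank test, as used for the bile comparisons:
# wilcox.test(before_bile, with_bile, paired = TRUE)
```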
Baseline data All 5 animals were in good clinical condition before the start of the experiment. Mucosal oxygen saturation versus μPO2 measurements at different FiO2 values The mucosal oxygen saturation levels and the μPO2 levels at different values of FiO2 were measured in 5 animals. The mucosal oxygen saturation decreased with increasing FiO2, in contrast to the μPO2 values, which increased with increasing FiO2. The spread of the mucosal oxygen saturation levels and the FiO2 levels was large, as shown in Fig. 3. Figure 4a shows the correlation between mucosal oxygen saturation and the converted μPO2 saturation. There is a poor linear correlation, with r2 = 0.39, an intercept of 18.5%, and a slope of 0.41. The Bland-Altman plot (Fig. 4b) likewise shows poor agreement, with a mean difference of −16%. As the saturation increases, the mucosal oxygen saturation underestimates the saturation even more. Figure 5 shows the mucosal oxygen saturation measurements without the presence of bile, with the presence of a bile mixture from the stomach, with the presence of a bile mixture from the small bowel, and after removal of the bile mixtures, measured in a total of 2 animals. The mucosal oxygen saturation measurements before application of the bile mixtures and after the bile mixtures were removed were not significantly different (mucosal oxygen saturation before application of bile mixture, median (IQR), 57.5% (54.8–59.0%) versus mucosal oxygen saturation after removal of the bile mixture 57.0% (54.7–58.6%), p = 0.2743). However, a significant increase in the mucosal oxygen saturation was seen when the bile mixture from the stomach was applied, compared to the mucosal oxygen saturation before application of the bile mixtures (median mucosal oxygen saturation with mixture from the stomach (IQR) 73.5% (66.8–85.8), p < 2.2 × 10−16). When the bile mixture from the small bowel was applied, the mucosal oxygen saturation was significantly lower, with a median (IQR) of 47.6% (41.8–50.8), p < 2.2 × 10−16, compared to mucosal oxygen saturation measurements with the bile mixture from the stomach, and the mucosal oxygen saturation increased significantly after the bile mixtures had been removed (p < 2.2 × 10−16). Mucosal oxygen saturation versus μPO2 during asystole The mucosal oxygen saturation measurements and μPO2 measurements during at least the first 25 min of asystole in 5 animals are shown in Fig. 6. In all 5 animals the μPO2 measurements decreased towards a value of 0. The mucosal oxygen saturation measured with VLS decreased and increased variably during the measurement period, and the mucosal oxygen saturation never reached a stable state around 0%. Adverse events No adverse events occurred during the 5 porcine experiments. Discussion In this study we validated mucosal oxygen saturation measurements by comparing VLS with calibrated μPO2 measurements. This study showed that the mucosal oxygen saturation values decreased with increasing FiO2, in contrast to the μPO2 values, which increased with increasing FiO2, with a large spread of the measured mucosal oxygen saturation levels and FiO2 levels and a poor linear correlation. Furthermore, a significant influence of bile on the mucosal oxygen saturation values was shown. Finally, this study showed that the mucosal oxygen saturation values, in contrast to the μPO2 values, did not decrease towards a value of 0 in the first 25 min of asystole. The observed inverse relationship between the mucosal oxygen saturation measured by VLS and FiO2 is remarkable. Mucosal oxygen saturations measured with VLS would be expected to increase with increasing FiO2 if VLS measured the capillary oxygen saturation level. However, VLS measures not only arterial saturation but also a large venous compartment. If a large mixed venous saturation determines the overall saturation value, the influence of FiO2 is expected to be minimal. Potentially, due to hyperoxic vasoconstriction, the actual venous saturation can decrease further compared with normoxic conditions. The high FiO2 values will, by contrast, be registered by the μPO2. Furthermore, the measured values, both VLS and μPO2, show a large spread. Possibly, the oxygen tension was highly variable in the gastrointestinal vessels, as intestinal ischemia is also patchily and heterogeneously distributed [5]. During the experiment, the hemodynamic state of the animals worsened with the cumulative experimental handling, also contributing to the large spread of measured values. A significant influence of bile on the mucosal oxygen saturation values measured with VLS was confirmed. For this reason, it is advised, and stated in the instructions for use, to remove any bile remnants before the start of the VLS measurements. Bile has its own light absorption spectrum. It absorbs light at the same wavelengths as oxyhemoglobin and deoxyhemoglobin [25], thereby influencing the calculated mucosal oxygen saturation. The amount of bile applied to the mucosa, the thickness of the applied layer, and its exact content were not controlled in this experiment. However, these factors contribute to the light absorption by the bile and thus influence its effects on the VLS signal. Therefore, we advise removing any fluid from the measuring area of the GI mucosa before the VLS measurements. The idea that VLS measures mixed venous oxygen saturation is further supported by the fact that VLS still measured a substantial oxygen saturation 25 min after asystole. The saturation in the capillaries decreases towards zero over time owing to diffusion of oxygen towards the still oxygen-consuming cells. In the venous compartment, however, the oxygen desaturates slowly because of the large buffer capacity. Therefore, the oxygen saturation does not decrease towards zero immediately after asystole. Dips in oxygen saturation are seen in the mixed venous compartment measured by VLS, as shown in Fig. 6, due to spasm in the supplying arteries. After such a peristaltic contraction, the blood flow stabilizes and no decrease in saturation is seen.
VLS is a powerful technique to measure oxygen saturation at a microvascular level. In the microvasculature, hemoglobin (oxyhemoglobin/deoxyhemoglobin) is proportionally located mainly in the venous compartment. Therefore, the saturation measured by VLS mainly represents the venous compartment. For detection of an oxygen transport problem that results in ischemia, the microvascular arterial saturation is of importance, a compartment that is underrepresented in the VLS signal. This is endorsed by the fact that, after a lethal potassium chloride injection, which is an exaggerated model of instant ischemia, the VLS saturation does not drop, in contrast to the μPO2. This study has some limitations. First, the experiments performed in this study were designed with generalizability to humans in mind. However, to enable stable oxygen saturation measurements with VLS and μPO2 of the mucosa of the small intestine of a pig, the abdomen had to be opened to expose the small intestinal loop. The mucosa of this small intestinal loop was exposed to room air and room temperature. This results in oxygen diffusion into the tissue and a rapid decrease in the temperature of the exposed tissue. Furthermore, the abdominal anatomy of a pig differs from the human abdominal anatomy. The GI tract of a pig is monogastric like the human GI tract; however, the colon lies in a spiral. The mesenteric vascularization in humans consists of individually variable mesenteric vessel formations, with arcades, lateral branches, and anastomoses in the bowel wall [26]. The mesenteric vascularization in pigs consists of bundles of vessels branching off the main stem, arising from the mesentery and passing directly into the bowel wall without any arcade formation [26]. Conclusion This study showed that VLS measures the mixed venous hemoglobin oxygen saturation and not the mucosal capillary hemoglobin oxygen saturation. The presence of bile significantly influences the oxygen saturation levels measured with VLS. VLS is currently used in clinical practice in the clinical work-up of CMI. Further research is needed to establish whether the mixed venous compartment is optimal for mucosal hemoglobin saturation measurements to assess GI ischemia.
Incorporating Statistical Test and Machine Intelligence Into Strain Typing of Staphylococcus haemolyticus Based on Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry Staphylococcus haemolyticus is one of the most significant coagulase-negative staphylococci, and it often causes severe infections. Rapid strain typing of pathogenic S. haemolyticus is indispensable in modern public health infectious disease control, facilitating the identification of the origin of infections to prevent further infectious outbreaks. Rapid identification enables the effective control of pathogenic infections, which is tremendously beneficial to critically ill patients. However, the existing strain typing methods, such as multi-locus sequencing, are relatively costly and comparatively time-consuming. A practical method for the rapid strain typing of pathogens, suitable for routine use in clinics and hospitals, is still not available. Matrix-assisted laser desorption ionization-time of flight mass spectrometry combined with machine learning approaches is a promising method for rapid strain typing. In this study, we developed a statistical test-based method to determine the reference spectrum when aligning mass spectra datasets, and constructed machine learning-based classifiers for categorizing different strains of S. haemolyticus. The area under the receiver operating characteristic curve and the accuracy of multi-class predictions were 0.848 and 0.866, respectively. Additionally, we employed a variety of statistical tests and feature-selection strategies to identify the discriminative peaks that substantially contribute to strain typing. This study not only incorporates statistical test-based methods to manage the alignment of mass spectra datasets but also provides a practical means to accomplish rapid strain typing of S. haemolyticus. INTRODUCTION Staphylococcus haemolyticus is one of the most significant species among the coagulase-negative staphylococci (CoNS), whose main ecological niches are the skin and the mucous membranes of humans and animals (Becker et al., 2014). They are often the causative agents of septicemia, peritonitis, otitis, and urinary tract infections. In particular, the multidrug resistance of this species, including its early acquisition of resistance to methicillin and to various glycopeptide antibiotics, has troubled patients for many years (Froggatt et al., 1989; Hiramatsu, 1998). Strain typing of pathogenic S. haemolyticus forms an important part of the response to modern public health infectious disease outbreaks (MacCannell, 2013). For example, an outbreak of S. haemolyticus was reported to be the cause of burn wound infections after a serious explosion event in Taiwan during June 2015 (van Duin et al., 2016; Chang et al., 2018). Rapid typing of S. haemolyticus facilitates the identification of the origin of infection and allows rapid infection control when patients are critically ill. Consequently, a cost-effective and rapid identification strategy that targets strain typing issues is essential and needs to be incorporated into routine clinical microbiology laboratory practices. Whole-cell matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) is widely used in clinical microbiology laboratories worldwide. This is because MALDI-TOF MS allows rapid, reliable, and cost-effective identification of bacterial species (Vrioni et al., 2018; Wang et al., 2018c).
The MALDI-TOF mass spectrum contains extensive information regarding the matter that constitutes microorganisms. In addition to the identification of bacterial species, MALDI-TOF MS has the potential to allow strain typing and/or antibiotic resistance profiling with high accuracy when machine learning methods are also implemented (Croxatto et al., 2012; Mather et al., 2016). Compared to other strain typing methods, such as pulsed-field gel electrophoresis and multi-locus sequence typing (MLST), analysis by MALDI-TOF MS to determine strain type is advantageous owing to its lower cost and rapid turnaround time (Wang et al., 2018b). Strain typing via MALDI-TOF MS is promising; however, the subtle differences between the MALDI-TOF MS spectra of different strains have hindered the introduction of this type of analysis in a clinical context in the absence of computational methods (Sandrin et al., 2013; Camoez et al., 2016). Numerous methods have been developed in recent years to overcome this drawback in strain typing by MALDI-TOF spectrum analysis. The visual examination of a MALDI-TOF pseudo-gel or spectrum to pinpoint strain-specific peaks has been implemented by some research groups (Wolters et al., 2011; Josten et al., 2013). Visual examination of MALDI-TOF MS spectra is easy in practice, but the analytical accuracy is highly dependent on the operator. Inter-batch and/or intra-batch analytical variation is extremely likely. Moreover, visual examination of a MALDI-TOF MS spectrum or pseudo-gel is labor-intensive. Analyzing complex proteomic data, such as those obtained by MALDI-TOF MS, by visual examination often does not attain the appropriate level of precision, adequate objectivity, and/or a high enough throughput. With the rapid advancements in artificial intelligence, machine learning-based methods have been implemented to identify classifiers for such classification problems (Mather et al., 2016; Wang et al., 2018b). More specifically, logistic regression (LR), the support vector machine (SVM), the decision tree (DT), the random forest (RF), and the k-nearest neighbor (KNN) approaches have been widely implemented to build classifier model systems. In recent years, the application of machine learning-based methods in the field of medicine has received considerable attention, and several studies have demonstrated that the use of artificial intelligence to analyze complex data in medical practice is apposite and promising (Shameer et al., 2018; Hannun et al., 2019). Specifically, machine learning-based classifiers allow professional diagnosis of retinopathy (Gulshan et al., 2016), can be used to analyze electrocardiography data (Hannun et al., 2019), and have been used to predict the prognoses of diseases (Wang et al., 2016; Yu et al., 2016; Lin et al., 2018). In addition to image analysis, applying machine learning-based methods to proteomic studies, specifically MALDI-TOF MS investigations, has assisted in attaining high accuracy in the prediction of strain type and/or strain antibiotic resistance (Wang et al., 2018a,b,c). Machine learning-based methods are able to utilize the signal intensities of specific peaks in their predictions, and this provides additional and richer information than the traditional approach based on the presence or absence of peaks (Walker et al., 2002; Wolters et al., 2011; Lasch et al., 2014).
In addition to providing robust prediction accuracy, machine learning-based methods, when analyzing MALDI-TOF MS data, are also able to generate sets of discriminative peaks that are essential for accurate prediction. These specific sets of discriminative peaks can be used to pinpoint the possible combinations of molecules that are responsible for the various strain types and the variation in drug resistance profiles (Vrioni et al., 2018). As mentioned previously, the slight differences in MALDI-TOF MS results among different strains should be considered critical when preprocessing the spectral data. Specifically, the determination or extraction of representative features is essential before constructing the classifiers. Yet little research has been done to develop a definitive strategy to solve such issues, let alone one incorporating statistical tests. In this study, we first developed a statistical test-based strategy for dealing with the alignment issue for MALDI-TOF MS according to the mass-to-charge ratio (m/z) values, and further considered the signal intensity to construct the classification models. Various machine learning algorithms were trained and validated with the aim of discriminating ST3, ST42, and the various other STs of S. haemolyticus. We also investigated the discriminative peaks that are central to strain typing of S. haemolyticus with MALDI-TOF MS. This approach will not only be beneficial in rapid outbreak control for S. haemolyticus infection but also provide a definite strategy for preprocessing the spectral data. Bacterial Isolates A total of 254 unique S. haemolyticus isolates were collected at Chang Gung Memorial Hospital, Linkou branch, Taiwan. The period of collection was between June and November 2015, during which a significant number of burn patients were admitted to the hospital. The isolates were stored at −70 °C until use. This was a retrospective study investigating the relation between MS spectra and microbial strain typing. No diagnosis or treatment was involved in the study. Waiver of informed consent was approved by the Institutional Review Board of Chang Gung Medical Foundation (No. 201600049B0). Analytical Measurement of MALDI-TOF MS To carry out the analysis, we initially cultivated the isolates on blood agar plates (Becton Dickinson, MD, USA) in a batch manner. The isolates were cultured in a 5% CO2 incubator for 16 h. We then conducted the analytical measurements required for MALDI-TOF MS following the manufacturer's instructions. First, we picked a single colony from a blood agar plate and spread it onto a steel target plate as a thin film (Bruker Daltonik GmbH, Bremen, Germany). One µl of 70% formic acid (Bruker Daltonik GmbH, Bremen, Germany) was then applied onto the steel target plate, followed by air-drying at room temperature. One µl of matrix solution (Bruker Daltonik GmbH, Bremen, Germany) was then added. After this sample preprocessing, a MicroFlex LT mass spectrometer (Bruker Daltonik GmbH, Bremen, Germany) operating in linear positive mode was used for data acquisition. For each batch, a Bruker Daltonics Bacterial Test Standard (Bruker Daltonik GmbH, Bremen, Germany) was analyzed to allow calibration. Each isolate was sampled with 240 laser shots (20 Hz). The MALDI-TOF MS spectra were analyzed using Biotyper 3.1 software (Bruker Daltonik GmbH, Bremen, Germany). The analytical range of each spectrum was 2,000–20,000 m/z.
S. haemolyticus identification was set at high confidence (score > 2 in the reports of Biotyper 3.1 software). Furthermore, FlexAnalysis 3.3 (Bruker Daltonik GmbH, Bremen, Germany) was also implemented to acquire the numerical spectral data derived from MALDI-TOF MS. Specifically, the original signals were smoothed by the Savitzky-Golay algorithm and their baselines were subtracted by the top hat method. Meanwhile, some thresholds adopted to extract reasonable peaks were set up as follows: the signal-to-noise ratio was 2, the relative intensity and minimum intensity were both 0, the maximal number of peaks was 200, the peak width was 6, and the height was 80%. On the basis of the single measurements, we hypothesized that strain typing of S. haemolyticus is possible when the variability issue is handled using information engineering technology. Multilocus Sequence Typing of S. haemolyticus We determined the strain types of S. haemolyticus by sequencing seven housekeeping genes, namely arc, SH1200, hemH, leuB, SH1431, cfxE, and RiboseABC (Panda et al., 2016). The sequencing results of these genes were used to assign the sequence types of S. haemolyticus throughout the present analysis using the MLST database (https://pubmlst.org/shaemolyticus/) powered by the BIGSdb genomics platform (Jolley et al., 2018). MS Data Preprocessing for Classifier Construction Several computational tools have been developed for the preprocessing and extraction of features from MS data (Wong et al., 2005; Mantini et al., 2007; Gibb and Strimmer, 2012). More specifically, spectral data preprocessing transforms a set of raw spectra into a numerical table that includes mass-to-charge (m/z) values with an associated intensity for each isolate. Generally, m/z values with adequate intensities are considered the fingerprint signatures when using spectral data, and these can be extracted to build models for discriminating different subgroups. Note that each peak has an m/z value. As a result, a valuable analysis depends highly on the appropriate use of preprocessing techniques. The MS data derived from FlexAnalysis 3.3 were of high quality, but their resulting peaks were not aligned within the dataset. Meanwhile, the aforementioned tools lack specific guidance on the choice of the reference spectrum when implementing the alignment of peaks. Therefore, we developed a statistical test-based method for determining the reference spectrum within a given dataset, and then aligned the peaks to it. The reference spectrum should be capable of discriminating between different subgroups within a dataset. Consequently, in this study we mainly focused on determining what pattern of peaks in the reference spectrum can indicate the differences among the groups. For each spectrum, we first rounded each m/z value to the nearest whole number, and then all peaks that occurred were used to form a set called the candidate peaks set (CPS). The peaks in the CPS were then sorted into ascending order. Given a tolerance value, whenever the distance between adjacent peaks in the CPS is lower than or equal to the tolerance value, the peak with the higher difference in occurring ratio is retained. The difference in occurring ratio for m/z = k (in Dalton, Da) is defined as

$$D_k = \left|\frac{x_1}{n_1}-\frac{x_2}{n_2}\right| + \left|\frac{x_1}{n_1}-\frac{x_3}{n_3}\right| + \left|\frac{x_2}{n_2}-\frac{x_3}{n_3}\right|,$$

where x1, x2, and x3 are the counts that are aligned to m/z = k, and n1, n2, and n3 are the numbers of isolates for ST3, ST42, and the other ST types, respectively.
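A minimal sketch of this merging step in R follows; the function and variable names are hypothetical, and the handling of chained merges may differ in detail from the exact implementation used in the study:

```r
# Merge adjacent peaks closer than `tol` Da, retaining the peak with the
# larger difference in occurring ratio (D_k). `cps` must be sorted ascending
# and `D` holds the D_k value of each peak in `cps`.
reduce_peaks <- function(cps, D, tol) {
  keep_mz <- cps[1]
  keep_D  <- D[1]
  for (i in seq_along(cps)[-1]) {
    last <- length(keep_mz)
    if (cps[i] - keep_mz[last] <= tol) {
      # Within tolerance: keep whichever of the two peaks has the larger D_k
      if (D[i] > keep_D[last]) {
        keep_mz[last] <- cps[i]
        keep_D[last]  <- D[i]
      }
    } else {
      keep_mz <- c(keep_mz, cps[i])
      keep_D  <- c(keep_D, D[i])
    }
  }
  keep_mz
}

reduce_peaks(cps = c(2530, 2532, 2610), D = c(0.4, 0.9, 0.2), tol = 5)
# returns c(2532, 2610): 2530 and 2532 fall within tolerance, and 2532 wins on D_k
```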
For example, suppose that the tolerance value is 1 Da and the CPS is sorted in ascending order; any adjacent peaks whose distance is at most 1 Da are merged, retaining the peak with the larger difference in occurring ratio. The set of peaks retained at the end of this procedure (the RPS) is the reference spectrum and feature set used to construct the classification models. To analyze the common peaks across the datasets given in this study, we employed Fisher's exact test (Raymond and Rousset, 1995) to determine a tolerance value for constructing the RPS, owing to the relatively small sample sizes. For each tolerance value, there are three p-values, determined by comparing ST3 with ST42, ST3 with the other ST types, and ST42 with the other ST types. As mentioned previously, the reference spectrum should be capable of discriminating between different subgroups within a dataset, and the tolerance value can be adopted according to its ability to separate these three groups. Therefore, the tolerance value was selected so that the obtained reference spectrum would produce the largest number of p-values less than 0.001. We then further adopted repeated 5-fold cross validation to demonstrate the efficiency of the determined tolerance value. Note that the determination of the CPS and RPS was based on the training data when the repeated 5-fold cross validation was used. In other words, the repeated 5-fold cross validation was implemented here to simulate an external validation for evaluating the performance of the determination of the reference spectrum. The flowchart of preprocessing is shown in Figure 1. After determining the RPS, the alignment of the m/z values with their intensities is another critical part of the process, whereby the strength of the signal at a specific m/z is determined. In these circumstances, it is straightforward to move the specific m/z value of an isolate to the closest one in the RPS. As the tolerance value increases, more than one m/z value might be aligned to the same specific m/z in the RPS. In this situation, the intensity whose own m/z has the minimum distance to the specific m/z is preserved. Hence, duplication problems can be solved. For instance, if both m/z = 2530 Da and m/z = 2535 Da in a spectrum are aligned to 2532 Da, which is a member of the RPS, the intensity at m/z = 2530 Da is used to represent the strength of the signal at 2532 Da. Supplementary Figure 1 illustrates how this alignment takes place. Development of Machine Learning-Based Classifiers In this study, we implemented four machine learning methods: multiple logistic regression (MLR), support vector machine (SVM) learning, decision tree (DT) learning, and random forest (RF) learning, to construct the strain type classifiers for S. haemolyticus using R software (version 3.5.1, R Foundation for Statistical Computing, https://www.r-project.org/). MLR is a basic parametric model used in dealing with this type of classification problem. The primary objective of an SVM is to find a hyperplane that is able to segregate different classes of data, and it is therefore commonly used to solve classification problems. DT and RF are both non-parametric tree-based strategies. Owing to the small size of the data, the unsophisticated structure of a DT can help us interpret the important features of the data more clearly. On the other hand, RF can provide evaluation metrics for the features and is thus able to identify the important features used during model construction. The glmnet package (Friedman et al., 2010) of R was applied during this study to construct the MLR model.
More specifically, the MLR model can be defined as

$$P(G = k \mid x) = \frac{e^{\beta_{0k} + x^{\top}\beta_{k}}}{\sum_{l=1}^{K} e^{\beta_{0l} + x^{\top}\beta_{l}}}, \qquad k = 1, \ldots, K,$$

where K is the number of levels of the response variable and G ∈ {1, 2, ..., K} is the set of levels. Note that this parameterization is not estimable, because adding a constant to the coefficients of every class yields identical probabilities. However, regularization is able to deal with this. Hence, the MLR model can be obtained by maximizing the penalized log-likelihood

$$\max \left\{ \frac{1}{N} \sum_{i=1}^{N} \log p_{g_i}(x_i) - \lambda \sum_{k=1}^{K} P(\beta_k) \right\},$$

where p_j(x_i) = P(G = j | x_i), g_i ∈ {1, 2, ..., K} is the ith response, and P(·) is the regularization penalty. Therefore, MLR-based classifiers can be constructed by adopting this package. The SVM classifier was built using the e1071 package (Chang and Lin, 2011). In this package, the multi-class problem is approached via the "one-against-one" approach (Knerr et al., 1990). Consequently, K(K−1)/2 classifiers need to be constructed for K classes. In this study, the SVM-based classifier therefore comprised three pairwise classifiers, owing to the presence of three classes. More precisely, for classes i and j, the corresponding training data were used to solve the following two-class classification problem:

$$\min_{w^{ij},\, b^{ij},\, \xi^{ij}} \ \frac{1}{2}\left\|w^{ij}\right\|^{2} + C \sum_{t} \xi_{t}^{ij},$$

subject to $(w^{ij})^{\top}\phi(x_t) + b^{ij} \geq 1 - \xi_t^{ij}$ if $x_t$ is in class i, $(w^{ij})^{\top}\phi(x_t) + b^{ij} \leq -1 + \xi_t^{ij}$ if $x_t$ is in class j, and $\xi_t^{ij} \geq 0$. Following this, a voting strategy is adopted; the class with the maximum number of votes is considered the most probable one. The DT-based classifier was implemented using the caret package (Therneau and Atkinson, 2018) of R. Specifically, the package mainly provides classification and regression trees (CART). Furthermore, the randomForest package (Liaw and Wiener, 2002) of R was also employed in this study to construct a random forest-based classifier. The package mainly provides an R interface to a Fortran program developed by Breiman (2001). Ensemble learning and bagging are the two important concepts used when creating random forests. Furthermore, a random forest is a classifier consisting of a collection of tree-structured classifiers (Breiman, 2001). Therefore, according to the voting results, the prediction for a specific sample can be obtained. In addition, RF provides functions that allow the evaluation of the effect of features during model construction. The mean decrease in accuracy and the mean decrease in node impurity are provided by the randomForest package (Liaw and Wiener, 2002). Note that the impurity is defined as $\sum_{i} p_i (1 - p_i)$, where $p_i$ is the probability of correct classification.

FIGURE 1 | Flowchart of preprocessing of spectral data, given a tolerance value of 5. The incidence ratio was determined by the number of isolates among the CPS; D_k was defined as the total difference between the incidence ratios.

In addition to the aforementioned multiclass classification approaches, we also adopted these methods for binary classification in order to better distinguish ST3 and ST42. The same packages were implemented for this process, but in this case using the binary option. For instance, logistic regression (LR) was used to construct the binary classification model using the glmnet package (Friedman et al., 2010). Similarly, for SVM, DT, and RF, the same packages were adopted.
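As a compact illustration of the model-construction step with the packages named above (the data objects X, a numeric matrix of RPS peak intensities, and y, a three-level factor of ST types, are hypothetical stand-ins):

```r
library(glmnet)
library(e1071)
library(randomForest)

# Hypothetical data: rows = isolates, columns = intensities at the RPS peaks
set.seed(42)
X <- matrix(runif(254 * 583), nrow = 254)
y <- factor(sample(c("ST3", "ST42", "other"), 254, replace = TRUE))

# Regularized multinomial logistic regression (MLR)
mlr <- cv.glmnet(X, y, family = "multinomial")

# Support vector machine; e1071 handles multi-class one-against-one internally
svm_fit <- svm(X, y, kernel = "radial")

# Random forest with feature importance (mean decrease in accuracy and Gini)
rf_fit <- randomForest(X, y, importance = TRUE)
head(importance(rf_fit))
```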
Statistical Analysis It is important to note that we were concerned not only with the frequency of the peaks, but also with the intensity of a specific peak among the multiple spectra, which is also critical in discriminating these three groups. Therefore, in order to compare the differences in intensities of specific peaks among these three groups, the Kruskal-Wallis test (Kruskal and Wallis, 1952) and Kendall's tau coefficient (Kendall, 1938) were both adopted in this study. Moreover, to quantify the ability of an individual peak to distinguish between the three groups, the area under the receiver operating characteristic curve (AUC) was taken into consideration. Note that, to deal with multi-class performance evaluation, the pROC package (Robin et al., 2011) in R was implemented in order to obtain an estimate of the multi-class AUC (Hand and Till, 2001). When comparing the difference between two independent samples, the Wilcoxon rank-sum test was employed, and it was also used to compare cross-validation performances. To find the optimal cut-off points for each ROC curve during binary classification, the OptimalCutpoints package (López-Ratón et al., 2014) was applied. Evaluation Metrics of the Classifiers To evaluate the performance of the classifiers constructed by the aforementioned machine learning methods, the stratified 5-fold cross-validation technique was implemented. Stratified 5-fold cross-validation first splits the dataset into 5 groups, preserving the percentage of data for each class. Then, one group is left out as the testing dataset, while the remaining groups form the training dataset. The classification model was built on the training dataset and evaluated on the testing dataset. Each group served once as a testing dataset; consequently, we obtained 5 prediction performances for these 5 groups. The average accuracy and AUC over the five testing sets were determined in order to compare the performance when constructing the multiclass classifiers. The AUC was calculated using the pROC package (Robin et al., 2011) in R. By contrast, we used sensitivity, specificity, accuracy, and AUC when evaluating the binary classification performance. More specifically, supposing that the class ST42 is labeled as 1, these metrics are defined as Sensitivity = TP/(TP + FN), Specificity = TN/(TN + FP), and Accuracy = (TP + TN)/(TP + TN + FP + FN), where TP (true positives) is the number of ST42 isolates correctly predicted by the classifier, TN (true negatives) is the number of ST3 isolates correctly predicted by the classifier, FP (false positives) is the number of ST3 isolates incorrectly predicted as ST42, and FN (false negatives) is the number of ST42 isolates incorrectly predicted as ST3. Feature Selection Strategies In addition to applying the importance evaluation from RF, we also developed two strategies, the stepwise strategy and the forward strategy, to find the peaks to be considered as classifier features. More specifically, these two strategies were adopted when constructing the multi-class RF-based classifiers in order to obtain the peaks that are essential for distinguishing these three groups. The stepwise strategy starts with a specific peak, such as the one with the largest AUC, the one with the largest absolute value of Kendall's tau coefficient, and so on. The next peak to be selected must attain the largest AUC or accuracy when combined with the currently selected peak(s), among those peaks that have not yet been selected. The process is then repeated until the AUC or the accuracy does not increase any further. When using the forward strategy, the peaks must first be sorted into a specific order (a code sketch of this strategy is given below).
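A minimal sketch of the forward strategy, here using the out-of-bag accuracy of an RF as the criterion (the helper names are hypothetical, X and y are as in the previous sketch, and the multi-class AUC from pROC::multiclass.roc could be substituted as the criterion):

```r
library(randomForest)

# Out-of-bag accuracy of an RF built on a subset of peak columns
subset_score <- function(cols, X, y) {
  rf <- randomForest(X[, cols, drop = FALSE], y)
  1 - rf$err.rate[nrow(rf$err.rate), "OOB"]
}

# Forward strategy: walk the peaks in a given order, keeping a peak only if
# it improves the criterion over the currently selected set
forward_select <- function(peak_order, X, y) {
  selected <- peak_order[1]
  best <- subset_score(selected, X, y)
  for (p in peak_order[-1]) {
    score <- subset_score(c(selected, p), X, y)
    if (score > best) {
      selected <- c(selected, p)
      best <- score
    }
  }
  selected
}
```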
In this ordering step, the peaks can, for example, be sorted by their AUCs in descending order. The forward strategy then follows this order, adding a new peak if it is able to increase the AUC or accuracy. Otherwise, the peak is not regarded as a helpful feature for constructing the classifier and is thus discarded.

FIGURE 4 | Mass spectra before and after peak alignment. The left panel shows the number of spectra in which specific peaks appear in the original mass spectra, and the right panel shows the same after the alignment strategy with a tolerance value of 5.

The sensitivity of both these strategies depends on the selection of the initial peak. In other words, the first selected peak affects the subsequent peak combinations, and this may produce different performances. Moreover, different criteria are also likely to result in different combinations. In this study, both the AUC and the accuracy are major concerns when building the multi-class classifiers. On the other hand, the balance between sensitivity and specificity also needs to be taken into consideration. Nevertheless, the major aspects of the evaluation still depend on the AUC and the accuracy. Summary Statistics of Spectra Data Among the 254 isolates used in the present study, 62 isolates were ST3, 145 isolates were ST42, and 47 isolates were neither ST3 nor ST42 and formed a separate group of strains. The details of the other ST types are shown in Supplementary Table 1. Given that we aimed to develop and validate a rapid S. haemolyticus strain typing tool, we designed the classes based on the local epidemiology, in which ST3 and ST42 accounted for the majority of strains. In clinical practice, the developed tool would provide preliminary strain typing information, notifying clinical physicians if the isolate of interest is one of the major ST types. When the isolate of interest is classified by the model as a major ST type, an outbreak from a common origin should be suspected and further investigation could be initiated immediately. As noted, this classification was determined by the local epidemiology of S. haemolyticus in Taiwan. Figure 2 demonstrates the data statistics and the distribution of the number of peaks identified for each group. On average, the number of peaks identified in the range 2,000 Da to 17,000 Da was 76.48, with a standard deviation of 13.46. More specifically, the average number of peaks identified for ST3 was 77.03, that for ST42 was 77.68, and that for the other ST types was 72.04. Although the number of peaks identified for the other ST types seemed to be lower than that for the other two groups, the Kruskal-Wallis rank sum test did not show a significant difference among the three groups (p = 0.0586). When spectral signal intensity was examined, the average (standard deviation) normalized intensity across the three groups was 0.16 (0.18). The average normalized intensity of ST3 was 0.13 (0.16), that of ST42 was 0.17 (0.19), and that of the other group of ST types was 0.18 (0.18). The normalized intensity of ST3 seemed to be lower than that of the other two groups, and the result of the Kruskal-Wallis rank sum test showed that there were significant differences among these three groups (p < 0.0001). Determination of Tolerance Value In the previous section, we described the strategy for determining the RPS using Fisher's exact test. Figure 3 demonstrates the proportion of significance for different tolerance values.
More specifically, the proportion of significance was determined by the number of significant comparisons. Note that significance here indicates that the p-value of Fisher's exact test is < 0.0001. When the tolerance value is 5, the proportion of significance is highest. The spectra with and without preprocessing are shown in Figure 4. In addition, Figure 5 demonstrates the performance of the 5-fold cross validation repeated 100 times. Specifically, there were 500 independent test results for ACC and AUC for evaluating whether the tolerance value was robust enough. These results implied that the tolerance value was adequate for further analysis. The AUCs of different classifiers under different tolerance values, shown in Figure 6, demonstrated that an AUC of 0.8 with a low standard deviation could be attained with a tolerance value of 5. Therefore, we used a tolerance value of 5 for the feature selection because of its robustness. Table 1 shows the mean ± standard deviation of the accuracy and AUC values for the 5-fold cross validation using the different machine learning methods. The Wilcoxon rank sum test was then used to compare their performances. It should be noted that the p-value next to the accuracy/AUC column is from the Wilcoxon rank sum test and was employed to compare each method's accuracy/AUC with that of the MLR method on the test data during 5-fold cross validation. Furthermore, we also found that the RF values tended to be robust, showing a lower standard deviation compared to the other methods for the different tolerance values present in Figure 5. Hence, the feature selection strategies, when implemented to find important features, used RF. It should be noted that the number of peaks in the RPS was 583 for a tolerance value of 5, and thus these 583 features were used to construct the multi-class classifiers discriminating the three groups. Results of Feature Selection Strategies on RF-Based Classifiers Table 2 demonstrates the results of the two feature selection strategies when RF was used to construct the classification models. The forward strategy was highly dependent on the order of inclusion of the features. On the other hand, the starting peak in the stepwise strategy was critical. Both strategies demonstrated that a reduction in the number of features appeared to increase the accuracy or AUC. In other words, the selected peaks were found to be highly informative for S. haemolyticus strain typing and were able to distinguish between the three groups of ST strains. A total of 10 models were constructed by adopting different feature selection strategies and selecting different initial peaks. We next identified the peaks that were selected in more than five models, and these were regarded as discriminative peaks. Table 3 shows the occurrences and proportions of these discriminative peaks. From this table it can be seen that the ST42 isolates almost always presented the peaks 4999 and 6496; specifically, they were present in over 90% of samples. However, neither ST3 nor ST42 ever presented the peak 5635. In addition, Figure 7 presents the whole spectral incidence for the three groups and specifically focuses on the area from 4700 to 7100 Da, which allows closer observation of the behavior of the discriminative peaks. Specifically, the red bars show the differences between these three groups that seem to be critical to constructing the classifiers. When considering the intensity, Table 4 presents the means and standard deviations of the normalized intensities of the discriminative peaks.
Since the incidence tends to be small and the normalized intensity is between 0 and 1, the average values also tend to be low. Nevertheless, some peaks still showed strong intensity; for example, peaks 6781, 6496, and 4999 have relatively large intensity values. The Kruskal-Wallis test was employed to test for differences among the three groups; when there was a difference between two of the groups, the p-value tended to be lower, which is why the p-values in Table 4 are very small. It should be noted that these discriminative peaks are the ones that are often selected by the various feature selection strategies shown in Table 2. Moreover, the boxplots in Figure 8 demonstrate the distribution of intensities among the different ST types. Consistent with Table 3, the intensity of a peak with a lower incidence tends to be smaller. This can also be seen in Figure 8 for peaks such as 4674 and 4659. Classifier for Discriminating ST3 and ST42 Table 5 shows the performance of the classifiers used to distinguish ST3 and ST42. Since the majority of the available data was for ST42, the specificity of these classifiers tended to be higher. Even so, the AUCs of the different classifiers also showed impressive results. In both Figures 7 and 8, it can be seen that the incidence and intensities are evidently different for some specific peaks. DISCUSSION This study focused on the strain typing of S. haemolyticus based on MALDI-TOF MS, utilizing statistical tests and machine learning methods simultaneously. Specifically, Fisher's exact test was employed to determine a reasonable tolerance value when preprocessing the spectral data. We have not only constructed machine learning-based classifiers that allow for different feature selection strategies, but have also employed statistical tests to compare the performance of the various discriminative peaks related to the different ST types. The rapid identification of S. haemolyticus strain types will facilitate the identification of origins of infection and will also provide critically ill patients with substantial benefits, because it will allow rapid infection control. Additionally, further exploration of the discriminative peaks will allow the identification of each corresponding peptide. Such findings should provide clinically valuable information pertaining to the different subtypes of S. haemolyticus. Previous studies used "type templates" for each ST type based on the incidence of specific peaks in their MALDI-TOF MS spectra in order to handle the issue of peak shifting; furthermore, log-transformed intensity was used to represent the corresponding signal strength for each peak (Wang et al., 2018a,b). These studies also used the signals with the highest incidence probability in a local region (± 5 m/z) as the center of each peak feature. In other words, determining the local region was based on the incidence probability, without the adoption of any statistical tests. In this study, we used statistical analysis and also measured the performance of the classifiers. Such a tolerance-value-based approach is an effective way of dealing with the peak shift problem present in spectral data. As the tolerance value increases, the number of peaks in the RPS decreases, and vice versa. The reason is that a larger tolerance value may lead to the alignment of more discriminative peaks with the same specific peak.
In contrast, a lower tolerance value results in a paucity of data. Specifically, in these circumstances, much less data can be aligned to the same specific peak, which produces a reduced amount of training data and eventually results in poor performance. In such circumstances, we used both Fisher's exact test and an evaluation of the variation in performance of different classifiers with different tolerance values. In short, the variation among different classifiers and tolerance values was taken into consideration, and this increased the robustness of our model. When the tolerance value was 5, the proportion of significance was the largest and the standard deviation in the 5-fold cross-validation analyses tended to be lower. Therefore, we used 5 as the final tolerance value when creating the RPS, which comprised 583 peaks. There are a variety of machine learning methods that can be used for modeling different types of data. In this study, we adopted a number of relatively uncomplicated models to construct the classifiers. These uncomplicated methods are readily interpreted, which makes interpretation of the peak results easier and simplifies the initiation of further investigations into specific peaks.

FIGURE 7 | Overview of processed MS data. Occurrence proportions among the three groups over the range from 2,000 to 17,000 Da, zoomed in for the range 4,900 to 7,100 Da. The red areas include peaks 4548, 4673, 4999, 5036, 5129, 5635, 6466, 6496, and 6781, which are the important peaks when constructing the RF-based classifiers.

Multinomial logistic regression is a generalized logistic regression model that is used for handling multi-class problems and is one of the most common parametric statistical models. Our major concern in adopting the multinomial logistic regression model was multicollinearity. When the dependency among different independent variables is high, the estimators can be misinterpreted, and this may increase the prediction bias (Myers and Myers, 1990). Although the performance of MLR, as shown in Table 1, tended to be lower than that of the other methods, the parameter estimates do seem to provide some information about the discriminative peaks. In other words, the estimators of the MLR were able to reveal which peaks potentially correlate with different ST types. It should be noted that consideration of the standard errors of these estimators is an important reference point that can be used to avoid multicollinearity effects. In contrast, there are few restrictions on the use of non-parametric methods such as SVM, DT, and RF. Their primary weakness is the time required for training the model on large-scale datasets. However, this was not an issue in this study, owing to the relatively limited amount of data. Consequently, the performance of the non-parametric methods was better than that of MLR. Furthermore, the performance of RF was more robust than that of the other methods. This is possibly due to two of the essential concepts of RF, namely ensemble learning and bagging. Previous studies have also reported the various advantages of RF (Boulesteix et al., 2012). In this study, we have also demonstrated that RF not only provides the highest accuracy and AUC, but also retains the lowest standard deviation. Only slight variation at the bacterial subspecies level is observed when strains are compared using mass spectra (Lasch et al., 2014; Wang et al., 2018b).
Nevertheless, until now, no studies have been able to identify the discriminative peaks for discriminating the different ST types of S. haemolyticus based on MALDI-TOF MS spectral data. Therefore, we used a variety of different strategies in order to identify the discriminative peaks that are very likely to be highly related to the different ST types. The exploration of discriminative peaks is highly dependent on the feature selection strategy and the machine learning method. It is important to note that the performance of RF is relatively robust and that it is also less time-consuming during training; in these circumstances, we largely adopted feature selection using RF for this study. The stepwise strategy is similar to a brute force method when used to find the best combinations for the classifiers. Consequently, the results of the stepwise strategy are generally better than those of the forward strategy. Furthermore, only one model did not include peak 4673, which strongly supports peak 4673 as a discriminative peak. In addition, peak 5129 was excluded by only two models; as shown in Figure 8, the normalized intensities of this peak across the three groups are apparently different. In addition, both Figure 8 and Table 3 show that the occurrence ratio is also significantly different across the three groups. Specifically, ST42 rarely presented peaks 4673 and 5129, while ST3 usually presented peaks at m/z 4673 and 5129. Further experiments are needed to identify the peptides corresponding to these peaks. Although the machine learning-based classifiers demonstrated impressive performance in this study for distinguishing the different ST types of S. haemolyticus, there are still some limitations.

FIGURE 8 | Boxplots of the normalized intensities of the discriminative peaks.

One major concern is that the subspecies composition of the microbial strains may differ in different bacterial populations or in different regions of the world. In such circumstances, the constructed machine learning-based classifiers might break down, because these groups may have different discriminative peaks for their subspecies. Even so, the machine learning-based classifier approach, in conjunction with the associated statistical tests, still provides a novel framework for analyzing MALDI-TOF MS data. Another critical issue that has been identified in previous studies is the reproducibility of the mass spectra when MALDI-TOF MS is used for bacterial typing (Walker et al., 2002; Wolters et al., 2011; Croxatto et al., 2012; Sandrin et al., 2013). A variety of factors are involved in the reproducibility of the mass spectra, including sample processing and specimen type (Josten et al., 2013; Sandrin et al., 2013; Mather et al., 2016). As yet, no standard protocol has been proposed for strain typing by MALDI-TOF MS. Nevertheless, a standard protocol should be optimized and specified for each species in order to achieve robust performance in strain typing (Walker et al., 2002; Sandrin et al., 2013). The College of American Pathologists accreditation and proficiency test has been conducted for years to ensure the performance and quality standards of personnel and tests at Chang Gung Memorial Hospital, Linkou Branch. Therefore, on the basis of the previously qualified MALDI-TOF MS workflow and the data used here, the constructed classification models used in this study are readily available for S. haemolyticus strain typing.
Our study has demonstrated a method of developing robust classifiers for discriminating different ST types of S. haemolyticus based on MALDI-TOF MS data. The multi-class classifier demonstrated an AUC of 0.848 and an accuracy of 0.886 when discriminating the three groups. If we only consider binary classification for ST3 and ST42, the AUC reaches an excellent discrimination power of 0.972. The constructed classifiers are able to provide instant information when identifying the origin of an infection, which allows rapid infection control. As a result, we believe that we have developed a cost-effective and rapid identification method for the strain typing of S. haemolyticus. This provides a great opportunity for further improvement of this new protocol and its introduction into routine clinical microbiology laboratory practice in order to attain rapid infection control. Furthermore, the explicit strategy for determining representative peaks before constructing the classifiers provides some guidance for those who are interested in further analysis of spectral data.

DATA AVAILABILITY
The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.
A Flexible PDMS-Based Optical Biosensor for Stretch Monitoring in Cardiac Tissue Samples

Cardiotoxicity, characterized by adverse impacts on normal heart function due to drug exposure, is a significant concern due to the potentially serious side effects associated with various pharmaceuticals. It is essential to detect the cardiotoxicity of a drug as early as possible in the testing phase of a medical compound. Therefore, there is a pressing need for more reliable in vitro models that accurately mimic the in vivo conditions of cardiac biopsies. In a functional beating heart, cardiac muscle cells are under the effect of static and cyclic stretches. It has been demonstrated that cultured cardiac biopsies can benefit from external mechanical loads that resemble the in vivo condition, increasing the probability of cardiotoxicity detection in the early testing stages. In this work, a biosensor is designed and fabricated to allow for stretch monitoring in biopsies and tissue cultures using an innovative sensing mechanism. The detection setup is based on a biocompatible, thin, flexible membrane, where the samples are attached, which is used as an optical waveguide to detect pressure-caused shape changes and stretches. Various prototypes have been fabricated with a cost-effective process, and different measurements have been carried out to experimentally validate the proposed measurement technique. From these evaluations, stretches of up to 1.5% have been measured, but the performed simulations point towards the possibility of expanding the considered technique up to 10-30% stretches.

Introduction

Polymers and flexible substrates are considered to be of key importance in many fields and applications. For instance, flexible materials are largely employed in the design and implementation of biosensors thanks to their malleability and elasticity [1-4]. These properties allow polymer-based sensors to have a wide range of applications that include tissue/cell culturing [5], e-skin [3,6-8], wearable electronics and devices [9,10], self-healing robots [11], Human-Machine Interfaces [12-14], and industrial applications [15,16]. In all these applications, the design is based on a soft and flexible substrate able to measure stretch under the appropriate load conditions. The most-used materials are polydimethylsiloxane (PDMS), polymethylmethacrylate (PMMA), and graphene. Technology-wise, stretches and deformations are typically measured with strain gauges [17], where the pressure applied to the material is translated into a change in electrical resistance [10]. Even though this technique has proven to be a very effective solution for in situ stretch monitoring, photolithography or the deposition of other materials like graphene is needed, adding complexity to the fabrication process. Capacitive strain sensors are another widespread class of devices used for the real-time monitoring of deformations. In this technology, an insulating layer is manufactured between two conductive electrodes. Applying a certain strain to the electrodes modifies the capacitance due to the change in the geometrical area of the equivalent capacitor, enabling the readout of strain [18]. Furthermore, piezoelectric and triboelectric materials are also printed on flexible substrates for strain sensing, obtaining high-precision and fast stretch measurements [19-21].
Another very important class of sensors is the Fiber Bragg Grating (FBG)-based sensor. In these implementations, the sensing device consists of an optical fiber whose refractive index has been externally modified, or one in which some impurities have been added to the fiber structure [22]. By analyzing the output radiation, it is possible to quantify the stretch/strain that is applied to the fiber. The majority of the aforementioned techniques, independently of the application, involve the use of a polymeric composite as a structural material. More specifically, a thin polymeric layer is often used as a matrix for the fabrication of the necessary sensing elements. Some proposals based on these ideas have recently been developed, considering PMMA as a structural material. The sensor proposed in [23] was based on the mechanical movement of two optical fibers that were adjacent under rest conditions. When subjected to an external stretch, they became distanced, resulting in a loss of light intensity due to the alterations in the refractive index. In this work, an innovative sensing principle is presented where the polymeric thin layer is used as the sensing layer, employing it as an optical waveguide and using the principle of Total Internal Reflection (TIR) to detect stretches. After defining the operating principle, the mechanical structure is mathematically modelled and then validated through software simulations. Afterwards, the device is specialized for cardiac tissue and cellular growth and maturation applications, adapting its functionalities to the field requirements. Finally, a thick-film, cost-effective fabrication method is presented and measurements are carried out to validate the new sensing principle and test its suitability for the aforementioned application.

Stretch Sensor: Design and Fabrication

In this section, an innovative sensing principle is proposed and the sensor's mechanical structure is designed. Mathematical models are used to predict the geometry's behaviour and simulations are carried out to validate the design. Finally, the thick-film, low-cost fabrication process is presented.

Stretch Application and Sensing Mechanism

To measure stretch, a mechanism was required to induce deformation on a flexible thin membrane, where the object of interest was situated. In this particular instance, pressure was applied from below, resulting in an upwards bending of the flexible substrate, which caused a longitudinal elongation of the fixed element, as visible in Figure 1. The consequent membrane shape change could be employed for detecting the induced elongation through optical measurements, using the membrane as an optical waveguide. Specifically, when no pressure was exerted, the TIR condition was met, resulting in maximum optical transmittance. On the other hand, when pressure deformed the flexible layer, the TIR condition was gradually lost and the membrane's optical transmittance lowered accordingly, resulting in a lower optical intensity being received at the other membrane end.
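The TIR condition underlying this sensing principle can be made concrete with a short calculation. In the sketch below, the refractive index of PDMS is a typical literature value (about 1.41) assumed for illustration; the paper itself does not quote one.

```python
# A minimal sketch of the TIR condition at a PDMS/air interface.
import math

n_pdms, n_air = 1.41, 1.0     # assumed typical values, not from this work

# Critical angle: rays hitting the membrane surface at an angle (from the
# normal) above theta_c are totally internally reflected and stay guided.
theta_c = math.degrees(math.asin(n_air / n_pdms))
print(f"critical angle: {theta_c:.1f} deg")   # ~45.2 deg

# When the membrane bends, the local surface tilt lowers the effective
# incidence angle of a guided ray; rays dropping below theta_c leak out,
# which reduces the intensity collected at the far fiber.
def ray_escapes(incidence_deg, tilt_deg):
    return incidence_deg - tilt_deg < theta_c

print(ray_escapes(60.0, 5.0))    # False: still guided
print(ray_escapes(60.0, 20.0))   # True: TIR condition lost
```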
PDMS was chosen as the structural material for the sensor due to its biocompatibility, flexibility, low-cost manufacturing, and transparency in visible light. For the optical measurements, two commercial plastic optical fibers (POF) (MH4001 Eska™, 1.0 mm Core Simplex High-Performance Plastic Optical Fiber, 2.2 mm OD Polyethylene Jacketed, Industrial Fiber Optics, Tempe, AZ, USA) were positioned at the sensor's lateral faces in correspondence with the active layer. Hence, the decision was made to set the thickness of the active membrane at 1 mm, ensuring a consistent optical path.

Sensor Structural Design

A hollow rectangular PDMS structure was designed to mechanically support the active membrane and allow for the application of pressure. Two different possible geometries were investigated, as shown in Figure 2. In both designs, a cavity was embedded for air injection. The shape of the cavity and the thickness of the upper layer are pivotal structural parameters, as they dictate the pressure that is required to achieve the intended stretch. Moreover, the upper layer thickness has to be paired with the optical fiber diameter to allow for impedance-matching between the two waveguides, reducing power losses at the interface. Thus, the thin sensing layer was designed with a 1 mm thickness, matching the optical fiber diameter. Following a thorough analysis, the prototype featuring a circular cavity was rejected due to the pressure's inability to cause a significant displacement of the upper layer, rendering this prototype ineffective. The geometry was selected to be of dimensions 76 × 14 × 8 mm so that the air boundary conditions at the inlets and at the central part would be separated, achieving a more uniform deformation of the membrane part that adheres to the object of interest. To facilitate pressure application, two identical Polylactic Acid (PLA) funnels were designed using Fusion 360 (Autodesk, Mill Valley, CA, USA), 3D-printed, and glued to the structure. Figure 3 shows a 3D model, realized with Fusion 360, of the whole structure with embedded funnels.
Mechanical Design Model and Validation

The main goal of this section is to study the mechanical properties of the 1 mm membrane and to demonstrate that the stretch values can occur within the range of interest. A mathematical model was used to predict the required vertical displacement and strain on the membrane that is achievable with this system architecture. Figure 4 represents a schema of the problem. $L_0 + \Delta L$ is the increased longitudinal length of the sensitive membrane, $H$ is the vertical displacement under the pressure effect, and $R$ is half of the cavity width, in this case 5 mm. Using the schema shown in Figure 4 and approximating the deformed membrane by a circular arc, the strain can be calculated as in [24]:

$\varepsilon = \frac{\Delta L}{L_0} = \frac{\rho}{R}\arcsin\!\left(\frac{R}{\rho}\right) - 1, \qquad \rho = \frac{R^2 + H^2}{2H}. \qquad (1)$

In a first instance, Equation (1) was used to graph the strain against the normalized vertical displacement H/R. From Figure 5, it is possible to notice that, to achieve a strain, and thus a stretch, of 10-30%, a vertical displacement of H = 2-3.5 mm is needed. The Bulge Test equation [24] was utilized to determine the pressures at which these displacements occur:

$P = \frac{2\sigma_0 t}{a^2}H + \frac{4Et}{3(1-\nu^2)a^4}H^3, \qquad (2)$

where 2a is the membrane width, $\sigma_0$ is the initial stress, t is the membrane thickness, and E and ν are the Young's modulus and the Poisson's ratio of the membrane, respectively. Since, in this case, the initial stress was null, the equation reduces to its cubic term, from which the displacement can be retrieved:

$H = \left(\frac{3(1-\nu^2)a^4 P}{4Et}\right)^{1/3},$

where, for the proposed structure, a = 5 mm, t = 1 mm, ν = 0.5, and the Young's modulus of PDMS is E = 1.2 MPa [25]. From Figure 6, it is possible to infer that, to obtain a vertical displacement H = 2 mm (10% stretch), a pressure P = 27 kPa is needed, while to obtain H = 3.5 mm, corresponding to a stretch of 30%, P = 150 kPa is needed.

To corroborate these statements, a COMSOL Multiphysics (COMSOL Inc., Burlington, MA, USA) simulation was carried out. To simulate the scenario, only the 1 mm active layer was considered in the model and a constant pressure was applied to the lower membrane surface. After drawing the geometry, selecting the material, and adding the appropriate equations, a parametric sweep of the applied pressure was performed. Figure 7a shows the 3D result of this parametric sweep. In order to increase the simulation accuracy, the hyperelastic properties of PDMS were added to the COMSOL material model. The results are presented in Figure 7b, where it is possible to notice that stretches in the 10-30% range could be achieved with pressures in the range 25-70 kPa. A numerical check of Equations (1) and (2) is sketched below, after the list of preclinical models.

Stretch Monitoring Applied to Cardiac Research

In the field of pharmaceuticals, new drug compounds undergo multiple stages of testing throughout their development process prior to being introduced to the market. After the discovery of a new compound, various preclinical and clinical testing phases are required. One of the requirements is the evaluation of the cardiotoxicity of a drug, which can alter the normal functioning of the cardiomyocytes (human cardiac cells) and cause life-threatening arrhythmia. During the pre-clinical phase, tests are carried out using models that closely resemble the human cardiac condition (in vivo), excluding human participants from the experiments. Preclinical models involve:
• In silico models: software simulations of the cardiac structure under the compound's effect.
• In vitro models: 3D tissue slices or human-induced pluripotent stem-cell-derived cardiomyocytes (hiPSC-DM) that are cultured and kept under standard conditions.
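The exact equation bodies did not survive extraction, so the forms given above for Equations (1) and (2) are reconstructions: a circular-arc strain formula and a standard rectangular-membrane bulge-test relation, chosen because they reproduce the values quoted in the text. The sketch below checks both numerically.

```python
# A minimal numerical check of the reconstructed Equations (1) and (2).
# Both reproduce the quoted figures (H = 2 mm -> ~10% strain, ~27 kPa;
# H = 3.5 mm -> ~30% strain, ~150 kPa), so they are plausible
# reconstructions rather than the authors' verbatim equations.
import math

def strain(H, R):
    """Strain of a membrane of half-width R bulged to height H,
    assuming a circular-arc profile (Equation (1))."""
    rho = (R**2 + H**2) / (2 * H)          # radius of curvature of the arc
    return (rho / R) * math.asin(R / rho) - 1

def bulge_pressure(H, a=5e-3, t=1e-3, E=1.2e6, nu=0.5, sigma0=0.0):
    """Pressure for center deflection H (bulge-test relation, Equation (2))."""
    return 2 * sigma0 * t * H / a**2 + 4 * E * t * H**3 / (3 * (1 - nu**2) * a**4)

for H_mm in (2.0, 3.5):
    H = H_mm * 1e-3
    print(f"H = {H_mm} mm: strain = {strain(H, 5e-3) * 100:.1f}%, "
          f"P = {bulge_pressure(H) / 1e3:.0f} kPa")
# H = 2.0 mm: strain = 10.3%, P = 27 kPa
# H = 3.5 mm: strain = 30.1%, P = 146 kPa
```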
When extracted from the functional organ, cardiomyocytes maintain their functional structure and behaviour. However, some time after isolation, these cells start to change their structural properties and functionality [26,27]. To stop this degradation, it is of key importance that the slice is cultured under conditions that mimic the in vivo situation. These conditions are oxygen and nutrient supply, and electromechanical stimulation. Electromechanical stimulation has to be applied with external systems, while the oxygen and nutrient supply is guaranteed by placing the slice in contact with an appropriate salt solution or culture medium that mimics native extracellular fluid (ECF) [28]. For this reason, Biomimetic Culture Chambers (BMCCs), where all these physiological conditions are met, are needed for the appropriate culturing of both cardiomyocytes and tissue slices. In various studies [29-33], BMCCs have been developed to investigate the effect of mechanical stimulation protocols on cardiomyocytes and cultured slices. The main findings were that the best static mechanical stretch was 20-30% of the original length, corresponding to a sarcomere length (SL) of 2.2-2.4 µm. With these mechanical loads, tissue slices were successfully cultured and kept alive for up to 4 months. In [5], stretch measurements of up to 4-5% were obtained using a microfabricated device with strain gauges, and cardiomyocytes were successfully cultured under such stretch conditions. However, in [30], stretch measurements were carried out employing a digital camera and millimeter graph paper; thus they lacked resolution while requiring expensive equipment and extensive calibration and data post-processing. The device proposed in [5] needs a fabrication process that can produce such thin-film substrates, as well as the equipment required to read the sensor output. For these reasons, advanced measuring techniques are required to enhance the design of BMCCs and optimize stretch monitoring effectiveness, and a cost-effective, widely available fabrication process has to be introduced for such devices. The presented measurement method is proposed as an affordable alternative to the state-of-the-art sensing mechanisms, and tests on various prototypes with different properties are carried out to verify its suitability. The cardiac biopsies taken into account for this study were of the dimensions 5 × 7 × 0.3 mm.

Mold and Sensor Fabrication

As a first fabrication step, the molds were designed and fabricated according to the fixed structural dimensions. The chosen mold material was PMMA. After creating a Corel Draw v.20 (CAD) 3D model, the mold was manufactured using a CO₂ laser printer (Epilog Mini 24, Epilog Laser, Golden, CO, USA). PMMA foils were adhered with a pressure-sensitive adhesive (PSA), ARcare-8939 (Adhesive Research Inc., Glen Rock, PA, USA). Figure 8 shows the PSA structure. PSA and PMMA were chosen for their robustness, low-cost manufacturing, and suitability for clinical devices and applications. Furthermore, in order to realize the internal sensor cavity, the same PMMA/PSA process was used to fabricate a rectangular bar. Afterwards, to facilitate demolding, a silane pre-treatment of 2 h was performed on the mold. A total of 10 µL of silane CF₃(CF₂)₅(CH₂)₂SiCl₃, 97% (ThermoFisher GmbH, Kandel, Germany) was applied close to the mold under vacuum conditions in order to form a uniform thin cap on the container surface.
The third manufacturing step was to realize the polymer itself. For this purpose, Sylgard-184 silicone elastomer (The Dow Chemical Company, Midland, MI, USA) was used. The base and curing agent were put into a standard laboratory mixer at a 10:1 ratio and manually mixed until homogeneity was achieved. Three different fabrication procedures are possible:
1. Evacuation of the silicone encapsulant in a vacuum chamber for 3 h to remove bubbles, with a subsequent drying heat treatment at 70 °C for 1 h.
2. Centrifugation of the container and the encapsulant at 1200 rpm for 5 min to eliminate entrapped air, followed by 3 h of heat-curing at 70 °C.
3. Leaving the mold with the encapsulant at room temperature for 48 h.
All three proved successful at removing bubbles from the sensing membrane (i.e., the 1 mm layer where light passes); however, the second produced more consistent sensors in terms of density, texture, and opacity, so the other two methods were discarded. Figure 9a shows the mold with the polymeric solution, while Figure 9b shows how light is injected from the optical fiber into the PDMS sensor. Three different prototypes, with different structural properties, were fabricated to validate the sensing system. Pattex Repair Extreme® was explored as an alternative glue since it provided better bonding between the funnels and the PDMS sensor thanks to its polymer-like bond-creation properties. Loctite Super Glue-3® proved to create a less reliable contact that was often subject to pressure losses and detachment, invalidating the respective prototypes. For this reason, and as seen in the characterization results, prototype 1 was not taken into further consideration.

Measurement Setup

The measurement setup is presented in this section. In the first instance, an optical path characterization was performed to understand the optical behaviour of the membrane. Once its suitability as an optical waveguide was determined, and the wavelengths for which the transmittance is at its maximum were found, the best optical components for the application were chosen. Afterwards, the mechanical setup to accommodate the sensor and align the optical fibers was studied and fabricated, taking into account the typical tissue and BMCC dimensions. Furthermore, the hardware and software setup is presented and the electronic measurement techniques are discussed.

Optical Path Characterization

To choose the best radiation parameters for the application, an optical path characterization was carried out. This experiment consists of emitting broadband light radiation into the 1 mm active sensing layer and recording the received light intensity. In this way, one can understand which wavelengths cause the material to be transparent. These wavelengths were then used to select the most appropriate optoelectronic components for the proposed system. The broadband light source used for this experiment was an HL-2000 (Ocean Insights, Orlando, FL, USA) and the spectrometer was a QE6500 (Ocean Insights, Orlando, FL, USA).
Figure 10 presents the results of this spectroscopy, realized in the UV/Visible (λ = 200-1000 nm) band. The Baseline graph shows the results of the test when no pressure was applied and the membrane was in its resting position. Test 1 and Test 2 show the results of the same experiment when pressure was applied to the sensing layer, causing its shape to change. As expected, the membrane bending caused a loss of the TIR condition, and less light intensity was recollected, but no variation was recorded in the frequency band. The blue baseline graph is the average of multiple measurements performed to test the repeatability, and a maximum normalized standard deviation of SD = 0.01 was recorded. Given these results, an FB00AKAR (Firecomms, Lehenagh More, Ireland) optocoupler was selected for the emission and recollection of radiation in the prototype. This optocoupler features a red LED that emits within the wavelength range of interest and has a high bandwidth, low capacitance, and a separated photodiode.

Mechanical Setup

The mechanical setup consisted of two main parts:
1. Pneumatic circuit.
2. Sensor mechanical support.
The pneumatic circuit was formed by a syringe for controlled air injection and a series of silicone tubes, of 4 mm outer diameter and 2 mm inner diameter, that were connected to the sensor through its PLA funnels. In order to allow for proper fiber alignment with the 1 mm active layer, a platform incorporating a micrometer screw and the fiber accommodation holes was designed and 3D-printed. Moreover, a small chamber was placed on top to ensure that only the central membrane region deformed and to serve as a BMCC. With this setup, visible in Figure 11, it was possible to adjust the alignment of the optical fibers to the desired location, allowing for more repeatable measurements. To ensure proper alignment with the sensing layer, the micrometer screw was initially placed in a position where the radiation would not pass through the PDMS device. Thereafter, the fiber support was lowered until the first maximum of received light intensity was recorded, guaranteeing that the light was guided by the device's sensing layer. In this way, a repeatable measurement method was guaranteed for each prototype. Appropriate amplification and further analog signal conditioning allowed for the appropriate LED polarization, generating the desired optical radiation. This optical signal was guided through the 1 mm thick sensing layer and collected with the photodiode. Both the LED and the photodiode were operating in their linear range. After adequate filtering and amplification, the signal carrying the information was digitized and post-processed to obtain low-noise results. The ADC used for the data digitization was the one embedded in the microcontroller unit (MCU), with 12-bit resolution, and samples were taken at f_s = 25 kHz. Since the photodiode proved to be very sensitive to ambient light variations, the optical components had to operate under conditions that were robust to the DC variations that affected the measurements. Therefore, a feedback loop was introduced to implement a DC-level control over the excitation signal. In this way, the information regarding the membrane shape change was encoded only in the AC amplitude of the signal recollected at the photodiode, desensitizing the measurements from ambient light variations and other DC phenomena. The signal generation and digitization were fully developed with a Nucleo-L452RE-P board (ST Microelectronics, Geneva, Switzerland).
A pressure sensor DLHRL10G (Allsensors, Morgan Hill, CA, USA) was employed for real-time pressure measurements. Communications between the MCU, the pressure sensor, and the PC were implemented with different communication protocols. The pressure sensor sent raw data to the MCU over the I2C communication protocol, while the information recollected from the photodiode was digitized using the embedded ADC of the MCU. Using the UART protocol, strings containing the raw pressure data and intensity data were sent to the PC for further processing. Matlab (The MathWorks, Natick, MA, USA) scripts were developed to synchronize the operations between the pressure sensor and the intensity data recollection system. These scripts were also employed to recollect, organize, and post-process the UART data.

Voltage vs. Pressure Characterization

The first characterization step was to find the relationship between the injected air pressure and an electrical quantity that could carry the information of the membrane deformation. The root mean square (RMS) voltage of the sine wave collected by the photodiode was selected. The AC amplitude of this signal varies based on the deformation of the sensing layer; therefore, the RMS of this sine wave is a low-noise measure of the deformation amplitude. Figure 14 shows the setup configuration with an allocated cardiac tissue sample: Figure 14a shows how the stretch is induced in the cardiac fiber, and Figure 14b shows how light passes through the sensing membrane. The recollected signal from the photodiode was conditioned and digitized with the Nucleo-L452RE-P board. Raw data from the pressure sensor were sent to the MCU with the I2C protocol. Through a software interrupt, data from the sine wave were digitized and, together with the pressure data, were sent over the UART protocol to a PC. Once the data were received by the PC, the dataset was saved in a text file and post-processed. In Figure 15, the relation between RMS voltage (Vrms) and pressure for prototype 3 can be seen. This measurement was first taken by inflating the membrane with the syringe and then letting it gradually deflate. As is evident, when observing the same pressure point, the Vrms measured during inflation and deflation differed, particularly within the high-sensitivity central region of this characteristic. This observation indicates that the material exhibits a hysteresis that can be justified by the viscoelastic behaviour of PDMS [35]: due to creep, a viscoelastic material delays the complete recovery of its initial shape after a load is released. Nevertheless, due to its slower rate and the availability of more measuring points, only the deflation segment was taken into account when constructing the following Vrms-pressure relationships. Figure 16 shows the characteristics of prototypes 2 and 3, respectively. As expected, the two curves differ in pressure range. This behaviour can be explained by the differences in sensing layer thickness. In the case of prototype 3, light remains confined within the sensing layer for an extended duration during both inflation and deflation, resulting in a broader pressure range being reliably captured. However, it is anticipated that a higher pressure will be necessary for prototype 3 to achieve an equivalent vertical displacement compared to prototype 2.
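A minimal sketch of the intensity readout just described is given below: the photodiode delivers a 500 Hz sine whose AC amplitude encodes the membrane deformation, and the DC level (ambient light) is removed before taking the RMS. The sample values are synthetic; only the sampling rate and excitation frequency follow the text.

```python
# Extracting Vrms from a synthetic photodiode trace.
import numpy as np

fs, f_sig = 25_000, 500                   # sampling rate and excitation (Hz)
t = np.arange(0, 0.1, 1 / fs)             # 100 ms acquisition window

amplitude = 0.42                          # hypothetical AC amplitude (V)
dc_drift = 1.5 + 0.05 * t                 # hypothetical ambient-light DC level
signal = dc_drift + amplitude * np.sin(2 * np.pi * f_sig * t)

ac = signal - signal.mean()               # crude DC removal over the window
vrms = np.sqrt(np.mean(ac**2))            # RMS of the AC component
print(f"Vrms = {vrms:.3f} V (ideal: {amplitude / np.sqrt(2):.3f} V)")
```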
Vertical Displacement Characterization

Having characterized the relation between Vrms and pressure, the vertical displacement of the membrane was measured. Using a vertical displacement laser LK-H082 (Keyence Corporation of America, Itasca, IL, USA), and with the setup shown in Figure 17, the vertical displacement was related to the air pressure in the cavity. Since the PDMS used to produce the sensors was transparent at the laser wavelengths, the upper layer of the samples was painted with a grey paint to allow for the proper reflections needed by the vertical displacement laser. The optical fiber was disconnected from this setup because, otherwise, the direct characterization of Vrms vs. height would not be possible, for two reasons:
1. The paint changes the way light is guided in the sensing layer. It appeared that light was better confined in the membrane, changing the Vrms vs. pressure behaviour.
2. Measuring the vertical displacement without painting the top layer was not possible, since PDMS is highly transparent under red light, preventing proper vertical measurements and changing the Vrms characteristics due to interference between the laser light and the fiber-optic radiation.
Following these arguments, only the circuit part responsible for pressure measurements was kept connected. Once the vertical displacement tool was properly focused and positioned at the membrane center, air was blown inside the cavity using a syringe, causing the upper membrane to displace vertically.

Pressure points were taken with a sampling period of 60 ms, while the vertical displacement tool was set to capture height points with a sampling period of 20 ms. Thus, to relate pressure and displacement, a resampling process was necessary. Moreover, to better adjust these points, one of the two curves was shifted and compared to the other until the best correlation condition was met. Once this processing was finished, data cleaning and interpolation were performed to obtain height values at specific pressure points to allow for further characterization.

In Figure 18, the vertical displacement H vs. pressure characteristics are shown for prototype 2 and prototype 3. Comparing the two prototypes at the same pressure points, prototype 3 always measured a smaller vertical displacement. This was expected, since the sensing layer of prototype 3 was fabricated to be thicker than that of prototype 2; hence, more pressure was needed to achieve the same vertical displacement. Measurements were performed for up to 2.5 kPa, since this was the maximum pressure range of the commercial pressure sensor employed in the measurement.

Stretch Calculation

Based on the sensor's geometry, and as the maximum vertical displacement was achieved for prototype 2, the strain-pressure characteristic was evaluated as in Equation (1), although only for this prototype. From Figure 19, it is possible to see that, for a pressure of 2.5 kPa, a strain of 4% could be achieved, thus resulting in a 4% tissue stretch. These results demonstrate that the structure can be used for cyclic mechanical stimulation for the maturation of cardiac microtissues since, in these stretch ranges, they remained functional for up to 4 days, as stated in [31].
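A minimal sketch of the resampling and alignment step described above follows: the finer height trace is resampled onto the coarser pressure time base, the two series are shifted to the lag of maximum correlation, and height is finally interpolated at chosen pressure points. All traces here are synthetic placeholders, not the prototypes' data.

```python
import numpy as np

t_p = np.arange(0.0, 30.0, 0.060)           # pressure timestamps (s)
t_h = np.arange(0.0, 30.0, 0.020)           # displacement timestamps (s)
pressure = np.linspace(0.0, 2.5, t_p.size)  # hypothetical ramp to 2.5 kPa
height = 2.0 * (np.linspace(0.0, 2.5, t_h.size) / 2.5) ** 0.8  # hypothetical mm

h_res = np.interp(t_p, t_h, height)         # resample onto the coarser base

def corr_at(lag):
    # Pearson correlation between the two series at a given sample shift.
    if lag > 0:
        a, b = pressure[lag:], h_res[:-lag]
    elif lag < 0:
        a, b = pressure[:lag], h_res[-lag:]
    else:
        a, b = pressure, h_res
    return np.corrcoef(a, b)[0, 1]

best = max(range(-10, 11), key=corr_at)     # shift giving the best correlation
if best > 0:
    p_sync, h_sync = pressure[best:], h_res[:-best]
elif best < 0:
    p_sync, h_sync = pressure[:best], h_res[-best:]
else:
    p_sync, h_sync = pressure, h_res

p_grid = np.linspace(0.1, 2.4, 10)          # pressure points of interest (kPa)
h_at_p = np.interp(p_grid, p_sync, h_sync)  # height at those pressures
```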
Vrms vs. Displacement Characteristics

Having characterized the Vrms and the vertical displacement with pressure, the relation between Vrms and height was found using various Matlab scripts. First of all, an interpolation of pressure points was carried out in a selected vertical displacement range. Afterwards, the Vrms points were interpolated at these new pressure points. These interpolations and data processing allowed the Vrms-height characteristic to be established. Figure 20 shows the results of this processing, where it is possible to notice that the 20% sensing layer thickness increase between prototypes 2 and 3 influences the Vrms-H characteristic, as expected. Moreover, the sensing technique proves able to measure stretches of up to 1.5%. To test the repeatability of the two prototypes, the mean and standard deviation of the Vrms-height curves were calculated with a Matlab script from several measurements. These results are shown in Figure 21. For prototype 2, the maximum standard deviation was σ_max = 1.13%, while for prototype 3 it was σ_max = 0.61%, showing the very high repeatability of the two prototypes.

Impact of Painting on the Vrms Characteristic

As previously stated, in order to measure the membrane's vertical displacement, the top layer needed painting to facilitate the laser reflection and allow for correct measurement. The painting does not affect the membrane displacement, as its weight is negligible compared to the membrane. However, the paint was proven to change the Vrms-pressure characteristic. In Figure 22, it is possible to notice how painting the membrane changes its optical behaviour. The curve appears to be linearized by the painting, and the high-sensitivity region indicates a better confinement of the LED radiation to the 1 mm sensing layer. Since the vertical measurement tool and the LED used radiation of the same wavelength, the laser light is reflected, allowing for vertical displacement measurements; additionally, the LED light is optimally reflected and confined to the sensing membrane, linearizing the characteristic and resulting in a broader measurement range. These results suggest that the use of a biocompatible paint in future studies might improve the behaviour of the sensor.
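Referring back to the Vrms-height processing above, the short sketch below shows how the two measured characteristics can be chained through their common pressure axis to obtain Vrms as a function of H. The curves are synthetic placeholders, not the prototypes' actual data.

```python
import numpy as np

p = np.linspace(0.0, 2.5, 50)                  # common pressure axis (kPa)
vrms_of_p = 1.0 - 0.3 * (p / 2.5) ** 2         # hypothetical Vrms(P) in V
h_of_p = 2.0 * (p / 2.5) ** 0.8                # hypothetical H(P) in mm

h_grid = np.linspace(0.1, 1.8, 20)             # displacement range of interest
p_at_h = np.interp(h_grid, h_of_p, p)          # invert H(P) into P(H)
vrms_at_h = np.interp(p_at_h, p, vrms_of_p)    # evaluate Vrms at those pressures
```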
Discussion and Future Work

In this work, a new sensing mechanism for stretches is proposed, requiring only an LED, a photodiode, and the processing electronics. Compared with the state of the art, the required fabrication process is simpler and does not need any advanced instrumentation [5], and the proposed measurement technique is more affordable than [30], considering its cheaper measurement setup and easier post-processing of data. Furthermore, compared with [23], mechanical improvements were made, since the stretches are applied through a soft membrane deformation in this proposal, and not through the mechanical displacement of two parts, reducing the breaking risk as fewer mechanical stresses are present. Moreover, the optical fibers are now isolated from the culture medium, reducing unwanted optical effects at the interfaces. The sensing technique (i.e., using the PDMS as an optical waveguide) was tested to check its suitability for the proposed application. The findings indicate that, with the current setup, stretches of up to 1.5% were experimentally measured, but simulations point towards the possibility of expanding the technique to stretches of up to 10-30%. These considerations open the possibility of many research lines and future work, where better light confinement should be sought in the sensing layer. For instance, SU-8 is a biocompatible [36] photoresist that is widely employed in the MEMS field and is highly reflective at infrared wavelengths [37]. The effects of such material coatings on the sensor characteristic could be investigated to improve the optical coupling and increase the sensing range.

Conclusions

In this article, an innovative stretch sensing mechanism has been proposed, where a thin PDMS membrane is used as an optical waveguide and tested for a specific application. The results obtained from this proof-of-concept have demonstrated that, with a simple and cost-effective fabrication process, the detecting mechanism is capable of measuring stretches of up to 1.5% at the device's sensing layer at low pressures. Simulations point towards the possibility of expanding the range to up to 10-30% stretch with higher pressures. Painting the top layer of the membrane improves the linearity and the sensing range of the membrane, opening future lines of investigation regarding this aspect by enhancing the fabrication method with the use of more precise instrumentation and coatings.

Figure Captions

Figure 1. Sensor working principle. Radiation (red arrows) is guided through the thin membrane that deforms; recollected light carries information about the deformation.
Figure 2. Cross section of two possible sensor geometries.
Figure 3. 3D model of the sensor with the two PLA funnels attached to it.
Figure 4. Schema of the vertical displacement for strain calculations.
Figure 6. H vs. pressure according to the Bulge Test equation (Equation (2)) for the device membrane.
Figure 7. COMSOL simulations of the membrane cyclic loading test. (a) 3D results of the vertical displacement COMSOL simulation. (b) Comparison between the Bulge Test equation and the hyperelastic model simulated with COMSOL.
Figure 9. Representation of the molding process and the completed and mounted sensor. (a) PDMS in the mold before heat processing and top layer bubble removal. (b) PDMS sensor after fabrication with light and pneumatic circuit.
Figure 10. Optical path characterization. Baseline represents the results obtained for 0 Pa pressure; Test 1 and Test 2 those for 100 and 200 Pa, respectively, showing that only the recollected light intensity changed when applying strain.
Figure 11. Final mechanical setup. The two fibers are always aligned and the support mimics the typical shape of a biomimetic culture chamber. Height was adjusted with a micrometer screw to improve alignment.
Figures 12 and 13. Schema of the hardware/software architecture for optical sensing. The signal generation block was formed by a DAC, where a sine wave of 500 Hz was generated.
Figure 12. Electronic system architecture. The feedback loop maintains a constant DC level, desensitizing the measurements to ambient light variations.
Figure 13. Characterization setup and communications. The MCU communicated with the pressure sensor over the I2C protocol and sent data to the PC over the UART protocol.
Figure 14. Measurement setup with a heart sample allocated in the BMCC. To apply strain to the sample and guarantee its adherence to the top layer, the slice was glued to the PDMS substrate. (a) Inflated membrane, showing how the cardiac sample is stretched. (b) Relaxed membrane with red light passing through the sensor.
Figure 16. Vrms vs. pressure characterization for the two valid prototypes.
Figure 18. H vs. pressure characterization for the two valid prototypes.
Figure 21. Mean and standard deviation for the Vrms vs. H curves.
Figure 22. Variation in the Vrms-pressure curve when paint is applied to the device top layer.
Modelling survival and allele complementation in the evolution of genomes with polymorphic loci

We have simulated the evolution of sexually reproducing populations composed of individuals represented by diploid genomes. A series of eight bits formed an allele occupying one of 128 loci of one haploid genome (chromosome). The environment required a specific activity of each locus, this being the sum of the activities of both alleles located at the corresponding loci on the two chromosomes; this activity is represented by the number of bits set to zero. In a constant environment the best-fitted individuals were homozygous, with allele activities corresponding to half of the environment requirement for a locus (in the diploid genome, the two alleles at corresponding loci together produced the proper activity). Changing the environment under a relatively low recombination rate promotes the generation of more polymorphic alleles. In heterozygous loci, alleles of different activities complement each other, fulfilling the environment requirements. Nevertheless, the genetic pool of populations evolves in the direction of a very restricted number of complementing haplotypes, and a fast-changing environment kills the population. If simulations start with all loci heterozygous, they stay heterozygous for a long time.

Introduction

A Mendelian population is, by definition, a group of interbreeding diploid individuals sharing the same genetic pool. Such a population should be panmictic, which means that each individual can randomly choose a mating partner from the whole population. If the population is large, one can expect random and independent assortment of alleles into the gametes. In fact, in Nature, populations of one species usually do not fulfil the parameters of the Mendelian population even if they occupy the same, uniform environment. First of all, inbreeding populations can be much smaller than the whole considered populations, not necessarily because of real physical or biological barriers, but simply because of the physical distances between individuals. Wright introduced the definition of effective population size (Wright 1931). According to this definition the effective population is "the number of breeding individuals in an idealized population that would show the same amount of dispersion of allele frequencies under random genetic drift or the same amount of inbreeding as the population under consideration". Introducing a finite effective population size, one has to consider much deeper changes in the evolution of the population. The most important is that in such populations some genes cannot be inherited independently of each other. Genes in genomes are arranged linearly in long sequences called chromosomes. A diploid genome is composed of several pairs of homologous chromosomes (e.g., the human genome is composed of 23 pairs of chromosomes). Even if we assume that chromosomes are inherited independently, the genes located on a single chromosome are genetically linked. Two homologous chromosomes (one inherited from the mother, the second from the father) can recombine during gamete formation, exchanging corresponding parts in a process called crossover, but the frequency with which this process occurs is restricted and relatively low. For example, the human chromosome 19 contains almost 1,500 genes, while crossover usually happens only once during gamete production (Jensen-Seaman et al. 2004; Kong et al. 2002).
As a result, two neighbouring genes can be inherited together as a linked unit for hundreds of generations. Thus, single genes are not inherited independently; they tend rather to form clusters of genes. Deleterious genes can even be lethal; they can kill their carriers. Nevertheless, they can be compensated by a functional copy of the other allele located at the corresponding locus of the homologous chromosome, since they are usually recessive. In large Mendelian populations, deleterious genes are eliminated by purifying selection: if a given locus in a genome is occupied by two lethal alleles, the individual is eliminated by selection, thus decreasing the number of deleterious genes in the genetic pool. In cases where we should consider the inheritance of large clusters of genes, the strategy of population evolution can be different. Not only can single deleterious genes be complemented by their functional alleles; whole clusters, with several deleterious genes, can be compensated by the corresponding fragment of the homologous chromosome. This strategy has been called the complementing strategy, as an alternative to the above purifying selection. In the complementing regions, the fraction of heterozygous loci (occupied by two different alleles) is much higher than in the regions under purifying selection and can reach up to 100% of the positions (Zawierta et al. 2008; Waga et al. 2007, 2009). The transition between the two alternative strategies may have the character of a phase transition, and it depends on the frequency of crossover between homologous chromosomes, the effective population size, and the number of genes in the crossing chromosomes: a lower recombination rate, a smaller effective population size, longer chromosomes, and a changing environment favour the complementing strategy (Zawierta et al. 2008). The complementing strategy allows sympatric speciation, the emergence of new species inside larger ones without any barriers. This is possible because the complementing clusters in the modelled populations have a unique sequence of defective and wild alleles. Surviving offspring can be produced mainly by individuals sharing the same sequence of defective genes in the complementing clusters. A tendency for gene clustering has been observed in other models which have allowed gene shuffling by nonreciprocal recombination events in haploid genomes (Pepper 2007). On the one hand, this strategy also explains the results showing that fecundity in the human population decreases with large genetic distance between spouses (Helgason et al. 2008). According to the predictions of models of population evolution under purifying selection, on the other hand, fecundity should increase with the genetic distance between spouses, while models with gene cluster complementation predict both inbreeding and outbreeding depression (Bońkowska et al. 2007; Cebrat and Stauffer 2008). The latter predictions of the models are in agreement with the other finding of Helgason et al. that evolutionary success, measured in the number of grandchildren, has a maximum for spouses related at the level of third cousins. In previous articles dealing with the complementing strategy, only the lethal character of defective alleles has been considered. Thus, each gene could be represented by only two different alleles: the wild one, which is functional, and the defective one, lethal in the homozygous state and neutral in the heterozygous state.
Many agent-based models of population evolution share this feature (Stauffer et al. 2006). In this article, we introduce more polymorphic alleles with hundreds of possibilities (morphs). The environment requires specific values of the loci activities, and these requirements may change during the evolution of the population (Sá Martins et al. 2009). The required activity can be realised by several combinations of different alleles of a specific gene. Since a single allele is now composed of eight bits, $2^8 = 256$ different alleles can occupy a given locus, and the population can be highly polymorphic.

The model

A diploid individual genome is represented by a pair of parallel bit-strings, each one with 1,024 bits. Each string is divided into L = 128 equal pieces, or loci, of 8 bits each. Thus, each genome consists of a pair of homologous chromosomes with 128 pairs of alleles located at corresponding loci. Each string corresponds to one chromosome and each locus of 8 bits is occupied by one allele. The two alleles at the corresponding loci on the two chromosomes are denoted A and a, but this terminology does not indicate a dominant or recessive character of these alleles. We denote as activities $A_l$ and $a_l$, with $l = 1, 2, \ldots, L$, the number of bits set to zero in the corresponding alleles. The activity of the l-th locus is given by the sum $A_l + a_l$, which is a number between 0 and 16. The environment is also represented by a (single) string of L = 128 loci, where the value $E_l$ of each locus represents the ideal activity of the corresponding l-th locus of each genome. The total deviation $D = \sum_{l=1}^{L} |A_l + a_l - E_l|$ determines the individual survival probability $x^{1+D}$ during one iteration, with a free parameter x slightly below 1. With this selection strategy even perfectly fitted individuals have a probability $1 - x$ of dying at each iteration. For each adult who dies, one new baby is born in the same iteration, from two randomly selected different partners; no distinction between males and females is made. For each of the two partners, the genome strings are crossed over with an intergenic recombination probability $C \le 1$; thus the 8 bits of each locus are always kept together. The baby gets, with probability m, one new mutation in each of its two genome strings, modelled by inverting the value of one randomly chosen bit. If pre-natal selection is used (in the "Other initial configurations" section but not in the "Starting from full complementarity" section), the baby has to pass the above selection mechanism, depending on its deviation D, in order to be born; if it is not born, a new pair of parents is randomly selected and a new trial is made, up to some maximum number of attempts. The population size N is fixed if pre-natal selection is not used, but we regard the population as extinct and stop that simulation if the number of surviving adults, without the newborn babies, approaches zero. With pre-natal selection, the population size may decrease if the maximum number of trials for a newborn is reached without any success in passing the selection mechanism. The environment string may change at each iteration with a low probability e, by changing one $E_l$ by ±1 at a randomly selected locus l.
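A minimal sketch of the model just described is given below: two 1,024-bit haplotypes split into L = 128 eight-bit loci, activity counted as the number of zero bits, survival probability $x^{1+D}$, intergenic crossover with probability C, and one random bit-flip mutation per string with probability m. A single crossover cut at a random locus boundary is assumed here, which is one plausible reading of "intergenic recombination" in the text.

```python
import numpy as np

L, BITS = 128, 8
rng = np.random.default_rng(0)

def activities(h):
    """Zero bits per locus for one haplotype h (flat array of 0/1 bits)."""
    return BITS - h.reshape(L, BITS).sum(axis=1)

def deviation(h1, h2, env):
    """D = sum_l |A_l + a_l - E_l| for one diploid individual."""
    return np.abs(activities(h1) + activities(h2) - env).sum()

def survives(h1, h2, env, x=0.9):
    return rng.random() < x ** (1 + deviation(h1, h2, env))

def make_gamete(h1, h2, C=0.512, m=0.9):
    a, b = (h1, h2) if rng.random() < 0.5 else (h2, h1)
    g = a.copy()
    if rng.random() < C:                    # intergenic crossover:
        cut = rng.integers(1, L) * BITS     # the cut falls between loci,
        g[cut:] = b[cut:]                   # so 8-bit alleles stay intact
    if rng.random() < m:                    # one reversible point mutation
        g[rng.integers(L * BITS)] ^= 1
    return g

env = np.full(L, 8)                         # ideal activity E_l = 8 everywhere
h1 = np.zeros(L * BITS, dtype=np.int64)     # all-0 bits: activity 8 per locus
h2 = np.ones(L * BITS, dtype=np.int64)      # all-1 bits: activity 0 per locus
print(deviation(h1, h2, env))               # 0: the fully complementary start
```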
As mentioned in the "Introduction" section, our main purpose is to check whether this more realistic model also presents the two different strategies of evolution, the Darwinian purifying selection and the complementarity of haplotypes, and how these patterns behave in a changing environment (Gibbons 2010). For this purpose we first need surviving populations. It is important to notice that, in this model, complementarity of haplotypes at a given locus means that the activities of the two corresponding alleles are different (heterozygous locus), but their sum is close enough to the environmental requirement for that locus for the individual to stay alive. On the other hand, purifying selection favours the homozygous state of loci, with the same activities of both alleles. Because we failed to reach full complementarity (for all loci) starting from homozygous loci, we decided to first start from a fully complementary configuration and check its stability, as described in the next section. Results for other initial configurations will be discussed in the "Other initial configurations" section.

Starting from full complementarity

Initially we set $E_l = 8$ for all loci, meaning that the ideal activity (the sum of the activities of both alleles) of any locus equals 8, and set all alleles of one bitstring to the bits 00000000 while all alleles of the second bitstring have the bits 11111111. That is, we start with ideal complementary activities and check their evolution. Our parameters are: population size N = 5000, number of genes L = 128, selection strength x = 0.9 (strong selection) or x = 0.995 (weak selection), no pre-natal selection, probability of changing one locus of the environment string e = 10^-4 (slowly changing) or e = 10^-2 (rapidly changing), a total number of time steps between 0.1 million and 10 million, and a mutation probability per string at birth m = 0.9, with all mutations reversible (0 ↔ 1). At each time step and for each adult individual, the activities $A_l$ and $a_l$ are computed, and their differences $\Delta = \sum_{l=1}^{L} |A_l - a_l|$ and deviations from the environmental ideal $D = \sum_{l=1}^{L} |A_l + a_l - E_l|$ are calculated. In the following figures we show the evolution of the averages per locus of both quantities, $\langle\Delta\rangle/L$ and $\langle D\rangle/L$. Due to the initial condition, we always have $\langle\Delta\rangle/L$ starting from eight and $\langle D\rangle/L$ starting from zero. Figure 1 shows, for x = 0.9 and a constant environment, that for C = 1 both averages rapidly approach the value 4, while for C = 0.512, $\langle\Delta\rangle/L$ decays from 8 to 5 and $\langle D\rangle/L$ increases from zero to 3. Such results indicate that the initial heterozygosity is maintained, although with fluctuations which are stronger for larger C, since crossing mixes the genomes, disturbing their initial perfectly heterozygous configurations. (We will turn back to the value 4 mentioned above when discussing Fig. 4.) Figure 2 shows the results for a slowly (e = 10^-4, part a) and a more rapidly (e = 10^-2, part b) changing environment, also including the crossing probability C = 0.256. Now both averages approach the value 4, independently of the C value. Notice the changed time scale from part (a) to part (b) due to early extinction: for x = 0.9, populations do not survive in a drastically changed environment. Note that for e = 0.01 one unit $E_l$ of the environment string is mutated, on average, every 100 steps. After 10,000 steps about 100 units were mutated, meaning that most of the 128 ideal gene activities were changed. If the selection pressure is high, there is no time for the population to adapt to these changes.
For e = 0.0001 the population died out only after almost 1 million time steps. Figure 3 shows that if the selection pressure is reduced, to x = 0.995, the populations are stable for times above one million time steps even for a rapidly changing environment (part a), and above 10 million time steps (part b) for a slowly varying one, even for C = 1. Now we go back to x = 0.9 in a constant environment and present in Fig. 4a the histograms for $|A_l - a_l|$ and for the bit-by-bit Hamming distance between the corresponding 8-bit genes of the two genome strings. For C = 1 about half of these distances are 0 and half are maximal (8), which explains why the averages presented in Fig. 1 go to 4. Figure 4b shows the variation with the recombination rate C. (For these histograms, all time steps between 90,000 and 100,000 and all adults were summed over.) In all results presented so far, averages were performed at each time step before applying the environment-dependent selection mechanism. If they are performed after selection, the populations are much smaller (since only adults contribute to them; not shown), and the averages stay much closer to the initial full complementarity, as shown in Fig. 5. Results do not change much if mutations are made irreversible instead of reversible, if babies are born from adults only, or if the genome length L is shortened from 128 to a smaller value.

Other initial configurations

When we started our simulations, our intention was to check whether we could reach full complementarity, not to depart from it. For this reason we established that the ideal number of bits set to 0 at a locus would have to be even: an odd-valued requirement would favour heterozygosity at this locus, independently of the value of the crossing probability, since the number of bits set at any locus of a haplotype is an integer in the model. A number of different initialisations were then tried before we could find one that led the resulting genetic pool of the population to resemble the expected complementarity. Those attempts were:
1. The environment requires the same activity E_l = 8 for each locus, and all individuals have all loci in the homozygous state: at each locus, each haplotype has 4 bits, chosen randomly, set.
2. The environment requires the same activity E_l = 8 for each locus; half of the individuals are in the homozygous state, as above, and the other half in the extreme complementary state, as in the previous section.
3. Initially the environment requires no bits to be set at all loci, and all individuals are ideal, i.e., have no bits set. Then the environment changes constantly with some probability e per step.
The results of strategy 2 quickly decay into those for strategy 1, no matter what value of C is used. This means that the complementary state is metastable and that the homozygous state is much more robust. In fact, when strategy 1 is used, the population never leaves the homozygous state, for all values of C. These results may be understood by the following reasoning: in the perfectly heterozygous state, if the effect of new mutations is omitted, only 50% of the tentative newborns will have complementary genomes, while the other 50% have genomes composed of two identical haplotypes, which are lethal. In the case of a fully homozygous state, all newborns survive.
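The 50% figure above follows from simple counting, as the short check below illustrates: with two perfectly complementary parents and neither recombination nor mutation, each gamete is one of the two haplotypes, so half of the zygote types pair identical haplotypes and miss the required activity.

```python
# A quick combinatorial check of the 50% lethality argument.
import itertools

# Represent each haplotype by its per-locus activity: the all-0-bits
# haplotype has activity 8 at every locus, the all-1-bits one has 0.
parent = (8, 0)      # the two haplotype activities present in each parent
E = 8                # required total activity per locus

outcomes = [a + b for a, b in itertools.product(parent, parent)]
viable = sum(1 for s in outcomes if s == E)
print(f"{viable}/{len(outcomes)} zygote types meet the requirement")  # 2/4
```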
Strategy 3 has as a drawback the fact that if a mutation of the environment pattern adds ±1 to the required number of bits set at some locus, the end result may be an odd number, and this does not allow that particular locus to be homozygous in a well-fit individual. If we add or subtract two instead, the load carried by each environment mutation is so strong that it may lead the population to extinction. There are, nevertheless, unexplored regions of parameter space that might lead to interesting developments using this strategy or some variant thereof. The procedure that gave heterozygosity starts with the environment requiring no bits to be set at any locus, with all individuals ideal, i.e., having no bits set. The population is allowed to evolve for a number of time steps proportional to the initial population, and then an equilibration procedure starts: at each Monte Carlo step, with probability 0.001, the environment changes its requirement at some random locus by increasing the number of set bits required by one. This procedure continues until 16 loci require each of the even values 0, 2, ..., 14, which happens, for the parameters we used, shortly before 1 million time steps have elapsed. The environment loci requirements are no longer increased once this initialisation has reached these values. We start by presenting the most important results we obtained with this last version of the model, which also includes pre-natal selection, namely the partial answer to the original question about the establishment of a purifying regime for large values of C and of a complementary one for small values of this parameter. Figure 6 shows this feature for two different measures of heterozygosity, which is now understood as the translation of complementarity for this model. One is activity-wise: a locus is homozygous if the activities of both its alleles are the same, and heterozygous otherwise. The fraction of loci that are heterozygous in this sense is plotted against the value of C (+ symbols). The other measure is bit-wise: a locus is homozygous if the bit configurations of both its alleles are precisely the same, and heterozygous otherwise. The fraction of loci that are heterozygous in this sense is shown as a function of C (× symbols). The end result of both measures is similar: this fraction is higher for small than for large C. Nevertheless, the heterozygosity measured by the bit-wise Hamming distance is higher, which means that polymorphic alleles of the same activity for the same locus were generated several times independently when the recombination rate was low. Under higher recombination rates, the fractions of heterozygous loci measured by both Hamming distances, activity-wise and bit-wise, are the same, suggesting other mechanisms of generating the polymorphism. The results shown were produced by a single run for each value of C, as in the previous section. The other parameters are shown in the caption of Fig. 6. Now come some time evolution plots. The first, Fig. 7, shows how the average deviation D from the environment ideal per locus evolves. The equilibration step of this version starts at step 64,000 for the parameters used in this plot. Up to that point, the average deviation per locus, which started with a value of zero, grows thanks to the mutations to a stable value below 0.02 for all values of C. For C = 0 (no recombination) it stays small and even decreases during the equilibration process to half of its original value.
Now come some time evolution plots. The first, Fig. 7, shows how the average deviation D from the environment ideal per locus evolves. The equilibration step of this version starts at step 64,000 for the parameters used in this plot. Up to that point, the average deviation per locus, which started at zero, grows, thanks to the mutations, to a stable value below 0.02 for all values of C. For C = 0 (no recombination) it stays small and even decreases during the equilibration process to half of its original value. For C = 0.512 it increases during equilibration, but not by much, and by the end of this step it is back to the value it had before equilibration. For C = 1 the behaviour is completely different: as equilibration starts, the deviation first increases up to nine times its original value, and can lead smaller populations to extinction. At the end of this step, the deviation decreases back to its starting value and, at the end, when the population is brought down to the value of N, lies in between the values for the two other values of C. This difference between C = 0.512 and C = 1 agrees with Figs. 1, 2, and 4b.

Figure 8 shows how the allele activity difference per locus ⟨D⟩/L evolves with time. Before the start of the equilibration process it has a small residual value, reflecting the initialisation of the population and of the environment ideal (all zeroed for all loci). During the equilibration process it first rises for all values of C, more steeply for smaller values than for larger ones, but it decreases afterwards. At the end of this phase of the simulation it stabilises, albeit at values that depend on C monotonically: they are larger when C is smaller. This result reflects again the answer given by simulations of this model to the original question. For C = 1 the population is close to being completely homozygous, which drives the difference between allele activities to a small value, while for C = 0 heterozygosity emerges and the value gets close to unity. As in Figs. 1 to 7, homozygosity increases with increasing recombination. The importance of inbreeding for this effect is shown by the data from a simulation where the population size is initially 300, whereas this value was 2,000 for all the others. Now, not only is the size of this difference bigger, but the fraction of loci where the allele difference is non-zero also increases (not shown). This effect is even stronger for a population of 20,000 and agrees with earlier simulations (Zawierta et al. 2008; Waga et al. 2007, 2009; Helgason et al. 2008; Bońkowska et al. 2007), but contradicts our results in the "Starting from full complementarity" section (see caption of Fig. 2b), where N had little influence.

As a final statement about the issue of purifying selection versus the strategy of complementarity, or homo- versus heterozygosity, we present Fig. 9. It shows the distribution of the activity-wise Hamming distance, defined as follows: for each individual of the population, each of its haplotypes is compared with each of the two haplotypes of all the other individuals, and the summed activity difference between them is computed and normalised by the total number of such pairs. When C = 1 this distribution has a single peak close to 0. In fact, in this limit most loci are homozygous and the individuals are all very similar to each other. The resulting distances are all very small, and it does not matter which haplotype of one individual is compared to the haplotypes of other individuals, since they are essentially equal. This situation changes as C decreases, and a double-peaked distribution develops, analogous to Fig. 4a. For C = 0 it is clear that two patterns of haplotypes have been fixed in the population. Within each pattern the distances are very small, but the distance between two different patterns is substantial, characterising in a different way the establishment of a heterozygous regime in the population. For the strong selection that was used in the simulation, the average deviation from the ideal is essentially zero, and the maximum value of this distance would then be 512. The result of the simulation shows a peak close to 200, at 0.4 of the maximum.
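One way to realise the bookkeeping behind this distance is sketched below (illustrative Python with an assumed population layout: each individual is a pair of haplotypes, each haplotype a sequence of per-locus activities):

```python
from itertools import combinations

def haplotype_distances(pop):
    """pop: list of individuals; each individual is a pair of haplotypes;
    each haplotype is a sequence of per-locus activities (set-bit counts).
    Returns one summed activity difference per haplotype pair, over all
    pairs of distinct individuals."""
    out = []
    for ind_a, ind_b in combinations(pop, 2):
        for ha in ind_a:
            for hb in ind_b:
                out.append(sum(abs(x - y) for x, y in zip(ha, hb)))
    return out

# Two fixed, mutually complementary patterns (a C = 0 caricature): distances
# within a pattern are 0, between patterns they are large -> two peaks.
p, q = [2, 2, 2, 2], [6, 6, 6, 6]
pop = [(p, p), (p, p), (q, q), (q, q)]
print(sorted(set(haplotype_distances(pop))))  # [0, 16]
```

A histogram of these values collapses to a single peak near 0 when C = 1 (all individuals essentially equal) and splits into the double-peaked shape described above when C = 0.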
This feature of a high degree of homozygosity for C = 1 has an interesting counterpart in the actual distribution of the crossing loci of the gametes of the population, as shown in Fig. 10. These loci are the points where the maternal and paternal bit-strings crossed over to produce the gametes which now form the investigated surviving adult. The few loci where full homozygosity did not set in are located close to the mid part of the chromosome; this is a general feature noticed in other models (Mackiewicz et al. 2010) and will be discussed further elsewhere. Because of this location, gametes generated by crossing near the middle of the chromosome will, with very high probability, end up pairing into homozygosity for some of those loci, with an activity far from the one required by the environment, leading the individual to fail prenatal selection. The individuals that are alive come from crossings that avoid this region of the genetic material.
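A sketch of the crossover bookkeeping behind Fig. 10 (hypothetical code, single-point crossover assumed): a gamete is built by copying one parental haplotype up to a random locus and the other from there on, and the crossing position is recorded so that its distribution over surviving adults can be histogrammed.

```python
import random

def make_gamete(h1, h2, crossover_prob):
    """Single-point crossover between two parental haplotypes (lists of
    per-locus values). Returns (gamete, crossing_locus); crossing_locus is
    None if no recombination happened (probability 1 - crossover_prob)."""
    if random.random() >= crossover_prob:
        return list(random.choice((h1, h2))), None
    cut = random.randrange(1, len(h1))          # the crossing locus
    a, b = (h1, h2) if random.random() < 0.5 else (h2, h1)
    return a[:cut] + b[cut:], cut
```

Recording the crossing locus only for gametes that end up in surviving adults reproduces the depleted mid-chromosome region of Fig. 10: cuts near the middle tend to break the complementary clusters and fail prenatal selection.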
As a final remark, we show a time evolution plot for a simulation where the environment varies with some probability after the initial equilibration (Fig. 11); for Figs. 6, 7, 8, 9, and 10 the environment was kept constant after its initial equilibration. It shows the effective population, defined as the difference between its actual size at the start of one Monte Carlo step and the number of individuals that die in that same step. For the case C = 1, where we have maximum homozygosity in the population, this variability leads to a quick extinction. This feature has to be checked for other values of the parameters, in particular for a less selective environment.

Discussion

In this article, we have assumed that two alleles complement each other if they have different activities (heterozygous locus) and together fulfil the environment requirement. If both alleles have the same activity (homozygous locus), their relation is not considered complementation even if they fulfil the environmental demands. In a constant environment, the most robust populations have all loci homozygous. This is an obvious result because, independently of the combination of haplotypes, the activities of the loci correspond exactly to the environment demand; thus, ignoring new mutations, the fusion of gametes always results in a surviving newborn. If a specific locus can be occupied by alleles of different activities, only some of them combine in such a way that they fit exactly the environment requirement. If we imagine such complementing alleles at a given locus, then to succeed in forming the fittest zygote, all other loci should also complement or should be homozygous. One should expect a tendency to form clusters of complementing alleles. This is more probable under a low recombination rate and/or high inbreeding (smaller effective populations); in fact, we have observed this phenomenon in our simulations (see Figs. 6, 9). On the other hand, genes located at the ends of chromosomes, close to the telomeric regions, are more efficiently separated by recombination, while genes in the middle of chromosomes can be transferred into gametes as a linked unit more frequently. As a result, genes in the middle of chromosomes are more prone to form clusters of genes which complement other clusters.

It is expected that recombination is restricted in the regions of such clusters. This is seen in Fig. 10, even for a relatively high recombination rate (C = 1). The same phenomena have been observed in models where genes were represented by single bits and existed in the genetic pool in only two states: wild (functional) and defective (recessive lethal) (Zawierta et al. 2008). The observed complementation and gene clustering is not connected with an effect of epistasis, as in the case of the phenomena described by Pepper (2003), in whose models genes could be relocated by recombination (inversion) on the chromosome.

The parameters of the model presented in this article seem to be too restrictive to allow the evolution of high polymorphism; the selection function is too rigorous, killing even slightly unfit genomes. For the version of the "Other initial configurations" section, the assumption that the activities of genes can be regulated over a broader range, e.g. a survival probability exp[−const·D²] instead of x^D, should allow the generation of a much higher polymorphism of the genetic pool, and it should be possible to find a configuration of parameters such that a larger set of haplotypes in the genetic pool is more advantageous in a changing environment.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Digital Smile Designed Computer-Aided Surgery versus Traditional Workflow in "All on Four" Rehabilitations: A Randomized Clinical Trial with 4-Years Follow-Up

The aim of the present study was to evaluate and compare the traditional "All on Four" technique with digital smile designed, computer-aided "All on Four" rehabilitation, with a 4-year follow-up. The protocol was applied to a total of 50 patients, randomly recruited and divided into two groups. The digital protocol allows completely virtual planning of the exact position of the fixtures, which makes it possible to perform a flapless surgical procedure with great accuracy (mini-invasive surgery) and to use virtually planned prostheses realized with Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM) methods for immediate loading of the implants. Four years after treatment, a 98% success rate was obtained for the group of patients treated with the traditional protocol and 100% for the digital protocol. At each time interval a significant difference in peri-implant crestal bone loss between the two groups was detected, with an average Marginal Bone Loss (MBL) at 4 years of 1.12 ± 0.26 mm in the traditional group and 0.83 ± 0.11 mm in the digital group. Patients in the digital group judged the immediate loading (92%), the digital smile preview (93%), the mock-up test (98%) and the guided surgery (94%) as very effective. All patients treated with the digital method reported lower values of intra- and post-surgical pain than patients rehabilitated with the traditional treatment. In conclusion, the totally digital protocol described in the present study represents a valid therapeutic alternative to the traditional "All on Four" protocol for implant-supported rehabilitations of edentulous dental arches.

Introduction

The therapeutic efficacy of rehabilitations based on a reduced number of implants, with a high aesthetic and functional yield, is now universally recognized [1-4]. Among the implantology protocols most widely adopted for the treatment of dental arches with moderate to severe bone atrophy, the "All on Four" technique continues to achieve great success within the scientific community [5-8]. This method involves the placement of four implants: two axial ones positioned in the anterior sector and two inclined at about 30-45° with respect to the occlusal plane in the lateral alveolar areas. This inclination makes it possible to distalize the implant emergence and thus support a prosthetic arch up to the first molar, while avoiding damage to noble structures such as the maxillary sinus (upper arch) and the inferior alveolar neurovascular bundle (lower arch). It also avoids the need for bone regeneration procedures in the presence of severe atrophies [5]. In recent years, digital technologies have significantly changed clinical dental practice with regard to diagnosis, prosthetic planning, guided surgery and implant-supported prostheses.

Patients Selection

The implant-prosthetic protocol was applied to a population of 50 patients aged between 46 and 85, who underwent rehabilitation of an edentulous arch with a reduced number of implants at the Department of Dentistry (San Raffaele, Milan), directed by Prof. E. F. Gherlone. Twenty-five patients were randomly selected and subjected to the implant-prosthetic protocol with the digital method; the remaining twenty-five underwent the traditional "All on Four" protocol (Figure A5).
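A minimal sketch of the random 25/25 allocation described above (illustrative only; as noted below, the study actually used lots drawn from closed envelopes by a blinded operator, not software):

```python
import random

patients = [f"P{i:02d}" for i in range(1, 51)]   # 50 anonymised patient IDs
random.shuffle(patients)

digital_group = sorted(patients[:25])
traditional_group = sorted(patients[25:])
assert len(digital_group) == len(traditional_group) == 25
```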
Inclusion criteria were: patients of any ethnicity over 18 years of age, male and female; patients in good general health, without chronic disease (immunosuppression, untreated coagulation problems, chemotherapy and radiotherapy, bisphosphonate drug intake, cardiac conditions and uncompensated diabetes). The selected patients had to have at least one totally edentulous arch, or an arch with only a few hopeless elements, a mouth opening wider than 50 mm, and sufficient bone available for implant placement: for the edentulous maxilla, the anatomical inclusion criterion was a residual ridge crest at least 4 mm wide buccolingually and at least 10 mm high from canine to canine; for the mandible, a residual ridge crest at least 4 mm wide buccolingually and at least 8 mm high in the intraforaminal area. Exclusion criteria were: smoking and drug habits, pregnancy, an irregular or thin bone crest, and a high smile line in the maxilla that would have required bone reduction.

First Appointment

Patients from both groups were examined in a preliminary oral examination at the Dentistry department of the Vita-Salute San Raffaele University. During the appointment, after a detailed compilation of the medical and dental history, the clinicians confirmed the presence of an edentulous maxilla or, in patients with a few hopeless elements, performed full-mouth extractions before the procedure and delivered a temporary immediate total prosthesis. The clinician then prescribed an initial orthopantomography and took alginate impressions for the construction of an occlusal rim, in order to produce a total diagnostic prosthesis correct from an aesthetic and functional point of view. Once it was clear that a patient could be included in the clinical protocol, he or she signed a specific informed consent form for implant surgery with immediate loading. Before the next session, the patients were divided into two groups through a randomization process: 25 patients underwent the digital protocol, and the remaining 25 were treated with the traditional protocol. Randomization occurred by lots in closed envelopes and was performed by a blinded operator.

Second Appointment

The patient underwent a professional oral hygiene session of the antagonist arch. Photos of the edentulous jaw were taken (Figure 1) and the wax wall was "functionalized" using a traditional method.

Third Appointment (Traditional Protocol)

A prosthetic device structure and functionality test and an aesthetic/phonetic evaluation test were performed. Each patient filled out a one-dimensional Verbal Rating Scale (VRS) assessing their appreciation of the aesthetic test (1-very effective, 2-effective and 3-ineffective) (Figure A1).
All these procedures then led to the realization of a traditional provisional prosthetic device.

Fourth Appointment (Traditional Protocol): Surgical Phase and Immediate Loading Prosthesis

One hour before surgery the patient received 2 g of amoxicillin (Zimox, Pfizer Italia, Latina, Italy) and continued to take 1 g twice a day for the week after the surgical procedure. After local anesthesia (4% articaine with 1:200,000 adrenaline), an incision was made along the centre of the ridge for its entire length, from the area of the first molar to the contralateral first molar area, with bilateral release incisions; a full-thickness mucoperiosteal flap was elevated and bone remodeling was performed where necessary to obtain a uniformly leveled bone crest. Two implant fixtures were inserted in the lateral alveolar areas, tilted by about 30-45 degrees relative to the occlusal plane; the two axial fixtures were then inserted in the anterior sector (Figure 2). In the presence of bone with a well-represented trabecular portion, an under-preparation was performed to obtain the high primary stability necessary for the subsequent immediate loading. The insertion torque of all implants was in the range of 35-55 Ncm. EATx Winsix extreme abutments (Biosafin S.R.L., Ancona, Italy) of 0°, 17° or 30° were screwed in at 10-20 Ncm, to compensate for the lack of parallelism between the implants; the angle was chosen so that the screw access hole emerged at the occlusal or lingual level of the prosthesis. The access flap was adapted and sutured with absorbable 4-0 sutures.
At the end of the surgery, specific temporary abutments for immediate loading (EAx, Biosafin S.R.L., Ancona, Italy) were placed, the mucosa was isolated with a dental dam, and the prosthesis was adapted and relined directly in the patient's mouth with cold resin. The prosthesis was then refined and polished in the on-site laboratory, where the palatal portion was removed. Finally, the prosthetic device was screwed back into the patient's mouth to obtain immediate loading of the implants.

Third Appointment (Digital Protocol)

During the third visit, an occlusal rim, previously functionalized according to traditional phonetic and aesthetic criteria, was tested. The specific photographic protocol for digital planning, including intraoral and extraoral photos of the patient, was performed. All pictures were taken with the occlusal rim positioned inside the patient's mouth, with landmarks placed on the anterior portion of the rim (on both sides, the canine line and the intermediate line between the canine line and the median line). These landmarks allow the alignment of the photographs with the Standard Triangle Language (STL) file inside the CAD software. Two extraoral photos were also taken with a specific measurement marker positioned on the side of the patient's face; these were used for the realization of a two-dimensional digital project of the new smile (smile design). A one-dimensional VRS scale (1-very effective, 2-effective and 3-ineffective) assessing the patient's appreciation of the computerized previsualization of the prosthetic project was submitted to the patients (Figure A3).

Between the third and fourth appointments, the two-dimensional digital project of the new smile was realized using the Smile Lynx software (8853 S.P.A., Milan, Italy). The scans of the edentulous model and of the previously mentioned occlusal rim were obtained using a laboratory scanner (MyRay 3Di TS, Cefla, Italy). The scans were then matched with the 2D digital project within the CAD software (CAD Lynx, 8853 S.P.A., Milan, Italy), allowing the three-dimensional design of the prosthesis (Figure 3). The provisional total prosthesis, complete with the palatal portion, was milled in PMMA (poly(methyl methacrylate)) by a five-axis CAD/CAM milling machine (Figure 4).
Fourth Appointment (Digital Protocol)

A mock-up test using the provisional prosthesis was performed, previewing the aesthetic appearance of the definitive prosthetic device. The patients then filled in a one-dimensional VRS scale assessing their appreciation of the mock-up test (Figure A1).
A specific device with a radiographic landmark (Evo-Bite with 3D-Marker, 3DIEMME, Como, Italy) was then adapted to the prosthesis directly in the oral cavity with radiotransparent silicone and delivered to the patient at the end of the appointment for the radiological exam. Various scans were then acquired with the same spatial coordinates: one of the stereolithographic model alone, one of the temporary prosthesis placed on the model, and one of the prosthesis on the model with the Evo-Bite positioned on it (3D-Marker, 3DIEMME, Como, Italy). A CBCT (Cone Beam Computed Tomography) was prescribed to the patient, to be taken with the patient wearing the temporary prosthesis with the Evo-Bite positioned on it, including an additional radiopaque marker used as a reference for the subsequent radiologic evaluation (Scan Marker, 3DIEMME, Como, Italy).

Using the RealGuide Implant Design Software (3DIEMME, Milan, Italy), the Digital Imaging and Communications in Medicine (DICOM) data of the patient's CBCT were then matched with the STL data of the previously mentioned scans, and the virtual position of the implants was planned on the basis of the aesthetic prosthetic project (Figures 5 and 6). The implant project was then sent to the laboratory for the realization of the stereolithographic model, which reported the exact sites for the placement of the analogs, and of the surgical guide (3DIEMME, Milan, Italy) (Figure 7).

An hour before the surgery, 2 g of amoxicillin + clavulanic acid was given to the patient, who continued to take 1 g twice a day for the following week. After local anesthesia (4% articaine with 1:200,000 adrenaline), the surgical template was positioned and fixed in the patient's oral cavity (Figure 8).
The implants were inserted through the surgical guide with the flapless technique, using a preordained sequence of drills dedicated to guided surgery (Figure 9). The two tilted fixtures were inserted in the lateral alveolar areas, at about 30-45 degrees relative to the occlusal plane; the two axial fixtures were then placed in the anterior portion (Figure 1). Winsix TTx implants (Biosafin S.R.L., Ancona, Italy) with a diameter of 3.3 or 3.8 mm, 11 or 13 mm long for the axial fixtures and 13 or 15 mm long for the tilted implants (Table 1), were used. All implants were inserted with 35-55 Ncm torque. The EATx WinSix extreme abutments (Biosafin S.R.L., Ancona, Italy) of 0°, 17° or 30° were screwed on at 10-20 Ncm, selected in advance according to the implant-prosthetic project within the guided-surgery software, to offset the lack of parallelism between implants; the angle was chosen so that the screw access hole emerged at the occlusal or lingual level of the prosthesis. Specific temporary abutments (EAx, Biosafin S.R.L., Ancona, Italy) were placed, and the mucosa was isolated with a dental dam sheet. Immediate loading was then performed, positioning the provisional prosthetic device, which was adapted and relined directly with pink cold resin. The device was then refined in the laboratory, where the palatal portion was removed (Figure 10). Finally, the prosthetic device was screwed back into the patient's mouth (Figure 11).
After all the surgical-prosthetic procedures, a visual analogue scale (VAS) was submitted to both groups to evaluate pain (during and after surgery), with values from 0 (no pain) to 10 (the maximum possible pain) (Figure A2).

Final Prosthesis

Four months after the surgery, an impression was taken. In the traditional group, the prosthetic rehabilitations were manufactured using a conventional pick-up impression: impression transfers were screwed over the fixtures, and the impression material used was Impregum (Impregum Penta, 3M Italia, Pioltello, Italy). In the digital group, an intraoral scanner was used: scan bodies (for TTx, Winsix, Biosafin S.R.L., Ancona, Italy) were screwed over the fixtures and splinted together. The intraoral scanner used was a Carestream CS 3500 (Version 2.5 Acquisition Software, Carestream Dental LLC, Atlanta, GA, USA). Monolithic zirconia final prostheses with vestibular ceramization were delivered using CAD/CAM technology in both groups (Figure 12). A final orthopantomography was prescribed to the patient (Figure 13).

Follow-Up

Follow-up visits were performed at 12, 24, 36 and 48 months after the surgery. These appointments included radiographic analysis for the evaluation of marginal bone loss. The intraoral radiographs were made with the long-cone parallel technique, with the beam perpendicular to the longitudinal axis of the implant, using a custom occlusal model to measure the level of the marginal bone. The difference in bone level was then measured with dedicated software (DIGORA 2.5, Soredex, Tuusula, Finland), calibrated for each image using the implant diameter measured at the most coronal portion of the implant neck. The linear distance between the most coronal point of bone-implant contact (BIC) and the coronal margin of the implant neck was measured on both the mesial and distal sides, to the nearest 0.01 mm, and a mean value was calculated. In addition, professional oral hygiene procedures were performed six months after implant placement and every four months thereafter.
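The calibration arithmetic just described can be sketched as follows (hypothetical numbers; the study used DIGORA, not custom code): the known implant-neck diameter converts pixel measurements into millimetres, and the mesial and distal readings are averaged.

```python
def marginal_bone_loss(diameter_mm, diameter_px, mesial_px, distal_px):
    """Convert radiographic pixel distances to mm and average the two sides."""
    mm_per_px = diameter_mm / diameter_px      # per-image calibration
    mesial = mesial_px * mm_per_px
    distal = distal_px * mm_per_px
    return round((mesial + distal) / 2, 2)     # to the nearest 0.01 mm

# Hypothetical reading: a 3.8 mm implant neck spanning 95 px, with 28 px
# (mesial) and 24 px (distal) between the BIC point and the implant neck.
print(marginal_bone_loss(3.8, 95, 28, 24))     # 1.04
```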
It was then possible to measure the difference in bone level through specific software (DIGORA 2.5, Soredex, Tuusula, Finland), calibrated for each image using the implant diameter calculated on the most coronal portion of the implant neck. The linear distance between the most coronal point of the BIC. (bone-implant contact) and the coronal margin of the implant neck was measured on both mesial and distal sides, at the value closest to 0.01 mm, and then a mean value was calculated. Besides, professional oral hygiene procedures were performed six months after implant placement and every four months after that. Follow-Up Follow-up visits were performed at 12, 24, 36 and 48 months after the surgery. These appointments provided for radiographic analysis for the evaluation of marginal bone loss. The intraoral radiographs were made with the long cone parallel technique, performing the radiography perpendicular to the longitudinal axis of the implant, using a custom occlusal model to measure the level of the marginal bone. It was then possible to measure the difference in bone level through specific software (DIGORA 2.5, Soredex, Tuusula, Finland), calibrated for each image using the implant diameter calculated on the most coronal portion of the implant neck. The linear distance between the most coronal point of the BIC. (bone-implant contact) and the coronal margin of the implant neck was measured on both mesial and distal sides, at the value closest to 0.01 mm, and then a mean value was calculated. Besides, professional oral hygiene procedures were performed six months after implant placement and every four months after that. Statistical Analysis Dedicated software (GraphPad Prism 8.1.2, GraphPad Software Inc., California, United States) was used for statistical analysis. Peri-implant bone level measurements were reported as mean ± standard deviation values at 12, 24, 36 and 48 months. Through the one-way ANOVA test (p < 0.05), peri-implant bone loss was compared between the two groups at each time interval (12,24,36 and 48 months) and within each group by analyzing each time stage with the following ones. (Table 1). From All patients received a temporary prosthetic device and, after 6 months from the procedure, a definitive prosthetic device. All implants were inserted at a torque of at least 35 Ncm and were subjected to immediate loading. Implant Failure and Complications Among the patients rehabilitated according to the traditional protocol, during the first 4 months after implant insertion, 2 failures were recorded, one in the upper maxilla and one in the lower maxilla, both concerning tilted implants ( Table 2). The implant fixtures were immediately replaced without compromising the prosthetic function. In patients rehabilitated with the digital protocol 100% implant survival was achieved. A patient treated with the traditional protocol showed discomfort, pain, swelling and the presence of pus three months after surgery, while no episode of peri-implantitis, pain, paresthesia or pus was observed among the patients rehabilitated according to the digital protocol (Table 3). Two fractures of the provisional prosthetic device were recorded for each group. Occlusal screw loosening of provisional prosthesis was observed in five cases: three were treated with a traditional method and two with the digital method. 
Patients' Appreciation

Patients treated with the traditional protocol considered immediate loading with a temporary prosthesis to be very effective (95%). As for the mock-up test, 45% of the patients considered it very effective, 37% effective and 18% ineffective. Traditional surgery was rated as very effective by 71% of patients and effective by the remaining 29% (Table 5). Patients treated with the digital protocol considered the digital smile previsualization (93%), the mock-up test (98%), the guided surgery (94%) and the immediate loading (92%) to be very effective (Table 5). At the end of the surgical procedures and after seven days (Figure A4), a visual analogue scale (VAS) was submitted to the patients for the evaluation of postoperative pain. All patients in the group treated with the digital method, which involves flapless surgery, reported significantly lower pain values than patients treated with the traditional method.

Discussion

The aim of this study was to evaluate the survival rate of implant-prosthetic rehabilitations in patients with an edentulous arch rehabilitated according to an entirely digital protocol, in order to assess the value of this approach in the prosthetic and surgical phases of treatment, comparing it with the traditional "All on Four" method, already validated by numerous studies in the literature [5-8]. Capparé et al. and Gherlone et al. demonstrated that the "All on Four" method can also be used in HIV-positive patients with a stable immune system [15-17]. The digital planning of the implant-prosthetic rehabilitation begins with the use of Smile Design, which makes it possible to obtain a two-dimensional project of the patient's future smile. This allows correct planning of the rehabilitation in aesthetic terms and improves the interaction between specialists and communication with the patient, who has been shown to appreciate the previsualization; it therefore allows a higher quality of treatment, as already described by Coachman et al. in 2017 [18].
Patients' appreciation of digital aesthetic planning was also described by Cattoni et al. in 2016, through a VAS-type scale measuring each subject's satisfaction with the final aesthetic result of ceramic crowns and veneers placed in the anterior areas [19]. Cattoni et al. in 2020 also evaluated a possible neurocognitive measure of how self-perception can change as a significant consequence of aesthetic prosthetic rehabilitation, with brain activation reduced for all the other conditions, including self-portraying pictures taken before the intervention and pictures of others. Most importantly, that study reports that, among all self-portraying faces at the different stages of the prosthetic rehabilitation, those portraying the subject in her/his actual physiognomy have a somewhat special status in eliciting selectively greater brain activation in the supplementary motor area (SMA) [20].

A specific software allowing the transition from the two-dimensional previsualization of the smile to a three-dimensional volumetric study, and then CAD/CAM processing for the realization of the prosthetic product, was used, as also described by Coachman et al. in 2017 [18]. Kapos et al. reported in 2014 that the survival rates of crowns, abutments and superstructures made with CAD/CAM technology are similar to those manufactured with traditional methods [21]. The digital construction of the prosthetic device can be accompanied by the digital planning of the surgical procedure, thanks to the matching between the data of the prosthetic project and the data obtained by CBCT, as described by several authors [22,23]. Schneider et al. in 2009 and Vinci et al. in 2020, among other authors, showed the efficacy and accuracy of computer-assisted implant surgery [24,25]. The overlap of intra- and extra-oral photographs, models, intraoral scans and CBCT is recognized as a reliable procedure by the fifth Consensus Conference of the European Association for Osseointegration of 2015 [26]. Meloni et al. in 2010, in a retrospective analysis conducted on 15 patients, described the possibility of planning implant surgery in a guided and flapless way, with immediate loading [27]; this has also been confirmed by other authors, such as Komiyama et al. in 2012 [28]. The present study involves the use of mucosa-supported surgical templates, and Gallardo et al. in 2016 and Vinci et al. in 2020 confirmed that this is a predictable procedure for implant placement [25,29]. It is widely known that flapless implant insertion greatly reduces pain and discomfort during and after surgery compared with open-flap procedures, as also demonstrated in the present study [30,31]. Similarly, the main advantages of computer-assisted implant surgery, as already described by Hultin et al. in 2012, are the significant reduction of pain and postoperative discomfort for the patient and the possibility of creating a temporary prosthesis to be used for the immediate functionalization of the implants [32]. The main contraindications of guided surgery, as already described in the literature, are insufficient bone volume, remaining teeth that interfere with the planning of implant placement, insufficient mouth opening (at least 50 mm is needed to accommodate the surgical instrumentation), a need for bone reduction due to a high smile line in the maxilla, and an irregular or thin bone crest [34].
The inclusion and exclusion criteria of the present study took all these contraindications into account, especially insufficient bone volume. The accuracy and predictability of intraoral scanners for full-arch implant rehabilitations have been demonstrated by many authors, so digital impressions are a viable alternative to analogue techniques [35,36]. The levels of peri-implant bone loss obtained in the present study proved to be similar to those reported by other authors in the literature, both for the group of patients treated with traditional surgery and for the group treated with guided surgery [7,34,37]. The present clinical trial has some limitations, the main one being the length of follow-up: studies of this type need longer follow-up, and further studies with a larger number of patients are also needed.

Conclusions

The results obtained show that the present, entirely digital protocol represents a valid therapeutic alternative to the traditional "All on Four" protocol for the implant-supported rehabilitation of edentulous arches. However, more long-term prospective clinical trials are needed to confirm the effectiveness of the surgical-prosthetic protocols used in this study, and the planning difficulties should not be underestimated: to be successful, broad knowledge and mastery of topographical anatomy, radiographic imaging, surgical techniques and prosthetic procedures are essential. Clinical cases for both methods must be selected carefully, following the inclusion and exclusion criteria described previously. Ultimately, with the evolution of technologies, it is hoped that the digital workflow can be further simplified and become increasingly within reach of every clinician.

Appendix A

Figure A1. Questionnaire 1.
Multidisciplinary studies supporting conservation programmes of two rare, endangered Limonium species from Spain

Two local threatened endemics from Valencian salt marshes were analysed from a multidisciplinary perspective, combining field studies with experiments performed under controlled greenhouse conditions. The work aimed to investigate the habitat of the two species, but also to explore their limits of tolerance to severe drought and salinity and the mechanisms behind their stress responses. The number of individuals in several populations, the climatic conditions, the soil characteristics and the accompanying vegetation in the natural habitats were analysed in the field study. Plants obtained by seed germination were grown in the greenhouse and subjected to one month of water and salt stress treatments, after which growth and biochemical parameters were analysed. No correlation between climatic parameters and the number of censused individuals of the two Limonium species could be established. Although L. dufourii was found in more saline soils in the natural habitats, under controlled greenhouse conditions this species was more severely affected by the salt treatment than L. albuferae, which is more susceptible to water stress. A common biochemical response was the increase of proline under all stress treatments, but mostly in water-stressed plants. The oxidative stress markers MDA and H2O2 did not indicate significant differences between the treatments. The differences in the two species' responses to the two kinds of stress correlated with the activation of antioxidant enzymes, more pronounced under salt stress in L. albuferae and under water stress in L. dufourii. Although L. albuferae is found in sites with lower salinity in the natural habitats, the greenhouse experiment indicated that it tolerates higher concentrations of salt than L. dufourii, which is more resistant to drought. The two species efficiently mitigate oxidative stress by activating antioxidant enzymes. The results obtained may be helpful for the conservation management of the two species: whereas salinity is not problematic, as under controlled conditions the two species tolerated salinities far beyond those in their natural environments, water scarcity may be a problem for L. albuferae, which proved to be more susceptible to water deficit.

Introduction

Coastal salt marshes represent ecosystems of great biodiversity and great ecological value (Gardner et al. 2015; Mitsch et al. 2015; Sutton-Grier and Sandifer 2019; Wolanski et al. 2009). In the region of Valencia (E Spain), they often appear as depressions integrated into dune systems, with the most saline areas located in the centre and the least saline at the edges of the salt marsh. The distribution of the different plant species in these saline areas is mainly determined by their relative tolerance to salinity, so that the plant communities are installed in concentric rings depending on the salinity of the soil, although other factors, such as competition between species, can contribute significantly to the distribution of plants in the salt marsh (Grigore and Toma 2020). The complex of salt marshes developed in the territory of the Albufera Natural Park, located a few kilometres south of the city of Valencia, is of particular floristic and environmental interest (Ballester et al. 2003; Soria 2006). It shelters the unique populations of the endemic Limonium albuferae, found only in this area (Ferrer-Gallego et al. 2016), and Limonium dufourii, which is also present in a few other salt marshes outside the Natural Park (Aguilella et al. 2010).
The genus Limonium Mill. (Plumbaginaceae) is outstanding in the region of Valencia: of the 28 species present, 20 are Iberian endemics, and 12 grow exclusively in this region (Mateo and Crespo 2014). One of the most threatened endemic Limonium species of the Valencian territory is L. dufourii (Girard) Kuntze (Aguilella et al. 2010). Historically, this species was more widely distributed along the coast and salt marshes of the region of Valencia, but today it is represented by only five natural populations restricted to small coastal areas in the provinces of Castellón (Torreblanca) and Valencia: Marjal dels Moros (with three populations), El Saler (Albufera Natural Park) and Cullera (Aguilella et al. 2010). Most of these populations have a very low number of individuals, and molecular analyses show that substantial genetic variability and differentiation exist within and between populations (Palacios and González-Candelas 1997; Palacios et al. 1999). All the populations of L. dufourii are included in the Plant Microreserve network or in the Natural Parks (L'Albufera, Prat de Cabanes-Torreblanca) of the Valencian Community and, additionally, in the European Union's Natura 2000 network of protected sites (as Sites of Community Importance, SCI). The species is strictly protected in the Valencian region at the highest legal category, "In danger of extinction", in the Valencian Catalogue of Threatened Plant Species (Aguilella et al. 2010).

Limonium albuferae P.P. Ferrer et al. is known only from a small site in the Albufera Natural Park, Racó de l'Olla (Ferrer-Gallego et al. 2016). At the beginning of 2020, 255 plants were counted, covering an area of about 160 m². Therefore, this species will be included in the "In danger of extinction" category in the next edition of the Valencian Catalogue of Threatened Plant Species. In a previous study on the two species (L. dufourii and L. albuferae), based on metabolite profiling and the analysis of ion transport and accumulation, L. albuferae was found to be more salt-tolerant than L. dufourii, primarily due to its ability to accumulate fructose as a specific osmolyte (González-Orenga et al. 2019a). However, there is no information on the responses of the two species to drought, which can also affect their natural populations, especially under the changing climatic conditions of the global warming scenario, nor on their ability to activate antioxidant mechanisms.

Salinity and drought, like all other types of abiotic stress, are associated with an increase in the production of reactive oxygen species (ROS), which cause cellular damage by oxidising unsaturated fatty acids in cell membranes, amino acid residues in proteins, and DNA molecules (Apel and Hirt 2004; Choudhary et al. 2019; Das and Roychoudhury 2014). Different biomarkers can be used to assess the extent of the oxidative stress affecting plants; for example, malondialdehyde (MDA), a lipid peroxidation product employed as a reliable oxidative stress marker in both animals and plants (Del Río et al. 1996), or hydrogen peroxide (Sofo et al. 2015). In response to increased ROS production, plants activate two main categories of antioxidants. The first is represented by non-enzymatic antioxidants, including phenolic compounds, especially the subclass of flavonoids, carotenoids, ascorbic acid, or glutathione.
To the second category belong antioxidant enzymes, such as superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidase (APX) and other peroxidases, or glutathione reductase (GR), which are activated under conditions of oxidative stress (Dumanović et al. 2021; Ozgur et al. 2013).

This study has been performed from a multidisciplinary perspective, including an analysis of soil and vegetation in the natural environments of the two species, completed with climatic information, and an analysis of the physiological and biochemical responses of plants grown under controlled stress conditions in the greenhouse. Several questions concerning the habitats occupied by the two species were posed. Are there any differences between the two species in the soil characteristics and in the composition of the plant communities? Is the decline of the populations of L. dufourii related to reduced water availability or to increased salinity, enhanced by changes in the climatic conditions due to global warming? On the other hand, considering that water scarcity and increased soil salinity may be restrictive factors, the study aims to analyse the responses of plants to induced water stress and salinity under controlled greenhouse conditions, with special emphasis on their antioxidant mechanisms, not investigated before in these species. Based on our previous knowledge, the starting hypothesis was that the decline of some populations of L. dufourii is related to changes in its habitat and to a lower efficiency of its stress tolerance mechanisms compared with the recently described L. albuferae.

Area of study

The field study was conducted in several salt marshes of the Albufera Natural Park, located in the "Devesa de l'Albufera" (Valencia province, Spain). The area has belonged to the Wetlands of International Importance of the Ramsar Convention since 1990, and in 1991 it was declared a Special Protection Area under the EU Directive on the Conservation of Wild Birds (79/409/EEC). It also contains habitats and refuges of species included in the EU Habitats Directive (92/43/EEC), and it is classified within the Special Protection Areas in the Mediterranean, according to the Geneva Protocol (Soria 2006). The populations of the two Limonium species (L. dufourii and L. albuferae) are located in small salt marshes, locally named 'malladas', which are inter-dune depressions, often inundated during the winter period.

Climatic analysis

To establish a correlation with the evolution of the number of individuals in the censused populations, climate data were retrieved from SIAR, the Agroclimatic Information System for Irrigation (SIAR 2020) of the Spanish Ministry of Agriculture, Fisheries and Food. Data on the mean, maximum and minimum temperatures, rainfall and reference evapotranspiration (ETo) were collected on a monthly basis for the past 19 years from the agroclimatic station of Benifaio (Valencia province), located 11 km from the area of study.

Population censuses

In each monitoring unit, censuses were made following the methodology of the Spanish Atlas and Red Data Book of Vascular Plants (Iriondo et al. 2003, 2009), adapted for the monitoring of endangered Valencian plant species by Navarro et al. (2010). Censuses were made from late July to late August, coinciding with the blooming period.
dufourii, five natural population monitoring units were established, referred to as Devesa A (monitored since 2004), B (since 2005), C (since 2005) and D (since 2006); a new E unit was established in 2020, as a result of the tracking made for the present work. Additionally, Devesa 1 and 2 units were established for two new artificial populations, planted in winter 2013-2014. Although the species vanished from monitoring units A, C and D in 2008-2009, their sites have been revisited every year, corroborating the absence of the species.

Vegetation analysis

Vegetation inventories were carried out in the areas where the populations of the two species are located in the study territory. The study was conducted according to the phytosociological method (Braun-Blanquet 1964), adopting the International Code of Phytosociological Nomenclature (Weber et al. 2000). Braun-Blanquet values were transformed according to van der Maarel (1979). The nomenclature of the taxa follows EuroMed (2006), and the syntaxonomic nomenclature follows the corresponding standard reference. Three measurements of soil electrical conductivity (EC, dS m⁻¹) were performed with a WET sensor (Delta Devices, Cambridge, England) at 10 cm depth in each inventory. The inventories were carried out mainly from mid-June to mid-November 2019.

Soil characteristics

Soil sampling was performed in July 2019. Samples were taken at 0–10 cm and 10–20 cm depth in the vicinity of specimens of the two species, from the one salt marsh where the only known population of L. albuferae is located, and from three salt marshes for L. dufourii. From each salt marsh, three soil samples were taken (n = 3). The samples were air-dried at room temperature, then crushed with a roller to break aggregates and passed through a 2-mm sieve. Analyses were performed on fine soil (diameter < 2 mm). Soil texture was analysed by the hydrometer method (Bouyoucos 1962). Organic matter was determined by the Walkley and Black (1934) method, and carbonates by the Bernard calcimeter technique (Loeppert and Suarez 1996). The following parameters were analysed in a saturation extract: pH, electrical conductivity (EC), Cl⁻, Na⁺, K⁺, Ca²⁺, and Mg²⁺. A Crison pH-meter Basic 20 and a Crison conductivity-meter Basic 30 (Crison, Barcelona, Spain) were used to measure pH and EC, respectively. Sodium and potassium were quantified with a PFP7 flame photometer (Jenway Inc., Burlington, VT, USA), and chlorides were measured in a MKII Chloride Analyzer 926 (Sherwood, Inc., Cambridge, UK). Divalent cations (Ca²⁺ and Mg²⁺) were measured with a SpectrAA 220 atomic absorption spectrometer (Varian, Inc., CA, USA).

Plant growth under greenhouse conditions

Seeds of L. albuferae and L. dufourii provided by the Centre for Forest Research and Experimentation of the Valencian Region (CIEF, Valencia) were sown on a mixture of commercial peat and vermiculite (3:1) and watered with Hoagland nutrient solution (Hoagland and Arnon 1950). After three weeks, plantlets were transferred to individual 1 L pots placed in plastic trays, with five pots per tray, and watered for one further week with Hoagland solution. One week later, when the plants had reached a sufficient size, stress treatments were started. Plants subjected to the salt treatments were watered with aqueous solutions of 200, 400, 600, and 800 mM NaCl; those for the controls with distilled water, and those for the water stress (WS) treatment were not irrigated at all.
Watering was performed by adding 1 L of the corresponding salt solution or water to each tray every five days. Five replicates (individual plants) were used per species and per treatment. All experiments were conducted in a controlled environment chamber in the greenhouse under the following conditions: long-day photoperiod (16 h of light), temperature of 23 °C during the day and 17 °C at night, and 50-80% relative humidity. Moisture and EC in the pots were measured with the WET sensor (Delta Devices, Cambridge, England) at the beginning of and during the treatments, as long as permitted by the device's limitations. Pot substrates were collected at the end of the treatments, and moisture and EC were determined in the laboratory. Moisture was determined by the gravimetric method. The samples were dried in an oven at 105 °C until they reached constant weight and then weighed again to calculate the water content as WC% = [(FW − DW)/FW] × 100, where FW and DW are the fresh and dry weights of the substrate samples. For EC measurements, samples were collected from each pot, air-dried and then passed through a 2 mm sieve. A soil:water suspension (1:5) was prepared in deionised water and mixed for one hour at 600 rpm and 21 °C before being filtered. Electrical conductivity was measured with a Crison 522 conductivity-meter and expressed in dS m⁻¹.

After one month of treatment, the aerial parts and the roots of the plants were harvested and weighed separately, and several growth parameters were measured: fresh weight of leaves (FWL) and roots (FWR), water content percentage of leaves (WCL) and roots (WCR), and leaf number (LN). Water content percentage in leaves was calculated as indicated above for the soil samples, except that the plant material was dried at 65 °C.

Photosynthetic pigments

Chlorophyll a (Chl a), chlorophyll b (Chl b) and total carotenoids (Caro) were quantified according to the method reported by Lichtenthaler and Wellburn (1983), from 0.1 g of fresh leaves ground in 30 mL of ice-cold 80% acetone, mixed by vortexing and then centrifuged. The absorbance of the supernatant was measured at 663, 646 and 470 nm, and the concentration of each group of compounds was calculated according to equations previously described (Lichtenthaler and Wellburn 1983). Pigment concentrations were expressed in mg g⁻¹ DW.

Osmolytes

Proline (Pro) content was quantified using fresh leaf material, according to the ninhydrin-acetic acid method of Bates et al. (1973). Pro was extracted in 3% aqueous sulphosalicylic acid; the extract was mixed with acid ninhydrin solution, incubated for 1 h at 95 °C, cooled on ice and then extracted with two volumes of toluene. The absorbance of the supernatant was read at 520 nm, using toluene as a blank. Pro concentration was expressed as µmol g⁻¹ DW. Total soluble sugars (TSS) were measured according to a previously published procedure (Dubois et al. 1956). Fresh leaf material was ground in liquid N₂ and extracted with 80% (v/v) methanol. After mixing in a rocker shaker for 24 h, the samples were centrifuged at 12,000 rpm for 10 min; supernatants were collected, appropriately diluted with water and supplemented with concentrated sulphuric acid and 5% phenol. After 20 min of incubation at room temperature, the absorbance was measured at 490 nm. TSS concentrations were expressed as equivalents of glucose, used as the standard (mg eq. glucose g⁻¹ DW).
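The calculations in the preceding subsections reduce to simple arithmetic. The sketch below is an illustrative re-implementation in Python, not the authors' code: the function names are ours, and the Lichtenthaler and Wellburn (1983) coefficients for 80% (v/v) acetone are quoted in their commonly cited form, so they should be verified against the original reference before use.

```python
# Illustrative sketch only (assumed helper names, not the study's code).
def water_content_percent(fw, dw):
    """WC% = [(FW - DW) / FW] x 100, as defined in the text."""
    return (fw - dw) / fw * 100.0

def pigments_80pct_acetone(a663, a646, a470):
    """Chl a, Chl b and carotenoids (ug mL-1 of extract) using the
    Lichtenthaler and Wellburn (1983) coefficients for 80% (v/v) acetone
    (commonly cited form; verify before use). Conversion to mg g-1 DW
    additionally requires the extract volume and the sample dry weight."""
    chl_a = 12.21 * a663 - 2.81 * a646
    chl_b = 20.13 * a646 - 5.03 * a663
    caro = (1000.0 * a470 - 3.27 * chl_a - 104.0 * chl_b) / 229.0
    return chl_a, chl_b, caro

# Example: a substrate sample of 12.0 g fresh and 9.6 g dry weight
print(f"WC = {water_content_percent(12.0, 9.6):.1f}%")  # -> WC = 20.0%
```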
Oxidative stress markers and non-enzymatic antioxidants

Leaf hydrogen peroxide contents in both control and salt-treated plants were quantified as previously described (Loreto and Velikova 2002). Fresh leaf material (0.05 g) was extracted with a 0.1% (w/v) trichloroacetic acid (TCA) solution, followed by centrifugation of the extract. The supernatant was thoroughly mixed with one volume of 10 mM potassium phosphate buffer (pH 7.0) and two volumes of 1 M potassium iodide. The absorbance of the sample was determined at 390 nm. Hydrogen peroxide concentrations were calculated against an H₂O₂ standard calibration curve and expressed as µmol g⁻¹ DW.

Malondialdehyde (MDA), total phenolic compounds (TPC), and total flavonoids (TF) were quantified in the same methanol extracts of fresh leaf material used for the TSS measurements. MDA was determined according to the method of Hodges et al. (1999), with some modifications (Taulavuori et al. 2001). The extracts were mixed with 0.5% thiobarbituric acid (TBA) prepared in 20% TCA and then incubated at 95 °C for 20 min. After subtracting the non-specific absorbance at 440 and 600 nm, the MDA contents were calculated using the equation included in Taulavuori et al. (2001), based on the extinction coefficient at 532 nm of the MDA-TBA adduct (155 mM⁻¹ cm⁻¹). Control samples (extracts mixed with 20% TCA without TBA) were processed in parallel. The concentration of MDA was finally expressed as nmol g⁻¹ DW.

TPC were quantified, according to Blainski et al. (2013), by reaction with the Folin-Ciocalteu reagent. The methanol extracts were mixed with sodium bicarbonate and the reagent, incubated at room temperature in the dark for 90 min, and the absorbance was recorded at 765 nm. Gallic acid (GA) was used as the standard, and the measured TPC concentrations were expressed as GA equivalents (mg eq. GA g⁻¹ DW). Total 'antioxidant flavonoids' (TF) were determined by a previously described method (Zhishen et al. 1999), based on the nitration of aromatic rings containing a catechol group, by incubation with NaNO₂, followed by reaction with AlCl₃ at alkaline pH. After the reaction, the absorbance of the samples was determined at 510 nm, and TF contents were expressed as equivalents of the catechin standard (mg eq. C g⁻¹ DW).

Antioxidant enzymatic activity

Antioxidant enzyme activities were determined, at room temperature (25 °C), in crude protein extracts prepared from fresh plant material as described by Gil et al. (2014). Samples were ground in the presence of liquid N₂ and then mixed with extraction buffer [20 mM Hepes, pH 7.5, 50 mM KCl, 1 mM EDTA, 0.1% (v/v) Triton X-100, 0.2% (w/v) polyvinylpyrrolidone, 0.2% (w/v) polyvinylpolypyrrolidone and 5% (v/v) glycerol]. A 1/10 volume of 'high salt buffer' (225 mM Hepes, pH 7.5, 1.5 M KCl and 22.5 mM MgCl₂) was added to each sample, and the homogenates were centrifuged for 20 min at 20,000 × g and 4 °C. Supernatants were collected, concentrated in U-Tube™ concentrators (Novagen, Madison, WI, USA), and centrifuged to remove precipitated material. The final samples (referred to as 'protein extracts') were divided into aliquots, flash-frozen in liquid N₂ and stored at −75 °C until used for the enzyme assays. Protein concentration in the extracts was determined by the Bradford (1976) method, using the Bio-Rad commercial reagent and bovine serum albumin (BSA) as the standard.
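Before turning to the individual enzyme assays, the two marker quantifications described earlier in this section can likewise be stated as arithmetic on absorbance readings. The following sketch is our own hedged illustration: the variable names are assumed, the H₂O₂ standard-curve values are invented, and the MDA formula is a simplified form; the full Hodges et al. (1999) equation, with its additional correction for interfering compounds, should be consulted for actual analyses.

```python
import numpy as np

def h2o2_from_standard_curve(abs390_samples, abs390_standards, conc_standards):
    """Linear calibration at 390 nm against an H2O2 standard curve."""
    slope, intercept = np.polyfit(conc_standards, abs390_standards, 1)
    return (np.asarray(abs390_samples) - intercept) / slope

def mda_nmol_per_ml(a532, a600):
    """Simplified MDA estimate from the MDA-TBA adduct (1 cm light path):
    non-specific absorbance (A600) is subtracted and the extinction
    coefficient of 155 mM-1 cm-1 is applied."""
    return (a532 - a600) / 155.0 * 1000.0  # mM -> nmol mL-1

# Illustrative numbers only, not the study's data:
standards = [0.0, 0.1, 0.2, 0.4]       # mM H2O2
readings = [0.02, 0.11, 0.20, 0.39]    # A390 of the standards
print(h2o2_from_standard_curve([0.15], readings, standards))
print(mda_nmol_per_ml(0.35, 0.05))     # ~1.9 nmol mL-1
```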
Superoxide dismutase (SOD) activity in the protein extracts was determined according to Beyer and Fridovich (1987) by following spectrophotometrically (at 560 nm) the inhibition of nitroblue tetrazolium (NBT) photoreduction; the reaction mixtures contained riboflavin as the source of superoxide radicals. One SOD unit was defined as the amount of enzyme causing 50% inhibition of NBT photoreduction under the assay conditions. Catalase (CAT) activity was determined, according to Aebi (1984), by following the decrease in absorbance at 240 nm due to the consumption of the H₂O₂ added to the protein extracts. One CAT unit was defined as the amount of enzyme that decomposes one mmol of H₂O₂ per minute at 25 °C. Ascorbate peroxidase (APX) activity was determined as described by Nakano and Asada (1981) by measuring the decrease in absorbance at 290 nm, which accompanies ascorbate oxidation as the reaction progresses. One APX unit was defined as the amount of enzyme required to consume one mmol of ascorbate per minute at 25 °C. Glutathione reductase (GR) activity was determined according to Connell and Mullet (1986), following the oxidation of NADPH [the cofactor in the GR-catalysed reduction of oxidised glutathione (GSSG)] by the decrease in absorbance at 340 nm. One GR unit was defined as the amount of enzyme that oxidises one mmol of NADPH per minute at 25 °C.

Statistical analysis

Data were analysed using the programme Statgraphics Centurion XVII (Statgraphics Technologies, The Plains, VA, USA). All mean values throughout the text are based on five biological replicates. Significant differences between treatments were tested by one-way analysis of variance (ANOVA) at the 95% confidence level, and post hoc comparisons were made using Tukey's HSD test at p < 0.05. A two-way analysis of variance (ANOVA) was performed for all traits analysed to check the interaction between the species and the treatments. A principal component analysis (PCA) was used to check the similarity in the responses to water and salt stress between the two species.

Climatic analysis

As can be observed in the climatic diagram calculated for the period 2001-2019 (Fig. 1a), there is a strong water deficit in summer in the area of study, which belongs to the thermo-Mediterranean climate belt, specific for coastal and low-altitude zones, according to the Worldwide Bioclimatic Classification System (1996-2020). The evapotranspiration surpasses the rainfall amount in all the years analysed (Fig. 1b). Evapotranspiration did not vary much during the last two decades, although a slight increase can be noticed in the last few years. On the contrary, both mean temperatures and rainfall showed a substantial variation from one year to another. The trimestral variation of the main climatic parameters (mean, maximal and minimal temperatures, rainfall and evapotranspiration) is presented as supplementary material in Suppl. Table 1. A strong variation of the trimestral rainfall was detected, with minimal values in the third trimester and maximal in the fourth, coinciding with the general Mediterranean climate pattern, characterised by dry summers and rainfall mainly in autumn.

A similar situation could occur for L. dufourii plants in the Devesa B monitoring unit, situated at a slightly higher level than the surrounding saline basins, unlike the previous ones.
After several years with a low number (50 or fewer) of registered individuals, this monitoring unit experienced a notable increase between 2014 and 2015, from 40 to 241 specimens. This increase occurred after the succession of two very dry years, 2013 (only 263.8 mm) and 2014 (224.4 mm). Population levels remained high in subsequent years, with a minimum of 133 individuals (in 2017) and a maximum of 374 (in 2018). However, after the intense rainfall recorded in 2018 (684.02 mm), the population showed a sharp decline again, with only 94 specimens registered in 2019.

Vegetation analysis

Vegetation inventories of plant communities were performed in 22 locations, corresponding to different salt marshes in the Albufera Natural Park. Each inventory was accompanied by the collection of soil data obtained with a portable sensor. Suppl. Table 2 summarises the habitat characteristics of each site (the extension and coverage of the plant community), soil moisture and electrical conductivity, and the list of species present in the community and their coverage. Two species had a higher presence in some of the inventories, equal to 4 on the Braun-Blanquet scale: Sarcocornia fruticosa, a structural shrubby species of the Mediterranean salt marshes, but also Spartina patens, an invasive species with an increasing presence in recent years in the area of study. The soil electrical conductivity measured by the WET sensor near the plants of the two studied species in all inventories was higher in the case of L. dufourii, but it is necessary to take into consideration that L. albuferae was found in one unique location; thus, this finding does not demonstrate a better salt tolerance of L. dufourii.

Soil characteristics

Soil samples, collected in the summer of 2019 from the unique location of L. albuferae and three salt marshes with L. dufourii, all located in the Albufera Natural Park, were analysed. From each location, three soil samples were taken in the vicinity of the plants at two depths, 0–10 cm and 10–20 cm. Their physical and chemical properties are summarised in Table 2. All soils had a sandy texture: sand represented the primary component, with low amounts of silt and clays. The soil pH was neutral, and the salinity in the superficial layer (0–10 cm) was higher than at 10–20 cm depth. The most abundant ion in the soil was Na⁺, found at a higher concentration than that of Cl⁻. The soil samples were also characterised by a high percentage of carbonates and of the divalent cations Ca²⁺ and Mg²⁺.

Plant growth under greenhouse conditions

Substrate EC was measured with the WET sensor at the beginning and after one week of treatment, but further measurements were not possible due to the high EC reached in the salt treatments of 400-800 mM NaCl, which were beyond the capacity of the device. Therefore, the final EC was measured in a 1:5 extract. EC in the pots gradually increased in parallel to the concentration applied, reaching values over tenfold higher than in the control in those watered with 800 mM NaCl, for the two species (Fig. 2a). Substrate moisture decreased drastically in the WS treatment already after one week, and even more after 17 days, in the two species; no further measurements were possible with the WET sensor. Thus, the final moisture determination was carried out using the gravimetric method.
The results indicated a similar reduction in soil moisture in the two species at the end of the WS treatments (down to around 3%), whereas only a slight decrease was found in the presence of 400, 600 and 800 mM NaCl, with respect to the control and 200 mM NaCl treatments (Fig. 2b).

Analysis of the growth parameters

Stress treatments had a strong effect on all analysed growth parameters and also on the photosynthetic pigment contents, whereas the effect of species was significant only for the leaf fresh weight (LFW) and leaf water content (LWC), as well as for the Chl a content. The interaction of the two factors was also significant only for leaf traits: number of leaves (Lno), leaf area (LA), mean fresh weight (LFW) and mean water content (LWC) (Table 3). Chlorophyll a showed a predominantly uncontrolled variation, as shown by the higher sum-of-squares percentage of the residual. Figure 3 shows the variation of the leaf traits according to the various applied treatments. Leaf area decreased mainly in L. dufourii, whereas L. albuferae showed only a smaller reduction under water stress (Fig. 3a). The leaf number strongly decreased under the highest salt concentration, but the formation of new leaves was also reduced under water stress and salt treatments in the two species (Fig. 3b). Leaf fresh weight suffered a reduction under water stress in both species and under all salt concentrations in L. dufourii, but only under the higher salt concentrations in L. albuferae (Fig. 3c). Leaf water content showed a similar variation in the two species, being affected only by the water stress (Fig. 3d).

Osmolytes, oxidative stress markers and antioxidant systems

Several biochemical parameters, such as osmolytes (proline and total soluble sugars), oxidative stress markers (MDA and H₂O₂), non-enzymatic antioxidants (total phenolic compounds and flavonoids) and the activity of antioxidant enzymes (superoxide dismutase, catalase, glutathione reductase), were determined in leaves of plants sampled at the end of the water stress and salt treatments. The two-way ANOVA showed that, except for MDA, all other analysed biochemical traits were significantly influenced by the treatments and, with the exception of MDA and total phenolic compounds (TPC), also by the species. The interaction between the two factors was highly significant for the total soluble sugar contents (TSS) and the antioxidant enzyme activities, but not significant for Pro and TPC (Table 4). The most significant contribution to the variation of MDA, TPC and TF is accounted for by the residual source of variation.

As indicated above, the variation of proline (Pro) leaf contents followed a similar pattern in the two species, increasing in parallel to the external concentration of NaCl, and especially under water stress. The relative increase over control values in plants subjected to water deficit was 29-fold for L. albuferae and 2.7-fold for L. dufourii; the corresponding values in the presence of the highest salt concentration applied, 800 mM NaCl, were 17-fold and 1.7-fold, respectively. However, it should be noted that Pro concentrations differed in the two species, being higher in absolute terms in L. dufourii (Fig. 4a). Under water stress, total soluble sugars (TSS) significantly increased in L. dufourii and decreased in L. albuferae, although the levels in non-stressed, control plants were more than four-fold higher in the latter species. Watering the plants with 800 mM NaCl induced a significant increase of the TSS contents in both species.
When comparing the two species, apart from the controls, significant differences in TSS levels were found under water deficit and moderate (200 mM) salinity conditions, but not in the presence of 400 mM or higher NaCl concentrations (Fig. 4b). Malondialdehyde (MDA) contents did not vary in L. albuferae under any of the applied stress treatments and showed a significant (albeit small) increment only in water-stressed L. dufourii plants (Fig. 5a). In contrast, hydrogen peroxide decreased with respect to the corresponding controls in both species (Fig. 5b). Total phenolic compounds (TPC) showed a similar pattern of variation in response to stress in the two species, with small, in most cases non-significant, changes as compared to the controls (Fig. 5c), whereas total flavonoid (TF) contents increased significantly only in plants of L. albuferae treated with 400 mM or higher NaCl concentrations (Fig. 5d).

Activity of antioxidant enzymes

The specific activities of the three tested antioxidant enzymes (SOD, CAT, and GR) showed different qualitative and quantitative patterns of variation in the two species in response to the applied stress treatments (Fig. 6). In L. albuferae, compared to the basal levels in non-stressed plants, the activity of the three tested enzymes increased significantly at very high salinities (600-800 mM NaCl) but not at lower NaCl concentrations or under water deficit conditions (Fig. 6a, b, c). In L. dufourii, SOD increased significantly only in the presence of 800 mM NaCl and in water-stressed plants (Fig. 6a), and GR also at the highest salt concentration tested, but not under water deficit stress (Fig. 6c). In contrast, CAT activity did not show significant changes in any of the treatments (Fig. 6b). Comparing the two species, significant differences were found: a higher activation of SOD and CAT under salt stress in L. albuferae and, on the contrary, a stronger induction of SOD under water stress in L. dufourii.

Principal component analysis

A principal component analysis (PCA) was also performed, including all growth parameters, osmolytes, oxidative stress markers, antioxidants and enzyme activities determined in control and stressed plants. Five components had an eigenvalue above 1. The biplot of the two main principal components, which together explained 67.96% of the total variability, is shown in Fig. 7. The first component (X-axis), explaining 39.57% of the variability, is related to the moisture of the substrate and, therefore, mainly to the water stress effect. The second component (Y-axis), explaining an additional 28.12% of the variability, is related to the EC of the substrate and, as such, mostly to the salt treatments. Changes in substrate moisture correlated positively with changes in all growth parameters (especially the water content of roots and leaves, and the leaf fresh weight) and photosynthetic pigment concentrations, which agrees with the inhibition of growth and the decrease in pigment contents observed under water stress (Fig. 7a). On the other hand, a strong negative correlation was detected between substrate water content and Pro, reflecting the large increase in Pro levels induced by water deficit (Fig. 7a). Regarding changes in the substrate EC, strong positive correlations were found with the antioxidant systems, especially with the activity of the SOD, CAT and GR enzymes, which increased with the salt treatments, at least at high salinity (Fig. 7a).
The PCA also showed a clear separation of the control, water stress and salt treatments, but not of the two species, which responded in a similar manner to each applied stress treatment (Fig. 7b).

Discussion

As stated in the introduction, the two studied Limonium species are extremely interesting from the conservationist perspective. Both are endemic, with a small distribution area in Eastern Spain, and highly threatened due to the scarcity of their populations (a single one was known for L. albuferae) and the large fluctuations in the number of their individuals. The highest numbers of individuals of L. dufourii were registered in drier years, when flooding of the salt marshes did not occur or was very brief; conversely, the population declined after a year with intense rainfall. It should be taken into account that in this area of Eastern Spain, the highest concentration of precipitation occurs in autumn and, therefore, its effects on the censuses of Limonium species are detected when they are carried out in the summer of the following year. The possible effects of climatic conditions on the number of individuals of L. albuferae could not be assessed, as the species has been described only recently (Ferrer-Gallego et al. 2016). In 2019, an apparent decrease in the population of L. albuferae was observed; only 39 specimens were initially counted, which could be related to the intense colonisation of this site by the invasive species Spartina patens (Aiton) Muhl. However, after manual removal of the invasive grass, additional individuals were detected.

The soil salinities in the salt marshes where the two species were found were moderate and, in the case of L. albuferae, significantly lower than those registered for the less salt-tolerant L. dufourii. Moreover, the salinity of the natural habitat of L. albuferae was well below the limit of tolerance of the species established in the greenhouse experiments. However, due to the extreme scarcity of this species, represented by a single population, the soil analyses were performed from only one area and are not conclusive for its ecological characterisation. The peculiar rarity of L. albuferae is definitely not related to edaphic conditions but most likely to evolutionary factors. Regarding the analysed soil parameters, Na⁺ and Cl⁻ contents in samples from 0–10 cm depth were 3- and 4.5-fold higher, respectively, in the areas of L. dufourii than in that of L. albuferae at the same depth. These differences explain the higher mean EC also detected with the WET sensor in the areas where L. dufourii was present. The same pattern was found for the K⁺, Ca²⁺ and Mg²⁺ soil concentrations, although the differences between the areas of the two species were not as marked as for Na⁺ and Cl⁻. Other soil properties relevant to plant life, such as texture or pH, were similar in all soil samples.

The phytosociological inventories could not be ascribed to associations, as the specimens of the two species are often located in areas that have been extensively altered and have undergone geomorphological restoration. The vegetation dynamics is very rapid, subjected to flooding and, therefore, to changes in salinity. This triggers the rapid advance and retreat of different species, and the rainfall regime has been very variable in recent years, also greatly altering the plant communities.
Nevertheless, the performance of phytosociological inventories brought relevant information by revealing the abundant presence in some inventories of the invasive species Spartina patens, which was recently reported as a major threat for native halophytes in this area (Martínez-Fort and Donat-Torres 2020). Spartina patens appears to pose a severe risk also to the two endemic Limonium species, as the salt marshes where L. dufourii disappeared (Devesa A, C, and D) are completely invaded by this species. In the remaining sites, the large yearly variation in the number of individuals is related to genetic factors of L. dufourii, which shows very different flowering patterns, occasionally behaving as an annual or monocarpic perennial (plants die after the first reproductive stage) or flowering every year. Also noteworthy is the high frequency of Dittrichia viscosa (L.) Greuter, regarded as a native invasive species, extremely competitive at low and moderate salinities in salt marshes of the region (Al Hassan et al. 2016).

Salt marsh ecosystems are highly dynamic, characterised by large variations in the salinity of the soil at the temporal and spatial scales, as reported in previous studies performed on the territory of the Albufera Natural Park (Boscaiu et al. 2013; Gil et al. 2014; González-Orenga et al. 2020). Therefore, reintroduction or reinforcement programmes for endemic and rare salt marsh species should also consider information on their limits of tolerance to stressful environmental factors. In Mediterranean salt marshes, a general increase in average temperatures and short-term 'heatwaves', due to climate change, will lead to increased evapotranspiration. Consequently, drought and soil salinity will also intensify, inflicting greater stress on plants and potentially causing the dieback of those less tolerant (Touchette et al. 2019).

The analysis of growth inhibition in response to the applied stress treatments indicated that the two species were mostly affected by water stress. In both, the strongest reduction in the most relevant growth parameters, leaf fresh weight and water content, was found in plants subjected to one month of water deficit, especially those of L. albuferae. This indicated that L. albuferae is much more susceptible to drought than other Limonium species growing in the study area, which were the subject of previous work (González-Orenga et al. 2019b). On the contrary, the salt-induced changes in growth parameters suggested that L. dufourii is more sensitive to high soil salinity than L. albuferae. Growth reduction under salt stress is a general trait in glycophytes but also in many halophytes (Flowers et al. 1986; Flowers and Colmer 2008). Only in some dicotyledonous halophytes, especially the more salt-tolerant ones, do low and moderate concentrations of NaCl stimulate growth, as we observed in L. albuferae, in which foliar fresh weight was slightly higher in the presence of 200 mM NaCl than in control plants. Stimulation of growth under low and moderate salinity conditions has been reported only in a few species of the genus Limonium, such as L. bicolor (Bunge) Kuntze (Li 2008; Wang et al. 2017), L. delicatulum (Girard) Kuntze (Souid et al. 2016), L. pectinatum (Aiton) Kuntze (Morales et al. 2001), or L. girardianum and L. virgatum (Al Hassan et al. 2017). In some others, such as L. stocksii, no differences with respect to the control were found up to 300 mM NaCl (Hameed et al.
2015), whereas in many species, growth was optimal under control conditions (Ben Hamed et al. 2014; Grieve et al. 2005). Contrary to the intense dehydration caused by water stress, in both species leaf water content decreased only slightly in the plants subjected to salt stress, demonstrating the small contribution of water loss to the reduction of fresh weight.

The biochemical analyses revealed an increase of Pro contents in the two Limonium species, more pronounced in response to the water stress treatment than under salt stress. The relative increase was more accentuated in L. albuferae due to the very low Pro levels in the absence of stress, but higher absolute values were found in L. dufourii in all applied treatments. The accumulation of Pro to high levels under water deficit conditions agrees with its strong negative correlation with substrate water content, revealed by the PCA. Pro is also a reliable marker of salt stress, increasing in the plants in parallel with the increase in the external concentration of NaCl; however, Pro does not seem to be directly involved in the mechanisms of salt tolerance, as it accumulates to higher absolute levels in L. dufourii, the less salt-tolerant of the two species. Pro biosynthesis in salt-stressed plants of Limonium is a well-known phenomenon and was already reported in the early work of Cavalieri and Huang (1979). In general, plant species of a particular genus tend to use only one, or very few, different compounds as functional osmolytes; one representative example is Plantago: all investigated species of this genus accumulate predominantly sorbitol in response to various abiotic stresses (Flowers and Colmer 2008). In Limonium, however, a large variety of chemical compounds with the function of compatible solutes have been reported in different species including, besides Pro, quaternary ammonium compounds like β-alanine betaine, choline-O-sulfate or glycine betaine, and different soluble sugars (fructose, sucrose and glucose) and polyalcohols (e.g., inositol isomers and derivatives) (Al Hassan et al. 2017; Furtana et al. 2013; Gagneul et al. 2007; González-Orenga et al. 2019b; Hanson et al. 1991; Morales et al. 2001; Rhodes and Hanson 1993; Tabot and Adams 2014; Tipirdamaz et al. 2006). Recently, in a metabolic profiling of these two species, we reported a gradual increase in Pro concentrations in parallel to increasing salinity, but also a higher accumulation of fructose and glucose in L. albuferae (González-Orenga et al. 2019a). These data are consistent with the results presented here, indicating higher values of total soluble sugars in salt-stressed plants of this latter species.

As already mentioned, abiotic stress is associated with increased ROS production that generates oxidative stress (Das and Roychoudhury 2014; Dumanović et al. 2021). In our experiments, no significant changes in MDA levels were observed in the stressed plants, except for a slight (but significant) increase in the plants subjected to water stress; H₂O₂ levels even decreased in comparison to the non-stressed controls. Similarly, no variation in MDA and H₂O₂ under salt treatments was found in L. latifolium (Ben Hamed et al. 2014), but an increase was reported in some other species (Hameed et al. 2015; Souid et al. 2016).
Several studies have shown that halophytes generally do not generate ROS in excess, as they are perfectly adapted to the stressful environments where they live and possess efficient mechanisms to avoid or substantially reduce oxidative stress (Bose et al. 2014; Gil et al. 2014), and this also seems to be the case in the selected Limonium species. Phenolic compounds, especially the subgroup of flavonoids, include many secondary metabolites that are potent antioxidants and increase under stressful conditions in many plant species (Di Ferdinando et al. 2012). Many Limonium species contain efficient free radical scavengers and have strong antioxidant properties (Senizza et al. 2021; Souid et al. 2019; Ruiz-Riaguas et al. 2020). Field studies on several Limonium species from Tunisia indicated a variation in the levels of polyphenols and flavonoids in relation to seasonal constraints in their natural habitats (Souid et al. 2018, 2019). In salt-treated plants of some species, an increase in the concentration of these compounds has been reported (Wang et al. 2016), but in others, only small increases (Souid et al. 2016) or no significant variations under water stress (González-Orenga et al. 2019a, b) were observed. In the present work, the only significant increases in total phenolic or flavonoid levels were observed in salt-stressed L. albuferae plants.

Phenolic compounds (including flavonoids), as other non-enzymatic antioxidants, are regarded as a secondary line of defence against oxidative stress, activated only under severe stress conditions, whereas antioxidant enzymes constitute the first ROS scavenging system (Fini et al. 2011). The specific activities of three antioxidant enzymes were determined in control and stressed plants of the two investigated species, since enzymatic antioxidant mechanisms have been reported to be important for counteracting oxidative stress in Limonium under salt (Li 2008; Souid et al. 2016; Zhang et al. 2014) and drought (Souid et al. 2018) stress conditions. SOD is the first enzyme to be activated in response to stress, as it catalyses the dismutation of superoxide radicals into O₂ and H₂O₂ (Alscher et al. 2002). CAT complements the activity of SOD by decomposing the produced H₂O₂ into O₂ and H₂O and is induced by the accumulation of its substrate (Gunes et al. 2007). Glutathione reductase (GR) contributes to recovering and maintaining an adequate cellular redox state by reducing oxidised glutathione (GSSG) to its reduced form (GSH), using NADPH as a cofactor (Hameed et al. 2015). Changes in the activities of these enzymes in response to stress followed different qualitative and quantitative patterns regarding both the stress treatment and the species. Thus, water deficit induced SOD activity in L. dufourii, but no significant changes with respect to the controls were observed for the other two enzymes in this species, nor in L. albuferae plants for any of the three tested enzymes. Therefore, it appears that the activation of enzymatic antioxidant mechanisms against water stress is more efficient in L. dufourii than in L. albuferae, which may contribute to the relatively higher drought tolerance of the former species. Conversely, in the more salt-tolerant L. albuferae, the three antioxidant enzyme activities increased significantly in response to the 600 and 800 mM NaCl treatments, whereas in L. dufourii SOD and GR (but not CAT) activities also increased, but to lower levels and only in the presence of the highest salt concentration tested.
Conclusions

The field study did not reveal a clear correlation between the number of individuals censused in the analysed populations and the climatic conditions. The vegetation analysis underlined the presence of invasive species, mostly Spartina patens, with a notable presence in some inventories. Although in its natural habitats L. albuferae is found at sites with lower salinity, the observed changes in several growth and biochemical variables in plants of the two selected Limonium species subjected to stress treatments under controlled greenhouse conditions indicated that L. albuferae is more salt-tolerant than L. dufourii but more susceptible to drought stress. Conversely, L. dufourii is more drought-tolerant but more salt-sensitive than L. albuferae. In its natural habitat in the salt marsh, L. dufourii appears to be sensitive to prolonged flooding. Proline was synthesised in both species, especially under water stress, whereas MDA and H₂O₂ did not show a significant variation. The activity of antioxidant enzymes plays the most important role in the mitigation of oxidative stress in both species and under both stress types. The increased accumulation of phenolic compounds in the two species, and of flavonoids in the more salt-tolerant L. albuferae, also contributes to alleviating oxidative stress in the presence of high salt concentrations.

The results presented here may be useful in the conservation management of the two species. Salinity does not seem to threaten the future reintroduction of specimens in salt marshes, as the two species under controlled conditions tolerated salinities far beyond those in their natural environments. Water scarcity, however, may be a problem for L. albuferae, which proved to be more susceptible to water deficit. On the other hand, L. dufourii should not be introduced in sites prone to prolonged flooding. The field study also established that, besides abiotic stress factors, competition with invasive species could be a major threat to the preservation of these species in their natural habitats. These data should be considered in the design and implementation of conservation, reinforcement or reintroduction programmes and for the general management of the threatened populations of these rare and endemic Limonium species.

Acknowledgements

(Valencia, Spain) for her valuable suggestions for improving the manuscript. Thanks to Inmaculada Ferrando Pardo for helping in the study and conservation of the seeds in the Centre for Forest Research and Experimentation of the Valencian Region (CIEF).
Neuroendocrine neoplasms of the duodenum, ampullary region, jejunum and ileum

Summary

Neuroendocrine neoplasms of the small intestine are among the most frequent along the gastrointestinal tract, even though their incidence is extremely variable according to the specific site. Jejunal-ileal neuroendocrine neoplasms account for about 27% of gastrointestinal NETs, making them the second most frequent NET type. The aim of this review is to classify all tumors following the WHO 2019 classification and to describe their pathologic differences and peculiarities.

Introduction

Duodenal neuroendocrine neoplasms (Duo-NENs) are uncommon, accounting for about 4% of all gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) 1,2. They include ampullary NENs, which arise within or around the major or minor papilla/ampulla, and extra-ampullary NENs. Their incidence is increasing, likely due to improved diagnostic techniques.

Clinical presentation

Patients with Duo-NENs may present with abdominal pain, jaundice, bleeding or anemia 3; however, many non-functioning neuroendocrine tumors (NETs) are discovered incidentally. Zollinger-Ellison syndrome due to a gastrinoma may occur, sometimes in the setting of multiple endocrine neoplasia type 1 (MEN1) syndrome, whereas somatostatinoma and carcinoid syndromes are extremely rare.

Duo-NETs

Duo-NETs are graded according to the WHO proliferative criteria as G1, G2 and G3; most Duo-NETs (66-80%) are low-grade (G1) tumors, while grade 3 NETs are very rare. Three main clinico-pathologic subtypes of Duo-NETs have been described 4 (Tab. I):

a) Gastrinoma (i.e. functioning gastrin-producing NETs). This tumor subtype is, by definition, associated with Zollinger-Ellison syndrome and characterized by gastrin expression of the neoplastic cells. Duodenal gastrinomas usually show a well-defined trabecular pattern, with frequent vascular pseudorosettes (Fig. 1). About 30% of duodenal gastrinomas arise in patients with MEN1 syndrome; MEN1-associated gastrinomas are often coupled with diffuse hyperplastic gastrin and somatostatin cell changes and multicentric gastrin-producing micro-NETs 5. Despite their usually small size (0.7-0.8 cm), gastrinomas are more frequently associated with lymph node metastases in comparison with non-functioning gastrin-expressing Duo-NETs 6.

b) Ampullary-type somatostatin-producing NETs (AS-NETs), also known as "somatostatinomas", despite the frequent lack of an associated hyperfunctioning clinical syndrome. They are characterized by a more or less prominent tubulo-acinar/pseudoglandular pattern of growth, often with psammoma bodies, and extensive (more than 50% of tumor cells) somatostatin reactivity (Fig. 2). AS-NETs represent the most common histologic subtype among NETs of the major and minor papilla/ampulla regions 4,7; they can, however, be occasionally found in the extra-ampullary duodenum. A fraction of such neoplasms occur in patients with neurofibromatosis type 1, and they show a biallelic inactivation of the NF1 gene 8. In addition to general neuroendocrine markers, AS-NETs are, as a rule, extensively positive for somatostatin. Although AS-NETs are significantly larger (median size: 1.8-2.5 cm) and have a higher lymph node metastatic rate (about 50% of cases) than ordinary non-functioning, mostly extra-ampullary, Duo-NETs, they display an indolent behavior, even when metastatic to the liver. Differential diagnosis with duodenal or ampullary adenocarcinomas is therefore of utmost clinical importance.
In contrast to AS-NETs, adenocarcinomas show higher nuclear atypia and mitotic activity, absence of psammoma bodies, and negativity (or only focal positivity) for general neuroendocrine markers and somatostatin. It should also be recalled that duodenal NECs and gangliocytic paragangliomas may also express somatostatin; however, their cellular and architectural features allow a straightforward distinction from AS-NETs.

c) Ordinary non-functioning NETs. The remaining Duo-NETs, showing the "canonical" NET organoid architecture (nests, trabeculae, ribbons) and, in addition to general neuroendocrine markers, variable expression of gastrin or, less frequently, somatostatin 2, account for the vast majority of extra-ampullary Duo-NETs. Gastrin-producing Duo-NETs are more frequently detected in the first portion of the duodenum. Worthy of note is that enterochromaffin-cell serotonin-expressing NETs are exceptionally rare in the duodenum, in comparison with the jejunum or ileum.

In Duo-NETs, risk factors for lymph node metastasis encompass tumor size, invasion of the muscularis propria or beyond, lymphovascular invasion, and grade (2 or 3), while independent prognostic factors include tumor stage, tumor size (patients with tumors of 2 cm in diameter or larger have a worse outcome) and lymphovascular invasion 4.

Gangliocytic paragangliomas

Gangliocytic paraganglioma represents a rare and distinct tumor type, which is almost always located in the ampullary region. It is characterized by a triphasic morphology, i.e. i) an epithelioid, paraganglioma-like neuroendocrine component (reactive for general neuroendocrine markers and, frequently, for cytokeratins, somatostatin, pancreatic polypeptide and progesterone receptors), ii) a Schwannian-like spindle cell component (reactive for S100 protein and SOX10, and often for synaptophysin), and iii) a ganglion-like cell component (reactive for synaptophysin and, sometimes, for somatostatin, S100 or cytokeratins) 4,9,10. The three components may be variably intermingled. Despite its often pseudo-infiltrative pattern, gangliocytic paraganglioma is considered a very-low-grade tumor, with uncommon metastases, essentially to loco-regional lymph nodes. It should be mainly distinguished from Duo-NETs, especially from AS-NETs, which display a greater metastatic potential, and from true paragangliomas, gastrointestinal stromal tumors and ganglioneuroma. In addition to the typical triphasic histology, immunohistochemistry for progesterone receptor and pancreatic polypeptide may help distinguish gangliocytic paraganglioma from Duo-NETs 9. Recently, Mamilla et al. concluded that gangliocytic paragangliomas have a NET-like immunoprofile 9 but differ from ordinary paragangliomas, almost all of which are cytokeratin-negative 10. Gastrointestinal stromal tumors have a different immunophenotype, while ganglioneuroma lacks the epithelioid component.

Duodenal NECs

They are by definition high-grade NENs. Histologically, NECs are arranged in poorly formed trabeculae, large and confluent nests or sheet-like growths, similar to those described in the lung or in the remaining gastroenteropancreatic tract. Most duodenal NECs arise around the major ampulla 11,12, where they form large and invasive masses (median size: 2.5 cm). They may be separated histologically into two variants: small cell NECs and large cell NECs. Duodenal NECs, regardless of histologic variant, are generally associated with an advanced stage and a worse prognosis 4,12.
More than half of ampullary NECs show loss of retinoblastoma (RB1) expression, which may be helpful to support the diagnosis of NEC (versus a NET G3) in challenging cases, while p53 overexpression occurs in about 30% of cases 12.

Duodenal MiNENs

Few ampullary MiNENs, composed of a NEC component combined with an adenocarcinoma component, each accounting for at least 30% of the neoplastic growth, have been described, and most display aggressive behavior 12.

Introduction

Jejunal-ileal neuroendocrine neoplasms (Je-Ile NENs) are almost exclusively represented by well-differentiated serotonin-producing enterochromaffin cell neuroendocrine tumors (EC cell NETs) of the terminal ileum. They account for about 27% of gastrointestinal NETs, making them the second most frequent NET type 13. The remaining Je-Ile NENs are mostly represented by NETs producing gastrin (especially in the jejunum) 14. Poorly differentiated NECs and MiNENs represent rare entities.

Clinical presentation

About half of patients with Je-Ile EC cell NETs are asymptomatic, and their tumors are incidentally detected. Patients can be asymptomatic even if they show high serum neuroendocrine markers, urinary 5-hydroxyindoleacetic acid (5-HIAA) and liver metastases. Identification of the primary tumor in the presence of liver metastases may be difficult due to the small size of the primary tumor and the limitations of endoscopy and standard imaging techniques. Symptomatic cases present with crampy abdominal pain, due to intestinal obstruction and/or ischemia. The "carcinoid syndrome", characterized by cutaneous flushing, diarrhea, bronchospasms and fibrous thickening of the endocardium and valves of the right heart, occurs only when liver metastases are present and is detected in at most 10% of patients.

Subtypes

These are represented by well-differentiated (WD) Je-Ile NETs, NECs and MiNENs.

Je-Ile NETs

Je-Ile NETs are graded according to the WHO proliferative criteria as G1, G2 and G3. Most Je-Ile NETs are low grade; grade 3 NETs are rare. Two Je-Ile NET clinico-pathologic subtypes have been reported:

a) EC cell NETs. Pathology and immunohistochemistry: they are mostly located in the distal ileum, with only 11% in the jejunum, and they are rarely found in Meckel's diverticulum. EC cell NETs are multiple (2-100 tumors) in about one third of cases and in familial cases. Tumor size is usually small, ranging from 1 to 2 cm. These NETs appear as firm white-yellow mucosal-submucosal nodules with intact or minimally eroded overlying mucosa. Infiltration of the muscular wall and peritoneum is frequent and consists either of extensive peritoneal fibrosis, caused by fibroblastic growth factors produced by the tumor, or of metastatic lymph nodes fused together. EC cell NETs are composed of solid rounded nests (Fig. 3A) of closely packed tumor cells, often showing peripheral palisading. Cribriform, glandular-like and rosette-type structures are also frequently observed. Tumor cells are uniform, with little or no pleomorphism, and mitotic activity is null or low in most cases (0 to 2/2 mm²), which classifies these tumors as G1. Lymphovascular and perineural invasion are frequently observed. In addition to reactivity for general neuroendocrine markers (Fig. 3B) and serotonin (Fig. 3C), EC cell NETs express CDX2 and type 2A somatostatin receptors. The Ki-67 proliferative index is very low (0-2%) (Fig. 3D) in most cases, classified as G1, but may be more than 2% in some that are classified as G2 and more than 20% in the few cases classified as G3 18.
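The proliferation-based cut-offs just quoted can be summarized as a simple decision rule. The sketch below is our own illustration, not part of the review: it uses the Ki-67 thresholds given in the text (up to 2% for G1, >2-20% for G2, >20% for G3) together with the mitotic count, and assumes the usual WHO convention that the higher of the two indices assigns the final grade; the formal WHO 2019 thresholds should be consulted for diagnostic use.

```python
# Hedged sketch; cut-offs approximate the WHO 2019 scheme as summarized
# in the text, and the higher of the two indices determines the grade.
def who_net_grade(ki67_percent, mitoses_per_2mm2):
    if ki67_percent > 20 or mitoses_per_2mm2 > 20:
        return "G3"
    if ki67_percent > 2 or mitoses_per_2mm2 >= 2:
        return "G2"
    return "G1"

print(who_net_grade(1.5, 1))   # 'G1' -- the usual situation in EC cell NETs
print(who_net_grade(25.0, 4))  # 'G3' -- Ki-67 places the tumor in the higher grade
```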
The majority (> 60%) of patients with EC cell Je-Ile NETs present with metastatic disease. Metastases are principally located in regional lymph nodes and the liver 19. Notwithstanding this, patients with advanced disease show prolonged survival, related to the very low proliferative rate of these tumors. The 5-year overall survival rate of patients with localized disease is 70-100%, while that of patients with distant metastases is 35-60%. The long-term recurrence rate is roughly 50% 20. The risk of recurrence is increased in patients with nodal metastases, mesenteric invasion, lympho-vascular invasion and perineural invasion.

b) Heterogeneous group of Je-Ile NETs. This group comprises a subgroup of trabecular G1 NETs expressing gastrin, located in the jejunum and sharing the same general behavior as their duodenal counterpart, and a second subgroup represented by jejunal non-hormone-expressing NETs, mostly of large size, locally invasive and frequently of G2 or G3 grade, located in the upper jejunum 14.

Je-Ile NECs

They are by definition high-grade malignant Je-Ile NENs and show poorly formed trabeculae, large and confluent nests or sheet-like growths, similar to those previously described. Very few cases of this neoplasm have been reported 21.

Je-Ile MiNENs

As far as we know, no well-described cases of Je-Ile MiNEN have been reported so far 22.
Meta-Analysis of Hepatic Arterial Infusion for Liver Metastases From Colorectal Cancer

The aim of the present study was to evaluate the potential benefits of hepatic arterial infusion chemotherapy (HAIC) in the management of colorectal liver metastases (CRLM). Electronic databases, including PubMed, EMBASE, Medline, Web of Science, and Cochrane Library, were comprehensively searched from inception to November 2020. Prospective randomized trials comparing HAIC with systemic chemotherapy (SC) were selected. The overall survival (OS), tumor response rates (RRs), progression-free survival (PFS), and corresponding 95% confidence intervals (CIs) were assessed in the meta-analysis. Subsequently, between-study heterogeneity, sensitivity, publication bias, and meta-regression analyses were performed. Finally, 18 studies, which contained 1,766 participants (922 in the HAIC group and 844 in the SC group), were included. There was a significantly higher OS rate in the HAIC as palliative treatment group (HR, 0.17; 95% CI, 0.08–0.26; P < 0.001) and in the HAIC as adjuvant treatment group (HR, 0.63; 95% CI, 0.38–0.87; P < 0.001) compared with the SC group. The complete and partial tumor RRs were also increased significantly in the HAIC as palliative treatment group (RR = 2.09; 95% CI, 1.36–3.22; P = 0.001) and as adjuvant treatment group (RR = 2.14; 95% CI, 1.40–3.26; P < 0.001) compared with the SC group. However, PFS did not differ significantly between the HAIC and SC groups (P > 0.05). Meta-regression analysis showed that potential covariates did not influence the association between HAIC and OS outcomes (P > 0.05). The results of the present study suggest that HAIC may be a potential therapeutic regimen to improve the outcomes of patients with CRLM. The present meta-analysis has been registered in PROSPERO (no. CRD 42019145719).

INTRODUCTION

Colorectal cancer (CRC) is the third most common type of cancer in terms of incidence (10.2%) and the second leading cause of cancer-associated death (9.2%). In 2018, there were over 1.8 million new CRC cases and 881,000 estimated deaths worldwide (1). It is estimated that there were 51,020 CRC deaths in 2019 in the USA (2). The liver is the most frequent site of distant metastases of CRC (3), and liver involvement serves as the leading cause of death in patients with CRC. It is estimated that ∼50% of patients develop liver metastases, with ∼25% presenting with synchronous metastases and another ∼50% developing metachronous metastases (4). R0/R1 resection of both the liver metastases and the primary CRC has been demonstrated to improve long-term survival to a certain degree (5,6). Of patients with CRC with liver metastases, 15-20% undergo surgical resection at presentation (7,8), and the 5-year overall survival (OS) rates are in the range of 34-36% (3,4). Regarding unresectable colorectal liver metastases (CRLM), therapeutic management is more controversial and is generally associated with less favorable prognoses (5). Thus, optimization of the treatment for CRLM is required. Over the past decade, effective systemic and regional chemotherapy for CRLM has been introduced. Hepatic arterial infusion chemotherapy (HAIC), a locoregional therapy for liver metastases, is a potentially appealing treatment that has been developed over the last three decades for patients with CRLM (9). HAIC possesses theoretical advantages over standard intravenous systemic chemotherapy (SC) due to the anatomical characteristics of the hepatic blood supply.
The portal vein perfuses the normal liver parenchyma, whereas the hepatic artery predominantly supplies hepatic metastases, allowing chemotherapy delivered via the hepatic artery to selectively target the tumor in patients with CRLM (6,7). The schematic diagram of HAIC is presented in Figure 1.

Protocol Registration

The present study was registered in PROSPERO in November 2019 (registration no. CRD 42019145719; crd.york.ac.uk/PROSPERO).

Eligibility Criteria

This study developed the inclusion and exclusion criteria based on the "PICOS" principles. Inclusion criteria were as follows: (i) Design of studies (S), prospective RCTs; (ii) patients (P), patients with CRLM, defined as ≥4 metastases or metastatic nodules >50 mm, bilobar involvement, invasion of pedicle lymph nodes, or serum levels of carcinoembryonic antigen >200 ng/ml; (iii) intervention (I), HAIC; (iv) control (C), SC; (v) outcomes (O), the primary endpoint was OS, defined as the time from identification to death from any cause. The secondary endpoints were RRs, defined as the percentage of complete (tumor disappearance) or partial (tumor shrinkage ≥50%) responses, and progression-free survival (PFS), defined as the length of time that patients lived with the tumor without evidence of progression of the cancer. The exclusion criteria were: (i) Irrelevant studies and duplicate literature; (ii) studies without useful data; and (iii) letters, reviews, case reports, comments, laboratory studies, and meta-analyses.

Search Methodology

The selection and systematic review of clinical studies were performed and reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (8). The search was limited to RCTs published in English. Electronic databases, including PubMed, EMBASE, Medline, Web of Science, and the Cochrane Library, were searched from inception to November 2020.

Study Selection

All search results were combined in EndNote, Version X8 (Thomson Reuters). Duplicates were removed manually. Two investigators independently screened the studies based on the titles and abstracts. If an article met the eligibility criteria, the full text was read. Any discrepancies between the two investigators were resolved by discussion or third-party consensus.

Data Extraction

Two investigators used the inclusion and exclusion criteria to retrieve relevant citations. Using a standardized data extraction form, two investigators independently extracted the following data from each study: (i) Study ID, including the name of the first author and publication year; (ii) country where the study was performed; (iii) study subjects, number of participants and their ages; (iv) treatment regimens for the treatment and control groups; and (v) the primary endpoint (OS) and the secondary endpoints (RRs and PFS). For reports of the same trial at different follow-up periods, data from the last report were used for analysis. If insufficient details were reported, the authors were contacted for further information. Any disagreements were resolved by consensus.

Quality Assessment

The Cochrane Collaboration tool for assessing risk of bias and the Jadad scale (31) were both used to evaluate the quality of the included RCTs. The Jadad scale, with a maximum score of five points, assesses the quality of a study based on three criteria: (i) Randomization; (ii) double blinding; and (iii) withdrawals and dropouts. A study was awarded a maximum of 2 points for randomization, 2 points for double blinding, and 1 point for withdrawals and dropouts. A final score of 3 or above was regarded as high quality, whilst a score of 0-2 was considered low quality. Any disagreements during assessment were resolved by consensus.
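To make the scoring rubric concrete, the following minimal Python sketch applies the Jadad criteria as described above; the function names and the example trial are illustrative assumptions, not data from the included studies.

```python
# Minimal sketch of the Jadad scoring rubric described above.
# Item names and the example trial are illustrative assumptions only.

def jadad_score(randomization_pts: int, double_blinding_pts: int,
                withdrawals_reported: bool) -> int:
    """Return the Jadad score: up to 2 points each for randomization and
    double blinding, plus 1 point if withdrawals/dropouts are reported."""
    assert 0 <= randomization_pts <= 2 and 0 <= double_blinding_pts <= 2
    return randomization_pts + double_blinding_pts + int(withdrawals_reported)

def quality_label(score: int) -> str:
    """Classify per the cut-off used in this meta-analysis (>= 3 is high)."""
    return "high quality" if score >= 3 else "low quality"

# Hypothetical trial: adequate randomization (2), no double blinding (0),
# withdrawals reported (1) -> score 3, classified as high quality.
s = jadad_score(2, 0, True)
print(s, quality_label(s))
```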
Statistical Analysis

All data were analyzed using Stata version 13.0 (Stata Corporation). Heterogeneity amongst studies was assessed using a Q test and an I2 test before determining the pooled effect (32). The choice between a fixed effects model and a random effects model was based on the results of the Q test and I2 test: a fixed effects model was adopted if I2 < 50% and P > 0.1; otherwise, a random effects model was used. The outcomes OS and PFS, which were time-to-event variables, were expressed as pooled hazard ratios (HRs). The HRs of OS and PFS with 95% confidence intervals (CIs) were directly extracted from the Kaplan-Meier survival curves or calculated using a calculation sheet as described by Tierney et al. (33). The logarithm of the HRs and the corresponding standard error (SE) were used as data points for the meta-analysis. The tumor RRs, which were dichotomous data, were expressed as pooled relative risks (RRs) with 95% CIs. The significance of pooled effects was determined using a Z test; P < 0.05 was considered to indicate a statistically significant difference. Possible sources of heterogeneity were assessed by performing meta-regression to evaluate the impact of covariates on overall heterogeneity; the restricted maximum likelihood (REML) estimation method proposed by Harbord et al. (9) was used for meta-regression. Sensitivity analysis was used to investigate the influence of any high-risk study on the overall meta-analysis. Possible publication bias was assessed using Egger's regression asymmetry test (34). Additionally, a contour-enhanced funnel plot was used to distinguish the detailed reasons underlying publication bias (35).

Study Selection Outcome

A total of 2,197 potentially relevant articles were retrieved using the search strategy described above. Among these, 918 were duplicates. A total of 1,168 articles were excluded by screening the titles and abstracts as they were reviews, letters, comments, not in English, case reports, or laboratory studies, leaving 111 articles. A further 93 articles were excluded after examining the abstracts or full texts. Finally, 18 studies (10-27) met the inclusion criteria and were included in the present meta-analysis. The detailed flowchart of the selection process for eligible studies is shown in Figure 2.

Study Quality Assessment

Methodological quality graphs and a summary of the included studies are presented in Figures 3A,B. The generation of the randomization sequence was deemed adequate in all trials. Appropriate allocation concealment was missing in several trials. None of the studies had robust double blinding procedures. The results of the quality assessment based on the Jadad scale are presented in Figure 3C. The scores of the included studies ranged between 1 and 3 points (mean, 2.22). A total of five studies (12,21,22,26,27) reported sequence generation and ensured random allocation. One trial (18) scored 1 point overall due to inappropriate allocation concealment. All included studies reported withdrawal/dropout rates. Therefore, five studies (12,21,22,26,27) out of 18 were considered high quality (≥3 points). Subsequently, heterogeneity was examined prior to the pooled analysis. Test results revealed no significant heterogeneity across the 10 palliative studies (P = 0.071, I2 = 43.0%) or the 7 adjuvant studies (P = 0.111, I2 = 42.0%). Thus, a fixed effects model was applied for the pooled analysis.
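As a concrete illustration of the heterogeneity test and pooling procedure described above, the sketch below computes Cochran's Q, I2, and a fixed-effect (inverse-variance) pooled log-HR; the per-study log-HRs and SEs are invented example numbers, not values extracted from the 18 included trials.

```python
# Illustrative sketch of the heterogeneity test and fixed-effect pooling
# described above. The log-HRs and SEs are invented example numbers.
import math
from scipy.stats import chi2, norm

log_hr = [-0.25, -0.40, -0.10, -0.30]   # hypothetical per-study log hazard ratios
se     = [0.15, 0.20, 0.18, 0.12]       # hypothetical standard errors

w = [1.0 / s**2 for s in se]                                 # inverse-variance weights
pooled = sum(wi * y for wi, y in zip(w, log_hr)) / sum(w)
Q = sum(wi * (y - pooled)**2 for wi, y in zip(w, log_hr))    # Cochran's Q
df = len(log_hr) - 1
p_het = chi2.sf(Q, df)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Decision rule used in this meta-analysis: fixed effects if I2 < 50% and P > 0.1
model = "fixed" if (I2 < 50 and p_het > 0.1) else "random"

se_pooled = math.sqrt(1.0 / sum(w))
z = pooled / se_pooled
p = 2 * norm.sf(abs(z))                                      # two-sided Z test
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"{model} effects; HR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f}), P = {p:.4f}, I2 = {I2:.0f}%")
```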
In the pooled meta-analysis, OS was significantly increased in the HAIC as palliative treatment group compared with patients in the SC group (Z = 3.66, P = 0.000; HR, 0.17; 95% CI, 0.08-0.26). Furthermore, OS was significantly increased in the HAIC as adjuvant treatment group compared with the SC group (Z = 3.99, P = 0.000; HR, 0.63; 95% CI, 0.38-0.87). These results showed that HAIC was an effective treatment for prolonging OS in patients with CRLM.

Tumor RRs

In total, 11 of the 18 trials reported RRs, and all 1,022 patients in these studies were included in the pooled analysis. Among these, nine studies (2-5, 7, 9, 14, 18, 28) applied HAIC as a palliative treatment in patients with unresectable colorectal liver metastases, and two studies (14,24) used it as an adjuvant treatment in patients with curative resection of liver metastases. Heterogeneity among the studies was also examined. The results showed that there was statistical heterogeneity among the nine palliative studies (P = 0.000, I2 = 72.2%); thus, a random effects model was used for the pooled analysis. In addition, there was no significant heterogeneity across the two adjuvant studies (P = 0.604, I2 = 0.0%), and a fixed effects model was applied for the pooled analysis. Pooled data demonstrated higher RRs in the HAIC as palliative treatment group compared with the SC group (Z = 3.36, P = 0.001; RR = 2.09; 95% CI, 1.36-3.22). In addition, RRs were significantly increased in the HAIC as adjuvant treatment group compared with the SC group (Z = 3.53, P = 0.000; RR = 2.14; 95% CI, 1.40-3.26). The pooled analysis is presented in Figure 6.

Sensitivity Analysis

The robustness of the OS results was further confirmed by sensitivity analysis in the palliative treatment group (Figure 8A) and the adjuvant treatment group (Figure 8B). Sensitivity analysis was performed using a leave-one-out procedure, and the results showed that exclusion of any individual study did not significantly skew the pooled effect, which remained significant (P < 0.05), indicating that the results of the pooled analysis for OS were robust to some extent.

Publication Bias

Egger's test and a contour-enhanced funnel plot were used to assess potential publication bias. Firstly, Egger's test was used to assess potential publication bias in the pooled OS as the results are quantitative. Egger's test showed no significant publication bias in the HAIC as palliative treatment group (P = 0.057; Figure 9A) or the adjuvant treatment group (P = 0.201; Figure 9B). Subsequently, a contour-enhanced funnel plot, which adds conventional milestones of statistical significance (P < 0.1, P < 0.05, P < 0.01) to the funnel plot, was used to distinguish the detailed reasons for any asymmetry. The results indicated that several missing studies were in areas of higher statistical significance (P < 0.01, Figures 9C,D), highlighting that the potential reason for the asymmetry may be factors other than publication bias. Finally, the original research was traced again, and it was speculated that lower methodological quality (such as non-double-blinded designs, unsatisfactory power calculations and small sample sizes) may account for the bias. These limitations may undermine the reliability of the results.
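The leave-one-out sensitivity procedure described above can be sketched in a few lines, reusing the fixed-effect pooling from the previous snippet; the study values are again invented for illustration.

```python
# Sketch of the leave-one-out sensitivity analysis described above.
# The log-HRs and SEs are invented, not the extracted trial data.
import math

log_hr = [-0.25, -0.40, -0.10, -0.30]
se     = [0.15, 0.20, 0.18, 0.12]

def pool(ys, ss):
    """Fixed-effect inverse-variance pooled estimate and its SE."""
    w = [1.0 / s**2 for s in ss]
    est = sum(wi * y for wi, y in zip(w, ys)) / sum(w)
    return est, math.sqrt(1.0 / sum(w))

for i in range(len(log_hr)):
    ys = log_hr[:i] + log_hr[i + 1:]
    ss = se[:i] + se[i + 1:]
    est, sp = pool(ys, ss)
    lo, hi = math.exp(est - 1.96 * sp), math.exp(est + 1.96 * sp)
    print(f"omit study {i + 1}: pooled HR = {math.exp(est):.2f} "
          f"(95% CI {lo:.2f}-{hi:.2f})")
# If every omit-one CI stays on the same side of HR = 1, the pooled OS
# effect is considered robust, mirroring Figures 8A,B.
```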
Meta-Regression Analysis

Meta-regression was performed to assess the effects of any underlying confounding factors on the pooled effect, and to identify potential sources of heterogeneity in the OS. The following covariates were considered as potential factors: (i) Different treatment regimens, FUDR alone vs. non-FUDR; (ii) sample size, n ≥ 100 vs. n < 100; and (iii) methodological quality, high vs. low. Overall, univariate analysis showed that none of these three covariates exerted any significant influence on the association between HAIC as a palliative or adjuvant treatment and the OS outcome (P > 0.05, Table 2). Subsequently, multivariate meta-regression was used to assess the effect of the covariates on the pooled effect of OS. The results revealed that the three variables did not affect the relationship between HAIC and OS (P = 0.43), and no heterogeneity was observed based on this model. The results are shown in Table 2.

DISCUSSION

To the best of our knowledge, the present meta-analysis is the first study to show the potentially positive benefits of HAIC in improving OS among patients with CRLM compared with SC. This integrated analysis, which included 18 prospective RCTs with 1,766 participants, demonstrated that patients with CRLM treated with HAIC had significantly higher OS rates compared with those treated with SC. HAIC as a palliative treatment in patients with unresectable colorectal liver metastases and as an adjuvant treatment in patients with curative resection of liver metastases was likely to prolong the OS time of patients with CRLM. The rates of complete and partial RRs also increased significantly in the HAIC group compared with the SC group. However, PFS did not differ significantly between the two groups. These data demonstrate that HAIC may be an effective intervention for the treatment of CRLM, particularly in improving OS and RRs. The lack of a difference in PFS between the HAIC and SC groups may be interpreted as indicating that both HAIC and SC can effectively reduce progression or recurrence of tumors. HAIC is a mode of chemotherapeutic drug administration. As a locoregional therapy, HAIC has significant advantages in terms of pathological RRs, with a ≥6-fold increase in effective dose in patients with CRLM (29). Owing to anatomical features, liver metastases are perfused primarily by the hepatic artery, whereas normal liver tissue is perfused primarily by the portal vein; thus, a significantly higher local drug concentration can be delivered to the metastases. Additionally, quality of life was maintained even with long-term treatment. Another two trials also showed that patients with CRLM had longer OS times when treated with HAIC (18,22). In recent years, novel chemotherapeutic drugs, such as irinotecan, oxaliplatin, bevacizumab, and cetuximab, have been widely administered through HAIC in clinical practice, contributing to longer survival times of >20 months in patients with CRLM (37,38). These data suggest the potential of HAIC for the management of CRLM. However, based on the evidence currently available, HAIC used as an adjuvant or as a palliative treatment may lead to different results. According to the conclusions of the original studies (21,23,28,29), HAIC as an adjuvant treatment after curative liver resection (R0/R1) in patients with CRLM did not show survival benefits. Thus, it is deduced that HAIC could serve as a palliative treatment for patients with CRLM and could be beneficial for longer survival times. This integrated analysis provides evidence suggesting that HAIC is effective at controlling CRLM. However, potential limitations should be noted when accepting the conclusions of the present study. First, side-effects related to HAIC have been reported in several studies. Adverse effects may arise from technical complications when the pump and catheter are placed.
Complications of pump placement, such as catheter-related events, including hepatic artery occlusion, thrombosis and catheter-related infection, have garnered increasing attention, even though their rates were <7% (39,40). Gastrointestinal and hepatobiliary adverse effects, such as hyperbilirubinemia, biliary sclerosis, nausea, diarrhea, vomiting, and stomatitis, were observed in 25-35% of patients treated with HAIC (41,42). Secondly, there was notable heterogeneity and bias between studies, which should be taken into consideration. Variations in the duration of administration of the chemotherapeutic drugs used in the HAIC and SC groups, and inconsistent baseline data, such as the number of metastases, tumor size and location, may result in heterogeneity. In addition, the improvement in OS from HAIC should also be assessed based on whether the liver metastases were resected. Finally, the methodological limitations should be acknowledged. None of the included studies had robust double blinding procedures, allocation concealment was missing in several studies, and small sample sizes may have resulted in selection and performance bias. All these factors may introduce instability into the present analysis. Thus, more prospective studies with larger sample sizes, long-term survival evaluation and standardized protocols are required to accurately determine the role of HAIC in controlling colorectal liver metastases.

AUTHOR CONTRIBUTIONS

YZ, KW, and TY performed the search and drafted the manuscript. YC and WL performed the data extraction and analyzed the data. YZ and TY designed the study and amended the original draft. XY and TX provided the clinical imaging data of the patients and equally contributed to the conception of the study. All authors contributed to the article and approved the submitted version.
Benthic silicon cycling in the Arctic Barents Sea: a reaction-transport model study

Abstract. Over recent decades the highest rates of water column warming and sea ice loss across the Arctic Ocean have been observed in the Barents Sea. These physical changes have resulted in rapid ecosystem adjustments, manifesting as a northward migration of temperate phytoplankton species at the expense of silica-based diatoms. These changes will potentially alter the composition of phytodetritus deposited at the seafloor, which acts as a biogeochemical reactor and is pivotal in the recycling of key nutrients, such as silicon (Si). To appreciate the sensitivity of the Barents Sea benthic system to the observed changes in surface primary production, there is a need to better understand this benthic-pelagic coupling. Stable Si isotopic compositions of sediment pore waters and the solid phase from three stations in the Barents Sea reveal a coupling of the iron (Fe) and Si cycles, the contemporaneous dissolution of lithogenic silicate minerals (LSi) alongside biogenic silica (BSi), and the potential for the reprecipitation of dissolved silicic acid (DSi) as authigenic clay minerals (AuSi). However, as reaction rates cannot be quantified from observational data alone, a mechanistic understanding of which factors control these processes is missing. Here, we employ reaction-transport modelling together with observational data to disentangle the reaction pathways controlling the cycling of Si within the seafloor. Processes such as the dissolution of BSi are active on multiple timescales, ranging from weeks to hundreds of years, which we are able to examine through steady state and transient model runs. Steady state simulations show that 60 % to 98 % of the sediment pore water DSi pool may be sourced from the dissolution of LSi, while the isotopic composition is also strongly influenced by the desorption of Si from metal oxides, most likely Fe (oxyhydr)oxides (FeSi), as they reductively dissolve. Further, our model simulations indicate that between 2.9 % and 37 % of the DSi released into sediment pore waters is subsequently removed by a process that has a fractionation factor of approximately −2 ‰, most likely representing reprecipitation as AuSi. These observations are significant as the dissolution of LSi represents a source of new Si to the ocean DSi pool and precipitation of AuSi an additional sink, which could address imbalances in the current regional ocean Si budget. Lastly, transient modelling suggests that at least one-third of the total annual benthic DSi flux could be sourced from the dissolution of more reactive, diatom-derived BSi deposited after the surface water bloom at the marginal ice zone. This benthic-pelagic coupling is likely to change with the continued northward migration of Atlantic phytoplankton species, the northward retreat of the marginal ice zone and the observed decline in the DSi inventory of the subpolar North Atlantic Ocean over the last 3 decades.

Introduction

Diatoms are photosynthesising algae that take up dissolved silicic acid (DSi) from seawater to build silica-based frustules (termed "biogenic silica" (BSi) or "opal"), which are then recycled or reworked in transition to and within the seafloor.
The seafloor acts as a biogeochemical reactor, generating a benthic return flux of DSi across the pan-Arctic region that is estimated to equal the input from all Arctic rivers (März et al., 2015). These recycling and reworking processes are therefore important for the regional silicon (Si) budget and for fuelling subsequent blooms, where seafloor-derived nutrients are able to be advected into the photic zone. Typically, Barents Sea phytoplankton spring blooms are dominated by diatoms (Wassmann et al., 1999; Orkney et al., 2020). However, temperate flagellate species are becoming more dominant in the Eurasian Basin of the Arctic Ocean and are expected to become the resident bloom formers in the region (Neukermans et al., 2018; Orkney et al., 2020; Oziel et al., 2020; Ingvaldsen et al., 2021). This shift in species composition is thought to be driven in part by an expansion of the Atlantic Water realm ("Atlantification") (Fig. 1). Furthermore, nutrient concentrations in Atlantic Water flowing into the Barents Sea have declined over the last 3 decades and are forecast to do so throughout the 21st century (Neukermans et al., 2018, and references therein). Crucially, a much more significant drop in DSi concentrations has been observed relative to nitrate (Rey, 2012; Hátún et al., 2017), creating less favourable conditions for diatom growth (Neukermans et al., 2018). This shift in phytoplankton community composition is predicted to reduce the export efficiency of phytodetritus, with potentially significant implications for pelagic-benthic coupling and thus the Si cycle (Fadeev et al., 2021; Wiedmann et al., 2020). Observations from long-term sediment trap data show that carbon export and aggregate sinking rates are 2-fold higher underneath diatom-rich blooms in seasonally sea-ice-covered areas of the Fram Strait, compared with those in Phaeocystis pouchetii-dominated blooms in the ice-free region (Fadeev et al., 2021). A similar contrast was observed in carbon export fluxes measured using short-term sediment trap deployments north of Svalbard (Dybwad et al., 2021). It is estimated that 40 %-96 % of surface ocean primary production is exported to the seafloor in the Barents Sea (Cochrane et al., 2009, and references therein), while the export efficiency of net primary production out of the euphotic zone in the central gyres is typically < 10 % (Turner, 2015, and references therein). Given the changes forecast in the pelagic-benthic coupling of Si in the Arctic, it is important to understand the baseline benthic biogeochemical system in order to anticipate the implications of further perturbations. Based on Si isotopic data from various reactive sedimentary pools and the sediment pore water dissolved phase from the Barents Sea seafloor, Ward et al. (2022) hypothesised that the Si cycle is isotopically coupled to the redox cycling of metal oxides, most likely solid-phase Fe (oxyhydr)oxides. The reductive dissolution of Fe (oxyhydr)oxides and release of adsorbed Si (FeSi) are thought to drive marked shifts in the isotopic composition of the Barents Sea sediment pore water DSi pool towards lower values. Further, Ward et al. (2022) propose that sediment pore water undersaturation drives the contemporaneous dissolution of lithogenic silicate minerals (LSi) alongside BSi, some of which is reprecipitated as authigenic clay minerals (AuSi), representing a sink of isotopically light Si in the regional Si budget. Finally, Ward et al.
(2022) propose that seasonal pelagic phytoplankton blooms generate stark peaks in pore water DSi that dissipate on the order of weeks to months. However, to fully understand the early diagenetic cycling of Si within the seafloor of the Barents Sea, we must be able to quantify the relative contribution of LSi and BSi to the DSi pool, as well as establish whether AuSi precipitation removes a significant portion of that pool. Here we employ steady state reaction-transport modelling to reconstruct the benthic cycling of Si in the Barents Sea, informed by our dataset of solid- and dissolved-phase Si isotopic compositions (Ward et al., 2022), to test these hypotheses. Such techniques allow for the disentangling and quantification of the aforementioned early diagenetic reactions (Geilert et al., 2020a; Ehlert et al., 2016a; Cassarino et al., 2020), as well as the return benthic flux of DSi to the overlying bottom water. Furthermore, reaction-transport modelling allows for the quantification of processes on much shorter timescales; thus we use transient model runs to validate the hypothesis that the pulsed deposition of bloom-derived BSi can perturb the benthic Si cycle. We then quantify the bloom-derived BSi contribution to the total annual benthic DSi flux, the deposition of which is subject to the anticipated shifts in community compositions of pelagic primary producers across the Arctic Ocean. Understanding the key aims presented here not only is important for anticipating the biogeochemical response of the Barents Sea seafloor to physical, chemical and biological changes in the surface ocean but also has implications for the pan-Arctic Si budget. Currently there are disparities in the isotopic and mass balances of the Arctic Ocean Si budget, with Torres-Valdés et al. (2013) concluding that the Arctic Ocean is a slight net exporter of Si. Furthermore, a recent isotopic assessment identified the need for an additional benthic sink of light Si to close the Si budget (Brzezinski et al., 2021). However, current understanding is limited by a lack of direct observations from major gateways, including the Barents Sea. By coupling observational data with reaction-transport modelling, we are able to construct a balanced Si budget for the Barents Sea (Sect. 3.5), contributing to the data gaps that currently limit our understanding of pan-Arctic Ocean Si cycling.

2 Oceanographic setting, materials and methods

Oceanographic setting

The Barents Sea is one of seven shelf seas encircling the central Arctic Ocean and lies on the main inflow route for Atlantic Water. Oceanic circulation is driven by regional cyclonic atmospheric circulation and constrained by areas of prominent bathymetry (Fig. 1) (Smedsrud et al., 2013). Atlantic Water is fed in through the Barents Sea Opening between mainland Norway and Bear Island. This water mass then flows northwards, where it is met by colder, fresher Arctic Water infiltrating the Barents Sea from the northern openings (Oziel et al., 2016). The oceanic polar front delineates these two water masses, the geographic position of which is tightly constrained in the western basin by the bathymetry but is less well defined in the east (Barton et al., 2018; Oziel et al., 2016). The heat content of the Atlantic Water-dominated region south of the polar front maintains a sea-ice-free state year-round, whereas the northern Arctic Water realm is seasonally sea-ice-covered, with a September minimum and a March/April maximum (Årthun et al., 2012; Faust et al., 2021).
The Barents Sea winter sea ice extent has been in decline since circa 1850 (Shapiro et al., 2003), but from 1998 the rate of retreat has become the most rapid observed on any Arctic shelf (Oziel et al., 2016; Årthun et al., 2012). Current forecasts suggest the Barents Sea will become the first year-round, sea-ice-free Arctic shelf by 2075 (± 28 years) (Onarheim and Årthun, 2017). The atmospheric and water column warming driving this sea ice retreat is a result of both anthropogenic and natural processes, with recent Atlantification arising from a northward expansion of the Atlantic Water realm (Årthun et al., 2012) and a reduction in sea ice import to the northern Barents Sea. The impact of these changes is an increase in upward heat fluxes, which inhibits sea ice formation (Lind et al., 2018). The dynamic nature of the Barents Sea with respect to the physical oceanographic characteristics is reflected biologically in the ecosystems of the two main hydrographic realms. Annual primary production is estimated to range from 70 to 200 g C m−2, with lower values found in the northern Arctic Water realm, where a deep meltwater-formed pycnocline limits nutrient replenishment through wind-induced mixing (Sakshaug, 1997; Wassmann et al., 1999). However, the most distinct peaks in the rates of primary production are found in the marginal ice zone (MIZ) (reaching 1.5-2.5 g C m−2 d−1; Hodal and Kristiansen, 2008; Titov, 1995) (Fig. 2), which forms in spring/early summer as sea ice melts and retreats northwards, stratifying the water column and stabilising the nutrient-rich photic zone (Wassmann et al., 2006; Reigstad et al., 2002; Olli et al., 2002; Krause et al., 2018; Wassmann et al., 1999; Vernet et al., 1998; Wassmann and Reigstad, 2011). The phytoplankton communities of the Barents Sea in proximity to the polar front and MIZ tend to be dominated by pelagic and ice-associated diatom species, as well as the prymnesiophyte P. pouchetii (Syvertsen, 1991; Wassmann et al., 1999; Degerlund and Eilertsen, 2010; Makarevich et al., 2022).

General approach

We use the Biogeochemical Reaction Network Simulator (BRNS) to disentangle the interplay of chemical and physical processes involved in the early diagenetic cycling of Si at stations B13, B14 and B15 of the Changing Arctic Ocean Seafloor (ChAOS) project in the Barents Sea (Figs. 1 and 2, Table 1). These stations span the main hydrographic features (polar front) and realms (Atlantic Water and Arctic Water) of the Barents Sea. BRNS is an adaptive simulation environment suitable for large, mixed kinetic-equilibrium reaction networks (Regnier et al., 2003; Aguilera et al., 2005), which is based on a vertically resolved mass conservation equation (Eq. 1) (Boudreau, 1997), simulating concentration changes for solid and dissolved species (i) in porous media at each depth interval and time step:

∂(σ C_i)/∂t = ∂/∂z (σ (D_bio + D_i) ∂C_i/∂z) − ∂(σ ω C_i)/∂z + σ α_i (C_i(0) − C_i) + σ Σ_j λ_j^i R_j,   (1)

where ω, C_i, t and z represent the sedimentation rate, concentration of species i, time and depth respectively. The porosity term σ is given as σ = (1 − ϕ) for solid species and σ = ϕ for dissolved species, where ϕ is sediment porosity (Table S2). This term ensures that the respective concentrations represent the amount or mass per unit volume of sediment pore water or solids as required (Boudreau, 1997). D_bio is the bioturbation coefficient (cm2 yr−1) and was determined experimentally alongside this study (Solan et al., 2020).
D_i (cm2 yr−1) is the effective molecular diffusion coefficient (D_i = 0 for solids), and α_i (yr−1) represents the bioirrigation rate (α_i = 0 for solids). R_j represents the rate of each reaction (j), and λ_j^i is its stoichiometric coefficient. A full description of the model can be found in Sect. S2 of the Supplement.

Steady state reaction-transport modelling

Steady state modelling was employed to reproduce the Si isotopic observational data in order to quantify the reaction rates of key processes involved in the cycling of Si within the seafloor. The version of the Si BRNS model employed here is adapted from Cassarino et al. (2020), which largely follows the approach of Ehlert et al. (2016a) and assumes a steady state. To ensure a steady state was achieved in the baseline simulations, the applied run time was dependent upon the sedimentation rate (0.05-0.06 cm yr−1; Zaborska et al., 2008; Faust et al., 2020) and core length so as to allow for at least two full deposition cycles (∼ 2500 years for a 50 cm Barents Sea core). The implemented reaction network accounts for a pool of pore water DSi, sourced by a dissolving BSi phase, from which Si can be incorporated into authigenic clay minerals (AuSi) as they precipitate. The kinetic rate law for the dissolution of BSi follows Eq. (2) (Hurd, 1972):

R_BSidiss = k_diss [BSi] (1 − [DSi]/BSi_sol),   (2)

where k_diss is the reaction rate constant (yr−1) and BSi_sol is the solubility of BSi (mol cm−3), implying that the rate of dissolution is proportional to the saturation state. The rate of BSi dissolution is allowed to decrease exponentially downcore in order to account for a reduction in reactivity due to BSi maturation and interaction with dissolved Al, as well as the preferential dissolution of more reactive material at shallower depths (Rickert, 2000; Van Cappellen and Qiu, 1997b; Rabouille et al., 1997; Dixit et al., 2001). The rate constants for BSi dissolution (k_diss, Eq. 2) were constrained using the solid-phase BSi content measurements (Fig. 3). Equation (2) represents a simplification of the reaction rate law, which in reality is influenced by processes not incorporated into the model, such as surface area, temperature, pH, pressure and salinity. It is possible in some circumstances for the dissolution rate to deviate from the linear rate law (Van Cappellen et al., 2002); however, it is generally accepted that the dissolution of BSi is predominantly driven thermodynamically by the degree of undersaturation, leading to the linear rate law implemented in this study (Van Cappellen et al., 2002; Rimstidt and Barnes, 1980; Van Cappellen and Qiu, 1997b; Loucaides et al., 2012).
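To make the structure of Eqs. (1)-(2) concrete, the following minimal Python sketch integrates a single dissolved species (DSi) with diffusion, advection and a BSi dissolution source on a 1-D grid. All parameter values are illustrative assumptions, not the calibrated BRNS values, and bioirrigation and solid-phase transport are omitted for brevity.

```python
import numpy as np

# Minimal explicit finite-difference sketch of Eq. (1) for dissolved DSi,
# with the linear BSi dissolution rate law of Eq. (2) as the only reaction.
# All values are illustrative assumptions, not calibrated BRNS parameters.
L, nz = 50.0, 51                       # domain length (cm), grid points
z = np.linspace(0.0, L, nz)
dz = z[1] - z[0]
D = 100.0                              # molecular diffusion (cm2 yr-1), assumed
Dbio = 10.0 * np.exp(-z / 5.0)         # bioturbation, decaying with depth, assumed
w = 0.05                               # sedimentation rate (cm yr-1)
C_bw = 5e-9                            # bottom water DSi (mol cm-3), ~5 uM

k_diss, BSi, BSi_sol = 0.05, 2e-6, 1e-7   # Eq. (2) parameters, illustrative

C = np.full(nz, C_bw)
Deff = D + Dbio
dt = 0.4 * dz**2 / Deff.max()          # explicit stability limit
for _ in range(int(200 / dt)):         # march ~200 yr towards steady state
    R = k_diss * BSi * (1.0 - C / BSi_sol)        # Eq. (2), mol cm-3 yr-1
    diff = np.zeros(nz)
    diff[1:-1] = Deff[1:-1] * (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2
    adv = np.zeros(nz)
    adv[1:] = -w * (C[1:] - C[:-1]) / dz          # upwind burial advection
    C = C + dt * (diff + adv + R)
    C[0] = C_bw                        # fixed concentration at the SWI
    C[-1] = C[-2]                      # zero-gradient lower boundary

print(f"asymptotic pore water DSi ~ {C[-1] * 1e9:.0f} uM")
```

With these assumed values the profile relaxes towards the ~100 µM asymptote set by the imposed solubility, illustrating how the saturation term in Eq. (2) caps the pore water DSi concentration.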
Figure 2. Changing Arctic Ocean Seafloor (ChAOS) project summary. Chlorophyll α represents the peak value measured at each station during JR16006 CTD casts; data are available at https://www.bodc.ac.uk/data/published_data_library/catalogue/10.5285/89a3a6b8-7223-0b9c-e053-6c86abc0f15d/ (last access: 14 April 2022). Benthic nutrient flux magnitudes are for DSi measured in this study and Ward et al. (2022), as well as PO4(3−) and NH4(+) from Freitas et al. (2020). The red box is a schematic summary of the main processes involved in the early diagenetic cycling of Si in the Barents Sea, derived from the results of the steady state model simulations in this study. Sea ice extent represents an approximation of the conditions at the time of sampling in 2017 and 2019. BSi reactivities were determined by the steady state model simulations. Please see Supplement Sect. S4 for a description of sediment pigment extraction methods. BSi - biogenic silica; LSi - lithogenic silica; AuSi - authigenic clay minerals; DSi - dissolved silicic acid.

The precipitation of AuSi was modelled through Eq. (3), where k_precip is the precipitation rate constant (Ehlert et al., 2016a):

R_AuSi = k_precip ([DSi]/AuSi_sol − 1) for [DSi] > AuSi_sol, otherwise R_AuSi = 0.   (3)

This rate law assumes that the reaction will proceed provided the concentration of DSi is greater than the solubility of the AuSi (AuSi_sol). The rate is thus proportional to the degree of pore water DSi oversaturation (Ehlert et al., 2016a). We assume a value of 50 µM for AuSi_sol at all three stations (Lerman et al., 1975; Hurd, 1973) (Table S2). As with BSi dissolution, the rate of AuSi precipitation was allowed to decrease exponentially with depth, compatible with the hypothesis that the majority of AuSi precipitation occurs in the upper portion of marine sediment cores. Here, DSi can more easily precipitate in the presence of more readily available dissolved Al, the concentration of which is typically higher in the upper reaches of shelf sediments, sourced from the dissolution of reactive LSi (e.g. feldspar and gibbsite) contemporaneously to that of BSi (Aller, 2014; Rabouille et al., 1997; Van Beusekom et al., 1997; Ehlert et al., 2016a). In addition to the dissolution of BSi and precipitation of AuSi accounted for in previous early diagenetic modelling studies of the benthic Si cycle (Ehlert et al., 2016a; Cassarino et al., 2020), we incorporate the dissolution of LSi, which is thought to be an important oceanic source of numerous elements, including Si (Geilert et al., 2020a; Tréguer et al., 1995; Jeandel et al., 2011; Fabre et al., 2019; Ehlert et al., 2016b; Jeandel and Oelkers, 2015; Pickering et al., 2020; Morin et al., 2015). Here we assume that the dissolution of LSi is predominantly driven by the degree of undersaturation, analogous to Eq. (2):

R_LSidiss = k_LSidiss [LSi] (1 − [DSi]/LSi_sol),   (4)

although, as with the dissolution of BSi, LSi dissolution is a complex reaction and sensitive to processes that are not included in the model, including the potential for being catalysed by microbes (Vandevivere et al., 1994; Vorhies and Gaines, 2009; Liu et al., 2017). The undersaturation of Si minerals is known to include most primary and secondary silicates; thus dissolution extends beyond BSi in marine sediments (Isson and Planavsky, 2018, and references therein). Indeed, a suite of experiments have shown that primary silicates and clay minerals can rapidly release Si when placed in DSi-undersaturated seawater and take up Si in DSi-enriched waters (Siever, 1968; Mackenzie et al., 1967; Lerman et al., 1975; Hurd et al., 1979; Fanning and Schink, 1969; Mackenzie and Garrels, 1965; Gruber et al., 2019; Pickering, 2020). Lerman et al. (1975) determined in one such experiment that the dissolution of eight clay minerals could be described by a first-order reaction rate law driven by the saturation state, consistent with that applied here. Further, our assumption that LSi dissolution is driven by the degree of undersaturation is consistent with the suggestion that low bottom water DSi concentrations of the North Atlantic Ocean could allow for the dissolution of silicate minerals and thus account for high benthic DSi flux magnitudes in areas almost devoid of BSi (< 1 wt %) (Tréguer et al., 1995).
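A compact sketch of the three rate laws (Eqs. 2-4), including the exponential downcore decrease applied to the BSi and AuSi rate constants, is given below; the numerical values and attenuation depths are illustrative assumptions rather than the model-derived constants of Table S2.

```python
import numpy as np

# Illustrative implementation of the rate laws of Eqs. (2)-(4). The constants
# and attenuation depths are assumptions for demonstration, not Table S2 values.

def r_bsi_diss(dsi, bsi, z, k0=0.05, z_att=10.0, bsi_sol=1.0e-7):
    """Eq. (2): BSi dissolution, with k_diss decaying exponentially downcore."""
    k = k0 * np.exp(-z / z_att)
    return k * bsi * (1.0 - dsi / bsi_sol)

def r_ausi_precip(dsi, z, k0=1.0e-9, z_att=5.0, ausi_sol=5.0e-8):
    """Eq. (3): AuSi precipitation, active only where DSi exceeds AuSi_sol."""
    k = k0 * np.exp(-z / z_att)
    return np.where(dsi > ausi_sol, k * (dsi / ausi_sol - 1.0), 0.0)

def r_lsi_diss(dsi, lsi, k=1.0e-4, lsi_sol=1.0e-7):
    """Eq. (4): LSi dissolution, driven by the degree of undersaturation."""
    return k * lsi * (1.0 - dsi / lsi_sol)

z = np.linspace(0.0, 50.0, 6)          # depths (cm)
dsi = np.full_like(z, 6.0e-8)          # 60 uM in mol cm-3, assumed profile
print(r_bsi_diss(dsi, 1e-6, z))
print(r_ausi_precip(dsi, z))           # nonzero: 60 uM > assumed 50 uM solubility
print(r_lsi_diss(dsi, 1e-4))
```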
A value of ∼ 100 µM was used for the solubility of LSi (LSi_sol) at all three stations, consistent with observations during multiple dissolution experiments of common silicate minerals in seawater (Table S3), as well as the estimated solubility of amorphous silica in high-detrital-component estuarine sediments (Kemp et al., 2021). The desorption of Si from solid Fe (oxyhydr)oxide phases under anoxic conditions was simulated using a simple reaction rate constant (k_FeSi), representing the rate of desorption. The value assigned to k_FeSi was calculated during the modelling exercise, and no assumed amount of FeSi was included in the upper boundary conditions. This parameter likely represents a significant simplification; however, the exact process pertaining to the adsorption of Si onto Fe (oxyhydr)oxides is unclear and requires further study (Geilert et al., 2020a). Step functions were included in the FeSi reactions in the model to simulate the desorption of this phase at specific depth intervals, representing the Fe redox boundaries identified in Ward et al. (2022). The step functions act as a cut-off mechanism, either setting reaction rates to zero or activating them at specific depths. A full description of the model, including all boundary conditions and how isotopic fractionation was imposed in the AuSi precipitation and FeSi desorption reactions, can be found in Sect. S2 of the Supplement. Our estimates for all reaction rate constants in the steady state simulations (k_diss, k_precip, k_LSidiss and k_FeSi) were not based on published values and were model-derived (Table S2). These values were constrained by ensuring the best fit of the observational data with the simulated solid-phase BSi content and pore water DSi concentrations and isotopic compositions, which were obtained by minimising the root-mean-square error (RMSE) between simulated and measured values (Table S1). Despite being model-derived, k_diss values (0.0055-0.074 yr−1) are found to lie within the published range for marine sediment BSi (Sect. 3.2). After the best-fit scenarios were established for each station, a sensitivity experiment was carried out by sequentially setting each reaction rate constant to zero in order to assess the importance of each process to the model fit (Fig. 3).

Processing the simulated data

Depth-integrated rates (R) of a given reaction (j) were calculated across the model domain for the best-fit simulation data of each station using Eq. (5):

R_j = ∫ from 0 to L of R_j(x) dx,   (5)

where L is the model domain length and dx denotes the given depth interval. The deposition flux of BSi at the sediment-water interface (SWI) (J_BSi,in) was then calculated based on Eq. (6), which states that J_BSi,in equates to the sum of the flux of BSi out of the sediment (assumed to equate to the integrated rate of BSi dissolution, R_db) and the BSi burial flux (J_BSi,bur) (Burdige, 2006; Freitas et al., 2021):

J_BSi,in = R_db + J_BSi,bur.   (6)

J_BSi,bur was estimated at the base of the model domain (50.4 cm), following Eq. (7) on mass accumulation (Varkouhi and Wells, 2020, and references therein), which is controlled solely by advection. The sedimentation rate at depth (ω_z) was corrected for compaction following Eq. (8) (Berner, 1980). J_BSi,bur calculations assume a sediment wet bulk density of 1.7 g cm−3, consistent with previously assumed values for the Arctic seabed (Backman et al., 2004) and that measured in clay-rich sediments of the Barents Sea (Orsi and Dunn, 1991).
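The post-processing in Eqs. (5)-(6) amounts to a numerical depth integration of the simulated rate profile plus a burial term; a minimal sketch, with an invented rate profile and burial flux, is shown below.

```python
import numpy as np

# Sketch of Eqs. (5)-(6): depth-integrate a simulated BSi dissolution rate
# profile and add a burial flux to recover the deposition flux at the SWI.
# The rate profile and burial flux below are invented for illustration.
z = np.linspace(0.0, 50.4, 100)                    # model domain (cm)
r_db = 1.0e-8 * np.exp(-z / 10.0)                  # dissolution rate, mol cm-3 yr-1

R_db = np.trapz(r_db, z)                           # Eq. (5): mol cm-2 yr-1
J_bur = 2.0e-8                                     # Eq. (7) burial flux, assumed
J_in = R_db + J_bur                                # Eq. (6): deposition flux at SWI

print(f"R_db = {R_db:.2e}, J_BSi,in = {J_in:.2e} mol cm-2 yr-1")
```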
We are also able to use the model simulation output to determine the total benthic flux of DSi at the SWI (J_tot), which has multiple constituent parts that contribute to the benthic flux magnitude. Following Eq. (9) (Freitas et al., 2020), we calculate J_tot and thus the relative contributions from bioturbation (J_bioturb), bioirrigation (J_bioirr), advection (J_adv) and molecular diffusion (J_diff):

J_tot = J_diff + J_bioturb + J_bioirr + J_adv,   (9)

to complement the calculated J_diff estimates and core-incubation-derived J_tot of Ward et al. (2022).

Transient reaction-transport modelling

The influence of seasonality in pelagic primary production on the benthic Si cycle of the Barents Sea has been inferred through interpretation of pore water DSi depth profiles at station B14 (Ward et al., 2022). However, BRNS assumes a steady state and therefore cannot resolve seasonal biogeochemical dynamics without modification to allow certain boundary conditions to become time-dependent, enabling their activation and deactivation on a temporal scale. Here we use transient model runs to test the hypothesis that the pulsed deposition of bloom-derived BSi can rapidly perturb the benthic DSi pool, which is then able to recover on the order of weeks to months. The steady state baseline simulations at station B14 represent a data-model best fit of the 2018 observational data, wherein we do not observe transient peaks in sediment pore water DSi concentrations (Fig. 4). This steady state scenario was used as the initial condition for the transient simulations, which were run for 1 simulation year, producing output data at weekly time intervals. We simulate the phytoplankton spring bloom event by incorporating a step function to temporarily increase the BSi deposition flux and reactivity, simulating the effects of a 1- to 3-week spring bloom in late spring/early summer on the delivery of BSi to the seafloor. Durations of 1 and 3 weeks are thought to represent the typical lengths of a Barents Sea MIZ bloom and spring bloom respectively (Sakshaug, 1997; Dalpadado et al., 2020) (Fig. 4). All boundary conditions were kept constant, with the exception of those related to the bloom-derived BSi pool. During the 1- to 3-week time interval, the bloom-derived material was deposited at a rate equivalent to a 10- to 30-fold increase in the steady state BSi deposition flux (Fig. 4). The background BSi deposition flux magnitude at station B14 is a model-derived parameter, constrained in the steady state simulations with the measured sediment BSi content and pore water DSi depth profiles. A 30-fold increase on the background deposition flux is similar to observations of a 10 d post-bloom diatom mass sinking event in the subpolar North Atlantic down to 750 m depth (26-fold increase) (Rynearson et al., 2013). The Barents Sea covers a relatively shallow continental shelf (average depth 230 m), and intense physical mixing at the polar front has been shown to enhance rates of vertical organic carbon flux at depth close to station B14 (Wassmann and Olli, 2004). We could therefore anticipate an even greater increase in BSi depositional flux under bloom conditions than the maximum value assumed here. The reactivity of the bloom-derived material (k_dissbloom) ranged from 5 to 20 yr−1, which is within the reactivity range of fresh pelagic BSi (3 to 100 yr−1; Ragueneau et al., 2000; Nelson and Brzezinski, 1997) (Fig. 4).
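The time-dependent boundary condition described above can be expressed as a simple step function; the sketch below is a schematic of that forcing, with the background flux and bloom start date chosen as illustrative assumptions.

```python
# Schematic of the transient bloom forcing: a step function that amplifies the
# background BSi deposition flux 10- to 30-fold for 1-3 weeks. The background
# flux value and the bloom start week are illustrative assumptions.

def bsi_deposition_flux(t_yr: float, j_background: float = 1.0e-6,
                        bloom_start: float = 20.0 / 52.0,
                        bloom_weeks: float = 3.0,
                        amplification: float = 30.0) -> float:
    """Return the BSi deposition flux (mol cm-2 yr-1) at time t_yr (years)."""
    t = t_yr % 1.0                         # position within the annual cycle
    in_bloom = bloom_start <= t < bloom_start + bloom_weeks / 52.0
    return j_background * (amplification if in_bloom else 1.0)

# Weekly output over one simulation year, mirroring the transient runs.
for week in range(52):
    f = bsi_deposition_flux(week / 52.0)
    if f > 1.0e-6:
        print(f"week {week}: bloom deposition active, flux = {f:.1e}")
```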
Each of these three boundary conditions (length of the bloom period, k_dissbloom and the deposition flux) was varied across multiple simulations within the constraints of published values to assess the influence of each parameter on the size and longevity of the sediment pore water DSi peak. The boundary conditions were determined to test whether peaks in pore water DSi concentrations are able to form within 1.5 months and dissipate within 3 months of the bloom, as proposed by Ward et al. (2022) and evidenced by contrasting sea ice conditions across the three cruises (Sect. 3.2; Fig. 4).

Observational data

As many reactions responsible for the biogeochemical cycling of Si between the solid and dissolved phases fractionate the isotopes of Si (28Si, 29Si, 30Si) relative to each other, we are able to use stable Si isotopes as a tool to trace these pathways. All solid-phase, core top and sediment pore water samples collected for Si isotopic analysis were collected over three summers in the Barents Sea (30° E transect spanning 74 to 81° N) aboard the RRS James Clark Ross (2017, 2018, 2019). Dissolved-phase pore and core top water DSi concentration measurements were determined on board using a Lachat QuikChem 8500 flow injection analyser with an accuracy of 2.8 %, defined using certified reference materials (CRMs; Kanso Technos Co., Ltd.). Stable Si isotopic compositions of the samples were determined at the University of Bristol in the Bristol Isotope Group laboratory. Isotopic compositions are expressed in δ30Si notation (per mille, ‰), relative to the international Si standard NBS-28 (Eq. 10):

δ30Si = ((30Si/28Si)_sample / (30Si/28Si)_NBS-28 − 1) × 1000.   (10)

A full description of field methods, as well as the Si isotopic and concentration data of the solid and dissolved phases reconstructed using BRNS, is provided in Ward et al. (2022).

Results and discussion

Gaining a better mechanistic understanding of benthic biogeochemical Si cycling in the Barents Sea is important to anticipate the effects of further natural and anthropogenically driven environmental perturbations, as well as to begin to address key knowledge gaps that currently limit our understanding of the pan-Arctic Ocean Si budget. Ward et al. (2022) propose that LSi is dissolving alongside BSi in Barents Sea sediments, that part of the DSi pool is reprecipitated as AuSi, that the benthic Si and Fe redox cycles are coupled, and that seasonal variability in the deposition of BSi is expressed in the sediment pore water DSi pool. Here we reconstruct the benthic Si cycle of the Barents Sea using a reaction-transport model to further investigate and disentangle the interplay of processes that combined to produce our observational dataset and test these hypotheses. This approach allows for the quantification of reaction rates and fractionation factors, as well as of deposition and benthic flux magnitudes, which are used to inform a balanced Barents Sea Si budget that could have implications for the pan-Arctic Ocean budget.

3.1 What can reaction-transport modelling reveal about the controls on the background, steady state benthic Si cycle?

Coupling a reaction-transport model with δ30Si values measured in the solid and dissolved phases represents a powerful tool with which to trace early diagenetic reactions, given that multiple reaction pathways have been shown to fractionate Si isotopes.
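The δ30Si notation of Eq. (10), and the kind of two-endmember mixing calculation used later to apportion DSi sources, can each be written in a few lines; the standard ratio, measured ratio and endmember compositions below are invented for illustration.

```python
# Sketch of Eq. (10) and a two-endmember isotopic mixing calculation.
# The standard ratio, sample ratio and endmember values are assumed examples.
R_NBS28 = 0.033532            # assumed absolute 30Si/28Si of NBS-28

def delta30si(r_sample: float, r_std: float = R_NBS28) -> float:
    """Eq. (10): delta notation in per mille relative to NBS-28."""
    return (r_sample / r_std - 1.0) * 1000.0

def fraction_a(d_mix: float, d_a: float, d_b: float) -> float:
    """Fraction of endmember A in a two-endmember isotope mass balance,
    d_mix = f * d_a + (1 - f) * d_b."""
    return (d_mix - d_b) / (d_a - d_b)

print(f"sample: {delta30si(0.033578):+.2f} permil")
# Hypothetical pore water at -0.5 permil mixed from an isotopically light
# source (-1.5 permil, e.g. LSi/FeSi) and BSi (+1.0 permil):
print(f"light-source fraction = {fraction_a(-0.5, -1.5, 1.0):.2f}")  # 0.60
```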
One early diagenetic process that fractionates the isotopes of Si is the formation of clay minerals, which preferentially take up the lighter isotope from the dissolved phase, leaving residual waters relatively isotopically heavy in composition. A Si isotopic fractionation factor (30ε) associated with AuSi formation has yet to be thoroughly established; however, Ehlert et al. (2016a) and Geilert et al. (2020a) modelled a 30ε of −2 ‰ for marine AuSi formation. A 30ε of this magnitude is consistent with the formation of clay minerals in riverine and terrestrial settings (−1.8 ‰ to −2.2 ‰) (Hughes et al., 2013; Ziegler et al., 2005a, b; Opfergelt and Delmelle, 2012) and similar to that observed in the adsorption of Si onto Fe (oxyhydr)oxide minerals (30ε of −0.7 ‰ to −1.6 ‰) (Zheng et al., 2016; Delstanche et al., 2009; Wang et al., 2019; Opfergelt et al., 2009). However, the magnitude of 30ε associated with AuSi precipitation can reach up to −3 ‰ in deep-sea settings (Geilert et al., 2020b), likely depending on pore water properties (pH, temperature, salinity, saturation states). This relatively high 30ε is also consistent with repetitive clay mineral dissolution-reprecipitation cycles (Opfergelt and Delmelle, 2012). Similarly, 30ε during Si adsorption onto Fe (oxyhydr)oxides is thought to increase with mineral crystallinity (30ε of −1.06 ‰ for ferrihydrite and −1.59 ‰ for goethite). Isotopic fractionation during dissolution is less well constrained, with previous work suggesting that BSi dissolution could induce a slight fractionation that enriches the DSi pool in the lighter isotope (30ε of −0.55 ‰ to −0.86 ‰; Demarest et al., 2009; Sun et al., 2014) or occur without isotopic fractionation (Wetzel et al., 2014; Egan et al., 2012). Here we assume the latter and thus impose a 30ε of 0 ‰ for the dissolution of BSi and LSi, which is consistent with similar, previous reaction-transport model studies (Geilert et al., 2020a; Ehlert et al., 2016a).

LSi dissolution and AuSi precipitation

Model results show that the sediment pore water DSi profiles cannot be reproduced by the dissolution of BSi alone at all three stations (B13, B14 and B15) (dash-dotted blue lines, Fig. 3). At station B13 the simulations suggest that, while there is sufficient DSi released to reproduce the asymptotic DSi concentration due to the higher BSi content at depth compared to B15, the rate of release in the upper sediment layers is inconsistent with the observed build-up of pore water DSi downcore of the SWI (Fig. 3). This observation is in contrast to B15, where the simulated asymptotic DSi concentration is just 23 µM when BSi is the only source of DSi, consistent with the measured BSi content profiles that suggest a cessation of BSi dissolution by the middle of the sediment cores (∼ 15 cm, asymptotic BSi content of ∼ 0.2 wt %) (Fig. 3). Because of the continued increase in DSi with depth at station B13 (solid grey line), partly driven by the elevated BSi content in the mid-core relative to B15, a relatively slow rate of AuSi precipitation is required at depth to take up the excess DSi and reproduce the observed asymptotic DSi value. Generally it is assumed that AuSi precipitation is concentrated in near-SWI sediments (0-5 cm), where the concentration of other essential solutes (Al, Fe, Mg2+, K+, Li+, F−) is generally highest, sourced from Fe and Al (oxyhydr)oxides and reactive LSi (Ehlert et al., 2016a; Van Cappellen and Qiu, 1997a; Mackin and Aller, 1984; Aller, 2014).
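To illustrate how a removal process with 30ε ≈ −2 ‰ drives the residual pore water DSi towards heavier values, the sketch below applies a closed-system Rayleigh fractionation; the initial composition and removal fractions are illustrative assumptions.

```python
import math

# Rayleigh fractionation sketch: residual DSi becomes isotopically heavier as
# an AuSi-like sink (30eps = -2 permil) removes Si. Values are illustrative.
eps30 = -2.0                 # fractionation factor (permil), as modelled for AuSi
alpha = 1.0 + eps30 / 1000.0
d0 = 1.2                     # initial pore water d30Si (permil), assumed

for f_remaining in (1.0, 0.9, 0.7, 0.5):
    # Closed-system Rayleigh: d = d0 + 1000 * (alpha - 1) * ln(f)
    d = d0 + 1000.0 * (alpha - 1.0) * math.log(f_remaining)
    print(f"{(1 - f_remaining) * 100:3.0f}% removed -> d30Si = {d:+.2f} permil")
```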
However, the uptake of DSi through AuSi precipitation has previously been inferred in terrigenous-dominated shelf sediments of the Arctic Ocean at > 50 cm depth (März et al., 2015). Due to the discrepancies between the observational and simulated DSi pore water concentration data, we incorporated an LSi phase into our model, which dissolves according to the presumed degree of undersaturation. Without this additional phase (when k_LSidiss is set to zero), model simulations show that the rate of DSi release is insufficient to reconstruct the observational DSi data (Fig. 3). Implementing LSi dissolution in conjunction with BSi produced the best data-model fit (dashed red lines). The dissolution of LSi has been inferred in a similar study of marine sediment pore waters of the Guaymas Basin (Geilert et al., 2020a), as well as in beach and ocean margin sediments (Fabre et al., 2019; Ehlert et al., 2016b). Indeed, Morin et al. (2015) report Si dissolution rates for basaltic glass particles in seawater that exceed those of diatoms (Pickering et al., 2020, and references therein). The inference based on the pore water DSi concentration profiles that an additional phase, most likely LSi, is dissolving into Barents Sea pore waters is supported by the simulated isotopic composition of the pore water phase at stations B13 and B15. Without the dissolution of the LSi phase, the δ30Si of the pore waters represents a mixture of the composition of the core top water and the BSi phase composition (+1.44 ‰ and +1.04 ‰ at B13 (solid grey line) and B15 (dash-dotted blue line) respectively) (Fig. 3). In this simulation scenario at station B15, the integrated rate of AuSi precipitation is zero, as the concentration of pore water DSi has not surpassed the imposed AuSi solubility, so it cannot influence the sediment pore water δ30Si. Therefore, the model set-up without LSi dissolution cannot reproduce the intricacies of the downcore isotopic profile, as the lack of DSi released results in an insufficient concentration to allow for the precipitation of AuSi; thus the relative shift from isotopically lighter to heavier compositions between 0.5 and 2.5 cm cannot be resolved. The downcore shift to heavier isotopic compositions between 0.5 and 2.5 cm is thought to be caused by AuSi formation as the pore water DSi concentration crosses the saturation of the AuSi phase, facilitating its precipitation. Autochthonous benthic diatoms have been found in the Barents Sea at a maximum depth of 245 m (Druzhkova et al., 2018), and these could fractionate the DSi pool during uptake with a 30ε of ∼ −1.1 ‰ (De La Rocha et al., 1997). However, they are in very low abundance and so are unlikely to contribute significantly to sediment pore water DSi uptake or isotopic fractionation in the uppermost sediment layers. Figure 3 indicates that the observational data from stations B13 and B15 can be reproduced by a model that assumes a steady state, which is less the case for the data from station B14, particularly for the 2017 and 2019 profiles (Fig. S2). This observation suggests that the reaction-transport model will need to resolve transient, non-steady state dynamics in order to better represent the observational data, which will be discussed further in Sect. 3.2. These model observations are consistent with a mass balance calculation using the isotopic compositions of the 0.5 cm pore water sample, as well as the BSi and LSi leachate samples at stations B13 and B14, which indicate a contemporaneous release of both phases.
Our model findings therefore support the hypothesis of Ward et al. (2022) that LSi minerals are likely to be dissolving in the upper few centimetres of the Barents Sea seafloor. The depth-integrated reaction rates of the best-fit steady state simulations suggest that between 60 % and 98 % of the DSi released into the sediment pore water from the solid phase is sourced from the dissolution of LSi (Table 2). This range was determined by calculating the depth-integrated rate of LSi dissolution across the model domain and three stations as a proportion of the total integrated rate of DSi input from the three simulated sources (BSi dissolution, LSi dissolution and desorption from metal oxides). The predominance of LSi over BSi dissolution is consistent with the observation that Barents Sea sediments consist of ∼ 96 % terrigenous material (Ward et al., 2022), which is compatible with previous work showing that clay mineral assemblages in the Barents Sea are dominated by terrestrial signals from Svalbard and northern Scandinavia (Vogt and Knies, 2009). The Si isotopic composition of the LSi phase in surface sediments of stations B13, B14 and B15 (−0.89 ± 0.16 ‰; Ward et al., 2022) is also closer to secondary clay minerals (−2.95 ‰ to −0.16 ‰; Opfergelt and Delmelle, 2012, and references therein) than primary silicates of the crust and mantle (∼ 0 ‰ and −0.34 ‰ respectively; Opfergelt and Delmelle, 2012, and references therein). This isotopic composition is not surprising, given the predominance of the clay and silt size fraction in these sediment cores (87 % < 63 µm). However, δ30Si measured in the Si-NaOH pool could also represent a combination of reactive primary and secondary silicates, ranging from ∼ 0 ‰ to −2.95 ‰. While previous mass balance calculations and the model-derived integrated reaction rates presented here agree that both LSi and BSi dissolution contribute to the sediment pore water DSi pool, the magnitudes of the LSi dissolution contribution vary significantly. Ward et al. (2022) suggest that just 14 % and 13 % of the DSi pool in the 0.5 cm pore water interval are sourced from the dissolution of LSi at stations B13 and B14 respectively, while at station B15 it is inferred that the δ30Si can be resolved through BSi dissolution alone. This is in contrast to the 84 %, 60 % and 98 % contributions calculated here (Table 2). There are multiple contributing factors to this discrepancy. Firstly, previous mass balance calculations are based on one depth interval, whereas the estimates presented here are derived from depth-integrated dissolution rates of the entire 50.4 cm model domain. Furthermore, reaction-transport modelling has revealed that this contradiction is likely born of the assumption that the BSi pool at all three stations is sufficient to fuel the pore water DSi stock. Dissolution dynamics were not taken into account in the simple mass balance calculation of Ward et al. (2022).

Figure 3. Best-fit steady state model simulations for each station (see Table S2 for all boundary conditions). Additional simulations are based on a series of sensitivity experiments carried out to assess the importance of each reaction pathway. Left to right: sediment pore water DSi concentration, sediment pore water DSi δ30Si, solid-phase BSi content. Vertical dashed black lines represent the core top water δ30Si from 2017; open shapes show observational pore water data (Ward et al., 2022). Error bars on δ30Si data are based on long-term reproducibility, derived from repeat measurements of the diatomite standard (2σ ± 0.14, n = 116), unless the error derived from analytical replicates was greater.
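The source apportionment described above reduces to a ratio of depth-integrated rates; a minimal sketch with invented rate profiles is given below.

```python
import numpy as np

# Sketch of the LSi source apportionment: the depth-integrated LSi dissolution
# rate as a fraction of total DSi input (BSi + LSi + FeSi desorption).
# The rate profiles are invented for illustration, not actual model output.
z = np.linspace(0.0, 50.4, 100)
r_bsi = 4e-9 * np.exp(-z / 8.0)                          # mol cm-3 yr-1
r_lsi = 6e-9 * np.exp(-z / 25.0)
r_fesi = np.where((z > 1.5) & (z < 10.0), 1e-9, 0.0)     # active between redox bounds

R = {k: np.trapz(v, z) for k, v in
     {"BSi": r_bsi, "LSi": r_lsi, "FeSi": r_fesi}.items()}
frac_lsi = R["LSi"] / sum(R.values())
print(f"LSi contribution to pore water DSi: {frac_lsi:.0%}")
```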
[Figure 3. Baseline best-fit steady state simulations for stations B13, B14 and B15 (see Table S2 for all boundary conditions). Additional simulations are based on a series of sensitivity experiments carried out to assess the importance of each reaction pathway. Left to right: sediment pore water DSi concentration, sediment pore water DSi δ³⁰Si, solid-phase BSi content. Vertical dashed black lines represent the core top water δ³⁰Si from 2017; open shapes show observational pore water data (Ward et al., 2022). Error bars on δ³⁰Si data are based on long-term reproducibility, derived from repeat measurements of the diatomite standard (2σ ± 0.14, n = 116), unless the error derived from analytical replicates was greater.]

Here, in contrast, we have shown that the composition of Barents Sea surface sediments, which are almost devoid of BSi (0.26 wt %-0.52 wt %, or 92-185 µmol g⁻¹ dry wt), cannot reproduce the rate of pore water DSi build-up with depth from the SWI and can only support an asymptotic DSi concentration of 23 µM at station B15 (dash-dotted blue line, Fig. 3). This implies that the additional assumption of Ward et al. (2022) that the 0.5 cm pore water δ³⁰Si value is not impacted by AuSi precipitation could be invalid. If the 0.5 cm pore water interval were directly or indirectly influenced by AuSi precipitation, such an assumption would lead to an underestimation of the LSi contribution, as the δ³⁰Si value would be isotopically heavier than if it were derived solely from dissolving solid phases mixing with trapped core top water. It has previously been suggested that the quantification of AuSi precipitation rates in marine sediments is not critical for fully understanding the early diagenetic cycling of Si, as reverse weathering typically represents a diagenetic solid-phase conversion from BSi to AuSi via the dissolved phase (DeMaster, 2019). Model simulations reveal that 37 %, 2.9 % and 13.8 % of the DSi released across the 50 cm model domain at stations B13, B14 and B15 respectively are taken out of solution in the formation of AuSi. This is consistent with a similar, previous study of the Peruvian margin upwelling region, which determined that 24 % of the DSi released was reprecipitated as AuSi (Ehlert et al., 2016a; Michalopoulos and Aller, 2004). Reprecipitation of the DSi pool within the relatively shallow cores studied here will inhibit its exchange with overlying bottom waters. Therefore, in this context, AuSi precipitation can be considered a sink term for the regional Si budget (Ward et al., 2022). In a setting where AuSi precipitation occurs at such a depth that pore water DSi exchange with bottom waters is not possible, this reaction pathway could instead be considered an early diagenetic solid-phase conversion (from BSi and/or LSi to AuSi), as opposed to a true sink term (Frings et al., 2016; DeMaster, 2019).
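A minimal sketch of the saturation-threshold behaviour described above for AuSi is given below; it is our own simplified form, with illustrative solubility and rate-constant values rather than the calibrated parameters of the model: no precipitation occurs until pore water DSi exceeds the imposed solubility, after which removal is proportional to the excess.

```python
# Hedged sketch of a saturation-threshold AuSi precipitation rate law.
# c_eq and k_ausi are illustrative, not the study's calibrated parameters.

def r_ausi(c_dsi, c_eq=200.0, k_ausi=0.5):
    """AuSi precipitation rate (µM yr^-1); zero below saturation."""
    return k_ausi * max(c_dsi - c_eq, 0.0)

for c in (100.0, 250.0):
    print(f"DSi = {c:5.1f} µM -> R_AuSi = {r_ausi(c):4.1f} µM yr^-1")
# 100 µM is undersaturated (rate 0); 250 µM precipitates the 50 µM excess.
```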
Evidence for coupling of the benthic Fe and Si cycles in the Barents Sea

Ward et al. (2022) suggest that the benthic Fe and Si cycles are coupled in the Barents Sea, evidenced by a contemporaneous increase in pore water Fe concentrations with an enrichment in the lighter Si isotope of the DSi pool at all three stations. Model simulations support this hypothesis by demonstrating that the Barents Sea DSi pore water profiles can be reconstructed when applying the dissolution of both a BSi and an LSi phase; however, under the model scenario where the desorption of FeSi is inhibited (k_FeSi = 0), the δ³⁰Si pore water profiles are inconsistent with the observational data (solid black lines, Fig. 3). With the dissolution of the LSi phase implemented at both stations B13 and B15, it is possible to resolve the δ³⁰Si pore water profiles in the upper 2.5 and 8.5 cm respectively. However, below these depths the simulated profiles have isotopically heavier compositions than the observational data. Release of an isotopically light phase at specific depth intervals (beginning at 1.5 cm at B13 and 10 cm at B15) results in a simulated δ³⁰Si profile within the range of the observational data (Fig. 3). This isotopic shift to lower pore water δ³⁰Si is interpreted to represent the desorption of Si from solid Fe (oxyhydr)oxide phases, as the depth intervals of these isotopic shifts correspond to the depths at which Fe is released into the pore waters (Faust et al., 2021; Ward et al., 2022). The increase in pore water Fe also occurs at similar depth intervals to decreases in pore water dissolved O₂ and NO₃⁻ concentrations, consistent with a transition from oxic to anoxic conditions (Freitas et al., 2020; Ward et al., 2022). As discussed above, the Si-HCl reactive Si pool is isotopically light and thought to be associated with metal oxide coatings on BSi. The δ³⁰Si composition of the Si-HCl phase is assumed here to represent the composition of the phase desorbing across the Fe redox boundaries. When using a composition of −2.88 ‰ for the FeSi phase, simulation scenarios have identical DSi concentration profiles whether k_FeSi is active or set to zero (dashed red and solid black lines, Fig. 3). This similarity stands in contrast to the δ³⁰Si pore water profiles, which cannot be reproduced without release of this isotopically light phase at the redox boundaries of all three modelled stations. This contrast between the sediment pore water DSi and δ³⁰Si profiles when FeSi desorption is active and inactive may explain why the influence of FeSi desorption is so apparent in Barents Sea sediment cores yet more ambiguous in similar, previous studies of the Si cycle in lower-latitude marine sediments. The preferential adsorption of ²⁸Si onto Fe (oxyhydr)oxides and the subsequent dissolution or formation of these minerals have been used to interpret both heavy and light δ³⁰Si marine sediment pore water signals in previous work (Ehlert et al., 2016a; Geilert et al., 2020a). In addition, while not inducing a clear signal in the δ³⁰Si, redox cycling of Fe was highlighted as a potential regulating factor in the release of DSi into pore waters of the Greenland margin (Ng et al., 2020). Ng et al. (2020) hypothesised that the reductive dissolution of Fe mineral coatings increased the reactivity of the BSi pool, hence the elevated DSi concentrations found in cores with increased pore water Fe. In the Barents Sea, FeSi desorption across sedimentary redox boundaries is thought to be so prominent in the δ³⁰Si data because the asymptotic concentration of pore water DSi (∼ 100 µM) is much lower than that in the aforementioned studies (∼ 350-900 µM). The low sediment pore water DSi concentration allows for the direct detection of this process, whereas in previous studies the influence of Fe on the benthic Si cycle is inferred either through elevated DSi and Fe concentrations (Ng et al., 2020) or from the depositional context, for example in cores sampled from systems with an abundance of reactive Fe (e.g. hydrothermal vent systems (Fe sulfides), Geilert et al., 2020a, or the Peruvian oxygen minimum zone, Ehlert et al., 2016a). At stations B13 and B14, and to a lesser extent at B15, the sediment pore water DSi concentration increases downcore from the middle to the base of the profiles. This feature can be reproduced in the model simulations with the desorption of FeSi from the respective redox boundary depths (Fig. S1); however, the required ratio of k_FeSi for ²⁸Si and ³⁰Si implies an isotopic composition of the FeSi phase of just −1.0 ‰ to −1.5 ‰.
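The link between the ²⁸Si/³⁰Si rate-constant ratio and the apparent δ³⁰Si of the released phase can be illustrated with the standard kinetic-fractionation approximation; the solid-phase composition and alpha value below are illustrative, not the calibrated model parameters.

```python
# Sketch: δ30Si of DSi released from a solid of composition d30si_solid when
# 30Si is released alpha times as fast as 28Si (alpha = k30 / k28).

def d30si_released(d30si_solid, alpha):
    eps = (alpha - 1.0) * 1000.0      # kinetic fractionation in per mille
    return d30si_solid + eps

# A solid at -1.0 ‰ released with alpha = 0.9995 appears ~0.5 ‰ lighter:
print(round(d30si_released(-1.0, 0.9995), 2))   # -> -1.5 ‰
```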
This isotopic composition is heavier than that measured in the Si-HCl pool (−2.88 ‰), likely reflecting a complexity in the desorption process not captured by the model. Nevertheless, both scenarios support the release of an isotopically light phase at depth, most likely sourced from the Fe redox cycle. In summary, model simulations somewhat support the hypothesis, based on observational data, that the Barents Sea benthic Si cycle is influenced by the Fe redox system. Model results suggest that the influence of the Fe redox cycle is relatively unimportant for the magnitude of the pore water DSi pool, which appears to be controlled by release from the BSi and LSi phases. However, the coupling of these element cycles is evidenced in the pore water δ³⁰Si data, indicating that the Fe cycle is important for the isotopic budget within the seafloor. We suggest that the influence of FeSi desorption is detectable in the Barents Sea pore water δ³⁰Si data because of the relatively low pore water DSi concentrations and the distinctly isotopically light nature of the FeSi phase. These findings indicate that FeSi desorption should be considered when interpreting downcore δ³⁰Si trends, especially in low-DSi-concentration settings.

What can transient reaction-transport modelling reveal about non-steady state dynamics in the benthic Si cycle?

Observational pore water DSi concentration data from station B14 suggest that a pulsed increase in the deposition of reactive phytodetritus to the seafloor, derived from phytoplankton blooms, can drive transient peaks in pore water DSi of up to ∼ 300 µM (Ward et al., 2022). This non-steady state dynamic is evidenced in the 3 consecutive years of pore water DSi concentration data collected during the summers of 2017 to 2019, which show that in 2017 and 2019, when the MIZ was above station B14 just 1.5 months prior to sediment coring, a sediment pore water DSi peak is present. This is in contrast to 2018, when station B14 had been sea-ice-free for 3 months prior to sampling (Downes et al., 2021) and no peak in DSi concentration was observed in the pore water nutrient data (Fig. S6). This observation may indicate that the sediment pore water DSi peaks form under the MIZ, which supports the formation of phytoplankton blooms in late spring/early summer, supplying fresher BSi to the benthos relative to the background BSi pool. Dissolution of this fresher BSi could then fuel subsurface peaks in pore water DSi concentration that dissipate between 1.5 and 3 months after formation, driven by the increased concentration gradient across the SWI, which would enhance the rate of molecular diffusion. Additional model simulations were carried out on the baseline, steady state, best-fit model scenario of station B14 to assess this hypothesis. Results of the transient simulations show that it is possible for the deposition of fresh, bloom-derived BSi to reproduce the observed peaks in pore water DSi concentration in 2017 and 2019 (Fig. 4). Calculated rates of background BSi deposition across the three stations (0.17 to 1.71 mmol Si m⁻² d⁻¹, Table 2) are similar to BSi export rates measured in short-term and moored sediment traps in Kongsfjorden, Svalbard, at 100 m depth and in the eastern Fram Strait at 180-280 m depth (0.2-1.3 mmol Si m⁻² d⁻¹) (Lalande et al., 2013, 2016).
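To illustrate the transient mechanism, a pulse of reactive BSi driving a pore water DSi peak that later dissipates, a deliberately minimal one-box sketch is given below. It is not the reaction-transport model itself: transport is collapsed into a single relaxation timescale, and the deposition rate, inventory units and mixing timescale are assumptions.

```python
# One-box caricature of the bloom scenario: a 3-week pulse of fresh BSi
# (reactivity in the 5-20 yr^-1 k_dissbloom range discussed below) feeds
# pore water DSi, which relaxes diffusively toward background levels.

k_bloom = 20.0 / 365.0      # bloom BSi reactivity (d^-1)
tau_mix = 30.0              # assumed diffusive relaxation timescale (d)
bsi = 0.0                   # fresh BSi inventory (µM DSi-equivalent)
dsi_excess = 0.0            # pore water DSi above background (µM)

history = []
for day in range(1, 181):
    if day <= 21:
        bsi += 25.0                     # daily deposition during the pulse
    diss = k_bloom * bsi                # first-order dissolution this day
    bsi -= diss
    dsi_excess += diss - dsi_excess / tau_mix
    history.append(dsi_excess)

peak = max(history)
print(f"peak excess {peak:.0f} µM on day {history.index(peak) + 1}; "
      f"residual on day 90: {history[89]:.0f} µM")
```

The qualitative behaviour, a peak forming within weeks of the pulse and largely dissipating by roughly 3 months, matches the sampling-time argument made above.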
A simulated 3-week, 10-fold increase in this BSi depositional flux at a much higher reactivity (k_dissbloom = 20 yr⁻¹) than the background value (k_diss = 0.074 yr⁻¹) derived from the 2018 baseline simulation results in a DSi peak consistent with the magnitude of that in the observational data after 1.5 months (dashed red lines, Fig. 4). This simulated peak in DSi concentration is then able to dissipate by 3 months after bloom initiation. Similarly, with a shorter bloom (1 week), compatible with the typical length of an ice edge bloom in the Barents Sea (Dalpadado et al., 2020), and a 30-fold increase in BSi depositional flux at k_dissbloom = 15 yr⁻¹, the DSi peak is able to form and dissipate on a time frame similar to that of the former scenario (solid black lines, Fig. 4). The generated peak in DSi concentration must be able to disperse within 3 months if the timing of core sampling relative to MIZ retreat is to remain valid as the explanation for the lack of a pore water DSi peak in the 2018 data. The implemented k_dissbloom values (5-20 yr⁻¹) are higher than those typically observed in marine sediment cores. k_diss values of 0.98-1.38 yr⁻¹ have been observed at 1 to 20 cm depth in sediment cores collected from the Porcupine Abyssal Plain (mean water depth of 4850 m), considered high for BSi in deep marine sediments (Ragueneau et al., 2001), while values of up to 6.8 yr⁻¹ have been measured in much shallower sediments of Jiaozhou Bay in the Yellow Sea (Wu et al., 2015). k_diss values are highly sensitive to species assemblage and temperature, generally decreasing from the surface ocean towards the seafloor (Natori et al., 2006; Rickert et al., 2002; Rickert, 2000; Roubeix et al., 2008; Kamatani, 1982; Tréguer et al., 1989; Ragueneau et al., 2000). However, k_diss values in fresh diatoms are thought to range from 3 to ∼ 100 yr⁻¹ (Ragueneau et al., 2000; Nelson and Brzezinski, 1997), and previous experiments have measured dissolution rate constants of 27 yr⁻¹ and above (Rickert, 2000; Kamatani and Riley, 1979) at low temperatures (2-4 °C) in seawater, consistent with the values employed in this study. Station B14 is located beneath the polar front, a location of intense physical mixing due to the interleaving of multiple water masses (Barton et al., 2018), which has been shown to drive enhanced depositional fluxes of particulate organic carbon relative to those measured both north and south of the frontal zone (Wassmann and Olli, 2004). Furthermore, data collected from sediment traps deployed to the north and north-west of Svalbard have uncovered an approximately 2-fold-higher vertical carbon export flux from diatomaceous aggregates formed in seasonally sea-ice-covered regions, compared with aggregates from P. pouchetii blooms (Fadeev et al., 2021; Dybwad et al., 2021).

[Figure 4. Transient simulations of a pulsed increase in the depositional flux of bloom-derived BSi, which was assigned a reactivity (k_dissbloom, in yr⁻¹) and deposited over either 1 or 3 weeks. The "Bloom Initiated" panel represents the first day of the simulated bloom; all modelled scenarios therefore overlap and reflect the background steady state simulation. The "+1.5 months" and "+3 months" panels refer to the time elapsed since the bloom was initiated and demonstrate how the DSi peak evolves over time. Ward et al. (2022) suggested that DSi peaks could form within 1.5 months and dissipate after 3 months, as evidenced by contrasting sea ice conditions across the three cruises.]
Therefore, it is considered here that, given the shallow depth of the Barents Sea seafloor and the location of station B14 beneath the polar front, fresh and reactive BSi formed in MIZ blooms could be efficiently ferried to the seafloor. The k_dissbloom of the bloom-derived BSi is much higher than the reactivity required to reproduce the solid-phase BSi content profiles of the background system (k_diss) (Fig. 3). At stations B13, B14 and B15, the implemented k_diss values in the steady state simulations were 0.0055, 0.074 and 0.0105 yr⁻¹ respectively (Table S2, Fig. 2), which are not dissimilar to those estimated for deep-sea sediments from DSi pore water profile-fitting procedures and flow-through reactor experiments (0.006-0.44 yr⁻¹) (McManus et al., 1995; Rabouille et al., 1997; Ragueneau et al., 2001; Rickert, 2000). The inverse of the dissolution rate constant provides an estimate of the residence time, or mean lifetime, of the given pool of BSi (McManus et al., 1995), which suggests that the less reactive pool of BSi has a mean lifetime of 182, 13.5 and 95 years at stations B13, B14 and B15 respectively. These mean lifetimes are too long to influence the Si cycle on a seasonal scale, which requires a lifetime of less than 1 yr (i.e. a rate constant > 1 yr⁻¹), as is the case for organic matter (Burdige, 2006). The model-derived estimates of k_dissbloom, on the other hand, suggest a mean lifetime of approximately 20 d for the fresh BSi. Therefore, this work suggests that there are at least two types of BSi in Barents Sea sediments: one less reactive pool that dissolves at a slower rate and one fresher, bloom-derived pool that is able to perturb the sediment pore water DSi stock on a seasonal timescale. This conclusion is compatible with findings from a study of the equatorial Pacific region (McManus et al., 1995) and observations from the Arabian Sea, indicating that the bulk sediment BSi content should not be treated as a single pool of uniform reactivity but should instead be separated into reactive and unreactive fractions (Rickert, 2000; Schink et al., 1975). Consistent with these conclusions, previous experiments have demonstrated that the dissolution patterns of some diatom frustules are best described by two k_diss values an order of magnitude apart (Boutorh et al., 2016; Moriceau et al., 2009). These results indicate the presence of two phases of BSi within diatom frustules, denoting a potential physiological basis for the differentiation in reactivity of seafloor BSi.
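The mean-lifetime arithmetic quoted above follows directly from the rate constants (tau = 1/k_diss; McManus et al., 1995):

```python
# Mean lifetime of a first-order dissolving BSi pool: tau = 1 / k_diss,
# using the k_diss values quoted in the text.

for station, k_diss in (("B13", 0.0055), ("B14", 0.074), ("B15", 0.0105)):
    print(f"{station}: {1.0 / k_diss:5.1f} yr")   # 181.8, 13.5 and 95.2 yr

print(f"bloom BSi (k = 20 yr^-1): {365.0 / 20.0:.0f} d")  # ~18 d, i.e. the
# approximately 20 d lifetime quoted in the text for the fresh pool.
```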
What is the simulated benthic DSi flux, and how important is the contribution of bloom-derived BSi dissolution to the annual flux?

The simulated J_diff magnitudes (0.11-0.27 mmol Si m⁻² d⁻¹) that contribute to J_tot (Table 2) are within error of previously calculated J_diff values (0.10-0.37 mmol Si m⁻² d⁻¹) for these Barents Sea sediment cores (Ward et al., 2022). Our model-derived benthic DSi fluxes are thus well within the range of a compilation of pan-Arctic shelf benthic DSi fluxes (−0.03 to +6.2 mmol Si m⁻² d⁻¹) (Bourgeois et al., 2017). Based on previous calculations and the simulation results presented here, we estimate that the mean DSi J_diff benthic flux magnitude for the Barents Sea is +0.23 (±0.11, 1σ) mmol Si m⁻² d⁻¹, ranging from +0.08 to +0.54 mmol Si m⁻² d⁻¹. J_tot at all stations is dominated by the molecular diffusive component (J_diff) (76 %-85 %), in agreement with simulated estimates of phosphate fluxes at the same stations (Freitas et al., 2020) (Fig. 5). J_tot at station B13 has the highest contribution from bioturbation (6 %), consistent with the highest experimentally determined bioturbation diffusion coefficient of the Barents Sea stations (Solan et al., 2020). The advective component of J_tot is negligible at all stations, while the bioirrigation element represents the greatest source of uncertainty in the simulated flux magnitudes, as this parameter was not constrained in parallel to this work and a global value was therefore assumed (Thullner et al., 2009). At station B15, the model-derived J_tot (+0.25 mmol Si m⁻² d⁻¹) is greater than J_BSi,in (0.17 mmol Si m⁻² d⁻¹) (Table 2). This points to the release of an additional source of DSi beyond the dissolving BSi, which is compatible with the hypothesis that LSi is being released into the pore water dissolved phase. DSi benthic flux magnitudes were also calculated for the transient simulations carried out on station B14 to quantify the influence of fresh bloom-derived BSi. Dissolution of the fresher BSi has an immediate and significant effect on the benthic flux, doubling the steady state background value of +0.34 to +0.66 mmol Si m⁻² d⁻¹ within 1 week; the flux peaks 2 weeks after bloom material deposition at +1.84 mmol Si m⁻² d⁻¹, representing a 5-fold increase (an additional +1.5 mmol Si m⁻² d⁻¹) over the background steady state benthic flux magnitude. The prominent DSi peak then dissipates and becomes largely undetectable after 3 months (Fig. 4). The average DSi benthic flux at the SWI over the 12-week period is +1.07 mmol Si m⁻² d⁻¹, indicating that the bloom-derived BSi releases an additional +0.73 mmol Si m⁻² d⁻¹ to the overlying bottom water. The steady state and transient model simulations therefore suggest that the background benthic flux of Si from the benthos is +124 mmol Si m⁻² yr⁻¹ at station B14, while the additional contribution over the 12-week period sourced from fresh BSi dissolution is +61.3 mmol Si m⁻² (based on a rate of +0.73 mmol Si m⁻² d⁻¹). This estimate suggests that a minimum of 33 % of the total annual benthic flux of DSi discharging from the seafloor at station B14 is sourced from the deposition of fresh BSi during the 1-week MIZ bloom. The contribution of bloom-derived BSi dissolution to the annual benthic DSi flux magnitude reported here for station B14 (an additional +0.73 mmol Si m⁻² d⁻¹ over the 3 months) is greater than that in Ward et al. (2022) (an additional +0.23 mmol Si m⁻² d⁻¹ over the same time interval), although the proportion is consistent across the two estimates (approximately one-third of the total annual benthic DSi flux). In part, this is because the simulated fluxes incorporate the contributions from bioirrigation and bioturbation (J_tot). When using only the simulated J_diff component, an additional +0.46 mmol Si m⁻² d⁻¹ is estimated to be sourced from the bloom-derived BSi, which is more consistent with the observational data calculations. However, the disparity is also due to the nature of the simulated flux calculation. The model-derived benthic flux magnitudes are calculated at the SWI, whereas previous J_diff estimates are based on observational data of much lower resolution, with the concentration gradient determined from the DSi concentration in the core top water and in the sediment pore water at 0.5 cm depth. Furthermore, the simulated benthic flux estimates are based on a mean value derived at weekly temporal resolution, which is not accessible in the observational data.
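The one-third estimate can be checked directly from the fluxes quoted above:

```python
# Arithmetic check of the annual-contribution estimate using the fluxes in
# the text: +0.34 mmol Si m^-2 d^-1 background J_tot at B14 and an additional
# +0.73 mmol Si m^-2 d^-1 from bloom-derived BSi over the 12-week window.

background = 0.34 * 365        # ≈ 124 mmol Si m^-2 yr^-1
bloom_extra = 0.73 * 7 * 12    # ≈ 61.3 mmol Si m^-2
share = bloom_extra / (background + bloom_extra)
print(f"{background:.0f} + {bloom_extra:.1f} -> bloom share {share:.0%}")  # 33 %
```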
However, both estimates can be used to draw a range of possible contributions from the bloom-derived BSi, and although there is a disparity in the benthic flux magnitude, both methodologies suggest that at least one-third of the annual DSi benthic flux at station B14 is sourced from the dissolution of BSi deposited after a short MIZ bloom.

How much BSi is buried in the long term in the Barents Sea?

Traditionally, burial efficiencies of BSi were not included within Si budgets of the Arctic Ocean, due in part to a low mean BSi content (< 5 wt %), as well as to a low estimated sedimentation rate (a few mm kyr⁻¹) (März et al., 2015). However, Arctic Ocean seafloor BSi burial efficiencies are now being re-evaluated in the light of revised models of sediment accumulation rates. The BSi burial fluxes estimated here (Table 2) correspond to 0.012 Tmol Si yr⁻¹ for the whole shelf, assuming an area of 1.4 × 10⁶ km². Global seafloor BSi burial efficiencies (calculated as the BSi burial flux divided by the deposition rate) range from 1 % to 97 % (Westacott et al., 2021; Frings, 2017; Ragueneau et al., 2001, 2009; DeMaster, 2001; DeMaster et al., 1996; Liu et al., 2005), averaging ∼ 11 % (Tréguer et al., 2021; Frings, 2017). Typically, BSi burial efficiencies are much higher in coastal and shelf settings and underneath polar fronts, due in part to higher rates of sedimentation (DeMaster, 2001). Indeed, studies of the Southern Ocean (Indian sector and the Ross Sea) and the Peruvian margin uncovered a 4- to 30-fold increase in BSi burial efficiency at continental shelf stations (22 %-58 %) relative to their open-ocean counterparts (2 %-17 %) (Dale et al., 2021; DeMaster, 2001; Ragueneau et al., 2001). The BSi burial efficiencies estimated here for the three Barents Sea stations (1.4 %-12 %, Table 2) are within the range of published values and similar to the global mean at stations B13 and B15 but low relative to other continental shelves. Our estimated BSi burial efficiencies are based on the same sedimentation rates employed in the model (Table S2), which are similar to the Barents Sea mean of 0.07 cm yr⁻¹ (Zaborska et al., 2008). However, Barents Sea sedimentation rates of up to 0.21 cm yr⁻¹ have been estimated since the last glacial period (Faust et al., 2021), which would significantly increase the estimated BSi burial efficiencies (5 %-33 %) (Eq. 7). While the BSi burial efficiency calculated at station B14 is also within previously published values, it is much lower than at the Barents Sea stations to the north and south. Solid-phase BSi contents in the surface intervals were determined from samples collected during the third cruise, in summer 2019. As discussed, the pore water DSi profiles at B14 from this cruise are thought to be influenced by the dissolution of bloom-derived BSi, which would account for the elevated BSi content in the surface sediment relative to that at stations B13 and B15. The higher BSi content has likely resulted in an elevated estimate of J_BSi,in and thus a reduced burial efficiency. This would also explain why the model-derived estimate of the contribution of LSi dissolution to the DSi released from the solid phase is lower at station B14 (60 %), owing to an elevated R_db enhanced by the deposition of fresher BSi. In addition, this would explain why the estimated mean lifetime of the BSi from the steady state simulations at station B14 is much shorter than for the other two sites (13.5 years vs. 95-182 years).
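For concreteness, the burial-efficiency definition used in this section can be written out as below; the sedimentation rate, dry bulk density, BSi content and deposition flux are illustrative values within the ranges discussed, not the station-specific inputs of Eq. (7).

```python
# Burial efficiency = BSi burial flux / BSi deposition flux at the SWI.
# All inputs are illustrative values, not the study's station data.

def burial_efficiency(w, rho, bsi, j_dep):
    """w: cm yr^-1, rho: g cm^-3, bsi: µmol g^-1, j_dep: mmol m^-2 d^-1."""
    # burial flux (µmol cm^-2 yr^-1) converted to mmol m^-2 d^-1
    j_bur = w * rho * bsi * 1e4 / 1e3 / 365.0
    return j_bur / j_dep

# e.g. 0.07 cm yr^-1, 0.8 g cm^-3, 100 µmol g^-1, 1.5 mmol m^-2 d^-1 deposition
print(f"{burial_efficiency(0.07, 0.8, 100.0, 1.5):.1%}")   # ≈ 10 %
```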
If the measured surface BSi content at station B14 is indeed influenced by residual bloom-derived BSi, this would result in an overestimate of the background BSi reactivity in the model. Relatively low burial efficiencies of BSi in sediments underneath oxygen-depleted bottom waters (7 %-12 %), similar in magnitude to those calculated here for the Barents Sea, have previously been attributed to low rates of bioturbation, resulting in less efficient export of BSi towards more saturated pore waters (Dale et al., 2021). Bioturbation coefficients were determined experimentally for the Barents Sea stations (2-6 cm² yr⁻¹) (Table S2) and are much lower than might be expected based on an empirical global relationship with water depth (∼ 24 cm² yr⁻¹) (Middelburg et al., 1997). Furthermore, the impacts of the low rates of macrofaunal mixing on BSi burial efficiency are likely exacerbated by slow rates of sediment accumulation in the Barents Sea. High rates of BSi burial in the Bohai Sea (60 %) and Yellow Sea (42 %) are thought to be driven by high sediment accumulation rates (Liu et al., 2002); sediment accumulation, as with bioturbation, is much lower in the Barents Sea than might be expected based on an empirical global relationship with water depth (0.55 cm yr⁻¹) (Middelburg et al., 1997). The combination of low rates of macrofaunal mixing and sediment accumulation may therefore be the cause of the lower BSi burial efficiencies observed here relative to other continental shelves.
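The empirical depth relationship referred to above is, as commonly implemented, the power law of Middelburg et al. (1997); the coefficients below are those usually quoted for it, and the 300 m water depth is an assumed representative Barents Sea value.

```python
# Empirical bioturbation-water depth relationship (Middelburg et al., 1997),
# as commonly implemented: D_b = 5.2 * 10**(0.76241 - 0.00039724 * z),
# with z the water depth in metres and D_b in cm^2 yr^-1.

def db_global(z_m):
    return 5.2 * 10 ** (0.76241 - 0.00039724 * z_m)

print(f"{db_global(300.0):.0f} cm^2 yr^-1")   # ~23, vs 2-6 measured locally
```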
What are the implications for the Arctic Ocean Si budget?

Given the isotopic imbalance identified by Brzezinski et al. (2021), there must be an additional sink of isotopically light Si if the Arctic Ocean Si isotope budget is to maintain balance (Fig. 6). The absence of direct isotopic observations from some of the major gateways, including the Barents Sea shelf over which most of the DSi sourced from the Atlantic Ocean flows (Torres-Valdés et al., 2013), as well as a lack of data on the isotopic composition of BSi in Arctic Ocean sediments, must be addressed to confirm the mechanisms proposed by Brzezinski et al. (2021). Si isotopes measured in the weak alkaline leachate (0.1 M Na₂CO₃, δ³⁰Si_Alk) extracted in surface sediment sequential digestion experiments and measurements of δ³⁰Si in core top waters (Ward et al., 2022), coupled with the reaction-transport modelling of stations B13, B14 and B15 presented here, contribute to the Arctic Ocean Si isotope dataset and help to fill these knowledge gaps. δ³⁰Si_Alk in the Barents Sea ranges from +0.82 ± 0.16 ‰ at station B15 to +1.50 ± 0.19 ‰ at B14 (Ward et al., 2022). The Na₂CO₃ leachate extracts an operationally defined reactive pool of Si, thought to be associated with authigenically altered and unaltered BSi. Molar Al/Si ratios of the Na₂CO₃ leachate support this concept, falling within the range expected of BSi (Ward et al., 2022). δ³⁰Si values measured in the core top waters at the Atlantic Water station (B13, +1.64 ± 0.19 ‰) and the Arctic Water station (B15, +1.69 ± 0.18 ‰) are similar to the composition of the main Arctic Ocean inflow and outflow water masses (∼ 1.7 ‰) (Giesbrecht, 2019; Brzezinski et al., 2021; Liguori et al., 2020) and heavier than that measured in the BSi deposited at the seafloor. Therefore, assuming the composition of the BSi below the BSi dissolution zone within the seafloor is similar to that at the SWI, Barents Sea sediments represent a sink of ²⁸Si relative to the composition of the inflow waters. However, δ³⁰Si_Alk at stations B13 and B14 is still isotopically heavier than the Arctic Ocean riverine input (+1.30 ± 0.3 ‰), as well as the assumed composition of the BSi buried across the Arctic seabed (+1.16 ± 0.10 ‰) in the Si budget (Fig. 6). Through reaction-transport modelling we have estimated that between 2 % and 40 % of the sediment pore water DSi pool is sourced from the dissolution of BSi. Moreover, 2.9 %-37 % of the total amount of DSi released is reprecipitated as AuSi (Table 2). AuSi preferentially takes up the lighter isotope in the Barents Sea, with a fractionation factor (³⁰ε) of −2.0 ‰ to −2.3 ‰ (Table S2), thereby enhancing the preservation of BSi and further enriching the solid phase in the lighter isotope. Clays formed during weathering have a δ³⁰Si composition ranging from −2.95 ‰ to −0.16 ‰ (Opfergelt and Delmelle, 2012). The burial of AuSi alongside BSi could therefore account for some of the isotopic imbalance. Our reaction-transport model study has also highlighted the important contribution of LSi dissolution to the sediment pore water DSi pool (60 %-98 %). If our findings on the dissolution of LSi are consistent across other Arctic shelves, a portion of the benthic DSi flux cannot be defined as internal cycling of Si and should be recategorised as an additional input to those from the major ocean gateways and discharge from rivers. It is currently estimated that the benthic flux of DSi across the whole Arctic Ocean seafloor is ∼ 0.39 Tmol Si yr⁻¹ (März et al., 2015); therefore, between 0.23 and 0.38 Tmol Si yr⁻¹ may represent an input of Si rather than a recycling term. This recategorisation could account for the additional Si inputs required to close the Si budget, as currently 32 %-47 % (or 0.21-0.38 Tmol Si yr⁻¹) of the estimated net Si output is unaccounted for (Brzezinski et al., 2021). The addition of AuSi as an output to resolve the isotopic imbalance would offset to some extent the release of DSi from LSi, while the LSi input compounds the isotopic imbalance identified by Brzezinski et al. (2021). However, here we show that, with AuSi precipitation acting as an additional sink for ²⁸Si, both mass and isotopic balance can be attained in our proposed Si budget for the Barents Sea (Fig. 6, Supplement Sect. S3). Future work should look to assess whether similar relationships exist between the dissolution of LSi and the precipitation of AuSi on other Arctic Ocean shelves if these mechanisms are to be used to balance the pan-Arctic Si budget. Additional work could include empirical assessments, such as batch or flow-through reactor experiments, to study dissolution (both BSi and LSi) and precipitation (AuSi) kinetics in Arctic Ocean sediments and to further examine and better constrain the relationship between the benthic Si and Fe redox cycles.

[Figure 6. The Arctic Ocean Si budget (left) and a proposed Si budget for the Barents Sea (right), including a benthic flux recategorisation (i.e. contributions from BSi and LSi) and AuSi burial. Boxes include flux magnitudes given in Tmol Si yr⁻¹ (top values) and the flux δ³⁰Si in per mille (‰; italicised bottom values). "In" and "Out" refer to the Si fluxes, discounting the water mass inflow and outflow (i.e. in (blue) includes rivers + LSi; out (red) includes AuSi + BSi). Grey boxes and arrows represent internal cycling. See Supplement Sect. S3 for further information on how the Barents Sea Si budget was calculated.]
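The closure test behind a budget like that of Fig. 6 can be sketched as a joint mass and flux-weighted isotope balance; the fluxes and compositions below are placeholder values chosen only to show how near-balance is assessed, not the Fig. 6 numbers themselves.

```python
# Joint mass and isotope balance check for a Si budget: total input and
# output fluxes must match, as must their flux-weighted δ30Si.
# Each entry is (flux in Tmol Si yr^-1, δ30Si in ‰); values are placeholders.

inputs = {"rivers": (0.2, 1.3), "LSi dissolution": (0.3, -0.89)}
outputs = {"BSi burial": (0.3, 1.16), "AuSi burial": (0.2, -1.8)}

def totals(terms):
    flux = sum(f for f, _ in terms.values())
    d30 = sum(f * d for f, d in terms.values()) / flux
    return flux, d30

for label, terms in (("in", inputs), ("out", outputs)):
    flux, d30 = totals(terms)
    print(f"{label:>3}: {flux:.2f} Tmol Si yr^-1 at {d30:+.2f} ‰")
# With these placeholders, fluxes match and the weighted δ30Si nearly does,
# illustrating how an isotopically light AuSi output can close the balance.
```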
Conclusions

In this study we quantify and disentangle the processes involved in the early diagenetic cycling of Si in the Arctic Barents Sea seafloor by reproducing Si isotopic and DSi concentration data from the solid and dissolved phases in a reaction-transport model (Figs. 2 and 3). Baseline simulations are able to reproduce the observational data well; however, we have also shown that the benthic Si cycle is responsive, on the order of days, to the delivery of fresh BSi. Therefore, while the transient disturbances appear to be short-lived, future work should look to incorporate these processes into the baseline simulations. Baseline model simulations also reveal that a significant proportion of the Si released from the solid phase within Barents Sea surface sediments is sourced from the dissolution of LSi (60 %-98 %), on account of the low BSi contents (0.26 wt %-0.52 wt %). Furthermore, we demonstrate that without the influence of the Fe redox cycle, which results in the release of Si adsorbed onto solid Fe (oxyhydr)oxides under anoxic conditions, the observed isotopic composition of the pore water DSi pool cannot be reconciled. Both the LSi and FeSi sources are depleted in the heavier isotope (−0.89 ‰ and −2.88 ‰ respectively), as demonstrated in a sequential digestion experiment (Ward et al., 2022), consistent with the observation that sediments of the Barents Sea represent a source of light DSi to the overlying bottom waters (Ward et al., 2022). Of the DSi sourced from BSi, LSi and FeSi, we show that between 2.9 % and 37 % is reprecipitated as AuSi. Coupled with the observation that a significant proportion of the sediment pore water DSi pool is sourced from the dissolution of LSi, this finding is significant for the regional Si budget. The dissolution of LSi represents a source of "new" Si to the ocean DSi pool, and the precipitation of AuSi inhibits the exchange of pore water DSi with overlying bottom waters and therefore represents a sink term. These observations could require the recategorisation of a portion of the benthic flux in the Arctic Ocean Si budget, which is currently defined as a recycling term, as well as the inclusion of an additional Si sink. If LSi dissolution and AuSi precipitation are not exclusive to the Barents Sea shelf, the additional input and the isotopically light output could account for both the isotopic imbalance and the remaining proportion of net Si outflow that is currently unaccounted for (Brzezinski et al., 2021). Model simulations also highlight a dichotomy in the cycling of Si in the Barents Sea seafloor, which is hypothesised to occur on at least two timescales. Observational data at stations B13 and B15 can be reproduced by assuming a steady state dynamic, thus representing a background system, which is controlled by the release of Si into the DSi pool from LSi and the reprecipitation of DSi as AuSi. However, sampling across 3 years at station B14 has uncovered Si cycling on a much shorter timescale, controlled by the deposition of fresh phytodetritus. In this transient dynamic, the release of DSi is controlled by the dissolution of more reactive BSi.
The processes occurring on the former, steady state time frame will likely remain largely unaltered with further Atlantification of the Barents Sea, owing to the mineralogical control on DSi release, whereas the latter, transient system relies upon the seasonal delivery of fresh BSi, which is subject to change as the community compositions of the MIZ and spring phytoplankton blooms shift to favour temperate Atlantic flagellate species (Neukermans et al., 2018; Orkney et al., 2020) or diatoms with lower silica content than polar species (Lomas et al., 2019). Furthermore, we have shown that the benthic DSi flux magnitude can increase 5-fold after a simulated 1-week bloom, which is calculated here to contribute a minimum of one-third of the total annual flux of DSi from the seafloor at station B14. Any perturbation in the delivery of bloom-derived, relatively reactive BSi to the seafloor could therefore be detrimental to the total annual supply of DSi from Barents Sea sediments.
Association of the EPAS1 rs7557402 Polymorphism with Hemodynamically Significant Patent Ductus Arteriosus Closure Failure in Premature Newborns under Pharmacological Treatment with Ibuprofen

Abstract: Patent ductus arteriosus (PDA) is frequent in preterm newborns, and its incidence is inversely associated with the degree of prematurity. The first choice of pharmacological treatment is ibuprofen. Several genes, including EPAS1, have been proposed as probable markers associated with a genetic predisposition for the development of PDA in preterm infants. EPAS1 NG_016000.1:g.84131C>G, or rs7557402, has been reported to be probably benign and associated with familial erythrocytosis by the Illumina Clinical Services Laboratory. Other variants of EPAS1 have been previously reported to be benign for familial erythrocytosis because they decrease gene function and are positive for familial erythrocytosis because the overexpression of EPAS1 is a key factor in uncontrolled erythrocyte proliferation. However, this could be inconvenient for ductal closure, since for this process to occur, cell proliferation, migration, and differentiation should take place, and a decrease in EPAS1 gene activity would negatively affect these processes. Single-nucleotide polymorphisms (SNPs) in the EPAS1 and TFAP2B genes were searched with high-resolution melting and Sanger sequencing in blood samples of preterm infants with hemodynamically significant PDA treated with ibuprofen at the National Institute of Perinatology. The variant rs7557402, present in the eighth intron of the EPAS1 gene, was associated with a decreased response to treatment (p = 0.007, OR = 3.53). The SNP rs7557402 was thus associated with an increased risk of pharmacological treatment failure. A probable mechanism involved could be the decreased activity of the product of the EPAS1 gene.
Introduction

The ductus arteriosus is a central vascular shunt connecting the pulmonary artery to the aorta, allowing oxygenated blood from the placenta to bypass the uninflated fetal lungs and enter the systemic circulation. The rapid closure of the ductus after birth is essential for the vascular transition to the mature, divided pattern of arteriovenous circulation. Failed ductus arteriosus closure, termed patent ductus arteriosus (PDA), is frequent in preterm newborns, with up to 64% of infants born at 27 to 28 weeks exhibiting PDA. The incidence of PDA is inversely associated with the degree of prematurity [1,2]. The National Institute of Perinatology is a tertiary-level hospital, where 300 preterm newborns weighing <1500 g are born each year. An observational study performed in 2016 in 295 preterm infants between 27 and 28 weeks found PDA in 21.6% of patients [3]. The diagnostic gold standard is two-dimensional color Doppler echocardiography, which can determine the shape and diameter of the ductus arteriosus at the aortic and pulmonary edges and the degree of hemodynamic burden [4-6]. When there is hemodynamically significant PDA, pharmacological treatment is the first choice. When the patient has a condition that contraindicates the use of drugs, or if this treatment is not effective, surgical closure may be necessary. Untreated hemodynamically significant PDA can result in life-threatening conditions, such as congestive heart failure, pulmonary artery hypertension, and neonatal necrotizing enterocolitis [7]. Cyclooxygenase (COX) inhibitors are administered as a pharmacological treatment for hemodynamically significant PDA: the most commonly used inhibitors are indomethacin and ibuprofen, although the use of acetaminophen has recently been approved [8-12]. Up to 30% pharmacological treatment failure has been observed in preterm infants. Indomethacin and ibuprofen have shown similar efficacy [8-12]. At the National Institute of Perinatology, the most commonly used pharmacological treatment is ibuprofen. In the aforementioned study at the National Institute of Perinatology, 47.7% success was found with 10 mg/kg/d ibuprofen on the first day followed by 5 mg/kg/d ibuprofen on the second and third days, respectively, and 42.8% success was observed during the second cycle [3]. The ductus arteriosus (DA) derives from the left sixth aortic arch, which in turn derives from neural crest cells. To successfully achieve ductal closure, cell growth and differentiation are essential, involving the induction of specific gene expression mediated by the interaction of transcription factors with response elements [13,14]. AP-2 transcription factors play a major role in the cellular differentiation induced by retinoic acid, particularly in neural crest cells. PDA probably results from the abnormal development of the neural crest [15]. Transcription factor AP-2 beta (TFAP2B) expression is enriched in the neural crest and may play an important role in regulating DA closure. This transcription factor has been reported to regulate the expression of EPAS1 (endothelial PAS domain protein 1, also known as hypoxia-inducible factor 2 alpha), which is involved in oxygen sensing [2,16]. The expression of EPAS1 is cell-type-restricted and predominantly occurs in endothelial cells, lung epithelial cells, and cardiac myocytes. EPAS1 trans-activated target genes contain the hypoxia responsive element (HRE) [17,18].
A bioinformatic and statistical study led by Dagle in 2009 reported several genes as probable markers associated with a genetic predisposition to PDA in preterm infants. Of all the possible markers analyzed, those that showed a significant association were the polymorphisms rs987237 of TFAP2B (p < 0.005) and rs1867785 of EPAS1 (p < 0.005). These same polymorphisms were studied in relation to the failure of pharmacological treatment with indomethacin, also exhibiting a significant association [13]. NG_016000.1:g.84131C>G, or rs7557402, has been reported to be probably benign and associated with familial erythrocytosis by the Illumina Clinical Services Laboratory [19]. Other variants of EPAS1 have previously been reported as benign for familial erythrocytosis because they decrease gene function, and they have been revealed to be positive for familial erythrocytosis because the overexpression of EPAS1 is a key factor in uncontrolled erythrocyte proliferation [20]. However, this could be inconvenient for ductal closure, since for this process to occur, cell proliferation, migration, and differentiation should take place, and a decrease in EPAS1 gene activity would negatively affect these processes [21-23]. In this study, we explored the association between genetic variants of TFAP2B and EPAS1 and PDA ibuprofen treatment failure.

Study Population

A cross-sectional exploratory study including 47 newborns with a diagnosis of PDA treated with ibuprofen was carried out at the National Institute of Perinatology in Mexico City. On behalf of the children enrolled in our study, we obtained written informed consent from their legal guardians. The study complied with the Declaration of Helsinki and was approved by the Institutional Ethics Committee and registered at the National Institute of Perinatology, project number 212250-3140-11105-01-16. Convenience sampling of consecutive cases was performed. All premature newborns (gestational age between 25 and 36.6 weeks of gestation, determined by Ballard) with a diagnosis of hemodynamically significant PDA and ibuprofen treatment born at INPer during the period from 1 January 2017 to 31 December 2019 were included. Oral ibuprofen treatment was used as follows: a first cycle of 10 mg/kg/d on the first day, followed by 5 mg/kg/d on the second and third days, respectively. If no clinical and echocardiographic response was obtained (Supplementary Materials), a second oral ibuprofen cycle was administered (20 mg/kg/d, followed by second and third doses of 10 mg/kg/dose, with administration intervals of 24 h between doses) [3]. The sample population was separated into two groups. The case group (n = 19) included patients who presented ibuprofen treatment failure and in whom hemodynamically significant PDA was closed via surgery. The control group (n = 28) included patients with successful pharmacological treatment with ibuprofen. In addition to the premature newborn samples, we included a family of nine members with a background of PDA. Blood samples from members of this family were used as controls for the techniques employed in this study.

Genetic Variant Analysis

A sample of peripheral blood was taken with a BD Microtainer® (Franklin Lakes, NJ, USA) blood collection system. DNA was isolated from leukocytes using the Promega Wizard® Genomic DNA Purification Kit (Madison, WI, USA). The DNA samples were quantified with a Thermo Scientific NanoDrop™ 2000 instrument (Waltham, MA, USA) and aliquoted and stored at −20 °C until later use.
The high-resolution melting (HRM) method was used to detect genetic variants of the TFAP2B and EPAS1 genes. The HRM primers were designed using the PrimerSelect program (DNASTAR Lasergene, Madison, WI, USA), which also ensures that the primer sequences do not form secondary structures during PCR that could increase the complexity of melting profile interpretation. Primer specificity was tested using the Primer-BLAST platform (NCBI) [24]. Primers were designed to amplify complete exonic sequences and short flanking intronic sequences. Reactions were performed with a total volume of 20 µL (5 µL of Milli-Q water, 1.5 µL of each primer at 20 pmol/µL, 10 µL of Bio-Rad Precision Melt Supermix (Hercules, CA, USA), and 2 µL of DNA at 150 ng/µL). The amplification parameters were 95 °C for 4 min; 30 cycles of 94 °C for 30 s, annealing temperature for 30 s, and 72 °C for 30 s; followed by a final extension step at 72 °C for 5 min. For the melting curve analysis, the parameters were 95 °C for 30 s and 75 °C for 30 s. Data were collected over a temperature range of 75-95 °C in 0.1 °C increments every 10 s using the CFX96 Touch Real-Time PCR System (Bio-Rad, Hercules, CA, USA). Once each HRM reaction was standardized, all samples were tested in triplicate. Melting curve analysis was performed with Bio-Rad Precision Melt Analysis Software v.1.3 (Hercules, CA, USA). All samples with melting curves different from the average underwent capillary sequencing to determine the alteration in the sequence that yielded the difference in the melting curve (see Supplementary Materials). Moreover, three random samples with an average melting curve were capillary sequenced to ensure that the observed curve corresponded to the consensus sequence obtained from the RefSeq database (NCBI) [25].

DNA Sequencing

After purification with Thermo Fisher Scientific ExoSAP-IT™ PCR cleanup reagent (Waltham, MA, USA), HRM products were sequenced using Applied Biosystems BigDye Terminator v1.1 and v3.1 kits (Applied Biosystems, Foster City, CA, USA) and an ABI PRISM 3130 DNA Analyzer (Applied Biosystems, Foster City, CA, USA). The obtained sequences were analyzed with BioEdit v.7.2 software (Ibis Biosciences, Carlsbad, CA, USA) and the NCBI Nucleotide BLAST platform (Blastn) [26]. Nucleotide sequences were translated using the Translate tool of the ExPASy Bioinformatics Resource Portal [27]. Whenever a sequence variant was found, the sample was sequenced again from the opposite direction to confirm the nucleotide change.

Bioinformatic Analysis

All variants found were searched in the NCBI databases dbSNP [28] and ClinVar [29] to gather information from previous reports. They were also analyzed with Mutalyzer v.3 software [30] to predict whether they could produce a change in the protein sequence. The variants that were predicted to alter the protein sequence were analyzed with the PolyPhen-2 (v2.0.23) bioinformatic tool [31], which classifies such changes as probably damaging, possibly damaging, or benign. Predictions are based on multiple alignments of proteins that are closely related in function and amino acid sequence to the tested protein. This tool also considers all available data in the protein databases UniProt [32] and RCSB PDB [33], such as three-dimensional structures and specific protein domain information.
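The step that flags "samples with melting curves different from the average" can be illustrated with the normalisation-and-difference computation used in HRM analysis generally (this is not the Precision Melt Analysis implementation); the melt curves below are synthetic sigmoids standing in for real instrument data.

```python
import numpy as np

# Sketch of HRM difference-curve analysis: normalise each fluorescence melt
# curve to 0-100 % and subtract a reference so variant clusters separate.

temps = np.linspace(75.0, 95.0, 201)

def melt_curve(tm, width=0.8):
    """Synthetic melt curve: fluorescence falling around melting temp tm."""
    return 1.0 / (1.0 + np.exp((temps - tm) / width))

def normalise(f):
    return 100.0 * (f - f.min()) / (f.max() - f.min())

wild_type = normalise(melt_curve(84.0))
variant = normalise(melt_curve(83.6))      # slight Tm shift, e.g. a C>G change
diff = variant - wild_type                 # difference curve vs reference
print(f"max |difference| {np.abs(diff).max():.1f} % near "
      f"{temps[np.argmax(np.abs(diff))]:.1f} °C")
```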
Synonymous variants were analyzed with Human Splicing Finder v.3.0 (HSF) [34], which integrates internal and external detection matrices of splicing sites, branch points, splicing regulatory sequences, etc., to detect the breakage of canonical splice acceptor and donor sites, the creation of alternative donor and acceptor sites, branch point disruption, the creation of silencers, and the removal of enhancers.

Statistical Analysis

Allele and genotype frequencies of the studied polymorphisms were obtained via direct counting. The Hardy-Weinberg equilibrium (HWE) was tested using the χ² test. The significance of the difference between groups was determined using Mantel-Haenszel chi-square analysis. All calculations were performed using SPSS version 18.0 (SPSS, Chicago, IL, USA). Means ± SDs and the frequencies of baseline characteristics were calculated. Student's t-test was performed to compare differences between continuous variables, and the categorical data were analyzed using χ² and Fisher exact tests. Logistic regression analysis was used to test for the association of polymorphisms with clinical variables under dominant, recessive, codominant, and additive inheritance models in the independent analysis. The most appropriate inheritance model was selected based on the Akaike information criterion and was adjusted for gestational age and sex. The statistical power to detect associations with clinical variables was >0.80, as estimated with QUANTO v.1.2.4 software [35]. Pairwise linkage disequilibrium (LD, D') estimations between polymorphisms and haplotype reconstruction were performed with Haploview version 4.1 [36] (Broad Institute of Massachusetts Institute of Technology and Harvard University, Cambridge, MA, USA).

Population Sample Characteristics

A population sample of 47 patients with a diagnosis of hemodynamically significant PDA participated in the study, divided into two groups: 19 cases (11 males and 8 females) and 28 controls (19 males and 9 females) (Table 1). Gestational age and birth weight are important risk factors for developing PDA; for this reason, it was necessary to ensure that they were comparable between groups (Table 1). No statistically significant differences were found regarding gestational age (p = 0.112) (Table 1). A significant difference was found with respect to birth weight (p < 0.0001); therefore, further analysis was adjusted for this variable to avoid spurious results. We also found statistically significant differences in sepsis development (p < 0.0001), with or without taking the time of development into account, but the difference was more pronounced in cases of late development, as observed in Table 1.

Genetic Variant Detection

We found 63 genetic variants in the population sample: 24 only in the case group, 26 only in the control group, and 13 in both groups (Table 2). Of the 24 variants found in the case group, 13 were synonymous, 6 were missense, and 5 were INDELs. Only 11 variants resulted in changes in the sequence of amino acid residues in the protein. In the control group, we found 19 synonymous variants, 7 missense variants, and no INDEL variants (26 altogether, as mentioned before). In this group, only 7 variants affected the protein. The frequency of genetic variants that modify the amino acid sequence was higher in the case group, especially considering that this group is smaller. Moreover, this group contained all of the INDEL variants found.
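A minimal sketch of the Hardy-Weinberg χ² test described in the statistical analysis is given below; the genotype counts are hypothetical, since the per-genotype tables themselves are not reproduced in the text.

```python
from scipy.stats import chi2

# Hardy-Weinberg equilibrium chi-square test for a biallelic SNP.
# Genotype counts below are hypothetical placeholders, not the study's data.

def hwe_chi2(n_cc, n_cg, n_gg):
    n = n_cc + n_cg + n_gg
    p = (2 * n_cc + n_cg) / (2 * n)          # C allele frequency
    q = 1.0 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_cc, n_cg, n_gg)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)         # df = 3 classes - 1 - 1 allele freq

stat, p_val = hwe_chi2(20, 21, 6)            # hypothetical counts, n = 47
print(f"chi2 = {stat:.3f}, p = {p_val:.3f}")
```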
INDEL variants generally cause a shift in the open reading frame, completely changing the amino acid sequence, and can result in premature stop codons. The statistical analysis of the variants shared by both groups showed no association of TFAP2B variants with treatment response (Table 2). Only the variant NG_016000.1:g.84131C>G, present in the eighth intron of the EPAS1 gene and previously reported as rs7557402, was associated with failure to respond to treatment (p = 0.007, OR = 3.53, at the 95% confidence level). It was found in both groups of the study and in the samples of the PDA family used to standardize the techniques employed in this study. Because all three genotypes were present in our study population, it was possible to test the association of inheritance models with treatment response, finding that the recessive model (homozygosity) was strongly associated with treatment failure (p = 0.017, at the 95% confidence level), unlike the dominant and codominant models, which showed no statistically significant association (Table 3). According to the Mutalyzer analysis [30], this variant produces no change in the amino acid sequence; therefore, it could not be analyzed using the PolyPhen-2 bioinformatic tool [31]. Instead, it was analyzed with Human Splicing Finder 3.0 (HSF) [34]. The HSF analysis showed that this nucleotide change disrupts the wild-type acceptor splice site of intron 8 (Figure 1). The genome viewer showed that the site where the variant is located is an acceptor site of medium strength, whose disruption most likely affects the splicing of the eighth intron of the EPAS1 gene. This variant has been reported as probably benign for familial erythrocytosis by the Illumina Clinical Services Laboratory. The classification does not have a clinical basis, but rather a criteria-based classification basis, according to ACMG (American College of Medical Genetics and Genomics) guidance [37]. The analysis of the coding sequence of EPAS1 showed that exon 9 codes for a hydroxylation site that is necessary for the covalent post-translational processing of the transcription factor (Figure 2).
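As a worked illustration of the kind of 2 × 2 test behind the association reported above, the sketch below computes an odds ratio and a Fisher exact p value; the counts are hypothetical (they respect the 19-case/28-control group sizes but are not the study's genotype table).

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 genotype-by-outcome table (rows: genotype of interest /
# other genotypes; columns: treatment failure / success). Counts sum to the
# 19 cases and 28 controls but are NOT the study's actual data.

table = [[11,  8],
         [ 8, 20]]

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, Fisher exact p = {p_value:.3f}")  # OR ≈ 3.44
```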
Discussion

PDA has a high frequency in the Mexican population, especially in preterm infants [3]. At birth, hemodynamically significant PDA is treated with COX inhibitors, and oral ibuprofen is the treatment of choice at the National Institute of Perinatology, but this type of drug generates adverse effects that can further complicate patient health. Therefore, the search for biological markers for predicting the efficacy of pharmacological treatment in preterm newborns could avoid unnecessary exposure to the adverse effects of drugs. TFAP2B is the most studied gene in the development of PDA, since it was described as being part of Char syndrome, the phenotype of which may include PDA [38,39]. Nevertheless, the use of massive sequencing and bioinformatic tools has made it possible to identify new target genes for this disease, such as EPAS1. This gene encodes a transcription factor induced when oxygen levels fall. It is part of the molecular pathway necessary to close the ductus arteriosus and is regulated by oxygen levels [40]. The higher number of variants in both genes in the case group and the presence of variants such as rs7557402 indicate a weak but measurable association with a lack of response to pharmacological treatment, highlighting the need for further and deeper study of this population. The rs7557402 variant has been reported as probably benign according to the American College of Medical Genetics and Genomics (ACMG) criteria. This variant has been associated with familial erythrocytosis by the Illumina Clinical Services Laboratory [21]. Some other variants in EPAS1 have previously been reported as benign and are also associated with familial erythrocytosis because they decrease gene product function. These variants have a positive effect on familial erythrocytosis, since the overexpression of EPAS1 is a key factor in uncontrolled erythrocyte proliferation [20]. It is possible that a variant described as "benign" for familial erythrocytosis may be associated with a decrease in EPAS1 activity. This effect is inconvenient for ductal closure, which requires cell proliferation, migration, and differentiation; a decrease in EPAS1 gene activity would negatively affect all of these processes [22,23]. The activity of EPAS1 could be diminished, since the analysis with HSF v.3.0 software showed that the C>G substitution in intron 8 leads to the breakdown of an acceptor site. The ultimate effect is faulty splicing of the mRNA and therefore a defective protein. Inside the EPAS1 exon 9 sequence, there is a hydroxylation site. EPAS1 (or HIF-2α) belongs to the family of hypoxia-inducible factors (HIFs). These factors are formed by two subunits, one with nuclear localization (HIF-β) and the other with cytoplasmic localization (HIF-α). HIF-2α possesses 48% amino acid identity with HIF-1α; it is regulated by prolyl hydroxylation, dimerizes with HIF-2β, and binds to the same target DNA sequence (5′-RCGTG-3′) as the HIF-1α:HIF-1β heterodimer. The sets of genes regulated by HIF-1 and HIF-2 overlap, but some are specific and depend on cell type [41]. At physiological oxygen levels (normoxia), HIF-prolyl hydroxylases (PHDs) hydroxylate proline residues on HIF-α subunits, leading to their destabilization by promoting ubiquitination via the von Hippel-Lindau (VHL) ubiquitin ligase and subsequent proteasomal degradation [42]. Functional specificity for transactivation via HIF-1α and HIF-2α appears to reside in amino acid residues 415 to 659 and 418 to 619, respectively [41]. HIF-α transactivation is also repressed in an O₂-dependent manner due to asparaginyl hydroxylation via the factor-inhibiting HIF (FIH). In hypoxia, the O₂-dependent hydroxylation of HIF-α subunits via PHDs and FIH is reduced, resulting in HIF-α accumulation, dimerization with HIF-β, and migration into the nucleus to induce an adaptive transcriptional response [42].
In this case, the covalent modification determines where the protein should be located according to the cellular conditions. HIF-2α hydroxylation sites are located inside exons 9 and 12 (Figure 2). However, how these hydroxylation sites are regulated through space and time is not very clear. One possibility is that the rs7557402 variant disrupts the hydroxylation site encoded in exon 9, leading to increased translocation of HIF-2α into the nucleus and triggering the expression of genes related to the hypoxia response, which would produce the opposite of the expected response. Another possible explanation could be related to a reduced rate of HIF-2α degradation secondary to deficient ubiquitination and degradation by the proteasome. The lack of renewal of HIF-2α could compromise its function, since under normal conditions HIF-α is continuously synthesized and degraded [42]. In addition to PHD-conferred alterations in protein stability, there is now evidence that hydroxylation can affect protein activity and protein/protein interactions with respect to alternative substrates; therefore, if the hydroxylation sites are lost, HIF-2α activity could be affected [41]. The rs7557402 variant has a CADD score for the G allele of 7.555, which indicates that the variant is likely benign. In addition, the GERP score is 1.33, which means that the variant is highly conserved among species. We performed an alignment including ten mammalian species, and all of them except Capra hircus showed conservation of the C allele [43]. Furthermore, analysis with RegulomeDB v.2 data showed that the rs7557402 variant generates more compacted chromatin, resulting in weaker transcription of the EPAS1 gene in the fetal heart [44]. It is also important to highlight the presence of rs7557402 in the samples of the family used for the standardization of the techniques employed in this study. These samples were not part of the study because we do not have enough information about the treatments employed before closure via catheterization for each case. Nevertheless, we can infer that they ultimately required invasive treatment for the closure of the PDA, as in our case group. Moreover, the presence of this variant in a familial case corroborates the effect of genetic factors other than the TFAP2B gene on PDA development and, in this particular case, on the lack of response to pharmacological treatment and the need to close the PDA using invasive treatment, such as surgery or catheterization. These results should be interpreted with caution because of the limited sample size; this study must be replicated in a larger population to confirm or reject the association. Conclusions The rs7557402 SNP is associated with the failure of pharmacological treatment for PDA closure, specifically with ibuprofen, in our study population. A possible explanation could be the decrease in the activity of the EPAS1 gene associated with the variant. Further studies with a larger sample size are needed to strengthen this association and to decide whether this polymorphism could be used as a prognostic biomarker for pharmacological response. Moreover, our findings suggest the need to study other genetic factors involved in PDA development in addition to TFAP2B in preterm newborns. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics13152558/s1.
Supplementary Figure S1: Transthoracic echocardiogram with bidimensional and color Doppler modalities that shows a short-axis view of the heart with a hemodynamically significant patent ductus arteriosus. Supplementary Figure S2: Transthoracic echocardiogram with bidimensional and color Doppler modalities that shows a sagittal view of a hemodynamically significant patent ductus arteriosus. Supplementary Figure S3: Transthoracic echocardiogram with bidimensional and color Doppler modalities that shows a short-axis view of the heart of a patient that underwent pharmacologic treatment for the closure of the ductus arteriosus. The image shows laminar flow in the pulmonary artery without any residual shunt. Informed Consent Statement: On behalf of the children enrolled in our study, we obtained written informed consent from their legal guardians. Data Availability Statement: All data generated or analyzed during this study are included in this published article.
Effect of seed priming on drumstick (Moringa oleifera L.) seedling growth with different concentrations of gibberellic acid. Moringa is a nutritious vegetable tree with high consumer preference that is propagated through seeds and cuttings; sexual propagation through seed is effective and commercially profitable. A pot experiment on the effect of seed priming with different concentrations of gibberellic acid on drumstick (Moringa oleifera L.) seedling growth was carried out during the kharif season at the Department of Horticulture, College of Agriculture, Rajmata Vijayaraje Scindia Krishi Vishwa Vidyalaya, Gwalior (M.P.). The experiment was laid out in a Completely Randomized Design with eight treatments and three replications. Observations on the characters under study were recorded from five randomly selected plants in each treatment; the data were recorded as per the standard procedure and analyzed statistically as per the design. The observations were recorded on growth parameters, viz. height of seedling (cm), number of leaves per seedling, diameter of stem (mm) and length of shoot (cm), at 25 and 45 days after seed sowing, while root length (cm), root diameter (mm), fresh and dry weight of root (g), fresh and dry weight of shoot (g), fresh and dry weight of seedling (g), and survival percentage of seedlings were observed at 45 days after seed sowing. The maximum height of seedling (22.82 cm and 38.62 cm), number of leaves per seedling (83.51 and 107.25), diameter of stem (5.37 mm and 7.83 mm) and length of shoot (19.15 cm and 30.31 cm) at 25 and 45 days after seed sowing, and the maximum root length (8.31 cm), root diameter (3.56 mm), fresh and dry weight of shoot (13.29 g and 4.03 g), fresh and dry weight of root (6.01 g and 1.94 g), fresh and dry weight of seedling (19.30 g and 5.97 g) and survival percentage (81.33% at 15 days after transplanting) were recorded with seed soaked in 10 ppm GA3 for 24 hours. Introduction Drumstick (Moringa oleifera L.) is also known as 'Horseradish tree', 'Mullakkai', 'Murrugi', 'Sahjan' and 'Muringa'. Moringa is an important vegetable tree belonging to the single-genus family Moringaceae and is grown mainly in semi-arid, tropical and subtropical regions. Moringa grows best where temperatures range from 25 °C to 38 °C and annual rainfall is at least 500 mm. It is a multi-purpose, fast-growing tree, all parts of which can be used in various ways, including for food (Hsu et al., 2006) [6] and medicinal purposes (Fuglie, 2001) [4]. Fresh drumstick leaves contain 19.3-26.4 per cent crude protein, which is essential for livestock (Aregheore, 2002) [2]. The leaves are also rich in vitamin C (seven times more than orange), calcium and protein (four times and two times more than milk, respectively), potassium (three times more than banana), iron (three times more than Indian spinach) and vitamin A (four times more than carrot) (Anonymous, 2010; Hossain et al., 2012) [1,5]. The seeds of moringa contain a colourless oil, known as 'Ben' or 'Behen' oil, which has a high market value comparable to olive oil, is considered a substitute for sperm whale oil, and is often used for lubrication (Booth and Wickens, 1998) [3]. Moringa is propagated sexually through seeds and vegetatively through stem cuttings; seed propagation is preferable to vegetative propagation. Seeds are usually planted in the nursery using a light medium, a 3:1 mixture of soil and sand.
Priming is a technique in which seeds are soaked in solutions with high osmotic potential. This prevents the seeds from absorbing enough water for radicle protrusion, thus suspending the seeds in the lag phase (Taylor et al., 1998) [9]. Hydropriming involves soaking seeds in water before sowing: the seed needs to be wet to soften the seed coat, and a pre-soak provides this necessary moisture. This hydration method of priming is sufficient to allow pre-germination metabolic activation and early growth to take place, but insufficient to allow radicle protrusion through the seed coat. Hormonal priming may enhance seed germination, growth and seedling uniformity. Currently, synthetic growth regulators have received widespread acceptance and application in the field of horticulture. Nowadays, growth regulators are used for many purposes, such as promoting seed germination, initiating rooting and enhancing the growth of cuttings, which is most useful to growers. A treated cutting rapidly produces a uniform and extensive root system, which, when transplanted, survives better than untreated cuttings. Among the different plant growth regulators, auxin is the most effective rooting aid; auxins stimulate adventitious root formation in stem cuttings. Gibberellic acid (GA3) is an important growth regulator that breaks seed dormancy, promotes germination, hypocotyl growth and cell division in the cambial zone, and increases leaf size. GA3 stimulates the hydrolytic enzymes needed for the degradation of the cells surrounding the radicle and thus initiates germination by promoting seedling elongation growth in cereal seeds (Rood et al., 1990) [8]. The optimum soaking time and concentration of the growth regulator (gibberellic acid) are important to help enhance seedling growth of drumstick. Materials and Methods Experiment location and site The investigation was carried out at the experimental field of the Department of Horticulture, College of Agriculture, RVSKVV, Gwalior (M.P.), located at 26° 13' N latitude and 78° 14' E longitude at an elevation of 211.5 m above mean sea level in the agro-climatic region of Madhya Pradesh. Rainfall occurs from mid-June to September and occasionally during winter. The climate of the experimental area is sub-tropical with hot and dry summers, where the maximum temperature rises to 45-47 °C in May-June; during winter, the minimum temperature falls as low as 2 °C in December and January. Frost may occur from mid-January to the first week of February. Details of the experiment The experiment consisted of eight treatments and three replications under a Completely Randomized Design (CRD). The treatments comprised soaking the seeds for two durations (24 hours and 48 hours) in normal water or in different concentrations of gibberellic acid (5 ppm, 7.5 ppm and 10 ppm). Source of seed Fresh, healthy and sound seeds of PKM-1 were obtained from the Department of Horticulture, College of Agriculture, Gwalior (M.P.). Nursery preparation The nursery medium was prepared by mixing soil, sand and vermicompost in a proportion of 2:1:1, treating it uniformly with a 0.2% solution of Captan, and leaving it for 24 hours. A total of 720 polythene bags (18 × 12 cm) were then filled with this prepared growing medium. GA3 solutions prepared for seed treatment The required quantities of gibberellic acid, i.e., 5 mg, 7.5 mg and 10 mg, were weighed with the help of an electronic balance.
After weighing, the different quantities of gibberellic acid were transferred separately into labelled glass beakers and dissolved with the help of 95% ethyl alcohol. To each labelled beaker, distilled water was added to make up 1000 ml, giving solutions of 5 ppm, 7.5 ppm and 10 ppm (1 mg of solute per litre of water corresponds to 1 ppm). Soaking of seeds in different treatments In each treatment, 90 seeds were soaked for 24 hours or 48 hours. Experiment management Seed sowing After completion of 24 or 48 hours of soaking, the seeds were removed from the beakers and one seed was sown in each polythene bag at a depth of two to three centimetres. Pinching Pinching was done to promote uniform growth of the moringa plants and to remove overtopping leaves so as to make maximum use of space. Irrigation Light irrigation was given just after sowing of seeds in each treatment, and subsequent irrigations were given uniformly, as and when required, depending on climatic conditions. Weeding Two hand weedings were done at 15 and 30 days after seed sowing to minimize weed infestation. Plant protection measures Plant protection measures were applied uniformly whenever infestation occurred at different stages of the experiment. Observations For observations, five seedlings per treatment were selected randomly. The observations were recorded on growth parameters as follows. Height of seedling (cm): the height of the seedling was measured from the root tip to the primary leaf with the help of a measuring scale at 25 and 45 days after sowing. Number of leaves per seedling: the number of leaves per seedling in each treatment was counted at 25 and 45 days after sowing and averaged. Diameter of stem (mm): the diameter was measured at the centre of the stem length at 25 and 45 days after seed sowing with the help of a vernier caliper, and the mean stem diameter was expressed in millimetres. Length of shoot (cm): the shoot length was measured from the collar region to the tip of the shoot at 25 and 45 days after seed sowing, and the mean shoot length was expressed in centimetres. Root length (cm): it was measured from the lower collar portion to the tip of the primary root at 45 days after sowing, and the mean root length was expressed in centimetres. Root diameter (mm): it was measured at the centre of the root length with the help of a vernier caliper at 45 days after sowing, and the mean root diameter was expressed in millimetres. Fresh weight of shoot (g): the fresh weight of shoot was taken by weighing fresh shoots at 45 days after seed sowing with the help of a weighing balance and expressed in grams. The data were recorded as per standard procedure and analyzed statistically as per the design. Dry weight of shoot (g): fresh shoots were kept in an oven maintained at 80 °C ± 1 °C for twenty-four hours; after drying, the weight of the dried shoot was recorded with the help of an electronic weighing machine, and the mean weight was calculated and expressed in grams. It was taken at 45 days after seed sowing. Fresh weight of root (g): fresh roots were weighed at 45 days after sowing with the help of a weighing balance, and the weight was expressed in grams and analyzed. Dry weight of root (g): fresh roots were kept in an oven maintained at 80 °C ± 1 °C for twenty-four hours; after drying, the weight of the dried root was recorded with the help of an electronic weighing machine, and the mean weight was calculated and expressed in grams. It was taken at 45 days after seed sowing.
Fresh weight of seedling (g): the fresh weight of seedling was calculated by summing the already measured fresh weights of the shoot and root of the same seedling and expressed in grams. Dry weight of seedling (g): the dry weight of seedling was calculated by summing the already measured dry weights of the shoot and root of the same seedling and expressed in grams. Survival percentage of seedlings: it was calculated at 15 days after transplanting with the help of the following formula: survival percentage = (number of seedlings survived / total number of seedlings transplanted) × 100. Statistical analysis The experimental data were recorded and analyzed using the Completely Randomized Design (CRD) technique suggested by Panse and Sukhatme (1985) [7]. The critical differences for the treatment comparisons were worked out wherever the 'F' test was found significant at the 5% level of significance; a minimal illustrative sketch of this analysis is given after the fresh weight of root results below. Results and Discussion Growth studies Height of seedling (cm) The data presented in Table 1 reveal that the maximum height of seedling (22.82 cm and 38.62 cm) was recorded with seed soaked in 10 ppm GA3 for 24 hours (T4), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (21.19 cm and 36.93 cm) and seed soaked in 7.5 ppm GA3 for 24 hours (T3) (20.56 cm and 36.82 cm) at 25 and 45 days after seed sowing, respectively. The minimum height of seedling was recorded with seed soaked in normal water for 48 hours (T5) (12.99 cm and 27.09 cm), followed by seed soaked in normal water for 24 hours (T1) (15.12 cm and 29.31 cm) at 25 and 45 days after seed sowing, respectively. However, seed soaked in normal water for 24 hours was found at par with seed soaked in normal water for 48 hours at 25 days after seed sowing. Number of leaves per seedling The data presented in Table 1 show that the maximum number of leaves per seedling (83.51 and 107.25 at 25 and 45 days after seed sowing, respectively) was likewise recorded with seed soaked in 10 ppm GA3 for 24 hours (T4). Diameter of stem (mm) The data presented in Table 2 reveal that the diameter of stem increased significantly with increasing concentration of GA3 under both soaking times at 25 and 45 days after seed sowing, compared with seed soaked in normal water. The maximum stem diameter (5.37 mm and 7.83 mm) was recorded with seed soaked in 10 ppm GA3 for 24 hours (T4), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (5.16 mm and 7.64 mm) and seed soaked in 7.5 ppm GA3 for 24 hours (T3) (4.92 mm and 7.32 mm) at 25 and 45 days after seed sowing, respectively. However, similar concentrations of GA3 were at par with each other under both soaking times at 25 and 45 days after seed sowing. The minimum stem diameter was recorded with seed soaked in normal water for 48 hours (T5) (2.55 mm and 5.06 mm), followed by seed soaked in normal water for 24 hours (T1) (2.71 mm and 5.27 mm) at 25 and 45 days after seed sowing, respectively; these two treatments were at par with each other at both stages. Length of shoot (cm) The data for the various treatments in respect of length of shoot are summarized in Table 2. The results indicated that the different seed soaking treatments affected the shoot length of drumstick. Seed soaked in 10 ppm GA3 solution for 24 hours (T4) exhibited the maximum shoot length (19.15 cm and 30.31 cm), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (17.58 cm and 29.11 cm) and seed soaked in 7.5 ppm GA3 solution for 24 hours (T3) (17.47 cm and 29.26 cm) at 25 and 45 days after seed sowing, respectively. However, both of the follower treatments were at par with each other at both crop growth stages, and the higher concentration of GA3 under both soaking times was also at par at 25 days after seed sowing.
The minimum shoot length (9.91 cm and 21.76 cm) was noted with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (11.90 cm and 23.46 cm) at 25 and 45 days after seed sowing, respectively. Root length (cm) It is clearly evident from Table 3 that root length was significantly influenced by the different treatments. The maximum root length (8.31 cm) was recorded with seed soaked in 10 ppm GA3 for 24 hours (T4), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (7.82 cm) and seed soaked in 7.5 ppm GA3 for 24 (T3) and 48 (T7) hours (7.56 cm and 7.38 cm, respectively). However, no significant difference was found between the two soaking times at similar GA3 concentrations, except at the highest concentration. The minimum root length (5.33 cm) was recorded with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (5.85 cm). Root diameter (mm) The data for the various treatments in respect of root diameter are summarized in Table 3. The results indicated that seed soaked in 10 ppm GA3 solution for 24 hours (T4) exhibited the maximum root diameter (3.56 mm), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (3.41 mm), seed soaked in 7.5 ppm GA3 solution for 24 hours (T3) (3.38 mm) and 48 hours (T7) (3.26 mm); these were at par with each other. The minimum root diameter (2.71 mm) was noted with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (2.82 mm), which were at par with each other. Fresh weight of shoot (g) The data in Table 3 clearly show that the fresh weight of shoot increased significantly with increasing GA3 concentration under both soaking times. The significantly maximum fresh weight of shoot (13.29 g) was recorded with seed soaked in 10 ppm GA3 for 24 hours (T4), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (12.61 g) and seed soaked in 7.5 ppm GA3 for 24 hours (T3) (11.72 g). The minimum fresh weight of shoot (7.73 g) was recorded with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (8.37 g), which were at par with each other. Dry weight of shoot (g) The data are summarized in Table 3. The results indicated that the different seed soaking treatments significantly affected the dry weight of shoot. Seed soaked in 10 ppm GA3 solution for 24 hours (T4) showed the maximum dry weight of shoot (4.03 g), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (3.82 g), seed soaked in 7.5 ppm GA3 solution for 24 hours (T3) (3.35 g) and 48 hours (T7) (3.21 g). The minimum dry weight of shoot (2.21 g) was noted with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (2.52 g), which were at par with each other. Fresh weight of root (g) The data are summarized in Table 4. The results indicated that the different seed soaking treatments significantly affected the fresh weight of root. Seed soaked in 10 ppm GA3 solution for 24 hours (T4) exhibited the maximum fresh weight of root (6.01 g), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (5.72 g), which was at par with it. The minimum fresh weight of root (3.87 g) was noted with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (4.01 g), which were at par with each other.
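As referenced under Statistical analysis above, the following minimal sketch illustrates how treatment comparisons of this kind can be carried out for a CRD: a one-way ANOVA F-test followed by a critical difference (CD) at the 5% level. The seedling heights below are hypothetical placeholders, not the recorded data, and the study's exact computations (Panse and Sukhatme, 1985) may differ in detail.

```python
# Minimal sketch of a CRD analysis: one-way ANOVA plus critical difference (CD).
# All values below are hypothetical and for illustration only.
import numpy as np
from scipy import stats

# Hypothetical seedling heights (cm) at 45 DAS for three of the eight treatments
data = {
    "T1 (water, 24 h)":      [29.1, 29.5, 28.9, 29.6, 29.4],
    "T4 (10 ppm GA3, 24 h)": [38.4, 38.9, 38.5, 38.8, 38.5],
    "T8 (10 ppm GA3, 48 h)": [36.8, 37.1, 36.7, 37.2, 36.9],
}

f_stat, p_value = stats.f_oneway(*data.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # treatments differ if p < 0.05

# Critical difference at 5% for comparing two treatment means (equal replication)
groups = list(data.values())
k, n = len(groups), len(groups[0])
error_df = k * (n - 1)
mse = np.mean([np.var(g, ddof=1) for g in groups])  # pooled error mean square
cd_5pct = stats.t.ppf(0.975, df=error_df) * np.sqrt(2 * mse / n)
print(f"CD (5%) = {cd_5pct:.2f} cm")  # means differing by more than CD are significant
```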
Dry weight of root (g) The data in Table 4 clearly show that the dry weight of root increased significantly with increasing GA3 concentration under both soaking times. The maximum dry weight of root (1.94 g) was recorded with seed soaked in 10 ppm GA3 for 24 hours (T4), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (1.83 g) and seed soaked in 7.5 ppm GA3 for 24 hours (T3) (1.78 g); each of these was statistically at par with its follower treatment. The minimum dry weight of root (0.77 g) was recorded with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (0.88 g), which were at par with each other. Fresh weight of seedling (g) The data in Table 4 clearly show that the fresh weight of seedling increased with increasing GA3 concentration under both soaking times. The maximum fresh weight of seedling (19.30 g) was recorded with seed soaked in 10 ppm GA3 for 24 hours (T4), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (18.33 g) and seed soaked in 7.5 ppm GA3 for 24 hours (T3) (17.34 g), which were at par with each other. The minimum fresh weight of seedling (11.60 g) was recorded with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (12.38 g), which were at par with each other. Dry weight of seedling (g) The data are summarized in Table 4. The results indicated that the different seed soaking treatments significantly affected the dry weight of seedling, and an increase in GA3 concentration significantly increased it. Seed soaked in 10 ppm GA3 solution for 24 hours (T4) exhibited the maximum dry weight of seedling (5.97 g), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (5.65 g), seed soaked in 7.5 ppm GA3 solution for 24 hours (T3) (5.13 g) and 48 hours (T7) (4.73 g). The minimum dry weight of seedling (2.97 g) was noted with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (3.40 g), which were at par with each other. Survival percentage of seedlings The data on survival percentage of seedlings presented in Table 4 revealed that it increased significantly with increasing GA3 concentration under both soaking times. The maximum survival percentage (81.33%) was recorded with seed soaked in 10 ppm GA3 for 24 hours (T4), followed by seed soaked in 10 ppm GA3 for 48 hours (T8) (78.32%) and seed soaked in 7.5 ppm GA3 for 24 hours (T3) (78.17%), which were at par with each other. Seed soaked in 5 ppm and 7.5 ppm GA3 for 24 hours (T2 and T3) was found at par with seed soaked in 7.5 ppm and 10 ppm GA3 for 48 hours (T7 and T8), respectively. The minimum survival percentage of seedlings (55.64%) was noted with seed soaked in normal water for 48 hours (T5), followed by seed soaked in normal water for 24 hours (T1) (61.67%). Conclusion From this investigation, it can be concluded that, among all the treatments, pre-sowing treatment of drumstick seeds by soaking in 10 ppm GA3 for 24 hours was the most suitable for all the parameters of seedling growth and survival of drumstick seedlings.
Analysis and Recognition of Cello Timbre Based on a Deep Trust Network Model. Timbre analysis and similarity calculation of music signals are important research topics for computer music information retrieval systems. In this paper, a deep trust network model is applied to the study of a musical timbre model. The 72-dimensional features of the cello tone are first extracted. Using the wrapper feature selection method, a 14-dimensional optimal feature subset that reflects the timbre characteristics is selected, which greatly reduces the complexity of cello timbre similarity calculation. On this subset, SVR is used to classify and distinguish eight types of tone data, and a recognition accuracy of 62% is achieved, which verifies the feasibility of the timbre model. Introduction As the most important multimedia form, music has received widespread attention in the field of computer research. In recent years, with the rapid growth of digital musical tone data, audio information retrieval of musical tone signals has received widespread attention as well. In the commercial application of music information retrieval, music software and search engines can be implemented easily. However, such retrieval is essentially based on the existing text tag information of the music signal, such as the song title, singer name, and song style; recommendations based on user behavior characteristics combined with the characteristic factors of the music itself are not yet sufficient [1]. The characteristic information of the music itself, such as timbre and melody, has yet to be tapped. In essence, this is still traditional text retrieval. The text information corresponding to a music file can only be obtained by manual annotation; in the face of a large number of multimedia files, this method is not only laborious and time-consuming but also almost impossible to complete. At the same time, labeling music files with text cannot represent the complete information of the music, especially information that reflects the characteristics of the music signal itself, such as timbre, melody, and pitch. The loss of this information seriously affects the accuracy of music retrieval results, leading to low retrieval efficiency. Audio information retrieval for musical tone signals includes multiple research directions: musical instrument recognition, singer recognition, humming retrieval, automatic beat detection, and sentiment analysis. One of the important research topics is automatic recognition of instrument sounds, which involves the sounding principle of the musical tone and the perception mechanism of the human ear, and it has important significance for mining and applying the characteristic information contained in the musical tone signal. The problems of musical instrument recognition and of speaker recognition in speech signal processing are similar: both determine the sound source of a signal based on its timbre characteristics. However, the concept and perception of timbre has always been vague and mysterious; in fact, it is not clearly defined in psychology, musicology, or computer science [2].
The complexity of timbre is reflected in the following aspects: timbre is a subjective attribute of sound perception, not a pure physical attribute; timbre is a multi-dimensional attribute; no subjective scale for judging timbre has yet been found to be suitable; and there is currently no unified standard set of musical tone signals for researchers to test developed timbre calculation models [3,4,5]. Construction and Implementation of the Deep Trust Network Model The deep trust network constructed in this paper is composed of one RBM layer whose visible nodes follow a Gaussian distribution, multiple hidden RBM layers, and one SVR layer. During model pre-training, the joint distribution of the data at the input layer and the hidden layer takes the standard energy-based form

$$P(v, h; \theta) = \frac{\exp(-E(v, h; \theta))}{\sum_{v, h} \exp(-E(v, h; \theta))} \qquad (1)$$

from which the conditional distribution of the hidden layer follows as $p(h_j = 1 \mid v) = \sigma\!\left(\alpha_j + \sum_i \omega_{ij} v_i\right)$, where $\sigma(\cdot)$ is the logistic sigmoid. The middle layers perform the traditional RBM information conversion, that is, the (visible layer) Bernoulli-(hidden layer) Bernoulli RBM data conversion. Its energy function is defined as

$$E(v, h; \theta) = -\sum_{i=1}^{I} \sum_{j=1}^{J} \omega_{ij} v_i h_j - \sum_{i=1}^{I} b_i v_i - \sum_{j=1}^{J} \alpha_j h_j \qquad (2)$$

Here θ is the given model parameter set, ω_ij represents the connection weight between the visible node v_i and the hidden node h_j, b_i is the offset of the visible node, α_j is the offset of the hidden node, I is the number of nodes in the visible layer, and J is the number of nodes in the hidden layer. SVR model The algorithm extracts effective information by transforming the kernel function of the support vectors to obtain the decision result. Figure 1 shows the model of SVR [6,7]. (Figure 1. Schematic diagram of the SVR algorithm.) Let {(x_i, y_i)}, i = 1, 2, ..., n, be the prediction reference data sample set with n sample data, where x_i ∈ R^d is the input vector and y_i ∈ R is the decision result. The function expression of SVR is

$$f(x) = \omega^{T} \varphi(x) + b \qquad (3)$$

where ω represents the weights given to different factors and φ(x) represents the mapping function. Considering that the mapped data may still be linearly inseparable in the high-dimensional space, and that the high-dimensional fuzzy separability of this part of the data has little effect on the actual prediction, relaxation (slack) variables are introduced to control the scale of fuzzy classification. The optimization of SVR can then be expressed as

$$\min_{\omega, b, \xi, \xi^{*}} \; \frac{1}{2}\|\omega\|^{2} + C \sum_{i=1}^{n} (\xi_i + \xi_i^{*}) \quad \text{s.t.} \quad y_i - \omega^{T}\varphi(x_i) - b \le \varepsilon + \xi_i, \;\; \omega^{T}\varphi(x_i) + b - y_i \le \varepsilon + \xi_i^{*}, \;\; \xi_i, \xi_i^{*} \ge 0 \qquad (4)$$

Here ξ_i and ξ_i* are the relaxation variables. The optimization problem can be solved via the Lagrange function

$$L = \frac{1}{2}\|\omega\|^{2} + C\sum_{i=1}^{n}(\xi_i + \xi_i^{*}) - \sum_{i=1}^{n} \alpha_i \big(\varepsilon + \xi_i - y_i + \omega^{T}\varphi(x_i) + b\big) - \sum_{i=1}^{n} \alpha_i^{*} \big(\varepsilon + \xi_i^{*} + y_i - \omega^{T}\varphi(x_i) - b\big) - \sum_{i=1}^{n} (\eta_i \xi_i + \eta_i^{*} \xi_i^{*}) \qquad (5)$$

Solving equation (3) by way of equation (5), the SVR prediction model is

$$f(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^{*}) K(x_i, x) + b \qquad (6)$$

In the formula, K(x_i, x) is the kernel function of SVR; a kernel function of appropriate accuracy can be selected according to the actual requirements. Deep trust SVR model building The deep trust SVR model built in this paper differs from the traditional shallow SVR model. It consists of a deep learning model comprising one RBM layer with Gaussian-distributed visible nodes, multiple hidden RBM layers, and one SVR machine (the schematic diagram of the model is shown in Figure 2). Voice characteristic analysis Using wrapper feature selection, 30-dimensional and 18-dimensional feature vectors were obtained in two experiments, and in a third the final optimal feature subset was 21-dimensional. Across these three experiments, the feature vectors selected with a probability of 100% form a 14-dimensional feature subset, which we call the minimal core feature subset vector reflecting timbre.
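To make the layered architecture concrete, the sketch below wires an RBM feature transform into an SVR using scikit-learn. It is an illustrative stand-in under stated assumptions rather than the authors' implementation: sklearn's BernoulliRBM has binary visible units (so the inputs are scaled to [0, 1]) whereas the paper's first layer is Gaussian, only a single RBM layer is stacked, and the feature data, dimensions, and hyperparameters are hypothetical.

```python
# Minimal sketch: RBM feature transform feeding an SVR, as a stand-in for the
# deep trust SVR model. Data and hyperparameters are hypothetical.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import SVR
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))                   # hypothetical 14-dim timbre features
y = rng.integers(0, 8, size=200).astype(float)   # 8 tone classes, coded numerically

model = Pipeline([
    ("scale", MinMaxScaler()),           # BernoulliRBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("svr", SVR(kernel="rbf", C=1.0)),   # regression output rounded to a class label
])
model.fit(X, y)
pred = np.clip(np.rint(model.predict(X)), 0, 7)
accuracy = np.mean(pred == y)
```

Treating the eight tone classes as numeric targets and rounding the SVR output mirrors the paper's somewhat unusual use of a regression machine for class discrimination; a classifier such as SVC would be the more conventional choice.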
Among the weighted features selected into the core subset is the spectral attenuation (roll-off) cutoff frequency, feature no. 4; numbers such as 4, 8, and 9 index members of the core feature subset. Tone Similarity Experiment Results After feature selection, the selected features of test set 2 were used for testing. Note that in our data set the test set was split into two parts: test set 1 was used as the test set while training the model, and test set 2 was used to test the overall model. Figure 3 shows the tone recognition results on test set 2 after feature selection, using multiple GMM models under feature vectors of different dimensions. We can see from the figure that, after feature selection, the timbre characteristics expressed by the Gaussian models on the 30-, 18-, and 21-dimensional feature subsets achieved good timbre discrimination. With the minimal 14-dimensional feature subset, a result close to the best recognition rate was also achieved. Conclusion Starting from the deep learning model, this paper takes the temporal integration of the timbre feature sequence as the input of the deep learning model to realize instrument recognition. Deep learning models greatly improved the recognition of wind instruments while also improving the overall recognition rate.
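The per-class GMM tone recognition referenced in the experiments above can be sketched as follows: fit one Gaussian mixture per tone class and assign a query vector to the class whose model gives the highest log-likelihood. The feature dimensionality, component count, and data are hypothetical placeholders, not the paper's setup.

```python
# Minimal sketch of per-class GMM tone recognition by maximum log-likelihood.
# Training data, dimensions, and mixture settings are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical 14-dimensional feature vectors, 50 training examples per class
train = {c: rng.normal(loc=c, scale=1.0, size=(50, 14)) for c in range(8)}

models = {
    c: GaussianMixture(n_components=3, covariance_type="diag", random_state=0).fit(X)
    for c, X in train.items()
}

x_query = rng.normal(loc=3, scale=1.0, size=(1, 14))       # a query feature vector
scores = {c: m.score(x_query) for c, m in models.items()}  # mean log-likelihood
predicted_class = max(scores, key=scores.get)
print(predicted_class)
```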
Cyclooxygenase-2-prostaglandin E2-eicosanoid receptor inflammatory axis: a key player in Kaposi's sarcoma-associated herpesvirus associated malignancies. The role of cyclooxygenase-2 (COX-2), its lipid metabolite prostaglandin E2 (PGE2), and the eicosanoid (EP) receptors (EP1-4) underlying the proinflammatory mechanistic aspects of Burkitt's lymphoma, nasopharyngeal carcinoma, cervical cancer, prostate cancer, colon cancer, and Kaposi's sarcoma (KS) is an active area of investigation. The tumorigenic potential of COX-2 and PGE2 through EP receptors forms the mechanistic context underlying the chemotherapeutic potential of nonsteroidal anti-inflammatory drugs (NSAIDs). Although the role of COX-2 is described in several virus-associated malignancies, the biological significance of the COX-2/PGE2/EP receptor inflammatory axis has been extensively studied only in Kaposi's sarcoma-associated herpesvirus (KSHV/HHV-8) associated malignancies such as KS, a multifocal endothelial cell tumor, and primary effusion lymphoma (PEL), a B cell proliferative disorder. The purpose of this review is to summarize the salient findings delineating the molecular mechanisms downstream of COX-2, involving PGE2 secretion and its autocrine and paracrine interactions with the EP receptors (EP1-4), and the COX-2/PGE2/EP receptor signaling regulating KSHV pathogenesis and latency. KSHV infection induces COX-2, PGE2 secretion, and EP receptor activation. The resulting signal cascades modulate the expression of the KSHV latency genes latency-associated nuclear antigen-1 (LANA-1) and viral Fas (TNFRSF6)-associated via death domain-like interleukin-1β-converting enzyme-like inhibitory protein (vFLIP). vFLIP was also shown to be crucial for the maintenance of COX-2 activation. The mutually interdependent interactions between the viral proteins (LANA-1/vFLIP) and COX-2/PGE2/EP receptors were shown to play key roles in the biological mechanisms involved in KS and PEL pathogenesis, such as blockage of apoptosis, cell cycle regulation, transformation, proliferation, angiogenesis, adhesion, invasion, and immune suppression. Understanding the COX-2/PGE2/EP axis is very important for developing new, safer, and more specific therapeutic modalities for KS and PEL. In addition to COX-2 being a therapeutic target, EP receptors represent ideal targets for pharmacologic agents, as PGE2 analogues and their blockers/antagonists possess antineoplastic activity without the gastrointestinal and cardiovascular toxicity reported with a few NSAIDs. (Translational Research 2013; 162:77-92) In the 19th century, Rudolf Virchow first proposed a potential link between inflammation and cancer based on his observations of the presence of leukocytes in tumors. 1 Inflammation is a physiological mechanism that evolved for wound healing, and it is therefore counterintuitive to consider it oncogenic. Nevertheless, inflammation is a 'double-edged sword' with a pathologic edge that can promote various aspects of tumorigenesis, such as deregulated cell proliferation, migration, angiogenesis, and apoptosis.
1 Within the last decade, a multitude of studies demonstrating (a) the abundance of inflammatory cells such as macrophages and fibroblasts in cancer biopsies, (b) the role of proinflammatory molecules such as cyclooxygenase-2 (COX-2), prostaglandin E2, leukotrienes, transforming growth factor beta (TGF-β), hypoxia-inducible factor-1 alpha, vascular endothelial growth factor (VEGF), nitric oxide synthase, nitric oxide, reactive oxygen species (ROS), cytokines, and chemokines in the pathogenesis of several cancers, and (c) the tumorigenic nurturing properties of the proinflammatory tumor microenvironment strongly indicate that inflammation plays a pathogenic role in several cancers. [1][2][3][4][5][6][7][8] Chronic persistent inflammation is believed to play an important role in the pathogenesis of 15% of all malignancies. [1][2][3][4][5] Depending on the type and stage of cancer, the physiological-to-pathologic switch of inflammation is triggered by various factors such as genomic instability, epigenetic changes, somatic mutations, tumor suppressor- and oncogene-mediated carcinogenesis, chronic persistent infections, and environmental stressors such as pollutants. 1,7,8 The role of tumor viruses in chronic persistent inflammation-associated carcinogenesis has been demonstrated in several malignancies, such as Kaposi's sarcoma-associated herpesvirus (KSHV/HHV-8) in Kaposi's sarcoma (KS) and primary effusion lymphoma (PEL), Epstein-Barr virus (EBV) in Burkitt's lymphoma and nasopharyngeal carcinoma, human papillomavirus (HPV) in cervical cancer, hepatitis B (HBV) and hepatitis C (HCV) viruses in hepatocellular cancer, and human T-lymphotropic virus (HTLV) in T-cell leukemia. 6,[9][10][11] Viruses are obligate intracellular parasites and use host proteins for genome replication and the production of progeny. 12 Piracy of inflammatory mechanisms is a recurring theme in the story of infections by KSHV, EBV, HCV, HPV, HBV, and HTLV because of the proliferative, angiogenic, immunosuppressive, and antiapoptotic niche that persistent inflammation provides. 11 The purpose of this review is to highlight the salient findings demonstrating how KSHV uses the pivotal COX-2/PGE2/EP receptor-mediated inflammatory axis for its survival and pathogenesis and, therefore, plays a crucial role in KSHV-associated malignancies. VIRAL INFECTIONS AND COX-2 Infections by several viruses have been shown to regulate COX-2 expression and PGE2 production, such as HBV in hepatocytes, 28,29 HCV in Huh-7 cells, 30 human herpesvirus 6 (HHV-6) in monocytes, 31 human cytomegalovirus (CMV) in peripheral blood mononuclear cells (PBMCs), smooth muscle cells, and fibroblasts, 32-35 murine gammaherpesvirus 68 (MHV-68) in NIH 3T3 cells, 36 HIV in monocytes, 37,38 HTLV-1 in PBMCs, 39 influenza virus in PBMCs, 40 enterovirus 71 in human neuroblastoma cells, 41 dengue virus in dendritic cells, 42 severe acute respiratory syndrome (SARS)-associated coronavirus in 293T cells, 43 Theiler's murine encephalomyelitis virus in astrocytes, 44 encephalomyocarditis virus in macrophages, 45,46 coxsackievirus B3 in monocytes, 47 respiratory syncytial virus in macrophages and dendritic cells, 48 and canine distemper virus in monocytes.
49 COX-2/PGE2 has been implicated in a multitude of viral mechanisms, such as genome replication (HBV, CMV, HTLV), gene expression (MHV-68), transmission (HTLV), cell tropism (rhesus CMV), cell invasion (CMV), and T cell regulation (HIV), and a viral homologue of COX-2 has even been identified in rhesus CMV, revealing the significance of COX-2 in the evolution of inflammation-mediated viral pathogenesis. [28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][47][48][49][50][51] Among the herpesviruses, studies using COX inhibitors have shown the role of COX-2/PGE2 pathways in replication and a successful lytic cycle in HSV, CMV, HHV-6, and MHV-68. 31,[34][35][36][51][52][53][54][55][56][57][58] However, the role of the extensive molecular framework underlying the COX-2/PGE2/EP receptor inflammatory axis in herpesviral latency has been described only in KSHV-associated malignancies such as KS and PEL. [59][60][61][62][63][64][65] KSHV ASSOCIATED DISEASES KSHV/HHV-8 belongs to the γ-2 herpesvirus family and is the etiologic agent underlying KS, PEL, and multicentric Castleman's disease. [66][67][68][69][70] Like other herpesviruses, the KSHV life cycle is characterized by 2 phases, the latent and the lytic cycles. 70 After infection, KSHV enters the latency phase, where the virus remains evasive by transforming the infected cell into a stable reservoir. [66][67][68][69][70] The lytic cycle results in the replication of the viral genome and the production of new viral progeny. 70 Both life cycle phases are associated with distinct viral proteins. 70 Gene expression profiles of KS, PEL, and multicentric Castleman's disease biopsies have shown that the majority of tumor cells express latency transcripts, with 1%-3% of tumor cells undergoing the lytic cycle at a given time point, and both stages of the life cycle are implicated in the pathogenesis of KSHV-associated diseases. 70 Although there are no specific treatments targeting KSHV-associated diseases, highly active antiretroviral therapy (HAART) and the consequent immune reconstitution have been demonstrated to be beneficial in treating AIDS-KS. [70][71][72] KS. Epidemiologically, KS is classified into 4 subgroups: (1) classical KS, as described by Moritz Kaposi in elderly men of Mediterranean origin in 1872, 73 (2) endemic KS in sub-Saharan Africa, (3) epidemic KS in AIDS patients, where KS forms the most common AIDS-associated malignancy, and (4) transplant-associated KS. [74][75][76] Pathologically, KS is a multifocal angioproliferative tumor of vascular nature characterized by extravascular erythrocytes, spindle-shaped cells of endothelial origin, and inflammatory cells such as monocytes, fibroblasts, neutrophils, and lymphocytes interspersed between narrow, irregular, angulated slits within a proinflammatory and angiogenic microenvironment. 70 Fatality from KS is often due to systemic spread into the respiratory system, gastrointestinal tract, lymph nodes, and other organs. 70 PEL. PEL is a rare yet aggressive form of B cell lymphoma that accounts for 2%-4% of all AIDS-associated non-Hodgkin lymphomas (NHLs), with a prognosis of less than 6 months. 66,67,69,71,77 PEL is characterized by primary lymphomatous aggregations within the major body cavities such as the pleura, pericardium, and peritoneum. 66,67,69,71,77 Pathologically, PEL cells show varying phenotypes, such as immunoblastic, plasmablastic, and anaplastic, and are proposed to lie between the pro-B cell and plasma cell lineages.
66,67,69,71,77 PEL cells are characterized by B cells transformed by persistent KSHV infection and contain multiple copies (on the order of 50-150 copies/cell) of episomal KSHV genomes with the latent viral gene expression pattern involving latency-associated nuclear antigen (LANA)-1, the viral homologues of the host proteins cyclin (vCyclin) and FLICE-inhibitory protein (vFLIP), and a pre-microRNA transcript encoding viral microRNAs; vIRF3/K10.5/LANA-2 and a homologue of IL-6 (vIL-6) are also expressed in some PEL cells. 66,67,69,71,77 PEL cells express a variety of cell surface markers from different stages of B cell development, such as the activation markers CD30, CD38, and CD71 and several plasma cell markers including CD138, VS38c, and MUM-1/IRF4, but are devoid of the B cell markers CD19 and CD20. 66,67,69,71,77 KSHV LATENCY, INFLAMMATION, AND COX-2 KSHV latency is proposed to be a symphony of well-orchestrated interactions between viral and host proteins leading to the transformation of infected cells for viral survival through successful genome replication and immune evasion. 70,[78][79][80][81] The host and viral protein interactions initially established by KSHV infection for survival, through the establishment and maintenance of latency, progress pathologically as KS and PEL under conditions of persistent selective pressure such as AIDS-related or transplant-associated immune suppression. 70,[79][80][81] The decrease in the incidence of KS after HAART therapy in AIDS patients is suggestive of this scenario. [70][71][72] The host mechanisms underlying the establishment and maintenance of KSHV infection and KSHV-associated malignancies include cell signaling, anti-apoptosis, angiogenesis, immune modulation, and cell proliferation mediated by cytokines, growth factors, and inflammatory molecules. 70,[79][80][81] Thus, identification of the molecules used by the KSHV latency program will enable us to delineate the pathogenesis of KS and PEL as well. Studies by Naranatt et al (2004) 82 and Sharma-Walia et al (2006) 61 first indicated the induction of COX-2 during de novo KSHV infection of human microvascular dermal endothelial (HMVEC-d) cells and introduced a novel idea regarding the functional significance of COX-2/PGE2 within the context of the KSHV latency program. KSHV infection induced COX-2/PGE2, and PGE2 supplementation reversed the COX-1/COX-2 inhibitor-mediated downregulation of the latency gene LANA-1. Therefore, these studies for the first time generated the hypothesis that KSHV infection-induced COX-2/PGE2 is crucial for the establishment and maintenance of latency. 61 The study by Sharma-Walia et al (2006) was particularly significant considering the oncogenic potency of COX-2 through the activation of inflammatory mechanisms, the proinflammatory mechanisms underlying KS and PEL pathogenesis, and the well-characterized roles of COX-2 in other viral tumors such as Burkitt's lymphoma and cervical cancer. INDUCTION OF COX-2 AND EP RECEPTORS BY KSHV Several gene array studies have demonstrated the induction of COX-2 in a multitude of malignant and premalignant human cancer lesions, with progressively increasing expression as the stage of the cancer advances. 83 We demonstrated CD31-COX-2 double-stained spindle-shaped cells in a tissue microarray of human KS sections (eye orbit, tonsil, mouth, and small bowel) (Sharma-Walia et al, 2010). 59 Similarly, abundant expression of mPGES, PGE2, and EP1-4 was observed in human KS biopsies (George Paul et al, 2010).
62 Collectively, these findings corroborate the earlier in vitro observations 61,82 and constitute the first detailed investigation of COX-2 and EP receptors in human KS biopsies. There are several possible mechanisms underlying COX-2/PGE2/EP receptor induction in KS lesions, such as persistent KSHV infection, persistent chronic inflammation, and the pathologic stress from chronic persistent infection and inflammation in KS patients. COX-2 induction has been demonstrated by other viral proteins as well, such as the Tax protein (HTLV-1), gp120 (HIV), HBx (HBV), and the CoV N-protein (SARS virus). 29,37,43,93 The KSHV latency protein vFLIP and the lytic proteins KSHV G protein-coupled receptor, a constitutively active lytic phase protein with significant homology to the human IL-8 receptor, and K15 are the viral proteins proposed to be capable of inducing COX-2. vFLIP has been shown to induce COX-2 in other studies as well. 94,95 Recently, we 64 delineated the detailed mechanistic aspects of vFLIP-mediated COX-2 expression, which is mediated through NF-κB, p38, RSK, and the transcription factor CREB. In addition, vFLIP-activated COX-2 expression and PGE2 secretion were demonstrated to be part of a signaling loop in which COX-2/PGE2 was required for vFLIP-induced NF-κB activation. 64 The induction of COX-2 by the lytic proteins KSHV G protein-coupled receptor and K15 also raises the question of whether COX-2 plays a role in the lytic cycle, which is still being investigated. 96,97 The induction of EP receptors by viral infections is largely an unexplored arena. Studies by George Paul et al (2010) 62 demonstrated that EP1, EP3, and EP4 protein levels are significantly upregulated in long-term KSHV-infected endothelial cells. We observed upregulation of the EP1-4 receptors in de novo KSHV-infected HMVEC-d cells 65 too. EP receptors are present in endothelial cells because of their general homeostatic functions, such as GI mucosal protection. 13 However, their pathologic upregulation is a characteristic of many malignancies, such as colorectal cancer, 19 and, therefore, the work by George Paul et al (2010) 62 and George Paul et al (2013) 65 is strongly suggestive of their role in KS pathogenesis. Further work is required to characterize the signaling and transcriptional mechanisms underlying the induction of EP receptor expression. REGULATION OF KSHV LATENCY BY EP RECEPTORS PGE2 and EP receptors are proposed to be the tumorigenic workhorses of COX-2. 5 EP receptors are GPCRs and have been well characterized in the pathogenesis of a multitude of cancers, such as melanoma, breast cancer, and colon cancer, where they contribute to proliferation, immunosuppression, angiogenesis, invasion, and blockade of apoptosis through the activation of Src kinase, cAMP/CREB, PI3K/Akt, Ras/Raf, ERK-1/2, NF-κB, EGFR, PPARδ/β, and GSK-3β/β-catenin pathways. [22][23][24] The role of Src kinase, PI3K, Akt, ERK, and NF-κB during the early events of KSHV infection and the establishment of latency is well characterized. [84][85][86]89,98 Collectively, studies by Sharma-Walia et al (2010; 2012) 59,64 demonstrated that pathways downstream of COX-2, when activated by viral and/or nonviral mechanisms, participate in enriching the tumor microenvironment and consequently in various pathologic processes underlying KS, such as endothelial transformation, neovascularization, and metastasis.
59,64 However, the pathways downstream of COX-2 resulting in the activation of Src, cAMP, PI3K, Akt, Ras/Raf, ERK, NF-κB, EGFR, and GSK-3β/β-catenin, which form the first line of signal transducers in a molecular avalanche eventually resulting in the induction of inflammatory cytokines (ICs), growth factors (GFs), angiogenic factors (AFs), and matrix metalloproteinases (MMPs), are still an active area of investigation. We 62 identified the involvement of EP receptors in the induction of various signaling molecules downstream of COX-2 in the KSHV latency program. Specifically, the EP1 receptor was implicated in the activation of Ca2+, PI3K, and NF-κB, the EP2 receptor in PI3K, PKCζ/λ, and NF-κB activation, and the EP4 receptor in PI3K, PKCζ/λ, ERK1/2, and NF-κB activation in long-term-infected cells. 62 EP1, EP2, and EP4 antagonists could also downregulate the expression of the major KSHV latency gene LANA-1 by inhibiting the induction of Ca2+ and of Src, PI3K, PKCζ/λ, and NF-κB signaling. 62 The signal molecules regulating the COX-2 promoter and PGE2-induced LANA-1 promoter activity were found to be similar to the EP receptor-mediated signal transduction pathways in latently infected endothelial cells, and COX-2 gene expression and PGE2 secretion were also significantly downregulated by pharmacologic inhibition of the EP2 and EP4 receptors. 62 These observations implicate for the first time a role of EP receptors in any form of herpesvirus latency, thus substantiating the earlier observations by Sharma-Walia et al (2006) 61 and elucidating the signal transduction network through EP receptors initiated by KSHV infection-mediated COX-2 activation and PGE2 secretion. 62 PGE2 in the tumor microenvironment activates EP receptor-mediated signal cascades in a paracrine and autocrine fashion that exert their effects on LANA-1 and COX-2 expression. 62 Consequently, a self-sustained positive feedback loop networking the KSHV protein LANA-1 to the proinflammatory pathways regulated by COX-2/PGE2/EP receptors is created by viral infection. Recent work by Dupuy et al (2012) further substantiates the role of EP receptors in KS pathogenesis by reporting the use of PGE2 inhibitors as an attractive approach to treat aggressive KS, as they could restore the activation and survival of tumoricidal NK cells. 99 These studies provided strong evidence that the downmodulation of NKG2D is mediated by inflammatory PGE2, known to be released by KS cells, and also showed that PGE2 acts by preventing IL-15-mediated activation of NK cells. 99 The role of EP receptors in the induction of several KSHV-associated signal networks, and consequently various pathogenic mechanisms, is indicative of how KSHV subverts the COX-2/PGE2/EP receptor-mediated protumorigenic signal pathways to sustain viral and host gene expression. 62 However, these studies also demonstrated that neither chemical inhibitors (NS-398 and indomethacin) nor si-COX-2 could completely abolish the induction of ICs, GFs, MMPs, and AFs, indicating the presence of a multitude of host molecules like COX-2 subverted by KSHV infection. 59 ROLE OF COX-2 IN KSHV ASSOCIATED B CELL NEOPLASIA (PEL) PEL comprises B cells transformed by latent KSHV infection. 77,100,101 Studies have proposed that the expression of KSHV latency genes, the proinflammatory environment, and the manipulation of canonical anticancer host defense machinery, such as p53 and p21, are cumulatively and interdependently vital in the metamorphosis of PEL neoplasia.
63,77,100,101 The mechanistic role of COX-2 in hematological malignancies 18 and in the KSHV latency program in endothelial model systems is well established. [59][60][61][62]64 The study by Paul et al (2011) 63 for the first time delineated the role of COX-2 in PEL pathogenesis using the COX-2 inhibitor nimesulide. Nimesulide downregulated the KSHV latency genes vFLIP and LANA-1 and induced G1 cell cycle arrest and apoptosis through the activation of the p53/p21 tumor suppressor pathway and the downregulation of the cell survival kinases p-Akt1/2 and p-GSK-3β and the angiogenic factor VEGF-C in PEL cells. 63 LANA-1 is a multifunctional protein and a major marker of KSHV latency. 70,102 The diverse roles of LANA-1 in KSHV latency include maintenance of viral episomes, host gene manipulation through the recruitment of chromatin-binding proteins, cell cycle regulation, and blockade of apoptosis by downregulating p53 and Rb. 70,102 vFLIP is one of the key KSHV latent proteins; it performs multiple functions, such as IL-8 and IL-6 upregulation, induction of NF-κB, spindling of infected endothelial cells, modulation of cell proliferation, and immune evasion. 64,95,103-106 PEL consists of transformed B cells with in vitro clonogenic properties attributed to a multitude of molecules. 77 A key observation by Paul et al (2011) is the inhibition of the colony formation capacity of PEL cells by nimesulide, because it encapsulates the pathologic consequences of COX-2 inhibition-mediated latency blockade, G1 arrest, and apoptosis induction in PEL cells. 63 The nimesulide-mediated proliferation arrest, alteration of the cell cycle profile, and apoptosis in PEL cells could be related to the downregulation of the KSHV latency proteins LANA-1 and vFLIP, resulting in the blockade of virus-induced prosurvival mechanisms in PEL. [107][108][109][110][111][112][113][114] However, considering the oncogenic potential of COX-2/PGE2/EP receptors in other cancer systems that is also important for PEL pathogenesis, the antigrowth effects of nimesulide could also be due to the drug's effects on these pathways, independent of viral proteins. 5,15,[115][116][117][118][119][120][121][122][123] CHEMOTHERAPEUTIC POTENTIAL OF NSAIDS IN TREATING PEL NSAIDs include COX-1/COX-2 inhibitors such as aspirin, indomethacin, and diclofenac and COX-2-specific inhibitors such as nimesulide and the COXIB family (celecoxib, rofecoxib, valdecoxib, and lumiracoxib). 124,125 COX-2-specific drugs such as the COXIBs have gained popularity in the last 2 decades because of their potent antipyretic and analgesic effects, and notoriety because numerous trials strongly suggested an increase in cardiovascular events from the chronic use of rofecoxib and celecoxib. 124,125 From a chemotherapeutic perspective, considering the severe side effects of existing anti-PEL drug regimens, which provide no specific cure for PEL, the goal should be to identify a drug with potent anti-KSHV and anticancer activity and the fewest side effects. Several lines of work are currently underway to develop anti-PEL therapies based on PEL pathogenesis, such as the proapoptotic agents bortezomib and azidothymidine, the antiproliferative antibiotic rapamycin, the p53 activator nutlin-3a, the antiviral compounds cidofovir and IFN-α, the reactive oxygen species hydrogen peroxide, activation of the unfolded protein response, and the KSHV latency gene-blocking agents glycyrrhizic acid and small RNA transcripts.
77,113,114,[126][127][128][129][130][131][132][133][134][135][136][137][138] The well-established tumorigenic potential of the COX-2/PGE2/EP receptor pathway, 18,24 the availability of well-characterized EP receptor antagonists and of Food and Drug Administration-approved COX-2 inhibitors with known anticancer effects, 18 the demonstration of COX-2/PGE2/EP receptors in KSHV latency, 59,61,62 and the correlation between COX-2 expression and poor NHL prognosis 18 provided an excellent context for Paul et al (2011) 63 to examine the chemotherapeutic potential of NSAIDs in treating PEL. The study by Paul et al (2011) 63 and the work by George Paul et al (2013) 65 examined the chemotherapeutic potential of nimesulide and celecoxib against PEL and several NHL cell lines, respectively. Nimesulide is a well-characterized COX-2 inhibitor with known anticancer properties and has been prescribed to approximately 500 million people in 50 different countries since its introduction in 1985. 14,121,139 Celecoxib was introduced in 1998, and several lines of work have strongly suggested the anticancer effects of both nimesulide and celecoxib. 140,141 Celecoxib's anticancer effect is proposed to be due to COX-2 inhibition and to non-COX dependent antigrowth effects. 142 Nimesulide could induce significant proliferation arrest in a multitude of KSHV+/EBV− (BC-3, KSHV-BJAB), KSHV−/EBV+ (Akata/EBV+, LCL, Raji), KSHV+/EBV+ (JSC-1), and KSHV−/EBV− (Loukes, Ramos, Akata/EBV−) NHL cell lines, with selective potency against KSHV+/EBV− cell lines, suggesting that the proliferation arrest induced by nimesulide in all NHL cell lines tested is not due to generalized antiproliferative effects of NSAIDs on tumor cell lines. 63 In the work by Paul et al (2013), 65 celecoxib had significant antiproliferative effects on KSHV+/EBV− (BCBL-1 and BC-3), KSHV−/EBV+ (Akata/EBV+), KSHV+/EBV+ (JSC-1), and KSHV−/EBV− (BJAB) cell lines. The chemotherapeutic potential of EP receptor antagonists in any NHLs is still unexamined. The study by Paul et al (2013) 65

CONCLUSIONS AND FUTURE STUDIES

A key aspect of chronic inflammation, and of the oncogenesis attributable to inflammation, is the sustenance of driving factors such as COX-2 activity, PGE2 secretion, and PGE2 mediated functional autocrine and paracrine signaling. 5,27 An interesting finding in the study by George Paul et al (2010) 62 is the downregulation of COX-2 gene expression and PGE2 secretion by EP2 and EP4 antagonists, indicating a positive feedback loop mediated through EP2 and EP4 receptor signaling that simultaneously regulates LANA-1 and COX-2 expression. 62 Mechanistically, the stability of the COX-2 messenger RNA (mRNA) transcript has been shown to be mediated by p38/MK2 dependent signaling acting on the ARE sequences in the 3′ UTR of the COX-2 mRNA. 181 Interestingly, the KSHV protein kaposin B has also been shown to stabilize mRNA transcripts with 3′ UTR ARE sequences through p38/MK2 signaling. 182 Further studies are critical to fully understand this pathway, such as examining the effect of EP receptor antagonism on the gene expression of kaposin B, on cytokine and p38/MK2 activation, and on COX-2 protein levels. Multiple promoters (LTi, LTc, LTd) have been identified in the KSHV latency locus and account for the transcripts of LANA-1, vFLIP, vCyclin, the viral microRNAs, and the kaposins.
183,184 Therefore, the induction of the LANA-1 promoter by PGE2 and EP receptor agonists also raises the question of whether PGE2 and the EP receptors could activate other latency promoters as well. Elucidating such mechanisms, if any, would provide a comprehensive perspective on how KSHV utilizes PGE2 and EP receptors for regulating latency. Studies with LANA-1 promoter deletion constructs identified a PGE2 response region in the KSHV latency locus. 62 The KSHV latency locus is known to be regulated by Sp1, CTCF, and several other unidentified transcription factors (TFs).
185,186 Among the TFs identified within the minimal region of the LANA-1 promoter required for PGE2 mediated LANA-1 promoter activity, there are several transcription factors that could potentially be stimulated by PGE2 and the EP receptors, such as Sp1, C/EBP, c-Jun, Oct-1, and Oct-6. 22,187 The functional significance of these TFs in inducing the LANA-1 promoter, their specific binding sites, and the influence of PGE2 and EP receptors over these TFs remain to be determined. A key concept introduced by George Paul et al (2010) 62 is the role of the EP1 receptor in inducing Ca2+ signaling in the KSHV latency program. The study identified a specific type of calcium signal induced by the EP1 receptor in long-term-infected cells, raising several questions, such as which effector molecules and transcription factors are activated by calcium signaling. One of the most intriguing findings of the study by Paul et al (2011) 63 was the downregulation of syndecan-1, VDR, and AQP3 expression by nimesulide in PEL cells. Syndecan-1/CD138, VDR, and AQP3 are uniquely overexpressed in PEL cell lines, unlike in other NHLs. 69,188 The role of the transmembrane proteoglycan syndecan-1 in cell migration through Rac-1/PKCα signaling and the significance of syndecan secretion in proteoglycan signaling are key aspects of oncogenesis. 189 VDR is the natural receptor for 1α,25-dihydroxyvitamin D3. 190 Induction of VDR is associated with chromatin remodeling and is also proposed to increase the risk of esophageal squamous, prostate, and pancreatic cancers through the activation of osteopontin and Ran-GTPase. 190 AQP3 is a channel protein involved in the transport of water and glycerol and in ATP generation. 191 In lung adenocarcinoma, colorectal cancer, and squamous cell carcinoma, AQP3 has been proposed to play a role in promoting cell migration through actin depolymerization and ATP generation. 191 The link between COX-2 and the expression of syndecan-1, AQP3, and VDR within the context of PEL raises several important questions, such as the role of proteoglycan mediated signaling, chromatin remodeling, and ATP metabolism in PEL, and how COX-2 might contribute to PEL pathogenesis through such a novel signal network. Overall, the studies reviewed here provide a glimpse of the molecular framework underlying the angiogenic stress response proinflammatory protein COX-2, its infamous lipid metabolite PGE2, and the EP receptors in the establishment and maintenance of KSHV latency and, therefore, implicate COX-2 inhibitors and EP receptor antagonists as potent chemotherapeutic modalities for treating KSHV related lymphomas (Fig 1). Thus, the studies add a novel paradigm to the pathogenesis of KSHV associated diseases and raise several questions that could expand our understanding of the role of chronic persistent inflammation in KS and PEL. Currently, NHLs are the fifth most common cancer in the United States and account for 5% of all cancers, with an annual incidence increasing by 1%-2%. 126,192 Keeping in mind that the ultimate aim of cancer treatment is to inhibit the growth of precancerous and cancerous cells without affecting normal cells, could the antiproliferative effects of COX-2 inhibitors and EP receptor antagonists against various NHL cell lines reviewed here fulfill this aim?
The data emanating from our in vitro studies are valuable and informative and require further examination (ongoing studies) using an in vitro angiogenic model and an in vivo nude mice model to further validate COX-2 and PGE2 inhibitors and EP receptor antagonists as novel therapeutics to target latent KSHV infection, viral pathogenesis, and the associated diseases KS and PEL.

Fig 1. 98,193 KSHV interactions with receptors, while binding and entering the target cell, induce a variety of overlapping cell signaling cascades (Extracellular signal-regulated kinase, Phosphatidylinositide 3-kinase, Rho family of GTPases, Focal adhesion kinase, Src, nuclear factor kappa-light-chain-enhancer of activated B cells, and protein kinase C) and transcription factors (c-Fos, c-Jun, c-Myc, and Signal transducer and activator of transcription 1-alpha) early during infection. 59,[84][85][86][87][88][89][90]92,[193][194][195][196][197][198] KSHV infection, via the induction of signal pathways, also reprograms and modulates various host cell genes, 82 and one of these molecules is the angiogenic stress response gene COX-2. 61,82 KSHV infection induced COX-2 leads to the secretion of its inflammatory metabolite PGE2. 61 A variety of transcription factors (NF-kB, NFAT, NF-IL-6/cEBP, AP-1, and CRE) can stimulate COX-2 expression. KSHV entry associated signal cascades involving FAK, Src, JNK, and p38 activate the transcription factors NFAT and cyclic adenosine monophosphate response element-binding protein (CREB), which stimulate COX-2 gene expression and PGE2 secretion. 60 PGE2 exerts its effect through the family of 7-transmembrane G-protein-coupled rhodopsin-type EP (1-4) receptors, which, along with COX-2 and PGE2, were detected in human KS lesions. 59,62 Besides manipulating host genes, KSHV establishes latency in the host cell, as observed by the increased expression of its viral latent genes latency associated nuclear antigen (LANA)-1 and vFLIP. PGE2 in the microenvironment of the infected cell functions in paracrine and autocrine fashion to establish and maintain the expression of the viral latency protein LANA-1 through Ca2+, Src, PI3K, NF-kB, and ERK1/2 mediated signal cascades. 62 EP receptor antagonists downregulate LANA-1 expression through inhibition of Ca2+, p-Src, p-PI3K, p-PKCζ/λ, and p-NF-kB, while exogenous PGE2 and EP receptor agonists induced the LANA-1 promoter by activating transcription factors (yin-yang 1, Specificity Protein 1, octamer transcription factor-1, octamer transcription factor-6, CCAAT-enhancer-binding proteins, and c-Jun). 62 Collectively, our studies demonstrate that KSHV has pirated the proinflammatory PGE2 and its receptors to maintain its latency in the host cell. Conversely, the viral latency protein vFLIP mediated signaling sustains COX-2 expression and PGE2 secretion. 64 The KSHV oncogenic protein vFLIP induces COX-2/PGE2 to enhance its transforming ability (anchorage independent colony formation), metastatic potential (matrix metalloproteinase (MMP)-10), and inflammatory phenotype (inflammatory cytokines: monocyte chemotactic protein-1, RANTES, GRO-α/β, interleukin 8, and interleukin 6; inflammation-related adhesion molecules: ICAM-1 and VCAM-1; and chemokines: CXCL-6 and CXCL-5), and to promote anoikis resistance and prolong infected cell survival (cell survival genes: Cellular inhibitor of apoptosis protein-1, Cellular inhibitor of apoptosis protein-2, X-linked inhibitor of apoptosis protein, Superoxide dismutase 2, B-cell lymphoma 2, and immediate early response gene X-1; antiapoptotic proteins: B-cell lymphoma 2, myeloid leukemia cell differentiation protein, B-cell lymphoma-extra large, Bcl-2 interacting mediator of cell death, and BAX translocation to the cytoplasm; and cell survival kinases: NF-kB, PI3K, and AKT). 64 In addition, KSHV-induced COX-2/PGE2 regulated multiple events involved in KS pathogenesis, such as the secretion of proinflammatory cytokines and growth factors (Interleukin-1 alpha, Interleukin-1 beta, Subunit beta of interleukin 12/cytotoxic lymphocyte maturation factor 2, Tumor necrosis factor alpha, Interferon gamma-induced protein 10, neutrophil-activating protein-2, Oncostatin M, thrombopoietin, fibroblast growth factors, Flt3-ligand, Fractalkine, Insulin-like growth factor-binding protein, and Osteoprotegerin), angiogenic factors (vascular endothelial growth factor [VEGF]-A/-C), and invasive factors (MMP-2/-9). 59 COX-2 blockade reduced latently infected endothelial cell adhesion/invasion, survival, and proliferation (shortened S phase, arrested infected cells at the G1/S phase). 59 Similar to the COX-2/PGE2 downstream effects in KS pathogenesis, we established that COX-2 contributes to PEL pathogenesis via viral gene independent and dependent pathways. COX-2 blockade reduced KSHV latent (LANA-1 and vFLIP) gene expression, disrupted p53-LANA-1 protein complexes, and activated the p53/p21 tumor-suppressor pathway in PEL cells. 63 COX-2/PGE2 contributed to prosurvival mechanisms in PEL cells via the regulation of cell survival (p-Akt and p-GSK-3β), cell cycle and apoptosis blockade (cyclins E/A and cdc25C), angiogenesis (VEGF-C), transforming potential (the colony forming capacity of PEL cells), and modulation of PEL defining genes (syndecan-1, aquaporin-3, and the vitamin D3 receptor). 63 Collectively, these observations provide a comprehensive molecular framework linking COX-2/PGE2 with KS and PEL pathogenesis and identify the chemotherapeutic potential of targeting the COX-2-PGE2-EP axis in treating KS and PEL.

The authors thank Keith Philibert for critically reading the review.
2018-04-03T05:28:51.033Z
2013-04-06T00:00:00.000
{ "year": 2013, "sha1": "4583dec9d7efcb0b5c82ccb798fa4448ae2ca2d8", "oa_license": null, "oa_url": "http://www.translationalres.com/article/S1931524413000777/pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "ce8e4fef2d0b163a0822383d091fbcf85831870e", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine", "Biology" ] }
234369408
pes2o/s2orc
v3-fos-license
3D Cluster-Based Ray Tracing Technique for Massive MIMO Channel Modeling

In this paper, a novel 3-dimensional (3D) approach is proposed for the precise modeling of massive multiple input multiple output (M-MIMO) channels at millimeter wave (mmW) frequencies. This model is based on both deterministic and statistical computations to extract the characteristics of the propagation channel. In order to increase algorithm execution speed, the physical channel is divided into two regions. The first region refers to those parts of the channel which can be mapped with simple planes such as walls, ramps, etc. The second region is usually complex and is defined by describing the channel with physical clusters. These physical clusters yield multipath components (MPCs) with similar angles of arrival (AoA) and time delays. The ray tracing algorithm is utilized to find ray paths from the transmitter (Tx) to the receiver (Rx). Some characteristics of the MPCs in each cluster are defined according to appropriate statistical distributions. The non-stationary property of M-MIMO along the antenna array axis is considered in the algorithm. Due to the correspondence between the propagation environment and the scatterers, the accuracy of the model is highly increased. To evaluate the proposed channel model, simulation results are compared with some measurements reported in the literature.

INDEX TERMS Channel Modeling, Cluster, Massive MIMO, Stochastic Modeling.

I. INTRODUCTION
The fifth generation (5G) of wireless communication systems has attracted an extensive amount of research effort and attention. 5G technology can offer extremely low latency, high energy efficiency, and very high data rates [1]-[5]. To achieve 5G design targets, information theory suggests three main key approaches [2,6]: (i) ultra-dense networks, known as small cell technology, (ii) large quantities of new bandwidth, which means utilizing higher frequencies, and (iii) high spectral efficiency by using numerous antennas at the base station (BS), referred to as massive multiple input multiple output (M-MIMO). The antenna configuration in M-MIMO systems includes hundreds or even thousands of antenna elements to increase the channel capacity, improve spectral and energy efficiency, and promote the reliability of the system with respect to conventional MIMO [7]-[16]. M-MIMO systems may exhibit non-stationary properties across the array axis. This phenomenon is usually neglected in the case of conventional MIMO channels [17]. Thus, different antenna elements may face distinct propagation characteristics. Due to the large dimension of the antenna array, a receiver may be located at a distance shorter than the Rayleigh distance criterion [14]. In such a case, the far-field (FF) condition is not fulfilled, and a spherical wavefront has to be considered [15,16]. Millimeter wave technology is one of the most effective solutions for achieving huge bandwidths for 5G systems [17]-[20]. High data rates of several gigabits per second can easily be obtained in mmW communications [21]-[23]. Consequently, a mmW M-MIMO system has great potential to improve wireless access and throughput [6,24]. Furthermore, the very short wavelength of mmW frequencies helps to significantly reduce the M-MIMO array size. Additionally, the high gain provided by the large number of antennas in M-MIMO systems is helpful to overcome the severe pathloss of mmW signals [25].
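As a concrete illustration of the Rayleigh distance criterion mentioned above, the following sketch computes the far-field boundary 2D²/λ for a uniform linear array; the 26 GHz carrier and the 64-element half-wavelength spacing match the simulation setup described later in the paper, while the code itself is only an illustrative aid, not part of the authors' simulator.

```python
# Illustrative sketch (not the authors' code): Rayleigh (Fraunhofer) distance
# 2*D^2/lambda for a 64-element uniform linear array at 26 GHz.
c = 3e8                          # speed of light, m/s
f = 26e9                         # carrier frequency, Hz
lam = c / f                      # wavelength, ~11.5 mm

n_elements = 64
spacing = lam / 2                # half-wavelength element spacing
D = (n_elements - 1) * spacing   # array aperture, ~0.36 m

rayleigh = 2 * D**2 / lam        # far-field boundary, ~23 m
print(f"aperture D = {D:.3f} m, Rayleigh distance = {rayleigh:.1f} m")
```

Any receiver closer than roughly 23 m to such an array therefore sits in the radiating near field, which is why the model has to account for spherical wavefronts.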
Since the performance of mmW M-MIMO systems is greatly related to the propagation environment, channel models of mmW M-MIMO systems are very necessary in system design. Even though many models such as WINNER II [26], WINNER+ [27], COST 2100 [28], IMT-A [29], and 3D SCM [30] have been presented for conventional MIMO systems, these channel models cannot sufficiently cover the new emerging mmW M-MIMO technology for 5G systems. In recent years, many efforts have been made to extract the channel behavior of mmW and M-MIMO systems. The birth-death behavior of the clusters across the array aperture was observed in the measurement-based channel model presented in [39], where a virtual 40×40 planar antenna array was used at the Tx side. Indoor M-MIMO channel measurements at the frequencies 11, 16, 28, and 38 GHz were conducted in [40] with a large virtual uniform rectangular array. Furthermore, many geometry-based stochastic models (GBSMs) have been presented to model massive MIMO channels with new features [18], [41]-[43]. The METIS channel model is a map-based model in which the layout of the propagation environment is predefined and the channel coefficients are computed based on it [43]. In [44], a 3D vehicular M-MIMO channel model has been proposed in which a spherical wavefront with non-stationary conditions is considered to statistically derive the channel parameters. Another channel model has been proposed in [45], where far-field (FF) signals are modeled by plane waves and near-field (NF) signals are modeled with spherical waves. Some cluster-based channel models are summarized in Table I.

In this paper, a novel 3D hybrid channel model is proposed for wireless communication systems with the capability of simulating M-MIMO mmW channels. Here, the modeling procedure consists of two successive deterministic and stochastic modes. The combination of both modes forms the final channel model. In the deterministic mode, all surfaces, such as walls, floors, ceilings, ramps, etc., are defined by their equivalent planes. These planes are imported into the simulator in the form of rectangles along with the positions of their four corners. Then, the line-of-sight (LoS) and reflection paths between the Tx array antenna elements and the Rx users of the M-MIMO system are extracted, and image theory is utilized to find the first and second order reflection paths. If a path is obstructed by a surface, the transmission phenomenon is also considered in the calculations. In the stochastic mode, those parts of the channel that cannot be modeled in the deterministic mode due to their geometric complexity are defined based on the cluster concept. Accordingly, the MPCs that arrive at an Rx user with similar delay and AoA features make up clusters. By means of appropriate statistical random distributions, the MPCs of the clusters are extracted and their characteristics, such as phase shifts, intra-cluster delays, and their positions within each cluster, are modeled. The channel impulse response (CIR) is obtained from the combination of the two deterministic and stochastic modes. Other main characteristics of the channel, such as delay spread, AoA, and angle of departure (AoD), are then extracted. The proposed model not only has high accuracy but also takes little computational time to model the whole M-MIMO channel. To evaluate this model, simulation results are compared with some measurements reported in [46].
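To make the image-theory step of the deterministic mode concrete, the following sketch mirrors a transmitter across an infinite plane and finds the specular reflection point as the intersection of the image-to-receiver segment with that plane. It is a minimal illustration under simplifying assumptions (a single flat wall described by a point and unit normal, no bounds checking against the wall's rectangle), not the authors' implementation.

```python
import numpy as np

def mirror_point(p, plane_point, n):
    """Mirror point p across the plane through plane_point with unit normal n."""
    return p - 2.0 * np.dot(p - plane_point, n) * n

def reflection_point(tx, rx, plane_point, n):
    """First-order specular reflection point on an (infinite) plane via image theory.

    The Tx is mirrored across the plane; the reflection point is where the
    straight segment from the image to the Rx crosses the plane. Returns None
    when the geometry yields no valid crossing.
    """
    img = mirror_point(tx, plane_point, n)
    seg = rx - img
    denom = np.dot(seg, n)
    if abs(denom) < 1e-12:             # segment parallel to the plane
        return None
    t = np.dot(plane_point - img, n) / denom
    if not 0.0 < t < 1.0:              # crossing must lie between image and Rx
        return None
    return img + t * seg

# Example: a wall in the plane x = 0 with normal along +x.
tx = np.array([2.0, 1.0, 1.5])
rx = np.array([3.0, 4.0, 1.5])
print(reflection_point(tx, rx, np.zeros(3), np.array([1.0, 0.0, 0.0])))
# -> [0.  2.2 1.5], the specular point on the wall
```

Second-order reflections follow by mirroring the image point across a second plane, and a path would be kept only when each reflection point lies within its wall's rectangle and the connecting segments are unobstructed.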
The major contributions of this paper are summarized as follows. The proposed model can be applied to a wide range of propagation channels. Since the modeling procedure is performed in two successive deterministic and stochastic modes, it can adapt to real propagation environments despite its simplicity of implementation. Also, the non-stationary property of the channel along the M-MIMO antenna array can be modeled.

The following sections of this paper are organized as follows. Section 2 describes the fundamentals of the proposed channel model for non-stationary M-MIMO channels. Theoretical and mathematical details of the proposed channel model are explained in Sec. 3. The scenario description is presented in Sec. 4. Some simulation results and discussions are given in Sec. 5. Finally, conclusions are presented in Sec. 6.

II. DESCRIPTION OF THE CHANNEL MODEL
At the first step, the geometrical boundaries of the propagation environment, such as walls, ceiling, and floor, are defined and their electromagnetic properties are specified. Furthermore, the Tx and Rx antenna positions and their characteristics, such as radiation pattern, array element configuration, and polarization, are defined. Other parameters such as frequency and transmit power are also specified. The desired channel parameters are defined at this step to be extracted at the end of the simulation procedure. At the next step, channel simulation is performed according to the predefined propagation environment and simulation strategy. To overcome the complexity of the channel, this step is divided into the two deterministic and stochastic modes mentioned before.

A. Deterministic modeling
Deterministic modeling considers those parts of the channel that can be defined as planes. If these parts are located in the FF region, the FF approximation is used to compute the amplitude and phase shift of each path. Otherwise, NF conditions are considered in the calculations. The 3D ray tracing algorithm is used at this stage to find the propagation paths between the Tx and Rx. Two important propagation phenomena (reflection and transmission) are considered in deterministic modeling. According to Fig. 1, the reflection points on each wall are obtained by utilizing image theory. The incident angle of each path and the electromagnetic characteristics of the obstacles defined in the first step are used to calculate the Fresnel reflection and transmission coefficients according to the polarization. These reflected and transmitted paths are considered in calculating the intensity of the received signal. The whole procedure is carried out between each Tx element and all users. The algorithm continues until all defined deterministic parts have been checked for the existence (or non-existence) of a path between each Tx antenna and every user. Then, the algorithm terminates and switches to stochastic modeling (if necessary).

B. STOCHASTIC MODELING
Generally, deterministic modeling of the whole channel is very complicated for M-MIMO systems. To overcome this complexity, a cluster-based stochastic modeling is proposed to complete the previous step and improve the model accuracy. The cluster-based model is shown in Fig. 2. In this model, those parts of the channel that are modeled stochastically are mapped by clusters. The stochastic modeling is based on the clustering behavior of the propagation channel, since the MPCs reach the Rx in the form of bundles of rays with similar properties such as delays and AoAs. The stochastic parts of the channel are defined as shown in Fig. 3a
and are called interacting objects (IOs). These interacting objects are fences, crowds of vehicles, vegetation, and so on, which cannot be modeled in the deterministic mode due to the complexity of their geometry and the lack of conditions for utilizing the ray tracing algorithm. The physical boundaries of these parts are then determined, as shown in Fig. 3b. In order to simplify the calculations, equivalent spheres are embedded in the specified area instead of the IOs; these play the role of meshing in the calculations of the stochastic mode. The configuration of the equivalent spheres can have a uniform or non-uniform structure, similar to that shown in Fig. 3c. Then, the space around the Rx is divided into cluster cells as shown in Fig. 3d. If a cluster cell has a physical intersection with the equivalent spheres, it becomes an active cluster cell. Several active cluster cells are shown in Fig. 3d. The MPCs are assigned to the active cluster cells as shown in Fig. 3e. The phenomena of reflection and transmission occur in the cluster; other components are neglected in the calculations due to their low amplitude level. All MPCs experience extra phase shifts and delays in the cluster that need to be considered. The phase shift of each MPC can take any value in the interval [0, 2π). A uniform distribution is proposed to model the phase shift behavior of each MPC. The number of MPCs in each cluster is modeled by a Poisson distribution with a mean value of 10. An exponential distribution is used to model the intra-cluster delay time, while the directions of departure and arrival (DoD and DoA) are modeled by wrapped Gaussian or von Mises distributions [18], [49]. The mean value of the exponential distribution is chosen according to the physical size of the clusters, within which the ray strength is reduced significantly, and is here considered to be about 2.5 ns. These distributions are summarized in Table II.

C. COMBINATION OF TWO STAGES
As described in the previous subsections, the proposed channel model is executed in two stages. When both the deterministic and stochastic simulations are completed, the results are post-processed. For this purpose, all MPCs received by the user are sorted according to their delays. The multipath delay axis is quantized into equal time delay segments called delay bins. Each bin has a time delay width equal to Δτ = τ_max/K for k = 1 to K, where K represents the total number of equally spaced bins and τ_max is the maximum delay of the MPCs. Any MPC received within the kth bin is represented by a single resolvable delay τ_k. All MPCs within the delay bin τ_k are summed up together to obtain the signal strength in that bin. Then, the desired characteristics of the channel are extracted by processing the MPC data. A general flowchart of the proposed channel model algorithm is illustrated in Fig. 4.

III. THEORETICAL DETAILS OF THE PROPOSED CHANNEL MODELING ALGORITHM
The MIMO channel is described by a matrix H(τ) of size N×M, where the BS is equipped with M antenna elements and serves N single-antenna users. Each entry of the matrix is a superposition of both deterministic and stochastic impulse responses. If the channel can be considered noiseless, the input-output relationship in the time domain can be expressed by

y(t) = H(τ) * x(t),   (1)

where x(t) is an M×1 vector of transmitted signals, y(t) is an N×1 vector of received signals, and the operator (*) denotes convolution. The channel matrix is given by

H(τ) = Σ_{k=1}^{K} H_k δ(τ − τ_k),   (2)

where H_k is the CIR of the kth time bin, denoted by τ_k.

A. Mathematic theory of the deterministic algorithm
In the deterministic algorithm, the normalized unit vectors of the surfaces defined in Sec. II are used to determine the incident angles and the Fresnel reflection and transmission coefficients of each path.
B. Mathematic model of the stochastic algorithm
Interacting objects modeled in the stochastic mode are replaced by equivalent spheres. The configuration of the equivalent spheres is similar to the structure of the IOs. Then, active clusters are identified through the intersection detection algorithm between the equivalent spheres and the clusters. Although the equivalent spheres can have variable centers and radii, the following limitations must also be considered:
• Equivalent spheres are only defined in the area of the IOs,
• Depending on the distance from the Rx and the dimensions of the cluster cells, the radius of the equivalent spheres is variable, as shown in Fig. 3,
• Each equivalent sphere should not occupy more than one cluster cell,
• The equivalent sphere diameters should not be larger than the IO dimensions.

According to the cluster parameters shown in Fig. 5, the following condition is considered to determine the radius r_i of each equivalent sphere:

λ/2 < r_i ≤ (1/2) min(ΔR_i, R_i Δφ_i, R_i Δθ_i),   (3)

where the range and the azimuth and elevation angular extents of the ith cluster cell are denoted by R_i, Δφ_i, and Δθ_i, as shown in Fig. 5. The dimensions of the IOs are assumed to be larger than the wavelength in the calculations. Thus, the minimum radius of the equivalent spheres is greater than half of the wavelength. The delay of each MPC is calculated based on the length of the path that the MPC travels, while its angle of arrival at the receiver depends on the position of the MPC in the cluster. In this way, in each cluster, a number of MPCs may reach the receiver. Therefore, according to the origin of the rays generated within each cluster, the vector from the corresponding point in the cluster to the receiver is obtained. Accordingly, the arrival angles are calculated in both the azimuth and elevation planes. It should be noted that the scatterers are positioned based on a uniform statistical distribution within each cluster. The phase of each path is also obtained according to the path length of each MPC, in addition to the phase that is added after interaction within each cluster (which is modeled using a uniform distribution). As for the amplitude of each MPC, first the path loss is obtained according to the Friis transmission equation, and then the attenuation coefficient of each cluster is applied according to the propagation phenomenon that occurs in each interaction. Since there are closed-form mathematical relations for the reflection and transmission coefficients, these coefficients are taken into account in calculating the amplitudes of these MPCs. Although these two phenomena may occur in many cases, the phenomena of diffraction and scattering are also very likely. However, diffraction and scattering do not have simple, closed-form mathematical equations, and, in most cases, the power levels of such components are small enough to be neglected in the calculations.

C. Integration of deterministic and stochastic calculations
The CIR in (1) corresponding to each bin is calculated to determine the power delay profile (PDP). Accordingly, the CIR of the kth interval can be presented as

h(τ_k) = Σ_{p=1}^{N_D,k} a_p e^{jφ_p} δ(τ − τ_k) + Σ_{q=1}^{N_S,k} a_q e^{jφ_q} δ(τ − τ_k),   (7)

where N_D,k and N_S,k are the numbers of deterministic and stochastic MPCs with delay τ_k in the kth interval, respectively. By quantizing the CIR, the received power from the mth Tx antenna to the nth user can be expressed as

P_{mn} = P_t Σ_{k=1}^{K} |h_{mn}(τ_k)|²,   (8)

where P_t is the transmit power. Mismatch and cable losses are neglected in (8). Since the AoAs of the MPCs in the respective time bin vary relative to each other, the AoA is defined individually for each MPC in the integration unit.
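The following sketch illustrates the stochastic stage as described above: for one active cluster, the number of MPCs is drawn from a Poisson distribution (mean 10), intra-cluster excess delays from an exponential distribution (mean 2.5 ns), and interaction phase shifts uniformly over [0, 2π); the MPCs are then quantized into delay bins to form a PDP. The variable names, the placeholder amplitude distribution, and the 1 ns bin width are illustrative assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_mpcs(cluster_delay_s, mean_count=10, mean_excess_s=2.5e-9):
    """Draw the stochastic MPCs of one active cluster (distributions per Table II)."""
    n = rng.poisson(mean_count)                      # number of MPCs in the cluster
    delays = cluster_delay_s + rng.exponential(mean_excess_s, n)
    phases = rng.uniform(0.0, 2.0 * np.pi, n)        # extra interaction phase shift
    amps = rng.rayleigh(0.1, n)                      # placeholder amplitudes (assumption)
    return amps, phases, delays

def power_delay_profile(amps, phases, delays, bin_width_s=1e-9):
    """Quantize MPCs into equal delay bins and coherently sum contributions per bin."""
    if delays.size == 0:
        return np.zeros(1)
    k = np.floor(delays / bin_width_s).astype(int)   # bin index of each MPC
    h = np.zeros(k.max() + 1, dtype=complex)
    np.add.at(h, k, amps * np.exp(1j * phases))      # sum within each delay bin
    return np.abs(h) ** 2                            # PDP: power per resolvable delay

amps, phases, delays = cluster_mpcs(cluster_delay_s=30e-9)
pdp = power_delay_profile(amps, phases, delays)
```

The RMS delay spread discussed next can then be computed directly from such a PDP as the square root of the second central moment of the delay distribution.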
The root-mean-squared (RMS) delay spread can be calculated using the PDP. The RMS delay spread characterizes the dispersion of the delays and can be expressed as follows [46]:

τ_RMS = sqrt( Σ_l P_l τ_l² / Σ_l P_l − ( Σ_l P_l τ_l / Σ_l P_l )² ),   (9)

where τ_l and P_l are the delay and received power of the lth path between the mth Tx antenna and the nth user, respectively. Depending on the position of each antenna (taking the phase center of each antenna to be its geometric center), the ray tracing algorithm is applied separately to all of the antenna elements.

IV. SIMULATION ENVIRONMENT
The wave propagation simulator is developed in MATLAB. The simulator has a main engine that consists of several sub-functions. These sub-functions calculate the different parts of the channel. Figure 6 shows the general structure of the simulator program. The program includes a section for the initial definitions of the IOs in both the deterministic and stochastic modes and of parameters such as the operating frequency, transmit power, and antenna polarization; a section for the calculation of the arrival and departure angles of each MPC, its attenuation coefficient, and its phase; and, finally, a section that displays the results. The elapsed time of the program strongly depends on the details of the propagation environment. However, since the MPCs are obtained through a computational procedure rather than search algorithms, modeling with this method is much faster. All simulations are conducted for the Center Hall in the JiuJiao Teaching Building, Beijing Jiaotong University, China, presented in [46]. The obstacles in the hall are assumed to be fixed, without any movement. The antenna array at the Tx side includes 64 elements in a uniform linear configuration parallel to the ground. The carrier frequency is set to 26 GHz in our simulations. The distance between successive antenna elements is equal to half of the wavelength. The transmit power is assumed to be 1 W (0 dBW) in the simulations. All Tx and Rx antennas have isotropic radiation patterns. The simulations begin with the initial parameters given in Table III.

V. RESULTS AND ANALYSIS
First of all, according to Fig. 8, the walls, floor, and ceiling are defined in the deterministic mode. The Tx has 64 antenna elements to represent a type of M-MIMO system. The LoS and reflection MPCs are found at this stage. The first and second order reflections are considered by applying image theory to all paths between each Tx array antenna element and the Rx user. If any MPC is cut off by a surface, the transmission coefficient is also taken into account. Then, the remaining parts of the environment, such as the hall seats, are defined in the stochastic mode, as shown in Fig. 9. The PDP between the first antenna element of the Tx array and the Rx is illustrated in Fig. 10. The measurement data extracted from the results reported in [46] are used in this figure for comparison with our simulation. As described by the authors in [46], their measurement system includes a virtual antenna array at the Tx, while a single antenna is used at the Rx. Some more details of the measurement system are described in [46]. It can be seen that the peaks and nulls of the simulation model substantially follow the measurement results. The RMS delay of the proposed channel model is compared in Fig. 11 with the measurement results reported in [46]. As can be seen, the RMS delays fluctuate across the array antenna of the M-MIMO system because the MPC powers change along the axis of the array. The AoAs of the MPCs in the elevation and azimuth planes are shown in Fig. 13a
and Fig. 13b, respectively, for the first array element. The different kinds of MPCs are shown with various markers. It can be observed in Fig. 13a that the accumulation of MPCs in the elevation plane is around 100°, as expected. This is because the Rx antenna is located at a higher level in the middle of the clusters. Furthermore, the MPCs within each cluster have a slight deviation in AoA in the elevation plane. On the other hand, since the Rx antenna is located amid the clusters, the AoAs in the azimuth plane spread over a wide range of angles, as seen in Fig. 13b. The AoAs along the array axis in the elevation and azimuth planes are also illustrated in Fig. 14a and Fig. 14b, respectively.

VI. CONCLUSION
A novel channel model for M-MIMO over mmW has been proposed in this paper for 5G networks. The proposed model is divided into two regions, deterministic and stochastic. In the deterministic mode, the channel is defined for the simulator in detail and, by applying the ray tracing algorithm, all propagation paths between the Tx and Rx are determined. Then the intensity and other characteristics of the received signals are obtained. However, since the M-MIMO channel is generally a very complex environment, the whole channel cannot be simulated in the deterministic mode. Thus, the second mode of the channel modeling algorithm is applied to characterize the other objects located in the propagation environment. At this stage, those parts of the channel that cannot be modeled in the deterministic mode are simulated by physical clusters through appropriate statistical distributions. This channel model has a very low computational time in comparison with fully deterministic models, since it uses image theory instead of meshing the propagation environment to find the paths between Tx and Rx. In addition, it employs statistical distributions to model the complex parts of the channel, reducing computational time and memory. The accuracy of the model is high because the main structures of the propagation channel are modeled in the deterministic mode and the other, complex parts of the channel are mapped with corresponding clusters. This approach has been applied to an indoor environment for which field measurements have been reported. Simulation results have then been compared with the measurement results to evaluate the accuracy of the proposed algorithm. The comparison between simulation and measurement shows that the overall behavior of the simulated channel follows the measurement results.
2021-05-12T05:01:07.525Z
2021-04-10T00:00:00.000
{ "year": 2021, "sha1": "29e69ad26fdccdab9933c0dde465115896de3077", "oa_license": "CCBY", "oa_url": "https://aemjournal.org/index.php/AEM/article/download/1349/530", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "29e69ad26fdccdab9933c0dde465115896de3077", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
267645436
pes2o/s2orc
v3-fos-license
From use cases to business cases: I-GReta use cases portfolio analysis from innovation management and digital entrepreneurship models perspectives

Abstract
This study provides a detailed exploration of how innovation management and digital entrepreneurship models can help transform technical use cases in smart grid contexts into viable business cases, thereby bridging the gap between technical potential and market application in the field of energy informatics. It focuses on the I-GReta project Use Cases (UCs). The study employs methodologies like Use Case Analysis, Portfolio Mapping of Innovation Level, Innovation Readiness Level, and the Tech Solution Business Model Canvas (TSBMC) to analyse and transition from technical use cases to viable business cases. This approach aligns technological solutions with market demands and regulatory frameworks, leveraging digital entrepreneurship models to navigate market challenges and foster energy management, sustainability, and digitalization.

Introduction
The global shift towards renewable energy sources, driven by the urgency to mitigate climate change and the depletion of fossil fuels, leads to a growing need for energy systems that are not only flexible but also possess substantial energy storage capabilities (Gielen et al. 2019). As renewable sources like solar and wind are inherently intermittent, the ability to store energy during peak production times and release it on demand is critical to maintaining grid stability and ensuring a consistent energy supply (Denholm and Mai 2019). This transition is further motivated by the aim to enhance energy security, reduce greenhouse gas emissions, and foster economic growth through sustainable energy industries (Jacobson et al. 2017). The development of such flexible energy systems is pivotal in supporting the integration of renewables into the existing infrastructure, making storage technologies and smart grid solutions essential components of the future energy landscape (Burrett et al. 2009). Aligning the technical and functional aspects of these systems with their business cases is paramount, ensuring that the market alignment complements the technical feasibility, thus enabling scalable, economically sustainable solutions that address market needs and regulatory compliance (Gregory 2015). This strategic alignment underpins the successful integration of renewable energy into the grid, leveraging technical innovations to meet real-world demand and supporting the broader goals of energy policy and economic development (Kober et al. 2020).
The selection of the I-GReta project for a Use Cases portfolio analysis from the perspectives of innovation management and digital entrepreneurship was motivated by its unique positioning at the intersection of advanced energy technologies and market-oriented solutions (I-GReta website 2023). As a project, I-GReta focuses on smart grid and renewable energy innovation, which results in a rich and complex array of use cases. The I-GReta project aims to develop strategies for the planning and operation of flexible energy systems, utilizing storage capacities and integrating a high proportion of renewable energy sources into regional and local power grids. This integration involves demand flexibility, building-level forecasting, and large-scale optimization for controlling electrical, heating, and cooling consumption. The project envisions creating a digitalized and decentralized energy system, connecting trial sites across four countries through an ICT platform that incorporates FIWARE components. FIWARE is an open-source initiative that aims to provide a universal set of standards for developing smart applications in various sectors (FIWARE website 2023). It is designed to ease the development of smart solutions by offering a curated framework of cloud-computing components that can be assembled like building blocks in a construction set. FIWARE's mission is to drive the adoption of open standards that ease the development of smart applications in multiple vertical sectors. I-GReta's key stakeholders are occupants, building owners, and energy system operators, who are invited to actively participate in a Virtual Smart Grid (VSG) facilitated by this platform. One of the research dimensions is the trading of storage capacities through the platform, with individual storage solutions offering significant value and impact at a local level. This setup presents an opportunity for exploring various digital entrepreneurship models, particularly in the domains of energy trading, storage management, and energy informatics.

The integration of digital entrepreneurship models serves as an intermediary step in transitioning from use case development to business case realization, ensuring that the technological advancements are not only technically robust but also primed for market acceptance and success. This alignment is critical in bridging the gap between innovation and market adoption, providing a framework within which technologies can be evaluated against their commercial potential and market fit. The analysis of digital entrepreneurship models offers insights into potential revenue streams, business scalability, and user adoption strategies, thereby enhancing the value proposition of the use cases to stakeholders and investors. Including this analysis ensures that R&D initiatives are designed with a clear understanding of market dynamics and customer needs, ultimately fostering a smoother transition to viable business cases. This approach is underpinned by literature that emphasizes the importance of integrating market analysis in the early stages of technological development, such as the work of Chesbrough on open innovation (Chesbrough 2003) and the business model canvas by Osterwalder and Pigneur (2010). Both studies are instrumental in mapping out the business potential of new technologies.
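As a minimal illustration of how trial-site data could flow through a FIWARE-based platform such as the one described above, the sketch below registers a battery-storage entity with an Orion Context Broker using the NGSI-v2 REST API; the broker URL, entity id, and attribute names are hypothetical placeholders, and the snippet is not taken from the I-GReta platform itself.

```python
import json
import urllib.request

# Hypothetical Orion Context Broker endpoint (FIWARE NGSI-v2 API).
BROKER = "http://localhost:1026/v2/entities"

# Illustrative entity for a storage asset offering flexibility to the VSG.
entity = {
    "id": "urn:ngsi-ld:BatteryStorage:site-AT-001",   # placeholder identifier
    "type": "BatteryStorage",
    "stateOfCharge": {"type": "Number", "value": 0.62},
    "availableFlexibilityKW": {"type": "Number", "value": 15.0},
}

req = urllib.request.Request(
    BROKER,
    data=json.dumps(entity).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)   # 201 Created on success
```

Once such entities are published, any trial site or service subscribed to the broker can read or react to the shared context data, which is what enables the standardized cross-national data exchange discussed later.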
The research question of this study is "What role do innovation management and digital entrepreneurship models play in enhancing the development of business cases from use cases in the I-GReta R&D portfolio?". It centres on understanding the transformative impact of strategic innovation and digital business strategies on the progression from technical use cases to commercially viable business cases within the smart grid context. The research objectives are multi-faceted:
• Mapping Use Cases: To map out the I-GReta UCs portfolio, ensuring the alignment of technological innovations with the structural requirements of smart grid architectures.
• Conducting a Portfolio Analysis for Innovation Potential: To conduct an I-GReta UCs portfolio analysis, informed by (Vandaele and Decouttere 2013), assessing the innovation potential of the I-GReta UCs from both technological and market perspectives.
Through these objectives, the study seeks to illuminate how innovative management practices and digital entrepreneurship can effectively bridge the gap between the technical potential of smart grid solutions and their practical market applications.

To effectively prioritize project activities within the I-GReta initiative, consortium members have decided to underpin all upcoming technical and user-focused developments with a suite of well-defined Use Cases (UCs). These use cases are tailored to the specific needs of the partner organizations overseeing field trials in Austria, Romania, Sweden, and Germany. Collectively, the project has documented 13 use cases within the Smart Grid UCs Repository, conforming to the IEC 62559-2:2015 standard (Kuchenbuch et al. 2023).

Table 1 lists the various UCs along with their field trial locations, spanning Austria, Romania, Sweden, and Germany. Use cases like "Flexible Charging Tariffs" and "Smart Home" originate from Austria, focusing on dynamic pricing and home automation, respectively. Romania's use cases involve investment strategies and the optimization of prosumer operations, while Sweden's use cases emphasize building energy management and solar energy utilization. Germany's use cases include power shifting and sector coupling, demonstrating the project's comprehensive approach to enhancing energy efficiency and integration across different levels of the energy supply chain. The initial phase of the project entails gathering an array of potential use cases, encompassing both thermal and electrical flexibilities, as depicted in Fig. 1. These are subsequently categorized into three distinct sectors: Building, Community, and Public Energy System. The Building sector focuses on flexibilities within a structure's thermal or electrical system. The Community sector and the Public Energy System sector address flexibilities aggregated beyond the individual building, at the community and public grid levels, respectively.

Analysing UCs individually is essential to grasp the distinct complexities and opportunities each presents, ensuring they meet their unique technical specifications, regulatory requirements, and stakeholder expectations (Bittner and Spence 2003). When considered as a collective portfolio, these use cases enable a strategic overview, highlighting how they align with overarching organizational goals and identifying potential synergies, which can lead to more efficient resource utilization and improved risk management strategies (Cassiman et al. 2005).
Furthermore, a portfolio perspective encourages innovation through the cross-fertilization of ideas and ensures the suite of use cases is robust against market fluctuations and responsive to customer needs, ultimately fostering a holistic and resilient approach to smart grid development (Curley and Salmelin 2017). Moreover, the adoption of FIWARE and the implementation of a cross-national platform for energy systems facilitate standardized data exchange and interoperability across use cases, enhancing the overall efficacy of the smart grid ecosystem.

For simplification and focused study within this paper, we have distilled the full spectrum of the I-GReta UCs portfolio into four generalized use cases. The selected UCs are: UC1, "Upscaling of Battery Storage," which explores the enhancement of energy storage capabilities; UC2, "Energy Flexibility, Renewable Sources and Building Energy Management System," which involves integrating smart technologies for building management systems; UC3, "Planning of Energy Communities for Flexibility Scenarios," which aims to develop cooperative models for energy sharing and management; and UC4, "Bi-Directional Charging," which tests the capacity for electric vehicles to contribute to grid stability through bidirectional energy flows. These cases represent the diverse yet interconnected facets of the I-GReta project and provide a comprehensive overview for analysis.

The paper is structured as follows. It begins with an introduction that sets the stage for discussing renewable energy, smart grids, and the I-GReta project's use cases. The background section reviews relevant literature and theoretical frameworks in energy technology research, particularly innovation management, R&D portfolio analysis, and technology-market alignment techniques. Methodologically, the paper describes the transition from the I-GReta project use cases to business cases, employing various analytical tools. It then delves into an in-depth analysis of these use cases, their market readiness, and innovation levels, followed by a detailed look at the application of a tailored business model canvas. The paper culminates with a synthesis of digital entrepreneurship models suited to these use cases, a critical discussion of the findings and methodologies, and a conclusion summarizing the project's key contributions to energy management and sustainability.
Background
In the realm of energy technology R&D projects, the alignment of use case and business case development from the conceptual level to the implementation phase is a topic that has garnered significant attention in the literature. This is particularly relevant in the context of integrating renewable energy sources and advancing smart grid technologies. At the conceptual stage, the literature emphasizes the importance of feasibility studies and stakeholder analysis. As noted by (Smith and Woodworth 2012), early-stage assessments should consider technical feasibility, market demand, and regulatory landscapes to ensure that proposed solutions are viable. This stage often involves extensive modelling and simulation to predict system behaviour and potential challenges. The use of Technology Readiness Levels (TRLs), as discussed by (Mankins 1995), is a common approach in energy grid projects to assess the maturity of a technology. This method helps in systematically progressing from concept (low TRL) to implementation (high TRL), ensuring that each stage of development is grounded in reality and technologically feasible. A more recent study on the assessment of district heating and cooling systems, which evaluates their technical feasibility, market demand, and regulatory landscapes, also highlights the need for early-stage alignment between UC and BC (Yang et al. 2022).

As projects advance, the focus shifts towards market analysis and business modelling. Priem highlighted the need for developing comprehensive business models that align with the technology's capabilities and market needs (Priem et al. 2018). This includes exploring different revenue streams, cost structures, and market positioning strategies. The literature also underscores the significance of regulatory and policy considerations in the development process. As argued by (Reyna and Chester 2017; Rajavuori and Huhta 2020), navigating the complex regulatory environment is crucial for the successful deployment of energy grid technologies. This involves understanding and influencing policy decisions that affect market entry and technology adoption. Finally, stakeholder engagement and the formation of strategic partnerships are critical in bridging the gap between use cases and business cases. According to (Meijer et al. 2019), collaboration with industry partners, regulatory bodies, and end-users helps in tailoring solutions to market needs and enhances the likelihood of successful technology adoption and implementation. The Smart Grid Architecture Model (SGAM) is increasingly recognized in the literature as a pivotal framework for aligning the use case and business case development process in energy grid R&D projects, from conceptualization to implementation (Albano et al. 2014; Panda and Das 2021). SGAM provides a multilayered, structured approach for the design and development of smart grid technologies. It encompasses various aspects of the smart grid, from physical components to business and market considerations, allowing for a holistic view of project development. According to (Mashlakov et al. 2019),
SGAM facilitates interdisciplinary collaboration by providing a common language and reference model for stakeholders from different domains, including engineers, business developers, and policymakers. This is crucial for ensuring that technical solutions are aligned with market needs and regulatory frameworks. A literature review by (Rip and Kemp 1998) emphasizes the importance of aligning technical solutions with viable business models, ensuring that the developed technologies are economically sustainable and market-ready. The use case and business case development process under the SGAM model involves transitioning from high-level conceptualization to detailed implementation. This structured approach ensures alignment with strategic objectives and regulatory compliance throughout the development process.

The focus of this study is limited to the Use Case (UC) and Business Case (BC) layers only (Fig. 2). Developing the business layer within the SGAM model involves considerations like stakeholder interests, economic viability, regulatory compliance, and market dynamics (Mashlakov et al. 2019). Important aspects include clear value propositions, scalable business models, and a thorough understanding of the energy market's regulatory landscape. For more in-depth insights, the works of Giordano et al. on the importance of business models in the smart grid sector provide valuable guidance (Giordano et al. 2011). In the context of emerging energy markets, this means overseeing and fostering new technologies, strategies, and business models that can effectively cater to the growing and often unique needs of these markets.

The digitalization of the energy sector, characterized by the integration of advanced technologies such as IoT, big data analytics, and blockchain, has opened new avenues for entrepreneurial ventures (Bumpus 2019). These technologies facilitate the creation of innovative business models like Energy-as-a-Service (EaaS) and peer-to-peer energy trading platforms, which are reshaping traditional energy market structures (Iria and Soares 2023). Digital entrepreneurship, characterized by the utilization of digital tools, platforms, and technologies, plays a pivotal role in innovating business models, products, and services (Bican and Brem 2020). In the context of emerging energy markets, this form of entrepreneurship manifests in diverse forms (Nambisan 2017). Digital entrepreneurship in the energy sector not only accelerates the transition towards more sustainable and efficient energy systems but also fosters economic growth by spawning new market opportunities and business models (Bocken et al. 2014). This is particularly evident in the growing trend of decentralized energy systems, where digital platforms enable consumers to become prosumers, actively participating in energy production and consumption (Parag and Sovacool 2016). Thus, the role of digital entrepreneurship in the energy sector is multifaceted, driving innovation and market restructuring and contributing to the broader objectives of energy sustainability and resilience.
In the context of the SGAM, the application of digital entrepreneurship models facilitates the alignment of technological use cases with market-oriented business cases. Platform-based business models are instrumental in enhancing stakeholder collaboration across different SGAM layers, ensuring the integration of diverse smart grid technologies (Menci and Valarezo 2024). The adoption of agile and lean methodologies aids in the rapid prototyping and iterative development of smart grid solutions, allowing for flexible adaptation to changing market needs (Duc et al. 2019). Furthermore, a user-centric design approach ensures that the technologies developed are marketable and meet end-user requirements (Jin et al. 2017; Shafqat et al. 2021).

The intersection between UC and BC through the lens of digital entrepreneurship models suggests a conceptual bridge facilitated by digital innovation. Digital entrepreneurship models can act as a conduit between the technical and functional aspects of UC and the strategic and economic dimensions of BC (Wang and Shao 2023; Satalkina and Steiner 2020). At this intersection, digital entrepreneurship models apply innovative digital tools and platforms to translate the technical solutions and functionalities defined in UC into market-ready products and services within BC (Bumpus 2019). This includes leveraging digital platforms to define roles and responsibilities, utilize data from smart grid technologies to inform policies and regulations, and reshape traditional business models to align with new market demands. By doing so, digital entrepreneurship ensures that technical innovations are not only aligned with strategic business goals but are also designed with market adoption and regulatory compliance in mind. This intersection is pivotal for ensuring that smart grid solutions are economically viable, socially acceptable, and technologically feasible, fostering a seamless transition from concept to commercialization.

This study highlights the essential integration of renewable energy sources and smart grid technologies in energy R&D projects. It emphasizes early-stage assessments involving technical feasibility, market demand, and regulatory landscapes. However, existing studies often lack a holistic approach that combines these elements systematically. The study addresses this by advocating for the Smart Grid Architecture Model (SGAM) and digital entrepreneurship models. These models not only align technical and market considerations but also foster flexibility and user-centricity in smart grid solutions. The study thus fills a gap in the existing literature by providing a more integrated and dynamic framework for energy technology development. The summary of the background is presented in Table 2, where the key elements and a critical comparison with existing frameworks are highlighted.

Table 2 (excerpt). Intersection of UC and BC through digital entrepreneurship: the study suggests that digital entrepreneurship models act as a bridge between the technical-functional aspects of UC and the strategic-economic dimensions of BC.
Methodology

The study methodology for transitioning from use cases to business cases within the I-GReta UCs portfolio involves an approach that incorporates various innovation management and business analytical tools. It begins by deploying a UCs analysis. Following this, a Portfolio Analysis informed by (Vandaele and Decouttere 2013) assesses the innovation potential of the I-GReta UCs from a market and technological perspective. The study then evaluates the innovation readiness levels, which include the Technology Readiness Level (TRL), Market Readiness Level (MRL), and Regulatory Readiness Level (RRL), to estimate the maturity of each use case. In the framework of the project, we have developed a Tech Solution Business Model Canvas (TSBMC) tailored for the I-GReta project, which analyses the solution's functionality, infrastructure, and security features, alongside its unique value proposition and the stakeholders involved, including beneficiaries, operators, and potential synergies with SMEs and start-ups.

The outcome of the previously described analysis of the intersections between market and technology is the proposal of digital entrepreneurship models and specific business cases for each use case within the I-GReta portfolio. The aim of this methodology is to facilitate a strategic move from a technology-centric analysis to a business-centric evaluation. This reflects a multidisciplinary approach, encapsulating technical, economic, and strategic dimensions, and it demands a thorough understanding of both the theoretical frameworks and the practical implications of such an integration within the energy sector. The overall methodology of this study is presented in Fig. 3.

The methodology used in this study is specifically tailored to the needs of the I-GReta project, which focuses on integrating renewable energy sources and advancing smart grid technologies. The use of Use Case Analysis, Portfolio Mapping, Innovation Readiness Level assessment, and the Tech Solution Business Model Canvas (TSBMC) is appropriate for this project because it allows for a comprehensive evaluation of the technical, market, and regulatory aspects essential in the energy sector. The benefits of this methodology include its holistic approach, which integrates multiple dimensions of analysis, ensuring that the solutions are viable across technical, market, and regulatory landscapes. This is particularly important for the I-GReta project, which operates in a complex and rapidly evolving field. The selection of methods occurred under a specific work package in which both technical and business experts from the I-GReta consortium proposed various tools and methods. The final selection of methods was based on their clarity and comprehensibility to both groups. This collaborative approach ensured that the chosen tools were accessible and understandable to all parties involved. However, potential improvements could include incorporating more quantitative data analysis to complement the qualitative assessments, providing a more robust empirical foundation for the conclusions. Additionally, case studies or pilot projects could be integrated into the methodology to test theories and models in real-world scenarios, thereby enhancing the practical applicability of the research findings.
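To make this sequence concrete, the following is a minimal sketch of the evaluation pipeline as a chain of analysis stages applied to each use case. The record fields, stage function, and placeholder readiness values are illustrative assumptions, not the I-GReta consortium's actual tooling.

```python
# Minimal sketch of the UC-to-BC evaluation pipeline described above.
# Field names, stages, and values are illustrative, not project tooling.

def run_pipeline(use_cases, stages):
    """Apply each analysis stage in order to every use case:
    UC analysis -> portfolio mapping -> IRL -> TSBMC -> entrepreneurship models.
    Each use case is a plain dict that stages enrich with their results."""
    for uc in use_cases:
        for stage in stages:
            stage(uc)
    return use_cases

def assess_irl(uc):
    # Placeholder readiness levels; a real assessment would follow Fig. 5.
    uc["readiness"] = {"TRL": 7, "MRL": 5, "RRL": 4}

ucs = run_pipeline([{"name": "UC1"}], [assess_irl])
print(ucs[0])
```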
Use case analysis

The standardized Use Case Methodology (UCM), as specified in (Schäfer et al. 2018), is a critical step in enhancing smart grid analysis by providing a structured approach to describing use cases. It relies on detailed narratives to comprehensively capture the essence of each use case. In Step 1 of the Use Case description, according to (Schäfer et al. 2018), each use case is assigned a unique identifier, a specific field of application, and a characteristic name. This step includes version management together with the goal and boundary conditions of the use case. Key performance indicators and necessary conditions, such as assumptions and preconditions, are outlined, with the option to add further classification details like relationships to other use cases or priorities. In SGAM, a UC is a scenario detailing the interaction between entities, such as users and systems, within a smart grid to achieve specific objectives (Albano et al. 2014). Key elements include actors (the interacting entities), scenarios (descriptive narratives of interactions), requirements (technical and functional needs), constraints (regulatory or technological limitations), and expected results (anticipated outcomes or benefits). The I-GReta UCs portfolio employs a UCM approach to ensure that technological solutions are developed with the end-user in mind. This strategy involves creating narratives that encapsulate typical user behaviours, facilitating the understanding of objectives and supporting the development of technical solutions that meet users' needs. This user-centric approach, integrating Use Case Scenarios (UCS), is foundational for aligning technical development with user requirements and market demands. In this study, we limit ourselves to Use Case descriptions, which include the name of the Use Case, the scenario and the technical requirements.
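The UC elements listed above (identifier, field of application, name, actors, scenario, requirements, constraints, expected results) map naturally onto a structured record. A minimal sketch follows; the example values are invented for illustration and are not taken from the actual I-GReta UC templates.

```python
# Structured Use Case description following the SGAM/UCM elements above.
# The example instance is illustrative, not an actual I-GReta UC record.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    uc_id: str                  # unique identifier (UCM Step 1)
    name: str                   # characteristic name
    scope: str                  # field of application
    actors: list = field(default_factory=list)        # interacting entities
    scenario: str = ""          # descriptive narrative of interactions
    requirements: list = field(default_factory=list)  # technical/functional needs
    constraints: list = field(default_factory=list)   # regulatory/technological limits
    expected_results: str = ""  # anticipated outcomes or benefits

uc1 = UseCase(
    uc_id="UC1",
    name="PV-battery charging/discharging analysis",
    scope="Energy storage efficiency",
    actors=["PV system", "battery management system", "grid operator"],
    scenario="Seasonal PV profiles drive battery charge/discharge cycles.",
    requirements=["PV system", "battery + BMS", "automation/communication"],
    expected_results="Assessment of storage efficiency and scalability",
)
```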
Portfolio mapping of innovation level

Portfolio Mapping of Innovation Level, as discussed by (Vandaele and Decouttere 2013), provides a framework for evaluating R&D projects based on technological innovation and market readiness. This methodology categorizes innovations within a portfolio, assessing their position in terms of development stage and market potential. Applying this to the I-GReta UCs portfolio involves evaluating the technological maturity and market viability of each use case. By evaluating UCs based on their technological maturity and market readiness, it facilitates a preliminary understanding of where each UC stands in terms of development and market potential. A template adapted from (Vandaele and Decouttere 2013) was used in our study (Fig. 4).

Innovation readiness level

Understanding the maturity levels of each use case in the I-GReta project is vital not only for individual project assessment but also for ensuring effective cross-use-case communication and data exchange, especially when utilizing a cross-national platform like FIWARE. The readiness of each use case in terms of technology, market, and regulatory aspects influences how well these systems can interact and share data across different national contexts. This impacts the overall efficacy and integration of the smart grid solutions within the project. This holistic view is crucial for achieving seamless interoperability and efficient energy management across borders.

Incorporating IRL analysis with the Portfolio Mapping of Innovation Level methodology offers a comprehensive approach to evaluating UCs in terms of their technological and market readiness. This combined methodological approach enhances strategic decision-making by providing a multifaceted view of a project's maturity, encompassing technology, market, and regulatory readiness aspects (Borgefeldt and Svensson 2022). IRL analysis complements the innovation and market assessments of Portfolio Mapping, allowing for a more nuanced understanding of potential risks, gaps, and the overall readiness of a project. The IRL is dissected into three key dimensions:

• TRL: evaluating the maturity of the technology, from conceptualization to full-scale deployment.
• MRL: analysing the extent to which the technology is prepared to meet market demands and the challenges it might face.
• RRL: indicating the technology's compliance with existing regulations and identifying potential regulatory barriers.

The description of each level of the IRL dimensions is presented in Fig. 5.
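Because the three dimensions are scored on ordinal scales, a simple helper can flag the bottleneck dimension holding a use case back. The shared 1-9 scale assumed below follows the common TRL convention and is a simplification of the level definitions in Fig. 5.

```python
# Illustrative IRL helper: the weakest of the three readiness dimensions
# marks the bottleneck for market entry. The shared 1-9 scale is assumed
# by analogy with TRL; the actual level definitions are given in Fig. 5.
def irl_bottleneck(readiness):
    """readiness: dict such as {"TRL": 7, "MRL": 5, "RRL": 4}."""
    for dim, level in readiness.items():
        if not 1 <= level <= 9:
            raise ValueError(f"{dim} level {level} outside the 1-9 scale")
    dim = min(readiness, key=readiness.get)
    return dim, readiness[dim]

# Example: regulatory readiness is the limiting dimension here.
print(irl_bottleneck({"TRL": 7, "MRL": 5, "RRL": 4}))  # -> ('RRL', 4)
```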
Tech Solution Business Model Canvas

In the context of this study, a synthesis of various business model canvases is undertaken to construct a comprehensive Tech Solution Business Model Canvas tailored for the project's R&D needs. This integration incorporates the holistic business operation perspective from (Osterwalder et al. 2015), the start-up-centric problem-solution focus of (Maurya 2022), and the customer-oriented design of the Value Proposition Canvas (Osterwalder et al. 2015). Additionally, it draws on digital policy considerations, with the canvas detailing aspects such as the solution's functionality, infrastructure, security, value proposition, stakeholders, and the environmental, social, and economic impacts.

Following the individual team activities, a collective discussion round is conducted. This process not only fosters a deeper understanding of each UC's unique and shared attributes but also encourages cross-pollination of ideas, leading to a more integrated and coherent approach to innovation within the R&D portfolio.

Results

This section provides a detailed analysis of the use cases developed within the I-GReta project. Each use case is assessed for its functionality, economic viability, stakeholder engagement potential and innovation readiness.

I-GReta use cases analysis

The I-GReta project encompasses an innovative portfolio of UCs that aim to foster advancements in smart grid and energy management technologies. This analysis delves into the practical applications and technological integrations offered by the project, highlighting the synergy between various components aimed at enhancing energy efficiency, flexibility, and sustainability.

UC1 engages in the analytical observation of photovoltaic (PV) and battery systems to understand their charging and discharging behaviours, with a specific concentration on the efficiency of energy storage within batteries. The approach involves a set of scenarios in which PV profiles under diverse seasonal conditions, such as summer and autumn, are used to illustrate the dynamic behaviours of battery systems as they interact with the power grid. In an empirical progression of these scenarios, a standard 30 kW battery system is experimentally scaled to a more robust 100 kW system, allowing for an assessment of scalability and performance. This use case necessitates the integration of PV systems, battery and battery management systems, alongside the automation systems critical for communication processes, highlighting the technical intricacies required to achieve the desired outcomes in energy storage efficiency. UC1 exemplifies the project's commitment to energy conservation and management. By analysing the charging and discharging patterns of PV-battery systems, this use case explores the potential for improved energy storage solutions, focusing on the efficiency of batteries and their integration into the grid. The application of this use case across different seasonal profiles underlines its adaptability and scalability in real-world scenarios.

UC2 focuses on the evolution of Home Energy Management Systems (HEMS) towards richer energy flexibility features and functions. The initiative aims to provide end users with optimal time recommendations for energy-consuming appliances and electric vehicles (EVs). A practical scenario involved tenants from multiple residential buildings engaging with HEMS in a real-life setting to adapt their daily activities and energy usage to coincide with times of lower demand or peak solar generation. The technical infrastructure supporting this use case comprises PVs, energy storage solutions, EV charging units, smart meters at home appliance sockets and the HEMS itself, all equipped with interfaces designed for ease of use, thereby facilitating user interaction and promoting efficient energy consumption patterns in a user-centric way. UC2 addresses the integration of smart home systems with advanced energy management platforms. This use case underscores the role of user engagement and behavioural adjustments in energy consumption, facilitated by HEMS that align usage patterns with optimal energy production times.
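The "optimal time recommendation" function at the heart of UC2 can be illustrated with a minimal scheduling sketch: given hourly price and PV forecasts, recommend the start hour that minimizes the cost of grid energy for an appliance run. The forecast values and the linear cost model are assumptions for illustration, not the HEMS's actual optimization logic.

```python
# Minimal sketch of a HEMS-style start-time recommendation (UC2).
# Forecast values and the linear net-cost model are illustrative assumptions.
def recommend_start_hour(load_kw, duration_h, price, pv_kw):
    """Pick the start hour minimizing the cost of grid energy for an appliance.

    load_kw    : appliance power draw in kW (assumed constant while running)
    duration_h : run time in whole hours
    price      : 24 hourly electricity prices (currency/kWh)
    pv_kw      : 24 hourly forecast PV generation values (kW)
    """
    def grid_cost(start):
        total = 0.0
        for h in range(start, start + duration_h):
            grid_kw = max(load_kw - pv_kw[h], 0.0)  # PV offsets the load first
            total += grid_kw * price[h]
        return total
    return min(range(24 - duration_h + 1), key=grid_cost)

# Example with a midday solar peak: the appliance run is steered into the
# hours of surplus PV generation.
price = [0.30] * 24
pv = [0.0] * 10 + [2.0, 3.0, 3.0, 2.0] + [0.0] * 10
print(recommend_start_hour(load_kw=2.0, duration_h=2, price=price, pv_kw=pv))
```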
UC3 employs the FIWARE open-source platform to strategically plan and optimize investments in photovoltaic (PV) systems and battery storage, with a focus on operational flexibility. The use case is exemplified through a public building that has the potential for maximum PV energy production yet exhibits a highly variable and inflexible electric load profile. This scenario uncovers the necessity for additional local storage capacity, facilitated by appropriate power conversion units, which serves to reduce the financial burden on the local electricity grid infrastructure, including aspects managed by Distribution System Operators (DSOs). The technical underpinnings of UC3 include the deployment of smart meters, Raspberry Pi 3 boards as control units, and the requisite hardware servers, alongside Ethernet cables and network switches, to construct a robust and responsive network infrastructure. UC3 utilizes the capabilities of the FIWARE platform to plan smart investments in energy infrastructure. This use case presents a model for energy communities where flexibility and operational efficiency are paramount, particularly in public buildings with variable energy demands.

UC4 explores the innovative practice of bi-directional charging for electric vehicles (EVs), assessing their capacity not only to charge but also to discharge their batteries back into the power grid. This dual functionality of EVs holds the potential for enhancing grid stability and optimizing energy management. The practical application of this use case is demonstrated through the deployment of two bi-directional EV charging stations within a laboratory environment integrated with RES generation and management systems. Here, EVs serve as both energy consumers and potential storage solutions, as they can return electricity to the grid, thereby facilitating more efficient energy consumption and utility for users. The technical infrastructure required to realize UC4 encompasses EV charging stations with bi-directional charging capabilities and vehicles equipped to handle such bidirectional energy flows. UC4 brings to the fore the transformative potential of electric vehicles as both energy consumers and providers. By exploring the bi-directional charging capability, this use case serves as an important exploration of smart energy management, where EVs can contribute to grid stability and offer innovative solutions for energy storage and distribution.

The UCs within the I-GReta project form a cohesive portfolio that exemplifies the integration of technical innovation with pragmatic execution. Each UC transcends its function as an isolated inquiry to become an essential component of the overarching aim to transform the energy sector. The detailed examination of each case provides valuable perspectives on practical viability, involvement of stakeholders, and the technological expertise that characterizes the I-GReta initiative. This collective approach not only drives the project's mission forward but also contributes to setting new standards in energy management and sustainability. The FIWARE open-source platform plays a crucial role in all the UCs, underpinning the seamless integration of various smart grid components and facilitating a connected digital ecosystem. It exemplifies how a unified digital platform can act as a catalyst in advancing the future of smart grids, enhancing operational efficiency, stakeholder engagement, and overall grid management.
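Since all four UCs exchange data through FIWARE, a minimal sketch of how a smart-meter reading could be published to a FIWARE Orion Context Broker via its standard NGSI-v2 API is shown below. The broker URL, entity id, and attribute names are illustrative assumptions and do not reflect the project's actual data model.

```python
# Minimal sketch: publishing a smart-meter reading to a FIWARE Orion
# Context Broker through the standard NGSI-v2 API. The URL, entity id,
# and attribute names are illustrative, not the I-GReta data model.
import requests

ORION = "http://localhost:1026"  # assumed local Orion instance

entity = {
    "id": "urn:ngsi-ld:Meter:building-003",
    "type": "SmartMeter",
    "activePower": {"value": 12.4, "type": "Number"},          # kW
    "observedAt": {"value": "2024-01-15T12:00:00Z", "type": "DateTime"},
}

# Create the entity (POST /v2/entities). Subsequent readings would update
# the attributes via PATCH /v2/entities/<id>/attrs instead.
resp = requests.post(f"{ORION}/v2/entities", json=entity)
resp.raise_for_status()
```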
I-GReta use cases Portfolio Mapping

In the I-GReta project, the Portfolio Mapping of Innovation Level has been applied to categorize the UCs based on their technological novelty and market readiness (Fig. 7). UC1 is positioned within the quadrant of existing technologies and applications for current consumers, indicating an incremental improvement on current market offerings. UC2 occupies a space indicating new market products, suggesting a more innovative approach within familiar consumer segments. UC3 and UC4 are both placed in the sector indicative of next-generation technology for new applications and consumers, highlighting their pioneering status in terms of technology and target market. Each UC's positioning reflects its strategic direction: UC1 is focused on enhancing and optimizing existing solutions, UC2 on introducing new products to established markets, and UC3 and UC4 on developing breakthrough technologies that create new markets and consumer segments, driving forward the R&D portfolio's innovative edge. By situating each use case on a matrix that cross-references technological novelty against market readiness, the method provides clarity on the innovation trajectory and commercial potential of each use case. For example, UC1's position suggests a focus on incremental innovation, optimizing existing technologies for current markets, which may require a business model centred around process improvement and cost efficiency. Conversely, UC3 and UC4, positioned as next-generation technologies for new markets, point towards the need for robust digital entrepreneurship models that can navigate uncharted market territories, spearhead customer education, and build new revenue streams.
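The two axes of the mapping can be made explicit with a small classifier that assigns each UC to a quadrant from ordinal scores for technological and market novelty. The thresholds and scores below are invented assumptions chosen to reproduce the qualitative placement reported above (Fig. 7), which the consortium produced by expert judgement rather than numeric scoring.

```python
# Illustrative 2x2 portfolio mapping: technological novelty vs market
# novelty, each scored 1-10. Thresholds and scores are assumptions chosen
# to mirror the qualitative placement in Fig. 7.
def portfolio_quadrant(tech_novelty, market_novelty, threshold=5):
    new_tech = tech_novelty > threshold
    new_market = market_novelty > threshold
    if new_tech and new_market:
        return "next-generation technology for new applications and consumers"
    if new_tech:
        return "new products for established markets"
    if new_market:
        return "existing technology for new applications"
    return "incremental improvement for current consumers"

scores = {"UC1": (3, 2), "UC2": (7, 4), "UC3": (8, 8), "UC4": (9, 8)}
for uc, (tech, market) in scores.items():
    print(uc, "->", portfolio_quadrant(tech, market))
```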
I-GReta use cases innovation readiness level

For UC1 the technology appears to be at a mature stage, indicated by a high TRL, suggesting it has been validated in relevant environments. Its MRL positioning reflects an early market introduction with a validated business model, while its RRL indicates that use or production will require some form of permission or approval, hinting at a need for further engagement with regulatory bodies. UC2 shows a robust TRL with proven system functionality in a natural environment. Its MRL is at a stage where products are being launched in limited scope, and its RRL suggests that regulatory changes or reinterpretations are anticipated, which could imply potential legislative tailwinds or headwinds. UC3 displays a TRL indicative of prototype demonstration in natural settings, implying readiness for pilot studies or early adoption. Its MRL is at the level where the market confirms progress or improvements, and its RRL is optimistic, with the necessary approvals likely in place, suggesting a smoother path to market entry. Lastly, UC4 is positioned at a similar TRL to UC3, with a prototype demonstrated in real-world conditions. Its MRL suggests that it is still at the validation or small-pilot-campaign stage, indicating that initial market tests are underway. Its RRL is the most advanced among the use cases, with the necessary approvals for use or production perceived to be imminent.

Overall, the IRL analysis provides a nuanced picture of where each UC stands in terms of development, marketability, and regulatory engagement (Fig. 8). This assessment is pivotal for strategic planning in the I-GReta project, as it identifies potential barriers and facilitators in the journey from concept to market realization within the EU's dynamic energy sector.

I-GReta use cases Tech Solution Business Model Canvas

This sub-chapter provides an overview of the four UCs conceptualized through the TSBMC, addressing different aspects of energy flexibility, management, and storage and demonstrating a commitment to integrating advanced technologies with market needs and regulatory frameworks (Fig. 9).

In UC1, the TSBMC illustrates a strategic approach to enhancing energy flexibility. The solution central to this use case involves connecting stationary energy storage systems to FIWARE platforms, offering a digital twin of local energy systems. This innovative integration promises unique value in allowing customers to integrate their storage systems with existing FIWARE systems. The canvas identifies energy prosumers as the primary beneficiaries and housing associations or building owners as the key operators. Furthermore, it acknowledges synergies with ongoing projects like the Battery Loop project in Gothenburg, suggesting a collaborative effort in the energy sector. From an economic perspective, the use case focuses on optimizing operations, which includes improving efficiency and reducing costs. The environmental impact analysis foresees an increased use of Renewable Energy Sources (RES), aligning with broader sustainability goals. Socially and culturally, the projected outcome is the stabilization of the energy system, which could translate into a more reliable energy supply for consumers. In summary, UC1 addresses the critical need for energy flexibility by proposing a solution that not only integrates with existing smart platforms but also provides a framework for future enhancements in energy management, usage, and stability.
UC2, as detailed by the TSBMC, explores the optimization of energy flexibility within the grid, specifically within the housing sector. This use case aims to utilize HEMS connected to various home appliances and EV charging stations to enhance energy flexibility and to integrate PV Demand Side Management (DSM) and a scalable platform for other Demand-Response Management (DRM) energy services. The unique value proposition lies in its dual focus on cost reduction and CO2 emission reduction through personalized optimization, which is expected to foster user motivation for adopting energy flexibility services. Beneficiaries of this use case include grid operators, who will benefit from reduced load during peak hours, and households, who will see reduced electricity bills and more sustainable building operations. The operators listed are appliance add-on services, energy utility services, and building owners, among others. The use case also draws on synergies with EU projects like DigitalTwin4PED and Gente. From an economic standpoint, the use case seeks to reduce long-term capital expenditure by lowering energy bills for households, minimizing load on the grid during peak hours and integrating energy generated by PV. Environmentally, it provides the option to reduce CO2 emissions and energy costs from household activities. Socially, the use case supports the development of energy communities and encourages pro-environmental social norms by involving citizens and energy utility companies in the energy management process. UC2 addresses the need for enhanced energy management through innovative and personalized home automation systems that offer economic and environmental benefits, while also supporting social and cultural shifts towards more sustainable energy consumption behaviours.
UC3 is focused on the strategic planning of energy communities, with an emphasis on flexibility scenarios tailored for prosumer involvement. The Tech Solution Business Model Canvas for this use case delineates a multifaceted approach in which the solution involves high-resolution measurement devices and local energy infrastructure integrated with FIWARE platforms to enable precise tracking and control of energy production and consumption. The value proposition of this use case hinges on detailed cost-benefit analysis and real-time data utilization to maximize PV generation potential and load profile variability, thereby offering a unique analytical advantage over existing market alternatives. The target beneficiaries include Distribution System Operators (DSOs) and building owners, who are positioned to benefit from enhanced energy management and cost savings. Economic benefits are projected in terms of direct energy cost reductions for buildings, while the environmental impact is anticipated to increase the use of local energy sources, particularly photovoltaics. Socially and culturally, the use case contributes to the development of energy communities and fosters citizen engagement in local energy system development and operations, which aligns with broader sustainability and participatory goals. Collaboration and data security are underscored through partnerships with ongoing projects and adherence to data ownership and exchange protocols, ensuring that the solution not only advances technical objectives but also aligns with regulatory and community-centric frameworks. In summary, UC3 represents an integrative model that leverages technological innovation and community engagement to foster sustainable energy ecosystems, offering economic and environmental benefits while promoting social inclusion in energy management practices.
In UC4, the TSBMC outlines the deployment of bi-directional EV charging stations as a dual-purpose solution: providing energy storage and contributing to grid stability through demand-response capabilities. The solution leverages EV batteries as mobile energy storage units, enabling them to return power to the building or grid during peak demand or power shortages, which not only aids energy management but also provides backup in emergencies, augmenting resilience. The value proposition centres on utilizing renewable energy sources for charging. This approach offers a distinct advantage over existing market alternatives by integrating renewable energy sources and smart technology, positioning it as a forward-thinking model in line with contemporary environmental objectives. Beneficiaries of this use case include renewable energy service companies, EV owners, and manufacturers, all of whom stand to benefit from the increased utilization and efficiency of EVs as energy storage solutions. The operators of these systems would primarily be the renewable energy service companies responsible for implementing the charging stations. Economic incentives are provided to EV owners for their participation in the flexibility market, promoting the adoption of this innovative charging approach. The environmental impact is significant, with an increase in the use of renewable energy sources, while socially and culturally the model supports participation in energy communities, facilitating collective engagement in sustainable practices. This model represents an integrated approach that enhances the sustainability of energy systems while offering economic incentives and social engagement, showcasing a progressive step towards a more resilient and renewable energy infrastructure.

Digital entrepreneurship models and potential business cases

The final part of this analysis is the synthesis of the previous results and the proposal of strategic digital entrepreneurship models and corresponding potential business cases for the use cases in the I-GReta portfolio (Fig. 10).
For UC1, the entrepreneurship model identified is "Energy Storage as a Service (ESaaS)". This model could facilitate business use cases like "Commercial Battery Solutions", focusing on providing battery storage systems to commercial entities, and "Energy Storage Integration Consultancy", which would offer expert advice on integrating energy storage solutions into existing systems. In UC2, the "Building Automation Platform" model is proposed. Potential business cases under this model include "Smart Home Energy Optimization", which involves optimizing energy use in homes for efficiency and cost savings, and "Demand-Response Aggregation", which could enable the collective management of consumer energy demand in response to supply conditions. UC3 aligns with the "Peer-to-Peer Energy Trading" model. This facilitates business use cases such as "Prosumer Network Expansion Services", aiming to expand networks of energy producers and consumers who trade energy, and "Community Energy Management Systems (CEMS)", which manage the energy usage of a community to optimize consumption and costs. For UC4, the "Vehicle-to-Grid (V2G) Services" model is suggested. This model could create business use cases like "EV Energy Storage Solutions", which would use electric vehicle batteries as temporary energy storage, and "Renewable EV Charging Networks", which would establish networks for charging electric vehicles using renewable energy sources. Each digital entrepreneurship model is tailored to address the specific needs of its use case, proposing innovative business cases that could drive forward the goals of the I-GReta project (Fig. 10). These models and business cases collectively reflect a commitment to leveraging technology for sustainable energy management, community engagement, and the creation of new market opportunities within the energy sector. The selected digital entrepreneurship models hold significant potential for reshaping the industry landscape, and they carry various economic, social, and regulatory implications that warrant closer examination. Table 3 summarizes the economic, social and regulatory implications from the TSBMC workshop.

Discussion

This study has provided a nuanced analysis of the I-GReta project's R&D portfolio through various methodological lenses, including use case mapping, innovation readiness levels, and the development of a tailored Tech Solution Business Model Canvas. Through the systematic categorization of use cases, a trajectory of innovation and market alignment has emerged, suggesting distinct pathways for each use case from technological readiness to market penetration and regulatory compliance.

For UC1, we see the potential for a service-oriented approach, aligning with business models such as Energy Storage as a Service (ESaaS). This resonates with trends towards service-oriented business models in the energy sector (Shafqat et al. 2021), which emphasize customer value over product ownership.
The commercial viability of battery storage integration consultancy suggests a demand for expertise in integrating advanced energy storage solutions with existing infrastructures, a trend supported by the increasing complexity of energy systems (Richter 2013). UC2 reveals the importance of smart home energy optimization and demand-response aggregation in enhancing energy efficiency. These potential business cases reflect a growing market for home energy management systems (Iria and Soares 2023) and the increasing role of digital platforms in energy consumption management (Duch-Brown and Rossetti 2020). The peer-to-peer energy trading model proposed for UC3 highlights the emerging paradigm of prosumer-centric energy markets. This model aligns with the shift towards decentralized energy systems, where digital platforms empower consumers to engage in energy production and consumption (Burger et al. 2020). Lastly, UC4 demonstrates the intersection of the transportation and energy sectors, with vehicle-to-grid (V2G) services showcasing an innovative use of EV batteries for energy storage. The proposed business case for renewable EV charging networks aligns with current drives towards sustainability and renewable energy integration (Madina et al. 2016).

The proposed digital entrepreneurship models in this study carry economic, social, and regulatory implications for future UC development. Economically, these models offer avenues for cost reduction and revenue generation; for instance, Energy Storage as a Service (ESaaS) and Vehicle-to-Grid (V2G) services provide reduced upfront costs and new revenue streams, respectively. Socially, the implications are equally transformative, with models like Peer-to-Peer Energy Trading fostering community engagement and empowerment, and Building Automation Platforms enhancing indoor comfort and well-being. These models promote energy sustainability, independence, and resilience, contributing to reduced environmental impact and the integration of renewable energy sources. Regulatory implications are intricate, as they must address the balance between innovation and compliance. ESaaS requires careful consideration of fair pricing and data privacy, while V2G services pose challenges in establishing technical and operational standards. The regulatory landscape must adapt to these emerging models, ensuring grid compatibility and energy efficiency standards are met without stifling innovation. This study showcases the interwoven nature of these implications, emphasizing the need for a coherent strategy that aligns technological advancement with economic incentives, social benefits, and a flexible regulatory framework to support sustainable growth in the digital energy marketplace.

The mapping of use cases to smart grid architectures revealed challenges in aligning complex technological innovations with existing systems. The portfolio analysis highlighted notable innovation potential but also underscored the need for a more nuanced understanding of market dynamics. Evaluating readiness levels provided an understanding of the different projects' readiness for market deployment, but the variability in market and regulatory environments suggests that a one-size-fits-all approach might be limiting. The Tech Solution Business Model Canvas offered comprehensive insights, yet the adaptability of these models under varying market conditions requires further exploration. The proposed digital entrepreneurship models are innovative, but their real-world applicability may face hurdles such as stakeholder resistance and technological integration challenges. Overall, the study makes significant strides in understanding the transition from technical use cases to business cases but also reveals areas requiring deeper investigation and strategic refinement.
The interdependency of technology, market, and regulation presents both challenges and opportunities. While technological readiness can be achieved through rigorous R&D and pilot testing, market readiness requires a robust understanding of customer needs and competitive positioning. Regulatory readiness, perhaps the most unpredictable, necessitates a proactive and responsive approach to policy changes and legislative developments. Unexpected findings, such as the variable stages of regulatory readiness among the use cases, open new avenues for discussion. For example, the imminent regulatory approval for UC4's bi-directional charging suggests a conducive legislative environment, while the anticipated changes for UC2 may introduce both opportunities for influence and risks of non-compliance. These findings suggest the need for a flexible, adaptive approach to innovation management and digital entrepreneurship within the energy sector. As the sector continues to evolve, the capacity to anticipate and respond to market and regulatory shifts will be crucial. Additionally, the role of digital platforms and tools in facilitating new business models, such as Energy-as-a-Service and peer-to-peer energy trading, cannot be overstated. These models offer the potential to disrupt traditional energy markets, creating new value for consumers and providers alike.

The literature reviewed in the Background section suggests a shift toward more comprehensive and digitally integrated models in energy informatics, focusing on early-stage assessment, regulatory considerations, and the intersection of use cases (UC) with business cases (BC) through digital entrepreneurship. These scholarly perspectives connect to our results by demonstrating how our study adopts these evolved approaches. The advancements in early-stage assessments and policy landscapes highlighted by Smith and Woodworth (2012) and Reyna and Chester (2017) resonate with our findings on the criticality of aligning technological capabilities with market needs and regulatory requirements. The comprehensive approach of integrating use case and business case levels, as discussed by Priem et al. (2018), is reflected in our study's methodology, which interweaves technical, business, and policy aspects. The evolution towards digitalization and agile methodologies emphasized by Bumpus (2019) and Duc et al. (2019) is mirrored in our research outcomes, showcasing how digital tools and flexible design contribute to the adaptability and success of energy solutions. Lastly, the intersection of use case (UC) and business case (BC) through digital entrepreneurship, as suggested by Wang and Shao (2023), is demonstrated in our findings, where digital entrepreneurial models serve as a linchpin connecting technical functionality with strategic-economic viability, reinforcing the holistic nature of our study's insights. Our findings provide a fresh perspective on the interplay between technology and market dynamics, offering a new paradigm for how energy-related R&D can evolve to meet the challenges of digital transformation and sustainable development. This connection enriches the academic discourse with practical examples and solidifies our contribution to the field, emphasizing the relevance and timeliness of our work in light of these recognized scholarly advancements.
While this study provides valuable insights, it also has certain limitations. Although the current methodology offers significant insights, it could benefit from a more balanced approach between qualitative and quantitative analysis. The inclusion of quantitative data would provide a stronger empirical basis, lending statistical weight to the qualitative observations. This dual approach could deepen the understanding of the complexities involved in the research. Furthermore, the application of case studies or pilot projects would serve as a practical test bed, allowing for the evaluation of theories and models in real-world settings. This would not only validate the research findings but also enhance their relevance and applicability to actual industry scenarios, bridging the gap between theory and practice. Another limitation is the dynamic nature of technological innovation and market forces, which requires ongoing analysis to stay current. Future research could explore longitudinal studies to track the progression of these use cases as they navigate market entry and scale-up. To delve deeper into the application and effectiveness of the proposed models, involving stakeholders in empirical studies would offer critical insights into the real-world challenges of adoption and implementation. Further, a thorough investigation into how these business models perform under diverse market conditions would add valuable understanding of their scalability. Equally important is a more comprehensive exploration of the legislative landscape, which could illuminate both potential regulatory hurdles and opportunities that significantly affect the models' success and feasibility.

Conclusion

The I-GReta project's approach to integrating advanced technologies with market and regulatory frameworks demonstrates the potential for significant contributions to the field of energy management and sustainability. The project's four cornerstone use cases (Upscaling of Battery Storage, Building Energy Management System, Planning of Energy Communities, and Bi-Directional Charging) each contribute uniquely to the energy sector's transformation. These cases collectively illustrate the potential for scalable, economically sustainable solutions that not only meet immediate market needs but also align with broader sustainability goals. UC1 has shown the potential of service-oriented business models like Energy Storage as a Service (ESaaS), which capitalize on the growing complexity of energy systems. UC2's Building Energy Management System has highlighted the growing market for smart home solutions and demand-response aggregation, emphasizing the role of user engagement in energy efficiency. UC3's Planning of Energy Communities aligns with the decentralized energy paradigm, empowering consumers to become prosumers. UC4's Bi-Directional Charging merges the transportation and energy sectors, expanding the utility of EV batteries. The I-GReta project represents a concerted effort to navigate the complex interplay of technology, market, and regulatory frameworks within the energy sector. In conclusion, this study marks an advancement in energy informatics and digital entrepreneurship by demonstrating how digital tools and agile methodologies can be harmonized to drive innovation in the energy sector. Our research provides novel insights into the development of sustainable business models that can potentially be economically viable and technologically advanced. Furthermore, the practical implications of our work, such as the
enhanced adaptability of energy systems to market changes and the facilitation of regulatory compliance, pave the way for real-world applications, offering a roadmap for industry stakeholders to implement cutting-edge energy solutions effectively. The study acknowledges its limitations, suggesting a balanced inclusion of both qualitative and quantitative data to improve empirical strength and industry applicability, while recommending future research that explores the dynamics of technology and market forces through longitudinal studies and stakeholder engagement for better scalability and adoption insights.

Fig. 1: Clustering of use cases in I-GReta (use cases of one colour are undertaken at the same field trial location).
Fig. 3: Overall methodology of the study.
Fig. 5: The Innovation Readiness Scale and the three dimensions of readiness level assessment, adapted from (Borgefeldt and Svensson 2022).

• Evaluating Innovation Readiness Levels: to evaluate the Technology Readiness Level (TRL), Market Readiness Level (MRL), and Regulatory Readiness Level (RRL) of each use case, pre-estimating their maturity and market viability.
• Creating a Tech Solution Business Model Canvas: to create and utilize a tailored Tech Solution Business Model Canvas (TSBMC) for I-GReta, analysing solution functionality, infrastructure, security, value proposition, and stakeholders.
• Proposing Digital Entrepreneurship Models: based on the intersection of the market and technology analyses, to propose relevant digital entrepreneurship models and specific business cases for each use case in the I-GReta portfolio.

Table 1: I-GReta use cases list. The Community sector pertains to small-scale energy systems where resources are communally shared. The Public Energy System represents an expansive version of the Community sector, facilitating resource exchange beyond community confines and necessitating compliance with the public power grid's regulations. Each sector is divided into two general categories (Smart Grids and Smart Tariffs) and a sector-specific category. Smart Grids capture UCs that leverage advanced Smart Grid features, such as smart tariffs and demand response. This classification aims to pinpoint synergies between UCs and their interconnections. Notably, some UCs span multiple sectors and subdivisions. From this comprehensive set, a subset of use cases is selected based on technical feasibility at trial sites. A prime area of focus within the project is Smart Buildings, which are instrumental in linking heat and electricity, thereby unlocking new flexibility exchange opportunities. Subtopics within the Smart Buildings category address the integration of renewable energy sources into building energy management systems, incorporating smart charging for alternating current (AC) and direct current (DC), and fostering Smart Energy Communities.
Table 2: Summary of the background studies.
Table 3: Summary of economic, social and regulatory implications.
Knowledge-Based Assessment and Training of Agricultural Development Programme (ADP) Staff on Roselle as an Economically Valuable Crop for Food Security and Income Generation

Keywords: Roselle calyx, Roselle jam, apple-flavoured drink, Roselle tea

INTRODUCTION

Roselle (Hibiscus sabdariffa), also known as sorrel, is a herb belonging to the family Malvaceae. Roselle is known locally by different names in different countries (Ismail et al., 2008). Roselle originated from West Africa and is widely grown in tropical African countries like Sudan, Egypt, Mali, Nigeria, Ethiopia and Chad, as well as in India, West Indonesia, Brazil, Malaysia, Australia, Mexico, the Philippines and other tropical American countries (Shoosh, 1993).

It is disheartening to note that the potential of this miraculous crop is grossly underutilized in some developing countries like Nigeria. Roselle remains one of the underexploited food crops with nutritional and food-industry processing potential. For instance, Roselle seed is a good source of oils for nutritional and pharmaceutical purposes (Betiku and Adepoju, 2013; Anel et al., 2016). Several parts of Roselle (H. sabdariffa), such as the flowers and leaves, are used as vegetables in many countries. Roselle calyces and leaves, among other parts, possess beneficial health characteristics for humans (Cid-Ortega and Guerrero-Beltran, 2016). Every part of the Roselle plant, including fruits, roots and seeds, is utilized in various foods. Furthermore, Islam et al. (2016) reported that Roselle is more than an eye-catching crop and has been used in a number of dishes, beverages and conventional remedies for diseases. It follows that if the potential of Roselle is well harnessed, Roselle is an economic crop for wealth creation. Roselle extracts are also used as natural pigments for foods and beverages, as well as for preparing jams, ice cream, gelatin, pudding, cakes, jellies and concentrates possessing a red colour with a characteristic sour taste. Several research publications have pointed out that Roselle extracts may have various therapeutic effects, one of which is their antioxidant capacity, attributed mainly to the content of anthocyanins and phenolic compounds (Tsai and Huang, 2004). Roselle calyces have a characteristic deep red colour, which is mainly due to the presence of anthocyanins. The most common use of Roselle calyces is for obtaining aromatic infusions of intense red colour that are traditionally consumed either cool or hot (Tsai et al., 2002; Tsai and Huang, 2004; Anokwuru et al., 2011; Amer et al., 2012). A higher antioxidant capacity has been reported in Roselle calyces of the red variety than in the white variety (Christian and Jackson, 2009). Roselle seed is a valuable food resource on account of its protein, calorie, fat, fibre and micro-nutrient content (Akanbi et al., 2009). Roselle calyces contain nine times more vitamin C than Citrus sinensis (Amin et al., 2008). There is great market potential for farmers of Roselle as a cash crop in Nigeria.

In view of the multifarious nature of the Roselle crop, the Institute of Agricultural Research and Training (IAR&T), Ibadan, Nigeria has developed and processed different products from Roselle, such as Roselle jam, Roselle wine, Roselle tea and Roselle drink (zobo). A training package was organized for the manpower trainers in agriculture and other food-related
processing industries. The processing and packaging technologies will not only increase the income generation of middle- and average-income household mothers but also boost the nutrition security of their households. The major objective of this work was to unveil the nutritional potential of Roselle as well as to empower household mothers with the requisite skills for processing Roselle foods for a sustainable livelihood. It is aimed at training the trainers on Roselle processing, food safety practices and adequate packaging, as well as value addition to Roselle products. This will open new opportunities for income and employment generation in many developing countries like Nigeria. Without doubt, when Roselle food products are well packaged, they will contribute to the economic growth of the nation (Nigeria) in terms of gross domestic product (GDP) through export as a source of foreign exchange earnings.

Roselle-Apple Drink (RAD) and Roselle-Sugar Drink (RSD) were formulated and processed from Roselle extract and fruit juice pulp in the ratios 3:1 and 3:2, respectively, for each product category. Roselle jam was also processed as described by Islam et al. (2016). The developed Roselle products were randomly subjected to sensory evaluation to determine the most preferred. This was done by a team of twenty (20) panelists, who were the trainees for the products' development. The products were presented as randomly coded samples and analyzed for appearance, colour, flavour, texture, taste and overall acceptability. Each panelist recorded their degree of liking or disliking using a nine-point hedonic scale (1 = dislike extremely, 2 = dislike very much, 3 = dislike moderately, 4 = dislike slightly, 5 = neither like nor dislike, 6 = like slightly, 7 = like moderately, 8 = like very much, 9 = like extremely), as prescribed by Iwe (2003). Before each sample was tested, the panelists rinsed their mouths with pure water to avoid cross-interaction of the products' sensorial properties.

The novelty of these Roselle food products underpins the need to explore their potential in the food industry, a conclusion strongly supported by the report of Leung and Foster (1996). Although less than twenty percent of the participants were able to mention four Roselle products before the training, none of the participants indicated readiness to generate income with their previous training. This may be due to inadequate exposure to the processing technologies.

Sensory evaluation: Sensory analyses of the different formulations of RAD and RSD are presented in Table 3.
From the results, it was observed that RSD was the most preferred by the panelists, with a rating score of 8.4 points. In the same vein, the panelists' responses showed that RSD was rated significantly higher than RAD in terms of flavour, taste and texture. This observation contrasts with the data reported by Ismail et al. (2016), who highlighted that taste, aroma, consistency and flavour were not significantly different for ROD and RAD. This variation could be because the infinitesimal amount of fruit juice used was not significant enough to have influenced the flavour, taste and texture of the mixture. However, the appearance scores of RAD and RSD were both significant (p < 0.05), which is in consonance with the general knowledge that consumers judge a product by its colour. Generally, all the categories of Roselle products were acceptable to the consumers. The sensory data and consumer responses obtained from this study are in strong agreement with the report by Luvanga (2014), which emphasized that there was no significant difference among Roselle drinks mixed with apple, orange and melon.
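The comparison above (RSD rated significantly higher than RAD on flavour, taste and texture) corresponds to a paired comparison of the 20 panelists' hedonic scores. A minimal sketch using a paired t-test at P < 0.05, consistent with the statistical analysis described below, is shown here; the score vectors are invented for illustration, with the real panel data residing in Table 3.

```python
# Illustrative paired t-test comparing RAD and RSD hedonic scores from the
# same 20 panelists, mirroring the study's t-test at P < 0.05. The scores
# below are invented for illustration; the real data are in Table 3.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rad = np.clip(rng.normal(7.5, 0.8, 20).round(), 1, 9)  # 9-point hedonic scale
rsd = np.clip(rng.normal(8.4, 0.6, 20).round(), 1, 9)

t, p = stats.ttest_rel(rsd, rad)
print(f"mean RAD = {rad.mean():.1f}, mean RSD = {rsd.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.4f}",
      "(significant at P < 0.05)" if p < 0.05 else "(not significant)")
```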
Roselle calyces as beverage, jam and tea: The use of Roselle calyces in the production of beverages is novel, cost-effective and income-generating for household mothers and youth. Industrially, this novelty can be the best option for soft-drink production in terms of availability and cost. Either fresh or dried calyces can be used to prepare drinks, as shown in Figure 1b. Drinks made of fresh fruits, juices, or extracts are commonly consumed as cheap beverages in many African countries like Nigeria. A report by Fellow and Axtell (2014) emphasized that dried calyces and ready-made drinks are widely available in groceries and health-food stores throughout the United Kingdom and the United States. Ninety-eight percent (98%) of the participants, who were staff of the state government, expressed their readiness to use the new technologies for income generation.

The preparation of Roselle tea is outlined in Figure 1a. Roselle tea is commonly called sugary herbal tea in many African countries, while in Jamaica Roselle tea is produced with added ginger flavour. Roselle tea is believed to reduce cholesterol levels and is highly valued as an organic product (Mohamed et al., 2012).

Roselle jam processing was described as the most effective and attractive way of utilizing Roselle. Jam is easy to make with only Roselle calyces and sugar. Roselle jam has been reported to be rich in vitamins B1, B2, B3 and C, minerals and antioxidants. These antioxidants from the calyces are good for the heart.

Multipurpose uses of Roselle parts: Roselle is an underutilized multipurpose crop with enormous potential for economic and industrial development aimed at food and nutrition security (Dy Phon, 2000). From the secondary nutritional data adopted from Islam et al. (2016), shown in Table 4, it is evident that every part of Roselle is nutritious and possesses great health benefits for humans. Owing to Roselle's nutritional benefits, dietary inclusion of Roselle-based products is imperative to achieve nutrition security. Consumption of Roselle-based products supplies the consumer with vitamins C and B, phosphorus, calcium, antioxidants and other mineral elements. Roselle products are affordable to the poor, average, intermediate and rich classes of society.

Socio-demographic characteristics of the participants: Twenty participants (all women) from the Oyo State Agricultural Development Programme (OYADP) were present at the training, and the training was participatory. The post-test showed that half of the participants were in the age range 41-50 years (≤ 50%), with 25% of the participants educated up to postgraduate level. All the participants had previous knowledge of food processing, such as cassava, plantain, fruit and soybean processing. The majority of the participants had previous training in other food processing, while a few had previously been involved in Roselle drink (popularly called zobo) processing, but not in fruit-flavoured drink processing. A significant number of the participants had no previous knowledge of Roselle jam and tea processing.

The training comprised the following steps:
• Selection of participants (women and youth) for the training
• Invitation of subject-matter specialists from the Oyo State Agricultural Development Programme (ADP)
• Pre-test evaluation of the participants' knowledge about the importance, uses and processing of Roselle
• Enlightenment of the participants on the nutritional and health importance of Roselle
• Training of the participants on Roselle drinks, Roselle fruit-flavoured drinks, jam, tea and Roselle wine
• Administration of a sensory test on the products with the participants
• Post-test evaluation of the participants after the training

Preparation of the sample materials for the training: Roselle calyces were harvested at maturity from the experimental field of the Kenaf and Jute Programme, Institute of Agricultural Research and Training, Ibadan, Nigeria. Dried calyces were sorted to remove dirt.

Pre-test evaluation of the trainees' knowledge: A pre-test was conducted on the respondents, who were staff of the Oyo State Agricultural Development Programme (OYADP), Ibadan, Nigeria. The questionnaire covered age, work type, education status, previous training in food processing and products, previous training in Roselle processing, and use of previous training for income generation.

Dissemination and practical demonstration of Roselle technologies: The processing technologies for the different Roselle products were demonstrated during the training organized for the trainers in the Agricultural Development Programme. Twenty staff of the Oyo State Agricultural Development Programme (OYADP) were present as participants. They were trained on the processing of Roselle wine, Roselle-Apple Drink (RAD), Roselle-Sugar Drink (RSD), Roselle tea and Roselle jam. Roselle drink was processed according to Fasoyiro et al. (2005), while Roselle jam and tea were processed according to the procedure prescribed by Ashaye and Adeleke (2009). Flow charts were produced for processing Roselle-apple drink (RAD) and Roselle-sugar drink (RSD) (the same procedure applies to other fruit flavours, e.g., pineapple and orange), and for processing Roselle tea and jam (Figure 1).

Location: The training was conducted at the Institute of Agricultural Research and Training (IAR&T), Ibadan, Oyo State, Nigeria.

Target beneficiaries: The targeted beneficiaries of this training were the trainers in the state's Agricultural Development Programme (who will in turn train farmers, youths, and women in agriculture).

Post-test evaluation: After the training, the participants were evaluated on their understanding of food processing, their understanding of Roselle processing, their ability to mention Roselle food products, and their readiness to process Roselle food products for income generation.

Statistical analysis: All data obtained were subjected to two-way analysis of variance (ANOVA), and means were separated using a t-test with significance at P < 0.05.

In conclusion, the sensory evaluation showed that all the Roselle products assessed (Roselle-Apple Drink (RAD), Roselle-Sugar Drink (RSD) and Roselle jam) were highly acceptable to the participants. Post-test results showed that 100% of the participants had a better understanding of food and Roselle processing after the training. All the participants (100%) also showed their readiness to explore the nutritional potential of Roselle calyces for income generation.

Figure 1: Flow charts for the processing of Roselle-Apple Drink (RAD), Roselle-Sugar Drink (RSD) and Roselle jam.
Plate a: Participants inspecting the calyces being sun-dried.
Plate b: Cross-section of the participants at the training lecture.
Plate c: Cross-section of the participants during the hands-on training (practical session).
Table 2: Pre-test on the participants' involvement in previous food-processing training.
Table 3: Mean sensory scores of Roselle products by the participants using a 9-point hedonic scale.
Table 4: Nutritional composition of 100 g of fresh Roselle calyces, leaves and seed.
Table 5: Participants' responses on previous food-processing training.
Table 7: Trainees' understanding of food processing.
Table 8: Roselle products that the trainees can process.
2024-08-29T16:36:18.742Z
2023-05-13T00:00:00.000
{ "year": 2023, "sha1": "34305d4bcd1914c748d58140105ee3d992484fb5", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.7770/safer-v13n1-art657", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0b4434dbfdc4ff1fc9367f62dd29742144ade8d8", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
226227276
pes2o/s2orc
v3-fos-license
Sensor-based localization of epidemic sources on human mobility networks

We investigate the source detection problem in epidemiology, which is one of the most important issues for the control of epidemics. Mathematically, we reformulate the problem as one of identifying the relevant component in a multivariate Gaussian mixture model. Focusing on the study of cholera and diseases with similar modes of transmission, we calibrate the parameters of our mixture model using human mobility networks within a stochastic, spatially explicit epidemiological model for waterborne disease. Furthermore, we adopt a Bayesian perspective, so that prior information on source location can be incorporated (e.g., reflecting the impact of local conditions). Posterior-based inference is performed, which permits estimates in the form of either individual locations or regions. Importantly, our estimator only requires first-arrival times of the epidemic at putative observers, typically located only at a small proportion of nodes. The proposed method is demonstrated within the context of the 2000-2002 cholera outbreak in the KwaZulu-Natal province of South Africa.

Introduction

One of the most important factors in epidemic control is to trace the source or origin of an epidemic [1,2]. This problem is sometimes called 'source localization' (and can in fact involve multiple sources). Ideally, one would like to locate the source based on data capturing the entire history of the epidemic, including times of infection / recovery of individuals as well as information on contact between individuals and of individuals with infective aspects of the environment (e.g., water sources). However, epidemic history is complex and high-dimensional, and almost invariably the data are incomplete, often substantially so [3,4]. Over the past 5-10 years, researchers have found it useful to reformulate the localization problem as that of estimating a source node(s) on a complex network. There have been a large number of contributions in this area to date; a recent and comprehensive review has been conducted by [5]. Many approaches use network-distance-based measures of centrality to identify the source node in a complex network, such as rumor centrality [6,7] or Jordan centrality [8,9]. A related idea is that of effective distance-based source detection [10,11]. However, these methods typically assume network-wide observation of the infection status of nodes at either a single time point or a handful of such snapshots, which is generally unrealistic for large networks, particularly in the context of human disease. Alternatively, sensor-based methods are designed to instead locate a source based on arrival-time information of infection from only a subset of observer nodes (e.g., [12,13,29]). Despite the development in this area, there is still substantial room for improvement [5]. In general, methods proposed to date frequently fail to assimilate the often-abundant information that can be gained through epidemic modeling, as well as additional prior information. In addition, they typically do not provide measures of uncertainty quantification. Both of these aspects are especially important in the context of human disease, where policy providers and decision makers are often data-poor and yet required to make concrete decisions that have pronounced impact on society.
In this paper, focusing on the illustrative example of cholera epidemics, we propose a method of source detection that integrates (i) a sensor-based approach with (ii) a stochastic differential equation model for water-borne disease. In turn, we adopt a Bayesian framework, thus allowing for uncertainty quantification and the formal use of prior information. A key component of our approach is the incorporation of human mobility networks. Human mobility is one of the main drivers of the spreading of infectious diseases. Understanding, predicting and possibly controlling the propagation of an epidemic in a population cannot be separated from the analysis of the underlying human mobility patterns. Historically, network-based research incorporating human mobility has focused on infectious diseases transmitted through direct contact between individuals (e.g. [15-18]). However, the role of human mobility in the spreading of waterborne diseases (where transmission is mediated by water) has also recently attracted increasing attention (e.g. [19, 21-23]). Indeed, a susceptible individual can be exposed to contaminated water while travelling or commuting and seed the infection in the resident community once back. On the other hand, asymptomatic infected individuals (who potentially shed pathogens but whose movement is not impaired by disease symptoms) can spread pathogens while moving among different human communities. These two mechanisms highlight the critical role of human mobility as a notion extending beyond direct contact. In this paper, we recast the source detection problem as one of identifying the relevant mixture component in a multivariate Gaussian mixture model from [12]. Human mobility within the stochastic, spatially-explicit epidemiological model of [23] is used to calibrate the parameters. Our estimator requires only first-arrival times of the epidemic at a small proportion of nodes, termed sensors or observers. Adopting a Bayesian perspective opens the possibility to seamlessly integrate available nontrivial prior knowledge from previously observed spreading patterns or other data sources. Moreover, we are able to quantify uncertainty in the resulting estimators. Specifically, our approach provides (a) statistically well-defined region(s) of nodes that are likely to be the spreading origin of the observed process, accompanied by a corresponding posterior probability. We develop and apply our method in the context of the 2000-2002 cholera outbreak in the KwaZulu-Natal province, South Africa. In particular, our integrative, Bayesian approach demonstrates significant improvement in this context over the use of a generic sensor-based source detection approach alone [12]. To better place our contributions in context, we note the following points in comparison to related work in the literature. First, while there are a number of network-based methods of epidemic source detection that are not generic and that incorporate some knowledge of disease epidemiology (e.g., [24-26]), this is the first article to integrate cholera-specific transmission models into network-based source detection. Second, while human mobility networks have been used previously in network-based epidemic source detection (e.g., [27], who also use a gravity model similar to ours), this is the first article to integrate the role of human mobility in the complex spreading of waterborne diseases.
Finally, while a number of Bayesian approaches have been suggested or developed in epidemic source detection (e.g., [24,25,28]), to the best of our knowledge none of these have developed an informed prior probability distribution. In addition, while our use of generic networks for the underlying spreading pattern is less common in the literature (in contrast to assuming a tree-like structure), there is indeed precedent (e.g., [26,29]). Code implementing our proposed method has been integrated into the NetOrigin package in R.

Sensor-based source localization: overview of proposed method

We assume a network G = (V, E) to be given that is composed of a set of nodes v ∈ V that are inter-connected by links (u, v) ∈ E. Furthermore, there is a spreading process on this network, which originates in source node s* ∈ V. For pre-defined sensors at a small fraction of nodes, O = {o_1, . . . , o_K}, K ≤ |V|, we observe the first-arrival times of the spreading process, i.e. t = (t_1, . . . , t_K). In the epidemiological context motivating our work, the set of nodes v ∈ V are human communities, and the first-arrival times are the time points at which a given level of disease incidence is attained in observed communities. Our aim is to develop a good estimator for the source s* and to quantify the uncertainty in that estimator. Conditional on the underlying spreading process and a given source s*, the first-arrival times t are assumed to follow a K-dimensional multivariate Gaussian distribution. The a priori chance that a given node s is the epidemic source is modeled according to a prior distribution π = (π_1, . . . , π_N) over network nodes v ∈ V with $\sum_{v=1}^{N} \pi_v = 1$, where N = |V| is the total number of nodes. Through this prior we incorporate subjective beliefs or other sources of information about the origin of the spreading process. Statistical inference on source location is then based on the corresponding posterior, with the most probable source determined as
$$\hat{s} = \arg\max_{s \in V} P(s \mid T = t), \quad (1)$$
and the most probable region by
$$C = \{ s \in V : P(s \mid T = t) \ge \tau_\alpha \} \quad \text{with} \quad \sum_{s \in C} P(s \mid T = t) \ge 1 - \alpha, \quad (2)$$
where α ∈ (0, 1) is pre-specified and τ_α ∈ (0, 1) is the largest such threshold for which the conditions in (2) hold.

The underlying spreading process is modeled using a set of stochastic differential equations for the spread of water-borne disease, consisting of three main elements. First, fundamentally, our model is a version of the well-known susceptible-infected-removed (SIR) model, but expanded to differentiate rates of death across each class of individuals as well as to include components for both symptomatic and asymptomatic infection. Most of the rate parameters are simple constants to be set by the user (e.g., using historical data, public records, etc.). However, second, the rates of (a)symptomatic infection are modeled proportional to a 'force of infection' term which, for a given location, summarizes the aggregate contribution of bacterial concentration at neighboring locations and the extent of human mobility from the latter to the former location. Finally, third, the bacterial concentration at each location is modeled using a linear differential equation that includes a term reflecting the number of infected individuals and the volume of the local water reservoir. Human mobility is represented through the network G, which is taken to be directed and weighted. Here nodes correspond to communities and weights on links between nodes reflect the probability of movement by individuals from one node to another.
In our applications, these probabilities are calculated using a simple gravity model, combining information on the size of communities and the distance between them. Our overall approach to sensor-based source localization combines a Bayesian extension of the method in [12] with the human mobility portion of the spreading model in [21]. By construction, the source localization problem in our setting effectively reduces to that of identifying through the posterior distribution the relevant component in a multivariate Gaussian mixture model. The necessary parameters for the individual Gaussian components, i.e., the means and covariances of the first-arrival times observed at sensors 1, . . . , K, are calibrated using a combination of stochastic simulation from our spreading model and statistical smoothing of the corresponding output. These simulations in turn are run using various rate parameters whose values are retrieved through literature review. Additional details regarding modeling and implementation can be found in Methods.

Analysis of the 2000-2002 South African cholera outbreak

We applied our source estimation approach to data from the 2000-2002 cholera outbreak in the KwaZulu-Natal province, South Africa. The outbreak lasted for two years, starting in August of 2000, and ultimately involved about 140,000 recorded cases in two major waves in the respective summers [30]. Figure 1 shows the epidemic curve; the peak of the first wave is much higher than the peak of the second wave. Figures 2 and 3 show a spatial representation of some of the data and results relevant to our model. These data have already been described in detail in [21]. The figures show the spatial locations of N = 851 communities in the KwaZulu-Natal province, indicated by dots for which the area scales with population size. In turn, each community corresponds to a node in our human mobility network G. A visualization of this network is also provided, as an overlay. In order to improve interpretability, only the three most frequented outbound links are shown (typically corresponding to roughly 10% of the outward mobility from a node). We split those links into two sets: links with the top 10% of weights and those with the bottom 90%. For the first set, we kept the weights unchanged; for the second set, we decreased the weights to 10% of their original value. That is sufficient to illustrate several characteristics of the network. In particular, we note the local, grid-like connectivity of much of the network, which is then complemented by a handful of nodes with substantially higher and more global connectivity. The network visualization suggests small-world behavior, which can be confirmed through computational methods applied to the underlying human mobility network (see the Supplemental Text (S1 File)). The more highly connected nodes with global connectivity correspond roughly to (i) Durban, the largest city in KwaZulu-Natal, and other cities in the Greater Durban Municipality (e.g., Inanda); (ii) Pietermaritzburg, the capital and second-largest city in KwaZulu-Natal, situated 80 km inland from Durban; and (iii) Newcastle, the third largest city, located near the northwest edge of the province. Also represented in Figures 2 and 3 is a local version of the basic reproduction number, R_0, for each community, through appropriate shading of the nodes.
In epidemiology, the basic reproduction number of an infection can be thought of as the number of cases deriving from one infected case on average over the course of its infectious period, in an otherwise uninfected population [2]. In a well-mixed population, when R_0 < 1, the infection will die out in the long run. Conversely, if R_0 > 1, the infection will spread. In the case of multiple interconnected local populations, the concept of basic reproduction number has been generalized by [31,32]. Here, a local version of the reproduction number has been computed for each node, following the approach of [21], which combines information on community size with models for contamination and exposure rates that incorporate access to (in)adequate toilet facilities and to water, respectively. See Methods for additional details. For each wave, the nine nodes with the highest weighted degree (also called node strength) in the human mobility network were chosen, from among those nodes that were infected during a given wave, to serve the role of 'observers' (or sensor nodes) in our source detection algorithm. These represent roughly 1% of the total nodes in the underlying network for each wave. Selecting observer nodes based on degree is expected to improve detection accuracy. (We examine this assertion further in the synthetic experiments described later in this section.) We see that the two resulting sets of observer nodes are largely complementary in nature. The observer nodes for Wave 1, shown in Figure 2, are spread throughout much of the province, to the north and northwest of Durban / Pietermaritzburg and to the south and southeast of Newcastle. In contrast, the observer nodes for Wave 2, shown in Figure 3, are concentrated almost entirely between these two major metropolitan regions. Now consider the results of our source detection methodology applied to these data. Shown in Figure 4 are the posterior probabilities for those ten nodes found to have the largest chance of being a source node, for each of Waves 1 and 2, under both a uniform prior on nodes and a prior proportional to the local R_0. For each of the four combinations of wave and prior, the corresponding ten nodes ended up representing a most probable posterior region of roughly 0.70 posterior mass. In comparing results for the two waves, there is clear evidence that the posterior in Wave 1 is substantially more concentrated on just one (R_0 prior) or two (uniform prior) nodes. On the other hand, while in Wave 2 there is some evidence of similar concentration under the uniform prior, with the R_0 prior there is comparatively less information in the posterior to differentiate the ten most probable nodes. Accordingly, we see that incorporating prior information in the form of the local reproduction numbers (which in turn reflect a combination of community size with contamination and exposure rates) has a substantive impact on the shape of the corresponding posterior distributions. To understand the impact of these differences in posterior shape on the rankings of putative sources (and, hence, the potential impact on decisions of policy, resources, etc.), consider the plots in Figure 5. Overall, it would seem that the ranks are fairly stable, with seven and eight of those nodes ranked top-10 under the uniform prior still remaining in the top-10 under the R_0 prior, for Waves 1 and 2, respectively. However, there are important exceptions.
Fig 3. The same human mobility network as in Figure 2, but with observers and putative sources corresponding to the analysis of Wave 2 of the cholera epidemic. (Note: the fourth putative source is also an observer node.)

For example, in Wave 1, there are four points whose initial rankings change considerably by including R_0 information (three of which drop well out of the top-10), all of which have very small R_0 (i.e., 0.21 or less). Interestingly, one of these four corresponds to the top-ranked node under the uniform prior, which nevertheless remains top-10 under the R_0 prior (i.e., ranked 7th), suggesting that the evidence in the data towards it being a source is particularly strong. On the other hand, another of these nodes drops from third to 16th. Although there is no ground truth for these data, some conjectures can be made based on these results. As can be seen from the map in Figure 2, these two nodes (i.e., the first and third ranked putative sources under the uniform prior) are in fact geographically quite close together and located in the vicinity of Durban. They correspond roughly to the town of Verulam and to Westville, respectively. The first is located on the coast about 27 kilometers north of Durban, and the second, about 10 kilometers to the west of Durban. Both have comparatively large populations and low R_0. In contrast, the node corresponding to an area called Eshane has a small population (~500) with a very large R_0 (7.18), and yet is ranked the second most likely source under either choice of prior. Eshane is about 45 kilometers east of Greytown, a town situated on the banks of a tributary of the Umvoti River in a fertile area that produces timber, and which sits at the nexus of multiple regional routes (i.e., R33, R74 and R622) which might help the waterborne disease, cholera, to spread. Examination of the epidemic time course for Wave 1 shows that the wave was first found to spread largely along the coast (see Supplemental Figure 6). Together, therefore, these observations suggest that the results of our analysis of Wave 1 can be interpreted as saying that either (i) the epidemic originated in the interior (near Eshane) and was brought to the coast, or (ii) it in fact originated on the coast (just outside of Durban). In comparison, the results of our analysis for Wave 2 seem to tell a consistent story, whether under the uniform prior or the R_0 prior, in that the two most likely putative sources are the same for either choice of prior (albeit with their order switched) and are located fairly close together. As seen from the map in Figure 3, application of our methodology indicates that the epidemic source for this wave lies inland, in the more sparsely populated central region of the province between the two major metropolitan areas of Durban / Pietermaritzburg and Newcastle. Under the uniform prior, the most likely source is Ezakheni E, a town of moderate population size and somewhat elevated R_0 (i.e., 1.67). About half as likely is the town of Estcourt, about 60 kilometers south, also of moderate (although smaller) population size but with substantially larger R_0 (i.e., 3.62). Alternatively, under the R_0 prior, these two nodes have nearly equal (and lower) posterior probability of being a source.
Examination of the epidemic time course for Wave 2 shows that, while the earliest reported cases were to the north and northeast of the region surrounding these two nodes (i.e., near Newcastle), the bulk of the infections during this wave seemed to concentrate in this region (see Supplemental Figure 7).

Fig 5. Visualization of the extent to which posterior-based rankings of the top ten putative source nodes under the uniform prior (x-axis) change when using the R_0 prior instead (y-axis), for Waves 1 (left) and 2 (right). The size of each point in the scatterplots is in proportion to the R_0 value for the corresponding node.

Therefore, the results of our analysis suggest that this second wave most likely originated from this central region, between the two larger metropolitan areas, and spread outward from there, perhaps through the system of rivers flowing through the area (i.e., Ezakheni E and Estcourt lie close to the rivers Kliprivier and Boesmansrivier, respectively). In the above analyses, a node was said to be infected once the prevalence (i.e. the number of infected individuals) first exceeded 0.1% of the population. Additional analysis shows that our results (i.e., the top-ten ranked nodes) remain robust when this choice of threshold is decreased to 0.09% and even 0.05%, but deteriorate at 0.01% (see Fig 14, in Supplemental Materials). At the same time, thresholds larger than 0.1%, even 0.2%, make the inference procedure fail, since not all observers are infected. Formally, this failure could be avoided through appropriate adjustments to the underlying formulas and procedures (i.e., accounting for right-censoring in the data for these nodes), but one would nevertheless still expect a deterioration in performance.

Synthetic experiments

In order to gain some insight into the reliability of the above results, we conducted a simulation study in the context of the 2000-2002 cholera outbreak. Specifically, we used the generative model underlying our methodology (described above and in Methods) to generate a collection of synthetic outbreaks in order to
• investigate the impact on source estimation performance of changes in certain fundamental implementation details; and
• compare the proposed method with a comparable established approach [12] (see Discussion).
We simulated N = 851 scenarios, where each node was allowed to be the epidemic source in turn. A given source node was infected on Day 1, and the first-arrival time of the epidemic at an arbitrary node is defined as the day on which the prevalence (i.e. the number of infected individuals) first exceeds 0.1% of the population. For each scenario, we then generated 400 realizations, 300 of which were used for training (i.e. estimating the spreading parameters and in turn calibrating the model) and the remaining 100 of which were used for testing, allowing us to compare the accuracy with which source estimates matched the true underlying sources. We investigated the robustness in performance of our methodology as a function of changing the number of observers, using different observer placement strategies, and incorporating prior knowledge or not. Specifically, we varied the
1. number of observers: 9 or 18 observers, representing 1% or 2% of the total number of nodes, respectively;
2. observer placement strategies: random observer selection (random) or high-degree observer placement strategy (high-degree) [12], i.e. selecting observers with the highest (weighted) node degrees in the human mobility network;
3.
incorporation of prior knowledge: an informative prior, where the prior is proportional to each node's R_0 (R_0 prior), or a non-informative prior, following a uniform distribution (uniform). Nodes with larger R_0 are more easily infected, so it is reasonable to let the prior be proportional to these values.

Accuracy of our methodology was quantified using the following four criteria with respect to the true source s*:
1. the probability that the 0.95 credible region contains s*;
2. the size of the 0.95 credible region;
3. the probability that s* is ranked among the Top 10;
4. the mean distance between s* and the estimated source ŝ.

Based on our simulation results, we can conclude the following:
1. The high-degree observer placement strategy outperforms the random placement strategy.
2. The frequency with which the true source s* is ranked in the Top 10 increases, and the mean distance between the true source and the estimate decreases, with an increasing number of observers. At the same time, there is also a small decrease in the coverage probability of the 95% credible region and a much larger decrease in the size of the 95% credible region.
3. When using the high-degree placement of observers, the performance corresponding to 9 observers and that corresponding to 18 observers are comparable.
4. Use of a prior proportional to R_0 yields better results than a uniform prior when the source has large R_0.
5. Using either prior (uniform prior or prior proportional to R_0), empirical coverage probabilities of the 0.95 credible regions are good (> 0.7) for sources with not too small R_0 (> 1.8) or moderate population (log10 > 3.5).
6. It is possible for the 0.95 confidence sets to contain over 100 nodes. However, these sets will be substantially smaller (e.g., tens of nodes) and have good coverage probabilities if the R_0 of the source is not too small (> 1.8) and the population is large (log10 ≥ 4.5).
7. For the probability of the true source being in the Top 10 to be at least 0.5, under a uniform prior, the R_0 of sources should not be too small (> 1.8) and the population should be large (log10 ≥ 4.5). Under a prior proportional to R_0, the R_0 of sources should be larger (over 2.7).
8. Using either prior (uniform or proportional to R_0), if the true source has moderate R_0 (≥ 2.7) and a large population (log10 ≥ 4.5), the distance between true and estimated sources can be smaller than 50 km.
In general, as can be expected, our proposed method has good performance when the source node/city has moderate or large R_0 and population. These simulation results also suggest the following guidelines for usage of our methodology in practice:
1. To monitor the spread of epidemics, placing resources onto transportation hubs, i.e. 'high-degree' nodes, is preferred.
2. Although we will not know the R_0 and the population of the true source beforehand, there are still large chances that we can ensure good (> 0.7) coverage probabilities.
3. Only about 5% of nodes have a large population (log10 ≥ 4.5), thus the chance that we obtain a small credible region (tens of nodes) with reasonable coverage is small. However, if we use 18 observers, in most cases we can ensure good coverage (as Item 5 in the conclusion list describes) with credible regions of fewer than 100 nodes, which are usually feasible.
4.
If we use a prior proportional to R_0, there is a large chance (41.5%) that the probability of the true source being in the Top 10 is at least 0.5 (we only require the source to have large R_0, > 2.7), compared with the uniform prior case, where there is less than a 5% chance that the source fulfills the requirements described in Item 7 of the conclusion list.

Discussion

Tracking the source of an epidemic outbreak is of crucial importance in epidemiology. Indeed, the identification of the area or the human community that sparked an outbreak is useful not only for short-term disease control, i.e. focusing interventions in the area in an effort to stop transmission, but also for the long-term management of the disease, as such an area could be the designated target of future interventions to curb the risk of new outbreaks. Therefore, the source detection problem is relevant not only in real time, but also retrospectively on past data. However, correct identification is often impaired by the lack of widespread and efficient surveillance networks, especially in developing countries. Even in cases where such health infrastructures exist, simple analysis of the data to identify the area where the first cases were reported might lead to an incorrect identification of the true source. In fact, the real beginning of an outbreak could go unreported because initial cases are misdiagnosed. This is the case, for instance, with cholera, for which lab confirmation of suspected cases is typically performed routinely only when an ongoing outbreak is declared. In this context, thus, mathematical models for source identification are of primary importance. In this paper, we developed a framework that allows the probabilistic identification of the source based on first-arrival times of the infection at a small subset of nodes (e.g. human communities) used as observers, thus potentially reducing the cost to set up and maintain a surveillance network. From a mathematical perspective, we recast the source detection problem as identifying a relevant mixture component in a multivariate Gaussian mixture model. The framework is complemented by a stochastic, spatially-explicit epidemiological model that embeds information about the human mobility network and is used to calibrate the parameters characterizing the probability distributions of first-arrival times. With our approach we address the major challenges stated by [5]. Building on the sensor-based Gaussian mixture approach, our data needs are realistic for practical settings. Additionally, the implementation is computationally feasible in large networks. Furthermore, we allow generic networks for the underlying spreading pattern (in contrast to assuming a tree-like structure). Moreover, adopting a Bayesian perspective opens the possibility to seamlessly integrate available nontrivial prior knowledge from previously observed spreading patterns or other data sources. While there are many methods for source detection, it is comparatively more rare that they also quantify the estimator accuracy, and none does so with an informed Bayesian prior probability distribution. Also note that our uncertainty quantification does not take into account uncertainty in the model components, including the gravity model and human mobility, the epidemiological model for cholera spread, etc. These components are almost certainly idealized and, at best, useful rather than correct.
Thus practitioners should interpret the resulting numbers as guiding decision making rather than as absolute truths. We define (a) statistically well-defined region(s) of nodes that are likely to be the spreading origin of the observed process. Because this region need not be contiguous, it also arguably provides some information on the prospect of multiple sources (although we do not formally solve here the problem of detecting multiple sources, which is notably more complex). Among existing methods in the literature, our method can perhaps be viewed as closest to the seminal work of Pinto et al. [12], which has been shown to be quite competitive with many other methods under a variety of scenarios [5]. However, for the specific context of water-borne diseases studied here, our method substantially outperforms that of Pinto et al. in simulation (see Fig 11, in Supplemental Materials). This advantage illustrates the value added by our use of highly informative prior information, i.e., through (i) utilization of the full human mobility network, (ii) encoding of prior information on quality of water and toilet facilities, and (iii) integration of a stochastic, spatially-explicit epidemiological model to calibrate the means and covariances in our Gaussian mixtures. The capability of the proposed method is demonstrated in the context of the 2000-2002 cholera outbreak in the KwaZulu-Natal province, through analyses of both actual data from the outbreak and a corresponding collection of synthetic experiments. In the experiments, we showed that the proposed method performs well if the source has moderate or larger R_0 and population. Examination of experimental output suggests that the decay in performance in the case of small R_0 or population may be due to a lack of fit with the assumed multivariate Gaussian in the mixture model at the core of our framework. While simulation suggests that the Gaussian can be quite reasonable otherwise (see Fig 12 and Fig 13, in Supplemental Materials), the use of more general mixture models may help (e.g., nonparametric Bayesian mixtures [20]). However, it is not immediately apparent how best to integrate such models with an underlying epidemiological model. Alternatively, one might instead specify a mixture of epidemiological models, each defined conditional on a different node being the source. However, posterior-based inference of the source under this approach is likely to be nontrivial to implement, since even just parameter estimation in a single such version of our underlying epidemiological model has been found to require the use of sophisticated Markov chain Monte Carlo algorithms [21]. Accordingly, our proposed approach (detecting sources through posterior-based inference in Gaussian mixture models, with mean and covariance parameters informed by epidemiological models) may be viewed as a compromise that allows for increased interpretability and computational efficiency, arguably blending statistical and mathematical modeling in the spirit of data assimilation techniques. We note that our analysis of the KwaZulu-Natal data is retrospective in nature: we effectively work from those nodes with sufficiently high prevalence and infer 'backwards' through the human mobility network to putative sources. Importantly, those nodes with insufficient prevalence do not contribute to the analysis (i.e., the difference in observer times with these nodes is right-censored and hence effectively infinite).
A prospective study would potentially yield different results, depending on the choice of observer set. For example, if the observer set is chosen to contain the union of the two sets we have used in this paper (i.e., for Waves 1 and 2, respectively), then the results will be unchanged. On the other hand, to the extent that a common observer set contains only part or none of the two wave-specific sets we used, the results will change, and can be expected to degrade. Finally, the framework and the results presented herein allow a preliminary delineation of a road-map to set up a surveillance network based on the proposed method in a country. The first step should consist in retrieving data on the spatial distribution of population. Available global sources are, e.g., WorldPop (www.worldpop.org), LandScan (landscan.ornl.gov) or the Global Human Settlement Layer (ghsl.jrc.ec.europa.eu). Then, possible census data on WASH (Water, Sanitation and Hygiene) conditions, e.g. access to tap water or toilet facilities, should be sought in order to characterize the possible spatial heterogeneity of the basic reproduction number R_0. Once such information is collected, the spatially-explicit stochastic epidemiological model can be set up. If data on past outbreaks are available, critical epidemiological parameters can be estimated using such information. Otherwise, reference literature values for such parameters can be assumed. As previously described, simulations of the epidemiological model are used to calibrate the parameters of the probability distributions of first-arrival times. An analysis such as the one reported in the section Synthetic experiments is also recommended in order to select the best strategy to allocate the observer nodes. Once the number and the location of the observer nodes are decided, an epidemiological surveillance system instructed to routinely perform lab testing for each suspect case of the selected disease is to be established. If an outbreak occurs, data on first-arrival times at the selected nodes should readily allow the inference of the possible region of the source of the outbreak, thus enabling fast and effective interventions.

Gaussian source estimation with prior information

Following [12], we cast the source detection problem as identifying the relevant mixture component in a multivariate Gaussian mixture model. However, from the Bayesian perspective we adopt here, whereas the authors in [12] use a uniform prior over sources in their formulation, we incorporate substantially more structured prior information. This structure arises both through the use of potentially nonuniform priors over sources (i.e., informed by local values of R_0) and through calibration of the multivariate Gaussian parameters using a human mobility network and a stochastic epidemiological model. Let π_s be the prior probability of node s ∈ V being the source and let t be the K-dimensional vector of observed first-arrival times. Conditional on s being the true source, t is assumed to follow a multivariate Gaussian distribution, with mean vector µ_s and covariance matrix Λ_s. Denote the corresponding density function by φ(t; µ_s, Λ_s). Then t has density
$$f(t) = \sum_{j=1}^{N} \pi_j \, \phi(t; \mu_j, \Lambda_j). \quad (3)$$
A point estimate ŝ of the true source, say s*, can be obtained by maximizing the posterior probability computed by Bayes' theorem, i.e.
$$\hat{s} = \arg\max_{s \in V} P(s \mid T = t) = \arg\max_{s \in V} \frac{\pi_s \, \phi(t; \mu_s, \Lambda_s)}{\sum_{j=1}^{N} \pi_j \, \phi(t; \mu_j, \Lambda_j)}.$$
Since the denominator does not depend on s, the formula above can be written as
$$\hat{s} = \arg\max_{s \in V} \left[ \log \pi_s + \log \phi(t; \mu_s, \Lambda_s) \right].$$
Hence, this approach is equivalent to standard linear discriminant analysis for K-dimensional classification, with pre-defined class weights [33].
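To make the estimator concrete, the following is a minimal sketch of the posterior computation, the MAP estimate ŝ, and the set estimate of equation (2). It assumes the per-source parameters µ_s and Λ_s have already been calibrated (as described below); the function and variable names are illustrative, and this is a sketch rather than the NetOrigin implementation.

```python
# Sketch of posterior-based source estimation in the Gaussian mixture model;
# mus[s] and covs[s] are the calibrated mean / covariance for candidate s.
import numpy as np
from scipy.stats import multivariate_normal

def source_posterior(t, mus, covs, prior):
    """t: (K,) observed first-arrival times; mus: (N, K); covs: (N, K, K);
    prior: (N,) nonnegative, summing to one. Returns P(s | T = t) over nodes."""
    # Work in log space for numerical stability: log pi_s + log phi(t; ...).
    logp = np.array([np.log(prior[s]) +
                     multivariate_normal.logpdf(t, mus[s], covs[s])
                     for s in range(len(prior))])
    logp -= logp.max()                        # guard against underflow
    post = np.exp(logp)
    return post / post.sum()                  # normalize by the mixture density

def map_source(post):
    return int(np.argmax(post))               # the point estimate of eq. (1)

def hpd_set(post, alpha=0.05):
    """Most probable nodes jointly carrying at least 1 - alpha posterior mass."""
    order = np.argsort(post)[::-1]            # most probable nodes first
    n = int(np.searchsorted(np.cumsum(post[order]), 1.0 - alpha)) + 1
    return order[:n]
```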
The Gaussian source estimator of [12] is a special case, assuming a uniform prior, i.e., π_1 = · · · = π_N = 1/N. A set estimate C for s* may be obtained in the form of a highest posterior density (HPD) region, by applying the largest threshold τ_α corresponding to the choice of a pre-specified α, so that
$$C = \{ s \in V : P(s \mid T = t) \ge \tau_\alpha \} \quad \text{with} \quad \sum_{s \in C} P(s \mid T = t) \ge 1 - \alpha.$$
The HPD region fulfills the condition that P(s | T = t) ≥ P(s' | T = t) for all s ∈ C and s' ∉ C, and consequently minimizes the volume of the area covered, among all sets with at least 1 − α posterior mass [34]. Note that this definition does not consider distance with respect to the network connectivity. Furthermore, the HPD region does not necessarily need to be a connected subgraph of the network.

Parameter calibration using a human mobility network and the stochastic epidemiological model

In order to produce the point and/or set estimates ŝ and C, values must be available for the mean and covariance parameters µ_s and Λ_s of each Gaussian component. There are deterministic estimates available for these parameters, which can be derived easily from network topology information only, using shortest path lengths between potential source candidates and sensors [12]. But µ_s and Λ_s, representing first and second order information on the behavior of the first-arrival times t, are reflective of what in the current setting is typically a highly complex stochastic phenomenon. Accordingly, we instead calibrate these values in our model using a stochastic epidemiological model that integrates human mobility network information. A stochastic, spatially-explicit epidemiological model for the transmission of cholera, a prototypical waterborne disease, has been introduced in [23]. This model considers a set of human communities interconnected by a mobility network and describes the temporal evolution of the integer numbers of susceptible (S_i), infected (I_i), and recovered (R_i) individuals hosted in the nodes i of the network. Additionally, the model incorporates the evolution of the environmental concentration of bacteria (B_i). Specifically, all events involving human individuals (births, deaths and changes of epidemiological status) are treated as stochastic events that occur at rates that depend on the state of the system. The possible events and their corresponding rates are shown in Table 1, which lists transitions and rates of occurrence for all possible events indexed by a given node i; the generic event k occurs in node i at rate ν_i^k. The population of each node is assumed to be at demographic equilibrium, with µ being the human mortality rate and µH_i a constant recruitment rate. The force of infection, which represents the rate at which susceptible individuals become infected due to contact with contaminated water, is defined as
$$\lambda_i = \beta_i \left[ (1 - m)\,\frac{B_i}{K + B_i} + m \sum_{j} Q_{ij}\,\frac{B_j}{K + B_j} \right].$$
The parameter β_i represents the exposure rate. The fraction B_i/(K + B_i) is the probability of becoming infected due to exposure to a concentration B_i of V. cholerae, K being the half-saturation constant [35]. Because of human mobility, a susceptible individual residing at node i can be exposed to pathogens in a destination community j. This is modeled by assuming that the force of infection in a given node depends on the local concentration B_i for a fraction (1 − m) of the susceptible hosts and on the concentrations B_j of the surrounding communities for the remaining fraction m. The parameter m thus represents the community-level probability that individuals travel outside their node and is assumed, in this formulation, to be node-independent. The concentrations B_j are weighted according to the probabilities Q_ij that an individual living in node i reaches j as a destination; the matrix Q thus epitomizes information about human mobility.
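As a concrete sketch of this mobility-weighted exposure, the code below computes Q (using the gravity form spelled out in the next paragraph) and the force of infection λ. Array shapes and names (H, d, K_half, etc.) are illustrative assumptions, not the authors' code.

```python
# Sketch of the gravity-model mobility matrix and the mobility-weighted
# force of infection lambda_i defined above; H: populations, d: distances.
import numpy as np

def gravity_matrix(H, d, D):
    """H: (N,) population sizes; d: (N, N) pairwise distances (km);
    D: shape factor of the exponential deterrence kernel (km)."""
    A = H[None, :] * np.exp(-d / D)           # attractiveness times deterrence
    np.fill_diagonal(A, 0.0)                  # destinations exclude the home node
    return A / A.sum(axis=1, keepdims=True)   # rows sum to one

def force_of_infection(beta, B, K_half, m, Q):
    """beta: (N,) exposure rates; B: (N,) bacterial concentrations;
    K_half: half-saturation constant; m: probability of travelling."""
    p = B / (K_half + B)                      # local probability of infection
    return beta * ((1.0 - m) * p + m * Q @ p)
```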
Formally, human mobility patterns are defined according to a gravity model in this approach; Q_ij is defined as
$$Q_{ij} = \frac{H_j \, e^{-d_{ij}/D}}{\sum_{k \neq i} H_k \, e^{-d_{ik}/D}},$$
where the attractiveness factor of node j depends on its population size H_j, while the deterrence factor is assumed to depend on the distance d_ij between the two communities and is represented by an exponential kernel (with shape factor D). The concentration B_i(t) is modeled as a stochastic variable in continuous time, as the number of bacteria is expected to be large enough to allow a continuous representation. Its evolution is described by
$$\frac{dB_i}{dt} = -\mu_B B_i + \frac{p_i}{W_i} I_i,$$
where µ_B is the mortality rate of the bacteria in the environment, p_i is the rate at which bacteria produced by one infected person reach and contaminate the local water reservoir of volume W_i, and I_i is the number of infected. Assuming a single node s as the source of the epidemic, the stochastic model just described allows for the generation of multiple Monte Carlo realizations of the outbreak. From these realizations we may obtain estimates of the mean and covariance parameters µ_s and Λ_s for the first-arrival times t at the observers. (Methods of numerical integration or similar might be used here instead.) This procedure is repeated assuming each node in turn as a potential source. To estimate µ_s accurately we rely on large-sample properties of simple averaging. However, our estimation of Λ_s was found to benefit from the use of shrinkage methods. We adopted the approach of [36], which assumes zero covariance among off-diagonal elements (supported by our data, most likely due to the sparse and distributed nature of our observer nodes), but heterogeneous variances, which are estimated using a distribution-free shrinkage towards the median. Additional implementation details may be found in the Supplemental Text (S1 File). In particular, information on how we set the various rate parameters in our stochastic model may be found therein, with corresponding pointers to the supporting literature.
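A minimal sketch of this per-source calibration step: the sample mean is paired with a diagonal covariance whose variances are shrunk towards their median. The fixed shrinkage weight `lam` here is a placeholder assumption; the cited approach estimates the weight from the data.

```python
# Sketch: calibrate (mu_s, Lambda_s) for one candidate source from the
# training realizations; `lam` is an illustrative fixed shrinkage weight.
import numpy as np

def calibrate_component(arrivals, lam=0.5):
    """arrivals: (R, K) observer first-arrival times over R training runs."""
    mu = arrivals.mean(axis=0)
    v = arrivals.var(axis=0, ddof=1)              # per-observer variances
    v_shrunk = lam * np.median(v) + (1.0 - lam) * v
    return mu, np.diag(v_shrunk)                  # diagonal covariance
```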
Fig 12. Visualizations illustrating that the quality of the Gaussian approximation is quite reasonable, under model assumptions. To illustrate, we used the 1st-ranked inferred source in the second wave as a source and simulated outcomes according to our model. The marginal distributions for arrival times at the various Wave 2 observers are shown above. We can see that a normal approximation agrees well with the histograms of arrival times.

Sensitivity analysis shows that our results (i.e., the top-ten ranked nodes) remain robust when the infection threshold is decreased to 0.09% and even 0.05%, but deteriorate at 0.01%.

S1 File. Additional details.

Small-world analysis of human mobility network

To interrogate the human mobility network for the small-world property, we implemented a variation on the analysis described in [37, Ch. 5.5.2], as adapted for weighted networks in [38]. Specifically, this consists of (i) calculating the average shortest path length and the (weighted) clustering coefficient for the human mobility network, and (ii) comparing the resulting values to their distributions under an analogous random graph ensemble model. Here the random graph ensemble was defined through permutation of the weights in the original human mobility network (a fully connected, weighted network). Shortest path length was calculated based on a distance between nodes defined as the inverse of the corresponding weight on the respective link between those nodes. We find that whereas the average shortest path length in the original network (70.67) is on par with typical values in the random ensemble (minimum of 72.74, maximum of 73.36), its clustering coefficient (0.135) is orders of magnitude larger than the typical values in the random ensemble (minimum of 0.00241, maximum of 0.00246). In other words, while the original network seems to share small shortest-path distances with a random graph, it exhibits substantially more clustering. Together, these two aspects suggest (weighted) small-world behavior.

Implementation details

We adopted the stochastic model of cholera transmission proposed in [23], focusing on the human mobility network and discarding the river network component. For the application to KwaZulu-Natal, we use the domain discretization and the demographic information from [21]. Following [23], we introduce for each node i the contamination rate θ_i, which embeds information regarding the parameters p_i, K and W_i. We modulate the exposure rate, β_i, and the contamination rate, θ_i, using census information on access to safe water and toilet facilities, respectively, as in [21]. Specifically: β_i = β_max · (no water access rate) and θ_i = θ_max · (no toilet access rate). The local R_0 for each node is then calculated from β_i, θ_i, and the demographic and epidemiological parameters of the model. Table 2 summarizes the model parameters along with the literature references or notes for the assumed values, including:
- ρ = 0: immunity loss rate (1/day); as disease-induced immunity is believed to last some years, it can be neglected when studying the initial phase of an outbreak
- β_max = 1: maximum exposure rate (1/day) [21]
- D = 50: distance scaling parameter (km) [23]
- m = 0.3: probability that individuals leave their original nodes [23]
Finally, we set θ_max, which in turn controls the distribution of R_0, so as to obtain an overall cholera incidence and time to epidemic peak comparable to the ones observed. This resulted in θ_max = 15 day^-1. We used the model setup described above to simulate epidemic waves for a given source. We assume 0.1% of the population is symptomatically infected in the source node, i.e., I_i(t = 0) = H_i · 0.001, where i is the index of the source and H_i is the population of that node. We could further calculate the initial recovered population based on the symptomatic ratio σ: R_i(t = 0) = (1 − σ)/σ · I_i(t = 0). For all other nodes we set S_i(t = 0) = H_i, I_i(t = 0) = R_i(t = 0) = B_i(t = 0) = 0. For each simulated realization, we look at a 100-day time range at ∆t = 0.1 day resolution. The numbers of susceptible, infected, recovered and the corresponding cumulative cases are continuously updated according to the events in Table 1. The bacterial concentration vector B is instead updated at every ∆t by solving the concentration equation above analytically, assuming a constant bacterial input (i.e. a constant number of infected people) for the duration of the timestep.
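A minimal sketch of that analytic one-step update, i.e., the exact solution of the linear concentration equation with I_i held fixed over ∆t; the names are illustrative.

```python
# Exact update of dB/dt = -mu_B * B + (p / W) * I over a step of length dt,
# holding the number of infected I fixed for the duration of the timestep.
import numpy as np

def update_concentration(B, I, p, W, mu_B, dt):
    """All node-wise quantities are (N,) arrays; dt in days."""
    decay = np.exp(-mu_B * dt)
    steady = (p * I) / (W * mu_B)      # level approached under constant input
    return B * decay + steady * (1.0 - decay)
```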
At each time point, we track the following five quantities: susceptible, infected, recovered, bacterial concentration and cumulative cases at each node. In this way, we tracked the number of infected in each node over 100 days. Thus, for a particular source i and a set of observers, we obtained the observer infection dates for each realization. As mentioned in the Synthetic experiments section, we generated 400 realizations, 300 of which were used for training, i.e., the 300 vectors of observer infection dates were used to estimate the mean vector µ_s and the covariance matrix Λ_s (see Methods, Gaussian source estimation with prior information). Using either the uniform prior or the prior proportional to R_0, we could then obtain source estimates for the remaining 100 realizations, or estimates for the real application waves. Simulating the realizations is the most computationally intensive part. We conducted parallel computing using the Boston University Shared Computing Cluster. For each single job, we generated 5 realizations for a single node. The total simulation procedure took about 10 hours.
2020-11-03T02:00:53.511Z
2020-10-30T00:00:00.000
{ "year": 2020, "sha1": "4512ae0a692a414b4d89335568e8117ad53f391f", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1008545&type=printable", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "75a402f1b051cd48f208cc3fe78007a94ffa49a6", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Medicine", "Computer Science", "Mathematics" ] }
252903213
pes2o/s2orc
v3-fos-license
Subcortical motor ischemia can be detected by intraoperative MRI within 1 h – A feasibility study

Introduction: To achieve a maximum extent of resection, an intraoperative MRI (ioMRI) scan is frequently performed. Intraoperative diffusion-weighted imaging (DWI) is not standardly performed and has been described to be inferior to early postoperative MRI regarding the detection of ischemia.

Research question: This feasibility study evaluates the detection of ischemia by ioMRI and its clinical relevance in patients with motor-eloquent gliomas.

Material and methods: Of 262 glioma patients, eight patients (3.1%) showed an amplitude loss of continuous motor evoked potential (MEP) monitoring during resection before the ioMRI scan (group loss of MEP = LOM). In these patients and a matched-pair cohort (MPC) of glioma resections without MEP loss, we performed additional ioMRI sequences including turbo-spin-echo (TSE)- and echo-planar-imaging (EPI)-DWI and perfusion-weighted imaging (PWI). The clinical outcome was measured 5 days and 3 months after surgery.

Results: The mean ± standard deviation time between loss of MEPs and ioMRI was 63.0 ± 8.7 min (range: 40–84). Ischemia within the motor system could be detected by ioMRI in group LOM in 100% of EPI-DWIs, 75% of TSE-DWIs, and 66.7% of PWIs. No sequence showed motor ischemia in the MPC group. All patients of group LOM and no patient of group MPC suffered from a permanent motor deficit.

Discussion and conclusion: The current results provide data on the time sequence of ischemia apparent in MRI sequences, which is superior to previous data on symptomatic stroke patients on this topic. The early detection of ischemia adds an additional predictor for the long-term outcome of patients and shows the reason for an intraoperative loss of MEPs. Thereby the performance of intraoperative EPI-DWI might be justified after confirmation of the present data in a larger cohort. Subcortical ischemia can be detected by ioMRI after MEP loss during the resection of motor-eloquent gliomas and was clinically relevant in all cases.

Introduction

The microsurgical resection of motor-eloquent gliomas must avoid surgery-related deficits while achieving a maximum extent of resection (EOR) for optimal oncological treatment (Stummer et al., 2008; Wijnenga et al., 2018). The gold standard technique for the surveillance of motor function is intraoperative neuromonitoring (IONM) (Krieg et al., 2012; Neuloh et al., 2007; Kombos et al., 2001; Deletis, 1993). Postoperatively, the EOR should be examined within 24-48 h and no later than 72 h after surgery according to the Response Assessment in Neuro-Oncology (RANO) criteria, not least to obtain a baseline image for later adjuvant therapies and follow-up (Wen et al., 2010). Thus, the necessity of a postoperative magnetic resonance imaging (MRI) has repeatedly been reported (Henegar et al., 1996; Ulmer et al., 2006; Smith et al., 2005). Furthermore, particularly in patients with high-grade gliomas, intraoperative MRI (ioMRI) has been shown to be beneficial regarding the EOR, the patients' overall survival and quality of life, at least with level 2 evidence (Kubben et al., 2011; Li et al., 2017; Jenkinson et al., 2018). In contrast, it has been discussed that ioMRI might be inferior to early postoperative MRI regarding the detection of ischemia due to the late appearance of ischemic changes, which could be overlooked in diffusion-weighted images (DWI) of ioMRI.
Recently, a study compared ischemic lesions as measured by ioMRI and early postoperative MRI within the same scanner in patients who underwent resection of gliomas. The authors came to the conclusion that DWI sequences for the detection of ischemic lesions should only be performed during the early postoperative MRI, since a large proportion of ischemia had been overlooked during ioMRI scans. Most importantly, ischemic lesions were mainly asymptomatic in this publication (Masuda et al., 2018). However, earlier studies have already shown that new postoperative functional deficits are more likely associated with ischemic lesions than with damage to eloquent brain areas (Gempt et al., 2013a, 2013b). Thus, the present feasibility study aims to evaluate two hypotheses: 1) Subcortical ischemia can be detected within the motor system by ioMRI in patients with a loss of motor evoked potentials (MEP) during the resection of motor-eloquent gliomas. 2) Subcortical ischemia as detected by ioMRI is clinically relevant and predicts the patients' functional long-term outcome.

Ethics

The study was approved by the local ethics board (Ethikkommission der TU München, Ismaninger Str. 22, 81675 Munich, Germany; registration numbers: 336/17, 192/18, 18/19). The study was performed in accordance with the Declaration of Helsinki. All included patients provided written informed consent.

Eligibility criteria

We prospectively included patients with suspected cortical or subcortical motor-eloquent gliomas as defined by the preoperative MRI scan who were scheduled for resection at our department. The indications for tumor resection were made by the interdisciplinary neuro-oncological board. Patients with an age of less than 18 years or general MRI exclusion criteria were excluded.

MRI scan

We performed a structural MRI scan in all patients (3 T MR scanner Achieva, Philips Medical System, Netherlands B.V.) according to the standard MRI protocol, including DTI sequences with 32 orthogonal diffusion directions. The same MRI scan was performed postoperatively within 48 h after surgery for the final determination of the EOR. A threshold of 5% of residual tumor was defined to separate gross total resection (GTR) from subtotal resection (STR) (Bloch et al., 2012; Southwell et al., 2018).

Intraoperative neuromonitoring

Total intravenous anesthesia (TIVA) was used in all cases. For transcranial electrical stimulation (TES) for MEP monitoring, we used an ISIS stimulator with stimulation needles (inomed Medizintechnik, Emmendingen, Germany) at C3 and C4 as determined by the 10-20 electroencephalography (EEG) system. For direct cortical stimulation (DCS) for MEP monitoring, we positioned a strip electrode with 4 contacts over the primary motor cortex and an additional needle electrode at Fpz as the cathodic pole (inomed Medizintechnik, Emmendingen, Germany). The decision on the use of DCS versus TES, and for TES on C3-C4 stimulation versus C1-C2 stimulation, was made preoperatively based on the tumor location as shown by MRI, the planned approach to the tumor, cortical versus subcortical eloquence, and the cortical or subcortical relevance of IONM in the individual case. Standardly, compound muscle action potentials (CMAP) were recorded within at least three muscles for upper extremity monitoring, and within one muscle for lower extremity monitoring (Krieg et al., 2012). We used a train-of-five stimulation technique.
The stimulation parameters were adjusted in case of MEP failure prior to resection, and a baseline measurement was performed after the opening of the dura as a reference value for later responses. During the resection, MEPs were recorded continuously at intervals of 10 s or less. In case of an amplitude decline or complete loss, technical issues or anesthesiological causes were ruled out first before forwarding the warning to the surgeons. After forwarding to the surgeon and prior to the final documentation of an amplitude decline or complete loss, tumor resection was stopped and irrigation with Ringer's solution and vasodilators was performed. Similarly, anesthesiologists were asked about specific events and to optimize parameters such as blood pressure. In case of still-persisting MEP changes, an amplitude decline of more than 50% of the baseline amplitude was considered significant and was documented as a decline for the present study if there was no recovery above 50% of the baseline amplitude. The onset of an amplitude decline, a complete loss, or recovery was documented to the minute. Furthermore, the number of declined or lost MEPs with respect to specific muscles was documented (Krieg et al., 2012; Neuloh et al., 2007; Taniguchi et al., 1993). Microsurgical tumor resection and intraoperative MRI scan Microsurgical tumor resections were performed using an ultrasonic aspirator under general anesthesia and with neuronavigation as well as continuous IONM with TES or DCS monitoring (Krieg et al., 2012, 2013). Our department has a two-room ioMRI setup which is used for all glioma cases with an MRI-compatible head clamp with included coil array (Noras MRI products, Hoechberg, Germany). Preoperatively determined cortical MEP-positive sites from navigated transcranial magnetic stimulation (nTMS) motor mapping and nTMS-based diffusion tensor imaging fiber tracking (DTI FT) of the corticospinal tract (CST) were displayed on the neuronavigation system throughout the whole resection. After the resection was completed, IONM needles were removed for safety reasons prior to ioMRI scanning. The scanner room for the ioMRI was cleaned 40 min before the ioMRI scan. Subsequently, ioMRIs were performed according to the standard ioMRI protocol at our department (Dinevski et al., 2017). In short, after the completion of the initial resection and hemostasis, the resection cavity was refilled with Ringer's solution and prophylactically closed with a collagen sponge and a rough suture. The approach was covered with three sterile layers. The patient was then transferred to the ioMRI scanner after the completion of checklists. This interval for the preparation of the ioMRI was the most time-consuming factor affecting the measured durations between the loss of MEPs and ioMRI. We performed a protocol of sequences including turbo-spin-echo (TSE)- and echo-planar-imaging (EPI)-diffusion-weighted imaging (DWI) with corresponding apparent diffusion coefficient (ADC) maps as well as perfusion-weighted imaging (PWI). Data analysis All patients underwent clinical examination including motor testing and documentation according to the British Medical Research Council (BMRC) scale (0 = no contraction, 1 = flicker or trace of contraction, 2 = active movement with gravity eliminated, 3 = active movement against gravity, 4 = active movement against gravity and resistance, 5 = normal power) preoperatively, postoperatively, and at 3 months follow-up. Surgical details were documented in a standardized manner.
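The warning criterion described above is essentially a thresholding rule on the running MEP amplitude relative to baseline. The following minimal Python sketch is our illustration only (the function and variable names are hypothetical, not the authors' software):

```python
def significant_mep_decline(baseline_uv, amplitudes_uv, threshold=0.5):
    """Flag a significant MEP decline: the latest recorded amplitude
    stays below `threshold` (50%) of the baseline, i.e., no recovery
    above 50% of baseline has occurred."""
    if baseline_uv <= 0:
        raise ValueError("baseline amplitude must be positive")
    return amplitudes_uv[-1] < threshold * baseline_uv

# Example: baseline 200 uV, amplitudes drop to 80 uV without recovery.
print(significant_mep_decline(200.0, [190.0, 150.0, 90.0, 80.0]))  # True
```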
MRI scans with regard to EOR and especially with regard to the detection of ischemia were independently rated by at least two board-certified neuroradiologists and two board-certified neurosurgeons. In case of disagreement, a further board-certified neuroradiologist and neurosurgeon was consulted. Ischemic lesions in a thin linear rim around the resection cavity were excluded from the present analysis based on prior publications (Smith et al., 2005). To evaluate the reliability of detecting ischemia within the motor system by ioMRI and to compare outcome parameters and ioMRI procedures, we performed a matched-pair analysis (group matched-pair cohort = MPC). Patients without loss or decline of MEPs during IONM who also received the extended ioMRI protocol were matched according to baseline characteristics, tumor location, and tumor entity. Statistical analyses were performed using GraphPad Prism software (GraphPad Prism 8, San Diego, CA, USA). The baseline characteristics of the two groups were compared by independent t-tests and Fisher's exact or chi-square tests. A p-value < 0.05 was considered significant. Initially, Gaussian distribution was tested for all measures. Because of the small cohort size, Gaussian distribution was additionally tested using the Shapiro-Wilk test. In case of rejection of the null hypothesis, further calculations for the tested data were performed using the Mann-Whitney test. Results Between July 2018 and January 2020, we screened 262 patients who underwent microsurgical glioma resection. Of these, eight patients (3.1%; 4 female) with a mean ± standard deviation (SD) age of 52.4 ± 16.0 years showed a loss of MEPs as measured by IONM during microsurgical tumor resection (= group loss of MEP = LOM). These patients were included in the present study and received special sequences for the detection of ischemia during ioMRI according to the study protocol. Additionally, we analyzed a matched-pair cohort (= group matched-pair cohort = MPC) of eight patients (3.1%; 4 female) with a mean age of 59.1 ± 12.8 years from the same cohort, who did not show a loss of MEPs as measured by IONM during microsurgical tumor resection and also received special sequences for the detection of ischemia during ioMRI. Histopathologically, all tumors were classified as gliomas. Table 1 shows detailed patient and tumor characteristics (Table 1). Surveillance of motor function during tumor resection was performed by TES and DCS monitoring in six and two cases of group LOM, respectively, and by TES monitoring in all cases of group MPC (p = 0.2333). The ranges of intensities for TES and DCS monitoring were 65-130 mA and 5-13 mA, respectively. The mean duration of surgery in group LOM was 238.8 ± 47.5 (range 159-292) min and 245.3 ± 62.2 (range 125-359) min in group MPC (p = 0.8295). The mean duration of ioMRIs (time between insertion and removal of the collagen sponge) was 68.1 ± 2.0 (range 65-70) min in group LOM and 75.4 ± 14.0 (range 40-86) min in group MPC (p = 0.0095). This discrepancy between the two groups could not be attributed to a specific cause. No adverse events occurred in either group. The mean duration between the loss of MEPs and the first DWI sequence of the ioMRI in group LOM was 63.0 ± 13.4 (range 40-84) min (Table 2). In detail, we measured mean intervals from the loss of MEPs to the scanning of EPI sequences of 65.0 ± 12.7 (range 49-82) min, TSE sequences of 65.3 ± 11.8 (range 51-84) min, and PWI sequences of 62.2 ± 12.7 (range 46-79) min.
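As an aside, the decision flow in the statistics paragraph above (Shapiro-Wilk normality check, then an independent t-test or the Mann-Whitney test) can be reproduced with standard tools. The sketch below is our illustration in Python/SciPy, not the authors' GraphPad Prism workflow, and the example values are invented:

```python
from scipy import stats

def compare_groups(lom, mpc, alpha=0.05):
    """Shapiro-Wilk on each group; if normality is not rejected in
    either group, use an independent t-test, else Mann-Whitney U."""
    normal = (stats.shapiro(lom).pvalue >= alpha
              and stats.shapiro(mpc).pvalue >= alpha)
    if normal:
        return stats.ttest_ind(lom, mpc).pvalue
    return stats.mannwhitneyu(lom, mpc, alternative="two-sided").pvalue

# Invented ioMRI durations (min) for the two groups:
print(round(compare_groups([68, 70, 65, 69], [75, 86, 40, 80]), 4))
```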
The mean duration between the loss of MEPs and the detection of ischemia was 65.0 ± 12.7 (range 49-82) min for EPI-DWI and EPI-ADC, 69.0 ± 11.2 (range 58-84) min for TSE-DWI, 67.3 ± 11.2 (range 57-84) min for TSE-ADC, and 60.8 ± 12.3 (range 46-79) min for PWI (Fig. 1). Loss of MEPs in group LOM correlated with subcortical ischemia within the motor system as measured by ioMRI in 100% of cases. Single ioMRI sequences correlated with loss of MEPs in 100% (EPI-DWI), 100% (EPI-ADC), 75% (TSE-DWI), 87.5% (TSE-ADC), and 66.7% (PWI) of cases. In the present cohort, we did not find cases with a loss of MEPs and no subcortical ischemia within the motor system. Fig. 1 shows ioMRI sequences of group LOM as well as detailed intervals between the loss of MEPs and single sequences (Fig. 1). The detection of subcortical ischemia within the motor system by ioMRI correlated with a permanent motor deficit in all patients of group LOM. No sequence showed motor ischemia in the MPC group. No patient of group MPC suffered from a permanent motor deficit. Patients of group LOM had slight preoperative motor deficits in six cases, compared with one case in group MPC. These deficits were not based on subcortical motor ischemia, as verified by the preoperative MRI scan (Table 3). Discussion With the present results, we were able to show that subcortical ischemia can be detected by ioMRI in patients with a loss of MEPs during the resection of motor-eloquent gliomas at a very early stage. Ischemia within the motor system could be detected by at least one of the ioMRI sequences in all cases within approximately 1 h after the intraoperative loss of MEPs. The shortest durations for the proof of ischemia by ioMRI were 49 min for EPI-DWI and EPI-ADC, 58 min for TSE-DWI, 57 min for TSE-ADC, and 46 min for PWI. It must be emphasized that the durations presented in this study measure the time from the earliest electrophysiological correlate of an intraoperative ischemic event to imaging; the event per se already occurred minutes before the loss of MEPs. For single ioMRI sequences, we found a positive correlation with a loss of MEPs in 100% (EPI-DWI), 100% (EPI-ADC), 75% (TSE-DWI), 87.5% (TSE-ADC), and 66.7% (PWI). Hence, especially standard EPI sequences seem to be qualified for the early detection of ischemia during ioMRI. In contrast, the TSE-DWI and PWI sequences, which were performed due to the study protocol, did not give additional information for the detection of subcortical motor ischemia. The reliability of the present results is supported by the results of the MPC group. Furthermore, subcortical ischemia as detected by ioMRI was clinically relevant and led to a permanent motor deficit in all patients of group LOM. Here, it must be highlighted that the study was focused on patients with an intraoperative loss of MEPs, which is the most relevant and strongest predictor of postoperative motor deficits. The detection of subcortical ischemia by ioMRI might show the underlying pathology of an intraoperative loss of MEPs and could have an additional predictive value regarding the long-term outcome of patients. Since ioMRI is usually performed after the resection is preliminarily completed, it neither prevents ischemia-related motor deficits nor substantially changes the surgical strategy. However, by adding a predictor for the long-term outcome of patients and, in some circumstances, showing the cause of an intraoperative loss of MEPs, the results of our study support the performance of at least standard DWI during ioMRI.
As shown by the mean durations, the scanning of additional sequences did not result in an extensive prolongation of ioMRI procedures. Additionally, it has been discussed, without support from the literature, that perilesional ischemia visualized in postoperative MRI is based on intermediate-term vasospasm. Based on the present results, we could show that these changes are already present during surgery, while their pathophysiology remains a subject for discussion. Prior studies have already shown that ischemic lesions, as detected by early postoperative MRI, are associated with functional deficits (Gempt et al., 2013a, 2013b; Bette et al., 2016; Jakola et al., 2014). Yet, all these studies defined 'early' as 48 or 72 h after surgery, not 60 min after the electrophysiological correlate of an ischemic event. In contrast to the present results, it has been discussed by others that ioMRI might be inferior to early postoperative MRI regarding the detection of ischemia due to the late appearance of ischemic changes, which could be overlooked in DWI of ioMRI. Yet, whether such ischemia occurred during resection or arose afterwards due to borderline perfusion and vascular changes cannot be said for sure, potentially resulting in inadequate or inaccurate results or interpretations. Recently, a study compared ischemic lesions as measured by ioMRI and early postoperative MRI within the same scanner after glioma resection. The authors came to the conclusion that DWI sequences for the detection of ischemic lesions should only be performed during the early postoperative MRI, since a large proportion of ischemic tissue had been overlooked during ioMRI scans. Most importantly, ischemic lesions were mainly asymptomatic in this publication, and therefore their accurate time of onset was impossible to define (Masuda et al., 2018). Ischemic lesions on standard DWI could be detected in five and 16 of 30 patients by ioMRI and early postoperative MRI, respectively, in that study. Only three of the 16 ischemic lesions in early postoperative MRI were symptomatic, and the authors did not report whether these had already been detected on ioMRI, nor detailed information on the functional status of patients and the long-term outcome. The masking of ischemic lesions in ioMRI was attributed to susceptibility artifacts from air in the resection cavity in six of eleven patients and to equivocal signal changes in five of eleven patients. Apart from the fact that the whole study cohort was reviewed in that study without relation to IONM results, regarding artifacts, the refilling of the resection cavity with Ringer's solution according to our standard ioMRI protocol might have been a reason for the better image quality in our cohort. Moreover, it must be emphasized that we performed ioMRIs using a 3 T scanner, while a 1.5 T scanner was used for the study of Masuda et al. (2018). What is more, the results of the present study accompany those of earlier trials in stroke patients. It is known that DWI detects ischemia even at very early stages after the event, as shown by the imaging of hyperacute strokes (Lansberg et al., 2000; Warach et al., 1992; Petkova et al., 2010). Hence, DWI combined with further sequences is used to determine the age of an ischemic lesion (Petkova et al., 2010; Thomalla et al., 2011, 2018). Furthermore, DWI and PWI sequences have been used to predict the outcome after ischemic stroke (Barber et al., 1998).
However, CT rather than MRI is still commonly used to detect acute ischemic stroke (Vilela and Rowley, 2017). It has been suspected that very early ischemia cannot be visualized by MRI. Furthermore, studies on stroke patients have to rely on the reported onset of symptoms. In contrast, the onset could be measured in our study to the minute by an established electrophysiological method. Thus, our results give detailed information, for the first time, on the time span between the deterioration of motor function as measured by IONM and its visualization by MRI in patients. With the present feasibility study's results, we can confirm that ischemia can be visualized by MRI even within approximately 1 h after the ischemic event. The mean time between the loss of MEPs and the scanning by ioMRI was 65.0 min for EPI sequences, 65.3 min for TSE sequences, and 62.2 min for PWI sequences. Thereby, the present results might also have a major impact on procedures in acute stroke patients. The small sample size is a major limitation of our study. On the one hand, this is explained by the exclusive inclusion of patients with a loss of MEPs during tumor resection. However, the study cohort as well as the matched-pair cohort are highly homogeneous with regard to patient and tumor characteristics (Table 1), so differences between the two groups should not have affected the core results of the present study. Additionally, the presence of preoperative subcortical motor ischemia could be ruled out for both groups by preoperative MRI scans. Furthermore, we found positive correlations and consistent results throughout the entire study cohort. Nevertheless, it must be highlighted that the results of the present study have to be confirmed in a larger cohort in order to fully prove the hypotheses. From an IONM perspective, the applied techniques, TES versus DCS, as well as the application mode of TES, are a subject for debate. The choice of IONM techniques was individually based on the tumor location and the approach for resection (Table 2). DCS was used in case of a craniotomy with safe access to the central region. Regarding the use of TES, it must be highlighted that the application of C3-C4 versus C1-C2 is risky and has the potential to provide false-negative results in case of CST stimulation distal to the tumor. By the intraoperative course of MEPs and the postoperative outcome, we can rule out false negatives and a CST activation distal to the tumor in the present cohort. However, the risks, advantages, and disadvantages of the applied IONM techniques must also be considered when discussing the present results. Conclusion Subcortical ischemia can be detected by ioMRI in patients with a loss of MEPs during the resection of motor-eloquent gliomas, even 60 min after the earliest intraoperative electrophysiological correlate of an ischemic event, and it was clinically relevant in all cases. Standard EPI-DWI sequences were the most sensitive modality. Due to the selective inclusion criteria, the sample size of the presented cohort showing an intraoperative loss of MEPs is small. Apart from the clinical perspective of adding a predictor for the long-term outcome of patients and, in some circumstances, showing the cause of an intraoperative loss of MEPs, as well as supporting the performance of intraoperative DWI, the results of the present study should also be rated by their scientific value in showing the very early occurrence and detection of subcortical ischemia.
With this in mind, these findings are also of interest in the early diagnostics of ischemic stroke. The table shows the findings of intraoperative magnetic resonance imaging (ioMRI) and postoperative (PostOP) MRI sequences as well as the long-term outcome of patients as rated by the British Medical Research Council scale (0 = no contraction, 1 = flicker or trace of contraction, 2 = active movement with gravity eliminated, 3 = active movement against gravity, 4 = active movement against gravity and resistance, 5 = normal power; N/A = not available, PreOP = preoperatively, + = positive MRI signal, − = negative MRI signal, NND = no new surgery-related deficit).
Multifractional theories: an updated review The status of multifractional theories is reviewed using comparative tables. Theoretical foundations, classical matter and gravity dynamics, cosmology and experimental constraints are summarized and the application of the multifractional paradigm to quantum gravity is discussed. We also clarify the issue of unitarity in theories with integer-order derivatives. Introduction In the attempt to unify the forces of Nature, several proposals for a quantum theory of gravitation have flourished from the last quarter of the 20th century until today. 1,2 There is no unique answer to the question of how to quantize gravity consistently and, until empirical evidence of phenomena beyond general relativity is found, it is not possible to decide which theory, if any among the extant ones, describes more faithfully the physics at the frontier between gravity and quantum interactions. Why are there different solutions to the problem of quantum gravity? It has to do with the way we picture the problem to ourselves, as exemplified by the following quotation: Let us take a real problem: The machines designed to pick tomatoes are damaging the tomatoes. What should we do? If we represent the problem as a faulty machine design, then the goal is to improve the machine. But if we represent the problem as a faulty design of the tomatoes, then the goal is to develop a tougher tomato. 3 In our context, the machine is perturbative quantum field theory (QFT), the tomato is classical gravity and the final product, canned tomatoes, is quantum gravity. Some theories of quantum gravity represent the issue of the non-renormalizability of perturbative quantum gravity as perturbative QFT being faulty and they resort to different machines applied to classical general relativity: the functional renormalization-group approach in asymptotic safety, 4-6 the canonical quantization in Ashtekar-Barbero variables in loop quantum gravity, 7,8 or a path integral over triangulated geometries in causal dynamical triangulations. 9,10 Other theories opt for keeping perturbative QFT as their machine while changing the tomato for an altogether different fruit, such as strings on a worldsheet 11,12 or fields on a group manifold as in group field theory. 1,2,13 Still other proposals content themselves with modifying the tomato just enough to make the perturbative-QFT machine work well enough not to crush the fruit: this is the case of the perturbative quantization of modified gravitational actions as in nonlocal quantum gravity 14 and multifractional theories, the topic of this paper. Multifractional theories are classical and/or quantum field theories of gravity and matter characterized by a spacetime with multiscale properties, i.e., the phenomena registered by clocks, rulers and detectors depend on the probed scale. This is a common feature of theories of quantum gravity 15-19 that usually emerges as a byproduct, while in multifractional spacetimes it is built in explicitly from the start. The way to do so is by modifying the integro-differential calculus defining the action, the dynamics, the line element, and so on.
While it turns out to be difficult to improve the renormalizability of the gravitational interaction, this shift of paradigm from ordinary to anomalous geometry opened a Pandora's box of conceptual insights and phenomenology that, on one hand, has tied together several loose strands that can contribute to a unified picture of quantum-gravity models and their capabilities and, on the other hand, has hopefully brought modified gravity and quantum gravity closer to experiments. The purpose of this paper is to offer an updated review on the topic of multifractional theories. The most complete review to date is Ref. 20, but while the latter discusses comprehensively the conceptual framework of these models and its development until early 2017, here we will concentrate on the general classification of the theories and their characteristics, stressing some of the advances made since then: • The understanding of logarithmic oscillations as the manifestation of complex dimensions, 21 with new applications to inflation 21 and late-time cosmic acceleration. 22 • The construction of a stochastic or fuzzy version of multifractional spacetimes where the fractional power-law corrections to the measure and, hence, to lengths are reinterpreted as an intrinsic uncertainty on distance measurements. 23,24 • The construction of black-hole solutions. 25 • New observational constraints from the Standard Model of electroweak and strong interactions 26 and new applications to the field of gravitational waves (GWs), in particular regarding the luminosity distance of standard sirens 27 and the primordial stochastic GW background. 28 • The construction of multifractional derivatives 29 and of the long-sought theories with fractional differential operators. 30 Since we do not aim at covering all the theoretical aspects considered in Ref. 20, the present review may be seen as a complement to that publication. A forthcoming textbook will give a longer, in-depth introduction to multiscale and multifractional theories. 31 Section 2 introduces the main features of the multifractional geometry of spacetime and the classification of multifractional theories as we understand them now. Classical gravity and cosmology in each theory is discussed in Sec. 3, while QFT and quantum gravity are reviewed in Sec. 4. We give a perspective on future research on the subject in Sec. 5. Spacetime geometry of multifractional theories The geometry of spacetime defining multifractional theories can be read out from the prototype massless scalar-field action S = ∫ d^D x v(x) [½ φ K φ − V(φ)], where D is the number of topological dimensions (D = 4 for physical models), v(x) is a measure weight that depends on the spacetime coordinates x^µ, µ = 0, 1, . . . , D−1, K is the kinetic operator depending on the derivatives "D" defining the theory, and V is a potential including nonlinear interactions. Each theory is named with a label T_1, T_v, T_q, . . . that roughly summarizes its differential structure. In this section we do not consider gravity, so that the spacetime metric is the Minkowski metric η_µν = diag(−, +, · · · , +). Therefore, here covariant operators such as the d'Alembertian □ = η^µν ∂_µ ∂_ν are defined with zero affine connection. Lorentz indices are contracted with the Minkowski metric in Einstein convention, while they are not contracted in one-directional expressions such as q^µ(x^µ). The integro-differential structure and geometry of multifractional theories are compared in Tables 1 and 2 for theories with integer-order operators and in Table 3 for theories with fractional operators. Table 1.
Characteristics of spacetime geometry of multifractional theories with integer-order operators on Minkowski spacetime. Acronyms: ultraviolet (UV), infrared (IR), discrete scale invariance (DSI). Ultra-IR means scales much beyond particle-physics scales, for instance cosmological.

In the first papers on multiscale spacetimes, the spacetime measure d^D q(x) appearing in Table 1 had been proposed as a profile naturally possessing critical exponents 17,32,47 and the discrete symmetry typical of deterministic fractals. 32,41,48 The idea was that, on one hand, multiscale phenomena are universally described by critical exponents and that, if the geometry of spacetime had one or more fundamental scales, then its measure should also be of the form of a generalized polynomial with different exponents α_l. This first argument led to the power-law dependence ∼ |x|^α of q(x). On the other hand, deterministic multifractals (i.e., fractals which are exactly self-similar) are a special case of multiscale systems where the critical exponents α ± iω are complex, and the complex part is associated with a discrete scale invariance (DSI) 49,50 and a length scale ℓ_∞ appearing inside the logarithms to make their argument x/ℓ_∞ dimensionless. This led to the log-oscillatory dependence in q(x). However, in Refs. 20, 33 a much stronger result was proven, namely, that this spacetime measure not only obeys the above two universal features of exactly self-similar multiscale systems, but is also the most general measure under three assumptions: (i) spacetime is a continuum, (ii) the ordinary Lebesgue measure must be recovered in some regime which is reached "slowly enough" (i.e., via a flat asymptote), and (iii) the measure is factorizable in the coordinates. This result goes under the name of the flow-equation theorem.a According to the same theorem, the derivation of the log-oscillatory part of the measure from the pairing of complex conjugate power laws leads one to identify the two scales ℓ_∞ = ℓ_*, thus reducing the number of free parameters in the measure. 24 Multifractional spacetimes are by definition multiscale spacetimes with a factorizable measure. 46 The assumption of factorizability is made to drastically simplify calculations. This of course breaks Poincaré symmetries. Theories with more symmetries such as spatial rotations have been considered, 17,47,51 but they are much more difficult to handle and it is preferable to give up Poincaré symmetries altogether and recover them at large scales. 32,35,42,52 All the parameters α_l = α_l,µ, ℓ_l = ℓ_l,µ, A_l,n, B_l,n = A_l,n,µ, B_l,n,µ, ω_l = ω_l,µ in the general measure can be different for different directions, but in Table 1 we wrote down a simplified "isotropic" version where the index µ has been omitted everywhere. However, it is not uncommon to consider geometries where only the time direction or only the spatial directions are multifractional.

a Assumption (iii) can be relaxed and one can apply the theorem directly to the Hausdorff and the spectral dimension. 20,33

Table 3. Characteristics of spacetime geometry of multifractional theories with fractional operators on Minkowski spacetime. Empty cells correspond to topics not studied yet. (Surviving row fragments from the flattened table: fractional Poincaré symmetry; 20,32 fractional Poincaré symmetry of variable order; 30,32 ordinary Poincaré symmetry; 30 integer frame: no; 30 constraints on ℓ_*.)

The n-dependence of the amplitudes A_l,n and B_l,n has been worked out in Ref. 21.
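For orientation, a commonly quoted one-scale example of such a measure, reconstructed here from the cited literature (conventions may differ by signs and inessential prefactors), is

\[
q_{*}(x) \;=\; x + \frac{\ell_{*}}{\alpha}\left|\frac{x}{\ell_{*}}\right|^{\alpha}
\left[1 + A\cos\!\left(\omega\ln\left|\frac{x}{\ell_{*}}\right|\right) + B\sin\!\left(\omega\ln\left|\frac{x}{\ell_{*}}\right|\right)\right],
\]

which displays both the power-law dependence ∼ |x|^α and the log-oscillatory structure with DSI described above, with the two scales identified (ℓ_∞ = ℓ_*) as dictated by the flow-equation theorem.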
There, by looking at the typical dependence found in critical, complex, and fractal systems, it was found that the amplitudes decrease exponentially or as a power law in the order n of the harmonic, with coefficients a_n, b_n, c and u that are constant in the deterministic version or view of the measure, while a_n and b_n are random variables in the so-called stochastic view, 23,24 where the fractional corrections to the ordinary measure are stochastic fluctuations around the zero mode that make spacetime fuzzy. About theories with fractional operators (I) The expressions for K with explicit scale dependence in Table 3 require a choice of profile γ(ℓ). One such profile could be a two-scale interpolation (Fig. 1) where ℓ_* ≪ ℓ_c. Other single- and two-scale profiles can be found in Refs. 22, 30, 32, 53, 54. The fact that these profiles are chosen ad hoc can be considered a weakness of these theories because it introduces an element of arbitrariness that, to date, we are unable to constrain with theoretical arguments. However, the payback is noteworthy because it may allow us to achieve unitarity and renormalizability at the same time, something problematic for the theories T[∂ + ∂^γ] and T[□ + □^γ]. 30 In all the theories with fractional operators in Table 4, the ordinary Lebesgue measure has been chosen because the generalized polynomial of the other theories is not necessary to improve renormalizability, but this assumption can be relaxed. In the cases with fractional derivatives T[∂ + ∂^γ] and T[∂^γ(ℓ)], one can call the generalized theories with multifractional measure T_α[∂ + ∂^α] and T_α[∂^α(ℓ)], respectively. In particular, there are strong similarities between these theories and the theory with q-derivatives, since the scaling of the q-derivative is the same as the scaling of fractional derivatives. This correspondence, which has not been explored yet, is indicated in Eq. (8) 20 and it could mean that the observational constraints found for T_q could be applied also to, or be very similar to those for, the theories (7). About dimensions The Hausdorff dimension d_H of spacetime is defined as minus the scaling of the position-dependent part of the spacetime measure, where [x^µ] = −1, while the Hausdorff dimension d_H^k of momentum space is the scaling of the momentum-dependent part of the momentum-space measure, where [k^µ] = 1 and usually w(k) = v(k). Notice the specification that the scaling is the one of the variable part of the measure: for instance, by definition [v] = 0, but its position-dependent part in the UV has a nonvanishing scaling. Since the expression q(x) is real-valued, there is no physical problem with having complex dimensions and, in fact, we can even observe them in principle, for instance, as a modulation in the CMB primordial spectrum 21 or as a cosmic acceleration at late times. 22 The peculiarity of DSI is that it is a UV symmetry that affects the IR even when broken. This departure from the usual UV/IR dichotomy happens because DSI is characterized by infinitely many scales λ^±n_l ℓ_* spanning all ranges. The spectral dimension is related to the momentum space of the model. Consider the Schwinger representation of the Green function; the spectral dimension d_S is then defined from the return probability P(ℓ). In multifractional theories, the basis functions are factorizable in the coordinates 34,45 and the return probability factorizes accordingly. For the theory T_v with weighted derivatives, the basis functions 34 are eigenfunctions of the operator K = □ + (∂^µ v/v)∂_µ in the corresponding column in Table 1: K e(k, x) = −k² e(k, x). Since σ(ℓ) = ℓ² in this theory, after defining the dimensionless variable y = ℓk, one has P(ℓ) = C ℓ^−D, where C is a constant.
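Schematically, in the conventions common to this literature (our reconstruction, up to an inessential normalization), the return probability and the running spectral dimension read

\[
P(\ell)\;\propto\;\int d^{D}k\; w(k)\, e^{-\sigma(\ell)\,p^{2}(k)}\,,
\qquad
d_{\mathrm{S}}(\ell)\;:=\;-\,\frac{d\ln P(\ell)}{d\ln\ell}\,,
\]

so that for σ(ℓ) = ℓ², w = 1 and p(k) = k one recovers P(ℓ) ∝ ℓ^{−D} and d_S = D, while σ(ℓ) = q²(ℓ) yields the UV scaling P(ℓ) ∼ ℓ^{−Dα} quoted below for the theory with q-derivatives.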
Therefore, d_S = D, 20 which is the result obtained in Ref. 40 by setting the parameters β and ν therein to their natural values β = 1 = ν. The theory T_1 follows the same dimensional flow. 40 For the theory T_q with q-derivatives, one has 45 K e(k, x) = −p²(k) e(k, x). Here σ(ℓ) = q²(ℓ) and, therefore, P(ℓ) scales as ℓ^−Dα in the UV or at any other plateau. The spectral dimension for the theories with fractional operators was calculated in Ref. 30, where some profiles for γ(ℓ) in the theories T[∂^γ(ℓ)] and T[□^γ(ℓ)] were also proposed (see below). All multifractional spacetimes are multiscale, but not all are also multifractal. Multifractal spacetimes are such that the spectral and Hausdorff dimensions are related to each other by the fixed relationship d_W = 2d_H/d_S, which also involves the so-called walk dimension d_W, calculated independently. 46 Without entering into details, there is mathematical evidence that d_S = d_H^k for fractals 55 and T_q is the only theory among those with integer-order derivatives that satisfies this property. About symmetries Continuous symmetries are classified as ordinary or deformed depending on whether their generators are ordinary or not. However, these generators can satisfy the ordinary symmetry algebra in some cases, such as the free scalar field theory with weighted derivatives. 35 The absence of action symmetries in the theory T_1 is responsible for the absence of a mathematical integer frame or picture 20 where one can simplify the dynamics and make it superficially identical, or at least very similar, to standard mechanics or field theory. Concerning discrete symmetries in a QFT context, both T_v and T_q are CPT invariant (charge conjugation, parity and time reversal). 38 About experimental bounds The simplified two-scale measure has been used to get experimental bounds on the scales ℓ_* and ℓ_c from, respectively, particle-physics and cosmological observations, with or without log-oscillations (Table 2). The upper bounds on the UV scale ℓ_* in Table 2 are obtained for α_* ≪ 1 and are the weakest possible. They tighten progressively as α_* increases from 0 to 1. 20,26,36-38,44 For T_v and T_q, there also exist constraints from the CMB black-body spectrum for a fixed α_*, which are of the same order of magnitude as the bounds from particle physics. 45 In Table 2, bounds with a dagger (†) are avoided in the stochastic view of the theory T_q. 23,24 If the zero mode in the measure vanishes, A_0 = 0, fractional corrections cancel out on average and the strongest bounds in Table 2, as well as the bounds from the CMB black-body spectrum, 45 cease to apply. 24 Bounds on α_*, α_c and ℓ_c, as well as constraints on the log-oscillation amplitudes, are also available: • α_* < 0.47 in the stochastic view of the theory T_q, according to limits on the strain noise in present and future GW interferometers. 27 In the deterministic view of the same theory and in the presence of one harmonic in the oscillatory part of the measure, α_* ≲ 0.1-0.6 if inflationary scales include those of the UV regime of the theory. 45 • A, B < 0.4 when α_* = 1/2 and the measure has only one harmonic with amplitudes A and B, according to CMB constraints on inflation in the deterministic view of the theory T_q. 45 • α_c ≈ 3.8 and t_c = t_Pl ℓ_c/ℓ_Pl > 3.9 t_0, where t_0 ≃ H_0^−1 is the age of the universe, in the theory T_v with many harmonics in the measure, from late-time measurements of the accelerated expansion of the universe. 22
Classical gravity in multifractional theories Gravity in multifractional spacetimes is described by an action of the schematic form S = (1/2κ²) ∫ d^D x v √(−g) L_g + S_matter, where κ² = 8πG, G is Newton's constant, g is the determinant of the metric g_µν, L_g is the gravitational Lagrangian, and S_matter is the action for matter fields. In general, the dynamics of gravity is defined by the measure weight v and the curvature tensors (Riemann tensor, Ricci tensor and Ricci scalar) built with the metric and its derivatives (∂_µ, the weighted derivative D_µ, ∂_µ^q or ∂_µ^γ, depending on the theory). The general structure of the Levi-Civita connection, the Ricci tensor, the Ricci scalar and the Einstein tensor in multifractional theories follows the standard definitions with the ordinary derivative replaced by generic derivative operators D and D̃, not necessarily equal to each other due to symmetry requirements 39 that go beyond the scope of this introductory review. When D = D̃, we only write one argument in the curvature tensors, in particular, R_µν[D] and R[D]. Furthermore, when D = ∂ we denote the standard Ricci tensor, Ricci scalar and Einstein tensor with the usual symbols R_µν, R and G_µν. In the theories T[∂^γ(ℓ)] and T[□^γ(ℓ)], the action has an extra integration over a length parameter ℓ, possibly with a measure τ(ℓ); for a single-scale geometry, a specific profile of τ(ℓ) is chosen. 30,32 The Lagrangians L_g are compared in Tables 4 and 5.

Table 4. Characteristics of and topics in classical gravity in multifractional theories with integer-order operators. "Diffeo" stands for diffeomorphism. Empty cells correspond to topics not studied yet. Items with a tick ✓ indicate that a certain feature has been studied, while a question mark "?" indicates partial results. (Columns: T_1 with ordinary derivatives, T_v with weighted derivatives, T_q with q-derivatives; surviving row fragments: ω and U arbitrary, 39,47 alternative to dark matter.)

About cosmology Studies on late-time acceleration have been carried out on a homogeneous and isotropic background, in particular, a flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric. In the theory T_1 with ordinary derivatives, the problem has been considered mainly with exotic dark-energy components, in flat 47,57-63 as well as non-flat FLRW. 64,65

Table 5. Characteristics of and topics in classical gravity in multifractional theories with fractional operators. Empty cells correspond to topics not studied yet. Items with a question mark "?" indicate partial results. (Columns: theories with fractional derivatives and with fractional d'Alembertian; surviving row fragment: Lagrangian L_g (one scale).)

In these cases, late-time acceleration is possible, although the theoretical motivation is no more robust than in general relativity. It is indeed possible to realize dark energy with an ordinary fluid with a mildly negative barotropic index w, 66 but it is not clear whether a scalar field with such properties would need fine tuning just like quintessence; hence the question mark in the table. The conclusion reached in the theory T_q is that a cosmological constant or exotic fluids are required at late times to sustain acceleration. 22 Thus, while in T_1 such fluids are optional and there is still the possibility to get acceleration with conventional matter, in T_q this option seems barred. In contrast with these theories, in the theory T_v geometry can sustain acceleration without the need of matter, provided an ultra-IR regime exists. 22,39 This intriguing scenario has been tested with late-time data but it awaits a more complete analysis. The dark-matter row in Tables 4 and 5 refers to the possibility of explaining galaxy rotation curves within the multifractional paradigm, without invoking a dark matter component.
To date, this possibility has not been explored. About theories with fractional operators (II) Inspired by Hořava-Lifshitz gravity, 67 fractional derivatives and integrals have been invoked since the earliest papers on the multifractional paradigm, 17,32,41 but it was only very recently that multifractional theories with fractional derivatives were constructed explicitly. 30 This is the reason why Table 5 is emptier than the others: there has been little time to develop the phenomenology of these theories. Also, originally only one theory with fractional operators was envisaged (it was called T_γ in Ref. 20), while now we can recognize at least four.

QFT and quantum gravity in multifractional theories In this section, we summarize the status of multifractional theories as quantum field theories of matter and gravity (Tables 6 and 7).

Table 6. QFT of matter and gravity in multifractional theories with integer-order operators. Empty cells correspond to topics not studied yet. Items with a tick ✓ indicate that a certain feature has been studied, while a question mark "?" indicates partial results. (Surviving row fragment: improved perturbative renormalizability: no; 20,69 no in the deterministic view, 20,69 ? in the stochastic view. 20)

Table 7. QFT of matter and gravity in multifractional theories with fractional operators. Empty cells correspond to topics not studied yet. Items with a tick ✓ indicate that a certain feature has been studied, while a question mark "?" indicates partial results. (Surviving row fragment: in D = 4 when γ > 1: γ = 2.)

About quantum gravity The quantum-gravity row in Tables 6 and 7 refers to the discussion of the theory as an independent perturbative QFT of gravity. Other papers dealt with the relationship and similarities between multifractional theories and other theories of quantum gravity. 20,23,24,32,42,70,71 About renormalizability The first three papers on multiscale field theories 17,47,51 considered a nonfactorizable Lebesgue-Stieltjes measure. In these spacetimes, the momentum transform 47,51 differs from the transform in terms of Bessel functions of the multifractional theories T_1 and T_v. 34 Many of the results for the scalar field in Refs. 17, 47, 51 look similar to those for the scalar field in T_1 but, in fact, there are some differences. 32 However, the power-counting argument is the same, and so are the equations of motion. The theory T_1 is difficult to handle due to the absence of action symmetries, a non-self-adjoint kinetic operator, and an issue with unitarity. 20 For these reasons, its development as a QFT has not gone beyond some basic results at the tree level. 32 In the theory T_v with weighted derivatives, perturbative QFT is unviable in the presence of nonlinear interactions, as in the case of an isolated scalar field. 38,69 However, this problem does not arise for the full Standard Model due to the presence of an integer frame where the theory is made formally equivalent to the ordinary Standard Model in all sectors. 38 Power-counting renormalizability is determined by the superficial degree of divergence, 30,32 a function of the number of loops L in a one-particle-irreducible Feynman diagram (a hedged reconstruction of the omitted formula is sketched below). Replacing γ = 1 for the theory T_1, γ = α for the theory T_q (and T_α, not reported in the tables), and α = 1 for the theories with fractional operators and ordinary measure, one gets the results reported in Tables 6 and 7.
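The explicit formula for the superficial degree of divergence was lost in extraction. A plausible reconstruction (our assumption, to be checked against Refs. 30, 32) follows from standard power counting: each loop integral contributes Dα powers of momentum (the anomalous momentum-space dimension) and each internal line contributes −2γ (the order of the kinetic operator):

\[
\delta \;=\; D\alpha\,L \;-\; 2\gamma\,I\,,
\]

where I is the number of internal lines of the diagram. Setting α = γ = 1 recovers the textbook result δ = DL − 2I; γ = α gives δ = α(DL − 2I), with the same sign (hence the same divergence structure) as in a standard QFT, consistent with the statement about T_q below; and α = 1 with γ > 1 lowers δ, matching the improvement for γ = 2 in D = 4 noted in Table 7.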
In particular, while in the deterministic view the superficial degree of divergence of T_q and T_α is the same as in a standard QFT, in the stochastic view it is possible that the stochastic fluctuations of the measure render spacetime fuzzy at the scale ℓ_* and the concept of coincident spacetime points loses meaning. 20 Whether this leads to an improved perturbative renormalizability is not clear, and the power-counting argument is not conclusive. The case of the theory T_v is also delicate because the momentum-space basis e(k, x) carries a measure weight that changes the power counting and, eventually, renormalizability is not improved because momentum integrals have the same degree of divergence as in a standard QFT. 20,69 Regarding the theories with fractional operators, unitarity and renormalizability with fractional derivatives have not been studied yet, apart from power-counting renormalizability. More is known for the cases with a fractional d'Alembertian. The theory T[□ + □^γ] cannot be at the same time unitary and perturbatively renormalizable, since the range of values of γ for which unitarity is respected never intersects the range for which the theory has improved renormalizability. About unitarity Based on the nonconservation of Noether currents, in previous papers it was claimed that the multifractional theories T_1, T_v and T_q are not unitary. 20,35,47 This was not felt as a problem, at least for T_v and T_q, because one could reformulate these theories in the integer frame as unitary models and, somehow, control the loss of unitarity in the fractional frame. However, here we show that at least T_v and T_q are indeed unitary. To do so, we work in the fractional (physical) frame and check the property of reflection positivity in the Euclidean version of the theory. The details of the procedure can be found, e.g., in Refs. 30, 72, and the procedure amounts to showing that the scalar product of field functionals ϕ defined through the Green's function is positive definite. In Euclidean position space with coordinates x_1, x_2, . . . , x_D, one defines a reflection operation R such that spatial coordinates are unchanged while Rx_D = −x_D. For any test function ϕ chosen in an appropriate functional space, for a generic multifractional theory we have to show that the reflection-positivity condition holds (a schematic form is given after this paragraph), where the Green's function G is given by Eq. (12). We choose a charge-distribution type of test function; inserting it into the scalar product and using Eq. (15), the condition reduces to a sum over integrals I_ij, and reflection positivity holds if I_ij ≥ 0. In the theory T_1, the basis e(k, x) is made of Bessel functions and the calculation of I_ij becomes involved. We will not consider this case here, but we note that the no-unitarity arguments of Refs. 20, 47 remain valid, since there is no integer frame here. The basis in the theory T_v is given by Eq. (16). Adding a mass term to the kinetic operator, with ω_k² := |k|² + m², and denoting by r_ij the combination of reflected x_D coordinates entering the Green's function, one finds that the I_ij are non-negative. Therefore, the theory obeys reflection positivity and, by analytic continuation to Lorentzian signature, it is unitary. This is in agreement with the fact that the S-matrix in the quantum mechanics of T_v is unitary. 52 The basis in the theory T_q is given by Eq. (17); with ω_p² := |p|² + m², an analogous computation implies I_ij ≥ 0. Thus, also T_q is unitary. What next? We conclude by listing some of the topics to be explored in the near future. • With the study of multifractional theories with integer-order derivatives almost complete, attention has recently shifted to the theories with fractional operators. 29,30
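The displayed reflection-positivity condition did not survive extraction. Schematically, in our reconstruction (with the measure weights placed as in the scalar product of these theories, an assumption on the exact notation of the source), one must show that

\[
(\theta\phi,\phi)\;=\;\int d^{D}x\,v(x)\int d^{D}y\,v(y)\;\overline{\phi(Rx)}\;G(x-y)\;\phi(y)\;\ge\;0
\]

for all admissible test functions ϕ. For charge-distribution test functions concentrated at points x^(i) with coefficients c_i, this reduces to Σ_ij c_i c_j I_ij ≥ 0, which holds in particular if all I_ij ≥ 0.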
Unitarity and one-loop renormalizability of the theory T[∂ + ∂^γ] are open questions, even if we expect problems similar to those of the theory T[□ + □^γ]. As a start, one could employ the methods of Ref. 30 to check these properties for the no-scale theory T[∂^γ]. • The theory T[□^γ(ℓ)] with variable-order fractional operators could avoid the renormalizability-versus-unitarity problem of T[□ + □^γ], but the details of how to manipulate the integration over ℓ in calculations have not been worked out. • As noted in Ref. 20, the theories T_α[∂ + ∂^α] and T_α[□ + □^α] with multifractional measure could be akin to T_q. Studying the correspondence (8) might help to understand how to develop these theories to the point of extracting observational predictions. • The big-bang problem has been cursorily touched upon in Ref. 39 for the theories with integer-order operators, and a bounce may be possible in T_v and T_q without invoking exotic matter. It would be interesting to develop more detailed bouncing models. • To date, black holes and cosmology in theories with fractional operators are still virgin territory. • There are promising signs that the theories T_1 and T_v can sustain inflation with or without matter fields. 39 However, no study of primordial scalar, vector and tensor perturbations and of the corresponding spectra has been carried out. • The problem of dark energy has been explored extensively for the theory T_1, but only one paper pointed out a scenario with a conservatively realistic fluid component. 66 This has been done with a power-law measure weight v = a^m, where a(t) is the scale factor, and without trying to realize the same equation of state with a scalar field. Therefore, it remains to be seen how a multifractional weight v = 1 + a^m + . . . would modify these results, or whether a scalar field would be subject to fine tuning of the initial conditions, similarly to quintessence in general relativity. Moving to scenarios with fractional operators, the theory T[□ + □^γ] with fractional d'Alembertian could have an important application in explaining the late-time acceleration of the universe. 30 In fact, in the limits γ → 0, 1 it can reproduce, unify and theoretically justify classical models of IR modifications of gravity whose Lagrangians contain constants c_{0,2} and exponents n_{0,2} classifying different scenarios: the n_0 = 1 = n_2 model, 73-76 the n_0 = 0, n_2 = 1 model, 77,78 the n_0 = 0, n_2 = 2 model, 79,80 and the n_0 = 2, n_2 = 0 model. 81,82 • The problem of finding alternatives to dark matter has not been considered in any multifractional theory, with integer-order or fractional operators. In our opinion, the value of the multifractional paradigm can be appreciated especially when phenomenological explorations, for instance in cosmology, are pursued with the goal of offering scenarios with less fine tuning and fewer exotic matter components than in general relativity. We hope that this short review will stimulate the reader in that direction.
The Natural Product Curcumin as an Antibacterial Agent: Current Achievements and Problems The rapid spread of antibiotic resistance and the lack of effective drugs for treating infections caused by multi-drug resistant bacteria in animal and human medicine have forced us to find new antibacterial strategies. Natural products have served as powerful therapeutics against bacterial infection and are still an important source for the discovery of novel antibacterial drugs. Curcumin, an important constituent of turmeric, is considered safe for oral consumption to treat bacterial infections. Many studies have shown that curcumin exhibits antibacterial activities against Gram-negative and Gram-positive bacteria. The antibacterial action of curcumin involves the disruption of the bacterial membrane, inhibition of the production of bacterial virulence factors and of biofilm formation, and the induction of oxidative stress. These characteristics also help to explain how curcumin acts as a broad-spectrum antibacterial adjuvant, as evidenced by its markedly additive or synergistic effects with various types of conventional antibiotics or non-antibiotic compounds. In this review, we summarize the antibacterial properties and underlying molecular mechanisms of curcumin, and discuss its use in combination therapy, its nano-formulations, its safety, and the current challenges towards its development as an antibacterial agent. We hope that this review provides valuable insight, stimulates broader discussions, and spurs further developments around this promising natural product. Introduction There is an urgent unmet medical need for new antibiotics for infections caused by multidrug-resistant (MDR) Gram-negative 'superbugs' Pseudomonas aeruginosa, Acinetobacter baumannii, and Klebsiella pneumoniae and Gram-positive methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant S. aureus (VRSA), and mobilized colistin resistance gene (MCR)-producing Enterobacteriaceae, which are resistant to almost all available antibacterial drugs [1]. The coronavirus disease 2019 (COVID-19) pandemic in particular led to increased clinical use of all antibiotics, which further promoted the development of bacterial resistance, highlighting the unmet medical need for new antibiotics [2]. Since the golden age of antibiotic discovery in the mid-20th century, natural products have served as the major foundation for the development of the majority of antibiotic drugs in clinical use to this very day [3]. Natural product antibiotics act by directly inhibiting the growth of or killing the bacteria, acting as potentiators that augment or transform other agents, or acting as immunomodulators of host cells or blockers of pathogen virulence [1]. Documented biological activities of curcumin include antimicrobial, antioxidant, anti-inflammatory, neuroprotective, anticancer, and immuno-modulatory activities [20]. Due to its various biological activities, curcumin has been used extensively in traditional medicine for the treatment of various illnesses including autoimmune, neurological, diabetic, cardiovascular, and infectious diseases [5,21]. In the following discussion, we elaborate on the antibacterial activities of curcumin, its mechanism of action, and the barriers associated with its clinical application as an antibiotic therapy. Antibacterial Activity of Curcumin In 1949, Schraufstatter and colleagues were the first to report the antibacterial properties of curcumin [22].
In the past seventy years, there have been several studies of the broad-spectrum inhibitory effect that curcumin exhibits against various Gram-negative and Gram-positive bacteria, including A. baumannii, E. faecalis, K. pneumoniae, P. aeruginosa, Bacillus subtilis (B. subtilis), Staphylococcus epidermidis, Bacillus cereus (B. cereus), Listeria innocua, Streptococcus pyogenes, S. aureus, Helicobacter pylori (H. pylori), Escherichia coli (E. coli), Salmonella enterica serotype Typhimurium, and Streptococcus mutans (details shown in Table 1) [6,8,10,23,24]. Importantly, curcumin also exhibits marked antibacterial activities against MDR isolates, such as polymyxin-resistant K. pneumoniae and MRSA [9,10,24]. A recent study by Batista de Andrade Neto et al. reported that minimum inhibitory concentration (MIC) values for curcumin against clinical isolates of MRSA were in the range of 125-500 µg/mL [25]. Another study by Yasbolaghi Sharahi et al. reported that MICs of curcumin against MDR A. baumannii, P. aeruginosa and K. pneumoniae were in the range of 128-512 µg/mL [8]. Notably, there were significant differences in the MICs of curcumin against certain strains reported by different research groups [26]. This may be due to differences in the solubility of curcumin in the different vehicles (e.g., water, DMSO, and ethanol) used by each research group [26]. In addition, these differences may be related to the MIC test methodology, the impact of the vehicle on the bacterial outer membrane, and the purity of the curcumin used in the study [27]. Table 1. Documented antibacterial activities of curcumin. Cell Membrane Disruption Curcumin and its two analogs, DMC and BDMC, have been shown to possess antibacterial activity against a wide range of bacteria [23]. Studies have shown that curcumin can damage the permeability and integrity of bacterial cell membranes in both Gram-positive and Gram-negative bacteria, finally leading to bacterial cell death [47]. Curcumin's lipophilic structure allows it to insert directly into liposome bilayers, which in turn enhances bilayer permeability [47]. Solid-state nuclear magnetic resonance (NMR) spectroscopy studies revealed that curcumin can insert deep into the membrane in a trans-bilayer orientation, disordering 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) membranes and influencing exocytotic and membrane fusion processes [48]. Tyagi et al. demonstrated that curcumin at a concentration of 100 µM can induce permeabilization of both S. aureus and E. coli cell walls [49]. This membrane permeabilization property could account for the direct bacterial killing effect of curcumin against Gram-positive and Gram-negative bacteria [49]. Indeed, the increase in membrane permeabilization of bacteria caused by curcumin could increase the uptake of other drugs [50]. This is a critical mechanism explaining the synergistic effect of curcumin combination therapy with other antibiotic drugs or natural products, as discussed in detail below.
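Because MIC values recur throughout this discussion, a minimal sketch of how a broth-microdilution readout maps to a MIC may help; this is our illustration only (the function name, the OD600 cutoff and the example values are hypothetical, not a protocol from the cited studies):

```python
def mic_from_dilution_series(concentrations_ug_ml, od600, cutoff=0.05):
    """Return the MIC: the lowest tested concentration such that this
    well and all higher concentrations show no visible growth
    (OD600 below `cutoff`); None if growth occurs at every dilution."""
    wells = sorted(zip(concentrations_ug_ml, od600), reverse=True)
    mic = None
    for conc, od in wells:          # scan from highest to lowest
        if od < cutoff:
            mic = conc              # still inhibited; keep lowering
        else:
            break                   # growth resumed at this dilution
    return mic

# Two-fold dilution series from 512 down to 4 ug/mL:
concs = [512, 256, 128, 64, 32, 16, 8, 4]
ods = [0.01, 0.02, 0.02, 0.03, 0.21, 0.35, 0.40, 0.42]
print(mic_from_dilution_series(concs, ods))  # -> 64
```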
Inhibition of Bacterial Quorum Sensing System and Biofilm Formation The quorum sensing (QS) system is a cell-cell communication system that is ubiquitously used in microbial communities to monitor population density and adapt to the external environment [51]. To date, there are three main QS systems: (1) the acylhomoserine lactone (AHL) QS system in Gram-negative bacteria; (2) the autoinducing peptide (AIP) QS system in Gram-positive bacteria; and (3) the autoinducer-2 (AI-2) QS system, which occurs in both Gram-negative and Gram-positive bacteria. It is well known that QS systems play a critical role in the formation and maturation of bacterial biofilms, which are associated with about 80% of microbial infections [52]. Bacteria growing in biofilms are largely protected from antibiotics or host immune cells, leading to the failure of antimicrobial therapy [52]. QS systems are the master controllers of the entire process of biofilm formation, including bacterial adhesion, biofilm development, and maturation. Therefore, the discovery of new inhibitory compounds targeting bacterial QS systems is an important strategy to control bacterial biofilm formation and resistance. Several studies have reported that curcumin inhibits bacterial QS systems/biofilm formation and prevents bacterial adhesion to host receptors in various species, including S. aureus, E. faecalis, E. coli, Streptococcus mutans, Listeria monocytogenes, H. pylori, P. aeruginosa, Serratia marcescens, Aeromonas hydrophila and A. baumannii [36,38,50,53-55]. We have summarized the QS-system targets of curcumin in various bacteria in Table 2. In addition, Figure 3 provides an overview of the inhibitory mechanisms of curcumin against biofilm formation, bacterial swimming/clustering behaviors, and virulence [35,36,38,50,53-55]. Interestingly, available data suggest that the autoxidation of curcumin could also contribute to the inhibition of biofilm formation [56]. For example, curcumin was shown to promote the production of lactate dehydrogenase (LDH) in P. aeruginosa, S. aureus, and E. faecalis, wherein the curcumin/LDH complex exhibited antibacterial and anti-biofilm activities [56]. Clearly, the anti-biofilm properties of curcumin increase its potential as a tractable anti-infective agent.

Table 2. Targets or action model of curcumin in the inhibition of biofilm in various bacteria.
Staphylococcus aureus: By inhibiting the activity of sortase A through interaction with the VAL-168, LEU-169, and GLN-172 sites, via the methoxyl group on the benzene ring of curcumin and its analog [30,57]
Enterococcus faecalis: Unclear [54]
Listeria monocytogenes: By circumventing the limitations to singlet-oxygen diffusion imposed by the extracellular matrix [36]
Bacillus cereus: Unclear [35]
Helicobacter pylori: By inhibiting biofilm maturation [38]
Pseudomonas aeruginosa: By inhibiting the production of QS-dependent factors, such as exopolysaccharide production, alginate production, and the swimming and swarming motility of uropathogens [30,58]
Escherichia coli: Similar to Pseudomonas aeruginosa [58]
Streptococcus mutans: By inhibiting sortase A activity; suppressing the expression of genes related to extracellular polysaccharide synthesis, carbohydrate metabolism, adherence, and the two-component transduction system [59-61]
Serratia marcescens: By inhibiting violacein production in a QS-independent manner, as well as swimming and swarming motility [55]
[55] Klebsiella pneumoniae Unclear [62] Acinetobacter baumannii By blocking BfmR, which is a response regulator in a two-component signal transduction system [43] Aeromonas hydrophila Inhibition of violacein production and swimming motility [53,63] Porphyromonas gingivalis By inhibiting the activities of Arg-and Lys-specific proteinase (named RGP and KGP, respectively) [45] Antioxidants 2022, 11, x FOR PEER REVIEW 6 of 22 Serratia marcescens, Aeromonas hydrophila and A. baumannii [36,38,50,[53][54][55]. We have summarized the QS system's curcumin targets in various bacteria in Table 2. In addition, Figure 3 provides an overview of the inhibitory mechanisms of curcumin against biofilm formation, inhibition of bacterial swimming/clustering behaviors, and inhibition of virulence [35,36,38,50,[53][54][55]. Interestingly, available data suggest that the autoxidation of curcumin could also contribute to the inhibition of biofilm formation [56]. For example, curcumin was shown promote the production of lactate dehydrogenase (LDH) in P. aeruginosa, S. aureus, and E. faecalis, wherein the curcumin/LDH complex exhibited antibacterial and anti-biofilm activities [56]. Clearly, the anti-biofilm properties of curcumin increase its potential as a tractable anti-infective agent. Staphylococcus aureus By inhibiting the activity of sortase A by interaction with VAL-168, LEU-169, and GLN-172 sites based on curcumin and its analog methoxyl group on the benzene ring [30,57] Enterococcus faecalis Unclear [54] Listeria monocytogenes By circumventing the limitations to singlet-oxygen diffusion imposed by [36] Inhibition of Cell Division Inhibition of bacterial cell division is an important mechanism of curcumin's antibacterial activity [23,64]. Filament temperature-sensitive protein Z (FtsZ) is shown to be essential for bacterial cell division [64,65]. It consists of an N-terminal polymerization domain connected to a highly conserved C-terminal peptide (CCTP) of~eight amino acids by an intrinsically disordered linker region of variable length (50 amino acids in E. coli). FtsZ associates in a GTP-dependent manner to form polymers [64]. This process is coupled to the conversion between closed and open conformations of FtsZ and plays a critical role in the formation of the Z ring of FtsZ. The polymerized FtsZ filaments attach to the cytoplasmic membrane through membrane anchors ZipA and FtsA, mediated by the CCTP of FtsZ ( Figure 4). Rai et al., showed that curcumin blocks the formation of the cytokinetic Z ring through direct interaction with FtsZ in B. subtilis and E.coli [64]. In addition, curcumin also increased the GTPase activity of FtsZ, which in turn aborted the polymerization process [64]. Molecular docking of curcumin to the E. coli FtsZ structure suggests binding occurs within the GTPase catalytic pocket, with the curcumin molecule making key contacts with Gly20, Gly21, Gly109, Thr132, and Asn165 and residues at the sites of Gly21, Gly22, Gly72, Thr133, and Asn166 in B. subtilis FtsZ ( Figure 4) [66]. More recently, Morão et al., showed that a molecular simplified version of curcumin where its β-diketone moiety had been substituted with a monocarbonyl group could disrupt the divisional septum of B. subtilis without exerting a direct inhibition of FtsZ. These findings suggest that the simplified curcumin exerts its antibacterial action largely through membrane permeabilization, with disruption of the membrane potential necessary for FtsZ intra-cellular localization [23]. 
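The anti-biofilm activities collected in Table 2 are most often quantified with a crystal violet staining assay, reported as percent inhibition relative to an untreated control. A minimal sketch follows; the assay itself is standard, but none of the numbers below come from the cited studies.

```python
# Minimal sketch of scoring biofilm inhibition from a crystal violet assay.
# OD570 readings are hypothetical; real experiments average replicate wells
# and subtract a medium-only blank before this calculation.

def percent_biofilm_inhibition(od_control, od_treated):
    """Percent reduction in crystal violet staining vs. an untreated control."""
    return 100.0 * (od_control - od_treated) / od_control

# Hypothetical example: untreated biofilm vs. sub-MIC curcumin exposure.
print(round(percent_biofilm_inhibition(od_control=1.80, od_treated=0.45), 1))  # -> 75.0
```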
Inhibition of Cell Division

Inhibition of bacterial cell division is an important mechanism of curcumin's antibacterial activity [23,64]. Filamenting temperature-sensitive protein Z (FtsZ) has been shown to be essential for bacterial cell division [64,65]. It consists of an N-terminal polymerization domain connected to a highly conserved C-terminal peptide (CCTP) of about eight amino acids by an intrinsically disordered linker region of variable length (50 amino acids in E. coli). FtsZ associates in a GTP-dependent manner to form polymers [64]. This process is coupled to the conversion between closed and open conformations of FtsZ and plays a critical role in the formation of the FtsZ Z ring. The polymerized FtsZ filaments attach to the cytoplasmic membrane through the membrane anchors ZipA and FtsA, mediated by the CCTP of FtsZ (Figure 4). Rai et al. showed that curcumin blocks the formation of the cytokinetic Z ring through direct interaction with FtsZ in B. subtilis and E. coli [64]. In addition, curcumin also increased the GTPase activity of FtsZ, which in turn aborted the polymerization process [64]. Molecular docking of curcumin to the E. coli FtsZ structure suggests that binding occurs within the GTPase catalytic pocket, with the curcumin molecule making key contacts with Gly20, Gly21, Gly109, Thr132, and Asn165 in E. coli FtsZ, and with the residues Gly21, Gly22, Gly72, Thr133, and Asn166 in B. subtilis FtsZ (Figure 4) [66]. More recently, Morão et al. showed that a structurally simplified version of curcumin, in which the β-diketone moiety was substituted with a monocarbonyl group, could disrupt the divisional septum of B. subtilis without directly inhibiting FtsZ. These findings suggest that the simplified curcumin exerts its antibacterial action largely through membrane permeabilization, with disruption of the membrane potential necessary for FtsZ intracellular localization [23].

Induction of Oxidative Stress and Programmed Cell Death

Traditionally, programmed cell death (PCD) is an important biological and pathological process in the life cycle of eukaryotic multicellular organisms [67]. Similarly, monocellular organisms such as bacteria can activate signaling pathways leading to cell death within a colony. In bacteria, many factors, including the stress response, developmental phase, genetic transformation, and biofilm formation, contribute to the induction of programmed apoptotic-like death processes [67]. The physiological and biochemical hallmarks of apoptotic-like death in terminally stressed E. coli involve the production of reactive oxygen species (ROS), chromosomal condensation, extracellular exposure of phosphatidylserine, DNA fragmentation, membrane potential (ΔΨ) dissipation, and loss of structural integrity, all markers of eukaryotic apoptosis [68]. ROS-mediated cell death results from the damaging effects of superoxide anions (O2•−), hydrogen peroxide (H2O2), and hydroxyl radicals (OH•) on bacterial cellular components (DNA, membrane lipids, and proteins) [69]. Curcumin at MIC concentrations induces the production of ROS in bacterial cells, resulting in an apoptosis-like response in E. coli, including the accumulation of ROS, membrane depolarization, and an increase in Ca2+ influx [50]. At the genetic level, curcumin induced the upregulation of RecA protein expression, which mediates apoptotic-like death processes in bacteria [50]. In line with this finding, E. coli RecA knock-outs displayed curcumin resistance, consolidating the conclusion that curcumin-induced cell death in E. coli is dependent on apoptotic pathways [50]. In addition, curcumin has been shown to downregulate the expression of genes that mediate the SOS response in bacteria, which rescues the cell from DNA damage and is involved in biofilm formation and division [68]. LexA is a DNA-binding transcriptional repressor that regulates genes involved in the SOS response [70]. Recent studies indicated that curcumin inhibited the SOS responses caused by UV-induced DNA damage in Salmonella typhimurium and E. coli by suppressing the expression of LexA. The inhibitory effects of curcumin on biofilm formation and cell division mentioned above are likely associated with its inhibitory effects on the bacterial SOS response. Curcumin has also been shown to interact directly with bacterial DNA to produce a bacteriostatic effect [50]. We have provided an overview of curcumin-induced bacterial cell death in Figure 5.

Phototoxicity

Curcumin absorbs blue light in the range of 455-460 nm and can be employed as an effective photosensitizer to promote the success of photodynamic processing [19].
This photosensitizing property has been exploited to induce phototoxicity in Gram-positive and -negative bacterial cells under blue light irradiation [50,71]. It is noteworthy that Gram-positive bacteria are known to be more sensitive to photosensitizers and more easily killed than Gram-negative bacteria [50]. This difference may be related to the more robust outer membrane structure of Gram-negative bacteria, compared to the more porous envelope of Gram-positive cells, which allows photosensitizers to penetrate the cells more easily [72]. Recently, it was found that ethylenediaminetetraacetic acid (EDTA), which permeabilizes the cell membrane, could significantly enhance the antibacterial effect of blue light-activated curcumin in S. aureus and S. mutans cells [73].

In the past 10 years, researchers have developed a working understanding of the molecular mechanisms of curcumin-induced phototoxicity, although the precise molecular mechanism is still unclear [19]. It has been demonstrated that the antibacterial effect of blue light-activated curcumin involves autoxidation to generate ROS, which in turn damage lipids, proteins, and DNA, finally leading to bacterial cell death [50]. Jiang et al. showed that blue light-activated curcumin could significantly increase the levels of intracellular ROS and membrane damage in S. aureus [74]. A recent study showed that curcumin-mediated phototoxicity involves the direct induction of DNA damage and protein degradation, eradication of biofilms, and inhibition of virulence genes (e.g., inlA, hlyA, and plcA) in Listeria monocytogenes [75]. Chen et al. showed that the process of curcumin-mediated phototoxicity is temperature dependent [76]. Very recently, it was reported that curcumin could be employed as a coating on the surface of endotracheal tubes (considered a primary cause of ventilator-associated pneumonia), providing robust photodynamic inactivation under blue-light activation (at 450 nm) against E. coli, S. aureus, and P. aeruginosa [77]. This photodynamic activity suggests a novel application of curcumin in preventing ventilator-associated pneumonia.

Curcumin Perturbs Bacterial Cell Metabolism

Many antibiotics, such as β-lactams, aminoglycosides, and quinolones, have been widely used in clinical practice, and their primary mechanisms of action have been well established [1]. However, more recent metabolomics studies based on high-throughput technologies have indicated that, in addition to these distinct mechanisms, the metabolic changes that occur downstream of the interaction of antibiotics with their primary targets also play an important role in their bacterial-killing mechanism [78]. It has been reported that L-serine supplementation could sensitize E. coli to gentamicin by promoting NADH and ROS production, which also mediate the bacterial killing by curcumin [79]. Adeyemi et al. reported that curcumin treatment of S. aureus impacts kynurenine, nitric oxide, and total thiol levels, indicating that perturbations in the corresponding metabolic pathways contribute to the antibacterial killing mechanism of curcumin [80]. The activation of the kynurenine pathway likely decreases the cellular L-tryptophan pool available to support bacterial growth, thereby starving bacterial cells of an essential nutrient [80].
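As a brief computational footnote to the phototoxicity work above: photodynamic protocols are usually characterized by the delivered light dose, or fluence. The sketch below shows the standard arithmetic; the 450 nm wavelength echoes the endotracheal tube study, while the irradiance and exposure time are assumed example values rather than parameters from the cited studies.

```python
# Minimal sketch: radiant exposure (fluence) delivered in a photodynamic protocol.
# fluence (J/cm^2) = irradiance (W/cm^2) x exposure time (s).
# The 450 nm wavelength echoes the blue-light activation discussed in the text;
# the irradiance and exposure time are assumed example values.

def fluence_j_per_cm2(irradiance_mw_per_cm2, exposure_s):
    """Convert irradiance (mW/cm^2) and exposure time (s) to fluence (J/cm^2)."""
    return irradiance_mw_per_cm2 / 1000.0 * exposure_s

# e.g., 50 mW/cm^2 of 450 nm light for 10 minutes:
print(fluence_j_per_cm2(50, 10 * 60))  # -> 30.0 J/cm^2
```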
Curcumin Regulates Intracellular Bacterial Proliferation

Curcumin is a powerful immune regulator, with a proven ability to modulate host defenses against intracellular bacterial infections [81]. Marathe et al. showed that pretreatment of macrophages with curcumin attenuated intracellular infection by Listeria monocytogenes and Shigella flexneri, whereas it had the opposite effect on infection by Salmonella enterica serovar Typhimurium, S. aureus, and Yersinia enterocolitica, which was aggravated by curcumin [81]. This differential effect may be attributed to the membrane-stabilizing effect of curcumin: S. enterica serovar Typhimurium, S. aureus, and Y. enterocolitica have acquired machinery that inhibits the fusion of the pathogen-containing vacuole with lysosomes [82], whereas Listeria monocytogenes and S. flexneri can escape into the cytosol of host cells and thereby avoid lysosomal degradation [83]. Recent studies also indicated that curcumin can protect human macrophages against Mycobacterium tuberculosis infection by inducing apoptosis, autophagy, and the activation of nuclear factor-kappa B (NF-κB) [84]. To date, the key host targets of curcumin that govern the growth and proliferation of intracellular pathogens are still unclear, and the precise molecular mechanisms require further investigation.

Synergistic Antibacterial Effects of Curcumin with Antibacterial or Non-Antibacterial Agents

Synergistic antibacterial effects are strictly defined microbiological phenomena, requiring two bioactive agents to exhibit a greater bacterial-killing effect than the sum of the effects of each constituent [85].

Curcumin and Polypeptide Antibacterial Drugs

In the clinic, vancomycin and the polymyxins (including polymyxin B and polymyxin E, also called colistin) are commonly employed as antibacterial drugs against MDR Gram-positive and Gram-negative bacteria, respectively [90]. The emergence of polymyxin- and vancomycin-resistant bacteria has posed a huge challenge and medical burden. The well-accepted primary mechanism of action of polymyxins is the spatial displacement of cations (e.g., Ca2+ and Mg2+) in the Gram-negative outer membrane and binding to the lipid A component of lipopolysaccharide (LPS), subsequently disrupting the stability of both the outer and inner membranes and ultimately leading to bacterial cell lysis [91]. Recent studies also indicated that polymyxins can induce the production of excessive ROS (i.e., OH•) in bacterial cells, leading to oxidative stress-dependent cell death [92]. Polymyxin B in combination with curcumin showed a marked synergistic effect against polymyxin-susceptible and -resistant Gram-positive (e.g., Enterococcus, S. aureus, and Streptococcus) and Gram-negative (e.g., A. baumannii, E. coli, P. aeruginosa, and S. maltophilia) bacterial isolates from traumatic wound infections [32]. This synergistic effect may be due to curcumin's ability to permeabilize the outer membrane, which facilitates the entry of the second agent into the bacterial cell, leading to cell death [24]. In addition, this synergistic effect could be attributed to the inhibitory effect of curcumin on the activities of efflux pumps [9,24].
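Synergy claims of this kind are conventionally quantified with the fractional inhibitory concentration index (FICI) from a checkerboard assay: FICI = MIC_A(combination)/MIC_A(alone) + MIC_B(combination)/MIC_B(alone), with FICI <= 0.5 read as synergy. The sketch below uses hypothetical MIC values; the interpretive cutoffs follow the common convention, although boundaries vary slightly between studies.

```python
# Minimal sketch of the fractional inhibitory concentration index (FICI)
# used to score checkerboard synergy assays. MIC values are hypothetical.

def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """FICI = FIC_A + FIC_B, each FIC being the MIC in combination / MIC alone."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(f):
    # Common convention: <=0.5 synergy; >0.5-4 no interaction; >4 antagonism.
    if f <= 0.5:
        return "synergy"
    if f <= 4.0:
        return "no interaction"
    return "antagonism"

# Hypothetical curcumin (A) + polymyxin B (B) checkerboard result:
f = fici(mic_a_alone=256, mic_b_alone=2, mic_a_combo=32, mic_b_combo=0.25)
print(round(f, 3), interpret(f))  # -> 0.25 synergy
```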
Curcumin and polymyxin combination treatment for bacterial infections may have another advantage beyond antibacterial activity: a significant improvement in the therapeutic index of polymyxins, because curcumin additionally attenuates polymyxin-induced cytotoxicity, neurotoxicity, and nephrotoxicity [93]. This combination may have powerful applications in clinical practice and warrants clinical trials. Vancomycin is a glycopeptide antibiotic that inhibits a specific step in the synthesis of the peptidoglycan layer in Gram-positive bacteria. It has been reported that curcumin combined with vancomycin showed a synergistic effect against MDR clinical K. pneumoniae isolates [94]. This synergy may depend on a combined effect on cell membrane permeability [94]. Moreover, curcumin could also attenuate vancomycin-induced nephrotoxicity by inhibiting oxidative stress and the inflammatory response in a rat model [94].

Curcumin and β-Lactam Antibacterial Drugs

β-lactam antibiotics are the most widely used antibacterial agents worldwide. β-lactamases confer significant antibiotic resistance on their bacterial hosts by hydrolyzing the amide bond of the four-membered β-lactam ring of β-lactam antibiotics, which comprise four classes of drugs: penams (penicillins), cephems (cephalosporins), monobactams, and carbapenems [95]. It has been reported that a curcumin and meropenem combination displayed markedly synergistic or additive effects, as assessed by MICs, against antibiotic-susceptible and -resistant Gram-positive (E. faecalis) isolates and carbapenem-resistant MDR A. baumannii, P. aeruginosa, and K. pneumoniae isolates [86]. A report by Yadav et al. showed that a water-soluble curcumin derivative could reverse meropenem resistance by targeting the activity of carbapenemases and the AcrAB-TolC multidrug efflux pump system [96]. Mun et al. showed that curcumin in combination with oxacillin or ampicillin exhibited a marked synergistic effect against S. aureus ATCC (American Type Culture Collection) 25923, a methicillin-sensitive strain [97]. Similarly, in another study, BDMC in combination with oxacillin showed a marked synergistic effect against S. aureus ATCC 33591 (a methicillin-resistant strain) and clinical MRSA isolates [98]. The underlying mechanism may involve the mecA gene, which encodes penicillin-binding protein 2a (PBP2a) and governs the resistance of MRSA isolates to β-lactam antibiotics [98]. Sasidharan et al. found that curcumin in combination with third-generation cephalosporins (e.g., cefaclor, cefodizime, and cefotaxime) showed a marked synergistic effect against S. aureus, B. subtilis, and E. coli, which are also associated with infectious diarrhea [87]. No increased toxicity was observed for these combinations [87]. These results indicate that curcumin-cephalosporin combinations are promising therapeutic options for infectious diarrheal disease.

Curcumin and Aminoglycoside Antibacterial Drugs

Aminoglycosides are potent, broad-spectrum antibiotics that act through inhibition of protein synthesis by irreversibly binding to the 30S ribosomal subunit [99]. A report by Teow et al. stated that curcumin in combination with two aminoglycoside antibiotics (amikacin and gentamicin) showed a powerful synergistic effect against S. aureus strains, and these synergistic effects were stronger than those of curcumin in combination with ciprofloxacin [100].
Notably, this difference in synergistic effect may be related to the difference in the primary targets of quinolones and aminoglycosides in bacteria [101]. The likely mechanism of action involves the inhibition of biofilm formation, as evidenced by the combination's significant inhibition of swarming motility and of the mRNA expression of several key QS regulatory genes (e.g., lasI, lasR, rhlI, and rhlR) [100]. In addition, it has been reported that curcumin can attenuate gentamicin-induced nephrotoxicity and neurotoxicity by inhibiting oxidative stress and cell apoptosis in a rat model [102]. Therefore, combining curcumin with aminoglycosides can not only improve antibacterial effectiveness but also decrease the toxic effects of gentamicin.

Curcumin and Macrolide Antibacterial Drugs

Azithromycin is a macrolide antibiotic that exhibits a good antibacterial effect by inhibiting bacterial protein synthesis, quorum sensing, and the formation of biofilms. In clinical practice, azithromycin has been used to treat respiratory, urogenital, dermal, and other bacterial infections [103]. Bahari et al. found that curcumin in combination with azithromycin showed a synergistic effect against P. aeruginosa PAO1, with a FICI value of 0.25 [100]. The underlying mechanism may be similar to that of the curcumin-gentamicin combination mentioned above [100]. Erythromycin is another macrolide antibiotic, which acts by inhibiting bacterial protein synthesis and thereby blocking growth. In a rat model, oral administration of curcumin (50 mg/kg) together with erythromycin (20 mg/kg) inhibited the growth of MRSA isolates in bone tissue significantly more than either agent administered alone [11]. The curcumin and erythromycin combination also significantly alleviated bone infection and the inflammatory response [11].

Curcumin and Quinolone Antibacterial Drugs

A marked synergistic effect was observed for curcumin in combination with two quinolone antibiotics (ciprofloxacin and norfloxacin) against the S. aureus ATCC 33591 strain and clinical MRSA isolates [97]. On the contrary, curcumin treatment reduced the antimicrobial activity of ciprofloxacin against Salmonella typhimurium and Salmonella typhi [97]. This may be related to the antioxidant property of curcumin and its inhibition of the expression of interferon γ (IFNγ) in vitro and in a mouse model [97].

Curcumin and Berberine

Berberine is a benzylisoquinoline alkaloid with antimicrobial properties against both Gram-negative and Gram-positive bacteria [104]. Berberine has been widely used in traditional Chinese and Native American medicines. The FtsZ protein is an important target of berberine in inhibiting bacterial division [105]. Interestingly, co-encapsulation of berberine and curcumin in liposomes decreased their MICs against MRSA by 87% and 96%, respectively, compared to their free forms, with a FICI of 0.13, indicating a synergistic effect [88]. However, no synergy was detected when the two compounds were combined in their free forms. In addition, co-treatment with berberine and curcumin in liposomes also significantly reduced intracellular infection and the inflammatory response in macrophages following MRSA infection. Mechanistically, the synergistic effect between curcumin and berberine is partly dependent on the inhibition of biofilm formation and the improvement of their solubilities [88].
Additionally, berberine, like curcumin, is an FtsZ inhibitor that blocks bacterial cell division [104]. Therefore, the synergistic effect between curcumin and berberine may also be partly dependent on the inhibition of FtsZ assembly.

Curcumin and Epigallocatechin Gallate

Epigallocatechin-3-gallate (EGCG) is a polyphenol found in green tea that, similar to curcumin, has been linked with health benefits and has significant antimicrobial activity against some MDR pathogens, including MDR S. maltophilia, A. baumannii, and S. aureus [106]. In vitro, curcumin in combination with EGCG exhibited a marked synergistic effect against MDR A. baumannii [107]. A possible explanation for this synergy is that EGCG disrupts the outer membrane, facilitating the entry of curcumin into bacterial cells [108]. Another study suggested that inhibition of acylhomoserine lactone-mediated biofilm formation may contribute to this synergistic effect; investigations of the precise mechanisms are still required [109].

Curcumin and Metals

Many metals have been used as antimicrobial agents since antiquity; their potential molecular mechanisms involve oxidative stress, protein dysfunction, or membrane damage in bacterial cells [110]. A copper(II) sulfate pentahydrate-curcumin complex (Cu-CUR), an iron(III) nitrate nonahydrate-curcumin complex (Fe-CUR), and a zinc(II) chloride-curcumin complex (Zn-CUR) all inhibited cell growth of P. aeruginosa PAO1 significantly more than curcumin treatment alone [111,112]. Furthermore, the authors found that the Cu-CUR complex significantly inhibited biofilm formation and the production of QS-related virulence factors of P. aeruginosa PAO1 [89]. Consistently, synergistic activity of curcumin with silver or copper nanoparticles (NPs) against the cell growth and biofilm formation of S. aureus and P. aeruginosa was detected, compared to curcumin, AgNPs, or CuNPs alone [113]. These marked synergistic effects may be related to improved delivery or intracellular uptake of curcumin [114].

Safety of Curcumin

Curcumin has been proven to be safe and tolerable across various animal studies as well as clinical trials [115-117]. Orally administered curcumin at doses of 50, 250, 480, and 1300 mg/kg body weight for 13 weeks did not exhibit acute toxicity in rats [118]. However, some abnormal effects, including increased liver weight, stained fur, discolored feces, and hyperplasia of the mucosal epithelium in the cecum and colon, were observed in animals from the highest dosage group (2600 mg/kg body weight). Orally administered curcumin at 100, 200, or 400 mg/kg/day has been shown to effectively inhibit acute liver damage, nephrotoxicity, and nerve damage caused by colistin, aflatoxin B1, carbon tetrachloride, and cadmium [21,119-122] in rat or mouse models. In an infection model, oral administration of curcumin at 25 or 50 mg/kg body weight for two weeks significantly ameliorated the H. pylori infection-induced inflammatory response in the gastric tissues of mice [123]. A phase I human trial showed that oral administration of curcumin to cancer patients at a dose of 8 g/day for three months did not produce any adverse effects, although some adverse effects were detected when patients were administered a higher dose of 12 g/day [124].
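To relate fixed daily doses such as the 8 g/day above to the per-kilogram animal doses, it helps to normalize by body weight. A minimal sketch follows; the 60-kg reference weight is an assumption, although it matches the 500 mg to 8.33 mg/kg conversion quoted in the trial described next.

```python
# Minimal sketch: normalizing a fixed daily dose by body weight.
# The 60 kg adult is an assumed reference weight; note that 500 mg / 60 kg
# reproduces the 8.33 mg/kg equivalence quoted for the famotidine trial below.

def dose_mg_per_kg(total_dose_mg, body_weight_kg=60.0):
    return total_dose_mg / body_weight_kg

print(round(dose_mg_per_kg(500), 2))   # -> 8.33 mg/kg (500 mg daily dose)
print(round(dose_mg_per_kg(8000), 1))  # -> 133.3 mg/kg (8 g/day phase I dose)
```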
The results of a 4-month phase I clinical trial in cancer patients showed that oral curcumin at a dose of 3.6 g/day significantly inhibited serum prostaglandin E2 (PGE2) levels, a biomarker of the inflammatory response. Notably, no adverse effects were reported in the curcumin treatment cohort [125]. Consistently, a triple-blinded clinical trial showed that a combination of 500 mg curcumin (equal to 8.33 mg/kg body weight) and 40 mg famotidine daily for one month significantly decreased the rate of H. pylori infection in patients [126]. Collectively, these studies indicate that the therapeutic dose of curcumin is far lower than the dosages at which toxicity is observed, giving curcumin a good therapeutic index.

Nano-Formulations of Curcumin

Curcumin has low water solubility (about 11 ng/mL), which results in poor bioavailability after oral consumption [127]. Additionally, curcumin degrades rapidly, resulting in low concentrations in the blood and organs, making it difficult to reach the effective concentration needed to treat bacterial infections in the liver, lungs, or other organs [128]. To overcome this insufficient bioavailability, scientists have developed various nano-formulations of curcumin, such as lipid-based nanocarriers (e.g., liposomes, solid lipid nanoparticles, nanostructured lipid carriers, and nano-emulsions), biopolymers (e.g., nanocomposites, polymeric nanoparticles, hydrogels, and polymeric micelles), technique-based nanoparticles (e.g., spray-dried nano-formulations of curcumin and nanofibers), and other miscellaneous types of nanocurcumin (curcumin nanocrystals, quantum dots, and graphene oxide) [18,129-132]. In addition, nanomaterial-based combinations of curcumin with other antibacterial agents have also been developed, although most are used in cancer therapy [133]. Here, we summarize the main types of nanocurcumin applied for their antibacterial effect in Table 3. Their special characteristics and antibacterial activities have been well described elsewhere (see the review by Sharifi et al.) [132]. Notably, no clinical trial has yet tested the effectiveness of these nano-formulations, although they exhibit better antibacterial effects in vitro and in animal experiments owing to improved solubility and biocompatibility. Therefore, clinical trials are still required. In addition, beyond nano-formulations, other types of new formulations (e.g., inclusion technology, solid dispersion technology, microspheres, and microcapsules) have been developed to improve the solubility and bioavailability of curcumin. For example, Yadav et al. found that various cyclodextrin (CD) complexes of curcumin could enhance its solubility more than 100-fold compared with curcumin per se in water [134]. However, similar to the nano-formulations, the development of these new formulations remains at the laboratory research stage, and the necessary clinical studies are lacking.

Table 3. Nano-formulations of curcumin and their antibacterial effects (preparation and characteristics; improvement in antibacterial activity as assessed by MICs or biofilm formation; references).
Curcumin nanoparticles (curc-np): curcumin encapsulated in a silane-hydrogel nanoparticle vehicle; average hydrodynamic diameter 222 ± 14 nm. In vitro, curc-np significantly inhibited the growth of MRSA and P. aeruginosa isolates compared to native curcumin; in a mouse model, it significantly reduced the bacterial burden in MRSA-infected burn wounds compared to native curcumin administration.
Nanoparticles of curcumin (nanocurcumin): a wet-milling technique reduced the particle size of curcumin to 2-40 nm, and nanocurcumin was freely dispersible in water. Its inhibitory activity against the tested organisms, including P. aeruginosa and A. niger, was much higher than that of native curcumin in DMSO (for which the corresponding MICs were 150, 100, 300, 250, and 400 µg/mL) [28,130].
Microcapsule curcumin: prepared with gelatin and porous starch as a wall system by a spray-drying method; the size was not reported.
Poly(lactic-co-glycolic acid) curcumin nanocapsules (PLGA-CUR-NCs): curcumin (CUR) nanocapsules (NCs) prepared by the solvent displacement method with some modifications, as detailed in a published paper. The solubility in water increased to 591-928 µg/mL and could be regulated by changes in the oil-to-water ratio; the sizes were in the range of 100-1000 nm, dependent on the ratio of glucose [138].
Nano-sized particles of curcumin: colloids of curcumin nanoparticles with an average diameter of 20-40 nm, prepared by a wet-milling technique. Nano-curcumin enhanced the inhibition of biofilm formation in P. aeruginosa, with no marked change in the MICs.
CPCF: cytotoxicity significantly decreased in human skin fibroblasts compared to native curcumin [140].
Curcumin-chitosan-zinc oxide (CCZ): curcumin and chitosan layered on hexagonal ZnO; particle size about 48 ± 2 nm. Increased antibacterial activity of CCZ against MRSA and E. coli compared to native curcumin or ZnO [141].
Pectin/curcumin/sulfur nanoparticle films: pH-responsive pectin-based functional films prepared by incorporating curcumin and sulfur nanoparticles (SNP), uniformly dispersed in the pectin to form a composite film. The composite film exhibited an enhanced inhibitory effect against E. coli and L. monocytogenes, with strong antioxidant activity [131].

Conclusions and Perspectives

In the past decades, the potential molecular mechanisms of curcumin's antibacterial activities have been extensively studied; they involve disruption of the bacterial membrane, inhibition of the production of bacterial virulence factors and of biofilm formation, induction of oxidative stress leading to programmed cell death, perturbation of bacterial metabolism, and phototoxicity. These characteristics also help explain how curcumin acts as a broad-spectrum antibacterial adjuvant, as evidenced by its marked additive or synergistic effects with various conventional antibiotics and non-antibiotic compounds, such as antibacterial agents, natural products, and metals. Animal experiments and human clinical trials indicate that curcumin has a high safety margin. However, unlike curcumin as a chemotherapy adjunct in cancer therapy, curcumin as a potential antibacterial therapy still faces many challenges: (1) the critical targets of curcumin, alone or in combination, in bacteria and the precise molecular mechanisms are poorly understood; (2) curcumin shows poor solubility, low bioavailability, and rapid degradation in humans and animals when consumed orally; and (3) effective clinical trials are lacking.
To overcome the poor solubility of curcumin, scientists have developed various curcumin nano-formulations, and these indeed exhibit better solubility and antibacterial activity than native curcumin. However, there is a lack of evidence-based, randomized investigations, especially ones exploring the therapeutic role of nanocarrier-based delivery systems in enhancing antibacterial action; much remains to be explored.
The hydrodynamic properties of dark- and light-activated states of n-dodecyl beta-D-maltoside-solubilized bovine rhodopsin support the dimeric structure of both conformations.

Rhodopsin (Rho) has been extracted in n-dodecyl β-D-maltoside (DM) from bovine retinal rod outer segments and purified to homogeneity by affinity chromatography on concanavalin A-Sepharose. Because chemical cross-linking of Rho and photoactivated Rho (Rho*) provided initial evidence for the oligomeric nature of the photoreceptor protein, we carried out a hydrodynamic characterization of the native and activated conformations of detergent-solubilized Rho. The molecular weights of the complexes between the dark and photoexcited states of Rho and DM were determined by gel filtration chromatography on Sephacryl S-300, in the presence of 0.1% DM. Subtracting the size of the corresponding detergent micelles resulted in molecular masses of 78 kDa for native Rho and 76 kDa for Rho*. The measured content of 0.97 g of detergent/g of protein resulted in a calculated partial specific volume of 0.765 cm³/g for the protein-detergent complex and a molar mass of 64-65 kDa for the protein moiety. The sizes of Rho·DM and Rho*·DM complexes were also evaluated by sedimentation on 10-30% sucrose gradients, in the presence of 0.1% DM, and molecular masses of about 60 kDa were estimated for both the dark- and light-activated states of the photoreceptor protein. The size of Rho was determined to be 65,300 and 69,800 Da, respectively, when the purified Rho·DM complex was either chromatographed on Sephacryl S-300 or ultracentrifuged on sucrose gradients in the absence of DM. All these results were consistent with a dimeric quaternary structure for both conformations of Rho. Additionally, the functional integrity of the purified photoreceptor protein following gel filtration chromatography and ultracentrifugation was demonstrated by three criteria: (i) its characteristic UV-visible absorption spectra, (ii) its capability to photoactivate transducin, and (iii) its ability to serve as a substrate for rhodopsin kinase.

G protein-coupled receptors (GPCRs) are a large group of integral membrane proteins that respond to environmental signals and initiate transduction pathways that activate cellular processes. In general, activation of a receptor by binding of an extracellular signal or by light absorption triggers a conformational change in its structure, which then activates a peripherally membrane-associated heterotrimeric G protein. One of the most important unanswered questions is how these receptors operate and couple to their cognate G proteins. A growing body of recent pharmacological, biochemical, and biophysical data strongly suggests that GPCRs are organized as functional homo- and heterodimers as well as higher-order oligomers (1,2). Oligomerization of GPCRs may cluster these receptors in particular regions of the membrane. This process could be critical for the proper kinetics of GPCR signaling, selectivity, desensitization, and internalization. Receptor maturation during biosynthesis and translocation to the plasma membrane could also benefit from oligomerization (3). Additionally, the formation of heteromers may expand the repertoire of GPCRs and their physiological responses.
The photoreceptor protein rhodopsin (Rho) is a prototypical GPCR, which is involved in the molecular transformation of light energy into a neuronal signal transmitted to the secondary neurons of the retina, and ultimately to the brain, during scotopic vision. Rho is composed of the protein opsin covalently linked to 11-cis-retinal. The light-induced isomerization of 11-cis-retinal to its all-trans configuration leads to a conformational change in Rho that triggers the signal transduction cascade via reactions of the heterotrimeric G protein transducin (T). T, which is arranged as two units, the α subunit (Tα) and the βγ-complex (Tβγ), transduces the visual stimulus by activating a cGMP phosphodiesterase. Fast depletion of cGMP in the rod outer segments (ROS) results in the closure of cGMP-gated channels located in the plasma membrane of the ROS and blockage of the inward flux of Na+ and Ca2+ ions. The reduction in the circulating electrical current leads to hyperpolarization of the membrane and to the generation of a neuronal signal.

Electron microscopy, low-angle x-ray diffraction, and neutron diffraction analyses have indicated that Rho is a monomer randomly distributed in the plane of the membrane without any special ordering (4-8). Additionally, biophysical measurements using high-speed flash photometry and microspectrophotometry have shown that Rho undergoes rapid rotational and lateral diffusion (9,10). Cross-linking studies have also suggested a monomeric organization for rhodopsin (11,12). Accordingly, the concept of how Rho functions in the disk membrane of retinal ROS has been dominated by the hypothesis that Rho rapidly diffuses as a monomeric unit in the fluid membranes, which are mostly composed of highly unsaturated phospholipids, to encounter T. However, Fotiadis et al. (13,14) and Liang et al. (15) have recently demonstrated by atomic force microscopy that both Rho and opsin molecules are packed as dimers in isolated murine disk membranes at both low and room temperatures. To contribute to the clarification of this controversy, we have analyzed here the hydrodynamic properties of n-dodecyl β-D-maltoside (DM)-solubilized bovine Rho and photoactivated Rho (Rho*), with the purpose of elucidating their native quaternary structures.

Preparation of ROS and Washed Membranes

ROS membranes were isolated from frozen bovine retinas as described previously (16). Dark-depleted ROS membranes were prepared by washing ROS with 5 mM Tris-HCl (pH 7.4), 2 mM EDTA, and 5 mM β-mercaptoethanol until no significant amount of peripheral proteins was released with the wash buffer. ROS membranes and dark-depleted ROS membranes were stored in the dark at -70°C.

Preparation of an Enriched Fraction of Rhodopsin Kinase

Freshly prepared ROS membranes were washed three times with a buffer containing 70 mM potassium phosphate (pH 6.8), 5 mM magnesium acetate, 5 mM β-mercaptoethanol, and 0.1 mM phenylmethylsulfonyl fluoride. Following centrifugation, the isotonically washed ROS pellet was hypotonically extracted with 5 mM Tris-HCl (pH 7.4), 5 mM magnesium acetate, 5 mM β-mercaptoethanol, and 0.1 mM phenylmethylsulfonyl fluoride. An enriched fraction of rhodopsin kinase was obtained in the supernatant produced after centrifugation. The whole procedure was carried out at 4°C, in the dark under red light.
Purification of Rho and T

Rho was extracted from the ROS membranes, under dim red light, with 1% DM in Rho buffer (50 mM Hepes (pH 6.6), 140 mM sodium chloride, 3 mM magnesium chloride, 20% glycerol, 2 mM calcium chloride). The sample was then diluted 10 times with Rho buffer to reduce the concentration of DM to 0.1% and centrifuged at 100,000 × g for 30 min at 4°C. The resulting supernatant was transferred to a different tube, and Rho was purified by batchwise affinity chromatography on concanavalin A-Sepharose (17), using DM instead of n-octyl-β-D-glucopyranoside as the detergent. T was isolated from ROS membranes prepared under room light, at 4°C, following the affinity procedure of Kühn (18). GTP (~100 µM) was used to elute T from the washed, illuminated ROS membranes, and T was further purified to homogeneity by anion exchange chromatography on a diethylaminoethyl-cellulose DE52 column, as described elsewhere (19,20).

Cross-linking of Rho and Rho*

Samples of washed ROS membranes or purified Rho (1.28 µM) were incubated in the dark or in the presence of light with sulfo-SMCC (5 mM), MBS (5 mM), o-PDM (2-8 mM), or p-PDM (2-8 mM) for 1 h at room temperature. Stock solutions of MBS, o-PDM, and p-PDM were freshly prepared in dimethyl sulfoxide, and sulfo-SMCC was dissolved in water. The reactions with sulfo-SMCC and MBS were carried out in 10 mM sodium phosphate (pH 7.2), 5 mM magnesium acetate, whereas the reactions with both phenylenedimaleimides were performed in 50 mM Tris-HCl (pH 7.5), 5 mM magnesium acetate. As controls, samples of washed ROS membranes or purified Rho were incubated with the corresponding vehicles. Additionally, the time course of cross-linking was determined by incubating purified Rho (1.28 µM) with 5 mM sulfo-SMCC, MBS, or o-PDM, as described above. At designated time intervals (0-60 min), the reactions were terminated by the addition of 20 mM β-mercaptoethanol. Various concentrations of purified Rho (1.28-20.5 µM) were also incubated for 1 h, at room temperature, with 5 mM sulfo-SMCC. In all cases, the samples were separated by SDS-PAGE, and the cross-linked products were stained with Coomassie Blue or silver.

Gel Filtration Chromatography of Purified Rho and Rho* in DM

Because Rho was purified in the presence of 0.1% (1.96 mM) DM, which is above its critical micelle concentration (0.18 mM), the formation of detergent micelles was expected, and Rho·DM complexes were formed. Purified Rho samples (0.5-0.8 mg) were applied to a Sephacryl S-300 size exclusion column (total volume (Vt) = 50.3 ml) previously equilibrated with 50 mM Hepes (pH 6.6), 150 mM sodium chloride, 3 mM magnesium chloride, 2 mM calcium chloride, 5 mM β-mercaptoethanol, and 0.1% DM. Protein standards were used to calibrate the column and were chromatographed together with the photoreceptor samples, under dim red light, at 4°C. The excluded volume (Vo) and included volume were determined by chromatographing blue dextran and potassium dichromate, respectively. Parallel separations using the same Sephacryl S-300 column were also carried out with purified Rho samples in the absence of protein standards. The column was run at a flow rate of 150 µl/min, and the eluting proteins were simultaneously monitored at 280, 380, and 498.5 nm and subsequently separated by SDS-PAGE. The elution of Rho was also monitored immunologically by Western blot using the monoclonal antibody 1D4.
The elution volume (Ve) was measured for each protein, and the corresponding Kav was calculated from Equation 1,

Kav = (Ve - Vo)/(Vt - Vo)    (Eq. 1)

The molecular weight of the complex between Rho and DM was empirically determined by plotting the Kav value of each standard versus the logarithm of its molecular weight. Additionally, a linear relationship was obtained by plotting (-log Kav)^(1/2) against each Stokes radius (21). An identical methodology was employed to determine the size of the complex between purified Rho* and DM, with the exception that the whole procedure was performed under illumination.

Sucrose Gradient Ultracentrifugation of Purified Rho and Rho* in DM

Linear 10-30% sucrose gradients (4.6 ml) were prepared in 50 mM Tris-HCl (pH 8.0), 0.1 mM EDTA, 5 mM magnesium chloride, 0.15 M ammonium chloride, 0.2 mM dithiothreitol, and 0.1% DM. Marker proteins with known sedimentation coefficients were employed to calibrate the gradients, and the soluble form of a variant surface glycoprotein purified from the TEVA1 Trypanosoma evansi Venezuelan isolate, whose sedimentation coefficient was recently reported (22), was also included as a standard. Samples containing Rho (0.3 mg) and all protein markers (0.3 mg) were carefully layered on top of the sucrose gradients, under dim red light. The gradients were spun at 200,000 × g, for 18 h, at 4°C, in a Beckman SW 50Ti rotor. Fractions were collected from the bottom of the tubes, and aliquots were analyzed by SDS-PAGE. The migration of the Rho·DM complex was determined by monitoring its absorption at 280, 380, and 498.5 nm and by Western blot analyses using the monoclonal antibody 1D4. The volume of migration of each standard was plotted against its corresponding sedimentation coefficient, and the resulting linear curve was utilized to calculate the sedimentation coefficient of the Rho·DM complex (23). The molecular mass of the complex between Rho and DM was estimated from a calibration curve of the protein standards' molecular masses versus their Stokes radii multiplied by their sedimentation coefficients. A similar procedure was employed to determine the sedimentation coefficient and size of the complex between purified Rho* and DM.

Other Procedures

The light-dependent guanine nucleotide binding activity of T was measured by Millipore filtration using [3H]GMPpNp (24,25). T GTP hydrolysis assays were performed in the presence of 0.0075% phosphatidylcholine (16). Protein concentration was determined according to Bradford (26), using bovine serum albumin as the protein standard. SDS-PAGE was carried out on 1.5-mm-thick slab gels containing 10 or 12% polyacrylamide (27). Because heat induces the formation of high-molecular-weight Rho aggregates, Rho-containing samples were not boiled prior to SDS-PAGE. For Western blot analyses, the proteins were electrotransferred from the gels to nitrocellulose filters (28). For immunodetection, the filters were incubated with the monoclonal antibody 1D4 (dilution 1:15,000). The membranes were then treated with alkaline phosphatase-conjugated secondary antibodies against mouse IgG, at a dilution of 1:2,000, and the immunoreactive bands were visualized with 5-bromo-4-chloro-3-indolyl phosphate and nitro blue tetrazolium. The phosphorylation of Rho* samples was carried out in 50 mM Tris-HCl (pH 8.0), 5 mM magnesium acetate, 20 mM potassium fluoride, and 50 µM [γ-32P]ATP (specific activity ≈4,500 cpm/pmol), in the presence of a 50-µl aliquot of an enriched fraction of rhodopsin kinase.
Following incubation for 1 h at room temperature, the kinase reactions were terminated with sample buffer for SDS-PAGE (27) and loaded completely on 12% polyacrylamide slab gels. The resulting 32P-labeled phosphopolypeptides were separated by electrophoresis and electrotransferred to polyvinylidene difluoride membranes (28). The membranes were exposed to Kodak X-Omat x-ray films for 24 h, at -80°C, using intensifying screens, and the phosphorylated bands were analyzed qualitatively by autoradiography.

The concentration of DM was estimated by the anthrone method (29). Briefly, a 200-µl aliquot of the appropriate DM solution was mixed with 800 µl of the anthrone reagent (2 g/liter in 17 M H2SO4) and heated for 15 min at 100°C. After cooling, the mixture was diluted with 10 ml of 17 M H2SO4, and the absorbance was measured at 620 nm. A standard linear curve relating A620 to the concentration of DM was determined in the range from 10 to 100 µg of DM. Because Rho is a glycoprotein containing two sites of oligosaccharide attachment, at Asn2 and Asn15 (30), and these two sites were found to contain predominantly the uniquely small GlcNAc3Man3, with smaller amounts of chains containing Man4 and Man5 (31,32), we prepared control samples containing an excess of 20 mol of α-methylmannoside/mol of Rho used in the determinations. Under our conditions, this amount of carbohydrate contributed less than 1% to the absorbance at 620 nm and did not influence DM estimations. N-Acetylglucosamine was not included in the controls because it has been reported that hexosamines do not give any color in this reaction (33). Because it is known that tryptophan can influence the absorption yield at 620 nm by reaction with anthrone (33), we added an amount of bovine serum albumin known to contain five times more tryptophan than Rho. Bovine serum albumin also contributed less than 1% to the absorbance at 620 nm. The partial specific volume of Rho was calculated from its amino acid and carbohydrate composition using the known partial specific volumes of the amino acids (34) and the corresponding carbohydrates (35).

Covalent Cross-linking of Rho and Rho*

Washed ROS membranes were incubated with sulfo-SMCC and MBS, two bifunctional reagents capable of forming bridges between Cys and Lys residues spatially located about 11.6 and 9.9 Å apart, respectively (36-38). As expected, the migration of Rho in SDS-PAGE was not modified in the control sample (Fig. 1A, top). On the other hand, treatment of washed ROS membranes with 5 mM sulfo-SMCC or MBS, at room temperature, resulted in a decrease of the ~35-kDa species corresponding to the Rho monomer (R1), with concomitant covalent formation of dimers (R2), trimers (R3), and oligomeric forms (Rn) (Fig. 1A, top). The Rn species were not able to enter the stacking gel and probably consisted of several multimeric arrays of Rho. As also seen in Fig. 1A (top), the formation of Rho cross-linking products with either sulfo-SMCC or MBS was incomplete, and a clear prevalence of cross-linked dimers was obtained.

The application of chemical cross-linking to membrane systems is, in general, complicated by the fact that membrane-bound proteins exist at sufficiently high densities that accidental collisional cross-linking between monomeric proteins or between oligomeric complexes cannot be ruled out. Therefore, the covalent cross-links formed do not necessarily reflect a stable, naturally occurring association of proteins.
Dissolution of the membrane significantly lowers the effective protein concentration, with a consequent diminution in the frequency of random collisions between protein molecules. Rho was therefore extracted in DM from ROS and purified to homogeneity by affinity chromatography on concanavalin A-Sepharose. Fig. 1A (bottom) summarizes the results of incubating purified Rho with 5 mM sulfo-SMCC or MBS. Similar to Rho in washed ROS membranes, treatment of purified Rho with these cross-linking agents produced species migrating at twice the apparent molecular mass of the monomeric protein (R2, ~75 kDa), as well as other supramolecular arrangements (R3 and Rn). Again, only partial formation of R2, R3, and Rn species was obtained when DM-solubilized Rho was incubated with either sulfo-SMCC or MBS, and the majority of the resulting cross-linked products consisted of dimers (Fig. 1A, bottom).

o-PDM and p-PDM are specific homobifunctional agents that cross-link Cys residues located 9 or 12 Å apart, respectively (36,39). Fig. 1B shows the reaction of purified Rho, in its dark (-) and photolyzed (+) states, with increasing concentrations of o-PDM (2-8 mM). Similar to sulfo-SMCC and MBS, o-PDM was also capable of cross-linking Rho, producing R2 cross-linked species in the dark. Most interestingly, a significant reduction of Rho monomers (R1), with a concomitant enhancement of dimeric (R2), trimeric (R3), and multimeric (Rn) species, was obtained when Rho* was incubated with increasing concentrations of the cross-linking agent (Fig. 1B). An almost complete disappearance of the R1 species was attained when 8 mM o-PDM was employed (Fig. 1B). These cross-linking data corroborate the conformational changes caused in Rho upon illumination. Identical results were achieved when Rho and Rho* were incubated with p-PDM (data not shown).

In an attempt to further distinguish transient from stable interactions, time courses of the cross-linking of DM-solubilized Rho were carried out with the various cross-linking compounds. Kinetic analysis of the sulfo-SMCC, MBS, and o-PDM reactions, followed by SDS-PAGE and silver staining, showed that the ~75-kDa species (R2) was formed in proportion to the disappearance of the ~35-kDa species corresponding to the Rho monomer (R1) (Fig. 2A). Nevertheless, in all cases the reactions were not stoichiometric, and limited formation of the R2 species was obtained even after 1 h of incubation with the cross-linking reagents. These results may indicate a shortage of suitable target residues located at the right distances in Rho. Monofunctional modification by only one end of the bifunctional reagent is an additional possibility, because a high percentage of the target residues may end up with a defunct reagent after some have become modified. Although the formation of trimeric (R3) and multimeric (Rn) Rho species was not evident in the results shown in Fig. 2A, primarily owing to the amount of protein loaded (1.5 µg of Rho/lane), these higher-molecular-weight species clearly appeared when parallel experiments were analyzed by Western blotting and revealed with the anti-Rho monoclonal antibody 1D4 (data not included). Most interestingly, as the time courses of the various cross-linking reactions proceeded, the resulting Rho species were not efficiently colored by silver (Fig. 2A), suggesting that the incorporation of these bifunctional compounds hindered proper staining of the protein.
Slightly greater apparent molecular masses were observed for the ~35-kDa Rho species through the time course of the various reactions (Figs. 1 and 2A), indicating either the formation of intramolecular cross-linked products within the Rho monomeric unit or extensive monofunctional incorporation of the cross-linking compounds into the protein by only one end of the reagents. Alternatively, cross-links introduced by the various chemical agents used here may reduce the binding of SDS and, consequently, decrease migration. The R2 region may also contain cross-linked dimers formed from mixed combinations of native Rho and/or intramolecularly cross-linked Rho.

Fig. 2B, I, shows the cross-linked products obtained when increasing concentrations of Rho·DM (1.28-20.5 µM) were incubated with a fixed concentration of sulfo-SMCC (5 mM). A proportional increase in the amount of the cross-linked R2 species was obtained when a 13-µl aliquot of each reaction was loaded onto the gel. However, when aliquots of each sample containing the same amount of Rho (1.5 µg) were electrophoresed, it was clearly observed that the different concentrations of Rho did not affect the ratios of R2 to R1 species obtained following incubation with the cross-linking compound (Fig. 2B, II). Because the same cross-linked products were formed even at a concentration of 20.5 µM Rho, they probably reflect a stable, naturally occurring association between native Rho molecules rather than an accidental, transient collisional interaction.

Gel Filtration Chromatography in the Presence of DM

As illustrated in Fig. 3A, chromatography of Rho·DM on Sephacryl S-300 resulted in the elution of a single sharp peak absorbing at 498.5 nm, which showed no detectable absorbance at 380 nm. On the contrary, chromatography of detergent-solubilized Rho* on the same column produced a single symmetric peak absorbing at 380 nm, with no perceptible absorbance at 498.5 nm (Fig. 3B). In both cases, single sharp peaks absorbing at 280 nm were obtained that overlapped with the corresponding peaks at 498.5 or 380 nm for Rho·DM and Rho*·DM, respectively (data not included). Moreover, the ratios of absorbance at 280 to 498.5 nm for Rho·DM and at 280 to 380 nm for Rho*·DM were constant throughout the peaks, further indicating the presence of a single species. Additionally, equivalent separations of Rho·DM and Rho*·DM were also performed in the presence of various protein standards, and the purified receptors eluted in the same fractions as in Fig. 3. The figure also illustrates the elution peaks of the molecular weight markers employed (Fig. 3). A yield of ~85-90% of the initial amount of Rho·DM and Rho*·DM loaded onto the resin was recovered following gel filtration chromatography. After separating Rho in the dark or in the light, a reddish or yellowish layer, respectively, was always observed on top of the resin. These results suggested the occurrence of a small proportion of oligomers of DM-solubilized Rho and Rho* that were permanently adsorbed by the resin and that may account for the 10-15% final loss of protein. The sizes of the native and illuminated Rho·detergent complexes were empirically determined to be 128,000 and 126,000 Da, respectively (Fig. 4A).
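As a computational aside (not part of the original analysis), the calibration and interpolation just described are straightforward to reproduce. In the sketch below, the standards and the measured Kav are hypothetical placeholders, since the actual marker values are not listed in the text; the result merely illustrates the procedure behind the 128,000-Da figure.

```python
# Minimal sketch of gel filtration calibration: fit log10(MW) versus Kav for
# the standards, then interpolate the unknown. All standard values and the
# measured Kav below are hypothetical placeholders, not the actual markers.
from math import log10

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

kav_std = [0.15, 0.30, 0.45, 0.60]           # hypothetical Kav of the standards
mw_std = [440_000, 158_000, 66_000, 25_000]  # hypothetical standard masses (Da)

a, b = fit_line(kav_std, [log10(m) for m in mw_std])

kav_rho_dm = 0.34  # hypothetical Kav for the Rho-DM complex
print(round(10 ** (a * kav_rho_dm + b)))  # ~129,000 Da, illustrating the procedure
```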
Given that the molecular weight of DM micelles has been calculated to be about 50,000 (40), the molecular masses of native Rho and Rho* were obtained by subtracting this value from the total size of the protein–detergent complexes, yielding 78,000 and 76,000 Da, respectively. These results predicted that both conformations of Rho are dimeric. A calibration curve of (−log Kav)^1/2 versus the Stokes radius of each standard was also obtained (Fig. 4B). The measured elution volumes yielded the (−log Kav)^1/2 values for the Rho·DM and Rho*·DM complexes, and Stokes radii of 4.18 and 4.15 nm, respectively, were determined by interpolation (Fig. 4B).
Estimation of the Partial Specific Volume of Rho·DM Complexes
The partial specific volume of the protein–detergent complex, v̄, is mainly dependent on the chemical composition of the complex, and a good approximation of its value is given by Equation 2,

v̄ = (v̄_p + δ·v̄_d)/(1 + δ)    (Eq. 2)

where v̄_p is the partial specific volume of the protein, v̄_d is the partial specific volume of the detergent, and δ is the binding ratio of detergent to protein. Accordingly, the v̄_p value was calculated to be 0.709 cm³/g for Rho. The detergent concentration of the Rho·DM fraction after gel filtration on the Sephacryl S-300 column was measured by the anthrone method. The free detergent concentration was determined in a fraction eluted outside the Rho·detergent elution volume. The value for δ was calculated to be 0.97 g of detergent/g of protein. Using the measured δ, the molecular mass of the protein moiety contained in the protein–detergent complex was then calculated as 65,000 Da for Rho and 64,000 Da for Rho*. Combined with the molar mass of ~40,000 Da determined from the primary structure and carbohydrate composition of Rho, we estimated that the protein–detergent complex contained 1.63 copies of Rho/mol. A similar analysis estimated that the Rho*·DM complex contained 1.6 copies of Rho*/mol. These results were also consistent with native Rho and Rho* being dimeric oligomers. From the determined v̄_p and δ values, and using the v̄_d for DM of 0.824 cm³/g previously reported by Møller and le Maire (41), a v̄ of 0.765 cm³/g was calculated for the Rho·detergent complex.
Sucrose Gradient Ultracentrifugation in the Presence of DM
The sizes of the Rho·DM and Rho*·DM complexes were also evaluated by subjecting purified Rho and illuminated Rho to velocity sedimentation on a 10–30% sucrose gradient prepared in a buffer containing 0.1% DM. Proteins with known sedimentation coefficients were included as standards, and the migration of Rho was identified by SDS-PAGE and Western blot analysis using the monoclonal antibody 1D4 (data not shown). About 10% of the original Rho·DM and Rho*·DM samples possessed a very high isopycnic point and sedimented at the bottom of the corresponding centrifuge tubes, indicating that some higher order oligomers of Rho were preserved even after detergent solubilization. However, the bulk of purified Rho and Rho* migrated to their respective buoyant densities on the sucrose gradient, at which point they ceased to move. The sedimentation coefficients for both the Rho·DM and Rho*·DM complexes were determined to be 5.78 S (Fig. 4C).
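For concreteness, the composition arithmetic behind these numbers can be written out explicitly (a worked restatement of the values quoted above, not additional data):

\bar{v} = \frac{\bar{v}_p + \delta\,\bar{v}_d}{1+\delta} = \frac{0.709 + 0.97 \times 0.824}{1 + 0.97} \approx 0.765\ \mathrm{cm^3\,g^{-1}}

M_{\mathrm{protein}} = \frac{M_{\mathrm{complex}}}{1+\delta} = \frac{128{,}000\ \mathrm{Da}}{1.97} \approx 65{,}000\ \mathrm{Da}, \qquad \frac{65{,}000}{40{,}000} \approx 1.63\ \text{copies of Rho per complex}

The same mass balance applied to the 126,000-Da Rho*·DM complex yields ~64,000 Da and ~1.6 copies of Rho*, matching the values given above.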
For spherical molecules, the molecular mass of a species can be calculated from the measured Stokes radius and sedimentation coefficient using Equation 3 (21),

M = 6πηNas/(1 − v̄ρ)    (Eq. 3)

where M is the molecular mass, a is the Stokes radius, s is the sedimentation coefficient, v̄ is the partial specific volume, η is the viscosity of the medium, ρ is the density of the medium, and N is Avogadro's number. Using this approximation, a calibration curve of M versus the Stokes radius multiplied by the sedimentation coefficient was prepared from the values reported for the protein markers, and a molecular mass of 107,000 Da was estimated for the photoreceptor protein–DM complexes under both dark and light conditions (data not included). After subtracting the molecular mass of the detergent micelle, a size of about 57,000–61,000 Da was obtained for both conformations of Rho. Again, these results were consistent with a dimeric quaternary structure for the photoreceptor protein.
Determination of the Frictional Coefficient f/f0 for Rho·DM and Rho*·DM Complexes
The frictional coefficient f/f0 can also be determined from the molecular mass and the Stokes radius, as seen in Equation 4,

f/f0 = a/(3v̄M/4πN)^1/3    (Eq. 4)

When the molecular weights obtained by gel filtration were employed, the calculated frictional ratios for the Rho·DM complexes were 1.56 and 1.58 for the dark and illuminated states, respectively. However, frictional ratios of 1.4 for both the native and illuminated Rho·DM complexes were found when the sizes determined by ultracentrifugation were used. Similar to most detergent–membrane protein complexes, which have frictional ratios in the range of 1.4 (42), the f/f0 values attained here suggested some asymmetry in the Rho·DM and Rho*·DM complexes and indicated that their native-like conformations lie at the boundary between globular and moderately expanded, when compared with compact spheres (43,44).
Molecular Exclusion Chromatography and Sedimentation of Purified Rho in the Absence of DM
Purified Rho was chromatographed on a Sephacryl S-300 column in the absence of DM to prevent the formation of detergent micelles. The peak of Rho eluted as a unique species with a molecular weight of 65,300 (data not included). Purified Rho was also ultracentrifuged on a 10–30% sucrose gradient in the absence of DM. Following sedimentation, two species of Rho were found, with sizes of 122,600 and 69,800 Da, respectively (data not shown). The high molecular weight fraction probably corresponded to some remaining Rho·DM complex, which was expected to persist because the final detergent concentration was only slightly below its critical micelle concentration. Both the 65,300- and 69,800-Da species, obtained by gel filtration chromatography and sedimentation, respectively, must represent the native Rho dimer. Approximately 10% of the total protein persisted as Rho oligomers, as it was adsorbed on top of the Sephacryl S-300 matrix and sedimented in the first fraction after isopycnic centrifugation.
Functional Integrity of the Rho·Detergent Complex Following Molecular Exclusion Chromatography and Ultracentrifugation
As illustrated in Fig. 5A, detergent-solubilized Rho maintains its characteristic absorption spectrum after gel filtration (GF) or sedimentation (S). These samples of Rho are able to catalyze the light-dependent GMPpNp binding activity of transducin (Fig. 5B), at up to 60–75% of the level induced by washed ROS membranes or concanavalin A-Sepharose affinity-purified Rho. In addition, both samples of Rho were capable of stimulating the GTPase activity of transducin under illumination (Fig. 5C).
The ability of both Rho samples to serve as substrates for rhodopsin kinase was also evaluated. Fig. 5D shows that an enriched fraction of rhodopsin kinase was capable of phosphorylating these samples in a light-dependent manner. However, no Rho phosphorylation was attained when the reaction was carried out in the dark. All these results demonstrated that Rho conserved its native-like structural integrity and functional features following gel filtration chromatography and sedimentation on sucrose gradients.
[Fig. 5 legend, continued: (24,25). Experiments with concanavalin A-Sepharose affinity-purified Rho (Rho·DM (Con A)) and washed ROS membranes (W-ROS) were also included as controls. C, light-dependent stimulation of the T GTPase activity; the Rho·DM complex after gel filtration or sedimentation was used to induce the light-dependent [γ-32P]GTP hydrolytic activity of T, and assays were also performed in the dark as controls. D, autoradiography showing the light-induced in vitro phosphorylation of Rho·DM by rhodopsin kinase (RK). I, intact ROS membranes incubated with [γ-32P]ATP under dark (−) or light (+) conditions. II, enriched fraction of RK incubated with [γ-32P]ATP in the presence of light (+); identical results were obtained in the dark (data not included). III, Rho·DM following gel filtration (GF) or sedimentation (S) incubated with the enriched fraction of RK and [γ-32P]ATP in the dark (−) or light (+). The arrow indicates the migration of phosphorylated Rho*.]
DISCUSSION
ROS disk membranes contain densely packed Rho molecules for optimal light absorption and subsequent amplification by the visual signaling cascade (45). Low angle x-ray diffraction studies have suggested that Rho is monomeric on the basis of the occurrence of particles 40–50 Å in diameter in frog retinal receptor disk membranes that were immunologically identified as the photopigment molecules (4). The nature of the diffraction was not consistent with a planar crystalline lattice of the particles within the disk membranes but rather with a planar, liquid-like arrangement of the particles (5). These measurements have been questioned (6) because information from x-ray diffraction studies is limited by the imperfect stacking of the membranes, the low contrast of electron densities among the components of the lipid-protein-water structure, and the difficulty of placing electron density profiles on an absolute scale. However, neutron diffraction studies of retinal ROS also suggested a random distribution and monomeric organization of Rho in the membrane (7). Fast, transient, flash-induced photodichroism showed the rapid rotational diffusion of Rho in situ (9), and the kinetics of flash-bleaching recovery indicated that Rho undergoes rapid lateral diffusion in intact rods (10). Yet, transient photodichroism cannot reliably distinguish between freely rotating dimers and monomers. Most interestingly, no detectable change was observed in the rotational diffusion of Rho upon illumination, indicating that oligomers of Rho do not form during excitation (46). Electron microscope images of snap-frozen, freeze-etched frog rods (8) were also able to resolve Rho monomers in a random array, with no evidence of dimers. From all these early experiments, a monomeric, randomly distributed organization of Rho in disk membranes would be expected. However, recent studies (47-49) have demonstrated the existence of detergent-resistant membrane microdomains or lipid rafts in ROS and therefore a nonuniform distribution of lipid and protein.
In particular, Rho is found in both raft and nonraft portions of the membrane, and its distribution does not change in the light or dark (47,48). In addition, virtually all the key components of the phototransduction cascade are either permanently associated with the ROS lipid rafts or translocate there in a light-dependent manner. Thus, alternative interpretations of the early biophysical results may be appropriate. Finally, atomic force microscopy experiments have revealed distinct rows of Rho dimers and paracrystalline arrays in native murine optic disk membranes (13,15). Two different types of Rho-containing domains were identified: 1) large uniform paracrystals, and 2) rafts of smaller Rho paracrystals separated by lipid (14,15). Topographs recorded at higher magnification unveiled rows of Rho dimers forming the paracrystal in both domains 1 and 2, identifying the Rho dimers as the building blocks of the paracrystals (14,15). This supramolecular arrangement was also found for the apoprotein, opsin (15). Occasionally, single receptor monomers were detected on such topographs, but Rho monomers were relatively rare in these images. Dimerization and higher order organization of Rho were also observed when electron micrographs and atomic force microscopy topographs were recorded on native disk membranes prepared at room temperature (14), indicating that the observed packing arrangement of Rho and opsin was not artificially induced by the segregation of protein and lipid at low temperatures. Also, freeze-fracture electron microscopy has revealed paracrystalline Rho arrays in Drosophila photoreceptive membranes (50) and in the plasma membrane of bovine ROS (51). Consequently, the sum of all these early and recent results has led to the emergence of an interesting controversy (52,53). By analyzing the hydrodynamic properties of DM-solubilized bovine Rho and Rho*, we have elucidated here the native quaternary structures of Rho and Rho*. Our results are consistent with a dimeric structure for both conformations of the photoreceptor protein and agree with the results reported by Liang et al. (15) and Fotiadis et al. (13,14). Molecular exclusion chromatography demonstrated that Rho and Rho* have molecular weights of 78,000 and 76,000, respectively, approximately twice the size observed by SDS-PAGE under denaturing conditions. Stokes radii of 4.18 and 4.15 nm for Rho and Rho*, respectively, were also determined, again indicating the dimeric structure of the photoreceptor protein. In addition, both conformations of Rho showed sedimentation coefficients of 5.78 S, and frictional ratios of about 1.4–1.6 were calculated for the Rho·DM and Rho*·DM complexes. By assuming a globular and compact shape for the protein–detergent complexes, a slightly lower molecular weight (~60,000) was estimated for Rho and Rho*. However, most biological macromolecules are not spheres, and ellipsoids of revolution (prolate or oblate ellipsoids) are more realistic models than a sphere. The incorrect assumption that Rho is globular may account for the small discrepancy obtained when its size was determined by ultracentrifugation. Ellipsoids have larger frictional coefficients than equivalent spheres.
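A numerical sketch can tie these hydrodynamic threads together: evaluating Equation 3 directly with the reported values, and computing Perrin shape factors for the ellipsoid axial ratios discussed in the next paragraph. This is an illustrative cross-check, assuming the viscosity and density of water at 20 °C and smooth, unhydrated ellipsoids; it is not a reproduction of the study's calibration-curve analysis, so exact agreement with the reported 107,000 Da is not expected.

import math

# Siegel-Monty estimate (Eq. 3): M = 6*pi*eta*N*a*s / (1 - vbar*rho).
# Hydrodynamic values from the text; eta and rho of water at 20 C assumed.
eta = 0.01002            # viscosity, g/(cm s)
rho = 0.9982             # density, g/cm^3
N = 6.022e23             # Avogadro's number, mol^-1
a = 4.18e-7              # Stokes radius of the Rho.DM complex, cm (4.18 nm)
s = 5.78e-13             # sedimentation coefficient, s (5.78 S)
vbar = 0.765             # partial specific volume of the complex, cm^3/g

M = 6 * math.pi * eta * N * a * s / (1 - vbar * rho)
print(f"M(complex) ~ {M/1000:.0f} kDa")   # ~116 kDa, in the same range as
                                          # the 107 kDa read from the
                                          # calibration curve of M vs a*s

# Perrin frictional factors f/f0 for rigid ellipsoids of axial ratio p > 1.
def perrin_prolate(p):
    q = math.sqrt(p * p - 1.0)
    return q / (p ** (1.0 / 3.0) * math.log(p + q))

def perrin_oblate(p):
    q = math.sqrt(p * p - 1.0)
    return q / (p ** (2.0 / 3.0) * math.atan(q))

print(f"prolate 10:1 -> f/f0 = {perrin_prolate(10.0):.2f}")  # ~1.54
print(f"oblate  12:1 -> f/f0 = {perrin_oblate(12.0):.2f}")   # ~1.53

Both shape factors fall within the 1.4–1.6 window of measured frictional ratios quoted above, which is the sense in which the hydrodynamic data are compatible with an elongated Rho dimer rather than a compact sphere.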
Because the volume of a molecule is proportional to its molecular weight, the more a molecule deviates from a sphere, the larger its frictional coefficient becomes. The f/f0 value determined for the dark and light states of the Rho·DM complex corresponds to a macromolecule having either a prolate ellipsoid shape with an axial ratio of 10:1 or an oblate ellipsoid shape with an axial ratio of about 12:1 (54). In fact, the crystal structure of Rho (55-57) shows that the protein has an ellipsoidal shape. The dimensions of the ellipsoid are ~48 Å wide and ~35 Å thick in the plane of the membrane and ~75 Å perpendicular to the membrane. Analysis by electron microscopy of preparations containing crystalline arrays has also shown that Rho molecules have planar dimensions of about 28 × 39–40 Å and are ~63–64 Å in height (58,59). All these results are consistent with the asymmetric shape deduced here for the photoreceptor protein. The pattern of cross-linking for the various preparations of Rho showed a predominance of cross-linked dimers, with a progressively diminishing yield of cross-linked products from dimer to trimer and higher oligomers. This pattern could result from a native oligomeric assembly of Rho or, alternatively, from cross-linking between random collision complexes of monomeric Rho molecules. Given that Rho is embedded in the disks of the ROS membranes, the protein is free to rotate and diffuse in the plane of the membrane, allowing an estimated collision frequency between molecules of 10⁵–10⁶/s (9,10). However, it has been reported that at protein concentrations below 10 μM, no significant accidental intermolecular cross-linking between separate protein molecules occurs when a solution of protein is mixed with a cross-linking agent (60). Instead, what is observed are the products that result from intra-oligomeric cross-linking among the fixed number of polypeptides of which the protein is composed. Additionally, solubilization of the membrane with detergents significantly lowers the effective protein concentration, with a concomitant reduction in the frequency of transient collisions between Rho molecules. Moreover, all of our initial cross-linking experiments with Rho in washed ROS membranes or with DM-solubilized Rho were carried out at a protein concentration of 1.28 μM, minimizing the probability of accidental collisions between Rho molecules. When increasing concentrations of Rho (1.28–20.5 μM) were incubated with fixed concentrations of the bifunctional reagents, the proportions of the resulting Rho cross-linked products did not change. These facts strongly imply that the cross-linked products reflect a stable association between native Rho molecules rather than a random interaction, and suggest a dimeric/oligomeric structure for the photoreceptor protein. Semi-empirical models for the packing arrangement of Rho molecules derived from atomic force microscopy topographical data (14,15) and the crystal structure (55,61) suggest that the intradimer interface comprises contacts between helices H4 and H5. Additionally, most of the interacting residues are located on the cytoplasmic loop between helices H3 and H4, and on the carboxyl-terminal region. Moreover, other interaction sites are also located within the membrane. Cross-linking of any pair of residues located at these intradimeric interfaces will account for the formation of Rho dimeric cross-linked products.
Some cross-linked Rho trimers and higher order oligomers were also attained with the various bifunctional reagents. These results were somewhat surprising because, if Rho forms specific dimers, the interface of one molecule should already be occupied, hindering the formation of oligomeric cross-linked products. However, evidence from atomic force microscopy, supported by electron microscopy, revealed distinct rows of Rho dimers and paracrystalline arrays in native disk membranes (13,15). Contacts between dimers are created entirely by the intracellular loop between helices H5 and H6 from one monomer in a dimer with the loop between helices H1 and H2 and the carboxyl-terminal residues from the same monomer of the adjacent dimer. Contacts between rows of dimers are maintained through hydrophobic residues from helix H1 close to the extracellular side. The formation of cross-linked Rho trimers and oligomers may thus be readily explained by these various interfaces between dimers and rows of dimers. Most interestingly, a small portion of Rho·DM and Rho*·DM was strongly adsorbed on top of the Sephacryl S-300 gel filtration resin. Additionally, a minor fraction of the original Rho·DM and Rho*·DM samples (~10%) sedimented at the bottom of the centrifuge tube during isopycnic ultracentrifugation on 10–30% sucrose gradients. These results suggested the occurrence of some Rho oligomers even in the presence of DM, which probably account for the high molecular weight products obtained following cross-linking of Rho·DM and Rho*·DM. In the case of DM-solubilized Rho*, an enhancement of cross-linked dimeric, trimeric, and multimeric Rho species was apparent, consistent with the additional sulfhydryl reactivity previously reported for Rho* (62) and with the conformational changes produced in Rho upon illumination (63-66). The formation of Rho cross-linked dimers was always incomplete rather than stoichiometric. However, several factors influence the formation of a cross-linked product. These include the availability of the appropriate amino acid residues in the proteins, the chemical specificity of the bifunctional cross-linker, and the reaction conditions. Negative results in chemical cross-linking experiments do not conclusively demonstrate that two protein components are not close to each other. A paucity of cross-linked products may result from a lack either of spatial proximity or of appropriate reactive groups on the adjacent polypeptide chains. In addition, the reaction of each of the chemical ends of the bifunctional reagents with their target residues in the protein, in aqueous solution, is a competition between formation of the desired products and hydrolysis of the reagents. For example, it has been reported that the two reactive maleimide rings of bismaleimides, such as o-PDM and p-PDM, are hydrolyzed much more rapidly than the single maleimide ring of the monofunctional analogue N-ethylmaleimide (67). Because it renders the maleimide ring unreactive toward cysteine, this rapid hydrolysis can limit the extent of protein cross-linking by the bismaleimide. In consequence, any of these factors, individually or in combination, could result in incomplete formation of Rho cross-linked products within the dimeric Rho unit. The concept of oligomerization in the presence or absence of ligands is generally accepted for many GPCRs. This oligomerization has been reported to affect GPCR trafficking, signaling, and pharmacology.
Based on certain key sequences, GPCRs can be grouped into several distinct families. Rho belongs to class A or family 1 of GPCRs (also known as Rho-like GPCRs), and several of its members have been shown to homo-oligomerize (68). Rho does not seem to be an exception, as indicated by our results, which provide strong evidence for its dimeric state. Furthermore, many pairs of family 1 GPCRs have been shown to form heteromers as well (68), exhibiting novel functional characteristics distinct from those of the individual homomeric receptors. When GPCRs are activated, the oligomers rearrange and cluster, and a novel mechanism of oligomer intercommunication, assisted by components of the plasma membrane and by scaffolding proteins, is possible (69). A simple model of a 1:1 Rho–T interaction is not compatible with the size of the cytoplasmic surface of Rho, which is too small to anchor both Tα and Tβγ, or with the reported cooperativity of this interaction, which exhibits a Hill coefficient of ~2 (70). The packing of Rho molecules as dimers provides a platform that can easily accommodate both T functional units (71,72) and is consistent with the kinetic studies of the Rho-catalyzed guanine nucleotide exchange (70), as well as with binding studies between Rho and T (73,74), which demonstrated allosteric regulation of the interaction of T with Rho*. In addition, it has been speculated that one molecule of Rho in the dimer is needed for productive coupling with T, whereas the second one provides a partial scaffold to dock subunits of T (75). The application of the evolutionary trace method to 113 aligned G protein α subunit sequences resulted in the identification of two functional sites (76). One large, well defined site was clearly identified with the binding of βγ-complexes, regulators of G protein signaling (RGS), and effector proteins such as adenylyl cyclase. The other functional site, which extends from the ras-like or GTPase domain onto the helical domain, had the correct size and electrostatic properties for GPCR dimer binding (76). These theoretical predictions can be extrapolated to Tα and are consistent with the dimeric quaternary structure reported here for Rho. Recently, Filipek et al. (75) also modeled how T docks onto oligomeric Rho and described structural details of this critical interface in the signal transduction process. Visual arrestin, another Rho-binding protein, has a bipartite structure of two structurally homologous seven-stranded β-sandwiches, forming two putative Rho binding grooves (77,78). The positive charge arrangement on the surface of the Rho dimer matches the negative charges on arrestin (15). Thus, one arrestin monomer is likely to bind one Rho dimer. Finally, the crystal structure of the G protein-coupled receptor kinase GRK2 is also in structural agreement with the oligomeric structures of GPCRs (79). Results from nondenaturing gel electrophoresis and analytical ultracentrifugation have also suggested the presence of oligomeric states of T and its subunits (80). T oligomers have been trapped by using bifunctional maleimides (81) and by 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide-induced cross-linking, providing physical evidence for the existence of these oligomers under native conditions (82). Moreover, Tα has been reported to spontaneously form disulfide linkages in the absence of reducing agents (83), a condition that produced total inactivation of the holoenzyme once reconstituted with native Tβγ (84).
Additionally, oligomeric forms of Tα were the predominant species when highly specific photoactivated cross-linking reagents were employed (85). Compatible with these findings is the cooperativity reported for the interaction of Rho with T (72,73,86). Mixon et al. (87) have described the three-dimensional structure of Giα1, an α subunit isotype of Gi. They have shown that the α subunits form extensive quaternary contacts with neighboring α subunits in the crystal lattice. Specifically, the α subunits are organized in a "head-to-tail" oligomer such that each subunit is related to the next by a 2-fold screw rotation that positions the NH2 terminus of the ras-like or GTPase domain of one subunit into the α-helical domain of the adjacent subunit (87). In a membrane environment where the concentration of macromolecules is high, the kinetics of the interaction between receptor and G protein is likely to be diffusion-limited. Evidently, the formation of Rho dimers and oligomers would overcome this limitation by allowing T molecules to interact with locally concentrated pools of Rho dimers. Moreover, the formation of multimeric complexes of T would also permit the interaction of Rho dimers with clusters of Tα subunits. Both phenomena would facilitate the amplification of the light response.
8th International conference on management and rehabilitation of chronic respiratory failure: the long summaries – Part 3
This paper summarizes Part 3 of the proceedings of the 8th International Conference on Management and Rehabilitation of Chronic Respiratory Failure, held in Pescara, Italy, on 7 and 8 May 2015. It summarizes the contributions from numerous experts in the field of chronic respiratory disease and chronic respiratory failure. The outline follows the temporal sequence of presentations. This paper (Part 3) presents a section regarding Moving Across the Spectrum of Care for Long-Term Ventilation (Moving Across the Spectrum of Care for Long-Term Ventilation, New Indications for Non-Invasive Ventilation, Elective Ventilation in Respiratory Failure - Can you Prevent ICU Care in Patients with COPD?, Weaning in Long-Term Acute Care Hospitals in the United States, The Difficult-to-Wean Patient: Comprehensive Management, Telemonitoring in Ventilator-Dependent Patients, Ethics and Palliative Care in Critically-Ill Respiratory Patients, and Ethics and Palliative Care in Ventilator-Dependent Patients).
Background
This paper summarizes Part 3 of the proceedings of the 8th International Conference on Management and Rehabilitation of Chronic Respiratory Failure, held in Pescara, Italy, on 7 and 8 May 2015. It summarizes the contributions from numerous experts in the field of chronic respiratory disease and chronic respiratory failure. The outline follows the temporal sequence of presentations.
Moving across the spectrum of care for long-term ventilation
Rationale
As technology advances, therapeutic options for individuals with chronic respiratory failure requiring short- and long-term ventilator support increase. This section will review old and new indications for ventilator therapy, the implementation and feasibility of these types of complex interventions, potential methods to improve their applicability and safety, and economic issues resulting from their use.
Moving across the spectrum of long-term ventilation (Roger Goldstein)
References to mechanical ventilation are found in the writings of Hippocrates (460-375 BC) and Paracelsus (1493-1541). However, since the 20th century, long-term mechanical ventilation (LTMV) has become important in the management of two overlapping groups of patients: those who have recovered from an acute episode of respiratory failure but require ongoing ventilatory support despite being clinically stable, and those who require ventilation electively to avoid the need for urgent ventilation. Added to the above is the increased awareness of the value of a rehabilitative focus to enhance function and improve the autonomy of the ventilator-assisted individual (VAI) in a non-ICU environment.
Mandatory ventilation
Patients receiving mandatory ventilation in the Intensive Care Unit (ICU) find themselves in an environment in which, understandably, the attending clinical team is focused on those with acute clinical issues. Their mobility is confined to the length of their ventilator tubing. Subsequent management outside the ICU will depend on the availability of resources such as a chronic assisted ventilator care unit, a long-term acute care unit or a skilled nursing facility. Resource utilization is inversely related to the level of patient independence (Fig. 1) [1].
With the addition of rehabilitation, many VAIs are able to leave the ICU for an assisted-living facility or, even better, to go home, provided they have access to technical and clinical support services. Preparation for the relocation should begin in the ICU, as outlined in Table 1. The availability of physical rehabilitation in the ICU has a positive influence on muscle function, independence and time to wean (Table 2) [2].
Elective ventilation
In contrast, the journey of elective ventilation often begins and ends at home. It hinges on the prompt initiation of elective ventilation for those whose conditions are progressing to cardio-respiratory failure. Good clinical and laboratory monitoring is important, as the onset of respiratory failure may first be identified through a deterioration of nocturnal blood gases. Progression may include brief exacerbations with respiratory failure requiring ICU management and relatively easy weaning. As with mandatory ventilation, it is necessary to have access to home respiratory care services as well as scheduled monitoring after the initiation of ventilator support. The following example illustrates the relevance of monitoring in those likely to develop respiratory failure.
Case example
A 45-year-old woman with thoracic restriction developed gradually progressive dyspnea on exertion. Her vital capacity was 43 % predicted, her total lung capacity was 44 % predicted, and her ratio of forced expiratory volume in one second to forced vital capacity was 90 %. Arterial blood gases taken on room air showed: pH 7.39, PaCO2 46 mmHg, PaO2 78 mmHg, SaO2 95 %. After an episode of pneumonia, a two-channel overnight recording showed satisfactory oxygenation and mild nocturnal hypercapnia with periodic (likely REM-related) worsening of gas exchange (Fig. 2a). She began to feel unwell over the next few months, and on repeat evaluation after 6 months (Fig. 2b) she was noted to have marked hypercapnia. Bi-level positive airway pressure ventilation was initiated electively (Fig. 2c), and her clinical state as well as her blood gases stabilized. She remains stable on nocturnal non-invasive positive pressure ventilation.
Prevalence of home mechanical ventilation (HMV)
The prevalence of home ventilation is influenced by the increasing incidence of the underlying disorders, the increased knowledge of healthcare providers (HCP) regarding the option of being safely ventilated outside of the ICU, and the guidelines and recommendations of professional societies regarding LTMV [3]. It is also influenced by the attitudes and preferences of the patient and family, as well as by the availability of formal and informal (caregiver) support services. In Europe (Fig. 3) [4] the prevalence of HMV varies widely (from 17 per 100,000 in France to 0.1 per 100,000 in Poland), as does the distribution of diseases requiring ventilatory support (thoracic cage disorders, neuromuscular disorders and airway disorders). If the patient is unable to return home immediately, a chronic assisted ventilatory care (CAVC) unit will provide a safe, non-acute care environment with a rehabilitative focus, to optimize health-related quality of life and promote autonomy. The CAVC unit requires a multidimensional continuum of services, delivered by an interdisciplinary team trained both in ventilator management and in rehabilitation. The preferred patient is medically stable, mentally alert, understands that ventilatory assistance is long term, is prepared to participate in comprehensive training and will relocate with appropriate supports.
In order for a patient to return home, the home must be safe and have the required utilities, as well as trained caregivers. The availability of home health care, technical support and organized follow-up is critical.
Ventilator-assisted individuals' perspectives
User perspectives [5] suggest that, irrespective of ventilation being elective or mandatory, the most difficult period for coping is the initial 3 months after returning home. When asked about their experience of LTMV, ventilator-assisted individuals voiced both positive and negative experiences regarding mobility, symptoms, equipment concerns and social implications. Disappointingly, not all users felt that they had made an informed choice when they started ventilation or when it became permanent. Ventilator-assisted individuals (VAIs) have noted the relevance of both physical and psychological adjustments to being ventilated [6]. They describe the positive impact that their physicians' confidence in the effectiveness of LTV has on them, as well as the importance of the opinion of other VAIs. The adjustment to LTMV is more difficult when it is initiated in the ICU, especially if impaired verbal communication limits their involvement in the decision to initiate ventilation. The following quotes are illustrative of some of the experiences of ventilator-assisted individuals:
Adjustment quotes
"As I became stronger I thought what is so different about my care that I could not learn?"
"Lots to do with cleaning equipment and tracking supplies but I do it as part of my daily routine and its easier."
Caregiver burden
Separate from the paid caregivers, informal caregivers, usually family members, are essential to the development of an environment that enables the ventilator-assisted individual to live safely at home. These informal caregivers often underestimate the care burden involved, which is especially high when that individual also has neuromuscular disease (NMD). Semi-structured caregiver interviews [7] of those looking after patients with NMD highlighted their sense of duty and their huge commitment. However, caregiver burnout was evident, as was the need for professional support, especially in the initial weeks after their loved ones returned home. The following quotes are illustrative of some of the issues that caregivers face:
Restriction in day-to-day life
"I am a prisoner in my own home, at my own will. Although I don't regret it, this is the way I feel."
Training and education
"It was very hard to come home the first time after the hospital. Even though we got trained you don't know what to expect so it was very difficult."
"It's not enough to only teach the medical things, you need to know what to expect in the long run. Knowing about the disease really helps."
"It's quite overwhelming in the beginning."
Tele-medicine follow-up
Key points that contribute to caregiver success are summarized in Table 3. Regular pre-scheduled follow-up, the ability for VAI-initiated medical support, and respite care for the ventilator-assisted individual or caregiver are especially important. The frequency and complexity of follow-up is determined by both medical and social factors. It will vary among individuals, and in the same individual at different points in time. The arrival of modern telemedicine technology has resulted in more frequent home-based rather than institutional follow-up.
For example, video-conferencing is achievable with a personal laptop computer linked to healthcare professionals through videoconferencing software and high-speed internet. Regular sessions can be scheduled at the VAI's convenience. The patient, family, caregiver and health team can all be present, as can a pulmonologist and a community care access case manager. This approach has the advantage of enabling more frequent follow-up at home and broad health team access. It is also less expensive than home visits.
Summary
Although the spectrum of long-term ventilation begins with either mandatory or elective ventilation, the ideal destination is home or, if this is not possible, a safe non-acute care facility with a multidisciplinary team trained in both LTMV and rehabilitation. User perspectives emphasize that the most difficult period of coping is the first few months after returning home, when both physical and psychological adjustments are necessary. Caregiver burden is substantial and under-recognized, both by the healthcare team and by the caregivers themselves when they make their initial commitment to accept a ventilator-assisted individual at home. Access to home healthcare and technical services is critical to successful home ventilation. Telemedicine technology using personal computer video-conferencing software has enabled more frequent, less expensive follow-up with improved access by the patient and the caregiver to healthcare professionals.
New indications for non-invasive ventilation (Nicolino Ambrosino)
Key points
- The use of non-invasive ventilation (NIV) is an option in acute hypercapnic respiratory failure, cardiogenic pulmonary oedema, acute respiratory distress syndrome (ARDS), community-acquired pneumonia, and weaning failure
- Evidence supports NIV during complicated bronchoscopy, some cases of transoesophageal echocardiography, and some interventional cardiology procedures
- NIV can reduce the need for deep sedation or general anaesthesia
- NIV should be considered with caution in severe communicable airborne infections likely to progress to ARDS
- The role of assisted ventilation during exercise training is still controversial
- NIV should be applied under close monitoring, and endotracheal intubation should be promptly available in case of failure. A trained team, careful patient selection and an optimal choice of devices can optimize the outcome of NIV
Non-invasive ventilation (NIV) may be considered one of the most important advances in respiratory medicine over the past 20 years [8,9] and is increasingly being utilized world-wide [10]. For the purposes of a PubMed search from January 1966 to March 2015, NIV was defined as "any form of ventilatory support applied without endotracheal intubation (ETI)". There is strong evidence (Level A) for the use of NIV to prevent ETI in acute-on-chronic respiratory failure and acute cardiogenic pulmonary oedema, and to facilitate extubation in patients with acute exacerbations of chronic obstructive pulmonary disease (COPD). Less evidence supports the use of NIV for patients with severe acute asthma exacerbations, post-operative or post-extubation acute respiratory failure (ARF), pneumonia, or acute respiratory distress syndrome (ARDS) [8,9]. Nevertheless, many other potential applications have been proposed [12]. This review will focus on potential new indications for NIV.
Bronchoscopy
Although potentially risky, bronchoscopy may be required for some severely hypoxaemic patients [13].
In the past, the American Thoracic Society (ATS) did not recommend flexible bronchoscopy and bronchoalveolar lavage (BAL) in such conditions when supplemental oxygen could not correct the arterial oxygen tension (PaO2) to at least 75 mmHg or the arterial oxygen saturation (SaO2) to 90 % [14]. On the other hand, non-use of bronchoscopy in these high-risk patients may result in less effective, empiric treatment. Until recently, when bronchoscopy was needed under hypoxaemic conditions, only ETI and mechanical ventilation were available to provide adequate ventilation and oxygenation. Unfortunately, invasive mechanical ventilation is associated with complications related to ETI, baro- or volutrauma, and the loss of airway defense mechanisms. NIV has the potential to avoid these complications while ensuring a similar level of ventilatory efficacy and control of hypoxemia. In a randomised controlled trial (RCT), mask Continuous Positive Airway Pressure (CPAP) reduced the risk of acute respiratory failure complicating bronchoscopy in severely hypoxaemic patients [15]. Another RCT in hypoxaemic patients showed that during bronchoscopy NIV increased the ratio of PaO2 to inspiratory oxygen fraction (FIO2), whereas the patients randomised to oxygen therapy alone showed a worsening in oxygenation [16]. NIV during bronchoscopy is also useful in hypercapnic COPD patients with pneumonia [17]. CPAP was able to reverse the reductions in tidal volume and respiratory flow associated with flexible bronchoscopy in spontaneously breathing young children [18]. In patients with acute exacerbation of COPD due to community-acquired pneumonia, in danger of ETI and unable to clear secretions, NIV with early therapeutic bronchoscopy was feasible, safe and effective [19]. A recent study suggests that in awake, critically ill patients with moderate to severe hypoxaemia undergoing bronchoscopy, the application of NIV is superior to high-flow nasal cannula oxygen for oxygenation before, during and after the procedure [20]. NIV during bronchoscopy may be performed by means of commercial or modified oronasal or full-face masks [21]. These reports support the use of NIV during fiberoptic bronchoscopy, especially when the risks of ETI are high, such as in immunocompromised patients. However, an expert team with skills in both endoscopy and NIV should be available for any emergency [12]. In general, this should be performed in the ICU.
Transoesophageal echocardiography and interventional cardiology
In orthopnoeic cardiac patients needing transoesophageal echocardiography (TEE), NIV can reduce the need for deep sedation or general anaesthesia. NIV allows performance of a continuous TEE examination in lightly sedated patients, avoiding ETI and general anaesthesia. The level of evidence is lower than for fiberoptic bronchoscopy and rests more on the author's experience [22]. The author of this review is not aware of recommendations in such situations. Recent advances in interventional techniques have made it possible to offer minimally invasive treatment of aortic valve stenosis to elderly or complex patients unable to undergo standard surgical procedures due to compromised health status or severe comorbidities, such as pulmonary diseases [22]. Furthermore, orthopnoea may make it difficult for patients to stay supine. Our initial experience with NIV in interventional cardiology, to support patients with severe pulmonary disease needing percutaneous implantation of an aortic bioprosthesis for severe valve stenosis, was positive [23].
NIV reduced the need for general anaesthesia, relieved orthopnoea and prevented postoperative ARF [23]. As for TEE, the evidence behind this review is based mainly on the author's experience, and a large clinical trial would be needed to confirm this preliminary observation.
Interventional pulmonology
Intermittent Negative Pressure Ventilation (INPV) through a poncho-wrap may be useful in reducing apnoeas during laser therapy under general anaesthesia, thus reducing hypercapnia, related acidosis, and the required oxygen supplementation with its related explosion hazard [20]. Furthermore, compared with spontaneous ventilation, INPV in paralysed patients during interventional rigid bronchoscopy may reduce the need for opioids, shorten recovery time, prevent respiratory acidosis and the need for manually assisted ventilation, reduce oxygen requirements and allow optimal surgical conditions [24,25]. This author is aware that INPV is not commonly used in this setting, mainly due to a lack of large randomized controlled trials. Accordingly, a review such as this is important to disseminate experience and promote research in this area. Video-assisted thoracoscopic surgery is a minimally invasive technique allowing intrathoracic surgery without a formal thoracotomy and its related complications [26]. We successfully used face-mask NIV with regional anaesthesia during this technique, which requires the exclusion of a lung from ventilation [27].
Highly transmissible infections
There are still insufficient data on the use of NIV during pulmonary infections, including pandemic respiratory infections [28]. NIV was used in patients with Severe Acute Respiratory Syndrome (SARS) in 2002-2003 and also during the H1N1 epidemic in 2009. Thereafter, NIV has been used to treat ARF due to other infectious diseases, such as pandemic avian influenza (H5N1). However, NIV in these conditions requires caution. Although studies of NIV use in ARF during H1N1 influenza [28,29] do not report disease transmission from patients to healthcare workers, the World Health Organization (WHO) has included NIV among aerosol-generating procedures with a possible risk of pathogen transmission [30]. The members of an International NIV Network examined the literature on NIV in SARS, H1N1 and tuberculosis. The conclusion was that early application of NIV in selected patients can reverse ARF, and there were only a few reports of infectious disease transmission among healthcare workers [31]. Despite these positive results, the guidelines from the European Respiratory Society (ERS)/European Society of Intensive Care Medicine (ESICM), WHO, the UK National Health Service, the Hong Kong Lung Foundation and the American Association for Respiratory Care (AARC) suggest that NIV should not be used as first-line therapy in H1N1-associated ARF for several reasons [32,33]: 1) poor clinical efficacy in severe ARF rapidly progressing to refractory hypoxaemia and ARDS; 2) more prevalent hypoxaemic rather than hypercapnic ARF in patients with H1N1; 3) concern about aerosol droplet particle dispersion and spread of infection. Technical issues in ARF caused by airborne infectious diseases include: 1) ventilators with a double-line circuit, without an expiratory port (such as a whisper swivel, plateau exhalation valve or anti-rebreathing valve), should be preferred.
This can reduce the risk of dispersion of exhaled infected particles through the intentional leaks of a single-line circuit; 2) well-customized face masks should be preferred to nasal masks, to avoid the potential spread of contaminated air particles from the mouth; 3) healthcare workers should be aware of the potential risks of using NIV in such conditions, taking appropriate precautions especially during patient disconnection from the NIV [34]; 4) in general, patient isolation and protective measures, also for caregivers, should limit if not prevent disease transmission; 5) the use of other techniques, such as high-flow nasal cannula, is controversial.
Palliative and end-of-life care
Most end-stage patients with chronic respiratory failure complain of dyspnoea in the last 3 months of life [35]. Breathlessness is often more severe in these patients than in those with advanced lung cancer [36]. As a consequence, NIV is being increasingly used to relieve dyspnoea in these patients [37,38]. Recent guidelines state the following: "As relief of dyspnoea with NIV may not relate to changes in arterial blood gases, it is appropriate to reassess the breathlessness experienced by patients receiving such ventilatory support at frequent intervals" [39]. Observational studies as well as clinical trials have recently confirmed the role of NIV in patients with chronic disease and poor life expectancy (with or without COPD), showing that this ventilatory technique may favourably reduce dyspnoea shortly after initiation, even without an associated episode of hypercapnic ARF [40]. About half of the patients survived the episode of respiratory distress and were discharged from the hospital. A Task Force of the Society of Critical Care Medicine defined the approach to NIV use for end-stage patients who choose to forego ETI [41]. The use of NIV for patients with ARF can be classified into three categories: 1) NIV as life support with no preset limitations on life-sustaining treatments; 2) NIV as life support when patients and families have decided to forego ETI; and 3) NIV as a palliative measure when patients and families have chosen to forego all life support, receiving comfort measures only. NIV should be applied after careful discussion of the goals of care, with explicit parameters for success and failure, by experienced personnel and in appropriate healthcare settings [41,42]. The use of NIV in these circumstances should take into account ethical, legal and religious issues.
Elective ventilation in respiratory failure - can you prevent ICU care in patients with COPD? (Michael Dreher, Michele Vitacca, Nicolino Ambrosino)
Key points
- Chronic respiratory failure is very frequently the final stage of the natural history of chronic obstructive pulmonary disease
- The role of long-term non-invasive positive pressure ventilation in improving survival in COPD patients with CRF is still debated
- Long-term nocturnal non-invasive ventilation in these patients has some physiological and clinical benefits
- Long-term non-invasive ventilation should be reserved for individual patients
Chronic respiratory failure (CRF) is very frequent in the end stage of the natural history of chronic obstructive pulmonary disease (COPD). Among other factors, inspiratory muscle dysfunction due to pulmonary hyperinflation may lead to ineffective alveolar ventilation, resulting in chronic hypercapnia.
Whether chronic hypercapnia is adversely associated with overall prognosis is still debated, at least in patients on long-term oxygen therapy (LTOT) [43]. Home long-term non-invasive positive pressure ventilation (NPPV) is widely used around Europe to treat CRF due to different aetiologies, such as restrictive thoracic disorders (RTD), neuromuscular disorders (NMD), obesity hypoventilation syndrome and COPD [4]. The hypothesized, but not proven, mechanisms of action of long-term NPPV in stable hypercapnic COPD patients include: reversal of hypoventilation; respiratory muscle unloading; resetting of the respiratory centers; and cardiovascular effects. These mechanisms may work alone or in combination.
Hypoventilation
Physiological studies demonstrate that in these patients NPPV is able to improve alveolar ventilation by increasing the tidal volume and reducing the respiratory rate [44].
Respiratory muscles
Inspiratory support is able to unload the inspiratory muscles, and positive end-expiratory pressure (PEEP) counteracts the intrinsic PEEP associated with hyperinflation [45], an effect more evident in acute exacerbations.
Respiratory centers
Compared with LTOT alone, the addition of nocturnal NPPV results in significant increases in day-time arterial oxygen tension (PaO2), total sleep time and sleep efficiency, and in significant reductions in day-time and overnight carbon dioxide tension (PaCO2). Additionally, health-related quality of life with LTOT plus NPPV was significantly better than with LTOT alone. The degree of improvement in day-time PaCO2 correlates significantly with the improvement in mean overnight PaCO2 [46].
Cardiovascular effects
Nighttime NPPV may improve heart rate variability, reduce circulating natriuretic peptide levels, and increase the functional performance of patients with advanced but stable COPD, suggesting that nocturnal NPPV may reduce the impact of cardiac comorbidities in COPD patients [47].
Clinical results
Although home NPPV is widely accepted for the treatment of chronic hypercapnia due to restrictive thoracic or neuromuscular disease, whether stable hypercapnic COPD patients should routinely be offered this therapy is still debated [48]. Recently, the role of ventilator management on physiological parameters and outcome in stable hypercapnic COPD patients has become more evident. It has been suggested that the benefits depend on the ability of NPPV to substantially reduce PaCO2 through the use of "high" inflation pressures [49]. This was confirmed by prospective trials showing an advantage of high over lower inspiratory pressure levels with regard to improvements in lung function, blood gases, exercise-induced dyspnoea and health status [50,51]. A multicenter study showed a highly significant survival advantage of NPPV (compared with standard care) when it was targeted to maximize the reduction of hypercapnia [52]. The findings of that study may influence the attitude of clinicians toward the use of NPPV in patients with stable hypercapnic COPD. However, the effect of elective home NPPV on exacerbation frequency in stable hypercapnic COPD remains to be determined. The use of NPPV is a first-line treatment of acute-on-chronic hypercapnic respiratory failure in COPD patients [8]. However, once acute hypercapnic respiratory failure is successfully managed and these patients are discharged, there is an 80 % re-hospitalization rate due to another acute exacerbation over the following year [53]. Furthermore, long-term survival in this patient cohort remains poor [54].
Three relatively small studies investigated the effect of home NPPV after successfully treated acute hypercapnic respiratory failure in COPD patients. One study showed that, compared with sham (continuous positive airway pressure) ventilation, NPPV significantly reduced the probability of recurrent acute hypercapnic respiratory failure [55]. Another study compared home NPPV versus standard therapy in chronic hypercapnic respiratory failure patients after an acute exacerbation, in order to prevent clinical worsening [56]. The authors demonstrated that the probability of clinical worsening was significantly lower in the group receiving home NPPV, with additional improvements observed in exercise capacity. The third, retrospective, study demonstrated better survival in COPD patients discharged after acute respiratory failure with home NPPV compared with those discharged without this form of therapy [57].
Pro/con long-term NPPV
There is limited evidence to support the provision of NPPV in the home environment after successful treatment of acute hypercapnic respiratory failure in COPD patients, and the studies supporting this intervention had limitations, including small sample size, retrospective design, and the lack of a control group. Struik et al. [58] evaluated whether home NPPV after successfully treated acute respiratory failure reduces re-hospitalization and improves survival. The investigators randomized patients to home NPPV or standard treatment 48 h after "acute" ventilator support was terminated. The study failed to show a positive effect of home NPPV on time to readmission or death. This was not anticipated and stands in clear contrast to the smaller studies published before. Looking deeper into the study, it can be seen that both groups had reductions in PaCO2 over time. Therefore, one explanation of why this multicenter study was negative is that patients were randomized too early: given the natural course of the disease, patients might have been randomized while they were still recovering from acute hypercapnia. Home NPPV might therefore have been prescribed to patients not suffering from chronic hypercapnia. This study underscores the importance of carefully selecting patients for home NPPV. Another study [59] was unable to show an improvement in 2-year survival, despite demonstrating reductions in day-time PaCO2 (while breathing oxygen), improvements in health status, and reductions in readmissions. Therefore, it appears unlikely that the differences in 1-year survival between the Köhnlein study [52] and others [58,59] are due only to "high inspiratory pressures" or simply to reductions in PaCO2 [52]. As a matter of fact, the control group of the Köhnlein study suffered from a high mortality rate, which may indicate that severity of disease, rather than the correction of hypercapnia or the beneficial effect of "high inspiratory pressures", primarily drives survival in patients treated with NPPV. Furthermore, the claim that chronic hypercapnia is associated with worse survival is questionable, at least in patients receiving long-term oxygen therapy [43]. There is also growing evidence that mortality in COPD is influenced by several other factors, such as exercise capacity, comorbidities and inflammatory status [60]. Overall, home NPPV has been shown to improve important physiological parameters in stable hypercapnic COPD patients through the use of a treatment strategy which sufficiently decreases elevated PaCO2 levels [51].
By doing so, long-term survival can be significantly improved. However, the influence of home NPPV on preventing re-hospitalization is still unclear, and future trials are needed to identify the subgroup of COPD patients that benefits most from home NPPV. From a clinical point of view, it seems reasonable that patients with acute hypercapnic respiratory failure needing mechanical ventilation in hospital and suffering from prolonged hypercapnia, those who can be defined as having acute-on-chronic hypercapnic respiratory failure, might benefit most. However, only inconclusive data are available to date, and further investigation is needed in this area.
Conclusion
There is conflicting evidence regarding the effect of NPPV on reducing health care utilization and mortality in acute-on-chronic respiratory failure due to COPD. We need to better assess when to initiate this therapy in patients with hypercapnia in this setting. Once stable hypercapnia is proven, NPPV may improve survival and health status. Therefore, despite recent studies adding some new data, the authors cannot recommend the widespread use of this therapeutic intervention after an episode of acute-on-chronic respiratory failure in COPD. There is simply not enough evidence to support it. Instead, this modality should be reserved for individual cases, treated in specialized centers experienced with NPPV for the treatment of stable hypercapnic COPD.
Weaning in long-term acute care hospitals in the United States (Martin Tobin, Amal Jubran)
The non-intuitive term "long-term acute care hospital" (LTACH) is viewed as the antonym of short-term acute care hospital (STACH). The term originates with Medicare bureaucrats, who define an LTACH as an acute care hospital with a mean length of stay of at least 25 days. Prolonged ventilation has been variously defined as greater than 2, 14 or 29 days, and is now generally, but arbitrarily, defined as at least 21 consecutive days of mechanical ventilation [61]. A number of different names have been applied to facilities focused on weaning from prolonged ventilation, including step-down units, respiratory intensive care units, and intermediate care units, which are located within a short-term acute care hospital, or an LTACH, which commonly is a free-standing hospital [61]. Much of the driving force behind LTACHs relates to money. Costs for ICU beds in the US have increased dramatically: the cost per ICU bed-day rose by 30.4 % between 2000 and 2005 [62]. Costs for mechanical ventilation in the US are estimated at $27 billion, representing 12 % of all hospital costs. Because of the formula employed for payment by Medicare, the diagnosis-related group (DRG) system, hospitals begin to lose large amounts of money when length of stay exceeds 14 days. Transfer of patients out of an acute ICU to a step-down unit or LTACH saves money on a per-day basis, largely through lower nurse-to-patient ratios, and increases the availability of ICU beds for more profitable cases such as elective surgeries. Although money is the dynamo behind the expansion of LTACHs, it is also recognized that patients being weaned from prolonged ventilation have different needs than patients in acute ICUs. These patients require a greater rehabilitative, as opposed to life-support, focus, and they may benefit from being transferred out of the high-technology environment of an ICU.
Given the colossal sums of money spent on caring for patients requiring prolonged mechanical ventilation, it is remarkable that these patients have attracted minimal attention from science-oriented investigators, as opposed to health economists. This review is focused on science, and on how best to wean patients receiving prolonged ventilation, rather than on the economics of ventilator care. Between 2000 and 2010, Jubran et al. conducted a randomized controlled trial to determine whether the method selected for weaning influenced weaning duration in patients receiving prolonged ventilation [63]. The two arms of the study consisted of pressure support and trials of unassisted breathing using an O2 delivery device connected to a tracheostomy tube (a trach collar). The primary aim of the study was to determine the length of time required for weaning with pressure support versus trach collar. Patients were eligible for entry into the study if they had received mechanical ventilation for at least 21 days. All patients underwent a screening procedure, which consisted of breathing unassisted through a trach collar for 5 days. One hundred and sixty patients did not develop distress during the 5 days; they were considered to have been successfully weaned and were not randomized. Three hundred and sixteen patients developed respiratory distress during the 5-day period, were judged to have failed the screening procedure, and were randomized to wean with pressure support or trach collar. Patients randomized to trach collar were disconnected from the ventilator and allowed to breathe through the tracheostomy. During the first day, the patient was allowed to breathe unassisted for a maximum of 12 h. The patient was then reconnected to the ventilator and assist-control ventilation was instituted for the next 12 h. On the second day, the 12-h trach-collar challenge followed by assist-control ventilation was repeated. On the third day, the patient was disconnected from the ventilator and allowed to breathe unassisted through the trach collar for up to 24 h. In the pressure-support arm, on the first day the initial level was titrated to achieve a total respiratory frequency of less than 30 breaths per minute. Attempts were made to decrease pressure support by 2 cmH2O three times each day. When a patient was able to tolerate pressure support of no more than 6 cmH2O for at least 12 h, the ventilator was disconnected and the patient was allowed to breathe unassisted through the tracheostomy for up to a maximum of 24 h each day. The primary outcome, weaning duration, defined as the time from the first day of randomization to the day the patient was successfully weaned, was shorter with trach collar than with pressure support: 15 versus 19 days. Patients were considered weaning successes when they breathed without ventilator assistance for at least 5 days. A Cox proportional hazards model revealed that the rate of successful weaning was 1.43 times faster with trach collar than with pressure support. Mortality was equivalent in the two arms, but, of course, the study was not powered to detect a difference in mortality. Of the entire 500 randomized and non-randomized patients, 54 % were alive at 6 months after enrollment and 45 % were alive at 12 months. This survival rate is surprisingly high. To put the numbers in perspective, 1-year survival in older (66 years) patients ventilated in an ICU was approximately 40 % [64,65].
That is, the LTACH patients in the study of Jubran et al., who were ventilated for 67 days, had a 1-year mortality comparable to ICU patients who were ventilated for 9 days. Indeed, 72 % of the 260 patients who had been weaned by discharge were alive at 12 months. What explains the faster pace of weaning with a trach collar than with pressure support? One explanation lies with how doctors make decisions. During a trach-collar challenge, the amount of respiratory work is determined solely by the patient; the ventilator cannot do any work. As such, a physician observing a patient breathe through a trach collar has a completely clear view of the patient's respiratory capabilities. During pressure-support weaning, a clinician's ability to judge weanability is clouded because the patient is receiving ventilator assistance, and it is extremely difficult to distinguish between how much work the patient is doing and how much work the ventilator is doing [66]. Accordingly, clinicians are more likely to accelerate the weaning process in patients who perform unexpectedly well during a trach-collar challenge than when a low level of pressure support is being used. This notion is borne out by the Kaplan-Meier plot, which shows that the superiority of trach collar over pressure support was evident within the first ten days of the study [63]. In summary, the number of patients requiring prolonged mechanical ventilation, whether they are placed in a short-term acute care hospital or some other location, is likely to increase enormously in the next few decades. The use of a trach collar accelerates the pace of weaning of such patients by more than 40 % as compared to weaning using pressure support.

Key points

-Prolonged weaning is defined as the need for more than three weaning trial failures, or 7 days from the first spontaneous breathing trial
-Specialized weaning units allow a greater weaning rate and better functional status
-Survivors may suffer from long-lasting physical and cognitive disabilities resulting in impaired quality of life
-Physiotherapy is part of the comprehensive management
-Protocol-based weaning strategies may be effective
-A high risk of dysphagia has been reported in critically ill patients

Prolonged weaning is defined as the need for more than three weaning trial failures, or 7 days from the first spontaneous breathing trial [67]. It occurs in up to 14 % of patients admitted to intensive care units (ICU) and treated with invasive mechanical ventilation, accounting for up to 37 % of ICU costs [68,69]. These patients have a hospital mortality of up to 32 % [70], and fewer than half of them survive beyond 1 year [71]. Specialized weaning units achieve better results, in terms of the percentage of patients freed from mechanical ventilation and of functional status at discharge, particularly if the organizational model is focused on the early post-acute period [72]. Clinical outcomes of critically ill patients admitted to ICUs have improved greatly in recent decades, owing to advances in critical care. Nonetheless, survivors may suffer from long-lasting physical and cognitive disabilities resulting in impaired quality of life, even long after the acute illness [73]. It has been reported that muscle wasting in critically ill patients starts in the very first week of illness and is more severe in patients with multiorgan failure than in those with single-organ failure [74]. Physiotherapy must be considered an integral part of the comprehensive management of these critically ill patients.
A strategy of early comprehensive rehabilitation based on interruption of sedation and on physical and occupational therapy is safe and well tolerated, resulting in better functional outcomes at hospital discharge, a shorter duration of delirium, and more ventilator-free days [75]. Current guidelines and recommendations promote early mobilization in the ICU to reduce deconditioning and other immobility-related complications, and to increase functional independence and psychological well-being [76]. Neuromuscular electrical stimulation (NMES), which can exercise muscles with a minor burden on the cardio-ventilatory system, can be easily performed in the ICU and applied to the muscles of patients lying in bed to prevent ICU neuromyopathy [77]. Although no definitive results exist regarding the application of a fixed protocol-based procedure to discontinue mechanical ventilation, this type of care plan has proven to be effective when applied to the weaning process in the critical care area [78]. Recent advances in mechanical ventilation (NAVA, closed loop) were developed to facilitate weaning in acute care and in prolonged weaning [79]. A recent meta-analysis [80] showed that weaning with closed-loop ventilators significantly decreased weaning time in critically ill patients; however, its utility when compared with protocolized weaning led by respiratory physiotherapists is still a matter of debate [81]. Aside from regaining respiratory autonomy and clinical stability, the removal of the tracheostomy may represent a difficult challenge in prolonged-weaning patients, and currently available recommendations are still largely based on subjective criteria rather than on standardized protocols. A high risk of dysphagia has been reported in critically ill patients, and an accurate evaluation of swallowing disorders may reduce the risk of infections and of failure of tracheostomy weaning [82].

Telemonitoring in ventilator dependent patients (Michele Vitacca)

Key points

-Home mechanical ventilators may be equipped with remote monitoring tools in order to improve physician supervision, with the aim of adapting settings to the needs and comfort of the patient
-The economic, regulatory and legal impacts of home telemonitoring will be important in its adoption by health care systems
-Relevant issues are prescription criteria, modalities of follow-up, team expertise, technologies, adherence, bundling of services, and outcomes

Introduction and rationale

Patients with chronic respiratory insufficiency requiring home mechanical ventilation (HMV) have a high, although underestimated, prevalence in Europe [4]. Home mechanical ventilation requires patient and family cooperation; nevertheless, clinical conditions, technology needs, lack of professional supervision, and acute exacerbations make its management a difficult task [4,83]. Provision and maintenance are often carried out by external companies, without any accepted standardisation, and regular feedback to the clinical centres is usually lacking [84]. The need to reduce healthcare costs has prompted the development of telemedicine for home assistance [85]. However, only a few controlled studies evaluating its effectiveness are available so far. Identification and selection of the HMV patients who may benefit from such a telemonitoring approach represent key factors [86]. There are real challenges when providing HMV, including patient and caregiver training, adequacy of respiratory care, and reimbursement. A concrete illustration of the kind of remote supervision involved is sketched below.
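As one illustration of how telemonitoring can support professional supervision, the following minimal sketch applies a sentinel-value decision-support rule to nightly data transmitted from a home ventilator. The parameter names and thresholds are illustrative assumptions, not values taken from any guideline or study cited here.

```python
# A minimal sketch of a sentinel-value decision-support rule for home
# ventilation telemonitoring. Field names and thresholds are illustrative
# assumptions only.

from dataclasses import dataclass

@dataclass
class NightlyReport:
    patient_id: str
    usage_hours: float   # hours of ventilator use overnight
    mean_spo2: float     # mean oxygen saturation (%)
    leak_l_min: float    # mean unintentional leak (L/min)

def needs_review(r: NightlyReport,
                 min_usage=4.0, min_spo2=88.0, max_leak=30.0) -> list[str]:
    """Return the list of triggered alerts; an empty list means no action."""
    alerts = []
    if r.usage_hours < min_usage:
        alerts.append("low adherence")
    if r.mean_spo2 < min_spo2:
        alerts.append("nocturnal desaturation")
    if r.leak_l_min > max_leak:
        alerts.append("excessive mask/circuit leak")
    return alerts

report = NightlyReport("pt-042", usage_hours=3.2, mean_spo2=86.5, leak_l_min=12.0)
for alert in needs_review(report):
    print(f"{report.patient_id}: {alert} -> notify respiratory team")
```

In practice such a rule would sit behind the "decision support system" functionality defined in the taxonomy below, with health personnel contacting the patient or caregivers once an alert fires.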
The aim of a recent ERS Task Force has been to develop and establish a European network of clinical experts in HMV for a critical analysis of the current status of telemonitoring services in ventilator-dependent patients, and to provide a consensus document on common clinical criteria, equipment, and facilities.

Overview of telemedicine and telemonitoring definitions

Telemedicine (TM) is the distribution of health services, in conditions where distance is a critical factor, by health care providers using information and communication technologies to facilitate the exchange of important clinical information [86]. TM dimensions may be divided into functionality, applications and technology categories. Functionality, in turn, may be divided into:

a) Tele-consultation: Second opinion on demand between patient/family and staff, or among health operators; opinions and advice provided at a distance between two or more parties separated geographically
b) Decision support system: Alerting health personnel, in response to a sentinel value, who then contact the patient or caregivers
c) Remote diagnosis: Identifying a disease by the assessment of data transmitted to the receiving party through instrumentation monitoring a patient at a distance
d) Tele-therapy: Direct prescription
e) Mentoring (i.e., tele-coaching): Direct reinforcement or recorded messages/communications to improve adherence
f) Telemonitoring: Digital/broadband/satellite/wireless or bluetooth transmission of physiologic and other non-invasive data (i.e. biological storage data transfer)
g) Tele-evaluation: On-demand data transfer for use as biological outcome measures
h) Telecare: Network of health and social services in a specific area; in case of emergency, the patient calls medical personnel, an emergency call service or members of the family
i) Telerehabilitation: A system which allows the patient to receive home care and guidance on the rehabilitation process through point-to-point video-conferencing connections between a central control unit and the patient at home
j) Emergency calls: Helpline service that gives the ability to initiate a call for help to an Operation Centre, usually active 24 h a day throughout the year
k) Teleconference-Audio: Electronic two-way voice communication between two or more people located in different places, making use of voice, video and/or data transmission systems
l) Telepresence: Use of robotic and other devices that allow a person to perform a task in a remote place by manipulating instruments and receiving sensory information and feedback
m) Telespirometry: Remote recording of a flow-volume curve through a spirometer, which is then sent to a central unit for processing and reporting

Indications for TM in ventilator-dependent patients

In general, TM would be appropriate in patients receiving supported ventilation outside an acute care hospital, including those receiving non-invasive ventilation (NIV) and those receiving invasive ventilation (IV). The latter would include those with weaning failure and those undergoing some kind of weaning process. Telemonitoring could be used for:

Ventilator weaning: As an adjunct to weaning outside the acute care hospital [87].

ALS (amyotrophic lateral sclerosis) [87][88][89]: TM in patients ventilated due to ALS has been addressed in the medical literature. One study by De Almeida and colleagues demonstrated that the device was user-friendly [89].
A prospective, single-blinded, controlled trial of TM versus no TM in 40 ALS patients showed that telemonitoring reduced health care utilization and probably had beneficial effects on survival and functional status [87]. TM is cost-effective in these patients, representing major cost savings to the NHS in the order of 700 euros/patient/year.

Equipment/technology available

The components of the technological dimension can be grouped into three sets of variables: synchronicity, network design, and connectivity [85]:

a) Synchronicity is used here to incorporate both timing and technology.
b) Network design/configuration includes three modalities: Virtual Private Networks, the open internet, and social networks, in which information is posted and shared.
c) Connectivity, wired and wireless, provides different levels of bandwidth and the attendant speed and resolution or quality of service.

A wide range of remote health monitoring systems is available. The correct level of technology should be: i) safe; ii) feasible; iii) effective; iv) sustainable; and v) flexible, to meet different patients' conditions and needs.

Legal issues

The use of TM has highlighted several medico-legal issues that must be addressed as this intervention achieves greater acceptance [107]. Further governmental, ethical, legal, regulatory, technical, and administrative standards for remote medicine will be necessary to assist individuals and organizations in providing safe and effective services.

Economic considerations

As awareness of the potential role of at-home telecare and telemonitoring in the care of ventilator-dependent (VD) patients increases, potential roadblocks also become more apparent. This type of care is labor-intensive and costly [107], and the current medical literature on its cost-effectiveness presents contrasting results [89,95,100,104]. Analyses comparing institutional versus at-home interventions in VD patients have focused on traditional outcomes such as hospitalization rates. This narrow approach has ignored important methodologies and outcome areas such as: a) telemonitoring versus formal caregiver monitoring in the care of at-home VD patients, in order to assess the potential savings of telemonitoring compared to labor-intensive home activities; and b) comparison of quality of life between the above two groups. To evaluate the real cost-effectiveness of a new method such as remote monitoring in this population, it is important to understand what "standard therapy" and "usual therapy" actually refer to in published papers. Often the comparator treatment is quite variable among European countries. "Standard therapy" can be considered to encompass drug prescription, control by the general practitioner (GP), structured outpatient programs, and pathways of integrated on-demand home visits with dedicated paths in highly disabled patients. Each of these programs may have different indications, applicability and costs, making generalizations from comparisons with a new protocol of remote monitoring problematic. Although preliminary studies have shown an advantage in applying telehealth systems, more recent research casts some doubt on their superiority with regard to effectiveness or cost savings.

Tele-rehabilitation for home mechanically ventilated patients

Integrating telehealth into existing health service delivery patterns will require a reliable technological infrastructure, effective clinical demonstrations, assessment of practitioners' readiness, and careful integration of technologies into workflow and policy synchronization.
Future initiatives will cover developing organizational models, promoting sustainability and participation, creating feasible, economical, effective and safe technological models, developing new technical devices and software and, ultimately, demonstrating effectiveness at the clinical level (including cost reduction, enhancement of quality of life, and patient/caregiver support).

Role of telemedicine in sleep-related breathing disorders

Sleep-related breathing disorders (SRBD) are a group of pathologies characterized by abnormalities of the respiratory pattern during sleep. The two most important are obstructive sleep apnea (OSA) and the reduction of ventilation during the night (hypoventilation syndromes). Recent investigations have evaluated the application of telemedicine to diagnosis, treatment and compliance in OSA patients. For hypoventilation syndromes, the following areas should be considered in investigations: a) indications for treatment; b) NIV titration; c) optimal NIV devices and quality control; d) follow-up strategies; e) procedures to obtain adequate ventilation; and f) treatment adherence. Finally, cost-effectiveness must ultimately be addressed.

Telemedicine at the end of life

Telehospice, the use of telemedicine technologies to provide services to hospice patients [108][109][110], may offer an innovative solution to the challenges of providing high-quality, cost-effective end-of-life care.

Future considerations

Telemonitoring could become a key element (part of the 'total package') in the integrated management of the patient requiring home mechanical ventilation for chronic respiratory failure. Future outcome assessment could include:

Ethics and palliative care in critically-ill respiratory patients (Michele Vitacca)

Key points

-The trajectory of the dying process in COPD patients is highly variable
-Lack of surveillance and inadequate services, with an absence of palliative care, are routinely experienced
-Patients with COPD most frequently request information on the diagnosis and disease process, its treatments, prognosis, maintaining quality of life, and advance care planning
-All too often, palliative home care programs and hospice admissions for end-of-life care in respiratory patients are insufficient or absent

In the USA, non-oncological respiratory causes account for 8 % of all deaths and 9.6 % of deaths in individuals over age 65 years; of these, 56 % are from COPD [111]. The COPD time course is characterized by a progressive worsening of dyspnoea, reduced effort tolerance, and more frequent exacerbations and hospitalizations [36]. In COPD, oxygen therapy and mechanical ventilation (MV) improve survival and morbidity in acute-on-chronic respiratory failure [36]. The downhill trajectory in COPD patients is variable and not as predictable as that of other chronic diseases [36]. In general, the course of COPD is that of progressive long-term disability, with periodic exacerbations and an unpredictable timing of death, which characterize dying with chronic multiorgan failure [36]. In a provocative paper, Curtis et al. [36] proposed that, after a serious analysis of the condition and clinical status, we would define a patient as needing palliative care when he/she has a low chance of recovery, poor rehabilitation potential, and high organizational complexity and instrumental requirements. Multiple factors influence quality of care for COPD patients requiring palliative care.
Three examples include: 1) the presence of anxiety and depression, which are common in advanced COPD; 2) the use of advance care planning; and 3) effective communication among the patient, family and health care providers. Those individuals in their last days of life (typically, with death estimated within the next 7 days) may be defined as end-of-life (EOL) patients. Care for these patients may come to be seen as potentially "futile", i.e. involving disproportionate measures in terms of quality and quantity of care with poor expected quality of life. The hospital is often the location where EOL decisions are made for patients with end-stage COPD [112]. The patient, family and health care providers are usually involved in this process; all provide different perspectives and expectations. In a recent survey, Nava et al. [113] showed that, in European respiratory intermediate care units and high-dependency units, an EOL decision was made in 21.5 % of patients. Withholding of treatment, do-not-intubate/do-not-resuscitate orders, and noninvasive mechanical ventilation (NMV) as the ventilatory care ceiling are the most common forms of decision-making. In the same survey, the investigators showed that competent patients, together with nurses, are often major players in EOL decisions. A common notion is that European intensive care unit (ICU) physicians, in most cases, do not experience difficulties with EOL decisions. However, Sprung et al. [114] underline that EOL decisions change according to diagnosis, country and the doctor's religion. Another important point is the well-known difficulty in accurately predicting outcomes (including death) for COPD patients admitted to the ICU. Wildman and colleagues [115] investigated whether clinicians' prognoses matched survival outcomes in patients hospitalized in 92 ICUs and three respiratory high-dependency units in the United Kingdom with severe acute exacerbations of COPD. Of this group, 517 (62 %) survived out to 180 days. In general, the clinicians' prognoses were too pessimistic: their predicted survival was 49 %. Furthermore, for the patients they considered to have the severest disease, predicted survival was 10 %, but in reality it was 40 %. Gerstel et al. [116] pointed out that one of the main problems is that withdrawal of life support in the ICU is often a complex process, influenced strongly by patient and family characteristics. In this study, in almost one-half of the group, the decision to withdraw life-sustaining therapy took longer than one day. Those patients with longer decision-making were younger, had a longer length of stay in the ICU, received more life-sustaining interventions, were less likely to have a diagnosis of cancer, and had more decision-makers involved in the process. A longer decision process leading to withdrawal of life support was associated with increased family satisfaction, as was extubation before death. Compared to hospitalized patients with lung cancer, individuals with COPD were more likely to receive mechanical ventilation, tube feeding, and resuscitation [117]. Furthermore, in COPD patients, mechanical ventilation had greater short-term effectiveness, based on survival to hospital discharge (76 % vs. 38 %), and higher 2-month and 6-month survival.
Curtis and colleagues [118] pointed out that an additional important problem related to EOL is the strategy of communication: physicians frequently have difficulty discussing EOL care with patients and their families and caregivers. Health care utilization is strongly weighted toward the end of life in COPD, as in other diseases. For example, Andersson and colleagues [119] showed that more than 68 % of all COPD admissions and 74 % of all days in hospital occurred in the 3.5 years before death. The last 6 months of life accounted for 22 % and 28 % of all COPD admissions and days, respectively. Suboptimal surveillance, inadequate services, and the absence of palliative home care are common in severe COPD patients with EOL issues [120]. This also holds for respiratory patients who are housebound, with high levels of morbidity and high requirements for community health services. COPD patients approaching EOL require, at a minimum, education on the diagnosis and disease process, available treatment modalities, what they have to do and what to expect, and information on prognosis. Despite this, only 32 % of respiratory patients report discussing EOL care with their physicians [120]. Stated barriers in this study included, "I would rather concentrate on staying alive than talk about death" and "I'm not sure which doctor will be taking care of me if I get very sick." Thus, it is necessary to identify areas of communication that physicians do not address and areas that patients rate poorly, including talking about prognosis, dying and spirituality. Finally, issues in COPD patients receiving mechanical ventilation (MV) deserve mention. Marchese et al. [121] describe survival, predictors of long-term outcome, and attitudes in patients treated at home by tracheostomy-intermittent positive-pressure ventilation (TIPPV) over a 10-year period. Sixty-four out of 77 patients (83 %) were pleased to have chosen MV with tracheostomy, and 69 (90 %) would choose this option again. Forty-two caregivers (55 %) were pleased the patients had chosen home mechanical ventilation (HMV), but 29 (38 %) reported major burdens. TIPPV is generally well received by patients, is considered safe, and often permits survival for relatively long periods of time. Vitacca et al. [35] describe the family's perception of the care delivered to home MV patients during the last 3 months of life. Eleven respiratory units submitted a binary 35-item questionnaire with 6 domains (symptoms, awareness of disease, family burden, dying, medical troubles and technical problems) to close relatives of 168 deceased patients (41 % with COPD). The majority had prominent respiratory symptoms and were aware of the severity and prognosis of their disease. Family burden was high, especially with respect to financial burden. During hospitalisation, 74.4 % of patients had been admitted to an ICU and 27 % received resuscitation manoeuvres. Hospitalisations and family financial burden were unrelated to diagnosis and use of MV, and families of the patients did not report major technical problems regarding the use of ventilators [35]. Steele et al. [122] describe how hospice care can offer expertise for palliation and may be used as a bridge between hospital and home. Communication with patients and families about EOL issues is an important component of proper medical care that is often neglected in the training of clinicians. Although direct studies of health care provider interaction in COPD in this setting are not readily available, Vitacca et al.
[123] showed how to communicate bad news to caregivers of patients with amyotrophic lateral sclerosis (ALS). In particular, caregivers require major assistance during the delicate times of discussing advance care planning and directives and critical treatment decisions. Clinicians are, therefore, an important target group for education on this type of communication. The Calgary-Cambridge model for medical consultations and the SPIKES protocol (Setting up, Perception, Invitation, Knowledge, Emotions, Strategy and Summary) [124] for breaking "bad news" provide examples of consultation guides that integrate the patient's agenda with biomedical issues. Communication of advance care planning, defined as an ongoing discussion among patients and family members, may be a more effective means of meeting patients' wishes. Future directions for non-oncological respiratory patients with severe disease or EOL issues will focus on outcomes as well as on skills and interventions for doctors, nurses and respiratory therapists. Many questions remain unanswered: Which are the important, measurable prognostic indicators? Which are the indicators of unmet patient and caregiver needs? Which interventions optimize quality of life in this setting? Which are the important, relevant and priority criteria for palliative network and hospice access? What are the economic and social costs? What about withdrawal of enteral nutrition? What about bio-ethical issues? What about patient information, awareness and self-determination? In conclusion, for respiratory patients with EOL issues we need to:

-Offer best practice to ameliorate the pervasive effects of the disease
-Recognize that medication alone is insufficient to achieve an optimal outcome
-Control the often overwhelming symptoms (such as dyspnoea) and psychological symptoms
-Focus our care on the patient and the family
-Allow for and foster a continuous presence of family, friends and religious assistance
-Give our patients the time and place to say good-bye to everyone
-Talk to our patients and relatives using their language
-Listen to our patients and their families and caregivers
-Consider patients' preferences
-Not unduly prolong suffering to maintain life in some EOL situations
-Consider hospice and "palliative care" as opportunities for our patients

Ethics and palliative care in ventilator dependent patients (Guido Vagheggini, Nicolino Ambrosino)

Key points

-The care of end-stage patients requires a progressive reduction of useless and "futile" treatments and an increasing focus on the relief of symptoms
-Clinicians are involved in surrogate or joint decision making
-Patient-centered supportive care should respect patients' values and preferences
-Physicians and healthcare professionals are challenged by the prognostic accuracy of patient survival
-End-stage COPD patients receive far fewer opiates than cancer subjects to alleviate dyspnea

The care of end-stage patients requires a progressive reduction of useless and "futile" treatments and an increasing approach aimed at the prevention and relief of symptoms, including maintenance and improvement of the quality of life of patients and families. Nevertheless, end-stage lung diseases such as Chronic Obstructive Pulmonary Disease (COPD) have a prognosis as severe as that of lung cancer, but a lower risk of Intensive Care Unit (ICU) admission refusal compared to patients with cancer or haematological malignancies [125,126].
Owing to the different evolution of end-stage chronic respiratory diseases, and to the limited accuracy of physicians' judgments of prognosis in terminally ill patients, it may often be difficult to decide when to start palliative treatment [36,127]. In these patients, the transition from usual care to palliative and end-of-life care cannot be a single step down; palliation should start alongside active care, as soon as needed, and last until and beyond death to ensure appropriate support for the family. In these patients, the main questions a clinician should face are: What might be desirable in terms of medical intervention? Should we pursue aggressive treatment or comfort treatment alone? In these patients, determining whether a patient is dying or not has become as important as the management of organ-support therapy itself, as withdrawal or withholding of artificial life support may be decisive for survival [128]. Very often, clinicians are involved in surrogate or joint decision making, even in major medical decisions, so they need to be in partnership and in communication with surrogate decision makers [129]. Patient-centered supportive care should respect patients' values and preferences, be coordinated and integrated into the care programme, and include adequate information, communication and education. Physical comfort and emotional support of the patient should be pursued, with the involvement of family and friends, in order to share decisions and avoid experiences of abandonment when care is redirected toward a more palliative purpose [130]. Physicians and healthcare professionals are challenged by the prognostic accuracy of patient survival in patients with severe end-stage COPD, and they are less likely to engage in end-of-life care planning than in terminal diseases like cancer [131]. Home mechanical ventilation (HMV) is a growing issue in developed western countries; it is often administered as a life-sustaining treatment, but may also have an important role as a palliative treatment of dyspnea [4,132,133]. More recent guidelines include the use of pharmacological treatment of dyspnoea. Nevertheless, although the beneficial response to and safety of opiates in end-stage lung disease have been demonstrated, end-stage COPD patients receive far fewer opiates than cancer subjects to alleviate dyspnea [134][135][136]. Conversely, end-stage COPD patients undergo more hospital and ICU admissions than cancer patients, evidence that some issues in general and medical culture prevent appropriate supportive and palliative care in non-cancer end-stage lung disease [137]. In amyotrophic lateral sclerosis (ALS) patients, too, a high rate of hospital death is reported, the place of death depending largely on the attitude of the hospital and the availability of local resources, more than on patients' and families' preferences [138]. In conclusion, in the care of end-stage lung disease patients, we must facilitate care in accordance with the patient's wishes, when possible, by exploring advance directives and involving the family and care team in the development of the management plan. Continued and intensive efforts have to be addressed to palliating symptoms as early as possible during the clinical course of the illness, alongside the care of treatable conditions, and recognizing the need for end-of-life care when appropriate.
In this perspective, harmonising the criteria for acute-care admission to the ICU with the criteria for long-term care is a crucial challenge in making the care provided to the patient more ethical.
Assessment of the Quality of Mobile Applications (Apps) for Management of Low Back Pain Using the Mobile App Rating Scale (MARS)

Digital health interventions may improve different behaviours. However, the rapid proliferation of technological solutions often does not allow for a correct assessment of the quality of the tools. This study aims to review and assess the quality of the available mobile applications (apps) related to interventions for low back pain. Two reviewers searched the official stores of Android (Play Store) and iOS (App Store), localised for Spain and the United Kingdom, in September 2019, for apps related to interventions for low back pain. Seventeen apps were finally included. The quality of the apps was measured using the Mobile App Rating Scale (MARS). The scores of each section and the final score of the apps were retrieved, and the mean and standard deviation obtained. The average quality ranged between 2.83 and 4.57 (mean 3.82) on a scale from 1 (inadequate) to 5 (excellent). The best scores were found in functionality (4.7), followed by aesthetic content (mean 4.1). Information (2.93) and engagement (3.58) were the worst-rated items. The apps generally have good overall quality, especially in terms of functionality and aesthetics. Engagement and information should be improved in most of the apps. Moreover, scientific evidence is necessary to support the use of applied health tools.

Information Sources and Search Strategy

Two reviewers (physical therapists with experience in low back pain (LBP) management and mobile health (mHealth), previously trained in the use of the Mobile App Rating Scale (MARS)) searched for applications (apps) that included an intervention for LBP in the official stores of the two main operating systems, Android (Play Store) and iOS (App Store). The search was carried out in September 2019. To maximise the chances of recovering all the target results, a general term such as "low back pain" was used for the search. Since each platform offered different content depending on the region, the search process was repeated on both platforms for localisation in Spain and the United Kingdom.

Eligibility Criteria

For this study, those applications (apps) related to low back pain (LBP) that included feedback or an intervention were selected. Apps that covered other pain conditions but had specific sections for LBP were also selected. Apps based solely on general pain, or in languages other than English or Spanish, were excluded. Likewise, apps with significant technical problems and paid apps were also excluded. Apps that were free to download and use but included exclusive paid content were not discarded, since they allowed the app to be used and evaluated. However, any free apps that subsequently required a subscription, or that only allowed an initial assessment before requiring payment, were excluded.

Application (App) Selection

The same independent reviewers screened the title and the download page of the apps found. Potentially eligible apps were imported into a database (in case of doubt, they were imported), and duplicates from apps found in both regions were identified and unified (a minimal sketch of this step is given below). The remaining apps were downloaded, and any apps that did not meet the selection criteria were deleted. A third reviewer was consulted in the event of any doubts concerning the eligibility of an app.
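The de-duplication step just described can be made concrete with a short sketch. The field name (a store package/bundle identifier) and the record structure are illustrative assumptions about the data collected, not details reported in the study.

```python
# Illustrative sketch of the de-duplication of search results from two
# regional storefronts (Spain and the UK), keeping one record per app.
# The "package_id" key is an assumed stable identifier.

def unify(results_es: list[dict], results_uk: list[dict]) -> list[dict]:
    """Merge two regional search exports, keeping the first record seen
    for each unique app identifier."""
    merged = {}
    for record in results_es + results_uk:
        merged.setdefault(record["package_id"], record)
    return list(merged.values())

es = [{"package_id": "com.example.backcare", "region": "ES"}]
uk = [{"package_id": "com.example.backcare", "region": "UK"},
      {"package_id": "com.example.lbp", "region": "UK"}]
print(len(unify(es, uk)))  # -> 2 unique apps
```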
Data Extraction and Quality Assessment

Two reviewers (physical therapists with experience in low back pain (LBP) management and mobile health (mHealth), previously trained in the use of the Mobile App Rating Scale (MARS)) independently downloaded, used, and evaluated the remaining applications (apps), using a form for data extraction. The form included data about the app developer, platform, version, year of publication of the last version, cost (paid or free app, and additional paid content), number of downloads, user rating, availability of a privacy statement and privacy-related technical aspects, medical product information, and the items included in the MARS. The MARS [29] was applied to assess the quality of the included apps. This scale is based on 23 items grouped in the following five sections:

(A) Engagement (five items): Fun, interesting, customisable, interactive (e.g., sends alerts, messages, reminders, feedback, enables sharing), well-targeted to the audience.
(B) Functionality (four items): App operation, ease of learning, navigation, flow logic, and gestural design of the app.
(C) Aesthetics (three items): Graphic design, overall visual appeal, colour scheme, and stylistic consistency.
(D) Information quality (seven items): Contains high-quality information (e.g., text, feedback, measurements, and references) from a credible source.
(E) Subjective quality (four items): Personal interest in the app.

Each item was scored from 1 (inadequate) to 5 (excellent), and a final score of app quality was obtained as the mean score of sections A, B, C, and D. The subjective app quality score was obtained independently as the average score of section E. Moreover, there were six app-specific items grouped in section F that assessed the perceived impact of the app on the user's knowledge, attitudes, and intentions of change, as well as the likelihood of an effective change toward the health behaviour in question. Because the subjective quality section is excluded from the overall mean app quality score, MARS's ability to assess quality objectively is increased. Moreover, the high correlation between the MARS total score and the user's star rating found in previous studies suggests that the scale adequately captures perceived overall quality [29].

Data Synthesis and Analysis

Each reviewer independently assessed each of the Mobile App Rating Scale (MARS) items in each of the applications (apps). For each item of each app, the mean of the two reviewers' values was calculated. This mean value was used for the analysis of the mean of all the apps for each item and section of the MARS. Thus, average scores and standard deviations were retrieved for each section, as well as the total score and standard deviation of each app, allowing for a descriptive analysis. A third reviewer was consulted in the event of discrepancies in the descriptive items (items without a numerical score) between the two main reviewers. Following the example of a previous study [33], apps were classified into tertiles to facilitate interpretation for readers.

Results

A total of 500 applications (apps) were retrieved from the Play Stores of the United Kingdom and Spain, while 53 apps were found in the App Stores of both countries. After deleting duplicates and screening the title and the download page of the remaining apps, 34 Android and 15 iOS apps were selected as potentially eligible from the Play Store and App Store, respectively.
After downloading and checking fulfilment of the selection criteria, 17 apps were finally included in the descriptive analysis. Three of the included apps were obtained from the App Store, while the remaining 14 apps were retrieved from the Play Store. Tables 1 and 2, respectively, show the details and the main characteristics of the included apps. Ten apps included a privacy statement detailing which information was collected and for what purpose (Table 2). Only six apps introduced login or password options to improve users' data privacy (Table 2). One app (Healo) declared itself to be General Data Protection Regulation compliant and secure in accordance with European Union regulation, and to be registered with the Swedish Medical Products Agency (Table 2). Table 3 shows a summary description of how the apps work. The mean score given by users for the ten apps that reported these data was 4.11 (standard deviation (SD) 1.07) on a scale of 1-5 stars. According to the Mobile App Rating Scale (MARS) scoring, mean app quality was 3.81 (SD 0.43), ranging from 2.83 (the worst rated) to 4.57 (the most highly rated). Table 4 shows the score of each app according to the MARS.

Table 3. Summary description of how the applications (apps) work.

-Back pain relief exercises: The user chooses their pain area and receives a list of suggested exercises. There is a chat to contact health care advisers. It contains premium exercises unlocked by monthly payment. The app allows the user to set reminders.
-Lower back yoga-floor class: The application (app) offers examples of exercises and yoga poses to be performed, as well as tips for back care. Free access to sample exercises; the full content is available under a three-month subscription. It allows the user to set reminders.
-Regimen-back pain relief: The app offers examples of guided exercises depending on the objective selected by the user. It allows the user to set reminders.
-6 Minute Back Pain Relief: The app offers nine guided exercises related to yoga poses. It allows the user to set reminders.
-Yoga Poses for Lower Back Pain Relief: The app offers information related to back pain and guided exercises related to yoga poses. It allows the user to set reminders.
-Lower Back Pain Exercises: The app offers guided exercises. It allows the user to configure the difficulty of the exercises and set reminders.
-Escuela de Espalda: The app includes extensive information related to anatomy, biomechanics, exercise, postural hygiene, lifestyles, diagnostic tests, and pain management strategies. It provides evaluation tests and questionnaires to be completed by the user that allow follow-up, as well as guided exercises. The user can select favourite exercises. The app includes a section with utilities for the healthcare professional.
-WomenBackWorkout: The app offers guided exercises. It allows the user to configure the difficulty of the exercises and set reminders.
-Healthy Spine & Straight Posture-Back exercises/Columna vertebral sana & Postura recta: The app contains exercise programmes to be carried out in a guided way, as well as a battery of explained exercises. The user can self-evaluate. It allows the user to set reminders and configure the training frequency. It requires payment for full access to programmes.
-Back Doctor (FREE) Health. Stretch. Workout: The app includes exercise programmes for neck and back pain, as well as chronic pain and other conditions. It provides information about lifestyles.
-Right Motion-Alivia tu dolor solo con ejercicios: The app offers guided exercises. It allows the user to set reminders.
-Healo: The app includes exercise programmes with exercises described in text and video. Exercise programmes are personalised depending on the results of the diagnosis received from a doctor or of a self-assessment in the app. The app allows guided self-diagnostics that include the location (body chart) and the characteristics of the pain, its onset, and its duration, among others. It asks the user questions aimed at finding red flags. The app offers a potential diagnosis and an associated exercise programme in which reminders can be set. It includes a chat with health care advisers. The design and graphics of the app are very attractive.
-Doado. Your Back Companion: The app includes explanatory videos of exercises and recommended postures in various situations of daily life. It allows the user to evaluate their pain in order to keep track of it.
-Bella's Lower Back Pain Exercises: The app includes an exercise programme divided into progressive phases. The exercises are explained in writing, as well as in audio and video. It requires payment for access to the full content.
-Healure: Physiotherapy Exercise Plans: The app contains exercise programmes to be carried out in a guided way, as well as assessments related to pain and symptoms. It allows the user to schedule programme sessions and set reminders. It requires payment for access to the full content.
-Curable: Back Pain, Migraine & Chronic Pain Relief: The app offers a narrative in the form of a written and spoken conversation through which the user discovers information related to pain. The intervention is focused on the neuroscience of pain and patient education. The design of the app is very attractive.
-Improve Posture For A Healthy Spine: The app contains exercise programmes to be carried out in a guided way, as well as a battery of explained exercises. It allows the user to set reminders. It requires payment for access to the full content.

Discussion

This study aimed to review which mobile applications (apps) are currently available in the app market and to carry out an exhaustive assessment of their quality using the Mobile App Rating Scale (MARS).

Quality

As described above in the Methods, the Mobile App Rating Scale (MARS) divides the application (app) scores into three dimensions. The assessment of sections A-D provides the mean quality score of the app. Subjective quality is scored according to the evaluation of section E. Finally, section F assesses the perceived impact of the app. This evaluation structure prioritises aspects such as engagement, functionality, aesthetics, and information over the subjective opinion of the assessor about the potential effectiveness of the app (E, subjective quality; F, perceived impact of the application). While digital health interventions require appropriate functional and design aspects, a high potential for impact and effectiveness is essential for continued use of the health tool. A general assessment of the quality of the apps, therefore, should take into account the quality items assessed in sections A-D of the MARS, along with the scores for the subjective quality and app impact sections, as well as the user rating. Moreover, not all aspects assessed on the scale have the same relevance for digital health interventions. Detailed analysis of the results in each section is essential in choosing an appropriate app. It is difficult to deduce the characteristics that an ideal app should have. The scoring arithmetic itself is straightforward, as the sketch below illustrates.
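The following minimal sketch reproduces the MARS scoring arithmetic described in the Methods: per-item scores are the mean of the two reviewers' 1-5 ratings, the overall quality score is the mean of sections A-D, and section E is reported separately. The item scores used here are invented for illustration only.

```python
# A minimal sketch of the MARS scoring arithmetic (sections A-E).
# All item scores below are hypothetical; they do not come from Table 4.

from statistics import mean

def item_score(reviewer1: float, reviewer2: float) -> float:
    """Per-item score = mean of the two reviewers' 1-5 ratings."""
    return mean([reviewer1, reviewer2])

# Hypothetical per-section item scores (already averaged across reviewers)
sections = {
    "A_engagement":    [3.5, 4.0, 3.0, 3.5, 4.0],              # 5 items
    "B_functionality": [5.0, 4.5, 4.5, 5.0],                   # 4 items
    "C_aesthetics":    [4.0, 4.5, 4.0],                        # 3 items
    "D_information":   [3.0, 2.5, 3.0, 3.5, 2.5, 3.0, 3.0],    # 7 items
    "E_subjective":    [2.0, 3.0, 2.5, 3.0],                   # 4 items, reported separately
}

section_means = {name: mean(items) for name, items in sections.items()}

# Overall app quality = mean of sections A-D; E is excluded by design.
overall_quality = mean(section_means[name] for name in
                       ("A_engagement", "B_functionality",
                        "C_aesthetics", "D_information"))

print(round(overall_quality, 2))                 # overall MARS quality score
print(round(section_means["E_subjective"], 2))   # subjective quality, separate
```

Apps can then be ranked by `overall_quality` and split into tertiles, as the paper does to ease interpretation.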
Obtaining high levels of engagement through appropriate strategies that motivate behaviour change is essential, on the one hand. However, engagement is significantly influenced by the functionality and ease of use of the app, and by aspects such as aesthetics that make it more attractive. It is essential, on the other hand, that the information provided by the app is of sufficient quality for its use to be safe. The efficacy of the intervention delivered by each specific mobile health (mHealth) app should be studied, but this rarely occurs. Legally, how the patient's private information is handled is a relevant aspect for compliance with current regulations. In this study, the average quality of the 17 included apps ranged between 2.83 (Right Motion-Alivia tu dolor solo con ejercicios) and 4.57 (Healthy Spine & Straight Posture-Back exercises/Columna vertebral sana & Postura recta) (mean 3.82) according to the MARS, scored on a scale from 1 (inadequate) to 5 (excellent). Considering sections A-D, the best scores were found in functionality (mean 4.7), followed by aesthetic content (mean 4.1). By contrast, information (mean 2.93) and engagement (mean 3.58) were the worst-rated items. Regarding subjective quality (mean 2.6) and the app-specific assessment (mean 2.82), where impact on the user is most relevant, the mean scores were low. However, in these sections, the variability between apps was significant, with scores ranging from 1.25 (6 Minute Back Pain Relief) to 4.25 (Healo) in the subjective quality assessment, and from 1.83 (Yoga Poses for Lower Back Pain Relief; Bella's Lower Back Pain Exercises) to 4.33 (Lower back yoga-floor class) in the app-specific items, respectively. Overall, the mean score was good in most apps, with only one app (Right Motion-Alivia tu dolor solo con ejercicios) scoring below 3/5. However, the moderately low mean scores found in fundamental aspects such as engagement (3.58) and subjective assessment (2.6) should be analysed, given the risk of developing technically well-designed applications that are unable to produce engagement and adequate adherence in users. Substantial differences were found between the quality score obtained using the MARS and the star rating awarded by users. In the case of user ratings, the score for the apps was higher (mean 4.01) than in the assessment using the MARS (mean 3.82), except in three of them. The MARS score is an objective tool based on technical criteria considered necessary for an appropriate application of the intervention. Conversely, user assessment is usually subjective and, although it does not refer to specific items, it is mostly influenced by the impact upon and the subjective appreciation of the user. This makes a comprehensive analysis of both assessment methods essential; otherwise, technically well-designed apps could be overrated, while those with aspects better valued subjectively by users could be dismissed.

Tailored Interventions: Behaviour Change Techniques

One of the foundations of tailored interventions is the premise that changes in sociocognitive determinants (attitudes, efficacy beliefs) favour behavioural change [45]. There are different hypotheses about the mechanisms that underlie this process, but most of them agree that it is necessary to transform intention into action [46]. Education or provision of information seems to be the main strategy to influence these determinants [45].
Consequently, an intervention that provides details about the benefits of a specific healthy behaviour could persuade a person to take actions oriented toward this type of behaviour, which in turn could lead to a major change in behaviour. Other strategies to favour this behavioural change could be the use of feedback, monitoring, reminders, or the creation of a social group. The Mobile App Rating Scale (MARS) has a list of theoretical backgrounds/strategies that may have been applied in applications to promote behaviour change. These strategies are based on evaluation, feedback, information and monitoring, the choice of goals, and the proposal of advice and training methods, among other behaviour change techniques. Additionally, it has a list of possible technical aspects that enhance these strategies. The evaluator marks the strategies used by each app. In this study, information and education, followed by advice, strategies, and skills training, were the most used strategies to motivate behaviour change. The most widely used technical aspect to support these strategies was the sending of reminders.

Lack of Evidence

Health interventions supported by mobile health (mHealth) applications (apps) have experienced a significant increase. However, this rapid proliferation often hinders appropriate assessment of the effectiveness and validity of the applied tools [30]. Consequently, although studying the feasibility, safety, and effectiveness of a health intervention before implementing it is essential, most such interventions are applied despite a lack of such evidence [47]. The Mobile App Rating Scale (MARS) assesses whether there is a scientific basis (whether the app has been trialled/tested and verified by evidence published in the scientific literature) supporting the use of apps, through an item included in the information section [29]. No scientific evidence supporting the use of any of the apps included in this study was found. These results are in line with previous reviews of mHealth apps focused on pain, in which a significant lack of scientific basis was found to support the use of the available apps [33,48]. As previously suggested by some authors, these results may be due to the commercial and non-scientific origin of the included apps [33]. It is possible that promoting the development of apps by academic and scientific institutions (and collaboration with commercial partners) could help improve this aspect [33]. However, it seems that the long times required for research, lagging well behind the development time of new applications, could be precisely a possible cause of the lack of app evaluations [49]. Greater support from scientific and academic institutions, with more resources to shorten the development and testing of this type of mHealth solution, could therefore favour the proliferation of apps with evidence-based use.

Privacy and Confidentiality

Health applications (apps) handle users' private information on health status, habits, or preferences. While any personal information must be handled with caution, health information is especially sensitive [50][51][52][53]. The law protects patients' rights to the confidentiality of their health data, but such laws are usually drafted for application within the health system. Applying and monitoring compliance with these rights in mobile health (mHealth) apps is, therefore, a challenge [51].
The Mobile App Rating Scale (MARS) includes items about whether access controls have been introduced in the app, using login or password options, to improve privacy. However, these items are included in a section on technical aspects of the apps; this is merely for descriptive purposes and does not influence the quality score. In this study, six apps (Back pain relief exercises; Regimen-back pain relief; Healo; Bella's Lower Back Pain Exercises; Healure: Physiotherapy Exercise Plans; Curable: Back Pain, Migraine & Chronic Pain Relief) introduced login and password options. This may be essential as a first step to maintaining the confidentiality and security of user data. However, this information does not cover what happens with these data after collection. The storage and possible use of personal data by the app developers, or third-party use, are other fundamental aspects to analyse in the preservation of privacy and confidentiality. However, in the case of apps, the user does not usually have access to this information. A previous study found that most apps do not follow well-known practices and guidelines, threatening user privacy [50]. This circumstance could be improved if developers informed users, through privacy policy statements, about possible uses of their information, but such statements are often absent [54]. In our study, ten apps included a privacy statement (Table 2).

Safety

Safety in the use of applications (apps) that can be used for health purposes is an essential aspect for practitioners. Those activities and exercises that can be performed autonomously at home, and those that should be performed under supervision, must be clearly indicated for adequate prescription and use. Information about possible limits related to the health status of users is also essential. Some of the apps had an initial profile that allows the intervention to be personalised. However, the adequacy of this assessment is also unclear. Some of the apps included disclaimers, stating that the services offered are for informational purposes only and not professional medical advice. Thus, five apps (6 Minute Back Pain Relief; Back Doctor (FREE) Health. Stretch. Workout; Curable: Back Pain, Migraine & Chronic Pain Relief; Yoga Poses for Lower Back Pain Relief; Escuela de Espalda) declared that the information provided by the app is not a substitute for a medical service, recommending a visit to the doctor before starting the programme. Only one app (Healo) declares itself to be secure in accordance with European Union regulation and to be registered as a medical product (Swedish Medical Products Agency). Developers should consider this aspect in future apps. The Mobile App Rating Scale (MARS) evaluates the information provided by the developers and the credibility of the source (conflicts of interest, commercial interest, academic origin, etc.). Additionally, the MARS describes the evidence base of the apps (whether the app has been trialled/tested and verified by evidence published in the scientific literature). None of the included apps was found to be supported by evidence published in the scientific literature. This fact may be related to the commercial origin of all the included apps. In this regard, the long times required for research, with long delays compared to the development time of new applications, have been suggested as a possible cause of the lack of app evaluation [49].

Recommended Applications (Apps)

"Healo" is the only application (app) that declares itself to be registered as a medical product.
With an average quality score of 4.2 (1.23), it is one of the best-rated apps in this study. It had high scores in the engagement, functionality, and aesthetics sections. Additionally, the subjective quality score and the perceived impact of the app on the user are high. However, the information provided in the app description is insufficient. The advanced self-diagnosis tools it contains, the availability of a chat with health care advisers, the adaptation of the programmes to each user, as well as its attractive design, make it a potentially useful mobile health (mHealth) app. "Healthy Spine & Straight Posture-Back exercises/Columna vertebral sana & Postura recta" is the app with the highest objective score on the Mobile App Rating Scale (MARS) [4.57 (0.63)]. It is a well-configured, functional app that allows the user to carry out exercise programmes in a guided way. The app allows the user to self-assess and to track the progress of the programme and the results of the assessments. However, this app requires payment for full access to the contents. "Curable: Back Pain, Migraine & Chronic Pain Relief" is the only one of the included apps that does not offer an exercise-based intervention. Focused on patient education and pain neuroscience, it uses an attractive design based on a narrative in the form of a conversation, with text and audio. With an average objective quality score of 4.38 (0.82), it is one of the best-rated apps. It had high scores in the engagement, functionality, and aesthetics sections. Additionally, this app obtained high scores for the quality of the information and for the subjective perception of its quality and its impact on the user. Study Limitations The main limitation of this review is the exclusion of paid applications (apps). The search was carried out in duplicate in the app stores of Spain and the United Kingdom. Therefore, apps available only in the stores of other countries could not be retrieved. The app market is constantly updating. Although this may be a limitation, this review faithfully shows the state of the market at the time of evaluation. Conclusions This study offers an analysis and description of the applications (apps) available for managing lower back pain (LBP), to help practitioners and users recommend quality mobile health (mHealth) apps adapted to the needs of each patient. The assessed apps generally had good overall quality, especially in terms of functionality and aesthetics. However, some apps must improve aspects such as engagement and information to increase their impact on users and to ensure better security and privacy. "Healo" and "Healthy Spine & Straight Posture-Back exercises/Columna vertebral sana & Postura recta" are mHealth apps with high objective and subjective quality scores that include an exercise-based intervention. "Curable: Back Pain, Migraine & Chronic Pain Relief" is another well-rated app, which includes a pain neuroscience-based intervention programme. Moreover, additional scientific evidence is necessary to support the use of each mHealth app. Despite the extensive volume of apps on the market, this review shows an absence of validity studies, studies of psychometric characteristics, and clinical trials demonstrating the effectiveness of these apps in people with LBP. This indicates the need for studies that allow the use of validated, effective apps supported by evidence. Safety of use and privacy should also be improved in most of the apps.
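As a rough illustration of how MARS quality scores such as those reported above are typically aggregated, here is a minimal sketch; the item scores and per-app means below are hypothetical, not taken from the study data, and the aggregation (section means averaged into an objective score) is the usual MARS convention rather than anything specific to this review.

```python
from statistics import mean, stdev

# Hypothetical 1-5 item scores for one app, grouped by MARS objective section.
sections = {
    "engagement":    [4, 5, 4, 4, 5],
    "functionality": [5, 5, 4, 5],
    "aesthetics":    [4, 5, 4],
    "information":   [3, 2, 3, 3],
}

# Each section score is the mean of its items; the objective quality score
# is then the mean of the four section scores.
section_means = {name: mean(items) for name, items in sections.items()}
objective_quality = mean(section_means.values())
print(section_means)
print(f"objective quality: {objective_quality:.2f}")

# Across apps (or raters), results are reported as mean (SD), e.g. 4.2 (1.23).
per_app_scores = [4.2, 4.57, 4.38]  # hypothetical per-app means
print(f"{mean(per_app_scores):.2f} ({stdev(per_app_scores):.2f})")
```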
Quid Pro Quo Nature of Leadership Trust Formation – A Monadic Study from the Subordinate's Perspective This study investigates the relevance of leader characteristics in influencing leadership trust formation among subordinates through a structural model. Although previous studies on leadership trust have identified such antecedents, specific structural models of leadership trust formation have not hitherto been established. Hypothesised relationships in this study are investigated using structural equation modeling to establish direct and indirect linkages between leader characteristics and leadership trust formation, thereby leading to an established model. The findings reveal that the tenure of the work relationship between a leader and subordinate and the leader's trustworthy behaviour will not directly influence leadership trust. The perceived ability of the leader and the interdependent nature of work directly influence trust formation in leaders. The trust reciprocity variable – the belief that the leader trusts the subordinate – significantly influences leadership trust. This research is cross-sectional in nature and three selected variables are measured through single-item global scales, which calls for further studies that overcome these inherent weaknesses. As this study is monadic, from the perspective of the subordinates, a dyadic study including the leader's perspective is recommended. To generate leadership trust, a leader should not only exhibit trustworthy actions and behaviour but should also ensure that the subordinates believe that he/she trusts them. Therefore, a leader's trustworthy actions will create a reciprocal belief among subordinates and in turn generate trust in leaders. The "give and take" nature of leadership trust formation is established through a structural model, which is the unique contribution of this study to the extant literature. Introduction Can leaders influence and have an impact on the trust levels of their subordinates? This is a challenging question which has been addressed by many earlier researchers through their studies (Lewicki, McAllister, & Bies, 1998; Kramer & Tyler, 1996; Mayer, Davis & Schoorman, 1995). Bromily and Cummings (1992) define trust as the expectation that the other individual will act in good faith towards fulfilling commitments in an honest manner and without taking too much advantage of the trustor. The loss of trust in transactions between leaders and subordinates will lead to poor communication, lack of respect, backbiting, avoidance, inappropriate independence, spiteful conformity and deflection between them. This will be highly detrimental to organisations (Goldsmith, 1991; Josephson, 1993; 1994; Walker & Williams, 1995). In this context, this study aims to identify the antecedents of subordinates' trust in the leader and to establish a causal relationship among them using structural equation modeling. The findings of the study will help leaders to display positive characteristics so as to enhance the trust level of their subordinates.
Literature Review Amongst the various trust variables identified in previous studies (Marquis, 2002; Mayer et al., 1995; Butler, 1991; Gabarro, 1978; Kee & Knox, 1970), leader behaviour and trustworthiness have carried importance in the recent literature (Korsgaard, Brodt & Whitener, 2002; Whitener, Brodt, Korsgaard & Werner, 1998). Much of the recent research has studied the role of trust antecedents or the trustor characteristics that lead to the formation of trust (Kramer, 1999; Mayer et al., 1995; Whitener et al., 1998). Marquis (2002) contends that the application of quantitative analysis in the study of the antecedents of trust is limited, and further opines that the immediate manager's behaviour and its impact on the subordinate's trust should be analysed. Stability of managerial personnel was found to have a strong impact on the trust level of subordinates. A longer tenure in the working relationship between the manager and subordinates helps in building trust (Marquis, 2002; Kramer, 1999; Whitener et al., 1998). On the other hand, contrasting findings have emerged, indicating a lack of relationship between the tenure of the leader with subordinates and their corresponding trust levels (Dirks & Ferrin, 2002; Cardona & Elola, 2003; Kiluchinov, 2011). Though mutually conflicting outcomes are evident, the relevance of the tenure of the leader and subordinates in trust formation warrants consideration for a conclusive outcome and is therefore included in this study: H1: The tenure of the work relationship between the leader and subordinates significantly influences the amount of trust the subordinates have towards their leaders. Recent studies on leadership trust emphasize the relevance of leader behaviour and characteristics in influencing trust formation in followers. Mayer and Davis (1999) maintain that calculated efforts and specific actions from a leader will lead to trust formation. Subordinates trust their leader when the leader is trustworthy (Hosmer, 1995; Mayer et al., 1995). Das and Teng (2004) argue that the subordinate's trust is a function of the behaviour of the leader who wants to be trusted. Webber (2002) proposes that the actions and behaviour of a leader are important in creating trust. Kramer and Tyler (1996) opine that the expectations of subordinates towards their leader should be consistent to generate leader trustworthiness, and that such trustworthiness is based on a history of interactions with the leader. Therefore, it is essential for leaders to be consistent in their behaviour, as trust is history based. In this study, leader characteristics and behaviour are considered relevant and important factors in determining trust formation: H2: The leader characteristics and behaviour will have a significant impact on trust formation in subordinates. The leader characteristics and behaviour are broad in scope. To narrow down this hypothesis, specific variables of leader characteristics and behaviour are identified for this study. The perceived ability of the leader carried significance in previous studies. Mayer et al. (1995) define the ability of the leader as the skills, competencies and characteristics that enable them to influence their subordinates. Competence and ability of leaders in terms of interpersonal skills, technical skills, and expertise as an antecedent to trust in leaders is discussed in earlier studies (Gabarro, 1987; Mishra, 1996; Mullen, 1998). Cardona and Elola (2003) established the role of perceived ability in leadership trust formation.
H2a: The perceived ability of the leader will have a significant impact on leadership trust formation. The extent of interdependence of the task between the leaders and subordinates can affect trust formation in leaders. Mayer et al. (1995) argue that leaders and subordinates depend on each other in various ways to achieve personal and organisational goals. Therefore, the extent of job interdependence has an influence on the trust levels between leaders and subordinates. On the contrary, Cardona and Elola (2003) found that task interdependence has no relation to leadership trust. As task interdependence and leadership behaviour are closely related, they are also included in this study. H2b: The task interdependence between the leader and subordinates will have a significant impact on leadership trust formation. Mayer et al. (1995) identified that the trustworthiness of leaders is widely based on their ability, benevolence and integrity. Here, benevolence is the extent to which the trustor believes the trustee intends to do good acts and show concern. This emphasizes the role of the concern shown by leaders in creating their trustworthiness. The trustee's behaviour can influence the trustor's judgment of trustworthiness (Whitener et al., 1998). Further, Mayer et al. (1995) define integrity as the perception by subordinates that the leader follows a set of acceptable principles. Robinson (1996) emphasizes that apart from the benevolence, integrity and behavioural consistency of leaders, their open communication and sharing of control with subordinates influence their trustworthiness. Managerial trustworthy behaviour is considered to be an important antecedent of trust in leader-subordinate relationships (Cardona & Elola, 2003). H2c: The trustworthy behaviour of the leader will have a significant impact on leadership trust formation. Trust Reciprocity: Subordinates can trust their leader when they feel confident that their obligations are met through their leader. A leader's personal qualities and favourable behaviour towards subordinates' welfare have been found to increase subordinates' trust levels towards their leader. Therefore, subordinates' trust in leaders is considered to be reciprocal and often dependent upon the trustworthy actions of leaders (Gouldner, 1960). Kramer et al. (1996) argue that lower-level employees over-personalise the task-oriented, trust-based cues from their leader, leading to reciprocal disappointments. They advocate that leaders have to manage the governance context of the subordinates to reduce misunderstandings between them. This gives proper grounding to the hypothesis that the subordinates' belief that the leader's actions are favourable towards them will generate reciprocal trust towards their leader. Cardona and Elola (2003) lend their support to this through their finding that trust reciprocity is positively related to leadership trust. This relationship is hypothesised as follows. H3: The trust reciprocity variable – 'belief that the leader trusts the subordinate' – will significantly influence leadership trust formation.
Conceptual Framework To establish the proposed hypotheses, a structural model is conceptualized. It forms the basis for the initial hypothesised model in the SEM analysis. The demographic variable – tenure with the leader – is treated as the primary variable in the model. The leader characteristics, which include the perceived ability of the leader, perceived task interdependence and leader trustworthy behaviour, are treated as secondary independent variables. The trust reciprocity variable is treated as the tertiary independent variable. The model is unique in its hypothesis that the tertiary variable 'belief that the leader trusts subordinates' is an intermediary variable between the other leader characteristics and the dependent variable 'trust in leaders'. This lays the foundation for the quid pro quo nature of leadership trust formation; that is, the subordinates will trust their leaders only when they believe that the leader trusts them (refer to Figure 1). Moreover, this framework identifies the various antecedents of trust in leaders and helps to establish their causality in leadership trust formation. Marquis (2002) used similar variables in her study, based on an adaptation of the model proposed by Mayer (1995). Cardona and Elola (2003) also used similar variables in their study, and their relationship was established using a hierarchical regression model. All these studies lacked evidence for directionality and causality, which paved the way for this proposed structural model. Measurement Design The variables identified in the conceptual framework are measured using multiple items in a questionnaire. The Leader Trustworthy Behaviour (LTB) is measured through a scale developed by Whitener et al. (1998), which encompasses five categories, namely behavioural consistency, sharing and delegation of control, openness of communication, acting with integrity and demonstration of concern. This scale was further developed by Cardona and Elola (2003) following a theory-based deductive approach (Hinkin, 1995). The scale consists of fifteen items, with each category of LTB measured by three items. Following Butler (1991), one negative-response question was added to each of the categories to avoid socially responsive bias. This also reduced the tendency to give acquiescent sets of responses (Cardona & Elola, 2003). The LTB is measured by parceling all fifteen items to form a single measure. This reduces the complexity of the model, with fewer parameters to estimate. This process of partial aggregation can provide a meaningful fit for a complex model with a reasonable sample size (Heidt and Scott, 2007). Cardona and Elola (2003) reported an overall internal consistency alpha value of 0.88, which is an acceptable measure of reliability for any new instrument (Hair et al., 1998). The same instrument is used in this study and is further validated using confirmatory factor analysis. The overall reliability score of the 15-item scale in this study is 0.857, which is acceptable and good (Kline, 1999).
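As an illustration of the reliability figures quoted above, the following is a minimal sketch of how Cronbach's alpha and the parceled LTB score could be computed; the data here are randomly generated stand-ins, not the study's survey responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical data: 321 respondents x 15 LTB items on a 1-5 scale.
rng = np.random.default_rng(0)
ltb_items = rng.integers(1, 6, size=(321, 15)).astype(float)

alpha = cronbach_alpha(ltb_items)   # the paper reports 0.857 for the real data
ltb_parcel = ltb_items.mean(axis=1) # one parceled LTB score per respondent
print(f"alpha = {alpha:.3f}")
```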
The trust is measured using a single direct question, 'I trust my leader', on a five-point Likert scale ranging from strongly disagree to strongly agree. This approach is followed by Cardona and Elola (2003) in their study on trust, and also by several others (Brockner, Siegel, Daly, Tyler, & Martin, 1997; Robinson, 1996). Similarly, the subordinate's perception of the superior's trust in them is measured through a single direct item, 'I believe that my supervisor trusts me'. The task interdependence between the subordinates and the leader is measured through a single-item scale, as in Cardona and Elola (2003). These single-item scales are referred to as 'global scales' or clinical combinations, where respondents cognitively give a single combined global judgement (Ironson, Smith, Brannick, Gibson, & Paul, 1989) on the measures. De Vellis (1991) argues that multiple-item scales assess constructs better than a single-item scale. Rossiter (2008), however, strongly argues that as long as the content validity of the item in a single-item scale is established, no other validities are required. The perceived ability of the leaders is measured through the opinion of the subordinates using a three-item scale, adapted by Cardona and Elola (2003) from Schoorman, Mayer and Davis's (1996) six-item scale. The three-item perceived ability scale has a reported Cronbach's alpha value of 0.79 (Cardona & Elola, 2003). The three items in the perceived ability scale are used as such in the model, without parceling them. In this study, the alpha value of the perceived ability scale is 0.585, which is less than the prescribed minimum of 0.70, indicating weak reliability. This is attributed to the use of fewer items to measure the construct (Field, 2005). Kline (1999) opines that the complexity and diversity in the measurement of psychological constructs sometimes realistically invite Cronbach's alpha values of less than 0.7. Sampling Design The external validity of the study is ascertained through a proper multi-stage sampling plan. In the first stage of sample selection, purposive sampling is used to select three organisations from a basket of manufacturing and service organisations in South India, based on accessibility to the samples. Two service organisations (a placement consultancy firm and an outsourcing service provider) and one manufacturing organisation (an edible oil manufacturing plant) are taken as the sampling base. In the second stage, a census study is followed in each organisation. The sample size is determined on the basis of Tabachnick and Fidell's (2001) recommendation that at least 10 samples are required for every parameter to be estimated to ensure a model fit. Marquis (2002) recommends the inclusion of more female respondents in both lower-level and middle-level management positions, as earlier studies mostly involved male employees. The service organisations have more female employees at the middle and lower levels of management. A total of 420 questionnaires are distributed across the three organisations; the response rate in the outsourcing organisation is 80.21%, with 150 filled-in responses, in the manufacturing organisation 71.67%, with 86 filled-in responses, and in the placement consultancy organisation 75.22%, with 85 filled-in responses. A total of 321 completely filled-in questionnaires are gathered, excluding the incomplete responses.
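As a back-of-the-envelope check of the sample-size rule cited above (the actual parameter count of the model is not stated in the text, so the figure below is only an upper bound):

```python
responses = 321           # completely filled-in questionnaires
rule_per_parameter = 10   # Tabachnick & Fidell (2001): 10 samples per parameter
max_parameters = responses // rule_per_parameter
print(max_parameters)     # -> 32 estimable parameters at most
```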
Validity and Reliability of the Leader Trustworthy Behaviour Scale The construct validity of the Leader Trustworthy Behaviour instrument is established through a second-order confirmatory factor analysis using Amos 6.0. The five factors proposed by Whitener et al. (1998) are tested for construct validity using a second-order factor analysis. The 15 items in the leader trustworthy behaviour scale are fitted with five first-order factors, namely: behavioural consistency, sharing and delegation of control, openness of communication, acting with integrity and demonstration of concern. The higher-order factor Leader Trustworthy Behaviour (LTB) is considered to have direct causal effects on the lower-order factors (Kline, 2005). This hierarchical factor analysis facilitates parceling all 15 items into a single LTB score. The overall Cronbach's alpha reliability measure for all 15 variables is 0.857, which is more than the prescribed minimum of 0.7 (Hair, Anderson, Tatham & Black, 1998). This is equivalent to the internal consistency value of 0.88 reported by Cardona and Elola (2003). The second-order confirmatory factor analysis, with a maximum likelihood solution, shows a good model fit, with a chi-square value of 72.173, an RMSEA value of 0.024 and a CMIN/df value of 1.183. The other model fit measures (GFI, AGFI, TLI, CFI and NFI) are above 0.9, confirming the model fit. Analysis As the hypotheses involve verifying the role of multiple independent variables and their impact on trust in leaders, the dependent variable, SEM analysis is preferred over multiple regression analysis. SEM analysis reveals the causal relationships and the direct and indirect effects of the variables (Hair et al., 1998). The initial hypothesised model is constructed based on the conceptual framework, which includes the demographic variable 'tenure with the leader', considered to be the primary variable. The secondary independent variables are the leader characteristics, namely 'perceived ability', 'task interdependence' and 'Leader Trustworthy Behaviour'. The tertiary variable is the trust reciprocity variable 'belief that the leader trusts the subordinate'. The hypothesised structural model is depicted in Figure 2. Evaluating the Hypothesised Structural Model The hypothesised model is not supported and has a poor model fit, as demonstrated by the fit indices displayed in Table 2. The p value is less than .01, with all other goodness-of-fit indices less than 0.9 and RMSEA greater than .05, indicating a poor structural fit (Hair et al., 1998; Kline, 2005). Similarly, as identified from Table 3, all the hypothesised relationships in the structural model are supported except the relationship between the primary variable 'tenure with the leader' and the dependent variables 'trust in the leader' and 'Leader Trustworthy Behaviour'.
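For readers who want to reproduce this kind of analysis outside Amos, here is a minimal sketch of how a comparable path model could be specified and its fit indices obtained in Python, assuming the semopy package; the variable names and data file are hypothetical, and the specification only approximates the hypothesised model described above (observed variables only, since the 15 LTB items are parceled into a single score).

```python
import pandas as pd
import semopy

# lavaan-style description of the hypothesised paths.
desc = """
ability ~ tenure
interdep ~ tenure
ltb ~ tenure
reciprocity ~ ability + interdep + ltb
trust ~ reciprocity + ability + interdep + ltb
"""

data = pd.read_csv("survey.csv")   # hypothetical data file
model = semopy.Model(desc)
model.fit(data)

print(model.inspect())             # path estimates with standard errors and p-values
print(semopy.calc_stats(model))    # chi-square, CFI, TLI, RMSEA, GFI, AGFI, etc.
```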
Structural Model Re-Specification As the hypothesised model is not supported by the sample data, it is revised in order to improve fit. The model is revised by removing the non-significant paths and adding other paths, based on two rules: (i) the decision should be based on logic (Arbuckle & Wothke, 1999; Facteau, Dobbins, Russell, Ladd & Kudisch, 1995) and (ii) it should yield significant results (Mathieu, Tannenbaum & Salas, 1992). The non-significant path (from tenure with the leader to trust in the leaders) is removed. The other non-significant relationship is retained, based on the logic that tenure with a leader will have an impact on the leader trustworthy behaviour. Further, based on the examination of modification indices, the following three paths are added to the original model, in keeping with the above two rules (Wothke and Arbuckle, 1995). (a) Perceived ability to Leader Trustworthy Behaviour: this path is considered because, when employees believe that their manager is talented and able, the perceived trustworthy behaviour of the leader should be higher. (b) Perceived interdependence to Leader Trustworthy Behaviour: when employees feel that the successful performance of the job requires cooperation from their leaders, it will have an impact on the Leader Trustworthy Behaviour. (c) Perceived interdependence to perceived ability: the task interdependence between the leader and the subordinates will create a positive opinion of the leader's ability. The re-specified model A shows a dramatic improvement in fit, with a large drop in χ² from 205.538 in the original model to 20.482 in the re-specified model. All the fit indices (GFI, AGFI, TLI, CFI and NFI) are above the minimum standard of 0.9 and the RMSEA value is 0.052 (refer to Table 2), which is less than the prescribed maximum value of 0.08 (Hair et al., 1998; Kline, 2005). As the p value is still significant (p < .05), which indicates that the actual and predicted input matrices are statistically different (Hair et al., 1998), the model is considered to lack adequate fit. Here, the two paths from perceived interdependence to Leader Trustworthy Behaviour and from LTB to 'trust in the leader' are dropped, as they are not significant (refer to Table 3). The re-specified model B, as displayed in Figure 3, demonstrated a good model fit to the sample data (χ² p > .05), with statistically significant path coefficients and a further decrease in the χ² value (Hair et al., 1998). All the fit indices, as seen in Table 2, show a good model fit (GFI, AGFI, TLI, CFI and NFI with values above 0.9) and an RMSEA value of less than 0.05 (Hair et al., 1998; Kline, 2005). Also, all the relationships between the variables are proved to be significant (refer to Table 3). Figure 3. Re-specified model B. Discussion and Theoretical Implications The final structural model clearly identifies the direct and indirect relationships between the leader characteristics and leadership trust formation. Moreover, the relevance of the trust reciprocity variable as an antecedent of leadership trust formation is empirically established through this study.
It is evident from the study that the tenure of the work relationship between the leaders and subordinates does not directly influence the subordinate's trust in leaders (refer to Figure 3). Therefore, H1 is not supported. Dirks and Ferrin (2002) also reported a similar lack of relationship between work tenure and trust formation. On the contrary, Lewicki and Bunker (1996) maintain that trust in leaders develops over a course of time. This study, through its structural model, reveals that the tenure of the work relationship cannot be a direct cause of the subordinate's trust in leaders. However, tenure of work strongly influences the leader characteristics, namely perceived interdependence of work (β = 0.115, p < .05), perceived ability of leaders (β = 0.197, p < .01), leader trustworthy behaviour (β = -0.099, p < .05) and the trust reciprocity variable 'belief that the leader trusts the subordinates' (β = 0.120, p < .05). These leader characteristics and trust reciprocity in turn influence leadership trust formation, thereby validating their mediating role in the indirect relationship between work tenure and leadership trust. The final structural model (refer to Figure 3) clearly validates the hypothesis that the leader characteristics and attributes, namely task interdependence (β = 0.146, p < .001) and perceived ability of the leaders (β = 0.306, p < .001), have a direct impact on the subordinate's trust. Therefore, hypotheses H2a and H2b are supported. However, the Leader Trustworthy Behaviour (LTB) has no direct effect on the subordinate's trust. So, H2c is not supported. This is a significant finding revealed through this study. On the contrary, Cardona and Elola (2003) reported that LTB has a significant impact on the subordinate's trust levels. But this structural model clearly exemplifies that a leader's trustworthy behaviour cannot directly influence trust formation among subordinates. The leader trustworthy behaviour has an indirect effect on the subordinate's trust, which is strongly mediated by the trust reciprocity variable. The relevance of the trust reciprocity variable in leadership trust formation is strongly established through the structural model. Therefore, H3 is supported. Amongst all the relationships in the model (as evident in Table 3), trust reciprocity has a significant impact on leadership trust (β = 0.441, p < .001). This emphasizes that the subordinate's trust in the leaders is primarily dependent upon the ability and characteristics of the leaders in creating an impression among their subordinates that they believe in them. Therefore, leadership trust is mostly reciprocal and available on a quid pro quo basis from the subordinates. Boyett (2006) opines that trust formation among subordinates is totally a 'give and take' game and is purely reciprocal upon its initiation from the leader. The finding from this study further validates the reciprocity of trust as identified in previous studies (Dirks & Ferrin, 2002; Konovsky & Pugh, 1994; Organ, 1990).
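To make the mediation claim concrete, the indirect effect of tenure on leadership trust along each reported two-step path can be computed as the product of the standardized path coefficients. The sketch below uses only the betas quoted above; chains through LTB are omitted because the LTB-to-reciprocity coefficient is not given in the text.

```python
# Standardized path coefficients reported in the text.
paths = {
    ("tenure", "interdep"): 0.115,
    ("tenure", "ability"): 0.197,
    ("tenure", "reciprocity"): 0.120,
    ("interdep", "trust"): 0.146,
    ("ability", "trust"): 0.306,
    ("reciprocity", "trust"): 0.441,
}

for mediator in ("interdep", "ability", "reciprocity"):
    effect = paths[("tenure", mediator)] * paths[(mediator, "trust")]
    print(f"tenure -> {mediator} -> trust: {effect:.3f}")
# interdep: 0.115 * 0.146 ~= 0.017; ability: 0.197 * 0.306 ~= 0.060;
# reciprocity: 0.120 * 0.441 ~= 0.053 -- small but nonzero indirect influence.
```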
Managerial Implications The evidence from this study suggests that a manager cannot build trust among subordinates solely through developing a longer working relationship. However, such work tenure can indirectly increase the levels of managerial trust by influencing the employees' perception that their work is interdependent with their manager's work and that their manager is competent. Therefore, there is no guarantee that a manager who works for a longer tenure with the subordinates can necessarily build trust in them directly. But such tenure can build the subordinates' perception of the manager's competence and interdependence, which in turn can mediate trust formation between them. Further evidence from this study establishes that a manager can instill trust among subordinates by displaying task-based ability and by establishing the interdependent nature of the tasks between them. Interestingly, however, trustworthy behaviours from a manager, such as consistent behaviour, sharing, delegation, open communication and concern, cannot guarantee direct trust formation in subordinates. Such trustworthy behaviour should induce the subordinates to believe that the manager trusts them. This resultant reciprocal belief can substantially influence trust formation among subordinates. Therefore, managers should initially make their subordinates believe that they trust them, through their ability, task interdependence and trustworthy actions, which in turn will lead to reciprocal trust formation among the subordinates. Limitations and Future Directions Though earlier researchers used structural equation models to establish causality, such models need to be interpreted with a greater degree of caution. Mere model fit may not establish causality among variables (Pearl, 2011). Therefore, model testing using experimental methods, without giving any room for measurement errors and other noise disturbances, is recommended for future studies. The use of single-item global scales has drawn much criticism. As the constructs trust reciprocity, leadership trust and task interdependence are measured using single-item scales, a more robust multi-item measurement scale is recommended. Here, leadership trust is measured only from the perspective of subordinates and the leader's point of view is not considered, which necessitates a dyadic study of leader-subordinate trust to validate the findings of this research. Further, the external validity of this causal study cannot be safely ascertained, given the abovementioned weaknesses in the model (Shadish, Cook & Campbell, 2002). Similar studies are required elsewhere to establish the reciprocity of leadership trust development and the role of the antecedent 'belief of subordinates that the leader trusts them' in creating reciprocal trust in leaders.
Conclusion This study reveals the various antecedents of trust in leaders and emphasizes clearly that the tenure of the work relationship between the leader and subordinates and the leader's trustworthy behaviour generate reciprocal trust only indirectly. To develop trust in their leaders, the subordinates first have to believe that the leader trusts them. Leaders have to understand this quid pro quo nature of trust creation and exhibit trustworthy actions and behaviour. These actions of a leader will create a strong belief among the subordinates that the leader trusts them. This in turn will generate trust in leaders. Moreover, the competence of the leaders and the perceived interdependence of their jobs will also generate higher trust levels among subordinates towards their leaders. Therefore, managers have to understand this indirect "Give and Take" nature of trust creation and have to display trustworthy actions and competence over a period of time to increase the trust levels of subordinates. Figure 1. Conceptual framework. Figure 2. Initial hypothesised model. Table 2. Comparison of structural equation models. Table 3. Significance of the model parameters.
Capillary Refill—The Key to Assessing Dermal Capillary Capacity and Pathology in Optical Coherence Tomography Angiography Background/Objectives: Standard optical coherence tomography angiography (OCTA) has been limited to imaging blood vessels actively undergoing perfusion, providing a temporary picture of surface microvasculature. Capillary perfusion in the skin is dynamic and changes in response to the surrounding tissue's respiratory, nutritional, and thermoregulatory needs. Hence, OCTA often represents a given perfusion state without depicting the actual extent of the vascular network. Here we present a method for obtaining a more accurate anatomic representation of the surface capillary network in human skin using OCTA, along with proposing a new parameter, the Relative Capillary Capacity (RCC), a quantifiable proxy for assessing capillary dilation potential and permeability. Methods: OCTA images were captured at baseline and after compression of the skin. Baseline images display ambient capillary perfusion, while images taken upon capillary refill display the network of existing capillaries at full capacity. An optimization-based automated vessel segmentation method was used to automatically analyze and compare OCTA image sequences obtained from two volunteers. RCC was then compared with visual impressions of capillary viability. Results: Our OCTA imaging sequence provides a method for mapping cutaneous capillary networks independent of ambient perfusion.
Differences between baseline and refill images clearly demonstrate the shortcomings of standard OCTA imaging and produce the RCC biometric as a quantifiable proxy for assessing capillary dilation potential and permeability. Conclusion: Future dermatological OCTA diagnostic studies should implement the Capillary Refill Methods over standard imaging techniques and further explore the relevance of RCC to differential diagnosis and dermatopathology. INTRODUCTION Optical coherence tomography angiography (OCTA) has emerged as a promising tool for the differential diagnosis of dermatological conditions affecting the microvasculature [1-3]. OCTA provides a means of imaging capillary perfusion by contrasting the motion of scattering blood components with static structural tissue [4,5]. OCTA overcomes limitations of other diagnostic tools in clinical dermatology by generating non-invasive, high-resolution maps of perfusion in vivo [1]. Previous studies validated the application of OCTA in dermatology against established clinical techniques and investigated its sensitivity to induced physiological blood flow changes [6,7]. OCTA images are often subjected to automatic or semi-automatic methods for vessel segmentation and analysis of microvascular morphology [2,8-10]. Segmented capillary maps are then further characterized by quantitative metrics such as vessel area, number, tortuosity, and the complexity of capillary network architecture. These metrics provide the basis for comparing sets of OCTA images to identify unique biomarkers of cutaneous dermatological pathologies such as atopic dermatitis [2], melanoma [11], hemangiomas [12], and other inflammatory skin conditions [3,13,14], and to evaluate treatment response [15] or assist in treatment planning [12]. The human cutaneous capillary network is embedded in the papillary dermis beneath the dermal-epidermal junction. Capillary perfusion is the primary channel for nutrient and gas exchange in the skin and changes dynamically to meet cellular requirements [16]. However, thermoregulation is the major determinant of capillary perfusion dynamics in the skin, enabling the human body to rapidly adjust to changes in environmental temperature [17]. Various factors, including aging, smoking, and certain pathological states, have been associated with morphological changes in the cutaneous capillary bed, including stiffening of the vascular wall, decreased permeability, and variation in vessel network density [18]. However, measurements obtained by standard OCTA imaging methods are limited to capturing perfusion at a given point in time, providing an incomplete representation of the underlying anatomy. Distinct from retinal or cerebral capillaries, perfusion in the skin is primarily dependent on ambient and body temperatures, and it is impossible to derive a complete map of cutaneous microvasculature from a standard scan. Standard OCTA images can vary dramatically between scans of the same area at different time points, making the values obtained from these representations of limited use and rarely reproducible. Moreover, the maximum capacity of the capillary bed cannot be assessed. These limitations can be overcome by applying pressure to the skin and inhibiting perfusion of the capillaries. When pressure is released after an extended length of compression, blood rushes into the capillaries and the skin visibly reddens within seconds [15].
To correct for this shortcoming in standard OCTA imaging, we propose acquiring a sequence of images before and after compression. By deriving the perfusion of the vascular network in both OCTA images using an automatic vessel segmentation algorithm, for example, the optimization-based vessel segmentation pipeline (OBVS) [10], and quantitative metrics, it is possible to acquire a quantitative representation of the existing capillary network and to assess the dilation capacity of the vessel walls. Using the composite data from the two samples, we propose a novel criterion for comparing the functionality of microvasculature. The relative capillary capacity (RCC) quantifies the relative dilation of vessels at baseline compared with maximum capacity, which serves as a proxy for vascular efficiency in gas exchange and nutrient delivery. This proposed metric not only maps the capillary bed, but also provides valuable insight into the character of the imaged vessels. Specimen/Subjects OCTA imaging sequences were performed on the inner forearms of two healthy volunteers. Subject 1 is a 22-year-old Caucasian female with no history of smoking, and Subject 2 is a 66-year-old Caucasian female with a history of smoking half a pack of cigarettes per day for 10 years before discontinuing approximately 15 years prior to imaging. All procedures were approved by the Institutional Review Board (IRB) of Massachusetts General Hospital (Protocol No.: 2018P001115) and informed consent was obtained from both subjects. Image Acquisition OCTA images were acquired with the commercially available spectral-domain OCT scanner TELESTO II (Thorlabs, Newton, NJ). The device operates at a central wavelength of 1,300 nm with an axial resolution of 4.2 µm in tissue. Images were acquired through a lens with a lateral resolution of 13 µm (LSM03; Thorlabs) and were two times oversampled, with a voxel size of 6.5 × 6.5 × 4.2 µm. The field-of-view (FOV) was 6 × 6 × 1.5 mm³ (length × width × depth). Angiographic OCT volumes were acquired with 2× slow-axis averaging, using an A-scan rate of 76 kHz. Image acquisition took approximately 30 seconds, with another 30 seconds for saving data onto the hard drive. During imaging, the subject's forearm was positioned on a surgical arm pillow and the imaged skin area was immobilized using a z-spacer mounted on the scanner (IMM3; Thorlabs). Glycerol was used as an immersion fluid for the spacer head. The z-spacer and glycerol were kept at room temperature at all times. Sequential OCTA images were acquired from each subject. For the baseline measurement, the z-spacer was positioned on the skin to avoid motion. Great care was taken not to include air bubbles inside the immersion media or to suppress any blood flow. After completing the baseline scan, the z-spacer was used to compress the skin to the point of visible blanching for 2 minutes. The capillary refill measurements were obtained immediately after the release of compression. The sequence of acquired images is shown schematically in Fig. 1. Angiographic Imaging Angiographic volumes were generated using the speckle-variance algorithm (svOCT) proposed by Barton et al. [4,19] with two times slow-axis averaging. For further analysis, angiographic volumes were cropped to show capillary layers between 80 and 420 µm and condensed to a two-dimensional representation by a maximum-intensity projection. Vessel Segmentation Technique The automatic characterization of the capillary network in the OCTA image requires a reliable vessel segmentation technique.
This study employs the optimization-based vessel segmentation (OBVS) method described in our previous work [10]. Briefly, the optimal combination of image processing methods and parameters was found to best approximate the result of manual labeling by an expert. The segmentation pipeline consists of denoising, contrast enhancement, binarizing, refining, and skeletonizing the OCTA images. Vascular Characterization Metrics Segmented vascular maps were characterized by quantitative metrics that have been utilized in previous OCTA work [5,10,22]. The branchpoint index (BI) of a skeleton map was determined as the number of branchpoints divided by the image size. A branchpoint was identified when the 3 × 3 neighborhood around a skeleton pixel contained at least three more skeleton pixels. The RCC was obtained by combining metrics from the baseline and refill images of the imaging sequence shown in Fig. 1. RCC represents the dilation potential of the cutaneous capillaries, which varies with the stiffness of the capillary wall and is correlated with vessel permeability and capacity for gas exchange and nutrient delivery [18]. RCC is typically expressed as a percentage. RESULTS OCTA sequences were obtained from two volunteers, with images shown in Figs. 3 and 4. The vascular networks differed substantially between subjects under each experimental condition. In the baseline images, a homogeneously distributed network of capillaries was visible. Upon capillary refill, the vessels of the capillary network appear more detailed and greater in number compared with the baseline images, with many new vessels and branchpoints visible. Additionally, pairs of aligned vascular structures that appeared as a single vessel in the baseline image could be resolved. Comparing both subjects, the baseline images captured a similar number of capillaries (VLD score 0.053 vs. 0.045); however, Subject 1's vessels appear slightly smaller in diameter than those of Subject 2 (average diameter given by the ratio VAD/VLD: 5.2 vs. 5.7). Upon capillary refill, while the OCTA images of both subjects showed a greater number of capillaries than at baseline, Subject 1 showed greater dilation compared with the baseline images (RCC index 81.1% vs. 25.3%). Quantitative metrics were derived from the vessel maps obtained by the OBVS method and confirm the qualitative observations from the image sequences. For both subjects, the maximum values of the three metrics (VAD, VLD, and BI) correspond to the capillary refill images. The RCC metric quantifies vessel dilation by normalizing measurements in the maximum capillary capacity image by the respective baseline values (Equation (3)). In this study, the RCC metric clearly differentiates between Subject 1, with an RCC index of 81.1%, and Subject 2, with a score of 25.3%. DISCUSSION OCTA imaging is an established technique for visualizing capillary perfusion in the human retina and brain. Recently, these methods of OCTA imaging have been applied to the study of cutaneous microvasculature without considering the unique challenges posed by human skin. Fig. 3. Sequence of Subject 1: optical coherence tomography angiography sequence acquired at the forearm of Subject 1 (a 22-year-old Caucasian female with no history of smoking), shown as a maximum-intensity projection of the papillary dermis. Image size 6 × 6 mm, depth range 80-420 μm, resolution 6.5 μm. OCTA imaging is reliant on the active perfusion of capillaries, which is a dynamic, multifactorial process.
It has been demonstrated that OCTA-derived metrics differ by anatomical site and are sensitive to moderate physical stimuli [7,8] or even just the positioning of the specimen [6]. Still, in previous OCTA studies, differences between single baseline images have been used to distinguish between skin diseases in vivo [2,3,11-13]; however, the exact mechanism behind these differences remains controversial. Standard OCTA imaging has been limited to baseline conditions that are a poor approximation of the actual capillary bed anatomy. While this limitation also applies to retinal and neurological imaging, skin is unique in that the metabolic needs of the surrounding tissue play only a minor role in determining capillary circulation [17]. Pappano established that changes in skin blood flow are primarily due to the dynamics of ambient and internal body temperatures; however, previous OCTA studies have not differentiated between thermoregulatory, inflammatory, and metabolic capillary responses, introducing confounding factors into their experiments [2,3,8,11-14]. Findings of such studies demand sensible interpretation, especially in the context of melanoma diagnosis [11] and when used for treatment guidance [12]. Furthermore, standard OCTA imaging does not provide any information about the functional characteristics of the imaged vessels, making it difficult to provide valid interpretations of experimental results. To address this shortcoming, we propose the capillary refill method as a means of controlling for the thermoregulatory and inflammatory contributions to skin microvasculature. By maximizing capillary perfusion and diameter, it becomes possible to visualize the patient's actual anatomy. Furthermore, we propose the use of a two-image OCTA sequence to produce the RCC. Quantifying the dilation potential of the capillary bed serves as a valuable proxy for assessing vascular wall stiffening and oxygenation capacity [18]. The RCC metric can be used to suggest reasons for the differences in VAD and VLD observed between the two study subjects. Subject 2's capillaries are of larger caliber than Subject 1's in the baseline images. Since Subject 1 is a 22-year-old female with no history of smoking and Subject 2 is a 66-year-old female with a long history of smoking, this result is surprising. Both age and smoking status are associated with ischemia, poor gas exchange, and stiffening of the vascular wall [18], so one would expect the relationship between the subjects' vessel calibers to be reversed. Interpreting these results in terms of the RCC and the two-image OCTA sequence, a more coherent explanation emerges. While the older smoker's baseline capillary vessel area measurements (VAD) are greater than those seen in the young non-smoker, the young non-smoker experiences a greater increase in vessel area during capillary refill. This observation is quantified by the RCC metric, which takes into account vessel dilation potential by calibrating the mean capillary diameter at baseline against that at maximum capacity (Equation (3)). Differences in baseline measurements can now be explained by looking at physiological disparities between the capillary networks. We posit that Subject 2's low score on the capillary capacity index is associated with her blood vessels' impaired dilation potential and permeability to gas exchange.
This RCC theory is consistent with the lack of observed differences in the VLD and BI metrics across subjects; however, it is impossible to determine statistical significance with a single subject in each experimental group. Further investigation is necessary to make this claim generalizable and to promote novel diagnostic techniques in clinical dermatology. While our OCTA method shows great promise, the technique requires scrutiny. Although the reperfusion OCTA images reveal a more extensive capillary network than standard OCTA images, it is unclear how closely the image approximates the physical reality. Fig. 4. Sequence of Subject 2: optical coherence tomography angiography sequence acquired at the forearm of Subject 2 (a 66-year-old Caucasian female with a history of smoking), shown as a maximum-intensity projection of the papillary dermis. Image size 6 × 6 mm, depth range 80-420 μm, resolution 6.5 μm. Further investigation is necessary to validate the correlation between maximum capillary capacity OCTA imaging and histological evaluation of vascular endothelial cells. On the technological side, during the acquisition of the OCTA sequence, some vessels that appeared as singular structures in baseline imaging appeared as multiple structures upon capillary refill. Current OCTA analysis cannot determine whether the baseline image represents a single structure or multiple perfused structures, because of the imaging system's limited resolution. To counteract the effect of VLD doubling with the increased ability to differentiate between paired structures in the capillary refill images, the OBVS segmentation method does not distinguish between paired structures, to preserve the integrity of the RCC metric. Additionally, the OBVS segmentation method employed in this study was developed and optimized for OCTA imaging of murine skin, and it has yet to be optimized for human skin imaging [10]. Due to the prohibitive cost of obtaining manual delineation of the images, the quality of the segmentation results in this study was not quantified. CONCLUSION The OCTA sequential imaging method proposed in this paper introduces a new biomarker for assessing the relative capacity of capillaries and generates a more accurate representation of the existing vascular network than standard OCTA methods. The vascular architecture represented by the capillary refill images is believed to provide a complete picture of the microvasculature; however, cross-validation of OCTA refill images with histology or other imaging modalities is necessary. The relationship between baseline and reperfusion images provides novel information about the percentage of the capillary network perfused under ambient conditions, along with valuable data on vessel dilation potential. The proposed RCC metric quantifies the observed functionality of the imaged vessels in a single parameter. This metric is useful in that it provides information beyond simply documenting the architecture of a capillary network, by suggesting differences in the stiffening of the vascular wall, vessel permeability, and capacity for gas exchange [18]. Further studies are necessary to assess the discriminative value of the RCC metric and will require controlled experimental groups with a sample size large enough to achieve statistical significance.
We believe the systematic implementation of our method of OCTA imaging in clinical settings will be invaluable in the differential diagnosis of a wide array of dermatological and vascular conditions by providing new pathological biomarkers, along with the tools for monitoring therapies targeted at manipulating surface microvasculature. Potential technological improvements to OCTA imaging include benchmarking OBVS against deep learning vessel segmentation, which may result in enhanced segmentation quality. Additionally, the use of faster OCTA imaging systems could allow for real-time visualization of reperfusion, generating novel insights into the mechanisms of vascular physiology.
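As a rough illustration of the vascular metrics described in this paper, the sketch below computes VAD, VLD, and BI from a segmented binary vessel mask and derives an RCC-like ratio from a baseline/refill pair. It assumes numpy, scipy, and scikit-image; the masks are random stand-ins, and, since Equation (3) is not reproduced in this text, the exact form of the RCC shown here is an assumption based on the stated idea of normalizing refill measurements by baseline values.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def vessel_metrics(mask: np.ndarray):
    """mask: 2-D boolean vessel map produced by a segmentation pipeline such as OBVS."""
    skel = skeletonize(mask)
    vad = mask.mean()   # vessel area density: vessel pixels / image size
    vld = skel.mean()   # vessel length density: skeleton pixels / image size
    # Branchpoint: a skeleton pixel whose 3x3 neighbourhood contains
    # at least three further skeleton pixels (excluding itself).
    neighbours = convolve(skel.astype(int), np.ones((3, 3), int),
                          mode="constant") - skel
    bi = (skel & (neighbours >= 3)).mean()  # branchpoints / image size
    return vad, vld, bi

# Random stand-ins for real baseline and refill segmentations of the same site.
rng = np.random.default_rng(1)
base_mask = rng.random((512, 512)) < 0.05
refill_mask = rng.random((512, 512)) < 0.08

vad_b, vld_b, _ = vessel_metrics(base_mask)
vad_r, vld_r, _ = vessel_metrics(refill_mask)
d_base, d_refill = vad_b / vld_b, vad_r / vld_r   # mean diameters (VAD/VLD, per the text)
rcc = (d_refill - d_base) / d_base * 100          # assumed form of Equation (3), in percent
print(f"RCC ~ {rcc:.1f}%")
```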
Uniqueness Results for Weak Leray-Hopf Solutions of the Navier-Stokes System with Initial Values in Critical Spaces The main subject of this paper is the establishment of certain classes of initial data which grant short-time uniqueness of the associated weak Leray-Hopf solutions of the three-dimensional Navier-Stokes equations. In particular, our main theorem states that this holds for any solenoidal initial data, with finite $L_2(\mathbb{R}^3)$ norm, that also belongs to certain subsets of $VMO^{-1}(\mathbb{R}^3)$. As a corollary of this, we obtain the same conclusion for any solenoidal $u_{0}$ belonging to $L_{2}(\mathbb{R}^3)\cap \mathbb{\dot{B}}^{-1+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3)$, for any $3<p<\infty$. Here, $\mathbb{\dot{B}}^{-1+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3)$ denotes the closure of test functions in the critical Besov space ${\dot{B}}^{-1+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3)$. Our results rely on the establishment of certain continuity properties near the initial time, for weak Leray-Hopf solutions of the Navier-Stokes equations with these classes of initial data. Such properties seem to be of independent interest. Consequently, we are also able to show that if a weak Leray-Hopf solution $u$ satisfies certain extensions of the Prodi-Serrin condition on $\mathbb{R}^3 \times ]0,T[$, then it is unique on $\mathbb{R}^3 \times ]0,T[$ amongst all other weak Leray-Hopf solutions with the same initial value. In particular, we show this is the case if $u\in L^{q,s}(0,T; L^{p,s}(\mathbb{R}^3))$ or if its $L^{q,\infty}(0,T; L^{p,\infty}(\mathbb{R}^3))$ norm is sufficiently small, where $3<p<\infty$, $1\leq s<\infty$ and $3/p+2/q=1$. Introduction This paper concerns the Cauchy problem for the Navier-Stokes system in the space-time domain $Q_\infty := \mathbb{R}^3 \times ]0,\infty[$, for a vector-valued function $v = (v_1, v_2, v_3) = (v_i)$ and a scalar function $p$, satisfying the equations $$\partial_t v + v\cdot\nabla v - \Delta v + \nabla p = 0, \qquad \operatorname{div} v = 0 \qquad \text{in } Q_\infty, \qquad (1.1)$$ with the initial condition $$v(\cdot, 0) = u_0(\cdot). \qquad (1.2)$$ This paper will concern a certain class of solutions to (1.1)-(1.2), which we will call weak Leray-Hopf solutions. Before defining this class, we introduce some necessary notation. Let $C^\infty_{0,0}(\mathbb{R}^3)$ denote the space of divergence-free test functions, and let $J(\mathbb{R}^3)$ be the closure of $C^\infty_{0,0}(\mathbb{R}^3)$ with respect to the $L_2(\mathbb{R}^3)$ norm. Moreover, $\mathring{J}^1_2(\mathbb{R}^3)$ is defined as the completion of the space $C^\infty_{0,0}(\mathbb{R}^3)$ with respect to the $L_2$-norm and the Dirichlet integral $\left(\int_{\mathbb{R}^3}|\nabla v|^2\,dx\right)^{\frac{1}{2}}$. Let us now define the notion of 'weak Leray-Hopf solutions' to the Navier-Stokes system: $v \in L_\infty(0,\infty; J(\mathbb{R}^3)) \cap L_2(0,T; \mathring{J}^1_2(\mathbb{R}^3))$ for every finite $T > 0$, and $v$ satisfies the system in the sense of distributions, $$\int_{Q_\infty} \big(v \cdot \partial_t w + v \otimes v : \nabla w - \nabla v : \nabla w\big)\,dx\,dt = 0 \qquad (1.6)$$ for any divergence-free test function $w$. The initial condition is satisfied strongly in the $L_2(\mathbb{R}^3)$ sense: $$\lim_{t \to 0^+} \|v(\cdot,t) - u_0\|_{L_2(\mathbb{R}^3)} = 0.$$ Finally, $v$ satisfies the energy inequality: $$\|v(\cdot,t)\|^2_{L_2(\mathbb{R}^3)} + 2\int_0^t \int_{\mathbb{R}^3} |\nabla v|^2\,dx\,ds \leq \|u_0\|^2_{L_2(\mathbb{R}^3)} \qquad \text{for all } t \geq 0.$$ The corresponding global in time existence result, proven in [26], is as follows. Under certain restrictions of the initial data, it is known since [26] that 1) (Regularity) implies 2) (Uniqueness).
However, this implication may not be valid for more general classes of initial data. Indeed, certain unverified non-uniqueness scenarios, for weak Leray-Hopf solutions, have recently been suggested in [21]. In the scenario suggested there, the non-unique solutions are regular. This paper is concerned with the following very natural question arising from 2) (Uniqueness). (Q) Which $Z \subset S'(\mathbb{R}^3)$ are such that $u_0 \in J(\mathbb{R}^3)\cap Z$ implies uniqueness of the associated weak Leray-Hopf solutions on some time interval? There are a vast number of papers related to this question. We now give some incomplete references, which are directly concerned with this question and closely related to this paper. It was proven in [26] that for $Z = \dot{J}^{1}_{2}(\mathbb{R}^3)$ and $Z = L^p(\mathbb{R}^3)$ ($3 < p \leq \infty$), we have short time uniqueness in the slightly narrower class of 'turbulent solutions'. The same conclusion was shown to hold in [12] for the weak Leray-Hopf class. It was later shown in [22] that $Z = L^3(\mathbb{R}^3)$ was sufficient for short time uniqueness of weak Leray-Hopf solutions. At the start of the 21st century, [17] provided a positive answer to question (Q) for the homogeneous Besov spaces with $p, q < \infty$ and $3/p + 2/q \geq 1$. An incomplete selection of further results in this direction are [7], [9]-[10], [18] and [27], for example. A more complete history regarding question (Q) can be found in [18]. An approach (which we will refer to as approach 1) to determining $Z$ such that (Q) is true was first used for the Navier-Stokes equations in [26] and is frequently found in the literature. The principal aim of approach 1 is to show, for certain $Z$ and $u_0 \in Z \cap J(\mathbb{R}^3)$, that one can construct a weak Leray-Hopf solution $V(u_0)$ belonging to a path space $X_T$ having certain features. Specifically, $X_T$ has the property that any weak Leray-Hopf solution (with arbitrary initial data $u_0 \in J(\mathbb{R}^3)$) in $X_T$ is unique amongst all weak Leray-Hopf solutions with the same initial data. A crucial step in approach 1 is the establishment of appropriate estimates of a trilinear form $F$, recalled below. As mentioned in [18], these estimates of this trilinear form typically play two roles. The first is to provide rigorous justification of the energy inequality for $w = u - V(u_0)$, where $u$ is another weak Leray-Hopf solution with the same initial data. The second is to allow the applicability of Gronwall's lemma to infer $w \equiv 0$ on $Q_T$. The estimates of the trilinear form needed for approach 1 appear to be restrictive, with regards to the spaces $Z$ and $X_T$ that can be considered. Consequently, (Q) has remained open for the Besov spaces $\dot{B}^{-1+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3)$ with $3 < p < \infty$. The obstacle to using approach 1 for this case has been explicitly noted in [17] and [18]: 'It does not seem possible to improve on the continuity (of the trilinear term) without using in a much deeper way that not only u and V (u 0 ) are in the Leray class L but also solutions of the equation.' ([17]) For analogous Besov spaces on bounded domains, question (Q) has also been considered recently in [13]-[15]. There, a restricted version of (Q) is shown to hold. Namely, the authors prove uniqueness within the subclass of 'well-chosen weak solutions', describing weak Leray-Hopf solutions constructed by concrete approximation procedures. Furthermore, in [13]-[15] it is explicitly mentioned that a complete answer to (Q) for these cases is 'out of reach'. In this paper, we provide a positive answer to (Q) for $Z = \mathbb{\dot{B}}^{-1+\frac{3}{p}}_{p,\infty}(\mathbb{R}^3)$, $3 < p < \infty$. In fact this is a corollary of our main theorem, which provides a positive answer to (Q) for other classes of $Z$.
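The display defining the trilinear form F was lost in extraction. A plausible reconstruction, consistent with the energy arguments described above (the exact arguments and domain are an assumption, not confirmed by the source), is:

\[
F(u,v,w) := \int_0^T\!\!\int_{\mathbb{R}^3} (u\otimes v) : \nabla w \;dx\,dt .
\]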
From this point onwards, for $p_0 > 3$, we will denote certain classes of initial data; moreover, for $2 < \alpha \leq 3$ and $p_1 > \alpha$, we define corresponding classes. Now, we state the main theorem of this paper. Theorem 1.3. Consider a weak Leray-Hopf solution $u$ to the Navier-Stokes system on $Q_\infty$, with initial data $u_0$ in this class. Then, there exists a $T(u_0) > 0$ such that all weak Leray-Hopf solutions on $Q_\infty$, with initial data $u_0$, coincide with $u$ on $Q_{T(u_0)} := \mathbb{R}^3 \times ]0, T(u_0)[$. Let us remark that previous results of this type are given in [7] and [9] respectively, with the additional assumption that $u_0$ belongs to a nonhomogeneous Sobolev space $H^s(\mathbb{R}^3)$, with $s > 0$. By comparison, the assumptions of Theorem 1.3 are weaker. This follows because of the following embeddings: for $s > 0$ there exists $2 < \alpha \leq 3$ such that the embeddings hold for $p \geq \alpha$. Consider a weak Leray-Hopf solution $u$ to the Navier-Stokes system on $Q_\infty$, with such initial data. Then, there exists a $T(u_0) > 0$ such that all weak Leray-Hopf solutions on $Q_\infty$, with initial data $u_0$, coincide with $u$ on $Q_{T(u_0)} := \mathbb{R}^3 \times ]0, T(u_0)[$. Our main tool to prove Theorem 1.3 is the new observation that weak Leray-Hopf solutions, with this class of initial data, have stronger continuity properties near $t = 0$ than general members of the energy class L. In [8], a similar property was obtained for the mild solution with initial data in $\dot{B}^{-\frac{1}{4}}_{4,4}(\mathbb{R}^3)$. Recently, in the case of 'global weak $L_3$ solutions' with $L_3(\mathbb{R}^3)$ initial data, properties of this type were established in [33]. See also [2] for the case of $L^{3,\infty}$ initial data, in the context of 'global weak $L^{3,\infty}(\mathbb{R}^3)$ solutions'. Let us mention that throughout this paper, $e^{t\Delta}u_0 := \Gamma(\cdot,t)\star u_0$, where $\Gamma(x,t)$ is the kernel for the heat flow in $\mathbb{R}^3$. Here is our main lemma. Lemma 1.5. Take $\alpha$ and $p$ as in Theorem 1.3, and assume that $u_0$ belongs to the corresponding class. Then for any weak Leray-Hopf solution $u$ on $Q_T := \mathbb{R}^3 \times ]0,T[$, with initial data $u_0$, we infer the following. There exist $\beta(p,\alpha) > 0$ and $\gamma(\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)}, p, \alpha) > 0$ such that the stated estimate holds for $t \leq \min(1, \gamma, T)$. This then allows us to apply a less restrictive version of approach 1. Namely, we show that for any initial data in this specific class, there exists a weak Leray-Hopf solution $V(u_0)$ on $Q_T$, which belongs to a path space $X_T$ with the following property. Namely, $X_T$ grants uniqueness for weak Leray-Hopf solutions with the same initial data in this specific class (rather than for arbitrary initial data in $J(\mathbb{R}^3)$, as required in approach 1). A related strategy has been used in [9]. However, in [9] an additional restriction is imposed, requiring that the initial data has positive Sobolev regularity. Remarks 1. Another notion of solution to the Cauchy problem of the Navier-Stokes system was pioneered in [22] and [23]. These solutions, called 'mild solutions' to the Navier-Stokes system, are constructed using a contraction principle and are unique in their class. Many authors have given classes of initial data for which mild solutions of the Navier-Stokes system exist. See, for example, [6], [19], [25], [30] and [36]. The optimal result in this direction was established in [24]. The authors there proved global in time existence of mild solutions for solenoidal initial data with small $BMO^{-1}(\mathbb{R}^3)$ norm, as well as local in time existence for solenoidal $u_0 \in VMO^{-1}(\mathbb{R}^3)$. Subsequently, the results of the paper [28] implied that if $u_0 \in J(\mathbb{R}^3) \cap VMO^{-1}(\mathbb{R}^3)$ then the mild solution is a weak Leray-Hopf solution. Consequently, we formulate the following plausible conjecture (C). 2.
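For reference, the heat-flow notation used above (whose display was dropped in extraction) is standard; with the Gaussian heat kernel it reads:

\[
e^{t\Delta}u_0 := \Gamma(\cdot,t)\star u_0, \qquad \Gamma(x,t) = \frac{1}{(4\pi t)^{3/2}}\, e^{-\frac{|x|^2}{4t}}, \qquad x\in\mathbb{R}^3,\; t>0 .
\]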
In [32], the following open question was discussed: (Q.1) Assume that $u_0^{(k)} \in J(\mathbb{R}^3)$ are compactly supported in a fixed compact set and converge to $u_0 \equiv 0$ weakly in $L_2(\mathbb{R}^3)$. Let $u^{(k)}$ be the weak Leray-Hopf solution with the initial value $u_0^{(k)}$. Can we conclude that $u^{(k)}$ converge to $u \equiv 0$ in the sense of distributions? In [32] it was shown that (Q.1) holds true under the following additional restrictions: namely, that $u^{(k)}$ and its associated pressure $p^{(k)}$ satisfy the local energy inequality for all non-negative functions $\varphi \in C^\infty_0(Q_\infty)$. Subsequently, in [33] it was shown that the same conclusion holds with (1.11) replaced by the weaker assumption that a certain supremum remains bounded. In [2], this was further weakened to boundedness of $u^{(k)}$ in a weaker norm; we see that this improves the previous assumptions under which (Q.1) holds true. 3. In [34] and [31], it was shown that if $u$ is a weak Leray-Hopf solution on $Q_T$ and satisfies (1.14), then $u$ coincides on $Q_T$ with any other weak Leray-Hopf solution with the same initial data. The same conclusion for the endpoint case $u \in L^\infty(0,T;L^3(\mathbb{R}^3))$ appeared to be much more challenging and was settled in [11]. As a consequence of Theorem 1.3, we are able to extend the uniqueness criterion (1.14) for weak Leray-Hopf solutions. Let us state this as a proposition. Proposition 1.6. Suppose $u$ and $v$ are weak Leray-Hopf solutions on $Q_\infty$ with the same initial data $u_0 \in J(\mathbb{R}^3)$. Then there exists an $\epsilon_* = \epsilon_*(p,q) > 0$ such that uniqueness holds if either of the conditions (1.15)-(1.16) or (1.17)-(1.19) is satisfied. Let us mention that for sufficiently small $\epsilon_*$, it was shown in [35] that if $u$ is a weak Leray-Hopf solution on $Q_\infty$ satisfying either (1.15)-(1.16) or (1.17)-(1.19), then $u$ is regular on $Q_T$. To the best of our knowledge, it was not previously known whether these conditions on $u$ were sufficient to grant uniqueness on $Q_T$, amongst all weak Leray-Hopf solutions with the same initial value. Uniqueness for the endpoint case $(p,q) = (3,\infty)$ of (1.17)-(1.19) is simpler and already known. A proof can be found in [27], for example. Hence, we omit this case. Notation In this subsection, we will introduce notation that will be repeatedly used throughout the rest of the paper. We adopt the usual summation convention throughout the paper, for arbitrary vectors $a = (a_i)$, $b = (b_i)$ in $\mathbb{R}^n$ and for arbitrary matrices. For spatial domains and space-time domains, we will make use of the following notation. For $\Omega \subseteq \mathbb{R}^3$, mean values of integrable functions are denoted in the usual way, with norms taken with respect to the $L^s(\Omega)$ norm; for $s = 2$, we define the corresponding spaces. We define $\dot{J}^{1}_{2}(\Omega)$ as the completion of $C^\infty_{0,0}(\Omega)$ with respect to the $L_2$-norm and the Dirichlet integral. The usual modification is made if $s = \infty$. With this notation, we will define the relevant spaces with the usual norm. In addition, let $C_w([a,b];X)$ denote the space of $X$-valued functions which are continuous from $[a,b]$ to the weak topology of $X$. We define the following Sobolev spaces with mixed norms in the standard way. Homogeneous Besov Spaces and $BMO^{-1}$ We first introduce the frequency cut-off operators of Littlewood-Paley theory. The definitions we use are contained in [1]. For a tempered distribution $f$, let $F(f)$ denote its Fourier transform. Let $C$ be the annulus of the Littlewood-Paley decomposition. For a tempered distribution $a$, let us define, for $j \in \mathbb{Z}$, the dyadic blocks $\dot{\Delta}_j a$. Now we are in a position to define the homogeneous Besov spaces on $\mathbb{R}^3$ in the standard way. Remark 2.3. It is known that for $s = -2s_1 < 0$ and $p, q \in [1,\infty]$, the norm can be characterised by the heat flow; namely, there exists a $C > 1$ such that the corresponding two-sided bound holds for all $u \in \dot{B}^{-2s_1}_{p,q}(\mathbb{R}^3)$, where $\Gamma$ is the kernel for the heat flow in $\mathbb{R}^3$.
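Condition (1.14) was lost in extraction; given the exponent relation stated in the abstract and the references [31,34], it is presumably the classical Ladyzhenskaya-Prodi-Serrin condition:

\[
u \in L^{q}(0,T;L^{p}(\mathbb{R}^3)), \qquad \frac{3}{p} + \frac{2}{q} = 1, \qquad 3 < p < \infty. \tag{1.14}
\]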
We will also need the following proposition, whose statement and proof can be found in [1] (Proposition 2.22 there), for example. In the proposition below we use the notation $\mathcal{S}'_h$ of [1]. Proposition 2.4. A constant $C$ exists with the following properties. If $s_1$ and $s_2$ are real numbers such that $s_1 < s_2$ and $\theta \in ]0,1[$, then we have, for any $p \in [1,\infty]$ and any $u \in \mathcal{S}'_h$, $\|u\|_{\dot{B}^{\theta s_1+(1-\theta)s_2}_{p,1}} \leq \frac{C}{s_2-s_1}\Big(\frac{1}{\theta}+\frac{1}{1-\theta}\Big)\|u\|^{\theta}_{\dot{B}^{s_1}_{p,\infty}}\|u\|^{1-\theta}_{\dot{B}^{s_2}_{p,\infty}}$. $BMO^{-1}(\mathbb{R}^3)$ is the space of all tempered distributions such that the norm (2.9) is finite; $VMO^{-1}(\mathbb{R}^3)$ is defined as the closure of test functions with respect to the norm (2.9). Lorentz spaces Given a measurable subset $\Omega \subset \mathbb{R}^n$, let us define the Lorentz spaces. For a measurable function $f : \Omega \to \mathbb{R}$, define its distribution function and decreasing rearrangement in the usual way; $L^{p,q}(\Omega)$ is the set of all measurable functions $g$ on $\Omega$ such that the quasinorm $\|g\|_{L^{p,q}(\Omega)}$ is finite. It is known there exists a norm, equivalent to the quasinorms defined above, for which $L^{p,q}(\Omega)$ is a Banach space. For $p \in [1,\infty[$ and $1 \leq q_1 < q_2 \leq \infty$, we have the continuous embedding $L^{p,q_1}(\Omega) \hookrightarrow L^{p,q_2}(\Omega)$, and the inclusion is known to be strict. Let $X$ be a Banach space with norm $\|\cdot\|_X$, $a < b$, $p \in [1,\infty[$ and $q \in [1,\infty]$. Then $L^{p,q}(a,b;X)$ will denote the space of strongly measurable $X$-valued functions whose norm (2.14) is finite. In particular, if $1 \leq q_1 < q_2 \leq \infty$, we have the continuous embedding $L^{p,q_1}(a,b;X) \hookrightarrow L^{p,q_2}(a,b;X)$, and the inclusion is known to be strict. Let us recall a known proposition, 'O'Neil's convolution inequality' (Theorem 2.6 of [29]), which will be used in proving Proposition 1.6; under its hypotheses the convolution estimate stated there holds. Let us finally state and prove a simple lemma, which we will make use of in proving Proposition 1.6. Lemma 2.6. Let $f : ]0,T[ \to ]0,\infty[$ be a function satisfying the following property: suppose that there exists a $C \geq 1$ such that the doubling-type bound holds for any admissible pair of times. In addition, assume that the integrability condition holds for some $1 \leq r < \infty$. Then one can conclude that the stated bound (2.25) holds for all $t \in ]0,T[$. Proof. It suffices to prove that if $f$ satisfies the hypothesis of Lemma 2.6, along with the additional constraint (2.24), then we must necessarily have the conclusion for any $0 < t < T$. The assumption (2.24) implies an intermediate bound; this, together with (2.27), implies (2.25). Decomposition of Homogeneous Besov Spaces Next, we state and prove certain decompositions for homogeneous Besov spaces. This will play a crucial role in the proof of Theorem 1.3. In the context of Lebesgue spaces, an analogous statement is Lemma II.I proven by Calderon in [5]. Before stating and proving this, we take note of a useful lemma presented in [1] (specifically, Lemma 2.23 and Remark 2.24 in [1]). Lemma 2.7. Let $C'$ be an annulus and let $(u^{(j)})_{j\in\mathbb{Z}}$ be a sequence of functions whose Fourier supports lie in $2^j C'$ and whose appropriately weighted $L^p$ norms are summable; then the series converges (in the sense of tempered distributions) to some $u \in \dot{B}^{s}_{p,r}(\mathbb{R}^3)$, which satisfies the estimate (2.32). Now, we can state the proposition regarding decomposition of homogeneous Besov spaces. Note that decompositions of a similar type can be obtained abstractly from real interpolation theory, applied to homogeneous Besov spaces. See Chapter 6 of [3], for example. Proposition 2.8. For $i = 0, 1, 2$ let $p_i \in ]1,\infty[$, $s_i \in \mathbb{R}$ and $\theta \in ]0,1[$ be such that $s_1 < s_0 < s_2$ and $p_2 < p_0 < p_1$. In addition, assume the relations (2.33)-(2.34) hold. Suppose that $u_0 \in \dot{B}^{s_0}_{p_0,p_0}(\mathbb{R}^3)$. Then for all $\epsilon > 0$ there exist $u_{1,\epsilon} \in \dot{B}^{s_1}_{p_1,p_1}(\mathbb{R}^3)$ and $u_{2,\epsilon} \in \dot{B}^{s_2}_{p_2,p_2}(\mathbb{R}^3)$ realising the decomposition. It is easily verified that the following holds, and thus we may write the decomposition explicitly; for the sake of brevity we will write $N(j,\epsilon)$. Using the relations of the Besov indices given by (2.33)-(2.34), we can infer the required bound; the crucial point is that this is independent of $j$. Thus, we infer the claim. Next, it is well known that for any $u \in \dot{B}^{s_0}_{p_0,p_0}(\mathbb{R}^3)$ we have that $\sum_{j=-m}^{m}\dot{\Delta}_j u$ converges to $u$ in the sense of tempered distributions.
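The displayed definitions of the decreasing rearrangement and the Lorentz quasinorms were dropped by extraction; the standard definitions, which match the usage above, are:

\[
f^{*}(t) := \inf\{\, s>0 : |\{x\in\Omega : |f(x)|>s\}| \le t \,\}, \qquad t>0,
\]
\[
\|g\|_{L^{p,q}(\Omega)} :=
\begin{cases}
\Big(\displaystyle\int_0^{\infty}\big(t^{1/p} g^{*}(t)\big)^{q}\,\frac{dt}{t}\Big)^{1/q}, & 1\le q<\infty,\\[6pt]
\displaystyle\sup_{t>0}\, t^{1/p} g^{*}(t), & q=\infty.
\end{cases}
\]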
Furthermore, we have that $\dot{\Delta}_j\dot{\Delta}_{j'} u = 0$ if $|j-j'| > 1$. Combining these two facts allows us to observe the stated identity, and it is clear that the Fourier support condition holds. Here, $C'$ is the annulus defined by $C' := \{\xi \in \mathbb{R}^3 : 3/8 \leq |\xi| \leq 16/3\}$. Using (2.41)-(2.42) we can obtain the corresponding estimates. It is then the case that these estimates, together with (2.3), allow us to apply the results of Lemma 2.7. This allows us to achieve the desired decomposition with the stated choice. Corollary 2.9. Fix $2 < \alpha \leq 3$. For $p$ and $\alpha$ satisfying these conditions, suppose that $u_0$ belongs to the corresponding Besov space and $\mathrm{div}\,u_0 = 0$ in the weak sense. Then the above assumptions imply that there exist $\max(p,4) < p_0 < \infty$ and $\delta > 0$ such that for any $\epsilon > 0$ there exist weakly divergence-free functions $\bar{u}_{1,\epsilon}$ and $\bar{u}_{2,\epsilon}$ realising the decomposition. Proof. First case: $2 < \alpha < 3$. Under this condition, we can find $\max(4,p) < p_0 < \infty$ such that the required inequality holds. Clearly, $0 < \theta < 1$, and moreover, from (2.54), we see that $\delta > 0$. One can also see we have the following relation. The above relations allow us to apply Proposition 2.8 to obtain the following decomposition (we note that $\dot{B}^{0}_{2,2}(\mathbb{R}^3)$ coincides with $L_2(\mathbb{R}^3)$ with equivalent norms): $u_0 = u_{1,\epsilon} + u_{2,\epsilon}$ (2.58), together with the bound (2.60). For $j \in \mathbb{Z}$ and $m \in \mathbb{Z}$, it can be seen that the corresponding identity holds. Using this, (2.61) and the definition of $u_{1,\epsilon}$ from Proposition 2.8, we can infer the claimed bound. To establish the decomposition of the corollary we apply the Leray projector to each of $u_{1,\epsilon}$ and $u_{2,\epsilon}$, which is a continuous linear operator on the homogeneous Besov spaces under consideration. Second case: $\alpha = 3$ and $3 < p < \infty$. In the second case, we choose any $p_0$ such that $\max(4,p) < p_0 < \infty$. With this $p_0$ we choose $\theta$ such that the analogous relations hold. These relations allow us to obtain the decomposition of the corollary by means of identical arguments to those presented in the first case of this proof. Construction of Mild Solutions with Subcritical Besov Initial Data Let $\delta > 0$ be such that $s_{p_0} + \delta < 0$ and define the space $X_{p_0,\delta}(T)$. From Remarks 2.2 and 2.3, we observe the corresponding norm equivalences. In this subsection, we construct mild solutions with weakly divergence-free initial data in $\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}(\mathbb{R}^3)$. Before constructing mild solutions we will briefly explain the relevant kernels and their pointwise estimates. Let us consider the following Stokes problem with forcing $F$, and assume that $F_{ij} \in C^\infty_0(Q_T)$. Then a formal solution to the above initial boundary value problem has the form of a space-time convolution. The kernel $K$ is derived with the help of the heat kernel $\Gamma$, and the following pointwise estimate is known (a standard form of it is recalled below). Theorem 3.1. Consider $p_0$ and $\delta$ such that $4 < p_0 < \infty$, $\delta > 0$ and $s_{p_0} + \delta < 0$. Suppose that $u_0 \in \dot{B}^{s_{p_0}+\delta}_{p_0,p_0}(\mathbb{R}^3)$ satisfies (3.4); then there exists a $v \in X_{p_0,\delta}(T)$, which solves the Navier-Stokes equations (1.1)-(1.2) in the sense of distributions and satisfies the following properties. Step 2: establishing energy bounds. First we note that by interpolation ($0 \leq \tau \leq T$) the interpolation inequality holds; specifically, it is then immediate that the claimed bound follows, and from this we conclude the estimate for $0 \leq t \leq T$. Let $r \in L^4(Q_T) \cap X_{p_0,\delta}(T) \cap L^{2,\infty}(Q_T)$ and $R := G(r \otimes r)$. Furthermore, define $\pi_{r\otimes r} := R_i R_j (r_i r_j)$, where $R_i$ denotes the Riesz transform and repeated indices are summed. One can readily show that on $Q_T$, $(R, \pi_{r\otimes r})$ are solutions to the corresponding Stokes system. We can also infer that $R \in W^{1,0}_2(Q_T)$. Since the associated pressure is a composition of Riesz transforms acting on $r \otimes r$, we have the corresponding estimates. Step 4: estimate of higher derivatives. All that remains to prove is the estimate (3.11). First note that from the definition of $X_{p_0,\delta}(T)$, we have the estimate (3.38). Since $\pi_{v\otimes v}$ is a composition of Riesz transforms, we deduce from (3.38) the bound (3.39). One can infer that $(v, \pi_{v\otimes v})$ satisfies the local energy equality.
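The pointwise kernel estimate referenced above is classical; in the form usually quoted for Oseen-type kernels derived from the heat kernel (the attribution to this paper's exact K is an assumption), it reads:

\[
|K(x,t)| \le \frac{C}{(|x|^{2}+t)^{2}}, \qquad x\in\mathbb{R}^3,\; t>0 .
\]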
This can be shown using (3.7)-(3.10) and a mollification argument. If $(x,t) \in Q_{\lambda,T}$, then for $0 < r^2 < \lambda^2$ we can apply Hölder's inequality and (3.38)-(3.39) to infer the required smallness. Clearly, there exists $r_0^2(\lambda, \varepsilon_{CKN}, \|u_0\|_{\dot{B}^{s_{p_0}+\delta}_{p_0,p_0}})$ below which this smallness condition is satisfied. By the $\varepsilon$-regularity theory developed in [4], there exist universal constants $c_{0k} > 0$ such that (for $(x,t)$ and $r$ as above) the corresponding derivative bounds hold. From these estimates, the singular integral representation of $\pi_{v\otimes v}$ and the fact that $(v, \pi_{v\otimes v})$ satisfy the Navier-Stokes system, one can prove the corresponding estimates hold for higher time derivatives of the velocity field and pressure. Proof of Lemma 1.5 The proof of Lemma 1.5 is achieved by careful analysis of certain decompositions of weak Leray-Hopf solutions, with initial data in the class given in Lemma 1.5. A key part of this involves decompositions of the initial data (Corollary 2.9), together with properties of mild solutions whose initial data belong to a subcritical homogeneous Besov space (Theorem 3.1). In the context of local energy solutions of the Navier-Stokes equations with $L_3$ initial data, related splitting arguments have been used in [20]. Before proceeding, we state a known lemma found in [31] and [34], for example. Lemma 3.2. Suppose that $w \in L^{p,r}(Q_T)$, $v \in L^{2,\infty}(Q_T)$ and $\nabla v \in L_2(Q_T)$. Then the stated bound holds for $t \in ]0,T[$. Here, $p$ and $\alpha$ satisfy the assumptions of Theorem 1.3. We will write $u_0 = \bar{u}_{1,\epsilon} + \bar{u}_{2,\epsilon}$. Here the decomposition has been performed according to Corollary 2.9 (specifically, (2.50)-(2.53)), with $\epsilon > 0$. Throughout this section we will let $w_\epsilon$ be the mild solution from Theorem 3.1 generated by the initial data $\bar{u}_{1,\epsilon}$. Recall from Theorem 3.1 that $w_\epsilon$ is defined on $Q_{T_\epsilon}$; in accordance with this and (3.45), we will take the time $T_\epsilon$ accordingly. The two main estimates we will use are as follows. Using (3.1), (3.6) and (3.45), we have (3.50). The second property, from Theorem 3.1, is (recalling that by assumption $\bar{u}_{1,\epsilon} \in L_2(\mathbb{R}^3)$) the corresponding energy-type bound. Consequently, it can be shown that $w_\epsilon$ satisfies the energy equality for $0 \leq s' \leq s \leq T_\epsilon$. Moreover, using (3.1), (3.45) and (3.47), the following estimate is valid for $0 \leq s \leq T_\epsilon$. Construction of Strong Solutions The approach we will take to prove Theorem 1.3 is as follows. Namely, we construct a weak Leray-Hopf solution, with initial data in the class of Theorem 1.3, by perturbation methods. We refer to this constructed solution as the 'strong solution'. Then, Lemma 1.5 plays a crucial role in showing that the strong solution has good enough properties to coincide with all weak Leray-Hopf solutions, with the same initial data, on some time interval $]0, T(u_0)[$. With this in mind, we now state the relevant theorem related to the construction of this 'strong solution'. Let us introduce the necessary preliminaries. The path space $X_T$ for the mild solutions constructed in [24] is defined via the norm recalled below. From (2.9) and the above, we obtain the corresponding bounds for $0 < T \leq \infty$. Recalling the definition of $G(f \otimes g)$ given by (3.3), it was shown in [24] that there exists a universal constant $C$ such that the bilinear estimate holds for all $f, g \in E_T$. Here is the theorem related to the construction of the 'strong solution'. The main features of this construction required for our purposes can already be inferred from ideas contained in [28]. Since the proof is not explicitly contained in [28], we find it beneficial to sketch certain parts of the proof in the Appendix. Theorem 4.1. There exists a universal constant $\epsilon_0 > 0$ such that if the smallness condition (4.6) holds, then there exists a $v \in E_T$, which solves the Navier-Stokes equations in the sense of distributions and satisfies the following properties.
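The display defining the path space for the mild solutions of [24] was lost. Assuming the conventions of Koch and Tataru [24], the norm would read:

\[
\|v\|_{E_T} := \sup_{0<t<T} \sqrt{t}\,\|v(\cdot,t)\|_{L^{\infty}(\mathbb{R}^3)}
+ \sup_{x\in\mathbb{R}^3,\; 0<R^{2}<T}\Big( R^{-3}\int_{0}^{R^{2}}\!\!\int_{B_R(x)} |v(y,t)|^{2}\,dy\,dt \Big)^{1/2}.
\]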
The first property is that $v$ solves the integral equation recalled below in $Q_T$, along with the accompanying estimate. The second property is that $v$ is a weak Leray-Hopf solution on $Q_T$. If $\pi_{v\otimes v}$ is the associated pressure, we have the corresponding estimates (here, $\lambda \in ]0,T[$ and $p \in ]1,\infty[$). The final property is that the higher-derivative bounds hold for $\lambda \in ]0,T[$ and $k = 0, 1, \ldots$, $l = 0, 1, \ldots$. Proof of Theorem 1.3 Proof. Let us now consider any other weak Leray-Hopf solution $u$, defined on $Q_\infty$ and with initial data $u_0$ as in Theorem 1.3, where $\epsilon_0$ is from (4.6) of Theorem 4.1. Consider $0 < T < T(u_0)$, where $T$ is to be determined. Let $v : Q_T \to \mathbb{R}^3$ be as in Theorem 4.1. From (4.8), we set $w := u - v$. Moreover, $w$ satisfies the following equations in $Q_T$, with the initial condition satisfied in the strong $L_2$ sense: $\lim_{t\to 0^+}\|w(\cdot,t)\|_{L_2} = 0$ (4.14) in $\mathbb{R}^3$. From the definition of $E_T$, we have that $v \in L^\infty(Q_{\delta,T})$ for $0 < \delta < s \leq T$. Using Proposition 14.3 in [27], one can deduce the estimate (4.2) for $t \in [\delta, T]$. Using Lemma 3.2 and (4.8), we see that the relevant terms are controlled. The main point now is that Lemma 1.5 implies that there exist $\beta(p,\alpha) > 0$ and $\gamma(\|u_0\|_{\dot{B}^{s_{p,\alpha}}_{p,p}(\mathbb{R}^3)}, p, \alpha) > 0$ such that the decay estimate holds for $0 < t < \min(1,T,\gamma)$. This allows us to take $\delta \to 0$ in (4.2) to get (4.18). Using (4.2) and (4.18) we see that the bound (4.20) holds for $t \in [0,T]$. Using this and (4.11), we arrive at a Gronwall-type inequality. Using (4.4), we see that we can choose $0 < T'(u_0) < T(u_0)$ such that the required smallness holds. With this choice of $T'(u_0)$, it immediately follows that $w = 0$ on $Q_{T'(u_0)}$. Uniqueness Criterion for Weak Leray-Hopf Solutions Now let us state two known facts, which will be used in the proof of Proposition 1.6. If $v$ is a weak Leray-Hopf solution on $Q_\infty$ with initial data $u_0 \in J(\mathbb{R}^3)$, then this implies that $v$ satisfies the integral equation in $Q_\infty$. The second fact is as follows. Consider $3 < p < \infty$ and $2 < q < \infty$ such that $3/p + 2/q = 1$. Then there exists a constant $C = C(p,q)$ such that the bilinear estimate holds for all $f, g \in L^{q,\infty}(0,T;L^{p,\infty}(\mathbb{R}^3))$. These statements and their corresponding proofs can be found in [27] and [28], for example. The remaining conclusions of Theorem 4.1 follow from similar reasoning to that in the proof of the statements of Theorem 3.1; hence we omit the details.
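The integral equation referenced twice above (for the strong solution and for general weak Leray-Hopf solutions) standardly takes the Duhamel form, with $\mathbb{P}$ denoting the Leray projector; the exact display in the source is lost, so the following is a reconstruction of the usual form:

\[
v(\cdot,t) = e^{t\Delta}u_0 - \int_0^t e^{(t-\tau)\Delta}\,\mathbb{P}\,\operatorname{div}\big(v\otimes v\big)(\cdot,\tau)\,d\tau .
\]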
2016-12-08T17:31:54.000Z
2016-10-26T00:00:00.000
{ "year": 2018, "sha1": "3fe926086069bcee47a9a7083e4d6b7c64c8008f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1610.08348", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3e446eead98de36b57219e7428c8986c1cafb695", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
54995523
pes2o/s2orc
v3-fos-license
Successful Pregnancy Outcome in a Case of Dengue with Eclampsia Department of Obstetrics and Gynecology, Grant Government Medical College and Sir J.J. Group of Hospitals, Mumbai Corresponding Author Dr Poornima M Resident, Department of Obstetrics and Gynecology, Grant Government Medical College and Sir J.J. Group of Hospitals, Mumbai Abstract Malaria and dengue are among the commonest vector-borne diseases seen in India. While management of uncomplicated dengue is usually symptomatic, severe disease like dengue shock syndrome and dengue hemorrhagic fever may require tertiary medical care. Pregnancy complicated by dengue hemorrhagic fever is not usually seen in obstetric practice, but with the increase in cases of dengue in the child-bearing age group this incidence is expected to rise. Proper and timely management of such cases is important, as delay in treatment may adversely affect fetal and maternal well-being. Pregnancy complicated by dengue is also dangerous because it increases the risk of premature delivery, adverse neonatal outcome and postpartum hemorrhage. We here report a case of a third-gravida female with a bad obstetric history who presented to us at 32 weeks of gestation with dengue hemorrhagic fever. This report emphasizes the complications of dengue infection in pregnancy. Early diagnosis, proper referral and immediate treatment are the key factors in the management of pregnancy complicated by dengue fever. Keywords: Pregnancy, Dengue, Thrombocytopenia, Post-partum hemorrhage. Introduction Dengue fever is a febrile illness caused by dengue virus, a flavivirus belonging to the family Flaviviridae. There are four serotypes of dengue viruses responsible for clinically significant dengue fever: DENV-1, DENV-2, DENV-3, and DENV-4. The spectrum of illness is wide and may range from a mild fever with myalgia to severe hemorrhage and shock. The diagnosis of dengue depends upon positive serology, which can be confirmed by viral PCR studies. The combination of an NS1 Ag strip and IgM ELISA is reported to be a suitable combination of tests for timely and accurate dengue diagnosis on a single serum specimen [1]. Pregnancy complicated by dengue poses a risk for the mother as well as the fetus. There is an increased risk of antepartum hemorrhage, postpartum hemorrhage and adverse fetal outcome. There are some case reports where neonatal thrombocytopenia was reported secondary to vertical transmission of dengue from mother to fetus [2]. Management of pregnancy complicated by dengue requires proper obstetric care; the risk of hemorrhage is increased during delivery [3]. Pregnancy complicated by dengue and PIH is also difficult to manage because it is difficult to differentiate the hemolysis, elevated liver enzymes, and low platelet counts (HELLP) syndrome caused by PIH from the similar blood picture caused by dengue [4]. Proper management of such patients may prevent maternal mortality as well as adverse fetal outcome [5]. We here report a case of a patient who presented to us with pregnancy complicated by dengue hemorrhagic fever. She was successfully managed and there was no adverse maternal or neonatal outcome. Case Report A 30-year-old G3P2A1L1 female was referred to us at 32 weeks of gestation with a history of high-grade fever with chills. The investigations done at the referring hospital showed severe thrombocytopenia and NS1 positivity for dengue. The history revealed that she was registered and immunized at a private hospital. She had regular antenatal visits during the present pregnancy.
She had a history of fever with chills along with severe headache 8 days back. For these complaints she was admitted to the same hospital. At the time of initial admission her investigations were done. Her CBC showed Hb of 9 gm, TLC of 11,500, and a platelet count of 98,000/cumm. NS1 was done, which was positive. A diagnosis of dengue was made and symptomatic treatment was started. IV antibiotics and IV fluids were given along with presumptive antimalarials. On D4 a repeat CBC showed thrombocytopenia (platelet count 34,000/cumm). On examination there was appearance of a petechial rash over both legs, along with ecchymotic patches over venipuncture sites. There was also systemic hypertension (blood pressure 150/110 mm Hg). The patient also started complaining of severe headache and blurring of vision. Blood transfusion was initially given, and in view of the increased severity of illness along with thrombocytopenia and blurring of vision the patient was referred to the nearby district hospital, from where she was referred to us. At the time of admission to our hospital the obstetric history was reviewed. The patient was gravida 3. The first issue was a male child delivered by LSCS; the indication for LSCS was pregnancy-induced hypertension. The baby was delivered prematurely at 32 weeks. Immediately after delivery the baby developed respiratory difficulty and hence was admitted to the NICU. The baby died on D7 of life; the cause of death was stated to be prematurity with respiratory distress. There was also a history of a first-trimester abortion at 10 weeks of gestation during the subsequent pregnancy. Gravida 3 was the present pregnancy. At the time of admission the patient's general condition was poor. The patient was febrile, with pedal edema and ecchymosis and petechial rashes involving the hands, legs, abdomen and chest. Pulse was 98/min, regular but low volume. Blood pressure was 150/100 mm Hg. On palpation the height of the uterus corresponded to 30 weeks of gestation with cephalic presentation. FHS was recorded at 140/min. There was an LSCS scar but no scar tenderness. The patient was irritable and was complaining of severe headache and blurring of vision. In view of NS1 positivity with petechiae and purpura, a feeble pulse and complaints of blurred vision, the patient was shifted to the intensive care unit. Investigations done at the time of admission revealed Hb 10 gms, platelets 8000/cumm, TLC 15,900/cumm, Sr Mg 3.6, blood urea 50, Sr creatinine 1.1, total bilirubin 1.2, SGOT 70 U/dl, SGPT 86 U/dl, PT 14 seconds, APTT 29 seconds, and an INR of 1.09. Immediately after admission the patient was started on IV fluids and IV antibiotics. Simultaneously, PCV, fresh frozen plasma and platelet transfusions were started. An obstetric ultrasound was done, which showed a single live intrauterine gestation with cephalic presentation. The placenta was located posteriorly, the mean gestational age was determined to be 34 weeks, and the amniotic fluid was adequate. Estimated fetal weight was 1840 gms. Fetal Doppler was done, which revealed fetoplacental and uteroplacental insufficiency. On day 1 of admission itself (after 8 hours of admission) the patient went into spontaneous preterm labour and delivered a male baby weighing 1.7 kg. After delivery the baby developed respiratory distress. The baby was examined by a pediatrician and was admitted to the NICU in view of hyaline membrane disease. Four to five hours after delivery the patient developed atonic post-partum hemorrhage. There was profuse vaginal bleeding.
A local examination revealed a vulvovaginal hematoma. The patient was given an IV bolus and shifted immediately to the operation theatre. Uterotonics were started and given till the uterus was well contracted. The vulvovaginal hematoma was evacuated, and episiotomy resuturing and vaginal packing were done. Her hemoglobin turned out to be 6.5 gms and platelets were 75,000/cmm. Again the patient was transfused with packed red blood cells, platelets and fresh frozen plasma, and she was shifted back to the intensive care unit. The vital parameters of the patient, along with urine output and abdominal girth, were monitored. On the subsequent day the patient developed hemorrhagic shock due to postpartum hemorrhage secondary to coagulopathy. The patient was immediately intubated and connected to mechanical ventilation, and inotropes were started. Uterine exploration was done and blood clots were removed from the uterus and vagina. After removal of the clots, uterine tamponade with Shivkar's pack (condom catheter) was given. Repeat investigations showed Hb 4.5 gms and platelets 1.1 lakhs/cmm. Again the patient was transfused with PCV, platelets and fresh frozen plasma. On day 3 after delivery the patient was hemodynamically stable and hence was weaned off the ventilator and inotropes. Later the patient was put on CPAP and eventually was extubated on the 4th day. A fundus examination was done, which was normal. The next day the condom catheter (uterine tamponade) was removed. There was no evidence of any bleeding per vaginum. Repeat investigations showed hemoglobin 9.3 gms and platelets 1.8 lakhs/cmm. Since the patient was vitally stable she was shifted back to the postnatal ward. Later the baby was discharged from the NICU after successful surfactant therapy. Later the patient had two episodes of fever with chills. Blood culture was done, which showed growth of Klebsiella pneumoniae sensitive to amoxicillin-clavulanic acid. Appropriate antibiotics were immediately started. The patient's blood pressure was kept under control with antihypertensives. Her kidney and liver function tests were normal. Bilateral lower limb Doppler was also done, which was normal. In view of no new complaints the mother was shifted to the step-down nursery and eventually was discharged after establishment of breast feeding. Discussion In countries like India, where dengue is endemic, it is important to include dengue in the differential diagnosis of all patients presenting with fever and body ache. There are studies suggesting that the incidence of dengue is increasing amongst adults [6]. As the incidence of dengue is rising in the child-bearing age group, it is essential that all pregnant females be investigated for dengue if they present with fever, body ache and chills. There are many case reports describing dengue fever in pregnancy and its outcome. The common presenting complaints of dengue during pregnancy are fever, chills, arthralgia, headache and myalgia [7]. The diagnosis and assessment of severity of dengue fever in pregnancy are complicated by many factors, including the physiological reduction of hematocrit during pregnancy, the overlapping features of HELLP syndrome and dengue hemorrhagic fever, and the bleeding manifestations of dengue and postpartum hemorrhage. A high index of suspicion is therefore necessary to diagnose dengue in pregnant females. A detailed history combined with serological tests may be helpful in diagnosis. Other features may include thrombocytopenia, a rise in hematocrit, mildly elevated liver enzymes and atypical lymphocytosis [8]. Serological diagnosis of dengue depends upon the presence of dengue IgM.
NS1 antigen may diagnose dengue fever at an early stage [9]. Dengue during pregnancy may affect the mother as well as the fetus. As already mentioned, dengue may cause fetal thrombocytopenia due to vertical transmission. There is also a risk of needing to conduct premature delivery in cases of severe dengue. These preterm deliveries may be responsible for respiratory distress, birth asphyxia, neonatal hypoglycemia and various other neonatal morbidities [10]. The management of pregnancies complicated with dengue is critical and, as in this case, may require blood and platelet transfusions. In severe cases ventilatory support may be needed. Babies born to these patients may have severe morbidity in the form of prematurity, low birth weight, respiratory distress and bleeding manifestations. In complicated cases, unless proper intensive care is provided, mother and newborn are at a great risk of morbidity and mortality. Conclusion With the increase in cases of dengue, the incidence of dengue in pregnancy is expected to rise. Dengue should be suspected in all pregnant women presenting with fever, chills and myalgia. Usually symptomatic treatment is all that is required, but complicated cases, like this case, may require intensive care. Early diagnosis and appropriate treatment are key to the proper management of such cases.
2019-03-15T13:12:30.114Z
2016-06-29T00:00:00.000
{ "year": 2016, "sha1": "b895d749609f0ff3afed01243159216a9bc50479", "oa_license": null, "oa_url": "https://doi.org/10.18535/jmscr/v4i7.06", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e001e13e2042f57dfb86b57abe747ab606a4baa9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9410429
pes2o/s2orc
v3-fos-license
Focal concavity of posterior superior acetabulum and its relation with acetabular dysplasia and retroversion in adults without advanced hip osteoarthritis Background Although little is known, a limited number of three-dimensional computed tomography (CT) images of the pelvis present focal concavity of posterior superior acetabulum. The purpose of the present study was to investigate this morphologic deformity and its relation with dysplasia and retroversion in adults who were expected to have the original morphology of the acetabulum after growth. Methods Consecutive adult patients with hip pain who visited our hospital and had three-dimensional pelvic CT images were retrospectively analyzed after approval of the institutional review board; exclusion criteria included diseases, injuries and operations that affect the morphology of the hip, including radiographic osteoarthritis Tönnis grades 2 and 3. Focal concavity of posterior superior acetabulum was evaluated by three-dimensional CT image. Acetabular dysplasia was determined by lateral center edge (LCE) angle <25°, Tönnis angle >10°, and anterior center edge (ACE) angle <25° on standing hip radiographs. Acetabular version angle was measured at the one-fourth cranial level of the axial CT image. A subgroup analysis included only younger adult patients up to 50 years. Results The subjects analyzed were 46 men (92 hips) and 54 women (108 hips) with a median age of 57.5 (21–79) and 51.0 (26–77) years, respectively. Focal concavity of posterior superior acetabulum was observed in 13 hips; 7 patients had it unilaterally, while 3 patients showed it bilaterally. Among these hips, pain was observed in 8 hips, but 4 hips (2 patients) were associated with injuries. This morphologic abnormality was not associated with acetabular dysplasia determined by LCE angle <25°, Tönnis angle >10° or ACE angle <25°. Of note, no acetabulum with the deformity plus dysplasia was retroverted. These findings were confirmed in a subgroup analysis including 22 men (44 hips) and 27 women (54 hips) with a median age of 31.0 (21–50) and 41.0 (26–50) years, respectively. Conclusions Focal concavity of posterior superior acetabulum could be a rare morphologic abnormality of acetabular formation, independent of lateral or anterior dysplasia or retroversion. Electronic supplementary material The online version of this article (doi:10.1186/s12891-015-0791-z) contains supplementary material, which is available to authorized users. Background In 1999, Reynolds et al. described retroversion of the acetabulum as a solitary anomaly that could result in hip pain [1]. It is now generally accepted that acetabular retroversion is a cause of painful femoro-acetabular impingement [2,3]. It has been consistently reported that patients with acetabular dysplasia have a higher frequency of acetabular retroversion if a cross-over sign on the anteroposterior radiograph of the pelvis is used for the diagnosis [4,5], while recent data have also suggested differences between dysplasia and retroversion of the acetabulum. For example, Tannast et al. [6] showed that pelvic morphology differed in rotation and obliquity between acetabular retroversion and developmental dysplasia, and Tannenbaum et al. [3] found that the frequency of acetabular retroversion was higher in men compared to women, in contrast to acetabular dysplasia. There are few reports assessing the original morphology of the adult acetabulum with dysplasia without advanced hip osteoarthritis [7].
We have observed that a small number of three-dimensional computed tomography (CT) images of the pelvis present focal concavity of posterior superior acetabulum (Fig. 1 and Additional file 1: Figure S1; unpublished data). To our knowledge, however, this morphologic abnormality has not yet been studied. The present study retrospectively investigated the focal deformity and its relation with dysplasia and retroversion in adults without diseases, injuries or operations that affect the morphology of the hip. Subject selection In the present study, we included adults less than 80 years old from consecutive patients with hip pain who visited our hospital and had three-dimensional CT images of the pelvis from January 2010 to August 2012. We excluded patients without standing pelvic radiographs of the anteroposterior and false profile views or with radiographic hip osteoarthritis Tönnis grades 2 and 3; mild hip osteoarthritis (Tönnis grade 1) was judged to be acceptable for the analysis of original morphology. Patients were also excluded if they had a history of hip fracture or surgery or diseases that affect the morphology of the hip, including osteonecrosis of the femoral head and rheumatoid arthritis, or if CT images were too limited to measure angles precisely because of poor positioning and no raw data were available to recreate reconstructed images. In addition to the analysis of all subjects, we performed a subgroup analysis that was limited to only younger adult patients up to 50 years to further focus on the original morphology after growth. The institutional review board of the Saitama Medical University Hospital approved the present study (approval No. 13-047-1); informed consent was waived because of the retrospective design. Plain radiograph acquisition Standing anteroposterior radiographs of the hip were made with the limbs parallel and with the feet internally rotated approximately 20°. The central beam was directed to the midpoint between the superior border of the pubic symphysis and the center of a line connecting both anterior superior iliac spines, at a distance of 120 cm from the film. False-profile radiographs of the hip were obtained in a standing position. The affected hip was positioned against the film cassette, with the ipsilateral foot parallel to the cassette stand. The pelvis was rotated 65° relative to the cassette. The x-ray beam was directed toward the center of the femoral head at a tube-to-film distance of 120 cm. CT image acquisition All CT images were acquired with a 16-slice or 128-slice multidetector CT scanner system (Somatom Emotion 16 or Somatom Definition Flash; Siemens Healthcare, Forchheim, Germany). The scan parameters for the 16-slice CT scanner were tube voltage 130 kV, reference mAs 140 mAs, collimation 1×16×0.6 mm, gantry rotation time 0.6 s, pitch 0.9, pixel matrix size 512×512, and those for the 128-slice CT were tube voltage 120 kV, reference mAs 185 mAs, collimation 2×64×0.6 mm, gantry rotation time 1.0 s, pitch 0.8, pixel matrix size 512×512. Automatic exposure control (CARE Dose 4D, Siemens Healthcare, Forchheim, Germany) was activated in all scans. For a given reference mAs, this technique can adjust the tube current in real time to optimize radiation dose utilization. The radiation doses of all patients were recorded; the average CT dose index volume (CTDIvol) on 16-slice and 128-slice CT was approximately 12 mGy and 8 mGy, respectively, while the corresponding dose-length product (DLP) was approximately 375 mGy*cm and 238 mGy*cm.
Patients were placed supine with the limbs parallel and with enough internal rotation for the feet to touch each other. Images were obtained from the anterior superior iliac spines to the proximal portion of the femurs. Axial and coronal images were reconstructed at 3-mm slice thickness using filtered back projection. Three-dimensional volume-rendered images were acquired with a 0.75-mm reconstructed slice thickness and a 0.5-mm reconstruction increment, on an Aquarius iNtuition 3D workstation (TeraRecon, Foster City, CA, USA). Image analysis Focal concavity of posterior superior acetabulum (Fig. 1 and Additional file 1: Figure S1) was evaluated by three-dimensional CT image of the pelvis, and the selection was performed under the agreement of all authors. Acetabular dysplasia was determined by not only lateral center edge (LCE) angle <25° on standing anteroposterior radiographs, but also Tönnis angle >10° and anterior center edge (ACE) angle <25° on standing radiographs of the anteroposterior and false-profile views [8], respectively. The LCE angle was formed by a vertical line through the center of the femoral head and a second line through the lateral edge of the acetabulum to the center of the femoral head. The Tönnis angle was created by a horizontal line and a line connecting the lateral and inferior aspects of the acetabular sourcil. The ACE angle was composed of a vertical line through the center of the femoral head and a second line through the most anterior point of the acetabulum to the center of the femoral head. Acetabular retroversion was judged by a version angle <0° at the one-fourth cranial level of the acetabulum in an axial CT image according to a recent validation study [9]; we did not use the cross-over sign because recent studies suggest that it might not provide an accurate diagnosis of acetabular retroversion [10,11]. This angle was measured as shown in Additional files 2: Table S1, 3: Figure S2 and 4: Figure S3. Statistical analysis Comparisons of continuous variables for two groups and associations between categorical variables were analyzed by the Mann-Whitney U test and Fisher's exact test, respectively, using StatMate v4.01 (ATMS Co., Ltd., Tokyo, Japan). A p-value of <0.05 was considered statistically significant. All subjects Among 488 patients selected according to the inclusion criteria, we excluded those without standing pelvic radiographs of the false profile view (n = 283), with radiographic hip osteoarthritis Tönnis grades 2 and 3 (n = 152), a history of hip fracture (n = 121) or surgery (n = 125), diseases that affect the morphology of the hip (n = 75) and inappropriate CT images (n = 26). The numbers of patients excluded under these criteria overlap, and the subjects analyzed in the present study were a total of 100 patients (200 hips). There were 46 men (92 hips) and 54 women (108 hips) with a median age of 57.5 (21 to 79) and 51.0 (26 to 77) years, respectively. Focal concavity of posterior superior acetabulum was observed in a total of 13 hips (6.5 %); 7 patients had it unilaterally (3 hips with pain and 4 hips without pain), while 3 patients showed it bilaterally (5 hips with pain and 1 hip without pain). There was no gender- or age-related difference in focal concavity of posterior superior acetabulum. In contrast, the frequency of acetabular dysplasia was higher in women, while that of acetabular retroversion was higher in men (Table 1); notably, men had 22 retroverted acetabuli (23.9 %) but women had only 2 retroverted acetabuli (1.9 %). Patients with retroverted acetabuli were younger than those with anteverted acetabuli (Table 2).
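The angle criteria above are purely geometric, so they translate directly into coordinate arithmetic. The following minimal Python sketch (hypothetical landmark coordinates; not the software actually used in the study) illustrates how an LCE angle could be computed from two annotated radiographic points:

import math

def lce_angle(head_center, lateral_edge):
    # Lateral center edge (LCE) angle in degrees: the angle between a
    # vertical line through the femoral head center and the line from
    # the head center to the lateral edge of the acetabulum.
    # Points are (x, y) with y increasing toward the cranial direction.
    dx = lateral_edge[0] - head_center[0]
    dy = lateral_edge[1] - head_center[1]
    return math.degrees(math.atan2(abs(dx), dy))

# Hypothetical landmarks (pixels). An LCE angle < 25 degrees would count
# as dysplastic under the criteria used in this study.
print(round(lce_angle((100.0, 100.0), (112.0, 130.0)), 1))  # 21.8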
Focal concavity of posterior superior acetabulum was not associated with acetabular dysplasia determined by LCE angle <25°, Tönnis angle >10° or ACE angle <25° (Table 3, Fig. 2). Of note, no acetabulum with this morphologic abnormality plus dysplasia was retroverted (Table 4, Fig. 3). No gender- or age-related difference in focal concavity of posterior superior acetabulum was observed. In contrast, the frequency of acetabular dysplasia was higher in women and that of acetabular retroversion was higher in men (Table 5); men had 15 retroverted acetabuli (34.1 %) while women had only 1 retroverted acetabulum (1.9 %). Patients with retroverted acetabuli were younger than those with anteverted acetabuli (Table 6). Discussion The present study investigated adult patients without diseases, injuries or operations that affect the morphology of the hip, including radiographic osteoarthritis Tönnis grades 2 and 3. As a result, focal concavity of posterior superior acetabulum was observed in 6.5 % of 200 hips in 46 men (92 hips) and 54 women (108 hips) with a median age of 57.5 (21 to 79) and 51.0 (26 to 77) years, respectively. A similar frequency (7.1 % in 98 hips) of this deformity was confirmed by a subgroup analysis including 22 men (44 hips) and 27 women (54 hips) with a median age of 31.0 (21 to 50) and 41.0 (26 to 50) years, respectively. All subjects had hip pain unilaterally or bilaterally, and it was unclear whether the morphologic abnormality can result in hip pain. This focal deformity did not show any specific feature regarding gender or age, while there are marked gender- and age-related differences in dysplasia and retroversion of the acetabulum. Focal concavity of posterior superior acetabulum was not associated with lateral or anterior acetabular dysplasia determined by LCE angle <25°, Tönnis angle >10° or ACE angle <25°, or with acetabular retroversion measured at the one-fourth cranial level of the axial CT image. These results might be compatible with previous reports suggesting that the original morphology of acetabular dysplasia has a wide variety of deficiency types [7] and that there are differences between dysplasia and retroversion of the acetabulum [3,6]. In agreement with the finding by Tannenbaum et al. [3], the present data showed that men had more retroverted acetabuli; although little is known, this apparent gender-related difference might be linked to the observation that external rotation of the lower limbs was more common in boys before birth [12]. The data presented also confirm that acetabular retroversion was associated with an earlier onset of hip pain, as previously reported [5]. From a diagnostic point of view, acetabular retroversion can be one cause of hip pain, potentially relating to femoro-acetabular impingement, especially in younger men, while such a possibility might be low when focal concavity of posterior superior acetabulum as well as lateral or anterior acetabular dysplasia exists, because no acetabulum with this morphologic abnormality plus dysplasia was retroverted. The acetabulum is formed by the ilium, ischium and pubis during growth, and focal concavity of posterior superior acetabulum could be one hypoplastic deformity of the acetabular wall. Indeed, it appears that the region of this deformity corresponds to the ilium (Additional file 7: Figure S6), possibly resulting from a relative growth disturbance compared to the ischium developmentally.
If correct, acetabular retroversion [1,12,13] might be associated with congenital mal-orientation, because no acetabulum with the morphologic abnormality plus dysplasia was retroverted. The hypothesis would be consistent with the facts that the position of a fetus in the uterus can influence acetabular morphology [12] and that the acetabular version angle at the one-fourth cranial level increases with growth [14]. The present study has several limitations. There is certain selection bias due to the way patients were selected for this retrospective review; non-patient volunteers or patients without hip pain were not available due to practical difficulties, including the radiation dose of three-dimensional CT. Accordingly, the present results cannot be applied to the general population. Another methodological issue could be consensus interpretation in imaging research [15]. Analyzing all three-dimensional CT images, acquired by two types of CT scanners, together might also cause difficulties with interpretation. Conclusions In adult patients who were expected to have the original morphology of the acetabulum after growth, focal concavity of posterior superior acetabulum was observed in 13 hips (6.5 % of 200 hips). Among these hips, pain was observed in 8 hips (61.5 %), though 4 hips (2 patients) were associated with injuries. Fig. 4: Relation between focal concavity of posterior superior acetabulum and acetabular dysplasia in subjects at 50 years or younger. Focal concavity of posterior superior acetabulum was evaluated by three-dimensional CT image. Acetabular dysplasia was determined by lateral center edge angle <25°, Tönnis angle >10°, or anterior center edge angle <25° on standing pelvic radiographs. Table 8: Relation between acetabular dysplasia and retroversion in subjects at 50 years or younger with focal concavity of posterior superior acetabulum.
2016-05-12T22:15:10.714Z
2015-11-02T00:00:00.000
{ "year": 2015, "sha1": "7788e0fc001f40b78b88f0e9a8397799c388125e", "oa_license": "CCBY", "oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/s12891-015-0791-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7788e0fc001f40b78b88f0e9a8397799c388125e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16490251
pes2o/s2orc
v3-fos-license
A New Method for Feedback on the Quality of Chest Compressions during Cardiopulmonary Resuscitation Quality of cardiopulmonary resuscitation (CPR) improves through the use of CPR feedback devices. Most feedback devices integrate the acceleration twice to estimate compression depth. However, they use additional sensors or processing techniques to compensate for large displacement drifts caused by integration. This study introduces an accelerometer-based method that avoids integration by using spectral techniques on short duration acceleration intervals. We used a manikin placed on a hard surface, a sternal triaxial accelerometer, and a photoelectric distance sensor (gold standard). Twenty volunteers provided 60 s of continuous compressions to test various rates (80–140 min−1), depths (3–5 cm), and accelerometer misalignment conditions. A total of 320 records with 35312 compressions were analysed. The global root-mean-square errors in rate and depth were below 1.5 min−1 and 2 mm for analysis intervals between 2 and 5 s. For 3 s analysis intervals the 95% levels of agreement between the method and the gold standard were within −1.64–1.67 min−1 and −1.69–1.72 mm, respectively. Accurate feedback on chest compression rate and depth is feasible applying spectral techniques to the acceleration. The method avoids additional techniques to compensate for the integration displacement drift, improving accuracy and simplifying current accelerometer-based devices. Introduction Chest compressions delivered at an adequate depth and rate, allowing full chest recoil, and with minimal interruptions are key to improving survival from cardiac arrest [1–3]. Current cardiopulmonary resuscitation (CPR) guidelines [4,5] recommend chest compression depths and rates of at least 5 cm and 100 min−1, respectively. However, out-of-hospital and in-hospital studies on CPR quality show that delivering chest compressions with adequate rate and depth is difficult, even among well-trained responders [6,7]. The use of real-time CPR feedback devices has contributed to improving the quality of CPR provided by lay people and trained rescuers in both simulated and real-life scenarios [8,9]. The first CPR feedback devices used force/pressure sensors on the assumption of a linear relation between compression force and depth [10–12]. However, the chest has a nonlinear variable stiffness within the compression cycle which varies among individuals [13–16], a fact that has been confirmed on cardiac arrest data with simultaneous force and depth recordings [17]. Consequently, most current CPR feedback devices are based on accelerometers. These devices calculate the instantaneous displacement of the chest, that is, the compression depth (CD) signal, by integrating the acceleration twice [9]. However, noise in the acceleration signal compromises the accuracy of methods based on the double integration. Even a small offset in the acceleration signal produces integration errors that rapidly accumulate, making feedback impossible unless the resulting displacement drift is compensated for at every compression [18]. Over the last decade several drift compensation mechanisms have been conceived, giving rise to complex and sometimes bulky devices that incorporate additional sensors [19,20] and/or use elaborate signal processing techniques [1,21–23]. Accelerometer-based devices calculate rate and depth values for feedback for each compression [24–26].
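The drift problem described above is easy to reproduce numerically: double integration turns even a tiny constant accelerometer bias into a displacement error that grows quadratically with time. A minimal Python sketch with synthetic signals (the 0.01 m/s² bias is an illustrative assumption, not a measured value):

import numpy as np

fs = 250.0                                  # sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)          # 10 s of compressions
f_cc = 110.0 / 60.0                         # 110 compressions per minute
s_true = 0.025 * np.sin(2 * np.pi * f_cc * t)   # displacement (m), 50 mm peak-to-peak
a_true = -0.025 * (2 * np.pi * f_cc) ** 2 * np.sin(2 * np.pi * f_cc * t)

a_meas = a_true + 0.01                      # constant 0.01 m/s^2 sensor bias

# Double integration with a simple rectangle rule.
v = np.cumsum(a_meas) / fs
s = np.cumsum(v) / fs

print(f"drift after 10 s: {abs(s[-1] - s_true[-1]) * 1000:.0f} mm")  # ~500 mm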
Audiovisual feedback to the rescuer is then given at every compression or by averaging these values over the last 3–5 compressions [24,25]. Feedback on rate and depth at every compression seems excessive and may be ignored by the rescuer [9,27]. A more sensible approach to feedback would be to average rate and depth over the last compressions, resulting in feedback times somewhere in the 2–5 s range. This study introduces a new paradigm for accelerometer-based devices. Instead of calculating the CD signal, feedback on the average rate and depth during a short analysis interval is directly computed from the acceleration by means of spectral techniques. Drift compensation or additional sensors would no longer be needed, giving rise to simpler, smaller, and more user-friendly feedback devices. Equipment and Data Collection. A Resusci Anne manikin (Laerdal Medical, Norway) was equipped with a photoelectric sensor (BOD 6K-RA01-C-02, Balluff, USA) to register the actual CD signal, which was used as gold standard. The accelerometer (ADXL330, Analog Devices, USA) was placed in an enclosure which was fixed to the manikin's sternum, and the manikin was placed on the floor, as shown in Figure 1. The three acceleration axes and the CD signal were digitized using an NI-USB6211 (National Instruments) data acquisition card with a sampling rate of 500 Hz and 16-bit resolution. Twenty volunteers received basic compression-only CPR training before participating in two recording sessions: a regular session, in which the vertical axis of the accelerometer was perpendicular to the manikin's chest, and a tilt session, with an 18° misalignment (see Figure 1). These sessions were defined to study situations in which the accelerometer may not be in a fixed position relative to the patient's chest. In each session the volunteers delivered 60 s of uninterrupted compressions eight times, combining different target rates (80, 100, 120, and 140 min−1) and depths (30 mm and 50 mm). A metronome was used to guide compression rate, and a custom-made computer program displayed the CD signal in real time to guide compression depth. The recorded signals were preprocessed with a third-order Butterworth low-pass filter (cut-off frequency 15 Hz) to suppress high-frequency noise and resampled to 100 Hz. Compressions were automatically identified in the CD signal using a peak detector with a fixed 15 mm threshold, and the annotations were then manually reviewed. Mathematical Model. Feedback was calculated for short analysis intervals during continuous chest compressions. If the intervals are short, then it is possible to assume that all chest compressions within the analysis interval are very similar. Mathematically this means that acceleration and CD are almost periodic signals, whose fundamental frequency is the mean frequency of the compressions, $f_{cc}$ (Hz). For each analysis interval, their periodic representation, denoted by $a(t)$ for the acceleration and $s(t)$ for the CD signal, is then a good approximation of the real signals. These periodic representations can be modelled using the first harmonics of their Fourier series decomposition (without DC component): $a(t) = \sum_{k} A_k \cos(2\pi k f_{cc} t + \varphi_k)$ (1) and $s(t) = \sum_{k} S_k \cos(2\pi k f_{cc} t + \theta_k)$ (2). Since the feedback device records the acceleration, the problem is then to obtain $s(t)$ from $a(t)$, knowing that the acceleration and the displacement are related by $a(t) = d^2 s(t)/dt^2$ (3), which, in the general case, involves a double integration of the acceleration signal.
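The harmonic relations follow from differentiating the Fourier series of s(t) twice, term by term. The quick numeric check below (synthetic single-harmonic signal; units assumed as in the text, s in mm and a in m/s²) confirms the amplitude scaling:

import numpy as np

f_cc = 2.0                                    # 120 compressions per minute
S1 = 50.0                                     # displacement amplitude of harmonic 1 (mm)
A1 = (S1 / 1000.0) * (2 * np.pi * f_cc) ** 2  # implied acceleration amplitude (m/s^2)

# Invert with relation (4): S_k = 1000 * A_k / (2*pi*k*f_cc)^2
S1_rec = 1000.0 * A1 / (2 * np.pi * 1 * f_cc) ** 2
print(S1_rec)                                 # 50.0, the original amplitude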
However, for the quasiperiodic approximation, using the Fourier series representations of $\tilde{a}(t)$ and $\tilde{s}(t)$ in (3) yields the following relations between the amplitudes and phases of their harmonics:

$S_k = \dfrac{1000\, A_k}{(2\pi k f_{cc})^{2}}, \qquad \theta_k = \phi_k - \pi$.  (4)

These equations can be used to reconstruct $\tilde{s}(t)$ once $f_{cc}$, $A_k$, and $\phi_k$ are obtained from the acceleration signal.

Spectral Method for Feedback on Rate and Depth. Spectral analysis was used to estimate the harmonics of $\tilde{a}(t)$ needed to reconstruct $\tilde{s}(t)$. In summary, feedback on the mean rate and depth for each analysis interval was obtained following the steps described in Figure 2 (Step 1: select the analysis interval in $a(t)$; Step 2: calculate the FFT; Step 3: harmonic estimation by peak detection, yielding $f_{cc}$, $A_k$, $\phi_k$; Step 4: calculate $S_k$, $\theta_k$ and reconstruct $\tilde{s}(t)$; Step 5: calculate the feedback values for the interval). In Step 1, a Hamming window was applied to the acceleration signal to select the analysis interval. Its 2048-point fast Fourier transform (FFT) with zero padding was computed in Step 2. Then, the first three harmonics and their fundamental frequency were estimated (Step 3). Equation (4) was used to compute $S_k$ and $\theta_k$, which were used to reconstruct $\tilde{s}(t)$ from (2) (Step 4). Finally, in Step 5, feedback on rate and depth was obtained from the reconstructed cycle of $\tilde{s}(t)$ as rate (min⁻¹) = $60 \cdot f_{cc}$ and depth (mm) = $\max \tilde{s}(t) - \min \tilde{s}(t)$. Several characteristics of the method, such as the Hamming window, the number of harmonics, and the number of points used to compute the FFT, were selected using signal processing criteria to guarantee a high accuracy.

Performance Evaluation. To evaluate the accuracy of the method we assumed feedback would be given at the end of each analysis interval; consequently, records were divided into nonoverlapping consecutive analysis intervals of duration $T_w$. For each analysis interval, the feedback for rate and depth obtained by the method was compared to that obtained from the distance sensor placed inside the manikin. First, the mean rate and depth per record were analysed for the different targeted CPR test conditions. The distributions of the mean rate and depth did not pass the Kolmogorov-Smirnov normality test and are presented as median (5th-95th percentiles). The median values obtained from the gold standard and the method were compared using the Mann-Whitney test, and differences were considered significant for p values under 0.05. Then, errors in rate/depth feedback were obtained for every analysis window. The root-mean-square error (RMSE) of all feedbacks in a session (regular/tilt) was used to measure the global accuracy of the method as a function of the duration of the analysis interval, $T_w$. Finally, a Bland-Altman analysis [28,29] was conducted for $T_w$ = 3 s to assess the agreement on feedback between the gold standard and the method, and the 95% limits of agreement (LOA) were obtained.

Results The dataset comprised 320 60 s records with a total of 35 312 compressions. Table 1 compares the mean rate and depth per episode obtained from the gold standard and the method when $T_w$ = 3 s. There was no significant difference between the method and the gold standard for any of the CPR target conditions. Figure 3 shows the RMSE as a function of $T_w$ for the tilt and regular sessions. For $T_w$ between 2 and 5 s the RMSE for rate and depth were below 1.5 min⁻¹ and 2 mm. Finally, Figure 4 shows the Bland-Altman plots of the difference between the method and the gold standard for $T_w$ = 3 s. For the regular session, the differences in feedback for rate and depth showed a 95% LOA of −1.64 to 1.67 min⁻¹ and −1.57 to 1.57 mm, respectively.
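The five steps translate almost directly into code. The sketch below is a minimal illustration, not the authors' implementation: the 1-3 Hz search band, the window-sum amplitude normalization, and the synthetic test signal are assumptions made here for demonstration purposes.

```python
import numpy as np

def spectral_rate_depth(accel, fs, n_harmonics=3, n_fft=2048):
    """Estimate mean compression rate (min^-1) and depth (mm) for one
    analysis interval of acceleration (m/s^2), following Steps 1-5 above.
    Peak picking and amplitude normalization are simplified assumptions."""
    window = np.hamming(len(accel))
    # Step 1: select the analysis interval (Hamming window).
    windowed = accel * window
    # Step 2: zero-padded FFT.
    spectrum = np.fft.rfft(windowed, n_fft)
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    # Step 3: fundamental f_cc = largest spectral peak in a plausible
    # compression band (60-180 min^-1 assumed here).
    band = (freqs > 1.0) & (freqs < 3.0)
    f_cc = freqs[band][np.argmax(np.abs(spectrum[band]))]
    S, theta = np.zeros(n_harmonics), np.zeros(n_harmonics)
    for k in range(1, n_harmonics + 1):
        # Amplitude A_k and phase phi_k of the bin nearest k * f_cc.
        idx = np.argmin(np.abs(freqs - k * f_cc))
        A_k = 2 * np.abs(spectrum[idx]) / window.sum()
        phi_k = np.angle(spectrum[idx])
        # Step 4: relation (4): S_k = 1000 A_k / (2 pi k f_cc)^2,
        # theta_k = phi_k - pi.
        S[k - 1] = 1000 * A_k / (2 * np.pi * k * f_cc) ** 2
        theta[k - 1] = phi_k - np.pi
    # Reconstruct one cycle of the depth signal s~(t) from (2).
    t = np.arange(0, 1 / f_cc, 1 / fs)
    s = sum(S[k - 1] * np.cos(2 * np.pi * k * f_cc * t + theta[k - 1])
            for k in range(1, n_harmonics + 1))
    # Step 5: feedback values for the interval.
    return 60 * f_cc, s.max() - s.min()

# Self-check on a synthetic 3 s interval: 110 min^-1, 45 mm peak-to-peak.
fs = 100.0
t = np.arange(0, 3, 1 / fs)
f_true = 110 / 60
depth_m = 0.0225 * np.cos(2 * np.pi * f_true * t)   # displacement (m)
accel = -(2 * np.pi * f_true) ** 2 * depth_m        # exact acceleration
rate, depth = spectral_rate_depth(accel, fs)
print(f"rate ~ {rate:.1f} min^-1, depth ~ {depth:.1f} mm")
```

On this synthetic interval the sketch recovers roughly 110 min⁻¹ and 45 mm, with the residual error set by the FFT bin spacing. Real compression signals contain more harmonic content and noise, which is why the method estimates three harmonics rather than one.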
For the tilt session, the differences in feedback for rate and depth showed a 95% LOA of −1.59 to 1.61 min⁻¹ and −1.69 to 1.72 mm, respectively.

Discussion CPR feedback on chest compression rate and depth improves the quality of CPR both during training [25,30] and in the field [27,31,32]. Currently, most real-time devices for CPR feedback are based on the double integration of the acceleration, which inevitably requires adding drift compensation techniques [18,33] that result in bulky devices and/or occasional inaccurate depth feedback [19,34]. This study presents, to the best of our knowledge, the first accelerometer-based method for feedback on rate and depth of chest compressions that avoids the drift problem. The method is based on simple and optimised spectral techniques, making it computationally very efficient. These considerations would greatly simplify current accelerometer-based devices. The accuracy of the method was tested on a manikin platform. This allowed the recording of the actual instantaneous chest compression depth for use as gold standard, but also the testing of the algorithm for a wide range of controlled conditions: different rescuers, target depths and rates, and the influence of the relative position of the device and the chest (regular versus tilt). Misalignment between the device and the chest was tested for two reasons. First, although the device is usually in contact with the patient's chest, the sternum may not be completely horizontal due to anatomical considerations, even when the patient is in the supine position. Second, other suitable positions of the device could be envisioned, such as on top of the hand or fixed to the wrist. In those situations tilt may vary during chest compressions. In either case, there were no significant differences in rate and depth feedback between the gold standard and the method. Moreover, for all the tested conditions the RMSE for rate and depth were below 1.5 min⁻¹ and 2 mm, respectively, which guarantees very accurate feedback for analysis intervals in the 2-5 s range. Furthermore, the Bland-Altman analysis revealed that all individual feedbacks were very accurate and that the method is reliable because it did not present outliers. The method presented in this study directly estimates the mean rate and depth for feedback without the need to obtain the instantaneous CD signal. This avoids the need to integrate the acceleration signal twice. Double integration introduces a large displacement drift [18], which has to be compensated. Over the years several techniques have been developed to correct the displacement drift. Some solutions correct the drift for each compression cycle. This involves the detection of the start of each compression using either additional force sensors [18,20] or a combined analysis of the CD and the ECG signals [21]. Others compensate for the drift adaptively using filters based on additional reference signals such as force, blood pressure, ECG, or thoracic impedance [19]. However, incorporating additional sensors makes the feedback device more complex, and recording the ECG binds the feedback device to the defibrillator. Alternatively, solutions based exclusively on signal processing techniques have also been developed to minimize or cancel the drift [1,22,35]. However, these techniques may introduce errors in depth as large as 6 mm for 95% of the cases [24].
The spectral technique introduced in this paper is more robust to acceleration noise because it only estimates three harmonic components of the acceleration for an accurate feedback. Improvement of CPR quality relies on two key factors: real-time monitoring of CPR parameters and debriefing [36][37][38]. Real-time feedback in short time intervals is demonstrated in this study. Debriefing could easily be implemented simply by storing the rate and depth feedback values for each interval. These values could then be used to obtain a post-resuscitation scorecard with global measures of CPR quality and graphs of the time evolution of rate and depth [39]. The method shares two common limitations of all accelerometer-based devices. First, accurate depth feedback is compromised if there is incomplete chest recoil, that is, rescuer leaning [38]. The actual depth of a compression is the displacement of the patient's sternum from its resting position towards the spine. Accelerometer-based devices are accurate only if the sternum returns to its resting position on every compression [23]. Otherwise, the only solution is to detect incomplete chest recoil and then launch an alarm to correct excessive leaning [38]. Second, the study was conducted with the manikin resting on a hard incompressible surface. On softer surfaces depth is overestimated as the sum of the sternum-spine displacement and mattress compression [40]. This drawback can be corrected by using two aligned accelerometers (chest and back) and processing the difference of the recorded accelerations [26,35]. Our method can be directly adapted to use the difference of the two accelerometers, as the sketch after the Conclusion illustrates. We demonstrated the accuracy of the method during continuous chest compressions. However, a full evaluation of the method using retrospective out-of-hospital cardiac arrest records in which the acceleration signal is available is still needed. Such a study would serve to evaluate the feasibility and reliability of the method in real resuscitation scenarios, where pauses in chest compressions are frequent and the acceleration patterns may differ from those generated in manikins.

Conclusion This study introduces a new paradigm in accelerometer-based CPR feedback devices because it allows calculating rate and depth values for feedback without reconstructing the instantaneous CD signal. It avoids additional techniques to compensate for the drift caused by accelerometer noise and double integration, thus simplifying feedback devices. Feedback is accurate for analysis intervals of a few seconds during continuous chest compressions. Further studies with retrospective episodes would serve to evaluate the feasibility and reliability of the method in a real resuscitation scenario.
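As a closing illustration of the mattress correction mentioned in the Discussion, the following sketch is hypothetical: it reuses the spectral_rate_depth() function from the earlier sketch, and the mattress amplitude is an assumed value. Because both sensors record the mattress motion, that component cancels in their difference, and the same spectral pipeline applies unchanged.

```python
import numpy as np

# Hypothetical soft-surface scenario; assumes spectral_rate_depth() from
# the earlier sketch is in scope.
fs = 100.0
t = np.arange(0, 3, 1 / fs)
f_true = 110 / 60
sternum = 0.0225 * np.cos(2 * np.pi * f_true * t)   # true chest compression (m)
mattress = 0.010 * np.cos(2 * np.pi * f_true * t)   # mattress compression (m)
w2 = (2 * np.pi * f_true) ** 2
accel_chest = -w2 * (sternum + mattress)  # chest sensor sees both motions
accel_back = -w2 * mattress               # back sensor sees the mattress only
rate, depth = spectral_rate_depth(accel_chest - accel_back, fs)
print(f"rate ~ {rate:.1f} min^-1, depth ~ {depth:.1f} mm")  # ~110, ~45
```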
Providing End-of-Life Care to COVID-19 Patients: The Lived Experiences of ICU Nurses in the Philippines

In the midst of COVID-19, radical change in the work environment further exacerbated the detrimental effects of critical illness in the intensive care unit (ICU). This may be heightened if the patient experiences a lamentable end-of-life experience due to inadequate end-of-life care (EoLC). Drawing on the theory of bureaucratic caring and the peaceful end-of-life theory, insights can be gained into the motivations and behaviors that support the delivery of palliative care during COVID-19. With this in mind, the objective of this study was to use a narrative approach to examine the lived experience of 12 nurses who provided EoLC in the COVID-19 ward of several hospitals in the Western Philippines. Participants' narratives were transcribed, translated, and analyzed. Among the themes that emerged are: establishing a peaceful journey to death, holistic caring for the end of life, venturing into risky encounters in the call of duty, staying close amidst the reshaped work environment, and preparing the family life after a loved one's departure. The study identified the importance of assisting patients on their journey to a peaceful death, but this journey was also accompanied by a sense of self-preservation and safety for colleagues and families.

Introduction A patient's quality of life is already compromised by critical illness if they are in the intensive care unit (ICU), and this can be exacerbated by the appalling quality of death and indignity associated with poor end-of-life care (EoLC) [1]. Prior to the pandemic, ICU nurses played a key role in providing care to critically ill patients by utilizing a variety of technologies, assessing and managing complex disease conditions, and providing appropriate care with respect for other health care professionals [2]. This vital role spans a wide range of clinical settings, including the care of patients with serious illnesses and the integration of their families, while advocating for quality EoLC [3]. These situations put nurses in a precarious position to make decisions for patients who are more often unable to make the decisions for themselves, resulting in adverse effects for ICU nurses, such as emotional and psychological burnout and reduced job satisfaction [4,5]. For more than two years, COVID-19 has been impacting ICUs in the Philippines with cases that are highly prevalent and morbid. Patients frequently present with worsening symptoms and an elevated need for palliative, supportive, and EoLC [6]. Moreover, with the constant fear of becoming infected, those in the profession have become very stressed [7][8][9][10]. It is evident that the way ICU nurses work has changed in the context of the pandemic [11,12]. Without a doubt, the efforts of nurses to provide life-saving procedures during COVID-19 are very commendable [13,14]. However, limited information is still available on the experiences of COVID-19 ICU nurses working in the Philippines in relation to the EoLC process. In the midst of the COVID-19 outbreak, hospital protocols have radically changed, presenting multiple obstacles to quality EoLC [15,16]. For example, family members are not allowed to stay and be physically close to patients, while nurses assigned to COVID-19 patients in ICUs face many restrictions. Additionally, discussions at EoLC family conferences have become more difficult as family members are restricted to virtual meetings.
Taken together, these situations add to the vulnerability of the mental and physical health of ICU nurses [17]. Against this background, it is therefore important to identify aspects of care and implementation obstacles that are essential for the care of critically ill ICU patients at the end of their lives. In order to achieve this, it would be worthwhile to first understand and analyze the key experiences of ICU nurses in this area. In particular, it is important to undertake such a study in the Western Philippines, where this type of study is relatively scarce. The current study is based on the concepts of the theory of bureaucratic caring [18] and the peaceful end-of-life theory [19]. These theories provide insights into the purpose and behavior that support EoLC provision even when no protocols are in place. Based on the theory of bureaucratic caring, the provision of care is emphasized within organizations. Caring may be characterized by its dynamic relationships between right action and charity, between love and compassion expressed in response to suffering or need, and by fairness or justice regarding what should be done [20]. Having an understanding of how human beings, environments, and circumstances are interconnected is fundamental to a better understanding of this type of theory. In addition, it provides a specialized perspective on how healthcare institutions, healthcare systems, organizational bureaucracies, and nursing care are integrated and coordinated with each other, as wholes and components in the health and organizational system [21]. On the other hand, the peaceful end-of-life theory focuses on the structural setting, that is, the family system, which includes both the terminally ill patient and all other family members who are being cared for in an acute care setting. Quality of life is therefore seen as an essential concept in providing high-quality EoLC [22,23]. Ruland and Moore [19] assume that feelings and events during the end-of-life experience are individualized and personal. The goal of EoLC is not to optimize the care provided, but rather to maximize treatment that includes the best possible care, including technological and comfort measures, to improve and achieve quality EoLC [24]. Suffering outside of physical ailments is not readily understood, but alleviating suffering is a fundamental goal of EoLC and is necessary to achieve comfort and a peaceful end-of-life experience. In this context, quality EoLC can be considered an invaluable aspect of care for critical COVID-19 patients. Undoubtedly, COVID-19 has brought fear and pain not only to patients but also to nurses in the ICU [25,26]. In addition to the stress and fear of contracting the virus [27], nurses' accounts from previous lived experience studies describe their unrelenting efforts to provide comfort and care to their patients [28][29][30]. Furthermore, nurses too are required to use a variety of coping mechanisms to get through difficult times [31,32], whether these are religiously oriented [33] or involve counselling among peers [34,35]. More importantly, amidst all of these factors, nurses are expected to be resilient as the work paradigm changes [36,37], while maintaining close communication with family members of the patients [38]. Both the theory of bureaucratic caring and the peaceful end-of-life theory provide fundamental support for the importance of ensuring that the patient achieves a comfortable, dignified, and peaceful end-of-life experience; achieving such an end of life requires complex and holistic nursing care.
These theories provided the basis for explaining the various untold experiences encountered by ICU nurses during the COVID-19 pandemic, as various factors influence the spiritual and ethical care of dying patients.

Study Design This study was designed within a narrative perspective in which ICU nurses provide personal accounts or stories reflecting their experiences with COVID-19 patients during EoLC [39]. It also presents the sum of the experiences of several individuals who have all undergone the same phenomenon. Due to the COVID-19 pandemic, the term lived experience is frequently used to describe the journey that nurses undertake [40,41]. This reflects a desire to understand how a person's experience, in this case the experience of the ICU nurse, is absorbed into their awareness, and, consequently, to understand the significance of such experience.

Participants Study participants were twelve (12) carefully selected ICU nurses deployed to COVID-19 units from multiple hospitals in the Western Visayas region of the Philippines. The participants were chosen based on assurances that they hold relevant experience and possess a genuine understanding and viewpoint of the phenomenon being considered [42]. Inclusion criteria were: currently employed as an ICU nurse, at least six (6) months' experience as a nurse in the ICU, at least one (1) month of experience in the COVID-19 critical care area in the hospital, experience providing EoLC to a critically ill or dying COVID-19 patient who may or may not be on do-not-resuscitate (DNR) status, within the age range of 25 to 45 years old, either male or female, holding at least a bachelor's degree, either single or married, working in either a private or public (government/state) hospital, and having knowledge of providing EoLC. Table 1 shows the participants' backgrounds, career descriptions, and pseudonyms. Of the 12 participants, eight were male and four were female. The average age was 34 years old. Eight work in private hospitals, while the remaining four work in state or public hospitals. The average length of service as a nurse is nine years, with all participants having at least three years of ICU experience. In terms of experience with COVID-19 patients in the ICU, the majority of participants had more than five months of experience, whereas two participants had one and two months of experience, respectively.

Data Collection and Analysis Recruitment of participants began after the study protocol was reviewed and approved by the University of St La Salle Graduate Program review panel and ethics committee. The participants were initially recruited using the purposive sampling method guided by the aforementioned inclusion criteria. To bolster recruitment, the snowballing technique, in which hospital contacts recommended participants who met the research criteria and who might also be willing to participate in the study [43], was used. This method was selected specifically for its ability to reach hard-to-find populations [44]. The final sample size of 12 participants was set once saturation was reached, that is, when no new concepts or themes emerged during the interviews [41]. Potential ICU nurses assigned to different hospitals were contacted and invited through Facebook Messenger, e-mail, or mobile phones. As part of the informed consent process, all information about the study was disclosed to the participants.
The study was initiated by sending a letter of request and an informed consent form which included information about the study, such as the introduction and purpose, participant selection, a clause for voluntary participation, procedures, risks and benefits, and a confidentiality agreement. Once consent was obtained, a schedule for the interview was arranged. Participants were free to choose which method of online communication was most convenient for them (Zoom, Facebook Messenger, Google Meet, Skype). The interview was scheduled to last for 30 min to an hour. Participants were also encouraged to select a convenient time and location that would enable them to be comfortable, relaxed, and at ease during the interview. In view of the ongoing COVID-19 pandemic, the participants were initially offered the option of conducting the interview online. However, if the participant decided to conduct the interview face-to-face, precautions were taken accordingly. In total, only two participants chose to conduct the interview face-to-face, while the remaining ten chose online interviews. Prior to the interview, participants were reminded that the session would be recorded and that they may stop the interview at any time. All interviews were conducted in English. However, participants were free to use any language they wished to share their thoughts. In the later transcription of the data, non-English languages were carefully translated into English. Participants were also asked to select their preferred pseudonym to be used for the transcription. After the transcription was completed, the participants were asked to validate the accuracy of the data gathered. A description of the ICU nurse's experience was derived and acquired through the use of narratives or stories about their lived experience during their involvement in the caring of the dying COVID-19 patient. A guiding context was provided to encourage participants to share more about key moments in the provision of EoLC. For instance, an overarching question was asked: "What is your experience of providing EoLC to a dying COVID-19 patient?". Second, several preset probing questions were also used to help confirm and clarify certain descriptions or statements made by the participants. Examples of probing questions were: "What are the different things you do while providing EoLC?", "What was the patient's response like while receiving EoLC?", and "Is there anything else that you would like to discuss in relation to your experiences in EoLC?". The content of the guide also enabled the determination of the profile of the participants, which included their age, gender, marital status, educational level, length of service, hospital type (whether private or public), hospital location, and the number of months being assigned to a COVID-19 critical care unit. Data analysis followed Colaizzi's [45] method. This approach provides a rigorous analysis through a unique seven-step process in which each step remains close to the data in order to ensure the credibility and reliability of the results, as well as exposing emerging themes and their interwoven relationships [46]. This approach depends on the rich first-person accounts of experience that may emerge from interviews conducted online or face-to-face [47]. 
The seven steps in Colaizzi's [45] descriptive method include the following: (1) familiarization by reading and re-reading the ICU nurses' narrative transcripts multiple times, (2) identifying and extracting significant statements from the participants that relate to their profound experience while providing EoLC, (3) forming meanings from the statements of the participants, (4) grouping and organizing the ICU nurses' statements to form relevant and meaningful themes, (5) developing an exhaustive description of the thematic findings, (6) constructing the basic structure, and (7) seeking verification of the basic structure and validating the results by returning them to the participants and confirming the results against their experiences [45]. Experiences and thematic statements of the participants were highlighted in order to generate a series of themes as to how ICU nurses experience EoLC on a day-to-day basis. The intuitive process was completed by establishing rapport and contact with the participants, paying particular attention to their daily activities and absorbing their lifestyle. Intuition refers to the researchers' attempt to enter into the situation by identifying with a feeling or thought of the participants that they have experienced themselves. This occurs when researchers remain open to the meanings attributed to the phenomenon by those who have experienced it [48].

Results and Discussion From the narratives of the participants, five main themes were identified that are consistent with both the theory of bureaucratic caring and the peaceful end-of-life theory: establishing a peaceful journey to death, holistic caring for the end of life, venturing into risky encounters in the call of duty, staying close amidst the reshaped work environment, and preparing the family life after a loved one's departure.

Theme I: Establishing a Peaceful Journey to Death Patients with COVID-19 infection and the novelty of this disease pose a complex set of complications, particularly in patients with co-morbidities and the unvaccinated [49]. In caring for dying COVID-19 patients, the primary goal of most participants was to ensure that the patients could have a peaceful death at the end of their lives. Several instances were identified in the narratives reflecting the participants' experience in delivering EoLC. In addition, the results also show the various core components that form a cluster of aspects that ensure a peaceful end of life for dying COVID-19 patients. For instance, participants shared their perceptions of what a peaceful death looks like. Geoffrey mentioned: " . . . they already consent that they will not move to the aggressive side, because they already accept the situation; they know that the end is actually death, so probably that would be one thing where I could really say that the patient died peacefully." This means that acceptance comes not only from the patient, but also from the relatives who allow the patient to go to the afterlife with no baggage, which can be symbolized through advance directives or DNR forms. With these vital consents, the patients are allowed to move on peacefully, unlike those with no DNR, as stated by Siegfried: "I could really see and say that the patient was really at peace if they are on DNR status, because they become flat on their own."
Importantly, although the global pandemic has already claimed the lives of millions of people, ICU nurses have retained an innate desire to help patients end their lives in comfort by providing EoLC. These efforts can be seen through the three emerging subthemes: witnessing acceptance of mortality and the inevitability of the end of life, assisting in a symptom-free death and the alleviation of suffering, and fostering a graceful and dignified death. Witnessing Acceptance of Mortality and Inevitability of End of Life This is relative to ICU nurses who witnessed dying COVID-19 patients say "yes" to death. When death is due to a natural cause associated with old age and there is an ability to make meaning of death, there is a neutral acceptance of the end of life and mortality [26]. The participants asserted that patients who were able to accept their eventual voyage to the afterlife were able to verbalize appreciation and actual acceptance when their spirituality and wellbeing were addressed. Acceptance is not only widespread among patients, but also among their loved ones, as they religiously elevate their lives to God when inner burdens are removed and problems with loved ones are resolved. Participants shared how spiritual enlightenment helped their patients achieve peace. Shiloh mentioned: "No matter how toxic the patient is or how grumpy the people are; they are spiritually enlightened about things that are happening." Most participants believed that when treating and healing critical COVID-19 patients is already a lost battle, giving comfort and peace is the highest priority. Witnessing acceptance of inevitable death comes in various forms, such as verbalizing, as stated by Winifred: "She (patient) said thank you, grateful that she could speak to the chaplain before she died. So her spiritual being is complete. She was able to speak. Then we talked about what she felt . . . " Another form is when the patient or the family of the comatose patient signs an advance directive, as stated by Geoffrey: "The patient has advance directives . . . , especially if the patient himself signed the advance directives, so, probably, if everything else is exhausted . . . , let's say his own breathing, his own mechanism, his own heartbeat; probably I could say that he died peacefully, because that's his choice . . . " Essentially, the participants believed they had experienced how they had helped the patient through various interventions with empathy and caring. Assisting in Symptom-Free Dying and the Alleviation of Suffering In the course of the disease, dying COVID-19 patients have displayed diverse types of symptoms including fever, cough, difficulty breathing, and pneumonia-related symptoms, as well as abdominal pain, chest pain, and headaches [25]. Severe symptoms are one of the main causes of suffering in dying COVID-19 patients. It is the anguish caused by the experiences perceived by the patients' loved ones that drives them to take their patient out of suffering and sign the DNR forms. Callum explained: "Occasionally, family members decide against intubation because they do not wish to witness their family member suffer further and add to the agony of the patient." The narratives also disclosed various methods for managing symptoms and pain; for instance, pharmacologic approaches through pain medication and sedation for patients. Solomon noted that: "So, in the end, I hope that my support will somehow ease their suffering. Especially the family members. 
Sedation is the most effective measure to alleviate the patient's suffering. It is better to die comfortably than in agony." A participant indicated that a doctor stated that sedation may also cause a slight degree of amnesia, so that the procedure will not be as traumatic for the patient. This was affirmed by the statement of Serena when she said: "Being sedated also makes you feel as if you are experiencing amnesia. Therefore, you will not remember the time that you were intubated, . . . as being intubated and in the hospital is a very traumatic experience." Consequently, it is imperative that different methods be attempted for managing symptoms and pain. Studies have shown that even the use of warm blankets can reduce agitation, pain, and the use of analgesics. Additionally, displaying visually appealing elements such as family photographs or natural scenery may encourage patients to relax [28]. Fostering a Graceful and Dignified Death According to findings, participants in either public or private hospitals continue to be committed to ensuring a dignified death. From the participants' experiences, safeguarding privacy, respect, and confidentiality preserves the dignity of the dying. This involves dressing the patients, particularly the female patients, preventing exposure of the private parts and utilizing blankets to conceal their bodies. The integrity and dignity of the patient are maintained regardless of whether the patient is conscious or not. Solomon affirmed this by stating: "Although a patient is dying, you continue to provide them with nursing care. I give the same level of care as I would to other patients who are not terminal. Nothing has changed. Both dying and non-dying patients receive the same care . . . ". Nurses who specialize in ICUs provide post-mortem care to patients not only during the dying process, but also after the patient dies. Solomon added: "Even when they were on the verge of death, our daily care ensured that at least they felt that they were not worthless," and " . . . that they are still human beings. Even if they have a very poor prognosis, they are still effectively treated . . . " as expressed by Irene. Important here is that ICU nurses are cognizant of the reality of death and commit themselves to making it as painless, comfortable, and dignified as possible, despite the environment of critical care that encourages a paradigm of curing rather than one of caring [29]. Nurses serve as an invaluable substitute for relatives who cannot be with their patients. In providing the patients with the assurance that they are not alone, the nurse also safeguards the patients' privacy and integrity, ensuring that they die in dignity [30]. In addition, participants emphasized that maintaining dignity for dying COVID-19 patients should extend throughout the dying process and into the final moments of the patient's life in the ICU. Milo revealed: "While the Glasgow Coma Scale (GCS) score of the patient is 3 and the tube has already been removed, you should position the patient correctly and clean them if needed. At least the patient retains their dignity until the end of their life. Ensure that they are still presentable when they pass away." In sum, ICU nurses strive to minimize the anxiety and discomfort of dying patients. Moreover, ICU nurses believe that dying patients should have some control over how and with whom they spend their last moments. 
Importantly, despite the presence of a crisis standard of care within which the healthcare team and nurses are operating, it is vital that patients continue to receive compassionate EoLC [50]. According to the theory of bureaucratic caring, this compassionate EoLC represents the concept of caring. As noted earlier, the concept of caring may be seen as a dynamic relationship between service and correct action. When caring for dying patients, it is important to ensure a dignified death, which includes respecting their wishes and ensuring their comfort. When it comes to the care of patients at the end of life, the primary goal must be to eliminate all pain and fear. Theme II: Holistic Caring for the End of Life The second theme relates to the participants' perceptions of patient needs and how they provide holistic EoLC to their dying COVID-19 patients. Specifically, these interventions are categorized into perception of mental support, physical support through nursing interventions, emotional support, spiritual care, and social support. These experiences were confirmed by Ray's theory of bureaucratic caring, which discusses how nursing is a spiritual, relational, ethical, and holistic profession that seeks the good of both self and others [18,20]. According to Serena, "I made sure I provided a holistic approach since I believe there are four levels to holistic care within a patient's life: physical, emotional, social, and spiritual." For the participants, the physical, emotional, social, and spiritual care and support are all interconnected. Even during COVID-19, the implementation of holistic EoLC is still possible. Milo added that, " . . . despite the pandemic, we continue to provide proper care. As far as feeding and suctioning are concerned, they remain the same. We continue to clean the patient, bathe the patient, and provide holistic care to the patient." Furthermore, these statements demonstrated that a holistic approach leads to not only a peaceful death, but also a dignified one. Geoffrey reiterated, " . . . throughout the end of life journey how meeting the patient's needs equates to a peaceful death. In my opinion, it could be considered decent dying. If you were able to address the concerns of the patient holistically, then you may consider it; the dying process is actually decent." There may have been differences in hospital policies (between private and public institutions). In spite of this, nurses retain their ability to care for patients and address their needs in a holistic manner. Even with the fear of contracting COVID-19, and the limited time spent at the patient's bedside, the statements of the participants appear consistent in their approach to the holistic care of the dying. Furthermore, despite the fact that COVID-19 patients are dying and have a poor prognosis, holistic care may continue to influence their mental health and satisfaction [51]; hence, it is still critical to emphasize the individualized aspect of care. Several sub-themes were identified from the clusters of statements: the shift from a negative to a positive outlook on death, giving relentless bedside care for physical needs, lifting burdened emotions over adversity regardless of patient's response and consciousness, instilling faith to embrace fate, and surrounding by the presence of the intangible loved ones. The Shift from a Negative to a Positive Outlook on Death In this sub-theme, participants described different interventions provided to dying COVID-19 patients with regard to their mental health. 
Considering the proliferation of information regarding COVID-19's horrific effects on many lives, participants experienced encounters with dying patients who already knew that their death was imminent. Patients are mostly alone in their respective rooms and experience mental difficulties as a result of their knowledge of their condition. During the course of the study, the participants witnessed patients going through the stages of denial, anger, bargaining, depression, and acceptance (DABDA). As Milo noted, "They have certainly reached the point where they already know that they are dying and they could not survive this condition." Axel added, "Knowing that they are on the brink of death, their struggle is intense." Despite their own physical and mental struggles, ICU nurses still attempt to shift the perspective of dying COVID-19 patients. A perspective shift is the process of switching from one's normal or usual viewpoint in order to discover new ways of thinking and understanding. Furthermore, in spite of the limited amount of time allowed inside the ICU, some participants attempt to converse with the patients and to provide them with comfort, especially if they are conscious. One of the participants, Serena, has shared that she "sometimes shares some happy experiences with the patients so that they can at least try to imagine the world outside, or perhaps even imagine a better life for themselves. It is critical to maintain a positive attitude. At the same time, try to impart a positive outlook on them." Giving Relentless Bedside Care for Physical Needs In the second sub-theme, participants describe their experiences performing different procedures to address the physical needs of patients during EoLC. Despite being critically ill and having a poor prognosis, the patient was consistently fed on time, whether by tube feeding or parenteral feeding. The following findings illustrate how the physical support aspect of holistic care was implemented, despite the fact that death is an inevitability. As part of this process, bedside procedures such as suctioning, hygiene, bathing, and turning are performed. Unless a waiver or DNR directive calls for the holding or refusal of medications, laboratory tests, or procedures, the routine procedures integrated into the EoLC of the dying patient continue in order to maintain the patient's comfort. In the ICU, nurses persevere until the patient passes away on their own. According to Geoffrey, " . . . we wish to address the patient's concerns holistically as much as possible. In such a case, we should maintain the patient's care, coordinate with the doctor, provide good nutrition, monitor the patient's vital signs, administer available medications." Since the pandemic began, frontline healthcare workers have been working tirelessly. Having a relentless mindset is what allows one to continue to survive and overcome even when others are unable to do so [52]. Patients were provided with physical care regardless of their status or prognosis. According to Siegfried, "No matter how busy, we always make sure that the medicine is delivered on time." Relentless bedside care also includes providing comfort through touch and voice. It involves gestures of comfort provided by the participants while caring for the dying COVID-19 patients. Providing comfort, care, and solace to dying patients and their families is an important role nurses play, as they help them accept and cope with death [53].
Additionally, the voice of comfort on the deathbed provides reassurance to the patient that they are still being cared for, regardless of whether or not they are responsive. Furthermore, the soothing touches and prayers of participants can also help ease discomfort. Serena stated, "This prayer provides relief and comfort to both our patient and the family... Even holding the hand will bring comfort to the patient, making sure they are not alone in this world." Touch plays an integral role in caregiving, which has been proven to reduce pain in patients. The peaceful end-of-life theory holds that feelings and events experienced at the end of life are personal and individualized; the gesture of "touch", in this case, facilitates this connection [19]. Lifting Burdened Emotions over Adversity Regardless of Patient's Response and Consciousness The third sub-theme pertains to the findings regarding the participants' ability to provide emotional support to patients who are burdened with emotions. A common practice among ICU nurses was to maintain professionalism and empathy by putting themselves in their patients' shoes. As Milo stated, "It is essential to remain professional while also being empathetic to them and continue to provide them with care, meet their needs, and provide them with emotional support." Participants maintained communication with dying COVID-19 patients while they were conscious, responsive, or comatose without responding. It was important for the participants not to show any signs of weakness or distress as they understood that the patients were already emotionally exhausted. According to Winifred, " . . . talking to dying COVID-19 patients will help lessen their pain and divert their attention from the pain they are experiencing." Patients are also reassured that their families are doing everything possible for them and that they should follow the doctors' recommended regimen. Irene shared, "We reassure the patients that their family is doing all they can to help." The goal is to give patients a sense of security, knowing that their family cares about them and will not leave them alone. Along with providing emotional support through communication, a common experience for the participants is therapeutic touch. Using the hands that bring tranquility was an effective technique of providing emotional support and providing the dying patients with the sense that they were not alone. Solomon stated that he sometimes "asks the family member which is the patient's favorite song, then simply asks them to play that music." It is evident that lifting the emotional burdens of the patient is essential, regardless of their level of consciousness. It is critical to communicate with the patients and remind them that while these events are beyond their control, they can still learn to shift their reactions and emotions [54]. Instilling Faith to Embrace Fate In this sub-theme, participants discussed how they provided spiritual care to their dying COVID-19 patients. The pandemic has affected the way in which ICU nurses perform their duties, and this has not spared pastoral or spiritual care for the patient. All participants agree that it is crucial to pray for and with the patient. Milo stated that "saying a prayer for them will always make them feel strong and hopeful." However, Geoffrey emphasized that before providing this care, "we should first verify their religious beliefs and practices." 
Despite knowing the outcome of the patient's journey, the participants still made sure that the patient received prayer to the full extent of their abilities. Another challenge was the muffled sound of the prayer behind the mask and personal protective equipment (or PPE); the participants, however, still pray with the patient. It is believed among participants that the last sense to disappear is the sense of hearing, therefore spiritual care is extended to even the comatose. Because of strict COVID-19 health protocols, the participants resorted to virtual services like Zoom or other online platforms. They also give the patients some religious items like a rosary or a Bible. Siegfried said, "For others, they just use a bible and a rosary. We give it to the patient. Then, for other patients who are already unconscious, they would use a cellphone with a recording that replaces prayer. So we just play and let the patient listen." In order to die peacefully, patients may wish to reexamine and reiterate their beliefs. This is because the end of life is often the time when spiritual matters are brought to the forefront. Most religions emphasize living purposefully, involving submission to a divine entity and offering rituals that provide comfort and influence to patients and their families during their final days. As the pandemic unfolds, participants acknowledge the role of family and religious leaders in fostering spiritual renewal and meaning [33] within the provision of quality EoLC. Surrounding by the Presence of the Intangible Loved Ones In this sub-theme, participants discussed methods of facilitating the continuous presence of dying patients' loved ones via virtual means. During COVID-19, family members can only communicate using cellular phones, video calls online, or digital platforms; physical contact is not possible. Participants reported that their tangible presence was helpful in comforting the patient. Facilitating human contact and emotional connection between the family and the patient provided an invaluable source of comfort. Olivia stated, "They are thankful that we helped them connect with their loved ones." The participants also provided heartwarming items such as pictures and cards with messages in order to increase the patients' well-being. During significant occasions, the participants felt they represented the families of the patients. As Axel mentioned, "I believe it was the birthday of one of the patients. To celebrate, we placed colorful balloons inside the room. Although I cannot recall if flowers were given, there are balloons in the room. We decorate the whiteboard and write Happy Birthday on it." Due to the absence of a loved one, the participants served as the primary social support for the dying COVID-19 patients. According to Milo, "I have also had experiences in which the patient was already in a critical condition. Even though he was unable to hear me, I continued to speak to him. I told him stories. I just continued to talk to him." In fact, the presence of a significant other, even with the use of virtual platform, has been shown to mitigate the effects of stress [31]. Even if the significant others are not tangible, their virtual presence through video or audio calls can help support the struggling COVID-19 patients. In other words, connecting to immediate family members is critical to delivering quality EoLC [19]. 
Theme III: Venturing into Risky Encounters in the Call of Duty Theme III consists of a cluster of statements related to the participants' perceptions of their role, their decisions, their feelings, their purpose, and their mission during their experience of providing EoLC to a dying COVID-19 patient. The following statements describe the events that occurred within the ICU in relation to the provision of lifesaving measures and EoLC by healthcare workers. According to the participants, they were fully aware of the dangers associated with their occupation in the course of performing their duties. There was a high level of caution among the participants due to the highly transmissible nature of COVID-19, and each encounter with a COVID-19 positive patient posed a risk. In reality, the perception of people's own risk of contracting the disease increased significantly during the COVID-19 pandemic [9]. Participants described what it was like to work in the COVID-19 ICU during the outbreak of the COVID-19 pandemic. Siegfried explained, "First of all, there is fear in the COVID-19 ICU. There were a lot of deaths caused by COVID-19. We didn't know the proper safety precautions, such as PPE and double masking." Additionally, the participants described what the infected patients may breathe out in the form of droplets and minute particles containing the virus that other people may breathe in. There is a possibility that these particles may land on the eyes, mouth, or nose, as well as contaminate surfaces. The participants in their statements shared that somehow, PPE made it more difficult to care for the dying COVID-19 patients. Callum explained, "Wearing PPE is also a challenge. As a result of aerosols being used to transport viruses, we must wear full PPE, especially when working on an intubated patient." In spite of the danger involved, the participants believed they were obligated to perform their professional duty due to moral and personal obligations. Participant Olivia stated, "That's really a part of your oath when you became a nurse; so even if you're not well compensated, you're still happy. Happy with the profession that you've chosen." Overall, these perceptions, roles, feelings, and choices associated with the perilous duty were revealed in the following sub-themes: the dilemma between self-preservation and precarious interventions, spotting the signs of death, sensitivity to responses after breaking the news, elucidating conflicting emotions in desolating frontlines, and giving the best effort in everything despite the perceptions of futile care. Dilemma between Self-Preservation and Precarious Interventions According to the first sub-theme, participants were faced with a dilemma as they had to choose between two options, each of which had undesirable outcomes. There have been instances in which participants have been torn between fear and the desire to maintain personal safety by wearing PPE. This can affect the speed with which they can respond to an emergency, which can lead to a worsening of the patient's condition. Milo mentioned, "There are times that we get carried away by our emotions; sometimes, we forget to protect ourselves, especially during the pandemic. At first, that is what we are saying, that there is no emergency in a pandemic. 
However, still, we are nurses, we have this caring nature, that even though we know that the patient is already in an emergency, sometimes, these are the moments that we commit mistakes, that we get infected, because of the procedures we perform; because we rush to aid our patient." Participants indicated that wearing PPE was both physically and mentally taxing, as they needed to be sure they were well protected and sealed before entering the room. However, concerns remain about the possibility of viruses escaping or entering through leaks in the PPE. In addition, participants reported having ambivalence regarding the use of aerosolizing procedures such as suction and bag-valve-mask ventilation, which may improve the oxygenation of the patient, yet increase the risk of virus transmission. A dilemma is created by this situation since if they do not perform these interventions, the patient will deteriorate due to hypoxia, but if they do, they will increase the risk of aerosolizing COVID-19 and increasing its transmission. As explained by Irene, "Back tapping is not permitted because we do not wish to stimulate coughing, especially if they are intubated. Maybe we can simply perform a close suction so that at least we can assist the patient." Despite the fear of acquiring the virus through suctioning, Axel further confirmed the caring nature of ICU nurses by asserting the necessity to use the close-suction system to support the patients' airway. As he mentioned, "We secured and looked for a closed-suction system in order to support his airway, because it is quite difficult for nurses to perform suctioning, especially for COVID-19 patients. It was actually very risky on our part." In essence, despite the dilemma faced by COVID-19 ICU nurses, the testimonies of the participants still showed how fears for their personal safety are being overcome. Spotting the Signs of Death The second sub-theme relates to statements by participants describing their role in providing EoLC to dying patients. ICU nurses are responsible for closely monitoring patients so that any signs of deterioration or abrupt changes in their condition may require immediate intervention to save their lives. For dying COVID-19 patients, EoLC requires a skill that permits them to recognize signs of impending death and to alert their physician and family, thus preventing untoward reactions such as shock, hysteria, and retaliation due to the fact that they were only informed after the patient's death. The participants observed that co-morbidities make it more difficult to wean dying COVID-19 patients off ventilators. According to Siegfried, "most of the patients I have handled who have succumbed to COVID-19 also had co-morbidities." In addition, he added that " . . . diabetic patients are more likely to suffer from kidney failure, which makes it impossible to wean them off their ventilator." Yet, despite their skills, participants were able to identify the signs that indicate that their patients might not survive. One participant, Irene, stated that "of course, we are the ones responsible for holding their lives, but there are times when we feel that their time is coming. At times, we may feel that they are nearing the end of their lives. The signs are there." In addition, most participants reported renal symptoms such as a decrease in urine output, laboratory findings such as acidotic ABG results, diagnostic procedures such as chest X-rays, and the mottled appearance of extremities. 
Detecting near-death events is one of the primary responsibilities of COVID-19 ICU nurses, since they are the ones who spend most of their time at the bedside, and they serve as a liaison between the patient and his or her physician. It has also been suggested that DNR status and advance directives may affect the course of dying of COVID-19 patients, as explained by Geoffrey, "So those are the things; the only difference between the patient who has an advance directive and the patient who is still aggressive is that we can intervene." However, he added, "if despite interventions, the patient continues to deteriorate when it comes to vital signs, then perhaps that marks the end of our patient's life." Sensitivity to Responses after Breaking the News In this sub-theme, participants are portrayed as representatives of the healthcare team and of patients' significant others. It is considered that the participants are the bearers of both blissful and horrendous news to the patient. Although some participants mentioned being proactive regarding the patient's status in order to avoid inflicting distress on them, they should never forget that veracity prevails over all other considerations. Healthcare workers often lack the skills needed to effectively deliver bad news to patients and their families in the clinical setting. Axel described, "So after being told that they were about to die... It was almost as if they were begging for the doctors to prolong their lives." Although physicians commonly deliver bad news to patients, nurses also play a crucial role in doing so. Consequently, one should be trained in clinical and communication skills; to be able to convey bad news appropriately and effectively. With COVID-19's rapid deterioration, ICU nurses often carry the burden of being listeners or witnesses to patients who can express their feelings when they hear the bad news. Some patients undergo DABDA even after initially denying that they are near death. Participants have acknowledged the importance of gently informing the patient of their current status, which will eventually lead to acceptance. According to Milo, " . . . you should explain it slowly; first explain it in medical terms, then explain it in layman's terms. There is no question that the participants need to understand what the treatment is or what course of treatment will be taken and what the next steps will be for them." The participants understood the need to be professional and to empathize with the dying patients. Elucidating Conflicting Emotions in Desolating Frontlines In the course of an epidemic outbreak, both negative and positive emotions are experienced by the front-line nurses. A negative emotion was dominant at the beginning, followed by a gradual development of a positive emotion. In reality, maintaining the mental health of nurses depends on self-coping styles and psychological growth [32]. The everyday life of COVID-19 nurses is described as being filled with mixed emotions. In spite of this, it is very important to maintain the appropriate level of composure [10]. Milo stated, "Whenever possible, we should not demonstrate any weaknesses; we should not show any signs of grief as this may be transmitted to the patient." Another burden that accompanied their struggles was the fear of exposure and the transmission of COVID-19. Axel explained, "Nurses who care for COVID-19 patients are heavily burdened. In order to protect ourselves, we had to wear PPE at all times, which limited our ability to move and function... 
I am no longer able to handle patients as effectively as I once did."

Giving the Best Effort in Everything despite the Perceptions of Futile Care

In the course of the discussion, participants acknowledged that they were already providing futile care to patients with poor prognoses. Despite their best efforts, they knew that they would not be able to protect everyone from this epidemic [37]. Serena commented, "It is heartbreaking for a nurse to witness this. No matter how much we would like to save our patients, it is impossible to save them all. Thus, what we do is to provide them with the best possible care during their final stages of life." Additionally, participants believed they had already done everything possible before the patient died, giving them the satisfaction of knowing they had done their best. According to Shiloh, "we could not blame ourselves, because we did everything in our power to keep the patients alive." The participants provided EoLC to these patients and, even though the patients were dying, they were still eager to give their best efforts in caring for them.

In sum, participants asserted the importance of making it known to the patients that someone was taking care of them. These efforts are seen as vital aspects within the theory of bureaucratic caring and the peaceful end of life theory. In addition, the participants considered doing everything they could, to the best of their abilities, as a means to mitigate the psychological effects of providing futile care to COVID-19 patients.

Theme IV: Staying Close Amidst the Reshaped Work Environment

This theme highlighted the cluster of statements that describe how the workplace environment in the ICU has dramatically changed since the pre-pandemic period. It includes aspects such as routine bedside care, challenges associated with performing standard procedures, and the difference between the care required by COVID-19 patients and by usual ICU patients. From the clusters of statements, four sub-themes were identified, namely: continuing unhampered quality care despite risks, enduring the compounding troubles on the field, acknowledging the diversity of COVID-19 cases, and surmounting the burden and surge of workload.

Continuing Unhampered Quality Care despite Risks

During the pandemic, the participants demonstrated an inclination to serve despite their ambivalent feelings about maintaining continuity of care. According to Geoffrey, "the only difference for me was that we wore protective clothing and infection control measures were heightened. However, when it comes to the provision of healthcare to COVID-19 patients who are critically ill; as far as I am concerned, the quality of care has not been compromised." During COVID-19, every patient should receive the highest standard of care regardless of their condition, which is the primary goal of both the theory of bureaucratic caring and the peaceful end of life theory. Additionally, Milo explained that nurses in the ICU continued to provide different types of support to patients throughout this pandemic: "So, continue the care, especially if they are under your care, because they need both emotional and spiritual support." When ICUs are focused on the preservation of life, staff may perceive death as synonymous with failure. Therefore, the tsunami of deaths caused by this pandemic may cause stress and distress [9]. Even so, the majority of participants remained in their respective healthcare institutions.
In Serena's words, "despite our ambivalence, we still need to go there; we must be upfront and provide patients with the best possible care."

Enduring the Compounding Troubles on the Field

This sub-theme noted the exacerbation of the different challenges faced by participants as the pandemic progressed, aside from wearing PPE and being precise with their actions. Concerns about accidental exposure magnify the challenges associated with caring for dying COVID-19 patients. Milo stated that "one of the most challenging aspects of COVID-19 is the prevention of accidental exposure through aerosol particles." In emergency situations, the necessity of wearing complete PPE hindered the immediate response of ICU nurses. As Winifred explained: "It is not easy to enter and exit the room. It is not possible to provide immediate care to the patient..., you must first protect yourself to avoid endangering others' lives." In addition, the lack of vital resources, both machines and medications, added to the burden on nurses [5]. Winifred stated: "There was a time when the medication was unavailable. This made me feel extremely uncomfortable." Furthermore, ICU nurses were under intense pressure to continue working despite experiencing fatigue, burnout, and, at times, symptoms of the virus themselves; they suffered unprecedented physical, psychological, and moral injuries, compounded by the lack of knowledge regarding the course of treatment [27]. Frederick explained: "So, it was a bit difficult at first, since we did not know what to do with them. After all, what is the correct name for this? There are limited resources... It's a very challenging situation, and with a heavy heart, there are so few options available to you."

Acknowledging the Diversity of COVID-19 Cases

In the ICU, COVID-19 cases are quite different. This is in contrast to other chronic diseases, such as cancer, where the patient somehow understands where he or she is heading. Milo stated: "COVID-19 is not the same as other illnesses, such as cancer patients in the fourth stage who have no cure; at least by that time, they know where they are heading. In contrast, among COVID-19 patients, it is as if the disease was an accident." It is because of this phenomenon that EoLC is quite different for a dying COVID-19 patient. In addition, the transmissibility of the COVID-19 virus and the mandatory use of PPE have also contributed to the paradigm shift in critical care [36]. According to Olivia, "PPE should always be worn. Prior to the pandemic, you could simply enter the patient's room. Now it is different, you must protect yourself from COVID-19." Participants also noted that witnessing the death of COVID-19 patients is quite different from witnessing the death of non-COVID-19 patients. According to Frederick, " . . . most patients are intubated. It is possible that normal patients will eventually recover and then be extubated. For COVID-19 patients, they will not." Furthermore, the amount of time spent with COVID-19 patients is also quite limited, as stated in the guidelines. According to Solomon, "I have been handling dying patients for the past 12 years in the ICU, so it is not that new to me. The only difference is that this is COVID-19, and it is contagious."

Surmounting the Burden and Surge of Workload

This sub-theme described how participants coped with the heavy burden of treating critically ill COVID-19 patients as well as ensuring a comfortable and peaceful death for those with poor prognoses or those on DNR status.
Participants believed that sharing the load greatly reduced their stress levels, which is a common experience for COVID-19 nurses [34,35]. They found opportunities to express their feelings, their compassion and pity towards the patient, as well as their frustrations and problems. As Winifred explained: "Sometimes, I speak with my colleagues or feel the need to debrief myself with my supervisor. There are also some small gatherings where we share our feelings. It is critical to check our feelings from time to time." Olivia also provided an example of how she handled her negative emotions. She stated, "Yes, sharing stories is like venting out. I don't have to carry everything, the heavy burden transferred to me by the patient; at least I can share them with those who can relate."

Another method of reducing the workload is time management. Because COVID-19 was a novel condition for which they had limited knowledge and training, all the participants could do for the dying patient was their duty. As Geoffrey reiterated: "True, if the patient dies, we might also feel upset because it is a life lost, but there is also a sense that we tried our very best." Participants also indicated that doing the things they love assisted them in coping with the demands of providing EoLC in the COVID-19 ward. Winifred noted that: "As an individual, I still resorted to doing things that I love, like when I get home, I watch TV and relax myself and leave my feelings at work."

ICU nurses are dynamic in nature, yet the changes that COVID-19 imposed on their workload were largely beyond their control, unplanned and under-resourced. Therefore, nurses must be able to cope with the challenges brought by pandemics through protective measures, faith-based practices, social support, and psychological support [55]. Collectively, these resources work hand-in-hand to deliver quality EoLC.

Theme V: Preparing the Family for Life after a Loved One's Departure

Families of dying COVID-19 patients have often expressed concern about facing death, as being with a dying loved one can be a frightening experience. ICU nurses should assess and prepare for the needs of loved ones through careful and clear communication [38]. As EoLC is being provided, there is already an element of inclusivity on the part of the family members, as the COVID-19 ICU nurses also address their emotional and mental needs. Care for dying patients often involves the consideration of family members. According to the participants, family members may exhibit various emotions, including regret, guilt, anger, sadness, and uncertainty. As Axel related, the mother of a patient suffered a dreadful experience: "She wept a lot, and I assured her that the medical team would do everything possible to take care of her son." Additionally, the participants shared that providing support to family members was also a priority. As Frederick mentioned: "Supporting the family and preparing them for the possible outcome of the patient is very critical." Furthermore, because protocols were being implemented, family members were not permitted to be physically present in the ICU. Within the context of caring for the family of a dying patient, fulfilling the family's emotional needs is also a part of the process. As Solomon explained, "I usually include the family in the nursing care that I provide due to the difficulty involved. As it is difficult to think about your family member dying from COVID-19, there is the issue of immediate cremation or burial.
It is really very difficult to imagine that." The participants' experiences in caring for the patients' loved ones are revealed in four sub-themes, including: respecting difficult informed decisions, bearing unfavorable information about the course of death, bridging the gap through technology, and journeying with the bereaved in the grieving process.

Respecting Difficult Informed Decisions

This sub-theme is about close communication and the facilitation of doctor-family conferences that enable families to make difficult informed decisions. It is pertinent to note that each family has its own specific reasons for pursuing aggressive treatment. Likewise, a family may decide to wait until the patient passes away on his or her own. Besides the patient's poor prognosis, financial considerations are also taken into account, along with his or her co-morbidities and age. When a DNR decision is made in a particular medical situation, ethical dilemmas may arise for the patient and family, healthcare providers, including nurses, and the institution [56]. Axel reiterated, "When they sign the DNR form, it is expected that you have presented them with all the information they require. In other words, I am referring to all the scientific evidence, including X-ray images, that can be viewed on a cellphone." The participants emphasized the importance of signing the informed consent for DNR or treatment refusal. In some instances, one of the reasons for making sure the family is well informed is to enable them to be guilt-free following the death of the patient. The participants, however, believe that no matter what, this decision is quite distressing. According to Serena, " . . . signing a DNR form can be very frightening and distressing for a family. This is particularly true when a loved one's life is being taken away from them. There is a great deal of difficulty involved and, yes, it is extremely difficult to witness."

Bearing Unfavorable Information about the Course of Death

This sub-theme is associated with the statements that depict the participants' experience of providing appraisals of the patient's current status. Working as ICU nurses for COVID-19 patients, participants observed how the disease progresses and how COVID-19 gradually results in the patient's death, usually when the patient already has a poor prognosis. As Callum stated, "The first thing we do for our patients on DNR status is to brief the family members on the patient's condition." The participants also agreed that this intervention was similar to providing information and health education to the relatives of their patients. As Olivia stated, "Health education is really applicable to cases in which additional information is given to patients' loved ones rather than to the patients themselves." Because the participants are usually alone in the room with the patient, the family members are likely to rely on the assessment provided by the ICU nurses. As a result of this collaborative effort, the physician is able to formulate a prognosis before meeting with the patient's family members. According to Milo, "we must also focus on the family. Most of the time, the family members also have many questions. Since they were unable to see the patient, what they can do is rely on us." Providing support to family members is of the utmost importance. The best way to address their anxiety is to let them be heard and to allow them to express their concerns.
A helpful step for family members is to provide them with the structure to know what to expect, including the schedule of routines, the type of treatment being administered, and how their dying loved one is being cared for and comforted.

Bridging the Gap through Technology

This sub-theme addressed the facilitation of communication and updates between patients, their loved ones, and physicians through the use of technology, including video calls and cellular phone calls. The use of modern telecommunications technology can play an instrumental role in the care of COVID-19 patients [57]. The participants reported that isolation made it difficult for family members to communicate with their patient. As Winifred shared, "During end of life situations, the patient cannot be seen, so proper communication with the relatives is difficult due to the isolation that occurs. So we facilitate video calls and audio calls." Communication is so vital that ICU nurses will extend their time and take risks to facilitate the conversation between the patient and their loved ones. Milo shared, "Sometimes we have to stay longer inside the patient's room because we have to hold the phone while their family is talking to them."

Within the ICU, patients are generally alone in the room. Listening to or seeing their loved ones can really uplift them and provide them with emotional and social support. As Callum mentioned, "... sometimes relatives need to contact their father, or mother, they call, even just by hearing them is okay. This is a part of the emotional support." During EoLC, it is extremely important for patients and family members to be able to see one another for the last time, even by means of a virtual platform. Callum further explained, "So for family interaction, those who are quarantined and can't go out. What we do is to have them connect online and talk with the patient. As long as they can communicate with their family members before the end of life." As the COVID-19 patient is already dying, the participants agreed that the needs of the family members to see the patient for the final time and bid their last farewells take precedence, for humanitarian reasons, over the ICU protocol. Olivia stated, "Sometimes, I violate the confidentiality and protocols of the hospital since, for example, we are not permitted to take pictures or videos within the ICU. It is, however, not uncommon for the folks to request the last time they see their parents or siblings if the patient is dying. So there were times that we allowed it, with the instruction that the video would not be recorded and no picture would be taken while online."

In addition, the participants expressed the desire to use technology to facilitate online chaplain, religious, or spiritual services for the patient for the final time. Depending on the patient's religion, the service would differ. Irene noted that, "... however, if they have family members who are not Roman Catholic, if they wish to hear a prayer, we could still put their cellphones on their ears." Lastly, the participants also described their role in facilitating communication between doctors, family members, or the patients to reduce skepticism. Serena explained, "We need to have the different departments or specialized doctors talk to each other about how to coordinate their care for their patients and also to inform the families about their care to allay their skepticism.
Because again, we are actually not really that familiar with the management of COVID-19, so updating is always needed."

Journeying with the Bereaved in the Grieving Process

In this last sub-theme, participants shared their experiences with the family members as they embarked on the grieving process following the death of the COVID-19 patient. According to Milo, "If they need someone to talk to for the grieving process, if you will not be affected, or if there will be no breach of protocol, just stay with them. Our responsibility is to provide comfort to the family." During the interviews, participants noted that some family members were in denial regarding the cause of death. The primary role of the ICU nurses was to help them accept the death of their loved one by providing detailed information and facilitating a conference with the physician. Hence, regular updates should be provided in order to prevent situations of unacceptance and complaints.

One of the most difficult trials of life is the loss of a loved one. However, most people are able to weather these storms with the support and comfort of their loved ones. Many participants described how difficult it was to witness the patients die alone without being able to be with their family and loved ones. Winifred stated, "It's a bit painful to witness the family see their loved ones for the last time. It was not possible for them to hold them physically; it was impossible for them to say goodbye properly; they could not hug a departing individual physically. There is a great deal of crying on their part. In front of the family, we maintain professionalism regardless of their feelings towards me. It was the same every day when I returned home. Nevertheless, it was still manageable."

As the nurses begin the journey with the patient and family, they may spend time with them and seek their perspectives and understanding of the patient's condition. Family members may also require spiritual and emotional support, which can be met by offering them spiritual advisors or involving them in the pastoral care of the patient through virtual anointing prayers or religious rituals that are in keeping with the patient's faith and beliefs [58]. Providing bereavement care to family members following the death of a patient in the ICU has been shown to reduce the risk of post-traumatic stress disorder and prolonged grief [59]. In sum, following up with family members is highly appreciated by them. Family members can explore and identify the events that led to the death of their loved one during such a meeting, which may assist them in coming to terms with their loss. Furthermore, participants stated that respecting the family members' privacy and giving them time to grieve are essential aspects of caring for them after the loss of a dear one. When a loved one passes away, it is a significant change in the family's life, and the time spent with them is crucial to the grieving process. Within the theory of bureaucratic caring, it is important to understand how people are connected [20]. Therefore, it is important to understand and provide time for the grieving process.

Conclusions

The current study examined the lived experiences of ICU nurses providing EoLC to critically ill COVID-19 patients with poor prognoses or in DNR status. A number of limitations were identified in the study.
As a result of the recent emergence of COVID-19 cases within the local community, communications and consent acquisition were conducted via Facebook Messenger or email, and interviews were conducted using the online platform Zoom. Observations of gestures were limited to the upper part of the body due to the angle at which the camera was pointed at the participants. Furthermore, the current study concentrates on nurses working in hospital settings within the Western Philippines, which may not be representative of the national capital region, where resource availability may vary.

Using both the theory of bureaucratic caring and the peaceful end of life theory as a framework, it is evident from the lived experience of the COVID-19 ICU nurses that they have a clear understanding of their role as they face risky encounters in the course of their duties. An obvious difference between the provision of EoLC before and during the pandemic was the presence of ambivalent feelings and dilemmas that forced ICU nurses to choose between self-preservation and precarious interventions. Nevertheless, some participants reported that despite this concern about being infected with COVID-19, they still carried out emergency procedures while wearing only level 2 PPE. It was also clear to the participants that one of their primary responsibilities at the bedside, as the primary care provider, is to identify signs of impending death in order to inform physicians and family members as soon as possible. As a result of these risky encounters, the participants explained that the negative emotions they experienced while working were well known to them. Even so, the phenomenon of EoLC persists among dying COVID-19 patients despite the lack of a protocol.

Even with altered working methods, an increased workload, elevated stress, anxiety, and fear due to the contagious disease conditions, and mixed feelings of compassion and fear, the participants were still able to incorporate the core principles that the dying COVID-19 patient hopes to experience, in agreement with both of the aforementioned theories. Moreover, to make the peaceful end of life theory a reality, the participants fostered acceptance of mortality and the end of life, freedom from suffering through symptom management, feelings of dignity and respect, and the comfort of the presence of loved ones through video calls and virtual platforms. Additionally, COVID-19 ICU nurses are required to maintain continuity of holistic care while pursuing personal safety and self-preservation goals, to use PPE, and to modify routine care with the integration of digital spiritual and family support.

Recommendations

A major challenge for hospital administrators is to collaborate and develop unified policies and protocols regarding the provision of EoLC to dying patients. Funding is also needed to purchase equipment, including digital devices, music therapy gadgets, systems, and infrastructure that cater to the holistic needs of patients. The provision of an online service may be a suitable alternative if the need for physical interaction cannot be met. An isolation room with cameras can be established to enable feasible and real-time monitoring of patients, thus reducing the need for PPE and unnecessary room visits that may expose nurses and physicians to contaminants.
The hospital administration should also establish a highly specialized team for the provision of EoLC, including training in therapeutic communication, psychological first aid, and bereavement support. It is important that this team includes nurses not only from the ICU, but also from other departments that may treat infectious patients in the future. Aside from training the staff nurses, the hospital administrator should set up a mental health department. This will assist the nurses in caring for the bereaved family, particularly those who are in denial or who have shown signs of distress. Psychological support may also be provided for nurses who are experiencing difficulties.

For ICU nurses, it is imperative that they are aware of the various core concepts involved in EoLC. To ensure the safety of their patients, they must make prudent decisions regarding vital matters. In order to have guidance and direction when they are assigned to a dying COVID-19 patient, ICU nurses should be proactive in requesting and collaborating with the administration on the establishment of EoLC protocols in hospitals without one. PPE must be worn to ensure that they can provide holistic care while maintaining their own safety. Through the establishment of a protocol, patients dying of COVID-19 and other infectious diseases will be treated with high-quality unified care, which will result in quality EoLC. Importantly, when appropriate policies or protocols are put in place and correctly followed, ICU nurses do not face the fear of being discriminated against by patients, family members, or other healthcare professionals. As a result, nurses will be provided with additional legal protection. It is also important for ICU nurses to be proactive in seeking professional help from a mental health expert if they experience any negative feelings or thoughts as a result of providing EoLC. Furthermore, hospital administrations should provide nurses with training in mental tenacity and resilience.

Finally, it may be necessary to conduct further research to determine the extent to which COVID-19 ICU nurses are aware of, hold positive attitudes toward, and adhere to EoLC concepts. It may also be recommended that further research be carried out with ICU nurses who hold different religious or spiritual beliefs. This would make it possible to determine how their beliefs may influence the provision of EoLC. It is also possible to conduct studies to assess the awareness and attitude of the general public towards advance directives and EoLC.

Data Availability Statement: Data for the current study are not publicly available due to privacy concerns.
Influence of Bioceramic Cements on the Quality of Obturation of the Immature Tooth: An In Vitro Microscopic and Tomographic Study

The present in vitro study focuses on the filling ability of three different bioceramic cements, with or without the addition of a bioceramic sealer, in an open apex model, examining the marginal apical adaptation, tubule infiltrations, and void distributions as well as the interface between the cement and the sealer materials. To this end, sixty mandibular premolars were used. MTA-Biorep (BR), Biodentine (BD), and Well-Root Putty (WR) were used to obturate the open apex model with or without the addition of a bioceramic sealer, namely TotalFill® BC sealer™ (TF). A digital optical microscope and scanning electron microscope (SEM) were used to investigate the cement-dentin interface, marginal apical adaptation, and the material infiltration into the dentinal tubules. Micro-computed X-ray tomography and digital optical microscopy were used to investigate the cement-sealer interface. The results were analyzed by using the Kruskal-Wallis test. No significant difference was found between the groups for the marginal apical adaptation quality (p > 0.05). Good adaptation of the dentin-cement interface was found for all tested groups, and the sealer was placed between the cement material and the dentinal walls. All the groups demonstrated some infiltrations into the dentinal tubules at the coronal part except for the BR group. A good internal interface was found between the cement and the sealer, with the presence of voids at the external interface. A larger number of voids were found in the case of the BD-TF group compared to each of the other two groups (p < 0.05). Within the limitations of the present in vitro study, all the groups demonstrated good marginal apical adaptation. The use of a sealer in an open apex does not guarantee good filling and, in addition, creates voids at the external interfaces with the dentinal walls when the premixed sealer is used with powder-liquid cement systems. The use of a premixed bioceramic cement could offer fewer complications than the use of a powder-liquid cement system.

Introduction

Incomplete root development is one of the most frequent complications observed in traumatized teeth, arising when the vitality of the pulp vanishes before the accomplishment of dentin deposition [1]. Consequently, the probability of fracture could be increased due to the thin and weak dentinal walls of the root [2]. Regenerative endodontic treatment has gained significant interest in the endodontic domain. This treatment could enable the replacement of the damaged dental structure [1]. However, this treatment may not reach its purpose, especially in immature permanent teeth with apical periodontitis, pulpal necrosis, or roots with previous unsuccessful endodontic treatments [2,3]. Therefore, apexification using an apical plug or long-term calcium hydroxide application could be the best choice to ensure the healing of apical pathosis and achieve apical closure [4].

Open apex treatment in endodontics is a challenging procedure due to the anatomic configuration of the apex, the difficulty of delivering the endodontic material to the apical third without over- or short filling, and the need for a three-dimensional obturation that fills the root canal with minimal voids [5]. This is a nonsurgical procedure that is used to treat immature or incomplete root canal development [5,6].
Mineral trioxide aggregate, "MTA", was introduced in 1993 and is considered the gold standard material for several endodontic treatments [17]. Different calcium silicate cements were developed to improve on the quality of the initial MTA. They replaced bismuth oxide, which is toxic and could change the color of the teeth, with calcium tungstate, which is more biocompatible, and allowed the addition of organic plasticizers to the liquid of the cement [15]. MTA Biorep (Itena Clinical, Paris, France) is a powder-liquid calcium silicate cement that is used in endodontic treatment and has shown its efficiency in in vitro and in vivo studies [18,19]. Another commercial product that is used worldwide is Biodentine (Septodont, Saint-Maur-des-Fossés, France). This product is a calcium silicate material that is commercialized in a powder-liquid formulation, with the advantage that its mechanical properties are close to those of dentine [20].

Ashi et al. [19] and Kharouf et al. [21] demonstrated that premixed bioceramic materials, in both sealer and cement formulations, are better than bioceramics in powder-liquid formulations in terms of their filling ability, application, and handling. Novel premixed bioceramic materials based on calcium aluminosilicate have recently been introduced to the dental market, an example being Well-Root™ PT (Vericom, Chuncheon-si, Republic of Korea) [21]. It has been reported that the physicochemical, biological, and mechanical properties of this material are comparable to those of MTA and Biodentine™ [21-23]. There are no studies in the literature examining the three mentioned bioceramic cements combined with a calcium silicate sealer in the obturation step. Moreover, the effect of using a calcium silicate sealer combined with a putty form has rarely been studied.

The purpose of the present in vitro study was to investigate the filling ability of each of the three different bioceramic cements, with or without the addition of a bioceramic sealer, in an open apex model using different imaging techniques, as well as to investigate the cement-sealer interfaces. The null hypotheses were that there was no difference between the three bioceramic cements with respect to marginal apical adaptation, the state of the cement-dentin interface, the extent of infiltration of tubules, and the state of the cement-sealer interface (when a sealer is used as a supplementary material) on the quality of obturation of a single-rooted canal in an open apex model. The cements used in the present study have different formulations (powder-liquid or premixed) and chemical compositions (tricalcium silicate or calcium aluminosilicate); several studies have mentioned that these different characteristics could play an important role in the biological, physicochemical, and mechanical properties [15,19,21].

Teeth Preparation

Sixty freshly extracted first mandibular premolars with a single canal and root curvature <10°, extracted for orthodontic or surgical reasons from patients aged between 18-25 years, were obtained under patient informed consent after obtaining approval from the Ethics Committee at Damascus University, Syria (protocol no. 2571/2023). The recruited teeth were stored in saline solution at 4 °C until their use [21]. The cusps of the teeth were flattened using a rotating polishing machine (Escil, Chassieu, France) with carbide grit paper (600 grit) to obtain a standard root canal length of 18 ± 1 mm. A single operator (endodontist: R.A.)
performed all the clinical steps. The access cavity was prepared under an optical microscope (Zumax Medical Co., Ltd., New District Suzhou, Jiangsu, China) by using burs and ultrasonic tips. A #10 K-file (Coltene, Lezennes, France) was used to perform the canal scouting step. Then, 0.06 tapered rotary files (Eighteeth, Changzhou City, Jiangsu Province, China) were used in an orthograde direction to a master apical file of #30. The divergent open apex was created by using 0.06 tapered rotary files (Eighteeth, Changzhou City, Jiangsu Province, China) in a retrograde direction to a master file of #30. The diameter of the anatomic open apex was 1.26 mm after the use of the #30/0.06 file in a retrograde direction to its total length. Then, a final irrigation protocol was used for all the specimens as follows: 5 mL of 17% EDTA solution (2 min), 2.5 mL of 0.9% NaCl (1.5 min), 5 mL of 6% NaOCl (2 min), followed by a final rinse with 2.5 mL of 0.9% NaCl (1.5 min). A 31-gauge Navitip needle with a 5 mL syringe was used to deliver all the irrigants in an orthograde direction. Teeth were randomized using a computer-generated random sequence at www.randomizer.org and divided into 6 equal groups (n = 10) according to the following classifications:

Group 1 (G1): BD: After mixing, the cement was applied using the MapOne carrier (MapOne system, Produits Dentaires SA, Vevey, Switzerland);
Group 2 (G2): BD-TF: TF was applied by using a gutta percha point with a painting technique; after that, the cement was applied using the MapOne carrier;
Group 3 (G3): BR: After mixing, the cement was applied using the MapOne carrier;
Group 4 (G4): BR-TF: TF was applied by using a gutta percha point with a painting technique; after that, the cement was applied using the MapOne carrier;
Group 5 (G5): WR: The premixed cement was applied using the MapOne carrier;
Group 6 (G6): WR-TF: TF was applied by using a gutta percha point with a painting technique; after that, the cement was applied using the MapOne carrier.

Before plug placement, the teeth were embedded in a moistened floral sponge [26]. After drying the canal with paper points, the same operator performed the placement of the apical plug in an orthograde direction to reach a 6 ± 0.5 mm plug length coronal to the apex. The plugs were condensed by using fitted pluggers (Dentsply Sirona, Germany). The operator used an operating microscope (Zumax Medical Co., Ltd., New District Suzhou, Jiangsu, China) during the placement of the cements. Before injecting warm gutta percha to fill up the root canal using the Fast-pack Pro system (Eighteeth, Hangzhou City, Jiangsu Province, China), a final radiograph (buccal-lingual direction) was taken for each filled tooth to verify the plug length. All the specimens were kept in the dark in a container at 37 °C and 95% relative humidity for 48 h until completely set [27].
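The allocation above relied on a computer-generated random sequence (randomizer.org). For readers who want to reproduce this kind of balanced allocation, a minimal Python sketch is given below; the function name, the seed, and the use of Python's random module are our own choices for illustration, not part of the study protocol.

```python
import random

GROUPS = ["BD", "BD-TF", "BR", "BR-TF", "WR", "WR-TF"]  # the six groups above

def allocate_teeth(n_teeth=60, n_per_group=10, seed=2023):
    """Shuffle tooth IDs 1..n_teeth and split them into six equal groups."""
    assert n_teeth == n_per_group * len(GROUPS)
    ids = list(range(1, n_teeth + 1))
    random.Random(seed).shuffle(ids)  # seeded for a reproducible sequence
    return {g: sorted(ids[i * n_per_group:(i + 1) * n_per_group])
            for i, g in enumerate(GROUPS)}

if __name__ == "__main__":
    for group, teeth in allocate_teeth().items():
        print(f"{group}: {teeth}")
```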
Apical Region Evaluation

The quality of obturation in the apical region of each root was investigated using a digital optical microscope (Keyence, Osaka, Japan) at 50× and 100× magnifications. Micrographs of the filled root apices were evaluated by 2 blinded endodontists. Grading criteria were used to evaluate the marginal adaptation based on a previous study [5]:

Score 1: Root-filling material is well adapted: close marginal approximation of the filling material to the dentinal wall. No spacing defect present at the material-dentin interface in >70% of the circumference of the open apex;
Score 2: Root-filling material is moderately adapted: close marginal approximation of the filling material to the dentinal wall. Presence of spacing defects at the material-dentin interface in 30-60% of the circumference of the open apex;
Score 3: Root-filling material is poorly adapted: poor marginal approximation of the filling material to the dentinal wall. Major voids and/or significant spacing defects at the material-dentin interface in >60% of the circumference of the open apex.

When different scores were attributed by the two examiners, they reanalyzed the micrograph with a third endodontist to reach an agreement.

Cement-Dentin Interface Investigation

To evaluate the interface adaptation among the different groups, all the teeth were sectioned longitudinally down the middle in the buccal-lingual direction by using a precision cutting machine (Micracut 152, Metkom, Bursa, Turkey) under continuous water cooling. The quality of the interface was investigated using a digital optical microscope at 50× magnification. A qualitative evaluation was performed by the same examiners.

After that, 1200, 2400, and 4000 P-grade (number of abrasive grains per cm²) abrasive papers were used to polish the longitudinal section surfaces. Then, the surfaces were treated with 37% phosphoric acid for 10 s and 2.5% NaOCl for 3 min to eliminate the smear layer that was created during sectioning. They were then mounted on SEM stubs and sputter-coated with gold-palladium (20/80). The samples were analyzed using a scanning electron microscope (SEM) (FEI Company, Eindhoven, The Netherlands) at 10 kV and magnifications of 50×, 200×, and 1000× with a working distance of 10 mm. SEM imaging was focused on the cement-dentin interfaces along the plug length as well as on the cement or sealer material infiltrations into the dentinal tubules.
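The three-point rubric above reduces, in practice, to the fraction of the open-apex circumference that shows spacing defects. A small helper illustrating that mapping is sketched below; the exact thresholds (<30%, 30-60%, >60%) are our reading of the grading criteria, not an implementation used by the authors.

```python
def marginal_adaptation_score(defect_pct: float) -> int:
    """Map the % of the open-apex circumference showing material-dentin
    spacing defects to the 3-point scale (assumed thresholds)."""
    if not 0 <= defect_pct <= 100:
        raise ValueError("defect_pct must be a percentage in [0, 100]")
    if defect_pct < 30:
        return 1  # well adapted (defect-free over >70% of the circumference)
    if defect_pct <= 60:
        return 2  # moderately adapted
    return 3      # poorly adapted

# Example: two blinded examiners score the same micrograph.
scores = [marginal_adaptation_score(p) for p in (25.0, 45.0)]
if scores[0] != scores[1]:
    print("Disagreement: re-read the micrograph with a third endodontist")
```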
Cement-Sealer Interface Investigation

Four cylinders from each cement-sealer group were prepared by using a Teflon mold (height: 3.8 mm; diameter: 3 mm). Each cement was first placed in the mold to a height of around 2 mm; then, the sealer was injected onto the cement material to fill the mold's remaining space. All the samples were incubated at 37 °C for 48 h to allow the materials to set properly. After the storage period, the samples were removed from the mold and observed on two sides (2 images for each cylinder) using a digital optical microscope to check the cement-sealer interfaces at 70× magnification. Then, the external voids localized at each cement-sealer interface were measured by using VHX-5000 software (Keyence, Osaka, Japan) (Figure 1).

After the external interface observations with the digital optical microscope, and due to the difficulty of investigating the interface in the WR-TF group, 3D imaging by means of micro-computed X-ray tomography (EasyTom 160 from RX Solutions, Chavanod, France) was performed. This analysis was focused on the internal and external cement-sealer interfaces in the case of one sample per group. The projections were recorded at a voltage of 45 kV, a current of 160 µA, and a frame rate of 0.5 image/s. The source-to-detector distance (SDD) was set to about 293.6 mm, whereas the source-to-object distance (SOD) was set to about 5.6 mm, providing a voxel size of about 2.5 µm. A total of 1440 projections were obtained over 360° with a total acquisition time of 48 min. The volume reconstruction was performed with the software Xact64 (RX Solutions) and the 3D image analysis was performed using the Avizo software (ThermoFisher, Waltham, MA, USA). Figure 2 shows the sample preparation protocols and the different investigation methods.

Statistical Analysis

SigmaPlot release 11.2 (Systat Software, Inc., San Jose, CA, USA) was used to analyze the results. To determine the significance of the differences in the marginal apical adaptation, the tag infiltration length at the cement-dentin interface, and the void measures (area and volume) at the cement-sealer interface (in groups where a sealer was used), the Kruskal-Wallis test including multiple comparison procedures (Tukey's test) was used. A significant difference was indicated when p < 0.05.

Marginal Apical Adaptation

Micrographs showing the marginal apical adaptations are presented in Figure 3. After the evaluation of the different micrographs by the examiners, there was no statistically significant difference in the marginal adaptation between the tested groups (p > 0.05) (Table 2). Moreover, the use of a sealer for painting the dentinal walls before the obturation with the cements produced no significant difference in the quality of marginal adaptation for G1 vs. G2 (p = 0.2), G3 vs. G4 (p = 0.06), and G5 vs. G6 (p = 0.146). In addition, no statistically significant difference was detected between the groups of cements (p = 0.576) or between the groups of sealer/cements (p = 0.342).

Cement-Dentin Interfaces

Good qualitative adaptations were obtained for all the groups (Figure 4). In the BD-TF micrograph, the distribution of sealer/cement was visible (black arrows) due to the different colors of the materials (Biodentine and sealer). The sealer was present on the dentinal walls and the Biodentine material was localized in the center of the root canal.
Cement-Dentin Interfaces (Tags)

All tags among the samples were detected at the coronal parts (>4 mm from the apex) of the apical plugs. Both cement groups without a sealer, G1 (BD) and G5 (WR), demonstrated material infiltrations, whilst G3 (BR) had no material infiltration into the dentinal tubules. No significant difference was found between the tag lengths of the WR and BD groups (p = 0.326). All the cement groups with sealer injection demonstrated material infiltrations into the dentinal tubules (Figure 5). WR-TF demonstrated significantly longer infiltrations (29.1 ± 12.1 µm) than BR-TF (18.8 ± 7.5 µm) (p = 0.047), whilst no significant differences were found between BD-TF (20.8 ± 6.3 µm) and BR-TF or WR-TF (p > 0.05) (Figure 6).

Cement-Sealer Interfaces

In all the groups, voids were observed at the cement-sealer interfaces (Figure 7, blue arrows; and Table 3). No statistical differences were found between the three tested groups (p > 0.05). The interfaces between BD and TF as well as between BR and TF were easily observed due to the difference in color between the tested materials. The WR-TF interface was difficult to image and contained few voids compared to BR-TF and BD-TF. The results showed that all the voids were located on the external surfaces between the tested materials. The internal bonding interface in all the groups was well adapted with no detected voids. The WR-TF interface was also difficult to examine with micro-computed X-ray tomography, as it had been with the digital optical microscope (Figure 7). A significantly larger void volume was found for BD-TF compared to BR-TF and WR-TF (p = 0.001 and p = 0.004, respectively) (Table 3), whilst no significant difference was found between WR-TF and BR-TF (p = 0.890). As with the digital observation, the difficulty in this investigation remained the detection of the interface between WR and TF.

Table 3. Means and standard deviations of the observed voids at the cement-sealer interfaces measured by using a digital optical microscope (µm²) and micro-computed X-ray tomography (µm³). Biodentine + Total Fill: BD-TF; MTA Biorep + Total Fill: BR-TF; Well-Root PT + Total Fill: WR-TF. Different superscript letters (a, b, and c) indicate significant differences (p < 0.05).
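The group comparisons reported above were run in SigmaPlot (Kruskal-Wallis with Tukey's multiple comparisons); they can be approximated with open-source tools. The sketch below uses SciPy, with Bonferroni-adjusted Mann-Whitney tests as a stand-in for the Tukey procedure, on placeholder values that are not the study's data:

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Placeholder void-volume readings per group (illustrative only).
voids = {
    "BD-TF": np.array([9.1, 8.4, 10.2, 9.7]),
    "BR-TF": np.array([3.2, 2.9, 3.8, 3.1]),
    "WR-TF": np.array([2.7, 3.0, 2.5, 3.3]),
}

h, p = stats.kruskal(*voids.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

if p < 0.05:  # follow up with pairwise comparisons
    pairs = list(combinations(voids, 2))
    for a, b in pairs:
        _, p_pair = stats.mannwhitneyu(voids[a], voids[b])
        adj = min(1.0, p_pair * len(pairs))  # Bonferroni adjustment
        print(f"{a} vs {b}: adjusted p = {adj:.4f}")
```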
Discussion

The success of root maturation after regenerative endodontic procedures in immature teeth with apical periodontitis and pulp necrosis is not always possible. In particular, immature roots with previously unsuccessful endodontic treatments fail to continue further root development in width and length [28]. Therefore, apexification could be considered as an alternative strategy to achieve apical barrier formation. The apical sealing of a bioactive endodontic material as an apical plug in an open apex model is an essential parameter for the clinical success of endodontic treatment. The hermetic seal of these materials plays an important role in avoiding the reinfection of the root canal system by microorganisms [29]. Moreover, these bioactive endodontic materials have better biological and physicochemical properties than resin-based materials [30,31]. These cements have an alkaline pH and can release Ca2+ ions that could kill bacteria and enhance the remineralization process [15,19]. In addition, the shrinkage of resin materials during polymerization could create cohesive fractures and voids, whilst calcium silicate materials do not have this inconvenience [30]. Therefore, the purpose of the present in vitro study was to evaluate the marginal apical adaptation, the material-dentin interface, and the material infiltrations into the dentinal tubules for a premixed calcium aluminosilicate and two powder-liquid calcium silicate cements, with or without the addition of a calcium silicate sealer before the obturation with cement.

The results showed that there were statistically significant differences between the tested materials with respect to material infiltrations into the dentinal tubules and the voids created between the cements and the sealer materials. Moreover, no significant differences were found between the tested materials with respect to marginal apical adaptation. Therefore, the null hypothesis must be partially rejected.

A final irrigation protocol was performed by using 6% NaOCl and 17% EDTA to eliminate the smear layer and open the dentinal tubules. The MapOne carrier was used to facilitate the delivery of endodontic materials into the canal system, and the moistened floral sponge was used to simulate an apical barrier [26].

No significant difference was found between the marginal apical adaptation of the different tested groups. In addition, the use of a bioceramic sealer before obturation with cement did not improve the marginal adaptation. In contrast, Tran et al. [5] compared the average gap size of cement obturation against the same cement associated with a sealer, and significantly fewer gaps were created when the cement was associated with its sealer. In the present study, there was no significant improvement in filling quality with the addition of the sealer. This could be related to the fact that the sealer used came from a different company than the cements and thus had a different chemical composition. Moreover, all the groups demonstrated materials well adapted to the root canal walls, which could be considered suitable for clinical application by orthograde obturation in an open apex model.
SEM observations demonstrated that some material infiltration occurred in all the tested groups except for the BR group at the coronal side of the apical plug. This finding could be related to the fact that the dentinal tubule diameter increases from the apical to the coronal third [32]. Larger tubule diameters could facilitate the infiltration of materials into the dentinal structure. BR showed no tags, whilst BR-TF demonstrated some tags on the coronal side of the plug. This finding could be explained by the influence of the particle size and flowability of the tested materials [33,34]. The addition of a sealer that has a higher volume of finer and smaller particles, and higher flowability than MTA, could improve the penetration into the dentinal tubules [33,35,36]. BD demonstrated material infiltrations into the dentinal tubules at the coronal part of the apical plug. In agreement, Atmeh et al. [37] reported BD tags in the dentinal tubules. Moreover, WR demonstrated tags in the dentinal tubules; this could be due to the fact that the new premixed calcium silicate materials are fabricated from nanoparticles, and some products include polymers [38]. Over time, and after four months, a calcium silicate material could create mineral depositions in the dentinal tubules that are responsible for killing the deep bacteria in the dentinal structure [39].

Longitudinal sections were observed by a digital optical microscope to evaluate the material-dentin interfaces as well as the distribution of the sealer and cement in the root canal system. In the BD-TF group, the digital optical microscope showed that the sealer was placed on the dentinal walls while the cement was localized in the center of the obturation structure. Therefore, only the sealer was in contact with the dentinal walls and will react with the dentinal fluids and dentinal structure. The BR-TF and WR-TF interfaces were not clearly observed due to the similar colors of the cements and the premixed sealer; however, the sealer would normally lie on the walls, as shown in the BD-TF group.

The interfaces between the different cements and the premixed sealer were first observed by a digital optical microscope. No significant differences were found between the tested groups. Moreover, micro-computed X-ray tomography, as a more specific method, was used to investigate the internal and external structures of the materials in 3D. By using micro-computed X-ray tomography, BD-TF showed a larger number of voids compared to WR-TF and BR-TF. This could be due to the particle sizes, which are nanoscale for the premixed materials [39]. The obtained results (Figure 8) demonstrated similar structures for WR and TF, whilst BD and BR demonstrated different structures with qualitatively bigger particles. Moreover, the premixed, ready-to-use form does not need manual mixing or preparation before the application of the materials. Therefore, the ready-to-use product could reduce the errors that could alter the physicochemical properties of these materials [21]. In addition, Kharouf et al.
[21] demonstrated that a premixed bioceramic could offer lower void percentages than a powder-liquid bioceramic. All the detected voids between the cements and the premixed sealer were localized at the external interface. Various studies [40,41] reported that the closed porosity within the materials could be considered isolated porosity that has almost no potential for bacterial migration. In addition, the coronal seal plays an important role in the success of open apex treatment because microorganisms could penetrate through the root-filling materials along the dentinal walls in areas of open porosity, resulting in root-canal-system reinfection [30]. Consequently, the combination of a premixed cement and a premixed sealer in root canal treatment could produce fewer voids between the two materials. In contrast, higher open pore percentages could be created by the combination of materials with different application modes, and these pores can act as a pathway for microorganisms. Moreover, further studies on the use of combinations of materials that share the same, or have different, chemical compositions should be performed. The findings of the present study support the clinical use of the novel premixed calcium aluminosilicate cement as a root-end filling material. Moreover, dentists should respect the manufacturers' instructions. In addition, as WR is commercialized as a capsule-form, ready-to-use product, there is no information about the expiry date after the opening of the capsule. Several methods and techniques have been proposed in order to improve the filling ability of endodontic materials and to minimize void percentages. The use of ultrasonic activation could improve the obturation quality and enhance the filling ability compared to manual compaction [42,43]. Moreover, the use of different applicators, such as the MAP system, could improve the sealing ability of the apical plugs [44].

Further studies should be performed by using the non-invasive technique of tomography to investigate the void percentages and distinguish between the open and closed porosities. This technique could offer 3D imaging that could contribute to evaluating the quality of the obturation within the root canal system [45]. Moreover, a microleakage experiment should be carried out to evaluate the quality and stability of the coronal and apical seals over time. Marginal apical adaptation evaluation with longer storage periods for teeth obturated with bioceramic materials should also be performed, as these materials are known for their high solubility [46]. In addition, further long-term studies in animals as well as clinical trials should be performed to evaluate the effects of these cements on the periapical tissues. While statistically significant differences were not detected for the marginal apical adaptation between the three tested materials, a further study with a larger sample size should be performed in order to validate the findings. Moreover, only one technique of cement application and one technique of sealer injection were used in the present study. Further research should be performed to also compare different application techniques for the cement [47] and the sealer [48], which could change the filling ability.
Conclusions

Within the limitations of the present in vitro study, the three tested bioceramic cements demonstrated good marginal apical adaptation. The use of a sealer in the open apex model before the final obturation with cement does not improve the quality of the obturation, but it could increase the penetration of the materials into the dentinal tubules. The use of a sealer associated with a cement could create external voids at the interfaces between the materials. The use of BD cement with a premixed sealer could increase the void percentages. The handling of a premixed bioceramic cement (WR) may be easier than would be the case with a powder-liquid cement (BD or BR).

Figure 2. Sample preparation and investigation methods.

Figure 4. Digital micrographs showing the material-dentin interfaces in longitudinal sections of teeth filled with the different cements and of teeth filled with sealer-cements. Biodentine: BD; MTA Biorep: BR; Well-Root PT: WR; Biodentine + Total Fill: BD-TF; MTA Biorep + Total Fill: BR-TF; Well-Root PT + Total Fill: WR-TF. Black arrows indicate the sealer layer at the dentin-material interface.

Table 2. Means and standard deviations of the assigned marginal apical adaptation scores.
Applications of Algebraic Combinatorics to Algebraic Geometry

We formulate a number of new results in Algebraic Geometry and outline their derivation from Theorem 2.4 which belongs to Algebraic Combinatorics. Theorem 8.3 is new, while complete proofs of other results are in [7] and [8].

Introduction

Let $K$ be a field. It will be important to distinguish between an algebraic variety $\mathbf{X}$ defined over $K$ and the set $X := \mathbf{X}(K)$ of $K$-points of $\mathbf{X}$. We denote by $\mathbb{A}$ the affine line, which is a one-dimensional algebraic variety whose $K$-points are $K$. To a $K$-vector space $V$ one canonically associates an algebraic $K$-variety $\mathbf{V}$ such that $\mathbf{V}(K) = V$. To a family $\bar{P} = \{P_1, \dots, P_c\}$ of polynomials on $V$ one associates a morphism $\bar{P} : \mathbf{V} \to \mathbb{A}^c$. For any $\bar{t} \in K^c$ we associate a subscheme $F_{\bar{t}}(\bar{P}) := \bar{P}^{-1}(\bar{t}) \subset \mathbf{V}$. We write $X_{\bar{P}} := \bar{P}^{-1}(0)$.

Definition 1.1. (1) Let $P$ be a polynomial of degree $d$ on a $K$-vector space $V$. We define the rank $r(P)$ as the minimal number $r$ such that $P$ can be written in the form $P = \sum_{i=1}^{r} Q_i R_i$, where $Q_i, R_i$ are polynomials on $V$ of degrees $< d$.
(2) For a family $\bar{P} = \{P_i\}_{1 \le i \le c}$ of polynomials on $V$ we define the rank $r(\bar{P})$ as the minimal rank of the polynomials $P_{\bar{a}} := \sum_{i=1}^{c} a_i P_i$, $\bar{a} \in k^c \setminus \{0\}$.

Example 1.2. If $P : V \to K$ is a non-degenerate quadratic form then $\dim(V)/2 \le r(P) \le \dim(V)$.

We show that the morphisms $\bar{P} : \mathbf{V} \to \mathbb{A}^c$ for families of sufficiently high rank possess a number of nice properties for fields of characteristic $> |\bar{d}| := \max_i \deg(P_i)$. In particular we show that these morphisms are flat and that their fibers are complete intersections with rational singularities. In the case of fields of small characteristic we have to replace the rank $r(\bar{P})$ by the non-classical rank $r_{nc}(\bar{P})$.

Definition 1.3. (1) We denote by $\tilde{P} : V^d \to k$ the multilinear symmetric form given by $\tilde{P}(h_1, \dots, h_d) := \Delta_{h_1} \dots \Delta_{h_d} P$, where $\Delta_h P(x) := P(x + h) - P(x)$.
(2) We define the non-classical rank (nc-rank) $r_{nc}(P)$ to be the rank of $\tilde{P}$.
(3) For a family $\bar{P} = \{P_i\}_{1 \le i \le c}$ of polynomials on $V$ we define the nc-rank $r_{nc}(\bar{P})$ as the minimal nc-rank of the polynomials $P_{\bar{a}} := \sum_{i=1}^{c} a_i P_i$, $\bar{a} \in k^c \setminus \{0\}$.

Remark 1.4. (1) If $\mathrm{char}(k) > d$ then $r(P) \sim r_{nc}(P)$.
(2) In low characteristic it can happen that $P$ is of high rank while $\tilde{P}$ is of low rank.

Example 1.5. Let $K$ be a field of characteristic 2, $V = \mathbb{A}^n$ and $P(x_1, \dots, x_n) = \sum_{1 \le i < j < k < l \le n} x_i x_j x_k x_l$. Then $P$ is of rank $\sim n$, but of nc-rank 3 (see [15]).

We will denote finite fields by $k$ and general fields by $K$.

1.1. Acknowledgement. We thank U. Hrushovski for his help with simplifying the proof of Theorem 6.2.
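To make Definitions 1.1 and 1.3 concrete, here is a small worked example (our illustration, not part of the text), assuming $\mathrm{char}(K) \neq 2$:

```latex
% The split quadratic form on V = K^{2m} is a sum of m products of
% linear forms, so r(Q) <= m; combined with Example 1.2 this gives r(Q) = m.
\[
  Q(x_1,\dots,x_{2m}) \;=\; \sum_{i=1}^{m} x_{2i-1}\,x_{2i}.
\]
% The multilinear form of Definition 1.3(1) is its polar bilinear form:
\[
  \widetilde{Q}(h,h') \;=\; \Delta_{h}\Delta_{h'} Q
  \;=\; \sum_{i=1}^{m}\bigl(h_{2i-1}h'_{2i} + h_{2i}h'_{2i-1}\bigr),
\]
% a bilinear form of matrix rank 2m, so r(Q) and r_nc(Q) agree up to a
% bounded factor, as predicted by Remark 1.4(1).
```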
The Fourier transform of the function $\nu$ is given by $\hat\nu$. Since, by the assumptions of the Lemma, the maps $P_{\bar a}$ are $s$-uniform for all $\bar a \ne 0$, we see that $|\hat\nu(\chi_{\bar a})| \le q^{-s}$ for $\bar a \ne 0$.

Theorem 2.4. There exist explicitly definable functions $e(d), a(d) > 0$ such that for any $d$ the following holds. Let $V$ be an $\mathbb{F}_q$-vector space. Then any polynomial …

Remark 2.5. (1) This result from Algebraic Combinatorics is the cornerstone of our work. (2) A weaker form of this result, which shows the existence of functions $\alpha_d(r)$, $\lim_{r \to \infty} \alpha_d(r) = \infty$, such that polynomials $P : V \to k$ of degree $d$ are $\alpha_d(r(P))$-uniform, was proven earlier in [2]. (3) The effectiveness of the lower bound for $r_{nc}(P)$ is important for some of our results. (4) We provide a proof of a generalization of the result of [2] in the Appendix. (5) The theorem was proved for the partition rank, but as shown in [7] the partition rank is proportional to the nc-rank.

The following result, to which we often refer, follows immediately from Lemma 2.2 and Theorem 2.4.

Theorem 2.8 (Uniform). Let $\bar P = \{P_1, \dots, P_c\}$ be a family of polynomials of degree $\bar d$ and of nc-rank $r \ge r(\bar d)$. Then $\big|\, |F_{\bar t}(\bar P)| / q^{\dim(V)} - q^{-c} \big| \le q^{-(c+2)}$ for all $\bar t \in k^c$.

Irreducibility of fibers

In this section we show a derivation of the following result from Theorem 2.8.

Proof. As is well known (see Krull's principal ideal theorem, [11]), any irreducible component $Y$ of $F_{\bar t}(\bar P)$ is of dimension $\ge n - c$. So it is sufficient to show that the varieties $F_{\bar t}(\bar P)$ are irreducible and of dimension $\le n - c$. We first consider the case when $K$ is a finite field, where we can use the following result (see [10]). Let $k := \mathbb{F}_q$, $k_l := \mathbb{F}_{q^l}$, let $X$ be an $m$-dimensional algebraic variety defined over $k$ and let $c(X)$ be the number of irreducible components of $X$ of dimension $m$ (considered as a variety over the algebraic closure $\bar k$ of $k$). We define $\tau_l(X) := |X(k_l)| / q^{ml}$, $l \ge 1$.

Lemma 3.2. There exists $u \ge 1$ such that $\lim_{l \to \infty} \tau_{lu}(X) = c(X)$.

To prove Theorem 3.1 in the case when $K = \mathbb{F}_q$ we observe that Lemma 3.2 and Theorem 2.8 imply that $\dim(F_{\bar t}(\bar P)) = n - c$ and $c(F_{\bar t}(\bar P)) = 1$. Now we consider the case when $K$ is an algebraic closure of $\mathbb{F}_q$. Since $K = \bigcup_n \mathbb{F}_{q^n}$ we may assume that $\bar t \in \mathbb{F}_{q^n}^c$. So $\dim(F_{\bar t}(\bar P)) = n - c$ and $c(F_{\bar t}(\bar P)) = 1$. Therefore Theorem 3.1 is proven in the case when $K$ is an algebraic closure of a finite field.

We start the reduction of the general case of Theorem 3.1 to the case when $K$ is an algebraic closure of a finite field with a reformulation.

Proof. It is clear that it is sufficient to prove this claim for algebraically closed fields. Our proof of Claim 3.4(n) uses the following result from Model Theory (see [12]).

Claim 3.5. Let $T$ be the theory of algebraically closed fields. Then any first-order property in $T$ true for algebraic closures of finite fields is true for all algebraically closed fields.

Since $\star(n, d)$ is a first-order property in $T$, the Claims 3.4(n), $n \ge 1$, are proven. Since the constant $r(d)$ does not depend on $n$, the validity of Claim 3.4(n) for all $n \ge 1$ implies the validity of Theorem 3.1.

Universal

Theorem 4.2 (Universal). There exists a function $R(d, m)$ such that any family $\bar P$ of polynomials of degree $\bar d$ and nc-rank $\ge R(d, m)$ is $m$-universal, where $K$ is a field which is either finite or algebraically closed.

Proof. To simplify notation we only consider the case when $c = 1$ and therefore $\bar P = P$. Let $W$ be the vector space of affine maps $\phi : K^m \to V$ and $L$ the vector space of polynomials $Q \in K[x_1, \dots, x_m]$ of degree $\le d$. Choose a basis $\lambda_i$, $i \in I$, of the dual space to $L$.
For any polynomial $P$ of degree $d$ on $V$ we define the map $\bar R : W \to K^I$ given by $\phi \mapsto \{R_i(\phi)\}_{i \in I}$. We have to show the surjectivity of the map $\bar R(P)$. This surjectivity follows immediately from the following result (see Claim 3.11 in [7]).

Claim 4.3. For any $r$ there exists $h(r, d, m)$ such that the nc-rank of $\bar R(P)$ is $\ge r$ for any polynomial $P$ on $V$ of nc-rank $\ge h(r, d, m)$.

Weakly polynomial functions

We start the next topic with a couple of definitions.

Definition 5.1. (1) Let $K$ be a field, $V$ a $K$-vector space and $X$ a subset of $V$. A function $f : X \to K$ is weakly polynomial of degree $\le a$ if for any affine subspace $L \subset X$ the restriction of $f$ to $L$ is a polynomial of degree $\le a$. (2) An algebraic $K$-subvariety $\mathbb{X} \subset \mathbb{V}$ satisfies the condition $\star_a$ if any weakly polynomial function of degree $\le a$ on $X$ is a restriction of a polynomial function of degree $\le a$ on $V$.

The following example demonstrates the existence of cubic surfaces $X \subset K^2$ which do not have the property $\star_1$ for any field $K \ne \mathbb{F}_2$: there is a function on $X$ which is weakly linear but cannot be extended to a linear function on $V$.

Definition 5.3. For $e \ge 1$ we say that a field $K$ is $e$-admissible if $K^\star$ contains a subgroup $C$ of size $> e$.

Theorem 5.4 (Extension). There exists an $S = S(a, d)$ such that any hypersurface $Y \subset V$ of degree $d$ and nc-rank $\ge S$ satisfies the condition $\star_a$ if $K$ is an $ad$-admissible field which is either finite or algebraically closed.

Remark 5.5. (1) The main difficulty in a proof of Theorem 5.4 is the non-uniqueness of an extension of $f$ to a polynomial on $V$ in the case when $a > d$. (2) An analogous statement is true for weakly polynomial functions on subsets $X_{\bar P}$ where $\bar P$ is a family of sufficiently high nc-rank.

Proof. We fix the degree $d$ of $P$. The proof consists of two steps. In the first step we construct a family $\mathbb{X}_m \subset \mathbb{V}_m$ of hypersurfaces of degree $d$ and nc-rank $\ge m$, defined over $\mathbb{Z}$, such that the subsets $X_m := \mathbb{X}_m(K) \subset V_m$ satisfy the condition $\star_a$ for all $ad$-admissible fields. In the second step we derive the general case of Theorem 5.4 from this special case.

5.1. The first step.

Proof. The inequality $r_{nc}(Q_m) \ge m$ follows immediately from Lemma 16.1 of [14]. To outline a proof of the second statement we introduce a number of definitions. We fix $m$ and write $X$ instead of $X_m$. Since our field $K$ is $ad$-admissible, the group $K^\star$ contains a finite subgroup $C$ isomorphic to $\mathbb{Z}_{ad}$. … $P^w_a(X)$ is the subspace of weakly polynomial functions of degree $\le a$ on $X$. (7) $P_a(X) \subset P^w_a(X)$ is the subspace of functions $f : X \to k$ which are restrictions of polynomial functions on $V$ of degree $\le a$. (8) For $\theta \in \Theta$ we denote by $P^w_a(X)_\theta \subset P^w_a(X)$ and $P_a(X)_\theta \subset P_a(X)$ the subspaces of $\theta$-eigenfunctions. Since $C \subset K^\star$ we have direct sum decompositions $P^w_a(X) = \oplus_{\theta \in \Theta} P^w_a(X)_\theta$ and $P_a(X) = \oplus_{\theta \in \Theta} P_a(X)_\theta$. Therefore for a proof of Proposition 5.7 it is sufficient to show the equality $P^w_a(X)_\theta = P_a(X)_\theta$ for $\theta \in \Theta$. Fix $f \in P^w_a(X)_\theta$. Since $L \subset V$ is a linear subspace, the restriction $f|_L$ extends to a polynomial on $V$. So (after the subtraction of a polynomial) we may assume that $f|_L \equiv 0$. We show that any weakly polynomial function $f \in P^w_a(X)_\theta$ vanishing on $L$ is identically $0$, so that $f \in P_a(X)_\theta$.

5.2. The second step. The proof of the general case of Theorem 5.4 is based on the following result.

Proposition 5.9. There exists a function $r(d, a)$ such that the following holds. Let $K$ be a field which is either finite or algebraically closed, $V$ a $K$-vector space, $P$ a polynomial of degree $d$ and $W \subset V$ an affine subspace such that the nc-rank of the restriction of $P$ to $W$ is $\ge r(d, a)$.
Then any weakly polynomial function $f$ on $X$ of degree $\le a$ such that $f|_{X \cap W}$ extends to a polynomial on $W$ of degree $\le a$ is a restriction of a polynomial of degree $\le a$ on $V$.

Proof. After a subtraction of a polynomial from $f$ we may assume that $f|_{X \cap W} \equiv 0$. Using induction on the codimension of $W$ we reduce the Proposition to the case when $W \subset V$ is a hyperplane. We fix a direct sum decomposition $V = W \oplus K$ and denote by $t : V \to K$ the projection. Our proof is by induction on $a$. The function $\tilde g := f/t$ is defined on $X \setminus (X \cap W)$. We start with a construction of an extension of $\tilde g$ to a function $g$ on $X$. Given a point $y \in X \cap W$, consider the set $\mathcal{L}$ of lines $L \subset X$ such that $L \cap W = \{y\}$. Since $f$ is weakly polynomial, the restriction $f|_L$ is a polynomial $p_L(t)$ vanishing at $0$. We define $g_L(y)$ as the value of $p_L(t)/t$ at $0$. It is clear that the following two results imply the validity of Proposition 5.9. We extend $\tilde g$ to a function on $X$ whose values at $y \in X \cap W$ are equal to $g(y)$.

Claim 5.11. The function $g : X \to k$ is weakly polynomial of degree $a - 1$.

Now we can finish the proof of Theorem 5.4. Fix $m$ such that $r_{nc}(X_m) \ge r(d, a)$. Let $R(d, m)$ be as in Theorem 4.2. We claim that for any admissible field $K$ which is either finite or algebraically closed, any hypersurface $Y \subset V$ of degree $d$ and nc-rank $\ge R(d, m)$ satisfies $\star_a$. Indeed, let $f$ be a weakly polynomial function of degree $\le a$ on $X = X_P$, where $P : V \to K$ is a polynomial of degree $d$ of nc-rank $\ge R(d, m)$. Since $r_{nc}(P) \ge R(d, m)$ there exists an affine map $\phi : K^m \to V$ such that $P \circ \phi = Q_m$. It is clear that the function $f \circ \phi$ is a weakly polynomial function on $X_m$ of degree $\le a$. Therefore it follows from Proposition 5.7 that the restriction of $f$ to $\mathrm{Im}(\phi) \cap X$ extends to a polynomial on $\mathrm{Im}(\phi)$. It now follows from Proposition 5.9 that $f$ extends to a polynomial on $V$.

Nullstellensatz

Let $k$ be a field and $V$ a finite-dimensional $k$-vector space. We denote by $\mathbb{V}$ the corresponding $k$-scheme, and by $\mathcal{P}(V)$ the algebra of polynomial functions on $\mathbb{V}$ defined over $k$. For a finite collection $\bar P = (P_1, \dots, P_c)$ of polynomials on $V$ we denote by $J(\bar P)$ the ideal in $\mathcal{P}(V)$ generated by these polynomials, and by $\mathbb{X}_{\bar P}$ the subscheme of $\mathbb{V}$ defined by this ideal. Given a polynomial $R \in \mathcal{P}(V)$, we would like to find out whether it belongs to the ideal $J(\bar P)$. It is clear that the following condition is necessary for the inclusion $R \in J(\bar P)$:

(N) $R(x) = 0$ for all $k$-points $x \in \mathbb{X}_{\bar P}(k)$.

Proposition 6.1 (Nullstellensatz). Suppose that the field $k$ is algebraically closed and the scheme $\mathbb{X}_{\bar P}$ is reduced. Then any polynomial $R$ satisfying the condition (N) lies in $J(\bar P)$.

We will show that an analogous result holds for $k = \mathbb{F}_q$ if $\mathbb{X}_{\bar P}$ is of high nc-rank. From now on we fix a degree vector $\bar d = (d_1, \dots, d_c)$ and write $D := \prod_{i=1}^{c} d_i$.

Theorem 6.2. There exists an effective bound $r(\bar d) > 0$ such that for any finite field $k = \mathbb{F}_q$ of characteristic $> d$, any family $\bar P$ of nc-rank larger than $r(\bar d)$, and any polynomial $R$ of degree $a$ such that $q > aD$, the vanishing condition (N) implies that $R$ lies in the ideal $J(\bar P)$.

Proof. We start with the following rough bound (see [4]).

Lemma 6.3. Let $P_1, \dots, P_c \in \mathbb{F}_q[x_1, \dots, x_n]$ be a family of polynomials of degrees $d_i$, $1 \le i \le c$, such that the variety $X \subset \mathbb{A}^n$ is of dimension $n - c$. Then $|X(\mathbb{F}_q)| \le \prod_{i=1}^{c} d_i \, q^{n-c}$.

For the convenience of the reader we reproduce the proof.

Proof. Let $F$ be the algebraic closure of $\mathbb{F}_q$. Then $X(\mathbb{F}_q)$ is the intersection of $X$ with the hypersurfaces $Y_j$, $1 \le j \le n$, defined by the equations $h_j(x_1, \dots, x_n) = 0$, where $h_j(x_1, \dots, x_n) = x_j^q - x_j$. Let
$H_1, \dots, H_{n-c}$ be generic linear combinations of the $h_j$ with algebraically independent coefficients from a transcendental extension $F'$ of $F$, and let $Z_1, \dots, Z_{n-c} \subset \mathbb{A}^n$ be the corresponding hypersurfaces. Intersect $X$ successively with $Z_1, Z_2, \dots$. Inductively we see that for each $j \le n - c$, each component $C$ of the intersection $X \cap Z_1 \cap \cdots \cap Z_j$ has dimension $n - c - j$. Indeed, passing from $j$ to $j + 1$ for $j < n - c$ we have $\dim(C) = n - c - j > 0$, so not all the functions $h_j$ vanish on $C$. Hence by the genericity of the choice of the linear combinations $\{H_j\}$ we see that $H_{j+1}$ does not vanish on $C$ and therefore $Z_{j+1} \cap C$ is of pure dimension $n - c - j - 1$. Thus the intersection $X \cap Z_1 \cap \cdots \cap Z_{n-c}$ has dimension $0$. By Bezout's theorem we see that $|X(\mathbb{F}_q)| \le \prod_{i=1}^{c} d_i \, q^{n-c}$.

Now we can finish the proof of Theorem 6.2. Let $R \in \mathbb{F}_q[x_1, \dots, x_n]$ be a polynomial of degree $a$ vanishing on the set $X(\mathbb{F}_q)$. Suppose that $R$ does not lie in the ideal generated by the $P_i$, $1 \le i \le c$. Then the variety $Z$ cut out by the $P_i$ and $R$ has pure codimension $c + 1$ and the sum of the degrees of its components is at most the product $aD = a\, d_1 \cdots d_c$. As follows from Theorem 2.8, there exists an effective bound $r(\bar d) > 0$ such that the condition $r_{nc}(\bar P) \ge r(\bar d)$ implies the inequality $|X_{\bar P}(\mathbb{F}_q)| \ge q^{\dim(V) - c}/2$. On the other hand, Lemma 6.3 shows that $|Z(\mathbb{F}_q)| \le aD\, q^{\dim(V) - c - 1}$. But we assumed that $X(\mathbb{F}_q) = Z(\mathbb{F}_q)$, which is impossible since $q > aD$. This contradiction shows that $R$ belongs to the ideal generated by the $P_i$, $1 \le i \le c$.

Rational singularities

Definition 7.1. Let $X$ be a normal irreducible variety over a field of characteristic zero and $a : \tilde X \to X$ a resolution of singularities. We say that $X$ has rational singularities if $R^i a_\star(\mathcal{O}_{\tilde X}) = \{0\}$ for $i > 0$.

Remark 7.2. This property of $X$ does not depend on a choice of a resolution $a : \tilde X \to X$.

Theorem 7.3. … Then the variety $\mathbb{X}_{\bar P}$ has rational singularities.

Proof. To simplify the exposition we assume that $\bar d = \{d\}$, so $\bar P = \{P\}$ where $P \in \mathbb{C}[x_1, \dots, x_n]$ is a homogeneous polynomial of degree $d$. We first consider the case when $P \in K[x_1, \dots, x_n]$ where $K/\mathbb{Q}$ is a finite extension. In this case there exists an infinite set $S$ of prime ideals in $O_K$ such that (1) for any $\pi \in S$ the completion $O_K^\pi$ of $O_K$ at $\pi$ is isomorphic to $\mathbb{Z}_p$, where $p = \mathrm{char}(O_K/\pi)$, (2) $P \in O_K^\pi[x_1, \dots, x_n]$ and (3) the reduction $\bar P \in \mathbb{F}_p[x_1, \dots, x_n]$ is of rank $r(P)$. We fix $\pi \in S$ such that $p$ exceeds the degree of $P$. As follows from Theorem A of [1], the inequality

$(\star)\quad \Big| \frac{|X(\mathbb{Z}/p^m\mathbb{Z})|}{p^{m(n-1)}} - 1 \Big| \le p^{-1/2}, \quad m \ge 1,$

would imply that the variety $X_P$ has rational singularities. (1) We fix $l \ge 1$ and write $A_l = \mathbb{Z}/p^l\mathbb{Z}$. … (5) For $\chi \in \Xi$ we denote by $d(\chi)$ the smallest number $d$ such that $\chi|_{p^d A} \equiv 1$. (6) $\Xi_d \subset \Xi$ is the subset of characters $\chi$ such that $d(\chi) = d$. (7) For $\chi \in \Xi$ we define $b(P, \chi) := p^{-nl} \sum_{v \in A^n} \chi(P(v)) = p^{-nl} \sum_{a \in A} \nu(a) \chi(a)$.

Proof. The proof is completely analogous to the proof of Lemma 2.2.

We see that the validity of Theorem 7.3 in the case when $P \in K[x_1, \dots, x_n]$, where $K/\mathbb{Q}$ is a finite extension, is implied by the following result, which is proven in the next section.

Proposition 7.6. There exists a function $r(d, s)$ such that for any polynomial $P : A_l^n \to A_l$ of degree $d$ whose reduction $\bar P : \mathbb{F}_p^n \to \mathbb{F}_p$, $p > d$, is of rank $\ge r(d, s)$, we have $|b(P; \chi)| < p^{-s\,d(\chi)}$ for all $\chi \in \Xi_d$.

In the rest of this section we show how to derive the general case of Theorem 7.3 from the case when $P \in K[x_1, \dots, x_n]$ where $K/\mathbb{Q}$ is a finite extension.

Definition 7.7.
Let $a : \mathbb{X} \to \mathbb{Y}$ be a morphism between complex algebraic varieties such that all the fibers $\mathbb{X}_y$ are normal and irreducible. We write $Y = \mathbb{Y}(\mathbb{C})$ and denote by $Y^a \subset Y$ the subset of points $y$ such that the fiber $\mathbb{X}_y$ has rational singularities.

Claim 7.8. If $a$ is a projective morphism defined over $\mathbb{Q}$ then the subset $Y^a \subset Y$ is also defined over $\mathbb{Q}$.

Proof. The proof is by induction on $d = \dim(Y)$. Suppose that the Claim is known in the case when the dimension of the base is $< d$. Let $t$ be the generic point of $\mathbb{Y}$ and $\mathbb{X}_t$ the fiber of $\mathbb{X}$ over $t$. Fix a resolution $\tilde b : \tilde{\mathbb{X}}_t \to \mathbb{X}_t$ over the field $k(t)$ of rational functions on $\mathbb{Y}$. Then there exists a non-empty open subset $U \subset \mathbb{Y}$ such that $b := \tilde b|_{(a \circ \tilde b)^{-1}(U)}$ is a resolution of $\mathbb{X}_U := a^{-1}(U)$. By definition $Y^a \cap U = \{u \in U \mid R^i b_\star(\mathcal{O}_{\tilde{\mathbb{X}}})_u = \{0\}$ for all $i > 0\}$. Since the sheaves $R^i b_\star(\mathcal{O}_{\tilde{\mathbb{X}}})$ are coherent, we see that the subset $Y^a \cap U$ is defined over $\mathbb{Q}$. On the other hand, the inductive assumption implies that $Y^a \cap (Y \setminus U)$ is defined over $\mathbb{Q}$.

Now we can finish a proof of Theorem 7.3 in the general case. Consider the trivial fibration $\hat a : \mathbb{A}^n \times \mathbb{Y} \to \mathbb{Y}$, where $\mathbb{Y} \subset \mathbb{P}^{n_d}$ is the variety of polynomials of degree $d$ on $\mathbb{A}^n$ and of rank $\ge r(d)$. Let $\mathbb{X} \subset \mathbb{A}^n \times \mathbb{Y}$ be the hypersurface such that $\hat a^{-1}(P) \cap \mathbb{X} = \mathbb{X}_P$, and let $a : \mathbb{X} \to \mathbb{Y}$ be the restriction of $\hat a$ to $\mathbb{X}$. As follows from Theorem 3.1 and Proposition III C of [14], all fibers of $a$ are irreducible and normal. For a proof of Theorem 7.3 we have to show that $Y^a = Y$. The validity of Theorem 7.3 in the case when $P \in K[x_1, \dots, x_n]$, where $K/\mathbb{Q}$ is any finite extension, shows that any point of $Y$ defined over a finite extension $K$ of $\mathbb{Q}$ belongs to $Y^a$. Since the subset $Y^a$ of $Y$ is defined over $\mathbb{Q}$, we see that $Y^a = Y$.

$p$-adic bias and rank

Let $V_l = A_l^N$, and let $P : V_l \to A_l$ be of degree $d$. We denote by $\bar P$ its reduction mod $p$. Let $\chi : A_l \to \mathbb{C}^\star$ be the character $\chi(a) = e(a/p^l) = e^{2\pi i a/p^l}$. We assume that $p > d$.

Proposition 8.1. Let $\chi : A_l \to \mathbb{C}^\star$ be a primitive character. For any $s > 0$ there exists $r = r(d, s)$ such that if $\bar P$ is of rank $> r$ then $|b(P; \chi)| < p^{-sl}$.

Proposition 8.1 will follow from the following more general result:

Proposition 8.2 (B). For any $s > 0$ there exists $r = r_B(d, s)$ such that for any polynomial $S$ of degree $< d$ and any $m$, if $\bar P$ is of rank $> r$ then $\big| \mathbb{E}_{x \in V_l^d}\, e\big(P(x)/p^l + S(x)/p^m\big) \big| < p^{-sl}$.

Proof. We will need the following lemma, which we prove in the Appendix.

Lemma 8.3. Let $s > 0$. There exists $r_l(d, s)$ such that if $\bar P$ is of rank $> r_l(d, s)$ then for any polynomial $S$ of degree $< d$ and any $m$ we have $\big| \mathbb{E}_{x \in V_l^d}\, e\big(P(x)/p^l + S(x)/p^m\big) \big| < p^{-ls}$.

It suffices then to prove the theorem for multilinear symmetric functions. We will from now on assume that $P : V_l^d \to A_l$ is a multilinear symmetric function. From Lemma 8.3 we have that for $\bar P$ of rank $> r_1(d, s)$, for any $S$ of degree $< d$ and any $m$, $\big| \mathbb{E}_{x \in V_1^d}\, e\big(P(x)/p + S(x)/p^m\big) \big| < p^{-s}$.

For any function $f : V_l^d \to \mathbb{T}$ we define the $U^d$ Gowers norms: … It is well known that the $U^d$ norms form a monotone sequence $\|e(f)\|_{U^1} \le \|e(f)\|_{U^2} \le \cdots$. Furthermore, if $P$ is a symmetric degree-$d$ multilinear function, then $e(P/p)$ …

We prove Proposition 8.2 by induction on $d, l$. Let $P$ be such that $\bar P$ is of rank $> c(d, s)$, where

$c(d, s) = \max\Big( \max_{a \le d} r_a(a, s),\ r_1\big(d,\ 2^d r_B(d - 1, 2s + 1) + 2ds + 1\big) \Big).$

For any $d$ and $l \le d$ this is known (by the definition of $r_a(a, s)$). So we assume $l > d$. Suppose that Proposition 8.2 holds for all degree-$a$ polynomials, $1 \le a < d$, and that for degree-$d$ polynomials it holds for $d < j < l$. Suppose the proposition does not hold for $l$.
Then there exists a degree-$d$ multilinear polynomial $P : V_l^d \to A_l$ of rank $> c(d, s)$, some polynomial $S : V_l^d \to A_l$ of degree $< d$ and $m \in \mathbb{N}$ such that

$\big| \mathbb{E}\, e\big(P(x)/p^l + S(x)/p^m\big) \big| \ge 1/p^{ls}.$

From this it follows that

$\frac{1}{p^{2ls}} \le \Big| \mathbb{E}_{x \in V_l^d}\, e\big(P(x)/p^l + S(x)/p^m\big) \Big|^2 = \mathbb{E}_{x, y \in V_l^d}\, e\big( (P(x+y) - P(x))/p^l + (S(x+y) - S(x))/p^m \big) = \mathbb{E}_{t \in V_1^d}\, \mathbb{E}_{y \in V_l^d :\, y \equiv t\,(p)}\, \mathbb{E}_{x \in V_l^d}\, e\big( (P(x+y) - P(x))/p^l + (S(x+y) - S(x))/p^m \big).$

Fix $t$ and consider the inner average: by shift invariance in $x$ we have

$\mathbb{E}\, e\big( (P(x + y' + y) - P(x + y'))/p^l + (S(x + y' + y) - S(x + y'))/p^m \big) = \mathbb{E}_{x \in V_l^d} \Big| \mathbb{E}_{y \equiv t\,(p)}\, e\big( P(x+y)/p^l + S(x+y)/p^m \big) \Big|^2.$

By the induction hypothesis this is $\le \frac{1}{p^{2(l-d)s}}$: indeed we have

$\mathbb{E}_{y \equiv t\,(p)}\, e\big( P(x+y)/p^l + S(x+y)/p^m \big) = \mathbb{E}_{y \in V_l^d,\, y \equiv 0\,(p)}\, e\big( P(x+t+y)/p^l + S(x+t+y)/p^m \big) = \mathbb{E}_{y \in V_{l-1}^d}\, e\big( P(x+t+py)/p^l + S(x+t+py)/p^m \big) = \mathbb{E}_{y \in V_{l-1}^d}\, e\big( P(y) \dots \big).$

Now $\Delta_y P$ is of degree $< d$, so by the induction on the degree … $\Delta_t \bar P$ is of rank $< r_B(d-1, 2s+1)$. This implies that … But this now implies that … and we are given that $\bar P$ is of rank $> r_1\big(d,\ 2^d r_B(d-1, 2s+1) + 2ds + 1\big)$, a contradiction.

Appendix

We use the notation of Section 8. We modify the argument in [2] to obtain the following claim:

Lemma 9.1. Let $s > 0$. There exists $r_l(d, s)$ such that if $\bar P$ is of rank $> r_l(d, s)$ and $p > d$ then for any polynomial $S$ of degree $< d$ and any $m$ we have $\big| \mathbb{E}_{x \in V_l^d}\, e\big(P(x)/p^l + S(x)/p^m\big) \big| < p^{-ls}$.

Proof. Let $q = p^l$, and let $P : V = V_l^d \to A_l$ be a polynomial of degree $d$. Let $e_q(t) = e^{2\pi i t/q}$. Denote $\mu = |\mathbb{E}_{x \in V}\, e(P(x))|$. We assume that $q^{-s} \le |\mathbb{E}_{x \in V}\, e(P(x) + qS(x)/p^m)| \le \| e(P) \|_{U^d}$, so that $\mu = |\mathbb{E}_{x \in V}\, e(P(x))| \ge q^{-2^d s}$. Replacing $s$ with $2^d s$ we may assume that $\mu > q^{-s}$.

Let $E \subset A_l^k$ be the set defined as follows: … Fix $x \in V$ and pick $z_1, \dots, z_k \in V$ uniformly at random. For $a \in E$ write $a \cdot z$ for $\sum_{i=1}^{k} a_i z_i$. For $a \in E$ let $W_a(z)$ be the random variable defined by $W_a(z) = e_q(\Delta_{a \cdot z} P(x))$.

Claim 9.2. For $a \in E$ we have $\mathbb{E} W_a = \mu\, e_q(-P(x))$.

Proof. Since $a$ has a coordinate that is coprime to $p$:

$\mathbb{E}_{z \in V^k} W_a(z) = \mathbb{E}_{z \in V^k}\, e_q(\Delta_{a \cdot z} P(x)) = \mathbb{E}_y\, e_q(\Delta_y P(x)) = \mu\, e_q(-P(x)).$

Claim 9.3. Let $k \ge 2$. For $(a, b) \in E^2$ outside a set $F$ of size $|E|\, q(1 - 1/p)(q/p)^{k-1}$ we have that $\mathbb{E}\, W_a \overline{W_b} = \mathbb{E} W_a\, \mathbb{E} \overline{W_b}$.

Proof. Fix $a, b \in E$. We calculate

$\mathbb{E}\, W_a \overline{W_b} = \mathbb{E}_{z \in V^k} W_a(z) \overline{W_b(z)} = \mathbb{E}_{z \in V^k}\, e_q(\Delta_{a \cdot z} P(x) - \Delta_{b \cdot z} P(x)) = \mathbb{E}_{z \in V^k}\, e_q(P(x + a \cdot z) - P(x + b \cdot z)).$

Choose an index $i$. Now make the change of variables $z_i \to z_i - \sum_{m \ne i} (a_m/a_i) z_m$, and then $z_i \to z_i/a_i$. We get … If for some $j \ne i$ we have $(b_j - a_j b_i/a_i,\ p) = 1$, then the latter is $\mathbb{E}_{z_i}\, e_q(\Delta_{z_i} P(x))\, \mathbb{E}_{z_j}\, e_q(\Delta_{z_j} P(x)) = \mathbb{E} W_a\, \mathbb{E} \overline{W_b}$. Otherwise, for all $i, j$ we have $(b_j - a_j b_i/a_i,\ p) \ne 1$, i.e. $p \mid (b_j a_i - a_j b_i)$ for all $i, j$. We count the number of such pairs $(a, b)$: we have $|E|$ choices for $a$; once $a$ is chosen we have $q(1 - 1/p)$ choices for $b_1$, and then $(q/p)^{k-1}$ choices for $(b_2, \dots, b_k)$. All together there are $|E|\, q(1 - 1/p)(q/p)^{k-1}$ such pairs.

Let $\Gamma : E \setminus \{0\} \to A_l$ be defined as follows:

$\Gamma\big((a)_{a \in E}\big) = \arg\min_t \Big| \frac{1}{|E|} \sum_{a \in E} e_q(a) - e_q(-t)\, \mu \Big|,$

i.e. the value $t$ for which the expression on the right is minimal.
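To make the bias and uniformity quantities used above (the averages $\mathbb{E}_x\, e_p(P(x))$ appearing in Definition 2.1 and Lemma 9.1) concrete, here is a small numerical sketch of ours, not taken from the paper: it brute-forces the exponential sum for a rank-1 quadratic (a product of two linear forms, in the sense of Definition 1.1) and for a non-degenerate quadratic form. The choices $p = 5$ and $n = 4$ are arbitrary.

```python
import itertools, cmath

p, n = 5, 4  # a small prime field F_p and dimension n (arbitrary choices)

def bias(P):
    """|E_x e^{2*pi*i*P(x)/p}| over x in F_p^n, computed by brute force."""
    total = sum(cmath.exp(2j * cmath.pi * (P(x) % p) / p)
                for x in itertools.product(range(p), repeat=n))
    return abs(total) / p ** n

# Rank 1: P = Q*R with deg Q = deg R = 1, e.g. P(x) = x0 * x1.
low_rank = lambda x: x[0] * x[1]

# Full rank: the non-degenerate diagonal form x0^2 + x1^2 + x2^2 + x3^2.
high_rank = lambda x: sum(xi * xi for xi in x)

print("rank-1 quadratic   :", bias(low_rank))   # equals 1/p = 0.2
print("full-rank quadratic:", bias(high_rank))  # equals p^{-n/2} = 0.04 (Gauss sums)
```

The rank-1 form keeps a bias of $1/p$ no matter how large $n$ is, while the full-rank form's bias decays like $p^{-n/2}$; this is the qualitative content of the bias-rank connection exploited throughout the paper.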
LOW ALLOY STEEL SHAFT SURFACE REGENERATIVE WELDING WITH MICRO-JET COOLING

In this paper, the effect of surface preparation using an innovative welding technology with micro-jet cooling is reported. Substantial information about the parameters of surfacing steel machine elements with the micro-jet cooling process is given, together with evidence concerning the influence of various micro-jet parameters on the metallographic structure of a machine shaft after surface welding. The metallographic and tribological properties of the welds were tested, and the tribological interactions of the solid shaft surfaces were examined after surface welding.

INTRODUCTION

The purpose of metal surfacing is to obtain the best possible coating of the welded element. Usually, after regeneration, machine components regain good operating properties. Crankshafts are subjected to wear due to prolonged friction with cooperating parts. For example, in many vehicles the gearbox shaft should be regenerated after covering 150 Mm (150,000 km), and the shafts of mining heading machines are regenerated after only 4 months of intensive exploitation, unless the head of the combine has contact with hard inclusions (in which case the service life of mining combine elements is even shorter) [1,2]. The surfacing process is mainly used to apply a hard or wear-resistant layer to the base metal [3]. It is a very important method of extending the life of machines, tools, and construction equipment. The main goal of the paper is to explore the possibilities of surfacing with micro-jet cooling. For welding machine parts with the use of micro-jet cooling, only welding processes in which no slag is formed can be used. The biggest application of micro-jet cooling is in the Metal Inert Gas (MIG), Metal Active Gas (MAG) and Tungsten Inert Gas (TIG) processes. The metallographic structure was analysed in terms of the micro-jet parameters. To obtain various amounts of ferrite, bainite and martensite in this welding method, it is necessary to determine the main parameters of the process, such as:
- the diameter of the stream of the micro-jet injector,
- the type of micro-jet gas or gas mixture,
- the micro-jet gas pressure,
- the number of jets.

Welding with micro-jet technology has been very carefully tested for low alloy welding [4,5]. In a low alloy steel weld, the mechanical properties correspond with the chemical composition and metallographic structure [6,7]. In the case of hardfacing, it is important to obtain a martensitic structure in order to increase the hardness of the coating and its resistance to abrasive wear [8-10]. The goal of this paper is to describe the possibilities of the shaft MAG surface welding process with micro-jet cooling, which allows obtaining various contents of ferrite, martensite and bainite. In the case of ferrite, it is important that the grain is as small as possible, which translates into better tribological properties of regenerated shafts and does not lead to fissures. In weld metal deposit there are three morphological varieties of ferrite: grain boundary ferrite, side plate ferrite and acicular ferrite, of which acicular ferrite is the most advantageous due to its small grain size.

MATERIALS AND METHOD

A test stand for hardfacing was made. To obtain various amounts of ferrite, martensite and bainite in the shaft surface weld, the welding process was carried out with a micro-jet injector with a variable number of micro-streams. The diameter of the streams was on the level of 50 µm and 60 µm.
To analyse surface welding with micro-jet cooling, shafts of 40NiCrMo6 steel with a diameter of 34 mm were chosen. An example of a machine shaft supplied for regenerative welding with the use of micro-jet cooling is shown in Figure 1. The surface weld was prepared by welding with micro-jet cooling with varied parameters. Two micro-jet gases (nitrogen and helium) were tested in the cooling process just after surface welding. Other important micro-jet parameters were also varied: gas pressure, micro-jet gas pressure and micro-jet diameter. The main data about the welding parameters are shown in Table 1, of which only a fragment survives in this copy:

Table 1. Main welding parameters (excerpt):
- diameter of wire: 1.2 mm
- standard current: 220 A

The MAG surface welding process with micro-jet cooling was carefully tested. Helium was chosen for the micro-jet cooling because of its good cooling properties. Nitrogen was used for the micro-jet cooling under the assumption that slight nitriding of the surface welds could be observed.

RESULTS AND DISCUSSION

The goal of the study was to examine the varying structure of a typical surfaced shaft after welding. Steels with a carbon content of about 0.3% should be preheated to a temperature of about 200°C. The possibility of welding cracks in these steel grades is caused by the presence in the Weld Metal Deposit (WMD) of such structures as martensite, bainite and grain boundary ferrite. Such a structure can promote cracking after welding. Nevertheless, it was decided to check the possibility of surface welding the steel shaft without preheating, on the assumption that micro-jet cooling might reduce the size of the ferrite significantly. The micro-jet gas could influence both the cooling conditions and the chemical composition of the WMD (the nitrogen content in the WMD) (Figures 2, 3 and 4). The important t8/5 welding parameter, which gives the cooling time between 800 and 500°C, where the most important austenite transformation occurs, is largest (on the level of 10 s) when micro-jet cooling is not used. It is much smaller (on the level of 6 s) when nitrogen micro-jet cooling is used, and smaller still (on the level of 4 s) when helium micro-jet cooling is used. The heat transfer coefficient of the tested micro-jet gases influences the cooling conditions of the welds (Figures 3 and 4). This corresponds to the varied martensite content. This is due to the different thermal conductivity coefficients (λ·10^5), which for N2 and Ar at 273 K are on the level of 23.74 J/(cm·s·K). Helium gives much stronger cooling conditions due to its higher conductivity coefficient (λ·10^5), which for He is 143.4 J/(cm·s·K). A typical WMD had a similar chemical composition in all tested cases. The chemical composition of the WMD after MAG welding with and without micro-jet cooling is presented in Table 2. For standard MAG welding without micro-jet cooling and for MAG welding with helium micro-jet cooling, the amount of nitrogen was always on the level of 55 ppm. For MAG welding with nitrogen micro-jet gas cooling, the amount of nitrogen was much higher, on the level of 70 ppm. After the chemical analyses, the metallographic structure was examined. The presence in the WMD of such structures as martensite, bainite and grain boundary ferrite was identified. Weld cracks were not observed in any of the examined cases, especially where micro-jet cooling was used.
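As a rough back-of-the-envelope comparison (ours, not a calculation from the paper), one can set the t8/5 times quoted above against the conductivity coefficients of the micro-jet gases. The "extra cooling rate" heuristic below, the reciprocal-time gain attributed to the micro-jet, is an assumption made purely for illustration.

```python
# Values quoted in the text above.
t85 = {"no micro-jet": 10.0, "nitrogen": 6.0, "helium": 4.0}  # seconds
conductivity = {"nitrogen": 23.74, "helium": 143.4}           # lambda*10^5, J/(cm*s*K)

base = t85["no micro-jet"]
for gas, lam in conductivity.items():
    # Heuristic: extra cooling rate = 1/t8/5(gas) - 1/t8/5(no micro-jet).
    extra_rate = 1.0 / t85[gas] - 1.0 / base
    print(f"{gas:8s}: t8/5 = {t85[gas]:.0f} s, "
          f"extra rate = {extra_rate:.3f} 1/s, conductivity = {lam:.2f}")
```

Helium's roughly sixfold higher conductivity buys only a bit more than a twofold larger extra cooling rate in this toy comparison, consistent with the cooling effect not scaling linearly with gas conductivity alone.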
An additional success of the research was the possibility of controlling the martensite content. For the sake of transparency in the interpretation of the results, it was decided to compare only the content of martensite in the weld. Information about the martensite amount in the WMD is given in Tables 3 and 4. Micro-jet cooling does not have a large influence on the chemical composition of the weld. In the case of nitrogen micro-jet cooling, traces of nitrides were additionally observed. It was observed that micro-jet cooling is able to increase the content of martensite to 65% and to seriously reduce the size of the ferrite grains (Figures 5 and 6). It is not as easy to precisely count the martensite amount as it is for the other typical low alloy steel weld phases (acicular ferrite, grain boundary ferrite, side plate ferrite) [11], so the martensite amount was only estimated. Cooling allows the content of martensite in the weld to be increased from 45% to 65%.

After the microscope observations, microhardness measurements were carried out (Figures 7, 8 and 9). Standard surface welding could not guarantee high hardness (Figure 7): the hardness of the surface weld of the shaft decreased with distance from the weld face, and the maximum value was much below 450 HV. Much higher hardness values were observed after welding with helium micro-jet cooling (Figure 8); surface shaft welding with helium micro-jet cooling allowed the hardness to exceed 450 HV. The effect of nitrogen micro-jet cooling on the steel WMD hardness is shown in Figure 9. Surface shaft welding with nitrogen micro-jet cooling also allowed the hardness to exceed 450 HV. This is explained by the increased nitrogen content in the WMD (from 55 ppm to 70 ppm).

Finally, tribological tests were done using the Amsler machine. The results of the Amsler tests are shown in Table 5, from which it was found that the sample welded with micro-jet cooling has the highest resistance to abrasive wear. Favourable micro-jet gases are nitrogen and helium. After the hardness analysis, a Charpy V impact test of the deposited metal was carried out. The Charpy tests were done at a temperature of +20°C on 5 specimens extracted from each weld metal (Table 6). The impact toughness values of all the WMDs are comparable among themselves. The impact toughness of this steel (with 0.31% C) is not very high; however, a slight influence of micro-jet cooling on the elastic properties of the steel can be noticed. Helium micro-jet cooling refines the ferrite grain, which can lead to a small increase in impact strength. Cooling with a nitrogen micro-jet allows the nitrogen content in the WMD to increase, which adversely affects the elastic properties of the material. Helium, with its minimal influence on the weld chemistry, could be regarded as a good choice. After the impact toughness analysis, a fractography test was conducted. Fractographic methods are routinely used to determine the cause of failure in engineering structures. Figure 10 presents a typical fracture of the WMD after MAG welding without micro-jet cooling, and Figure 11 shows a typical fracture of the WMD after MAG welding with helium micro-jet cooling. Comparing the two, it can be deduced that after welding with helium micro-jet cooling the fracture of the WMD is more ductile than after welding with nitrogen micro-jet cooling.

CONCLUSIONS

The micro-jet surfacing technology was tested for surface welding with various micro-jet parameters. Micro-jet technology could be treated as a very beneficial process during shaft surfacing.
A structure change was observed, especially an increase in martensite content and ferrite fragmentation in the metal deposit. On the basis of the investigation it is possible to conclude that:
- micro-jet cooling could be treated as an important element of the MAG welding process;
- it is possible to steer the metallographic structure (martensite, nitrides);
- it is possible to steer the weld hardness by varying the micro-jet parameters;
- there is no great difference between the influence of argon and helium on the cooling conditions;
- nitrogen used for micro-jet cooling (instead of argon or helium) is responsible for the highest hardness among all tested cases;
- traces of nitrides were observed when nitrogen was used for micro-jet cooling (they were not observed with argon);
- the highest resistance to abrasive wear was shown by the sample welded with micro-jet cooling;
- micro-jet cooling does not have a noticeable influence on the impact toughness of the WMD.
Incidence trends and survival prediction of urothelial cancer of the bladder: a population-based study

Background: The aim of this study is to determine the incidence trends of urothelial cancer of the bladder (UCB) and to develop a nomogram for predicting the cancer-specific survival (CSS) of postsurgery UCB at a population-based level based on the SEER database.

Methods: The age-adjusted incidence of UCB diagnosed from 1975 to 2016 was extracted, its annual percentage change was calculated, and joinpoint regression analysis was performed. A nomogram was constructed for predicting the CSS in individual cases based on independent predictors. The predictive performance of the nomogram was evaluated using the consistency index (C-index), the net reclassification index (NRI), the integrated discrimination improvement (IDI), a calibration plot, and the receiver operating characteristic (ROC) curve.

Results: The incidence of UCB first increased and then decreased from 1975 to 2016; however, the overall incidence increased over that period. The age at diagnosis, ethnic group, insurance status, marital status, differentiation grade, AJCC stage, regional lymph node removal status, chemotherapy status, and tumor size were independent prognostic factors for postsurgery UCB. The nomogram constructed based on these independent factors performed well, with a C-index of 0.823 and a close fit to the calibration curve. Its ability to predict the CSS of postsurgery UCB is better than that of the existing AJCC system, with NRI and IDI values greater than 0 and ROC curves exhibiting good performance for 3, 5, and 8 years of follow-up.

Conclusions: The nomogram constructed in this study might be suitable for clinical use in improving the clinical predictive accuracy of long-term survival for postsurgery UCB.

Introduction

Urothelial cancer of the bladder (UCB) is the most common pathological type of bladder cancer, and its incidence is especially high in Western countries [1,2]. The incidence of this cancer is closely related to tobacco consumption and exposure to occupational carcinogens [3,4]. However, the incidence of UCB may have changed over the past few decades due to industrial developments, the implementation of policies for controlling tobacco, and progress in disease diagnosis and treatment [5,6]. There have been few analyses of the incidence of UCB specifically, despite many studies researching the incidence trends of bladder cancer [7].

UCB is most frequently diagnosed in males and people older than 55 years [7]. Surgical resection is the mainstay treatment for UCB, but many people, especially those presenting with muscle invasion, have poor outcomes despite receiving surgery and systemic treatment [8]. Given the aforementioned situation, this study analyzed trends in the incidence of UCB and established a nomogram, based on a Cox proportional-hazards regression analysis of the prognostic factors, for predicting the survival of UCB after surgery based on data obtained from the Surveillance, Epidemiology, and End Results (SEER) database [17].

Data collection and definition

The data were extracted retrospectively from the SEER database and downloaded using SEER*Stat software (version 8.3.6, National Cancer Institute). To identify UCB patients, we searched the database using the tumor-site codes (C67.0-C67.9) and the ICD-O-3 histology code (8130/3). To analyze the trends in the incidence of UCB, the age-adjusted incidence rate of UCB diagnosed from 1975 to 2016 was calculated.
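To make the annual percentage change (APC) calculation described under Statistical analyses below concrete, here is a minimal sketch (ours, not the authors' code) of fitting a log-linear trend to age-adjusted rates. The standard definition APC = 100·(e^β − 1) from the regression log(rate) = α + β·year is assumed, and the toy rate values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical age-adjusted incidence rates per 100,000 (illustrative only).
years = np.arange(1975, 1981)
rates = np.array([3.8, 3.9, 4.1, 4.2, 4.4, 4.5])

# Log-linear trend: log(rate) = alpha + beta*year, so APC(%) = 100*(exp(beta) - 1).
X = sm.add_constant(years)
fit = sm.OLS(np.log(rates), X).fit()  # weighted least squares would add case-count weights
beta = fit.params[1]
lo, hi = fit.conf_int()[1]            # 95% CI for beta
print(f"APC = {100*(np.exp(beta)-1):.1f}% "
      f"(95% CI {100*(np.exp(lo)-1):.1f}% to {100*(np.exp(hi)-1):.1f}%)")
```

Joinpoint regression then searches for the year(s) at which the slope β changes significantly, fitting a separate APC to each segment.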
To establish a nomogram for analyzing survival, the following variables for UCB were extracted from the SEER database: age at diagnosis, sex, ethnic group, primary site, grade, metastasis stage, derived AJCC stage, regional lymph nodes removed, radiation status, chemotherapy status, insurance status, marital status, tumor size, survival time, and cancer-specific death status. We only included patients who received surgery, which were identified with "Surgery performed" record on the item "Reason no cancer-directed surgery." Other exclusion criteria were (1) only autopsy findings being available, (2) diagnosis based on direct visualization without microscopic confirmation, (3) not the first malignant primary indicator, and (4) incomplete information for the above-listed variables. Statistical analyses The data for the age-adjusted incidence rate of UCB from 1975 to 2016 was used to calculate the annual percentage change (APC) in the incidence using the weighted least-squares method. Joinpoint regression analysis (version 4.7.0, Joinpoint, IMS, Calverton, MD, USA) was performed to delineate trends in the incidence of UCB from 1975 to 2016. Considering the large difference in the incidence between males and females, the APC analysis and the joinpoint regression analysis were performed with stratification by sex. All of the patients included in the cancer-specific survival (CSS) analysis were randomly divided into a training cohort and a validation cohort at the ratio of 7:3. We first used the data in the training set to find independent prognostic factors and construct a nomogram, and then applied the data to the validation cohort to evaluate the distinguishability, calibration, and clinical effectiveness of the prediction model. Differences in the distribution of categorical variables between the training cohort and validation cohort were estimated using the chi-square test. Differences in age between the two cohorts were assessed using Student's t test, and differences in survival time were assessed using the log-rank test. Statistical analyses to identify risk factors were performed by applying the backward stepwise selection method of multivariable Cox regression to the training cohort. A nomogram was then established based on the identified risk factors. The distinguishability of the nomogram was evaluated using the consistency index (C-index) calculated by Harrell's C statistic, the net reclassification index (NRI), and the integrated discrimination improvement (IDI). The C-index was used to describe the difference between the real values and those predicted by the model. This index ranges from 0.5 (no discrimination) to 1 (excellent discrimination), with a value of ≥ 0.70 indicating that the distinguishability of the prediction model is acceptable. Values of NRI and IDI of > 0 (compared with the traditional AJCC staging system) indicate that the prediction ability of the nomogram is better than that of the AJCC staging system, while negative values would indicate that it is inferior. The calibration of the nomogram was evaluated using a calibration plot, on which the abscissa shows the predicted values for different groups and the ordinate shows the actual probabilities. The value points for different groups are connected by line segments to form a calibration line. A calibration curve that is closer to the standard line of y = x indicates a smaller error between the model's prediction and the actual situation, and hence a better calibration capability of the model. 
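As an illustration of the modeling pipeline just described (multivariable Cox regression followed by Harrell's C-index), here is a minimal sketch using the lifelines package; the data frame and its column names are hypothetical stand-ins for the SEER variables, not the study's actual data.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Hypothetical analysis-ready table (column names are illustrative).
df = pd.DataFrame({
    "months":    [12, 40, 65, 7, 90, 23, 55, 30, 48, 15],
    "css_event": [1, 0, 0, 1, 0, 1, 0, 1, 0, 1],   # 1 = death from UCB
    "age":       [71, 58, 64, 79, 52, 68, 60, 75, 66, 73],
    "stage":     [3, 1, 2, 4, 1, 3, 2, 4, 2, 3],   # AJCC stage, coded ordinally
    "chemo":     [0, 1, 1, 0, 1, 0, 1, 0, 1, 0],
})

cph = CoxPHFitter(penalizer=0.1)  # small penalty keeps the toy fit stable
cph.fit(df, duration_col="months", event_col="css_event")
cph.print_summary()

# Harrell's C-index: concordance between predicted risk and observed survival.
risk = cph.predict_partial_hazard(df)
print("C-index:", concordance_index(df["months"], -risk, df["css_event"]))
```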
The clinical effectiveness of the nomogram was evaluated using the receiver operating characteristic (ROC) curve. Statistical analyses were performed using the R software (version 3.5.1; https://www.r-project.org/). Statistical significance was defined as a two-sided probability value of < 0.05.

Among female UCB patients, the age-adjusted incidence rate increased slightly from 3.8 per 100,000 persons in 1975 to 5.3 per 100,000 persons in 2016. Only one joinpoint was identified (Fig. 1), with the incidence rate showing a slowly increasing trend from 1975 to 1996 (APC = 2.0%, 95% CI = 1.7-2.4%, P < 0.0001), followed by a slowly decreasing trend from 1997 to 2016 (APC = −0.8%, 95% CI = −1.2% to −0.5%, P < 0.0001).

In terms of treatment modalities, the regional lymph nodes were removed in 2215 (19.24%) patients, 3814 (33.13%) had received chemotherapy, and 718 (6.24%) had received radiation. There were no significant differences between the training and validation cohorts in sex, ethnic group, tumor size, marital status, insurance status, differentiation grade, metastasis stage, AJCC stage, tumor location, regional lymph node removal status, chemotherapy status, or radiation status (P > 0.05). The patients in the validation cohort were slightly older than those in the training cohort (P = 0.04). The log-rank test showed that the survival time did not differ significantly between the training and validation cohorts (P = 0.3).

Independent prognostic factors and construction of the nomogram

Multivariable Cox regression with the backward stepwise selection method revealed that the statistically significant factors affecting postsurgery UCB survival in the training cohort were the age at diagnosis, ethnic group, insurance status, marital status, differentiation grade, AJCC stage, regional lymph node removal status, chemotherapy status, and tumor size (the hazard ratios are listed in Table 2). These independent prognostic factors were used to construct a prognostic nomogram for predicting the 3-, 5-, and 8-year CSS of postsurgical patients with UCB (Fig. 2). The nomogram shows that the age at diagnosis and the AJCC stage were the strongest factors influencing the prognosis. The C-index of the nomogram was 0.823, and its NRI and IDI values relative to the AJCC staging system were greater than 0 (P < 0.001). These performance indicators demonstrate that the nomogram showed better discrimination than the AJCC staging system. The calibration plots showed excellent consistency between the observed and nomogram-predicted probabilities in the training and validation cohorts (Fig. 3). The ROC curve of the predictive model showed good clinical effectiveness in both the training cohort (Fig. 4A), with areas under the ROC curve (AUCs) for 3, 5, and 8 years of follow-up of 0.831, 0.808, and 0.789, respectively, and the validation cohort (Fig. 4B), with corresponding AUCs of 0.811, 0.798, and 0.789.

Discussion

This study analyzed incidence trends in order to establish a survival predictive model for postsurgery UCB based on data in the SEER database. From 1975 to 2016, the overall incidence rate showed an upward trend, despite a slight decrease from the beginning of the twenty-first century. The overall upward trend in the incidence of UCB over the past 40 years is consistent with the results of many studies, although the types of pathologies investigated have varied [18-22]. This increase is mostly attributable to progress in the development of diagnostic tools, especially in ultrasonography, computed tomography, and magnetic resonance imaging [23].
Another possible reason is the global trend of population aging, since this cancer is more common in the elderly. At the same time, the joinpoint regression found that the incidence of UCB was not always rising: in men and in the general population it went through a phase of rapid rise, then slow rise, and then decline. We speculate that the downward trend may be related to the control of tobacco consumption. Tobacco smoking is the main factor underlying the incidence of bladder cancer [24]. A report from the Centers for Disease Control and Prevention showed that the smoking rate decreased markedly in American adults over the past few decades, from 42.4% in 1965 to 16.8% in 2014 [18]. It should be noted that there is a long latency between tobacco exposure and bladder cancer diagnosis [25], so the downward trend only began to appear around 2000. Another issue is that the incidence of female UCB declined in earlier years. A possible explanation is that women have a lower bladder cancer incidence because of potential biologic factors, so that the decrease in tobacco consumption exerted a more significant impact on them.

Our study found that the prognosis is worse for postsurgery UCB patients who are single, separated, divorced, or widowed than it is for married patients. We speculate that this could be due to the mental status of UCB patients affecting their survival. It has been shown that single patients with bladder cancer are more likely to have a posttreatment psychiatric diagnosis than are married patients, and that the prognosis of bladder cancer is worse in patients with a psychiatric diagnosis [26]. Other analyses of the prognosis of bladder cancer using data from the SEER database have also found that marital status can affect the prognosis of the disease [27,28].

We further found that the prognosis is worse in patients without insurance than in those receiving medical insurance/medical assistance. This is somewhat consistent with the findings of Sung et al. [29], based on California Cancer Registry data, that the survival time for bladder cancer is worse for uninsured patients and those with an unknown insurance status than it is for those with managed care, although there was no significant difference in the CSS. That study also found that among all insurance categories in the USA, the prognosis was worst for Medicaid insurance. We speculate that the main reason is that Medicaid is aimed at low-income people, who are less likely to receive treatment within 12 weeks of a diagnosis [30]. Sung et al. [29] also found that Medicaid patients had more advanced-stage, higher-grade tumors compared with patients covered by Medicare or managed care, and so their prognosis may be worse. This has been confirmed in other previous research [31]. In our study, we did not subdivide the patients into different types of insurance, instead only dividing them into insured and uninsured/unknown, which may be the main reason for the difference in the research results. Regardless, the type of and accessibility to medical insurance may affect the survival rate of bladder cancer, possibly due to differences in basic living conditions (e.g., income and living environment), disease prevention, and treatment among people covered by different types of medical insurance.
Other independent prognostic factors for postsurgery UCB identified in this study were older age at diagnosis, black ethnic group, poorer differentiation grade, more advanced AJCC stage, no regional lymph nodes removed, not receiving chemotherapy, and larger tumor size, which are traditional prognostic factors for bladder cancer that have been reported previously [32-34]. Based on these factors and the aforementioned marital status and insurance status, we established a nomogram for the individualized prognosis of postsurgery UCB, and found that the AJCC stage and the age had the greatest impact on individualized prognoses. This was not surprising: the AJCC stage itself largely reflects the severity of the tumor, while elderly patients usually suffer from reduced physiological function, often coupled with other underlying diseases, so that perioperative mortality and postoperative complications increase significantly. Additionally, the risk of recurrence increases with age, and the prognosis of older patients is poor [35]. However, the contribution of the other variables to the model cannot be ignored. We calculated the NRI and IDI of the established model using "Age + AJCC stage" as the control model and found that the NRI values for 3, 5, and 8 years of follow-up were 0.23, 0.2, and 0.17, respectively, in the training cohort, and 0.19, 0.12, and 0.12 in the validation cohort; the corresponding IDI values were 0.03, 0.03, and 0.03 in the training cohort, and 0.02, 0.02, and 0.03 in the validation cohort (all P < 0.001). These results indicate that variables other than AJCC stage and age also made a positive contribution to the prediction of prognosis.

The nomogram developed in this study is the first one reported for postsurgery UCB. Zhang et al. [36] established a nomogram for the individualized prognosis of bladder cancer based on data in the SEER database. The variables in that model include the age at diagnosis, ethnic group, sex, and TNM stage. That model also indicated that age and the T stage have the greatest impact on the prognosis, which is essentially consistent with our model; the main differences are that we used AJCC staging, which is also based on the TNM stage, and that we targeted postsurgery UCB. Our nomogram might be superior since we take into account the clinical treatment received by the patients and a broader range of demographic information. In addition, the nomogram that we have established exhibits good discrimination, calibration, and clinical effectiveness, and a better prognostic ability for postsurgery UCB than the currently used AJCC staging system. This easy-to-use nomogram can help doctors to estimate the likelihood that a patient will survive to a certain point in time.

Several limitations of this study should be considered. Firstly, the data used in the validation cohort also came from the SEER database, and so the nomogram still needs to be validated using data from another database or using clinical prospective data. Secondly, some important clinical factors were not collected, such as the smoking status after diagnosis, parameters of social status (e.g., socioeconomic status or level of education), the condition of the underlying disease, comorbidities, and biochemical indicators such as the C-reactive protein level. The data available are also subject to the limitations of the SEER database.
Finally, preventing relapse is also an important indicator in the clinical treatment of bladder cancer [37,38], but we did not analyze the risk of recurrence in our patients.

Conclusions

In conclusion, this study has revealed the incidence trends of UCB and constructed a nomogram for predicting the long-term survival of individual postsurgery UCB patients based on a population cohort. The nomogram showed good predictive performance and may serve as an effective and convenient evaluation tool for helping surgeons to perform personalized survival predictions and mortality risk identification in postsurgery UCB patients.

… 2019SF-140). The funders had no role in the study design, collection, analysis, interpretation, or writing of the manuscript.

Availability of data and materials

The data that support the findings of this study are available on request from the corresponding author.
Childhood Maltreatment Predicts Specific Types of Dysfunctional Attitudes in Participants With and Without Depression

Background: Studies have shown a strong association between childhood maltreatment (CM) and major depressive disorder (MDD). Dysfunctional attitudes (DAs) play a crucial role in the development of MDD. In this study, we aimed to investigate whether (1) DAs are associated with CM, (2) specific CM types predict specific types of DAs, and (3) higher childhood trauma counts (CTCs) predict more DAs.

Methods: One hundred seventy-one MDD participants and 156 healthy controls (HCs) were enrolled in the study. CM was assessed retrospectively with the Childhood Trauma Questionnaire. DAs were evaluated using the Chinese version of the Dysfunctional Attitude Scale-Form A (C-DAS-A). A series of analyses, including multiple analyses of covariance and hierarchical regression analyses, were used to examine the hypotheses.

Results: The proportion of CM was 60.2% in the MDD group and 44.2% in the HC group. The 2 × 2 analysis of covariance showed no interaction effect between CM and MDD on the C-DAS-A total score. When the factor scores replaced the C-DAS-A total score, a similar trend was observed. Within the MDD group, emotional abuse (EA) predicted two forms of DAs: the self-determination type and overall DAs; physical neglect (PN) was predictive of attraction and repulsion-type DAs. Higher childhood trauma counts significantly predicted more types of DAs in the MDD group.

Conclusion: DAs are a trait feature of CM. EA and PN predict specific types of DAs in MDD patients. Higher CTCs predict more DAs in MDD patients.

INTRODUCTION

Childhood maltreatment (CM) is deviant behavior toward a minor that causes harm, or entails a risk of causing harm, in the physical, sexual, or emotional domain. Several forms of CM are recognized: emotional abuse (EA), physical abuse (PA), sexual abuse (SA), and neglect [emotional neglect (EN) and physical neglect (PN)] (1). CM constitutes a global threat leading to significant health concerns: worldwide, one in two children is a victim of some form of CM (2). These experiences have severe long-term consequences, including but not limited to work and relationship difficulties, disappointing academic performance, and impaired mental health, up to and including major depressive disorder (MDD) (3-5). Individuals who underwent CM exhibit psychological consequences and disruptions in neurobiological mechanisms: the stress system is affected, and brain connectivity is impeded, primarily in the frontal cortex (6, 7).

Depression is one of the leading causes of psychiatric morbidity globally (8), and documentation of its association with CM is not scarce in the medical literature (9, 10). Some studies have reported that MDD is twice as likely in individuals with CM (11). CM has a varying effect on depression onset (12), course and response to treatment, and other attributes (10, 13). When considering the individual types of CM, EA increases the risk of depression about twice as much as PA (14); others have suggested that EN significantly predicts depression, whereas EA correlates with depression severity (15). Under Beck's view of depression, a negative self-schema may be acquired during childhood due to scarring life events, which are not limited to abuse and neglect. The negative self-schemas remain quiescent unless triggered by stressors (16).
Those negative self-schemas, commonly referred to as dysfunctional attitudes (DAs), are pervasive negative thought-processing styles that affect one's beliefs about oneself, the world, and the future; they are at the core of depressive pathologies. Several studies have thoroughly investigated the impact of DAs in depression patients. DAs constitute a considerable risk factor for depression and for a poorer prognosis (10), as well as being a long-term predictor of relapse (17) and of decreased effectiveness of antidepressant therapy (18). The relationship between CM and MDD is moderated by DAs (19, 20). However, not all individuals with DAs will develop depressive disorders, leading us to contemplate whether DAs are a trait resulting from CM. Also, as the individual types of CM have varying effects on depression, could specific CM types forecast global DAs and specific DAs? This relationship remains unexplored; only a few studies partly address the question. In a study involving a sample of women, the researchers suggested a significant association between EA and DAs (19). Another study suggested a significant association between childhood neglect (CN) and DAs (20). The amount of DAs influences the threshold at which an adverse event triggers depression: the higher the grade of DAs, the lower the adverse event's threshold (21, 22). We therefore hypothesized that the more specific CM types are present, the more DAs there will be.

In this study, we hypothesized that (1) DAs are associated with CM, (2) specific types of CM can predict specific types of DAs, and (3) higher childhood trauma counts (CTCs) can predict more DAs.

METHODS

The data set used for this study derives from a longitudinal project scrutinizing the psychological and biological mechanisms of MDD (hypothalamic-pituitary-adrenal axis function and magnetic resonance imaging study of trauma-related depression, registration no. ChiCTR1800014591). One hundred seventy-one participants with MDD were enrolled from the inpatient and outpatient departments of the Zhumadian Psychiatric Hospital (Henan, China), and 156 participants were recruited from the local area through flyers for a healthy control (HC) group. The enrolment procedure started in January 2013 and ended in December 2018. Eligibility criteria were set for the two groups, and two well-trained psychiatrists supervised the process.

The enrolment criteria for the MDD group were as follows: (1) age 18-60 years; (2) diagnosed with MDD and medication-free for not less than 2 weeks; (3) diagnosis of MDD confirmed by two well-trained psychiatrists using the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition; (4) 24-item Hamilton Depression Rating Scale (HAMD-24) score ≥ 20 (23); and (5) consent form signed by the patient. The exclusion criteria were as follows: (1) comorbid Axis I or II disorder or a history of bipolar disorder; (2) history of head injury, neurological disorders, or other internal illnesses; (3) history of substance abuse or dependence except for tobacco dependence; and (4) suicidal tendencies or ideation.

As for the HC group, the inclusion criteria were as follows: (1) age 18-60 years, (2) HAMD-24 score < 8, and (3) consent form signed by the participant. The HC group's exclusion criteria were as follows: (1) history of any psychiatric disorder; (2) history of substance abuse or dependence except for tobacco dependence; and (3) history of head injury, neurological disorders, or other internal illnesses.
Measures

Depression
The 24-item Hamilton Depression Rating Scale (HAMD-24) was used to assess depression. It is a commonly used clinician-rated questionnaire (23). The scale was translated by the Shanghai Mental Health Center and has been shown to have good reliability and validity in the Chinese community. The scale consists of 24 items: 12 items are rated 0-4, nine items are rated 0-2, and three items are rated 0-3. Hence, the total score ranges from 0 to 75. A cutoff score of at least 20 signifies moderate depression (24).

Anxiety
The 14-item Hamilton Anxiety Rating Scale (HAMA-14) was used to assess anxiety among the participants. It is a clinician-rated questionnaire consisting of 14 items, each rated from 0 (absent) to 4 (severe). The total score ranges from 0 to 56 (25). The Chinese version of the HAMA-14 has been shown to have good reliability and validity in the Chinese community (26).

Dysfunctional Attitudes
The Dysfunctional Attitude Scale was used to assess cognitive vulnerabilities; the Chinese version of the Dysfunctional Attitude Scale-Form A (C-DAS-A) was used for this study. It is a self-report scale designed to evaluate DAs (27) and has good reliability and validity in Chinese MDD samples (28,29). The Chinese version includes 40 items and encompasses eight subscales. The total score ranges from 40 to 280, with higher scores indicating more DAs. The eight subscales are vulnerability, attraction and repulsion, perfectionism, compulsion, seeking applause, dependence, self-determination attitude, and cognition philosophy (28). More details about the C-DAS-A questionnaire and the nature of the several factors involved can be found in our other articles (30,31).

Childhood Maltreatment
CM was assessed using the Childhood Trauma Questionnaire (CTQ). It is a retrospective assessment tool consisting of five maltreatment factors, evaluated through 28 items. It accounts for maltreatment before the age of 16 years and has been shown to have good reliability and validity in the Chinese community. The five CM factors assessed are EA, PA, SA, EN, and PN. Participants were identified as positive for CM if any one of these factors exceeded its cutoff score: EA > 12, PA > 9, SA > 7, EN > 14, and PN > 9 (32)(33)(34). The CTC was defined as the number of CTQ factors exceeding their respective cutoff scores; its minimum is therefore 0 and its maximum is 5.

Data Analytic Plan
SPSS version 25.0 was used for the analyses, with p < 0.05 (two-tailed) as the threshold for statistical significance. χ² tests and independent t-tests were used to check for group differences in categorical and continuous variables, respectively, between the MDD and HC groups. To test our first hypothesis, that DAs are associated with CM, we used a 2×2 analysis of covariance (ANCOVA) of diagnosis and CM on the C-DAS-A total score, with age, sex, and education as covariates, followed by post-hoc analyses. The same procedure was repeated with the eight C-DAS-A subscale scores as the dependent variables. For our second hypothesis, a hierarchical regression analysis was used first to estimate the magnitude of the different CM types' influence on the C-DAS-A total score; the eight C-DAS-A subscale scores then replaced the total score. The procedure was run in the MDD and HC groups independently. Afterward, we tested our third hypothesis, that higher CTCs lead to more DAs, by running a hierarchical regression analysis of CTC on the C-DAS-A total score, followed by its substitution with the eight DAS factor scores. This process was run separately in the MDD and HC groups.
RESULTS

Demographic/Clinical Information/Prevalence of CM, CM Types, and CTC
Three hundred twenty-seven participants fulfilled the eligibility criteria, including 171 MDD and 156 HC participants. The mean age of the MDD group (35.06 years) was higher than that of the HC group (34.62 years). The average years of schooling in the MDD group (10.23 years) was lower than in the HC group (11.12 years). The male proportion was also lower in the MDD group (43.9%) than in the HC group (45.5%). Within the MDD group, the mean age at onset of depression was 31.74 years, and the average number of depressive episodes was 2.03. There was no statistically significant difference in age or gender between the MDD and HC groups (p > 0.05). Both HC/CM+ and HC/CM− participants had more years of education than MDD/CM+ and MDD/CM− participants. The MDD/CM+ group had higher mean HAMD-24, HAMA-14, C-DAS-A total, and CTQ total scores than the MDD/CM− group, and all differences were significant (p < 0.001). Clinical and demographic characteristics are shown in Table 1.

[Table 1. Demographics and clinical information of major depressive disorder (MDD) and healthy control (HC) groups.]

The prevalence of CM types and CTC is shown in Table 2. The prevalence of CM in our sample was 52.5%, whereas 60.2% of the MDD group reported CM. PN had the highest prevalence in the whole sample (43.1%), in the MDD group (49.7%), and in the HC group (35.9%). SA (8.8%) was the least prevalent form of CM in the MDD group, whereas EA (3.2%) was the least common in the HC group. In the whole sample, 52.9% of participants reported at least one type of CM, 28.1% reported at least two types, 7.3% reported at least three types, 2.7% reported at least four types, and 0.3% reported all five types. A higher proportion of participants in the MDD group reported having experienced maltreatment in the past; similarly, the proportions for the CM subtypes were higher in the MDD group than in the HC group. As for CTC, the HC group's proportion was higher than the MDD group's for scores 0, 1, and 5, whereas the reverse was observed for CTC scores 2, 3, and 4.

Effect of Diagnosis and CM on C-DAS-A Total and Subscale Scores
Table 3 shows the results of the 2×2 ANCOVA (factor 1: diagnosis; factor 2: CM) on C-DAS-A total and subscale scores, with age, gender, and education as covariates. No significant two-way interaction effect of CM and diagnosis was found for the C-DAS-A total score while controlling for covariates (F = 1.20, p = 0.275, partial η² = 0.004). Therefore, an analysis of the main effects and the Bonferroni post-hoc test were performed for CM and diagnosis (35). For the main effect of CM, the difference in unweighted adjusted marginal mean (36,37) C-DAS-A total score between those with CM (145.57) and those without CM (134.03) was statistically significant at 11.542 [95% confidence interval (CI), 5.83-17.25; p < 0.001]. For the main effect of diagnosis, the difference in unweighted adjusted mean C-DAS-A total score between the MDD group (154.10) and the HC group (125.50) was also statistically significant at 28.60 (95% CI, 23.04-34.16; p < 0.001).

There was no statistically significant two-way interaction of CM and diagnosis on the C-DAS-A subscale scores while controlling for covariates, except for C-DAS-A dependence. These results were interpreted through analyses of main effects and Bonferroni post-hoc analyses of CM and diagnosis. The main effect of CM showed statistically significant adjusted marginal mean differences in the C-DAS-A vulnerability (1.497, p = 0.009) and attraction and repulsion (2.717, …) subscales, among others. No statistically significant adjusted marginal mean scores were observed for the main effect of CM in the C-DAS-A compulsion and cognition philosophy subscales (p > 0.131). As for the main effect of diagnosis, there were statistically significant adjusted marginal means in all eight C-DAS-A subscale scores (p < 0.001).
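For readers who wish to reproduce this kind of analysis, a minimal sketch of the 2×2 ANCOVA described in the analytic plan is given below, using Python's statsmodels; the dataframe and its column names are assumptions, and sum-to-zero coding is used so that Type III tests remain meaningful in the presence of the interaction term.

```python
# A minimal sketch of the 2x2 ANCOVA (diagnosis x CM) on the C-DAS-A
# total score with age, sex, and education as covariates; the data
# file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("participants.csv")  # hypothetical data file

# Sum-to-zero contrasts make the Type III main-effect tests
# interpretable alongside the diagnosis x CM interaction.
model = smf.ols(
    "cdas_total ~ C(diagnosis, Sum) * C(cm, Sum)"
    " + age + C(sex, Sum) + education",
    data=df,
).fit()

print(anova_lm(model, typ=3))
```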
Hierarchical Regression Analysis of CM Types on C-DAS-A Total and Subscale Scores
A hierarchical regression analysis was run at three levels to determine whether CM types improved the prediction of C-DAS-A total and subscale scores in the MDD and HC groups. In the HC group, level 1 comprised age, gender, and education; level 2, HAMA-14 and HAMD-24; and level 3, EA, PA, SA, EN, and PN. In the MDD group, two supplementary items were added to level 2: duration of current episode and episode count. As six participants had missing HAMA-14 records, they were removed from this investigation, leaving a sample of 168 for the MDD group and 153 for the HC group.

The hierarchical regression analysis of CM types on C-DAS-A total and subscale scores within the MDD group is shown in Table 4. Within the MDD group, the addition of the CM types to the model led to a statistically significant ΔR² of 7.9% (p = 0.015) for the C-DAS-A total score, with an EA standardized coefficient of 0.249. There was a statistically significant ΔR² of 8.2% (p = 0.015) for the C-DAS-A attraction and repulsion score, with a statistically significant PN standardized coefficient (0.276). For C-DAS-A self-determination, EA (0.262) was statistically significant, with a ΔR² of 6.9% (p = 0.027). Table 5 shows the hierarchical regression analysis of CM types on C-DAS-A total and subscale scores within the HC group; only PN (0.216) was statistically significant, with a ΔR² of 7.7% (p = 0.033) observed for C-DAS-A seeking applause.

Hierarchical Regression Analysis of CTC on C-DAS-A Total and Subscale Scores
A hierarchical regression analysis was run to assess CTC's predictive value for C-DAS-A total and subscale scores in both the MDD (n = 168) and HC (n = 153) groups. In the HC group, level 1 comprised age, gender, and education; level 2, HAMA-14 and HAMD-24; and level 3, CTC. In the MDD group, two supplementary items were added to level 2: duration of current episode and episode count. The results are shown in Table 6. The C-DAS-A total score showed a significant ΔR² of 3.8% (p = 0.010, β = 0.213) in the MDD group. Other C-DAS-A subscales that showed a significant ΔR² were vulnerability (ΔR² = 2.5%, p = 0.042, β = 0.171), attraction and repulsion (ΔR² = 5.4%, p = 0.002, β = 0.253), and seeking applause (ΔR² = 3.4%, p = 0.014, β = 0.202). In the HC group, the C-DAS-A attraction and repulsion score (ΔR² = 2.7%, p = 0.036, β = 0.167) showed a statistically significant rise with the addition of CTC.
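The three-level hierarchical regressions reported above can be sketched as follows; this is an illustrative Python version using statsmodels, with assumed column names, reporting the change in R² as each block of predictors enters the model.

```python
# An illustrative three-level hierarchical regression (MDD-group
# version, with episode duration and count at level 2); the data
# file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mdd_group.csv")  # hypothetical data file

blocks = [
    "age + C(gender) + education",            # level 1: demographics
    "hama14 + hamd24 + duration + episodes",  # level 2: clinical
    "ea + pa + sa + en + pn",                 # level 3: CM types
]

terms, prev_r2 = [], 0.0
for level, block in enumerate(blocks, start=1):
    terms.append(block)
    fit = smf.ols("cdas_total ~ " + " + ".join(terms), data=df).fit()
    print(f"level {level}: R2 = {fit.rsquared:.3f}, "
          f"delta R2 = {fit.rsquared - prev_r2:.3f}")
    prev_r2 = fit.rsquared
```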
DISCUSSION

To our knowledge, this study is among the few to investigate DAs as trait features of CM. The reported prevalence of CM among the depressed participants and HCs was 60.2% and 44.2%, respectively. Our prevalence was much higher than the 45.6% reported among depressed participants in a 2017 meta-analysis (15). This discrepancy could be because our study was restricted to one region and had a smaller sample size; the disparity suggests that CM could be more frequent in some regions. The reported prevalence of the specific CM types was comparatively lower, except for PN, among the depressed participants (EA: 9.4% vs. 36…). Given our study's single-region setting, China's rapid economic development has meant that parents have less time to interact with their children physically. As the social development theorist Vygotsky stated, children do not develop in isolation; the lack of social interaction suffered by children neglected by their caregivers constitutes a social impediment to their cognitive development.

Beck's cognitive theory of depression proposes that a negative self-schema is present before the onset of depression. These cognitive distortions result from adverse childhood experiences and remain dormant until triggered by stressors (16,41). By demonstrating cognitive differences between individuals with and without CM in both depressed and non-depressed participants, this study provides essential support for Beck's cognitive theory of depression. We showed that CM predicts DAs in participants both with and without depression; thereby, we understand that CM predicts some amount of DAs, which remain latent. We share a similar tenet with a mood-induction study, which showed that DAs remain latent unless activated (42). We also share similar results with a survey of 155 participants, which found a significant association between DAs and CM (43); however, only healthy participants with a comparatively lower mean age were involved in that study.

DAs are molded through adverse experiences starting in childhood, and CM is among the risk factors for cognitive vulnerabilities (44). Maltreated children make inferences in trying to understand maltreatment events. With the repetition of those events, children can develop DAs through negative cognitive structuring and faulty information processing; ultimately, depression results when these are triggered (21,44,45). Studies have found that EA and EN have a strong association with DAs (46,47). They are also predictive of future depressive episodes (48)(49)(50), mediated by DAs (43). Our study is on similar lines. We found that, among the depressed participants, individuals with EA were likely to develop more DAs of the self-determination attitude type and more overall DAs. It is possible that these two types of DAs influence the pathway from EA to depression. Individuals with DAs of the self-determination attitude type are those who cast their own values in comparison with others (e.g., "If I do not do as well as other people, it means I am an inferior human being") (28). A group of researchers shared similar findings; they discussed the relationship between EA and depression mediated by DAs (19).

Failure by caregivers to cater to a child's basic needs, either deliberately or unknowingly, defines CN. The child is deprived of basic needs, safety, supervision, medical care, physical requirements, and emotional support (1). CN includes PN and EN.
CN is the most prominent form of CM worldwide, and its high prevalence can be seen in our study; approximately one in six children will experience CN (51). Studies have shown that CN impedes the development of corpus callosum areas (52), and those alterations correlate with depression (53)(54)(55). An interesting result from our research indicated that PN is bound to more DAs: attraction and repulsion-type DAs in the depressed and seeking applause-type DAs in the non-depressed. However, findings differed in a study of 155 healthy participants with a mean age of 18.8 years (43), and PA was reported not to be associated with DAs (19). CN was found to be predictive of DAs among depressed participants in one study (20), partly supporting our findings for PN.

Exposure to one form of trauma in childhood potentially elevates the risk of experiencing several forms of trauma over time. Polyvictimization is the term used to describe individuals who experienced potentially traumatic events such as the known components of CM, bullying, and witnessing adverse events such as parental substance abuse, domestic violence, and others (56,57). It is a robust predictor of short- and long-term mental health problems, not limited to depression (14,58). Like a few other studies, we used the CTQ to assess an aspect of polyvictimization; those studies found a "dose-dependent" relationship between cumulative CM types and the odds of being diagnosed with depression (59,60). Our study added a new scope to polyvictimization research: we further assessed which types of DA are more likely in depressed and non-depressed individuals. With increasing CTCs, DAs of the vulnerability, attraction and repulsion, and seeking applause types, as well as overall DAs, were predicted in depressed patients; in those without depression, increasing CTCs predicted DAs of the attraction and repulsion type. Our results are in line with the titration model of cognitive vulnerability, which states that a lower threshold of adverse events is required to trigger depression when more negative cognitive styles are present (21,22).

LIMITATIONS

Several limitations of this study should be noted. First, the nature of our research prevents us from ruling out reverse causality; we could not show a direct causal link between CM and DAs or account for the time exposed to CM. Second, the retrospective assessment of CM using the CTQ is subject to recall bias; also, some forms of CM, such as SA, might be underreported out of fear of shame and social detriment. Third, polyvictimization is best assessed using the Juvenile Victimization Questionnaire (JVQ) (57). As most of the JVQ components overlap with the CTQ, we adopted the latter for our study's purposes; two research groups endorsed the same method (59,60). They assessed polyvictimization by grading the severity of the individual CTQ factors, whereas we used a dichotomous format for each CTQ factor (presence or absence), ignoring severity.

CONCLUSIONS

To our knowledge, this is the first study to examine the types of DAs predicted by specific CM types, and the first to explore the types of DA predicted by higher CTCs. In summary, our study provides new insights for the clinical field. Specific types of DAs might influence the relationship between MDD and CM. Furthermore, we conclude that the higher the CTCs, the more DA types are present in participants with and without depression.
Screening for and prevention of CM by the relevant authorities, caregivers, medical professionals, and parents are imperative to break the chain. EA and PN deserve particular attention; they may be potential markers for depression screening. Research has shown that psychotherapy, alone or in combination with antidepressants, is best suited for depressed patients who underwent CM (61). Cognitive-behavioral therapy (CBT), personalized trauma-focused CBT, and child-parent psychotherapy are recommended. The forms of DA associated with depression found in our study should be focused on to address ongoing or future depressive episodes.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of the Second Xiangya Hospital of Central South University and the Ethics Committee of the Zhumadian Psychiatric Hospital. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

LL, YZ, and BL co-designed the topic. BL, JS, MW, XL, QD, LZ, JL, YJ, PW, HG, FZ, and YZ were responsible for participant recruitment and data collection. RJ and BL co-conducted the statistical analyses. RJ wrote the initial draft of the manuscript. BL contributed important revisions to the manuscript. All authors contributed to the article and approved the submitted version.

FUNDING

This study was supported by the National Science and Technologic Program of China (2015BAI13B02), the Defense Innovative Special Region Program (17-163-17-XZ-004-005-01), and the National Natural Science Foundation of China (81171286, 91232714, and 81601180). The funding sources had no role in the study design, data collection and analysis, interpretation of the data, preparation and approval of the manuscript, or the decision to submit the manuscript for publication.
Microsatellite instability in north Indian colorectal cancer patients and its clinicopathological correlation

Colorectal cancer patients were analysed for MSI, and its associations with clinicopathological features were tested using 2-tailed t-tests and logistic regression, with p < 0.05 considered significant. MSI was significantly associated with family history (OR = 5.63, p = 0.022*, 95% CI = 1.1–28.6) and tumour-infiltrating lymphocytes (TILS) (OR = 2.60, p = 0.023*, 95% CI = 1.1–6.0). Patients surviving longer (>5 years vs <5 years) were found to be significantly associated with MSI-high (MSI-H) status (OR = 3.76, p = 0.029*, 95% CI = 1.2–4.5). Conclusion: Family history of cancer and presence of TILS were significantly associated with the presence of MSI-H tumours; also, patients surviving more than 5 years more often had the MSI-H phenotype.

Introduction

Colorectal cancer (CRC) is one of the most commonly diagnosed cancers worldwide. This lethal malignant disease is a leading cause of cancer-related deaths around the world: according to global cancer statistics 2020, CRC ranked third in terms of new cases and fourth in terms of mortality. In India, the number of new CRC cases was 40 408 in males (6.3% of total cancers) and 24 950 in females (3.7%). 1,2

The risk of occurrence and development of CRC is a complex process that can be influenced by either environmental or genetic factors. Hereditary CRC has three well-described forms: (1) Lynch syndrome (LS); (2) familial adenomatous polyposis (FAP)/attenuated FAP (AFAP); and (3) MUTYH-associated polyposis (MAP). Other CRC syndromes include juvenile polyposis, hereditary mixed polyposis, Peutz-Jeghers, Cowden syndrome, and serrated polyposis. 3

Three molecular pathways have been identified in CRC progression: chromosomal instability (CIN), microsatellite instability (MSI), and the CpG island methylator phenotype (CIMP). 4 CIN is defined as an increase in the rate at which chromosomes are gained or lost and accounts for 85% of sporadic CRC; MSI arises from defects in the DNA mismatch repair (MMR) pathway and accounts for 15% of all CRC (12% sporadic CRC and 3% LS); 5 CIMP, or the epigenetic instability pathway, is an epigenetic phenomenon whereby hypermethylation of CpG islands on gene promoters correlates with gene silencing, and it is found in approximately 20-30% of CRC. 6 MSI and CIN are proposed to be mutually exclusive pathways giving rise to sporadic CRCs. 7 The other two morphologic multistep pathways are the classical pathway (or adenoma-carcinoma sequence) and the serrated neoplasia pathway. 8,9

The genetic basis for MSI is an inherited germline alteration in any of the MMR genes (MLH1, MSH2, MSH6, PMS2) or in the EPCAM gene. In 2008, Ligtenberg et al. identified the epithelial cell adhesion molecule (EPCAM) gene, located upstream of MSH2, as a novel gene causing LS through epigenetic inactivation of the respective MSH2 allele. [10][11][12] MSI refers to a change in the length of tumour microsatellite DNA, caused by insertion and deletion of repetitive sequences, when compared with normal DNA. MSI can be detected indirectly through MMR protein expression by immunohistochemical (IHC) staining, or directly by polymerase chain reaction (PCR)-based amplification of specific microsatellite repeats (BAT25, BAT26, D2S123, D5S346 and D17S250). 13 Genotyping for MSI was initially used for screening for LS. 14 Later, IHC analysis of the MMR proteins was proposed as an alternative method for LS screening. 15 Currently, studies by Lee et al. 16 and Kawakami et al. 17 show that MSI testing can be performed as a primary screening method, followed by IHC (only on samples with MSI-H), for identifying individuals at risk for LS.
Hence, laboratory testing around MSI involves three main approaches: MSI testing, IHC analysis of the MMR proteins, and mutation detection in the MMR genes. PCR-based MSI testing and IHC both have their roles: a PCR-based MSI test can tell us whether a particular CRC patient has MSI or microsatellite stability (MSS), and IHC can tell us which MMR gene is lost. Although their sensitivity and specificity are similar, IHC testing cannot differentiate sporadic MSI from LS. Once a tumour is found to be microsatellite unstable on PCR and/or demonstrates loss of MMR protein expression by IHC, the patient should be selected for further molecular genetic testing to detect a germline mutation. This selective approach allows for the efficient and cost-effective identification of LS patients and their families. Saeki et al. 18 and Yuan et al. 19 have also indicated that MSI testing and IHC are highly effective strategies for selecting CRC patients for MMR genetic testing, with high sensitivity, specificity, and reproducibility.

Many clinical and histopathological features are associated with the MSI phenotype: right-sided location of the tumour, stage of the disease, poor degree of differentiation, medullary, mucinous, or signet ring cell histology, presence of a large number of tumour-infiltrating lymphocytes (TILS), and Crohn's-like reaction. Investigating the presence of MSI in CRC is important, as it helps in decision-making regarding screening of family members for the same mutation. 20 Some studies have also shown that MSI-associated cancers have a better prognosis and reduced recurrence rates. 21 Our study aims to detect MSI-CRC by PCR-based MSI testing, to analyse its correlation with clinicopathological features, and to assess its effect on survival in north Indian CRC patients.

Material and methods

This was a prospective study of CRC patients who were surgically treated at Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow, a tertiary care hospital in north India. During the study period (May 2014-June 2018), samples were collected from all 117 patients admitted for surgery with a diagnosis of CRC. Fourteen patients were excluded after the final histology report revealed benign conditions such as tuberculosis and Crohn's disease. Finally, a total of 103 CRC patients who were above 18 years of age and willing to participate in the study were included. The study was approved by the Institutional Ethics Committee (IEC), and informed consent was obtained from all patients. Patients known to have familial adenomatous polyposis (FAP) were excluded from the study. For the purpose of analysis, patients were categorised into two groups based on the revised National Institutes of Health (NIH) Bethesda guidelines: those who fulfilled the criteria and those who did not. 22

The demographic data of the patients were recorded on a predesigned proforma: gender, age, site of the tumour, stage of the disease as per the American Joint Committee on Cancer (AJCC) criteria, 8th edition, and the histopathological findings. Follow-up methods included out-patient clinic (OPD) follow-up cards and telephonic follow-up. For survival analysis, only patients enrolled between 2014 and 2016 were included. Patients available for follow-up were categorised into two groups, those who survived for more than 5 years and those who survived for less, to see how survival correlated with microsatellite instability.
Patients who died or stopped treatment were considered lost to follow-up in our study.

MSI analysis

In PCR-based MSI analysis, we examined the loss or gain in the number of repeats in tumour DNA and compared this with the number of repeats in the same region in non-tumour (normal) DNA of the same individual. Genomic DNA was extracted from normal and tumour fresh-frozen tissues with the DNeasy Blood & Tissue Kit (Catalog No. 69504). PCR amplification was done using the MSI analysis system PCR kit, which comprises the five Bethesda markers BAT25, BAT26, D17S250, D5S346, and D2S123. Primers for each of the five markers have been described previously in the literature. 23,24 We used a single marker per PCR reaction for better interpretation of the results. The PCR reaction mixture contained 50 ng of genomic DNA from normal or tumour tissue, forward and reverse primer pairs (10 pmol) for the selected microsatellite marker, master mix (2.0x; EconoTaq PLUS), and MQ water. PCR conditions were standardised by performing gradient PCR. The amplified PCR product was analysed using a DNA sequencer (ABI 310 genetic analyser/GeneMapper™ Software 4). Differences in the electropherogram peak patterns of tumour and normal tissue were scored as instability at that particular locus. Samples were classified as having a high frequency of unstable microsatellites (MSI-H) if two or more loci showed instability, or a low frequency of unstable microsatellites (MSI-L) if only one of the five tested loci showed instability. Samples with no instability at these loci were reported as microsatellite stable (MSS). In this study, we grouped microsatellite phenotype status into two categories: MSI-H and MSI-L/MSS.

Statistical analysis

All data were analysed using IBM SPSS Statistics for Windows, version 16.0 (IBM Corporation, Armonk, NY, USA). Continuous data were reported as mean or median, and discrete data were reported as percentages. Univariate analysis was performed using the 2-tailed Student t-test for continuous non-normally distributed variables, and categorical variables were compared using the chi-square test. Binary logistic regression was used for multivariate analysis to determine factors independently predictive of MSI-H. The Kaplan-Meier method was used to estimate overall and disease-free survival curves, and a log-rank test was performed to compare patients' survival times. Overall survival (OS) was calculated from the primary diagnosis to death from any cause. Disease-free survival (DFS) was calculated from the primary diagnosis to the first event (recurrence or death). Survival was expressed as a median with a 95% confidence interval. A p-value < 0.05 was considered statistically significant.
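The locus-counting rule described under "MSI analysis" above can be expressed compactly. The following Python sketch assumes per-marker instability calls are already available from the electropherogram comparison; it is an illustration, not the study's own software.

```python
# A minimal sketch of the MSI classification rule: MSI-H if two or
# more of the five Bethesda loci are unstable, MSI-L if exactly one
# is, and MSS if none are.

BETHESDA_MARKERS = ("BAT25", "BAT26", "D17S250", "D5S346", "D2S123")

def classify_msi(unstable):
    """unstable maps each marker name to True if the tumour and
    normal electropherogram peak patterns differ at that locus."""
    n = sum(bool(unstable[m]) for m in BETHESDA_MARKERS)
    if n >= 2:
        return "MSI-H"
    return "MSI-L" if n == 1 else "MSS"

print(classify_msi({"BAT25": True, "BAT26": True, "D17S250": False,
                    "D5S346": False, "D2S123": False}))  # MSI-H
```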
Results

Out of a total of 103 patients, there were 72 males (70%) and 31 females (30%), with an age interquartile range (IQR) of 42-61 years and an age range of 15-81 years. Forty-three patients (41.7%) were younger than 50 years. A family history of malignancy was present in nine (8.7%) patients; of these, seven were first-degree relatives (FDR) and two were second-degree relatives (SDR). Patients with a family history of cancer were younger than patients without (median age 49 vs 55 years). Colon cancer was found in 69 (67%) patients and rectal cancer in 34 (33%). A right-sided colonic lesion was found in 53 (52%) patients. Histopathological examination revealed well-differentiated carcinoma in 33 (32%) patients, moderately differentiated carcinoma in 14 (13.6%), and poorly differentiated lesions in 56 (54.4%). Patient demographics, tumour location, and other details are shown in Table I.

In our study, 41.7% (43/103) of patients had highly unstable microsatellites. Among the various clinicopathological factors analysed, those found significantly associated with MSI, on both univariate and multivariate analysis, were the presence of a family history of cancer and TILS (OR = 4.520, p = 0.033*, 95% CI = 0.011-0.831; OR = 5.812, p = 0.016*, 95% CI = 0.125-0.807) (Table II). Although there was a male preponderance (72/103; 70%) in our study, gender had no impact on MSI. An associated family history of malignancy was found in nine (9/103; 8.7%) patients; of these nine, seven (78%) had high MSI (OR = 5.63, p = 0.022*, 95% CI = 1.1-28.6). The majority of patients in our study were either stage II (n = 42) or stage III (n = 41), but the stage of the disease did not have any impact on MSI status. Follow-up was available for 92 patients (89%) and varied from 24 to 72 months. Patients who died during treatment or stopped treatment at our institute were considered lost to follow-up. The Kaplan-Meier survival curves were significantly better, in terms of both OS and DFS, in patients with MSI-H. The five-year OS and DFS of MSI-H CRC patients were 72.1% and 53.5%, respectively (Figure 1a, 1b). The recurrence rate was also lower in MSI-H than in MSS patients (4.7% vs 11.7%) (Table III).
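The Kaplan-Meier and log-rank comparison reported above can be sketched as follows, using the Python lifelines package; the data file and column names are assumptions made for illustration.

```python
# A minimal sketch of the Kaplan-Meier curves and log-rank test
# comparing MSI-H vs MSI-L/MSS survival; the data file and column
# names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("followup.csv")  # hypothetical follow-up data
msi_h = df["msi_status"] == "MSI-H"

kmf = KaplanMeierFitter()
for label, grp in (("MSI-H", df[msi_h]), ("MSI-L/MSS", df[~msi_h])):
    kmf.fit(grp["months"], event_observed=grp["died"], label=label)
    kmf.plot_survival_function()

result = logrank_test(df.loc[msi_h, "months"], df.loc[~msi_h, "months"],
                      event_observed_A=df.loc[msi_h, "died"],
                      event_observed_B=df.loc[~msi_h, "died"])
print(result.p_value)
```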
Discussion

Colorectal cancer ranks third in frequency of incidence worldwide (945 000 new cases, 9.4% of the world total) and fourth in mortality (492 000 deaths, 7.9% of the total). 25 The age-standardised incidence rate (ASR) for CRC in India is low, at 6.0 per 100 000 population in males and 3.7 per 100 000 in females. 2 The five-year survival of CRC in India is among the lowest in the world at less than 40%, and the CONCORD-2 study revealed that the five-year survival of rectal cancer in India is falling in some registries. 26 There is a perception amongst oncologists that CRC cases in India are increasing in younger patients, with more advanced-stage disease, more signet ring morphology, and more anorectal cases compared with colonic sites. 32

This could be because of the higher sample size and the sensitive platform (DNA sequencer) used in this study. MSI-H tumours were more proximally located and more common among male cases than female cases; however, although there was male dominance in the present series (69%), we could not find an association between MSI status and gender. Our study revealed that MSI-H tumours were more often found in patients fulfilling the revised clinical Bethesda guidelines, and an MSI-H tumour shows a preferential association with familial CRC. Molecular and IHC methods of detecting deficient MMR are two completely distinct modalities of investigation: one is directed towards identifying microsatellite sequences, while the other is a direct phenotypic reflection of the MMR gene. In our series, we found that patients with a family history of CRC were significantly associated with MSI-H tumours (p = 0.022*). Evaluation of MMR protein expression in CRC is useful for identifying patients at risk for LS; it may also provide prognostic information, as MSI is correlated with better prognosis in patients with CRC. 33

In our study, a poor degree of differentiation was more frequent in MSI than in non-MSI tumours (58.1% vs 51.3%). Several investigators have reported a correlation of MSI-H CRC with a poor degree of tumour differentiation, but we did not find any significant correlation; this might be because of our comparatively smaller sample size (n = 103) compared with those studies (n = 438 and n = 310, respectively). 34,35 The role of poor differentiation in survival is also not very clear: studies by Kang et al. 36 and Xiao et al. 37 did not find better survival for poorly differentiated MSI than for MSS CRC, similar to our finding.

TILS are considered histological features predictive of MSI in CRC and an independent prognostic factor. 38 The deficiency of the MMR system in MSI tumours causes the accumulation of frame-shift mutations, leading to the transcription and translation of neoantigens that are presented by human leukocyte antigen (HLA) class I molecules and recognised by cytotoxic T lymphocytes. The survival benefit of MSI-CRC may be partly attributed to this high lymphocytic response. Our study also revealed that MSI tumours had an increased tumoral lymphocytic response compared with MSS tumours.

Several meta-analyses have shown that MSI-CRC cases have a good prognosis in terms of DFS and OS regardless of stage, whereas a few reports have shown the therapeutic benefit of knowing MSI status in stage II and III CRCs. Our study found higher DFS and OS in the MSI group. MSI-CRC has a favourable stage-adjusted prognosis compared with MSS-CRC and requires a different management strategy, as it does not benefit from 5-FU-based adjuvant chemotherapy. 39

Conclusion

In the present series, 41.7% of CRC patients had associated MSI. Patients with a family history of cancer and TILS on histology were significantly associated with MSI-H status. MSI-H is an important prognostic factor for 5-year survival and recurrence in CRC patients. Therefore, the authors recommend that MSI testing be routinely performed in north Indian CRC patients.
A Rosetta Stone for Nature's Benefits to People

After a long incubation period, the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES) is now underway. Underpinning all its activities is the IPBES Conceptual Framework (CF), a simplified model of the interactions between nature and people. Drawing on the legacy of previous large-scale environmental assessments, the CF goes further in explicitly embracing different disciplines and knowledge systems (including indigenous and local knowledge) in the co-construction of assessments of the state of the world's biodiversity and the benefits it provides to humans. The CF can be thought of as a kind of "Rosetta Stone" that highlights commonalities between diverse value sets and seeks to facilitate crossdisciplinary and crosscultural understanding. We argue that the CF will contribute to the increasing trend towards interdisciplinarity in understanding and managing the environment. Rather than displacing disciplinary science, however, we believe that the CF will provide new contexts of discovery and policy applications for it.

Introduction

The Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES) [1] (www.ipbes.net) was established to strengthen the science-policy interface for the conservation of biodiversity, ecosystem services, long-term human well-being, and sustainable development. It is similar to the Intergovernmental Panel on Climate Change (IPCC) in that it will carry out assessments of existing knowledge in response to governments' and other stakeholders' requests [1]. However, the challenges for IPBES are arguably more complex [2] because, although the biodiversity crisis is global, biodiversity distribution and its conservation status are hugely heterogeneous across the planet; therefore, the solutions will have to be scalable to a much finer level, and the relative contributions of such fine-scale solutions to the improvement of global biodiversity status will also vary enormously.

IPBES has three distinctive features. First, it must engage, in the process of defining questions, assessing trends, and identifying solutions, a great diversity of stakeholders including policy makers, practitioners, civil society organisations, and the private sector. Second, it aims to incorporate knowledge from a variety of sources, including not only the natural, social, and engineering sciences but also indigenous and local knowledge (ILK). The inclusion of ILK is not only a matter of equity but also a source of knowledge that we can no longer afford to ignore [3][4][5]. Third, IPBES goes beyond producing assessments to include capacity-building, development of policy tools, and catalysing the generation of critical new knowledge.

A conceptual framework was required to give cohesion to this ambitious vision. Such scaffolding needed to provide an integrated view of the biodiversity knowledge-policy interface, stimulate new thinking, accommodate diverse human attitudes to biodiversity, and at the same time be as simple as possible to be effective and useful for the diverse array of stakeholders. The conceptual framework (CF) adopted by IPBES rises to this challenge. In this piece, we briefly summarise its main features and argue for its potential to improve the science-policy interface and also advance fundamental science on the links between biodiversity, ecosystems, and societies.
A Framework for the Knowledge-Policy Interface on Biodiversity and Its Societal Benefits

The first public product of IPBES, the CF was constructed through more than two years of consultative work involving specialists from different sciences and knowledge systems, and was submitted to open comments by more than 100 governments and numerous nongovernmental organisations. It captures the relationships between the natural world and humankind in only six main elements: nature, nature's benefits to people, anthropogenic assets, indirect drivers of change (such as institutions and governance systems), direct drivers of change, and good quality of life (Fig. 1, Box 1). This model clearly builds on the highly influential Millennium Ecosystem Assessment [6,7], which contemplated the essence of most of these elements and their links. However, the CF further emphasizes the crucial role of human institutions as sources of both environmental problems and solutions. Taking advantage of the remarkable conceptual and methodological progress made in this area since the early 2000s [8][9][10][11][12][13], it also goes further in its intent to consider a whole range of values, from monetary to spiritual and from instrumental to relational, in the valuation of nature's contribution to quality of life. Finally and crucially, the CF goes further than any previous initiative in the international environmental science-policy interface in its explicit, formal incorporation of knowledge systems other than western science, in an unprecedented effort towards crosscultural and crossdisciplinary communicability in the search for options and solutions [20,21].

Box 1. The Main Elements of the IPBES Conceptual Framework

• Nature here refers to the natural world, with an emphasis on biodiversity and ecosystems. Nature has values related to the provision of benefit to people, and also intrinsic value, independent of human experience.

• Anthropogenic assets refers to knowledge, technology, financial assets, built infrastructure, etc.

• Nature's benefits to people are all the benefits (and detriments or losses) that humanity obtains from nature. By definition, all nature's benefits have human value, which can range from spiritual inspiration to market value. Nature provides some benefits to people directly without the intervention of society (e.g., oxygen). Most benefits, however, depend on the joint contribution of nature and anthropogenic assets, e.g., fish need to be caught to act as food.

• Drivers of change refers to all those external factors that affect nature, anthropogenic assets, nature's benefits to people, and good quality of life. The CF includes drivers of change as two of its main elements: institutions and governance systems and other indirect drivers, and direct drivers (both natural, such as earthquakes and tropical cyclones, and anthropogenic, e.g., habitat conversion and chemical pollution).

• Institutions and governance systems and other indirect drivers are the root causes of the direct anthropogenic drivers that affect nature. They include systems of access to land, legislative arrangements, international regimes such as agreements for the protection of endangered species, and economic policies.

• Direct drivers, both natural and anthropogenic, affect nature directly. The direct anthropogenic drivers are those that flow from human institutions and governance systems and other indirect drivers. They include positive and negative effects, e.g., habitat conversion (e.g., degradation or restoration of land and aquatic habitats), climate change, and species introductions. Direct natural drivers (e.g., volcanic eruptions) can directly affect nature, anthropogenic assets, and quality of life, but their impacts are not the main focus of IPBES.

Figure 1. The IPBES Conceptual Framework. In the central panel, delimited in grey, boxes and arrows denote the elements of nature and society that are at the main focus of the Platform. In each of the boxes, the headlines in black are inclusive categories that should be intelligible and relevant to all stakeholders involved in IPBES and embrace the categories of western science (in green) and equivalent or similar categories according to other knowledge systems (in blue). The blue and green categories mentioned here are illustrative, not exhaustive, and are further explained in the main text. Solid arrows in the main panel denote influence between elements; the dotted arrows denote links that are acknowledged as important, but are not the main focus of the Platform. The thick, coloured arrows below and to the right of the central panel indicate that the interactions between the elements change over time (horizontal bottom arrow) and occur at various scales in space (vertical arrow). Interactions across scales [8], including cross-scale mismatches [19], occur often. The vertical lines to the right of the spatial scale arrow indicate that, although IPBES assessments will be at the supranational-subregional to global-geographical scales (scope), they will in part build on properties and relationships acting at finer (national and subnational) scales (resolution, in the sense of minimum discernible unit). The resolution line does not extend all the way to the global level because, due to the heterogeneous and spatially aggregated nature of biodiversity, even the broadest global assessments will be most useful if they retain finer resolution. This figure is a simplified version of that adopted by the Second Plenary of IPBES [21]; it retains all its essential elements, but some of the detailed wording explaining each of the elements has been eliminated within the boxes to improve readability. A full description of all elements and linkages in the CF, together with examples, is given in [20].

Placed at the heart of an intergovernmental process, the CF is the result of political negotiation, but it goes beyond that. The consultative construction process that converged in the model adopted by the IPBES Second Plenary was rich in discussion and conflict on epistemological and methodological, as well as political, grounds. A major breakthrough during this process was to allow different knowledge systems to define the six elements according to their own categories. Previously, there had been a struggle to find a single word or phrase to capture the essence of each element in a way that respected the range of utilitarian, scientific, and spiritual values that makes up the diversity of human views of nature. The CF is now a kind of "Rosetta Stone" (see Box 2) for biodiversity concepts that highlights the commonalities between very diverse value sets and seeks to facilitate crossdisciplinary and crosscultural understanding. For example, the CF element nature includes scientific concepts such as species diversity, ecosystem structure and functioning, the biosphere, the evolutionary process, and humankind's shared evolutionary heritage (shown in green).
For indigenous knowledge systems, nature includes different concepts such as "Mother Earth" and systems of life (for indigenous peoples of the South American Andes), and other holistic concepts of land and water, such as those held in the South Pacific islands, which include the physical environment, nonhuman living organisms, living people, and ancestors and their traditions (blue). Of course, a perfect alignment between concepts from different knowledge systems is probably unattainable. Instead, the framework provides common ground for a basic working understanding and coordinated action towards tackling the biodiversity crisis. The broad crosscultural categories indicated as headlines of the boxes in Fig. 1 (in larger black font) should be transparent and important for all stakeholders involved and are proposed as standard for every assessment to be carried out by IPBES. Within them, different activities may identify more specific subcategories, associated with knowledge systems and disciplines relevant to the task at hand, without losing view of their placement within the general picture. For example, there is a large philosophical and instrumental gap between the ways in which gifts of nature and ecosystem goods and services are conceptualized, valued, and used according to different worldviews, but both categories are concerned with the good and bad things that societies obtain from the natural world, in the vast majority of cases through the mediation of institutions, be they ancestral rights to land, national economic policies, or international biodiversity treaties. In this way, general questions and problems can be formulated in a way that is intelligible across stakeholders, although they may strongly differ in the relative importance of different drivers in causing the problems and in the best responses to them.

Box 2. The Rosetta Stone

The Rosetta Stone is an inscribed rock tablet discovered in Egypt in 1799, which holds the key to understanding Egyptian hieroglyphs (http://www.britishmuseum.org/explore). The top band consists of Ancient Egyptian hieroglyphs, the middle band of Demotic script, and the bottom band of Ancient Greek writing. The inscriptions are three translations of the same decree, issued in Memphis in 196 BC, affirming the royal cult of Ptolemy V. In the early years of the 19th century, the Greek inscription was used as the key to deciphering the others.

The intellectual and practical challenges involved in the implementation of the IPBES model will be formidable. A conceptual scaffolding such as the CF may not be sufficient for fulfilling the IPBES vision of bringing on board stakeholders across disciplines, cultures, and knowledge systems in the search for solutions. We argue, however, that an inclusive CF is a necessary condition for the success of such a vision.

Moving into Practice: A Call for Embracing the Framework

In early 2014, IPBES started to implement its work programme: a set of coordinated assessments, policy tools, and capacity-building actions [14]. The two initial assessments of IPBES focus on pollination and pollinators associated with food production on the one hand, and scenario analysis and modelling of biodiversity and ecosystem services on the other. Many others will follow in the years to come.
If the vision behind the CF is embraced by the thousands of participants representing different disciplines, knowledge systems, and stakeholder groups who will perform IPBES-related work in the upcoming years, it will likely change the manner in which assessments have been done so far. Rather than focusing on one particular box or arrow, the framework is meant to inspire the community to look at issues in an integrated manner, in an effort to consider the full cycle of events from causes to solutions. This is likely to push all engaged parties well beyond their comfort zones, but it will be worth the effort.

For example, rather than focusing mostly on direct drivers of pollination change (such as habitat or climate change, landscape alteration, overuse of pesticides, or spread of pathogens), as recent scientific reviews of regional or global declines in pollinators [15][16][17] have done, the CF is meant to invite experts to look at the underlying causes of these direct changes, such as institutional drivers. In the spirit of the CF, the pollination assessment will further examine the impacts of pollinator declines on subsistence agricultural systems, which provide much of the food in some regions of the world and yet are under-represented in recent case studies. Assisted by the IPBES Task Force on Indigenous and Local Knowledge, it will also consider the trends observed by practitioners and their interpretations of such trends, and whether local and indigenous knowledge can offer solutions. It will have to take on board state-of-the-art metapopulation and metacommunity ecology, spatial modelling, microbiology and engineering, as well as social and economic analysis of the supply and demand chains, to identify which aspects are part of the problem and which are part of the possible solutions. It will have to propose options for integrated changes to trade in domestic bees, pesticide use, and incentives for the conservation of patches of vegetation with their wild pollinators embedded in agricultural landscapes, and adapt these options to different regions of the world. Lastly, it will need to answer the question: what would be the best mechanisms to involve government agencies, civil society, and the private sector to explain and curb the declining trends in pollinators involved in food production?

Pushing the Frontiers of Biodiversity Science

The ability of the CF to provide insight and support transformative action will be tested as IPBES undertakes its work programme. IPBES is not alone in its quest for more integrative, cross-paradigm, co-produced knowledge. The CF is only one concrete step within a more general thrust that now includes many national research agencies, international funding bodies, and some of the largest scientific networks in the world. What are the implications of the CF for the ways in which we will do science in the coming years? Will the need for convergence of different disciplines and knowledge systems to solve concrete practice and policy problems compromise the sharpness of disciplinary science? The risk exists, but we argue that the opportunities greatly outweigh it. Disciplines will not necessarily need to become more superficial, or abandon their specific tools or their internal validity criteria [5]. Instead, they are likely to find novel questions worthy of all their analytical power.
Indeed, one of the most crucial features of the CF is the way in which priority questions are generated and the way in which possible explanations and solutions are identified and put forward for practical implementation. As has happened many times throughout history, these new contexts of discovery and application, rather than blunting the cutting edge of science, will need it to be at its sharpest.
Quantitative analysis of optimum corrective fuel tax for road vehicles in Bangladesh: achieving the greenhouse gas reduction goal

This study estimates optimum corrective fuel taxes for Bangladesh and correlates them with climate change policy. First, we use the European road transport emission model (COPERT IV) to precisely estimate the externalities. Second, using the same model, we also estimate the reduction in greenhouse gas emissions caused by the fuel tax. Finally, we develop a correlation between the fuel tax rate and emissions reduction. Our benchmark calculation of the optimum corrective tax is US$0.94 per gallon for gasoline and US$1.46 per gallon for diesel (in 2016 prices). We find that congestion and accident externalities are the two main fuel tax components for Bangladesh. We also find that the net social welfare gain per year is US$302.11 million and the net revenue gain per year is 3.59% of GDP. The corrective diesel tax reduces fuel consumption by 18.10% and increases fuel efficiency by 12.53%. In the benchmark case, corrective fuel taxes reduce GHG emissions by 5.77%. With the combination of the existing gasoline tax and a diesel tax of US$1.20 per gallon, the country's greenhouse gas reduction goal can be achieved. Policymakers can use fuel taxes to support climate change policy.

Introduction

The main purpose of this study is to estimate the "corrective" fuel taxes that can help Bangladesh, as a representative middle-income country, achieve its intended nationally determined contribution (INDC) to greenhouse gas (GHG) reduction. Bangladesh has set this goal at a 5% reduction of GHG by the end of 2030, compared with business as usual (BAU) in 2011 [Ministry of Environment and Forests (MoEF) 2015]. "Corrective tax" is a term used in many studies, such as Buchanan (1969), Haughton and Sarkar (1996), and Jacobs and De Mooij (2015); it is also known as the "Pigouvian tax." As mentioned in Smith (2017), the tax can be used to correct for externalities or "internalities" in a market. In this study, we analyze the road transport market, which is known as a major contributor to GHG emissions. The use of gasoline and diesel fuel causes emissions in two ways. First, emissions occur during the combustion of fuel, which is the main source of road transport emissions. Second, evaporation occurs from volatile fuels and lubricants. In this study, we focus on emissions due to combustion.

However, the use of motor vehicles generates many negative externalities, including accidents, road damage, air pollution, congestion, and oil dependence (Maibach et al. 2007; Newbery 1990; Parry et al. 2007; Santos et al. 2010). Uddin and Mizunoya (2019) revealed that the construction of expressways can be effective in reducing such external diseconomies. To develop sustainable transport policies, all these external costs need to be measured correctly.

Most fuel tax studies have been done in developed countries. Newbery (2001), Parry and Small (2005), and Sterner (2007) recognized the fuel tax as an ideal instrument to address global warming. Parry and Small (2005) found that some people responded to the distance-based externalities of the fuel tax by purchasing more fuel-efficient vehicles, rather than driving less. Addressing these issues, they derived a formula for an optimum fuel tax based on fuel consumption. Their findings suggest that congestion externalities represent the largest component in fuel taxes in the USA and the UK.
Antón-Sarabia and Hernández-Trillo (2014) calculated an optimum gasoline tax for Mexico, using five important parameter values, including price elasticity, from Parry and Small (2005). They found accident externality to be the largest component of the fuel tax, followed by distance-related damage and congestion. Parry and Strand (2012) developed a general approach for estimating motor vehicle externalities; they calculated corrective taxes on gasoline and diesel for Chile, considering only two types of vehicles, namely, cars and trucks. They also used important elasticity values from the US literature. Moreover, they assumed that vehicles' fuel efficiency might influence the optimum fuel tax calculation. Their model suggests that accident and congestion externalities are the main components of the fuel tax in Chile.

All the abovementioned studies are empirical and use elasticity parameters estimated in different geographical locations. Therefore, despite their claims, these models can only partially capture the response to optimum fuel taxes. Additionally, these studies do not address the fuel tax amount that would reduce global pollution to a certain level; estimating such a reduction requires an emissions model, which we describe below.

The full expression for the corrective fuel tax is as follows (the equation is reconstructed here from the variable definitions in the text, following Parry and Small 2005):

$t_F^C = \frac{E_{PF} + \beta\,(E_{PM} + E_C + E_A + E_D)}{g}$   (1)

where $t_F^C$ is the corrective fuel tax; $E_{PF}$ is the marginal global pollution cost; $E_{PM}$ is the marginal local pollution cost; $E_C$ is the marginal congestion cost; $E_A$ is the marginal accident cost; $E_D$ is the marginal road damage cost (all per vehicle kilometer); $g$ is the fuel combustion per vehicle kilometer; and $\beta$ is the proportion of the reduction in fuel consumption that comes from a reduction in distance driven, that is, the vehicle kilometer traveled (VKT) portion of fuel price elasticity (Parry and Small 2005). Parry and Small (2005), Parry and Strand (2012), and Goodwin (1992) define $\beta$ as the "vehicle miles traveled (VMT) portion of fuel price elasticity," as the mile is used as the unit of distance. However, there is essentially no difference between VMT and VKT; hence, we use VKT in this study. Following Parry and Small (2005),

$\beta = \frac{\eta_{MF}}{\eta_{FF}}, \qquad \eta_{FF} = \eta_{MF} + \eta_{FF}^{M}$   (2)

where $\eta_{FF}$ is the negative of the fuel demand elasticity, $\eta_{MF}$ is the negative of the elasticity of VKT with respect to the consumer fuel price, and $\eta_{FF}^{M}$ is the negative of the fuel demand elasticity with VKT held constant. Since we could not find data on the elasticity of fuel demand in Bangladesh, we assume $\beta = 0.4$, as in Parry and Small (2005). We use $\beta = 0.2$-$0.6$ for sensitivity analysis. In Eq. (1), we do not consider marginal noise cost, as it is difficult to measure in Bangladesh, and we did not find any related studies for the country.

To estimate the effects of the corrective fuel tax on kilometers traveled and fuel economy, we use the following functions, as discussed in Parry and Small (2005); the constant-elasticity forms shown below are reconstructions consistent with the definitions in the text:

$M = M^{0}\left(\frac{p_F + t_F}{p_F + t_F^{0}}\right)^{-\eta_{MF}}$   (3)

$g = g^{0}\left(\frac{p_F + t_F}{p_F + t_F^{0}}\right)^{-(\eta_{FF} - \eta_{MF})}$   (4)

$F = g \cdot M$   (5)

where a superscript 0 denotes an initial (currently prevailing) value, $p_F$ denotes the producer price of fuel, $t_F$ denotes the excise tax on fuel (the fuel tax), and $M$ denotes the aggregate vehicle kilometers traveled by households. (Table 1 summarizes the differences between Parry and Strand (2012) and this study.) The welfare gains ($W_F$) from raising the fuel tax from an initial level to its corrective level are given by Parry and Strand (2012) as follows:

$W_F = \int_{t_F^{0}}^{t_F^{C}} \left(t_F^{C} - t_F\right)\left(-\frac{dF}{dt_F}\right) dt_F$   (6)

where $F$ is aggregate fuel consumption.
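As a numerical illustration of Eq. (1), the following minimal Python sketch computes a corrective tax from per-kilometer externality components. All the externality values here are hypothetical placeholders, not estimates from this paper; only the benchmark beta of 0.4 and the 3.75 km/l diesel fuel efficiency reported later in the text are reused.

# Hedged sketch of Eq. (1); the externality inputs are hypothetical.
def corrective_fuel_tax(e_pf, e_pm, e_c, e_a, e_d, g, beta):
    """Corrective tax in US$/liter.
    e_pf..e_d: marginal externality costs in US$/km; g: liters per km;
    beta: VKT portion of fuel price elasticity."""
    return (e_pf + beta * (e_pm + e_c + e_a + e_d)) / g

tax_per_liter = corrective_fuel_tax(
    e_pf=0.02, e_pm=0.005, e_c=0.06, e_a=0.05, e_d=0.015,  # hypothetical US$/km
    g=1 / 3.75,   # liters per km, from the 3.75 km/l diesel benchmark
    beta=0.4,     # benchmark value assumed in the paper
)
print(f"corrective tax: ${tax_per_liter:.2f}/liter (${tax_per_liter * 3.785:.2f}/gallon)")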
COPERT IV model

We use COPERT IV to calculate emissions from road transport in Bangladesh. COPERT IV is a software program that estimates motor vehicle emissions, including N2O, CO, methane (CH4), sulphur dioxide (SO2), NOx, VOC, and PM, produced by different vehicle categories, as well as CO2 emissions based on fuel consumption (Kousoulidou et al. 2010). This model has found growing acceptance among scientists who estimate road emissions regularly (Ntziachristos and Samaras 1998). Moreover, COPERT IV emission factors have been included in the 2006 Intergovernmental Panel on Climate Change (IPCC) guidelines. Gkatzoflias et al. (2012) provide a detailed description of the COPERT IV model.

As vehicle emissions vary by individual vehicle size and type; emission control technology; age; vehicle maintenance; fuel use and type; and kilometers driven, each country or region should have specific emission factors. These also depend on driving conditions and environment. Guensler (1993) categorized vehicle emission factors in the USA in terms of vehicle specifications, fuel specifications, vehicle operating conditions, and vehicle operating environment. Similarly, a dedicated road transport emission model (COPERT-based) was developed for European vehicles. The model calculates pollutant emissions and energy consumption within a region or country, using data such as the number of vehicles; the year of introduction of regulations; fuel consumption and characteristics; the average temperatures of the country; route distribution/driving conditions (rural, urban, highway); and average speeds (Burón et al. 2004). Kholod et al. (2016) suggest using a COPERT model to calculate emissions, especially in countries that have implemented European emission standards. Cai and Xie (2007), Lang et al. (2012), Thambiran and Diab (2011), and many others have used this model to estimate emissions at national and local levels for non-European countries. Since 2005, Bangladesh has adopted European emission standards: Euro I for diesel vehicles and Euro II for gasoline vehicles (Pundir 2012). Since Bangladesh does not have country-specific emission factors, COPERT IV seems suitable for the purpose of our investigation. We use the COPERT algorithm to estimate annual fuel consumption, by capturing fuel specification (Pundir 2012), emission regulation, vehicle information, VKT, speed, and driving share data.

Elasticity of VKT with respect to consumer fuel price

We estimate the corrective fuel tax using an intermediate-run estimate of the elasticity of VKT with respect to the consumer fuel price. Since it plays a key role in the corrective fuel tax formulation, we discuss our methodology for this parameter value in detail. It is worth mentioning that although passenger kilometers (PKM) could be another option for buses, cars, and other passenger vehicles, this measure is not applicable to trucks. Additionally, a PKM database at the national level has not yet been established. To calculate the intermediate-run fuel price elasticity of VKT in Bangladesh, we estimate a simple double-log model using country-specific data over 8 years (2009-2016). Given the stationary nature of our data, we follow Lin and Zeng (2013). Our model is as follows:

$\ln VKT = \beta_0 + \beta_1 \ln P + \beta_2 \ln I + \varepsilon$   (7)

where VKT denotes per capita vehicle kilometers (km) traveled, $P$ denotes the real price of fuel, $I$ denotes per capita real disposable income, $\beta_0$ is the constant term, $\beta_1$ is the elasticity of VKT with respect to the consumer fuel price, $\beta_2$ is the elasticity of VKT with respect to income, and $\varepsilon$ is the error term.
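A minimal sketch of estimating Eq. (7) by ordinary least squares follows. The eight annual series below are synthetic placeholders (the paper's 2009-2016 national data are not reproduced here), so the printed coefficients are illustrative only.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 8                                         # eight annual observations, as in the paper
ln_p = np.log(np.linspace(0.8, 1.3, n))       # real fuel price (synthetic)
ln_i = np.log(np.linspace(1000, 1500, n))     # per capita real income (synthetic)
ln_vkt = 2.0 - 0.19 * ln_p + 0.5 * ln_i + rng.normal(0, 0.01, n)

X = sm.add_constant(np.column_stack([ln_p, ln_i]))
fit = sm.OLS(ln_vkt, X).fit()
print(fit.params)   # [beta0, beta1, beta2]; beta1 is the VKT price elasticity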
Static models like Eq. (7) are expected to produce intermediate-run elasticities. The interpretation of the coefficients in the above model is straightforward: the price and income elasticities are

$\frac{\partial \ln VKT}{\partial \ln P} = \beta_1 \quad \text{and} \quad \frac{\partial \ln VKT}{\partial \ln I} = \beta_2$

Marginal local emission cost

Marginal local emission costs are calculated using emission factors and damage costs. Since Bangladesh follows Euro standards for emissions, COPERT IV can calculate emission factors for different vehicles, differentiated by area and road type. In this regard, we consider four categories of roads and highways in Bangladesh: national highways, regional highways, Zila roads (roads connecting district headquarters to the smallest administrative unit, called Thana), and rural roads. We assume that our national highways and rural roads are the same as Europe's motorways and rural roads, respectively. However, the valuation of damage is highly speculative. The Directorate-General of the European Commission responsible for transport within the European Union (MOVE 2014) provides marginal local emission costs for EU countries (see Appendix). We transfer these unit costs to Bangladesh by scaling for income:

$E_{i,j,k}^{L,BD} = E_{i,j,k}^{L,EU} \left(\frac{I^{BD}}{I^{EU}}\right)^{\varepsilon_I}$   (8)

where $E_{i,j,k}^{L,BD}$ is the local emission cost per km in Bangladesh for vehicle class $i$, technology $j$ (Euro I, II, etc.), and driving environment $k$ (i.e., urban, rural, and highway); $E_{i,j,k}^{L,EU}$ is the local emission cost per km in the EU for $i$, $j$, and $k$; $I^{BD}$ is the real GDP per capita in Bangladesh (US$); $I^{EU}$ is the real GDP per capita in the EU (US$); and $\varepsilon_I$ is the elasticity of the value of a life year (VOLY) with respect to income. From the World Bank (2019a), $I^{BD}/I^{EU}$ is (US$3869/US$43,976 =) 0.088. The empirical literature on the income elasticity of VOLY ($\varepsilon_I$) is conflicting, with estimates varying between about 0.5 and 1.5 (Parry and Strand 2012). This suggests a plausible VOLY range for Bangladesh of US$1652 to US$18,779, with a benchmark value of US$5571 when $\varepsilon_I = 1$ (all in 2016 prices). We use this range for our sensitivity analysis.

To get the marginal local emission cost, we multiply the unit marginal local cost by the number of vehicles and the kilometers traveled under the same condition; then, we divide the product by total VKT, as shown in Eq. (9):

$E_{PM} = \frac{\sum_{i,j,k} E_{i,j,k}^{L,BD}\, v_{i,j}\, m_{i,j,k}}{\sum_{i,j,k} v_{i,j}\, m_{i,j,k}}$   (9)

where $E_{PM}$ is the marginal local emission cost, $v_{i,j}$ is the number of vehicles of vehicle class $i$ and technology $j$, and $m_{i,j,k}$ is the kilometers traveled by vehicle class $i$ and technology $j$ under condition $k$.

Marginal global pollution cost

Global warming costs are highly speculative due to the long period involved, uncertainties about atmospheric dynamics, and the inability to forecast adaptive technologies (Parry and Small 2005). The key step here is the valuation of the cost of CO2. MOVE (2014) suggests a carbon price of US$120 (in 2010 prices), with a range of US$65-US$225 at a 3% discount rate. We extrapolate this carbon price in Eq. (10):

$C_{f,GHG}^{BD} = C_{f,GHG}^{EU} \left(\frac{I^{BD}}{I^{EU}}\right)^{\varepsilon_I}$   (10)

where $C_{f,GHG}^{BD}$ is the climate change cost of fuel type $f$ for Bangladesh (US$), and $C_{f,GHG}^{EU}$ is the climate change cost of fuel type $f$ for the EU (US$) (see Appendix Table 15). Using $I^{BD}/I^{EU} = 0.088$ and $\varepsilon_I = 1$, our benchmark CO2 price for Bangladesh is US$11.62, with a range from US$6.29 to US$21.78 (all in 2016 prices). Again, we use this range for our sensitivity analysis.
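The income-based benefit transfer in Eqs. (8) and (10) can be sketched in a few lines of Python. The income ratio and the EU carbon price come from the text; the 2010-to-2016 price adjustment factor is an assumption added here purely to show how the benchmark figure of about US$11.62/ton can be reproduced.

# Hedged sketch of the benefit transfer in Eqs. (8) and (10).
def transfer(cost_eu, income_ratio, eps_income=1.0):
    return cost_eu * income_ratio ** eps_income

income_ratio = 3869 / 43976            # I_BD / I_EU ~ 0.088 (World Bank 2019a)
co2_eu_2010 = 120.0                    # US$/ton, central value from MOVE (2014)
co2_bd_2010 = transfer(co2_eu_2010, income_ratio)   # ~US$10.56/ton in 2010 prices
co2_bd_2016 = co2_bd_2010 * 1.10       # assumed ~10% cumulative inflation, 2010-2016
print(round(co2_bd_2016, 2))           # ~11.6, close to the paper's US$11.62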
The marginal global pollution cost is estimated in Eq. (11): we multiply total fuel consumption by the unit climate change cost of that fuel, then divide the product by the aggregate kilometers traveled using that fuel. Note that we import total fuel consumption from the COPERT IV software using the vehicle database, emission standards, and fuel specifications, as the government database of fuel consumption for the road sector does not separate out the fuel consumption of other sectors.

$E_{PF} = \frac{F_f \cdot C_{f,GHG}^{BD}}{M_f}$   (11)

where $E_{PF}$ is the marginal global pollution cost, $F_f$ is the aggregate fuel consumption of fuel type $f$ in liters, and $M_f$ is the aggregate kilometers traveled using fuel type $f$.

Marginal congestion cost

Each vehicle creates a marginal congestion cost for other road users due to its road usage, which depends on the marginal delay and the value of travel time (VOT). We use VOT data from Uddin (2017), who studied VOT for different vehicles in Bangladesh (see Table 4 and Sect. 2.2.8 for details). An approximation for the marginal delay can be inferred from data on average delay. We obtain the average delay due to congestion by comparing the inverse of the average speed with the inverse of the desired speed of a vehicle. Using the "Bureau of Public Roads" formula, Parry and Strand (2012) found the marginal delay to be four times the average delay. Since a nationwide congestion study does not exist, we assume that 30% of urban mileage and 20% of highway mileage face congestion, and that there is no congestion on rural roads. To calculate the marginal congestion cost, we multiply the marginal delay by the kilometers traveled in congestion and by the VOT. Then, we divide the product by the total VKT of the respective vehicles, as given in Eq. (12):

$E_C = \frac{\sum_{i,j,k} T_{i,j,k}^{MD}\, M_{i,j,k}^{congestion}\, VOT_{i,k}}{\sum_{i,j,k} m_{i,j,k}}$   (12)

where $E_C$ is the marginal congestion cost; $T_{i,j,k}^{MD}$ is the marginal delay per vehicle km for $i$, $j$, and $k$; $M_{i,j,k}^{congestion}$ is the kilometers traveled in congestion for $i$ and $k$; and $VOT_{i,k}$ is the value of travel time for $i$ and $k$.

Marginal accident cost

The marginal accident cost depends on the types of accidents, the number of accidents, and road traffic accident (RTA) costs. However, we take a different approach to calculating the marginal accident cost. We assume that each vehicle has a probability of an accident when it is driven on a road and poses an accident threat to others. This probability can be taken as equivalent to the past accident rates of that vehicle type. In Bangladesh, a high level of underreporting is believed to exist at all levels of accident statistics, possibly due to the Government's limited capacity to track post-accident deaths. Furthermore, some accidents are not reported in order to avoid legal proceedings. To overcome this challenge, we take fatal accidents to be 24,954 (World Health Organization 2018), and serious, simple, and property damage only (PDO) accidents to be 1954, 109, and 2566, respectively (Bangladesh Bureau of Statistics 2018). We assume that non-reported accidents have no effect on our results. The Power and Participation Research Centre (2014) identified buses as the dominant vehicle category (38.1%) among accident perpetrators in 2012, followed by trucks (30.4%), motorcycles (12%), cars (6%), jeeps (4.5%), and others (9%). Paul et al. (2008) studied RTA costs for Bangladesh using the gross output method. We update their RTA costs, assuming 6% inflation (Table 2). We calculate the marginal accident cost by multiplying the number of accidents of each type by the corresponding RTA cost, then dividing the product by total VKT; we then distribute the total marginal cost to each vehicle type by its probability of accident. We summarize this in Eq. (13):

$E_A = P_i \cdot \frac{\sum_l N_l\, C_l^{a}}{\sum_{i,j,k} m_{i,j,k}}$   (13)

where $E_A$ denotes the marginal accident cost, $P_i$ is the probability of an accident involving vehicle class $i$, $N_l$ is the number of accidents of type $l$ (i.e., fatal, serious, simple, PDO), and $C_l^{a}$ is the cost of a road traffic accident of type $l$.
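To make the congestion component in Eq. (12) concrete, here is a minimal sketch for a single vehicle class. The urban speeds and the four-times-average-delay factor come from the text; the VOT and the mileage figures are hypothetical placeholders.

# Hedged sketch of Eq. (12) for one vehicle class on urban roads.
avg_speed, desired_speed = 20.0, 30.0            # km/h, urban assumptions in the paper
avg_delay = 1 / avg_speed - 1 / desired_speed    # hours per km lost to congestion
marginal_delay = 4 * avg_delay                   # BPR-based factor from Parry and Strand (2012)

vot = 1.5                      # US$/hour, hypothetical value of travel time
m_total = 1000.0               # total VKT for this class, million km (hypothetical)
m_congested = 0.30 * m_total   # 30% of urban mileage assumed congested

e_c = marginal_delay * m_congested * vot / m_total   # US$/km averaged over all VKT
print(round(e_c, 4))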
Marginal road damage cost

The road damage cost, which is the cost of repairing the damage caused by the passage of vehicles, is borne by the highway agencies. However, other drivers also bear this cost in the form of the extra vehicle operating costs imposed by the damage (Santos et al. 2010). Newbery (1988) suggests that, under certain conditions, the average cost of repair can equal the marginal road damage cost. Empirical studies in transportation engineering suggest that the road damage caused by a vehicle increases with the fourth power of the axle load. Thus, most of the damage is caused by heavy-duty vehicles on thin roads (Newbery 1990). We estimate the marginal road damage cost for heavy-duty vehicles using Eq. (14):

$E_D = \frac{C_m}{M_{HDV}}$   (14)

where $E_D$ is the marginal road damage cost, $C_m$ is the annual maintenance expenditure in US$, and $M_{HDV}$ is the annual kilometers traveled by heavy-duty vehicles.

Producer price of fuel ($p_F$) and initial tax rate ($t_F^0$)

As Bangladesh is not an oil-producing country, it imports oil to meet domestic demand. Thus, the price in international markets affects oil imports. However, the domestic oil market is regulated by the government to protect consumers from sudden price changes; the government therefore accepts a huge loss each year when the price in international markets increases. We take the world average oil price to be the producer price of fuel. Subtracting the producer price from the local fuel price, we obtain the initial tax rate (Table 3).

Other data

Our main data source for the major variables is Uddin (2017). We also use the Bangladesh Road Transport Authority (BRTA) database for on-road vehicle statistics, from the Bangladesh Bureau of Statistics (2018). The BRTA divides motorized vehicles into 20 classes. However, it does not keep records by fuel use. Therefore, we use the Roads and Highways Department (RHD) database to classify vehicles according to the types of fuel used. RHD classifies vehicles into 11 classes, which differs from the BRTA classification. In fact, BRTA defines heavy trucks as trucks that carry more than 7.5 tons of payload, while RHD classifies heavy trucks as trucks that have three or more axles. Due to this difference in definition, BRTA-classified heavy truck numbers can include many RHD-classified medium trucks that have two axles but carry more than 7.5 tons of payload. Uddin (2017) shows that about 47% of BRTA-classified trucks are classified as heavy trucks by RHD, 16% as medium trucks, and less than 2% as articulated trucks. We assume the 2% articulated trucks are included in the heavy trucks, and the remaining 35% are small trucks (Appendix Table 12). Since Bangladesh does not have country-specific emission factors, our vehicle classifications should align with the vehicles referenced in the IPCC guidelines. Therefore, we compare the RHD and BRTA vehicle classifications with those of Eggleston and Walsh (1998) to create a combined vehicle classification and a unified database that conforms with COPERT (see Appendix Table 13). We assume the closest match in the COPERT classification when analogous vehicles are not found. For example, we assume the COPERT class PC mini (1.4-2 l) for the Auto Rickshaw, the Tempo, and the Human Hauler. Uddin (2017) surveyed VKT, the average travel speed of vehicles, and VOT on national highways (see Table 4).
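As a small illustration of the truck reclassification just described, the sketch below splits a hypothetical BRTA truck count into RHD-style classes using the percentages from Uddin (2017); the fleet total is invented for the example.

# Hedged sketch: mapping BRTA-classified trucks onto RHD-style classes.
brta_trucks = 100_000   # hypothetical BRTA truck count
shares = {"heavy (3+ axles)": 0.47, "medium": 0.16, "articulated": 0.02, "small": 0.35}
rhd = {k: round(brta_trucks * v) for k, v in shares.items()}
# Articulated trucks are folded into the heavy class, as assumed in the text.
rhd["heavy (3+ axles)"] += rhd.pop("articulated")
print(rhd)   # {'heavy (3+ axles)': 49000, 'medium': 16000, 'small': 35000}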
As data on individual vehicle kilometers traveled are difficult to establish, we assume driving shares are distributed as 40%, 30%, and 30% on urban roads, rural roads, and national highways, respectively. From our industry experience, we assume the desired safe speed on highways to be 50 km/h. In contrast, the traffic on urban roads in Bangladesh is mixed in nature and moves in groups, so the speed of the slowest vehicle pulls down the average speed of the group. Additionally, numerous intersections at frequent intervals, poor road conditions, side friction from non-motorized vehicles, a manual signaling system, and illegal encroachments on roadsides slow down vehicle speeds. At times, traffic slows to walking speed in large cities. Therefore, we assume an average speed of 20 km/h and a desired speed of 30 km/h for urban traffic during congestion.

The VOT is captured by the concept that the time spent traveling could be used for an alternative activity. If we assign a monetary value to this alternative activity, we can estimate the VOT; hence, its unit is US$ per hour (US$/h). Both the average wage and revealed preference methods are used for the VOT calculation. We use both VOT calculations for our sensitivity analysis, and an average of the two is used for the benchmark calculation of the corrective tax. As we could find no study on travel time on urban roads, we assume that the urban VOT is the same as on highways.

Benchmark parameter values and COPERT IV analysis

We calculate each component of the corrective fuel tax before we calculate the corrective gasoline and diesel taxes and analyze their impacts. According to Eq. (1), our corrective fuel tax has five components; each component depends on one or two parameters and assumptions. Therefore, we arrive at a range of values for each parameter. For example, Eq. (8) has two parameters, namely, the local emission cost per km and the elasticity of VOLY with respect to income, and both parameters have a range of values. It is difficult to arrive at a single value for a corrective fuel tax component if the parameters change; therefore, we use the mean (average) values of the parameters so that we can reach a single value for each component of the corrective fuel tax. We summarize the benchmark parameter values in Table 5. For the other parameters, using the COPERT IV model calculation, we arrive at an initial fuel consumption ($F_f$) of 2,303,519 tons for gasoline and 7,842,565 tons for diesel. The total initial VKT for gasoline and diesel are 48,874.22 million km and 34,838.34 million km, respectively. Dividing the two figures, we arrive at an initial fuel efficiency ($1/g^0$) of 15.86 km/l for gasoline and 3.75 km/l for diesel.

Elasticity of VKT with respect to consumer fuel price ($\beta_1$)

The elasticity of VKT with respect to fuel price is probably the most important component of the corrective fuel tax; its estimation is crucial for policymaking. We believe that we are the first to estimate the elasticity of VKT with respect to the consumer fuel price for Bangladesh. Earlier, McRae (1994) estimated the price elasticity of gasoline demand (-0.35) for Bangladesh using a dataset for the period 1973-1987. However, to the best of our knowledge, no study has been done on the elasticity of VKT in Bangladesh. Our regression analysis shows that the VKT elasticity for gasoline and diesel is -0.26 and -0.19, respectively (see Table 6). This means that a 1% increase in fuel price is associated with a 0.26% and a 0.19% decrease in annual VKT for gasoline and diesel vehicles, respectively.
Most results are statistically significant at the 5% level. Goodwin (1992) found a short-run VMT elasticity of -0.16 and a long-run VMT elasticity of -0.33. Graham and Glaister (2002) reported elasticities in a similar range, while Lin and Zeng (2013) found a larger value, which they attributed to the poor quality of VKT data for China. Our findings differ from the previous literature, falling in the intermediate range. We think this variation is due to the sensitivity of the estimates to different aspects of the model's structure, such as the selection of dependent variables, the nature of the data, the period of analysis, the geographic location, and the estimation techniques, among others. Our VKT elasticity for gasoline is greater than that for diesel. One possible reason is that diesel demand is more inelastic because diesel is used mainly for public transportation vehicles and commercial trucks; drivers are therefore not very responsive to changes in the diesel price, leading to a more inelastic VKT.

Gasoline tax

Applying the benchmark parameter values for gasoline to Eq. (1), we obtain the corrective gasoline tax. Our benchmark corrective gasoline tax is US$0.94 per gallon (see Table 7), with congestion accounting for ($E_C \cdot 100/t_F^C$ =) 54.07%, traffic accidents ($E_A \cdot 100/t_F^C$ =) 33.59%, global warming ($E_{PF} \cdot 100/t_F^C$ =) 11.04%, and local tailpipe emissions ($E_{PM} \cdot 100/t_F^C$ =) 1.29%. Our benchmark estimate is lower than that of Parry and Small (2005), even after we update their results to 2016 prices (see Table 8). This seems reasonable given the lower valuations of a life year and travel time for Bangladesh compared with the USA and the UK. However, we note that the percentage of the accident component in the corrective tax is about 33.59% for Bangladesh, which is higher than that in the USA (32%) and the UK (20%). Our explanation is that accident externalities are much higher for Bangladesh than for the USA and the UK, even though nationwide congestion in Bangladesh might not be comparable to that in the big cities of the USA and the UK.

Diesel tax

Applying the benchmark parameter values for diesel to Eq. (1), we obtain the corrective diesel tax. Our benchmark corrective diesel tax is US$1.46 per gallon (see Table 7). Our benchmark calculation implies that the corrective diesel tax should be higher than the corrective gasoline tax, which is contrary to most situations in the OECD countries, as depicted in Fig. 1. We argue that a low tax on diesel could act as an incentive for diesel users, and gasoline users might switch to diesel. Mayeres and Proost (2001) found that lower taxes on diesel than on gasoline were inefficient because diesel emitted more pollutants than gasoline. A study by Harrington and McConnell (2003) found that the use of diesel-fueled vehicles increased in Europe due to a lower tax on diesel than on gasoline, which supports our argument. However, our corrective diesel tax is higher than that in the USA, Canada, Chile, and New Zealand, as Bangladeshi diesel vehicles have lower fuel economies and emit more per kilometer than those in the abovementioned countries. For this reason, our corrective diesel tax seems reasonable. Unlike gasoline, road damage contributes ($E_D \cdot 100/t_F^C$ =) 11.39% (US$0.17 per gallon) to the diesel tax. The other externalities are also higher than for the gasoline tax. However, the share of the congestion externality in the total corrective tax is lower for diesel ($E_C \cdot 100/t_F^C$ = 37.67%) than for gasoline (54.26%).
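The percentage decompositions quoted above all follow the same pattern, e.g., $E_C \cdot 100/t_F^C$. The sketch below uses hypothetical per-gallon component values, back-solved only to approximately reproduce the reported gasoline shares; the dollar figures are illustrative, not the paper's estimates.

# Hedged sketch of the share decomposition, e.g., E_C * 100 / t_F_C.
components = {            # US$/gal, back-solved for illustration only
    "congestion": 0.5083,
    "accidents": 0.3157,
    "global warming": 0.1038,
    "local emissions": 0.0121,
}
t_c = sum(components.values())         # ~US$0.94/gal benchmark gasoline tax
for name, e in components.items():
    print(f"{name}: {100 * e / t_c:.2f}%")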
The lower congestion share for diesel can be explained by the fact that gasoline-powered private vehicles carry fewer passengers than public buses, which are the main cause of congestion in the big cities of Bangladesh. In this context, our finding implies that the tax could reduce congestion in the absence of a congestion charge or any measure specifically designed to address congestion. Notably, accident and congestion externalities have roughly equal shares in the corrective diesel tax. Our corrective diesel tax is much smaller than that of Parry and Strand (2012) for Chile (US$2.09 per gallon). Although we follow the same model, our methodology is different. Assumptions about the value of life and the price of carbon, and differences in income, accident statistics, and road maintenance expenditures, might cause the dissimilarity between the corrective diesel taxes in the two countries. However, we think our use of on-road vehicle data, appropriate vehicle kilometers traveled, driving conditions, country-specific emission standards, and elasticity values makes our estimate more precise than that of Parry and Strand (2012).

Impacts of tax reform on social welfare and government revenue

Using Eq. (6) in our benchmark case, the welfare gain for gasoline is US$11.48 million. However, our corrective tax for gasoline is less than the currently prevailing excise tax. Therefore, the corrective tax decreases the welfare gain by the same amount and fuel efficiency by 2.71%; it also increases gasoline demand by 4.93%. However, by raising the excise tax to the corrective diesel tax level, the welfare gain becomes US$313.59 million. At present, we show a loss in welfare of US$70.87 million due to the fuel subsidy. Therefore, our net social welfare gain would be (US$313.59 - US$11.48 =) US$302.11 million. In addition, raising the excise tax to the corrective tax also increases fuel efficiency by 12.53%. In the long run, diesel consumption decreases by 18.10%.

By implementing the corrective gasoline tax, gasoline demand increases by 4.93%. Therefore, the government loses revenue in two ways. First, revenue from the existing sales of 814.08 million gallons of gasoline (from the COPERT calculation) decreases by US$252.37 million. Second, the government must pay for the increased demand for gasoline at the world market price, which amounts to US$120.00 million. Therefore, the total loss from the reduction in price is (US$252.37 + US$120.00 =) US$372.37 million. However, the increase in the diesel price has several benefits. First, the government does not have to pay the subsidy for future fuel consumption of 2008.04 million gallons (current consumption is 2451.82 million gallons from the COPERT calculation, and the current subsidy is US$367.77 million), which amounts to US$301.21 million. Second, the government can increase revenue by US$2931.74 million by selling diesel. Finally, with the price increase, the demand for diesel drops by 18.10%, which means the government can import less diesel than before, saving US$1477.79 million. Thus, the total gain would be (US$301.21 + US$2931.74 + US$1477.79 =) US$4710.74 million, and the net gain would be (US$4710.74 - US$372.37 =) US$4338.37 million. The GDP of Bangladesh for fiscal year (FY) 2016-2017 is US$120,797.76 million (Bangladesh Bureau of Statistics 2018) at constant prices. Therefore, our net gain from the corrective fuel tax is 3.59% of GDP. Thus, a corrective fuel tax, like a carbon tax, could be an effective way to promote economic development (Zou et al. 2014).
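The revenue arithmetic above can be checked directly; the sketch below simply reproduces the additions reported in the text (all figures in US$ millions, taken from the paper).

# Reproducing the reported revenue arithmetic (US$ millions).
gasoline_loss = 252.37 + 120.00            # lost sales revenue + extra import cost
diesel_gain = 301.21 + 2931.74 + 1477.79   # subsidy saved + sales revenue + imports avoided
net_gain = diesel_gain - gasoline_loss     # 4338.37
gdp_fy2017 = 120_797.76
print(round(net_gain, 2), f"{100 * net_gain / gdp_fy2017:.2f}% of GDP")   # ~3.59%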
The government can use this revenue to finance road maintenance (Act No. XXVIII, 2013), which will improve fuel efficiency and speed, and reduce road damage costs.

Sensitivity analysis

Considering the uncertainty surrounding the benchmark calculation, we conduct a sensitivity analysis by varying seven of the most relevant parameters, one at a time. We follow MOVE (2014) for the intervals of the first two parameters, Uddin (2017) for the alternative value of travel time, and Parry and Small (2005) for the intervals of the remaining parameters (see Table 9). The results are most sensitive to the VKT portion of fuel price elasticity. Using $\beta = 0.6$ causes the corrective gasoline and diesel taxes to rise to US$1.40/gal and US$1.90/gal, respectively, while using $\beta = 0.2$ lowers the corrective gasoline and diesel taxes to US$0.48/gal and US$0.66/gal, respectively. The results are also sensitive to traffic congestion. An increase in congestion of 50% results in increases in the corrective gasoline and diesel taxes of 29.79% and 14.38%, respectively; under a low traffic congestion scenario, the corrective gasoline and diesel taxes fall by 28.72% and 26.03%, respectively. The results are also sensitive to the VOLY. As discussed earlier, for Bangladesh, the VOLY could be anywhere between US$1652 and US$18,779. Using the higher VOLY increases the corrective gasoline and diesel taxes by 24.47% and 17.81%, respectively; using the lower VOLY decreases them by 4.25% and 3.42%, respectively. Using a higher value for global warming damages (US$21.77 per ton of CO2 instead of US$11.62 per ton) increases the corrective gasoline and diesel taxes by 14.28% and 16.30%, respectively. Increasing and decreasing accident externalities by up to 50% causes the corrective fuel taxes to vary by up to approximately 20%. In the remaining cases shown in Table 9, alternative parameter assumptions do not change the corrective fuel taxes dramatically. For example, changing road damage by 50%, or using the average wage method or the revealed preference method, causes the corrective tax to vary by about 10%.

In sum, a wide range of outcomes is possible under alternative parameter scenarios. To show the likelihood of different outcomes over the ranges of our parameters, we perform a simple Monte Carlo simulation. First, we set the maximum and minimum for all externalities. Second, we draw each of the above parameters randomly and independently 1000 times from the selected distributions; third, for each draw, we calculate the corrective fuel tax. Finally, we take the average of all the corrective fuel taxes. Table 10 shows the simulation results. The probability that the corrective gasoline tax is less than US$0.94/gal is 0.38, and the probability that it is below US$1.20/gal is 1. For diesel, the probability that the corrective diesel tax is less than US$1.46/gal is 0.57, and the probability that it is below US$1.50/gal is 0.74.

Corrective fuel taxes for the INDC goal

The main pollutants from combustion activities that have an effect at the local level are nitrogen oxides, VOCs, NMVOCs, carbon monoxide, and particulate matter. CO2, nitrous oxide, and methane have impacts at the global level. Calculations from our COPERT IV model show that total emissions of PM2.5 from the road transport sector are 13,527.09 tons (see Appendix Table 17), of which 39.75% of the pollution occurs on urban roads, followed by 36.18% on rural roads and 24.07% on highways.
Diesel vehicles are mainly responsible for PM2.5 (Fig. 2), emitting 86% of the total, while gasoline vehicles are responsible for only 14%. Among the diesel vehicles, mini buses, heavy trucks, and large buses are responsible for 22.51%, 18.89%, and 16.66% of total emissions, respectively; the contribution from jeeps is 11%, while other vehicles each contribute less than 10%. Among gasoline vehicles, motorcycles contribute 93% of total emissions. In total, buses emit 50% of total PM2.5, while heavy-duty trucks, motorcycles, passenger cars, and light commercial vehicles emit 26.96%, 12.61%, 7.86%, and 2.42%, respectively.

Total PM10 emissions are given in Appendix Table 18. Like PM2.5, diesel vehicles emit the largest portion (85%) of total PM10. Among gasoline vehicles, motorcycles and private cars have 88% and 12% shares of the pollution contribution, respectively. Among all the vehicle classes, buses are responsible for 49.36% of total PM10, followed by heavy-duty trucks at 26.64%, motorcycles at 12.77%, passenger cars at 8.71%, and light commercial vehicles at 2.51%. Individual assessment suggests that mini buses and heavy trucks bear responsibility for 22.09% and 18.37% of these emissions, respectively.

Diesel vehicles emit 98% of total NOx. Among them, buses contribute 63.67% (145,370.95 tons) of NOx, which is more than twice the emissions from heavy-duty vehicles (HDVs) (28.16%). Individually, mini buses emit 29.29%, and heavy trucks and large buses each emit almost 20%. Passenger cars and light commercial vehicles (LCVs) contribute 5.67% and 1.78%, respectively (Appendix Table 19).

Total VOC emissions are 119,797.5 tons (Appendix Table 20). The urban, rural, and highway shares are 41.54%, 36.04%, and 22.42%, respectively. Gasoline vehicles are the primary emitters of VOC (83%); motorcycles alone contribute 80.55% (96,494.85 tons), the highest of all vehicles. Among the diesel vehicles, mini buses are responsible for 4.63% of total VOC. Motorcycles and buses are also the major contributors of NMVOC: motorcycles are responsible for 81.92%, whereas buses emit 10.16% (Appendix Table 21).

CO is the second most emitted pollutant, at 449,528.04 tons (Appendix Table 22). Gasoline vehicles are the main source of CO (86%), of which 97% comes from motorcycles. Mini buses contribute 3.83%, followed by private cars at 2.79%; heavy trucks and large buses each contribute 2.68%.

CO2 is the most emitted pollutant; 31.94 million tons are emitted from fuel combustion (Appendix Table 23). Gasoline vehicles emit 22.60%, while diesel vehicles are the major contributors at 77.40%. Among the gasoline vehicles, motorcycles are the major emitters of CO2. Among the five vehicle classes, buses emit 14.47 million tons, representing 45.31% of total emissions; of this, large buses contribute 14.14%, minibuses 20.77%, and microbuses 22.93%. Among the other vehicle classes, HDVs, passenger cars, and motorcycles contribute 20.92%, 17.45%, and 13.68%, respectively. Heavy trucks and private cars are the top contributors within their respective vehicle classes.

With a 55% share, diesel vehicles are the main contributors of N2O (Appendix Table 24). Of this, heavy trucks contribute 20%, large buses 19%, minibuses 16%, and jeeps 12%. Among gasoline vehicles, the shares of motorcycles and private cars are 53% and 47%, respectively. Most of this pollution occurs on urban roads (54.59%), while rural roads and highways contribute 24.61% and 20.80%, respectively. Total emissions of CH4 are 3406.18 tons (Appendix Table 25).
Most of the CH4 emissions occur on urban roads (63.30%), whereas CH4 emissions from rural roads and highways are much lower (19.48% and 17.22%, respectively). Diesel vehicles make up 57% of total CH4 pollution, while gasoline vehicles make up 43%. Among the five vehicle classes, motorcycles are responsible for 33.82% of total emissions, while buses and HDVs contribute 32.36% and 20.44%, respectively. After motorcycles, the next highest individual emitters are heavy trucks (13.26%).

As our corrective gasoline tax (US$0.94 per gallon) is lower than the initial tax rate (US$1.25 per gallon), a reduction in the tax might increase VKT; it is therefore not practical to lower the existing tax. Hence, we fix the gasoline tax at US$1.25 per gallon and change the diesel tax rate to US$1.46 per gallon. We measure the response with Eq. (4) and estimate the emissions using COPERT IV. Compared with BAU, this combination of taxes reduces NOx by 7.31%, PM2.5 by 6.44%, PM10 by 6.38%, CO2 by 5.77%, N2O by 4.13%, and CH4 by 4.22% (see Figs. 3, 4, 5). Using the global warming potentials (GWP) given by Forster et al. (2007) (CO2 = 1, CH4 = 25, and N2O = 298), we find that corrective fuel taxes reduce GHG emissions by 5.77% (1.85 million tons of CO2 equivalent). Similarly, we calculate emissions at diesel taxes of US$0.96, US$1.25, US$1.35, and US$1.50 per gallon (see Table 11). Applying the GWP factors, we find that the corresponding GHG emission reductions are 4.27%, 5.18%, 5.48%, and 5.86%, respectively. We plot the percentage GHG emission reduction against the corrective diesel tax (see Fig. 6). From this relationship, we can calculate the required fuel taxes: US$1.25 per gallon of gasoline and US$1.20 per gallon of diesel reduce GHG emissions by 5%, which is Bangladesh's INDC goal.

Conclusion

This research presents a methodology for estimating corrective fuel taxes to meet the INDC goal of a developing country. It also illustrates a methodology for compiling estimates of the parameters required to assess corrective fuel taxes for a country that has no previous study and has different vehicle fleets, emission standards, and driving conditions. Though we use Bangladesh as a case study, our methodology can be used to set corrective fuel taxes to achieve INDC goals elsewhere. This study estimates the elasticity of VKT with respect to the consumer fuel price while calculating corrective fuel taxes; the previous literature on corrective fuel taxes imports this parameter from studies of developed countries, but responses to fuel taxes can differ across countries.

We emphasize that this study does not investigate the fiscal impact of the fuel tax, particularly the impact on the labor market, nor the distributional impact of the fuel tax. Although part of the fuel tax revenue could be redistributed back to citizens through an alternative taxation system (Zou et al. 2014), we recommend that fuel tax revenues be used to fund the "road maintenance fund" in Bangladesh to derive the maximum welfare gain. Our study suggests that congestion and accident externalities are the two major components of the fuel tax in Bangladesh. Thus, we recommend that the major share of the road maintenance fund be used to improve existing road conditions, traffic management, and road safety. Note that the fuel tax should not be confused with toll fees collected for new roads or expressways: the former includes the road damage cost, while the latter includes capital investment and operation costs, which are not part of a fuel tax.
They also differ in their objectives: fuel taxes are aimed at curbing externalities, while toll fees are aimed at maximizing profits and recovering investment costs. Therefore, in this study, we only support the improvement of existing road conditions by utilizing fuel tax revenue. Our findings open a new window for policymakers to reduce GHG emissions from road transport and to maximize social welfare gains. In this study, our focus has been on emissions from fuel combustion. In future research, emissions from lubricants could also be included; a lifecycle analysis of fuels and lubricants could have further policy implications. However, despite some caveats, our methodology can be applied in other countries to achieve INDC goals.
THE FUZZY SACREDNESS AURA AND CYBER-BASED DA'WAH: Redrawing Karamah of Tuan Guru within the Belief System of Sasak Muslims

This article examines Sasak Muslims' belief in the tuan guru's karamah (charisma) in the midst of the emerging trend of cyber da'wah. The findings illustrate that Sasak Muslims have repeatedly recognized that a tuan guru's charisma is an important consideration for their respect for and obedience to the 'ulamā' and is of great significance for da'wah. Accordingly, they have remained in favour of lived da'wah practices over online ones. Two facets frame the underlying rationale of these findings. First, not all Islamic rituals and da'wah activities can be transformed into the digital realm, because da'wah involves a complexity of concepts and meanings embedded within Islamic rituals that is difficult to reproduce in an internet medium. Second, da'wah through digital platforms leaves people with a less auratic experience than they normally have through in-person da'wah activities.

Introduction

Although it requires Islamic preachers to have both substantive and methodological competencies in da'wah (proselytising), possessing supernatural qualities that endow them with an aura of sacredness, such as karamah (miracles) and kesekten (extraordinary power), is considered a requirement among Indonesian traditionalist Muslims in particular. Indeed, in the historiography of Indonesian Islam, da'wah actors such as the wali sanga, kiai, tuan guru, ajengan, and 'ulamā' (Muslim scholars) were often associated with various sacred stories. Examples can be cited here. Kiai Abbas of Buntet, for example, conducted da'wah by teaching kanoragan (supernatural power), kekebalan (invulnerability), and kesekten (extraordinary power) to his disciples to fight against the invaders (penjajah) after gaining religious knowledge in Mecca and at several Islamic boarding schools in Java. Also, within the Nahdlatul Ulama (NU) tradition, Kiai As'ad Syamsul Arifin of Situbondo, East Java, was believed to be a walī of Allah who was given supernatural powers and karamah. Tuan Guru Haji (TGH) Muhammad Zainuddin Abdul Majid and TGH Mutawali were among the early generation of tuan guru in Lombok who were considered to resemble walī, possessing mystical power.

However, in the last two decades, Indonesian Muslims have seen a new trend in religious proselytising. Many 'ulamā' have created virtual da'wah channels with a broad reach to convey the messages of Islam to audiences, for instance, Ustādh Abdul Shomad, Ustādh Adi Hidayat, Buya Yahya, Gus Nur (Sugi Nur Raharja), Felix Siauw, Hanan Attaki, and Gus Baha (Bahauddin Nur Salim). Some virtual da'wah pages have become popular, followed by more than a million users on social media channels, such as Aswaja Yellow Book, Ideological Da'wah, Kaffah Islamic Community, Smart Da'wah, Ngaji Online Aswaja, Fodamara, and Akhyar TV Indonesia. The virtual da'wah channels also often broadcast religious rituals and preaching through livestreamed video with direct feedback. This contemporary situation certainly affects the formation of the religious knowledge system, culture, and authority that is subsumed under traditional or charismatic authority.
This is because the transmission and circulation of Islamic texts through a variety of cybermedia in the Muslim world have expanded the number of people who can directly conduct a dialogue with texts once mediated by world-renowned mystic figures such as Abu Hamid al-Ghazali, M. Abdul Qadir al-Jilani, Najmuddin al-Kubra, Abu Hasan al-Syadzili, and Abdullah al-Syattari, figures to whom the mystic traditions that exist in Indonesia up to the present day still refer: the Qadiriyah tariqat refers to M. Abdul Qadir al-Jilani, the Kubrawiyah tariqat to Najmuddin al-Kubra, the Syadziliyah tariqat to Abu Hasan al-Syadzili, and the Syattariyah to Abdullah al-Syattari. Additionally, the influence of the Sufi tradition in Indonesia can be traced back to the works of Acehnese Sufis such as Hamzah Fansuri, Shamsuddin Pasai, and Nuruddin Arraniry of Samudra Pasai. Much of the literature on Sufism notes that Sufi traditions often deal with the walī. They are regarded as those who have achieved perfect knowledge of God (ma'rifa), have obtained divine power from God (quwwah ilāhiyya), and possess karamah because of their proximity to Allah. In this tradition, karamah is commonly taken as an indication of a person's sainthood, as well as a means of helping Islamic saints achieve the purposes of proselytising (da'wah), namely attracting people to convert to Islam. This can be seen from some of the footprints of the wali sanga in doing da'wah toward the Javanese kings. The manuscript Piwulang of Sunan Kalijaga, for instance, represented the saint's teachings, containing 60 mantras (spells) taught to the Sultan of the Kraton (Sultan's Palace) of Pajang, the second Sinuhun Kangjěng Pangeran Pugěr of the Kraton of Mantaram, the third and fourth Susuhunan Pakubuwono (rulers of Surakarta), and Kanjeng Panembahan Senapati of the Mataram Kingdom.

Notes: 8 Samsul Munir Amin, Sejarah Peradaban Islam (Jakarta: Amzah, 2015), 31-113. 9 Harun Nasution, Falsafah dan Mistisisme dalam Islam (Jakarta: Bulan Bintang, 1992), 56. 10 The word walī is taken from the Arabic wala, with the plural form awliya, which means qaraba, that is, near. See Louis Ma'luf al-Abb, al-Munjid (Beirut: Dar al-Fikr, 1937), 1061. According to the Javanese tradition, walī is a title for those regarded as sacred. See Ensiklopedi Indonesia (Bandung: Ikhtiar Baru van Hoeve, n.d.), 1417. 11 Karamah denotes superhuman and supernatural powers given to a saint, spoken of by Muslim lexicographers as khāriq al-'ādat (things contrary to custom), which make him very different from society at large. The notion of karamah differs from that of mu'jizah: a mu'jizah (plural mu'jizāt) is attributed only to the Prophet, while a karamah is attributed to saints. See A.J. Wensinck, "Mu'djiza," Encyclopaedia of Islam 7 (Leiden: E.J. Brill, 1993), 295. In terms of karāmat al-awliyā' (marvels of the walī), in the Sufi tradition it is a mark of honour confirming him in piety and God-fearing reverence; these karamah include prediction of the future, interpretation of the secrets of the heart, and miraculous happenings. See L. Gardet, "Karâma," Encyclopaedia of Islam 4 (Leiden: E.J. Brill, 1978), 615. 12 Muslih Al-Maraqi, an-Nur al-Burhani fi Tarjamah al-Lujain ad-Dani (Semarang: Toha Putra, 1962).
These mantras concerned dzikr (remembrance): to become invulnerable and powerful; to be able to jump over rivers, to disappear (not be seen by anyone), to withstand sharp objects, and to fly when surrounded by enemies; to obtain a blessing from God and respect from others; and to pray when meeting enemies or wild animals, and so forth.

Another example is a popular account of Kiai Abbas of Buntet, who was considered to have kedigdayaan (extraordinary power) during his lifetime. Kiai Abbas was not only well known as an established scholar ('ālim); he was also regarded as being able to work miracles (karamah). When the battle of Surabaya occurred on 10 November 1945, Kiai Abbas attacked the enemies by pelting them with handfuls of sand, which made them run away. It was also told that Kiai Abbas could travel from Cirebon to Surabaya with just one stamp of the foot (hentakan kaki).

For da'wah purposes, karamah al-awliya' not only plays an important role in converting people to Islam; it is also useful for leading bad individuals (orang nakal) to the right path of God. A study conducted by Jamhari concerning the veneration of walī and holy persons in the tarekat Istighasthat Ihsaniyyat revealed that, through gemblengan (a form of invulnerability practice by which the leader transferred spiritual power), followers became invulnerable to sharp objects, fire, and bullets. The leader of the order, Gus Abdul Latif, succeeded in leading bad people (orang nakal) back to the path of God.

The Social Reality of Tuan Guru and Islamic Proselytising

In the eastern Indonesian island of Lombok, tuan guru are akin to both 'ulamā' (spiritual leaders) and ustādh (Muslim religious teachers), much like kiai in Java. A tuan guru is conceived of as a knowledgeable person from whom people learn Islamic teachings and as a person who is believed to inherit prophecy (waratha al-anbiya), which enables him to give a divine blessing (barakah). Research has illustrated that, generally, tuan guru are spiritually and intellectually very different from people at large. Fahrurrozi's study indicated five criteria for those who are regarded as tuan guru: first, having broad Islamic knowledge; second, having expertise in reading classical texts of the various Islamic disciplines (kitab kuning); third, having heredity; fourth, having great piety; and fifth, having a number of santri. Meanwhile, Jamaludin proposed four conditions for those who are regarded as tuan guru: first, they have extensive knowledge of Islam and its various teachings, because the tuan guru is the main interpreter of religious texts amongst the Sasak Muslims; second, they have studied with established scholars (alim-ulama) in the Middle East, especially the Haramain, namely Mecca and Medina; third, they have obtained recognition from Muslims; and fourth, they possess karamah (marvels). In this sense, a tuan guru is not only considered a knowledgeable and pious person, but also a sacred individual endowed with karamah. It is a truism to contend that tuan guru are regarded as having karamah (marvels), as is often found within Sasak folklore and local Muslim stories up to the present day.

Notes: 18 Jeremy J. Kingsley, "Peace-makers or Peace-Breakers? Provincial Elections and Religious Leadership in Lombok, Indonesia," Indonesia 93 (2012), 53-82. 19 Ibid. 20 Fahrurrozi, "Tuan Guru antara Idealitas Normatif dengan Realitas Sosial pada Masyarakat Lombok," Jurnal Penelitian Keislaman 7, 1 (2010), 221-250. 21 Jamaluddin, "Islam Sasak: Sejarah Sosial Keagamaan Masyarakat Sasak Abad XVI-XIX," Jurnal Indo-Islamika 1, 1 (2011), 63-88. 22 Putrawan, Dekramatisasi Tuan Guru, 284-295.
One marvel-related tale concerns Tuan Guru Muhammad Rais. It is told that Tuan Guru Muhammad Rais received a message in a dream to retrieve a book (kitab) from the segare (sea). Tuan Guru Rais then went fishing with a villager from Tanjung Karang on the sea near Loang Baloq. Improbable as it seemed, the villager caught many fish, with the fish constantly taking his bait, while Tuan Guru Rais waited patiently for his own bait to be taken. After a few minutes, Rais lifted his fishing rod, and what he had caught was, apparently, a book. Soon after that, he said goodbye to the villager and went home, because he had at last obtained what he had been waiting for, for a long time.

Extraordinary actions were also attributed to Tuan Guru Achmad, well known as Tuan Guru Ret Tet Tet. He was often regarded as doing things contrary to custom (khāriq al-'ādah) because he was believed to be able to miraculously disappear. Once, when one of the villagers passed away and was buried in Sekarbela, Tuan Guru Ret Tet Tet came late. One of his disciples asked him why he was late, and Tuan Guru Ret Tet Tet answered: "I have just come from a consolation visit (ta'ziyah) in Baghdad." Another sacred story about Tuan Guru Haji (TGH) Ret Tet Tet is of when he disguised himself as a beggar. At that time, people often found TGH Ret Tet Tet at the bus station and the traditional market of Cakra Negara, taking some of the sellers' goods and then giving them to other people in need, saying "sedekah-sedekah (donation)." Once, in Central Lombok, he disguised himself as a beggar, begging from every citizen he met in a village there. None of them gave money, and after the beggar left, the village caught fire.

In addition, as another supernatural phenomenon (khāriq al-'ādah), like kiai in Java, tuan guru are also said to possess ilmu laduni, that is, knowledge acquired without learning. Sacred beliefs of this sort have remained alive amongst Sasak Muslims up to the present day. Some participants in this study firmly believe that tuan guru have supernatural powers and karamah. For example, Rahmatulloh, one of the interviewees, told how he once came to ask Tuan Guru Haji Ridwanullah, the leader of the Islamic Boarding School Darussalam Beremi, Desa Darussalam, in Gerung District, West Lombok, for help in delaying the rain when he was holding a hajatan (ceremony): "When I held a nyongkolan ceremony using Gendang Belek on a Sunday, I went to see Tuan Guru Haji Ridwanullah."

Usually, a tuan guru requires his disciples to perform wirid (quotes from specified parts of the Quran to be read after prayer) in order to meet their needs or to obtain kesakten (extraordinary power) through the process of ijāzah (direct authentication and certification from a kyai or tuan guru to his disciples). Sulhan Ahmad said that he had received an ijāzah from Tuan Guru Mawe to perform a certain wirid to gain peaceful protection from witchcraft and black magic forces, as well as from robbers and thieves: "I feel at peace and am no longer scared of robbers or thieves, and I am not afraid anymore of the black magic force that used to haunt me all the time." Further, Ahmad noted that those who receive an ijāzah from a tuan guru should take note of any specific instructions pertaining to the way the wirid is to be implemented.
Consequently, the disciples become more consistent in implementing the Islamic teachings and performing the five daily prayers on time. Similarly, Musthofa and Abdul Majid acknowledged that they attempted to implement the tuan guru's spiritual advice in order to obtain blessing and reposefulness from God: "When we received an amalan (a specified practice to perform) and bacaan (quotes from specified parts of the Quran) required to be recited after a certain prayer, for instance, we had to implement it exactly as instructed. Otherwise, we would not obtain reposefulness from Allah." Amak Muhaidi, a religious leader as well as the head of a dusun (an administrative division below the village), also said that he would feel insecure in the face of threats of witchcraft, black magic, and thieves if he did not hold kekebalan (invulnerability power) from a tuan guru. Likewise, Iskandar, a disciple of Tuan Guru Abd Rauf in Central Lombok, admitted that he consistently recited the wirid given by his tuan guru in order to obtain blessing, reposefulness, and protection from Allah. Accordingly, it would be easier to bring bad people (orang nakal) to the right path of Allah if a tuan guru or kiai had invulnerability and the ability to win battles over witchcraft. Moreover, to obtain blessing, those who are devoted to a tuan guru will keep the tuan guru's photo in wallets and cars, to be carried everywhere, or display it on the walls of their houses as a talisman. In this regard, possessing such qualities, the tuan guru holds a respected position and religious legitimacy for conducting da'wah in Sasak society, wherein religious observance, piety, and supernatural ability are of great significance. Drawing on Weber's perspective, it is understandable that many tuan guru and kiai hold charismatic authority, since they are deemed holders of divine authority, including through karamah.

Authenticity and Sanctity: Sasak Muslims' Response to Virtual Da'wah

Although the proliferation of institutional online feeds for da'wah purposes has changed the way many Muslims learn about Islam, it does not seem to fit Sasak Muslims completely, in the sense that not all of them are interested in using social media, YouTube channels in particular, to learn about Islam. Some participants in this study asserted that da'wah means conveying religious teaching through verbal sermons and a goodly model (uswatun ḥasanah); therefore, people need not only the normative message (tauṣiyah) but also exemplary approaches (uswah). YouTube channels and other social media, however, are regarded as offering only the audio-visual content of religious preaching, without conveying the preachers' real-life conduct as a goodly model. "Being physically present in offline da'wah allows believers to know Muslim clerics' moral qualities and magnetic personalities." Lalu Mahdan Badiaktar, one of the interviewees, stated that physical da'wah not only sets a good model for the followers but enables them to feel the sacredness aura of the tuan guru, which is likely to produce a deep impression: "Physical da'wah provides a qualitatively different religious experience and generates an emotional impression compared to online da'wah. It is like attending a live music concert compared to watching music on a TV screen. Direct interaction with musicians in a live music performance conveys a deep impression and emotional satisfaction."
This is consistent with Abdur Rozaki's finding that physical appearance, such as a large body, a loud voice, and sharp eyes, is one of the sources of a kiai's charismatic power, regarded as charisma given by God, alongside cultivated qualities such as extensive religious knowledge, sincerity, and integrity. 37 Like kiai in Java, tuan guru occupy an elite, respectable position and social standing within the Lombok Muslim community. The data from this study showed that the position of tuan guru in the midst of digital Sasak Muslims has remained highly respected, and that they are still the ultimate source of guidance in socio-religious matters. However, Sasak Muslim society generally recognizes that the tuan guru structure is hierarchical, distinguishing the "sacred tuan guru", those regarded as having karamah (marvels or supernatural qualities), from the "ordinary tuan guru", who only teach the Quran and convey religious teachings to the believers. In regard to this reality, all interviewees of this study repeatedly acknowledged the tuan guru's charisma, blessing, and karamah as the disciples' main grounds for respect and obedience. Some come to see a sacred tuan guru not only to attend religious gatherings but also for particular purposes, such as asking for prayers to ward off disasters, to avoid black magic, to obtain bottled mineral water for healing, to determine good times, days, and months before undertaking important activities, or to ask for moral advice. 38 Therefore, "Lombok people respect the sacred tuan guru more than those who are not sacred. The sacred tuan guru also gain wider recognition more easily." 39 Consequently, the disciples are also convinced that the prayer delivered by the "sacred tuan guru" is more efficacious than that of the "unsacred" one, as is the quality of their spiritual healing. "This is because the sacred tuan guru has stronger senses and supernatural power from God." 40 Accordingly, Sasak Muslims undoubtedly do not view popular and famous Islamic preachers on social media or YouTube channels as respectfully as they view tuan guru. Even though some participants of this study utilize a hybridized space, that is, both physical and digital da'wah, they admit that they only access the channels of credible da'wah institutions or the personal channels of famous preachers. Hasanaen Djuaini, for example, tends to access the personal channels of dā'ī who are considered to have expertise and extensive knowledge in religious matters, such as ustādz Abdul Somad, ustādz Adi Hidayat, and Quraish Shihab. Yet he is not interested in accessing YouTube channels posted by ordinary preachers. However, according to the interviewees, nothing can compete with the sacred tuan guru. Although famous asātidh (religious preachers) on YouTube channels have many followers throughout the world, Hasanaen Djuaini and Abdul Aziz point out that tuan guru, the sacred tuan guru in particular, still hold a more prestigious position among Sasak Muslims. Aziz illustrates this by an analogy with the shaman, who receives an honored stratum within society: "the more knowledgeable, pious, and sacred a tuan guru is, the more he is respected." 41 It is worth noting that another reason why Sasak Muslims favor physical da'wah is associated with karamah-related ritual matters, such as the ijāzah-giving ritual (a direct authentication and certification from a tuan guru to his disciples).
It is firmly believed that tuan guru have karamah, which is likely to confer blessing and virtue. As mentioned above, some people come to see them for particular interests, such as wishing for a sacred blessing (ngalap berkah), seeking fortune, or avoiding catastrophes and black magic. In this regard, there are specific requirements and rituals that should be observed in the process of ijāzah-giving. Usually, the tuan guru shakes the disciple's hand while giving a specific amalan (a specified practice to perform) and wirid (specified passages of the Quran to be recited). Different purposes may have different rituals, as suggested in what follows: "It would be difficult to conduct the process of ijāzah-giving through social media or a YouTube channel, and therefore not all religious activities can be conducted through online applications." 42 In addition, all interviewees favor conventional da'wah over online da'wah for social reasons, in the sense that they can keep in touch directly with neighbors, family, and other fellow Muslims, as well as with the tuan guru: "A conventional religious gathering (pengajian) benefits me not only by increasing my religious knowledge, but also by giving me the chance to keep in touch with neighbors, family, and society. I can even ask about and discuss religion-related issues directly, which is difficult to do on a YouTube channel." 43 The Fading of Sacred Realms in Virtual Da'wah The proliferation of online religious proselytizing today has become a blessing and a great source of religious reference for many believers. This can be seen in the many academic studies that have revealed the presence of various religious institutions' sites and religious leaders' personal accounts on social media, which transmit essential information about Islamic teachings and tradition 44 and which are also aimed at spreading particular Islamic streams' schools of thought and interests. 45 It is worth noting that many religious institutions have also webcast religious sermons, ritual activities, and sacred places via livestreaming. 46 Correspondingly, academics have asserted that new media have the unintended effect of weakening charismatic authority that has been routinized through traditional forms of authority, 47 because new media are democratic in terms of accessibility and availability for religious debate, which is likely to radicalize a traditional culture of disputatious learning and argumentation. Accordingly, "the authority and legitimacy of information (in new media) cannot be subsumed under traditional or charismatic authority." 48 However, this study reports a different finding regarding the persistence of charismatic authority amid the proliferation of cyber da'wah: Sasak Muslims in Lombok keep believing in the tuan guru's karamah and, consequently, in their ability to transmit barakah.
38 Interview with Hasanaen Djuaini, June 30, 2020. 39 Interview with Abdul Aziz Fahmi, June 29, 2020. 40 Interview with Lalu Mahdan Badiaktar, July 1, 2020. 41 Interview with Abdul Aziz Fahmi, June 29, 2020. 44 …, 1-30. For a description of social media used by religious leaders in Indonesia for da'wah purposes, see Dindin Solahudin and Moch Fakhruroji, "Internet and Islamic Learning Practices in Indonesia: Social Media, Religious Populism, and Religious Authority," Religions (2019), 1-12.
45 Asep's study found that the Salafist stream has used websites to promote the ideology of the Salafi movement, to attack those considered to be against their school of teachings, to spread their viewpoints on contemporary issues, and to build networks at both the local and global level as a strategy to maintain solidarity amongst followers. See Asep Muhammad Iqbal, "Agama dan Adopsi Media Baru: Penggunaan Internet oleh Gerakan Salafisme di Indonesia," Jurnal Komunikasi Indonesia 2, 2 (2013), 81-85. 46 New media such as websites, Facebook pages, blogs, and Twitter have also become a medium of voice for the Shiite community in Indonesia to express their presence, movement, and thoughts, as well as for the Sunnis and Wahabi supporters. Rachmah (HCII, 2007), 74-82. 47 Turner, Religious Authority and the New Media, 120. 48 Weber's conception of charisma is based on the personal appeal of an exceptional figure, such as an innate supernatural quality, but is transformed through disciples' participation. Turner, Religious Authority and the New Media, 120.
Although religious information and sermons as well as ritual activities have mushroomed on the Internet, they have no corrosive impact on the charismatic authority of tuan guru, as the religious experiences and rituals webcast on YouTube channels and other cybernetic spaces cannot adequately transmit the aura of holiness. This finding can be examined through a discussion of Walter Benjamin's theory of aura, grounded in his study of photography. Benjamin asserts that reproduced artworks, including live-streamed video, diminish and weaken the aura of the sacred: "Photography is reproductive by definition, and this interposes distance (an experiential discrepancy) between the authentic, inspiring original and its imitation, which diminishes its meaning." 49 Benjamin's assertion of diminished meaning in replicated art is also evident in the findings of this study. The effort of the many religious institutions and stakeholders who help religious leaders reproduce religious rituals and preaching through live-streamed video on YouTube, websites, web television, and other cybernetic spaces is not meaningful to distant and widely dispersed participants, namely the Sasak Muslim community. Consequently, they are in favor of lived da'wah practice (offline da'wah), although some utilize a hybrid space, that is, both offline and online da'wah. It is true that the emergence of cyber da'wah enables the rise of new religious authorities, promoted by lay preachers, what Hoesterey calls "innovative claims of religious authority", 50 and by new preachers who base their appeal on performing like traditionalist scholars and Arab saints in order to look charismatic. However, these new types are actually different and cannot break the mold of charismatic authority, whose source lies in the innate and exceptional qualities of an individual's personality. 51 Those who are convinced by this individual-based conception of charismatic authority do not base religious authority merely on oratorical competence and the appeal of appearance, but also on devotional quality, including karamah. In this sense, reproducing miraculous power and auratic religious experience through cybernetic space would be very complicated, as there are many dimensions of religious experience and concepts that are interdependent and inseparable. The tradition of kiai visitation (sowan) here provides a good example.
Sowan is a muwajjaha (face-to-face meeting) between a kiai or tuan guru and disciples in physical presence, usually intended to obtain barakah, and is therefore well known as tabarrukan. 52 In this regard, the muwajjaha must take place physically and cannot be done online, because the physical presence (muwajjaha) in the sowan tradition indicates one's expression of love and total obedience to the kiai as heirs of the prophets (waratha al-anbiyā'); it even signifies one's piety. Besides sowan, in the sense of muwajjaha as a face-to-face meeting, being a central act of piety, it also signifies an essential relationship between the venerated figure and the followers, in its capacity to inspire awe and to evoke an auratic presence of the sacred. Accordingly, it would be hard to accommodate these interdependent concepts embedded within the sowan tradition in reproduced digital religious artworks (live-streamed video and web-television programs) in a way that could signify the authentic meaning of sowan, that is, total obedience and piety, and awaken the auratic religious experience of tabarrukan. It is arguable that virtual religious objects, which are comparable with photography, diminish the aura of holiness and blur the authentic meaning of religious rituals. Another example is the ritual of giving ijāzah, which means the transmission of knowledge from kiai to disciples as an indication of permission, authorization, and authentication. Basically, it is an extension of the tradition of isnad (a continuous historical transmission) for hadith. Subsequently, this ijāzah tradition has developed not only for hadith but also for any kind of Islamic knowledge transmission, such as history, law, theology, and mysticism, 53 and even for the transmission of ritual practices. 54 Similar to the tradition of sowan, the ritual of giving ijāzah also involves an interrelation between one concept and others with regard to meaning-making and auratic experience. Those who create videos of religious preaching or ritual activities aired on websites, YouTube, and web TV, for instance, have to make sure that the auratic experience embedded within ijāzah, the physical personal meeting, wirid, and riyāḍah practice can be adequately reproduced in virtual reality and new media forms. However, the participants of this study admitted that they accessed famous religious proselytizers (dā'ī) on YouTube only to deepen their Islamic knowledge through the videos of speeches provided, not to engage in online ritual, prayer, worship, or meditation, what Helland called online religion. 55 The authenticity of religious practices has become a major reason for them not to embrace such online religion activities. This notion is consistent with Helland's observation that "ritual activities and charismatic authority do not always transfer well into the Internet medium." 56 In other words, it is arguable that the virtual reality of da'wah provided in new media forms has so far remained unable to approximate the auratic experience, the sacred reality, and the charismatic authority, including the karamah, of the tuan guru.
52 Arif Jamhari, "The Majlis Dhikr of Indonesia: Exposition of Some Aspects of Ritual Practices," Journal of Indonesian Islam 3, 1 (2009), 144. See also "Sowan dan Mencium Tangan Kiai," https://islam.nu.or.id/post/read/39396/sowan-dan-mencium-tangan-kyai.
Conclusion In the midst of the emerging trend of cyber-based da'wah, in which most religious institutions and preachers use social media and other cybernetic spaces for religious propagation, this study has shown that not all Islamic rituals and da'wah activities can be transferred to the digital realm, in particular the Islamic traditions that engage people's belief in supernatural qualities, charismatic authority, and karamah-related matters. The complexity of the concepts and meanings embedded within a religious ritual is the main barrier to transferring lived ritual practices to the Internet medium. Indeed, the genuine meaning of tabarrukan, constructed from interconnected concepts (muwajjaha, prayer, total obedience, and piety in the tradition of sowan), for instance, would be hard to find in reproduced digital creative artworks. Likewise, as in the tradition of sowan, in the case of the ijāzah-giving ritual the problem of the disappearance of the aura of sacredness has become a major reason for the Sasak Muslims' refusal to bring this religious ritual online. They will not feel, when the ritual takes place online, the auratic experience that they normally have through being physically present. It is worth noting that religious rituals which involve the disciples' belief in the karamah and charismatic authority of established Muslim scholars offer a mode of da'wah that enables believers to obtain spiritual experiences which are absent in virtual da'wah. []
2021-08-04T00:04:17.662Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "8555df6f60bba5c009c65b4190341b1dde0fd0c5", "oa_license": "CCBYSA", "oa_url": "http://jiis.uinsby.ac.id/index.php/JIIs/article/download/1756/pdf_73", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "16a9fcffcd82760a8c44761f4fc6a2e5feb79657", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
53379133
pes2o/s2orc
v3-fos-license
Effect of Titanium Addition on Behavior of Medium Carbon Steel This work aims at investigating the influence of titanium addition on the behavior of medium carbon steel. Three types of medium carbon steel with different titanium contents and one titanium-free reference steel were produced in a 100 kg induction furnace. Titanium addition was increased up to 0.230%. The produced steels were forged at a start temperature of 1150°C. The forging process was finished at temperatures of 900°C, 975°C, and 1050°C. Microstructure examination and hardness measurements were carried out on the forged steels. Mechanical properties and impact measurements were carried out on the quenched and tempered steels. Ti addition was found to have a significant influence on grain refinement and on the increase of the ferrite/pearlite ratio. It was also observed that the grain size decreases as the finishing temperature of the forging process decreases. Both Ti addition and lowering the finishing forging temperature have a positive effect on hardness. In addition, the results indicated that the addition of titanium has a significant effect on the mechanical properties and toughness. Introduction The mechanical properties of steels are strongly connected to their microstructure obtained after heat treatments, which are generally performed in order to achieve good hardness and/or tensile strength with sufficient ductility [1]. Microalloyed steels have been developed for many years and are widely used in modern industry. It is well known that microalloyed high-strength low-alloy steels are essentially carbon low-alloy steels that contain small additions of alloying elements such as Nb, V, or Ti [2][3][4][5][6]. These elements act as solute atoms or precipitates to suppress the recrystallization and grain growth of austenite. Microalloying of carbon steels is widely used in practice. At the same time, little attention has been given to medium carbon steels containing vanadium, niobium, and titanium. The resulting fine-grained microstructure can markedly enhance the mechanical properties of steels. In addition, multi-microalloying can lead to the formation of carbide and nitride particles, which can further influence the mechanical properties of steels [7][8][9][10]. Due to the high price of niobium and vanadium, the development of titanium-microalloyed steels has recently attracted more attention. Steels alloyed with titanium alone, especially the formation mechanism of TiC precipitation during different processes and its effects, have seldom been studied in carbon steel. This work aims at investigating the influence of titanium additions on the mechanical properties of medium carbon steels, and also at investigating the effect of the finishing forging temperature on grain refinement. Experimental Four steels with different titanium contents were melted in an induction furnace of 100 kg capacity and cast in sand molds. Complete chemical analysis was carried out for all cast steels. Ingots of 90 mm diameter were hot forged to about 40 mm square. The ingots were reheated to 1200°C and held for 30 min before forging started. The starting forging temperature was 1150°C, while the forging process was finished at temperatures of 900°C, 975°C, and 1050°C for the four steels. Microstructure examination and hardness measurements were carried out after the forging process. Ferrite/pearlite ratios of the forged steels were measured using the Paxit software. The forged bars finished at 975°C were reheated to 960°C for 1 hour and water-quenched, followed by tempering at 260°C for 30 min.
The mechanical properties of the tempered steels were measured. Standard Charpy V-notch specimens (10 mm × 10 mm × 55 mm, notch depth 2 mm) were prepared to investigate the influence of titanium addition on the impact toughness of the tempered steels at 25°C. Results The melted steels have the chemical compositions given in Table 1. The microstructures of the steels forged at finishing temperatures of 900°C, 975°C, and 1050°C are given in Figures 1-3, respectively. It is clear that the grain size decreases as the titanium content increases at a finishing forging temperature of 900°C, as illustrated in Figure 1. This can be attributed to titanium forming titanium carbides and/or titanium nitrides on the austenite grains, which retard grain growth; hence the ferrite grain size decreases. The same results were observed at finishing forging temperatures of 975°C and 1050°C, as shown in Figures 2 and 3. However, it was observed that for the same steel the grain size increases with increasing finishing forging temperature. This can be attributed to the grain growth of the austenite phase during the forging process, and hence the ferrite and pearlite grain sizes increase. It may also be due to the solubility of titanium in the austenitic phase increasing with temperature, leading to less TiC formation to suppress the growth of the austenitic grains. The microstructure examination shows that the ferrite/pearlite ratio increases with increasing titanium content. This can be attributed to the formation of titanium carbides, which decreases the free carbon and consequently increases the ferrite/pearlite ratio. However, the finishing forging temperature has little influence on the ferrite/pearlite ratios, as illustrated in Table 2. It is clear from Table 2 that the ferrite percentage increases from 10% to 12% as the titanium content increases from 0.0015% to 0.2300%, while there is little change in the ferrite/pearlite ratio resulting from the finishing forging temperature. The results show that the hardness increases with increasing titanium content at each finishing forging temperature, and increases with decreasing finishing forging temperature, as illustrated in Figure 4. Therefore, it is clear that the main parameter controlling hardness is grain refinement. The titanium content has a great influence on the mechanical properties of the steels: it was noticed that the yield and ultimate tensile strengths increase with increasing titanium content while elongation decreases, as given in Figure 5. This can be attributed to the grain-refining effect of titanium. Impact toughness is important for evaluating the resistance of steel against crack initiation and rupture. In general, there is significant evidence that the addition of small amounts of alloying elements (such as V, Ti, and Ni) improves toughness [11][12]. Titanium is used to retard grain growth and thus improve toughness, as is clear from Figure 6. The relation between the solubility products of carbides and nitrides as a function of temperature illustrated by Aronsson [13] is given in Figure 7. From this figure, it is clear that the solubility product of TiC in the austenitic phase increases with increasing temperature from 770°C to 1050°C. From the results given in this figure, the solubility of titanium over this temperature range can be calculated, and it is given in Table 3. From this table, it is clear that the solubility of titanium increases with increasing temperature. Consequently, the amount of TiC present in the austenitic phase decreases with increasing temperature.
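The kind of solubility calculation described here can be sketched numerically. The following Python snippet is illustrative only: the relation log10([Ti][C]) = 2.75 − 7000/T (concentrations in wt%, T in kelvin) is an assumed textbook-style solubility product for TiC in austenite, not the one read off Aronsson's figure, and the steel composition used is a placeholder.

```python
from scipy.optimize import brentq

# Assumed illustrative solubility product of TiC in austenite
# (wt%, T in kelvin); not taken from Aronsson's figure.
def ks_tic(T):
    return 10.0 ** (2.75 - 7000.0 / T)

AT_TI, AT_C = 47.87, 12.01  # atomic weights of Ti and C

def dissolved_ti(ti_total, c_total, T):
    """Dissolved Ti (wt%) once TiC precipitation equilibrates at T (kelvin)."""
    ks = ks_tic(T)
    # Mass balance: each wt% of Ti tied up as TiC also removes AT_C/AT_TI wt% C.
    def residual(ti_sol):
        c_sol = c_total - (ti_total - ti_sol) * (AT_C / AT_TI)
        return ti_sol * c_sol - ks
    if residual(ti_total) <= 0.0:  # solubility limit not reached: all Ti dissolved
        return ti_total
    return brentq(residual, 0.0, ti_total)

for T_c in (770, 800, 900, 975, 1050):
    ti_sol = dissolved_ti(0.230, 0.40, T_c + 273.15)
    print(f"{T_c:4d} C: {ti_sol:.4f} wt% Ti in solution")
```

With the assumed coefficients, the dissolved Ti fraction rises monotonically with temperature, mirroring the trend reported in Table 3.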
Since TiC has a direct effect on the ferrite grain size that forms, the ferrite grain size decreases with decreasing temperature. The actual atomic mole fraction of Ti and its solubility product for the four steels are given in Table 4. Figure 7 and Table 3 show that the formation of TiC is a function of the temperature of the austenitic phase. The solubility of Ti decreases with decreasing temperature, as indicated by the decreasing solubility and solubility product values. The solubility of Ti at 770°C equals zero, which means that at this temperature any Ti content must form TiC. At 800°C the solubility is 0.27978 at.%. This means that any titanium content below 0.2797 at.% is present in solution and starts to form TiC as the temperature decreases. Grain refinement is therefore controlled during the cooling stage above and near Ac3 and by the Ti and C contents. Conclusions This study shows that grain growth is restricted as the finishing forging temperature is decreased (from 1050°C to 900°C), while the Ti content has a positive effect on grain refinement at high temperature (up to 1050°C); the latter needs more investigation in future work. The addition of titanium promotes grain refinement and hence has a positive effect on hardness, mechanical properties, and impact toughness. Grain refinement increases as the finishing forging temperature decreases from 1050°C to 900°C, passing through 975°C. The ferrite/pearlite ratio increases as the titanium content increases from 0.0015% to 0.2300%. The finishing forging temperature has little influence on the ferrite/pearlite ratio. The precipitation of TiC takes place at temperatures above and close to Ac3.
2018-10-23T17:41:32.723Z
2012-11-20T00:00:00.000
{ "year": 2012, "sha1": "0130f814ed325a90a74d49d7294b3de15d6a4e01", "oa_license": null, "oa_url": "https://doi.org/10.4236/jmmce.2012.1111118", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "0130f814ed325a90a74d49d7294b3de15d6a4e01", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
251745322
pes2o/s2orc
v3-fos-license
Development of a Model Predicting the Outcome of In Vitro Fertilization Cycles by a Robust Decision Tree Method Introduction Infertility is a worldwide problem. To evaluate the outcome of in vitro fertilization (IVF) treatment for infertility, many indicators need to be considered and the relations among those indicators need to be studied. Objectives To construct an IVF prediction model by a robust decision tree method and to find important factors and their interrelations. Methods IVF and intracytoplasmic sperm injection (ICSI) cycles between January 2010 and December 2020 in a women's hospital were collected. The comprehensive evaluation and examination of the patients, the specific therapy strategy, and the outcome of treatment were recorded. Variables were selected through the significance of a 1-way analysis between the clinical pregnant group and the nonpregnant group and were then discretized. A gradient boosting decision tree (GBDT) was then used to construct the model, which computes a score predicting the rate of clinical pregnancy. Result Thirty-eight variables with significant differences were selected for binning, and thirty of them, in which the pregnancy rate varied across categories, were chosen to construct the model. The final score computed by the model predicted the clinical pregnancy rate well, with the Area Under Curve (AUC) value reaching 0.704 and the consistency reaching 98.1%. Number of two-pronuclear embryos (2PN), age of women, AMH level, number of oocytes retrieved, and endometrial thickness were important factors related to IVF outcome. Moreover, some interrelations among factors were found from the model, which may assist clinicians in making decisions. Conclusion This study constructed a model predicting the outcome of IVF cycles through a robust decision tree method and achieved satisfactory prediction performance. Important factors related to IVF outcome and some interrelations among factors were found. INTRODUCTION Infertility is a worldwide problem that affects tens of millions of families. It is estimated that 1 in 6 couples in the world experiences infertility (1). The development of assisted reproductive technology (ART) has brought hope to couples with infertility. There were over 280,000 ART cycles and over 70,000 liveborn infants in the US according to the US Centers for Disease Control and Prevention 2017 Fertility Clinic Success Rates Report (2). China, too, has made great efforts to treat infertility: the total number of ART cycles has exceeded 1 million and the number of infants born has exceeded 300,000 per year (3). However, despite the large number of ART cycles, the clinical pregnancy rate per embryo transfer was as low as 30% (4). Many factors have been identified as playing important roles in the outcome of IVF, such as age, body mass index (BMI), hormone levels, and ovarian reserve capacity (5)(6)(7)(8)(9). These various factors make it complex to evaluate the outcome of IVF cycles before implantation, and performing IVF cycles is also a financial burden for patients with infertility. Therefore, there is a pressing motivation to improve the way in which these factors are integrated to predict the outcome of IVF cycles. Nowadays, data-driven analysis based on machine learning has been increasingly applied to medical problems, where success mainly depends on feature engineering and model selection.
Feature engineering, including feature selection and feature extraction, can better mine information from the original data and improve data quality, which is especially indispensable for multi-variable data (10). Among these techniques, binning is used to discretize continuous variables; it increases the stability and robustness of the data by smoothing out meaningless fluctuations of a feature and by limiting the influence of extreme values (11). Besides, discretization can introduce the nonlinear characteristics of variables into a linear model, improving its expressive power and fit. For instance, metagenomic binning, which aims to classify the contigs obtained from different organisms according to species, has been widely used in metagenomic research (12,13). For model selection, the complexity and accuracy of the model need to be weighed based on the characteristics of the data. For medical data, a simple logistic regression cannot handle nonlinear data, although it has good interpretability. On the other hand, complex deep learning models with high accuracy are hard to apply in practice due to their lack of explainability and their demand for large data samples. Decision tree methods, especially the Gradient Boosting Decision Tree (GBDT), which balances accuracy and complexity, are more suitable for our problem (14). Therefore, the aim of this study was to construct an IVF prediction model that estimates the chance of successful implantation by GBDT based on discretized medical variables, and to determine valuable factors affecting the outcome of IVF treatment. Sample IVF and ICSI cycles between January 2010 and December 2020 in Women's Hospital, School of Medicine, Zhejiang University were screened. Patients with all causes of infertility were included. Exclusion criteria were 1) patients with an egg or sperm donor; 2) patients with preimplantation genetic diagnosis (PGT-M) or screening (PGT-A); 3) patients with frozen embryo transfer; 4) patients without treatment outcomes; and 5) incorrect information or important data missing in the database. A total of 49413 cycles were collected, and 37062 were included in our analysis. The comprehensive diagnostic evaluation of the patients' infertility, their specific therapy strategy, and the outcome of treatment were recorded. Samples were divided into the clinical pregnant group and the nonpregnant group according to whether the patient achieved clinical pregnancy, which requires evidence from both HCG and ultrasonography after in vitro fertilization and embryo transfer. The study was approved by the Institutional Review Board at Zhejiang University (IRB-2020 0235-R) and was carried out in accordance with the Helsinki Declaration. Study Design The flowchart of this study is shown in Figure 1. First, obvious outliers were removed according to the possible ranges of the indicators. Second, indicators were selected through the significance of a 1-way analysis between the clinical pregnant group and the nonpregnant group. Third, the selected indicators were discretized. Then, a complete model was constructed by GBDT and the performance of the model was validated. Finally, important factors related to IVF outcome and some clinically meaningful interrelations among indicators were identified from the model.
Variable Selection by One-way Analysis Between the clinical pregnant group and the nonpregnant group, we analyzed, in relation to the outcome of clinical pregnancy, the basic characteristics of the infertile couples (including age, BMI, type of infertility, history of pregnancy and delivery, causes and duration of infertility, basal FSH level, basal LH level, and antral follicle count, etc.), the factors in the controlled ovarian stimulation (COS) procedures (including COS protocol, duration of gonadotropin administration and gonadotropin dosage, number of large follicles, number of retrieved oocytes, and serum hormone levels on the HCG trigger day, etc.), and the factors during the embryo transfer procedures (including the number of two-pronuclear (2PN) embryos, embryo culture time, number of embryos transferred, endometrial thickness, and endometrial type, etc.). A total of 47 variables were included (Appendix Table 1). After one-way analysis of variance, only variables with a p value < 0.05 were selected for the binning procedure. Binning Procedure As mentioned before, the binning procedure is helpful for increasing the stability and robustness of the model, and the discretized data can show the relationship between variables and outcome clearly. In our work, factors were discretized by the chi-merge algorithm. The chi-merge algorithm is a bottom-up discretization method which bins a variable by merging the adjacent intervals with the smallest chi-square value. The specific steps are given in Appendix 1 in the Supplement. Model Construction A decision tree is a flowchart-like structure in which each internal node represents a "test" on an input variable, and each sample goes through one path from root to leaf to get a prediction. GBDT is an ensemble machine learning technique for regression and classification problems which uses decision trees as weak prediction models and ensembles them to produce a strong prediction model. Models are built in a stage-wise fashion and generalized by the optimization of an arbitrary differentiable loss function. Discretizing the continuous input variables to unique values can dramatically accelerate the training process; this is called histogram-based gradient boosting decision tree (hist-GBDT) (15). Because of the binning procedure, our work can be regarded as a specific hist-GBDT form. Details about GBDT are given in Appendix 3 in the Supplement. Model Validation Patients were scored according to the model. A receiver operating characteristic (ROC) curve was constructed and the area under the ROC curve (AUC) was computed to validate the pregnancy prediction performance. Also, ten-fold cross validation, a method used for verifying the stability of the model, was performed. The specific steps are given in Appendix 3 in the Supplement. Statistical Analysis In the process of data analysis, 1-way analysis of variance of the variables was performed with R statistical software version 3.6.2 and the package "multcomp 1.4". Binning, model construction, and validation were conducted with Python 3.7.1 and the packages "numpy 1.21", "scipy 1.6", and "pandas 1.2". Two-tailed tests with p values < 0.05 for significance were used. Variable Screening A total of 37062 cycles were included and divided into two groups according to whether the patients were clinically pregnant after in vitro fertilization and embryo transfer. Among them, 16823 samples were in the pregnant group, with an average age of 30.73 years, while 20239 samples were in the non-pregnant group, with an average age of 32.01 years.
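Returning to the binning step described above, the chi-merge idea can be sketched in a few lines of Python (the language the study used for binning). This is a simplified illustration, assuming a binary outcome coded 0/1 and a fixed target of five bins rather than the exact stopping rule of Appendix 1; the function names are ours.

```python
import numpy as np

def chi2_pair(a, b):
    """Chi-square statistic for two adjacent bins, each given as [n_neg, n_pos]."""
    table = np.array([a, b], dtype=float)
    rows, cols, total = table.sum(axis=1), table.sum(axis=0), table.sum()
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / total
            if expected > 0:
                chi2 += (table[i, j] - expected) ** 2 / expected
    return chi2

def chimerge(x, y, max_bins=5):
    """Bottom-up discretization: repeatedly merge the adjacent pair of bins
    with the smallest chi-square value until max_bins bins remain."""
    x, y = np.asarray(x, dtype=float), np.asarray(y)
    cuts = sorted(set(x))                      # one initial bin per unique value
    counts = [[int(np.sum((x == v) & (y == 0))),
               int(np.sum((x == v) & (y == 1)))] for v in cuts]
    while len(counts) > max_bins:
        chis = [chi2_pair(counts[i], counts[i + 1]) for i in range(len(counts) - 1)]
        k = int(np.argmin(chis))
        counts[k] = [counts[k][0] + counts[k + 1][0],
                     counts[k][1] + counts[k + 1][1]]
        del counts[k + 1]
        del cuts[k + 1]
    return cuts                                # left edges of the final bins

# Example usage: edges = chimerge(ages, pregnant, max_bins=5)
```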
Detailed features of the two groups are shown in Supplementary Table 1. Thirty-eight variables were found to have significant differences between the clinical pregnant group and the nonpregnant group by one-way analysis of variance, including demographic characteristics like the age of the couple and the duration of infertility. Ovarian reserve capacity indicators such as antral follicle count (AFC) and anti-Mullerian hormone (AMH) level were also different between the two groups. Besides, some factors during the IVF-ET procedure showed great discrepancy, for instance, treatment strategy, number of oocytes retrieved, and number of 2PN. Binning Procedure and Variable Selection During the binning, continuous variables were discretized and transformed into five grades by the chi-merge algorithm. For example, with increasing age, the clinical pregnancy rate dropped from 49.0% to 9.0%. Besides, some variables did not show a linear correlation with the pregnancy rate, such as the embryo culture time: the success rate reached 57.1% at 3 days but was lower when the embryo culture time was less than 3 days or exceeded 4 days, which demonstrated the necessity of using GBDT, a nonlinear model. Finally, 30 variables whose pregnancy rate varied across categories were selected for GBDT-based model construction by the criterion "maximum clinical pregnancy rate difference between groups > 5%", including 5 categorical variables and 25 binned continuous variables (Figure 2). Model Construction The aforementioned variables were used to build the final comprehensive evaluation model, and a score predicting the rate of clinical pregnancy was computed by the GBDT algorithm. Figure 3 shows the internal construction of our model. The GBDT is the sum of many similar decision trees; the left side of Figure 3 shows one specific tree. Each sample (patient) reaches a leaf node in each decision tree and gets a corresponding value. The score of a sample, which predicts the success of clinical pregnancy, is calculated by summing the values over all trees. Besides, the importance of the features was evaluated by the model, as displayed in Figure 4. Age of women, number of 2PN, AMH level, number of oocytes retrieved, and endometrial thickness were the most important variables related to the outcome of the cycle. The association between the clinical pregnancy rate and the final score is shown in Figure 5. The score was calculated by our model and represents the predicted pregnancy rate, while the left y-axis in Figure 5 shows the actual pregnancy rate of the sample. The yellow line shows a positive correlation between the score and the pregnancy rate, indicating that our model is effective. The clinical pregnancy rate reached 84.6% for those whose score was higher than 0.83, while it was only 12.8% for those whose score was lower than 0.2. Also, sixty percent of patients scored between 0.4 and 0.6, indicating that the distribution of the score was in accord with the actual situation. Besides, the number of patients in each score section was more than 700, ensuring that the clinical pregnancy rate in each section was stable rather than an extreme value averaged over a small sample. Model Validation By dividing patients into high and low possibility of clinical pregnancy using different score thresholds, an ROC curve was constructed; the AUC value was 0.704 (95% CI, 0.699-0.709), as shown in Figure 6, demonstrating that our model had good prediction performance.
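Stepping back to the model-construction step: since the model is described as a form of histogram-based GBDT, a close off-the-shelf analogue is scikit-learn's HistGradientBoostingClassifier (stable since scikit-learn 1.0). The paper itself lists only numpy, scipy, and pandas, so this library choice is our assumption, and the synthetic X and y below merely stand in for the 30 discretized predictors and the binary pregnancy outcome; the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the real data: 30 five-grade predictors, binary outcome.
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(1000, 30)).astype(float)
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = HistGradientBoostingClassifier(max_iter=200, learning_rate=0.1)
model.fit(X_train, y_train)

# The "score" in the paper's sense: every sample traverses each tree to a leaf,
# the leaf values are summed, and a sigmoid maps the sum to a probability.
scores = model.predict_proba(X_test)[:, 1]

# Feature importance, analogous to Figure 4 (permutation-based here).
importance = permutation_importance(model, X_test, y_test,
                                    n_repeats=5, random_state=0)
```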
Also, ten-fold cross validation showed that the classification consistency of the model reached 98.1% (95% CI, 0.973-0.988), showing that our model construction method had excellent stability. Clinical Revelation in the Decision Tree Aside from the association between single variables and the clinical pregnancy rate, the decision tree analysis also provided information about interactions among variables which may help in the clinic. For example, from the tree in Figure 3, patients aged over 35 years may be harmed by repeatedly performing IVF: as the number of cycles increased, the value reflecting the clinical pregnancy rate dropped considerably. However, patients younger than 35 years may be less affected, since the value hardly changed. More specifically, the original data revealed that when the number of cycles increased from 1 to more than 5, the clinical pregnancy rate decreased by only 6.6% in the younger group, while it declined by 13.6% in the older group (Supplementary Table 2). Similarly, other trees yielded interesting discoveries. For instance, we found that women with lower AMH may benefit more from the short protocol, which is consistent with expert consensus (Supplementary Table 3). These findings may assist clinicians in making efficient and accurate judgments on the condition of patients with infertility. DISCUSSION Infertility has attracted unprecedented attention worldwide. Although IVF and ICSI are the recommended and effective treatments for infertile couples, nearly half of the couples who undergo IVF remain childless, even after multiple treatment cycles (16). Since the treatment is expensive and invasive, couples with fertility problems need a complete assessment combining various factors and should be informed about their chances of success before making a decision. Over the past decades, many IVF prediction models have been developed to evaluate individual treatment outcomes, but few of them are clinically practical, due to poor predictive ability and simplistic statistical methods (17). Machine learning, which provides means to interpret data and construct prediction models, has been increasingly applied to clinical issues, especially in complex multi-variable systems (18)(19)(20). Recently, machine learning algorithms have been used in the reproductive field; for example, Khosravi et al. managed to select, using visual images of the embryos, the highest quality embryos that may lead to a viable pregnancy (21). In terms of predicting IVF outcomes, it has been suggested that machine learning algorithms based on age, BMI, and clinical data have an advantage over classic logistic regression, and several models have been constructed with different algorithms (22)(23)(24). However, the quality of these models was limited by small sample sizes, inadequate statistical methodology, and lack of internal or external validation (17). For the first time, we built a model predicting the outcome of IVF cycles by innovatively combining GBDT and discretization in a large sample. After selecting the variables with significant differences between the clinical pregnant group and the nonpregnant group, continuous variables were transformed into five grades and assigned separate weights by the binning algorithm.
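Continuing the previous sketch, the two validation steps the paper reports (ROC/AUC and ten-fold cross validation) could be reproduced along the following lines. Interpreting "classification consistency" as cross-validated accuracy is our assumption; the paper does not spell out the metric.

```python
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import StratifiedKFold, cross_val_score

# ROC analysis on the held-out scores from the previous sketch.
auc = roc_auc_score(y_test, scores)            # the paper reports AUC = 0.704
fpr, tpr, thresholds = roc_curve(y_test, scores)

# Ten-fold cross validation of the whole model-building step.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
accuracy = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"AUC = {auc:.3f}, 10-fold accuracy = {accuracy.mean():.3f} "
      f"+/- {accuracy.std():.3f}")
```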
The clinical pregnancy rate varied across categories after discretization, supporting binning as an appropriate and excellent method for processing clinical data with broad ranges and noisy fluctuations. The model was then constructed by GBDT, a novel machine learning algorithm, by which the importance of the features and the total score evaluating the chance of pregnancy were determined. The association between the pregnancy rate and the final score was strong, and their trends were highly consistent. The clinical pregnancy rate reached 83.9% for those whose score was higher than 0.8, while it was only 11.2% for those whose score was lower than 0.2. Moreover, the distribution of the score was similar to a normal distribution, which indicated that our model reflected the actual situation. The AUC value of the model was 0.704, indicating that our model performed well. Also, ten-fold cross validation showed that the classification consistency of the model reached 98.1%, which means our model construction method also had excellent stability. The five most important features related to the outcome of treatment were age of women, number of 2PN, AMH level, number of oocytes retrieved, and endometrial thickness. Female age is one of the strongest factors in predicting pregnancy chances after IVF and has been identified by nearly all studies as an important predictor (25,26). The underlying biological explanation includes the diminished ovarian reserve and the decrease in both quantity and quality of oocytes with aging (27). In our study, women younger than 34 years had the highest chance of pregnancy, with a total rate of 49.0%. Our study also showed that the number of 2PN is a significant predictor. Although both 1PN- and 0PN-derived blastocysts can be used for embryo transfer, 2PN blastocysts indicate a greater chance of success (28). The positive correlation between AMH level and the pregnancy rate found in our study is consistent with prior studies (29,30). AMH represents the ovarian follicular pool and has long been used as a marker of ovarian reserve. Besides, a positive association between an increasing number of oocytes retrieved and pregnancy chances after IVF has been reported by many researchers (26). We found that once the number of oocytes exceeded five, the clinical pregnancy rate reached 40% in our cohort. Similar to other research, which defined 7 mm as the cut-off for endometrial thickness, we found that females with an endometrial thickness of less than 8 mm may have a negative outcome after IVF (31). Apart from the above variables, others such as basal FSH, method of fertilization (IVF or ICSI), and number of embryos transferred were also related to IVF outcomes (32). Our model also provided information about interactions among indicators which may help in the clinic. From specific decision trees in the model, we drew several interesting conclusions. For example, our study revealed that multiple IVF cycles may cause harm to women over 35 years old but hardly influenced women younger than 35. Thus, clinicians need to be more cautious when treating patients aged over 35, because the failure of one cycle may accumulate and affect the next cycle. Also, for women whose age exceeded 35, the number of oocytes retrieved had a great effect on the clinical pregnancy rate, which increased considerably with a rising number of oocytes. However, this impact disappeared in young patients.
This suggests that finding ways to increase the number of oocytes retrieved may be a good way to raise the clinical pregnancy rate for older women, but may be less effective for the young. Besides, women with lower levels of AMH may benefit more from the short protocol when choosing a COS protocol, which is consistent with expert consensus. These clinical revelations drawn from our model may in turn assist clinicians in making decisions on the complex conditions of patients with infertility. Our research provides a new method for IVF data processing and achieved satisfactory prediction performance. This approach can be applied to various clinical problems with multiple variables where classic statistical and analysis methods may not work. However, our study had several limitations. Firstly, although the sample size was large, there were missing data for certain variables, which may obscure some findings. Secondly, although the ROC and ten-fold cross validation results showed good internal validation, our study lacked external validation due to the heterogeneity of data across clinical centers. There may also be regional and population limitations in applying our model: the binning was based on sampled data with a specific ethnic and characteristic distribution which is not universal. Therefore, when performing external validation, our binning and model may need to be adjusted if the distribution of sample characteristics changes significantly. Thirdly, our model is suitable for patients with satisfactory uterine conditions who are ready for an IVF cycle; the effects of uterine abnormalities were not considered in this paper. In the future, we will continue to work on the practical application of the model and to investigate the indicators' relationships with IVF outcome to better guide clinical treatment. Furthermore, we will apply our method to specific types of infertility (for example, unexplained infertility) to explore the impact of, and relationships between, variables on IVF outcome. CONCLUSION This study constructed a model predicting the outcome of IVF cycles combining binning and the GBDT algorithm and achieved satisfactory prediction performance. Number of 2PN, age of women, AMH level, number of oocytes retrieved, and endometrial thickness were important factors in relation to IVF outcome, and some interactions between factors were found. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Medical Ethics Committee of Women's Hospital, Zhejiang University. The ethics committee waived the requirement of written informed consent for participation.
2022-08-24T13:19:28.467Z
2022-08-24T00:00:00.000
{ "year": 2022, "sha1": "1265466265d38613136625db91c5e1f4e9f38d51", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "1265466265d38613136625db91c5e1f4e9f38d51", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
46700331
pes2o/s2orc
v3-fos-license
Screening of heavy quark free energies at finite temperature and non-zero baryon chemical potential We analyze the dependence of heavy quark free energies on the baryon chemical potential (µ_b) in 2-flavour QCD using improved (p4) staggered fermions with a bare quark mass of m̂/T = 0.4. We do so by performing a 6th order Taylor expansion in the chemical potential, which circumvents the sign problem. The Taylor expansion coefficients of colour singlet and colour averaged free energies are calculated, and from these the expansion coefficients for the corresponding screening masses are determined. We find that for small µ_b the free energies of a static quark anti-quark pair decrease in a medium with a net excess of quarks, and that screening is well described by a screening mass which increases with increasing µ_b. The µ_b-dependent corrections to the screening masses are well described by perturbation theory for T > 2T_c. In particular, we find for all temperatures above T_c that the expansion coefficients for singlet and colour averaged screening masses differ by a factor 2. I. INTRODUCTION Numerical studies of QCD have provided quite detailed information about the properties of matter at high temperature and vanishing net baryon density [1]. In particular, the screening of static quark anti-quark sources at large distances and their renormalization have been analyzed in quite some detail [2][3][4]. Compared to this, our knowledge of the dependence of the equation of state and of screening at non-zero baryon number density, or equivalently, non-zero baryon chemical potential (µ_b), is rather limited. The µ_b-dependence of the QCD partition function [5] and the HTL-resummation of the pressure [6] have been evaluated only recently. Although in leading order of high temperature perturbation theory the dependence of the Debye screening mass on µ_b is well known [7], neither the temperature range for the validity of this perturbative result nor the generic features of the screening of heavy quark free energies at non-zero µ_b have so far been analyzed with non-perturbative methods in the vicinity of the QCD transition 1 . Recently, studies of the equation of state have successfully been extended to non-vanishing baryon chemical potential using Taylor expansions [9] around µ_b = 0 as well as reweighting techniques [10] and imaginary chemical potentials [11]. We will use here the former approach to analyze the screening of static quark anti-quark sources at non-zero µ_b, i.e. in a medium with a non-vanishing net quark density. We evaluate the Taylor expansion coefficients for correlation functions of heavy quark anti-quark pairs and deduce from them the expansion coefficients for the screening mass. For this work we analyzed gauge field configurations generated with a p4-improved staggered fermion action with N_f = 2 degenerate quark flavours. We used the same data sample that was recently generated by the Bielefeld-Swansea collaboration for the analysis of the equation of state [9]. This sample consists of 1000 up to 4000 gauge field configurations for each of several bare gauge couplings below and above the transition temperature. The lattice size is 16^3 × 4 and the bare quark mass, m̂/T = 0.4, corresponds to a pion mass of about 770 MeV. In addition, we generated 1000 configurations at T = 3T_c and 1600 at T = 4T_c to check for the approach to the high temperature perturbative regime. In addition to gauge invariant colour averaged free energies we have also analyzed colour singlet free energies.
To do so, all gauge configurations have been transformed to Coulomb gauge before evaluating the Polyakov loop correlation functions. This paper is organized as follows. In the next section we discuss the general setup for calculating Taylor expansions of heavy quark free energies. Our numerical results for singlet and colour averaged free energies are presented in section III. In section IV we discuss the determination of µ_b-dependent corrections to the screening masses from the free energies. Our conclusions are given in section V. In an appendix we give detailed expressions for the Taylor expansion coefficients of purely gluonic observables.
Fig. 1 (caption fragment): … is matched to the T = 0 heavy quark potential at small distances (a).
II. TAYLOR EXPANSION OF HEAVY QUARK FREE ENERGIES A heavy (static) quark Q at site x is represented by the Polyakov loop, which is an SU(3) matrix. A heavy anti-quark Q̄ is described by the corresponding hermitian conjugate matrix. The free energy of a QQ̄-pair separated by a distance r is then calculated from the expectation value of the correlation function of L(0) and L^†(r), where r points to a site at distance r from 0. The dependence on the baryon chemical potential µ_b, or the quark chemical potential µ ≡ µ_b/3, enters solely through the fermion determinant, det M({U_ρ(x)}, µ, m̂), with m̂ denoting the bare quark mass. In order to avoid the sign problem, which arises because Re(det M) is not positive definite for µ ≠ 0, Taylor expansions in the quark chemical potential are used. This allows us to perform our simulations at zero chemical potential, although it restricts us to small chemical potentials. A purely gluonic observable O, like the Polyakov loop L(x) or a corresponding correlation function, does not explicitly depend on the quark chemical potential; it is calculated in terms of the link variables U_ρ(x) of the gauge field configuration, which do not explicitly depend on µ. Any µ-dependence of the expectation value ⟨O⟩_µ thus arises from the µ-dependence of the Boltzmann weights in the QCD partition function, i.e. the µ-dependence of the fermion determinant. Therefore we can apply the same method that was used for the power series expansion of the equation of state; expanding the fermion determinant in powers of µ leads to a power series for our purely gluonic observable, where ⟨O⟩ denotes the expectation value of O evaluated at vanishing chemical potential. We consider observables like the colour averaged and singlet QQ̄-correlation functions, where the sum refers to all sites x, y with |x − y| = r and N is the number of these x, y pairs. As O^{av,1} and the corresponding expectation values are strictly real for every single gauge field configuration, the odd orders in the expansion vanish, as was argued in [12]. For observables like the Polyakov loop itself, or static quark-quark correlations like TrL(0)TrL(r), we also have to take into account the odd orders, which are in general non-vanishing. In the appendix we give explicit formulas for calculating the expansion coefficients of an arbitrary gluonic observable up to sixth order in µ/T. We have used these to evaluate the first three non-vanishing expansion coefficients of the purely real observables considered here.
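As a small numerical illustration of how such a truncated expansion is used once the coefficients have been measured, the sketch below evaluates a µ-even series up to sixth order; the coefficient values are hypothetical, not lattice results.

```python
import numpy as np

def taylor_even(mu_over_T, coeffs):
    """Evaluate O(mu/T) = sum_k c_k (mu/T)^(2k) for a mu-even observable,
    truncated at the order set by len(coeffs)."""
    x = np.asarray(mu_over_T, dtype=float)
    return sum(c * x ** (2 * k) for k, c in enumerate(coeffs))

# Hypothetical expansion coefficients [c0, c2, c4, c6] of a free energy
# at fixed r and T (in units of T); not actual lattice data.
coeffs = [1.00, -0.12, 0.03, -0.01]
print(taylor_even(np.linspace(0.0, 0.8, 5), coeffs))
```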
We extract the colour averaged free energy of a static quark anti-quark pair from the Polyakov loop correlation function, and the colour singlet free energy from the corresponding singlet correlation function evaluated in Coulomb gauge. We renormalize the Polyakov loop as described in [13], such that at short distances and vanishing chemical potential the singlet free energy F^1_{QQ̄}(r, T, 0) matches the zero temperature heavy quark potential. This also fixes the renormalization of the Polyakov loop and all its correlation functions. In particular, this also renormalizes the colour averaged free energies. As all our calculations have been performed on lattices with temporal extent N_τ = 4, the smallest available distance at which this matching could be performed is r_0 = 1/(4T). In order to determine the expansion coefficients of the colour averaged (av) and singlet (1) free energies,
F^x_{QQ̄}(r, T, µ) = Σ_{n=0}^∞ f^x_{QQ̄,n}(r, T) (µ/T)^n ,
with x = av and 1, we apply (A18) to the corresponding Polyakov loop correlation functions. We again note that these are strictly real on every gauge field configuration and thus have an expansion in even powers of µ/T. Explicit formulas used for the calculation of the expansion coefficients f^{av}_{QQ̄,n}(r, T) and f^1_{QQ̄,n}(r, T) are given in the appendix. III. NUMERICAL RESULTS ON QQ̄ FREE ENERGIES In Figs. 1-4 we show the leading and higher order expansion coefficients up to sixth order in µ/T, expressed in units of the string tension 2 . We do not include data for all temperature values analyzed by us, because for T ≫ T_c they have very small absolute values and for T < T_c they suffer from large statistical errors and are still consistent with zero. The leading order results, f^{av}_{QQ̄,0}(r, T) and f^1_{QQ̄,0}(r, T), are consistent with previous analyses of static quark anti-quark free energies performed in 2-flavour QCD at µ = 0 on the same data set [3]. For the second order expansion coefficients we display separately results below (Fig. 2(a), (b)) and above (Fig. 2(c), (d)) the µ = 0 transition temperature, T_c. As can be seen, the second order expansion coefficients are always negative and increase in magnitude in the vicinity of T_c. The corresponding results for the 4th and 6th order expansion coefficients are shown in Fig. 3 and Fig. 4, respectively. Here we only show results above T_c; below T_c the expansion coefficients are consistent with zero within errors even at rather short distances, and the errors grow large for rT ≥ 1. We note that all expansion coefficients shown in Figs. 2 to 4 vanish at small distances. This shows that a quark anti-quark pair is not affected by the surrounding medium if its size becomes small. This observation also justifies our procedure of renormalizing the Polyakov loop by matching the µ = 0 singlet free energy to the T = 0 heavy quark potential; the renormalization constant is independent of µ. Also close to T_c, where the µ-dependence of the free energies is strongest, the absolute values of the fourth and sixth order expansion coefficients are of the same order as or smaller than the second order expansion coefficient. Therefore the 4th and 6th order contributions rapidly become negligible for µ/T < 1. Although the errors are large for the higher order expansion coefficients, they show that at high temperature the 2nd and 4th order expansion coefficients are opposite in sign, f^{av,1}_{QQ̄,2}(r, T) < 0 and f^{av,1}_{QQ̄,4}(r, T) > 0. This is consistent with the expectation that at high temperature the asymptotic large distance value of the heavy quark free energy is proportional to the value of the Debye mass [15].
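This alternating-sign pattern can be checked symbolically by expanding the perturbative Debye mass in µ/T. A short sketch, assuming the standard leading-order result m_D² = g²T²(N_c/3 + N_f/6 + N_f µ²/(2π²T²)):

```python
import sympy as sp

x, g, T = sp.symbols("x g T", positive=True)   # x = mu/T
Nc, Nf = sp.Integer(3), sp.Integer(2)

mD0 = g * T * sp.sqrt(Nc / 3 + Nf / 6)         # Debye mass at mu = 0
mD = mD0 * sp.sqrt(1 + 3 * Nf * x**2 / (sp.pi**2 * (2 * Nc + Nf)))

# Expanding in x = mu/T exposes the alternating signs of the coefficients:
# 2/sqrt(3) * (1 + 3x^2/(8 pi^2) - 9x^4/(128 pi^4) + 27x^6/(1024 pi^6) + ...)
print(sp.series(mD / (g * T), x, 0, 8).removeO())
```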
In this limit one obtains alternating signs for the expansion coefficients of the heavy quark free energies when one expands the perturbative Debye mass [7],

$$m_D(T,\mu) \;=\; m_{D,0}(T)\,\sqrt{1 + \frac{3N_f}{2N_c+N_f}\left(\frac{\mu}{\pi T}\right)^{2}}\,, \qquad (7)$$

with m_{D,0}(T) = g(T) T √(N_c/3 + N_f/6) denoting the Debye mass at vanishing baryon chemical potential. Although the statistical significance of our results for f^{av,1}_QQ,6(r, T) drops rapidly with increasing temperature, this pattern of alternating signs seems to hold also at sixth order, at least for temperatures T ≳ 1.05 T_c.

Except for temperatures close to the transition temperature, the asymptotic behaviour of the free energies is reached at distances rT ≳ 1.5. We determined their large distance values by taking the weighted average of the values at the five largest distances. The results are shown in Fig. 5. We note that |f^{av,1}_QQ,2(∞, T)| has a pronounced peak at T_c. This also holds for |f^{av,1}_QQ,2(r, T)| evaluated at any fixed distance r. In fact, f^{av,1}_QQ,2(r, T) is proportional to the second derivative of a partition function that includes a pair of static sources, QQ̄. It thus shows the characteristic properties of a susceptibility in the vicinity of a (phase) transition point. Fig. 5 also shows that at large distances, within the statistical errors of our analysis, the expansion coefficients for the colour averaged and singlet free energies approach identical values, f^av_QQ,n(∞, T) = f^1_QQ,n(∞, T). This has been noted before at µ = 0 and suggests that at large distances, e.g. for rT ≳ 1.5, the quark and anti-quark sources are screened independently of each other; their relative colour orientation thus becomes irrelevant.

Including all terms up to sixth order, we calculated the singlet and colour averaged free energies in the range from µ/T = 0.0 up to 0.8. Results for the colour singlet free energies evaluated at a few values of the temperature are shown in Fig. 6. Similar results hold for the colour averaged free energies. The free energies decrease relative to their values at µ/T = 0 for all temperatures above and below T_c. At small distances the curves always agree within errors. With increasing distance a gap opens up, which reflects the decrease in free energy at non-zero µ. As indicated by the asymptotic values f^{av,1}_QQ,2(∞, T), which give the dominant µ-dependent contribution at large distances, the medium effects are largest close to the transition temperature and become smaller with increasing temperature.

IV. SCREENING MASSES

For temperatures above T_c and large distances r, the heavy quark free energies are expected to be screened,

$$\Delta F^{av,1}_{Q\bar{Q}}(r,T,\mu) \;=\; F^{av,1}_{Q\bar{Q}}(\infty,T,\mu) - F^{av,1}_{Q\bar{Q}}(r,T,\mu) \;\sim\; \frac{1}{r^{n}}\,e^{-m^{av,1}(T,\mu)\,r}, \qquad (10)$$

with n = 1, 2 for the singlet and colour averaged free energies, respectively. In the infinite distance limit we can thus extract the screening masses. We use this as our starting point to derive a Taylor expansion for the screening masses: expanding the logarithm in eq. (11) in powers of µ/T, it is obvious that the screening masses are also even functions of µ/T, m^{av,1}(T, µ) = Σ_n m^{av,1}_n(T) (µ/T)^n with n even. To analyze the approach of the various expansion coefficients to their large distance limits, we introduce effective masses m^x_eff,n(r, T), with x = av, 1. In the limit of large distances these relations define the expansion coefficients of the colour averaged and singlet screening masses, m^{av,1}_n(T) = lim_{r→∞} m^{av,1}_eff,n(r, T). As will become obvious in the following, the effective masses defined above show only little r-dependence. They are thus well suited for a determination of the µ-dependent corrections to the screening masses.
This is not the case for the leading order, µ-independent contribution. In order to determine m^1_0(T) we use an ansatz for the large distance behaviour of the singlet free energy motivated by leading order (one gluon exchange) high temperature perturbation theory,

$$f^1_{Q\bar{Q},0}(r,T) \;=\; f^1_{Q\bar{Q},0}(\infty,T) \;-\; \frac{4}{3}\,\frac{\alpha_0(T)}{r}\,e^{-m^1_0(T)\,r}.$$

We fit our data to this equation using α_0(T) and m^1_0(T) as fit parameters, where f^1_QQ,0(∞, T) is determined as described in the previous section. We choose the same fitting procedure as in [3], namely averaging the results obtained from five fit windows with left borders between rT = 0.8 and rT = 1.0 and right border at rT = 1.73. While the above ansatz is known to describe the large distance behaviour of the colour singlet free energy rather well, the sub-leading power-like corrections are much more difficult to control in the case of the colour averaged free energy. For this reason we analyze here only the leading order contribution to the singlet screening mass.

Results for effective masses in the singlet channel are shown in Fig. 7 as a function of rT for one value of the temperature. As can be seen, the asymptotic value is indeed reached quickly, before the errors grow large at larger distances; the expansion coefficients m^1_n(T) are thus well determined from the plateau values of these ratios. Similar results hold in the colour averaged channel. We found the left border of the plateau to lie between rT = 0.48 close to T_c and rT = 0.23 for T > 1.15 T_c. Results for the various expansion coefficients are shown in Fig. 8.

This figure shows that at high temperatures the µ-dependent corrections to the screening mass of the colour averaged free energies, m^av(T, µ), are twice as large as those of the (Debye) screening mass in the singlet channel, m^1(T, µ). This is expected from perturbation theory, which suggests that the leading order contribution to the colour singlet free energy is given by one gluon exchange, while the colour averaged free energy is dominated by two gluon exchange. Using resummed gluon propagators then leads to screening masses that differ by a factor of 2, m^av(T, µ) = 2 m^1(T, µ) (16). Our results suggest that this relation holds already close to T_c (Fig. 8). We thus have no evidence for large contributions from the magnetic sector, which is expected to dominate the screening in the colour averaged channel at asymptotically large temperatures [16] and which would violate the simple relation given in eq. (16).

In order to compare the expansion coefficients with perturbation theory, we need to specify the running coupling g(T). Following [3], we use the next-to-leading order perturbative result for the running of the coupling with temperature, but allow for a free overall scale factor A. We thus fit our data on the T-dependence of the leading order (µ = 0) screening mass with the ansatz

$$m^1_0(T) \;=\; A\,g(T)\,T\,\sqrt{N_c/3 + N_f/6}\,, \qquad (17)$$

with the 2-loop perturbative running coupling, where we use T_c/Λ_MS̄ = 0.77(21) and the renormalization scale 2πT, as in [3]. Fitting our data to eq. (17) with fit parameter A yields a value that is almost identical to the result in [3], where the data for T = 3T_c were still missing. Our fit result is included in Fig. 8. We also compare the temperature dependence of m^1_2(T), m^1_4(T) and m^1_6(T) with the corresponding expansion coefficients of the perturbative Debye mass, which result from an expansion of (7) using (17) as the 0th order. These expansion coefficients alternate in sign; for N_f = 2,

$$m_{D,2}(T) = \frac{\sqrt{3}}{4\pi^{2}}\,A\,g(T)\,T, \qquad
m_{D,4}(T) = -\frac{3\sqrt{3}}{64\pi^{4}}\,A\,g(T)\,T, \qquad
m_{D,6}(T) = \frac{9\sqrt{3}}{512\pi^{6}}\,A\,g(T)\,T. \qquad (20)$$

At least for the second order coefficient m^1_2(T), we find that this yields a satisfactory description of the numerical results for T ≳ 2T_c.
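As an illustration of the window-averaged fitting procedure described above, here is a minimal Python sketch. It is not the authors' code: the array names are hypothetical, the starting values are arbitrary, and distances are assumed to be supplied in units of 1/T, so the fitted mass comes out in units of T.

    import numpy as np
    from scipy.optimize import curve_fit

    def screened_coulomb(r, alpha0, m0, f_inf):
        # One-gluon-exchange-motivated ansatz:
        # f(r) = f_inf - (4/3) * alpha0 / r * exp(-m0 * r)
        return f_inf - (4.0 / 3.0) * alpha0 / r * np.exp(-m0 * r)

    def fit_screening_mass(rT, f0, f_inf):
        """Average the fitted mass over five fit windows, as in the text.

        rT    : distances in units of 1/T
        f0    : leading-order coefficient f^1_{QQ,0}(r, T)
        f_inf : large-distance value, determined beforehand as the
                weighted average of the five largest distances
        """
        masses = []
        for left in np.linspace(0.8, 1.0, 5):     # left borders of the windows
            sel = (rT >= left) & (rT <= 1.73)     # right border fixed at rT = 1.73
            popt, _ = curve_fit(
                lambda r, a, m: screened_coulomb(r, a, m, f_inf),
                rT[sel], f0[sel], p0=(0.3, 2.0),
            )
            masses.append(popt[1])                # fitted m0 in units of T
        return np.mean(masses), np.std(masses)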
Eq. (20) shows that subsequent terms differ by about an order of magnitude, which explains why our signal for a non-zero contribution m^1_n(T) is rather poor for n > 2. From (20) we find m_{D,2}(T)/m_{D,0}(T) = 3/(8π²), which is independent of A and g(T) and is compared with our numerical results in Fig. 9(a). We note that the perturbative value for this ratio is already reached for T/T_c ≳ 2. In Fig. 9(b) we show the µ-dependence of the singlet screening mass for small values of µ/T. Here we included only the 0th and 2nd order contributions in the calculation of m^1(µ, T)/T.

V. CONCLUSIONS

We have analyzed the response of colour singlet and colour averaged heavy quark free energies to a non-vanishing baryon chemical potential and have calculated the resulting dependence of screening masses on the chemical potential. Using a Taylor expansion in µ/T, we obtain stable results for the leading non-vanishing correction, m^1_2(T), which is O((µ/T)²). We find that this correction, in absolute units as well as in its ratio with the leading order screening mass m^1_0(T), is large in the vicinity of the transition temperature. The ratio m^1_2(T)/m^1_0(T) is in agreement with perturbation theory for T ≳ 2T_c, indicating that the expansion coefficients m^1_n(T) receive the same multiplicative rescaling as the leading order screening mass. A calculation of the µ-dependent corrections to the screening mass in the colour averaged channel shows that these corrections are twice as large as those in the colour singlet channel for all temperatures T > T_c. This agreement with leading order perturbation theory is indeed quite remarkable, as it suggests that the leading contribution to the µ-dependent corrections of the colour averaged screening mass is due to two-gluon exchange.

The higher order expansion coefficients of the screening mass vanish within statistical errors at temperatures larger than 1.2T_c. The analysis of the asymptotic behaviour of the free energies themselves, however, suggests that these corrections are non-zero but small at high temperature and have alternating signs. This is consistent with the leading order perturbative result for the Debye mass, whose subsequent expansion coefficients drop by more than an order of magnitude and alternate in sign, as they arise from the expansion of a square root. Our results thus suggest that, at least for small values of the chemical potential and fixed temperature, the screening length in a baryon rich quark gluon plasma decreases with increasing value of the chemical potential. This is consistent with the expectation that the transition to the high temperature phase shifts to lower temperatures at non-zero baryon chemical potential.

VI. ACKNOWLEDGMENTS

This work has been supported in part through the DFG under grant KA 1198/6-4, the GSI collaboration grant BI-KAR, and a grant of the BMBF under contract no. 06BI106. MD is supported through a fellowship of the DFG-funded graduate school GRK 881. The work of FK has been partly supported by contract DE-AC02-98CH1-886 with the U.S. Department of Energy.
Multimodal functional deep learning for multiomics data Abstract With rapidly evolving high-throughput technologies and consistently decreasing costs, collecting multimodal omics data in large-scale studies has become feasible. Although studying multiomics provides a new comprehensive approach in understanding the complex biological mechanisms of human diseases, the high dimensionality of omics data and the complexity of the interactions among various omics levels in contributing to disease phenotypes present tremendous analytical challenges. There is a great need of novel analytical methods to address these challenges and to facilitate multiomics analyses. In this paper, we propose a multimodal functional deep learning (MFDL) method for the analysis of high-dimensional multiomics data. The MFDL method models the complex relationships between multiomics variants and disease phenotypes through the hierarchical structure of deep neural networks and handles high-dimensional omics data using the functional data analysis technique. Furthermore, MFDL leverages the structure of the multimodal model to capture interactions between different types of omics data. Through simulation studies and real-data applications, we demonstrate the advantages of MFDL in terms of prediction accuracy and its robustness to the high dimensionality and noise within the data. Introduction Advances in high-throughput technologies have enabled us to collect enriched multiomics datasets that capture the highdimensional and complex variations at various omics levels.This collected multimodal data of omics, which includes the genome, epigenome, transcriptome, proteome, metabolome, etc., allows for a systematic study of how different omics levels act jointly to affect human diseases.While the emerging multiomics datasets hold great promise for enhancing our understanding of these diseases, the high dimensionality, complex inter-relationships, low signal-to-noise ratio, and issues with data quality (e.g.missing values) in the multiomics data pose considerable analytical challenges [1]. Over the past two decades, a variety of methods have been developed for multiomics data analysis: methods such as Similarity Network Fusion [2] and mixOmics [3], select, extract, and integrate features of multiomics.Other tools, like MultiOmics Factor Analysis [4] and miodin [5], integrate data based on factor analysis, which could have computational efficiency issues [6].To alleviate the computational burden, dimension reduction techniques have been widely applied in multiomics data analysis [7]. Commonly used dimension reduction approaches include extensions of classical methods [8], such as penalized canonical correlation analysis (CCA) [9], sparse CCA [10], generalized SVD [11], co-inertia analysis (CIA) [12], sparse extensions of partial least squares (PLS) [13], and the self-paced learning L1/2 absolute network-based logistic regression model (SLNL) [14].However, these approaches have inherent limitations on sparse data.More recently, the state-of-the-art machine learning (ML) methods have been increasingly used in multiomics data, including a novel MultiOmics Meta-learning Algorithm (MUMA) [15] and other methods reviewed by Chung et al. 
[16].Although ML-based feature selection approaches [17] and ML-based clustering methods [18] have notably tackled the computational efficiency challenges of high-dimensional datasets; most ML methods still suffer from overfitting problems when integrating multiomics datasets [6].In a recent report [19], we developed a new functional neural network (FNN) method that incorporates functional data analysis (FDA) techniques to account for the underlying structure of genetic data, such as the linkage disequilibrium (LD) among neighboring variants, which successfully alleviates the overfitting issue in high-dimensional genetic data.In this study, we propose a multimodal functional deep learning (MFDL) method to facilitate multiomics data analysis, with the advantages in overfitting control, genetic structure modeling, and multiomics data integration, which will be illustrated in detail below. In the proposed MFDL method, we introduce an omics variant function by fitting a series of basis functions to each type of omics data (e.g.genome, epigenome, transcriptome, etc.) in the input layer, then integrate these fitted functions into the dimensionreduced hidden layers as a shared representation.From this shared representation, additional hidden layers are formed to continue the training of the model and to learn the complex relationships between multiomics and the phenotype of interest.The MFDL model has the following unique advantages: (i) it inherits the robustness of the FNN method to high-dimensional datasets and low signal-to-noise ratios by utilizing FDA techniques; (ii) its f lexible multimodal structure allows to learn a shared representation through the hidden layers of deep neural networks, which facilitates the capture of interactions and correlations between multiple omics inputs.(iii) The MFDL model can analyze outcomes in various forms (e.g.scalar, vector, or functional outcomes) and complex nonlinear relationships between outcomes and multiomics inputs.Through simulation studies and two real data applications, we demonstrate the superiority of the MFDL models in terms of both accuracy and robustness compared to the functional linear model (FLM), FNN, and feedforward artificial neural networks (NN) in multiomics data analysis. The paper is organized as follows: section "System and Methods" introduces the MFDL model with a brief overview of the FLM and the FNN method.In section "Simulation studies," we conduct three simulations to compare the performance of the proposed MFDL with FLM and FNN under various simulation settings.In section "Simulation Settings," we demonstrate the MFDL through two real data applications.Section "Discussion" discusses the merits of the proposed method and future directions.Technical details are included in Appendix 1. System and Methods To motivate the MFDL model, we first introduce the FLM and FNN methods for genetic data analysis along with the notation used in this paper.Building on these methods, we propose the MFDL model to accommodate multiple omics inputs and complex phenotypes. For the i-th individual of the study, we denote y i as phenotype and g ki = g ki1 , g ki2 , • • • , g kipk as the k-th omics input with dimension p k for i = 1, . . ., n and k = 1, . . ., m.Without loss of generality, for the rest of the paper, we state the models in the case of m = 2. 
Functional linear model

A functional linear model (FLM) can be constructed from a traditional linear model by substituting the vector of covariate observations with functional covariates, provided that at least one of the following conditions holds: (i) the dependent or response variable is considered functional; (ii) one or more of the independent variables or covariates are considered functional [20]. In the context of genetic data analysis, to evaluate the joint association of multiple omics levels with a disease phenotype, an FLM can be formulated by incorporating multiomics variants as functional covariates [21].

For each type of omics data with available location information (e.g. a gene), we scale the location information to [0, 1], denoted as t_k. We construct the omics variant function G_ki(t_k) using a linear combination of Dirac delta functions [19]. The FLM incorporating the two functional inputs G_1i(t_1) and G_2i(t_2) can be expressed as

$$y_i = \alpha_0 + X_i^{\top}\alpha + \int_0^1 G_{1i}(t_1)\,\beta_1(t_1)\,dt_1 + \int_0^1 G_{2i}(t_2)\,\beta_2(t_2)\,dt_2 + \epsilon_i,$$

where α_0 is the overall mean, α is the vector of regression coefficients of the covariates, and β_1(t_1) and β_2(t_2) are the functional genetic effects of G_1i(t_1) and G_2i(t_2). ε_i is a normally distributed error term.

Functional neural network

The functional neural network (FNN) model we previously developed is built on the hierarchical structure shown in Fig. 1, where X and α denote covariates (e.g. gender) and their corresponding coefficients, respectively. β^(d)(s) and α^(d)_0 refer to the functional weight and scalar bias at the d-th hidden layer, which can be estimated by backward propagation in functional form. The term Z^(d) refers to the hidden function at the d-th hidden layer, which captures nonlinear and nonadditive effects by applying nonlinear activation functions. Technical details can be found in Zhang et al. [19].

The FNN model has been proven to offer certain advantages for genetic data analysis, including the ability to capture the complex relationships between a single source of genetic variants and disease phenotypes, and the flexibility to handle various types of phenotypes. However, when dealing with multiple sources of omics data, the discretized matrices of the omics variant functions must be concatenated as the input to the FNN model. This process may reduce the model's ability to capture correlations between inputs. To overcome this limitation, we propose the multimodal functional DL model, which features a novel network structure that better accommodates multiple inputs.

Multimodal functional DL model

Multiomics datasets, collected from diverse biological features (e.g. genetic variation, gene expression, methylation, etc.), exhibit complementary and heterogeneous properties. A multimodal structure can leverage these properties to exploit the correlations between different data sources and improve prediction performance [22]. As shown in Fig. 2, the model we propose consists of two parts. In the separate training part, we train an FNN model for each modality, using the functional data analysis technique to account for the LD effect and to reduce data dimensionality. In the combined training part, we build a shared representation layer that encapsulates the complex correlations between omics features, and construct another feedforward neural network on the shared representation to model the complex relationship between multiomics and the phenotype of interest, including possible interactions. The structural details of conventional neural networks and FNN can be found in [23] and [19], respectively.
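To make the two-part structure of Fig. 2 concrete, the following is a minimal PyTorch sketch of a two-branch network with a shared representation layer. It is an illustration only, not the authors' implementation: the dense branches stand in for the functional (FNN) subnetworks, the layer widths, activation, and penalty value are arbitrary assumptions, and the Adadelta optimizer with an L2 penalty anticipates the training procedure described below.

    import torch
    import torch.nn as nn

    class MultimodalNet(nn.Module):
        """Two modality-specific branches feeding a shared representation."""
        def __init__(self, p1, p2, hidden=32, shared_hidden=16):
            super().__init__()
            # Separate training part: one subnetwork per omics modality.
            self.branch1 = nn.Sequential(nn.Linear(p1, hidden), nn.ReLU(),
                                         nn.Linear(hidden, hidden), nn.ReLU())
            self.branch2 = nn.Sequential(nn.Linear(p2, hidden), nn.ReLU(),
                                         nn.Linear(hidden, hidden), nn.ReLU())
            # Combined training part: head trained on the concatenated
            # (shared) representation, where cross-omics interactions can form.
            self.head = nn.Sequential(nn.Linear(2 * hidden, shared_hidden),
                                      nn.ReLU(), nn.Linear(shared_hidden, 1))

        def forward(self, g1, g2):
            z_wide = torch.cat([self.branch1(g1), self.branch2(g2)], dim=1)
            return self.head(z_wide)

    model = MultimodalNet(p1=100, p2=50)
    # L2 penalty via weight_decay; Adadelta supplies the adaptive learning rate.
    opt = torch.optim.Adadelta(model.parameters(), weight_decay=1e-2)
    loss_fn = nn.MSELoss()

    def train_step(g1, g2, y):          # y has shape (n, 1)
        opt.zero_grad()
        loss = loss_fn(model(g1, g2), y)
        loss.backward()
        opt.step()
        return loss.item()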
Specifically, we construct the omics variant function and the omics effect function, denoted as G_ki(t_k) and β_ki(t_k), k = 1, 2. We then use the discretized forms of these functions as inputs to form separate FNN models with D_k hidden layers Z^(1)_ki, ..., Z^(D_k)_ki, in which X_ki and α_k represent the covariates and their corresponding coefficients, respectively. The terms α^(d_k)_k0(t) and β^(d_k)_k(s, t) denote the functional bias and weights at the d_k-th hidden layer of the FNN model, respectively. The function σ represents the activation function for the hidden layers, while the function f on the output layer is the linear link function. The bias and weight functions are expanded in predetermined basis functions η_j, whose expansion coefficients are the parameters that need to be estimated; J^(d_k) is the number of basis functions at the d_k-th hidden layer.

After the forward propagation through all hidden layers in each FNN model, we concatenate the outputs of the last hidden layers, Z^(D_k)_ki, k = 1, 2, to form a shared representation Z_wide,i. Finally, a feedforward NN with D hidden layers is trained on Z_wide,i to obtain the fitted phenotype ŷ, where w^(d) and b^(d) represent the vector weights and scalar bias at the d-th hidden layer, respectively. The training process of the model is described in Algorithm 1.

Algorithm 1. Training process of MFDL for two omics inputs.
    Input: g_1, t_1, g_2, t_2, X_1, X_2, y
    Output: ŷ, W, b
    Initialization: construct genetic variant functions G_1 and G_2
    while the objective function has not converged do
        1: construct a functional neural network (FNN) for G_1 and G_2 separately;
        2: feed both FNN models forward to obtain the intermediate outputs;
        ...
    end while

To train the MFDL and estimate the model parameters defined in the forward propagation above, we denote the parameters of interest collectively as (W, b). To estimate them, we apply backward propagation with respect to the mean squared error (MSE) loss function regularized by an L_2 norm penalty. First, we define the empirical risk function J(W, b) and the penalty term Ω(W) as

$$J(W,b) = \frac{1}{n}\sum_{i=1}^{n}\left\lVert y_i - \hat{y}_i \right\rVert_2^{2},
\qquad
\Omega(W) = \sum_{d}\left\lVert W^{(d)} \right\rVert_2^{2},$$

with the sum in the penalty running over all layers. The regularized loss function J̃ is then defined as

$$\tilde{J}(W,b) = J(W,b) + \lambda\,\Omega(W), \qquad (1)$$

where λ is the penalty parameter determined by the cross-validation technique. The parameters can be estimated by minimizing the regularized loss with gradient descent; we iteratively update the parameters as

$$W \leftarrow W - r\,\frac{\partial \tilde{J}}{\partial W},
\qquad
b \leftarrow b - r\,\frac{\partial \tilde{J}}{\partial b},$$

until the loss function (1) converges. Here r represents an adaptive learning rate determined by the ADADELTA algorithm [24]. Technical details can be found in Appendix 2.

Compared to the FLM and FNN models, our model provides a more flexible structure that easily accommodates multiple omics inputs, captures their nonlinear and nonadditive (e.g. interaction) features, and is robust to high dimensionality and high noise levels. The shared representation layer in our model is capable of capturing the correlations between multiple omics inputs and avoids the situation where certain hidden nodes are trained exclusively for one source of omics input. Through the simulation study in section "Results," we show that these two improvements in our proposed model lead to better prediction performance compared to FLM, NN, or FNN.

Simulation studies

Through simulation studies, we evaluate the performance of MFDL for multiomics data analysis and compare it with FLM and FNN. For all simulation studies, to mimic the minor allele frequencies and LD of the real genome, all genotype data were drawn directly from the 1000 Genomes project [25]. We simulated various nonlinear and interactive relationships between the phenotype and the omics data to demonstrate the efficiency of MFDL in capturing complex relationships. We also simulated phenotypes in both scalar and vector forms to demonstrate the flexibility of MFDL, and introduced various noise levels to show its robustness.
Simulation settings

For simplicity, we use two types of omics information: genotype data (i.e. SNPs) as G_1(t_1) and gene expression data as G_2(t_2), although the method can accommodate various types of omics data. For all simulations, we used real genetic data from the 1000 Genomes project to reflect the structure of real sequencing data (e.g. LD pattern and allele frequency). Specifically, we used a 1-Mb region from the genome (chromosome 17: 7344328-8344327) and randomly chose a 30-kb segment from this region for each simulation replicate to mimic the LD patterns and allele frequency distributions of real genetic data. The minor allele frequency (MAF) of the SNPs in this region ranged from 4.50 × 10^−4 to 4.99 × 10^−1, with a distribution highly skewed toward rare variants (34.8% of the variants with MAF < 0.001, 69.1% with MAF < 0.01, and 80% with MAF < 0.03). We randomly selected 200 samples (n = 200) and 100 SNPs (p_1 = 100) from the 30-kb segment to construct G_1(t_1). Two cases of gene expression data G_2(t_2), with p_2 = 1 and p_2 = 50, were generated for the 200 samples from multivariate normal distributions, with μ = 0 and σ² = 0.5 for p_2 = 1 and mean vector (0, ..., 0) for p_2 = 50.

We simulate two types of outcomes: scalar and vector. The relationship between the phenotype outcomes and the omics data consists of two functions, f_1 and f_2, based on G_1 and G_2, respectively. Moreover, we consider three types of relationships between omics and outcomes: a linear relationship, a linear relationship with interaction, and a nonlinear relationship. The linear and nonlinear models are simulated as

$$y_i = f_1\!\left(G_{1i}(t_1)\right) + f_2\!\left(G_{2i}(t_2)\right) + \epsilon_i,$$

where f_1 and f_2 are linear/nonlinear functions when the relationships are linear/nonlinear for a scalar response. For a vector response, the fixed coefficients in f_k(G_ki(t_k)) can be simulated in different dimensions to facilitate a vector-to-vector transformation from G_ki(t_k) to y_ki. For the nonlinear relationship, f_k is built from sinusoidal components of the form sin(a_kl s + d^(2)_kl), with random phases d^(1)_kl, d^(2)_kl ~ unif(−π, π) and power exponents e ∈ {1/3, 3/2, 3}; B_k1(t_k) is a predetermined fifth-order B-spline basis, and C_k is a fixed coefficient matrix whose dimension depends on the data type of y_i.

To further evaluate interaction effects, we introduced an interaction term into the linear transformation, defined as the inner product of f_1 and f_2 [26], with c chosen as a fixed scalar coefficient. The corresponding linear model with interaction is

$$y_i = f_1\!\left(G_{1i}(t_1)\right) + f_2\!\left(G_{2i}(t_2)\right) + c\,\langle f_1, f_2\rangle_i + \epsilon_i,$$

where ε_i is generated from a normal distribution with a mean of 0 and various choices of variance. In all three simulations, we randomly divide the samples into a training set of size 160 and a testing set of size 40. To mitigate the risk of random findings, we replicate each simulation setting 200 times and set a maximum of 10^5 training epochs. To ensure consistency across all models, we use the L_2 penalty for all models, with the regularization parameter λ selected from the set {0.1, 0.3, 1, 3, 10} using the validation technique. We compare the performance of MFDL with FLM and a deep FNN with three hidden layers (FNN-3HL). Two evaluation criteria are employed: the mean square error (MSE) and the RV correlation coefficient between the predicted values Ŷ = (ŷ_1, ..., ŷ_n) and the true values Y = (y_1, ..., y_n), defined below.

Figure 3. MSE of the three methods under three relationships (the linear, the interaction, and the nonlinear relationships) and two types of omics data (G-E and G-G).
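The two defining equations referenced above do not survive the extraction; standard forms consistent with the text, assuming column-centered outcome matrices for the RV coefficient, are

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left\lVert y_i - \hat{y}_i \right\rVert_2^{2},
\qquad
\mathrm{RV}(Y,\hat{Y}) = \frac{\operatorname{tr}\!\left(YY^{\top}\hat{Y}\hat{Y}^{\top}\right)}
{\sqrt{\operatorname{tr}\!\left[(YY^{\top})^{2}\right]\operatorname{tr}\!\left[(\hat{Y}\hat{Y}^{\top})^{2}\right]}}.$$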
The RV correlation coefficient is a multivariate generalization of the squared Pearson correlation coefficient proposed by Robert and Escoufier [27].

Simulation 1

In the first simulation, our aim is to evaluate three types of underlying relationships (a linear, a linear with interaction, and a nonlinear relationship) between different omics inputs and a scalar phenotype, with a fixed noise level (i.e. var(ε_i) = 0.3). We explore two types of omics input data: (i) G_1(t_1) as a vector and G_2(t_2) as a scalar, to mimic Genetic and Gene Expression data (G-E); and (ii) both G_1(t_1) and G_2(t_2) as vectors, to mimic Genetic and Genetic data (G-G). The primary distinction between treating omics data as vectors or as functional data is whether we account for information (e.g. LD) from neighboring genetic variants. For an omics input treated as a functional input in MFDL, we apply beta-smoothing to the weight parameter in the input layer and a vector-to-vector transformation in the hidden layers. We assess model performance across six scenarios combining the two types of omics data and the three transformation functions defined in equations (4)-(8). The results of these six scenarios are shown in Figs 3 and 4.

Figure 4. RV correlation coefficients of the three methods under three relationships (the linear, the interaction, and the nonlinear relationships) and two types of omics data (G-E and G-G).

In Figs 3 and 4, the first row depicts the performance of the three methods under the various relationships (i.e. a linear, a linear with interaction, and a nonlinear relationship) for the G-E data, while the second row summarizes the results for the G-G data. In the linear setting (left panels of Figs 3 and 4), MFDL and FLM have comparable performance and outperform FNN-3HL for all input data types. When there is an interaction between the omics data (middle panels of Figs 3 and 4), the MFDL model attains higher accuracy than FLM and FNN-3HL in terms of MSE and RV correlation coefficients, particularly in the G-G setting. In cases of nonlinear relationships (right panels of Figs 3 and 4), MFDL again achieves the highest accuracy across all models for both types of omics input data. Overall, the findings suggest that the proposed MFDL model excels at capturing complex nonlinear and nonadditive relationships between outcomes and multiple omics data, while attaining comparable performance to the other two methods in simpler scenarios (e.g. the linear relationship).

Simulation 2

In the second simulation, we compare the performance of the three methods across different phenotype types (i.e. scalar and vector) under the three types of underlying relationships with the G-E omics data. The noise level is set at 0.3 (i.e. var(ε_i) = 0.3). For this setup, we generate G_1(t_1) as functional data with p_1 = 100, while G_2(t_2) is simulated as a scalar. Two types of phenotypes y are simulated: a scalar and a vector of dimension 50. Similar to the results of simulation 1, MFDL attains better, or at least comparable, performance than FLM and FNN-3HL across the different phenotype types, underlying relationships, and omics data. Additionally, both MFDL and FLM outperform FNN-3HL under the linear relationship (left panels of Figs 5 and 6), while both MFDL and FNN-3HL outperform FLM with vector phenotypes and the nonlinear relationship (bottom right panels of Figs 5 and 6).
Simulation 3

In the third simulation, we evaluate the robustness of the three methods under increasing levels of noise, mimicking the high noise-to-signal ratio of real-world multiomics data. Specifically, we simulate three noise levels: 0.3, 0.45, and 0.6 (i.e. var(ε_i) ∈ {0.3, 0.45, 0.6}). In this simulation, we considered scalar phenotypes, the linear relationship with interactions, and both the G-G and G-E omics data settings. The omics input data are generated as described in simulation 1. Figures 7 and 8 show that the proposed MFDL model achieves the smallest MSE and the highest RV correlation in all six scenarios, indicating the robustness of the MFDL model against various noise levels.

Figure 5. MSE of the three methods under three relationships (the linear, the interaction, and the nonlinear relationships) and two types of phenotypes (scalar and vector phenotypes).

Figure 6. RV correlation coefficients of the three methods under three relationships (the linear, the interaction, and the nonlinear relationships) and two types of phenotypes (scalar and vector phenotypes).

In conclusion, through three simulations we demonstrate the MFDL model's ability to capture complex relationships between different types of phenotypes and multiomics data. With the advantage of the multimodal structure, MFDL provides a more effective way of capturing latent features from various omics data and of modeling the interaction effects between multiple omics. Additionally, our proposed model demonstrates robustness against various noise levels and high-dimensional omics and phenotype data.

Real data application

Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by its complex and multifactorial nature. Although numerous studies have explored the role of various omics data in AD, the combined effects of multilevel omics data remain under-evaluated. In this study, we undertake an integrative analysis of DNA sequencing and gene expression data derived from the AD Neuroimaging Initiative (ADNI) project. ADNI is a multisite study designed to evaluate clinical, imaging, genetic, and biospecimen biomarkers across the spectrum from normal aging to early mild cognitive impairment (MCI) and AD. DNA samples from 808 participants were subjected to non-CLIA whole-genome sequencing (WGS) at Illumina. For our phenotype of interest, we focus on hippocampal volume changes observed in structural MRI scans, a critical marker for Late-Onset AD (LOAD), examining the contributions of genetic, gene expression, and biomarker variations to these changes over time.

Before analysis, per-individual and per-marker quality control (QC) [28] were implemented. The per-individual QC excludes samples with a high rate of missing genotypes or relatedness to other individuals. The per-marker QC excludes SNPs with an insufficient proportion of successful genotype calls, markers that show significant deviation from Hardy-Weinberg equilibrium (HWE), or markers with a very low minor allele frequency.
Predicting hippocampus volume change over time with Apolipoprotein E (APOE) genotype, gene expression, and biomarker data To prepare the multiomics dataset for the analysis, we select three omics inputs: APOE genotypes, APOE gene expression levels and biomarker "Aβ-42" [29], which are recognized to affect AD pathology.We extracted the APOE genotypes, corresponding gene expression levels and biomarker Aβ-42 for all 808 participants from the ADNI dataset.This omics data were then integrated with longitudinal measurements of hippocampal volume derived from the ADNI structural MRI data.Participants who had only a single hippocampal volume measurement were excluded, resulting in a final dataset comprising 370 individuals and 1456 hippocampal volume measurements, along with the participants' ages at each visit. We applied four methods to the omics dataset from the ADNI, including FLM, DL model, FNN-3HL, and the proposed MFDL.These methods were used to investigate the combined effects of APOE genotypes, gene expression and the biomarker Aβ-42 on the hippocampus volume change over time.In FLM, the omics inputs are modeled as three separate terms, as detailed in section "Functional Linear Model."The DL model treats the three data matrices in vector form and concatenates them column-wise prior to training the neural network.For the FNN-3HL model, genotype data are modeled in a functional form, and the discretization of the genetic variant function is then combined with the gene expression data and biomarker data for model fitting.For MFDL, the three omics inputs are trained independently to construct the shared information layer. The phenotype comprises two or more observations of hippocampus volume change per participant taken during their visits and is insufficient to construct a function across these points.Consequently, the phenotype is treated in vector form, and the patients' age at the time of their first visit is used as a covariate in all models.Similar to the simulation studies, 278 patients were randomly selected to train the models, while the remaining 92 patients were used as the test set.The models are evaluated and compared using the MSE, MAE, and RV correlation coefficients between the observed and predicted phenotypes.To mitigate the effects of random data splitting, we repeated the process 200 times for each model with three-fold crossvalidation on the training set. Figure 9 shows that the MFDL model achieves superior performance compared to other methods on the test set regarding all the three criteria (MSE, MAE, and RV correlation).Moreover, compared to deep FNN models, our proposed MFDL model exhibits considerably less overfitting by comparing the performance between the training and test sets. 
The effect of APOE-ACE interaction on predicting hippocampus volume change over time

In this part of the study, we examine a gene-gene interaction related to hippocampal volume change over time. We consider ACE, which has previously been identified as having a strong association with LOAD and as exhibiting gene-gene interactions with APOE4 allele status [30]. By applying our methods to the ADNI dataset, we aim to evaluate the impact of the interaction between APOE and ACE on the prediction of hippocampal volume changes over time. ACE genotypes were sourced from the ADNI dataset. Following the same data processing as in section "Predicting hippocampus volume change over time with Apolipoprotein E (APOE) genotype, gene expression, and biomarker data," a total of 625 samples with 1250 hippocampal volume measurements were retained for analysis. We extracted SNPs from the APOE and ACE genotypes, along with their SNP location information, from the ADNI dataset; these were modeled as functional inputs in the FLM, FNN, and MFDL. As in the previous analysis, we repeated the modeling process 200 times for each model with three-fold cross-validation on the training set.

Figure 10 shows that our proposed method surpasses the existing FLM, DL, and FNN in terms of testing MSE, MAE, and RV correlation coefficient performance. Additionally, the differences between the training and testing results may suggest that DL and FNN-3HL are susceptible to overfitting. In contrast, our proposed model exhibits robust performance. Although, compared to the previous analysis, the data dimensionality of the two genotypes and possibly the noise level increase, the MFDL still consistently captures the gene-gene interaction and is less prone to the overfitting issue.

Discussion

In this paper, we introduce a novel multimodal functional DL method for the analysis of high-dimensional multiomics data. The proposed MFDL method uses the hierarchy of neural networks to learn complicated features from omics data, making it more powerful in modeling complex relationships (e.g. interactions between omics) than traditional methods such as FLM. By modeling effects as functions built from combinations of basis functions, MFDL is able to take information from nearby markers into account and to reduce model complexity, providing more robust performance than DL for high-dimensional omics data analysis. By using a shared representation layer, the MFDL model is flexible in handling different types of omics data. Unlike existing methods such as FNN, MFDL uses subnets to model each omics data type and models their complex relationships based on the shared representation layer. Such a strategy not only provides flexibility to accommodate different data types (e.g. functional versus nonfunctional data) but also reduces the complexity of the network structure.
Through simulation studies and real-world data applications, the proposed model has demonstrated superiority over both the FLM and FNN models in scenarios where multiomics data exhibit complex relationships (e.g. nonlinear relationships and interactions). The MFDL model also exhibits robustness in scenarios with increasing noise levels or high-dimensional data. In comparison with the FNN model, which is prone to overfitting under certain conditions, our proposed MFDL model is more adept at handling multiomics data.

In a traditional feedforward neural network with fully connected layers, when multiple inputs are merged to train the model, the hidden nodes tend to become exclusively attuned to one type of input. For example, in a two-omics-input scenario, after a certain number of training iterations, some hidden nodes may relate predominantly to the first type of input, while others are more closely associated with the second. This tendency makes it difficult for the network to capture strong interactions between omics data, which can result in poor performance. Although FNN leverages the smoothness of genetic information to enhance predictive performance, it still struggles to identify a functional transformation that encapsulates the internal relationships among multiple types of inputs. The multimodal structure, with its shared representation layer, offers an effective solution for modalities that have latent interactions, as demonstrated by multimodal DL methods applied to video-audio datasets [31]. The adaptability of its structure, combined with the robustness of the shared representation layer, positions our proposed model as a useful tool for modeling multiomics data.

MFDL can also be further extended to handle functional phenotypes (e.g. imaging and time-dependent phenotypes). For a single genetic input, FNN addresses this issue by converting the vector-to-vector transformations between hidden layers into function-to-function transformations. For multiple omics inputs, however, the main challenge faced by FNN is fitting a function on both functional data (e.g. SNPs) and nonfunctional data (e.g. gene expression), for which almost no basis system is suitable. Moreover, FNN becomes problematic when defining a function from the shared representation, since location information is not unified across different omics data. Consequently, our proposed model faces the same challenges when dealing with functional phenotypes. While a simple solution is to treat a functional phenotype as a vector, exploring alternative strategies that incorporate additional information (e.g. location and temporal information, and networks) is worthwhile for future investigation. Statistical testing built on MFDL holds great promise for rigorously evaluating the complex associations between multiomics and the phenotype of interest, and for result interpretation. This represents an important avenue for further research.

Key Points
• We develop an MFDL approach to model the complex relationships between multiomics and disease phenotypes.
• The MFDL approach imposes a hierarchical structure of deep neural networks using the individually trained subnetworks for each omics modality and a shared representation layer.

Figure 1. The hierarchical structure of FNN with D hidden layers.
Figure 2. The hierarchical structure of MFDL with two omics inputs.
Figure 7. MSE of the three methods under three noise levels (0.3, 0.45, and 0.6) and two types of omics data (G-E and G-G).
Figure 8. RV correlation of the three methods under three noise levels (0.3, 0.45, and 0.6) and two types of omics data (G-E and G-G).
Figure 9. Prediction of the change of hippocampus volume using APOE genotypes, gene expression, and the biomarker Aβ-42.
Figure 10. Prediction of the change of hippocampus volume by considering an interaction between APOE and ACE.

It is worth mentioning that the proposed MFDL framework is flexible enough to fit various types of inputs, depending on the nature of the omics data. For instance, if an input is a scalar (e.g. the expression of a single gene) or multivariate with low dimension (e.g. expression data for a limited number of genes), and therefore not suitable for functional smoothing, the corresponding functional weights can be replaced by ordinary scalar or vector weights.
Novel cuproptosis-related lncRNAs can predict the prognosis of patients with multiple myeloma

Background Cuproptosis-related long-stranded non-coding RNAs (lncRNAs) have several implications for the prognosis of multiple myeloma (MM). This research aimed to construct a prognostic risk model for MM patients and explore the potential signaling pathways in the risk groups.

Methods Cuproptosis-related lncRNAs were obtained from the co-expression analysis of cuproptosis-related genes and lncRNAs. Subsequently, twelve cuproptosis-related lncRNAs were selected to construct a prognostic risk model for MM patients by least absolute shrinkage and selection operator (LASSO) regression. The clinical data of these patients were then randomly divided into a training group and a testing group. Next, patients were divided into low- and high-risk groups according to the median risk score. Kaplan-Meier survival analysis was performed to clarify the prognostic differences between the risk subtypes. A Cox analysis was conducted to identify whether the risk score can be used as an independent prognostic factor. In addition, receiver operating characteristic (ROC) curve analysis and concordance index (C-index) curve analysis were performed to elucidate the value of the risk score as a prognostic indicator. Finally, differential risk analysis and functional enrichment analysis were carried out to identify the potential signaling pathways in the low- and high-risk groups.

Results The results demonstrated that the overall survival (OS) of patients in the high-risk group was shorter than that in the low-risk group. There were significant differences in gene expression between MM patients in the high- and low-risk groups. The Gene Ontology (GO) analysis results showed that the differentially expressed risk-related genes (DERGs) were mainly concentrated on the collagen-containing extracellular matrix. According to the Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis results, the DERGs may be related to the neuroactive ligand-receptor interaction and mitogen-activated protein kinase (MAPK) signaling pathways, indicating that they may be involved in tumor progression.

Conclusions The findings of this study suggest that cuproptosis-related lncRNAs may be effective biomarkers for predicting the prognosis of MM patients, which is anticipated to contribute to the improvement of clinical outcomes.

As a basic nutrient, copper can exert both beneficial and toxic effects on cells due to its redox properties (14,15). Intracellular copper concentration is maintained at a very low level through an active steady-state mechanism that works across concentration gradients (16). Unlike known forms of programmed cell death (such as ferroptosis and apoptosis), copper ionophore-induced cell death is closely related to mitochondrial respiration and protein lipoylation (17-19). In this process, copper binds directly to the lipoylated components of the tricarboxylic acid (TCA) cycle, which leads to the accumulation of lipoylated proteins and the subsequent loss of iron-sulfur cluster proteins. This induces proteotoxic stress and eventually leads to cell death (20). In addition, a recent study demonstrated high anti-MM efficacy of copper ionophores in treatment-resistant cellular models derived from MM patients (21).
Long-stranded non-coding RNAs (lncRNAs) are a group of RNA molecules with a transcript length of more than 200 nt. They were initially considered false transcriptional noise caused by low RNA polymerase fidelity (22). The promoter region of lncRNAs is generally more conserved than that of messenger RNAs (mRNAs). Unlike mRNAs, lncRNAs do not encode proteins. However, lncRNAs can regulate gene expression at multiple epigenetic, transcriptional, and post-transcriptional levels (23). LncRNAs can induce many important cancer phenotypes by interacting with other cellular macromolecules, DNAs, proteins, and RNAs (24). Some lncRNAs have been shown to regulate apoptosis (25,26), ferroptosis (27-29), and cuproptosis (30,31) of cancer cells. Furthermore, the expression of lncRNAs often changes in cancer and is related to the prognosis of various cancer patients. Mounting evidence supports the independent prognostic value of lncRNAs in patients with MM, leading to the development of multiple prognostic lncRNA signatures (32,33).

LncRNAs participate in tumor progression by regulating cuproptosis genes. Numerous studies have focused on molecular classification and survival prediction using lncRNAs. One study (34) constructed a cuproptosis-related lncRNA prognosis prediction model that can reliably and accurately predict the prognosis, drug sensitivity, and clinical recurrence of acute myeloid leukaemia. However, there has been no research on the relationship between prognosis and a cuproptosis-related lncRNA score in MM, and whether such a score can be used as an independent prognostic factor is not clear. Therefore, we analyzed the data with a view to identifying cuproptosis-related lncRNA biomarkers to predict the prognosis of MM patients.

In this study, a bioinformatics analysis method was used to construct a prognostic risk model for MM patients based on the cuproptosis-related lncRNA score. Moreover, differential risk analysis and functional enrichment analysis were conducted to identify the potential signaling pathways in the risk groups. These findings are expected to provide some theoretical support for exploring the regulation of cuproptosis-related lncRNAs in MM. The flow chart of this research is sketched in Figure 1. We present this article in accordance with the TRIPOD reporting checklist (available at https://tcr.amegroups.com/article/view/10.21037/tcr-23-960/rc).

Data acquisition and preprocessing

In order to identify cuproptosis-related lncRNAs, the datasets had to satisfy several inclusion criteria, among them: (III) more than 300 cases were included, to constitute a meaningful sample size; and (IV) the data were count files obtained by RNA sequencing (RNA-Seq) for gene expression quantification. The expression profile data and clinical data of MM (MMRF-COMMPASS) were downloaded from the Cancer Genome Atlas (TCGA) database (https://portal.gdc.cancer.gov/), and lncRNAs were then identified according to the gene annotation of TCGA. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013).
Cuproptosis-related genes were collected from a series of recent reports (35-38), including NFE2L2, NLRP3, ATP7A, ATP7B, SLC31A1, FDX1, LIAS, LIPT1, LIPT2, DLD, DLAT, PDHA1, PDHB, MTF1, GLS, CDKN2A, DBT, GCSH, and DLST. Combined with the standardized gene expression matrix, the cuproptosis-related gene expression matrix was obtained. Finally, the potential cuproptosis-related lncRNAs were identified based on the co-expression analysis of the cuproptosis-related gene expression matrix and the lncRNA expression matrix with the aid of the limma package in R, with a correlation coefficient larger than 0.3 and a P value less than 0.001 as the screening conditions.

Subsequently, a multivariate Cox model for prognosis and survival prediction was constructed with the selected cuproptosis-related lncRNAs, and the risk scores of all MM patients were calculated. These patients were divided into low- and high-risk groups according to the median risk score for prognosis assessment. Furthermore, Kaplan-Meier survival curves of MM patients in the low- and high-risk groups of the training group and the testing group were plotted.

Independent prognostic analysis of the prognostic risk model

Univariate and multivariate Cox analyses were performed to evaluate the effects of risk score, age, gender, International Staging System (ISS) stage, and family tumor history on the prognosis of MM patients; the variables in the Cox model respected the proportional hazards assumption. A forest map was then plotted. In order to verify the accuracy of this model in predicting the prognosis and survival of MM patients, the survival and timeROC packages in R were used to calculate the factors of the risk score model and the 1-, 3-, and 5-year area under the curve (AUC). A C-index curve was also constructed.

Construction of nomogram of clinical subgroups

The survival, regplot, and rms packages in R were used to construct a nomogram of clinical subgroups to identify the 1-, 3-, and 5-year survival rates of these MM patients. The nomogram of clinical subgroups was constructed based on the risk score, age, gender, ISS stage, and family history of these MM patients.

Differential risk analysis

With the aid of the limma package in R, the differentially expressed risk-related genes (DERGs) in the samples of MM patients in the low- and high-risk groups were screened under the conditions |log2FC| > 1 and P < 0.05.

Functional enrichment analysis

The functional enrichment analysis of the DERGs was carried out with GO and KEGG through the clusterProfiler package in R. The results were screened according to an adjusted P value (adj. P) < 0.01 to identify the potential signaling pathways in the risk groups.

Statistical analysis

R software was used for data analysis and graphics rendering. The survival curves were drawn using the R package "survival", and the differences in prognosis between groups were analyzed using the survival functions of the package. Receiver operating characteristic (ROC) analysis was performed using the R package "timeROC" to obtain the AUC. Nomograms were established using the R package "rms" to assess the prognostic significance of selected features in the samples. P < 0.05 indicated that a difference was statistically significant.
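As an illustration of the co-expression screening and risk scoring described above (a sketch, not the authors' R workflow), here is a minimal Python version. The expression matrices expr_lnc and expr_cu (samples × features) and the Cox coefficient series coefs are hypothetical inputs.

    import numpy as np
    import pandas as pd
    from scipy.stats import pearsonr

    def coexpression_screen(expr_lnc, expr_cu, r_min=0.3, p_max=1e-3):
        """Keep lncRNAs correlated with at least one cuproptosis gene
        (|r| > 0.3 and P < 0.001, the thresholds quoted in the text)."""
        keep = []
        for lnc in expr_lnc.columns:
            for gene in expr_cu.columns:
                r, p = pearsonr(expr_lnc[lnc], expr_cu[gene])
                if abs(r) > r_min and p < p_max:
                    keep.append(lnc)
                    break
        return expr_lnc[keep]

    def risk_scores(expr_sel, coefs):
        """Linear predictor of the multivariate Cox model:
        risk_i = sum_j beta_j * expression_ij."""
        return expr_sel[coefs.index].values @ coefs.values

    # Median split into low-/high-risk groups, as in the paper:
    # scores = risk_scores(selected, cox_coefficients)
    # high_risk = scores > np.median(scores)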
Cuproptosis-related lncRNA data acquisition

The bone marrow tissue transcriptome data of 575 MM patients and the clinical data of 516 MM patients were downloaded from the TCGA database (https://portal.gdc.cancer.gov/). Based on the GENCODE database, 16,901 lncRNAs were selected from the TCGA multiple myeloma dataset. Then, the expression matrices of lncRNAs and cuproptosis-related genes were analyzed under the screening conditions of a correlation coefficient larger than 0.3 and a P value less than 0.001. Finally, a total of 294 cuproptosis-related lncRNAs were screened out. The co-expression relationship between cuproptosis-related genes and cuproptosis-related lncRNAs was visualized in a Sankey diagram (Figure 2).

Construction of the prognostic risk model for MM patients

Firstly, due to the lack of related data such as the initial treatment regimen, treatment response, and stem cell transplantation status, the clinical data of 516 MM patients were randomly divided into a training group and a testing group (Table 1). The 294 cuproptosis-related lncRNAs were screened by univariate Cox analysis (P<0.05), and a total of 76 cuproptosis-related lncRNAs were found to be associated with the survival of MM patients. Additionally, 12 independent prognostic cuproptosis-related lncRNAs were selected by multivariate Cox analysis. The relationship between cuproptosis-related genes and these lncRNAs is presented in a heatmap (Figure 3). According to the median risk score, patients with a risk score lower than the median were classified as the low-risk group, and those with a risk score higher than the median were classified as the high-risk group. It was found that in the training group, the testing group, and all groups combined, the overall survival (OS) of patients in the high-risk group was shorter than that in the low-risk group (Figure 4A-4C).

Univariate and multivariate Cox regression analyses were then employed to identify whether the calculated risk score can be used as an independent prognostic factor. Due to the lack of data on the serum lactate dehydrogenase (LDH) level and high-risk cytogenetics, we constructed the univariate and multivariate Cox regression analyses with age, gender, ISS stage, tumor family history, and risk score. The univariate Cox analysis showed that the age [hazard ratio (HR) = 1.039, 95% CI: 1.017-1.062, P<0.001], stage (HR = 2.092, 95% CI: 1.587-2.757, P<0.001), and risk score (HR = 1.073, 95% CI: 1.038-1.110, P<0.001) of MM patients were correlated with OS (Figure 5A). The multivariate Cox analysis showed that the age (HR = 1.027, 95% CI: 1.005-1.049, P=0.018), stage (HR = 2.053, 95% CI: 1.541-2.735, P<0.001), and risk score (HR = 1.098, 95% CI: 1.059-1.139, P<0.001) were independently correlated with OS (Figure 5B). This finding suggests that these prognostic factors are independent in MM patients. The ROC curve was then employed to evaluate the prediction accuracy of the risk score. On the one hand, the AUC of the risk score was 0.732, which was larger than that of age (0.571), gender (0.582), stage (0.703), and tumor family history (0.535) (Figure 6A). On the other hand, the AUC of the 1-, 3-, and 5-year OS was 0.732, 0.705, and 0.778, respectively, which confirmed the favorable diagnostic significance of this prognostic model (Figure 6B). In addition, a C-index curve was constructed to compare the concordance index of the risk score with other clinical features (age, gender, stage, and tumor family history). It was revealed that the C-index value of the risk score was larger than that of the other clinical features (Figure 6C), which verified the high prediction accuracy of this model.
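To illustrate the survival comparison reported above, here is a sketch that mirrors the R "survival" workflow using Python's lifelines package; it is not the authors' code, and time, event, and scores are assumed to be numpy arrays of follow-up times, event indicators, and risk scores.

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    def compare_groups(time, event, scores):
        """Median-split the risk scores and compare survival curves."""
        high = scores > np.median(scores)
        km = KaplanMeierFitter()
        for label, mask in [("low risk", ~high), ("high risk", high)]:
            km.fit(time[mask], event_observed=event[mask], label=label)
            km.plot_survival_function()
        # Log-rank test between the two groups, as in the Kaplan-Meier analysis.
        res = logrank_test(time[high], time[~high],
                           event_observed_A=event[high],
                           event_observed_B=event[~high])
        return res.p_value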
Construction of nomogram of clinical subgroups

Moreover, a nomogram was constructed based on the age, gender, ISS stage, risk score, and tumor family history from the signature (Figure 7) to predict the OS of patients. The total score calculations demonstrated that the nomogram was effective in predicting the 1-, 3-, and 5-year OS of MM patients. In the example shown in Figure 7, the patient is a 74-year-old male with a tumor family history, a low risk score, and ISS stage III. His total score is 259, and the probability of living more than 1 year is 0.914. The probability of living more than 3 years is 0.726, while the probability of living more than 5 years is 0.475.

Differential risk analysis and functional enrichment analysis

The DERGs were extracted from the samples of the high- and low-risk groups with the limma package in R software. The results demonstrated that there were significant differences in the expression of 581 genes between the high- and low-risk groups of MM patients (|log2FC| > 1, P<0.05). Functional enrichment analysis was then performed on these 581 DERGs. The GO analysis showed that the DERGs were mainly concentrated in ontology annotations such as extracellular structure organization, collagen-containing extracellular matrix, glycosaminoglycan binding, and sulfur compound binding (Figure 8, Table 2). The KEGG analysis showed that the DERGs may be related to neuroactive ligand-receptor interaction and the mitogen-activated protein kinase (MAPK) signaling pathway. This finding indicated that these DERGs may be involved in the progression of tumors (Figure 9).

Discussion

MM is the second most common hematological malignant tumor in adults, and its incidence is on the rise worldwide (39,40). Despite the advancement in the treatment of MM with the emergence of several new drugs (carfilzomib, pomalidomide, daratumumab, elotuzumab, panobinostat, ixazomib, and selinexor), this cancer is still incurable in most patients (41). There is an urgent demand for identifying reliable prognostic biomarkers and new therapeutic targets for MM. In recent years, the contribution of lncRNAs to cancer progression has been widely recognized (42,43). Li (44) found that cuproptosis played an important role in hematological tumors. However, there are few studies on the co-regulatory role of cuproptosis and lncRNAs in MM. Hence, exploring biomarkers related to the prognosis of MM may have a favorable prospect in clinical application.

In this study, a prognostic risk model was established for MM patients with cuproptosis-related lncRNAs. With the assistance of bioinformatics analysis, the bone marrow tissue transcriptome data of 575 MM patients and the clinical data of 516 MM patients were screened from the TCGA database, and patients were divided into the low- and high-risk groups based on the median risk score. The Kaplan-Meier survival analysis showed a significant difference in survival between the high- and low-risk groups (P<0.001). In addition, the ROC curve and C-index analyses proved the effectiveness of the risk score as a prognostic biomarker. Finally, differential risk analysis and functional enrichment analysis were carried out between the risk groups.
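To make the enrichment step concrete, here is a minimal clusterProfiler sketch under the paper's stated cutoff (adj. P < 0.01); `degs`, a character vector of DERG gene symbols, is a hypothetical input, not the authors' data.

```r
library(clusterProfiler)
library(org.Hs.eg.db)

# Map gene symbols to Entrez IDs for enrichment testing
ids <- bitr(degs, fromType = "SYMBOL", toType = "ENTREZID",
            OrgDb = org.Hs.eg.db)

# GO enrichment across BP/CC/MF, BH-adjusted P < 0.01 as in the text
ego <- enrichGO(gene = ids$ENTREZID, OrgDb = org.Hs.eg.db,
                ont = "ALL", pAdjustMethod = "BH", pvalueCutoff = 0.01)

# KEGG pathway enrichment for human ("hsa")
ekg <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa",
                  pvalueCutoff = 0.01)
```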
Among the 12 cuproptosis-related lncRNAs associated with the prognosis of MM, TLX1NB and CNTFR-AS1 have been verified to play roles in cancer. TLX1NB is an overexpressed oncogene related to the tumor progression of lung cancer, colorectal cancer, and glioma. Duan et al. (45) found low expression and hypermethylation of TLX1NB in patients with low-grade gliomas, and TLX1NB had a certain impact on the prognosis of low-grade gliomas. They further concluded that TLX1NB may be an early biomarker for the recurrence of low-grade gliomas. Chen et al. (46) found that the expression of TLX1NB was upregulated in colon cancer tissue. They confirmed that TLX1NB can promote the invasion, migration, and metastasis of colon cancer cells by promoting the phosphorylation of STAT5A, and that it also plays an important role in cancer regulation. Dastjerdi et al. (47) revealed that TLX1NB was overexpressed in colon cancer; it had potential carcinogenic characteristics and could be used as a diagnostic factor to distinguish tumors from normal samples. In a study by Li et al. (48), it was suggested that TLX1NB regulates CRISP1 through hsa-miR-148b-3p and can be considered a potential therapeutic target for lung adenocarcinoma. In addition, Li et al. (49) found that high expression of CNTFR-AS1 was related to a low survival rate in triple-negative breast cancer, and that it may be used as a potential biomarker for the treatment and prognostic classification of different breast cancer subtypes.

To clarify the potential regulatory mechanisms among the risk groups, functional enrichment analysis was performed on the 581 DERGs. According to the GO analysis, risk factors seemed to be closely related to the extracellular matrix, a key component of the tumor microenvironment that regulates cell growth and development and contributes to the transmission of cell signals. Therefore, it can be speculated that cuproptosis-related lncRNAs may play a key role in the tumor microenvironment of MM. The KEGG analysis indicated that the differentially expressed cuproptosis-related lncRNAs were highly enriched in the MAPK signaling pathway. There are five MAPK signaling pathways in mammals, which play important roles in tumor proliferation, apoptosis, invasion, and metastasis and participate in the occurrence and development of many kinds of tumors, including MM (50). A previous study demonstrated that activation of the MAPK pathway mediates the proliferation, survival, and migration of MM cells (51). Up to 50% of patients newly diagnosed with MM are affected by an abnormal MAPK pathway. Zhang et al. (52) found that inhibiting the MAPK pathway can reduce the proliferation and migration of colon cancer cells. In this study, the KEGG analysis also demonstrated that the differentially expressed cuproptosis-related lncRNAs were highly enriched in the neuroactive ligand-receptor interaction signaling pathway, indicating that cuproptosis-related lncRNAs may be involved in this pathway. Neuroactive ligands have been verified to affect neuronal functions by binding intracellular receptors, and they can bind transcription factors and regulate gene expression (53,54).
To sum up, bioinformatics methods combined with the TCGA database were adopted in this study to screen cuproptosis-related lncRNAs. The results demonstrated that TLX1NB and CNTFR-AS1 could regulate the prognosis of MM patients. Nevertheless, this study has some limitations. Research on cuproptosis-related genes is still at an early stage; this paper only applied the cuproptosis-related genes reported to date, and more cuproptosis-related genes may be identified in the future. There was a significant difference in gene expression between the high- and low-risk groups: the GO analysis showed that risk characteristics were closely related to the extracellular matrix, and the KEGG analysis showed that cuproptosis-related lncRNAs were highly enriched in the MAPK and neuroactive ligand-receptor interaction signaling pathways. In this study, however, the analyses were performed only from a statistical perspective, and basic experiments were not conducted. The findings of this study provide an important basis and direction for follow-up basic experimental and clinical studies.

Conclusions

In this study, a novel cuproptosis-related lncRNA prognostic model was constructed for MM patients. TLX1NB and CNTFR-AS1 may be cuproptosis-related lncRNAs associated with the prognosis of MM patients. Besides, the differentially expressed risk genes between the high- and low-risk groups were analyzed, and GO and KEGG analyses were carried out. These findings are anticipated to contribute to the improvement of clinical outcomes. However, further verification is still needed.

Figure 1. Schematic flow chart of this research. TCGA, The Cancer Genome Atlas; MM, multiple myeloma; lncRNAs, long non-coding RNAs; KM, Kaplan-Meier; C-index, concordance index; ROC, receiver operating characteristic; GO, Gene Ontology; KEGG, Kyoto Encyclopedia of Genes and Genomes.

Figure 2. Sankey diagram of the co-expression relationship between cuproptosis-related genes and cuproptosis-related lncRNAs. lncRNA, long non-coding RNA.

Figure 3. Correlation heatmap of the relationship between cuproptosis-related genes and lncRNAs. lncRNAs, long non-coding RNAs.

Figure 4. Kaplan-Meier survival analysis of patients. OS of patients in (A) the training group, (B) the testing group, and (C) all groups. OS, overall survival.

Figure 8. Results of the GO analysis. GO, Gene Ontology.

Figure 9. Results of the KEGG analysis. KEGG, Kyoto Encyclopedia of Genes and Genomes; MAPK, mitogen-activated protein kinase; GABA, gamma-aminobutyric acid.

Table 2. Specific number and name of Gene Ontology terms. BP, biological process; GO, Gene Ontology; CC, cellular component; MF, molecular function; GABA, gamma-aminobutyric acid.
Circular RNA EIF4G3 suppresses gastric cancer progression through inhibition of β-catenin by promoting δ-catenin ubiquitin degradation and upregulating SIK1

Increasing studies suggest that circular RNAs (circRNAs) are critical regulators of cancer development and progression. However, the biological roles and mechanisms of circRNAs in gastric cancer (GC) remain largely unknown. We identified the differentially expressed circRNAs in GC by analyzing Gene Expression Omnibus (GEO) datasets. We explored the biological roles of circRNAs in GC by in vitro functional assays and in vivo animal studies. We performed tagged RNA affinity purification (TRAP), RNA immunoprecipitation (RIP), mass spectrometry (MS), RNA sequencing, luciferase reporter assays, and rescue experiments to investigate the mechanism of circRNAs in GC. Downregulated expression of circular RNA EIF4G3 (circEIF4G3; hsa_circ_0007991) was found in GC and was associated with poor clinical outcomes. Overexpression of circEIF4G3 suppressed GC growth and metastasis through the inhibition of β-catenin signaling, whereas knockdown of circEIF4G3 showed the opposite effects. Mechanistic studies revealed that circEIF4G3 bound to δ-catenin protein to promote its TRIM25-mediated ubiquitin degradation and interacted with miR-4449 to upregulate SIK1 expression. Our findings uncovered a tumor suppressor function of circEIF4G3 in GC through the regulation of δ-catenin protein stability and the miR-4449/SIK1 axis. CircEIF4G3 may act as a promising prognostic biomarker and therapeutic target for GC.

Introduction

Gastric cancer (GC) is the fifth most common cancer and the third leading cause of cancer-related death worldwide [1]. Although great improvements have been made, the early diagnosis rate, radical resection rate, and five-year survival rate of GC patients are still unsatisfactory [2,3]. Therefore, there is an urgent need to find more effective biomarkers and therapeutic targets for GC diagnosis and therapy.

Circular RNAs (circRNAs) are produced from precursor mRNA back-splicing and have been implicated as important regulators of gene expression [4-7]. CircRNAs were initially considered byproducts of the splicing process and thought to have no functions. With the development of high-throughput sequencing and bioinformatics, circRNAs have been increasingly recognized as master regulators of various biological processes and key players in human health and diseases [8-10]. In particular, circRNAs have been shown to play important roles in cancer growth, metastasis, recurrence, and therapy resistance [11,12]. Owing to their closed structure and resistance to RNA exonucleases, circRNAs are more stable than their linear counterparts, showing potential for use as cancer biomarkers [13].

Accumulating studies suggest that circRNAs participate in cancer biology via multiple mechanisms. For instance, ciRS-7/CDR1as (circular RNA sponge for miR-7) constitutes a competing endogenous RNA (ceRNA) network with miRNAs [12]. Interestingly, CDR1as interacts with IGF2BP3 and compromises its pro-metastatic functions [14]. CDR1as also interacts with p53 and blocks its degradation by MDM2 [15]. In addition to acting as miRNA sponges, circRNAs can interact with RNA binding proteins (RBPs), regulate RNA splicing and gene transcription, act as protein scaffolds, and be translated into peptides [16-18].
For example, circRHOT1 promotes hepatocellular carcinoma (HCC) growth and metastasis by recruiting TIP60 to the NR2F6 promoter [19], and a novel protein, cGGNBP2-184aa, encoded by cGGNBP2 promotes intrahepatic cholangiocarcinoma (ICC) cell proliferation and metastasis [20]. Therefore, the multifaceted roles of circRNAs in the pathogenesis of GC and the underlying molecular mechanisms deserve further study.

In the present study, we demonstrated that a novel circRNA, hsa_circ_0007991 (named circEIF4G3), was significantly downregulated in GC cells and tumor tissues of patients with GC. The decreased expression of circEIF4G3 was associated with disease progression and predicted an adverse overall survival. Functional studies indicated that circEIF4G3 overexpression suppressed the growth and metastasis of GC, while circEIF4G3 knockdown showed the opposite effect. CircEIF4G3 bound to δ-catenin (catenin delta 1) protein and enhanced its TRIM25-mediated ubiquitination and degradation. CircEIF4G3 also acted as a miRNA sponge for miR-4449 and promoted the expression of its downstream target SIK1. Together, we identified circEIF4G3 as a tumor suppressive circRNA in GC, which may offer a new prognostic biomarker and therapeutic target for GC.

Patients and clinical samples

A total of 103 paired tumor and adjacent non-tumor tissues from GC patients, 120 serum samples from GC patients, 50 serum samples from gastritis patients, and 120 serum samples from healthy donors were obtained from Nantong Tumor Hospital between April 2018 and September 2020. Specimens were collected in accordance with institutional protocols. Written informed consent was obtained from all participants, and the study was approved by the Institutional Ethical Committee of Jiangsu University.

Bioinformatic analysis of circRNA expression profiles in Gene Expression Omnibus datasets

Microarray data were downloaded from the Gene Expression Omnibus (GEO) datasets, and the DESeq2 package was used to identify differentially expressed circRNAs. A fold change ≥ 2 and a P value < 0.05 were set as the thresholds for significantly differential expression.

Plasmid and siRNA transfection

Specific targeting siRNAs and overexpression plasmids were designed and synthesized by GenePharma (Shanghai, China) and Bersinbio (Guangzhou, China). Cells were plated in 6-well plates at a density of 2 × 10^5 per well and cultured overnight until 50-70% confluent. The plasmids and siRNAs were transfected into the cells with Lipofectamine 2000 (Life Technologies) in serum-free medium according to the manufacturer's instructions. Cells were changed to complete medium at 6 h after transfection and cultured for another 30 h.

Tagged RNA affinity purification (TRAP) assay

The TRAP assay was used to determine the interaction between circRNA and proteins. Control and circEIF4G3-overexpressing vectors containing the stem-loop structure of MS2 (MS2 and circRNA-MS2), together with a GST-MS2 overexpressing vector, were constructed by Biosense (Guangzhou, China). The MS2 and circRNA-MS2 vectors were co-transfected with GST-MS2 into GC cells to obtain the GST-MS2-circRNA complex. The complex was then pulled down with glutathione magnetic beads. The circRNA-binding proteins were identified by mass spectrometry and validated by western blot.

Dual luciferase reporter assay

Cells were cultured in 24-well plates and transfected with a control vector, a miRNA-binding-site-containing wild type (WT) or mutant (MUT) vector, as well as predicted miRNA mimics or controls (GenePharma, Suzhou, China). After 48 h of transfection, the luciferase activity was detected by the dual luciferase reporter assay system (Promega, MA, USA). The intensity of firefly luciferase was normalized to that of renilla luciferase, and the fold change of each miRNA relative to the negative control (NC) was calculated.
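The firefly/renilla normalization and fold-change computation just described lend themselves to a short R illustration; the `luc` data frame (columns firefly, renilla, group) is a hypothetical stand-in for the raw luminometer readings.

```r
# Normalize firefly signal to the renilla internal control,
# then express each ratio relative to the negative-control (NC) mean
luc$ratio <- luc$firefly / luc$renilla
nc_mean   <- mean(luc$ratio[luc$group == "NC"])
luc$fold  <- luc$ratio / nc_mean

# Mean fold change per transfected miRNA mimic
aggregate(fold ~ group, data = luc, FUN = mean)
```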
Immunohistochemistry (IHC)

For immunohistochemical analyses, 4% paraformaldehyde-fixed tissues were embedded in paraffin and cut into 4 μm-thick sections. The sections were incubated with a primary monoclonal antibody against Ki-67 (Cell Signaling Technology), followed by incubation with the secondary antibody for 30 min at room temperature. After being incubated with 3,3'-diaminobenzidine (DAB, Maxim, Fuzhou, China) for 5 min, the sections were counterstained with hematoxylin for 30 s. Finally, the sections were photographed under a TE2000 microscope (Nikon, Tokyo, Japan).

RNA-protein immunoprecipitation (RIP)

RIP assays were performed with the EZ-Magna RIP RNA-Binding Protein Immunoprecipitation Kit (Millipore, Billerica, MA, USA) according to the manufacturer's instructions. Cells at approximately 90% confluence were incubated with complete RIP lysis buffer containing RNase inhibitor and protease inhibitor. Magnetic beads were pre-incubated with the anti-Ago2 antibody for 1 h at room temperature, and lysates were immunoprecipitated with the beads at 4 °C overnight. The immunoprecipitated RNA complexes were then purified and quantified by qRT-PCR. Normal rabbit IgG was used as the negative control.

RNA sequencing

Total RNA was extracted from control and circEIF4G3-overexpressing GC cells and sequenced on an Illumina HiSeq sequencer (CloudSeq, Shanghai, China). Cutadapt, Hisat2, and Cuffdiff were used to align high-quality reads to the genome, obtain FPKM values, and calculate the differentially expressed genes between the control and circEIF4G3-overexpressing groups. The heatmap.2 function in R was used for cluster analysis of differentially expressed mRNAs based on FPKM values.

LC-MS/MS

Proteins were digested with sequencing-grade trypsin. The samples were analyzed by liquid chromatography tandem mass spectrometry (LC-MS/MS) to obtain the raw mass spectrometry results. Byonic software was used to analyze the raw files and search against the UniProt Homo sapiens database to obtain the identified proteins.

Co-immunoprecipitation (Co-IP) assay

To detect protein-protein interactions, cells were lysed in Pierce immunoprecipitation lysis buffer supplemented with a cocktail of proteinase inhibitors, phosphatase inhibitors, and RNase inhibitor (Thermo, Waltham, MA). After incubation at 4 °C overnight, the beads were washed with cell lysis buffer three times. The proteins were eluted from the magnetic beads for western blot analysis.

In vivo animal studies

For the xenograft tumor model, 4-week-old male BALB/c nude mice were purchased from the Model Animal Research Center at Nanjing University (Nanjing, China) and raised under controlled conditions with comfortable temperature and humidity. The mice were randomly divided into 2 groups (n = 5 per group) and subcutaneously injected with HGC-27 cells (5 × 10^6 cells per mouse) transfected with circEIF4G3-overexpressing or control vectors. The tumor size was measured every week, and the volume was calculated using the following formula: volume = width^2 × length / 2. The tumor tissues were harvested for hematoxylin and eosin (H&E) and IHC staining. The animal experiments were approved by the Animal Use and Care Committee of Jiangsu University.
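As a small worked example of the volume formula above, here is a sketch in R; the `tumors` data frame (columns width_mm, length_mm, week, group) is hypothetical caliper data, not the authors' measurements.

```r
# Volume = width^2 * length / 2, as in the formula above
tumor_volume <- function(width, len) width^2 * len / 2

tumors$volume <- tumor_volume(tumors$width_mm, tumors$length_mm)

# Endpoint comparison between circEIF4G3-overexpressing and control mice
endpoint <- subset(tumors, week == max(week))
t.test(volume ~ group, data = endpoint)
```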
Statistical analysis

Statistical analyses were carried out with SPSS software (Chicago, IL, USA). Student's t-test and the χ2-test were performed to analyze the significance of differences between groups. Survival curves were plotted as Kaplan-Meier curves and compared with the log-rank test in GraphPad Prism 5. Correlations were analyzed using Pearson's correlation coefficients. Differences were considered statistically significant at P < 0.05.

CircEIF4G3 is downregulated in GC and its lower level predicts poor prognosis

To identify the differentially expressed circRNAs in GC, we analyzed microarray datasets (GSE89143, GSE78092, and GSE93541) from the Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/). We found that several common circRNAs were differentially expressed between tumor tissues and adjacent non-tumor tissues in these datasets (Fig. 1A, Supplementary Fig. 1A). Considering the relative expression level and detection specificity, we chose hsa_circ_0007991 as the target for further study. The information on hsa_circ_0007991 can be queried in both Circbank and circBase. Hsa_circ_0007991 is composed of exons 3-5 of the linear transcript of the EIF4G3 gene, with a length of 301 nucleotides (abbreviated as circEIF4G3) (Fig. 1B). Sequencing results confirmed the existence of the back-splicing site in the PCR product amplified with divergent primers (Fig. 1C). In accordance, circEIF4G3 was validated by PCR amplification using divergent primers from cDNA but not gDNA of GC cells (Fig. 1D). Endogenous circEIF4G3 was resistant to RNase R digestion, while linear EIF4G3 mRNA was notably reduced by RNase R treatment (Fig. 1E). RNA-FISH assays indicated that circEIF4G3 was mainly located in the cytoplasm of GC cells (Fig. 1F). Subcellular fractionation assays showed the same result (Supplementary Fig. 1B).

We next examined the expression of circEIF4G3 in human GC cells and tissues by qRT-PCR. The results showed that circEIF4G3 expression levels were decreased in GC cells, including HGC-27, MKN-45, AGS, NCI-N87, and SGC-7901, compared to the normal human gastric mucosal epithelial cell line GES-1 (Supplementary Fig. 1C). We then verified the expression of circEIF4G3 in paired tumor and non-tumor tissue samples from patients with GC and observed that circEIF4G3 expression was significantly decreased in tumor tissues compared to adjacent non-tumor tissues (Fig. 1G). Further, we evaluated the association between circEIF4G3 expression level and pathological parameters. As shown in Additional Table 1, circEIF4G3 expression levels were negatively associated with TNM stage and venous invasion but showed no significant association with gender, age, tumor size, or differentiation stage. Lower expression of circEIF4G3 was strongly associated with a shorter survival time of patients with GC (Fig. 1H). Recently, several studies have demonstrated that deregulated circRNAs originating from tumor tissues are stable and easily detected in the serum or plasma of cancer patients. We found that the expression of circEIF4G3 was much lower in the serum of GC patients than in that of healthy individuals (Fig. 1I). The receiver operating characteristic (ROC) curve was used to investigate the diagnostic value of serum circEIF4G3 as a biomarker for GC. Serum circEIF4G3 distinguished GC cases from healthy controls with an AUC of 0.797. The sensitivity and specificity of circEIF4G3 for the diagnosis of GC were 0.59 and 0.98, respectively (Supplementary Fig. 1D).
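The serum ROC analysis can be sketched with the pROC package; `serum` (columns status and circ_level) is an assumed data frame, and the direction argument reflects that circEIF4G3 is lower in GC than in healthy controls.

```r
library(pROC)

# ROC for serum circEIF4G3 distinguishing GC from healthy controls;
# direction = ">" encodes that controls have HIGHER circEIF4G3 than cases
r <- roc(response = serum$status, predictor = serum$circ_level,
         levels = c("healthy", "GC"), direction = ">")
auc(r)  # the paper reports AUC = 0.797

# Sensitivity and specificity at the Youden-optimal threshold
coords(r, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))
```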
As indicated in Additional Table 2, we found that serum circEIF4G3 expression levels were inversely associated with lymph node and distant metastasis. Together, these data suggest that circEIF4G3 is downregulated in GC and may serve as a prognostic biomarker.

CircEIF4G3 overexpression attenuates GC growth and metastasis

To further explore the biological roles of circEIF4G3, we performed gain-of- and loss-of-function studies (Supplementary Fig. 2A and 3A). The results of cell growth and colony formation assays showed that ectopic expression of circEIF4G3 inhibited GC cell proliferation (Fig. 2A and 2B). CircEIF4G3 overexpression dramatically suppressed the migration and invasion of cells (Fig. 2C and 2D). Flow cytometry results showed that circEIF4G3 overexpression caused an increase in the percentage of apoptotic cells (Fig. 2E), as well as a dramatic reduction in S-phase and increase in G1-phase cells in HGC-27 and AGS lines (Fig. 2F). The results of qRT-PCR and western blot showed that the mRNA and protein levels of E-cadherin increased, while those of N-cadherin, Vimentin, and Slug decreased, in circEIF4G3-overexpressing cells compared to control cells (Supplementary Fig. 2B and 2C).

We then established a mouse xenograft tumor model to validate the effect of circEIF4G3 on GC growth. We injected circEIF4G3-overexpressing and control HGC-27 cells into nude mice and monitored tumor growth regularly. The results showed that circEIF4G3 overexpression significantly inhibited tumor growth (Fig. 2G). After 6 weeks, we sacrificed the mice and measured the tumor weights. Similarly, circEIF4G3 overexpression led to smaller tumor sizes (Fig. 2H). Immunohistochemical staining revealed that the percentage of Ki-67-positive proliferating cells decreased in the circEIF4G3-overexpressing group compared to the control group (Fig. 2I).

Subsequently, we designed two siRNAs specifically targeting the back-splicing site of circEIF4G3 in GC cells. CircEIF4G3 knockdown promoted GC cell proliferation, migration, and invasion (Supplementary Fig. 3I) and induced an increase in S-phase GC cells (Supplementary Fig. 3J). Taken together, these results indicate that circEIF4G3 performs tumor suppressive roles in GC.

CircEIF4G3 destabilizes δ-catenin protein and inactivates β-catenin signaling in GC cells

To test whether circEIF4G3 exerts its function via interacting with proteins, we conducted tagged RNA affinity purification (TRAP) assays and mass spectrometry analyses to detect the specific proteins bound by circEIF4G3. The LC-MS/MS results revealed that several proteins were consistently pulled down by circEIF4G3 in two GC cell lines (Fig. 3A). Five potential circEIF4G3-interacting proteins were identified through comprehensive analysis (Fig. 3B). We focused on δ-catenin, as it has been well recognized as a key player in the progression of many human cancers [21]. We then used TRAP and western blot to verify the interaction between circEIF4G3 and δ-catenin protein (Fig. 3B). Meanwhile, RIP assay results also indicated that circEIF4G3 was enriched in RNA co-precipitated by an anti-δ-catenin antibody in GC cells (Fig. 3C). Intriguingly, circEIF4G3 overexpression did not affect the δ-catenin mRNA level but reduced its protein level in GC cells (Fig. 3D-E). δ-catenin is an important modulator of canonical β-catenin signaling [22]. We found that circEIF4G3 overexpression dramatically decreased the β-catenin protein level in GC cells, while silencing circEIF4G3 had the opposite effect (Fig. 3F, Supplementary Fig. 3K).
β-catenin regulates various downstream targets, including cyclin D1 and c-Myc, to promote tumor progression [23]. We observed that circEIF4G3 overexpression inhibited, while circEIF4G3 knockdown promoted, the expression of c-Myc and cyclin D1 in GC cells (Fig. 3F, Supplementary Fig. 3K). Furthermore, the luciferase reporter activity of β-catenin was reduced in the circEIF4G3-overexpressing group compared to the control group (Fig. 3G). To further verify the role of circEIF4G3 in regulating β-catenin signaling, we used the β-catenin pathway activator LiCl. Compared with the control group, LiCl treatment induced the nuclear translocation of β-catenin in GC cells, while circEIF4G3 overexpression remarkably suppressed this effect (Fig. 3H). Consistent with the in vitro results, the expression of δ-catenin protein was elevated in tumor tissues of patients with GC who had low levels of circEIF4G3 (Fig. 3I, Supplementary Fig. 5F). Moreover, δ-catenin expression was decreased in mouse tumor tissues in the circEIF4G3-overexpressing group (Supplementary Fig. 8).

Next, we confirmed that δ-catenin exerts oncogenic activities in GC cells, as δ-catenin overexpression notably increased the proliferation, migration, and invasion abilities of GC cells (Supplementary Fig. 4A-D). We further performed rescue experiments and demonstrated that δ-catenin overexpression, at least partially, reversed the effects of circEIF4G3 on suppressing GC cell proliferation (Supplementary Fig. 5A-B), migration, and invasion (Supplementary Fig. 5C-D). In addition, δ-catenin overexpression also partially abrogated the decrease of β-catenin, c-Myc, and cyclin D1 expression caused by circEIF4G3 overexpression (Supplementary Fig. 5E). In summary, these data suggest that circEIF4G3 regulates β-catenin signaling by interacting with δ-catenin.

CircEIF4G3 promotes TRIM25-mediated ubiquitin degradation of δ-catenin

Considering that circEIF4G3 alters the δ-catenin protein but not mRNA level (Fig. 3D-E), we speculated that circEIF4G3 may destabilize δ-catenin protein through the ubiquitination/degradation system. To this end, we used the proteasome inhibitor MG132 to explore the effect of circEIF4G3 on δ-catenin protein degradation. As shown in Fig. 4A, the reduction of δ-catenin protein by circEIF4G3 overexpression was restored by MG132. We then transfected GC cells with circEIF4G3 and monitored the half-life of δ-catenin protein after cycloheximide (CHX) treatment. Compared to the control group, circEIF4G3 overexpression evidently promoted δ-catenin protein degradation, shortening its half-life (Fig. 4B). Bioinformatics analysis showed that δ-catenin protein has multiple ubiquitination modification sites. The levels of ubiquitinated δ-catenin protein were increased in GC cells when circEIF4G3 was overexpressed in the presence of ubiquitin (Fig. 4C). These results suggest that circEIF4G3 regulates δ-catenin protein stability by enhancing its ubiquitination-dependent degradation.

We screened the proteins co-precipitated by circEIF4G3 in the TRAP assay for potential E3 ligases. Western blot showed that TRIM25 was detectable among the proteins co-precipitated by circEIF4G3 (Supplementary Fig. 6A). In contrast, β-TrCP, a previously reported E3 ligase for δ-catenin ubiquitination, was not found among the proteins co-precipitated by circEIF4G3. Previous studies demonstrate that TRIM family proteins, including TRIM25, promote the degradation of their substrates through the ubiquitin-proteasome pathway [24,25].
We then performed co-immunoprecipitation (Co-IP) assays, and the results showed that TRIM25 bound to δ-catenin (Fig. 4D). RNA FISH and immunofluorescence results showed that circEIF4G3, TRIM25, and δ-catenin co-localized in the cytoplasm of GC cells (Fig. 4E, Supplementary Fig. 6B). More importantly, we found that TRIM25 overexpression decreased the protein level but not the mRNA level of δ-catenin (Fig. 4F, Supplementary Fig. 6C). The ubiquitination of δ-catenin was increased in GC cells with TRIM25 overexpression (Fig. 4G). Taken together, these data suggest that TRIM25 functions as a ubiquitin E3 ligase for circEIF4G3-regulated δ-catenin ubiquitination and degradation in GC cells. Previous studies suggest that TRIM25 uses RNA as a scaffold for efficient ubiquitination of its targets [26]. We then explored whether the ubiquitin-ligase activity of TRIM25 toward δ-catenin depends on the presence of circEIF4G3. As expected, the loss of circEIF4G3 notably reduced TRIM25-mediated ubiquitination and degradation of δ-catenin in GC cells (Supplementary Fig. 6D). We performed Co-IP assays to further explore whether circEIF4G3 acts as a scaffold to enhance the binding of TRIM25 to δ-catenin and found that the association between TRIM25 and δ-catenin was enhanced in GC cells by circEIF4G3 overexpression (Fig. 4H). These results indicate that circEIF4G3 acts as a scaffold to promote the interaction between TRIM25 and δ-catenin and subsequently facilitates TRIM25-mediated ubiquitination and degradation of δ-catenin.

CircEIF4G3 acts as a miR-4449 sponge in GC

Previous studies suggest that circRNAs regulate target gene expression by sponging miRNAs [27-29]. We therefore examined whether circEIF4G3 could function as a miRNA sponge. RIP assay results showed that circEIF4G3 was specifically enriched in beads carrying the Ago2 antibody compared with control IgG, suggesting the occupancy of Ago2 on circEIF4G3 (Fig. 5A). We analyzed the potential target miRNAs of circEIF4G3 with bioinformatic tools (starBase, version 2.0, and Circbank). We designed a luciferase screening assay using a circEIF4G3-luciferase reporter and miRNA mimics and found that the luciferase activity was notably reduced when co-transfected with miR-4449 (Fig. 5B). We further identified a potential binding site for miR-4449 in circEIF4G3 (Fig. 5C). Further analysis showed that miR-4449 mimics notably suppressed the luciferase activity of the circEIF4G3 wild-type reporter but did not affect that of the circEIF4G3 mutant reporter (Fig. 5C). We further confirmed that miR-4449 overexpression enhanced GC cell proliferation, migration, and invasion, while circEIF4G3 overexpression antagonized these effects (Fig. 5D-G), indicating that circEIF4G3 may partially exert its tumor suppressive effect by sponging miR-4449 in GC.

To identify the downstream signaling pathways and target genes regulated by circEIF4G3, we performed RNA-seq on control and circEIF4G3-overexpressing GC cells. Pathway enrichment analyses showed that the transcripts altered by circEIF4G3 overexpression were enriched in many signaling pathways associated with tumor progression, including β-catenin signaling (Fig. 5H). Therefore, we next explored whether circEIF4G3 could regulate β-catenin signaling through miR-4449 in GC cells. The results showed that circEIF4G3 overexpression decreased the protein levels of β-catenin, c-Myc, and cyclin D1, while simultaneous overexpression of miR-4449 mimics compromised this effect (Fig. 5I), indicating that circEIF4G3 may also inhibit β-catenin signaling by interacting with miR-4449.
CircEIF4G3 regulates the miR-4449/SIK1 axis to inactivate β-catenin signaling

We further investigated the target genes of miR-4449 that are regulated by circEIF4G3. RNA-seq results combined with bioinformatic prediction using TargetScan and miRDB identified several candidate target genes (Fig. 6A). We chose salt-inducible kinase 1 (SIK1) for further study, as SIK1 mRNA and protein levels were upregulated in the circEIF4G3-overexpressing and miR-4449 inhibitor groups and decreased in the circEIF4G3 knockdown and miR-4449 mimics groups, respectively (Fig. 6B-C, Supplementary Fig. 7A-B). Dual-luciferase reporter assays showed that miR-4449 mimics reduced the luciferase activity of reporter genes containing the SIK1 binding site for miR-4449 compared with the control group, and this reduction was abrogated when the miR-4449 binding site in SIK1 was mutated (Fig. 6D). TCGA data analysis showed that miR-4449 was upregulated in tumor tissues compared to non-tumor tissues of patients with GC (Fig. 6E). We further investigated SIK1 gene expression in 36 paired tumor and adjacent non-tumor tissues and found that SIK1 expression was downregulated in GC and positively associated with that of circEIF4G3 (Fig. 6F, Supplementary Fig. 7C). Moreover, SIK1 expression was increased in mouse tumor tissues in the circEIF4G3-overexpressing group (Supplementary Fig. 8).

SIK1 has been reported to act as a tumor suppressor in hepatocellular carcinoma by regulating β-catenin signaling [30]. Thus, we explored whether circEIF4G3 modulates β-catenin signaling through SIK1 in GC. Our results showed that SIK1 overexpression markedly decreased β-catenin, c-Myc, and cyclin D1 protein levels, as well as the luciferase activity of β-catenin in GC cells (Supplementary Fig. 7D-E). The effect of SIK1 on the proliferation and invasion of GC cells was also examined. As shown in Supplementary Fig. 7F-H, SIK1 overexpression resulted in a strong inhibition of GC cell proliferation, migration, and invasion. Then, we overexpressed circEIF4G3 and knocked down SIK1 in GC cells simultaneously (Supplementary Fig. 7I). Our results revealed that GC cell proliferation, migration, and invasion were greatly inhibited by circEIF4G3 overexpression; however, this inhibitory effect was reversed by simultaneous knockdown of SIK1 (Fig. 6G-I). A similar effect was observed on the expression and transactivity of β-catenin (Fig. 6J). Taken together, these results indicate that SIK1 is a direct target of miR-4449 and that circEIF4G3 regulates the miR-4449/SIK1 axis to inactivate β-catenin signaling in GC.

Discussion

The regulatory potential of circRNAs in gene expression has become a focus in cancer biology [31]. Increasing evidence suggests that circRNAs are aberrantly expressed in multiple cancers [32], including lung cancer [33], breast cancer [34], colorectal cancer [35], and hepatocellular carcinoma [36]. CircRNAs can be used as promising biomarkers for cancer diagnosis and prognosis due to their high stability and specific loop structure [37]. In this study, we identified circEIF4G3, a novel circRNA produced by back-splicing of the EIF4G3 gene transcript, as downregulated in the tumor tissues and serum of patients with GC. We found that patients with GC who had a high level of circEIF4G3 presented significantly better survival than those who had a low level, which provides a new prognostic biomarker for GC.
Moreover, the gain-of- and loss-of-function studies showed that circEIF4G3 overexpression suppressed while its knockdown promoted cancer progression, indicating that circEIF4G3 plays a tumor suppressive role in GC.

FISH assays showed that circEIF4G3 was mainly distributed in the cytoplasm, where circRNAs may function as miRNA sponges, interact with RNA binding proteins (RBPs) [38], or encode proteins [39,40]. Emerging studies suggest that circRNAs may play important roles in cancer progression by interacting with RBPs. For example, circRNAs derived from HUR restrain CNBP-facilitated HUR expression, resulting in the repression of GC progression [41]. CircZKSCAN1 binds to FMRP and blocks the interaction between FMRP and CCAR1 in HCC cells, subsequently inhibiting the transactivity of the Wnt signaling pathway [42]. We searched the circRNADb database and found no open reading frame (ORF) in the sequence of circEIF4G3, suggesting that the probability of circEIF4G3 encoding a protein is low. We then developed a highly specific circRNA pulldown assay and identified the potential interacting proteins of circEIF4G3 by mass spectrometry. We validated that circEIF4G3 directly bound to δ-catenin and promoted its degradation by facilitating the interaction between δ-catenin and TRIM25.

δ-catenin, also known as p120-catenin, is a member of an emerging subfamily of Armadillo repeat (ARM) proteins [43] and a regulator of β-catenin signaling [22]. δ-catenin is a multifaceted intracellular signaling protein, which may serve as an oncogene by driving migration and anchorage independence [44-46]. Owing to the epithelial-mesenchymal transition, the loss of E-cadherin function or expression during cancer progression leads to the transfer of δ-catenin from the cell membrane to the cytoplasm or nucleus [47,48]. δ-catenin modulates canonical β-catenin signaling by forming a complex with its specific binding partner Kaiso [22]. Emerging evidence suggests that δ-catenin plays an important role in the development and progression of cancers [47,49,50]. Tang et al. demonstrated that δ-catenin regulates EMT, HCC cell invasion, and metastasis through the activation of the β-catenin signaling pathway [49]. However, little is known about the regulation of δ-catenin in cancer. Herein, we found that δ-catenin promoted GC cell growth and metastasis. Moreover, circEIF4G3 facilitated TRIM25-mediated ubiquitin degradation of δ-catenin. TRIM25 has previously been reported to be bound by non-coding RNAs to exert its E3 ubiquitin ligase activity and regulate antiviral innate immunity [51]. A more recent study also demonstrates that TRIM25 promotes HCC cell survival and growth by targeting the Keap1-Nrf2 pathway [52]. Co-IP results showed that TRIM25 is an E3 ubiquitin ligase that interacts with δ-catenin and degrades it via the ubiquitin-proteasome pathway in GC cells. We also found that circEIF4G3 bound to δ-catenin and promoted its association with TRIM25, leading to increased ubiquitination of δ-catenin by TRIM25. These data provide a new mechanism for the regulation of δ-catenin protein stability by circRNAs.

Since most circRNAs contain miRNA response elements (MREs), they can also serve as miRNA sponges [12]. For example, circTP63 has conserved binding sites for miR-873-3p and promotes lung squamous cell carcinoma progression by upregulating FOXM1 [53]. CircRanGAP1 acts as a competing endogenous RNA for miR-877-3p to increase VEGFA expression, promoting the proliferation and metastasis of GC [54].
In our study, we identified miR-4449 as a miRNA bound by circEIF4G3. Previous studies suggest that miR-4449 expression is upregulated in the serum of patients with multiple myeloma and may serve as a potential biomarker [55]. Yan et al. suggest that miR-4449 promotes colorectal cancer cell proliferation via regulation of the SOCS3/STAT3 signaling pathway [56]. However, the function and regulation of miR-4449 in GC remain largely unknown. We analyzed TCGA data and found that miR-4449 expression was elevated in GC tissues. Overexpression of miR-4449 promoted the proliferation, migration, and invasion of GC cells, implying that miR-4449 may function as an oncogene. Rescue experiments demonstrated that miR-4449 reversed the inhibitory effects of circEIF4G3 overexpression on GC cell proliferation, migration, and invasion, indicating that circEIF4G3 may play a tumor suppressive role by sponging miR-4449.

We further performed RNA-seq to identify the differentially expressed genes in circEIF4G3-overexpressing GC cells. Gene ontology analysis indicated that the genes altered by circEIF4G3 overexpression were enriched in multiple critical signaling pathways associated with cancer progression, including β-catenin signaling. By intersecting the RNA-seq data and bioinformatic prediction results, we focused on SIK1, a protein of the AMP-activated kinase (AMPK) family, which has been suggested as a tumor suppressor in many solid tumors, such as HCC [30], breast cancer [57], and colorectal cancer [58]. Previous studies suggest that SIK1 disrupts the binding of β-catenin to the TBL1/TBLR1 complex, thereby inactivating β-catenin signaling [30]. We confirmed that SIK1 overexpression suppressed GC progression while its knockdown reversed the tumor suppressive role of circEIF4G3, and we identified a positive correlation between circEIF4G3 and SIK1 in tumor tissues of patients with GC, implying that SIK1 is an important downstream target of circEIF4G3. Moreover, we showed that circEIF4G3 increased SIK1 expression and decreased β-catenin expression and transactivity in GC cells, indicating that circEIF4G3 interferes with β-catenin signaling by modulating the miR-4449/SIK1 axis.

Conclusion

Conclusively, our study revealed the role of a new circRNA, circEIF4G3, in GC progression and elucidated its mechanism of action (Fig. 7). CircEIF4G3 expression was downregulated in patients with GC and predicted poor prognosis. CircEIF4G3 destabilized δ-catenin by forming circEIF4G3/δ-catenin/TRIM25 RNA-protein ternary complexes, which consequently enhanced TRIM25-mediated ubiquitination and proteasomal degradation of δ-catenin. In addition, circEIF4G3 functioned as a sponge of miR-4449 and in turn promoted the expression of SIK1. These dual regulatory mechanisms subsequently led to the inactivation of β-catenin signaling and the suppression of GC progression. Therefore, our findings provide a promising biomarker for GC prognosis and a potential target for GC therapy.

Fig. 7. Proposed model for the roles and mechanisms of circEIF4G3 in GC progression. CircEIF4G3 destabilizes δ-catenin protein by enhancing TRIM25-mediated ubiquitin degradation and functions as a miRNA sponge to modulate the miR-4449/SIK1 axis, which synergistically leads to the inactivation of β-catenin signaling and the inhibition of GC progression.
Removal of Interictal MEG-Derived Network Hubs Is Associated With Postoperative Seizure Freedom

Objective: To investigate whether MEG network connectivity was associated with epilepsy duration, to identify functional brain network hubs in patients with refractory focal epilepsy, and to assess whether their surgical removal was associated with post-operative seizure freedom.

Methods: We studied 31 patients with drug-refractory focal epilepsy who underwent resting-state magnetoencephalography (MEG) and structural magnetic resonance imaging (MRI) as part of pre-surgical evaluation. Using the structural MRI, we generated 114 cortical regions of interest, performed surface reconstruction, and carried out MEG source localization. Representative source-localized signals for each region were correlated with each other to generate a functional brain network. We repeated this procedure across three randomly chosen one-minute epochs. Network hubs were defined as those regions with the highest intra-hemispheric mean correlations. Post-operative MRI identified the regions that were surgically removed.

Results: Greater mean MEG network connectivity was associated with a longer duration of epilepsy. Patients who were seizure free after surgery had more hubs surgically removed than patients who were not seizure free (AUC = 0.76, p = 0.01), consistently across three randomly chosen time segments.

Conclusion: Our results support a growing literature implicating network hub involvement in focal epilepsy, the removal of which by surgery is associated with a greater chance of post-operative seizure freedom.

INTRODUCTION

Epilepsy affects 50 million people worldwide, with one third not responding to medication. Neurosurgical treatment is potentially curative in focal epilepsy if the source of the epilepsy, "the epileptogenic zone", can be identified and removed. The goal of pre-surgical evaluation is to identify the epileptogenic zone using semiology, neuroimaging, and neurophysiology (1). However, despite the abundant data that are used to inform clinical decision making, around 50-70% of patients continue to experience post-operative seizures. The possibility of post-operative seizures, together with the risk of adverse effects, acts as a significant barrier to surgery, as some patients who may benefit do not proceed to operation. The ability to better predict whether patients will have favorable post-operative outcomes (in terms of seizure freedom) would therefore be highly beneficial. The identification of a measure for accurate outcome prediction is challenging, in part due to the complexity of brain network interactions.

Functional brain networks derived from magnetoencephalography (MEG) data can be inferred by computing the pairwise similarity of brain regions. Several studies have shown increased MEG functional connectivity in patients with epilepsy compared to controls, even in inter-ictal periods (2-6). In two separate studies, Jin et al. (7) showed altered MEG network "hubs" (those regions with high network connectivity) in temporal areas in patients with hippocampal sclerosis, and increased network efficiency in patients with focal cortical dysplasia (8). With respect to surgical outcomes, Nissen et al. (9) investigated whether MEG network hubs overlapped more with the resection area in seizure-free patients. The authors reported that hubs were localized within the area later resected in 9 of 14 seizure-free patients, but in none of the patients who had post-operative seizures.
A later study from the same group showed, in a larger cohort of 94 patients, that areas with increased functional connectivity significantly overlapped with tissue that was later resected, but this overlap was not associated with outcome. The study by Englot et al. (10) also demonstrated increased correlations in areas that were later resected in patients with good outcomes. Aydin et al. (11) suggested MEG networks could be used to predict outcome, and Krishan et al. (12) suggested that epileptogenic source localization is feasible using MEG connectivity analysis irrespective of the presence or absence of inter-ictal spikes. In addition to surgical outcomes, MEG network properties have been related to epilepsy duration (i.e., the number of years a patient has had epilepsy) and age of onset. For example, Englot et al. (10) showed overall network connectivity to be negatively associated with epilepsy duration, whilst in contrast Madhavan et al. (13) showed a positive association. Jin et al. (8) showed a negative association with age of seizure onset. Taken together, the MEG network literature suggests increased connectivity in patients, particularly in "hub" areas that are later resected, which may be related to outcome and associated with duration.

In this study we investigated MEG hubs and their removal in a cohort of 31 patients. Furthermore, since within-patient consistency is a critical step required before clinical application, we assessed the consistency of these results across three different time segments and four different parcellations. We hypothesized that the removal of high-strength nodes would result in seizure freedom. Our findings support earlier literature indicating strong involvement of hub nodes in epileptogenic networks (14).

Patients

We retrospectively analyzed data from 31 patients who underwent pre-operative evaluation and subsequent epilepsy surgery at the National Hospital for Neurology and Neurosurgery, Queen Square, London. Outcomes of seizure freedom were assessed at least 12 months post-operatively according to the ILAE classification (15). In this cohort, 19 of the 31 patients had post-operative seizures (ILAE 2 or greater). No patient had undergone prior neurosurgery. There was no significant difference between outcome groups in terms of age, sex, or location of resection (Mann-Whitney U-test for age; χ2-test for differences in location, side, and sex) (see Table 1 for a summary and Supplementary Table 1 for detailed patient information).

MRI Acquisition and Processing

T1-weighted MRI was acquired for all patients pre-operatively and within 12 months post-operatively using a 3T GE Signa HDx scanner (General Electric, Waukesha, Milwaukee, WI). Standard imaging gradients with a maximum strength of 40 mT/m and a slew rate of 150 T/m/s were used. All data were acquired using a body coil for transmission and an 8-channel phased array coil for reception. The high-resolution T1-weighted volumetric image was acquired with 170 contiguous 1.1 mm-thick slices (matrix, 256 × 256; in-plane resolution, 0.9375 × 0.9375 mm). The pre-operative MRI was used to generate cortical regions using the standard FreeSurfer recon-all pipeline (16). In brief, this performs intensity normalization, skull stripping, subcortical volume generation, gray/white segmentation, and parcellation (16-18). Surfaces were visually inspected and manually corrected where necessary. We then generated labels for the cortex from the Lausanne multi-resolution parcellation (https://github.com/mattcieslak/easy_lausanne)
(19) using surface-based registration. This resulted in four different parcellations, which were later used to generate four networks per subject. These parcellations have different numbers of regions: the coarsest contains 68 regions and is the Desikan-Killiany parcellation, which is based on anatomical boundaries (18), while the finest contains 448 regions, which are subdivisions of the lower-resolution parcellations. To identify which regions were affected, and which were completely spared, by the resection, we linearly registered the post-operative MRI to the pre-operative MRI using FSL FLIRT (20,21). We then overlaid the post-operative MRI on the pre-operative MRI using FSLview and manually drew a mask to delineate the resected tissue (22). Care was taken to ensure that masks did not extend beyond, or fall short of, boundaries known to be resected (e.g., beyond the sylvian fissure for a temporal lobe resection). Care was also taken to account for any brain shift, as in our previous study (22). For one patient (IDP 851), a post-operative MRI was unavailable, and the surgery report from the clinical team was therefore used to identify the resected tissue mask. The masks were then overlaid on the volumetric regions of interest from the Lausanne parcellations listed above. Regions overlapping the mask were labeled as removed, and the others were labeled as spared by surgery. If any overlap was present, the region was considered removed.

MEG Acquisition and Processing

MEG data for all experiments were recorded using a 275-channel CTF Omega whole-head gradiometer system (VSM MedTech) in a magnetically shielded environment, with a 600 Hz sampling rate. After participants were comfortably seated in the MEG, head localizer coils were attached to the nasion and to preauricular regions 1 cm anterior to the left and right tragus to monitor head movement during the recording sessions. Patients were requested to keep as still as possible during the individual 6-min recording epochs, awake and with their eyes closed in the resting state. No patient had any overt seizure during the MEG recordings. The total length of MEG recording differed between patients (mean 16.70 min, standard deviation 6.82 min). Although the MEG recording duration differed slightly for each patient, it did not differ significantly between the outcome groups (Table 1). The MEG sensor data were pre-processed in two steps using the Brainstorm software (23). First, data were notch filtered at 50 Hz (24) (IIR 2nd-order zero phase-lag, Brainstorm implementation), followed by bandpass filtering between 1 and 55 Hz (broadband; zero phase-lag, even-order, linear-phase FIR filter based on a Kaiser window design, Brainstorm implementation) using Brainstorm's pre-processing module (Figure 1A). Second, the band-passed data were decomposed into components using ICA, followed by removal of eye blink and cardiac components. Co-registration of the MEG helmet with the pre-operative T1-weighted MRI scan was performed using fiducials (nasion and preauricular points). MEG data were source reconstructed using sLORETA, a distributed model with zero localization error (25). The forward model (head model) was built using an overlapping multiple local sphere head model, which has accuracy similar to a boundary element model but is orders of magnitude faster to compute (10,26), with 15,000 voxels constrained perpendicular to the cortical surface.
These 15,000 voxels were divided into cortical regions of interest (ROIs) using the Lausanne parcellation schemes. We derived one time series per ROI using a flipped-mean approach (23), resulting in, for the scale-114 parcellation, a 114 × (number of time points) matrix (Figure 1D). The pre-processing pipeline is illustrated in Figure 1.

Network Construction and Analysis

Three one-minute epochs (10), each separated by at least five minutes, were chosen randomly for each patient. Given that some patients had insufficient durations of artifact-free recording, the sample size was reduced for epochs two and three. The results presented in the main text are for the first epoch, with the others shown in the Supplementary Materials. A two-second sliding window with 50% overlap was computed over the source-reconstructed time series, extracting one functional connectivity matrix (amplitude correlation using Pearson correlation) per 2 s window (Figure 1E, left panel). The time-varying functional connectivity matrices were temporally averaged across windows to a single matrix, which represents the functional network of the entire epoch. The same procedure was repeated for each of the other two epochs. Figures 1D,E summarize the methods.

The intra-hemispheric node strength, which we defined as the mean correlation of a node with the other nodes within the same hemisphere, was calculated. Nodes with high node strength (large positive values) were hypothesized to be epileptogenic, and thus their subsequent removal was expected to result in better patient outcomes. As all patients had unilateral resections, i.e., resection in only one hemisphere, we posited that connectivity ipsilateral to the epileptogenic zone is stronger than connectivity within the contralateral hemisphere. We therefore excluded inter-hemispheric connectivity (Figure 1E), since we expected it to hold less discriminatory information. We note that field spread may lead to spurious correlations; however, since the same methods were applied to all patients, we do not expect this to be a confound that could explain either outcome or duration.

To quantify the difference in node strength between removed and spared tissue, we used the area under the receiver operating characteristic curve (AUC), which is equivalent to the normalized non-parametric Mann-Whitney U statistic; we term this measure D_RS (Distinguishability of Removed vs. Spared node strength). A D_RS value of 0 or 1 indicates complete distinguishability of resected tissue from spared tissue. A value of 0 indicates that the strength of all resected nodes is higher than that of all spared nodes; in contrast, a value of 1 indicates that the strength of all spared nodes is higher than that of all resected nodes. Any value around 0.5 indicates similar rank-ordering of node strengths from both tissue types (27). This is a non-parametric method, which is robust to outliers, is effective even with non-Gaussian distributions, and generates a single value of D_RS per patient (Figure 2 provides an illustrative example). This measure was introduced by Wang et al. (27).
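The sliding-window network construction and the D_RS statistic can be summarized in a few lines of R. This is a sketch under stated assumptions: `ts` is a (regions × time points) source-reconstructed matrix for one hemisphere sampled at 600 Hz, and `removed` is a logical vector marking resected regions; neither object comes from the authors' pipeline.

```r
# Time-averaged amplitude-correlation network from 2 s windows, 50% overlap;
# 1200 samples = 2 s at the assumed 600 Hz sampling rate
sliding_fc <- function(ts, win = 1200, step = 600) {
  starts <- seq(1, ncol(ts) - win + 1, by = step)
  mats <- lapply(starts, function(s) cor(t(ts[, s:(s + win - 1)])))
  Reduce(`+`, mats) / length(mats)
}

fc <- sliding_fc(ts)
diag(fc) <- NA
strength <- rowMeans(fc, na.rm = TRUE)  # intra-hemispheric node strength

# D_RS: normalized Mann-Whitney U of spared vs. removed node strengths;
# 0 when every removed node is stronger than every spared node
u <- wilcox.test(strength[!removed], strength[removed])$statistic
d_rs <- as.numeric(u) / (sum(removed) * sum(!removed))
```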
Hypothesizing that ILAE 1 patients would have lower D_RS values (see Figure 2) (10,27), we used a one-tailed Mann-Whitney U (rank-sum) test to compare the outcome groups. We computed 95% confidence intervals of the AUC using a logit transformation (28). [Figure 1 caption, continued: the post-operative T1-weighted MRI is overlaid on the pre-operative T1-weighted MRI to obtain the resection mask. (D) Each parcellated source time series is labeled as removed or spared using the resection mask, and a functional connectivity matrix is obtained for every sliding window (2-s windows with 50% overlap). (E) The functional connectivity matrices are then averaged to obtain a single connectivity matrix; node strength is obtained from the averaged matrix, followed by the D_RS calculation between spared and removed regions.]

To test associations between the mean of the entire functional connectivity matrix, i.e., the average of all connections (mean FC), and log10 epilepsy duration (DUR), we compared the following two regression models under the likelihood ratio test to obtain a p-value for each time segment: Model_1: meanFC ~ 1 + DUR vs. Model_0: meanFC ~ 1. This tests whether log10 epilepsy duration explains any of the variance in FC across subjects. The regression models were fitted using a robust regression approach. To test whether duration and mean FC were positively associated over all segments, we used a linear mixed effects model in which the time segment was modeled as a random effect: LME: meanFC ~ 1 + DUR + (1|segment). To test whether duration explained significantly more variance in mean FC, we compared this against the null model LME_0: meanFC ~ 1 + (1|segment), again using a likelihood ratio test.

High Connectivity in Resected Tissue Is Associated With Favorable Outcome

We hypothesized that the removal of high-strength nodes would result in seizure freedom. Figure 3 shows the pre-operative node strength for two example patients, with their later resections overlaid in blue. Patient 1220 (Figure 3A) had a left-sided temporal lobe resection, with many non-resected high-strength nodes. In contrast, patient 1022 (Figure 3B) had a right-sided temporal lobe resection and was seizure-free afterwards. Many of the highest-strength "hub" nodes lay within the resection zone, meaning they were subsequently removed by surgery. A D_RS score of 0.07 reflects that (i) removed nodes tend to have higher strength than spared nodes and (ii) the difference between removed and spared nodes is large.

Consistency of Findings

Extending the analysis to include all 31 patients shows significantly lower D_RS values (i.e., the resected and spared ROIs are more distinguishable) in patients with post-operative seizure freedom, compared to those who were not seizure-free (one-tailed Wilcoxon rank sum p = 0.01, AUC = 0.76, 95% CI = 0.54-0.90, Figure 4). Confusion matrices for the ROC curve optimized for maximum accuracy are shown in Supplementary Table 2. To test the temporal robustness of this result, the entire analysis was repeated for two additional one-minute segments, separated by at least five minutes. Supplementary Table 2 shows these findings for the 28 patients for whom sufficient data were available to repeat the analysis. For both additional segments, ILAE 1 patients still had significantly lower D_RS values than patients with ILAE > 1 outcomes (segment 2: p = 0.04, AUC = 0.7, 95% CI = 0.47-0.86; segment 3: p = 0.02, AUC = 0.74, 95% CI = 0.51-0.88).
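The duration-connectivity model comparison described under Statistical Analysis above can be sketched as follows; this is a minimal sketch on synthetic data that fits by ordinary maximum likelihood (OLS/MixedLM) rather than the bisquare robust regression used in the study, and all names are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Synthetic long-format data: one row per patient per time segment.
rng = np.random.default_rng(0)
n_pat, segs = 31, [1, 2, 3]
dur = rng.uniform(0.3, 1.6, n_pat)               # log10 duration in years
df = pd.DataFrame({
    "segment": np.repeat(segs, n_pat),
    "DUR": np.tile(dur, len(segs)),
})
df["meanFC"] = 0.10 + 0.05 * df["DUR"] + rng.normal(0, 0.03, len(df))

def lr_pvalue(ll_full, ll_null, df_diff=1):
    """Likelihood ratio test for nested models."""
    return chi2.sf(2.0 * (ll_full - ll_null), df_diff)

# Per-segment test: meanFC ~ 1 + DUR  vs  meanFC ~ 1
seg1 = df[df["segment"] == 1]
p_seg1 = lr_pvalue(smf.ols("meanFC ~ DUR", seg1).fit().llf,
                   smf.ols("meanFC ~ 1", seg1).fit().llf)

# All segments: random intercept per segment; fit by ML (reml=False)
# so that the log-likelihoods of the nested models are comparable.
full = smf.mixedlm("meanFC ~ DUR", df, groups=df["segment"]).fit(reml=False)
null = smf.mixedlm("meanFC ~ 1", df, groups=df["segment"]).fit(reml=False)
p_overall = lr_pvalue(full.llf, null.llf)
```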
The networks analyzed in Figure 4 comprise cortical networks based on 114 ROIs using the Lausanne subparcellation of the original Desikan-Killiany (DK) atlas (18). Repeating the analysis for higher-resolution subparcellations, comprising 219 or 448 regions, the finding of lower D_RS values for ILAE 1 patients is replicated for this time segment (p < 0.05, Supplementary Table 1), but was not significant for the other time segments or when using the DK parcellation (Supplementary Table 3).

Higher Connectivity Is Associated With Longer Duration

Using a linear regression model robust to outliers, we found that the duration of epilepsy was positively associated with mean FC, in contrast to the negative association reported by Englot et al. (10) (Figure 5) (likelihood ratio test p = 0.03, adjusted R2 = 0.1). The positive association was also present for time segment 3 (likelihood ratio test p << 0.01, adjusted R2 = 0.17), but was not significant for segment 2 (likelihood ratio test p = 0.1, adjusted R2 < 0.01). Given the conflicting result from segment 2, we applied a linear mixed effects model that incorporates the segment number as a random effect, boosting overall statistical power by utilizing all available data. This approach found a significant positive association overall between duration and mean FC (likelihood ratio test p = 0.02).

DISCUSSION

In this study we investigated pre-operative functional connectivity networks, constructed in source space, using MEG recordings from 31 patients with refractory focal epilepsy who later underwent epilepsy surgery. Networks were constrained by pre- and post-operative MRI, allowing accurate delineation of resected regions. We report three main findings. First, seizure-free patients showed higher pre-operative node strength in surgically-removed regions compared to surgically-spared regions. [FIGURE 5 | Scatter plot illustrating the relationship between epilepsy duration in years and mean global functional connectivity. Each "x" marker represents an individual patient. The dashed line represents the line of best fit using bisquare linear regression robust to outliers. The association is significant with p-value = 0.03 (likelihood ratio test) and adjusted R2 = 0.1.] Second, capturing this discrepancy between surgically-removed and surgically-spared regions patient-specifically with our proposed D_RS measure, we found significant differences in D_RS between outcome groups. Third, overall network connectivity strength showed a weak, but significant, positive association with epilepsy duration. Our approach builds on our previous work with intracranial EEG, where we showed that patients generally have better seizure outcomes when high-strength nodes are surgically removed (27)(28)(29)(30). Other groups have reported similar findings from intracranial EEG (31)(32)(33)(34)(35)(36)(37). With MEG data, Nissen et al. (9) showed, in a cohort of 22 patients, that interictal source-localized network hubs overlapped with the resection in seizure-free patients only. There, the authors applied node betweenness centrality (38) as their measure of hubness, as opposed to our measure of node strength. Node strength and betweenness centrality are highly correlated, so our results are in strong agreement. Jin et al. (39) used similar methods (nodal betweenness centrality) applied to source-localized interictal MEG and reported altered network hubs in patients, as compared to healthy controls. Englot et al.
(10) reported that increased connectivity in the resected region was more frequent in seizure-free patients. Our findings support a strong involvement of hub nodes in epileptogenic networks (14). We investigated the robustness of our results to the choice of time segment. This is important clinically because it is unknown whether there is an optimal time segment, or whether results may vary over time. Our finding of consistent differences in D_RS values between outcome groups, regardless of time segment, gives confidence that segments of one-minute duration are sufficient. Previous studies have also found one minute to be sufficient for consistent predictions in most cases (10,27), and short durations to be sufficient to capture stationary aspects of the functional connectivity (40). In contrast to the consistency across time segments, we found some variability associated with the choice of spatial parcellation. Although a trend of increased D_RS in poor-outcome patients was present in all analyses (Supplementary Table 1), this consistently met significance across all segments only for the 114-ROI parcellation. This may reflect a compromise: regions small enough to avoid averaging across large areas (incurring a loss of data), yet still large enough to represent independent time series for our cohort as a whole. However, we recognize that patient-specific parcellations, parcellation-free, or adaptive parcellation approaches may be beneficial (41)(42)(43)(44). Although we typically found high node strength in the resection area in good-outcome patients, several high-strength nodes were spared by surgery. For example, patient 1022 had multiple high-strength nodes even in the contralateral hemisphere (Figure 3B). Given that all brain networks (epileptogenic or otherwise) have a mixture of high- and low-strength nodes, we interpret that seizures are facilitated by high-strength nodes, but that not all high-strength nodes are necessarily pathological. The normalization of patient networks against those from controls allows for the identification of pathological "abnormal" nodes (27,45). Future studies should investigate these relationships between node hubness and node abnormality. Despite several studies reporting relationships between structural MRI properties and epilepsy duration (46)(47)(48), few have investigated this relationship with MEG data. In agreement with our study, Madhavan et al. (13) performed an analysis of MEG data acquired from 12 patients with focal epilepsy and also found a significant positive correlation between connectivity and duration. In contrast to our approach, the authors of that study analyzed only the subnetwork implicated in interictal epileptiform discharges, rather than performing a whole-brain analysis as presented here. Furthermore, that study reported significant findings only in beta-band connectivity. However, our finding of increased mean network functional connectivity with increased epilepsy duration did not concur with Englot et al. (10). This may be due to the small size of the effect in our data (R2 < 0.2 for all segments), or to differences in the data used by Englot et al. (10) (R2 = 0.229). Other differences of note between the studies include the pre-processing strategies and network types; specifically, Englot et al. (10) used alpha-band imaginary coherence, whereas our study uses broadband correlation.
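To make this methodological contrast concrete, the following sketch computes both connectivity measures for a pair of signals; it is an illustration only, not either study's pipeline, and the spectral-estimation parameters are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import csd, welch

def alpha_imaginary_coherence(x, y, fs=600, band=(8.0, 12.0), nperseg=512):
    """Mean |Im(coherency)| of two signals within the alpha band."""
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, sxx = welch(x, fs=fs, nperseg=nperseg)
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    coherency = sxy / np.sqrt(sxx * syy)
    sel = (f >= band[0]) & (f <= band[1])
    return np.abs(coherency[sel].imag).mean()

def broadband_correlation(x, y):
    """Pearson correlation of two (already broadband-filtered) signals."""
    return np.corrcoef(x, y)[0, 1]
```

By construction, the imaginary part of coherency discards zero-lag interactions (and thus much field spread), whereas the amplitude correlation retains them; this is one reason the two approaches can yield different duration effects.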
Given the limited and mixed literature, we conclude that a larger cohort with consistent processing is required to better understand relationships between duration and MEG functional connectivity. An important limitation of this study is that the networks studied include neocortical areas only, and not deep brain structures such as the amygdala or hippocampus. Previous work has demonstrated that MEG signals can be localized in deep brain structures: Pizzo et al. (49) showed that MEG signals could be source localized to spikes detected on concurrently recorded intracranial EEG. However, localization to the hippocampus is challenging (50). Given that our networks are constructed from low-amplitude interictal activity, and that our objective was not high-amplitude spike localization, we excluded those structures from our analysis. Other limitations of our study include the sample size, which is in a similar range to previous studies (9), and the retrospective (as opposed to prospective) design of our analysis. Prospective applications of our approach could involve overlaying a mask of the intended resection on the patient's network, as performed here. An expected D_RS could then be calculated for the intended resection and this information used to alter the resection strategy. Multiple strategies could be computed and optimized for minimal D_RS, minimal resection size, and maximal distance to eloquent areas. We envisage that such a software tool could be used during pre-surgical evaluation (22). This study has focussed on functional (MEG-derived) network properties. Previous studies suggest that univariate properties (e.g., dominant frequency, interictal spike rate, presence of HFOs) may also hold useful information (51)(52)(53)(54)(55). Although interictal spikes may be randomly present in any chosen epoch, their influence on functional connectivity is limited if they are present in only a minority of time windows. This influence is limited because our approach captures stationary aspects of the reconstructed network rather than the transient spikes (40,56). However, we acknowledge that in circumstances where the majority of the recording contains spikes, their influence on functional connectivity may be stronger. As we do not investigate spike counts here, this should be considered a possible limitation of our work. Our approach as presented can be fully automated, without the need for manual identification of spikes, which is a distinct advantage of our method. In addition to neurophysiologically-derived networks, structural network information, which underpins functional network dynamics, has also been shown to have predictive value (22,57,58). Future studies should integrate univariate properties (such as spikes) with structural and functional networks in a personalized, patient-specific manner to better understand the role of abnormal network hubs, maximizing the benefits of all patient data (59)(60)(61). Additionally, future studies should investigate the robustness of these results with different connectivity measures such as imaginary coherence or the phase locking value (PLV). Taken together, our study has provided additional evidence that the removal of network hubs can lead to improved patient outcomes from epilepsy surgery, and has suggested that these findings are temporally robust.

DATA AVAILABILITY STATEMENT

Preprocessed networks are available on reasonable request to the corresponding author.
Requests to access the datasets should be directed to peter.taylor@newcastle.ac.uk.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by Newcastle University Research Office (Ref: 1804/2020). Written informed consent for participation was not required for this study, in accordance with the national legislation and the institutional requirements.
Four-dimensional noncommutative deformations of $U(1)$ gauge theory and $L_{\infty}$ bootstrap

We construct a family of four-dimensional noncommutative deformations of $U(1)$ gauge theory following a general scheme, recently proposed in JHEP 08 (2020) 041 for a class of coordinate-dependent noncommutative algebras. This class includes the $\mathfrak{su}(2)$, the $\mathfrak{su}(1,1)$ and the angular (or $\lambda$-Minkowski) noncommutative structures. We find that the presence of a fourth, commutative coordinate $x^0$ leads to substantial novelties in the expression for the deformed field strength with respect to the corresponding three-dimensional case. The constructed field theoretical models are Poisson gauge theories, which correspond to the semi-classical limit of fully noncommutative gauge theories. Our expressions for the deformed gauge transformations, the deformed field strength and the deformed classical action exhibit flat commutative limits and they are exact in the sense that all orders in the deformation parameter are present. We review the connection of the formalism with the $L_{\infty}$ bootstrap and with symplectic embeddings, and derive the $L_{\infty}$-algebra which underlies our model.

Introduction

Noncommutative gauge and field theories have been widely studied for more than twenty years. Much has been written about the physical motivations for considering space-time to be "quantum" and physical models to be described in terms of noncommuting observables, if one wants to go beyond the dichotomy between classical gravity and quantum physics. There exist excellent reviews on the subject, see for instance [1][2][3]. Despite the large efforts made, there is however no general consensus about the appropriate noncommutative generalisation of field theory, mainly because, except for very few models, all proposed attempts present formal and/or interpretative problems, which render the results not fully satisfactory. Nonetheless, the problems addressed, with the promise of providing effective models of space-time quantisation and compatible gauge theories, maintain their validity. One main motivation for confronting noncommutative gauge theory once again is a series of recent publications proposing the framework of L∞ algebras and a bootstrap approach as appropriate for formulating gauge noncommutativity in a consistent way [4,5]. Moreover, an interesting connection has been established between the L∞ bootstrap and symplectic embeddings of noncommutative algebras [6,7]. It is worth mentioning that the role of L∞ algebras in gauge and field theory has already been investigated in [8][9][10]; see also [11][12][13][14][15][16] for recent progress in studies of L∞ structures in the field theoretic context. In the present paper, we shall take advantage of the constructive approach proposed in [17] which, starting from the requirement that gauge theory be compatible with the desired space-time noncommutativity and be equivalent to the standard one in the commutative limit, yields recursive equations for field-dependent gauge transformations and the deformed field strength. As we shall explicitly discuss, the procedure is closely related to the L∞ bootstrap and symplectic embedding approaches.
The only exact (all orders in the noncommutativity parameter) nontrivial models which have been constructed so far along the lines of [17], and which exhibit the flat commutative limit, are the three-dimensional U(1) theory with su(2) noncommutativity [17] and the two-dimensional U(1) model with kappa-Minkowski noncommutativity [19]. Therefore, the construction of four-dimensional models of this kind seems to be a valuable and timely problem. In the present work we fill this gap: we construct exact noncommutative four-dimensional deformations of U(1) gauge theory, implementing several three-dimensional noncommutative structures within the general framework proposed in [17], and adding one more commutative coordinate. As we shall see below, such an addition brings somewhat more than a naive generalisation of the corresponding three-dimensional setup. For the non-trivial sector of the algebra we shall consider explicitly the angular (or λ-Minkowski) noncommutativity [22][23][24][25][26][27][28] (1.1), and the su(2) noncommutativity [29][30][31][32][33][34][35][36][37][38] (1.2). The latter may be easily generalised to su(1,1), while the time variable x̂⁰ stays commutative in all cases considered. We shall use the Greek letters μ, ν, ..., and the Latin letters a, b, c, ..., to denote the four-dimensional and the three-dimensional (i.e. the spatial) coordinates respectively. The three-dimensional deformation of U(1) gauge theory based on the su(2) noncommutativity (1.2) has already been studied in detail in [17]. We shall see, however, that the addition of time as a fourth commutative coordinate extends the results of [17] in a nontrivial way. For a given starting space-time M, we shall indicate with A_Θ = (F(M), ⋆) the noncommutative algebra of functions representing noncommutative space-time, equipped with some noncommutative star product which, for coordinate functions, reproduces the linear algebras (1.1) and (1.2). Noncommutativity is therefore specified by the x-dependent skew-symmetric matrix Θ(x),

[x^μ, x^ν]_⋆ = i Θ^{μν}(x), (1.3)

which we assume to be a Poisson bivector in order to maintain associativity of the star product. The symbol [·,·]_⋆ denotes the star commutator, defined as follows:

[f, g]_⋆ = f ⋆ g − g ⋆ f. (1.4)

For non-Abelian gauge theories, where gauge parameters are valued in a non-Abelian Lie algebra, the algebra of gauge transformations closes with respect to the non-Abelian Lie bracket. Noncommutative U(1) gauge theory, with gauge parameters now belonging to A_Θ, behaves very much like non-Abelian theories. Therefore, according to [17], we shall require that the algebra of gauge transformations closes with respect to the star commutator, namely

[δ_f, δ_g] A_μ = δ_{−i[f,g]_⋆} A_μ. (1.6)

However, if gauge transformations are defined as a natural generalisation of the non-Abelian case, it is known that, by composing two such transformations, we get the result (1.6) only if ∂ is a derivation of the star commutator, which in general is not the case. Hence, the guiding principle in [17] was the definition of the infinitesimal gauge transformations in such a way that they close the noncommutative algebra (1.6) and reduce to the standard U(1) transformations

δ_f A_μ = ∂_μ f (1.9)

in the commutative limit. The star commutator which enters in (1.6) has the following structure:

[f, g]_⋆ = i {f, g} + ..., (1.10)

where {f, g} = Θ^{μν}(x) ∂_μ f ∂_ν g (1.11) stands for the Poisson bracket of f and g, while the remaining terms, denoted through "...", contain higher derivatives. From now on we neglect these terms, namely we consider the semi-classical limit [6,7].
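Before proceeding, the Jacobi identity required of such linear Poisson bivectors can be verified symbolically. The sketch below does this for an su(2)-type spatial bivector with a commutative time coordinate; since the paper's explicit formulas are not reproduced here, the normalization Θ^{jk} = 2λ ε^{jkl} x^l is an assumption made for illustration only:

```python
import sympy as sp

x = sp.symbols('x0:4')                 # x[0] = time (commutative), x[1..3] spatial
lam = sp.Symbol('lambda', positive=True)

# su(2)-type spatial bivector (normalization assumed):
# Theta^{jk} = 2*lam*eps_{jkl}*x^l for j,k = 1..3; time row/column vanish.
Theta = sp.zeros(4, 4)
for j in range(1, 4):
    for k in range(1, 4):
        Theta[j, k] = 2 * lam * sum(sp.LeviCivita(j, k, l) * x[l]
                                    for l in range(1, 4))

def jacobiator(mu, nu, rho):
    """Theta^{mu l} d_l Theta^{nu rho} + cyclic permutations."""
    return sum(Theta[mu, l] * sp.diff(Theta[nu, rho], x[l])
               + Theta[nu, l] * sp.diff(Theta[rho, mu], x[l])
               + Theta[rho, l] * sp.diff(Theta[mu, nu], x[l])
               for l in range(4))

# The Jacobi identity vanishes for every index combination.
assert all(sp.simplify(jacobiator(m, n, r)) == 0
           for m in range(4) for n in range(4) for r in range(4))
print("Jacobi identity verified for the su(2)-type bivector.")
```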
Therefore our noncommutative gauge algebra becomes the Poisson gauge algebra:

[δ_f, δ_g] A_μ = δ_{{f,g}} A_μ. (1.12)

We shall consider in what follows a two-parameter family of Poisson structures (1.13), where the 3 × 3 matrix ᾱ is defined as in (1.14). At α = 0 we get the Poisson structure (1.15) which corresponds to the angular noncommutativity, while at α = 1 the three-dimensional bivector Θ^{jk} is nothing but the Poisson structure of the su(2) case. Another interesting case is represented by α = −1, which corresponds to the Lie algebra su(1,1). We emphasise, however, that the Jacobi identity,

Θ^{μλ} ∂_λ Θ^{νρ} + Θ^{νλ} ∂_λ Θ^{ρμ} + Θ^{ρλ} ∂_λ Θ^{μν} = 0, (1.17)

is satisfied for any α, not just at α = 0 and α = ±1. Introducing the projector δ̄^ν_μ on the three-dimensional space, we get an explicit formula (1.19) for the structure constants. The paper is organised as follows. Sec. 2 is devoted to deformed gauge transformations; moreover, a connection with the symplectic embedding approach is discussed. In Sec. 3 we present relevant aspects of the L∞ bootstrap approach to gauge theories and establish the L∞ algebra which corresponds to our gauge transformations. In Sec. 4 we introduce a deformed field strength and a suitable classical action.

2 Deformed gauge transformations

According to [17], the infinitesimal deformed gauge transformations which close the algebra (1.12) and reproduce the correct undeformed limit (1.9) can be constructed by allowing for a field-dependent deformation as follows:

δ_f A_μ = γ^ν_μ(A) ∂_ν f + {A_μ, f}. (2.1)

This variation satisfies the derivation property (2.2) [6]. For (1.12) to be satisfied, the 4 × 4 matrix γ has to solve the master equation (2.3); moreover, it has to reduce to the identity in the commutative limit (2.4). (We use the notation ∂^μ_A ≡ ∂/∂A_μ.) The last requirement guarantees that the noncommutative transformations (2.1) reproduce the standard Abelian gauge transformations (1.9) in the undeformed theory. A general result has been established in [6] in the context of symplectic embeddings, valid for any Θ which is linear in x. It suggests a solution of Eq. (2.3) in the form (2.5), and the function χ reads (2.6). In the last equalities the quantities B_2n are the Bernoulli numbers, and [χ(M/2)]^ν_μ have to be understood as the matrix elements of χ(M/2). (Note that our notations differ from the ones of [6]: in order to obtain our Eq. (2.5) one has to set t = 1, p = A and replace γ by γ − 1 in Eq. (6.3) of [6].) By replacing the structure constants (1.19) in the definition (2.6) we obtain (2.9); the matrix M̂ appearing there is a projector, i.e. M̂² = M̂, and hence M̂ⁿ = M̂, ∀n ∈ ℕ. (2.10) From now on we shall make use of the notations (2.11). (Do not confuse ᾱ and α̃!) Using the identity (2.10) we can easily calculate the nontrivial term of Eq. (2.5), with the result (2.12). Substituting this result in the general formula (2.5), we arrive at the final expression (2.13) for γ, where we introduced another form factor, χ̄, defined in (2.14). In order to confront our results with the ones of [17], set α = 1: one can easily see that the three-dimensional part of (2.13), viz γ^i_j, coincides with the known three-dimensional result (2.11) of [17] for the su(2) case. Interestingly, the field-dependent deformation of gauge transformations (2.1) has been derived in [6] as the result of a symplectic embedding of the Poisson manifold (M, Θ) into the symplectic manifold (T*M, ω), where T*M denotes the cotangent bundle and ω an appropriate symplectic form such that π_* ω⁻¹ = Θ, with π : T*M → M the projection map. Shortly, the idea of symplectic embeddings of Poisson manifolds is a generalization of symplectic realizations [20,21], which consists in the following.
One considers the canonical symplectic form ω_0 on T*M, which is locally given by ω_0 = dλ_0 = dp_μ ∧ dx^μ, with λ_0 = p_μ dx^μ the Liouville one-form. The contraction of λ_0 with the Poisson tensor Θ defined on M yields a vector field, X_Θ. In terms of the latter, it is possible to endow, at least locally, the cotangent space with a new symplectic form ω, whose inverse naturally projects down to the Poisson tensor Θ on M through the projection map π : T*M → M. According to [20,21], such a form is given by the integrated pull-back of the canonical symplectic form ω_0 through the flow associated with the vector field X_Θ. The Jacobian matrix J = (∂y^μ/∂x^ν), with y^μ the image of x^μ under this flow, is formally invertible. On denoting its inverse by γ(x, p), the symplectic Poisson tensor ω⁻¹ = ∂/∂y^μ ∧ ∂/∂p_μ is given in terms of the original variables (x, p) and the matrix γ(x, p) according to (2.16). Let us pose ω⁻¹ = Λ. A generalization of the previous procedure consists in defining Λ as a deformation of Θ, according to Eq. (2.16), and imposing that it satisfy the Jacobi identity, provided Θ does. This amounts to computing the Schouten bracket [Λ, Λ] and imposing that it be zero. We obtain an equation for the matrix γ, in which [Θ, Θ] = 0 has been used. The latter is exactly the master equation (2.3) after replacing derivatives with respect to p_μ by derivatives with respect to A_μ, which is, however, a non-trivial difference, since A_μ is itself a function of x, while p_μ is obviously not. The relation between the two approaches, which has been established in [6,7], may be summarised as follows. Let us first consider the standard setting with Θ = 0. Then the cotangent bundle T*M is endowed with the canonical symplectic form ω_0. The gauge field A = A_μ dx^μ defines a local section s_A of T*U; let ξ_A = λ_0 − π*(A) be a local one-form on T*U, with λ_0 the Liouville form. We have s*_A(λ_0) = A = (π ∘ s_A)*(A). This means that ξ_A vanishes exactly on the submanifold im(s_A) ⊂ T*U. Therefore the latter is identified by the constraint (2.19), which in turn amounts to fixing the fibre coordinate at (x, p) to the value A(x) identified by the section s_A. Then the infinitesimal gauge transformation of the gauge potential A, with gauge parameter f, may be defined in terms of the canonical Poisson bracket ω_0⁻¹ as in (2.20). Now let us consider the case Θ ≠ 0, namely (M, Θ) is a Poisson manifold. A symplectic embedding is performed as described above, with the symplectic form now given by the inverse of (2.16), while the image of U ⊂ M through the local section s_A is still defined by the constraint (2.18). Then the infinitesimal gauge transformation of the gauge potential is formally the same as in the previous case, (2.20), except for the fact that ω_0⁻¹ is to be replaced by the Poisson tensor ω⁻¹ defined by (2.16). Therefore we have (2.21), which is precisely the starting assumption (2.1). The solution of the master equation (2.3) is not unique, i.e. one may construct other deformed gauge transformations which close the algebra (1.12). In the approach just described, this is due to the freedom in choosing different symplectic embeddings for the Poisson manifold (M, Θ) [6,7], and gives rise to a field redefinition which maps gauge orbits of the original fields onto gauge orbits of the new fields [39]. In the next section we will see that the present construction is closely related to the L∞ bootstrap. In that setting, the ambiguity mentioned above corresponds to a quasi-isomorphism of the underlying L∞ algebra, which is unique (up to quasi-isomorphisms) [39].

3 Relation to L∞ algebras and bootstrap
L∞ algebras and gauge transformations are related in the following way [8,9]. Consider a graded space V such that the only nonempty subspaces are V_0 and V_{−1}. By construction, the former is identified with the space of gauge parameters, f ∈ V_0, whilst the latter contains the gauge fields, A = A_μ dx^μ ∈ V_{−1}. We shall look for the deformed gauge transformation in the form of a series expansion, as follows:

δ_f A = Σ_{n≥0} (1/n!) (−1)^{n(n−1)/2} l_{n+1}(f, A, ..., A). (3.6)

By setting the initial brackets (3.7), (3.8), and determining the remaining brackets l_k from the requirement of closure of the L∞-algebra, one can build the gauge transformation (3.6). Such a "completion" is referred to as the L∞ bootstrap [4,5]. General properties of the L∞ construction automatically ensure that the condition (1.12) is satisfied, see e.g. [5,18]. In the previous section we constructed the deformed gauge transformations without any reference to L∞ algebras; however, Proposition 5.9 of [6] guarantees that for any symplectic embedding related to the deformed gauge transformation (2.1) (here "related" means that both the deformed gauge transformation and the symplectic embedding are defined via the same matrix γ, which is a solution of the master equation (2.3)), the L∞ algebra is indeed there, and can be constructed as follows.

• Expanding the right-hand side of the transformation (2.1) in powers of A, and comparing with the right-hand side of (3.6), one finds all the brackets of the form l_n(f, A, ···, A). All other brackets which depend on a single argument f and n − 1 arguments A can, obviously, be recovered from the mentioned ones by the graded antisymmetry (3.3).
• The only nonzero bracket which involves two arguments f, g ∈ V_0 is given by Eq. (3.8).
• All other brackets are identically equal to zero.

The proposition mentioned above also asserts that L∞-algebras which correspond to different choices of γ (i.e. different symplectic embeddings), associated with the same Poisson bivector Θ via Eq. (2.3), are necessarily connected by L∞-quasi-isomorphisms. From this point of view, the L∞ structure which underlies a given deformed gauge transformation of the form (2.1) is "unique". Applying the prescription presented above to the matrix γ given by Eq. (2.13), we find that the only non-zero brackets of the underlying L∞ algebra are given by (3.11). We remind the reader that the structure constants are given by Eq. (1.19), and the quantity Z is defined by (2.11). This result is a direct generalisation of the L∞ algebra presented in Example 6.4 of [6] for the three-dimensional su(2) case.

4 Deformed field strength

According to [17,18], the deformed field strength, which transforms in a covariant way under the noncommutative transformations (2.1),

δ_f F_{μν} = {F_{μν}, f}, (4.1)

may be sought by adapting the usual definition of the non-Abelian field strength to our Poisson gauge algebra. This yields the expression (4.2) [17,18], with the unknown coefficient function R^{ξλ}_{μν} satisfying appropriate conditions. It would certainly be interesting to derive this result from symplectic embeddings, as we did for the gauge potential; however, such a connection is still missing. Therefore the field strength is obtained here, as in [17,18], in a more direct way. By imposing that (4.1) be satisfied, one gets an equation for the coefficient function R^{ξλ}_{μν}, which we name the second master equation (4.3). The latter exhibits the undeformed limit (4.4). This requirement, together with the relation (2.4), ensures that the noncommutative field strength reduces to the commutative one in the undeformed theory:

F_{μν} → F⁰_{μν} = ∂_μ A_ν − ∂_ν A_μ. (4.5)

A solution of Eq.
(4.3), which satisfies the condition (4.4), is known for an arbitrary Poisson bivector Θ up to O(Θ²) terms [17], see Appendix A. Substituting our data (1.13) in these formulae, and using straightforward identities, we get an explicit lowest-order formula. This formula suggests looking for the complete solution of (4.3) in the form of the Ansatz (4.8), where the form factors ζ, ζ̄, φ, Λ and Φ are unknown functions which exhibit the asymptotic behaviour (4.9) at small λ. Substituting the Ansatz (4.8) in the master equation (4.3) at μ = 1, ν = 2, ρ = 3, ω = 2, ξ = 3 we get (4.10), where u ≡ λ√Z and, we remind, the function χ̄(v) is defined by Eq. (2.14). This relation is satisfied for all A_j iff the system (4.11) holds. The solution of this system of equations which is compatible with the asymptotics (4.9) is given by (4.12). (Actually, one has to use just the initial condition φ(0) = −1/3, while the asymptotic behaviour of ζ can be checked a posteriori.) In order to determine the remaining three form factors we substitute the Ansatz (4.8) in Eq. (4.3) at μ = 1, ν = 0, ρ = 1, ω = 0, ξ = 2, which leads us to a system (4.14) of three coupled equations for the three undetermined functions ζ̄(u), Φ(u) and Λ(u). Resolving these equations, and imposing the conditions (4.9), we obtain (4.15). Summarising Eq. (4.8), Eq. (4.12) and Eq. (4.15), we arrive at the final expression (4.16). One can check by direct substitution that our solution is valid for all other combinations of the indexes μ, ν, ρ, ω and ξ. At α = 1 the three-dimensional restriction R^{cd}_{ab} of (4.16) coincides with the known three-dimensional solution for the su(2) case [17]. It is remarkable that the presence of the fourth (commutative) coordinate x⁰ generalises the mentioned three-dimensional result in a quite nontrivial way, introducing new contributions through the form factors Λ and Φ. The deformed field strength F, defined by Eq. (4.2), allows for a natural definition of the classical action functional (4.18), which remains invariant under the deformed noncommutative gauge transformations (2.1) and reproduces the correct classical limit. Indeed, one can check that the classical limit is the standard one (4.20), thanks to the property (4.5) of the deformed field strength. Moreover, since the deformed field strength F transforms in a covariant way (Eq. (4.1)), the deformed Lagrangian density, being quadratic in F, transforms in a covariant way as well. We have indeed (4.21), where we have first used the standard definition of the first variation of L upon the variation of F; then we have substituted the explicit expression (4.1) for δ_f F, and taken into account the derivation property of the Poisson bracket. By using Eq. (1.11) we thus get (4.22).

Remark. In order to avoid confusion, we comment on the usage of partial derivatives in this paper. On the one hand, in the master equations (2.3) and (4.3), in the terms Θ^{νμ} ∂_μ γ^ξ_λ − Θ^{ξμ} ∂_μ γ^ν_λ and Θ^{ξλ} ∂_λ R^{ρω}_{μν}, the partial derivatives act on the explicit dependence on x only, whilst A(x) is considered as an independent variable. On the other hand, in all other places of this article (e.g. in the definition of the Poisson bracket (1.11)), the partial derivatives act on all x-dependent objects. In particular, in the last line of Eq. (4.22) the partial derivative acts on x, which is present in L not just explicitly, but also via A(x) and its first derivatives. This justifies the Leibniz rule in the last step of (4.22).
5 Conclusions

In this article we constructed a family of four-dimensional noncommutative deformations of U(1) gauge theory, implementing a class of noncommutative spaces (1.13) in the general framework of [17]. This class includes the angular (or λ-Minkowski), the su(2) and the su(1,1) cases at α = 0, α = +1 and α = −1, respectively. We worked within the semi-classical approximation, so our noncommutative gauge theories are actually Poisson gauge theories. The first result is the definition (2.1) of deformed gauge transformations, where the matrix γ is given by Eq. (2.13). These transformations close the noncommutative algebra (1.12). We also discussed the interpretation of the master equation (2.3), which we used to construct the deformed gauge transformations, as a Jacobi identity for symplectic embeddings [6,7]. The second result is an explicit L∞ structure (3.11), which corresponds to our deformed noncommutative gauge transformations in the sense of the L∞ bootstrap. The third result is an expression for the deformed field strength, Eq. (4.2), where the quantity R is given by Eq. (4.16). This deformed field strength transforms in a covariant way under the deformed noncommutative gauge transformations, thereby allowing for the definition of the gauge-invariant classical action (4.18). We stress that the presence of the fourth (commutative) coordinate x⁰ brings nontrivial contributions to the deformed field strength (via R), which do not look like a simple and intuitive addition to the corresponding three-dimensional result. In particular, the components F_{0j} exhibit a highly nonlinear dependence on the three-dimensional components of A. This behaviour is different from that of the matrix γ, where four-dimensionality does not change the corresponding three-dimensional result that much, since γ^{0ρ} = δ^{0ρ}. Let us illustrate the nontriviality with a simple example in which the gauge potential does not depend on the spatial coordinates x^j. In this situation one may expect that the noncommutativity, being essentially three-dimensional, does not affect the field strength, so that F_{0j} = ∂_0 A_j. Our analysis, instead, yields the expression (5.1), which disproves the naive expectation. The nonlinearity deriving from spatial noncommutativity thus manifests itself in the spatially-homogeneous situation as well, as far as the time-dependence is concerned. One can also check that, substituting an x^j-independent gauge potential A in the equations of motion derived from the classical action (4.18), one gets a nonlinear dynamics governing the time-dependence. To the best of our knowledge, nothing similar takes place in noncommutative gauge theories based on more conventional approaches. The present research can be continued in various directions. On the one hand, one may study various physical consequences of noncommutativity, such as the existence of Gribov copies, which has already been established for noncommutative QED with Moyal-type noncommutativity [40][41][42]. On the other hand, one may focus on the purely mathematical structures which stand behind the construction. In particular, one may wonder whether the field strength F obtained in this paper is compatible with the L∞ bootstrap procedure related to the extended L∞ algebra [4], which contains one more nonempty subspace V_{−2} of objects transforming in a covariant way under the deformed gauge transformations.
In particular, one has to check whether

F = Σ_{n≥1} (1/n!) (−1)^{n(n−1)/2} l_n(A, ..., A), (5.2)

where the first bracket corresponds to the undeformed field strength (4.5), and all the brackets together fulfil the L∞ relations. Finally, an interesting problem which we would like to investigate in the near future is to understand whether it is possible to derive the deformed field strength proposed in this paper within the symplectic embedding approach, and to clarify its geometric nature.
Efficiency of modified concrete in lining in underground structures

The authors discuss an efficient concrete technology aimed at the highest advance rates in tunneling in unstable rocks at minimized labor input.

Introduction

A review of Russian and foreign experience gained in underground construction shows that the overall efficiency of support installation in mines has grown slightly in recent years but falls far behind the advance rate. The practice of underground construction indicates that the type, technology and mechanization of roof support govern the rate of tunneling. This study aims to justify optimized parameters for the construction of underground structures in unstable rocks by the opencast method with cemented paste backfill, using modified quick-hardening mixtures to ensure a reduction in labor cost, financial expenditures and time of construction.

Construction and support of underground structures: State of the art

Underground structures can be constructed using the opencast method, either without support in pits with sidewalls cut at a slope of repose or in pits with reinforced sidewalls. Cutting pits with sidewalls at a slope of repose is the simplest and most economical solution but is heavily constrained, especially in the conditions of narrow-space urban development [1]. The major constraint is the depth of a pit. In deeper pits, it is necessary to make flatter slopes, and the pit area and volumes of extracted soil grow essentially, which makes this approach inexpedient or even impossible in case of limited space. Another complication is groundwater, as underground construction then requires water depression to be undertaken. Thus, pits with sidewalls at a slope of repose are usually arranged in the absence of urban development and where the groundwater level is deep. The support design and technology for pits in underground construction should meet requirements such as: stability of pit walls during and after total extraction of soil; withstanding of the load from the structure; water impermeability, if water depression is impossible or economically inexpedient; re-usability of support elements in case of temporary reinforcement; non-obstruction of the pit, of the extraction of soil or backfill, and of the erection of other structures; saving of materials, labor and time; preservation of surface and underground objects in operation in the influence zone of underground structure construction; and compliance with environmental standards (permissible rates of noise and vibration) and environmental protection. To this effect, the sidewalls of pits and trenches are supported using modified concrete mixes. New-generation concretes are described in [2]. Concrete technologies have been greatly advanced through the extensively investigated and practically proven science of concrete modification using admixture modifiers intended to improve the properties of concretes and concrete mixes. Methods and means capable of maximizing underground construction rates are discussed in [3]. Monolithic concrete lining in horizontal and inclined tunnels in complicated geological conditions is combined with temporary support meant to take up the loads exerted by the enclosing rock mass. This is only possible if the concrete rapidly develops strength, which mainly depends on the strength of the paste matrix. The main factor governing the duration of the support installation cycle is the time period within which concrete placed in formwork changes from the plastic to the solid state.
Based on studies of the rate of hydration of cements manufactured by different plants, the highest kinetics of structure formation is a feature of the cements manufactured at the Pervomaisky and Verkhnebakansky Cement Plants in Novorossiysk, Russia. High placeability of cements is only achieved with superplasticizers. The rate of strength development in modified early-curing concretes used in underground structures is analyzed in [3]. Regarding the development of strength in concrete, the most effective admixtures should exert an integrated effect on the binder. Tests of the increment in the plastic strength of cement grout of normal density were carried out with integrated admixtures such as Relamix M2, Relamix SL, Poliplast-1MB, Superplast ultra D5 and some others. From the test results, the most effective admixtures are D5 and Relamix T-2. Installation of monolithic concrete lining is a specific, difficult and long process. This is connected with the design features of the formwork, the space-limited operation front, the impossibility of vibrocompaction and the duration of mixture placement, especially in the crown of the roof. In this case, sustained fluidity of the cement mixture for 30-40 min is a critical factor. Mine construction mostly uses flowing concrete with a cone slump of up to 17 cm; for this reason, the above-specified time interval is of particular concern, as the high water/cement ratio results in a much longer time of initial and final setting. The authors of this paper tested normal density mixtures at a cement : sand ratio of 1:3 and at a constant sand size in the range of 1.61-1.67. In the plastic strength test, we placed the mixtures in three rings with a diameter of 100 mm and a height of 40 mm; the bending and compression tests were carried out on 32 beams made of the test cement mixtures. The test results are compiled in Table 1 and in Figure 1. All modified mixtures exhibited a considerable increase in plastic strength over a curing time from 2 to 6 h. These studies were carried out for monolithic concrete lining installation in the Almaty Metro. In the application of high-early-strength concrete, the intensification of structure formation during early curing is accompanied by a decrease in concrete fluidity, and an increased time of mixing during placement leads to a decrease in strength. Thus, it is required to assess the "vitality" (workability retention) of cement mixtures by investigating the influence of the mixing time before placement in the formwork on the structural and mechanical properties of the mixtures. The tests were carried out at mixing times of 10, 20 and 30 min. Immediately after mixing, the beam and ring molds were filled with the test mixtures. Placement time was 8-10 min; that is, the maximum time to stabilization of the mixtures was 40 min, as per the earlier set condition. The test results show that an increase in the mixing time to 20 min greatly intensifies the processes of hydration (Table 2), which is proved by a 3-5-fold increment in the plastic strength of the mixtures at a curing time of 6 h. The analysis of the research findings shows that the highest hydration rates are ensured by the admixtures Extra, D-11 and D-5. On the other hand, the mixtures with the modifiers Extra and D-11 thicken considerably by the time of placement and are more difficult to consolidate; moreover, at a curing time of 6 h, the plastic strength of the mixtures with Extra and D-11 is lower than with D-5.
In the ultimate compression tests of the normal density mixtures in the form of beams, 5-8 half-beams were compressed to fracture per test series, so that the measurement error was not higher than 10%. The test results showed that the highest rate of structure formation and, accordingly, the highest strength characteristics are exhibited by the Portland cement mixture with modifier D-5. At the same time, the rate of hardening in the period from 9 to 24 h was insufficient, while the maximum loads on the lining were expected in that very time span. Thus, based on heat release tests of binder hydration, the influence of hardening accelerators on the kinetics of structure formation in the modified cement mixture with admixture D-5 was determined, with a view to manufacturing a mixture with a high rate of structure formation in the period of 12-18 h after the addition of water to the mixture.

Conclusions

1. It has been found that the maximal stress depends on the value of the confining pressure, the thickness of the lining, the rock cohesion, and the time change of the mechanical characteristics of the modified concrete.
2. The developed efficient modified concretes ensure a 20-fold increment in strength after 12 h of curing and a 7-fold increment after 1 day of curing, as compared with admixture-free concretes.
3. The presented research findings show that the highest rate of structure formation and, accordingly, the highest strength characteristics belong to the Portland cement mixture with modifier D-5. However, this mixture has an insufficient rate of hardening over the period from 9 to 20 h, when the maximal loads on the concrete lining are expected. For this reason, heat release tests of binder hydration were carried out to determine the influence exerted by hardening accelerators on the kinetics of structure formation in the modified concrete with admixture D-5 and to obtain a mixture with an increased structure formation rate in the period of 12-18 h after the addition of water.
4. Modeling in PLAXIS 2D reveals that pit sidewalls lined with the modified concrete mixture are capable of sustaining the loads and pressures.
Needs and Research Priorities for Young People with Spinal Cord Lesion or Spina Bifida and Their Caregivers: A National Survey in Switzerland within the PEPSCI Collaboration

The aim of this study was to describe the needs and research priorities of Swiss children/adolescents and young adults (from here, "young people") with spinal cord injury/disorder (SCI/D) or spina bifida (SB) and their parents in the health and life domains, as part of the international Pan-European Pediatric Spinal Cord Injury (PEPSCI) collaboration. Surveys included queries about the satisfaction, importance, research priorities, quality of life (QoL), and characteristics of the young people. Fifty-three surveys with corresponding parent-proxy reports were collected between April and November 2019. The self-report QoL sum scores from young people with SCI/D and SB were 77% and 73%, respectively. Parent-proxy report QoL sum scores were lower, with 70% scores for parents of young people with SCI/D and 64% scores for parents of young people with SB. "Having fun", "relation to family members", and "physical functioning" were found to be highly important for all young people. "Physical functioning", "prevention of pressure injuries", "general health", and "bowel management" received the highest scores for research priority in at least one of the subgroups. As parents tend to underestimate the QoL of their children, and young people prioritized research topics differently, both young people's and caregivers' perspectives should be included in the selection of research topics.

Introduction

Recently, rehabilitation philosophy has become more patient-orientated. Studies have shown that patient and public involvement can have positive impacts on research by enhancing its quality and ensuring its appropriateness and relevance [1]. Consistency between research and consumer priorities and expectations should be a goal for practitioners and could ultimately improve rehabilitation outcomes [2,3]. Pediatric rehabilitation is complex. It faces not only the age-specific needs of patients but also the needs of parents and caregivers. In pediatric rehabilitation, children, adolescents, and young adults (from here, "young people") with spinal cord injury/disorder (SCI/D) and spina bifida (SB) represent a small group living with relevant and lifelong impairments, sharing, to some extent, comparable needs [4,5]. Pediatric SCI/D is a very rare health condition [6][7][8] causing challenges to physical and psychosocial health, with an estimated 7-13 new traumatic cases per year in the 0-14 age group in Switzerland [9]. The prevalence of SB at birth is 19.4 cases per 100,000 live births in Switzerland (Federal Office for Statistics, 2014). Although small in number, these young people experience significant challenges that need to be addressed. Young people with SCI/D and SB have to deal with and adapt to several SCI/D- or SB-associated impairments, growth-related challenges, and secondary health conditions. The SCI/D- or SB-related physiological changes include sensory and motor deficits; autonomic dysregulation (including respiratory, cardiac, and circulatory dysfunction); and bladder, bowel, and sexual dysfunction [10][11][12]. SCI/D- or SB-related complications in young people mainly include constipation, pressure injuries, spasticity, neuropathic pain, and urinary tract infections due to their complicated bladder management [13].
Beyond the challenges faced by adults, young people with SCI/D or SB have to cope with nutrition intake and weight adaptation during the growing phase, with the risk of becoming underweight or obese (due to lower resting metabolic rates and decreased muscle mass) [14,15], and they face an increased risk of developing hip instability (dislocation and subluxation) and scoliosis (due to their lack of trunk muscles) [16]. Young people with SB often face additional needs, for example, cognitive impairments due to Arnold-Chiari malformation and hydrocephalus [4,5], or urological dysfunction and hip malformation present since birth [17][18][19][20]. Further, living with a disability as a young person can also lead to psychosocial issues, such as depression and anxiety, which may influence school integration, education, and participation [8,10,11,21,22]. The above-mentioned issues can have an effect on overall quality of life (QoL), independence, psychosocial health, and outcomes relating to participation and coping [8,11,23]. Given the variety of complications faced by young people with SCI/D or SB, how to comprehensively prioritize and address these needs remains unclear. In 2012, a group of European rehabilitation specialists with an interest in pediatric SCI/D rehabilitation formed the Pan-European Pediatric Spinal Cord Injury (PEPSCI) collaboration. The overall aim of this collaboration was to collect data on the health conditions, social and emotional situations, and school integration patterns of young people with SCI/D [24] in Europe. As part of this international project, the purpose of this national study was to assess the importance, satisfaction, and research relevance of the health and life domains, as well as QoL, among young people with SCI/D and SB and their parents in Switzerland. As young people with SCI/D and SB sometimes participate in rehabilitation programs together [25,26], understanding the various needs and priorities of both patients with SCI/D and patients with SB can offer guidance on how to tailor rehabilitation programs to meet their needs.

Study Design

This is a descriptive study based upon a cross-sectional survey in the German-speaking part of Switzerland, carried out as part of an international collaboration (PEPSCI).

Patients and Setting

Eligible participants were young people with SCI/D or SB aged from 2 to 25 living in the German-speaking part of Switzerland, and their parents. The participants were identified through five large hospitals treating children, youths, and young adults with SCI/D and/or SB: the children's hospitals in Basel, Berne, Lucerne, and Zürich, and the Swiss Paraplegic Center. In the case of acquired SCI/D, the date of the injury had to be before the patient's 18th birthday. Participants with severe neurological deficits (e.g., because of acquired brain injury) or with no need for any assistive technology were excluded.

Screening and Recruitment

Young people with SCI/D or SB were identified from the departmental or institutional databases of each participating institute. The standard procedure was for local collaborators to identify all the young people with SCI/D and SB. Screening for the inclusion and exclusion criteria was based on the medical information in the institutional databases. The eligible participants and their parents received the questionnaires by postal mail, including an information letter and an informed consent form.
The informed consent form was signed either by both parents and participants or by parents alone, depending on the age of the participant (Supplementary Material Table S1). According to the Swiss ethics guidelines, children up to the age of ten were orally informed about the study by their parents. From the age of eleven onward, children received an age-adapted participant information form but did not sign it themselves. From the age of fourteen onward, children received an age-adapted participant information form and signed the form themselves. In some institutions, the potential participants were additionally informed about the study by telephone by their health professionals. The survey was sent out in 2019 in the German-speaking part of Switzerland. One to three months after the questionnaires were sent, the young people/parents who had not returned the questionnaire were contacted by telephone. During this telephone call, the first author explained again the purpose of the study and reminded the individual that he or she was under no obligation to participate and that not participating would not influence his or her further treatment.

Development of the Survey

The survey contains four parts: Part I, the basic information form; Part II, the PedsQL™ (pediatric quality of life questionnaire) [27]; Part III, the health and life domain questionnaire; and Part IV, a neurological form. The second and third parts were organized as 3- to 5-point Likert scales, with an additional free-text section in the third part (Supplementary Material A). The different parts were adapted for different age groups (Supplementary Material B, Table S1).

Part I-Basic Information Form

This questionnaire contained 12 items and aimed to obtain demographic information about the young people with pediatric SCI/D and SB (i.e., their age; gender; education level; and time of, cause of, and years since their injury). The basic information form was completed by the parents of young people aged 2 to 14, whereas young people aged 15 to 25 filled out the basic information form themselves. An additional question regarding the education status of the parents was asked for all ages.

Part II-PedsQL™

To describe QoL, we used the validated German translation of the PedsQL™ [27]. The PedsQL™ is a modular instrument for measuring health-related QoL in children and adolescents from 2 to 18 years of age. The PedsQL™ Generic Core Scales are multidimensional child self-report and parent-proxy report scales developed by J. W. Varni and associates over the last 15 years [27]. The Generic Core Scales include four functional domains: (1) physical (8 items), (2) emotional (5 items), (3) social (5 items), and (4) school functioning (5 items). Separate questionnaires were provided for young people between the ages of 2 and 4 (parent-proxy report only), 5 and 7 (self- and parent-proxy reports), 8 and 12 (self- and parent-proxy reports), 13 and 17 (self- and parent-proxy reports), and 18 to 25 (self- and parent-proxy reports) with SCI/D or SB. The questionnaire for ages 5-7 years consisted of graphic 3-point Likert scales, whereas the questionnaires for ages 8 to 25 years and all parent-proxy reports consisted of 5-point Likert scales. To be more inclusive of the SCI/D and SB population, slight modifications to the wording of the questions on physical functioning (mainly questions concerning walking ability) were made with Varni's permission.
Part III-Health and Life Domain Questionnaire (HLDQ)
PEPSCI collaborators developed this questionnaire based on the findings presented in Simpson and colleagues' systematic review of the health and life priorities of adults with SCI/D and SB [3]. In this review, two central domains were identified: (1) the "life" domain and (2) the "health" domain. We decided to ask about (A) importance, (B) satisfaction, and (C) research priority in relation to Simpson's identified life and health domains. We started out by looking at the adult survey carried out in the UK [2]. After the questionnaire had been developed by the expert committee, the questions were validated for understanding among children, adolescents, and parents in the UK, the US, and Sweden. Some adaptations were made concerning the wording and the description of sexuality. Separate questionnaires were provided for young people aged 8 to 12, 13 to 17, and 18 to 25 years (all with corresponding parent-proxy reports). For young people younger than 8 years, questionnaires were filled out by their parents only, as parent-proxy reports. We had to make some adaptations to the age groups to meet the national ethical requirements in Switzerland. A free-text option was available to all participants, allowing them to write about additional aspects that they would like SCI/D and SB researchers to focus on in the future.
Part IV-Neurology Form
A baseline characteristics form was developed by the PEPSCI collaboration, providing details about the individuals' SCI/D and SB (gender, date of onset, level, completeness, and type of injury). The first author completed this form.
Translation Process
We translated the HLDQ according to the cross-cultural translation guideline of Epstein and Sousa in two stages [28,29]. Stage 1: Bilingual translators whose mother tongue was the target language produced two independent translations. These translators did not need to be professional translators; preferably, they were experts in the medical field. Translator 1 (T1) was aware of the concepts in rehabilitation medicine, SCI/D, SB, etc. Translator 2 (T2) was familiar with colloquial phrases, health-care slang and jargon, idiomatic expressions, and emotional terms in common use. Stage 2: The expert committee reviewed the quality and content of the translations produced by T1 and T2 and ensured that no meaning was lost. The expert committee consisted of all the translators (T1 and T2) and the health-care professionals involved up to this point. The composition of this expert committee was crucial for the achievement of cross-cultural equivalence.
Data Collection
Data collection was based on the returned paper-based questionnaires. For all who returned the questionnaire, the first author collected the medical information from the original medical records of each participating center. Furthermore, data were reviewed and verified prior to completion of data entry. Data were entered into secuTrial® (a secure data server with no participant identifiers).
Statistical Analysis
Means and standard deviations (SD) or medians with 25th and 75th percentiles were calculated. The PedsQL™ questionnaire was recoded prior to analysis according to the scoring instructions at http://www.pedsql.org/score.html (accessed on 4 December 2018). Then, the sum scores for physical, emotional, social, and school functioning were calculated by adding all the corresponding item scores and dividing by the number of answered items. After this step, the mean sum scores for physical health (= physical functioning) and for psychosocial health (= mean of emotional, social, and school functioning), and the total score (mean of the four above-mentioned sum scores), were calculated. Scores for the PedsQL™ are given as medians with 25th and 75th percentiles for patients and parents, each subdivided into SCI/D and SB. For the Health and Life Domain Questionnaire, the means for the health and life domains were calculated for each question of the corresponding subgroup. Missing answers were replaced by zero, as not filling in a question was interpreted as low importance, and the number of missing answers for each question was recorded. The five highest values for importance, satisfaction, and research priority are shown and highlighted in gray in the tables; the means with the lowest satisfaction are also displayed. Microsoft Excel and PASW Statistics 18 (SPSS Inc., Chicago, IL, USA) were used for all analyses.
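As a concrete illustration of the scoring just described, the sketch below reimplements the arithmetic in Python. The 0-4 to 0-100 item transformation follows the standard PedsQL™ scoring rules referenced above (pedsql.org); all names are illustrative, and this is not the analysis script actually used (analyses were run in Microsoft Excel and PASW Statistics 18).

```python
import numpy as np

def pedsql_scores(answers: dict) -> dict:
    """answers maps each domain ('physical', 'emotional', 'social', 'school')
    to a list of raw 0-4 Likert responses, with None for unanswered items."""
    # Reverse-score and transform to 0-100 (0 -> 100 ... 4 -> 0); higher = better QoL.
    scale = {d: float(np.mean([100 - 25 * a for a in items if a is not None]))
             for d, items in answers.items()}
    psychosocial = float(np.mean([scale["emotional"], scale["social"], scale["school"]]))
    total = float(np.mean(list(scale.values())))      # mean of the four scale scores
    return {**scale, "physical_health": scale["physical"],
            "psychosocial_health": psychosocial, "total": total}

def hldq_top5(ratings: np.ndarray, questions: list) -> list:
    """Mean per question for one subgroup. NaN (missing) is replaced by zero,
    i.e. interpreted as low importance, exactly as described above."""
    means = np.nan_to_num(ratings, nan=0.0).mean(axis=0)
    top5 = set(np.argsort(means)[-5:])                # the five highest-scoring items
    return sorted(((q, float(m), i in top5)
                   for i, (q, m) in enumerate(zip(questions, means))),
                  key=lambda t: -t[1])
```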
Study Population
In total, 185 young people with SCI/D (32) and SB (153) and their corresponding parents were eligible to participate. Of these, 53 surveys from young people with SCI/D and SB with corresponding parent-proxy reports were returned: 15 from young people with SCI/D and 38 from young people with SB. This corresponds to a response rate of 28.6% overall, 46.9% for SCI/D and 24.8% for SB. In the SCI/D group, 12 of 15 surveys (80%) were filled out by boys, while in the SB group 24 of 38 (63%) were filled out by girls. The mean age at injury for the SCI/D population was 9.9 years, and most of the participants had a neurological level between Th1 and Th12, bearing in mind that the neurological examination of children remains challenging but possible (Supplementary Material Table S2). The education level of the parents differed in the percentage educated to university level (SCI/D 8% versus SB 28%); the percentages for primary school and vocational training were similar (Supplementary Material Table S3). The SCI/D population was composed of half traumatic-origin and half non-traumatic-origin patients (bleeding, transverse myelitis, congenital spinal cord lesions, and tumors) (Supplementary Material Table S2).
Quality of Life (PedsQL™)
The total sum scores of the PedsQL™ ranged between 64.5% and 77.4%. The young people with SCI/D and SB had total QoL scores of 77.4% and 73.1%, respectively. Parent-proxy report scores were lower, at 70.2% for young people with SCI/D and 64.5% for young people with SB. The Likert score for school functioning for young people with SB was 75% (60% in the corresponding parent-proxy reports) versus 81.3% for young people with SCI/D (80% in the corresponding parent-proxy reports). The psychosocial health sum score was highest for young people with SCI/D. The results of the PedsQL™ varied between age groups in the SB group, mainly due to the scores in school functioning (Supplementary Material Table S4).
Satisfaction, Importance and Research Priorities in Health and Life Domains
"Relationships with family members" and "physical functioning" were the two domains mentioned as being within the top five either for satisfaction or importance in all subgroups. However, these items did not receive high research priority in all subgroups. For the young people with SCI/D and SB, the "what you do to have fun" domain received the highest score for importance but was not ranked within the top five domains for research priorities and satisfaction.
In the parent-proxy reports, it was not ranked within the top five in one of the three categories. For young people with SCI/D and SB, "time playing with or hanging out with others" was ranked within the top five for importance. "Ability to concentrate and learn new things" was ranked within the top five for importance in the parent-proxy reports (SCI/D and SB) and for young people with SCI/D, but not for young people with SB; it was not ranked within the top five for research priorities in any of the subgroups. "Bladder management" was ranked within the top five for research priorities in the parent-proxy reports (SB and SCI/D) and for young people with SB, but not for young people with SCI/D (Supplementary Material Table S5). Satisfaction scores for the "ability of moving feet and legs" were low for the young people with SCI/D. Mobility ("how easy it is to get where you need to go") was ranked within the top five for research priorities in participants with SCI/D and SB. Accessibility ("accessibility of your child's home") was ranked within the top five for importance in the parent-proxy reports of the SCI/D group and within the top five for satisfaction in the parent-proxy reports of the SB group (Supplementary Material Table S5). For the young people with SB and their parents, the presence of spasms and muscle jumping, and how those could be controlled, was not ranked within the top five for importance, satisfaction, or research priorities, whereas it was ranked within the top five for research priorities by parents of young people with SCI/D. The presence of pressure injuries and how they could be prevented received top scores for research priorities from the young people with SCI/D and SB (Supplementary Material Table S5). The presence and treatment of pain was ranked within the top five for research priorities by young people with SCI/D and SB and by parents of participants with SCI/D, but not by parents of participants with SB (Supplementary Material Table S5). In the additional free-text section, young people and their parents mentioned many survey topics again and added some items. Parents added physical, emotional, and social aspects, whereas young people mainly added physical aspects. Additional new aspects were neurological recovery, stem cells, nerve transfers, electrical stimulation, and the healing of the spinal cord. It was also mentioned that surgical procedures should be better explained to children (without medical terms) and that research should be performed on how children can be supported to recover mentally and physically from traumatizing operations and procedures (Supplementary Material Table S6).
Discussion
Data from our questionnaire-based study provide a better understanding of the current needs and research priorities of young people with SCI/D and SB and their parents. This is the first study to reveal health and life priorities from the perspectives of children, adolescents, and young adults, as well as their parents and caregivers. In our study, only parents and no other caregivers participated, and for that reason we use only the term 'parents'. Participants clearly selected different topics as research priorities. This underlines the idea that even young people and parents can clearly define their needs and research priorities, and shows that young people and their parents can be integrated into the selection of research activities.
Besides biomedical research topics such as neurological recovery, bowel and bladder management, pain, pressure injuries, and mobility, the following topics were also prioritized for research by the participants: relationships with friends and family members, social activities, integration, and participation.
Characteristics of the Study Population
We found, in total, 185 young people with SCI/D or SB in the German-speaking part of Switzerland. This confirms that SCI/D and SB are rare health conditions [9] and underlines the need for specialized health-care services and international collaboration [24]. We contacted the highly specialized SCI/D and SB centers and screened the hospital databases, and we assume that all young people with SCI/D or SB were hospitalized at some point in one of these centers. We did not check the official databank of national statistics, and we did not include the French- and Italian-speaking parts of Switzerland due to language restrictions. The gender representation in our study was in accordance with the expected distribution, with a predominance of male participants in the SCI/D population (80%) [7,10] and a female predominance in the SB population (63.2%) [19,30]. The education status of the parents in the SCI/D population represented the average in Switzerland. The higher educational level of the parents of young people with SB could be explained by the higher age of these parents at the birth of their children [26,31].
Perception of Quality of Life
We found differences in total QoL scores between the young people's reports and the parent-proxy reports. This confirms that, in general, parents tend to underestimate the health and psychological condition of their children [32,33]. The lower school scores in the SB group could be explained by the fact that the majority of children with SB, especially those with hydrocephalus, have cognitive impairments and therefore struggle more in school, or after school time, than those with SCI/D. It has been shown in several studies that most children with SB have a lower IQ compared to their able-bodied peers [26,34]. We found that the perception of overall QoL does not depend on physical health alone. When comparing the QoL scores in our population with published data collected from children without disabilities, we found our participants to report lower scores for physical health and the same scores for social and psychological health [27]. The results of our sample were, however, similar on all dimensions when compared to published data collected from children with rheumatoid arthritis [35] and cancer [36].
Importance, Satisfaction and Research Priorities
In general, social, psychological, and physical aspects influence the lived experience of young people with disabilities and their families [3]. Our population also mentioned relationships with family members as one of the most important aspects of their lives. As QoL has been defined as including physical and psychological aspects [37], researchers have recently suggested that social aspects should also be included in research [38,39]. Comparing importance, satisfaction, and research priorities, the scores for research priorities were generally lower than those for the other aspects. It is possible that the participants, in particular the younger children, were overstrained by questions about research priorities [1].
We realized that importance, satisfaction, and research priorities are different constructs and are perceived as such by participants, and we concluded that something may be really important to someone's wellbeing yet not require research. Nevertheless, many of the mentioned aspects, for example the presence of muscle spasms in participants with SCI/D, pain, and bowel and bladder management issues, were already among the most important research topics and research plans for adults and could be specifically addressed in younger people [1]. Recent research concerning access to health services has been conducted utilizing new methodological concepts and health system interventions that can be used to optimize the satisfaction of young people and families living with SCI/D or SB [19,40]. The results obtained for mobility and accessibility showed that these domains play an important role in the everyday lives of the young people and their parents. The free answers also indicated that parents in particular are aware of several topics covering both specific and general bio-psycho-social aspects.
All the data collected for this study were self-reports or parent-proxy reports and are vulnerable to the problems and biases connected with self- and proxy-report measures. Owing to the low incidence of SCI/D and SB, we detected only 185 eligible participants, and, of those, the response rate was about 30% on average, with an approximately 50% higher response among the participants with SCI/D. Although we sent reminders and followed up with phone calls in the no-response cases, the response rate reduces the generalizability of these results. The low number of participants, especially those with SCI/D, did not allow for complex statistical analyses, so some subgroup analyses were conducted for young people with SB only. The descriptions of different topics might therefore be overestimated due to the low number of responses. In particular, our sample did not contain any individuals with SCI/D or SB younger than 3 years of age. This clearly shows the relevance of international multicenter studies in overcoming the challenge of low participation numbers. As we found a diverse range of perspectives, we think that we did not miss highly relevant perspectives. However, upcoming studies should include different languages and cultural regions. Age-specific versions of the questionnaires were generated to ensure that participants were provided with age-appropriate and cognitively relevant questions. However, this means that some of our data were not directly comparable across all age categories and cultural regions. As the cognitive development of children differs between individuals, larger cohorts are necessary to address the different age-related needs and perspectives.
Conclusions
Social, physical, and psychological topics were viewed in slightly different ways in all subgroups with regard to importance, satisfaction, and research relevance. Parents tend to underestimate the health and psychological condition of their children, so the perspectives of young people themselves should also be integrated when research topics are selected in the future. The highly ranked social participation topics, as well as physical functioning, pressure injury prevention, and bowel management, should be included in the prioritized research topics. This knowledge will be of great importance, helping rehabilitation clinics and health services to optimize their care and develop guidelines based on patients' and caregivers' perspectives.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/children9030318/s1: (A) German-translated questionnaires: part I: basic information; part II: PedsQL™; part III: HLD, all age groups; part IV: neurological information. (B) Table S1: Characteristics (age groups, gender, lesion level); Table S2: PedsQL™: sub-scores of physical, emotional, social, and school functioning and physical health sum; Table S3: Health and Life Domain Questionnaire; Table S4: Free-text section: additional research priorities for young people with SCI/D and SB and their parents; Table S5: Age adaptation of surveys; Table S6: Educational level of young people with SCI/D and SB and their parents.
Author Contributions: I.B., P.L., E.R., and A.S.-S. were responsible for the conception of the study, the data collection, the analyses, and the writing of the manuscript. G.M. performed the statistical analyses and revised the manuscript. S.G., I.E.-H., B.P., C.S., and S.S. were engaged in the recruitment process, gave critical feedback on the study conception, and revised the manuscript. M.A., E.H.K., and J.T. were involved in the conception of the international study design and the development of the questionnaire and gave critical feedback on the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: Irina Benninger received funding from the Swiss Paraplegic Foundation and the Swiss Society of Paraplegia. Erich Rutz was supported by the Bob Dicken's research fellowship. We are grateful to the Buckinghamshire NHS Trust for initially sponsoring this study.
Institutional Review Board Statement: This study was reviewed and approved by the local ethics committee (EKNZ, Ethikkommission Nordwest- und Zentralschweiz), BASEC No. Req-2018-01027. We certify that all applicable institutional regulations concerning the ethical use of human volunteers were followed during the course of the research. The study was performed according to good clinical practice (GCP) guidance and the Declaration of Helsinki.
Informed Consent Statement: Informed consent was received from all young people and parents, or from parents alone, depending on age.
Data Availability Statement: Electronic data are stored by the corresponding author at the Swiss Paraplegic Centre. Source data and identification data are stored in a locked archive room at the Swiss Paraplegic Centre with limited and controlled access for a minimum of 15 years.
Framework for Solid-Organ Transplantation During COVID-19 Pandemic in Europe
Introduction
Since the effect of the COVID-19 pandemic on solid-organ transplantation (SOT) is unclear, an online survey on the specific framework of leading European transplant centers (n=155) in 31 European countries was conducted between April 24 and May 15, 2020.
Methods
A questionnaire was designed to collect information on restrictions on SOT, protective measures, (non)governmental information policies, and individual opinions on how to deal with SOT during COVID-19.
Results
The response rate was 37.4% (58 of 155). Overall, 84.5% reported an effect of COVID-19 on SOT in Europe. In 49% of these, limited capacity was mentioned, and in 51% the reason for restricted resources was strategic preparedness. As a result, SOT was totally or partially suspended for several weeks. In sum, 93.1% of centers implemented protective measures against COVID-19. Nongovernmental information policies were felt to be adequate in 90%. Continuation of transplant activities was desired by 97% of centers.
Conclusion
The results of this survey suggested a need for more ICU capacity during COVID-19, in order to guarantee adequate and timely treatment of other patient cohorts in the surveyed countries.
Introduction
Severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) is responsible for the associated disease COVID-19, which first occurred in Wuhan, China, in December 2019. 1,2 The virus spread around the world within a few weeks 2 and became a public-health emergency of international concern at the end of January 3 and a pandemic on March 11, 2020. 4 At that point, the number of new cases in Europe had gone beyond those in China, and cases were doubling (depending on the country) within a few days. 5 All countries within Europe had confirmed COVID-19 cases and deaths; the most affected countries were Italy, Spain, France, and the UK. In order to prevent further spread of the virus, most European governments established lockdowns, including self-isolation, social distancing, the closing of schools, and the banning of events, which affected >250 million people. 6 During 2020, lockdowns and further waves of the pandemic alternated and were a never-ending challenge for hospital management and politics. At the time of writing, more than 102 million confirmed COVID-19 cases and over 2 million deaths had been reported worldwide. 7
Due to the COVID-19 pandemic, new challenges for health-care systems in all countries have arisen, and there has been uncompromising prioritization of hospital capacity and human resources toward COVID-19 patients. Depending on the severity of the wave of infections, there was a shortage of resources either by actual capacity (eg, Italy, Spain) 4,5,7 or by governmental and/or hospital policies in the sense of strategic preparedness (eg, Germany, Austria, Baltic states). In both cases, treatment was restricted for all patient cohorts other than emergencies. Surgical societies drew up lists of procedures that could be performed during the pandemic, including transplantations as life-saving procedures. 8 The impact of COVID-19 on solid-organ transplantation (SOT) was limited at the beginning of the pandemic 9-11 and is still unclear and controversial. 12-18 Furthermore, it is unknown whether it affects the donor pool or whether there is a risk of virus transmission during transplantation.
In addition, the risk of infection of living donors and of possibly poorer outcomes for organ recipients remains unclear. Since there are no evidence-based guidelines on dealing with COVID-19 and transplantation, many centers have considered restricting (by urgency or risk stratification) or even stopping their activities. 11,19,20 To evaluate the framework for SOT in Europe during the first lockdown, we conducted an online survey between April 24 and May 15, 2020. A questionnaire was designed to collect information on restrictions on SOT, protective measures, (non)governmental information policies, and individual opinions on how to deal with SOT during COVID-19.
Study Population and Survey Conduct
The survey link was distributed between April 24 and May 15, 2020 by email to transplant surgeons at established centers (n=155) within 31 European countries. It was requested that the survey be filled out only once per center. Three reminders were sent automatically at weekly intervals to those who did not respond to the initial email. Only fully completed surveys could be returned.
Survey Design and Topics
This questionnaire on COVID-19 and SOT was created at the Division of General, Visceral, and Transplant Surgery, Medical University of Graz, Austria (Figure 1). The survey comprised multiple-choice questions and yes/no questions; depending on the answer to the latter, more detailed questions were asked. The first part of the survey included general questions on the location (country) of the transplant center; hospital size, represented by total beds (up to 500, up to 1,000, up to 1,500, up to 2,000, >2,000); and both total intensive care unit (ICU) beds and intermediate-care unit (IMC) beds (up to 20, up to 50, up to 100, up to 150, >150). Furthermore, specific data on the transplant centers were evaluated, including transplant-dedicated capacity, ie, ICU/IMC beds (up to five, up to ten, up to 20, >20). Moreover, both the number of transplantations per year (up to 25, up to 50, up to 100, >100) and the number of transplant surgeons (up to five, up to ten, up to 20, >20) were asked about. Respondents were required to fill in the type of organ and the origin of grafts (deceased, living).
Survey Topics
Data on the following topics were collected:
(I) Restrictions on SOT (questions 3.1-3.8): It was asked whether there had been an effect on the transplant program and, if yes, whether it was caused by a shortage of resources or by hospital or governmental policies. In both cases, we intended to find out which resources were limited (beds, equipment, medication, staff) and whether there was a related impairment of the transplant program (same selection, but additionally "only for highly urgent or the sickest patients" in liver, kidney, heart, and lung transplantation).
(II) Protective measures against COVID-19 in transplant centers.
(III) (Non)governmental information and policies (questions 4.1-4.10): These questions aimed to evaluate the sufficiency of support with relevant information provided by local (hospital administration, physicians' chamber, governmental), national (governmental, physicians' chamber, societies, organ-procurement organization), and international (governmental - European Union (EU), societies, organ-procurement organization) institutions.
(IV) Personal opinion on how to deal with SOT during COVID-19 (questions 3.9, 5.1, 5.2): The main question was whether transplant centers should continue their programs in general or based only on organ and/or urgency. If the latter was answered with "yes", the specific programs to be continued had to be selected.
Data Management and Statistical Analysis
The questionnaire was collected anonymously, and the collected data were imported automatically and evaluated using the EvaSys program. 21 Data are displayed as descriptive statistics.
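The survey evaluation itself was carried out in EvaSys; purely as a minimal sketch of the descriptive tabulation described here, the following shows how counts and percentages for one multiple-choice item could be derived. The record layout and field name are hypothetical, not the actual EvaSys export schema.

```python
from collections import Counter

def tabulate(responses: list, question: str) -> dict:
    """Counts and percentages of the answer options for one survey question."""
    counts = Counter(r[question] for r in responses if question in r)
    n = sum(counts.values())
    return {option: (c, round(100.0 * c / n, 1)) for option, c in counts.items()}

# Hypothetical returned records:
surveys = [{"effect_on_SOT": "yes"}, {"effect_on_SOT": "yes"}, {"effect_on_SOT": "no"}]
print(tabulate(surveys, "effect_on_SOT"))  # {'yes': (2, 66.7), 'no': (1, 33.3)}
print(f"Response rate: {58 / 155:.1%}")    # 37.4%, as reported below
```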
General Data
The response rate was 37.4% (58 of 155), with responding centers from 17 European countries (Figure 2).
General Information on Transplant Centers
All kinds of SOT were represented in the survey. Due to the differing severity of the COVID-19 pandemic, health-care systems, and governments among European countries, there was a certain heterogeneity between countries concerning the impact on their transplant centers (Table 1). Respondents from Italy reported an effect of COVID-19 on transplant programs of 100% (50% shortage of resources and 50% hospital or governmental policies). In up to 25%, the entire transplant program had been stopped. In Austria, 60% of resources were limited by hospital and/or governmental policies; in Germany, this was only 22.7%. In Austria, the deceased kidney transplantation program had stopped in 100%, the living kidney program in 83.3%, the pediatric kidney program in 50%, and the pancreas program in 83.3%. In Germany, the deceased kidney transplantation program was continued, but the living kidney program was stopped in 80%, and the pancreas transplantation program in 50%. A total of 13.8% of transplant centers disagreed with current policies. However, in some European countries there was agreement of only 50-60% with hospital or governmental policies concerning COVID-19 (Table 1).
Protective Measures Against COVID-19 in Transplant Centers
Most of the transplant centers (93.1%) implemented protective measures against COVID-19. Recipients were isolated in 64.8% of centers (single room in 71.4%, single room + airlock in 17.1%, single room + airlock + overpressure system in 11.4%). Visit bans were in place in 77.8%, and the number of health-care professionals allowed with the recipient at any one time had been restricted to two in 40.7% of centers. Further modified standards comprised the wearing of masks by both recipients and health-care professionals (surgical mask 85%/83.3%, FFP2 mask 12.5%/14.6%, FFP3 mask 2.5%/2.1% of centers). Changes in the immunosuppressive regimen were carried out in only 7.4%, and no changes in prophylactic medication with antibiotic, antimycotic, or antiviral therapy were reported. COVID-19 PCR testing was in place for recipients in 93.1% and for donors in 94.8%.
Personal Opinions on How to Deal with SOT During COVID-19
Of all transplant centers, 96.6% would continue transplant activity during COVID-19, either in general (52%) or based on organ and/or urgency (48%).
Discussion
Survey results demonstrated a tremendous impact of COVID-19 on transplant centers in Europe. The related impairment of transplant activity was caused either by an actual shortage of resources or by strategic preparedness mandated by hospital/governmental policies; both reasons for restrictions were present to the same extent. The actual impairment in numbers, however, remains unclear. This is the first assessment of the impact of COVID-19 on SOT in Europe. Other surveys that have been carried out recently 22-28 focused on different topics and were conducted only within one or a few countries. A survey with similar questions was carried out in the US. 29
This survey may be the basis for speculating on the shortage of ICU capacity needed to provide health-care support for both COVID-19 and all other patients in most countries.
Impact of Shortage of Resources on Transplant Centers
At the height of the pandemic, several European countries were affected by total utilization of the capacity of their health systems (eg, Italy, Spain, and France). 5,7 Both restrictions and even suspension of transplant programs were noted (Table 2) in various corresponding countries (eg, Germany). The characteristics of the health-care systems and the sociodemographic needs of the different European countries vary widely, and the measures recommended by the WHO have been applied differently across different countries and regions. During the pandemic, countries such as Germany, which have more ICU beds per head of population (Table 2), were able to help other European nations that were temporarily or continuously overwhelmed with an excess of cases (eg, Italy). In Italy, ICUs had to be expanded and new hospitals built to cope with the demands of the escalating number of COVID-19 patients. Furthermore, human resources were reorganized, and retired professionals and volunteers were sent to hospitals in critical areas. In this situation, other patients, such as cancer or transplant patients, could no longer be cared for due to a lack of resources, above all limited beds, staff, and equipment. According to our survey, transplant centers in Italy were affected 100% (50% by shortage of resources and 50% by hospital or governmental policies). In up to 25%, whole transplant centers were closed. However, 25% of the responding centers did not agree with health-care policy. All the queried centers would have liked to continue their transplant activity during COVID-19 (52% in general, 48% based on organ and/or urgency) in order to be able to continue caring for their patients.
US data comparable to those from Italy were published recently. 29 Results of a nationwide survey demonstrated complete suspension of living kidney transplantation in up to 71.8% and of living liver transplantation in up to 67.7% of programs. Restrictions of deceased kidney-transplantation programs were declared in 84% and of deceased liver-transplant programs in 73.3%. Almost no restrictions were placed on heart- or lung-transplantation programs.
A completely different situation was observed in other European countries, where the severity of the COVID-19 pandemic was not as substantial. In order to be prepared for a possibly extreme COVID-19 situation, resources were kept free for safety reasons and thus restricted. Strategically prepared countries were Austria and Germany, for example, but interestingly there were differences between them. While in Austria 60% of resources were limited by hospital and/or governmental policy, in Germany it was only 22.7%. In Austria, the deceased kidney-transplantation program was 100% stopped, while in Germany it was continued. While transplant centers in Germany mostly agreed with hospital or governmental policies, in Austria there was disagreement of 40%. Almost all transplant centers in both countries (Austria 100%, Germany 95.5%) agreed to continue transplantation during COVID-19, either in general (Austria 30%, Germany 66.6%) or based on organ and/or urgency (Austria 70%, Germany 33.3%). Our results showed that the resources of all transplant centers in Europe were more or less affected by COVID-19.
The extent of the negative effect on transplantation depended on the severity of the infection wave, but also on strategic measures within individual countries. An important goal for the future must be to learn from the experiences of all countries and to develop strategies that can support a reduction in the collateral damage of pandemics on transplant centers. 30-32 It must be clear that a dramatic increase in patients requires an increase in resources, unless restrictions on treating patients with ICU/IMC requirements who are in need of isolation are accepted.
Another approach to continuing SOT is the restriction of treatment to highly selected patients. As an example, the sickest-first strategy could be applied. Alternatively, the healthiest patients could be treated, on the assumption of better resistance against COVID-19 infection and thus better survival after transplantation. Data from this survey clearly showed that up to 33.3% of the centers transplanted only highly urgent or the sickest patients. Deceased-donor and pediatric programs should be continued in preference to living-donor transplant programs. However, transplanting only healthier recipients with the best-quality organs and the lowest risk of delayed graft function might also be a good strategy. 29 When trying to select, some criteria might be considered: in kidney transplantation, preemptive transplantation, highly sensitized patients, those with negative crossmatching, or higher-acuity patients; in liver transplantation, patients could be stratified by the severity of their illness, first transplants, and those with tumors without other options. For heart and lung transplantation, as life-saving procedures, there were almost no restrictions, and there should not be any in future. In any case, COVID-19 recipient- and donor-testing availability is mandatory, and attention should be paid to a low-risk COVID-19 setting (both donor and recipient SARS-CoV-2-negative).
Unknown Effects of COVID-19 on SOT
In the middle of March 2020, when the COVID-19 pandemic started, knowledge of the impact of the virus in a transplant setting, including immunosuppression, was scarce and discouraging. 19,33,34 Therefore, some transplant centers considered stopping their programs 11,12,20 or restricting transplantation to highly urgent or risk-stratified patients. Risk stratification was especially difficult due to pending data or even scores. During the following weeks, many recommendations from transplantation and surgical societies, clinical studies, and case reports were published. 35 In our center, liver transplantation was safely performed during COVID-19 in a low-risk setting (both donor and recipient SARS-CoV-2-negative, low disease-severity recipients [labMELD score <25], and a low donor-risk-index graft). 41 However, data concerning the impact of COVID-19 on patients after transplantation are still rare and controversial; therefore, we urgently need further studies that could provide important information for the care of transplant recipients.
Conclusion
This is the first survey to give an overview of 58 transplant centers within 17 European countries and their framework for SOT during the COVID-19 pandemic. The results of this survey suggest a desperate need for ICU capacity during COVID-19 in most countries to guarantee adequate and timely treatment of other patient cohorts.
Bioengineered human skeletal muscle capable of functional regeneration
Background
Skeletal muscle (SkM) regenerates following injury, replacing damaged tissue with high fidelity. However, in serious injuries, non-regenerative defects leave patients with loss of function, increased re-injury risk and often chronic pain. Progress in treating these non-regenerative defects has been slow, with advances only occurring where a comprehensive understanding of regeneration has been gained. Tissue engineering has allowed the development of bioengineered models of SkM which regenerate following injury to support research in regenerative physiology. To date, however, no studies have utilised human myogenic precursor cells (hMPCs) to closely mimic functional human regenerative physiology.
Results
Here we address some of the difficulties associated with cell number and hMPC mitogenicity using magnetic-activated cell sorting (MACS), for the marker CD56, and media supplementation with fibroblast growth factor 2 (FGF-2) and B-27 supplement. Cell sorting allowed extended expansion of myogenic cells, and supplementation was shown to improve myogenesis within engineered tissues and force generation at maturity. In addition, these engineered human SkM regenerated following barium chloride (BaCl2) injury. Following injury, reductions in function (87.5%) and myotube number (33.3%) were observed, followed by a proliferative phase with increased MyoD+ cells and a subsequent recovery of function and myotube number. An expansion of the Pax7+ cell population was observed across recovery, suggesting an ability to generate Pax7+ cells within the tissue, similar to the self-renewal of satellite cells seen in vivo.
Conclusions
This work outlines an engineered human SkM capable of functional regeneration following injury, built upon an open-source system, adding to the pre-clinical testing toolbox to improve the understanding of basic regenerative physiology.
Background
Skeletal muscle possesses an innate and robust capacity to regenerate following injury, with most injuries regenerating the tissue to a state indistinguishable from that prior to injury [1]. This regenerative capacity in vivo relies upon the presence of a resident stem cell population, satellite cells (SCs), which reside between the plasma membrane (sarcolemma) of muscle fibres and the encasing basement membrane [2-4]. SCs are characterised by the unique position they occupy within the tissue, but also by the expression of the stem cell transcription factor Pax7 [5]. Following injury, SCs are activated and proliferate readily [6], producing committed myogenic precursor cells (MPCs) marked by the presence of MyoD expression [7,8]. MPCs then fuse together and with damaged muscle fibres, regenerating myofibres lost through injury [9-11]. In addition to the myogenic lineage, non-myogenic stem cells (fibro/adipogenic progenitors, FAPs) and immune cells support regeneration by modifying the extracellular matrix and coordinating repair and regeneration [12-14]. The interactions between these additional cell types and MPCs have been shown to be vital in the regenerative process [15-17]. In severe muscle traumas such as volumetric muscle loss (VML), surgical trauma or partial muscle tears, the regenerative capacity of skeletal muscle can be overcome, leading to non-regenerative defects such as fibrosis [18-21], interstitial adipose accumulation [13,22,23] and heterotopic bone formation [24,25].
In these circumstances, individuals are left with complications such as reduced function, an increased chance of re-injury and debilitating pain [26-28]. In animal models of severe traumas, there has been limited success in manipulating the regenerative process to promote muscle regeneration and reduce non-regenerative defects, although where a clear biochemical rationale exists, for example limiting the expansion of specific cell populations with small molecules, progress has been made [29-31]. This limited success may be attributed to the lack of easily manipulated, medium-throughput models of injury and regeneration, thus limiting understanding of the fundamental biology of muscle injury and regeneration/repair.
Due to the complex cell-cell interactions and three-dimensional (3D) environment necessary to accurately mimic skeletal muscle regeneration, models of muscle injury to date have been based predominantly around laboratory animals. However, animal models face limitations of low experimental throughput, complex genetic manipulations, complex pharmacological manipulation (compared to cell cultures) and ethical considerations, in addition to inherent biological variation from humans; there is therefore a clear requirement to develop accurate and robust ex vivo models of pathophysiology [32,33]. Advances in tissue engineering have made it possible to create engineered skeletal muscle to understand complex physiological phenomena. Engineered skeletal muscles from cell lines [34,35], primary laboratory animal MPCs [36-38], pluripotent stem cell (PSC)-derived myocytes [39-41] and primary human MPCs [42-45] have been demonstrated, but relatively few of these engineered skeletal muscles have been shown to possess a regenerative capacity following injury [46-49]. The regenerative processes of some engineered muscle models have shown clear correlations to those of in vivo muscle, and so the utilisation of these engineered tissues in studies of regeneration is a clear opportunity to increase our understanding of skeletal muscle regenerative physiology [46,49,50].
Previous engineered models of regeneration have used the snake venom cardiotoxin (CTX) to induce a chemical muscle injury. CTX is widely used in animal models to produce a specific cellular model of muscle injury and has been shown to be effective in engineered tissues [49,50]. The model presented here takes a similar approach utilising barium chloride (BaCl2), which is also widely used in laboratory animals as a specific myotoxin and produces a comparable injury type [51]. BaCl2 was chosen as an injurious stimulus due to previous in vivo publications, its high water solubility, and its ready availability with low regulatory restrictions, allowing easy and reproducible in vitro application [20,51]. In addition, the mechanism of injury following BaCl2 treatment is a simple cellular injury, specifically removing myotubes without reducing mononuclear cell number. Although chemical insults do not mimic all of the damage usually seen in vivo, specifically the extracellular matrix destruction caused by mechanical injuries, these insults produce a specific and reproducible injury phenotype to ensure accurate model development [46]. To ensure that the data produced by these engineered models are as relevant as possible and that these models are exploited to their full potential, engineered muscles utilising primary human cells, with a regenerative capacity mimicking that of in vivo muscle, should be developed.
To account for the heterogeneity of cells found within native muscle, primary tissue-derived MPCs, and not human iPSCs, present the most biomimetic option for creating a representative model of human skeletal muscle regeneration. Here we present a robust, high-content protocol to generate functional engineered human skeletal muscles from primary human MPCs. Utilising cell population sorting and media optimisation, we present human engineered skeletal muscles which regenerate function and morphology completely following injury. These engineered muscles, in addition to supporting regeneration, also contain a self-renewing stem cell niche, presenting an opportunity to accurately study the biology of human skeletal muscle regeneration ex vivo.
Results
Remixing CD56+ and CD56− cell populations produces robust tissue-engineered muscles
The sorting of non-myogenic and myogenic populations from human explant cultures allows extended culture periods within the myogenic populations without a significant loss of desmin positivity, a marker of myogenic potential (Additional file 1: Fig. S1a/b). However, the use of only myogenic cells in collagen/Matrigel® hydrogels produces engineered muscles of highly variable quality due to the apparent inability of these cells to reproducibly deform the hydrogel matrix (Additional file 1: Fig. S1c/d). Nevertheless, due to the high proportion of myogenic cells in the CD56+ fraction (hereafter referred to as CD56+), these engineered muscles, when successful, produce significantly more myotubes than unsorted equivalents (Additional file 1: Fig. S1e-i).
To exploit the high myogenic potential of CD56+ cells, a dose remixing experiment was undertaken to identify the lowest proportion of CD56− cells required to reproducibly deform collagen/Matrigel® constructs. The deformation of these constructs is a key feature of their development, generating tension to align developing myotubes. The CD56− cell fraction was therefore remixed with the CD56+ fraction at various ratios (10%, 9:1 CD56+:CD56−; 30%, 7:3 CD56+:CD56−; and 50%, 1:1 CD56+:CD56−), and hydrogel deformation and the morphological appearance of the tissue were examined. All ratios produced engineered muscles which deformed robustly, without any significant difference between conditions (Fig. 1a/b). Clear trends in morphological appearance were present across conditions, with 10% CD56− constructs displaying the highest number of myotubes per square mm and the largest percentage of construct area occupied by myotubes (Fig. 1d, e, and g). A small decrease between 10 and 30% CD56−, and a much larger step between 30 and 50% CD56− engineered muscles, was observed, indicating that increasing the CD56− fraction reduced myogenic potential (Fig. 1d, f). As the proportion of CD56− cells increased, these measures of myotube formation were reduced, although this trend was not significant. Myotube cross-sectional areas (CSA) remained unaffected by the proportion of CD56− cells included in constructs (Fig. 1f). As 10% CD56− remixing produced robust deformation and allowed the maximum inclusion of the CD56+ myogenic fraction, this remixing ratio was carried forward for all future experiments.
Fig. 1 Remixing of CD56-sorted populations leads to robust engineered muscles. Throughout, percentages refer to the percentage of CD56− cells remixed to create the human cell population. Black in all graphs denotes 10%, red 30% and blue 50% CD56−. a Photographs of engineered muscles across time showing deformation. Scale bar 5 mm. b Deformation over time. c Experimental scheme showing time points of analysis. d Representative micrographs stained for MyHC (green) and nuclei (DAPI, blue). Scale bar 100 μm. e-g Graphs displaying MyHC percentage coverage, myotube cross-sectional area (CSA) and myotubes per mm². All graphs display mean ± SD; individual repeat means are displayed as points. No statistically significant comparisons were identified. n = 9 samples across 3 repeats
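The three morphological measures reported throughout these results (percentage MyHC coverage, myotubes per square mm, and myotube CSA) can all be derived from a segmented cross-section. The sketch below is an illustrative NumPy/SciPy reimplementation only, not the image-analysis pipeline actually used: it assumes a binary MyHC mask and treats each connected MyHC+ object as a single myotube, which is a simplification.

```python
import numpy as np
from scipy import ndimage

def myotube_morphometrics(myhc_mask: np.ndarray, um_per_px: float) -> dict:
    """myhc_mask: 2D boolean array of a transverse section (True = MyHC+ pixel)."""
    coverage_pct = 100.0 * myhc_mask.mean()          # % of section that is MyHC+
    labels, n = ndimage.label(myhc_mask)             # connected MyHC+ objects
    px_area_um2 = um_per_px ** 2
    # Pixel count of each labelled object, converted to an area in um^2.
    sizes = ndimage.sum(myhc_mask, labels, index=range(1, n + 1))
    mean_csa_um2 = float(np.mean(sizes)) * px_area_um2 if n else 0.0
    section_area_mm2 = myhc_mask.size * px_area_um2 / 1e6
    return {"myhc_coverage_pct": coverage_pct,
            "myotubes_per_mm2": n / section_area_mm2,
            "mean_csa_um2": mean_csa_um2}

# Example: a 1000 x 1000 px section imaged at 0.5 um per pixel,
# containing a single 10 x 10 px (25 um^2) MyHC+ object.
mask = np.zeros((1000, 1000), dtype=bool)
mask[100:110, 100:110] = True
print(myotube_morphometrics(mask, um_per_px=0.5))
```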
Media supplementation increases morphological and functional markers of muscle maturity
Engineered muscles (CD56−:CD56+, 10:90) were supplemented with 2% B-27 supplement either in the growth phase (days 0-4) or in the differentiation phase (days 4-14) of culture. Supplementation with B-27 increased nuclei number approximately 2-fold irrespective of the phase in which it was added (p = 0.001, p = 0.004, Fig. 2e). However, only supplementation in the differentiation phase of culture led to increases in total myosin heavy chain (MyHC) coverage, myotube number and myotube CSA (p < 0.001, p < 0.001, p = 0.004, Fig. 2a-d). No significant changes in force generation were observed in any B-27 supplementation condition; however, supplementation in the differentiation phase of development did lead to a mean increase in tetanic force of 8-fold, although this response was highly variable between repeats.
Supplementation of GM with FGF2 at 5 ng/mL followed by 10 days in B-27-supplemented DM (+FGF, Fig. 2) was compared to unsupplemented GM followed by B-27-supplemented DM (−FGF, Fig. 2). FGF2 supplementation did not change any morphological measure significantly (Fig. 2i, k-n). However, FGF2 addition did increase force generation significantly for both tetanus (1.9-fold, p = 0.017) and twitch (1.74-fold, p = 0.044). These data allowed the selection of FGF2-supplemented GM and B-27-supplemented DM as suitable media for the culture of 10% CD56− engineered human skeletal muscles.
Fig. 2 Media supplementation of remixed engineered muscles increases morphological maturity and increases functional capacity. a Representative micrographs stained for myosin heavy chain (MyHC, green) and nuclei (DAPI, blue). Scale bar 100 μm. b-f Graphs displaying percentage MyHC coverage, myotubes per mm², myotube cross-sectional area (CSA), nuclei per section and normalised force (normalised to control), respectively. Mean tetanus force at control 45.9 μN, 0.96 kPa; twitch force 22.7 μN, 0.048 kPa. All graphs display mean ± SD. g, h Experimental schemes showing conditions coloured similarly to graphs; scheme (g) applies to a-f whilst scheme (h) applies to i-n. i Representative micrographs stained for myosin heavy chain (MyHC, green) and nuclei (DAPI, blue). Scale bar 100 μm. j Normalised force (normalised to −FGF condition); mean tetanic force for −FGF condition 14.3 μN, 0.10 kPa; twitch 6.2 μN, 0.35 kPa. k-n Graphs displaying percentage MyHC coverage, myotubes per mm², myotube cross-sectional area (CSA) and nuclei per section, respectively. All graphs display mean ± SD; individual repeat means are displayed as dots. Statistical significance from control (a-f) and −FGF (i-n) is denoted as *p ≤ 0.05, ***p ≤ 0.001. n = 9 samples across 3 repeats
Human engineered muscles contain multinucleated myotubes, Pax7+ nuclei and laminin organisation
To identify how similar engineered skeletal muscle was in its matrix and cellular organisation to somatic muscle, features of in vivo muscle morphology were examined. Longitudinal staining for MyHC confirmed that structures positive for MyHC were multinucleated myotubes (Fig. 3a). Laminin staining of cross-sections showed clear and distinct concentrations of laminin surrounding virtually all myotubes within engineered muscles, closely reminiscent of the basement membrane organisation of in vivo muscle (Fig. 3b/c). However, collagen IV, another basement membrane component and a component of Matrigel®, did not show a similar organisation, suggesting this enrichment is specific for laminin and not a general compression of the matrix (Additional file 1: Fig. S3). Longitudinal analysis showed the presence of Pax7+ nuclei, with a small proportion of these associated with a laminin-rich area of matrix or the plasma membrane of myotubes (Fig. 3b).
Engineered skeletal muscles are capable of functional and morphological regeneration following injury
To examine the regenerative capacity, and so the function of the Pax7 niche, engineered muscles were exposed to an injurious stimulus in the form of BaCl2 exposure. Injury caused an initial reduction of 27.5% in MyHC-positive coverage (p = 0.006), and this reduction in coverage became more pronounced up to 4 days post injury (54.7%, p < 0.001, Fig. 4a, d). This reduction was caused predominantly by a loss of myotubes (myotubes per square mm), from 972 mm⁻² pre-injury to 789 mm⁻² immediately post injury (p = 0.005, Fig. 4a, d), further reducing to 638 mm⁻² (p = 0.001) 4 days post injury. This reduction was also accompanied by moderate atrophy of myotubes, with CSA reducing 9% following injury, although not significantly. This atrophy increased to a maximum of 29% (38.9 μm², p < 0.001) at 4 days post injury. Across all measures of myotube size and density, the largest reduction was seen between control and immediately post injury (6 h BaCl2 incubation); however, these measures continued to decline across the first 4 days of regeneration. At 9 days post injury, MyHC coverage had recovered to 85% of uninjured levels and was no longer significantly reduced.
In addition, myotubes per square mm had returned to 94% of uninjured controls, suggesting that myotube CSA was still depressed. Indeed, myotube CSA at day 9 post injury was significantly reduced, with an average CSA of 115 μm² compared to 133 μm² at control (p = 0.05, Fig. 4a, d). Following the full 14 days of regeneration, all measures of myotube density and size had returned to control levels, showing complete morphological regeneration of the tissue.
With morphological change, an accompanying reduction in functional output is expected. Immediately following injury, both tetanic and twitch force were reduced by an average of 62% compared to control (p = 0.003, p = 0.024, Fig. 4c). Both measures of function remained significantly depressed at 2 days and 4 days post injury when compared to control. At day 9 post injury, both twitch and tetanic force had recovered to control levels and remained comparable to control at the end of regeneration at 14 days post injury (Fig. 4c).
Fig. 4 Human engineered muscles regenerate functionally and morphologically following injury. a Representative micrographs of engineered muscle cross-sections, stained for MyHC (green) and nuclei (DAPI, blue). Scale bar represents 100 μm. b Representative force traces used for tetanus and twitch force measurements. c Normalised force measurements across recovery. Means at control: tetanus 101.6 μN, 0.19 kPa; twitch 36.6 μN, 0.072 kPa. d Normalised morphological measures across recovery. Means at control: percentage MyHC coverage 13.0%, myotube cross-sectional area (CSA) 132.6 μm², myotubes per mm² 971.6. b-d All graphs display control-normalised means ± SD; individual repeat means are displayed as points. The dashed line represents the level at control, normalised to 1, on all graphs. Statistical significance from control is denoted as *p ≤ 0.05, **p ≤ 0.01 and ***p ≤ 0.001. n = 15 samples across 5 repeats
Dynamics of myogenic and non-myogenic cell populations during regeneration
Nuclei per square mm were recorded to ensure that engineered muscles remained viable throughout recovery, and no significant variation in this measure was observed (Fig. 5c). However, an increase of 15% was observed 2 days post injury (p > 0.05), although this had resolved by 4 days post injury. To examine whether distinct populations within the overall cell population were expanding, the nuclear markers of the myogenic lineage, Pax7 and MyoD, were stained for and expressed as a percentage of total nuclei. MyoD, which marks proliferative myoblasts committed to the myogenic lineage, showed no change immediately following injury. However, a significant increase from 26.4% at control to 38.6% was observed after 2 days of regeneration (p = 0.005, Fig. 5a). This expanded MyoD+ population had completely collapsed following a further 2 days, at 4 days post injury, with the percentage of MyoD-positive nuclei returning to 26.0%. Through the remainder of regeneration, the percentage of MyoD-positive nuclei remained comparable to control. Pax7, a marker of satellite cells in vivo, was found to be initially rare, making up only 0.42% of total nuclei at control, and no significant changes in the percentage of Pax7-positive nuclei were observed in the first 4 days following injury. However, following 9 days of regeneration, Pax7+ nuclei comprised 2.4% of total nuclei, a significant increase (p = …).
Examination of Pax7 expression by RT-PCR showed a similar trend to the Pax7 staining. Pax7 mRNA expression increased throughout recovery, with the 3.2- and 3.5-fold increases at 9 and 14 days post injury being significant (p = 0.02, p = 0.015). Although the 3-fold changes observed by RT-PCR are smaller than the approximately 6-fold change observed in staining, the trends appear to be consistent, with increased Pax7+ nuclei appearing through the final 10 days of regeneration (Fig. 5d). Myogenin (MyoG) expression was also analysed; as a later myogenic marker than MyoD, it would be expected to follow a similar trend but temporally delayed. The trend of MyoG expression is broadly similar to that of MyoD-positive nuclei, with a rise following 2 days of regeneration, partially resolving at day 9 post injury. An increase at 14 days post injury is observed, although this is not statistically significant (Fig. 5d).
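The fold changes quoted above are the typical output of relative qPCR quantification. The source does not state which quantification method was used, so the sketch below shows the common 2^-ΔΔCt (Livak) approach purely as an illustration; the Ct values are invented and chosen only so that the result matches the order of the reported ~3.2-fold Pax7 increase.

```python
def fold_change(ct_target: float, ct_ref: float,
                ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Relative expression by the 2^-ddCt method (illustrative; see above)."""
    # Normalise the target gene to a reference gene in sample and control,
    # then compare the two normalised values.
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -dd_ct

# Invented Ct values: a ddCt of -1.68 gives a ~3.2-fold increase, comparable
# in magnitude to the Pax7 result at 9 days post injury.
print(round(fold_change(24.0, 18.0, 25.68, 18.0), 2))  # 3.2
```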
This expression was reduced to 1.37-fold at 9 days post injury (p < 0.001) before rising again to 2.16-fold (p < 0.001, Additional file 1: Fig. S4) after 14 days of regeneration. The upregulation of Pparg suggests a level of non-myogenic repair occurring in response to injury.

Discussion

Here we have demonstrated, by the exploitation of well-established tissue engineering approaches, a protocol which allows the generation of engineered human muscle with the capacity to regenerate function following injury. The tissues generated are functional, contain the mixed cell populations represented in muscle, and demonstrate myogenic population dynamics reminiscent of in vivo tissue. Although previous work has shown that engineered muscles can regenerate following injury [46-49], no previous work has demonstrated functional recovery in human engineered muscles [52]. The injury sustained following BaCl₂ treatment is robust, leading to significant loss of myotubes and function. As with in vivo skeletal muscle wound healing and previous primary tissue engineered muscles, injury is followed by a period (in this model, 4 days) of impaired function and reduced myotube number [48,49,51]. This contrasts with our previous cell-line-based 3D models, which isolate the response of committed myogenic cells and show a very rapid recovery of function with no prolonged post-injury period [46]. Following this initial period without myotube formation, a complete recovery of myotube number and function was observed, in line with similar injury types in animal models and previous engineered tissues [46,47,49,51]. This demonstrates the regenerative capacity of human skeletal muscle ex vivo and positions this model as a useful tool in examining the underlying biology of skeletal muscle regeneration and repair.

Initially, we demonstrate that the remixing of CD56− and CD56+ cells is required to create robust engineered muscles, in line with our previous work [45,53]. Other models have shown that it is possible to generate CD56+-only engineered tissue, but that work utilises a fibrin-based hydrogel and an FGF2-supplemented myogenic growth medium, and these differences may explain the contrasting requirement for CD56− cells [54]. This protocol demonstrates that only a very low proportion of non-myogenic cells is required to drive hydrogel deformation, allowing a high percentage of myogenic cells to be exploited (Fig. 1). MidiMACS sorting is unlikely to be absolutely efficient in producing a population of homogeneous cells on the basis of CD56 expression; however, the desmin positivity values in excess of 90% observed here suggest a highly efficient sorting yield. In addition to increasing myogenic potential, sorting and remixing reduce donor-to-donor variation in desmin positivity, a common observation with hMPCs, removing some of the donor variability and likely increasing experimental power. Finally, sorting allows separate expansion of myogenic and non-myogenic populations and therefore extends the usability of hMPCs. Unsorted hMPCs rarely retain sufficient desmin positivity for engineered tissue after 6 passages, and therefore a standard microbiopsy sample would yield approximately 100 × 50 μL engineered muscles. However, sorting cells allows further expansion (up to at least 9 passages, which could potentially be extended) and a projected 20-fold increase in MPC yield, allowing 2000 engineered muscles to be generated per biopsy sample.
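The quoted yields follow from simple compounding of splits, as the back-of-envelope sketch below shows. The cells-per-construct figure (4 × 10⁶ cells/mL × 50 μL = 2 × 10⁵ cells) and the 1:3 split ratio come from the Methods, while the baseline cell number and the assumption of loss-free expansion are purely illustrative.

```python
# Back-of-envelope projection of engineered-muscle yield per microbiopsy.
# Grounded figures: 50 uL constructs at 4e6 cells/mL (2e5 cells each) and a
# 1:3 split at each passage (Methods). The p6 baseline below is hypothetical.

CELLS_PER_CONSTRUCT = 4e6 * 0.050    # 4e6 cells/mL * 50 uL = 2e5 cells
SPLIT_RATIO = 3                      # cultures split 1:3 at each passage

def constructs_from_biopsy(cells_at_reference_passage, extra_passages):
    """Constructs obtainable after `extra_passages` further 1:3 splits,
    ignoring plating losses and any loss of desmin positivity."""
    expanded = cells_at_reference_passage * SPLIT_RATIO ** extra_passages
    return expanded / CELLS_PER_CONSTRUCT

baseline_p6 = 2e7   # hypothetical usable cell number at passage 6
print(constructs_from_biopsy(baseline_p6, 0))   # ~100 constructs (unsorted, p6)
print(constructs_from_biopsy(baseline_p6, 3))   # ~2700 in the ideal case; with
                                                # realistic losses this is of the
                                                # order of the ~20-fold gain quoted
```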
Taken together, these advantages provide a clear rationale for exploiting CD56 sorting for hMPC tissue engineering and indicate that sorting solves one of the key limitations of using hMPCs. Remixing of separate populations provides a method to increase the cell yield and reproducibility of engineered human skeletal muscle by allowing the expansion of competing populations in isolation. A remixing ratio of 10:90 (−ve:+ve) was selected as this ratio allowed the maximum percentage of myogenic cells to be incorporated into a construct which reproducibly remodelled to generate tension between the anchor pins. We did not examine more in-depth markers of remodelling, such as matrix metalloproteases (MMPs) or their tissue inhibitors (TIMPs), which may be of interest in future work to refine this model. In addition to remodelling the matrix, the CD56− fraction contains a range of other cell types found within the muscle (Additional file 1: Fig. S3) which have been shown in vivo to play important roles in muscle regeneration. This remixing is therefore also key to preserving the biological validity and utility of hMPCs. PDGFRα+ cells (FAPs), CD90+ cells (MSCs), CD45+ cells (immune lineage), and CD31+ cells (endothelial cells) were identified through flow cytometry, alongside TE7+ cells (interstitial fibroblasts). The sum of the population fractions, although calculated across two methods, approximates 99% of the total population, suggesting that the predominant lineages within this population are accounted for, although they were not confirmed beyond surface marker expression. All of these cell types have suggested roles within the regenerative process, and their inclusion may underpin some of the regenerative capacity of this model and, at the very least, provide an opportunity to examine in vitro how these populations behave following muscle insult.

However, even utilising CD56 sorting, without media supplementation engineered muscles of this type have relatively low numbers of myotubes and produce functional output close to the limit of detection, reducing the usefulness of this key measure. To further improve the quality of engineered tissues, a growth factor supplement (FGF2) and a commercial supplement cocktail (B-27) were used. B-27 supplement is widely used in primary neuronal and stem cell cultures [55,56]. The supplement contains a broad range of components aimed at promoting redox balance, increasing metabolic flexibility, providing trace nutrients, and driving cell growth with supplemental growth factors (insulin) and hormones (triiodothyronine (T3), corticosterone, and progesterone). The roles of these components are broad, with protection from redox stress and provision of trace nutrients likely to promote cell survival and lead to increased cell numbers [57]. Growth factors and steroid hormones are more likely to have cell type-specific effects, with insulin having been shown in myoblasts to increase fusion and the expression of myogenic genes, as well as driving proliferation [58,59]. Thyroid hormones have well-established roles in regulating skeletal muscle growth and differentiation, with hypothyroidism leading to reduced muscle mass [60]. At a molecular level, the action of T3 on the expression of the myogenic genes MyoD and Myogenin has been clearly established, suggesting T3 will drive myogenesis in engineered muscles [61,62]. The effects of progesterone on skeletal muscle remain relatively unexamined, with the expression of progesterone receptors yet to be confirmed in myoblasts [63].
Finally, corticosterone, which has only a limited role in humans, where cortisol is the predominant corticosteroid, drives skeletal muscle atrophy and inhibits insulin signalling in vivo, and may perform a similar function in culture [64]. The paucity of data on the action of progesterone, and the apparently adversarial effects of insulin versus corticosterone, make it difficult to unpick precisely the role of each component in B-27, and future work may be required to better understand the mechanisms by which the supplement drives myogenesis. Previous work using B-27 with skeletal muscle myoblasts showed that it promotes myoblast survival, but not differentiation, in primary rat MPCs [65], whilst as a serum replacement for iPSCs B-27 promotes myogenesis [56]. In the first case, B-27 was included throughout culture, and this may better compare with the addition of B-27 to the growth phase in this work, where increases in cell number were observed but no increase in myogenesis. In contrast, in the work by Jiwlawat et al., B-27 is added to support terminal differentiation, which is similar to the addition of B-27 in the differentiation phase of this work. The growth factor FGF2 is known to promote myoblast proliferation, matrix remodelling and attachment, and is widely used in MPC culture [45,66], whilst also inhibiting differentiation [67,68]; it was therefore only used in the growth phase of engineered muscles. A doubling in force production shows an effect of supplementation, but this is not driven by an increase in nuclei at maturity. It is possible that proliferation happens earlier in FGF2-supplemented muscles or that there is a priming effect of FGF2 which leads to increased maturity and so force production. The data presented here do not allow this to be examined further. Together, these data demonstrate that supplementation effectively improves the quality of engineered muscle (MyHC coverage and force production) produced from hMPCs and, although this may not be the absolutely optimal supplementation protocol, presents a viable solution for producing functional human engineered muscle (Fig. 2). Although myotube CSA and MyHC coverage are low compared with somatic human muscle [69], morphological examination of engineered tissues reveals an organised basement membrane, shown by laminin rings surrounding muscle fibres, and the presence of Pax7-positive nuclei (Fig. 3). However, the niche present in native muscle contains approximately 5-8% Pax7+ nuclei, substantially greater (10-15-fold) than observed in control engineered tissues [70,71]. This may be explained by a lack of developmental cues, such as the exercise/injury seen in vivo, which activate satellite cells and cause proliferation [72]. These Pax7+ cells, however rare, are a key feature of skeletal muscle, underpinning the regenerative capacity of in vivo skeletal muscle and supporting tissue growth and turnover, and therefore should be present in models of engineered skeletal muscle [2,5,73]. To examine whether these engineered tissues follow similar patterns of regeneration to native skeletal muscle, myogenic and non-myogenic markers were examined through protein and mRNA expression. During the initial 2 days following injury, a proliferative response of MyoD+ myoblasts was observed before a return to pre-injury levels, a response consistent with in vivo data. However, no increase in Pax7+ nuclei was observed during this period, as would be expected from in vivo data [7].
This lack of Pax7 proliferation could be due to a lack of activation of these cells following injury, with regeneration driven instead by unfused MyoD+ MPCs, or due to the very low percentages of Pax7+ nuclei present making any changes difficult to detect. Distinguishing between these possibilities is difficult. However, the increases in Pax7 mRNA expression and Pax7+ nuclei later in regeneration suggest a capacity to expand this cell population and increase the proportion of these cells, potentially mimicking the self-renewing capacity of satellite cells in vivo [74,75]. It cannot, however, be absolutely excluded, without an equivalent of a contralateral control (which is not included here), that the increased prevalence of Pax7+ nuclei observed is due to the repeated growth and differentiation phases rather than being driven solely by the injury response. Indeed, the post-injury proportion of Pax7+ nuclei (2.1%) is more closely aligned to in vivo proportions (4-7%) than the pre-injury proportion (0.4%), suggesting that an injury stimulus may be required to trigger population expansion. We have, however, not presented any direct evidence here that the Pax7+ cells present in this model support regeneration directly. To achieve this, lineage tracing experiments with persistent Pax7-dependent markers would be required. Instead, it is possible, as suggested above, that unfused nuclei within the model, often referred to as reserve cells, support regeneration following injury rather than the Pax7+ cells. As the fusion index of control engineered muscles is estimated at approximately 40%, there are significant numbers of unfused Pax7− nuclei which could support regeneration. It is not possible to accurately estimate the relative contribution of these two populations without robust tracing experiments. The data presented do show that engineered human tissues can mimic some of the key events of regeneration, including the expansion of MPCs and the expansion of the Pax7+ cell population alongside the recovery of function and myotubes, making these engineered tissues an attractive model for understanding skeletal muscle regenerative physiology (Figs. 4 and 5).
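For reference, the fusion index quoted above is the standard count-based measure; a minimal sketch of the calculation follows, with illustrative counts rather than study data.

```python
# Standard fusion-index calculation: the percentage of nuclei residing inside
# MyHC-positive myotubes. The counts below are illustrative, not study data.

def fusion_index(nuclei_in_myotubes, total_nuclei):
    """Fusion index as a percentage of total nuclei."""
    if total_nuclei <= 0:
        raise ValueError("total_nuclei must be positive")
    return 100.0 * nuclei_in_myotubes / total_nuclei

print(fusion_index(412, 1030))   # ~40%, in line with the estimate quoted above
```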
Non-myogenic markers Pparg and Runx2 drive non-regenerative repair defects in vivo [76,77]. Pparg showed an upregulation across regeneration, although this was lower in magnitude than non-regenerative expression in vivo [77], and may be due to increased myotube or MPC expression rather than being indicative of adipogenic differentiation [78]. Runx2 does not show significant upregulation, although some variation from control is observed (Additional file 1: Fig. S4). As remixed engineered muscles contain a range of cell types obtained from skeletal muscle, these expression patterns may represent the expansion, or increased activity, of non-regenerative cell types, which should allow future work to examine how these populations progress to develop non-regenerative defects and how they may be manipulated to improve clinical outcomes.

Currently, three published models show a regenerative skeletal muscle [47-49, 52]. Of these, two are collagen-based engineered tissues, whilst the third is a fibrin-based system. All systems have examined injury through the application of cardiotoxin (CTX), with one system also utilising a crush injury. The most recent, from Rajabian et al., shows the regeneration of myotubes of human engineered tissues in both a collagen- and a fibrin-based system, although it does not examine engineered tissue function. The collagen-based system regenerates myotubes following 5 days of regeneration, but no further analysis is presented to compare to the data presented here, although further analysis is undertaken in a fibrin-based model. The remaining collagen-based system, from Tiburcy et al., utilises primary rat MPCs, shows an m-cadherin-lined satellite cell niche, and has a regenerative capacity similar to the model presented here, with the ability to regenerate force and morphology following an injury which completely ablates the ability of the tissue to generate force. In addition, this model shows a comparable response with regard to Pax7+ proliferation, with an increase across time following injury rather than a brief and temporally confined increase in Pax7+ nuclei. This is in contrast with fibrin systems containing either human- or rat-based MPCs, which display a Pax7 proliferative wave and subsequent resolution. Interestingly, with a crush injury as opposed to chemical insult, Tiburcy et al. demonstrate that collagen-based engineered muscles display this peak in Pax7 and resulting resolution rather than a progressively increasing population. Fibrin-based systems appear to have a more limited regenerative capacity in response to significant functional injury. Both this model and that of Tiburcy et al. show complete ablation of force following injury and complete recovery; however, Juhas et al. observe a limit of 50% functional reduction beyond which regeneration no longer occurs, even in the presence of regeneration-supporting macrophages. In summary, although this model remains the only one to show the functional regeneration of a human engineered tissue, it broadly shares similar characteristics with other published models. Interestingly, the closest comparison can be made with the collagen/Matrigel®-based system of Tiburcy et al., which utilises rat MPCs, bringing into focus the key role of matrix composition in these engineered tissues.

Conclusions

The model presented here provides a platform to generate large numbers of tissue engineered muscles from a single microbiopsy. Utilising CD56 sorting and media supplementation, this protocol is robust and allows researchers, in combination with the open-source mould system, to rapidly generate human skeletal muscle tissues within their laboratory [79]. The demonstration that these tissues regenerate following chemical insult allows the study of human skeletal muscle regeneration, including cell population dynamics across time, to be undertaken without the need to invasively sample patients repeatedly. In addition, the flexibility of the system allows future work to build complexity, such as through the addition of immune cells to simulate an inflammatory response [48] or mechanical/electrical stimulation to capture the effects of post-injury exercise [34,44]. As the complexity and maturity of these models develop, they will present an opportunity to test putative clinical interventions in a high/medium-throughput manner on human tissue, adding a novel tool to the preclinical testing toolbox to help improve lead screening and ultimately improve healthcare for patients.

Methods

Isolation and culture of hMPCs from skeletal muscle biopsies

Participants were recruited according to Loughborough University ethical and consent guidelines (Ethics no. R18-P098), with anonymised participant characteristics presented in Additional file 1: Table S1. Biopsies were collected by the microbiopsy method from the vastus lateralis [80].
All collected tissue was minced finely, and connective tissue removed. Minced tissue was then plated out and cells isolated by explant culture [43,55]. Once collected, cells were expanded to passage 3 (p3) in gelatin-coated (0.2% v/v) culture flasks. At p3, cells were sorted for the presence of the myogenic cell surface marker CD56 [81], using a MidiMACS™ system (Miltenyi Biotech, DE). Further expansion of the separate populations was then undertaken, with CD56+ cells in Corning® Matrigel® basement membrane matrix-coated (1 mg/mL, Fisher Scientific, UK) flasks and CD56− cells in gelatin solution (Sigma-Aldrich, UK)-coated flasks. At p5, cells were cryopreserved or further expanded, and used between p7 and p9. Throughout explant and expansion, cells were maintained in growth medium (GM; 79% high-glucose Dulbecco's modified Eagle's medium (DMEM, Sigma, UK), 20% fetal bovine serum (FBS, PanBiotech, UK) and 1% penicillin/streptomycin (P/S, Fisher, UK)). For the culture of minced tissue, 1% Amphotericin B (Sigma, UK) was added to standard GM. At no point were flask cultures allowed to exceed 65% confluence, and cells were split at a ratio of 1:3 at each passage.

Generation of tissue engineered muscles

Engineered muscles were made as described previously [43,46]. Briefly, 65% v/v acidified type I rat tail collagen (2.035 mg/mL, First Link, UK) and 10% v/v of 10× minimal essential medium (MEM, Sigma) were mixed and neutralised. This was followed by the addition of 20% v/v Matrigel® and 5% v/v GM containing hMPCs at a final density of 4 × 10⁶ cells/mL, in a ratio of 9:1 CD56+:CD56− unless otherwise stated. The final solution was transferred to pre-sterilised, biocompatible, polylactic acid (PLA) FDM-printed removable-box 50 μL inserts [82] to set for 10-15 min at 37°C. All moulds used in this manuscript are freely available to download at the following URL: https://figshare.com/projects/3D_Printed_Tissue_Engineering_Scaffolds/36494. Engineered skeletal muscles were maintained in GM with 5 ng/mL FGF-2 (Peprotech, USA) for 4 days, with medium changed every 48 h. Following 4 days, medium was changed to differentiation medium (DM; 97% DMEM, 2% horse serum and 1% P/S) supplemented with Gibco™ B-27™ Supplement (50×, 1:50, Sigma) for a further 10 days.

Barium chloride injury and regeneration

Once engineered muscles had reached maturity (14 days), as defined above, they were exposed to chemical injury by BaCl₂. Prior to inducing injury, fresh DM was added to all conditions. Precisely 50 μL/mL of 12% w/v BaCl₂ solution was then added to the medium for injury culture conditions, followed by a 6-h incubation to induce injury. Addition of BaCl₂ to cell culture media may produce a white precipitate which has the potential to obscure immunohistochemical analysis. For applications not utilising sectioning techniques for imaging, non-phosphate/sulphate buffers can be considered to prevent precipitate formation [83]. Following injury, cultures were washed once with phosphate-buffered saline (PBS) to remove residual BaCl₂-containing medium. Control (no injury) and 0 h time points were collected at the end of the injury incubation. Additional time points at 2, 4, 9 and 14 days post injury were collected for all measures to examine the regenerative response across time. For the first 4 days of regeneration, engineered muscles were maintained in GM with FGF2, and for the remaining 10 days in DM with B-27.
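Both recipes above reduce to simple proportions, tabulated in the sketch below. The component fractions, seeding density, remix ratio, and BaCl₂ figures come from the protocol as written; the 10% dead-volume allowance and the helper names are our own assumptions.

```python
# Wet-lab arithmetic for the two protocols above. Grounded figures: component
# fractions (65/10/20/5 % v/v), 50 uL constructs at a 4e6 cells/mL final
# density, the 9:1 CD56+:CD56- remix, and the 12% w/v BaCl2 stock added at
# 50 uL per mL of medium. The 10% dead-volume allowance is assumed.

FRACTIONS = {"collagen": 0.65, "10x MEM": 0.10, "Matrigel": 0.20, "GM + cells": 0.05}

def hydrogel_batch(n_constructs, construct_ul=50.0, dead_volume=0.10,
                   density_per_ml=4e6, cd56_pos_fraction=0.9):
    """Component volumes (uL) and cell numbers for a batch of constructs."""
    total_ul = n_constructs * construct_ul * (1.0 + dead_volume)
    batch = {name: round(frac * total_ul, 1) for name, frac in FRACTIONS.items()}
    cells = density_per_ml * total_ul / 1000.0      # cells at final density
    batch["CD56+ cells"] = cells * cd56_pos_fraction
    batch["CD56- cells"] = cells * (1.0 - cd56_pos_fraction)
    return batch

def bacl2_working_percent(stock_percent=12.0, ul_per_ml=50.0):
    """Final w/v % of BaCl2 after adding `ul_per_ml` uL of stock per mL medium."""
    return stock_percent * ul_per_ml / (1000.0 + ul_per_ml)

print(hydrogel_batch(12))                 # volumes for a 12-construct batch
print(round(bacl2_working_percent(), 2))  # ~0.57% w/v BaCl2 during injury
```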
Tissue fixation, sectioning and staining

Engineered muscles were fixed in 3.75% formaldehyde solution overnight at 4°C and then stored in PBS. Prior to cryosectioning, engineered muscles were stored in 20% w/v sucrose solution for 24 h at 4°C to reduce water content and then frozen under isopentane in liquid nitrogen. Sections were then prepared using standard cryotomy methodology. Cross-sections for MyHC staining were prepared at 12 μm, whilst Pax7 and MyoD staining used 4 μm sections. Longitudinal sections were prepared at 10 μm. Images were collected on a Leica DM2500 microscope using Leica Application Suite X software. Fiji 1.52e [84] was used for image analysis, and an in-house macro performed automated myotube and nuclei analysis. Pax7- and MyoD-positive nuclei analysis was performed manually. Five random images per repeat, per measure, were taken and analysed to generate the presented data.

RNA extraction and real-time polymerase chain reaction (RT-PCR)

Engineered muscles were snap-frozen upon collection, and TRIReagent® extraction was augmented by mechanical disruption of constructs in a TissueLyser II (Qiagen, UK) for 5 min at 20 Hz. Following disruption, RNA extraction was carried out using chloroform extraction, according to the manufacturer's instructions (TRIReagent®, Sigma). RNA concentration and purity were obtained by UV-Vis spectroscopy (Nanodrop™ 2000, Fisher). All primers (Additional file 1: Table S2) were validated for 5 ng of RNA per 10 μL RT-PCR reaction. RT-PCR amplifications were carried out using the Power SYBR Green RNA-to-CT 1-Step kit (Qiagen, UK) on a 384-well ViiA Real-Time PCR System (Applied Biosystems, Life Technologies, ThermoFisher, USA) and analysed using ViiA 7 RUO software. The RT-PCR procedure was 50°C for 10 min (cDNA synthesis); 95°C for 5 min (reverse transcriptase inactivation); followed by 40 cycles of 95°C for 10 s (denaturation) and 60°C for 30 s (annealing/extension). Melt analysis was then carried out using the standard ViiA protocol. Relative gene expression was calculated using the comparative CT (ΔΔCT) method, giving normalised expression ratios [85]. RPIIβ was the designated housekeeping gene in all RT-PCR assays, and sample controls for each primer set were included on every plate.
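The comparative CT method referenced here amounts to two subtractions and an exponentiation; a minimal sketch follows, with RPIIβ as the reference gene and purely illustrative CT values.

```python
# Comparative CT (delta-delta CT) calculation as referenced above, with RPIIB
# as the housekeeping gene. The CT values in the example are illustrative
# placeholders, not study data.

def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Normalised expression ratio 2^-(ddCT) of a sample relative to control."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalise to RPIIB
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# A target gene running ~1.7 CT earlier (after housekeeping normalisation)
# than in control corresponds to roughly a 3.2-fold increase, the order of
# the Pax7 changes reported in the Results.
print(ddct_fold_change(24.3, 18.0, 26.0, 18.0))   # ~3.25
```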
Measurement of engineered muscle function

Electric field stimulation was used in order to assess the functional capacity (force generation) of tissue engineered constructs. Constructs were washed twice in PBS, and one end of the construct removed from the supporting mould pin. The free end of the construct was then attached to the force transducer (403A Aurora force transducer, Aurora Scientific, CA) using the eyelet present in the construct. The construct was positioned to ensure its length was equal to that before removal from the pin and covered (3 mL) with Krebs-Ringer-HEPES buffer solution (KRH; 10 mM HEPES, 138 mM NaCl, 4.7 mM KCl, 1.25 mM CaCl₂, 1.25 mM MgSO₄, 5 mM glucose, 0.05% bovine serum albumin in dH₂O, Sigma, UK). Aluminium wire electrodes, separated by 10 mm, were positioned parallel on either side of the construct to allow for electric field stimulation. Impulses were generated using LabVIEW software (National Instruments, UK) connected to a custom-built amplifier. Maximal twitch force was determined using a single 3.6 V/mm, 1 ms impulse, and maximal tetanic force was measured using a 1 s pulse train at 100 Hz at 3.6 V/mm, generated using LabVIEW 2012 software (National Instruments). Twitch and tetanus data were derived from 3 contractions per construct, and a minimum of 2 constructs per time point per biological repeat. Data were acquired using a PowerLab system (ver. 8/35) and associated software (LabChart 8, AD Instruments, UK). Force is presented as both absolute force (μN) and specific force relative to construct cross-sectional area (kPa) for comparison. Specific force was calculated using average absolute force values normalised to the average associated cryosection CSA.

Flow cytometry

CD56− cells were resuscitated from liquid nitrogen, at p6, and incubated in GM for 30 min at 37°C, 5% CO₂. Cells were then filtered using MACS pre-separation filters to remove potential cell clumps. Cells were then washed once in fluorescence-activated cell sorting buffer (FACS; 1% BSA, 0.2 mM EDTA and 0.1% sodium azide in PBS) and resuspended in 200 μL FACS buffer containing the appropriate antibodies (BD Bioscience; PDGFRα, 556002, 1:10; CD90, 555595, 1:100; CD45, 555485, 1:10; CD31, 746116, 1:100), at a concentration of 1 × 10⁶ cells/mL, and incubated for 30 min on ice. Cells were then washed with FACS buffer and resuspended at 0.5 × 10⁶ cells/mL. Flow data acquisition was then undertaken using a BD Accuri C6 flow cytometer at the fast flow rate. To ensure accurate gating of positive populations, fluorescence minus one (FMO) controls were used to set gates. Due to the low percentages of positive cells, fluorescence compensation was carried out using antibody-binding compensation beads (BD, 552843). Analysis of flow data was undertaken in BD C6 software, and representative gating patterns are shown in Additional file 1: Fig. S2.

Experimental repeats

For all injury experiments (Figs. 4 and 5), 5 repeats across 3 donors were performed, with each repeat yielding a minimum of 3 engineered muscles per analysis type; values from each individual engineered muscle were used for statistical analysis. For cell composition and media supplementation (Figs. 1 and 2), 2 repeats across 2 donors were performed, with 3 engineered muscles per analysis technique. A total of 5 donors were used for the entirety of the experimental work.

Statistical analysis

Statistical analysis was undertaken in IBM SPSS 23. Data were subjected to tests of normality (Shapiro-Wilk) and homogeneity of variance (Levene's test). Where parametric assumptions were met, an ANOVA test was used to identify significant interactions. Where significant interactions were observed, Bonferroni post hoc analyses were used to analyse differences between specific time points or groups. Non-parametric Kruskal-Wallis analysis was undertaken where data violated parametric assumptions. Mann-Whitney (U) tests were then used, with a Bonferroni correction, to identify differences between groups. Comparisons across time were made between control and the time point of interest, and quoted p values refer to this comparison. All data are reported as mean ± standard deviation (SD). Significance was assumed at p ≤ 0.05 and denoted on graphs with asterisks at the indicated levels of significance. Without exception, an asterisk above a bar or point indicates that the mean of the indicated condition deviates significantly from the associated control.

Additional file 1: Table S1. Individual donor characteristics. Table S2. Primer sequence table. Figure S1. CD56 enrichment improves desmin positivity and morphological appearance of tissue engineered constructs. Figure S2. Characterisation of CD56− populations. Table S3. Table of donor sorting efficiencies and yields. Figure S3. Col IV and Laminin localisation in engineered skeletal muscles. Figure S4. Expression of Runx2 and Pparg across recovery.
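As a sketch of the statistical decision flow described in the statistical analysis section above, the following shows how the normality and variance gates select between the parametric and non-parametric routes. The SciPy calls are standard, but this is an illustrative re-implementation under our own assumptions, not the SPSS procedure used in the study.

```python
from itertools import combinations
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """groups: {label: sequence of values}. Shapiro-Wilk and Levene gate the
    choice between one-way ANOVA and Kruskal-Wallis; pairwise follow-ups are
    Bonferroni corrected."""
    samples = list(groups.values())
    normal = all(stats.shapiro(s).pvalue > alpha for s in samples)
    equal_var = stats.levene(*samples).pvalue > alpha
    parametric = normal and equal_var

    omnibus = stats.f_oneway(*samples) if parametric else stats.kruskal(*samples)

    pairwise = {}
    if omnibus.pvalue <= alpha:
        pairs = list(combinations(groups, 2))
        for a, b in pairs:
            if parametric:
                p = stats.ttest_ind(groups[a], groups[b]).pvalue
            else:
                p = stats.mannwhitneyu(groups[a], groups[b],
                                       alternative="two-sided").pvalue
            pairwise[(a, b)] = min(1.0, p * len(pairs))   # Bonferroni
    return {"parametric": parametric, "omnibus_p": omnibus.pvalue,
            "pairwise_p": pairwise}
```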
2020-10-21T13:30:03.184Z
2020-10-20T00:00:00.000
{ "year": 2020, "sha1": "d298498f38089988faf78a9e2b2d720edb4e036b", "oa_license": "CCBY", "oa_url": "https://bmcbiol.biomedcentral.com/track/pdf/10.1186/s12915-020-00884-3", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bad18c8e56313c5581fa7845d5d1bbf600cd6b2b", "s2fieldsofstudy": [ "Biology", "Medicine", "Engineering" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
211726451
pes2o/s2orc
v3-fos-license
Prediction and optimization of slaughter weight in meat-type quails using artificial neural network modeling

Carcass yield of meat-type quails is strongly correlated with the weight of the birds at slaughter (slaughter weight [SW]; body weight at 45 D of age). Moreover, prediction of superior animals for SW at the earlier stages of the rearing period is favorable for producers. Therefore, the aim of the present study was to predict and optimize the SW of Japanese quails based on their early growth performances, sex, and egg weight as predictors through artificial neural network (ANN) modeling. To construct the ANN model, a feed-forward multilayer perceptron neural network structure was used. Moreover, sensitivity analysis was used to rank the predictors in the ANN model(s) according to their predictive importance. In addition, an optimization process was conducted to determine the optimum values of the input variables that yield maximum SW. The best-fitted network on the input data to predict SW in Japanese quails had 7 neurons in the input layer, 11 neurons in the hidden layer, and one neuron in the output layer. The coefficient of determination (R²) was 0.9404, 0.9359, and 0.9223 for the training, validation, and testing phases, respectively. For the corresponding phases, the MSE values were 51.8854, 52.2764, and 55.2572, respectively. According to the sensitivity analysis, the most important input variable for the prediction of SW was body weight at 20 D of age (BW20), whereas the least important input variables were weight of the birds at hatch and body weight at 5 D of age. The results of the neural network optimization indicated that the optimal values of all the input variables, except for BW20, were very similar to, but slightly higher than, the mean values (μ of each input variable). The results of this study suggest that the ANN provides a practical approach to predict the final body weight (SW) of Japanese quails based on early performances. Moreover, phenotypic selection for higher values of early growth traits did not ensure the achievement of maximum SW, except for BW20.

Introduction

Body weight of quails (Coturnix coturnix) at slaughter (slaughter weight [SW]), their carcass yield (CY), and their overall global contribution to meat production are not comparable with those of broilers and turkeys; nevertheless, quail rearing for meat production is rising globally (Narinc et al., 2013; Silva et al., 2013; Barbieri et al., 2015). In comparison with other commercial poultry strains or breeds, quail production enterprises are at earlier stages. Hence, appropriate information to manage meat-type quail-producing farms is scarce. Therefore, owing to a lack of information, quail producers sometimes take management guidelines of other poultry species into account, which might not be completely profitable in improving meat-type quail production systems (Anthony et al., 1991). Other difficulties are also associated with the ineffectiveness of management policies in quail production as a result of non-uniformity and higher variation in egg weight (EW), incubation period, hatch weight, body weights at different ages (Anthony et al., 1991; Hyankova et al., 2004), and CY. Thus, considering their highly variable performance, a general overview of the quail-specific management system during the rearing period and at slaughter might be inadequate. Moreover, higher variation in the SW of the birds results in non-uniform carcasses, which is undesirable.
Although the CY of meat-type quails has economic importance, direct selection to improve this trait has been found to be challenging. Therefore, SW or other body weight traits at older ages adjacent to SW were included in breeding programs as correlated traits rather than CY (Akbarnejad et al., 2015). In common breeding programs, genetic correlations between traits assume major importance for improving correlated traits. However, the importance of phenotypic correlations should not be neglected (Silva et al., 2013; Barbieri et al., 2015; Mohammadi-Tighsiah et al., 2018). Nevertheless, reliable pedigree information is indispensable for designing a practical breeding program, but such information is not commonly available in quail production systems (Sari et al., 2011). In most production systems, the slaughter age of meat-type quails is considered to be between 40 and 45 D of age (Sari et al., 2011; Silva et al., 2013). In addition, sexual maturity in females and egg laying start at approximately the same age. Moreover, it should be taken into account that, unlike commercial broilers, 1-day-old chicks in meat-type quails are not usually obtained from line breeding. Therefore, in a quail population of the same age, some of the birds are assigned as slaughter animals, and the others remain as breeder birds in the production system. Accordingly, egg production parameters assume major importance even in meat-type quails. Genetic and phenotypic correlations between body weight and egg production traits are negative (Kranis et al., 2006; Silva et al., 2013); that is, greater emphasis on body weights results in lower egg production and vice versa. Therefore, partitioning of the birds into egg- or meat-producing groups based on their capability of producing eggs or meat at the earlier stages would be helpful for quail production enterprises. Nevertheless, genetic and phenotypic correlations between early body weights and SW are weak (Barbieri et al., 2015), possibly owing to the sigmoid nature of the growth pattern in quails (Anthony et al., 1991; Ahmad, 2009). Therefore, genetic or phenotypic selection of the birds based on early growth performances (as correlated traits) does not lead to improvement in SW. This may be due to the nonlinear nature of the growth pattern of the birds or the presence of maternal effects, whose influence fades at older ages. Accordingly, to find superior animals at slaughter, the application of alternative methods rather than selection based on correlated traits would be profitable. Artificial neural networks (ANN) have frequently been used in the poultry production sector. The powerful potential and flexibility of ANN models have been exploited for solving complex nonlinear problems and control problems, and for the prediction of economically important traits such as parameters of the egg production curve (Faridi et al., 2013), growth curve parameters (Ahmad, 2009), reproductive performance (Mehri, 2013), and nutrient requirements (Ahmadi and Golian, 2010; Mehri, 2014). An ANN is a mathematical model inspired by the neural networks of the human brain that provides a nonlinear data mining computing scheme to model the complex relationships between input variables (predictors) and output variable(s). To the best of our knowledge, no comparable report is available on predicting the final economic weight (e.g., SW) of poultry species from early growth performance using modeling approaches.
Therefore, the aim of the present study was to predict body weight at 45 D of age, as SW, of a Japanese quail population based on early growth performances, sex, and EW using an ANN. Sex of the birds was also included in the ANN model owing to its impact on EW and SW as well as its strong correlation with the birds' early growth performances.

Birds and Data

Data used to train the ANN were recorded from a random-bred population of Japanese quails reared at the Research Center of Special Domestic Animals, University of Zabol, Zabol, Iran. This population was primarily reared for meat production, and eggs were routinely delivered to the hatchery at the Research Center of Special Domestic Animals for regeneration. The study was conducted following the general ethical guidelines of the Animal Care and Use Committee of the Department of Veterinary, University of Zabol. To develop an ANN for the prediction of SW in this study, body weight records of 1,136 registered quails of both sexes, obtained from a single hatch, were considered. In the present study, body weight at day 45 was considered SW. Each chick was identified using wing tags immediately after hatch. Traits were body weight at hatch (HW), body weight at 5 D of age (BW5), body weight at 10 D of age (BW10), body weight at 15 D of age (BW15), and body weight at 20 D of age (BW20) as early growth performances, in addition to the weight of eggs set for incubation (EW). In addition to the quantitative traits, sex of the birds was also considered as a discrete variable in the neural network modeling. During the rearing period, the birds were fed a standard diet containing 21% CP and 2,700 kcal of ME/kg. During the experiment, food and water were given ad libitum. The light regimen from hatch to the third week was 24 h/D, which decreased to 16 h/D from the fourth week and was then kept constant until the end of the experiment (day 45). The temperature of the birds' rearing house gradually decreased from 38°C in the first week to 22°C in the third week. Afterward, it was maintained between 18 and 20°C until the end of the rearing period. All the birds were kept in group cages with 40 birds per cage from 10 to 45 D of age. The chicks were not vaccinated during the experimental period. Descriptive statistics of the data used in the present study are shown in Table 1.

Artificial Neural Network

In brief, the ANN structure in the present study determines the arrangement of neurons in 3 separate layers (input, hidden, and output layers). The input layer allocates data into the network, the hidden layer processes the data, and results are extracted in the output layer. To construct the ANN model in the present study, a feed-forward multilayer perceptron (MLP) neural network structure was used. The MLP neural network is very common for classification and prediction. This type of ANN has been found to be effective for complex problems and can be used for supervised training (Haykin, 1999). The MLP ANN model consists of at least 3 layers of nodes (vectors): the input, hidden, and output layers. The input and output layers include the predictors and the predicted variable(s), respectively. To predict the SW of Japanese quails, the neural network was trained using the backpropagation algorithm. Backpropagation is a method to calculate the weights assigned to the neurons used in the network (Bryson and Ho, 1969; Erb, 1993).
The input vector included 7 variables: sex as a discrete variable and the other 6 continuous variables EW, early HW, BW5, BW10, BW15, and BW20, respectively. The input variables were assigned to the neural network to predict the output variable, which was SW in this study. There were no null fields in the data set for all the birds (n 5 1,136). Therefore, the data set included was an 1,136 ! 8 matrix with 9,088 elements. To train the neural network, randomly 70% (796 rows; 6,368 elements) of the input variables (for learning stage), 15% of the input variables (for validation phase) and 15% of the input variables (each 170 rows; 1,360 elements) (for testing stage) were assigned. Totally 5,000 different architectures (ANN models) with different neurons in the hidden layer (7 to 14 neurons) and different activation functions for the hidden and output layers (including each of identity, logistic, hyperbolic tangent, and exponential and sine functions) were assigned to the Automated Network Search (ANS) module of Statistica software version 8.0 (StatSoft Inc., Tulsa, OK, 2009). In fact, the main duty of the ANN is to find the most appropriate functions connecting the input and output layers. Therefore, the ANS module is used to automatically find the best-fitted ANN models including the most appropriate connecting functions between 3 layers of MLP and the best number of hidden neurons, given that ANS provides a range of complexity to find the best-fitted model. Therefore, neurons' weighting function and their optimum network architecture were run automatically. Accuracy of the ANN Accuracy of the ANN model prediction was evaluated using the coefficient of determination (R 2 ). The R 2 between actual and predicted values (obtained from the ANN) was separately calculated for the training, validation, and testing phases. Moreover, mean SE (MSE) was compared for the training, validation, and testing phases. Sensitivity Analysis Owing to the nature of variables and/or the relationships between the input and output variables, a model may become very complex, and as a result, the relationships between inputs and output(s) may not be clear. In data mining and statistical model building or fitting, the sensitivity analysis ordinarily refers to the evaluation of the arrangement or importance of predictors in the ANN model(s) (Mehri, 2013(Mehri, , 2014. In Statistica Automated Neural Networks, the program will compute the residual sum of squares or misclassification rates for the model when the respective predictor is eliminated from the neural network. Moreover, ratios of the reduced model vs. the full model are reported, and the predictors can be arranged in terms of their importance to predict the output layer in a particular ANN. In the present study, the higher values reported as variable sensitivity ratio (VSR) indicate the more important predictor in relation to the prediction of SW in meat-type quails. Optimization of the ANN In Statistica, optimization process refers to a search for the optimal values of input variables that will achieve a particular desired effect such as minimized or maximized values for output variables. In the present study, maximizing the SW was desired; therefore, the optimization process will determine the optimum values for input variables to yield maximum SW (293.71 g; Table 1). For this purpose, the "random search" optimization algorithm provided in the "response optimization" section of Statistica software was used (StatSoft Inc.). 
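A rough open-source analogue of the modeling pipeline above is sketched below using scikit-learn: the final 7-11-1 topology with a tanh hidden layer and a linear (identity) output, trained on a 70/15/15 split. The placeholder data and helper names are ours; this does not reproduce Statistica's automated network search or its exact training algorithm.

```python
# Sketch of the final MLP (7 inputs -> 11 tanh hidden units -> 1 linear
# output) with a 70/15/15 train/validation/test split, assuming scikit-learn.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Placeholder data: X columns = sex, EW, HW, BW5, BW10, BW15, BW20; y = SW.
X = rng.normal(size=(1136, 7))
y = rng.normal(size=1136)

X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.70)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50)

net = MLPRegressor(hidden_layer_sizes=(11,), activation="tanh",
                   solver="adam", max_iter=5000)   # output is linear (identity)
net.fit(X_train, y_train)

for name, (Xs, ys) in {"train": (X_train, y_train), "val": (X_val, y_val),
                       "test": (X_test, y_test)}.items():
    print(name, r2_score(ys, net.predict(Xs)))
```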
Artificial Neural Network Comparing results through running 5,000 different neural network structures using ANS and based on R 2 and mean standard error (MSE) of the networks at Abbreviations: BW5, body weight at day 5; BW10, body weight at day 10; BW15, body weight at day 15; BW20, body weight at day 20; EW, egg weight; HW, body weight at hatch; SW, body weight at day 45 as slaughter weight. training, validation, and testing phases, the best-fitted network on input data to predict SW in meat-type quails was determined, with 7 neurons in the input layer, 11 neurons in the hidden layer, and one neuron in the output layer (SW). The connecting function between the input and hidden layer was hyperbolic tangent (tanh) and between the hidden and output layer was identity function. The topology of the network is shown in Figure 1. The R 2 values were 0.9404, 0.9359, and 0.9223 for the training, validation, and testing phases, respectively. Moreover, the MSE for the corresponding phases were 51. 8854, 52.2764, and 55.2572, respectively. In other words, based on the R 2 values, the ANN model was appropriately able to predict SW of meat-type quails based on EW, sex, HW, BW5, BW10, BW15, and BW20 ( Figure 2). Sensitivity Analysis In this study, the importance of EW, sex, and early growth performances of meat-type quails to SW prediction was analyzed through sensitivity analysis (Table 2). According to VSR, the most important input variable in the prediction of SW was BW20 (VSR 5 4.73). However, the less important input variables were HW and BW5, with VSR equal to 1.52 and 1.53, respectively. Considering only early body weights as the predictor of SW, the adjacent body weights to SW were BW20 (less interval between BW20 and SW), which resulted in higher VSR for BW20. Accordingly, the body weights from the highest VSR to the lowest were as follows: BW15, BW10, BW5, and HW, as expected. Indeed, with the increase of the interval between SW and early body weights (predictors), the importance of those predictors decreased. Sex of the birds as a discontinuous variable was the second important predictor of SW (VSR 5 2.04). However, between 7 input variables to predict SW in the present study, EW was the fifth important predictor. Egg weight was more important than HW and BW5. Correlation between EW and HW is high. Moreover, EW could be a marker for chicken healthfulness and survivability, at least at earlier stages of life. Optimization of the ANN To obtain the maximum value for SW in the present study (271.93 g), input variables were assigned to the "random search" optimization algorithm of Statistica software. The optimized values for input variables compared with mean and maximum values (derived from descriptive statistics, Table 1) are shown in Table 3. The results of the neural network optimization showed that all the input variables, except for BW20, were very similar but slightly higher than mean values. However, Figure 1. Artificial neural network topology (input layer sorted based on VSR). BW5, body weight at day 5; BW10, body weight at day 10; BW15, body weight at day 15; BW20, body weight at day 20; EW, egg weight; HW, body weight at hatch; SW, body weight at day 45 as slaughter weight; VSR, variable sensitivity ratio. differences between optimal values and mean values for EW, HW, BW5, BW10, and BW15 were 1.10, 4.94, 6.37, 1.84, and 4.10%, respectively. However, the difference between the optimal value of BW20 and the mean value was higher than other differences (22.28%). 
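Both probes of the trained network reported in this section can be approximated outside Statistica, as sketched below. Note that Statistica eliminates a predictor from the network, whereas here the reduced model is approximated by fixing the predictor at its mean, so the numbers are illustrative only; `net` is assumed to be a fitted regressor such as the one sketched earlier.

```python
import numpy as np

def vsr_ranking(net, X, y):
    """VSR-style importance: error of a 'reduced' model over the full model.
    Approximation: the predictor is fixed at its mean rather than removed and
    the network retrained."""
    base_sse = float(np.sum((y - net.predict(X)) ** 2))
    ratios = {}
    for j in range(X.shape[1]):
        X_reduced = X.copy()
        X_reduced[:, j] = X[:, j].mean()          # knock out predictor j
        ratios[j] = float(np.sum((y - net.predict(X_reduced)) ** 2)) / base_sse
    return sorted(ratios.items(), key=lambda kv: -kv[1])  # largest = most important

def random_search_max(net, X, n_trials=100_000, seed=0):
    """Random search within the observed range of each predictor for the
    input combination that maximises the predicted output (here, SW)."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    candidates = rng.uniform(lo, hi, size=(n_trials, X.shape[1]))
    preds = net.predict(candidates)
    best = int(np.argmax(preds))
    return candidates[best], float(preds[best])
```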
The larger difference between the optimal and mean values of BW20 reflects the higher importance of this input variable for the prediction of SW, as suggested by the sensitivity analysis. The differences between the optimal and maximum values were 38.89, 21.74, 32.20, 40.01, 40.63, and 24.63% for the EW, HW, BW5, BW10, BW15, and BW20 input variables, respectively, which indicates 20-40% differences between the optimal and maximum values for all the input variables.

Discussion

Breeding programs have been designed to enhance meat production in quails (Lotfi et al., 2011; Narinc et al., 2013; Barbieri et al., 2015). However, selecting birds for SW based on early growth performance is not common owing to the weak correlation between early and late body weights. Several studies have confirmed positive and high estimates of phenotypic or genetic correlations for body weights at adjacent ages, with a significant decrease in the estimates as the interval between ages increases (Silva et al., 2013; Barbieri et al., 2015; Mohammadi-Tighsiah et al., 2018). In the present study, the importance of later body weight traits to the prediction of SW was reflected in the higher VSR value of BW20, whereas the earlier body weight traits were weak predictors of SW, as suggested in other studies. In a study of body weights in a meat-type quail, the phenotypic correlation for BW28-BW42 (a small interval between the 2 traits) was 0.67, whereas the phenotypic correlation for BW0-BW42 (a large interval between the 2 traits) was 0.17 (Barbieri et al., 2015). Early growth traits assumed less importance in the prediction of final body weights in quail. This may be due to the S-shaped pattern of the growth curve in this bird. In fact, the growth rate of the birds at the early stages is very slow, and after 15-20 D of age, the growth rate accelerates up to the inflection point (Hyankova et al., 2001; Faraji-Arough et al., 2018). Selection for body weight at week 4 (28 D) was recommended in the study of Barbieri et al. (2015) to maximize body weight at week 6 (42 D) as a correlated trait (phenotypic correlation for BW28-BW45 = 0.67). However, selection for body weight at 28 D of age may adversely lead to an increase in abdominal fat (Murata et al., 2013). Sex of the birds has been found to have a significant fixed effect on the body weights of quails in different statistical analyses (Silva et al., 2013; Mohammadi-Tighsiah et al., 2018). Although the effect of sex on initial body weights has assumed lower importance, it should be taken into account to a greater extent at older ages, as suggested by the sensitivity analysis of this study. In our study, the importance of sex to the prediction of SW was lower than that of BW20 but higher than that of the other predictors (EW, HW, BW5, BW10, and BW15). In Japanese quails, bird sexing becomes possible at 3 wk of age based on the plumage pattern. Differences between male and female body weights are usually attributed to differentially expressed sex-specific genes and reproductive activities (Caetano-Anolles et al., 2015). In quails, before sexual maturity, females become heavier than males. Aggrey et al. (2003) reported that the body weight of female quails at 28 D was higher than that of males, although not statistically significantly so; however, after 28 D of age, the body weights of females became significantly higher than those of males. In dual-purpose and even in meat-type quails, despite the higher body weight of females at the end of the production period, they were not slaughtered. Rather, they were transferred to the laying phase.
Moreover, in the breeder rearing system, each male quail is usually assigned to 2 to 3 females. Therefore, at the end of the growing period, at least half of the males would be slaughtered. Consequently, although the higher body weight of female quails is inevitable (as shown in this study), it is not desirable from an economic point of view. Several reports have demonstrated that maternal effects at the earlier stages of the birds' life should be considered when studying the early body weight traits (Hartmann et al., 2003; Ghorbani et al., 2013; Barbieri et al., 2015). In fact, maternal effects are transmitted to the chick through egg composition, and for later body weights, the influence of maternal effects decreases. Being highly correlated with early growth performance, egg size and composition directly reflect maternal ability. However, the results of our study suggest that the importance of EW for the prediction of SW is less than that of BW20, BW15, BW10, and the sex of the birds. In fact, assigning higher-EW eggs to incubation did not necessarily result in higher SW. Confirming the sensitivity analysis output, the ANN optimization revealed that higher values of EW or early body weights (especially HW and BW5) of quails would not lead to higher SW.

Table 3. Optimal values of input variables (predictors) in comparison with mean and maximum values (from Table 1).

Karami et al. (2017) studied weekly body weights of Japanese quails using the random regression model. They suggest that the earlier body weights of quails (especially HW) are naturally different from body weights at older ages. Therefore, they imply that, to improve SW, HW cannot be used as a selection criterion. This finding is in agreement with our results. In conclusion, the lack of pedigree information makes selection for correlated traits difficult. However, the ANN provides a powerful approach to predict the response variable based only on phenotypic records. We developed a practical approach to predict SW in meat-type quails based on early body weights, sex, and EW. However, owing to the lower importance of HW and BW5, these 2 traits may be ignored at the commercial scale for predicting SW. Moreover, considering higher values of early body weights and EW (except for BW20) did not ensure improvement in SW.
2020-01-02T21:46:29.332Z
2019-12-28T00:00:00.000
{ "year": 2019, "sha1": "e004ba72f8122cb5af2cf33d96ecb097ecd12d23", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.psj.2019.10.072", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "61285041a58dffed650423986b5c9702ef4102a6", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
229402536
pes2o/s2orc
v3-fos-license
Emotional Intelligence and Teaching Satisfaction: The Mediating Role of Emotional Labor Strategies

The study examines the direct effect of four "emotional intelligence" attributes on teachers' job satisfaction in Karachi's private teaching institutions. The study also investigates the mediating effects of "emotional labor strategies" on teachers' job satisfaction. We used a questionnaire adopted from earlier studies. We distributed 550 questionnaires to respondents, of which we received 499 usable responses. The study used Smart PLS version 3.3 for data analysis. Our results support only six hypotheses, including two direct and four indirect. This study contributes to the body of knowledge in the following ways. First, it measures the effects of the four attributes of emotional intelligence on job satisfaction. Second, most studies have examined the mediating effect of emotional labor strategies on emotional intelligence and other antecedents of job satisfaction; perhaps this is the first study that has examined the direct impact of the sub-factors of emotional intelligence on teachers' job satisfaction. Additionally, it also looks at the mediating effect of emotional labor strategies on teachers' satisfaction. There are several implications for managers. For example, teaching institutes should provide counseling and training to teachers to enhance their emotional intelligence. Emotional labor strategies help individuals control and monitor their emotions; therefore, educational institutions may also encourage their teachers to adopt these strategies.

Introduction

Educational institutes in many countries have not only adopted new technology but have also implemented various educational reforms. Despite all these measures, they still face certain challenges, specifically related to teachers' satisfaction (Ignat & Clipa, 2012). These challenges are related to enhancing teachers' job-related performance, improving student attitudes towards learning, and balancing the workloads of teachers (Mérida-López, Extremera & Rey, 2017). Social and other job-related stress stimulates emotional stress and emotional exhaustion. However, teachers with strong emotional competencies can cope with stress (Li, Pérez-Díaz, Mao & Petrides, 2018). Many researchers have suggested a need to examine teachers' emotions and their effect on classroom learning, students' motivation, and teachers' job satisfaction (Ignat & Clipa, 2012). Nafukho (2009) argues that success in interpersonal relations and careers depends on how individuals learn to manage their emotions. There is an abundance of studies on the effects of emotional intelligence (EI) on job-related antecedents. However, the existing literature does not provide much evidence on the impact of emotional intelligence attributes on teachers' job satisfaction (JS). Perhaps no study is available that has examined the effects of the antecedents of emotional intelligence on job satisfaction (JS). Given this gap, we have considered the impact of the four EI attributes (OEA, SEA, ROE, UOE) on job satisfaction. Additionally, we have looked into the mediating effects of the emotional labor attributes (i.e., DSA, SA, and ENFE) on teachers' job satisfaction.

Literature Review

Teachers' Job Satisfaction

Job satisfaction (JS) in general, and teachers' job satisfaction in particular, has been a problematic issue for decades. Its severity is more profound in developing countries, where compensation is lower than in other professions (Anastasiou, 2020).
On the one hand, many teachers are constantly pursuing new employment opportunities due to various unfavorable conditions. On the other hand, the new generation prefers other professions over teaching (Eraldemir-Tuyan, 2019; Ashforth et al., 1993). Eraldemir-Tuyan (2019) argues that teachers feel that modern society does not give due recognition to the teaching profession. Additionally, teachers' compensation has not increased significantly, while accountability, stress, and other job-related demands have increased considerably (Anastasiou, 2020; Wharton, 2009). Consequently, this disparity between job requirements and the compensation of teachers has led to low job satisfaction. Teachers' motivation for joining the teaching profession lies in its intrinsic rewards and emotional benefits (Sahito & Vaisanen, 2020). Jones et al. (2002) argue that many teachers opted for the career because they feel that, by imparting education to the future generation, they can contribute to society's development and progress. Research has found that teachers who change their profession fall into two categories: beginners (who have worked up to five years) and veterans (who have worked more than 30 years in the profession) (Platsidou, 2010; Goleman, 1995).

Emotional Intelligence (EI)

EI has a close association with job satisfaction, organizational performance, and job success (Serrat, 2017). EI helps individuals manage job requirements and stress, due to which they are more successful than others (Mattingly & Kraiger, 2019). Similarly, Miao, Humphrey & Qian (2017) stress that individuals with high EI are often more successful at their jobs, as they are well equipped to use emotional knowledge to resolve personal and job-related issues. Serrat (2017) argues that, besides the IQ level, EI is a critical precursor to JS and job success. EI stems from social intelligence, which, according to Mattingly and Kraiger (2019), enables individuals to manage others wisely and maintain sustainable human relations. The two facets of social intelligence are intrapersonal and interpersonal (Miao, Humphrey & Qian, 2017). Interpersonal knowledge enables individuals to interact with others effectively; as a result, such individuals earn the respect and cooperation of others (Mayer, Caruso & Salovey, 2016). Intrapersonal intelligence helps individuals judge their abilities rationally. It also helps resolve personal, social, and job-related problems (Petrides et al., 2016). EI includes some important facets of both IQ and social intelligence.

Conceptual Framework

We have developed a new model in Figure 1 and discuss the theoretical justification for the proposed hypotheses after the conceptual framework.

Emotional Intelligence (EI) and Job Satisfaction (JS)

Past studies report inconsistent results on the association between EI and teachers' JS. For example, Anari (2012) and Wong et al. (2010) found a positive association between teachers' EI and JS, while Platsidou (2010) found an insignificant association between teachers' satisfaction and EI. Goleman (1998) used EI theory to understand the association between EI and JS in several business domains and concluded that the use of EI is not consistent across all industries but varies from one business sector to another. Also, emotionally intelligent individuals are more successful at work and in society (Li, Pérez-Díaz, Mao & Petrides, 2018).
Emotional intelligence comprises "Self Emotional Appraisal (SEA), Other Emotional Appraisal (OEA), Regulation of Emotions (ROE), and Use of Emotions (UOE)" (Mayer et al., 1990). SEA helps individuals to understand, appraise, and express their sentiments naturally. Consequently, these qualities enable individuals to improve interpersonal relationships (Miao, Humphrey & Qian, 2016; Wong & Law, 2002). OEA allows individuals to assess the emotions of others effectively. Therefore, they are more considerate and empathic to others (Toprak & Savaş, 2020; Wong & Law, 2002). ROE is a control mechanism for feelings and emotions. Individuals with this ability are capable of monitoring their emotions and sentiments. Additionally, such individuals can recover rapidly from emotionally stressed situations (Wong & Law, 2002). UOE helps individuals to use their feelings for enhancing job- and non-job-related performance. Singh & Kumar (2016) suggest that SEA helps individuals to appraise and control their emotions. At the same time, OEA enables individuals to judge the sentiments of friends and colleagues rationally. Therefore, such individuals are more satisfied with their jobs (Wen, Huang & Hou, 2019). Wen, Huang & Hou (2019) argue that emotionally intelligent teachers have full command of DSA and quickly adapt their sentiments to meet students' expectations. Thus, due to superior emotion control mechanisms, emotionally intelligent teachers create an environment in a class where students feel comfortable and participate in the learning process. Consequently, this leads to students' achievements and teachers' satisfaction (Latif, Majoka & Khan, 2017). Teachers with high ROE have more control over their emotions, due to which they promote positive emotions and sentiments in a class. Additionally, such teachers protect students from experiencing negative emotions, such as anger and fear. As a result, students remain focused on their studies and achieve better grades. Toprak & Savaş (2020) argue that teachers with a high level of ROE do not adopt emotional suppression strategies such as SA and DSA. Instead, they adopt "cognitive appraisal," which many researchers believe is an efficient approach for expressing the emotions expected by others. UOE helps teachers to respond to students with controlled emotions that promote an interactive environment in a class. As a result, both students and their teachers benefit. That is, teachers benefit from a higher satisfaction level, and students benefit through better academic achievements (Miao, Humphrey & Qian, 2016). Ho and Au (2006) and Weiss (2002) suggest that students' academic achievements stimulate teachers' pleasant emotions, which results in positive job satisfaction. A teacher's satisfaction level has a direct association with fondness for the job. It also motivates teachers to create an environment of social interactions, discussions, and debates (Hirschfeld, 2000; Yin et al., 2013). As previously discussed, we did not find a single study that has examined the impact of the sub-factors of emotional intelligence on job satisfaction. Given this gap, we have proposed the following hypotheses:
H1a: Other emotional appraisal (OEA) and teachers' satisfaction are positively associated.
H1b: Regulation of emotions (ROE) and teachers' satisfaction are positively associated.
H1c: Self emotion appraisal (SEA) and teachers' satisfaction are positively associated.
H1d: Use of emotion (UOE) and teachers' satisfaction are positively associated.
Mediating Role of Surface Acting (SA)
All the facets of emotional labor, including SA, directly and indirectly impact teachers' JS (Grandey et al., 2013). Individuals use SA based on their built-in capabilities and the requirements of situations. Qi, Ji, Zhang, Lu, Sluiter, and Deng (2017) argue that SA usage is not consistent in all domains and industries. It is generally high in businesses where personal and social interactions with employees are high (Winograd, 2005). On the other hand, it is low in sectors where social interaction with coworkers is minimal. SA is a phenomenon where an individual reacts to others' aggressive behavior, suppresses his/her natural emotions, and fakes a positive emotional expression (Winograd, 2003). Thus, SA is sometimes important for maintaining a sustainable social interaction environment in an organization. Although individuals with high SA change their outer emotional feelings and expressions, their internal personal feelings remain intact. Continued SA may not only adversely affect individuals' wellbeing, but it may also negatively affect their attitude towards the job (Lee, Pekrun, Taxer, Schutz, Vogl & Xie, 2016). Many teachers, despite the aggressive behavior of management and students, display pleasant emotions. However, this does not mean that these teachers are satisfied with the organizational environment (Asrar-ul-Haq, Anwar & Hassan, 2017; Hayes, 2003). The literature suggests inconsistent results on the relationship between SA and EI. A few studies found that the two are negatively associated, while others found insignificant links between SA and EI (Austin et al., 2008; Mikolajczak et al., 2007). These studies also concluded that individuals with a high EI orientation have a low inclination towards SA and vice versa. Similarly, we found inconsistent results in the literature on the association between SA and JS. For example, some studies suggest that SA negatively stimulates JS (Beal, Trougakos, Weiss, and Green, 2006; Brotheridge & Lee, 2002; Grandey, 2003), while other studies stress that SA and JS have an insignificant association (Cheung et al., 2011; Hargreaves, 1998). Given the inconsistent results, there is a need to incorporate a mediator that may bring more insight into the relationships of the EI components. Given this background, we have formulated the following hypotheses:
H2c: Surface acting (SA) mediates the self-emotional appraisal (SEA) and job satisfaction (JS) relationship.

Mediating Effect of Deep Surface Acting (DSA)
On many occasions, teachers, despite having negative emotions, display a positive attitude to others. This behavior is known as deep surface acting (DSA). In DSA, individuals show different emotions, but the real sentiments do not change (Schirmer & Adolphs, 2017). EI levels vary from one individual to another. Individuals with high EI are better equipped to cope with job-induced stress. Therefore, they generally do not adopt DSA (Lee, Pekrun, Taxer, Schutz, Vogl, & Xie, 2016). Since DSA exhibits the emotional reaction that other people anticipate, many studies suggest a strong association between DSA and EI (Xanthopoulou, Bakker, Oerlemans & Koszucka, 2018). Past studies found inconsistent results on the association of DSA and EI. For example, Karim and Weisz (2011) and Liu et al. (2008) reported that emotionally intelligent teachers often resort to DSA. Therefore, they concluded that DSA and EI have a positive association. On the contrary, Mikolajczak et al. (2007) suggest a negative association between EI and DSA.
Similarly, previous research has also examined the association between DSA and JS and found conflicting results. For example, Brotheridge and Lee (2002) and Grandey (2003) found a positive association between DSA and JS. Contrarily, others have found an insignificant association between DSA and JS (Cheung, Tang, & Tang, 2011; Mayer et al., 1990). Given the conflicting findings, we have formulated the following hypotheses:

Mediating Effect of Expression of Naturally Felt Emotions (ENFE)
The expression of naturally felt emotions (ENFE) is the third kind of emotional labor (EL) (Mikolajczak et al., 2007). In this case, individuals express their true emotions, unlike SA. Past studies have found conflicting and heterogeneous EI outcomes. Austin et al. (2008) found a positive association between EI and EL, while Mikolajczak et al. (2007) concluded that these two variables have an insignificant association. Teachers, due to emotional labor (EL), suppress their feelings and sentiments, which adversely affects their job-related outcomes (Austin et al., 2008; Karakucs, 2013). In contrast, a few studies suggest that when individuals express their true emotions, they are less stressed, due to which they develop positive attitudes towards personal and job-related outcomes (Serrat, 2017; Mattingly & Kraiger, 2019; Lee & Ok, 2012). Given the conflicting findings, we have formulated the following hypotheses:
H4a: Expression of naturally felt emotions (ENFE) mediates the self-emotional appraisal (SEA) and job satisfaction (JS) relationship.

Methodology
Population and Sample
The research population of the study comprises faculty members working in private teaching institutions of Karachi. From this population, the authors collected data from five leading business schools. The authors personally visited the selected universities and distributed 550 questionnaires. Of this total, we received 499 complete and useable responses. The profile of the respondents is presented in Table 1.

Scales and Measures
The questionnaire we have used in the study has 38 items. Of this total, 4 questions are related to demographics, based on a nominal scale. The remaining 34 items are based on a rating scale of 1 to 5. A summary of the questionnaire used in the study is presented in Table 2.

Data Analysis
We have used the Smart PLS software (version 3.3) for statistical analysis, considered useful for estimating complex models (Henseler et al., 2014). Partial least squares (PLS) is a technique that links latent and indicator variables. The questionnaire used in the study has three latent variables (with seven factors) and 34 indicator variables. The reliability analysis was based on Cronbach's Alpha values, which should be greater than 0.6 (Tabachnick & Fidell, 2007). Convergent validity was examined based on composite reliability and AVE (refer to Table 3). We have used the Fornell & Larcker (1981) criterion, cross-loadings, and the Heterotrait-Monotrait (HTMT) ratio for discriminant validity.

Descriptive Analysis
For descriptive analysis, we have analyzed convergent validity, reliability, cross-loadings of items, and constructs in Table 3. The summary of the results suggests that the Cronbach's Alpha value is the highest for surface acting (SA) (α=0.854), and the lowest is for expression of naturally felt emotions (ENFE) (α=0.853). Thus, we have inferred that the constructs have internal consistency (Hair et al., 2014). All the items' factor loadings range from 0.653 to 0.853 and are statistically significant.
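These reliability and convergent-validity criteria are computed inside Smart PLS, but they are simple enough to verify by hand. Below is a minimal NumPy sketch of Cronbach's alpha, AVE, and composite reliability as described above; the loading values are hypothetical numbers in the reported range, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def ave_and_cr(loadings: np.ndarray) -> tuple[float, float]:
    """Average variance extracted and composite reliability from
    standardized outer loadings of a single construct."""
    squared = loadings ** 2
    ave = squared.mean()
    cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - squared).sum())
    return ave, cr

# Hypothetical loadings in the reported range (0.653-0.853), not the study's data
loadings = np.array([0.85, 0.78, 0.72, 0.66])
ave, cr = ave_and_cr(loadings)
print(f"AVE = {ave:.3f}, CR = {cr:.3f}")  # convergent validity: AVE > 0.5, CR > 0.7
```

The Fornell-Larcker check reported next then amounts to comparing the square root of each construct's AVE with its Pearson correlations with the other constructs.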
Additionally, "the AVE value is greater than 0.60, and composite reliability values are also greater than 0.70. " Thus, we have inferred that the data fulfills convergent validity requirements (Hair et al., 2014). Discriminant Validity We have ascertained the discriminant validity of the constructs based on two criteria, i.e. (1) on Fornell & Larcker (1981) and (2) cross-loading. These approaches have been discussed in the following sections: Discriminant Validity using Fornell & Larcker (1981) Criteria The first criteria we have used to assess discriminant validity is of Fornell & Larcker (1981). It compares the values of the square root of AVE with the Pearson correlation values. We have depicted a summary of the results in Table 4. The results show that the highest Pearson correlation value (R=0.428) is for the pair SA and deep surface acting (DSA). The lowest Pearson correlation value (R=0.006) is for the pair surface acting (SA) and ENFE. The lowest value for AVE's square root is for UOE (0.719), and the highest value is for SEA (0.805). Since the square root of AVE is greater than the values of Pearson correlation, therefore the results fulfill the first criteria of discriminant validity (Brienam & Friedman, 1985). Discriminant Validity Based on Cross Loadings The second criteria we have used for examining the discriminant validity is loading and cross-loading. The summary of the results are shown in Table 5. Direct Effects We have proposed four direct hypotheses which we tested through Smart PLS. The summary of the results is depicted in Table 6. of emotions has a positive effect on teachers satisfaction, and use of emotions has a positive impact on teacher satisfaction. " Mediating Effects We have proposed 12 mediating relationships. These are "the mediating effects of deep surface acting (DSA), surface acting (SA), and expression of naturally felt emotions (ENFE) on teachers' satisfaction. " A summary of the results is depicted in Table 7. The results suggest that of the 12 mediating relationships, six were accepted and the other six were rejected. Discussion and Conclusion This study examines the direct effects of emotional intelligence constructs (i.e., OEA, ROE, SEA, UOE) on TS. It also looks at the mediating effect of SA, DS, and ENFE on TS. Our results supported only six of the 12 hypotheses, including two direct and four mediating (Refer to Tables 7 and 8) The literature suggests that emotional intelligence elevates teachers' satisfaction level, enhancing their behavior and attitude towards work. Consequently, teachers feel happy, and their wellbeing improves significantly (Bar-On, 2010; Jones, et. al., 2002;Hochschild, 1983 emotions are often more satisfied. Mayer, Caruso & Salovey (2016) suggest that teachers with UOE can direct their emotional stress productively. Consequently, such teachers create an environment that motivates students towards learning and achievements (Mayer, Caruso & Salovey, 2016;Hamachek, 2000). Teachers who can control their own emotions and appraise others' feelings are considered emotionally intelligent (Johnson & Spector, 2007;Mayer et al., 2004). Contrary to our results, the literature suggests that teachers with SEA and UOE have higher satisfaction levels towards their jobs (Joseph et all., 2010). DSA enables teachers to monitor and control their emotional feelings due to which they are more productive and conducive to the work environment (Yin et al., 2013;Hostani et al., 2011). 
Moreover, SA enhances the association between ENFE and teachers' JS. The literature suggests that teachers in higher educational institutions with EI can adopt different strategies to manage difficult situations. Also, teachers with low EI cannot develop positive psychological feelings, due to which their satisfaction is low (Grandey, 2000).

Practical Implications
This study has implications for the management of higher education institutions. The results suggest that emotional intelligence is a critical asset. Teachers who can use emotional intelligence adequately are capable of making rational decisions in overstressed situations (Intrator, 2006; Jones et al., 2002). Emotional intelligence is a naturally gifted trait, but institutions, through counseling and training, can increase the personal intelligence level of their employees. Thus, the management of universities should primarily focus on enhancing this capability through well developed and structured training programs. These training programs may help teachers improve their expertise and skills of comprehending, controlling, and monitoring their feelings. Moreover, these training programs would help teachers build emotional associations, refine their cognizance, and upgrade their regulation capability. In addition, universities should counsel teachers on the importance of learning and utilizing emotional labor strategies (i.e., SA, DSA, and ENFE) favorably.

Limitations and Future Research
This study has some constraints and provides directions for future research. The sample for the study consists of permanent and adjunct faculty of private teaching institutes of Karachi. Permanent and adjunct faculty members' emotional intelligence and satisfaction levels may not be the same. Future studies may explore the difference in the attitudes of permanent and adjunct faculty towards job satisfaction. Since this study's scope was limited to one city, i.e., Karachi, other researchers can extend the developed conceptual framework to other cities and industries. We have examined the indirect effect of emotional labor strategies (i.e., SA, DSA, and ENFE). Future studies can examine the mediating effects of other antecedents of job satisfaction. The demographic and cultural aspects were beyond the scope of this study. However, future academicians may consider these aspects in their studies.

Questionnaire Items
I work at developing the feelings inside of me that I need to show to students or their parents
Expression of Naturally Felt Emotions (ENFE)
The emotions I express to students or their parents are genuine
The emotions I show students or their parents come naturally
The emotions I show students or their parents match what I spontaneously feel
Teacher Satisfaction Scale (TS)
In most ways, being a teacher is close to my ideal
My conditions of being a teacher are excellent.
I am satisfied with being a teacher.
So far I have gotten the important things I want to be a teacher
If I could choose my career over, I would change almost nothing
Criminal Law Protection of Cybersecurity Considering AI-based Cybercrime

As the development of artificial intelligence (AI) unfolds, cybersecurity faces the invasion of AI. AI-based cybercrime has become a product of technological development, posing a threat to national security, public security, and the protection of citizens' personal property rights and privacy rights. In particular, the derivatives of cybercrime that follow the development of science and technology lead to a qualitative change in the patterns of cybercrime, expanding the scope and depth of the harm caused by cybercrime and producing a series of impacts on the conviction rules of traditional criminal law in China. Defects are present, such as the weakening of the features of the principal offender, the unclear definition of responsibility for accessories to the principal offender, and the narrow scope applicable to the one-sided accomplice. To this end, in an era where AI and cybercrime are deeply integrated, and given the derivative trend of AI-based cybercrime, it is urgent to adjust the focus of the criminal strategy against cybercrime. The horizontal docking of domestic substantive law and legal interpretation should be performed to achieve a gradient balance between judicial interpretation and legislative amendment and to adjust the criminal boundary in cybercrime evaluation, thereby changing the current high threshold of conviction in cybercrime governance and the late launch of the power to punish.

Introduction
We are on the eve of a new round of scientific and technological revolution. The sense of survival crisis thus triggered touches everyone, of any gender, race, or country [1] [2]. Human beings have experienced thousands of years of vicissitudes, from the primitive society that made fire by drilling wood, through agricultural society, to industrial society [3] [4]. Nowadays, the network impact brought by the information age expands the scope of social activity of each unit in the social organization structure from physical space to network space. The pace of this expansion may be measured in years, with perhaps five years as an iteration cycle. Therefore, we need to move with the times and undergo our own revolution. In an era when AI is rapidly occupying the market, it has changed our cognition of cybersecurity [5] [6]. For example, AI with a deep self-learning feature has, within days, learned and completed the chess knowledge accumulated by humans over thousands of years. Under the rational perspective of "the future is here", we are naturally required to pay attention to cybersecurity, face the reality of AI accelerating the alienation of cybercrime, update the value orientation of cybersecurity at the national level and in the public and private fields, and at the same time carry out beneficial exploration of the criminal legal system.

From the era of the traditional internet to AI: iterative evolution of cybersecurity
With the iterative evolution of the network, the content of cybersecurity has changed greatly. In the traditional network era, the content of cybersecurity was based on the physical stability of the network. Today, cybersecurity has completed the transformation from a medium into a carrier of national security, public security, and economic security, and has realized the shift of the focus of work and life online and the transition to network space. In the era of AI, the legal interest of cybersecurity is not a single legal interest but a compound legal interest.
With the high dependence of the whole society on the network and the embedding of intelligent technology in everyday life, paying attention to cybersecurity is paying attention to national security, public security and information security, which is also the logical starting point of studying cybersecurity.

Table 2. Layer appearance of cybersecurity content in the era of AI
Time series: the age of AI | Type of cybersecurity: space security | Content of cybersecurity: national security, public security, economic security

Impact of AI-based crime on cybersecurity in the era of AI
Whether it is the crime against computer software and computer information systems in the pre-network era or the increasingly rampant cybercrime of recent years, it can be punished on the basis of the traditional criminal law system and charges, and has not yet had an impact on the application of criminal law rules. AI-based cybercrime contains higher technological content than general cybercrime. However, this also means that with each technological innovation, the harm and the victims of cybercrime are exposed to a series of new criminal risks. The essential reason for these risks is that each of us is caught up in the development process of cyberspace.

Given a sample data set $S = \{(x_i, y_i)\}_{i=1}^{n}$, assume that $x_i$ is the $i$-th sample and $y_i$ is the corresponding label. Consider the classification problem, and note the pairs of samples that are of the same kind, the pairs that are neighbors, the pairs that are neighbors and in the same class, and the pairs that are in different classes. The importance of each feature dimension in the classification problem is different. To a certain extent, a learned distance over the cybercrime data can overcome the shortcoming of treating each feature dimension equally. It is defined as follows: the distance between samples $x_i$ and $x_j$ is

$$d_A(x_i, x_j) = \sqrt{(x_i - x_j)^{T} A (x_i - x_j)},$$

where $A$ stands for a symmetric positive semidefinite matrix. According to the properties of positive semidefinite matrices, $A$ is decomposable into $A = L^{T} L$, so the above equation can be expressed as follows:

$$d_A(x_i, x_j) = \lVert L x_i - L x_j \rVert_2 .$$

This is equivalent to taking the matrix $L$ as a mapping, mapping the data in the original space to a new space and converting the distance in the original space into the Euclidean distance in the new space.

As far as AI-based cybercrime is concerned, we should clarify whether it is cybercrime committed by using AI technology or criminal behavior caused by AI simulation. Cybercrime supported by AI technology is mainly criminal behavior caused by the extensive use of AI technology. Crime caused by AI simulation is mainly due to design loopholes in AI technology. This paper mainly discusses the use of AI techniques in the implementation of cybercrime. This consideration is based on the fact that, from the industrial revolution to the post-Internet era, every technological innovation has brought changes to our lifestyle; we experience the convenience of AI technology while at the same time worrying about the popularity of this technology. The rapid penetration of AI technology into daily life will not make all cybercrimes upgrade iteratively, but it will indeed hatch new methods of cybercrime.

Positive feedback on AI-based cybercrime: guarantee of cybersecurity based on criminal law in the era of AI
The negative effect of the development of AI technology is the AI-fication of cybercrime, which reveals the mutual flow between technology and law. Today's cyberspace is no longer a pure place for data exchange, but a quasi-real world constructed by information technology.
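A minimal sketch of the distance just defined, with a hypothetical 2-D mapping matrix L, showing that the learned metric d_A and the plain Euclidean distance after the mapping coincide whenever A = L^T L:

```python
import numpy as np

def metric_distance(x_i, x_j, A):
    """d_A(x_i, x_j) = sqrt((x_i - x_j)^T A (x_i - x_j)), A symmetric PSD."""
    d = x_i - x_j
    return float(np.sqrt(d @ A @ d))

def mapped_distance(x_i, x_j, L):
    """Euclidean distance after mapping both points with L."""
    return float(np.linalg.norm(L @ x_i - L @ x_j))

L = np.array([[2.0, 0.0], [1.0, 1.0]])   # hypothetical mapping matrix
A = L.T @ L                              # symmetric positive semidefinite by construction
x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.isclose(metric_distance(x1, x2, A), mapped_distance(x1, x2, L))
```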
Cyberspace almost provides the same venue for human activities as the real world, and makes all the preparations for physical social behaviors such as food, clothing, housing, and transportation. After human behavior extends to the network world, the legal rules of the real world also need to enter network space, especially in the era of AI. When facing new social problems, China tends to solve them through legislation. For example, in dealing with cybercrime, preparatory acts have been directly criminalized through legislation and the crime of "helping information cybercrime" has been established, but their application is conditional and cannot fundamentally solve the high incidence of cybercrime. Such legislative action is certainly proactive. However, it is impossible to address new problems through legislation immediately, and attempting to do so is contrary to the modesty of criminal law and the spirit of the law. We need to find a balance between legislation and justice in the face of a new round of technology, that is, objective interpretation, to implement the national responsibility of maintaining cybersecurity. Objective interpretation emphasizes that legal interpretation is not only about revealing the legislative intention, but also about exploring, from the legal norms, the legal essentials in line with social development. It can flexibly resolve the rigidity of substantive law in the governance of cybercrime. The reasons are as follows. First, objective interpretation theory is the most influential theory of legal interpretation today; following the pulse of scientific and technological development and making objective and appropriate legal interpretation is the fundamental task of fair interpretation, and is the product of the development of AI technology and positivism. Second, objective interpretation makes up for the mechanical nature of the article itself. Through the vitality of language and words, objective interpretation revives criminal charges and enables the traditional criminal law system to show its vitality in cyberspace. Third, objective interpretation is conducive to the realization of fairness and justice. Objective interpretation has an important role in alleviating the rigidity of law by focusing on the exploration of the essence of law while exploring the laws of social development. Judge Richard A. Posner once said that he learned the basic features and necessary background knowledge of a technical field through assistants who had dealt with science, which helped him to conduct fair trials. Therefore, "the law should go deep into the essence rather than the reality, and pay attention to the spirit rather than the literal meaning" has become the guiding ideology of objective interpretation in the governance of AI-based cybercrime. Expanding the scope of punishment in the current criminal law through objective interpretation does not lead to an abuse of the power to punish or an expansion of charges, because the objectivity and preciseness of objective interpretation require attention to the circumstances of the crime, and using "serious circumstances" to limit the interpretation can avoid the expansion of charges. Furthermore, we should clarify, through objective interpretation, the independent status of the crime of helping information cybercrime as a principal offense, and reduce the conditions for conviction, thereby preventing the offense from lying dormant in future judicial activities and improving the practicality of the charge.
Conclusions
Einstein once said, "I never think of the future. It comes soon enough." We are now at the dawn of the AI era, when the forms of cybercrime are changing with each passing day. The theoretical exploration and institutional innovation of cybersecurity triggered by the intelligent revolution is in full swing. We need to ensure the first concern of AI-based cybercrime, the stable implementation of cybersecurity, before carrying out subsequent studies. Against this background, the study of cybersecurity and criminal law should not be delayed, let alone absent. It is imperative to make both technological development and legal rationality shine in the era of AI.
Restorative and Compensatory Changes in the Brain During Early Motor Recovery from Hemiparetic Stroke: a Functional MRI Study

Introduction
Stroke is a leading cause of disability in the elderly, and a considerable number of stroke patients suffer from residual motor deficits, particularly hemiparesis. There is a wide range of motor functional recovery after stroke, depending on the site, size and nature of the brain lesion (Duncan et al., 1992). Full recovery of hemiparesis is often observed when it is mild, and considerable recovery is not exceptional even after initial severe deficit. Functional recovery after stroke may be caused by resolution of acute effects of stroke, such as low blood flow, diaschisis and brain edema. However, functional gains may be prolonged past the period of this acute tissue response and its resolution. Stroke rehabilitation is introduced in order to promote brain plasticity and facilitate motor recovery. Understanding the mechanism of motor functional recovery after stroke is important because it may provide a scientific basis for rehabilitation strategies.
Recent advances in non-invasive functional neuroimaging techniques, such as positron emission tomography (PET), functional MRI (fMRI) and near-infrared spectroscopy (NIRS), have enabled us to study directly the brain activity in humans after stroke (Herholz & Heiss, 2000; Calautti & Baron, 2003; Rossini et al., 2003; Obrig & Villringer, 2003). Initial cross-sectional studies at chronic stages of stroke have demonstrated that the pattern of brain activation is different between paretic and normal hand movements, and suggested that long-term recovery is facilitated by compensation, recruitment and reorganization of cortical motor function in both damaged and non-damaged hemispheres (Chollet et al., 1991; Weiller et al., 1992; Cramer et al., 1997; Cao et al., 1998; Ward et al., 2003a). Subsequent longitudinal studies from subacute to chronic stages (before and after rehabilitation) have revealed a dynamic, bihemispheric reorganization of the motor network, and emphasized the necessity of successive studies (Marshall et al., 2000; Calautti et al., 2001; Feydy et al., 2002; Ward et al., 2003b). However, only limited data are available relating poststroke motor recovery to dynamic changes in cerebral cortical reorganization at the acute stage of stroke. We therefore measured the changes in cortical activation using fMRI during paretic hand movement both at the acute stage of stroke and at the chronic stage when motor recovery was obtained.

Subjects
We selected 9 ischemic stroke patients with mild hemiparesis without a history of prior stroke, who received the fMRI study within 7 days of stroke onset. The patients presented with neurological deficits including hemiparesis, and were admitted to our hospital. They received standard stroke therapy and rehabilitation, and were discharged from the hospital when they were independent regarding activities of daily living. They were 58-85 years of age, 8 males and 1 female, and all of them were right-handed. All the cerebral infarcts were evidenced by MRI, and were located in various regions of the cerebrum. The hand motor area was preserved in all patients. Six of the patients had left hemiparesis and 3 had right hemiparesis. They could move their hands, even though weakly, when the first fMRI was performed. No patients in this study had language or attention deficits. Clinical data are summarized in Table 1. Nine right-handed, normal subjects (40-81 years of age; 3 males and 6 females) served as controls. This study was approved by the ethics committee of our hospital and informed consent was obtained from all subjects in accordance with the Declaration of Helsinki.

Functional MRI
Two fMRI studies were performed over time in all patients, the first one within 7 days of stroke onset (4.3 ± 2.1 days; the mean value ± SD) and the second one approximately 1 month later, before they were discharged from the hospital (36.6 ± 20.9 days). The fMRI studies were performed using a 1.5 T Siemens Magnetom Symphony MRI scanner as described previously (Kato et al., 2002). Briefly, blood oxygenation level-dependent (BOLD) images (Ogawa et al., 1990) were obtained continuously in a transverse orientation using a gradient-echo, single-shot echo planar imaging pulse sequence. The acquisition parameters were as follows: repetition time 3 s, echo time 50 ms, flip angle 90°, 3-mm slice thickness, 30 slices through the entire brain, field of view 192 x 192 mm, and 128 x 128 matrix.
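As a sketch of how such a block-design acquisition is analyzed (the paradigm and the SPM-based general linear model are detailed in the next paragraphs: 30 s rest and 30 s task blocks repeated 5 times at TR 3 s), here is a minimal NumPy illustration. The gamma-shaped HRF and the helper names are simplifications and assumptions of this sketch, not what SPM99 literally implements.

```python
import numpy as np

TR, block_s, n_cycles = 3.0, 30.0, 5
vols_per_block = int(block_s / TR)            # 10 volumes per 30 s block
# 5 cycles of (30 s rest, 30 s movement) -> 100 volumes in a 5 min run
box = np.tile(np.r_[np.zeros(vols_per_block), np.ones(vols_per_block)], n_cycles)

def hrf(t, peak=6.0):
    """Crude gamma-shaped hemodynamic response (a simplification of SPM's)."""
    return (t / peak) ** 2 * np.exp(-(t - peak) / peak) * (t >= 0)

t = np.arange(0.0, 30.0, TR)
regressor = np.convolve(box, hrf(t))[: box.size]  # expected BOLD time course

def glm_tstat(bold, reg):
    """t-statistic of the task regressor for one voxel's time series."""
    X = np.column_stack([np.ones_like(reg), reg])
    beta, rss, *_ = np.linalg.lstsq(X, bold, rcond=None)
    sigma2 = rss[0] / (bold.size - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se
```

Voxels whose t-statistic survives the corrected p<0.05 threshold mentioned below would then count as activated.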
During the fMRI scan, the patients and normal controls performed sequential, self-paced hand movements (repeated closing and opening of the hand). This task performance occurred in periods of 30 s, interspaced with 30 s rest periods. The cycle of rest and task was repeated 5 times during each hand movement. Therefore, the fMRI scan of each hand movement took 5 min to complete, producing 3,000 images. A staff member monitored the patient directly throughout the study, gave the start and stop signals by tapping gently on the knee, and confirmed the absence of mirror movements. All the stroke patients completed the task, but the paretic hand movement appeared abnormal at the acute stage. Data analysis was performed using Statistical Parametric Mapping (SPM) 99 (Wellcome Department of Cognitive Neurology, London, UK, http://www.fil.ion.ucl.ac.uk/spm/) implemented in MATLAB (The MathWorks Inc., Natick, MA, USA). After realignment and smoothing, the general linear model was employed for the detection of activated voxels. The voxels were considered significantly activated if p<0.05 (corrected for multiple comparisons). All the measurements were performed with this same statistical threshold. The activation images were overlaid on corresponding T1-weighted anatomic images. The criteria for the changes in the fMRI activation pattern in stroke patients were as follows.
1. A reduction of activation was considered when the area of activation was reduced to <50% compared to that induced by unaffected hand movement of the patient.
2. An expansion of activation was considered when the area of activation was increased by >50% compared with that induced by unaffected hand movement.
3. An appearance of activation was considered when a cluster of activation was induced in a region where unaffected hand movement induced no or little activation.

Control subjects
In control subjects, each hand movement activated predominantly the contralateral primary sensorimotor cortex (SM1), supplementary motor areas (SMA), and the ipsilateral anterior lobe of the cerebellum (Cbll) (Fig. 1). Contralateral SM1 and ipsilateral Cbll were always involved, with a variation between individuals, and SMA was activated in 6 of 9 subjects with more variation. Ipsilateral SM1 was slightly activated in 2 of 9 control subjects. There was no large difference between right and left hand movements.

Stroke patients
During unaffected hand movements, the patients activated the same motor cortical areas as the control subjects did. The brain activation pattern during paretic hand movements within 7 days of stroke onset (the first fMRI) was different from that during unaffected or normal hand movements (Figs. 2-4). There were two major findings. First, activations were reduced or lost in part or all of the normally activated regions (contralateral SM1, SMA, and ipsilateral Cbll) in 8 of 9 patients. Activation in contralateral SM1 was reduced or lost in 5 of 9 patients; in 4 of 4 patients with cortical infarction (Fig. 3) and in 1 of 5 patients with subcortical infarction (Fig. 2). In the latter patient, the subcortical infarct was located near the motor hand area. In 4 of 5 patients with subcortical infarction, in contrast, activation in contralateral SM1 was preserved. Even when contralateral SM1 was activated, there was a posterior or ventral shift or expansion of activation in 2 patients.
Ipsilateral cerebellar activation was reduced or lost in 5 of 9 patients. SMA activation was reduced or lost in 4 of 7 patients who activated SMA during unaffected hand movement. Second, a recruitment of additional motor-related areas was seen in 4 of 9 patients. The additional activations were observed in ipsilateral SM1 (4 patients), contralateral Cbll (1 patient), premotor cortex (bilateral in 1 patient and ipsilateral in 1 patient), and bilateral parietal cortex (1 patient) (Fig. 4). At the second fMRI study approximately 1 month later, the brain activation pattern during paretic hand movement had returned to normal in 7 of 9 patients and near normal in 2 other patients. Additional activation was still seen in ipsilateral SM1 in 1 patient.

Discussion
In this study, we observed a remarkable difference in cerebral cortical activation between affected and unaffected hand movements at the acute stage of stroke (within 7 days of onset). Paretic hand movement-induced brain activation may be reduced markedly in cortical areas that are normally activated by unaffected hand movements (contralateral SM1, SMA, and ipsilateral Cbll). The reduction of SM1 activation was observed predominantly in patients with cortical infarction and was exceptional in patients with subcortical infarction. Early recruitment of additional motor-related areas (ipsilateral SM1 in particular and secondary motor areas) may occur. At the chronic stage (the second fMRI), brain activations during paretic hand movement had returned to normal or near-normal. Thus, early motor recovery after stroke was accompanied by two major changes on fMRI, i.e., restoration of brain activity and recruitment of additional brain activity. We wanted to investigate motor-related brain activation, and selected patients who could move their hands, even though weakly, when the first fMRI study was performed within 7 days of stroke onset. As a result, we needed to select stroke patients with mild motor deficit and resultant excellent recovery, because patients with poor motor function cannot perform the task of this study. This a priori limited the scope of the findings of our study. Furthermore, the motor performance of the paretic hand was not normal, and one may point out that the brain activities during paretic and unaffected hand movements cannot be compared. But our results suggest that motor functional recovery occurred primarily using the standard motor system when damage to it was mild or partial, recruiting functionally related motor areas when necessary as a compensatory strategy. The recruitment of additional activation of motor-related areas was often transient. Therefore, this additional activation may reflect compensation and unmasking or disinhibition of an existing motor network which is masked or inhibited under normal conditions, since the activation appeared early after stroke. Of interest is that these restorative and compensatory changes occurred within the first month after stroke, and this period seemed critical to motor functional recovery.
(Figure caption fragment: activation in the ipsilateral cerebellum had been normalized; activations in the ipsilateral primary motor cortex and contralateral cerebellum were still seen; right (normal) hand movement induced a normal activation pattern.)
Fig. 4. fMRI of a 75-year-old female (patient 6) who had a cerebral infarct in the right corona radiata (arrow on the T1-weighted MRI).
After 7 days of stroke onset, right (normal) hand movement induced normal activation in the left primary sensorimotor cortex and the right cerebellum. No activation was seen in the supplementary motor areas in this patient. During left (paretic) hand movement, extensive activation was seen in the bilateral primary sensorimotor and parietal cortices, but no activation was seen in the supplementary motor areas and the cerebellum. After 48 days, both paretic (left) and normal (right) hand movements induced a normal activation pattern.
Earlier functional neuroimaging studies on poststroke cerebral reorganization from subacute to chronic stages revealed several activation patterns during paretic hand movement (Ward & Cohen, 2004; Jang, 2007). These include (1) a posterior shift of contralateral SM1 activation (Pineiro et al., 2001) or peri-infarct reorganization after primary motor cortex infarction (Cramer et al., 1997; Jang et al., 2005a), (2) a shift of primary motor cortex activation to the ipsilateral (contralesional) cortex (Chollet et al., 1991; Marshall et al., 2000; Feydy et al., 2002), (3) contribution of the secondary motor areas (Cramer et al., 1997; Carey et al., 2002; Ward et al., 2006), and (4) higher contralateral activity in the cerebellar hemisphere (Small et al., 2002). In the present study, we observed similar additional activation patterns at the acute stage of stroke. The earlier studies have also shown that the expanded activations may later decrease with functional improvements, which was also true in many of our acute stroke patients. The contralesional shift of activation may return to ipsilesional SM1 activation with functional gains (Feydy et al., 2002; Takeda et al., 2007), but worse outcome may correlate with a shift in the balance of activation toward the contralesional SM1 (Calautti et al., 2001; Feydy et al., 2002; Zemke et al., 2003). Thus, the patterns of cerebral activation evoked by hand movement show impaired organization and reorganization of the brain motor network, and best recovery may depend on how much of the original motor system is reusable. The patterns of activation may also be dependent on the patient's ability to recruit residual portions of the bilateral motor network (Silvestrini et al., 1998). The fMRI findings need to be considered within the context of technical and task-dependent factors. The fMRI mapping obtained by the BOLD technique is dependent on the spatial extent of hemodynamic changes induced by local synaptic activity and field potentials (Logothetis et al., 2001). Localization of neural activity may be confounded by many factors (Ugurbil et al., 2003). BOLD-dependent capillary density and draining veins and the perfusion of brain tissue may differ between damaged and undamaged tissues, especially at acute stages of stroke. fMRI activation can even be lost in stroke patients because of altered vasomotor reactivity, demonstrating uncoupling of neuronal activity and fMRI activation (Rossini et al., 2004; Binkofski & Seitz, 2004; Murata et al., 2006). In our study, paretic hand movement at the acute stage resulted in reduced motor cortex activation in the damaged hemisphere, especially in patients with cortical infarction. This reduced activation may be due not only to impaired neural activity but also to this uncoupling when cerebral infarction was close to the SM1.
Thus, we need to be cautious when interpreting fMRI results at the acute stage of stroke. In contrast, patients with subcortical infarction did not usually display a reduction in motor cortex activation because the lesion was distant from the motor cortex. Of interest is the determinant of fMRI activation patterns induced by paretic hand movement. Motor system reorganization may also be influenced by stroke topography (Feydy et al., 2002; Luft et al., 2004), time after stroke, and stroke side (Zemke et al., 2003). Cortical motor organization may differ between dominant and non-dominant hand movements, and non-dominant hand movements are more bilaterally organized (Kim et al., 1993). Furthermore, the performance of complex motor tasks is accompanied by bilateral activation of motor cortices, in contrast to simple motor tasks that result in only contralateral activation (Shibasaki et al., 1993). Hemiparesis would increase task difficulty. When task demand increases, more regions would be activated by a motor task. Improvement in motor skills may depend on rehabilitation, handedness, motivation, and age-related capacity for plasticity. Motor reorganization after stroke may thus be obtained depending on a number of factors. If motor reorganization is related to the degree of damage to the pyramidal tract, information on the sensorimotor projections would help to further understand brain reorganization in the context of structure and function. Diffusion tensor imaging tractography, which non-invasively visualizes the pyramidal tract using the water molecule diffusion characteristics in the white matter, may be the tool to analyze its integrity (Masutani et al., 2003). The combination of fMRI and tractography of the pyramidal tract would further elucidate the mechanism of motor functional recovery after stroke (Jang et al., 2005b). Ipsilateral primary motor cortex activation may be seen slightly in normal subjects, and we cannot distinguish whether the ipsilateral motor activities found in stroke patients existed prior to stroke or are a result of brain plasticity. Furthermore, we do not know whether the additional activation that appeared after stroke really contributed to motor functional recovery. With regard to these fundamental points, of interest are the findings of experimental studies using animal models. Nishimura et al. (2007) have shown, using a monkey model of unilateral pyramidal tract injury, that motor recovery involves bilateral primary motor cortex during the early recovery stage and more extensive regions of contralesional primary motor cortex and bilateral premotor cortex during the late recovery stage. Nudo et al. (1996) and Frost et al. (2003) used an ischemic brain injury model in the monkey and showed substantial enlargement of the hand representation within the primary motor cortex and the ventral premotor cortex. These animal studies suggest that reorganization in the brain motor network provides a neural substrate for adaptive motor behavior and plays a critical role in the recovery of motor function after stroke.

Conclusion
We investigated the changes in cortical activation using fMRI during paretic hand movement both at the acute stage of stroke and at the chronic stage when motor recovery was obtained. The findings of this study suggest that early motor recovery after stroke occurs primarily using the standard motor system, by recovering from reversible injury and by recruiting related motor areas for functional compensation.
fMRI is an important tool for revealing the capacity and progress of rehabilitation-dependent changes in the brain motor network after stroke, and provides a neuroscientific basis for stroke rehabilitation. Future studies should clarify the relation between the motor recovery mechanisms and clinical outcome, and the importance of the critical period that greatly influences motor functional recovery after stroke.
Most Intense X-Ray Lines of the Helium Isoelectronic Sequence for Plasmas Diagnostic

We report accurate wavelengths for the three most intense lines (resonance line: $1s^2\,^1S_0 - 1s2p\,^1P_1$, intercombination line: $1s^2\,^1S_0 - 1s2p\,^3P_1$ and forbidden line: $1s^2\,^1S_0 - 1s2s\,^3S_1$), along with wavelengths for the $1s^2\,^1S_0 - 1snp\,^1P_1$ and $1s^2\,^1S_0 - 1snp\,^3P_2$ (2 ≤ n ≤ 25) transitions in He-like systems (Z = 2 - 13). The first spectral lines that belong to the above transitions are established in the framework of the Screening Constant per Unit Nuclear Charge method. The results obtained agree excellently with various experimental and theoretical literature data. The uncertainties in wavelengths between the present calculations and the available literature data are less than 0.004 Å. A host of new data listed in this paper may be of interest in astrophysical and laboratory plasmas diagnostic.

Introduction
The helium-like isoelectronic series emits strong X-ray lines. The most intense lines of these systems are the resonance line designated by ω (also labelled r: $1s^2\,^1S_0 - 1s2p\,^1P_1$), the intercombination lines (x + y) (or i: $1s^2\,^1S_0 - 1s2p\,^3P_{2,1}$) and the forbidden line z (or f: $1s^2\,^1S_0 - 1s2s\,^3S_1$). These three lines correspond to transitions between the n = 2 excited shell and the n = 1 ground-state shell. The determination of these lines is of great interest because the line ratios f/i and (f + i)/r provide, respectively, the electron density ($n_e \sim 10^8 - 10^{13}\ \mathrm{cm}^{-3}$) and the electron temperature ($T_e \sim 1 - 10$ MK), as first shown by Gabriel and Jordan [1], and are widely used for collisional solar plasma diagnostics [1] [2] [3]. On the other hand, these line ratios also enable one to determine the prevailing ionization processes in the plasma. Accad et al. [12] used a wave function expanded in a triple series of Laguerre polynomials of the perimetric coordinates to study the S and P states of the helium isoelectronic sequence, and reported nonrelativistic wavelengths and total wavelengths, including mass polarization, relativistic and Lamb-shift corrections, for Z = 2 - 9 belonging to the $1snp\,^1P - 1s^2\,^1S$ (n = 2 - 5) transitions. In addition, Safronova et al. [13] applied the MZ code, through a perturbation theory based on hydrogen-like functions, to compute wavelengths of highly charged He-like ions (Z = 6 - 54) for both satellite lines ($1s2l'nl - 1s^2n'l'$, n, n' = 2, 3) and the ($1snp\,^{1,3}P - 1s^2$, n = 2, 3 and $1s2s\,^{1,3}S - 1s^2$) transitions. Additionally, the plasma simulation code CLOUDY was used by Porter [14] to present wavelengths of the UV, intercombination, forbidden, and resonance transitions of He-like ions for Z = 6 - 14 and for Z = 16, 18, 20, and 26. But, as far as we know, the wavelengths cannot be directly determined within a single analytical formula for all members of the He-like ions using one of the preceding methods or one of the other existing computational techniques. Thus, analytical spectral lines for two-electron systems, analogous to the Balmer or Lyman spectral lines of hydrogen-like systems, are not yet established. In this paper, we intend to present analytical spectral lines belonging to the resonance line $1s^2\,^1S_0 - 1s2p\,^1P_1$ and the intercombination line $1s^2\,^1S_0 - 1s2p\,^3P_{2,1}$, along with the $1s^2\,^1S_0 - 1snp\,^1P_1$ (n ≤ 10) transitions in the helium isoelectronic sequence. In our study, we use the Screening Constant per Unit Nuclear Charge (SCUNC) method, suitable for the analysis of atomic spectra [15] [16]. All the results obtained in the present work compare very well with the available experimental and theoretical literature data.
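As a small illustration of the diagnostics described above, the density-sensitive ratio R = f/i and the temperature-sensitive ratio G = (f + i)/r can be computed directly from measured line fluxes; the flux values below are hypothetical, purely for demonstration.

```python
def he_like_ratios(f_flux: float, i_flux: float, r_flux: float) -> tuple[float, float]:
    """Gabriel & Jordan line-ratio diagnostics for a He-like ion:
    R = f/i is sensitive to electron density (n_e ~ 1e8 - 1e13 cm^-3),
    G = (f + i)/r is sensitive to electron temperature (T_e ~ 1 - 10 MK)."""
    R = f_flux / i_flux
    G = (f_flux + i_flux) / r_flux
    return R, G

# Hypothetical fluxes for the forbidden (z), intercombination (x+y) and resonance (w) lines
R, G = he_like_ratios(f_flux=120.0, i_flux=40.0, r_flux=200.0)
print(f"R = {R:.2f} (density diagnostic), G = {G:.2f} (temperature diagnostic)")
```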
A host of data listed in this paper may be of interest in astrophysical and laboratory plasmas diagnostic. In section 2, we present the theoretical procedure adopted in this work. In section 3, the results are presented and discussed, and a comparison of our results with available experimental and theoretical results is made.

Brief Description of the SCUNC Formalism
In the framework of the Screening Constant per Unit Nuclear Charge formalism, the total energy of the $(Nl\,nl';\,^{2S+1}L^{\pi})$ excited states is expressed in the form (in rydberg units)

$$E(Nl\,nl';\,^{2S+1}L^{\pi}) = -Z^{2}\left[\frac{1}{N^{2}} + \frac{1}{n^{2}}\left(1 - \beta(Nl\,nl';\,^{2S+1}L^{\pi};\,Z)\right)^{2}\right].$$

In this equation, the principal quantum numbers N and n are, respectively, for the inner and the outer electron of the He-isoelectronic series. The β-parameters are screening constants per unit nuclear charge, expanded in inverse powers of Z and given by

$$\beta(Nl\,nl';\,^{2S+1}L^{\pi};\,Z) = \sum_{i} f_{i}\left(\frac{1}{Z}\right)^{i},$$

where the $f_i$ are parameters to be evaluated empirically. Using the experimental total energies of He I, Li II and Be III, respectively (in eV) −79.01 [17], −198.09 [18] and −371.60 [18], these screening constants are evaluated by use of the infinite rydberg energy 1 Ryd = 13.605698 eV, which yields the empirical values of the $f_i$-parameters. Before presenting and discussing the results obtained in this work, let us first explain how electron-electron and relativistic effects are accounted for in the present SCUNC formalism. As mentioned previously [16], these effects are incorporated implicitly: in the corresponding expressions, α denotes the fine structure constant and M is the nuclear mass of the Q-electron system; the energy value of the Hamiltonian (9a) can be expressed in the same shape as Equation (9b), and using (9c) and the last equation in (10) leads to Equation (11).

Results and Discussions
The present SCUNC predictions for the wavelengths belonging to the $1s^2\,^1S_0 \rightarrow 1snp\,^1P_1$ (3 ≤ n ≤ 13) transitions in He-like ions (Z = 3 - 38) are quoted in Table 1. The percentage deviations with respect to the experimental values of the corresponding systems are less than 0.009%. The slight discrepancies can be explained by the fact that the present formalism explicitly disregards mass polarization, relativistic and QED corrections. For the $1\,^1S_0 \rightarrow np\,^1P_1$ (n ≥ 3) transitions, comparison with the quoted experimental data again indicates good agreement. For these levels, the percentage deviations with respect to the experimental values of the corresponding systems are less than 0.05%. Here, the discrepancies may be imputed mainly to mass polarization corrections, which are not taken into account in the present calculations. In fact, as mentioned by Beiersdorfer et al. [9], the n ≥ 3 levels are less affected by electron-electron interactions, relativistic and QED corrections. Then, for n ≥ 3 states, the ratio m/M (m and M being, respectively, the electron and nuclear masses) becomes important with increasing Z-charge number. Nevertheless, the present SCUNC semi-empirical formulas may be considered a good representation of experimental data when electron-electron interactions, relativistic and QED corrections are disregarded. In Table 3, the SCUNC predictions for the wavelengths belonging to the $1s^2\,^1S_0 \rightarrow 1s2p\,^{1,3}P_1$ transitions in He-like ions are compared to the ab initio calculations of Accad et al. [12] using a wave function expanded in a triple series of Laguerre polynomials of the perimetric coordinates, the computational results of Safronova et al. [13] applying the MZ code through a perturbation theory based on hydrogen-like functions, and the data of Porter [14] using the plasma simulation code CLOUDY.
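To make the formalism concrete before turning to the detailed comparisons, here is a minimal sketch of how a SCUNC total energy translates into a transition wavelength. The energy expression follows the form reconstructed above, and the β values passed in stand for the empirically fitted screening constants; the numbers used below are illustrative placeholders (chosen so the output lands near the Li II resonance line), not the paper's fitted $f_i$ values.

```python
RYD_EV = 13.605698           # infinite rydberg energy in eV, as used in the text
HC_EV_ANGSTROM = 12398.42    # h*c in eV*Angstrom

def scunc_energy_ry(Z: int, N: int, n: int, beta: float) -> float:
    """Total energy (in Ry) of an (Nl, nl') state following the expression above:
    E = -Z^2 * [1/N^2 + (1 - beta)^2 / n^2], with beta the fitted screening constant."""
    return -Z**2 * (1.0 / N**2 + (1.0 - beta) ** 2 / n**2)

def transition_wavelength_angstrom(E_upper_ry: float, E_lower_ry: float) -> float:
    """Wavelength of the line connecting two total energies (E_upper above E_lower)."""
    delta_e_ev = (E_upper_ry - E_lower_ry) * RYD_EV
    return HC_EV_ANGSTROM / delta_e_ev

# Illustrative beta values only; they are NOT the paper's fitted f_i values.
E_ground = scunc_energy_ry(Z=3, N=1, n=1, beta=0.214)   # 1s^2 1S0 of Li II
E_1s2p = scunc_energy_ry(Z=3, N=1, n=2, beta=0.338)     # 1s2p 1P1 of Li II
print(f"lambda = {transition_wavelength_angstrom(E_1s2p, E_ground):.1f} Angstrom")
```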
The overall agreement between the calculations is reasonably gratifying. Here, the |Δλ_theo| differences in wavelengths between the present calculations and the theoretical literature data [12] [13] [14] never exceed 0.003 Å for the 1s² ¹S₀ → 1s2p ¹P₁ resonance line and 0.008 Å for the 1s² ¹S₀ → 1s2p ³P₁ intercombination line up to Z = 22, which points out the good agreement between the calculations. (In the tables, λ_p denotes the present SCUNC calculations, λ_theo represents the theoretical values and |Δλ_theo| stands for the difference in wavelengths between the present calculations and the other theoretical ones; (a): calculations of Accad et al. [12]; (b): calculations of Safronova et al. [13]; (c): calculations of Porter [14]. Wavelengths are in angstroms.) The discrepancies with respect to the accurate ab initio computations are due to the present non-relativistic formalism. Table 4 shows a comparison of the present wavelengths for the forbidden 1s² ¹S₀ → 1s2s ³S₁ transitions of He-like systems (Z = 2–15) with the NIST compiled data. Excellent agreement is obtained between the SCUNC predictions and the NIST data: except for Z = 8, the maximum shift in wavelengths with respect to the NIST values is 0.003 Å. In Table 5, the present theoretical wavelengths for the 1snp ¹P₁ → 1s² ¹S₀ (2 ≤ n ≤ 5) transitions of the helium-like ions up to Z = 9 are compared to the λ_nrel nonrelativistic wavelengths and to the λ_tot total wavelengths (including mass polarization, relativistic corrections and the Lamb-shift correction for the 1¹S level) computed by Accad et al. [12]. For the 1s² ¹S₀ → 1s2p ¹P₁ resonance line, the uncertainties between the present calculations and the λ_tot results [12] are less than 0.003 Å. As far as the comparison with the λ_nrel nonrelativistic values is concerned, the uncertainties are about 0.01 Å for Z = 5–9. This points out that the present SCUNC results are more accurate than the λ_nrel nonrelativistic wavelengths of Accad et al. [12] with increasing nuclear charge. For n ≥ 3 states, it can also be seen that the present SCUNC wavelengths are more accurate than the nonrelativistic values of Accad et al. [12]: the uncertainties with respect to the λ_tot total wavelengths are less than 0.005 Å for the entire series considered (Z = 2–9), whereas the uncertainties with respect to the λ_nrel nonrelativistic wavelengths increase up to 0.01 Å for Z = 9. This again points out that, in the SCUNC formalism, relativistic effects are implicitly incorporated in the f_k screening constants evaluated from experimental data. Besides, it should be mentioned that the λ_tot total wavelength of 88.3075 Å reported for the 1s² ¹S₀ → 1s3p ¹P₁ transition of Be III is probably too low, as the corresponding high-precision measurement is 88.3140 Å [7], in agreement with the present prediction of 88.3140 Å.

Conclusion The Screening Constant per Unit Nuclear Charge method has been applied to establish, for the first time, analytical spectral lines for the three most intense lines (resonance line 1s² ¹S₀ – 1s2p ¹P₁, intercombination line 1s² ¹S₀ – 1s2p ³P₁ and forbidden line 1s² ¹S₀ – 1s2s ³S₁) and for the 1s² ¹S₀ – 1snp ¹P₁ transitions in the helium isoelectronic sequence. To our knowledge, only the spectral lines of hydrogen-like ions have been determined empirically in the past.
This work demonstrates that the most intense lines of helium-like systems in the X-ray range, relevant to plasma diagnostics, can now be calculated easily. All the results obtained in the present paper compare very well with various experimental and theoretical literature data. The merit of the SCUNC formalism should be underlined: it provides accurate results via simple analytical formulas without the need for simulation codes. The accurate results obtained in this work point out the possibility of investigating highly charged He-like positive ions in the framework of the SCUNC method. Conflicts of Interest The author declares no conflicts of interest regarding the publication of this paper.
Flower–bee versus pollen–bee metanetworks in fragmented landscapes Understanding the organization of mutualistic networks at multiple spatial scales is key to ensuring biological conservation and functionality in human-modified ecosystems. Yet, how changing habitat and landscape features affect pollen–bee interaction networks is still poorly understood. Here, we analysed how bee–flower visitation and bee–pollen-transport interactions respond to habitat fragmentation at the local network and regional metanetwork scales, combining data from 29 fragments of calcareous grasslands, an endangered biodiversity hotspot in central Europe. We found that only 37% of the total unique pairwise species interactions occurred in both pollen-transport and flower visitation networks, whereas 28% and 35% were exclusive to pollen-transport and flower visitation networks, respectively. At the local level, network specialization was higher in pollen-transport networks, and was negatively related to the diversity of land cover types in both network types. At the metanetwork level, pollen transport data revealed that the proportion of single-fragment interactions increased with landscape diversity. Our results show that the specialization of calcareous grasslands' plant–pollinator networks decreases with landscape diversity, but network specialization is underestimated when based only on flower visitation information. Pollen transport data, more than flower visitation, and multi-scale analyses of metanetworks are fundamental for understanding plant–pollinator interactions in human-dominated landscapes.

Introduction Plant-pollinator interactions are the ecological foundations of animal-mediated pollination, a process on which 88% of flowering plants depend [1,2]. The structural analysis of pollination interaction networks may provide key information on network stability and robustness under environmental change [3][4][5]. Landscape-scale simplification by habitat loss and habitat fragmentation has strong negative impacts on the structure and functionality of plant-pollinator networks [3,6,7], directly through the non-random loss of interactions [8] and indirectly through changes in species richness and abundance [9,10]. Land-use consequences on ecological networks must be explored at local and regional spatial scales because of the occurrence of important nonlinear emergent properties that demand methods able to deal with multiple levels of ecological complexity [3,11,12]. Metanetworks (i.e. a group of scattered local networks connected by shared interactions) are an emerging approach to study the consequences of habitat fragmentation on ecological networks at a regional scale [10,12,13].
Plant-pollinator networks have traditionally been constructed using data on flower visitation [14]. However, for successful pollination to occur, viable pollen grains need to be transported from the anthers of a flowering plant to a receptive stigma of a conspecific [15]. Therefore, the sole flower visitation of an animal is expected to be a poor predictor of its capacity as a pollinator [16]. For instance, many flower visitors forage exclusively for nectar and do not contact flower anthers; other species lack morphological traits to carry pollen and thus cannot act as pollinators [17,18]. Two methods have been proposed to overcome this challenge. First, stigmas and styles can be analysed to identify pollen deposition after an animal visit [19,20]. However, pollen deposition analyses are extremely time-consuming and consequently prohibitive for landscape-scale studies. Alternatively, pollen load analyses of flower visitors also provide valuable information regarding an animal's capacity as a pollinator and are suitable for large-scale studies given their relative simplicity [21,22]. Pollen-transport networks have been studied at single sites and local scales [23][24][25][26], and pollen metanetworks across land-use types [27], but, to our knowledge, there is no study analysing pollen-transport networks and metanetworks over gradients of habitat size, isolation and landscape diversity.

To compare the structure of different types of networks, such as flower visitation and pollen-transport interaction networks (between each other and across environmental gradients), it is essential to quantify network specialization at the community level [28]. This can be done through qualitative and quantitative indices such as network connectance and Blüthgen's H2' [28]. The specialization of pollination networks can be higher than that of visitation networks [23,29], given that the pollen richness on the bodies of flower visitors is usually a subset of the flowers they visit [30]. However, pollen analyses can also reveal interactions established with rare and infrequently visited plant species, which could lead to higher pollen-transport network connectance [22,31] and lower network specialization [32,33]. The specialization of mutualistic networks is affected by habitat loss and isolation through species turnover [34] and through a shift towards a higher prevalence of opportunistic interactions among generalists [35].

Species traits can determine the probability that a flower visit is accompanied by pollen transport and therefore that a certain interaction gets recorded with flower visitation observations, pollen load analyses or both. For example, given that bumblebees are social, usually hairier, larger and more numerous than other bees, they are generally expected to have a larger carrying capacity for pollen transport than most solitary bees in Europe [36]. Therefore, interactions established by bumblebees should have a higher probability of occurrence in both visitation and pollen-transport networks than those established by other bees. Furthermore, habitat specialist plants, rather than habitat generalists, often have adaptations to maximize visitation and the amount of pollen transferred to pollinators per visit, such as larger floral displays (i.e. increased attractiveness [37]). Hence, habitat specialist plants should establish interactions with a higher probability of occurrence in both types of networks.
Mutualistic metanetworks in fragmented landscapes are characterized by a large majority of interactions that are unique to single fragments [11,12]. In a mutualistic metanetwork (sensu [12]), single-fragment interactions are those plant-bee interactions that are recorded in only one of the natural or seminatural habitat fragments studied. Based on flower visitation data of bees and butterflies in calcareous grasslands, Librán-Embid et al. [11] found that landscape diversity had a positive effect on the richness of single-fragment interactions and a negative effect on the proportion of single-fragment interactions. However, whether this effect is also captured with bee pollen load analyses is still unknown.

Here, we tested how habitat fragmentation affects plant-pollinator interactions at four distinct but complementary levels of biological organization, i.e. (i) flower visitation and (ii) pollen-transport interactions at the (iii) local network and (iv) regional metanetwork scales. We studied a gradient of habitat change in European calcareous grasslands, a highly threatened biodiversity hotspot characterized by a vast number of rare and endangered species [38]. Specifically, we asked (i) at the local scale, how landscape characteristics (fragment area, connectivity and landscape diversity of cover types) shape the structure (connectance and specialization) of visitation and pollen-transport networks, and (ii) at the regional scale, which functional traits (i.e. bee body size, bee group, bee habitat specialization, flower size and plant habitat specialization) are associated with the probability that a flower visit involves pollen transport, and how single-fragment interactions are affected by fragmentation and landscape diversity in both metanetworks.

We hypothesized that (i) the specialization of light-microscopy local pollen-transport networks will be higher than that of local visitation networks and their connectance lower; this is expected if the flower visitors' morphological and behavioural constraints on pollen transport prevail over the emergence of interactions established with rare and infrequently visited plant species; (ii) network specialization will decrease in larger and less isolated calcareous grasslands that are surrounded by more diverse land cover types, as the presence of more species should increase the probability of multiple interacting partners; (iii) owing to rare and ineffective interactions (i.e. interactions that contribute little to plant reproductive success (sensu [15])), a high number of interactions unique to the pollen-transport and visitation metanetworks, respectively, is expected; (iv) interaction occurrence in both network types simultaneously depends on bee and flower size, bee and plant habitat specialization and bee group [11]; and finally, (v) the richness and proportion of single-fragment interactions increase with landscape diversity and fragment area and decrease with fragment connectivity in both types of networks, given the central-place foraging behaviour of bees.
Methods (a) Study system Calcareous grasslands in central Europe mainly result from low-intensity grazing activities associated with human livestock since the Anthropocene and are therefore considered seminatural habitats [38]. In the absence of extensive grazing, bush encroachment leads to the degradation of this formerly common and widespread habitat [38]. In recent decades, agricultural intensification and land-use change have largely driven the reduction and fragmentation of this biodiverse habitat, which harbours the highest richness of vascular plants, butterflies and grasshoppers in central Europe [39][40][41][42]. Due to these characteristics, calcareous grasslands are considered core habitats and conservation priorities in Europe and are therefore legally protected in the European Union [43,44].

(b) Study area Data were collected from April until August 2018 on 29 calcareous grasslands in the surroundings of the city of Göttingen, Germany (electronic supplementary material, figure S1). These grasslands were selected in a previous study [45], from a larger regional pool (~300 fragments), to vary along independent gradients of size and isolation from other calcareous grasslands. Arable land and European beech (Fagus sylvatica) forests are the two main land-use types in the region, with 31% and 38% land cover, respectively [45].

(c) Flower visitation interactions We performed three rounds of sampling throughout the season in each calcareous grassland to capture the succession of flower visitors (hereafter, pollinators) and wildflower species. Seven fixed-point observation plots of 10 min were established at each site. We followed a protocol established by van Swaay et al. [46] to carry out our surveys. We collected data from 9.00 to 17.00 on days with a minimum temperature of 15°C and at least 50% clear sky, or with a minimum temperature of 18°C in any sky condition. To avoid any confounding effect of daytime, sites were surveyed at different times of the day.

Our observational plots were established in flower-rich areas and were circular (3 m radius, 28.3 m²). Within these, all interactions between bees (Hymenoptera: Apiformes) and flowering plants were recorded. We focused on bees because they are (jointly with lepidopterans) the most abundant pollinators in grasslands [47] and because they carry significantly larger pollen loads than butterflies [23]. A bee visit was considered an interaction once the insect contacted the plant's reproductive organs. Bees that were not easily recognizable from a distance were sweep-netted or collected for later identification by taxonomists, and the timer was stopped while handling insects. We excluded interactions involving Apis mellifera because the presence of this species in our region relates exclusively to the occurrence of beekeepers in the vicinity [11]. Apis mellifera interactions accounted for 334 of a total of 1499 interactions registered and were present at all sites (range 1-75 A. mellifera interactions per site). Bees were classified as solitary bees or bumblebees (hereafter, bee group). All bumblebees are eusocial and belong to the genus Bombus spp. Within the group of 'solitary bees', seven species presented some degree of sociality but were grouped with the solitary bees because of their morphological and genetic similarities with these: Andrena scotica (communal), Halictus confusus, Halictus rubicundus, Halictus tumulorum, Lasioglossum calceatum, Lasioglossum morio and Lasioglossum pauxillum.
(d) Pollen-transport interactions Pollen was taken from bees' bodies, head and antennae by bathing bees in Eppendorf tubes filled with distilled water, using a modified protocol from Dafni [48]. As some interactions were very abundant, we collected pollen from all bees that visited flowers in our observation plots, with an upper limit of six pollen samples from the same interaction in each site and sampling round, following Zhao et al. [25]. Pollen baskets were considered following de Manincor et al. [49], who found that there is no significant difference in the number of observed links between analyses based on pollen passively transported on the body and pollen collected in specialized structures such as the corbiculae. Samples were later acetolysed [50] using a standard laboratory protocol and analysed using light microscopy at 40× magnification (one slide represented one bee). We also created a reference collection of pollen from the flowering plants of the region to aid sample pollen identification. We did not consider slides with fewer than 30 pollen grains. On all the others, we counted 200 pollen grains per slide, except for five slides that had 50-200 pollen grains. To ensure that pollen diversity was captured, the slides were scanned systematically in consecutive horizontal lines, starting from the upper left corner of each slide and up to the count of 200 pollen grains. Following Bosch et al. [22], we considered the presence of at least 10 pollen grains in our samples as proof of true visitation to the corresponding flowering species (i.e. the threshold for pollen-bee interactions).

(e) Plant-pollinator traits Plants and pollinators were classified according to their habitat specialization, following Poschlod et al. [51] and Brückmann et al. [52] for plants and Jauker et al. [53] and Hopfenmüller et al. [54] for bees. All body length values for bees were taken from Westrich [55]. We consider Cirsium sp. (a cluster of four species mostly represented by the habitat specialist Cirsium acaule) and Ononis sp. (a cluster of two hybridizing species including the specialist Ononis repens) as habitat specialists (i.e. species mostly restricted to available calcareous grassland habitat fragments).

(f) Landscape metrics We tested the effects of fragment size, fragment connectivity and landscape diversity of cover types on the structure of local fragment networks in terms of specialization (i.e. network connectance and H2') and also on the richness and proportion of single-fragment interactions (i.e. interactions that were only recorded in a single fragment). Fragment area was calculated with ArcGIS 10.5.1 [56] and ranged from 82 to 52 557 m², excluding zones dominated by shrubs. Fragment spatial connectivity and the Shannon diversity of land cover types (as a measure of landscape diversity) were calculated using the 'landscapemetrics' package [57]. For fragments' spatial connectivity, we used a connectivity index developed by Hanski et al.
[58] and considered all calcareous grasslands in a radius of 2 km around the study grasslands (see electronic supplementary material for details). Larger values of this index indicate higher spatial connectivity (electronic supplementary material, table S1). The mapped cover types were: oilseed rape, grainfield, maize, other crops, forest open, forest closed, field margin, hedgerow, pasture, calcareous grassland, orchard, settlements, water bodies, streets, grass-roads and bare soil (see figure S1 in the electronic supplementary material). The basemap for habitat classification was provided by 'the rural development and agricultural promotion service centre' (Servicezentrum Landentwicklung und Agrarförderung). Shapefiles of land use were constructed using ArcGIS 10.5.1 and all statistics were performed in R 4.1.0 [59].

(g) Network and statistical analysis Our study involved two levels of biological organization: (i) the local scale, in which each of the 29 calcareous grasslands corresponds to a local network of both flower-visitation and pollen-transport interactions, totalling 58 networks; and (ii) the regional scale, in which we scaled up from local fragment networks to regional metanetworks of both flower-visitation and pollen-transport interactions. Below, we describe how we analysed that complexity in the light of our hypotheses.

We constructed local quantitative bipartite networks (one for each calcareous grassland fragment) and regional metanetworks using data on flower visitation (hereafter, visitation networks) and pollen loads (hereafter, pollen-transport networks), respectively. Local networks were constructed as A_ij adjacency matrices in which i are the plant species, j the pollinator species and the a_ij element represents the frequency of interactions between i and j. At the landscape level, metanetworks were built by pooling the 29 calcareous grasslands into A_kl adjacency matrices in which k are the studied sites and l either the plant visitation or the pollen-transport interactions. To make visitation and pollen-transport networks comparable, we did not consider pollen from trees (e.g. Picea spp. or Pinus spp.), crops (e.g. Vicia faba), grasses (e.g. Poaceae) or ornamental plants (e.g. Astrantia major), because observations were done exclusively on herbaceous plants of calcareous grasslands.
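As a concrete illustration of the A_ij construction described above, the short sketch below tallies raw visit records into a quantitative plant × pollinator matrix and computes the connectance used in the next section. The records and the bee name Bombus lapidarius are invented placeholders; the paper's analyses were done in R, and plain Python is used here purely for illustration.

```python
from collections import Counter

# Hypothetical interaction events, one (plant, bee) pair per recorded visit.
records = [
    ("Knautia arvensis", "Bombus lapidarius"),
    ("Knautia arvensis", "Bombus lapidarius"),
    ("Onobrychis viciifolia", "Andrena scotica"),
    ("Knautia arvensis", "Andrena scotica"),
]

counts = Counter(records)
plants = sorted({p for p, _ in counts})
bees = sorted({b for _, b in counts})

# A[i][j] = frequency of interactions between plant i and bee j.
A = [[counts.get((p, b), 0) for b in bees] for p in plants]

links = sum(1 for row in A for a in row if a > 0)
connectance = links / (len(plants) * len(bees))  # realized fraction of links
print(plants, bees, A, f"connectance = {connectance:.2f}", sep="\n")
```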
(h) Local networks To test whether pollen-transport networks were more specialized than visitation networks at the local level (hypothesis 1) and whether they were affected by habitat fragmentation and landscape diversity (hypothesis 2), we calculated network connectance, defined as the realized proportion of possible links, and the H2' index, which is based on the deviation of a species' realized number of interactions from that expected given each species' total number of interactions [28,60,61]. We used a linear mixed model with network type, (log) fragment area, (log) connectivity index and landscape diversity at 350 m as explanatory variables, and fragment identity as a random intercept. To choose the spatial scale surrounding the focal calcareous grassland habitats at which landscape diversity effects were strongest, we fitted models at all scales from 100 to 500 m in 50 m intervals and compared them using the corrected Akaike information criterion (AICc) for small samples. Because almost all indices of network structure are at least partially affected by network size, we standardized our response variables, network connectance (more affected by network size) and H2' (less affected by network size), relative to a null model to allow for meaningful comparisons among networks of different fragments [61,62]. We followed Grass et al. [3] by creating null distributions based on 1000 replicates of Patefield's algorithm. All network metrics were calculated using the 'bipartite' package [63] and model diagnostics were performed using the 'DHARMa' package [64].

(i) Metanetworks We modelled the simultaneous presence of interactions in both metanetworks (hypothesis 4, i.e. flower visitation interactions resulting in pollen transport) using a generalized linear mixed model with binomial distribution and pollinator and plant species identity as crossed random intercepts. Our explanatory variables in the full additive model were plant and pollinator habitat specialization, flower size (area), pollinator size and pollinator group (i.e. bumblebee or solitary bee). Finally, to study the effects of landscape diversity and habitat fragmentation (i.e. fragment area and connectivity) on the richness and proportion of single-fragment interactions (hypothesis 5), we used generalized linear models with negative binomial distribution and linear models (i.e. normal distribution), respectively. The explanatory variables tested in the full models were the Shannon diversity index of land cover types at 150 m (for single-fragment interaction richness) and 500 m (for the proportion of single-fragment interactions), (log) fragment area and (log) connectivity index. The minimum adequate models were found with backwards model selection using likelihood ratio tests. All non-significant explanatory variables (p > 0.05) were sequentially removed. Models were created using the 'lme4' package [65]. All network and statistical analyses were performed in R 4.1.0 [59].
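The null-model standardization described in (h) above can be sketched as follows. Random matrices with the observed row and column totals are drawn, and the observed metric is expressed relative to the null distribution. This is a sketch under two assumptions: the stub-pairing construction below samples the same fixed-margin (hypergeometric) distribution that Patefield's algorithm targets, and the z-score shown is one common standardization, which may differ in detail from the authors' exact choice.

```python
import random
from statistics import mean, stdev

def connectance(A):
    """Realized proportion of possible links in a quantitative matrix."""
    links = sum(1 for row in A for a in row if a > 0)
    return links / (len(A) * len(A[0]))

def null_matrix(row_sums, col_sums):
    """One random table with fixed margins, via shuffled row/column stubs."""
    rows = [i for i, s in enumerate(row_sums) for _ in range(s)]
    cols = [j for j, s in enumerate(col_sums) for _ in range(s)]
    random.shuffle(cols)
    A = [[0] * len(col_sums) for _ in row_sums]
    for i, j in zip(rows, cols):
        A[i][j] += 1
    return A

def standardized(A, metric=connectance, n_null=1000):
    """z-score of the observed metric against the fixed-margin null."""
    row_sums = [sum(row) for row in A]
    col_sums = [sum(col) for col in zip(*A)]
    null = [metric(null_matrix(row_sums, col_sums)) for _ in range(n_null)]
    return (metric(A) - mean(null)) / stdev(null)
```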
Results We observed 1165 flower-bee interaction events among 71 plant species and 67 bee species, resulting in 250 unique pairwise interactions. Of those, 31 (43.7%) plant species were only visited by bumblebees and 19 (26.8%) plant species were only visited by solitary bees, while 23 (32.4%) plant species were visited by both, totalling 71 plant species visited (electronic supplementary material, figure S2a, table S2). Some examples include Fragaria vesca, which was only visited by solitary bees, and Trifolium pratense, Salvia pratensis, Prunella grandiflora, Carlina vulgaris and Anthyllis vulneraria, which were only visited by bumblebees (electronic supplementary material, table S2). Furthermore, we analysed pollen samples of 830 bee individuals and found 474 individuals carrying 0-30 pollen grains, 5 carrying 50-200 pollen grains and 351 carrying ≥200 pollen grains. We identified 44 bee species transporting pollen from 64 plant species, resulting in 222 unique pollen-bee pairwise interactions from a total of 626 pollen interaction events. Pollen of 20 (31.3%) plant species was only transported by bumblebees and pollen of 12 (18.8%) plant species was exclusively transported by solitary bees (electronic supplementary material, figure S2b, table S3), while pollen of 32 (50%) plant species was transported by both groups. For example, pollen of Knautia arvensis was only transported by bumblebees and pollen of Potentilla sp. was only transported by solitary bees (electronic supplementary material, table S3).

At the local network level, our results show that pollen-transport networks were significantly more specialized than visitation networks (F1,27 = 11.33, p = 0.002, figure 1). We also found a negative effect of landscape diversity at the 350 m scale on the H2' specialization of both visitation and pollen-transport networks (F1,26 = 13.56, p = 0.001, figure 1). On the other hand, connectance did not differ between the visitation and pollen-transport networks (F1,27 = 1.03, p = 0.32) and was also not affected by landscape diversity (F1,26 = 1.97, p = 0.17). Fragment area and fragment connectivity had no significant effect on either H2' network specialization or connectance (electronic supplementary material, table S4). At the regional level, we found a total of 345 unique combinations of plant-pollinator interactions considering both visitation and pollen-transport networks, of which 127 (36.8%) were found in both types (figure 2, electronic supplementary material, table S5). Of a total of 222 unique pairwise interactions detected in the pollen-transport metanetwork, 95 (42.8%) were exclusive to it (i.e. they were not registered in the visitation metanetwork; electronic supplementary material, table S6), and 123 out of 250 (49.2%) were recorded only in the visitation metanetwork (electronic supplementary material, table S7). Furthermore, we identified important differences in the number of interactions established by some plant species in the two metanetworks (electronic supplementary material, tables S8 and S9). The most outstanding case was K. arvensis, which was visited by 19 different bees but only four of them (all bumblebees) transported its pollen.
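The overlap figures reported above reduce to simple set operations on the unique pairwise interactions of the two metanetworks. A minimal sketch, with tiny hypothetical interaction sets standing in for the 250 visitation and 222 pollen-transport pairs:

```python
# Hypothetical unique pairwise (plant, bee) interactions per metanetwork.
visitation = {("P1", "B1"), ("P1", "B2"), ("P2", "B1"), ("P3", "B3")}
pollen = {("P1", "B1"), ("P2", "B1"), ("P4", "B2")}

total = visitation | pollen
for label, s in [("shared", visitation & pollen),
                 ("visitation-only", visitation - pollen),
                 ("pollen-only", pollen - visitation)]:
    print(f"{label}: {len(s)} ({100 * len(s) / len(total):.1f}% of {len(total)})")
```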
Additionally, we found that the presence of an interaction in both metanetworks (i.e. visitation and pollen transport) was affected by plant habitat specialization and pollinator group. Specifically, interactions involving habitat specialist plants (χ² = 6.47, d.f. = 1, p = 0.011) and bumblebees (χ² = 17.24, d.f. = 1, p = 0.0071) had a higher occurrence in both metanetworks than those involving habitat generalist plants and solitary bees (electronic supplementary material, figure S3). Flower size, bee size and bee habitat specialization had no significant effects on the presence of an interaction in both metanetworks (electronic supplementary material, table S10).

In the visitation metanetwork, we found 76 (30.4%) unique pairwise interactions that occurred in at least two calcareous grassland fragments, but these accounted for 913 (78.4%) interaction events. In the pollen-transport metanetwork (figure 3), we found 70 (31.5%) unique pairwise interactions occurring in at least two fragments but summing up to 452 (72.2%) interaction events. Finally, we found a significant positive effect of landscape diversity on the number of single-fragment interactions for both network types (electronic supplementary material, figure S4). However, the spatial scale at which this effect was strongest differed between the visitation and pollen-transport networks. Specifically, the number of single-fragment interactions increased with landscape diversity at the 150 m scale for the visitation data (χ² = 4.59, d.f. = 1, p = 0.032, electronic supplementary material, figure S4a) and at the 500 m scale for the pollen transport data (χ² = 5.96, d.f. = 1, p = 0.015, electronic supplementary material, figure S4b). Moreover, landscape diversity at the 500 m scale significantly increased the proportion of single-fragment interactions (F1,27 = 5.26, p = 0.030), but this effect was only found for the pollen-transport networks (electronic supplementary material, figure S5). Fragment area and fragment connectivity had no significant effect on either the number or the proportion of single-fragment interactions (electronic supplementary material, table S11).

Discussion In this study, we analysed multiple levels of ecological complexity of plant-pollinator networks constructed from bee-flower visitation and bee-pollen-transport interactions across a gradient of habitat fragmentation of a biodiversity hotspot. Of all interactions found, 63.2% were exclusive to either the visitation or the pollen-transport networks, highlighting both the numerous low-frequency interactions that are not captured by observations of flower visits (27.5%) and the high number of interactions (35.7%) that do not translate into pollen transport. Pollen-transport networks were more specialized than visitation networks, and a higher diversity of land cover types in the surroundings of a habitat fragment decreased network specialization.
(a) Network type effects on network specialization Ecological and methodological aspects can affect the relationship between the specialization of pollen-transport and visitation networks. Regarding ecology, an important aspect is whether studies consider all flower-visiting taxa [31][32][33][68] or rather focus on bees [29,49]. Different from other flower-visiting arthropods, female bees have specialized structures for pollen transport (i.e. scopae) and actively search for pollen to feed their larvae. As bees carry more pollen than other flower visitors [25,69], the chance of detecting rare interactions with pollen analyses is higher for bees. These bee traits would favour a higher presence of transport-only interactions and a lower presence of visitation-only interactions compared with other pollinator taxa. On the other hand, as central-place foragers, bees are bound to the areas surrounding their nest location, and they therefore can have a lower probability of carrying pollen from plants not belonging to the local habitat community compared with other flower visitors, such as butterflies, which can move more freely through the landscape. This implies that the pollen that bees carry has a higher chance of being a subset of the flowers observed at a focal study site than is the case in other taxa. This difference is expected to be especially important in bee communities with many small- and medium-sized solitary bees, but even larger bees like Apis spp. and Bombus spp. are known to forage mostly in proximity to their nests when enough flowering resources are available, which is usually the case in calcareous grasslands [70][71][72]. In line with our findings, previous bee studies using microscopy have found higher specialization of pollen-transport networks compared with visitation networks [29], or no significant difference between them [49], suggesting a prevalence of the latter phenomenon compared with the former. Regarding methodology, several aspects can affect specialization indices. First, if plant species recorded through pollen but missing in the study sites are not excluded from the analyses, there would be an overrepresentation of interactions in the pollen-transport networks and an underrepresentation of detected interactions in the visitation networks. This would increase the probability of finding pollen-transport networks to be more generalized than visitation ones. In addition, the lower resolution of plant taxa identification by light microscopy or metabarcoding, compared with field observations of flower visits, needs to be taken into account. This can be done, for example, by lowering the resolution of the plant visitation data from species to genus level for those species that cannot be discriminated based on pollen morphology or DNA analyses.
The location on the bee's body from which pollen is removed to construct pollen-transport networks can also affect the results. Previous studies disagree on removing pollen located in the corbiculae of the family Apidae [29,31,33,68], pollen located in the scopae [23,25], or considering all carried pollen [32,49]. The exclusion of pollen located in specialized bee structures can considerably reduce the number of detected interactions. Further, most studies based on microscopy report network specialization to be higher in pollen-transport networks compared with visitation networks [23,25,29,68], while studies based on metabarcoding report the opposite [32,33,73]. A possible reason for this effect is that metabarcoding results can vary depending on the plant reference database used, eventually inflating false detections [73]. Light microscopy also has limitations, including the expertise of the observer and the possibility of lower taxonomic resolution than metabarcoding [74]. Lastly, the use (or not) of a threshold to distinguish pollen contamination from legitimate pollen transport [25,33,49] also affects the number of interactions found and can therefore impact specialization estimates. The methodological approach used in our study was designed to minimize pitfalls related to spurious interaction detection and to maximize the reliability of the results.

A higher specialization of pollen-transport networks in calcareous grasslands indicates that these pollination networks might be more vulnerable to collapse following disturbance, because increased specialization can make pollination networks less robust and more prone to co-extinction cascades [75]. The vast majority of plant-pollinator network studies are based on visitation data, and conclusions regarding biodiversity conservation are derived mostly from them. In light of our results and previous studies [23,25,29,68], we call attention to the risk of an overestimation of plant-pollinator networks' stability and robustness in past studies based solely on flower visitation data. Still, whether the higher specialization of pollen-transport relative to visitation networks is the rule across different habitats and geographical locations, and how much it is affected by methodological artefacts, remains to be investigated.

(b) Landscape diversity effects on network specialization Plant-pollinator network specialization is affected by species richness and species behaviour [35,75,76]. Habitat fragmentation and landscape simplification may decrease the availability of interacting partners as a consequence of reduced population sizes or local extinctions [77]. The absence of interacting partners can have opposite effects on species specialization. On the one hand, pollinators may visit more plant species to compensate for missing resources, therefore increasing their generalization [78]. However, in the case of low behavioural plasticity or high plant fidelity, specialization could increase after disturbance (i.e. due to the loss of a plant partner), as pollinators would be unable to establish new interactions. For plants, losing a pollinator may directly increase plant specialization by reducing the number of interacting partners. Nonetheless, reduced competition for resources among pollinators could facilitate visitation by opportunistic (and usually less effective) pollinators, therefore increasing plant generalization [78].
In protected natural and seminatural habitats, mutualistic networks present higher nestedness and specialization than expected by chance, reducing interspecific competition through niche partitioning and allowing coexistence [8,79,80]. Habitat fragmentation and extreme climatic events reduce mutualistic network specialization through the loss of specialized and rare interactions [8,76], thereby increasing the relative proportion of interactions involving generalist species [6]. Jauker et al. [35] also reported this pattern when analysing plant-pollinator networks in calcareous grasslands. They found that habitat loss decreased network specialization through the loss of species and interactions, resulting in small and tightly connected networks. Given the pronounced fragmentation of European calcareous grasslands in the past century [38,81], it is expected that the remaining, commonly small and isolated fragments have already lost many specialized and rare interactions, as shown for butterflies on the same calcareous grasslands [82].

Habitat generalist bees can use floral and nesting resources of several land cover types and may establish opportunistic interactions with multiple (habitat generalist and specialist) plant partners. In our study, landscape diversification stands for a high variety of land cover types in the matrix, dominated by oilseed rape, grainfields, maize, pastures, grasslands, field margin strips and forest stands. Any increase in landscape diversification in the surroundings of a calcareous grassland habitat fragment (e.g. matrix diversification) may increase the influx (i.e. spillover) of habitat generalist species establishing new generalized interactions, further decreasing grassland network specialization. Hence, specialization would be reduced by increasing the number of potential interaction partners available, closing the gap between the potential and the realized number of interacting partners of each species.

Accordingly, habitat loss and landscape simplification may have opposite effects on the specialization of European calcareous grassland networks by affecting groups of species with different traits. Habitat loss may cause a selective reduction in the richness or abundance of species with low behavioural plasticity and high plant fidelity (i.e. specialists), resulting in small and homogeneous networks with an overrepresentation of generalist species. In contrast, landscape simplification may result in a reduction of species with high behavioural plasticity and low plant fidelity (i.e. generalists), resulting in fragile and specialized networks. The most plausible mechanism that could explain the patterns found in this study is that pollinators did not compensate for missing plant partners and that plants did not get extra visits once a pollinator was lost following landscape homogenization, resulting in more specialized interaction networks.
(c) Visitation and pollen-transport unique interactions The large proportion of flower visitor species transporting little or no pollen, due to morphological or behavioural constraints, is an intrinsic characteristic of pollination systems [23,25,29,49]. However, it is important to note that this proportion is usually reported to be larger in light microscopy studies than in metabarcoding studies [74]. The selection of a threshold of pollen grains carried by a bee as proof of legitimate pollen transport (as opposed to accidental transport due to contamination) is fundamental to avoid false positives; however, it is also non-trivial [22]. Popic et al. [29] used a 10-grain threshold and reported only 38% of visitation links resulting in pollen transport (51% in our study). de Manincor et al. [49], also studying bee interactions in calcareous grasslands of Europe (threshold of five pollen grains), reported a doubling of the total number of interactions found (40% increase in our study) when looking at pollen transport (i.e. pollen-transport-only interactions). Our approach was conservative; the choice of a lower threshold would most likely have augmented the differences found. Despite the more conservative threshold used in our study, the structural differences found between bee visitation and pollen-transport networks are in accordance with the literature and challenge the assumption that visitation data are sufficient surrogates of animal-mediated pollen transport [29,49].

The large presence of flower visitors with a relatively small capacity for pollen transport raises many questions regarding their importance for pollination [83]. In theory, the deposition of a single conspecific pollen grain could be enough for pollination to occur, but pollen deposition thresholds are common given that not all pollen deposited by pollinators is viable [84]. Therefore, a relatively high amount of conspecific pollen deposition is usually needed for meaningful pollination success [84]. The concomitant deposition of heterospecific pollen is also an important factor, considering its negative effects on pollination [85,86]. Actually, from a plant species' perspective, a strategy based on maximizing pollinator visits might come at the cost of high heterospecific pollen deposition on stigmas. Contrastingly, a strategy based on the attraction of a small number of specialized pollinators (and therefore a larger potential for conspecific pollen deposition) comes at the cost of a higher dependence on a small group of pollinators, with a higher risk of local extinction and a lower probability of visitation. Habitat fragmentation and landscape homogenization may impose a reduced set of pollinator partners to interact with. Consequently, higher plant specialization could arise as an indirect result of the lack of alternative partners and not as part of an ecological strategy to increase reproductive success.
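Because the grain-count threshold discussed above directly filters which pollen-bee links are accepted, its effect on the number of detected interactions can be checked mechanically. A small sketch with hypothetical grain counts (the paper itself used a 10-grain threshold):

```python
# Hypothetical pollen loads: (bee, plant) -> grains counted on the slide.
grains = {("B1", "P1"): 120, ("B1", "P2"): 7, ("B2", "P1"): 12,
          ("B2", "P3"): 3, ("B3", "P2"): 45}

for threshold in (1, 5, 10):
    links = [pair for pair, n in grains.items() if n >= threshold]
    print(f"threshold {threshold:>2}: {len(links)} accepted pollen-transport links")
```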
The pollen-transport networks revealed a high number of rare interactions. This implies that plant-pollinator networks based only on flower visitation data are not just biased by the inclusion of interactions with no potential for pollination, but also by missing many rare interactions. Consequently, pollen load analysis represents an important complementary approach to studying pollination systems, because the actual pollen dispersal across the plant community can be quantified. Visitation data, on the other hand, appear fundamental for understanding plant-pollinator interactions from the pollinator perspective, as competition among pollinators and the different foraging strategies that pollinators use to maximize their fitness can be analysed. For example, bees visit some flowers exclusively to collect nectar and may not come into contact with flower anthers. These interactions may not be important from the plant perspective, because pollen transport may not occur, but they are still fundamental for bees as they influence their diet. The detection of interactions involving rare habitat specialist plants, such as Scabiosa columbaria [87,88], indicates that pollen load analyses can contribute to improving conservation strategies by identifying remaining small populations of these rare species. For example, restoration efforts targeting these small populations could be undertaken in places where the plants were thought to be locally extinct.

(d) Plant habitat specialization and pollinator group Bumblebees in central Europe have a high capacity for pollen transport given that they are abundant, larger than most solitary bees and have dense hair [36,89,90]. Our results support this by demonstrating that the probability of a bumblebee carrying pollen after a flower visit is higher than that of solitary bees. However, studies on pollen transfer (i.e. pollen deposition on a conspecific stigma) after flower visits would be necessary to test whether bumblebees are also able to deposit more pollen on stigmas than solitary bees, since pollen transport does not always translate into pollen deposition [19]. For example, the pollen located in specialized structures, such as the corbiculae, is generally assumed not to be available for pollination [91]. This may not make a difference in pollen transport analyses [49], but it could be important for pollen transfer to stigmas. Although bumblebees are hairier than other bee groups, which is associated with a higher pollination effectiveness [36], plants' pollination also depends on pollinator behaviour, flower morphology and the ratio between conspecific and heterospecific pollen deposition [85,92]. Therefore, solitary bees may be irreplaceable for the reproductive success of some plant species.

In fact, solitary bees were found to be fundamental for the pollen transport of many plant species. In particular, they transported pollen from 16 habitat specialist plants and were the only pollen vector for three of them. Considering that many solitary bee species are vulnerable and threatened with extinction [53,93], these results signal the importance of their role in calcareous grasslands and the potential risk that their absence poses for habitat specialist plants' reproductive success. Our findings reveal that bumblebees and solitary bees are complementary for the pollen transport of calcareous grassland plant species.
We found a smaller representation of habitat generalist than habitat specialist plants in the pollen-transport networks of calcareous grasslands. Hence, flower visits to habitat specialist plants have a higher probability of translating into pollen transport than visits to habitat generalist plants. The habitat generalist K. arvensis, for example, was visited by many species of both bee guilds (bumblebees and solitary bees), being involved in a total of 105 interaction events. However, we found only 10 interactions with K. arvensis in the pollen-transport networks, involving only four bumblebee species and no solitary bees. In contrast, pollen from the habitat specialist Onobrychis viciifolia was transported by all of its seven visitor species from both bee guilds.

A higher representation of habitat specialist plants in pollen-transport networks cannot be solely related to a higher attractiveness of habitat specialist flowers or pollen, as interactions involving habitat specialist plants in the pollen-transport dataset were less than half of the total interactions found (46.8%). This result is rather a consequence of different mechanisms that allow habitat specialist plants to place their pollen on flower visitors more frequently than habitat generalists do. Habitat specialist plants are expected to have a long history of evolutionary adaptation to the local pollinator pool and, therefore, to have developed mechanisms for efficient pollen transport by those pollinators [37]. Conversely, generalist plants should lack such adaptations, as they would exhibit more opportunistic strategies to quickly adapt to different environments. The adaptations of plants to increase pollination success can occur at many levels, including pollen vector attraction, pollen presentation, pollen transport and pollen germination [94]. At the visitation level, traits such as flower size, flower abundance and the quantity and quality of offered floral rewards (i.e. pollen and nectar) may increase visitation rates [95]. At the pollen transport level, plants may possess mechanisms to place larger amounts of pollen at specific places on the flower visitors' bodies [94]. At the pollen transfer level, plant traits such as the stigma type (i.e. wet or dry), pollen morphological traits or behavioural characteristics of pollinators may affect the quantity and quality of pollen deposition [19,94,96]. Even after pollen deposition on stigmas, plants may exhibit mechanisms to regulate receptiveness depending on the characteristics of the flower visitor [97].
(e) Landscape diversity effect on single-fragment interactions The spatial scale at which landscape diversity most strongly affected the number of single-fragment interactions was larger for the pollen-transport networks than for the visitation networks. This suggests that landscape-scale conservation measures to protect plant-pollinator networks might be undertaken at the wrong spatial scales when based solely on flower visitation data. The increased number of single-fragment interactions with landscape diversity is not solely related to a general increase in the total number of interactions with landscape diversification; rather, landscape diversification has a disproportionately positive effect on the occurrence of single-fragment interactions compared with the total number of interactions. Importantly, this effect was only captured with the pollen transport data, highlighting that landscape structure effects can remain undetected in plant-pollinator studies based solely on visitation data. In contrast to a previous study in the area [11], the effect of landscape diversity on the proportion of single-fragment interactions could be explained by the central-place foraging behaviour of bees. Differently from mobile butterflies, the central-place foraging of bees may mean that the positive effects of landscape diversification at small spatial scales do not spread through the metanetwork but rather increase the local diversity of plant-pollinator interactions in single fragments.

Conclusion By analysing plant-pollinator networks across a gradient of habitat fragmentation, we found that pollen-transport networks were more specialized than visitation networks, indicating that plant-pollinator networks could be more vulnerable than previously believed. Only 36.8% of the total number of registered plant-pollinator interactions occurred in both the flower visitation and pollen-transport networks. Our landscape analysis of a pollen-transport metanetwork also revealed that the properties of pollination networks are affected by landscape diversity at scales that differ from those informed by visitation networks, which may increase the accuracy and effectiveness of landscape-level measures for the conservation of plant-pollinator networks. Interactions involving habitat specialist plants and bumblebees had a higher probability of simultaneously occurring in the visitation and pollen-transport networks than interactions involving habitat generalist plants and solitary bees. Nonetheless, the pollen of several plant species was found to be transported only by solitary bees. Our study shows that the conservation of pollination systems and related pollination services needs finer data on the biological processes underlying plant-pollinator interaction networks, such as pollen load analyses. Our metanetwork approach allowed us to identify the rarity and local uniqueness of plant-pollinator interactions, which can be further used by local authorities to design tailored conservation strategies. Our results have important consequences for the understanding of the responses of plant-pollinator networks to habitat fragmentation and contribute to unveiling important processes underpinning the dynamics of these networks.

Figure 1. Relationship between standardized network specialization (H2'), network type and landscape diversity (i.e. Shannon diversity of land cover types). Each network type includes 28 local networks (fragments) in each dataset (pollen transport and visitation). Bands represent 95% confidence intervals of the model fit.
Figure 2. Bipartite diagram representation of the overlap between the visitation and pollen-transport metanetworks. The rectangles in the upper part represent the total interactions detected by each method (1165 and 626 total interactions in the visitation and pollen-transport metanetworks, respectively). The rectangles at the bottom represent the unique flower-bee (sky blue) and pollen-bee (pink) pairwise interactions. The size of the squares and the thickness of the bars are proportional to the frequency of each unique pairwise interaction. Interactions exclusive to the pollen transport dataset occur to the right (in pink) and those exclusive to the visitation dataset are shown to the left (in sky blue). Unique pairwise interactions occurring in both datasets are in the middle, highlighted in violet. The percentage of unique pairwise interactions occurring in each dataset is also indicated.

Figure 3. Pollen-transport metanetwork structure among calcareous grassland fragments and unique pairwise pollen-bee interactions (n = 29 and n = 222, respectively). Circles indicate pairwise pollen-bee interactions and squares represent grassland fragments. Interactions occurring in at least two fragments form links between sites. The thickness of links (grey lines) is proportional to interaction abundance. Colours represent metanetwork modules based on the Walktrap community-finding algorithm (igraph package) [66]. This algorithm indicates the presence of subgraphs that constitute distinctive communities. Nodes with greater centrality occur in the central positions of the graph [67].
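The module structure of figure 3 can be reproduced in outline with the igraph library, which also has Python bindings. A sketch, assuming edges weighted by interaction abundance; the fragment and interaction labels below are invented placeholders:

```python
import igraph as ig  # python-igraph

# Hypothetical fragment-interaction edges weighted by interaction abundance.
edges = [
    ("fragment_1", "Knautia-Bombus", 5),
    ("fragment_2", "Knautia-Bombus", 2),
    ("fragment_2", "Salvia-Bombus", 4),
    ("fragment_3", "Salvia-Bombus", 1),
]

# TupleList stores the third element as the 'weight' edge attribute.
g = ig.Graph.TupleList(edges, weights=True)
modules = g.community_walktrap(weights="weight").as_clustering()
for module_id, members in enumerate(modules):
    print(module_id, [g.vs[v]["name"] for v in members])
```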
TrakEM2 Software for Neural Circuit Reconstruction A key challenge in neuroscience is the expeditious reconstruction of neuronal circuits. For model systems such as Drosophila and C. elegans, the limiting step is no longer the acquisition of imagery but the extraction of the circuit from images. For this purpose, we designed a software application, TrakEM2, that addresses the systematic reconstruction of neuronal circuits from large electron microscopical and optical image volumes. We address the challenges of image volume composition from individual, deformed images; of the reconstruction of neuronal arbors and annotation of synapses with fast manual and semi-automatic methods; and of the management of large collections of both images and annotations. The output is a neural circuit of 3d arbors and synapses, encoded in NeuroML and other formats, ready for analysis.

Introduction There is a growing consensus that detailed volumetric reconstructions of thousands of neurons in millimeter-scale blocks of tissue are necessary for understanding neuronal circuits [1,2]. Modern electron microscopes (EM) with automatic image acquisition are able to deliver very large collections of image tiles [3][4][5][6][7][8]. Unfortunately, the problems of acquiring the data have so far been easier to solve than those of interpreting it [9,10]. Increasingly, neuroscience laboratories require automated tools for managing these vast EM data sets using affordable consumer desktop computers. Here, we present such a tool. It is an open source software package, named TrakEM2, that is optimised for neural circuit reconstruction from tera-scale serial-section EM image data sets. The software handles all the required steps: rapid entry, organization, and navigation through tera-scale EM image collections. Semi-automatic and automatic image registration is easily performed within and across sections. Efficient tools enable manipulating, visualizing, reconstructing, annotating, and measuring neuronal components embedded in the data. An ontology-controlled tree structure is used to assemble hierarchical groupings of reconstructed components in terms of biologically meaningful entities such as neurons, synapses, tracts and tissues. TrakEM2 allows millions of reconstructed entities to be manipulated in nested groups that encapsulate the desired abstract level of analysis, such as ''neuron'', ''compartment'' or ''neuronal lineage''. The end products are 3D morphological reconstructions, measurements, and neural circuits specified in NeuroML [11] and other formats for functional analysis elsewhere. TrakEM2 has been used successfully for the reconstruction of targeted EM microvolumes of the Drosophila larval central nervous system [7], for array tomography [12], for the reconstruction and automatic recognition of neural lineages in LSM stacks [13], for the reconstruction of thalamo-cortical connections in the cat visual cortex [14] and for the reconstruction of the inhibitory network relating orientation-selective interneurons in a 10-terabyte EM image data set of the mouse visual cortex [8], amongst others.

From Raw Collections of 2d Images to Browsable Recomposed Sample Volumes An EM volume large enough to encapsulate significant fractions of neuronal tissue, and with a resolution high enough to discern synapses, presents numerous challenges for visualization, processing and annotation.
The data generally consists of collections of 2d image tiles acquired from serial tissue sections (Figure 1; [7,8]) or from the trimmed block face (block-face serial EM or SBEM, [3,15]; focused ion beam scanning EM or FIBSEM, [6]) that are collectively far larger than the Random Access Memory (RAM) of common lab computers and must be loaded and unloaded on demand from file storage systems. Additional experiments on the same data sample may have generated light-microscopical image volumes that must then be overlaid on the EM images, such as in array tomography [12,16] or correlative calcium imaging [8,15]. TrakEM2 makes browsing and annotating mixed, overlaid types of images (Figure S1) over terabyte-sized volumes fast (Text S1, section ''Browsing large serial EM image sets'') while enabling the independent manipulation of every single image, both from a point-and-click graphical user interface (GUI; Figure 1e, S2, S3, S4) and by automatic means (Text S1, section ''Image adjustment''). The images acquired with the EM microscope represent views of tissue that has been deformed by the sectioning process, by the heat of the electron beam, by charging effects, and by the magnetic lenses. For serial sections, part of the section may be hidden away by a section fold or support-film fold (Figure S5), and counterstaining with heavy metals further increases the difficulty of the task by occluding parts of the section with accidental precipitates (Figure S5). All images require illumination adjustments (Figure S5, S6). TrakEM2 recovers the original sample present in the resin block from the images with a robust automatic multi-step image registration approach. First, images are corrected for distortions induced by the EM magnetic lenses [17]. Then, image tiles belonging to individual sections are montaged, combining a linear alignment established from invariant image features (SIFT; [18]) and an elastic alignment that compensates for the remaining non-linear distortion [19]. Similarly, the section series are aligned by first using invariant features to estimate a linear transformation, followed by elastic alignment to compensate for non-linear distortion. As an alternative to an immediate elastic alignment of the series of montages, feature correspondences can be used to estimate each image tile's globally optimal pose with respect to overlapping tiles within the same section and in adjacent sections [20]. This method enables the reconstruction of section series from section montages that cover only a few regions of interest, disconnected in the section plane but related across sections (e.g. sparse images of different branches of a neuron).
Figure 1. From a resin block to serial 2d image montages. A Serial EM is performed on a block of tissue embedded in hardened plastic resin. B Sections are imaged with multiple overlapping image tiles. C The imprecision in the positioning of the camera and the numerous non-linear deformations demand an automatic multi-section image registration procedure that computes the best possible transformation for each tile without introducing gross deformations. D TrakEM2 operates only on original images, which are treated as read-only. A preprocessor script, specified individually for every image, alters the image after loading from disk and before the rest of TrakEM2 has access to it, enabling changes of scale, of lookup table, of data type, and any pixel-level operation. A Patch object encapsulates the image file path and a set of properties such as the alpha mask, the coordinate transforms (linear and non-linear image transformations) and the desired image display range and composite mode, among others. The precomputed mipmaps store most of the Patch information in compressed 8-bit files ready for display. The image for the field of view is constructed by composing multiple Patch instances according to their location and composite rules (overlay, subtract, add, multiply, difference and Colorize YCbCr), and is then filtered, if desired, for dynamic interactive image enhancement. E The TrakEM2 Display presents the field of view showing a single section and the images, segmentations and annotations present in that section. The Display provides access to tools for manipulating and analyzing all imported images and reconstructed elements. doi:10.1371/journal.pone.0038011.g001
The methods implemented for montaging, global tile pose estimation and elastic alignment calculate global alignments for groups of images while explicitly minimizing the local deformation applied to each single image. Only under that constraint can very large montages or series of montages be aligned without accumulating artificial deformation [19]. In combination, TrakEM2's alignment and deformation correction tools, both manual and automatic, allow high quality volume reconstruction from very large section series. Complex imaging arrangements are supported, including low-resolution images of large fields of view that were then complemented with high-resolution images for areas of interest, or different tilts of the same section. Tens of thousands of images are registered with an off-the-shelf computer in a few days. Both linear and non-linear transformations are expressed with a system that brings pixels from the original image space to the transformed space in one single computational step, concatenating all transformations and expressing the final transformation in the precomputed mipmap images (Figure 1d; Text S1, section ''Browsing large serial EM image sets''). Additionally, the TrakEM2 GUI enables direct point-and-click manipulation of the transformation of any image in the volume, before or after the automatic registration, without significant cost in data storage (relative to the dimensions of the image) or image quality (Text S1, sections ''Assembling the volume with automatic registration of image tiles'' and ''Manually correcting automatic image registration with affine and non-linear transformations''; Figure S2, S3). Reconstructing a Neuronal Circuit from an Image Volume The second step in neuronal circuit reconstruction consists of identifying and labeling the neurons and synapses in the image volume. The current gold standard is computer-assisted manual labeling, either by brushing 2d areas ([7,21]; not practical for large volumes) or by marking skeletons [8,15,22]. Automated methods for neuronal reconstruction are currently the focus of intensive research in Computer Vision (for review see [9]). TrakEM2 offers manual and semi-automatic methods for image segmentation (Figure S7) and for sketching structures with spheres and tubes (Text S1, section ''Stick-and-ball models''; Figure S8), and interfaces with automatic image segmentation programs (Text S1, section ''Image segmentation for 3d object reconstruction'').
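The registration scheme just described (invariant features for a linear fit, elastic refinement, and concatenation of all transforms into a single step) can be illustrated outside TrakEM2. The following minimal Python sketch, assuming OpenCV and two hypothetical overlapping tile files, estimates an affine fit from SIFT correspondences and concatenates it with a toy lens-correction transform; it is an illustration under stated assumptions, not TrakEM2's own implementation (which relies on Java libraries and adds elastic alignment).

# Sketch: SIFT-based pairwise tile alignment and transform concatenation.
# Assumes OpenCV >= 4.4 and two hypothetical overlapping tiles on disk.
import cv2
import numpy as np

tile_a = cv2.imread("tile_a.tif", cv2.IMREAD_GRAYSCALE)
tile_b = cv2.imread("tile_b.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(tile_a, None)
kp_b, des_b = sift.detectAndCompute(tile_b, None)

# Ratio-test filtering of candidate feature correspondences.
matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
src = np.float32([kp_a[m.queryIdx].pt for m in good])
dst = np.float32([kp_b[m.trainIdx].pt for m in good])

# Robust linear (affine) fit; an elastic refinement would follow in a full pipeline.
affine, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

# Concatenate a toy lens-correction transform with the montage affine, so each
# pixel travels from raw image space to montage space in one computational step.
lens = np.eye(3)
lens[0, 0] = 1.001                                  # hypothetical lens model
combined = np.vstack([affine, [0.0, 0.0, 1.0]]) @ lens
warped = cv2.warpAffine(tile_a, combined[:2], (tile_b.shape[1], tile_b.shape[0]))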
Manual skeletonization of a neuronal arbor requires continuous recognition operations that are not always performed with full confidence, given ambiguity in the image data. In our experience, an all-or-nothing approach (edge or no edge; that is, to connect two parts of a neuronal arbor or not) does not sufficiently express all the information available to the human operator. Therefore, TrakEM2's skeleton data types are composed of nodes and directional edges that express parent/child relationships between nodes, with a confidence value that captures the degree of certainty in the continuity of the skeleton at that edge (Figure 2). Edge confidence values are particularly useful to restrict subsequent circuit analysis to the most trustworthy subsets of the skeletons. Additionally, each node holds a list of text annotations (''tags'') to highlight structures of interest or to label nodes as places to branch out later (e.g. with a TODO tag), and also a radius value (treeline skeleton subtype) or a 2d area (areatree skeleton subtype) to render 3d skeletons as stick models or volumes, respectively (Figure 2; Text S1, section ''Image segmentation for 3d object reconstruction''). To correct mistakes, skeletons can be cut or joined at any node. Node edges accept any color (e.g. to label a branch), or follow a color code that expresses betweenness centrality (computed as in [23]) relative to other nodes, branches or synapses. Given the unreliability of human-based skeletonization (tracing) of neurons [22], TrakEM2 facilitates the revision of skeleton nodes. An interactive GUI table lists all skeleton nodes and sorts them by location, edge confidence or tags, allowing quick targeted review of interesting or problematic parts of the skeleton (Figure 2). To systematically review complete neuronal arbors, TrakEM2 generates sequences of images centered at each node (fly-throughs) for each skeleton branch (Figure 2) that exploit the human ability to detect small changes in optic flow: misassignments across sections are readily identified as sudden shifts in the field of view. This review method also aids in locating unlabeled synapses and untraced branches. TrakEM2 expresses synapses with connector elements that relate areas or skeleton nodes with other areas or nodes. Each connector consists of an origin and a number of targets, each assigned a confidence value, to express anything from monadic to dyadic and polyadic synapses (Figure 2h). To aid the systematic reconstruction of all upstream and downstream neuron partners of a specific neuron, TrakEM2 presents an interactive table that lists all the incoming and outgoing connectors of a skeleton, and the partners they connect to. Incomplete synaptic partners are then visited one at a time and reconstructed. All tables are dynamically updated as nodes and connectors are added to or removed from the skeletons. The resulting neuronal circuit is then exported in various formats, including NeuroML [11]. Structuring Reconstructions Hierarchically with Semantically Meaningful Groups The reconstruction of one or a few neuronal arbors is very different from the reconstruction of a complete neuronal processing module. The main difference is the scale: the latter is generally composed of dozens to thousands of neuronal arbors. While a human operator tracks the identities of a small collection of elements with ease, the task becomes very time consuming and error prone for large collections of neurons. In our experience the cut-off is at about 50 elements.
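As a rough data-model sketch of the node/edge-confidence/connector scheme described above (class names here are hypothetical stand-ins, not TrakEM2's actual types), one might write:

# Sketch: skeleton nodes with confidence-weighted parent edges, tags, and
# connectors; mirrors the treeline/connector concepts, not the real API.
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

@dataclass
class Node:
    node_id: int
    parent: Optional["Node"] = None
    edge_confidence: int = 5          # certainty of the edge to the parent (0-5)
    tags: List[str] = field(default_factory=list)
    radius: float = 0.0               # treeline-style node radius

@dataclass
class Connector:
    origin: Node                      # e.g. a presynaptic node
    targets: List[Tuple[Node, int]]   # (target node, confidence) pairs

def trusted_nodes(nodes, min_conf=4):
    """Keep nodes whose entire path to the root uses edges >= min_conf."""
    kept = set()
    for n in nodes:
        cursor, ok = n, True
        while cursor.parent is not None:
            if cursor.edge_confidence < min_conf:
                ok = False
                break
            cursor = cursor.parent
        if ok:
            kept.add(n.node_id)
    return kept

# Toy arbor: root -> a (confident edge) -> b (dubious edge).
root = Node(1)
a = Node(2, parent=root, edge_confidence=5, tags=["TODO"])
b = Node(3, parent=a, edge_confidence=2)
synapse = Connector(origin=a, targets=[(b, 3)])   # polyadic synapses list more targets
print(trusted_nodes([root, a, b]))                # -> {1, 2}

Restricting analysis to such a trusted subgraph is the spirit of the edge-confidence values described above.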
Nesting arbitrary groupings of reconstructed elements collapses a collection of arbitrary reconstructions into a meaningful entity such as a neuron. For example, a neuron may be represented with a nucleus (represented by a sphere), an arbor (represented by an areatree) and a list of synapses (each represented by a connector). Large collections of neurons are grouped by modality (''sensory neurons'' versus ''motor neurons'' or ''interneurons''), or by lineage (such as ''BLD5'', ''DALcl2'', etc. in the fly larval brain), or by experimental condition (''GFP-labeled'', ''RFP-labeled''), or by any desirable arbitrary grouping or nested groupings. Hierarchical grouping effectively reduces the complexity in the management of large collections of objects by collapsing them into high-level entities meaningful for the human researcher. These groups are application-specific, and in TrakEM2 are constrained by a controlled vocabulary with the required hierarchical groups (Figure 3). With hierarchical data organization and a search tool that supports regular expressions, TrakEM2 enables the location, manipulation, measurement (Text S1, ''Measurements''; Figure S9) and visualization of entities at the desired level of abstraction, be it fragments of neurons, individual neurons, a lineage of neurons, neuronal circuits, or arbitrary compartments or areas of the brain. Discussion We have described the key properties of TrakEM2, an open source software that is optimized for neural circuit reconstruction from serial section EM image data sets. TrakEM2 meets the rapidly growing demand for a flexible and robust application for implementing at tera-scale the workflows typical of current connectomics projects, which require volumetric reconstruction, visualization, and analysis of objects observed through 2D images. In this way, TrakEM2 supports the quest of neuroscientists to obtain a complete picture of the circuits embedded in the densely connected neurons of nervous systems. Indeed, ever since Schwann's theory of the cell and Cajal's neuron doctrine, neuroscientists have struggled to describe the diversity of neurons in the brain and their synaptic contacts that define the neuronal circuitry underlying brain functions. The turning point in this quest occurred in 1986, when Sydney Brenner and collaborators published their monumental work, the complete wiring diagram of the nematode Caenorhabditis elegans, with only 302 neurons [24]. The choice of organism was key to their success, given the technological means of the time. However, a quarter of a century later, no other central nervous system has been reconstructed in full.
Figure 2. Neural circuit reconstruction with skeletonized neural arbors and connectors to relate them at synaptic sites. A Snapshot illustrating the use of connectors to relate neural arbors. The connector in green (notice the 'o' node with a yellow circle around it) has three targets (it's a polyadic insect synapse), each of which is represented within the section by a node with an arrowhead that falls within the circle of each target. To the left, notice the use of text annotations to describe the synapse. B Search with regular expressions locates any objects of interest, in this case a ''membrane specializations'' tag in a neuronal arbor. C The tabular view for a neural arbor lists all nodes, branch nodes, end nodes or a subset whose tags match a regular expression. All columns are sortable, and clicking on each row positions the display on the node. The last column, titled ''Reviews'', indicates which cables of the neuron have already been reviewed (in green) to correct for missing branches or synapses or other issues. D A review stack is precomputed for fast visualization of the cable of interest, each section centered on the node. The visual flow through the stack helps in catching reconstruction errors. E ''Area trees'' are skeleton arbors whose nodes have 2d areas associated. F 3d rendering of two ''area trees'', a section of which is depicted in E. G 3d rendering of the nucleus (represented by a ''ball'') and the arbor (represented by a ''treeline'') of a neuron in the insect brain. H-J Cartoons of the skeletons used for reconstruction. The root node is labeled with an ''S'', the branch nodes with ''Y'' and the end nodes with ''e''. In H, a ''connector'' relates the nodes of two arbors, with a specific confidence value for the relationship. These confidence values exist on the edges that relate the arbor's nodes as well (not shown). I Rerooting changes the perspective, but not the topology, of the tree. By convention we position the root node at the soma. J Two common and trivial operations on trees are split and merge. doi:10.1371/journal.pone.0038011.g002
Brenner's reconstruction of the C. elegans nervous system was performed largely without the assistance of a computer. The work consisted in photographing (with film) serial 50 nanometer sections of the nematode worm, and annotating neurons and synapses on paper prints. An early computer-based system [25] was used for three-dimensional reconstruction of a few very small volumes. The introduction of personal computers in the mid-eighties opened the way for the development of the first computer-assisted reconstruction systems such as TRAKA [26] and, three years later, Neurolucida ([27]; MicroBrightField), bringing feasibility to computer-assisted neuronal reconstruction. Both these systems were oriented towards the reconstruction of labeled neurons at the optical level. They solved the data storage problem of the time (very large fields of view were far too large for computerized storage) by operating on microscope stage coordinates rather than pixel coordinates in a digitized image. Meanwhile, the results of Moore's Law, and improving electronic camera technology, have opened opportunities for storing and manipulating very large datasets of images. For large-scale serial section electron microscopy (EM) in its many variants (serial section electron tomography or SSET, [28]; serial section transmission EM or ssTEM; block-face EM or SBEM [3]; focused ion beam scanning EM or FIBSEM, [6]), coupling live imaging with neuronal reconstruction would result in damage to, and eventually disruption of, the nanometer-thick sections, or is not possible at all (such as in block-face EM or FIBSEM). Acquiring images first and then performing the analysis offline is therefore necessary. The software IMOD [29] revolutionized EM image volume analysis with tools for visualizing and aligning the sections of image stacks, and for manually counting, measuring and modeling objects in the 3d volume. The software Reconstruct [21] catered to the special needs of neuronal reconstruction from EM, namely tools for manual and semi-automated image registration within a section (montaging, for large fields of view) and across serial sections, and tools for volumetric reconstruction and measurement of neuronal structures.
The software package ir-tools [30] made new developments of the computer vision field accessible for serial EM reconstructions, including automated image montaging and contrast limited adaptive histogram equalization for image enhancement (CLAHE; [31]), among others. All of these software packages have evolved considerably since their publication dates and complement each other to various degrees. Originally, each was designed with specific technological problems and scientific questions in mind. TrakEM2 is deployed along with all the necessary image processing libraries with Fiji [32], an open source image processing application. Fiji provides automatic deployment of software updates and comprehensive documentation via a publicly accessible wiki (http://pacific.mpi-cbg.de). Fiji supports a variety of scripting languages useful for the programmatic manipulation of data structures in TrakEM2. The functionality and batch-processing capabilities of TrakEM2 are extensible at will.
Figure 3. To the left, the Template lists the project's user-defined abstract types and indicates what other abstract types (e.g. a ''glia'' is represented by one or more ''glial process'' instances) or primitive types (such as ''area list'', ''treeline'', ''connector'', ''ball'', etc.) they may be represented with. All elements of the Template are specific to each reconstruction project and user-defined. In the center, Project Objects displays the actual instances of the abstract, templated objects, which encapsulate and organize, in many levels of abstract types, the primitive segmentation types (e.g. ''area list''). The hierarchical structure assigns meaning to what otherwise would be an unordered heap of primitive types. Each instance of a primitive type acquires a unique identifier (such as ''#101 [area list]''). Each group may be measured jointly, or visualized in 3d, shown/hidden, removed, etc., as illustrated in the contextual menu for the selected ''mitochondria'' group (highlighted in blue). To the right, the Layers panel lists all sections in the project (a ''Layer'' holds the data for a single tissue section). From this graphical interface, an independent view may be opened for each section. doi:10.1371/journal.pone.0038011.g003
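As a flavor of such scripted, batch-style manipulation, here is a standalone Python sketch of a nested project hierarchy with regular-expression search; the class is an invented stand-in, not the TrakEM2 API, which is reached through Fiji's script interpreters.

# Sketch: nested project groups with regex search, in the spirit of TrakEM2's
# Template / Project Objects trees. Names are hypothetical, not the real API.
import re

class ProjectObject:
    def __init__(self, kind, name, children=None):
        self.kind = kind            # e.g. "neuron", "lineage", "areatree"
        self.name = name
        self.children = children or []

    def find(self, pattern):
        """Depth-first regex search over the nested hierarchy."""
        hits = [self] if re.search(pattern, self.name) else []
        for child in self.children:
            hits.extend(child.find(pattern))
        return hits

project = ProjectObject("project", "larval CNS", [
    ProjectObject("lineage", "BLD5", [
        ProjectObject("neuron", "BLD5-n01", [ProjectObject("areatree", "arbor #101")]),
        ProjectObject("neuron", "BLD5-n02"),
    ]),
    ProjectObject("lineage", "DALcl2"),
])

# Locate every object whose name matches a regular expression.
print([o.name for o in project.find(r"^BLD5-n\d+$")])   # ['BLD5-n01', 'BLD5-n02']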
TrakEM2 has already been employed in a variety of applications. While originally designed for reconstructing neural circuits in anisotropic serial section EM (for example, see [7,8,14]), researchers have found TrakEM2 useful for other EM modalities, for example for registering series of images from FIBSEM and annotating synapses by hand [33]. The segmentation tools have been used for generating a gold standard segmentation of brain tissue to compare with the output of automatic segmentation algorithms on EM images [34], and for reconstructing neuronal lineages [7] and organs [35] in laser-scanning microscopy data sets. TrakEM2 must evolve as new imaging methods deliver higher-resolution data sets of ever increasing volumes. The open source nature of TrakEM2 allows any researcher to modify the program to suit specialized needs, and to incorporate implementations of novel algorithms from the computer vision and image processing fields. For example, TrakEM2 currently exploits the anisotropic nature of serial section EM data, in which the X and Y dimensions have about 10 times higher resolution than Z (which is limited by the thickness of the section). Now, novel algorithms for tomographic reconstruction of serial sections [36] and more isotropic EM imaging with SBEM [3] and FIBSEM [6] suggest that this approach, which limits the manipulation of image data to the XY plane, will need to evolve to meet the challenge. General improvements in data storage and computing capacity will be very helpful for handling the coming new kind of large isotropic high-resolution EM data sets. The TrakEM2 source code is kept under a distributed version control system (git) that encourages forking the source code base, while retaining the capability of contributing back to the main development branch. TrakEM2 has been publicly available as open source since day one. The many contributions of interested users and developers have, and will, greatly enhance the utility of TrakEM2, for the benefit of all. Example EM Data The EM data used here to exemplify the use of TrakEM2 corresponds to the abdominal neuropil of the first instar larva of Drosophila, and will be made available in full elsewhere. Supporting Information Figure S1 Section and image compositing rules for simultaneous visualization of multiple sections or multiple channels. A Three consecutive sections (called Layers in TrakEM2 parlance), each with numerous tiles, are simultaneously rendered in red (previous), green (current) and blue (next). The gray area indicates that the overlap is very good. B The previous section is overlaid using a 'difference' composite: regions of the image that do not match get highlighted in white. C RGB image tile from an antibody labeling, manually registered on top of a collection of montaged EM tiles using a Colorize YCbCr composite. D Higher magnification of a similar region shown in C, where specific sectioned axons and dendrites are seen labeled in red or green. The overlay greatly facilitates identifying neurons in reasonably stereotypical animals such as Drosophila. (PDF) Figure S2 Manual affine transform of collections of image tiles. A The affine transform mode is used for interactive multi-tile transformations. In conjunction with multi-section visualization (the editable section in green, and the previous, reference section in red; the best overlap appears in yellow), a section is manually aligned to the previous one, a capability most useful for correcting or refining the results of automatic registration algorithms. A2 Enlarged inset, revealing the lack of overlap of the two adjacent sections. Notice near top right how the green section doesn't overlap with the red section. Three landmarks that define an affine transformation are used to interactively adjust the pose of all tiles in the section. B, B2 After manually dragging the landmarks, the two sections now overlap more accurately. The transformation is then propagated to subsequent sections to preserve the relative pose of all tiles (see menu snapshot in A). (PDF) Figure S3 Manual non-linear transform of collections of image tiles for fine cross-section alignment. A,B Two consecutive sections, numbered 344 and 345, present an artefactual stretch, as indicated by the widening of the marked profiles (in white). C,D The manual non-linear transformation mode is used here in conjunction with the transparent section overlay (notice the slider above the green panel in C) to reveal the local misalignment. The insets in C,D indicate the local transformation performed by dragging numerous landmarks. (PDF) Figure S4 Expressing image transformations without duplicating the original images by using alpha masks.
Duplicating images has a huge cost in data storage, which TrakEM2 avoids by using highly compressible alpha masks and precomputed mipmaps stored with lossy compression. A Images present borders which are apparent when overlapping (red arrowheads). An alpha mask with zero values for the borders (see adjacent cartoon) removes the border from the field of view. The A1 and A2 images show the rectangular region marked in red in the cartoons. B Manual non-linear transformations before (A1) and after (A2) correcting a section fold in an image tile. Inset, the alpha mask of the corrected tile. C Alternatively, the manual image splitting mode cuts image tiles in two or more parts using a polygonal line (C1), so that each half is now an independent Patch object that represents a tile, each relying on the original image but with a different alpha mask (inset in C2). Rigid image registration may now proceed, visualized in C3 by overlaying two consecutive sections. Data in B and C courtesy of Ian Meinertzhagen, Dalhousie University (Canada). (PDF) Figure S5 Correctable noise on EM images. A1, A2 A large blob occludes information on an EM image when the display range is adjusted for the whole image (A1), but reveals its content when CLAHE is applied (A2). B1-4 A support-film fold generates a dark band (B1) whose content is discernible at a lower-value region of the histogram (inset in B2). Applying CLAHE with a small window partially solves the problem (B3), but composing the image from both ranges restores it best (B4). (PDF) Figure S6 On-the-fly processing of the field of view for enhanced contrast. The live filter tab of the display offers a few filters, to adjust A the display range, invert the image (not shown), or apply B CLAHE. The yellow rectangle indicates the original view without filters. (PDF) Figure S7 Volumetric reconstruction with series of complex 2d areas or ''area lists''. The ''Z space'' tab lists all segmentation objects that exist in 3d. A With the brush tool, a selected ''area list'' instance is painted in yellow (notice the mouse pointer with circle), labeling the sectioned profile of a neuron. The selected object (listed in the cyan panel) may be visible or hidden, locked, or linked to the underlying images. B Labeled profiles are rendered in 3d by generating a mesh of triangles with marching cubes. C Dense reconstruction of a cube of neuropil. (PDF) Figure S8 Sketching and quantifying neural tissue with spheres and tubes. A,B Two sections with a ''ball'' to represent the nucleus and a ''pipe'' to model the main process of a monopolar insect neuron. The colors indicate relative depth: red means below the current section and blue above. (PDF) Figure S9 A A ''connector'' instance, expressing a synapse between an axon (large profile at lower left with numerous microtubules) whose tree is tagged ''presynaptic site'', with numerous terminal dendrites (small target circles, one in red indicating it's in the previous section). B Measurement of the distances from the root node (the soma, by convention) to all nodes labeled ''presynaptic site'' as in A. The inset schematizes the measurements (dotted red lines from ''root'' to nodes labeled as ''pre''). C A double disector is used together with an overlay grid (in green, cell size is one micron) to detect the number of objects appearing new in the next section (objects labeled as little yellow squares, with blue circles for the position of the same object in the next section, if present). The table shows the list of all marked objects.
Note how ''3'' occurs only once, indicating that it appears new in the next section. See [42] for details on the double disector technique. D The built-in scripting editor in Fiji shows a small Python script to extract statistics on the distances of synaptic vesicles (modeled with a ''ball'') to a synaptic cleft (modeled with an ''area list''), as shown in Supplemental Figure 11 d, e. (PDF) Text S1 Supplemental Text containing detailed information on various aspects of the TrakEM2 software, including image registration, dealing with noise, alpha masks, manual segmentation with areas, balls and pipe objects, and measurements. (PDF)
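The kind of measurement mentioned in the last panel (distances of synaptic vesicles to a synaptic cleft) can be sketched in a few lines of standalone Python; coordinates below are hypothetical, and the cleft is approximated by sampled points rather than a true area:

# Sketch: distance statistics from synaptic vesicles ("balls") to a synaptic
# cleft (sampled here as a polyline of points). Hypothetical coordinates.
import numpy as np

vesicles = np.array([[1.0, 2.0, 0.5],
                     [1.5, 2.2, 0.6],
                     [0.8, 1.7, 0.4]])          # vesicle centers, in microns
cleft = np.array([[0.0, 0.0, 0.0],
                  [0.5, 0.1, 0.0],
                  [1.0, 0.2, 0.1]])             # points sampled along the cleft

# Minimum distance from each vesicle to any sampled cleft point.
diffs = vesicles[:, None, :] - cleft[None, :, :]
dists = np.linalg.norm(diffs, axis=2).min(axis=1)

print("mean %.3f um, sd %.3f um" % (dists.mean(), dists.std()))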
2016-04-27T00:50:03.749Z
2012-06-19T00:00:00.000
{ "year": 2012, "sha1": "327b33d038e3422d89e6ee64e841a6bc70308ca9", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0038011&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "49a921af56061cf0fdddbaef2d315e552c36dfb4", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
19018260
pes2o/s2orc
v3-fos-license
Increased Frequency of Pre–Pro B Cells in the Bone Marrow of New Zealand Black (NZB) Mice: Implications for a Developmental Block in B Cell Differentiation Reductions in populations of both Pre-B cells (Hardy fraction D) and Pro-B cells (Hardy fractions B–C) have been described in association with murine lupus. Recent studies of B cell populations, based on evaluation of B cell differentiation markers, now allow the enumeration and enrichment of other stage-specific precursor cells. In this study we report a detailed analysis of the ontogeny of B cell lineage subsets in New Zealand black (NZB) and control strains of mice. Our data suggest that B cell development in NZB mice is partially arrested at the fraction A Pre–Pro B cell stage. This arrest at the Pre–Pro B cell stage is secondary to prolonged lifespan and greater resistance to spontaneous apoptosis. In addition, expression of the gene encoding the critical B cell development transcription factor BSAP is reduced at the Pre–Pro B cell stage in NZB mice. This impairment may influence subsequent B cell development to later stages, and thereby account for the down-regulation of the B cell receptor component Igα (mb-1). Furthermore, levels of expression of the Rag-2, λ5 and Igβ (B29) genes are also reduced in Pre–Pro B cells of NZB mice. The decreased frequency of precursor B cells in the Pre–Pro B cell population occurs at the most primitive stage of B cell differentiation. INTRODUCTION Rapid progress is being made in defining lineage-specific precursor cells that are intermediates in the pathway of B cell differentiation (Payne et al., 1999; Akashi et al., 2000). Development of mature B cells from multipotential stem cells is accompanied by qualitative and/or quantitative differences in the expression of cell surface molecules. Such differences permit enumeration, depletion and enrichment of stage-specific precursor cells. Merchant et al. (1995) fractionated bone marrow cells (BMC) from lupus-prone mice according to stage-specific fractions (Hardy and Hayakawa, 1991) and demonstrated an age-dependent reduction of both Pre-B (Fr. D) and Pro-B cells (Fr. B–C) in the bone marrow (BM). Herein, we performed a detailed study of the ontogeny of each substage-specific lineage of B cells from New Zealand black (NZB) mice. Our data reveal an accumulation, rather than a reduction, of the most immature B lineage cells, referred to as Pre–Pro B cells. Furthermore, the increased frequency of Pre–Pro B cells was secondary to a decreased rate of apoptosis. Thus, the decreased frequency of precursor B cells in NZB mice occurs at the most primitive stage of B cell differentiation and may be secondary to abnormalities in the gene(s) that control B cell lineage differentiation. Mice and Cell Preparation Female NZB/BlNJ, BALB/cJ, C3H/HeJ and C57BL/6J mice, aged 1–8 months, were obtained from the Jackson Laboratory (Bar Harbor, ME) and subsequently maintained by the Animal Resource Service of the University of California at Davis. BMC were obtained by flushing two femurs and tibiae with PBS containing 0.2% bovine serum albumin (BSA) utilizing a 25-gauge needle. Single cell suspensions were washed, and viable cells were quantitated using trypan blue exclusion. The data presented were replicated in three separate experiments using 3–4 mice per group, unless otherwise noted. Immunofluorescence Labeling and FACS Analysis Immunofluorescence labeling was performed as described by Lian et al. (1997).
Expression of cell surface antigens was measured by three-color flow cytometry analysis. Briefly, BMC were aliquoted (10^6) into tubes and preincubated with CD32/CD16 (Fc Block) at 4°C for 5 min. FITC-labeled anti-CD43, CD4 or IgM and PE-labeled anti-B220, CD19, CD5, HSA or c-kit, together with biotin-labeled anti-CD3, TcR-αβ, Thy1.2, Mac-1, Gr-1, NK1.1, Sca-1, CD4, CD8 or B220, were added directly to cells in Fc Block at 4°C for 30 min. Cells were then washed and subsequently incubated with streptavidin-TRI®. The frequency of cells expressing individual and/or sets of cell surface markers, and the mean density of expression of such markers, was determined by analysis of a minimum of 50,000 cells utilizing a FACScan flow cytometer (Becton Dickinson) and Cell Quest software. NZB bone marrow B lineage cells can be divided into distinct maturational stages based on surface staining for CD43 and sIgM or HSA, as reported by Hardy and Hayakawa (1991). The maturational subsets are identified alphabetically and represent increasing stages of differentiation from the Pre–Pro B cell subset stage (Fr. A; Fig. 1). We specifically quantitated NK1.1+ cells in the Pre–Pro B cell population of NZB, BALB/c and C57BL/6 mice, based on earlier work suggesting that NK1.1+ cells may be a minor population in the Pre–Pro B population (Rolink et al., 1996). Depletion of Immature/Pre/Pro (Fr. B–F) B Cells BMC were collected and layered onto a NycoPrep™ (NycoMed Pharma AS, Oslo, Norway) discontinuous density gradient. After centrifugation at 800g for 25 min, cells with a density of 1.066 < ρ < 1.077 were collected (Lian et al., 1999). The low-density cells were treated with a mixture of rat mAbs against mouse sIgM, CD24 (HSA) and CD19, followed by incubation with anti-rat IgG-conjugated magnetic beads (Dynabeads®). Passage through a magnetic field was utilized to deplete the Immature/Pre/Pro B (Fr. B–F) cells. Cell Cycle Analysis Cell cycling was detected by the BrdU Flow Kit (BD PharMingen, San Diego, CA). Mice were injected i.p. with 1 mg BrdU dissolved in PBS and were thereafter fed drinking water containing 1 mg/ml BrdU for different periods of time. The drinking water was light-protected and replaced with fresh BrdU-containing water every 2 days. Briefly, purified IgM−/HSA−/CD19− BMC were stained with PE-anti-B220 mAb, fixed with Cytofix/Cytoperm Buffer, and permeabilized with Cytoperm Plus Buffer. Cells were then incubated again with Cytofix/Cytoperm Buffer, followed by treatment with DNase to expose the BrdU epitopes. Finally, immunofluorescent staining was performed with FITC-conjugated anti-BrdU (for defining the frequency of dividing cells) and 7-AAD (for measurement of total DNA content), and analyzed by FACScan. Detection of Apoptotic Cells The frequency of Pre–Pro B (Fr. A) cells undergoing apoptosis was detected by Annexin V staining (BD PharMingen, San Diego, CA). After depletion of Immature/Pre/Pro (Fr. B–F) B cells, 5 × 10^5 IgM−/HSA−/CD19− fresh or cultured cells were resuspended in binding buffer (10 mM HEPES/NaOH, pH 7.4, 140 mM NaCl, and 2.5 mM CaCl2). Then, the cells were incubated with PE-conjugated Annexin V for 15 min at room temperature in the dark, washed and analyzed using the FACScan. RNA Isolation and Reverse Transcription PCR Total RNA for cDNA synthesis was prepared from freshly enriched Pre–Pro B (Fr. A) cells as described above.
Briefly, the low-density Immature/Pre/Pro (Fr. B–F) B cells (sIgM+, HSA+, CD19+) were first depleted using Dyna magnetic beads. The enriched Lin− cells were subsequently incubated with CD45R (B220) microbeads and subjected to further enrichment using a cell sorter (Miltenyi Biotec Inc., Auburn, CA). The resulting B220+ Pre–Pro B cells had a purity greater than 95% (Fig. 2). RNA was extracted utilizing the RNeasy Mini kit (QIAGEN Inc., Santa Clarita, CA), eluted into DEPC-treated H2O and stored at −70°C. This RNA was used to synthesize first-strand cDNA using Superscript II reverse transcriptase (RT; GIBCO Life Technologies, Gaithersburg, MD), 1 mM dNTPs, 1 μg random hexameric oligonucleotides, and the supplied RT buffer (GIBCO BRL). PCR assays were carried out using primer pairs specific for β-actin, Bcl-XL, μ0, Rag-2, BSAP, E2A, Igα, Igβ and λ5.
FIGURE 2 Whole BMC from NZB mice were enriched after discontinuous density gradient centrifugation; the low-density IgM+, HSA+, CD19+ cells were first depleted using Dyna magnetic beads. The enriched Lin− cells were subsequently incubated with CD45R (B220) microbeads and subjected to further enrichment using a cell sorter. The resulting B220+ Pre–Pro B cells had a purity greater than 95%.
Statistical Analysis Values were determined to be statistically significant by ANOVA or by unpaired Student's t-test. Bone Marrow B Cell Subset Changes In order to determine the frequency and absolute number of cells at various stages of B cell development in NZB mice of different ages, bone marrow samples from 1-, 2-, 4- and 8-month-old mice were examined. As shown in Fig. 3, the frequency and absolute numbers of cells in fractions B–F decreased with age. By 8 months of age, the proportion (and number) of Immature B cells (Fr. E–F) had decreased from 6.1 to 2.2% (3.0 × 10^6 to 1.1 × 10^6), Pre-B cells (Fr. D) from 11.6 to 3.2% (5.8 × 10^6 to 1.5 × 10^6) and Pro-B cells (Fr. B–C) from 4.3 to 0.2% (2.1 × 10^6 to 0.5 × 10^6). In contrast, there was a marked increase in the frequency (from 3.6 to 6.7%) and absolute numbers (1.8 × 10^6 to 2.7 × 10^6) of Pre–Pro B cells (Fr. A) from 1 to 2 months of age. Pre–Pro B cells at 8 months of age, however, showed an age-associated decline, but still remained higher than levels at 1 month of age. As shown in Fig. 4A, a similar analysis of B cell differentiation in BALB/c, C3H, and C57BL/6 mice demonstrated that the trend was unique to NZB mice. The data in Fig. 4 also show that the decline in the frequency and absolute number of immature and Pre-B cells in bone marrow of NZB mice is greater than that seen in normal mice. Unexpectedly, the Pre–Pro B cell populations are unusually expanded in NZB mice, and significantly higher at 8 months in comparison with control mice (Fig. 4D). It has been previously reported that NK1.1+ cells may be included as a minor population within the Pre–Pro B cell population (Rolink et al., 1996). In preliminary experiments, we detected a similar frequency of NK1.1+ Pre–Pro B cells in age-matched NZB mice and C57BL/6 mice. However, NK1.1 is not expressed in BALB/c mice (data not shown). Further, the NK1.1+ population was depleted before our analysis.
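The strain comparisons reported here rely on ANOVA and unpaired t-tests; a minimal sketch with made-up per-mouse Pre–Pro B cell frequencies (illustrative values only, not the paper's data) could look as follows:

# Sketch: unpaired t-test and one-way ANOVA on hypothetical per-mouse
# Pre-Pro B cell frequencies (%); values are illustrative only.
from scipy import stats

nzb   = [6.5, 6.9, 6.4, 7.0]      # hypothetical NZB frequencies
balbc = [3.4, 3.8, 3.5, 3.7]      # hypothetical BALB/c frequencies
b6    = [3.9, 4.1, 3.6, 4.0]      # hypothetical C57BL/6 frequencies

t, p = stats.ttest_ind(nzb, balbc)              # unpaired Student's t-test
f, p_anova = stats.f_oneway(nzb, balbc, b6)     # one-way ANOVA across strains
print(f"t = {t:.2f} (p = {p:.4f}); F = {f:.2f} (p = {p_anova:.4f})")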
BrdU Labeling of Pre–Pro B Cells It was reasoned that the increased number of Pre–Pro B cells in the bone marrow of NZB mice could be secondary to an increased level of proliferation of cells at this stage of B cell maturation. In order to examine this possibility, we analyzed the percentages of BrdU+ Pre–Pro B cells in bone marrow from NZB and, for comparison, BALB/c and C57BL/6 mice. Attempts were made in initial experiments to assess the frequency of BrdU+ Pre–Pro B cells in unfractionated bone marrow, but the low numbers of Pre–Pro B cells made this analysis difficult. Therefore, after low-density cell isolation, the IgM+/HSA+/CD19+ BMC were depleted using magnetic beads as described in the "Materials and Methods" section, prior to analysis (Fig. 5). As shown in Fig. 6 and Table I, the frequency of BrdU+ cells among NZB Pre–Pro B cells is lower than in normal mice. Two hours after the injection of BrdU, only 0.9% of the Pre–Pro B cells were BrdU+ in the NZB mice, while almost 2% of the Pre–Pro B cells from BALB/c mice were BrdU+. In addition, 90% of Pre–Pro B cells in control mice were BrdU+ (Table I). These data suggest that increased proliferation does not account for the increased frequency and absolute numbers of Pre–Pro B cells in the bone marrow of NZB mice. The Rate of Pre–Pro B Cell Apoptosis in NZB Mice The increased frequency of bone marrow Pre–Pro B cells in NZB mice could also be due to the increased survival of cells at this stage, resulting in an accumulation of cells in that compartment relative to normal mice. To address this issue, the frequency of cells undergoing spontaneous apoptosis was measured. The data in Fig. 7 show results from one of four representative experiments in which the frequency of apoptotic Pre–Pro B cells was determined. Highly enriched populations of Pre–Pro B cells from adult NZB mice exhibited a much lower level of apoptosis (3.45 ± 0.23%) than similar preparations of cells from similarly aged BALB/c (9.1 ± 1.8%) and C57BL/6 (6.2 ± 1.3%) mice (Table II). Because apoptotic cells are rapidly removed from the bone marrow, the number of such cells detectable at any one time is low (Lu et al., 1998). Therefore, a short-term culture system that allows for the accumulation and enumeration of apoptotic cells was used. IgM+/HSA+/CD19+-depleted low-density BMC from similarly aged adult NZB and normal BALB/c mice were incubated for 24 h, and apoptosis was examined by Annexin V staining. Once again, the frequency of apoptotic Pre–Pro B cells in NZB mice was found to be markedly lower than in normal BALB/c mice (Fig. 7 and Table II). In addition, the finding that the expression of Bcl-XL in Pre–Pro B cells of NZB mice is also higher than in normal BALB/c mice (Fig. 8) supports the view that Pre–Pro B cells from NZB mice are relatively resistant to apoptosis, reflecting a longer half-life of B cells at this stage of maturation. Gene Expression in Pre–Pro B Cells To further clarify the mechanisms responsible for the abnormal accumulation of Pre–Pro B cells in NZB mice, we examined the expression of key B cell lineage genes involved in the process of B cell maturation and differentiation, using a highly enriched population of Pre–Pro B cells from the bone marrow of 2-month-old NZB and age-matched control normal BALB/c mice, prepared as described. Figure 8 illustrates that the expression of the transcription factor BSAP in Pre–Pro B cells of NZB mice was down-regulated, as were the Igα (mb-1) and Igβ (B29) genes. Levels of expression of the surrogate light chain λ5 and Rag-2 were also lower in Pre–Pro B cells from NZB mice relative to levels seen in similar cells from control mice. DISCUSSION NZB mice, as well as several other murine lupus models, manifest abnormal patterns of B-lineage cell development (Borchers et al., 2000).
Typically, NZB mice exhibit accelerated appearance and production of B lineage precursor cells during fetal and neonatal life (Jyonouchi et al., 1983; Jyonouchi and Kincade, 1984). NZB mice develop large numbers of B cell precursors at an early stage of embryonic development, and this hyperactive B cell formation continues for the first few weeks of life. By 5–6 months of age, however, the frequency and absolute numbers of Pre-B cells are markedly reduced when compared with age-matched normal murine strains (Jyonouchi et al., 1982; Kruger and Riley, 1990). Merchant et al. (1995) localized the decreases in the differentiation and maturation of B lineage cells to the Pre-B and immature B cell stages. However, it was not clear whether this decrease was manifest throughout the currently recognized stages of B cell maturation or initiated at the Pre-B or Pre–Pro B cell stage. Herein, we find that there is in fact an increase in the frequency of Pre–Pro B cells in the BM of NZB mice; this increase was most pronounced at 1 month of age and is sustained throughout the ages of the NZB mice studied, suggesting that there is a block in the maturation of B cells at the Pre–Pro B cell stage that leads to an accumulation of these cells, with an associated decrease in the frequencies of the subsequent stages of B cell maturation. Rolink et al. have reported that NK1.1+ cells may be included within Pre–Pro B cells (Rolink et al., 1996).
FIGURE 5 Whole BMC from NZB mice were enriched after discontinuous density gradient centrifugation; IgM+ and HSA+ cells were depleted by magnetic beads as described in the "Materials and Methods" section. Note the high frequency of the Pre–Pro B cell subset (R1). A similar depletion was demonstrated with the control strains (data not shown).
As noted above, we found a similar frequency of NK1.1+ cells in age-matched NZB and C57BL/6 mice. They are not expressed in BALB/c mice. Numerous studies have described that allogeneic bone marrow transplantation (BMT) can prevent disease, and it has been attempted for the treatment of both systemic and organ-specific autoimmune diseases in SLE models (Ikehara et al., 1985; Ishida et al., 1994; Adachi et al., 1995; Mizutani et al., 1995; Ikehara, 1998; Good, 2000; Good and Verjee, 2001). More recently, we have reported that stem cells from adult NZB mouse bone marrow exhibited defective T cell lineage development in fetal thymic organ culture (FTOC) (Hashimoto et al., 2000). These findings suggest that autoimmune disease originates from intrinsic disorders of the HSCs themselves and of their developmental pathways, followed by autoreactive lymphocyte accumulation (Ikehara et al., 1990; Ikehara, 2001). Our observation of an unusual expansion of Pre–Pro B cells in NZB mice is consistent with these studies. It is believed that immune "tolerance" is accomplished, in part, through an educational phase of B lymphocyte development: autoreactive B cells are identified at an early maturational stage and effectively silenced. Importantly, depletion of immature B cells has been demonstrated by studies that evaluated the numbers of cells that successfully traverse the immature-to-mature B cell stage of development each day (Lu and Osmond, 2000). The data indicate a high apoptotic index at the Pre–Pro B/Pro-B transition: many Pre–Pro B cells normally generate non-productive rearrangements and are diverted into a programmed cell death pathway.
Our observations demonstrate that the levels of spontaneous apoptosis of fresh and cultured Pre–Pro B cells in NZB mice are significantly lower than in normal mice, suggesting that some autoreactive B cells may escape from apoptosis to continue the maturation process. This general defect in apoptosis implicates the maturational arrest of the B cells in the pathogenesis of autoimmunity in NZB mice. There are several mechanisms that, either individually or in concert, could account for the accumulation of B lineage cells at the Pre–Pro B cell stage. These include prolonged persistence and half-life of cells at this stage of maturation, decreased susceptibility to apoptosis (increased half-life), or dysregulation of genes that control B cell differentiation and maturation. As we have demonstrated, the Pre–Pro B cells of NZB mice exhibit a much slower turnover rate, prolonged persistence and resistance to apoptosis as compared to normal mice. The dysregulation of genes controlling B cell differentiation/development is another facet of the abnormal Pre–Pro B cell population expansion. Essential to the understanding of the molecular basis of this pathology is the clear characterization of the rearrangement status of the immunoglobulin heavy chain (IgH) locus (Hardy and Hayakawa, 1991, 2001; Li et al., 1993). The expression of μ0 is one of the earliest indications of B cell lineage commitment (Alessandrini and Desiderio, 1991; Schlissel et al., 1991), reflecting the remodeling of chromatin structure to make the heavy chain locus accessible to rearrangements (Alt et al., 1987). Transcripts of the Rag-1 and Rag-2 genes are essential for IgH rearrangement; Pre–Pro B cells possess very low levels of mRNA from the Rag-1 and Rag-2 genes and very little immunoglobulin D-J heavy chain rearrangement (Hardy and Hayakawa, 1991; Ehlich et al., 1993; Li et al., 1993). In this study, there was normal expression of the μ0 transcript in NZB mice, but the Rag-2 transcript was decreased in the Pre–Pro B fraction of NZB mice compared with normal mice, suggesting impairment of rearrangement in the Pre–Pro B fraction in the NZB mice. The initiation of B cell development critically depends on several transcription factors. B-cell-specific activator protein (BSAP, also termed the B-lymphoid-specific transcription factor Pax5) has been shown to play an essential role in early B cell development (Nutt et al., 1999a,b). The absence of BSAP leads to an arrest in B cell development at the earliest stage, before rearrangement of the IgH locus occurs (Urbanek et al., 1994). It has been shown that loss of BSAP affects the B lymphoid-restricted VH-to-DHJH joining step of IgH assembly (Nutt et al., 1997). BSAP appears to play a crucial role in B-lineage commitment (Nutt et al., 1999a). In our experiments, impairment of BSAP was observed in NZB Pre–Pro B cells, supporting the thesis that dysregulation of the cascade of gene transcription is involved in the abnormality in NZB mice. The initiation of B cell development also critically depends on the E2A gene, which encodes two helix-loop-helix transcription factors, E12 and E47. In the absence of these proteins, B cell development is arrested at the earliest stage, before DHJH rearrangement of the IgH chain occurs (Bain et al., 1994; Zhuang et al., 1994; Lin and Grosschedl, 1995). The levels of E2A expression in Pre–Pro B cells were similar in NZB mice and BALB/c mice.
E2A products are required for BSAP expression (Bain et al., 1994), and therefore the impairment of BSAP in NZB mice is not likely due to altered levels of E2A. It has been suggested that the E2A gene products are involved in cell lineage commitment, while BSAP is essential for progression of B cell development beyond the early Pro-B cell stage (Busslinger et al., 2000). It has also been reported that a decrease in BSAP levels results in a loss of cell proliferation capability (Chong et al., 2001), which is in accordance with our finding that BrdU incorporation is decreased in NZB Pre–Pro B cells. Collectively, an abnormal decrease of BSAP expression may be responsible for the abnormal increase of the Pre–Pro B cells in NZB mice. We recognize, however, that this genetic analysis must be performed on isolated B cell subpopulations in a more quantitative fashion. Such work is in progress.
FIGURE 8 Expression of mRNA for Bcl-XL, μ0, Rag-2, BSAP, E2A, Igα, Igβ and λ5 in sorted B220+CD43+CD19−HSA− (Pre–Pro B) cells from bone marrow of NZB and BALB/c mice. Total RNA isolation and RT-PCR assays were performed as described in the "Materials and Methods" section. Dilutions of cDNA were subjected to PCR amplification specific for β-actin, Bcl-XL, μ0, Rag-2, BSAP, E2A, Igα, Igβ and λ5, and the resulting products were separated by electrophoresis on a 1.5% agarose gel containing ethidium bromide and visualized by UV light illumination.
B cell antigen receptor complexes include membrane-bound immunoglobulin molecules non-covalently bound to the Igα and Igβ proteins, the products of the mb-1 and B29 genes, respectively (Reth, 1992). In addition, Igα/Igβ heterodimers are essential elements in Pre-B and B cell receptor signaling, and a role for Igβ in B lymphopoiesis before μ heavy chain synthesis has also been suggested using Igβ knockout mice (Benlagha et al., 1999). A recent report also suggests that signaling through Igβ regulates locus accessibility for ordered Ig gene rearrangements (Maki et al., 2000). In our study, mb-1 and B29 were also reduced in NZB Pre–Pro B cells. It is possible that the reduction of mb-1 may be due to the reduced expression of BSAP, because mb-1 is positively regulated by BSAP (Busslinger et al., 2000).
2014-10-01T00:00:00.000Z
2002-03-01T00:00:00.000
{ "year": 2002, "sha1": "29c32da15576387783746a4843a383e164238ef4", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jir/2002/915967.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7f0f8c9aac26a036a2eaafddd8c8dfb7c2f90867", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
252972652
pes2o/s2orc
v3-fos-license
Corrigendum: An evaluation of the COVID-19 pandemic and perceived social distancing policies in relation to planning, selecting, and preparing healthy meals: An observational study in 38 countries worldwide [This corrects the article DOI: 10.3389/fnut.2020.621726.]. "Stay-at-home policies and feelings of having more time during COVID-19 seem to have improved food literacy among women. Stress and other social distancing policies relate to food literacy in more complex ways, highlighting the necessity of a health equity lens." Corrections have been made to the section Materials and Methods, "Study Size and Statistical Analysis," paragraph 2. The first correction was made to the sentence that previously stated: "Descriptive analyses, independent samples t-tests and chi-square tests (see Table 1) showed that scores of male and female respondents were different for all variables except for the perception of having more time and general financial struggles." The corrected sentence appears below: "Descriptive analyses, independent samples t-tests and chi-square tests (see Table 1) showed that scores of male and female respondents were different for all variables except for the perception of having more time." The second correction was made to the sentence that previously stated: "To control for over or underreporting from certain countries due to unequal survey collections, a survey weight based on the country variable generated by SPSS for unbalanced samples was applied in all analyses." The corrected sentence appears below: "To control for over or underreporting from certain countries due to unequal survey collections, a survey weight was created based on the country proportion in the total sample." A correction was made to the section Results, "Participants." This sentence previously stated: "A final N = 37,207 (77.8% women, Mage = 36.71, SD = 14.79) were retained for analysis." The corrected sentence appears below: "A final N = 37,207 (73.6% women, Mage = 36.72, SD = 14.43) were retained for analysis." Corrections have been made to the section Results, "Descriptive Results." The paragraph previously stated: "Mean scores for planning, selecting, and preparing healthier foods were average to high before the COVID-19 crisis in both women and men. All three food literacy behavior domains increased during the COVID-19 crisis in both women and men [plan, women, F(1,522,232) = 25594.47, p < 0.01, men …]." The corrected paragraph appears below: "Mean scores for planning, selecting, and preparing healthier foods were average to high before the COVID-19 crisis in both women and men. All three food literacy behavior domains increased during the COVID-19 crisis in both women and men [plan, women, t(27,381) = 40.11, p < 0.001, men t(9,824) = 16.909, p < 0.001; select, women, t(27,381) = 3.25, p < 0.01, men t(9,824) = 8.63, p < 0.001; prepare, women, t(27,381) = 27.58, p < 0.001, men t(9,824) = 9.47, p < 0.001, see Table 1 for all means and SD]. Furthermore, both men and women scored higher on financial stress when they had lost income due to COVID-19 [for women t(15,092.38) = 71.87, p < 0.001 with M = 2.35, SD = 1.48 for women who did not lose income and M = 3.89, SD = 1.74 for women who lost income; for men t(7,005.57) = 45.05, p < 0.001 with M = 2.38, SD = 1.53 for men who did not lose income and M = 3.95, SD = 1.74 for men who lost income]."
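The corrected weighting ("a survey weight was created based on the country proportion in the total sample") admits several implementations; one plausible reading, sketched below with a hypothetical pandas DataFrame, gives each country an equal effective contribution regardless of its sample size. This is an illustration, not the authors' code.

# Sketch: one plausible construction of a country-based survey weight, so that
# each country contributes equally regardless of its sample size.
# The column name "country" and the DataFrame contents are hypothetical.
import pandas as pd

df = pd.DataFrame({"country": ["BE", "BE", "BE", "NL", "US", "US"]})

n_total = len(df)
n_countries = df["country"].nunique()
counts = df["country"].map(df["country"].value_counts())

# Target share per country (1/k of the sample) divided by its observed share.
df["weight"] = (n_total / n_countries) / counts
print(df)   # weights sum to n_total; small countries are up-weighted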
Corrections have been made to the section Results, "Hierarchical Multiple Regression Analyses." The first paragraph previously stated: "Results of all hierarchical multiple regression analyses are reported in full detail in Supplementary Table 2, and summarized in Figures 1, 2 and 3. To start with the personal responses, the perception of having more time since the COVID-19 crisis was associated with increases in planning, selecting, and preparing healthier foods in both women and men (p < 0.01). COVID-19-induced financial stress was associated with decreases in planning and preparing healthier foods in both women and men (p < 0.01). Financial stress was further associated with an increased use of food labels and nutrition information among women (p < 0.01). COVID-19-induced psychological distress was associated with decreases in planning, selecting, and preparing healthier foods among women (p < 0.01). For men, psychological distress was negatively related to selecting-and positively related to preparing-healthier foods (p < 0.01)." The corrected first paragraph appears below: "Results of all hierarchical multiple regression analyses are reported in full detail in Supplementary Table 2, and summarized in Figures 1, 2 and 3. To start with the personal responses, the perception of having more time since the COVID-19 crisis was associated with increases in planning, selecting, and preparing healthier foods in women (p < 0.001), but not significantly in men (p = 0.54). COVID-19-induced financial stress was associated with decreases in planning and preparing healthier foods in both women and men (p < 0.001). COVID-19-induced psychological distress was associated with decreases in planning, selecting, and preparing healthier foods among women (p < 0.05). For men, psychological distress was negatively related to selecting healthier foods (p < 0.05)." The second paragraph previously stated: "Concerning contextual factors, positive associations were found between policies to stay at home/work from home and changes in planning and preparing healthier foods in both women and men (p < 0.01). However, staying home was negatively associated with selecting healthier foods in women and men (p < 0.01). Next, policies on public gatherings related to an increase in selecting healthier foods among women, but this association was negative for men (p < 0.01). Policies on public gatherings also negatively related to women's planning and preparing of healthier foods. Policies on private gatherings negatively related to men's planning and preparation of healthier foods (p < 0.01)." The corrected second paragraph appears below: "Concerning contextual factors, positive associations were found between policies to stay at home/work from home and changes in planning and preparing healthier foods in women (p < 0.001). However, staying home was negatively associated with selecting healthier foods in women (p < 0.01). Next, policies on public gatherings related to an increase in selecting healthier foods among women (p < 0.01). Policies on public gatherings also negatively related to women's planning of healthier foods (p < 0.05). Policies on private gatherings positively related to women's planning (p < 0.05), and was negatively related to men's preparation of healthier foods (p < 0.05)." The third paragraph previously stated: "The closure of schools was associated with increased healthier food selection in men and women (p < 0.01), but decreased healthier food planning in men and preparation in women (p < 0.01).
The closure of restaurants and the closure of pubs and bars was associated with decreases in selecting healthier foods in men and women (p < 0.01). The closure of restaurants, pubs, and bars further increased women's healthier food planning, while healthier food planning decreased in men when pubs/bars were closed (p < 0.01). And while women's preparation of healthier meals increased when restaurants were closed, men reported that their preparation of healthier meals decreased (p < 0.01)." The corrected third paragraph appears below: "The closure of schools was associated with increased healthier food planning in men and women, as well as selection and preparation in women (p < 0.05). The closure of pubs and bars was associated with decreases in selecting healthier foods in women (p < 0.001)." The fourth paragraph previously stated: "Regarding the sociodemographic characteristics associated with changes in food literacy behaviors, educational attainment was negatively related to changes in selecting healthier foods and positively related to changes in planning and preparing healthier foods in men and women (p < 0.01). Employment status was negatively related to changes in food preparation in men and women (p < 0.01) and positively related to changes in selecting healthier foods in women. Struggling to make money last until the next payday was positively related to changes in women's selecting healthier foods (p < 0.01), and negatively related to men's changes in food planning (p < 0.01). Struggling to have enough money to go shopping for food was also related to positive changes in women's use of food labels (selecting healthier foods), but related to negative changes in both women and men's planning and preparing healthier foods (p < 0.01). Also loss of income was related to an increase in selecting healthier foods among women and men (p < 0.01), an increase in preparing healthier meals in women, and a decrease in preparing healthier meals in men (p < 0.01). Age was positively related to changes in planning healthier foods for men and women. It was also positively related to changes in men's healthier food selection, while for women it was negatively related to changes in selecting and preparing healthier foods (p < 0.01). Finally, the more adult cohabitants women had during the COVID-19 crisis, the more their selection and preparation of healthier foods improved (p < 0.01). For men, increases in the number of adult cohabitants related to decreases in planning and preparing healthier foods (p < 0.01). The number of children in the household was negatively associated with men and women's planning and preparation of healthier foods (p < 0.01), and positively associated with women's selection of healthier foods." [Figure 1 caption: Graphic summary of the significant relations between personal, contextual and sociodemographic variables and changes in planning healthier foods during COVID-19. Beta-values are reported only for significant relations in models for planning healthier foods. Bars to the right indicate improvement in food planning, bars to the left indicate decreases in planning healthy foods.] The corrected fourth paragraph appears below: "Regarding the sociodemographic characteristics associated with changes in food literacy behaviors, educational attainment was negatively related to changes in selecting healthier foods in women (p < 0.05) and positively related to changes in planning and preparing healthier foods in men and women (p < 0.001).
Employment status was negatively related to changes in food preparation in women (p < 0.05). Struggling to make money last until the next payday was positively related to changes in women's selecting healthier foods (p < 0.05), and negatively related to men's changes in food planning (p < 0.05). Struggling to have enough money to go shopping for food was also related to positive changes in women's use of food labels (selecting healthier foods), but related to negative changes in women's planning and preparing healthier foods (p < 0.01). Also loss of income was related to an increase in selecting healthier foods among women (p < 0.001). For women, age was negatively related to changes in selecting and preparing healthier foods (p < 0.01). Finally, the more adult cohabitants men had during the COVID-19 crisis, the more their preparation of healthier foods decreased (p < 0.01). For women, increases in the number of adult cohabitants related to decreases in planning healthier foods (p < 0.05). The number of children in the household was negatively associated with men and women's planning and preparation of healthier foods (p < 0.001), and also negatively associated with men's selection of healthier foods (p < 0.01)." [Figure 2 caption: Graphic summary of the significant relations between personal, contextual and sociodemographic variables and changes in selecting healthier foods during COVID-19. Beta-values are reported only for significant relations in models for selecting healthier foods. Bars to the right indicate improvement in food selection, bars to the left indicate decreases in selecting healthy foods.] Corrections have been made to the section Discussion. The second paragraph previously stated: "First, the COVID-19 crisis has taught us that stay-at-home policies, and especially personal perceptions of having more time, can increase the willingness to plan, select, and prepare healthier foods. Stay-at-home policies resulted in distorted perceptions of time and made many people feel bored (12, 13). Yet, stay-at-home policies may be in our favor when it comes to food literacy, if people feel to have more time, because in these cases we observed positive increases in planning, preparing, and selecting healthier foods. A health equity lens is warranted (3), however, since working from home is not beneficial for everyone and can lead to increased stress in some people (20). Results also show that while feeling to have more time relates to increases in planning, selecting and preparing healthier foods, stay-at-home policies corresponded to decreases in selecting healthier foods as well. Moreover, women with young children in particular experience more stress and time constraints when working from home (22). We also observed that an increase in the number of children one lives with relates to a decrease in changes in planning and preparing healthier foods. Thus, health practitioners should find ways of incorporating workplace policies to increase time availability in long-term food literacy interventions, bearing the home situation in mind. The requirement to work from home has been a successful public health initiative to curb the spread of COVID-19, and may be a successful long-term strategy to improve food literacy, other factors considered."
The corrected second paragraph appears below: "First, the COVID-19 crisis has taught us that stay-at-home policies, and especially personal perceptions of having more time among women, can increase the willingness to plan, select, and prepare healthier foods. Stay-at-home policies resulted in distorted perceptions of time and made many people feel bored (12, 13). Yet, stay-at-home policies may be in our favor when it comes to food literacy, if people feel to have more time, because in these cases we observed positive increases in planning, preparing, and selecting healthier foods among women. A health equity lens is warranted (3), however, since working from home is not beneficial for everyone and can lead to increased stress in some people (20). This is reflected in our results showing that while feeling to have more time relates to increases in planning, selecting, and preparing healthier foods among women, stay-at-home policies corresponded to decreases in selecting healthier foods as well among this group. These seemingly contradicting results can perhaps be brought back to time perception, as time constraints are an important factor in practicing healthy food behaviors (21). Stay-at-home policies specifically could be responsible for this dual outcome of either experiencing more or less time constraints, as some have experienced having more time during COVID-19 work from home obligations (13), and others, mainly parents and mothers especially, have had less or more fragmented time perceptions (22). Mothers during COVID-19 have especially perceived more time-related stress in combining their work and home responsibilities (22), aligning with previous findings that women with young children in particular experience more stress and time constraints when working from home (23). We also observed that an increase in the number of children one lives with relates to a decrease in changes in planning and preparing healthier foods in men and women, as well as selecting them for men. Thus, health practitioners should find ways of incorporating workplace policies to increase time availability in long-term food literacy interventions, bearing the home situation in mind for parents and especially mothers. The requirement to work from home has been a successful public health initiative to curb the spread of COVID-19, and may be a successful long-term strategy to improve food literacy, other factors considered." [Figure 3 caption: Graphic summary of the significant relations between personal, contextual and sociodemographic variables and changes in preparing healthier foods during COVID-19. Beta-values are reported only for significant relations in models for preparing healthier foods. Bars to the right indicate improvement in food preparation, bars to the left indicate decreases in preparing healthy foods.] The third paragraph, from the third sentence, previously stated: "Idyllic representations of relieving stress in the kitchen during the COVID-19 crisis (2) may not have applied to women in our study. Among men we did observe an increase in preparing healthier meals when psychological distress increased. This could be interpreted as men viewing cooking as a "leisure" activity (22), while women take up the "burden" of everyday cooking (23). This may explain why, during the COVID-19 crisis, psychological distress became a barrier to women's everyday cooking but a creative outlet for men as a way to relieve stress (16).
Given that women are more likely to be responsible for everyday food preparation in households, the negative impact of psychological distress on their food literacy behaviors may impact the health of many other children and adults." The corrected third paragraph, from the third sentence, appears below: "Increases in psychological distress have been linked to adverse nutritional health behaviors in the past (24). Previous studies have highlighted different possible causes of increased distress as a result of COVID-19 lockdown. Some studies have cited the distorted time perceptions and a sense of timelessness as a possible cause for sadness and psychological distress (12, 13). Others cite lower socioeconomic status, COVID-19 infection risk, and longer media exposure as factors related to psychological distress (25). Women especially have been associated with higher psychological distress (25), which could explain our findings as they related to food literacy behaviors." The fourth paragraph, from the third till the seventh sentence, previously stated: "Both loss of income and feelings of financial stress caused by the COVID-19 crisis, as well as struggling to have enough money for food related to increases in selecting healthier foods for women. When looking at the planning and preparation of healthier meals, however, results show a different pattern: financial stress and struggles to have enough money for food related to decreases in planning and preparing healthier meals. Thus, while financial stress and constraints do not relate to women's planning and preparation of healthier meals, something did change in their food shopping behavior. A potential explanation for this may be that prices of certain foods became more expensive, especially for foods that were hoarded due to social panic (24)." The corrected fourth paragraph, from the third till the seventh sentence, appears below: "Loss of income and struggling to have enough money for food related to increases in selecting healthier foods for women. When looking at the planning and preparation of healthier meals, however, results show a different pattern: financial stress related to decreases in planning and preparing healthier meals for both men and women, whereas struggles to have enough money for food related to these decreases only among women. Thus, while financial stress and constraints decreased women's planning and preparation of healthier meals, they seemed to increase their selection of healthy meals. A potential explanation for this may be found in grocery shopping as it relates to meal selection, as prices of certain foods became more expensive, especially for foods that were hoarded due to social panic (26)." The fifth paragraph previously stated: "With regard to other sociodemographic characteristics, our results show that increases in food planning were associated with older age in men and women, while for women age was related negatively to changes in selecting and preparing healthier foods. A potential explanation for this is that more women acquire higher levels of food literacy at a younger age than men, leaving less room for improvement as they get older (4, 5, 7, 10)." The corrected fifth paragraph appears below: "With regard to other sociodemographic characteristics, our results show that increases in food planning were associated with older age in men and women, while, for women, age was related negatively to changes in selecting and preparing healthier foods.
A potential explanation for this is that more women acquire higher levels of food literacy at a younger age than men, leaving less room for improvement as they get older (4, 5, 7, 10). Additionally, these results can be linked to younger age being associated with increased psychological distress during COVID-19 (25), potentially causing less healthy food behaviors (24)." The eighth paragraph previously stated: "In conclusion, we reported overall increases in planning, selecting, and preparing healthier foods during the COVID-19 crisis among women and men in 38 countries around the world using self-report data. Perceptions of having more time were most clearly associated with these positive changes, followed by the contextual factor of stay-at-home policies. Psychological distress was related to decreases in women's food literacy, and increases in men's healthy food preparation. Financial stress was not always related to decreases in food literacy; especially among women, financial stress and struggles related to increased healthier food selection behaviors." The corrected eighth paragraph appears below: "In conclusion, we reported overall increases in planning, selecting, and preparing healthier foods during the COVID-19 crisis among women and men in 38 countries around the world using self-report data. Perceptions of having more time were most clearly associated with these positive changes among women, followed by the contextual factor of stay-at-home policies. Psychological distress was related to decreases in women's food literacy, and decreases in men's healthy food selection. Financial stress was not always related to decreases in food literacy; financial stress and struggles related to increased healthier food selection behaviors among women but to decreases in planning and preparing." In the original article, there was an error in Figure 1 as published. An incorrect weighting coefficient was used; therefore, analyses were run again using the correct weighting variable. The corrected Figure 1 and its caption appear below. In the original article, there was an error in Figure 2 as published. An incorrect weighting coefficient was used; therefore, analyses were run again using the correct weighting variable. The corrected Figure 2 and its caption appear below. In the original article, there was an error in Figure 3 as published. An incorrect weighting coefficient was used; therefore, analyses were run again using the correct weighting variable. The corrected Figure 3 and its caption appear below. In the original article, there was an error in Table 1 as published. An incorrect weighting coefficient was used; therefore, analyses were run again using the correct weighting variable. The corrected Table 1 and its caption appear below. [Table 1 notes: (a) Separate regressions were used for planning, selecting, and preparing healthier foods for male and female participants. In a first step only personal factors were included; in a second step social distancing measures were added to the model. In both models we controlled for a range of sociodemographic variables known to relate to food literacy. We report the unstandardized beta (B), the standard error for the unstandardized beta (SE) and the standardized beta. (b) Sample sizes of all participating countries differed. To control for over- or underreporting from certain countries due to unequal survey collections, a survey weight created based on the country proportion in the total sample was applied in all analyses.]
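As an editorial illustration of the corrected weighting scheme and the two-step models described in the table note above (this is not the authors' code; the file name and column names such as "country", "plan_change", and "stay_home" are hypothetical placeholders), the weight construction and hierarchical regression might be sketched in Python as follows:

```python
# Illustrative sketch of a survey weight based on each country's proportion in
# the total sample, followed by the two-step regression from the Table 1 note.
# File and column names are hypothetical placeholders, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("food_literacy_survey.csv")  # hypothetical input file

# One plausible construction: weight respondents so that every country
# contributes an equal share, down-weighting over-represented countries.
country_prop = df["country"].map(df["country"].value_counts(normalize=True))
df["weight"] = (1.0 / df["country"].nunique()) / country_prop

# Step 1: personal factors only (weighted least squares).
step1 = smf.wls("plan_change ~ more_time + financial_stress + distress",
                data=df, weights=df["weight"]).fit()

# Step 2: social distancing policy measures added to the model.
step2 = smf.wls("plan_change ~ more_time + financial_stress + distress"
                " + stay_home + public_gatherings + private_gatherings",
                data=df, weights=df["weight"]).fit()

print(step1.summary())
print(step2.summary())
```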
In the original article, there was an error in Supplementary Table 2 as published. An incorrect weighting coefficient was used; therefore, analyses were run again using the correct weighting variable. Supplementary Table 2 and its caption have been updated in the original article. Additional correction to text (Materials and Methods) In the original article, it was stated that repeated measures ANOVA was used to test the significance of changes. However, the reported analyses were paired-samples t-tests. Therefore, a correction was made to Materials and Methods, "Study Size and Statistical Analysis," paragraph 1. The sentence previously stated: "Repeated measures ANOVA was first used to test the significance of changes in self-reported planning, selection, and preparation of healthier foods before vs. during COVID-19." The corrected sentence appears below: "Paired-samples t-test was first used to test the significance of changes in self-reported planning, selection, and preparation of healthier foods before vs. during COVID-19."
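To make the corrected analysis concrete, a paired-samples t-test on before/during scores can be run as in the minimal sketch below (simulated placeholder data, not the study's code). Note that for two paired measurements a one-way repeated-measures ANOVA and a paired t-test are equivalent, with F = t², which is why relabeling the test does not change the substantive result:

```python
# Sketch of the corrected analysis: paired-samples t-test on self-reported
# food literacy before vs. during COVID-19. Data are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.normal(3.5, 0.8, size=500)            # planning score before COVID-19
during = before + rng.normal(0.2, 0.5, size=500)   # same respondents during COVID-19

t_stat, p_value = stats.ttest_rel(during, before)
print(f"t({before.size - 1}) = {t_stat:.2f}, p = {p_value:.2g}")
# For two repeated measures, a repeated-measures ANOVA would give F = t**2
# with the same p-value.
```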
2022-10-19T13:26:04.665Z
2022-10-17T00:00:00.000
{ "year": 2022, "sha1": "8a3a33a4e0d17caeb9f11b8be5196a8307cda966", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "8a3a33a4e0d17caeb9f11b8be5196a8307cda966", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
67777449
pes2o/s2orc
v3-fos-license
Development of Gamma Irradiation Vaccine against Mannheimia haemolytica: A Preliminary Study The study aimed to use the advantages of nuclear techniques to develop an irradiation vaccine against Mannheimia haemolytica, using different gamma radiation doses for vaccine preparation and different inoculation doses of the irradiation vaccine. M. haemolytica was exposed to different doses of gamma radiation. The dose rate considered the optimum irradiating dose was the one lethal to M. haemolytica cells, and it was selected for the optimal gamma irradiation vaccine. Experimental animals were divided into four groups. The experimental groups were injected twice, with a three-week interval, with the tested vaccines. The first group (G1) was inoculated with 4×10⁹ bacterial cells/dose of the optimum irradiation vaccine. The second group (G2) was inoculated with 2×10⁹ bacterial cells/dose of the optimum irradiation vaccine. The third group (G3) was inoculated with 4×10⁹ bacterial cells/dose of the high irradiation vaccine. The fourth group (C) was injected (S/C) with 2 mL sterile PBS and was kept as a control group. The vaccination challenge with the wild live M. haemolytica organism (0.5 mL of 3.6×10¹⁰ cells mL⁻¹) was given in two doses to all experimental animals. ELISA was used to evaluate the efficiency of the vaccines. Antibody production was evaluated using the Optical Density (OD) value as an indication of the efficiency of the vaccine against M. haemolytica. The results revealed that, after the second vaccination dose, the OD value of G2 showed a significant difference compared to the G1 and G3 groups, while the difference between the G1 and G3 groups was non-significant. Comparative analysis of the control group and the different doses of gamma irradiation vaccines showed that, after the second vaccination dose, the mean OD value of G2 was significantly different, while those of G1 and G3 were not, compared to the control group. After the vaccination challenge, the mean OD value of G2 was highly significantly different compared to all other vaccinated groups and the control group. Introduction Mannheimia haemolytica (M. haemolytica) is the principal bacterial pathogen of respiratory disease, causing considerable economic losses in cattle, sheep and goats. Moreover, it is responsible for mastitis in ewes and camels and abortion in cattle (Blackall et al., 2002; Christensen et al., 2003; Dewani et al., 2002; Odugbo et al., 2004; George et al., 2008). In Egypt, Mannheimia disease has been reported as the major cause of death on several ostrich farms (Fatma and Hala, 2008). Kaoud et al. (2010) isolated M. haemolytica from pneumonic sheep, goats, cattle and buffalo (14.10, 11.80, 3.60 and 3.90%, respectively). They also isolated the microorganism from healthy animals at a relatively high rate. Zaher et al. (2014) recorded the frequent association between Bovine Respiratory Disease Complex (BVD) and M. haemolytica in Egyptian cattle, sheep and goats. Vaccination is arguably the most effective defense ever deployed to fight disease. Vaccination strategies have saved billions of animals and people from death, sickness and hardship. Progress has been made towards the development of vaccines against causative pathogens, showing protection and lasting immunity. Vaccine development is an activity that focuses on a variety of technological initiatives and applied research which enhance and promote improved systems and practices for vaccine safety.
Gamma irradiation destroys the DNA of the pathogen, making the microorganism unable to replicate so that it cannot establish an infection; however, some residual metabolic activity may survive, so the irradiated microorganism can still find its natural target in the host (Datta et al., 2006). Gamma irradiation is widely used by many researchers to inactivate parasites for the preparation of vaccines, instead of traditional heat or chemical methods of inactivation. It has the advantage of a longer storage life than live, attenuated and killed, inactivated vaccines (Syaifudin et al., 2011). The objective of this study was to develop a gamma irradiation vaccine against M. haemolytica using different gamma radiation doses (optimum and high radiation) for vaccine preparation and different inoculation doses of the irradiation vaccine. MATERIALS AND METHODS Samples collection: Samples from both healthy and pneumonic lungs were obtained from Basateen automated Slaughterhouse (Cairo, Egypt) from freshly slaughtered animals. The samples were cultured overnight at 37°C in Erlenmeyer flasks containing 200 mL of brain/heart infusion broth. Bacterial isolation and identification: Based on morphology under microscopy, suspected colonies were cultured on (Oxoid) tryptone soya agar with 10 g L⁻¹ NaCl and 10 mL sheep blood, a selective medium for Mannheimia haemolytica, and on MacConkey agar. The plates were incubated aerobically and anaerobically at 37°C for 24-72 h, followed by purification through sub-culturing. The isolates were subjected to further identification using Gram staining and biochemical reactions (MacFaddin, 2000). Molecular identification: The bacterial genome was extracted using the Wizard genomic DNA isolation kit (#A1120, Promega Corporation, USA). 16S rRNA gene sequencing was used for molecular identification of the M. haemolytica sample according to James (2010). PCR amplification of the 16S rRNA gene was carried out using the forward 8F primer "5' AGA GTT TGA TCC TGG CTC AG" and reverse U1492R primer "5' GGT TAC CTT GTT ACG ACT T", PCR green master mix (Promega Corporation, USA) and 0.2 µg of purified bacterial DNA. The thermal cycle of the reaction was: pre-denaturation at 95°C for 7 min (one cycle), followed by 35 cycles (denaturation at 95°C for 1 min, annealing at 50°C for 1 min, extension at 72°C for 1 min), and a final extension at 72°C for 7 min (one cycle). The PCR product was loaded on a 1.5% agarose gel for electrophoretic separation and molecular weight calculation using a molecular weight standard ladder (100-bp DNA ladder, Promega Corporation, USA). Vaccines preparation: A single colony of M. haemolytica was inoculated into 5 mL tryptone soya broth. The inoculated flask was incubated at 37°C for 18-24 h in a shaking incubator. Mannheimia haemolytica was exposed to different doses of gamma radiation ranging from 2 to 20 kGy. The process was carried out (under cooling) using a ⁶⁰Co source (Russian facility, Model Issledovatel). The bactericidal activity of the different radiation doses was assessed by cultivation on tryptone soya agar media; the optimum irradiating dose was the lowest amount of radiation that was lethal to M. haemolytica cells (Aquino et al., 2005; Abo-State et al., 2010). Complete abolition of M. haemolytica was obtained in media exposed to 20 kGy. Animals: White New Zealand rabbits, four weeks old, were used in the present experimental studies. The animals were obtained from the Animal Production Research Institute's New Zealand rabbit farm.
The rabbits were barrier-bred, unvaccinated and free of a variety of pathogens. Animals were allowed a one-week period of acclimatization following their arrival at the vivarium. The animals were individually housed in stainless steel cages with slatted bottoms and no bedding. The rabbits were allowed ad libitum access to fresh tap water from water bottles and were fed a balanced commercial feed. Bacterial infection challenge: The Mannheimia haemolytica organism was grown to confluence on dextrose starch agar plates overnight at 37°C. The cells were harvested in 0.01 M phosphate-buffered saline, centrifuged, washed twice with phosphate-buffered saline and diluted to 3.6×10¹⁰ cells mL⁻¹. All groups were inoculated subcutaneously with the challenge organisms at a dose of 0.5 mL per rabbit; the challenge dose was chosen according to Lu and Pakes (1981). Experimental design: The experimental study was divided into two experiments: (1) a comparative study between different doses of the optimum gamma irradiation vaccine and the high gamma irradiation vaccine; (2) vaccination challenges for all experimental animals to test the efficiency of the different irradiation and inoculation doses against infection with wild M. haemolytica. The animals were classified into four groups and subjected to treatment as follows:
- Group one (G1): Vaccinated subcutaneously (S/C) with two doses of optimum gamma-irradiated M. haemolytica at 4×10⁹ bacterial cells/dose
- Group two (G2): Vaccinated subcutaneously (S/C) with two doses of optimum gamma-irradiated M. haemolytica at 2×10⁹ bacterial cells/dose
- Group three (G3): Vaccinated subcutaneously (S/C) with two doses of high-dose gamma-irradiated M. haemolytica at 4×10⁹ bacterial cells/dose
- Group four (C): Injected (S/C) with 2 mL sterile PBS and kept as a control group
For all experimental animals, the second dose was given three weeks after the first dose. The vaccination challenge with live M. haemolytica (0.5 mL of 3.6×10¹⁰ cells mL⁻¹) was given twice to all experimental animals. The first challenge dose was given three weeks after the second dose of vaccination. The second challenge dose was given one week after the first challenge (0.5 mL of 3.6×10¹⁰ cells mL⁻¹). Samples collection for vaccine evaluation: Blood samples were collected at the beginning of every week after the first dose of vaccination until one week after the second dose of challenge. The collected samples were centrifuged at 4500×g for 10 min at 4°C. Plasma samples were transferred to 1.5 mL tubes and frozen at -20°C until used. Evaluation of vaccine efficiency using Enzyme Linked Immuno-Sorbent Assay (ELISA): Antibody production was evaluated using the Optical Density (OD) value as an indication of the efficiency of the vaccine against M. haemolytica. Plasma samples were assayed for antibodies against M. haemolytica by ELISA. The polystyrene microtiter wells were coated with sonicated antigen (the bacterial cells were diluted in bicarbonate buffer, pH 9.6, to an absorbance of 1.0 measured spectrophotometrically at 450 nm; the suspension was sonicated for 15 min at 35% power using a cell disrupter with a microtip probe). 100 µL of 1:10 diluted antigen in carbonate-bicarbonate buffer (pH 9.6) were added to each well of a 96-well flat-bottom plate. The plate was then incubated at 4°C overnight. The plates were washed three times with PBS (pH 7.4) containing 0.5% (v/v) Tween 20 and then incubated for 30 min at 37°C with 1% (w/v) bovine serum albumin (Sigma Chemical, St Louis, MO).
Immediately before samples were tested, wells were washed three times with PBS-Tween 20. Based on preliminary assays, plasma samples were diluted 1:5 in PBS and incubated in duplicate PTE-coated wells and uncoated wells (to control for non-specific absorption) for 1 h. Then the wells were washed with PBS-Tween 20, and 100 µL of diluted rabbit IgG heavy- and light-chain antibody conjugated to horseradish peroxidase (HRP) (Bethyl Laboratories Inc., USA, Cat. No. A120-101P; 1:10,000) was added to all wells and incubated at 37°C for 1 h. Then 100 µL of the substrate 3,3',5,5'-tetramethylbenzidine (TMB) (Bethyl Laboratories Inc., USA, Cat. No. E102) solution was added and kept for 15 min at 37°C. A color reaction developed in the wells. The reaction was stopped by the addition of 25 µL of sulphuric acid (95-97%) per well. The plates were read spectrophotometrically at 405 nm using an ELISA reader (BioTek ELX800, using Gen5 2.00 software). Statistical analysis: The results of OD values were analyzed using the arithmetic mean, standard deviation, analysis of variance (ANOVA), and post-hoc multiple comparison tests according to Pipkin (1984). Identification of M. haemolytica: According to MacFaddin's methods (MacFaddin, 2000), the results proved that the microorganism isolated from the collected samples and identified as M. haemolytica consisted of Gram-negative rods that did not produce indole, grew on MacConkey's agar, were non-motile, catalase positive and oxidase positive, fermented sugars such as lactose, and were haemolytic. The PCR-amplified product of the M. haemolytica 16S rRNA gene was 1.5 kbp. BLAST analysis of the M. haemolytica 16S rRNA gene sequence indicated that the isolated M. haemolytica sequence showed identity to the Mannheimia haemolytica D174 complete genome in the region of the 16S ribosomal DNA sequence (NCBI Sequence ID: gb|CP006574.1|). This result confirmed that the microorganism isolated from the study samples was Mannheimia haemolytica. Detection of the effect of different doses of gamma radiation on the survival of M. haemolytica: The D10 value was 2.5 kGy and the sub-lethal dose was found to be 18 kGy (a worked check of these dose values is sketched after this section). In the present experiment, complete abolition of M. haemolytica was obtained in media exposed to 20 kGy. This dose was considered the optimum irradiating dose, lethal to M. haemolytica cells, and was selected for the optimal gamma irradiation vaccine. M. haemolytica exposed to 25 kGy was used for the high gamma irradiation vaccine. Evaluation of the results between the control and the different gamma vaccine inoculation groups: The comparative study between control and vaccinated groups is illustrated in Table 1 and Fig. 1-4. The vaccinated G1 and G2 groups showed a significant difference over the three weeks after the first vaccination dose compared to the control group, while the OD value of the G3 group showed a significant difference only at the first week compared to the control group. After the second vaccination dose, the OD values of the G2 and G3 groups showed a significant difference at the first week, while the difference was non-significant in the G1 group compared to the control group. At the second week, the OD value of the G1 group was significantly different, while the difference was non-significant in the G2 and G3 groups compared to the control group. At the third week, the OD values of the G2 and G3 groups showed a significant difference, while the difference was non-significant in the G1 group compared to the control group.
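Returning to the dose-selection result above: as a back-of-envelope consistency check (an editorial addition, not a calculation from the paper), the D10 value describes log-linear radiation inactivation:

\[
\frac{N}{N_0} = 10^{-D/D_{10}}, \qquad \left.\frac{N}{N_0}\right|_{D = 20\,\mathrm{kGy}} = 10^{-20/2.5} = 10^{-8}
\]

With \(D_{10} = 2.5\) kGy, the selected 20 kGy dose corresponds to an 8-log reduction. For inocula on the order of \(10^{9}\)–\(10^{10}\) cells, the strict log-linear model alone would still predict a handful of survivors, so the complete abolition observed at 20 kGy should be read as an empirical endpoint; real survival curves commonly deviate from log-linearity at high doses.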
Evaluating the results of the different gamma vaccine inoculations: The results of the comparative analysis between the three vaccines revealed that the OD values of the three groups were non-significantly different at the first week after the first vaccination dose. [Fig. 3(a-b): OD values for the G1, G2, G3 and C groups after the (a) 1st and (b) 2nd vaccination dose of vaccinated animals.] At the second week, the difference in OD value between G1 and G3 was non-significant (Table 1 and Fig. 1), while the OD value of the G2 group showed a significant difference compared to the G3 group (Table 1 and Fig. 2). At the third week, a significant difference was observed among all vaccinated groups, where the OD value of G1 was 1.241 compared to 1.007 and 1.121 for G2 and G3, respectively. After the second vaccination dose, the OD value of the G2 group at the first week showed a significant difference compared to the G1 and G3 groups. [Fig. 4: Mean OD values after the first and second vaccination doses for the G1, G2, G3 and C groups.] At the second week, the OD values were non-significantly different among all of the vaccinated groups. The OD value at the third week was significantly different between G1 and G3 as well as between G2 and G3, while it was non-significant between the G1 and G2 groups. Evaluation of the results between different doses of the optimum gamma irradiation vaccine: After the first vaccination dose, the OD values for the two doses of the optimum gamma irradiation vaccine showed a non-significant difference at the first week, and significant differences at the second and third weeks. After the second vaccination dose, the G2 vaccinated group at the first week showed a significant difference in OD value compared to the G1 group, while the difference was non-significant at the second and third weeks (Table 1). Evaluation of the total OD mean values after the first and second inoculations between all experimental groups: Comparative analysis of the total OD mean values in the control group and the different doses of the optimum gamma irradiation and high gamma irradiation vaccines after the first vaccination dose revealed a significant difference between the control and vaccinated groups. After the second vaccination dose, the total mean OD value of G2 showed a significant difference, while the total mean OD values of the G1 and G3 vaccinated groups were non-significant compared to the control group. The total OD mean values between the three vaccinated groups after the first vaccination dose were non-significantly different. After the second vaccination dose, the total OD mean value of the G2 group showed a significant difference compared to the G1 and G3 groups, and the difference between the G1 and G3 groups was non-significant (Table 1 and Fig. 4). Vaccination challenge: The mean OD values in the various challenge treatments are illustrated in Table 2. The mean OD values in the G1, G2 and G3 groups were significantly different compared to the control group after the first and second M. haemolytica challenge treatments. The animals vaccinated with the optimum irradiation vaccine at the 2×10⁹ dose showed the highest OD values after the first and second challenges compared to the animals inoculated with the optimum irradiation vaccine at the 4×10⁹ dose and the high gamma irradiation vaccine at the 4×10⁹ dose. The difference in OD value between G1 and G3 at the second M. haemolytica challenge treatment was non-significant. The total OD mean values for the experimental groups after the challenge treatments are shown in Table 3.
The total OD mean value of the G2 group showed a highly significant difference compared to the G1, G3 and control groups. DISCUSSION Although Pasteurella was described long ago by Louis Pasteur, it remains a ubiquitous organism with a worldwide distribution that causes several serious diseases in domestic animals and milder infections in humans. Its species are microbiologically characterized as gram-negative, non-motile, facultative anaerobes (not requiring oxygen) that have a fermentative type of metabolism (Odugbo et al., 2004; Oladele et al., 1999; Martino, 2000). ELISA was used to evaluate the efficiency of the newly developed gamma irradiation vaccines. Antibody production was evaluated using the Optical Density (OD) value as an indication of the efficiency of the vaccine against M. haemolytica. Comparative analysis of the results obtained from the different doses of the optimum gamma irradiation vaccine and the high gamma irradiation vaccine against the C group revealed that the overall mean OD values of the vaccinated G1, G2 and G3 groups showed a significant difference after the first vaccination dose compared to the C group. After the second dose, the overall mean OD value of the vaccinated G2 group was significantly different, while it was non-significant in the G1 and G3 groups compared to the C group (Table 1, Fig. 1-3). These results suggest that, in the G1 and G3 groups, antibodies were produced at higher levels after the first dose than after the second dose, because memory cells were initiated after the first dose and produced a significant amount of antibodies compared to the C group. The results are in agreement with Datta et al. (2006). After the second dose of vaccination, the antibody levels of the G1 and G3 groups increased compared to the C group, but this increase was smaller than that observed after the first dose. These results are in agreement with Sun (2009). Regarding the results of the G2 vaccinated group, the same significant amount of antibodies was produced after the second dose of vaccine inoculation as after the first inoculation. The results indicated that the booster dose (second inoculation dose) of the optimal gamma vaccine (2×10⁹ bacterial cells/dose) stimulated antibody production and maintained a high level of immune defense in the animals. The results of the comparative analysis between the three vaccines revealed that the overall mean OD values of the vaccinated groups after the first vaccination dose were non-significantly different between G1 and G2 as well as between G1 and G3, while G2 showed a significant difference compared to the G3 group. After the second vaccination dose, the overall mean OD value of the G2 group was significantly different compared to the G1 and G3 groups, and the difference between the G1 and G3 groups was non-significant. The results suggest that the second dose of the G2 vaccine could act as a booster dose, resulting in an increase in antibody production, while this advantage did not exist for the G1 and G3 vaccines. These results were confirmed by the vaccine challenge experiment, in which the G2 group recorded a highly significant amount of antibodies detected by ELISA compared to the G1 and G3 groups (Tables 2 and 3). The experimental results showed that the gamma irradiation vaccine at an inoculation dose of 2×10⁹ bacterial cells/dose was an effective vaccine which could provide a highly significant amount
2019-03-16T13:10:20.025Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "3c44c4b35b902152749128eeaed18f94be1b6c06", "oa_license": null, "oa_url": "https://doi.org/10.3923/rji.2015.17.26", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e6d3404e5352e6ed24fa6c97b9378e8a96e6a5e6", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
1249778
pes2o/s2orc
v3-fos-license
Subcutaneous Interferon β-1a May Protect against Cognitive Impairment in Patients with Relapsing–Remitting Multiple Sclerosis: 5-Year Follow-up of the COGIMUS Study Objective To assess the effects of subcutaneous (sc) interferon (IFN) β-1a on cognition over 5 years in mildly disabled patients with relapsing–remitting multiple sclerosis (RRMS). Methods Patients aged 18–50 years with RRMS (Expanded Disability Status Scale score ≤4.0) who had completed the 3-year COGIMUS study underwent standardized magnetic resonance imaging, neurological examination, and neuropsychological testing at years 4 and 5. Predictors of cognitive impairment at year 5 were identified using multivariate analysis. Results Of 331 patients who completed the 3-year COGIMUS study, 265 participated in the 2-year extension study, 201 of whom (75.8%; sc IFN β-1a three times weekly: 44 µg, n = 108; 22 µg, n = 93) completed 5 years' follow-up. The proportion of patients with cognitive impairment in the study population overall remained stable between baseline (18.0%) and year 5 (22.6%). The proportion of patients with cognitive impairment also remained stable in both treatment groups between baseline and year 5, and between year 3 and year 5. However, a significantly higher proportion of men than women had cognitive impairment at year 5 (26.5% vs 14.4%, p = 0.046). Treatment with the 22 versus 44 µg dose was predictive of cognitive impairment at year 5 (hazard ratio 0.68; 95% confidence interval 0.48–0.97). Conclusions This study suggests that sc IFN β-1a dose-dependently stabilizes or delays cognitive impairment over a 5-year period in most patients with mild RRMS. Women seem to be more protected against developing cognitive impairment, which may indicate greater response to therapy or the inherently better prognosis associated with female sex in MS. Introduction Cognitive impairment is an important feature of multiple sclerosis (MS), affecting up to 65% of patients [1]. Cognitive symptoms may develop from the early stages of MS, sometimes as the presenting symptoms, and in any form of the disease (clinically isolated syndrome [CIS], relapsing-remitting MS [RRMS], or primary or secondary progressive MS) [2]. Once present, cognitive symptoms are unlikely to resolve and the level of impairment is believed to increase with worsening of physical disability [3], disease duration [4,5], and the onset of progressive disease [4,5]. Deficits in memory, learning, attention, and information-processing ability, most commonly observed in MS, may reflect damage to specific brain regions that do not affect physical functioning. Therefore, cognitive decline can indicate disease progression in patients with stable physical function [5,6]. Cognitive symptoms alone can negatively affect many aspects of patients' daily lives, including employment and social relationships, reducing overall quality of life [7,8]. In addition, common MS comorbidities, such as fatigue and depression, can impair cognitive function and further increase disability levels [4,9,10]. Despite its high prevalence in MS, cognitive impairment is rarely measured as part of standard clinical assessments because many cognitive tests require specialist training and must be administered by a certified neuropsychologist. In addition, tests are often time consuming to perform [2]. For patients with cognitive impairment, treatment is based on symptomatic therapies that aim to optimize remaining cognitive function and thus reduce the impact of cognitive decline [11,12].
Alternatively, pharmacological treatment of comorbidities affecting cognitive performance can provide benefits for patients; for example, acetylcholinesterase inhibitors, which are widely used to treat Alzheimer's disease, may also benefit patients with MS [13]. There is considerable evidence to indicate that disease-modifying drugs (DMDs) can significantly improve outcomes for patients with MS by reducing lesion development and improving clinical measures of disease, such as relapse rate [14]. The observation that some magnetic resonance imaging (MRI) disease measures, such as lesion load and brain volume, correlate with cognitive impairment suggests that DMD treatment may also prevent or delay cognitive decline by attenuating inflammatory processes and preventing the development of new brain lesions or progressive brain atrophy [12,13]. However, as the pivotal trials of DMDs did not, in general, include cognitive assessments, the cognitive benefits of DMDs in patients with MS are unconfirmed. The COGIMUS (COGnitive Impairment in MUltiple Sclerosis) study evaluated cognitive decline in mildly disabled Italian patients with RRMS receiving treatment with interferon (IFN) β-1a, 22 or 44 µg (Rebif®; Merck Serono S.A., Switzerland), administered subcutaneously (sc) three times weekly (tiw). In this study, cognitive impairment was assessed using Rao's Brief Repeatable Battery (BRB) and the Stroop Color-Word Task (Stroop Test), which have been validated for use in patients with MS and for which Italian normative values are available [15]. After 3 years' follow-up, it was found that sc IFN β-1a may have dose-dependent cognitive benefits in this patient group. At year 3, the proportion of patients with cognitive impairment was significantly higher in the 22 µg group than in the 44 µg group (p = 0.03) and the risk of cognitive impairment was reduced by 32% with the 44 µg dose [16]. These findings may further support early initiation of high-dose IFN β-1a treatment in patients with RRMS. Here we report clinical and cognitive outcomes from the 2-year extension of the study, giving a total of 5 years' follow-up. Methods COGIMUS was a prospective, 3-year, multicenter, observational, Italian cohort trial. Patients were enrolled between September 2003 and March 2005. Methodological details have been reported elsewhere [16]. Following completion of the study, patients were eligible to enter a 2-year extension study, with a total follow-up of 5 years. Patients Eligibility criteria have been previously described [17]. Briefly, patients were aged 18-50 years with a diagnosis of RRMS (McDonald criteria), had an Expanded Disability Status Scale (EDSS) score of ≤4.0 and were naïve to DMD treatment. All patients at participating centers who had completed the core 3-year study were invited to participate in the 2-year extension study. All patients gave written informed consent prior to undergoing any assessments not performed as part of their routine care. The study protocol was first approved in […]. Treatment In the core study, patients were assigned to IFN-β treatment, with the formulation and dose at the discretion of their treating physician [16,17]. Of those who received sc IFN β-1a (N = 459), 223 (48.6%) received the 22 µg dose and 236 (51.4%) received the 44 µg dose. All patients who completed the 5-year follow-up continued on the same treatment as at year 3 for the duration of the extension study.
Relapses were treated with corticosteroids, and flu-like symptoms with non-steroidal anti-inflammatory drugs or paracetamol. DMDs other than the study drug were not permitted. Study objectives and endpoints The primary objective of the extension study was to determine the effects of two doses of sc IFN β-1a on cognition over 5 years; the primary endpoint was the proportion of patients with cognitive impairment at year 5. The main secondary objective was to identify factors that predicted the presence of cognitive impairment after 5 years on study. In addition, discontinuations and reasons for treatment discontinuation, and adverse events (AEs) during years 3-5, were recorded. Patients who discontinued treatment were followed regularly in the clinical trial setting and were included in the analyses if they had cognitive assessments at all time points, regardless of whether they had discontinued treatment. Evaluation of disease status Clinical and MRI assessments during the core study have been reported previously [17]. Patients attended two further visits at years 4 and 5 that comprised neurological assessment, including EDSS score, recording of relapses, and MRI (25 of 34 centers to year 3, and 19 centers from years 3 to 5). Neuropsychological evaluation All patients underwent neuropsychological evaluation at baseline and every 12 months during the core study. Two further neuropsychological assessments were performed at years 4 and 5, as described previously [17], namely the BRB (alternate versions administered in the order A, B, A, B) and the Stroop Test. Cognitive impairment was defined as 1 standard deviation (SD) below the mean Italian normative values for each cognitive test [15]. Cognitive testing of patients who had an ongoing relapse at the time of the scheduled assessment was delayed until 30 days after the last steroid injection. Statistical analyses For outcome measures at 5 years, only patients with 5 years of follow-up were included in the analyses. No imputation of missing data was considered. Analyses at 5 years were exploratory without adjustment for multiplicity. Cognitive data from baseline and years 1, 3, and 5 only (BRB version A) were analyzed to avoid differences due to administration of alternate versions of the BRB. The following tests were conducted: Pearson chi-square and McNemar tests to compare categorized proportions, Cox proportional hazards regression to compare longitudinal data on cognitive impairment, Cochran test for k-related samples to assess variation over time in the percentage of patients with cognitive impairment, and Friedman test for k-related samples to assess variation over time in the number of impaired tests in the study population and each treatment group. In addition, Kaplan-Meier survival curves were constructed to evaluate longitudinal differences between treatments. Risk factors for the presence of cognitive impairment over 5 years were identified using a multivariate regression model, which was developed by sequentially adding variables with a significant hazard ratio in univariate analyses. Statistical significance was set at 0.05. Patients and baseline characteristics Of the 40 original participating centers, 23 (accounting for 80.1% [265/331] of patients from the 3-year follow-up cohort) participated in the extension study. Of the 265 eligible patients, 201 (75.8%; sc IFN β-1a tiw: 44 µg, n = 108; 22 µg, n = 93) completed the 5-year follow-up and were included in these analyses.
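The survival analyses described in the statistical paragraph above can be reproduced in outline with standard tools. The following is a minimal illustrative sketch using the Python lifelines package; the data frame and column names are hypothetical stand-ins, not COGIMUS data or the study's actual analysis code:

```python
# Minimal sketch of time-to-cognitive-impairment analyses: Kaplan-Meier curves
# by treatment arm and a Cox proportional hazards model. All rows below are
# hypothetical placeholders purely to make the example runnable.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "years":     [5.0, 3.2, 4.0, 2.5, 5.0, 1.8, 5.0, 4.5],  # follow-up time
    "impaired":  [0,   1,   1,   1,   0,   1,   0,   1],    # 1 = >=3 impaired tests
    "dose_44ug": [1,   0,   1,   0,   1,   0,   0,   1],    # 1 = 44 ug arm, 0 = 22 ug arm
})

# Kaplan-Meier survival function for each treatment arm.
kmf = KaplanMeierFitter()
for dose, arm in df.groupby("dose_44ug"):
    kmf.fit(arm["years"], event_observed=arm["impaired"],
            label="44 ug arm" if dose else "22 ug arm")
    print(kmf.survival_function_)

# Cox proportional hazards model: a hazard ratio below 1 for dose_44ug would
# correspond to the protective effect of the higher dose reported in the paper
# (HR 0.68, 95% CI 0.48-0.97).
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="impaired")
cph.print_summary()
```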
The mean duration of follow-up was 5.6 years (range 4.5-6.1 years). Mean (SD) age was 39 (8.2) years and mean (SD) disease duration was 8 (4.4) years (mean [SD] disease duration at baseline: 3.9 [4.4] years). No differences were found between patients who did or did not participate in the 5-year follow-up in terms of baseline clinical and demographic characteristics, neuropsychological performance, or proportions receiving the 44 or 22 µg dose, with the exception of mean (SD) Environmental Status Scale score, which was greater in patients who had the 5-year follow-up (1.63 [2.5]) than those who did not (1.29 [2.4]). The male:female ratio was 0.6. Overall, there was no difference in the proportion of patients with or without cognitive impairment at year 3 (the end of the core study) who went on to participate in the 2-year extension study and complete the 5-year follow-up (Pearson chi-squared test = 0.574; Table 1). Cognitive impairment at 5 years A Cox proportional hazards survival analysis was performed to assess the development of cognitive impairment (proportion of patients with ≥3 impaired cognitive tests) during the 5-year study. Figure 1 shows Kaplan-Meier survival curves for this analysis, by treatment (discussed further below). The overall proportion of patients with cognitive impairment did not increase significantly over the 5-year period. Among patients with data available at all time points, the proportion with cognitive impairment was 18.0% at baseline and 22.6% at year 5 (Cochran test = 0.392; Table 2). Clinical outcomes Over the 5-year period, the mean relapse rate per patient per year was 0.21. The mean relapse rate remained stable between years 3 and 5. Median EDSS scores also remained stable between years 3 and 5 (median score at year 5: 2.0; interquartile range 2). The proportion of patients who were free from disability progression (as assessed by EDSS score) was 84% at year 3 and 71% at year 5. At the 5-year follow-up, 82% of patients who had been progression-free at year 3 had an unchanged level of physical disability. The proportion of patients who were free from disease progression at year 5 was similar in those with and without cognitive impairment at year 5 (33% vs 27%, respectively). There were no differences in clinical outcomes between treatment groups. Safety AEs were consistent with the known safety profile of IFN β-1a [18]. The most common AEs reported over the 5-year follow-up were injection-site reactions (30% of patients), flu-like symptoms (15% of patients), and depression (2% of patients). Overall, 50% of AEs were classified as mild in severity. Discussion In this extension study, we found that, in the study population as a whole, the proportion of patients with impaired cognitive function remained stable over the 5 years of follow-up. However, after 5 years of treatment with IFN β-1a, a higher percentage of men than women had cognitive impairment. These results suggest that sc IFN β-1a may have a protective effect on cognitive performance and that this effect may be greater in women than in men. The current finding that the level of cognitive impairment remained stable during the 5 years confirms and extends our previous observations [16], suggesting that sc IFN β-1a may stabilize cognitive function in mildly disabled patients with MS. Natural history studies of cognitive impairment in patients with MS indicate that cognitive performance would be expected to decline by approximately 5% per year in this patient group [6].
Indeed, the proportion of patients with cognitive impairment has been reported to increase substantially from 29% to 54% during the first 5 years after a CIS [19], and from 52.3% to 71.4% in the first 7 years after diagnosis of MS [8]. Significant cognitive deterioration over 5 years in patients with CIS or MS with a disease duration of ≤6 years has also been reported, particularly in the domains of working memory, speed of information processing, and spatial memory [20]. In contrast, we found no significant increase in the proportion of patients with cognitive impairment over the same timeframe in the 5-year follow-up cohort (mean [SD] duration of disease at baseline: 3.9 [4.4] years). Considering that cognitive function would be expected to deteriorate over 5 years in the absence of treatment, including in patients with intact cognitive function at baseline [21], the present results confirm our previous findings suggesting that sc IFN β-1a can prevent or delay the onset of cognitive symptoms in patients with MS [16]. Our observation of a dose effect, with the 44 µg dose of sc IFN β-1a being a significant predictor of absence of cognitive impairment at year 5, provides further evidence of a beneficial treatment effect. How IFN β-1a treatment may bring about cognitive benefits is an interesting topic for debate. A likely explanation is that cognitive effects are a result of the known anti-inflammatory actions of IFN β, which reduce lesion development in the central nervous system (CNS). Correlations between cognitive function and MRI measures of disease have been reported [2], thus supporting this theory. [Table 1. Proportion of patients with and without cognitive impairment at year 3 (end of the core study) who did/did not complete follow-up at year 5.] There is increasing evidence to indicate that MS-related changes in cortical matter (lesions and atrophy) play a major role in the development of cognitive symptoms [22]. Recently, sc IFN β-1a has been shown to significantly decrease the development of new cortical lesions and cortical atrophy, which could further explain how sc IFN β-1a protects against cognitive decline in patients with MS [23]. In addition to its immunomodulatory properties, IFN β may also indirectly protect against neuronal damage or promote repair by increasing the production of neurotrophic factors, including nerve growth factor and brain-derived neurotrophic factor [24,25]. However, the relationship between neurotrophic factor production and cognitive outcomes in patients with MS has not been studied. Our observation that a greater proportion of men than women had cognitive impairment at year 5 is intriguing, particularly as this difference was not observed at the end of the 3-year core study, and suggests a better response to sc IFN β-1a in women, at least for this outcome. As in other autoimmune diseases, sex differences have been reported in MS susceptibility [26][27][28]. Differences in disease course and severity have also been highlighted, with male sex being associated with a more progressive form of the disease and worse outcomes [28,29]. Indeed, large epidemiological studies have shown that men reach the same level of disability (EDSS score) as women in a shorter time from diagnosis, are more likely to present with a primary progressive course, and are more susceptible to destructive lesions than women; in contrast, inflammatory lesions seem to be more prevalent in women [30,31].
The underlying causes of these sex differences are unknown; however, genetic predisposition between sexes [32,33], the modulation of immune responses by sex hormones, inflammatory processes, tissue injury and repair mechanisms, and possible neuroprotective effects seem to play a part [34,35]. Consistent with a role for sex hormones is the observation that relapse rates decrease during pregnancy and increase post partum [36,37]. Our current findings are in agreement with previous studies showing a differential response to IFN β in women and men with MS regarding disability progression [38], although another study in patients with RRMS did not find sex differences regarding response to intramuscular IFN β-1a [39]. It is also possible that the apparent lower response to IFN β-1a in men seen here is, in fact, simply due to the inherently worse prognosis in men; that is, in the absence of treatment, the degree of cognitive decline in men may have been even greater than was observed. COGIMUS confirmed that cognitive impairment affects a significant proportion of patients from the early stages of MS: over half of all patients in this cohort had impaired performance on at least one cognitive test despite being at an early stage of the disease and having a mild level of disability at study entry [16]. As existing cognitive impairment is a risk factor for further cognitive decline [40], it is clearly important that patients with cognitive symptoms are identified and their treatment tailored as necessary. However, the observation that cognitive impairment can develop during the first 5 years of the disease in patients who previously had no evidence of cognitive symptoms highlights that all patients are potentially at risk of cognitive impairment [19]. The importance of initiating early DMD treatment to prevent or slow the accumulation of damage to the CNS, including brain atrophy [41], and thus physical disability, is now recognized. As cognitive decline occurs in the absence of ongoing relapse or disability progression even in the early stages of disease [21], our findings suggest that early IFN β treatment may not only protect those with cognitive symptoms from further cognitive decline, but may also prevent the development of cognitive impairment. Whether these observations reflect the prevention of damage to cortical tissue would be an interesting topic for further investigation. (Table 2. Proportion of patients with and without cognitive impairment at baseline and years 1, 3 (end of core study), and 5 (end of extension study).) Limitations of this study should be considered when interpreting these findings. The lack of an untreated control group and a study discontinuation rate of over 20% between year 3 and completion of the 2-year extension limit the conclusions that can be drawn regarding efficacy. Several centers from the original trial did not participate in the extension phase, accounting for a considerable proportion of the reduction in patient numbers between baseline and year 5. There may also have been a selection bias for patients who are doing well on treatment; patients with declining cognitive function may have been more likely to drop out. However, as there was no significant difference between the proportion of patients with or without cognitive impairment at year 3 who participated in the extension study, cognitive impairment at the end of the core study did not appear to be a predictor of lack of participation at year 5.
One limitation in this respect is the lack of availability of patient data at year 4. Furthermore, because cognitive performance was evaluated using a different version of the BRB (version B) in year 2 from that used in years 1, 3, and 5 (version A), and the two versions differ slightly regarding the weight given to some cognitive functions, year 2 data were excluded from the analysis to ensure that the longitudinal data were comparable. Finally, the differential effects in men and women could have been influenced by the different numbers of men and women, or by other possible differences between the sexes, such as adherence to treatment, which were not assessed. Despite these limitations, the results reported here add to the evidence suggesting that sc IFN β-1a may have dose-dependent cognitive benefits in patients with RRMS. Here we also demonstrate that these benefits persist over at least 5 years of treatment and may be more pronounced in women than in men, although it is possible that the sex difference reflects inherently poorer prognosis in men. Additionally, sc IFN β-1a was shown to achieve good disease control and was well tolerated. Our results further support the clinical benefit of initiating sc IFN β-1a treatment, even in patients with mild physical disability.
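As an illustration of the survival analysis used above (Kaplan-Meier curves plus a Cox proportional hazards model for time to cognitive impairment), here is a minimal sketch in Python using the lifelines package. All column names and values are invented for illustration only; they are not study data, and the real analysis would include the full covariate set described in the methods.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical per-patient records: follow-up time in years, an event
# flag (1 = developed cognitive impairment, i.e. >=3 impaired tests),
# and two illustrative covariates (dose group and sex).
df = pd.DataFrame({
    "years":    [5.0, 3.2, 5.0, 4.1, 5.0, 2.7, 4.3, 4.8],
    "impaired": [0,   1,   0,   1,   0,   1,   1,   0],
    "dose_44":  [1,   0,   1,   0,   1,   0,   1,   1],  # 1 = 44 ug, 0 = 22 ug
    "male":     [0,   1,   0,   1,   1,   0,   1,   0],
})

# Kaplan-Meier survival curves by dose group, as in Figure 1.
kmf = KaplanMeierFitter()
for dose, grp in df.groupby("dose_44"):
    kmf.fit(grp["years"], event_observed=grp["impaired"],
            label="44 ug" if dose else "22 ug")
    kmf.plot_survival_function()

# Cox proportional hazards model for development of impairment;
# remaining columns are treated as covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="impaired")
cph.print_summary()  # hazard ratios with 95% CIs and p-values
```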
ANALYSIS OF FALSE POSITIVE AND FALSE NEGATIVE FINE-NEEDLE ASPIRATION CYTOLOGY OF BREAST LUMP: A PERSONAL EXPERIENCE This study aimed to determine the reasons for sampling and interpretative errors in false negative and false positive diagnoses of breast carcinoma on fine-needle aspiration cytology (FNAC) material. In total, 912 breast FNACs were performed between 2000 and 2004, and 126 of them were diagnosed as breast carcinoma. Only those cases with cytohistological discrepancies were cytologically reviewed, in which the cytological material was abnormal, misinterpreted to some extent, or both. There were 8 false negative diagnoses (false negative rate 6.3%) and 3 false positive diagnoses (false positive rate 2.3%). The results of this study showed that among the 8 false negative cases, 5 showed hypocellular smears with minimal nuclear pleomorphism of the cells. Histology revealed 3 infiltrating ductal carcinomas of scirrhous subtype and 2 infiltrating lobular carcinomas. The smears of the other 2 false negative cases, histologically verified as well-differentiated infiltrating ductal and pure intraductal carcinomas, were hypercellular and composed predominantly of groups of cohesive, small, and uniform cells simulating fibroadenoma or fibrocystic changes. The smear of the last false negative case (histologically verified as infiltrating ductal carcinoma with extensive cystic degeneration) revealed large sheets of macrophages and degenerated epithelial cells on an inflammatory background. Of the 3 false positive cases, 2 were histologically proved to be fibroadenoma and 1 fibrocystic changes. Smears of the 2 false positive fibroadenomas showed very high cellularity, overlapped clusters, and frequent stripped bipolar nuclei. The fibrocystic case showed tight clusters of apocrine cells and sheets of loosely aggregated macrophages that were over-interpreted. The conclusion of this study is that hypocellularity and relative nuclear monomorphism are the reasons for failure to diagnose breast carcinoma. Careful attention should be paid to extreme nuclear monomorphism and absence of naked bipolar nuclei. Awareness of smear cellularity and subtle cytological features will aid the correct preoperative diagnosis of lobular, scirrhous, and intraductal carcinomas, and false negative diagnoses can be minimized. A cytologically atypical or suspicious diagnosis together with positive mammographical and clinical findings should suggest a diagnosis of malignancy. Hypercellular smears with overlapped clusters should be carefully assessed for uniformity of the cells and detailed nuclear features. If the full-blown malignant cytomorphological features are not visible, a diagnosis of suspicious or inconclusive should be made, and frozen section or intraoperative imprint cytology is recommended before surgery. Correspondence to: Dr.
Sawsan Al-Haroon, Department of Pathology and Forensic Medicine, College of Medicine, Basrah, Iraq. Introduction Fine-needle aspiration cytology (FNAC) is a routine test in the evaluation of breast lesions and plays a key role in the preoperative diagnosis of breast carcinoma 1,2. The diagnostic failure of FNAC seems to be attributed mainly to sampling and/or interpretative errors 3,4. To understand the causes of diagnostic pitfalls in FNAC, all the false positive and false negative FNACs of breast lumps were reviewed along with their histological confirmation. Materials and Methods Between June 2000 and March 2004, 912 fine-needle aspirates of female breast lumps were performed by the author at the Medical Consultative Center of Basrah University and Basrah Teaching Hospital. One hundred and twenty-six breast carcinomas were diagnosed by FNAC; there were 8 false negative diagnoses (false negative rate 6.3%) and 3 false positive diagnoses (false positive rate 2.3%). On review of their cytological smears, the 8 cases that were false negative for malignant cells were re-diagnosed as 4 suspicious, 3 benign, and 1 malignant. The 3 cases that were false positive for malignant cells were re-diagnosed as 2 suspicious and 1 benign. The detailed clinical and cytological features of these cases were correlated with the subsequent histological features. Results A summary of the original and reviewed cytological diagnoses, along with the histological diagnosis and the age of the patients, is shown in Table I. All cytologically positive cases were followed by histological examination of the excised pathological specimens (excisional biopsy or mastectomy), which in the 3 false positive cases revealed 2 fibroadenomas and 1 fibrocystic changes (disease). The 8 false negative cases were also followed by excisional biopsy because of their clinical and mammographical suspicions. On histological examination, they revealed 2 infiltrating lobular carcinomas of classic subtype; 3 infiltrating ductal carcinomas of scirrhous subtype; 1 infiltrating ductal carcinoma of classic subtype; 1 infiltrating ductal carcinoma with massive cystic degeneration; and 1 intraductal (in-situ) carcinoma. Tables II and III analyze the detailed cytological features of these 11 false positive and false negative cases by tabulating them against the criteria for benign and malignant features.
Discussion Fine-needle aspiration cytology is a well recognized preoperative diagnostic technique that has been used to diagnose breast cancer for over 50 years. The specificity of FNAC approaches that of frozen section analysis. The reported specificity rates for FNAC vary from 96% to 100% [4][5][6][7][8][9][10][11]. Most recent studies reported false positive rates ranging from 0 to 6% 4,8,9,11-15. This high degree of diagnostic accuracy allows definitive therapy to proceed on the basis of an FNAC diagnosis of malignancy 14,15. The sensitivity of FNAC for the detection of palpable carcinoma varies widely in reported series (65% to 98%). It is lower than that achieved by frozen section [4][5][6][7][8][9][10][11]. The sensitivity of the diagnostic procedure is determined by technical and interpretative limitations, with reported false negative rates ranging from 0 to 35% 4,8,9,12-16. Table 4 shows the sensitivity, specificity, positive predictive value, negative predictive value, and false positive and false negative rates of the present study in comparison with ten other studies in the literature 5,8,10,14,17-22. In this study, 5 of the 8 false negative cases (cases 1, 2, 6, 7, and 8) were diagnosed as negative for malignant cells mainly because of very low cellularity, little nuclear pleomorphism, and low atypism. These 5 cases were histologically diagnosed as 3 infiltrating ductal carcinomas of scirrhous subtype and 2 infiltrating lobular carcinomas of classic subtype. Poor cellular yield with subtle cytological features of infiltrating lobular and scirrhous (fibrotic) carcinomas has been found to be a source of false negative FNAC, and mammography showed better discrimination in such cases 1,14,18,19,23,24. Criteria used to diagnose a malignant condition in FNAC of the breast are well established and, in satisfactory specimens, allow a definitive diagnosis in most cases of breast cancer 25. However, despite these criteria, there remain cases of breast carcinoma in which the malignant nuclei are small and uniform and most cells are in cohesive clusters mimicking fibroadenoma or fibrocystic changes 26. Such a diagnostic difficulty was encountered in the present study and was responsible for 2 false negative cases (cases 4 and 5). It has been observed that such malignant lesions are usually well-differentiated infiltrating ductal or intraductal carcinomas [25][26][27]. This study supports this observation, in which case 4 was histologically diagnosed as well-differentiated infiltrating ductal carcinoma with an intraductal (in-situ) component and case 5 as pure intraductal carcinoma arising on a background of proliferative fibrocystic changes. The last false negative case (case 3) was histologically proved to be infiltrating ductal carcinoma with massive cystic degeneration. In this case, the aspirated cloudy fluid was cytologically misinterpreted as fibrocystic changes even on review, because it showed large sheets of macrophages with degenerated epithelial cells, as well as inflammatory cells and necrotic debris. Most recent studies reported that FNAC tended to be less reliable and inadequate, with a high false negative rate, in the diagnosis of lobular, scirrhous, and intraductal carcinomas 1,14,18,19,23,24. However, in both hypocellular and hypercellular cytological smears, all the criteria for benignancy and malignancy should be carefully taken into consideration; for example, lack of single bipolar nuclei, loss of normal cell adhesion and
presence of some atypical nuclei should raise the suspicion of malignancy, especially when malignancy is suspected clinically and radiographically, or when abnormal tissue texture is felt at the time of aspiration. Fibroadenoma and fibrocystic changes are the most common benign breast lesions to be distinguished from adenocarcinoma by FNAC 26. In the present study, 2 of the 3 false positive cases were histologically verified as fibroadenoma, pointing to the difficulty of diagnosing this lesion at times. The smears of these 2 false positive cases (cases 10 and 11) were misinterpreted on the original cytological diagnosis because they showed highly cellular smears with large cells having prominent nucleoli, as well as frequent naked bipolar nuclei and few nuclei with cytoplasm. There were a few overlapped clusters with some pleomorphism too. These features misled towards a positive diagnosis or suspicious interpretation. The third false positive case (case 9) turned out histologically to be fibrocystic changes. There were many tight clusters of apocrine cells and large sheets of loosely aggregated macrophages, which were over-interpreted as malignant cells and loss of cell cohesion. These observations are supported by the literature, since fibroadenoma and fibrocystic changes are considered the major pitfalls in diagnosing breast carcinoma 26,28. Rogers and Lee 26 reported that no combination of cytological features accurately separated all benign and malignant cases in their study. In conclusion, FNAC represents a most valuable preoperative procedure for the diagnosis of breast cancer, as the false positive and false negative rates were acceptable, i.e., 2.3% and 6.3% respectively, but lesions such as fibroadenoma and fibrocystic changes can still create some difficulties. FNAC of the breast has some unavoidable limitations, mainly due to poor sampling; poor yield of cells caused by tumour fibrosis, small tumour size, or poor preservation; and difficulty in identifying small well-differentiated malignant cells or atypical benign cells, with inadequate interpretation. Because the sensitivity and specificity rates of FNAC are not always 100%, the technique should be used with this limitation in mind 29. (Table I. The age, original and reviewed cytodiagnosis with histological diagnosis of eight false negative and three false positive cases.)
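For readers who want to reproduce the accuracy figures discussed above, the standard definitions follow directly from the 2×2 confusion matrix. The short Python sketch below is illustrative only; note that the paper reports its false negative and false positive rates against its own denominators, which may differ from the textbook definitions used here, and the example counts are hypothetical rather than taken from the study's tables.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Textbook accuracy measures from a 2x2 confusion matrix.

    tp: malignant on FNAC and malignant on histology
    fp: malignant on FNAC but benign on histology
    tn: benign on FNAC and benign on histology
    fn: benign/suspicious on FNAC but malignant on histology
    """
    return {
        "sensitivity":         tp / (tp + fn),
        "specificity":         tn / (tn + fp),
        "ppv":                 tp / (tp + fp),
        "npv":                 tn / (tn + fn),
        "false_negative_rate": fn / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
    }

# Hypothetical counts summing to 912 aspirates, not the study's table:
print(diagnostic_metrics(tp=118, fp=3, tn=783, fn=8))
```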
Magnetic nanoparticle detection based on nonlinear Faraday rotation Magnetic nanoparticles (MNPs) have attracted interest in various research fields due to their special superparamagnetic and strong magneto-optical effects, especially as contrast agents to enhance the contrast of medical imaging. By introducing an interaction coefficient, we propose a model of the nonlinear Faraday rotation of MNP under the excitation of an external alternating magnetic field. In our homemade device (which can detect rotation angles as small as about 2×10⁻⁷ rad), it has been verified that the higher harmonics of the Faraday rotation can avoid the interference of the paramagnetic and diamagnetic background at lower concentrations. What's more, the higher harmonics of the Faraday rotation of MNP can be detected in real time and have a linear relationship with concentration. In the future, it is expected that MNP can be used as a magneto-optical contrast agent to achieve high-resolution imaging in vivo. MNP has been widely studied for its excellent and diverse physical properties. Especially in biomedical applications, MNP is often used as a contrast agent or tracer after surface modification for imaging biological tissues in vivo 1-4. Among these applications, magnetic resonance imaging (MRI) and magnetic particle imaging (MPI) techniques are representative [5][6][7][8][9][10][11]. As a contrast agent, MNP affects the relaxation time of the echo signal and effectively improves the contrast of images in MRI 5,6. In addition, MPI is considered one of the most promising imaging techniques for metabolic imaging, as the spatial concentration distribution of MNP is directly inverted by measuring the magnetization response of MNP [7][8][9]. However, the full width at half maximum (FWHM) of the point spread function (PSF) in MRI and MPI is limited by the spatially encoded magnetic field. Therefore, it is difficult to break through the theoretical limit of 1 mm in the resolution of these imaging methods 10,11. Optical detection of MNP is considered to be one of the most promising technologies to break through the bottleneck of high-resolution imaging in vivo. For example, it is possible to image microstructures such as brain vasculature and nerves in the near-infrared biological window (700-900 nm), because the resolution is only a few μm [12][13][14][15]. MNP suspensions have been verified to have strong magneto-optical effects [16][17][18]; a magneto-optical film of MNP can not only change the transmittance of the incident light 19,20, but also produce the Faraday magneto-optical effect 21,22 and the Cotton-Mouton effect 23,24 under the excitation of an external magnetic field. Therefore, MNP can be used as a magneto-optical contrast agent for high-resolution optical imaging of tissues in vivo. The most important feature of MNP as a magneto-optical contrast agent is that the polarization state of light is independent of the light intensity during propagation, i.e., the attenuation of the light intensity does not affect the angle of the polarization plane.
Although light is severely absorbed and scattered as it propagates through human tissues such as skin and fat 25, the rotation angle of the polarization plane is not affected by this attenuation. The experiments were carried out using a homemade Faraday rotation detection device, and the schematic is shown in Fig. 1. The light intensity received by the two differential inputs of the balance detector can be expressed as

$$I_1 = I_0 \sin^2(\theta + \pi/4), \qquad I_2 = I_0 \cos^2(\theta + \pi/4) \qquad (3)$$

where $I_0$ is the intensity of light transmitted through the MNP suspension and $\theta$ is the Faraday rotation angle. Simplifying with trigonometric identities, the differential signal of the balance detector is $I_1 - I_2 = I_0 \sin(2\theta)$, and the common signal is $I_1 + I_2 = I_0$. When the Faraday rotation angle of the MNP suspension is small,

$$\theta \approx \frac{\sin(2\theta)}{2} = \frac{I_1 - I_2}{2(I_1 + I_2)} \qquad (4)$$

so the interference of the light intensity $I_0$ can be eliminated. The direction of the rotation shown in Fig. 3 is almost the same as that of water, i.e., the direction of the Faraday rotation produced by MNP is the same as that of water. Although the magnetization response of MNP is much higher than that of water, the Faraday rotation is not enhanced by orders of magnitude. Therefore, we consider that MNP only enhances the local magnetic field around the water by interaction, and the effect can be expressed by the interaction coefficient.
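The small-angle recovery of θ from the two detector channels, and the appearance of higher harmonics when the rotation follows a nonlinear (Langevin-type) magnetization under a sinusoidal drive field, can be sketched numerically. The following Python snippet is illustrative only: the Langevin model, drive parameters, and amplitudes are assumptions for demonstration, not the parameters of the actual device or the paper's interaction-coefficient model.

```python
import numpy as np

def faraday_angle(i1, i2):
    # Small-angle estimate from the balanced detector, Eq. (4):
    # theta ~ (I1 - I2) / (2 (I1 + I2)); the sum cancels I0 drift.
    return (i1 - i2) / (2.0 * (i1 + i2))

def langevin(x):
    # L(x) = coth(x) - 1/x, a common superparamagnetic response model.
    x = np.asarray(x, float)
    out = np.empty_like(x)
    small = np.abs(x) < 1e-6
    out[small] = x[small] / 3.0          # series limit avoids 0/0 at x = 0
    xs = x[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    return out

# Sinusoidal drive field; the nonlinear response generates odd harmonics.
fs, f0 = 100_000.0, 100.0                # sample rate (Hz), drive frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
theta = 1e-5 * langevin(3.0 * np.sin(2 * np.pi * f0 * t))  # rotation (rad)

# Detector channels for the polarizer pair set at +/-45 degrees, Eq. (3).
i0 = 1.0
i1 = i0 * np.sin(theta + np.pi / 4) ** 2
i2 = i0 * np.cos(theta + np.pi / 4) ** 2
theta_est = faraday_angle(i1, i2)

# Read the drive harmonics off the spectrum; the 3rd harmonic grows with
# the nonlinearity (and hence, per the paper, with MNP concentration).
spec = np.abs(np.fft.rfft(theta_est)) / len(theta_est)
freqs = np.fft.rfftfreq(len(theta_est), 1.0 / fs)
for k in (1, 2, 3):
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"harmonic {k}: {spec[idx]:.3e} rad")
```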
Retrobulbar and Tongue Base Pyogranulomatous Myositis Resulting in Strabismus in a Dog: Case Report A seven-year-old female spayed Australian Shepherd was presented for a 3-day history of left eye ventromedial strabismus, episcleral injection, protrusion of the third eyelid, miosis, and enophthalmia. Magnetic Resonance Imaging (MRI) identified lesions in the left medial pterygoid muscle and left tongue base. Cytology and histopathology revealed pyogranulomatous inflammation with rod-shaped bacteria and pyogranulomatous myositis, respectively. One month of oral antibiotics resolved both lesions. Repeat MRI showed a mild decrease in size of the left medial pterygoid muscle consistent with fibrosis. Clinically, residual, positional ventral strabismus remained upon dorsal neck extension, but all other ophthalmic abnormalities resolved. To the authors' knowledge, this is the first report of pyogranulomatous myositis causing this constellation of clinical signs and of repeat imaging depicting resolution of these lesions with therapy. Two separate lesions with remarkably similar imaging characteristics were associated with the soft tissues of the head. One was associated with the left medial pterygoid muscle immediately ventral to the orbital fissure (Figure 2). It measured ∼2.5 × 1.3 × 1.2 cm (length × height × width; L × H × W) and mostly followed the normal muscle contour aside from a focal region of muscle expansion in the dorsal aspect of the lesion. The second lesion was associated with the left side of the tongue base and was ovoid in shape (Figures 3A-G). Both lesions were mildly heterogeneously T2 and STIR hyperintense, T1 iso- to mildly hyperintense, PD hyperintense, did not suppress on FLAIR, and did not exhibit susceptibility artifacts on T2*-weighted images. Pinpoint intralesional T1 and T2 hypointense foci were noted. The lesions were strongly contrast enhancing and mildly heterogeneous in intensity. The left mandibular and medial retropharyngeal lymph nodes were mildly enlarged and heterogeneously contrast enhancing. No intracranial abnormalities were identified. Ultrasound-guided tissue sampling or surgical biopsy of the lesion at the tongue base was recommended. Ultrasound examination of the ventral tongue was performed using a high-frequency linear transducer (5-12 MHz, Epiq5, Philips Ultrasound, Bothell, WA). The tongue base lesion appeared well-circumscribed, hypoechoic, and fairly homogeneous (Figure 3H). CYTOLOGY AND HISTOPATHOLOGY Cytology of a fine needle aspirate of the tongue base mass revealed pyogranulomatous inflammation and rod-shaped bacteria. An incisional biopsy of this lesion revealed pyogranulomatous myositis, possibly secondary to prior foreign body penetration. Foreign material was not identified within the submitted biopsy sample. A tissue culture was not done, but, in retrospect, should have been submitted. Sampling of the pterygoid muscle lesion could not be performed due to proximity to the maxillary artery. TREATMENT The patient was placed on oral amoxicillin/clavulanic acid (15.6 mg/kg q12h, Clavamox TM, Zoetis Services, LLC, Parsippany, NJ) for 1 month and the clinical signs resolved. Ten weeks after presentation, the ophthalmic examination was normal except for residual, positional ventral strabismus OS during dorsal extension of the neck (Figure 1). FOLLOW UP A repeat MRI examination was performed 10 weeks after presentation. The protocol was identical to the initial study.
The previously seen soft tissue lesions associated with the left medial pterygoid muscle and the tongue base were largely resolved. Faint regions of altered signal intensity (minimal T1 and T2 hyperintensity and mild contrast enhancement) were still noted (Figure 4). The left medial pterygoid muscle was mildly smaller compared to the right. The mandibular and medial retropharyngeal lymph nodes were normal. No new abnormalities were identified. DISCUSSION This report describes a unique cause of ventral strabismus in a dog. Given the initial clinical signs of resting ventromedial strabismus, retrobulbar soft tissue pathology was suspected. Strabismus can be classified in many ways, such as resting vs. positional and restrictive vs. neuronal. A "resting" ventral strabismus is most indicative of a restrictive process, such as fibrosis or muscle impingement, that prevents proper relaxation of specific muscles. A "positional" strabismus is only seen in certain head orientations and most commonly occurs from a vestibular neuropathy (1). In the current report, ventromedial strabismus was present at rest indicating the reciprocal extraocular muscle was unable to move the eye against the restricted muscle. Passive forced duction testing confirmed a restrictive process as dorsolateral movement of the globe was not possible. There was also mild miosis indicating at least mild cranial nerve III (CN III) dysfunction. If complete CN III dysfunction alone was present, then miosis and ventrolateral strabismus would be expected. If there was just a neuronal process occurring, the globe would move easily on forced duction, and contracture of the reciprocal extraocular muscle would cause strabismus opposite the paretic muscle. On presentation, the restricted globe movement, ventromedial strabismus, and miosis indicated a combination of a neuropathy of the branches of CN III and inflammation of the musculature. Once CN III exits the orbital fissure, several branches run through the orbit to innervate the parasympathetic iris and ciliary body muscles, the dorsal, medial, and ventral rectus muscles, and the ventral oblique muscle. The medial pterygoid muscle forms part of the posterior orbital floor (1). The intramuscular lesion in this tissue could have compromised branches of CN III and/or inflamed the ventral rectus muscle resulting in ventromedial strabismus and miosis. Since the remainder of the neuro-ophthalmic examination was normal and the clinical signs correlated with CN III dysfunction from possible ventral orbit inflammation, no other cranial nerve pathologies were suspected. Advanced imaging was therefore needed to investigate these tissues. Two intramuscular lesions were appreciated on MRI. Differential diagnoses for intramuscular nodules and masses in humans include primary and metastatic neoplasia, hematomas, nodular myositis, sarcoidosis, crystal deposition and post injection changes (2). Dependent on the underlying cause, the MRI characteristics of the nodules and concurrent abnormalities are variable. Reports of imaging findings in dogs with muscle lesions are limited. Inflammatory myopathies encompass a large group of heterogeneous disease processes of various etiologies including infectious and immune-mediated etiologies. Even though reported MRI features are variable, these conditions often cause multifocal patchy, ill-defined intramuscular lesions rather than distinct nodules as seen in our case (3)(4)(5)(6)(7). Abnormalities in canine myopathies may be unilateral or bilateral. 
With some inflammatory conditions (e.g., masticatory myositis and iliopsoas myopathy), changes may appear fairly symmetric (5,8,9). Upon questioning, the dog's owners mentioned that the dog frequently chews on sticks, but they had not noticed any clinical signs commonly seen with acute penetrating oropharyngeal injuries such as retching, salivation, or pain. The pathology results in our patient suggested a prior penetrating injury; however, it is worth noting that foreign material was not identified on imaging or histopathology. It is conceivable that the ends of a sharp-edged stick penetrated both the dorsal and ventral lining of the oral cavity, introduced bacteria into the soft tissues, and caused the inflammatory lesions found on imaging without actually leaving foreign material behind. Alternatively, it is possible that foreign particles were left behind but were too small to be seen on imaging or histopathology. Little is known about the accuracy of MRI in detecting foreign bodies in soft tissues. In one experimental study, acute wooden foreign material appeared T1 and T2 hypointense to surrounding musculature (10). Even though CT and ultrasound performed better in the identification of foreign bodies in that study, the results have to be interpreted with caution, as the study was performed on cadaver specimens and foreign body-induced soft tissue changes could not be assessed. Inflammatory soft tissue changes secondary to foreign bodies appear T2 hyperintense and are expected to highlight any foreign material embedded within the lesion (5,7,11). Other common findings with foreign bodies include fluid cavities and sinus tracts. The intralesional punctate T1 and T2 hypointense foci were most consistent with fibrosis or may have represented very small residual foreign body fragments not identified on histopathology. Metastatic disease to muscle may occur secondary to sarcomas (e.g., hemangiosarcoma or fibrosarcoma), carcinomas (e.g., adenocarcinoma or squamous cell carcinoma), and round cell tumors (histiocytic sarcoma, lymphoma, or melanoma). Most reports of imaging findings with muscle metastases in dogs are based on whole-body computed tomography studies (12,13). On CT, muscle metastases typically appear as well-demarcated, oval to round lesions with variable contrast enhancement. MRI findings in a dog with adenocarcinoma metastasis to the intertransversarius cervicis muscle included a uniform, T2 hyperintense, T1 isointense, and uniformly contrast-enhancing intramuscular nodule (14). A repeat MRI was done to investigate if the resolution of clinical signs corresponded to resolution of the lesions noted on the initial MRI. The decrease in muscle volume and mild diffuse signal intensity changes seen associated with the left medial pterygoid muscle are consistent with fibrosis and correspond to abnormalities reported with other fibrotic myopathies in dogs (5,7,15). Fibrosis within this area is consistent with the clinical finding of persistent, positional ventromedial strabismus and a positive, passive forced duction test. The inability to dorsally rotate the globe most likely indicates a restrictive process. The initial passive forced duction test appeared to have improved 2 weeks later. This could have been due to partial consolidation of retrobulbar inflammation or due to the dog being anesthetized. Since the previously noted miosis resolved, there was likely consolidation of retrobulbar inflammation that decreased compression of CN III branches.
Residual fibrosis of the medial pterygoid muscle and surrounding fascia likely resulted in the residual, positional ventral strabismus. CONCLUDING REMARKS In conclusion, this report describes the MRI findings of a dog with pyogranulomatous myositis of the tongue base and left medial pterygoid muscle resulting in ventromedial strabismus OS. Rather than being ill-defined, these inflammatory lesions were distinct nodules. The current patient had a history of gnawing on various objects, including sticks. Therefore, both lesions could have occurred from an object penetrating the tongue base and retrobulbar space; however, no foreign material was found on histopathology and no traumatic event was observed. While no culture of the tongue base lesion was performed, cytology and histopathology revealed rod-shaped bacteria and myositis, respectively. Anaerobic and aerobic cultures should be performed for similar-appearing lesions to help with diagnosis and guide antibiotic therapy. Given their appearance, both lesions were assumed to be from the same process. A one-month course of a broad-spectrum antibiotic resolved the lesions, confirming this assumption. A repeat MRI examination showed absence of retrobulbar inflammation and fibrosis of the left medial pterygoid muscle, correlating with resolution of the previously noted neuro-ophthalmic abnormalities and with the persistent, clinically evident positional ventral strabismus, respectively. DATA AVAILABILITY STATEMENT All datasets presented in this study are included in the article/supplementary material. ETHICS STATEMENT Ethical review and approval was not required for the animal study because the report was written retrospectively on a patient treated with standards of care at a veterinary teaching hospital. Patient care, including diagnosis and treatment, did not include methods intended for research. Written informed consent for participation was not obtained from the owners because ethical approval and written consent from the owner were not needed for this report. Oral consent was given by the owner.
Development of a Portable Solar Energy Measurement System This project presents the design and development of a portable measurement device to measure and monitor solar panel parameters using the Internet of Things (IoT) concept. Solar energy measurement plays a very important role in determining the output generated, but such measurements are usually performed manually at the work site using a clamp meter or a multimeter. This makes it difficult to obtain values in real time, and errors occur in data retrieval. Three specific objectives were set for the project. First, the relevant circuits for the project were designed and built with the aid of software. The measured solar irradiance, ambient temperature, solar panel temperature, current, and voltage values are displayed on an LCD. Next, the IoT concept is used for solar panel measurement and monitoring: the measured values are stored in the ThingSpeak cloud and viewed through the ThingView application on a smartphone. As a result, the portable solar energy measurement system can be monitored on site, anywhere and anytime, using the IoT platform. Introduction Energy is well known for its ability to bring about change or do work. For example, energy produces light, heat, sound, and motion. There are many types of energy, such as electrical energy, kinetic energy, nuclear energy, and sound energy. There are also several renewable energy sources on Earth, namely wind energy, solar energy, biomass, geothermal, and hydroelectric power. Researchers have studied the economic feasibility of solar energy for domestic, commercial, and industrial use over the last two decades. Due to the limited supply of natural primary energy sources, industrial countries such as Japan and Germany are looking for alternative energy sources such as solar energy [1,2]. Solar energy is a sustainable and clean source of renewable energy [3]. The solar cell, an electrical unit made of silicon (a type of semiconductor), generates electrical energy, and a large number of solar cells combine to form a solar panel. Solar power is the conversion of sunlight into electricity; sunlight is collected either directly using photovoltaics or indirectly using concentrated solar power [4]. The conversion of solar energy into electrical energy may be used to meet the electricity needs of households or industries. Figure 1 shows the flowchart of the overall project. The project starts by reviewing previous studies related to this work. From the literature review, the optimum ideas and circuit designs were obtained. After that, the circuits were designed and simulated to test their functionality. If the circuits malfunction during simulation, they are redesigned to troubleshoot the problem. Once the circuits simulate successfully, the next step is to write the program for these circuits. The program code is then run and its functionality tested; if the simulation fails, the software is reprogrammed until the code works as expected. Once the circuit code is successfully developed, a hardware prototype can be built in the next step. The first step in the hardware part is to build the circuits on a circuit board, followed by programming the microcontroller by transferring the program from the compiler to the microcontroller's memory.
After that, the functionality of the portable measurement device is tested. If the device fails to work properly, work returns to the hardware stage for troubleshooting until the desired function is obtained. Once the portable measurement device works normally, the measurement results are displayed on the liquid crystal display (LCD) and on a smartphone. Lastly, the measurement data are collected and saved in the cloud using the Internet of Things (IoT) concept. Figure 2 shows the project block diagram, designed and executed using the Arduino Integrated Development Environment (IDE) and the Espressif 32 (ESP32). The main purpose of the project is to monitor the environmental and performance parameters of the solar panel. In this project, four sensors were used and combined with the ESP32 microcontroller to form a solar energy measurement and monitoring system. The first input is the MAX44009 sensor, which is used to measure solar irradiance. The second is the ambient temperature and humidity sensor (DHT11), used to measure the ambient temperature and humidity around the solar panel. Next is the temperature sensor (LM35), used to measure the temperature of the solar panel. Furthermore, a current and voltage sensor (INA219) is used to measure the current and voltage generated by the solar panel. A 12 V LED bulb is used as the load in this project; its function is to obtain the load current and load voltage from the solar panel. All the sensed data are collected and sent to the ESP32 board, which is powered by a 5 V power bank. The Arduino IDE executes the instructions of the written code and sends the data to the cloud interface through the ESP32 Wi-Fi module. Finally, the values sensed by these sensors are displayed on two terminal screens: the LCD and a smartphone. The ESP32's Wi-Fi module forwards the data from the microcontroller to the IoT platform, and the output is displayed on the smartphone using the ThingSpeak and ThingView applications as a cloud service to store and display the measurements. Figure 3 shows the experimental setup of the portable prototype of the solar energy measurement system. The sensors capture the performance of the solar panel as follows. For solar irradiance, the MAX44009 sensor is placed on top of the solar panel so that sunlight falls onto the sensor for measurement. The DHT11 sensor is placed beside the solar panel to measure the ambient temperature and humidity. The LM35 sensor is placed behind the solar panel, as shown in Figure 4, to measure the solar panel temperature. The INA219 sensor is placed inside the portable unit, as shown in Figure 5. The performance of the 20-Watt, 12 V solar panel is measured with the panel connected directly to a 20-Watt, 12 VDC bulb load, as shown in Figure 5. The tilt angle of the solar panel during testing was 15°, as suggested by the research conducted by Manun and his team [15].
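As a sketch of the data path described above, the snippet below pushes one set of readings to ThingSpeak through its public HTTP update API. It is written in plain Python rather than the Arduino C++ actually flashed to the ESP32, the sensor reads are stubbed with example values from the paper's results, and the write API key and field-to-parameter mapping are placeholders, not the authors' channel configuration.

```python
import time
import requests

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "YOUR_WRITE_API_KEY"  # placeholder, per-channel secret

def read_sensors():
    """Stub for the MAX44009 / DHT11 / LM35 / INA219 reads done on the
    ESP32; returns example values in the units reported in the paper."""
    return {
        "field1": 712.0,   # solar irradiance (W/m^2), MAX44009
        "field2": 41.1,    # ambient temperature (deg C), DHT11
        "field3": 33.0,    # relative humidity (%), DHT11
        "field4": 65.0,    # panel temperature (deg C), LM35
        "field5": 1.07,    # load current (A), INA219
        "field6": 15.89,   # load voltage (V), INA219
    }

while True:
    data = read_sensors()
    data["field7"] = data["field5"] * data["field6"]  # load power (W)
    data["api_key"] = WRITE_API_KEY
    resp = requests.get(THINGSPEAK_URL, params=data, timeout=10)
    resp.raise_for_status()  # ThingSpeak replies with the new entry id
    time.sleep(600)  # 10-minute logging interval, as in the paper
```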
Result and Discussion This section is divided into seven parts: solar irradiance, ambient temperature, humidity, solar panel temperature, output current, output voltage, and output power generated by the solar panel. The measurements were recorded over one day, from 8.30 AM to 6.30 PM, with the solar panel tilt angle at 15°, at 10-minute intervals through the IoT concept, using ThingSpeak and ThingView for measurement and monitoring. The data were collected from ThingSpeak and converted to an Excel file. Result of Solar Irradiance Based on Figure 6, the highest solar irradiance was 712 W/m² at 2.00 PM, while the lowest was 39 W/m² at 6.30 PM. The solar irradiance increased towards the peak hours from 11.00 AM until 2.00 PM and started decreasing at 2.10 PM. The solar irradiance at the start and end of the graph is lower because of the sun's position at sunrise and sunset, while during peak hours the sun is overhead. As the solar irradiance increases, the reading of the MAX44009 sensor increases. This sensor is based on a photodiode, a type of semiconductor diode; the spectral response of the chip photodiode is designed to replicate the human eye's perception of ambient light. Result of Ambient Temperature and Humidity The maximum measured ambient temperature was 42.8 °C at 3.20 PM, and the maximum humidity was 87 % at 8.30 AM. The minimum ambient temperature was 24.6 °C at 8.30 AM, while the minimum humidity was 30 % at 3.20 PM. Based on Figure 7 and Figure 8, at the start of the measurement at 8.30 AM the ambient temperature was lower and the humidity higher. Towards the peak hours, the ambient temperature increased until 3.20 PM, while the humidity decreased to 30 % at that time. From this pattern, the maximum ambient temperature on that day was attributed to the high solar irradiance obtained during a period when the atmosphere was mostly clear and clean (no clouds, dust-free) with low humidity, and the minimum ambient temperature was attributed to low solar irradiance and high humidity. Figure 9 shows the measured temperature of the solar panel. The solar panel temperature is central to the generation of energy, i.e., the voltage and current produced by the solar panel. The maximum solar panel temperature was 65 °C at 2.00 PM, and the minimum was 22 °C at 8.30 AM, the starting point of the graph. The solar panel reaches a higher temperature than the ambient air because it is manufactured from semiconductor material, which is sensitive to temperature: the bandgap of a semiconductor decreases as temperature increases, thereby affecting most of the semiconductor material parameters. In this result, the solar panel temperature increased from 10.10 AM until the peak hour of sun and started decreasing at 2.20 PM because of the decrease in solar irradiance and ambient temperature. The solar panel temperature thus affects the generation of solar energy. Result of Current Based on Figure 10, the result shows the output current generated by the solar panel. This is the load current delivered from the solar panel to the 20-Watt bulb load. The highest load current was 1.07 A, generated at 2.00 PM, while the lowest load current was 0.08 A at the start of the measurement at 8.30 AM. The graph shows that the load current was affected by the solar irradiance and the solar panel temperature: the higher the solar irradiance and panel temperature, the higher the load current generated.
Figure 11 displays the measured output voltage of the solar panel. This is the load voltage delivered from the solar panel to the 20-Watt bulb load. From this graph, the maximum output was 15.89 V at 1.50 PM and the minimum was 8.79 V at 6.30 PM. The load voltage was stable within this range, without very significant differences in value. The higher the solar irradiance, the higher the voltage: the solar irradiance present directly influences the voltage generated by the solar panel. Figure 12 shows the output power generated by the solar panel. This result is formed from the current and voltage fields. The highest load power generated was 16.76 W at 2.00 PM, and the lowest was 0.54 W at 6.30 PM. The power increased continuously from 11.20 AM to 2.00 PM and dropped at 2.10 PM because of the position of the sun. According to this result, the solar panel failed to generate its rated power of 20 W, i.e., the panel exhibits power losses. The power of a solar panel is also affected by its area: the bigger the area, the more energy can be absorbed. The output power was affected by solar irradiance, current, and temperature: the higher the solar irradiance, the more power was generated by the solar panel, whereas as the panel temperature rises, the voltage drops while the current rises only slightly, so high panel temperatures tend to reduce the available power. Figure 13 shows the output displayed through the IoT concept on the ThingSpeak open-source cloud service, and Figure 14 shows the output in the ThingView application. Conclusion In conclusion, a portable solar energy measurement system was successfully designed and developed. Testing was carried out on the designed system, and the findings showed that the solar energy measurement system functions correctly. This portable measurement device is easy to use for measuring and monitoring solar panel performance without an analog or digital multimeter at the worksite, so the user obtains optimum values without errors in data retrieval. In addition, the solar energy measurement system measured and monitored the solar panel parameters using the Internet of Things (IoT) concept, with each parameter of the energy produced by the solar panel recorded at 10-minute intervals. With this IoT technology, users save time, reduce the manpower required, and work more effectively. The results obtained by the solar energy measurement system show that the best output performance of the solar panel occurred between 1.50 PM and 2.00 PM, with the highest load current of 1.07 A and the highest load voltage of 15.89 V. The maximum output power recorded was 16.76 W. At that time, the solar irradiance was 712 W/m², the ambient temperature was 41.1 °C, the humidity was 33 %, and the solar panel temperature was 65 °C.
How does dancing promote brain reconditioning in the elderly? 1 The Brown Foundation, Department of NanoMedicine and Biomedical Engineering, Institute of Molecular Medicine for the Prevention of Human Diseases, The University of Texas Health Science Center at Houston, Houston, TX, USA 2 Division of Pulmonary, Sleep Medicine, and Critical Care, Department of Internal Medicine, The University of Texas Health Science Center at Houston, Houston, TX, USA *Correspondence: ppfoster@utmb.edu Cognition encompasses multiple dimensions, which may be framed within the "connectome," an effort to achieve a complete connection mapping of the brain (Kuljiš, 2010). Jan-Christoph Kattenstroth, Hubert R. Dinse and their team at the Ruhr-University of Bochum, Germany, have developed a weekly "social" exercise protocol in healthy elderly people, conceptualized as a one-hour group dance class over a six-month period (60 min/1 time/wk). Following the six-month protocol, significant improvements in performance were observed in cognition/attention (memory, visuo-spatial ability, language, and attention), reaction times, sensory-motor performance, posture, and lifestyle, but none in maximal aerobic capacity (VO2max) or in fluid intelligence. Therefore, a question arose: how does the brain recondition in the elderly? Several hints may be inferred from those findings. The level of physical exercise inherent to dancing may not have been enough to produce an increase in aerobic fitness (VO2max), although the exercise-induced elevation of heart rate, cardiac output, and perfusion may have been sufficient to produce changes in protein expression (brain-derived neurotrophic factor, BDNF; cytokines; insulin-like growth factors, IGF-1 and IGF-2) impacting cognitive plasticity (Chen et al., 2011; Foster et al., 2011). Besides, dancing as motion in a three-dimensional space relies on path integration, which maintains permanent visual tracking of the direction and distance from reference points (landmarks) during 3-D navigation in the environment (Hafting et al., 2005), and hence requires the activation of hippocampal and entorhinal networks. Motion and 3-D navigation in the environment are closely related to spatial memory. Deterioration of spatial memory is an early warning of cognitive impairment and the potential onset of Alzheimer's disease. Specific cells, hippocampal place cells, underlie spatial recognition in response to spatial stimulations such as environmental landmarks and translational or directional movement inputs (O'Keefe and Burgess, 1996). In dancing with uncharacterized motion and pacing, a few hypotheses may be set forth. We may speculate that during slow motion, place cells would fire in response to a unique, specific position in the environment (O'Keefe and Burgess, 1996; Doeller et al., 2010). Alternatively, grid cells within entorhinal networks may be firing in response to dance-induced rapid motion in multiple locations of the geometrically defined environment (Hafting et al., 2005). This geometrical superimposition of rapid motions onto a map of the environment reproduces a pattern of equilateral triangles "tiling" the space (Hafting et al., 2005). Responses of grid cells are modulated by the direction and speed of motion (Doeller et al., 2010). Indeed, stimulation provided by external landmarks may be a major player in improving spatial memory. Persistence of firing after sensory input has ceased suggests that network mechanisms underlie the executive mapping of grid cells (Hafting et al., 2005).
In humans, functional MRI suggests that direction and running speed modulate path integration (Doeller et al., 2010). Although the map is tied to external visual landmarks, it remains without them once composed (Doeller et al., 2010). Therefore, dancing may provide a benefit through input stimulation following a simple exploration in motion of the surrounding environment. Depending on the importance of the virtual navigation paradigm in the stimulation process, virtual training might enhance hippocampal and entorhinal volume and improve spatial cognition performance. However, it is unclear what the respective roles of sub-maximal exercise and visual environmental landmark mapping were in the overall improvement found by the authors in dance classes. Dance and music, as forms of art, also mold brain circuits and may enhance cognition as well as emotional and behavioral patterns (Preminger, 2012). The long-term effects of auditory, multilingual, verbal, or musical workouts on the brain have been investigated, revealing underlying structural and functional adaptations (Jancke, 2009; Oechslin et al., 2010). One specialization related to musical training is the increase of gray matter in the auditory cortex (Schneider et al., 2005; Preminger, 2012). Several authors have also suggested that improvisation involving novel and complex situations, such as control of bearings and directions, solicits the frontal lobes (Preminger, 2012). Improvisation has also been used in training and rehabilitation of prefrontal functions (Preminger, 2012). Indeed, dancing as a neurocognitive experience activates multiple cognitive functions such as perception, emotion, executive function (decision-making), memory, and motor skills. A large array of brain networks is thus being activated. Yet, how the brain achieves this remarkable feat in the elderly remains a puzzle, and questions about the respective roles of simultaneous mental (virtual reality) and skeletal muscle exercises are raised. Video game designers and movie directors, experts in virtual reality, put the emphasis on such widespread brain activation (Hasson et al., 2004, 2009; Hasson and Malach, 2006; Preminger, 2012). Virtual reality requires only minimal motor execution and cardio-respiratory activation compared with actual sustained sub-maximal skeletal muscle exercise. Furthermore, in virtual reality, full stimulation of entorhinal networks by actual motion-induced navigation in the environment may be lacking. Another study (Anderson-Hanley et al., 2012) has also brought insights into the role of virtual reality-enhanced sub-maximal skeletal muscle exercise training for 3 months (45 min/5 times/wk at 60% heart rate reserve). Indeed, 3-D virtual navigation on a computer screen while exercising on a stationary bicycle provided greater cognitive (executive) benefits than the stationary bicycle alone. Strikingly, in Kattenstroth's study, one session per week is sufficient to bring about significant benefit. Such studies (Anderson-Hanley et al., 2012; Kattenstroth et al., 2013) have started to provide us with further insights into the more detailed physiological mechanisms involved. These studies speak in favor of one conclusion that may be summarized in the single most important question about brain reconditioning: how might these findings be reconciled?
Post-translational modifications are enriched within protein functional groups important to bacterial adaptation within a deep-sea hydrothermal vent environment Post-translational modification (PTM) of proteins is one important strategy employed by bacteria for environmental adaptation. However, PTM profiles in deep-sea microbes remain largely unexplored. We provide here insight into PTMs in a hydrothermal vent microbial community through integration of metagenomics and metaproteomics. In total, 2919 unique proteins and 1306 unique PTMs were identified; the latter included acetylation, deamination, hydroxylation, methylation, nitrosylation, oxidation, and phosphorylation. These modifications were unevenly distributed among microbial taxonomic and functional categories. A connection between modification types and particular functions was demonstrated. Interestingly, PTMs differed among orthologous proteins derived from different bacterial groups. Furthermore, proteomic mapping to the draft genome of a Nitrospirae bacterium revealed novel modifications for proteins that participate in energy metabolism, signal transduction, and inorganic ion transport. Our results suggest that PTMs are enriched in specific functions, which would be important for microbial adaptation to the extreme conditions of the hydrothermal vent. PTMs in the deep sea are highly diverse and divergent, and much broader investigations are needed to obtain a better understanding of their functional roles. Background Hydrothermal vents are cracks in the earth's crust where high-temperature water escapes after being heated in the rocks below. Scientists who have explored deep-ocean hydrothermal vents were surprised to find the areas teeming with abundant life [1][2][3]. As important players, microbial populations participate in diverse biogeochemical processes, including the nitrogen, sulfur, and carbon cycles. Microbes in hydrothermal fields are mostly sustained by energy derived from inorganic redox reactions. Despite the common general origin of the investigated hydrothermal vents and the important roles of microbial communities, the mechanisms underlying microbial adaptation to vent environments remain largely unknown. As one of the important strategies for environmental adaptation, post-translational modifications (PTMs) play crucial roles in regulating protein function and controlling several fundamental features of microbial biology, such as cell signaling, protein turnover, cell-cell interactions, and cell differentiation. For example, protein methylation denotes the addition of a methyl group to a protein or the substitution of an atom or group by a methyl group, and it is involved in mediating protein-protein interactions and enhancing protein thermostability [4]; the hydroxylation of specific residues in the ribosome has been identified in bacteria, suggesting a role for hydroxylation in cell growth and cycling [5]; in addition, phosphorylation and methylation collectively regulate signal transduction in bacteria [6,7]. Among the studies of PTMs to date, laboratory strains have served as models or research subjects in most cases. However, microbial communities in nature are very complex, and thus characterizing PTM events in natural communities is a challenging task. Li et al. identified PTMs in two growth stages of acid mine drainage (AMD) biofilms using a shotgun proteomics approach.
They analyzed the PTM profile based on an enrichment-independent technique that allowed the direct quantification of different modification events, and characterized eight common biological PTM types [8]. More recently, Marlow et al. studied protein PTMs in microbial communities from marine methane seep habitats and focused on PTMs of methyl-coenzyme M reductase (Mcr) orthologs [9]. These studies have provided insights into PTMs in natural microbial communities, which have barely been explored. In the present study, we employed metaproteomics, metagenomics, and genome binning to explore the PTMs in the microbial community from a hydrothermal vent plume on the South Mid-Atlantic Ridge (SMAR). The findings provide new insight into PTM events in extreme deep-sea areas and motivate further study of their roles in microbial ecology and physiology.

Overview of the metagenome, metaproteome, and PTMs
Information for the assembled metagenome is summarized in Additional file 1: Table S1. The total number of contigs was 24,099, with an N50 of 10,361 bp, yielding 171,515 open reading frames (ORFs). These ORFs were used as a database for the subsequent metaproteomic analysis. Using gel-based fractionation (eight fractions for each metaproteomic sample) of the label-free peptides and LTQ-Orbitrap-MS/MS analysis (workflow shown in Additional file 1: Figure S1), we identified in total 2919 unique proteins from 1,978,700 peptide-spectrum matches for the two metaproteomes (metaproteo-1 and metaproteo-2). Among these proteins, 766 were shared by the two metaproteomes. The high-resolution MS/MS provided high mass accuracy (<0.02 Da mass error) and a very low false discovery rate (0.1%). Mascot Daemon searching produced a total of 1306 unique PTMs for the two metaproteomes (PTM1 and PTM2), including acetylation, deamination, hydroxylation, methylation, nitrosylation, oxidation, and phosphorylation. Detailed information regarding the number of identified proteins and PTMs in the two metaproteomes is summarized in Additional file 1: Figure S2, and all PTMs are listed in Additional file 2.

Taxonomic distribution of the metagenome, metaproteome, and PTMs
The hidden Markov models (HMMs) of conserved single-copy proteins were extracted from the metagenome- and metaproteome-derived ORFs and searched against the National Center for Biotechnology Information (NCBI)-Nr database to reveal the taxonomic structures. Assignment of the reads at the class level revealed the prevalence of Alphaproteobacteria, Betaproteobacteria, Gammaproteobacteria, Deltaproteobacteria, and Nitrospira (phylum Nitrospirae) (Fig. 1a). Compared with the metagenome, the metaproteomes were enriched for Deltaproteobacteria, which were even more clearly enriched in the PTM profiles (Student's t test, P < 0.005, PTMs versus metaproteomes). Moreover, bacteria belonging to the phylum Nitrospirae were also enriched in the PTM profiles (P < 0.05, PTMs versus metaproteomes).

Functional distribution of the metagenome, metaproteome, and PTMs
All of the ORFs from the metagenome and metaproteomes were searched against protein databases, including Clusters of Orthologous Groups (COGs), Kyoto Encyclopedia of Genes and Genomes (KEGG), and NCBI-Nr, to identify the functional profiles. The distribution of COG functional categories is summarized in Fig. 1b.
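The enrichment comparisons above (PTMs versus metaproteomes) come down to Student's t tests on category proportions. A minimal sketch of that style of comparison is given below; the fractions, the two-replicate design, and the Deltaproteobacteria example are illustrative assumptions, not the study's actual values.

```python
# Hypothetical sketch: is a taxon (or COG category) enriched in the PTM
# profiles relative to the metaproteomes? Values below are made up.
from scipy import stats

# Fraction of identifications assigned to Deltaproteobacteria per dataset
metaproteomes = [0.18, 0.21]   # metaproteo-1, metaproteo-2
ptm_profiles = [0.31, 0.34]    # PTM1, PTM2

t, p = stats.ttest_ind(ptm_profiles, metaproteomes)
print(f"t = {t:.2f}, p = {p:.4f}")  # enrichment is claimed when p < 0.05
```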
One of the notable results was that the genes responsible for translation [J] and replication [L] accounted for 20-30% of the metagenome and metaproteome, but their total abundance decreased to <7% in the PTMs. In contrast, several COG functional categories were significantly enriched for PTMs when compared with the metaproteomes, including transcription [K] (Student's t test, P < 0.05), cell motility [N] (P < 0.05), energy production [C] (P < 0.05), coenzyme transport and metabolism [H] (P < 0.01), and inorganic ion transport and metabolism [P] (P < 0.05). To confirm the functional profiles, individual genes annotated by KEGG are summarized in Fig. 2. The most abundant proteins with modifications included those related to electron transport and energy production, such as F-type ATPase and ribulose-bisphosphate carboxylase; inorganic ion metabolism, such as nitric oxide reductase, phosphate transport system protein, iron complex outer membrane receptor protein, sulfate adenylyltransferase, and Cu2+-exporting ATPase; and signal transduction and chemotaxis, such as TetR and AcrR family transcriptional regulators and FlgC. The prevalence of these genes was consistent with the COG categories. In addition, a number of genes responsible for gene recombination, such as transposases and restriction enzymes, were identified.

Correlation between PTM types and functional categories
The above results revealed the enrichment of PTMs in particular pathways, such as inorganic ion transport and energy metabolism. It can be hypothesized that the distribution of PTM types in different functional categories was also uneven, because molecular studies of model species have demonstrated that divergent modifications exert distinct functions. Thus, we first summarized the percentages of the seven different PTM types (Fig. 3a). Methylation was the most prevalent PTM type, accounting for 40% of the PTMs, followed by deamination, which accounted for 30%. In contrast, nitrosylation and phosphorylation only accounted for 1-2%. We distributed the seven PTM types into COG categories, as shown in Fig. 3b. Proteins that are important for inorganic ion metabolism [P] were mainly associated with hydroxylation; energy production [C] was associated with methylation, oxidation, and deamination; and cell motility was associated with deamination and methylation. We further explored the PTMs in orthologous proteins from different taxa. Examples included F-type ATPases belonging to Bacteroidia, Gammaproteobacteria, Flavobacteriia, and Alphaproteobacteria based on the MEtaGenome ANalyzer (MEGAN) analysis (Additional file 1: Figure S3). The results revealed the divergence of PTMs, although the selected amino acid sequences were rather conserved. The modification types included acetylation, deamination, methylation, and oxidation, whereas the modification sites included lysine, glutamine, and methionine.

Genome information for the dominant microbe, Nitrospirae bacterium sp. nov
To further understand the organization of functions and PTMs, we recovered one draft genome from the metagenome dataset. Based on the phylogenetic tree constructed using 16S ribosomal RNA (rRNA) genes (Additional file 1: Figure S4), the bacterium had a close phylogenetic relationship with members of the phylum Nitrospirae. The bacterium was located in the same branch as uncultured bacteria from hydrothermal vent areas, which may have adapted to such extreme environments for a long time but for which no genome information is available.
The bacterium was named Nitrospirae bacterium sp. nov. The phylogeny of single-copy genes derived from the available Nitrospirae genomes and Nitrospirae bacterium sp. nov was also investigated (Additional file 1: Figure S5), which placed this new Nitrospirae bacterium at a position similar to that in the 16S rRNA gene tree. The completeness of the genome of Nitrospirae bacterium sp. nov was estimated based on the number of the 139 conserved single-copy protein-encoding genes and by comparison with previously reported Nitrospirae genomes that served as references (Additional file 1: Table S2). The results showed that 20 genes, such as the GAD domain-containing protein (PF02938), potentially could not be recovered from the genome bin of Nitrospirae bacterium sp. nov, because these genes were present in the reference genomes. In contrast, the presence of two copies of elongation factor Ts (PF00889) and two copies of pseudouridine synthase I (PF01416) in the reference genomes indicated that the observed duplication in the genome of Nitrospirae bacterium sp. nov was not due to contamination. Thus, we estimated the completeness of the draft genome to be 85.6%. Contigs of the Nitrospirae bacterium sp. nov genome were further compared with the reference genomes, which revealed that it shared 79.8% of its genome inventory with other Nitrospirae bacteria.

Metabolic pathways and PTMs in Nitrospirae bacterium sp. nov
Catalytic pathways were constructed based on the genome of Nitrospirae bacterium sp. nov by blastp searches against the KEGG database (Fig. 4). The Wood-Ljungdahl pathway was present in the genome, which may provide the main carbon source for this bacterium. Genes encoding proteins that play a role in electron transport and energy production, including those encoding F-type ATPase, the cytochrome bc1 complex, cytochrome oxidase, and NADH dehydrogenase, were identified. Remarkably, the genome possessed both nitrate reduction and sulfate reduction pathways, as indicated by the presence of the narGH, norBC, dsrAB, and aprAB genes. The presence of diverse signal transduction genes, such as phoRBPA and cheWVYA, supported tightly regulated metabolic activities. Moreover, a large number of metal ion transporters were present, which are involved in metal efflux and uptake. Compared with the seven close relatives with complete genomes, Nitrospirae bacterium sp. nov displayed a generally similar functional inventory, which included a number of genes related to glycolysis/gluconeogenesis, oxidative phosphorylation, carbon fixation in prokaryotes, dissimilatory nitrate reduction, dissimilatory sulfate reduction, metal ion transport, and signal transduction (Additional file 1: Table S3). There were 287 proteins with PTMs in the genome of Nitrospirae bacterium sp. nov. Selected proteins with PTMs are highlighted in red in Fig. 4. Some extensively modified proteins included regulators such as PhoR, which regulates PhoBPA to sense phosphate and iron [10]; VicR, which senses osmotic stress [11]; PilR, which is a transcriptional regulator for pilin and other genes required for Fe(III) reduction [12]; and methyl-accepting chemotaxis protein (MCP) and CheW, which are involved in chemotaxis [13]. The F-type ATPase was also found to have unique PTMs. Proteins involved in adenosine-5′-phosphosulfate (APS) and sulfite reduction were modified with PTMs. In addition, transporters such as PstS, which is responsible for phosphate uptake [14], and MlaD, which is responsible for phospholipid transport [15], had PTMs.

Fig. 4 The metabolic capacities and pathways with enriched PTMs of Nitrospirae bacterium sp. nov. This bacterium possesses multiple pathways for energy metabolism, signal transduction, and inorganic ion transport, which contain several proteins with PTMs (highlighted in red)
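As a quick numerical cross-check, the 85.6% completeness figure above follows directly from the marker-gene counts given in the text; a minimal sketch:

```python
# Completeness of the genome bin estimated from conserved single-copy
# marker genes (counts taken from the text above).
TOTAL_MARKERS = 139
missing_in_bin = 20   # present in reference Nitrospirae genomes, absent from the bin

completeness = (TOTAL_MARKERS - missing_in_bin) / TOTAL_MARKERS
print(f"Estimated completeness: {completeness:.1%}")  # -> 85.6%
```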
Discussion
We integrated metagenomics and metaproteomics to illustrate protein PTM events in a hydrothermal vent microbial community. The high resolution of the MS system and the protein fractionation allowed accurate identification, whereas the unexpectedly high diversity of microbes in the SMAR may have limited the number of detected proteins and modifications. Methylation was one of the dominant PTM events, whereas phosphorylation was among the rare types. In prokaryotes, methyl groups can be added to the carboxyl groups of proteins. Few studies have focused on the molecular functions of protein methylation in the microbial world. It has been shown that changes in the methylation levels of the chemotaxis signaling proteins correlate with the ability of microbes to respond to chemoeffectors [6,16]. In Agrobacterium tumefaciens, methylation of the electron transfer flavoprotein (ETFß) diminished the ability of this enzyme to mediate electron transfer from various dehydrogenases [17]. In the hyperthermophilic archaeon Sulfolobus islandicus, the helicase activity of mini-chromosome maintenance (MCM) is enhanced at high temperatures (over 70°C) by lysine methylation [18]. Collectively, it seems that protein methylation in microbes is involved in signal transduction, energy metabolism, and protein stabilization under high temperatures. These functions and the prevalence of methylation in our metaproteomes led us to assume that methylation would be important for microbial survival under the extreme conditions of the hydrothermal vent. More evidence supporting the role of PTMs in microbial adaptation to the vent environment is provided by the correlation between PTM types and functional categories, such as the enrichment of hydroxylation for inorganic ion transport and metabolism. Notably, Marlow et al. observed that methylation and hydroxylation were prevalent PTMs affiliated with orthologs of McrA, a critical enzyme in the reverse methanogenesis pathway, suggesting that the two PTM types may be involved in enzyme regulation in the deep-sea methane seep (775 m depth) [9]. By contrast, the low relative abundance of phosphorylation was not expected, because phosphorylation is an important signal transduction mechanism in prokaryotic organisms [6,7]. Here, we found that phosphorylation was mainly associated with translation, ribosomal structure, and biosynthesis, suggesting different strategies adopted by the vent microbiome. Moreover, we found that the PTM profile was dictated by taxonomy, whereas the PTMs of orthologous proteins differed among microbes, indicating the divergence of PTM patterns underlying the metabolic distinction of closely related microbes. The present study also demonstrates that the integration of genome binning and proteomics is a good way to identify PTMs in unculturable bacterial species of interest. Members of the Nitrospirae phylum have been reported to inhabit a number of environments, such as acid mine biofilms [8], pond sediments [19], hot springs [20], and lakes [21]. Comparisons in the present study revealed the generally conserved lifestyles of most of the Nitrospirae bacteria with genome sequences. For example, the presence of the Wood-Ljungdahl pathway suggests autotrophic and strictly anaerobic respiration. However, Li et al. proposed that the divergence of PTMs in Nitrospirae may contribute to phenotypic diversity, because the Leptospirillum group II dominating AMD biofilms exhibits substantial ecological differentiation [8].
Consistently, PTMs in Nitrospirae bacterium sp. nov in the present study displayed different patterns from those in Leptospirillum. In particular, the PTMs of regulators, including PhoR, VicR, and PilR, as well as transporters including PstS and MlaD, suggest an important role of PTMs in metal ion metabolism and resistance, which may facilitate adaptation to the vent area. The detailed functions of PTMs in these novel genes deserve further characterization. In addition, because proteins from different phyla may have quite similar sequences, we cannot be sure that all the protein sequences mapped to Nitrospirae bacterium sp. nov exclusively belong to this organism, and this would be one of the challenges faced by integrative analyses of metaproteomics and metagenomics.

Conclusions
PTMs of unique proteins that play a role in energy metabolism, signal transduction, and inorganic ion transport would be an important strategy for microbial adaptation to the vent environment. PTMs in the deep sea are highly diverse and divergent, thus highlighting the need for broader investigations to elucidate their functions.

Sampling
The samples were collected during our August 2012 cruise to the SMAR (13.35°W, 15.16°S, 2500 m depth) aboard the "Dayang Yihao," using a conductivity, temperature, and depth (CTD) rosette attached to a remotely operated vehicle (ROV). Hydrothermal activity was confirmed by methane and temperature anomalies recorded with portable miniature autonomous plume recorders attached to a towed deep-sea instrument. The block samples (each of ~1 kg) were collected from the plume wall. After collection, they were immediately transferred to the laboratory on dry ice, frozen in liquid nitrogen, and stored at −80°C until use. Three samples collected decimeters apart were used in the present study: one for the metagenomic sequencing, protein database construction, and genome binning, and two for the metaproteomic and PTM analyses.

DNA extraction and Illumina sequencing
The technique used for DNA extraction has been described in our previous work [22,23]. Briefly, the hydrothermal plume samples maintained in DNA extraction buffer were homogenized using a sterilized mortar. Subsequently, 50 μl of lysozyme (100 mg/μl) was added to the samples, followed by 400 μl of 20% SDS and 40 μl of proteinase K (10 μg/μl). The total nucleic acid was then extracted and purified using the AllPrep DNA/RNA Mini Kit (Qiagen, Hilden, Germany). Finally, ~200 ng of DNA was sequenced on an Illumina HiSeq 2000 platform (PE500 library) at the Shanghai South Genomics Center (Shanghai, China).

Metagenome assembly
Illumina reads were subjected to quality control using the next-generation sequencing (NGS) QC toolkit [24] before being assembled using SPAdes Genome Assembler 3.6.1 [25] on a local server. The k values 21, 31, 41, 51, 61, 71, and 81 were specified, and the "--careful" and paired-end ("--pe") options were used. ORFs were predicted using Prodigal [26] on a local server, in single-genome mode and with GFF output format. The HMMs of conserved single-copy proteins were extracted by searching against a local database.
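The assembly and gene-calling steps above can be scripted end to end. The sketch below wires SPAdes 3.6.1 and Prodigal together via subprocess under the stated options; the read and output file names are placeholders, and the exact flags should be verified against each tool's manual.

```python
# Sketch of the assembly/ORF-prediction pipeline described above.
# File names are placeholders; options mirror the text (k = 21..81,
# --careful, paired-end input; Prodigal in single mode with GFF output).
import subprocess

subprocess.run([
    "spades.py", "--careful",
    "-k", "21,31,41,51,61,71,81",
    "-1", "reads_qc_1.fastq", "-2", "reads_qc_2.fastq",
    "-o", "spades_out",
], check=True)

subprocess.run([
    "prodigal", "-p", "single", "-f", "gff",
    "-i", "spades_out/contigs.fasta",
    "-o", "orfs.gff", "-a", "orfs.faa",
], check=True)
```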
Protein extraction and digestion
Protein extraction was performed following the processes described in previous metaproteomic studies [27,28]. After Coomassie brilliant blue staining, each lane was cut into eight fractions and subjected to in-gel digestion according to the protocol described by Shevchenko et al. [29]. Briefly, the gel fractions were cut into small pieces and placed in Eppendorf tubes. Next, 500 μl of 100 mM ammonium bicarbonate/acetonitrile (1:1, vol/vol) was added to each tube and incubated with occasional vortexing for 300 min. Then, 500 μl of acetonitrile was added to the sample, followed by incubation at room temperature for 30 min. The acetonitrile was then removed, followed by the addition of dithiothreitol solution and incubation at 56°C for 45 min. The dithiothreitol solution was then removed, and iodoacetamide solution was added, followed by incubation in the dark for 30 min. The gel pieces were shrunk with acetonitrile prior to the removal of all liquid. Finally, trypsin buffer was added to cover the dry gel pieces, and the gel was incubated at 4°C for 30 min before being incubated at 37°C overnight. The peptides were extracted from the gel slides, desalted, and dried in a SpeedVac.

LC-MS/MS measurements
The dried fractions were reconstituted in 0.1% formic acid and further analyzed on an LC-Orbitrap Elite mass spectrometer (Thermo Scientific) following our previously described methods [30]. Briefly, the peptides were fractionated over a 90-min gradient on an Easy-nLC (Thermo Fisher, Bremen, Germany) using a C18 capillary column (Michrom BioResources, CA). The eluted peptides were first scanned in the mass spectrometer over the mass range of 350-1800 m/z at a resolution of 60,000. The top 15 highest-intensity ions above a minimum threshold of 500 were selected for downstream fragmentation using higher-energy collisional dissociation (HCD). Dynamic exclusion with an isolation width of 2.0 m/z and an exclusion time of 30 s was adopted. We used a normalized collision energy of 35% and an activation Q of 0.25 in the HCD analysis.

Protein and PTM identification
The protein database comprising all ORFs from the abovementioned metagenome was constructed using the database maintenance utility in Mascot (version 2.3.02). Protein identification was performed following the methods described in our previous studies [30]. The MS raw files were processed with Proteome Discoverer 1.0 (Thermo Fisher Scientific) to generate Mascot generic files (mgf) of the HCD data. The normalized mgf files were submitted to Mascot to search the protein database. The following parameters were used for protein identification: tolerances for parent peptides and fragment ions of 5 ppm and 0.3 Da, respectively, and up to three missed cleavages. All searches were performed with "decoy" sequences. The false discovery rates (FDR) were thus calculated and maintained under 0.1%. The following settings were used for PTM identification: 5 ppm and 0.02 Da for parent peptides and fragment ions, respectively; up to three missed cleavages; acetylation (N-term; lysine), deamination (glutamine, asparagine, and arginine), hydroxylation (asparagine and lysine), methylation (C-term, aspartic acid, and glutamic acid), nitrosylation (tryptophan and tyrosine), oxidation (methionine, histidine, and tryptophan), and phosphorylation (serine, threonine, and tyrosine) were searched dynamically. PTMs were localized based on the DeltaP score, the score difference between the two best alternative modification site assignments, as described in previous studies [8,31].
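The DeltaP criterion above can be made concrete with a small helper: the modification is assigned to the best-scoring site only when it beats the runner-up by some margin. The function and the threshold below are illustrative assumptions; the study's actual score cutoff is not stated here.

```python
# Hypothetical DeltaP-style site localization: keep the top-scoring site
# only if it exceeds the second-best alternative by min_delta_p.
def localize_ptm(site_scores, min_delta_p=10.0):
    """site_scores maps candidate residue positions to site-assignment scores."""
    ranked = sorted(site_scores.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1:
        return ranked[0][0]
    delta_p = ranked[0][1] - ranked[1][1]
    return ranked[0][0] if delta_p >= min_delta_p else None  # None = ambiguous

print(localize_ptm({57: 45.2, 61: 22.0, 64: 18.9}))  # -> 57
```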
Taxonomic affinity of proteins of interest was determined by searching against the NCBI-Nr database with blastp (e-value <1e−07) on a local server, followed by MEGAN analysis [32].

Genome binning and validation
Genome binning was performed according to the steps described by Albertsen et al. [33] and in our previous studies [23,34]. The contigs belonging to different taxa were separated based on genome coverage, GC content, tetranucleotide frequency, and taxonomic information. The taxonomic information for the contigs was obtained by searching against the NCBI-Nr database with blastp (e-value <1e−07) using a set of conserved single-copy protein-encoding genes as queries, followed by importation of the blast results into MEGAN 5.0 [32]. To exclude potential contig contamination from our genome bins, the extracted contigs were checked by searching against a local database consisting of reported genomes belonging to the same phylum. Here, we constructed a small database comprising the ORFs of seven previously reported complete Nitrospirae genomes [21,[35][36][37]. The ORFs in the genome bin of the present study were searched against this database with blastp (e-value <1e−07). To further assess the completeness and purity of the genome bin, single-copy protein-coding genes were also compared. In this way, we could determine whether the duplication of single-copy protein-coding genes was caused by contig contamination or incorrect hybridization.

Genomic analysis
Genomic analysis was performed according to the steps documented in our previous studies [23,33]. Briefly, the ORFs in the extracted genome were predicted using Prodigal [26] on a local server. The ORFs were annotated by searching against the KEGG [38] and COG [39] databases. Metabolic pathways were revealed using the online tools in KEGG Mapper (http://www.genome.jp/kegg/mapper.html).

Phylogenetic analysis
The phylogenetic organization of the Nitrospirae bacterium sp. nov strain and closely related Nitrospirae strains was visualized based on 16S rRNA sequences (~1400 bp). The reference sequences were retrieved from the NCBI database. Alignment was performed with ClustalW as implemented in Molecular Evolutionary Genetics Analysis (MEGA, version 6.05) [40], and a maximum likelihood (ML) tree was then constructed. Phylogeny was also investigated based on conserved single-copy genes, which are widely used in microbiome studies [41][42][43]. AMPHORA [44] was used to predict conserved single-copy genes from the genome of Nitrospirae bacterium sp. nov and all the available Nitrospirae genomes (draft and complete genomes) in the NCBI database. Protein sequences corresponding to twelve single-copy genes (tsf, rpsS, rpsJ, rpsE, rpmA, rplT, rplS, rplF, rplE, rplC, rplB, and pgk) that were present in all the involved genomes were aligned with ClustalW. The aligned protein sequences were then concatenated using an in-house script and imported into MEGA to construct an ML tree based on the Jones-Taylor-Thornton (JTT) substitution model. The bootstrap values were calculated with 500 replicates.

Additional files
Additional file 1: Table S1. Features of the assembled metagenomes and binned draft genomes. Table S2. Conserved single-copy protein-encoding genes for the estimation of genome completeness. The numbers of the 139 single-copy genes in Nitrospirae bacterium sp. nov were compared with those in closely related genomes. Table S3. Numbers of genes involved in carbohydrate metabolism, nitrogen metabolism, and sulfur metabolism in Nitrospirae bacterium sp. nov and the reference genomes.
Figure S1. Work flow of the present study. Three samples were collected decimeters apart: one for the metagenomic sequencing, protein database construction, and genome binning and two for the metaproteomic and PTM analyses. Figure S2. A Venn diagram showing the overlaps between identified proteins (a) and PTMs (b) in the two metaproteomic samples. Figure S3. Alignments of partial F-type-ATPase protein sequences to show the PTM sites. Figure S4. Phylogenetic organization of the Nitrospirae bacterium sp. nov strain and closely related Nitrospirae strains based on 16S rRNA sequences (~1400 bp). Figure S5. Phylogenetic organization of the Nitrospirae bacterium sp. nov strain and Nitrospirae strains based on concatenated single-copy genes. (DOC 1298 kb) Additional file 2: List of PTMs. (XLSX 696 kb)
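Returning to the Phylogenetic analysis section above, the "in-house script" used to concatenate the twelve aligned single-copy proteins is not published with the paper; a minimal stand-in might look like the following, assuming one ClustalW alignment in FASTA format per gene and identical taxon labels across files.

```python
# A minimal stand-in for the in-house concatenation script: per-gene
# alignments (FASTA) are joined into one super-alignment per genome.
# File names are placeholders.
from collections import defaultdict

GENES = ["tsf", "rpsS", "rpsJ", "rpsE", "rpmA", "rplT",
         "rplS", "rplF", "rplE", "rplC", "rplB", "pgk"]

def read_fasta(path):
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name:
                seqs[name].append(line)
    return {k: "".join(v) for k, v in seqs.items()}

concat = defaultdict(str)
for gene in GENES:
    for taxon, seq in read_fasta(f"{gene}.aln.fasta").items():
        concat[taxon] += seq  # assumes every taxon is present in every alignment

with open("concatenated.fasta", "w") as out:
    for taxon, seq in concat.items():
        out.write(f">{taxon}\n{seq}\n")
```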
Strength-Endurance: Interaction Between Force-Velocity Condition and Power Output

Context
Strength-endurance mainly depends on the power output, which is often expressed relative to the individual's maximal power capability (Pmax). However, an individual can develop the same power in different combinations of force and velocity (force-velocity condition). Also, at matched power output, changing the force-velocity condition results in a change of the velocity-specific relative power (Pmaxv), associated with a change in the power reserve. So far, the effect of these changing conditions on strength-endurance remains unclear.

Purpose
We aimed to test the effects of force-velocity condition and power output on strength-endurance.

Methods
Fourteen sportsmen performed (i) an evaluation of the force- and power-velocity relationships in squat jumps and (ii) strength-endurance evaluations during repeated squat jump tests in 10 different force-velocity-power conditions, individualized based on the force- and power-velocity relationships. Each condition was characterized by a different (i) relative power (%Pmax), (ii) velocity-specific relative power (%Pmaxv), and (iii) ratio between force and velocity (RFv). Strength-endurance was assessed by the maximum repetitions (SJRep) and the cumulated mechanical work (Wtot) performed until exhaustion during the repeated squat jump tests. Intra- and inter-day reliability of SJRep was tested in one of the 10 conditions. The effects of %Pmax, %Pmaxv, and RFv on SJRep and Wtot were tested via stepwise multiple linear regressions and two-way ANOVAs.

Results
SJRep exhibited almost perfect intra- and inter-day reliability (ICC = 0.94 and 0.92, respectively). SJRep and Wtot were influenced by %Pmaxv and RFv (R2 = 0.975 and 0.971; RMSE = 0.243 and 0.234, respectively; both p < 0.001), with the effect of RFv increasing with decreasing %Pmaxv (interaction effect, p = 0.03). %Pmax was not retained as a significant predictor of strength-endurance by the multiple regression analysis. SJRep and Wtot were higher at lower %Pmaxv and in low force-high velocity conditions (i.e., lower RFv).

Conclusion
Strength-endurance was almost fully dependent on the position of the exercise conditions relative to the individual force-velocity and power-velocity relationships (characterized by %Pmaxv and RFv). Thus, the standardization of the force-velocity condition and the velocity-specific relative power should not be overlooked in strength-endurance testing and training, nor when setting fatiguing protocols.

INTRODUCTION
Repetitive near-maximal- or maximal-intensity efforts, such as sprinting, rowing, jumping, or stair climbing, are frequent in daily life and sporting activity. The key to successful performance during repeated movements relies on the production of mechanical power and its maintenance over a series of repetitions until task completion. Power production capabilities depend on movement velocity and are well represented by the parabolic power-velocity (P-v) relationship during multi-joint movements (Bobbert, 2012; Samozino et al., 2012; Jaric, 2015). The apex of the P-v relationship corresponds to the maximal power attained at optimal velocity (Pmax), which is commonly accepted as a macroscopic measure of dynamic strength capabilities (Jaric, 2015; Alcazar et al., 2017). The ability to maintain power over a series of movements (i.e., strength-endurance) depends primarily on the output magnitude and is well illustrated by the power-time relationship.
Two distinct power-time relationships have been reported to characterize strength-endurance: (i) the inverse hyperbolic relationship between the absolute or relative power output and the duration during which this given power can be maintained, which can be obtained from 3 to 5 tests to exhaustion (Monod and Scherrer, 1965; Burnley and Jones, 2016), and (ii) the decrease in instantaneous power output over time during a single all-out exercise, which is instead associated with fatigability indices, such as the rate of power output loss over 30-s all-out cycling (Bar-Or, 1987). However, the same absolute or relative-to-Pmax (%Pmax) power output can be developed in high force-low velocity conditions or in low force-high velocity conditions, and these different force-velocity (F-v) conditions can be interpreted as distinct ratios between the force output and the movement velocity (RFv). The effect of RFv on strength-endurance has been studied indirectly by investigating the effect of movement velocity using cyclic (e.g., cycling) and acyclic movements (e.g., knee extension; Elert and Gerdle, 1989; Barker et al., 2006). Due to the specificity of cyclic movements, velocity is indirectly controlled by adjusting movement frequency (e.g., the pedaling cadence) or by using specific set-ups (Dorel et al., 2003; Tomas et al., 2010). During all-out exercises, higher fatigability has been systematically observed at higher compared to lower movement frequencies in cyclic movements (i.e., cycling; e.g., Sargeant et al., 1981; Beelen and Sargeant, 1991a). However, there is little consensus in acyclic movements (i.e., knee extension and shoulder flexion), since some studies report higher fatigability at higher movement velocities (e.g., Mathiassen, 1989; Morel et al., 2015) while others report opposite results (Elert and Gerdle, 1989; Dalton et al., 2012). Moreover, RFv and %Pmax conditions were not fixed at each repetition over the tests due to the decrease in power output throughout all-out exercises. Consequently, it is challenging to evaluate the effects of RFv and %Pmax on strength-endurance, as well as the interactions between both mechanical conditions, by means of all-out exercises. During tests to exhaustion performed at constant power output, only cyclic movements (i.e., cycling and paddling) have been used to study the effect of RFv on strength-endurance. Similarly, there is a lack of consensus, since some studies reported lower strength-endurance at higher movement frequencies (Carnevale and Gaesser, 1991; Barker et al., 2006) and others, lower strength-endurance at lower movement frequencies (Leveque et al., 2002; Bessot et al., 2006). Moreover, the sole effect of RFv cannot be examined when using cyclic movements due to the concomitant influence of both movement frequency and velocity on strength-endurance. Indeed, movement frequency alone impacts strength-endurance by changing (i) the rest between repetitions and (ii) the number of contractions during a test of fixed duration (Enoka and Stuart, 1992; Broxterman et al., 2014). A lower time-to-exhaustion observed at higher movement frequencies can thus be due to shorter rest time between contractions and/or more contractions and/or higher contraction velocities. Overall, investigating the effect of RFv on strength-endurance requires (i) the use of an acyclic movement, allowing the dissociation from the effect of movement frequency, and (ii) the use of time-to-exhaustion tests at constant power to control the force-velocity and power output conditions throughout the test.
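Relationship (i) is the classic hyperbolic power-duration model of Monod and Scherrer (1965), which is conveniently fitted in its linear work-time form, Wlim = CP × tlim + W′. The sketch below fits it to made-up constant-power tests to exhaustion; the numbers are purely illustrative and are not the present study's data.

```python
# Illustrative fit of the Monod-Scherrer power-duration model from a few
# hypothetical constant-power tests to exhaustion (values are made up).
import numpy as np

power = np.array([400.0, 350.0, 300.0, 275.0])  # imposed power (W)
t_lim = np.array([75.0, 130.0, 320.0, 600.0])   # time to exhaustion (s)

w_lim = power * t_lim                      # total work at exhaustion (J)
cp, w_prime = np.polyfit(t_lim, w_lim, 1)  # slope = critical power, intercept = W'

print(f"CP ~ {cp:.0f} W, W' ~ {w_prime / 1000:.1f} kJ")
# The hyperbolic form then predicts exhaustion time at any power above CP:
print(f"t_lim at 320 W ~ {w_prime / (320 - cp):.0f} s")
```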
In parallel to RFv, strength-endurance can also be influenced by the power reserve. Indeed, due to the parabolic shape of the P-v relationship, a change in RFv at a matched %Pmax is associated with a change in the power reserve. This reserve corresponds to the difference between the maximal power capability at a specific velocity and the power output at that same velocity (Sargeant, 1994, 2007; Zoladz et al., 2000). This reserve can also be interpreted as a velocity-specific relative power (%Pmaxv): the lower the %Pmaxv, the larger the power reserve. When considering the same %Pmax, low force-high velocity conditions (often close to the optimal velocity) are associated with a larger power reserve and lower %Pmaxv, and might improve strength-endurance (Sargeant, 1994, 2007; Zoladz et al., 2000). Nevertheless, due to the concomitant change of RFv and %Pmaxv at matched %Pmax, it remains unclear whether the influence of RFv on strength-endurance is independent of %Pmaxv. Also, as matched %Pmaxv can lead to different %Pmax, the question of which of the two indices better represents exercise intensity remains unanswered. Clarifying the effects of %Pmax, %Pmaxv, and RFv on strength-endurance could be of great interest for scientific and training purposes, since typical strength-endurance evaluations have been standardized across individuals based on (i) the same relative load (e.g., percentage of the one-repetition maximum; Mayhew et al., 1992), (ii) the same movement velocity across individuals (Câmara et al., 2012), or (iii) the same resistive force per bodyweight during all-out cycling exercises (Bar-Or, 1987). Thus, provided RFv affects performance, inter-individual differences in strength-endurance observed with these commonly used methods could be mainly due to different mechanical conditions relative to individual capabilities (i.e., %Pmax, %Pmaxv, and/or RFv) rather than different physical abilities. These methods could represent both an inaccurate and non-specific means of assessing strength-endurance, and lead to practically ineffective testing and training regimes. The aim of the present study was to test the effects of force-velocity condition (i.e., RFv) and power output (i.e., %Pmax and %Pmaxv) on strength-endurance using an acyclic movement. We hypothesized that decreasing the velocity-specific relative power (%Pmaxv) increases strength-endurance via an increased power reserve, even if it leads to no change or an increase in %Pmax. We theorized that RFv influences strength-endurance independently from %Pmaxv, due to the likely different etiology of fatigue between high force-low velocity and low force-high velocity conditions (Enoka and Stuart, 1992; Morel et al., 2015, 2019).

Participants
Fourteen healthy participants (12 males and 2 females, age = 20 ± 2 years, mass = 73 ± 7 kg, and height = 1.79 ± 0.09 m) gave their written informed consent to participate in this study, with all procedures in agreement with the Declaration of Helsinki and the ethical standards of a local committee. All were involved in regular physical activity (14 ± 7 h of training per week) and were accustomed to strength-based resistance training (i.e., habitual use of submaximal to maximal loads). All participants were free of musculoskeletal pain or injury during the study.
Design
The main limitations of previous works were addressed in this study by using jumping exercises due to (i) the possibility of dissociating rest between repetitions from movement velocity, (ii) the accurate and reliable quantification of the mean force, velocity, and power output of the lower limbs, and (iii) their similarity to typical iso-inertial movements observed in sport and testing batteries. To test the effects of %Pmaxv, %Pmax, and RFv on strength-endurance, repeated squat jump (RSJ) tests to exhaustion were performed in various force-velocity-power (F-v-P) conditions. Overall, 10 F-v-P conditions were determined relative to the individual P-v relationship (detailed in the following sections), which meant conditions were graphically positioned on or under the P-v curve (gray points, Figure 1). This positioning of the F-v-P conditions implies that each condition has similar coordinates relative to individual maximal capabilities (i.e., the P-v relationship), but different individual absolute force, velocity, and power values. Thus, each F-v-P condition was characterized by a different power output (P1 to P5) and velocity (v1 to v6), expressed relative to the individual P-v relationship (Figure 1).

FIGURE 1 | Typical individual power-velocity relationship representing 100%Pmaxv (black curve) and 85%Pmaxv (gray curve), associated with single maximal squat jumps in different loading conditions (white points) and the 10 F-v-P conditions (gray points). Each F-v-P condition is defined by specific power and velocity coordinates. The dashed gray curve represents the different power-velocity conditions for jumps without load, from sub-maximal to maximal jump height. The crosshatched area under the gray and the black curves represents all power-velocity conditions that require assistance (i.e., total load lower than body mass), and thus were not measured.

In addition, the positioning of the F-v-P conditions follows the constraint imposed by the principles of dynamics during a vertical jump with and without additional load (represented by the white area under the P-v curve and the dashed gray line, respectively; Figure 1). The remaining crosshatched area represents F-v-P conditions requiring a simulated reduction in body weight with assistance. The 10 F-v-P conditions were selected to represent: (i) 3 velocity conditions at two %Pmax (corresponding to P3v3, P3v4, and P3v6 at 85%Pmax and to P2v2, P2v3, and P2v5 at ∼73%Pmax), (ii) 3 velocity conditions at two %Pmaxv (corresponding to P1v1, P2v3, and P3v6 at 85%Pmaxv and P2v2, P3v3, and P4v6 at 100%Pmaxv), and (iii) 3 power conditions at two velocities (corresponding to P5v6, P4v6, and P3v6 at v6 and to P3v3, P2v3, and P1v3 at v3). Note that all F-v-P conditions were determined only using power and velocity values to graphically position them relative to power capability as a common reference (i.e., the P-v relationship), but changes in velocity across the different power conditions also correspond to changes in RFv.

Protocol
This study comprised six sessions, separated by more than 48 h of rest (Figure 2). The first session familiarized participants with performing (i) single maximal-effort squat jumps (SJ) with and without load (range of loads detailed in Section "Force- and Power-Velocity Relationships Assessment") and (ii) unloaded RSJ tests until exhaustion (see section "Measurements and Data Analysis" for the exact definition of exhaustion).
In the second session, the individual F-v and P-v relationships of the lower limbs were evaluated from SJs with and without additional loads, then an RSJ test was performed in one specific F-v-P condition (P3v4) for the inter-day reliability analysis. From the third to the sixth session, each participant performed 12 RSJ tests randomly organized into 3 per session and separated by 30 min of passive rest (e.g., Karsten et al., 2016; Triska et al., 2017), which corresponded to: 1 RSJ test in each of the 10 F-v-P conditions, 1 RSJ test repeated one more time to assess intra-day reliability in the specific F-v-P condition (P3v4), and 1 RSJ test that was not included in the data analysis of the present study (black vertical RSJ bar, Figure 2), because it was dedicated to another aim not addressed here. The six sessions began with body mass measurements and a standardized warm-up consisting of 5 min of self-paced treadmill running followed by ∼15 min of dynamic lower-limb movements (including unloaded squats with maximal intention and sub-maximal and maximal SJs in unloaded and loaded conditions) and concluding with 5 min of non-fatiguing, personally selected exercises.

Familiarization Sessions
During the first session, the familiarization occurred in two distinct sets. The first set aimed to familiarize participants with the F-v and P-v relationships evaluation procedures. This first set included the same procedures as session 2, which are described in the next section, "Force- and Power-Velocity Relationships Assessment." The second set aimed to familiarize participants with the RSJ test. This second set comprised (i) three trials of unloaded RSJ targeting ∼50% of maximal jump height, each separated by 5 min of rest and ended when 10 successive repetitions were successfully performed at the target, and, after 30 min of passive rest, (ii) two unloaded RSJ tests aiming to maintain maximal jump height until exhaustion, interspersed by 30 min of passive rest. During this familiarization session, the individual starting position for the RSJ tests and the F-v and P-v relationships assessment was recorded and standardized throughout the study. The preferential starting position was chosen by the participant, the method with which force, velocity, and power output have been shown to be maximized and most reliable in the squat jump (Janicijevic et al., 2019). Using a barbell or a wooden dowel held across the shoulders, the starting position was matched with lateral adjustable supports (∼1 cm resolution), preventing participants from going beyond the starting position during the downward movement of the SJ (Figure 3).

FIGURE 3 | Schematic setup for all squat jumps performed to determine individual F-v and P-v relationships and during RSJ tests to exhaustion.

The individual push-off distance (hpo) was determined as the difference between the length of the lower limbs extended with maximal foot plantarflexion (iliac crest-toe distance) and the vertical distance between the iliac crest and the ground in the starting position.

Force- and Power-Velocity Relationships Assessment
The determination of the individual F-v and P-v relationships included 5 SJ loading conditions ranging from 0 to 100% of body weight, with each condition performed twice. For each trial, participants stood stationary holding a barbell on their shoulders for the additional-load conditions or a wooden dowel (∼400 g) for the unloaded condition (i.e., 0% of body weight).
They lowered the bar to reach their individual starting position and, after maintaining this position for 2 to 3 s, they were asked to jump maximally without countermovement. They were also prompted to touch down in the same leg position as at take-off: extended legs with foot plantarflexion. If these requirements were not met, the trial was discarded and then repeated. The trial with the greatest jump height across all trials was used to determine the individual F-v and P-v relationships (Samozino et al., 2008). When the force exerted against a certain load led the coefficient of determination of the F-v relationship to be lower than 0.96, a third repetition was performed with that specific load to confirm or infirm the trial.

Repeated Jump Test
For each RSJ test, the practical setting of a given F-v-P condition consisted of modulating the additional load and the jump height based on the fundamental laws of dynamics and following the equations proposed and validated by Samozino et al. (2008). Briefly, during the push-off phase of an SJ, the mean force (F, Eq. 1), velocity (v, Eq. 2), and power (P, Eq. 3) developed by the lower limbs can be expressed as:

F = (mbody + mbar) g (h/hpo + 1) (1)
v = √(g h / 2) (2)
P = F v (3)

where mbody is the body mass, mbar the mass of the bar (including the mass of the bar [10 kg] and the additional mass), g the gravitational acceleration (9.81 m.s−2), and h the jump height. From Eqs 1 and 2, the jump height (Eq. 4) and the additional mass (Eq. 5) can be computed as a function of the targeted F-v-P conditions:

h = 2 v² / g (4)
mbar = F / [g (h/hpo + 1)] − mbody (5)

Consequently, participants were instructed to reach a targeted jump height under a specific loading condition, which allowed them to perform an RSJ test in the targeted F-v-P condition. The jump height was self-controlled and aided by continuous visual feedback of the jump height, displayed repetition by repetition on the screen in front of the participant (Figure 3). Where the required additional mass was lower than the mass of the bar, participants wore a weighted vest with the appropriate added load (0.5 kg resolution) and held the wooden dowel. The jumping frequency was adjusted at each RSJ test, allowing 2.5 s of rest between two successive SJs. The jumping frequency was monitored using two audible beeps signaling (i) the initiation of the downward movement to reach the starting position and (ii) the initiation of the jump. Participants were verbally encouraged to maintain the targeted jump height as long as possible (i.e., until exhaustion). Once the jump height dropped below the target, participants were given strong encouragement to continue with maximal intent (i.e., aiming for maximal height). All procedures were monitored by the experimenters via their own screen, hidden from the participants during their trials.

Measurements and Data Analysis
For the SJs performed during the RSJ tests and the F-v and P-v relationships assessment, the force, velocity, and power developed during the push-off phase were computed using Eqs 1-3. The jump height was determined from the fundamental laws of dynamics and the aerial time (Asmussen and Bonde-Petersen, 1974), the latter being obtained using an infrared timing system (OptoJumpNext, Microgate, Bolzano, Italy).
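A minimal sketch of Eqs 1-5 in code, as one might use them to program a target condition for an RSJ test; the participant values in the example are hypothetical, and the flight-time height formula is the standard relation implied by the Asmussen and Bonde-Petersen method.

```python
# Sketch of Eqs 1-5 (Samozino et al., 2008) for setting one RSJ condition:
# given target F and v, compute the jump height to display and the mass to load.
import math

G = 9.81  # gravitational acceleration (m/s^2)

def mean_force(m_body, m_bar, h, h_po):          # Eq. 1
    return (m_body + m_bar) * G * (h / h_po + 1)

def mean_velocity(h):                            # Eq. 2
    return math.sqrt(G * h / 2)

def target_height(v):                            # Eq. 4 (inverts Eq. 2)
    return 2 * v**2 / G

def additional_mass(F, h, h_po, m_body):         # Eq. 5 (inverts Eq. 1)
    return F / (G * (h / h_po + 1)) - m_body

def height_from_flight_time(t_air):              # flight-time method: h = g*t^2/8
    return G * t_air**2 / 8

# Hypothetical condition: F = 1250 N, v = 1.0 m/s for a 73-kg participant
# with a 0.45-m push-off distance.
h = target_height(1.0)
m_bar = additional_mass(1250.0, h, 0.45, 73.0)
assert abs(mean_velocity(h) - 1.0) < 1e-9        # Eq. 2 and Eq. 4 are consistent
print(f"target height: {h:.3f} m, additional mass: {m_bar:.1f} kg")
print(f"check Eq. 1: {mean_force(73.0, m_bar, h, 0.45):.0f} N")  # -> 1250 N
```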
For each participant, the F-v and P-v relationships were determined from the F, v, and P values obtained from the 5 loading SJ conditions and were used to extrapolate F0 and v0, the y- and x-intercepts of the F-v relationship, respectively. Then, Pmax was computed as (Samozino et al., 2012):

Pmax = F0 v0 / 4

For each of the 10 RSJ conditions, RFv was computed as the ratio between the force developed (expressed relative to F0) and the velocity (expressed relative to v0), i.e., RFv = (F/F0)/(v/v0). Exhaustion was defined as the inability to perform three consecutive jumps above 95% of the targeted jump height. Strength-endurance was quantified by (i) the maximum repetitions (SJRep) and (ii) the cumulated mechanical work output (Wtot) associated with SJRep. SJRep corresponded to all repetitions preceding exhaustion, excluding the three jumps below the limit of 95% of the targeted performance, and Wtot was computed as the sum of the mechanical work of all repetitions of SJRep.

Statistical Analysis
All data are presented as mean ± standard deviation (SD). Intra-set RSJ height variability around the targeted jump height value was assessed using a coefficient of variation. Also, the absolute intra- and inter-day reliability of SJRep in the P3v4 condition was assessed with the standard error of measurement (SEM; Hopkins et al., 2001), expressed in raw units and standardized to the inter-individual SD. The relative intra- and inter-day reliability of SJRep in the P3v4 condition was assessed with the intra-class correlation coefficient (ICC), which was interpreted as almost perfect (0.81 to 1.00), substantial (0.61 to 0.80), moderate (0.41 to 0.60), fair (0.21 to 0.40), slight (0.01 to 0.20), or poor (<0.01; Landis and Koch, 1977). The difference between the two trials was tested with the paired-sample t-test. The respective effects of %Pmax, %Pmaxv, and RFv on both SJRep and Wtot were examined using two separate stepwise multiple linear regressions performed on the averaged data of the 10 F-v-P conditions of the RSJ tests (n = 10), with %Pmax, %Pmaxv, and RFv as independent variables and SJRep or Wtot (log-transformed to support the linearity of the relationships; Monod and Scherrer, 1965; Jones and Vanhatalo, 2017) as the dependent variable. To test the main effects of %Pmax, %Pmaxv, and RFv on both SJRep and Wtot, as well as their interactions, two two-way ANOVAs with repeated measures were performed on SJRep and Wtot separately: (i) effects of RFv (low, medium, and high levels) and %Pmax (∼73%Pmax and ∼85%Pmax), and (ii) effects of RFv (low, medium, and high levels) and %Pmaxv (85%Pmaxv and 100%Pmaxv). Each ANOVA was performed after checking for distribution normality and equality of variance with Shapiro-Wilk's and Mauchly's tests, respectively. In the case of non-normality and violation of the assumption of sphericity, the non-linear logarithm transformation and the Greenhouse-Geisser correction were applied, respectively (Sainani, 2012). Holm's post hoc test was used to highlight significant differences between conditions, as well as simple main effects to test the effect of the first main factor at each level of the second factor, and vice versa. For all statistical analyses, an alpha value of 0.05 was accepted as the level of significance.

RESULTS
All individual F-v relationships fitted by linear regressions showed very high quality (R2 = 0.98 to 1; p < 0.001) and were associated with an F0 of 2202 ± 317 N (30.1 ± 3.5 N.kg−1), a v0 of 2.79 ± 0.43 m.s−1, a Pmax of 1542 ± 329 W (21.0 ± 4.0 W.kg−1), and an hpo of 0.45 ± 0.06 m.
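The profiling chain described above (linear F-v fit, extrapolation of F0 and v0, Pmax = F0v0/4, and the per-condition RFv and %Pmaxv) can be sketched as follows; the five force-velocity pairs are invented to resemble the group means, not taken from the dataset.

```python
# Sketch of individual F-v profiling and derived indices (illustrative data).
import numpy as np

F = np.array([1178.0, 1296.0, 1414.0, 1493.0, 1587.0])  # mean force (N)
v = np.array([1.30, 1.15, 1.00, 0.90, 0.78])            # mean velocity (m/s)

slope, F0 = np.polyfit(v, F, 1)  # linear F-v relationship, slope < 0
v0 = -F0 / slope                 # velocity-axis intercept
P_max = F0 * v0 / 4              # apex of the parabolic P-v relationship

def r_fv(F_cond, v_cond):
    """Ratio between relative force and relative velocity."""
    return (F_cond / F0) / (v_cond / v0)

def pct_pmax_v(P_cond, v_cond):
    """Power output relative to the velocity-specific maximal power."""
    return 100 * P_cond / (F0 * v_cond * (1 - v_cond / v0))

print(f"F0 = {F0:.0f} N, v0 = {v0:.2f} m/s, Pmax = {P_max:.0f} W")
print(f"RFv = {r_fv(1250.0, 1.0):.2f}, %Pmaxv = {pct_pmax_v(1250.0, 1.0):.0f}%")
```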
The SEM, ICC, and t-test p-values between the trials performed in the P3v4 condition to assess intra-day and inter-day reliability are presented in Table 1.

TABLE 1 | Mean ± SD of the maximum repetitions in the P3v4 condition obtained from the two trials of the intra-day and inter-day reliability analysis.

The RSJ additional load, targeted jump height, intra-set coefficient of variation of jump height, and jumping frequency associated with the 10 F-v-P conditions are presented in Table 2. SJRep, RFv, and Wtot, as well as the targeted and achieved absolute and relative force, velocity, and power values associated with the 10 F-v-P conditions, are presented in Table 3.

TABLE 3 | Mean ± SD of the maximum repetitions, force-velocity ratio, and absolute and relative achieved and targeted force, velocity, and power output for each of the 10 F-v-P conditions.

The stepwise multiple regression analysis with SJRep as the dependent variable showed that %Pmaxv (88.4% of the variance explained, beta-weight of −0.812) and RFv (9.1% of the variance explained, standardized beta-weight of −0.327) accounted for a significant amount of SJRep variability (p < 0.001; F = 134.187). The regression model obtained was ln(SJRep) = 17.042 − 0.144(%Pmaxv) − 0.649(RFv), which indicated a very high goodness of fit (R2 = 0.975, p < 0.001) with low residuals (RMSE = 0.243). The stepwise multiple regression analysis with Wtot as the dependent variable showed that %Pmaxv (89.2% of the variance explained, beta-weight of −0.825) and RFv (7.9% of the variance explained, standardized beta-weight of −0.305) accounted for a significant amount of Wtot variability (p < 0.001; F = 116.866). The regression model obtained was ln(Wtot) = 22.140 − 0.132(%Pmaxv) − 0.545(RFv), which indicated a very high goodness of fit (R2 = 0.971, p < 0.001) with low residuals (RMSE = 0.234).

Effect of RFv and %Pmax on SJRep and Wtot
The two-way ANOVA with repeated measures testing the effect of %Pmax and RFv on SJRep showed a main effect of RFv (p < 0.001) and an RFv × %Pmax interaction (p < 0.001), but no main effect of %Pmax (p = 0.129; Figure 4A). Post hoc comparisons revealed significant differences (p < 0.001) for all comparisons between the three RFv levels, with an increase of SJRep as RFv decreased. A simple main effect of %Pmax was observed at the highest level of RFv (p < 0.001), but not at the two lower levels (p = 0.129 and p = 0.782 for the lowest and the middle level, respectively). A simple main effect of RFv was observed at the two levels of power (p < 0.001). The two-way ANOVA with repeated measures testing the effect of %Pmax and RFv on Wtot showed main effects of RFv (p < 0.001) and %Pmax (p < 0.001), and an RFv × %Pmax interaction (p < 0.001; Figure 4C). Post hoc comparisons revealed significant differences for all comparisons between the three RFv levels (p < 0.001), with an increase of Wtot as RFv decreased. A simple main effect of %Pmax was observed at the low level of RFv (p < 0.001), but not at the moderate and high levels (p = 0.954 and p = 0.323, respectively). There was a simple main effect of RFv at the two levels of power (p < 0.001).

Effect of RFv and %Pmaxv on SJRep and Wtot
The two-way ANOVA with repeated measures testing the effect of %Pmaxv and RFv on SJRep showed main effects of RFv (p < 0.001) and %Pmaxv (p < 0.001), and an RFv × %Pmaxv interaction (p = 0.03; Figure 4B). Post hoc comparisons revealed significant differences in all comparisons between the three RFv levels (p < 0.05), with an increase of SJRep as RFv decreased.
A simple main effect of %Pmaxv was observed at each level of RFv (p < 0.001). There was a simple main effect of RFv at 85%Pmaxv (p < 0.001) and a trend at 100%Pmaxv (p = 0.078). The two-way ANOVA with repeated measures testing the effect of %Pmaxv and RFv on Wtot showed main effects of RFv (p < 0.001) and %Pmaxv (p < 0.001), and an RFv × %Pmaxv interaction (p < 0.001; Figure 4D). Post hoc comparisons revealed significant differences at the three RFv levels (p < 0.05), with an increase of Wtot as RFv decreased. A simple main effect of %Pmaxv was observed at the three levels of RFv (p < 0.001). There was a simple main effect of RFv observed at 85%Pmaxv (p < 0.001), but only a trend at 100%Pmaxv (p = 0.134).

DISCUSSION
The main finding of this study was that strength-endurance in repeated jumping depends on the force, velocity, and power conditions, expressed relative to the force- and power-velocity relationships. The large intra-individual differences in both the maximum repetitions and the total work produced across the 10 F-v-P conditions studied (from ∼3 to ∼150 repetitions and from ∼2000 to ∼70,000 J) were almost entirely explained (∼98%) by both the velocity-specific relative power and the ratio between force and velocity used to generate power. Strength-endurance was higher at lower velocity-specific relative power and in lower force-higher velocity conditions. The intra- and inter-day reliability of the RSJ test to exhaustion was acceptable and congruent with previously reported reliabilities for tests to exhaustion of approximately similar duration (e.g., Coggan and Costill, 1984; Hinckson and Hopkins, 2005). In comparison with %Pmax and RFv, %Pmaxv was the mechanical condition that most affected strength-endurance (i.e., ∼88-89% of the variance explained in SJRep and Wtot). %Pmax was not a predictor of strength-endurance, notably since it does not consider the change in power capability with the force-velocity condition. Indeed, at the same %Pmax, the power output relative to the velocity-specific Pmax (i.e., Pmaxv) can be drastically different according to the force-velocity condition and lead to substantial differences in strength-endurance performance. It is worth noting that, among the 10 F-v-P conditions, a lower %Pmax was not systematically associated with a higher strength-endurance. For example, the 3 F-v-P conditions at ∼85%Pmax, ∼73%Pmax, and ∼62%Pmax were associated with performances of ∼58, ∼21, and 12 repetitions, respectively. This further highlights the inability of %Pmax to represent exercise intensity, notably when the exercises are not performed in the same force-velocity condition. Since the force-velocity condition varies during field performance and physical testing due to the changing loading/resistive conditions and levers/equipment used, the common use of %Pmax to represent exercise intensity could be challenged (e.g., Harman et al., 1987; Bundle et al., 2003). Instead, it appears that %Pmaxv better represents exercise intensity, since it considers the change in the individual maximal power capabilities according to the force-velocity condition. Thus, strength-endurance seems to depend primarily on power output expressed relative to the velocity-specific maximal power, and not to the maximal power value developed at optimal velocity.
This supports the importance of the power reserve (Sargeant, 1994, 2007; Zoladz et al., 2000) and, in turn, the influence of maximal power capabilities (i.e., the P-v relationship) on the individual ability to maintain sub-maximal power over time, notably at high exercise intensities.

The second strongest mechanical predictor of strength-endurance was R_Fv, which explained ∼8-9% of the variance in SJ_rep and W_tot. Note that the remaining variance (∼2-3%) is likely due to measurement errors. Decreasing R_Fv (i.e., increasing movement velocity and decreasing the force output at matched %Pmax or %Pmax_v) resulted in increased strength-endurance. These results confirm that, when rest time between repetitions is standardized, a change in force-velocity condition influences strength-endurance independently from a change in %Pmax_v (Figures 4B,D) or a change in %Pmax (Figures 4A,C). These findings contrast with previous hypotheses suggesting that increasing movement velocity is unbeneficial (Mathiassen, 1989; Carnevale and Gaesser, 1991; Morel et al., 2015), notably due to potentially higher proportions of fatigable type II muscle fiber recruitment (Beelen and Sargeant, 1991b; Blake and Wakeling, 2014). However, as these studies did not use standardized rest time between contractions and fixed repetitions across velocity conditions, the negative effect of low rest time in high-frequency conditions could have counteracted the positive effect of movement velocity. Additionally, as the mechanical work per repetition differed across the F-v-P conditions, the total mechanical work produced until exhaustion is likely a better index of strength-endurance, even if less practically relevant. Although R_Fv explained a comparatively small part of the overall variance, its change led to substantial differences in strength-endurance (e.g., ∼13, ∼20, and ∼60 repetitions at 85%Pmax_v, with associated mean R_Fv values of ∼2.9, ∼2.1, and ∼1.3, respectively).

FIGURE 5 | Schematic three-dimensional power-velocity-endurance relationships representing mean maximum repetitions across individuals in the 10 F-v-P conditions (colored horizontal cylinders). The dashed gray curve represents the different power-velocity conditions for jumps without load, from sub-maximal to maximal jump height. The crosshatched area under the gray and black curves represents all power-velocity conditions that require assistance (i.e., total load lower than body mass) and thus were not measured.

It is worth noting that the influence of R_Fv on strength-endurance can change according to %Pmax_v, as shown by the significant R_Fv × %Pmax_v interaction. Indeed, the effect of R_Fv is further magnified at lower %Pmax_v (Figures 4B,D). Taken together, these results show that increases in velocity and decreases in force at the same %Pmax_v or %Pmax during acyclic movements (e.g., repeated jumps or callisthenic exercises) are beneficial rather than detrimental and can lead to substantial changes in maximum repetitions and cumulated work until exhaustion. Strength-endurance at the individual level seems to be almost fully dependent on F-v-P conditions, expressed relative to the individualized F-v and P-v relationships. More specifically, performance is determined by the position of the exercise mechanical conditions on or under the F-v and P-v relationships, this position being characterized by %Pmax_v and R_Fv (expressed relative to F0 and v0; Figure 5).
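For completeness, the force-velocity ratio itself is a one-line computation once a condition's mean force and velocity are known. The normalization below, (F/F0)/(v/v0), is only our reading of "expressed relative to F0 and v0" and should be verified against the authors' exact definition.

```python
def force_velocity_ratio(force, velocity, f0, v0):
    """Force-velocity ratio R_Fv, with force and velocity normalized to
    the F-v intercepts F0 and v0 (assumed definition; verify vs. the paper)."""
    return (force / f0) / (velocity / v0)

# A moderately force-oriented condition under illustrative intercepts:
print(force_velocity_ratio(1560.0, 1.2, f0=3000.0, v0=3.0))  # 1.3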
Limitations of the Study

One limitation of this study is the restricted range investigated relative to the entire P-v spectrum, which consequently limits the extrapolation of the results on the effect of %Pmax_v and R_Fv on strength-endurance beyond the optimal velocity. However, the range of movement velocities explored was maximized considering the conditions occurring in sports activities (i.e., inertial and resistive conditions close to body weight and higher). The range of power explored was also nearly maximized, from maximal jump height to a jump height of ∼10 cm with different loadings. The latter was proposed as a cut-off jump height for accurate assessment of force, velocity, and power output with the practical field method used in this study (García-Ramos et al., 2018). Also, although rest time between contractions was controlled in the present study, slight differences were observed in jumping frequencies across F-v-P conditions. However, due to the specificity of the RSJ test, the other main mechanisms associated with the negative effect of movement frequency, that is, the lower effectiveness of force application (Dorel et al., 2010) and higher internal work (Zoladz et al., 2000), may have only minimally affected our results. Another limitation is the focus on understanding the difference in strength-endurance between different F-v-P mechanical conditions, without considering inter-individual differences. Qualifying the physical abilities underlying differences in strength-endurance between two participants tested in the same %Pmax_v and R_Fv conditions would be a beneficial avenue for future research.

Practical Applications and Perspectives for Future Studies

• Strength-endurance evaluation should be standardized according to the individual F-v and P-v relationships, notably via %Pmax_v and R_Fv, rather than to (i) a given percentage of maximal force (Mayhew et al., 1992), (ii) the same movement velocity across individuals (Câmara et al., 2012), or (iii) the same resistive force per body weight during all-out cycling exercises (Bar-Or, 1987). Without such standardization, inter-individual differences in strength-endurance could be mainly due to different %Pmax_v and R_Fv conditions among individuals rather than a marker of different physical abilities. Such "force-velocity-power based training" could help strength and conditioning practitioners improve the strength-endurance of athletes in competition-specific %Pmax_v and R_Fv conditions.

• Similarly, standardizing dynamic fatiguing protocols and the subsequent fatigue assessment only relative to %Pmax or maximal isometric force (Millet et al., 2011) could be challenged, since each individual may experience different %Pmax_v and R_Fv conditions during both phases of such an experiment. Thus, it is likely that the typically high inter-individual variability in fatigue response (Morel et al., 2019) could be explained by the non-consideration of the F-v-P conditions under which the evaluation or the effort was performed.

• RSJ is a reliable, practical, and modifiable method to evaluate lower-limb strength-endurance in a broad range of exercise conditions specific to field situations. Indeed, the results of the present study showed that strength-endurance assessment in jumping exhibited acceptable absolute and almost perfect relative intra- and inter-day reliability. These values are in agreement with those reported in cycling for efforts of approximately similar duration (e.g., Coggan and Costill, 1984; Hinckson and Hopkins, 2005).
The only requirements of an RSJ test are measurements of body mass, push-off distance, and continuous jump height over successive repetitions, together with Samozino et al.'s validated simple method to estimate force, velocity, and power in jumping (Samozino et al., 2008; Giroux et al., 2014; Jiménez-Reyes et al., 2014; García-Ramos et al., 2019). Notably, there are many convenient methods of obtaining the necessary variables (e.g., phone applications or other common devices, such as optical systems). Since different sporting scenarios involving repeated lower-limb extensions feature different underlying expressions of movement frequency, force, velocity, and power output (e.g., volleyball vs. skiing disciplines), it is possible to adapt these mechanical conditions by manipulating rest time, loading, and jump height. While the RSJ test is relatively simple, participants unfamiliar with it should be well familiarized to ensure reasonable accuracy and reliability of assessment (Hopkins et al., 2001).

CONCLUSION

Strength-endurance in jumping, characterized either as the maximum repetitions or as the cumulated mechanical work performed until exhaustion, depends on both the velocity-specific relative power (or the power reserve) and the underlying force-velocity condition. Strength-endurance was higher when velocity-specific relative power was lower (i.e., larger power reserve) and when the force-velocity condition used to generate power was oriented toward low force-high velocity (at least up to the optimal velocity). The RSJ is a reliable and practical method to assess strength-endurance of the lower limbs, with the possibility to easily set these mechanical conditions by manipulating jump height, loading, and rest time between jumps. Strength-endurance in acyclic movements depends on the position of the exercise mechanical conditions, in terms of relative force, velocity, and power, which can be situated on or under the force-velocity and power-velocity relationships. Since both maximal capabilities (i.e., the force- and power-velocity relationships) and the exercise mechanical conditions (i.e., the force-velocity condition and the velocity-specific relative power) influence strength-endurance performance, both should be controlled and targeted to standardize testing and training between individuals and to explore the underlying mechanisms of fatigue.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Comité d'Ethique de la Recherche à l'Université Savoie Mont Blanc. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

JR, PS, NP, and LM conceived and designed the experimentation. JR conducted the experiments and wrote the manuscript. JR, MC, and PS analyzed the data. All authors read and approved the manuscript.
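As a closing illustration of the RSJ testing requirements listed above, the following sketch estimates mean force, velocity, and power from body mass, push-off distance, and jump height. The formulas follow Samozino et al.'s simple method as it is commonly stated; treat them as assumptions to check against the original 2008 paper before use.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def simple_method(body_mass_kg, push_off_m, jump_height_m):
    """Mean force (N), velocity (m/s), and power (W) over the push-off,
    per Samozino et al.'s simple jump method (as commonly stated)."""
    f_mean = body_mass_kg * G * (jump_height_m / push_off_m + 1.0)
    v_mean = math.sqrt(G * jump_height_m / 2.0)
    return f_mean, v_mean, f_mean * v_mean

# Example: 75 kg athlete, 0.40 m push-off distance, 0.30 m jump height
# gives roughly 1288 N, 1.21 m/s, and 1562 W.
print(simple_method(75.0, 0.40, 0.30))
```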
Road Markings Segmentation from LIDAR Point Clouds using Reflectivity Information

Chair Sustainable Transport Logistics 4.0, Johannes Kepler University Linz, Altenberger Straße 69, 4040 Linz, Austria. {novel.certad hernandez, walter.morales alvarez, cristina.olaverri-monreal}@jku.at

Lane detection algorithms are crucial for the development of autonomous vehicle technologies. The most widespread approach is to use cameras as sensors. However, LIDAR sensors can cope with weather and light conditions that cameras cannot. In this paper, we introduce a method to extract road markings from the reflectivity data of a 64-layer LIDAR sensor. First, a plane segmentation method along with region-grow clustering was used to extract the road plane. Then we applied an adaptive thresholding based on Otsu's method and, finally, we fitted line models to filter out the remaining outliers. The algorithm was tested on a test track at 60 km/h and on a highway at 100 km/h. The results showed that the algorithm was reliable and precise. There was a clear improvement when using reflectivity data in comparison to the use of the raw intensity data, both of which are provided by the LIDAR sensor.

I. INTRODUCTION

Nowadays, lane-detection algorithms are crucial for the implementation of Advanced Driver Assistance Systems (ADAS) with different levels of autonomy, such as Lane Keeping Assistance (LKA), Lane Change Assistant (LCA), and Lane Departure Warning (LDW), among others. The development and deployment of vehicles with level 3 automation or higher (according to the automation levels defined in the Society of Automotive Engineers (SAE) J3016 standard [1], ranging from "no driving automation" (level 0) to "full driving automation" (level 5)) makes them even more important. The estimation of the lane's shape on structured roads very often relies on the white lines used as road markers and sometimes on the road curb itself. For their detection, most of the related works tend to process images taken by cameras located on the outside of the vehicle or installed under the windshield [2]-[4]. This image-processing approach exhibits great results under both good lighting and good weather conditions. However, it fails at nighttime, in bright sunlight, or under adverse weather conditions. In contrast, Light Detection And Ranging (LIDAR) sensors remain almost unaffected by poor lighting conditions [5] and provide an accurate and reliable way to measure the distances to objects around the vehicle. LIDAR prices have been dropping over the last few years, resulting in a wider deployment and use in the autonomous vehicles field. Since the lane-marking reflectivity is improved by reflective glass beads embedded in the surface of the paint, the intensity of the reflected laser beam is expected to be higher than the intensity of the beams reflected by the rest of the road (asphalt or concrete) [5]-[9]. However, the beam intensity is also affected by the distance to the target and the incidence angle against the surface. In recent years, LIDAR manufacturers like Velodyne Lidar and Ouster have begun to offer reflectivity information along with the intensity and range signals. Reflectivity data carries information about the inherent reflective property of the target, being unaffected by lighting conditions and range [10]. Therefore, reflectivity is a powerful tool for road-marking detection, as can be seen in Figure 1. In this paper, we review several road-marking detection algorithms (section II), primarily based on LIDAR intensity, range, or both.
Later on, we propose our method to segment road markings from LIDAR point clouds using the reflectivity information instead of the intensity channel. The whole procedure was implemented using the Point Cloud Library (PCL) [11] in C++, ensuring compatibility with the Robot Operating System (ROS) among other common frameworks. The detailed description is in section III. The procedure was tested with two different datasets (section IV), and the results (section V) demonstrate that the use of reflectivity information provides better results than intensity. Finally, section VI concludes the present work, outlining future research.

II. RELATED WORK

Vision-based works are not included in this section as they are outside the scope of the study. Nevertheless, a full review can be found in [2]. In [9] a 6-layer LIDAR sensor, with only three layers facing the ground, was used. A hybrid approach using range and intensity information was presented, and Otsu's method [12] was used to distinguish between the road surface and lane marker signals. A detailed analysis regarding the material of the road (asphalt vs. concrete), the type of road markings (plain vs. raised), and different weather conditions (rain, sun, and night) was presented. However, the study did not present quantitative results. In [5] the authors presented a method for offline annotation of lane markings. The lane-marking candidate points were detected as local maxima with an intensity value greater than a dynamic threshold. Odometry data were used to keep track of successive scans, and the final candidates were then classified as solid or dashed lane markings. Both [10] and [13] achieved lane-marking detection using reflectivity information and tracking based on an Extended Kalman filter. In [10], the authors initially stacked consecutive one-dimensional LIDAR scans to generate a reflectivity map or image, which was processed using common image-processing techniques (binarization, Canny filtering, fixed thresholding). Afterward, they generated the underlying lane model by approximating the filtered points using the Hough transform. In [13], the authors used a Finite Impulse Response (FIR) bandpass filter on a 64-layer LIDAR sensor to remove noise. Then a fixed threshold was applied to obtain candidate line points. Finally, the authors fitted these points to clothoid curves to obtain the road markings. In [14], the authors detected the road-marking points using a fixed threshold over the intensity channel of a 32-layer LIDAR sensor. First, the authors detected the road lines by searching for a set of parallel lines integrated into a digital map using a GNSS/INS system. They then used an expectation-maximization method to detect parallel lines. The same LIDAR was used in [15], where a modified version of Otsu's method was applied to each scanned line. Then, a localization method based on the extracted road markings was presented. Finally, the resulting algorithm was improved in [15] and validated with a real test. In [7] and [16], the intensity scans from an RS-LiDAR-16 were processed with a multi-threshold variation of Otsu's method to determine road-marking candidate points. These candidates were then filtered with a Random Sample Consensus (RANSAC) line model. In [8], the authors proposed a method based on map localization. The method relied on road lane-marking detection.
Under the assumptions of flatness and smoothness of highway road surfaces, a ring analysis was performed to determine the points over the road surface. Then, an intensity-based thresholding was applied to extract the road-marking points. Finally, these detected lane markings were matched to an HD map using a Particle Filter (PF). In [17] the authors used LIDAR scans to determine virtual lanes not related to the real lane markings. First, a height-based filter eliminated the points of the road surface. Then, an unsupervised segmentation algorithm was run to cluster the points. Clusters with similar lateral positions were merged, and two independent circular models were fitted to the clusters, forming the so-called virtual lanes. A similar approach for unstructured roads was presented in [18]. The authors of [19] used a roadside LIDAR for lane detection. The approach consisted of identifying the ground plane, extracting the lane-marking points based on the difference in laser intensity, and dividing the ground within the range of LIDAR scanning into different stripes to extract the lane markings.

III. IMPLEMENTATION

Our procedure comprises two main blocks with several sub-processes. The complete procedure is depicted in Figure 2, and the detailed explanation is given in the sections below. Consider a LIDAR sensor with N_L layers and N_P points per layer. A single scan of the LIDAR sensor produces a point cloud P_A defined as an N_L × N_P set of points p_ij ∈ R^6:

P_A = { p_ij = (x_ij, y_ij, z_ij, r_ij, I_ij, R_ij) : i = 1, ..., N_L; j = 1, ..., N_P }

For each point p_ij ∈ P_A, the following quantities are defined:
• x_ij, y_ij, z_ij are the 3D spatial coordinates of the point in the LIDAR reference frame.
• r_ij is the distance (range) from the LIDAR reference frame to the point.
• I_ij and R_ij are the intensity and reflectivity levels, respectively.
• i is the index of the layer in which the point is located, an integer between 1 and N_L.
• j is the point index within the layer, an integer between 1 and N_P.

A. Pre-filtering

Point clouds obtained from multi-layer 3D-LIDAR sensors contain massive amounts of data. Thus, it is important to reduce their size in order to improve the processing speed in further steps. Discarding non-relevant information is the best way to achieve this reduction. Therefore, we implemented a pre-filtering method that relied on the following two sources of spatial information for each point in the point cloud:
• Layer reduction: We first filtered out the LIDAR layers that were scanning above the horizon, as they acquire data from objects in the environment that are utterly irrelevant for the algorithm (buildings, trees, traffic signs, etc.). In our implementation, we used only the 30 lower layers of our 64-layer LIDAR.
• Height-based filter: Similarly, we performed a filtering over the z-axis (perpendicular to the floor) by keeping the points between a lower threshold (z_L) and an upper threshold (z_U).

Consider P_A as the original point cloud; then the pre-filtered point cloud P_B ⊂ P_A is the set obtained by the following rule:

P_B = { p_ij ∈ P_A : i ∈ L_ground, z_L ≤ z_ij ≤ z_U }

where L_ground is the set of retained below-horizon layers (the 30 lower layers in our implementation).

B. Road plane segmentation

Once the points are limited to the vicinity of the road plane, we used RANSAC as an iterative method to find the plane model (a, b, c, d) which best fits the point cloud P_B. We then filtered out the points that did not fit the plane model within a threshold (TH_plane), obtaining a new point cloud P_C defined as:

P_C = { p_ij ∈ P_B : |a·x_ij + b·y_ij + c·z_ij + d| / sqrt(a² + b² + c²) ≤ TH_plane }

An example of the results after executing the pre-filtering along with the plane segmentation can be seen in Figure 3a.
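Since the published implementation uses PCL in C++, the following NumPy sketch is only an illustration of the pre-filtering and plane-segmentation logic described above; all names, the sampling scheme, and the thresholds are our own choices.

```python
import numpy as np

def height_filter(pts, z_low, z_up):
    """Keep points whose z coordinate lies in [z_low, z_up] (pts: N x 3)."""
    keep = (pts[:, 2] >= z_low) & (pts[:, 2] <= z_up)
    return pts[keep]

def ransac_plane(pts, n_iter=200, thresh=0.05, seed=0):
    """Fit a plane ax+by+cz+d=0 by RANSAC; return (a,b,c,d) and inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:               # degenerate (collinear) sample
            continue
        normal /= norm                # unit normal -> distances are metric
        d = -normal @ sample[0]
        dist = np.abs(pts @ normal + d)
        inliers = dist <= thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (*normal, d)
    return best_model, best_inliers

# Toy usage: gate by height, then keep only plane inliers (road candidates).
cloud = np.random.rand(1000, 3)
near_road = height_filter(cloud, z_low=-2.0, z_up=0.5)
model, mask = ransac_plane(near_road)
road_points = near_road[mask]
```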
C. Region growing clustering

A region-growing clustering was implemented in order to filter out points that did not belong to the road but to the curb, the sidewalk, or other low-height structures. For each point, we analyzed the vicinity to find the K_RGC nearest neighbors and then estimated the normal of the neighbors' surface evaluated at the original point. Then, the clustering algorithm selected a point in the cloud and started growing a region iteratively based on two thresholds: when the angle between two points' normals was less than TH_angle and the difference between their curvatures was less than TH_curve, they were considered to be in the same region. An example of the resulting point cloud after the execution of the clustering is depicted in Figure 3b.

D. Adaptive thresholding

Since the LIDAR layers (i) could differ in calibration, the reflectance of the same object could produce different values in two different layers. Thus, a different threshold was calculated for each of the layers (i). First, a reflectivity histogram h_i(k) of size N_bins was built considering the N_Pi points inside each layer (i). At this point, a threshold was applied across the reflectivity channel to split the remaining data points between two possible classes: road (C_R) and road markings (C_M). As seen in the literature, Otsu's method [12] is widely used to find the threshold that maximizes the inter-class variance (or, in other words, minimizes the intra-class variance) [7], [9], [15], [16]:

σ_b²(t) = ω_R(t)·ω_M(t)·[μ_R(t) − μ_M(t)]²

where ω_R(t) and ω_M(t) are the probabilities of the two classes separated by a threshold t, and μ_R(t) and μ_M(t) are the respective class averages. With the histogram normalized so that Σ_k h_i(k) = 1, all of them were calculated as follows:

ω_R(t) = Σ_{k=0}^{t−1} h_i(k),  ω_M(t) = Σ_{k=t}^{N_bins−1} h_i(k)
μ_R(t) = (1/ω_R(t)) Σ_{k=0}^{t−1} k·h_i(k),  μ_M(t) = (1/ω_M(t)) Σ_{k=t}^{N_bins−1} k·h_i(k)

In order to speed up the processing time and reduce the number of points with low reflectivity values, we proposed an initial threshold similar to the one presented in [7] and [16]. We calculated the mean reflectivity R̄_i along with the variance VAR(R_i) across all the points (j) in layer (i):

R̄_i = (1/N_Pi) Σ_j R_ij,  VAR(R_i) = (1/N_Pi) Σ_j (R_ij − R̄_i)²

Then, the computation of the adaptive threshold was executed as follows:
• A layer i was selected and the histogram h_i(k) was calculated.
• Initial values for ω_{R,M}(0) and μ_{R,M}(0) were set up.
• An iterative process was executed across all the possible values of t, starting with t = R̄_i + VAR(R_i) as the initial value instead of 0.
• σ_b²(t) was calculated for each t.
• The value of t that maximized σ_b²(t) was chosen as the final threshold.
• All the points with a reflectivity value equal to or greater than the threshold were marked as road-marking candidates.

Figure 3c shows that most of the points that were not related to the road markings were removed from the original point cloud. However, there were still outliers to be treated in the next step.

E. Line fitting

We applied RANSAC as an iterative method to find the line models which fit the road-marking candidates resulting from the previous step. Once a line model was found, its supporting points were removed from the cloud and another line model was sought. The algorithm stopped when a maximum number of lines had been found (N_l) or when the last line model found was supported by a small number of points (≤ N_p). In Figure 3d, the lines and their supporting points detected as road markings are depicted in red. The lines and points that were rejected by the algorithm due to too few supporting points are depicted in light blue.
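To make Section III-D concrete, here is a minimal NumPy sketch of the per-layer adaptive threshold: Otsu's search restricted to values above the layer's mean-plus-variance starting point. The binning, the [0, 1] reflectivity scale, and all names are assumptions made for illustration.

```python
import numpy as np

def adaptive_threshold(refl, n_bins=256):
    """Per-layer Otsu threshold that searches only above mean + variance
    of the layer's reflectivity (assumed scaled to [0, 1])."""
    hist, edges = np.histogram(refl, bins=n_bins)
    prob = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Initial threshold proposed in the text: layer mean + variance.
    start = min(int(np.searchsorted(edges, refl.mean() + refl.var())),
                n_bins - 1)
    best_t, best_var = start, -1.0
    for t in range(max(start, 1), n_bins):
        w_road, w_mark = prob[:t].sum(), prob[t:].sum()
        if w_road == 0.0 or w_mark == 0.0:
            continue
        mu_road = (prob[:t] * centers[:t]).sum() / w_road
        mu_mark = (prob[t:] * centers[t:]).sum() / w_mark
        var_b = w_road * w_mark * (mu_road - mu_mark) ** 2  # inter-class variance
        if var_b > best_var:
            best_var, best_t = var_b, t
    return edges[best_t]

# Synthetic layer: dim asphalt plus a few bright painted markings.
rng = np.random.default_rng(0)
refl = np.concatenate([rng.normal(0.12, 0.02, 5000),
                       rng.normal(0.45, 0.04, 300)])
thr = adaptive_threshold(refl)
candidates = refl >= thr   # road-marking candidate mask
```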
IV. EXPERIMENTS

A. Setup

To acquire the pertinent data to test the proposed approach, we relied on the JKU-ITS research vehicle (see Figure 4). It consists of a 2020 hybrid RAV4 from Toyota equipped with an OS2-64 LIDAR as well as an IMU, a GPS, and a monocular camera [20]. The acquisition system was based on ROS running on a laptop connected to the sensors. The vehicle was used to record two different datasets on which the algorithm was tested afterwards. The final values set for the different variables described in section III are given in Table I.

Fig. 4. The JKU-ITS research vehicle [20] was used to collect the data.

From each dataset, we randomly selected 200 frames and then ran them through the algorithm offline. The metrics we chose to evaluate the results were precision, recall, and F1-score, which are widely used in the literature [7], [16], [21]. To this end, we first defined:
• True Positives (TP): points marked as road markings that supported a line that was in fact a lane marking on the road.
• False Positives (FP): points marked as road markings that supported a line that was not a lane marking.
• False Negatives (FN): points filtered out that nevertheless supported a line that was in fact a lane marking on the road.

Then, we calculated the metrics according to [21]:

Precision = TP / (TP + FP),  Recall = TP / (TP + FN),  F1 = 2·Precision·Recall / (Precision + Recall)

In order to assess the differences between the reflectivity and intensity data from the LIDAR, we also ran the intensity data through our algorithm, just by changing the data used to build the histogram described in section III-D.

B. Test track dataset

This dataset was recorded on the Digitrans test track [22] located at St. Valentin, Austria; its technical details are as follows:
• 1100 m total length (940 m straight), two-lane track 8 m wide.
• A middle segment of 450 m with 6 lanes, 20 m wide.
• Left curve minimum radius: 45 m.
• Right curve (roundabout) minimum radius: 48 m.
• Different types of markings: flat thin-layer and structured road markings, white and orange (delivered and applied by SWARCO Road Marking Systems [23]).
• Main material: asphalt; there are other side roads made out of concrete.

To validate our proposed algorithm, we acquired data under lighting conditions that interfere with the camera of the car. We made sure to drive through the test track during sunset hours to obtain conditions where the sun faced the front of the car directly when driving one way.

C. Highway dataset

This dataset was acquired at two different locations. The first was a segment of the A7 highway around Linz, Austria, during regular traffic conditions and typical daylight. The second was a segment of the A3 highway near Bonn, Germany, during regular traffic under a low light level (cloudy evening). Unlike the first dataset, this one contained occlusions caused by other vehicles on the road as well as road markings in bad condition. On the other hand, this dataset had no pronounced curvatures, which was advantageous for the straight-segment representation used in the developed algorithm.

V. RESULTS

The detailed results from both datasets are given in Table II. The best results were obtained on the test track, where road markings are kept in good condition. In the highway dataset, there is a slight reduction in the F1 score for two reasons. First, the road-marking conditions were not the same as on the test track. Second, the vehicle was driven at high speed, which made the detection of the points over the dashed center line difficult. At high speed, the points supporting the dashed center lines are too sparse, and the algorithm filtered them out (as can be seen in Figure 6).
As can be seen in Figure 5, our algorithm was not restricted to identifying the lane lines of the lane the vehicle was traversing but recovered almost all the supporting points of the available lane lines on the road. It is also reliable under different lighting conditions, with no detectable differences across the three lighting levels that were tested: a high light level, when the vehicle was driven with the sun in front; a normal lighting level (daylight); and a low light level (cloudy evening). The results shown in Table II and Table III indicate a slight advantage when using reflectivity data instead of intensity data when both channels are run through the exact same algorithm, thus confirming the hypothesis proposed in this work. The overall results of our method are presented in Table III along with a comparison with the results obtained in [7]. Even though our method exhibits slightly better results, the two methods were tested on different datasets, so a point-to-point comparison is not possible. An attempt was made to test the algorithm in the rain; however, the LIDAR sensor used is not able to detect the road when it is covered with a layer of water.

VI. CONCLUSION AND FUTURE WORK

In this paper, we introduced a method to detect road-marking points from LIDAR data. In contrast to currently available methods, our procedure is not limited to the lane the vehicle is traversing and is able to extract the road markings from all the lanes of the road, which is an important feature for lane-changing algorithms. The results showed an improvement simply from using the reflectivity data directly provided by the LIDAR instead of the raw intensity data. Currently, we are working to add a tracking system to preserve the road-marking information over time and reduce the false negatives. In the near future, we plan to substitute the line models described in section III-E with semicircular arcs to improve detection in curves. We also plan to test the procedure in other environmental conditions such as haze, light snow, and nighttime.
The basic helix-loop-helix transcription factor HESR1 regulates endothelial cell tube formation.

Human endothelial cells can be induced to form capillary-like tubular networks in collagen gels. We have used this in vitro model and representational difference analysis to identify genes involved in the formation of new blood vessels. HESR1 (HEY-1/HRT-1/CHF-2/gridlock), a basic helix-loop-helix protein related to the hairy/enhancer of split/HES family, is absent in migrating and proliferating cultures of endothelial cells but is rapidly induced during capillary-like network formation. HESR1 is detectable in all adult tissues and at high levels in well vascularized organs such as heart and brain. Its expression is also enriched in aorta and purified capillaries. Overexpression of HESR1 in endothelial cells down-regulates vascular endothelial cell growth factor receptor-2 (VEGFR2) mRNA levels and blocks proliferation, migration, and network formation. Interestingly, reduction of HESR1 expression by antisense oligonucleotides also blocks endothelial cell network formation in vitro. Finally, HESR1 expression is altered in several breast, lung, and kidney tumors. These data are consistent with a temporal model for HESR1 action in which down-regulation at the initiation of new vessel budding is required to allow VEGFR2-mediated migration and proliferation, but re-expression of HESR1 is necessary for induction of tubular network formation and continued maintenance of the mature, quiescent vessel.

The formation of new blood vessels by angiogenesis is critical to the development of normal tissues as well as the growth of solid tumors (1, 2). Angiogenesis is a multistep sequence of distinct cellular processes beginning with degradation of extracellular matrix, then proliferation and migration of endothelial cells (EC), followed by lumen formation and functional maturation (3, 4). Currently, two families of EC-specific growth factors are known to regulate these steps. The vascular endothelial growth factors (VEGFs), together with the more widely expressed and pleiotropic fibroblast growth factors (FGFs), promote EC migration, proliferation, and tube formation (5-9). The second family is composed of angiopoietins (Ang) 1-5, of which Ang-1 and Ang-2, acting through the Tie-2 receptor tyrosine kinase, are known to be critical for the later processes of vessel maturation and stabilization (10-12). The VEGFs, which can drive all of the early stages of angiogenesis, act through two tyrosine kinase receptors, VEGFR1 (flt-1) (13) and VEGFR2 (KDR/flk-1) (14). EC also express neuropilin-1 and neuropilin-2, which only bind the VEGF 165 isoform (15). Although transcription factors such as HIF and Tfeb are known to mediate EC responses to specific angiogenic inducers (hypoxia and placental growth, respectively (16, 17)), downstream events coordinating EC responses to general angiogenic growth factors remain unknown. In particular, it is not clear how sequential cellular processes can be triggered by continued or repeated exposure to the same stimulus, although a model for reiterative signaling has been proposed to explain FGF-induced branching morphogenesis in lung development (18). To aid in further understanding the process of vessel formation, we sought to identify genes up-regulated in cultured EC induced to differentiate into capillary-like tubular networks.

As a first step we used the well characterized system of cultured EC forming networks in collagen gels (three-dimensional cultures) (19-22) and compared these to EC growing on top of collagen (two-dimensional cultures) using the PCR-based subtractive hybridization technique of representational difference analysis (RDA). This system models the temporally regulated angiogenic processes of migration, alignment, and tube formation and likely involves many of the same genes. This screen yielded the novel bHLH transcription factor HESR1, which has recently been identified as one of a new three-member family of Hairy- and E(spl)-related bHLH transcription factors and is variously called HESR1 (23), HRT-1 (24), HEY-1 (25), and CHF-2 (26). The gene will be referred to as HESR1 in this report. HESR1 is widely expressed in the developing vasculature as well as in the presomitic mesoderm, brain, and limbs (23-25, 27). It is specifically expressed in atrial precursors, in the cardiac outflow tract, in aortic arch arteries, and in the dorsal aorta (24, 25). In adult tissues HESR1 has been detected in heart, brain, and lung but was not localized to particular cells (24). Recently, the gene responsible for the gridlock mutation in zebrafish was cloned and shown to be identical to Hey-2 (Hrt-2, CHF-1), the second member of this family. Mutation of gridlock disrupts caudal blood flow due to failure of the anterior lateral dorsal aortae to merge into the single midline aorta (27). There is a high degree of homology among the family members in the bHLH domain.

(This work was supported by United States Army IDEA Grant DAMD17-98-1-8291 and National Institutes of Health Grant HL60067.)
As a first step we have used the well characterized system of cultured EC forming networks in collagen gels (three-dimensional cultures) (19 -22) and compared these to EC growing on top of collagen (two-dimensional cultures) using the PCR-based subtractive hybridization technique of representational difference analysis (RDA). This system models the temporally regulated angiogenic processes of migration, alignment, and tube formation and likely involves many of the same genes. This screen yielded the novel bHLH transcription factor HESR1, which has recently been identified as one of a new 3-member family of Hairy and E(spl)-related bHLH transcription factors, and is variously called HESR1 (23), HRT-1 (24), HEY-1 (25), and CHF-2 (26). The gene will be referred to as HESR1 in this report. HESR1 is widely expressed in the developing vasculature as well as in the presomitic mesoderm, brain, and limbs (23)(24)(25)27). It is specifically expressed in atrial precursors, in the cardiac outflow tract, in aortic arch arteries, and in the dorsal aorta (24,25). In adult tissues HESR1 has been detected in heart, brain, and lung but was not localized to particular cells (24). Recently, the gene responsible for the gridlock mutation in zebrafish was cloned and shown to be identical to Hey-2 (Hrt-2, CHF-1), the second member of this family. Mutation of gridlock disrupts caudal blood flow due to failure of the anterior lateral dorsal aortae to merge into the single midline aorta (27). There is a high degree of homology in the bHLH domain * This work was supported by United States Army IDEA Grant DAMD17-98-1-8291 and National Institutes of Health Grant HL60067. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. We have now analyzed the function of HESR1 in cultured EC and show that expression is essential for capillary-like network formation. EXPERIMENTAL PROCEDURES Cell Culture-Human capillary endothelial (HUCE) cells were prepared from liposuction adipose tissue using anti-CD31-coated magnetic beads exactly as described (31). Tissue was obtained under protocols approved by the appropriate IRB committees. To induce capillary formation, HUCE cells were seeded onto fibronectin-coated rat tail type I collagen, rested for 1 h, and then overlaid with a second layer of collagen (1.5 mg/ml) to provide a three-dimensional matrix. Cells were grown in Medium 199 with 20% FBS, 25 ng/ml recombinant human vascular endothelial cell growth factor (VEGF, Genzyme), and 25 ng/ml recombinant human basic fibroblast growth factor (bFGF, R & D Systems). Two-dimensional cultures were identical except for the absence of the second layer of collagen. For the sorting experiments cells were resuspended in collagen, and 100-l drops were added to bacteriological grade 100-mm dishes on ice. After 15 min to allow cells to settle, the plates were moved to a 37°C incubator to allow gelling. In some experiments early passage (p1-2) human umbilical vein EC (HUVEC) were used (32). Results were identical to those obtained with HUCE. Representational Difference Analysis-For RDA, HUCE cultures were harvested after 18 h. Monolayers were lysed in situ using RNA isolation kits (Stratagene). Cells in collagen gels were harvested by digestion of the gel with 0.4% collagenase I (Worthington), and RNA was prepared as for monolayer cultures. 
Poly(A)+ mRNA was purified over oligo(dT) columns (Stratagene). RDA was performed exactly as described (33). mRNA from monolayer EC (driver) and tube-forming EC (tester) was reverse-transcribed, and double-stranded cDNAs were digested with DpnII, ligated to linkers, amplified, and subtracted. Three rounds of subtractive hybridization were performed with increasing ratios of tester to driver: DP1, 1:100; DP2, 1:800; and DP3, 1:400,000. DP3 was cloned into pBluescript II-KS, and transformants were selected for analysis. IMAGE clones corresponding to the human (R60704) and mouse (AA980080) A21 genes (HESR1) were obtained and sequenced.

RT-PCR and Northern Blotting: Poly(A)+ mRNA was prepared as above and resolved on standard formaldehyde gels. After cross-linking to nylon membranes, blots were hybridized to psoralen-biotin-labeled probes (Ambion). After hybridization, blots were washed and developed according to the manufacturer's instructions. For detection of VEGFR2, total RNA from control (pcDNA3) or pcDNA3-HESR1-transfected EC was resolved and blotted as above. Membranes were hybridized with an [α-32P]dATP-labeled, random-primed probe generated by PCR. Blots were washed and exposed either to film for 12-18 h or to PhosphorImager screens (Molecular Dynamics). A multiple tissue tumor blot was obtained from CLONTECH and probed according to the manufacturer's instructions. For RT-PCR, 2 µg of DNase I-treated total RNA isolated from cells or mouse tissues was primed with random hexamers and reverse-transcribed using SuperScript II (Life Technologies, Inc.). Tissue was obtained under IACUC-approved protocols. 2-5 µl of the cDNA reaction was subjected to PCR amplification using gene-specific primers. The following primer pairs were used: GAPDH, sense 5′ ACCACAGTCCATGCCATCAC 3′ and antisense 5′ TCCACCACCCTGTTGCTGTA 3′ (product size 450 bp, annealing temperature 67°C, 25 cycles); human HESR1, sense 5′ GGAGAGGCGCCGCTGTAGTTA 3′ and antisense 5′ CAAGGGCGTGCGCGTCAAAGTA 3′ (product size 429 bp, annealing temperature 63.5°C, 30 cycles); and mouse HESR1, sense 5′ AGGGTGGGATCAGTGTGC 3′ and antisense 5′ TGCTTCTCAAAGGCACTG 3′ (product size 355 bp, annealing temperature 56°C, 30 cycles). The amplification profile was 94°C for 5 min (hot start), followed by denaturation at 94°C for 30 s, annealing as indicated, and extension at 72°C for 1 min for the indicated number of cycles, with a final extension at 72°C for 10 min. Amplifications were run on a PTC-200 (MJ Research). To confirm the absence of contaminating genomic DNA, the RT step was omitted as indicated.

Plasmids: The full-length coding sequence of HESR1 was cloned into either pcDNA3 (human gene, clone A21) or pIRES2-EGFP (mouse gene) using standard protocols. In experiments where pcDNA3-HESR1 was used, pEGFPN-1 was used to monitor transfection efficiency.

Transfections and Cell Sorting: Transfection of EC was performed using LipofectAMINE reagent in Opti-MEM (Life Technologies, Inc.). Three-day confluent (synchronized) EC were plated into 6-well plates 1 day before transfection. After incubation with the DNA/liposome mixture for 1.5 h, the medium was changed. Transfection efficiency was monitored by FACS analysis; we routinely obtained 10-30% of cells expressing GFP. In some experiments, transfected cells coexpressing GFP were sorted on a Cytomation Mo-Flo and then cultured for 24 h to allow recovery. Cells were then replated in 100-µl collagen gels as indicated.
Migration and Proliferation Assays: To monitor EC migration, control or HESR1-transfected cells were plated onto gelatin-coated Transwell filters (PET membrane, 24-well, Falcon) with a pore size of 8 µm. Medium M199 + 5% FBS was added to the Transwell, and M199 + 5% FBS with VEGF and bFGF at 25 ng/ml was added to the lower chamber. After 24 h, cells were fixed in 3.7% formaldehyde and stained with crystal violet (0.5% in 25% methanol). Cells on the upper surface were then removed with a cotton swab, the remaining dye (from transmigrated cells) was solubilized in 0.1 M citric acid in 50% ethanol, and the absorbance was read at 590 nm. Readings were normalized to the total number of cells plated by assaying membranes from parallel cultures that did not have the cells on the upper surface removed. Proliferation was measured by direct counting of replicate cultures in 24-well plates or by XTT assay (Sigma) with 10-12 replicates for each time point in 96-well plates (34).

In Vitro Transcription/Translation: 2 µg of plasmid DNA was used in the TNT T3/T7 coupled transcription/translation system (Promega) according to the manufacturer's instructions. After a 90-min reaction at 30°C, 35S-labeled translated products were resolved by SDS-polyacrylamide gel electrophoresis. Dried gels were exposed to film overnight.

Quantitation of Tube Formation: Photomicrographs of the developing tubular networks were skeletonized using a digitizing tablet by an observer blinded to the experimental conditions. The number of branch points was then enumerated, and the average interbranch distance was calculated. Three to five randomized fields for each gel were quantitated.

Antisense Treatment: The following sequence-specific S-oligonucleotides were used: HESR1 A1, 5′ TCCGCACTCTCCTTCTCC 3′; A2, 5′ TCGTCGGCGCTTCTCAAT 3′; A3, 5′ TCCCGAAATCCCAAACTC 3′; nonsense control, 5′ CCCTCCCTTGTTACTCCC 3′; and VEGFR2 positive control, 5′ CGGACTCAGAACCACATC 3′. EC were loaded with antisense oligonucleotides using Influx pinocytic cell-loading reagent (Molecular Probes). After loading, cells were allowed to recover for at least 10 min before plating in collagen gels. Assays were performed in triplicate. Cultures were scored 24 h after plating by two individuals blinded to the experimental setup. The degree of tube formation was graded on five levels from 0 to ++++. For RT-PCR analysis of antisense action, cells were harvested after 24 h in collagen.

FIG. 1. Endothelial cells form tubes in collagen gels. A, cultured ECs grow to form a monolayer when plated on top of collagen gels. B, ECs form capillary-like tubes 18 h after seeding into three-dimensional collagen gels. C, tube-forming cells continue to express EC-specific markers, as demonstrated by staining with an anti-CD34 antibody; staining with an isotype-matched control antibody was negative. D, ECs form capillary-like structures with patent lumens, demonstrated by staining with a polyclonal antibody to the EC-associated extracellular matrix protein βig-h3; staining with nonimmune serum was negative. All cultures contained 25 ng/ml bFGF and VEGF.

RESULTS

Human EC, seeded into three-dimensional collagen gels, formed capillary-like networks within 18-24 h (Fig. 1B), whereas cells growing on top of collagen continued to proliferate and eventually formed a monolayer (Fig. 1A). Network-forming cells retain their EC phenotype, as shown by their continued expression of CD34 (Fig. 1C), a commonly used immunohistochemical blood vessel marker.
The networks are composed of tubes, as evidenced by the presence of lumens (Fig. 1D), outlined by staining for the extracellular matrix protein βig-h3 (36). Significantly, when the gels containing the neovessels are transplanted into a skin pocket on a SCID mouse, the human vessels anastomose with ingrowing mouse vessels and are perfused with blood, indicating that the cultured cells are fully competent to form mature vascular networks (37). Genes differentially expressed between these two culture geometries were identified by RDA. Several well characterized "angiogenic" genes were obtained, including the αv integrin and plasminogen activator inhibitor, suggesting overlap in gene expression between tube-forming cells in vitro and in vivo. A highly represented fragment (27/138 clones) we obtained after three rounds of subtraction showed homology to human and Drosophila Hairy protein and mammalian hes genes and was further characterized. The A21 cDNA contained a single large open reading frame coding for a protein of 304 amino acids and was recently independently named HESR1 (23). We also cloned and sequenced the mouse homologue, which has been variously named HEY-1 (25), HRT-1 (24), and CHF-2 (26). To confirm the differential expression of HESR1 in EC forming tubes compared with migrating and proliferating EC growing in two dimensions, we isolated poly(A)+ mRNA from three- and two-dimensional cultures and used this for Northern blotting. A single species of ~2.0-2.5 kb was detected in the three-dimensional tube cultures but not in the two-dimensional cultures (Fig. 2A), consistent with the cDNA size of 2.2 kb and the reported transcript size for mouse of ~2.3 kb (24). To analyze the time course of HESR1 expression during network formation, we harvested RNA and performed RT-PCR (Fig. 2B). In migrating and proliferating cells (two-dimensional cultures) HESR1 was absent or present at very low levels. Induction was rapid when the cells were embedded in collagen gels (three-dimensional), with expression peaking by 2 h. Expression then fell slowly over the next 15-18 h to a level still considerably higher than that seen in two-dimensional cultures. To investigate tissue distribution in the adult, we used RT-PCR on mRNA prepared from several adult mouse tissues. A single band of the predicted size was detected (Fig. 3A), and its identity as mouse HESR1 was confirmed by sequencing. No bands were present when the RT step was omitted, confirming the absence of genomic contamination (data not shown). As a semi-quantitative estimate of relative abundance, we normalized HESR1 band intensities to those of GAPDH amplified in parallel and compared these to the tissue expressing the lowest level of HESR1 (kidney). This revealed a dramatic enrichment of HESR1 transcripts in well vascularized tissues such as brain (7.5 times) and heart (4.8 times), as well as high levels in aorta (4.5 times).

FIG. 2. Expression of HESR1. A, Northern blot of poly(A)+ mRNA (1 µg) from tube-forming (T) and migrating/proliferating (M) EC hybridized with psoralen-biotin-labeled HESR1. A single species of ~2.5 kb, corresponding to the predicted HESR1 mRNA, was present in the tube-forming EC but absent in migrating/proliferating EC. Equal loading was demonstrated using a GAPDH probe (bottom panel). B, time course of HESR1 expression. RNA was harvested from migrating/proliferating cells (two-dimensional) and from network-forming cells at the indicated times after plating into collagen gels (three-dimensional). RT-PCR was performed for HESR1, and this was normalized to GAPDH expression. Data are plotted in arbitrary units normalized to the two-dimensional culture. One of four similar experiments is shown.
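The semi-quantitative comparison described above reduces to simple ratio arithmetic. The sketch below uses made-up band intensities (all values illustrative, chosen only to reproduce the reported fold-enrichments) to show how a GAPDH-normalized fold-change over the lowest-expressing tissue would be computed.

```python
# Hypothetical densitometry values (arbitrary units) for HESR1 and GAPDH
# bands in each tissue; the numbers are illustrative, not measured data.
bands = {
    "kidney": (10.0, 100.0),  # (HESR1, GAPDH)
    "heart":  (50.0, 104.0),
    "brain":  (76.0, 101.0),
}

# Normalize HESR1 to GAPDH, then express relative to the
# lowest-expressing tissue (kidney), as described in the text.
ratios = {tissue: h / g for tissue, (h, g) in bands.items()}
baseline = ratios["kidney"]
enrichment = {tissue: r / baseline for tissue, r in ratios.items()}
print(enrichment)  # e.g., brain ~7.5x and heart ~4.8x over kidney
```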
These data are consistent with recently published data showing expression in brain and heart by multiple tissue Northern blot. The high level of sensitivity provided by RT-PCR allowed us to detect transcripts in other tissues not revealed by Northern blotting. So far we have been unable to detect HESR1 mRNA in adult tissues by in situ hybridization, presumably due to low levels of transcript. All previous in situ hybridization studies have been on embryos. As an alternative strategy to confirm vascular expression of HESR1 in vivo, we used RT-PCR on mRNA prepared from highly purified, freshly isolated adipose-derived capillaries (Fig. 3B). We obtained a strong signal indicating that HESR1 is indeed expressed in capillaries in vivo, most likely by EC; however, expression in both EC and pericytes cannot be ruled out by our data. No signal was obtained in the absence of the RT step (data not shown). We also wished to determine whether levels of HESR1 are altered in the vasculature of tumors, but again in situ hybridization proved too insensitive. We therefore analyzed HESR1 expression in a multiple tissue tumor blot spotted with amplified cDNA from matched normal and tumor tissue. A potential problem with this kind of analysis is that up- or down-regulation of a gene may only be occurring in a discrete subset of cells and at discrete time points. In an analysis of bulk RNA, therefore, a signal may be masked by background noise. We believe this to be the case for HESR1, as we saw up-regulation in 3 of 3 lung tumors but down-regulation in 11 of 15 kidney tumors and 5 of 9 breast tumors (data not shown). This variation may reflect the site of sampling (angiogenic edge or quiescent middle of the tumor), the state of the tumor (rapidly growing compared with static or slow growing), or tissue-specific effects. More sensitive in situ hybridization will help resolve this question. Our in vitro and in vivo data suggest that in the adult HESR1 is expressed in mature vessels in all tissues, is decreased during migration and proliferation (two-dimensional cultures), and is re-expressed during the tube-forming stage of new vessel formation (three-dimensional cultures). This temporal expression pattern suggests that the level of HESR1 may regulate the phenotype of the EC, with high levels acting to maintain quiescence while reduced levels may allow migration and proliferation. To test this hypothesis we constructed a HESR1 expression vector and used this to test the effect of HESR1 overexpression on EC phenotype. As a test of the plasmid, we performed in vitro transcription/translation of the pcDNA3-HESR1 plasmid, which revealed a single band of 31-33 kDa, consistent with the predicted size of 32,627 Da for the conceptually translated protein (Fig. 4A). To examine the effect of HESR1 on cell proliferation, ECs were transfected with pcDNA3-HESR1 or pcDNA3 control, along with pEGFPN-1 to monitor transfection efficiency, and cell number was monitored over 3 days by direct counting or XTT assay. By both assays there was a reproducible decrease in cell proliferation at all time points. In the XTT assay shown (Fig. 4B), the decrease in proliferation was consistent with the transfection efficiency for this experiment of ~10%.
To confirm that HESR1 overexpression was not killing cells by inducing apoptosis, we stained HESR1- and control-transfected cells with 7AAD (35). By FACS, transfected and apoptotic cells appear GFP-bright and 7AAD-bright, respectively. This population represented 2.9% of the total in control-transfected cells and 2.1% of the HESR1-transfected cells (data not shown). Salicylate-induced apoptosis was used as a positive control and yielded 27% positive cells. These data indicate that there is no increase in apoptosis in response to HESR1 expression. Finally, we counted floating (dead) cells directly, and again there was no significant difference at 3 days between control and HESR1-transfected cells (data not shown). To examine migration of EC overexpressing HESR1, we plated transfected cells onto the upper chamber of 8-µm pore size transwells and measured transmigration in response to VEGF and bFGF after 24 h. As shown in Fig. 5, HESR1 overexpression decreased migration by 25%. Taken with the effect on proliferation, these data suggest that HESR1 maintains EC quiescence and that HESR1 down-regulation may be essential to allow expression of the angiogenic phenotype. We next asked whether overexpression of HESR1 would promote or disrupt capillary-like network formation. Our prediction was that, given the importance of EC migration in the early stages of angiogenesis, overexpression of HESR1 would block migration and proper alignment of the EC and therefore disrupt formation of the network. This indeed was the case. EC were transfected with control or pIRES2-EGFP-HESR1 and then sorted for GFP expression. When compared with EC transfected with a control plasmid, HESR1-overexpressing cells survived but were unable to generate extensive branching networks (Table I). There appeared to be a failure of "islands" of cells to interconnect, due to a lack of budding and branching. When the cultures were quantitated, the data revealed an ~50% decrease in the number of branch points in HESR1-transfected cells (Table I). Both EC migration and proliferation are driven by VEGF acting through VEGFR2, and consequently the receptor has a critical role in vasculogenesis and angiogenesis. Moreover, the promoter of VEGFR2 contains several E boxes, including one at position −175 that has been suggested to bind an EC-specific factor (38). We wondered, therefore, whether VEGFR2 is a downstream target of HESR1 and whether HESR1 overexpression might down-regulate VEGFR2 expression. To test this, ECs were transfected with control vector or pcDNA3-HESR1 expression vector and harvested for RNA isolation 24 h later. Northern analysis revealed that overexpression of HESR1 in EC led to a dramatic down-regulation of VEGFR2 mRNA levels (Fig. 6). Preliminary experiments suggest that HESR1 is acting at the transcriptional level, as cotransfection of pcDNA3-HESR1 into EC blocks VEGFR2 promoter-driven luciferase expression. A similar effect is seen with the VEGFR1 promoter. HESR1, like Hairy, HES, and E(spl), therefore appears to have a role in negative regulation of gene expression. These data suggest that HESR1-mediated inhibition of EC migration and proliferation is due, at least in part, to down-regulation of VEGFR2 and loss of responsiveness to VEGF. This interpretation is supported by our antisense experiments (see below), where we used a VEGFR2 antisense oligonucleotide as a positive control to block network formation.
Having determined that HESR1 is expressed in mature, quiescent vessels and that its down-regulation is necessary for the early stages of angiogenesis, namely migration and proliferation, we next wanted to determine whether re-expression of HESR1 is required for the later stages of EC alignment, tube formation, and network maturation. We designed several antisense oligonucleotides to HESR1 and tested these in the three-dimensional culture model. Based on lack of sequence homology, none of the oligonucleotides are predicted to cross-react with other family members. The positive control sequence from the VEGFR2 gene blocked network formation in a dose-dependent manner to a maximum of 88% at 3 µM (Fig. 7A). Two of the HESR1 oligonucleotides also blocked network formation. A1 was most potent, blocking by 75% at 0.3 µM and greater than 95% at 3 µM; A3 was less effective, only showing significant blocking at 3 µM; A2 did not block. Variable effectiveness between different oligonucleotides is a common occurrence when using antisense to block gene expression. Matched nonsense oligonucleotides were ineffective at blocking. Neither the antisense nor the nonsense oligonucleotides had any effect on the morphology of monolayer cultures (data not shown). To rule out a nonspecific action of the antisense compounds, we analyzed HESR1 mRNA levels in cells treated with antisense or nonsense oligonucleotides. In the presence of 3 µM A1 oligonucleotide, no HESR1 message was detectable, whereas levels of GAPDH message were unaffected (Fig. 7B). Nonsense oligonucleotides had no effect on expression of either gene. These experiments indicate that HESR1 expression is necessary for establishment of the mature EC network.

DISCUSSION

In a screen to isolate EC genes up-regulated during in vitro capillary-like network formation, we identified the bHLH transcription factor HESR1 (23). This gene is homologous to the Hairy/HES family of genes that act downstream of Notch to regulate neuronal development in Drosophila and mice (39,40) and has recently been independently identified and named HEY-1 (Hairy, E(spl) related with YRPW) (25), HRT-1 (Hairy-related transcription factor), and CHF-2 (cardiovascular helix-loop-helix factor 2) (26). There are two other members of the family so far identified (24,25,27), with similar structure and overlapping expression patterns. To date, expression of HESR1 has been examined mostly in embryos, by whole-mount in situ hybridization. HESR1 is observed in the atrium of the heart, in the cardiac outflow tract, in the aorta, in the somitic mesoderm, in several regions of the central nervous system, and in limbs. We found HESR1 expression in all adult tissues examined, including aorta and purified capillaries, indicating that it might be expressed by EC throughout the body. Expression was down-regulated in migrating and proliferating EC in culture, suggesting that HESR1 may be involved in induction and/or maintenance of the mature, quiescent vascular phenotype. To test this hypothesis we overexpressed HESR1 in EC and examined migration and proliferation, as well as expression of VEGFR2. This gene, in contrast to VEGFR1, delivers a proliferative signal to EC (5) and has been demonstrated in the proliferating EC of vascular sprouts in the brain (8). Moreover, VEGFR2 expression was shown to be dramatically down-regulated in the adult brain, where no angiogenesis is occurring, compared with the embryo, where all new vessels in the brain are generated by angiogenesis.
When we overexpressed HESR1 in our system, to mimic quiescent vessels, it down-regulated VEGFR2 expression and slowed the migration and proliferation of EC. Furthermore, expression of HESR1 blocked network formation in three-dimensional gels, presumably because EC migration was suppressed. Adding further support to this interpretation is the finding that antisense to VEGFR2 likewise prevented network formation. Thus, down-regulation of VEGFR2 by HESR1 may be a switch that regulates the transition from the proliferating, migrating phenotype to the network-forming and vessel-maturation phenotype. Interestingly, antisense inhibition of HESR1 expression also inhibits network formation, demonstrating that the gene is essential for this process. Our interpretation is that while HESR1 down-regulation is essential in the early stages of migration, presumably to allow VEGFR2 expression, re-expression of HESR1 is required at a later stage to allow alignment, tube formation, and vessel maturation. Antisense knockout of HESR1, therefore, prevents this stage in network formation.

To summarize, we hypothesize that HESR1 is required to induce and maintain the mature vascular phenotype, in part by preventing EC proliferation and migration (Fig. 8). During the early stages of angiogenesis, HESR1 expression would be expected to fall, allowing cells to migrate away from the parent vessel, express VEGFR2, and proliferate. Later, re-expression of HESR1 would be required for down-regulation of VEGFR2, leading to cessation of proliferation, and for tube formation and re-establishment of the mature vessel phenotype. Our data are consistent with this model in that HESR1 is expressed in mature blood vessels (Fig. 3); its expression is suppressed in migrating and proliferating cells, and it is rapidly reinduced in network-forming cells (Fig. 2). Overexpression of HESR1 prevents VEGFR2-mediated migration and proliferation (Figs. 4 and 5), whereas antisense reduction of HESR1 expression prevents the switch from the migrating and proliferating phenotype of cells in two dimensions to the tube-forming phenotype in three dimensions (Fig. 7).

FIG. 7. Antisense inhibition of HESR1 expression in EC causes disruption of network formation. A, three antisense oligonucleotides (A1, A2, and A3) were selected from nonoverlapping regions of the HESR1 cDNA sequence and incorporated into EC by hypo-osmotic shock. Cells were cultured in three-dimensional collagen gels for 24 h. The ability of treated EC to form networks was assessed (see "Experimental Procedures") and plotted on an arbitrary scale. A VEGFR2 oligonucleotide was used as a positive control, and a nonsense oligonucleotide served as a negative control (Control). The mean and S.E. of three experiments are shown. The VEGFR2 data are from a single experiment. All experiments were performed with three concentrations of oligonucleotide: 0.03 μM (gray), 0.3 μM (white), and 3 μM (black). B, to confirm the specificity of antisense action, RNA was isolated from A1-treated and nonsense control-treated cells, and RT-PCR was performed with primers specific to HESR1 and GAPDH. Lane 1, molecular weight ladder (kb); lanes 2 and 4, nonsense oligonucleotide-treated cell cDNA amplified with HESR1 and GAPDH primers, respectively; lanes 3 and 5, A1 antisense-treated cell cDNA amplified with HESR1 and GAPDH primers, respectively; lanes 6 and 7, same as lanes 2 and 3 but without reverse transcription. The absence of a band in lane 3 confirms the effectiveness and specificity of the A1 oligonucleotide.
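Purely as an illustration of how the Fig. 7A dose-response might be tabulated, the sketch below encodes the percent-inhibition values stated in the text (88% for the VEGFR2 control at 3 μM; 75% and greater than 95% for A1 at 0.3 and 3 μM; no block for A2). The dictionary layout and print format are assumptions of this sketch, and A3 is omitted because no numeric value is reported.

```python
# Illustrative tabulation of the Fig. 7A antisense dose-response.
# Only the numeric values named in the text are encoded; the data
# structure itself is an assumption made for this sketch.

percent_block = {
    # oligonucleotide: {concentration in uM: % inhibition of network formation}
    "VEGFR2 (pos. ctrl)": {3.0: 88},
    "A1": {0.3: 75, 3.0: 95},  # reported as ">95%"; 95 used as a floor here
    "A2": {3.0: 0},            # did not block
}

for oligo, series in percent_block.items():
    for conc, block in sorted(series.items()):
        print(f"{oligo:>18}: {block:>3}% inhibition at {conc} uM")
```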
As HESR1 overexpression does not directly induce tube formation in two-dimensional culture, we propose that re-expression of HESR1 is permissive for network formation and vessel maturation, rather than instructive. Although this model is somewhat speculative, it provides a framework for future experiments.

Most bHLH proteins bind DNA as either homo- or heterodimers, and most bind the canonical E box (CANNTG), although Hairy proteins bind a variant called the N box (CACNAG) owing to a conserved proline in the basic region. HESR1 has a glycine at this position and is therefore predicted to bind an E box. HESR1 is likely acting in our system by dimerizing with other bHLH proteins. Indeed, a recent report identified CHF-1 and CHF-2 (HESR1) in a two-hybrid screen using as bait the bHLH domain of the aryl hydrocarbon receptor nuclear translocator (ARNT) (26). ARNT has been shown to regulate VEGF expression by binding to the HIF-1 site in the VEGF promoter. CHF-1 was shown to displace ARNT from this site and down-regulate VEGF promoter activity; CHF-2 was not tested. It is not possible to predict, on the basis of sequence alone, whether HESR1 will positively or negatively regulate other target genes, or whether it will interact with corepressors such as groucho/TLE (29, 30). By analogy with the Hairy family, and given the presence of the YRPW motif, a role in negative regulation in mature vessels may be expected, possibly of Achaete-Scute complex homologues (41) and potentially of other proangiogenic genes such as matrix metalloproteinases and angiopoietin-2. Our data on VEGFR2 expression are clearly consistent with this hypothesis. Interestingly, however, there is good evidence that the Runt family of genes, which contain the WRPY motif, can mediate both negative and positive regulation of transcription (42, 43). HESR1 may behave similarly, as it also carries a variant of the WRPW motif, namely YRPW. Binding of groucho to the WRPY motif of Runt appears to be regulated, in contrast to the constitutive binding of groucho to the WRPW motif found in the Hairy/HES family. Context-dependent binding of groucho to target promoters appears to determine whether positive or negative regulation occurs.

Hairy/HES genes act downstream of Notch to determine cell-fate decisions in neurons by regulating the expression of the pro-neural achaete-scute complex genes (39). Notch has also been implicated in angiogenesis; Zimrin et al. (44) demonstrated that inhibiting expression of the Notch ligand Jagged-1 potentiated the ability of EC to form capillary-like networks in culture. The direct demonstration in mice that HESR1 is downstream of Notch in presomitic mesoderm (23) suggests that HESR1 may also be downstream of Notch in EC, further supporting a role for HESR1 in angiogenesis. Interestingly, mutations in the human Jagged-1 ligand are reported to be responsible for Alagille syndrome, which manifests as cerebral vascular hemorrhaging (45), whereas a number of the Notch and Notch ligand knock-out mice show vascular phenotypes (46). We are currently investigating the expression of Notch proteins in human EC.

In conclusion, our findings suggest that HESR1 may be a genetic switch whose control may regulate EC phenotype and the multiple stages of angiogenesis.
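As a small aside on the E-box/N-box distinction discussed above, the sketch below scans a sequence for both motifs. The motif definitions (E box, CANNTG; N box, CACNAG) are from the text; the example promoter fragment is invented for illustration and does not represent any real promoter.

```python
# Minimal motif scan for the E box and N box discussed above.
# The example sequence is hypothetical.
import re

E_BOX = re.compile(r"CA..TG")   # canonical E box: CANNTG
N_BOX = re.compile(r"CAC.AG")   # Hairy-family variant N box: CACNAG

def find_motifs(seq: str) -> dict:
    """Return the start positions of E-box and N-box matches in seq."""
    seq = seq.upper()
    return {
        "E boxes": [m.start() for m in E_BOX.finditer(seq)],
        "N boxes": [m.start() for m in N_BOX.finditer(seq)],
    }

# Hypothetical promoter fragment containing one of each motif
print(find_motifs("ttgCACGTGaatccgCACTAGgg"))
# {'E boxes': [3], 'N boxes': [15]}
```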