The impact of working in academia on researchers' mental health and well-being: A systematic review and qualitative meta-synthesis

Objective
To understand how researchers experience working in academia and the effects these experiences have on their mental health and well-being, through synthesizing published qualitative data.

Method
A systematic review and qualitative meta-synthesis was conducted to gain a comprehensive overview of what is currently known about academic researchers' mental health and well-being. Relevant papers were identified through searching electronic databases, Google Scholar, and citation tracking. The quality of the included studies was assessed and the data were synthesised using reflexive thematic analysis.

Results
26 papers were identified and included in this review. Academic researchers' experiences were captured under seven key themes. Job insecurity coupled with the high expectations set by the academic system left researchers at risk of poor mental health and well-being. Access to peer support networks, opportunities for career progression, and mentorship can help mitigate the stress associated with the academic job role; however, under-represented groups in academia are at risk of unequal access to resources, support, and opportunities.

Conclusion
To improve researchers' well-being at work, scientific/academic practice, and the system's concept of what a successful researcher should look like, need to change. Further high-quality qualitative research is needed to better understand how systemic change, including tackling inequality and introducing better support systems, can be brought about more immediately and effectively. Further research is also needed to better understand the experiences and support needs of post-doctoral and more senior researchers, as there is a paucity of literature in this area.

Trial registration
The review protocol was registered on PROSPERO (CRD42021232480).

Introduction

The university sector has undergone substantial changes over the last decade [1]. Across the world, universities have become increasingly business-like, often focusing at an institutional level on maximising income streams as opposed to focusing on the original ethos of expanding and training young minds [2]. Indeed, academics are caught up in a range of initiatives which measure job performance, including global university league tables, research league tables, and student satisfaction surveys [3][4][5]. The better the performance across these initiatives, the more revenue a university is likely to attract. These performance rankings can help determine the allocation of public resources and research funding [4], and are often used in the global competition between universities to recruit fee-paying domestic and overseas students [5].

Research output is central to a university's reputation and, critically, largely determines where it falls in the global university rankings [4]. Given the importance placed on research output, it is unsurprising that a good research reputation (often characterized by frequent publication in high-impact journals and a continued ability to successfully obtain research funding) can be critical for career progression in academia [6].
Perceptions of research culture can vary depending on the institution or individual [7,8]. However, emerging evidence suggests that a large majority of university research cultures are characterized by job insecurity, competing demands in the form of both teaching and research work, long hours including unpaid and uncontracted work, brutal competition amongst peers to succeed in academia, and an immense pressure to publish papers and win research funding [8,9]. These characteristics have left researchers experiencing high levels of stress, which have the potential to impact negatively on their mental health and well-being [10].

Well-being and mental health are terms which are often used interchangeably, yet research suggests that whilst they should be viewed as linked, they are distinct concepts [11,12]. Mental health encompasses a spectrum of experience, ranging from good mental health to mental illness. Good mental health is more than just the absence of mental illness, but rather the presence of particular mental skills, habits and capacities [12] that enable an individual to effectively react to, or deal with, the environment around them [13,14]. Well-being, on the other hand, is a more holistic term, reflecting broader social, physical, and economic experiences. It is often indicative of how closely an individual can live their life in accordance with how they want to. Good well-being is associated with developing robust relationships, reaching individual potential, and being able to engage in activities of personal value and meaning [15].

The high levels of stress inherent in the academic researcher population have been shown to increase the risk of experiencing burnout and depression [16]. Early career researchers, a term often used to describe doctoral researchers and post-doctoral researchers [17,18], are thought to be particularly at risk of experiencing common mental health difficulties due to the job precarity that characterizes this career stage [19], and the prevalence of top-down power dynamics which can prevent the disclosure of bullying, harassment, and exploitation [8].

A small yet growing number of quantitative studies utilizing author-created questionnaires and validated mental health measures have given an indication as to the prevalence and severity of mental ill-health amongst postgraduate researchers in particular. Evans et al. [20] utilised the Generalised Anxiety Disorder questionnaire (GAD-7) and the Patient Health Questionnaire (PHQ-9) to show that postgraduate researchers (comprising both MSc students and doctoral researchers) were six times more likely to report experiencing anxiety and depression compared to those in the general population, with poor work-life balance and poor mentor relationships being cited as correlating with worse mental health outcomes. Similarly, a recent systematic review and meta-analysis found that 24% of doctoral researchers displayed clinically significant symptoms of depression and 17% displayed clinically significant symptoms of anxiety, rates which were identified to be similar to estimated prevalence rates in other high-stress populations including medical students and resident physicians [21].
Estimates of the prevalence and severity of specific mental health difficulties amongst postdoctoral researchers and more senior researchers are scarce, however, a recently released report by Education Support found that out of 2,046 academic (85.9%) and academic-related staff (14.1%) in the UK, 53.2% showed probable signs of depression [22].This echoes a recent report by the Wellcome Trust wherein 34% of researchers across multiple career stages (the vast majority located in universities across the globe), stated that during their research career, they had sought professional help for depression or anxiety [8]. Interestingly, Kinman & Johnson [1] have also noted that factors such as secure employment, autonomy, and teamwork which have been shown to protect university employees, including academic staff, against the more stressful aspects of the job, are not as prevalent as they once were in university sectors across the UK, USA and Australia. Whilst quantitative-focused studies on this topic have brought to light findings that can be considered concerning for the academic community, the work that does exist tends to treat the university workforce (which comprises both academic and non-academic staff) as a homogenous group [16], and they are limited in their ability to capture a researcher's lived experience due to fixed response options.Through using a qualitative research design, responses can be probed and underlying drivers and factors can be uncovered [23], enabling a more in-depth understanding of academic researchers' experiences and how this relates to their mental health and well-being from a self-report perspective (rather than through fixed and often inflexible clinical screening tools or measures).Nevertheless, the qualitative research in this area often tends to focus on examining discrete aspects of mental health, well-being, or the researcher experience [6,24].This is unsurprising given the varying ways in which the constructs of mental health and well-being can be conceptualised, and the difficulty in clearly defining the academic researcher population either as a whole, in terms of career stage, or in terms of discipline [14,25].This disparity in the existing qualitative literature, however, makes it difficult to ascertain what we currently know about how academic researchers experience working in academia, and the effect these experiences have on their mental health and well-being. We chose to conduct a systematic review and qualitative meta-synthesis to enable us to identify similarities, contradictions and patterns across existing published data in order to both better understand researchers' experiences and develop new insights into this topic [26].Integrating data related to this topic area can aide in informing local organisational policy which can better support academic researchers' well-being and can help guide future work in this area. Through synthesizing existing qualitative data, we aimed to address our research question; how do researchers experience working in academia, and what effect do these experiences have on their mental health and well-being? 
Method

We followed the guidance provided by Lachal et al. [26] on synthesising qualitative literature. The review protocol was registered with PROSPERO, the NIHR's International Prospective Register of Systematic Reviews (registration number: CRD42021232480). PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidance was adhered to throughout this review [27]. Please see supplementary information S1 File for the PRISMA checklist.

Search strategy

The following electronic databases were searched for relevant academic papers: PsycINFO (Ovid), EMBASE (Ovid), CINAHL Plus, PubMed, SCOPUS, Web of Science. We searched the databases from inception to January 2021, with an English language restriction (due to limited resources for translation). Key words related to the research question (including 'mental health', well-being, researcher, and qualitative) were organised under the headings of the SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, and Research type) tool and then elaborated upon to include alternative terms, related constructs, and database-specific subject headings. The search terms were combined as necessary using the Boolean operators OR and AND (see the illustrative sketch below).

In order to capture relevant literature not indexed in the electronic databases, we also conducted searches on Google Scholar using key terms relevant to the research question such as 'qualitative', 'researcher', 'academia' and 'mental health'. The results were sorted by relevance and no date restriction was applied. The first 200 results were downloaded and imported into the reference management software EndNote X9, along with the search results from the electronic databases, where duplicates were then removed. For all included papers, we also employed citation tracking. This involved searching the reference lists of the original included paper and searching for papers which cite the original included paper. Forward citation tracking was completed in May 2021. Please see supplementary information S2 File for the full search strategy.

Eligibility criteria

To be included, peer-reviewed research articles reported in English needed to (a) use a qualitative research design or mixed methods design where qualitative data could be extracted, (b) consist of a sample which clearly identified its population as researchers or individuals with research-related responsibilities (carrying out research, publishing papers, applying for funding), (c) consist of a sample which clearly identified its population as working in a higher education institution (defined here as an institution which awards degree-level certificates or above) and, (d) focus sufficiently on researchers' mental health and well-being experiences; accordingly, papers were only included in the analysis if the aim(s) or research question(s) of the paper involved examining an aspect of mental health or well-being (or both) and aspects of mental health or well-being (or both) formed a significant part of the (qualitative) results output. Any aspect of mental health and/or well-being was eligible for inclusion. Examples of topics related to well-being include work-related stress, psychological or physical well-being, emotional health, life/work satisfaction. Examples of topics related to mental health include resilience, coping, or specific mental health difficulties such as depression or anxiety.
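The specific term lists and database-specific syntax are provided in S2 File. Purely as an illustrative sketch (in Python, using hypothetical abbreviated terms that are not the review's actual search terms), the combination pattern described above is OR within a SPIDER category and AND between categories:

# Illustrative sketch only: hypothetical, abbreviated terms standing in for the
# full lists in S2 File. Terms within a SPIDER category are OR-ed together;
# the categories are then AND-ed.
spider_terms = {
    "sample": ['researcher*', 'academic*', '"doctoral student*"'],
    "phenomenon_of_interest": ['"mental health"', 'well-being', 'wellbeing', 'stress'],
    "design_evaluation_research_type": ['qualitative', 'interview*', '"focus group*"'],
}

query = " AND ".join(
    "(" + " OR ".join(terms) + ")" for terms in spider_terms.values()
)
print(query)
# (researcher* OR academic* OR "doctoral student*") AND ("mental health" OR well-being
# OR wellbeing OR stress) AND (qualitative OR interview* OR "focus group*")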
Articles were excluded if: (a) they did not focus sufficiently on researchers' mental health and well-being experiences as detailed above under inclusion criterion (d), (b) they focused primarily on experiences outside of academia, (c) the experiences of researchers who work in higher education institutions could not be extracted, (d) they focused on evaluating a workshop, intervention, or policy change or, (e) the information necessary for the data extraction phase of the review was not present. Corresponding authors were contacted regarding any missing data necessary for data extraction. Where no response was received within one month, the article was excluded on the basis of this missing information. Whilst research on undergraduate students and master's students was excluded, literature concerning doctoral researchers was eligible for inclusion, as much of the research which explores the experiences of early career researchers has often included doctoral researchers within that paradigm, and they have been noted as playing a key role in the research productivity of universities [17,18]. Quantitative studies were excluded as their ability to address lived experience is limited [23].

Data extraction and analysis

The following data were extracted from the eligible papers by the first author (HN): (1) title of the research, (2) author (year) and country, (3) sample size, (4) identifying features of the participants, (5) the aspect(s) of mental health or well-being explored as part of the research aim/question, (6) method of qualitative data collection and, (7) method of qualitative data analysis.

The included papers were exported into NVivo Pro version 12 to facilitate analysis. Qualitative data contained within the included papers under headings such as 'results' or 'findings' were analysed. The qualitative data analysed included themes, participant quotes, or author interpretations/explanations.

The analytical method used to synthesize the qualitative data in this review was reflexive thematic analysis. Reflexive thematic analysis can be used to explore questions pertaining to participants' experiences and is well suited to answer the research question of this systematic review [28]. The primary author (HN) followed the 6-phase process as outlined by Braun et al. [28], although the analysis was a recursive process and involved movement back and forth between each phase. Following initial immersion in the data through reading and re-reading the papers, the research team comprising HN, JB, DL, ST, and MN collaboratively developed a provisional coding frame based on eight papers which were identified as being recent enough to cover current practices in academia (studies conducted within the last 5 years at the time of our analysis (2021)), and diverse in terms of examining different aspects of mental health, well-being, and the researcher experience. As the analysis progressed to include the remaining papers, the coding frame was further refined and extended as necessary. Given the exploratory nature of the research question, the process of data coding and theme development was inductive.
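As a minimal sketch only (not the authors' actual tooling; field names are paraphrased from the extraction list above, and the example values are hypothetical), one extraction record per included paper could be represented as follows:

from dataclasses import dataclass

# One record per included paper, mirroring the seven extracted items listed above.
@dataclass
class ExtractionRecord:
    title: str                   # (1) title of the research
    author_year_country: str     # (2) author (year) and country
    sample_size: int             # (3) sample size
    participant_features: str    # (4) identifying features of the participants
    mh_wellbeing_aspects: str    # (5) aspect(s) of mental health or well-being explored
    data_collection: str         # (6) method of qualitative data collection
    data_analysis: str           # (7) method of qualitative data analysis

# Hypothetical example record (illustrative values only)
example = ExtractionRecord(
    title="Example paper title",
    author_year_country="Smith (2020), UK",
    sample_size=20,
    participant_features="Doctoral researchers, mixed disciplines",
    mh_wellbeing_aspects="Work-related stress; well-being",
    data_collection="Semi-structured interviews",
    data_analysis="Thematic analysis",
)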
When approaching the data, we adopted a critical realist position. That is, we assumed that the qualitative data analysed from the included papers were informative of an objective 'true' reality shared by academic researchers, but we also acknowledge that each individual researcher will have a subjective reality which is mediated by their own perceptions, experiences, and beliefs. Identifying key similarities/themes across the dataset (whilst accounting for contradictory findings) helped us to explore this dual reality.

Reflexivity

The researcher plays an active role in qualitative research (influencing not only how the data are collected, but also how the data are interpreted through their own personal experiences, knowledge, and assumptions). As such, it is important to comment on the position and composition of the research team, so that the reader is able to come to their own conclusions about the validity and trustworthiness of the analysis produced.

The research team differed in terms of discipline, career stage, gender, and cultural background, which helped to ensure that any personal assumptions or 'blind spots' with regards to assessing eligibility, assessing quality, interpreting the results, or the topic as a whole, were minimised. Both HN and ST are current doctoral researchers with previous experience in conducting qualitative research, and experience as assistant psychologists. JB is a Consultant Clinical Psychologist and Associate Clinical Professor, and DL is a Senior Research Fellow, both of whom have extensive experience in conducting qualitative research and systematic reviews. MN is a research assistant in the field of molecular medicine. Together, the research team brought a variety of different perspectives and experiences to the development of the synthesis, and this piece of research as a whole.

Results

Search outcome

Records were identified through the electronic database searches, the Google Scholar searches, and citation tracking. Following de-duplication, 8,978 titles and abstracts were screened for relevance by the primary reviewer (HN) using the software tool Rayyan QCRI. Just over 10% (n = 905) of the titles and abstracts were screened independently by a second reviewer (MN). Of the 8,978 papers, 8,765 were excluded for irrelevance, leaving 213 full-text articles to be sourced and read in full.

Screening outcome

All 213 articles were screened for relevance independently by the primary and secondary reviewer. At this stage 187 articles were excluded: for not being a peer-reviewed research article (n = 59), for not focusing on mental health and well-being (n = 81), and for not having extractable qualitative data related to academic researchers (n = 39). Articles were also excluded on the basis of not having the information necessary for data extraction (n = 1), for focusing primarily on fieldwork experiences (n = 2), focusing on evaluating a workshop, intervention, or policy change (n = 3), and finally for being a book chapter (n = 1), or review (n = 1). Any disagreements over eligibility at either the title and abstract stage or the full text stage were resolved through discussion between HN and MN. Where eligibility remained unclear, JB and DL were consulted and a decision was made.
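The screening arithmetic reported above can be checked directly; the sketch below (assuming only the counts stated in the text) confirms that the PRISMA-style flow is internally consistent:

# Sketch reproducing the screening counts reported above.
records_screened = 8_978            # titles/abstracts screened after de-duplication
excluded_title_abstract = 8_765     # excluded for irrelevance
full_texts_assessed = records_screened - excluded_title_abstract

full_text_exclusions = {
    "not a peer-reviewed research article": 59,
    "not focused on mental health and well-being": 81,
    "no extractable qualitative data on academic researchers": 39,
    "information for data extraction missing": 1,
    "focused primarily on fieldwork experiences": 2,
    "evaluated a workshop, intervention, or policy change": 3,
    "book chapter": 1,
    "review": 1,
}

assert full_texts_assessed == 213
assert sum(full_text_exclusions.values()) == 187
included_papers = full_texts_assessed - sum(full_text_exclusions.values())
print(included_papers)  # 26 papers included in the meta-synthesis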
In the protocol submitted to PROSPERO, we outlined our intention to include relevant grey literature and commentaries written in the first person where all necessary data could be extracted. This was to ensure we captured sufficient data for a meta-synthesis to be feasible. However, due to the adequate amount of peer-reviewed research articles available for inclusion in the review, the decision was made to exclude any grey literature and commentaries at the full-text screening stage. Attempts were made to find peer-reviewed versions of this literature, which were then screened for relevance instead. Articles which have gone through a peer review process are likely of a good quality [29], which can help to create confidence in the findings. By including only peer-reviewed research articles in our systematic review, we therefore aim to increase confidence in the quality of our findings also.

Characteristics of the included studies

Of the 26 papers included in the meta-synthesis, five papers included participants based in North America (Canada, USA, Mexico), 13 papers included participants from Europe (UK, Finland, Germany, Sweden, Netherlands, Spain), one paper included participants from Asia (China), and nine papers included participants based in Australia and Oceania (Australia and New Zealand). The methods of data collection employed included: surveys with open-ended questions (n = 7), interviews (n = 14), focus groups (n = 4), and autoethnographic excerpts (n = 4). Whilst all participants were associated with conducting research, they varied in terms of career stage and role title. The most common group was that of doctoral researchers, with 14 papers including them as participants. The academic disciplines represented across the papers varied extensively, as did the aspects of mental health and well-being examined. All of the studies were published between 2011 and 2021. Further details pertaining to the characteristics of the included studies can be found in Table 1. The details contained in Table 1 refer to the qualitative component of the papers, where papers are mixed methods or report on more than one study.

Quality appraisal

Research into the mental health and well-being of researchers in academia is still in the early stages, with limited literature published so far. As such, no paper was excluded from this review due to quality limitations. However, each study was given a quality rating using the Critical Appraisal Skills Program (CASP) checklist, which was chosen because it is a frequently used instrument recommended by the Cochrane Collaboration and it addresses many of the principles underlying qualitative research [55].

We used the three-point scale advocated by Lachal et al. [26] to classify criteria as totally met, partially met, and not met. The criteria were as follows. Q1: Was there a clear statement of the aims of the research? Q2: Is a qualitative methodology appropriate? Q3: Was the research design appropriate to address the aims of the research? Q4: Was the recruitment strategy appropriate to the aims of the research? Q5: Was the data collected in a way that addressed the research issue? Q6: Has the relationship between researcher and participants been adequately considered? Q7: Have ethical issues been taken into consideration? Q8: Was the data analysis sufficiently rigorous? Q9: Is there a clear statement of findings? Q10: How valuable is the research?
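Purely as an illustrative sketch (the ratings shown are hypothetical and abbreviated, not those reported in Table 1), the three-point CASP rating scheme described above could be recorded per study like this:

from enum import Enum

# Three-point scale advocated by Lachal et al. [26]
class Rating(Enum):
    NOT_MET = 0
    PARTIALLY_MET = 1
    TOTALLY_MET = 2

# Abbreviated labels for the ten CASP questions listed above
CASP_QUESTIONS = [
    "Q1 clear aims", "Q2 qualitative methodology appropriate", "Q3 design appropriate",
    "Q4 recruitment appropriate", "Q5 data collection addressed the issue",
    "Q6 researcher-participant relationship considered", "Q7 ethical issues considered",
    "Q8 analysis sufficiently rigorous", "Q9 clear statement of findings",
    "Q10 value of the research",
]

# Hypothetical ratings for a single included study
example_study = dict.fromkeys(CASP_QUESTIONS, Rating.TOTALLY_MET)
example_study["Q6 researcher-participant relationship considered"] = Rating.PARTIALLY_MET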
The studies were independently assessed on quality by HN and ST.Any disagreements were resolved through discussion.Individual study quality ratings can be found in Table 1. Meta-synthesis We identified seven key themes through the reflexive thematic analysis.The key themes along with their corresponding sub-themes (which are highlighted in bold and italicised in the text) are reported below with illustrative extracts.Insecurity and career prospects.Issues with financial insecurity spanned researchers' experiences across countries, disciplines, and career stages, and often resulted in feelings of worry and stress.Researchers from the UK commented on a scarcity of funding to effectively support students at undergraduate level and above [37], whilst professors at varying levels from North America and Australia commented on a lack of funding to support professional development activities [40].Doctoral researchers' financial concerns centred mainly on the financial limits imposed by the scholarship or stipend they receive throughout the duration of their study: '. . .receiving "a stipend that can barely [. ..] support your living' as a doctoral student is not the same as other people earning money, like, real money by working" (Doctoral researcher, Educational Sciences, Finland- [42]).For many post-doctoral researchers and those in the later stages of their career, economic precarity was also linked directly to job insecurity.Researchers from Australia in the later stages of their career [6,51] drew attention to the importance of successfully obtaining a grant to fund research activity, which helps to maintain both current job contracts and research personnel.The likelihood of ongoing employment being dependent on the outcome of a funding round or securing a grant-the process of which was not always considered fair-placed extensive pressure on researchers, which ultimately impacted negatively on their well-being: ". ..the chance of anyone with even a modicum of expertise in your field reviewing your grant is basically zero" (Mid-career researcher, Australia- [6]). ". . .Many people anxiously await the grant outcome to see if they are out of work in six weeks" (Senior researcher, Australia- [6]). The stressful nature of precarious work contracts was further compounded by a lack of communication on behalf of the universities around contract renewal: ". . .my contract was coming up for renewal, and my university just messed me around. ..they weren't telling me anything. .." (Post-doctoral researcher, Education, UK- [44]). Others felt that this precariousness also prevented researchers from expressing dissatisfaction with current work practices at an institution: ". ..you can lose your job if you question practices of a higher level. ..." (Professor, Engineering Education, Australia- [34]). Both financial and job insecurity impacted on researchers' career prospects and aspirations.Unsurprisingly, perceptions of career prospects were influenced by the reliability of a funding bridge between research projects or posts.However, they were also influenced by the lack of "tenure-track" [54] and permanent positions available in academia.Indeed, some researchers felt that the unpredictability of securing a permanent position de-valued their achievements, hard-work, and expertise: ". . . it is not weird. ..to expect after such long studies and with such a great CV, to get a permanent position as reward and acknowledgement. .." (Post-doctoral Researcher, Netherlands- [47]). 
Nevertheless, despite restricted career prospects which may compromise mental health and well-being, a number of researchers, particularly those in the post-doctoral stage, stated that it was their intention to remain in academia.Doctoral researchers, however, were noticeably more hesitant in committing to a single career path: "I am green with envy and stressful when I see my classmates at college are well-settled down in their career while I am still struggling for a PhD, my career still being an illusion" (Doctoral researcher, China- [49]). It is important to keep in mind that limited access to career workshops or coaching [47] and thus limited preparations for a non-academic career path, may influence a researcher's intent to remain within academia: ". . . I feel I have little to offer outside of the university sector and am unsure what I could realistically go for!" (Participant 12, UK-[37]), A demanding career path: "you have to be excellent at everything".The sub-theme high expectations and overworking was present across all 26 of the included papers.High expectations encapsulates the pressure to engage with the three domains of research, teaching, and service (for example conducting a review of a program [53]), whether trained in these domains or not, the pressure to handle competing demands with strict deadlines, to work unpredictable and long hours, to be independent, to handle multiple counts of criticism and rejection [33], and to be resilient in the face of these expectations.Ultimately, researchers were expected to be focused on impact and productivity [46] rather than individual/researcher wellbeing or emotional capacity. These expectations were set by the system, and consequently by colleagues and the researcher themselves.Prominent factors which made meeting these expectations difficult, particularly for post-doctoral researchers and researchers in the later stages of their career included the introduction of new research policies which were not conducive to all disciplines, increased student numbers without the necessary resources in place to manage this, increased administrative loads, and the expectation to provide pastoral support to students.The expectation to produce research that is 'impactful' appeared to more negatively impact the well-being of those from less applied/theoretical disciplines [34], whilst the expectation to provide pastoral support was noted as falling more heavily on female researchers: ". . .she had people queuing out the door for office hours even if they weren't really her students . .." (Doctoral researcher, Arts & Humanities, UK- [31]). To meet the job demands of academia and/or gain the skills and accolades necessary to secure an ever-elusive permanent position, researchers across countries, career stages, and disciplines described the extra work they took on and the long hours they worked, along with the productivity guilt that ensued when they were not able to meet their own or others' expectations.This left them at risk of stress and burnout: ". ..if it's like 4 pm and. ..my experiment hasn't worked, immediately my brain is like "Well you should start it again and leave work at 10pm. ..finish it, get it right. .." (Doctoral researcher, Science, UK- [31]). 
Research cultures characterised by high expectations, job precarity, and reduced opportunities for permanent positions engendered a sense of competition among the research community, often described as 'nasty, aggressive and unpleasant' [51]. Nevertheless, it was difficult not to perpetuate this sense of competition, so as not to feel at a disadvantage career-wise: ". . .I will still secretly judge if somebody always goes home at 4pm, and I know I shouldn't. . .But there is this. . .highly competitive spirit that everybody sort of expects, that if you want to be the best then you have to work 80 h a week . .." (Post-doctoral researcher, Canada- [54]).

In the interest of career progression, both early career researchers and researchers at more senior levels expressed wanting to maintain a reputation of being able to meet the high expectations set: '. . . I have known academics who have hidden their mental distress for fear of being pigeonholed as flimsy and undependable' [32].

Interestingly, career stage did not appear to impact researchers' reluctance to disclose difficulties or dissatisfaction (that is, difficulties of an academic, mental health, or well-being related nature): ". ..I don't want to give the impression that I'm already failing" (Doctoral researcher, Science, UK- [31]).

Due to this general reluctance to disclose, researchers were left at risk of blaming any difficulties or dissatisfaction associated with their job on themselves. This risk was further compounded by facing, or expecting to face, negative reactions from colleagues or supervisors when difficulties or dissatisfaction related to the workplace were shared: "I just felt like. . .they wouldn't listen to me as a person and they would just say, "Hey, see these Black kids can't cut it"" (Post-doctoral researcher, Mechanical Engineering, USA- [38]).

For doctoral researchers specifically, the reluctance to disclose was also related to confusion over the extent to which the supervisory relationship is pastoral, unknown limits to confidentiality, and a fear of overburdening others: ". ..they're stressed and it feels like a lot to say to them. ..'Can we talk to you more. ..?'" (Doctoral researcher, Arts & Humanities, UK- [31]).

It is important to note that some researchers encountered both understanding and help from their colleagues and supervisors following the disclosure of difficulties [31,32]. However, the extent to which this understanding lasted was limited in some cases: '. ..two focus group participants with chronic mental health conditions stated that although supervisory teams were sympathetic when they first learned of the student's condition, participants felt that this was soon forgotten or dismissed with the expectation that they must surely be 'over it' after a period of time' [48].

Ultimately, being perceived as meeting or not meeting the expectations set by themselves, colleagues, or the system as a whole may have an impact on researchers' confidence in their ability to do their job, not only affecting their well-being, but also their sense of identity as an academic, and thus their sense of belonging to the academic community: "Receiving an award from a research society based on my presentation and work [. . .] I felt that I was recognised as an experienced researcher who could convey my research and was becoming an expert in my field" (Post-doctoral researcher, Oncology, UK- [44]).
Indeed, some researchers commented on not feeling like their current self is living up to the perception of the 'proper' or 'ideal' academic.These feelings of self-doubt and inadequacy (despite evidence to the contrary) are indicative of imposter syndrome, a well-known phenomenon in academia that can impact negatively on mental health: 'In some cases, women also explicitly attributed mental health issues to imposter syndrome, as in the case of a Canadian postdoc who reported: "Mentally I think definitely there's been some bouts of depression.You know, definitely some imposter syndrome. . .So with that, you know, definitely some anxiety. .." [54]. A factor which further impacted doctoral researchers' identity and sense of belonging at work was uncertainty around whether they are a member of the student body or faculty: ". ..we are like ghosts in the campus.We are part of the faculty, but we are not" (Doctoral researcher, Arts & Humanities, UK- [31]). Work-life balance and the academic lifestyle: An incompatibility.The high expectations set by academia coupled with the importance placed on mobility for continued employment: "I'm working in a city for 2 years and then I'm expected to move to a whole new country. .." (Post-doctoral researcher, Germany- [54]), networking, and career progression: "One of the requirements for the fellowship above my level specifically says you have had to work overseas" (Participant, Sciences, Australia- [51]) often made balancing personal and professional lives, difficult.Researchers across career stages, academic disciplines, and countries described academia as inherently inflexible in this regard: ". ..You are either expected to play the game in full or get out" (Post-doctoral researcher, Canada- [54]). Feeling unable to take breaks and engage with other meaningful activities had the potential to lead to high levels of stress and burnout.Feelings of frustration and guilt were also particularly prevalent in responses which mentioned conflict between job demands and family systems: ". ..my family is the most important to me, but so is my career, and there is when I go into conflict. ..I do not want to leave either of them but I cannot be in both places at the same time. .." (Participant, Mexico- [39]).Indeed, a key stressor spoken about primarily by female researchers, was that of when to start a family.Early career researchers in particular described the following tension: children versus career progression, "one or the other not both".Many perceived that having children was generally stigmatized and discouraged within the academic environment due to the potential loss of productivity: "The gossip in my department was that. . . the climate was not very conducive for women to become pregnant . . .they become less useful for the department during their time off. .." (Post-doctoral researcher, Germany- [54]). Ultimately, in the context of applying for funding [51], publishing [54], and the academic job market as a whole; pregnancy, taking maternity leave, and raising a child left female researchers feeling at a disadvantage compared to their peers without children: ". . .I feel like that does harm your career.Because I don't think it's recognized. . .you're still expected to be producing a certain number of publications even if you are taking time off to have kids. .." (Post-doctoral researcher, Canada- [54]). 
Nevertheless, flexible working hours and flexibility in terms of idea development appeared to be associated with good well-being, as they allowed academic researchers to retain a certain level of autonomy and independence over their personal and professional lives. However, the tension of flexibility being "a blessing and a curse" was prevalent in the literature. Some early career researchers, including doctoral researchers and post-doctoral researchers, associated this flexibility with difficulties in maintaining motivation and feeling a sense of achievement: "The flexibility, it's both a blessing and a curse really, every day you kind of plan for yourself, and it's a blank slate. But admittedly a lot of times I wake up and I'm not sure what I'm going to achieve that day and I don't achieve anything" (Post-doctoral researcher, Canada- [54]).

The influence of relationships and role models.

Researchers' social relationships were often described as being sources of support. Social relationships were described as protective against the experience of mental health or well-being-related difficulties, and often aided in maintaining a good work-life balance: ". ..my husband and children allowed me to stay sane because it forced me to make time for other things than work" (Professor, Kinesiology- [40]).

However, social relationships could also be a source of stress. Due to the demands and high expectations associated with the academic job role, establishing or maintaining social relationships outside of academia could be complex: "Only the strongest relationships survive. ..I focus on only the closest family members [for] maintaining relationships. Other relationships have had to adapt. ..or, more often, disintegrate" (Senior researcher, Australia- [6]).

Researchers also spoke about their work relationships and the wider academic community in the context of being protective against the demands of the job. Positive interactions at work helped to combat feelings of loneliness, isolation, and mental distress. Receiving peer support from those within the same discipline, those at a similar career stage, or those with other similar personal characteristics was considered particularly beneficial, as it led to a feeling of 'togetherness' and a sense of community: ". . .there is no one else that understands you as well as another doctoral student . .." (Doctoral researcher, Sweden- [43]).

Nevertheless, researchers' work relationships could also be stressful. The competition to get ahead in academia can encourage both negative self-comparisons and fractious relationships to form between colleagues, which imposed limits on opportunities for peer support: "Colleagues take advantage. ..It made me understand the kind of person I find difficult to work with" (Post-doctoral researcher, Medical Sciences, UK- [44]).

For researchers who also taught, their interactions with students were described as similarly double-edged, being both a source of job satisfaction and stress: "There is a lack of respect with some. They disrupt lectures and send quite rude emails demanding attention" (Participant 28, UK- [37]).
Researchers from all career stages spoke about the importance of mentors and role models, that is, having someone to guide them throughout their career as a researcher.For doctoral researchers, their supervisor was seen as somebody who could contain worries, strengthen the doctoral researcher's confidence, and increase motivation.However, supervision could negatively impact mental health and well-being if it was perceived as not meeting the doctoral researchers' own needs and expectations or was considered unhelpful or harmful.A lack of formal rules and training were thought to encourage negative supervisory practices: "[supervisors] might be amazing scientists, but they have never been trained in. . .managing collective people" (Doctoral researcher, Science, UK- [31]). Closely tied to supervision and mentorship was the idea of a role model, that is, a person who is looked up to by others as an inspirational example to be imitated.Role models were particularly important for both women and those from a Black ethnic background, who are under-represented among the senior levels in academia.This lack of representation at the higher levels led to feelings of not belonging amongst early career researchers who share these characteristics: ". ..I feel like engineering in general is much harder for minorities because they don't have a lot of people they can look up to. .." (Doctoral researcher, Computer Science, USA- [38]). The absence of either good supervision, role models, or peer or social networks led to feelings of isolation and loneliness, which had the potential to significantly impact not only researchers' mental health and well-being, but also their productivity at work [41].In the context of this review, particular groups at risk of isolation included female researchers [54], researchers from a Black Ethnic background [38], and part-time or international doctoral researchers [48]: ". ..you end up being totally isolated and I think it's easier to some extent for British or when you have your family because even if they don't know anything what you're doing they are still there to support you. .." (Doctoral researcher, UK- [48]). The impact of working in academia on health.Researchers' awareness and understanding of mental health and well-being varied across the included papers.Indeed, the normalizing of chronic stress in academia left some researchers unsure as to whether they were at risk of developing, or currently experiencing, difficulties with their health or well-being: ". ..even those women who said that they did not experience negative effects on their health due to their academic careers mentioned that they experienced great amounts of stress and contended with sleepless nights, suggesting that those women came to expect extreme stress and lack of sleep as a part of the normal postdoctoral experience" [54]. Overall, there was a general call for more transparency with regards to managing mental health and well-being in the context of academia: "[A]t high levels there's not very much vulnerability and transparency about how people actually approach their daily work lives and how they actually go about maintaining their wellbeing at the same time as achieving as a researcher" (Doctoral researcher, Science, UK- [31]). Due to feelings of uncertainty, and varying levels of mental health literacy, doctoral researchers highlighted the key role of the supervisor in legitimising and aiding in help seeking for mental health or well-being related difficulties: ". ..it really saved me. . 
.they weren't going to be my therapist, of course. ..but they were there to make sure that I addressed my issues" (Doctoral researcher, Arts & Humanities, UK- [31]). The lack of open discourse around mental health and well-being related difficulties seemed to perpetuate the idea that a successful academic is infallible and immune to such difficulties.However, this is often not the case, and a large majority of researchers across countries, disciplines and career stages described experiencing stress and the presence of physical and mental health difficulties: ". ..she's got all these publications and she's had grants-. ..actually my life is a bloody nervous wreck" (Professor, Music, Australia- [34]). "I suffered severe pain and unknown skin irritations and allergy symptoms.The doctor said everything was caused by stress. .." (Participant, Canada- [53]). There were some exceptions in the research.For example, when managed using personal resources, the presence of stress was sometimes seen as a motivational factor, and necessary for scholarly development: '. ..seeing stress as "a motivation by itself" urges one to "try harder" and "become more competent and more efficient". ..' (Doctoral researcher, Education, Finland- [42]). Coping and support.Support provided by organisations (or lack thereof) was touched upon across many of the included papers.Overall, there appeared to be a disconnect between the high expectations set by the higher education system, and the time, resources, and encouragement given to researchers in order to reach these expectations: "Just when most academics are due for a break, right when most universities shut down and take offline all of their support services, RGMS [online application process] opens up" (Senior researcher, Australia- [6]). Interestingly, UK doctoral researchers [31] perceived that universities prioritized their reputation above acknowledging and addressing their mental health and well-being.Early career researchers across Canada, Germany and the USA also expressed frustration over a lack of change in policy or practice, despite universities publicly acknowledging the biases and challenges faced by under-represented groups in academia, including women and those from a Black ethnic background: ". ..Structural barriers. . .are documented and real, and yet the universities still have this gender bias problem" (Post-doctoral researcher- [54]). There were varying levels of awareness as to what institutional support was currently available (either in terms of mental health or professional development).Some researchers appeared to encounter difficulties in accessing the support provided either due to the support information not being easily accessible [48], or due to their career stage [54].As one post-doctoral researcher recounted: "My officemate actually was particularly anxious and he called some kind of help line at [the university] looking for support and they denied him anything as a postdoc.They told him if he were a student okay, or faculty okay, but as a postdoc we can't help you...." (Post-doctoral researcher, Canada- [54]). Concerns were also raised as to the effectiveness of student services in handling problems unique to doctoral researchers: "My experience with student services was they didn't know what they could do, they'd say 'I'll look into it'. Granted I'm in quite a unique situation right now, they are multiple things going on. 
They said 'I don't know if we can do anything to help you, I can look into it and get back to you" (Doctoral researcher, UK-[48]). However, others did describe university-based support they had found helpful: 'the counselling was great.She really helped me' (Doctoral researcher, UK- [48]). The lack of support provided by organisations necessitated the use of individual coping strategies to counteract the stress of working in academia.Given the lack of control researchers had over many aspects of the job, many researchers focused on both what they could change and regaining a sense of control.The most common coping strategy mentioned was perseverance, however, a mixture of other emotion-focused and more practical coping strategies were also used, such as re-framing the experience from a negative to a positive and seeking professional help. "My stick-to-it-ive nature. . .has kept me and gotten me to the point where I am, and gotten me to the point where I can finish. .." (Doctoral researcher, USA- [38]). Although not an individual coping strategy per se, the passage of time often allowed for the potency of negative emotions to decrease: "I don't care anymore; I've kind of forgotten about it, to be honest.[. ..]At the time, I was very frustrated and irritated. .." (Post-doctoral researcher, Sociology, UK- [44]). Factors contributing to job satisfaction including a passion for science, recognition of hard work from peers or institutions, seeing students develop, or a paper being accepted for publication, also aided in attempts to maintain positive well-being at work: 'The science gives me the greatest satisfaction . . . the satisfaction of pitching a question, seeing the results come through' (Senior researcher, Science, Australia- [51]). Perceptions of what support should look like were included across many of the studies in this review.Most prominent among the suggestions which could improve researchers' mental health and well-being at work was a call for organisations to assess 'productivity relative to opportunity' [51], and clarity regarding promotion and tenure processes so that individuals can set realistic goals.Researchers at different career stages also commented on the importance of their physical workspaces engendering a sense of belonging and well-being: ". ..there no space on this campus where. ..five of us can sit down and just yap without the undergrads constantly taking that space. ..we talk so much about informal learning and. ..we don't have a space for PhD students. ..Places need to grow" (Doctoral researcher, Social sciences, UK- [31]).Some suggestions for support were specific to particular academic researcher populations.Post-doctoral researchers explicitly called for more developed policies, programs and workshops tailored to furthering their career [47] whilst doctoral researchers called for supervisors to receive training with regards to conducting supervision and also suggested assigning a supervisor(s) based not only on the research topic, but also on the support needs of the doctoral researcher. 
Positions of privilege.

Academics from different career stages felt privileged to be in a position where they had the opportunity to contribute to society through their research. Nevertheless, feeling morally compelled to give back to society also had the potential to negatively impact upon well-being through contributing to overwork, and tensions could be found between colleagues and social support networks when moral dispositions did not align: "Someone I know who got one of the largest grants ever said, 'I don't care if my research has impact-I'm doing this because I'm curious about this' and I just thought that was an appalling waste of tax payer's money. .." (Professor, Education, UK- [34]).

Feelings of tension could also arise when the impact on society or participants was not immediate or direct, particularly when the research involves discussing sensitive topics such as trauma: '. ..the promise of the potential for positive change years down the road does little to help a researcher sleep at night' [41].

Although already touched upon in other key themes (notably 'work-life balance and the academic lifestyle: an incompatibility' and 'the influence of relationships and role models'), the included papers indicated a sense of inequality in academia that should be drawn out more explicitly. This sense of inequality was particularly prevalent among responses from female researchers and researchers from a Black ethnic background, who are under-represented among the senior levels in academia, and who often described facing, or expecting to face, incidents of harassment, bias, or discrimination. This left them at risk of unequal access to resources, support, and opportunities, and reduced well-being at work: ". . .I'm the only Black guy in the group. . .and the only one being treated this way. So, you're like, "What?!" you know" (Doctoral researcher, USA- [38]). These experiences were considered reflective of society at large. Consequently, both institutional as well as broad societal changes were thought to be needed in order to enact change and foster a more supportive and equal research culture: '. . .students reported feeling the need to combat stereotypes that seeped from society at large into their engineering and computing programs' [38]. Ultimately, some researchers noted that: 'there's definitely a boys/girls' club and being part of that group can help your career' [51].

Discussion

We aimed to better understand how researchers experience working in academia, and the effect these experiences have on their mental health and well-being. We identified seven key themes as a result of conducting a meta-synthesis across 26 papers which met our inclusion criteria. The seven key themes spanned across the countries, disciplines and career stages mentioned in the included papers, and shed light on factors at an individual, interpersonal, and systemic level which appear to impact the mental health and well-being of the academic researcher population. However, throughout the analysis, we took care to also highlight the nuances in researchers' experiences. Considering both the similarities and differences in experience can have important implications for workplace policy and can help highlight areas where interventions should be targeted.
Job insecurity, a lack of family-friendly policies, inflexible requirements for funding and promotions, and a push for productivity above all else left many researchers stressed and experiencing (or at risk of experiencing) mental and physical health difficulties. These systemic stressors are highlighted across the wider, albeit limited, literature on this topic [8,16], and it is unsurprising, therefore, that suggestions for support and change across the included studies in this review focused on addressing these systemic issues, as opposed to implementing interventions at the individual level. There was a sense across the included studies that it is scientific/academic practice, and the system's concept of what a successful researcher should look like, that needs to change, rather than putting the onus on the individual to cope in this environment.

Nevertheless, recent evidence suggests that many of these systemic issues continue to pervade academic spaces [56]. Indeed, there remains an expectation to meet high academic performance standards, despite the ongoing disruptions the COVID-19 pandemic has caused across researchers' personal and professional lives [57]. The impact of COVID-19 on the higher education system should continue to be monitored, as evidence suggests that the pandemic has both illuminated and exacerbated the risk that particular systemic issues highlighted in this review (such as financial and job insecurity, as well as inequalities related to gender and ethnicity) can pose to researchers' mental health and well-being [56,58].

This review has also highlighted the pressure researchers feel to maintain a reputation of being able to cope with the high expectations set in a competitive academic environment. The stigma that appears to exist with regards to experiencing mental health difficulties in academia, coupled with the normalizing of chronic stress, likely prevents researchers from accessing support when needed [24,59]. Fostering an environment where open discourse around mental health and well-being at work can occur without fear of repercussions will likely aid in the detection and treatment, and ultimately prevention, of mental health difficulties [59]. Nevertheless, this review has also highlighted a lack of awareness as to what mental health-based support is currently offered by academic institutions, which represents another barrier to accessing support. Institutions need to ensure that any support currently offered is visible and the process of accessing this support is clear and straightforward. Further research is needed with regards to doctoral researchers' and post-doctoral researchers' hopes and expectations for mental health-based support at work, as some of the included papers in this review [48,54] have highlighted that they may not be able to access or benefit from the institutional support already provided for students or faculty.

The importance of both peer and social networks in maintaining positive well-being is stated throughout the included papers. Interventions at both an individual and systemic level may be needed to ensure that researchers are not forced to choose their academic identity at the expense of other important life roles and activities, which may in turn limit their access to social support networks. Indeed, belonging to multiple social networks has been shown to be protective of mental health and well-being [54,60].
In the absence of adequate support from academic institutions, evidence has shown that early career researchers in particular have taken the initiative to form their own peer support networks, an example being Scholar Minds in Germany [61], where workshops related to the PhD experience or general information-sharing and collaboration can take place [62,63].These networks can help to foster a sense of community and connection [63].Whilst it is imperative to further develop peer support systems where opportunities exist [64], it is important to note that this review has highlighted that these networks can also be a source of competition and stress.Any peer support interventions will therefore warrant careful evaluation, and care will need to be taken to ensure that these interventions do not overburden an already overburdened workforce. Despite finding multiple similarities across career stages, disciplines and countries, this review also highlighted some notable differences in experience between certain subgroups of the academic researcher population. Across the included studies which commented on the experiences of doctoral researchers, the key role of the supervisor was highlighted, a sentiment which is echoed across the wider literature on doctoral researchers' mental health and well-being [24,65].As such, it is important for universities to invest in this relationship by providing appropriate training for supervisors, clarifying the supervisory role, and ensuring a good fit between doctoral researcher and supervisor based on both professional and personal support needs.Whilst some universities may already have procedures linked to these suggestions in place [66], this review suggests that the supervisory relationship can still be a source of tension for doctoral researchers.Further research is needed to explore the supervisory relationship from both a doctoral researcher and supervisor perspective [67], to help identify ways to ensure that this relationship does not impact negatively on the well-being of either individual. This review also highlighted the difficulties faced by female researchers and researchers from a Black ethnic background in particular, although it is important to note that other under-represented groups in academia including those from the LGBTQ+ community and those with disabilities also experience similar systemic challenges with regards to a lack of role models [68] and experiences of bias and discrimination [8,68]. 
Initiatives have been introduced to try to tackle inequality in academia across the UK, Europe, North America, and Australia [69,70]. A notable example is the Athena SWAN Charter, introduced in the UK in 2005, which provides incentives and awards for higher education institutions to actively highlight and tackle gender inequality across multiple disciplines [69]. Whilst participation in these initiatives has been shown to increase awareness of broader diversity issues and has helped to challenge incidents of discrimination and bias [69], the results of this review, with all papers published since 2011, suggest that researchers from under-represented groups in academia still experience the academic research environment as unequal and unsupportive. Indeed, in the short term at least, evidence from the wider literature suggests that some academics may perceive these initiatives as restricted in their ability to tackle persistent pay, power and promotion disparities [69,71]. Whilst calls for societal change inherent in this review and the wider literature [70] may be beyond the scope of the higher education system, further research could explore in greater depth the work experiences of those from under-represented groups, along with their perspectives on what more effective support could look like.

Strengths and limitations of the included papers

Most of the articles reviewed were judged to be of good quality, and each article shed light on how working in academia can impact on researchers' well-being and mental health. Nevertheless, there are a number of limitations inherent in the papers included in this review. First, there was a notable lack of reflection on how the researchers themselves may have influenced the findings of their studies. As the studies' philosophical stance and the potential influence of the researchers were not clearly stated, it is difficult to ascertain how the research teams' characteristics may have influenced the process of data collection and analysis. Second, the link between work experiences and researchers' mental health and well-being was not always explicitly stated in the papers, and it therefore fell to us as readers to make our own interpretations and inferences regarding the data, which may have differed from the study participants'/authors' original intended point. Indeed, due to the differences in how mental health and well-being were conceptualised and discussed across the included papers, it was difficult to maintain a distinction between the two concepts when conducting our analysis. Finally, this review has also highlighted the general scarcity of research which explores academic researchers' mental health and well-being experiences; only a small number of articles were identified and included in this review (n = 26).
Strengths and limitations of the meta-synthesis

The meta-synthesis itself also has some notable limitations. The search strategy involved an English language restriction, and therefore the majority of included papers involved participants from predominantly Western, English-speaking countries. As such, the findings from this review may not reflect the views and experiences of researchers working in higher education institutions across the globe. However, research exploring the stressors faced by academic researchers suggests that there are similarities between the experiences of those in Western countries and the rest of the world, particularly with regards to unequal access to resources, support, and opportunities [72]. Our search strategy took a broad approach, as we aimed to provide an inclusive and in-depth examination of the status of academic researchers' mental health and well-being. As a result, we included a range of academic researcher populations, methodological approaches, constructs related to mental health and well-being, and places/institutions of higher education. Nevertheless, a narrower search strategy might have allowed for a more in-depth look into more specific practices and experiences, and could be a potential avenue for further systematic research in this area.

On a similar note, well-being and mental health are complex constructs, and we acknowledge that the list of terms related to these constructs that we included in the search strategy to help identify relevant papers was by no means exhaustive. By not including a more exhaustive list of constructs related to mental health and well-being, we may have missed further relevant papers. Nevertheless, to help ensure that we captured as many relevant papers as possible, we conducted a preliminary informal literature search to discover how the existing relevant literature conceptualised these constructs.

It is also important to note that many of the papers included doctoral researchers as participants in their study (n = 14), and therefore their views may arguably be more prevalent and represented in this meta-synthesis than those of other academic researcher groups. A relatively small number (n = 12) of the included papers in this review focused specifically on one academic researcher group past PhD level, highlighting a dearth of exploratory research into the well-being and support needs of post-doctoral researchers and those in the later stages of their career which, again, may form an avenue for future research. It should also be noted that the views of researchers who have experienced difficulties with their mental health or well-being may arguably be more prevalent across the included papers (and thus more represented in our analysis), as these experiences may make them more inclined to participate in research exploring these concepts [48]. Similarly, some of the included papers specifically sought out participants with symptoms of a mental health difficulty [39], and others specifically focused on examining more negative constructs which could contribute to poorer mental health and well-being, such as stress [49].
Finally, whilst the impact of researching trauma on mental health was noted explicitly [41], as was the negative impact of the 'impact agenda' on researchers from less applied, theoretical disciplines [34], any other experiences specific to discipline/subject area, and the impact these have on mental health and well-being, were difficult to ascertain, given that disciplines were not always stated and ascertaining disciplinary differences was often not the focus of the included papers.

Conclusion

The findings of this systematic review and qualitative meta-synthesis highlight the individual, interpersonal, and systemic factors that can impact the mental health and well-being of researchers who work in academia. Attempts to navigate the high expectations set by the academic system, continued job insecurity, and incidents of bias and discrimination have left researchers experiencing, or at risk of experiencing, physical and mental health difficulties. This review has highlighted areas where better support could be implemented, including maximising opportunities for social and peer support, and tackling systemic issues. Further high-quality qualitative research is needed to better understand how systemic change, including tackling inequality, can be brought about more immediately and effectively from a researcher's perspective. Further high-quality qualitative research is also needed to better understand the experiences and support needs of post-doctoral and more senior researchers, as there is a paucity of literature in this area.

Fig 1. PRISMA flow diagram of the identification, screening, and selection of papers for inclusion [27]: 13,778 articles were identified through searching the bibliographic databases and Google Scholar, and eight additional articles were identified through citation tracking.

Fig 2. The interconnected nature of the main themes and sub-themes.
Prediction of construction material prices using ARIMA and multiple regression models

Construction material price (CMP) variations have become a major issue in properly budgeting construction projects. Inability to accurately forecast CMP volatility can lead to price overestimation or underestimation. Enhancing the accuracy of predictions of CMP can also enhance the accuracy of predictions of total construction costs. The purpose of this study is to present a model for predicting construction material prices that assists decision-makers in making better decisions over the life cycle of a project. The price records for CMP, namely steel, cement, brick, ceramic, and gravel, and the indicators affecting them in Egypt were used for the prediction procedures. The practical methods for using the Box-Jenkins Autoregressive Integrated Moving Average (ARIMA) time series approach and multiple regression models for forecasting building material prices are outlined in this research. Out-of-sample predictions are used to evaluate the models' performance in predicting future prices. The models are compared according to the Mean Absolute Percentage Error (MAPE). The generated models show good results in predicting month-to-month variations in material prices, with MAPE ranging from 1.4 to 2.8 percent for the selected models. This research can assist both owners and contractors in improving their budgeting processes and preparing more accurate cost estimates.

Introduction

The number of large-scale construction projects for residential, commercial, and government structures has recently surged around the world. Construction costs for mega projects have become a major source of concern under current conditions, due to their high prices and numerous design modifications during their long construction durations. This issue has also impeded contractors from creating accurate cost estimates, since material prices can account for up to one-fourth of overall project costs (Hwang et al., 2012). The wide variation in CMP makes accurate planning and cost estimating difficult for both owners and contractors. Contractors may lose bids or revenues owing to cost overestimation or underestimation (Ashuri and Lu, 2010). Many scientists attempt to accurately forecast cost increases, but predicting prices for a variety of construction materials requires a simple and automated procedure. Enhancing the efficiency of material price predictions can also improve the accuracy of total cost estimates.

Various project stakeholders might benefit from predicting short- and long-term variations in construction material prices. Contractors can avoid losing bids or profits by enhancing the accuracy of their cost forecasting. This avoidance of losses leads to fewer hidden price contingencies, postponed or canceled projects, budget irregularities, and erratic project flows. Owners of projects might profit from avoiding these undesirable consequences. To account for probable changes in future material prices, cost estimators, for example, raise the estimated material costs to the planned construction date's midpoint (Anderson et al., 2006). Other cost estimators have followed the method of adding a fixed percentage of the overall estimated cost as a risk premium to account for material price increases, such as asphalt cement (Laryea & Hughes, 2009). These simplistic solutions ignore the fact that CMP varies significantly even over short periods of time.
Given considerable uncertainty regarding the rate of escalation of material prices, a probabilistic approach based on Monte Carlo simulation was utilized to assess the project cost range (Back et al., 2000). Monte Carlo simulation was used to generate random values for the escalation rate of material prices. However, Monte Carlo simulation does not address the impact of autocorrelation in historical CMP, which is a critical flaw in this approach (autocorrelation represents the relationship between a time series of variables over various time intervals). The results show that the suggested model performs similarly to present practice in terms of expectation while also offering theoretical uncertainty bounds that are well suited to future volatility, which is possibly more relevant.

Literature review

Numerous studies have sought to address cost escalation factors by concentrating on rapidly fluctuating construction material costs in an effort to make cost planning more feasible. The primary problems here are identifying escalation drivers and properly and simply calculating project costs. Shiha et al. (2020) presented three models that employ artificial neural networks (ANNs) to estimate future costs of major building materials, such as steel reinforcing bars and Portland cement, 6 months ahead in the Egyptian construction sector. The three models used a Genetic Algorithm (GA), the Neural Tools software, and the Python programming language. Historical data on steel and cement prices, as well as macroeconomic indices, were used in Egypt to train, test, and validate the suggested models. To forecast the 7-day and 28-day strength of concrete specimens, Kaveh and Khalegi (1998) used artificial neural networks for various types of concrete mixtures, considering both plain concrete and concretes with admixtures. The backpropagation technique is used to train and compare neural networks with one, two, and three hidden layers. The strongest networks, with relatively small errors, are then chosen and utilised to forecast the strength of concrete combinations. Kaveh and Servati (2001) trained effective neural networks for the design of double-layer grids, considering square diagonal-on-diagonal grids with spans ranging from 26.5 to 75 m. The networks were trained using the backpropagation algorithm to evaluate the weight, maximum deflection, and design of double-layer grids. To decrease the nonlinearity of the data and speed up training, a unique method for data sorting was devised; this strategy also offers the required stability. Using the created data ordering, further neural networks are trained and tested for grid design. Marzouk and Amin (2013) formulated a fuzzy logic model to assess the degree of importance of each material type through three main criteria: (1) the percentage of the elements participating in the total cost items; (2) the difference in the calculated price index of the elements during the research period; and (3) the percentage difference in the price of the cost elements. In this research, they also compared Artificial Neural Networks and Regression Analysis; results showed that the Neural Network technique surpasses regression analysis according to the estimated error. Lee et al. (2019) suggested a technique for forecasting raw material prices with the purpose of enabling more accurate predictions.
The prediction approach is a multivariate time series analysis, and the prediction target is the iron ore price, which is the primary driver of steel raw material prices. The accuracy of the prediction results over a specified period was compared with past average values; the results show that the proposed method is 2.3 times more accurate than the previous average values. Faghih and Kashani (2018) introduced a vector error correction (VEC) model for estimating construction material prices in the United States. The association between construction material pricing and a collection of key explanatory factors was studied using this model. The use of VEC models to anticipate construction material prices filled a gap in the literature, which had overlooked the necessity of forecasting both short- and long-term movements of particular construction materials. Kissi et al. (2018) modelled the tender price index (TPI) in Ghana using an autoregressive integrated moving average model with exogenous factors. The results showed that the ARIMAX model outperformed the single method in terms of predictive ability. The study backs up prior research by emphasizing the importance of using an integrated model technique to forecast TPI. Oshodi et al. (2017) studied the accuracy of employing univariate models for tender price index predictions. The modeling tools used in this study were Box-Jenkins and neural networks; in terms of accuracy, the results show that the neural network model outperforms the Box-Jenkins model. Ilbeigi et al. (2017) defined and analyzed the observed fluctuations in actual asphalt and cement prices over time to create time series forecasting models. This study investigated whether and how time series prediction models can forecast future prices with higher accuracy compared to established approaches. Four univariate time series forecasting models, namely Holt Exponential Smoothing (ES), Holt-Winters ES, Autoregressive Integrated Moving Average (ARIMA), and seasonal ARIMA (SARIMA), were generated to study the short-term variation in future prices. The forecast results show that all four time series models can predict prices with better accuracy than current approaches, such as Monte Carlo simulation. The ARIMA and Holt ES models were the most reliable of the four predictive models, with errors of less than 2%. To forecast future values of the Engineering News-Record (ENR) CCI over a 12-month period, Ashuri and Lu (2010) employed an ARIMA model that took seasonality into account. The mean absolute error (MAE), mean square error (MSE) and mean absolute percent error (MAPE) were used to assess forecasting accuracy. Sonmez et al. (2007) employed regression analysis to develop a model that included 14 potential independent components to anticipate cost contingency in international projects. Abu Hammad et al. (2010) used many explanatory parameters, such as project area and duration, to design a probabilistic regression model to predict the cost of public building projects. These models were beneficial for addressing cost escalation issues and preliminary estimating in the early design phase, but they have certain limitations in handling time-varying variables and representing different time lags between influencing elements. In fact, a lot of time-related data is dependent or exhibits autocorrelation. Applying time-related techniques to anticipating trends in material prices is one way to address these restrictions.
Time-series approaches, which predict the future evolution of a variable based on historical values of the variable and other relevant factors, have been used to handle time-related problems in the aforementioned methodologies. Time-series models are used to forecast trends in a systematic and time-related manner, such that, based on historical trends, it is possible to generate useful projections (Wong et al., 2005).

Research objective

To make the research presented in this paper suitable for accurate and updatable material price prediction, an automated forecasting system is developed on the basis of both ARIMA and regression modeling processes using historical Egyptian data. A time series forecasting model identifies relevant traits in the past of a variable and predicts future values using those traits and earlier observations. Regression models account for the fact that price fluctuations are influenced by a variety of independent variables. As a result, this paper aims to show that price projection models can be created that perform well in terms of expectation and produce good estimates of future material price volatility (even when data availability is limited). The data utilized in this analysis is accessible in public databases, and the techniques used are available in several statistical software packages (the analysis is conducted in SPSS and EViews). To make this research practical and implementable for both practitioners and academics, the goals of this study are to: (1) discover and analyze fluctuations in actual material prices; (2) apply this information to develop CMP forecasting models; and (3) evaluate whether the proposed models can estimate future prices more accurately.

To satisfy the research objectives, the remainder of the study is organized as follows. The subsequent section presents the recommended research approach and the steps taken in this study. The material pricing time series data set is introduced, and its main features (autocorrelation and stationarity) are studied. The most important indicators influencing CMP are listed. ARIMA and regression models were constructed based on the stated properties. Each model's predictive ability is assessed and compared.

Research methodology

Accurate forecasting of construction material prices is an essential practice, particularly in developing countries where high price fluctuations can adversely affect the success of projects and even their viability. To avoid this, a system that can predict the size of the change in material prices with acceptable accuracy is required. As a result, a technique is used with univariate time-series (ARIMA) and regression approaches to forecast material prices. Figure 1 presents the process map of the procedures employed in this study. The methods include all important information about the required data, where and how it was obtained, and how a sample was chosen. This method entails four high-level processes, which are briefly outlined below and expanded upon later.

1. Determine the long-term price trend of construction materials, as well as the most relevant indicators influencing price change over time. Conduct the ARIMA models using the historical material price data, and the regression model using the historical data as the dependent variable and the indicators which have a significant relationship with material prices as the independent variables.
2. Validate relative model performance against existing practice: out-of-sample projections are used to assess the accuracy of the price projection models relative to current assumptions. In this step, price forecasts are established prior to the present and compared to what really happened.

3. Compare results.

4. Recommend the best-fitting model for each material type.

Fig. 1. The process map of the procedures.

Data description (input data)

Models were created using publicly available price data from CAPMAS Egypt. The types of materials used in this research are steel, cement, brick, ceramic, and gravel, which represent an important part of all construction work items.

ARIMA model

The Box-Jenkins (ARIMA) model is a relatively advanced time series prediction approach. It is capable of describing dynamic change rules realistically and, under certain conditions, can be utilized for statistical analysis and forecasting of time series. The model is particularly well suited to short-term forecasting; when the prediction time scale is long, large variances occur. There are three stages to conducting the ARIMA model: (1) model identification, (2) parameter estimation, and (3) diagnostic checking. Stationary time-series data are appraised in the model identification step, while non-stationary data are turned into stationary data using normal differencing or logarithmic transformation. Transformed data can be used in the next modeling stage. In prediction form, the ARIMA model can be expressed using Eq. (1):

$y_t = C + \varphi_1 y_{t-1} + \dots + \varphi_p y_{t-p} + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q} + \varepsilon_t$  (1)

where $y_t$ is the (differenced) series value at time t, C is a constant, p is the order of the autoregressive component (AR), q is the order of the moving average component (MA), $\varphi_i$ are the coefficients of the autoregressive model, $\theta_j$ are the coefficients of the moving average model, and $\varepsilon_t$ is the error term. The order of the AR model (p) and the order of the MA model (q) are established by examining the autocorrelation function (ACF) plot and the partial autocorrelation function (PACF) plot to determine candidate ARMA(p, q) models with the given lag numbers, while the order of differencing, I(d), is identified during the model identification stage as the number of differencing operations required to make the data stationary. Fitted ARIMA models are estimated during the parameter estimation stage. For the ARIMA models, EViews software version 10 was utilized; EViews is a software application specifically built to process time-series data. The models were built on the published monthly price data of materials over the last 10 years (January 2011 to December 2020) and 5 years (January 2016 to December 2020). After selecting the appropriate model for each type of material, the model was used to predict price values during the first 6 months of 2021. The smallest values of the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) criteria can be used to choose the best model; most researchers suggest MAPE as the criterion for judgment, and in prediction models a MAPE of up to 10% is commonly considered acceptable (Fan et al., 2010; Hwang et al., 2012). MAPE is calculated using Eq. (2):

$\mathrm{MAPE} = \frac{1}{n} \sum_{t=1}^{n} \left| \frac{Y_t - f_t}{Y_t} \right| \times 100\%$  (2)

where $Y_t$ is the actual value at any specified time, $f_t$ is the forecasted value at any specified time, and n is the number of forecasts.
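To make the ARIMA step concrete, the following is a minimal sketch of fitting an ARIMA model to a material price series and producing the 6-month out-of-sample forecast, with MAPE computed as in Eq. (2). The paper's own models were built in EViews; here Python's statsmodels is used, the monthly series is synthetic, and the (1, 1, 1) order is only a placeholder for the per-material orders chosen later via ACF/PACF and AIC/SC.

```python
# Illustrative sketch only: a synthetic monthly price series stands in for the CAPMAS data,
# and the ARIMA order (1, 1, 1) is a placeholder, not one of the paper's selected models.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def mape(actual, forecast):
    """Mean absolute percentage error, Eq. (2)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

# Hypothetical monthly prices, January 2011 to June 2021 (126 observations).
dates = pd.date_range("2011-01-01", periods=126, freq="MS")
prices = pd.Series(10_000 + np.cumsum(np.random.normal(50, 120, 126)), index=dates)

train = prices[:"2020-12"]            # estimation sample
test = prices["2021-01":"2021-06"]    # 6-month out-of-sample period

model = ARIMA(np.log(train), order=(1, 1, 1)).fit()   # fit on the log series
forecast = np.exp(model.forecast(steps=6))             # back-transform to price level

print("6-month forecast:", [round(v) for v in forecast])
print("MAPE = %.2f%%" % mape(test, forecast))
```

In practice, the same loop would be repeated for each material and for the 10-year and 5-year estimation windows.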
Multiple regression models

Multiple Linear Regression (MLR) is a linear statistical strategy for investigating the relationships between a dependent variable and two or more independent variables. When the focus is on the link between a dependent variable and one or more independent variables, it encompasses numerous approaches for modeling and evaluating multiple variables. Regression analysis can help one understand how the typical value of the dependent variable (or 'criterion variable') varies when one of the independent variables is changed while the other independent variables remain constant. As a result, it provides a strong basis for predicting price changes. For the regression model, IBM Corporation's SPSS statistical program version 25 was used to analyze the data. A set of nineteen indicators that can potentially influence CMP was identified through an extensive literature review. The information collected was split into two categories: independent input variables and dependent output variables. When various inputs are used to predict an output, the primary assumption is that these inputs are independent variables that predict the output dependent variable. Raw prices are considered as the dependent variable, and the indicators affecting construction material prices are used as independent variables. For this study, the indicators used have publicly published data on one of the official websites indicated in Table 1. (Table 1 lists the nineteen indicators, e.g., the Producer Price Index, together with their literature sources, such as Ashuri et al., and official data sources, such as Trading Economics and The Global Economy.) If y is the dependent variable and $X_1, \dots, X_n$ are the independent variables, the multiple regression model predicts y from X in the following manner:

$Y = C + b_1 X_1 + b_2 X_2 + \dots + b_n X_n$  (3)

where Y denotes the output of the dependent variable, C denotes the constant, b denotes the regression coefficients, and X denotes the input of the independent variables. The following assumptions govern the multiple regression model:

1. Linearity: the dependent variable y is a linear combination of the independent variables $X_1, \dots, X_n$.
2. Independence: observations are chosen from the population independently and randomly.
3. Normality: observations are normally distributed.
4. Variance homogeneity: all observations have the same variance.

Regression models were created for each type of material. Each model was then used to predict the future values of prices using the out-of-sample method; the prediction period is the first six months of the year 2021. Results of the prediction process are then compared with the actual values. The error rate was calculated for each value separately, and then the MAPE was calculated for each model following Eq. (2).

Results of regression modeling

The aim of the research using multiple regression models is to find out whether it is possible to describe the relationship between prices and the influencing indicators through a set of equations. Interpretation of the results includes the issues of (1) analyzing the data, (2) estimating the model, i.e., fitting the line, and (3) evaluating the validity and efficiency of the model. SPSS software was used to analyze the data. The actual price is the dependent variable (Y) in the regression analysis. The independent variables (X) that have been assigned are shown in Table 1.

Analyzing the data

This study aims to create prediction models using only significant indicators, i.e., indicators with strong t-statistics and a significance value of less than 0.05 are used in the prediction process. As a result, the final model may not include all of the selected indicators. These tests of significance are useful for determining whether each explanatory variable is required in the model, assuming that the others are already present.
As a result, the "p-value" column in Table 2 represents the significance level. In the case of steel, as an example, the indicators inflation rate, GDP-construction, GDP, revenue, expenditure, industrial production, import, export, external reserve, and balance of payment have p-values of (0.058, 0.635, 0.983, 0.983, 0.313, 0.52, 0.322, 0.444, 0.801, 0.983) > 0.05, respectively. The test tells us that these indicators are not significant for the modeling process, while the other indicators, which have p-values (0.00) < 0.05, add a significant contribution to explaining the change in steel prices, as indicated in Table 2.

Estimated model coefficients

General forms of the equations for predicting material prices for the aforementioned types are obtained from Table 2. When all other independent variables are held constant, coefficients show how much the dependent variable varies with an independent variable. The regression coefficient gives the prospective change in the dependent variable for an increase of one unit in the independent variable.

Determine the suitability of the models

The models' summaries are indicated in Table 3. This table provides the values of R, R-squared (R²), and adjusted R² for the estimate, which can be used to determine the appropriateness of the regression models for the data. The value of R, the multiple correlation coefficient, is represented in the "R" column. R can be thought of as a metric for the accuracy of the dependent variable's prediction; for the steel model, a value of 0.995 implies a good level of predictability. As displayed in the "R Square" column, the R² value (also known as the coefficient of determination) indicates the proportion of variance in the dependent variable that can be explained by the independent variables. Our steel model's result of 0.99 shows that the independent variables account for 99 percent of the variability in the dependent variable. R-squared appears to be a simple statistic that measures how well a regression model fits a set of data; however, it does not tell the whole story on its own. The R² value must be considered together with residual plots, other statistics, and an in-depth understanding of the topic area to get the entire picture. Another key issue is to appropriately interpret the adjusted R-squared (adj. R²). In this example, a result of 0.99 (coefficients table) shows that the predictors that should be kept in the model explain 99 percent of the variance in the outcome variable. A large difference between the R-squared and adjusted R-squared values suggests a poor model fit. Any superfluous variable introduced into a model reduces adjusted R-squared; adjusted R-squared, on the other hand, rises when more beneficial variables are included. Adjusted R² will always be less than or equal to R²; as a result, adjusted R² compensates for the number of terms in a model. The histogram of residuals for the constructed model of steel, as an example, is shown in Fig. 2a. The histogram displays a plot of the regression standardized residuals versus the regression standardized predicted values, demonstrating that the residuals are normally distributed. The points on the plot are roughly randomly distributed, indicating that the assumption of homoscedasticity, or equality of variances, has been met.
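As a rough counterpart to the SPSS workflow just described, the sketch below screens indicators by their p-values and refits the model on the significant ones, reporting R² and adjusted R². The indicator names and values are entirely synthetic stand-ins for Table 1, so the selected set and the 0.99 figures of the steel model will not be reproduced.

```python
# Illustrative sketch only: synthetic indicators, not the Table 1 data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # hypothetical number of monthly observations

X = pd.DataFrame({                          # hypothetical indicator series
    "exchange_rate": rng.normal(16, 1, n),
    "oil_price": rng.normal(60, 8, n),
    "inflation_rate": rng.normal(10, 2, n),
    "producer_price_index": rng.normal(110, 5, n),
})
# Hypothetical steel price driven by two of the indicators plus noise.
y = 500 * X["exchange_rate"] + 30 * X["oil_price"] + rng.normal(0, 300, n)

full = sm.OLS(y, sm.add_constant(X)).fit()

# Keep only indicators with p-value < 0.05, then refit the reduced model.
pvals = full.pvalues.drop("const")
keep = pvals[pvals < 0.05].index.tolist()
final = sm.OLS(y, sm.add_constant(X[keep])).fit()

print("significant indicators:", keep)
print("R^2 = %.3f, adjusted R^2 = %.3f" % (final.rsquared, final.rsquared_adj))
print(final.params.round(2))    # constant and regression coefficients of Eq. (3)
```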
Stationary test

General ARIMA modeling and prediction methods are outlined in this section; this procedure is depicted in Fig. 1. It is worth noting that this is not a straightforward sequential procedure: it can contain repetitive loops based on the results of the diagnostic and forecasting stages. Since the ARIMA model is used to examine stationary time-series data, the data must first be determined to be stationary in terms of mean and variance. The steel, cement, brick, gravel, and ceramic historical price data are plotted in Fig. 4. The results show that, for all types of material used, the data was non-stationary on first inspection. After taking the natural logarithm of each material type's data to reduce its non-stationarity and applying the augmented Dickey-Fuller (ADF) test to the logarithm, it was found that the test statistic was still greater than the critical values at the 0.01 and 0.05 significance levels. Therefore, the first-order difference is performed, and the DLsteel, DLcement, DLgravel, DLceramic, and DLbrick sequences are obtained. After taking the logarithm and the first difference for the above-mentioned types of materials, the ADF statistic became smaller than the critical value; that is to say, the series became stationary and the significance test for stationarity was passed, as shown in Fig. 3.

Model identification

The next step is to develop a suitable ARMA form to model the stationary series after determining the correct order of differencing required to make the series stationary. The Box-Jenkins procedure is used in the classic method, which involves an iterative process of model identification, model estimation, and model evaluation. The Box-Jenkins process is a semi-formal approach that relies on the subjective evaluation of plots of auto-correlograms and partial auto-correlograms of the series to identify models. Plotting the auto-correlogram of a time series is another technique to investigate its characteristics. The auto-correlogram shows the autocorrelation of the time series at different lag lengths, and it must be plotted before the Box-Jenkins model can be identified. A Box-Jenkins technique includes evaluating plots of the sample auto-correlogram, partial auto-correlogram, and inverse auto-correlogram and inferring the correct type of ARMA model from patterns detected in these functions. This section outlines the theoretical auto-correlogram for various orders of AR, MA, and ARMA models. EViews software was used to compute the autocorrelation function (ACF) and partial autocorrelation function (PACF) for all aforementioned material types. Figure 4 shows the ACF and PACF for steel and cement as an example of the model identification process.

Identify the most significant model

Time series analysts have sought alternative objective approaches for finding ARMA models due to the highly subjective nature of the Box-Jenkins methodology. The Akaike Information Criterion [AIC] or Final Prediction Error [FPE] Criterion (Akaike, 1974) and the Schwarz Criterion [SC] or Bayesian Information Criterion [BIC] are examples of the identification criteria time series analysts have used to resolve the need to minimize errors. For this study, eight models were built for each type of material, and the best model was then chosen based on the value of adjusted R-squared, the Akaike info criterion (AIC) value, and the Schwarz criterion (SC) value. The lowest AIC and SC values, on their own, are insufficient requirements for the best ARMA model.
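A sketch of this identification loop, stationarity check via the ADF test, log and first-difference transformations, ACF/PACF inspection, and ranking of a small (p, q) grid by AIC and SC (BIC), is given below; the series is synthetic and the grid of candidate orders is an arbitrary illustration rather than the eight models estimated per material.

```python
# Illustrative sketch only: a synthetic non-stationary price series stands in for the data.
import warnings
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, acf, pacf
from statsmodels.tsa.arima.model import ARIMA

warnings.filterwarnings("ignore")   # silence convergence warnings during the grid search
rng = np.random.default_rng(1)
prices = pd.Series(np.exp(np.cumsum(rng.normal(0.01, 0.03, 120))) * 10_000)

# Step 1: stationarity check (ADF) on the log series and its first difference.
log_p = np.log(prices)
d_log_p = log_p.diff().dropna()
for name, series in [("log prices", log_p), ("differenced log prices", d_log_p)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")

# Step 2: ACF/PACF of the stationary series to shortlist candidate (p, q) orders.
print("ACF :", np.round(acf(d_log_p, nlags=6), 2))
print("PACF:", np.round(pacf(d_log_p, nlags=6), 2))

# Step 3: rank a small grid of ARIMA(p, 1, q) candidates by AIC and SC (BIC).
candidates = []
for p in range(3):
    for q in range(3):
        fit = ARIMA(log_p, order=(p, 1, q)).fit()
        candidates.append((p, q, fit.aic, fit.bic))
for p, q, aic, bic in sorted(candidates, key=lambda c: c[2]):
    print(f"ARIMA({p},1,{q}): AIC = {aic:.1f}, SC = {bic:.1f}")
```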
The procedure followed in this study was to first create a model with the lowest Root Mean Square Error (RMSE), AIC and SC values, and then execute a parameter significance test and a residual randomness test on the estimation result. If the model passes the tests, it can be considered the best model. If it fails the tests, the model with the second-lowest AIC and SC values is chosen and the appropriate statistical tests are run, and so on, until the best model has been picked. Table 4 shows the most significant model chosen for each type of material. The criteria used for the judgment take the following form:

$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(y_t - \hat{y}_t\right)^2}$, $\quad R^2_{adj} = 1 - \frac{\left(1 - R^2\right)\left(N - 1\right)}{N - k - 1}$ with $R^2 = 1 - \frac{\sum_{t}\left(y_t - \hat{y}_t\right)^2}{\sum_{t}\left(y_t - \bar{y}_t\right)^2}$, $\quad \mathrm{AIC} = N\,\ln\!\left(\frac{\sum_{t}\left(y_t - \hat{y}_t\right)^2}{N}\right) + 2k$, $\quad \mathrm{SC} = N\,\ln\!\left(\frac{\sum_{t}\left(y_t - \hat{y}_t\right)^2}{N}\right) + k\,\ln(N)$

where N is the total number of data points, $y_t$ is the actual material price, $\hat{y}_t$ is the forecasted material price, $\bar{y}_t$ is the mean of the actual material prices, and k is the total number of estimated parameters.

Model diagnostic

The formal evaluation of each of the time series models is the next stage. This entails a thorough examination of each model's diagnostic tests. A variety of diagnostic techniques are available to ensure that an acceptable model is created. A useful diagnostic check is plotting the estimated model's residuals; this should highlight any outliers that may have an impact on parameter estimates, as well as any potential autocorrelation or heteroscedasticity issues. Plotting the auto-correlogram of the residuals provides the second test of model appropriateness: the residuals should be 'white noise' if the model is appropriately specified. As a result, a plot of the auto-correlogram should die out after one lag (Fig. 5).

Comparison of ARIMA and regression prediction models

To validate the proposed time series models, the predictive accuracy of the Box-Jenkins (ARIMA) models was compared to that of the structural multiple regression models. The actual material price series were used as a basis. The validity of each model was tested using the actual and predicted values for the six-month out-of-sample period from January 2021 to June 2021. We found that the difference in predictive accuracy between the regression models and the ARIMA models is not very large, as shown in Table 5 and Fig. 6. Given the small forecast error of both models, it may be stated that both models performed well in terms of prediction. However, on the test data, the ARIMA model outperforms the regression model in terms of forecasting accuracy, as shown in Fig. 6. This finding demonstrates that in the case of material prices, which form time-series data, time-series models are able to predict well. The recommended model for the prediction of each type of material is given in Table 5, according to the value of the mean absolute percentage error (MAPE), denoted by *.
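Once both models have produced forecasts for the held-out months, the comparison reduces to Eq. (2), and the residual 'white noise' requirement can be checked with a Ljung-Box test. The sketch below uses made-up actuals, forecasts and residuals purely to show the mechanics; the numbers bear no relation to Table 5.

```python
# Illustrative sketch only: placeholder values for January-June 2021, not the paper's results.
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

actual = np.array([14200, 14350, 14600, 14800, 14750, 15000])       # hypothetical prices
forecasts = {
    "ARIMA": np.array([14150, 14400, 14550, 14900, 14700, 15100]),
    "Regression": np.array([14000, 14200, 14900, 15100, 14500, 15300]),
}

def mape(y, f):
    return np.mean(np.abs((y - f) / y)) * 100                        # Eq. (2)

errors = {name: mape(actual, f) for name, f in forecasts.items()}
for name, err in errors.items():
    print(f"{name}: MAPE = {err:.2f}%")
print("Recommended model:", min(errors, key=errors.get))

# Residual randomness check (here on dummy residuals): Ljung-Box p-values above 0.05
# suggest the residuals are indistinguishable from white noise.
residuals = np.random.default_rng(2).normal(0, 1, 120)
print(acorr_ljungbox(residuals, lags=[6, 12]))
```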
Conclusion

The difficulty construction project partners face in precisely estimating future material prices in the market, especially in the face of economic volatility, is a common challenge. It can often prevent developers from investing, reduce contractor profitability, and cause owners to delay payments. The techniques proposed in this research will help construction contractors and owners accurately estimate material prices. These prediction models take advantage of ARIMA's predictive power, by learning from historical trends, and of the power of regression models, which take the affecting indicators into consideration. Although much research has presented numerous prediction models, the value of this study is the ability to forecast price fluctuations even in economically unstable circumstances, which other approaches would not have been able to capture. In the context of the Egyptian construction sector, this research proposes 6-month price prediction models for steel reinforcing bars, Portland cement, brick, ceramic, and gravel. As a consequence, relevant Egyptian indicators were collected and associated with material prices during the study period, which ran from November 2018 to January 2020. ARIMA models were built based on the previous historical data of each type of material using the available data of CAPMAS Egypt. Regression models were built using the historical price data as dependent variables and the most important quantifiable indicators as independent variables. The mean absolute percentage error (MAPE) of each generated model's predictions was used to evaluate it. The results of this study indicate that construction material prices form time-series data; therefore, the ARIMA models outperformed in predicting future prices of materials, with a very small error rate.

Limitation

This study has empirically identified 19 indicators affecting CMP that have been used in the prediction process. These indicators were carefully chosen to reflect the relationship between the correlated variables. Other researchers may choose other indicators in the prediction process according to the available data. Further investigations into the principal indicators of CMP can be carried out to improve estimating accuracy and efficiency.

Author contributions: All authors reviewed the manuscript.

Funding: No specific grant from funding organizations in any area was given to this research.

Conflict of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article. There are no declared competing interests of the authors that are pertinent to the subject matter of this study.
'Entanglement' -- A new dynamic metric to measure team flow

We introduce "entanglement", a novel metric to measure how synchronized communication between team members is. This measure calculates the Euclidean distance among the time series of team members' social network metrics. We validate the metric with four case studies. The first case study uses the entanglement of 11 medical innovation teams to predict team performance and learning behavior. The second case looks at the e-mail communication of 113 senior executives of an international services firm, predicting employee turnover through an employee's lack of entanglement. The third case analyzes the individual employee performance of 81 managers. The fourth case study predicts the performance of 13 customer-dedicated teams at a big international company by comparing entanglement in their e-mail interactions with the satisfaction of their customers, measured through the Net Promoter Score (NPS). While we can only speculate about what is causing the entanglement effect, we find that it is a new and versatile indicator for the analysis of employees' communication, analyzing the hitherto underused temporal dimension of online social networks, which could be used as a powerful predictor of employee and team performance, employee turnover, and customer satisfaction.

Introduction

Albert Einstein called quantum entanglement "spooky action at a distance" (Einstein et al., 1935), predicting that quantum mechanics should allow objects to influence each other's actions at great distance. It took other Nobel prize-winning physicists decades after Einstein's death to confirm his prediction. In this paper we propose a similar social entanglement effect between people. Note that we are not making any conclusive claim about the cause of this social entanglement effect; we just find that it seems to exist, and posit that there seem to be useful parallels between quantum entanglement and social entanglement that assist in the conceptualization of the latter. "You share everything with your bestie. Even brain waves." (Angier, 2018). This is how the New York Times summarized the work of Parkinson, Kleinbaum and Wheatley (2018), who found that brain scans of close friends show similar patterns as they watch a series of short videos. Using these results, the researchers trained a computer algorithm to predict the strength of a social bond between two people based on the relative similarity or synchronization of their neural response patterns. Such neural synchronization patterns are also observed in various other studies in different contexts, e.g., to determine neural contingencies between musical performers and their audiences. Hou et al. (2020) assess the neural synchronization between violinist and audience and its relation to the popularity of the violin performance. Their findings suggest that neural synchronization between the audience and the performer might serve as an underlying mechanism for the positive reception of musical performance. Further, neural synchronization can be confirmed by analyzing verbal group communication (Liu et al., 2019). Individuals try to achieve neural and body synchronization in order to facilitate fluid interaction (Fairhurst et al., 2013; Yun et al., 2012). Experiments show that synchrony of fingertip movement and neural activity between two persons increases after cooperative interaction (Yun et al., 2012).
Hence, engaging individuals in synchronized activities like walking or dancing is an effective way of increasing subsequent cooperation between those individuals. However, the studies mentioned above focus on neural or body synchronization and are not applied in typical work environments or contexts. Yet "being in sync" or "in flow" in work environments is a relevant research topic and should be considered by decision-makers to determine the impact of such behavior on employee performance. Being in sync with others can increase cooperation by strengthening social attachment among team members (Wiltermuth and Heath, 2009); thus, it might also affect team productivity and team performance positively. Offline and online communication plays an important role in distinguishing between teams that are in sync or "out of sync". Whereas offline communication such as face-to-face meetings establishes team synchronization easily (Maznevski and Chudoba, 2000), online communication such as e-mail and chat tools might diminish team synchronization (Hinds and Bailey, 2003). The asynchronous character of online communication, for instance caused by time lags (Cramton, 2001), may hinder developing a shared team rhythm (Hinds et al., 2015; Hinds and Bailey, 2003). However, there exist opportunities to analyze online communication data in near-real time for continuous monitoring of team learning and performance. Metrics based on communication flow from person to person or on the amount of communication are suitable for real-time processing. In addition, studies have shown that online communication data in organizational contexts (de Oliveira et al., 2019; Gloor et al., 2017b) can be used to predict job-related constructs, such as employee turnover or employee performance. The speed of responding to an e-mail, for example, is a good predictor of individual and team performance (Gloor et al., 2020). It might be a proxy for the passion of the person who is responding to an e-mail (Gloor, 2017), or for other external reasons such as urgency, power differentials, etc.

Based on these behavioral and neuroscientific insights and findings on the relationship between interpersonal synchronization and communication, we hypothesize that being in sync can also be shown by analyzing patterns of team online communication gathered through a social network analysis (SNA) approach. Hence, our research questions are: (1) Are time series of communication patterns from online communication valid indicators for analyzing the synchronization of a team and its flow? (2) Is a measure of team flow capable of predicting job-related outcomes such as job performance or employee turnover? We answer these questions by introducing a metric called entanglement, which measures the synchronization of the e-mail communication behaviors of team members and their flow state over time. This metric is grounded in SNA and identifies the similarity of time series of SNA metrics. We validate the metric by conducting four case studies with different datasets from different organizations. Each case study is in a different context, and variants of the entanglement measure are used as predictors of different individual and group performance indicators.

The rest of the paper is organized as follows. In Section 2 we present the theoretical background of flow state and team synchronization. Subsequently, we illustrate the idea of our entanglement metrics, which aim to capture how much people interact in the same rhythm or are "in sync". We finalize this section with the metric's formalization.
In Section 3, we explain the data collection and applied methods. Then, in Section 4, we introduce four case studies in which we demonstrate the predictive power of the proposed entanglement metrics. In the last section, we discuss the results and advocate future research.

2 Theoretical background

2.1 Team synchronization and flow state

Synchronization is a fundamental element of life. Besides the neuronal synchronization mentioned in the introduction, one finds studies that deal with the synchronization of human activities (Guastello and Peressini, 2017). Synchronization is often defined as the manifestation of unintended coordination. It is part of the natural behavior of a human being and takes place so invisibly that we usually do not notice it. It is triggered by audio-visual stimuli, haptic perception or simply by the presence of certain people. Synchronization can be analyzed as neuromuscular coordination, where there is a relatively exact or proportional tracking of body, hand and head movements, autonomic arousal, or electroencephalogram (EEG) readings between two or more people (Guastello and Peressini, 2017). For example, Néda et al. (2000) show that the audience of a concert synchronizes its applause after an asynchronous start, and Fairhurst et al. (2013) and Yun et al. (2012) show that people synchronize their finger tapping to improve coordination. While these studies only look at synchronization as neuromuscular coordination and task coordination, research efforts are currently underway to uncover connections between synchronization in cognition, task structures, and performance outcomes in teams (Gipson et al., 2016). Better work performance outcomes would also be expected when teams are similarly synchronized (Elkins et al., 2009; Stevens et al., 2013). The hypothesis that team synchronization leads to better performance is further motivated by the theory of flow state. While the concept of synchronization in the above-mentioned studies applies a natural science perspective, human sciences like positive psychology consider synchronization a part of flow state (Gloor et al., 2012) and expect flow state to cause better performance. A team is in flow state (Csikszentmihalyi, 1996) when members create a sense of shared confidence and empathy, which culminates in a collective mental state in which individual intentions harmonize and are in sync with those of the other members of the group. This condition is also referred to as achieving a "group mind", which is marked by a deep emotional resonance that enables, e.g., jazz musicians to be completely coordinated throughout the improvisational flow. In other words, group flow manifests itself in physical and verbal activities, for instance people mirroring each other and quickly finishing each other's sentences using the same words and phrases, indicating a "parallel synchronization of thought" (Armstrong, 2008). The more the team members are in sync, the more likely it is to observe group flow. Group flow can be analyzed applying "interaction analysis", which entails closely observing and categorizing the interactions, movements, and body language of group members; however, it cannot be reduced to neurological studies of particular participants, or to the group's emotional conditions or subjective memories (Sawyer, 2003).
Thus, group flow cannot be broken down into specific tasks; rather, it is a process that arises from group dynamics and has the ability to improve job satisfaction, intrinsic motivation, vigor, performance or efficiency (Delarue et al., 2008; Sawyer, 2003; van den Hout et al., 2018). Hence, flow represents an oscillating dynamic state that combines continuous and sudden changes across time (Ceja and Navarro, 2012) rather than a static one. The flow concept can be transferred into the organizational context (Heyne et al., 2011). Bakker (2005) defines work-related flow as a short-term peak experience at work that is characterized by absorption, work enjoyment and interest. Teams "are in flow" if there is a certain balance between the challenges and the skill sets of the individual team members. Work-related flow leads to better productivity and performance (see Figure 1). Further, by the definition of flow by Csikszentmihalyi (1996), high flow leads to high performance; if a team is collectively in flow, it will therefore deliver high performance. In general, flow is likely to correlate positively with measurable results (Quinn, 2005). Quinn (2005, p. 611) emphasizes that "[i]n knowledge work […] flow may be a useful concept for understanding performance." Studies of flow proceed from a broader awareness that team processes like communication need to be studied as events over time (Arrow et al., 2004).

2.2 Entanglement conceptualization and formalization

The idea of the entanglement measure is to determine how much a person is in sync with his/her group and shares the same flow with the other team members, with regard to communication over a period of time. In an attempt to conceptualize entanglement, a multidisciplinary approach is proposed, bringing together concepts from several disciplines, ranging from quantum mechanics to the human and social sciences. The term entanglement is borrowed from quantum physics, where a pair or group of particles which are "entangled" mysteriously change their quantum state at the same time, even when the particles are physically far apart at different locations in the world (Horodecki et al., 2009). A result of this phenomenon is that when one measures the quantum state of one particle, one simultaneously determines the quantum state of the other particle. A quantum state (of a particle) is a representation of knowledge or information about an aspect of the system or reality (Pusey et al., 2012). In this study, we interpret the reality as the state of a person-to-person relationship. Thus, the two particles are seen as two individuals that have potentially interacted with "others", not necessarily with each other, and have therefore become entangled. Our idea of synchronicity is that people are in sync when they show similar behavioral patterns, such as communication activity. Hence, two persons are entangled even when they are physically separated or not involved in a (local) interaction with each other, but share a similar communication behavior (an example is provided in Figure 2).

Figure 2. Communication intensity of three persons over time.

Similar concepts have previously been described in psychology and sociology. "Entrainment" describes a process where one system's motion or oscillation frequency synchronizes with another system, for instance the brainwaves of two people rocking together in their chairs.
Cross et al. (2019) define interpersonal entrainment as the synchronization of organisms to a rhythm, for example singing, dancing, or even walking together. Much earlier, the early twentieth-century French sociologist Emile Durkheim defined collective effervescence as the similar but broader notion of synchronized action between humans (Durkheim, 2008), describing when a community or society comes together to communicate the same thought or participate in the same action. This concept has been picked up by sociologist Randall Collins through his construct of "Interaction Ritual Chains" (Collins, 2005), which explains collective action through shared emotional energy. The common theme of all these constructs is co-location: people creating and experiencing emotional energy by being together at the same location. We therefore prefer the term "entanglement" to describe synchronous action between humans independent of where they are located, and to describe, in the words of Albert Einstein, "spooky action at a distance". Human communication is fundamentally synchronous and rhythmic, two important characteristics of individual and interactional behavior (Condon, 1986). The synchronization of interactional behaviors helps to generate a sense of flow state for the persons involved (Condon, 1986). Further, it always takes other people for a person to reach the state of flow (Collins, 2005), while the other people do not have to be physically present. Thus, entanglement leads to a flow state of two persons, analogous to the "mysterious change" of a particle's quantum state. Intuitively, we propose that the more similar the communication of two persons A and B is, the more person A is in sync and is able to share the same flow of communication with person B over a period of time. Individuals that are in flow might have higher abilities to productively channel their cooperative spirit when working together. In Figure 2, for example, the communication intensity of two of the persons increases from t2 to t3 by the same amount; further, their lines in the chart are very close together, meaning the distance between each of their data points is short. We observe the same pattern for person A and person B in time periods t3 and t4. Such patterns might indicate synchronization. Thus, we can state that the distance between the data points representing the communication intensity of two or more persons in a specific time window is an indicator of their synchronization. Here, we use the Euclidean distance, the straight-line distance between two points in Euclidean space. We calculate the Euclidean distance d of the data points x and y of a communication metric for two actors over the same time window T as:

$d(x, y) = \sqrt{\sum_{t \in T} \left(x_t - y_t\right)^2}$

The Euclidean distance specified in the formula above is calculated for every pair of nodes and time window T.
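A minimal sketch of this distance calculation, assuming two hypothetical per-interval message counts for actors A and B and an arbitrary window length of four intervals:

```python
# Illustrative sketch only: hypothetical message counts per time interval for two actors.
import numpy as np

activity_a = np.array([5, 7, 6, 9, 12, 11, 8, 6], dtype=float)  # messages sent by A
activity_b = np.array([4, 7, 5, 9, 13, 10, 8, 5], dtype=float)  # messages sent by B

def windowed_euclidean_distance(x, y, window=4):
    """Euclidean distance between two activity series, per sliding time window."""
    return [float(np.sqrt(np.sum((x[s:s + window] - y[s:s + window]) ** 2)))
            for s in range(len(x) - window + 1)]

print(windowed_euclidean_distance(activity_a, activity_b))
# Small distances indicate that the two actors change their communication intensity in sync.
```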
To better illustrate the concept of "entanglement" we consider an e-mail network, characterized as a graph made of a set of nodes (e-mail accounts) and a set of directed edges (weighted by the number of e-mails) connecting these nodes. The direction of an edge specifies the source (e-mail sender) and target (e-mail receiver) node; the weight of an edge shows the relation intensity (number of e-mails) between two nodes (see Figure 3).
Figure 3. Graph representing an e-mail communication network
For example, if person A sends 3 e-mails to person B, we see an arc originating at node A and terminating at node B of weight equal to 3. To illustrate the idea and calculation of entanglement with an example, we use an individual mailbox representing a dataset of e-mails of persons that work together on several projects. First, we collected the mailbox and stored it in a database, where the e-mail data was structured from a network perspective. In order to calculate the entanglement of the mailbox owner and his/her colleagues, we take the inverse of the Euclidean distance of the time series of the communication activity, represented by messages sent over time, for each node/actor in the network. This value gets larger the more similar the activity time series of two actors are. However, we have to distinguish between two pairs of actors at different locations in the network: one pair embedded into a tight cluster communicating with many other actors, while the other pair is exchanging the same number of e-mails as the first pair but is only weakly connected to other actors. To make this metric comparable among pairs of actors with different levels of activity in the same network, we multiply it by the product of the degree centralities of both actors. Degree measures the centrality, sometimes seen as a proxy of popularity, of a node in a network, by counting the number of its nearest neighbors (Freeman, 1978). Further, it can be a proxy for the level of engagement within a group, team or organization (Gloor et al., 2020). Communication activity via e-mail (Gloor et al., 2014) indicates the number of e-mail messages sent by a person within a time interval. We therefore include the degree centralities of x and y in the entanglement formula to account for the differences in centralities between actors, so that activity entanglement is the product of the degree centralities of x and y divided by the Euclidean distance of their activity time series. Assume that actor x has low degree; if x is synchronized with a highly connected actor y having high degree centrality, the high degree of actor y will boost the entanglement of actor x in comparison with all other actors in the network. In other words, we want our metric to reward less influential actors that are synchronized with influential actors. Similarly, we could consider not just communication activity, but also individuals' synchronization in weighted and unweighted betweenness centrality. Betweenness is a well-known metric in social network analysis. It is the sum of the fraction of all-pairs shortest paths that pass through a node (Freeman, 1977):
c_B(v) = Σ_{s,t ∈ V} σ(s, t | v) / σ(s, t)
where V is the set of nodes, σ(s, t) is the number of shortest paths from s to t, and σ(s, t | v) is the number of those paths passing through node v (Brandes, 2001). Inverse arc weights are considered for the determination of node distances. To control for network size, the above index is usually normalized between zero and one. If the betweenness centrality time series of two individuals are in sync, it means that they share similar network positions, and levels of influence, at the same time.
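A minimal sketch of this activity entanglement for a pair of actors, with networkx supplying the degree centralities; the toy graph, the activity series, and the small epsilon guard against division by zero are illustrative assumptions, not part of the published definition.

```python
import numpy as np
import networkx as nx

def activity_entanglement(G, activity, x, y):
    """Activity entanglement of actors x and y in an e-mail graph G.

    activity: dict mapping actor -> list of messages sent per time window.
    Entanglement = degree_centrality(x) * degree_centrality(y)
                   / Euclidean distance of the two activity time series.
    """
    deg = nx.degree_centrality(G)
    a = np.asarray(activity[x], dtype=float)
    b = np.asarray(activity[y], dtype=float)
    dist = np.sqrt(np.sum((a - b) ** 2))
    eps = 1e-9  # guard against identical series (zero distance)
    return deg[x] * deg[y] / (dist + eps)

# Hypothetical toy network: directed edges weighted by number of e-mails
G = nx.DiGraph()
G.add_weighted_edges_from([("A", "B", 3), ("B", "A", 2),
                           ("A", "C", 1), ("C", "B", 4)])
activity = {"A": [5, 8, 11], "B": [6, 8, 10], "C": [1, 9, 2]}

print(activity_entanglement(G, activity, "A", "B"))  # similar series -> high
print(activity_entanglement(G, activity, "A", "C"))  # dissimilar series -> low
```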
Individual betweenness entanglement is the product of the degrees of two individuals divided by the Euclidean distance of their betweenness centrality time series over a period of time. In addition, we speculate on the possibility of evaluating how much an individual is in sync with the aggregated flow of the entire network. As a proxy of the aggregated rhythm of the team we take Freeman's group betweenness centralization C_B (Freeman, 1978). Group betweenness centralization is the sum of the differences between the betweenness centrality of the most central node, c_B(n*), and that of all other nodes in the network (Freeman, 1978;Wasserman and Faust, 1994), normalized by its maximum value, which is (n − 1)²(n − 2), where n is the total number of nodes:
C_B = Σ_{i=1..n} [c_B(n*) − c_B(n_i)] / ((n − 1)²(n − 2))
This definition of group betweenness centralization is appropriate for this use case, as we compare how entangled an individual node is with all other nodes with regard to betweenness. Figure 5 gives an intuitive motivation for the usefulness of group betweenness entanglement. It shows a group of six actors at three points in time of a changing network structure. Actor A is very much "entangled" with the overall group: in t1 and t3, when the group betweenness centralization C_B is low, his/her individual betweenness centrality c_B(A) is low also; in t2, when the group betweenness centralization is high, his/her c_B(A) is high too, leading to a low Euclidean distance of his/her c_B(A) to C_B, and resulting in high entanglement. In contrast, actor B is weakly "entangled" with the group: in t1 and t3, when C_B is low, his/her betweenness centrality c_B(B) is high; in t2, when C_B is high, his/her c_B(B) is low. This leads to a high Euclidean distance to C_B, and thus to low entanglement. Formally, we measure group betweenness entanglement by dividing group betweenness centralization by the Euclidean distance between group betweenness centralization and the normalized betweenness centrality of the actor being analyzed over a time period. C_B as a metric of variation is an indicator for the centralization of the group in time window T; the individual betweenness centrality c_B(x) in this sense is an influence on C_B, i.e., it captures how much an actor impacts C_B. Intuitively this metric reflects the contribution of this actor to the level of centralization of its group. In other words, it measures how far away the normalized betweenness centrality of an actor is from the betweenness centralization of its group at any point in time. If an actor's betweenness is high and its group betweenness centralization is high, the actor is probably responsible for the centralized network structure; thus the Euclidean distance between group betweenness centralization and the actor's betweenness centrality is small, and therefore the actor's group betweenness entanglement is high. On the other hand, if an actor's betweenness is low and its group betweenness centralization is high, it means somebody else is central and the actor is unimportant in betweenness centrality terms, and thus less entangled with the group. We look at this across groups (frequently analyzing advice networks in work settings) and over time. Accordingly, we define the group betweenness entanglement E_gb(x) of x as:
E_gb(x) = C_B / d_T(C_B, c_B(x))
where d_T is the Euclidean distance between the time series of C_B and of the normalized betweenness centrality c_B(x) over the time period T. To show the inequality in individual group betweenness entanglement we calculate the Gini coefficient for E_gb:
G = Σ_{i=1..n} Σ_{j=1..n} |E_gb(i) − E_gb(j)| / (2 n² Ē_gb)
where Ē_gb is the mean group betweenness entanglement over all n actors. The same formula can also be used for activity entanglement to calculate G(E_act). Intuitively, the Gini coefficient measures inequality in the distribution of entanglement among all actors in a network.
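The following sketch puts group betweenness entanglement and its Gini coefficient together over a sequence of weekly network snapshots. Where the description above leaves room for interpretation, the code makes assumptions: raw betweenness feeds the centralization index, the actor series uses normalized betweenness, and the mean centralization over the period serves as the numerator.

```python
import numpy as np
import networkx as nx

def centralization_and_actor_bc(G):
    """Group betweenness centralization C_B and per-actor normalized
    betweenness for one time window, given a directed e-mail graph G."""
    n = G.number_of_nodes()
    raw = nx.betweenness_centrality(G, normalized=False)
    c_star = max(raw.values())
    C_B = sum(c_star - c for c in raw.values()) / ((n - 1) ** 2 * (n - 2))
    norm = nx.betweenness_centrality(G, normalized=True)
    return C_B, norm

def group_betweenness_entanglement(windows, actor):
    """E_gb(actor) over a sequence of graphs (one graph per time window)."""
    cb_series, actor_series = [], []
    for G in windows:
        C_B, norm = centralization_and_actor_bc(G)
        cb_series.append(C_B)
        actor_series.append(norm.get(actor, 0.0))
    cb = np.asarray(cb_series)
    ab = np.asarray(actor_series)
    dist = np.sqrt(np.sum((cb - ab) ** 2))
    return cb.mean() / (dist + 1e-9)

def gini(values):
    """Gini coefficient: mean absolute difference / (2 * mean)."""
    v = np.asarray(values, dtype=float)
    diff_sum = np.abs(v[:, None] - v[None, :]).sum()
    return diff_sum / (2 * len(v) ** 2 * (v.mean() + 1e-12))

# Toy example: four weekly snapshots of a five-person directed network
windows = [nx.gnp_random_graph(5, 0.5, seed=i, directed=True) for i in range(4)]
scores = {a: group_betweenness_entanglement(windows, a) for a in windows[0].nodes}
print(scores)
print("Gini entanglement:", gini(list(scores.values())))
```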
This is based on the observation that for an actor x being resource-poor or resource-rich in a networkthe resource being entanglement in this casecan be highly predictive for the behavior or performance of x. It therefore makes sense to put the entanglement of x in relationship to the entanglement of all other actors in the network through Gini entanglement. Data collection and methods In this section, we present the data collection process and the methods we applied to analyze the data for the case studies. For each case, we ran the same data collection process. We fetched the e-mails of a sample of project members who chose to participate in each pilot study. All worked at large organizations at the time we collected their communication data. We used Condor 1 , a social network and semantic analysis software to collect and analyze the data. We normalized the e-mail data for time zones. In our calculations we set the time window to 7 days, as this has been shown to deliver the best results for this type of organizational e-mail data (Gloor 2017). We measured the relationship of entanglement calculated from e-mail communication with individual and group outcome variables. Since we explore the properties of communication networks, we focused on the calculation of communication-based measuressuch as messages sent and receivedand of network centrality measures, as we explained in section 2.2. Further, we used the reach-2 metric, which is the number of nodes that a social actor can reach by going through each of its direct links in the graph (Gloor, 2017). Reach-2 has been used as a proxy for social capital, as it measures the number of connections of the people a person is connected to (de Oliveira and Gloor, 2018). In addition, we relied on online communication metrics developed specifically for assessing interactivity in e-mail communication. In particular, we looked at the communication activity (Gloor et al., 2014), which indicates the number of e-mail messages sent by a person within a time interval, and at the number of nudges, which represents the average number of pings (emails) that a sender needs to send in order to receive a response from the receiver (Gloor et al., 2014). Here we differentiate between ego nudges (the number of pings before a recipient responds) and alter nudges (the number of pings before others respond). In addition, we measured the contribution index which is the balance between messages sent and messages received (Gloor, 2017). Lastly, we calculated the average response times (ART) to measure how much time it takes a person to reply to an e-mail (Gloor et al., 2014;Merten and Gloor, 2010). This metric is helpful to identify fast and slow communicators and recognize patterns of behavior looking at periods of slower response. We separate between Ego ART, the average number of hours a sender takes to respond to e-mails and Alter ART, the average number of hours recipients takes to respond to a sender. Case studies We illustrate, in four case studies, how the proposed entanglement metric can be used with e-mail data to predict work-related outcome variables, such as team performance and employee turnover (see Table 1). The four case studies we present here are related to different business contexts and consider different dependent variables. In all cases we analyze email data, illustrating the suitability of the entanglement metric for online communication data. 
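To make the interactivity metrics above concrete, here is a minimal sketch that derives communication activity, contribution index, and an ego average response time from a flat message log; the column names, the toy log, and the reply-matching heuristic are illustrative assumptions rather than Condor's exact implementation.

```python
import pandas as pd

# Hypothetical flat e-mail log: one row per message
log = pd.DataFrame({
    "sender":    ["A", "B", "A", "C", "B"],
    "recipient": ["B", "A", "C", "A", "C"],
    "timestamp": pd.to_datetime([
        "2021-03-01 09:00", "2021-03-01 10:30", "2021-03-02 08:15",
        "2021-03-02 09:45", "2021-03-03 11:00",
    ]),
})

sent = log.groupby("sender").size()          # communication activity (sent)
received = log.groupby("recipient").size()
activity = pd.DataFrame({"sent": sent, "received": received}).fillna(0)

# Contribution index: (sent - received) / (sent + received), ranging [-1, 1]
activity["contribution_index"] = (
    (activity["sent"] - activity["received"]) /
    (activity["sent"] + activity["received"])
)

def ego_art(log, person):
    """Ego average response time (hours): time from a received message until
    the person's next message back to the original sender (simple heuristic)."""
    deltas = []
    inbox = log[log["recipient"] == person]
    for _, msg in inbox.iterrows():
        replies = log[(log["sender"] == person) &
                      (log["recipient"] == msg["sender"]) &
                      (log["timestamp"] > msg["timestamp"])]
        if not replies.empty:
            deltas.append((replies["timestamp"].min() -
                           msg["timestamp"]).total_seconds() / 3600)
    return sum(deltas) / len(deltas) if deltas else None

print(activity)
print({p: ego_art(log, p) for p in ["A", "B", "C"]})
```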
Our goal here is not to directly compare results across case studies, derive general conclusions, or claim causality. Rather we want to show the versatility of our entanglement metrics, which can be adapted to study business interaction dynamics in different scenarios. Case study A - learning behavior and performance This case study was conducted as a pilot in a health care organization to determine if activity entanglement between 53 team members of 11 medical innovation teams could predict performance and learning behaviors.
Figure 6. Entanglement correlation with performance
Figure 7. Entanglement correlation with learning behavior
The performance and learning behaviors of each team were rated and triangulated every other month for the duration of a year by three overall project managers. They individually rated the team performance and the capability of a team to learn new things. At the same time, all e-mails of the project members were collected and analyzed. Individual activity entanglement of each actor with all other actors was calculated, and then the average was taken for each actor. Finally, for each team the average and standard deviation of activity entanglement over all team members were computed. We find that team performance and learning behavior are significantly correlated with the standard deviation of activity entanglement of the team members, as shown in Figures 6 and 7; the correlation with learning behavior is .707 (p = .015). In other words, the wider the spread in activity entanglement of the team members, the higher their performance and learning behavior. This pattern corresponds to a few core team members being strongly entangled, and the remaining members showing weak entanglement. We also notice that moderate dispersion of entanglement is associated with higher variability in performance scores. This could be explained by control variables we could not collect in this study due to limited data availability. Alternatively, it could suggest that in order for performance to be high, a few employees have to take a strong group lead, guiding the others towards a common goal. Case study B - turnover prediction In our second case study, we conducted a pilot study at a global professional services firm. In this case we wanted to evaluate the possible association of entanglement with executives' decision to leave the company. Past studies have shown that managerial disengagement might depend on multiple factors and that communication-based and social network analysis metrics, captured from e-mail communication, can reveal it (Gloor et al., 2017b). Accordingly, we present Pearson's correlations (in Table 2) and logistic regression models (in Table 3), to see if the effect of the entanglement variable remained significant when combined with other predictors. The highest correlation of entanglement is with the contribution index, which however does not lead to collinearity issues. A high contribution index is an indication of "spammers": the higher the contribution index, the more somebody sends compared to what they receive. If there is a spammer, s/he will be entangled with many, while others, who are sending much less, will thus be less entangled. This results in a high Gini entanglement for that person. Extending this effect to all users will lead to a high correlation between the two values.
Table 3. Logistic regression for leavers
We first tested a model with only the control variables of rank, tenure, and time since last promotion (TSLP) measured in months.
In the subsequent models, we added the other predictors in blocks showing, in Model 4, that the only significant predictor, before adding entanglement, is Ego ART. This suggests that managers who leave the company are less responsive to e-mails and take more time to answer. In the full model, Ego ART, messages sent, contribution index and Gini activity entanglement are significant. Including this last predictor in the model leads to a significant improvement of the McFadden's pseudo-R-squared, which more than doubles (going from .08 to .18). As we can see from Model 5, a higher Gini entanglement makes the probability of leaving the company smaller. To evaluate the possibility of using the entanglement variable for making predictions, we used machine learning. In particular, we used a tree boosting model named CatBoost and its related Python library (Prokhorenkova et al., 2018). This boosting approach is now well-known and proved its usefulness in past research, where it also sometimes outperformed other supervised machine learning methods, such as Support Vector Machines (SVM) and Random Forest models (Huang et al., 2019). The model performance was assessed through Monte Carlo cross-validation (Dubitzky et al., 2007), with 300 random splits of the dataset into train and test data (75% vs. 25%). Thanks to the contribution of our variables, we could achieve an average accuracy of predictions of 80.25%, with an average value of the Area Under the ROC-Curve (AUC) of 0.81. In a second step, we considered the average model resulting from cross-validation and used it to interpret the impact of each variable on predictions (calculated as the average of its absolute Shapley values). We used the SHapley Additive exPlanations (SHAP) Python package (Lundberg and Lee, 2017). This method proved to be particularly suitable for tree ensembles and to work well also with respect to other approaches (Lundberg et al., 2018, 2020). As Figure 9 shows, the Gini index of activity entanglement is the variable with the highest impact on model predictions. Its contribution is much higher than all other variables, again supporting the importance of this metric. In second place, we find Ego ART. Results are consistent with those of the logit models and indicate that managers who are slower in answering e-mails, and have low Gini entanglement, are more likely to leave the company. Low Gini entanglement means that they show constant levels of entanglement, either being entangled with almost nobody or with everyone, a situation that might be stressful to maintain, especially when associated with email overload (Reinke and Chamorro-Premuzic, 2014). Average/high levels of Gini entanglement, on the other hand, have a positive impact on the prediction of staying in the company. This means that these managers show uneven entanglement, being highly entangled with some colleagues, while being weakly entangled with others. Case study C - employee performance We analyzed the e-mail interactions of 81 managers working for a big international services company. Every year the performance of managers was evaluated by their bosses and by the HR department. Whereas the rating of almost all of these managers was "exceeded expectations" for the year 2015, we noticed that 15 of them obtained a lower rating. As in case study B of resigning senior executives, we were interested in understanding if entanglement could be related to individual work performance.
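The cross-validation and attribution workflow just described (CatBoost, repeated random splits, SHAP importances) can be sketched as follows; the hyperparameters, feature list, and the simplification of refitting a single model for the SHAP step are assumptions for illustration, not the exact pipeline used in the study.

```python
import numpy as np
import pandas as pd
import shap
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

def monte_carlo_catboost(df, features, target="leaver", n_splits=300):
    """Monte Carlo cross-validation of a CatBoost classifier plus SHAP-based
    variable importance. df holds one row per manager with the predictors
    (e.g. rank, tenure, TSLP, Ego ART, contribution index, Gini entanglement)
    and a binary target column."""
    accs, aucs = [], []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            df[features], df[target], test_size=0.25,
            random_state=seed, stratify=df[target])
        model = CatBoostClassifier(iterations=200, depth=4,
                                   verbose=0, random_seed=seed)
        model.fit(X_tr, y_tr)
        proba = model.predict_proba(X_te)[:, 1]
        accs.append(accuracy_score(y_te, proba > 0.5))
        aucs.append(roc_auc_score(y_te, proba))

    # Simplification: refit on all data for a SHAP look at variable importance
    final = CatBoostClassifier(iterations=200, depth=4, verbose=0)
    final.fit(df[features], df[target])
    shap_values = shap.TreeExplainer(final).shap_values(df[features])
    importance = pd.Series(np.abs(shap_values).mean(axis=0), index=features)
    return np.mean(accs), np.mean(aucs), importance.sort_values(ascending=False)
```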
Carrying out a t-test, we could see that there is a significant difference between the Gini coefficients of betweenness entanglement scores of top (M = .508, SD = .061) and low (M = .469, SD = .028) performers, t(79) = 2.432, p = .017. As we did for leavers in case study B, we additionally built logistic regression models to assess the combined impact of variables on the probability of being a low performer. Pearson's correlations among our predictors are presented in Table 4. The highest correlation of entanglement is again with the contribution index, but this time lower than in case study B.
Table 5. Logistic regression for low performers
Models 1-5: -3.0535****, -3.6082***, -2.2620, -1.4497, 10.8895
Pseudo R-squared: 0.0698, 0.0990, 0.2240, 0.2314, 0.2803
* p < .10; ** p < .05; *** p < .01; **** p < .001.
As Table 5 shows, in the full model the p-value of Gini entanglement is only < .1; however, the inclusion of this variable leads to a good improvement of the McFadden's pseudo-R-squared, from .2314 (Model 4) to .2803 (Model 5). A significant performance improvement is also obtained by including weighted betweenness centrality. The usefulness of the entanglement predictor is confirmed by the results of the CatBoost model that we trained to classify managers into top and low performers. We followed the same procedure as in the previous case study B, i.e., a Monte Carlo cross-validation with 300 repetitions, and obtained good average results (Accuracy = 74.73%, AUC = 0.68). Figure 10 shows the Shapley values associated with each predictor. For an easier reading, we coded top performers as 1 and low performers as 0 (here the model is predicting top performers, which is exactly symmetrical to the choice of predicting low performers that we made in Table 5). Tenure, betweenness centrality and entanglement are the most important predictors, with a high Gini coefficient of betweenness entanglement and high betweenness centrality significantly increasing the chance of being classified as a top performer. These managers are highly entangled with some colleagues, and weakly entangled with others, demonstrating selective communication behavior with close collaborators, while being efficient with their time and communicating comparatively less with the rest of the organization. Regarding tenure, we observe the opposite effect, with recently hired employees generally receiving better ratings. Case study D - customer satisfaction In this case study, we show that entanglement is significantly related to team performance, measured as customer satisfaction through the Net Promoter Score (NPS). 13 teams within the company participated in our study, comprising a total of 82 managers. Each team was dedicated to a specific client. We measured betweenness entanglement of each team by taking the group betweenness entanglement of each member and considering group dispersion by means of the Gini coefficient. We find that high group betweenness entanglement inequality is positively related to team performance, this time measured as customer satisfaction. Running a Pearson's correlation test, we find a significant association of Gini group betweenness entanglement with team performance (r = .522, p = .002). For each team we have repeated measures over three time periods. Therefore, we used multilevel linear models (Hoffman and Rovine, 2007;Nezlek, 2008;Singer and Willett, 2009) as a more appropriate technique to evaluate the possible effect of entanglement on customer satisfaction. We nested repeated measures into groups (level 2).
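A minimal sketch of such a two-level model with statsmodels' MixedLM, assuming a long-format table with illustrative column names (team, period, gini_gb_entanglement, nps) and a hypothetical file name.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long-format panel: one row per team and measurement period (illustrative)
df = pd.read_csv("team_nps_panel.csv")  # columns: team, period, gini_gb_entanglement, nps

# Empty (intercept-only) model, used to gauge the between-team variance share
empty = smf.mixedlm("nps ~ 1", data=df, groups=df["team"]).fit()

# Model with the entanglement predictor; repeated measures nested in teams
model = smf.mixedlm("nps ~ gini_gb_entanglement", data=df,
                    groups=df["team"]).fit()
print(model.summary())

# Proportion of level-2 (between-team) variance explained by the predictor
var_reduction = 1 - model.cov_re.iloc[0, 0] / empty.cov_re.iloc[0, 0]
print(f"between-team variance reduced by {var_reduction:.1%}")
```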
Results are presented in Table 6. As the table shows, the biggest variance proportion can be attributed to team characteristics: the intraclass correlation coefficient is 0.7604, meaning that 76% of the empty model variance is at level 2 (Model 1). Including the entanglement variable in the model (Model 2) reduces this variance of 30.56%, which is a highly significant result for a single predictor. The higher the inequality in group betweenness entanglement is, the happier the customer is. Similarly to case study A, this confirms that selective communication of teams, where some team members are highly entangled and others are not, leads to happier customers. Discussion and conclusions In this study, we propose a novel synchronization metric, called entanglement, which is based on SNA of e-mail communication between different actors. We demonstrate with four case studies on real-world datasets that this metric and its variants are a good predictor of different individual and team performance indicators (a summary of our results is provided in Table 7). Result summary A Team performance and learning behavior The wider the spread in activity entanglement of the team members, the higher the team performance and learning behavior. This corresponds to having some core team members strongly entangled and the remaining members weakly entangled. B Employee turnover The Gini index of activity entanglement is the variable with the highest impact on model predictions. Employees who stay in the company have high Gini entanglement probably using selective communication and interacting more with some colleagues than with all others. They are also more responsive to emails and take less time to answer. C Individual performance Tenure, betweenness centrality and Gini entanglement are the most important predictors of top performers.with high Gini index of betweenness entanglement and high betweenness centrality significantly increasing the chance of being classified as a top performer. D Customer satisfaction The Gini index of group betweenness entanglement for teams, is related to customer satisfaction. The higher the inequality in group betweenness entanglement is for a team, the happier its customer is. This suggests that customers are happier when a few entangled leaders emerge in the team. Firstly, we find that dispersion of activity entanglement is positively associated with team performance. This means that the synchronized communication activity of some team members and their continuous similar flow state improve the performance of the team. These findings resemble studies showing that e-mail communication and face-to-face communication frequency (Patrashkova-Volzdoska et al., 2003), and flow in knowledge work (Quinn, 2005), can both lead to higher team performance. It also seems that the best teams exhibit higher dispersion, comprising highly entangled team members and more peripheral ones. Teams might benefit from strong leadership of few selected individuals that can guide and inspire others. With regard to employees disengagement, other studies have already shown that communication-based metrics of SNA can support the prediction of voluntary turnover (de Oliveira et al., 2019;Gloor et al., 2017b). We have proven that our proposed metric entanglement can also predict individual employee turnover and might help such studies to improve their model quality. 
Secondly, we show that the Gini coefficient of betweenness entanglement, as well as betweenness centrality, are associated with individual employee performance. A high Gini index of betweenness entanglement significantly increases the chance of being a top performer. This means that focused communication, i.e., communicating intensively and in a highly synchronized way with a few select colleagues while reducing communication with the rest of the organization, is an indicator of high performance. Our findings are consistent with past research (Brass, 1984;Mehra et al., 2001;Sparrowe et al., 2001) showing that network centrality is positively related to individual performance. However, the important part of our metric is that synchronization with others has a positive impact on individual performance, and not only having a central social position. Centrality alone may not be enough to explain individual performance (Reinholt et al., 2011) and we address this issue with the betweenness entanglement metric. Further, we found that low tenure also has a positive influence on individual performance. Thirdly, inequality of group betweenness entanglement in teams positively influences customer satisfaction. The company in case study D considers customer satisfaction as a proxy for team performance. Our findings suggest that the more strongly leaders with high entanglement emerge in groups, the happier the customer is. This means we have strongly entangled leaders who influence team dynamics over time, while the rest of the team is rather passive. While Mukherjee (2016) reveals a positive relationship between centralized leadership and sport teams' performance, Mehra et al. (2006) suggest that distributed leadership structures can differ with regard to important structural characteristics, and these differences can have positive or negative effects (Cummings and Cross, 2003) on team performance. Practitioners could draw on the idea of network interventions (Valente, 2012) and use this metric to identify weakly and strongly entangled actors in the communication network of a team or in the entire company. Thus, HR managers might use this metric to improve performance appraisal systems, anticipate disengagement and improve hiring and retention strategies. Combining novel metrics of e-mail communication analysis with long-established methods to assess employees' satisfaction (like surveys), HR managers can offer improved organizational initiatives, such as mentoring programs or cross-staffing, or retention strategies. The entanglement metric described in this paper has the potential to help managers to better understand the nature of employee online communication at their particular organization. This might lead to a rethinking of team design and building in the specific organization, which could ultimately lead to improved communication and collaboration and might support the identification of cohesive groups. Nevertheless, e-mail communication analysis combined with SNA raises some ethical concerns. HR managers need to make sure that metrics gathered from such analysis are seen as a support for HR decision making, and not as the holy grail for automated decision making. Future studies might also take body measures like heart rate or body movement into account to determine synchronization and flow state during real-time communication (e.g., online/offline team meetings). Nowadays, it is technically easy to collect such data, e.g. via smart wearables.
However, we are aware of the difficulty to use such smart devices in an organizational setting, because of security, privacy, and legal issues. Besides e-mail, employees increasingly use instant messaging tools like Slack or Microsoft Teams. Such tools provide application programming interfaces (APIs) for accessing communication data. Researchers could use that data to build a communication network and follow the analytical approach we presented. In general, our conceptualization of entanglement could be extended to other network measuressuch as group betweenness centrality as formalized by Everett and Borgatti (Everett and Borgatti, 1999)or to other aspects of social interactionsuch as measuring synchronicity in the emotions of people who carry out similar activities. In future research, we additionally plan to compare our results with those of other possible approaches to the study of temporal networks (Falzon et al., 2018;Holme and Saramäki, 2012). Our study has some limitations that should be taken into account. While the evidence supports guidance for new research agendas, our analysis is limited to the contexts of the case studies and the available datasets. It will be important to replicate our analysis in organizations of different industries, also considering different job descriptions and hierarchical positions of employees. We included the entanglement calculation in the SNA tools Condor and Griffin, which are free to use for academics in order to facilitate replicability. Further, other social network metrics could be considered to extend our definition of entanglement. For example, additional interaction patterns could be taken into account, developing metrics that specifically look at who communicates with whom. This could be particularly relevant when additional information about nodes is available, other than the social network structure. In addition, we advocate future studies to more deeply investigate the relationship of entanglement with other social network metrics, both time-variant and time-invariant. Building upon existing synchronization and flow state literature from different disciplines, we showed that the idea of synchronization and flow state can be used together to develop new metricsbased on methods and tools of SNA. Note that social entanglement is an indicator of behavior, with no definitive claims about cause and causality. Just as with quantum entanglement, much more research will be needed to fully "untangle" the origin of social entanglement. Nevertheless, the findings from our four case studies give evidence to the potential of our proposed entanglement metric. We position our research as a starting point for further HR-related analyses, which consider employees' social interactions and communication, with the goal to improve and optimize collaboration, leading to more satisfied employees and customers.
Fretting and Fretting Corrosion Behavior of Additively Manufactured Ti-6Al-4V and Ti-Nb-Zr Alloys in Air and Physiological Solutions Additive manufacturing (AM) of orthopedic implants has increased in recent years, providing benefits to surgeons, patients, and implant companies. Both traditional and new titanium alloys are under consideration for AM-manufactured implants. However, concerns remain about their wear and corrosion (tribocorrosion) performance. In this study, the effects of fretting corrosion were investigated on AM Ti-29Nb-21Zr (pre-alloyed and admixed) and AM Ti-6Al-4V with 1% nano yttria-stabilized zirconia (nYSZ). Low cycle (100 cycles, 3 Hz, 100 mN) fretting and fretting corrosion (potentiostatic, 0 V vs. Ag/AgCl) methods were used to compare these AM alloys to traditionally manufactured AM Ti-6Al-4V. Alloy and admixture surfaces were subjected to (1) fretting in the air (i.e., small-scale reciprocal sliding) and (2) fretting corrosion in phosphate-buffered saline (PBS) using a single diamond asperity (17 µm radius). Wear track depth measurements, fretting currents and scanning electron microscopy/energy dispersive spectroscopy (SEM/EDS) analysis of oxide debris revealed that pre-alloyed AM Ti-29Nb-21Zr generally had greater wear depths after 100 cycles (4.67 +/− 0.55 µm dry and 5.78 +/− 0.83 µm in solution) and higher fretting currents (0.58 +/− 0.07 µA). A correlation (R2 = 0.67) was found between wear depth and the average fretting currents with different alloys located in different regions of the relationship. No statistically significant differences were observed in wear depth between in-air and in-PBS tests. However, significantly higher amounts of oxygen (measured by oxygen weight % by EDS analysis of the debris) were embedded within the wear track for tests performed in PBS compared to air for all samples except the ad-mixed Ti-29Nb-21Zr (p = 0.21). For traditional and AM Ti-6Al-4V, the wear track depths (dry fretting: 2.90 +/− 0.32 µm vs. 2.51 +/− 0.51 μm, respectively; fretting corrosion: 2.09 +/− 0.59 μm vs. 1.16 +/− 0.79 μm, respectively) and fretting current measurements (0.37 +/− 0.05 μA vs. 0.34 +/− 0.05 μA, respectively) showed no significant differences. The dominant wear deformation process was plastic deformation followed by cyclic extrusion of plate-like wear debris at the end of the stroke, resulting in ribbon-like extruded material for all alloys. While previous work documented improved corrosion resistance of Ti-29Nb-21Zr in simulated inflammatory solutions over Ti-6Al-4V, this work does not show similar improvements in the relative fretting corrosion resistance of these alloys compared to Ti-6Al-4V. Introduction Titanium alloys are widely used in medical devices due to their enhanced fatigue strength and biocompatibility [1].Ti-6Al-4V is particularly favored for permanently implanted medical devices because of its corrosion resistance, promoted by a thin 2-10 nm TiO 2 passive oxide film that rapidly forms on its surface [2][3][4][5]. AM Titanium Printing Parameters AM titanium discs were designed and printed using laser powder bed fusion (L-PBF) on an SLM 125 HL printer (SLM Solutions Group AG, Lubeck, Germany).Printer parameters are listed in Table 1.Parameters were previously optimized for Ti-6Al-4V 1% nYSZ mechanical properties and density [50].Characterization of the precursor powders for the five materials used in this study is reported by Kurtz et al. 
[49]. Following L-PBF, the alloy samples were thermally annealed for 3 h at 600 °C followed by air cooling. Microstructure Alloy surfaces were polished with a 0.3 µm alumina suspension, rinsed with deionized (DI) water and 70% ethanol, and polished with 60% colloidal silica and 40% 9.8 M H2O2. Backscattered (BSE) and secondary electron (SE) micrographs were captured using scanning electron microscopy (SEM, Hitachi S-3700N, Tokyo, Japan). Accelerating voltages ranging from 10 to 15 kV were used. After imaging, the discs were repolished to 0.3 µm and cleaned with DI water. With no solution present (dry wear), cyclic fretting tests were performed for 100 cycles at 3 Hz (33 s of fretting) under a constant load of 100 mN of normal force (FN) applied with DC motors. Using Hertzian contact mechanics, this corresponds to 6.2 GPa nominal contact stress, indicating significant plasticity in the contact region [53]. Horizontal motion was applied in a square wave function. For each trial, fretting was initiated at 7 s and ended at 40 s of a 50 s test. The time before and after fretting measured the non-fretting baseline currents from which the fretting currents could be determined. After testing, wear tracks were nominally 80 +/− 10 µm long, measured with a digital optical microscope (DOM, Keyence VHX-6000, Mahwah, NJ, USA). In some cases, the true scratch was shorter due to the higher coefficient of friction (COF) and lateral compliance of the load system. Two differential variable reluctance transducers (DVRT) were mounted on the system to monitor the asperity's horizontal micromotion and the true horizontal amplitude. Fretting amplitude was recorded with a data acquisition system (LabVIEW, National Instruments, Austin, TX, USA). Testing was repeated on each alloy multiple times (n = 5).
Figure 1. (b) Schematic showing lateral compliance and asperity movement (δ), with a difference of deflection (Δ − δ = d) of the system under applied load (FN), dependent on COF. As denoted in the figure, lateral compliance refers to the stiffness of the vertical system (i.e., its resistance to bending under horizontal (perpendicular) loads), and deflection refers to the amount one end of the vertical system (the asperity at the base) is diverted from its neutral point (i.e., the asperity location when 0 horizontal load is applied). (c) Example data set of DVRT (differential variable reluctance transducer) displacement measurements of the applied and asperity amplitude over 100 fretting cycles (at 3 Hz).
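The 6.2 GPa nominal contact stress quoted above can be reproduced with a quick Hertzian sphere-on-flat estimate; the elastic constants below are typical handbook values assumed for illustration, not parameters reported in this study.

```python
import math

# Hertzian sphere-on-flat contact: 17 µm radius diamond tip on a titanium alloy
F = 0.100            # normal force, N (100 mN)
R = 17e-6            # asperity radius, m
# Assumed handbook elastic constants (not taken from this study)
E_ti, nu_ti = 114e9, 0.34        # Ti-6Al-4V
E_dia, nu_dia = 1140e9, 0.07     # diamond

E_star = 1.0 / ((1 - nu_ti**2) / E_ti + (1 - nu_dia**2) / E_dia)
a = (3 * F * R / (4 * E_star)) ** (1 / 3)   # contact radius, m
p_mean = F / (math.pi * a**2)               # mean (nominal) contact pressure
p_max = 1.5 * p_mean                        # peak Hertzian pressure

print(f"contact radius ≈ {a*1e6:.2f} µm")
print(f"mean pressure  ≈ {p_mean/1e9:.1f} GPa")  # on the order of the 6.2 GPa quoted above
print(f"peak pressure  ≈ {p_max/1e9:.1f} GPa")
```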
Fretting in Solution-Fretting Corrosion Open circuit potential (OCP vs. Ag/AgCl) was monitored for one minute before applying a potentiostatic hold (0 V vs. Ag/AgCl), and the subsequent baseline current plateaued before fretting corrosion testing in phosphate-buffered saline (PBS, ASTM F2129 Table X2.3, P3813-10PAK, Sigma-Aldrich, St. Louis, MO, USA). A three-electrode potentiostat (EG&G 263A, Princeton Applied Research/Ametek, Inc., Berwyn, PA, USA) was used for electrochemical measurements with a carbon counter electrode, a chlorinated silver wire reference electrode, and the titanium biomaterials as working electrodes. Fretting currents were recorded with the same LabVIEW data acquisition system used for DVRT measurements. Fretting corrosion was repeated in solution under the same conditions (100 mN, 3 Hz, 100 cycles) (n = 5). Surface damage was analyzed with DOM and SEM. Depth of Wear Wear track length was measured using DOM. Images were captured in 3D mode to quantify the depth. Using Depth from Defocus (DFD) 2018 software, 3D reconstructions of the damaged regions were analyzed. Tribocorrosion (in PBS) damage on pre-alloyed Ti-29Nb-21Zr is shown in Figure 2 as an example [53]. Depth measurements were compared between dry fretting and fretting corrosion groups and between the five titanium biomaterials. Electrochemical Measurements Fretting currents were recorded using LabVIEW 2023 Q3 software in PBS for each alloy and were plotted versus time (Figure 3). Both the fretting current (the average current above baseline over the 33 s of fretting) and baseline current were compared between the five titanium biomaterials.
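A minimal sketch of how the fretting current above baseline might be extracted from a potentiostatic current trace; the sampling, the synthetic trace, and the simple baseline estimate from the non-fretting segments are assumptions for illustration.

```python
import numpy as np

def fretting_current_above_baseline(t, i, start=7.0, stop=40.0):
    """Average fretting current above baseline from a potentiostatic trace.

    t, i : arrays of time (s) and measured current (A) during a 50 s test.
    The baseline is estimated from the non-fretting portions before `start`
    and after `stop` (a simple assumption; the study's exact procedure may
    differ, e.g. in how repassivation transients are handled).
    """
    t, i = np.asarray(t), np.asarray(i)
    fretting = (t >= start) & (t <= stop)
    baseline = i[~fretting].mean()
    return i[fretting].mean() - baseline, baseline

# Synthetic illustration: 0.1 µA baseline with a 0.4 µA rise while fretting
t = np.linspace(0, 50, 2501)
i = 0.1e-6 + 0.4e-6 * ((t >= 7) & (t <= 40)) + np.random.normal(0, 5e-9, t.size)
delta_i, i0 = fretting_current_above_baseline(t, i)
print(f"baseline = {i0*1e6:.2f} µA, fretting current above baseline = {delta_i*1e6:.2f} µA")
```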
Elemental Analysis of Debris SEM was used to image wear tracks, documenting the local damage, plastic deformation mechanisms, and the debris field with both secondary (SE, topographical) and backscattered (BSE, chemistry) electron imaging modes. Debris pileup and embedding within the wear scratch were chemically analyzed using energy dispersive spectroscopy (EDS, Aztec, Oxford Instruments, Abingdon, UK). Using Aztec version 4.0 software, elemental weight percent measurements were acquired for Ti, Al, V, Zr, O, and P. Quantities of each element were compared between Ti-6Al-4V, AM Ti-6Al-4V, and AM Ti-6Al-4V 1% nYSZ, as well as between both AM Ti-29Nb-21Zr samples. Oxide debris produced during dry fretting and fretting corrosion was also quantified for all five biomaterials. Statistical Analysis Statistical analysis was conducted with analysis of variance (ANOVA) methods, implementing Bonferroni corrections or Tukey's post-hoc comparisons where appropriate (α = 0.05). Depending on the variable in question, either single-factor or two-way analyses were performed with a sample size of n = 5 for each group. The SEM BSE micrograph of traditional Ti-6Al-4V in Figure 4a shows a two-phase equiaxed microstructure. The vanadium-rich β phase appears distinctly brighter and takes up less surface area than the darker, globular, aluminum-rich α phase. Both AM Ti-6Al-4V (Figure 4b) and Ti-6Al-4V 1% nYSZ (Figure 4c) have martensitic microstructures. Note the needle-like appearance, characteristic of the SLM powder bed fusion manufacturing process for α and α + β alloys. Both alloys (Figure 4b,c) have lamellar α grains approximately 20 µm in length that appear darker gray than the lighter and smaller martensitic needles in the rest of the micrograph. The admixed Ti-29Nb-21Zr microstructure shown in Figure 4d comprises several distinct regions of varying contrast, corresponding to the rapid melting and re-solidification of the admixed Ti, Nb, and Zr powder particles. The EDS of the precursor powder for the admixed Ti-29Nb-21Zr (not shown here) reveals a mixture of three separate powders (Ti, Nb, and Zr), including spherical Ti and Zr particles and blocky Nb particles [49]. No discernible microstructure is revealed by the mechanical polishing of the pre-alloyed Ti-29Nb-21Zr, as shown in Figure 4e. Wear Track Damage and Debris SEM BSE analysis of the resulting wear tracks and SE micrographs of the debris document the nature of damage between alloys as well as between fretting in air and fretting corrosion in PBS (Figure 5). Figure 5 shows a comparison of surface damage after fretting in air (Figure 5a,c,e,g,i) and in PBS (Figure 5b,d,f,h,j) under a passive potentiostatic hold (0 V vs.
Ag Ag/Cl) for traditional Ti-6Al-4V (Figure 5a,b), AM Ti-6Al-4V (Figure 5c,d), AM Ti-6Al-4V 1% nYSZ (Figure 5e,f), admixed AM Ti-29Nb-21Zr (Figure 5g,h), and pre-alloyed AM Ti-29Nb-21Zr (Figure 5i,j).There is evidence of cyclic plastic deformation, plate-like particle formation, oxidation, ribboning, removal of metal, and oxide/debris impaction into the wear track for each alloy in both dry and wet conditions.The BSE images of damage in the air show smaller amounts of oxide (dark regions) embedded into the track compared to the images taken after fretting in solution.All five titanium metals show no clear evidence of slip lines, and material removal is primarily by cyclic shearing, cutting and plowing of the diamond tip, which results in ribbons of debris forming at each end of the sliding stroke. The most visibly notable difference in damage generated between dry fretting (Figure 5a) and fretting corrosion (Figure 5b) of traditional Ti-6Al-4V appears to be additional oxide embedded into the wear track during fretting in PBS.Additionally, the wear mechanism is affected by the presence of the solution.During dry fretting, one large ribbon is typically extruded (Figure 5a) compared with multiple smaller ribbons generated in solution (Figure 5b).Similarly, more oxide is embedded in PBS (Figure 5d,f) than in air (Figure 5c,e) for AM Ti-6Al-4V and AM Ti-6Al-4V 1% nYSZ.Fretting corrosion results in larger extrusion ribbons and debris for both AM Ti-6Al-4V (Figure 5d) and AM Ti-6Al-4V 1% nYSZ (Figure 5f) compared to dry fretting (Figure 5c,e). During dry fretting, the asperity follows a straight wear track when abrading the traditional and AM Ti-6Al-4V alloys (Figure 5a,c,e), plowing through and displacing the alloy uniformly.The Ti-29Nb-21Zr alloy grains were not as easily displaced, causing the asperity to slide in a non-linear fashion (Figure 5g,i).The BSE images in Figure 5g,h show the heterogeneous chemistry of the microstructure of admixed AM Ti-29Nb-21Zr, inducing asperity wandering during fretting.This damage is influenced by the location and orientation of the variable chemistry associated with admixed elements and their rapid solidification.However, more oxide debris appears in the AM Ti-29Nb-21Zr (admixed) fretting corrosion wear track (Figure 5h) than in the dry fretting track (Figure 5g).Additionally, more loose debris dislodged, generating a debris field around the asperity path, separate from the pileup on the ends of the path.Compared to the traditional and AM Ti-6Al-4V biomaterials (Figure 5a-f), both AM Ti-29Nb-21Zr materials (Figure 5g-j) BSE micrographs show evidence of asperity sticking, indicated by a shorter wear track and more debris pileup within the track. Elemental Analysis of Debris EDS analysis performed on the damaged region and surrounding debris field after fretting corrosion damage for relevant elements is shown below in Figure 6. 
EDS chemical weight percent analysis of Ti-6Al-4V (Figure 7a) and Ti-29Nb-21Zr (Figure 7b) was performed on the damaged regions after 100 cycles of fretting in solution and compared within groups. Average values plotted in Figure 7 are reported in Table 2, along with standard deviations and statistical comparisons between groups. Table 2 lists p-values from a single-factor ANOVA performed between traditional Ti-6Al-4V, AM Ti-6Al-4V, and AM Ti-6Al-4V 1% nYSZ showing significant differences in Ti, Al, V, and Zr, as well as in the weight percent of oxygen (with a Bonferroni correction factor of 2) in the debris field. Tukey's post-hoc comparison between groups shows traditional Ti-6Al-4V has significantly lower counts of titanium (p < 0.001) and vanadium (p Traditional vs. AM Ti-6Al-4V = 0.03 and p Traditional vs. AM Ti-6Al-4V 1% nYSZ < 0.001), higher counts of aluminum (p < 0.001), and, using the weight percent of oxygen on the damaged region and debris field as a measure of oxidation, more oxidation (p < 0.001) than both AM Ti-6Al-4V biomaterials (Figure 7). When comparing elemental weight percentages between the versions of AM Ti-29Nb-21Zr, the admixed had significantly less titanium but more zirconium and niobium. The pre-alloyed version had more oxidation debris and more phosphorus (likely from the formation of phosphates, PO4). Figure 8 shows the differences, or lack thereof, in the weight percent of oxygen in the debris field and wear track generated from dry fretting to fretting corrosion. Depth: DOM Adding corrosion to the process of wear did not affect the magnitude of the wear track depth. However, fretting and fretting corrosion damage was more extensive on AM Ti-29Nb-21Zr (pre-alloyed) than on any other biomaterial tested under the same conditions (Figure 9). Two-way ANOVA (air/solution, sample) reveals no difference in the depth abraded during dry fretting and fretting corrosion (p = 0.35). However, the five biomaterials differed in wear track depth (p < 0.001). Table 3 shows average and standard deviation values (in µm) of depth abraded by a single diamond asperity after 100 cycles in air and PBS.
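A minimal sketch of the two-way ANOVA and Tukey post-hoc comparison described in the statistical analysis section, using statsmodels with an assumed long-format table of wear depths (column names and file name are illustrative).

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format wear-depth data: one row per test (n = 5 per material/environment)
# columns: material (5 levels), environment ("air" or "PBS"), depth_um
df = pd.read_csv("wear_depths.csv")

# Two-way ANOVA: material, environment, and their interaction
model = smf.ols("depth_um ~ C(material) * C(environment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's post-hoc comparison between materials (pooled over environments)
print(pairwise_tukeyhsd(df["depth_um"], df["material"], alpha=0.05))
```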
Friction and Sliding Amplitude Figure 10 shows the effect of adding solution to fretting in terms of micromotion and the amount of sticking when the horizontal motion of the fretting asperity is tracked by DVRT. DVRT motion tracking of the asperity shows an overall difference between dry fretting and fretting in solution. Though the asperity is under a controlled amplitude input, how it moves on the surface in contact results from the frictional forces between the asperity and the metal. The asperity amplitude in solution is much more uniform overall, whereas when dry fretting and frictional forces are higher, the asperity experiences sticking more often (most prevalent for AM Ti-6Al-4V 1% nYSZ, AM Ti-29Nb-21Zr (admixed), and AM Ti-29Nb-21Zr (pre-alloyed)). This is shown by the drop in fretting amplitude after a few cycles during dry fretting, likely due to the added effects of lubrication as well as a thicker titanium oxide layer in solution compared to in air. Similarly, both AM Ti-29Nb-21Zr metals also experience asperity sticking during fretting corrosion more often than the three other metals tested. DVRT motion tracking of the stage, i.e., the controlled horizontal amplitude output (not shown in Figure 10), indicates a uniform motion of the stage (represented in Figure 1c). Thus, any difference in the horizontal motion of the asperity over time and between dry/wet fretting is due to surface forces alone. Electrochemical Measurements An example (n = 1) of baseline current before, fretting current during, and return to baseline current after stopping fretting for each biomaterial is shown in Figure 11a (PBS, 0 V vs. Ag/AgCl). Figure 11b shows the average fretting current over 100 cycles (with the baseline current subtracted) on each of the biomaterials. Error bars represent the standard deviation for the n = 5 tests performed. Statistical analysis with a single-factor ANOVA (p-values reported in Table 4) reveals significant differences in baseline and fretting currents (above baseline) between the five biomaterials tested.
Post-hoc analysis identified a lower baseline current for traditional Ti-6Al-4V when compared with AM Ti-6Al-4V (p < 0.001), admixed AM Ti-29Nb-21Zr (p = 0.01), and pre-alloyed AM Ti-29Nb-21Zr (p = 0.04). AM Ti-6Al-4V 1% nYSZ also had a significantly lower baseline current than AM Ti-6Al-4V (p < 0.001), admixed AM Ti-29Nb-21Zr (p = 0.01), and pre-alloyed AM Ti-29Nb-21Zr (p = 0.04). No other significant differences in baseline current were measured before fretting (p > 0.16). Fretting current (above baseline) was averaged over 100 cycles and plotted in Figure 12 versus wear track depth. These tests were performed with matched mechanical conditions (normal force, nominal sliding distance, etc.). Fretting current (above baseline) is correlated with the depth abraded during fretting corrosion for all the titanium biomaterials (p < 0.001), with 67% of the variation accounted for by a direct linear correlation. Discussion This study investigated the asperity-based fretting and fretting corrosion behavior of five titanium materials: (1) traditional Ti-6Al-4V; (2) AM Ti-6Al-4V; (3) AM Ti-6Al-4V 1% nYSZ; (4) AM Ti-29Nb-21Zr (admixed); and (5) AM Ti-29Nb-21Zr (pre-alloyed). Using a single 17 µm radius diamond stylus and reproducing an experimental setup designed by Goldberg et al., a controlled assessment of the wear and fretting corrosion resistance was performed by quantitatively measuring the damage (scratch depth and fretting currents) and qualitatively assessing debris fields and wear tracks [54]. Generally, we found few differences in wear or fretting corrosion behavior between traditional and AM Ti-6Al-4V (with and without 1% nYSZ) despite different microstructures, debris chemistries, and corrosion properties (baseline current before fretting). Additionally, the pre-alloyed AM Ti-29Nb-21Zr performed worse (i.e., greater damage, less consistent wear path, and higher fretting currents) than all three Ti-6Al-4V biomaterials tested and is also less resistant to abrasion than the admixed AM Ti-29Nb-21Zr, in both air (fretting) and in solution (fretting corrosion). These findings support our initial hypotheses. To explore differences in the oxide films/fretting current magnitudes, the charge per oxide volume (Φ in C/cm3) was calculated [57].
where ρ is the oxide density, n is the oxide valence, F is Faraday's constant (96,500 C/mol), and M_w is the molecular weight of the oxide. Assuming the oxide has the same composition as the metal it is formed from (measured by EDS), the oxide charge volumes were calculated from the values listed in Table 5. Both versions of AM Ti-29Nb-21Zr had calculated charge volumes of approximately 15,000 C/cm³, which is lower than all the Ti-6Al-4V calculated values: traditional Ti-6Al-4V (20,482 C/cm³), AM Ti-6Al-4V (20,471 C/cm³), and AM Ti-6Al-4V 1% nYSZ (20,450 C/cm³). This is also lower than values previously calculated by Li, 2016 for traditional Ti-6Al-4V (21,498 C/cm³) and even lower than those reported for traditional CoCrMo (18,477 C/cm³) [58]. Because AM Ti-29Nb-21Zr (pre-alloyed) has a lower charge per volume of oxide abraded (C/cm³) yet higher fretting currents (A·s = C), this alloy experiences more oxide abrasion/repassivation than the others tested. The volume of oxide generated per unit of charge is larger, and the total charge generated in the same amount of time is also larger; thus, more oxide volume is generated. Similarly, though the average fretting current of the admixed AM Ti-29Nb-21Zr is similar to those of the three Ti-6Al-4V biomaterials, its oxide charge volume is much smaller, meaning more oxide is abraded and repassivated to generate the same total charge during fretting.

The SEM imaging analysis of wear and fretting corrosion scars showed varying amounts of oxide debris generation and embedding within the damage zone. This debris embedding, consistent with other studies of single-asperity fretting corrosion testing, decreases subsequent fretting corrosion damage and reduces the measured fretting corrosion currents [53,59]. This is, essentially, an effect arising during fretting corrosion that has only recently been identified and shows that there are antagonistic effects resulting from oxide debris embedding processes [53,59]. Additionally, EDS of the fretting corrosion debris and wear tracks (Table 2, Figure 6) reveals discrepancies in Nb and Zr content for the admixed and pre-alloyed Ti-29Nb-21Zr. These values (28.7% Nb, 21.4% Zr for the admixed and 22.3% Nb and 18.7% Zr for the pre-alloyed) were significantly different (p < 0.001), inconsistent with both the chemical composition of the biomaterial powders and previously measured chemical compositions of the as-built samples in the literature (30.2% Nb, 22.4% Zr for the admixed and 28.8% Nb and 22.1% Zr for the pre-alloyed) [49]. One likely reason for this discrepancy is the oxidation occurring at the wear-track interface (4.32% O for the admixed and 6.12% O for the pre-alloyed) and the formation of phosphates. Chemical heterogeneities in the admixed alloy may also contribute to this measured variation. This study documents a titanium-rich melt pool approximately perpendicular to the wear track (Figure 6d). Previous studies show how the distinct elemental powder beads in the Ti-29Nb-21Zr admixture fail to fully mix during the SLM process, generating heterogenous melt pools of different chemical compositions [49]. This unique microstructure may be caused by the comparatively higher melt temperature of the cubic niobium particles, a common problem for titanium alloy and admixture powders with mismatched melt temperatures and densities [60,61].
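As a quick plausibility check on the charge-per-volume figures above, the calculation can be reproduced directly from the defining quantities. The sketch below is illustrative only: the oxide valence, density and molecular weight are generic TiO2-like values assumed for demonstration, not the values reported in Table 5 of this study.

```python
# Minimal sketch of the charge-per-oxide-volume calculation, phi = n * F * rho / M_w.
# The oxide property values below are assumed, generic TiO2-like numbers for
# illustration; they are not the Table 5 values used in the study.

F = 96500.0  # Faraday's constant, C/mol


def charge_per_oxide_volume(n_valence, rho_g_per_cm3, m_w_g_per_mol):
    """Charge (C) required to form/repassivate 1 cm^3 of oxide."""
    return n_valence * F * rho_g_per_cm3 / m_w_g_per_mol


# Assumed TiO2-like oxide: valence 4, density ~4.2 g/cm^3, molar mass ~79.9 g/mol.
phi = charge_per_oxide_volume(4, 4.2, 79.9)
print(f"phi ~ {phi:,.0f} C/cm^3")  # on the order of 20,000 C/cm^3, comparable to the Ti-6Al-4V values above
```

On this reading, a lower Φ means that each coulomb of fretting charge corresponds to a larger abraded/repassivated oxide volume, which is why the Ti-29Nb-21Zr alloys are interpreted above as undergoing more oxide abrasion for a given total charge.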
While we investigate wear and fretting corrosion in this study, the mechanical properties of additively manufactured alloys remain critical. The fatigue resistance of AM titanium alloys is of particular concern. Voids and defects introduced during the printing process, as well as alterations to the microstructure during the rapid heating and cooling of laser powder bed fusion, are hypothesized to decrease AM fatigue resistance. Indeed, the equiaxed α + β microstructure of traditionally manufactured Ti-6Al-4V is so widely used in medical devices because of its resistance to fatigue crack initiation. Though other microstructures (lamellar α) decrease the time needed for crack propagation, under repeated cyclic loading, preventing crack initiation takes precedence. For in vivo titanium alloy applications, including hip stems and tibial baseplates, the time and cycles needed for cracks to nucleate are exponentially larger than the time from initiation until failure. Additionally, while corrosion and wear are associated with device failure, patients may rely on a corroding device for years without experiencing clinical symptoms. In contrast, fatigue failure is catastrophic, ending in device fracture, a clear breakdown of the device's purpose [62]. Thus, crack prevention is prioritized as a design consideration.

Here, we use as-built AM Ti-6Al-4V after a brief, stress-relieving heat treatment (600 °C for three hours). The resulting martensitic microstructure, with fine needles and prior β grains, is unlikely to reproduce the microstructure of AM Ti-6Al-4V devices used in vivo. Thus, a gap exists between the AM materials we use in this study and what is likely being implanted into patients, a limitation of our work. Overcoming this gap is nontrivial. First, device manufacturers view the post-processing they perform on AM devices as proprietary information. It is unclear what microstructures (and the processes that generate them) are present on FDA-cleared AM titanium alloy devices. Next, regulatory bodies and standards organizations lag behind the rapid technological developments in additive manufacturing. ASTM standards like F2924-14 Additive Manufacturing Titanium-6 Aluminum-4 Vanadium with Powder Bed Fusion are essential but do not specify a uniform microstructure beyond the absence of α-case, a surface layer generated under high temperatures from oxygen and nitrogen diffusion [63]. This lack of standardization may result in differing microstructures for off-the-shelf devices on a company-by-company basis. Microstructure, including the phase compositions, surface area, and elemental distribution (α is typically aluminum-rich, β vanadium-rich), influences the protective oxide film responsible for corrosion resistance and the alloys' wear resistance, and remains a key variable for future investigations.
Conventionally and additively manufactured titanium devices in vivo are often coated or subject to post-processing and surface modifications [64]. Implant use-cases for titanium and its alloys generally avoid bearing applications. However, retrieval studies document severe corrosion in vivo on titanium alloy surfaces where rotation was not designed, including in the modular junctions of total hip and total knee replacement systems. Under cyclic loading between mixed and same-alloy titanium interfaces, mechanically assisted crevice corrosion promotes thick Cr-Ti-Mo oxides, etching, and pitting, among other damage modes [15]. Wear and fretting corrosion represent critical components of this mechanism. Asperities between two interfaces in the crevice may abrade the oxide, interrupting it and initiating a positive feedback loop that perpetuates further corrosion. In this study, we reduce this mechanism to a single diamond asperity, characterizing the fundamental wear and fretting corrosion of five titanium-based biomaterials. While these materials may not be used as rubbing surfaces, conventionally and additively manufactured titanium components are often used in modular junctions. Consequently, understanding the tribocorrosion properties of new titanium alloys is critical to screening their performance as potential new biomaterials.

The relevance of wear properties for new AM titanium biomaterials is directly related to their in vivo application. While crucial for articulating components like acetabular cups and modular taper junctions under cyclic loading, fretting performance is less important for mechanically inert, well-fixed devices. Data reported by the FDA on additively manufactured devices cleared from 2010 to 2020 reveal three trends relevant to this study [65]. First, 70 percent of all cleared AM devices (at least 357 devices) use titanium-based biomaterials. Second, 78 percent of devices were manufactured using laser powder bed fusion, the manufacturing method investigated in this paper. Third, spinal cages, where fretting corrosion is not a prominent damage mode, make up 46 percent of devices cleared via the 510(k) pathway.

In addition to reinforcing the clinical relevance of this work, these trends reveal that orthopedic applications do not always involve interfaces where wear and fretting corrosion occur. To mitigate fracture within total knee replacement systems, AM tibial trays are produced through hybrid manufacturing. The keel (the portion of the device that interfaces with the patient's bone) is printed onto a traditionally manufactured puck, leaving a visually distinct transition. Failure at this interface is hypothesized to initiate micromotion due to poor bony ingrowth [65]. Thus, even in instances where tribocorrosion is a concern, AM may allow device designers to pick and choose manufacturing processes and biomaterials to optimize desired properties and improve clinical performance.
Based on the wear and fretting results of this study and the fundamental corrosion properties we previously elucidated, we provide the following practical recommendations for using AM Ti-29Nb-21Zr as a biomaterial. First, despite comparatively decreased wear and fretting corrosion properties for the pre-alloyed Ti-29Nb-21Zr, it is unclear what impacts these would have in vivo, given the current applications for AM titanium alloys. The poor wear resistance of titanium and its alloys is well documented in the literature, and CoCrMo and ceramics like BIOLOX delta are favored for bearing surfaces like femoral heads in total hip replacement devices and femoral condyles in total knee replacement systems [53,66-73]. In other words, worse wear performance than Ti-6Al-4V does not preclude using AM Ti-29Nb-21Zr as an orthopedic alloy, given that other biomaterials are selected when wear is a major factor. Additionally, previous studies on AM Ti-29Nb-21Zr show good osseointegration (evidenced by 95% bone-implant contact in a sheep animal model), improved corrosion resistance in simulated inflammatory environments, and increased resistance to cathodic activation compared to Ti-6Al-4V [49,74,75]. In total, these studies, along with the wear properties reported here, support the use of Ti-29Nb-21Zr in vivo at bone-device interfaces, though further research is required.

Conclusions

In this study, we investigated the wear and fretting corrosion behavior of three new additively manufactured biomaterials: pre-alloyed AM Ti-29Nb-21Zr, admixed AM Ti-29Nb-21Zr, and AM Ti-6Al-4V 1% nYSZ. We selected traditional and AM Ti-6Al-4V, two biomaterials actively used in vivo, for comparisons. Wear and fretting corrosion were quantified using a single diamond asperity (17 µm radius, 100 mN, 3 Hz, 100 cycles). While we found few differences between traditional and AM Ti-6Al-4V, we identified pre-alloyed AM Ti-29Nb-21Zr as the least resistant to wear and fretting corrosion (measured by wear track depth). We identified admixed AM Ti-29Nb-21Zr as the second least resistant to fretting corrosion. Additionally, compared with the Ti-6Al-4V-based biomaterials, the AM Ti-29Nb-21Zr biomaterials generally exhibited less uniform wear and higher frictional forces during single-asperity fretting in air and solution. These results support our initial hypotheses where, based on previous corrosion studies of the conventionally manufactured Ti-13Nb-13Zr alloy, we expected decreased wear and fretting corrosion properties for the Ti-29Nb-21Zr biomaterials when compared with Ti-6Al-4V. Despite these decreased properties, Ti-29Nb-21Zr in its current as-built state may still be suitable for in vivo use at interfaces where bone ingrowth and fixation are prioritized (e.g., in the keel of tibial baseplates). Additionally, post-processing techniques, including diffusion hardening and surface coatings, may improve the wear and fretting corrosion properties of the Ti-29Nb-21Zr biomaterials, expanding their use cases in vivo. Future investigations will focus on characterizing the fatigue resistance of the new AM biomaterials, which is critical for in vivo success.
Figure 1. (a) Diagram of fretting apparatus [54]. (b) System after applied horizontal movement (∆) showing lateral compliance and asperity movement (δ), with a difference of deflection (∆ − δ = d) of the system under applied load (F_N) and dependent on COF. As denoted in the figure, lateral compliance refers to the stiffness of the vertical system (i.e., its resistance to bending under horizontal (perpendicular) loads), and deflection refers to the amount one end of the vertical system (the asperity at the base) is diverted from its neutral point (i.e., the asperity location when 0 horizontal load is applied). (c) Example data set of DVRT (differential variable reluctance transducers) displacement measurements of the applied and asperity amplitude over 100 fretting cycles (at 3 Hz).

Figure 2. (a) A representative DOM 3D reconstruction and heat map of wear track damage and debris pile-up after 100 cycles of uniaxial fretting corrosion. Conditions included a 100 mN normal load, a 3 Hz frequency, and a 17 µm radius diamond asperity on pre-alloyed AM Ti-29Nb-21Zr in PBS. The heat map scale bar corresponds with increases in depth in the +Z direction (assuming a three-dimensional Euclidian space). Here, the bottom of the wear track represents the global minimum (blue, 0 µm), and the peak of the debris pile-up represents the global maximum (red, 17.25 µm). (b) A 2D surface image shows the line scan location and (c) depth and length measurement from that line scan. The horizontal red line in (c) corresponds with the sample surface while the blue line follows the debris pile-up and subsurface wear track. Axes and micron markers are superimposed on the original DOM images to improve legibility.

Figure 3. Horizontal asperity displacement (light) and fretting current above baseline (dark) measurements before, during, and after uniaxial fretting corrosion (a) for 100 cycles with an applied 100 mN normal load at 3 Hz on admixed AM Ti-29Nb-21Zr (17 µm radius diamond asperity, 0 V vs. Ag/AgCl in PBS). Note the non-uniform fretting amplitude of the asperity and spikes in current correlating to sticking/slipping of the asperity during fretting. (b) DVRT and current were recorded during the start of fretting, just after 8 s, showing 11 spikes in current for 5.5 cycles of loading (3 Hz movement back and forth).

2.5.3. Elemental Analysis of Debris
SEM was used to image wear tracks, documenting the local damage, plastic deformation mechanisms, and the debris field with both secondary (SE, topographical) and backscattered (BSE, chemistry) electron imaging modes.

Figure 5. Paired backscatter (BSE) and secondary (SE) micrographs for the five tested biomaterials in air (a,c,e,g,i) and in PBS (b,d,f,h,j). The SEM BSE micrographs (left, between ×650 and ×850 magnification) show the entire damaged region and debris field. The higher magnification SE micrographs (right, ×3000 magnification) show the nature of the wear track and debris removal in dry conditions versus in PBS. Note the increase in dark regions in the micrographs captured after fretting corrosion, indicative of increased oxide generation in the wear tracks. (Note: All SE images (right, ×3000) scales are equivalent. Thus, the scale bars shown in (a,b) apply to all SE micrographs.)

Figure 6. Wear track imparted by a single diamond asperity fretting corrosion apparatus in PBS on (a) traditional Ti-6Al-4V, (b) AM Ti-6Al-4V, (c) AM Ti-6Al-4V 1% nYSZ, (d) admixed AM Ti-29Nb-21Zr, and (e) pre-alloyed AM Ti-29Nb-21Zr. SEM SE micrographs show the nature of the surface damage as well as the debris pile-up. False color EDS maps identify elemental mapping of titanium, aluminum, niobium, zirconium, phosphorous, and oxygen. (Note: All images in (a–e), regardless of the element shown, are of the same magnification as (a). Thus, the scale bars shown in (a) apply to all images in Figure 6.)

Figure 7. Average EDS weight percent of elements Ti, Al, V, Zr, and Nb, as well as O and P (found in oxides and phosphates) for (a) traditional Ti-6Al-4V, AM Ti-6Al-4V, and AM Ti-6Al-4V 1% nYSZ and (b) admixed and pre-alloyed AM Ti-29Nb-21Zr, measured on the damaged regions and debris field generated from fretting in PBS under a passive 0 V vs. Ag/AgCl hold, as well as comparisons between groups, n = 5. The asterisk (*) between groups indicates significance (p < 0.025).

Figure 8. Average oxygen weight percent measured with EDS analysis of the wear track and surrounding debris with comparisons between dry fretting and fretting in PBS, n = 5. Outlined bars represent measurements from fretting wear tracks in air while solid bars indicate measurements acquired from fretting corrosion wear tracks in PBS. Colors correspond with each of the five biomaterials tested: traditional Ti-6Al-4V (blue); AM Ti-6Al-4V (orange); AM Ti-6Al-4V 1% nYSZ (grey); admixed AM Ti-29Nb-21Zr (green); and pre-alloyed AM Ti-29Nb-21Zr (red). The asterisk (*) between groups indicates significance (p < 0.025).

Figure 9. Abrasion depth from a single 17 µm radius diamond asperity after 100 cycles of fretting in air and fretting corrosion in PBS under a constant 0 V vs. Ag/AgCl hold. Values are reported for traditional and additively manufactured Ti-6Al-4V and Ti-29Nb-21Zr (admixed and pre-alloyed) titanium biomaterials.

Figure 11. (a) Representative data (n = 1) of fretting and baseline current data vs. time recorded during fretting in PBS under a 0 V vs. Ag/AgCl potentiostatic hold concurrent with a static 100 mN load at 3 Hz. (b) Average fretting current above baseline for each biomaterial. Error bars represent the standard deviation (n = 5).

Figure 12. Fretting current (above baseline) versus wear track depth after 100 cycles, with each point representing one trial of one material (n = 1). Linear least squares regresses a line with R² = 0.67 for all currents and depths.

Table 1. Printer parameters used to fabricate AM titanium alloys using L-PBF by an SLM 125 printer.

Table 2. Average EDS analysis weight percent of each element measured in the fretting corrosion debris and wear track, with p-value comparison between biomaterials and elements from single factor ANOVA calculations, n = 5.

Table 3. Measured scratch depth after abrasion by a single micro-asperity, n = 5.

Table 4. Average baseline current and average current above baseline during fretting (0 V vs. Ag/AgCl in PBS, n = 5).

Table 5. Parameters used to calculate the oxide film properties and the oxide charge volume (C/cm³) of the five biomaterials tested. (Note: Weight % values from EDS are averages calculated from the EDS measurements, excluding O and P from the total %, n = 5.)
2024-02-07T16:13:33.220Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "9a8706bf7b70568461937d7c49846544bad9eff0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4983/15/2/38/pdf?version=1707185976", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "58b712ae4f4dc6ba5dee9abe19f8ebe2917456a6", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Medicine" ], "extfieldsofstudy": [] }
269162427
pes2o/s2orc
v3-fos-license
Analysis of Existence and Faults Impact on Geological Disasters Using GGMPlus Data

The existence of faults can trigger various geological natural disasters because faults will react when an earthquake occurs and volcanic activity occurs, causing effects in the form of landslides, subsidence, ground movements, and other geological disasters. This research aims to analyze the existence and impact of local faults on geological disasters around the research location using GGMPlus data. A derivative filter is used to obtain FHD and SVD maps based on the gravity anomaly map. Fault analysis was carried out using graphs from the FHD and SVD map incisions, which were then correlated with each other. On the incision graphs, a meeting point between the maximum FHD value and a zero SVD value is interpreted as a fault structure. The results show that there were several fault indication points; straight lines were then drawn through these points to obtain the fault lineaments. The fault lineaments were correlated with the locations of the landslide and subsidence events, so it becomes evident that the subsidence and landslide disasters in the Brau Hamlet, Batu City, area can be associated with local faults. Based on several previous studies on determining faults in coastal areas, it is known that the areas crossed by the Palu-Koro fault have experienced many disasters, such as landslides, land movement and liquefaction. The existence of local faults in an area can increase the impact of damage when natural disasters such as earthquakes and landslides occur.

Introduction

Geologically, Indonesia is traversed by the Ring of Fire, giving Indonesia many volcanoes and faults. The existence of faults can trigger various geological natural disasters because faults will react when an earthquake occurs and volcanic activity occurs, causing effects in the form of landslides, subsidence, ground movements, and other geological disasters. These geological disasters can occur in mountainous and coastal areas, and one of the causes is the presence of faults. It is therefore necessary to analyse faults in mountainous and coastal areas to determine the influence of faults in different locations with the same impact and characteristics. When regional faults or volcanoes react, they will cause local faults to react, thus triggering ground movements [1]. Generally, buildings positioned on fault lines tend to receive a greater impact, so fault mapping in an area is important as a disaster mitigation strategy and for building planning.
The gravity method is one of the geophysical methods commonly used to identify faults. Gravity data can be used efficiently in various geological problems related to the exploration of the earth's crust, such as natural disaster mitigation, geothermal studies, and fault mapping. The gravity method has advantages in determining the density limits of subsurface rocks. The gravity method measures variations in the earth's gravitational field caused by differences in the rock mass density beneath the surface [2]. Gravity data can be obtained by direct measurement or from secondary data in the form of GGMPlus satellite data. One of the advantages of using satellite data is that it has wide area coverage and requires little time and cost. The data obtained from GGMplus are in the form of coordinate point data, Free Air Anomaly, and elevation. The measured gravity anomaly value is directly proportional to the density of the rock, where a high anomaly value identifies rock with high density and vice versa. Therefore, the gravity method is commonly used to map density contrasts beneath the surface, such as the distribution of faults in a study area.

The gravity method is suitable for determining fault structures because it can identify contrasting differences in rock density, as demonstrated in the identification of faults in Trienggadeng, Aceh [2]. A fault area has unstable rock due to the presence of two rock densities with contrasting values in one location, so it is very susceptible to geological disasters such as ground movements [3]. Ground movement is the movement of soil mass from its original position, which can develop into a landslide if mitigation is not done immediately [4]. Landslide disasters will be very dangerous if they occur in densely populated areas because they can damage infrastructure and cause casualties. This research aims to analyze the existence and impact of local faults on geological disasters around the research location using Global Gravity Model Plus (GGMPlus) data. The results of this study are expected to provide information regarding the dangers of faults in a location, offer recommendations for disaster mitigation to the local government, and increase preparedness for the surrounding community.

Material and Method

The research was conducted in Brau Hamlet, Gunungsari Village, Batu City, East Java, Indonesia. The research data were obtained from the GGMplus gravity satellite data with an area of 3 x 2.5 kilometres. The area focuses on Brau Hamlet with a distance between data points of 220 meters; the measurement points are designed in the form of a grid so that they can represent the entire study area (Figure 2). Geographically, Brau Hamlet is located at 7°50'46.41"S and 112°29'44.45"E with an altitude of about 1080 masl. Several hills surround this hamlet, so the potential for geological disaster can come at any time.
The measured value of gravity for each region always varies because the earth is not a perfect sphere and cannot be considered homogeneous and isotropic [5]. Several components, such as differences in latitude, topographical conditions, and rock density, influence the measured gravitational acceleration. The GGMPlus data obtained include Free Air Anomaly data, elevation, coordinates, and Gobs. GGMplus is satellite-derived data built from three main constituents of gravity: the GOCE and GRACE satellites with a spatial scale from 10,000 km to 100 km, the EGM2008 model with a spatial scale of 100 km to 10 km, and topographic gravity from 10 km down to 250 m. The data from these three constituents were processed using the approximative method, and the analysis was carried out spectrally using the discrete Fourier technique to obtain a degree variance model [6].

The gravity method is ambiguous due to the many noise and disturbing factors in the data acquisition process, so it is necessary to make corrections to remove the noise. Corrections applied in gravity data processing include the Terrain, Bouguer, and Free Air corrections. The result of the correction process is a complete Bouguer anomaly map; based on the complete Bouguer anomaly map, the anomaly can be separated using a Butterworth filter. Separation of these anomalies aims to obtain residual and regional anomalies. The research workflow is shown in Figure 1 below. Analysis to determine the presence of faults is performed on the residual anomaly map using the First Horizontal Derivative (FHD) and the Second Vertical Derivative (SVD). FHD is the horizontal change in the value of the gravity anomaly, which shows maximum and minimum values at anomaly contacts, making it suitable for determining the presence of faults in geological structures. Meanwhile, SVD is used to reveal shallow sources of anomalies; it is an analysis that can describe residual anomalies associated with shallow structures [7]. The residual anomaly map obtained is filtered with first-order derivatives on the x and y axes using the Oasis Montaj software, and the GridMath option is then used to obtain the FHD. The SVD is obtained from a second-order derivative filter on the z-axis. After obtaining the FHD and SVD maps, path incisions are made, and the analysis uses Microsoft Excel curves to identify the presence of faults and the type of fracture. The analysis follows the approach of [8], because it has the same concepts and methods and only differs in the research location.
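To make the FHD/SVD step concrete, the sketch below shows one way the two derivative maps could be computed from a gridded residual anomaly. This is a minimal illustration, not the actual Oasis Montaj workflow used in the study; the grid spacing value and array names are assumptions.

```python
import numpy as np


def fhd_svd(residual, dx, dy):
    """First Horizontal Derivative (FHD) and Second Vertical Derivative (SVD)
    of a gridded residual gravity anomaly (rows = y, columns = x).

    FHD = sqrt((dg/dx)^2 + (dg/dy)^2)
    SVD is approximated via Laplace's equation: d2g/dz2 = -(d2g/dx2 + d2g/dy2)
    """
    dg_dy, dg_dx = np.gradient(residual, dy, dx)   # first horizontal derivatives
    fhd = np.hypot(dg_dx, dg_dy)

    d2g_dx2 = np.gradient(np.gradient(residual, dx, axis=1), dx, axis=1)
    d2g_dy2 = np.gradient(np.gradient(residual, dy, axis=0), dy, axis=0)
    svd = -(d2g_dx2 + d2g_dy2)                      # second vertical derivative
    return fhd, svd


# Illustrative usage with the 220 m station spacing of the survey design:
# fhd_map, svd_map = fhd_svd(residual_grid_mGal, dx=220.0, dy=220.0)
```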
Fault in Mountainous Area

The research area is included in two regional geological maps, namely the regional geological maps of the Kediri and Malang Quadrangles. Based on the geological maps of the Kediri and Malang Quadrangles, the study area is included in the Old Anjasmara Volcanic Formation (Qpat). The Old Anjasmara Volcanics Formation (Qpat) in Figure 3 was formed during the Quaternary period and has an Early Pleistocene age. The formation consists of several rock types: volcanic breccia, tuff breccia, tuff, and lava. Based on observations at the location, breccia and tuff rocks were found with brown soil weathering. Hills dominate the morphological conditions of Brau Hamlet, with a height of around 1034 meters above sea level.

A complete Bouguer anomaly map (Figure 4a) and a residual anomaly map (Figure 4b) are obtained based on the correction results. High gravity anomaly values are marked in pink and low gravity anomaly values are marked in dark blue, where the value of the gravity anomaly is directly proportional to the density of the rock. On the complete Bouguer map, low density is shown in green-blue with a value range of 83.9 to 85.1 mGal, while high density is shown in yellow-pink with a value range of 85.3 to 86.8 mGal. Low anomalies on the complete Bouguer map are mostly in the southeast direction, and high anomalies are in the northwest direction, showing a fairly contrasting boundary between high and low anomalies. The residual anomaly map describes rock structures at shallow depths with irregular patterns because the effects of shallow geological structures vary widely. The resulting residual anomaly map has a low anomaly with a value range of -0.5 to -0.1 mGal, while the high anomaly has a value range of 0.0 to 0.4 mGal. The residual anomaly is used as a reference in the derivative filtering because this anomaly map reflects shallow features, making it suitable for producing the FHD and SVD maps. The FHD map is obtained from the first horizontal derivative of the residual map. This method can determine the existence of a fault, where a high FHD value indicates the presence of a structure that forms the boundary between a high anomaly and a low anomaly. The resulting FHD and SVD maps are shown in Figure 5.
The FHD map has anomaly values varying from low to high, 0.00025 to 0.00048 mGal. The distribution of anomaly values on the FHD map can indicate the presence of a lithology contact horizontally. Meanwhile, the SVD map is the second vertical derivative of the residual map. This second vertical derivative is carried out to identify the presence of shallow faults. The SVD values obtained were in the range of 0.0000646 to 0.0000322 mGal. The boundaries between high and low SVD anomaly values at close distances can indicate the presence of fault structures or shallow faults. Based on the SVD map in Figure 5b, there is a confluence of gravity anomaly values that contrast between low (blue) and high (red) anomaly values. This meeting boundary indicates that in that area there are differences that separate the two regions. These contrasting anomaly encounters can be identified by making several incisions to determine the subsurface lithology conditions (Figure 5). Derivative analysis of the FHD and SVD maps is used to identify the presence of a fault based on the lithology conditions obtained from the incisions on the two maps. The incisions are marked with straight black lines in Figures 5(a) and (b). The values obtained from the FHD and SVD maps are then displayed in graphical form using Microsoft Excel, and an analysis of the graphs obtained from the incisions is performed. Determination of the existence of a fault from the graph is done by comparing the FHD and SVD values. The existence of a fault structure can be detected if the maximum value on the FHD section correlates with a value close to or equal to zero on the SVD section. The correlation of the two values is then interpreted as a fault structure. The results of the graphical analysis and the identification of faults from the incisions in Figure 5 are presented as a 2D model that aims to determine the lithology of the constituent rocks. The model is shown in Figure 6 with an error of 2.068%; this value is low. Based on the 2D model, it is known that the constituent rocks at the study site consist of clay, tuff breccia, and andesite. Determination of the fracture type can be seen from the maximum and minimum values on the SVD cross-section. If the measured positive maximum value of SVD is less than the negative minimum value of SVD, then an upward fault is indicated [5].
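The graphical criterion described above (an FHD maximum coinciding with an SVD value near zero, and the SVD maximum/minimum comparison for the fault type) can be expressed as a simple check along a single incision profile. The following sketch is illustrative only; the tolerance value and the profile arrays are assumptions, not values taken from the study.

```python
import numpy as np


def pick_fault(fhd_profile, svd_profile, svd_zero_tol=1e-6):
    """Apply the FHD-maximum / SVD-near-zero criterion along one incision.

    Returns the index of the FHD maximum, whether the SVD there is close
    enough to zero to indicate a fault, and whether the SVD extremes suggest
    an upward fault (|SVD max| < |SVD min|, as described in the text).
    """
    fhd = np.asarray(fhd_profile, dtype=float)
    svd = np.asarray(svd_profile, dtype=float)

    i_max = int(np.argmax(fhd))                        # point of maximum FHD
    fault_indicated = abs(svd[i_max]) <= svd_zero_tol  # SVD close to zero at that point
    upward_fault = svd.max() < abs(svd.min())          # criterion for an upward fault

    return i_max, fault_indicated, upward_fault
```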
Identification of fault lineaments is carried out by combining the points indicated to have fractures based on the results of the incision analysis in Figure 6. Based on Figure 7b, it is found that in the study area there are two fault lineaments running from north to south, marked by the dotted lines F1 and F2. These two faults are interpreted as local faults with shallow depths. Based on the fault lineament analysis obtained, the existence of the faults can be connected with several disaster events such as landslides and subsidence. In Figure 7, the big circle (High Damage Area) shows areas with quite high disaster intensity, such as landslides and subsidence, causing damage to residents' houses. This position lies on the fault lineaments, indicating that the fault lineaments can trigger greater damage. In addition, a spring was also found at the research location (blue circle) in the direction of the F2 fault indication, providing supporting evidence that there is a local fault in Brau Hamlet. These conditions require the community to increase awareness of natural conditions. In addition, the government's role is needed in providing education regarding the geological conditions in the study area.

Based on research conducted in 2020 [10], during the 2018 Palu earthquake there were many landslides in coastal areas, as shown in Figure 9(a). The research conducted by Kusumawardani et al. (2021) shows that after the 2018 earthquake (Figure 9(b)), liquefaction occurred over a large area [11]. This case can be evidence that the presence of active faults in an area can increase the risk of damage from natural disasters; earthquakes can be very destructive if there are faults in the area. Faults can also increase the risk of other natural disasters, such as landslides and liquefaction, because the soil in the area becomes unstable and fragile. Based on several studies, a correlation can be drawn between faults that occur on the coast and faults in the mountains. The correlation results for faults in both areas show the same characteristics, resulting in geological disasters such as liquefaction, ground movement, and landslides, thereby increasing the risk in both areas. The meeting of oceanic crust with continental crust causes the oceanic crust, which is the thinner plate, to move downwards. The meeting of these two plates is called a subduction zone, and movement in this zone can cause earthquakes, tsunamis and volcanic eruptions. When an earthquake occurs, the vibrations can reach hundreds of kilometres depending on the size and depth of the earthquake's epicentre. Apart from that, volcanic activity can also cause ground vibrations. Vibrations from earthquakes and volcanic activity will spread in all directions and then resonate on local faults. Local faults that resonate due to the movement of tectonic plates can increase shaking when an earthquake occurs [12], giving rise to other natural disasters such as ground movements, landslides and other geological disasters. This indicates that areas with faults can experience an increased occurrence of natural disasters.
Conclusion

The presence of faults can be determined using the gravity method by analysing the FHD and SVD maps obtained from the residual anomaly map. Based on the correlation graphs of FHD and SVD values, it was found that there are indications of local faults in Brau Hamlet, Batu City. When the measured positive maximum value of SVD is smaller than the negative minimum SVD value, it indicates the presence of an upward fault. If a straight line is drawn, the indicated fault is parallel to the landslide point and several damaged residents' houses. The fault lineaments are correlated with the locations of the landslide and subsidence events, so it becomes evident that the subsidence and landslide disasters in Brau Hamlet, Batu City, can be associated with local faults. Based on several previous studies regarding the determination of faults in the coastal area of Palu City, it is known that the area crossed by the Palu-Koro Fault has experienced many disasters such as landslides, land movement and liquefaction. The correlation between faults located in coastal and mountainous areas shows that both have the same properties, and they can increase the risk of geological disasters. When vibrations occur from the movement of tectonic plates or from volcanic activity, they can resonate with faults, thereby increasing fault activity. The research that has been done needs to be studied further to obtain subsurface theories and models in more detail.

Figure 2. Survey design at the research location.

Figure 6. 2D model of slice A-A'.

Figure 7. a. Overlay of the local fault lineament map with the Brau Hamlet map; b. Map of SVD and slicing with fault lineaments.

3.2. Fault in Coastal Area
Determining faults in coastal areas is based on research conducted by Permana et al. (2022) using GGMplus gravity data in Palu City, Central Sulawesi, Indonesia [8]. The method used is the gravity method with GGMplus data, including contour Bouguer anomaly modelling, anomaly separation, and derivative analysis. In the derivative analysis, the model is derived horizontally and vertically. The first horizontal derivative is called FHD and the second vertical derivative is called SVD. The results of the two derivatives are correlated to show the maximum FHD and SVD values, where the maximum FHD value is correlated with a value close to or equal to zero in the SVD section.

Figures 8(a) and (b) show the correlation results between FHD and SVD maps to determine faults. The meeting of the maximum values of 0.00557–0.01087 mGal on the FHD map in the western area
2024-04-17T15:29:16.680Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "a8670a9d0cb382dfc156afe9438db346ab8a960a", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/1321/1/012003/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2575d03cec36b940cbd296c8cb1bc2824ec00ea8", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "extfieldsofstudy": [] }
160012369
pes2o/s2orc
v3-fos-license
Glial Cell AMPA Receptors in Nervous System Health, Injury and Disease

Glia form a central component of the nervous system whose varied activities sustain an environment that is optimised for healthy development and neuronal function. Alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA)-type glutamate receptors (AMPAR) are a central mediator of glutamatergic excitatory synaptic transmission, yet they are also expressed in a wide range of glial cells where they influence a variety of important cellular functions. AMPAR enable glial cells to sense the activity of neighbouring axons and synapses, and as such many aspects of glial cell development and function are influenced by the activity of neural circuits. However, these AMPAR also render glia sensitive to elevations of the extracellular concentration of glutamate, which are associated with a broad range of pathological conditions. Excessive activation of AMPAR under these conditions may induce excitotoxic injury in glial cells, and trigger pathophysiological responses threatening other neural cells and amplifying ongoing disease processes. The aim of this review is to gather information on AMPAR function from across the broad diversity of glial cells, identify their contribution to pathophysiological processes, and highlight new areas of research whose progress may increase our understanding of nervous system dysfunction and disease.

Introduction

Glutamatergic signaling through alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors (AMPAR) forms a major component of excitatory synaptic transmission in the central nervous system (CNS). However, glutamate release in the CNS is not exclusive to synaptic terminals, but also arises from unmyelinated axons [1] and non-neuronal glial cells [2] under both physiological and pathophysiological conditions. Glutamate is therefore present at varying concentrations in a number of extra-synaptic locations where it influences non-neuronal AMPAR, particularly those expressed by CNS glial cells, leading to influences on a range of critical functions. Importantly, glutamate release is enhanced under a number of pathological conditions [3-5], leading to concentrations of extracellular glutamate that trigger excitotoxic processes that threaten glial cell viability, and set in motion cellular and molecular processes that initiate and intensify pathophysiological conditions. In this review we provide an overview of AMPAR expression in glial cells of the CNS and peripheral nervous system (PNS), describing functions for these receptors in physiological conditions, highlighting their involvement in glial responses to pathophysiological conditions, and hypothesising on additional roles that glial AMPAR may perform in the context of nervous system injury and disease. The review will also indicate areas for future research, including cannabinoid-AMPAR interactions and AMPAR-stimulated transcriptional regulation, whose investigation promises to stimulate new knowledge on mechanisms regulating injury and disease processes, and identify new targets for CNS protection and repair.

Glia: A Brief Overview of Diversity in Form and Function

Glia are a diverse group of neural cells whose principal uniting features are a non-neuronal identity, and the performance of functions essential to the normal operation of the nervous system.
Glia can be divided into two categories based on their developmental origins: macroglia derived from the ectoderm, and microglia derived from hemopoietic stem cells originating from the yolk sac during early embryonic development [6]. The principal CNS macroglia are the astrocytes and oligodendrocytes (OL). Astrocytes are a heterogenous group of cells that, in addition to a well-defined role in optimising extracellular conditions for neuronal function via the uptake of neurotransmitters and ions, may also contribute to the regulation of a number of other functions including synaptic function, cerebral blood flow and maintenance of the blood brain barrier [7]. OL are exclusively involved in myelin generation in the CNS [8]. However, it is now appreciated that myelination provides benefits to their neuronal targets that extend beyond the enhancement of axonal conduction velocities to encompass the provision of trophic and metabolic support [9], and potentially an involvement in information processing and learning [10,11]. CNS macroglia also include NG2-glia, radial glia, and ependymal cells. NG2-glia are characterised as multi-process bearing, NG2/PDGF receptor α-expressing cells capable of generating OL and astrocytes (and possibly neurons, although this is the subject of intense debate [12]) in both the embryonic and postnatal CNS [13]. However, their extensive distribution throughout the adult CNS, and synaptic integration with neural circuitry, suggest functions that extend beyond the role of a glial progenitor [13-15]. Radial glia are astrocyte-like neural progenitors that sequentially give rise to neurons, astrocytes and OL during embryonic development [16]. They are also considered to be the progenitors for the CNS ependymal cells (see below) [17]. Structurally, radial glia are polarised cells with a cell body located close to the ventricle, a short "endfoot" oriented towards the ventricle wall, and a longer process that extends to make contact with the pial surface [18]. This morphological arrangement provides a scaffold that guides newly born neurons as they migrate into the developing cerebral cortex [19]. Ependymal cells cover the brain's ventricles and the central canal of the spinal cord. Examples of ependymal cells include the cerebral spinal fluid secreting choroid plexus epithelial cells [20], and thalamic tanycytes. Tanycytes are radial glia-like cells located in the walls of the 3rd and 4th ventricle that exhibit a neuro- and gliogenic capacity, and through their function as glucosensors, are proposed to play a role in regulating feeding and energy balance [21]. Microglia, which are restricted to the CNS, present a dualistic identity, providing protective and modulatory influences under healthy conditions through the release of pro-survival trophic factors, and by tidying CNS tissues through the removal of dead cells and the pruning of excess synapses. However, when activated under pathological conditions they play a central role in the progression of CNS injury and disease states through the release of inflammatory and cytotoxic mediators [22]. Thus, microglia are essential to both the maintenance of a healthy CNS environment, and the pathophysiological processes that threaten it. PNS macroglia include Schwann cells, satellite cells and enteric glia. Schwann cells exhibit two forms: the myelinating variety, which form compact myelin sheaths on PNS axons, and non-myelinating forms, whose interactions with multiple axons form the Remak fibres [23].
Under pathological conditions Schwann cells are capable of de-differentiating into an immature phenotype reported to exhibit pro-demyelination characteristics, while also expressing neurotrophic factors involved in neuronal survival and axonal regeneration [24]. Satellite glia interact with the soma of neurons in the sensory, sympathetic and parasympathetic ganglia where they are considered to play a similar role to astrocytes in the CNS. Finally, enteric glia are located in the ganglia of the gastrointestinal tract and are considered the most similar of PNS glia to astrocytes due to their multi-process morphology, including "end feet" connected to blood vessels, and their coupling via gap junctions to form a glial syncytium [25]. AMPAR AMPAR are one of three types of ionotropic glutamate receptors, the others being the N-methyl-d-aspartate receptors (NMDAR), known for their involvement in synaptic plasticity [26], and the kainate receptors (KR). KR are frequently grouped with AMPAR due to their similar pharmacological properties (e.g., agonised by kainate, and antagonised by several of the same drugs). However, KR are formed from distinct protein subunits that confer unique receptor properties, such as slower inactivation kinetics, that distinguish them from AMPAR [27,28]. Although glutamate can influence glia via activation of NMDAR and KR, these actions have been reviewed elsewhere [29,30] and will not be covered in the present work. Regarding AMPAR function in neuronal circuits, AMPAR activation mediates the majority of fast excitatory synaptic communication in the CNS. AMPAR are heterotetrameric complexes containing various combinations of the pore forming GluA subunits 1-4 (GluA1-4) whose assembly produces cation permeable receptors with physiological properties that differ depending on their specific subunit composition [28]. AMPAR channel activation permits the influx of Na + , and varying levels of Ca 2+ , depending on the subunits present in the complex. Cation influx induces excitatory post synaptic currents (EPSCs) with rapid kinetics that are characterised by pronounced receptor desensitisation and influenced by subunit composition [31,32]. The inclusion of GluA2 limits the permeability of AMPAR to Ca 2+ due to codon editing at the so-called Q/R site located within the pore forming region. The vast majority of GluA2 subunits undergo Q/R editing leading to the presence of a positively charged arginine residue within this region that inhibits the permeation of divalent ions [33,34]. In this way, AMPAR lacking GluA2 show greater Ca 2+ permeability than complexes containing this subunit. Variation in AMPAR desensitisation of all GluA subunits arises through alternative splicing to produce Flip and Flop variants that differ in their kinetics of desensitisation [28]. Importantly, most Flip variants exhibit slower, less pronounced, desensitisation, thus a dominance of these subunits over the Flop variant in AMPAR would be expected to increase vulnerability to excitotoxic injury. This scenario may occur in amyotrophic lateral sclerosis (ALS) where motoneurons in the cervical ventral horn show a decrease in Flop variants, while Flip expression is sustained [35]. Diversity in AMPAR receptor function is also provided by a range of auxiliary proteins, most notably the transmembrane AMPAR regulatory proteins (TARPs), whose inclusion in the complex influences AMPAR pharmacology and kinetics, and regulates GluA subunit trafficking [36,37]. 
AMPAR function in the nervous system is not restricted to neuronal cells. Indeed, many glial cells exhibit functional AMPAR whose activation regulates a range of important cellular activities including cell migration [38,39], morphological development and re-structuring [40,41], proliferation and differentiation [42,43], transcriptional regulation [44,45], survival [46], and the regulation of various ion channels [42,47,48]. Importantly, overactivation of AMPAR plays a key role in mediating cellular injury to neuronal and glial cells alike. These excitotoxic actions, which largely depend on the excessive influx of Ca 2+ , are described in the following section.

AMPAR Involvement in Cellular Injury

Glutamatergic excitotoxic injury is defined by the overactivation of glutamate receptors due to high or prolonged glutamate exposure. Cell death induced by excitotoxicity is an important feature in both acute and neurodegenerative pathologies [4,49]. After an ischemic stroke, the increase in glutamate levels correlates directly with the severity of the stroke, infarct volume and poorer functional outcome of patients [50,51]. That correlation has also been observed in different rodent stroke models [52]. Similar results were observed in the cerebrospinal fluid (CSF) or cerebral areas of patients after a traumatic brain injury [53,54]. Higher levels of glutamate have also been described in the CSF of infants, correlating with the severity of hypoxic-ischemic (H-I) encephalopathy [55], and indeed levels of glutamate remain elevated for days in animal models of pre-term H-I injury [56-58]. Glutamate excitotoxicity is also an important event in chronic neurodegenerative diseases [49]. Both Parkinson's and Alzheimer's diseases are characterized by a dysregulation of glutamate homeostasis that may damage neurons [59-61], and both serum and CSF from patients with Multiple Sclerosis (MS) and Amyotrophic Lateral Sclerosis (ALS) show higher levels of glutamate [5,62,63]. Excessive levels of glutamate induce cell death by several complex mechanisms that have been previously reviewed [49,60,64]. Importantly, sustained activation of AMPAR, NMDAR and KR under these conditions produces a large increase in intracellular Ca 2+ [65-67] that triggers various cellular injury processes including endoplasmic reticulum (ER) stress, mitochondrial dysfunction and the production of reactive oxygen and nitrogen species [59,66,68-75]. Ca 2+ influx also induces the activation of calcium-dependent enzymes that promote cell damage. This situation is exemplified by calpain, whose activation by elevated Ca 2+ levels promotes the internalization of the plasma membrane Ca 2+ ATPase, leading to further dysregulation of intracellular Ca 2+ and oxidative stress [68,76,77]. These outcomes are related to ER stress since this organelle is highly sensitive to disturbances in calcium homeostasis and the reactive species that ensue under these conditions. Therefore, the ER is highly sensitive to glutamate insults and cytosolic Ca 2+ elevations [68,69,71,78,79]. In addition to these actions, calpain may actively promote ER stress via the modulation of the ryanodine receptor and the sarco/endoplasmic reticulum Ca 2+ -ATPase (SERCA) [71,80-82].
Here, alterations in ryanodine receptors and the SERCA produce further dysregulation of the Ca2+ balance, providing an additional stimulus for the production of reactive oxygen species, which, together with the reactive nitrogen species that also arise due to enhanced Ca2+ levels, promote oxidative stress [64,68,69,72,77,80,83,84]. Ca2+ accumulation and nitric oxide disrupt the electron transport chain in mitochondria, producing additional reactive oxygen species and mitochondrial dysfunction [64,68,77]. Indeed, glutamate-induced mitochondrial failure and ER stress are critical factors in excitotoxic cell death [59,64,68,69,74,77]. Glutamate insult further potentiates oxidative stress by decreasing glutathione and superoxide dismutase activity, and by promoting NADPH oxidase activity, eventually leading to lipid peroxidation, protein nitrosylation and cell death [72,85]. In addition, glutamate excitotoxicity increases Ca2+ uptake by mitochondria, leading to the opening of mitochondrial permeability transition pores and mitochondrial fragmentation [66,74]. High levels of mitochondrial Ca2+ uptake have also been related to the inhibition of mitochondrial respiration and the release of cytochrome c, thus triggering apoptosis [75,86,87]. Glutamate excitotoxicity also increases the expression of the pro-apoptotic Bak protein, while decreasing anti-apoptotic Bcl-2 protein levels [71], and altering the transcription and function of the nuclear factor Y (NF-Y) complex [45], a transcription factor closely linked to the control of apoptotic cell death [88] (see Section 8).

AMPAR in Glial Cells

The vast majority of research into glial cells has focussed on astrocytes, oligodendrocytes and microglia. Much is therefore known regarding the expression and function of AMPAR in these CNS glia, while knowledge on this topic in other glial cells, particularly those of the autonomic system and the ependymal tissues, is limited. Consequently, this review will largely focus on those glia where functional AMPAR have been most thoroughly explored, although we will attempt to summarise the information that is available on other glial subtypes where possible. The following sections provide a review of each glial cell type, considering AMPAR expression, describing the known functions for these receptors in physiological and pathophysiological conditions, and highlighting emergent actions that stimulation of these AMPAR may evoke in the context of nervous system injury and disease. The major findings on AMPAR expression and function in each glial cell type are summarised in Tables 1-4.

Astrocytes

Astrocytes are the most numerous glial cells in the CNS. They are distributed throughout both the grey and white matter, where they perform a myriad of tasks that serve to maintain neuronal function. Astrocytes make extensive contacts with synapses and nodes of Ranvier that enable them to regulate neuronal environments. They achieve this through the re-uptake of transmitters and the buffering of extracellular ions, which together help to provide an extracellular environment that is optimised for efficient axonal and synaptic activity. In addition to their connections to neuronal compartments, astrocytes extend processes that terminate on the cerebral vasculature, through which they are involved in the formation of the blood-brain barrier [89] and in mediating neurovascular coupling to maintain the supply of energy to the brain [90].
Astrocytes also have important functions in guiding CNS development, being involved in both the regulation of myelination [91] and synaptogenesis [92]. Astrocytes are equipped with a diverse array of neurotransmitter receptors that enable them to monitor neuronal activity, and which may provide the capacity to generate feedback signals via "gliotransmitters" whose actions on adjacent neuronal synapses may involve the regulation of basal transmission and of synaptic plasticity [93] (also see [94]). Many of these neurotransmitter actions, including those stemming from glutamate, are associated with the activation of metabotropic receptors [95]. In contrast, less is known regarding the influence of AMPAR in astrocytes. The following sections will review the available literature and discuss the potential functions of astrocyte AMPAR in physiological and pathophysiological conditions. A summary of these findings is presented in Table 1.

Expression and Functional Properties of AMPAR in Astrocytes

Whole-cell voltage-clamp recordings show that astrocytes in most CNS regions exhibit functional AMPAR [96,103–105]. The exception to this rule appears to be the hippocampus, where AMPAR-mediated currents are absent from astrocytes exhibiting high levels of GFAP-GFP transgene expression (cells expressing low levels of the GFAP-GFP transgene are now recognised as oligodendrocyte progenitors) [102]. The subunit composition of astrocyte AMPAR differs from region to region, leading to variability in their permeability to Ca2+. Bergmann glia in the cerebellum exhibit strong expression of transcripts for GluA1 and GluA4, and lower levels of GluA2, leading to AMPAR with a high level of Ca2+ permeability [40,96]. Similarly, astrocytes in the olfactory bulb (OB) express significant levels of GluA1 and 4 protein, and low levels of GluA2 [103]. Although AMPA stimulation evokes a measurable influx of Ca2+ in these cells, AMPAR-mediated currents in these astrocytes are not fully abolished by specific blockade of Ca2+-permeable AMPAR; thus OB astrocytes appear to express a mix of Ca2+-permeable and -impermeable receptors [103]. In other CNS regions, such as the cortex, GluA2 represents a dominant AMPAR subunit in astrocytes, with GluA1 and 4 transcripts being present at considerably lower levels [100]. As expected, AMPAR stimulation fails to elicit Ca2+ influx in cortical astrocytes unless their desensitisation is blocked by the application of cyclothiazide (CTZ) [101], indicating a low permeability to Ca2+. In the spinal cord, immunohistochemical analysis reveals GluA2, 3 and 4 protein on GFAP+ astrocytes [106], while astrocytes in the thalamus exhibit a low level of Ca2+ permeability, potentially due to a dominant expression of GluA2, as revealed by the partial sensitivity of their AMPAR currents to pharmacological agents that selectively block GluA2-lacking receptors [104].

Astrocyte AMPAR Functions Under Physiological Conditions

Despite the abundance of studies exploring the expression of AMPAR in astrocytes (Section 2.2), relatively few physiological functions have been ascribed to these receptors. In vitro studies using primary cultures of cortical astrocytes show that Na+ influx due to AMPAR activation produces a blockade of outward K+ currents through both A-type and delayed rectifier channels [48]. AMPAR activation in cultured Bergmann glia produces similar effects on K+ currents [47,97].
Although this mechanism has not been confirmed in situ, it has been proposed to act as a means of limiting elevations in extracellular [K+] during periods of excessive neuronal activity [48]. Astrocyte AMPAR are likely to be situated in appropriate locations to fulfil this role, since astrocyte processes make extensive contact with synapses [92] and nodes of Ranvier throughout the CNS [107,108] (Figures 1A and 2A). Glutamate has well documented effects on the function of peri-synaptic astrocytes [95], thus AMPAR-mediated effects on outward K+ currents seem a possibility. With regard to peri-nodal astrocytes, internodal axonal segments are known to release glutamate in an activity-dependent manner [109], yet it is unclear whether glutamate released in this way diffuses in sufficient concentrations to stimulate glial receptors located at nodes (Figure 1A), since to do so it must cross the paranodal axoglial junctions (PAJ) that separate the internodal periaxonal and paranodal spaces. The PAJ has in fact been proposed as a route for the traffic of small molecules such as glucose [110], thus it is conceivable that glutamate may also diffuse via this structure to reach glial AMPAR located in peri-nodal spaces. Resolution of these questions could be achieved through the application of a genetically encoded glutamate-sensing molecule, for example the intensity-based glutamate sensing fluorescent reporter (iGluSnFR) [111], which if targeted to astrocytes may be useful for revealing perinodal glutamate release. Alternatively, two-photon imaging may be used to analyse AMPAR-mediated Ca2+ influx at peri-nodal astrocyte processes [112]. Studies of this nature could have wide-reaching impact given the important role astrocytes play in the regulation of myelination [91], and the influence that neuronal activity exerts in guiding oligodendrocyte differentiation and myelination [113]. Additionally, glutamate exerts a multitude of actions in astrocytes via metabotropic glutamate receptors [95], thus evidence in support of an axonal-astrocyte glutamatergic signaling pathway could have significance for our understanding of white matter development that extends beyond the influence of AMPAR. In contrast to the hypothetical functions of white matter astrocytes discussed above, AMPAR located on the processes of astrocytes in the molecular layer of the cerebellum perform an established function in the regulation of excitatory synapses. Here, Ca2+ influx through Ca2+-permeable AMPAR located on Bergmann glial processes is linked to the regulation of glutamatergic synapses on Purkinje cell dendrites [40]. These synapses are enfolded by Bergmann glial processes [114] whose expression of the glutamate transporter GLAST helps define the kinetics of AMPAR-mediated currents at Purkinje cell synapses. The function of these Ca2+-permeable AMPAR, which lack GluA2, is revealed by experiments in which the Ca2+ permeability of these AMPAR is reduced by exogenous expression of GluA2. Forced GluA2 expression leads to a retraction of the glial processes away from synapses, and an increase in the duration of Purkinje cell glutamatergic synaptic currents. These changes likely reflect alterations in the re-uptake of glutamate by Bergmann glial glutamate transporters located on the retracted processes [40]. Evidence for the significance of these synaptic effects is apparent in observations from transgenic mice with astrocyte-targeted deletions of GluA1 and 4.
Here, changes in glial-neuron interactions and synaptic function are correlated with alterations in fine motor control that are consistent with the disturbances in Purkinje cell synaptic function [98].

Figure 1. (A) White matter astrocytes and oligodendrocyte progenitors (OPC) extend processes that make contact with unmyelinated axons and the nodes of Ranvier on myelinated axons. These glial processes contain functional AMPAR, since white matter astrocytes exhibit AMPAR-mediated Ca2+ signals and OPC display AMPAR-mediated synaptic currents. Both unmyelinated and myelinated axons release glutamate via vesicular mechanisms. Glutamate released at unmyelinated axons drives synaptic input onto OPC, but it is unclear whether glutamate released at internodal sites diffuses in concentrations sufficient to activate glial AMPAR at nodes of Ranvier. The functional consequences of astrocyte AMPAR in white matter remain unknown but may include a role in the regulation of outward K+ currents. AMPAR activation influences multiple functions in OPC including migration, proliferation, differentiation and survival. In addition, AMPAR activation in OPC influences events in the nucleus, including the induction of immediate early genes involved in cellular growth. Note, OPC actions depicted also occur in CNS grey matter. (B) Depiction of excitotoxic events and glial cell injury in CNS white matter. Excitotoxic and inflammatory conditions involve an increase in extracellular glutamate levels that damages OL and myelin internodes. Glutamate is released from damaged axons and from gap-junction hemichannels on activated microglia. Excessive astrocyte AMPAR activation may aggravate excitotoxic conditions via the downregulation of astrocyte GLAST. Myelin can be restored by OPC recruited to demyelinated axons. GluA2-GAPDH complexes formed in astrocytes at inflammatory demyelinating lesions may undergo nuclear translocation leading to the initiation of disease processes. Myelin repair involves the recruitment of OPC to demyelinated axons where they establish AMPAR-mediated synaptic connections. Axon-OPC synapses may play a role in guiding OPC to target axons, and in controlling their differentiation into myelinating OL. The modulation of the endocannabinoid system is able to prevent several of these pathogenic pathways. Either the increase of endocannabinoid tone, or the direct agonism of CB1 receptors, reduces cytosolic Ca2+ influx in the oligodendrocyte after an AMPA stimulus. Similarly, increased AEA tone prevents GLAST and GLT-1 downregulation in a mechanism that involves at least the CB1 receptor, and AEA tone increase or CB1/CB2 agonism potentiates GLAST and GLT-1 expression in mouse models of MS. Note, other cytotoxic mediators involved in inflammatory demyelination, such as cytokines and complement cascade components, are not shown for clarity.

Astrocyte AMPAR in Pathology

Astrocyte AMPAR activation has not been directly implicated in CNS injury or disease. Indeed, in contrast to cells in the OL lineage (Section 3.2), astrocytes do not appear to be vulnerable to pathological conditions associated with glutamate-mediated excitotoxicity [106,115]. In fact, AMPAR-mediated excitotoxicity is only observed in cultures of cortical astrocytes when AMPAR desensitisation is prevented by the application of CTZ [101]. Ca2+ imaging failed to reveal glutamate-mediated Ca2+ influx in these cultures, suggesting that cortical astrocytes exhibit AMPAR with low Ca2+ permeability. These findings, and similar observations of excitotoxic resistance in hippocampal astrocytes [116], are supported by RNAseq data showing a dominance of GluA2 in both cortical and hippocampal astrocytes in situ [100]. Thus, low levels of AMPAR Ca2+ permeability may be associated with resistance to glutamate-mediated injury. While excessive AMPAR activation does not appear to induce overt injury in astrocytes, it may produce molecular alterations that could then trigger pathological conditions in the CNS. In this regard, prolonged stimulation of cultured Bergmann glia with AMPAR agonists produces a downregulation in the expression of the glutamate transporter GLAST [99] (Figure 2B). Alterations in glial glutamate clearance due to a reduction in GLAST would be expected to disturb synaptic transmission, and could intensify excitotoxic conditions, thus threatening more vulnerable neurons and OL.
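To make the consequence of reduced GLAST-mediated clearance concrete, the following toy calculation treats extracellular glutamate as a single well-mixed compartment with a constant release flux and Michaelis-Menten uptake; all parameter values are arbitrary and purely illustrative, and the model is a minimal sketch rather than a description of the experiments cited above.

```python
def steady_state_glutamate(release_rate, v_max, k_m):
    """Steady-state extracellular glutamate G for Michaelis-Menten uptake.

    Balance a constant release flux against transporter uptake:
        release_rate = v_max * G / (k_m + G)  =>  G = k_m * release_rate / (v_max - release_rate)
    Valid only while release_rate < v_max; units are arbitrary.
    """
    if release_rate >= v_max:
        return float("inf")  # uptake saturated: glutamate accumulates without bound
    return k_m * release_rate / (v_max - release_rate)

k_m = 20.0      # illustrative transporter affinity (arbitrary concentration units)
release = 5.0   # illustrative constant release flux (arbitrary units)

control = steady_state_glutamate(release, v_max=50.0, k_m=k_m)
glast_down = steady_state_glutamate(release, v_max=25.0, k_m=k_m)  # 50% loss of uptake capacity

print(f"control steady-state [Glu]       ~ {control:.2f}")
print(f"GLAST-reduced steady-state [Glu] ~ {glast_down:.2f}")
```

Because uptake saturates, halving the transporter capacity (a stand-in for GLAST downregulation) more than doubles the steady-state glutamate level in this toy example, illustrating how a seemingly modest change in astrocyte transporter expression could disproportionately intensify excitotoxic conditions.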
On this basis, astrocyte AMPAR may represent a useful therapeutic target despite the apparent insensitivity of these glia to glutamate-mediated injury.

Figure 2. (A) In the healthy CNS, astrocyte (AST) and OPC/NG2-glia processes exhibit physical and functional contacts with neuronal synapses. Ca2+-permeable AMPAR on Bergmann glia sustain the physical interaction between glial processes and neuronal synapses, allowing efficient clearance of glutamate from Purkinje cell synapses. Activation of astrocyte AMPAR also induces a blockade of outward K+ currents that may support neuronal function during sustained periods of neuronal activity. OPC/NG2-glia in developing and adult CNS tissues exhibit Ca2+-permeable AMPAR and receive AMPAR-mediated synaptic input that may regulate their migration, maturation and survival. In the adult CNS, NG2-glia continue to receive synaptic input, the functions of which remain unclear. The role of microglial AMPAR in the healthy CNS remains unknown, although in vitro data suggest a role in chemotaxis [117]. (B) Glial cell AMPAR amplify pathological conditions and mediate glial cell injury in the CNS. Stimulation of AMPAR under excitotoxic conditions induces the release of glutamate from activated microglia via gap-junction hemichannels. AMPAR activation under hypoxic conditions also stimulates the upregulation of microglial AMPAR, leading to an imbalance in anti- and pro-inflammatory cytokine release characterised by enhanced release of TNF-α. TNF-α released from activated microglia may intensify excitotoxic conditions by inducing the downregulation of astrocytic GLT1. Similarly, direct stimulation of astrocyte AMPAR may worsen excitotoxic conditions by inducing the downregulation of GLAST. TNF-α also increases the vulnerability of neurons to excitotoxicity by stimulating increased surface trafficking of AMPAR. Excessive AMPAR activation induces direct injury to OPC via numerous mechanisms including oxidative and ER stress and mitochondrial dysfunction (depicted by red organelles). In addition, excitotoxic stimulation of OPC AMPAR alters the function of the transcription factor complex NF-Y, leading to alterations in the expression of Ca2+-permeable GluA4 subunits and the regulation of genes involved in apoptosis.

Excitotoxicity is a common feature of many CNS disease states including H-I injury, stroke, multiple sclerosis and neurodegenerative disorders including Alzheimer's, Huntington's and Parkinson's disease [118]. Thus, CNS diseases of this type may provide valuable areas in which to search for subtle molecular alterations that could signal an involvement of astrocyte AMPAR in the initiation or propagation of the disease state. In this context, the emergence of RNAseq studies examining cell-specific gene expression in various disease models provides an opportunity to search for disease signatures that could foreshadow an involvement of glial AMPAR. The experimental autoimmune encephalomyelitis (EAE) model of inflammatory demyelination provides a promising model for this line of enquiry, since AMPAR-mediated excitotoxicity has been linked to disease processes in this model [119–121]. In agreement with this, a recent astrocyte transcriptome analysis in the EAE model reveals a down-regulation of GluA4 in spinal cord astrocytes [122]. The consequences of this alteration in AMPAR expression are unclear. However, given the involvement of GluA4 in mediating Ca2+ influx, and the influence of astrocytes on myelin formation and viability [91], it is interesting to consider the detrimental effects that could arise due to an uncoupling of the axon-astrocyte interaction under inflammatory excitotoxic conditions. Although not directly implicating AMPAR activation, astrocytic GluA2 has also been linked to the pathogenesis of EAE. Recent work studying protein complexes containing GluA2 and glyceraldehyde 3-phosphate dehydrogenase (GAPDH), a key enzyme involved in glycolytic metabolism, has identified an increase in the presence of this complex in EAE lesions [123] (Figure 1B). This is significant since excitotoxic stimulation of AMPAR induces cell death in cultured neurons via nuclear translocation of GluA2-GAPDH complexes, where the complex likely promotes death via upregulation of the p53 pathway [124,125]. Interestingly, a cell-penetrating peptide designed to disrupt the formation of GluA2-GAPDH complexes ameliorates both clinical disease and the degree of astrocyte reactivity in EAE [123]. In addition, the peptide reduces inflammation-associated changes in isolated cultures of astrocytes, including increased expression of GFAP and EAAT1/2 proteins, nuclear translocation of GluA2-GAPDH, and p53 activation [126]. These findings highlight AMPAR-mediated GluA2-GAPDH nuclear translocation and p53 activation as a potential mechanism connecting glial AMPAR to inflammatory excitotoxic disease states. Importantly, GluA2-GAPDH complexes are also enriched in MS lesions [123], suggesting that this complex may represent a promising therapeutic target for inflammatory demyelination.
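One practical way to begin the transcriptome survey suggested above is simply to screen published cell-type-resolved differential-expression tables for the AMPAR subunit genes. The sketch below (Python with pandas) is a minimal, hypothetical example: the file name, column names and significance thresholds are assumptions made for illustration, not a real dataset or a published pipeline.

```python
import pandas as pd

# Hypothetical differential-expression table (e.g., astrocytes, EAE vs. control);
# the "gene", "log2fc" and "padj" columns are assumed for illustration only.
de = pd.read_csv("astrocyte_eae_vs_control_DE.csv")

ampar_subunit_genes = ["Gria1", "Gria2", "Gria3", "Gria4"]

hits = (
    de[de["gene"].isin(ampar_subunit_genes)]
      .assign(significant=lambda d: (d["padj"] < 0.05) & (d["log2fc"].abs() > 0.5))
      .sort_values("log2fc")
)

# A consistent, significant shift (e.g., loss of Gria2 relative to Gria1/3/4, or a
# drop in Gria4) would flag a possible change in AMPAR composition or Ca2+
# permeability worth following up with electrophysiology or imaging.
print(hits[["gene", "log2fc", "padj", "significant"]])
```

In practice, the same screen could be extended to auxiliary subunit genes (e.g., TARPs) and repeated across cell types and disease models to look for recurring AMPAR signatures.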
In conclusion, while evidence linking astrocyte AMPAR activation to CNS disease is limited, subtle changes in the molecular characteristics of astrocytes following pathological AMPAR activation may contribute to the initiation and propagation of CNS injury and disease. In addition, AMPAR subunits may stimulate inflammatory disease processes through participation in novel protein complexes. Consequently, further work is required to investigate the links between CNS disease states, disturbances in the astrocyte AMPAR transcriptome, and the intracellular behaviour of GluA proteins.

Oligodendrocytes

Discovered by Pío del Río-Hortega in 1921, OL are the myelinating cells of the CNS [127]. During development, three different waves of OL generation have been identified in the rodent forebrain: the first, arising from OL progenitors (OPC) stemming from the medial ganglionic eminence and anterior entopeduncular area, occurs around E12.5 and is almost completely replaced by early postnatal stages [128]. This wave is followed by a second that originates in the lateral and caudal ganglionic eminences around E16.5 [128]. Finally, around birth a new wave of OPC with a cortical origin populates cortical areas [128], although some studies have indicated that this cortical oligodendrogenesis may occur earlier in development [129]. Concerning cerebellar OL, their origin is mainly extracerebellar, with a likely source being OPC from the ventral rhombomere 1 (r1) that populate the cerebellum by E18.5 [130]. Two different waves of OPC have also been described in the spinal cord [131,132]: the first, around E13, produces most of the spinal cord OL, and the second, starting at E15.5, contributes a smaller population of OL [131,132]. The maturation of the OL lineage is commonly divided into four stages: OPC, late OPC (also known as preoligodendrocytes, preOL), immature OL (iOL) and mature myelinating OL (mOL) [133–135]. OPC are characterized by a bipolar morphology and the expression of markers such as PDGF-Receptor α and NG2 [136,137]. These cells are able to migrate to different regions after damage or during development [138,139]. Of note, OPC remain abundant in the adult CNS where they retain the potential to differentiate into myelinating OL [15]. However, a substantial number of these cells remain undifferentiated, and it is this fact, coupled with their unique physiological connection to neural circuitry (see Section 3.2), that has led to the suggestion that they represent a distinct cell type, termed 'NG2-glia', with functions that extend beyond the generation of myelin-forming cells [15,140,141]. Upon arrival at their target sites, developmental OPC remain mitotically active, elaborate their processes, and lose their migratory properties before differentiating into preOL displaying immunoreactivity to the O4 monoclonal antibody [136,142]. A further stage of differentiation sees preOL transition into iOL, which are post-mitotic cells with a highly complex morphological structure characterized by expression of galactocerebrosidase (GalC, recognized by the O1 monoclonal antibody) [142–144]. Finally, these cells wrap the axon, myelinating it as mOL [145,146]. Traditionally, the main function ascribed to OL has been to myelinate axons. Myelination improves action potential transmission, increasing its speed and saving energy [147]. Thus, a lower conduction velocity has been observed after demyelination [148], or in unmyelinated axons [149].
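The functional benefit of myelination mentioned above can be appreciated with a back-of-envelope comparison of conduction delays. The short sketch below uses textbook-order velocities (roughly 1 m/s for unmyelinated fibres versus tens of m/s for large myelinated fibres); the numbers are illustrative only and are not taken from the references cited in this section.

```python
# Back-of-envelope comparison of conduction delays along a 10 cm axonal path.
# Velocities are illustrative, textbook-order values, not measurements from the
# studies cited in this section.
path_length_m = 0.10

velocities_m_per_s = {
    "unmyelinated (~1 m/s)": 1.0,
    "myelinated (~50 m/s)": 50.0,
}

for label, velocity in velocities_m_per_s.items():
    delay_ms = 1000.0 * path_length_m / velocity
    print(f"{label:<22s} delay over 10 cm = {delay_ms:6.1f} ms")
```

Over a 10 cm path the difference in delay is on the order of tens of milliseconds, which is also the scale over which activity-dependent adjustment of conduction velocity has been proposed to tune circuit timing, as discussed in the plasticity hypothesis immediately below.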
A recent hypothesis has proposed that myelination could contribute to a form of brain plasticity. Here, activity-dependent modulation of myelination on the most active fibers could, by altering their conduction velocities, provide a mechanism for coordinating the flow of information through neural circuits [10,150,151]. Myelin also protects axons from outside metabolites that could be harmful [152,153]. Besides isolating the axon, myelin also plays a role in the energetic metabolism of the axon, as reviewed elsewhere [9]. These metabolic functions involve the provision of energy substrates such as lactate or pyruvate, whose delivery to the axon contributes to the preservation of axon function and neuron survival [153].

Expression and Functional Properties of AMPAR in Oligodendrocytes

AMPAR are expressed at all stages of the OL lineage. OPC express all four GluA subunits, although GluA1 appears to be less prominent [154–158]. Analysis by patch-clamp recording from OPC in primary culture and acute brain slices indicates that OPC AMPAR exhibit rapid desensitisation kinetics and a degree of Ca2+ permeability [155,156,159,160]. In support of this electrophysiological evidence, a number of Ca2+ imaging studies of OPC in cell culture and ex vivo preparations have revealed AMPAR-mediated Ca2+ influx [140,155,161]. Given the prominence of GluA2 in OPC [155], and evidence showing the dominance of the Ca2+-impermeable Q/R-edited form of GluA2 in hippocampal OPC [157], it has been proposed that OPC exhibit both Ca2+-permeable (GluA2-lacking) and -impermeable (GluA2-containing) AMPAR simultaneously [141]. A shift in potentiation by CTZ has been observed between postnatal days 5 and 12, suggesting a developmental increase in the proportion of AMPAR containing the Flip isoform [162]. Both Flip and Flop variants have been detected in the CG-4 OL cell line [163], but the relative expression of these variants in OPC in situ has not, to our knowledge, been reported. Differentiated OL (iOL/mOL) continue to express GluA mRNA and protein, although the combination of subunits present appears to differ from that observed in OPC. RNA sequencing of OPC, iOL (also known as newly formed OL) and mOL indicates that all four GluA transcripts are down-regulated upon differentiation to the iOL stage, although the decline in GluA2 and GluA4 is less marked compared to GluA3 [164]. The decrease in expression continues with the transition to the mOL stage, although as discussed below it is notable that levels of GluA4 transcript remain relatively sustained [164]. In agreement with sustained GluA4 expression, AMPA receptor activation induces intracellular Ca2+ influx in mature OL isolated from the optic nerve and cortex [65,155]. At the protein level, GluA2, 3 and 4 are detected in cultured iOL and mOL [155], and immunocytochemical staining has revealed expression of GluA4 in the soma and myelin of spinal cord OL [106], and in the soma and processes of CNPase-GFP-expressing OL in the optic nerve [165]. In the latter study, GluA2 was not detected in OL, but GluA3 protein expression in OL soma was inferred using an anti-GluA2/3 antibody. In addition, a recent study in human white matter detected localisation of GluA4 protein to MBP+ myelin sheaths [166,167]. In contrast to these data, OL lineage cells within the forebrain white matter of immature rats exhibit immunoreactivity for GluA1, 2 and 3, and a brief expression of GluA4 protein on O4+ late OPC between postnatal days 7-9, which was not observed on GalC+ iOL [168].
The expression and functional properties of OL lineage AMPAR are summarised in Table 2.

Table 2. Summary of AMPAR expression and functions in OL lineage cells.

Oligodendrocyte AMPAR Functions Under Physiological Conditions

It is tempting to hypothesize that the pronounced alterations in AMPAR expression observed during OL differentiation [155,164] are related to the receptors' role in OL maturation and myelination. Indeed, in line with evidence from other neural cell types, AMPAR have been implicated in developmental OL maturation and myelination [39,41,43,46,180] (Figure 1). AMPAR activation seems to have an inhibitory effect on OPC proliferation. This is indicated by the reduction of the proliferative rate after AMPAR stimulation in cell cultures [42], as well as its increase after AMPAR antagonism in organotypic cerebellar slices [41,43]. In agreement with this, a promotion of OPC proliferation has been observed after suppressing axonal release of glutamate [148]. By contrast, AMPAR activation appears to play a role in the recruitment of OPC to target axons [148], perhaps via effects on OPC migration speed [37]. AMPAR also play a key role in OL differentiation, promoting the elongation and branching of OPC processes [41]. Indeed, these receptors seem to participate in the early stages of OL lineage maturation rather than in developmental myelination [46,148], with alterations of these receptors inducing delays in myelination by reducing the number of mature oligodendrocytes [46]. The involvement of OPC AMPAR in developmental myelination has been challenged by recent in vivo work involving mice carrying a conditional deletion of OPC GluA2, 3, and 4 [46]. OPC AMPAR currents are completely abolished under these conditions, yet analysis of OPC in tissues from these mice failed to detect a change in OPC proliferation or differentiation [46]. Instead, the genetic ablation of OPC AMPAR resulted in a decrease in OPC survival. Importantly, parameters of myelination considered to be sensitive to neuronal activity, such as internodal length [181] and number [182], also remained unaltered. These findings are in contrast with data from larval zebrafish and rodent optic nerve, where suppression of neuronal activity and attenuation of vesicular glutamate release induce a measurable decrease in internode number and length respectively [182,183]. Technical differences in the approaches used to modulate glutamatergic actions on OPC could account for these contrasting results. Kougioumtzidou et al. [46] used a non-inducible Cre strategy to delete Gria2 and Gria4 in OPC on a constitutive Gria3−/− background. These triple transgenic OPC lacked functional AMPAR throughout development, thus they would have been insensitive to all sources of glutamate-mediated AMPAR stimulation, be that vesicular or otherwise. One consideration is whether complete starvation of AMPAR-mediated signaling through the life-span may induce compensatory changes in OPC that mask the function of AMPAR in OPC maturation and myelination. In support of this view, recent work using OPC-specific retroviral-mediated modulations that alter OPC AMPAR only during postnatal development has identified an involvement of these receptors in OPC proliferation and differentiation, but not survival [180] (discussed in more detail below). With regard to the study by Kougioumtzidou et al.
[46], it should also be noted that Gria2-, 3- and 4-lacking OPC would continue to receive glutamatergic stimulation via NMDAR [165,184,185], thus activity-dependent glutamate signaling could still influence OL maturation and myelination [185] (but see [186]). In contrast, Mensch et al. [183] and Etxeberria et al. [182] targeted glutamate release, rather than AMPAR expression, thus OPC in these studies could continue to receive stimulation from glutamate released by non-vesicular sources, which may act on both AMPAR and NMDAR. The use of an inducible conditional Gria2, 3, 4 deletion, perhaps via a multiplex CRISPR-based knockout strategy, could help to bring further clarity to the role of AMPAR signaling in OPC maturation and myelination. Notably, OPC AMPAR are activated by vesicular release of glutamate from unmyelinated axons in white and grey matter [141,187–189] (Figures 1A and 2A). The function of these neuro-glial synapses is unknown, but it is hypothesised that they may signal levels of activity within neural circuits, perhaps allowing OPC to regulate their proliferation or differentiation at sites of increased activity [141,190]. In agreement with this idea, AMPAR-mediated input declines upon differentiation of OPC [191], and synaptic activity can induce Ca2+ influx into OPC via AMPAR [159,160], thus the synaptic activation of pro-differentiation Ca2+-dependent intracellular signals seems a possibility. However, recent evidence suggests a role for axon-OPC synapses in regulating proliferation but not differentiation [180]. In this work, increasing the Ca2+ permeability of OPC AMPAR via OPC-specific expression of either non-Q/R-edited GluA2 subunits, or a "pore dead" GluA2 construct, promoted OPC proliferation without affecting differentiation or survival. Thus neuronal activity may influence OPC proliferation via the activation of OPC AMPAR and the subsequent recruitment of Ca2+-dependent signaling pathways. Interestingly, an additional strategy that reduced the proportion of Ca2+-permeable AMPAR in OPC without affecting GluA2 channel properties caused an increase in the size of the OPC population without altering proliferation or survival [180], suggesting further complexities in the influence of AMPAR on OPC development. Contrasts between these findings and those indicating an enhancement of OPC proliferation following AMPAR antagonism in cerebellar slice cultures [41,43] may be explained if bath-applied AMPAR blockers, as used on ex vivo slices, affect additional mechanisms that impinge on OPC functions. One possibility, as highlighted previously [41], would be an effect on neuronal synapses, whose inhibition would be expected to produce similar effects to those seen when neuronal activity is blocked pharmacologically. Of note, both TTX and the AMPAR antagonist GYKI induce a similar stimulation of OPC proliferation in cerebellar slice cultures [41]. Taken together, there is considerable evidence that OPC AMPAR, including those recruited via neuron-OPC synapses, exert influences on OPC migration, proliferation and survival during CNS development (Figure 1A). Interestingly, large numbers of OPC, or NG2-glia, persist in the adult CNS where they continue to receive synaptic input from neuronal circuits (reviewed in [182]). These NG2+ cells seem able to respond to this activity since, like their developmental counterparts [161], they exhibit activity-dependent, neurotransmitter receptor-dependent Ca2+ transients [192].
These observations, and morphological data showing that their processes make intimate contact with multiple neuronal and astrocytic elements, are suggestive of specialized functions within the CNS [192]. Indeed, it has been proposed that NG2+ cells might regulate glutamatergic synapses by modulating postsynaptic AMPAR [193], although this idea remains controversial at this time [194]. Aside from a role in remyelination (Section 3.2), other functions for OPC/NG2-glia in the adult CNS remain an open question. Regarding differentiated OL, both iOL and mOL continue to express AMPAR (Section 3.1), and GluA4 has been detected in rodent spinal cord myelin [106] and human cortical white matter [167]. Ca2+-permeable AMPAR containing GluA4 may contribute to activity-dependent signaling between axons and myelin. Electrical stimulation of an ex vivo optic nerve preparation induces Ca2+ signaling in myelin that is blocked by NMDAR and AMPAR antagonists [109]. These data suggest that AMPAR activation may provide a depolarizing stimulus that relieves the Mg2+ blockade of NMDAR, and indeed imaging in zero Mg2+ revealed a subtle NMDAR-dependent elevation in Ca2+ [109]. This form of activity-dependent signaling involves vesicular release of glutamate: Ca2+ signals were blocked by bafilomycin and tetanus toxin, both of which disrupt vesicular release, and enhanced by hypertonic sucrose stimulation, which encourages vesicle release. Whether or not this form of axon-myelin signaling occurs in other regions of the CNS, and what function it may play in regulating axon-OL interactions, remains unclear. However, it has been proposed that an 'axo-myelinic' synapse may enable mOL to sense activity in target axons, allowing them to adjust the provision of metabolic support [195], perhaps via the supply of lactate [196], or to fine-tune myelin internode parameters such as length and thickness in response to changes in demand [195].

Oligodendrocyte AMPAR in Pathology

The prominent expression of Ca2+-permeable AMPAR in the OL lineage places these cells at risk of injury from excitotoxic conditions (Figure 2B). Indeed, AMPAR-mediated excitotoxicity has been described in numerous in vitro studies using cultures of both OPC/preOL [45,170,171] and differentiated OL [65,163,197]. Importantly, excitotoxicity in these cells is associated with excessive Ca2+ influx [65,171], thus expression of Ca2+-permeable AMPAR, as described in Section 2.2, appears to render these cells vulnerable to high levels of extracellular glutamate. In support of this idea, specific antagonists of Ca2+-permeable AMPAR reduce injury in OL subjected to oxygen-glucose deprivation [171], while forced expression of Q/R-edited GluA2 subunits protects OPC from AMPAR-induced excitotoxic injury [173]. Given the links between Ca2+ influx and cellular injury responses in OPC, the degree of Ca2+-permeable AMPAR expression in OPC would be expected to regulate vulnerability to excitotoxic injury. In this respect, it is interesting that group 1 mGluR activation leads to an increased surface expression of Ca2+-permeable AMPAR in OPC [198]. This mechanism, which involves specific TARP-dependent trafficking of the Ca2+-permeable GluA4 subunit, may act to amplify the sensitivity of OPC to elevated levels of extracellular glutamate. However, at this time the links between mGluR activation and OPC excitotoxicity remain unclear. AMPAR-mediated injury has been observed in a number of in vivo models involving OL and myelin injury.
For example, the AMPAR antagonist NBQX reduces OL and myelin loss in rats subjected to a weight-drop spinal cord contusion injury [179], and as discussed below, ameliorates inflammatory demyelination in the EAE model of MS [119,120]. These observations are significant since excitotoxicity is implicated in a number of pathologies (described in Section 1.3), including cerebral white matter injury (WMI) and MS [199,200]. In premature infants, WMI encompasses a range of neuropathological alterations of the cerebral white matter [201], and survivors often go on to develop long-term neurological disabilities [199,202]. Within the framework of pre-term conditions, H-I damage has been specifically correlated with the maturational stage of the OL lineage, with preOL being identified as particularly vulnerable in several species [115,203–205], including human [206]. For a detailed discussion of perinatal WMI, see the recent review by Back et al. [199]. Importantly, within the OL lineage, preOL in developing white matter exhibit the greatest abundance of GluA4 [168,207], a Ca2+-permeable subunit highly expressed in neural cells exhibiting vulnerability to excitotoxic death [208]. Notably, topiramate and NBQX reduce OL death in rodent models of neonatal H-I injury [172,178]. Thus, the protection of OL via the targeting of Ca2+-permeable AMPAR, or relevant downstream pathways, represents a potential strategy for the prevention of WMI in pre-term infants. However, despite these promising pre-clinical findings, it should be noted that other pathogenic events are likely to be relevant, since preOL are also very sensitive to increased levels of TNF-α and oxidative stress, both of which are characteristic of WMI [209–211]. Related results are obtained after stroke in neonatal and adult rodent models. In both cases, cerebral ischemia is followed by a decrease in the mOL population [212,213], as well as by OPC proliferation and migration into the damaged areas [213,214]. mOL death is reproduced in vitro by AMPA but not NMDA administration, and is reduced by AMPAR antagonists after oxygen and glucose deprivation [197,215]. The administration of the competitive AMPAR antagonist SPD 502 mimics this oligoprotection after cerebral stroke induced in adult rats [216]. Interestingly, this effect is regionally restricted since the oligoprotection is specific to cortical OL. As mentioned above, brain OL are heterogeneous both in respect of their developmental origins and their differentiation rates in grey and white matter [217,218]. Therefore, deeper studies to analyze how AMPAR modulation affects specific OL populations are warranted if AMPAR-targeting therapeutics are to be developed further. A number of observations link glutamate dysregulation and OL AMPAR to pathology in MS. First, MS is characterized by high levels of glutamate within the CSF [5,62]. Second, an increase in levels of the Ca2+-permeable subunit GluA1, but not GluA2, has been observed in oligodendrocytes located next to MS plaques in post-mortem CNS samples of MS patients [219]. Third, greater levels of GluA2-GAPDH complexes are detected in MS plaques [123], and disruption of the GluA2-GAPDH complex has been shown to produce a therapeutic/protective action in EAE [123] (see Section 2.3 for a related discussion on this topic). Fourth, Ca2+-permeable AMPAR are implicated in CNS pathology in EAE, where OL death, axonal damage and demyelination are reduced in mice lacking Gria3, the gene encoding the Ca2+-permeable GluA3 subunit [220].
In agreement with these latter findings, other work in the EAE model encourages the use of AMPAR as therapeutic targets in MS, since blockade of these receptors via subcutaneous injection of NBQX reduces clinical disease and protects OL [119,120]. These promising preclinical results from models of MS [221], and also stroke [216,222], must be weighed against the negative findings from clinical studies that have observed worsened outcomes following the use of AMPAR-targeting drugs in patients with acute ischemic stroke [223,224]. These disappointing results may be explained by the low specificity of the drug used, ZK200775, and the wide expression of these receptors in CNS circuitry, where their inhibition may be expected to produce numerous unwanted side effects that interfere with any potential therapeutic benefits. As discussed above (Section 3.2), AMPAR influence the development and survival of OL. Consequently, protection of OL from excitotoxic conditions associated with CNS inflammation may not be achievable without compromising the regenerative capacity of OPC (but see [172]). Nonetheless, glutamate dysregulation plays wider roles in MS pathogenesis via actions on T cell functions such as migration, cytokine secretion, and even glutamate release [5], thus therapies focused on controlling glutamate levels may provide significant benefits that extend beyond oligodendrocyte protection. The activation of OPC AMPAR by axonal synaptic input may play a role in regulating OPC-mediated remyelination (Figure 1). New OPC recruited to lesions express AMPAR, which may be involved in both migration and the early stages of myelination [39,148]. Within the caudal cerebellar peduncle, axons demyelinated by infusion of ethidium bromide establish de novo glutamatergic synapses with recruited OPC, while infusion of antagonists of specific voltage-gated Ca2+ channels, selected to act specifically on axonal channels, reduces the degree of remyelination [148,176]. Thus, synaptic activation of OPC AMPAR appears to be important for remyelination. Another study tracked the occurrence of synaptic inputs on OPC following lysolecithin lesions in the corpus callosum [176]. Here the authors report a disruption of synaptic input shortly after demyelination that correlates with a reduction in immunohistologically identified axonal synaptic connections on proliferating OPC in the lesion. Interestingly, a recovery in synaptic innervation coincides with a reduction in OPC proliferation, suggesting that AMPAR activation may inhibit cell division [176]. This observation agrees with recent studies indicating a role for synaptic Ca2+-permeable AMPAR in stimulating OPC proliferation [180], and with data from brain slice cultures and in vivo studies showing that blockade of axonal activity increases OPC proliferation [41,148,225,226]. However, blockade of activity in ex vivo brain slices also produces an increase in OL differentiation [41] that was not observed under in vivo remyelinating conditions [148]. These contrasting outcomes may be explained by differences in experimental models (ex vivo vs. in vivo) and the developmental status of the tissues examined (developing white matter vs. adult remyelinating tissue). Despite these differences, the data from these studies, and many others from a variety of different CNS systems, clearly indicate an important role for axonal activity and AMPAR in the regulation of OL myelination [113,181–183,225,227] (Figure 1).
In conclusion, the excessive activation of OL AMPAR poses numerous threats to the viability of OL and the myelin they support. This loss of myelin compromises neurological function in a wide range of CNS disease states, thus therapies that protect OL from excitotoxic AMPAR-mediated injury are an important clinical goal. Nevertheless, AMPAR are powerful regulators of OL development and survival, and may also play as yet unidentified roles in adult NG2-glia. Therefore, AMPAR-targeting therapies capable of protecting OL may induce unwanted side effects, and potentially hinder other aspects of OL regeneration and myelin repair. Further basic and pre-clinical research will be necessary to fully identify the functions of OL AMPAR, and to determine the benefits and feasibility of targeting these receptors in a therapeutic context.

Microglia

Microglia are highly abundant CNS glia, whose origin within the embryonic yolk sac [6], and macrophage-like functions, distinguish them from all other parenchymal glia in the nervous system. In the healthy brain, microglia exhibit highly motile processes that scan the local tissue landscape, seemingly searching for signs of disease or injury whose detection may trigger them into acquiring an activated amoeboid phenotype capable of launching inflammatory responses and engaging in phagocytic activity [228]. Microglia are considered to serve a number of protective roles that collectively help to maintain a healthy environment within the CNS. These functions include the removal of dead and dying cells, the regulation of synapse number via pruning of excess connections, and the production of various neurotrophic factors capable of influencing the development and survival of other neural cells [22]. In addition, microglia express major histocompatibility complex (MHC) I and II molecules, particularly under pathological conditions including perinatal hypoxia, thus roles in the detection of pathological conditions and the stimulation of adaptive immune responses are ascribed to these cells [229]. In addition to protecting the healthy CNS, microglia contribute to the generation of pathological conditions via the synthesis and release of cytotoxic molecules such as nitric oxide, reactive oxygen species and pro-inflammatory cytokines [230]. They also contribute to CNS repair processes by releasing neuroprotective factors, reducing inflammation, and encouraging regenerative processes such as OPC differentiation and remyelination [230]. These contrasting roles in CNS protection and injury have frequently been cast within a model where microglia exhibit polarised states, the so-called M1 (pro-inflammatory)/M2 (anti-inflammatory, pro-repair) phenotypes [231]. However, a number of observations, such as the co-existence of cardinal M1/M2 molecules within microglia in vivo, have led this dichotomy to be described as unsuitable and unhelpful for the understanding of microglial biology within the intact CNS [232]. Nevertheless, microglia certainly exert profound and varied influences in the context of CNS protection, immune activation, injury and repair, whose regulation by AMPAR may contribute to the induction and progression of a number of disease states.

Expression and Functional Properties of AMPAR in Microglia

Consistent evidence for functional AMPAR expression in microglia has been reported in experiments on cultures of cortical rat microglia [233].
Reverse transcription PCR indicates that these cells express GluA2, 3 and 4 transcripts [234], although a subsequent study using a more sensitive quantitative RT-PCR method detected GluA1, 2 and 3 mRNA [235]. Expression of GluA1 has also been shown at the protein level in rat cortical microglia via immunocytochemistry [236]. The functionality of these AMPAR has been confirmed by patch-clamp analysis, where CNQX-sensitive glutamate currents are detected [233]. These currents were potentiated by CTZ and showed little Ca2+ permeability [233], consistent with the presence of GluA2 transcripts. Interestingly, experiments with AMPAR modulators show that microglial glutamate currents exhibit a greater degree of potentiation by CTZ compared to PEPA [234]. CTZ is selective for Flip variants, while PEPA exhibits a greater affinity for Flop subunits, thus these data suggest a dominance of Flip splice variants in microglia [234]. In line with this finding, transcript analysis in the same study revealed a predominance of GluA1-3 Flip variants, a configuration that may render cells more sensitive to excessive glutamate levels (Section 1.2). In contrast to the in vitro studies described above, evidence for microglial AMPAR expression in vivo is more uncertain. AMPAR expression has been detected by immunostaining for GluA2/3 and 4 in OX-42-positive cells in rat forebrain sections [237], although this work was done under permeabilising conditions, hence it is not clear whether the immunosignals represent surface-expressed receptors. Importantly, immunostaining did not identify receptor subunits on the surface of retinal microglia, and patch-clamp recordings from microglia have failed to detect glutamate-mediated currents in a range of CNS tissues including spinal cord, hippocampus and retina (reviewed in [238]). Interestingly, hypoxia causes a marked increase in microglial GluA2-4 protein expression within forebrain periventricular white matter [237], and hippocampal microglia express GluA4 protein only after exposure to an ischemic injury [239]. These in situ observations suggest that microglial AMPAR expression may be low or absent under healthy conditions, but may be induced in microglia activated under pathological conditions [238] (Figure 2). Furthermore, the contrast between these findings and those from in vitro studies suggests that the physiological properties of microglia may differ significantly under cell culture conditions. Indeed, differences in the expression of voltage-gated K+ channels are observed when patch-clamp recordings are made from microglia in acute vs. cultured hippocampal brain slices [240]. Findings on the expression and functional properties of microglial AMPAR are summarised in Table 3.

Table 3. Summary of AMPAR expression and functions in microglia.

Microglial AMPAR in Pathology

Evidence linking AMPAR activation to microglial functions under physiological conditions appears to be limited to effects on chemotaxis, where glutamate stimulates a directed migration of microglia in cell cultures and spinal cord slices via AMPAR [117]. In contrast, several interesting actions are associated with the activation of microglial AMPAR under pathological conditions that are linked with excessive glutamate signaling.
Studies using a rat model of periventricular white matter (PWM) injury, and microglial cell cultures exposed to hypoxia, have identified links between hypoxia-induced elevations in glutamate and the regulation of protective and inflammatory microglia-derived substances [237] (Figure 2B). This work shows that hypoxia causes increases in PWM concentrations of both glutamate and IGF-1, a neurotrophic factor associated with the protection of neurons [241] and oligodendrocytes [169,170] following H-I injury. Importantly, elevated IGF-1 protein is localised to microglia with an activated amoeboid morphology that also exhibit enhanced expression of GluA2, 3 and 4 subunits [237] (Figure 1B). While increased expression of IGF-1 may play a protective role following this injury, the increase in AMPAR expression in these cells appears to prime them for a more pathological response, since a subsequent exposure to glutamate simultaneously attenuates IGF-1 production while promoting the release of TNF-α and IL-1β [237]. This inflammatory response is mechanistically linked to the reduced levels of IGF-1 seen after glutamate stimulation, since knock-down of IGF-1 gene expression promotes TNF-α and IL-1β gene expression. Overall, the study by Sivakumar et al. [237] suggests that increased expression of microglial AMPAR in hypoxic PWM tissue sensitises microglia to glutamate, leading to a regulatory response that reduces protective IGF-1 release while increasing the release of inflammatory mediators. Other evidence suggests that microglial AMPAR activation, and the subsequent production and release of TNF-α, may contribute to the propagation of CNS disease states (Figure 2B). First, cell culture and in vivo experiments show that the activation of microglial AMPAR stimulates the production and release of TNF-α [234,237]. Second, TNF-α signaling is heavily associated with a number of glial-dependent actions, including the release of glutamate via gap-junction hemichannels from microglia [242] and astrocytes, Fas ligand release from microglia, and the downregulation of EAAT2/GLT1 glutamate transporters in astrocytes, which together trigger neurotoxicity in the CNS [2]. Third, TNF-α released from astrocytes stimulates the trafficking of AMPAR to the surface of hippocampal and cortical neurons, rendering them more sensitive to excitotoxic conditions involving excessive levels of extracellular glutamate [243]. Fourth, conditioned medium containing TNF-α released from kainate-stimulated microglia induces the upregulation of voltage-gated Ca2+ channels and NMDA-type glutamate receptors in hippocampal neurons, and is associated with neuronal apoptosis [244]. In summary, microglial AMPAR-mediated TNF-α release, acting through a range of autocrine and paracrine routes, is positioned to elevate extracellular glutamate concentrations, release apoptosis-inducing signals, and increase the sensitivity of neurons to excitotoxicity. As outlined in Section 1.3, excitotoxic injury is linked to a wide range of CNS injury and disease states [1,4], thus further investigation of the microglial AMPAR-TNF-α axis, and its actions in propagating excitotoxic conditions, is warranted. In this regard, a promising area for research can be found in the context of MS, where impaired cognitive function may be linked to pathological alterations in synaptic structures that arise early in the course of the disease [245].
Research using the EAE model of inflammatory demyelination shows that TNF-α contributes to the mechanisms driving these synaptic dysfunctions, at least within the striatum, where these pathophysiological actions are correlated with the activation of microglia and astrocytes [246,247]. While the involvement of microglial AMPAR in these actions is unknown, the established links between their activation and the release of TNF-α [234,237] indicate that they could be capable of amplifying an initial inflammation-induced elevation in extracellular glutamate into additional excitotoxic insults. Microglial AMPAR have also been implicated in neurodegenerative disease processes through a series of cell culture studies examining the regulation of GluA subunit surface expression in activated microglia [248]. Here, the authors examined the hypothesis that down-regulation of GluA2, as is observed in a number of neurodegenerative conditions including MS [249], drives neurotoxicity via increased release of pro-inflammatory cytokines from microglia. Cultured microglia generated from a global GluA2 null line exhibited a greater degree of Ca2+ permeability, and enhanced TNF-α production following AMPAR activation. In line with these data, conditioned medium from GluA2−/− microglia exerted a greater neurotoxic effect on cultured neurons. Thus, as proposed by Noda and Beppu [250], the loss of microglial GluA2 under pathological conditions may serve to accelerate neuronal injury and loss via alterations in the subunit composition of their AMPAR that favour Ca2+ entry and inflammatory cytokine release. While this hypothesis provides an attractive link between microglial AMPAR function and disease, further work should be performed using a conditional deletion of GluA2 in microglia to refine the loci of action and confirm the relevance of this mechanism in relevant in vivo disease models. In summary, microglia are equipped with AMPAR whose activation may help to synchronise the production and release of inflammatory molecules with inflammatory, excitotoxic conditions within the tissue. These actions could trigger de novo excitotoxicity or may contribute to the propagation of an ongoing disease state. Based on these actions, therapies targeting microglial AMPAR represent a valuable ambition, the achievement of which may provide a means to modulate neurotoxicity in a number of neurodegenerative CNS conditions. AMPAR in Other Glial Cells AMPAR are expressed in glial cells beyond the principal types described in Sections 2-4, yet their functions in these 'other glia' are less characterized. Nonetheless, these glia, which in this review include Radial Glia, Schwann Cells, Satellite Glia and Enteric Glia, have important functions in both the CNS and PNS. Therefore, AMPAR might yet play significant roles in their behavior under physiological and pathophysiological conditions. Here we present a brief overview of AMPAR in these other glial cells. See Table 4 for a summary of the expression and functional properties of AMPAR in these glial cells. Radial Glia Radial glia (RG) give rise to astrocytes, neurons, and oligodendrocytes during embryonic development [16,262,263]. In primary cell cultures of RG-like neural progenitor cells (NPC), qPCR reveals a high level of GluA1 mRNA, and lower levels of GluA2, 3 and 4, while immunostaining detects GluA1, 2 and 3 protein expression on RG/NPC expressing the glial marker GLAST [251].
The functionality of these AMPAR is evident in Ca2+ imaging data, where transient AMPA-induced elevations in intracellular Ca2+ are enhanced by CTZ and reduced by philanthotoxin, a specific antagonist of Ca2+ permeable AMPAR [251]. In contrast, another study revealed high levels of GluA3 and 4 mRNA, and an absence of GluA1 and 2, in NPC [252]. In agreement with these data, immunohistochemical staining reveals strong GluA3 and 4 localisation to RG in the developing white matter of rodents [168], while GluA4 is localised to fetal human RG [207]. Differentiation of RG/NPC into astrocytes or neurons involves an increase of GluA2 and a decrease of GluA3/4, thus suggesting an increase in the proportion of Ca2+ impermeable AMPAR [168,207,252,253,264]. These findings suggest a role for glutamate acting via AMPAR in RG proliferation and differentiation. In agreement with this idea, AMPAR are involved in the regulation of RG process length and RG motility [265]. RG-like cells are also present in the adult subgranular zone of the hippocampus [166,253]. In contrast to developmental RG, these adult RG-like cells possess AMPAR that are mainly composed of GluA2 subunits, and which are expressed in the cell's processes but not in the soma [166]. The administration of kainate promotes RG-like cell proliferation and survival, both in vitro and in vivo, in an animal model of status epilepticus [253]. Very little is known about AMPAR expression in ependymal cells derived from radial glia, although GluA2/3 expression has been found in cell bodies and proximal thick processes of tanycytes [254]. Schwann Cells As mentioned in the introduction (Section 1.1), Schwann cells are the myelinating cells of the PNS. As in the CNS, PNS axons extend significant distances, thus in addition to the benefits associated with OL myelination, such as enhanced conduction velocity and axonal isolation (Section 3), trophic support arising from myelinating Schwann cells is likely to be important for sustaining the viability of axons located long distances from the cell body [152]. Distinct from the myelinating forms, the perisynaptic Schwann cells (PSCs) are non-myelinating cells that are part of the neuromuscular junction (NMJ) formed between the presynaptic nerve terminal and the postsynaptic specialization [264]. These cells play an important role in the growth and maintenance of NMJs as well as in the modulation of their synaptic properties [264,266,267]. In comparison to OL, much less is known regarding the expression and function of AMPAR in SCs. During development, mRNA expression of all AMPAR subunits is detected in mouse sciatic nerve, although it is not clear if this expression arises from the axon or glial cells [256,268]. While GluA2/3 and GluA4 are detected by electron microscopy in Schwann cells of the vestibular system of rats and guinea pigs, only GluA1 and GluA4, but not GluA2 or GluA3, proteins are found in primary SC cultures [255,258]. Patch-clamp recordings from developing SCs in peripheral nerve tissue and cell cultures reveal functional Ca2+ permeable AMPAR [256,257]. A detailed study performed in sciatic nerve preparations from mouse pups at embryonic day 16-18 or postnatal day 0-2 has shown that AMPAR are modulated during development, with functional AMPAR expressed only in developing SCs, while more mature myelinating SCs are unresponsive to glutamate [256]. Furthermore, the amount of AMPAR might also be modulated in developing SCs [256].
However, the role of AMPAR in SCs is still not clear. Metabotropic glutamate receptor activation induces SC proliferation, whereas glutamate-induced migration is also NMDAR-dependent [268][269][270]. AMPAR activation on PNS myelinated axons induces an increase in axoplasmic Ca2+, although myelin abnormalities are only observed after prolonged activation of NMDAR, but not AMPAR [271]. AMPAR activation has also been found to control ATP release after glutamate stimulation in SC cultures [258]. Satellite Glia Among satellite glial cells (SGCs), those of the sensory ganglia have been the most studied [25]. SGCs are in close contact with neurons, ensheathing them and playing a role in neurotransmission and glutamate homeostasis [25,259,260,272]. To that end, SGCs express several glutamate transporters, metabotropic glutamate receptors, NMDAR, KR and AMPAR [259,260]. Regarding AMPAR, SGCs express mainly GluA4 and, in lesser quantities, GluA2/3, but no GluA1 [259,273]. AMPAR activation induces a rapid Ca2+ influx in SGCs, in line with their expression of GluA4 [259]. In addition, a role for SGC NMDAR has been described in pathological situations like hyperalgesia [274]. It is therefore tempting to hypothesize that AMPAR could also play a role in the SGC response after damage. Enteric Glia Enteric glia are non-myelinating glial cells of the enteric nervous system; localized within the wall of the intestines, they are critical in the control of gut motility [275,276]. Despite showing some similarities to astrocytes, enteric glia display a unique transcription profile [275,276]. The connection between enteric neurons and enteric glia is not well characterized, although ionotropic glutamate receptors have been found in both cell types [261,275]. Concerning AMPAR, only GluA1 and GluA3 subunits have been found in enteric glia, but the Ca2+ permeability of the functional AMPAR has not yet been fully elucidated [261]. AMPAR are involved in excitotoxic damage to enteric neurons in studies performed in myenteric ganglia [277,278]. Although it is unclear whether these pathological actions involve AMPAR located on enteric glia, their expression of GluA proteins suggests they may also be targets during ischemic injury in these enteric tissues. Cannabinoids and AMPA Receptor Several studies have pointed out that cannabinoids and the Endocannabinoid System (ECS) might modulate glutamatergic transmission, including AMPAR. This modulation has been principally studied in neurons, although glial receptor involvement has also been described [279][280][281]. The medical use of Cannabis sativa has been explored for millennia worldwide [282]. Modern research in the cannabinoid field started during the last century with the isolation of Δ9-tetrahydrocannabinol, or THC [283], a discovery that eventually led to the recognition of the ECS. The ECS is an endogenous system that plays important physiological roles, modulating neuronal synapses, the immune response, and energy and metabolism regulation, among others [284][285][286][287][288]. As has been recently reviewed, the ECS modulates OL lineage cells and myelination [289]. Different elements of the ECS are expressed in all OL stages, and observations that the ECS is modulated during OL maturation suggest that it may play a role in OL differentiation [290,291]. The endocannabinoid 2-arachidonoylglycerol (2-AG), and the activation of cannabinoid receptors CB1 and CB2, promote OPC proliferation, migration and maturation, and oligodendrocyte myelination [290][291][292][293][294][295].
Indeed, several pharmacological studies have shown that reducing 2-AG levels, or blocking either CB1 or CB2, reduces myelination, both in vitro and in vivo [291,296]. Based on the links between the ECS and OL lineage functions described above, the modulation of the ECS has been studied as a possible treatment for demyelinating diseases like MS [289]. A cannabinoid-derived drug, Sativex, is approved to treat MS spasticity and pain [297,298], and in different animal models of MS, cannabinoid treatment based on cannabidiol alone, or cannabidiol plus THC (Sativex), reduces myelin injury, inflammation and functional impairment [299][300][301]. The modulation of 2-AG levels effectively reduces motor symptoms, along with inflammation and demyelination [302,303]. Some of this protective effect is mediated by the modulation of cytosolic Ca2+: either the inhibition of the 2-AG degrading enzyme, or the direct administration of 2-AG, prevents the AMPAR-mediated increase in intracellular Ca2+ and the subsequent pattern of mitochondrial dysfunction and ROS production [302]. Similar results are obtained in cultured OL when CB1 receptor agonists are administered after AMPAR activation, or K+-induced depolarization, with both producing a reduction in intracellular Ca2+ [302,304] (Figure 1B). These processes might involve the activation of Gi/0 proteins, and the blockade of Kir channels or voltage-gated Ca2+ channels (VGCCs) [304,305]. The administration of the phytocannabinoid cannabidiol (CBD), a promising therapeutic molecule, also modulates intracellular Ca2+ via mechanisms involving mitochondria, and can, under specific conditions, induce OL cell death [306]. In spite of the controversy concerning the deleterious effect of CBD on OL cell cultures, some authors have identified a dose-dependent response [306,307]. Furthermore, CBD has been reported to be oligoprotective after inflammatory damage [307], and to reduce excitotoxicity in several pathologies such as stroke or neonatal hypoxia-ischemia [58,[308][309][310]. Results from our group support this dual effect: while the exposure of mouse organotypic cerebellar slices to CBD resulted in a reduction of the OL population, CBD applied before and during an excitotoxic insult was able to reduce OL cell death (Ceprian M, Ng J, Fulton D, unpublished). Promising oligoprotective findings from CBD treatments suggest that more research in this area is warranted. An important area for this work relates to the mechanism of CBD action on glia, which at this time remains unclear. After an excitotoxic insult, there is an increase in endocannabinoids and endocannabinoid-like molecules, probably released by neurons [308,311]. Interestingly, the administration of the endocannabinoids 2-AG or anandamide (AEA), or of the synthetic cannabinoid agonist HU-210, reduces cell death after AMPA administration in a mixed culture of astrocytes and neurons [221,311]. The neuroprotective action of HU-210 is also seen in vivo and requires astrocytes and CB1 and CB2 activation [221]. AEA also exerts its neuroprotection via CB1 and CB2 receptors, and by preventing the AMPAR-induced downregulation of the astrocyte glutamate transporters GLAST and GLT-1, as evidenced in both primary cultures of astrocytes and an in vivo model of MS [311] (Figure 1B).
In agreement with these results, the agonism of both cannabinoid receptors by WIN55,212-2 increases GLAST and GLT-1 expression in the spinal cord of EAE-induced animals [312], while another study in the EAE model has found a decrease in AEA and 2-AG, which, given the actions on GLAST and GLT-1 described above, would be expected to contribute to glutamate excitotoxicity [313] (Figure 1B). Finally, the administration of the phytocannabinoids THC and CBD modulates the expression of neuronal GluA2-AMPAR in rodent models of drug abuse [314,315]. Further research to analyze whether a similar AMPAR subunit modulation occurs after cannabinoid administration in excitotoxic models could help to elucidate the protective action of cannabinoids on glial cells. AMPAR-Stimulated Gene Expression in Glial Cells: Contributions to Injury and Disease? Activity-dependent, receptor-mediated gene expression, particularly of immediate early genes (IEGs), regulates key functions in the CNS including development and synaptic plasticity [316,317]. Gene expression in glial cells can also be regulated by neuronal activity, for example in astrocytes, where neuronally derived Notch signals regulate a broad range of genes, leading to alterations in astrocyte development, metabolism and neurotransmitter uptake functions [318]. As described in Section 2.1, astrocytes express functional AMPAR that allow them to respond to glutamate released from neuronal synapses. Given the broad range of astrocyte genes regulated by neuronal activity [318], it is tempting to speculate on the potential of AMPAR to contribute to these actions. For example, AMPAR activation on Bergmann glial cells leads to a downregulation in transcription of the glutamate transporter GLAST via a mechanism involving Ca2+ influx, PKC signaling and the activation of the IEG c-jun [99]. GLAST downregulation was observed with prolonged exposures to glutamate. Triggering of this regulatory response during periods of elevated extracellular glutamate, as occurs in pathological conditions such as MS [121], stroke and hypoxia-ischemia [4], could therefore lead to an amplification of the pathophysiological levels of glutamate (Figure 2). AMPAR have also been shown to regulate IEGs in cortical cultures of OPC [44], where stimulation of AMPAR induced a Ca2+-dependent upregulation of ngfi-a and other IEGs (Figure 1A). Ngfi-a (also known as zif-268, egr-1 and krox-20) is implicated in regulation of the cell cycle [319], and is induced dramatically in the brain in a depolarisation-dependent manner [320]. Thus Ngfi-a activation by AMPAR may contribute to the regulation of OPC proliferation observed following modulation of neuronal activity and AMPAR [41,148,210]. AMPAR stimulation inhibits OPC proliferation [210], thus pathological elevations in glutamate concentrations could act to reduce the supply of remyelinating OPC. As discussed in Section 3.3, excessive AMPAR activation injures OL lineage cells via Ca2+-dependent mechanisms involving mitochondrial stress and the induction of pro-apoptotic Bcl-2 molecules [65,169,321]. Apoptosis is closely regulated by the induction of pro- and anti-apoptotic Bcl-2 genes [322,323], so the pathways connecting pathophysiological AMPAR stimulation to apoptosis may provide an interesting range of targets for therapeutic research. Related to this idea, we recently examined the transcriptional events induced in OPC following pathophysiological AMPAR stimulation [45].
To focus our work, we searched for potential regulators of Gria4, the gene encoding GluA4, via an in silico analysis. Gria4 was an attractive target since it has prominent expression in OPC, and has been linked to excitotoxic injury in other cell types [208]. From among the set of candidate regulators we selected NF-Y subunit b (NF-Yb), a member of the NF-Y complex whose activity is closely associated with the regulation of apoptotic cell death [88,324]. Experiments using an OPC cell line revealed that excitotoxic AMPAR stimulation altered NF-Yb expression, modulated its binding to regulatory regions within Gria4, and altered the expression of Gria4 transcripts. Thus NF-Yb is regulated by pathophysiological stimulation of AMPAR, leading to alterations in the expression of target genes. Further experiments using primary OPC showed that excitotoxic stimulation produced a parallel increase in the expression of NF-Yb and its target gene Gria4. Interestingly, GluA4 protein is upregulated in OL in an in vivo hypoxia model [237], thus it is tempting to speculate on the involvement of NF-Yb in these actions. NF-Yb is linked to the control of apoptosis [88,324], thus we performed a transcriptomic analysis to identify genes that were differentially regulated by NF-Yb modulation and excitotoxicity. For this analysis we compared the effects of a treatment with Garcinol, a compound that both blocked NF-Yb binding to Gria4 and reduced cellular viability, with an excitotoxic AMPAR stimulation proven to injure OPC. Both treatments induced transcriptional regulation in the same set of apoptotic genes, underscoring the link between NF-Y function, excitotoxic injury, and apoptosis in OPC (Figure 1B). As described in this review, AMPAR are ubiquitous in neurons and glial cells, so the clinical use of systemic AMPAR blockade is likely to produce numerous side effects that may complicate the evaluation of therapeutic outcomes. Cell-specific targeting could reduce these problems, but the technologies available are not translatable, and in any case, AMPAR influence numerous physiological functions in glial cells whose modulation may worsen disease conditions. Consequently, molecular targets downstream of pathological AMPAR activation, for example those contained within the NF-Y transcriptome, represent an attractive proposition for research aiming to provide protection against pathological conditions involving AMPAR-mediated injury. Summary AMPAR are expressed widely by glial cells throughout the nervous system. Their activation modulates numerous cellular actions in glia, including ion channel function, gene expression, migration, various aspects of growth and differentiation, and even the expression and subunit composition of AMPAR themselves. Additionally, at the tissue level, glial AMPAR influence homeostatic functions and shape neuronal function by regulating myelin production and modulating synaptic function. Numerous pathological conditions, including several notable neurological and neurodegenerative conditions, involve dysregulation of extracellular glutamate levels. These conditions can lead to excessive AMPAR stimulation, triggering injury responses within glia and releasing additional excitotoxic signals in the form of glutamate and cytokines.
Glia are a key feature of the nervous system whose normal activities maintain an environment that is optimal for healthy development and neuronal circuit activity; thus, the identification of therapeutic approaches capable of protecting glial functions and viability under excitotoxic conditions is a critical target for future research.
2019-05-22T13:31:42.672Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "be9ee3695f511c0b12287f9665317e3d1a86df2f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/20/10/2450/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "be9ee3695f511c0b12287f9665317e3d1a86df2f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
225181105
pes2o/s2orc
v3-fos-license
“IT DOESN’T MATTER HOW MANY (CASES) YOU GOT, IF YOU LOVE THE JOB, YOU CAN MANAGE EVERYTHING”: MANAGEMENT STRATEGIES UTILISED BY FRONTLINE SOCIAL WORKERS Nevashnee Perumal, Pius Tanga The adoption of the social development approach in South African social service organisations continues to challenge and stretch organisations in many directions. The frontline social worker navigating this terrain, carrying the bulk of direct services and undertaking various management tasks, is confronted with personal trauma, resource constraints, organisational issues, ethical dilemmas as well as the pressure of inclusive and representative service delivery. An exploratory descriptive qualitative empirical study using a case study research design was undertaken with the main aim being to explore and describe the management tasks of frontline social workers in the NPO sector in Port Elizabeth. Semi-structured individual interviews were held with frontline social workers and one focus group was held with middle managers. The study's findings revealed the aspects contributing towards undertaking management tasks, the experiences of executing management tasks and the consequences of doing so. This paper presents the management strategies utilised by frontline social workers. Dr Nevashnee Perumal, Social Development Professions, Department of Social Work, Faculty of Health Sciences, Nelson Mandela University, Port Elizabeth, South Africa. BACKGROUND AND LITERATURE REVIEW provide the following broad definition of social work management: "management is certain activities performed by social workers at all administrative levels within human service organisations that are designed to facilitate the accomplishment of organisational goals." Management tasks in social work are therefore executed by the frontline social worker, the middle manager/supervisor and the top level manager/director. Lewis, Packard and Lewis (2012:4) operationalise the definition of social work management as follows: "management is the process of making a plan to achieve some end, organising the people and resources needed to carry out the plan, encouraging the helping workers who will be asked to perform the component tasks, evaluating the results, and revising the plans based on this evaluation". In recent years increasing attention has been drawn to the roles of frontline social workers in executing their daily responsibilities (Hepworth, Rooney, Strom-Gottfried & Larsen, 2010). Frontline social workers are social workers who primarily render direct social work services to individuals, groups and communities. According to Patel (2005), and based on the social development approach that frames post-apartheid South African social work practice, some of the roles of frontline social workers include individual problem solving, couples or family therapy, groupwork services, educator, broker and case manager. Coulshed, Mullender, Jones & Thompson (2006) hone in on the management functions of social workers by suggesting that all social workers perform management functions, since practitioner skills (or direct social work services) in social work include managerial skills; they further indicate that social workers, at all levels, have to perform specialised management tasks. 
Although Weinbach and Taylor (2015) confirm that management is a team function, frontline social workers in the South African NPO sector often "manage" in silos, because of the crisis nature of their work as well as the lack of human resources and middle management capacity within their organisations, naturally resulting in unintended consequences for these organisations. Furthermore, the South African welfare sector committed social service providers to adopting the social development approach to welfare, aligned to the democratic objectives rooted in social justice and a macroeconomic focus (Patel, 2005;Patel, Schmid & Hochfeld, 2012;Rautenbach & Chiba, 2010). The social development approach, being fundamentally different from apartheid social welfare provision, created additional demands on service delivery in the NPO sector (Rankin & Engelbrecht, 2014). Newer frontline social workers who had been trained in the theoretical foundations of social development, were joining organisations where most of the experienced social workers had never been trained in this approach. Besides making communication, supervision and intervention planning challenging, because both parties were not starting launching from the same level, frontline social workers also saw themselves fulfilling more formal management roles within their social service organisations in order to remain responsive to the macro-level needs of the social development approach (Department of Social Development (DSD), 2006), and to keep their organisations operational. The social development approach was anticipated to accelerate transformation by addressing issues of race, class, gender and spatial imbalances because of its macroeconomic focus, emphasising the interdependence between social and economic development (Midgley, 1995;Patel, 2005;Gray, 2008;Patel, 2009;Rautenbach & Chiba, 2010;Weyers, 2013;). Emanating from the social development approach was the term 'developmental social work', which authors such as Patel (2005), Gray (2008) and Rautenbach and Chiba (2010) contexualised as social work services that are geared towards holistic, planned intervention, which places human and social concerns at the centre of social welfare policy and planning. Developmental social work is strengths-based and further geared towards bringing about social change, using a systems approach (focusing on the person and environment, and the interaction between the two), and based on the principles of social justice, equality, ubuntu, democracy and social change (Midgley, 1995;Gray, 2008;Patel, 2005;Patel, 2009). All social workers, and by implication all social work managers, are trained to be agents of social change (Sewpaul, 2013) especially in the South African social work context, where transformation (change) underpins developmental social work and occupies a prominent space in the social development approach to welfare (Patel, 2005;Rautenbach & Chiba, 2010). Hence management at all levels in social work, viz. boards of directors, top managers, middle managers as well as frontline social workers, needed to re-assess their prevailing strategies and align these to the social development approach. In order to remain responsive to the clients and communities they served, frontline social workers, middle managers and top managers in the NPO sector needed to transform their intervention strategies as well as their management roles on a micro, meso and macro level (Patel, 2014). 
For the frontline social worker this meant incorporating management tasks into their daily 'direct services' function, thereby also having to perform the following tasks traditionally performed exclusively by middle and top management: analysing situations and conceptualising what is happening; identifying problems and opportunities for addressing them; balancing competing goals; setting priorities for themselves and others; handling finances responsibly and reducing expenses whenever possible; and working effectively with others who may not share all of the same values (Weinbach & Taylor, 2015:8). Therefore, it may be argued that the implementation of developmental social work imposed new demands on an already overstretched public as well as private social welfare sector (Rankin & Engelbrecht, 2014), since this approach dictated a shift in the culture and managerial practices of organisations rendering welfare services, to expand their services in order to reach less developed, under-resourced and more marginalised communities (DSD, 2012). Ultimately, social work practice and its alignment to professional social work goals rest on the shoulders of social service managers (Patel, 2014), a role in which the frontline social workers increasingly see themselves. It is further noted by Rankin and Engelbrecht (2014) that the ability of frontline social workers to undertake management tasks would be dependent on the following inter-related management skills: technical, interpersonal and conceptual skills. According to Reyneke (2014), lower management levels (the level at which the frontline social worker would fall) require 42% technical skills (the ability to use methods, processes and techniques in social work), 50% interpersonal skills (communication, relationship, conflict resolution and leadership skill) and 8% conceptual skills (motivational skills and teamwork). According to Lewis et al. (2012) and Menefee (2004), managers need a good deal of education, training and development because of the multitude of technical and interpersonal skills necessary to perform management tasks in social work. In the South African context, social work management training and development are diluted because of the challenges of an inequitable apartheid past, leaving communities socially disabled, prioritising direct social work service delivery and neglecting management capacity in social work (Engelbrecht, 2012;Patel, 2005;Russell & Swilling, 2002). Management alludes to shaping and exerting influence over the work environment. Social workers in the formal role of managers strive to ensure that the work environment is conducive to maximum productivity, which includes the "promotion of desirable activities" and "efficient delivery of services to clients" (Weinbach & Taylor, 2015:6). Ideally, management is intended to be mainly proactive in nature. This does not mean that management may not be reactive, since situations facing organisations usually do not give fair warning of arising, such is the dynamic nature of South African social work. However, what it means is that reactive management should form only a small part of managing an organisation. When reactive management is required, it means that there is an unanticipated problem facing the organisation and the management techniques employed should be rational, fair and prompt. This will allow the organisation to rebalance itself in a short space of time, ensuring that staff continue to deliver on the mission of the organisation. 
This applies especially in child protection services, where vulnerable children will be placed at further risk if the organisation is unbalanced for too long. According to Weinbach and Taylor (2015), good management practices create the stepping stones for achieving organisational goals, ultimately resulting in an organisation's success. Frontline social workers undertake a series of management tasks which are underpinned by the management functions of planning, organising, leading and control. Coulshed et al. (2006) contend that management is a function that all social workers undertake, implying that management is not a function reserved only for middle and top management. Weinbach and Taylor (2015:8) identify the following non-exhaustive list of the various management tasks: • Analysing situations and conceptualising what is happening; • Identifying problems and opportunities for addressing them; • Balancing competing goals; • Setting priorities for themselves and others; • Working effectively with others who may not share all of the same values; • Representing the organisation to staff members and in the community; • Serving as a role model for paid staff and volunteers; • Keeping others on track; • Resolving interpersonal conflicts; • Adhering to and ensuring that others adhere to ethical standards; • Handling finances responsibly and reducing expenses whenever possible; • Making difficult (often unpopular) decisions; • Supporting decisions of others with which they themselves may disagree. Weinbach and Taylor (2015:8) also contend that managers do everything else except render direct services to clients, which is contrary to the assumption of this study. This study is premised on the notion that frontline social workers also fulfil the stated management tasks in addition to rendering direct services to clients and communities. Hence, frontline social workers need to possess a combination of interpersonal, conceptual and technical skills (Reyneke, 2014) so as to meet all their direct service demands as well as to manage these demands. Pretorius (2014) further highlights some essential social work management tasks, such as workload management, time management, information management, risk management and change management, but relates them specifically to middle management positions. Not taking into account the management tasks that frontline social workers engage in creates an unbalanced view of the extent of the frontline social workers' role in the organisation. This study therefore explores the nature of the management tasks undertaken by frontline social workers and their experiences of doing so. RESEARCH METHODOLOGY, AIMS AND OBJECTIVES A sample of 19 frontline social workers and 6 middle managers from 3 NPOs (coincidentally all child protection agencies) in Port Elizabeth participated in this qualitative study, premised on achieving the following specific objectives: • To explore and describe the factors in the NPO sector that necessitate frontline social workers executing management tasks; • To examine the nature of the management tasks that frontline social workers in the NPO sector undertake; • To explore the experiences of frontline social workers in respect of their management tasks in the NPO sector; • To determine the consequences of frontline social workers executing management tasks; and • To propose a framework to support frontline social workers in the NPO sector in respect of executing management tasks. Data were collected in two phases.
Phase 1: semi-structured individual in-depth interviews with frontline social workers, and Phase 2: a focus group with middle managers. Group prompts were generated from data gathered in Phase 1. Because of the depth of the overall study findings, this paper reports on the findings only pertaining to following objective: To determine the consequences of frontline social workers executing management tasks. The intention of this paper is to present and discuss the empirical evidence obtained on the management strategies utilised by frontline social workers. The other four study objectives will be reported on in separate papers. The main theme garnered from the findings under this objective was the development of frontline management strategies, which appeared to be a key consequence of undertaking management tasks. This theme, with its corresponding subthemes, will be discussed in this article. Table 1 provides an overview of the themes and subthemes identified from the findings. Table 1 illustrates that the development of frontline management strategies by frontline social workers was the main consequence of undertaking management tasks. The subthemes that will be discussed are: workload management, time management, relationship management and self-management. Pseudonyms are used to identify the participant responses. These are indicated as P1, P2, MM1, MM2, etc. "P" indicates participant and "MM" indicates middle manager. Theme 1: Development of frontline management strategies The main consequence of frontline social workers undertaking management tasks was that they had to develop strategies to enhance their workload management, time management, relationship management and self-management tasks. Subtheme 1.1: Workload management The findings revealed that frontline social work participants found it extremely useful to be fully aware of the details of all their cases. Reading their casefiles, making summaries and knowing the contents of reports prevented embarrassing situations from arising when social workers were expected to answer questions in court. This is reflected in the words of the two frontline social work participants quoted below: Read the files, the small summaries there and make your own summary perhaps. (P4) It's very embarrassing when you go to court and you do not know the contents of the report when you compiled the report. (P14) The frontline social workers indicated various ways of managing their workloads, since managing workload means having control over the service one renders to one's clients on a caseload (Calitz, Roux & Strydom, 2014;Strydom, 2010). Because of their excessive workloads, social workers have developed strategies to remain in control by getting to know every client on their caseload. Relationship building is an important skill for social workers as it enables growth and development for clients. Good interpersonal relationships result in a better understanding between clients and frontline social workers and therefore provide greater opportunity for sustainable social change to occur in the helping relationship. Utilising flexible ways of thinking and managing the administrative aspects of cases, such as making small summaries, indicate the frontline social workers' personal interest in a client/family. Preparing for court is also essential as it demonstrates that the social worker has anticipated the complexity of the case and has anticipated the possible contingencies when making recommendations in the court report. 
Exhausting all options prior to opening a Children's court inquiry demonstrates that the social worker has the best interests of the child at heart, as stipulated in the Children's Act No. 38 of 2005 (Republic of South Africa (RSA), 2006). As statutory work is administratively intensive and time-sapping in nature, social workers preferred to engage in prevention services with clients. The study found that rendering prevention services is perceived to benefit families more in the long run, as opposed to statutory intervention. The following frontline social work participant, who works at a drop-in centre, stated that: The middle managers in the focus group collectively expressed the view that frontline social workers found prevention programmes more rewarding, because they do not entail labour-intensive reports and crisis management. This is evident in the response of one focus group participant, as quoted below: I think this is why they [social workers] sometimes love the programmes that they are doing, because it's a little bit different from all the people that they see and all the reports and things. Then they can also use the skills and make use of what they've learned, especially in the schools they do the programmes at. (MM1&3) Prevention services entail interventions such as awareness about abuse, information on where to access birth documents, parenting programmes and budgeting on a child support grant, to name a few. According to the Integrated Service Delivery Model (ISDM) (RSA, 2006), prevention services are rendered prior to families needing statutory intervention. In addition, prevention programmes do not necessarily require lengthy investigations and reports. The programmes reach more people because they are generally community-based, or they are located in schools where large groups of children are targeted. Contrary to the findings of this study, Strydom (2012) postulates that child protection organisations have not been rendering prevention programmes because there is little funding for them. Instead, statutory services, which receive subsidies from the DSD, were the services that child protection organisations prioritised. Another frontline social work participant highlighted the various intervention strategies she uses with children as a means of managing her workload for effectiveness: The findings of this study further revealed that frontline social workers have to be innovative in designing interventions for children because children need help to make sense of their feelings and to build trust. Therefore, the inclusion of more playful techniques rather than a conversational style of interviewing will naturally work better with children, as evidenced in the study undertaken by Chinakidzwa, Dika, Molefe, Mutasa, Yawathe, and Perumal (2013). Subtheme 1.2: Time management Coming to work very early and completing work at home was a strategy used by some frontline social workers to manage time, as indicated by three frontline social work participants quoted below: The two frontline social work participants quoted below indicated that having a system of return dates in place is beneficial in assisting them to manage their time and their workload: Another time-management strategy utilised by frontline social workers to reach a large number of clients was to offer community awareness workshops on social issues prevalent in certain communities. 
One frontline social work participant confirms this: The workshop is for the community, to empower them with knowledge on the social issues that are there such as: substance abuse, domestic violence, child neglect, sexual abuse. We do it through workshops. (P17) Two frontline social work participants advised that it was worthwhile to plan so as to use their time productively. However, it was also necessary to know one's limitations and have realistic expectations of how much can be achieved, as expressed in the responses below: Use your time as productively as possible. Realise that you can only do so much in one time. It is not possible to handle sixty to eighty cases effectively and efficiently at the same time. (P1) I would say that even if you plan and it doesn't work out, you must (still) plan for your week with your basic stuff. And learn to say NO I cannot get to that today, but I will get to it. You've got to learn to accept your limitations. You can do so much and no more. (P7) Efficiency is directly related to how a social worker manages his/her time (Weinbach & Taylor, 2015). The findings above reveal that frontline social workers devised different strategies to enable them to manage their time effectively so that they may efficiently manage their work. It is evident that beginning work earlier than the official organisational starting times was one strategy used to manage time. Taking work home was another strategy. This is unfortunate, as it increases the working hours of the social worker and is bound to result in personal stress over a period of time (Coulshed et al., 2006). On a more positive note, however, it is evident from the findings that frontline social workers used innovative time-saving strategies to report to funders and to reach community members, such as sending funders photographs, maintaining a Facebook page and holding community workshops in areas in which there was a collective need for information. Some authors contend that setting goals, prioritising tasks and planning ahead are useful timemanagement strategies (Pretorius, 2014;Weinbach & Taylor, 2015). Evidence of this contention is that frontline social workers organised their return dates along the lines of what was due weekly and what was due monthly, so as not to let court orders lapse. However, this study's findings correspond with those of Michie (2002), who notes that frontline social workers must know how much work they can get through in a day without causing personal and professional stress to themselves. Subtheme 1.3: Relationship management The frontline social work participants highlighted the need to meet and get to know each client on one's caseload as a beneficial strategy for relationship management between the social worker and the client. This is evident in the words of one frontline social work participant: Furthermore, when clients progressed into positions of leadership at school, the participants felt a sense of pride because, as the quote from one participant below indicates, these clients have risen above the adverse conditions they face at home: One of my foster children is now the head girl of a high school and she is doing very well, and she was accepted at NMU [Nelson Mandela University] for next year. One of my boys in the Children's Home is now the deputy head boy in his school; he is also a child out of a household that is very, very poor and there are lots of drugs involved with the family. Now he is doing well. 
(P5) Managing relationships with stakeholders is described as useful when services are required, such as pooling together with the DSD for transport or the Department of Health or requesting the services of the SAPS. Participants shared their thoughts on the matter: The social work profession is premised on relationship building since it is a helping profession, based on working with people (Hepworth et al., 2010). Strategies such as getting to know each client, innovating for poor clients, showcasing clients' talents, understanding children's stages of development, and maintaining open communication with stakeholders were seen as beneficial. In order for change to occur in clients' lives, clients need to feel respected, valued and significant. These principles contribute to building trust and enhancing the self-worth of clients so as to facilitate change in their lives. When working with small children and adolescents, it is especially important to balance the understanding of their stages of development with the needs of funders. Forcing a child to engage against his/her will is detrimental to the child's development and inevitably delays progress in respect of bringing about change in that child's life. Once clients are comfortable with the frontline social worker, a deeper understanding emerges. To effect intrapersonal change is challenging, hence frontline social workers measure the appreciation shown by clients as well as client achievements as rewarding. Therefore, there is a need for frontline social workers to engage in relationship management strategies, as outlined in the findings of this study. According to Claeyé (2014), trust building is also a key component of building stakeholder relationships. Stakeholders in the NPO sector are significant partners in keeping with contingency theory, since frontline social workers cannot function in a vacuum. Child protection services demand that the Department of Health, Home Affairs and the South African Police Service are involved when determining the best interests of the child. Effective communication is enhanced by keeping promises, providing timeous feedback and requesting assistance from stakeholders in advance. Valuing input and ideas as well as maintaining professional boundaries further enhances stakeholder relationships. Subtheme 2: Self-management The need for self-management emerged as a significant consequence of frontline social workers' undertaking management tasks. Keeping a diary, maintaining a weekly planner and ticking off items on a checklist were some of the concrete self-management techniques used by frontline social workers to assist them in remembering tasks that must be completed. Participants shared the following: Self-preservation is enhanced by acknowledging support systems within the work environment. These support systems include colleagues and supervisors, as described below: The social worker will have to learn to speak to her colleagues and her supervisor, to make sure that she debriefs every day. Because they do burnout very quickly, I've seen it happen in our organisation. (MM1&6) Besides having support systems in the work environment, according to the participant listed below, it is important to have a safe place to retreat to outside of the work environment: Another self-management strategy that was identified, was the need to self-reflect, weigh the pros and cons, and remain calm when work situations become tense. 
Reflecting on situations before reacting is cited by the frontline social work participant quoted below as more productive and less self-destructive: A couple of frontline social workers indicated that personal self-reflection is an absolute necessity as it enables one to engage in rational decision making. Should emotions cloud a social worker's actions, situations may become tense, which may result in stress and burnout. The two frontline social work participants quoted below caution in favour of personal self-reflection on the opinions that social workers hold, and the need to always prioritise the best interests of the clients: Although self-reflection is magnified in the findings as a strategy to manage frontline social work tasks, self-reflection is an attribute that is gained with experience. All frontline social workers may not necessarily be capable of the kind of self-reflection that leads to unbiased and rational viewpoints. Some may require the assistance of their supervisors to help develop this attribute. According to the one frontline social work participant, reflection and considered opinions come with experience: No, obviously it's a skill that has developed over time. I would say that maybe 20-25 years ago I would probably also have reacted at first. My initial reaction would have been "upset" and maybe I would have spoken sooner. (P1) Upholding professional integrity by showing compassion, respect, engaging in open communication and being flexible in relations with clients was found to be critically useful as a quality that frontline social workers should possess. The words of the frontline social work participants quoted below are reflective of the need for professional integrity as essential to self-management: Be yourself with the people and handle everyone with respect … doesn't matter who they are. What I am there to do is to give clients respect, confidence and compassion. (P7) The findings further reveal that frontline social workers run the risk of operating in mechanical ways so as to be in control of their work. With the workload being so heavy, and time being limited, it is easy to slip behind with work, which will then result in poor self-management. One frontline social work participant fell into this trap of transforming into a "robot" so as to manage her day: According to Calitz et al. (2014), social workers experience stress and burnout as a consequence of their workload and poor time management, and they fail to engage in self-care (Jackson, 2014). This study's findings reveal that frontline social workers used various methods to stay ahead of their workload, such as keeping diaries and planning in advance, irrespective of whether crises would affect their plans. This strategy gave frontline social workers a sense of control over their workloads, which contributes to an overall sense of wellbeing for the social worker. Frontline social workers also reported that supervisors are key role players in acting as buffers against stress and trauma, thereby enhancing self-preservation and preventing burnout. Coupled with this, engaging in hobbies and having support systems outside the organisational environment contributed to self-care. This finding further addresses a recommendation, made by Calitz et al., (2014), that more support be provided to social workers in the form of supervision and support groups. 
Another self-management strategy that frontline social workers employed was being assertive and expressing their concerns to the relevant structures, as opposed to remaining silent and self-destructing. According to Patel (2005), the social worker fulfils the role of advocate in discharging his/her duties as a social worker. By implication, it makes sense for advocacy to begin from within, on a micro level of functioning. Although authors document the value of self-advocacy, this is sometimes difficult for social workers to engage in without self-reflection (Stewart & MacIntyre, 2013). According to the participants in this study, personal self-reflection enables the social worker to respond to issues in a thoughtful and developmental manner, as opposed to becoming irrational and agitated, which results in stress. Participants indicated that work experience (predictive knowledge) allows for constructive self-reflection. It may also be argued that a pre-condition for self-reflection is a safe supervision environment that is enabling rather than controlling (Coulshed et al., 2006), and in which the social worker is respected and nurtured. Upholding professional integrity by engaging in respect, care, compassion, flexibility and hope, and instilling confidence in clients, was also seen to be a beneficial self-management strategy. In contrast, the findings also revealed that the workload sometimes makes social workers function in mechanical ways, which may ultimately be detrimental to client services. Working mechanically inevitably results in not being in tune with one's clients (Hepworth et al., 2010). CONCLUSIONS This article presented the findings in respect of the consequences of frontline social workers undertaking management tasks. The key consequence was that frontline social workers developed strategies to manage workload, time, relationships and the self. Based on the findings, a number of broad conclusions may be drawn and recommendations made. • The variety of work that frontline social workers had to manage was rewarding, but also stretched their capacity. Workload management was monotonous and boring, which, in some instances, resulted in social workers becoming mechanical in their operations. Work pressure resulted in frontline social workers becoming desensitised and detached from societal ills. In a similar vein, the emotional and personal pressures precipitated by frontline management tasks resulted in diminished self-worth and confidence for the frontline social worker. Mutual trust between the supervisor and supervisee, as well as debriefing among colleagues, relieved frontline social workers' stress. Having the backing of the Children's Act instilled confidence in frontline social workers and middle managers. • Building rapport with clients and stakeholders contributed positively to relationship management. A deeper understanding emerged from good relationships, and effecting change with client systems was more constructive. • Extending working hours by coming to work earlier and taking work home, balanced with maintaining work/personal life boundaries, was a beneficial time-management strategy. In addition, the use of social media within the ethical parameters of the organisation reduced the hours spent on writing formal reports to funders.
• Resourcefulness on the part of frontline social workers was evident in the strategies they developed to enhance their workload management, time management, caseload administration, relationship management, management of volunteers and self-management. Consequently, the need for self-care, such as having hobbies, socialising outside the work environment and positive family relationships, was realised. RECOMMENDATIONS Below is a list of the practice recommendations flowing from the conclusions reached above; they are clustered according to the subthemes presented in this paper. An additional subtheme, viz. organisational support, was included. Workload management • The caseloads of frontline social workers should be manageable, guided by the relevant child protection frameworks, and adequate time should be apportioned to each case in consultation with the supervisor. • A focus on prevention programmes should dominate in the strategic plans of NPOs, because prevention programmes reach more people in a shorter time and avoid statutory intervention, which requires much time and workload management. Prevention programmes also allow for creativity and reduce much of the administration that individual and groupwork demands. • Frontline social workers should assess the estimated time, support and resources required to deal with each client and each crisis, to assist with developing a vision for managing their caseloads. Time management • Frontline social workers should advocate for the use of non-traditional administrative methods and reporting within the ethical parameters of the profession, e.g. audio recording interviews and getting social auxiliary workers to transcribe them, taking photos of how funds were utilised, and using social media platforms to display community engagements and sponsorships. Relationship management • The organisation should schedule quarterly meetings with stakeholders such as the South African Police Service, Department of Social Development, Department of Home Affairs, Department of Justice and Department of Health to renegotiate and cement relationships, given the dynamic nature of the environmental factors facing the NPO sector. • All clients should be informed by frontline social workers of the crisis nature of the work when scheduling appointments, so that clients are aware that appointments may need to be rescheduled in response to crises and do not become agitated and negative towards frontline social workers. Self-management • The use of collegial support/peer debriefing by frontline social workers is encouraged so as to get through difficult cases. • Frontline social workers should commit to self-care by identifying and utilising safe and nurturing spaces outside the office environment for respite. • A persistent engagement in self-reflection by frontline social workers is recommended so as to reduce reactivity and burn-out in difficult circumstances in the office and in the community. • Frontline social workers are encouraged to engage in activism in respect of their working conditions, e.g. safety within communities, access to resources, salaries, etc. Organisational support • Frontline social workers should be actively nurtured by middle and top management, and these relationships need to be enhanced with a strong focus on the supportive component of supervision. • There should be biannual evaluations held with frontline social workers so as to proactively ensure their emotional and physical wellness.
• Non-threatening trust-building exercises for frontline social workers and supervisors, in the form of team-building activities, should be incorporated into the strategic planning of the organisation.
• Training on strengthening the application of core legislation, such as the Children's Act and other legislation pertaining to families, should be commissioned biannually.
• Organisations should partner with higher education institutions as well as professional boards to commission further research on management tasks in the public sector so as to share best practices.
2020-10-28T18:06:39.946Z
2020-08-20T00:00:00.000
{ "year": 2020, "sha1": "a2e43693eb1e7752e1cf31c14753c45093d3596f", "oa_license": "CCBY", "oa_url": "https://socialwork.journals.ac.za/pub/article/download/855/766", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "423ae5b9acf92741469dda4ca47e6bf52c95e44f", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
235364798
pes2o/s2orc
v3-fos-license
An experimental investigation of resilience decision making in repeated disasters Given the growing prevalence of catastrophic events and health epidemics, policymakers are increasingly searching for effective strategies to encourage firms to invest in resilience rather than relying on insurance or government assistance. Too often, however, resilience research focuses on decisions made by firms and emergency planners in the context of “one-off” events. We extend this research by examining resilience decision making in the more realistic context of repeated catastrophic events. Using a population of professional managers of middle market firms and a university experimental economics subject pool, we conduct a series of controlled experiments on the decision to invest in inventories to improve firm resilience to repeated catastrophic events. While existing economic and supply chain resilience research has focused on resilience in terms of avoiding some magnitude of economic losses, existing research omits a focus on the probability of those losses. Controlled experiments can evaluate the influence of probability more effectively than observational data by better controlling for magnitude and more easily accounting for repeated events. We find that decision makers are less likely to make resilience investments when a disaster has recently occurred. We further find that advisory information alone is insufficient to motivate resilience investments by firms. It must be substantiated by a history of advisory accuracy. However, we find that this effect is heavily moderated by the type of advisory information provided; we find that firm managers are much more likely to trust precautionary advice. Introduction The specter of both human-induced and natural disasters has led to a great deal of planning for low probability and high damage events. In the midst of the COVID-19 global pandemic, it is a particularly appropriate time to examine the factors that affect decision makers' willingness to follow precautionary advice. Experience with previous epidemics shows that one of the important factors in convincing people to heed such advice is the credibility of the source (Van Bavel et al. 2020). Often, however, literature examining the preparation for these potentially large disruptions treats them as one-off events and ignores that decision makers likely update their assessments of the risk of future disasters based explicitly or implicitly upon the occurrence, or lack of occurrence, of past disasters. An important and often overlooked part of an economy's resilience to such disasters is the business continuity decisions individual businesses make to build resilience capacity. Our research addresses this considerable gap in the literature by evaluating individual-level resilience decisions and their determinants in the context of repeated disaster events. The effectiveness of any policy aimed at increasing business or organizational resilience to disasters will partially be a function of how business leaders assess threats and are willing to act on advice to sacrifice current profits to protect against potential future losses due to catastrophic events. When making such cost-benefit calculations, even if decision makers have a strong sense of the potential losses due to disruptions in their business operations, the nature of most catastrophic events is such that the probability of a 1 3 disruption in a finite period is typically unknown. 
Decision makers likely update initial risk probability assessments over time in response to whether and how often such events occur. Furthermore, business leaders rarely make important investment recommendations such as these without input from others. This type of advice, and, in particular, the perceived accuracy and trustworthiness of the advice, also likely play a role in driving these investment decisions. While research addressing individual decision making has focused on how individual attributes affect risk perception or risk preference, we instead draw on prospect theory (Kahneman and Tversky 1979) to attempt to examine the extent to which the decisions to invest in resilience in the context of repeated events are influenced by factors such as confirmatory bias, recency bias, and the law of small numbers. These human tendencies can lead to vexing strategic investment challenges. For example, if a region experiences a "500-year flood," how does one convince decision makers that this does not absolve them of preparing for similar future disasters? This is not just an academic exercise, as Houston faced so-called 500-year-floods in 2016 and then again in 2017 with Hurricane Harvey (Popovich and O'Neill 2017). Thus, while we expect cumulative repeated exposure to disasters leads decision makers to be more likely to invest in resilience as decision makers reassess risk probabilities, prospect theory leads us to expect a lower probability of resilience investment immediately subsequent to a disaster. Further, we expect advice provided to the decision makers to influence their decisions, particularly when experience show the advice to be accurate or when the advice is more consistent with the decision maker's underlying beliefs. To examine the resilience decision making of firms in the context of repeated catastrophic events, we conduct a series of controlled experiments on the decision to invest in inventories to improve firm resilience. An investment in inventories is a well-known and common firm-or household-level resilience tactic (Rose and Liao 2005;Dormady et al. 2019a, b). We use a sample of both professional managers of middle market firms and a university experimental economics subject pool. The use of a controlled experiment represents an important advancement to existing resilience literature that relies on field data, as such data cannot be used to disentangle the effect of a first disaster on investment decisions in the same way. It is important to note that while controlled experiments are often employed to test tenets of prospect theory, our goal is instead to draw upon prospect theory while using our experiment to help shed additional light on how policy can help support resilience-enhancing decisions. Next, we define resilience and then review some of the literature examining the factors that affect individual resilience-enhancing investments. This paper contributes to the standard conceptualization of resilience by incorporating the probabilistic aspect of resilience and incorporating this into the context of repeated disasters. We provide a detailed description of our controlled experiment that models decision making in the context of the repeated events followed by results. While we do not find that cumulative exposure to more disasters affects resilience decision making, we do find that a recent disaster leads decision makers to be less likely to make resilience investments. 
While this is consistent with expected behavior via prospect theory, it may also represent a significant challenge for investment planning. Importantly, we find that advisory information does strongly influence resilience investment decisions and can help moderate some of the biases identified in prospect theory. The accuracy of that advisory information further amplifies this effect. Taken as a whole, the results suggest an important role for public sector leaders in fostering economic and supply chain resilience, as well as the resilience of the broader communities and economies that rely upon firms. Defining resilience Resilience has gained increasing attention across multiple domains-from business and management sciences (Tang and Tomlin 2008;Hosseini and Barker 2016;Kamalahmadi and Parast 2016;Brusset and Teller 2017;Chowdhury and Quaddus 2017) to engineering (Hollnagel et al. 2006;Youn et al. 2011;Shafieezadeh and Burden 2014;Hynes, et al. 2020); from ecology (Carpenter et al. 2001;Kerkhoff and Enquist 2007;Webb 2007) to economics (Rose 2004;Rose and Liao 2005;Martin and Sunley 2014); and from sociology (Cacioppo et al. 2011;White et al. 2014;Tierney 2019) to geography (Cutter et al. 2008;Miles and Chang 2011;Martin 2018). Across each of these disparate domains, numerous definitional distinctions have been offered, with domain-specific nuance applied to each. Comparing across these multiple domains, Rose (2009Rose ( , 2017 has generally found more commonalities than differences, and Naderpajouh et al. (2018) call for a more interdisciplinary approach to managing resilience. Despite the commonalities, domainspecific nuance can often preclude research advancement, necessitating an explicit definition of resilience at the outset. As the intent of this research is not to make a definitional contribution, but rather to advance a generalized decisionmaking experiment that informs how human decision makers process and conceptualize resilience in the context of repeated disasters, we adopt the more generalized definition of resilience advanced by the National Research Council [NRC] (2012) that is "the ability to prepare and plan for, absorb, recover from, or more successfully adapt to actual or potential adverse events" (p. 16). 1 Because experiments like this one are generalizations of external contexts intended to introduce experimental control where observational data cannot present such control, introducing high levels of definitional nuance to human subjects in an experiment would confuse subjects and could heavily bias results. Given this intent toward a generalized application, however, two important conceptual nuances must be addressed, which we turn to next. The first involves the nature of repeated events and temporal dynamics. The second involves considerations specific to the unit of analysis-in this case, the firm. Addressing conceptual distinctions in repeated events Addressing the nature of repeated events requires two important time-related dimensions that affect the way in which resilience is conceptualized. The first involves the distinction between static versus dynamic resilience (see e.g., Rose 2004Rose , 2007Rose , 2017. The former involves the manner in which remaining resources are utilized to maintain function when shocked and comports more closely with Holling's (1973) general definition. The latter involves the efficient use of resources over time and comports more with Pimm's (1984) definition. 
While the application of inventory investments in repeated disasters involves core elements of both concepts, repeated events resilience decisions inherently involve the act of setting aside currently profitable resources in the here and now to maintain function and reestablish productivity in the future. As such, repeated events decisions inherently involve temporal tradeoffs between current opportunity costs and future losses avoided. Those avoided losses are typically assessed in terms of business interruption, or BI, as measured by sales revenue (see, e.g., Rose and Liao 2005;Dormady et al. 2019a, b). This raises another important distinction, magnitude versus probability. Avoided BI losses are inherently a magnitude consideration, as they measure the size of the loss that was avoided. However, while the opportunity cost of setting aside currently profitable capital or materials in the here and now is certain, the magnitude of the future avoided loss is uncertain and subject to some probability domain. With the exception of Azadegan and Jayaram (2018), who introduce the concept of "anticipative" resilience, we are aware of no existing empirical economic resilience research that has addressed resilience actions that result from information about the probability of the shock in the context of repeated disasters. Additionally, even Azadegan and Jayaram's work does not address this, as anticipative resilience is more consistent with actions taken to build slack capacity in anticipation of a future disaster. So, even their insightful work omits the process of informing the likelihood of a disaster over time in the context of repeated events. It is this dimension, in particular that presents an important motivation for the current experiment. Observational data, as opposed to experiments, cannot hold disaster magnitude constant and vary the disaster frequency in the way that a controlled experiment can. Addressing firm-level decision making in resilience research The second important conceptual nuance involves resilience decision-making considerations specific to the unit of analysis. Much of the economic resilience literature has addressed large-scale regional or national-level economic issues such as COVID-19, outages in bulk power systems, and municipal water contamination, to name a few. While some of the research in this area has helped create frameworks for evaluating vulnerability and disaster response (Gerber 2007;Chang, McDaniels, Fox, Dhariwal, and Longstaff 2014;Alderson, Brown, and Matthew 2015;Kim and Marcouiller 2015), others have concentrated on quantifying post-disaster losses, deriving methods of measuring resilience costs, or establishing benchmarks for resilience performance measures (Cimellaro et al. 2010;Park et al. 2011;Vugrin et al. 2011;Henry and Ramirez-Marquez 2016). These types of studies have concentrated on community or regional resilience by examining specific geographic areas (e.g., metro areas or watersheds), organizations (e.g., hospitals), institutions (e.g., public policies), or infrastructure systems (e.g., supply chains, power, and communication networks). While these contributions are important, the microfoundations (i.e., the level of the firm) are noticeably absent. Scholars who have attempted to quantify the effects of catastrophic events on businesses have largely ignored the attempts by decision makers within firms to minimize potential losses. 
These types of decisions that originate with an individual decision maker or collaboration among individuals collectively constitute the economic resilience of firms and, ultimately, communities. This gap becomes even more consequential when considering research that shows that individual characteristics and risk attitudes influence corporate policies (e.g., Cronqvis 2012; Roussanov and Savor 2014;Bernile et al. 2017). While the economic resilience literature has focused on regional or community resilience, much of the existing risk and individual decision-making literature has considered the influence of personal characteristics. For instance, one line of research published in the economic and personal finance literature has examined gender differences in risk preference and has yielded inconsistent findings (Schubert et al. 1999;Sonfield et al. 2001;Atkinsonet al. 2003;Beckmann and Menkoff 2008;Eckel and Grossman 2008;Charness and Gneezy 2012;Booth and Katic 2013;Filippin and Crosetto 2016;Sila et al. 2016). Decision making in the context of repeated events When faced with making decisions on behalf of the firm, an important individual characteristic that likely influences risk perception is the decision maker's personal experience. Experience with fatal disasters early in life has been shown to predict chief executive officer's (CEO) risk attitudes (Bernile et al. 2017). CEOs born in counties that experienced a moderate number of natural disaster fatalities engaged in more corporate risk-taking behavior than those exposed to low fatalities. However, CEOs born in counties with more extreme disasters pursued less risky corporate activities (Bernile et al. 2017). While Bernile et al. (2017) find a relationship between disaster exposure as a child and subsequent firm-related investments later in life, this research can only hint at the impact of repeated events on decision making. Hertwig et al. (2004) posited that people with experience likely underweight the likelihood of rare events but overweight them if making decisions from a description of the scenario. They attribute this to the nature of rare events. Given that the events happen only infrequently, people ratchet down their expectations over time, especially because people often overweight the impact of recent events. On the other hand, Yechiam et al. (2005) also hypothesized that experience reduces sensitivity to rare events, but they attributed this to reduced sensitivity of risk. When examining the effect of the Intifada on overnight stays in Israeli hotels, they found that the rise in terrorism led to a much larger reduction in hotel stays by international rather than domestic tourists. Using a laboratory experiment to test their hypothesis regarding the role of personal experience, they found results consistent with their hypothesis that personal experience by the local residents reduced the sensitivity to the risk. Notably, the experiment's participants in the experience treatment tended to revert to risky choices soon after experiencing negative outcomes. Prospect theory can also shed light on decision making in the context of repeated events. Because perceptions of risk are often a function of drawing from a small, unrepresentative sample. Recency bias leads individuals to overweight their most recent experiences and experiencing an event may lead to underweighting the probability of its reoccurrence (Tversky and Kahneman 1971). 
This "gambler's fallacy" would lead one to believe that because of the incorrect belief that the small sample represents the large sample of a variable from the same distribution, the occurrence of the rare event would somehow make it less likely to occur in the next period. This is consistent with findings from the literature (e.g., Bell and Tobin 2007) that people are confused by terms like "100-year" flood when risk probabilities are presented to them. On the other hand, He and Hong (2018) found in a lottery laboratory experiment that subjects who were exposed to riskier environments in earlier rounds of the experiment displayed greater risk aversion in later rounds of the experiment. Individuals' own biases can further be reinforced or tempered by advice they seek out or are offered. For instance, a decision maker's willingness to take precautionary measures in the face of risk is also likely to be partially a function of the level of trust in warning advice provided (LeClerc and Joslyn 2015). Confirmation bias, which leads people to overweight evidence conforming to their own initial beliefs, might lead the decision makers to place greater trust in advice consistent with their priors. Further, Slovic (1999) points out that trust is asymmetric in that it is more easily destroyed than created. One explanation is that events that betray trust are typically more perceptible than events that reinforce trust. Interestingly, people are more likely to view sources of bad news as credible but discount sources of good news. Also, distrust, once established, is difficult to overcome. Hypotheses To summarize, the insights and gaps in the existing empirical and theoretical research reviewed above lead to the following hypotheses. First, following Bernile et al. (2017), we hypothesize that decision makers who are exposed to a greater number of cumulative disasters will invest in resilience at a higher rate. Second, prospect theory concepts of recency bias and gambler's fallacy imply that decision makers will be less likely to invest in resilience after a disaster has recently occurred. Finally, it is expected that the advice provided regarding resilience investments will influence those investments accordingly; that advice consistent with underlying beliefs will be weighted more, and that, over time, the influence of advice will be qualified by the decision maker's experience with the accuracy of that advice. We turn next to a discussion of the experimental design to evaluate these hypotheses. Experimental design We utilize a controlled experiment to study the effect of resilience investment recommendations on resilience decisions in the context of repeated catastrophic events. Use of controlled experiments has grown rapidly because of their strengths in testing social phenomena in a structured manner (Plott and Smith 2008;Kagel and Roth 2015) and in setting up scenarios in large samples that would not be possible with observational data. Controlled experiments, by their very nature, rank highly in internal validity (Roth, 1995;Roe and Just 2009). However, their external validity regarding generalizability to external policy contexts depends heavily on the choice of assumptions. Below, we describe the experimental design and rationale. Section 3.1 describes the sampling approach and subject populations, sample size, and the overall operation. Section 3.2 describes the decision-making scenario along with the payment structure. 
Section 3.3 describes design considerations relating to the dynamic decision context. Section 3.4 describes the design features relating to the probability domain and rationale for a mixed-strategy design. Experiment operation and sample selection Remarkably few studies replicate the exact same experiment to compare university subject pools with professional market actors (Frèchette 2015(Frèchette , 2016. This is the case for a variety of reasons, including the financial and opportunity costs involved in utilizing professional subjects. Further, among the few existing studies, there is notable divergence between studies finding no or small qualitative difference (List 2002;List and Haigh 2005;Levitt et al. 2010) and studies finding significant qualitative differences (Burns 1985;List 2001;Palacios-Huerta and Volij 2008) between student and professional subjects. The experiment using the two subject pools was conducted as an online experimental survey administered in two stages in late 2015 and early 2016 by RTi Research, a professional business survey firm. Professional subject experimental sessions made use of an existing subject pool of managers from a representative sample of mid-sized businesses and included mainly CEOs, COOs, owners, or executives tasked with making strategic corporate investment decisions. 2 A more standard experimental economics subject pool was also used from The Ohio State University. In October 2015, the initial run included 368 undergraduate subjects. The advice treatments that are the focus of this paper provided 298 completed experiments. 3 The second run, carried out in January 2016, included both students (286) and managers (312). Altogether, the data set evaluated here consists of 896 subject records, including a professional subject sample of over 300, which is much larger than nearly all other experiments involving professionals in the field today. Subjects were randomly assigned from the subject pool and also randomly assigned to treatments. The random assignment used a conditional least-count uniform distribution algorithm to assign subjects to advice treatments. Although this algorithm assigned subjects randomly using a uniform distribution, it also weighted the distribution more heavily toward those treatment and selection parameters that had the lowest count of completed surveys at that point in time. This ensured perfect equality of subject counts across treatments (as possible). We also oversampled from female subjects in both subject pools to ensure an equal gender balance in all treatments. Decision-making scenario Subjects were provided a resilience decision-making context, or vignette, in which they were asked to advise a firm's Chief Operations Officer (COO) on an important operational decision in the face of a critical supply chain vulnerability. In the possible event of an unnamed catastrophe, the firm's ability to acquire the needed production input would be substantially limited. Subjects were asked to advise the COO on an investment decision that could reduce the potential negative consequences of the production input curtailment that would occur if the catastrophic event were to ensue. The exact type of catastrophic event was not specified, as a contextualized decision could introduce exogeneity bias if subjects' individual heuristic biases (e.g., fear of hurricanes) influenced their resilience decisions. 
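The "conditional least-count" assignment rule described above can be illustrated with a short sketch. This is not the authors' implementation: the treatment labels and the specific weighting factor are illustrative assumptions, and the only property the sketch is meant to convey is that the random draw is tilted toward whichever treatment currently has the fewest completed surveys.

```python
import random
from collections import Counter

def assign_treatment(completed_counts, treatments=("invest", "do_not_invest")):
    """Assign the next subject to a treatment, weighting a uniform random draw
    toward whichever treatment currently has the fewest completed surveys
    (a 'least-count' rule; the weight of 3 is an illustrative assumption)."""
    counts = Counter({t: completed_counts.get(t, 0) for t in treatments})
    min_count = min(counts.values())
    weights = [3 if counts[t] == min_count else 1 for t in treatments]
    return random.choices(treatments, weights=weights, k=1)[0]

# Example: with 10 completed 'invest' surveys and 7 'do_not_invest' surveys,
# the next subject is more likely (but not certain) to be assigned to 'do_not_invest'.
print(assign_treatment({"invest": 10, "do_not_invest": 7}))
```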
Footnote 2: Because the National Center for the Middle Market funded this research and had existing collaborations with RTi Research, we had a high degree of assurance that the subjects took the experiment seriously. More specifically, these subjects were drawn from the pool of managers who complete the Middle Market Indicator Report. For more information on the sampling pool, see the FAQ at http://www.middlemarketcenter.org/performance-data-on-the-middle-market.

Footnote 3: The remaining 70 subjects were assigned to independent and alternative treatments that are not relevant for this study and address the value of information in repeated events decisions. Those are published in Dormady, Greenbaum and Young (2021).

In this experiment, if a catastrophic event were to occur, the inventory investment provides a stock of the critical input that would result in only a slight reduction in the firm's operational continuity. Subjects were thus faced with the decision of continuing to operate normally and face the risk of a catastrophic event that would nearly wipe out production capability, or making an investment in inventories with an opportunity cost in the here and now that would shield the firm from probabilistic near-term operational consequences. The vignette read as follows:

You are an executive in a mid-sized business. The Chief Operating Officer (COO) has asked you to help the company make an important operational decision that will play an important role in the future success of the company. The company faces a potential vulnerability in its supply chain. To produce its output, the company requires an input in order for it to be able to continue to operate. If a catastrophic event were to occur, it would wipe out the company's ability to obtain this critical input, and the company would operate on a skeleton basis until operability is restored. The company has the option of making an investment that could limit the negative impacts of this catastrophic event. If the company purchases a large inventory of the critical input, the company could continue to function at nearly full operability. The inventory, however, would incur a sizeable cost to the company. The COO has asked you to make the decision of investing in the inventory. The COO has informed you that:
• If no catastrophic event occurs, the company will have an estimated profit of $100 million.
• If a catastrophic event occurs, the company will have an estimated profit of $10 million.
• If a catastrophic event occurs, and the company has made the investment in inventories, the company's estimated profit is $90 million.
• The cost of inventories is $20 million.

Subjects were then given the decision-making payoff matrix in Table 1. Inventories incurred a cost of $20 million per decision period. If a firm invested and a catastrophic shock occurred, the firm is only slightly negatively affected by the shock. Profits would be $70 million per period, taking into account the inventory investment (top-left cell). If a shock were to occur, and no inventories were acquired by the firm, its operability would be severely impacted, reflecting the production capability reduction. Profits would be $10 million per period (bottom-left cell). The right column represents the payoffs under the scenarios in which no catastrophe occurs. Under these business-as-usual conditions, the firm would have profits of $100 million per period if inventories were not purchased (bottom-right cell).
Finally, if the firm made the investment and no catastrophic shock occurs, profits would be $80 million per period, or $100 million minus the $20 million cost of inventories (top-right cell). Because the focus of our study is middle-sized businesses (as defined by annual revenues between $10 million and $1 billion), resilience strategies of middle-sized firms tend to be limited compared to larger companies. This is important because middle-sized businesses that make investments in redundancy or inventories, for example, tend to do so at a tradeoff to core production inputs in the present, notably investments in labor or capital. Larger firms can generally afford redundancy without the same relative opportunity cost. Moreover, in the globally competitive marketplace in which most middle-sized businesses compete, costly investments in inventories or other resilience investments can put them at a disadvantage relative to other firms that do not bear such costs or catastrophic risk. Subject remuneration was aligned with this payoff structure, which also aligns with standard experimental practices of incentivizing performance via induced value theory. This is also consistent with corporate performance pay strategies that reward executives for management performance that is tied to market-based outcomes (Jensen and Murphy 1990). Subjects in the experiment received payment at the ratio of one dollar for every 100 million dollars the firm received in profits. 4 The running calculation of remuneration was visible during the experiment; however, every other aspect of the vignette indicated the independence of decision-making periods. Specifically, inventories were not carried over from period to period, and the introduction of a new period was accompanied by the phrase, "Some time has passed. The company is again faced with the option to invest in inventories that would limit the negative impacts of the catastrophic event." This scenario signaled a new, independent time period without suggesting a type of inventory or type of disaster that could have activated individual heuristic biases, as discussed below. Dynamic treatment conditions Subjects made resilience decisions across ten two-round periods. In the first round of each period, subjects made an initial investment decision. After making their initial decision, subjects were informed that the COO has appointed an advisory committee of two associates with operational management expertise to assist them in making their decision on how to advise the COO regarding the inventory investment. However, subjects were informed that their decisions were ultimately their own. There were two treatments in terms of advice received from the appointed associates, either to "invest" or "do not invest" in inventories, and the advice given was consistent across the ten periods. In all cases, both of the appointed vignette advisors gave the same advice. That is, in no treatment were the subjects receiving conflicting advice from the advisory committee. 5 An example vignette (for a subject who decided to not invest in the first round of a period and was assigned to the do not invest treatment) reads as follows: Now that you have made the decision to not invest in inventories, the COO has appointed an advisory team of two other executives with experience in operations management. Although the decision to invest in inventories ultimately rests with you, the COO has asked you to consider the input of the advisory team. 
The first member of the advisory team has reviewed the revenue scenario thoroughly. This team member recommends that the company not invest in inventories. This team member recommends that you not invest in inventories. The second member of the advisory team has also reviewed the revenue scenario thoroughly. This team member recommends that the company not invest in inventories. This team member recommends that you not invest in inventories. After receiving the recommendations from the advisory committee, subjects made their final decision and were subsequently informed of the disaster outcome. Subjects were not informed that the total number of periods would be ten. Therefore, subjects' resilience decisions were systematically influenced by only the advice received by advisors and by their own non-systematic subject-specific experience with the outcomes of disasters in each period. Therefore, in this experiment, our two main variables of focus are the influence of the advice and the effect of repeated events. Probability domain in the experimental design As identified in Sect. 2, the benefit of an experiment over observational data is the ability to introduce experimental control, specifically regarding the probability domain while holding magnitude constant. We note that subjects' decision calculus inherently depends on their risk tolerance and their willingness to take preventative action (Englander 2015). However, this experimental design differs from classic risk experiments in three important ways. First, unlike many risk experiments, the subjects in this experiment are not informed of the likelihood of the shock. Second, there is no dominant strategy in equilibrium. Disasters were assigned randomly from a uniform distribution with mean 0.25. The expected monetary value (EMV) is equivalent for either resilience investment strategy ($77.5 million). 6 Table 2 presents EMVs for this, as well as 0.5 and 0.1, two likely subject guesses for the event likelihood. If probability were observable to subjects, risk-neutral subjects would play a mixed strategy. At the same time, risk averse subjects and subjects with likelihood priors above 25% would tend to make the investment. As such, our experimental design mirrors the real-world resilience investment decision environment faced by firms where no risk-neutral inventory investment decision is dominant. The additional benefit of this design is that it makes treatment effects more clearly observable. Subjects' inherent priors about the likelihood of a catastrophic event and their risk preferences ultimately inform their resilience investment decisions. Subjects who believe that the likelihood of a catastrophic event is high are more likely to invest in resilience, ceteris paribus. Results We begin by providing some initial descriptive statistics of our results, including some basic hypothesis tests. Then, we provide more detailed subject-level panel regression Descriptive statistics In addition to the advice received from the advisors in the vignettes, subjects' own experience with disaster occurrence across the ten periods can influence their resilience investment decisions. Moreover, because subjects were not informed of the likelihood of disaster occurrence, their experiences with disasters on a period-by-period basis would tend to inform their perceptions of the likelihood of disaster occurrence dynamically (i.e., as they experience them over time). 
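The strategy-neutral design described above can be checked directly from the per-period payoffs in Table 1. The following minimal sketch, with the payoffs hard-coded from the text, reproduces the expected monetary value comparison for the three probabilities discussed for Table 2, including the equivalence of both strategies at the design probability of 0.25.

```python
# Per-period profits in $ millions, keyed by (invested, disaster_occurred),
# taken from the payoff matrix described in the text (Table 1).
PAYOFFS = {(True, True): 70, (True, False): 80, (False, True): 10, (False, False): 100}

def emv(invested: bool, p: float) -> float:
    """Expected monetary value ($M per period) of a strategy at disaster probability p."""
    return p * PAYOFFS[(invested, True)] + (1 - p) * PAYOFFS[(invested, False)]

for p in (0.10, 0.25, 0.50):
    print(f"p = {p:.2f}   invest: {emv(True, p):5.1f}   do not invest: {emv(False, p):5.1f}")
# p = 0.10   invest:  79.0   do not invest:  91.0
# p = 0.25   invest:  77.5   do not invest:  77.5   <- strategy-neutral design point
# p = 0.50   invest:  75.0   do not invest:  55.0
```

At p = 0.25 the two strategies have identical expected value, so systematic differences in investment rates across treatments cannot be attributed to a dominant risk-neutral strategy.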
To ensure that there were no systematic differences across advice treatments in either of our subject populations, we evaluate the occurrence of catastrophic events that subjects observed. Descriptive values are provided on a round-by-round basis in Table 3. From these descriptive results, we have confidence that there are no systematic biases across treatments, subject populations, or rounds in terms of any group receiving more "shocks" than another. The rate of disaster occurrence varies only from a mean of 24.6% among students who received the advice to not invest to 25.3% among the managers who were given the advice to invest, over ten periods. We also conducted parametric and non-parametric tests of means by treatment, subject type, and period, not provided here for brevity, to ensure that all subjects in each treatment and round experienced disaster likelihoods that were not statistically different from 25%. (Footnote 7: In nearly all cases, Wilcoxon tests failed to reject the null that the percent of subjects observing a disaster was equal by advice treatment at the p < 0.05 level. The exceptions are students in period 9 and managers in period 8. The computer-generated randomly drawn disaster outcomes yielded slightly fewer disasters for students in the Do Not Invest treatment in period 9 and managers in the Invest treatment in period 8. In both cases, however, two-sample z-tests failed to yield statistically significant differences at the same significance level, due to the large standard deviations produced by the uniform distribution. Given the sensitivity of Wilcoxon tests, we have strong confidence that no group incurred more frequent disaster outcomes than another.) Across the board, all subjects in all rounds were exposed to the same shock probabilities and no systematic differences in exposure exist in our data. We provide basic summary statistics of the experimental results by treatment group, subject type, and period, in Table 4. The table provides mean and standard deviation for each. It also provides the total ten-round averages for each. In total, 292 student subjects and 156 middle market managers received consistent advice to invest in resilience. The same count of subjects, respectively, received consistent advice to not invest. From the descriptive values alone (i.e., without controlling for subject-level disaster experience), it is clear that there is a relatively strong treatment effect in both the student and manager groups: subjects who were advised in the vignettes to invest in resilience did so at a greater rate than those who were advised to not invest (81.8% versus 59.4% among students and 77.1% versus 60.9% among managers across all ten periods). We note that even when advised to not invest, we still observe approximately six out of ten subjects making the investment. We take this as providing some evidence that subjects were either ascribing an event likelihood above 25% or viewed the investment as the risk-averse option. We also note the importance of econometric controls for individual-level experience with disasters as provided in the next section. Regardless, it is the difference between the treatments that is illustrative here, as the strength of the treatment effect remains strong across ten rounds, although it appears to dissipate to a degree with increasing experience across periods.
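The balance checks summarized in the footnote above (Wilcoxon/Mann-Whitney tests supplemented by two-sample z-tests on the share of subjects observing a disaster) can be sketched as follows. This is a hedged illustration rather than the authors' code; the data frame and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.proportion import proportions_ztest

def exposure_balance(df: pd.DataFrame, period: int) -> dict:
    """Compare disaster exposure between advice treatments within one period.
    Assumes hypothetical columns: 'period', 'advice' ('invest'/'do_not_invest'),
    and 'disaster' (1 if the subject observed a disaster that period, else 0)."""
    sub = df[df["period"] == period]
    a = sub.loc[sub["advice"] == "invest", "disaster"]
    b = sub.loc[sub["advice"] == "do_not_invest", "disaster"]
    _, wilcoxon_p = mannwhitneyu(a, b, alternative="two-sided")
    _, ztest_p = proportions_ztest(count=[a.sum(), b.sum()], nobs=[len(a), len(b)])
    return {"period": period, "mann_whitney_p": wilcoxon_p, "two_sample_z_p": ztest_p}
```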
We extend this by adding subject-level disaster exposure to the mean resilience investment results to observe the treatment effect associated with resilience investment recommendations. We first provide this graphically in Figs. 1 and 2. The Y-axis of the figures indicates the mean resilience investment decision across subjects and the X-axis indicates the period. We provide separate line plots by the total count of disasters that subjects incurred, which ranged from zero to six for nearly all subjects. Figure 1 provides the time series averages for the Invest treatment and Fig. 2 provides them for the Do Not Invest treatment. In comparing the two figures, we observe substantially more cohesion about the mean, regardless of the count of disasters incurred, in the Invest treatment. However, that cohesion dissipates substantially over time in the Do Not Invest treatment. The subjects incurring three or fewer disasters had the largest drop in resilience investments from the mean in the Do Not Invest treatment. This indicates that, over time, realizing fewer disasters combined with receipt of a recommendation to not invest resulted in fewer investments. This is consistent with Hertwig et al.'s (2004) speculation that the lack of experiencing rare events leads people to reduce their expectations of their occurrence. Put the other way, this suggests that recommendations to invest in resilience can have a substantial effect on encouraging resilience investments, even when subjects observe few disasters. We extend this graphical analysis further by providing formal non-parametric tests of the equality of resilience investments by treatment. Wilcoxon (Mann-Whitney) tests are provided by the count of total disasters experienced in Table 5. In all cases except the case of six disasters experienced, due to sample size and few subjects getting that many shocks, we safely reject the null hypothesis that the mean resilience investment in the Invest treatment is equal to the mean in the Do Not Invest treatment. The rarer case of six disasters falls short of common statistical significance. This provides strong evidence of a treatment effect associated with advising decision makers to invest in resilience. Next, we extend this analysis further through formal econometric estimation.

Regression analysis
Econometric estimation allows us to build in statistical controls to account for the repeated-events aspects of the data through panel regression techniques. We use three econometric models (one static model and two dynamic models) to explain a subject's resilience investment decision. Our static econometric model (Model 1) is estimated using a random effects panel logit model of the general form

$$\Pr(\text{Invest}_{it} = 1 \mid \mathbf{x}_{it}, v_i) = \Lambda(\mathbf{x}_{it}'\boldsymbol{\beta} + v_i),$$

where the dependent variable is the binary outcome Invest in resilience by firm i in period t and $\Lambda(\cdot)$ is the logistic function. Consistent with a random effects model, variance components are given by $v_i$. Explanatory variables in vector x consist of both panel/firm-invariant variables as well as time-varying variables:

$$\mathbf{x}_{it}'\boldsymbol{\beta} = \beta_0 + \beta_1\,\text{Disaster}_{i,t-1} + \beta_2\,\text{Cumulative}_{i,t-1} + \beta_3\,\text{Advice}_{i} + \beta_4\,(\text{Advice}_{i} \times \text{Accuracy}_{i,t-1}) + \beta_5\,\big((1-\text{Advice}_{i}) \times \text{Accuracy}_{i,t-1}\big) + \beta_6\,\text{Invest}_{i,t=1}.$$

The variable Disaster indicates disaster occurrence in that period. We lag this variable by a single period, which is the most recent period observed by a subject, as disaster is not observable until after the decision is made. Cumulative provides the running total count of disasters the subject has incurred in prior periods leading up to the current decision-making period. In other words, in any given period $t = n$, it provides $\sum_{t=1}^{n-1} \text{Disaster}_{i,t}$.
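A short sketch of how the lagged and running covariates entering this model, including the advice-accuracy measure defined in the paragraph that follows, could be constructed from a subject-by-period panel is given below. The column names are hypothetical and the code is illustrative rather than the authors' own.

```python
import pandas as pd

def build_regressors(panel: pd.DataFrame) -> pd.DataFrame:
    """Build the lagged/running covariates described in the text from a
    subject-by-period panel. Hypothetical columns: 'subject', 'period' (1..10),
    'disaster' (0/1 outcome of the period), 'advice' (1 = advised to invest)."""
    panel = panel.sort_values(["subject", "period"]).copy()
    g = panel.groupby("subject")

    # Disaster occurrence in the most recently observed period (t-1).
    panel["disaster_lag"] = g["disaster"].shift(1)

    # Running count of disasters incurred in periods 1..t-1
    # (cumulative sum through t minus the period-t outcome itself).
    panel["cumulative"] = g["disaster"].cumsum() - panel["disaster"]

    # Advice is 'accurate' in a period when it matches the realized outcome:
    # advice to invest matches a disaster; advice to not invest matches no disaster.
    panel["accurate"] = panel["disaster"].where(panel["advice"] == 1, 1 - panel["disaster"])
    acc_through_t = panel.groupby("subject")["accurate"].cumsum()
    # Running accuracy through t-1; undefined (NaN) in the first period.
    # E.g., one prior disaster by period 6 under advice to invest gives 1/5 = 0.20.
    panel["accuracy_lag"] = (acc_through_t - panel["accurate"]) / (panel["period"] - 1)

    return panel
```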
This variable is inherently lagged; for example, in the fifth period (t = 5), a subject could have experienced a max total of only four disasters by the time they make their resilience decision. Advice is a dummy variable indicating the subjects' treatment of either Invest or Do Not Invest (1 − Advice) in inventories. For any given subject i, treatment assignment is randomized and distributed Bernoulli, and it remains the same across all periods. As such, it is time invariant and no lag is necessary. We interact the treatment dummies with the variable Accuracy, which is lagged by one period. The accuracy variable captures the running total of accuracy of the advisory information the subject received. For example, if a subject in the sixth period of the invest treatment incurred one previous disaster in any preceding period, the accuracy would be 0.20, or 1/5. This allows us to incorporate the moderating effect of subjects potentially discounting advice that turns out to be erroneous over time (as well as the opposite case of high accuracy). Simply put, the interaction term allows us to measure not only the effect or our treatment parameter but also the dynamic effect of realized, or observed, advice accuracy as it plays out. Finally, we include the subject's initial investment decision that provides a time-invariant binary operator for the subject's initial resilience decision at the end of the first period. We incorporate this variable as a way to account for the effect over time of subjects' adherence to their initial decision. We incorporate this variable because we observe a relatively large percentage of subjects who, after making their final decision at the first period after receiving the advisory information, never deviated from that decision despite their experience with disaster outcomes. In total, 37.6% of students and 50.9% of managers never deviated from their initial resilience investment decision. Our dynamic econometric model (Model 2) extends our static model by incorporating two lags of the dependent variable, given by Dynamic models are provided because of the nature of repeated events in the experiment-a subject's resilience investment decision in the last period or two is likely to accurately predict their resilience investment decision in the current period. We note that dynamic random effects panel models (i.e., with lagged dependent variables) have been identified in the econometrics and epidemiology literature to have the potential to produce biased coefficients. This is because they violate the assumption that the dependent variable is not correlated with the random intercept (Nickell 1981;Bhargava and Sargan 1983;Allison 2015;Kripfganz 2016). Given the potential explanatory power of dynamic decisions in this experimental environment, and given the importance of time-invariant explanatory variables that could not be evaluated using a fixed effects approach, we take two steps to ensure that our coefficients are consistently estimating treatment effects. First, we provide both the dynamic and non-dynamic models. Second, we estimate a dynamic generalized estimating equation for our third model (Model 3), as described in Liang and Zeger (1986), as a robustness check. Here, we fit our dynamic model (Model 2) to a panel generalized linear model (GLM) given by the link function L: where the distribution family of L is a logit function distributed binomial. 
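For readers who want to reproduce something close to the Model 3 specification in open-source tooling, a rough statsmodels analogue of a population-averaged panel logit, using the exchangeable working correlation introduced in the next paragraph, is sketched below. It is an approximation under assumed column names (continuing the hypothetical panel from the earlier sketch), not the exact Stata 14 xtgee specification used by the authors.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# 'panel' is assumed to be the subject-by-period data frame built in the earlier
# sketch; add the complement of the treatment dummy for the second interaction.
panel["not_advice"] = 1 - panel["advice"]

# Dynamic specification with two lags of the dependent variable, as in Models 2-3.
formula = (
    "invest ~ disaster_lag + cumulative + advice"
    " + advice:accuracy_lag + not_advice:accuracy_lag"
    " + initial_invest + invest_lag1 + invest_lag2"
)

gee = smf.gee(
    formula,
    groups="subject",                      # repeated measures clustered by subject
    data=panel,
    family=sm.families.Binomial(),         # logit link
    cov_struct=sm.cov_struct.Exchangeable()
)
result = gee.fit()
print(result.summary())                    # exponentiate coefficients to report odds ratios
```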
We use an exchangeable correlation matrix, given by R t,s , such that where the diagonal elements are unity and the off-diagonal elements are correlation values rho. This is useful as it provides logit odds ratios with an exchangeable correlation matrix and relaxes the assumption that lags are independent of one another (Zeger et al. 1988;Hanley et al. 2003;Gunasekara et al. 2014). This approach provides the same estimators as a dynamic population-averaged panel logistic regression. Coefficients in each model are all in general agreement and each serve to validate the direction and magnitude of treatment effects. Because the estimator in Model 3 does not allow for clustering, we do not cluster Models 1 and 2 for uniformity but note that alternative specifications utilizing subject-level clustering obtain highly similar results. Table 6 Regression results for investment in resilience (all subjects) All models report odds ratios and standard errors in parentheses. Model 1 provides static model and models 2 and 3 provide dynamic models with two lags of the dependent variable. Model 3 provides a repeated-measures generalized estimating equations model (xtgee in Stata 14) using logit link function. Treatments in which subjects received advice to not invest in resilience are the excluded reference category for the dummy variable Advice ***p < 0.01, **p < 0.05, *p < 0. We provide our regression results for all subjects in Table 6. We also provide separate results of the same models for middle market managers only in Table 7, and our student subject pool only in Table 8. Each table provides Models 1-3. Coefficients are presented as odds ratios. Recall that the odds ratio (OR) is interpreted as a deviation from unit value (so OR > 1 is a positive effect on resilience investment, OR < 1 is a negative effect, and OR = 1 is a neutral effect). In all models and for both subject pools, we find consistent negative resilience investment associated with the occurrence of a disaster in the prior period. Coefficient magnitude is relatively consistent across all specifications and across both populations. The implications of this are namely that decision makers are much less likely to expend financial resources on resilience investments when a catastrophic event has just passed. In other words, subjects are heavily discounting the likelihood of a second disaster after the first has passed. This behavior is consistent with the gambler's fallacy notion that people misattribute the characteristics of the small sample of lived experiences to the larger underlying probability distribution (Tversky and Kahneman 1971). This result also provides validation of our experimental design. It was our intent to design a decision environment that was simultaneously strategy neutral and that incorporated the real-world institutional feature of inventories bearing an opportunity cost in the here and now. Because we observe an odds ratio consistently less than one associated with Disaster in all models, this provides some evidence that our experimental design is capturing this effect and that subjects are attempting to avoid having to make the investment. This also has important implications for the business population of our manager subject pool-mid-sized businesses. While the largest firms are able to underwrite their own losses, mid-sized firms face a much more competitive Table 7 Regression results for investment in resilience (managers) All models report odds ratios and standard errors in parentheses. 
Model 1 provides static model and models 2 and 3 provide dynamic models with two lags of the dependent variable. Model 3 provides a repeated-measures generalized estimating equations model (xtgee in Stata 14) using logit link function. Treatments in which subjects received advice to not invest in resilience are the excluded reference category for the dummy variable Advice ***p < 0.01, **p < 0.05, *p < 0. All models report odds ratios and standard errors in parentheses. Model 1 provides static model and models 2 and 3 provide dynamic models with two lags of the dependent variable. Model 3 provides a repeated-measures generalized estimating equations model (xtgee in Stata 14) using logit link function. Treatments in which subjects received advice to not invest in resilience are the excluded reference category for the dummy variable Advice ***p < 0.01, **p < 0.05, *p < 0. business climate and are forced to compete both nationally and globally against firms that may not face the same degree of catastrophic risk. Resilience expenditures, such as inventories, may put these firms in particular, at a competitive disadvantage (Young et al. 2017). It is key to the external validity of our experiments that we consistently observe this effect of subjects seeking to avoid the opportunity cost of resilience investments. Any other directional effect of this coefficient would indicate that our data are not picking up real-world opportunity costs of inventories. Our treatment variable Advice provides interesting but mixed results. While the reference group is the Do Not Invest treatment, we expect the coefficient Advice to be greater than one. However, we observe this only for managers in Model 1, when a lagged dependent variable is not incorporated into the model. However, the two interactive terms in which the treatment dummy is interacted with Accuracy are generally significant at the 10% level and in the expected direction, with the exception of accuracy of advice to invest for managers, which falls short of generally accepted levels of statistical significance. Critically important to understanding these results are the following. We note the importance of the magnitude of these two interactive variables in light of our disaster probability of 0.25. Given that subjects all consistently faced a 25% likelihood of disaster occurrence, the advisory information in the Advice (advised to invest) treatment would be incorrect relative to the advisory information provided in the Do Not Invest treatment at a ratio of 3:1. Thus, it is important that we consider the impact of the treatment variable in light of the accuracy of the information as it is revealed across time to the subjects. Keeping this important consideration in mind, we expect accurate advice to invest in resilience to have a positive effect on resilience investments. Contrariwise, we expect accurate advice to not invest in resilience to have a negative effect. This result obtains and in the expected direction in all cases. This effect is statistically significant in all models except the accuracy of advice to invest among managers as just mentioned. We note that across the board, the contrast between the direction of magnitude of these two interactive coefficients is quite large. This provides evidence that decision makers respond in accordance with accurate advisory information. 
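Because the tables report odds ratios, it can help to translate an odds ratio into a change in predicted probability. The snippet below does this for a purely illustrative baseline probability and odds ratio; the numbers are not estimates from the paper.

```python
def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Probability implied by scaling the baseline odds by a given odds ratio."""
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# Illustrative only: with a 70% baseline probability of investing, an odds ratio
# of 0.6 on lagged Disaster (OR < 1, a negative effect) implies roughly 58%.
print(round(apply_odds_ratio(0.70, 0.6), 2))  # 0.58
```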
Taken together with our main treatment dummy Advice, these results suggest that advisory information alone is not sufficient to encourage resilience investments-that information must also be substantiated with a history of accuracy. We can extend the description of our treatment effects further by evaluating the marginal effects of the treatment interaction variables. While logit coefficients are not as easily interpreted as OLS coefficients, we generate predictive margins for specified parameters in our estimating equation. We illustratively provide margins plots in Figs. 3 and 4 for all subjects and managers respectively. We exclude the plot for our student population for brevity as it is very similar to Fig. 3. The Y-axis provides the predicted probability that the decision-maker will invest in resilience as fitted by Model 1, holding all other variables constant at their means. The x-axis provides Accuracy t−1 as provided in the regression model. The panels on the left of Figs. 3 and 4 provide the predictive margins for the Invest treatment, and the panels on the right, the Do Not Invest treatment. The whiskers indicate the upper and lower bounds of a 95% confidence interval Fig. 3 Predictive margins of resilience investment, by accuracy of advice (all subjects). Horizontal line indicates mean predictive margin for treatment when accuracy of advice is held at the mean about each predictive point estimate. Further, we provide the mean predictive margins for each treatment as a horizontal reference line. The Invest treatment marginal effects have a positive slope in the expected direction indicating that as advisory information encouraging resilience investments increases in accuracy, the probability of a decision maker investing in resilience increases. The Do Not Invest treatment is also in the expected direction, with a negative slope revealing the opposite effect. Whereas the confidence intervals are broader for the manager plots, as they are in the regression models, this is due to two main effects-the smaller sample size of our manager pool and the relatively larger magnitude of the mean value of initial investment decision (Invest t=1 ) indicating that managers deviated less from their initial resilience investment decision in response to advisory information. We also note the steeper slope of the marginal effects of the Do Not Invest treatment. This again indicates that decision makers are averse to incurring the opportunity costs of inventories, and are more likely to not invest in inventories when their experience with disasters not occurring is congruent with advisory recommendations to not invest in inventories. As previously mentioned, many subject's investment decisions were time invariant or nearly so. Thus, in all models, the odds ratio for the initial decision variable is greater than one and statistically significant. Subjects who made the initial decision to invest in inventories had a significantly higher likelihood of continuing to invest in inventories, even after controlling for the volume of disasters experienced and the advisory information received. Additionally, because it is a dummy variable, the opposite case obtains for those subjects who initially decided to not invest in inventories. We also note that the magnitude of this odds ratio declines markedly when lagged DVs are incorporated, and particularly when GEE models are used because they relax the assumption that lags are independent of one another. 
This indicates that when dependence between decisions across rounds is permitted, the relative importance of the initial resilience decision diminishes. The variable Cumulative Count of Disasters falls short of statistical significance in all models. While we would expect that decision makers faced with increasing disaster frequency would be more likely to invest in resilience to reduce loss risk going forward, an equally plausible hypothesis would suggest that decision makers view multiple recent disasters as evidence that they will be far more infrequent going forward as they have already beaten the odds. We therefore retain this variable in the model as a control, but do not necessarily maintain an expectation for its direction of effect or statistical significance. The count of disasters does however provide for a meaningful way to view the treatment effects more broadly. By viewing the treatment effects of Advice by the running count of disasters, we can observe the degree to which the advisory information is persistent over the range of potential disaster outcomes, including the mean of 2.5 disasters. We provide this in margins plots in Figs. 5 and 6 for all subjects and managers, respectively. Again, these marginal effects hold all other values constant at their means-including the accuracy of the advisory information. While the slopes of the marginal effect plots are not statistically different from zero, the marginal effects and accompanying confidence intervals Fig. 4 Predictive margins of resilience investment, by accuracy of advice (managers). Horizontal line indicates mean predictive margin for treatment when accuracy of advice is held at the mean indicate that the regression model has predicted statistically significant treatment differences for up to eight disasters for all subjects (the maximum incurred by any subject), and five for the manager models. As appropriate, because the sample size of subjects incurring more than 2.5 disasters dissipates with an increasing number of disasters, the confidence intervals of the predictive margins widen as the count of disasters increases. These plots provide an accessible visualization of our regression results. They indicate that clear treatment effects obtain for the effect of advisory information on resilience investments, even accounting for relatively frequent disaster outcomes. Trust in resilience advice We also administered a small battery of questions in a postexperiment survey to elicit a richer understanding of subject rationale for resilience decisions and their perceptions of the experiment more broadly. Here, we report an important measure of subject affect toward the advisor-trust. We asked subjects a simple binary question: did you trust the advisors? Overall, 63.4% of subjects reported trusting the advisors. However, while 47.9% of subjects in the Do Not Invest treatment report trusting the advisors, 78.8% of subjects in the Invest treatment report trusting the advisors. Aside from random heterogeneity in individual subjectlevel affect (e.g., an individual's own experiences with advisory boards), systematic influences in subject-level trust here could be influenced by a subject's own experience with disasters during the ten periods in light of the advisory information given (i.e., Accuracy), or the vignette itself that introduced the advisors as experts appointed by the COO (which was held constant). 
Thus, while the effect of the vignette remains constant throughout all treatment groups, and while subjects' own heterogeneity should be negligible due to random assignment, the accuracy of advisory information can be assessed with a high degree of control. We evaluate subject-level trust of advisory information by conducting regression analysis on each subject's final-round (i.e., period 10) data. By the end of the tenth period, each subject's disaster outcomes and the accuracy of the advisory information have been fully experienced, and it is at this point in the subjects' timeline that they indicated their trust in the post-experiment survey. Table 9 provides the results of this cross-sectional logit model for each of our subject samples and for all subjects in total. We include our interactive treatment variables indicating the effect of accurate advisory information in each treatment, and we include the total count of disasters experienced. Again, each model reports coefficients as odds ratios.

[Table 9 note: Logit models reporting odds ratios and standard errors in parentheses for the final round (round #10). Model 1 provides results for all subjects, and models 2 and 3 provide results for managers and students, respectively. ***p < 0.01, **p < 0.05, *p < 0.1.]

The regression results indicate that subjects overwhelmingly trust accurate advice to invest in resilience. The results also, however, indicate that subjects distrust accurate advice to not invest in resilience. Both of these findings are consistent with Slovic's (1999) notion that people are more likely to trust sources of bad news (in this case, the advice to invest in inventories because of an impending negative event) than to trust sources of good news (in this case, the message that there is no need to invest). The results also indicate that subjects were increasingly distrustful of advisory information as they incurred more disasters, again consistent with the notion that events that betray trust are more perceptible to individuals (Slovic 1999). This result is consistent with expectations in terms of the accuracy of advice to invest in resilience. More accurate advice that turns out, over time, to provide a cautionary recommendation that benefits the decision maker should, on balance, be more favorably received. However, because of the relationship between the measure of accuracy and the count of disasters experienced, advisory information in the Do Not Invest treatment would be of lower accuracy for a larger count of total disasters experienced than for fewer disasters experienced. Thus, these results indicate that subjects have positive affect for advisory information to invest in resilience. This affect may stem from the fact that subjects perceive advisory information to invest as cautionary or protective.

We provide the predictive margins as plots in Figs. 7, 8, 9, and 10. Figure 7 provides the predictive margins of trust as a function of the accuracy of advisory information by treatment group for all subjects, and Fig. 8 provides these results for managers (again, student plots are omitted for brevity as they are quite similar to the plots for all subjects). Figure 9 provides the predictive margins by the total count of disasters experienced for all subjects, and Fig. 10 provides the same for managers. While the raw mean value of trust for all subjects is 63.4%, it is 58.0 and 73.4% for students and managers, respectively.
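As an illustration of the round-10 cross-section behind Table 9, the short sketch below fits a cross-sectional logit of the reported trust response and evaluates predicted trust at chosen accuracy levels, mirroring the comparison carried out next. The specification and the column names (trust, accuracy, total_disasters, treat_invest) are assumed rather than taken from the paper.

# Minimal sketch (assumed specification and column names): cross-sectional logit of the
# post-experiment trust response on round-10 data, with predicted trust at chosen accuracy levels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("resilience_panel.csv")
last = df[df["round"] == 10].copy()  # one row per subject, after all outcomes are realized

trust_model = smf.logit(
    "trust ~ treat_invest * accuracy + total_disasters",
    data=last,
).fit()
print(np.exp(trust_model.params))  # odds ratios, in the spirit of Table 9

# Predicted trust at treatment-specific accuracy levels (e.g., the 0.25 / 0.75 benchmark
# values used in the comparison that follows), holding disasters at the sample mean.
scenarios = pd.DataFrame({
    "treat_invest":    [1.0, 0.0],  # Invest advice vs. Do Not Invest advice
    "accuracy":        [0.25, 0.75],
    "total_disasters": last["total_disasters"].mean(),
})
scenarios["p_trust"] = trust_model.predict(scenarios)
print(scenarios.round(3))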
Given that the mean accuracy of advice to invest in resilience should be 0.25 for the average subject in the Invest treatment, and given that the mean accuracy of advice to not invest in resilience should be 0.75 for the average subject in the Do Not Invest treatment, we can evaluate the predictive margins at those values to compare the treatment effects. At those margins, the predicted probability that a subject trusts the advisory information to Invest is 72.9%, and the predicted probability that a subject trusts the advisory information to Not Invest is 58.1%. For managers, these values are 81.2 and 70.5%, and for students, 68.7 and 51.7%. Across the board, therefore, there is a nearly 15-percentage-point greater probability that a subject trusts advisory information to invest in resilience than advisory information to not invest in resilience, even after accounting for the subject's experience with prior disasters.

[Fig. 9 Predictive margins of trust for advisors by count of disasters experienced by treatment (after ten rounds, all subjects). Fig. 10 Predictive margins of trust for advisors by count of disasters experienced by treatment (after ten rounds, managers).]

Summary

Four key empirical findings of this study contribute to the knowledge gap regarding how individual decision makers make resilience investments related to the potential for repeated catastrophic events. First, our results are consistent with some of the cognitive biases that have been identified in earlier research. Both manager and student decision makers are much less likely to invest in economic resilience after a catastrophic event has just occurred. When facing the potential for another event, subjects appear to discount the probability in the next round based on their experience of an event immediately prior. We thus show that resilience investment decisions made with the potential for repeated catastrophic events are consistent with the recency bias and gambler's fallacy elements of prospect theory (Tversky and Kahneman 1971; Kahneman and Tversky 1979). This study also provides evidence of confirmation bias in resilience investment. Subjects overwhelmingly chose to invest in resilience initially, then subsequently reported increased trust of advice to make resilience investments. While findings consistent with prospect theory are not novel, more notable is that we find in our multiple-round randomized experiment that the provision of advice can moderate the biases decision makers may harbor and can even affect how perceptions of risk are updated.

Our second key empirical finding is that advice to decision makers to invest in economic resilience can help maintain resilience investments even for those experiencing a shock infrequently. In the first period, before subjects received any advice, 84.7% of subjects chose to invest in resilience (Young et al. 2017). Third, we find that advice sustains investment rates close to this high initial mean over subsequent rounds for every group except those who experienced zero shocks. Advice to not invest in inventories proves similarly influential, as it appears to prompt investment dissipation. Comparing advice to invest and advice to not invest, dissipation is larger for subjects who received advice to not invest coupled with a lack of disaster experience. Together, these results emphasize the influence of advice on economic resilience investments.
Finally, we discover that advisory information must be substantiated with a history of accuracy to be optimally effective and that not all information is similarly received. Subjects show a roughly 15-percentage-point greater probability of trusting advice to invest compared to advice to not invest.

Limitations

While the use of a controlled experiment is the only practical way to test our hypotheses regarding resilience decision making in the face of repeated hazard threat and the related role of advice, it is also important to acknowledge that there are limitations to vignette-based online experiments. While the subjects have real skin in the game by virtue of remuneration based on their performance in the experiment, a few dollars' lower payout as a result of experiencing a hypothetical disaster is not the same as managing through an actual catastrophic event. Likewise, a lower payout due to investing in resilience and not experiencing the disaster is different from having to justify to actual shareholders or bosses why profits were reduced in preparation for an event that never transpired. These pragmatic implications of resilience-enhancing expenditures are playing out as businesses grapple with recommendations of resilience officers above and beyond standard risk management practices. It is reasonable to assume that in practice, executives in an experiment like this one would be more attuned to the decision environment, while students may treat the experiment more like a game. It is heartening to find similar results and effects for both groups in these experiments.

Conclusions

Practically, the results suggest a role for public sector leaders in fostering a resilient economy, particularly because of the important microlinkages between firm and community resilience. They also help inform the strategies of public leaders tasked with emergency management and recovery. These government agencies and organizations should take into account the heterogeneity among firms in how they will receive and process advice based on their own disaster experiences. While larger firms have the resources to explicitly invest in resilience, they also often are implicitly more resilient by virtue of operating in multiple locations at a larger and less lean scale. In this study, heterogeneity manifests as a tendency to overemphasize the status quo for decision makers who had not experienced any catastrophic events.

Much of the scholarly work on economic resilience has tended to focus on single, rather than repeated, events because of the confounding factors that affect areas that are subject to multiple disasters. These findings, that advice and repeated events are both influential in resilience investment decisions, have important scholarly and practical implications. The experiment was framed such that neither the decision to invest nor the decision to not invest in resilience was the less risky option. Regardless of which decision the experimental subjects viewed as less risky and which hazards they experienced, advice, and particularly more accurate advice, helped to overcome decision-making biases. This type of work may inform how advice is given in the presence of over-investment or under-investment in resilience. Under-investment is a current public policy issue. Despite frequent and severe natural and human-induced disasters, many businesses fail to adequately invest in resilience. This may be because of the high tradeoffs small- and mid-sized firms face or due to status quo bias.
Our experiment incorporated a vignette that forced decision makers to think about that risk. If the financial pressures of being a small- or medium-sized company operating in a competitive market are such that even trusted advice is ignored, there may be a role for the government to instead increase its resilience investments. With respect to over-investment, some scholars have argued that in some cases in which perceived risks are too high, too many resources may be directed toward disaster preparedness (Mueller 2006). In these industries, regions, or contexts, our results suggest that a firm with a history of unnecessary resilience investment will begin heavily discounting the value of those investments on its own. The influence of agency advice may hasten the reallocation of resources toward more efficient uses. Specifically, decision makers are averse to incurring the opportunity costs of inventories and are more likely to not invest in inventories when their experience with disasters not occurring is congruent with advisory recommendations to not invest in inventories.

Trust in expertise and authority and confidence in the effectiveness of protective actions are essential influencers of risk perception (Wachinger, Renn, Begg, and Kuhlicke 2013). In fact, trust and confidence are second only to personal experience with disasters (Wachinger et al. 2013), though the type of risk (i.e., type of disaster) determines the strength of the relationship (Viklund 2003). Perception, along with experience and trust, is among the causal mechanisms prompting the pursuit of disaster preparedness measures (Wachinger et al. 2013) or behavioral changes in epidemics (Van Bavel et al. 2020). This has prompted Wachinger et al. (2013) to conclude that "trust in authorities is necessary to build up a social climate in which advice from authorities will be taken into account in a crisis situation" (p. 1061). Historically, government agents communicating risk have not garnered a high degree of trustworthiness, perhaps due to factors inherent in our participatory democracy system (Slovic 1993; Trumbo and McComas 2003). Our results suggest an additional challenge for authorities and leaders communicating risk and resilience information, namely, differences in how advice with varying content is received. While decision makers in our study overwhelmingly trust advice to invest in resilience, they tend to distrust advice to not invest. This is consistent with the literature describing the asymmetry of trust, or the phenomenon in which distrust is relatively more difficult to overcome due to both the visibility and over-weighting of negative (trust-destroying) events (Slovic 1993, 1999). Some distrust is likely prompted by uncertainty. Advice accuracy positively relates to resilience investment for both types of advice (invest and do not invest), but the research subjects tend to trust advice to invest. The practical application, then, is that in situations in which advisors (i.e., government agencies or other economic organizations) seek to combat over-investment in resilience, they should consider the potential that this message will be met with distrust, indifference, or disregard. At the same time, LeClerc and Joslyn (2015) found that increasing uncertainty as part of the advice message offered helped to alleviate the risk that false alarms might lead to future advice being discounted.
The implication in this context is that when providing information or advice related to investing or not investing in inventories, presenting ranges of probabilities of events or outcomes may increase the trust in that advice. Presenting reliable information in a joint participation exercise may also increase trust between the public, experts, and authorities (Wachinger et al. 2013). In light of this literature and our findings, we propose that when government agencies provide advice against investing in resilience, they do so by way of participatory processes.

Funding
The funding was provided by the National Center for the Middle Market and the Battelle Center for Science and Technology Policy.

Data availability
All of the data and models that support the study findings are available upon request from the corresponding author for purposes of replicating or validating findings. However, during the period in which the results of this suite of experiments are being published in this and other outlets, the data will not be posted to publicly available repositories. Upon completion of the authors' use of the data, the finalized data will be made publicly available and posted online, and requesters' use of the data for subsequent publication and broader dissemination will be permitted.

Code availability
Code is available upon request.

Conflict of interest
The author declares that they have no conflict of interest.
Carbon-Based Materials as Catalyst Supports for Fischer–Tropsch Synthesis: A Review

The use of carbon-based materials as catalyst supports for Fischer–Tropsch synthesis (FTS) is thoroughly reviewed. The main factors to consider when using a carbonaceous catalyst support for FTS are first discussed. Then, the most relevant and recent literature on the topic from the last 2 decades is reviewed, classifying the different examples according to the carbon structure and shape. Some aspects such as the carbon textural properties, carbon support modification (functionalization and doping), catalyst preparation methods, metal particle size and location, catalyst stability and reducibility, the use of promoters, and the catalyst performance for FTS are summarized and discussed. Finally, the main conclusions, advantages, limitations, and perspectives of using carbon catalyst supports for FTS are outlined.

INTRODUCTION

Hydrocarbons are the most widely used chemicals and fuels and are the main driving force of occidental social well-being. The major part of the hydrocarbons on earth is produced from crude oil, which provides approximately 33% of the current world's primary energy requirements, followed by coal (27%) and natural gas (24%). In the past 10 years, oil consumption has grown globally by an average of 1.1% (1.1 million barrels per day), Asia being the region that has shown the highest growth and where coal consumption is still dominant. Furthermore, the global proved oil reserves account only for around 45 years at the current consumption rate, whereas the estimates of the extent of available reserves of natural gas and coal seem to be around 50 and 132 years, respectively (BP Statistical Review, 2020). Therefore, the growing global demand for crude oil, together with its fast depletion rate and the implementation of more stringent environmental legislation on liquid fuels, boosts the use of alternative and sustainable hydrocarbon sources. In this sense, Fischer-Tropsch synthesis (FTS) is an alternative industrial process for the production of clean liquid fuels and value-added chemicals from synthesis gas (a mixture of CO and H2), which can be derived from nonpetroleum feedstocks including natural gas, coal, and renewable biomass (mainly lignocellulosic biomass) (Noureldin et al., 2014; Ren et al., 2019). Depending on the feedstock, the process is referred to as GTL (gas-to-liquid), CTL (coal-to-liquid), or BTL (biomass-to-liquid). Nowadays, there are large commercial FTS plants operating worldwide that produce liquid fuels and hydrocarbons from syngas obtained by partial oxidation and steam reforming of natural gas and by coal gasification (Lappas and Heracleous, 2016). However, the vast majority of BTL schemes, which use syngas from gasification of biomass, are in the pilot or demonstration phase. The development of a commercial BTL process seems to be hindered by the limited commercial experience in biomass gasification and its integration with fuel production processes and by the high capital costs associated with the BTL technology (Lappas and Heracleous, 2016). Transition metals are used in the FTS process due to their considerable activity. Among them, Fe and Co are the only industrially relevant catalysts that are currently commercially used in FTS. The choice of catalyst depends primarily on the FTS operating mode: (1) the so-called low-temperature Fischer-Tropsch synthesis (LT-FTS) and (2) high-temperature Fischer-Tropsch synthesis (HT-FTS) (Steynberg, 2004).
In the former case, at lower temperatures, mostly long-chain paraffins (wax) are produced over either Fe- or Co-based catalysts. This wax is afterward (hydro)cracked into the desired product spectrum (Luque et al., 2012). On the other hand, the use of Fe-based catalysts at HT-FTS conditions (300-350°C) is typically aimed at producing short-chain unsaturated hydrocarbons, olefins (Fischer-Tropsch to olefins, FTOs), and oxygenates (Torres Galvis et al., 2012b). Furthermore, high selectivities toward gasoline-range hydrocarbons can be achieved using Fe at HT-FTS conditions (Steynberg et al., 1999). Moreover, the choice of metal also depends on the feedstock used for the FTS. When high-purity syngas is used, Co-based catalysts are preferred because of their higher intrinsic activity for FTS and higher selectivity toward linear products than those of Fe (at similar conditions). Moreover, Co-based catalysts present a low activity toward the water-gas shift (WGS) reaction and a high hydrogenation activity, and therefore produce fewer unsaturated hydrocarbons and oxygenates, while having a higher catalyst stability (Munirathinam et al., 2018). Therefore, cobalt-based catalysts are the choice when using syngas with H2/CO ≥ 2, such as that produced from natural gas as feedstock (Aasberg-Petersen et al., 2004). On the other hand, iron-based catalysts are cheaper and more widely available than Co-based catalysts and present a high flexibility in terms of operating conditions and poisoning, making their use possible at different temperatures and H2/CO molar ratios (Abelló and Montané, 2011). This flexibility for different H2/CO ratios is of great interest when using syngas derived from biomass or coal gasification, which presents H2/CO molar ratios lower than 2 (Lappas and Heracleous, 2016). This is related to the high WGS activity of Fe-based catalysts, which could compensate for the lack of hydrogen until reaching the stoichiometric proportions required for the FT reaction (Sartipi et al., 2014). Supported catalysts for FTS have been extensively studied in academia, although only Co-supported catalysts have been used to date at industrial scale in LT-FTS (Luque et al., 2012). In this sense, FT catalyst structure and performance are highly influenced by the catalyst support. Conventional inorganic materials such as Al2O3, SiO2, TiO2, and zeolites have been frequently studied to disperse and stabilize both Fe and Co catalyst nanoparticles (Sun et al., 2000; Prieto et al., 2009; Sartipi et al., 2014; Abrokwah et al., 2019). Unfortunately, highly dispersed cobalt and iron nanoparticles can interact with the metal oxide support during the thermal activation treatments (high-temperature calcination and/or reduction), resulting in the formation of cobalt- and iron-supported mixed compounds (i.e., cobalt and iron silicates in the case of Co/SiO2 and Fe/SiO2 catalysts, respectively), which are hardly reducible and therefore nonactive in the FTS reaction (Lund and Dumesic, 1981; Tauster et al., 1981; Munirathinam et al., 2018). In order to tackle this issue, the use of more inert materials, such as carbon-containing supports, has been proposed. Carbon-based materials have been reported to minimize the metal-support interactions because of their inert nature, high surface area and tunable porous texture, and surface chemistry.
The high thermal conductivity of carbon materials is an additional advantage of carbon-based supports for FTS catalysts, which favors the catalyst heat-transfer properties during the highly exothermic FTS reaction (Chin et al., 2005; Asalieva et al., 2020). Thus, carbon-based materials have been successfully applied as catalyst supports for FTS (Xiong et al., 2010; Moussa et al., 2014; Dlamini et al., 2020), but only in the area of research, because they have not yet been used at an industrial level, as already mentioned. Figure 1 shows the evolution of the number of research papers published regarding the use of carbon-based materials as catalyst supports for FTS. We have selected for the present work 164 research papers reported in scientific journals of high impact (80% Q1) indexed in JCR in the areas of knowledge of Chemistry, Chemical Engineering, Environmental Engineering, and Materials Technology, published from 2004 onward. Approximately 25% of the studies related to catalysts for FTS are devoted to catalysts supported on carbon materials, and 35% of these research studies have been reported in the last 3 years, which highlights the growing interest in this topic. Accordingly, 67.3% of the research papers correspond to carbon-supported Co catalysts, 32.7% to Fe-related catalysts, and 2.5% to Ru catalysts. Ru has been less investigated due to the high metal costs and, therefore, the greater difficulty of implementation in industrial applications. Furthermore, the most active areas of research over the past 2 decades have been activated carbons (AC, 15%) and multiwall carbon nanotubes and nanofibers (CNTs and CNFs, respectively, 42%). More recently, there has been an increasing interest in the use of new carbon materials with uniform structures for the FTS process, such as ordered mesoporous carbons (OMC), carbon spheres (CS), and graphene. An increasing number of studies have also investigated the use of carbon/oxide hybrid supports (i.e., C/Al2O3, C/SiO2, and C/HZSM-5) and, very recently, metal-organic framework (MOF)-derived catalysts. In the latter case, the thermal decomposition under inert conditions of Co- and Fe-containing MOFs, namely the MOF-mediated synthesis technique, resulted in carbon-doped Co and Fe metal catalysts with very interesting results for FTS (Santos et al., 2015; Otun et al., 2020). Figure 2 summarizes the different carbon support materials reported in the literature for FTS catalysts, classified according to the carbon structure and morphology. Herein, we have summarized the most relevant and recent advances in the conversion of syngas to hydrocarbons by FTS using heterogeneous catalysts supported on different carbon-based materials (AC, OMC, CNTs, CNFs, CSs, and graphene) reported during the last 2 decades. We first bring to the attention of the reader important characteristics to take into account in the use of carbon materials in catalysis, particularly for the FTS process. Afterward, the different examples from the literature are thoroughly reviewed and discussed by classifying the carbon-based supports according to their properties, structure, and morphology. Otun et al. (2020) have recently reported a review on the use of MOF-derived catalysts in the FTS process. It is also to be noted that several important reviews on carbon supports for FTS catalysts have appeared in the last decade (Xiong et al., 2015; Ahn et al., 2016).
However, this review focuses on the general relationship between the carbon catalyst properties, structure, and morphology and the catalytic performance in the FTS process from the most relevant and recent literature, with special emphasis on some catalyst aspects, such as pore structure, carbon support modification (functionalization and doping), catalyst preparation methods, catalyst stability, reducibility, metal particle size and location, and the use of metal promoters.

[FIGURE 1 | Evolution of the literature reported for different carbon-based supports for FT catalysts in the last 2 decades. The values in parentheses in the legend indicate the percentage of research papers dedicated to each carbon support. Source: Scopus.]

Furthermore, the FTS catalyst performance in terms of FTS activity (CO conversion, metal-time yield (MTY), and turnover frequency (TOF)) and hydrocarbon selectivity is also tabulated for all the carbon-based supports (Supplementary Tables S1-S7), compared, and discussed in depth. Finally, in the conclusions, some challenges and future perspectives regarding the industrial feasibility of carbon-based supported FT catalysts are also considered.

GENERAL CONSIDERATIONS ABOUT THE USE OF CARBON MATERIALS AS FT CATALYST SUPPORTS

The interest in carbon-based supports for catalytic applications relies on the great advantages they exhibit, such as good thermal conductivity, high specific surface area, and high thermal and chemical stability under mild operating conditions. On the other hand, if biomass or lignocellulosic residues are used as carbon precursors, this would result in an additional advantage from the economic and environmental points of view (Rosas et al., 2010; Moulefera et al., 2020). Another remarkable feature that carbon supports possess is the possibility of tailoring both their porous structure and surface chemistry, not only during the preparation procedure but also by further modification via different chemical and thermal treatments, which allow bonding extra heteroatoms, such as oxygen and nitrogen surface groups, on their surfaces (Figueiredo et al., 1999; Xiong et al., 2014b). Figure 3 shows typical oxygen (Figure 3A) and nitrogen surface groups (Figure 3B) on the carbon surface. The nature and concentration of functional groups on the surface of carbon materials have a significant influence on the catalyst dispersion and reducibility, given that these surface groups act as anchoring sites for the active phase of supported catalysts, and they can even be the active sites for specific catalytic reactions. Oxygen functional groups are the most important in this context, as they can be formed spontaneously by exposure to the atmosphere or can be further generated or modified by oxidative and/or thermal treatments, in the liquid phase ((NH4)2S2O8, HNO3, H2O2) (Moreno-Castilla et al., 1995; Moreno-Castilla et al., 2000; Palomo et al., 2017) or in the gas phase (O2, O3, N2O, and HNO3 vapor) (Figueiredo et al., 1999; Valero-Romero et al., 2017). Nevertheless, such treatments, particularly those under severe oxidizing conditions, may cause the partial destruction of the pore structure of the carbon material due to its gasification to CO2 (and CO). Regarding the preparation of catalysts for the FTS, it has been reported that metal-support interactions play a crucial role in the activity of the catalysts (Xiong et al., 2015).
The main goal of any catalyst support is to enhance metal dispersion, giving rise to a higher number of active sites on the catalyst surface as compared to the unsupported catalyst. Due to their inert nature, especially when they are prepared at high carbonization temperatures, carbon materials produce weak interactions with the supported FT metal catalyst, which was investigated for the first time by Jung et al. (1982). Since then, many publications can be found in the literature on this issue, most of them dealing with the reduction of the metal-support interactions (Hernández Mejía et al., 2018; Van Deelen et al., 2019) and with the possibility of achieving a relatively low metal particle size (Bezemer et al., 2006a). However, these weak metal-support interactions are not always beneficial from the catalytic viewpoint. Metal sintering can also take place under reaction conditions, resulting in the loss of catalytic activity (De Smit and Weckhuysen, 2008). In order to overcome this metal sintering process and to achieve higher metal dispersion values, several authors studied the modification of the carbon surface with oxygen and nitrogen surface groups, to which the FT metal catalyst can be bonded during the impregnation stage (Xiong et al., 2010; Xiong et al., 2014a; Xiong et al., 2014b; Chernyak et al., 2016). Another important issue concerns the maximum temperature at which carbon supports are stable under the operating conditions required for the FTS process. HT-FTS is usually carried out at 340°C. However, the conditions for iron/cobalt reduction usually require a higher temperature, above 350°C, under hydrogen flow (Chen et al., 2018). In this regard, several authors have observed the evolution of methane during the reduction treatment from 350°C (Bezemer et al., 2006b; Fu et al., 2014b; Valero-Romero et al., 2016). The chemical and thermal stability of a particular carbon-based catalyst depends on several aspects, such as the metal content. Therefore, catalyst stability measurements during reduction should be addressed for each individual carbon-based catalyst in order to determine the optimum reduction temperature. On the other hand, the use of carbon materials as supports of iron catalysts for FTS seems to be beneficial due to the easier formation of iron carbide species, which have been claimed to be the active phases for this reaction (Chen et al., 2008; Wezendonk et al., 2018). On the contrary, the formation of carbides has been reported not to be useful for cobalt FTS catalysts, where the active phase is metallic cobalt. In this line, the formation of cobalt carbides has been reported to lower the catalytic performance of FTS catalysts, giving rise to high amounts of methane (Mohandas et al., 2011). However, carbon materials can be used as supports of cobalt catalysts if harsh preparation conditions responsible for the formation of cobalt carbides are avoided. High pore volume and high mean pore size have been reported to be important parameters to control metal particle size and dispersion on carbon materials for the FTS process (Ahn et al., 2016). A carbon support with a well-developed mesoporous and macroporous structure would have excellent advantages in the FT reaction, because larger pores benefit the diffusion of the reactants and hydrocarbon products to and from the catalytically active reaction sites, respectively, thus enhancing the production of longer hydrocarbon chains.
Finally, another interesting issue to take into account when carbon-based supports are used in the FTS process is the possibility of recovering the metal phase after aging or deactivation of the catalytic system under FTS conditions by a simple combustion or gasification of the carbon support. This type of active phase recovery would also result in a negligible net increase of CO2 emissions to the atmosphere if renewable biomass had been used as the carbon source in the catalyst preparation, contributing to the reduction of greenhouse gas emissions. In addition to this, the spent (aged or deactivated) catalysts and/or mixtures of residual lignocellulosic biomass and spent catalysts could be used as feedstock in the gasification reactor for the production of syngas for the FTS (Figure 2).

Catalysts Supported on Activated Carbons

The earlier studies on carbon-supported catalysts for the FTS process focused on the use of AC, black carbon, and glassy carbon as supports. These works were dedicated to studying how to achieve small metal particles, and hence a high metal dispersion, and to studying the metal-support interactions (Xiong et al., 2015). In recent years, however, ACs have been mostly studied as model catalyst supports for the FTS reaction, with the purpose of analyzing the effect of the carbon nature and porous texture as compared with other supports and with the aim of analyzing the effect of metal promoters on the FTS catalyst performance. The preparation of AC can be achieved using several kinds of lignocellulosic waste as carbon precursor and by different chemical and/or physical activation processes, giving rise to carbon materials possessing different porous textures and surface chemistry. In this sense, a particular surface chemistry can be achieved by choosing the proper activation process (Rodríguez-Reinoso and Molina-Sabio, 1992; Rosas et al., 2008; Rosas et al., 2010). According to the literature, most of the ACs used for research as FTS catalyst supports were commercial (purchased), with coconut shell or almond being the most commonly used carbon precursors, and as for the catalyst preparation method, incipient wetness impregnation (IWI) was commonly selected. In general, Co- and Fe-supported AC catalysts were characterized by a high BET surface area and pore volume and by a high contribution of microporosity (pores <2 nm). Table 1 shows the textural parameter range values of different carbon-supported catalysts for the FTS process from the literature reviewed in the present work, and Table 2 summarizes the FTS performance for the most relevant Fe- and Co-supported AC catalysts and their metal loading; when a metal promoter is used, its loading is also shown. In addition to this, further details, such as the metal particle size, C2-C4 olefin/paraffin ratio (O/P), and α value obtained from FTS, are reported in the supporting information file (Supplementary Table S1).

Effect of Metal-Support Interactions and Activated Carbon Porous Texture on FTS

Activated carbon has been used as a model catalyst support in order to study the influence of metal-support interactions on the FTS catalyst performance. In this line, Cheng et al. (2014) studied the preparation of iron catalysts supported on silica and on different carbon-based materials. α-Fe2O3 was the main iron phase on silica supports, whereas magnetite (Fe3O4) and/or maghemite (γ-Fe2O3) were mainly present on the carbon supports.
The presence of partially reduced iron oxides was related to the carboreduction of iron oxides during the iron nitrate precursor decomposition stage. These authors also found a higher carburization extent for the carbon-based iron catalysts as compared to the silica-supported counterparts during a CO activation stage. The higher activity found for the carbon-supported catalysts was attributed to the presence of iron carbide-magnetite composites. Among the carbon-based catalysts, the one prepared using AC as support presented the second highest activity, being only surpassed by the one prepared using carbon nanotubes as support. The Fe/AC catalyst showed a CO conversion of 64%, with high selectivity to C5+ hydrocarbons (53.7%) (entry 1, Table 2) and a relatively high O/P ratio of 1.2. Similar conclusions were found by Jiang et al. (2017), who also compared the preparation of iron catalysts on ACs and different inorganic supports.

The Fischer-Tropsch reaction is structure sensitive, with the conversion and the product distribution being affected by the particle size of the active phase and by the porous texture of the support. On this issue, Fu et al. (2014b) studied the effect of pore size on the activity of cobalt-based catalysts supported on ACs and CNTs for FTS. The extent of reduction of the Co/AC catalyst was the lowest, presumably due to the higher metal-support interactions, which gave rise to the lowest CO conversion value. In addition, this catalyst presented the highest selectivity to methane and the lowest selectivity to C5+. These catalytic features were justified based on the presence of a narrow microporosity and a low cobalt particle size, which resulted in a higher diffusion rate for H2 as compared to CO and, consequently, a high H2/CO ratio inside the pores. Likewise, Chen et al. (2012) compared two AC supports with different average pore sizes (5.2 vs 2.5 nm). The catalytic results were in line with the findings reported by Fu et al. (2014b). The mesoporous carbon-based catalyst (15Co/MC) presented more than twice the CO conversion value of the microporous carbon-based catalyst (15Co/AC), 73.1 vs 29.7%, respectively (entries 6 and 7, Table 2). In addition, the former presented a higher selectivity to C5+ hydrocarbons and a lower selectivity to methane.

Effect of Metal Promoters on FTS

The most commonly used promoters for Fe/AC catalysts are K, Mn, and Mo. On the other hand, K, Zr, Ce, Cr, Na, and Mn have been studied as catalyst promoters for Co/AC catalysts. Figure 4 represents the increase in the CO conversion and selectivity values to the main reaction products for some Co- and Fe-supported catalysts on AC in comparison with those for the unpromoted counterpart. For example, Ma et al. (2007) observed that a K content of 0.9 wt% produced an improvement of CO conversion as compared to the unpromoted catalyst (entries 2 and 3, Table 2). The selectivity to CO2 was also increased with K promotion, due to the enhancing effect of K on the WGS reaction. Additionally, methane selectivity was reduced, whereas longer chain hydrocarbons were formed. Furthermore, the O/P ratio was also increased from 0.1 to 5 for C2-C4 short hydrocarbons. In this line, Chernavskii et al. (2018) observed that the catalyst prepared by first loading iron and then potassium as promoter showed a CO conversion value of 62%.
However, when following the reverse sequence for metal deposition (first K, then Fe), the CO conversion reached a value of 87.2% (entries 4 and 5, Table 2). These differences in CO conversion were attributed to the magnetite particle size formed in each catalyst. The authors claimed that the alkalinization of the AC prior to iron impregnation increased the number of oxygen-containing groups on the AC surface, giving rise to the formation of more nucleation centers for Fe3+ ions; consequently, smaller magnetite particles were formed when K was loaded first on the AC. Ma et al. (2006) studied the effect of Mo loading (6 and 12 wt%) on the properties and the catalytic performance of Fe-Cu-K-supported AC catalysts. The addition of Mo significantly inhibited iron reduction, which was attributed to the existence of iron-molybdenum mixed oxides, which were more difficult to reduce. Promotion with 6 wt% Mo showed a lower initial syngas conversion value (58%), which increased with time-on-stream up to 81% and remained unaltered for more than 80 h, which was attributed to the capability of Mo to inhibit iron sintering (Zhao et al., 1994). The addition of Mo also increased the selectivity to methane and C2-C4 hydrocarbons, lowered the selectivity to C5+, and reduced the O/P ratio (Figure 4). Tian et al. (2017) studied the effect of Mn as promoter for Fe/AC catalysts. Prior to iron impregnation, the AC support was oxidized with KMnO4. During the KMnO4 treatment, a redox reaction between the carbon surface and the KMnO4 occurred, yielding a uniform MnO2 layer covering the oxygenated surface groups of the AC. Additionally, K was deposited on the carbon surface. The presence of K and Mn in the catalysts resulted in an enhancement of the CO conversion value (37% with respect to that of the unpromoted catalyst) (Figure 4). Furthermore, the selectivities to the main reaction products were also affected by the promoters, with an increase of 46% in the C2-C4 selectivity and a decrease of 44% in the C5+ selectivity with respect to those of the unpromoted catalyst. In addition, the O/P ratio outstandingly increased from 0.65 for the unpromoted catalyst to 4.88 for the promoted one. The promotion effect of Mn was associated with the synergistic effect of MnO and Hägg carbides in enhancing CO adsorption and dissociation, while K helped to form iron carbides on the AC surface.

In summary, K promotion of Fe-based catalysts resulted in an increase of the CO conversion, when it was loaded in certain controlled amounts, and an enhancement of the activity for WGS. Additionally, the olefin/paraffin ratio and the C5+ selectivity values were increased. Mn and K promotion enhanced the CO conversion value and gave rise to a higher C2-C4 olefin production. On the other hand, the addition of Mo as a promoter has been shown to lower the initial activity but also to enhance the catalyst stability. Ma et al. (2004) studied the effect of Zr, K, and Ce as promoters for Co/AC catalysts. K acted as a strong poison for the catalyst, decreasing syngas conversion and methane selectivity as compared to the unpromoted catalyst (Figure 4), which was attributed to the possible coverage of cobalt active sites by K. On the contrary, both Zr and Ce had a positive impact on the catalytic activity. Zr promoted CO conversion without largely modifying the hydrocarbon selectivity values and the activity for WGS.
Ce enhanced both syngas conversion and activity for WGS and increased the methane and C2-C4 selectivity values. The positive effects of Zr and Ce as promoters were attributed to the improvements found in cobalt dispersion and to the enhanced interaction between cobalt and the oxygen surface groups resulting from the addition of Zr and Ce to the AC.

Effect of Promoters in Cobalt-Based Catalysts

Based on these results, Wang et al. (2008) conducted an in-depth study focused on the effect of lanthanum in Zr-promoted Co/AC catalysts. The active phase of the catalysts was composed of 10 wt% Co, 4 wt% Zr, and different amounts of La. The presence of La in the catalyst increased the cobalt reducibility at low La loadings. However, La was detrimental at higher loadings. The CO conversion value increased from 86.4 to 92.3% when the content of La increased from 0 to 0.2% (entries 8 and 9, Table 2), and the C5+ selectivity was higher compared to the unpromoted catalyst. Promotion of Co/AC catalysts with Cr (0-5 wt%) resulted in the reduction of the cobalt species crystal size, except for the highest Cr content. Additionally, the extent of reduction of the catalyst was also enhanced by the presence of Cr. Structural analyses of the active sites showed that the presence of Cr suppressed the formation of cobalt carbides. The reaction results showed an increase in both the CO conversion and the selectivity to C5+ values for a Cr content of 2% (Figure 4). Furthermore, the O/P ratio was lower after Cr promotion. The authors attributed these catalytic features to the H2-rich surface environment caused by Cr promotion in the catalyst, which facilitated the α-hydrogen addition step and simultaneously suppressed the β-hydride elimination and CO insertion steps. Jiang et al. (2020) studied the promotion effect of Mn and Na on Co/AC catalysts. The unpromoted catalyst showed a CO conversion of 100% with a very high selectivity to methane (54.4%). Na promotion slightly decreased the CO conversion and methane selectivity values (98.7 and 49.9%, respectively). On the other hand, the presence of Mn decreased the CO conversion value to 83.3% and noticeably reduced the methane selectivity to 15.3%. The simultaneous presence of Mn and Na in the catalyst further reduced the CO conversion value from 83.3 to 73.8% as compared to the one for the catalyst promoted only with Mn (entries 10-13, Table 2). Furthermore, the O/P ratio experienced a noticeable increase, from 0.65 for the catalyst promoted with Mn to 1.54 when using Na and Mn together as promoters. Another feature observed in this work was the capacity of Na to enhance the WGS reaction activity in cobalt catalysts.

Catalysts Supported on Ordered Mesoporous Carbons

The presence of narrow micropores in ACs resulted in internal diffusion limitations for reagents and products when they were used as catalyst supports for the FT process (Chen et al., 2012; Fu et al., 2013; Fu et al., 2014b). To overcome this problem, the use of carbons with a wider porous structure, such as ordered mesoporous carbons (OMCs), has been studied. Co and Fe supported on OMCs are characterized by exhibiting an ordered and well-defined mesoporous texture, with a large pore volume and an average pore diameter in the range of 2-6 nm (Table 1).
The preparation of OMCs can be carried out by two approaches: (1) the hard-template method, in which an inorganic material, such as SBA-15, is used as a template for a carbon source and is then removed by HF/NaOH treatments; and (2) the soft-template method, also called evaporation-induced self-assembly (EISA), which involves the use of an organic structure-directing agent, such as Pluronic F127. In this case, the template is removed during the carbonization step. Supplementary Table S2 summarizes the FTS performance for the most relevant Fe and Co/OMC catalysts. Knox et al. (1986) were pioneers in reporting the preparation of OMCs by the hard-template method. Among the reported OMCs, CMK-3, a hexagonally structured OMC, is the most commonly used OMC support for FTS catalysts. This material was first synthesized by Jun et al. (2000), using SBA-15 as the hard template and sucrose and H2SO4 in aqueous solution as the carbon source. For example, Oschatz et al. (2016b) prepared OMC-supported iron catalysts using CMK-3. The active phase (Fe, Na, and S) loading was carried out by IWI. Afterward, the catalysts were stabilized at different temperatures. Hematite nanoparticles were found for calcination temperatures up to 500°C. Above this temperature, iron carbide species and metallic iron were found. Additionally, iron particles wrapped by a graphite shell forming core-shell structures were found at temperatures above 800°C, which lowered the catalytic activity. The lowest selectivity to methane and the highest selectivity to C2-C4, 13.4 and 59.5%, respectively, were achieved for the catalyst carbonized at 500°C (entry 1, Supplementary Table S2). An outstanding O/P ratio value of 10 was attributed to the efficient promotion of the catalyst with S.

Effect of the OMC Preparation: The Hard-Template Method

Likewise, Kang et al. (2017) studied the preparation of iron catalysts for FTS using CMK-3 as support. In this case, metal loading was carried out by directly grinding the CMK-3 support with iron nitrate (a physical mixture). The catalyst was stabilized under a CO atmosphere. Uniformly distributed Fe5C2 nanoparticles were found in the pore system of the CMK-3. After a long induction period, the catalyst reached a 91.4% steady-state CO conversion value. The product distribution remained unaltered during the induction period, showing selectivity values to CH4, C2-C4, and C5+ of 23.3, 68.3, and 8.3%, respectively (entry 2, Supplementary Table S2). Co-supported CMK-3 catalysts were also studied by Fu et al. (2013) and Li et al. (2019). The loading of cobalt (20%) was carried out by ultrasonication-assisted IWI followed by stabilization at 200°C (Fu et al., 2013) and 350°C (Li et al., 2019). The average pore size of the catalyst was smaller for the sample treated at the highest temperature. However, similar CoOx crystallite sizes were found for both catalysts. The CO conversion values were also very similar for both catalysts. Nevertheless, the catalyst carbonized at 200°C (Fu et al., 2013) yielded a higher production of diesel-range hydrocarbons than gasoline ones, 48 vs 35%, whereas the catalyst stabilized at 350°C presented a higher selectivity to hydrocarbons in the gasoline range (entries 10 and 11, Supplementary Table S2). Zhao et al. (2020) studied the preparation of OMC-supported cobalt catalysts. In this case, for the preparation of the OMC support, SBA-16 was used as the hard template, and furfuryl alcohol (FA) and oxalic acid in ethanol solution were used as the carbon source.
After a carbonization stage, HF was used to eliminate the silica template, and cobalt loading was carried out by IWI using cobalt nitrate. The CoO crystallite size slightly increased with increasing support carbonization temperature due to the weakening of the metal-support interactions. The catalyst presenting the highest catalytic activity was the one prepared using the OMC carbonized at the highest temperature (1,300°C), due to lower cobalt-support interactions and higher reducibility of the cobalt species. This catalyst showed a CO conversion value of 49.7% and a selectivity to C5+ hydrocarbons of 74% (entry 12, Supplementary Table S2).

Effect of the OMC Preparation: The Soft-Template Method

Liu et al. (2017) studied the preparation of OMC-based cobalt FTS catalysts in a single step using Pluronic F127 as directing agent, resorcinol and formaldehyde as carbon sources, and cobalt nitrate as metal precursor. FTS experiments showed a CO conversion value of 30.2%, with selectivity values to methane and to C5+ hydrocarbons of 15.2 and 81.5%, respectively (entry 13, Supplementary Table S2). The catalytic activity reported in this study is lower than those reported in other studies working under similar operating conditions (Fu et al., 2013; Li et al., 2019; Zhao et al., 2020). A possible explanation for the lower catalytic performance reported by Liu et al. (2017) could be the relatively large cobalt particle size and the lack of accessibility to these cobalt particles due to their deep embedment in the carbon support during the synthesis procedure.

Tailoring OMC Supports for Controlling Metal Crystallite Size

Metal particle size is a highly important Fischer-Tropsch catalyst feature, and thus, different catalyst synthesis strategies have been proposed to control the size of the active phase on OMC supports. Yang et al. (2012) carried out a study dedicated to controlling the cobalt particle size in OMC-supported FTS catalysts by modifying the OMC synthesis procedure. For this aim, different amounts of FA (carbon source) were introduced into SBA-15 (hard template). Cobalt loading was carried out by IWI using cobalt nitrate. The average size of the cobalt particles on the catalyst increased with increasing FA content. Additionally, the location of the cobalt particles shifted to the outer surface with increasing FA content. The highest activity was achieved for the catalyst prepared using 50% FA (CO conversion value of 45.07%, with methane, C2-C4, and C5+ selectivity values of 24.6, 11.31, and 64.09%, respectively) (entry 14, Supplementary Table S2). More recently, Yang et al. (2014) carried out a study to control the cobalt particle size of FTS catalysts using an N-doped OMC as catalyst support. Nitrogen incorporation was carried out by a postsynthetic route using cyanamide. The metal loading was carried out by IWI using cobalt nitrate in acetone solution. The authors found that the higher the N content in the support, the smaller the cobalt particle size. This fact was associated with the capacity of N to improve the dispersion of cobalt metal species and to form more uniform particles. The TOF values and the catalytic activity increased with increasing cobalt particle size up to 10 nm. Above this value, the TOF remained constant, but a decrease in the catalytic activity was observed.
Likewise, Sun et al. (2012) investigated the preparation of OMC-supported catalysts with different iron crystallite sizes for the FTS reaction. The preparation of the OMC-based catalysts was carried out using Pluronic F127 as directing agent, resol as carbon source, and iron nitrate as metal precursor. Different amounts of a chelating agent, acetylacetone, were used with the aim of controlling the metal particle size. The iron particle size decreased with increasing acetylacetone content. The catalyst presenting the smallest iron particle size (8.3 nm) showed the highest CO conversion value (90.1%), with a low selectivity to CO2 (13.3%) and a very high selectivity to C5+ hydrocarbons (>68%) (entry 3, Supplementary Table S2). Cheng et al. (2014) controlled the iron particle size by varying the solvent (water or ethanol) used for the metal impregnation process. The catalyst prepared using water as the iron nitrate solvent showed an average crystallite size four times larger than the one prepared using ethanol as solvent. The catalytic results were in line with those reported by Sun et al. (2012), with a higher CO conversion being observed for the catalyst presenting a smaller iron particle size (49.7 and 38.5% for the catalysts prepared with ethanol and water as impregnation solvent, respectively) (entries 4 and 5, Supplementary Table S2).

Effect of Metal Promoters on FTS

The use of promoters has also been studied in OMC-supported Fischer-Tropsch catalysts. Cheng et al. (2015) reported that the use of Na as catalyst promoter reduced the average iron phase crystallite size. The reaction results showed that the presence of Na reduced the CO conversion value. However, a selectivity to C5+ of up to 78.9% was obtained for a Na to Fe molar ratio of 0.3 (entries 6 and 7, Supplementary Table S2). Na promotion also increased the C2-C4 O/P ratio to more than five times the value of the unpromoted catalyst. Similarly, Oschatz et al. (2016a) studied the effect of Na and S as promoters on Fe/OMC catalysts. The reaction results showed that CO conversion was lower for the Na-promoted catalyst than for the Na-S-promoted one. Additionally, the simultaneous presence of Na and S enhanced the selectivity to C2-C4 as compared to that of the Na-promoted catalyst (entries 8 and 9, Supplementary Table S2). The O/P ratio showed an outstanding value of 10 in both promoted catalysts studied.

Catalysts Supported on Carbon Nanotubes and Carbon Nanofibers

Typically, multiwall carbon nanotubes or carbon nanotubes (MWCNTs or CNTs) and carbon nanofibers (CNFs) are grown by the decomposition of carbon-containing compounds on a metal catalyst particle. By modifying the carbon source, as well as the chemical composition and morphology of the catalyst, it is possible to synthesize CNTs and CNFs with variable crystallinity degrees, sizes, and shapes. The main difference between nanotubes and nanofibers lies in the presence of a hollow cavity in CNTs. Single-wall carbon nanotubes (SWCNTs) are ideally made of a perfect graphene sheet rolled up into a cylinder and closed by two caps (semifullerenes), whereas MWCNTs are formed by concentric SWCNTs with increasing diameter. On the other hand, CNFs are made of domains of sp2 carbon atoms (graphene-like layers) bounded by sp3 carbons or other terminal atoms or groups of atoms (Serp et al., 2003). Typically, Co- and Fe-supported CNTs used in FTS have total surface areas ranging between 70 and 255 m2/g (Table 1).
The pores in these structures can vary from inner hollow cavities with diameters within the micropore range (less than 2 nm) and the mesopore range (between 3 and 6 nm) to aggregated pores (>15 and up to 40 nm), present to a larger extent, which are formed by the interaction of isolated MWCNTs. The length of these structures can range from a few microns to several millimeters. Furthermore, the external diameter of the MWCNTs can reach 100 nm (Yang et al., 2001). In the case of Co- and Fe-supported CNFs, the surface area can range between 170 and 300 m2/g, no micropores are found, and the mesopore volume ranges between 0.5 and 2 cm3/g (De Jong and Geus, 2000). The external diameters of CNFs are generally larger than those of nanotubes and can reach 500 nm (Serp et al., 2003). The detailed similarities and differences in adsorption, electronic, thermal, and mechanical properties, and growth mechanisms of CNTs and CNFs have been extensively reviewed (De Jong and Geus, 2000; Serp et al., 2003; Lehman et al., 2011). Given that CNTs and CNFs are relatively inert materials, it is necessary to modify their nature by introducing surface functional groups in order to attain high stabilization and dispersion of the metal particles on their surface. In addition, these materials have been considered as model supports in the FTS reaction process. Therefore, the effect of CNT and CNF functionalization, catalyst preparation methods, metal particle size, pore size, pore confinement, and the incorporation of metal promoters on the catalyst structure and FTS performance has been investigated.

Effect of CNT and CNF Functionalization and Thermal Treatments

Several studies have been dedicated to the surface functionalization of CNTs, either by the introduction of oxygen or nitrogen surface groups, and to the influence of these groups on the structure and the catalytic performance of CNF- and CNT-supported FT catalysts. Supplementary Table S3 summarizes the most relevant results regarding the FT performance of unpromoted Co and Fe catalysts supported on CNTs and CNFs after different functionalization pretreatment conditions. In most cases, the optimum catalyst observed from each study is tabulated when different oxidizing conditions are studied. For example, Chernyak et al. (2016) investigated the CNT oxidation prior to Co impregnation by a treatment with ∼70 wt% HNO3 for different reaction times and tested the catalysts in FTS at 190°C, 1 bar, H2/CO of 2, and a space velocity (SV) of 2.2 m3 kgcat−1 h−1. Figure 5 compares the results of the cobalt catalysts pretreated in acid for 1, 3, 9, and 15 h with respect to the untreated Co/CNT catalyst. Optimal oxidation conditions were found after 9 h of acid treatment, which resulted in CNTs with the highest porous development and oxygen content and, after catalyst impregnation and activation, an optimal cobalt particle size of 4.2 nm; this catalyst also presented the highest FTS activity and C5+ hydrocarbon yield and the lowest Co sintering during FTS. These results are in line with the research of Trépanier et al. (2009b), who investigated different nitric acid pretreatment temperatures (entries 1-3, Supplementary Table S3). The most severe acid-treatment conditions (15 h) caused the deterioration of the CNT material, resulting in a significant decrease of the BET surface area, which is in line with other studies (Fu et al., 2014a; Nakhaei Pour et al., 2018).
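Activity in these studies is reported either as a CO conversion at a given space velocity or as a metal-time yield (MTY, moles of CO converted per gram of metal per second). As a rough illustration of how the two figures relate, the minimal sketch below converts a CO conversion obtained at conditions like those quoted above (H2/CO = 2, SV of 2.2 m3 kgcat−1 h−1) into a cobalt-time yield. It assumes the space velocity is referred to standard conditions (22.4 L/mol) and uses a hypothetical 10 wt% Co loading and a hypothetical 30% CO conversion; the values are placeholders for the arithmetic, not a reproduction of any published data.

```python
# Minimal sketch: converting a CO conversion measured at a given space velocity
# into a cobalt-time yield (CoTY, mol CO converted per gram of Co per second).
# Assumptions (illustrative only): space velocity referred to standard
# conditions (22.4 L/mol), feed of H2/CO = 2 (CO fraction = 1/3), a catalyst
# containing 10 wt% Co, and a placeholder CO conversion of 30%.

MOLAR_VOLUME_STP = 22.4  # L/mol, ideal gas at standard conditions

def cobalt_time_yield(sv_m3_kgcat_h, co_fraction, co_conversion, co_loading):
    """Return the CoTY in mol CO g_Co^-1 s^-1."""
    syngas_L_per_gcat_s = sv_m3_kgcat_h / 3600.0                   # 1 m3/kg == 1 L/g
    co_fed = syngas_L_per_gcat_s * co_fraction / MOLAR_VOLUME_STP  # mol CO/(g_cat s)
    co_converted = co_fed * co_conversion
    return co_converted / co_loading                               # per gram of cobalt

coty = cobalt_time_yield(sv_m3_kgcat_h=2.2, co_fraction=1 / 3,
                         co_conversion=0.30, co_loading=0.10)
print(f"CoTY ~ {coty:.1e} mol CO g_Co^-1 s^-1")  # ~2.7e-05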
The catalytic activity was also maximized for Co and Ru catalysts supported on oxidized CNTs, with optimal HNO3 concentrations of 70 and 68 wt%, respectively (Kang et al., 2009; Vosoughi et al., 2016). A CNT pretreatment reported by Karimi et al. (2014) likewise yielded Co/CNT catalysts with higher FTS activity, higher selectivity to the C5+ hydrocarbon fraction, and high stability compared to the untreated catalyst. Optimal conditions were also investigated for the oxidation of CNTs with H2O2 under pulsed sonication prior to cobalt catalyst preparation (entry 6, Supplementary Table S3) (Nakhaei Pour et al., 2018). It appeared that sonication for a short time (10 s) resulted in a Co/FCNTs-10 catalyst with a remarkably narrow cobalt particle size distribution. On the other hand, Eschemann et al. (2015) reported that CNT oxidation adversely influenced the FT performance of the Co/CNT catalysts, with significantly lower cobalt-time yields (CoTYs) and C5+ selectivity for the acid-pretreated cobalt catalyst in LT-FTS. They ascribed the different catalytic performance to a higher fraction of hexagonal close-packed cobalt (hcp-Co) on pristine CNTs compared to the surface-oxidized CNTs. The hcp-Co phase has been experimentally confirmed to be more active and more selective to C5+ than the cubic phase (Ghogia et al., 2020). Likewise, more recently, Van Deelen et al. (2020) reported a negative effect on the catalytic performance of presynthesized 6 nm colloidal CoO nanocrystals supported on oxidized CNTs tested under similar FTS conditions (entries 10 and 11, Supplementary Table S3). The different catalytic performance was ascribed to a lower crystalline metallic Co content on oxidized CNTs than on pristine CNTs. In addition, it has been reported that the nature of the oxygen functional groups on CNTs and CNFs can be modified by the application of different thermal treatments. In this regard, Chernyak et al. (2016) investigated the thermal stability of surface functional groups on oxygen-functionalized CNTs after different thermal treatment conditions and stages (catalyst carbonization, catalyst activation or reduction, and FTS reaction). They observed that most of the carboxylic groups decomposed in the first stage carried out at 400°C, whereas a decrease in the content of all oxygen functional groups was mainly observed after the catalyst reduction stage at 400°C, especially due to the decomposition of hydroxyl and ether groups. Only the more thermally stable oxygen surface groups, such as quinones and phenols, remained on the support surface after 70 h of FTS reaction at 190°C, 1 bar, H2/CO of 2, and SV of 2.2 m3 kgcat−1 h−1. The authors highlighted that these oxygen surface groups, together with the CNT defects and the CNT surface geometry effects, might have prevented Co from sintering during the catalytic reaction, given that the catalysts were not deactivated with time-on-stream. In contrast, nitrogen groups were significantly more stable upon heating. Thermal treatment of nitrogen-containing CNTs at 600°C only caused a minor loss of pyridine and quaternary type nitrogen groups (Kundu et al., 2010). Xing et al. (2013) compared the FTS performance of Co-based catalysts supported on oxidized and thermally treated CNTs in inert atmosphere at 450, 650, and 900°C.
The results of the FTS reaction revealed that the oxidized Co/CNT catalyst treated at 650°C (Co/CNTs-650) presented the highest CO conversion (89.3%), the lowest CH4 selectivity (8.4%), and the highest C5+ hydrocarbon selectivity (83.7%) among all the tested catalysts (entries 12 and 13, Supplementary Table S3). They claimed that it was possible to partially remove the oxygen-containing functional groups from the surface of the CNTs by controlling the thermal treatment temperature, while keeping the integrity of the inner CNT walls and thus controlling the preferential encapsulation of cobalt clusters (80% for Co/CNTs-650) with optimal size (5-10 nm) inside the CNTs.

FIGURE 5 | Amount of oxygen measured by XPS and increase of BET surface area, CO conversion, C5+ yield, and Co particle growth during FTS for Co/CNT catalysts when CNTs were oxidized with HNO3 for 1, 3, 9, and 15 h with respect to the catalyst supported on pristine CNTs. Data were adapted from Chernyak et al. (2016).

The role of oxygen- and nitrogen-functionalized CNTs as supports for Fe-based FTS catalytic systems was also investigated. Malek Abbaslou et al. (2009) studied the acid treatment (35 wt% HNO3) of CNTs with different BET surface areas at 25 and 110°C. These materials were used as supports for the preparation of Fe-based catalysts for FTS. The resultant Fe/CNT catalysts revealed that the more severe the acid-treatment temperature and the higher the BET surface area (46 m2/g for Fe/ha-lsa-C and 200 m2/g for Fe/ha-hsa-C), the higher the activity, stability, and selectivity to C5+ hydrocarbons of the catalysts (entries 16 and 17, Supplementary Table S3). Schulte et al. (2012) compared N-doped CNTs (N-CNTs) with conventional oxygen-functionalized CNTs as supports for iron catalysts for FTS. The supports were pretreated by gas-phase NH3 or HNO3 vapor, respectively. They observed that doping CNTs with nitrogen enhanced the dispersion of the Fe species and, as a result, an almost two-fold higher FT activity was found for Fe supported on the N-doped CNT (20Fe/N-CNT) as compared to that of Fe supported on the oxidized CNT (20Fe/O-CNT) (entries 19 and 20, Supplementary Table S3). They suggested that high electric conductivity and good metal dispersion ability were the main advantages of the N-CNT support. These results were later supported by Chew et al. (2016). In this context, Xiong et al. (2014a) reported a novel approach to prepare N-CNTs by a postdoping method in which acetonitrile was passed over CNTs at 700 and 900°C and N atoms were homogeneously deposited on the CNT surface (so that the resulting N-CNTs contained 1.75 wt% N). The N-CNTs were later treated with HNO3 prior to iron deposition. The resulting catalysts were tested in the HT-FTS reaction (entry 18, Supplementary Table S3). The surface N was in the form of pyridinic, quaternary, and pyridinic oxide type nitrogen. They observed that the Fe/N-CNT catalysts were more difficult to reduce than the corresponding Fe/CNT catalysts due to the prefunctionalization with nitrogen atoms. However, the Fe/N-CNT catalysts showed superior FTS activity when compared with that of the Fe/CNT catalysts. Therefore, N-doping seems to have a very positive effect on FT catalyst performance, even surpassing the results obtained with oxidized CNTs as supports of FT catalysts.
In fact, as it can be observed in Supplement Table S3, Fe supported on N-doped CNTs presents the highest metal-time yield (MTY) operating at HT-FTS conditions. Effect of Catalyst Preparation Methods Various catalyst preparation methods to support FT catalysts on CNTs have been investigated and compared, including IWI, wetness impregnation (WI), homogeneous depositionprecipitation (HDP), colloidal synthesis technique, and most recently, a modified photo-Fenton process. Among of all of them, the standard IWI method is the most frequently used for fundamental studies. Table 3 compares the FT catalytic performance of the most relevant unpromoted Co and Fe catalysts supported on CNTs and CNFs prepared using different catalysts preparation methods. Further details are reported in the supporting information file (Supplement Table S4) as the prefunctionalization method, metal particle size, C 2 -C 4 O/P ratio and α value obtained from FTS. For example, it was found that using ethanol as a solvent for cobalt impregnation on oxidized CNTs and untreated CNTs showed a superior FTS activity as compared to those prepared from an aqueous solution and propanol (entries 1-2, Table 3) (Bezemer et al., 2006a;Eschemann et al., 2015). Eschemann et al. (2015) emphasized that choosing ethanol as solvent and an appropriate drying procedure reduced the average cobalt cluster size on the CNT surface and improved the metal dispersion. The comparison of Co/CNT prepared by IWI (Co/CNT-IM) and the HDP method (Co/CNT-DP), using urea as the precipitation agent, showed that the catalysts prepared by IWI were 2.6 times more active, which was attributed to lower cobalt particle size and improved metal dispersion on the latter case (entries 3 and 4, Table 3) (Xiong et al., 2011). Recently, Almkhelfe et al. (2018) investigated Co and Fe catalysts prepared in a single step by a modified photo-Fenton process without the need of a prefunctionalization stage, which showed higher CO conversion, selectivity to C 5+ hydrocarbons, and an improved catalyst stability compared to those of the catalysts prepared via IWI at low FTS reaction temperatures (200 and 250°C for Co-and Fe-based catalysts, respectively, 10 bar and H 2 /CO of 2). The main cause of the outstanding behavior for the photo-Fenton-prepared catalysts was the higher metal dispersion and optimal catalyst particle sizes with a narrow particle size distribution. In particular, the Co/CNT catalysts prepared from the photo-Fenton approach showed a CO conversion of 80% and a selectivity for liquid hydrocarbons of 70%, which is among the highest values reported for FTS (entry 6, Table 3). Ismail et al. (2019) investigated the FT catalyst performance of Co/CNT catalysts prepared by a colloidal synthesis process (cs_Co 100 /CNT) and IWI (ci_Co 100 /CNT). It was found that the colloidally synthesized Co catalysts showed higher catalytic activity, higher selectivity toward C 2 -C 4 fraction, and lower selectivity to C 5+ hydrocarbons than the Co/CNT catalysts prepared by IWI at 1 bar and 220°C (entries 7 and 8, Table 3). Nevertheless, colloidally synthesized cobalt catalysts were previously reported to have a very high selectivity to long hydrocarbon chain at higher reaction pressure (20 bar and 220°C), given that high-pressure conditions promote C 5+ product formation Van Deelen et al., 2018). The modification of the Co and Fe/CNT FTS catalysts by thermal treatments was also investigated. Chernyak et al. 
(2020) studied the effect of sintering temperature (800-1,200°C) on the structure and FTS catalytic performance of Co and Fe CNT-supported catalysts prepared by IWI via the spark plasma sintering approach (Co800 and Fe800, entries 9 and 17, Table 3). The sintered catalysts presented higher activity and selectivity to C5+ liquid hydrocarbons during FTS, as compared to the catalysts without thermal treatment, and without the application of a prereduction step. The main reason was the presence of carbon-encapsulated metallic nanoparticles embedded in the CNT framework. In the case of the sintered Fe/CNT catalyst, the close contact between the metallic site and the carbon material after the sintering approach facilitated the formation of the active iron carbide phase. The calculated TOF values for Co800 (0.10 s−1) and Fe800 (∼1.0 s−1) should also be highlighted, as they were remarkably high compared to those of other unpromoted FTS catalysts. In a recent work, the controlled synthesis of a cobalt catalyst with a single hcp-Co phase supported on CNF was carried out through the controlled formation of a CoO oxide precursor, followed by a reduction step (entry 11, Table 3). Compared to the conventional reduction-carburization-reduction (RCR) process (entry 10, Table 3), this method improved Co particle dispersion and LT-FTS activity by avoiding sintering of the nanoparticles after reduction. Furthermore, this catalyst was catalytically stable for 400 h at the operation conditions studied.

Effect of Pore Size

CNT pore confinement of the FT active phase and the effect of the support pore size have also been shown to influence the activity and selectivity of the catalysts for FTS. As aforementioned, the pore size of the CNTs can be associated either with the inner diameter of the tube or with aggregated pores caused by CNT interaction. In this context, Fu et al. (2014b) studied Co/CNT catalysts prepared by IWI with different CNT outer diameters (<8, 20-30, and 30-60 nm). It was found that larger CNT outer diameters resulted in the formation of bigger Co3O4 crystallites and greater reducibility, but the larger sizes also resulted in lower Co dispersion. The catalyst with the largest outer diameters (30-60 nm) and pore sizes displayed higher TOF and selectivity to C5+ hydrocarbons, which was attributed to the suitable particle sizes and the better crystallized graphitic structure of the support with larger pores, which promoted CO conversion. These results are in line with the observations by Xie et al. (2012) about the FTS performance of Co/CNT catalysts with different outer diameters. In contrast, Zhang et al. (2009) observed that the diameter of the carbon nanotubes seemed to have a negligible impact on the FTS performance of Co/CNT catalysts. The effects of the pore diameters of Fe catalysts supported on CNTs on the FTS reaction rates and product selectivity were also studied. Abbaslou et al. (2010) showed that both the selectivity to C5+ hydrocarbons and the CO conversion were improved for Fe/CNT catalysts with the narrower pore structure. Deposition of iron inside the nanotubes (∼80% according to the TEM images) with the narrower pore structure resulted in a smaller metal particle size (12 nm, compared to 17 nm for the Fe/wp-CNT catalyst with the wider pore structure) and better metal dispersion. These features conferred on the catalyst a better extent of reduction and an improved catalytic performance.
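Surface-specific activities such as the TOF values quoted above depend on how many metal atoms are exposed, which in turn follows from the particle size. As a minimal sketch, and only to illustrate the conversion, the snippet below turns a weight-specific rate (CoTY) into a TOF by estimating the cobalt dispersion from the particle diameter with the commonly used approximation D ≈ 96/d(nm); the input CoTY and particle size are hypothetical placeholders that do not correspond to any specific catalyst discussed here, and full reduction of the cobalt is assumed.

```python
# Minimal sketch: estimating a turnover frequency (TOF, s^-1) from a
# weight-specific rate (CoTY) and a cobalt particle diameter.
# Assumptions (illustrative only): spherical, fully reduced metallic Co
# particles and dispersion approximated by D ~ 96 / d(nm).

M_CO_METAL = 58.93  # g/mol, molar mass of cobalt

def cobalt_dispersion(d_nm):
    """Approximate fraction of Co atoms exposed at the particle surface."""
    return min(1.0, 0.96 / d_nm)

def turnover_frequency(coty, d_nm):
    """TOF = mol CO converted per mol of surface Co per second."""
    surface_co_per_g = cobalt_dispersion(d_nm) / M_CO_METAL  # mol surface Co / g Co
    return coty / surface_co_per_g

# Hypothetical example: CoTY = 3e-5 mol CO g_Co^-1 s^-1 on 8 nm Co particles.
tof = turnover_frequency(coty=3e-5, d_nm=8.0)
print(f"TOF ~ {tof:.3f} s^-1")  # ~0.015 s^-1, i.e. in the 10^-2 s^-1 range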
Effect of Catalyst Pore Confinement In general, metal catalysts encapsulated inside CNTs are obtained by direct incorporation during the pyrolysis of precursor mixtures, such as ferroceneacetylene (Karmakar et al., 2004), by metal deposition on CNTs with opened tips after a strong-acid pretreatment (Chen et al., 2008) or by a two-step IWI methods, first with an aqueous solution and later with the metal precursor solution . Figure 6 shows TEM images of confined and nonconfined Fe catalysts. The research group of professor Bao Xinhe (Chen et al., 2006;Chen et al., 2007;Chen et al., 2008) evidenced that iron species located inside the CNT tubes (Fe-in-CNT) had better reducibility and tended to form a more active iron carbide phase under reaction conditions than iron located outside the CNT channels (Fe-out-CNT). This caused a remarkable enhanced activity in LT-FTS and the favored formation of C 5+ hydrocarbons working from 20 to 50 bar at 270°C and H 2 /CO of 2 (entries 18-19, Table 3). This has been recently supported by Gu et al. (2019). Figure 7 shows the catalytic performance of the Fe-in-CNT and Fe-out-CNT catalysts during FTS as a function of the pressure reported by Chen et al. (2008). This behavior was attributed to the modified redox properties of the confined iron catalysts and to the trapping effect of the reaction intermediates inside the CNTs, which was suggested to increase their contact time with iron catalysts, favoring the growth of longer chain hydrocarbons. Several authors also investigated the confinement effect of cobalt particles inside CNT on the FTS catalyst performance. Tavasoli et al. (2010) confirmed that encapsulation of the Co catalytic sites inside the CNTs resulted in lower rates of sintering of the Co nanoparticles as compared with the particles located on the outer layer of the CNTs. Furthermore, Xie et al. (2012) and Fu et al. (2014b) agreed that the FTS activity and the selectivity to C 5+ hydrocarbons was improved over the catalysts with most of the Co nanoparticles located inside the CNT due to the enhanced catalyst reducibility and to the favorable chain growth of the intermediates formed inside the tubes. Effect of Metal Particle Size Professor De Jong and co-workers reported the influence of both metallic cobalt and iron carbide particle size of graphitic CNFs (Co/CNFs and Fe/CNFs) as support for the FTS reaction under 1 and 35 bar (Bezemer et al., 2006a;Torres Galvis et al., 2012a). In this regard, Bezemer et al. (2006a) found that the surface-specific activity (apparent TOF) in the FT reaction was independent of cobalt particle size for unpromoted cobalt catalysts with sizes larger than 6 nm at 1 bar or 8 nm at 35 bar (220°C, H 2 /CO 2) [11]. The authors attributed the lower TOF values of the catalysts with Co particles smaller than 6 nm as the result of a significant increase in the CH x intermediates residence time combined with a decrease of the CH x coverage and, among others, to the presence of irreversible bonded CO molecules on smaller particles, which causes partial blockage of the Co surface (Den Breejen et al., 2009). These results have been later supported for Co/AC, Co/CNT, Co/CNF, Co/CSs, and Co-MOFMS catalysts (Xiong et al., 2011;Fu et al., 2014b;Luo et al., 2019). On the other hand, both the activity and the selectivity of the CNF-supported cobalt catalysts were strongly influenced by the catalysts with smaller cobalt particle sizes. 
Figure 8 shows the CoTY ( Figure 8A) and the selectivity to C 5+ hydrocarbons ( Figure 8B) for Co supported on CNF as a function of the cobalt particle size at 35 bar (Bezemer et al., 2006a). It was found a volcano-like curve for the CoTY as a function of the cobalt particle size with a maximum CoTY at 5-6 nm. Furthermore, the selectivity to C 5+ hydrocarbons increased with the cobalt particle size up to 15 nm, whereas the opposite trend was observed for the production of methane. The higher selectivity to methane of small Co particles was mainly attributed to their higher hydrogen coverage. A similar result for a Co/CNF catalyst was reported by Den Breejen et al. (2010). They agreed that the increase of the cobalt particle size positively affected the selectivity to C 5+ hydrocarbons, especially for particle sizes lower than 8 nm. In addition, the selectivity to C 5+ was constant for Co particle size from 8 to 15 nm. A similar positive relationship between cobalt particle size (<45 nm in diameter) and C 5+ selectivity on Co/CNTs, Co/carbon-sphere, and Co-derived MOF catalysts was also reported (Xiong et al., 2011;Luo et al., 2019). Torres Galvis et al. (2012a) investigated the influence of the Fe carbide particle size of CNF-supported catalysts. The TOF based on the initial activity of unpromoted catalysts increases 6-8 times at 1 bar (350°C, H 2 /CO 1) when the average iron carbide particle size decreased from 7 to 2 nm, whereas the selectivity to methane and lower olefins remained constant. On the other hand, Ru nanoparticles supported on CNTs, AC, and graphite have been also studied. Kang et al. (2009) reported that Ru nanoparticles supported on carbon nanotubes with a mean particle size of 7 nm, exhibited the highest selectivity toward C 10 -C 20 (60%) and TOF for CO conversion at 260°C, 20 bar and H 2 /CO 1. Cobalt and Iron Bimetallic Catalysts and Effect of Promoters on Fischer-Tropsch Synthesis The combination of cobalt and iron in a bimetallic-supported catalyst has been investigated in the last years. This interest stems from the prediction that the simultaneous use of CoFe bimetallic catalysts will give rise to a "synergistic" effect between the two active phases on the FT process. However, the studies reported on these bimetallic catalytic systems are a bit contradictory. In addition to this, optimization of Co-, Fe-, and CoFe-supported CNT and CNF catalysts for FTS process, especially related to product selectivity, can be addressed by catalyst promotion. Among various promoters, MnO and noble metals such as Pt and Ru have been incorporated to cobaltsupported CNF and CNT catalysts, whereas K, Cu, Mo, Na, S, Bi, and Pb have been added to Fe/CNFs and Fe/CNTs catalysts. Figure 9 represents the increase in CO conversion and selectivity to C 2 -C 4 and C 5+ products for different promoted Co and Fe catalysts supported on CNFs and CNTs compared to those of the unpromoted counterpart, whereas Supplementary Table S5 summarizes the FTS performance of bimetallic and promoted cobalt and iron catalysts supported on CNTs and CNFs. Tavasoli et al. (2009) conducted some studies on CoFe/CNT catalysts and found that cobalt catalyst with 0.5 wt% of Fe increased the CO conversion with minor modifications of the selectivity to C 5+ hydrocarbons during LT-FTS (entry 1, Supplementary Table S5). However, incorporation of 4 wt% of iron to the bimetallic catalyst resulted in a decrease of the catalyst activity and selectivity to C 5+ hydrocarbon, whereas the alcohol selectivity notably increased. 
This behavior was attributed to the formation of Co-Fe alloys. Contrary to this, another study found that cobalt and iron supported on CNFs with a metal loading of 10 wt% Co and 5 wt% Fe (10Co5Fe/CNF), tested at 250°C and H2/CO of 2, presented the highest yield of long-chain hydrocarbons, with low selectivity to methane and carbon dioxide (Díaz et al., 2014). These studies agreed that the incorporation of iron into cobalt catalysts improved the dispersion of cobalt on the support and facilitated the reduction of iron species to the metallic form. In a recent study, Ismail et al. (2019) found that a colloidally synthesized bimetallic Co50Fe50/CNT catalyst had a considerably higher FT activity than the monometallic iron catalyst prepared by the same method, and an activity similar to that of the monometallic cobalt one, under atmospheric pressure, 220°C, and H2/CO of 2. Furthermore, a significant selectivity to C5+ hydrocarbons and a lower selectivity to CH4 were obtained for the bimetallic catalyst. The authors attributed this behavior to the role of iron in enhancing the distribution of cobalt species over the carbon support, in line with the aforementioned studies, and to the presence of Co-Fe alloys. Several authors have investigated the incorporation of Pt and Ru promoters into the Co/CNT catalytic system and tested their performance in LT-FTS. Promotion with 0.2 wt% of Pt or Ru resulted in a significantly enhanced cobalt catalyst reduction and a slight increase in the CoTY and C5+ hydrocarbon selectivity (82.5% for RuCo/CNTs and 79.2% for PtCo/CNTs) with respect to the unpromoted catalysts (Figure 9 and entries 5 and 6, Supplementary Table S5) (Zhang et al., 2011). Similar conclusions were obtained by Trépanier et al. (2009a) for the 0.5 wt% Ru-promoted Co/CNT catalyst, for which a significant C5+ hydrocarbon selectivity of 77% was also obtained (entry 7, Supplementary Table S5). Recently, the comparison of Co-Ru/CNT catalysts (4 wt% Ru) prepared by the chemical reduction method and by IWI showed that the Co-Ru/CNT catalyst synthesized by the reduction technique (entry 8, Supplementary Table S5) increased the FTS rate by 11% and the C5+ selectivity by 16% when compared to those obtained through the impregnation method. Moreover, these Ru-promoted catalysts outperformed the unpromoted catalyst. The difference in the performance of the catalysts was attributed to the different crystallite sizes and the enhanced reduction of the Ru-promoted catalysts (Shariati et al., 2019).

FIGURE 9 | Increase in metal-time yield and in selectivity to C1, C2-C4, and C5+ hydrocarbons for different promoted Co- and Fe-supported catalysts on CNFs and CNTs compared to those of the unpromoted counterparts. The title on the x-axis indicates the promoter and its concentration in wt%. Data are adapted from refs. (Bahome et al., 2005; Bezemer et al., 2006b; Malek Abbaslou et al., 2011; Zhang et al., 2011; Cheng et al., 2015; Xie et al., 2016; Gu et al., 2018).

The research group of Professor De Jong investigated the influence of MnO on Co/CNT catalysts under LT-FTS conditions (Bezemer et al., 2005; Bezemer et al., 2006b). MnO was reported to favorably affect both the activity and the selectivity to C5+ hydrocarbons, depending on concentration and reaction conditions. They concluded that the CoTY increased by ca. 65% and the C5+ selectivity by ca. 4% with respect to the unpromoted catalyst after incorporation of 0.13 wt% MnO (Figure 9).
The promoter effect was suggested to originate from a lower degree of cobalt reduction and a moderation of hydrogenation reactions. Along this research line, Liu and Li (2020), based on a computational study, have recently proposed a promising and novel Co/Mn bimetallic center supported on N-doped CNTs as an efficient FTS catalytic system for the production of long-chain hydrocarbons. Promotion of Fe/CNT catalysts with 0.7 wt% K decreased the catalyst reducibility and the FT rate, increased the yield of CO2 and C2 olefins, and reduced the methane selectivity when compared with the unpromoted catalysts (Bahome et al., 2005; Trépanier et al., 2009a). However, promotion with 0.7 wt% Cu did not greatly influence the FT product selectivity (Figure 9 and entries 10-11, Supplementary Table S5). In other work, a K-doped MnO2-coated CNT composite was synthesized and used to support iron catalysts (7.9 wt% Fe, 15.7 wt% Mn, and 1.9 wt% K). A remarkable selectivity to C2-C4 olefins (50.3%) and a higher CO conversion were found compared to the FeMnK/CNT catalyst prepared by the coimpregnation method using CNTs as support. This was associated with the small size and narrow distribution of the nanoparticles, the high dispersion of the promoters, and the weak metal-support interaction. Combined promotion of Fe/CNF catalysts with 0.1 wt% Na and 0.2 wt% S was shown to improve the selectivity to light olefins at low conversions operating under HT-FTS conditions (Figure 9 and entry 15, Supplementary Table S5) (Xie et al., 2016). The comparison with the unpromoted Fe/CNF revealed a notably enhanced iron carburization and higher initial catalytic activities over the Na- and S-promoted iron catalysts. More recently, Professor Khodakov and collaborators (Gu et al., 2018; Gu et al., 2019) found an extremely strong promotion effect of Bi and Pb on the catalytic performance of Fe/CNT catalysts. Compared to the unpromoted catalysts, a significant increase in the FT reaction rate and a higher selectivity to C2-C4 olefins (55-60%) at 10 bar were obtained (Figure 9 and entries 16 and 17, Supplementary Table S5). The promoting effects of Bi and Pb on iron catalysts have been related to their preferential localization at the surface of the iron carbide nanoparticles, leading to the formation of core-shell structures. Furthermore, the presence of Bi enhanced the catalyst reducibility and facilitated the carburization of the iron nanoparticles. For example, the FeTY was 82 × 10−5 mol CO gFe−1 s−1 for the FePb/CNT-in catalyst at 350°C, a total syngas pressure of 10 bar, and an SV of 17 m3 kg−1 h−1, which is one of the best results for unpromoted and promoted iron-based FTS catalysts available so far in the literature. To sum up, from Figure 9 it can be concluded that promotion of Co/CNT catalysts with MnO produced a very significant increase of the CoTY with respect to the unpromoted catalyst, whereas promotion with Bi and Pb considerably enhanced the FeTY of Fe/CNT catalysts. Regarding the selectivity values, it should be remarked that promotion with K+Cu and with Na selectively enhanced C5+ formation over Fe/CNT catalytic systems with respect to the unpromoted catalysts, compared at very similar FeTY values.

Catalysts Supported on Carbon Spheres

Since the discovery of buckminsterfullerenes, spherically shaped carbons or carbon spheres (CSs) have received great attention from the scientific community (Ugarte, 1992). The preparation of CSs is usually accomplished by two major approaches.
On the one hand, the chemical vapor deposition (CVD) method involves the high-temperature decomposition of a carbon-based material under an inert atmosphere, typically in the absence of a catalyst (Serp et al., 2001). This noncatalytic synthesis procedure allows the direct preparation of CSs with low surface areas (<10 m2/g) and high purity (Xiong et al., 2011; Yang et al., 2014). The surface of these CSs prepared by CVD is composed of graphitic layers/flakes with sizes ranging from 1 to 10 nm (Xiong et al., 2011), which can be arranged in different structures, forming concentric, radial, or random configurations (Serp et al., 2001). They are reported to be good model catalyst supports in FTS (Xiong et al., 2010). On the other hand, the hydrothermal carbonization (HTC) process, in which a carbon source (e.g., sucrose, glucose, or resorcinol) is hydrothermally treated between 80 and 250°C in an autoclave reactor, has also been proposed for the preparation of these shaped carbons (Hu et al., 2010). Compared to other preparation routes, the HTC process has some advantages, including the low toxicological impact of the materials and processes, easy instrumentation and techniques, the use of renewable sources, and a high energy and atom economy. Carbon spheres prepared by the HTC approach are characterized by a hydrophobic core and a hydrophilic shell with abundant surface OH and C=O groups. These CSs usually have intrinsic porous structures with controllable morphology and surface functionality. Furthermore, coupling either hard- or soft-templating with the HTC process has shown interesting results in the preparation of hollow carbon spheres (HCSs). The textural parameter ranges of typical Fe- and Co-supported CSs used in the FTS reaction are shown in Table 1. The BET surface area of the Fe- and Co-supported CSs prepared via the HTC approach can vary between 143 and 465 m2/g, whereas the total pore volume can vary between 0.2 and 0.59 cm3/g (Cheng et al., 2019; Phaahlamohlaka et al., 2020). Supplementary Table S6 summarizes the FTS performance of Fe- and Co-supported CS catalysts. The effect of support pretreatment, catalyst preparation methods, and promoters on the FTS performance of Fe- and Co-supported CSs has been investigated. When hollow carbon spheres (HCSs) were used as supports, the effect of catalyst confinement was studied.

Effect of Support Functionalization and FT Catalyst Preparation Methods

CSs prepared by the CVD process are characterized by a high carbon purity and an inert surface chemistry. In order to achieve a high metal dispersion when using these carbon materials as catalyst supports, CSs have to be functionalized with different oxygen and/or nitrogen surface groups. In contrast, CSs obtained by the HTC approach usually do not require functionalization due to their hydrophilic shell with abundant oxygen functional groups. Yu et al. (2010) were pioneers in reporting the use of iron-containing CSs in the FTS process. The authors reported a one-stage route for the preparation of FexOy@C spheres embedded with highly dispersed iron oxide nanoparticles by the hydrothermal treatment of a glucose solution mixed with iron nitrate under mild conditions. A steady-state CO conversion of 76% and a selectivity to C5+ hydrocarbons of 60% were obtained.
In particular, the selectivity values to C 5+ hydrocarbons are even better than the values reported for unpromoted iron catalysts, including Fe-in-CNT and Fe-out-CNT catalysts tested under similar FTS conditions (at 270°C, 20 bar and H 2 /CO of 1) ( Table 3). The remarkable catalytic activity and stability was associated to the favorable formation of iron carbides during H 2 activation, which were embedded into the carbonaceous matrix. Professor Coville and collaborators (Xiong et al., 2010;Xiong et al., 2011) carried out a comprehensive study on the prefunctionalization treatments (with HNO 3 or KMnO 4 ) and preparation catalyst methods (IWI and HDP) of CS-supported Fe and Co catalysts. The higher the HNO 3 treatment temperature (up to 90°C), the higher degree of functionalization and metal dispersion achieved in the final catalysts. They observed that in both cases, for Fe-and Co-supported CSs, the catalysts prepared using iron/cobalt nitrate and the HDP method using urea as precipitation agent (Fe/CSs-C-DP and Co/CS-C-DP) showed the highest MTY for the FTS reaction among the different catalysts prepared, which was attributed to the smallest average metal particle size and highest metal dispersion (entries 2 and 13, Supplementary Table S6). Functionalization using nitric acid or KMnO 4 showed comparable catalytic activity and C 5+ hydrocarbon selectivity. More recently, Kuang et al. (2019) prepared Co/CS catalysts by thermal decomposition (TD), IWI, and ultrasonic impregnation (UI) methods. The preparation of the CS support was carried out by the hydrothermal approach using an aqueous glucose solution followed by carbonization at 800°C in N 2 . The catalyst prepared by the TD method (CoO/C-TD) presented the highest metal dispersion and, as consequence, remarkably higher CO conversion (21%) and selectivity to C 5+ hydrocarbons (81.9%) during LT-FTS. Regarding the use of N-doped CSs as supports, Xiong et al. (2014b) investigated Fe-supported N-doped CS catalysts for HT-FTS prepared by different strategies. It was suggested that the presence of pyrrolic and pyridinic N atoms is essential in anchoring and stabilizing Fe atoms to the carbon surface, whereas quaternary N atoms play a minor role. Among all of them, the Fe-supported N-doped CSs via CVD of a mixture of acetylene and CH 3 CN in a vertical furnace (Fe/NCS ver ) had the highest N content (4 wt%, mainly pyrrolic and pyridinic N atoms) and well-dispersed Fe oxide particles on the N-doped CSs. Therefore, Fe/NCS ve catalyst exhibited the highest FT activity and selectivity to C 5+ hydrocarbons (entry 3, Supplementary Table S6). More recently, Cheng et al. (2019) studied the preparation of N-doped CSs using biomolecule dopamine as carbon and nitrogen sources and they are used as supports for cobalt catalyst. In line with the observations reported by Xiong et al. (2014b), the sample with the highest content of pyrrolic N and smallest cobalt particle size (that pretreated at 500°C, Co/NCS-500) exhibited the highest CO conversion and C 5+ hydrocarbon selectivity under LT-FTS reaction (entries 14 and 15, Supplementary Table S6). Cobalt and Iron Bimetallic Catalysts and Effect of Promoters Dlamini et al. (2015) prepared a series of Fe-Co bimetallicsupported CS catalysts and investigated their use in the FTS reaction. The addition of small amounts of Fe to Co-based catalyst resulted in an enhancement of the CO conversion, being its maximum for the catalyst containing 0.5 wt% Fe and 9.5 wt% Co (entry 17, Supplementary Table S6). 
Fe/Co alloy formation was detected upon reduction above 450°C, but its relative amount was not correlated with higher C5+ selectivity. The bimetallic catalysts with an iron content higher than 2 wt% showed the highest C5+ selectivity (87%) at a CO conversion of 21%. Zhang et al. (2015) carried out a detailed study on the effect of different promoters (Na, K, Mn, and Zn) on Fe-supported CSs prepared through a one-pot solvothermal method and their use in the FTS process. The catalytic experiments showed that K, Na, and Zn promotion resulted in an enhancement of the CO conversion values as compared to that of the unpromoted catalyst. However, Mn promotion resulted in a decrease of the CO conversion. The FTS results revealed that Na was the promoter enhancing the catalytic performance the most. Na promotion strongly decreased the methane generation, producing more C5+ hydrocarbons and enhancing the O/P ratio. In this line, K- and Mn-promoted Fe-supported spherical mesoporous carbons (Fe/SMCs) were reported by Chen et al. (2018). These authors prepared spherical mesoporous carbons by a SiO2-template-assisted sol-gel procedure in water-in-oil emulsions, using resorcinol and formaldehyde as carbon sources. High iron loadings were achieved (30-50 wt%), and the BET surface area was very high (397 m2/g for an iron loading of 40 wt%). Promotion with 2.5 wt% K decreased the FeTY and TOF values, whereas the presence of 5 wt% Mn enhanced them. CO2 generation was diminished by the presence of Mn but enhanced by K. The favorable effect of alkali (Na, Li, and K) promotion over iron-based CS catalysts for the HT-FTS reaction was reported by Ma et al. (2020). In this study, the promoted iron-containing CSs were prepared through a one-step hydrothermal synthesis. The reaction results showed an improvement in the CO conversions and in the O/P ratios for all the promoted catalysts compared to the unpromoted one. Here, Na was the promoter that enhanced the CO conversion value the most, in agreement with the work reported by Zhang et al. (2015). The presence of the metal promoters increased the selectivity to C5+ hydrocarbons following the order Na > K > Li. A further study on the effect of the Na content revealed that the CO conversion value was maximum for a Na loading of 1 wt%, whereas the highest selectivity to C5+ hydrocarbons was achieved for the catalyst with 2 wt% Na. To sum up, alkali metals result in the enhancement of the CO conversion, the olefin/paraffin ratio, and the C5+ selectivity values when they are used as promoters in Fe/CS catalysts. However, in the case of cobalt-based catalysts, K promotion resulted in a decrease of the catalytic activity. On the other hand, Mn has been shown to be a useful promoter for olefin generation in Fe-supported CS catalysts.

Hollow Carbon Spheres as Catalyst Supports

CNTs as supports for FT catalysts have the advantage of allocating the catalytically active phase either inside or outside the nanotube. This phenomenon was also studied with HCSs. HCSs used as supports for FTS catalysts were prepared by coating a carbon precursor onto either SiO2 (Phaahlamohlaka et al., 2017; Teng et al., 2018) or polystyrene spheres as hard and soft templates, respectively, followed by a pyrolysis stage and removal of the template. The SiO2 template spheres were removed by NaOH or HF treatments, whereas polystyrene was easily removed by heat treatment under an inert environment.
For example, Phaahlamohlaka et al. (2017) and Phaahlamohlaka et al. (2020) prepared Co-supported mesoporous hollow carbon spheres (MHCSs) promoted with ruthenium and both Co and Ru nanoparticles where located either outside or inside the MHCSs. The promoted catalysts exhibited higher FTS activity compared to the unpromoted counterparts, which was attributed to a hydrogen spillover effect from Ru to Co that enhanced cobalt oxide reducibility. When Co and Ru nanoparticles were located inside the MHCSs (CoRu@HCS), higher selectivity to methane and lower selectivity to C 5+ hydrocarbons were obtained (entry 17, Supplementary Table S6). The authors attributed these differences to the confinement effect of the Co and Ru nanoparticles inside the hollow carbon structure, which gave rise to a hydrogen richer environment, which favored methane formation. In other work, Teng et al. (2018) reported a highly efficient Fecontained hollow CS catalyst with highly dispersed Fe 2 C sites embedded within the carbon matrix and successfully tested it in the HT-FTS reaction. SiO 2 spheres were used as hard templates with different diameter sizes (150 and 260 nm) and resorcinol and formaldehyde as carbon sources. Iron loading was carried out prior to the pyrolysis of the polymer at different temperatures (500, 600, and 700°C) under N 2 flow, followed by etching the template. Lower carbon thickness and higher iron particle size was evidenced from TEM when increasing the pyrolysis temperature. It was found that the catalyst calcinated at 600°C exhibited the highest selectivity to lower olefins (30.1% in a CO 2free basis) and the highest O/P ratio (4.8). Additionally, they found a higher methane formation and lower O/P ratio when using the larger template, which was also associated to the H 2 enrichment effect taking place inside the hollow structure of the catalyst, being it higher when increasing the cavity size of the CS catalyst. Catalysts Supported on Graphene, Graphite, and Diamond Graphene is formed by one or several layers (3 to <10) of sp 2hybridized carbon films forming a two-dimensional (2D) crystal, which is considered as the basic building block for carbon materials of different dimensionalities, such as fullerenes (0D), nanotubes and nanofibers (1D), or graphite (3D) (Geim and Novoselov, 2007). Graphite consists of van der Waals coupled graphene layers, which can be stacked slightly differently and either alpha (hexagonal) or beta (rhombohedral) graphite forms can be formed (Lipson and Stokes, 1942). On the other hand, diamond is a crystalline carbon material formed by sp 3 hybridized carbon atoms. Figure 10 shows a schematic crystal structure of graphene, graphite, and diamond and (Supplementary Table S7) the FTS performance of FT catalysts supported on these carbon materials. To the best of our knowledge, there is only one work of Coloaded powdered oxidized diamond catalyst tested in the FTS reaction (Honsho et al., 2012). The authors used a commercial powdered diamond having a surface area of 24 m 2 /g, which was oxidized in air prior to cobalt deposition by IWI. The catalysts showed a high CO conversion of 44.5% and selectivity to C 5+ hydrocarbons of 62.7%. This CO conversion was significantly higher than those obtained for Co-loaded on SiO 2 (38.4%), AC (12.2%), and powdered oxidized graphite catalysts (2.8%) with higher surface areas. The weaker interaction between the O-DIA surface and cobalt oxide contributed to the better FTS results. 
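The balance between methane, C2-C4, and C5+ selectivities reported throughout this section can be rationalized, to a first approximation, with the ideal Anderson-Schulz-Flory (ASF) chain-growth model, in which a single chain-growth probability α fixes the whole product distribution. The minimal sketch below computes carbon-based selectivities for a few example α values; it is an idealized illustration only (real catalysts deviate from ideal ASF behavior, for instance through excess methane formation or the H2-enrichment effects discussed above), and the α values are arbitrary examples rather than fits to any catalyst discussed here.

```python
# Minimal sketch: carbon-based product selectivities from the ideal
# Anderson-Schulz-Flory (ASF) distribution.
# Mole fraction of chains with n carbon atoms: x_n = (1 - a) * a**(n - 1)
# Carbon-based selectivity of C_n:             S_n = n * (1 - a)**2 * a**(n - 1)

def asf_selectivities(alpha):
    """Return carbon-based selectivities (%) to CH4, C2-C4, and C5+."""
    def s_n(n):
        return n * (1 - alpha) ** 2 * alpha ** (n - 1)
    ch4 = 100 * s_n(1)
    c2_c4 = 100 * sum(s_n(n) for n in (2, 3, 4))
    c5_plus = 100 - ch4 - c2_c4  # the rest of the converted carbon
    return ch4, c2_c4, c5_plus

for alpha in (0.70, 0.80, 0.90):  # example chain-growth probabilities
    ch4, c2_c4, c5p = asf_selectivities(alpha)
    print(f"alpha = {alpha:.2f}: CH4 = {ch4:4.1f}%, "
          f"C2-C4 = {c2_c4:4.1f}%, C5+ = {c5p:4.1f}%")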
Regarding the use of graphene as supports for FTS catalysts, Moussa et al. (2014) investigated the chemical reduction of graphene oxide in water in the presence of nitrates of iron and potassium under microwave irradiation resulting in Fe 15 K 5 -G catalyst (15 wt% of Fe and 5 wt% of K). It should be highlighted that graphene oxide does not require a prefunctionalization of the support due to the presence of epoxy groups on the surface, which act as anchoring sites for the metal catalysts. The FTS catalyst was tested under HT-FTS and compared with K-promoted Fe/CNT catalyst. It was observed that the graphene oxide-supported catalyst exhibited an excellent stability, recyclability, the highest CO conversion (73.5%), and selectivity to C 8+ hydrocarbons (86.7%). The authors attributed the good FTS performance of the Fe 15 K 5 -G catalyst to the presence or defects within the graphene lattice, which acted as favorable nucleation sites to anchor the metal nanoparticles. Karimi et al. (2015a) and Karimi et al. (2015b) performed a comparative study of 15Co/graphene (602 m 2 /g) and 15Co/ CNT (372 m 2 /g) catalysts for the FTS reaction. Prior to catalyst preparation by IWI, both supports were treated with HNO 3 . The FTS rate and CO conversion percentage obtained by 15Co/graphene were significantly larger than that obtained using 15Co/CNT catalyst. The selectivity to C 5+ hydrocarbons was also higher for 15Co/graphene (87.1%) than for 15Co/CNTs (83.9%) at isoconversion conditions (around 60% of CO conversion). In addition, the CO conversion dropped only by 22% over 15Co/graphene after 480 h, whereas it dropped by 34% for the 15Co/CNT catalyst, which was caused in both cases by cobalt sintering. Therefore, in this study, Co-supported graphene outperformed to Co-supported CNTs catalyst under the preparation and reaction conditions used. In this line, Hajjar et al. (2017) compared the FTS performance of cobalt catalysts supported on graphene oxide and nanoporous graphene with BET surface areas of 290 and 700 m 2 /g, respectively. The nanoporous graphene material was first oxidized in a mixture of sulfuric and nitric acids. As aforementioned, graphene oxide did not require functionalization. The resulting catalysts (15Co/GO and 15Co/ NPG) were evaluated in the FTS reaction. The carbon nanostructured graphene-based catalysts exhibited higher CO conversion of around 65% and lower deactivation rate compared to 15Co/GO. Moreover, the selectivity to C 5+ hydrocarbon was also significantly higher when using Co/NPG (87.4%), which was evident from the higher surface area and pore volume. More recently, Chernyak et al. (2019) reported oxidized and N-doped graphene nanoflakes (GNFs) as supports for Co-based FT catalysts. In this work, pristine and N-doped GNFs were prepared by pyrolysis of hexane and acetonitrile, respectively. The oxidized derivatives were obtained after an HNO 3 treatment, and the Co-supported catalysts were prepared by the WI method, resulting in Co-supported GNFox (Co/GNFox) and Cosupported N-doped GNFox (Co/N-GNFox) having BET surface areas of 250 and 415 m 2 /g, respectively. The introduction of acetonitrile at the pyrolysis stage led to the formation of predominantly bulk-distributed pyridine and graphitic nitrogen species, and the nitric acid oxidation of this material introduced the pyridone/pyrrolidone groups on the surface of the support. The catalysts were tested in the FTS reaction (entries 5 and 6, Supplementary Table S7). 
Interestingly, greatly higher TOF and selectivity to short-chain hydrocarbons (C 2 -C 4 ) were obtained for Co/N-GNFox, whereas higher CO conversion and CH 4 selectivity was obtained for Co/ GNFox. The presence of smaller cobalt oxide crystallites found in Co/N-GNFox and the higher resistance to particle sintering during catalyst activation could explain these results. However, their C 5+ selectivity values were quite low (20-43%) due to the presence of very narrow pores on these samples (less than 1 nm), which hindered CO diffusion and increased H 2 intrapore concentration. On the other hand, a high surface area graphite material (399 m 2 /g) has been used as support of cesium-promoted Ru catalysts and tested for FT reaction (entries 8 and 9, Supplementary Table S7). In this work, Eslava et al. (2018) claimed that the presence of Cs 2 O in the catalysts prepared with CSNO 3 as promoter precursor was responsible of a high selectivity to CO 2 during reaction, whereas the WGS reaction, and hence the CO 2 selectively, significantly decreased using CsCl. ANALYZING THE EFFECT OF CARBON SUPPORT STRUCTURE ON FISCHER-TROPSCH SYNTHESIS CATALYST PERFORMANCE It has been observed that the reducibility of Co-and Fe-based catalysts is improved on carbon-based supports compared to that of oxide materials. However, the preparation of highly dispersed and stable catalysts still requires at least of intermediate interactions between the carbon support surface and the metal precursor. Modification of surface chemical properties of inert (or highly ordered) carbon-based materials, especially those prepared at high carbonization temperatures and/or from CVD of a carbon precursor, such as of CNTs, CNFs, CSs, graphite, and graphene, by introduction of oxygen and nitrogen functional groups, was found to be essential to increase their ability to stably anchor the active metal species for the FTS. Consequently, an additional prefunctionalization step prior to the catalyst preparation resulted in an increase of metal dispersion and stability on the carbon surface, positively affecting the activity of the catalyst in this reaction. On the other hand, amorphous carbons such as ACs, OMCs, and CSs, which are usually prepared at lower carbonization temperatures (especially those obtained by the HTC approach), are characterized by the presence of abundant surface oxygen functional groups. These oxygen surface groups are mainly originated from the biomass source (biomass residues in the case of ACs or isolated carbohydrates in the case of OMCs and CSs) used as carbon precursor. Therefore, these carbon supports did not require of a carbon surface functionalization stage. Furthermore, the use of activation agents and porous inorganic templates in the preparation of ACs and OMCs, respectively, together with the lower carbonization temperatures mostly used in the preparation of these carbon materials produced carbon supports with high BET surface areas, pore volumes, and oxygen surface groups, as shown for Co-and Fe-supported AC and OMC catalysts in Table 1. Undoubtedly, the metal loading and catalyst preparation procedure also influenced the textural characteristics and surface chemistry of the resultant catalysts, blocking part of support porosity and creating specific oxygen surface groups. 
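A simple way to picture how the metal loading "blocks part of the support porosity" is to compare the volume occupied by the oxide phase with the pore volume of the support. The minimal sketch below does this for a hypothetical 10 wt% Co catalyst, assuming the cobalt is present as Co3O4 with a bulk density of about 6.1 g/cm3 and that all of it sits inside a support pore volume of 0.5 cm3/g; the input values are illustrative and are not taken from any of the catalysts in Table 1.

```python
# Minimal sketch: fraction of the support pore volume occupied by the metal
# oxide phase for a given loading. Hypothetical example values; assumes all
# of the Co3O4 (bulk density ~6.1 g/cm3) is located inside the pores.

RHO_CO3O4 = 6.1                 # g/cm3, approximate bulk density of Co3O4
M_CO = 58.93                    # g/mol
M_CO3O4 = 3 * M_CO + 4 * 16.00  # g/mol

def pore_volume_filled(co_loading_wt, pore_volume_cm3_per_g_support):
    """Return the fraction of the support pore volume filled by Co3O4."""
    co_g = co_loading_wt / 100.0           # g Co per g of catalyst
    co3o4_g = co_g * M_CO3O4 / (3 * M_CO)  # g Co3O4 per g of catalyst
    support_g = 1.0 - co3o4_g              # g support per g of catalyst
    oxide_volume = co3o4_g / RHO_CO3O4     # cm3 of oxide per g of catalyst
    pore_volume = support_g * pore_volume_cm3_per_g_support
    return oxide_volume / pore_volume

frac = pore_volume_filled(co_loading_wt=10.0, pore_volume_cm3_per_g_support=0.5)
print(f"~{100 * frac:.0f}% of the pore volume is occupied by Co3O4")  # roughly 5%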
To compare the effect of the carbon support structure of Co- and Fe-based catalysts on their activity in the FTS reaction, the weight-specific activity (cobalt- and iron-time yield, CoTY and FeTY, respectively) and the surface-specific activity (turnover frequency, TOF) were plotted for each type of carbon-supported catalyst under a similar range of reaction conditions, and the results are shown in Figure 11 and Supplementary Figure S1, respectively. Clearly, a strong dependence of the FTS catalytic activity on the carbon-based support structure has been observed. It is noteworthy that the CoTY and TOF follow similar trends for Co-based catalysts. However, the lack of reported TOF values (or of the data needed to calculate them) for Fe-based catalysts prevents such a comparison between these two activity indicators. In general, the highest CoTY values are obtained for Co supported on CNTs, followed by Co/graphene and Co/CNFs (Figure 11A). The better crystallized graphitic structure of CNTs, which facilitates electron transfer between the cobalt metal and the CO molecules, together with highly stable cobalt nanoparticles mainly dispersed inside the tubes, has been reported to be responsible for their higher catalytic performance (Pan and Bao, 2011; Fu et al., 2013; Xiao et al., 2015). A comparative study of the catalytic behavior of cobalt catalysts supported on graphene and on CNTs for the FTS showed that the use of graphene increased the rate by 22%, shifted the product distribution to long-chain hydrocarbons, and exhibited higher stability when compared to CNTs, at 220°C, 18 bar, and an H2/CO ratio of 2 (Karimi et al., 2015a; Karimi et al., 2015b). These properties were attributed to a better dispersion of the cobalt clusters and to an increase in the degree of reduction of Co at relatively lower temperatures in the graphene-supported catalyst. Nevertheless, the TOF values reported in this work for Co supported on graphene and on CNTs (38.6 and 35.1 s−1, respectively) were quite far from the range of TOF values (25 × 10−3 to 160 × 10−3 s−1) reported for Co/CNTs under similar reaction conditions (Supplementary Figure S1A). In this sense, the poorly crystallized graphitic (amorphous) structure of ACs and OMCs does not seem to be a favorable characteristic for a catalyst support in FTS (Zaman et al., 2009; Fu et al., 2013). Nevertheless, the unique (meso)porous structure of the different carbon supports has been suggested to provide geometric constraints that allow the product distribution to be controlled through the shape-selective role of the catalytic system. In particular, the special confinement effect of CNTs was reported to restrict cobalt particle sintering under catalyst activation and FTS reaction conditions. As shown in Table 1, CNTs have the largest pore (tube) diameters, and most of the metal particles are usually located inside the pores (tubes). Therefore, the reaction intermediates formed inside the pores can contact the metal active sites for a longer time, promoting the formation of long-chain hydrocarbons. Furthermore, the high length-to-diameter ratio of CNTs and CNFs confers on them a high external surface area that, together with the absence of microporosity, significantly reduces the mass transfer limitations compared to those of traditional microporous activated carbons (Abbaslou et al., 2010).
Contrary to this, the confinement of cobalt catalyst inside hollow carbon spheres resulted in higher selectivity to methane associated to an H 2 enrichment effect inside the hollow carbon structure (Phaahlamohlaka et al., 2017;Phaahlamohlaka et al., 2020). Comparatively, the microporous structure in ACs is claimed to result in a higher selectivity to methane and a higher light hydrocarbon fraction, which was attributed to the high specific surface area of these carbon supports, leading to smaller cobalt particle sizes and diffusion limitations for CO as compared to that of H 2 (resulting in a higher H 2 /CO ratio inside the pores) (Zaman et al., 2009;Fu et al., 2013). Similar results were recently reported for Co supported on oxidized and N-doped graphene nanoflakes with narrow pores (Chernyak et al., 2019). However, the wider pore sizes in OMC-supported catalysts leaded to improved catalyst mass transfer properties and higher selectivity to long-chain hydrocarbons compared to those of AC-supported catalysts (Supplementary Tables S1, S2). Regarding the use of Fe-supported catalysts, both Fe/CNTs and Fe-MOFMS catalysts present high and similar FeTY values ( Figure 11B). It has been observed along the reported literature that the proximity between carbon and supported iron particles can facilitate the formation of iron carbides, thus leading to a higher concentration of the iron carbide active phase on the catalyst surface, giving rise to a high selectivity to C 5+ hydrocarbons (Chen et al., 2008;Santos et al., 2015). The Fe nanoparticle confinement in CNTs seems to be an ideal condition for the successful formation of the active iron carbide species. This fact can explain the high FeTY obtained for these catalytic systems. Furthermore, the restricted iron sintering of the iron carbide nanoparticles confined and/or embedded in the carbon matrix in these carbon structures also confers a high catalyst stability. In this regard, it was suggested (Cheng et al., 2014) that the confinement of iron nanoparticles inside the CNT with unique electronic properties presents a more relevant impact on the preparation of more active, selective, and stable FT catalysts than that of iron dispersion. In line with these observations, iron and cobalt nanoparticles highly dispersed and embedded in CSs and OMCs have been prepared in a single step via the hydrothermal synthesis of a mixture of the carbon and the metal precursors. According to this catalyst preparation procedure, the close contact between iron and the carbon can facilitate the easy formation of the active iron carbide phase during the subsequent carbonization stage and in the FTS reaction conditions, resulting in catalysts with a high FTS activity and stability Sun et al., 2012;Teng et al., 2018). Contrary to this, a detrimental effect was observed for the case of Co-based catalysts with Co nanoparticles embedded in the carbon matrix, as that reported for Co/OMCs, due to the fact that the metallic cobalt surface is the active phase for Co-based catalysts in FTS reaction. Thus, when Co particles are surrounded by the carbonaceous matrix, a large part of the cobalt active sites are blocked, being deactivated . Concerning the use of metal promoters, several authors observed that metal promotion effect is also dependent on the support structure. 
Concerning the use of metal promoters, several authors have observed that the promotion effect also depends on the support structure. Nevertheless, metal promotion has not been systematically compared across different carbon-supported catalysts, which makes it difficult to discuss its effect on FTS performance for each support type. For example, sodium promotion was more pronounced on Fe/CNTs than on Fe/OMC, owing to the presence of iron carbide species stabilized by encapsulation in the carbon matrix of the Fe/CNT system (Cheng et al., 2015). Consequently, higher selectivities to both light and long-chain olefins were observed for Na-promoted Fe/CNT catalysts than for Fe/OMC. Likewise, iron reduction and carbidization were found to proceed much more easily for iron species confined inside CNTs and promoted with Bi and Pb, which resulted in a higher FeTY and a higher selectivity to light olefins (around 40% at 10 bar and 60% at 1 bar, at 350°C and H2/CO = 1) than those of the promoted but non-confined catalysts (Gu et al., 2019). This behavior was attributed to the closer contact of the promoters with Fe inside the tubes as a result of the nanoconfinement effect.
The use of carbon supports derived from lignocellulosic biomass in FTS has been studied less. Moreover, the inorganic species present in biomass-derived carbon supports might play an important role in enhancing FTS activity. Further studies would help to identify suitable biomass sources and natural, inexpensive promoters within the extensive and heterogeneous diversity of biomass materials.
Another important aspect of FT synthesis that has been less widely discussed in the literature is the high exothermicity of the process, and the associated need for heat removal and reactor temperature control. Highly exothermic reactions such as FTS usually present serious heat-transfer problems, giving rise to hotspots in chemical reactors that may damage the catalysts. Along these lines, local overheating of a Co-based catalytic bed has been reported to increase methane selectivity and accelerate catalyst deactivation (Visconti et al., 2011; Fratalocchi et al., 2018). Conventional pelletized catalysts, which usually employ alumina or silica as the support, show certain limitations with respect to heat removal under FT reaction conditions (Asalieva et al., 2020). To tackle this issue, several approaches have been explored, such as monolithic (Visconti et al., 2011) or foam (Lacroix et al., 2011) structured catalysts and operation in microchannel reactors (Holmen et al., 2013). The use of carbon materials has also been reported as a feasible way to overcome heat-transfer problems in FT reactors. Chin et al. (2005) reported microstructured Co-Re catalysts based on aligned multiwall carbon nanotube arrays supported on FeCrAlY foam. The carbon-containing microstructured catalyst was four times more active than an engineered catalyst structure without the carbon nanotube arrays. This difference was attributed to the superior thermal conductivity of the carbon-containing microstructured catalyst, which improved mass and heat transfer and reactor temperature control, making it possible to operate at higher temperatures without runaway methane selectivity.
In this context, Professor Holmen and collaborators have worked intensively on different monolithic/microstructured reactors using carbon-based catalysts (Zarubova et al., 2011; Holmen et al., 2013). They reported Co catalysts supported on hierarchically structured carbon nanofiber (CNF)/carbon felt composites. These materials showed enhanced heat and mass transfer and provided a relatively uniform temperature profile inside the reactor. Similarly, the addition of exfoliated graphite to pelletized Co-based catalysts gave a catalytic bed whose thermal conductivity was 30 times higher than that of the catalyst without additives, and led to an enhanced catalytic performance (Asalieva et al., 2020). In light of all the aforementioned results, one can conclude that carbon materials exhibit great potential, not only for reducing metal-support interactions and providing high metal dispersion and FTS activity, but also for enhancing heat and mass transfer inside the reactor, allowing better temperature control and higher catalytic performance.
CONCLUSIONS, CHALLENGES, AND FUTURE PERSPECTIVES
Fischer-Tropsch synthesis (FTS) is an important industrial process for transforming non-petroleum carbon resources, including natural gas, coal, and lignocellulosic biomass, into clean hydrocarbon fuels and valuable chemicals. FTS catalysts are preferably supported, and carbon-based materials have been recognized as an interesting alternative to conventional metal oxides. In this review, we have described the use of different carbon-based materials as supports for Co, Fe, and, to a lesser extent, Ru-based FT catalysts (promoted and unpromoted) over the past two decades, including activated carbons (ACs), ordered mesoporous carbons (OMCs), carbon nanotubes and nanofibers (CNTs and CNFs), carbon spheres (CSs), diamond, graphene, and graphite. Some general conclusions can be drawn from these studies: (1) modification of the carbon surface (functionalization and doping) with oxygen and nitrogen functional groups, especially for carbon supports prepared at high carbonization temperatures, is crucial to produce catalysts with high dispersion, FTS activity, stability, and enhanced selectivity; (2) the extent of reduction of FT metal-carbon catalysts is generally high owing to the weak metal-support interactions; (3) the proximity between carbon and supported iron can facilitate the formation of the active iron carbides, leading to a higher concentration of active sites on the catalyst surface; (4) the morphology and structure of the carbon are crucial in modifying the metal-support interactions, the metal dispersion, and the particle size, and hence the performance in the FTS process.
Specifically, confinement of the metal catalyst inside the pores of CNTs has shown outstanding behavior compared with catalytic systems in which the metal nanoparticles are supported on the outer CNT surface; (5) larger support pores, such as those of CNTs, OMCs, and mesoporous carbon spheres, result in larger metal crystallites formed inside them and, thus, higher metal reducibility and lower metal dispersion, while on the other hand enhancing hydrocarbon diffusion and the formation of long-chain hydrocarbons; (6) an optimum metal promoter loading and close proximity between the promoter and the FT metal seem to be essential to increase the catalyst reducibility and thus to improve FTS activity and selectivity; (7) the carbon support has been demonstrated to improve the heat-transfer properties of the catalyst during the highly exothermic FTS reaction and, thus, the catalytic performance.
However, from an industrial-scale point of view there are also challenges to be addressed and future perspectives regarding the use of carbon-based materials as FTS catalyst supports. One issue is the low density and, in some cases, the insufficient mechanical strength of carbon-based materials. Most FT reactors used in industry are fixed-bed and slurry reactors. In a fixed-bed reactor the catalyst must have an appropriate size and shape, and therefore needs to be pelletized to facilitate intraparticle mass transfer and avoid high pressure drops. In a slurry reactor, problems arising from catalyst abrasion and product-catalyst separation are considerable. Carbon-supported catalysts have rarely been evaluated in slurry reactors, and these issues need to be investigated. Another important disadvantage is the high cost of nanostructured carbon materials compared with the conventional oxide supports typically used in FTS. Although the industrial production of CNTs, CNFs, and ACs is no longer an issue, the production of metal-doped carbons is not yet available on a large industrial scale. Furthermore, in most cases petroleum-derived carbon sources are used to prepare the carbon-based materials; only for ACs has the use of biomass sources as raw material been explored, and even then most of the catalysts studied have been prepared using commercially available AC supports. Much research is still necessary in this direction.
In this sense, renewable biomass residues, besides being used to produce liquid fuels via gasification and further conversion of the resulting syngas, could also be used to produce the FT catalyst supports, with both environmental and economic benefits. In this way, it would be possible to minimize greenhouse gas emissions and achieve a significant reduction in fossil fuel dependency. In this context, the use of syngas simulating that obtained from biomass gasification as feedstock to FTS reactors operating in both low- and high-temperature (LT-FTS and HT-FTS) processes with carbon-based catalyst supports has not been explored in detail. Therefore, process intensification and catalyst engineering are both crucial steps that need to be investigated and optimized for the successful implementation of biomass-to-liquid technology and the large-scale use of carbon-based catalyst supports for FTS.
AUTHOR CONTRIBUTIONS
JR-M and TC conceived and designed the structure of the review.
MV-R, MR-C, and JP contributed to the literature analysis, the illustrations, and the writing of the manuscript. All authors contributed to manuscript revision and approved the submitted version.
2021-02-04T14:16:13.770Z
2021-02-04T00:00:00.000
{ "year": 2021, "sha1": "f0f0f321a7d9bad4b127316bd7693c4eaf919f7d", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmats.2020.617432/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "f0f0f321a7d9bad4b127316bd7693c4eaf919f7d", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
235787651
pes2o/s2orc
v3-fos-license
Case-based audit and feedback around a decision aid improved antibiotic choice and duration for uncomplicated cystitis in primary care clinics
Objectives The objective of our study was to evaluate the impact of a multifaceted stewardship intervention on adherence to the evidence-based practice guidelines on treatment of uncomplicated cystitis in primary care. We hypothesised that our intervention would increase guideline adherence in terms of antibiotic choice and duration of treatment.
Design A preintervention and postintervention comparison with a contemporaneous control group was performed. During the first two study periods, we obtained baseline data and performed interviews exploring provider prescribing decisions for cystitis at both clinics. During the third period, in the intervention clinic only, the intervention included a didactic lecture, a decision algorithm and audit and feedback. We used a difference-in-differences analysis to determine the effects of our intervention on the outcome of guideline adherence to antibiotic choice and duration.
Setting Two family medicine clinics (one intervention and one control) were included.
Participants All female patients with uncomplicated cystitis attending the study clinics between 2016 and 2019.
Results Our sample included 932 visits representing 812 unique patients with uncomplicated cystitis. The proportion of guideline-adherent antibiotic regimens increased during the intervention period (from 33.2% (95% CI 26.9 to 39.9) to 66.9% (95% CI 58.4 to 74.6) in the intervention site and from 5.3% (95% CI 2.3 to 10.1) to 17.0% (95% CI 9.9 to 26.6) in the control site). The increase in guideline adherence was greater in the intervention site compared with the control site, with a difference-in-differences of 22 percentage points, p=0.001.
Conclusion A multifaceted intervention increased guideline adherence for antibiotic choice and duration to a greater magnitude than similar trends at the control site. Future research is needed to facilitate scale-up and sustainability of case-based audit and feedback interventions in primary care.
INTRODUCTION
Antimicrobial resistance is a well-recognised threat to global health, with the USA alone accounting for 2.8 million antibiotic-resistant infections and 35 000 deaths each year. 1 Increasing realisation of the need to minimise this public health threat can be seen in the efforts of regulatory bodies such as the Joint Commission, which recently established requirements for antimicrobial stewardship for ambulatory healthcare organisations, effective at the beginning of 2020. 2 The new requirements mandate that such organisations provide resources to practitioners to promote appropriate antibiotic prescribing practices. 2
Key points
Question ► We evaluated the impact of an audit and feedback antibiotic stewardship intervention on guideline adherence for antibiotic choice and duration for acute uncomplicated cystitis in primary care.
Finding ► Our multifaceted intervention increased guideline adherence for antibiotic choice and duration to a greater magnitude than similar trends at the control site. Using the difference-in-differences design, we demonstrated that a case-based audit and feedback intervention can increase the proportion of primary care clinic visits for urinary tract infections in which women receive the right drug with the right duration, a fundamental aspect of antibiotic stewardship.
Meaning ► Our study added evidence to the limited literature regarding antimicrobial stewardship interventions for cystitis in the outpatient setting, a neglected practice area in the US antibiotic stewardship programmes. Future research will focus on scale-up and sustainability of case-based audit and feedback interventions in primary care.
Although most studies of outpatients have focused on implementing antibiotic stewardship for upper respiratory infections, 3 4 there is also a high prevalence of inappropriate antibiotic prescribing for urinary tract infections (UTIs) in primary care, including overuse of fluoroquinolones and longer duration of treatment than recommended by guidelines. [5][6][7][8][9][10] A study in France found that only 20% of outpatient UTIs were treated with the guideline-recommended drug, dose and duration. 11 An Irish study found that only 55% of the antibiotic prescriptions for UTI in general practice were appropriate. 12 Of 7738 outpatient encounters for UTI in Israel, 91% were treated with a longer duration of antibiotics than recommended by guidelines. 13 A recent US study revealed that fluoroquinolones were the most commonly prescribed antibiotics for uncomplicated UTI, comprising up to 49% of prescriptions. 6 Inappropriate use of fluoroquinolones is especially concerning because it promotes the emergence and spread of the multidrug-resistant Escherichia coli strain sequence type 131. 14 In addition, continued overprescribing of fluoroquinolones for uncomplicated cystitis in patients with other treatment options is occurring in the USA 5 15 despite two black-box warnings from the US Food and Drug Administration (FDA) for fluoroquinolones due to an association between their use and serious side effects. [16][17][18] In addition, fluoroquinolones were associated with more central nervous system-related and gastrointestinal-related adverse events compared with other types of antimicrobials in a recent meta-analysis. 19
Excessive treatment duration for uncomplicated cystitis is another common problem documented internationally. 9 13 Current Infectious Diseases Society of America (IDSA) guidelines 20 recommend nitrofurantoin for 5 days, trimethoprim-sulfamethoxazole for 3 days and a single dose of fosfomycin as the first-line regimens for uncomplicated cystitis. In our previous study in the same setting, most prescriptions for trimethoprim-sulfamethoxazole, nitrofurantoin and fluoroquinolones had a treatment duration longer than recommended. 9 Unlike upper respiratory infections, which typically involve viral infections for which antibiotics are not indicated, a symptomatic UTI merits treatment with antibiotics, as recommended by the IDSA guidelines. 20 Thus, the focus in implementing antibiotic stewardship for UTI needs to be optimisation of antibiotic choice and duration, which may present a different cognitive challenge for practitioners than deciding whether a patient needs antibiotics or not.
One evidence-based strategy shown to be effective in implementing antimicrobial stewardship for UTI in acute and long-term care settings is audit and feedback. [21][22][23] Multiple strategies have been employed for implementing stewardship, such as feedback to prescribers on antimicrobial consumption and antimicrobial stewardship committees. 24 Based on our prior successful experience with audit and feedback in acute and long-term care, 21 we implemented a multifaceted antimicrobial stewardship intervention using audit and feedback to improve compliance with acute cystitis guidelines in a family medicine setting (general practice).
The objective of our study was to evaluate the impact of a multifaceted stewardship intervention on adherence to the evidence-based practice guidelines on treatment of uncomplicated cystitis in primary care. We hypothesised that our intervention would increase guideline adherence in terms of antibiotic choice and duration of treatment.
Study design and settings
We used a difference-in-differences study design to determine the effects of our stewardship intervention on adherence to the IDSA guidelines 20 and recommendations from the American Academy of Family Physicians (AAFP) 25 for treating acute cystitis, measuring antibiotic choice and duration. 20 A preintervention and postintervention comparison with a contemporaneous control group from July 2016 to March 2019 was performed at two private, academically affiliated family medicine clinics in a large urban area. We chose two clinics (intervention and control sites) within the same private US healthcare system because they were similar in terms of patient populations, provider type (predominantly physicians, with two physician assistants at each clinic) and electronic medical record (EMR) software. Table 1 shows clinic and prescriber characteristics at the intervention and control sites. Both sites provide preventive and acute care, behavioural health, nutrition services and onsite laboratories. All physicians, except for one, are board certified in family medicine. On average, 3248 appointments for a cohort of 19 777 patients occur at these clinics each month. Patients in both clinics are predominantly women (58%) and Caucasian (54%).
Study population
The study population included all patients with acute uncomplicated cystitis at the intervention and control sites. Inclusion criteria for uncomplicated cystitis required that participants be women ≥18 years who had International Classification of Diseases, Tenth Revision (ICD-10) codes (N30.0, acute cystitis; N30.9, cystitis unspecified; and N39.0, UTI site not specified) for UTI listed as a diagnosis in the EMR system (Epic Clarity database). In addition to a UTI-related diagnosis, patients must also have been prescribed a UTI-relevant antibiotic during the same visit. UTI-relevant antibiotics included fluoroquinolones, nitrofurantoin, fosfomycin, trimethoprim alone or in combination with sulfamethoxazole, beta-lactams and aminoglycosides. The electronic algorithm, using ICD-10 UTI diagnosis codes paired with medication data to identify patients with UTI, was validated in the same setting. 26 Visits that met criteria for complicated UTI or had recorded signs or symptoms of pyelonephritis were excluded (eg, an additional code for genitourinary abnormalities or recorded fever, defined as ≥100.4°F or 38°C). We also excluded five patients who had allergies to both nitrofurantoin and sulfa-containing antibiotics, and those prescribed long-term antibiotics indicating prophylaxis for recurrent UTI (figure 1).
Figure 1 Selection process used to determine uncomplicated cystitis visits in the study period*. The study period was from July 2016 to February 2019. *Visits may have had more than one exclusion criterion. UTI, urinary tract infection; ICD, International Classification of Diseases.
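To make the selection logic described above concrete, here is a minimal, hypothetical Java sketch of the visit filter (ICD-10 UTI codes paired with a UTI-relevant antibiotic, women ≥18 years, with exclusions for pyelonephritis signs, fever or complicated-UTI codes). The class, method and parameter names are invented for this illustration and are not taken from the authors' actual EMR query.
    import java.util.Set;

    // Illustrative only: a simplified rendering of the visit-selection rules in the Methods.
    public class UncomplicatedCystitisFilter {

        private static final Set<String> UTI_ICD10 = Set.of("N30.0", "N30.9", "N39.0");
        private static final Set<String> UTI_ANTIBIOTIC_CLASSES = Set.of(
                "fluoroquinolone", "nitrofurantoin", "fosfomycin",
                "trimethoprim", "trimethoprim-sulfamethoxazole",
                "beta-lactam", "aminoglycoside");

        /** Returns true when a visit meets the paper's inclusion criteria (hypothetical fields). */
        public static boolean isEligible(String sex, int age, Set<String> icdCodes,
                                         Set<String> prescribedClasses,
                                         boolean pyelonephritisSignsOrFever,
                                         boolean complicatedUtiCode) {
            boolean hasUtiDiagnosis = icdCodes.stream().anyMatch(UTI_ICD10::contains);
            boolean hasUtiAntibiotic = prescribedClasses.stream().anyMatch(UTI_ANTIBIOTIC_CLASSES::contains);
            return "female".equals(sex) && age >= 18
                    && hasUtiDiagnosis && hasUtiAntibiotic
                    && !pyelonephritisSignsOrFever && !complicatedUtiCode;
        }
    }
The manual chart review for antibiotic allergies and prophylactic long-term antibiotics described in the text would sit on top of a rule-based filter like this rather than being encoded in it.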
For each eligible visit, we extracted the following variables: patient age, race, comorbidities (Charlson Comorbidity Score), antibiotic allergies, type of antibiotic prescribed and duration of treatment. If a patient returned to the clinic within 7 days of initial treatment due to UTI, the case was considered a failure of the previous treatment. In these cases, only the original visits were included in the study. We included women with diabetes because recent evidence suggests that diabetic women presenting with acute cystitis in primary care should be managed similarly to women without diabetes. 27 We also included women aged ≥65 years, as treatment recommendations for otherwise healthy older women are similar to those for younger women. 20 Each record was manually reviewed by a team member (GG or MG) to rule out the possibility of contraindication to all first-line antibiotics. All cases of acute uncomplicated cystitis during the study period were included in the analysis.
Outcome measure
The outcome of this study was adherence to the IDSA guidelines for managing uncomplicated cystitis, with respect to both medication choice and duration of therapy (figure 2). For example, prescribing a guideline-adherent antibiotic (nitrofurantoin) for an excessive duration (7 days) would be counted as non-adherent. Likewise, prescribing ciprofloxacin (a non-first-line antibiotic choice) for the correct duration (3 days) would be counted as non-adherent. Prescribing a first-line agent (trimethoprim-sulfamethoxazole) for the correct duration (3 days) would be counted as compliant. Likewise, prescribing nitrofurantoin for 5 days or a single dose of fosfomycin would be counted as compliant. 20
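As an illustration of the composite outcome defined above (first-line drug and guideline duration together), the following hypothetical Java sketch encodes the adherence rules stated in the text: nitrofurantoin for 5 days, trimethoprim-sulfamethoxazole for 3 days, or single-dose fosfomycin. The method and parameter names are invented for this example and do not come from the study's software.
    // Illustrative only: encodes the composite adherence outcome described under "Outcome measure".
    public final class GuidelineAdherence {

        /**
         * @param drug         generic antibiotic name, lower case
         * @param durationDays prescribed duration in days (1 for single-dose fosfomycin)
         * @return true only when both the drug and the duration follow the IDSA first-line regimens
         */
        public static boolean isAdherent(String drug, int durationDays) {
            switch (drug) {
                case "nitrofurantoin":
                    return durationDays == 5;   // right drug, but 7 days counts as non-adherent
                case "trimethoprim-sulfamethoxazole":
                    return durationDays == 3;
                case "fosfomycin":
                    return durationDays == 1;   // single dose
                default:
                    return false;               // e.g. ciprofloxacin is non-adherent regardless of duration
            }
        }
    }
The point of the composite rule is visible in the switch: either an off-guideline drug or an off-guideline duration is enough to classify the regimen as non-adherent.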
Intervention development
All activities at the intervention and control sites during the study period are described in table 2: interviews with providers about treatment of uncomplicated cystitis; and, in the intervention period (April 2018 to February 2019), guidelines distribution (sent by email, with read receipts received from all providers), an interactive case-based training lecture (including teaching cases to address the specific clinical scenarios that were problematic for the interview participants), pocket cards with the algorithm distributed to all providers (figure 2), and audit and feedback (at least one session per provider per month; 121 audit and feedback sessions in total, 21 in person and 100 by phone), with guidelines distribution alone at the control site. In the first study period, we validated our electronic algorithm 26 and obtained baseline data on the outcome. In the second study period, we interviewed providers to explore their prescribing decisions for UTI, to help us understand why they were choosing certain drugs or durations of treatment. The findings from these interviews, published elsewhere, 28 were used to develop educational materials (an interactive case-based lecture) for the intervention. For example, we found that providers were misled by advanced patient age, diabetes and recurrent UTI into making inappropriate choices for acute cystitis. We therefore focused our teaching cases on these points, presenting actual cases of patients who had visited one of the clinics in the previous 2 months. Baseline period activities also included the development and validation of a search algorithm for identifying visits with UTI, as well as piloting of our decision aid (pocket card) (figure 2) and the audit and feedback intervention and script (box 1). We revised the existing audit and feedback script that we used in our previous successful intervention study 21 in acute and long-term care for use in the primary care setting.
The audit and feedback component of the intervention (based on feedback intervention theory) 29 30 was a highly personalised, interactive, one-on-one intervention with primary care providers to improve their capacity to distinguish between uncomplicated cystitis and other UTI syndromes and to encourage them to prescribe a guideline-concordant antibiotic regimen. We also included information about the first-line antibiotics recommended by the IDSA guidelines and the AAFP, and determined whether the antibiotic regimen prescribed by the providers was in accordance with the guidelines (box 1).
In the intervention period, we distributed guidelines at both sites (intervention and control). Distributing the guidelines addressed awareness, but we did not expect guideline dissemination alone to be an effective method to achieve behaviour change. 31 32 At the intervention site, we also conducted a training session to help providers engage with and internalise the guideline content. Our educational session provided a detailed overview of the IDSA treatment guidelines; definitions of the various UTI syndromes, including uncomplicated versus complicated UTI; and actual clinical examples. During our training session, we also taught the providers how to use the decision aid (figure 2). The investigators selected actual cases of UTI seen in the clinics to design teaching cases that addressed the specific clinical scenarios that were problematic for the interview participants.
From April 2018 to February 2019, we performed an audit and feedback intervention, in which the charts of women meeting study eligibility, as described in figure 1, were reviewed. All cases of acute cystitis during the second phase of the study in the intervention clinic triggered a chart review. The patient's EMR was reviewed to determine the type of antibiotic prescribed and the duration of treatment. Appropriateness of the treatment was determined by the research team (LG, GG, MG and BT) using the IDSA guidelines. Our team included two infectious diseases doctors, an infectious diseases epidemiologist and a primary care research fellow. We randomly selected one case per provider per month to reduce the burden on providers, and the research team contacted each provider in person or by phone to provide follow-up as to whether the treatment decision was in compliance with the IDSA guidelines. We built the script for our audit and feedback intervention from our previously published script used in acute and long-term care settings. 21 The feedback was given to providers in person or by phone by the principal investigator (LG) within 5-7 days of the patient visit through post-prescription antimicrobial review, using the algorithm. Feedback was given in both scenarios: when the prescribing was in accordance with the guidelines and when the antibiotic choice and/or duration was not in accordance with the guidelines.
Box 1 Audit and feedback script (example)
'Your patient (45 years old) presented with symptoms of urinary tract infection (UTI) (dysuria and urinary frequency), and you diagnosed her with UTI. According to the guidelines, the first thing is to check whether the patient had any of the following symptoms of pyelonephritis: fever, flank pain, nausea and vomiting or other suspicion for pyelonephritis. Also, consider possible complicating factors such as urological abnormalities and immunocompromising conditions. Based on reviewing the chart, the patient did not seem to have pyelonephritis or a complicating condition. Therefore, this was likely a case of acute uncomplicated cystitis. The patient did not have allergies to any of the first-line recommended antibiotics for UTI (trimethoprim-sulfamethoxazole, nitrofurantoin and fosfomycin). You decided to treat the patient empirically with nitrofurantoin for 7 days.'
Feedback: 'Your choice of antibiotic fits with the guidelines. However, according to Infectious Diseases Society of America guidelines, nitrofurantoin can be prescribed for 5 days. Therefore, consider shortening your duration of treatment with nitrofurantoin to 5 days for future cases.'
Statistical analysis
Sample size
In a previous US study, adherence to cystitis management guidelines for antibiotic choice and treatment duration (our outcome) in the outpatient setting was 44%. 22 We used this estimate and calculated the sample size of visits with cystitis needed at each of the two sites based on testing the difference between two independent proportions. We calculated that a total of 97 visits at each site would provide a power of 80% to detect an absolute difference of 20% in the postintervention rates between the two groups at a significance level of 0.05.
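For orientation, the sample size reported above can be approximately reproduced with the standard formula for comparing two independent proportions. This is a generic textbook expression rather than the authors' exact calculation; the assumed proportions p1 = 0.44 and p2 = 0.64 follow from the 44% baseline estimate and the targeted 20% absolute difference, and the pooled proportion is written as p-bar.
    % Standard two-proportion sample size (per group), alpha = 0.05 two-sided, power = 0.80:
    n \approx \frac{\left[\, z_{1-\alpha/2}\sqrt{2\bar p(1-\bar p)} + z_{1-\beta}\sqrt{p_1(1-p_1)+p_2(1-p_2)} \,\right]^2}{(p_1-p_2)^2}
    % With p_1 = 0.44, p_2 = 0.64, \bar p = 0.54, z_{0.975} = 1.96, z_{0.80} \approx 0.84:
    n \approx \frac{(1.96\times 0.705 + 0.84\times 0.691)^2}{0.20^2} \approx 96\text{-}97 \text{ visits per site}
which is consistent with the 97 visits per site stated in the text.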
We used the χ² test, Fisher's exact test and the t-test to determine whether visit-level factors (age, race/ethnicity and Charlson Comorbidity Score) differed between the intervention and control sites. Difference-in-differences analysis was performed to determine intervention effectiveness using the composite outcome of guideline adherence in terms of antibiotic choice and duration of treatment. The difference-in-differences estimator is calculated by subtracting the change in the proportion of guideline-adherent regimens between the intervention and baseline periods at the control site from the change in the proportion of guideline-adherent regimens between the intervention and baseline periods at the intervention site. We used log-binomial regression analyses for each outcome to calculate the relative risks (RRs) with 95% CIs and studied the interaction between study site (intervention and control) and study period (baseline, interviews and intervention). We specifically separated the baseline and interview periods because the interviews may have affected providers' prescribing behaviour. The interaction term of these two variables was the difference-in-differences estimator, and its coefficient reflected the magnitude of association between the intervention and the dependent outcome. All tests were two-sided, and p≤0.05 was considered statistically significant. Analyses were performed using SPSS V.26. The study was approved by the institutional review board and ethics committee at both sites.
Figure 1 is a flow chart showing the study selection process used to identify eligible visits with acute uncomplicated cystitis in the study period. After applying our prespecified exclusion criteria, our final sample included 932 visits representing 812 unique patients. Of these 932 visits, 546 were made at the intervention clinic and 386 at the control clinic. Table 3 presents the characteristics of patients at the intervention and control clinics. Patients with uncomplicated cystitis visiting the control clinic were slightly younger than those in the intervention clinic (46.8 years versus 49 years). In both clinics, most patients were Caucasian (62.5% and 54.9%, respectively), followed by black, Asian and Hispanic patients. No significant differences were observed for race/ethnicity or Charlson Comorbidity Index.
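To make the estimator described in the statistical analysis explicit, it can be written as below; the notation is ours, introduced only for orientation, and the worked arithmetic uses the proportions reported in the Results and in table 4.
    % Difference-in-differences on the proportion of guideline-adherent regimens:
    \mathrm{DiD} = \left(p_{I,\mathrm{int}} - p_{I,\mathrm{base}}\right) - \left(p_{C,\mathrm{int}} - p_{C,\mathrm{base}}\right)
                 = (66.9\% - 33.2\%) - (17.0\% - 5.3\%) = 33.7\% - 11.7\% = 22 \text{ percentage points}
    % In the log-binomial model, the same contrast is carried by the site-by-period interaction:
    \log P(\text{adherent}) = \beta_0 + \beta_1\,\mathrm{site} + \beta_2\,\mathrm{period} + \beta_3\,(\mathrm{site}\times\mathrm{period})
where exp(β3) expresses, on the relative-risk scale, how much larger the intervention-period increase was at the intervention site than at the control site.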
Table 4 summarises the proportion of guideline-adherent antibiotic regimens in each study period by study clinic. The overall proportion of guideline-adherent regimens increased in both the intervention and control clinics. For the baseline, interview and intervention periods, respectively, these values were 33.2%, 40.9% and 66.9% for the intervention clinic and 5.3%, 10.3% and 17.0% for the control clinic (table 4). The proportion of guideline-concordant prescriptions at baseline was higher in the intervention site (33.2%) than in the control site (5.3%). Using the difference-in-differences analysis, the estimated net change between the intervention and baseline periods that is attributable to the intervention is 22 percentage points (table 4).
Table 4 footnotes: †The proportion of guideline-adherent regimens was calculated by dividing the number of guideline-adherent regimens by the total number of antibiotic prescriptions. ‡The difference-in-differences estimator is calculated by subtracting the change in the proportion of guideline-adherent regimens between the intervention and baseline periods at the control site (17.0%-5.3%=11.7%) from the change in the proportion of guideline-adherent regimens between the intervention and baseline periods at the intervention site (66.9%-33.2%=33.7%), which is equal to 33.7%-11.7%=22%. The p value refers to the interaction term between study clinic (intervention and control) and study period (baseline, interviews and intervention) in the log-binomial regression analysis, implying that the increase in guideline adherence was significantly greater in the intervention site compared with the control site.
Guideline-adherent antibiotic regimen
Multivariable log-binomial regression analysis of the guideline-adherent regimen demonstrated a significant interaction between study clinic and study period (p=0.01), showing that the increase in guideline adherence was greater in the intervention site. All RRs derived from the regression analysis including the interaction term are presented in table 5. At the intervention site, the probability of prescribing a guideline-adherent regimen for uncomplicated cystitis was 12.7 times higher in the intervention period compared with the baseline period of the control clinic (RR 12.7, 95% CI 6.4 to 25.2) and 3.9 times higher compared with the intervention period of the control clinic (RR 3.9, 95% CI 2.4 to 6.3). At the control site, the probability of prescribing a guideline-adherent regimen was 3.2 times higher in the intervention period compared with the baseline period (RR 3.2, 95% CI 1.4 to 7.3). The risk difference (absolute risk reduction) and 95% CI of guideline adherence for uncomplicated cystitis between the intervention (66.9%) and control (17.0%) sites were 49.9% (38.8 to 60.9).
DISCUSSION
In this difference-in-differences study, we implemented a multifaceted stewardship intervention that targeted inappropriate antibiotic choice and duration of treatment. An increased proportion of guideline-adherent prescriptions was observed in the intervention period at both the intervention and control sites. However, in the difference-in-differences analysis, the intervention site had a significantly larger increase in adherence than the control site. The audit and feedback stewardship intervention has been successful for UTI treatment in emergency departments 22 33 and in acute 21 23 and long-term care. 21 In this study, we applied the audit and feedback intervention to primary care settings, where antibiotic stewardship is urgently needed. 2 34
Our intervention was based on a treatment algorithm derived from the IDSA guidelines and AAFP recommendations on management of uncomplicated UTI. 20 25 Our qualitative study showed that differentiating uncomplicated cystitis from other UTI syndromes is a challenge for providers 28 ; therefore, providing a set of diagnostic criteria for uncomplicated cystitis was important. This treatment algorithm describes the steps that providers should take when encountering a patient with UTI-relevant symptoms. We used this algorithm as a starting point to provide personalised, interactive, one-on-one feedback to providers to improve their capacity to distinguish between uncomplicated and complicated UTI and to treat UTI appropriately. The content was individualised for each recipient, and specific information about the correct solution was included to maximise feedback effectiveness. 30 Besides improvements in the clinical outcomes, we also observed that the intervention was received positively by the feedback recipients.
Positive impacts of antibiotic stewardship interventions in the inpatient setting and the emergency department have been well described. 21-23 33 In contrast, fewer studies describe successful implementation of UTI antibiotic stewardship strategies in the US outpatient setting, where an estimated 80% of antibiotic use occurs. 35 A previous study in a family medicine setting used a decision support tool embedded in the EMR to improve guideline adherence for uncomplicated UTI. 36 However, utilisation of the tool clinic-wide was only 29%. Our study, one of the first to use audit and feedback in family medicine, suggests that audit and feedback is an effective approach to antibiotic stewardship in outpatient primary care clinics, although it is labour intensive.
Our study has some limitations. First, we observed improved outcomes not only at the intervention site but also at the control site, which may be due to a spillover effect from the providers at the intervention site to their colleagues at the control site. The clinics are in the same geographical area, and the providers from both clinics have regularly scheduled faculty meetings. Another reason for improved outcomes at the intervention and control sites may be the release of FDA warnings against fluoroquinolones during the intervention period, which may have contributed to avoidance of fluoroquinolones and increased adherence to the guidelines. However, in our previous interrupted time series analysis in the same setting, the 2016 FDA boxed warning against fluoroquinolone use for simple infections was not associated with a significant reduction in the rate of fluoroquinolone prescriptions for UTI. 5 Second, we assessed the choice of antibiotics and the duration of therapy, as these aspects of the regimen had low concordance in our previous database study in the same clinics. 9 We did not assess the dose or frequency of antibiotics. Third, our study was conducted at only two clinics within the same healthcare system, and certain modifications might be needed for other healthcare systems, such as public healthcare systems. The proportion of guideline-adherent prescriptions at baseline was higher at the intervention site. This may be explained by the lower number of visits with uncomplicated cystitis in the baseline period at the control site, so the baseline difference between the clinics may be an artefact of small sample size.
Local prescribing culture and social norms may also have contributed to the baseline differences in the outcome between the clinics; culture has previously been shown to lead to differences in antibiotic prescribing in other studies. 37 38 However, different levels of the outcome at baseline are acceptable for difference-in-differences analysis. 39 There is no requirement that guideline-concordant prescriptions be similar at baseline, 39 and our data meet the parallel trend requirement for difference-in-differences analysis. Fourth, we did not perform a formal evaluation of implementation fidelity. Fifth, our quasi-experimental study was not randomised, and we did not evaluate sustainability of the intervention. Perhaps most importantly, we noted that providing individualised audit and feedback is time-consuming and presents sustainability challenges.
CONCLUSION
In this difference-in-differences study, we demonstrated that our multifaceted stewardship intervention can improve the proportion of primary care clinic visits for UTI in which women receive the right drug with the right duration, a fundamental aspect of antibiotic stewardship. Future dissemination of this intervention to additional primary care clinics may incorporate a component of computerised decision support built around our UTI treatment algorithm, in addition to case-based audit and feedback, to facilitate scale-up and sustainability.
Contributors LG, RZ and BT conceived and designed the study. LG wrote the first draft of the paper. LG, GG and MG extracted the data from the institutional database and performed the analysis. MS, MH, FK, MZ and RA contributed to the conception and design of the study and to the interpretation of data. All authors contributed intellectual content, edited the manuscript and approved the final version for submission.
Funding This investigator-initiated research study was funded by Zambon Pharmaceuticals. No award/grant number.
Disclaimer The sponsor did not participate in the study design, data collection, analysis, interpretation or preparation or submission of this report.
2021-07-11T06:16:40.726Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "d51cb63ed2d7837e912fa9b83fcad91f33cd712b", "oa_license": "CCBYNC", "oa_url": "https://fmch.bmj.com/content/fmch/9/3/e000834.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5db319976cfbd1cfb724a5aa6018848530c7c278", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
26006872
pes2o/s2orc
v3-fos-license
SERVER FAILURES ENABLED JAVASPACES SERVICE
The JavaSpaces service is a Distributed Shared Memory (DSM) implementation. It was introduced by Sun Microsystems as a service of the Jini system. Currently, JavaSpaces supports client-side fault tolerance, providing both transaction and mobile coordination mechanisms for this purpose, so application failures can be detected and recovered from. However, server-side failures may also occur at application runtime. Therefore, it is important to supply JavaSpaces with a mechanism that handles such failures dynamically. An example of a system that supports both server and client fault tolerance over DSM is the TRIPS system, whose protocols are suitable for integration into JavaSpaces to supply it with server fault tolerance capabilities. In this study, a Server Failures Enabled JavaSpaces Service (FTJS) is introduced. FTJS is based on the dynamic failure detection and recovery mechanisms implemented by TRIPS, but is able to handle both client- and server-side failures. The analysis, design and implementation issues of FTJS are presented.
INTRODUCTION
Machine crashes and network partitions are major problems when running a distributed application. It is important to deal with failures caused by such events at runtime; otherwise, the application must be restarted from the beginning. A possible solution to this problem is to introduce a software layer that is able to detect failures and recover from them dynamically. Fault tolerance mechanisms such as transactions and mobile coordination are applicable to client failures, while other mechanisms, such as dynamic replication, are suitable for server failures.
Sun Microsystems introduced the Jini system, a distributed system that enables groups of users and the resources required by those users to be federated. The main goal of Jini is to make different resources available to clients over the network. JavaSpaces is a service introduced by the Jini system: a Distributed Shared Memory (DSM) used for object storage and communication (SM, 2007; Kanjilal, 2013). The JavaSpaces service has been supplied with transaction and mobile coordination mechanisms and is therefore able to deal with client-side failures. It is important to support the JavaSpaces service with server-failure handling mechanisms as well (Kamalam and Bhaskaran, 2012). On the other hand, TRIPS is a system that enables dynamic detection of and recovery from failures on both the client and server sides using dynamic replication over DSM (Badawi, 2009).
Problem Statement
The goal of this research work is to construct FTJS, a server-failure enabled JavaSpaces service. This is accomplished by integrating the dynamic failure detection and recovery mechanisms introduced by TRIPS into the JavaSpaces service, enabling JavaSpaces to deal with both client and server failures.
Related Studies
TRIPS enables DSM-based applications to tolerate both server and client failures. It is based on the Linda model and constructs a distributed environment for parallel processing. The tuple space concept was introduced by the Linda model. A tuple space can be defined as an associative distributed shared memory accessible to all application processes. Its contents are entries, which are retrieved by a matching mechanism based on their contents rather than on physical addresses (Badawi, 2009; Alsmadi et al., 2013).
The tuple space model also introduces a set of DSM access operations.
TRIPS System Structure
TRIPS is structured in three main layers, as shown in Fig. 1: the Transis layer, the LiPS layer and the TRIPS message handling layer. The Transis event layer is a group communication layer inherited from the Transis group communication system. It is oriented towards high-throughput local communication and provides a group communication service. Transaction-based delivery semantics are guaranteed, message ordering is supported and network failures are transparent to the user. If membership changes occur, the system reports them. The idea behind its mechanism is to create a singleton group for each newly arriving process; the new group receives a 'mailbox' to which messages arrive (Dolev and Malki, 1996; Liefke, 1998). The Transis event layer is composed of two sub-layers: the network layer and the group communication layer. The former is responsible for handling socket connections and physical data routing. The group communication layer provides the membership mechanisms that let group members identify the group, and the communication and configuration mechanisms that let a member communicate with and broadcast messages to the other members (Badawi, 2009).
The second TRIPS layer is the LiPS layer. This layer controls and manages the distributed applications. This is accomplished through control processes called lipsds, which are responsible for managing the DSM and the application message log. They start and control the application processes and, moreover, replicate the application processes' data to other equivalent processes. Server-level failures are handled using replication. This layer is composed of two sub-layers: the TRIPS middle layer and the local tuple space layer. The former includes the interface operations that enable the application to interact with the DSM, for example Mid_in() to extract entries, Mid_out() to write entries and Mid_rd() to read entries. The LiPS (Library of Parallel Systems) implementation of the Linda primitives is used in constructing these operations. The local tuple space layer includes the DSM structures; it is used as the system repository and is inherited from the LiPS tuple space structure (Setz, 1997).
Fig. 1. TRIPS system internal layers
The TRIPS message handling layer is responsible for dealing with the different message types; the fault tolerance mechanism that handles them is integrated in this layer. Its main component is the "State Change Protocol", which handles both regular distributed shared memory messages and configuration change messages. This protocol is activated as soon as a message is received, either from a member wishing to access the DSM or from the membership layer indicating a view change.
The JavaSpaces Service
JavaSpaces has an associative set of operations to access the contents of the space. This set of operations inherits its behavior from the Linda tuple space model. For example, the write() operation is used to insert an entry into the space, and the take() operation is used to extract an entry from it. The write() and take() operations are equivalent to the Linda operations out() and in(), respectively (Busi et al., 2010). The JavaSpaces service supports transactions and mobile co-ordination to enable client-side failure handling. The transaction mechanism allows all operations to be performed under a transaction.
For example, if a take() operation is performed under a transaction, the entry is added to the set of entries taken by that transaction. If the transaction aborts, the taken entries are returned to the space; they are removed from the space only after the transaction commits (SM, 2007). Mobile co-ordination, on the other hand, is more closely associated with the DSM concepts. In this method the coordination primitives (JavaSpaces operations) are moved to the server side, which hosts the space that the client wishes to access. JavaSpaces operations executed under mobile co-ordination must be encapsulated in a coordination method, which is executed by the JavaSpaces server (Rowstron, 1999; Lazr, 2001; Tanha et al., 2012).
JavaSpaces has been introduced as one of the powerful services of the Jini system, which was developed by Sun Microsystems. JavaSpaces enables the Java environment to deal with a network of virtual machines and helps in constructing distributed applications of varying sizes. The central element in Jini is the service, which is an interface to a hardware device, application, database, or anything else that can be connected to the network. To be Jini enabled, a device must have processing power and memory. Jini allows devices without memory or processing power to be connected to the virtual system and controlled by other hardware and/or software acting as proxies. The task of such a proxy is to present the device to the system using its own processing power and memory (Heiningen et al., 2006a; 2006b).
JavaSpaces is a DSM implementation. It stores data items, called entries, to be accessed by clients. The entry objects are expressed as classes that implement the interface Jini.core.entry.Entry. Entry behavior and characteristics are inherited from the Linda tuple space model. Different entries are said to be of the same type if they are members of the same class, and an entry can have methods that define its behavior (SM, 2007; Batheja and Parashar, 2010; Marghny and Refaat, 2012). In this section, the Jini system structure has been reviewed, as well as the current JavaSpaces fault tolerance protocols.
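For readers unfamiliar with the JavaSpaces programming model sketched above, the following minimal Java example shows an entry class and the write()/take() calls it would be used with. It is written against the standard Jini/JavaSpaces API (net.jini.core.entry.Entry, net.jini.space.JavaSpace) and is an illustrative sketch only; how the space reference is obtained through Jini lookup and discovery is omitted here.
    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // An entry is a class with public object fields implementing Entry (a Linda-style tuple).
    public class CounterEntry implements Entry {
        public Integer value;                       // fields must be public objects
        public CounterEntry() { }                   // a public no-arg constructor is required
        public CounterEntry(Integer value) { this.value = value; }
    }

    class SpaceUsageSketch {
        // 'space' would normally be obtained through Jini lookup/discovery.
        static void demo(JavaSpace space) throws Exception {
            space.write(new CounterEntry(0), null, Lease.FOREVER);          // out(): insert an entry

            CounterEntry template = new CounterEntry();                     // null fields act as wildcards
            CounterEntry taken =
                    (CounterEntry) space.take(template, null, Long.MAX_VALUE);  // in(): remove a match
        }
    }
Passing a Transaction object instead of null in the second argument of write() and take() gives the transactional semantics described above, in which taken entries are returned to the space if the transaction aborts.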
MATERIALS AND METHODS
The methodology of this research work is based on the idea of integrating TRIPS, which handles server failures, into the JavaSpaces service, which already handles application (client) failures at runtime. In this section, the TRIPS fault tolerance methodology and the Jini system structure are introduced.
Fault Tolerance in TRIPS
Dynamic replication is the mechanism used by TRIPS to enforce fault tolerance. The core of the TRIPS message handling layer is the scheduler, which is responsible for receiving a state change message, recognizing its type and directing it to the suitable handling routine (Badawi, 2009). The scheduler structure and behavior are shown in Fig. 2. If a configuration change arrives while a regular DSM message is being handled, an interrupt request is sent to the DSM handler. The DSM operation is interrupted and control is returned to the scheduler without performing the operation; the configuration changes are handled first and the cancelled operation is inserted into a local queue to be processed later. TRIPS uses the "State Change Protocol" to ensure the availability of the distributed application processes. This protocol is responsible for handling the possible state changes, such as a new member joining or an existing member leaving, that can occur in the distributed application. The protocol guarantees the survival of data in the DSM in spite of failures. Moreover, it makes sure that the regular operations are applied to all members in the configuration. When a new member starts, the global queue that contains all application members is activated and the DSM data structures are initialized. Control is then passed to the configuration change handler that controls the membership changes (Badawi, 2009).
The Jini System Structure
To accomplish service communication, Jini uses Remote Method Invocation (RMI), as shown in Fig. 3. RMI enables full objects (code and data) to be passed around the network, which gives Jini the simplicity of moving encapsulated objects around the network. From the figure, one can notice that the Jini layers are located on top of the Java platform. This enables the processes and services that run under Jini control to inherit the powerful behavior of Java processes. The Jini network federation consists of two main layers. The Lookup layer includes a protocol that enables clients to search for the Jini services they need to utilize. The Discovery/Join layer includes discovery and join protocols that enable the clients to join the services they need to utilize.
TRIPS JAVASPACES SERVICE (FTJS)
Both of the JavaSpaces fault tolerance methods deal with client-side failures. The proposed service (FTJS) deals with both server- and client-side failures. For this purpose, a warm backup replication protocol is presented. In this section, the proposed protocol, SpacesManager, is introduced, as well as the analysis and design of FTJS.
The SpacesManager Layer
The idea behind FTJS is to construct the SpacesManager layer, which increases the system availability. Normally, there are many JavaSpaces services running per application. Some of these spaces are active and others are passive. One of the active spaces is the original space, which is called the replica, and the others are identical copies of the original space. The SpacesManager layer is responsible for spreading the effect of the client operations to all active spaces. If the client writes an entry into the system, the SpacesManager replicates this entry in all active spaces and ensures that all spaces are identical. Moreover, it is responsible for managing space failures. It performs the client operations on the active spaces; if any active space fails, the client never notices any change in the system.
The SpacesManager failure recovery algorithm is shown in Fig. 4. The SpacesManager layer handles different failure types depending on the type of the failed machine. If the failed machine contains an active space, the response depends on whether the failed active space is the replica or not. If the failed machine is the replica, one of the still-alive active spaces is chosen to be the original space. To keep the set of active spaces from shrinking, one of the passive spaces is initiated and inserted into the list of active machines; the new active space receives a copy of all entries. If any active space other than the replica fails, one of the passive spaces is chosen to be the new active space and it receives a copy of all entries. In case of machine failure, the SpacesManager blocks that machine; in other words, the system deletes it from the active spaces list. If the failed/disjoined machine comes back to the system, the SpacesManager deletes all entries in its JavaSpaces and rejoins it as a passive machine.
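The recovery policy just described (Fig. 4) can be summarised in a short Java sketch. This is an illustrative reconstruction of the behaviour described in the text, not code from the FTJS implementation; the Space interface and all method names are invented for the example.
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch of the SpacesManager recovery policy described in the text (Fig. 4).
    public class SpacesManagerRecoverySketch {

        interface Space {
            void copyAllEntriesFrom(Space source);   // copy every entry from a still-alive active space
            void deleteAllEntries();                 // flush a rejoining machine before it becomes passive
        }

        private final List<Space> activeSpaces = new ArrayList<>();   // index 0 plays the role of the replica
        private final List<Space> passiveSpaces = new ArrayList<>();

        /** Invoked when one of the active spaces is detected as failed. */
        void onActiveSpaceFailure(Space failed) {
            // Block the failed machine: remove it from the active list. If it was the replica
            // (index 0), the next still-alive active space implicitly becomes the new original.
            activeSpaces.remove(failed);

            // Keep the number of active spaces constant: activate one passive space and
            // give it a full copy of the entries from a still-alive active space.
            if (!passiveSpaces.isEmpty() && !activeSpaces.isEmpty()) {
                Space promoted = passiveSpaces.remove(0);
                promoted.copyAllEntriesFrom(activeSpaces.get(0));
                activeSpaces.add(promoted);
            }
        }

        /** Invoked when a previously failed/disjoined machine comes back. */
        void onMachineRejoin(Space rejoining) {
            rejoining.deleteAllEntries();            // its stale contents are discarded
            passiveSpaces.add(rejoining);            // it rejoins the system as a passive machine
        }
    }
The key design choice visible here is the warm-backup style: passive spaces hold no state until they are promoted, at which point they receive a full copy of the entries, which is also why the recovery time grows with the number of entries, as measured later in the paper.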
FTJS Service Design
The FTJS service consists of three main parts. Figure 5 shows the FTJS class diagram. The first part is the SpacesManager. It is based on defining a DSM control service in Java RMI. The SpacesManager interface contains the basic DSM operations (write(), take() and read()) and extends the Java API Remote interface. SpacesManagerImp is a class that implements the SpacesManager interface and extends the Java API class UnicastRemoteObject. This class starts the GetSpacesThread thread in its constructor. GetSpacesThread is a thread that contains an infinite loop to check the still-alive JavaSpaces. The GetSpacesThread class contains a public variable of type Vector called SpacesObject, which holds the objects of all JavaSpaces services in the system together with other metadata such as the type of space (active or passive), block...
The second component of FTJS is the SetSpacesThread, which is responsible for managing the failures. It uses the checkSpaces() method to check the existence of the system machines; this method accesses, in turn, the JSServiceLocator class objects to check the existence of the JavaSpaces service. It uses the convertSpace() method to convert passive spaces to active spaces and the copySpace() method to copy all entries from one of the still-alive active spaces to the new active space. The SetSpacesThread uses the flushSpace() method to delete all entries from a rejoining machine. Figure 6 shows the FTJS structure.
The third component of the FTJS service is the SpacesManagerClient, a client program that is used to test the service using the resizable entry MyEntry. The client program fetches the dynamic replica service using the ServiceLocator class. The client code uses this service through its proxy class, called SpacesManagerInfProx. This proxy allows the user to add code around the service operations. Figure 6 gives an overview of the flow of control in the dynamic replica protocol used in the FTJS service.
RESULTS
In this section, practical tests are introduced to evaluate the FTJS service. First, the test environment and technique are introduced; then the tests and their results are presented.
Test Environment and Technique
The measurements were performed using six PCs, each with an Intel Pentium 2.4 GHz CPU and 512 MB RAM. Inter-communication among the machines is done by 100 Mbps Ethernet. The software environment includes Windows XP Professional as the operating system, Java JDK 1.4.2_04, Jini(TM) Technology Starter Kit v2.0.2 and a free visual platform for Jini 2.0 called Inca X(TM). A fault-tolerance test that is closely associated with the dynamic replica is introduced; this test covers the system fault tolerance and the recovery time. Other tests have been performed to measure the performance of the proposed service by testing the DSM access operations for insertion and retrieval.
The Fault Tolerance Test
In this section, it is shown that the proposed service tolerates failures. The following scenario has been applied for this purpose. A counter, an entry that contains an integer, is initiated by one client. The client procedure writes the entry, takes that entry, increases the counter by 1 and then rewrites the entry with the new value. These steps are repeated for a large number of iterations. One of the active spaces is forced to fail during the process. If the client process survives in spite of the failure and the counter increases correctly, then the service is shown to be fault tolerant. Figure 7 shows a skeleton code for the test steps.
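Since Figure 7 itself is not reproduced here, the following Java sketch illustrates the counter test just described, written against the standard JavaSpace API and reusing the CounterEntry class from the earlier sketch. The class and method names are illustrative; in the real test the space reference would be the FTJS/SpacesManager proxy obtained via Jini lookup.
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Illustrative sketch of the counter-based fault tolerance test (the paper's Figure 7 skeleton).
    public class FaultToleranceTestSketch {

        static void run(JavaSpace space) throws Exception {
            space.write(new CounterEntry(0), null, Lease.FOREVER);      // initialize the counter

            CounterEntry template = new CounterEntry();                 // wildcard template
            while (true) {                                              // the loop is infinite
                CounterEntry e = (CounterEntry) space.take(template, null, Long.MAX_VALUE);
                e.value = e.value + 1;                                  // increase the counter by 1
                space.write(e, null, Lease.FOREVER);                    // rewrite with the new value
                System.out.println("counter = " + e.value);
                // During the run, one active space is forced to fail; if the loop keeps
                // incrementing correctly, the service has tolerated the failure.
            }
        }
    }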
In this test, the loop is infinite: the written entry is taken, increased, and rewritten with the new value. Figure 8 shows the output of the previous test. Part (A) shows the output messages of the entry counter value while writing and taking the entry. The second part of the list (B) shows the SetSpacesThread output messages, which indicate the still-alive active and passive spaces. While writing the entry whose counter value equals 47, the first active JavaSpaces is forced to fail. The FTJS service chooses passive space 1 to become the new active space. The dynamic replica service then copies the entries from one of the still-alive spaces (active space 2) to passive space 1. Finally, the FTJS service converts passive space 1 into active space 1 and blocks the object of passive space 1, which no longer exists.

Measuring the Recovery Time

The FTJS recovery time has been measured. The time taken to recover from a failure in one of the active spaces equals the time required to copy the system entries from one of the still-alive active spaces to one of the passive spaces, plus the time required to convert that passive space into an active space. The parameter with the largest effect on the recovery time is the number of entries in the DSM. In this test, different numbers of entries have been used with entry sizes of 1 and 2 kbytes. Figure 9 illustrates the recovery time in FTJS. From the figure, it is clear that increasing the number of entries in the space increases the recovery time, because of the time taken to copy the entries to the new active space.

Performance Tests

This section evaluates the effect of the number of active FTJS spaces on performance by testing the DSM access operations. Figure 10 shows the write() operation performance for two, three and four active spaces. The figure shows that the performance of the write() operation decreases as the number of active spaces in the system increases, because the write() operation is applied to all active spaces. The difference among the three curves (two, three and four active spaces) is minimal at small entry array sizes. Figure 11 shows the write()-take() performance comparison for two, three and four active spaces. In this figure, the four-active-spaces curve is the noisiest; the noise is due to the fact that increasing the number of machines (active spaces) adds extra communication time. Moreover, the difference between the two- and three-active-space curves is smaller than the difference between the three- and four-active-space curves.

CONCLUSION

In this research work, the FTJS service is introduced: a JavaSpaces service that tolerates server failures. A high-availability layer, the SpacesManager layer, has been added to the JavaSpaces service. If a failure occurs, the application data survives without any interruption. Moreover, the detection and recovery process is transparent to the user of the service. Several practical tests have been applied to show the performance of the proposed service: a fault-tolerance test, a recovery-time test, and performance tests on different read-write primitives. All the tests show that the service performance is reasonable and that the proposed service is practically applicable. The proposed JavaSpaces service has been applied to a Local Area Network (LAN); it is possible to apply it to a Wide Area Network (WAN) in later versions.
On the other hand, the current version of the service cannot handle merging spaces that already contain entries; the non-original space must be empty when a merge takes place. A possible future work is to extend the protocol to merge non-empty spaces, which requires substantial effort to resolve the well-known merge conflicts.
Vibration and noise characteristics of an inverter for electric vehicle application

Noise, Vibration and Harshness (NVH) is one of the key parameters associated with the comfort of an automobile. An Electric Vehicle (EV) contains a transmission driven by an electric motor, which ultimately gets its power or current supply from the inverter. In the electric vehicle architecture, the motor ranks first, followed by the transmission and the inverter, in terms of NVH contribution. The function of an inverter is to convert DC current from the battery source into the 3-phase AC current that goes to the electric motor. The torque and speed characteristics of the motor depend on the current and voltage supplied by the inverter, so it is important to assess the NVH characteristics of the inverter as well. In this paper, the vibration and acoustic performance of a standalone EV inverter have been studied by testing it at different vehicle operating conditions. Ideally, the output 3-phase AC current from the inverter should contain only the electrical frequency, but it is observed that the AC current contains harmonics in the form of linear combinations of the electrical frequency and the switching frequency. From this study, observations from NVH tests on an EV inverter are highlighted in the frequency ranges where relatively high vibration and noise levels were present.

Introduction

An EV mainly consists of components such as the battery source, inverter, motor and transmission. The main function of the inverter is to convert DC current from the battery source into the 3-phase AC current that goes into the motor. The technique used for this conversion is pulse width modulation (PWM) [1]. The frequency at which this conversion occurs is termed the switching frequency, and it is usually the same across the inverter's different operating conditions. The speed at which the motor rotates is governed by the electrical frequency of the 3-phase AC current from the inverter. When the output current signal from the inverter is analyzed, it is observed that the amplitude of the AC current in the frequency domain has sharp, high-amplitude peaks at the switching frequency [7]. Further, the AC current has higher amplitude in sidebands formed by combinations of the switching frequency and the electrical frequency [1]. This fluctuation in current gives rise to fluctuating forces and hence leads to vibration in the structures through which the AC current passes, such as the inverter housing and the motor [9].

In this paper, the focus is on the NVH characteristics of an inverter. The literature shows that there are various sources responsible for noise in a power electronic device such as an inverter [2]: Lorentz forces, magnetostriction and electrostriction. These phenomena apply to electrical components that are generally present in an inverter. Broadly speaking, conducting units such as the AC and DC busbars are responsible for noise generated by electromagnetic (EMAG) sources [8], choke coils (inductors) are responsible for noise due to magnetostriction, and MLCC capacitors are responsible for noise generated through the electrostriction phenomenon.
In the literature, information about the noise levels of individual key electrical components is available, but the overall noise characteristic of an inverter is not. In this paper, emphasis is placed on testing to measure the overall vibration and noise levels of an inverter, keeping the focus of the study around the inverter's switching frequency. This test setup is applicable to any power electronics product that has noise-radiating components such as busbars, inductors and capacitors.

Problem description

It was long believed that the motor is the noisiest component in an EV, but recent studies show that the inverter is also responsible for noise at higher frequencies [2]. The 3-phase AC current has ripples, developed by the PWM switching of the IGBTs, which cause sharp, high-amplitude current peaks around the switching frequency [7]. These currents are responsible for force fluctuations in electrical components such as busbars. These force fluctuations cause vibration in the structure and hence lead to noise around the switching frequency, which ranges from 5 to 15 kHz depending on the inverter application. In this paper, the responses obtained from vibration and acoustic testing of an inverter at different operating conditions are shown in this higher frequency range, which can prove irritating to a person sitting inside the vehicle cabin.

Sources of noise

Inverters are an integral part of the electric propulsion system of an EV. In various vehicle modes such as charging, driving and cruising, these components radiate humming, buzzing and tonal noise at higher frequencies. Based on the literature, the current state of inverter power electronics and its noise excitation mechanisms were studied. There are broadly three phenomena responsible for high-frequency tonal noise in an inverter: Lorentz forces, magnetostriction and electrostriction. Busbars, inductors and MLCCs are the key noise-radiating components in an inverter; these three phenomena are observed to radiate significant noise even when the components are tested separately. If present inside the inverter, these components radiate acoustic noise in combination or separately through the different mechanisms given below.

Lorentz forces

The Lorentz force on parallel current-carrying conductors, for example the phase busbars, is given for a pair of conductors by F/l = μ0·I1·I2/(2π·d), where I1 and I2 are the currents, l is the conductor length and d is the gap between the conductors carrying the current [3]. When the current varies with time, the force also changes with time and is thus responsible for vibration excitation of electrical conductors such as the AC and DC busbars inside the inverter [6]. The force is transferred through the busbar supports to the inverter housing (a worked order-of-magnitude example follows the magnetostriction discussion below).

Magnetostriction

When a component such as an inductor is present inside the inverter, the current passing through it strains the core at the frequency of that current. This changes the dimensions of the core, leading to vibration and thus noise in the structure [4]. The inverter has common-mode chokes to filter the output signal; as this is a vital component, its placement on the inverter is responsible for acoustic noise radiation.
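For the Lorentz-force mechanism described above, a rough order-of-magnitude check is useful. The numbers below are illustrative assumptions, not values from the tested inverter: for two parallel busbars carrying I1 = I2 = 100 A with a gap d = 10 mm,

F/l = μ0·I1·I2/(2π·d) = (4π × 10^-7 H/m × 100 A × 100 A) / (2π × 0.01 m) ≈ 0.2 N/m.

Because the force scales with the product of the currents, any ripple in the AC current appears directly as an oscillating force on the busbars, consistent with the excitation mechanism described above.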
Electrostriction

An electric field polarizes a dielectric medium in the presence of a varying voltage. This causes the dielectric medium to strain at the frequency of the fluctuating voltage. The fluctuation transfers force to the PCB on which the MLCC capacitors are mounted, and the vibration is thus transferred to the structure. The noise generated through this source is governed by the voltage characteristics of each MLCC [3]. Usually, a PCB contains many MLCCs, which can collectively produce significant forces; these forces are transferred through the PCB mounts and radiate noise.

Test setup

The test setup consists of the inverter as the device under test (DUT) along with the supporting components required for its functioning at different operating conditions. These supporting components include a DC power input, which acts as the DC power source for the inverter, and a cooling channel unit (CCU), which removes the heat generated during IGBT operation. A resistive-inductive load (R-L load) has been tuned to mimic the motor load that draws current from the inverter. A fixture is used to hold the DUT and to isolate ground vibrations that would interfere with the vibration generated by the electrical sources at the operating conditions. For sound measurement, a customized hemi-anechoic chamber was built to isolate the inverter's acoustic noise from background noise. A microphone setup measures the sound pressure level (SPL) at 10 cm from the inverter, as defined by the OEM requirement. The test is performed in two parts: vibration responses are measured on the housing, followed by acoustic noise measurement from the inverter. In this test, the design of the anechoic chamber was critical for isolating low-frequency noise from the high-frequency acoustic noise near the switching frequency zone; hence the design of the anechoic chamber is described in detail.

Construction of anechoic chamber

The dimensions of the anechoic chamber should comply with the requirement given by ISO 3745, which implies that the maximum volume of the object measured inside the chamber should be less than or equal to 5 percent of the internal volume of the chamber, i.e., V_object ≤ 0.05 · V_chamber. Considering the approximate dimensions of the DUT used in this testing, the volume of the object is around 0.0224 m³. Hence the minimum chamber volume comes out to be 0.448 m³, which is rounded off to 0.5 m³, as it is better to have a chamber as large as possible (a short check of this sizing appears below, just before the vibration testing description). Since the volume of the chamber is fixed, its dimensions must now be chosen. The room dimensions are given in terms of the ratios of the three sides of the rectangular chamber, following the room proportion criterion in Bolt [5]. The plot given in Bolt [5] is useful for designing a rectangular anechoic chamber that avoids excitation of room modes. According to the recommendations, the three sides of the chamber should be in the ratio 1 : 1.2 : 1.4. For ease of manufacturing, the finalized internal dimension of the chamber is 0.9 m × 0.9 m × 0.9 m, as shown in Fig. 2. The design of such a chamber can be carried out as per [5] based on the DUT of choice and the individual application.

Test conditions

Testing is conducted in a room that provides the DC power input along with space for the R-L load, the CCU and the anechoic chamber. The testing is conducted in two parts.
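As a quick check of the chamber sizing described above, using only the numbers quoted in the text:

V_chamber ≥ V_DUT / 0.05 = 0.0224 m³ / 0.05 = 0.448 m³, and the manufactured chamber gives 0.9 m × 0.9 m × 0.9 m = 0.729 m³ > 0.448 m³.

The built 0.9 m cube therefore satisfies the ISO 3745 five-percent criterion with margin, although, being a cube, it departs from the 1 : 1.2 : 1.4 proportion recommended for suppressing room modes.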
Vibration testing

For vibration testing, the DUT is bolted onto a fixture, which in turn is mounted on a bench. Since the response levels due to electrical sources are low, a Laser Doppler Vibrometer (LDV) is preferred over accelerometers, as the former can measure responses without contacting the object. The inverter is placed on the fixture to isolate ground vibrations that could interfere with the vibration responses measured by the LDV. The inverter is powered on at the different operating conditions listed in Table 1 and the vibration is measured on the housing surface, with the LDV pointed at the inverter housing. The responses measured at the given operating conditions are shown in Fig. 3. The differences between the test conditions are the rms value of the AC current and the electrical frequency of the AC current, while the switching frequency remains the same for the three conditions, as listed in Table 1. Through vibration testing, it is observed that the responses near the switching frequency and at harmonics of the switching and electrical frequencies are responsible for significant vibration levels from the inverter due to the 3-phase AC current, as per Fig. 3. Based on the significant peaks observed in the vibration test, acoustic testing is subsequently performed to evaluate the noise due to these vibration signatures.

Acoustic testing

For acoustic testing, the DUT is placed inside the anechoic chamber and powered on using cables routed out of the chamber through the side cavities, as per the schematic given in Fig. 1. A set of microphones is placed inside the anechoic chamber around the inverter and powered through cables coming from the side conduits. The side openings of the anechoic chamber are sealed with industrial wax after the wiring setup. Testing is then commenced, and readings are taken at the different inverter operating conditions listed in Table 2. The sound pressure level (SPL) is measured for the above operating conditions. The acoustic test is performed at a higher power rating than the vibration test, which is reflected in the higher rms AC current of the inverter; power electronics devices tend to radiate greater acoustic noise at higher current levels. The outcome of the acoustic testing is given in Fig. 4, where sharp peaks are observed at and around the switching frequency, while for the power-off condition no peaks are observed, confirming that there is no background noise present even at higher frequencies.

Observations

As per Fig. 3, the vibration responses at the switching frequency are higher when the rms AC current from the inverter is higher (amplitude of current in Test 2 > Test 3 > Test 1). Hence the vibration amplitude is directly proportional to the magnitude of the rms AC current, consistent with the Lorentz-force relation given earlier: the electromagnetic force increases with current amplitude. Also, from Fig. 3 and Fig. 4 it can be deduced that higher-amplitude harmonics of the AC current are present at 10 kHz ± N × electrical frequency, where N = 1, 2, 3, etc., along with the switching frequency. Hence, higher vibration magnitudes are observed at the switching frequency as well as at combinations of the switching and electrical frequencies. Since the acoustic testing is done at a higher power rating than the vibration testing, as evident from Tables 1 and 2, the SPL peaks at the switching frequency and at combinations of the switching and electrical frequencies in Fig. 4 are sharper than the corresponding vibration responses in Fig. 3.
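To make the sideband pattern concrete, suppose the switching frequency is f_sw = 10 kHz and the electrical frequency is f_e = 200 Hz (the electrical-frequency value is an illustrative assumption, not one of the tabulated test conditions). The first three sideband pairs then fall at

f_N = f_sw ± N·f_e, N = 1, 2, 3, i.e., 9.8/10.2 kHz, 9.6/10.4 kHz and 9.4/10.6 kHz.

Peaks clustered in this way around the switching frequency are the signature reported in Figs. 3 and 4.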
The SPL response around the switching frequency follows the same trend as the responses from the vibration testing, with higher-amplitude harmonics present at combinations of the switching and electrical frequencies. An interesting finding common to both the acoustic and the vibration testing is that the amplitude of the third harmonic, 10 kHz ± 3 × electrical frequency, has a higher response than the other harmonics, except for the Test 1 case of the vibration testing.

Conclusions

The trend of the vibration responses at harmonics of the combination of switching frequency and electrical frequency correlates with the acoustic testing performed on the inverter. Hence it can be concluded that AC current ripples cause high levels of vibration and acoustic noise around the switching frequency of the inverter. This testing helped to pinpoint the areas and the frequency range where the noise issue needs to be addressed for an inverter, giving an advantage in benchmarking inverters by their acoustic noise performance for EVs.

Fig. 1. Schematic of the acoustic testing of the inverter and the supporting components required to make the inverter functional.
Fig. 2. Dimensions of the acoustic chamber for testing of the inverter, and the test setup for acoustic testing inside the chamber.
Fig. 3. Vibration responses (m/s vs. frequency in Hz) of the inverter housing.
Fig. 4. Acoustic responses of the inverter housing (SPL vs. frequency in Hz).
Table 1. Operating conditions for vibration testing (columns: S. No., output current from inverter in rms, electrical frequency, switching frequency).
Table 2. Operating conditions for acoustic testing (columns: S. No., output current from inverter in rms, electrical frequency, switching frequency).
Enhance Accuracy: Sensitivity and Uncertainty Theory in LiDAR Odometry and Mapping

Improving LiDAR pose estimation accuracy is currently an urgent need for mobile robots. Research indicates that diverse LiDAR points have different influences on the accuracy of pose estimation. This study aims to select a good point set to enhance accuracy. Accordingly, the sensitivity and uncertainty of LiDAR point residuals are formulated as the fundamental basis for derivation and analysis. High-sensitivity and low-uncertainty point residual terms are preferred to achieve higher pose estimation accuracy. The proposed selection method is theoretically proven to achieve a global statistical optimum. It was tested on artificial data and evaluated on the KITTI benchmark, and it was also implemented in LiDAR odometry (LO) and LiDAR inertial odometry (LIO), both indoors and outdoors. The experiments reveal that utilizing selected LiDAR point residuals simultaneously enhances optimization accuracy, decreases the number of residual terms, and guarantees real-time performance.

INTRODUCTION

Simultaneous localization and mapping (SLAM) methods have been applied to solve localization and map-building problems in robotics. LiDAR odometry and local mapping algorithms are widely used in SLAM systems. However, integration processes unavoidably cause an accumulation of pose errors; this drift causes map distortion and estimation failure. Although SLAM includes loop closing, it only disperses errors over the historical trajectory rather than truly eliminating every pose error [1]. The key to improving long-term performance lies in enhancing front-end accuracy [2]. Inspired by our experience with visual odometry (VO) in dynamic environments [3], diverse feature points are considered owing to their different influences on pose estimation. We presume that diverse LiDAR point residuals have different sensitivities and uncertainties with respect to pose estimation accuracy. Therefore, a novel sensitivity and uncertainty theory that distinguishes residuals from diverse pattern representations is proposed. The theory quantifies each residual term's influence on pose estimation accuracy in six dimensions: three for rotation and three for translation. It classifies and selects a subset of high-sensitivity, low-uncertainty residuals, which enters the optimization to achieve higher accuracy than using all points. As shown in Fig. 1, the left is the original LO using all valid planar feature points, and the right adds our selection scheme to this LO; both run on KITTI benchmark [4] sequence 03. The selected planar points are almost half of the original, yet they simultaneously yield a lower translation error. Sensitivity describes the extent to which a registration residual changes when a standard pose disturbance is applied to the sensor. It is defined as a six-dimensional vector: three entries for the rotation angles and three for the translation. In Fig. 2, when calculating a LiDAR rotation angle, using high-sensitivity points is better, analogous to the lever principle. In Fig. 3, every planar point's sensitivity to the yaw angle is drawn in color: black and red are low, while green and blue are high. The near-ground points are not sensitive to the yaw angle, and the middle building walls are more sensitive than the left and right walls. Uncertainty describes the reliability of a registration residual term, combining a LiDAR point measurement and the credibility of its corresponding geometric model.
It is defined as a three-dimensional Gaussian distribution over a geometric pattern, such as a line or a plane. In Fig. 4, the high-uncertainty planar points shown in blue are trees, which are unsuitable for pose estimation; the red regions represent smooth walls and near-ground points, which are reliable for pose estimation. This research aims to find calculable formulations of LiDAR point residual sensitivity and uncertainty. We comprehensively consider these two properties in a score vector and decouple them into six dimensions. Thereafter, all LiDAR point residuals are sorted to select a subset of high-sensitivity and low-uncertainty points, and these residuals are sent to the optimization. In the code realization, a threshold rule is defined to stop the selection. Theoretically, we demonstrate that sorting residuals using the proposed method achieves a global statistical optimum. The algorithm is independent of the specific LiDAR SLAM algorithm and LiDAR hardware configuration; it is a general module to enhance accuracy and can be added to any existing optimization-based code realization. Our experiments on LO and LIO indicate that utilizing selected residuals simultaneously enhances optimization accuracy, decreases the number of residual terms, and guarantees real-time performance. The main contributions of this study are as follows: (1) To the best of our knowledge, this is the first study to theoretically prove a globally statistically optimal point-selection scheme for enhancing pose estimation accuracy in LiDAR odometry and mapping. (2) This paper proposes a sensitivity model for point-to-plane and point-to-line distances, as well as an uncertainty model for the LiDAR point measurement and its corresponding geometry pattern. Methods for decoupling sensitivity and uncertainty into six dimensions are also proposed. (3) Experiments were conducted on the KITTI benchmark. The ALOAM translation error decreased from 1.7318% to 1.5781% while using virtually half the number of planes and lines. Different LiDAR scan modes were evaluated in indoor and outdoor environments using LO and LIO, increasing the average accuracy by approximately 20%. The experiments reveal that the time consumption depends on the number of residuals rather than on the feature detection and residual selection parts, so the proposed selection scheme guarantees real-time performance.

RELATED WORK

Related studies can be classified into three categories: sensitivity, uncertainty, and entropy-based LO methods.

Sensitivity Model

The report in [5] provides considerable motivation for employing the sensitivity model. It proposes a technique for determining whether a pair of meshes is unstable in the iterative closest point (ICP) algorithm. It estimates a covariance matrix from a sparse uniform sampling of the input and then develops a strategy that attempts to minimize this instability by drawing a new set of sample points primarily from the stable areas of the input meshes. However, that study concentrates on the registration problem; it does not consider measurement uncertainties and analyzes only the mesh plane errors. Nevertheless, this technique is fundamental to our theory. LO-Degeneracy [6] aims to avoid degenerate environments, regarded as conditions in which the sensitivity in one dimension is zero. It determines and separates the degenerate dimensions in the state space and partially solves the problem in well-conditioned directions.
It linearizes the cost function and uses the product of the coefficient matrix with its transpose, forming a matrix that contains the geometric structure of the problem constraints. The IMLS technique [7] is a complete LO that uses the method of [5] to select points; thus, numerous points can alter the constraints to tighten the final pose. However, it does not solve the problem theoretically and only considers a point-to-plane sensitivity model. The LeGO-LOAM algorithm [8] uses normal-vector clustering to detect true line points and obtain better matching. Optimization is carried out in two stages using the ground-vehicle hypothesis. LeGO-LOAM can be regarded as improving accuracy from a pattern recognition perspective, but it is strongly limited by the ground-vehicle hypothesis; the two-stage optimization makes all observations calculate the rotation and translation separately. LION [9] can self-assess its performance using an observability metric that evaluates whether the pose estimation is geometrically ill-constrained. This is similar to LO-Degeneracy [6] and is applied to a real tunnel scene. SGLO [10] considers the derivative of the residuals; however, it does not discuss it in depth, and it does not consider the constraint information in every dimension, which is the core content here. MULLS [11] clarified the residual linearization process. It uses all observations in the estimation with diverse weights, implying that estimations in different directions can be balanced. In [12], inlier-set cardinality maximization was used to select suitable features for 3D-2D pose estimation; bearing vectors play an important role in the selection and in avoiding degeneration. Compared with these studies, the proposed theory clarifies sensitivity theoretically. It is inspired by IMLS and extends to the point-to-line residual type. Compared with MULLS, the point-to-line residual in MULLS yields a Hessian matrix that cannot be sorted directly; the proposed theory uses a main-direction projection to regroup it into a linear form, which is convenient for sorting.

Uncertainty Model

The uncertainty model consists of two parts. The first part independently models the uncertainty of every laser point measurement in 3D and is referred to as the laser scan beam. The second part models the uncertainty of the geometric pattern in the map. The essential difference between these two models is that the first describes the uncertainty of the current observations and the second describes the uncertainty of the historically measured information.

Laser Scan Beam

For laser points, [13] proposed a rigorous first-order error analysis. It measures the horizontal and vertical errors of a laser pulse and determines the nonlinear error growth, as recently reported in [14]. Comparing various LiDAR sensors available on the market [14], measurement errors were found to depend on the target range. In [15], a laser point was modeled as a projected footprint and used to represent an uncertainty matrix. A 3D Gaussian distribution was proposed in [16] to model LiDAR point uncertainty and to clarify its propagation.

Geometry Pattern

For geometry patterns, point cloud data (PCD) are direct and easy to use for localization. Accordingly, this investigation focuses on the map geometry pattern implemented using PCD. The LOAM algorithm [17] uses five neighboring points to fit a plane or a line by decoupling the eigenvalues; it essentially computes a 3D Gaussian distribution but discards the irrelevant directions.
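For reference, the eigenvalue-based plane/line classification sketched above can be written out explicitly; this is the standard formulation of the idea, and the thresholding described is a common convention rather than the exact criterion of [17]. Given k neighboring map points x_1, ..., x_k,

μ = (1/k) Σ_j x_j, C = (1/k) Σ_j (x_j − μ)(x_j − μ)^T, C ν_m = λ_m ν_m with λ_0 ≤ λ_1 ≤ λ_2.

If λ_0 is much smaller than λ_1 and λ_2, the neighbors (k = 5 in LOAM) are treated as a plane with normal ν_0; if λ_2 is much larger than λ_1 and λ_0, they are treated as a line with direction ν_2.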
The Gaussian mixture model (GMM) [18] is a continuous distribution function method; it adopts multiple Gaussians and regroups them with different weights. A multilayer tree structure [19] can fuse flat areas into one Gaussian or decompose a complicated area into several Gaussians. In addition to these explicit function representations, implicit methods are relevant to PCD applications. A moving least squares (MLS) surface is defined in [20]; it is a C-infinity smooth surface generated from a raw PCD. An implicit version was defined in [21], which represents the distance of a location to a surface composed of neighboring points. Based on the aforementioned studies, the real LiDAR emitting and receiving structure is considered to build a laser point uncertainty model; the measured points are then added to the map and fused together. Our study aims to improve the real-time estimation accuracy of mobile robots. Compared with 3D reconstruction, sacrificing some of the detail of complicated areas and focusing on the main-direction constraints is advantageous for SLAM. Nearby LiDAR points are dense, whereas remote points are sparse. Leaves, trees, and other irregular objects are not suitable for pose estimation, meaning that their fused uncertainties are higher. Therefore, the Gaussian method is preferred, the uncertainties of the map points are taken into account, and these points are used entirely within a plane or line model.

Entropy-Based LO Method

Since 2020, some research has applied the entropy concept from information theory [22] to SLAM systems to improve robustness and accuracy. [23] proposed sub-matrix selection by choosing a scoring metric for VO. It models the estimation as a linear system and seeks the best subset under the Max-logDet metric. With this metric, good feature selection becomes an NP-hard problem, so a lazy greedy algorithm is designed to determine the maximal submatrix. A continuation of this work [24] focused on selecting good poses for graph optimization. In [25], the points with the most mutual information were selected, and the metrics were similar. MLOAM [26] adapts and applies entropy to a multi-LiDAR setting: two LiDAR sensors are set at diverse angles to cover a wide area, a greedy method is designed, and the two LiDARs run in real time with satisfactory accuracy. Compared with the aforementioned studies, the most novel contribution of our proposed theory is that we provide an analytical demonstration of why the selected points obtain higher accuracy. Another advantage is that the proposed theory is endogenously explainable, being derived from the singular value decomposition (SVD)-based registration problem [27]. Finally, the sensitivity and uncertainty are modeled in a linear form, avoiding the calculation of a sub-matrix metric with a greedy method.

NOTATIONS AND PRELIMINARIES

Before introducing our theory, an interpretation of the ICP registration problem aids understanding. The SVD method is used as the standard solution. Zero-mean normalization is applied to decouple the problem into calculating the rotation (first step) and the translation (second step); that is, R is solved in SO(3) space and then the problem returns to SE(3) space to calculate t. This has been known as the Wahba problem [28] since 1965, or rotation search [29] in the recent robotics community. Assume that a no-disturbance point set (source) is P = {p_i^*}, i = 1, . . .
, N and its corresponding no-disturbance point set (target) is Q = {q * i }, i = 1, . . . , N . The standard L 2 norm point-to-point registration problem is expressed as follows: where R is a 3 × 3 rotation matrix and the proof of Eq. (1) is provided in Appendix A. By linearizing the rotation parameters from the Lie group manifold to its corresponding location in the tangent vector space (i.e., φ ∧ R ∈ so(3), φ R ∈ R 3 Lie algebra), the problem becomes a linear least-squares problem. Accordingly, it is solved using the Gauss-Newton [30] or Levenberg-Marquardt [31] techniques. The optimal rotation is found as the singular vectors V and U regroups of H. Any two rotation matrices, L and K, and their corresponding axis angles (rotation vector), φ L = θ L ω L and φ K = θ K ω K , as shown in Fig. 5, are connected by the exponential map from the Lie group to the Lie algebra. θ L and θ K are the angles (scalar), ω L and ω K are axes (3 × 1 vector), φ ∧ denotes the symmetric skew matrix of vector φ. skew-symmetric matrix and corresponding vector satisfying The black sphere is the SO(3) space Lie group, and the blue grid is the so(3) space Lie algebra, which is tangent to the expansion location. exp and log mappings connect each other. This enable gradient descent-based optimization algorithms to work. To measure the difference between the two rotation matrices, R * and R i in Fig. 6, the Riemannian metric distance [29] under the Frobenius Norm is used. It is defined as that satisfies ∆R = R * T R i . This distance is the length of the shortest geodesic curve connecting the two rotation matrices. The defined symbols are summarized in TABLE 1. . When sensors provide redundant observations, there exist many estimated rotation matrices R i . The method of calculating the smallest fitting error rotation matrix R * is a fundamental optimization problem. This phenomenon is common in a LO or LIO, which requires the sum of Riemannian distances to be the smallest. Specifically, if R * is the optimal, it must be close to all valid R i . SENSITIVITY AND UNCERTAINTY This section proves that the proposed sensitivity and uncertainty theory-based selection scheme achieves the global statistical optimal pose estimation accuracy. The optimum is statistical because the analyses are based on probability distributions. In one-shot sampling, a point with a higher uncertainty may be more accurate than other points because the uncertainty is only a probability distribution description. Therefore, multisampling and calculated expectations are superior. For brevity, we simplified the enhancing accuracy issue as a point registration problem in SO(3). Assuming the ideal condition, there exists a ground truth rotation R * that satisfies the no-disturbance condition for N ≥ 4 points: Consider small disturbances on source points p * i as real LiDAR noise, which may occur during capturing or algorithm drift. On SO(3), a dynamic rotation matrix L i describes this error, p i = L i p * i , which lies on the 3D manifold surface in Fig. 9. The same disturbance K i was added to the target point q * i . We defined this simple uncertainty model as the only one for the inference. The uncertainty model details are described in Lemmas 2 and Fig. 9, which do not hinder understanding at present. Assuming the selection M point pairs in a certain rule, M depends on the number of points participating in the rotation calculation, that is, at least 4 points. 
Therefore, M controls the algorithm to cover all the possible conditions, satisfying 4 ≤ M ≤ N . The optimal rotation, R , was calculated using Eq. (1): If this selection scheme is optimal, the Riemannian distance between R * and R is very close. Specifically, by exchanging an arbitrary one-point pair inside M with that outside of N − M , the optimal rotation is R . where {p }, {q } and {p }, {q } are at least one point pair that are different. A conjecture must then be established to illustrate why the selection scheme is optimal. Conjecture 1 (Closest Riemannian Distance). The remainder of this section demonstrates Conjecture 1, and the entire demonstration process is shown in Fig. 7. The defined symbols are summarized in TABLE 2 Initially, the relationship between R * and R should be established in Theorem 1, and then their distances are calculated. This is practicable for analyzing the point disturbance influence on the pose estimation result. Next, the Riemannian distance between R * and R is expanded. Theorem 2 aims to identify the elements that affect the result exactly. Thus, the definition of sensitivity and uncertainty properties is motivated here. Theorem 2 (Riemannian Distance). the four terms that satisfy Proof 2 (Riemannian Distance). The proof of Theorem 2 is provided in Appendix B. The core is the utilization of the Baker (Campbell) Hausdorff formula [32] and exponential mapping expansion. Thereafter, we acquired six terms, and the last two terms are always zero. Note that, in Theorem 2, A and B have additional concise representations. Therefore, we have written Remark 1 to illustrate this for better comprehension. C and D are dynamic, depending on the included angle of these two disturbance rotation vectors. Subsequently, the dynamic properties of the last two terms, C and D can be clarified from another perspective. In one-shot sampling, a point is captured at a specific position where it must be located. Although it belongs to a specific distribution, the one-shot sampling is random. This phenomenon results in Eq. (10) in Conjecture 1 is impossible. Therefore, we considered comparing the expectations of the Riemannian distance to solve this problem. In the next theorem, after double integration throughout the disturbance space, C is only related to θ K θ L and D is zero. Eventually, the comparison continued. Theorem 3 (Expectation of Riemannian Distance). The expectation of the Riemannian distance is Proof 3 (Expectation of Riemannian Distance). The proof of Theorem 3 is provided in Appendix C. Sensitivity and uncertainty have been clear perspicuities. They are defined from Theorem 3 with θ K and θ L . We write these two properties in Lemmas 1 and 2. Lemma 1 (Sensitivity). Points that are distant from the center of a LiDAR sensor undergo more changes when the same rotation is applied. Thus, in point-to-point registration, the sensitivity is defined by a point's norm. The point (i.e., p * i ) sensitivity in Fig. 8 is defined by Lemma 2 (Uncertainty). A small disturbance on SO(3) can be described as a small rotation matrix L i , which is equal to a circular uniform distribution with radius h ∈ (0, ). h is a scalar variable and is the distance to the maximal far location. The point (i.e., p * i ) uncertainty in Fig. 9 is defined by because h 2 indicates the disturbance amplitude, and its expectation integration E is where α denotes the round integration of p . Fig. 9. 
Small disturbance on SO(3) can be described as a small rotation matrix L i , which equals to a circle uniform distribution whose radius is h ∈ (0, ). h is a scalar variable, and is a given distance to the maximal far location. Proof 4 (Conjecture 1 Closest Riemannian Distance). As shown in Fig. 10, for a specific point pair (i.e., index i), according to the law of sines, the relationship between sensitivity and uncertainty is established. There exists Fig. 10. According to the law of sines, the relationship between sensitivity and uncertainty is established. When M point pairs have diverse disturbances, R and R are dynamic owing to the specific disturbances. Fortunately, by solving Eq. (8) (R ) and (9) (R ) are based on Lie algebra, a linear space. Similar to the rotation search in Fig. 6, this linear property means argument every M point pairs data around their locations satisfying where R i is the optimal rotation estimation for every point pair (i.e., p i and q i ). Although R i cannot be solved using only one point pair, the influence of this point pair on the final result can be quantified using Eq. (22). Because the same point pair can be aligned, their sensitivities are equal. Considering the exchange point pairs, E(Riem(R * , R )) and E(Riem(R * , R )), the different parts are comparable. Subsequently, the two equations in Eq. (24) by substituting Eq. (18) and (19). Considering Eq. (17), θ L , θ K , θ L , and θ K are the small disturbances. The term θ 2 K θ 2 L is of fourth order. The main related terms are of second order. Q.E.D. Therefore, the demonstration is terminated at the expectation comparison because the dynamic disturbance parts make direct comparison impossible. Therefore, our selection scheme was statistically optimal. In the next section, we define more complex, close-to-reality sensitivity, and uncertainty models to describe the real LiDAR measurement points. ENHANCE LIDAR ODOMETRY ACCURACY This section describes the practical application of our theory. The complete procedure is shown in Fig. 11. An outline details the selection scheme. The inputs included map points, LiDAR measured points, and an initial pose available from a uniform motion model or IMU. Subsequently, we used an octree to find neighbors that established the closest matches. Because the measured points are classified as surf and corner (plane and line), the algorithm computes the sensitivities and uncertainties separately. Finally, we sorted all residual terms by sensitivity and uncertainty scores, stopped at a threshold, and sent them to the nonlinear solver to derive the optimal pose. This section first presents the method for calculating the sensitivity model. It uses a Taylor expansion and eigenvalue projection tool to decouple residuals into six dimensions depending on the type of point-to-plane and point-to-line residuals. The second section presents the calculation of the uncertainty model. The laser scan beam and geometry patterns are analyzed to describe the uncertainties in this process. The third section explains the final selection standard, which comprehensively considers the influences of sensitivity and uncertainty. Fig. 11. Outline details the selection scheme. The inputs include map points, LiDAR measured points, and an initial pose available from a uniform motion model or IMU. Subsequently, we employed an octree to find neighbors that establish the closest matches. 
Because measured points are classified into surf and corner (plane and line), the algorithm computes sensitivities and uncertainties separately. Finally, we sorted all residual terms by sensitivity and uncertainty scores, stopped at a threshold, and sent them into the nonlinear solver to derive the optimal pose. Sensitivity model To satisfy the assumption of an infinitesimal rotation and translation, the linearization error approaches zero. 5.1.1 Point-to-plane distance A LiDAR measured point p i and the corresponding map point q i , which is defined as a point on the plane. The normal vector is n i , as shown in Fig. 12(a). The error of the i-index residual-term point-to-plane distance is and e pl i is scalar. Residual sensitivity describes the r and t on e pl i . Thereafter, we used the Jacobian tool and linearized rotation to calculate this property. Point-to-line distance A LiDAR measured point, p i , and the corresponding map point, q i , which is defined as a point on the line. Its pointing direction is the unit vector n i , as shown in Fig. 12(b). Before forming the distance, a new vector d i should first be defined. where d i denotes a 3×1 vector. Its norm is the parallelogram area of vectors p i to q i and p i to q i + n i . Its direction was orthogonal to the plane of the two vectors. Because the norm vector n i is a unit vector, the number of areas is exactly equal to the distance. The error in the i-index residual term point-to-line distance e li i is defined as This differed from the point-to-plane distance. First, d i is derived as where J di is a 3 × 6-matrix. Moreover, Thus, Hessian matrix H di = J T di J di is defined. Therefore, point-to-line e li i is a quadratic form of the optimization parameters, different from the linear form in the point-toplane distance. Direct decoupling into six dimensions is impossible because the partial derivatives of the quadratic function approximating ∆r = 0 and ∆t = 0 are consistently zero. Thus, we have focused on the growing gradient in a small region. The Hessian matrix was projected onto the six axes. Every eigenvalue with vectors was projected onto the j-index axis. They were regrouped in linear form. Uncertainty model Before introducing the uncertainty model, the accuracy and variance should be defined. For the standard variable θ, the accuracy is ∆θ. As shown in Table 3, when the error associated with the variable θ is defined with distinct distributions, the variance is different. Laser scan beam Based on a multibeam laser scanner system [13], [15], [16], the rotation R sl is from the laser coordinate l to the scanner coordinate s. Typically, a mechanical spinning device, which creates a fixed laser to a circular scanner, as shown in Fig. 13(a). where α is the azimuth angle and ω is the elevation angle of the laser beam channel. As illustrated in Fig. 13(b), a laser diode stack emits three light beams. They fall on the environment surface and are reflected in the LiDAR observation window, as shown in Fig. 13(c). LiDAR records the emission time and the most intense time to calculate depth. Owing to the observation window, the laser depth can be simulated as a divergent beam. The true location can lie anywhere within the beam footprint. According to the manual, the Velodyne Puck (VLP-16) claims ∆z l = 3 cm. The horizontal and vertical divergence angles of the rectangular window were δ h = 3 × 10 −3 rad and δ v = 1.5 × 10 −3 rad. 
Therefore, assuming that point is uniform in this region, Following the self-rotation, a laser scan line was formed, as shown in Fig. 13(d). Every elevation angle ω was carefully calibrated and rectified; thus, σ ω = 0. For the azimuth α, the manual states that the rotation angular resolution is 0.01 • . All studies in [13]- [16] assumed that where ∆α = 0.005 • is the half resolution. The LiDAR parameters and coordinates are shown in Fig. 13(a). Because self-rotation is nonlinear, R sl (α, ω) must be linearized, and uncertainty propagation works. Finally, uncertainty is a matrix. In laser coordinate l, it is a 5 × 5 matrix Σ l . The scanner coordinate s is a 3 × 3 matrix Σ s . They are connected by uncertainty propagation as follows: where J R sl (α,ω) denotes a 3 × 5 matrix. This is derived from the first-order Taylor expansion formula of R sl (α, ω). Σ l is generated as a diagonal matrix from the individual sources: These variable variances have been discussed previously, and some can be found in the LiDAR sensor manual. Owing to inhomogeneous noise, sparse density, and missing data in LiDAR sampling [33], pose estimation typically employs plane and line patterns. Several studies [8], [11], [17], [34], [35] have minimized the alignment distance. The LO baseline LOAM [17] uses five neighboring points to model the plane or line shown in Fig. 14(a) and 14(b). Applying PCA technique, eigenvalues λ 0 < λ 1 < λ 2 and eigenvectors ν 0 , ν 1 and ν 2 are calculated by applying the PCA technique. These correspond to x,y, and z dimensions. This process essentially involves modeling the surface as a 3D Gaussian ellipsoid. Geometry pattern We should comprehensively consider the influences of both current LiDAR measured information uncertainties (laser scan beam) and history-map model uncertainties (geometry pattern). As shown in Fig. 15(a), the prior only considers modeling these map points as a plane. After adding the information of every point uncertainty, although the posterior becomes slightly fat, this fusion result indicates that this model is sufficiently good for pose estimation. In Fig. 15(b), considering every point uncertainties, the posterior becomes thick in the main direction, and this fusion result is bad. Because our purpose is to model uncertainties in registration, the main error direction uncertainties are modeled using the sigma-point transform technique [36]. The points were resampled around the ellipsoid to infer the posterior Gaussian distribution. The distances of these points to the mean are one sigma. Similar to minimizing the Kullback-Leibler divergence [37] between two Gaussians, we provide a simple fusion method in which the eigenvalue along the registration direction is employed to reflect the disparity: where Φ ei is a scalar that evaluates uncertainty. λ 0 isou is the smallest eigenvector of the source point distribution, and λ 0 itar is for the target. Sort by score Motivated by Eq. (21), combining the sensitivity and uncertainty models from Lemmas 1 and 2 into a score, and judging the residual influence on the pose estimation accuracy. In practice, using Eqs. (29), (33), and (41), the score for a residual can be derived as where the score Ψ ei is a 6 × 1 vector that corresponds to three rotations and three translations. Next, we performed score sorting and selected residuals from the top big score in every six dimensions, in parallel. The repeated terms were recorded only once. 
Because the dislocation match and geometry assumption (plane line) cause four point pairs to be unstable, we set a threshold parameter to judge the stop rule: (1) the selected residual amount reaches 200 per dimension and (2) the residual score decreases to 10% of the maximal. EXPERIMENTS We used simulation, benchmark, and our captured real data to introduce the experiments, which were segmented into three parts. The first part is a two-frame point cloud registration simulation (Section 6.1), which controls the noise amplitude in the measurements and models. This verifies the validity of the residual selection scheme in a controlled environment. The second part is the KITTI benchmark [4] comparison (section 6.2), which aims to prove the selection's general effectiveness in decreasing time cost and improving pose accuracy. The third part comprises our captured real indoor and outdoor data (Section 6.3). It contains two types of scan-mode LiDAR and inertial measurement unit (IMU) data. This part proves our method's validation in both the LO and LIO algorithms and is also applicable for different LiDARs. Finally, the IMU is used only in LIOmapping [34] for comparison purposes, which is unnecessary for our proposed algorithm. Therefore, the sensitivity-and uncertainty-theory-based residual term selection scheme achieved significant improvements in accuracy. It exhibits real-time performance with fewer residual terms and lower computational costs in nonlinear optimization. Our codes were implemented in C++. The program was executed on a desktop computer with hardware parameters of a six-core CPU AMD 2600x, 48-GB RAM, and an Nvidia RTX 2070 GPU. Simulation To evaluate the proposed theory, in Algorithm 1, a two-frame (source and target) registration simulation is implemented. The ground truth (gt) transformation and LiDAR measured points (source) were randomly generated. Every point was randomly allocated to a specific pattern model, plane (three points), or line (two points). Next, the gt pose was applied to the source points to generate the target points. Subsequently, the disturbances increase. Finally, theory-based and random selection methods were applied to solve the registration. This was repeated 100 times, and then an average translation error was derived. A comparison between the two methods reveals that the proposed method is superior, as shown in Fig. 16. The resulting curves are presented in Fig. 16. The disturbance amplitude increased along the horizontal axis. When the disturbance is zero, the data associations are accurate and do not change during the optimization. The proposed and random methods converge to zero. The error in the proposed method gradually increased as the disturbance amplitude increased. The random method probably selected large-error residual terms. The proposed method sorted all the terms; thus, the selected terms were optimal. This simulation demonstrated the influence of the proposed theory on improving pose estimation accuracy. KITTI benchmark The KITTI benchmark is a well-known autonomous driving benchmark [4]. It includes a Velodyne LiDAR (64 scans), two gray cameras, and two color cameras. The GPS and IMU were used for the gt. It provides 11 sequences with ground truths in urban, city, natural, and highway environments, and has been widely used for VO and LO evaluation. A few comparison algorithms are introduced in this section. ALOAM is a well-known advanced c++ realization of the LO baseline LOAM [17]. LOAM is now a closed source. 
Several other LO/LIO algorithms have been modified, such as LeGOLOAM [8] and LIO-SAM [35]. We compared the original ALOAM with ALOAM-select, which was added to our proposed selection scheme in front of the mapping Algorithm 1: Two-frame (source and target) registration simulation Input: input parameters: disturbance amplitude Da, residual number Rn Output: output results: translation error in selection t sel and random t ran 1 Source LiDAR points were randomly generated from 1 m to 100 m in 64-circles with different depths. /*generate points*/; 2 The PCD density satisfies the Velodyne HDL64 angle resolution Every point is randomly allocated to a specific geometry pattern model, a plane (three points), or a line (two points). The measured points remain associated with their models, and these matches do not change during optimizations; 3 for Da = 0;Da < 0.2;Da+ = 0.01/*disturbances*/ do 4 for Rn = 120;Rn <= 240;Rn+ = 60/*select*/ do 5 for int i = 0;i < 100;i + +/*multi-samples*/ do 6 The ground truth transformation T gt is randomly generated/*generate a gt pose*/; 7 Apply T gt to the source LiDAR points to generate target points/*apply pose*/; 8 Apply disturbance Da to the measurements or models/*apply disturbances*/; 9 The proposed theory-based method selects Rn terms and calculates the transformation result T sel i /*selection method*/; [26] is another relevant work in this field that has a residual selection process. Although it was designed for a multi-LiDAR system, we modified it for one LiDAR. In particular, MLOAM (we modified) and ALOAM-select were only compared in a standard benchmark. Pose accuracy The first test directly utilized selection in the ALOAM mapping thread, indicating that our selected residual terms are a subset of the original code used. The results are summarized in TABLE 4. On average, the proposed method employs fewer planes and lines for optimization than the original method. Although the residuals ALOAM-select used were a subset of the original ALOAM, improvements were achieved in seven sequences. For the other four sequences, the proposed method was not superior. Nevertheless, these four sequences fell by approximately 0.05%, including 5 cm drift over 100 m. We believe that waiting to be selected as a feature set restricts the improvement in accuracy. In particular, the detected feature point set of ALOAM was insufficiently large for our selection. We modify the feature detection parameter of the original ALOAM in the next test to illustrate our hypothesis. For the second test, we used twice the number of potential feature points for selection (ALOAM-select2). As summarized in TABLE 5, the accuracy improves by approximately 20 cm per 100 m, with advancements in ten sequences. Moreover, only approximately half of the planes and lines were used to obtain this result. These two tests demonstrated the validity of our theory for improving accuracy. To identify shortages, in the second test sequence 10, the checking of the LiDAR frame points is shown in Fig. 17. The car in this sequence traverses a wild-field road with bushes on both sides. In this environment, LiDAR observations were restricted to a local region. Our calculations yielded similar results. The proposed algorithm trades off the robustness to achieve accuracy. Error matches have a stronger influence; thus, falling behind 7 cm is possible in this extreme environment. MLOAM [26] was designed for multi-LiDAR systems. The results are compared on KITTI, as summarized in TABLE 6. 
In this comparison (TABLE 6), the proposed method falls behind only on Sequence 02; on the other sequences it achieves better results. MLOAM's selection scheme relies on manually defined prior information, solves a max-log-det metric, and then keeps the corresponding residuals. Our proposed theory instead considers the sensitivity and uncertainty of the sensor data, which is closer to the natural working of an LO: more accurate measurements and map models are favoured, which improves accuracy while avoiding the computation of matrix determinants. In sequence 02, the LiDAR passes through a crossroad with fast turning, and the proposed method drifts significantly at this location; from that point onward the estimate is considerably misled, producing a large error over the total path. MLOAM was originally intended for multi-LiDAR setups and we modified its code to run with a single LiDAR. For fairness, and because of limits on article length, the comparisons in the following sections focus on single-LiDAR algorithms.

Time Cost
To verify the effect of the selection step on the overall LO runtime, the time costs of the main components are shown in Fig. 18. Compared with feature point detection (green) and residual optimization (red), our selection process (black) costs less than 15 ms. The optimization step accounts for a large proportion of the total time, and it becomes much faster when the number of selected residuals per dimension decreases from 500 to 50. A few coding tricks explain why the selection step itself is so cheap; if an LO/LIO algorithm adopts our selection, its accuracy can be improved while computation time is saved. First, not every residual has to participate in the sorting: while the scores are being computed, the maximal score in each of the 6 dimensions is recorded, and only residuals whose score exceeds a ratio threshold of this maximum (we adopt 60%) are retained for sorting. Second, the point-to-plane and point-to-line scores are independent across the 6 dimensions, so multithreaded parallel operations accelerate the sorting. The optimization time is governed by the number of residual terms; using our selected residuals instead of all obtainable residuals reduces that number significantly, while the selection process itself remains lightweight. The pose estimation accuracy is simultaneously improved, as shown in the next section, and we report residual counts to illustrate the computational cost.

Online captured scenario
We operated our sensors in real environments to examine the method in detail. The captured scenarios are shown in Fig. 19. The indoor recording consists of walking inside a building with a long corridor; the outdoor path is approximately 1.1 km long. By pasting a landmark on the ground, both tours start and end at exactly the same location, and each path is captured five times for fairness and credibility. Two collection devices were used, as shown in Fig. 20(a): a Velodyne LiDAR (Puck VLP16) with an IMU (Xsens MTi-100), and a Robosense Blind Spot 32 LiDAR. As shown in Fig. 20(b), the two LiDARs have completely different scan modes: the VLP16 has a 360° horizontal field of view, whereas the BS LiDAR covers a half-sphere window with 32 scans. We apply our method to ALOAM and LIOmapping, yielding ALOAM-select and LIOmapping-select, respectively, and use the same strategy and parameters for both: (1) stop when the number of selected residuals reaches 200 per dimension; (2) stop when the residual score decreases to 10% of the maximum.
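The selection bookkeeping described above (the 60% pre-filter, independent per-dimension sorting, and the two stop rules) can be outlined in a short sketch. This is an illustrative Python version, not the authors' C++ implementation, and the residual scores are assumed to be given.

import numpy as np

def select_residuals(scores, max_per_dim=200, score_ratio=0.10, presort_ratio=0.60):
    # scores: (N, 6) array, one sensitivity/uncertainty score per residual and
    # per pose dimension; returns the selected residual indices per dimension
    selected = []
    for d in range(scores.shape[1]):      # the 6 dimensions are independent
        col = scores[:, d]                # (could be sorted in parallel threads)
        col_max = col.max()
        # pre-filter trick: only residuals above 60% of the per-dimension
        # maximum take part in the sorting at all
        cand = np.flatnonzero(col >= presort_ratio * col_max)
        order = cand[np.argsort(col[cand])[::-1]]   # best candidates first
        keep = []
        for idx in order:
            if len(keep) >= max_per_dim:            # stop rule (1): 200 per dim
                break
            if col[idx] < score_ratio * col_max:    # stop rule (2): 10% of max
                break                               # (matters if the pre-filter is loosened)
            keep.append(idx)
        selected.append(np.asarray(keep))
    return selected

# toy usage with random scores for 5,000 candidate residuals
sel = select_residuals(np.random.rand(5000, 6))
print([len(s) for s in sel])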
For convenience, we focus on pose accuracy measured by the loop-closure error.

VLP16 LO indoor
The indoor loop-closure errors are summarized in TABLE 7. Because the LiDAR range is long enough to measure the farthest wall, the drift stays within the centimeter level. Compared with ALOAM, ALOAM-select achieves better results in four of the sequences. In sequence 01, ALOAM's translation error is already only about 1 cm, and we believe the accuracy with which the start and end locations coincide limits this sequence. (Fig. 19 caption: the indoor environment is a building with a long corridor, 74 m long and 35 m wide; the outdoor path is approximately 1.1 km long; both capture tours start and end at exactly the same location and are executed five times for fairness.) BALM results are also listed in TABLE 7. BALM adopts the bundle adjustment concept from visual SLAM: it maintains a sliding window and adjusts the poses of the frames inside it, aiming to make voxels more compact, for example planes flatter and lines more slender. The same idea appears in eigenfactor [39], plane-adjustment [40], and π-LSAM [41]. The BALM code was originally designed for the VLP16 LiDAR. We applied our selection scheme in front of the decision about which voxels enter the optimization, and call the result BALM-select. Because the inside of the building provides many smooth wall constraints, the loop-closure errors are very small. Moreover, owing to the sliding window of the bundle adjustment, the indoor environment is not large relative to the LiDAR range: current-frame observations may still be connected to early information, which strongly constrains the sensor pose. Consequently, on the indoor data the accuracy gap for BALM is not as obvious as that for ALOAM.

VLP16 LO outdoor
The outdoor loop-closure errors are summarized in TABLE 8. The long outdoor path demonstrates the superiority of ALOAM-select and BALM-select. This path is an approximately 1.1 km long loop, shown in Fig. 25. The proposed method achieves almost twice the accuracy in all five sequences. These results further support the analysis of the proposed method on the KITTI benchmark: under large-scale conditions, the proposed method achieves more significant gains. The wall mapping quality is illustrated in Figs. 23 and 24. In Fig. 23, points in the blue rectangle are the building wall reconstructed by ALOAM, points in the green rectangle are generated by ALOAM-select, and points in the orange rectangle are the LIOmapping result. Fig. 24 shows the LiDAR's moving path and the scanned wall. The wall built by ALOAM is thick, indicating that the estimated LiDAR poses drift. The wall generated by ALOAM-select is as thin as that of LIOmapping, indicating higher accuracy; here, LiDAR alone reaches the accuracy level of LiDAR combined with an IMU.

VLP16 LIO indoor
For LIO the loop-closure error is small because the indoor path is much shorter than the outdoor one; the results are summarized in TABLE 9. The path starts from a hall from which the front of the corridor can be observed, and the LiDAR repeatedly observes the surrounding walls. The results of LIOmapping and LIOmapping-select are similar; we presume that the small region restricts the improvement in accuracy. The outdoor results are summarized in TABLE 10. The average accuracy improvement of LIOmapping-select is approximately 0.5 m, so applying our theory to LIO systems is also worthwhile. After fusing the IMU data, the loop-closure error of the LIO algorithm is significantly smaller than that of the LO.
However, the start- and end-location data show that drifts are still visible both before and after adding the selection. Because LIOmapping has no loop-closing function, this error cannot be eliminated. With the selection added, LIOmapping-select tends to use far-away observations, which strongly constrain the sensor pose; thus, the estimation accuracy is higher on average.

Blind spot LO indoor
Another scan mode, the Robosense blind spot (BS) LiDAR, is shown in Fig. 20(a). Its view is a half-sphere with 32 laser scans from 0° (horizontal) to 89° (vertical). Its direction is set toward the ceiling, which the VLP16 is less likely to scan. To fit the ALOAM code, we modified the point allocation and feature detection functions and call the result BS-ALOAM. Upon entering the corridor, the BS LiDAR was tilted toward the front to measure more points. As shown in Fig. 26, the BS LiDAR point distribution is markedly different from that of the VLP16: roughly 92% of the laser points lie in a small surrounding region within 10 m, whereas the more distant points are the more useful ones. With selection, BS-ALOAM picks more suitable points. The results are summarized in TABLE 11, and the advantage of the proposed method is clear: the accuracy of BS-ALOAM-select improves from the meter to the decimeter level. A building map is shown in Fig. 27. When the sensor returns to the hall, the floor and ceiling map of BS-ALOAM is distorted because the pose drift is at the meter level, whereas BS-ALOAM-select maintains a considerably lower drift. Because of the BS LiDAR scan characteristics, distant points are extremely sparse while near points are dense; according to our theory, the estimation accuracy is therefore expected to be clearly lower than with the VLP16. The results summarized in TABLE 12 confirm this: the translation error exceeds that of the VLP16, and although some improvement is achieved, it is marginal relative to the remaining error. Thus, the BS LiDAR is unsuitable for outdoor SLAM applications; SLAM favours a LiDAR sensor capable of capturing distant points, which are more useful for estimation.

CONCLUSION
In this paper, we proposed a theory of LiDAR point sensitivity and uncertainty to enhance LiDAR odometry accuracy, and we showed that our selection method is statistically optimal in a global sense. To realize this, LiDAR measurement uncertainties and their fusion into map models are calculated and residual sensitivities are analyzed; the scores are decoupled into six dimensions, and the algorithm then sorts and selects the residuals for optimization. The experimental results show that superior pose estimation accuracy is achieved, and the selection makes it possible to obtain high optimization accuracy while guaranteeing real-time performance. Owing to laser time-of-flight sensing and careful calibration of the rotary mechanism, the LiDAR uncertainty region does not grow as large as that of binocular cameras; the effect of our theory is therefore more distinct for LiDAR. The problem of data association has not yet been addressed: this work adopts the traditional data association of LO, i.e., the nearest-neighbour principle of ICP initialized by a uniform motion model or the IMU. Because this study concentrates on improving pose estimation accuracy, a uniform motion model is sufficient for walking or low-speed driving. The proposed theory attempts to select residual terms with small uncertainties and high sensitivities.
This fundamentally reduces the robustness of the pose estimation while increasing its accuracy, which is the reason for the trade-off between robustness and accuracy. To improve pose estimation accuracy from another perspective, our next objective is to investigate data association.

B. THEOREM 2: RIEMANNIAN DISTANCE
Utilizing the angle-axis relation, applying Eq. (4) to Eq. (11) yields Eq. (44). For R ∈ SO(3) and u ∈ R^3, the Lie group adjoint property [1] gives Eq. (45). Substituting Eq. (45) into Eq. (44) gives Eq. (46). Because φ_L and φ_K are small, the left or right Jacobian approximation is not suitable; instead, the BCH formula [32] is applied directly to the two matrices X = φ_X^∧ and Y = φ_Y^∧, keeping the first-order terms (Eq. (47)), where {X, Y} is the Lie bracket defined in Eq. (48). Substituting Eqs. (47) and (48) into Eq. (46) yields Eq. (49). The Frobenius norm can be expressed through the matrix trace (Eq. (50)):
||X + Y||_F^2 = tr((X + Y)^T (X + Y)) = tr(X^T X) + tr(Y^T Y) + 2 tr(X^T Y).
Applying the matrix trace property of Eq. (51) to Eq. (50) gives Eq. (52), satisfying Eq. (53). Eq. (53) is composed of six terms, but the last two terms are consistently zero. To demonstrate this, a new rotation vector is defined in Eq. (55); its structure is shown in Fig. 28 (caption: the new rotation vector φ_G can be regarded as a rotating disturbance from the q* coordinate to the p* coordinate), where α_p and α_q are the possible angles of p and q, respectively, and α_p is illustrated in Fig. 29(a). Fig. 29(b) shows the further rotation vector φ_H = φ_G × φ_L, where φ_G = R* φ_K and β is the included angle of the cross product between φ_G and φ_L. On the so(3) space these two vectors move on circles, and a double integration covers all possible arrangements. Terms A and B are discussed in Remark 1, Eq. (16). Considering D, the inner integration over α_p can hold R* φ_K fixed, and D becomes a zero vector. Considering C, a new vector is formulated as a cross product, as shown in Fig. 29(b), and the integration of C becomes φ_H^T φ_H; moreover, it involves only one angle, α_p, in the LiDAR case. Performing the inner integration and exchanging the integration variable from α_q to β, as shown in Fig. 29(b), shows how the included angle β enters φ_G × φ_L.
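The norm-to-trace expansion used in the derivation above is an exact matrix identity, and it can be checked numerically. The following small sketch (illustrative Python, not part of the original appendix) verifies it for generic small rotation vectors.

import numpy as np

def hat(v):
    # skew-symmetric (hat) matrix of a 3-vector
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(0)
X = hat(rng.normal(scale=0.01, size=3))   # small rotation vectors, as in the text
Y = hat(rng.normal(scale=0.01, size=3))

lhs = np.linalg.norm(X + Y, 'fro') ** 2
rhs = np.trace(X.T @ X) + np.trace(Y.T @ Y) + 2 * np.trace(X.T @ Y)
print(np.isclose(lhs, rhs))   # True: the identity holds for arbitrary matrices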
2021-11-16T02:16:21.205Z
2021-11-15T00:00:00.000
{ "year": 2021, "sha1": "3ace25bd6033f86e5c26e01a6b3b6dc55d3cbd55", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "434f220a3a1750e889759de0256a57292781bcf4", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
245566231
pes2o/s2orc
v3-fos-license
GFR estimation is complicated by a high incidence of non-steady-state serum creatinine concentrations at the emergency department Background Acquiring a reliable estimate of glomerular filtration rate (eGFR) at the emergency department (ED) is important for clinical management and for dosing renally excreted drugs. However, renal function formulas such as CKD-EPI can give biased results when serum creatinine (SCr) is not in steady-state because the assumption that urinary creatinine excretion is constant is then invalid. We assessed the extent of this by analysing variability in SCr in patients who visited the ED of a tertiary care centre. Methods Data from ED visits at the University Medical Centre Utrecht, the Netherlands between 2012 and 2019 were extracted from the Utrecht Patient Oriented Database. Three measurement time points were defined for each visit: last SCr measurement before visit as baseline (SCr-BL), first measurement during visit (SCr-ED) and a subsequent measurement between 6 and 24 hours during admission (SCr-H1). Non-steady-state SCr was defined as exceeding the Reference Change Value (RCV), with 15% decrease or 18% increase between successive SCr measurements. Exceeding the RCV was deemed as a significant change. Results Of visits where SCr-BL and SCr-ED were measured (N = 47,540), 28.0% showed significant change in SCr. Of 17,928 visits admitted to the hospital with a SCr-H1 after SCr-ED, 27,7% showed significant change. More than half (55%) of the patients with SCr values available at all three timepoints (11,054) showed at least one significant change in SCr over time. Conclusion One third of ED visits preceded and/or followed by creatinine measurement show non-stable serum creatinine concentration. At the ED automatically calculated eGFR should therefore be interpreted with great caution when assessing kidney function. Introduction Assessment of kidney function plays a crucial role in the evaluation and treatment of patients. A change in kidney function can point to renal disease, which is associated with an increase in morbidity and mortality [1,2]. Timely therapeutic intervention may attenuate or prevent renal damage [3,4]. Furthermore, kidney function is essential for drug dosing, ensuring optimal efficacy while reducing potential toxicity [5,6]. Renal function is most often quantified as the estimated glomerular filtration rate (eGFR), calculated by the CKD-EPI formula using serum creatinine (SCr), age, gender, and race [7]. The CKD-EPI formula was developed in patients with chronic kidney disease and a stable kidney function. However, in patients with changes in renal function due to, for example acute kidney injury (AKI), it takes time before SCr has reached its new steady-state because the assumption that urinary creatinine excretion is constant is then invalid [8,9]. In these situations, the CKD-EPI is inaccurate and lags behind the true eGFR for up to 3 days [10]. As AKI is frequently seen at the emergency department (ED), CKD-EPI, as well as older formulas used to calculate eGFR such as MDRD, might often not adequately estimate the actual GFR at the ED. Dosing of renally cleared drugs is often based on KDIGO CKD-categories using a measure of eGFR such as the CKD-EPI. When SCr is not in steady-state, drug dosing based on the CKD-EPI may result in potential toxicity or underdosing. Indeed, up to 24% of admitted patients with AKI experience some form of adverse event caused by inadequate drug dosing [6]. 
In this study, we aimed to assess the incidence of non-steady-state SCr concentrations at the ED and therefore situations where the CKD-EPI is potentially unreliable. We hypothesized that a substantial number of patients who visit the emergency department and in whom a serum creatinine concentration is assessed have a serum creatinine concentration that is not in steady-state.

Study design and population (data extraction)
We performed a single-centre retrospective analysis, using data from the University Medical Centre (UMC) Utrecht, Utrecht, the Netherlands. We evaluated all ED visits between 2012 and 2019 from patients aged over 18 years. Data were extracted from the Utrecht Patient Oriented Database (UPOD) [11]. In brief, UPOD is an infrastructure of relational databases comprising data on patient characteristics, hospital discharge diagnoses, medical procedures, medication orders and laboratory tests for all patients treated at the UMC Utrecht since 2004. For each ED visit we extracted patient age, sex and hospitalization information. Additionally, all SCr measurements were extracted from 365 days before until 24 hours after each ED visit. SCr was measured by enzymatic colorimetric assay (Beckman Coulter, Brea, CA, USA). For each SCr, an eGFR was computed using the CKD-EPI formula [7]. Chronic kidney disease (CKD) was defined by the KDIGO 2012 criteria based on the estimated GFR [12].

Definition of serum creatinine not in steady-state
We defined the following three SCr measurements: the baseline SCr measurement as the most recent SCr measurement within a year before ED presentation (SCr-BL), SCr-ED as the measurement at ED presentation, and a subsequent SCr measurement during hospitalization (SCr-H1). Since very short time-intervals (e.g. 30 minutes) may obscure significant fluctuations in SCr, we defined SCr-H1 as the first measurement closest to 12 hours, and at least 6 hours and at most 24 hours, after the SCr-ED measurement. Significant fluctuation in SCr was defined as exceeding the Reference Change Value (RCV). The RCV represents the smallest difference between sequential laboratory results that represents a true change in the patient and can be calculated using the analytical coefficient of variation (CVa) and the within-subject biological coefficient of variation (CVi). Since SCr does not follow a normal distribution due to the underlying first-order elimination, we calculated the RCV using the log-method, with σ = √(ln(CVa² + CVi² + 1)) and RCV = exp(±Z·√2·σ) - 1, with Z = 1.96. With this method, we calculated the RCV (CVi = 5.95% and CVa = 1%), resulting in a significant increase of 18% and a significant decrease of 15% [13,14].

Ethics
This study was performed according to the Declaration of Helsinki and the ethical guidelines of our institution. The institutional review board of the UMC Utrecht waived the need for informed consent. Pseudonymized data were used for this study. Data collection and handling were conducted in accordance with European privacy legislation (GDPR).

Statistics
Risk of non-steady-state SCr was quantified with logistic regression. 95% confidence intervals for the absolute change in creatinine over time were calculated with smoothed quantile regression. All statistics and pre-processing were performed using the R environment (3.6.1). P-values below 0.05 were considered significant.
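For concreteness, the RCV computation described above can be reproduced in a few lines. The sketch below (illustrative Python, not the authors' R code) recovers the reported thresholds of roughly +18% and -15% from CVi = 5.95% and CVa = 1%.

import math

def rcv_lognormal(cv_i, cv_a, z=1.96):
    # asymmetric Reference Change Value via the log method
    # cv_i, cv_a: within-subject and analytical CVs as fractions
    cv_total = math.sqrt(cv_i**2 + cv_a**2)
    sigma = math.sqrt(math.log(cv_total**2 + 1))   # SD on the log scale
    up = math.exp(z * math.sqrt(2) * sigma) - 1
    down = math.exp(-z * math.sqrt(2) * sigma) - 1
    return up, down

up, down = rcv_lognormal(0.0595, 0.01)
print(f"significant increase: +{up:.1%}, significant decrease: {down:.1%}")
# -> approximately +18% and -15%, matching the thresholds used in the study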
Patient characteristics
Between 2012 and 2019 there were 120,652 visits from 69,579 unique patients who visited the ED (S1 Fig and Table 1). Of all visits, there were slightly more males (54%) than females (46%) and the average age was 53.8 years. Three not mutually exclusive groups were defined for analysis: visits with an SCr-BL and SCr-ED measurement (N = 47,540), visits with an SCr-ED and SCr-H1 measurement (N = 17,928), and visits with measurements at all timepoints (N = 11,054) (S1 Fig, S1 Table). (In Table 1, percentages are based on the total number of visits, and ED discipline was defined as the first discipline the patient visited during the visit.)

Incidence of serum creatinine concentration not in steady-state at the emergency department as compared to baseline
When SCr-ED was compared to SCr-BL, 8,794 visits (18.5%) showed a significant increase and 5,378 (11.3%) a significant decrease in SCr (Fig 1A). The median time between baseline and ED was 26.8 days, with an interquartile range of 8.4 to 83.0 days. The number of patients with a significant change in serum creatinine decreased when the time between SCr-BL and SCr-ED was longer (S2 Fig). 29.8% of these visits changed at least one CKD category (S2 Table).

Incidence of serum creatinine concentration not in steady-state at the emergency department as compared to follow-up measurements
Of the patients with both an SCr-ED and an SCr-H1, a significant increase was seen in 1,304 (7.3%) and a significant decrease in 3,839 subjects (21.4%) (Fig 1B). The median time between ED presentation and the subsequent measurement was 15.7 hours, with an interquartile range of 12.0 to 19.1 hours. 27.7% of these visits changed at least one CKD category (S3 Table).

Consistency of serum creatinine change over time
Next, we compared the consistency of SCr in visits with all three measurements available (SCr-BL, SCr-ED and SCr-H1, N = 11,054). The subsequent course after a significant rise in SCr between BL and ED is shown in Table 2. After a significant decrease in SCr between BL and ED (N = 1,016), 5.7% continued to decrease, 76.4% stabilised, and 17.9% showed a significant increase in SCr. Despite having a stable SCr between BL and ED (N = 6,045), a significant rise was seen in 6.3% and a significant decrease in 11.4% of these admissions. Taken together, more than half (55%) of the patients for whom SCr values were available at all three timepoints showed at least one significant change in SCr and were therefore not in steady-state.

Incidence of serum creatinine not in steady-state between medical specialties and CKD stages
We compared the incidence of non-steady-state SCr between different medical specialties and CKD stages. Visits with CKD stage 3a or worse were at a higher risk of non-steady-state SCr between SCr-BL and SCr-ED (p<0.001; S4 Table). Similarly, when comparing SCr-ED to SCr-H1, visits with CKD stage 2 or worse (except for CKD stage 5) were at a higher risk of non-steady-state SCr (p<0.001; S4 Table). Next, we compared the incidence of a non-steady-state creatinine between different medical specialties at the ED, as one might hypothesize that this phenomenon preferentially occurs in certain specialties. Although the percentage of non-steady-state creatinine concentrations differed between the specialties (Fig 2; S6 and S7 Tables), the incidence was substantial (22%-36%) in all specialties.

Discussion
Reliable estimation of the GFR at the ED is the cornerstone of assessing renal function and thereby essential for correct dosing of drugs that are renally excreted.
However, commonly used renal function formulas (CKD-EPI, MDRD) require SCr to be in steady-state to provide a reliable estimate of the GFR. We found that a third of all SCr measured at our ED were not in steady-state and that the CKD-EPI may therefore not reflect the underlying renal function. Faulty GFR estimates and CKD staging will affect clinical decision making and drug dosing regimens. Interestingly, there appeared to be an inverse correlation between the incidence of non-steady-state creatinine concentrations and the elapsed time since the last creatinine measurement before the ED measurement. We found a similarly high incidence of non-steady-state SCr across all specialities. The incidence of non-steady-state SCr and the reliability of the CKD-EPI at the ED have not been widely studied. Previous reports have estimated the incidence of AKI in the general ED population at between 3% and 25%, depending on the definition and the population [16,17]. One study that investigated non-steady-state SCr in patients admitted to the hospital after an ED visit found that nearly half of the visits had non-steady-state SCr during the entire hospital stay [18]. To our knowledge we are the first to report the high incidence of non-steady-state SCr at the ED, which raises serious concerns about the applicability of eGFR formulas like CKD-EPI at the ED. The high incidence of SCr not in steady-state at the ED not only has consequences for the interpretation of the CKD-EPI as a measure of kidney function per se, but also for the dosing of drugs with significant renal clearance, since this is often guided by CKD categories indexed by the eGFR. The CKD-EPI-based eGFR calculated during the ED visit and subsequent admission results in frequently changing CKD categories over time (27.7-29.8%). Moreover, an increase or decrease in CKD-EPI categories from baseline to ED was not associated with a subsequent increase or decrease in CKD-EPI categories from ED to H1. Although the changing CKD categories do reflect changes in underlying renal function, they do not adequately reflect the actual GFR and should therefore be used with caution for dosing of drugs with significant renal clearance. Most formulas that estimate the glomerular filtration rate assume a stabilized serum creatinine concentration and therefore only require the input of a single serum creatinine value [19,20]. Performing a consecutive SCr measurement provides insight into serum creatinine fluctuation. Moreover, this consecutive SCr can be used in alternative, published approaches to estimate renal function when SCr is not in steady-state [21]. These "kinetic" formulas have been applied in patients admitted to the intensive care unit and in patients after kidney transplantation, and calculate the eGFR by combining two SCr measurements with an estimate of the production and volume of distribution of creatinine [8,9,22]. Although physiologically interesting, these formulas have not been rigorously validated in patients at the ED. In the future, the use of thresholds calculated with the RCV may help identify patients with an SCr not in steady-state, in whom "kinetic" eGFR formulas may give a better estimation of the true underlying GFR. There are other instances where serum creatinine-based renal function formulas may be inaccurate.
These are, amongst others, situations that influence the production or excretion of creatinine independent of GFR, such as aberrant diet, pregnancy, skeletal muscle disease and drugs that influence tubular secretion of creatinine. This further stresses the limitation single serum creatinine concentration based renal function formulas to estimate renal function at the ED. It is of note that cystatin C based renal function formulas have been proposed to be less dependent on muscle mass and diet. However, current cystatin c based renal function formulas also require steady state serum concentrations of cystatin C to estimate renal function from a single serum cystatin C concentration. Strengths of this study are that we used a large dataset with a well-documented and unselected population. This allowed us to study the true incidence of non-steady-state SCr at the ED of our tertiary care hospital over different medical specialities and CKD stages. The detailed time annotation allowed us to not only study non-steady-state SCr between baseline and the ED measurement but also the subsequent measurement within 24h after admission. Furthermore, the use of a relational database such as UPOD ensures maximum completeness and integrity of the data, since it continuously stores laboratory and clinical data for every individual ED visit. This allowed us to perform our analyses on routine care data that is a valid representation how the CKD-EPI formula is used in clinical practice. This study has some drawbacks. This retrospective study only included ED patients in whom creatinine concentrations were determined, which introduced selection. Although this might lead to overestimation of non-steady state creatinine concentrations at the ED population as a whole, it does reflect the incidence of non-steady state creatinine concentration in the ED subpopulation where renal function estimation is deemed appropriate by the treating physician. Another drawback of this retrospective cohort study is the lack of invasive GFR measurements to assess actual GFR. Although currently no validated formula is available to estimate underlying renal function in patients with a non-steady state serum creatinine concentration at the ED, the simple notion that an increasing serum creatinine concentration causes the CKD-EPI to underestimate underlying renal function (and vice versa) is important and may be an incentive to better estimate renal function with timed urine collections. Finally, the current study was neither designed to show any adverse clinical consequences of the faulty GFR estimates nor to quantify potential benefits of improved GFR estimates. However, the abovementioned high percentage of adverse drug reaction due to inadequate dosing in patients suffering from AKI, suggests room for improvement. In conclusion, a third of the patients who visit the ED have non-steady-state SCr. Physicians should be aware of this when using the automatically provided CKD-EPI at the ED and should interpret the reported eGFR with great caution. Future studies should elucidate whether a more tailored GFR estimate (e.g. by using dynamic formulas or timed urine collections) improve drug dosing and/or clinical outcome. Supporting information S1 Table. Characteristics of the three emergency department (ED) sub-cohorts depended on availability of serum creatinine (SCr) measurement, not mutually exclusive. Percentages are based on the total number of visits per sub-cohort. 
ED discipline was defined as the first discipline the patient visited during the visit. ICU admission was defined as having at least one admission to the ICU during the hospital stay. (DOCX)
S2 Table. CKD staging changes between the baseline eGFR (CKD-BL) and the eGFR at the emergency department (CKD-ED). (DOCX)
S3 Table. CKD staging changes between the emergency department eGFR (CKD-ED) and the subsequent eGFR during the visit (CKD-H1). (DOCX)
S4 Table. Odds ratio for each CKD-EPI stage based on SCr-BL, compared with the G1 CKD-EPI stage, with respect to a non-steady-state serum creatinine (SCr) between SCr-BL and SCr-ED. (DOCX)
S5 Table. Odds ratio for each CKD-EPI stage based on SCr-ED, compared with the G1 CKD-EPI stage, with respect to a non-steady-state serum creatinine (SCr) between SCr-ED and SCr-H1. (DOCX)
S6 Table. Odds ratio for each emergency department (ED) specialism compared with the nephrology ED specialism with respect to a non-steady-state serum creatinine (SCr) between SCr-BL and SCr-ED. *: p-value < 0.001. (DOCX)
S7 Table. Odds ratio for each emergency department (ED) specialism compared with the nephrology ED specialism with respect to a non-steady-state serum creatinine (SCr) between SCr-ED and SCr-H1. *: p-value < 0.001.
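The odds ratios reported in the S4-S7 Tables come from logistic regressions of the kind described in the statistics section. A generic illustration follows (Python/statsmodels rather than the authors' R workflow; the variable names and data here are made up for the example):

import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical analysis frame: one row per visit, a 0/1 outcome for
# non-steady-state SCr and the baseline CKD stage as a categorical predictor
df = pd.DataFrame({
    "non_steady": np.random.binomial(1, 0.3, 500),
    "ckd_stage": np.random.choice(["G1", "G2", "G3a", "G3b", "G4", "G5"], 500),
})
X = pd.get_dummies(df["ckd_stage"], drop_first=True).astype(float)  # G1 = reference
X = sm.add_constant(X)
fit = sm.Logit(df["non_steady"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)    # odds ratio per stage relative to G1
conf_int = np.exp(fit.conf_int())   # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))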
2021-12-31T05:07:12.985Z
2021-12-29T00:00:00.000
{ "year": 2021, "sha1": "f5f38b76095344412397e4c325242539ba4ed15d", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0261977&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f5f38b76095344412397e4c325242539ba4ed15d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5319653
pes2o/s2orc
v3-fos-license
Hearing thresholds in adult Nigerians with diabetes mellitus: a case–control study Objectives To determine the prevalence, types and severity of hearing loss and associated factors in a hospital population of adult Nigerians with diabetes mellitus. Subjects and methods This study was a prospective hospital-based study conducted at the Otorhinolaryngology and Diabetic Clinics of the University of Nigeria Teaching Hospital (UNTH) Ituku-Ozalla, Enugu, for a period of 12 months. Consecutively presenting eligible adult diabetics and their age- and sex-matched healthy controls were recruited. Each case and control participant had clinical and otologic examination, followed by pure tone audiometry. Data were analyzed using descriptive and comparative statistics. Results There were 224 patients and 192 control participants. The patients comprised 112 males and 112 females (sex ratio=1:1), whose mean age was 47.6 years (range: 26–80 years). The prevalence of hearing loss was 46.9%. This comprised 43.8% sensorineural and 3.1% conductive hearing losses. The distribution of hearing loss by severity was mild 25.0%, moderate 15.6% and severe 6.3%. The controls comprised 96 males and 96 females whose mean age was 44.6 years (range: 25–79 years). The prevalence of hearing loss was significantly higher overall and by type (sensorineural hearing loss, conductive hearing loss) in cases compared with controls. Conclusion The prevalence of hearing loss among diabetic adults at UNTH, Enugu, is comparatively high. Hearing loss is predominantly sensorineural and often mild to moderate in severity. Routine audiometric evaluation of all adult diabetics at UNTH is recommended. Introduction Diabetes mellitus (DM) is a disease characterized by hyperglycemia due to absolute or relative deficiency of insulin. 1 Sustained hyperglycemia is associated with multisystemic complications and multiple end-organ damage. Currently, there is a worldwide pandemic of DM 1-3 and by extension, the inherent complications. The auditory apparatus is one of the vulnerable end organs in DM due to ischemic cochlear damage resulting from diabetic microangiopathy. 4-6 DM-related hearing loss is a major public health issue in both low-and middle-income countries and the developed economies. With the projected increase in the world's diabetes burden due to increased longevity and changes in lifestyle, its prevalence is bound to increase. It impacts adversely on the patients' quality of life and their capacity for independent living. Previous studies investigating hearing thresholds in DM, in Nigeria 7-9 and elsewhere, [10][11][12][13][14][15][16] have been dominated by descriptive cross-sectional surveys, often with widely variable results. The reported prevalence of hearing loss ranges from 0.0% to 90.0%. Consequently, the investigators conducted a hospital-based case-control audiometric evaluation of adult Nigerians with and without DM to determine the prevalence, types and profile of hearing loss and the associated characteristics. In addition to provision of valid comparative data, the generated data will assist public health policymakers and implementers, and otologic care providers in optimizing the quality of life of persons living with DM. Significance of the study DM is a systemic disease with multiple systemic complications and end-organ damage. DM-related hearing loss and its adverse impact in the quality of life is a major public health issue across the globe. 
The study would provide valid comparative data that will assist public health policymakers and otologic care providers in optimizing the quality of life of persons living with DM.

Subjects and methods
Background
Established in 1970 and located in Enugu, southeastern Nigeria, the University of Nigeria Teaching Hospital (UNTH) is one of the first-generation public tertiary health care facilities in Nigeria. UNTH provides undergraduate and postgraduate medical training and outpatient/inpatient clinical care, and undertakes research. At UNTH, the otorhinolaryngology (synonym: Ear, Nose and Throat [ENT]) department provides medical, surgical and audiometric ENT care, while a dedicated Diabetic Unit, in the hospital's Internal Medicine Department, provides inpatient and outpatient diabetic care. The UNTH's feeder population comprises inhabitants of Nigeria's southeast geopolitical zone and beyond. The study, conducted for 1 year at the ENT and Diabetic Clinics of UNTH, was a prospective case-control study of eligible diabetic adults and their age- and sex-matched healthy controls.

Ethics
Prior to commencement of the study, ethics clearance was obtained from the Medical and Health Research Ethics Committee (Institutional Review Board) of UNTH, Enugu, compliant with the tenets of the 1964 Helsinki Declaration on research involving human subjects. Additionally, written informed consent was obtained from each participant, case and control, before recruitment into the study.

Eligibility: cases
Adults aged 19 years or older, diagnosed with type 1 or 2 DM for 5 years or longer, and without congenital anomalies or infective/inflammatory/neoplastic lesions of the outer, middle or inner ear were included in the study. Also excluded were potential participants who had coexisting tuberculosis, syphilis, sickle cell disease, hypertension, human immunodeficiency virus infection or neoplasia, or a past history of head injury, acoustic trauma, ear surgery, familial deafness or use of ototoxic drugs in the month prior to recruitment.

Controls
Age- and sex-matched healthy nondiabetic adults without any of the above conditions contraindicating enrollment were recruited from the hospital community.

Sample size and sampling technique
A minimum sample size for the study was calculated using Fisher's formula. 17 Consecutively presenting patients who met the inclusion criteria were recruited into the study.

Study instrument
This was a pretested, investigator-administered questionnaire/proforma specifically designed for the study. It contained subsections on participants' demographic and clinical characteristics and the findings of the audiometric assessment.

Study procedures
A peripheral venous blood sample was obtained from each case and control participant for fasting blood sugar determination using AccuCheck™ (Roche Diagnostic GmbH, Mannheim, Germany) and for human immunodeficiency virus 1 and 2 screening with the Determine™ test (Alere Medical Co. Ltd, Chiba, Japan). A midstream urine sample was also obtained for urinalysis. Subsequently, each subject (case and control) had a general and systemic examination, and an otologic examination using an LED headlight (Tiger Head Battery Group Co., Ltd, Guangzhou, People's Republic of China) and a battery-powered otoscope (Welch Allyn Inc., Skaneateles Falls, NY, USA).
Any impacted wax in the external auditory canal was removed either with a wax hook (Downs Surgical, Sheffield, UK) or by instilling a wax-softening solution, Cerumol (Thornton & Ross Ltd, Huddersfield, UK), before syringing with a Higginson's syringe (Downs Surgical) filled with normal saline (Juhel Pharmaceuticals, Awka, Nigeria) at body temperature. This was followed by a re-examination of the ears to confirm that the external auditory canal was clear. Pure tone audiometry was performed on each subject in a sound-treated room, using a MEDIMATE 602 audiometer (Madsen Electronics, Taastrup, Denmark) calibrated to ISO standard (9002), to determine the hearing threshold at the octave frequencies 250-8000 Hz. The level of hearing for each subject was determined based on the pure tone audiometric findings: the average across the frequencies considered was calculated, and the degree of hearing loss for each patient was determined based on the World Health Organization standard classification 18 (Table 1).

Data analysis
Data were entered into and analyzed using the Statistical Package for the Social Sciences for Windows, version 18 (SPSS Inc., Chicago, IL, USA). Descriptive statistics yielded frequencies, percentages and proportions. Comparative statistical tests for the significance of observed intergroup differences were performed using Pearson's chi-square test or Fisher's exact test for categorical variables and Student's t-test for continuous/metric variables. In all comparisons, a p value <0.05, at one degree of freedom, was considered statistically significant.

Results
Two hundred and thirty cases were recruited into the study; however, six cases with incomplete data were excluded from the analysis. The prevalence of hearing loss was 46.9% among cases and 15.6% in controls. The patient-control age match with hearing thresholds is shown in Table 3. Of the cases, normal hearing was present in 119 (53.1%), sensorineural hearing loss (SNHL) in 98 (43.8%) and conductive hearing loss (CHL; air-bone gap ≥15 dB) in 7 (3.1%); among the controls, hearing was normal in 162 (84.4%), SNHL was present in 30 (15.6%) and CHL in 0 (0.0%). None (0.0%) of the case or control participants had mixed hearing loss. The profile of hearing loss is shown in Table 4. The diabetic patients consistently had significantly higher mean threshold values at all frequencies in both ears (Table 5). The prevalence of hearing loss was significantly higher overall (p<0.0001) and by type (SNHL, p<0.0001; CHL, p=0.0168) in diabetics compared with controls. Figure 1 shows the audiogram of the mean hearing thresholds of the DM patients and the controls.

Discussion
The case and control groups, both comprising adults, had an equal sex distribution. Nevertheless, the mean age of the cases was slightly higher than that of the controls despite measures to achieve an age match, which suggests that age-related influences on hearing threshold might partly account for the higher prevalence of hearing loss among diabetics. To eliminate the potential confounding influence of age, related future studies should aim at a perfect age match between cases and controls. In this study, the prevalence of hearing loss (predominantly SNHL, with a small proportion of CHL) was significantly higher among cases (46.9%) than controls (15.6%). There has been wide variation in the prevalence of SNHL found in studies involving diabetic patients; ranges such as 0%-93% 19,20 have been quoted, and no satisfactory explanation for this wide variation has been offered.
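The prevalence and severity figures above rest on the grading step described in the methods: averaging the pure-tone thresholds and mapping the result onto the WHO classification of Table 1. A minimal sketch follows (illustrative Python; the cut-offs below are the commonly cited WHO grades and are an assumption here, since Table 1 is not reproduced):

def pure_tone_average(thresholds_db, freqs=(500, 1000, 2000, 4000)):
    # thresholds_db: dict mapping frequency (Hz) to threshold (dB HL)
    return sum(thresholds_db[f] for f in freqs) / len(freqs)

def who_grade(pta_db):
    # assumed WHO (1991) grades of hearing impairment
    if pta_db <= 25:
        return "normal"
    if pta_db <= 40:
        return "mild"
    if pta_db <= 60:
        return "moderate"
    if pta_db <= 80:
        return "severe"
    return "profound"

# example: one ear's thresholds at the four index frequencies
ear = {500: 30, 1000: 35, 2000: 45, 4000: 50}
pta = pure_tone_average(ear)
print(pta, who_grade(pta))   # 40.0 "mild"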
This variation notwithstanding, the present finding is consistent with reports from elsewhere (0%-93.0%), 19,20 but far higher than the 17.0% reported by Lasisi et al 7 in Ibadan, Nigeria. Between-survey similarities and differences in inclusion criteria, participants' demographics and clinical profile might explain these observations. While the participants in this study were adults who had had diabetes for 5 years or longer, the Ibadan 7 report included pediatric subjects with a less than 5-year history of diabetes. To enable valid comparisons between survey results, the authors suggest the adoption of a standardized recruitment procedure in future surveys. The high prevalence of hearing loss among diabetics emphasizes the constant necessity for routine periodic audiometric evaluation of adult diabetics. Future longitudinal studies are needed to assess the temporal profile of diabetes-related hearing loss and inform the frequency of audiometric assessments. The hearing thresholds of the cases, compared with the controls, showed significant differences at all frequencies in both right and left ears, with the cases showing higher thresholds. This is consistent with the findings of Ologe and Okoro 8 and underscores the constant necessity for periodic otologic screening in all adult diabetics. The severity of hearing loss among the cases was most frequently mild (25.0%) or moderate (15.6%); severe (6.3%) hearing loss was relatively infrequent. Thus, 21.9% of patients had moderate to severe hearing impairment, which will add to the burden of their systemic morbidity and adversely impact their quality of life and performance. 18 The type-specific prevalence of hearing loss showed a predominance of SNHL over CHL; microangiopathy and peripheral neuropathy, common in diabetics, might explain this finding. Although this was a case-control study, the conclusions drawn and the extrapolation of its findings are limited by its hospital-based cross-sectional design, the strict age criterion for enrollment and the potential influence of age-related hearing loss. Therefore, the results cannot be generalized to the entire population and do not provide information on temporal trends. Population-based surveys across all ages, preferably of longitudinal design, are warranted.

Conclusion
There is a high prevalence of hearing loss, predominantly of the sensorineural type, among adult diabetics at UNTH, Enugu. This might have adverse implications for their quality of life. Routine periodic audiometric assessment of adult diabetics is recommended to ensure early detection and timely otologic care.

Disclosure
The authors report no conflicts of interest in this work.
2018-04-03T02:32:38.262Z
2017-05-02T00:00:00.000
{ "year": 2017, "sha1": "c0d9acc46db044090d6bd446ad625267efaaf12c", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=36275", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b2d92a79e94c0b459ef229ec95f03b550138f3e6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
158889368
pes2o/s2orc
v3-fos-license
The Part-Time Revolution: Changes in the Parenthood Effect on Women’s Employment in Austria across the Birth Cohorts from 1940 to 1979 Comparing employment rates of mothers and childless women over the life course across the birth cohorts from 1940 to 1979 in Austria, we address the question of whether the parenthood effect on employment has declined. By following synthetic cohorts of mothers and childless women up to retirement age, we can study both the short-term and long-term consequences of having a child. We consider employment participation as well as working time and also perform analyses by educational level. Our study is based on the Austrian microcensus, conducted between 1986 and 2016, and uses descriptive methods, logistic regression models, and decomposition analysis. The results show that the increase in the proportion of part-time work has led to a declining work volume of mothers with young children, despite employment rates of mothers having increased across cohorts. Return to the workplace is progressively concentrated when the child is 3–5years old, but the parenthood effect has become weaker only from the time children enter school. Part-time employment is primarily adopted (at least with younger children) by highly educated mothers and often remains a long-term arrangement. Introduction Having children has a significant impact on women's employment, which leads both to gender inequality and disparities with childless women. In addition to crosscountry differences in the parenthood effect on women's employment, variation over time has been observed due to changing family policies, cultural attitudes, and women's labour market opportunities (Goldin, 2006;Vlasblom and Schippers, 2006;Nieuwenhuis, Need and Van Der Kolk, 2012;Connolly et al., 2016). In this article, we compare the employment of mothers and childless women across birth cohorts from 1940 to 1979 in Austria. Previous research mainly examined the shortterm consequences of childbirth on mothers' employment over the first years of their children's life, often in conjunction with parental leave policies (Aisenbrey, Evertsson and Grunow, 2009;Kanji, 2011;Schober, 2013;Berghammer, 2014). Instead, we study both short-term and long-term consequences. To achieve this, we follow mothers and childless women over their life courses up to their retirement age in synthetic cohorts, i.e. we treat the age distribution of successive waves of cross-sectional data as if birth cohorts were passing through time. We consider their employment participation and their working time arrangements and also perform analyses by educational level. Austria is commonly considered a prime example of a conservative-corporatist welfare state (Esping-Andersen, 2009), characterized by care duties being mainly allocated to families and less to state institutions. With the rise of women's employment in Austria, this familistic orientation elicited the growth of part-time work, which enables mothers to continue taking over care responsibilities, while fathers' (full-time) employment remains largely unaffected (Berghammer and Verwiebe, 2015). 1 In fact, by 2014, the female part-time employment rate in Austria reached the third highest in Europe after the Netherlands and Switzerland (Eurostat database, 2018b). Furthermore, unlike other countries with predominantly part-time female labour force participation, this rate continues to rise in Austria. 
By studying women's employment in Austria, we can document in detail how part-time work increased over recent decades, especially in relation to having children. Comparing the labour market behaviour of mothers and childless women is particularly relevant in Austria, as a relatively large share of women has no children. The childlessness rate was 19 per cent in the cohorts that have more recently completed childbearing years (women born in 1972) and as high as 30 per cent among university-educated women (born between 1956 and 1960) (Sobotka et al., 2015;Beaujouan, Brzozowska and Zeman, 2016). The challenges in reconciling (fulltime) employment and raising children, which results in a large parenthood effect on maternal employment, have been proposed as the main explanations for the high levels of childlessness observed (e.g. Sobotka, 2011). This contribution provides evidence towards how the parenthood effect has evolved across cohorts. By addressing the issue of part-time employment, this study contributes to a wider discourse on women's employment in Europe and the United States. There has been a debate over whether the opportunity for parttime work allows women who would otherwise be inactive to remain in the labour market, or whether part-time employment creates a marginalized work force (Gallie et al., 2016). A related debate revolves around the issues of working time preferences (Hakim, 2002). It addresses the question of whether working part-time is voluntary or involuntary and how working time preferences interact with both cultural and economic structures. Scholars identified several reasons for why part-time work is widespread and continues to increase in some countries. First, policies that encourage parttime work play a decisive role. Examples include the right to work part-time and childcare infrastructure, in particular the lack of full-time childcare spaces (Kreyenfeld and Hank, 2000;Del Boca, Pasqua and Pronzato, 2009). Second, cultural attitudes that support traditional gender roles make two full-time earners difficult to realize. A masculine workplace culture means that high time availability and flexibility are expected from men. Likewise, occupational segregation and the associated wage gap tend to sort women into jobs that are easier to accommodate childrearing but often entail lower wages (Blau and Kahn, 2003;Mandel and Semyonov, 2005). Third, mothers are expected to devote a lot of time to their children in cultures of timeintensive and child-centred mothering (Hays, 1996;Bianchi, 2000). In addition, the following three factors related to educational differences play a role in women's participation in the workforce. Highly educated women have a higher earnings potential, which makes full-time work more attractive. However, they also tend to be partnered with equally highly educated men, often in high-level positions that entail high work commitment and long working hours. These circumstances may lead them to curtail their working hours, especially when they have children. In addition, there is evidence that the culture of intensive mothering is particularly strong among highly educated women (Sayer, Gauthier and Furstenberg, 2004). Our analyses are based on seven waves of the Austrian microcensus surveys, conducted between 1986 and 2016, which contain information on the number and birth year of children ever born. 
We constructed synthetic cohorts of 10 years (1940-1949, 1950-1959, 1960-1969, 1970-1979), which we followed from women's young adulthood over their prime employment years up to age 60. Unlike several previous studies on parents' employment behaviour that had to cap at ages 40 or 45 because they only had information on children in the household available (Percheski, 2008;Konietzka and Kreyenfeld, 2010;Bü nning and Pollmann-Schult, 2016), we are able to follow cohorts of women to the end of their labour market careers. Taking into account the employment trajectory until higher ages is desirable since there are not only immediate consequences of employment; rather, lifetime employment largely determines welfare in old age. In conservative welfare states such as Austria, old age poverty among women tends to be a particular concern because they are often less well-secured by their own employment (Angel and Kolland, 2011). Therefore, using a cohort comparison of four successive 10-year cohorts of women, this study addresses the question of whether the parenthood effect has declined over cohorts. Given improved opportunities for combining work and family, and more egalitarian gender attitudes, we expect-in line with previous evidence-that employment participation of mothers has converged towards the rate of childless women. However, the rise in part-time employment may have attenuated this decline in the parenthood effect. Our study details to what extent this is the case, paying specific attention to children's age. In terms of education, we expect the parenthood effect to be generally stronger among lowereducated mothers and we comprehensively analyze changes over cohorts and by age of the youngest child. In the remainder of this article, we first provide an overview on the theoretical background and previous research regarding women's labour force participation, part-time employment, and the education effect on employment. Subsequently, we discuss the profiles of the four cohorts under study. Next, we address data and methods. Finally, we present the findings on employment, full-time/part-time employment, and educational differences as outlined earlier. Theoretical Background and Previous Research Previous theoretical contributions have addressed the reasons for the increase in female labour force participation. At the macro-level, policies, in particular the arrangement of work-family policies, have played a key role (Nieuwenhuis, Need and Van Der Kolk, 2012;Steiber and Haas, 2012). Other factors include the expansion of the service sector (Thévenon, 2013), families' economic need for more than one income due to rising costs of living, and the cultural change towards more egalitarian gender role attitudes (Pfau-Effinger, 2004). At the micro-level, the main determinants of women's labour market participation are the number and age of children, education and income, gender role attitudes, and partnership characteristics (Pettit and Hook, 2005;Nieuwenhuis, Need and Van Der Kolk, 2012). The higher labour force participation of highly educated mothers has been largely explained by their higher earnings, better opportunities in the labour market, and more attractive job characteristics as well as by their more gender egalitarian attitudes (Steiber and Haas, 2012). The effect of micro-level determinants varies by country and life stage. 
For instance, in countries such as France or Norway, the education effect on mothers' employment is larger when children are younger (because highly educated mothers return back to the workplace faster than their lower-educated peers), while it levels off with the increasing age of the children. Conversely, in other countries (e.g. Austria), educational differences by phase in the family life course are similar (Steiber, Berghammer and Haas, 2016). Part-time employment can be theoretically conceptualized either in terms of integration or in terms of segmentation (Gallie et al., 2016). The integration perspective assumes that the option for part-time work serves as a bridge into the labour market for persons who would otherwise be inactive (such as mothers with young children). The segmentation perspective views part-time employees as a marginalized workforce that is easier to substitute, experiences less on-the-job training and reduced career opportunities, and suffers from low task discretion and highly repetitive tasks. Empirical evidence points towards a lower intrinsic quality of part-time jobs in some countries but still finds a high degree of job satisfaction (Gallie et al., 2016). The integration-segmentation conceptualization is closely related to the debate around working time preferences. Some scholars, most prominently Catherine Hakim, have argued that women's employment choices are predominantly explained by their preferences (Hakim, 1995). In her preference theory, Hakim distinguishes between women with a strong work orientation and commitment (work centred), women with a strong preference for family (home centred), and adaptive women without any clear preference, who seek to combine work and family (Hakim, 2002). Highly educated women are assumed to be more often work-centred than their lower-educated peers. According to Hakim's theory, adaptive women predominantly choose to work part-time. They are particularly responsive to policies (e.g. availability of childcare services) and also favour occupations (such as teaching) that may more readily be combined with family life. Indications for part-time work being a lifestyle choice rather than a constraint are a high job satisfaction and a reluctance to raise working hours even if circumstances permit. Other scholars have contested Hakim's preference theory mostly on the grounds that it does not sufficiently consider constraints (McRae, 2003). Constraints to full-time work may be both structural (e.g. opening hours of childcare, partners' working hours) and cultural (e.g. norms of intensive mothering). Critics have also argued that working time preferences are not static but are shaped by both constraints and actual working hours (Crompton and Harris, 1998;Steiber and Haas, 2012). These preferences hence interact with cultural and economic structures. Moreover, there is frequently a mismatch between attitudes and behaviour, which challenges the notion that women are largely able to live according to their preferences (Steiber and Haas, 2012). Highly educated women are usually better able to pursue their preferences for full-time work (while paying for childcare and outsourcing housework), part-time work or staying at home because of the couple's higher resources and their better negotiating position towards their partners (Bernardi, 1999;Verbakel and de Graaf, 2009). 
Cross-national evidence has shown that part-time work is more prevalent among less-educated women than among their highly educated peers (Del Boca, Pasqua and Pronzato, 2009). This is also the case for Austria (Baierl and Kapella, 2014), although more detailed results find that this relationship only applies to mothers with school-aged children, while part-time work is more frequent among highly educated mothers with infants or preschool children (Berghammer, 2014; Steiber, Berghammer and Haas, 2016).

In Austria, both structural and cultural constraints play a role in the rise of part-time employment. Its childcare and school infrastructure are commonly not geared towards two full-time earners and show a high degree of regional variability (Dörfler, Blum and Kaindl, 2014). Other policy measures also facilitate the part-time option, most importantly the right to work part-time (since 2004) and lower taxes. Part-time work is also enabled by moderate costs of living, which allow many families to live on one-and-a-half incomes (Note 2). Norms of intensive mothering are influential (Diabaté, 2015), and large parts of the Austrian population hold negative attitudes towards full-time working mothers with a child below age 3 (Steiber and Haas, 2010). Coupled with this, there is a strong male breadwinner culture; overtime is common in Austria, with the average number of usual weekly hours of work among full-time employees being among the highest in the European Union (43 hours per week in 2017) (Eurostat database, 2018a). These cultural attitudes are sustained by a gender wage gap, which persists from the time of labour market entry (Bock-Schappelwein et al., 2018) and hinders men from taking up parental leave or reducing their working hours. Within the context of these constraints, surveys find a high degree of voluntary part-time employment in Austria: less than 10 per cent of part-time employed women aged 25-49 report that their work arrangement is involuntary. This pattern is similar to other countries with high part-time rates (e.g. the Netherlands, United Kingdom) and contrasts with Southern Europe and some Central and Eastern European (CEE) countries (Baierl and Kapella, 2014).

Cohort Profiles

This section briefly presents the profiles of the four Austrian cohorts under study with respect to education, employment, and family life (for an overview, see Table 1; Supplementary Figure A.1 shows a Lexis diagram to facilitate connecting policy events with women's ages in different cohorts).

World War II and post-World War II (1940-1949)

The cohorts born during World War II or in the immediate post-war period generally experienced their early socialization under tight economic circumstances. Family relations were often strained due to many fathers' long absences during and after the war (Sieder, 1987: pp. 236-242). Most women in these birth cohorts completed only primary education. They continuously lived in a family context, moving from their parental home to a household with their husband (Prskawetz et al., 2008). Close to 90 per cent of women married and had children, two on average. In 1957, women in Austria obtained the right to take unpaid parental leave for 6 months with a guaranteed return to their workplace; this was extended to 1 year in 1961 and endowed with an income-dependent leave benefit.
In the early 1960s, when many of the women born in these cohorts had entered (or were close to entering) the labour market, female employment rates were higher in Austria than in the other western European countries (Butschek, 1965). Most women worked in the service sector, closely followed by the agricultural sector and industry (mostly textiles) (Butschek, 1965; Butschek, 1974). Shortly after the post-war period, in the 1950s and 1960s, the economy began an unprecedented boom and families could increasingly afford a modest standard of living.

Late baby boom (1960-1969)

Although the growth rate in women's employment was less steep during the 1980s than in the decade before (Lutz, 2000), women's position in the labour market began to strengthen, with women increasingly holding leading positions and academic jobs (Dörfler and Wernhart, 2016). After becoming mothers, a growing share returned (and returned faster) to their workplaces, but from the mid-1980s onwards increasingly on a part-time basis. With a weakening economy since the 1980s, labour market uncertainties rose, real wages grew less rapidly than before, and many families felt that they could no longer live on a single income. The main changes in family policies were the extension of the parental leave duration from 1 to 2 years in 1990 and, 1 year later, the introduction of parental leave for fathers. Moreover, the childcare infrastructure for the morning care of children above age 3 improved continuously. Some of the demographic developments that had started in the previous cohorts accelerated, e.g. the increase in premarital cohabitation and the postponement of childbearing (Prskawetz et al., 2008). Meanwhile, the mean number of children per woman continued to drop and the use of the birth-control pill, legal since 1962, became increasingly widespread (Sieder, 1987: p. 257).

Generation X (1970-1979)

Generation X experienced a more flexible and globalized labour market than previous generations (partly related to Austria's accession to the European Union in 1995), although the labour market continued to be highly regulated (e.g. trade unions are strong and a high percentage of employees are covered by collective bargaining agreements). Generation X has been reported to hold a stronger work-life orientation with respect to the centrality attached to work status (Beutell and Wittig-Berman, 2008), but the evidence is inconclusive (Schröder, 2018). Norms of intensive mothering have become stronger over time and are more prevalent in this generation than in previous ones (Berghammer, 2013). Equality between men and women further increased in terms of education, employment, and work status. However, while men became increasingly involved in childcare and housework (Berghammer, 2013), they remained reluctant to take a substantial share of parental leave or to reduce their working hours as mothers increasingly did. In 2004, the right to part-time work until the child's seventh birthday was introduced (restricted to employees who had been employed for at least 3 years in a company with more than 20 employees). Regarding parental leave, in 2002 the system became more familistic (payments up to 3 years) and comprehensive (no longer tied to previous employment). In 2010, additional leave options were introduced to provide parents with greater flexibility (including one income-dependent option restricted to around 1 year).
The childcare infrastructure for children below 3 years and full-time childcare developed slowly and predominantly in urban areas. With respect to demographic developments, the age at motherhood continued to increase while the cohort family size remained stable compared with the previous cohort.

On the basis of empirical evidence and the changing context of the cohorts, we formulate the following hypotheses. First, we expect that mothers' employment rates will converge towards those of their childless counterparts (hypothesis 1). This reasoning is based on better opportunities for combining work and family, more gender egalitarian attitudes, and economic necessity in view of low wage growth. In addition, compositional changes (particularly an increase in highly educated women) as well as demographic changes (particularly a decreasing number of children) could play a role. We do not, however, assume a significant increase in mothers' employment with children below age 3, due to extensions of the parental leave period (1990 and 2002) and a scarcity of institutional childcare. Second, we expect that an increasing share of mothers returned to their workplaces on a part-time basis, especially those from the late baby boom cohort, for whom the right to work part-time started to take effect (hypothesis 2). Third, with regard to education, we expect a convergence of education groups in the employment rate (Berghammer, 2014), as lower-educated women especially profit from better opportunities to combine work and family (hypothesis 3). Part-time employment could also be more frequent among highly educated mothers with young children, because intensive mothering prevails more strongly among them and, if they prefer part-time work, they will be better able to realize it. Another argument pertains to the declining selectivity of highly educated women as their share expands: while highly educated women in older cohorts may have been more selective, for instance, in terms of skills, motivation, and work orientation (Gesthuizen, Solga and Künster, 2011), this might be less the case for the highly educated in younger cohorts.

Data and Methods

The analyses are based on Austrian microcensus data (labour force survey), a representative survey conducted since 1974 using a one per cent sample of Austrian households. This large-scale survey contains detailed information on household composition, employment, and education. In addition, approximately every 5 years (1986, 1991, 1996, 2001, 2006, 2012, and 2016), questions on the number and birth year of biological children were added in a special module directed at women aged 15 and over (in some waves, a different age definition was applied; for more details, see Supplementary Table A.1). While participation in the core microcensus is compulsory, participation in this module was voluntary, but response rates were above those reported for other Austrian social surveys. In the initial waves, data were collected in face-to-face interviews, whereas the special modules from 2006 onwards were conducted by computer-aided telephone interviews. Between 1986 and 2001, questions about children were put to another household member if the randomly selected respondent was not available. To assess the quality of these proxy interviews, we conducted sensitivity analyses that excluded them and, reassuringly, the results were very similar. Thus, the proxy interviews were included in the analyses.
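Because the synthetic-cohort design rests entirely on pooling these repeated cross-sections, a short sketch may help make the construction concrete. It is a minimal illustration under stated assumptions, not the authors' code: the file name and column names (survey_year, birth_year, number_of_children, employed, and so on) are hypothetical, and any weighting, the treatment of parental leave, and the main/secondary job distinction are omitted.

```python
import pandas as pd

# Hypothetical pooled extract of the microcensus special modules
# (one row per woman per survey wave); column names are illustrative only.
df = pd.read_csv("microcensus_modules_pooled.csv")

# Keep women with valid information on biological children.
df = df[df["sex"] == "female"].dropna(subset=["number_of_children"])

# Age at interview and 10-year birth cohorts.
df["age"] = df["survey_year"] - df["birth_year"]
df["cohort"] = pd.cut(
    df["birth_year"],
    bins=[1940, 1950, 1960, 1970, 1980],
    labels=["1940-1949", "1950-1959", "1960-1969", "1970-1979"],
    right=False,
)
df = df.dropna(subset=["cohort"])

# Observation window: young adulthood up to age 60.
df = df[df["age"].between(18, 60)]

# Synthetic-cohort profile: employment rate by cohort, motherhood status,
# and 5-year age group (employed is assumed to be coded 0/1, with women
# on parental leave counted as not employed).
df["mother"] = df["number_of_children"] > 0
df["age_group"] = pd.cut(df["age"], bins=range(15, 65, 5))
profile = (
    df.groupby(["cohort", "mother", "age_group"])["employed"]
      .mean()
      .rename("employment_rate")
)
print(profile.head(12))
```

The sketch only illustrates the logic of following birth cohorts across survey waves; the definitions actually used in the analyses are those given in the following paragraphs.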
Case numbers and mean age at the time of interview are depicted in Table 2 (for a detailed sample description, see Supplementary Table A.2). The availability of information on biological children is a clear advantage of this survey over other surveys that are most frequently used for analyzing labour force participation. The EU Labour Force Surveys, the EU Statistics on Income and Living Conditions, and, for some countries including Austria, its predecessor, the European Community Household Panel, only inquire about children living in the household. Hence, they do not allow for a distinction between childless persons and parents who do not live with their children. This is less of an issue in younger age groups but becomes a growing concern around age 40 (for women), when children start to leave the parental home (Greulich and Dasré, 2017).

Our variables are defined as follows. We distinguish between four 10-year cohorts: 1940-1949, 1950-1959, 1960-1969, and 1970-1979. Regarding the employment classification, we consider women who are active in the labour market as employed. That is, unlike the common International Labour Organization (ILO) definition, we do not count women on parental leave as employed. We use the number of working hours during a regular work week and differentiate among short part-time work (1-20 usual weekly working hours), long part-time work (21-35 hours), and full-time work (36 hours and more). The distinction between main and secondary job is not available in the older surveys, hence our analyses refer to the main job. The categories for age of the youngest child are as follows: 0-2 years, 3-5 years, 6-9 years (primary school age), 10-15 years (lower secondary school), and 16-19 years (upper secondary school) (Note 3). We group education into the following four categories: low education denotes incomplete or complete primary education; medium education means having completed a secondary vocational track, usually apprenticeship training; medium-high education refers to having completed the higher vocational or general school, which ends with an examination permitting university attendance (in Austria, the 'Matura'); and high education denotes having completed tertiary education. In analyses of earlier birth cohorts, medium-high education and high education had to be collapsed due to low case numbers of women with tertiary education.

In our analytic strategy, we mainly pursue a descriptive approach in which we combine many characteristics: cohort, motherhood status, age, age of the youngest child, working time, and education (for a similar approach, see Trappe, Pollmann-Schult and Schmitt, 2015). We first depict differences in employment rates between childless women and mothers for all four cohorts under study and show maternal employment rates by age of the youngest child. In a second step, we differentiate by working time arrangements (full-time, short part-time and long part-time, unemployment, parental leave, and inactivity), focusing on childless women and mothers at ages 36-45 (in this age group, we have data for all four cohorts). Next, we analyze mothers' working time arrangements by age of the youngest child (until age 19) in more detail. Finally, we include education in our analyses of female employment (differences between childless women and mothers) and working time arrangements (differences by age of the youngest child).

We also estimated logistic regression models and, based on these results, conducted a Blinder-Oaxaca decomposition analysis (Sinning, Hahn and Bauer, 2008) to assess whether the changing composition of mothers (i.e. increasing education, fewer children) or other developments (e.g. labour market policies) are responsible for changes in maternal employment across cohorts. The different cohorts overlap only partly in terms of women's age, and we thus had to restrict the age range to 36-45, which is a limitation of the multivariate models. Logistic regression models are estimated separately for the four cohorts. We estimated models both for non-employed vs. employed and for part-time employed vs. full-time employed (Note 4). We depict the average marginal effects (AMEs), as these are most comparable across models for different groups (Best and Wolf, 2012). AMEs represent the average effect of a specific characteristic (e.g. being highly educated) on the probability of being (full-time) employed. Positive coefficients indicate that a certain group is more often in (full-time) employment, while negative coefficients indicate that a certain group is less often in (full-time) employment than in the reference group.
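As a complement to the verbal description, the following sketch outlines how the cohort-specific logistic regressions, average marginal effects, and the compositional decomposition could be set up. It is a schematic sketch under stated assumptions rather than the authors' implementation: the data file and variable names are hypothetical, statsmodels is used for convenience, and the decomposition is illustrated with a simple linear-probability two-fold split, whereas the study applies the nonlinear Blinder-Oaxaca variant of Sinning, Hahn and Bauer (2008).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: mothers aged 36-45, pooled across waves.
d = pd.read_csv("mothers_36_45.csv")
formula = "employed ~ C(educ) + C(age_youngest_child) + C(n_children)"

# Cohort-specific logistic regressions with average marginal effects (AMEs).
for cohort, grp in d.groupby("cohort"):
    logit_res = smf.logit(formula, data=grp).fit(disp=False)
    print(cohort)
    print(logit_res.get_margeff(at="overall").summary())

# Two-fold decomposition of the cohort gap in maternal employment,
# illustrated with linear probability models (assumes both cohorts
# contain every covariate category, so the design matrices align).
old = d[d["cohort"] == "1940-1949"]
young = d[d["cohort"] == "1960-1969"]
ols_old = smf.ols(formula, data=old).fit()
ols_young = smf.ols(formula, data=young).fit()

xbar_old = ols_old.model.exog.mean(axis=0)
xbar_young = ols_young.model.exog.mean(axis=0)
gap = young["employed"].mean() - old["employed"].mean()
explained = (xbar_young - xbar_old) @ ols_old.params.values  # composition
unexplained = gap - explained                                # changed effects
print(f"gap={gap:.3f}  explained={explained:.3f}  unexplained={unexplained:.3f}")
```

The point of the sketch is only the logic of the comparison: cohort-specific coefficients are turned into AMEs so that they can be compared across groups, and the cohort gap in maternal employment is split into a part attributable to changed composition and a residual attributable to changed effects of characteristics.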
Employment rates

Figure 1 shows employment rates of childless women and mothers across cohorts. The results reveal only a moderate change in the employment rates of childless women over time. They mostly range between 80 and 90 per cent in the prime employment years (ages 26-50), while being slightly lower in the oldest cohort (Figure 1A). Conversely, among mothers, the employment rate rises with each younger cohort, thus providing evidence for a declining parenthood effect over time (in line with hypothesis 1). For example, in the 36-40 and 41-45 age groups, the differences between the youngest and oldest cohorts are as large as 23 and 31 percentage points, respectively (Figure 1B). The results also demonstrate that the male breadwinner model is still quite widespread in the oldest cohort, with many women remaining housewives after they had children. For example, at ages 36-45, 45 per cent of women in the oldest cohort are not employed. Figure 1C displays the absolute difference in the employment rates between childless women and mothers. It shows that the employment behaviour of mothers and childless women converges with age as the children grow up. In the two younger cohorts, differences in employment participation have almost levelled out by the time women reach their early 40s, while in the two older cohorts, the gap continues to persist.

Due to the postponement of childbearing in recent decades, it might be misleading to compare maternal employment rates by age (as in Figure 1), since women at a given age will have younger children in the younger cohorts than in the older ones. We hence depict maternal employment rates across cohorts by age of the youngest child (Figure 2). The almost identical employment rates of around 30 per cent in all cohorts when the youngest child is below age 3 are striking. The faster re-entry to the workplace of each younger cohort becomes apparent only when the youngest child is aged 3-5.
At this family life course stage, less than 40 per cent of women in the oldest cohort were employed, compared with almost 70 per cent of women in the youngest cohort, with a steady rise observed among the cohorts in between. Aside from this conspicuous pattern, cohort differences in re-entry into the workforce with older children are moderate. For example, the rise in the employment rate from when the child is aged 3-5 to when it is aged 10-15 ranges from 69 per cent to 88 per cent (+19 points) in the youngest cohort and from 37 per cent to 51 per cent (+14 points) in the oldest cohort. Hence, the absolute increases are not all that dissimilar. The results also reinforce the finding (displayed in Figure 1) that being a housewife used to be common in the oldest cohort (half of the mothers stayed at home even when the youngest child was aged 10-15), but this model was eventually replaced by that of working mothers.

Full-time/part-time employment

In the next step, we introduce the full-time/part-time distinction. The results reveal that the full-time rate among childless women in all cohorts is around 70 per cent (Figure 3A). This means that their labour market behaviour in terms of working hours has remained rather stable and that they were almost unaffected by the increasing diversity of working hours among mothers (Note 5). A quite different trend in working hours emerges for mothers (Figure 3B): in parallel with a rising maternal employment rate, the share of full-time employed mothers has nearly halved, from about 40 per cent in the two oldest cohorts to 23 per cent in the youngest one (corresponding to hypothesis 2). In fact, part-time work (53 per cent, mostly long part-time) has become the most prevalent arrangement in this youngest cohort.

[Figure 3. Women's employment status in detail by motherhood status at ages 36-45: cohorts born 1940-1949 to 1970-1979 (per cent).]

[Figure note: When their youngest child is below age 3, the women are on average 32 years old (cohort 1950-1959), 29 years (1960-1969), and 30 years (1970-1979). Data are not shown for the 1940-1949 cohort with a child below age 3, as these women are on average 39 years old, which is atypical for their cohort; for the same reason, data are not shown for the 1970-1979 cohort with a child aged 16-19, as they are on average 44 years old.]

Again, we introduce the age of the youngest child as an alternative time metric (Figure 4). Whereas we had previously observed that the maternal employment rate with a child below age 3 is around 30 per cent in all cohorts, the actual labour market volume has dropped considerably along with the decrease in the share of full-time employed, which halved from 22 to 11 per cent (Figure 4A). This change implies that the mean working hours of employed women with the youngest child below 3 years have declined from 14 to 9 hours per week across cohorts (not shown in figure). With the youngest child between 10 and 19 years, the shares of full-time working mothers across cohorts have come closer (Figure 4D and E), suggesting that many mothers who had returned to the labour market on a part-time basis raise their working hours when their children get older.
Even so, with the youngest child aged 10-15 years, still only 30 per cent of mothers work full-time, compared with 58 per cent in short or long part-time, in the youngest cohort (Figure 4D) (Note 6).

Educational differences

In Figure 5, we look at educational differences in the age-specific employment rate for mothers and childless women. We distinguish between cohorts for mothers but not for childless women, as their employment rate varies little over cohorts. The employment rate for childless women differs between women who have at least medium education (around 90 per cent) and women with low education (around 70 per cent; this refers to approximately ages 31-45). Among mothers, the employment rate is also higher for those with higher educational levels. The results reveal, in addition, that the increase in the employment rate concerned low- and medium-educated mothers more than mothers in the two higher education categories (in line with hypothesis 3). This conclusion is based on a comparison of the oldest cohort (1940-1949) with the late baby boom cohort (1960-1969) with regard to the mean employment rate over the age range 36-50 (we did not consider the youngest cohort because of small case numbers). The average increase in the employment rate was 19 percentage points for low-educated mothers and 20 percentage points for medium-educated mothers, but only 14 and 8 percentage points for the two higher education categories (see Supplementary Figure A.2 for a presentation by cohort). This result indicates a converging trend among mothers with different educational backgrounds.

Next, we concentrate on mothers' employment with a youngest child below age 6 (Figure 6A). While the numbers are similar for the two oldest cohorts, mothers from the 1960-1969 cohort onwards increasingly return to the workplace on a part-time basis. Across all cohorts, low-educated mothers are least likely to work part-time, and the non-employment rate (i.e. inactivity and unemployment) is by far the highest among them. This pattern may, with some caution, be interpreted as a polarization among low-educated mothers between full-time work and non-employment. Part-time work was initially adopted by medium-educated mothers, and by the youngest cohort it is clearly most common among medium-high-educated and highly educated mothers. Whereas in the three older cohorts full-time employment was highest among highly educated mothers, the rates had converged to be almost identical in the youngest cohort.

Figure 6B shows how working time arrangements have evolved by the time the children are aged 10-19. The results provide evidence that, in line with their greater labour market potential, highly educated women in the youngest cohort are most likely to increase their working hours to full-time. Whereas half of the highly educated mothers with a child in this age group work full-time, the three lower educational categories display full-time rates of around 30 per cent. Notwithstanding, the stronger downward trend in full-time work among highly educated and medium-high-educated women (from 67 to 49 per cent and from 48 to 30 per cent, respectively; cohorts 1950-1959 to 1970-1979; see note to figure) indicates that the decline in the parenthood effect was relatively less pronounced among these groups of women (Notes 7 and 8).

Multivariate Results

The results from the multivariate models support the main descriptive findings (Table 3, panel A).
The decline in the parenthood effect in employment is reflected in the declining relevance of the age of the youngest child (with the exception of children below age 3). In younger cohorts, more mothers are employed independently of the child's age. While the employment rate of highly educated women is distinctly higher in the older cohorts, the education effect converges across cohorts. Regarding full-time/part-time employment (Table 3, panel B), the results show an increase in part-time work in the two younger cohorts, especially with younger children. Conversely, the employment rate in the two older cohorts had been lower, with a higher share of mothers working full-time. Accounting for the age of the youngest child, highly educated mothers are more likely to work full-time. In the youngest cohort, the full-time employment rate of medium-high-educated women is lower compared with the three older cohorts. Findings from the decomposition analyses comparing the 1940-1949 cohort with the 1960-1969 and 1970-1979 cohorts indicate that increasing maternal employment cannot be explained by changing cohort characteristics (e.g. female education) but seems to be (almost solely) driven by structural changes, i.e. the rise in part-time work (Table 4) (Note 9).

Concluding Discussion

This study has drawn a detailed comparison of employment behaviours between childless women and mothers, who were followed in synthetic cohorts over their life courses from their late teens to age 60. Its aim was to analyze whether the parenthood effect has declined over cohorts of women. When using the employment rate as an indicator, we may indeed conclude that engaging in paid work has become significantly easier for mothers; returning to the workplace is increasingly concentrated in the period when the child is aged 3-5. The picture changes, however, once we consider actual working hours. The increasing diversity of working hours among mothers is not mirrored among childless women. Instead, if working time is reduced, it is almost always, at least initially, for reasons of care. This finding challenges the notion that Generation X seeks a better reconciliation between working time and leisure. Another important finding is that part-time work often remains a long-term arrangement rather than being a stepping stone into full-time employment (although we would need individual-level panel data to observe changes in employment hours). Many mothers do not expand their working hours to full-time, even as their children grow up (for international evidence, see Månsson and Ottosson, 2011; Kelle, Simonson and Gordo, 2017) (Note 10). They avoid this even though disadvantages in terms of career prospects, pension benefits, and poverty among single mothers rise with the duration of part-time work (Thévenon, 2013). In the youngest cohort, only 30 per cent of mothers work full-time when their children attend lower secondary school (aged 10-15). Current labour laws provide the right to part-time work until the child's seventh birthday and, by this time, this arrangement is often consolidated in the company and also in the family (i.e. regarding the division of unpaid work and with respect to the extent of leisure time).
We interpret the rise of part-time work as a new divide within the workforce between mothers and childless women. Mothers in part-time employment, especially if it is short part-time, are now in rather marginalized labour market positions. Before the spread of part-time employment, both mothers and childless women worked full-time. Hence, the divide ran less within the workforce than between housewives and those women who were active in the labour market (with small differences by motherhood status).

The education-specific results reveal that employment rates increased most strongly among low- and medium-educated mothers, while the increase was more moderate among their highly educated peers, whose employment levels were already higher in older cohorts. This result thus suggests a converging trend between education groups (in line with Berghammer, 2014). Medium-high-educated and highly educated women resume employment faster than their less-educated counterparts after childbirth, and more often on a part-time basis. This implies that the parenthood effect has declined relatively less in these groups of women than among their less-educated peers. In line with this finding, the results from the decomposition analysis reveal that changing cohort characteristics (e.g. more highly educated women) cannot explain the increase in maternal employment. Notwithstanding, as their children grow older, highly educated women are more likely than the other three educational groups to expand their working hours to full-time. On the other hand, we find that low-educated mothers experience more polarization between full-time work and non-employment (often unemployment) with children below age 6 than the other education groups, and if they work part-time, they more often do so because they cannot find a full-time job. This supports the view that highly educated mothers are better able to pursue their working time preferences.

[...] parents observed in most other European countries (Berghammer and Verwiebe, 2015; Connolly et al., 2016). Second, while previous research found that highly educated women engage less in part-time work (Del Boca, Pasqua and Pronzato, 2009), the Austrian results show that this association only holds among mothers with older children. In the youngest cohort, three out of four highly educated employed mothers with children below age 6 work part-time. This challenges the widely held preconception that highly educated women are career-focused and oriented towards gender equality.

Notes

1. In terms of the working time regime, since 1975 a standard full-time work week in Austria is 40 hours (8 hours per day), but many collective bargaining agreements provide reduced hours (e.g. 38.5).

2. For instance, higher disposable income net of housing costs in Norway and Sweden compared with Finland is a main reason for the four to five times higher part-time employment rates in these countries during the 1970s-1990s (Rønsen and Sundström, 2002).

3. In Berghammer and Riederer (2018), we also show results that pertain to the empty nest stage.

4. We also applied ordered logistic regression models (non-employment, short part-time, long part-time, full-time). Their interpretation is, however, not as straightforward, because increasing employment rates of mothers (positive effect) are partly offset by increasing part-time employment rates (negative effect).
5. The 2006, 2012, and 2016 microcensus surveys additionally contain information on the reasons for part-time employment, allowing analyses for the cohorts 1960-1969 and 1970-1979 (see Supplementary Table A.3). Among childless women who work part-time, around 40 per cent do not want a full-time position and almost 20 per cent cannot find one (the rest work part-time for other reasons, e.g. other personal and family reasons or illness).

6. Analyses of the reasons for part-time employment show that around 80-90 per cent of mothers with children below age 10 work part-time for reasons of care (see Supplementary Table A.3). With the youngest child aged 10-15, still one half attribute their part-time arrangement to care reasons and 20-25 per cent want to increase their working hours.

7. Additional analyses of the reasons for part-time work (cohorts 1960-1969 and 1970-1979; see Supplementary Table A.3) indicate that the part-time working arrangement is more often involuntary among low-educated women than among their higher-educated peers. Among part-time working mothers with a youngest child aged between 10 and 15 years, 20 per cent (high education) and 29 per cent (low education) want to increase their working hours (not necessarily to full-time); between 5 per cent (high education) and 12 per cent (low education) indicate that the reason for their part-time work is that they cannot find a full-time position. On average, only 56 per cent of women born 1960-1979 who work part-time in 2016 and want to work more hours would like to work full-time; 33 per cent would like to work long part-time and 11 per cent short part-time (e.g. an increase from 12 to 20 hours per week).

8. Statistical tests corresponding to Figures 1-6 generally support our observations (see Supplementary Tables A.4-A.6). For instance, these tests confirm that (i) the "motherhood effect" in employment is smaller in the two younger cohorts than in the two older cohorts; (ii) mothers are more likely to work part-time, more likely to be on leave, more likely to be unemployed, and less likely to be inactive in younger cohorts; (iii) mothers of young children work particularly less often full-time in younger cohorts; (iv) mothers of children below age 6 are more likely to work part-time if they have medium or high education; (v) mothers of children aged 10-19 are less likely to work short part-time if they are highly educated; and (vi) there has been convergence between mothers and childless women in all educational groups at ages 41-45.

9. Example of interpretation: the difference of 0.19 (Table 4, panel A, decomposition 1) means that the share of employed mothers is 19 percentage points higher in the 1960-1969 cohort than in the 1940-1949 cohort. The decomposition result suggests that only 1 percentage point is explained by different compositions (characteristics) of mothers in the two cohorts, while 18 percentage points are due to other developments (resulting in differences in regression coefficients, i.e. effects of characteristics).

10. Legally, parents have a supervisory duty for their children up to age 18.

Supplementary Data

Supplementary data are available at ESR online.
Molecular Chaperones as Targets to Circumvent the CFTR Defect in Cystic Fibrosis

Cystic Fibrosis (CF) is the most common autosomal recessive lethal disorder among Caucasian populations. CF results from mutations in, and the resulting dysfunction of, the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR). CFTR is a cyclic AMP-dependent chloride channel that is localized to the apical membrane in epithelial cells, where it plays a key role in salt and water homeostasis. An intricate network of molecular chaperone proteins regulates CFTR's proper maturation and trafficking to the apical membrane. Understanding and manipulation of this network may lead to therapeutics for CF in cases where mutant CFTR has aberrant trafficking.

INTRODUCTION

The most common disease-causing mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) is the deletion of a single phenylalanine at position 508, ∆F508-CFTR. This mutation is present in one or both alleles of ∼90% of people with CF (Riordan, 2008), making it an attractive target for therapeutics. In contrast to wild type CFTR, which reaches the apical cell surface after its N-linked oligosaccharides are modified in the Golgi to an endoglycosidase H digestion-resistant form, ∆F508-CFTR does not acquire endoglycosidase H resistance (Cheng et al., 1990). These data suggested that ∆F508-CFTR is retained in the endoplasmic reticulum (ER; Kerem et al., 1989; Collins, 1992; Riordan, 1999; Bobadilla et al., 2002). Interestingly, ∆F508-CFTR appears to retain some ability to transport chloride when in the ER (Pasyk and Foskett, 1995), suggesting that the deletion of phenylalanine interferes with proper biogenesis and promotes degradation of the mutant protein (Ward and Kopito, 1994; Ward et al., 1995; Okiyoneda et al., 2010). Because ∆F508-CFTR retains the ability to transport chloride, it is widely hypothesized that correction of the mutant protein's trafficking would lead to functional CFTR at the apical cell surface (Denning et al., 1992b; Li et al., 1993; Pasyk and Foskett, 1995). This premise was supported by early data from Drumm et al. (1991) indicating that ∆F508-CFTR was functional in Xenopus oocytes, which are typically incubated at room temperature. Studying mammalian cells, Denning et al. (1992a) found that decreasing the cell incubation temperature led to an increase in both expression and function of ∆F508-CFTR at the cell surface. Overcoming this kinetic trafficking defect of ∆F508-CFTR would be an important step in developing therapeutics for people with CF.

CFTR BIOGENESIS

Proper biogenesis of the CFTR protein is not a trivial task. CFTR is synthesized as a ∼140 kDa protein (comprising 1480 amino acids) and requires a number of processing steps to progress to a mature, ∼180 kDa form. The protein contains two nucleotide binding domains (NBD1 and NBD2), two membrane-spanning domains (MSD1 and MSD2), and an intervening regulatory domain (R; Riordan et al., 1989). During translation, MSD1 is synthesized first, followed by NBD1, R, MSD2, and finally NBD2; folding of the nascent peptide appears to occur both co-translationally and post-translationally (Du et al., 2005; Kleizen et al., 2005). F508 is located in NBD1, and while the crystal structures of wild type and ∆F508 NBD1 are quite similar, deletion of F508 appears to cause NBD1 to adopt a more unfolded solution conformation, as assessed by proton-deuterium exchange (Lewis et al., 2005, 2010).
Furthermore, deletion of F508 appears to destabilize a critical NBD1/MSD2 interaction (Thibodeau et al., 2005; Serohijos et al., 2008). Du et al. (2005) also suggested that phenylalanine 508 provides an important interaction with NBD2 that assists in proper post-translational folding of this domain. Together, these data suggest that newly synthesized ∆F508-CFTR is less appropriately folded, and therefore more readily recognized by ER quality control mechanisms and targeted for degradation. Interestingly, Cui et al. (2007) found that a wild type CFTR construct lacking the NBD2 domain escaped degradation and trafficked to the cell membrane, where it had similar stability to full-length CFTR but a very low open probability. These data suggest that, though important for CFTR activity, NBD2 is not essential for CFTR biogenesis and exit from the ER. Consistent with this notion, when this group introduced the ∆F508 mutation into their NBD2-deficient construct, the resulting protein did not reach the plasma membrane, supporting the earlier hypothesis that ∆F508 impacts aspects of CFTR folding and biogenesis other than the NBD1/NBD2 interaction.

MOLECULAR CHAPERONES

To better understand the difficulties of ∆F508-CFTR biogenesis, it is important to examine the cellular context in which CFTR biogenesis occurs. The folding and trafficking environment, referred to by Wang et al. (2006) as the "CFTR interactome," contains over 200 proteins that co-immunoprecipitate with either wild type or ∆F508-CFTR in model systems. These co-precipitating proteins, a number of which are implicated in proper folding, trafficking, and function of CFTR, include a number of molecular chaperone proteins. Molecular chaperones are proteins that aid in the folding of other proteins but do not become part of the final product (Ellis, 1987). Instead, they promote self-assembly of their client proteins and prevent non-productive folding. Historically, the functions of many molecular chaperones have been defined by their ability to assist in the refolding of denatured proteins, such as luciferase, in vitro (Schroder et al., 1993; Barral et al., 2004).

Molecular chaperones appear to interact with CFTR during many stages of biogenesis. Nascent peptides of membrane proteins, such as CFTR, are synthesized at the ER, where co-translational folding occurs (Hartl, 1996). Because CFTR is inserted into the ER membrane, its folding is monitored by chaperone proteins within both the ER and the cytoplasm. If CFTR folding is delayed or prolonged, interaction with molecular chaperones (Loo et al., 1998; Meacham et al., 1999) can cause improperly folded proteins to be transported back to the cytoplasm, where they are targeted for degradation by the proteasome (reviewed in Rivett, 1993). This process, known as ER-associated degradation (ERAD), also involves a number of molecular chaperones. These interactions and processes are discussed in detail below. Appropriately folded CFTR exits the ER and is transported to the Golgi, where its N-linked glycosyl modification is further processed into the mature form before trafficking to the apical cell surface. The ∆F508-CFTR mutant is unable to reach the Golgi, though it is able to transport chloride in reconstituted systems (Li et al., 1993; Lukacs et al., 1993). A number of findings suggest differing, and not mutually exclusive, mechanisms by which ∆F508-CFTR is retained in the ER.
One proposed mechanism suggests that recognition of an ER exit sequence within NBD1 of the CFTR protein by the Coat Complex II (COP II) ER → Golgi transport machinery is impaired in the ∆F508 protein (Chang et al., 1999; Wang et al., 2004). Other works cite improper and/or more robust chaperone binding as the mechanism by which ∆F508-CFTR is retained in the ER (Pind et al., 1994; Wang et al., 2006). Hypothetically, excessive chaperone binding could inhibit COP II's access to the ER exit motif within NBD1. To address this question, Wendeler et al. (2007) affixed a strong ER exit signal to the wild type CFTR protein. This signal did not disrupt protein localization or expression, but did enhance wild type CFTR maturation two-fold. In contrast, this ER exit signal did not enhance the maturation of the ∆F508 protein, thereby contradicting the hypothesis that a primary defect in the ER exit sequence is responsible for the failure in ∆F508-CFTR trafficking. Instead, these data support the hypothesis that molecular chaperone proteins may play a key role in the quality control of wild type CFTR.

CFTR AND ERAD

Accumulated non-functional membrane or ER luminal proteins can aggregate and interfere with the production or function of other newly synthesized proteins, as well as cause an ER and/or cellular stress response. To prevent this, aberrant proteins are recognized, shuttled out of the ER, and targeted for degradation by ERAD. Investigations in our group have focused on the mechanism by which 4-phenylbutyrate (4PBA) enhances ∆F508-CFTR trafficking (Rubenstein et al., 1997). We found that 4PBA decreased Hsc70 mRNA and protein expression in CF epithelial cells, and decreased recovery of ∆F508-CFTR when Hsc70 was immunoprecipitated (Rubenstein and Zeitlin, 2000; Rubenstein and Lyons, 2001). These data support the hypothesis that Hsc70 inhibits ∆F508-CFTR maturation, likely by promoting its ERAD (see Figure 1). Hsc70's promotion of ERAD involves a co-chaperone known as CHIP (C-terminus of Hsc70-interacting protein), an E3 ubiquitin ligase (Wiederkehr et al., 2002; Murata et al., 2003). Meacham et al. (2001) demonstrated that CHIP and Hsc70 cooperate to target the immature (band B) form for ubiquitination and degradation; overexpression of CHIP decreased whole cell and surface expression of CFTR. Simplistically, association of Hsc70 with a client (like CFTR) would bring CHIP into proximity, where it could catalyze ubiquitination of the client. A more robust association of Hsc70 with a client, as was demonstrated by our group for ∆F508 vs. wild type CFTR (Rubenstein and Zeitlin, 2000), would portend greater ubiquitination and a greater likelihood of ERAD.

Additional co-chaperone proteins interact with the Hsc70/CHIP complex to modulate their client interaction. HspBP1 binds Hsc70, and this binding decreases the ubiquitin ligase activity of CHIP (Alberti et al., 2004). This, in turn, decreases the ubiquitin-mediated degradation of CFTR and increases the steady-state expression of either wild type or ∆F508-CFTR in an in vitro assay. Similarly, Bag-2 interacts with CHIP and inhibits its ubiquitin ligase activity (Arndt et al., 2005). With regard to CFTR, increased Bag-2 expression increases the steady-state expression of both immature and mature CFTR in heterologous cells (Arndt et al., 2005). Bag-2 appears to stabilize the NBD1 domain of CFTR and prevent its aggregation while unfolded.
Matsumura et al. (2011) performed experiments in a cell-free system to discern the role of Hsc70 in promoting biogenesis from its role in promoting ubiquitination. Using a fragment of the Bag-1 protein to destabilize the interaction between Hsc70 and CFTR led to a decrease in CFTR ubiquitination, but had no effect on protein biogenesis (Matsumura et al., 2011). Similarly, Meacham et al. (1999) found that the interaction between Hsc70 and Hdj-2 promotes stabilization of a folding-competent CFTR intermediate and prevents aggregation of NBD1, while Zhang et al. (2006) also found that Hdj-2/Hsc70 promoted stabilization of mature CFTR and prevented aggregation. Together, these data suggest that Hsc70 and CHIP primarily cooperate to promote ERAD of clients, and that this interaction can be modified by co-chaperones. In the case of ∆F508-CFTR, a more robust association with Hsc70/CHIP portends increased ERAD.

In addition to Hsc70, degradation of newly synthesized ∆F508-CFTR is also controlled by Derlin, an ER membrane-associated complex comprising RMA1 (an E3 ubiquitin ligase), Ubc6e (an E2 ubiquitin-conjugating enzyme), and Derlin-1 (Younger et al., 2006). Derlin-1 appears to retain ∆F508-CFTR at the ER membrane and allow its recognition by Ubc6e and RMA1. Derlin-1 can interact with p97, the ATPase that extracts proteins from the ER during ERAD, within a separate complex that also targets CFTR for degradation. Derlin-1 overexpression leads to decreased wild type and ∆F508-CFTR expression, while RNAi-mediated depletion of Derlin-1 has the opposite effect. Interestingly, the Derlin complex can ubiquitinate proteins co-translationally (Younger et al., 2006), which is known to occur for CFTR (Sato et al., 1998), while CHIP/Hsc70 primarily recognizes misfolded proteins post-translationally (Younger et al., 2006). Derlin-1 degrades the CFTR fragment containing only MSD1, but not longer forms of the protein, possibly because partial CFTR folding prevents binding of Derlin-1. Together, these data suggest that Derlin and CHIP/Hsc70 have complementary roles in the surveillance of newly synthesized proteins to prevent the accumulation of misfolded proteins.

CFTR AND CHAPERONES IN THE CYTOPLASM

Folding of the cytosolic domains of CFTR requires the coordinated effort of heat shock proteins (Hsps), a large family of functionally related chaperones that promote folding and prevent aggregation of new proteins. ∆F508-CFTR demonstrates prolonged interaction with cytosolic Hsps (Yang et al., 1993; Loo et al., 1998; Rubenstein and Zeitlin, 2000; Choo-Kang and Zeitlin, 2001), indicating that these chaperones also represent potential therapeutic targets for improving ∆F508-CFTR trafficking. Hsp70, the stress-induced 70 kDa heat shock protein, and the aforementioned Hsc70 are two extensively studied members of this family. They are more than 85% identical at the amino acid level, which has led many to hypothesize that Hsp70 and Hsc70 have similar functions. Interestingly, however, Hsp70 function does not always overlap with Hsc70's, and the two often have opposite cellular effects (Gething and Sambrook, 1992; Goldfarb et al., 2006). Experimentally, Hsc70 inhibition has been shown to lead to an increase in Hsp70 expression (Aquino et al., 1996); this may represent cellular stress, as Hsp70 expression is induced by such stress (reviewed in Mayer and Bukau, 2005). The exact role of Hsp70 in CFTR function and expression remains controversial. Choo-Kang and Zeitlin examined the effect of increased Hsp70 expression on CFTR in CF epithelial cells.
In contrast to previous data (Rubenstein and Zeitlin, 2000), their data suggested that 4PBA increased Hsp70 expression and increased the Hsp70/CFTR interaction (Choo-Kang and Zeitlin, 2001). They also found that overexpression of Hsp70 enhanced the interaction between Hsp70 and ∆F508-CFTR, which promoted ∆F508-CFTR maturation (see Figure 2). Suaud et al. (2011b) recently reconciled these data and demonstrated that 4PBA causes a transient increase in Hsp70 expression by a mechanism that involves the STAT-3 transcription factor and its interacting protein, Elongator Protein 2 (Elp2). This transient increase in Hsp70 expression with 4PBA is consistent with that suggested by gene expression profiling experiments (Wright et al., 2004). Taken together, these data support a model in which Hsp70 promotes proper trafficking of ∆F508-CFTR; this contrasts with the role of its homolog, Hsc70, which, as discussed above, appears to promote ∆F508-CFTR degradation by ERAD.

In contrast, Farinha et al. (2002) found no increase in either wild type or ∆F508-CFTR maturation when both CFTR and Hsp70 were overexpressed in Chinese Hamster Ovary (CHO) cells. Instead, they saw increased wild type CFTR maturation only when Hsp70's co-chaperone, Hdj-1, was also overexpressed, but did not see a similar increase in maturation of ∆F508-CFTR. They found that Hsp70/Hdj-1 could slow the degradation rate of wild type CFTR, but not of the mutant protein, possibly because of the folded state of ∆F508-CFTR. Farinha et al. also examined 4PBA treatment of cells to determine whether the effect was similar to the results of their transient Hsp70/Hdj-1 overexpression. They observed a more rapid degradation of ∆F508-CFTR with 4PBA treatment, but no effect on wild type CFTR. This is contradictory to previous reports, which suggest that 4PBA promotes ∆F508-CFTR trafficking (Rubenstein et al., 1997; Choo-Kang and Zeitlin, 2001; Suaud et al., 2011b). This apparent disparity may result from the model systems under study. Farinha et al. used heterologous CHO cells in which CFTR (wild type or ∆F508) was overexpressed, while others (Rubenstein et al., 1997; Choo-Kang and Zeitlin, 2001; Suaud et al., 2011b) used IB3-1 CF bronchiolar epithelial cells in which ∆F508-CFTR is endogenously expressed.

Another heat shock protein, Hsp90, also plays a key role in protein homeostasis and the folding of a variety of proteins in a number of organisms (reviewed in Balch et al., 2008; Hutt et al., 2009; Powers et al., 2009). CFTR folding intermediates are stabilized by binding to Hsp90, which prolongs their half-life and aids in their trafficking and maturation (Loo et al., 1998; Fuller and Cuthbert, 2000; Wang et al., 2006). Hsp90 binding to clients depends on its ATPase activity, and both client binding and Hsp90 ATPase activity are enhanced by the presence of co-chaperones, such as Aha1 (Pearl and Prodromou, 2006). Recently, Aha1 was suggested to regulate the CFTR interaction with Hsp90, leading to increased interest in this co-chaperone (Wang et al., 2006). Sun et al. (2008) examined chaperone binding of wild type and ∆F508-CFTR and found that both proteins interacted similarly with Hsp90. Interestingly, they found that Aha1 interacted with ∆F508-CFTR with almost twice the affinity of wild type CFTR (Sun et al., 2008). They also expressed CFTR fragments in an attempt to rescue ∆F508-CFTR trafficking, as was reported in previous studies (Owsianik et al., 2003; Clarke et al., 2004; Cormet-Boyaka et al., 2004).
With one such fragment of CFTR, they saw the predicted increase in ∆F508-CFTR maturation and a corresponding decrease in Aha1 binding to ∆F508-CFTR. These data suggest that Aha1 plays an important role in the Hsp90-mediated stabilization of CFTR. Koulov et al. (2010) recently extended these findings by demonstrating that mutations introduced into both the N- and C-terminal structures of Aha1 decreased binding of Aha1 to Hsp90, which in turn decreased the ATPase activity of Hsp90 and its ability to bind client proteins. Taken together, these data suggest that Aha1 promotes the binding of Hsp90 to client proteins by increasing Hsp90's ATPase activity. While initial studies using Hsp90 inhibitors, such as geldanamycin, suggested that Hsp90 promotes ∆F508-CFTR maturation and trafficking (Loo et al., 1998; Wegele et al., 2004), studies focused on Hsp90 and Aha1 suggest an alternate mechanism (Wang et al., 2006; Koulov et al., 2010). It is likely that, similar to Hsc70, the Hsp90/CFTR interaction is complex. Perhaps initial binding between Hsp90 and CFTR leads to productive biogenesis. However, if the interaction is prolonged by CFTR's inability to fold, CFTR is targeted for degradation instead.

While many studies focus on correcting the trafficking of ∆F508-CFTR to the apical cell surface, there is evidence that regulation of this mutant's endocytic trafficking is also abnormal. In fact, wild type CFTR is efficiently recycled back to the apical cell membrane after endocytosis. In contrast, ∆F508-CFTR that is delivered to the membrane using low temperature is removed from the surface more rapidly and is recycled less efficiently than wild type CFTR (Cholon et al., 2009). These data suggest that increasing the fraction of ∆F508-CFTR that arrives at the apical cell surface, while important, may not be sufficient to increase the functional expression of this mutant protein. Interestingly, because Hsc70 is involved in endocytosis and the uncoating of clathrin-coated vesicles (DeLuca-Flaherty et al., 1990; Morgan et al., 2001), and in targeting proteins for degradation by the lysosomes (Gething and Sambrook, 1992), it seems likely that Hsc70 may also influence the stability of the wild type and mutant CFTR proteins that are expressed on the apical cell surface. These data also suggest that therapeutics which modulate the effect of Hsc70 on clathrin-mediated endocytosis may lead to increased apical membrane stability of ∆F508-CFTR.

CFTR AND CHAPERONES IN THE ENDOPLASMIC RETICULUM

The role of ER luminal chaperones in CFTR biogenesis is less well delineated. CFTR biogenesis appears to be influenced by additional molecular chaperone proteins in the ER, including calreticulin and calnexin. These proteins recognize terminal oligosaccharides on proteins modified with high mannose N-linked glycosylation and promote ER retention of "folding intermediates" until they either fold properly or undergo ERAD. Accordingly, Harada et al. (2006, 2007) found that CFTR expression and function were enhanced by RNAi-mediated depletion of calreticulin in both cultured cells and mouse models, suggesting that calreticulin negatively regulates CFTR. Because previous reports indicated that curcumin, a SERCA pump inhibitor, corrected ∆F508-CFTR trafficking to the apical plasma membrane (Egan et al., 2004), Harada et al. (2007) examined the mechanism by which this occurs. They found that curcumin downregulates calreticulin expression, leading to enhanced CFTR expression.
Though curcumin alone could not activate ∆F508-CFTR in their experiments, in combination with calreticulin knockdown it enhanced the activity of the mutant CFTR, again consistent with calreticulin negatively regulating CFTR.

Calnexin's role in regulating CFTR biogenesis is less clear. Initial reports suggest that calnexin binds to immature CFTR, and that its interaction with ∆F508-CFTR is prolonged compared to wild type CFTR (Pind et al., 1994). Based on these data, it is reasonable to hypothesize that calnexin is responsible for ER retention of ∆F508-CFTR, and may therefore represent a viable target for therapeutics to rescue ∆F508-CFTR. However, recent studies suggest a more complex picture of CFTR regulation by calnexin. One study suggested that calnexin actually decreased ERAD of ∆F508-CFTR (Okiyoneda et al., 2004), and depletion of calnexin using RNAi did not improve trafficking of newly synthesized ∆F508-CFTR (Farinha and Amaral, 2005). While calnexin might not influence CFTR trafficking as predicted, this study may have been limited by incomplete calnexin depletion. To address this possibility, a follow-up study examined CFTR trafficking in calnexin-deficient cells, or cells containing calnexin mutant proteins (Okiyoneda et al., 2008). One calnexin mutant, a truncated form that is exported from the ER, was able to bind to ∆F508-CFTR with similar affinity to the wild type. However, this mutant failed to increase the amount of ∆F508-CFTR in the Golgi, suggesting that calnexin may not be responsible for ER retention of ∆F508-CFTR. In complementary experiments, the group also employed wild type and calnexin knockout murine embryonic fibroblasts (MEFs) to address caveats of the earlier RNAi experiments. They found that wild type CFTR protein was decreased in calnexin knockout MEFs compared to MEFs containing wild type calnexin. Consistent with the RNAi experiments, they found that neither ∆F508-CFTR trafficking nor chloride transport was affected by calnexin knockout. These data suggest that calnexin is not sufficient for ER retention and degradation of the ∆F508-CFTR protein. Instead, other ER chaperone proteins may represent a stronger therapeutic target for CF patients.

Endoplasmic reticulum luminal chaperones involved in the unfolded protein response (UPR) work closely with the ERAD system. When protein folding in the ER is delayed, the UPR is activated to re-establish homeostasis within the ER by increasing the protein folding capacity of the cell and/or decreasing biosynthesis (reviewed in Schroder and Kaufman, 2005). The UPR comprises the regulator protein Grp78/BiP and a number of signal transducers, including ATF6 and PERK (Bertolotti et al., 2000; Lee, 2005). Under non-stress conditions, Grp78/BiP binds ATF6 and maintains it in an inactive state. Under ER stress, such as an excess of unfolded protein, Grp78/BiP preferentially binds to the luminal unfolded protein, which releases and allows activation of ATF6 and PERK, leading to initiation of the UPR. Because ∆F508-CFTR is a misfolded protein, Kerbiriou et al. hypothesized that ∆F508-CFTR-expressing cells would activate the UPR. Using ATF6 and Grp78/BiP as markers of the UPR, they found that protein levels of both Grp78/BiP and activated ATF6 were increased in ∆F508-CFTR-containing cells (Kerbiriou et al., 2007).
Interestingly, RNAi-mediated depletion of ATF6, but not Grp78/BiP, corrected ∆F508-CFTR trafficking, as evidenced by increased ∆F508-CFTR-mediated chloride transport and surface expression. These data suggest that the UPR pathway is involved in the retention of ∆F508-CFTR in the ER, but that Grp78/BiP is not involved directly in CFTR biogenesis. This is also consistent with earlier data from Yang et al. (1993) and Pind et al. (1994), which found no interaction between CFTR and Grp78/BiP, and no effect of Grp78/BiP on the trafficking of ∆F508-CFTR. In contrast to Kerbiriou et al., others have not found increased Grp78/BiP expression in cells expressing ∆F508-CFTR (Nanua et al., 2006). These seemingly contradictory findings may indicate a potentially transient interaction between unfolded proteins and Grp78/BiP. In addition, ERAD may be the predominant mechanism by which the cell responds to unfolded CFTR, meaning that Grp78/BiP's role in the response to ∆F508-CFTR is small, leading to a small or negligible activation of the UPR. Based on these data, it remains unclear what role the UPR plays in trafficking or internal retention of ∆F508-CFTR. Our group has recently focused on another ER chaperone and its potential role in regulating CFTR trafficking. ERp29 (ER luminal protein of 29 kDa) is ubiquitously expressed, but is especially prominent in brain and lung (Demmer et al., 1997). Its function is not entirely clear, but it is suggested to promote thyroglobulin secretion and regulate assembly of connexin hemichannels (Sargsyan et al., 2002; Hubbard et al., 2004; Baryshev et al., 2006; Das et al., 2009), and it also seems to play a role in CFTR trafficking. Our group recently demonstrated that 4PBA increased ERp29 mRNA and protein expression (Suaud et al., 2011a). We also demonstrated that overexpression of ERp29 in Xenopus oocytes and mammalian cells increased the functional and surface expression of wild type and ∆F508-CFTR, while RNAi-mediated depletion of ERp29 decreased wild type CFTR in bronchial epithelial cells (Suaud et al., 2011a). These data suggested that ERp29 protein acts to promote biogenesis of both ∆F508 and wild type CFTR, and is the first ER luminal protein described to have this role. While additional studies are necessary, these data suggest an additional mechanism by which 4PBA may correct ∆F508-CFTR biogenesis and trafficking.

MOLECULAR CHAPERONES AS PHARMACOLOGIC TARGETS

To improve the function of ∆F508-CFTR, it is important to consider the many molecular chaperones in the CFTR "interactome" as potential therapeutic targets. Though 4PBA is a prototype ∆F508-CFTR corrector, its effects are only partial. While most reports suggest that 4PBA promotes ∆F508-CFTR trafficking by decreasing Hsc70 and increasing Hsp70 (Rubenstein et al., 1997; Zeitlin, 1998, 2000; Choo-Kang and Zeitlin, 2001; Rubenstein and Lyons, 2001; Suaud et al., 2011b), another study found no 4PBA effect on these chaperones or on ∆F508-CFTR (Farinha et al., 2002). Early phase clinical trials showed a partial improvement in CFTR-mediated chloride transport in ∆F508-CFTR homozygous subjects with CF (Rubenstein and Zeitlin, 1998; Zeitlin et al., 2002), but the amount of improvement suggested that more efficacious correctors would be necessary to achieve meaningful clinical improvements. In addition to 4PBA, a variety of Hsc70 inhibitors are being examined as potential correctors of ∆F508-CFTR trafficking and may also represent therapeutic targets for treatment of CF (see Figure 3).
Apoptazole is one such drug that interferes with Hsc70. Cho et al. (2011) found that apoptazole has the potential to promote ∆F508-CFTR trafficking and activity. Apoptazole appears to disrupt the ATPase activity of Hsc70 and decreases the ubiquitination of ∆F508-CFTR by blocking the interaction between Hsc70 and CHIP. Matrine, a quinolizidine alkaloid, also downregulates Hsc70 expression, leading to an increase in ∆F508-CFTR protein levels (Basile et al., 2012). It also allows ∆F508-CFTR to exit the ER and localize to the plasma membrane, as evidenced by an increase in interaction between ∆F508-CFTR and BAG3, a co-chaperone located at the apical cell surface. Deoxyspergualin is a drug that targets both Hsc70 and Hsp90 (Nadler et al., 1992; Nadeau et al., 1994), but has no apparent effect on Hsp70. Jiang et al. (1998) found that deoxyspergualin treatment increased CFTR activity in ∆F508-CFTR-expressing cells, suggesting this drug may provide an alternate mechanism by which to affect Hsc70 and indirectly increase ∆F508-CFTR trafficking. Clinically, however, deoxyspergualin treatment poses many potential problems, likely because Hsc70 and Hsp90 are ubiquitously expressed proteins with many functions. Recently, Norez et al. explored a potential solution to this problem by constructing a form of the molecule with an adjuvant. When they generated a human serum albumin/deoxyspergualin construct, they were able to deliver the drug at lower doses, with lower toxicity, and achieve even better correction of ∆F508-CFTR trafficking than they saw with deoxyspergualin alone (Norez et al., 2008). This is a promising method by which drugs could be delivered to patients with lower toxicity. Pharmacologic agents that specifically target Hsp90 are also being studied to understand their effects on ∆F508-CFTR. Early studies showed that geldanamycin, as well as other members of the ansamycin family, targets Hsp90 and disrupts binding to CFTR (Loo et al., 1998). However, geldanamycin increased turnover of CFTR by increasing CFTR's susceptibility to ERAD. Based on these data, it seems that geldanamycin would be detrimental, rather than helpful, in CF patients. However, more recent data provided a completely different picture. Using an in vitro system, Fuller and Cuthbert (2000) found that geldanamycin interferes with degradation of ∆F508-CFTR by disrupting ubiquitination. The caveat of this study is that it was conducted using rabbit reticulocyte lysates, rather than cell or animal models. Further investigation into geldanamycin or other Hsp90 inhibitors is needed and would provide a more complete picture of the role that these agents play in maturation of the mutant CFTR protein. The identification of ER luminal chaperones, such as ERp29, that modulate CFTR and ∆F508-CFTR biogenesis is an exciting new development. These chaperones may be useful targets for development of novel ∆F508-CFTR corrector strategies.

CONCLUSION

Patients currently receive therapeutics primarily aimed at treating symptoms of Cystic Fibrosis (CF; Ashlock and Olson, 2011; Cuthbert, 2011), although the first mechanism-based therapy for CF patients harboring a CFTR gating mutation like G551D was recently approved. For most people with CF this is not a permanent solution, so new therapies that can target the underlying pathology of the defect are needed.
This is a difficult task, as ∆F508-CFTR correctors tested thus far have had only limited efficacy (Rubenstein and Zeitlin, 1998), likely due to the complexities of CFTR folding and trafficking. Targeting chaperone proteins that influence CFTR, rather than CFTR itself, holds promise for success. Because of their ubiquitous expression and interactions with so many cellular proteins, small changes in chaperone level or function may have dramatic effects on client proteins, such as CFTR. It is important to keep in mind that the molecular chaperone functions described here (ERAD, UPR, folding, etc.) are tightly regulated and highly evolved to prevent the prolonged existence of unfolded or improperly folded proteins. In order to overcome the ∆F508-CFTR trafficking defect, it is necessary to find ways to bypass and/or change the set point of these quality control mechanisms. The system redundancy, highlighted by chaperone proteins with similar or overlapping roles (e.g., Hsc70/CHIP and Derlin), adds a level of security which is essential to the cell, but difficult to overcome from a scientific perspective. A very delicate balance must be struck if a highly efficient therapeutic agent is to be found. The compound must prolong the lifetime of the misfolded ∆F508-CFTR protein, in order to allow proper folding. However, increased half-life might also lead to increased chaperone binding, which, as in the case of Hsp90, can counterproductively force the cell to degrade misfolded proteins (Koulov et al., 2010). Because a large fraction of newly synthesized ∆F508-CFTR is degraded by the ubiquitin-proteasome pathway, inhibition of the proteasome might seem like an attractive therapeutic strategy. However, inhibiting proteasomal degradation does not increase the functional ∆F508-CFTR at the apical cell surface (Ward and Kopito, 1994; Ward et al., 1995). Instead, inhibiting the proteasome led to intracellular accumulation of ubiquitinated immature ∆F508-CFTR without increasing surface expression and function. In addition, proteasomal inhibition leads to increased cellular stress due to accumulation of misfolded proteins, which in turn induces expression of heat shock proteins, such as Hsp70, Hsc70, and Hsp90 (Liao et al., 2006), and may lead to cell apoptosis/death (Fribley et al., 2004; Park et al., 2011). These data suggest that inhibition of the proteasome is not a viable therapeutic option for correcting ∆F508-CFTR trafficking. Unfortunately, there are a number of difficulties that scientists face in designing therapeutics to correct ∆F508-CFTR. Many of the studies on CFTR and chaperones have been conducted using overexpression systems. This, of course, is necessary for detection of the extremely low-level expression of ∆F508-CFTR in cells where the protein is not overexpressed. However, this overexpression makes interpretation of the results somewhat more difficult. In addition, while commonly used non-epithelial cell models facilitate the overexpression of wild type and ∆F508-CFTR, non-epithelial cells do not endogenously express CFTR, so their responses to overexpression may not be physiologically relevant (as discussed above, Farinha et al., 2002). Studies performed in these models must be validated using epithelial cells. CFTR expression varies between epithelial tissue types. Kalin et al. examined samples from CF patients as well as healthy human samples using immunohistochemistry.
They found that the wild type CFTR protein could be detected in sweat glands, lung epithelia, and villi and goblet cells in the intestine (Kalin et al., 1999). In contrast, ∆F508-CFTR could not be detected in sweat glands, but expression in the lung and intestine was very similar to wild type CFTR. While this study did not address the functional activity of ∆F508-CFTR in these tissues, these data suggest that CFTR processing defects may be tissue type-specific and that ∆F508-CFTR processing may affect some tissues more than others. Further study of chaperone function in a range of epithelial tissues is required to fully understand their role in CFTR trafficking and activity. The recent generation of novel animal models of CF, such as the ferret and pig, and the study of their disease pathology are of great benefit to the advancement of this field as a whole (reviewed in Fisher et al., 2011; Keiser and Engelhardt, 2011). While the role of chaperones in CFTR trafficking has yet to be investigated in these models, future interrogations of epithelial cells from these models will undoubtedly yield a great deal of insight into both underlying physiology and therapeutic approaches. Many chaperone proteins are upregulated in response to cellular stress, which may result from overexpression of exogenous proteins or increased abundance of misfolded proteins in the ER. Overexpressing ∆F508-CFTR may lead to a specific activation of proteins needed to fold the mutant, or instead cause a global upregulation of chaperone proteins involved in ERAD or the UPR, simply by increasing cellular stress. Studies examining overexpression of both wild type and ∆F508-CFTR lend credence to the hypothesis that the response is specific to the mutant protein, but this is still a concern that needs to be addressed when designing therapeutics. Many pharmacologic agents that correct ∆F508-CFTR trafficking do so by an as yet unknown mechanism. Though many chaperones have been extensively studied, there are still aspects of our understanding that are lacking. This is evidenced by the studies with seemingly contradictory data discussed above. As an additional caveat, chaperone proteins have many targets and interact with an abundance of proteins in response to cellular stress. While changes in chaperone expression may positively influence ∆F508-CFTR expression, the effects on other important protein pathways could have unforeseen negative consequences. The use of these pharmacologic agents must be understood in the context of these other roles for chaperones within the cell. Building an even greater knowledge base of molecular chaperones and ∆F508-CFTR, in the context of the CFTR "interactome," will help to fill in the gaps and lead to a better understanding of the pharmacologic agents, as well as the proteins that they target. Finally, ∆F508-CFTR interacts with many other proteins during its lifetime, and it may not be possible to design a single molecule to correct all its potentially problematic interactions. Instead, a combination of therapeutics may be more appropriate and effective. Targeting multiple chaperones may allow therapies to avoid the trap of decreasing a single molecular chaperone protein too much. Small changes in multiple chaperones may provide the balance needed to prolong the life of ∆F508-CFTR enough to allow proper folding, but not so much that it is recognized by ERAD or the UPR. These sorts of small changes to multiple chaperones may also help create therapies with fewer toxic side effects.
Indirect Influences on Directed Manifolds We introduce a program aimed to studying problems arising from the theory of complex networks with differential geometric means. We study the propagation of influences on manifolds assuming that at each point only a finite number of propagation velocities are allowed. This leads to the computation of the volume of the moduli spaces of directed paths, i.e. paths satisfying the imposed tangential restrictions. The proposed settings provide a fertile ground for research with potential applications in geometry, mathematical physics, differential equations, and combinatorics. We establish the general framework, develop its structural properties, and consider a few basic examples of relevance. The interaction between differential geometry and complex networks is a new and promising field of study. Introduction Our aim in this work is to lay down the foundations for the study of the propagation of influences on directed manifolds. Our object of study can be approached from quite different viewpoints as indicated in the following, non-exhaustive, diagram: Ind. Inf. on Graphs Indirect Influences on Directed Manifolds Feyman Integrals Our departure point is the theory of indirect influences for weighted directed graphs which has gradually emerged thanks to the efforts of several authors, among them Brin, Chung, Estrada, Godet, Hatano, Katz, Page, Motwani, and Winograd. Although the history of the subject is yet to be written, for our purposes we may consider the introduction of the Katz's index [18] as an early modern approach to the problem of understanding the propagation of influences in complex networks. Fundamental developments in the field came with the introduction of the MICMAC [13], PageRank [3,4], Communicability [11], and Heat Kernel [7] methods. In 2009 the second author proposed the PWP method for computing the propagation of influences on networks [8]. In a nutshell the method proceeds as follows. We assume as given a network (weighted directed graph) represented by its adjacency matrix D, also called the matrix of direct influences. Then one defines the matrix T = T (D) of indirect influences whose entry T ij measures the weight of the indirect influences exerted by vertex j on vertex i. The matrix T is computed using the following expression: where λ is a positive real parameter. In words: indirect influences arise from the concatenation of direct influences, and the weight of a concatenation of length n comes from the product of n entries of D and the factor λ n n! ensuring convergency by attaching a rapidly decreasing weight to longer chains of direct influences. The PWP method has been applied to analyse educational programs, and to study indirect influences in international trade [9]. Further extensions and applications are underway. The stability of the method with respect to changes in the matrix D and the parameter λ has been recently studied in [10]. Our first proposal in this work is that one may regard a differential manifold provided with a tuple of vector fields on it -we call such an object a directed manifoldas being a smooth analogue of a directed graph with numbered outgoing edges attached to each vertex. Armed with this intuition we pose the question: Is there an extension of the theory of indirect influences from the discrete to the smooth settings? We argue that the answer is in the affirmative, and that such an extension both interplays with many notions already studied in the literature, e.g. 
control theory [1,17,23,25,28], Feynman integrals [24,12,29], and directed topological spaces [14], and also demands the introduction of new ideas. Constructing a smooth analogue for the PWP matrix of indirect influences -whose entries are given by sums over paths in a graph -leads directly to Feynman path integrals, understood in the general sense of integrals over spaces of paths on manifolds. Although of great interest, we follow an alternative approach in order to avoid the usual difficulties that have prevented, so far, the development of a fully rigorous general integration theory over infinite dimensional manifolds. Thus, in order to reduce our computations to finite dimensional integrals, we impose strong tangential conditions on the allowed paths in our domains of integration. The background upon which we develop our constructions is the category of directed manifolds, introduced in Section 2, which is also a convenient category for studying geometric control theory. Our constructions bring about a new set of problems to geometric control theory -usually focus on the path reachability and path optimization problems -namely, the problem of computing integrals over the moduli spaces of directed paths. We remark again that strong tangency conditions are imposed in order to insure that the moduli spaces of directed paths -also called the spaces of indirect influences -split naturally into infinitely many finite dimensional pieces, each coming with a natural measure. Thus we have a notion of integration over each piece, which we extend additively to the whole moduli space of directed paths, leaving the convergency of these sums to a case by case analysis. Fortunately, in our examples we do obtain convergent sums. These ideas are developed in Section 3, where we also introduce the wave of influences u(p, t) which computes the total influence received by a point p in time t, i.e. u(p, t) computes the volume of the moduli space of directed paths starting at an arbitrary point and ending up at p in time t. Our notion of directed manifolds is strongly related to the notion of directed spaces introduced by Grandis [14], and to some extend the former notion may be regarded as a smooth analogue of the latter. In Section 3 we make this connection precise. In Section 4 we discuss invariant properties for directed manifolds and for the moduli spaces of directed paths on them. We also study invariant properties with respect to reordering of our given tuple of vectors fields. We propose a possible route for using our spaces of indirect influences to approach integrals with more general domains of integration, such as Feynman path integrals. Whether this approach can actually be implemented to work as a viable computational technique is left for future research. In Section 5 we study the moduli spaces of directed paths on product and quotient of directed manifolds. In Section 6 we study the moduli spaces of directed paths arising from constant vector fields on affine spaces. We show that even in this case, the simplest one, our theory yields results worth studying where explicit computations are available. These settings give rise to fruitful constructions in combinatorics and probability theory [5]. Finally, in the closing Section 7 we indicate how our general settings for computing indirect influences, based on the computation of the volume of moduli spaces of directed paths, can be extended to the quantum context adopting a Hamiltonian viewpoint. Notation. 
For n ∈ N, we set [n] = {1, ..., n}, [0, n] = {0, ..., n}, and let P[n] be the set of subsets of [n]. The amalgamated sum of closed subintervals of the real line R is given by We let δ ab be the Kronecker's delta function. Basic Definitions We let diman be the category of directed manifolds. A directed manifold is a tuple is a map, and the following identity holds Let (g, β) : (N, w 1 , ..., w l ) −→ (K, z 1 , ..., z r ) be another morphism. The composition morphism (g, β) • (f, α) is given by: It satisfies the required property since One can think of a directed manifold (M, v 1 , ..., v k ) as being a smooth analogue of a "finite directed graph with up to k outgoing numbered edges at each vertex". Points in the manifold M are thought as vertices in the smooth graph. The tangent vectors v i (p) ∈ T p M are thought as infinitesimal edges starting at p. The out-degree of a vertex p ∈ M is the number of non-zero infinitesimal edges starting at p, i.e. the cardinality An actual edge from p to q, points in M, is a smooth path ϕ : [0, t] −→ M with ϕ(0) = p, ϕ(t) = q, and such that the tangent vector at each point of ϕ is an infinitesimal edge, i.e.φ = v i (ϕ) for some i ∈ [k], or more explicitlẏ We say that p exerts a direct influence, in time t > 0, on the vertex q through the path ϕ. Note that ϕ is determined by the initial point p, and the index i of vector field v i , thus we are entitled to use the notation ϕ(t) = ϕ i (p, t), where ϕ i is the flow generated by the vector field v i . .., v k ) be a directed manifold and p, q ∈ M. The set of one-direction paths D p,q (t) from p to q developed in time t > 0 is given by We also set i.e. each point of M exerts a direct influence over itself in time t = 0, and there are no t = 0 direct influences between different points of M. Thus D p,q defines a map D p,q : R ≥0 −→ P[k]. We also say that D p,q (t) is the set of direct influences from p to q exerted in time t > 0. There might also be one-direction paths from p to q taking an infinite long interval of time to be exerted, these influences occur through a path ϕ : R −→ M such that lim t→−∞ ϕ(t) = p and lim t→∞ ϕ(t) = q. Semi-infinite direct influences can be similarly defined. One might also consider topological direct influences from p to q which are exerted through a path ϕ : R −→ M such that p ∈ ωlim t→−∞ ϕ(t) and q ∈ ωlim t→∞ ϕ(t). We will no further consider one-direction paths of these types in this work. Next we introduce the notion of indirect influences which arise from the concatenation of direct influences. Our focus is on finding a convenient parametrization for the space of all such concatenations. Definition 2. Let (M, v 1 , ..., v k ) be a directed manifold and p, q ∈ M. A directed path from p to q displayed in time t > 0 through n ≥ 0 changes of directions is given by a pair (c, s) with the following properties: • c = (c 0 , c 1 , ..., c n ) is a (n + 1)-tuple with c i ∈ [k] and such that c i = c i+1 . We say that c defines the pattern (of directions) of the directed path (c, s). We let D(n, k) be the set of all such tuples, and l(c) = n + 1 be the length of c. There are k(k − 1) n different patterns in D(n, k). Note that we may regard a pattern c as a map c : [0, n] −→ [k]. • s = (s 0 , ..., s n ) is a (n + 1)-tuple with s i ∈ R ≥0 and such that s 0 + · · · + s n = t. We say that s defines the time distribution of the directed path (c, s), and let ∆ t n be the n-simplex of all such tuples. • The pair (c, s) determines a (n + 2)-tuple of points (p 0 , . . . 
, p n+1 ) ∈ M n+2 given by: where ϕ c i−1 is the flow generated by the vector field v c i−1 . We denote the last point p n+1 by ϕ c (p, s). • The pair (c, s) must be such that ϕ c (p, s) = q. • Directed paths in time t = 0 are the same as one-direction paths in time t = 0. Remark 3. By definition directed paths include one-direction paths as well, even for the conventional case t = 0. We also say that (c, s) determines an indirect influence from p to q exerted in time t. The fact that our paths are displayed in non-negative time means that indirect influences propagate forward in time. Remark 4. The geometric meaning of directed paths is made clear through the following construction. A pair (c, s) as above determines a piece-wise smooth path such that the restriction of ϕ c,s to the interval [0, s i ], for 0 ≤ i ≤ n, is given by We say that ϕ c,s is the directed path determined by the pair (c, s). Indirect influences are exerted through such directed paths. Whenever necessary we write ϕ v,c,s instead of ϕ c,s to make explicit that these paths do depend on the vector fields v = (v 1 , ..., v k ). Figure 1 shows the directed path associated to a pair (c, s). Note that directed paths in the sense above are examples of horizontal paths as defined in geometric control theory [1]. Remark 5. Our notion of indirect influences on directed manifolds may be regarded as a limit case of the propagation of disturbances in geometric optics, see Arnold [2]. In geometric optics one works with a Riemannian manifold M, and is given a map v : SM −→ R ≥0 from the unit sphere bundle of M to the non-negative real numbers. The number v(l) gives the speed allowed for the propagation of a disturbance along the direction l. Indirect influences on a directed manifold (M, v 1 , ..., v k ) correspond to the propagation of disturbances in geometric optics, if one lets v be the singular map that is zero everywhere except at the directions defined by the vector fields v j , and on this directions it assumes the values |v j |. Note that the notion of indirect influences does not demand a Riemannian structure on M. Figure 2 illustrates the relation between indirect influences and geometric optics, by displaying the deformation of the indicatrix surface (the image of v) from a smooth ellipses to a curve concentrated on three vectors. • The group condition ϕ j (ϕ j (p, s 1 ), s 2 ) = ϕ j (p, s 1 + s 2 ) holds for s 1 , s 2 ∈ R. We regard the n-simplex ∆ t n introduced in Definition 8 as a smooth manifold with corners. There are at least three different approaches to differential geometry on manifolds with corners. First we can apply differential geometric notions on the interior of ∆ t n . Also it is possible to introduce differential geometric objects on ∆ t n by considering objects that are smooth on an open neighborhood of ∆ t n in R n+1 . A third and more intrinsic approach for doing differential geometry on ∆ t n relies on deeper results in the theory of manifolds with corners. For a fresh approach the reader may consult [16]. Although this more comprehensive approach is certainly desirable, for simplicity, we will not further consider it. Proposition 7. For a pattern c ∈ D(n, k), the map sending a pair (p, s) ∈ M × ∆ t n to the point ϕ c (p, s) ∈ M is a smooth map and a diffeomorphism for a fixed time distribution s ∈ ∆ t n . Next we introduce the main objects of study in this work, namely, the moduli spaces of directed path, also called the spaces of indirect influences, on directed manifolds. 
These spaces parametrize directed paths from a given point to another. .., v k ) be a directed manifold and p, q ∈ M. The moduli space Γ p,q (t) of directed paths from p to q developed in time t > 0 is given by In addition we set ∅ otherwise, Figure 3 shows a schematic picture of a component Γ c p,q (t) of the moduli space of indirect influences. Remark 9. For a fixed pattern c the continuity of the iterated flow ϕ c (p, ) implies that the moduli space of directed paths Γ c p,q (t) is compact, as it is a closed subspace of ∆ t n . The moduli spaces of directed paths come equipped with the structure of a category. Indeed directed paths are pretty close of being the free category generated by one-direction path, but not quite since we have ruled out repeated directions. Theorem 10. Altogether the moduli spaces of directed paths on a directed manifold form a topological category. Proof. Given a directed manifold (M, v 1 , ..., v k ) we let Γ = Γ(M, v 1 , ..., v k ) be the category of directed paths on M. The objects of Γ are the points of M. Given p, q ∈ M, the space of morphisms in Γ from p to q is given by In order to define continuous composition maps • : Γ p,q × Γ q,r −→ Γ p,r , it is enough to define componentwise composition maps for given patterns c and d with n = l(c) and m = l(d). We consider two cases: (s 0 , ..., s n ) • (u 0 , ..., u m ) = (s 0 , ..., s n , u 0 , ..., u m ). These compositions are well-defined continuous maps satisfying the associative property. The unique t = 0 directed path from p ∈ M to itself gives the identity morphism for each object p ∈ Γ. Remark 11. The moduli spaces of directed paths Γ p,q (t) can be extended from points to arbitrary subsets of M as follows. Given A, B ⊆ M we define the moduli space of directed paths from A to B as Restricting attention to embedded oriented submanifolds of M, and following techniques from Chas and Sullivan's string topology [6], this construction gives rise to some kind of transversal category. We close this section introducing a few subsets of M useful for understanding the propagation of influences on M. These sets are usually called the reachable sets in geometric control theory, and are natural generalizations of the corresponding graph theoretical notions. They also play a prominent role in general relativity [22]. For A ⊆ M we set: i.e. the set of points on which A depends on time t. such that Γ q,A (s) = ∅} is the set of points in M that influence A in time less or equal to t. are called, respectively, the front of influence and the front of dependence of A in time t. Note that a directed manifold M is naturally a pre-poset by setting The associated poset is the quotient space M ∼ , where the equivalence relation ∼ on M is given by The space M ∼ tell us how M splits into components of co-influences, i.e. the path connected components of M through directed paths. Note that a directed manifold (M, v 1 , ..., v k ) comes equipped with a natural distribution, indeed for each point p ∈ M we have the subspace generated by the vectors v 1 (p), ..., v k (p). If this distribution is integrable, then directed paths are confined to live on the leaves. Thus to study the moduli spaces of directed paths, in the integrable case, we may as well forget about the manifold M and work leaf by leaf. So the interesting cases of study are: Measuring the Moduli Spaces of Directed Paths Fix a directed manifold (M, v 1 , ..., v k ). In order to measure directed paths on M we assume from now on that an orientation on M has been chosen. 
To gauge the amount of indirect influences exerted, in time t, by a point p ∈ M on a point q ∈ M we need to define measures on the modulis spaces Γ p,q (t) of directed paths. From Definition 8 we see that Γ p,q (t) is a disjoint union of pieces, one for each pattern c ∈ D(n, k), of the form Γ c p,q (t) = {s ∈ ∆ t n | ϕ c (p, s) = q}. So, our problem reduces to imposing measures on the pieces Γ c p,q (t). The n-simplex ∆ t n is a smooth manifold with corners, and comes equipped with a Riemannian metric and its associated volume form. Indeed using Cartesian coordinates the n-simplex can be identified with the following subset of R n : Thus ∆ t n inherits a Riemaniann metric, an orientation, and the corresponding volume form dl 1 ∧ · · · ∧ dl n . With this measure we have that has smooth spaces of directed path if for any pattern c ∈ D(n, k) and points p, q ∈ M the space of indirect influences Γ c p,q (t) is a smooth embedded sub-manifold of ∆ t n . For our next result we use the implicit function theorem for manifolds [15,26]. Let f : N −→ M be a smooth map between differential manifolds and fix q ∈ M. Then Next we apply this result to the open part of manifolds with corners. Proof. Fix c ∈ D(n, k) with n ≥ 1. Recall from Remark 6 that ϕ c : M × R n+1 −→ M is the iterated flow associated to c. The differential of ϕ c naturally split as: Consider the map φ : ∆ t n −→ M given by where we are using the identification In order to guarantee that Γ c p,q (t) = φ −1 (p) is a smooth sub-manifold of ∆ t n we impose the condition that d s φ has maximal rank for s ∈ φ −1 (p). Next we compute for i ∈ [0, n − 1] the vectors Using the identity ∂ ∂s n (ϕ c 0 ,··· ,cn )(p, s 0 , · · · , s n ) = v cn (ϕ c 1 ,··· ,cn (p, s 0 , · · · , s n )), one can show that ∂φ ∂s i (s) is given by where we recall that s n = t − |s|, Thus the rank of d s φ is maximal at each point s ∈ φ −1 (q) if and only if there are exactly min(n, dim(M)) linearly independent vectors among the vectors ∂φ ∂s i (s) given by the expression above. We have shown the desired result. Corollary 14. Under the hypothesis of Theorem 13, the interior of the moduli space Γ c p,q (t) is an oriented Riemannian sub-manifold of ∆ t n . Proof. We use oriented differential intersection theory as developed by Guillemin [15]. Since Γ c p,q (t) is a smooth sub-manifold of ∆ t n it acquires by restriction a Riemannian metric. The orientation on Γ c p,q (t) arises as follows. n is oriented, and N s Γ c p,q (t) acquires an orientation from the isomorphism above, then T s Γ c p,q (t) naturally acquires an orientation. For a directed manifold with a smooth moduli space of directed paths each piece Γ c p,q (t) ⊆ ∆ t n acquires from ∆ t n a Riemannian metric. If in addition we assume that each piece Γ c p,q (t) is given an orientation, then Γ c p,q (t) acquires a volume form denoted by dl c . As we have just shown this is the situation arising from the conditions of Theorem 13. We are ready to highlight a few functions on the moduli spaces of directed paths, for a fix a time t > 0, that one would like to integrate against these measures. Volume of Moduli Space of Directed Paths. Each component Γ c p,q (t) of the space of indirect influences is compact and thus of bounded volume. We define the volume or total measure of Γ p,q (t), leaving convergency issues to be discussed on a case by case basis, as follows: vol(Γ c p,q (t)). 2. Functions on directed paths coming from differential 1-forms on M. Let A be a differential 1-form on M. 
We formally write where the map A : Γ c p,q (t) −→ R is given by with ϕ c,s : [0, s 0 + · · · + s n ] −→ M the directed path associated to (c, s) ∈ Γ p,q (t). Functions on directed paths from Riemannian metrics on M. Let g be a Riemannian metric on M. We formally write where e −lg : Γ c p,q (t) −→ R is the map given by ǫ −lg (c, s) = e −lg(ϕc,s) and l g (ϕ c,s ) is the length of the path ϕ c,s , i.e.: Functions on direct paths from functions on M. Given a smooth map f : M −→ R we formally write Functions on directed paths from Lagrangian functions on T M. Let L : T M −→ R be a Lagrangian map. In the applications L is usually built from a Riemannian metric g on M and a potential map U : M −→ R as follows: Given a Lagrangian L we consider the following analogue of the Feynman integrals: This example both reveals the relations and differences between our constructions and Feynman integrals. Whereas in the latter arbitrary paths are taken into account, with our methods only paths with speeds and directions prescribed by the vector fields v 1 , ..., v k are allowed. Also, instead of looking for a measure on the space of all paths, we first decompose our space of paths into several pieces, and then impose a measure on each piece. Fortunately, each piece is finite dimensional and thus we have at our disposal the usual techniques coming from Riemannian geometry. Convergency of the sum of the integrals over each piece is to be studied in a case by case fashion. Remark 15. In our examples we have found that the infinite sums defining the integrals above are actually convergent. Nevertheless, convergency is not a built-in property and should not be expected in general. To improve convergency properties one may look at the exponential generating series instead. For example, the vol function defined above can be replaced by the function vol λ , with λ a positive real parameter, defined as follows: Clearly, this technique can be applied as well to the other quantities defined above. Moreover, if necessary, we may regard λ as a formal parameter. We have shown how to construct and integrate functions on the moduli spaces of directed paths on directed manifolds. So let us pick one such a function and call it g. Integrating over the moduli spaces of directed paths we obtain the kernel for the propagation of influences k : M × M × R −→ R which is given by where we assume that Γ − q (t) is a compact oriented smooth sub-manifold of M; thus it acquires by restriction a Riemannian metric, and comes with a volume form dp. Let us consider a couple of examples. • Let g be the map constantly equal to 1, we have that vol(Γ p,q (t))f (p) dp. • For g = e i S where S is the action defined by a Lagrangian map, we have that Invariance, Involution, and Limit Properties Let (M, v 1 , ..., v k ) be a directed manifold and f : M −→ N be a diffeomorphism. Then we obtain the directed manifold where the push-forward vector fields f * v i are given for q ∈ N by With this notation we have the following result. Moreover, if (M, v 1 , ..., v k ) has a smooth moduli space of directed paths, and f is an orientation preserving diffeomorphism, then the identification above is an identity between Riemannian manifolds, and in particular we obtain that vol(Γ M p,q (t)) = vol(Γ N f (p),f (q) (t)). Proof. We show that s ∈ Γ M,c p,q (t) if and only if s ∈ Γ N,c f (p),f (q) (t). 
By construction we have that and thus by induction on the length of c we have that and therefore the equations For the second part we show that the identity map Γ M,c p,q (t) −→ Γ N,c f (p),f (q) (t) preserves orientation. Since the identity map preserves the splittings we just have to show that N s Γ M,c p,q (t) and N s Γ N,c f (p),f (q) (t) are given compatible orientations. This follows by construction, see the proof of Theorem 13, as the square is a commutative diagram of orientation preserving isomorphisms, see Corollary 14. Next result tell us how the moduli spaces of directed paths depend on the ordering on vector fields. Moreover, if (M, v) has a smooth moduli space of directed paths, then so does (M, vα) and we have that vol(Γ v p,q (t)) = vol(Γ vα p,q (t)). Proof. We regard the permutation α as a map It follows that α is an homeomorphism as its restriction map is just the identity map and is a well-defined homeomorphism since In the case of a smooth moduli space of directed paths, the map above is clearly orientation preserving, since it is just the identity map, and we have a commutative diagram of orientation-preserving isomorphisms And therefore the respective reachable sets are related by: If (M, v 1 , ..., v k ) has a smooth moduli space of directed paths, then so does (M, −v 1 , ..., −v k ) and the maps above are actually diffeomorphisms. These diffeomorphisms may or may not preserve orientation. In quantum mechanics the proposed integration domain of a Feynman integral is usually the space of differentiable paths, with fixed endpoints, on a manifold. We think of the moduli spaces of directed paths Γ p,q (t) as being analogues for the integration domains for Feynman path integrals, where in addition to boundary restrictions, we impose tangential restrictions on the allowed paths; these restrictions induce a partition of pathspace into finite dimensional pieces. The question arises: Can we somehow approach the full Feynman domains of integration from the moduli spaces of directed paths? In other words, is it possible to relax our definition of directed paths, or perform some kind of limit procedure that allow us to approach Feynman integrals from the viewpoint of indirect influences? We left this problem open for future research, and limit ourselves to make a couple of remarks along this line of thinking. Clearly what one should do is to allow more paths into our moduli spaces. One way to go is to replace the vector fields v j by sections of the projective tangent bundle PT M, so that one fixes the directions along which our curves can move, but leave the speeds unconstrained. Although this approach may be of interest, finite dimensionality is lost. Incidentally, this approach establishes the connection with directed topological spaces [14]. Instead we propose another approach. Given a directed manifold we consider the tuple v(a, b) of vector fields on M, for a, b ∈ N + , given by the lexicographically ordered set: Indirect influences on the directed manifold (M, v(a, b)) are exerted trough paths along the directions defined by the vector fields v j with rather arbitrary speeds, if a and b are large numbers. Piecewise finite dimensionality is preserved for a and b fix. To relax even further the restrictions on the paths in our moduli spaces we consider directed manifolds of the form (M, < v(a, b) >) where in < v(a, b) > we include all vector fields that are finite sums of vector fields in v(a, b). 
Indirect influences in (M, < v(a, b) >) are exerted trough paths with rather arbitrary speeds and directions; for example, if the vector fields in v(a, b) at some point contain a basis of the tangent space, then essentially all directions and speeds are allowed, for a and b large, at that point. Piecewise finite dimensionality is preserved for a and b fix. The fundamental question is whether it is possible to make any sense of the limit of the moduli spaces of directed path for the spaces (M, < v(a, b) >) as a and b grow to infinity, a question however beyond the scope of this work. Let diman be the category of directed manifolds. We allow in diman manifolds with connected components of different dimensions, and assume by convention that the set with one element is a directed manifold. Indirect Influences on Product/Quotient Manifolds Proposition 20. The product defined above gives diman the structure of a monoidal category with unit the set [1]. Proof. The desired homeomorphism sends to the pair Next we consider the moduli spaces of directed paths on quotient manifolds. Let M be a smooth manifold, G a compact Lie group acting freely on M, and assume that the directed manifold (M, v 1 , ..., v k ) is invariant under the action of G, i.e. the following identities hold: Then M/G is a smooth manifold and it comes with a smooth quotient map Note also that we have isomorphisms Thus we obtain the directed manifold (M/G, v 1 , ... , v k ) with v i = dπ(v i ). Directed Paths for Constant Vector Fields As a first and pretty workable example, linking the theory of indirect influences on directed manifolds with linear programming techniques, we consider constant vector fields on affine spaces. Thus we fix a directed manifold (R d , v 1 , ..., v k ) where the vector fields Theorem 23. Consider the directed manifold (R d , v 1 , ..., v k ). Fix a pattern c ∈ D(n, k) and points p, q ∈ R d . The space of directed paths Γ c p,q (t) is the convex polytope given on the variables s ∈ R n+1 ≥0 by the system of equations: a ic(0) s 0 + · · · + a ic(n) s n = q i − p i , for i ∈ [d], and s 0 + · · · + s n = 1, or equivalently in matrix notation where A c is the matrix of format d × (n + 1) given by: .., s n ), p = (p 1 , ..., p d ), and q = (q 1 , ..., q d ). Proof. The result follows from the fact that the solutions of the differential equationṗ = v, where v is constant and with initial condition a, are of the form p(t) = a + tv. Theorem 24. Consider the directed manifold (R d , v 1 , ..., v k ). For p, q ∈ R d , the volume of the space of directed paths Γ c p,q (t) is given by vol(Γ c p,q (t)) = vol(Conv(u I )), where: Conv(u I ) is the convex hull of the vector u I defined by the following conditions: is a subset of cardinality n. • The entries of the vector u I ∈ R n+1 ≥0 vanish for indexes not in I. • For a matrix A we let A I be its restriction to the columns with indexes in I. The set I must be such that • u I is the unique solution of the linear system: Proof. Theorem 23 and standard results of linear programming [21,27] one can show that Γ c p,q (t) = Conv(u I ). Dimension One Consider the directed manifold (R, a 1 d dx , ... , a k d dx ) where for simplicity we assume that a i = a j . Fix a pattern c ∈ D(n, k) and consider the space Γ c 0,x (t) of directed paths from 0 to x exerted in time t. The space Γ c 0, is the convex polytope defined by the equations a c(0) s 0 + · · · + a c(n) s n = x and s 0 + · · · + s n = t. 
Consider the set By Theorem Γ c 0,x (t) is the convex polytope Conv(u ij ) generated by the vectors u ij , given for (i, j) ∈ D by Below we use the following identity, valid for n, m ∈ N, involving the classical beta B and gamma Γ functions: . For x, y ∈ R we have that vol(Γ 0,x (t)) = 0 if |x| > t, vol(Γ 0,x (t)) = 1 if |x| = t, and otherwise is given by: Furthermore, we have that vol(Γ x,0 (t)) = vol(Γ 0,x (t)) and vol(Γ x,y (t)) = vol(Γ 0,y−x (t)). The wave of influences for t > 0 is given by and is given explicitly by u(x, t) = 10e t + 6e −t − 16. Proof. Fix x ∈ R and a pattern c ∈ D(n, k). The space of directed paths Γ c 0,x (t) is the polytope given by Since we have just two vector fields, a pattern (c 0 , ..., c n ) is determined by its initial value c 0 . Figure 4 shows the directed path associated to the tuple (7, 5, 3, 7) ∈ Γ We distinguish four cases taking into account the initial value c 0 and the parity of n. Next we show that the wave of influences is constant in the variable x. Making the change of variables y − x → y we get that: vol(Γ 0,y (t))dy = u(0, t). To compute u(0, t) we make the change of variable y = t(2s − 1) in the integral Dimension Two Consider the directed manifold (R 2 , ∂ ∂x , ∂ ∂y ), and let Γ(x, y) = Γ (0,0),(x,y) be the moduli space of directed paths from (0, 0) to (x, y). Note that such influences can only happen at time t = x + y, and thus there is no need to include the time variable in the notation. Figure 4 shows the directed path associated to the tuple (1, 3, 2, 1) ∈ Γ (2,1,2,1) (4, 3). In our next results we use the following notation. For k ∈ N we set x n y n+k n!(n + k)! and x n+k y n (n + k)!n! . The following result is easy to check. • For l, m ∈ N and k ∈ Z we have that • For k ∈ N, the function i k (x, y) is given in terms of the modified Bessel function where we recall that (z 2 /4) n n!Γ(v + n + 1) . 4. vol(Γ(x, y)) is a symmetric function in x and y. 7. Only points (x, y) ∈ R 2 ≥0 on the segment x + y = t receive an influence from (0, 0) at time t ≥ 0. Among the points on this segment, the highest influence from (0, 0) is exerted on the point ( t 2 , t 2 ); the volume of the moduli space of directed paths from (0, 0) along the line of maximal influences is given by vol(Γ(t, t)) = 2 ∞ n=0 n ⌊n/2⌋ t n n! . Proof. Item 1 is clear, and item 2 simply counts the influences that arise, respectively, from the patterns (1) and (2). Let us show 3. Since k = 2, a pattern (c 0 , ..., c n ) is determined by its initial value c 0 . For (x, y) ∈ R 2 >0 we distinguish four cases taking into account the initial value c 0 and the parity of n. Item 5 follows from item 3 and Lemma 26. Item 6 is a particular case of item 5. Let us show item 7. Let vol n (Γ(x, y)) be the n-th coefficient in the series expansion of vol(Γ(x, y)) from item 3. The points influenced by (0, 0) at time t are of the form (s, t − s) with 0 < s < t. Thus: The sign of the expression above is determined by the sign of (t − 2s), as the other factors are positive. Thus the volume of the moduli space of directed paths from (0, 0) exerted on time t achieves a global maximum at the point ( t 2 , t 2 ), and we have that vol(Γ(t, t)) = 2 ∞ n=0 t 2n n! 2 + t 2n+1 (n + 1)!n! = 2 ∞ n=0 2n n t 2n (2n)! + 2n + 1 n t 2n+1 (2n + 1)! = 2 ∞ n=0 n ⌊n/2⌋ t n n! . Item 8. By translation invariance the wave of influence is independent of x, y. Thus we have that Next we consider the moduli spaces of directed paths on the torus T 2 = S 1 × S 1 . 
We use coordinates (x, y) ∈ R 2 representing the point (e 2πix , e 2πiy ) ∈ T 2 . Consider the vector fields on T 2 given in local coordinates by ∂ ∂x and ∂ ∂y . The moduli space of directed paths on the torus T 2 from (1, 1) to (e 2πix , e 2πiy ) exerted in time t > 0 is denoted by Γ(e 2πix , e 2πiy , t). Recall that D(e 2πix , e 2πiy , t) is the set of one-direction paths. Higher Dimensions Let us first introduce a few combinatorial notions. Given integers n 1 , . . . , n k ∈ N >0 we let Sh k (n 1 , . . . , n k ) be the set of shuffles of n 1 + · · · + n k cards divided into k blocks of cardinalities n 1 , . . . , n k . Recall that a shuffle is a bijection α from the set to itself such that if i < j ∈ [1, n s ], then α(i) < α(j) ∈ [1, n 1 + · · · + n k ]. When we shuffle a deck of cards the idea is to intertwine the cards in the various blocks, without distorting the order in each block. We say that a shuffle is perfect if no contiguous cards within a block remain contiguous after shuffling, i.e. a shuffle α is called perfect if for i, i + 1 ∈ [1, n s ] we have that α(i) + 1 < α(i + 1) ∈ [1, n 1 + · · · + n k ]. Let PSh k (n 1 , . . . , n k ) ⊆ Sh k (n 1 , . . . , n k ) be the set of perfect shuffles, and psh k be the corresponding exponential generating series given by psh k (x 1 , . . . , x k ) = n 1 ,...,n k ∈N >0 |PSh k (n 1 , . . . , n k )| x n 1 1 · · · x n k k n 1 ! · · · n k ! . Proof. If A ∈ S k [m], then |A c | = m − k, and A c comes with a naturally ordered partition with exactly k − 1 blocks if 1, m ∈ A, k blocks if 1 or m (but not both) belong to A, and k + 1 blocks if 1, m / ∈ A. The cardinalities of the blocks of A c provides the various kinds of numerical partitions needed to complete our result. Proof. A perfect shuffle in PSh k (n 1 , . . . , n k ) is determined by its image on each of the blocks [1, n s ], which must be a sparse subsets. Let us point out the relation between patterns and perfect shuffles. Consider the map | | : C(n, k) −→ N k , sending a pattern c ∈ C(n, k) to its content multi-set given by the sequence |c| ∈ N k such that |c| i = |c −1 (i)|. The support of a pattern c is the set s(c) ⊆ [k] with i ∈ s(c) if and only if |c| i = 0. Proof. The vector (n 1 , . . . , n k ) gives us the content multi-set of c, a shuffle on it gives us in addition the order of the vector c. The perfect condition on shuffles is equivalent to the conditions c(i) = c(i + 1) on patterns. 3. For (x 1 , . . . , x k ) ∈ R k ≥0 , with at least two positive entries, the moduli space Γ(x 1 , . . . , x k ) of directed paths from (0, . . . , 0) to (x 1 , . . . , x k ) has volume vol(Γ(x 1 , . . . , x k )) = Thus a pattern c ∈ C(n, k) with support s(c) = A ⊆ [k], with |A| ≥ 2, contributes to the monomial x n 1 1 · · · x n k k n 1 ! · · · n k ! , if and only if |c| i = n i + 1 for i ∈ A, and n i = 0 for i / ∈ A. Therefore the total contribution of the patterns with support A to this monomial is given by A⊆[k] where n A is the vector obtained from the tuple (n 1 , ..., n k ) by erasing the zero entries, and n A + 1 is the vector obtain from n A by adding 1 to each entry. Summing over the n j , and setting x A = (x j ) j∈A , we obtain that the total contribution of the patterns with support A to the volume of the moduli space of direct ed paths is given by Adding over all possible supports A ⊆ [k], with |A| ≥ 2, we obtain the desired result. Quantum Indirect Influences In this closing section we briefly describe how to extend the theory of indirect influences to the quantum settings. 
We first consider indirect influences on Poisson manifolds [2] from two different viewpoints.
Optimized fast data migration for hybrid DRAM/STT-MRAM main memory

In order to reduce the main memory energy of IoT terminals, STT-MRAM is used to replace DRAM to save refresh energy. However, the write performance of STT-MRAM cells is worse than that of DRAM. Our previous work proposed a hybrid DRAM/STT-MRAM main memory and fast data migration to reduce the adverse effects of the poor write performance of STT-MRAM cells with negligible performance overhead. This article optimizes the migration algorithm and the experimental scheme: 1. Reduce the storage overhead of the algorithm. 2. Realize continuous operation of the algorithm. 3. Consider the impact of system standby time on main memory energy. The results show that, compared with our previous work, the storage overhead of the algorithm is reduced by 99.8%. When the system standby time is zero, the energy of the hybrid main memory (including the energy of the algorithm) is reduced by 4% on average compared to DRAM. The longer the system standby time, the greater the energy savings.

Introduction

With the development of Internet of Things (IoT) technology, the number of IoT terminal devices has grown exponentially. Battery-powered terminals require a low-energy main memory. As one of the new non-volatile memories (NVMs), STT-MRAM has many advantages [1]. Compared with other new NVMs, it has the fastest access speed and the strongest endurance. Compared with DRAM, it has no leakage current and does not need refreshing. At present, the key task for STT-MRAM to replace DRAM is to expand it to higher densities. In addition, the write performance of STT-MRAM cells needs to be improved [2]. In theory, STT-MRAM can be continuously scaled to below 10-nm. However, this ideal behavior will encounter many challenges in mass production, requiring continuous innovation in manufacturing technology [3]. For standalone STT-MRAM, commercial products have been continuously launched [4][5][6], and the current maximum capacity is 1Gb [7], which can meet the memory capacity requirements of some IoT terminals. For example, a lightweight neural network, as a complex application, has weight parameters of about 10MB [8]. If the adverse effects of the poor write performance of STT-MRAM can be controlled, STT-MRAM will be a good choice for terminal low-energy main memory. Our previous work proposed a hybrid DRAM/STT-MRAM main memory and fast data migration to reduce the adverse effects of poor STT-MRAM write performance, with negligible performance overhead [9]. This article further optimizes the migration algorithm and the experimental scheme. We optimize the structure of the miss table and the selection of DRAM migration blocks to reduce the storage overhead of the algorithm, realize a replacement mechanism for the migration table to ensure continuous operation of the algorithm, and, when calculating the main memory energy, consider different system standby times to get closer to real-world scenarios. The rest of this article is organized as follows. Section 2 introduces related work. Section 3 introduces the optimized migration algorithm. Section 4 describes the experimental setup. Section 5 discusses the experimental results, and Section 6 summarizes the article.

Related work

Research on new NVM/DRAM hybrid main memory began to appear in 2009, mainly involving PCM and DRAM. There are two types of organization: hierarchical and parallel. For hierarchical organization, Qureshi et al. [10] proposed a main memory consisting of PCM coupled with a small DRAM buffer. Lee et al. [11] proposed a novel write-only DRAM cache for PCM. Park et al.
Park et al. [12] addressed the power management of DRAM cache and PCM. Yoon et al. [13] proposed a new caching policy for DRAM/PCM hybrid memory. For parallel organization, Dhiman et al. [14] proposed PDRAM, a novel energy-efficient main memory architecture together with system policies. Zhang et al. [15] presented a hybrid PRAM/DRAM memory architecture and exploited an OS-level paging scheme. Ramos et al. [16] proposed a new hybrid design that features a hardware-driven page placement policy. The above management schemes for parallel hybrid main memory require the participation of the OS, which involves dividing responsibilities between software and hardware. They are not suitable for embedded systems, especially when the embedded system does not use virtual memory. STT-MRAM has been widely studied as an on-chip cache [17][18][19][20], and few studies have used it as a main memory. Meza et al. [21] showed that reducing the size of the row buffer can greatly reduce the dynamic energy of the NVM main memory. Kultursay et al. [22] showed that the energy and performance of STT-RAM without any optimization cannot compete with DRAM; partial write and row buffer write bypass can significantly improve the performance and energy of STT-RAM main memory. Wang et al. [23] solved performance issues caused by the small MRAM page size. The above circuit-level optimizations within the MRAM can be combined with the architecture-level optimization proposed in this article. Asifuzzaman et al. [24][25] investigated the feasibility of using STT-MRAM in high-performance computing systems and real-time embedded systems. However, the MRAM timing parameters there come from estimates, and reliable, publicly available timing parameters are unavailable [26]. We proposed a hybrid DRAM/STT-MRAM main memory and fast data migration [9].

Optimized migration algorithm

Standalone STT-MRAM adopts a DDRx interface design, which can directly replace DRAM. The structure of the STT-MRAM chip is similar to that of DRAM, and each bank in the chip has a row buffer. When the row buffer hits, the access reads or writes the row buffer directly. When the row buffer misses, the opened page is first precharged back to the array, and then the page to be accessed is activated into the row buffer. Only precharge and activation involve the array. Therefore, reducing the adverse effects of poor STT-MRAM cell performance requires reducing the number of precharges and activations; in other words, the number of MRAM row buffer misses needs to be reduced. We propose fast data migration, which migrates frequently missed MRAM data to DRAM. The structure of the hybrid memory is shown in Fig. 1.

Miss table

As shown in Fig. 1, a miss table needs to be implemented in the MRAM sub-controller to record the number of MRAM row buffer misses caused by different pages. If the miss table recorded the misses caused by all pages, its size would be 512KB (the number of pages is 512K, and the width is set to 1 byte). Considering the cost and area overhead, such a table is not suitable for implementation in an on-chip memory controller. In this article, the miss table only records the most recent MRAM row buffer misses. The depth of the miss table is set to 64, and the width is increased by 19 bits to record the address of the missed page. When the miss table is full, LRU is used to replace the least recently accessed entry in the table. This structure is reasonable, because a program usually accesses only part of the memory pages, and pages that missed a long time ago do not need to be retained.
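To make the miss-table bookkeeping concrete, the following Python sketch models a 64-entry, fully associative, LRU-replaced miss table; the entry layout (a page address plus a 1-byte saturating miss counter) follows the description above, while the class name, the migration threshold, and the example addresses are illustrative assumptions rather than details of the authors' hardware implementation.

from collections import OrderedDict

class MissTable:
    # Fully associative table of recently missed MRAM pages (sketch).
    # Each entry maps a 19-bit page address to a 1-byte saturating miss
    # counter; LRU replacement is used when all entries are occupied.
    def __init__(self, depth=64, counter_max=255):
        self.depth = depth
        self.counter_max = counter_max
        self.entries = OrderedDict()              # page -> miss count, in LRU order

    def record_miss(self, page):
        # Update the table on an MRAM row-buffer miss and return the count.
        if page in self.entries:
            self.entries.move_to_end(page)        # mark as most recently used
            count = min(self.entries[page] + 1, self.counter_max)
        else:
            if len(self.entries) >= self.depth:
                self.entries.popitem(last=False)  # evict the LRU entry
            count = 1
        self.entries[page] = count
        return count

# Illustrative use: treat a page as a migration candidate once it has missed
# MISS_THRESHOLD times (the threshold value is assumed, not from the article).
MISS_THRESHOLD = 4
table = MissTable()
for page in (0x12, 0x7F, 0x12, 0x12, 0x30, 0x12):
    if table.record_miss(page) >= MISS_THRESHOLD:
        print(f"page {page:#x} is a candidate for migration to DRAM")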
The workflow of the miss table is shown in Fig. 2.

Migration table

In order to achieve correct access after migration, a migration table needs to be implemented in the hybrid memory controller to record the address of the migrated block, as shown in Fig. 1. In our previous work, once the migration table overflowed, the algorithm was disabled; if the program then continued to access MRAM and caused a large number of row buffer misses, the performance of the hybrid main memory was greatly reduced. This article implements a replacement mechanism for the migration table. After an MRAM block is migrated to DRAM, a counter records the number of times it has been accessed. When the migration table is full, considering the temporal locality of the program, the least recently accessed MRAM block is migrated back to MRAM. The workflow of the migration table is shown in Fig. 3. The time cost of migrating back is tested in the following experiment. In addition to the miss table, the migration table also records the actual addresses of the MRAM migration block and the DRAM migration block, respectively. In this article, the capacity ratio of DRAM and MRAM in the hybrid main memory is 1:1, and the two migrated blocks need to have the same offset address in DRAM and MRAM, as shown in Fig. 1. Therefore, only one entry is required per migration, and the entry only needs to record this shared offset address. When accessing the block located at that offset address in DRAM or MRAM, the access is simply redirected to the opposite memory. In this way, a migration table of the same capacity can record more migrations while the program is running, which further reduces the number of MRAM row buffer misses.

Experimental setup

We built a hybrid DRAM/STT-MRAM main memory using Micron 1Gb x8 DDR3 SDRAM [27] and Everspin 256Mb x8 DDR3 STT-MRAM [28] Verilog models. We modified the capacity of the DRAM model to 256Mb. The model configuration and parameters are shown in Table 1. The technology nodes of the DRAM and STT-MRAM models are 2x-nm and 40-nm respectively, which makes the results more favorable to DRAM. We implemented three main memory structures: 256MB pure DRAM, 256MB hybrid memory composed of 128MB DRAM and 128MB STT-MRAM, and 256MB pure STT-MRAM. Each main memory structure contains two ranks. In the hybrid memory, rank0 is composed of four 256Mb DRAMs in parallel, and rank1 is composed of four 256Mb STT-MRAMs in parallel. The system configuration is shown in Table 2. The experiment is divided into three parts: 1) Test the effect of the optimizations: (a) the miss table and (b) the replacement mechanism of the migration table. 2) Compare the three main memory structures: (a) the memory access time and energy when the program is running, and the total execution time of the program; (b) the memory energy when the system standby time is 1ms, 10ms, and 100ms (the processor waits for the specified time before starting to fetch instructions). 3) Test the overhead of the algorithm, including storage overhead, performance overhead, and energy overhead.

Experimental results

Fig. 4 shows the impact of the miss table optimization on MRAM row buffer misses. Fig. 5 shows the impact of the migration table replacement mechanism on MRAM row buffer misses. When the depth of the migration table is 512, the migration table does not overflow, and the number of misses under the two mechanisms is the same. As the depth of the migration table decreases, the table overflows. When the depth is 256 and 64, the number of misses under the replacement mechanism is reduced by 8% and 38%, respectively, compared with that under the disabled mechanism. When the depth is 128, the numbers of misses under the two mechanisms are close.
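The same-offset redirection and the migrate-back policy described in the migration table section can be sketched as follows. The article specifies a per-entry access counter and eviction of the least recently accessed block; exactly how the counter and recency are combined is not spelled out, so the victim selection below (fewest accesses since migration) is an assumption, as are the class and method names.

class MigrationTable:
    # Offset-indexed migration table for a 1:1 DRAM/MRAM hybrid (sketch).
    # Because a migrated pair shares the same offset in DRAM and MRAM, one
    # entry per migration suffices: the offset plus an access counter.
    def __init__(self, depth=512):
        self.depth = depth
        self.entries = {}                    # offset -> accesses since migration

    def redirect(self, memory, offset):
        # Return which memory actually holds the block at this offset.
        if offset in self.entries:
            self.entries[offset] += 1
            return "DRAM" if memory == "MRAM" else "MRAM"   # go to the opposite side
        return memory

    def add_migration(self, offset):
        # Register a new MRAM -> DRAM migration, evicting a victim if full.
        victim = None
        if len(self.entries) >= self.depth:
            # Assumed victim policy: the entry with the fewest accesses since
            # migration is chosen and migrated back to MRAM.
            victim = min(self.entries, key=self.entries.get)
            del self.entries[victim]
        self.entries[offset] = 0
        return victim                        # caller migrates this block back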
The behavior at a depth of 128 is related to how the program accesses MRAM after the migration table overflows. The more misses there are, the more effective the replacement mechanism is. Migrating back brings additional reads and writes to DRAM and MRAM, but in general the reduction in the number of MRAM row buffer misses achieved by the replacement mechanism completely offsets this overhead. Fig. 6 shows the normalized memory access time. It can be seen that, compared with DRAM, the access time of the hybrid memory increases by 1% on average, while the access time of MRAM increases by 32% on average. This is because MRAM has higher activation and precharge delays and more activation and precharge operations (the capacity of the MRAM row buffer is one-sixteenth of that of the DRAM row buffer, so it misses more easily). For matrxmul and convolution, the access time of MRAM does not increase compared with DRAM. This is because these programs generate fewer activations and precharges, and DRAM refresh increases the access time of the DRAM. However, the access time of the hybrid memory is always comparable to that of DRAM, because the migration algorithm can effectively control the number of MRAM precharges and activations, and the time overhead of migration is negligible. Fig. 7 shows the normalized program execution time. It can be seen that the program execution time is less sensitive to the increased delay of MRAM. Compared with DRAM, the program execution time of the hybrid memory does not increase, and the average increase for MRAM is 8%. Fig. 8 shows the normalized memory energy. It can be seen that, compared with DRAM, the energy of the hybrid memory is reduced by 15% on average, while the energy of MRAM is 3.38 times that of DRAM on average. This is because the activation-precharge energy of the MRAM is too high, completely exceeding the saved refresh energy. In the hybrid memory, the migration algorithm controls the activation-precharge energy well, and the refresh energy is reduced by half. However, for stringsearch, the energy of MRAM is the lowest, and for cnn_layer, the energy of the hybrid memory cannot be reduced. This is related to the ratio of refresh energy to total memory energy: the higher the ratio, the more energy MRAM saves. Fig. 9 shows the ratio of each memory sub-energy to the total memory energy in DRAM. Memory energy can be divided into read energy, write energy, activation-precharge energy, and refresh energy [29]. It can be seen that for stringsearch, Eref accounts for 96% and Eact-pre accounts for 2%; the refresh energy saved by the MRAM completely covers the increased activation-precharge energy. For cnn_layer, Eref accounts for 30% and, worse, Eact-pre accounts for 46%; the refresh energy saved by the hybrid memory cannot keep up with the increased activation-precharge energy. In addition, for convolution, Eref accounts for 86% and Eact-pre accounts for 5%, yet the hybrid memory energy dropped by only 11% (as shown in Fig. 8). This is because the migration algorithm does not work well for convolution (as shown in Fig. 4).

Fig. 8. Memory energy normalized to DRAM.
Fig. 9. The ratio of memory sub-energy to total memory energy.

Fig. 10 shows the average memory energy under different system standby times. It can be seen that, as the system standby time increases, the hybrid memory becomes increasingly energy-efficient compared to DRAM. When the system standby time is 100ms, the energy of MRAM is the lowest. This is because when the system is in standby, only DRAM consumes refresh energy.
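The trend in Fig. 10 can be reproduced with a simple energy model: total energy = active-phase energy + refresh energy accumulated during standby, where the hybrid memory pays half of the DRAM refresh power and MRAM pays none. In the sketch below the active-phase values encode the average normalized results quoted above (DRAM = 100, hybrid = 85, MRAM = 338), while the DRAM refresh energy per millisecond is a made-up placeholder chosen only to illustrate the crossover, not a measured value.

# Toy model of memory energy versus system standby time (normalized units).
ACTIVE_ENERGY = {"DRAM": 100.0, "hybrid": 85.0, "MRAM": 338.0}   # from the Fig. 8 averages
DRAM_REFRESH_PER_MS = 6.0            # hypothetical refresh energy per ms of standby
REFRESH_SHARE = {"DRAM": 1.0, "hybrid": 0.5, "MRAM": 0.0}

def total_energy(structure, standby_ms):
    refresh = DRAM_REFRESH_PER_MS * REFRESH_SHARE[structure] * standby_ms
    return ACTIVE_ENERGY[structure] + refresh

for t_ms in (0, 1, 10, 100):
    row = {s: round(total_energy(s, t_ms)) for s in ACTIVE_ENERGY}
    print(f"standby {t_ms:>3} ms:", row)
# With these placeholder numbers the hybrid memory is cheapest at short standby
# times, MRAM becomes cheapest at 100 ms, and the hybrid's saving relative to
# DRAM approaches (but never reaches) 50% as the standby time grows, matching
# the behaviour described in the text.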
In standby, compared with DRAM, the refresh energy of the hybrid memory is halved, while MRAM has no refresh energy at all. The longer the system standby time, the greater the advantage of MRAM. It should be noted that, in the 1:1 hybrid memory, the energy reduction ratio of the hybrid memory can only approach, but never reach, 50%.

The overhead of the algorithm

The storage overhead of the algorithm mainly comes from the migration table and the miss table. These two tables are constructed as fully associative caches with LRU replacement, with a block size of 1 byte. The size of the migration table is about 1KB, and the size of the miss table is about 0.2KB. Compared with our previous work (where the size of the migration table was about 2KB and the size of the miss table was 512KB), the storage overhead of the algorithm is reduced by 99.8%. Table 3 shows the performance and energy of the migration table and the miss table, estimated using CACTI 7.0 [30]. The performance overhead of the algorithm is very small, which is the contribution of the fast data migration proposed in our previous article [9]. Normally, only one clock (tCK=1.5ns) is added to the critical path of a memory access, to query the migration table and determine whether the block to be accessed has been migrated. When the MRAM is accessed and the row buffer misses, a delay of two clocks is added to read and write the miss table. When a migration occurs, the migration itself and the update of the migration table after the migration completes are both executed in parallel with the memory access. The delay of the algorithm is already included in the memory access time (Fig. 6). The energy overhead of the algorithm mainly comes from the leakage current energy of the migration table and the miss table. The miss table is read and written only when the MRAM row buffer misses, and the migration table is written only when a migration occurs. Although the migration table is read on every memory access, the resulting energy is very small compared with the ever-present leakage current energy. The additional DRAM and MRAM read and write energy caused by migration is already included in the memory energy (Fig. 8). Fig. 11 shows the impact of the algorithm's energy overhead under different system standby times. After adding the algorithm energy, the energy of the hybrid memory is reduced by 4% on average compared to DRAM (system standby time = 0ms). In the future, we will try to implement the tables with embedded STT-MRAM instead of SRAM to solve the problem of large leakage current.

Conclusion

This article first reduces the storage overhead of the migration algorithm without affecting the migration effect. Then, continuous operation of the algorithm is realized, which better controls the number of MRAM row buffer misses. After accounting for the overhead of the algorithm, the hybrid memory can still reduce memory energy without affecting system performance, and as the system standby time increases, the energy saved by the hybrid memory grows. Therefore, compared with DRAM, the hybrid memory is more suitable for battery-powered IoT terminals, especially in scenarios with long standby times, such as smart homes and smart agriculture.
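As a quick check of the storage-overhead figures reported in the overhead discussion above, using only numbers taken from the text:

# Miss table: 64 entries x (19-bit page address + 1-byte counter) ~= 0.2KB.
miss_table_bytes = 64 * (19 + 8) / 8          # 216 bytes
# Total overhead now vs. the previous design (2KB migration table + 512KB miss table).
previous_kb = 2 + 512
optimized_kb = 1 + miss_table_bytes / 1024
reduction = 1 - optimized_kb / previous_kb
print(f"miss table ~{miss_table_bytes:.0f} B, overhead reduced by {reduction:.1%}")  # ~99.8%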
2021-12-07T16:09:23.872Z
2022-01-01T00:00:00.000
{ "year": 2021, "sha1": "dd0fc8423194442795eda7a0babc886723281d23", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/elex/advpub/0/advpub_18.20210493/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "dfe6f0ef8aded411fd6f0eb0905cd1fc7362cd2d", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
17688130
pes2o/s2orc
v3-fos-license
A Fossil Record of Galaxy Encounters The cosmic infrared background (CIRB) is a record of a large fraction of the emission of light by stars and galaxies over time. The bulk of this emission has been resolved by the Infrared Space Observatory camera. The dominant contributors are bright starburst galaxies with redshift z ~ 0.8; that is, in the same redshift range as the active galactic nuclei responsible for the bulk of the x-ray background. At the longest wavelengths, sources of redshift z>2 tend to dominate the CIRB. It appears that the majority of present-day stars have been formed in dusty starbursts triggered by galaxy-galaxy interactions and the build-up of large-scale structures. The CIRB The x-ray background discovered in 1962 by Giacconi and his collaborators during a pioneering rocket experiment was first partially resolved into individual sources in the soft energy range by the Roentgen X-ray Satellite (ROSAT) (1), then more deeply and in a wider energy range by the present-day x-ray observatories Chandra and X-ray Multi-Mirror (XMM-Newton) (2)(3)(4). Most of the sources are active galactic nuclei (AGNs), supermassive black holes in the center of galaxies that are accreting matter at a high rate. Recent spectroscopic studies of these sources with the Very Large Telescope (VLT) at the European Southern Observatory revealed that they mostly lie at redshifts (z) below 1, with a mean value of z ∼ 0.7 (4). In the same way, the light emitted by stars, integrated over time, is expected to generate an almost uniform background. In the optical, a lower limit to this background was established by calculating the integrated contribution of galaxies in the deepest field observed, the Hubble Deep Field North (HDFN) (5,6). The existence of an infrared (IR) background in the 5-to 15-µm wavelength range was also predicted (7) but was attributed to the redshifted ultraviolet (UV) or optical light from very early galaxies. In 1983, the first all-sky survey at mid-infrared (MIR) and far-infrared (FIR) wavelengths (12 to 100 µm), performed by the Infrared Astronomical Satellite (IRAS), brought about a revolution in our understanding of IR emission from local galaxies. Since the IRAS data were acquired, we know that in the nearby Universe galaxies globally radiate about two-thirds of their light below λ = 5 µm (i.e., through direct stellar light); the remainder is absorbed by dust in the interstellar medium and re-emitted at dust temperatures (i.e., in the IR above 5 µm). Moreover, a new class of galaxies was discovered [(8) and references therein] that radiate the bulk of their luminosity in the FIR, between 5 and 1000 µm. These galaxies, with bolometric luminosities larger than 10 11 or 10 12 solar luminosities, are classified as luminous or ultraluminous infrared galaxies (LIRGs or ULIRGs), respectively. They produce only 2 % of the bolometric luminosity density in the local Universe, and the starbursts in them are nearly always triggered by galaxy-galaxy interactions. These galaxies must have been more numerous in the past, when the Universe was denser and galaxies richer in gas. Unfortunately, IRAS was not sensitive enough to detect distant objects, but counts of sources at 60 µm in a few deeper fields already showed hints of evolution-that is, an increase in the source density or luminosity in the past. 
The extraction of a CIRB from COBE data [(9, 10) and references therein], 34 years after the discovery of the x-ray background, was almost simultaneous with the introduction of new IR and submillimeter observing facilities on the ground [the James Clerk Maxwell Telescope (JCMT) and the Institut de Radioastronomie Millimetrique (IRAM) 30-m telescope] and in space (the Infrared Space Observatory). The CIRB is a measure of the stellar light radiated in the optical and UV (over the history of the Universe) that was absorbed by dust and thermally reradiated in the IR in the 5-to 1000-µm range. The energy density of this background, about 200 times that of the x-ray background and equal to or greater than that of the optical background, came as a surprise. It implies that in the past a larger fraction of starlight was absorbed by dust and that giant starbursts were more common than now. But when? Or at what distance from us? When trying to assess at which epoch the Universe was most active in the IR, the first clue is the shape of the CIRB spectrum. It is reminiscent of the spectral energy distribution of galaxies, as observed by IRAS, exhibiting a hump at a wavelength that for starburst galaxies or for LIRGs is located at ∼ 80 µm. With a peak intensity of the CIRB around λ ∼ 140 µm (Fig. 1), and if we assume that the spectral energy density of distant starbursts is similar to that of the local ones, then the sources responsible for the bulk of the CIRB should be located around a redshift of z ∼ 0.8; that is, we see them as they were about 7 billion years ago, when the Universe was about half as old as it is today (11). A contribution of more distant galaxies at larger wavelengths is suggested by the slope of the CIRB between 300 and 1000 µm, which is flatter than the spectral energy distribution of a single galaxy at z ∼ 0.8 (12). Further studies will be needed to identify the sources of the CIRB and to see whether these conjectures are confirmed. Identification of the Galaxies Responsible for the CIRB Ideally, one would wish to observe the IR sky with sufficient spatial resolution at ∼140 µm to pinpoint the individual galaxies producing the peak intensity of the CIRB. Unfortunately, this has not yet been possible. The ISOPHOT detector on board the Infrared Space Observatory (13) did find a population of galaxies emitting at 170 µm, which are one order of magnitude more numerous than expected if the number density and luminosity of IR galaxies had remained constant with time (14). The combined contribution of these galaxies to the CIRB amounts to only ∼10 % of its value as measured by COBE (14,15) (Fig. 1), although fluctuation analysis indicates that fainter sources contributing to a greater extent to the CIRB are also present in the ISOPHOT (16,17) and IRAS images (18). Identifications are difficult because of the relatively large error box [full width at half maximum (FWHM) of the point spread function (PSF) = 50 arc sec], but it appears that the sources detected are either nearby or rare, extremely bright distant objects. In the MIR, the gain of sensitivity of the Infrared Space Observatory with respect to IRAS was more than three orders of magnitude. Deep surveys at 15 µm with the camera ISOCAM, also on board the Infrared Space Observatory, yielded an excess of detections of up to a factor of 10 with respect to what would be expected if the relevant galaxy populations had not evolved in the last 10 billion years (19). 
This constitutes another proof that the IR luminosity of distant galaxies and/or their density were much larger in the past than they are today. Integrating over the ISOCAM source counts, a lower limit to the CIRB at 15 µm was established (11) (Fig. 1). ISOCAM spectra of local galaxies of all types [(20) and references therein] show a set of features in the MIR ( Fig. 1) that are attributed to large molecules, probably polycyclic aromatic hydrocarbons (PAHs) (21) transiently heated to a few hundred kelvin. These features facilitate the detection by ISOCAM of starburst galaxies up to redshifts < 1.3 (Fig. 2). These galaxies invariably have easily identifiable optical counterparts whose IR colors are indistinguishable from those of optically selected galaxies, but they exhibit strong H emission (22)(23)(24)(25). Their redshift distribution peaks around z ∼ 0.7 to 0.8 (11,26,27), as expected if they are responsible for the bulk of the intensity of the CIRB at its peak. Their FIR emission was evaluated using the MIR-FIR relation observed for local galaxies (11,28). The FIR luminosity of galaxies correlates strongly with the radio continuum (29), as it does with the MIR at least up to z ∼ 1 (11). It is generally assumed that massive stars are responsible for the UV photons that heat the IR-emitting dust and, when they explode as supernovae, for the acceleration of electrons producing the radio continuum. In the future, the Herschel satellite will detect these galaxies directly in the FIR up to z ∼ 3 (Fig. 2), provided that the spectral energy densities in these distant galaxies with low metallicity and possibly different distributions of grain sizes and abundances of polycyclic aromatic hydrocarbons (30,31) are not too different from the local ones. MIR surveys with ISOCAM reach a sensitivity of ∼0.1 mJy at 15 µm; that is, they are able to detect any galaxy producing more than 20 solar masses of stars per year up to a redshift of z = 1, hence over the last 60 % of the history of the Universe. Using the MIR-FIR correlations, it is possible to derive a total IR luminosity for each of the galaxies. Integrating the emissions, it was found that the galaxies detected in ISOCAM deep and ultradeep surveys are responsible for about two-thirds of the peak and integrated intensity of the CIRB. About 75 % of these galaxies are LIRGs (∼55 %) and ULIRGs (∼20 %) (11); they produce stars with a median rate of about 50 solar masses per year. As a consequence, the density of IR luminosity (per unit of comoving volume) produced by the IR-bright galaxies at z ∼ 1 was 70 ± 35 times their presentday luminosity density. This shows that even though LIRGs and ULIRGs play a negligible role in the local Universe, they were important actors in the past and represent a common phase in the evolution of galaxies in general. An excess of faint galaxies was also detected with the bolometer array SCUBA on the James Clerk Maxwell Telescope down to the confusion limit (2 mJy) (32), accounting for about 20 % of the CIRB at 850 µm [(33) and references therein]. Deeper surveys using gravitational lensing resolved 60 % of the CIRB at 850 µm into individual galaxies (33). Unfortunately, the large beam size and the large redshifts favored by this wavelength range have limited the identification of the optical counterparts of the bulk of the sources, and thus the determination of their redshifts, except in rare cases using interferometry (34)(35)(36). 
In a recent study of bright SCUBA galaxies with radio counterparts (37) that allow secure identifications, it was inferred that some of these are indeed powerful ULIRGs located around z ∼ 2. However, the contribution of sources brighter than 8 mJy to the CIRB is not dominant (38). Models have been constructed that fit ISOCAM, ISOPHOT, and SCUBA galaxy counts as well as the CIRB itself (12,26,39,40). There is a degeneracy in the parameters assumed, defining the relative roles played by the evolution of galaxies in luminosity and density with time, but all the models share some general conclusions: About 80 % of the peak of the CIRB at 140 µm is due to galaxies closer than z = 1.5; this explains why ISOCAM deep surveys were so efficient in finding the sources of the CIRB. In contrast, about 70 % of the intensity of the CIRB at 850 µm is due to galaxies more distant than z = 1.5 (28), of which SCUBA is already detecting the brightest members. This also explains why ISOCAM and SCUBA preferentially detect different populations of galaxies but nonetheless obtain perfectly consistent results. Overall, 85 % of the integrated light of the CIRB can be attributed to IR luminous galaxies (LIRGs and ULIRGs). The CIRB and Large-Scale Structure Formation In the local Universe, nearly all ULIRGs are produced by the merging of two spiral galaxies that will probably result in one intermediate-mass elliptical galaxy (41,42). About 75 % (43) of the local ULIRGs already present a luminosity profile following a r1/4 law, typical of early-type galaxies (ellipticals or S0s). The origin of the starburst phase in LIRGs is less evident, but a recent study of local objects (44) shows that it is also linked to galaxy environment ranging from advanced mergers to pairs of spiral galaxies. In the same vein, less than half of the ISOCAM galaxies exhibit the disturbed morphology typical of merging galaxies, but it is likely that tidal interactions or previous encounters triggered the starbursts, even in the apparently undisturbed ones. The fact that the integrated contribution of bright starbursts to the cosmic star formation history or to the CIRB dominates over that of galaxies forming stars at moderate rates not only implies that most galaxies must have experienced such a phase in their lifetimes but also suggests that each of them went through several such phases (39). In summary, the CIRB appears to be a fossil record of numerous encounters and/or mergers of galaxies, responsible for their briefly prominent IR brightness. An intriguing corollary is that luminous IR galaxies at redshifts lower than z ∼ 1.3 may also be responsible for the formation of the majority of present-day stars, as well as of heavy elements, in the local Universe. Indeed, because LIRGs and ULIRGs dominate the cosmic star formation rate history over that estimated on the sole basis of direct UV light (26,28), they should also dominate in the production of the low-mass stars present today, unless the initial mass function of stars in these starbursts is strongly depleted of low-mass stars. Assuming an updated version of the classical Salpeter initial mass function departing from it below one solar mass (45), the models of Chary and Elbaz (28)-which fit the CIRB and account for ISOCAM and SCUBA results-predict that 60 % of present-day stars were born nearer than z ∼ 1.3, that is, during the most recent 65 % of the age of the Universe (40 % below z ∼ 1, 80 % below z ∼ 2). 
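The redshift-to-time statements in this article (z ≈ 0.8 corresponding to roughly 7 billion years ago, when the Universe was about half its present age, and z ≈ 1.3 to the most recent ~65% of cosmic history) can be checked with a standard flat ΛCDM lookback-time integral; the cosmological parameters below (H0 = 70 km/s/Mpc, Ωm = 0.3) are assumed for illustration and are not quoted in the article.

import numpy as np
from scipy.integrate import quad

H0 = 70.0                       # km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7     # flat Lambda-CDM (assumed)
H0_PER_GYR = H0 * 1.022e-3      # 1 km/s/Mpc ~ 1.022e-3 per Gyr

def E(z):
    return np.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def lookback_gyr(z):
    # Lookback time: integral of dz' / [(1+z') H(z')] from 0 to z.
    return quad(lambda zp: 1.0 / ((1.0 + zp) * E(zp)), 0.0, z)[0] / H0_PER_GYR

age_now = lookback_gyr(np.inf)  # total age of the Universe, ~13.5 Gyr
print(f"age of the Universe: {age_now:.1f} Gyr")
for z in (0.8, 1.3):
    t_lb = lookback_gyr(z)
    print(f"z = {z}: lookback {t_lb:.1f} Gyr, "
          f"{t_lb / age_now:.0%} of cosmic history has elapsed since then")
# z = 0.8 gives a lookback time of ~6.8 Gyr (the Universe was then about half
# its current age); z = 1.3 gives ~8.7 Gyr, i.e. the most recent ~65% of
# cosmic history lies below that redshift.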
Because of the dilution of light in an expanding Universe, it is the galaxies at z ∼ 0.8 that have emitted the bulk of the present-day CIRB. Overall, 80 % of the stars born at z ≤ 2 originated in dusty starbursts (LIRGs and ULIRGs). If these were triggered by galaxy-galaxy interactions, then the environment of galaxies played a major role in the formation of present-day stars, as predicted in hierarchical scenarios of galaxy formation. About 68 % of the field galaxies from a magnitude-limited sample, located in a field 8 arc min wide centered on the HDFN, are located in redshift peaks, whereas all but three of the ISOCAM galaxies in this field (i.e., 94 %) belong to these redshift peaks (46,47), which trace large-scale structures such as sheets, filaments, and groups or clusters of galaxies. A structure located at z ∼ 0.848 alone contains almost 30 % of the ISOCAM galaxies in the field and includes two AGNs detected in the x-rays (Fig. 3). At this redshift, the 6-arc min extension of the structure corresponds to 3 Mpc proper (i.e., too small to discriminate between a galaxy cluster and a sheet). This hints at a connection between the formation of large-scale structures and of galaxies. This also indicates that large-scale structures may play an important role in the switching on of star formation within galaxies, but additional MIR deep fields with complete spectroscopic redshift surveys are obviously required to test the robustness of this result. What Powers the CIRB: Nucleosynthesis or Accretion onto a Black Hole? The observed CIRB may originate from light due to nucleosynthesis at the center of stars or active nuclei (i.e., accretion onto a black hole). However, detailed studies of the hard x-ray emission of the ISOCAM galaxies using the deepest x-ray surveys performed with XMM-Newton in the Lockman Hole and the Chandra X-ray Observatory in the HDFN have shown that < 20 % of their luminosity at 15 µm is due to an active nucleus (48). This result is consistent with the fraction of AGNs within LIRGs and ULIRGs in the local Universe (49,50). Similarly, the AGNs responsible for the bulk of the x-ray background were found to produce less than 7 % of the submillimeter background (51). Nonetheless, the redshift and spatial distribution of ISOCAM galaxies present some striking similarities to x-ray AGNs. Contrary to optically selected AGNs and x-ray quasi-stellar objects, the redshift distribution of the Seyfert-type galaxies responsible for the bulk of the x-ray background also peaks around z ∼ 0.7 (4). Moreover, x-ray AGNs also exhibit strong clustering, as can be seen in the two deepest images of Chandra, the Chandra Deep Field South at z = 0.66 and 0.73 (4) and the Chandra Deep Field North at z ∼ 0.843 and 1.017 (52). The structure at z ∼ 0.843 is the same as that mentioned earlier at z ∼ 0.848 (Fig. 3). Among the 10 x-ray AGNs detected by Chandra, only two are also ISOCAM sources. This suggests that x-ray AGNs and IR luminous galaxies can act as beacons indicating the regions of growth of large-scale structures. A similar effect was suggested (53) for the more distant population of SCUBA galaxies, although this may instead be an artifact of gravitational lensing (54). The fact that strong starbursts and AGNs exhibit similar spatial distributions suggests that they represent successive phases in the life of galaxies. A recent Chandra discovery (55) may shed new light on this issue: NGC 6240 is a symbiosis between a typical dusty star-burst and an x-ray AGN. 
Recent Chandra observations have revealed that this object encompasses in its center two supermassive black holes probably in the process of merging. NGC 6240 may therefore represent the missing link between dusty starbursts and x-ray AGNs. Conclusions and prospects The recent extraction of a CIRB from the data obtained by the COBE satellite, combined with the results of deep surveys in the IR and submillimeter range, has revealed the importance of star formation in strong starbursts in the history of the Universe. The cosmic star formation rate density was more than one order of magnitude larger about 7 billion years ago (z = 0.8) than it is today (28). More than 75 % of this evolution is due to dusty starbursts (LIRGs and ULIRGs) that produced stars at a mean rate of ∼50 solar masses per year at the earlier epoch. Although the peak and the bulk of the CIRB can be attributed to galaxies at relatively modest redshifts (z ≤ 1.3), more distant galaxies dominate the emission at submillimeter wavelengths, to which their intrinsic emission is redshifted because of the expansion of the Universe. The brightest of these galaxies, ULIRGs at redshifts z ≥ 2, are being detected at 850 µm from ground by bolometer arrays at the focus of radio telescopes. The overall importance of ULIRGs seems to have been even greater in those earlier times. The rapid star formation revealed by IR observations may be connected to large-scale structures. There is a similarity in the redshift distributions, and possibly also the clustering properties, of the bright starburst galaxies and x-ray-selected AGNs. The existence of a link between the triggering of a starburst phase in galaxies and the fueling of a central black hole, already suggested by the study of local ULIRGs discovered by IRAS (56), is supported by this independent evidence. These findings can also be summarized by noting that galaxies, paradoxically, are sociable and shy at the same time. They are sociable because they brighten up in company. They are shy because during their encounters with other objects, the UV light of their newly formed stars is absorbed by dust and thermally re-emitted in the IR, so that they blush. The fecundity of this topic promises a bright future for the next generation of IR instruments such as the Space Infrared Telescope Facility (SIRTF), which will be able to bridge the gap between ISOCAM and SCUBA and to study LIRGs and ULIRGs in the 1 ≤ z ≤ 2 redshift range. Later, the PACS instrument on the Herschel telescope will resolve the CIRB directly in the FIR, and the James Webb Space Telescope with its MIR camera MIRI will permit detailed studies of the individual sources. The fluctuations of confusion-limited surveys with Herschel will also provide the opportunity to obtain information on FIR sources at redshifts so high that they cannot be detected individually (40). Finally, the combination of all these instruments with the high spatial resolution images and spectra of the Atacama Large Millimeter Array (ALMA) is likely to bring about a new revolution in our understanding of how stars and galaxies form. The solid squares with error bars and the orange area give the actual intensity of the CIRB from the DIRBE and FIRAS instruments on board COBE, respectively. The dots with upward arrows (see references in the text) are lower limits set by galaxy counts from ISOCAM (6.75 and 15 µm), ISOPHOT (90 and 170 µm), and SCUBA (850 µm). 
The lower limit set by ISOCAM at 15 µm was used to compute a lower limit to the CIRB at its peak around 140 µm (dashed arrow) using the MIR-FIR relation (11). The spectral energy density is that of a typical LIRG normalized to the 15-µm point and redshifted to z = 0.8. It exhibits broad features attributed to polycyclic aromatic hydrocarbons and peaks at about 80 µm (in the rest frame). The hatched area is an upper limit set by TeV γ-ray photons that annihilate with MIR photons through electron-positron pair production (59-61).
Fig. 3. Empty circles are field galaxies; dark circles are 15-µm ISOCAM galaxies. Postage-stamp HST images of the ISOCAM galaxies are shown when available (from the DEEP archive database). The positions of two active nuclei (AGNs) are indicated. This is the highest concentration of dusty starbursts ever detected. Each ISOCAM galaxy is forming stars at a rate of about 50 solar masses per year.
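As a quick consistency check of the redshift inferred from the CIRB peak: the caption above notes that a typical LIRG spectral energy distribution peaks near 80 µm in the rest frame, so a population at z ≈ 0.8 would be observed to peak near 80 × (1 + z) µm, and the ISOCAM 15-µm band would then sample rest-frame emission near the mid-infrared aromatic features. A minimal sketch:

z = 0.8
rest_peak_um = 80.0                       # rest-frame FIR peak of a typical LIRG
observed_peak_um = rest_peak_um * (1 + z)
rest_of_15um_band = 15.0 / (1 + z)
print(f"observed FIR peak ~{observed_peak_um:.0f} um")      # ~144 um, near the ~140 um CIRB peak
print(f"ISOCAM 15 um samples rest-frame ~{rest_of_15um_band:.1f} um")  # ~8.3 um, in the PAH region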
2014-10-01T00:00:00.000Z
2003-04-11T00:00:00.000
{ "year": 2003, "sha1": "731bb28d979c01a408d8a0c98afa2202eaa056a8", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/0304492v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ebd337e5f44461374eb8d7f81acbfe854b9d6359", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
17743517
pes2o/s2orc
v3-fos-license
Preclinical safety and efficacy of an anti–HIV-1 lentiviral vector containing a short hairpin RNA to CCR5 and the C46 fusion inhibitor Gene transfer has therapeutic potential for treating HIV-1 infection by generating cells that are resistant to the virus. We have engineered a novel self-inactivating lentiviral vector, LVsh5/C46, using two viral-entry inhibitors to block early steps of HIV-1 cycle. The LVsh5/C46 vector encodes a short hairpin RNA (shRNA) for downregulation of CCR5, in combination with the HIV-1 fusion inhibitor, C46. We demonstrate here the effective delivery of LVsh5/C46 to human T cell lines, peripheral blood mononuclear cells, primary CD4+ T lymphocytes, and CD34+ hematopoietic stem/progenitor cells (HSPC). CCR5-targeted shRNA (sh5) and C46 peptide were stably expressed in the target cells and were able to effectively protect gene-modified cells against infection with CCR5- and CXCR4-tropic strains of HIV-1. LVsh5/C46 treatment was nontoxic as assessed by cell growth and viability, was noninflammatory, and had no adverse effect on HSPC differentiation. LVsh5/C46 could be produced at a scale sufficient for clinical development and resulted in active viral particles with very low mutagenic potential and the absence of replication-competent lentivirus. Based on these in vitro results, plus additional in vivo safety and efficacy data, LVsh5/C46 is now being tested in a phase 1/2 clinical trial for the treatment of HIV-1 disease. INTRODUCTION HIV-1 continues to be a major global public health issue, having claimed more than 25 million lives over the past three decades. It is estimated that 34 million individuals around the world are currently living with HIV-1. Standard treatment for HIV-1 infection is highly active antiretroviral therapy, which can reduce plasma viral loads to undetectable levels for years at a time. [1][2][3] During this time, however, HIV-1 persists in various cellular reservoirs, and discontinuation of antiretroviral therapy can lead to rapid rebound of viral loads causing renewed disease progression toward AIDS. [4][5][6] While antiretroviral therapy is effective at reducing viral load and maintaining CD4 + T-lymphocyte counts, strict adherence by the individual is required to maintain effectiveness; however, side effects of antiretroviral therapy can be severe, long-term complications can develop, and HIV-1 resistance to the antiretroviral regimen can also develop. [7][8][9][10] A promising alternative approach is cell-delivered gene therapy, in which anti-HIV-1 agents are delivered into target cells with the intention to interfere with the HIV-1 life cycle. Infusion of the genetically engineered HIV-1-resistant cells to patients has the potential to control HIV-1 infection, slow disease progression, repair damage to the immune system, and reduce reservoirs of infected and latently infected cells. [11][12][13] Other approaches that have been tested include vaccines, immunotherapy, adoptive immunotherapy, and vectored immunoprophylaxis. HIV-1 gene therapy has been applied targeting early life cycle steps before integration, such as HIV-1 binding, fusion/entry, and reverse transcription, or later steps, including integration, transcription, translation, maturation, or virion assembly. 12 Some of these approaches were tested in clinical trials using gene agents such as silencing dominant negative rev, env antisense RNA, ribozymes, Rev response element (RRE) decoy, fusion inhibitors, short hairpin RNA, and zinc finger nucleases. 
[12][13][14] One promising strategy of preventing HIV-1 entry is based on suppression of the HIV-1 coreceptor, C-C chemokine receptor type 5 (CCR5). Genetic and molecular studies on human populations have demonstrated that individuals homozygous for a defective CCR5 gene, CCR5∆32, are protected from HIV-1 infection, [15][16][17][18] and heterozygous individuals with a 50% reduction in the expression level of CCR5 on the cell surface have a substantially reduced rate of disease progression. 19,20 Homozygous CCR5∆32 is a stable genetic trait with a frequency of 1.4% in the Caucasian population. 21 These individuals are healthy apart from the potential for increased pathogenicity of West Nile Virus infection. 22 A functional cure for HIV-1 infection has been demonstrated recently in the "Berlin patient" case, where a HIV-1-positive individual, with concurrent acute myeloid leukemia, was treated by transplant of homozygous CCR5Δ32 allogeneic hematopoietic stem/ progenitor cells (HSPC). 23 Reconstitution of the immune system with cells protected from HIV-1 infection led to substantial attenuation of HIV-1 replication and an increase in CD4 + T-cell counts. The CCR5∆32 donor cells nearly completely replaced the recipient cells within 61 days, and the patient's viral load has remained undetectable in the absence of antiretroviral therapy. 24 However, due to the low prevalence of homozygous CCR5∆32 genotype and limited availability of donors, more practical approaches are currently being sought. Blocking virus-CCR5 interaction by inhibiting or eliminating CCR5 expression is being investigated by a number of groups that include the use of ribozymes directed to CCR5 [25][26][27][28] , single-chain intrabodies, 27,29 RNA interference, [30][31][32][33][34][35][36][37] and zinc finger nuclease. [38][39][40] A specific short hairpin RNA to CCR5 was previously demonstrated to effectively inhibit CCR5 expression and thereby protect primary human CD4 + T lymphocytes from CCR5-tropic HIV-1 infection in culture. 31,41 Expression of this potent anti-CCR5 shRNA (CCR5 shRNA1005, or here termed sh5) was subsequently optimized using the human H1 promoter in a lentiviral vector to stably inhibit HIV-1 replication. 42 The H1-CCR5 shRNA 1005 vector was shown to be noncytotoxic and effective in stable downregulation of CCR5 in human primary peripheral blood mononuclear cells (PMBCs) in vitro, 42 and in vivo using the humanized bone marrow-liver-thymus (BLT) mouse model 36 as well as in nonhuman primates introduced through hematopoietic stem cell transplant. 41 C46 is an HIV-1 entry inhibitor derived from the C-terminal heptad repeat of HIV-1 gp41 modified to be expressed on the cell surface. C46, like other gp41-derived C peptides, blocks HIV-1 fusion to the cellular membrane by interacting with the N-terminal coiledcoil domain of the HIV-1 gp41 intermediate structure and preventing the six-helix bundle formation. In vitro studies have shown that membrane-anchored C46 protein effectively protects cells against a broad range of HIV-1 isolates by blocking entry of the virus to the target cells. [43][44][45][46] The safety of C46 has been tested previously in a phase 1 clinical trial in which autologous T cells, transduced with a retroviral vector expressing C46, were infused into HIV-1-positive patients without adverse effects. 47 It is well established that HIV-1 can rapidly develop resistance to monotherapy by emergence of point mutations in virus variants, whereas combined therapy shows better clinical outcome. 
10,[48][49][50] In patients treated with CCR5 inhibitors, R5 variants of HIV-1 can evolve to use the blocked receptor. In addition, there are X4-tropic strains of HIV-1 that use the alternate coreceptor, CXCR4 [51][52][53] . The role of these X4 viruses in HIV-1 pathogenesis is as yet unclear [54][55][56][57] ; however, effective genetic therapeutic applications for HIV-1 infection will likely require combinations of multiple reagents directed against HIV-1. In the present study, we have developed a self-inactivating (SIN) lentiviral vector construct for gene transfer, LVsh5/C46, to express a combination of two anti-HIV-1 genes: sh5, an shRNA to the HIV-1 coreceptor CCR5, and C46, the antiviral fusion inhibitor peptide. We assessed expression of the anti-HIV-1 transgenes in cell culture and any impact on viability or phenotype and efficacy of inhibiting HIV-1 infection. We show here the ability of LVsh5/C46-modified cells to stably downregulate CCR5 and express C46. LVsh5/C46 was well tolerated when introduced into hematopoietic cells based on their viability, ability to proliferate, differentiate, and retain their normal phenotype. Moreover, we show that treatment of hematopoietic cells with LVsh5/C46 provides protection from the establishment of HIV-1 infection and replication from multiple strains, clades, and tropisms of HIV-1. ReSUlTS Engineering and production of LVsh5/C46 vector To construct the LVsh5/C46 plasmid, we utilized FG12, a SIN lentiviral plasmid that is based on the HIV-1 backbone with modified 5′ and 3′ HIV-1 long terminal repeats. 31 First, the HIV-1 fusion inhibitor C46 43 was introduced into the FG12 plasmid downstream of the human Ubiquitin C (UbC) promoter, replacing the enhanced green fluorescent protein sequence. Next, a previously characterized shRNA against CCR5 (1005), driven by the human H1 RNA polymerase III promoter, 41 was inserted upstream of the UbC promoter ( Figure 1a). The two viral-entry inhibitors, incorporated into a single construct, were then tested in vitro for their ability to be expressed in target cells, their effect on the treated cells, and their potency in suppressing HIV-1 replication. Small-scale production of LVsh5/C46 viral vector was used during the initial characterization studies followed by preclinical testing of LVsh5/C46 preparations generated using methodology and procedures consistent with good laboratory practice (GLP) and good manufacturing practice (GMP). Comprehensive testing for purity, safety, and functionality was conducted for a 251 ml pilot lot of GLP vector and two large-scale batches (25 l each) of GMP-grade vector. Small-scale virus production routinely yielded a titer of 3-5 × 10 6 infectious viral particles per milliliter (ivp/ml), whereas GLP grade and GMP-grade productions of LVsh5/C46 vector yielded 8 × 10 7 ivp/ ml and 1-2 × 10 8 ivp/ml, respectively. LVsh5/C46 virus particles were introduced into various cell lines, PBMCs, CD4 + T lymphocytes, and hematopoietic stem cells for detailed preclinical characterization. Expression of LVsh5/C46 in hematopoietic cells Gene delivery of the dual anti-HIV-1 agents was first assessed in target cells transduced with small-scale vector preparations of LVsh5/ C46. PBMCs and the CCR5-expressing T cell line Molt4/CCR5 were transduced with LVsh5/C46, and cell-surface expression of C46 and CCR5 was assessed by fluorescence-activated cell sorting (FACS) after 4 days. 
At a multiplicity of infection (MOI) of 1, 39.35% of PBMCs (33.9 + 5.45%) were transduced as determined by expression of the C46 peptide (Figure 1b). At this MOI, CCR5 expression was reduced from 58.3% in control cells (56.3 + 2.03%) to 19.65% in LVsh5/C46 transduced cells (14.2 + 5.45%) (Figure 1b). When Molt4/CCR5 cells were transduced at an MOI of 2, 92.9% of the cells expressed C46 peptide, and CCR5 expression was decreased from 86.98 to 17.94% posttransduction (Figure 1b). These results demonstrated successful delivery of LVsh5/C46 vector into cells, ability of the modified cells to express sh5 short hairpin RNA and C46 peptide, and successful downregulation of CCR5. To assess longer-term stability of the integrated LVsh5/C46 in the modified cells, the T cell line CEM.NKR.CCR5 was transduced with LVsh5/C46, and a time course of CCR5 and C46 expression was analyzed by flow cytometry. As shown in Figure 1c, after 2 months in culture, 68 To quantify LVsh5/C46 gene transfer, PBMCs were transduced with increasing doses of LVsh5/C46 followed by evaluation of vector copy number and RNA transcript synthesis in the transduced cells. Transduction at MOI of 1 resulted in one copy of LVsh5/C46 per cell, while higher doses of LVsh5/C46 vector (MOIs of 5 and 10) led to an average of 1.5-2 copies of LVsh5/C46 viral DNA per cell ( Figure 1d). Furthermore, we observed a dose-dependent increase in C46 RNA synthesis in response to elevated LVsh5/C46 dose as quantified by reverse transcription-quantitative PCR (RT-qPCR) (Figure 1e). This data demonstrated the ability to define the preferred dose of LVsh5/ C46 vector delivered into human primary cells by correlating MOI with the integrated vector copy number and the RNA transcripts of C46 transgene. To further evaluate the therapeutic potential of LVsh5/C46, we purified CD4 + T lymphocytes and CD34 + HSPC and demonstrated that the intended target cells were genetically modified in vitro to produce the anti-HIV-1 gene agents. Purification of CD4 + T lymphocytes was achieved by isolation of PBMCs, followed by a CD8 depletion step and CD3/CD28 stimulation and activation. The purification process yielded purity of 88.6% CD4 + cells (Figure 2a, lower right panel). Human CD34 + HSPC were isolated from granulocyte colonystimulating factor mobilized peripheral blood using CliniMACS with 41 was digested by NdeI/XhoI to excise a fragment containing CCR5 shRNA (sh5) under the human H1 RNA polymerase III promoter. The insert was cloned into the same sites in an FG11F vector encoding membraneanchored C46 43 , under the Ubiquitin C promoter (UbC). Other components of the vector include 5′ and 3′ modified HIV-1 long terminal repeats (LTRs), a central polypurine tract (cPPT), and a woodchuck hepatitis virus posttranscriptional regulatory element (WPRE). (b) PBMCs and Molt4/CCR5 cells were transduced with LVsh5/C46 at MOI of 1 and 2, respectively, in triplicate, and expression of sh5 and C46 was assessed by flow cytometry. CCR5 shRNA (sh5) expression was demonstrated by downregulation of CCR5 expression evidenced by CD195 staining. Cell-surface expression of C46 was detected by staining with 2F5 antibody. (c) The T cell line CEM.NKR.CCR5 was transduced with LVsh5/C46, and the expression of CCR5 and C46 was similarly assessed over 8 weeks in culture by flow cytometry. (d, e) Quantification of integrated LVsh5/C46 DNA and LVsh5/C46-mediated C46 mRNA synthesis in transduced PBMCs. 
Cells were treated with LVsh5/C46 at MOIs of 1, 5, and 10 (in duplicate), resulting in transduction efficiencies of 35, 62.5, and 74% (respectively) at 4 days postinfection as assessed by flow cytometry (data not shown). Genomic DNA and RNA were isolated at 8 days posttransduction. C46 DNA copy number per cell was determined by quantitative PCR and was normalized to β-globin (d). C46 RNA transcript levels were measured by RT-qPCR using C46 primers and were normalized to β2-microglobulin mRNA as a measure of relative C46 expression (e). PBMC, peripheral blood mononuclear cell; shRNA, short hairpin RNA. CD34-microbeads as positive selection, which yielded purity of over 99% (Figure 2b). Purified CD4 + T lymphocytes and CD34 + HSPC cells were then treated with LVsh5/C46 vector at increasing MOIs (0. [5][6][7][8][9][10], and the cell-surface C46 expression was determined by FACS analysis. A dose-dependent increase in the number of cells expressing C46 was observed in response to increasing doses of LVsh5/C46 (Figure 2c,d). Transduction efficiency in CD4 + cells ranged between 10 and 50% and between 2 and 15% in CD34 + cells. The doses of LVsh5/C46 chosen for the subsequent studies were MOI of 1 for CD4 + transduction and MOI of 5 for CD34 + HSPC. In large-scale experiments using GMP-grade LVsh5/C46 vector, improved transduction efficiencies were observed. Over 40% transduction efficiency was obtained in CD4 + T lymphocytes transduced at MOI of 1 and in CD34 + HSPC transduced at MOI of 5 ( Figure 2e). To further evaluate the degree of gene transfer in the transduced cells, CD4 + and CD34 + cells were treated with GMP-grade LVsh5/ C46 at MOIs of 1 and 5, respectively. Genomic DNA was extracted, and quantitative PCR was performed to determine C46 copy number per cell. As shown in Figure 3a, an average of 2-2.5 copies of integrated LVsh5/C46 vector DNA was observed in CD4 + and CD34 + transduced cells. Expression of CCR5 shRNA (sh5) was also detected in the transduced CD4 + and CD34 + using RT-qPCR ( Figure 3b). Taken together, these results validate the ability of LVsh5/C46 to genetically modify hematopoietic target cells, by stable integration and sustained expression of C46 and sh5. LVsh5/C46-modified HSPC retain their full differential capacity To determine whether CD34 + HSPC transduced with LVsh5/C46 could maintain their differentiation capacity, colony-forming assays were performed. CD34 + cells were transduced with a high dose of LVsh5/C46 (MOI of 10), and after 2 weeks in methylcellulose culture, Figure 2 Introducing LVsh5/C46 vector into target cells. (a) Human CD4 + T lymphocytes were isolated from PBMCs by CD8 depletion followed by selection and expansion using CD3/CD28 beads. Cells were stained with CD4, CD3, and CD8 antibodies and analyzed by flow cytometry. Purified CD4 + T-lymphocyte cells are shown as CD4 + /CD3 + /CD8 − fraction in the lower right panel. (b) CD34 + HSPC were isolated from G-CSF mobilized peripheral blood, using CD34 + microbeads. Positive fraction was stained with CD34 antibody and analyzed by flow cytometry. (c, d) Purified cells were treated with LVsh5/C46 at the increasing doses (MOIs as indicated), in triplicate, and the percentage of (c) C46-expressing CD4 + cells and (d) CD34 + cells was determined by flow cytometry. 
(e) Purified CD4 + T lymphocytes and CD34 + HSPC cells were treated with GMP-grade LVsh5/C46 at MOI of 1 and 5, respectively, and LVsh5/C46 transduction was analyzed by flow cytometry, measuring the percentage of cells expressing cell-surface C46. G-CSF, granulocyte colony-stimulating factor; GMP, good manufacturing practice; HSPC, hematopoietic stem/progenitor cell; MOI, multiplicity of infection; PBMC, peripheral blood mononuclear cell. the colony-forming units (CFU) were enumerated and characterized. Quantification of colonies of erythroid (CFU-E), myeloid (CFU-GM), and multiple lineage origin (CFU-GEMM) generated from LVsh5/ C46-transduced cells showed no significant difference compared to untransduced cells (Figure 4a). Similar results were obtained when HSPC were transduced with either singular anti-HIV-1 genes (C46 or sh5) or the dual agents (LVsh5/C46). Transduced cells generated comparable numbers of differentiated colonies, with no obvious effects of either C46 or sh5 expression (Figure 4b-d). These findings demonstrate that LVsh5/C46-modified HSPC retain their capacity for multilineage hematopoietic differentiation in vitro, without lineage skewing, even at a relatively high dose of LVsh5/C46. LVsh5/C46-modified cells retain their normal phenotype To investigate the effect of LVsh5/C46 on the phenotype of the treated cells, PBMCs were treated with LVsh5/C46 at MOIs of 1 and 10, and analyzed after 7-8 days in culture. We first determined whether apoptosis was induced in LVsh5/C46-modified cells. Similar activity of Caspase 3/7 was found in transduced PBMCs compared to untransduced cells, at low and high MOIs of LVsh5/ C46, indicating that neither LVsh5/C46 transduction nor expression of sh5 or C46 induced programmed cell death ( Figure 5a). We then assessed the influence of LVsh5/C46 on cell growth as measured by the number of metabolically active cells and observed comparable activities in untreated and LVsh5/C46-treated cells (Figure ). In addition, PBMCs were treated with LVsh5/C46 or with lentiviral vectors expressing the single genes (sh5 or C46), and cell viability was assessed and enumerated for a period of 12 days (see Supplementary Figure S1a,b). Proliferation was found to be similar between transduced and untransduced cells with no impact on viability. To further examine the cellular response to LVsh5/C46, the levels of the cytokines interferon γ, interleukin-6, and tumor necrosis factor α were measured in transduced PBMCs and control cells. As shown in Figure 5c-e, there were no significant differences in the intracellular production of these proinflammatory cytokines between LVsh5/C46-treated PBMCs and untreated cells, indicating that inflammatory signaling was not activated by either viral transduction or LVsh5/C46-mediated expression of sh5 or C46. Stability of LVsh5/C46 vector in transduced cells To evaluate the stability of the viral vector in modified cells, VERO cells (monkey kidney epithelial) were transduced with LVsh5/C46, and the structure and integrity of the integrated vector was evaluated. Genomic DNA extracted from the transduced cells and subjected to southern blot analysis revealed two major forms of the integrated LVsh5/C46 DNA: a nonspliced 3.8 kb viral vector and the anticipated spliced form of 3 kb, due to splicing of the UbC intron, Figure S2a,b). Sequencing analysis of the 3 kb band confirmed splicing event within the UbC region (data not shown). 
Apart from the expected splicing, the overall structural organization of the integrated LVsh5/ C46 vector was kept intact and was genetically stable, with no rearrangements, duplications, insertions, or unexpected deletions. Low risk of mutagenic potential in LVsh5/C46-modified hematopoietic cells To assess the mutagenic potential of LVsh5/C46, in vitro immortalization assays were performed. Primary murine hematopoietic cells (Lin − ) were cultured in the presence of LVsh5/C46 at MOIs of 20, 40, and 80, and the incidence of cell transformation was calculated as replating frequency per vector copy number. LVsh5/C46 vector was compared to two positive control vectors shown previously to induce in vitro immortalization. 60 lv-SF, a lentiviral vector in SIN configuration under the control of a strong internal retroviral promoter of the spleen focus forming virus, and RSF91, a gammaretroviral vector with internal or long terminal repeat-contained spleen focus forming virus promoter sequences. A mock infected sample cultured without a viral vector served as a measure of spontaneous immortalization and routinely scored negative. In three independent experiments, transformation frequency per vector copy number in cells treated with LVsh5/C46 vector was strongly reduced compared to positive controls (see Supplementary Figure S3). These results demonstrate a significantly reduced risk of insertional mutagenesis and genotoxicity. No evidence of replication-competent lentivirus in LVsh5/C46transduced cells To further evaluate the safety of LVsh5/C46 vector as a potential therapeutic agent, the risk of replication-competent lentivirus (RCL) development was analyzed. LVsh5/C46 is a SIN lentiviral vector, in which modifications to the HIV-1 long terminal repeats significantly reduce the likelihood that RCL will develop. RCL testing has been performed on samples of LVsh5/C46 viral production batches (large-scale GLP and GMP). Five percent of the total batch volumes were used to inoculate C8166-45 cells, a highly infectable cell line. 61 After a culturing period of a minimum of 21 days to allow potential amplification of any RCL, the supernatants were collected and were incubated with fresh permissive cells. Supernatants were collected for p24 ELISA to detect HIV-1 capsid protein, and genomic DNA was used for psi-gag PCR to detect recombination. Results from all batches tested showed no detectable quantities of p24 antigen or HIV-1 DNA, hence, confirming the absence of RCL in the LVsh5/ C46-modified cells (see Supplementary Table S1). Moreover, in a separate RCL assay, in which postproduction HEK293T cells were cocultured with the C8166-45 cell line, no RCL was detected in any of the LVsh5/C46 test samples, as indicated by the absence of detectable quantities of p24 protein and HIV-1 psi-gag recombination (see Supplementary Table S1). These results demonstrate that the likelihood of RCL development using clinical grade of LVsh5/C46 lentiviral vector is extremely low. Modification of T cell lines with LVsh5/C46 effectively inhibits HIV-1 replication The ability of the LVsh5/C46 vector to confer cellular resistance from HIV-1 infection was evaluated by conducting HIV-1 challenge assays in vitro. First, the T cell line Molt4/CCR5 was transduced with LVsh5/ C46 at MOI of 5 followed by infection with CCR5 (R5)-tropic HIV-1 strain BaL or CXCR4 (X4)-tropic strain NL4-3. 
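The transformation readout above is reported as a replating frequency per vector copy number. As a rough illustration of that bookkeeping only (not the study's exact protocol), the sketch below converts a count of positive wells into a per-cell frequency using a Poisson limiting-dilution estimate and normalizes it to a measured vector copy number; all plate counts and copy numbers are hypothetical.

```python
import math

def replating_frequency(positive_wells: int, total_wells: int, cells_per_well: float) -> float:
    """Poisson limiting-dilution estimate of the frequency of re-plateable cells.

    With per-cell frequency f, a well stays negative with probability
    exp(-f * cells_per_well), so f = -ln(negative fraction) / cells_per_well.
    """
    negative_fraction = 1.0 - positive_wells / total_wells
    if negative_fraction <= 0.0:
        raise ValueError("All wells positive; frequency cannot be estimated.")
    return -math.log(negative_fraction) / cells_per_well

# Hypothetical plate readouts and mean vector copy numbers (VCN) per condition.
conditions = {
    "LVsh5/C46 (MOI 80)":    {"pos": 1,  "total": 96, "cells": 100.0, "vcn": 6.0},
    "lv-SF (positive ctrl)": {"pos": 30, "total": 96, "cells": 100.0, "vcn": 4.0},
    "RSF91 (positive ctrl)": {"pos": 55, "total": 96, "cells": 100.0, "vcn": 3.5},
}

for name, d in conditions.items():
    freq = replating_frequency(d["pos"], d["total"], d["cells"])
    print(f"{name}: {freq:.2e} per cell, {freq / d['vcn']:.2e} per vector copy")
```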
Two weeks postinfection, the appearance of viral p24 antigen in the medium was measured, and the infection level of LVsh5/C46-modified cells was compared to untransduced control cells. As shown in Figure 6a, while untransduced Molt4/CCR5 were susceptible to BaL (R5) and NL4-3 (X4) strains, viral infection was strongly suppressed in LVsh5/C46-modified cells, indicating that cells expressing LVsh5/C46 were significantly more resistant to HIV-1 infection compared to control cells. Similarly, when the LVsh5/C46-modified Molt4/CCR5 cells were infected with a dual R5/X4 tropic HIV-1 strain, SF2, viral titer was reduced dramatically relative to untransduced cells, exhibiting ~100-fold inhibition of viral replication (Figure 6b). This observation was consistent when transduced cells were exposed to low or high dose of SF2 strain. These results demonstrate the ability of LVsh5/C46 to provide strong resistance against R5-, X4-, and R5/X4-tropic HIV-1 strains. We next tested the potency of the dual-therapeutic vector to single vectors containing either sh5 or C46 alone. Molt4/CCR5 cells were transduced with either LVsh5/C46 or lentiviral vectors containing sh5 or C46 genes, followed by infection with the R5-tropic strain BaL, and infection level was evaluated 7 and 10 days postinfection by p24 assay. In cells expressing either sh5 or C46 alone, BaL infection was inhibited; nonetheless, LVsh5/C46 was more effective than either vector expressing single agents (Figure 6c). The Molt4/CCR5 cells were then transduced with low or high MOIs of sh5, C46, or LVsh5/C46 lentiviral vectors constructs, followed by exposure to either X4-tropic NL4-3 or R5-tropic BaL, and infection level was analyzed. As expected, sh5 vector was able to inhibit replication of R5-tropic but not X4-tropic strain, whereas C46 provided a stronger resistance against R5-tropic and X4 HIV-1-tropic strains. The combined LVsh5/C46 exhibited the strongest viral inhibition, reducing p24 concentration from 6,400 to 2 ng/ml in BaL infection and from 2,048 to 4 ng/ml in NL4-3. The response to higher dose of LVsh5/C46 resulted in the same effect (Figure 6d). Taken together, these results demonstrate that LVsh5/C46, as a dual anti-HIV-1 agents in a single lentiviral construct, has additive effect compared to the single agents, delivering robust inhibition of a broad range of HIV-1 strains. LVsh5/C46-treated PBMCs inhibit replication of R5-and X4-tropic HIV-1 strains We next assessed the ability of LVsh5/C46 to provide resistance to primary human PBMC cultures from HIV-1. PBMCs were treated with either LVsh5/C46 or vectors containing the single anti-HIV-1 agents: sh5 and C46, and the expression of C46 and downregulation of CCR5 by sh5 was confirmed 4 days posttransduction by flow cytometry (Figure 7a). Based on this analysis, transduction efficiency was estimated to be 43.7% for LVsh5/C46, 49.4% for C46 alone, 67.5% for sh5/eGFP, and 51.7% for eGFP control vector. Transduced PBMCs infected with R5-tropic HIV-1 strain NFNSX inhibited viral infection, whereas when infected with the X4-tropic strain NL4-3, only cells expressing C46 or the dual-therapeutic LVsh5/C46 vector were capable of inhibiting HIV-1 infection (Figure 7b). To further test the ability of PBMCs to resist HIV-1 infection, cells were transduced with LVsh5/C46 at MOI of 5 and reached transduction efficiency of 20-50%. After 48 hours, the transduced cells were infected with HIV-1 virus strains, and the level of p24 antigen was evaluated relative to control untransduced cells. 
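Since the challenge readouts above are expressed relative to untransduced controls, a minimal sketch of the underlying arithmetic is given below. Fold and percent inhibition are computed from supernatant p24; the two value pairs are those quoted above for the dual vector against BaL and NL4-3, and the helper itself is generic.

```python
def inhibition(p24_control_ng_ml: float, p24_modified_ng_ml: float) -> tuple[float, float]:
    """Return (fold inhibition, percent inhibition) of viral replication,
    using p24 antigen in the culture supernatant as the surrogate readout."""
    fold = p24_control_ng_ml / p24_modified_ng_ml
    percent = 100.0 * (1.0 - p24_modified_ng_ml / p24_control_ng_ml)
    return fold, percent

# p24 in control vs. LVsh5/C46-modified cultures (values quoted in the text above).
for strain, control, modified in [("BaL (R5)", 6400.0, 2.0), ("NL4-3 (X4)", 2048.0, 4.0)]:
    fold, percent = inhibition(control, modified)
    print(f"{strain}: {fold:.0f}-fold inhibition ({percent:.2f}% reduction in p24)")
```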
Results from four independent experiments of cells challenged with the laboratory strains BaL (R5-tropic), SF162 (R5-tropic), and Bru (X4-tropic) showed effective inhibition of viral replication in all three strains of HIV-1 (Figure 7c), demonstrating again the potency of LVsh5/C46 to inhibit HIV-1 infection. Clinical isolates of HIV-1 from Clade B and Clade D subgroups were tested to further characterize LVsh5/C46-mediated viral inhibition. Clade B is the most predominant subtype of HIV-1 found in the developed Western world, whereas Clade D is generally limited to East and Central Africa. PBMCs were infected with the clinical isolates 92US723 (R5/X4 dual-tropic, Clade B) and 92UG021 (X4-tropic, Clade D) in three independent assays. Infection of LVsh5/C46-modified PBMCs with the Clade B clinical HIV-1 isolate led to an average of 83% inhibition of viral replication, whereas 67% inhibition was observed with the Clade D isolate (Figure 7d). Altogether, these results demonstrated that delivery of the dual anti-HIV-1 agents sh5 and C46 to hematopoietic cells, mediated by the LVsh5/C46 lentiviral vector, inhibits a broad range of HIV-1 isolates, clades, and tropisms.

DISCUSSION
In this preclinical study, we have assessed the therapeutic potential of a lentiviral vector (LVsh5/C46) containing two anti-HIV-1 agents that target viral entry. We have demonstrated the ability to stably introduce LVsh5/C46 into various hematopoietic cells, including CD4 + T lymphocytes and CD34 + HSPC, to allow for expression of a CCR5-targeted shRNA (sh5) and C46. Treatment of target cells with the LVsh5/C46 vector did not show any indication of toxicity. In transduced PBMC cultures, cell viability and proliferation were normal, apoptosis was not induced, and no sign of inflammation was observed. LVsh5/C46-treated CD34 + cells maintained their ability to differentiate into various hematopoietic lineages with no sign of lineage skewing. Moreover, LVsh5/C46-modified cells showed profound resistance to R5-, X4-, and dual-tropic strains of HIV-1. Gene therapy for HIV-1 relies on genetic modification of hematopoietic cells in order to generate long-lasting HIV-1-resistant cells that would replenish and stabilize the patient's immune system. Transplantation of manipulated CD4 + T cells would provide a pool of T lymphocytes with resistance to HIV-1 infection. A recent study by Scholler et al. 62 has shown detection of engineered T lymphocytes 11 years after infusion, suggesting that modified cells may persist for decades, with continued expression and function that may contribute to prolonged survival. Genetic modification of CD34 + cells would generate HIV-1-resistant progenitor cells with the capacity to self-renew and differentiate into all hematopoietic lineages, including CD4 + T cells, macrophages, and dendritic cells, which are the natural targets of HIV-1. The general concept is that in HIV-1-infected individuals, the modified cells, even at low-to-moderate levels of gene modification, would have a selective survival advantage over unmodified cells, allowing them to proliferate and expand over time, and as a consequence would minimize viral loads and reduce viral reservoirs. In this work, we transduced T cell lines, PBMCs, and CD34 + HSPC to assess toxicity, phenotype, and, in the former two cell types, protection from HIV-1. We did not test protection of CD34 + HSPC in the present experiments, but will do so as a continuation in a humanized mouse model (manuscript in preparation).
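The selective-advantage argument outlined above can be pictured with a deliberately simple two-population model: protected (gene-modified) cells are assumed to survive each round of virus-driven killing better than unprotected cells. The survival probabilities and number of rounds below are arbitrary assumptions for illustration only, not parameters estimated in this study.

```python
def protected_fraction_over_time(start_fraction: float,
                                 protected_survival: float,
                                 unprotected_survival: float,
                                 rounds: int) -> list[float]:
    """Fraction of protected cells after each discrete round of selection."""
    protected, unprotected = start_fraction, 1.0 - start_fraction
    history = [start_fraction]
    for _ in range(rounds):
        protected *= protected_survival      # modified cells largely resist infection
        unprotected *= unprotected_survival  # unmodified CD4+ cells are depleted
        history.append(protected / (protected + unprotected))
    return history

# Start from 15% gene marking, i.e. a low-to-moderate level of modification.
trajectory = protected_fraction_over_time(0.15, protected_survival=0.99,
                                          unprotected_survival=0.90, rounds=30)
for t in range(0, 31, 5):
    print(f"round {t:2d}: protected fraction = {trajectory[t]:.2f}")
```

Even with these made-up numbers, the protected fraction grows steadily, which is the qualitative behavior the selective-advantage concept relies on.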
Recent clinical applications using lentiviral vectors for treatment of genetic diseases, such as β-thalassemia and adrenoleukodystrophy, have also shown these vectors to be safe and efficient. [63][64][65] A theoretical concern with the safety of lentiviral vector usage is the development of RCL. To date, RCL has not been observed in any gene therapy clinical trial. LVsh5/C46, as a SIN vector, includes modifications designed to contribute to the safety of the vector. These features minimize the homology between the vector and the HIV-1 sequences, decrease the capacity of RCL formation, and reduce the likelihood of mobilization of the vector. Moreover, by separating the packaging genes into three plasmids, the number of recombination events required to produce a competent replicative virus increases. LVsh5/C46 pseudovirus batches did not show any evidence for the emergence of replicative particles in transduced cells (see Supplementary Table S1). Likewise, integration analysis of LVsh5/C46 provirus demonstrated absence of predominant integration sites (data not shown). Taken together, these data support the safety profile of LVsh5/C46 for clinical use. The stability of the delivered LVsh5/C46 vector into target cells was confirmed by southern blot and sequencing analyses, with no unforeseen alterations to the genetic elements of LVsh5/C46. Splicing of the 812 bp intron within the 5′ UTR of the human UbC promoter was well documented previously. 58,59 In the present study, the spliced UbC promoter was found to be sufficient to drive expression of C46 peptide to a detectable level and was effective in inhibiting HIV-1 infection. We have demonstrated efficient delivery of LVsh5/C46 to various hematopoietic cells. PBMCs transduced at MOI of 1 resulted in transduction efficiency of ~34% with 1 copy of LVsh5/C46 vector per cell, and this level of transduction was sufficient to inhibit HIV-1 in challenge assays (Figures 1b,d and 7). Based on mathematical modeling, we have previously shown that a level of gene marking of ~10-20% in hematopoietic stem cells would be sufficient to have a "curative" effect, i.e., impact on viral load and CD4 + lymphocyte counts. 66 For large-scale studies, an MOI of 1 for CD4 + T lymphocytes and MOI of 5 for CD34 + HSPC were chosen as the optimal doses, with the ideal of a therapeutic target of <5 copies of LVsh5/C46 per cell (Figure 3a). Expression of the integrated LVsh5/C46 transgenes C46 and sh5 in cells over time was examined in the T cell line CEM.NKR.CCR5 (Figure 1c), due to the difficulties of maintaining primary cultures for extended period of time in culture. These results suggest that coexpression of the two anti-HIV-1 genes has no influence on their stability over time. An et al. 41 have previously demonstrated stable expression of CCR5 shRNA in nonhuman primates for over 14 months posttransplantation of CD34 + HSPC, and Younan et al. have recently shown C46-mediated protection from simian-human immunodeficiency virus in a nonhuman primate study transplanting autologous ex vivo modified CD34 + HSPC. 67 Various gene therapies have been developed to inhibit HIV-1 infection by blocking postintegration steps or early steps in the HIV-1 life cycle that occur prior to genome integration. 
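The relationship quoted above between MOI, percent transduction, and copies per cell (for example, ~34% of PBMCs transduced at MOI 1 with roughly 1 vector copy per cell) can be pictured with the standard Poisson description of transduction. This is an illustrative approximation only and is not the mathematical model cited in the text (reference 66); the "functional MOI" here is simply the mean number of integrations per cell.

```python
import math

def transduction_stats(functional_moi: float) -> tuple[float, float]:
    """Fraction of cells with >=1 integration and mean copies per transduced cell,
    assuming integration events are Poisson-distributed with mean = functional MOI."""
    fraction_transduced = 1.0 - math.exp(-functional_moi)
    mean_copies_in_transduced = functional_moi / fraction_transduced
    return fraction_transduced, mean_copies_in_transduced

for moi in (0.25, 0.5, 1.0, 2.0):
    frac, copies = transduction_stats(moi)
    print(f"functional MOI {moi:4.2f}: {100 * frac:5.1f}% transduced, "
          f"~{copies:.2f} copies per transduced cell")
```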
Preintegration strategies are thought to have an advantage since production of HIV-1-resistant cells may lead to selective expansion of these modified cells over infected cells. Entry inhibitors also have the potential to prevent the generation of escape mutants in the treated cells, a process that occurs during reverse transcription. 68 Previous computational modeling studies have demonstrated that inhibition of preintegration steps is more likely to provide more effective outcomes. [69][70][71][72] CCR5 is the predominant coreceptor used by HIV-1 during both initial and subsequent infection. 73,74 The "Berlin patient" case has provided evidence that knocking down CCR5 expression has the potential to cure HIV-1 infection. Currently, a gene therapy trial using a CCR5 zinc finger nuclease is being conducted, and preliminary results indicate that the treatment is generally well tolerated. 75,76 Despite the promising results obtained with CCR5 knockdown, these strategies do not protect against X4-tropic HIV-1 or dual R5/X4 tropisms, which emerge with disease progression. 21,77 For that reason, a dual-therapy design has been chosen in which the CCR5 shRNA sh5 and the membrane-anchored C46 peptide have been engineered into a single lentiviral vector. C46, shown previously to effectively inhibit a broad range of viral isolates, was the first HIV-1 entry inhibitor to be tested in a clinical trial and was found to be safe with no major toxicities. 47 Recently, Kimpel et al. 78 evaluated three leading genetic strategies for their potency to inhibit HIV-1 replication and demonstrated C46 to be the most robust in HIV-1 inhibition in T cells compared to tat/rev shRNA and RNA antisense against the HIV-1 envelope. The combinatorial therapy of CCR5 shRNA and C46 peptide was found in this study to have a potentially synergistic effect on HIV-1 inhibition compared to the function of the singular genes (Figure 6c,d), which is supported by previous studies using similar combinatorial approaches. 79 Although lentiviral vectors expressing sh5 alone or C46 alone were able to protect PBMCs against R5- or X4-tropic HIV-1 (respectively), this effect was observed only at a low level of infection, as indicated by the level of p24 protein in control cells (Figure 7b). When infection levels were high, the dual-therapeutic vector displayed the most prominent inhibition of HIV-1 replication (Figure 6c,d). LVsh5/C46 has also been characterized for safety and efficacy in vivo using a variety of humanized mouse models. During a GLP pharmacology/toxicology study, LVsh5/C46-modified human CD34 + HSPC transplanted into NSG mice displayed normal hematopoietic engraftment and differentiation, as well as a safe vector integration site profile; also, a humanized bone marrow-liver-thymus (BLT) mouse model study supported the ability of LVsh5/C46-modified hematopoietic cells to protect CD4 + T cells from HIV-1 pathogenesis in vivo and reduce viral load within peripheral blood and tissues (manuscript in preparation). Supported by the preclinical safety and efficacy studies presented herein, the LVsh5/C46 vector is now being evaluated in a phase 1/2 clinical trial in which autologous hematopoietic cells are transduced ex vivo, followed by infusion back into HIV+ subjects (ClinicalTrials.gov NCT01734850). The LVsh5/C46 anti-HIV-1 genes are designed to protect not only the mature T lymphocyte and macrophage populations but also the HSPC reservoirs that give rise to those progeny cells.
This is the first clinical trial using RNA interference to downregulate CCR5 expression in combination with the fusion inhibitor C46 peptide in T cells and HSPC. Cell isolation and culture Human primary PBMCs. Buffy coats were obtained from healthy donors from the Australian Red Cross Blood Service. PBMCs were isolated from buffy coats by Ficoll Paque Plus (GE Healthcare, Uppsala, Sweden, #17-1440-02) density-gradient centrifugation, and the isolated cells were cultured in RPMI (Life Technologies, Carlsbad, CA, #72400047) supplemented with 20% fetal bovine serum and PHA-P for 48 hours. On the day of transduction, 10 U/ml rhIL-2 was added. HEK293T cell line. These cell lines were used for viral production and were cultured in Dulbecco's Modification of Eagles Medium and 10% fetal calf serum. LVsh5/C46 plasmid construction To clone LVsh5/C46 (Cal-1) lentiviral plasmid, the FG12 plasmid 31 was first modified by adding multiple cloning sites to generate the FG11 plasmid. 36 The membrane-anchored C46 sequence (containing a signal peptide, the C46 peptide, a hinge, and a membrane-spanning domain) 43 was then cloned into BamHI/EcoRI sites within the FG11 plasmid downstream of the UbC promoter to produce FG/C46. The FG/C46 plasmid was digested with NdeI/XhoI, leaving a fragment of 7.5 kb, containing the UbC promoter and the C46 peptide. Next, the FG12 H1shRNACCR5 lentiviral vector 41 was digested with NdeI/XhoI, and the 2.4 kb fragment, containing the human H1 RNA polymerase III promoter and CCR5 shRNA (1005) sequence was ligated into the same sites of FG/C46. Lentiviral vector production and transduction To produce pseudotype virus, the lentiviral SIN vectors were cotransfected with three helper plasmids: gag/pol helper plasmid, the HIV-1 Rev plasmid, and the VSV-G envelope plasmid. The four-plasmid system was transfected into HEK293T cells by calcium phosphate method (Clontech, Mountain View, CA), as described previously. 80 For small-scale production, virus-containing media (VCM) was collected 24 and 48 hours posttransfection and pooled. VCM was concentrated by ultracentrifugation (25,000 rpm) over a 20% sucrose solution. Large-scale virus was produced in a manufacturing facility under either GLP or GMP conditions, based on a method published previously. 81 VCM was purified by Mustang Q followed by concentration using tangential flow filtration filters. Lentiviral vectors were diluted (1/8-1/512 for unconcentrated samples or 1/10-1/30,000 for concentrated samples), and 1 ml used to transduce 1 × 10 5 293T cells in a 12-well plate. Seventy-two hours posttransduction, cells were analyzed for enhanced green fluorescent protein expression or stained with 2F5 for cell-surface expression of C46. Samples were transduced in duplicate. Using the percentage of positive cells, titer was calculated according to the following formula: RCL detection RCL rapid analysis was performed on the large-scale GLP and GMP viral production batches made by Indiana University Vector Production Facility. The assay was modeled on guidelines recommended by the US Food and Drug Administration for detecting replication-competent retrovirus. In the amplification phase, a test sample of concentrated LVsh5/C46 lentiviral vector, representing 5% of the total volume, was used to inoculate a cell line C8166-45 (derived from human umbilical cord blood lymphocytes). The inoculated cells were cultured for a minimum of 21 days to allow potential amplification of any RCL, and the supernatant was collected. 
In the indicator phase, fresh C8166-45 cells were infected with the supernatant from the amplification step and passaged for 7 days. At the end of this phase, supernatants and genomic DNA were collected for ELISA and PCR, respectively. Presence of RCL in the test sample was indicated by detection of p24 antigen in the indicator cell supernatants and psi-gag sequences in indicator cell DNA 61 . In the coculture, RCL assay postproduction cells were used as a test sample. 1 × 10 8 of HEK293T cells that were used to produce the two batches of the GMP LVsh5/C46 vector were cocultivated with C8166-45 cells, followed by the amplification and induction phases as described above to detect RCL. Viral isolates NL4-3 and NFNXS were prepared by calcium phosphate transfection of 293T cells with either NL4-3 or NFNXS plasmid DNA. VCM was collected 72 hours posttransfection, and aliquots stored at −80 °C. BaL, SF162, Bru, 92UG021, and 92US723 were prepared by infecting cellfree supernatant from PBMCs onto fresh uninfected PBMCs in the presence of 8 µg/ml Polybrene. Infections were performed over 2-2.5 hours with gentle rocking, followed by one wash of the cells and culture in RPMI containing 20% fetal bovine serum and 100 U/ml IL-2. Fresh culture media and PBMCs were added from days 7 through 21. Supernatant was collected every 3 days from day 14, and aliquots stored at −80 °C. BaL and SF2 used in Molt4/CCR5 experiments were prepared by infecting PM-1 cells with cell culture supernatant containing either BaL or SF2 in the presence of 8 µg/ml Polybrene. Infections were performed over 2-2.5 hours with gentle rocking. At completion of infection, cells were washed once and put into culture. Fresh media and PM-1 cells were added from days 7 through 21. Supernatant was collected every 3 days from day 14, and aliquots stored at −80 °C. Titers for all virus batches were determined using Perkin Elmer HIV-1 P24 ELISA. Flow cytometry (FACS) Up to 1 × 10 6 cells were transferred to FACS tubes, washed in 1 ml of phosphate-buffered saline, and stained with antibodies according to the manufacturer's recommendations. Cells were incubated for 20-30 minutes (room temperature or 4 °C), washed in 1 ml of phosphate-buffered saline, and fixed in 2% paraformaldehyde for 30 minutes at 4 °C prior to data acquisition on an LSRII (Becton Dickinson, San Jose, CA) using FACS Diva software. Data were analyzed using FlowJo Software (TreeStar, Ashland, OR). As controls, unstained samples and/or relevant isotype control antibodies were used. For endogenous CCR5 expression, CD195 (APC or PE-Cy7 fluorophores; Becton Dickinson) was used for staining. The fluorescently stained cells were detected by flow cytometry. Transduction efficiency was determined by calculating the percentage of C46-positive cells. Evaluation of cell purity. CD4 + cells were stained with CD4, CD3, and CD8 antibodies (CD4 FITC or PE, CD3 PerCP-Cy5.5 and CD8 PE, Becton Dickinson) and analyzed by flow cytometry to calculate the percentage of positive cells. Purity of CD34 + was determined by using anti-CD34 antibodies (Becton Dickinson). Data analysis was performed with Stratagene qPCR MxPro software. LVsh5/C46 plasmid with known concentration was serially diluted in a background of 10 ng/µl total PBMC RNA to generate standard curves. Known amounts of total PBMC RNA were serially diluted in H 2 O, converted to cDNA, and diluted in 1:20 in H 2 O to generate a β2-microglobulin standard curve. 
The copy number of the C46 transgene was determined from the plasmid standard curve and normalized to β2M levels, as determined from the starting input amount of RNA, to determine relative C46 expression.

sh5 RT-qPCR. Total RNA was extracted using either the Qiagen miRNeasy Kit or the Ambion mirVana miRNA Isolation Kit according to the manufacturer's instructions. Extracted RNA (10 ng per reaction) was run in duplicate in Taqman custom-designed small RNA and microRNA assays to determine the relative expression of sh5. cDNA was generated using a Taqman MicroRNA reverse transcription kit. Thermocycling was as follows: 16 °C 30 minutes; 42 °C 30 minutes; 85 °C 5 minutes; hold 4 °C. cDNA was diluted 1 in 5 in nuclease-free water, and 5 µl added to each custom Taqman PCR assay set up with Taqman Universal Master Mix No UNG in 20 µl reactions in 96-well plates on the Stratagene Mx3000P. Thermocycling conditions were 50 °C for 2 minutes, 95 °C for 10 minutes, 40 × (95 °C for 15 seconds; 60 °C for 1 minute). Standard curves were generated using synthetic RNA oligos for both sh5 and RNU38B (the control gene used to normalize sh5). RNA oligos of known copy number (10 7 to 10 1 copies) were diluted 10-fold in a background of 10 ng/μl tRNA. Data analysis was performed with Stratagene qPCR MxPro software. Expression of sh5 was determined from the sh5 standard curve and normalized to RNU38B.

Apoptosis assay
Apoptosis assays were performed using the Caspase-Glo 3/7 Assay (Promega, #G8091), measuring caspase-3 and -7 activities, which play key effector roles in apoptosis. Cells were transduced overnight on 2.5 µg/cm 2 RetroNectin-coated plates and the following day transferred to 96-well plates at 5 × 10 4 cells per well in 100 µl culture volume with three replicates per condition. Seven days posttransduction, 100 µl of Caspase-Glo 3/7 Reagent was added to each well, and the plates were incubated for a further 2-3 hours at room temperature.

Proliferation assay
Proliferation assays were performed using the Premix WST-1 Cell Proliferation Assay System (Takara, #MK400), which measures cell proliferation based on the enzymatic cleavage of the tetrazolium salt WST-1 to a water-soluble dye that can be detected by absorbance at 450 nm. Cells were transduced overnight on 2.5 µg/cm 2 RetroNectin-coated plates, and the following day transferred to 96-well plates at 5 × 10 4 cells per well in 100 µl culture volume with 6 replicates per condition. Seven days posttransduction, 10 µl of Premix WST-1 was added to each well, and the plates were incubated for a further 2.5-4 hours at 37 °C. Absorbance was measured at 450 nm on a Fluostar Optima (BMG Labtech). As a positive control for proliferation, cells were incubated with 100 ng/ml rhIL-2. Negative controls were culture media alone (blank) and untransduced cells. Fold change in absorption was determined relative to the untransduced control.

Inflammation response assay
Inflammation response assays were performed on culture supernatants using the Quantikine Human IFN-γ ELISA kit (R&D Systems, #DIF50), the Quantikine Human IL-6 ELISA kit (R&D Systems, #D6050), and the VeriKine Human IFN-α ELISA kit (PBL Interferon Source, Piscataway, NJ, #41100). Cells were transduced overnight on 2.5 µg/cm 2 RetroNectin-coated plates, and the following day transferred to 96-well plates at 5 × 10 4 cells per well in 100 µl culture volume with 3 replicates per condition. At 7 days posttransduction, supernatant was removed from the culture and stored at −80 °C for batching of analysis. ELISA assays were performed according to the manufacturer's instructions.
Standard curves for each cytokine to be measured were prepared from the standards supplied with the kits. Absorbance at 450 nm was measured on a Fluostar Optima (BMG LabTech). Cytokine levels in the sample supernatants were read off the standard curve, and fold changes were determined for the transduced samples relative to untransduced cells. HIV-1 challenge assays HIV-1 challenges on PBMCs were performed by pelleting 5 × 10 5 cells in HIV-1 viral supernatant containing 8 µg/ml polybrene. Cells were then incubated at 37 °C for 2-2.5 hours with gentle tapping every 15 minutes. Postincubation, cells were washed once and put into cultures of 1.5 ml in a six-well plate. The cultures were then sampled for p24 at either day 4 or 6. HIV-1 challenges on Molt4/CCR5 cells were performed by pelleting 1 × 10 6 cells in HIV-1 viral supernatant containing 8 µg/ml Polybrene. Cells were then incubated at 37 °C for 2-2.5 hours with gentle rocking. Postincubation, cells were washed once and put into cultures of 3 ml per T25. Cultures were sampled on days 7 or 8, and 13 or 14, and fresh media added after sampling. Evaluation of p24 by ELISA assay. p24 was determined by PerkinElmer HIV-1 p24 ELISA following the manufacturer's instructions. Analysis of integrated vector genomes VERO cells were transduced with GMP-grade LVsh5/C46vector, genomic DNA was then isolated, and digested with NotI/Bsu36I and subjected to southern blot analysis using an 890 bp probe positioning at the 5′ end of the LVsh5/C46 sequence (NotI/PstI). The LVsh5/C46 plasmid, digested with NotI/ Bsu36I, was used as a positive control. CFU assay Cells are plated at 500 cells per dish in methylcellulose media (Methylcult Optimum, Stem Cell Technologies) to enable colonies to form from individual cells. Cultures are incubated in a humidified dish for 14-16 days and then scored.
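The vector titration paragraph in the methods above refers to a formula that is not reproduced in the text. A minimal sketch is given below, assuming the standard end-point dilution calculation: transducing units (TU) per ml are estimated from the fraction of marker-positive (eGFP+ or C46+) cells, the number of target cells at transduction, the dilution factor, and the inoculum volume. The example readout is hypothetical.

```python
def titer_tu_per_ml(percent_positive: float,
                    cells_at_transduction: float,
                    dilution_factor: float,
                    inoculum_volume_ml: float) -> float:
    """End-point dilution titer estimate (TU/ml). Most reliable when the
    percentage of positive cells is low enough that most transduced cells
    carry a single vector copy."""
    transduced_cells = (percent_positive / 100.0) * cells_at_transduction
    return transduced_cells * dilution_factor / inoculum_volume_ml

# Hypothetical readout: 12% C46+ cells from 1e5 HEK293T cells transduced with
# 1 ml of a 1/1,000 dilution of concentrated vector.
print(f"{titer_tu_per_ml(12.0, 1e5, 1_000, 1.0):.2e} TU/ml")
```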
3D imaging of PSD-95 in the mouse brain using the advanced CUBIC method Aims Postsynaptic density – 95 kDa protein (PSD95) is an important molecule on the postsynaptic membrane. It interacts with many other proteins and plays a pivotal role in learning and memory formation. Its distribution in the brain has been studied previously using in situ hybridization as well as immunohistochemistry. However, these studies are based on 2 dimensional (2D) sections and results are presented with a few sections. The present study aims to show PSD-95 distribution in 3 dimensions (3D) without slicing the brain tissue of C57BL/6 mice into sections using the advanced CUBIC technique. Methods Immunofluorescent staining using a PSD-95 antibody was performed on a half of the mouse brain after clarifying it using the advanced CUBIC protocol. The brain tissue was imaged using a Zeiss Z1 light sheet microscope and 3D reconstruction was completed using the Arivis Vision 4 dimensional (4D) software. Results The majority of brain nuclei have similar distribution pattern to what has been reported from in situ hybridization and immunohistochemical studies in the mouse. The signal can be easily followed in the 3D and their spatial relationship with adjacent structures clearly demarcated. In the present study, some fiber bundles also showed strong PSD-95 signal, which is different from what was shown in previous studies and need to be confirmed in future studies. Electronic supplementary material The online version of this article (10.1186/s13041-018-0393-4) contains supplementary material, which is available to authorized users. Main text Postsynaptic density protein of 95 kDa (PSD-95) is a scaffolding protein encoded by the disc large homolog 4 (DLG4) gene. It is highly enriched in the postsynaptic membrane and interacts with multiple synaptic proteins [1] through its three Post synaptic density protein, Drosophila disc large tumor suppressor, and Zonula occludens-1 protein (PDZ) domains. It plays an important role in synaptic plasticity [2], learning and memory [3], and other functions depending on the proteins it binds to. The expression of PSD-95 in the mouse brain has been described in traditional studies [4][5][6]. PSD-95 is highly expressed in the cerebral cortex, hippocampus, and striatum, and only moderately or weakly expressed in other brain regions. Our research has focused on the molecular and histological changes in the brain of disrupted in schizophrenia 1-locus impairment (DISC1-LI) mice, which is known to be relevant to the neuropathology of schizophrenia. The 3D imaging results presented here were from the C57BL/6 mouse as part of our histological analysis comparing the expression changes of PSD-95 in DISC1-LI mice with that of control C57BL/6 mice in the 3D space. Brains of 12-week old C57BL/6 male mice were fixed with 4% paraformaldehyde and dissected before being cut into two halves sagittally. Immunofluorescent staining using a PSD-95 antibody was performed on a half of the brains after clarifying them using the advanced CUBIC protocol [7]. The brain tissue was imaged with a Zeiss Z1 light sheet microscope using a 20× clearing objective and 3D reconstruction was completed using the Arivis Vision 4 dimensional (4D) software. In the forebrain, we found strong positive PSD-95 signal in the internal plexiform and the mitral cell layers of the olfactory bulb ( Fig. 1a and b), which was similar to what has been reported in the in situ hybridization [4,8,9] and immunohistochemistry studies [5,6]. 
Ventral to the cerebral cortex, which had weak PSD-95 signal, moderate signal was observed in the corpus callosum except for the area dorsal to the hippocampus (Fig. 1b and c). The positive signal in this stripe was aligned in the same direction as that observed in the caudate putamen, radiating towards the meeting point of the stria terminalis and the internal capsule, which is consistent with results from other studies [4][5][6][9]. The hippocampal fissure showed a narrow line of weak signal (Fig. 1b and c), which is in contrast to what was reported in a study using lacZ [6]. In a human postmortem study, it was reported that schizophrenia and bipolar patients showed a lower level of PSD-95 in the hippocampus, especially in the molecular layer of the dentate gyrus, suggesting the diagnostic value of PSD-95 in the reported diseases [10]. Strong signal was also found in the anterior part of the anterior commissure, the fornix, the optic nerve, and the cerebral peduncle ventral to the substantia nigra (Fig. 1b and c). This is in clear contrast to what has been reported in the lacZ gene knockin mice [6] and in other in situ hybridization [4,9] and immunohistochemical [5] studies. In those studies, most if not all fiber tracts are devoid of PSD-95 signal. Moderate signal was observed in the shell of the accumbens nucleus, the ventral pallidum, the nucleus of the horizontal diagonal band, the lateral preoptic area, and the majority of hypothalamic nuclei. In the thalamus, the rostral half had moderate PSD-95 signal, whereas the caudal half had weak or no PSD-95 signal (Fig. 1b and c). This is different from what has been reported by Porter et al. [6]. Positive signal was observed in the majority of midbrain nuclei and nuclei in the rostral hindbrain (the rest of the hindbrain and cerebellum were cut off). Positive signal in the midbrain and the hypothalamus tended to follow the course of large fibers travelling rostrocaudally. In the middle part of the midbrain in sagittal sections, a small band of signal appeared to travel dorsoventrally. Ventral to this band, positive signals converged towards the ventral surface of the hindbrain (Fig. 1b). These features were not observed in previous in situ hybridization and immunohistochemical studies. The present study showed the expression of PSD-95 in the mouse brain using an emerging 3D technique, the advanced CUBIC method, and confirmed some previous reports about PSD-95 in the majority of brain nuclei (Additional file 1). We showed that some fiber bundles were positive for PSD-95, which has not been reported in previous studies. To address this concern, we used Western blotting (not shown) and confirmed the specificity of our PSD-95 antibody. Our finding may be explained by the fact that lipids in the neuropil had been removed by the clearing solution, so that the antibody could easily bind to the PSD-95 epitope, whereas in conventional sections there is no such step and the interaction between PSD-95 and its antibody may be blocked to some extent, especially in a fiber bundle where the fibers are tightly bound to each other. The present study did not include the entire hindbrain due to the difficulty of mounting the clarified tissue, which was very soft, onto the glass capillary for imaging. Based on the findings from the rostral hindbrain, which are similar to what has been shown in in situ hybridization studies [4,9], it is expected that the majority of hindbrain nuclei will have weak to moderate PSD-95 signal.
The cerebellum might have a higher level of signal than the other areas in the hindbrain as indicated by the in situ hybridization studies. The advanced CUBIC is an efficient technique in clarifying the mouse brain tissue (Additional file 2). For the same reason, it might lead to more protein loss than passive Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging/ Immunostaining/In situ hybridizationcompatible Tissue hYdrogel (CLARITY) and other similar techniques. Currently, no method is ideal for preserving protein and rendering tissue transparent for imaging. Better clearing and imaging techniques with longer working distance will be in demand in order to show tissue integrity in the 3D video. Additional files Additional file 1: 3D video of fluorescent PSD-95 signal reconstructed using Arivis. Strong fluorescent signal was observed in the large fiber bundles such as the anterior commissure, fornix, stria terminalis of the thalamus, and corpus callosum. Weak to moderate signal was observed in a large number of nuclei in the forebrain and midbrain. (AVI 34006 kb)
Evaluation of the effective mirror area of CTA Small-Sized Telescopes for camera design and Monte Carlo simulation The effective mirror area of an imaging atmospheric Cherenkov telescope is a crucial key parameter for trigger threshold determination and energy calibration. It is usually calculated by 3D ray-tracing simulation using a simplified telescope model, and the result is used in Monte Carlo simulations. However, simplified telescope and camera models are not adequate for the Schwarzschild-Couder configuration to be used in Small-Sized Telescopes (SSTs) of the Cherenkov Telescope Array. This is because the complex 3D structure of the secondary mirror, telescope masts, and camera body block a significant fraction of Cherenkov and night-sky photons. To evaluate the effective mirror area of an SST and to finalize its camera body design with minimal shadowing, a complex 3D model was built and simulated using the ROBAST ray-tracing library. A camera body size of 570 mm and a window size of 430 mm were selected for the final camera design based on the evaluation of shadowing by simulation. A non-axisymmetric effective area distribution was determined via the modeling of the complex telescope structure, while meeting the SST effective area requirement. Introduction The effective mirror area of gamma-ray and cosmic-ray telescopes is a key parameter used in Monte Carlo (MC) simulations, in which the photon tracks of atmospheric Cherenkov or fluorescence radiation are traced.In addition to understanding the optical properties of the atmosphere and focal-pane photodetectors, an accurate evaluation of the effective mirror area can improve the energy calibration and MC performance studies of the telescopes. The optical systems conventionally used for gamma-ray and cosmic-ray observations are parabolic and Davies-Cotton (DC) systems.The evaluation of the effective area of these systems is not technically difficult or complex because they are (segmented) single-mirror telescopes.In these systems, the specular reflection of photons occurs only once on the mirror surface.Therefore the obscuration by the telescope structure ("shadowing") is considered only before the reflection in most cases. In addition to the conventional telescope designs, the Schwarzschild-Couder (SC) configuration has been proposed for future imaging atmospheric Cherenkov telescopes (IACTs) [1], which have aspherical primary and secondary mirrors.This configuration enables us to simultaneously realize a wide field of view (FOV) and high angular resolution using a compact small-plate-scale camera.However, calculating the effective area of an SC system is complex because the telescope masts supporting the secondary mirror and camera are located between the primary and secondary mirrors.Therefore, incident photons may be obscured by the masts before being reflected by the primary mirror and again before being reflected by the secondary mirror.In addition, the camera may block photons if its body diameter is excessively large.Therefore, non-sequential ray-tracing simulation and accurate three-dimensional (3D) modeling of the telescope and camera structure are required to evaluate the effective mirror area, which has been approximated in previous simplified simulations. 
The use of the SC configuration in IACTs has been realized in prototype telescopes of the Cherenkov Telescope Array (CTA) [2,3]. CTA is the next-generation ground-based gamma-ray observatory that will comprise different telescope designs: Large-Sized Telescopes (LSTs, segmented 23 m parabola), Medium-Sized Telescopes (MSTs, 12 m DC and 10 m SC), and Small-Sized Telescopes (SSTs, 4 m SC). Among the four different telescope designs of CTA, this paper focuses on the effective mirror area evaluation of the SC optical system of the SSTs. To meet the CTA requirement for the minimum effective area of the SSTs and to finalize the camera body design, we first calculate the effective mirror area as a function of different camera body sizes in Section 2. In Section 3, we calculate the effective area with the final camera design, including its various components that can potentially block photon tracks between the primary and secondary mirrors. The nonuniformity of the effective area in the camera FOV is discussed as well.

SST Camera Design

CTA SST requirements include a camera FOV radius larger than 4° (namely 8° diameter) and an effective mirror area larger than 5 m 2 over the FOV, without considering the losses caused by photodetector gaps and camera window transmittance. To meet these requirements, not only the telescope structure but also the camera body size must be carefully designed to prevent significant effective area losses from shadowing. As shown in Fig. 1 and Fig. 2(a), the SST camera will have silicon photomultiplier (SiPM) tiles on the spherical focal plane to cover the > 8° FOV with 2048 pixels. They are connected to the front-end and back-end electronics and covered by a UV-transparent flat window. If the camera doors and/or camera body enclosing the electronics are excessively large, they might interfere with the photon tracks between the primary and the secondary mirrors. However, if a small window diameter is selected to reduce the window production cost and to minimize the interference, the window edge may block the incident photons at the SiPM edges, because the angles of incidence range from ∼30° to ∼60°. Therefore, the camera body must be sufficiently small, whereas the camera window must simultaneously be sufficiently large, to address both shadowing causes. First, we evaluated the impact of the window diameter on shadowing by simulating the SST optical system with different window diameters. The telescope structure shown in Fig. 2(a) was fully simulated; however, the camera body structure was simplified by including only a simple camera box and an octagonal window frame. The ROBAST ray-tracing library [5,6] was used in the simulations to model the 3D structures and to trace the photon tracks. One hundred thousand parallel photons, randomly distributed in a 2.5 m radius circle, were projected onto the primary mirror, and the number of photons reaching the focal plane was counted to calculate the effective mirror area. This simulation was repeated for polar angles within the range 0.00°-5.50° (0.05° step) and different azimuthal angles, as shown in Fig. 3(a). The diameter of the camera in the simulation was scanned from 360 to 450 mm with a step of 10 mm.
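The effective-area calculation just described reduces to simple bookkeeping: if n of the N parallel photons thrown uniformly over the 2.5 m radius circle reach the focal plane, the effective mirror area is (n/N) × π R². The sketch below shows that bookkeeping with a stand-in acceptance function; the toy geometry in `toy_acceptance` is an arbitrary assumption and does not reproduce the ROBAST optics or the real SST shadowing.

```python
import math
import random

def effective_area_m2(n_photons: int, radius_m: float, reaches_focal_plane) -> float:
    """Monte Carlo effective area: hit fraction times the thrown-beam area."""
    hits = 0
    for _ in range(n_photons):
        # Sample a uniform point on the disc of radius R (area pi * R^2).
        r = radius_m * math.sqrt(random.random())
        phi = 2.0 * math.pi * random.random()
        x, y = r * math.cos(phi), r * math.sin(phi)
        if reaches_focal_plane(x, y):
            hits += 1
    return (hits / n_photons) * math.pi * radius_m ** 2

def toy_acceptance(x: float, y: float) -> bool:
    """Placeholder for the ray tracer: photons landing under a 0.5 m central
    'camera shadow' or outside a 1.6 m mirror radius are treated as lost."""
    rho = math.hypot(x, y)
    return 0.5 < rho < 1.6

random.seed(1)
print(f"toy effective area: {effective_area_m2(100_000, 2.5, toy_acceptance):.2f} m^2")
```

In the real calculation the acceptance test is replaced by the full non-sequential ROBAST ray tracing for each incidence direction on the camera FOV.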
As shown in Fig. 3(a), the 450 mm camera window exhibited a smooth and moderate degradation of the effective area at large off-axis angles; however, the 360 mm window showed a steep drop within the SiPM tile coverage. Such a drop in the effective area over the SiPM tiles is undesirable for triggering and image analysis. Therefore, a diameter of 430 mm was selected for the camera window in our final camera design to ensure a uniform effective area.

After determining the diameter of the camera window, we evaluated the effect of the camera enclosure and window frame diameters. Increasing these parameters enables us to flexibly and easily design the internal structure and cooling system of the camera; however, an excessively large camera design may block photons passing from the primary mirror to the secondary mirror. Fig. 4(a) shows a side view of one of the 3D camera models to be simulated. The effective mirror area was calculated using the same method, but with different camera enclosure and window frame sizes. Fig. 4(b) compares the simulation results of seven configurations, B0-B6, including two unrealistic options, i.e., B5 and B6, without doors or motors. The comparison of B0-B4 indicates that cameras with smaller enclosures exhibit a larger effective area, as expected. However, the difference between B0 (600 mm) and B3 (510 mm) is only ∼0.1 m 2 at a field angle of 4°. Therefore, the gain from a smaller camera design is only a few percent or less. This does not help improve the SST performance in energy bands below 1 TeV. Therefore, from an engineering perspective, an enclosure size of 570 mm was selected for the final camera design. A 490 mm frame size (a 430 mm window) was also selected because it slightly increases the effective area at around 5° compared to a 475 mm frame (a 415 mm window).

Final Design

As of July 2023, the SST camera design is being finalized for SST mass production [7]. The current design is based on the shadowing evaluation discussed in Section 2; the design includes a 430 mm window diameter and a 570 mm enclosure size. The Cherenkov ray-tracing part of sim_telarray, the common MC package used in CTA [8], assumes simplified telescope and camera models, as shown in Fig. 2(b). This is because a non-sequential ray-tracing simulation of complex telescope geometries requires significant computing power, and a sequential simulation with simplified models must be employed. Instead, sim_telarray has a configuration parameter (a function of the angles of incidence) called telescope_transmission that scales the effective mirror area to account for shadowing by the complex geometries.

Prior to the final camera design phase, rather simplified telescope and camera models were assumed in the calculation of the telescope_transmission function. The shadowing by the structure of the minor telescope components and the camera was ignored or underestimated in the previous large MC production in CTA ("Prod5"). Hence, we had to re-evaluate telescope_transmission for the latest MC production ("Prod6") to more accurately predict the CTA performance. Fig. 5 compares the ROBAST simulations for the simplified Prod5 model and a full 3D model for Prod6. The former is axisymmetric about the optical axis (i.e., FOV center); however, the latter is asymmetric because of the full 3D implementation of the telescope masts and camera window.
The nonuniformity is more visible in Fig. 6, where Fig. 5 is sliced in several directions. Owing to the nonuniformity in the full ROBAST simulation, the relative difference of the effective mirror area reaches approximately 10% around 2°. Hence, the energy reconstruction of a single gamma-ray event depends on the direction in the camera FOV, and the energy resolution can degrade if the nonuniformity is not considered in image analysis. In the current implementation of sim_telarray, the telescope_transmission function cannot be asymmetric. Instead, Prod6 uses a symmetric telescope_transmission averaged over azimuthal angles. Fig. 7 compares the telescope_transmission used in Prod5 and Prod6. The latter was calculated by comparing the simplified sim_telarray simulation and the full ROBAST simulation in this study. In a future version of sim_telarray, asymmetric telescope_transmission must be implemented to minimize the difference between MC simulations and real data.

Conclusion

We calculated the effective mirror area of the CTA SSTs using the ROBAST ray-tracing library and a full 3D model of the telescope and camera structure. A simulation was performed in the camera design phase to finalize a few important parameters of the camera body size. After the finalization, another simulation was performed to calculate the telescope_transmission parameter in sim_telarray for the CTA MC production Prod6. This study revealed an asymmetric effective area over the camera FOV, which must be considered in the Cherenkov image analysis for future gamma-ray observations.

Figure 1: 3D CAD mechanical model of the SST camera (2021 version). The octagonal camera window of diameter 430 mm protects the SiPM array aligned on the focal plane. The figure is reproduced from [4] under the Creative Commons License (CC BY-NC-ND 4.0).

Figure 2: (a) Detailed 3D model of an SST used in full ROBAST simulations. (b) Simplified 3D model of an SST assumed in sim_telarray.

Figure 3: (a) Distribution of the calculated effective mirror area for different camera FOV coordinates. Two window diameters are compared: 450 mm (left) and 360 mm (right). Four of the SiPM tile positions are represented using squares. (b) Calculated effective areas as a function of the field angle. Two slices, the center to a SiPM edge and the center to a SiPM corner, are depicted by the black and red curves (see also black and red arrows in (a)). Calculations with different window diameters (360-450 mm in 10 mm steps) are displayed concurrently. The curves for the 430 mm window are highlighted with thick lines. The difference between the black and red curves in 0°-3.5° is because of the asymmetric telescope masts and primary mirror.

Figure 5: (Left) Distribution of the effective mirror area calculated for individual coordinates on the focal plane. Simplified 3D models of the SST telescope and camera are assumed for Prod5. (Right) Same as left, but accurate 3D models are assumed for Prod6. Four SiPM tile positions at the camera edges are approximately indicated by the squares.

Figure 6: Comparison of the effective areas assumed in Prod5 and Prod6. The former is axisymmetric, whereas the latter is asymmetric. Thus, several azimuthal directions exhibit small (∼0.1 m 2 ) variations in Prod6.

Figure 7: Comparison of the telescope_transmission functions used in Prod5 (blue dashed line) and Prod6 (red long-dashed line).
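As described above, the Prod6 telescope_transmission is obtained by comparing the full ROBAST effective areas with the simplified sim_telarray ones and, because the parameter cannot currently depend on azimuth, averaging the ratio over azimuthal angles. The sketch below illustrates that averaging; the tabulated areas are hypothetical stand-ins, not the actual Prod5/Prod6 values.

```python
# Hypothetical effective areas [m^2]: full 3D model indexed by (field angle, azimuth),
# simplified model depending on field angle only.
field_angles_deg = [0.0, 2.0, 4.0]
azimuths_deg = [0.0, 90.0, 180.0, 270.0]

full_area = {
    0.0: {0.0: 7.1, 90.0: 7.0, 180.0: 7.1, 270.0: 6.9},
    2.0: {0.0: 6.9, 90.0: 6.3, 180.0: 6.8, 270.0: 6.4},
    4.0: {0.0: 6.2, 90.0: 5.8, 180.0: 6.1, 270.0: 5.7},
}
simplified_area = {0.0: 7.6, 2.0: 7.4, 4.0: 6.9}

for theta in field_angles_deg:
    # Ratio of full to simplified area at each azimuth, then azimuthal average.
    ratios = [full_area[theta][phi] / simplified_area[theta] for phi in azimuths_deg]
    transmission = sum(ratios) / len(ratios)
    spread = max(ratios) - min(ratios)
    print(f"field angle {theta:3.1f} deg: telescope_transmission = "
          f"{transmission:.3f} (azimuthal spread {spread:.3f})")
```

The residual azimuthal spread printed alongside the average is the part of the shadowing that a symmetric telescope_transmission cannot capture.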
What Web Ads , Blurbs and Introductions Tell Potential Dictionary Buyers about Users , User Needs and Lexicographic Functions The present article deals with an investigation aimed at establishing the extent to which existing dictionaries provide potential dictionary buyers/borrowers with clear, unmistakable and easily understandable information about user need situations that might prompt consultation of the dictionary in question. The investigation analyses four monolingual English phrasal verbs dictionaries and fi ve monolingual English specialised dictionaries. The primary sources of such information are identifi ed as back cover blurbs of dictionaries, introductions to dictionaries and web ads for dictionaries. In the analysis, statements about user need situations extracted from these information sources are fi rst classifi ed as clear vs. unclear statements. The clear statements are then classifi ed under the lexicographic function to which they are related. The results of the analysis disconfi rm the hypothesis that the more well-defi ned and constrained the intended user group or groups for a given dictionary are, the more likely it is that the sources of information will provide the potential dictionary buyer/borrower with clear, unmistakable and easily understandable information about lexicographic function(s). Introduction For someone who fi nds himself/herself in a situation that requires the consultation of a dictionary to solve a particular problem, there are various sources of information which -in the ideal casecan tell the potential dictionary user whether a given dictionary will satisfy his/her needs. If the need for consultation requires the purchase of a dictionary, the following sources of information are available. Students for example may ask teachers (and perhaps also fellow students) for advice. Other information sources include reviews, publishers' printed and online book catalogues, publishers' ads including publishers' web ads (usually linked to publishers' online catalogues), blurbs and book introductions (also called 'prefaces' or 'forewords'). This study will analyse publishers' ads (in this case web ads), back cover blurbs and book introductions for a number of monolingual English dictionaries with the purpose of establishing whether these sources of information provide the kind of information potential dictionary buyers or borrowers need. The analysis will be based on the functional theory of lexicography in the sense that it will attempt to uncover whether the three sources of information give clear, unmistakable and easily understandable information about the kind of user group or user groups the given dictionary is intended for and, more importantly, whether they provide the potential dictionary buyer with clear, unmistakable and easily understandable information about the lexicographic function(s) covered by the dictionary 1 , so that the potential dictionary buyer can readily establish whether the given dictionary will satisfy his/her extra-lexicographic needs. The three sources of information mentioned have been selected for this study because they are readily available to the potential dictionary buyer (provided he/she has access through a computer to the Internet). Not all dictionaries are reviewed, and it may furthermore be diffi cult and timeconsuming for a potential dictionary buyer to locate a review of a particular dictionary. Also, publishers' printed book catalogues are rarely readily available. 
With respect to 'introductions' (or 'prefaces/forewords'), they have only been included in the analysis if they are not too long or integrated into another front matter text. The longest introduction included in the study is the one found in Longman Phrasal Verbs Dictionary, stretching over three pages. The reason why long introductions or introductions integrated into other front matter texts should be excluded from the analysis is that potential dictionary buyers in the actual purchase situations are unlikely to read through very long texts in their search for relevant statements that can tell them whether the dictionary will satisfy their needs. The dictionaries analysed fall into two groups: a) Four monolingual English phrasal verbs dictionaries b) Five monolingual English specialised dictionaries (all published by Oxford University Press) 2 The hypothesis is that the more well-defi ned and constrained the intended user group for a given dictionary is, the more likely it is that the sources of information will provide the potential dictionary buyer with clear, unmistakable and easily understandable information about lexicographic function(s). This is based on the assumption that it is much easier to defi ne lexicographic function(s) for a clearly defi ned intended user group than for a diffuse user group. The three types of information sources have previously been studied from a variety of perspectives, mainly by genre analysts who have studied them with the aim of establishing communicative purpose(s) for these genres. Bhatia (1997) is a study of academic book introductions in which he establishes that such introductions mix a descriptive communicative purpose with a promotional communicative purpose. It also includes a discussion of possible differences between the terms 'introduction', 'preface' and 'foreword', for example with respect to authorship of these texts. His conclusion is that it is largely impossible to set up any clear-cut distinctions with respect to communicative purpose, authorship, etc. between 'introductions', 'prefaces' and 'forewords'. For this reason, no distinction between them will be made in this study. Bhatia (2004: 168-181) analyses three book blurbs (two from academic works and one from fi ction) and concludes that in fact all three blurbs share the same communicative purpose (description and evaluation), but there are differences between the fi ctional work on the one hand and the academic works on the other in terms of lexical choices in the blurbs, particularly with respect to adjectives. Gea-Valor (2005) investigates publishers' web site ads from four publishing companies (Penguin, Ballantine, Routledge, and Barnes & Noble). She fi nds that these ads share communicative purposes (persuasive and informative) with blurbs to such an extent that they constitute a single genre. Kathpalia (1997) is a study of cross-cultural differences between book blurbs of international publishers and local Singapore-based publishers. Cacchiani (2007) is an investigation of evaluative language in book blurbs taken from what she calls 'lazy reads', whose communicative purpose is almost exclusively promotional, whereas Gesuato (2007) is a study of evaluative language in back-cover blurbs of academic books. Basturkmen (2009) is a study of the blurbs of seven English as a Foreign Language course books with a view to identifying the values of the English Language teaching community. This is done through a study of the key lexical items in the blurbs. 
Finally, Cronin/La Barre (2005) define blurbs as book recommendations on dust jackets written by named authors (called 'blurbers'), so that a book may contain more than one blurb. Their analysis of 450 non-fiction books (history and business) with a total of 1850 blurbs had the aim of discovering whether there exist 'serial blurbers' (authors writing inordinate numbers of blurbs) or 'back-scratching blurbers' (authors writing blurbs for each other's books on a regular basis), but this could not be confirmed by their study. All of these studies are concerned with either works of fiction or academic prose works. None of them have studied web ads, blurbs or introductions for reference works such as dictionaries or encyclopedias. There is every reason to expect that web ads, blurbs and introductions for utility tools such as dictionaries and encyclopedias will differ in content and structure from web ads, blurbs and introductions for both fictional and academic prose works. First of all, the genuine purpose of dictionaries and encyclopedias is to fulfil punctual (either communicative or cognitive) needs that arise in a range of extra-lexicographic situations, although some dictionaries contain outer matter texts with a genuine purpose that resembles that of academic prose works, i.e. to satisfy global cognitive needs. On the other hand, the genuine purpose of fictional works is to satisfy emotional, entertainment (and possibly other) needs, and the genuine purpose of academic works is to satisfy global cognitive (often educational) needs, although textbooks in particular are often provided with indexes to allow consultation to satisfy punctual cognitive needs. Secondly, since dictionaries are compiled to cater for sometimes just one type of extra-lexicographic user need (monofunctional dictionaries), sometimes a multitude of extra-lexicographic user needs (polyfunctional dictionaries), potential dictionary buyers have a legitimate right to demand that those text genres that exist with the purpose of providing information about the user needs they were designed to fulfil give clear, unmistakable and easily understandable information about the data included in the dictionary to satisfy those user needs. Gouws (2007) and Andersen/Fuertes-Olivera (2009) offer suggestions as to how this information can be formulated so as to give the potential dictionary buyer a clear indication of the communicative and/or cognitive needs a specific dictionary is meant to satisfy. Gouws (2007) suggests that information about lexicographic function(s) could be given in the front matter texts of the dictionary. For a dictionary with both receptive and productive functions a formulation such as Help with the writing and understanding of texts would be very helpful. Likewise, for a dictionary with an exclusively cognitive function, the front matter texts could include a formulation such as Help with knowledge about language (or some other specific subject field). Andersen/Fuertes-Olivera (2009) is an investigation, based on the functional theory of lexicography, of five English monolingual business dictionaries with the aim of suggesting a functionally based classification of such dictionaries. In addition, and more importantly in this context, they give some proposals for adding extra information (for example in the blurb) about the specific functions (and types of users) the dictionary is adequate for.
They give the following proposals for the five business dictionaries investigated (adapted from Andersen/Fuertes-Olivera (2009: 236)): (1) a communicatively oriented dictionary, with a cognitive touch, for semi-experts and interested laymen with both text production and text reception needs; (2) a balanced cognitively and communicatively oriented dictionary for semi-experts and experts with mostly text reception needs; (3) a cognitively oriented dictionary for experts and semi-experts with text reception needs; and (4) a cognitively oriented dictionary for experts and semi-experts with text reception needs. Whether the theoretically oriented expressions such as A communicatively oriented dictionary, text production and text reception needs, etc. are adequate for a potential dictionary buyer with no knowledge of theoretical lexicographic terms can be questioned, but the proposals at least indicate in an unmistakable way which function(s) each dictionary is meant to satisfy. Methodology The methodology of this study consists in the extraction - from the three sources of information - of statements that are judged to contain more or less clear descriptions or expressions of extra-lexicographic need situations that might prompt consultation of the dictionary in question and therefore a desire to buy (or borrow) it. The statements are simply divided into statements that are judged to be clear statements about user need situations and statements that are judged to be unclear statements about user need situations. All statements appear in Appendix A. A borderline case is a statement where it is doubtful whether all potential dictionary buyers will interpret 'explanations/explain' as 'definitions/define' and thus conclude that the dictionary is intended to meet receptive needs. Another example is the statement explication of the new and sometimes baffling vocabulary associated with structured finance and the subprime lending crisis (Oxford Dictionary of Business and Management/Preface), where it is even more doubtful that potential dictionary buyers will interpret the term 'explication' to mean that they will find definitions that will help them understand the meaning of the vocabulary items in question. The same applies to the statement clarification of everyday business terms (Oxford Dictionary of Business and Management/Web ad). However, since at least some dictionary users (perhaps the more experienced ones) may be able to unravel the probable intended meaning of these statements, they have been classified as clear statements. In the lists of statements (see Appendix A), all extracted statements have been classified first as 'clear statements' or 'unclear statements'. Secondly, 'clear statements' have been classified under the lexicographic function to which they are related. A statement such as information about whether or not a phrasal verb is passive (Longman Phrasal Verbs Dictionary/Intro) has been classified under the lexicographic function 'Production' since the statement is intended to provide the potential dictionary buyer with information about the capability of the phrasal verb to appear in the passive voice.
A statement such as recommended web links for many entries - these links are a valuable source of extra information (Oxford Dictionary of Economics/Web ad) has been classified under the lexicographic function 'Cognition', because it tells the potential dictionary buyer that the dictionary is capable of guiding him/her to other sources of information where additional knowledge about the entry word in question can be obtained. A few statements in Macmillan Phrasal Verbs Plus, Cambridge Phrasal Verbs Dictionary and Oxford Phrasal Verbs have been classified under the lexicographic function 'Vocabulary Building'. This applies for example to the following statement: hundreds of synonyms and antonyms help build your vocabulary (Macmillan Phrasal Verbs Plus/Blurb). In the traditional functional theory of lexicography, 'Vocabulary Building' will probably be viewed as a sub-function under 'Cognition'. However, since these learner's dictionaries explicitly refer to this (important) aspect of language learning, 'Vocabulary Building' has been set up in this study as a separate lexicographic function. The following two statements in Oxford Dictionary of Law have been related to two different functions, namely both 'Cognition' and 'Production': the Writing and Citation Guide provides detailed advice on how to write and present essays on legal subjects (Oxford Dictionary of Law/Preface) and a useful Writing and Citation Guide that specifically addresses problems and establishes conventions for writing legal essays and reports (Oxford Dictionary of Law/Web ad). In most cases, consultation of this Writing and Citation Guide will be for cognitive reasons, i.e. not related to any specific communicative-productive situation, but we cannot rule out the possibility that on rare occasions, the Guide may be consulted in a specific communicative-productive situation. The same might perhaps apply to the following statements: Language Study articles on pronunciation, register, grammar, metaphor and learner errors (Macmillan Phrasal Verbs Plus/Blurb) and explanations of how particles contribute to the meaning of phrasal verbs (Macmillan Phrasal Verbs Plus/Blurb). However, in these cases it is very unlikely that users will consult these outer matter texts to solve communicative problems. They have therefore been classified only under the function 'Cognition'. Users With respect to statements about intended users, it clearly appears from the analysis that the four phrasal verbs dictionaries see themselves as English learner's dictionaries. This is explicitly stated in Macmillan Phrasal Verbs Plus/Intro, Cambridge Phrasal Verbs Dictionary/Intro, and in Oxford Phrasal Verbs/Blurb (front cover). Longman Phrasal Verbs Dictionary/Blurb further specifies that the dictionary is intended for 'advanced' and 'upper intermediate' learners of English. Longman Phrasal Verbs Dictionary/Web ad, Cambridge Phrasal Verbs Dictionary/Intro and Oxford Phrasal Verbs/Web ad mention 'learners' as an intended user group without further specification of type of learner. The same implicit information is given through the use of the term 'students' in Longman Phrasal Verbs Dictionary/Blurb, Macmillan Phrasal Verbs Plus/Web ad, Cambridge Phrasal Verbs Dictionary/Web ad, and Oxford Phrasal Verbs/Web ad. However, since all four dictionaries are monolingual English dictionaries, we must assume in all these cases that potential buyers of these dictionaries will take this information to mean that the dictionaries are intended for 'learners of English'.
Three of the phrasal verbs dictionaries restrict their intended user groups to this category whereas Longman Phrasal Verbs Dictionary/Blurb further gives 'general' as an intended user group. 'General' will probably have to be interpreted as 'the general public' and is probably included by the publishers in an attempt to reach as large a user market as possible. However, on the whole it can be concluded that the four phrasal verbs dictionaries indicate that their intended user group is quite clearly defined and constrained to learners of English. With respect to intended user groups for the five specialised dictionaries, the picture is quite different. They all mention 'students' and 'professionals' (mainly of the relevant subject field, i.e. the subject field covered by the dictionary) as intended user groups, and with the exception of Oxford Dictionary of Accounting, they also see 'teachers/lecturers' (also mainly of the relevant subject field) as potential dictionary users. 'Teachers/lecturers' as potential users are mentioned mainly in the web ads. But then the picture becomes blurred, cf. the following statements: These statements alone clearly show that the compilers/publishers of these specialised dictionaries have had the intention of appealing to so far-reaching a user market that we are left with the impression that they have had no clear perception of whom the dictionaries are intended for. We must therefore conclude that the five specialised dictionaries have no clearly defined and constrained intended user group(s). Functions As mentioned in the Methodology section, statements about lexicographic functions of the dictionaries have been classified as clear if there is no doubt about which user need(s) the statement refers to. That section provided a few examples. In the following, a few more examples are given, classified according to lexicographic function: Not only do potential dictionary buyers have a legitimate claim to be told to what extent a given dictionary can satisfy (a range of) user needs. They also have a legitimate claim to be given this information in a language they can understand. We already touched upon this issue in the Introduction where it was questioned whether the formulations containing theoretical lexicographic terms suggested in Andersen/Fuertes-Olivera (2009) will be understood by potential dictionary buyers. Statements have therefore been classified as clear only if they avoid the use of such terms. In fact, no statement extracted from the nine dictionaries analysed has used theoretical lexicographical terms, and we can therefore conclude that all clear statements are also easily understandable statements. An analysis of the main reasons for classifying statements as unclear with respect to the potential satisfaction of user needs reveals that for the phrasal verbs dictionaries many of these statements refer to linguistic data included in the dictionary articles or in outer matter texts, however without giving any clues as to which user needs they were included to satisfy. This applies for example to the following statement about synonyms and antonyms: if a phrasal verb has a synonym or a word that has almost the same meaning, this is shown at the end of that sense of the phrasal verb (Longman Phrasal Verbs Dictionary/Intro). In two of the dictionaries, there are in fact clear statements about synonyms and antonyms, but there is not total agreement as to what this kind of linguistic data can be used for in terms of satisfying user needs.
Macmillan Phrasal Verbs Plus states that synonyms and antonyms have been included to support 'Vocabulary Building': Some statements refer to special layout features such as highlighting or the use of symbols, but again it is difficult to deduce from these statements which specific user needs the features were included to satisfy. Examples include: In the Methodology section we have already mentioned and given examples of nouns such as 'information' and 'coverage' whose meaning is too general to give clues as to the data referred to, unless the noun is modified in some way so as to give the potential buyer a clue to the user needs the data are intended to satisfy. Examples from the phrasal verbs dictionaries include the following: up-to-date information about phrasal verbs in general English, as well as in business, Internet and computing contexts (Macmillan Phrasal Verbs Plus/Blurb). The verb 'include' is used in the same fashion in the following statement: includes marketing, accounting, organizational behaviour, global finance, business strategy, and taxation (Oxford Dictionary of Business and Management/Blurb). In fact, the number of statements in the information sources for the specialised dictionaries with 'coverage/cover' without some form of modification to explain which user needs the 'coverage' intends to satisfy is 26, i.e. almost half of the 56 unclear statements in the information sources for the specialised dictionaries. A few statements for the specialised dictionaries refer to the dictionary as a whole using such terms as 'guide', '(source of) reference', 'reference work' or 'source of information'. Examples include: guide to assist professional advisers in their work (Oxford Dictionary of Accounting/Preface); an essential source of reference (Oxford Dictionary of Economics/Blurb); a handy guide to legal terminology (Oxford Dictionary of Law/Blurb); the authoritative A-Z guide to the world of money (Oxford Dictionary of Finance and Banking/Blurb (front cover)). These terms do not in any way in themselves give any assistance to the potential dictionary buyer with respect to revealing information about intended lexicographic functions. Rough calculations of the proportion of clear statements to unclear statements about user needs in the three sources of information for the dictionaries analysed give the following results 4: If we look first at the total proportion of clear statements to unclear statements in the sources of information for the two groups of dictionaries (phrasal verbs dictionaries vs specialised dictionaries) (Tables 1 and 2), it is evident that the phrasal verbs dictionaries do better than the specialised dictionaries with respect to providing the potential dictionary buyer with clear statements about the user needs the dictionaries are intended to satisfy. First of all, however, the difference is not judged to be significant enough to fully support the hypothesis that the sources of information for dictionaries with clearly defined and clearly constrained intended user groups are much better at providing potential dictionary buyers with clear, unmistakable and easily understandable information about their capability of satisfying specific user needs. Secondly, there are significant differences within each group of dictionaries in this respect.
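The proportions referred to here amount to very simple bookkeeping over the manually classified statements. The sketch below is only an illustration of that kind of tally, not the author's actual working procedure; the example records and field layout are invented for the purpose.

```python
# Invented example records: (dictionary, information source, clarity, function).
statements = [
    ("Longman Phrasal Verbs Dictionary", "Intro", "clear", "Production"),
    ("Oxford Dictionary of Economics", "Web ad", "clear", "Cognition"),
    ("Oxford Dictionary of Law", "Blurb", "unclear", None),
    ("Oxford Dictionary of Law", "Web ad", "clear", "Production"),
]

def clear_unclear_proportions(records, group_field):
    """Tally clear vs. unclear statements grouped by dictionary (0) or source (1)."""
    counts = {}
    for rec in records:
        key = rec[group_field]
        clear, unclear = counts.get(key, (0, 0))
        if rec[2] == "clear":
            clear += 1
        else:
            unclear += 1
        counts[key] = (clear, unclear)
    # Return (clear, unclear, share of clear statements) for each group.
    return {k: (c, u, round(c / (c + u), 2)) for k, (c, u) in counts.items()}

print(clear_unclear_proportions(statements, group_field=1))  # per information source
```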
As far as the phrasal verbs dictionaries are concerned, Longman Phrasal Verbs Dictionary performs significantly better than Macmillan Phrasal Verbs Plus and Cambridge Phrasal Verbs Dictionary, and Oxford Phrasal Verbs performs somewhat better than Macmillan Phrasal Verbs Plus and Cambridge Phrasal Verbs Dictionary. With respect to the specialised dictionaries, Oxford Dictionary of Accounting and Oxford Dictionary of Law stand out compared with the other three specialised dictionaries with respect to providing clear, unmistakable and easily understandable information about the user needs they were designed to satisfy. In fact, both of these dictionaries perform better than Cambridge Phrasal Verbs Dictionary and almost as well as Macmillan Phrasal Verbs Plus. We must therefore conclude that the analysis cannot confirm the hypothesis that sources of information about lexicographic functions for dictionaries with clearly defined and constrained intended user groups are clearly better at providing potential dictionary buyers with clear, unmistakable and easily understandable information about the user needs they were designed to fulfil compared to sources of information for dictionaries with diffuse intended user groups. Table 3. Total proportion of clear statements to unclear statements in the three sources of information. If we turn for a moment to each of the sources of information (Table 3) in order to see whether there are significant differences between them with respect to the proportion of clear statements to unclear statements, we can first of all conclude that the picture is quite similar for the two groups of dictionaries. The analysis shows that for both groups of dictionaries blurbs and web ads provide the potential dictionary buyer with a higher proportion of clear statements to unclear statements than the introductions/prefaces. These results might be interesting if we could establish with certainty the authorship of each of the three types of information sources. We might assume that introductions/prefaces are mainly written by the editors/compilers of the dictionaries as a clear and objective guide to the contents and functions of the dictionaries. After all, editors/compilers may be expected to have a clear perception of who the intended users of their dictionaries are and which user needs they designed their dictionaries to satisfy - in other words, the function or functions of their dictionaries. We might also assume that blurbs and web ads are mainly written by the publishers of the dictionaries as marketing tools for the dictionaries with a less clear perception of intended users and lexicographic function(s). In essence, under these assumptions, we might expect introductions/prefaces to have a higher proportion of clear statements to unclear statements about lexicographic functions than the other two types of information sources. Unfortunately, the literature does not provide us with a clear picture of the authorship of the three types of information sources. As already mentioned, Bhatia (1997) was unable to establish unequivocal authorship for introductions to academic books. With respect to the authorship of blurbs, Cronin/La Barre (2005: 19) say that "Blurbs are brief, effusive and often edited by the publisher", while Bhatia (2004: 170) says that "It is a bit difficult to decide who actually writes the blurb. Is it the author of the book or the publisher? Or may both of them have a role to play?".
In any case, the remarks by both Cronin/La Barre and Bhatia relate to blurbs for academic books and should not be generalized so as to include also blurbs for reference works such as dictionaries. However, in four of the five specialised dictionaries 5, the prefaces are initialled by the editor of the dictionary and in one of the four phrasal verbs dictionaries 6, the introduction is signed by the chief editor of the dictionary, which must be taken as an indication that the introduction/preface was actually written by the editor. [Table 4] These results are remarkable if our assumptions with respect to authorship for blurbs and web ads hold, namely that these information sources are written by publishers' marketing people, particularly with respect to the specialised dictionaries, where the editors of the dictionaries are clearly more vague in their statements about dictionary functions. However, as already mentioned, verification of these conclusions will have to await further research into the authorship of the sources of information here investigated. Conclusion The hypothesis set forth in the introduction to this study - that the more well-defined and constrained the intended user group for a given dictionary is, the more likely it is that the sources of information, on which potential dictionary buyers can rely prior to the purchase of the dictionary, will provide the potential dictionary buyer with clear, unmistakable and easily understandable information about lexicographic function(s) - could not be confirmed. First of all, the differences between the proportions of clear statements to unclear statements in the information sources for dictionaries with well-defined and constrained target user groups (the phrasal verbs dictionaries) and the proportions of clear statements to unclear statements in the information sources for dictionaries with rather ill-defined and unconstrained target user groups (the specialised dictionaries) were not judged to be significant enough to provide confirmation of the hypothesis. Secondly, the analysis revealed significant differences within each group of dictionaries with respect to proportions of clear statements to unclear statements. These differences also serve to disconfirm the hypothesis. Appendix A: Lists of statements In the following lists, passages in italics are comments by the author of this article. a) students on business and management courses at all levels b) business professionals including lawyers, bankers, accountants, advertising agents and insurance brokers c) the general reader looking for clarification of everyday business terms (encountered, for example, in house-buying, tax returns, or share investment)
2019-05-20T13:03:14.182Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "571c72c585ccc4cc0abff7452aa8a799f37960bd", "oa_license": "CCBY", "oa_url": "https://tidsskrift.dk/her/article/download/97741/146896", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b78f43e65bda61e0e390449b3162b078630c285b", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
8886029
pes2o/s2orc
v3-fos-license
Lower bounds for bootstrap percolation on Galton-Watson trees Bootstrap percolation is a cellular automaton modelling the spread of an 'infection' on a graph. In this note, we prove a family of lower bounds on the critical probability for $r$-neighbour bootstrap percolation on Galton-Watson trees in terms of moments of the offspring distributions. With this result we confirm a conjecture of Bollobás, Gunderson, Holmgren, Janson and Przykucki. We also show that these bounds are best possible up to positive constants not depending on the offspring distribution. Introduction Bootstrap percolation, a type of cellular automaton, was introduced by Chalupa, Leath and Reich [1] and has been used to model a number of physical processes. Given a graph G and threshold r ≥ 2, the r-neighbour bootstrap process on G is defined as follows: Given A ⊆ V(G), set $A_0 = A$ and for each t ≥ 1, define $A_t = A_{t-1} \cup \{v \in V(G) : |N(v) \cap A_{t-1}| \ge r\}$, where N(v) is the neighbourhood of v in G. The closure of a set A is $\bigcup_{t \ge 0} A_t$. Often the bootstrap process is thought of as the spread, in discrete time steps, of an 'infection' on a graph. Vertices are in one of two states: 'infected' or 'healthy', and a vertex with at least r infected neighbours becomes itself infected, if it was not already, at the next time step. For each t, the set $A_t$ is the set of infected vertices at time t. A set A ⊆ V(G) of initially infected vertices is said to percolate if its closure is all of V(G). Usually, the behaviour of bootstrap processes is studied in the case where the initially infected vertices, i.e., the set A, are chosen independently at random with a fixed probability p. For an infinite graph G the critical probability is defined as the infimum of the values of p for which percolation occurs with positive probability. This is different from the usual definition of critical probability for finite graphs, which is generally defined as the infimum of the values of p for which percolation is more likely to occur than not. In this paper, we consider bootstrap percolation on Galton-Watson trees and answer a conjecture in [3] on lower bounds for their critical probabilities. For any offspring distribution ξ on N ∪ {0}, let $T_\xi$ denote a random Galton-Watson tree with offspring distribution ξ. For any fixed offspring distribution ξ, the critical probability $p_c(T_\xi, r)$ is almost surely a constant (see Lemma 3.2 in [3]) and we shall give lower bounds on the critical probability in terms of various moments of ξ. Bootstrap processes on infinite regular trees were first considered by Chalupa, Leath and Reich [1]. Later, Balogh, Peres and Pete [2] studied bootstrap percolation on arbitrary infinite trees and one particular example of a random tree given by a Galton-Watson branching process. In [3], Galton-Watson branching processes were further considered, and it was shown that for every r ≥ 2, there is a constant $c_r > 0$ so that r − 1 and in addition, for every α ∈ (0, 1], there is a positive constant $c_{r,\alpha}$ so that, Additionally, in [3] it was conjectured that for any r ≥ 2, inequality (1) holds for any α ∈ (0, r − 1]. As our main result, we show that this conjecture is true. For the proofs to come, some notation from [3] is used. If an offspring distribution ξ is such that P(ξ < r) > 0, then one can easily show that $p_c(T_\xi, r) = 1$. With this in mind, for r-neighbour bootstrap percolation, we only consider offspring distributions with ξ ≥ r almost surely. Definition 1. For every r ≥ 2 and k ≥ r, define and for any offspring distribution ξ with ξ ≥ r almost surely, define Some facts, which can be proved by induction, about these functions are used in the proofs to come.
For any r ≥ 2, we have $g_r^r(x) = \sum_{i=0}^{r-1} (1 - x)^i$ and for any k > r, Hence, for all distributions ξ we have $G^r$. Developing a formulation given by Balogh, Peres and Pete [2], it was shown in [3] (see Theorem 3.6 in [3]) that if ξ ≥ r, then Results In this section, we shall prove a family of lower bounds on the critical probability $p_c(T_\xi, r)$ based on the (1 + α)-moments of the offspring distributions ξ for all α ∈ (0, r − 1], using a modification of the proofs of Lemmas 3.7 and 3.8 in [3] together with some properties of the gamma function and the beta function. Recall that the gamma function is given, for z with ℜ(z) > 0, by $\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt$ and, for all n ∈ N, satisfies Γ(n) = (n − 1)!. The beta function is given, for ℜ(x), ℜ(y) > 0, by $B(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$. We shall use the following bounds on the ratio of two values of the gamma function obtained by Gautschi [4]. For n ∈ N and 0 ≤ s ≤ 1 we have $n^{1-s} \le \frac{\Gamma(n+1)}{\Gamma(n+s)} \le (n+1)^{1-s}$. Let us now state our main result. Combining equation (8) with equation (7) yields For every natural number n ∈ [1, r − 2], note that $\lim_{\alpha \to n^-} c_{r,\alpha} > 0$ and, by the monotone convergence theorem, there is a constant $c_{r,n} > 0$ so that This completes the proof of the lemma. In the above proof, as $\alpha \to (r-1)^-$, $c_1(r, \alpha) \to \infty$ and hence $\lim_{\alpha \to (r-1)^-} c_{r,\alpha} = 0$, so the proof of Lemma 3 does not directly extend to the case α = r − 1. We deal with this problem in the next lemma. Using a different approach we prove an essentially best possible lower bound on $p_c(T_\xi, r)$ based on the r-th moment of the distribution ξ. The sharpness of our bound is demonstrated by the b-branching tree $T_b$, a Galton-Watson tree with a constant offspring distribution, for which, as a function of b, we have $p_c(T_b, r) = (1 + o(1))\left(1 - \frac{1}{r}\right)\left(\frac{(r-1)!}{b^r}\right)^{1/(r-1)}$ (see Lemma 3.7 in [3]). Proof. As in the proof of Lemma 3.7 of [3] note that for every k ≥ r and t ∈ [0, 1], Using the lower bound in inequality (9) for the function $G_\xi^r(x)$ yields Since the maximum value of $G_\xi^r(x)$ is at least as big as $G_\xi^r(1 - t_0)$, by equation (3), . This completes the proof of the lemma. Theorem 2 now follows immediately from Lemmas 3 and 4. It is not possible to extend a result of the form of Theorem 2 to α > r − 1, as demonstrated, again, by the regular b-branching tree. For every α, the (1 + α)-th moment of this distribution is $b^{1+\alpha}$ and the critical probability for the constant distribution is . As we already noted, Lemma 4 is asymptotically sharp, giving the best possible constant in Theorem 2 for any r ≥ 2 and α = r − 1. We now show that for α ∈ (0, r − 1), Theorem 2 is also best possible, up to constants. In [3], it was shown that for every r ≥ 2, there is a constant $C_r$ such that if b ≥ (r − 1) log(4er), then there is an offspring distribution $\eta_{r,b}$ with $E[\eta_{r,b}] = b$ and $p_c(T_{\eta_{r,b}}, r) \le C_r e^{-b/(r-1)}$. It was shown that there are $k_1 = k_1(r, b) \le e(r-2)e^{b/(r-1)} - 1$ and A, λ ∈ (0, 1) so that the distribution $\eta_{r,b}$ is given by $\mathbb{P}(\eta_{r,b} = k) = \frac{r-1}{k(k-1)}$ for r < k ≤ $k_1$ with k ≠ 2r + 1, $\frac{1}{r} + \lambda A$ for k = r, and $\frac{r-1}{(2r+1)\cdot 2r} + (1 - \lambda)A$ for k = 2r + 1.
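As a purely illustrative companion to the r-neighbour process defined in the introduction, the sketch below uses Monte Carlo simulation to estimate the probability that the root of a depth-truncated Galton-Watson tree ends up infected when infection can only spread upward from descendants. This is not the construction used in the note: it ignores infection arriving from a vertex's parent, the offspring law, depth and trial counts are arbitrary choices, and no claim is made about how the estimate relates to $p_c(T_\xi, r)$.

```python
import random

def infected_from_below(depth, p, r, offspring):
    """True if a vertex ends up infected, counting only infection from its subtree."""
    if random.random() < p:        # vertex belongs to the initially infected set
        return True
    if depth == 0:                 # leaves of the truncated tree contribute nothing further
        return False
    children = offspring()         # number of children drawn from the offspring law
    hits = sum(infected_from_below(depth - 1, p, r, offspring) for _ in range(children))
    return hits >= r               # r-neighbour rule, restricted to children

def estimate_root_infection(p, r=2, depth=8, trials=400, offspring=lambda: 3):
    return sum(infected_from_below(depth, p, r, offspring) for _ in range(trials)) / trials

# Example: constant offspring 3 (the 3-branching tree) with threshold r = 2.
for p in (0.05, 0.15, 0.30):
    print(p, estimate_root_infection(p))
```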
2014-02-18T20:17:18.000Z
2014-02-18T00:00:00.000
{ "year": 2014, "sha1": "7159640061cc44c8113184d3e2a02b374454fba0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1214/ecp.v19-3315", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "7159640061cc44c8113184d3e2a02b374454fba0", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
8372481
pes2o/s2orc
v3-fos-license
A global profile of replicative polymerase usage Three eukaryotic DNA polymerases are essential for genome replication. Polα-primase initiates each synthesis event and is rapidly replaced by processive DNA polymerases: Polε replicates the leading strand while Polδ performs lagging strand synthesis. However, it is not known whether this division of labour is maintained across the whole genome or how uniform it is within single replicons. Using S. pombe, we have developed a polymerase usage sequencing (Pu-seq) strategy to map polymerase usage genome–wide. Pu–seq provides direct replication origin location and efficiency data and indirect estimates of replication timing. We confirm that the division of labour is broadly maintained across an entire genome. However, our data suggest a subtle variability in the usage of the two polymerases within individual replicons. We propose this results from occasional leading strand initiation by Polδ followed by exchange for Polε. a r t i c l e s Accurate DNA replication is fundamental to life, and errors that occur during replication underpin the genome instability that is the hallmark of cancer development 1,2 . In most eukaryotes, bidirectional replication is initiated stochastically, with distinct regions of the genome showing varying initiation efficiencies and distinct temporal regulation 3 . In budding yeast, specific DNA consensus sequences define the binding of the origin recognition complex (ORC) to DNA throughout the cell cycle 4 . Each region of replication initiation is thus defined by a single DNA sequence or origin. In higher eukaryotes, ORC association with the chromosomes varies through the cell cycle, and the mechanisms defining where the ORC binds are not understood. Initiation zones in higher eukaryotes are probably composed of numerous low-efficiency origins clustered together 3 . In exponentially growing budding yeast, the different origins are activated with different efficiencies. Thus, times at which different initiation regions are replicated (the population average) are distinct 5 . In higher eukaryotes, growing cultures of individual cell types display reproducible replication-timing profiles, thus indicating that ORC association and/or the likelihood of replication initiation from ORCassociated regions are stable characteristics of specific cell types 6 . Interestingly, replication-timing profiles for different mammalian cell types correlate well with three-dimensional chromosome interaction maps, thus suggesting a link between replication timing and chromatin organization within the nucleus (reviewed in ref. 3). The ORC attracts the MCM complex in the G1 phase of the cell cycle, licensing the site for initiation 7 . The six-subunit MCM complex is the core of the replicative helicase, which is subsequently activated by the loading of two additional components: Cdc45 and the four-subunit GINS complex. The resulting active helicase is known as CMG 8 . An ancillary replisome component, the Ctf4 trimer, links Polα-primase to CMG 9,10 , coordinating the necessary initiation events. The Polε holoenzyme interacts directly with GINS, an association also independently required for the initial formation of CMG [11][12][13] . Once CMG is formed, the Polε holoenzyme-GINS interaction is not required for CMG helicase activity. It is not known whether the Polδ holoenzyme interacts directly with CMG. Once DNA replication is initiated, each fork synthesizes the leading strand continuously and the lagging strand discontinuously. 
Certain DNA polymerase mutations introduce a biased mutation spectrum. This has allowed assignment of the source of mutations due to mispairing on one or the other DNA strand 14 . From studies using these mutant polymerases, Polε was genetically assigned as the leading-strand DNA polymerase at several loci in Saccharomyces cerevisiae. Similarly, Polδ was assigned as the major lagging-strand polymerase 15 . These data led to the model that the labor of replication is shared: Polε replicates the leading stand and Polδ the lagging strand. An equivalent experiment using S. pombe similarly assigned Polδ to the lagging strand 16 , demonstrating evolutionary conservation of polymerase usage. An S. pombe mutant Polε that incorporated ribonucleotides into DNA at increased frequency was used to physically assign Polε to leading-strand synthesis 16 . These experiments relied on the increased incorporation of rNTPs into the leading strand, thus causing that specific strand to be fragmented by alkali treatment, which cleaves the phosphate backbone at ribonucleotides but not deoxyribonucleotides. To establish whether the division of labor between Polε and Polδ is consistent across an entire genome and to ascertain whether there is variation in the usage between the two polymerases within a single replicon, we set out to physically map, genome wide, the division of labor between these polymerases. We devised a strategy to identify, by high-throughput sequencing, the position of ribonucleotides in the genome and combined this with Polε and Polδ mutants that incorporate excess ribonucleotides to establish a Pu-seq methodology that allowed us to map the division of labor genome wide. We confirm that the division of labor is broadly maintained across an entire genome. We also demonstrate that a single Pu-seq experiment, which consists of two library samples for deep sequencing (one each from asynchronous cultures of the respective polymerase mutants) delivers a direct and extremely high-resolution genome-wide map of DNA replication initiation and allows the indirect calculation of robust genome-wide replication-timing data. The resolution of our data revealed evidence for subtle variability in the usage of the two polymerases within individual replicons. We suggest that this results from occasional leading-strand initiation by Polδ. RESULTS At physiological dNTP and rNTP concentrations, S. cerevisiae replicative DNA polymerases incorporate, in vitro, ribonucleotides at frequencies ranging from 1:650 bp (Polα) to 1:5,000 bp (Polδ) 17 . Ribonucleotides are efficiently removed from duplex DNA by ribonucleotide excision repair (RER): RNase H2 nicks 5′ to the ribonucleotide, Polδ (or Polε) initiates strand-displacement synthesis, and Fen1 (or Exo1) removes the resulting flap before ligation completes repair 18 . In the absence of RER, single ribonucleotides persist (although some are removed by Topo1 (refs. 19-21)). Ribonucleotides can template DNA synthesis, albeit with a reduction in processivity 22,23 . We previously exploited an S. pombe cdc20-M630F (Polε) allele to introduce excess ribonucleotides into DNA replicated by Polε. Southern blot analysis in an RNase H2-deficient (rnh201∆) background previously provided physical evidence that Polε performed the majority of leading-strand synthesis 16 . To facilitate mapping the division of labor genome wide, we have generated an equivalent mutation for Polδ, cdc6-L591G. 
DNA prepared from cells containing this mutation showed lagging strand-specific degradation when alkali gels were probed for sequences flanking an efficient origin (Fig. 1a,b). This is complementary to the DNA prepared from cells containing the previously characterized cdc20-M630F (Polε) allele, which demonstrated leading strand-specific degradation (Fig. 1b). Both the cdc20-M630F (Polε) and the cdc6-L591G (Polδ) mutant strains in the rnh201∆ background incorporated similar levels of ribonucleotides 24 , grew with similar kinetics and displayed similar flow-cytometry profiles (Fig. 1c). Mapping polymerase usage across the genome Alkali treatment of duplex ribonucleotide-containing DNA results in phosphate-backbone cleavage 3′ to the ribose to result in a 5′-OH (Fig. 1d). If the denatured DNA is used to template random hexamer primer extension, 5′-to-3′ synthesis results in a flush end adjacent to the initial ribose (Fig. 1e). By generating a library from single-stranded DNA and placing distinct index primers at each end, deep-sequencing reads can be mapped to individual strands, locating with base accuracy the original ribonucleotide. To map replication polymerase usage across the genome, we therefore grew two RNaseH2-deficient cultures containing either cdc20-M630F (Polε) or cdc6-L591G (Polδ) mutations, prepared DNA, treated with alkali a r t i c l e s npg a r t i c l e s and created two independent libraries. We mapped approximately 10 million paired-end sequence reads for each strain to 300-bp bins across the genome (Fig. 2a). We then calculated the relative ratio of reads from the Polε and Polδ data sets (Fig. 2b) and smoothed the data to provide frequency scores representative of relative Polε and Polδ usage for the Watson (+) and Crick (−) strands (Fig. 2c). Polymerase usage transitions define initiation sites Bidirectional initiation and the division of polymerase labor predicts a reciprocal demarcation on both the Watson and the Crick strands between Polε (leading) and Polδ (lagging) usage for each initiation zone. Efficient origins should manifest as sharp reciprocal changes in the polymerase usage ratios. Less efficient origins, which are replicated passively in most cells, should present as reciprocal inflections in otherwise uniform gradients. We thus used the two independent data sets to calculate Polε usage on the Watson stand or Polδ usage on the Crick strand ( Fig. 3a) and plotted the differential of each neighboring data point (Fig. 3b). Where a reciprocal positive peak was identified (i.e., there was a change in polymerase usage in both data sets), we derived maxima and minima and plotted the average of their differences ( Fig. 3c; further explanation in Fig. 3e). Peak heights reflect relative origin efficiency: the highest peaks correspond to the most-efficient origins (distribution of origin efficiencies in Supplementary Fig. 1a; identified origins and their relationship to previous studies in Supplementary Table 1). To account for experimental variation, we analyzed four additional independent experiments and annotated how often each origin was identified (Supplementary Table 1). To independently visualize origins in a manner concordant with the literature 25 , we synchronized wild-type cells in G2, released them into S phase in the presence of hydroxyurea (HU) plus the nucleotide analog bromodeoxyuridine (BrdU) and quantified replication by using BrdU immunoprecipitation (BrdU-IP) plus deep sequencing (Fig. 3d). 
This identified 421 origins, >90% of which correspond to Pu-seq origins (Supplementary Table 1 and Supplementary Fig. 2). A map of replication timing by marker frequency analysis Although Pu-seq provides a direct assay for replication-initiation efficiency, it can also indirectly provide information about relative replication timing (described below). To validate replication-timing data calculated from the Pu-seq experiments, we first wished to generate a direct replication-timing map for S. pombe that is not biased by cell synchronization or treatment with replication inhibitors 25 . We thus mapped replication profiles of cells synchronized by elutriation, using marker frequency analysis (Fig. 4a). We examined aliquots of an elutriated culture over time for mitotic index, septation and DNA content. From these data, we calculated percentages of G2-phase, M-phase, S-phase and post-S phase cells for each time point, on the basis of the known cell-cycle behavior of S. pombe ( Fig. 4b and Online Methods). We calculated the fraction of DNA replicated for each time point and set boundaries for the beginning and end of S phase (Fig. 4c). Next, we extracted DNA from the indicated aliquots spanning S phase and prepared libraries for deep sequencing. We compared the proportion of reads for each 1-kb bin across the genome to a fully replicated (G2) control and calculated the percentage of replication at each locus for each time point sequenced (Fig. 4d). Because elutriation can cause cellular perturbation due to centrifugation 26 , we validated that elutriation did not distort replication profiles by performing fluorescence-activated cell sorting (FACS) and deep-sequence analysis (Sort-seq) in which S-phase cells Table 2. (e) Example of how origin efficiencies were quantified. Polymerase usage ratio, established minima and maxima (yellow triangles) around the reciprocal peaks (yellow dots) identified from a. Differentials, example region of differentials from b. Origin efficiency, differences between the above identified maxima and minima (E(δ) f and E(ε) f ). Average origin efficiency, averaged differences producing the relative origin efficiency (E f ori ). e Polymerase usage ratio (Fig. 5a). This confirms that elutriation does not perturb replication timing. Pu-seq provides timing and termination information Mathematical analysis of the Pu-seq data provides a measure of replication timing: the proportion of reads mapping to each strand from the cdc6-L591G (Polδ) and cdc20-M630F (Polε) data sets provides two independent and direct measurements of the proportion of replication forks moving leftward (or rightward) throughout the genome (Fig. 5b). Such fork-direction data allow a direct calculation of relative replication times 5,28 . From a mean replication-fork velocity of 1.5 kb/min, we calculated a relative replication-timing map from Pu-seq data that is superimposable on direct replication-time measurements derived from the time course and Sort-seq analysis (Fig. 5c). Changes in mean fork direction across a chromosome are a consequence both of replication-origin activity and of replication-termination events: even close to an efficient origin, the proportion of moving forks always decreases with distance. This is the consequence of both the initiation and replication-termination npg a r t i c l e s events in the population. We can thus also calculate the percentage of termination events occurring within a defined window. 
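As a rough illustration of the two per-bin computations described above, the sketch below (i) turns strand-specific read counts from the Pol-epsilon and Pol-delta libraries into a smoothed relative polymerase-usage ratio per bin, and (ii) converts S-phase versus G2 read counts into percent replication, assuming a locus replicated in a fraction f of cells contributes a copy number of 1 + f against 2 in the fully replicated control. The array names, pseudocount, smoothing window and normalisation are hypothetical choices; this is a minimal sketch, not the authors' pipeline.

```python
import numpy as np

def pol_usage_ratio(eps_counts, delta_counts, window=15):
    """Fraction of synthesis attributed to Pol-epsilon per bin (hypothetical inputs)."""
    eps = np.asarray(eps_counts, float)
    delta = np.asarray(delta_counts, float)
    eps, delta = eps / eps.sum(), delta / delta.sum()    # correct for library depth
    ratio = eps / (eps + delta + 1e-12)
    kernel = np.ones(window) / window                    # simple moving-average smoothing
    return np.convolve(ratio, kernel, mode="same")

def percent_replication(s_counts, g2_counts, bulk_fraction):
    """Per-bin percent replication from an S-phase sample versus a G2 control."""
    s = np.asarray(s_counts, float) / np.sum(s_counts)
    g2 = np.asarray(g2_counts, float) / np.sum(g2_counts)
    copy_number = (s / (g2 + 1e-12)) * (1.0 + bulk_fraction)  # back to absolute copies
    return np.clip(copy_number - 1.0, 0.0, 1.0) * 100.0

# Toy example: a polymerase-usage switch halfway along 100 bins.
eps = np.r_[np.full(50, 5.0), np.full(50, 20.0)]
delta = np.r_[np.full(50, 20.0), np.full(50, 5.0)]
print(pol_usage_ratio(eps, delta)[::10])
```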
Although we observe that replication origins result in sharp transitions in fork direction, indicating discrete and efficient initiation sites, replicationtermination events are dispersed stochastically across large termination zones (Fig. 5d), with no evidence of programmed termination regions (Supplementary Fig. 1b). Observed polymerase usage variation within a replicon Potential differences in the ribonucleotide incorporation rates between cdc20-M630F (Polε) and cdc6-L591G (Polδ) preclude accurate establishment of the absolute fraction of DNA synthesized by Polε and Polδ. Without considering the minor contribution of Polα, the anticipated division of labor plus coupled leading-strand and lagging-strand synthesis predicts that ~50% of the genome is replicated by Polε and ~50% by Polδ. Using this assumption, we plotted polymerase usage of the duplex for each 300-bp bin across the genome (Fig. 6a). Genome wide, the division of labor was largely uniform, although small fluctuations were evident. The majority of these correspond to efficient origins. Therefore, we computationally identified interorigin regions of >30 kb where the directionality of replication forks was not appreciably perturbed by less efficient origins (Fig. 6b) and determined the average use of Polε and Polδ across replicons. A substantial bias toward Polδ was evident proximal to origins and declined toward the center of the interorigin region. This effect was not influenced by either global replication timing or by the absence of the Rad18 ubiquitin ligase (Supplementary Fig. 1c), which prevents PCNA ubiquitination and thus compromises noncanonical polymerase usage. Hence, when proximal to efficient origins, replicons exhibit an apparent bias toward usage of Polδ relative to Polε that is dependent on distance from the origin and is independent of postreplication repair. DISCUSSION We, along with other groups [29][30][31] , have developed approaches to identify the genome-wide location of ribonucleotides incorporated into DNA. In a cdc20 + cdc6 + (Polε + and Polδ + ) rnh201∆ background, we observed that the percentage of each ribonucleotide incorporated shows little bias relative to genomic sequence composition (Supplementary Fig. 3). This was not appreciably altered in the two polymerase-mutant backgrounds. We observed a moderate increase in the frequency of ribonucleotide incorporation in gene coding regions when compared to 5′ and 3′ untranslated regions and promoters, a bias that is not influenced by our polymerase mutations (Supplementary Fig. 4). Adaptations to this hydrolysisdependent ribonucleotide mapping methodology will facilitate research into the causes of, and biological consequences arising from, ribonucleotide incorporation. To study DNA replication, we combined this approach with ribonucleotide-discrimination mutations in the two main replicative polymerases [14][15][16] to provide a Pu-seq strategy that allowed us to map polymerase usage genome wide. Our analysis demonstrated that the division of labor for Polε and Polδ is consistent across an entire genome. Although not unexpected, this is important to establish. Strikingly, Pu-seq provided a highly discriminatory data set that directly revealed the location and efficiency of replication origins at very high resolution. We compared our origin assignments to those previously collated from the literature in oriDB 32 . 
To locate potential overlap, we first identified the central nucleotide of the Pu-seq-identified origin and established whether it fell within ±900 bp of the reported origin region. Comparing the two data sets (741 origins from oriDB and 1,145 origins recognized by Pu-seq), we identified 97.5% of 'confirmed' , 84.9% of 'likely' and 67.7% of 'dubious' oriDB origins (Supplementary Table 1). Previous work in S. cerevisiae used replication-timing data to calculate termination frequencies across the genome 5 and demonstrated that defined termination zones were not common: termination events per 1 kb fluctuated between approximately 0% and 4% per cell cycle across the genome. Applying this established mathematical analysis 5,28 to the Pu-seq data similarly predicted that the distribution of termination frequencies in S. pombe is consistent with there not being defined termination zones between origins. This suggests that termination is largely defined by stochastic origin usage 5 as opposed to the positioning of discrete replication-fork pausing elements 33 . The high definition provided by Pu-seq enabled us to identify an apparent bias toward Polδ close to the sites of efficient initiation, a phenomenon that is reproducible across multiple biological and experimental replicates (data not shown). This phenomenon is not influenced by either regional replication timing 34 or by postreplication repair 35 , thus implying that it is independent of noncanonical repair polymerases. Although we cannot exclude an unidentified prosaic explanation accounting for these data, one interpretation is that (b) Top, the 85 interorigin regions between high-efficiency origins (E f ori >40%) of >30 kb, which do not contain lower-efficiency origins (20% < E f ori < 40%), displayed as a heat map aligned to the three chromosomes (right bar). Light pink represents early-replicating regions, and brown represents late-replicating regions ( Supplementary Fig. 1c). Each row represents an interorigin region. The horizontal axis shows the relative position between origins. Chr., chromosome. Bottom, average values. Supplementary data sets to visualize the whole genome are listed in Supplementary Table 2. npg a r t i c l e s a small fraction of leading-strand replication events, once started by Polα-primase, are initially extended by Polδ in place of Polε. The interaction between the N-terminal 103 amino acids of the Dpb2 subunit of Polε and GINS is likely to position Polε for leadingstrand synthesis. Although this same interaction is required for the formation of the CMG complex 11,13 , it is subsequently dispensable for CMG helicase activity, and loss of the interaction does not prevent replication progression if CMG formation is promoted by an ectopically expressed N-terminal region of Dpb1 (ref. 11). In such cells, replication is slow, and synthesis of the leading strand is probably completed by Polδ. Indeed, in yeasts, the entire genome can be replicated without the catalytic activity of Polε 11,36,37 , thus demonstrating substantial flexibility in the use of Polε and Polδ during DNA replication. The choice of Polε for leading-strand synthesis is, at least in part, a function of the interaction of Polε holoenzyme and the core replication machinery discussed above. Polδ, although not apparently showing a strong interaction with the core replisome, does have a high affinity for PCNA and therefore potentially could compete for the leading-strand primer. 
Initiation of leading-strand synthesis by Polδ is likely to result in Polδ being subsequently displaced by Polε during elongation. Indeed, in vitro studies have shown that S. cerevisiae Polε holoenzyme is preferentially recruited to leading-strand substrates preloaded with CMG and that, although Polδ can load in the absence of Polε, it is displaced if Polε is added after DNA synthesis has initiated 38 . We thus propose that the apparent discrepancy in polymerase usage within a replicon reflects occasional recruitment of Polδ to leading-strand synthesis, with its subsequent displacement during progression by Polε. It will be interesting to test this proposition with further experiments in the future. In summary, Pu-seq provides a simple yet powerful tool to explore genome replication in any eukaryote in which suitable polymerase mutants can be introduced in a background deficient (or depleted) for RNase H2. Unlike replication-timing data, Pu-seq data directly identify regions of replication initiation. We show here that it can also provide indirect but accurate evidence of relative replication timing and frequency of termination. Pu-seq will thus provide a useful tool for examining DNA replication. METHODS Methods and any associated references are available in the online version of the paper. a r t i c l e s npg DNA was prepared from the elutriated reference sample and samples from within S phase, and libraries were prepared and subjected to high-throughput sequencing as previously described 27 . The relative representation of each locus in the S-phase samples was normalized to the percentage of total replication and to the unreplicated reference sample to provide an average percentage replication for each locus for each time point. To provide an unbiased replication-timing map, S-phase cells from an unperturbed exponentially growing culture were collected by FACS after fixation with 70% ethanol and were subjected to marker frequency analysis with the Sort-seq protocol previously described 27 ). Calculation of relative and absolute replication timing. The time-course data were used to calculate a median absolute replication time (T rep ) for each genomic locus as described previously 27 . Briefly, a sigmoidal function was fitted to the population-averaged fraction of the genome replicated at each time point for each genomic locus, and T rep was determined as a time when the population-averaged fraction of the genome replicated was equal to 0.5. Times are shown relative to the approximate start of S phase, 50 min post elutriation. Relative replication times and the distribution of replication-termination sites were calculated from the Pu-seq fork-direction data with custom scripts described previously 5,28 . Briefly, relative replication time was calculated from the integral of the percentage of leftward-moving forks, assuming a constant average fork velocity across the genome. Termination frequency was calculated with a finite difference approximation for estimating the derivative of the percentage of leftward-moving forks.
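The timing and termination calculations summarised in this Methods passage can be sketched as follows, assuming the standard constant-fork-speed relations rather than reproducing the authors' custom scripts: relative timing is obtained by integrating the fork-direction profile at the quoted mean fork velocity of 1.5 kb/min, and termination frequency comes from the positive part of the finite-difference derivative of the leftward-fork fraction. Bin size, clipping and function names are assumptions made for the example.

```python
import numpy as np

BIN_KB = 0.3        # 300-bp Pu-seq bins
FORK_SPEED = 1.5    # kb per minute, as assumed in the text

def relative_timing(left_fraction):
    """Relative replication time per bin from the fraction of leftward-moving forks."""
    slope = (1.0 - 2.0 * np.asarray(left_fraction, float)) / FORK_SPEED  # minutes per kb
    timing = np.cumsum(slope) * BIN_KB              # integrate along the chromosome
    return timing - timing.min()                    # earliest-replicating locus set to 0

def termination_frequency(left_fraction):
    """Per-bin termination estimate: rises in the leftward-fork fraction mark termination."""
    d = np.gradient(np.asarray(left_fraction, float))
    return np.clip(d, 0.0, None)

# Toy example: two efficient origins separated by a dispersed termination zone.
left = np.concatenate([np.zeros(30), np.linspace(0, 1, 40), np.zeros(30)])
print(relative_timing(left)[::20], termination_frequency(left).sum())
```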
2016-10-26T03:31:20.546Z
2014-12-30T00:00:00.000
{ "year": 2014, "sha1": "dd9e1635efb89b95e3853459fd61f49f31c97258", "oa_license": "unspecified-oa", "oa_url": "https://europepmc.org/articles/pmc4789492?pdf=render", "oa_status": "GREEN", "pdf_src": "SpringerNature", "pdf_hash": "fd3bed342ee31b97978e6103184702bbd574e34a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
220483362
pes2o/s2orc
v3-fos-license
Sex Differences in Work-Stress Memory Bias and Stress Hormones Mental health problems related to chronic stress in workers appear to be sex-specific. Psychosocial factors related to work–life balance partly explain these sex differences. In addition, physiological markers of stress can provide critical information on the mechanisms explaining how chronic stress gets “under the skull” to increase vulnerability to mental health disorders in working men and women. Stress hormones access the brain and modulate attentional and memory process in favor of threatening information. In the present study, we tested whether male and female workers present a memory bias towards work-stress related information, and whether this bias is associated with concentrations of stress hormones in reactivity to a laboratory stressor (reactive levels) and samples taken in participants’ workday (diurnal levels). In total, 201 participants (144 women) aged between 18 and 72 years underwent immediate and delayed recall tasks with a 24-word list, split as a function of valence (work-stress, positive, neutral). Participants were exposed to a psychosocial stressor in between recalls. Reactivity to stress was measured with saliva samples before and after the stressor. Diurnal cortisol was also measured with five saliva samples a day, during 2 workdays. Our exploratory results showed that men presented greater cortisol reactivity to stress than women, while women recalled more positive and neutral words than men. No sex difference was detected on the recall of work-stress words, before or after exposure to stress. These results do not support the hypothesis of a sex-specific cognitive bias as an explanatory factor for sex differences in stress-related mental health disorders in healthy male and female workers. However, it is possible that such a work-stress bias is present in individuals who have developed a mental-health disorder related to workplace stress or who have had one in the recent past. Consequently, future studies could use our stress memory bias task to assess this and other hypotheses in diverse working populations. Introduction Women have been increasing and sustaining their presence in the workforce for more than half a century [1,2]. This, along with changes in economic and socio-political pressures has led to reorganization of workplaces around the world and increasing work overload for both sexes [3][4][5]. Although greater equality between men and women in the workplace is slowly being achieved, research shows that mental health problems related to stress at work are still sex-specific. As an example, women Cognition and Stress at Work One of the mechanisms explaining the discrepancy in prevalence of work-related mental health disorders could be differences in cognitive processing of information as being stressful or threatening in men and women. More precisely, differences in attentional bias towards stress-related information may be sex-specific, and this could drive sex differences currently observed in stress responses. Attentional circuits preferentially detect and process information in the environment that has immediate survival value and task relevance through selective attention [40][41][42]. The individual parameters that determine relevance of stimuli (as threatening or not) and adequacy of response are largely determined by the individual's previous experiences throughout life that impact the individual's perception of stressors [43,44]. 
Cognitive systems are mostly shaped in childhood during critical developmental periods. Notwithstanding, but experiences later in life can also have significant influence in higher-order cognitive and emotional processes. For example, children who have been maltreated show marked sensitivity in detection of anger-related content [45][46][47]. Adults with anxious symptoms and/or anxiety disorders show preferential biases towards information that is specific to the feared foci of their anxiety [48,49]. It is important to note that studies performed in human populations reveal that this attentional bias to threatening information is partly sustained by the secretion of glucocorticoids that are released during a stressful and/or an emotional experience [26,50,51]. As previously mentioned, secretion of glucocorticoids can be adaptative. In that sense, the enhancement of memory for stimuli inducing stressful and/or emotional responses may be essential for species' survival. However, in a context where survival is not at stake and stressful stimuli are present in a repeated way (such as in some workplace settings), chronic activation of the HPA axis might lead to an increased encoding and/or consolidation of stress-related information (bottom-up effects). This attentional bias may then lead to increased secretion of stress hormones (top-down effects). This, in turn, leads to a vicious circle in which the stress hormones released in response to a chronic stressful condition access the brain and modify the way that upcoming information are perceived and interpreted, thus leading to increased stress reactivity [52,53]. Without underestimating the complexity of the human brain, the objective of the current study was to (1) identify if this mechanism, starting from the "filtering" mechanism can lead to a bias that is specific to workplace stress (which would modulate memory performance in response to stress), (2) if this cognitive mechanism is similar in healthy male and female workers, and (3) if it relates to biological markers of stress. Given the lack of previous data on this particular issue and the empirical nature of the protocol, we did not propose any hypothesis with regards to directionality of significant effects. Rationale and Historical Background of the Present Study From 2011 to 2015, our laboratory conducted a study among healthy male and female workers in order to perform a Sex (differences between men and women related to biological factors) by Gender (differences between men and women related to psychosocial factors such as gender roles and socio-cultural factors) analysis on psychological and physiological markers of stress. Various papers have been published on the results of this confirmatory study [54][55][56]. While we were preparing the research protocol for this study, and based on data from our laboratory showing the presence of stress-induced attentional bias in young adults as a function of childhood socioeconomic status [51], we decided to develop a new cognitive task that would allow us to assess whether male and female workers differ on the recall of information related to (1) stress at work (2) positive information or (3) neutral information. The goal of this new Work-Stress Memory Task (WSMT) was to determine if a potential memory bias toward work-stress related information is present in male and female workers and whether it is associated with biological markers of stress. This paper presents the results of this exploratory study. 
Our purpose was to assess the WSMT performance before and after exposure to a laboratory stressor in male and female workers. The rationale and pre-analysis plan of this exploratory project has been pre-registered and can be found at OSF under the following address: https://osf.io/tcbuh. Sample and Missing Data From an initial pool of 295 participants recruited among employees of the Institut Universitaire en Santé Mentale de Montréal (IUSMM) hospital, a total sample of 204 working adults completed the original study between 2011 and 2015. The IUSMM is the largest psychiatric hospital in the Canadian province of Québec and, at the time, had a total of 1546 employees (65% women) from different professions. Participants in the study came from clinical services (29.9%), administration (17.2%), research (13.7%), social integration (11.3%), professional services (9.8%), maintenance (10.8%), general direction (4.4%) and human resources (3.0%). Participants were allowed to complete the study during their normal work hours. This study (including the WSMT) was approved by the Research Ethics Board of the same institution (2011-003). For the current set of analyses, 201 of the participants were included since the WSMT could not be completed for technical reasons for 3 out of the 204 participants. Women comprised 70% of the sample (n = 141), and participant ages ranged from 18 to 72 (Average-40.48 years old; Std Dev-12.17). Socio-Demographic and Psychosocial Information In our analyses, we used the age (ranging from 18 to 72) as a covariate and biological sex (man/woman) of participants (all participants were cisgender) as a between-subject factor. Education level (number of years of school completed), body mass index (BMI) and self-reported chronic stress were also compared between men and women for descriptive purposes. Self-reported chronic stress was measured using the Trier Inventory for Chronic Stress (TICS), a 30-item questionnaire with 10 subscales that asks the participant to rate the levels of stress they experienced in the past month, mostly in work-related circumstances [57]. Potential differences in socio-demographic information, BMI [58] and self-reported chronic stress [59] were verified in order to make sure that men and women sub-samples were comparable. The Work-Stress Memory Task-WSMT The WSMT is an emotional declarative memory task developed by the Center for Studies on Human Stress [60]. The task comprises an initial encoding phase in which a list of 24 words is presented visually using E-Prime2 (Psychology Software Tools, Pittsburg, PA, USA). Each word is presented once for five seconds, in a random order, with explicit instructions to remember the words. This encoding phase is followed by an immediate recall of the list, in which participants have to write down all the words they remember within one minute. A delayed recall phase is conducted approximately 30 min later (after exposure to a psychosocial stress task), both recall phases are completed using pen and paper. The 24-words list has 3 different types of word: Work-stress, Positive and Neutral. Eight words are presented in each of these three categories. In order to develop the list of work-stress words, we asked 208 working adults attending stress management workshops and conferences in different organizations to write down words that "best described stress in the workplace for them". Note that none of the participants in the original sample of this study were part of this initiative. 
Individuals were mostly working in IT/Engineering (30.8%), management or direction (30.8%), administrative support (8.2%) and human resources (7.2%). A frequency analysis was then performed on the answers provided by participants and we extracted the 8 most frequent words provided by the group. Positive and Neutral words were matched to work-stress words for length, and frequency of use in the English language. We also made sure that the meaning of the positive and neutral words was not directly related to work, and they were obtained using standardized word banks. The words of the WSMT can be found in Table A1 of Appendix A. Performance on the WSMT was calculated using the number of words recalled for each category. A perfect recall of all words of a category gives a score of 8 and recalling none of the words of a given category gives a score of 0. Trier Social Stress Task (TSST) The TSST is a laboratory-based stressor developed by Kirschbaum and colleagues [61,62] and is one of the standard stress interventions used in a laboratory setting aimed at provoking activation of the HPA axis. It has two main phases. The first is an anticipation phase in which participants are given the instructions of the task and asked to prepare themselves. The second phase is the performance phase, in which a panel of "experts" stands on the other side of a one-sided mirror (in the panel-in version of the TSST), observing and correcting the participant during a speech and an arithmetic task. Samples of saliva are taken before and after completion of the TSST to assess cortisol reactivity to stress. Protocol Participation in this project consisted of two laboratory visits, two days of saliva samples at work and home, one day of saliva samples during a rest day, home questionnaires and a follow-up call. During the first laboratory visit, participants gave informed consent, completed the WSMT, the TSST and some questionnaires, while providing saliva samples every 10 min for a total of 6 samples aimed at measuring cortisol levels. At-home saliva samples followed consensus guidelines [63]: samples were taken upon awakening, 30 min later, at 2 PM, at 4 PM and upon going to bed. Participants recorded the time they took their samples and a Medication Event Monitoring System (MEMS TM , AARDEX Ltd., Sion, Switzerland) was used to optimize compliance [63]. Questionnaires were also completed at home on a secure online platform. In this paper, results to the Trier Inventory for Chronic stress will be reported, as previously mentioned. The full list of questionnaires in the original protocol can be found in Appendix A Table A2. Note that questionnaire instructions and practice were provided to familiarize participants with our platform; however, participants completed questionnaires at home at their leisure. During their second laboratory visit, participants handed their saliva sample and a blood sample was taken. A full list of biological measures that have been collected in this protocol can be found in previous publications using this sample [54][55][56] and in our pre-registration plan (see above). Hormonal Measures To obtain assays, frozen samples were first brought to room temperature, then centrifuged at 150× g (3000 rpm) for 15 min. High-Sensitivity immunoenzyme assays were used for cortisol (Salimetrics ® , No. 1-3102). These have sensitivity of 0.012-3 µg/dL, with inter-and intra-assay coefficients of variance, respectively, below 9.27% and 5.89%. For each sample, assays were run in duplicate, then averaged. 
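Returning to the WSMT scoring rule described above (one point per correctly recalled word, for a maximum of 8 per category), the following Python sketch shows one possible way to score a recall sheet. The word sets are illustrative placeholders rather than the published WSMT items (only "team" and "management" are mentioned later in the text as actual work-stress words), and the function name is an invention for this example.

```python
# Illustrative sketch of WSMT recall scoring; word lists are placeholders,
# not the actual task items (those are listed in Table A1 of the paper).
WORD_LISTS = {
    "work_stress": {"deadline", "overload", "conflict", "pressure",
                    "management", "team", "fatigue", "urgency"},
    "positive":    {"holiday", "sunshine", "friend", "music",
                    "garden", "smile", "gift", "beach"},
    "neutral":     {"table", "window", "paper", "street",
                    "bottle", "chair", "cloud", "shoe"},
}

def score_recall(recalled_words):
    """Return the number of correctly recalled words per category (0 to 8)."""
    recalled = {w.strip().lower() for w in recalled_words}
    return {category: len(words & recalled)
            for category, words in WORD_LISTS.items()}

# Example recall sheet ("rainbow" is an intrusion and is simply ignored).
print(score_recall(["deadline", "Sunshine", "table", "pressure", "rainbow"]))
```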
To assess the diurnal levels of stress hormones at work and home, the five samples from the two work days were considered. As we were interested in work-stress, the weekend samples were not considered in these analyses. The cortisol concentrations at each time point were averaged for the two days and the area under the curve (from the ground-AUCg) was calculated using Pruessner and colleagues' method [64]. To assess reactive levels of stress hormones, the six samples taken during the laboratory visit described in the previous section were used. We also calculated the area under the curve for these values, but using the increase from baseline formula (AUCi) [64].

Statistical Analysis Statistical analyses were conducted using IBM Statistical Package for the Social Sciences 25. Although our research objectives were pre-registered, our analyses deviate from the initial plan as we now concentrate our analyses on sex differences. For the present study, preliminary analyses assessing sex differences in socio-demographic information and hormonal measures were first conducted using independent sample t-tests and repeated measures factorial ANCOVA to set the grounds and compare the current sample to the existing literature. Results for the WSMT were then analyzed using a repeated-measures factorial ANCOVA, comparing performance at the immediate and delayed recall, for each word type, for men and women. This was done while controlling for the effects of age, given its well-established effects on declarative memory [65,66]. Finally, performance on the immediate and delayed recall of the WSMT, for each type of word, and for men and women, was correlated with diurnal cortisol AUCg and reactive cortisol AUCi using Pearson correlations.

Preliminary Analysis As shown in Table 1, independent sample t-tests revealed no sex differences in age, education level, number of hours worked, BMI and self-reported chronic stress (TICS) levels. Further preliminary analyses were conducted with repeated measures factorial ANCOVA to replicate previously established time and sex differences in TSST reactivity and the absence of sex differences in diurnal cortisol levels, while controlling for age of participants, as shown in Figure 1. When comparing cortisol samples collected over the TSST session, there were expected effects of time (F(5, 995) = 4.313; p = 0.001) and sex (F(1, 199) = 6.092; p = 0.014), but no interaction effect (F(5, 995) = 2.065; p = 0.068). Cortisol levels significantly increased starting at the TSST, peaked just after and started decreasing shortly thereafter. Cortisol levels in men were significantly higher than those of women.

(Figure 1 caption, in part: shows diurnal cortisol levels averaged over two typical working days. Adapted from [55]. Note that controlling for sex hormones in previous analyses unmasked a sex difference in diurnal cortisol [55].)
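The two area-under-the-curve measures described above (AUCg from the diurnal samples, AUCi from the laboratory samples) follow the trapezoidal formulas of Pruessner and colleagues. A minimal Python sketch of that computation is given below; the sampling times and cortisol values in the example are invented for illustration and are not data from this study.

```python
def auc_ground(times_min, conc):
    """AUC with respect to ground (AUCg): trapezoidal rule over real time gaps."""
    return sum((conc[i] + conc[i + 1]) / 2.0 * (times_min[i + 1] - times_min[i])
               for i in range(len(conc) - 1))

def auc_increase(times_min, conc):
    """AUC with respect to the increase from baseline (AUCi)."""
    return auc_ground(times_min, conc) - conc[0] * (times_min[-1] - times_min[0])

# Example: five diurnal samples expressed in minutes since awakening
# (awakening, +30 min, early/late afternoon, bedtime); values are illustrative.
times = [0, 30, 420, 540, 960]
cortisol = [0.35, 0.48, 0.18, 0.15, 0.08]
print(f"AUCg = {auc_ground(times, cortisol):.1f}, "
      f"AUCi = {auc_increase(times, cortisol):.1f}")
```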
Performance on the Work-Stress Memory Task Performance on the WSMT for the immediate and delayed recall phases is presented in Figure 2, and results of the repeated measures factorial ANCOVA, comparing immediate and delayed recall of all types of words, for men and women, are shown in Table 2. Main effects of recall phase and sex were detected, along with a significant interaction between type of word and sex. There was no main effect of the type of word, and the other interaction terms (between recall phase and type of word, between recall phase and sex, and between recall phase, type of word and sex) were also not significant.

Manipulation Checks for Performance on the Work-Stress Memory Task Some of the significant effects of the ANCOVA were expected in the context of a declarative memory task. Results of the main effects of recall phase and sex showed that performance on the immediate recall was systematically better than for the delayed recall, and that women recalled significantly more words regardless of the recall phase. It is also important to highlight the significant contribution of age as a covariate in this model, where memory performance was systematically lower as age increased (F(1, 198) = 23.268; p < 0.001).

Sex-specific Memory Bias Regarding effects of interest for our objectives, no significant sex-specificity could be detected regarding work-stress words. The significant interaction between sex and type of word was investigated with 95% confidence intervals and Bonferroni corrections. These comparisons show that women recalled more positive and neutral words than men, regardless of the recall phase, while controlling for age. This was not the case for work-stress words. With these same 95% confidence intervals, it was not possible to detect significant differences between word types for each sex separately.

WSMT Performance and Cortisol Measures A series of Pearson correlations were conducted to determine the association between the cognitive and endocrine measures. As Table 3 shows, there were no significant correlations between memory of any type of words and either reactive or diurnal cortisol, both for immediate (all ps > 0.161) and delayed (all ps > 0.080) recall.
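The correlational step reported above (recall scores against diurnal AUCg and reactive AUCi) reduces to a set of simple Pearson correlations. The sketch below shows the shape of that calculation in Python with scipy; the six data points are fabricated purely to make the snippet runnable and do not reproduce any value from Table 3.

```python
from scipy.stats import pearsonr

def correlate_recall_with_cortisol(recall_scores, auc_values):
    """Pearson r and p-value between a recall score and a cortisol AUC measure."""
    r, p = pearsonr(recall_scores, auc_values)
    return r, p

# Illustrative data: delayed recall of work-stress words vs diurnal AUCg
# for six hypothetical participants.
recall = [3, 5, 2, 4, 6, 1]
aucg = [210.0, 180.5, 250.3, 190.2, 160.8, 240.1]
r, p = correlate_recall_with_cortisol(recall, aucg)
print(f"r = {r:.2f}, p = {p:.3f}")
```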
Discussion The goal of this study was to determine whether male and female workers present a memory bias for work-stress words and whether this cognitive bias is associated with diurnal or reactive cortisol levels. We first confirmed previous psychoneuroendocrine research by showing that men presented increased cortisol levels during a laboratory stressor when compared to women; however, there was no sex difference in diurnal cortisol levels. These results were manipulation checks, confirming the quality of our experimental protocol, given the robustness of these findings across the literature [35][36][37]. The second manipulation check performed on the memory task allowed us to be confident in the preliminary validity of the WSMT; namely, the presence of a decline of overall performance with age [67], higher performance in the immediate recall, and a generally better performance in women than in men [68]. These effects are also robustly found in the declarative memory literature [65,66]. While the difference between immediate and delayed recall was expected, it is not possible to know in this particular experimental design which portion of this decline in performance was attributable to the delay, versus to the deleterious effect of the TSST on memory. Work-Stress Bias In relation to our main objective, we found no preferential recall for work-stress related content, whether it be compared to other types of words, when exposed to the TSST, or when compared between men and women. This result does not support the presence of a work-stress related memory bias in one sex over the other as was originally hypothesized. The absence of a significant memory bias for work-stress related words could be explained by the fact that the words chosen as 'work-stress related' may not have a consistent valence for each individual. While the way the words were chosen for the WSMT was intended to limit researcher bias, this led to the inclusion of words with potentially ambiguous valence in the work-stress category. Indeed, words such as "team" and "management" were included in the work-stress list as they were among the most frequently reported words by our control sample. However, if these words triggered a different contextual association than work-related stress during the WSMT for our participants, they could have been interpreted as neutral of positive words. This means that there may be a heterogeneous allocation of attentional resources toward these words, rather than the increased attention usually associated with negatively valanced words [42]. One way to control for this limitation in future studies would be to verify the perceived valence of the words after the second recall for each participant. This would allow to adjust the level of perceived valence and have more precise and individualized results. This methodological feature could allow to deepen our understanding of sex differences and of the interaction between sex and type of word. Identifying words that resonate with work-stress differently for men and women would make for a more fine-grained delineation in identifying a sex difference or sex-specificity that the current study could not. Serial position of the words could also have been a confounding factor at the individual level. While randomization of the word order for each participant was meant to remove serial position effects between subjects, it probably had an effect within each individual that we have no way of statistically accounting for in the context of this study. 
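One of the methodological refinements suggested above is to collect each participant's own valence ratings for the task words after the second recall and to re-score recall accordingly. A minimal sketch of such a re-categorisation is given below; the rating scale, cut-offs and example words are illustrative assumptions and are not part of the published protocol.

```python
def recategorise_by_rating(word_ratings, neg_cutoff=-1, pos_cutoff=1):
    """Re-assign each word to a valence category from a participant's own
    ratings on an assumed -2 (very negative) to +2 (very positive) scale."""
    categories = {}
    for word, rating in word_ratings.items():
        if rating <= neg_cutoff:
            categories[word] = "negative"   # stress-like for this participant
        elif rating >= pos_cutoff:
            categories[word] = "positive"
        else:
            categories[word] = "neutral"
    return categories

# Example: "team" rated mildly positive by this participant, so it would no
# longer count towards their work-stress recall score.
print(recategorise_by_rating({"team": 1, "deadline": -2, "window": 0}))
```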
Also, while the TSST is a validated method to induce the activation of the HPS axis, using a stressor more closely related to situational stress as it is experienced at work could have helped prime memory biases specific to work. One could hypothesize that the longer the exposition to workplace stress, the stronger the potential for the development of a bias. However, in the current context, experience in the workplace is closely correlated with age, which is known to be a strong predicting factor in the decline of declarative memory performance [65,66]. A more homogeneous sample in terms of age, but with varying levels of experience in a particular job could be a way to assess the effect of work experience, rather than age, on cognition and the development of cognitive biases. Additionally, it is possible to test the presence of a work-stress memory bias in individuals who may have developed a stronger bias due to specific reasons that we did not consider. For example, individuals who are currently experiencing a mental-health disorder related to workplace stress (e.g., burnout) or who have had one in the recent past could be more susceptible to display cognitive biases in the context of work-stress. Studies have found differences in the processing of positive information in individuals recovering from a major depressive disorder [69,70] and altered working memory in women on long-term sick leave due to depression [71]. Sex Differences in Relation to Memory Bias and Biological Markers of Stress Interestingly, women recalled more positive and neutral words than men, regardless of the recall phase. Although this result does not confirm the presence of a memory bias toward work-stress words, it shows the presence of a sex difference in the processing of positive and neutral content in the absence of a sex difference in the processing of work-stress content, which could be seen as an indirect bias. It is important to note that although the absence of a sex difference in the recall of work-stress related content is interesting, it could mainly be due to a more general 'negative memory bias' [72]. As previously mentioned, the only potentially negative words presented to the participants in the present study were work-stress words and it is thus possible that the effects in memory are more closely related to the negative valence of the words rather than the stress-related particularity of these words. One way to disentangle this effect would be to add a category of negative (although not related to work-stress) words to the list to be memorized. Comparing the recall of work-stress and negative content could help confirm the presence of a memory bias for work-stress content in male and female workers. It is interesting to note that women recalled significantly more positive and neutral words than men and were less reactive to a laboratory stressor when compared to men. It is thus possible that the seemingly more efficient cognitive processing of women toward positive and neutral content has a positive impact on the secretion of cortisol, leading them to produce less cortisol in response to a laboratory stressor. Although this hypothesis is interesting, one has to be reminded that we did not find any significant correlation between number of positive and neutral words recalled by women and diurnal or reactive cortisol levels. Consequently, if this effect is present, it might be too weak to be detected in our sample with the experimental design employed. 
Also, as previously mentioned, some studies have found differences in processing of positive information in individuals, women in particular, with previous episodes of depression [69][70][71]. However, we re-iterate the necessity for replication of these results before making stronger theoretical or practical assumptions about this unexpected result. As we just touched upon, recall of work-stress words was not associated with diurnal cortisol levels nor to cortisol reactivity to stress. Although men reacted with greater secretion of cortisol during the TSST procedure when compared to women, we found no sex difference in recall of work-stress related content. This result does not support the hypothesis of a sex-specific cognitive bias as an explanatory factor for sex differences in stress-related mental health disorders in the workplace. Furthermore, replicating the current results with a control group without the TSST between the immediate and delayed recall could help disentangle the role of reactive cortisol and of recall phase in memory performance [73][74][75]. Conclusions Despite the fact that our sample of healthy men and women showed typically expected cortisol profiles, both in basal and reactive conditions, and that performance on the WSMT replicated usual robust effects in other declarative memory tasks, we did not find direct evidence for a work-stress memory bias in male and female workers. However, women showed systematically better recall of positive and neutral words than men, pointing towards differential processing of non-threatening information in men and women. Memory performance was also not significantly associated with either reactive or diurnal cortisol, for both sexes. Testing the presence of such a bias in a population that may have had a stronger environmental or experiential incentive to develop it would allow us to have more definitive evidence on the preferential treatment of work-stress related information. Conflicts of Interest: The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. Table A2. Full list of questionnaires used in the original protocol.
2020-07-09T09:08:36.530Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "116fbdfa0d3e3d86c44031f6c52a2298bd0accbf", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3425/10/7/432/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "31a41ca7d46d397699970a358f5913940ba1082f", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
221258138
pes2o/s2orc
v3-fos-license
Intensive care management of a patient with necrotizing fasciitis due to non-O1/O139 Vibrio cholerae after traveling to Taiwan: a case report Background Vibrio cholerae are oxidase-positive bacteria that are classified into various serotypes based on the O surface antigen. V. cholerae serotypes are divided into two main groups: the O1 and O139 group and the non-O1/non-O139 group. O1 and O139 V. cholerae are related to cholera infection, whereas non-O1/non-O139 V. cholerae (NOVC) can cause cholera-like diarrhea. A PubMed search revealed that only 16 cases of necrotizing fasciitis caused by NOVC have been recorded in the scientific literature to date. We report the case of a Japanese woman who developed necrotizing fasciitis caused by NOVC after traveling to Taiwan and returning to Japan. Case presentation A 63-year-old woman visited our hospital because she had experienced left knee pain for the past 3 days. She had a history of colon cancer (Stage IV: T3N3 M1a) and had received chemotherapy. She had visited Taiwan 5 days previously, where she had received a massage. She was diagnosed with septic shock owing to necrotizing fasciitis. She underwent fasciotomy and received intensive care. She recovered from the septic shock; however, after 3 weeks, she required an above-knee amputation for necrosis and infection. Her condition improved, and she was discharged after 22 weeks in the hospital. Conclusions With the increase in tourism, it is important for clinicians to check patients’ travel history. Clinicians should be alert to the possibility of necrotizing fasciitis in patients with risk factors. Necrotizing fasciitis caused by NOVC is severe and requires early fasciotomy and debridement followed by intensive postoperative care. Background Vibrio cholerae are curved gram-negative rod (GNR) bacteria that are oxidase positive. They are classified into various serotypes based on the O surface antigen. V. cholerae serotypes are divided into two main groups: the O1 and O139 group and the non-O1/non-O139 group [1]. O1 and O139 V. cholerae are related to cholera infection, whereas non-O1/non-O139 V. cholerae (NOVC) can cause cholera-like diarrhea. NOVC are found as autochthonous microbes in coastal and marine environments [2]. Outbreaks of cholera-like illness caused by NOVC have been reported in the United States (O141 and O75), former Czechoslovakia (O37), Sudan (O37), Peru (O10, O12), and Mexico (O14) [3][4][5]. Moreover, NOVC can cause a range of extraintestinal infections, including bacteremia, meningitis, pneumonia, peritonitis, cholangitis, salpingitis, and soft-tissue infection [6]. Seafood, including oysters, fishes, shrimps, clams, mussels, and apple snail, is the most common source of infection (53.9%) [7]. A PubMed search revealed that only 16 cases of necrotizing fasciitis caused by NOVC have been reported in the scientific literature to date. We report the case of a patient who developed necrotizing fasciitis and septic shock caused by NOVC, which necessitated an above-knee amputation of her left leg. Case presentation A 63-year-old woman visited Minaminara General Hospital in Nara, Japan, because she had experienced left knee pain for 3 days prior to her visit. She had been diagnosed with colon cancer (Stage IV: T3N3 M1a) 2 years and 5 months previously and had undergone surgery and received chemotherapy. Her most recent dose of chemotherapy was administered 20 days before her initial consultation. She had visited Taiwan 5 days previously, where she had received a massage. 
After the massage, she developed gradually worsening pain in her lower left leg. On presentation, she was able to walk unaided, and she reported her history of colon cancer and recent travel. As we suspected that the pain in her leg could be due to necrotizing fasciitis, we requested magnetic resonance imaging (MRI) of her left lower leg. The images showed a swollen soleus muscle and posterior tibial muscle, and the T2-weighted image showed hyperintensity of the muscle tissue (Fig. 1). After the MRI, our patient's condition deteriorated and the following vital signs were observed: blood pressure (BP), 89/50 mmHg; heart rate, 101 beats/min; respiratory rate, 18 breaths/min; and temperature, 36.3°C. The results of arterial blood gas analysis were as follows: pH, 7.4; PaCO 2 , 26.7 mmHg; HCO 3 − , 18.8 mmHg; base excess (BE), − 6.9 mEq/L; and lactate, 3.20 mmol/ L. The patient's laboratory test results were as follows: C-reactive protein (CRP), 42.83 mg/dL; blood urea nitrogen (BUN), 73.3 mg/dL; creatinine, 1.78 mg/dL; procalcitonin, 16.06 ng/mL; N-terminal pro-brain natriuretic peptide (Nt-proBNP), 29,506 pg/mL; and fibrin/ fibrinogen degradation products (FDP), 16.7 μg/mL. Intravenous infusion of meropenem and noradrenaline was initiated, and the patient underwent emergency surgery. Before the surgery, the compartment pressure of her left leg was measured by simple needle manometry. The pressures were as follows: 63 mmHg, 26 mmHg, 32 mmHg, and 32 mmHg in the anterior, lateral, superficial posterior, and deep posterior compartments, respectively. Some muscle tissues in the anterior and deep posterior compartments were necrotic. For double incision fasciotomy, a relaxation incision was made on her left knee [8], and the affected area was irrigated and debrided ( Fig. 2). After the surgery, her blood pressure was low, and therefore, we administered polymyxin B direct hemoperfusion (PMX-DHP), to trap endotoxins, and continuous veno-venous hemodiafiltration (using HEMOFEEL CH-1.3 W, Toray Medical Co., Ltd., Urayasu, Japan). As a slightly curved GNR that was oxidase positive was detected in her blood, we diagnosed her with necrotizing fasciitis and septic shock caused by Vibrio species. We changed the antibiotics from meropenem to ceftriaxone, levofloxacin, and minocycline. We used the PMX-DHP once again and tapered the dose of noradrenalin gradually. We discontinued noradrenalin on Day 4 postoperatively. On Day 6 postoperatively, the organism was identified as NOVC. The susceptibility of antibiotics was confirmed postoperatively on Day 12, and we discontinued levofloxacin (Table 1). Although the patient's general condition improved, there was a discharge of pus from the postoperative wound. On Day 14 postoperatively, a second debridement was performed. Several muscles in the patient's left leg, including the anterior tibial muscle, had become necrotic, and the necrosis had spread to her knee. On Day 21 postoperatively, an above-knee amputation was performed. Her vital signs and laboratory data obtained since admission are shown in Fig. 3. Her condition improved and she was discharged 22 weeks after admission. Discussion and conclusion Sixteen cases of necrotizing fasciitis caused by NOVC have been previously reported (Table 2) [9][10][11][12][13][14][15][16][17]. The majority of patients were exposed to seawater or had an injury. In rare cases, vigorous massage is one of the risk factors of necrotizing fasciitis [18]. 
However, the patient in the present case had a risk of NOVC infection because of colon cancer and immunosuppression due to chemotherapy as she received chemotherapy within a month. Thus, in this case, the source of the NOVC remains unknown. As the patient did not report any exposure to sea water or eating seafood, the only potential cause of injury to her left leg was the massage she received. Therefore, we speculate that the massage might have been the source of the NOVC, based on the circumstantial evidence. We administered blood purification therapy using PMX-DHP and veno-venous hemodiafiltration for septic shock. Although no previous studies have reported the use of PMX-DHP for NOVC, a study reported the use of PMX for V. vulnificus [19]. Third-generation cephalosporins, tetracycline, and fluoroquinolone were used for severe Vibrio infections. Tetracycline combined with the fluoroquinolone or a parenteral third-generation cephalosporin followed by oral fluoroquinolones or doxycycline was recommended for invasive NOVC infections [10,14]. An in vitro study revealed that cefotaxime and minocycline have a synergistic effect in the treatment for V. cholerae infections [20]. As patients with NOVC bacteremia require antibiotic treatment for at least 1 month [14], we administered ceftriaxone and minocycline for 1 month. Necrotizing soft-tissue infections caused by NOVC are more lethal than those caused by V. vulnificus [21]. To conclude, we treated a woman with necrotizing fasciitis and septic shock caused by NOVC. This case illustrates that early fasciotomy and debridement are necessary for severe necrotizing fasciitis caused by NOVC, and prolonged intensive care may be required after surgery. Consent for publication Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor of this journal.
2020-08-24T13:46:58.631Z
2020-08-24T00:00:00.000
{ "year": 2020, "sha1": "f349c9ce825004038ea791501f73270c86ccdd6d", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-020-05343-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f349c9ce825004038ea791501f73270c86ccdd6d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119461479
pes2o/s2orc
v3-fos-license
Circumstellar features in hot DA white dwarfs We present a phenomenological study of highly ionised, non-photospheric absorption features in high spectral resolution vacuum ultraviolet spectra of 23 hot DA white dwarfs. Prior to this study, four of the survey objects (Feige 24, REJ 0457-281, G191-B2B and REJ 1614-085) were known to possess these features. We find four new objects with multiple components in one or more of the principal resonance lines: REJ 1738+665, Ton 021, REJ 0558-373 and WD 2218+706. A fifth object, REJ 2156-546 also shows some evidence of multiple components, though further observations are required to confirm the detection. We discuss possible origins for these features including ionisation of the local interstellar environment, the presence of material inside the gravitational well of the white dwarf, mass loss in a stellar wind, and the existence of material in an ancient planetary nebula around the star. We propose ionisation of the local interstellar medium as the origin of these features in G191-B2B and REJ 1738+665, and demonstrate the need for higher resolution spectroscopy of the sample, to detect multiple ISM velocity components and to identify circumstellar features which may lie close to the photospheric velocity. INTRODUCTION Spectral lines from highly ionised species are observed at non-photospheric velocities in several hot DO white dwarfs. Of the 11 stars included in the survey presented by Holberg et al. (1998) (hereafter referred to as HBS), six objects, all with temperatures T eff 70,000 K, were found to exhibit highly ionised features at non-photospheric velocities. Holberg et al. (1999) argue that such features cannot be of an interstellar origin since such highly ionised species are uncharacteristic of the local ISM, and are not observed along adjacent lines of sight to stars at greater distances. Further, while interstellar lines can be observed at velocities which are blue-or red-shifted with respect to the photospheric value vphot, only blue-shifted features are found in the DO sample discussed by HBS. Such features provide strong evidence for the existence of ongoing mass loss in hot DO white dwarfs. Although over 80% of known white dwarfs belong to the DA group ), relatively few have been found to exhibit non-photospheric features. Only 5 of the 44 DA stars considered in HBS are accompanied by any type ⋆ Email: npb@star.le.ac.uk of circumstellar feature, with no apparent dependence on temperature. In some instances, the appearance of this phenomenon can be plausibly explained in terms of interactions between binary components, as in the DA + dM 1.5 system Feige 24, which shows multiple components in the C IV λλ 1548, 1550 doublet. However, in the DA white dwarf CD-38 • 10980, which is a member of a wide binary system, Holberg et al. (1995) find Si and C absorption features in IUE spectra, which are shifted by -12 km s −1 with respect to the photospheric velocity of the with dwarf, which can be inferred from its measured gravitational redshift. Holberg et al. showed that the atmosphere of this object was devoid of Si and C, at the expected photospheric velocity, and used the presence of excited or meta-stable levels in the shifted features as evidence that the material was not located in the ISM along the line-of-sight to the star. These observations were explained in terms of a dense, gaseous halo in close proximity to the star, possibly an extension to the atmosphere, for which a similar temperature and electron density was derived. 
Alternately, Wolff et al. (2001) using FUSE observations of CD-38 • 10980, observed a set of Si III lines shortward of 1120Å which they attribute to the stellar photosphere. They were able to successfully model the observed equivalent widths of these lines as well as the Si lines seen in the IUE observations with a photospheric Si abundance of 2x10 −8 . However, the velocity discrepancy remains unexplained. In another example of circumstellar features in an isolated DA object, found evidence for weakly blueshifted C IV and Si IV components in REJ 1614-085 (T eff ∼ 38,500 K) at ≈ 30% of the strength of the photospheric lines, shifted by -25 and -40 km s −1 respectively. Similar features in the spectrum of the T eff ∼ 57,000 K DA, REJ 0457-289 have also been discussed by . Agreement between the predicted and observed abundances of atomic species in the atmospheres of white dwarf stars has improved with the introduction of stratified model atmosphere codes, beginning with the stratification of He and Fe investigated by Barstow & Hubeny (1998) and Barstow et al. (1999). describe models in which stratified abundances are calculated self-consistently, by considering the depth dependence of temperature, density, radiation field and level populations using an iterative procedure. These models have succeeded in reproducing the soft X-ray, EUV and FUV spectra of a sample of DA stars, and can explain the widely varying levels of metallicity in hot DA white dwarfs. However, this agreement is not complete; for example, lines of C, N and O are not reproduced as accurately as those of Fe and Ni, while observed differences between objects of similar T eff and log g are unexplained . The success of stratified models is encouraging, but the observation of highly ionised circumstellar features suggests the existence of processes which may modify the predicted equilibrium abundances, and improving the agreement between observed and predicted abundances requires better understanding of the nature of these features. It is particularly interesting that the DO white dwarfs, in which circumstellar features are relatively common, are also poorly modeled by the new generation of stratified codes . Data from the STIS instrument on board HST have allowed the study of white dwarf spectra to be carried out with an accuracy impossible to achieve using earlier instruments, and at a resolution which permits more precise examination of intrinsic line profiles. A more sensitive search can now be made for signs of mass loss and accretion which may modify equilibrium abundances in white dwarf envelopes. In section 2 we present a phenomenological study of a sample of 23 hot (T eff 20,000 K) DA white dwarf stars for which either GHRS, STIS or high resolution IUE echelle spectra were available. In each star, the resonance doublets of C IV, N V and Si IV have been examined for signs of multiplicity (whether in the form of asymmetry or distinct components), and statistical tests applied to determine the significance of proposed secondary features. In several stars, Gaussian profiles are used to model the observed lines. A Gaussian approximation to line data can be justified since, at the resolution of both the IUE echelle data (R≃20,000) and the STIS E140M grating (R≃40,000), unsaturated ISM lines are not resolved, nor are the photospheric profiles of most absorption lines. In some cases, elemental abundances are considered as additional evidence for the presence of nonequilibrium processes. 
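The claim that unsaturated ISM lines are unresolved at these resolving powers can be checked with a back-of-the-envelope comparison of the instrumental velocity width (c/R) against a thermal Doppler width. The Python sketch below does this; the assumed gas temperature of 10,000 K and the choice of carbon are illustrative values and are not taken from the paper.

```python
import math

C_KMS = 299792.458        # speed of light, km/s
K_B = 1.380649e-23        # Boltzmann constant, J/K
M_P = 1.67262192e-27      # proton mass, kg

def instrumental_fwhm_kms(resolving_power):
    """Velocity FWHM corresponding to a resolving power R = lambda / dlambda."""
    return C_KMS / resolving_power

def thermal_b_kms(temperature_k, atomic_mass):
    """Doppler b-parameter (km/s) for a species of given atomic mass at T."""
    return math.sqrt(2.0 * K_B * temperature_k / (atomic_mass * M_P)) / 1000.0

# IUE echelle vs STIS E140M, compared with carbon (mass 12) in nominal 1e4 K gas.
for name, R in [("IUE echelle", 20_000), ("STIS E140M", 40_000)]:
    print(name, f"{instrumental_fwhm_kms(R):.1f} km/s FWHM")
print("thermal b for carbon at 10,000 K:", f"{thermal_b_kms(1e4, 12):.1f} km/s")
```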
The discussion of individual objects is divided into two subsections, covering stars which show clear (or suspected) highly ionised, non-photospheric features, followed by those that do not at the resolution of the data used in this study. Several of the stars in this category exhibit unusual features which require further investigation at higher spectral resolution. In section 3, the results of this study are discussed, and a variety of explanations for the presence of highly ionised non-photospheric features in certain white dwarfs are considered.

Observational data The sample of stars was chosen to match that of , with the addition of the super-hot DA PG 0948+534 and Ton 021, since GHRS, STIS or high resolution IUE echelle spectra were available for these objects. Table 1 summarises the survey stars, their basic physical parameters, and the source(s) of data used in this work. Values for temperature and gravity were taken from . The adopted visual magnitudes were those of Marsh et al. (1997), except where otherwise stated. The mass, radius and distance to each star was estimated using the evolutionary models developed by Wood (1995), taking the stated values of T eff, log g and mv as input parameters.

(Table 1: Stars included in the survey of circumstellar features, in descending temperature order. Mass, radius and distance calculated from the evolutionary models of Wood (1995). Visual magnitudes taken from Marsh et al. (1997) unless otherwise stated.)

The results of the survey are summarised in table 2, which includes the measured velocities of interstellar (vISM), photospheric (vphot), and any non-photospheric highly ionised lines (vcirc) for all stars in the sample. Note that values for vcirc are not relative to the photospheric features, but are absolute velocities. Also included in the table are estimated values for the escape velocity (vesc) and gravitational redshift (vgrav) of each star, and the velocity of the primary component of the local interstellar cloud along the line of sight to each star (vLIC). As noted previously, the spectral resolution of IUE, GHRS and STIS (in the E140M configuration prevalent in this study) is insufficient to completely resolve the ISM components, and hence the value of vISM presented in table 2 represents the velocity of the primary component (or blend) observed in the data. The local interstellar cloud can be described with reasonable accuracy by a cloud moving at 26 ± 1 km s −1 (heliocentric velocity) towards lII = (186 ± 3) • , bII = (−16 ± 3) • , or α = 74.5 • , δ = +15 • (Lallement et al. 1995). The velocity of absorption lines from the LIC in a particular star, vLIC*, may be estimated from the projection of vLIC onto the target direction.

Analysis methods Absorption line parameters were measured using an IDL code, "Lines", written by one of us (JBH). The routine measures the properties (wavelength, velocity, equivalent width and associated uncertainties) of cursor-defined features in an input spectrum, given a user-supplied rest-frame wavelength for the feature. The principal lines in each spectrum were identified using the data contained in HBS, beginning with unambiguous features such as the saturated interstellar lines of, e.g., N I, and the resonance doublets of photospheric C IV and Si IV.
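The projection of the LIC flow vector onto a target sight line, used above to estimate vLIC*, amounts to a dot product of two unit vectors in galactic coordinates. The sketch below shows one way to compute it; the sign convention (positive for gas receding along the sight line) and the example coordinates are assumptions made for illustration, not values from the paper.

```python
import math

def galactic_unit_vector(l_deg, b_deg):
    l, b = math.radians(l_deg), math.radians(b_deg)
    return (math.cos(b) * math.cos(l), math.cos(b) * math.sin(l), math.sin(b))

def projected_lic_velocity(l_deg, b_deg, flow_speed=26.0,
                           apex_l=186.0, apex_b=-16.0):
    """Project the LIC flow vector onto the sight line towards (l, b).

    The cloud is taken to move at flow_speed km/s towards the apex direction;
    positive results correspond to gas moving away from the Sun along that
    sight line (a redshifted interstellar component).
    """
    apex = galactic_unit_vector(apex_l, apex_b)
    target = galactic_unit_vector(l_deg, b_deg)
    return flow_speed * sum(a * t for a, t in zip(apex, target))

# Example: an arbitrary sight line at l = 90, b = +30 (purely illustrative).
print(f"{projected_lic_velocity(90.0, 30.0):+.1f} km/s")
```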
Subsequent identifications were validated against the resulting ISM or photospheric velocity. Gaussian line profiles have been fitted to observed features to determine the velocity of circumstellar material, or multiple ISM clouds along the line-of-sight. The profiles were fitted using further bespoke IDL routines which applied first a single, and then dual, Gaussian to cursor-identified features in the spectra via the χ 2 minimisation technique, generating values for the velocity and equivalent width of the best-fit components. In cases where the dual-Gaussian fit was not obviously superior, χ 2 values from the single and dual fits were then used to perform a standard F-test, following the method outlined by , to determine whether a significantly better fit to observation was obtained with the latter. In addition to the IDL routines, cross-checking of results was performed using the Dipso package (part of a suite of tools produced for the UK Starlink system). Excellent consistency was observed between results obtained from these disparate packages. Co-addition of spectral features has been performed in several cases in order to reveal details which are only marginally detectable at the S/N of the data. In this technique, ∼ 10Å -wide sections of spectral data are extracted, each centered on the wavelength of a particular line of a given species (e.g. the λλ1548,1550 lines of C IV). The sections are then transformed into velocity space so that each shows a line at the velocity of the primary interstellar cloud or the photosphere. Spectral sections are then co-added so that the strength of the (randomly distributed) noise features remains essentially unchanged, while absorption features sharing a common velocity are summed. The result is a spectrum with improved S/N, showing the profile of lines of a particular species. Co-addition does not improve the resolution of the data, and is therefore ineffective in revealing circumstellar features which are blended with their photospheric counterparts at the resolution of the data. Further, the technique is only effective when several of the primary lines are accompanied by such features. Nevertheless, co-addition has proved to be a useful technique in detecting weak circumstellar features which are clearly separated from the primary component at the resolution of the instrument. Comments on individual objects 2.3.1 Stars exhibiting circumstellar features REJ 1738+665 REJ 1738+665 is the hottest DA white dwarf to be detected by ROSAT (Barstow et al. 1994b). A photospheric velocity of vphot ≈ 30 ± 1 km s −1 is determined, based on absorption features arising from Fe, Ni and O which show no multiple components; interstellar lines indicate a line of sight ISM velocity of vISM ≈ −18 ± 1 km s −1 . The line of sight velocity of the LIC is estimated to be vLIC* ≈ −3.4 km s −1 . Clear evidence is found for the presence of circumstellar material in this star. Figure 1 shows the C IV resonance doublet in REJ 1738+665, with shifted components at −18.5 ± 0.5 km s −1 dominating the 30 km s −1 photo- spheric contribution. Shifted features with similar velocities are observed in several other species, although in each case the photospheric component is dominant. The Si IV doublet shows non-photospheric components at −17.7 ± 0.7 km s −1 . Viewed individually, the lines of the N V doublet (λλ 1238.821, 1242.804) show no evidence of companions, but co-addition of these features suggests an extra, weak component at −15.2 ± 2 km s −1 . 
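The co-addition technique described above (shifting each line of a species onto a common velocity scale and summing, so that features sharing a velocity reinforce while noise does not) can be sketched as follows. The window width, velocity grid and the synthetic doublet used in the demonstration are illustrative choices, not the parameters actually used in the study.

```python
import numpy as np

C_KMS = 299792.458

def coadd_in_velocity(wave, flux, rest_wavelengths,
                      v_grid=np.arange(-300.0, 300.0, 2.0)):
    """Co-add several lines of one species on a common velocity grid.

    Each ~10 A wide section centred on a rest wavelength is mapped to velocity
    space, interpolated onto v_grid, and summed.
    """
    total = np.zeros_like(v_grid)
    for w0 in rest_wavelengths:
        sel = np.abs(wave - w0) < 5.0                 # ~10 A window
        v = (wave[sel] - w0) / w0 * C_KMS             # Doppler velocity
        total += np.interp(v_grid, v, flux[sel])
    return v_grid, total

# Tiny synthetic demonstration: flat continuum with two Gaussian absorption
# lines at the C IV doublet wavelengths, both placed at -18 km/s.
wave = np.linspace(1540.0, 1560.0, 4000)
flux = np.ones_like(wave)
for w0 in (1548.202, 1550.774):
    centre = w0 * (1.0 - 18.0 / C_KMS)
    flux -= 0.3 * np.exp(-0.5 * ((wave - centre) / 0.03) ** 2)
v, f = coadd_in_velocity(wave, flux, [1548.202, 1550.774])
print("deepest co-added velocity:", v[np.argmin(f)], "km/s")
```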
The O IV doublet (λλ1338.612,1343.512) shows no additional features, but the O V line at λ1371.292 is accompanied by a weak shifted component at −18.7 ± 0.5 km s −1 . Curve-of-growth analysis of the C IV and Si IV lines indicate column densities of N (C IV) = 1.70 × 10 13 atoms cm −2 and N (Si IV) = 1.66 × 10 13 atoms cm −2 with b ≈ 5 km s −1 (curves for the Si IV features are shown in figure 2). The velocities measured for these shifted features differ from vISM by less than 0.75 km s −1 . This raises the possibility that they may be produced by photoionisation of the ISM within the Strömgren sphere around the star. However, Tweedy & Kwitter (1994) also present evidence for the possible existence of a planetary nebula around the star, based on the observation of N II circumstellar features at optical wavelengths. The non-photospheric features of REJ 1738+665 may therefore be produced by ionisation of the ancient planetary nebula remnant surrounding this star. The relationship between planetary nebulae and highly ionised non-photospheric lines is discussed in section 3. Ton 021 Other than its inclusion in general white dwarf surveys, Ton 021 has received comparatively little attention in the literature. We determine T eff = 69700 ± 530 K and log g = 7.47 ± 0.05 (c.f. 69711 ± 1030 and 7.469 ± 0.05 respectively from Finley et al. (1997)). Weighted averages of the significant interstellar and photospheric absorption features give values of vISM = 0.86 ± 0.01 km s −1 and vphot = 37.25 ± 0.22 km s −1 . Among the observed interstellar features are strong lines of C II at λ1334.5323 (equivalent width 171 ± 1.6 mÅ) and λ1335.7076 (equivalent width 61 ± 2.2 mÅ). Velocities consistent with the weighted average vISM are observed for the broad interstellar lines of N I and Si II; however, the Si III λ1206.5 line appears at 7.8 ± 0.7 km s −1 . This is not a unique observation; Holberg et al. (1999a) find significant differences between the velocity of this line and the mean value of vISM in REJ 1032+532 (also included in the current study). Holberg et al. discuss this observation in some detail, arguing that the Si III feature is unlikely to originate in the LIC due to the high ionisation fraction of hydrogen (∼ 95%) required to maintain detectable amounts of Si III, which has a high rate coefficient for charge exchange with neutral H. As noted by Holberg et al. (1999a), a similar observation is made by Vidal-Madjar et al. (1998) in the case of G191-B2B. Both lines of the C IV resonance doublet are accompanied by shifted features. In the λ1548.202 line, the nonphotospheric component is best fit by a Gaussian with a velocity of vcirc = 9.5 ± 1.0 km s −1 , and an equivalent width of 20 ± 5 mÅ. This velocity agrees, within the stated error margin, with that determined for the Si III line. The shifted component in the λ1550.774 line is fit by a Gaussian at vcirc = 5.5 ± 2.2 km s −1 , with an equivalent width of 9.4 ± 2.5 mÅ. The estimated C IV column density contributing to these shifted features is N (C IV) = 3.98 × 10 12 -1.26 × 10 13 atoms cm −2 based on a curve-of-growth analysis. In both cases, the photospheric components are found at velocities consistent with the average value for vphot. The presence of a non-photospheric C IV feature is most clearly demonstrated in the co-added lines of the resonance doublet, as shown in figure 3. Viewed individually, the lines of the Si IV resonance doublet appear slightly asymmetrical but show no clear multiplicity at the resolution of this data. 
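The fitting procedure used throughout this section (a single Gaussian, then a dual Gaussian, compared with an F-test on the change in chi-squared) can be sketched in Python as below. The component depths, velocities and noise level in the synthetic example are invented for illustration; the significance threshold applied to the resulting p-value is a separate analysis choice.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def one_gauss(v, depth, centre, sigma):
    return 1.0 - depth * np.exp(-0.5 * ((v - centre) / sigma) ** 2)

def two_gauss(v, d1, c1, s1, d2, c2, s2):
    return one_gauss(v, d1, c1, s1) + one_gauss(v, d2, c2, s2) - 1.0

def compare_fits(v, flux, err, p1_guess, p2_guess):
    """Fit single and dual Gaussians; F-test whether the dual fit is better."""
    popt1, _ = curve_fit(one_gauss, v, flux, p0=p1_guess, sigma=err)
    popt2, _ = curve_fit(two_gauss, v, flux, p0=p2_guess, sigma=err)
    chi1 = np.sum(((flux - one_gauss(v, *popt1)) / err) ** 2)
    chi2 = np.sum(((flux - two_gauss(v, *popt2)) / err) ** 2)
    nu1, nu2 = len(v) - len(popt1), len(v) - len(popt2)
    F = ((chi1 - chi2) / (nu1 - nu2)) / (chi2 / nu2)
    return popt1, popt2, F, f_dist.sf(F, nu1 - nu2, nu2)

# Synthetic example: a photospheric component at +30 km/s plus a weaker
# blueshifted component at -18 km/s (all values purely illustrative).
rng = np.random.default_rng(1)
v = np.linspace(-150.0, 150.0, 120)
flux = two_gauss(v, 0.5, 30.0, 15.0, 0.25, -18.0, 12.0) + rng.normal(0.0, 0.02, v.size)
err = np.full_like(v, 0.02)
_, _, F, p = compare_fits(v, flux, err,
                          p1_guess=[0.5, 20.0, 20.0],
                          p2_guess=[0.5, 30.0, 15.0, 0.2, -20.0, 15.0])
print(f"F = {F:.1f}, p = {p:.3g}")
```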
When co-added, this asymmetry is more noticeable (as shown again in fig 3), and we find the best fit dual-gaussian to be one with a primary component at ∼ 37 ± 1.2 km s −1 , in agreement with vphot, and a non-photospheric component at ∼ 6.5 ± 12.3 km s −1 . Although this velocity is close to the value of vcirc found in the C IV doublet, there is considerable uncertainty in the measurement, and data of higher S/N will be required to confirm the detection of non-photospheric components to the Si IV doublet of Ton 021. REJ 0558-373 The C IV λλ1548.202,1550.774 lines in this star are moderately asymmetrical. An F-test suggests that a dual Gaussian fit is preferred over a single line, above the 98% confidence interval, for each feature. No corresponding asymmetries are observed elsewhere in REJ 0558-373. For the λ1548.202 line, the individual components are found at 7.0 ± 1.0 and 26.8 ± 1.2 km s −1 (with equivalent widths of 79 and 68 mÅ respectively). Corresponding values for the λ1550.774 line are 7.9 ± 1.2 and 21.8 ± 1.0 km s −1 (13 mÅ and 150 mÅ). Fits to each line are illustrated in figure 4. For comparison, vphot = 22.7 ± 2.8 km s −1 , and vISM = 11.6 ± 1.4 km s −1 , and hence the longer wavelength component of each C IV line is in reasonable agreement with the photospheric value. vgrav is estimated as ≈ 20 km s −1 for this star, which is greater than the ∼ 10 km s −1 difference between the photospheric and shifted C IV components. It is therefore possible that the blueshifted, non-photospheric C IV features could be formed by material within the potential well, rather than the weakly shifted outer regions. However, the non-photospheric components lie close to the velocity of the ISM, raising instead the possibility that the star is ionising material in its local interstellar environment. In either case, the absence of corresponding features in other strong lines is puzzling. WD 2218+706 This object is unusual and important in several respects. Lines of the C IV and Si IV doublets are clearly multiple, dominated by a photospheric contribution, but with accompanying components of comparable equivalent width at a velocity of -16.3 ± 0.7 km s −1 . These features are therefore redshifted with respect to the photospheric velocity (vphot = -38.7 ± 0.2 km s −1 ), possibly representing the infall of material onto the white dwarf. Gravitational redshift therefore provides no viable explanation for these lines. WD 2218+706 is surrounded by an old planetary nebula, DeHt5, and is discussed by Napiwotzki & Schönberner (1995). In a study of planetary nebula dynamics, Dgani & Soker (1998) show that in regions where the ISM is reasonably dense (such as the galactic plane), Rayleigh-Taylor instabilities can develop in the outer regions of planetary nebulae, leading to fragmentation of the halo, and allowing the surrounding ISM to pass into the inner regions of the nebula where photoionisation can occur. Although WD 2218+706 is out of the galactic plane (bII = 11.6 • ), and therefore lies in a region where the mean ISM density may be expected to be relatively low, Kun (1998) describes the morphology of a nearby giant molecular cloud complex consisting of a large number of distinct regions previously identified in independent surveys; several are found close to WD 2218+706, and two are of particular interest (Lynds 1217 and Lynds 1219). 
The central portions of these clouds have galactic coordinates within less than 0.5 • of this star, and their distance limits (from 380 to 450 pc) encompass the distance to WD 2218+706 (440 pc, from Napiwotzki & Schönberner). This raises the possibility that the star may lie in an area where the ISM is particularly dense, allowing instability and inflow to take place (a curve of growth analysis for the non-photospheric features in WD 2218+706 suggests column densities of N (C IV) = 4.17×10 13 atoms cm −2 and N (Si IV) = 4.07×10 13 atoms cm −2 , each with a Doppler parameter of 6 km s −1 ). However, alternative explanations, such as the presence of a hidden companion, are also deserving of investigation, and this work is currently in progress. During the course of the WD 2218+706 study, evidence was found for the existence of trace amounts of He in the STIS spectrum , with the He II λ1640.5050 line observed in the STIS spectrum close to the estimated photospheric velocity. As noted by Barstow et al., the λ1640.5050 line has an n=2 lower level, which should not be populated in collisionless interstellar material, while any helium in the surrounding planetary nebula would be expected to be found in emission. Hence a photospheric origin appears to be the most satisfactory of these three possible sources. The surface gravity of WD 2218+706 (log g ≈ 7.00) is low for an isolated white dwarf, and may be explained in terms of close-binary evolution, in which the progenitor star fills its Roche lobe and loses mass to a companion. This loss of material prevents helium ignition from taking place. Instead, the star, consisting of a He core surrounded by a Hrich envelope (which still supports nuclear reactions at the base), contracts slowly towards the low-mass, He-core white dwarf configuration (Driebe et al. 1998;Napiwotzki 1999). As the case of Feige 24 illustrates, the presence of a binary companion can contribute to the appearance of circumstellar lines. However, there is no direct evidence for the existence of a binary companion, and the range of possible masses for this star does not preclude AGB evolution, leaving open the possibility that WD 2218+706 is the product of single-star evolution. Evidence exists for at least one other H-rich white dwarf star exhibiting the He II λ1640 line, in the DAB HS 0209+0832 (T eff ∼ 35,000 K; log g ∼ 7.8). This star is discussed by Wolff et al. (2000), who suggest that significant quantities of He are present in the atmosphere, despite the short diffusion timescales for He, as a result of ongoing accretion of matter from an interstellar cloud. Supporting evidence for this explanation can be found in the work by Heber et al (1997), who observe variability in the strength of the He λ1640 line, possibly as a result of the passage of HS 0209+0832 through an inhomogeneous medium. Alternatively, Unglaub & Bues (2000) find that DAO stars, in which mass loss in the form of a stellar wind prevents He from sinking out of the atmosphere, can transform into DA stars when the phase of wind-driven mass loss ends. This transition is found to occur near log g ∼ 7.0 for a star with T eff ∼ 60,000 K (see fig.6, Unglaub & Bues (2000)), raising the possibility that WD 2218+706 may be such a transitional object. Feige 24 Feige 24 is a white dwarf+red dwarf binary system, and is the subject of several detailed studies (e.g. Dupree & Raymond 1982;Vennes & Thorstensen 1994). 
Two STIS data-sets were available for this star, acquired on November 29 th 1997 (binary phase 0.73-0.75) and January 4 th 1998 (binary phase 0.23-0.25), representing the orbital quadrature points. Vennes & Thorstensen estimate a systemic velocity of 62.0 ± 1.4 km s −1 . For the current study, the systemic velocity has been estimated by taking the mean of the photospheric values obtained from each data set (31.6 and 129.1 km s −1 ), resulting in an estimated systemic velocity of 80.3 ± 0.5 km s −1 . vISM is estimated at 8.2 ±0.1 km s −1 . Feige 24 is known to exhibit multiple components, in the lines of the C IV doublet only. In this work, the dominant components match the photospheric velocity of each dataset, and secondary features are observed to remain at 7.8 ± 0.2 km s −1 , irrespective of the orbital phase. This stationary component has been discussed by Dupree & Raymond, who suggest that the most likely source is a Strömgren sphere excited by the white dwarf, and measure column densities of N(C IV) = 3.86(±1.51) × 10 13 atoms cm −2 using curves-ofgrowth. Vennes & Thorstensen investigated the possibility that the material responsible resides in the photosphere, in a circumstellar shell, or in a wind from the red dwarf companion. The current study shows no shifted components in any of the other resonance lines (e.g. Si IV or N V). A third, very weak feature on the red side of each photospheric line in the C IV doublet, most obviously at 1550Å, is identified as the Fe V λ 1550.907 line. The gravitational redshift estimate made during this study, vgrav ≈ 17 km s −1 , is somewhat higher than that derived by Vennes & Thorstensen (vgrav ≈ 9 ± 2 km s −1 ), but is still too low to explain the secondary C IV components. However, their velocities agree, within error, with that of the ISM, and hence a link between the star and its immediate surroundings (beyond any circumstellar shell) cannot be discounted. Alternatively, Vennes & Thorstensen estimate that a C IV column density from N (C IV) = 7.94 × 10 11 -7.94 × 10 13 atoms cm −2 (corresponding to equivalent widths of between 4-400 mÅ for the nonphotospheric component of the λ 1550.774 line, assuming a linear curve-of-growth), would be consistent with mass loss from the red dwarf companion. Although insufficient data are available to derive an unambiguous C IV column density in this study, the estimated value of N (C IV) = 1.48 × 10 13 atoms cm −2 , is within the range of possible values obtained by Vennes & Thorstensen, and the equivalent width of the λ 1550.774 line (24 mÅ) is also consistent with the large range allowed by earlier estimates. G191-B2B The STIS E140M data for this star show the C IV resonance doublets are accompanied by strong nonphotospheric blue shifted components, confirming the results of Bruhweiler et al. (1999). The wavelength and equivalent width of these features have been estimated by fitting multiple Gaussians to the data, as shown in figure 8. The component at photospheric velocity is of greater equivalent width, and the shifted elements are observed at velocities of 7.6 ±0.2 km s −1 . Recently, Holberg et al. (2002) also detected weak blue shifted components to the Si IV λλ1393.755,1402.777 doublet at the same velocity as the non-photospheric C IV features, using 22 co-added STIS E140H spectra. 
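The velocities and equivalent widths quoted throughout this section come from fitting one or more Gaussian absorption components to each line profile (e.g. figure 8 for the G191-B2B C IV doublet). A minimal sketch of such a dual-component fit in velocity space, assuming locally normalised profiles and purely illustrative starting values (this is not the reduction pipeline actually used in this work), is:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light, km/s

def two_gaussians(v, d1, v1, s1, d2, v2, s2):
    """Normalised flux: unit continuum minus two Gaussian absorption dips."""
    g1 = d1 * np.exp(-0.5 * ((v - v1) / s1) ** 2)
    g2 = d2 * np.exp(-0.5 * ((v - v2) / s2) ** 2)
    return 1.0 - g1 - g2

def fit_line(wave, flux, rest_wave, p0):
    """Fit a photospheric plus a shifted component to one normalised profile."""
    v = C_KMS * (wave - rest_wave) / rest_wave        # wavelength -> velocity
    popt, pcov = curve_fit(two_gaussians, v, flux, p0=p0)
    errs = np.sqrt(np.diag(pcov))
    # Equivalent width of each component in mA: depth * sigma * sqrt(2*pi),
    # converted from velocity back to wavelength units.
    ews = [popt[i] * popt[i + 2] * np.sqrt(2.0 * np.pi) * rest_wave / C_KMS * 1e3
           for i in (0, 3)]
    return popt, errs, ews

# Illustrative starting guess only: depths, velocities (km/s) and widths (km/s)
# p0 = [0.5, 25.0, 8.0, 0.2, 8.0, 8.0]
```

An F-test comparing the quality of single- and dual-component fits, as applied above to REJ 0558-373 and below to GD 659 and REJ 2214-492, then indicates whether the additional component is statistically warranted.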
The calculated figure of vgrav ≈ 15.4 km s −1 is comparable with the velocity difference between photospheric and shifted high ionisation features, suggesting that the nonphotospheric material probably resides outside the limit of the potential well. The velocity of the highly ionised nonphotospheric features is substantially different to the value of the interstellar features (vISM = 16 ± 1 km s −1 as determined in this work), and at first sight, photoionisation of the cloud responsible for the primary ISM features does not appear to provide a viable explanation. However, the value quoted for vISM is based on analysis of an E140M (medium resolution) STIS data-set, with a resolving power of ∼ 35000. Sahu et al. (1999) describe observations made with the E140H grating (resolving power ∼ 110000), and clearly show two distinct interstellar components, with velocities of ∼ 8.6 km s −1 and ∼ 19.3 km s −1 , the latter component having a velocity close to the predicted value of vLIC (estimated at 20.58 km s −1 in this study). Clearly, the highly ionised non-photospheric components have a velocity which is very close to the 8.6 km s −1 interstellar cloud. A curve-of-growth analysis was performed for the C IV features in G191-B2B. As in the case of Feige 24, the availability of only two datum points prevents any rigorous constraints from being placed on the implied C IV column density, though the value of N (C IV) = 2.40 × 10 13 atoms cm −2 is not dissimilar from the results of Vennes & Lanz (2001), who use synthetic modeling techniques to estimate a value of N (C IV) = 6.31 × 10 13 atoms cm −2 . The Doppler parameter suggested by the current analysis, b = 10 km s −1 , is significantly higher than the value of b = 5.2 km s −1 presented by Vennes & Lanz; this discrepancy may also be explained by the lack of available data in the current work. REJ 0457-281 The exceptionally low H I column density to this star (1.3 × 10 17 atom cm −2 ), was revealed in the discovery paper by Barstow et al. (1994a). Along with G191-B2B, this white dwarf was the first to have phosphorous and sulphur identified in its spectrum (Vennes et al. 1996). Later, HBS showed that the photospheric Si IV and C IV resonance lines of REJ 0457-281 are accompanied by blue-shifted features (figure 9). Few interstellar and photospheric lines are identifiable, making precise velocity measurements difficult. The ISM velocity estimate of HBS is confirmed, but somewhat higher photospheric velocities are derived. Co-addition of the C IV doublet lines clearly reveals two velocity components: one at 22.5 ±1.39 km s −1 , and the photospheric component at 81.26 ± 2.65 km s −1 . A curve of growth analysis for the nonphotospheric components suggests N (C IV) ∼ 1.82 × 10 14 atoms cm −2 with a Doppler parameter b ≈ 4 km s −1 . For the co-added Si IV doublet, corresponding velocities are 19.08 ± 4.31 km s −1 and 80.65 ± 1.38 km s −1 . Although multiple velocity components are not obvious in the N V doublet, there is, nevertheless, evidence to suggest that they are present (figure 9). The λ 1238.8210 N V line is a narrow, well defined feature at 76.33 ± 4.78 km s −1 , accompanied by a weaker blueshifted feature at 16.55 ± 4.61 km s −1 . The main λ 1242.804 line has a similar velocity (76.91 ± 4.38 km s −1 ), but shows only tentative evidence for a blueshifted component. From these data, a weighted average is computed for the photospheric and blueshifted velocity components, suggesting vphot = 76.91 ± 0.83, (c.f. 
69.60 ± 1.97 from HBS), and vcirc = 21.76 ± 1.27. Thus, the estimated velocity shift of the blueshifted features relative to the photospheric components agrees, within the stated error, with the 53 km s −1 value of HBS. REJ 2156-546 Barstow et al. (1997) determined limits to the heavy element abundance in REJ 2156-546, and described this object as being similar to HZ 43 in having a reasonably pure H atmosphere. The STIS spectrum of REJ 2156-546 shows clear interstellar lines, indicating vISM = -8.39 ± 0.17 km s −1 . These new, high resolution data also appear to show features due to photospheric material. The lines are weak, and unambiguous identifications are limited to the strong resonance doublets of Si IV and C IV, although there may be features from N V and Ni V at the detection limit. To obtain a reliable value for vphot, the Si IV lines (λλ1393.755,1402.777) were co-added in velocity space, producing a clear feature at -17.79 ± 1.33 km s −1 , with an equivalent width of 11 mÅ. Co-addition of the C IV doublet appears to reveal two features (figure 10). The first, at -20.71 ± 0.80 km s −1 , is weak (4.7 mÅ), and very close to our value for vphot. The dominant second component lies at -1.65 ± 0.76 km s −1 . The proposed C IV feature lying at a similar velocity to the single Si IV line is admittedly weak, and spectra of improved S/N will be required before these results can be regarded as incontrovertible. Nevertheless, if it is assumed that the object is devoid of any non-photospheric features, whether blue- or redshifted, then the relatively large difference in velocity between the Si IV line and the dominant C IV feature (approximately 16 km s −1 ) is somewhat difficult to explain. Clearly, this is an object deserving of further attention. REJ 1614-085 The amount of Si in the spectrum of REJ 1614-085 has been found to be an order of magnitude under-abundant compared to the predictions of radiative levitation calculations, while N appears to be three orders of magnitude over-abundant. Two velocity components were observed in the line of sight ISM, but most significant for the current work is the result that the lines of the C IV and Si IV doublets exhibit weak blueshifted features, as illustrated in figure 11 (which shows the co-added lines of the C IV and Si IV doublets in velocity space, with the previously noted blueshifted components indicated by arrows). Results from this study suggest that the primary ISM component lies at a velocity of -29.56 ± 0.33 km s −1 , and the secondary component at +48.64 ± 1.15 km s −1 . These values compare reasonably well with those of Holberg et al., who find velocities of -27.05 ± 1.50 and +47.40 ± 1.50 km s −1 respectively. Similar agreement with the earlier study is also found when considering the photospheric and blueshifted features. The photospheric velocity is found to be vphot = -37.31 ± 0.40 km s −1 , in excellent agreement with Holberg et al. As recorded previously, no clear evidence exists for a secondary component to the photospheric N V lines, although it should be noted that the centroid positions of Gaussian fits to these lines are found to differ by approximately 5 km s −1 . In the case of the Si IV doublet, the secondary components are shifted by -40 km s −1 relative to the primary features, as determined in the earlier study; for the C IV doublet, this figure is -29 km s −1 , compared to the previously quoted value of -25 km s −1 . This apparent discrepancy is most likely to be a
result of the different positions chosen for line demarcation in the two studies. A curve of growth analysis performed on the shifted C IV features suggests N (C IV) ≈ 3.16 × 10 13 atoms cm −2 , and the Doppler parameter b ≈ 2 km s −1 , though with only two data points, these values are particularly poorly constrained, and must not be over-interpreted. GD 659 Both IUE and STIS data are available for this star, although the STIS spectrum is of limited coverage (1160 - 1357Å). The ISM velocity determined from the IUE data (vISM = 12.33 ± 1.52 km s −1 ) agrees with that of HBS, while the IUE-based photospheric velocity appears to be somewhat lower than previously quoted (vphot = 33.51 ± 1.03 km s −1 , c.f. 40.31 ± 1.83 km s −1 from HBS). Since these measurements were made with identical data sets, this discrepancy must be ascribed to differences in choices of continuum levels and line boundaries used during the measurement process. However, the available STIS data also point to a lower photospheric velocity, with vphot = 34.28 ± 0.17 km s −1 , in agreement with the IUE estimate made in this study, and close to the value of 33.58 km s −1 determined by Holberg et al. (2000) using STIS data. The STIS ISM velocity is also lower than the IUE value, at vISM = 9.77 ± 0.22 km s −1 . These STIS velocities, based on higher resolution data with better S/N, are adopted for GD 659 in table 2. Although the resonance doublets of N V, Si IV and C IV are clearly visible in the IUE data, the resolution is insufficient to observe well defined Gaussian profiles, and thus the sensitivity to any non-photospheric components is low. However, the profiles appear to be narrow, ruling out any obvious multiplicity in these lines. STIS data show the lines of the N V doublet as narrow and symmetrical, effectively ruling out the existence of non-photospheric N V components. One interesting feature, clearly visible in the C IV λ 1548.202 line, though present also in the λ 1550.774 line, is a weak feature near 0 km s −1 (figure 12). Fitting a double Gaussian to the co-added C IV lines, velocities of 36.74 ± 2.56 km s −1 (primary) and -2.97 ± 3.00 km s −1 (secondary) are obtained. The secondary component is weak (with an equivalent width of 6 mÅ compared to 36 mÅ for the photospheric component), and is of comparable strength to scatter in the adjacent continuum regions. Although an F-test indicates that a dual Gaussian fit is preferred over a single component at the 94% confidence level, the similarity between this feature and the natural scatter in the data suggests that this is simply noise. However, until high resolution STIS data for this region can rule out the existence of shifted components in the C IV lines, we tentatively include GD 659 among the stars with possible circumstellar features. PG 0948+534 Data for this star are shown in figure 13. The photospheric lines, including the resonance doublets of C IV and Si IV, exhibit narrow, symmetrical profiles and are apparently devoid of any shifted components, defining vphot = -14.25 ± 0.22 km s −1 . A remarkably strong, multi-component C II 1335.7076Å feature is observed, also shown in figure 13. The velocity components match those of other ISM lines, although the -23 km s −1 feature is very weak, manifesting itself as a broadening on the blue side of the line. Excited Si II transitions such as λλ1265.002,1309.276,1533.431 are not observed (the presence of these lines in the white dwarf CD -38° 10980 has previously been used to infer the existence of a circumstellar cloud around that star).
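The curve-of-growth column densities quoted above and throughout this section rest on, at most, the two members of a resonance doublet and are therefore weakly constrained. In the optically thin limit the relation between equivalent width and column density reduces to the standard linear form N ≈ 1.13 × 10 20 W/(f λ 2 ), with W and λ in Å. The sketch below uses approximate oscillator strengths (assumed values that should be checked against an atomic line list) and ignores saturation, which is why the published analyses also fit the Doppler parameter b:

```python
# Optically thin (linear curve-of-growth) column density estimate.
# Oscillator strengths are approximate and assumed here for illustration.
F_VALUES = {
    "C IV 1548.202": 0.190,
    "C IV 1550.774": 0.095,
    "Si IV 1393.755": 0.513,
    "Si IV 1402.777": 0.255,
}

def column_density(ew_mA, wavelength_A, f_value):
    """N in atoms cm^-2 from an unsaturated line of equivalent width ew_mA."""
    ew_A = ew_mA / 1000.0
    return 1.13e20 * ew_A / (f_value * wavelength_A ** 2)

# The 24 mA C IV 1550.774 feature measured for Feige 24 implies a column of
# order 1e13 atoms cm^-2 if the line is unsaturated:
print(column_density(24.0, 1550.774, F_VALUES["C IV 1550.774"]))
```

Once a line approaches saturation the inferred column becomes strongly sensitive to b, so the linear estimate should be read as a lower bound rather than a measurement.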
No evidence of highly ionised non-photospheric material is found in PG 0948+534. REJ 2214-492 Weighted average line velocities indicate vISM = -1.72 ± 0.51 km s −1 , and vphot = 33.49 ± 0.45 km s −1 . These values compare well with those of HBS, who find vISM = -0.71 ± 0.88 and vphot = 33.91 ± 0.47 km s −1 respectively. However, a significant difference exists between the velocity of each line in the C IV doublet, with λ 1548.202 at 30.5 ± 2.1 km s −1 , and λ 1550.774 at 40.4 ± 2.7 km s −1 , if each of the two lines is assumed to be made up of only one absorption feature. Visual inspection of the C IV doublet reveals slight asymmetry, particularly in the λ 1548.202 line, where a dual fit was found to be superior to the single Gaussian at the 99.9% confidence level, with velocity components at 5.37 and 38.36 km s −1 , and equivalent widths of 31.1 and 99.3 mÅ respectively. A dual Gaussian fit to the λ 1550.774 line produced a less obvious improvement (at the 90% confidence level) with components at 36.2 and 80.6 km s −1 , and equivalent widths of 137.6 and 14.8 mÅ respectively. The primary Gaussian components of the doublet thus lie at velocities more consistent with each other and the overall photospheric value. The status of putative non-photospheric contributions in the C IV doublet is less certain; although the 5 km s −1 feature at 1548Å appears to provide a good match to observation, the lack of a corresponding feature at 1550Å prevents confirmation of its reality. No evidence was found for multiplicity in other photospheric lines. Line profiles were compared with those from a model spectrum, produced using the TLUSTY and SYNSPEC codes, and adopting the heavy element abundances determined by (with N(C)/N(H) = 1.0 ×10 −6 ). After smoothing the model to the resolution of IUE, no significant differences were apparent in the shapes of model and observed C IV lines (figure 14), casting further doubt on the presence of shifted features in this star. REJ 0623-371 The photospheric velocity determined here (vphot = 41.15 ±0.56 km s −1 ) agrees with that of HBS, though the current value for vISM is somewhat lower than given by HBS (16.40 ±0.70 km s −1 c.f. 19.48 ±0.85 km s −1 respectively); the principal source of this difference lies in a more precise determination of the C II line velocity. HBS estimate the velocity of this line to be 14.07 ±3.24 km s −1 , compared to the new value of 13.33 ±1.14 km s −1 . No compelling evidence exists for the presence of shifted components in the spectrum of REJ 0623-371, but as in the case of REJ 2214-492, a significant difference is observed in the velocity of the lines in the C IV doublet (1548 A = 38.6 ±2.3 km s −1 , 1550Å = 47.6 ±2.2 km s −1 ). In contrast, the lines of the N V and Si IV resonance doublets, which are of comparable equivalent width, agree within the estimated error. Determining the reality of any features in the doublet is complicated by considerable absorption in the continuum of this extremely metal rich DA, though by restricting Gaussian fits to the region below this structure, useful comparisons between the level of agreement found with single and double Gaussian profiles, may be obtained. Using this method, a dual Gaussian fit is preferred to a single feature only at the 88.8% level for the 1548Å line. However, results for the 1550Å line are less ambiguous, suggesting a dual fit at 98.9%. The resulting Gaussians have velocities of 45.1 and 77.3 km s −1 , and equivalent widths of 128.5 and 10.8 mÅ respectively ( figure 15). 
Thus, while the quality of the available data is insufficient to prove the existence of circumstellar features in the star, the results of this analysis provide some justification for proposing repeat observations at a higher signal-to-noise ratio (S/N) and spectral resolution. REJ 2334-471 Values measured for vISM and vphot are in good agreement with those obtained by HBS. The relatively poor quality of the IUE data precludes unambiguous identification of any non-photospheric features, particularly in the case of the C IV lines. There is no evidence of multiplicity in the N V doublet, although a dual Gaussian fit to the 1242Å line, with components at 19.7 and 43.51 km s −1 , produces a fit which is preferred over a single feature at the 93% confidence level. It is therefore intriguing that each line in the Si IV doublet (λλ1393.755, 1402.777) is fitted reasonably well (above the 95% confidence interval when compared to a single feature) by double Gaussian profiles. Each line can be described by a double Gaussian with V1 = 34.00 ±0.82 km s −1 , EW1 = 48.85 ±0.95 mÅ, and V2 = 54.64 ±1.58 km s −1 , EW2 = 22.74 ±3.25 mÅ. Neither of the Gaussian velocities are in agreement with the average photospheric value, although a considerable spread in individual photospheric velocity measurements is observed, so that the discrepancy cannot be used to infer the absence of such features. Spectra with improved S/N are required to confirm or disprove the existence of circumstellar features in this star. GD 246 IUE and STIS data were available for this star. Photospheric and ISM line velocities measured from the IUE data are in good agreement with those of HBS. The photospheric velocity estimated from STIS data differs from the IUE value by 1 km s −1 , although this is within the IUE error bounds. The STIS ISM velocity (-5.78 ±0.12 km s −1 ) is marginally outside the IUE estimate (-7.87 ±1.00 km s −1 ). IUE and STIS data clearly show the C IV doublet lines as singular, and at the photospheric velocity. STIS data shows the Si IV λ 1393.755 line as being devoid of any secondary components, while a sharp feature (only one bin, or 0.02Å in width) is observed on the redward edge of the λ 1402.777 line. Although IUE data also hint at a broadening on this side of the line, unconvincing dual Gaussian fits, and the extreme narrowness of the extra feature in STIS data, suggest that this is due to noise, and thus GD 246 shows no clear evidence of circumstellar material. PG 1123+189 In the photometric study of hot white dwarfs by Green et al. (2000), this object is one of those listed as having a significant IR excess, suggesting the possibility of a low mass companion to the star. The STIS spectrum for this object is limited in coverage (1163 -1361Å), and has relatively low S/N. However, many interstellar lines are visible in the spectrum, and a value of vISM = -0.67 ± 0.06 km s −1 is obtained. Although photospheric features are difficult to distinguish in the data, by co-adding the N V lines at 1238 and 1242Å with those of Ni V between 1250 and 1336Å in velocity space, a single absorption feature is clearly visible, suggesting a value of vphot = 12.55 ± 0.53 km s −1 . The quality of these data is insufficient to confirm or rule out the presence of non-photospheric features with confidence. HZ 43 A well studied object, HZ 43 is a member of the group of white dwarfs which can be adequately modeled with an atmosphere devoid of any heavy elements. 
Several ISM lines are observed, leading to an estimate of vISM which agrees with that of HBS. Co-addition of the spectrum at the wavelengths of the major N, C, Ni and Si lines fails to reveal any photospheric features. Similarly, co-addition, in velocity space, at the wavelengths of the excited Si transitions (λλ1264.738, 1265.0020, 1309.2758 and 1533.4312) also shows no new features. REJ 1032+532 This object is the subject of a comprehensive study by Holberg et al. (1999); Holberg et al. (1999a). In the current work, the measured value of the primary ISM features, vISM = 0.84 ±0.21 km s −1 , agrees, within error, with that of Holberg et al. (1999a). A previously noted secondary component to the Si II λλ1193.2897, 1260.4221 and 1526.7065 lines is found to have a velocity of -30.43 ±1.39 km s −1 , also in agreement with the value quoted by Holberg et al.. A value of vphot = 38.16 ±0.40 km s −1 is determined for the photospheric velocity. The excited Si II lines found around some stars possessing circumstellar clouds are absent, and in none of the photospheric lines is any compelling evidence found for the existence of secondary components. PG 1057+719 This object (alternative ID REJ 1100+713) is also included in the photometric study of white dwarfs by Green et al. (2000), with no significant IR excess being detected. It belongs to the low opacity metal poor class which includes the majority of DA white dwarfs. presented a study of this star and REJ 1614-085 (see below). Their results revealed no signs of circumstellar features. The current work confirms the results of Holberg et al., revealing no shifted features in the GHRS data. Co-addition of the ISM lines results in vISM = -2.89 ±0.69 km s −1 . As expected for a low EUV opacity object, no significant photospheric lines are observed. To detect any signs of photospheric features, a series of 10Å -wide sections were extracted from the data, each centred on the rest wavelength of one of the lines of the N V, C IV and Si IV resonance doublets. The sections were then transformed into velocity space, and co-added. The presence of barelydetectable quantities of N, C and Si might then be expected to produce a noticeable reduction in continuum level around the photospheric velocity. The co-added data does indeed reveal a feature, with a velocity of 75.35 ± 2.59 km s −1 , consistent with the value of vphot = 76.1 ±3 km s −1 determined from Balmer line fitting. However, as indicated by Holberg et al., weak individual features found near the expected positions of these lines show a considerable spread in velocity, casting doubt on their authenticity. GD 394 GD 394 is photometrically variable in the EUV, though no signs of spectroscopic variation have been detected. In contrast to REJ 1614-085, GD 394 has an extreme overabundance of Si compared with model predictions . Dupuis et al. (2000) note that this extreme Si abundance, and the observed EUV variability, give a unique status to GD 394. Dupuis et al. present spectroscopic and timing analyses of GD 394, which suggest the presence of a large EUV-dark spot on the surface of the star, sharing the stellar rotation period of 1.150 days. Episodic accretion is proposed as the source of this spot, with a magnetic field directing material onto the magnetic poles. No evidence exists for the presence of a magnetic field in GD 394, though only upper limits can currently be placed on the strength of any such field. 
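The velocity-space co-addition used above for HZ 43 and PG 1057+719 (and for several other stars in this paper) amounts to shifting each extracted section onto a common velocity grid and averaging. A schematic version, assuming equal weighting of the sections (the published analysis may weight by noise), is:

```python
import numpy as np

C_KMS = 2.998e5  # speed of light, km/s

def coadd_in_velocity(sections, rest_waves, v_grid):
    """sections: list of (wavelength, normalised flux) pairs, one per line."""
    stack = []
    for (wave, flux), w0 in zip(sections, rest_waves):
        v = C_KMS * (wave - w0) / w0               # shift to velocity space
        stack.append(np.interp(v_grid, v, flux))   # resample on a common grid
    return np.mean(stack, axis=0)

# e.g. v_grid = np.arange(-500.0, 500.0, 2.0), with 10 A-wide sections centred
# on the N V, C IV and Si IV rest wavelengths, as described for PG 1057+719.
```

Because uncorrelated noise averages down while a real absorption feature at a common velocity does not, the co-added profile can reveal species that are individually below the detection limit.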
GD 394 appears to be an isolated star, and hence no obvious candidate exists for the source of accreted material, other than the immediate stellar neighbourhood. Early results suggested that the velocity of Si III and Si IV lines differed considerably from the established radial velocity, and that these lines were therefore of a circumstellar origin (Bruhweiler & Kondo 1983). It was also suggested that the absence of any observable Si II features in IUE data, which models predicted would be present, represented further evidence for the non-photospheric nature of the heavy elements. However, Barstow et al. (1996) demonstrated that the previous radial velocity measurement, obtained from the Balmer lines, was in error, and a revised value more consistent with the velocities found for the Si features was obtained. By using the latest NLTE models available at the time, Barstow et al. (1996) showed that the predicted abundance of photospheric Si would yield line strengths within the noise of the IUE spectrum, and hence the absence of Si II features did not require a non-photospheric solution. Chayer et al. (2000) report the first firm detection of heavy elements other than Si in the spectrum of GD 394, with spectra from the Far Ultraviolet Spectroscopic Explorer (FUSE) showing lines of Fe III and P V in the photosphere, as well as a large number of Si III and Si IV lines. Values of vISM = -7.28 ± 1.42 and vphot = 28.75 ± 0.91 are obtained in this work for the velocity of ISM and photospheric features respectively. As reported by , the inventory of photospheric lines is dominated by Si III, with additional contributions from the Si IV doublet. Holberg et al. also observe photospheric Al III lines at λλ1854.7159 and 1862.7900. However, C IV and N V are conspicuous only by their absence; co-addition of the data for these wavelengths reveals no trace of features. No record exists, in recent literature, of circumstellar features in the spectrum. The IUE data reveal no sign of C IV or N V, and GHRS data are available only for the ranges 1290 -1325Å and 1383 -1419Å, which do not include these ions. However, many Si lines are visible in the higher resolution GHRS data, and neither individual features or the co-added profiles of all visible Si lines show any form of asymmetry or other qualities which may indicate the presence of circumstellar features. GD 394 is therefore one of the few stars in this survey for which it can be conclusively stated that no shifted features exist, at the resolution of currently available data. GD 153 Another example of a star which may be modeled with a pure-H atmosphere, GD 153 is a frequently observed standard star. No obvious photospheric features are observed in the IUE spectrum of this star, although several ISM lines are recorded, indicating a value of vISM = -8.42 ± 2.68 km s −1 . This velocity agrees with that obtained by HBS. showed that Mg II (λ 4481) and excited Si II was present in the optical spectrum of this cool star. Since radiative levitation calculations predict that much higher temperatures are required before Mg can be suspended in the atmosphere, observable quantities of this species in EG 102 were interpreted as indicative of ongoing accretion, either from a low mass companion (for which no evidence was found) or from a diffuse interstellar cloud. Later, HBS analysed the IUE NEWSIPS spectrum of EG 102 and noted the presence of Al II and Al III lines at the photospheric velocity. 
This observation was also interpreted as being a product of ongoing accretion. Photospheric and ISM velocities determined for EG 102 in this work are in good agreement with those measured by HBS. EG 102 Data provided by Chayer et al. (1995) indicate an abundance of N(Al) = 10 −9 N(H) for a star with log g and T eff similar to those of EG 102. The Al abundance in EG 102 has been estimated by fitting data with a pure H model spectrum into which Al has been added, using the code SYNSPEC. After smoothing the output spectrum to the resolution of IUE, abundances of order N(Al) = 1 → 5 × 10 −8 N(H) are obtained. These figures are significantly higher than those suggested by Chayer et al., but the work of Chayer et al. does not adopt the self-consistent approach found in more recent work (e.g. ; , discussed in section 1), and a failure to accurately predict the Al abundance in EG 102 cannot be used to infer the presence of accretion processes in the star. We note that Zuckerman & Reid (1998) find significant quantities of Ca in EG 102 2 (Ca/H = 2.5 × 10 −7 , EW = 29mÅ for the λ 3933 Ca K line) using high resolution optical echelle spectra obtained with the Keck telescope; current theories of radiative levitation do not predict the Ca ion to be present in such a cool DA. Zuckerman & Reid also find significant quantities of Ca in both of the close white dwarf/red dwarf pairs in their survey, and suggest that the presence of heavy ions such as Ca in cool DA stars may be attributed to binarity. To date, no companion to EG 102 has been observed. Wolf 1346 The presence of Si in the photosphere of Wolf 1346 was revealed by Bruhweiler & Kondo (1982), but later questioned by Vennes et al. (1991), who noted discrepancies between the velocity of Si II lines and the ground-based photospheric velocity, and found that abundances determined from the Si II lines were inconsistent with the non-detection of Si III in the IUE spectra of this star -observations more suggestive of a circumstellar origin. The problem was resolved by Holberg et al. (1996), who revised the photospheric velocity and used the advanced non-LTE code TLUSTY to derive a Si abundance 0.5 dex lower than that of Vennes et al., confirming the photospheric nature of the Si lines. The velocities of features normally associated with the ISM appear to fall into two groups, at -16.19 ± 0.10 km s −1 and -7.67 ± 3.07 km s −1 . When considering the velocities of isolated features, these groups appear to contain separate species (O I, C II and S II at -16 km s −1 , and Si II at -7 km s −1 ). However, this segregation breaks down when those interstellar Si II features which are blended with photospheric features, are included. For example, the ISM component of the λ 1260.4221 Si II line, lies at -16.73 km s −1 . The photospheric lines (which are limited to Si II and Si III) show a considerable spread in velocity (from 19 to 33 km s −1 ), with the weighted mean being vphot = 24.32 ± 1.41 km s −1 . Given this range of velocities, the reality of the two ISM groupings is questionable. Data from HBS also show this grouping, though only four ISM lines are recorded (compared to ten in the current study). HBS treat these velocities as belonging to the same group, and give a weighted mean ISM velocity of -14.85 ± 1.50 km s −1 . For the current work, the analogous value is vISM = -15.38 ± 0.95 km s −1 . Thus, both vISM and vphot are found to agree with the values presented by HBS. Lines of Al III are clearly visible at the photospheric velocity. 
These features are not noted in previous studies of Wolf 1346 (Holberg et al. (1996), HBS). To estimate the Al abundance, the λ 1854, 1862 lines were reproduced by adding quantities of Al to a spectrum generated from a pure H+He non-LTE model (N(He) = 10 −8 N(H)), and the output smoothed to the resolution of IUE (∼ 0.2Å FWHM). An abundance of N(Al) = 2.2 × 10 −9 N(H) was found to match the observed 1862Å line, and is close to the value of ∼ 1 × 10 −9 N(H) implied by the results of Chayer et al.. A greater abundance (N(Al) = 6.0 × 10 −9 N(H)) was required to match the 1854Å feature; however, the region around this line exhibits unusual structure, possibly due to instrumental effects. The models used to determine these abundances are not stratified, and hence these values are likely to be revised when suitable stratified models become available. The Al lines and synthetic spectrum are illustrated in figure 16. DISCUSSION Of the twenty three stars considered in this survey, four were previously known to possess features from highly ionised species at non-photospheric velocities: Feige 24, REJ 0457-281, G191-B2B and REJ 1614-085. Four new DA white dwarfs may now be added to the list of those exhibiting unambiguous, highly ionised components at non-photospheric velocities: REJ 1738+665, Ton 021, REJ 0558-373, and WD 2218+706. A fifth object, REJ 2156-546, shows features which may also be interpreted as non-photospheric, although data of improved S/N are required to confirm this result. A weak blueshifted component in GD 659 is also suggested, though this is exceedingly faint, comparable with the structure of adjacent noise, and hence regarded as a tenuous identification until further data become available. Table 3 lists those ions appearing as circumstellar features. Observations of these features are restricted to resonance transitions, and are most common in the C IV λλ1548.202, 1550.774 doublet (the resonance lines are most strongly coupled to the stellar radiation field, and hence are more susceptible to radiative levitation). The velocities of interstellar, photospheric and non-photospheric features (if detected) have been presented in table 2. Of the eleven IUE spectra considered in this study, only one (REJ 0457-281) shows signs of highly ionised, nonphotospheric components. In contrast, six out of the twelve available STIS spectra reveal such features (with a further detection in one of the three available GHRS spectra). The low number of detections in IUE data is unsurprising given the low signal-to-noise ratio of the instrument, and the fact that its resolution (0.08Å at 1400Å) is equivalent to a velocity of 17 km s −1 (compared to ∼ 3.2 km s −1 with the STIS E140M grating, and ∼ 1.28 km s −1 in the E140H configuration). The non-detection of circumstellar components in IUE data therefore places an upper limit on the velocity of any shifted features, rather than proving absence. STIS data may yet reveal non-photospheric features in these objects. The need for consistently high resolution data with adequate S/N, covering all stars in follow-up studies is clear. Only the highest resolution STIS data were able to show secondary features in the ISM lines of G191-B2B, revealing a possible connection between the circumstellar components and the local ISM. These features were not resolved in the STIS E140M data. 
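The quoted velocity resolutions follow directly from Δv = c Δλ/λ; the wavelength steps used below for the STIS gratings are simply chosen to reproduce the figures quoted above, not taken from the instrument handbooks:

```python
C_KMS = 2.998e5  # speed of light, km/s

def delta_v(delta_lambda_A, lambda_A):
    """Velocity width (km/s) of a wavelength interval at a given wavelength."""
    return C_KMS * delta_lambda_A / lambda_A

print(delta_v(0.08, 1400.0))    # ~17 km/s: the IUE echelle figure quoted above
print(delta_v(0.015, 1400.0))   # ~3.2 km/s: step chosen to match the E140M figure
print(delta_v(0.006, 1400.0))   # ~1.3 km/s: step chosen to match the E140H figure
```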
However, all but one of the STIS spectra included in this survey were acquired with the E140M grating (PG1123+189 was observed in the higher resolution E140H mode, but the data do not cover the C IV or Si IV resonance lines). Hence, more stars in the current sample may possess highly ionised non-photospheric features, at velocity differentials too small for the existing data to resolve. Alternatively, some stars may possess very weak circumstellar features which are hidden within the noise of existing data. Only when STIS data (ideally from the E140H configuration and covering the important resonance lines) are available for all stars in this sample, will a more comprehensive study of the distribution of circumstellar features in white dwarfs be possible. Spatial distribution The distribution of survey stars is depicted in figure 17; no correlation between the presence of circumstellar features and the position of objects on the sky is apparent. Since the sample stars encompass a relatively wide range of distances (between approximately 14 and 436 pc), the absence of such a positional dependence is not surprising. ISM N(H I) column density Line-of-sight ISM N(H I) column densities may be estimated by fitting models to the observed ISM Ly-α profile (after removing the stellar contribution to line+continuum using a stellar model), as demonstrated by Barstow et al. (1994a) and Holberg et al. (1999a). Alternatively, the H I column density may be determined from EUVE data using the strong continuum absorption below the 912Å Lyman edge. In this case, the column density is included as a free parameter in model spectra, which are fitted to the data using χ 2 reduction techniques (e.g. Barstow et al. 1999;Holberg et al. 1999a). These methods provide the average column density, and are insensitive to "clumping" along the line of sight. As an extension to this simplification, it may be assumed that, ceteris paribus, a more distant star along the same or similar line of sight will be observed through an intervening column, N(H I), of greater density. The figure of interest is therefore the volume density n(H I) along the ISM column, where, in the general case, and where s is the distance to the star. For a homogeneous column, n(H I) ≈ N (H I)/s. Previously determined column densities are available for many of the sample stars, while approximations may be made for others using the synthesis maps of Frisch & York (1983), and the contour maps of Paresce (1984). These columns are listed in table 4, showing that a relationship between circumstellar features, and the average density of interstellar material along the line of sight, is not observed. For example, both REJ 1738+665 and REJ 0457-281 show circumstellar features, yet their line of sight ISM volume densities are significantly lower than objects without such features. This result is not surprising. Dupree & Raymond (1983) find that for a DA white dwarf with T eff = 60,000K and log g = 8.0, the Strömgren radius ranges from 0.07 pc (for n(H)=10 2 cm −3 ) to 30.8 pc (n(H)=0.01 cm −3 ). Although their work is now dated (the heavy element features in white dwarf spectra are attributed to the ionisation of circumstellar material), these figures still provide a useful order of magnitude estimate for the sphere of influence of the white dwarf. 
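Two quantities used in this comparison can be sketched directly: the mean volume density for a homogeneous column, n(H I) ≈ N(H I)/s, and the scaling of the Dupree & Raymond (1983) Strömgren radius with ambient density (their two quoted figures are consistent with the R_S ∝ n −2/3 scaling expected for a fixed ionising photon output). The helpers below use their numbers purely as anchor points and are illustrative rather than taken from this paper:

```python
PC_CM = 3.086e18  # one parsec in cm

def mean_volume_density(N_HI_cm2, distance_pc):
    """Mean n(H I) in cm^-3, assuming a homogeneous line-of-sight column."""
    return N_HI_cm2 / (distance_pc * PC_CM)

def stromgren_radius_pc(n_H, r_ref_pc=0.07, n_ref=100.0):
    """Scale the Dupree & Raymond (1983) value (0.07 pc at n(H) = 100 cm^-3)
    as R_S ~ n^(-2/3), i.e. for a fixed ionising photon output."""
    return r_ref_pc * (n_H / n_ref) ** (-2.0 / 3.0)

print(mean_volume_density(1.3e17, 100.0))  # the 1.3e17 cm^-2 column of
                                           # REJ 0457-281 over an assumed 100 pc
print(stromgren_radius_pc(0.01))           # ~32 pc, close to the quoted 30.8 pc
```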
It could also be argued that the quoted Strömgren radii represent upper limits, since the extra opacity from photospheric heavy elements would be expected to reduce the intensity of the radiation field at specific wavelengths. In either case, the Strömgren radius is typically a small fraction of the distance to the star, and since the distribution of material along the line of sight is unlikely to be homogeneous, the observed average value of n(H I) may not reflect conditions within the Strömgren sphere. The velocity of circumstellar features and the ISM For a white dwarf which is ionising nearby interstellar material, highly ionised non-photospheric components may be observed at velocities similar to those of the intervening ISM. Depending on the relative velocities of star and ISM, this mechanism will produce both red- and blue-shifted features. The majority of stars possessing circumstellar components show little agreement between vcirc and vISM (table 4 lists the H I column and volume densities for the survey stars, obtained from a variety of previous studies; data from Frisch & York (1983) and Paresce (1984) should be interpreted as broad estimations only, and no data are available for REJ 0948+534 or EG 102). This result is unsurprising, since observed ISM absorption features may be blends of several unresolved ISM components at the resolution of the current data, and hence the dominant component (typically the LISM) may not be that in which the star is immersed. Nevertheless, it is useful to consider the residual value of |vISM - vcirc| (figure 18). These data show that in the majority of objects possessing circumstellar features, the difference in velocity between these and the primary interstellar cloud is typically 10 km s −1 or less (exceptions being REJ 1614-085 and GD 659). The most interesting cases are found in REJ 1738+665 and G191-B2B, where excellent agreement is found between vISM and vcirc, suggesting that the shifted features arise in a Strömgren sphere of material belonging to the primary identified ISM component. More detailed investigations are required before this hypothesis can be confirmed, and a detailed analysis of the ion populations and spectral characteristics of such a system is essential. The case of G191-B2B also acts as a caution against over-interpretation of these results; no correlation between vISM and vcirc would have been recognised if not for the higher resolution data discussed by Sahu et al. (1999). Similar correlations with hitherto undetected ISM components may be found in future studies based on E140H grating spectra. Despite the relatively narrow wavelength coverage available with this instrument, the current results justify a comprehensive program of E140H white dwarf observations, possibly tuned to a bandpass covering the C IV resonance doublet, which is most frequently accompanied by shifted features. Metallicity and mass loss MacDonald (1992) discusses the interaction between the flow of ISM material around a white dwarf star, and the weak stellar wind. In this work, the rate of mass loss, Ṁ, from the white dwarf is estimated using theory developed by Abbott (1982), in which the mass loss rate is a function of the metallicity Z (relative to solar abundances) of a star with luminosity L and mass M; this relation is referred to below as equation 2. However, the work of Abbott is concerned with the envelopes of O- to G-type stars, and results are found to be most successful for OB stars. Conversely, the theory does not explain mass loss rates in Wolf-Rayet stars, which are somewhat different in structure.
Further, since the theory of Abbott is concerned with main sequence objects, the metallicity parameter is formulated in terms of solar abundances, and cannot be applied directly to the broad range of heavy element compositions exhibited in white dwarfs. Hence the use of equation 2 in the current work is not entirely appropriate. Nevertheless, it is interesting to compare the relative mass loss rates of sample objects calculated using equation 2. Individual abundances have been calculated by . These values were determined by matching observational data to a synthetic spectrum calculated using SYNSPEC, based on a model of appropriate T eff and log g generated by the non-LTE code TLUSTY. This information is available for all objects in the sample except PG 0948+534 (table 5). For each star, the metallicity parameter, Z, is calculated using the expression where A * ,⊙ is the abundance of the element of atomic number z relative to hydrogen in the star and in the Sun 3 respectively. Only "metals" (elements heavier than He) are included. Note that although equation 3 should be evaluated for all elements heavier than He, only clearly identifiable species as presented in table 5, have been considered here. Stellar luminosities quoted in table 5 have been calculated from the familiar expression where M⊙ represents the absolute bolometric magnitude of the Sun (+4.7), or the star (calculated using the stellar models of Wood 1995). No clear correlation is found betweenṀ and the presence of circumstellar features, although loss rates extend to lower values for objects without these features ( figure 19). The lowest values are found in the coolest stars, as expected given the dependence of equation 2 on luminosity and metallicity. However, GD 246 (T eff = 53,700K) is significantly hotter than REJ 1614-085 (T eff = 38,500K) despite having an appreciably lower calculated loss rate. While the subset of stars which exhibit non-photospheric components lacks any object withṀ < 3 × 10 −15 M⊙ yr −1 , (compared to a minimum value of 1.9 × 10 −21 M⊙ yr −1 for those with no circumstellar components), the number of objects in this survey is insufficient to determine the authenticity of a lower limit to the mass loss rates in stars showing highly ionised, non-photospheric components. Gravitational Redshift The apparent velocity of absorption features formed in the white dwarf atmosphere will be affected by the radial velocity of the star, and by gravitational redshifting. The velocity change due to gravitation will be lower in features which are formed in material further from the stellar surface (e.g. a circumstellar cloud), and will be effectively zero for a cloud with a sufficiently large inner radius. The gravitational redshift at the stellar surface therefore defines a range of velocities, with respect to the apparent photospheric value, at which highly ionised non-photospheric features may be attributed to material residing within the gravitational well of the star. Gravitational redshifts can be measured directly in binary systems such as Feige 24 (Dupree & Raymond 1982;Vennes & Thorstensen 1994), but for isolated systems, the velocity component due to gravitation redshift, vgrav, can be estimated using the standard formula, where G is the universal constant of gravitation, M is the stellar mass, R is the radius (of the star, or of the circumstellar cloud), c is the speed of light, and λ obs,rest are the observed and rest wavelengths of the absorption line, respectively. 
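The standard formula in question is presumably the usual weak-field expression, given here as an assumed reconstruction that is consistent with the numerical coefficient quoted in the next sentence: vgrav = c (λobs − λrest)/λrest = GM/(Rc) ≈ 636 (M/M⊙)/(R/R⊙) m s −1 . Likewise, the "familiar expression" used for the luminosities is presumably the bolometric magnitude relation L/L⊙ = 10^{0.4(M⊙ − M*)}, with M⊙ and M* the absolute bolometric magnitudes of the Sun and of the star.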
Substituting values for the physical constants, the expression for vgrav becomes simply vgrav ≈ 6.36 × 10 2 (M * /R * ) m s −1 , where solar units are to be used for the white dwarf mass and radius. In the work which follows, estimates using this expression are used to identify features which could arise from material residing well within the gravitational potential of an object. Values for vgrav, calculated using equation 6, are included in table 2. The presence of material in the gravitational potential well of a star provides no explanation for objects in which material appears at a velocity redshifted with respect to the photospheric value, as in the case of WD 2218+706. (Table 5 lists the photospheric heavy element abundances of the surveyed stars, determined from far-UV spectroscopy; no data are available for PG 0948+534. Gaps in the table for those stars up to and including REJ 0457-281 are mostly due to the absence of data for a particular spectral range rather than a true absence of the element itself, while for the remaining, cooler stars, gaps in the table reflect genuine absence of these species at abundances detectable by IUE or STIS.) The mechanism may also be ruled out for three further stars, REJ 1738+665, Ton 021 and REJ 0457-281, in which the calculated gravitational redshift is substantially lower than required to explain the velocity of non-photospheric components. For the three remaining stars, the gravitational redshift is comparable to the velocity difference between photospheric and circumstellar components. However, if the calculated values of vgrav are assumed to be reasonable, only circumstellar features in REJ 0558-373 may be explained by the presence of matter inside the gravitational well (at approximately 5 stellar radii from the surface). In the cases of G191-B2B and GD 659, the non-photospheric components differ from vphot by an amount equal to, or slightly greater than, the value of vgrav. This suggests that the material lies at a radius greater than that at which gravitational redshifting produces observable changes in line velocity. It is therefore apparent that, with the possible exception of REJ 0558-373, blueshifted features in the spectra of objects in this study cannot be explained by the presence of material within the gravitational potential well of the star. Non-photospheric material and its relation to planetary nebulae The existence of an old planetary nebula (PN) around two of the survey stars has already been addressed in the cases of REJ 1738+665 and WD 2218+706. Mass loss and the production of a PN, as a white dwarf progenitor leaves the AGB, are relatively well accepted processes (though the precise details are certainly not (Langill et al. 1994)). It is therefore pertinent to ask whether the presence of non-photospheric, highly ionised features around other DA stars at the hotter end of the sample (i.e. those for which the nebula material may not be completely dispersed), is consistent with material from a (now ancient) PN around the star. Expansion velocities for PNe are available in the literature (e.g. Weinberger 1989). The study by Napiwotzki & Schönberner (1995) is particularly relevant, since it deals specifically with old planetaries, with central stars in the advanced stages of transition into the white dwarf area of the H-R diagram.
Since even the hottest stars in the current study are more highly evolved than those covered by Napiwotzki & Schönberner, average temperatures are cooler, and any nebular material presumably of lower density and more widely dispersed. Direct detection of nebular emission around these stars is therefore difficult, excepting the two cases previously highlighted. Further, in the case of the older, cooler stars in the current work (e.g. REJ 2156-546 and REJ 1614-085), the planetary nebula must have dispersed long ago, and is therefore unlikely to offer an explanation for the existence of highly ionised, non-photospheric features. Nevertheless, the possibility that some circumstellar features are of this origin may be assessed by comparison with the expansion velocities noted by Napiwotzki & Schönberner. Figure 20 shows the distribution of expansion velocity vexp, plotted against nebula radius, for the objects investigated by Napiwotzki & Schönberner (1995) (circles), together with vphot - vcirc for stars in the current work (triangles; no values for radius are available for these). No data are available for the radius of any PNe which may surround stars in the current survey; however, an analogous velocity can be derived in the form of the difference between the velocity of circumstellar and photospheric features, as listed in table 2. Note that WD 2218+706 (DeHt5) is common to both studies, and in this case the true value of vexp, as determined by Napiwotzki & Schönberner, is considerably different to the negative velocity, relative to the photospheric value, found in the current work. It is therefore clear that the non-photospheric features of WD 2218+706 discussed in the current study are not produced by the observed planetary nebula (though the nebula cannot be discounted as a source of this material). We note that the observation of features which are redshifted with respect to the photosphere is not incompatible with the nebula hypothesis; for example, Tweedy & Napiwotzki (1994) discuss the planetary nebula Sh 2-174 and the white dwarf GD 561 observed on one edge of this nebula. The objects are found at similar distances, and [O III] emission in the nebula is located immediately adjacent to the white dwarf. These observations, the statistical improbability of GD 561 being an isolated hot white dwarf which happens to be wandering through the nebula, and the difficulty in explaining the existence of a small nebula other than being of PN origin, confirm that GD 561 is indeed the source of the planetary nebula. The distinctly non-spherical morphology of Sh 2-174 is far from unique in studies of old PNe (Tweedy & Napiwotzki and references therein). Gross asymmetries are attributed to interaction between the nebular material and the surrounding ISM through which it moves. Given the location of GD 561 relative to the central region of Sh 2-174, it is clear that the PN material may produce spectral features which are either blue- or red-shifted with respect to the photosphere, depending on the angle from which the system is viewed. For all but two of the stars in this work, vphot - vcirc is of a similar order of magnitude as the expansion velocity typical of old PNe. Two stars (WD 2218+706 and REJ 2156-546) are shown with the negative values of vcirc discussed above, but the absolute values, |vphot - vcirc|, are consistent with the remainder of the sample.
In contrast, radiatively driven winds from the surface of a white dwarf should be of a similar order to the stellar escape velocity, i.e. ∼ 1000 km s −1 or more (MacDonald 1992). These results may indicate some form of link between the origin of the non-photospheric absorbing matter, and the old, dispersed PN material surrounding these stars. The results of this comparison suggest that further work on the link between highly ionised non-photospheric features, and planetary nebulae is justified. Although this hypothesis appears to contradict the suggestion that shifted features may be related to interstellar material within the Strömgren sphere of a white dwarf, both mechanisms may operate in different objects, with Strömgren spheres as the dominant source for cooler objects. Correlating typical PNe densities with the column densities derived from curve of growth analysis would provide further evidence of a relationship between these two apparently separate entities. This comparison is complicated by the considerable difficulty in estimating the masses of PNe, arising from uncertainties in the distances to nebulae, and from the ongoing debate as to whether these objects are typically ionisation or mass bounded. SUMMARY We have described the detection and interpretation of highly ionised absorption features at non-photospheric velocities, in high resolution UV spectra of hot DA white dwarfs. These features may be indicative of accretion or mass loss in white dwarfs -processes which may explain the non-equilibrium abundances, compared with the predictions of radiative levitation theory, observed in many objects. Four of the stars in the sample were previously known to show non-photospheric features: Feige 24, REJ 0457-281, G191-B2B and REJ 1614-085. This work has revealed at least four new objects with multiple components in one or more of the principal resonance lines: REJ 1738+665, Ton 021, REJ 0558-373 and WD 2218+706. A fifth object, REJ 2156-546 also shows some evidence of multiple components, though further observations will be required for their reality to be confirmed. Several possible mechanisms for the formation of these features have been discussed. The presence of material within the gravitational potential well of a white dwarf is found to be an unsatisfactory explanation for the production of these features. Predicted mass loss rates based on the luminosity and metallicity of stars show no correlation with the presence of shifted features. However, these mass loss rates are calculated using theories developed for main sequence stars, and may be inappropriate for application to highly evolved objects. Further, the quantification of metallicity is a highly subjective measurement, and is likely to be a major source of uncertainty in these calculations. A possible correlation is observed between the velocity of shifted features and that of the ISM. This is particularly obvious in REJ 1738+665 and G191-B2B, which show very close matches between vISM and vcirc. For most of the remaining stars, the difference between these velocities is less than 10 km s −1 . Higher resolution observations are required to detect the presence of multiple ISM velocity components, which may reveal further correlations -as demonstrated by the case of G191-B2B. An alternative or additional source of shifted features may be found in planetary nebulae. 
Velocities of shifted features with respect to the photosphere in this study are found to be entirely consistent with the expansion velocities typical of old PNe. By appealing to the irregular morphology of highly evolved nebulae, both blueshifted and redshifted features may be explained. Detailed modelling of the interaction between the white dwarf and surrounding material should determine whether stellar radiation alone is sufficient to produce the observed ionisation, or whether additional excitation (perhaps in the form of shock-heating) is required (Napiwotzki, private communication).

The non-detection of highly ionised non-photospheric features in many of the stars investigated may indicate their absence, but equally, may reflect the limited resolution and signal-to-noise ratio of available data. This is particularly important when considering non-detections in IUE data, where velocity differentials of less than 17 km s−1 between photospheric and shifted components are below the resolution limit of the instrument, and where the S/N ratio is inferior to that of more modern instruments such as STIS. This highlights the importance of acquiring consistently high resolution data for all stars in this and future samples. Four (and possibly five) new identifications of circumstellar features have been made using medium resolution (E140M) STIS spectra; these successes demonstrate the value in high resolution studies of this type, and justify an extension of the program to include higher resolution data of improved S/N for an expanded sample of objects.

ACKNOWLEDGEMENTS

NPB and MAB were supported by PPARC. JBH wishes to acknowledge support for this work provided by NASA through grants GO-7296 and AR-9202 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, incorporated under NASA contract NAS 5-26555. We thank Cherie Miskey (Institute for Astrophysics and Computational Sciences) for assistance in processing STIS data, and Ralf Napiwotzki (Dr. Remeis-Sternwarte Bamberg, Astronomical Institute of the University of Erlangen-Nürnberg) for useful discussions relating to planetary nebulae and their interaction with white dwarf stars.
The Colors of Extreme Outer Solar System Objects (Abridged)

Thirty-three objects with possible origins beyond the Kuiper Belt edge, very high inclinations, very large semi-major axes or large perihelion distances were observed to determine their surface colors. All three objects that have been dynamically linked to the inner Oort cloud (Sedna, 2006 SQ372, and 2000 OO67) were found to have ultra-red surfaces (S~25). Ultra-red material is generally associated with rich organics and the low inclination "cold" classical Kuiper Belt objects. The observations detailed here show very red material may be a more general feature for objects kept far from the Sun. The recently discovered retrograde outer Solar System objects (2008 KV42 and 2008 YB3) and the high inclination object (127546) 2002 XU93 show only moderately red surfaces (S~9), very similar to known comets. The extended or detached disk objects, which have large perihelion distances and large eccentricities, are found to have mostly moderately red colors (10<S<18). The colors of the detached disk objects, including the dynamically unusual 2004 XR190 and 2000 CR105, are similar to the scattered disk and Plutino populations. Thus the detached and scattered disk likely have a similar mix of objects from the same source regions. Outer classical belt objects, including 1995 TL8, were found to have very red surfaces (18<S<30). The "cold" classical belt, outer classical belt and inner Oort cloud appear to be dominated by ultra-red objects (S>25) and thus don't likely have a similar mix of objects as the scattered disk, detached disk and Trojan populations. A possible trend was found for the detached disk and outer classical belt in that objects with smaller eccentricities have redder surfaces irrespective of inclinations or perihelion distances. There is also a clear trend that objects more distant appear redder.

Introduction

The dynamical and physical properties of small bodies in our Solar System offer one of the few constraints on the formation, evolution and migration of the planets. The Kuiper Belt has been found to be dynamically structured with several observed dynamical classes (Trujillo et al. 2001; Kavelaars et al. 2008, 2009) (see Figure 1). Classical Kuiper Belt Objects (KBOs) have semi-major axes 42 ≲ a ≲ 48 AU with moderate eccentricities (e ∼ 0.1) and inclinations. These objects may be regarded as the population originally predicted for the Kuiper Belt, but their relatively large eccentricities and inclinations were unexpected. The dynamics of the classical KBOs have shown that the outer Solar System has been highly modified through the evolution of the Solar System (Petit et al. 1999; Ida et al. 2000; Morbidelli and Levison 2004; Gomes et al. 2005). Resonant KBOs are in mean motion resonances with Neptune and generally have higher eccentricities and inclinations than classical KBOs. These objects, which include Pluto and the Plutinos in the 3:2 resonance, were likely captured into these resonances from the outward migration of Neptune (Malhotra 1995; Hahn and Malhotra 2005; Levison et al. 2008). Scattered disk objects have large eccentricities with perihelia near the orbit of Neptune (q ∼ 25-35 AU). The scattered disk objects are likely to have been scattered into their current orbits through interactions with Neptune (Duncan 2008a; Gomes et al. 2008). A new class of outer Solar System object, called the extended or detached disk (Figure 1), has only recently been recognized (Gladman et al. 2002; Emel'yanenko et al. 2003; Morbidelli & Levison 2004; Allen et al. 2006).
To date only a few detached disk objects are known. The detached disk objects have large eccentricities, but unlike the scattered disk objects the detached disk objects have perihelia q ≳ 38 AU, which do not appear to be directly caused by Neptune interactions alone (Gladman and Collin 2006; Levison et al. 2008). Though unexpected, the discovery of these detached disk objects has given us a new understanding of our Solar System's chaotic history. A few objects have been found that have very large semi-major axes and eccentricities (Sedna, 2006 SQ372, and 2000 OO67). Through dynamical simulations these objects are best described as coming from the inner Oort cloud (Brown et al. 2004; Kenyon and Bromley 2004; Morbidelli and Levison 2004). Two objects have also been found to have retrograde orbits in the outer Solar System (2008 KV42 and 2008 YB3). These two retrograde objects, along with the very high inclination object 2002 XU93 (i ∼ 78 degrees), could be from the outer Oort cloud or a possibly yet to be discovered high inclination source region. Some Trans-Neptunian Objects (TNOs) have likely not experienced significant thermal evolution since their formation. The amount of thermal evolution depends on how close the object formed to the Sun and how close it has approached the Sun during its lifetime (Meech et al. 2009). The objects in the Kuiper belt dynamical classes had varied histories with some experiencing very little thermal evolution, making them some of the most primitive bodies in the Solar System. Optical observations of TNOs and Centaurs have shown some of these objects have the reddest material known in the Solar System (Figure 2) (Jewitt and Luu 2001; Peixinho et al. 2004; Doressoundiram et al. 2008; Tegler et al. 2008). This ultra-red material is currently thought to be rich in organic material (Gradie and Veverka 1980; Vilas and Smith 1985; Cruikshank et al. 2005; de Bergh et al. 2008). The ultra-red color may be from Triton tholins and ice tholins, which can be produced by bombarding simple organic ice mixtures with ionizing radiation (Barucci et al. 2005a; Emery et al. 2007; Barucci et al. 2008). Interestingly, short-period comets that are believed to have originated from the Kuiper Belt don't show this ultra-red material (Figure 2) (Jewitt 2002). The reason is because comet surfaces have been highly processed from their relatively close passages to the Sun (Jewitt 2002; Grundy 2009). This demonstrates that the surfaces of comets are not reliable for understanding the original compositions of the comets. Some Centaurs, which are the precursors to the short-period comets, do show these ultra-red colors, probably because they have not yet been near the Sun for a long enough time to have their surfaces highly modified from thermal, sublimation or evaporation processes. No long period comets from the Oort cloud have been sufficiently observed before any significant heating would have taken place on their surfaces. Thus we don't have a good knowledge of what color an Oort cloud comet may have been before it started to thermally evolve (Meech et al. 2009). There have been one or possibly two subsets of TNOs that appear to be dominated by the ultra-red material (Figure 2). First are the low inclination "cold" classical Kuiper Belt objects that also have large perihelions (Tegler and Romanishin 2000; Trujillo and Brown 2002; Doressoundiram et al. 2005; Gulbis et al. 2006; Fulchignoni et al. 2008).
These objects likely formed in the more distant Solar System, unlike the higher inclination KBOs, which may have formed closer to the Sun and were transported to and captured in the Kuiper Belt during the planet migration process (Levison and Morbidelli 2003; Gomes 2003; Levison et al. 2008). Sedna, an object well beyond the Kuiper Belt edge at 50 AU (Jewitt et al. 1998; Allen et al. 2001), also has an ultra-red color and could be a new class of object, possibly from the inner Oort cloud (Brown et al. 2004; Morbidelli and Levison 2004; Kenyon and Bromley 2004; Brasser et al. 2006; Barucci et al. 2005b). Some previous works (Tegler and Romanishin 2000; Trujillo and Brown 2002; Doressoundiram et al. 2005) have noted that objects with larger perihelion distances tend to have redder surfaces, but most of the ultra-red objects observed were in the main classical Kuiper belt. No systematic survey of the colors of the large perihelion detached disk population has been performed to date. In this work the optical colors were observed for most of the known detached disk objects, possible inner Oort cloud objects and other outer Solar System objects that exhibit extreme orbits in terms of their inclination, semi-major axis or perihelion distance. Understanding any color trends or correlations, in particular the ultra-red material, will constrain where these extreme objects may have formed in the Solar System and thus how they may have ended up on their current orbits. This in turn will allow us to determine how the planets may have migrated and what amount of this ultra-red organic rich material may have been incorporated into the planets.

Observations and Analysis

Observations of the outer Solar System objects presented in this work were obtained with the twin Magellan Baade and Clay 6.5 meter telescopes at Las Campanas, Chile and the 8.2 meter Subaru telescope atop Mauna Kea in Hawaii. Table 1 shows the various observational circumstances for the 33 objects observed. The LDSS3 camera on the Clay telescope was used on the nights of November 2 and 3, 2005; May 7 and 8, 2008; January 28, 2009; August 25 and. LDSS3 is a CCD imager with one STA0500A 4064 × 4064 CCD and 15 µm pixels. The field of view is about 8.3 arcminutes in diameter with a scale of 0.189 arcseconds per pixel. The IMACS camera on the Baade telescope was used on the nights of October 19, 2008 and December 3, 2008. IMACS is a wide-field CCD imager that has eight 2048 × 4096 pixel CCDs with a pixel scale of 0.20 arcseconds per pixel. The eight CCDs are arranged in a box pattern with four above and four below and about 12 arcsecond gaps between chips. Only chip 2, which is just North and West of the camera center, was used in the IMACS color analysis. The Suprime-Cam imager on the Subaru telescope was used on the night of October 15, 2009. Suprime-Cam is a wide-field CCD imager that has ten 2048 × 4096 pixel CCDs with a pixel scale of 0.20 arcseconds per pixel (Miyazaki et al. 2002). The ten CCDs are arranged in a 5 × 2 box pattern similar to the IMACS imager. Only chip 5, which is just West of the camera center, was used in the Suprime-Cam color analysis. Dithered twilight flat fields and biases were used to reduce each image. Images were acquired through either the Sloan g', r' or i' filter while the telescope was auto-guiding at sidereal rates using nearby bright stars. Exposure times were between 300 and 450 seconds. Southern Sloan standard stars were used to photometrically calibrate the data (Smith et al. 2005).
In order to more directly compare our results with previous works, the Sloan colors were converted to the Johnson-Morgan-Cousins BVRI color system using transfer equations from Smith et al. (2002). To verify the color transformation, the known ultra-red (44594) 1999 OX3 and grey (19308) 1996 TO66 TNOs were observed (Tegler and Romanishin 1998, 2000; Jewitt and Luu 2001; Barucci et al. 2005a). The BVRI photometric results are shown in Table 2.

Photometry was performed by optimizing the signal-to-noise ratio of the faint small outer Solar System objects. Aperture correction photometry was done by using a small aperture on the TNOs (0.″57 to 0.″95 in radius) and both the same small aperture and a large aperture (2.″46 to 3.″40 in radius) on several nearby unsaturated bright field stars with similar Point Spread Functions (PSFs). The magnitude within the small aperture used for the TNOs was corrected by determining the correction from the small to the large aperture using the PSF of the field stars (cf. Tegler and Romanishin 2000; Jewitt and Luu 2001). For a few of the brighter objects (Sedna, 2003 FY128, 2007 JJ43, 2008 YB3) both small apertures and the full large apertures were used on the TNOs to confirm both techniques obtained similar results.

Results

The orbital parameters of the 33 outer Solar System objects observed in this work are shown in Table 4. There were three main classes of objects in the observation sample: 1) objects dynamically linked to the inner Oort cloud, 2) outer Solar System retrograde and high inclination objects and 3) extended or detached disk and outer classical belt objects. Each class is discussed in the subsections below. In addition, the well measured grey object (19308) 1996 TO66 that is part of the Haumea KBO collisional family and the ultra-red object (44594) 1999 OX3 were observed to confirm the photometry is consistent with previous works. As can be seen in Figure 2, all objects observed appear to have correlated broad band optical colors. In other words, the objects appear to follow a nearly linear red slope in the optical. This has also been confirmed through spectroscopy and correlation analysis on other TNOs. Because of the near linearity in the optical colors we can obtain the spectral gradient, S, of the objects using two unique optical broad band filters. The spectral gradient is basically a very low resolution spectrum of an object and is usually expressed in percent of reddening per 100 nm in wavelength. We follow Doressoundiram et al. (2008) and express the spectral gradient as S(λ2 > λ1) = (F_2,V − F_1,V)/(λ2 − λ1), where λ1 and λ2 are the central wavelengths of the two filters used for the calculation and F_1,V and F_2,V are the fluxes of the object in the two filters normalized to the V-band filter. S is the measure of the reddening of an object's surface determined between two wavelength measurements (two different filters). We determined the spectral gradient of the observed objects using the g' and i' filters, which have well separated central wavelengths of 481.3 and 773.2 nm respectively. The spectral gradient results for the observed objects are shown in Table 3 and the spectral gradients determined for known small body populations in the Solar System are shown in Table 5. Ultra-red color is here defined as including the reddest 90% of the measured low inclination classical KBOs (ultra-red: S ≳ 25, B-R ≳ 1.6, V-I ≳ 1.2, B-I ≳ 2.2, V-R ≳ 0.6, R-I ≳ 0.6, and using Sloan colors g'-i' ≳ 1.2, g'-r' ≳ 0.8, and r'-i' ≳ 0.4 magnitudes).
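The spectral-gradient calculation just described can be condensed into a short script. The sketch below is ours, not the authors' code: it normalizes the reflectance at g' rather than at the V band (a simplification of the convention quoted above), and the adopted solar g'-i' color is an assumed illustrative value.

```python
LAMBDA_G_NM = 481.3    # central wavelength of g' (from the text)
LAMBDA_I_NM = 773.2    # central wavelength of i' (from the text)
SUN_G_MINUS_I = 0.55   # assumed solar g'-i' color, illustrative only

def spectral_gradient(g_minus_i_obj, g_minus_i_sun=SUN_G_MINUS_I):
    """Spectral gradient S in percent of reddening per 100 nm between g' and i',
    with the relative reflectance normalized to 1 at the g' wavelength."""
    # Reflectance ratio F_i / F_g from the (object - Sun) color difference.
    f_ratio = 10.0 ** (0.4 * (g_minus_i_obj - g_minus_i_sun))
    # Slope of the reflectance between the two wavelengths, in percent per 100 nm.
    return (f_ratio - 1.0) / (LAMBDA_I_NM - LAMBDA_G_NM) * 100.0 * 100.0

# Hypothetical examples: a moderately red and an ultra-red object
for g_i in (0.9, 1.3):
    print(f"g'-i' = {g_i:.1f} mag  ->  S ~ {spectral_gradient(g_i):.1f} %/100 nm")
```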
Inner Oort Cloud Objects

The Oort cloud is believed to have formed from the scattering of planetesimals from the giant planet region during early planet formation and is usually separated into two parts (Oort 1950; Stern 2003; Leto 2008; Brasser 2008). The inner Oort cloud is within a few thousand to ten thousand AU and is fairly stable to Galactic tides and passing star perturbations, unlike the outer Oort cloud at several tens of thousands of AU. While the short period comets are all likely from the Kuiper Belt region's scattered disk population (Duncan et al. 2004; Levison et al. 2006), the long period comets are believed to be from the Oort cloud (Kaib and Quinn 2009). All the known long period comets have perihelia within about 10 AU of the Sun. The surfaces of the long period comets have already been highly altered before they are first observed because of the thermal and sublimation processes that occur as they approach the Sun (Meech et al. 2009). Sedna was the first object suggested to be part of the inner Oort cloud (Brown et al. 2004). Recent dynamical simulations have suggested that the relatively large perihelia and semi-major axes of 2006 SQ372 and 2000 OO67 also make them likely objects from the inner Oort cloud. The three inner Oort cloud objects Sedna, 2006 SQ372 and 2000 OO67 could be the first objects from the inner Oort cloud region that we have observed with thermally unaltered surfaces. These inner Oort cloud objects are likely to have formed in a different location than many of the Kuiper Belt objects. The observations obtained of these three possible inner Oort cloud objects in this work show all to be among the reddest objects observed in this sample. Their surfaces are of ultra-red material (S ≳ 25). Though all three having ultra-red material is a promising trend, more inner Oort cloud type objects need to be discovered (see Schwamb et al. 2009) in order to confirm a strong significant (3σ) color correlation for inner Oort cloud objects and ultra-red material. The spectral gradients of the possible inner Oort cloud objects are very similar to the red lobe of the Centaur distribution, the low inclination classical KBOs and outer classical belt KBOs (Table 5). As discussed in the introduction, ultra-red material is likely rich in organic material.

Retrograde and High Inclination Objects

Until recently all known retrograde objects had perihelia within the inner Solar System. In the last year two objects have been discovered with retrograde orbits and perihelia in the giant planet region. Neither shows any current evidence of cometary activity. 2008 YB3 has a perihelion around 6.5 AU and thus is likely to have undergone surface sublimation and interior recrystallization during its lifetime (Meech et al. 2009). 2008 KV42 has a perihelion of about 21 AU and thus the amount of surface alteration of this object could be significantly less than other retrograde objects and comet type objects. Gladman et al. (2009) simulated the orbit of 2008 KV42 and found its perihelion distance likely has not been interior to Saturn over the age of the Solar System. It is unknown where 2008 KV42 came from but its orbit is similar to Halley's comet, and thus it could have come from the Oort cloud or another yet to be discovered high inclination reservoir. The observations obtained of these two outer Solar System retrograde objects and the similar high inclination object (127546) 2002 XU93 show all to have only moderately red surfaces (S ∼ 9).
Their spectral gradients are similar to the known comets, extinct comet objects, Jupiter Trojans, Neptune Trojans, irregular satellites and damocloids (Table 5). This suggests the outer retrograde and high inclination object surfaces have been thermally altered over the age of the Solar System, as is expected for these other similarly moderately red colored volatile rich objects.

Extended/Detached Disk and Outer Classical Belt Objects

Objects with large semi-major axes and perihelion distances have only recently been discovered (Gladman et al. 2002). Knowledge of the physical properties of these dynamically interesting objects is important to constrain their origins and evolution. Detached disk objects are considered to have moderate to large eccentricities (e > 0.2-0.25), large perihelion distances (q ≳ 38 AU) and large semi-major axes (50 ≲ a ≲ 500 AU) (Elliot et al. 2005; Lykawka and Mukai 2007a; Gladman et al. 2008). Detached disk objects are somewhat decoupled from the giant planet region yet have been considerably influenced dynamically to obtain their relatively large eccentricities. The objects in the detached disk can thus be considered intermediate between the Kuiper Belt and the inner Oort cloud. Objects with dynamics closely related to the detached disk are the outer classical belt population. The outer classical belt objects have a > 48.4 AU, e < 0.25 and are non-resonant. Objects with 39.4 < a < 48.4 AU and e < 0.25 are considered main classical belt objects or cubewanos. The 2:1 Neptune resonance separates the main classical belt from the outer classical belt. In this work most of the known detached disk and outer classical belt objects were observed to determine their optical colors for the first time, in order to compare them to other Solar System small body reservoirs. In particular, determining if these populations are dominated by ultra-red material allows important constraints to be placed on the origin and evolution of these populations. The colors of the detached disk objects do not appear to be extraordinary (Figure 2). Except for one ultra-red detached disk object, the rest show only moderately red colors (10 ≲ S ≲ 18). Their spectral gradient average (S = 14.5 ± 5) is very similar to the scattered disk KBOs, Plutinos, high inclination classical KBOs as well as the damocloids and comets (Table 5). The detached disk objects are thus not likely from the same source region as the ultra-red low inclination classical KBO population or the inner Oort cloud, though if they are from the same source region then the detached disk objects had significantly different surface altering histories. Inclination is not important in the color of detached disk objects, with even the few very low inclination objects observed in the detached disk (2003 FZ129 and 2003 QK91) showing only moderately red colors. The discovery of more low inclination detached disk objects is needed to further confirm that this population is not rich in ultra-red material, unlike the low inclination main classical belt. The only detached disk object found to have ultra-red surface material is (84522) 2002 TC302, which has a large inclination of 35 degrees. (84522) 2002 TC302 is possibly in the 5:2 Neptune resonance and, as discussed below, it appears objects in high order Neptune resonances are on average very red.

Outer Classical Belt Objects

The outer classical belt objects have a > 48.4 AU, e < 0.25, i < 40 degrees and are non-resonant.
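As a hedged illustration of the orbital-element boundaries quoted in this section, a coarse classifier might look as follows. Real classifications (e.g. Gladman et al. 2008) also rely on numerical integrations and explicit resonance identification, which are not reproduced here; the resonance flag is assumed to be supplied externally.

```python
def classify_tno(a_au, e, q_au, i_deg, resonant=False):
    """Coarse dynamical class from semi-major axis, eccentricity, perihelion
    distance and inclination, following the thresholds given in the text."""
    if resonant:
        return "resonant"
    if e > 0.2 and q_au >= 38.0 and 50.0 <= a_au <= 500.0:
        return "detached disk"
    if a_au > 48.4 and e < 0.25 and i_deg < 40.0:
        return "outer classical belt"
    if 39.4 < a_au < 48.4 and e < 0.25:
        return "main classical belt (cubewano)"
    if q_au < 35.0 and 30.0 < a_au < 100.0:
        return "scattered disk (strict)"
    return "other / unclassified"

# Hypothetical low-eccentricity object beyond the 2:1 resonance
print(classify_tno(a_au=60.0, e=0.10, q_au=54.0, i_deg=5.0))  # -> outer classical belt
```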
Outer classical belt objects are separated from the main classical belt by the 2:1 resonance and have slightly smaller eccentricities than the detached disk objects. The observed sample has 5 bona fide outer classical belt objects (2007 JJ43, 2003 FY128, 2003 UY291, 2001, 1995 TL8). The only other possible outer disk object in our sample would be 2004 XR190. This is a very dynamically unusual object since it has a relatively low eccentricity, large semi-major axis and large inclination (Table 4). It is to date a dynamically unique object but has been classified as an outer disk object by Gladman et al. (2008) and a detached disk object by Allen et al. (2006) and Lykawka and Mukai (2007a). Gomes et al. (2008) 12-17 degrees). This is unlike the main classical belt, where the low inclination objects are dominated by ultra-red objects (S ≳ 25) while the higher inclination objects are not dominated by ultra-red material. More outer classical belt objects need to be discovered to confirm this population is dominated by very red objects (S ≳ 20).

Detached and Scattered Disk

The scattered disk is probably made up of two main source populations. Some scattered disk objects are likely the surviving members of a relic population of objects that were scattered during Neptune's migration in the very early Solar System. A second source for the scattered disk is from recently dislodged objects from the Kuiper Belt through various slow dynamical processes (resonances) or collisions (Duncan et al. 1995; Levison and Duncan 1997; Duncan and Levison 1997; Nesvorny and Roig 2001). How the detached disk may have formed is still an open question (Morbidelli et al. 2008; Kenyon et al. 2008; Duncan et al. 2008b; Gladman et al. 2008). For high inclination objects (i > 50 degrees) the Kozai resonance can allow scattered objects to obtain large perihelion distances (Thomas and Morbidelli 1996; Gallardo 2006). For objects with moderate inclinations the Kozai mechanism only works in increasing the perihelion distance of a scattered object if the object is also in a mean motion resonance with Neptune (Gomes 2003). Using Neptune mean motion resonances and the Kozai mechanism, it is believed that the high perihelia and relatively large semi-major axes of some moderate inclination detached objects can be explained through the above mechanism, specifically 2000 YW134, 2005 EO297 and 2005 TB190 as well as the high inclination object 2004 XR190, since they are all in or near Neptune mean motion resonances. These objects were likely at some point scattered disk objects that simply had their perihelia raised through Neptune mean motion resonances and the Kozai effect. Based on the similar average spectral gradients of the two populations, the origin of the objects in the detached disk could be similar to that of the scattered disk (Table 5). The scattered disk spectral gradient (S = 10.1 ± 5) shown in Table 5 uses the strict definition similar to Gladman et al. (2008), which eliminates objects thought to be in any resonance with Neptune from being called a scattered disk object (called here the strict scattered disk: objects not in an obvious high order resonance with Neptune, perihelia less than 35 AU and semi-major axis between 30 and 100 AU). If objects in high order resonances with Neptune are allowed in the definition used for what is a scattered disk object, the spectral gradient increases slightly and is almost the same as the detached disk average spectral gradient (14.5 ± 5).
It is interesting to note that very red (S ≳ 20) objects are absent in the strict definition of the scattered disk but are not when including the higher order resonance objects. This may hint that many high order resonance scattered disk objects are coming from the ultra-red low inclination classical belt or outer classical belt objects. It may be that the only efficient way to dislodge these fairly dynamically stable ultra-red objects is through some resonance interactions. To further compare the scattered disk to the detached disk population, the Student's t-test and the Kolmogorov-Smirnov (K-S) test were performed on the spectral gradients of the two populations (Figure 5). The differences in the two population distributions were not statistically significant (< 3σ) in either test and thus are consistent with both populations coming from the same parent population (Table 6). This is true no matter if the high order outer resonance objects are considered scattered disk objects or not (Figure 6). The similarity of spectral gradients may hint that Neptune mean motion and Kozai resonances allowed scattered disk objects to become detached over time from significant Neptune influence and that the detached disk is a simple extension of the scattered disk (Gallardo 2006; Lykawka and Mukai 2007b; Emel'yanenko and Kiseleva 2008; Gladman et al. 2008; Gomes et al. 2008). Based on the spectral gradients and dynamics of the objects in the detached and scattered disk, it appears they likely contain many objects from the same source region.

Ultra-red Colors and the Outer Classical Belt

The outer classical belt objects have lower eccentricities and usually lower semi-major axes than the detached disk objects. They are separated from the main classical belt by the Neptune 2:1 mean motion resonance. The dynamical origin of the outer classical belt objects is not easy to explain through simple Neptune mean motion resonances and the Kozai effect, and may have a different origin than the detached disk objects (Gomes 2003; Gomes et al. 2008; Morbidelli et al. 2008). Simulations by Gomes (2003) of Neptune's migration and the formation of the Kuiper belt show that the objects coming from the outermost portion of the disk that Neptune migrates through would have preferentially low inclinations (i < 10 degrees) and low eccentricities (e ≲ 0.1) when dispersed to near 40 AU. This is likely the source of the "cold" classical disk (see Gomes (2003) Figure 2). The inclination distribution for these objects is found in the simulations to increase slightly at larger semi-major axes. More importantly, the Gomes simulations show that these same objects further out in semi-major axis around 50 AU would have significantly larger eccentricities (e ∼ 0.2). Using these ideas, Gomes et al. (2008) suggest that objects with orbits like the outer classical belt are not fossilized detached disk objects and more likely share a similar origin with the low inclination "cold" classical population (Gomes 2003; Morbidelli et al. 2008). The very red colors (S ≳ 20) found in this work for these outer classical belt objects support this hypothesis. The spectral gradient of the outer classical belt objects averages S = 23.3 ± 5, which is similar to that found for the low inclination "cold" classical main belt objects (27.5 ± 5; see Table 5).
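Comparisons of this kind can be reproduced with standard statistical routines. The sketch below uses SciPy's two-sample t-test (Welch variant) and K-S test on made-up placeholder spectral gradients, not the values of Table 3; a p-value well above the 3σ level (p ≈ 0.003) would be consistent with a common parent population, as argued above.

```python
from scipy import stats

# Placeholder spectral gradients (percent per 100 nm), not the values of Table 3.
detached_disk_S = [10.2, 12.5, 14.0, 15.8, 17.3, 13.1, 11.9, 16.4]
strict_scattered_S = [6.5, 8.0, 9.4, 10.7, 12.1, 7.8, 11.0]

t_stat, t_p = stats.ttest_ind(detached_disk_S, strict_scattered_S, equal_var=False)
ks_stat, ks_p = stats.ks_2samp(detached_disk_S, strict_scattered_S)

print(f"t-test (Welch): t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"K-S test:       D = {ks_stat:.2f}, p = {ks_p:.3f}")
# p-values above ~0.003 (the 3-sigma level) would be consistent with the two
# samples coming from the same parent population.
```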
To compare the spectral gradients of outer classical belt objects with the low inclination "cold" classical belt objects, the Student's t-test and Kolmogorov-Smirnov test were performed on the two populations (Figure 5). The two distributions do not appear to be significantly different (< 2σ) and thus could come from the same parent population (Table 6). This is unlike the detached and strict scattered disk, which have > 3σ confidence in the differences of their spectral gradient distributions when compared to the low inclination "cold" classical belt objects (Table 6). Thus the detached disk and strict scattered disk objects are unlikely to have come from the same parent population as the low inclination "cold" classical belt objects. Table 6 shows that the K-S test hints at a possible trend with there being significant differences between the outer classical belt spectral gradient distribution and the strict scattered and detached disk objects, but with only five known outer classical belt objects the test is unreliable. About twice as many outer classical belt objects need to be discovered and have their spectral gradients determined in order to confirm or reject them as having significantly different spectral gradients from the various dynamical populations in the outer Solar System. It is apparent that the outer classical belt objects are very red objects: they are redder than both the detached disk and strict scattered disk and less red than the low inclination "cold" classical KBOs. As shown in Figure 5, the colors of the scattered disk objects not in resonances are the least red. The detached disk objects are slightly redder, while the outer classical belt objects are even redder and finally the low inclination "cold" classical KBOs are the reddest objects. The high order resonance objects appear to span most of the spectral gradient range of the various populations (Figure 6). The significant differences in spectral gradients for some of the populations are likely because the objects come from different source regions. It is also possible that the differences in the spectral gradients of the various populations come from significantly different surface weathering processes on the objects over the age of the Solar System, such as different collisional or sublimation histories. It is apparent that the objects more distant from the Sun are on average redder.

Spectral Gradients Versus Orbital Dynamics

To further explore the origins of the detached disk and outer classical belt objects, their eccentricities versus spectral gradient were plotted (Figure 7). There is an apparent trend that the lower the eccentricity the redder the object. The Pearson correlation coefficient is -0.49 using the eighteen known spectral gradients of the detached disk and outer classical belt objects. The correlation with eccentricity is only significant at about the 97% level, and additional low eccentricity outer classical belt objects need to be found to confirm or reject this possible trend (Table 7). Including the strict scattered disk objects increases the significance of the correlation with eccentricity to 99.1%. If the low inclination "cold" main classical KBOs are also included, the trend is even stronger with a Pearson correlation coefficient of -0.80 and a significance at the 99.99% confidence limit (Figure 8). There is no trend of spectral gradient with the inclination or perihelion distances of the detached disk and outer classical belt objects (Table 7).
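The eccentricity-color trend quoted above can be checked in the same way with SciPy's Pearson correlation. The (e, S) pairs below are invented placeholders, and the confidence level is estimated from the two-sided p-value, which only approximates the procedure used in the paper.

```python
from scipy import stats

# Invented placeholder (eccentricity, spectral gradient) pairs, not Table 7 data.
eccentricity = [0.05, 0.10, 0.12, 0.20, 0.30, 0.40, 0.55, 0.65]
spectral_gradient = [27.0, 24.0, 22.0, 18.0, 16.0, 14.0, 11.0, 12.0]

r, p_value = stats.pearsonr(eccentricity, spectral_gradient)
print(f"Pearson r = {r:.2f}, two-sided p = {p_value:.4f} "
      f"(roughly {100.0 * (1.0 - p_value):.1f}% confidence of a real correlation)")
```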
Including the strict scattered disk also finds no trend of spectral gradient with inclination or perihelion distance.

Summary

Thirty-three extreme outer Solar System objects were observed to determine their optical colors.

1) The three possible inner Oort cloud objects (Sedna, 2006 SQ372, and 2000 OO67) all have ultra-red surfaces (spectral gradient S ∼ 25). These ultra-red surfaces are abundant in the low inclination "cold" classical KBO population and are believed to be associated with organic-rich material. Because the ultra-red material is only seen in the very outer parts of the observable Solar System, it is likely this material has not been significantly thermally altered. The red lobe of the Centaur distribution could thus either be from the low inclination classical KBO population or from the inner Oort cloud population.

2) For the first time a systematic color determination of extended or detached disk objects was obtained. Most detached disk objects have only moderately red surfaces (10 ≲ S ≲ 18). Though slightly redder on average than the scattered disk, the detached disk colors are consistent with being from the same source region as the scattered disk objects. The only ultra-red objects observed with scattered disk like orbits appear to be objects in high order resonances with Neptune.

3) The outer classical Kuiper belt objects, which have semi-major axes beyond the 2:1 resonance with Neptune and low eccentricities, were found to be very red (S ≳ 20) and are on average redder than the detached disk objects. Unlike the scattered disk and detached disk, the outer classical belt objects have spectral gradients similar to the ultra-red low inclination "cold" classical KBOs, though they appear to be less red on average.

4) The two retrograde objects with perihelia in the outer Solar System (2008 KV42 and 2008 YB3) and the extremely high inclination object (127546) 2002 XU93 show only moderately red colors (S ∼ 9). These colors are very similar to the known comets, dead comets, damocloids, Jupiter Trojans, Neptune Trojans, irregular satellites, D-type main belt asteroids, scattered disk objects and the neutral lobe of the Centaurs. The perihelion of 2008 YB3 is near Jupiter, thus this object has had its surface thermally altered over the age of the Solar System, as is probably true for all the above moderately red populations. 2008 KV42 has a rather large perihelion at 21 AU and it is unknown if it has ever approached closer to the Sun. The moderately red surface color suggests its surface has likely been thermally altered.

5) The detached disk and outer classical Kuiper belt objects show a trend that the lower the eccentricity the redder the object. This trend is currently not statistically significant since only a few of these objects are known. The trend is strengthened when adding the strict scattered disk and low inclination "cold" classical KBOs. The trend must be confirmed through discovering and measuring the colors of more outer classical belt objects.

References

Smith, J., et al. 2002, AJ, 123, 2121.
Smith, J., Allam, S., Tucker, D., et al. 2005, BAAS, 37, 1379.
Snodgrass, C., Lowry, S., and Fitzsimmons, A. 2008, MNRAS, 385, 737-756.
Stern, A. 2003, Nature, 424, 639-642.
Tegler, S. and Romanishin, W. 1998, Nature, 392, 49.
Tegler, S. and Romanishin, W. 2000, Nature, 407, 979-981.
Tegler, S. and Romanishin, W. 2003.
Tegler, S., Bauer, J., Romanishin, W., and Peixinho, N. 2008.
Table 2 notes: (a) These few objects showed large light variations during the observations, indicating possible significant rotational light curves (> 0.1 mag). Their colors were consistent throughout the observations since the variations caused by possible light curves were similar in all filters. Filters were also rotated after each observation to prevent a light curve from influencing the color calculation. The apparent magnitude (mR) and calculated absolute magnitude (mR(1, 1, 0)) are based on the average of the photometry. A few of the above objects have also had colors independently determined; in most cases the colors reported elsewhere and found in this work are within the uncertainties of the various observations. 1995 TL8 has large uncertainties from the BVRI data of Doressoundiram et al. (2002) and Delsanti et al. (2001); our results agree with Delsanti et al. (2001) and are inconsistent with Doressoundiram et al. (2002). 1999 HW11 has BVR data from Trujillo and Brown (2002); 2000 PE30 has BVRI data from Doressoundiram et al. (2001); 2000 YW134 has BVRI data from Tegler et al. (2003), Peixinho et al. (2004), Doressoundiram et al. (2007), Jewitt et al. (2007), and Santos-Sanz et al. (2009).

Table 3 notes: (a) The normalized spectral gradient for the optical colors of the observed objects (see text for details). (b) These few objects showed large light variations during the observations, indicating possible significant rotational light curves (> 0.1 mag); their colors were consistent throughout the observations, and filters were rotated during the observations in order to prevent any rotational light curve from influencing the color results. The apparent magnitude is based on the average of the photometry.

Table 4 notes: orbital data references include Elliot et al. 2005; Brown et al. 2004; Ragozzine and Brown 2007; Gladman et al. 2009; Becker et al. 2008; Allen et al. 2006; Lykawka and Mukai 2007a. Quantities are the perihelion distance (q), semi-major axis (a), eccentricity (e) and inclination (i). Data taken from the Minor Planet Center.

Table 5 note: (a) Spectral gradient as defined in the text using known B or g' and I or i'-band photometry normalized to the V-band. The ± on the spectral gradient is not an error but displays the general range the type of objects span.

Figure 2 caption (partial): The outer retrograde and high inclination objects (blue triangles) are slightly red and similar to the colors of the comets, Jupiter Trojans and Neptune Trojans. The extended or detached disk objects (green circles) occupy a fairly large range from moderately red to near, but mostly less than, ultra-red. The outer classical belt objects (purple diamonds) are mostly near the ultra-red area. Various extreme scattered disk objects observed in this work are also shown (brown pentagons). For reference the color of the Sun is marked by a filled black star. The very neutral colored Haumea collisional family member 1996 TO66 and the extremely ultra-red object 1999 OX3 (X's) were observed to show the large range of known colors in the outer Solar System and confirm the photometry results. Also shown are the typical B-R colors found for the C- and D-type asteroids, Jupiter Trojans, Neptune Trojans, comets, Haumea collisional family members, low inclination classical KBOs, Centaurs and Kuiper Belt objects. The typical colors of all these objects are generally at the same level and slope as shown by the dotted line. The ultra-red material only seen on some KBOs and Centaurs is shown in the upper right. Moderately red objects like the Trojans, comets and some KBOs and Centaurs can be seen in the middle left of the figure. Grey or neutral colored objects like most main belt asteroids are in the lower left of the figure. There is an obvious trend that more distant objects appear redder.

Figure 5 caption: The Kolmogorov-Smirnov test (K-S test) plotted for the detached disk (circles), outer classical belt (diamonds), low inclination "cold" classical belt (asterisks) and strict scattered disk objects (triangles: not including objects thought to be in high order resonances with Neptune, having perihelia above 35 AU or semi-major axes above 100 AU). The vertical axis shows the cumulative spectral gradient for the objects. It is clear that the groups have some overlap in color, but on average the low inclination classical belt objects are the reddest, followed by the outer classical belt objects, the detached disk objects and, most neutral, the scattered disk. The results of comparing various population spectral gradient distributions are shown in Table 6.

Figure 6 caption: As Figure 5, except showing scattered disk objects thought to be in high order resonances with Neptune (upside down triangles). The high order resonance objects consist of a wide range of spectral gradients including a significant amount of ultra-red objects.

Figure 7 caption: The eccentricity versus the spectral gradient for 2004 XR190, detached disk and outer classical belt objects. There appears to be a trend that the lower the eccentricity the redder the object, but since there are only a few objects in the sample this trend is only at the 97% confidence level using the Pearson correlation coefficient. The lower eccentricity outer classical belt objects (diamonds) are near the ultra-red spectral gradient region, while the higher eccentricity detached disk objects (circles) are mostly moderately red to neutral in color. 2004 XR190 is dynamically distinct (see text) but has been simulated as a detached disk object by Gomes et al. (2008) and thus is plotted for completeness (plus sign). A linear fit is shown by the dashed line.

Figure 8 caption: As Figure 7, except now the scattered disk objects not in high order resonances (triangles) and the low inclination "cold" main classical KBOs (X's) have been added. Adding these objects strengthens the trend that lower eccentricity objects have redder colors, which is then at the 99.99% confidence level.
Contact Classification in COVID-19 Tracing

Christoph Günther is with the German Aerospace Center, 82234 Weßling, and with Technische Universität München, 80330 Munich, Germany, e-mail: KN-COVID@dlr.de. Daniel Günther is a student at Technische Universität München, 80330 Munich, Germany, e-mail: d.guenther@tum.de.

The present paper addresses the task of reliably identifying critical contacts by using COVID-19 tracing apps. A reliable classification is crucial to ensure a high level of protection, and at the same time to prevent many people from being sent to quarantine by the app. Tracing apps are based on the capabilities of current smartphones to enable the broadest possible availability. Existing capabilities of smartphones include the exchange of Bluetooth Low Energy (BLE) signals and of audio signals, as well as the use of gyroscopes and magnetic sensors. The Bluetooth power measurements, which are often used today, may be complemented by audio ranging and attitude estimation in the future. Smartphones are worn in different ways, often in pockets and bags, which makes the propagation of signals and thus the classification rather unpredictable. Relying on the cooperation of users to wear their phones hanging from their neck would change the situation considerably. In this case the performance achievable with BLE and audio measurements becomes predictable. Our analysis identifies parameters that result in accurate warnings, at least within the scope of validity of the models. A significant reduction of the spreading of the disease can then be achieved by the apps, without causing many people to unduly go to quarantine. The present paper is the first of three papers which analyze the situation in some detail.

I. INTRODUCTION

The COVID-19 pandemic has spread to enormous dimensions with 16 million people affected and more than 644,000 fatalities up to July 26th, 2020. Unfortunately, the rate of increase has only flattened in China and selected European countries. The most important effective method to slow down the pandemic has so far been the enforcement of quarantine on large portions of the population, which led to a massive economic disruption. In countries such as China, South Korea, Singapore, and a number of European countries, the reduced infection rates made it possible to alleviate some of the restrictions. This involves the obligation to use masks and at least a recommendation to use some form of contact tracing. Different proposals for such a tracing have been made [1] and several different approaches are being followed in various countries. The most interesting proposals are those that fully focus on the tracing of contacts without tracking the movement of individuals, such as the scheme implemented in Germany [2]. The associated concepts were developed nearly synchronously by a number of authors and were published in [3], [4], and [5]. A review of associated requirements is found in [6] and a review of major apps in [7]. In view of the highly contagious nature of COVID-19, of the lack of a vaccine and of the high casualty rates, an effective tracing and significant testing capabilities are essential. In Germany 16 million people have downloaded the associated app on their iPhone and Android phones so far. Tracing apps rely on Bluetooth to detect the proximity of other people's devices. These apps generate random IDs, which are broadcast and stored to identify contacts in the case that the owner of a device is tested positively.
If the owner is tested positively, the list of IDs stored on his device is published. Conversely, each device keeps the IDs of past contacts and compares them to the published list of IDs on a regular basis in order to establish whether a critical contact has taken place. Apple published an update of its operating system to support the development of such apps (iOS 13.1.5) and Google updated its Application Programming Interface (API). In the case of a critical contact the person should quarantine himself and register for testing. The outcome might be that he is found to be a carrier of the disease. In this case, the owner should trigger the release of his device's list of random IDs. The consequences of positive and negative testing depend on local regulations. In Germany a contact is characterized as critical and is called a Category 1 contact if two people were in a face-to-face meeting at a distance of less than 2 meters for more than 15 minutes. The present paper relies on this definition, but its parameters can easily be adapted to any other definition. Several countries have released tracing apps. The classification methods used are typically not discussed publicly. Ideally, the classification ensures a minimum missed detection rate at an acceptable level of false alarms. In the case of too many false alarms, people will be unduly sent to quarantine and the app-based approach will be rejected by the public. If, on the other hand, the app fails to identify potential carriers, they continue infecting others and its effectiveness is jeopardized. As shall be seen, both issues are most critical in the case of a high density of COVID-19 carriers. As a consequence, the present analysis will be most pertinent to regions with a high infection rate. This paper is the first of a series of three papers. The other two papers address the particularities of the evaluation of Bluetooth Radio Signal Strength Indicators (BT-RSSI) [8] and of audio ranging [9] in more detail. Electromagnetic signals, such as Bluetooth signals, can be used for time of flight measurements, which provides accurate ranging results. Unfortunately, this and some other ideas cannot be considered presently, since a contact tracing app must rely on existing smartphones and devices. Thus, only existing functions provided by the chipsets, and even more importantly by the APIs of the devices, can be used. The options for Bluetooth on existing equipment are limited to power measurements. The outcome of such measurements very much depends on the location of the device, which might be in a pocket or in a bag, often together with keys, coins, metallic business card holders and the like. Furthermore, the human body, with a strong water content, strongly absorbs Bluetooth signals. Together these uncertainties greatly influence the power levels measured at a distant receiver. The difficulties of tracing contacts by Bluetooth power measurements are also discussed in [10]. The remaining uncertainties about a potential contact to an infected person could potentially be resolved by interrogating the people involved. This would require the disclosure of the location at the time of contact, which might have been on a commuter train or at lunch in a restaurant, for example. The people must then identify where they sat or stood, which they might remember or not. In any case, this would be a source of privacy issues, discomfort and residual uncertainty.
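The broadcast-and-match scheme described above can be sketched in a few lines. This is purely illustrative: real implementations (for example the German app built on the Google/Apple exposure notification interface) derive rolling identifiers cryptographically from daily keys and attach metadata, none of which is reproduced here.

```python
import os
import time

ID_LENGTH_BYTES = 16          # assumed identifier size
ROTATION_SECONDS = 15 * 60    # assumed rotation interval

def new_rolling_id() -> bytes:
    """Generate a fresh random identifier to broadcast over BLE."""
    return os.urandom(ID_LENGTH_BYTES)

class TracingDevice:
    def __init__(self):
        self.own_ids = []        # IDs this device has broadcast (with timestamps)
        self.heard_ids = set()   # IDs received from nearby devices

    def broadcast(self) -> bytes:
        rid = new_rolling_id()
        self.own_ids.append((time.time(), rid))
        return rid

    def receive(self, rid: bytes) -> None:
        self.heard_ids.add(rid)

    def published_ids(self):
        """IDs released after a positive test."""
        return [rid for _, rid in self.own_ids]

    def check_exposure(self, published) -> bool:
        """Compare locally stored contact IDs against a published list."""
        return any(rid in self.heard_ids for rid in published)

# Example: B is tested positive; A checks whether a contact took place.
a, b = TracingDevice(), TracingDevice()
a.receive(b.broadcast())                    # A was near B and heard one of B's IDs
print(a.check_exposure(b.published_ids()))  # -> True
```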
The German app would not support such a manual tracing anyway, since it does not collect the necessary information! In any case, such a human intervention would reduce the level of acceptance. As a consequence, we propose to carry the smartphone in an exposed manner, namely hanging around one's neck. In summer time, younger people often do that already. On the basis of the present findings, this is recommended to everyone, also in a business context, see Figure 1. Corresponding cases are available from several vendors. This mode of wearing the smartphone ensures a line of sight situation between two fellows facing each other. It leads to measurements that are a lot easier to interpret using Bluetooth Radio Signal Strength Indication (RSSI), audio ranging as well as gyro and magnetic sensors.

The paper starts with a description of the statistical relationship between individual measurements and their classification in Section II. This section lays the foundation for evaluating the performance of classification in simulations or experiments. The probability of missed detection turns out to be critical for the success of the classification. Bluetooth RSSI evaluation is rather sensitive to the manner in which measurements are evaluated. Section III describes some aspects relating to the modeling of Bluetooth propagation and power measurements, as well as the essential result from the more in-depth study of the situation developed by Dammann et al. [8]. The following Section IV addresses audio ranging, which turns out to be an important complementary technique. Some audio properties of smartphones are summarized in this section. A more detailed study is published by Kurz et al. [9]. Section V briefly addresses the possibility of using attitude sensing, which is not explored in depth. Section VI finally discusses some basics of classifying contacts using the set of sensors mentioned.

II. STATISTICS OF CLASSIFICATION

The success of classifying contacts into Category 1 and other contacts depends critically on our capability of estimating distances. As a consequence, it is important to understand the influence of under- and overestimating distances from a pandemic point of view. This requires a study of the associated statistics. For a Category 1 contact, two fellows have to be facing each other at a distance of less than 2 meters for at least 15 minutes. This is called a C_1 contact throughout the paper. Assume that we are the person A and that we monitor the presence of B. We aim at determining whether the contact to B is a C_1 contact or not, denoted by C_1 or ¬C_1, respectively. Furthermore, denote the outcome of the estimation process by Ĉ_1 and ¬Ĉ_1; then there are four different possibilities, as listed in Table I (classical hypothesis testing). Obviously, in any good design p_md and p_fa are small. The four cases have to be considered jointly with the possibility that B is tested positively, which happens with probability p_i, and shall be denoted by B. Current values for p_i, based on data published by Johns Hopkins University on July 26th, are 1/5100 for Germany and 1/113 for the USA. If B is either not tested or tested negatively, this shall be denoted by ¬B. Let finally p_C1 be the probability of C_1; then this leads to the situations summarized in Table II. The first and fourth rows of Table II provide the desired outcome. The probability p_C1 of a contact being C_1 is driven by social behavior. Social distancing aims at reducing p_C1.
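A compact way to read Tables I and II, under our reading of the description above, is to combine the four classification outcomes with the probability p_i that fellow B is infectious. The sketch below does this; the input values other than p_i are placeholders.

```python
def table_ii_rows(p_c1, p_i, p_md, p_fa):
    """Probabilities of the joint outcomes discussed in the text (our reading)."""
    rows = {
        "row 1: C1, detected, B infectious (justified warning)":
            p_c1 * (1.0 - p_md) * p_i,
        "row 2: C1, missed, B infectious (carrier keeps spreading)":
            p_c1 * p_md * p_i,
        "row 3: not C1, false alarm, B infectious (undue quarantine)":
            (1.0 - p_c1) * p_fa * p_i,
    }
    rows["row 4: all other cases"] = 1.0 - sum(rows.values())
    return rows

# p_i = 1/5100 is the German value quoted in the text; the rest are placeholders.
for name, prob in table_ii_rows(p_c1=0.05, p_i=1 / 5100, p_md=0.05, p_fa=0.01).items():
    print(f"{name}: {prob:.2e}")
```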
This is important, since many people would otherwise be potentially infected and sent to quarantine by the first row in the table, whenever p_i is significant. The product p_C1 p_i is the probability that the contact is C_1 and that fellow B is infected at the same time. Aiming at a small value of p_md ensures that few potential carriers continue spreading the disease (second row). The actual value of p_md is a direct measure of the containment benefit provided by a tracing app. Since 1 − p_C1 is large, it is very important that the probability p_fa of wrongly classifying a contact as being C_1 be small. Otherwise, numerous people would be unduly sent to quarantine by the third row. The value of p_fa characterizes the extra load in terms of quarantining and testing generated by a tracing app. This has to be taken into account in the tradeoff of p_fa versus p_md. Also note that the undesired outcomes, i.e. the rows 2 and 3, have a probability proportional to p_i, which means that they are unlikely to occur in the case of a small density of infectious people. As a consequence, a potential under-performance of an app only becomes apparent in environments with a high number of infectious people.

The decision for Ĉ_1 or ¬Ĉ_1 is taken after a substantial number of individual measurements. They are assumed to be performed at regular intervals. The number of such intervals in a time span of 15 minutes is denoted by x_0. Depending on the assumed behavior of people, different methods of analyzing the measurement data shall be considered:

• Model A: People are rather mobile and the environment is changing quickly; the contact duration is accumulated over many short intervals. Examples of such situations occur when people work closely together, which is not particularly critical in terms of classification. They occur in underground trains, during breaks at conferences, at any form of party and the like. In these cases, a decision is taken every 15 seconds; if x_0 = 60 such measurements indicate that fellow B is in the contact zone of fellow A, the contact is classified as being C_1. It will turn out that this model cannot be addressed with the current capabilities.

• Model B: People come together, stay in a given relative pose and then separate again. This happens when people are seated in a train, especially in long-distance intercity trains, in restaurants, meeting rooms, lecture halls, theaters and the like. In this case, a single test (x_0 = 1) is performed to decide on whether A is in the contact zone of B. Specifically, in the case of Bluetooth RSSI measurements, a timer is started when the RSSI value exceeds a critical value for the first time. From then on, the times for which the RSSI values are compatible with a C_1 contact are accumulated. If the time exceeds 15 minutes at the end of the contact, a C_1 contact is declared. There are many different options for the implementation of this model. They will not be further discussed, however, since this model assumes a static constellation of people, which is not the most common case.

• Model C: This approach is more robust with respect to the behavior of people and preferable to Model B.

Model A is most universally valid with respect to people's behavior. Its statistics are so unfavorable that it does not lead to acceptable values of p_md, however. In all models, the number of RSSI measurements n that are combined before taking an elementary decision is another parameter that can be adapted.
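As a hedged illustration of the Model B style evaluation just described, the following sketch accumulates the time during which RSSI samples remain compatible with a C_1 contact; the RSSI threshold is an assumed placeholder, not a value from this paper or from [8].

```python
RSSI_THRESHOLD_DBM = -65.0      # assumed "critical value", not taken from the paper
MEASUREMENT_INTERVAL_S = 15.0   # assumed sampling interval
C1_DURATION_S = 15 * 60         # 15 minutes, from the Category 1 definition

def classify_contact_model_b(rssi_series_dbm):
    """Accumulate the time during which RSSI samples are compatible with a C1
    contact; declare C1 if the accumulated time reaches 15 minutes."""
    compatible = sum(1 for rssi in rssi_series_dbm if rssi >= RSSI_THRESHOLD_DBM)
    return compatible * MEASUREMENT_INTERVAL_S >= C1_DURATION_S

# Example: 70 samples (17.5 minutes) above threshold -> classified as C1
print(classify_contact_model_b([-60.0] * 70))   # True
```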
Large values lead to more reliable decisions but also to a higher number of exchanged messages. The rate of message will be n · x 0 measurements in 15 minutes. In order to assess p md , we need to know the number of times that the distance and attitude condition for C 1 between A and B are fulfilled. This depends on the profession and personality of the person. It has two components, the first one is determined by the number of people met during one day. Let us assume that this number is k and that it has probability p n (k), then the probability p S that a particular fellow A spreads the virus after having been in contact with m ∈ {1, 2, . . . k} people, who are infectious with probability p i , under the assumption that i ∈ {1, 2, . . . m} of these contacts are not detected, is given by: Since p i and p md are small numbers, the dominant term in this equation is obtained for m = i = 1 : with K = ∞ k=0 p n (k)k being the average number of contacts, see also the second row in Table II. All these contacts take place mostly independently and can thus be treated as such. Each of them is associated with a contact time x ∈ N, with a distribution p X (x). The latter is derived from social models and depends on whether people are practicing social distancing. The accumulation of n measurements leads to a decision c 1 . The latter has a probability of missed detection and false alarm denoted by π md and π f a , respectively. In the present section, both quantities are written without further indices. In later sections, the dependency on n will be made explicit. The combination of x 0 such decisionsĉ 1 finally leads to the decisionĈ 1 , which is associated with a missed detection probability: since the combined missed detection occurs whenever less than x 0 detections succeed. Using this in Equation 1 implies that the probability that A spreads the disease is: with, x M = 24·4·x 0 being the number of elementary decisions taken per day (24 · 4 quarters of hours times x 0 ). The above equation is an approximation since the distribution of contact times depends on the people and circumstances of the meeting, like sitting together in the train, having a joint lunch and so on. If π md 1, the term m = x 0 − 1 is dominant in Equation (3): The second line in the equation is obtained by shifting the indices, the third one is obtained by expanding the binomial coefficients and bounding the terms in the numerator. Note that the term for x = 1 holds with equality. Under the same assumptions used so far, the probability that fellow A is a C 1 contact of B after a day is: Thus, the comparison of p S , i.e. the probability of spreading the virus with tracing, and of p C1 , i.e. the corresponding probability without tracing, shows that contact tracing is a very effective option to reduce the spreading whenever is small. This implies that the probability of missed detection must be constrained to a value smaller than 1/x 0 , which is possible to achieve if x 0 is small, as it is the case in Model B and Model C and not possible to achieve in Model A, even with very large values of n. Rephrasing this in words may help developing some intuition: since x 0 individualĉ 1 decision are needed for aĈ 1 decision, missing any one of them leads to a missed detection. Since there are x 0 options for that, p S becomes essentially proportional to x 0 π md . We will use the latter product as a measure for the reduction in the spreading of the disease by the tracing app. 
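This aggregation argument can be made concrete with a few lines of code: since a Ĉ 1 decision requires all x 0 elementary decisions to succeed, the combined missed detection is 1 − (1 − π_md)^x0, which is approximately x 0 · π_md for small π_md. The parameter values in the example below are assumptions chosen for illustration.

```python
# Combined missed detection when all x0 elementary c1_hat decisions must
# succeed, and its small-pi_md approximation x0 * pi_md (the reduction factor
# used in the text).  Parameter values are illustrative assumptions.

def combined_missed_detection(pi_md, x0):
    """P(missing a C1 contact) when all x0 elementary decisions must succeed."""
    return 1.0 - (1.0 - pi_md) ** x0

if __name__ == "__main__":
    for x0 in (1, 5, 60):                    # Model B, Model C (3-min slots), Model A
        for pi_md in (0.001, 0.01, 0.1):
            exact = combined_missed_detection(pi_md, x0)
            print(f"x0={x0:3d} pi_md={pi_md:.3f}  "
                  f"combined={exact:.4f}  approx x0*pi_md={x0 * pi_md:.4f}")
```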
In order to evaluate p f a , we need to additionally know the number of times y that a person is close enough for a measurement to take place. The distribution p Y (y) does again depend on social parameters but additionally depends on radio propagation in the case of Bluetooth measurements, and on the triggering mechanism in the case of audio ranging. The number of contacts K Y ≥ K is larger, since the presence detection by Bluetooth signaling is triggered well beyond C 1 separation. Consider Bluetooth measurements: if among the y time instances for which a radio contact to one particular fellow B persists, and assume that m < x 0 of those contacts are correctly detected as fulfilling the C 1 conditions. Then, q additional erroneously identified contacts (erroneousĉ 1decisions) with m + q ≥ x 0 are needed to cause a false alarm for that number y of radio contacts to B (see Table III for a summary of the meaning of the variables): for y ≥ x 0 and p f a (y) = 0 for y < x 0 . Variable Meaning y number of radio contacts x number of C 1 contacts x 0 number ofĉ 1 -decisions to declare C 1 m number of correctĉ 1 estimates q number of incorrectĉ 1 estimates Using Equation (6), the expected number of an unnecessary quarantining of people is approximated by: This equation also includes the possibility that users move with respect to each other, which means that the conditions C 1 and ¬C 1 alternate as a function of time. If C 1 is fulfilled π f a = 0, and if ¬C 1 , the equation π d = 0 holds. At the border of the C 1 domain, the two quantities change their role. This implies that a small p f a near that border is associated with a large p md ∼ 1 − p f a on the other side of the border. This is uncritical if the distributions are very narrow -concentrated around a value -as is the case for ranging, but becomes rather problematic with Bluetooth signal power measurements, which show a very flat distribution. Unless great precautions are taken the classification becomes unreliable. Consider the case, that fellow B is outside of the C 1 zone of fellow A, i.e. p X (0) = 1. Then x = 0 for these measurements and the equation becomes: Although terms with x > x 0 may be larger, the term x = x 0 gives us an idea of the scaling. Its asymptotic dependency can be evaluated using Stirling's formula and lim y→∞ (y/(y − x 0 )) y = e x0 : This means that in the long term, it is the duration of the radio contact y, which dominates the rate of quarantining people. Some target figures for π f a can be obtained for a fully occupied train, for example. In Germany's 2nd class setups, there are 4 seats in one row on each side of a carriage, and around 10 rows in the carriage. The range of Bluetooth reaches well beyond the next row forward and backward. This means that K Y > 24 of which 4-8 are within the contact zone and must thus be discounted, leading to an effective value K Y = 16. The value y itself is determined by the duration of the common journey. For commuter trains we choose 15 and 30 minutes, for inter-city journeys 1, 2, and 3 hours, which leads to y/x 0 = 1, 2, 4, 8, and 12. In such a train a carrier of the disease will send 4 people to quarantine, thus it should be tolerable that 2 additional people are sent to quarantine by false alarms as well. The value of π f a is then obtained by solving Numerical values of π f a are indicated in Table IV. They are the values that can be tolerated, leading to a 50% increase in the quarantining of people riding a German train. 
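A sketch of how such tolerable values can be derived: treating the elementary errors as independent makes the false alarm for a fellow outside the contact zone a binomial tail over the y radio contacts, and the per-decision π_fa that keeps the expected number of unduly quarantined neighbours below the tolerated two can be found numerically. The exact equation solved for Table IV is not reproduced in the text, so the functions below only illustrate the structure of the argument; K_Y, x_0 and the tolerance are taken from the train example, and the x_0 = 1 demo case is hypothetical.

```python
# False alarm over y radio contacts (fellow B always outside the contact
# zone) and the tolerable per-decision pi_fa for the train example.  This is
# a sketch of the reasoning only; the exact equation behind Table IV may
# differ in detail.

from math import comb

def p_false_alarm(pi_fa, y, x0):
    """P(>= x0 erroneous elementary decisions among y opportunities)."""
    if y < x0:
        return 0.0
    return sum(comb(y, q) * pi_fa**q * (1.0 - pi_fa)**(y - q) for q in range(x0, y + 1))

def tolerable_pi_fa(y, x0, k_y=16, extra_quarantines=2.0):
    """Largest pi_fa with k_y * p_false_alarm <= extra_quarantines (bisection)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if k_y * p_false_alarm(mid, y, x0) <= extra_quarantines:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    # Hypothetical x0 = 1: one 15-minute block (y = 1), K_Y = 16 neighbours,
    # at most 2 additional people sent to quarantine.
    print(f"tolerable pi_fa: {tolerable_pi_fa(y=1, x0=1):.3f}")
```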
The situation is rather uncritical on a short commuter train ride π f a < 0.93 and much more demanding on a longer intercity train journey. III. BLUETOOTH POWER MEASUREMENTS The Application Programming Interfaces (API) of Android and iOS allow to trigger the transmission of Bluetooth Low Energy (BLE) advertisement messages and to measure the radio signal strength of the received signals. The corresponding values are provided in the form of a Radio Signal Strength Indicators (RSSI), which is defined as the received signal power on a logarithmic scale. Bluetooth uses frequencies from a band shared with microwave heating, which means that Bluetooth signals are strongly absorbed by water. As a consequence any part of a human body obstructing the line of sight significantly attenuates the signal. The wide variety of options for carrying mobile phones in your hand, pocket or bag thus implies an enormous variability in received power levels. This is further amplified by the directional characteristic of low-cost antennas. You might make an experiment yourself using a Bluetooth module and a BLE scanner app on your smartphone, which can be downloaded from the iOS or Android stores. With the module and phone separated by 1.5 meters, I personally found the following RSSI-values: -61 to -66 dBm when the module was in my hand and -81 to -89-dB when it was in my pocket. Knowing that a 20 dB change corresponds to a factor 10 in distance exemplifies the difficulty of estimating distances using Bluetooth RSSI values. This led us to propose the rule of carrying smartphones hanging down from the neck. Note that the smartphone could be replaced by a much smaller device built around a Bluetooth module, an Inertial Navigation System (INS) and a sonic or ultra-sonic ranging system, as well. Even if people follow the above recommendation on how to carry their smartphone, the situation remains difficult due to uncertainties in radio propagation, which furthermore takes place on three different carrier frequencies. The unknown association of carrier frequencies to measurements adds an additional level of difficulty. Gentner et al. identified certain patterns in the use of carriers, see [11], which can be used to reduce the associated uncertainty. Traditional models of propagation are shortly addressed in the following section and in more details in [8]. The section furthermore relates the associated statistics to the statistics of classification. A. Propagation Model The smartphone is assumed to be worn on the chest, see [8] for details of the measurement setup used to obtain numerical results. For each individual carrier, the received signal power P RX is modeled by the equation: with P T X denoting the transmit power, γ denoting a stochastic fading coefficient, d being the distance between the receiver and the transmitter, ν being the exponent of the decay law, which is 2 for free space propagation, and with n representing a superposition of noise and interference. For simplicity, the noise and interference are not further considered here -at low distances they are not dominant. In this case, the received power, can be represented on a logarithmic scale, which leads to the definition of the RSSI: with η = 10 log γ and with logarithms taken to the basis 10. The relationship between the reported RSSI value and d is the basis for distance measurement: the measured RSSI is compared to with d c = 2 m being the critical distance. 
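The comparison value, i.e. the RSSI expected at the critical distance d_c = 2 m, and the elementary decision it supports can be sketched as follows; the transmit power, mean fading term and path-loss exponent below are assumptions, not calibrated values from the paper.

```python
# Log-distance model of Equations (8)/(9): RSSI = 10*log10(P_TX) + eta
# - 10*nu*log10(d), with eta = 10*log10(gamma).  The decision threshold Theta
# is the RSSI expected at d_c = 2 m.  p_tx_dbm, eta_db and nu are assumptions.

import math

def expected_rssi(d_m, p_tx_dbm=0.0, eta_db=0.0, nu=2.0):
    """Mean RSSI (dBm) at distance d_m under the log-distance model."""
    return p_tx_dbm + eta_db - 10.0 * nu * math.log10(d_m)

def decision_threshold(d_c=2.0, **kwargs):
    """RSSI threshold Theta corresponding to the critical distance d_c."""
    return expected_rssi(d_c, **kwargs)

def c1_hat(rssi_dbm, theta_dbm):
    """Elementary decision: RSSI above threshold means 'within the contact zone'."""
    return rssi_dbm > theta_dbm

if __name__ == "__main__":
    theta = decision_threshold(p_tx_dbm=-40.0)          # hypothetical calibration
    for d in (0.5, 1.0, 2.0, 4.0, 8.0):
        rssi = expected_rssi(d, p_tx_dbm=-40.0)
        print(f"d={d:4.1f} m  mean RSSI={rssi:6.1f} dBm  c1_hat={c1_hat(rssi, theta)}")
```

With a path-loss exponent of 2, a 20 dB change in RSSI corresponds to a factor of 10 in distance, which is exactly the sensitivity problem illustrated by the hand-versus-pocket measurements above.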
Note that Equation (8) defines the units, which have to be maintained after taking logarithms. In order to evaluate the missed detection probability per event p md or the false alarm probability per event p f a , the statistics for η or γ need to be known. These statistics are dependent on the situation. In the case, that two fellows face each other, they are in a line of sight situation. If the direct path dominates all other contributions, γ is basically delta distributed with an average of Γ determined by the antenna pattern. In other cases, the direct path remains present but is superposed by scattered components. In this case, the distribution of the amplitude of the received signal is modeled by a Ricean distribution. This model is considered to provide a faithful representation of reality, whenever the parameters are properly estimated. Presently the model is only considered for comparative purposes, as shall be seen below. The received power (or attenuation γ) in this model has a non-central χ 2distribution with two degrees of freedom: with γ R being the non-centrality parameter and σ R being the variance. In the case that the decision about C 1 is taken on the basis of a single measurement (n = 1), e.g. in Model A, the criterion for the decision is: with γ c being given by: The associated estimate is denoted byĉ 1 , and the probability of missed detection for the distance d < d c is given by: If one would add several power measurements, i.e. n > 1, e.g. in Model B and C, this would mean adding n independent identically distributed variables, each of them being χ 2distributed with 2 degrees of freedom. The result would then be χ 2 -distributed with 2n degrees of freedom: The Equations (11) and (12) would remain valid and the latter integral could be computed in closed form for arbitrary n. The value γ c is the first moment of the χ 2 -distribution with 2n degrees of freedom and non-centrality parameter nγ R /σ 2 R : The probability of missed detection (13) in estimatingĉ 1 could then be computed in closed form using Marcum's Q-function Q n (., .): The above distributions are adequate for users A and B in close proximity of each other, as is the case for d ≤ d c . It is the desired result in Model A and shall serve as a benchmark in the Models B and C. The reason for not using this result directly in the latter models is that apps are expected to add the RSSI values rather than the power values. In this case, the statistics cannot be determined in closed form but must rather be evaluated numerically. Before addressing this case, let us consider the situaiton d > d c with a line of sight that is often obstructed. In such cases, a lognormal fading distribution is considered to be a reasonable model of reality, see [12]. The distribution may either be written in terms of γ: or in terms of η = 10 log γ: with η L = 10 log γ L = η . Equation (15) makes the Gaussian character and the meaning of η L and σ L obvious. In the above discussion, a decision in the case n = 1 was taken in favor of C 1 , whenever the power level was above a threshold. On the logarithmic scale this condition reads RSSI > Θ, i.e. whenever the difference is positive or equivalently whenever η > η +ν ·10 log(d/d c ). Thus, a false alert occurs if this condition is fulfilled for d > d c . The probability of a false alarm, i.e. 
and erroneous decision for c 1 , becomes with the present Q-function being a scaled version of the error function complement: In the case of n = 1, a closed form of the statistics thus exists for π md for d ≤ d c and for π f a for d > d c . In the case n > 1, e.g. Model B and C, the situation changes somewhat since measurements are now combined by adding RSSI-values. This corresponds to a geometric average of the received powers. In this case, the probability of false alarm can be computed easily: for d > d c . This equation is a consequence of the scaling of η L and σ 2 L by n. Using the same distribution, but with different parameters for d < d c is expected to be a worse match to reality but allows to also evaluate the probability of missed detection in closed from: It leads to an interesting symmetry between the probabilities of missed detection and of false alert. Note that both probabilities π md and π f a depend on the parameters of the distribution, on the true distance d, and on the critical distance d c , but that they do not depend on the explicit threshold Θ, see Equation (16) and the associated explanations. The resulting functional dependence can either be used in a simulation of roaming users or can simply be averaged over the interior of a circle of radius d c for π md or over its complement or a relevant subset for π f a . The closed form of Equation (6) provides the immediate insight that π f a,n (d c ) = 1/2, which shows that the models are consistent with our intuition. B. BLE Measurements Results The companion paper by Dammann et al. [8] describes the measurements and their analysis in more details. All these measurements have so far been made using ideal conditions with no additional people except A and B (in the very initial measurements A was a actually a post carrying the receiver). The experimental basis shall be further broadened in the future. A first result can be derived from the estimated Rice parameters at a distance of 2 meters γ R = 247 pW, and σ R 2 = 9.15 pW, as well as for the lognormal distribution at 2 and 4 meters: 1.60 and 1.97 dBm, respectively. This allows plotting the functions from Equation (14) and (17) for π md,R,n (d) and for π f a,n (d 2 c /d) = π md,L,n (d), respectively. The values of n determines how many measurements are combined into an elementary decisionĉ 1 . For n = 1, the values π md,R,1 (d) and π f a,1 (d) are the best models among those considered -the use of a decision threshold in the absolute or logarithmic domain are equivalent. The parameter for 4 meters 1.97 dBm is used for determining the false alarm rate. If several RSSI values are added (logarithmic domain), the statistics associated with the more realistic Rice distribution in the near range can not be determined in closed form, at least not today. In this case, Equation (19) for the lognormal distribution is used to determine π md,L,n (d) with the parameter for 2 meters. This is used as an approximation of the true distribution in the exemplary case n = 60. The plots in Figure (2) show two groups of curves. The upper group corresponds to n = 1 and the lower group to n = 60. The latter group of curves shows the benefit of diversity. Within these groups there are differences between π md,R,n (d) (wrong combination) and π md,L,n (d) (wrong fading statistics) but they turn out not to be fundamental. In Section III-A the probability of missed detection was determined as a function of distance. 
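These per-distance error probabilities can be sketched for both fading models. The lognormal branch follows the closed forms discussed above (Gaussian on the dB scale, with arguments growing with the square root of n); the Rice branch uses SciPy's non-central chi-square distribution for the sum of n power measurements. The path-loss exponent ν = 2, the use of the quoted 1.60 dB and 1.97 dB parameters as standard deviations, and a detection threshold equal to the mean of the aggregated statistic at the 2 m calibration point are assumptions; the exact normalisation used in [8] may differ.

```python
# Per-distance error probabilities for the two fading models of Section III-A.
# A minimal sketch; choices marked as assumptions below are not from the paper.

import math
from scipy.stats import ncx2

D_C = 2.0      # critical distance (m)
NU = 2.0       # path-loss exponent (free-space value, assumption)

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# --- Lognormal model (dB domain, n RSSI values added) -----------------------
def pi_md_lognormal(d, n=1, sigma_db=1.60):
    """Missed detection per elementary decision for d <= d_c."""
    return q_function(math.sqrt(n) * 10.0 * NU * math.log10(D_C / d) / sigma_db)

def pi_fa_lognormal(d, n=1, sigma_db=1.97):
    """False alarm per elementary decision for d > d_c."""
    return q_function(math.sqrt(n) * 10.0 * NU * math.log10(d / D_C) / sigma_db)

# --- Rice model (power domain, n power values added) ------------------------
def pi_md_rice(d, n=1, gamma_r=247.0, sigma_r2=9.15):
    """Missed detection for d <= d_c under the non-central chi-square model."""
    lam_dc = gamma_r / sigma_r2          # non-centrality at the 2 m point
    threshold = n * (2.0 + lam_dc)       # mean of the aggregated statistic at d_c (assumption)
    lam_d = lam_dc * (D_C / d) ** NU     # only the line-of-sight term is scaled (assumption)
    return ncx2.cdf(threshold, 2 * n, n * lam_d)

if __name__ == "__main__":
    for n in (1, 60):
        print(f"n={n:3d}  pi_md_lognormal(1.5 m)={pi_md_lognormal(1.5, n):.3f}  "
              f"pi_md_rice(1.5 m)={pi_md_rice(1.5, n):.3f}  "
              f"pi_fa_lognormal(3 m)={pi_fa_lognormal(3.0, n):.2e}")
```

The sharp improvement from n = 1 to n = 60 mirrors the two groups of curves in Figure 2.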
The probability of detection is additive in the sense of Equation (20). In this equation, π d (r) = 1 − π md (r) is the conditional probability of detection given that fellow B is at distance r, and dS(r) ρ(r) is the probability for fellow B to be at that distance. Equation (20) thus is the marginalization of π d (r) with respect to r. Note that the limitation of the integration is a consequence of π d (r) = 0 whenever r > d c . This allows us to define the average probability of missed detection over the distribution of users:

π md,av,n = ( ∫_0^{d_c} 2πr ρ(r) π md,n (r) dr ) / ( ∫_0^{d_c} 2πr ρ(r) dr ).   (21)

The probability distribution of users in Equation (20) and (13) is expressed in terms of n(r) = πr 2 , the number of people at a distance not greater than r in the case of a density of one person per square meter. This corresponds to the densest packing of people, each occupying a surface of 1 square meter. People are continuously spread in a symmetric manner around fellow A, which is a simple way of achieving a densest packing. The "function" dn(r)/dr is mostly zero. It jumps at the values r m = √(m/π), which makes it a distribution in the sense of Schwartz [13]. With these preparations, the integrals reduce to

π md,av,n = (1/m c ) · Σ_{m=1}^{m_c} π md,n (r m ),

with m c being the largest integer such that r mc ≤ d c . Note that the density of points r m increases with increasing m, which means that the main contribution comes from the border of the contact zone. Using the experimental results from [8], this integral evaluates to π md,av,1 = 0.15 for n = 1 for the χ 2 -distribution and to π md,av,1 = 0.12 for the lognormal distribution, both of which are not very compatible with the need for a small x 0 π md . Remember that the latter value is the reduction factor in the probability of further spreading of the disease, achieved by contact tracing. Table V lists values of π md,av,n for different n, which can be used to determine the reduction factor. Even in the case n = 120, the factor x 0 π md = 0.21 in Model A, and it would require 4 measurements per second. It is only with n = 480 that the factor x 0 π md falls below 1%, which would require 16 measurements per second. This would seriously impact the standby time of the smartphone. Assuming Model C and a decision based on 3-minute intervals, i.e. x 0 = 5, means that we could achieve a reduction by a factor 0.07 provided that n = 60 measurements are performed and aggregated in each 3-minute interval, i.e. that one measurement is performed every 3 seconds. In the case of a decision every 5 minutes, which assumes a lower dynamic in the relative movement of people, the reduction factor is 0.04 with the same 60 measurements, but now spread over a 5-minute interval, which corresponds to one measurement every 5 seconds. So, lower requirements on the dynamic allow us both to improve the suppression of the spreading of the virus and to reduce the measurement rate. Tolerable alarm rates were derived for the train scenario. This led to the values in Table IV. The evaluation of π f a,n (d) is straightforward. For d = d c it gives π f a,n (d c ) = 1/2, as was already discussed previously. Assuming that people occupy a circular surface of 1 square meter gives them a radius δ = 1/√π. Thus, the minimum distance to people fully outside of the critical zone is d c + δ. Evaluating Equation (19) at this distance yields the corresponding false alarm probabilities for n = 1 and n = 3, respectively. This means that n = 1 is compatible with a journey of 15 minutes before sending more than the two additional people to quarantine. For n = 3, long journeys of up to 3 hours become possible with the same consequences.
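The averaging over the densest-packing geometry and the false alarm check at the minimum outside distance d_c + δ can be sketched as follows; the lognormal approximation and its parameters are the same assumptions as in the previous sketch, so the numbers are only indicative.

```python
# Averaged missed detection over the densest-packing radii r_m = sqrt(m/pi)
# and the false alarm at the minimum outside distance d_c + delta.  Lognormal
# approximation with assumed parameters; indicative numbers only.

import math

D_C, NU = 2.0, 2.0

def q_function(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pi_md(d, n=1, sigma_db=1.60):
    return q_function(math.sqrt(n) * 10.0 * NU * math.log10(D_C / d) / sigma_db)

def pi_fa(d, n=1, sigma_db=1.97):
    return q_function(math.sqrt(n) * 10.0 * NU * math.log10(d / D_C) / sigma_db)

def averaged_pi_md(n=1):
    """Average pi_md over one person per square meter packed around fellow A."""
    radii, m = [], 1
    while math.sqrt(m / math.pi) <= D_C:
        radii.append(math.sqrt(m / math.pi))
        m += 1
    return sum(pi_md(r, n) for r in radii) / len(radii)

if __name__ == "__main__":
    delta = 1.0 / math.sqrt(math.pi)        # radius of a 1 m^2 circle
    for n in (1, 3, 60):
        print(f"n={n:2d}  averaged pi_md={averaged_pi_md(n):.3f}  "
              f"pi_fa(d_c + delta)={pi_fa(D_C + delta, n):.3f}")
```

For n = 1 the averaged missed detection comes out near 0.12, which is consistent with the value quoted above for the lognormal fit, and the false alarm at d_c + δ shrinks quickly as n grows.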
The probability of false alarm does thus not strongly limit the number n of measurements aggregated to a decision and one might consider the more demanding homogeneous distribution of users. This requires a study of the combination of false alarms. Consider two fellows B and B', there is no alarm if neither B nor B' triggers an alarm, i.e.: 1 − π f a = (1 − π f a,B )(1 − π f a,B ). Furthermore, let users be at distances d c +δ(k+1) with k ∈ Z + being a positive integer and assume that there are users at that distance (they cover an angular shell of thickness 2δ). This guarantees a densest packing. In that case, the probability of false alarm, i.e. an erroneous decision in favor of C 1 , becomes: In this more demanding scenario, exemplary values are: p f a,3 = 0.413 and p f a,9 = 0.009, which means that n = 9 would be sufficient to reduce the probability of false alarm to a very small level. Table VI shows performance figures for a number of possible choices for the number n of measurements aggregated to an estimateĉ 1 , as well as for the number x 0 of estimatesĉ 1 that lead to a decision C 1 . The product of n and x 0 leads to the measurement rate ρ = x 0 n/(15 · 60). The performance figures are the reduction factor x 0 π md,n of the spreading achieved by tracing as well as the probability of unduly sending a person to quarantine. A choice with n = 15 and x 0 = 3, for example, requires a measurement to be performed every 12 seconds, suppressed the risk of spreading by a factor 0.12 and does hardly send anyone unduly to quarantine. Performing a measurement every five seconds reduces the risk of spreading by a factor 0.04. This assumes that people let their phones hang from their neck, and some standard form of environment. In reality, a number of additional factors have to be taken into account, such as a more complex propagation situation, e.g., due to metallic walls, a higher dynamic of user movements, e.g. due to people entering and exiting commuter trains, or unpredictable shadowing due to the user's hands, arms or body in the path of radio signals. Thus, it is advisable to complement the Bluetooth measurement by an alternative. Audio ranging is the option that shall be described in the next section. The idea is to use it whenever the situation is not clear. IV. AUDIO RANGING Smartphones have a microphone and a speaker with rather good transmit and receive conditions if the device is carried on the chest or held in the hand. This can be used for audio ranging up to distances of a few meters. Signals and their transmission can be configured by the API. In experiments that we performed recently, we focused on the use Android phones. The response of the microphones built into three different phones is shown in Figure 3. The references were a NT1-A microphone from Rode and an Adagio Infinite Speaker of A3 on the source side. Figure 3 shows the response of three smartphones from two different brands. The curves are very similar, suggesting that the same microphones are integrated in those phones. All microphones show a good sensitivity over all frequencies. A similar experiment was performed for the speakers with a rather different result. In that case only two smartphones were analyzed. The response on the better device is reduced by roughly 10 dB above 16 kHz, as compared to the reference. The response of the other one is degraded by another 3 dB and the degradation starts 2kHz earlier. Covering the speaker by one layer of tissue of a sweater degrades the performance by another 4 dB. 
If both parties cover their smartphones the associated attenuation adds up. Thus, the use of audio ranging requires carrying the devices in an exposed manner, e.g. hanging from one's neck, see Figure (1). Transmission at lower, less attenuated frequencies is not considered as a true option, since it would be too disturbing. The norm ISO 226:2003 compiles equivalent hearing sensitivity (isophones), which allows to compare the disturbance caused by acoustical signals on different frequencies. On the basis of such considerations, we propose modulating a carrier at 18 kHz with a modulation rate of 1 kbaud. This keeps the signal in a spectral range that is not too disturbing to most people. A spread spectrum modulation provides a good range resolution and allows to operate at a low signal-to-noise ratio at the same time. Different options exist and are discussed in [9]. Since the velocity of sound in air is c s = 343 m/s under standard conditions, a chip duration of 1 ms corresponds to a length of 34 cm. At a typical signal-to-noise ratio this leads to a distance resolution of 1 to 3 cm. Let us be conservative and assume a resolution of 5 cm. A multipath delay of two meters leads to an offset by 6 chips and is well suppressed by the autocorrelation of the spreading code. The length of the spreading code is assumed to be around 350 chips. An alternative using chirps is also considered. The performance of audio ranging is further developed in Section IV. Audio ranging can be performed in a peer-to-peer or in a networked manner. Consider the peer-to-peer situation first. Smartphones do not provide accurate timing control. However, the microphone input of a smartphone may be sampled at a fixed rate. Furthermore, smartphones can transmit and receive at the same time and this is furthermore supported by the APIs of Android and iOS. Let the smartphones thus agree to start audio ranging via Bluetooth . In a first step they open their microphone channels and then proceed according to Figure 5: at time t T X,A , A transmits the ranging signal using its speaker. This transmission is delayed with respect to the API by τ T X,A . In parallel to its transmission A's microphone capture the transmitted signal. This signal is delayed by the sum of the local propagation delay τ l,A and by the internal receive delay τ RX,A . The delay τ l,A is determined by the device geometry and can be stored in memory. A standard value of 14 cm should be appropriate for most devices on the market. The time of reception thus is: t RX,A = t T X,A + τ T X,A + τ l,A + τ RX,A , and is used for calibration purposes. The same definition of delays applies at B. Thus, the signal transmitted by A at time t T X,A is received at B at the time t RX,B : with τ being the propagation time from A to B. After reception of the signal from A by B, B sends a corresponding signal to A. The equations are obtained by changing the roles of A and B: At the end of the reception A sends to B and B sends ∆t B = t RX,B − t RX,B + τ l,B , using BLE. Thus, both can compute the propagation time: and thus the distance d = τ c s . The property of audio signals, which is crucial for this self-calibration, is the possibility to observe the own transmitted signal. A. Ranging Protocol The above peer-to-peer protocol can be extended to a networked protocol. In this case, the users agree on an ordering of transmissions via Bluetooth. All smartphones A 1 . . . A k activate their microphones and one after the other transmit their audio ranging signals. 
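The peer-to-peer exchange described above can be written down compactly in the style of BeepBeep-like acoustic ranging, in which each phone also records its own beacon with the same microphone clock so that unknown transmit and receive delays cancel. The bookkeeping below is a reconstruction consistent with the described protocol rather than the paper's exact equations; the speaker-to-microphone path τ_l (about 14 cm) enters as a stored constant.

```python
# Self-calibrating two-way audio ranging (BeepBeep-style reconstruction).
# Each phone time-stamps its own beacon and the peer's beacon on its local
# microphone clock; receive-chain delays cancel, and the local
# speaker-to-microphone paths tau_l are compensated explicitly.

C_SOUND = 343.0                 # m/s under standard conditions
TAU_L = 0.14 / C_SOUND          # ~14 cm local speaker-to-microphone path

def two_way_distance(t_a_own, t_a_peer, t_b_peer, t_b_own,
                     tau_l_a=TAU_L, tau_l_b=TAU_L):
    """Distance between A and B from microphone time stamps.

    t_a_own  : A's microphone hears A's own beacon   (A's clock)
    t_a_peer : A's microphone hears B's beacon        (A's clock)
    t_b_peer : B's microphone hears A's beacon        (B's clock)
    t_b_own  : B's microphone hears B's own beacon    (B's clock)
    """
    delta_a = t_a_peer - t_a_own          # measured entirely at A
    delta_b = t_b_own - t_b_peer          # measured entirely at B
    tof = 0.5 * (delta_a - delta_b + tau_l_a + tau_l_b)
    return tof * C_SOUND

if __name__ == "__main__":
    # Synthetic check for a true distance of 2 m (A beeps at 0 s, B at 0.5 s).
    d_true = 2.0
    stamps = dict(t_a_own=0.0 + TAU_L, t_a_peer=0.5 + d_true / C_SOUND,
                  t_b_peer=0.0 + d_true / C_SOUND, t_b_own=0.5 + TAU_L)
    print(f"recovered distance: {two_way_distance(**stamps):.3f} m")
```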
For simplicity, the scheduling is prearranged, which also works if some of the smartphone cannot acquire all signals. In this case, all delays are summed up: 350 ms for the ranging signal, 10 ms (corresponding to 4 meters) for propagation and 40 ms for the internal delays between the activation of the transmission command and the start of transmission (the latter needs to be confirmed by more data). This allows for a scheduling of a transmission every 400 ms. After the completion of the cycle and the evaluation of the reception time t RX,Ai by terminal A 1 , this terminal transmits the time difference using Bluetooth: If all terminals see each other, they transmit k(k − 1) such values in total. The annoying transmissions of audio signals remain limited to k, however. The overall time interval spanned by all transmissions in the networked protocol may be long enough for users to move slightly. This is not critical, however. The snap-shot measurements are simply converted to average values. The only instances, which require some care are those in which the audio signals are used to calibrate Bluetooth measurements. Finally, it should be emphasized that audio beacon transmissions should not be activated if the device is held to the ear. Even if the signals are hardly heard, this seems a reasonable precaution. B. Theoretical Performance of Acoustic Range Estimation The received audio signal is filtered to remove out-ofband interference and noise to the best possible extent. The filtered signal is used to determine the in-band interference and noise level N 0 and is furthermore correlated using the filtered ranging signal. For simplicity, the further exposition focuses on spread spectrum signals. In a first step the I and Q components of the correlation C(∆τ ) are computed at intervals of T c /2 with T c = 1 ms denoting the chip duration. The result is searched for the delay leading to the maximum norm |C(∆τ )|. Although, the implementations by widely used phones seem not to require that, frequency offsets may be searched as well. This allows to acquire the signal which may be present or not. Thus, it is sufficient to search for the delay (and frequency offsets) leading to the maximum norm from early to late. The latter ordering is to avoid locking on an echo. If the signal to noise ratio is above the expected threshold, the signal is assumed present. In this case, a successive refinement of the result is performed in a DLL type of processing. The power discriminator D P (∆τ ) = |R(∆τ + δ)| 2 − |R(∆τ − δ)| 2 is used to iteratively increase/reduce the delay ∆τ depending on the value of D P (∆τ ) ≷ 0. In this equation δ is half the correlator spacing and is expressed as a fraction ∆ of the chip duration: δ = ∆T c . We will restrict ourselves to ∆ = 1. A further optimization is possible, see Betz and Kolodziejski [14], [15]. The uncertainty of the delay estimate ∆τ due to noise is given by (see Dierendonck, Fenton and Ford [16]): In this expression, E i is the signal energy accumulated during the correlation, and N 0 is the spectral noise density of the audio noise and interference. The latter quantity is estimated using the norm of the filtered I and Q samples of the incoming signal: with N denoting the number of samples and with B S denoting the bandwidth of the passband filter. This estimate is performed ahead of time and is used for setting the volume of the transmission, such that E i /N 0 = 6 dB at 4 meters. 
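The delay uncertainty and its consequences can be sketched numerically. The relation σ_Δτ ≈ T_c / (2·√(E_i/N_0)) used below is an assumed simplification chosen to be consistent with the figures quoted just below (roughly a quarter chip at 6 dB and half of that at 2 m); the exact expression is Equation (25) from [16]. With a Gaussian ranging error, the per-measurement classification probabilities follow as simple Q-functions.

```python
# Sketch of audio ranging accuracy and the resulting per-measurement
# classification probabilities.  sigma_tau = T_c / (2*sqrt(Ei/N0)) is an
# assumed simplification of Equation (25); the Gaussian error model for the
# range estimate follows the discussion in the text.

import math

C_SOUND = 343.0    # m/s
T_CHIP = 1e-3      # s (1 kbaud spreading code)
D_C = 2.0          # m

def q_function(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def sigma_range(ei_n0_db):
    """Standard deviation of the range estimate in meters."""
    ei_n0 = 10.0 ** (ei_n0_db / 10.0)
    return C_SOUND * T_CHIP / (2.0 * math.sqrt(ei_n0))

def pi_md_audio(d, ei_n0_db=6.0):
    """P(estimated range > d_c | true distance d <= d_c)."""
    return q_function((D_C - d) / sigma_range(ei_n0_db))

def pi_fa_audio(d, ei_n0_db=6.0):
    """P(estimated range <= d_c | true distance d > d_c)."""
    return q_function((d - D_C) / sigma_range(ei_n0_db))

if __name__ == "__main__":
    print(f"sigma at 6 dB: {100 * sigma_range(6.0):.1f} cm")
    delta = 1.0 / math.sqrt(math.pi)
    print(f"pi_md(1.8 m) = {pi_md_audio(1.8):.3f}")
    print(f"pi_fa(d_c + delta) = {pi_fa_audio(D_C + delta):.1e}")
```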
At this level the signal can be acquired, and Equation (25) implies that σ ∆τ T c /4,which corresponds to 9 cm. At 2 meters, this is half that value, i.e. 4.5 cm. The calibration of the transmit power may be performed by listening to the own beacon. This allows detecting whether the user is inadvertently covering the microphone or the speaker, which should trigger a request to the user to remove the blockage. The distribution of audio ranging measurements is Gaussian with a standard deviation given by Equation (25). This allows computing π md , i.e the probability of deciding againstĉ 1 , as a function of the distance d ≤ d c : and π f a , i.e the probability of wrongly deciding in favor of c 1 , for distances d > d c : Note that the symmetry of lognormal fading between π md (d) and π f a (d 2 c /d) is lost. The plot for audio ranging, corresponding to σ ∆τ = 5 cm is shown in Figure 6 Again one might evaluate the average rate of missed detection and of false alarm as in Equation (22). In this case, the averaged probability of missed detection becomes π md,av = 0.016. In the present case, the number of measurements is primarily limited by the acoustical disturbances associated with the transmission of the beacon. The number of measurements n used for taking a decision is always 1. Furthermore, the number of measurements x 0 per 15 minutes must also be small for the same reason. With x 0 = 3, the reduction of the spreading rate of disease is x 0 π md,av < 0.05, which is a low figure. The probability of false alarm described by Equation (27) decays so quickly that it is insignificant at d = d c + δ, i.e. π f a (d c + δ) 0. The same applies for the integration over a two-dimensional plane according to Equation (23). The present discussion was about the contributions of uncertainty due to signaling. Additionally, the relative geometry of the microphones and speakers may add some bias, which may lead to a shift of the border to a contact zone by a few centimeters. This is rather uncritical, however. The important conclusion is that audio ranging provides sharp results. This form of ranging might thus be activated whenever the information gained by Bluetooth measurements may lead to a wrong conclusion. V. ATTITUDE SENSING This section is more a reference to options that may be considered. The benefits will become visible by the qualitative discussion of Section VI. Earth gravity in the − e z direction, i.e. towards the center of the earth and the magnetic field in the direction of e mN , i.e. towards magnetic North provide two directions that enable attitude determination. Both are seriously disturbed in ways that depend on the environment. A number of authors have investigated the quality of attitude sensing both using algorithms built into smartphones and using own estimation algorithms. Michel and co-authors summarize a number of findings [17]. They report an accuracy of 6 • with a sampling rate of 40Hz whenever the smartphone is kept in a relatively calm position (front pocket, texting or phoning). These results apply to their own algorithms "Mich-elObsF" and "MichelEkfF." They did not study the behavior in a train, which is a particularity difficult environment: with many sources of acceleration, due to the track geometry, due to passing switches or simply due to irregularities in the tracks themselves. Similarly, the magnetic field in trains is modulated by electrical motors, permanent magnets and large currents. 
On the other hand people sitting or standing next to each others are likely to be affected in a similar manner. Exploiting the latter property, however, requires the use of common standardized algorithm and precise time stamping of measurements. Carrying the smartphone by letting it hang down one's neck leads to two stable orientation, one with the display facing the chest and one with the display facing ahead. The resolution of the associated ambiguity is rather straightforward, at least as long as people do not predominantly walk backward. Alternatively, the cameras could be used for determining the orientation, since the brightness of the pictures is very different. Pitch angles are suppressed by gravity, as long as people do not bend backwards, which is unnatural. Roll angles may occur if one strap is shorter than the other one. They are compensated by sensing earth gravity. In our opinion the context of COVID-tracing is quite favorable to the use of relative attitude estimation, which would provide an interesting complement to Bluetooth sensing and/or acoustic ranging. This needs to be developed, however. VI. CLASSIFICATION The definition of a Category 1 contact by the Robert Koch Institute [18] includes three elements: • an accumulated duration of 15 minutes, which can easily be metered, • a distance of less than 2 meters, which is more difficult to establish, • and the concept of being face-to-face, discussed below. From the previous sections, specially Section II and III, we learned that under idealized conditions, Bluetooth RSSI measurements provide an adequate estimation of the distance between two fellows or more exactly an estimate on whether B is in the critical zone of A. The probability of missed detection was found to the be a critical performance measure. Audio ranging was found to be an interesting complement to Bluetooth measurements, in particular if the latter measurements are disturbed by shadowing or multipath. They provide a comparatively sharp answer, and may be used to calibrate past and future Bluetooth RSSI measurement. Audio measurements may be audible and thus annoying for younger people, as well as for dogs and other animals. As a consequence, it is beneficial to keep them sparse. In Section V, we very shortly addressed the use of attitude sensing. In this section, we shall superficially address the potential of combining these measurement types. For this discussion, it is meaningful to differentiate different poses, as shown in Figure 7. A selection of essential poses of two fellows in close proximity is shown in a top view. Fellow B is infected and exhales air charged with microscopic droplets carrying the virus. Fellow A inhales the droplets. Pose (a) in Figure 7 is what everyone would agree to call a face-to-face situation. It is the type of situation, which occurs during a meeting, lunch or in public transportation for people sitting or standing opposite to each other. It might also occur when desks are facing each other and in some other special situations. Pose (b) occurs in public transportation, in queues as well as in lecture halls, concert halls, cinemas or the like. It also appears dangerous, although Fellow B needs to be closer for that, but this might often be the case. However, unless B stands and is much taller than A, the air flow will only partially reach A's nose and mouth. A further specification by medical authorities would be helpful in this case. Pose (c) occurs in similar situations as Pose (b). 
Pose (d), (e) and (f) occur during meetings both while standing and sitting, in public transportation and some other situations. Pose (c) and (d) do not appear too critical, although B is likely to turn his head from time to time, which is not detected by the sensors considered. Pose (d), (e), and (f) are difficult to differentiate even using perfect ranging and orientation. Assuming that there is no specific direction in the air-flow, due to wind or draft, and that the different poses can be differentiated, medical requirements would probably choose • Pose (a), (d), and (e) to be Category 1, i.e. critical, • Pose (b) would be critical for a lower distance which might depend on the height differences, • Pose (c) and (f) would be essentially uncritical. The possibility to discriminate the cases depends on the type of sensing, as described so far, and is discussed in the following three sections. A. Bluetooth-only Measurements BLE RSSI measurements will return similar results for the Poses (a), (c), (d), and (e). The distance d between the fellows might appear larger in Pose (f) than it actually is. This is uncritical, however. In Pose (b), the received power will be associated with a larger distance than the actual one, as well. Depending on how Pose (b) is classified, this leads to a missed detection. A similar situation may also occur in Pose (e) whenever Fellow A obstructs the line of sight with his left harm, e.g. by holding himself on a bar in public transportation. All missed detection events are critical since they leave close encounters undetected. Finally, the poses (c) and (d) will typically generate false alarms, which sends people to quarantine and testing. This sort of differentiation has not been considered so far, at least to our knowledge. B. Bluetooth, Attitude Sensing The addition of a attitude sensing, allows to separate the cases of "Pose (b) with a small distance" from "Pose (a) with a large distance". Thus, it might use a lower threshold in the case of an aligned attitude and thus avoid the missed detection events in Pose (b). With a lower threshold, however, fellows in Pose (c) will be identified as C 1 up to a rather large relative distance, potentially generating many false alarms. C. Bluetooth, Attitude Sensing and Audio Ranging An extensive use of audio ranging, would eliminate false alarms mostly. It would implement the conditions of Category 1 without the alleviation due to the the condition of being face-to-face. When combined with the other measurements, audio measurements provide additional discrimination and allow reducing the rate of missed detection and false alarms. In reality, acoustical signals are subject to multipath, which might be critical if the direct path is strongly attenuated. Since the receiver searches from early to late it is unlikely to be induced in error, however, as long as the direct path can still be detected. VII. CONCLUSIONS Difficulties in Bluetooth RSSI-based ranging are mentioned by a number of scientists orally. The significant attenuation by the human body and other influencing factors, such as keys, coins, metallic pens, business card holders and the like make the power levels very unpredictable. We thus propose to standardize the wearing of smartphones or alternative devices on the chest, when not held in the hand or used for making phone calls. This provides an environment that is much better defined for Bluetooth RSSI-based ranging, audio ranging and attitude determination. 
Currently, we see no alternative to the present setting that allows for an analysis of the tracing performance in terms of identifying Category 1 contacts while avoiding unduly frequent alerts for contacts that are not Category 1. The analysis shows that the accumulated statistics require low figures for the per-event missed detection rate. This can be achieved with measurements every few seconds aggregated into decisions every few minutes, which is adequate for stable distributions of people, such as in a meeting, at lunch, in a train and the like. The false alarm rate is a lesser problem as soon as a few measurements are aggregated. The analysis presented in the paper is a preliminary one. Much more experimental data should be generated to refine the findings. In Germany, the current probability of encountering an infected person is rather low. In such a context the performance does not matter too much. There are many regions in the world where this is not the case, however. It would thus be quite beneficial if this work were taken up and further developed, in particular with respect to attitude sensing. Some individuals may reject the idea of carrying their smartphone around their neck. This could be addressed by producing decorative gadgets which are less obtrusive to wear. Beyond that, carrying a device around the neck also enables the use of the camera. This would allow the evaluation of the risk to be refined further, but it would drain the battery much faster and would raise privacy concerns. Thus, the use of the sensors addressed in the present paper seems to remain the most promising option. In the future, Bluetooth ranging should be considered as well. The complete analysis of the paper and its validity rely on the current model of infection of the Robert Koch Institute.
2020-08-04T01:01:02.032Z
2020-08-02T00:00:00.000
{ "year": 2020, "sha1": "4bc8a772580e914b12d671c96d7df12d44b2da0a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6defbc153cb85950cffdcfede944ae96622a51b0", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
78168093
pes2o/s2orc
v3-fos-license
Investing in the future of Nigeria’s health work force: Strengthening human resources for health through sustainable pre service HIV/AIDS training systems at nursing, midwifery & health technology training schools in SE Nigeria: A case study : 2.013_HRW Investing in the future of Nigeria’s health work force: Strengthening human resources for health through sustainable pre service HIV/AIDS training systems at nursing, midwifery & health technology training schools in SE Nigeria: A case study T. Madubuko, A. Nwandu, E. Onu, N. Kehinde, A. Olutola; Center for Clinical Care and Clinical Research Nigeria, University of Maryland School of Medicine-Institute of Human Virology Baltimore, MD, University of Nigeria Program Purpose: Center for Clinical Care and Clinical Research Nigeria (CCCRN), in collaboration with local teaching institutions in Nigeria, sought to more closely align USG-funded HIV/AIDS efforts with the national programs through a program called Partnership for Medical Education and Training. The goal was to enhance capacity at the pre service training level in the management of HIV disease, by revising the HIV training curriculum to emphasize role specific core competencies that in turn ensure “practice ready” graduates. Structure/Method: Multiple advocacy and consensus buildingmeetings for stakeholders were held, followed by a comprehensive training needs assessment of five schools of nursing and 4 schools of midwifery, 3 schools of health technology in the South East of Nigeria. Pre service faculty were assessed for teaching/mentoring knowledge and skills to identify capacity gaps as well as presence or absence of ongoing HIV related education for faculty and students using structured questionnaires and key informant interviews. The required infrastructure for effective implementation of these trainings in the institutions was also assessed. This resulted in the following interventions-Curriculum review, Training of Trainers for faculty, refurbishing of the identified training halls and libraries, provision of teaching and training materials and books. Outcomes: The completed documents from the curriculum review were formally submitted to the respective regulatory bodies for adoption and provisional concurrence for their implementation was sought. A total of 37 faculty received training to implement the new curriculum, 28 participants trained on training of trainers on managerial competence for health care providers and a total of 3,108 undergraduate students from the 12 institutions benefitted from the revised curriculum between 2013 to 2014. Pre and post test results indicated a significant increase in knowledge (65% mean pre-test to 89% mean post test score). Regular quarterly technical assistance visits to the institutions further strengthened the programme. Going Forward: Strengthening pre-service education in tertiary level schools helps to provide a “practice ready” workforce that can assist in bringing the HIV/AIDs pandemic under control. The success of the program can be attributed to collaborative and participatory nature of the process with clear understanding and cooperation by all stakeholders. Funding: PEPFAR (CDC). Abstract #: 2.014_HRW: 2.014_HRW Understanding barriers to vaccination in an urban slum of Methods: We conducted 30 interviews with women from 3 communities. Interviewees included participants in the Primeros Pasos Nutrition Program and other non-affiliated individuals in these communities. 
Outcomes: We found that individuals distinguish between biomedicine and natural medicine. There are illnesses curable by clinical medicine and those that require the attention of a natural healer. The latter are considered unexplainable by biological causes and incurable by biomedicine. These culturally-specific illnesses include, but are not limited to, mal de ojo, its more advanced version el chipe, their relative lombrices, and susto. Community members do not seek clinical health services for various reasons. Many believe that clinical healthcare workers do not recognize culturally-specific illnesses and that they are unable to provide adequate treatment or may cause further harm. In addition, location and affordability often play a role in how community members decide between natural and biomedical treatment. Health beliefs surrounding these topics are transmitted through multiple systems: family and friends, schools, and outreach programs by aid organizations such as Primeros Pasos. We also found that amongst different communities there is wide variation in cultural health beliefs. Going Forward: These results demonstrate a greater need for addressing existing cultural health beliefs and other non-biomedical health factors. We suggest communication with community healers as a starting point for generating greater collaboration between communities and aid organizations such as Primeros Pasos in order to augment clinical treatment and improve education programs.

Background: Immunization is one of the most cost-effective public health initiatives regarding disease control and is an indicator of health-seeking behavior. Despite freely available vaccinations provided by GAVI and the national EPI program, Pakistan is one of two countries in the world with wild polio virus circulating. Has a vaccination rate of only 54% according to the Demographic Survey (2012)(2013). Urban slums with poor sanitation and housing density pose the highest risk of disease spread, yet few studies have surveyed this population. The objective was to determine the vaccination status amongst the population of 50,000 in an urban slum in Karachi, Pakistan and to analyze the knowledge, attitudes and practices towards immunization, which may be limiting vaccine acceptance and uptake.
2019-03-16T13:05:42.567Z
2016-05-01T00:00:00.000
{ "year": 2016, "sha1": "7e375572ee282dada365ac166ac0cf50ff65c243", "oa_license": "CCBY", "oa_url": "http://www.annalsofglobalhealth.org/articles/10.1016/j.aogh.2016.04.309/galley/1148/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "afa1370c9177718f7425355871a26284bf9fb996", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18242290
pes2o/s2orc
v3-fos-license
Using Drugs to Probe the Variability of Trans-Epithelial Airway Resistance Background Precision medicine aims to combat the variability of the therapeutic response to a given medicine by delivering the right medicine to the right patient. However, the application of precision medicine is predicated on a prior quantitation of the variance of the reference range of normality. Airway pathophysiology provides a good example due to a very variable first line of defence against airborne assault. Humans differ in their susceptibility to inhaled pollutants and pathogens in part due to the magnitude of trans-epithelial resistance that determines the degree of epithelial penetration to the submucosal space. This initial ‘set-point’ may drive a sentinel event in airway disease pathogenesis. Epithelia differentiated in vitro from airway biopsies are commonly used to model trans-epithelial resistance but the ‘reference range of normality’ remains problematic. We investigated the range of electrophysiological characteristics of human airway epithelia grown at air-liquid interface in vitro from healthy volunteers focusing on the inter- and intra-subject variability both at baseline and after sequential exposure to drugs modulating ion transport. Methodology/Principal Findings Brushed nasal airway epithelial cells were differentiated at air-liquid interface generating 137 pseudostratified ciliated epithelia from 18 donors. A positively-skewed baseline range exists for trans-epithelial resistance (Min/Max: 309/2963 Ω·cm2), trans-epithelial voltage (-62.3/-1.8 mV) and calculated equivalent current (-125.0/-3.2 μA/cm2; all non-normal, P<0.001). A minority of healthy humans manifest a dramatic amiloride sensitivity to voltage and trans-epithelial resistance that is further discriminated by prior modulation of cAMP-stimulated chloride transport. Conclusions/Significance Healthy epithelia show log-order differences in their ion transport characteristics, likely reflective of their initial set-points of basal trans-epithelial resistance and sodium transport. Our data may guide the choice of the background set point in subjects with airway diseases and frame the reference range for the future delivery of precision airway medicine. Introduction The ciliated airway epithelium is the first line of host defence against airborne assault [1]. One measure of the integrity of this epithelial barrier is the product of its electrical (ohmic) resistance and epithelial surface area, known as trans-epithelial electrical resistance (TER; OÁcm 2 ). The clinical relevance is that TER dysregulation may drive disease pathogenesis. For example, it has been proposed that chronicity in asthma may be cued by an initial genetic and or environmental propensity that lowers TER, facilitating epithelial penetration that subsequently drives irreversible airway remodelling [2]. The clinical importance of TER is further illustrated by recent data suggesting that cigarette smoke, inhaled pollutants or acid exposure due to gastro-oesophageal reflux can all dysregulate tight junctional proteins by signalling through ion channels and/or acid sensors [3][4][5]. Furthermore, in Cystic Fibrosis (CF), mutation of one apical channel, the CF transmembrane conductance regulator (CFTR), disturbs a regulatory (proteostasis) network that, inter alia, controls TER [6]. 
Additionally, understanding TER regulation may also be important for new therapies aimed at rare inherited airway diseases, where a better understanding of the means to lower TER might aid penetration of agents targeted to repairing innermost progenitor cells that renew damaged airways [7,8]. Here, we focus on TER in primary human nasal epithelial (HNE) cells grown at air-liquid interface (ALI) with the caveat that many different methods exist to culture such cells [9][10][11] but as yet, open standard operating procedures that define a "normal" range of values for TER are not available. Perhaps a better term might be reference range because 'normal' is not agreed. Moreover, transparency of reporting is not aided by the polar opposite opinions about the relevance of the magnitude of TER measured in a given in vitro experiment [12]. These decade old controversies are unresolved which prompted us to review the factors underlying the plasticity of TER values generated by human nasal epithelia reconstituted in vitro derived from apparently healthy volunteers. Hence, our aim was firstly, to publish our reference range for the distribution of baseline TER values across nasal turbinate derived ALI cultures; secondly, to determine the range of TER across multiple ALIs derived from a given volunteer and thirdly, to quantitate drug-induced changes in TER after sequential manipulation of sodium and chloride ion transport using two differently ordered protocols (chloride transport stimulation after cAMP elevation followed by sodium transport inhibition, or vice versa). The data suggest that baseline TER is not normally distributed with dichotomous responses to drugs targeting ion channels. We propose measures to normalise the wide range of TER both at baseline and characterise the differential response to drugs acting on ion transport proteins such as the epithelial sodium channel (ENaC) and CFTR. Cell culture The cell culture protocol is based on a combination of different approaches [16][17][18] leading to our standard operating protocol (SOP); where reference is made to additional notes these may be found in the S1 File. After initial screening studies using different cell culture media (n>30 donors, data not shown) we identified two commercially available serum-free media that efficiently propagated cells on T25 collagen-coated dishes. Both media were used as per manufacturer's recommendation and additionally supplemented with 2% Penicillin/Streptomycin and 100μg/ml Primocin. From this initial work, a culturing methodology was optimized. Brushings were all collected in phosphate buffered saline (PBS) at room temperature and centrifuged at 122×g for 4 min. The cell pellet was washed once with PBS and then the cells were seeded onto flasks that were collagen-coated (Bovine Collagen I-note-1) at 5 μg/cm 2 in PBS for at least 1 hour at 37°C. Prior to cell seeding, the plate surface was washed twice with PBS. Typically, each pooled brushing (from six nasal scrapes, three per side) was sufficient to seed two T25 flasks (note-2 to 4). Cells were firstly seeded in CellnTec media, for 48h (adherent cells were designated P01) on collagen-coated T25 flasks, after which the supernatant containing the nonadherent cells was centrifuged (4 min at 122×g) and the cells were resuspended in the Promo-Cell Airway Epithelial Growth media, and seeded in T25 flasks for another 48h (attached cells then designated P03). 
After that time, the supernatant containing the non-adherent, and mostly spinning ciliated spheroids was transferred into a new flask and used for other experiments while fresh CellnTec media was added back to the original P03 T25s. The attached P01/P03 cells were grown until 80-90% confluent (typically 7 to 10 days post-seeding, note-5), and exposed to a cell detachment solution (Accutase, 1 ml per T25), monitoring until all cells detached. Cells were then re-seeded on to collagen-coated Transwell or Snapwell inserts, depending on the nature of the subsequent experiments. In general, from two T25 plates (either P01 or P03), 12 Snapwells and 2 Transwells could be seeded (note-6). The cells were grown in CellnTec media submerged until confluence was reached (typically after 2-3 days) and then the medium was removed on the apical side to establish ALI. Pro-differentiation media, which consisted of a 1:1 ratio of DMEM high glucose and PromoCell containing 1X concentration of PromoCell supplements plus 50 nM all-trans retinoic acid, was added to the basolateral side. Medium was changed in the lower chamber of the inserts and the cells were washed apically with cell media every two days (note-7 and 8). Trans-Epithelial Electric Resistance (TEER) was assessed in situ every three days using a chopstick Epithelial Voltohmmeter (EVOM2, World Precision Instruments - WPI, Stevenage, England, UK) and inserts with TEER >400 Ω/insert (note 9) were used for Ussing Chamber experiments. Ciliogenesis was typically observed after ~3 weeks at ALI with evidence of tight junction formation.

Ussing Chamber Snapwell supports with confluent and resistive cells were mounted in an Ussing chamber and bathed both apically and basolaterally with Hank's Balanced Salt Solution (HBSS, composition in mM: 137.93 NaCl, 4.17 NaHCO3, 1.26 CaCl2, 0.49 MgCl2, 0.41 MgSO4, 5.33 KCl, 0.44 KH2PO4, 0.34 Na2HPO4, 5.56 D-Glucose), bubbled with 5% CO2/95% O2 at 37°C. Experiments were performed as described in [19]. Briefly, the epithelia were maintained under open-circuit conditions and the spontaneous trans-epithelial potential difference (V) was monitored (DVC-1000 Voltage/Current Clamp, WPI) and recorded (4 Hz) electronically (ADI Powerlab Interface and associated software; AD Instruments, Chalgrove, Oxfordshire, UK). Experiments were initiated once V had stabilized (20-30 min); then standard pulses of trans-epithelial current (20 sec, -10 μA/cm2) were injected every 40 sec, leaving the epithelia to stabilize again (10-15 min) before adding any drug. The spontaneous voltage generated by the cells is reflective of the in vivo lumen-negative voltage when an electrode is placed on a nasal turbinate and connected in series with a voltmeter attached to another electrode in the basolateral space. This universally observed negative deflection of the turbinate surface electrode in humans is thought to be due to positively charged sodium ions moving towards the blood, inferred from the near collapse of that voltage when the sodium blocker amiloride is added to the perfusate bathing the turbinate, or in the Ussing chamber equivalent, when amiloride is added to the apical side of the cultured cells. We parameterised the role of sodium transport by cation substitution experiments in order to prove that the voltage at baseline was sensitive to sodium withdrawal (with N-methyl-D-glucamine, NMDG; S1 Fig).
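As an illustration of the TEER bookkeeping described above, the short Python sketch below converts raw chopstick-voltohmmeter readings (Ω per insert) into area-corrected resistance and applies the >400 Ω/insert cut-off used to select inserts for Ussing chamber work. The insert growth area and the blank-insert resistance are assumed example values (blank subtraction is standard practice but is not specified in the protocol above), so this is a sketch rather than the authors' exact calculation.

```python
# Minimal sketch: raw EVOM2 readings (ohm per insert) -> area-corrected TER (ohm*cm^2),
# plus the >400 ohm/insert quality cut described in the text.
INSERT_AREA_CM2 = 1.12        # assumed growth area of a 12 mm insert (example only)
BLANK_OHM = 100.0             # assumed resistance of a blank, cell-free insert
QC_THRESHOLD_OHM = 400.0      # inserts above this raw value go on to Ussing studies

def ter_ohm_cm2(raw_ohm: float, blank_ohm: float = BLANK_OHM,
                area_cm2: float = INSERT_AREA_CM2) -> float:
    """Subtract the blank-insert resistance and normalise to unit area."""
    return (raw_ohm - blank_ohm) * area_cm2

readings_ohm = [350.0, 620.0, 910.0, 1480.0]   # hypothetical readings, not study data
for r in readings_ohm:
    usable = r > QC_THRESHOLD_OHM
    print(f"{r:7.1f} ohm/insert -> {ter_ohm_cm2(r):7.1f} ohm*cm2, "
          f"{'use for Ussing chamber' if usable else 'exclude'}")
```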
Next, we undertook pilot experiments showing that amiloride and the high affinity ENaC blocker benzamil at a 10-fold lower dose (1 μM, 10 min) were electrophysiologically indistinguishable (data not shown). Thereafter amiloride was used as the most cost effective, widely used inhibitor, thereby facilitating others to compare their protocols against our reference range. Additionally, two different concentrations of the CFTR inhibitor gave the same inhibition of forskolin-stimulated ion transport (data not shown).

Drug addition regimes We determined how TER, V and the ratio of V/TER [I Eq] changed under two drug regimes in which drugs were added sequentially, without any washout. 1. To raise cAMP, forskolin (FSK) was added both apically and basolaterally at 10 μM for 15 min, followed by the CFTR blocker, CFTR Inh172, apically either at 8.0 or 4.0 μM for 15 min. Amiloride (AMI) was finally added apically at 10 μM for 10 min. 2. Vice versa, such that amiloride was added first followed by forskolin and CFTR Inh172, conditions as above. The important caveat to the ion transport data reported herein is that the results reflect symmetrical apical and basolateral sodium chloride (137.9 mM) as this eliminates differences across the paracellular space. For each ALI, V, TER and [I Eq] values (measured or calculated) during a 4-min window out of the 10-15 min initial stable state were averaged and termed Baseline (BAS.). The same approach was applied for the values after amiloride and CFTR Inh172 addition. For the means with forskolin, the time window for averaging was increased to 6 min. Additional information relating to brush biopsies, cell culture (SOP and immunofluorescence) and trans-epithelial resistance calculation can be found in the S1 File.

Statistics Analysis was performed with GraphPad Prism 6.0 (GraphPad Software Inc., La Jolla, CA, USA) and results expressed as means ± errors as specified in Fig legends. Data were compared using unpaired Student's t-test or one-way ANOVA. Where tests of normality (Shapiro-Wilk) failed, data were analysed using non-parametric tests (Mann-Whitney U test and Kruskal-Wallis ANOVA + Dunn's multiple comparison test). Groups of data were considered to be significantly different if P < 0.05.

Results Our final standard operating protocol (Fig 1A, top panel, P01 and P03 submerged 'source' cultures, Fig 1B) for the differentiation of human nasal epithelia was applied to 18 healthy donor biopsies. This SOP for differentiation at ALI generated ciliated HNE cultures, with spontaneous apical negative voltage (resistive after 15-20 days in culture) in >98% of brushings. A typical fully differentiated epithelium is shown in Fig 1C demonstrating both cilia and tight junction formation.

Baseline electrophysiological values When ALI cultures developed resistance, each was mounted in an Ussing chamber with symmetrical solutions bathing apical and basolateral surfaces. These data also show no relationship between initial voltage (V) and calculated TER (Fig 2D). The baseline TER values were highly scattered with a ~9-fold difference between the lowest and highest resistances (compare Min and Max, Fig 2E), with most TER values clustered in the 500-800 Ω·cm2 range as reflected in the 'Christmas tree' shape of the distributions in Fig 2B (see also frequency distribution graphs in S2A Fig showing that nearly 60% of ALIs have TER between 300 and 900 Ω·cm2, 25% of the total in the range 500-700).
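The baseline values described above are window averages of the recorded traces, with the equivalent current derived from Ohm's law (I Eq = V/TER). The sketch below illustrates that arithmetic under stated assumptions: the 4 Hz sampling rate is taken from the recording description, while the trace values and the position of the 4-min window are invented for illustration.

```python
# Sketch of the baseline analysis: average V and TER over a 4-min window of the stable
# recording, then derive the equivalent current I_eq = V / TER (Ohm's law).
SAMPLE_HZ = 4                       # recording rate quoted for the Powerlab interface
WINDOW_S = 4 * 60                   # 4-minute averaging window

def window_mean(trace, start_s, hz=SAMPLE_HZ, window_s=WINDOW_S):
    i0 = int(start_s * hz)
    seg = trace[i0:i0 + int(window_s * hz)]
    return sum(seg) / len(seg)

def equivalent_current_uA_cm2(v_mV: float, ter_ohm_cm2: float) -> float:
    # mV / (ohm*cm^2) = mA/cm^2, so multiply by 1000 to express in uA/cm^2
    return (v_mV / ter_ohm_cm2) * 1000.0

# hypothetical stable-state traces (values in mV and ohm*cm^2), not study data
v_trace = [-24.0] * (SAMPLE_HZ * 900)
ter_trace = [650.0] * (SAMPLE_HZ * 900)

v_bas = window_mean(v_trace, start_s=600)       # window placed inside the stable period
ter_bas = window_mean(ter_trace, start_s=600)
print(v_bas, ter_bas, equivalent_current_uA_cm2(v_bas, ter_bas))  # ~ -36.9 uA/cm2
```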
Statistical analysis confirmed the non-normal distribution for all the parameters (P<0.001). This asymmetry at an ALI level (which persisted after logarithmic transformation of voltage and TER but not calculated current, data not shown) resulted from very high values in a minority of ALIs, consistent with the idea that these have electrical properties that differ from the majority. This finding that log transformation failed to normalise TER and voltage values prompted re-clustering at a volunteer level to test the hypothesis that high values were a characteristic of a given donor's cells. First, we ranked the TER at baseline by donors' mean values in ascending order (abscissa, Fig 3B) and cross compared the spread of voltage and equivalent current (respectively Fig 3A and 3C). As shown in Fig 2D, it was not possible given a starting voltage to predict the corresponding TER. Interestingly, only a minority of ALIs from a single individual had a range of voltage values that spanned the extremes of the population distribution (e.g. donors C, H, L and N in Fig 3A). Fig 3D shows the length of time in culture at ALI in differentiation media to determine whether this could be used to predict the spread of the baseline values shown above. As shown, there is no discernible relationship and the intra-donor variability cannot therefore be explained by the different length of time in which the cultures have been grown prior to Ussing Chamber analysis. Our data suggest that some individual donors vary significantly in their baseline TER (Fig 3B). In summary, the non-normal distribution of the TER values is best explained by the observation that there are a greater number of individuals than expected at the extremes of the distribution. This means there are more individuals with a very low or high TER compared to a normal distribution, as shown in S2 Fig.

Drugs as probes to study TER plasticity Next, we investigated how TER values for a given ALI change in response to the sequential addition of drugs that elevate cyclic AMP (forskolin, FSK), inhibit sodium transport (amiloride, AMI), or inhibit CFTR (inhibitor CFTR Inh172). The rationale for the use of these drugs is described in the methods. We used two different orders of sequential addition of drugs, as described in the drug addition regimes above. With drug regime I, ENaC was inhibited after pharmacological opening and closure of CFTR. Forskolin decreased TER by ~30% with respect to baseline, this effect being mostly counteracted by CFTR Inh172 (increase up to ~90% of BAS.). Subsequent amiloride addition raised TER by ~30% with respect to baseline, also revealing a minority of ALI cultures whose resistance rose into the >3000 Ω·cm2 range (+AMI in Fig 4B-I; see also S2 Fig, panel B-I for changes in relative frequency distribution of values). Variability of response was reflected in the 13-fold difference between the Min and Max values (+AMI, Fig 4C-I). This rise in a minority of ALIs is an amiloride-driven effect and not an artefact of the regimen since it recurred when regime II was applied (no prior chloride transport modulation, Fig 4B-II and 4C-II). Upon amiloride addition TER increased by ~40% with respect to baseline; again very high resistances were observed in a minority of ALIs (see also S2 Fig, panel B-II). Forskolin decreased TER to ~90% of BAS., while CFTR Inh172 increased TER to ~35% above baseline.
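The test-selection logic stated in the Statistics section (Shapiro-Wilk first, then a parametric or non-parametric comparison) can be summarised in a few lines of SciPy. The snippet below is a generic illustration using synthetic right-skewed samples as stand-ins for TER data, not the study's own numbers.

```python
# Sketch of the normality-then-test-choice workflow described in the Statistics section.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.lognormal(mean=6.4, sigma=0.5, size=40)   # right-skewed, TER-like
group_b = rng.lognormal(mean=6.6, sigma=0.5, size=40)

def is_normal(x, alpha=0.05):
    _, p = stats.shapiro(x)
    return p > alpha

for label, x in [("raw", group_a), ("log10", np.log10(group_a))]:
    print(label, "normal" if is_normal(x) else "non-normal")

if is_normal(group_a) and is_normal(group_b):
    stat, p = stats.ttest_ind(group_a, group_b)
    test = "unpaired t-test"
else:
    stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    test = "Mann-Whitney U"
print(f"{test}: p = {p:.3f} ({'significant' if p < 0.05 else 'not significant'} at 0.05)")
```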
There was a considerable rise in the median value in the presence amiloride alone (from 815 to 1201), which was not observed in drug regime I when amiloride was added after forskolin and CFTR Inh172 . This suggests that amiloride exposure after CFTR modulation was acting on the epithelium differently with respect to TER when compared to amiloride administered alone. Non-parametric statistical analysis demonstrated no significant difference between groups except +FSK vs +AMI in drug regime I. Contrastingly, drug regime II showed no significant difference between BAS. and +FSK, or between +AMI and +CFTR Inh172 . See also S4 Fig (along with Tables E and F in S1 File, for mean data values) that shows how TER changes for all of the ALIs after the addition of drugs and also for the individual donor responses suggesting that certain individuals respond differently to drug administration. This variability prompted a deeper analysis. Initial baseline vs rolling baseline approach to data analysis These analytical complexities prompted us to re-examine drug induced effects on TER using a different approach reflective of the starting value of TER. First, we determined whether the baseline TER was predictive of the degree of change upon drug addition (Fig 5). Initially, we plotted the drug-induced TER values against a common starting baseline TER, for each regime (Fig 5A-5D, Drug regime I and II). The hypotenuse of each shaded triangular area in each panel of Fig 5 is the line of identity between the TER response to a drug (ordinate) and the baseline (abscissa), the latter being the start value of TER. It can be seen that in drug regime I, forskolin induced a fall in TER irrespective of the magnitude of the baseline TER ( Fig 5A-I, inset). Interestingly CFTR Inh172 restored the TER back to the line of identity suggesting that the forskolin effect in lowering TER was largely driven by CFTR activation. Under these conditions, AMI induced a rise in TER to above the line of identity, but only for a minority of ALIs (Fig 5C-I) suggesting that there may exist two populations of amiloride-insensitive (data near the shaded area) and amiloride-sensitive ALIs (data lying outside the confidence intervals in the main Fig 5C). This is further exemplified in Fig 5D-I, where the low amiloride responders (LOW, closed grey circles) have been separated from the high responders (HIGH, open squares). The cut off to determine our division of this population into two groups was based on whether the fold increase of TER was above or below the slope value in Fig 5C-I. To test the validity of this arbitrary choice, we repeated this analysis with drug regime II. On this occasion however, amiloride induced a rise in TER for all ALIs, but of different magnitude enabling the discernment of two groups of responders, LOW and HIGH, grouped by their fold rise in TER i.e. an increase being above or below the mean of the population ( Fig 5A-II and 5B-II). Once again forskolin drove the TER below the line of identity (but only for LOW responders, see regressions in Table B in S1 File) and the two groups of LOW and HIGH responders remained quite distinct even when the CFTR Inh172 was superimposed (Fig 5C-II and 5D-II). This analysis shows that the changes in TER after the addition of drugs are independent of the baseline magnitude and the major changes that occur upon amiloride addition are specific to this compound, but partially affected by the order of its administration with respect to modulation of anion transport. 
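The LOW/HIGH split used for drug regime II amounts to comparing each ALI's fold change in TER after amiloride with the mean fold change of the population. A minimal sketch of that grouping rule, using invented TER values, is shown below.

```python
# Minimal sketch of the LOW/HIGH amiloride-responder grouping for drug regime II:
# ALIs whose fold change in TER exceeds the population mean are labelled HIGH.
baseline_ter = [520.0, 610.0, 730.0, 480.0, 560.0, 800.0]        # ohm*cm^2, illustrative
post_amiloride_ter = [690.0, 760.0, 2900.0, 640.0, 2400.0, 980.0]

fold_change = [post / pre for pre, post in zip(baseline_ter, post_amiloride_ter)]
mean_fold = sum(fold_change) / len(fold_change)

groups = ["HIGH" if fc > mean_fold else "LOW" for fc in fold_change]
for pre, post, fc, g in zip(baseline_ter, post_amiloride_ter, fold_change, groups):
    print(f"{pre:6.0f} -> {post:6.0f}  fold {fc:4.2f}  {g}")
print(f"population mean fold change: {mean_fold:.2f}")
```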
Next, we re-plotted the data but now assuming that the TER value in the presence of a given drug generated a new drug-induced baseline with each addition (Fig 6). Using this rolling baseline approach, in Fig 6B-I, CFTR Inh172 doubles the slope (compare with panel A-I). Subsequent addition of amiloride increases the slope yet further (Fig 6C-I), and the two groups of LOW and HIGH responders are now discrete (Fig 6D-I). In regime II, where amiloride is added first, differences between the LOW and HIGH groups were more difficult to discern when more than one drug was present, showing differences in the effect of CFTR Inh172 but not forskolin (Fig 6B-II and 6C-II; see slope t-test, Table C in S1 File). S5 Fig shows the data redrawn for ease of comparison. First, the baseline resistance is used as a common reference point (S5 Fig, panels A-I and A-II). Alternatively, each drug-induced resistance value is used as a rolling reference point where the focus is on the individual effect of a given drug (S5 Fig, panels B-I and B-II). The response patterns are qualitatively different but in each case outliers become apparent as certain drugs are added, with each inhibitor of ion transport showing the greatest outlier generating effect (see also S6 Fig for a quantitative analysis of the range of outliers).

(Fig 5 legend: (A-I) The initial baseline TER values (closed grey circles) are plotted against the values obtained after addition of FSK (n = 44), (B-I) FSK+CFTR Inh172 (n = 44) and (C-I) FSK+CFTR Inh172+AMI (n = 44); line of regression is shown as a dashed line with 99% CI (grey vertical bars). In D-I, same data as in C-I showing two distinct populations of amiloride responders designated as LOW (closed grey circles, black dashed regression line + 99% CI vertical bars, n = 31) and HIGH (open squares, black regression line + 99% CI dotted line, n = 13). (A-II) The initial baseline TER values (closed grey circles) are plotted against the values obtained after addition of amiloride; line of regression is shown as a dashed line with 99% CI (grey vertical bars). In B-II, same data as in A-II showing two distinct populations designated as LOW (n = 57) and HIGH (n = 24) amiloride responders. Changes in TER with cumulative drug addition of (C-II) AMI+FSK (LOW n = 56, HIGH n = 24) and (D-II) AMI+FSK+CFTR Inh172 (LOW n = 56, HIGH n = 21). Additional regression and statistical analysis data are shown in Table B in S1 File. doi:10.1371/journal.pone.0149550.g005)

Earlier, we had found that individual donors' data were reflective of ALI data as a whole. Hence, we re-examined the drug responses at the level of individual donors. Fig 7B-I shows that upon amiloride addition two distinct groups of volunteers are revealed in regime I using the rolling baseline approach, whose existence is implied but not clearly discriminated using the other drug regime (+AMI, Fig 7A-II and 7B-II). For completeness we also show the mean responses when the reference is set to the initial baseline TER (Fig 7A-I and 7A-II). Our initial findings were that there was no relationship between baseline voltage and baseline TER. However, when the initial voltage is plotted for a given ALI against its TER ratio response to amiloride addition, we now observe a curvilinear relationship between the two parameters (S7 Fig). Alternatively, plotting the logarithm of the initial voltage, two distinct populations emerge, especially in drug regime II; and also in drug regime I but only when the TER ratio is recalculated as +AMI/+CFTR Inh172 (rolling approach, S7 Fig, panel D-I).
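The two normalisation schemes contrasted above differ only in the reference value used for each ratio: the single initial baseline versus a rolling, drug-induced baseline. The short sketch below makes that difference explicit for one hypothetical sequence of TER values.

```python
# Sketch contrasting "initial baseline" versus "rolling baseline" ratios.
# Cumulative TER after each step of drug regime II (values are illustrative only):
ter_steps = {"BAS": 640.0, "+AMI": 1250.0, "+AMI+FSK": 860.0, "+AMI+FSK+CFTRinh172": 1040.0}

labels = list(ter_steps)
values = list(ter_steps.values())

initial_ratios = {lab: val / values[0] for lab, val in ter_steps.items()}
rolling_ratios = {labels[i]: values[i] / values[i - 1] for i in range(1, len(values))}

print("ratios vs initial baseline:", {k: round(v, 2) for k, v in initial_ratios.items()})
print("ratios vs rolling baseline:", {k: round(v, 2) for k, v in rolling_ratios.items()})
```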
Applying the grouping as in Figs 5 and 6, the two populations of LOW and HIGH responders remain distinct, and are more clearly separated with drug regime II (Fig 8). These graphs show that the HIGH amiloride responders have higher baseline voltages. Importantly, from the comparison of the groups obtained with either of the drug regimes and the different analytical approaches (Figs 5 and 6), we observed that the outliers in the presence of amiloride (HIGH responders, for example Fig 7B-I and 7A-II) belong to the same donors irrespective of whether amiloride is added first or last (compare also donors with ratios above 1.5 for +AMI in S6 Fig, panels B-I and A-II). This suggests that healthy volunteers demonstrate significantly different changes in TER in response to drugs such as amiloride that fall into dichotomous groups after drug challenge.

Discussion Many research groups use cultured nasal brushings to model airway function [9,20], albeit with differences in opinion on the relevance of such biopsies [12,21], which is not surprising given the many different protocols for their cell culture together with different methodologies to assay bioelectrical properties [22]. Our results are comparable to the literature for either nasal [16,23] or tracheal/bronchial [11,24] epithelial cells, and those from commercial sources [25]. Our observed spread of values, i.e. intra-donor variability between ALIs from a given biopsy, is most likely caused by differences in the seeding composition of progenitors (Fig 1A), since this variability cannot be explained by differences in the length of time in culture as shown in Fig 3D. These causes are beyond the scope of the current paper but have been speculated on recently [26]. Such TER variability poses a severe challenge for the very idea of normal controls, which in turn becomes a critical issue in the interpretation of disease findings and especially when any therapy has to be personalised, also known as precision medicine [27]. First, we find a large range, irrespective of whether the mean or median values from a given volunteer's aggregated ALI data are studied. Second, our transport data are not normally distributed, with voltages over a >30-fold range (-62.3/-1.8 mV) and an equivalent >9-fold range of values for TER (from 309 to 2963 Ω·cm2), coupled to a minority of ALIs that have drug responses which differ markedly from the rest. Third, this variability is likely not an in vitro artefact because such a wide range is also found in vivo when nasal potential difference is measured in apparently healthy volunteers [28,29], which is of clinical importance given that this test has been proposed as a discriminant between disease and health [30]. Fourth, the volunteers who have a higher response to amiloride are the same subjects irrespective of the order of drug addition, suggesting that the degree of amiloride sensitivity is an intrinsic property of an individual's epithelium. Fifth, a high amiloride response is not predictive of an equivalently high response to forskolin plus CFTR Inh172 (i.e. donor A vs R in S6 Fig). Sixth, there is no clear relationship between baseline voltage and resistance (Fig 2D).
This is not unexpected given the complexity of the regulation of sodium transport across the airway epithelium [31] as the dominant driver of baseline voltage, coupled to the equivalent complexity of the dynamic regulation of TER, with many independent studies showing an interaction between sodium and chloride transport mediated by CFTR [32][33][34]. The combined data suggest that ion transport at baseline and after sequential exposure to drugs both vary, but amiloride-sensitivity as a marker of the underlying sodium transport remains the major component. In fact amiloride reduced by 10-fold the reference range for the current transported by the epithelium (data not shown) and, as might be expected, widened the TER range (Fig 4B and 4C). Importantly, the order of drug administration does not alter the width of the TER range (compare Fig 4C-I and 4C-II).

From a disease perspective, the magnitude of TER has been reported to be important in the pathogenesis of asthma [2] and future work will have to determine whether the starting values of TER might determine the propensity of a given individual to develop clinically detectable disease, for example after exposure to diesel particulates. Mechanistically, recent work has shown that a number of genes alter the phenotype of Cystic Fibrosis [35,36] and some of these are epithelial transporters whereas others are immune modulators. Co-inheritance of differences in such proteins is equally likely to be present in our normal volunteers and might explain the background variability in the parameters reported in this paper. This idea is supported by the finding that the magnitude of the forskolin current in the presence of amiloride, and its subsequent level after the addition of the CFTR inhibitor, is also variable (data not shown). Between donors, this means that the airway epithelium is reacting differently to the presence of different drug combinations and care is needed in the choice of control subjects in a disease setting. We believe these are important considerations for personalized medicine, particularly when choosing the best controls for a study. We report a dichotomous TER response that generates two drug-discriminative populations that are only revealed post amiloride. This group difference is further exemplified as different profiles when the TER responses are normalised to either an initial baseline value, or re-calculated at each drug addition step on a rolling baseline basis: both approaches are needed in order to identify outliers, reflective of extreme response to a given drug or to drug combinations. An example is given in S6 Fig with 99% confidence intervals shown in the shaded areas; the outliers are clearly shown irrespective of whether the initial (panels A-I, A-II) or the rolling baseline (panels B-I, B-II) is chosen as the starting value when calculating the ratio of changes. Currently funded to study a rare inherited airway disease, Birt-Hogg-Dubé Syndrome [37], we faced the problem of limited access to airway tissue. Therefore we focused on studying apparently healthy volunteers to define a reference range of electrophysiological parameters, a mandatory analysis prior to asking patients to volunteer for airway research studies [38][39][40]. In pilot work we tested the utility of our SOP in brushings from donors affected by different diseases: three asthmatics, one BHD patient and four uninfected CF patients (homozygotes F508del-CFTR).
For the asthmatics and BHD patients we were able to expand, differentiate (with an average of about twenty reconstituted epithelia for each donor) and analyse the electrophysiological parameters of the resultant ALI cultures (data not shown). To our surprise, none of the CF patient cells would attach and expand like either the controls or the diseaseaffected subjects. Further work, outside the scope of the current paper, will have to establish the cause (which was not due to bacterial or yeast infection). By way of possible explanation, we note that over recent decades, many workers in the CF field [41] report abnormalities in cellular networks that might explain such very unusual growth characteristics in our CF brushings, consistent with the many independent pathways that are abnormal after F508 is deleted from the CFTR protein [42,43]. For example, some of these pathways centre on protein that binds to the first nucleotide binding domain of CFTR and controls the signalling flux between the epidermal growth factor receptor and the assembly of the scaffold which approximates MEK with ERK in the process of cell growth and differentiation. Thus our SOP could in future help determine the means to repair the abnormal pathways that alter the production of epithelia from nasal brushings. To this end, we present our range of values from our SOP that might help others navigate the data obtained by different groups who often take polar opposite positions on which ion transport characteristics are the principal drivers of pathophysiological change in a given disease. Importantly, we observe that a given individual can generate a mean/median in vitro response that is reflective of their own set of ALIs as a whole, with the caveat that sufficient numbers of cells must be harvested/cultured per nasal biopsy to permit sufficient ALIs to be generated to compensate for seeding progenitor variability. We propose that a minimum of 5-6 ALIs per donor are necessary when performing ion transport experiments for better interpretation and hope that our data will help different groups faced with the challenges of understanding the nature of the 'normality challenge' in their chosen disease. That challenge has recently been set out in a review of the issues researchers will have to overcome to compensate for the variability in the background on which disease occurs, which is the holy grail of personalised or precision medicine [44]. Supporting Information S1 Fig. Effect of Na+ depletion on the electrophysiological properties of human airway epithelia. (A) Recorded voltage from two different Ussing chamber experiments (donors C and K) in which the epithelia were exposed to forskolin (FSK, 1μM) and then 50% of the buffer was exchanged apically (white arrows) with NMDG + -HBSS (NaCl replaced with (NMDG + )Cl, NaHCO 3 with KHCO 3 and Na 2 HPO 4 with KH 2 PO 4 ). For volume compensation the basolateral side had the same 50% volume exchange but with standard HBSS. After stabilization, NaCl was added back by exchanging 50% of the buffer with standard HBSS (137.93 mM NaCl) for three times (grey arrows). (B) Changes in V, TER and I Eq at baseline (BAS.), after the addition of forskolin (FSK) and at the different concentration of Na + achieved after replacing each time 50% of the buffer in the apical chamber with the same volume of NMDG + -HBSS until 2.2mM NaCl was reached. (TIFF)
An Unusual Presentation of Lipoma on the Dorsum of the Foot in a 9 Year Old Girl: A Case Report and Review of the Literature Agu

A lipoma can occur in virtually any organ of the body that has fat cells. A slow growing mass on the dorsum of the foot in a 9 year old girl which had recently ulcerated and was painful appeared sinister. In the absence of trauma or local scarification on the mass, the spontaneous denudation of the overlying skin and eventual ulceration with discharge of serous fluid seemingly made malignancy a probable but not a definite diagnosis. The groin nodes were enlarged and tender but there was no cough and no weight loss. Biopsy confirmed the diagnosis of lipoma. The aim of this report is to highlight the need for the clinician to be broad-minded in the management of patients and to consider the psychological impact of making a pronouncement without a confirmatory diagnosis.

Introduction Lipoma is a benign neoplasia of adipocytes consisting of mature fat cells in a thin fibrous capsule. It is the commonest form of benign soft tissue tumor of mesenchymal origin. Lipoma is regarded as a 'ubiquitous' tumor or 'universal' tumor [1,2]. Solitary lipomas are uncommon in children and they affect more women than men between the ages of 40 and 60 years [1][2][3]. Lipomas mainly occur in the head and neck and shoulder regions as well as the proximal extremities [3]. The fat cells in this lesion are rapidly dividing but are still within their confines, and also maintain their cell morphology in the aspects of shape, size, uniformity, tissue boundary and nucleo-cytoplasmic ratio on microscopy. The lesion therefore is most unlikely to ulcerate through the skin unless there is a malignant transformation, which is extremely rare [4], or there is trauma from friction or ulceration following infected scarification wounds. A rapidly enlarging mass can exert a lot of pressure on the skin, causing local ischemia, necrosis, sloughing and ulceration. Benign masses are not known to commonly grow very rapidly and so the investing structures have time to expand to accommodate them. Though a mass may appear to be associated with some clinical features of malignancy, it should not be pronounced malignant until a complete diagnostic work-up is done. There is no literature on this kind of unusual presentation of lipoma on the distal extremity in a child, especially with ulceration. This report highlights the difficulty with clinical diagnosis of an uncommon presentation of lipoma and the need to avoid psychological trauma to the patient and family members by doing a complete diagnostic work-up before a definitive pronouncement.

Presentation of Case A 9 year old girl presented to us with an initially painless swelling of 19 months' duration on the dorsum of the left foot (Figures 1 and 2). The mass had been increasing in size gradually until the past 5 months, when the patient noticed a rapid increase in its size and pain in the foot. The mass was disturbing her ability to put on her shoes. There was no weight loss. Four weeks prior to presentation, the mass ulcerated and was discharging serous, foul smelling fluid (Figure 1). She said that there was no form of trauma or scarification on the mass. The patient had already been seen by their family doctor who, after assessment, pronounced that she might have cancer and would probably need an amputation. In great distress, the patient's mother brought her to our level II surgical facility for a second opinion.
Examination showed an ulcerated 5 cm by 4 cm soft to firm mass on the dorsum of the left foot. It was mildly tender, not attached to the underlying structures but attached to the hyper-pigmented skin around the ulcer (Figures 1 and 2). The ulcer had an undermined edge and did not bleed on touch. There was no slipping sign. The groin nodes were firm and tender. She could move her left toes and she walked unshod with a limp. The chest, abdomen and other systems were normal. At this point, the clinical diagnosis was unclear and the plan was to do an excision biopsy. Her investigations, which included complete blood count and urinalysis, were within normal ranges. Her hemoglobin electrophoresis was AA. A plain radiograph of the foot was also normal. Informed consent was obtained and, under general anesthesia and a proximal tourniquet, excision biopsy was performed. Intra-operatively, the mass was yellowish, shiny, fatty and lobulated with a good plane, and enucleating it was fairly easy (Figure 3).

Discussion Lipomas are the commonest form of benign soft tissue tumors, consisting of mature fat cells encapsulated in thin fibrous tissue. These neoplastic lesions are ubiquitous and can occur in any organ of the body that has adipose tissue. Lipomas can occur as solitary or multiple lipomas. Solitary lipomas are commoner in women in their 5th and 6th decades, especially in the obese, constitute 80% of all lipomas and do not show any familial inheritance [4,5]. Multiple lipomas, on the other hand, are more common in adult males and occur as lipomatosis with familial predilection, and some variants are autosomal dominant. Some are also symmetrical in distribution and could be associated with Dercum's disease (painful multiple lipomas, also known as adiposis dolorosa) [6], Madelung's disease (benign symmetric multiple lipomas in the head and neck and proximal extremities) and Gardner's disease (intestinal polyposis, osteoma) [7]. Whatever the type of lipoma, these lesions are very rare in children except for the occasional painful angiolipoma, which occurs in a younger age group than the conventional solitary lipomas [8]. Lipomas are slow growing tumors that present as painless, palpable small masses and so are often overlooked by patients until they reach appreciable sizes [9]. They are mostly subcutaneous but could be sub-fascial or intramuscular, and are usually less than 2 cm but some can reach 20 cm in their widest diameters [10]. Large tumors could give rise to pressure symptoms on surrounding structures like nerves or tubular organs, resulting in pain. Stretching of the investing structures, including the skin, can also give rise to pain. In our patient, the location of the tumor on the dorsum of the foot with thin, stretched out skin, pressure on the adjacent digital nerves and the ongoing infection could be responsible for the pains she experienced. In the absence of trauma, the possible explanation for the ulceration at the summit of the mass is a pressure effect on the skin giving rise to ischemia and necrosis (Figure 1). The pre-biopsy diagnosis of lipoma in this patient was not definite because of the age of the patient and the unusual location of the tumor in the distal extremity, in addition to the ulceration and the absence of the typical slipping sign.
The mass was quite superficial and so we did not think an ultrasound scan would be extremely beneficial, especially as the features of the mass made it amenable to excision. In resource rich environments, magnetic resonance imaging would increase the diagnostic yield but the diagnosis should still be confirmed by biopsy. Lipomas contain, in addition to mature fat cells, thin fibrous capsules, and this is what distinguishes them from an aggregation of fat cells [5] as seen in obese people. In this patient, we relied only on histo-pathological diagnosis to chart a way forward for her treatment, which fortunately ended with complete surgical excision and routine post-operative care. The treatment of choice for lipoma is mainly surgical as this is more likely to ensure a complete removal, unlike steroid injection, which causes fat atrophy, or liposuction, which usually would leave some fat tissue and the capsule behind [11]. However, the latter methods leave minimal scars. If steroid injections are repeated, skin hypopigmentation may result. A solitary lipoma of a small size could be left alone, but if a patient has cosmetic concern based on its anatomical location, then irrespective of size, the lipoma should be removed. The excision of the mass en bloc in our patient was made possible by the good plane and lack of infiltration of the tumor that is typical of a benign lesion. Therefore, avoiding injuries to the digital nerves and the dorsalis pedis artery was possible and the patient recovered without paresthesia or post-operative hematoma. Some authors have reported these complications, in addition to fat embolism, seroma, cellulitis, wound infection and wound breakdown, in their studies [1,3,8]. The reasons for excising the tumor in our patient were pain, the large size with the attendant cosmetic dysfunction and, also importantly, to make a definitive diagnosis upon which proper treatment was based. The diagnosis put to rest the worries of the parents concerning the initial clinical diagnosis and the proposed treatment option made by their family clinician. The lesson here is that a solitary lipoma may present differently from the usual pattern. Even when a lesion appears malignant clinically, a definitive statement should only be made after histo-pathological confirmation; in this way, unnecessary worries will not be placed on the patient and family members, as happened in this index case.

Conclusion It was both a psychological and physical relief to confirm that a lesion suspected to be malignant due to its unusual presentation was a lipoma. The excision was both diagnostic and therapeutic and the patient, as expected, made a full recovery and went back to school within a short period.

Consent The patient's parents gave unconditional approval for this work and the images to be published.

Conflict of interest None.

Funding The work was self-sponsored.

Figure 1: A clinical photograph showing a huge ulcerated lesion on the dorsum of the left foot.

Figure 2: The position of the lesion definitely would affect the wearing of a shoe.

After care including antibiotics was administered. Histo-pathological examination revealed mature fat cells with empty cytoplasm and eccentric nuclei, encased in a fibrous capsule but interspersed by fibroblasts and blood vessels without any epithelium, which is consistent with lipoma. The patient was discharged after 5 days. By the six weeks post-operative follow up, she had fully recovered, was wearing her shoes and had resumed her school activities.
Figure 3: An intraoperative clinical photograph showing easy and complete enucleation of a well encapsulated and lobulated fatty mass.
Sequence motifs and prokaryotic expression of the reptilian paramyxovirus fusion protein Summary. Fourteen reptilian paramyxovirus isolates were chosen to represent the known extent of genetic diversity among this novel group of viruses. Selected regions of the fusion (F) gene were sequenced, analyzed and compared. The F gene of all isolates contained conserved motifs homologous to those described for other members of the family Paramyxoviridae including: signal peptide, transmembrane domain, furin cleavage site, fusion peptide, N-linked glycosylation sites, and two heptad repeats, the second of which (HRB-LZ) had the characteristics of a leucine zipper. Selected regions of the fusion gene of isolate Gono-GER85 were inserted into a prokaryotic expression system to generate three recombinant protein fragments of various sizes. The longest recombinant protein was cleaved by furin into two fragments of predicted length. Western blot analysis with virus-neutralizing rabbit-antiserum against this isolate demonstrated that only the longest construct reacted with the antiserum. This construct was unique in containing 30 additional C-terminal amino acids that included most of the HRB-LZ. These results indicate that the F genes of reptilian paramyxoviruses contain highly conserved motifs typical of other members of the family and suggest that the HRB-LZ domain of the reptilian paramyxovirus F protein contains a linear antigenic epitope. Introduction Paramyxoviruses are recognized as important pathogens of reptiles [14,20,21,37]. Beginning with the initial reports [10,15] of the Fer-de-Lance virus (FDLV), reptilian paramyxoviruses have been isolated from outbreaks associated with severe sickness and mortality in collections of viperid-, colubrid-, boid-or elapid snakes and lizards held in venom farms, herpetariums and zoological parks in Europe and North America [17,20]. Analysis of the full genome of the Fer-de-Lance virus (FDLV), the proposed type species virus for the group, suggests these viruses are bona fide members of the family Paramyxoviridae, but have unique properties including the presence of a novel gene not found in other members of the family [26]. Sequence analysis of a portion of the polymerase (L) and hemagglutinin-neuraminidase (HN) genes was used byAhne et al. [2] to assess the extent of genetic diversity among 16 isolates of reptilian paramyxoviruses. In that study, and in an additional analysis by Kindermann et al. [24], most isolates fell into one of two distinct subgroups (genogroups), designated 'a' and 'b', that differed by more than 20% at the nucleotide level with a few isolates falling into 'intermediate' positions. A similar study of 18 reptilian paramyxovirus isolates by Franke et al. [16] confirmed these findings of genetic diversity and provided a preliminary characterization of portions of the fusion (F) gene. Studies of mammalian and avian paramyxoviruses have shown that the fusion (F) protein of paramyxoviruses is a key determinant of virulence [25]. The F protein is synthesized as a biologically inactive precursor (F0) that is posttranslationally cleaved by cellular proteases into the disulfide-linked, fusiogenically active F1 and F2 subunits [39]. The newly released N-terminus of the F1 subunit contains a fusion peptide that aids insertion of the virus into the target membrane of the new host cell [5]. The carboxy-terminus anchors the viral transmembrane domain. The F protein motifs are highly conserved among paramyxoviruses [28]. 
However, activation of the F protein can be influenced by amino acid changes within several of the F protein motifs such as the cleavage site, the fusion peptide, heptad repeats A and B, or the leucine zipper [7,8,18,22,33,36,40]. In 2001, Junquiera de Azevedo et al. [23] reported a partial F gene sequence of a reptilian paramyxovirus, found as an expressed sequence tag clone from a wild-caught Fer-de-lance snake. In the same year, analysis of partial F gene sequences from 18 reptilian paramyxovirus isolates by Franke et al. [16] showed the presence of a predicted furin cleavage site having an amino acid sequence R-E-K-R within the F protein. This site was highly conserved among the isolates and homologous to the paramyxovirus consensus sequence Arg-X-Arg/Lys-Arg (R-X-R/K-R). Franke et al. [16] also compared the sequences for a 25-amino-acid region of the predicted fusion peptide and reported that an 11 amino acid region (TSAQITAGIAL) was identical among the isolates and was homologous to the conserved domain described for certain genera of the subfamily Paramyxovirinae. In the present study, the full-length F gene sequence from FDLV [26] was used to identify a more complete set of predicted motifs including, in addition to the furin cleavage site and fusion peptide, the complete heptad A and heptad B regions, which included a leucine zipper in the HRB. Portions of the F gene containing these predicted motifs were sequenced for 14 reptilian paramyxovirus isolates that were chosen from among those characterized by Ahne et al. [2] to represent the known extent of genetic diversity among this novel group of viruses. With the exception of the type strain FDLV and the isolate Gono-GER85, the isolates in the present study are different from those described by Franke et al. [16]. The purpose of this portion of the work was to characterize and compare additional sequence motifs within the F protein, and to describe the extent of conservation or variation among genetically diverse isolates of reptilian paramyxoviruses. In addition, prokaryotic expression of various portions of the F protein was used to verify the function of the furin cleavage site and to investigate the presence of antigenic determinants within the F protein of the reptilian paramyxoviruses.

PCR amplification and sequencing Degenerate primers (Table 2) were used to amplify two portions of the coding region of the fusion gene from each virus isolate. The first region was 295 nucleotides (nt) in length (nt 327-621 of the F gene) and contained the predicted cleavage site, fusion peptide, and heptad repeat A. The second region of 120 nt (1420-1539) included heptad repeat B that contained a leucine zipper. The degenerate primers were chosen based upon alignment of the GenBank sequences of FDLV (AY141760), Sendai virus (M30202) and other representative paramyxoviruses. The primers listed in Table 2 were also used to determine the sequence of the F gene of isolate Gono-GER85 (GenBank AY725422) for development of a prokaryotic expression system. Amplification of viral RNA using RT-PCR and sequencing of the PCR products was performed as described by Ahne et al. [2]. Nucleotide sequence analyses, alignments and predictions for amino acid sequences were performed using Mac Vector 6.0 software (International Biotechnologies, New Haven, CT, USA).
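Pairwise identity calculations of the kind reported later in the Results (e.g. 93-100% amino acid identity within motifs) reduce to counting matching columns in an alignment. The utility below is a generic sketch of such a comparison; the two 'isolate' fragments are invented for illustration and are not sequences from this study.

```python
# Generic sketch: percent identity between two pre-aligned sequences of equal length.
def percent_identity(seq1: str, seq2: str) -> float:
    """Identity over the alignment length; columns that are gaps in both sequences are ignored."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    columns = [(a, b) for a, b in zip(seq1, seq2) if not (a == "-" and b == "-")]
    matches = sum(a == b for a, b in columns)
    return 100.0 * matches / len(columns)

# hypothetical aligned fragments of a fusion-peptide coding region
iso_a = "ACGGTTTCAGCTCAAATTACAGCAGGA"
iso_b = "ACGGTCTCAGCACAAATTACTGCAGGA"
print(f"{percent_identity(iso_a, iso_b):.1f}% nucleotide identity")
```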
Protein motifs such as signal peptides and transmembrane domains were predicted.

Prokaryotic expression of the F gene A prokaryotic expression system was used to create a set of recombinant proteins based upon the nucleotide sequence of isolate Gono-GER85 (AY725422). The two longest recombinant proteins were designed to contain either the entire F0 protein (residues 1-546, not shown) or a truncated form (residues 26-490) lacking much of the signal peptide and the entire transmembrane anchor and cytoplasmic domain (Fig. 1). Two shorter recombinant proteins were designed to be analogous to the F protein subunit F1, excluding the heptad B motif and the transmembrane and cytoplasmic domains (residues 111-460), and subunit F2, excluding most of the signal peptide (residues 26-110). To generate the expression constructs, RNA was amplified with fragment-specific primers (Table 2) and the DNA fragments were cloned into the pET-30a(+) expression vector (Novagen, Madison, WI, USA). The vectors were propagated in the NovaBlue E. coli strain (Novagen) and expressed in the Rosetta (DE3) E. coli strain (Novagen). As a positive control, the 120 kD β-galactosidase was expressed in E. coli strain BL21 (Novagen), and as a negative control, E. coli was transfected with the plain vector. Expression was induced at 37°C after 2 h incubation with 1 mM isopropyl-β-D-thiogalactopyranoside (IPTG). Recombinant proteins containing a His-Tag were purified via affinity column using a Ni-NTA resin and the manufacturer's protocol (Novagen).

(Fig. 1 legend: Three selected regions of the F protein were synthesized as recombinant proteins (rp 1-3) in a prokaryotic expression system. The rp1 was digested with the endopeptidase, furin, and the resulting cleavage products compared to rp2 and rp3. The C-terminal portion of rp1 included a region of 30 amino acids (LTKVQSD LKEAQDK LDESNAI LQGINNK IL) within the HRB-LZ motif containing an antigenic epitope that reacted with antiserum in a western blot.)

Furin protease digestion The ability of furin to cleave the recombinant protein construct rp1 that included the furin cleavage site was tested in vitro. The recombinant protein was dialyzed against a furin buffer solution and 100 ng of protein was digested for 3 h at 30°C with 2 units of furin per the manufacturer's product manual (Biolabs, Beverly, MA, USA).

Western blotting For analysis of the recombinant proteins, equal amounts of crude bacterial cell lysates, the Ni-NTA purified recombinant proteins, or the dialyzed, digested proteins were separated using SDS-PAGE in 12% gels. Following electrophoresis, the proteins were transferred to 0.45 µm pore size nitrocellulose membranes for Western blotting (Bio-Rad, Hercules, CA, USA). An anti-His-tag monoclonal mouse antibody (MAb Anti-His; Novagen) was used to detect the recombinant proteins. A virus-neutralizing polyclonal rabbit antiserum (PAb Anti-RPMV), made against isolate Gono-GER85, was used to detect the presence of antigenic domains. Secondary antibodies were detected using an alkaline phosphatase immunoblot kit (Bio-Rad).

Model of the reptilian paramyxovirus F protein The full-length sequences of the F gene of FDLV (GenBank AY141760) and Gono-GER85 (GenBank AY725422) were used to predict a model of the reptilian paramyxovirus F protein as presented in Fig. 1. The predicted N-terminal hydrophobic domain at amino acids 1-33 is believed to represent the signal peptide (SP) while the C-terminal hydrophobic domain at amino acids 503-520 is believed to represent the transmembrane anchor (TM).
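Hydrophobic stretches such as the predicted signal peptide and transmembrane anchor are typically flagged with a sliding-window hydropathy scan. The sketch below applies a Kyte-Doolittle scan to a placeholder sequence; the 19-residue window and +1.6 threshold are common rule-of-thumb settings rather than parameters taken from this study.

```python
# Sketch: Kyte-Doolittle sliding-window scan for candidate hydrophobic domains.
KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
      'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
      'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9, 'R': -4.5}

def hydrophobic_windows(seq: str, window: int = 19, threshold: float = 1.6):
    """Yield (start, end, mean_score) for windows at or above the hydropathy threshold."""
    for i in range(len(seq) - window + 1):
        score = sum(KD[aa] for aa in seq[i:i + window]) / window
        if score >= threshold:
            yield i + 1, i + window, round(score, 2)   # 1-based residue numbering

# placeholder sequence with two hydrophobic stretches, not the FDLV F protein
toy = "MKLALLLIVALIIVSASA" + "QTSDNQ" * 20 + "LLIVLGAIALVVLLFIIIS" + "RRKQ"
for start, end, score in hydrophobic_windows(toy):
    print(f"residues {start}-{end}: mean Kyte-Doolittle score = {score}")
```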
A predicted furin cleavage site (CS) is located at residues 107-110 at the C-terminal end of the F2 subunit and adjacent to the predicted fusion peptide (FP) at residues 111-135. Cleavage at this predicted site would divide the F protein into the F1 and F2 subunits. Two heptad repeats (HR) were predicted at residues 145-186 (HRA) and at residues 461-502 (HRB). Each heptad unit consisted of seven amino acids (positions a, b, c, d, e, f, g) with conserved hydrophobic or mostly non-polar residues occurring at positions a and d. The heptad units in FDLV were found to repeat six times (Fig. 2a, b). A helical wheel-model (Fig. 3) was used to illustrate the cross section of an alpha-helix into which the heptad repeats are believed to fold creating two opposing faces, a nonpolar-neutral face and a charged face. The HRB sequence also was predicted to encode a leucine zipper (LZ), with the hydrophobic amino acids leucine or isoleucine lining up in position a as shown in Fig. 3. Three potential amino acid-linked glycosylation sites (N-G) were identified at the asparagine residues located at positions 102, 436 and 443 of the F protein (Fig. 1); however, it was not determined if any of these sites were actually glycosylated. Genetic diversity within the F gene motifs To assess the extent of genetic diversity within the predicted motifs of the F gene of reptilian paramyxoviruses, the nucleotide sequences were determined for two selected regions of the F gene of 12 additional snake paramyxoviruses. The first region included the CS, FP, and HRA motifs (Fig. 2a) and the second region contained the HRB-LZ motif (Fig. 2b). Comparisons among the 14 isolates showed a high degree of similarity, but not identity, in these regions. In general, isolates identified by Ahne et al. [2] as members of subgroup 'a' had a conserved sequence for each of the motifs while isolates identified as subgroup 'b' had a conserved sequence that differed slightly from that of subgroup 'a'. Those isolates identified as members 'intermediate' to the two subgroups showed inconsistent results, sometimes having the conserved sequence of subgroup 'a', sometimes subgroup 'b', or occasionally for certain motifs, differing from the sequence of members of either subgroup 'a' or 'b' (Fig. 2a, b). The amino acid sequence of the predicted furin cleavage site of the 14 reptilian paramyxoviruses is shown in Fig. 2a. The conserved sequence, R-E-K-R was present in 13 of the isolates in agreement with the results reported by Franke et al. [16], while one 'intermediate' isolate, Crot2-OH90, had the sequence R-G-K-R. All the sequences were consistent with the paramyxovirus consensus furin cleavage site, R-X-R/K-R [28]. In the 10-amino acid sequence upstream of the CS at residues 97-106, virus isolates of subgroup 'a' differed from those of subgroup 'b' by 60% (6 of 10 aa) and with the 'intermediate' isolates by 20-40% [2-4 of 10 aa; data not shown). Downstream of the CS was a 25 amino acid sequence identified as the fusion peptide (FP). The amino acid sequence of this motif at the start of the F1 protein was identical among the 14 isolates examined. The conserved domain of 11 amino acids (TSAQITAGIAL) within this motif was also identical to that of the reptilian paramyxovirus isolates examined by Franke et al. [16] and shared a high degree of similarity to FP sequences of members of other genera of the subfamily Paramyxovirinae (Fig. 2a). Following the FP, the HRA motif was easily located (Fig. 2a). 
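Two of the sequence features described above, the furin consensus cleavage site (R-X-R/K-R) and the potential N-linked glycosylation sequons, can be located with simple pattern matching. The snippet below is a hedged sketch of such a scan; the example string is invented, apart from containing the conserved R-E-K-R motif reported for these viruses.

```python
# Sketch: locate candidate furin cleavage sites and N-glycosylation sequons by regex.
import re

FURIN = re.compile(r"R.[RK]R")      # paramyxovirus consensus cleavage site R-X-R/K-R
N_GLYC = re.compile(r"N[^P][ST]")   # N-X-S/T sequon, X != P (a common scanning rule)

def scan_motifs(seq: str) -> dict:
    """Report 1-based start positions of candidate furin sites and sequons."""
    return {
        "furin": [(m.start() + 1, m.group()) for m in FURIN.finditer(seq)],
        "N-glyc": [(m.start() + 1, m.group()) for m in N_GLYC.finditer(seq)],
    }

# invented fragment spanning a hypothetical F2/F1 junction; only the R-E-K-R motif
# itself reflects the conserved site reported for the reptilian isolates
example = "TTLNNSVREKRFAGVVIAGV"
print(scan_motifs(example))   # furin site 'REKR' at position 8, sequon 'NNS' at position 4
```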
For this motif within the F gene of the 14 reptilian isolates, all isolates in subgroup 'a' and all but one of the 'intermediate' isolates had an identical amino acid sequence consisting of six heptad repeats. The six isolates in subgroup 'b' shared an identical sequence in this region that differed from subgroup 'a' and 'intermediate' isolates at 2 out of 42 amino acids. These substitutions were conservative such that the two HRA sequences had 100% amino acid similarity. In the region of the F protein between amino acids 461-502 was the HRB motif (Fig 2b). This motif possessed the characteristics of a leucine zipper (LZ) with a series of heptad repeats containing leucine or isoleucine in position a. The LZ motif was six L/I repeats in length for the reptilian sequences, which is longer than for Sendai virus, measles virus, simian virus 5, and Tupaia virus, and shorter than for Newcastle disease virus and Hendra virus. Within the 42 amino acids comprising HRB, isolates in subgroup 'a' and four of the 'intermediate' isolates had an identical sequence except for the last amino acids of the sixth repeat. One 'intermediate' isolate had a unique sequence with two additional amino acid replacements, while all of the subgroup 'b' isolates had a sequence that again differed by 2 of 42 amino acids. In general, the amino acid differences were substitutions with similar amino acids and occurred within the charged face of the repeat. The overall amino acid sequence identity within the F protein motifs of the various reptilian paramyxovirus isolates was high, ranging from 93-100%. Overall amino acid sequence identity for the F protein motifs of the reptilian isolates compared to viruses representing other paramyxovirus genera was typically less than 45%. This level of sequence divergence within portions of the F gene was similar to levels of difference determined for the partial HN, L, and F sequences [2,16]. For the entire F protein of FDLV (545 aa) the range of amino acid identity with representatives of other paramyxovirus genera was 22-32% [26].

Prokaryotic expression The isolate Gono-GER85 was selected for prokaryotic expression because a polyclonal antibody raised against this virus has been previously shown to neutralize the virus [31] and to detect viral proteins in Western blots (Fig. 4). The PCR amplicons from selected regions of the F gene of the Gono-GER85 isolate were used to produce various constructs of the F protein in a prokaryotic expression system. A construct designed to produce the full length F0 protein did not express detectable levels of recombinant protein and was not considered further.

(Fig. 4 legend: The left column shows the western blots using a monoclonal anti-His antibody (MAb Anti-His) and the right column shows the corresponding western blots using the virus-neutralizing polyclonal rabbit serum (PAb Anti-RPMV). The blots in section A contain complete bacterial proteins including the recombinant protein (rp), section B contains His-purified rp, and section C contains His-purified, dialysed rp after digestion with furin. Lanes 1: rp1 (55 kD); 2: rp2 (14 kD); 3: rp3 (42 kD); 4: control (plain vector, expression induced); 5: control protein (120 kD) not induced; 6: control protein induced; 7: gradient-purified viral proteins; 8: IgH-2 cell proteins; 9: N-terminal portion of furin-digested rp1 (41 kD).
Estimated molecular masses of the proteins are given on the sides of the blots as estimated from molecular weight standards M1 and M2.)

However, three other constructs, shown as recombinant proteins 1, 2, and 3 in Fig. 1, were expressed at levels that were easily detectable on Western blots developed using a monoclonal antibody to the terminal His-Tag on each recombinant protein (Fig. 4). The recombinant proteins were approximately 30 aa larger than the native partial proteins due to the His-Tag and the multiple cloning site derived from the expression vector. The largest of the three recombinant proteins (rp1) had an estimated molecular mass of 55 kD, the predicted size for the F0 protein without the signal peptide or the transmembrane and cytoplasmic domains. The recombinant protein designed to mimic the F2 subunit (rp2) had an estimated molecular mass of 14 kD, while the recombinant protein designed to mimic a truncated F1 subunit (rp3) had an estimated molecular mass of 42 kD.

Furin cleavage of the recombinant protein Of the three recombinant proteins that were expressed at high levels, only the 55 kD rp1 was designed to contain the predicted furin cleavage site. After digestion of purified rp1 by furin, a new protein fragment of approximately 41 kD was seen in gels that was the predicted size of the F1 subunit less the transmembrane and cytoplasmic domains (Fig 4), confirming that the predicted furin cleavage site was active.

Antigenic analysis Western blots developed using a neutralizing, polyclonal rabbit antiserum made against the Gono-GER85 isolate revealed that only the longest recombinant expression product, rp1, was recognized by the antibody (Fig. 4). The antibody also recognized the 41 kD cleavage product following digestion of rp1 by furin (Fig. 4). This cleavage fragment differed from rp3 only in that it contained 30 additional C-terminal amino acids that included most of the HRB-LZ motif (LTKVQSD LKEAQDK LDESNAI LQGINNK IL). These results indicated that the HRB-LZ domain of the reptilian paramyxovirus F protein contained at least one antigenic epitope. While other antisera were not available for this study, binding of the antiserum to Ni-NTA purified recombinant proteins following separation by SDS-PAGE indicated the epitope located within the HRB-LZ motif was likely not conformation dependent.

Discussion The features of avian and mammalian paramyxovirus F proteins responsible for fusion of the virus with the host cell have been described as being quite conserved [25,28], and include an enzymatic cleavage site, a fusion peptide, and two heptad repeat motifs, one of which is a leucine zipper [27]. Those motifs represent key determinants for cell infection and virulence [25,28]. Our results extend the presence of these conserved F protein motifs to paramyxoviruses of reptile hosts as well. The F protein cleavage site of the reptilian paramyxoviruses was identified by sequence homology at the C-terminal end of the F2 subunit next to the N-terminal end of the F1 subunit fusion peptide (Fig. 1) as reported for other paramyxovirus F proteins [28]. With the exception of a single isolate, the reptilian viruses exhibited the same multibasic furin recognition site (R-E-K-R) reported by Franke et al. [16]. In the present study, we showed that a recombinant partial F protein of the Gono-GER85 isolate was cleaved by furin in vitro.
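As a rough plausibility check on the gel estimates quoted above, each construct's expected mass can be approximated from its residue count. The sketch below uses the residue ranges and the ~30 vector-derived residues stated in the text, together with an assumed average residue mass of ~110 Da, so small deviations from the observed band sizes are expected.

```python
# Rough mass estimate for the recombinant constructs from residue counts alone.
AVG_RESIDUE_DA = 110          # assumed average residue mass
VECTOR_EXTRA_AA = 30          # His-tag plus multiple cloning site, as stated in the text

constructs = {
    "rp1 (residues 26-490)": (26, 490),
    "rp2 (residues 26-110)": (26, 110),
    "rp3 (residues 111-460)": (111, 460),
}

for name, (start, end) in constructs.items():
    n_res = (end - start + 1) + VECTOR_EXTRA_AA
    print(f"{name}: ~{n_res * AVG_RESIDUE_DA / 1000:.0f} kDa predicted")
# prints roughly 54, 13 and 42 kDa, in line with the 55, 14 and 42 kD bands on the blots
```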
Proteins inducing membrane fusion in the families Paramyxoviridae, Orthomyxoviridae, Retroviridae, and Filoviridae generally require cleavage modification to become biologically active [5,22]. For the paramyxovirus, Newcastle disease virus (NDV), the nature of the CS was shown to be a major virulence factor [12,33,41]. Following conversion of the amino acid sequence of the CS of a low virulence (lentogenic) recombinant LaSota strain of NDV (G-G-R-Q-G-R L) to the multibasic (G-R-R-Q-R-R F) furin motif, the virus became highly pathogenic [33]. Also, passage of a lentogenic wild type strain of NDV in domestic chickens resulted in the emergence of a highly virulent (velogenic) pathogen. Sequence analysis of the strains showed the conversion of the cleavage site amino acid sequence, E-R-Q-E-R L, to K-R-Q-K-R F, the consensus motif for furin cleavage [41]. Furin occurs ubiquitously in the Golgi apparatus of host cells [47] and cleavage of the F0 protein by furin enables virus replication in a wide range of tissues and organs leading to fatal systemic infection and death [33,42]. In contrast, tryptase Clara, a protease cleaving the lentogenic NDV strains and Sendai virus, is only secreted from epithelial cells present in respiratory and intestinal tracts [43]. The presence of a furin cleavage site is consistent with the ability of the reptilian paramyxoviruses to replicate in many tissues and to cause severe sickness and mortalities among infected snakes [14,15,17,[19][20][21]. In generalized paramyxovirus F protein models, the fusion peptide is the 25-amino acid, post-cleavage N-terminus of the F1 protein, which is known to be a highly hydrophobic and conserved domain [18] having an amino acid identity of up to 90% among various paramyxoviruses [28]. In reptilian paramyxoviruses, the FP was identified next to the cleavage site on the N-terminal end of the F1 subunit, and a conserved region of 11 amino acids was identical among the 14 isolates used in this study and in the isolates reported by Franke et al. [16]. This completely conserved domain consisted of the exclusively hydrophobic residues F, V, G, I, and A. To enable fusion with a host cell membrane, the hydrophobic fusion peptide also requires the previous attachment of the viral envelope to the host cell [34]. In the reptilian paramyxoviruses, as for most other members of the family, this may be provided by the presence of the HN protein [2,26]. High fusion activity leads to successful viral replication, formation of syncytia and greater pathogenicity of the viruses [28]. The F protein sequences of the reptilian paramyxoviruses revealed two heptad repeats which proved to be nearly identical among isolates. An F protein model of SV5 paramyxovirus shows the HR motifs are located in the F1 protein following the N-terminal FP (HRA) and before the C-terminal transmembrane anchor (HRB). The two motifs are located more than 250 residues apart [22,28], similar to the organization for FDLV as shown in Fig. 1. An important attribute of a HR is to facilitate self-assembly into an alpha-helix. In the native form, the F protein is shaped as a hairpin with the HRA and HRB forming anti-parallel alpha-helices [6,8] and alpha-helical coiled coils form the backbones of the viral glycoproteins of paramyxoviruses and other enveloped viruses such as HIV [5,22]. Additionally, the HRB is a leucine zipper with the amino acid leucine or isoleucine in position a [7,50]. 
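A heptad repeat assigns successive residues to register positions a through g; for the HRB leucine zipper described here, position a is expected to carry leucine or isoleucine. The short Python sketch below checks a candidate segment against that expectation, assuming the register starts at position a on the first residue supplied; it is an illustration of the concept, not part of the original analysis.

```python
HEPTAD = "abcdefg"

def heptad_positions(segment: str, position: str) -> str:
    """Residues sitting at a given heptad register position (a-g),
    assuming the register starts at 'a' on the first residue."""
    offset = HEPTAD.index(position)
    return "".join(res for i, res in enumerate(segment) if i % 7 == offset)

def is_leucine_zipper_like(segment: str) -> bool:
    """A leucine-zipper-like repeat carries only L or I at position 'a'."""
    return all(res in "LI" for res in heptad_positions(segment, "a"))

# HRB-LZ fragment quoted in the antigenic analysis above; register assumed to start at 'a'.
hrb_lz = "LTKVQSDLKEAQDKLDESNAILQGINNKIL"
print("position a residues:", heptad_positions(hrb_lz, "a"))   # -> LLLLI
print("position d residues:", heptad_positions(hrb_lz, "d"))   # -> VASI
print("leucine zipper-like:", is_leucine_zipper_like(hrb_lz))  # -> True
```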
The leucine zipper motif is present in all paramyxovirus F proteins, coronavirus spike proteins and many retrovirus envelope proteins [6]. The amino acids in the key positions a and d generate a hydrophobic face on the helix. Those faces enable proteins to oligomerize, which was shown to be essential for the paramyxoviral F protein [9]. This property allows several F protein monomers to form homo-oligomers [6,13,36]. Oligomer formation is a characteristic described for fusion proteins of enveloped viruses such as members of the families Paramyxoviridae (SV5), Orthomyxoviridae (influenza virus), Retroviridae (HIV), and Filoviridae (Ebola virus) [5,22], and the oppositely charged faces of the helix as shown in Fig. 3 are consistent with the structure needed for self-assembly of reptilian paramyxovirus F protein monomers into oligomers. Mutation analysis of the leucine zipper motif in the measles virus (MV) and NDV showed that substitution of leucine residues abolished the fusogenic activity of the protein. However, protein expression, oligomerization, and cellular transport seemed to be unaffected [6,36]. Young et al. [50] reported disruption of the secondary structure through replacement of leucine residues with non-polar alanine residues. For the HRB sequence of reptilian paramyxoviruses, leucine was found more often in position a and isoleucine in position d (Fig. 3), suggesting the oligomers are composed of a four-stranded assembly [11]. As described for the F protein of other paramyxoviruses [28], the transmembrane anchor was located near the carboxy-terminus of the F protein of FDLV where it fastens the end of the protein within the viral envelope. Adjacent to the TM anchor in the N-terminal direction was the HRB as described by Chambers et al. [8] for fusion glycoproteins. From an immunological standpoint, the highly exposed position of viral glycoproteins on the surface of infected cells makes them targets for the host immune response. Both linear and conformation-dependent neutralizing epitopes have been identified on the F protein of several paramyxoviruses including human respiratory syncytial virus (HRSV), human parainfluenza virus 3, Sendai virus, and NDV [30, 32, 35, 44-46, 48, 49]. Sequence analysis of neutralization escape mutants selected by neutralizing MAbs showed the presence of unique epitopes on either the F1 or F2 subunits of the F protein as well as epitopes that mapped to both the F1 and F2 subunits [32,38], indicating that, following cleavage, the two subunits are folded together and joined by disulfide linkages in a manner that gives rise to a number of complex, conformation-dependent epitopes. These analyses also revealed that many of the neutralizing and fusion-inhibiting epitopes mapped to the cysteine-rich region (amino acids 300-420) of the paramyxovirus F1 subunit located between the two heptad repeats. Langedijk et al. [29] mapped a highly conserved neutralizing epitope of HRSV to a site within HRA. The present study showed the reaction of a virus-neutralizing polyclonal antiserum with a recombinant partial F protein of a snake paramyxovirus. The reacting epitope was located within a 30 amino acid region of the HRB containing the leucine zipper motif, a domain located outside of the viral membrane and therefore exposed to the host immune system.
Because the antiserum reacted with the recombinant protein in Western blots following SDS-PAGE, we assume it is a linear epitope; however, the antiserum also reacted with several other denatured proteins from purified virus (Fig. 4). Thus, further studies will be needed to search for the presence of conformation-dependent epitopes and to determine if the linear epitope we identified on the F protein is associated with neutralization.
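As a footnote to the sequence comparisons reported above (93-100% identity within the reptilian F protein motifs, typically under 45% against other genera), pairwise percent identity over an alignment is a one-line computation. The sketch below is illustrative only: the second sequence carries two invented substitutions in the HRB-LZ fragment quoted earlier, purely to show the calculation.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned sequences of equal length."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# The HRB-LZ fragment quoted in the antigenic analysis, and a variant carrying two
# invented substitutions (I->V, N->S) to mimic a 2-residue difference between subgroups.
fragment_a = "LTKVQSDLKEAQDKLDESNAILQGINNKIL"
fragment_b = "LTKVQSDLKEAQDKLDESNAVLQGINSKIL"
print(f"{percent_identity(fragment_a, fragment_b):.1f}% identity")   # -> 93.3% identity
```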
Hospitalizations and Treatment Outcomes in Patients with Urogenital Tuberculosis in Tashkent, Uzbekistan, 2016–2018 Despite the global shift to ambulatory tuberculosis (TB) care, hospitalizations remain common in Uzbekistan. This study examined the duration and determinants of hospitalizations among adult patients (≥18 years) with urogenital TB (UGTB) treated with first-line anti-TB drugs during 2016–2018 in Tashkent, Uzbekistan. This was a cohort study based on the analysis of health records. Of 142 included patients, 77 (54%) were males, the mean (±standard deviation) age was 40 ± 16 years, and 68 (48%) were laboratory-confirmed. A total of 136 (96%) patients were hospitalized during the intensive phase, and 12 (8%) had hospital admissions during the continuation phase of treatment. The median length of stay (LOS) during treatment was 56 days (Interquartile range: 56–58 days). LOS was associated with history of migration (adjusted incidence rate ratio (aIRR): 0.46, 95% confidence interval (CI): 0.32–0.69, p < 0.001); UGTB-related surgery (aIRR: 1.18, 95% CI: 1.01–1.38, p = 0.045); and hepatitis B comorbidity (aIRR: 3.18, 95% CI: 1.98–5.39, p < 0.001). The treatment success was 94% and it was not associated with the LOS. Hospitalization was almost universal among patients with UGTB in Uzbekistan. Future research should focus on finding out what proportion of hospitalizations were not clinically justified and could have been avoided. Introduction Tuberculosis (TB), one of the leading causes of death globally, is mostly represented by pulmonary TB [1]. Extrapulmonary tuberculosis (EPTB) has traditionally received less priority and attention probably due to its non-infectious nature. In 2019, EPTB represented 16% of the 7.1 million patients with TB that were reported to the World Health Organization (WHO) [1]. Urogenital tuberculosis (UGTB) is a form of EPTB related to infectious inflammation of urogenital system organs in any combination, caused by Mycobacterium tuberculosis or Mycobacterium bovis. Globally, UGTB accounts for 30 to 40% of patients with EPTB [2]. EPTB diagnosis, including the urogenital form, is a challenge due to the pauci-bacillary nature of the disease and the non-uniform distribution of microorganisms in the body [3]. Extra-pulmonary specimens may need a decontamination procedure during the sample preparation, which in its turn reduces the sensitivity of culture methods, a gold standard in TB diagnosis [4]. Only one-third of patients with UGTB have X-ray abnormalities and classical symptoms, such as fever, night sweats, and weight loss, are not common [5]. Thus, the diagnosis is often presumptive based on clinical or radiological findings without laboratory confirmation. Moreover, the definition of a satisfactory response to treatment in EPTB is not well-defined and varies across countries [6]. On average, patients with EPTB have longer hospitalizations compared to those with pulmonary TB [7]. Moreover, children, homeless patients, patients with psychiatric or substance abuse issues, and patients with comorbidities and complications tend to have a longer length of stay [8][9][10][11]. Generally, in recent years, there has been an intention to reduce hospitalizations during TB treatment. Several studies showed that treatment models with hospitalization for the full length of the intensive phase and models with ambulatory treatment since the day of diagnosis have similar treatment success [12][13][14]. 
WHO suggests hospital admissions only for complicated patients with TB, such as with respiratory failure or requiring surgery, patients with severe forms of the disease, and life-threatening or serious adverse drug events [15]. Hospitalization is also considered when effective and safe treatment cannot be ensured in an outpatient setting [15]. Uzbekistan is among the 25 countries with the largest proportion of EPTB, accounting for 35% of 16,272 new and relapsed patients with TB in 2019 [1]. However, the burden of UGTB is not known. Since 2014, it has been recommended that drug-susceptible TB (DSTB) patients in Uzbekistan receive their intensive phase of the treatment in an outpatient TB facility. However, this is not frequently followed because of the continued incentivization in favor of hospital-based care (such as financing of hospitals based on bed numbers and occupancy rates), and the relatively underdeveloped and underfinanced ambulatory sector [16]. The long-term inpatient care puts an additional burden on the limited healthcare resources. Previous research showed that hospitalization may account for half of all TB treatment costs [17]. Moreover, hospitalization increases the risk of TB nosocomial transmission to healthcare workers and other patients and may lead to mental health complications due to isolation [18,19]. None of the previous studies explored the length of hospitalization during TB treatment in Uzbekistan or Central Asia. We aimed to determine the duration of hospitalization during the intensive and continuation phases of treatment, its associated factors, and its relation to successful treatment outcomes among patients with UGTB in a tertiary care hospital in Uzbekistan. Study Design This was a cohort study based on the secondary analysis of patients' records. General Setting Uzbekistan is a lower middle-income country in Central Asia with a population of 33 million, two-thirds of whom live in rural areas [20] The country is divided into 12 provinces, an autonomous republic (Karakalpakstan), and a capital city (Tashkent). Specific Setting In Uzbekistan, TB services are vertically structured and provided at central, oblast, district, and primary health care levels. Funding for the National TB program comes mainly from external donors, in particular, The Global Fund to Fight AIDS, Tuberculosis and Malaria (Global Fund), The United States Agency for International Development (USAID), and Médecins Sans Frontières (MSF). The study was conducted in the Republican Specialized Scientific Research Medical Center of Phthisiology and Pulmonology (RSSRMCPP), a tertiary referral center with branches in regions (district-level TB centers). The majority of patients are referred to RSSPMCTP from the secondary care (regional TB hospitals) and other tertiary care facilities (infectious diseases clinics, urology centers, etc.). The UGTB department at RSSPMCTP consists of seven physicians and 12 nurses and provides diagnostic and treatment services including hospitalization and surgery, as required. In total, 55 hospital beds are available for patients with UGTB at RSSRMCPP. In Uzbekistan, UGTB diagnosis is usually classified into three categories: urinary tract tuberculosis, genital tuberculosis, both urinary tract and genital tuberculosis. Kidney tuberculosis is considered as urinary tract tuberculosis. UGTB diagnosis and treatment regimens follow the WHO guidelines [21]. 
Culture, histopathology, intravenous pyelography, laparoscopy, cystoscopy, and biopsy are the methods employed in the UGTB diagnosis. Clinical diagnosis for drug-susceptible UGTB is based on the patient's history and no history of drug-resistant TB (DR-TB) among the patient's contacts. Treatment for drug-susceptible UGTB consists of intensive and continuation phases. The intensive phase consists of treatment with four drugs (isoniazid, rifampicin, pyrazinamide, and ethambutol) and lasts 56 days. Patients with UGTB are usually hospitalized for the full duration of the intensive phase. As per the national guidelines, intensive-phase treatment can be extended up to 84 days among patients with a positive culture and those with urinary biomarkers of inflammation or treatment complications. The continuation phase consists of two drugs (rifampicin and isoniazid) and is provided for six months in an outpatient setting at RSSRMCPP, regional TB centers, or primary care facilities. The continuation phase is extended up to seven months for patients with disseminated TB, TB/human immunodeficiency virus (HIV) and when urinary tract destruction without bacterial excretion persists for over six months. The continuation phase of treatment can last for 8-12 months, if there is coexisting tuberculous meningitis. Patients with UGTB are hospitalized during the continuation phase in cases of serious adverse effects or need for surgery. Definitions of UGTB treatment outcomes in Uzbekistan are presented in Appendix A. Decisions regarding the length of treatment and duration of hospitalization for each patient with UGTB is made by the "consilium", a committee of physicians authorized to make treatment decisions. Study Population and Period We included all patients who met the following criteria: age ≥18 years, diagnosed clinically with presumed or bacteriologically confirmed drug-susceptible UGTB, and received first-line treatment during 2016-2018 in the UGTB department at the RSSPMCTP in Tashkent, Uzbekistan. We excluded patients with a previous history of UGTB and patients with drug-resistant UGTB as they have a different treatment regimen and longer treatment duration, which requires a separate analysis. Variables, Definitions, Data Sources The variables related to the study objectives were extracted from paper-based health records at the RSSPMCTP. These were sociodemographic, behavioral, and clinical characteristics of patients at admission; dates of diagnosis, treatment initiation, hospitalizations and discharges; count and period of UGTB-related surgeries; and presence of serious adverse drug events during treatment. Serious adverse events are events that are (a) lifethreatening; (b) lead to disability or permanent damage; (c) require hospitalization or its extension; (d) lead to a congenital anomaly or birth defect; (e) other events that may jeopardize the patient [22]. Data on final treatment outcomes was requested from regional TB centers as most patients with UGTB who completed intensive-phase in RSSPMCTP, received continuation-phase treatment in their residential area. Primary outcomes were (i) total length of stay, defined as the duration of all hospitalization episodes during UGTB treatment; (ii) length of stay during the intensive phase of treatment; and (iii) length of stay during the continuation phase of treatment. 
Secondary outcomes were (i) "extended intensive-phase hospitalization" defined as hospitalization for the full length of the extended intensive phase (84 days or longer), and (ii) presence of hospitalization(s) with a duration of one day or longer during the continuation phase of treatment. Final treatment outcomes were classified into favorable (cured and treatment completed) and unfavorable outcomes (death, lost to follow-up, failure). Data Entry and Analysis Data were collected during May-October 2020. We entered selected variables from paper health records into a structured EpiData database created for the purpose (version 3.1 for entry EpiData Association, Odense, Denmark). Fifteen percent of records were double-entered and validated. Characteristics of the study participants were summarized with descriptive statistics using frequencies and proportions (for categorical variables) and mean and standard deviation (SD) or median and interquartile range (IQR) for continuous variables, as appropriate. We calculated proportions of patients hospitalized during the intensive and continuation phases of treatment. The total lengths of stays during intensive and continuation phases of treatment were summarized with medians and interquartile ranges for each phase of treatment. Considering the count nature of primary outcome and presence of overdispersion, negative binomial regressions were used to assess factors associated with total length of stay. Variables for the adjusted model were selected based on AIC (Akaike Information Criteria) stepwise testing. Age and sex were included in the adjusted model regardless of the results of stepwise testing. We calculated proportions of unfavorable treatment outcomes with 95% confidence intervals among patients with different patterns of hospitalization, such as (i) outpatient treatment only; (ii) standard intensive phase hospitalization and no hospitalizations during continuation phase: the shortest length of stay; (iii) extended intensive phase hospitalization and no hospitalizations during continuation phase; and (iv) extended intensive phase hospitalization and hospitalizations during continuation phase: the longest length of stay. A level of significance was set at α = 0.05 for all analyses. Analysis was conducted using R software, version 3.5.2 (Copyright (C) 2018 The R Foundation for Statistical Computing). Patient Characteristics A total of 142 patients were included in the study. The mean (± standard deviation) age was 40 ± 16 years and about half (n = 77, 54%) were males ( Table 1). The majority of patients were from rural areas (n = 113, 80%). All patients had satisfactory living conditions by judgment of their TB physicians. Only three patients (2%) had a history of alcohol abuse at the time of admission while tobacco smoking was more common (n = 30, 21%). 1 Other comorbidities were urogenital disorders (n = 31), cardiovascular diseases (n = 13), neurological diseases (n = 3), cancer (n = 2). Hospitalizations and Length of Stay All 142 patients were hospitalized during the stage of diagnosis. On average, patients spent a week (median: 7 days, interquartile range (IQR): 5-8) in the UGTB inpatient department before confirmation of diagnosis and initiation of treatment. Most patients (96%, n = 136) were hospitalized during the intensive phase of treatment. Of them, 43% (59/136) had an extended intensive phase. The total length of stay during the intensive phase of treatment ranged from 23 to 113 days with a median of 56 days (IQR: 56-57). 
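The length-of-stay outcomes defined above are sums of hospitalization episodes per patient and per treatment phase, and can be assembled directly from admission and discharge dates. The snippet below is a schematic pandas illustration with invented column names and example dates (chosen to give 56-day intensive-phase stays), not the EpiData workflow actually used in the study.

```python
import pandas as pd

# Hypothetical episode-level records: one row per hospitalization episode.
episodes = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "phase": ["intensive", "continuation", "intensive"],
    "admitted": pd.to_datetime(["2017-01-10", "2017-05-02", "2017-02-01"]),
    "discharged": pd.to_datetime(["2017-03-07", "2017-05-20", "2017-03-29"]),
})

episodes["los_days"] = (episodes["discharged"] - episodes["admitted"]).dt.days

# Length of stay per phase and in total, per patient.
per_phase = episodes.pivot_table(index="patient_id", columns="phase",
                                 values="los_days", aggfunc="sum", fill_value=0)
per_phase["total_los"] = per_phase.sum(axis=1)
print(per_phase)
```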
Six patients were treated in the outpatient setting; they received medication at home and were supervised by nurses. Of the six patients who had outpatient treatment since the day of diagnosis, four patients had an extended intensive phase of treatment. During the continuation phase of treatment, 12 of 142 (8%) patients had hospitalizations, and length of stay ranged from 7 to 275 days with a median of 18.5 days (IQR: 12-30). Of those 12 patients, 10 were admitted once during the continuation phase of treatment, one patient was admitted twice, and one patient was admitted three times. Overall, 73 of 142 (51%) patients were hospitalized only during the intensive phase of treatment and stayed in the hospital for 56 days, the full length of the standard intensive phase. Fifty-one patients (36%) had an extended intensive phase and stayed in the hospital over 56 days, but they did not have any hospitalizations during the continuation phase. Eight patients (6%) had the longest length of stay with extended intensive phase hospitalization and admission(s) to the hospital during the continuation phase. The total length of stay during intensive and continuation phases of treatment ranged from zero to 360 days with a median of 56 days (IQR: 56-58 days). Discussion This study explored duration of hospitalization among patients with drug-susceptible UGTB in a tertiary care center in Uzbekistan. We found high rates of hospitalization with nearly everyone hospitalized during the intensive phase of treatment, and nearly one in ten patients had repeated hospital admission during the continuation phase of treatment. This is not in line with the WHO recommendations and national treatment guidelines in Uzbekistan that encourage ambulatory TB treatment [15]. In our study, the median length of stay was 56 days for patients with UGTB, which is similar to the regional average for the duration of hospitalization among patients with drug-susceptible TB in Eastern Europe and Central Asia [1]. In countries with a predominantly ambulatory model of TB treatment, such as the United States or Italy, the average length of stay for extrapulmonary TB has been reported to be between 13-22 days [7,23]. Overhospitalization of patients with TB that is not always justified by clinical need has been described previously in health care systems similar to Uzbekistan, which inherited the Soviet model of hospital-based management [16,24]. Previous qualitative research in the country showed that structural barriers, such as the hospital financing mechanism in the country, which is based on occupancy rates, and lack of comprehensive outpatient care preclude the scale-up of ambulatory TB treatment [16]. In the study population, the admissions to RSSPMCTP began on average a week before UGTB treatment initiation. As UGTB diagnosis is often neglected, patients encounter TB care with severe disease and need inpatient symptomatic treatment before UGTB diagnosis confirmation [25]. Considering the length of stay during treatment and duration of hospitalization before treatment initiation, first-line patients with UGTB had more than two months of inpatient treatment on average. In our study, two-thirds of patients with UGTB had comorbidities, and 6% of them experienced serious adverse events during the treatment, such as life-threatening events or events requiring hospitalizations. This finding may indicate that comorbidities contributed to a more severe condition that required hospitalization [26].
In the previous research, patients with TB and comorbidities, particularly renal and liver diseases, diabetes mellitus, HIV, or cancer, were more likely to be hospitalized and have longer inpatient treatment [8,[26][27][28]. Similarly, we found that patients with UGTB and hepatitis B were more likely to stay longer in the hospital. Associations between the length of stay and other comorbidities were not significant. Hepatitis B and underlying chronic liver disease are well-known risk factors of hepatotoxicity induced by anti-tuberculosis treatment and poor outcomes [29,30]. In our sample, all four participants with hepatitis B developed hepatotoxicity during the intensive phase of TB treatment, and three of them had prolonged initial hospitalization with about three months duration. Unfavorable treatment outcome with treatment failure was observed in one of the four patients with TB and hepatitis B, who had extended intensive phase hospitalization and repeated hospitalizations during the continuation phase. Our data showed that the patients with UGTB who underwent surgery during their treatment were more likely to have a longer length of stay, which is quite expected. In UGTB clinical practice, the intensive phase of treatment starts with the standardized anti-TB treatment; if the patient's health is not improved during the first month of the intensive phase of treatment, a surgical intervention is considered [31]. Common kinds of surgery in patients with UGTB in RSSPMCTP were nephroureterectomy, nephrectomy, urinary diversion, orchiectomy, ureteral stent placement, and reconstructive urethral surgery. Patients with UGTB who had surgery usually stayed in the hospital for an extra month after 56 days of initial hospitalization (standard intensive phase). The history of labor migration was associated with shorter length of stay in our study. Migrants are a key affected population with respect to TB in Central Asia, including Uzbekistan, due to low socio-economic status, limited access to health care while working abroad, and increased vulnerability to HIV infection [32,33]. Considering that prolonged hospitalization leads to loss of productivity and income, migrants tend to avoid hospitalization regardless of the actual needs [34]. High rates of hospitalization and long length of stay might be influenced also by the low socio-economic status of patients with TB. Unemployment, poverty, and homelessness are considered by health workers as social reasons to justify hospital admissions [35]. Some patients may prefer to stay in the hospital to reduce expenditure, such as nutrition or travel expenses for medication refills and treatment monitoring [36]. Previous studies in Uzbekistan have shown that one in four patients with TB is unemployed and at increased risk of loss to follow up [37,38]. Repeated hospital admissions were not common in our study population, which is consistent with previous research. Patients usually start TB treatment in-hospital and initial hospitalization contributes the most to the total length of stay during treatment [8,28]. Research in the United States and Canada, countries that prioritize reduction of hospital admissions, showed similar proportions of patients with TB with multiple hospitalization episodes during treatment (7 and 8%, respectively) [8,39]. The final treatment outcomes were excellent-the treatment success was 94% and there were no patients lost to follow-up. 
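The associations discussed above (longer stays with hepatitis B and surgery, shorter stays with a history of labor migration) correspond to the adjusted incidence rate ratios from the negative binomial model described in the analysis section. A broadly analogous model can be fitted in Python with statsmodels, as sketched below; the file name, column names and the fixed dispersion parameter are assumptions made for illustration (the study used stepwise AIC selection in R), so this is not the study's code or output.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per patient; the columns are assumptions for illustration:
# los_days, age, sex, migration, surgery, hepatitis_b.
df = pd.read_csv("ugtb_patients.csv")  # hypothetical file name

model = smf.glm(
    "los_days ~ age + sex + migration + surgery + hepatitis_b",
    data=df,
    family=sm.families.NegativeBinomial(),  # dispersion parameter left at its default here
).fit()

# Exponentiated coefficients are incidence rate ratios (IRRs) with 95% confidence intervals.
ci = model.conf_int()
irr = pd.DataFrame({
    "IRR": np.exp(model.params),
    "ci_low": np.exp(ci[0]),
    "ci_high": np.exp(ci[1]),
})
print(irr.round(2))
```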
The treatment success reported in the study is comparable with an overall estimate of the TB treatment success rate in Uzbekistan among new patients and relapses (92% in 2018) and with the international literature suggesting a high success rate in UGTB treatment [31]. There were some limitations in our study. While the initial aim was to include all patients who met inclusion criteria in the study, it was not possible to collect data on 302 of 444 (68%) of patients with UGTB. The data collection period overlapped with the COVID-19 pandemic and TB centers were involved in the pandemic response, which made data collection challenging. Hence, the study was subject to a non-response bias. Our study population included patients with UGTB diagnosed at a tertiary care hospital and may not be representative of all patients with UGTB in the country. Variables, such as alcohol abuse and tobacco smoking at admission were not assessed by standardized screening tools and so were likely underestimated. Patient records had limited information on social determinants of health. Health records included information on living conditions but data on socio-economic status, homelessness, and substance abuse were not available. These variables are important predictors of hospital admissions and length of stay as per the published literature [40]. Previous research has shown that hospital-acquired infections, particularly infections caused by drug-resistant microorganisms, are common in patients with TB, which prolong the length of stay [41,42]. Study participants were not routinely screened for hospital-acquired infections and neither were these infections monitored at RSSPMCTP. Therefore, we were not able to assess the impact of hospital-acquired infections on the length of stay. One limitation was the small sample size that restricted the analysis of factors associated with the length of stay. The finding on the link between hepatitis B and longer length of stay did not have adequate statistical power given only four patients in the sample had hepatitis B. Despite these limitations, there are two important programmatic implications. First, the National TB program needs a better understanding of what proportion of hospitalizations and what length of stay in patients with TB are justified by clinical needs. One study in the United States showed that up to 40% of hospitalizations in patients with TB were avoidable [19]. This study considered both clinical and social criteria for hospitalization: clinically unstable on admission; admitted to the intensive care unit or intubated; receiving home or nursing home care; homeless or living in a congregate setting; recent alcohol/drug abuser; aged < 5 years; admitting diagnosis of a severe form of TB; and presence of mental illness [19]. A study in the Russian Federation found that about 20% of hospital admissions in patients with TB would not be justified when clinical (severity of disease), public health (risk of transmission), social (risk behaviors and socio-economic status), and health-system factors (access to outpatient care) are considered as hospitalization criteria [35]. Similar research, particularly in patients with UGTB, will help the National Tuberculosis Programme in Uzbekistan to infer hospitalization data more appropriately. Second, high admission rates at the tertiary care found in the study and multiple comorbidities among patients with UGTB may indicate a delay in diagnosis that contributes to a severe illness and prolonged hospitalization [25]. 
In this regard, there is a need to understand what contributes most to the delay (late presentation by the patients to the health system, low clinical suspicion of UGTB on the part of health care providers at primary care, non-productive referrals within the health system) and address these barriers. Conclusions Our study demonstrated that the hospitalization rates among patients with UGTB in Uzbekistan were quite high, despite WHO and national recommendations in favor of a decentralized, ambulatory model of care. We found that the factors related to the longer length of stay include hepatitis B and surgery while history of labor migration was associated with shorter length of stay. Future research should focus on finding out what proportion of hospitalizations were not clinically justified and could have been avoided. Funding: This paper was produced with financial support from World Health Organization Country Office in Uzbekistan and the German KfW Development Bank which was provided within the project "TB prevention and control in Uzbekistan". Institutional Review Board Statement: Permission to access the data was obtained from the Republican Specialized Scientific Practical Medical Centre of Phthisiology and Pulmonology under the Ministry of Health of the Republic of Uzbekistan. Ethics approval was obtained from the National Ethics Committee of the Ministry of Health of the Republic of Uzbekistan based in Tashkent, Uzbekistan (protocol #1/38-1365 from 24 January 2020). The study was exempted from review by the World Health Organization Research Ethics Review Committee based in Geneva, Switzerland (ERC.0003417/12.08.2020), as the research project analyzed retrospective anonymized patient data. Informed Consent Statement: A waiver of informed consent was granted by ethics review bodies, as the study collected and analyzed de-identified routine recording and reporting data. Data Availability Statement: The data that support the findings of this study are available from the corresponding author, B.I., upon reasonable request. Acknowledgments: The authors thank the Ministry of Health of the Republic of Uzbekistan, Republican Specialized Scientific Practical Medical Centre of Phthisiology and Pulmonology under the Ministry of Health of the Republic of Uzbekistan for defining research questions and providing data for this study, and the secretariat of the European TB Research Initiative (ERI-TB) at the World Health Organization Regional Office for Europe and the World Health Organization Country Office in Uzbekistan for organizing the Structured Operational Research Training (SORT-TB) supported by the German KfW Development Bank, in line with joint World Health Organization/KfW "TB prevention and control in Uzbekistan" project. The SORT-TB curriculum was an adaptation of the SORT IT course of the UNICEF/UNDP/World Bank/WHO Special Programme for Research and Training in Tropical Diseases (TDR) SORT IT course (https://www.who.int/tdr/capacity/strengthening/sort/en/) (accessed on 4 October 2019) to the eastern European and central Asian context. The current course was co-facilitated by officers from the World Health Organization Country Office in Uzbekistan; the World Health Organization Regional Office for Europe; the International Union Against Tuberculosis and Lung Disease (the Union) and individual experts in the area of tuberculosis research. Conflicts of Interest: The authors declare no conflict of interest. 
Disclaimer: The authors alone are responsible for the views expressed in this publication and they do not necessarily represent the decisions or policies of the World Health Organization. Open Access Statement: In accordance with WHO's open-access publication policy for all work funded by WHO or authored/co-authored by WHO staff members, the WHO retains the copyright of this publication through a Creative Commons Attribution IGO licence (http://creativecommons.org/licenses/by/3.0/igo/legalcode) (accessed on 4 October 2019) which permits unrestricted use, distribution, and reproduction in any medium provided the original work is properly cited.

Appendix A. Definition of Urogenital Tuberculosis Treatment Outcomes in Uzbekistan by the National Treatment Guidelines

Cured: A patient with bacteriologically confirmed tuberculosis at the beginning of treatment who was urine smear- or culture-negative in the last month of treatment and on at least one previous occasion.

Treatment completed: A tuberculosis patient who completed treatment without evidence of failure but with no record to show that urine smear or culture results in the last month of treatment and on at least one previous occasion were negative, either because tests were not done or because results are unavailable. A patient with clinically diagnosed tuberculosis who did not have clinical symptoms at the end of treatment, such as normalization of urine, blood tests, and X-ray of the urinary tract.

Treatment failed: A patient with bacteriologically confirmed tuberculosis at the beginning of treatment whose urine smear or culture was positive at month 5 or later during treatment. A patient with clinically diagnosed tuberculosis who had deterioration of urine tests, blood tests, or X-ray of the urinary tract at month 5 or later during treatment.

Died: A tuberculosis patient who died for any reason before starting or during the course of treatment.

Lost to follow-up: A tuberculosis patient who did not start treatment or whose treatment was interrupted for 2 consecutive months or more.
Monadic Dynamics We develop a monadic framework formalising an operational notion of dynamics, seen as the setting and evolution of initial value problems, in general physical theories. We identify in the Eilenberg-Moore category the natural environment for dynamical systems and characterise Cauchy surfaces abstractly as automorphisms in the Kleisli category. Our main results formally vindicate the Aristotelian view that time and change are defined by one another. We show that dynamics which respect the compositional structure of physical systems always define a canonical notion of time, and give the conditions under which they can be faithfully seen as actions of time on physical systems. Finally, we construct state spaces and path spaces, and show our framework to be equivalent to the path space approaches to dynamics. The monadic standpoint is thus as strong as the established paradigms, but the shift from histories to dynamics helps shed new light on the nature of time in physics. In the appendix we present some additional structures of wide applicability, introduce propagators and draft applications to quantum theory, classical mechanics and network theory. Introduction The operational understanding of time as some sort of universal, or free, notion of change has influenced the philosophy of science ever since Aristotle wrote it into history with his Physics: when talking about the nature of time in book IV (ch.12, 220b) [14][15], the Greek father of empiricism indeed claims that "not only do we measure change by time, but time by change, because they are defined by one another". We decide to take the operational notion of time as free dynamics seriously in this work, employing monads as a uniform language for the treatment of time and dynamics, seen as the setting and evolution of initial value problems, in general physical theories. In section 2, the monadic dynamics framework will be progressively exemplified in the mathematical theory of dynamical systems, which has a simple and familiar structure in which to showcase the main ideas. The framework is however much more general, and what will be referred to as time in this work actually encompasses a breadth of different notions of dynamics on closed systems, including all internal groups and monoids acting in symmetric monoidal categories. Furthermore, we take an empirical standpoint and restrict ourselves to notions of time that can be internalised, or simulated, by the physical theory, making our time objects a generalisation of physical clocks. In section 3 we will prove our main claim: under some physically sensible conditions, time is defined by dynamics and dynamics can always be seen as actions of time on physical systems. In section 4 we will show our approach to be equivalent to the more common path-space approaches. The main open question then becomes: what operational characterisation of the dynamics will force time to take the "linear" form observed in most theories of physics? Only time will tell. 2 An operational notion of time 2.1 A categorical approach to physics Under the Wittgensteinian 1 slogan "Don't ask for the meaning, ask for the use", the categorical approach to physics shifts the attention from the physical systems with their structure to the ways the systems relate to and transform into each other. The physical systems themselves become mere labels, their internals nothing other than an emergent characterisation of the way they transform.
2 Once we have a category of physical systems representing a specific theory, it is natural to ask if and how the dynamics of those systems can be described within the category, i.e. with the language and structure of the theory itself. We assume that the concrete dynamics of a physical system A share enough common structure (i.e. a notion of "time"), which can be abstracted and simulated by some bigger physical system T A of our theory, its space of free histories. Each individual dynamic then is a particular way of making the free histories concrete by folding T A onto A, i.e. turning each free history into a concrete history in a way that is meaningful for the physical theory under consideration: dynamic : T A ։ A (2.1) In the mathematical theory of dynamical systems, for example, we have a fixed notion of time that works for all systems, and a system T = R, N, Z, ... that simulates it. The construction of the space of free histories for A will then amount to taking T A = A × T, i.e. the space of "formal time evolutions" of points / states / elements of A. The dynamics of the system will be morphisms (a, t) → evolve t [a] evolving the system in some consistent manner, which will be elucidated in the rest of this section. The operational approach to dynamics We operationally characterise time by seeing it as a way of changing, evolving a system: to do so, we consider the original system A, its space of free histories T A and the space T T A of free histories of T A (as the latter is itself a physical system). Free and concrete histories Under the structure-inducing view of the category C of physical systems for a theory, when talking about subsystems of a system A we will mean ways D d → A of transforming other systems to it. The ways of transforming a fixed system D to system A form the space Hom C [D, A] of D-points of A, and the sheaf of subsystems of a physical system A will be: To us, subsystems of A will be general morphisms D → A: we will avoid the nomenclature "subobjects" altogether, and refer to monomorphisms D A as faithful subsystems. The latter are subsystems where all the structure of D is faithfully embedded into structure for A. Please note that, from now on, we will confuse the hom functors Hom C [ , A] and Hom C [A, ], their categories of elements, and the slice and coslice categories C/A and A\C. We define the free history of some subsystem D d → A to be the following subsystem of T A: The free histories contribute, as expected, to the structure of the physical system T A: Concrete histories Each concrete history of a subsystem D d → A will be obtained as the image of its free history under a dynamic α : T A ։ A for the parent system A: The space of concrete histories Hists[A] for a system A will be discussed in section 4. Liftings When talking about the lifting of a morphism f : A → B, we mean the following morphism: In this sense, the free histories for A are the liftings of subsystems of A. But when using the work "lifting", we will often have in mind its push-forward action f ⋆ on the free histories of systems: There is also a pullback action h ⋆ of liftings on the free histories of systems: Canonical initial surface If we want to characterise "time as change", we first need a starting point to change the system from, i.e. 
an initial surface η A embedding A as a faithful subsystem of T A: For the mathematical theory of dynamical systems, this is given by taking the system at time zero: Canonical evolution The free histories of A have a canonical dynamic / evolution, in that they encode the abstract notion of free dynamics for T A. Operationally, we need a way of canonically evolve the free histories of A by making them follow themselves, in the form of a dynamic for T A: For the mathematical theory of dynamical systems, this is given by adding the times: Structural rigidity We expect initial surfaces and canonical evolution to be compatible with the structure lifted from A to T A and T T A: In particular, naturality of η A means that A can be seen as a leaf in T A, all its morphisms (and hence its structure) faithfully lifted trough the embedding. For the mathematical theory of dynamical systems, this condition reads: T f (a, 0) = (f (a), 0) (2.14) This leaf-wise action of the liftings always holds on the initial surface, but in general the space of free histories need not be foliated: the mathematical theory of dynamical systems is, and the liftings act leaf-wise at all times: Initial value problems For η A to act as an initial surface, we have to require that all the concrete dynamics α : T A ։ A for the physical system A respect it as such, i.e. do nothing on it: This is the abstract construction required to be able to formulate an initial value problem, and for the mathematical theory of dynamical systems it reads: evolve 0 [a 0 ] = a 0 (2.17) Free dynamics The canonical evolution µ A is meant to encode the free dynamics of system A: by this we mean that it encodes the abstract compositional aspect of the evolution, i.e. it tells us what it means to "evolve the system a bit, then evolve it some more". This can be formalised by the following commutative diagram (more details can be found in section 7.1.1 of the appendix): Dynamics as a monad As the reader familiar with the theory of monads will have already noticed, the operational approach presented in section 2.2 (and section 7.1 of the appendix) is equivalent to asking that (T, η, µ) is a monad for the category C of physical systems, with dynamics being the algebras of the monad. Without loss of generality, we will assume each systems to have at least one dynamic: from a formal point of view, this implies that the unit η A is always (split) mono, and hence the initial surface is a faithful subsystem of the space of free histories. Eilenberg-Moore category and dynamical systems The algebras for the monad form, as usual, the Eilenberg-Moore category C T , with objects the bundles α : T A ։ A given by the dynamics, and morphisms f : α → β the bundle morphisms obtained by lifting: From now on we will refer to the objects of the Eilenberg-Moore category as dynamical systems; we'll also refer to the morphisms between them as morphisms of dynamical systems, or dynamical transformations. See section 7.2 for some more notes about symmetries in this paradigm. Kleisli category and Cauchy surfaces We now shift our attention to free dynamical systems, i.e. those in the form: The spaces of free histories T A are canonically free dynamical systems, and their dynamical symmetries h : Aut C T [µ A ] are exactly the symmetries h : Aut C [T A] that respect the free dynamical structure of the underlying physical system A. Section 6.3 of the appendix characterises propagators as dynamics of these systems, and thus we will also refer to them as spacetimes. 
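For the running example of the mathematical theory of dynamical systems, the monad structure and the algebra (dynamics) laws described in this section can be written out and checked directly: T A = A × T with the unit placing a state on the initial surface at time zero, the multiplication adding times, and a dynamic α an algebra precisely when it respects the initial surface and the free evolution. The following Python sketch, with real-valued time and a toy exponential-decay dynamic, is an illustration added here and is not part of the original paper.

```python
import math

# The monad T A = A x T for the dynamical-systems example, with time T = (reals, +, 0).
def unit(a):                 # eta_A : A -> T A, the initial surface: place a at time zero
    return (a, 0.0)

def mult(ats):               # mu_A : T T A -> T A, the canonical evolution: add the times
    (a, s), t = ats
    return (a, s + t)

def lift(f):                 # T f : T A -> T B, leaf-wise action (f(a), t)
    return lambda at: (f(at[0]), at[1])

# A dynamic (algebra) alpha : T A -> A; here a toy decay evolve_t[a] = a * exp(-t).
def alpha(at):
    a, t = at
    return a * math.exp(-t)

a, s, t = 3.0, 0.4, 1.1
# Initial value problem law: alpha . eta_A = id, i.e. evolve_0[a] = a.
assert math.isclose(alpha(unit(a)), a)
# Free dynamics law: alpha . T(alpha) = alpha . mu_A ("evolve a bit, then evolve some more").
assert math.isclose(alpha(lift(alpha)(((a, s), t))), alpha(mult(((a, s), t))))
print("algebra laws hold for this example")
```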
Free dynamical systems form a full subcategory C T of the Eilenberg-Moore category, isomorphic to the Kleisli category. There is a bijection between the morphisms g : µ A → µ B of free dynamical systems and morphisms of physical systems in the form f : A → T B: where f ⋆ is called the Kleisli extension of f . Thus morphisms of free dynamical systems can equivalently be seen as morphisms of physical systems (their Kleisli form), with the following composition law inherited from the Eilenberg-Moore category: In particular, the dynamical symmetries g : Aut C T [µ A ] of a space of free histories T A correspond bijectively to the morphisms f : A → T A that are iso in the Kleisli category: i.e. the morphisms f : ). But what do these morphisms look like? First of all, note that if f is the Kleisli form of a symmetry of a free dynamical system µ A , then in particular it is (split) mono: and hence it embeds A as a faithful subsystem A f T A of its space of free histories, with the automorphism f ⋆ = µ A · T f : Aut C [T A] mapping the initial surface η A : A T A to it. To get an understanding of the action of f ⋆ over the entire space we look once again at the mathematical theory of dynamical systems: In this case, the map factors as an automorphism g : Aut C [A] and a field of time translations t : A → T: it is in fact what we'd usually call a "Cauchy surface". Informally, a Cauchy surface is a surface that intersects all maximal causal curves exactly once. Operationally, given a Cauchy surface Σ any other Cauchy surface can then be obtained by dynamical symmetries, simply by deforming Σ along the individual causal curves. Now two considerations from our conceptual framework come into play: 1. Closure. Since our system is closed in its evolution, all Cauchy surfaces will "look like A", i.e. they will be faithful subsystems f : A T A. 2. Initial surface. One of the operational requirements in our framework was the possibility of canonically setting initial value problems. As a consequence we already have a canonical Cauchy surface, the initial surface η A : A T A. Thus we define a Cauchy surface as a faithful subsystem f : A T A that can be pulled back to the initial surface via some dynamical symmetry of the space of free histories: As seen in eq'n 2.25, every dynamical symmetry of the space of free history gives a Cauchy surface as its Kleisli form. But the opposite is also true: if f is a Cauchy surface according to eq'n 2.27, then by uniqueness of inverse for h in C T we have that h = f ⋆ and we recover eq'n 2.25. We conclude that Cauchy surfaces are captured exactly by the automorphisms of spacetimes in the Kleisli category. 3 Time in symmetric monoidal categories 3.1 Time from change Strong monads If physical systems have some distinguished compositional structure, encoded by a (symmetric) monoidal structure (C, ⊗, I), then it is interesting to ask whether there is a (physical) relation between the dynamics of individual systems and the dynamics of composite systems. In [24][25] [29][28], a monoidal monad is defined to be a monad T on a symmetric monoidal category (C, ⊗, I) which comes with a natural family m A,B of morphisms relating the spaces of free histories of the individual systems to the space of free histories of their joint system: The definition of monoidal monad directly formalises our initial question. 
On the other hand, it can be shown that a monoidal monad is the same as a commutative strong monad, which comes with two natural transformations: There is a sense in which the transformations are associative, commute with each other, and respect I, η and µ: see the references for the 5 commutative diagrams that formalise those notions. Foliation maps We are interested in the equivalent formulation as (commutative) strong monad because these come with an evident natural foliation of the space of free histories: From these foliations we can define the time object, time system or simply time, to be the space of free histories T def = T I of the trivial system: foliate A can indeed be seen as a foliation of the space of free histories in same-time leaves, and naturality of foliation relates the same-time action f ⊗ id T of a morphism f to its lifting T f : We stress that the foliation does not, in general, need to cover the space of free histories, and that the leaves do not need to be faithful embeddings of A into T A. Concrete histories as curves parametrised by time In section 2.2.1 we introduced the notion of free history for a subsystem. For symmetric monoidal categories, there is a particular class of subsystems we are interested in, the states of a system: Morphisms f : A → B have the usual pushforward action on states, given by: The free histories for the states of a system are the reason we refer to T as the time of our theory, as a concrete history hist α ψ of a state ψ : States[A] is a curve in A, parametrized by time, which at the initial time η I : States[T] goes through ψ: Furthermore, the canonical evolution gives an action of time on itself, which makes time into a commutative monoid (T, ν, η I ) encoding time translation: It can be shown 3 that concrete histories are compatible with this time translation. The full construction can be found in section 7.3.1 of the appendix. In conclusion, dynamics that are compatible with the compositional structure of physical systems always come with a canonical notion of time (T, ν, η I ), making concrete histories into timeparametrised curves. On the other hand, there need not always be a natural way to faithfully see dynamics as actions of time on physical systems, but this is a topic for the next section. Uniform monads Any monoid induces a canonical monad, which we will call a uniform monad, as the endofunctor 3 It is a direct consequence of section 3.2.2. along with unitη the triangle identities and square identity follow from the results on ν proven in section 7.3.1 of the appendix. Equivalently, uniform monads can be characterised as commutative strong monads where the foliations are all isomorphisms. The dynamics for a uniform monads are exactly the actions α : A ⊗ T ։ A of time, as an internal monoid, on physical systems. Thus, at least for uniform monads, dynamics are determined by time. Dynamics as actions of time on physical systems If the monoid (T, ν, η I ) is the time of some commutative strong monad T , then we shall refer to the monad ( ⊗ T,η,μ) as the associated uniform monad for T . 
It is not hard to prove that foliation is not just a natural transformation, but in fact a morphism of monads: In particular, algebras α for the monad T will be pulled back, via the foliation maps, to algebras α for the associated uniform monad which will yield the same concrete histories for all states: Thus, foliation maps always allow us to see dynamics as actions of time on physical systems, by lifting them to dynamics for the associated uniform monad. See section 7.4.1 of the appendix for the various proofs. Epically strong monads Foliation maps always make it possible to see dynamics as actions of time, and concrete histories of states are always invariant, but in general the lifting of dynamics need not be a faithful process, and histories of arbitrary subsystems may be altered (unless the theory has enough states). We will call a monad epically strong if the foliations are all covering, i.e. epimorphisms: Intuitively, this means that the entire space of free histories can be foliated, not necessarily without "singularities", by time-indexed leaves isomorphic to the physical system A. Also, this is equivalent to requiring that t A,B are all epimorphisms, because of the following identity: 4 Dynamics can always be faithfully lifted actions of time on physical systems: Not all such actions, on the other hand, need be dynamical: the foliation map can be seen as imposing constraints 5 , and the actions of time on a system that correspond to dynamics are exactly those that respect those constraints, i.e. factor through the foliation. In conclusion, dynamics that respect the compositional structure of physical systems 6 come with a canonical notion of time, the space of free histories of the trivial system, and can always be seen as actions of time on physical systems by lifting them to dynamics for the associated uniform monad. Concrete histories of states are always invariant, but in order to guarantee the lifting to be a faithful process one has to require the original monad to be epically strong. It is no coincidence that most dynamics in physical theories seem to be described by epically strong monads: the latter are exactly those for which it is equivalent to see dynamics as actions of time on systems, possibly imposing some constraint on the actions by hand. 4 The space of concrete histories 4 .1 State spaces In physics, the most common approach to dealing with histories of a system is to consider histories of states under specific dynamics, rather than free histories: this section will be dedicated to reconciling our approach with the path-space approaches. To start off, recall that the states of a system A are given by the hom-set: , then we will say that the theory / category has enough states. From now on we will assume to have enough states; also, we will assume static systems ǫ A : A ⊗ T ։ T exist, making the Eilenberg-Moore forgetful functor into a bundle U : C T ։ C of categories (see section 6.2 of the appendix for more details on static systems). The state spaces States[A] of physical systems A : C arrange themselves into a concrete category Hom C [I, C], the category of state spaces, with morphisms given by Under the assumption of enough states, is immediate to check that C ∼ = Hom C [I, C]: this shows that one can equivalently work with the original category of physical systems or with the corresponding category of state spaces. We will also assume to be working with uniform monads. 
Spaces of concrete histories Concrete histories are morphisms from time to physical systems, i.e. they live in Hom C [T, C], but in fact more is true. It is immediate to see that concrete histories are T-module homomorphism hist α Ψ 0 : µ → α, and on the other hand one can show that all such homomorphisms 7 Ψ : µ → α arise as concrete histories: Thus concrete histories are not just the transformations of time, as the physical system T, into a physical system A, but in fact they coincide with the transformations of time, as the dynamical system µ : T ⊗ T ։ T, into a dynamical system α : A ⊗ T ։ A, allowing the following neat definition for the space of concrete histories of a dynamical system: It is possible to show [21] that this formulation of concrete histories as module homomorphism is equivalent, in the case of Quantum Mechanics, to Shrödinger's equation. If α, β : C T are two dynamics and f : α → β is a dynamical transformation 8 , then f can be lifted to a morphism between spaces of concrete histories for specific dynamics (forming a category H C, just like algebras form the Eilenberg-Moore category C T ): Hists and vice versa it's easy to prove, since our theory has enough states, that any such morphism comes from a dynamical transformation in this way. Thus we can define an isomorphism of categorical bundles 9 , linking dynamics and spaces of concrete histories for them (see diagram 4.6 p.14). We showed earlier that we can equivalently work with physical systems or with their state spaces: diagram 4.6 extends this claim by showing that working with physical systems, dynamics and spaces of free histories is the same as working with state spaces and spaces of concrete histories. We conclude that the more common, static, "concrete" approach to spaces of histories is equivalent to the monadic, dynamic, "free" approach presented in this work. Section 7.5.1 of the appendix contains a couple of additional remarks. Conclusions and future work In section 7.5.1, we have presented a monadic framework formalising an operational notion of dynamics in physical theories seen as the setting and evolution of initial value problems. We have identified in the Eilenberg-Moore category the natural environment for dynamical systems, and have defined Cauchy surfaces abstractly as automorphisms in the Kleisli category. In section 3, we have proven the main claim of the work, vindicating the Aristotelian view that time and change are defined by one another. We have shown that dynamics which respect the compositional structure of physical systems always define a canonical notion of time, and on the other hand that they can always be seen (not necessarily faithfully) as actions of time on physical systems. We have furthermore given an exact characterisation of the conditions allowing this lifting of dynamics to actions of time to be carried out in a faithful manner. Finally, in section 4, we have constructed state spaces and the analogous to path spaces. We have shown that, under some physically reasonable conditions, the monadic dynamics approach is equivalent in applicability to the more common path space approaches to dynamics. The appendix briefly drafts applications to a number of theories. Upcoming work from the author will focus on the applications of monadic dynamics to categorical quantum mechanics [21]. 
Future work will further investigate the application to classical and relativistic mechanics, the devised applications of propagators to network theory and flow in graphical calculi, the formulation of ergodic and chaos theory in the monadic framework. The author wishes to thank Bob Coecke, Aleks Kissinger, Amar Hadzihasanovic, Nicolò Chiappori and Sukrita Chatterji for useful feedback, discussions and support. Funding from EPSRC and Trinity College is gratefully acknowledged. 6 Appendix A: Applications 6.1 Draft applications Mathematical theory of dynamical systems The mathematical theory of dynamical systems (e.g. refer to [32] [23]) is embodied by the category Diff of smooth manifolds and differentiable maps, with cartesian product as composition of systems. The monadic construction has been carried in section 2. Pure state quantum mechanics The Categorical Quantum Mechanics programme (e.g. see [1] [9]), and related works (e.g. see [22][10] [37]), show how Quantum Theory can be understood abstractly in terms of symmetric monoidal categories. In particular we're interested in quantum computation and concrete simulation of finite-dimensional quantum systems, so we'll be working in the category fdHilb of finitedimensional complex Hilbert spaces and linear maps. 1. Complex Hilbert spaces come with a symmetric monoidal structure, with the notion of joint system given by the tensor product ⊗ and the trivial system given by the 1-dim space I := C. States of a system H are the morphisms I → H. The space of free histories for a system is given by T H := H ⊗ T, with T any fixed N-dim space (a clock with N positions). 3. There is a distinguished basis (|t n ) n:Z N for T, with N the length of time. The group Z N induces a monoid structure (T, µ, η) on T by: 4. This yields the initial surface and canonical evolution. In fact this is a group structure, with time inversion given by i : T → T |t n → |t −n (6.3) 5. The lifting of maps is given by same-time action, similarly to the previous example: The detailed construction, and its applications to time in quantum mechanics, can be found in the upcoming [21]: as an example of application, the construction can be used to understand time / energy duality in quantum mechanics in terms of the notion of strong complementarity introduced by [13]. The issues with extending this approach to infinite time are also covered in [21], and are related to the work presented in [2]. Quantum measurements The operational understanding of quantum measurements is one of the many faces of the Categorical Quantum Mechanics programme, and is presented in [13][14] [12]. The monadic dynamic framework can be used to characterise non-demolition measurements 10 as the concrete dynamics in mixedstate quantum mechanics for a particular notion of 1-step time. Classical and relativistic mechanics The work of Jean-Marie Souriau [34][35] on the evolution space of a mechanical system, as presented by [27], fits perfectly into the monadic framework, and will be the subject of future work. Notions of time in computation As mentioned in section 9, monads have a well-established role in the abstract modelling of computation, and references [29][31] give a number of constructions of interest. Each of those monads comes with a particular notion of time, and computations can be characterised as morphisms of dynamical systems. Particularly interesting is the notion of time for interactive input, which gives a model of branching time of somewhat Everettian flavour. 
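Returning to the pure state quantum mechanics application above, the finite clock and its Z_N structure can be spelled out concretely. The following is an illustrative sketch assuming fdHilb is modelled with numpy arrays; the dimension N, the 2-dimensional system and the unitary U are our own choices (the angle is picked so that U has order N, making the action a genuine action of Z_N).

```python
# Sketch of the N-position clock from the pure-state quantum mechanics example,
# with fdHilb modelled concretely via numpy arrays. N and the step unitary U are
# illustrative choices, not fixed by the text.
import numpy as np

N = 8                                    # length of time: a clock with N positions
clock_basis = np.eye(N)                  # distinguished basis |t_0>, ..., |t_{N-1}>

def t(n):
    """Time state |t_n>, with indices taken modulo N (the group Z_N)."""
    return clock_basis[n % N]

eta = t(0)                               # the initial time

def mu(m, n):
    """Monoid structure induced by Z_N on basis states: (|t_m>, |t_n>) -> |t_{m+n}>."""
    return t(m + n)

def inv(n):
    """Time inversion |t_n> -> |t_{-n}>."""
    return t(-n)

# A dynamic on a 2-dimensional system: act with the n-th power of a fixed unitary
# when the clock reads |t_n>.  theta = 2*pi/N ensures U**N = identity, so this is
# a genuine action of Z_N.
theta = 2 * np.pi / N
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def alpha(psi, n):
    return np.linalg.matrix_power(U, n % N) @ psi

# A concrete history of a state under alpha is the curve n |-> alpha(psi, n),
# parametrised by the clock positions.
psi0 = np.array([1.0, 0.0])
concrete_history = np.stack([alpha(psi0, n) for n in range(N)])
```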
The dynamics of exploration In order to better clarify the breadth of application of the framework, we briefly consider the following 3 views of spacetime in field theory, in order of increasing relativism: 1. the value of a field over the entire spacetime is known; 2. the value of a field over the entire space at some initial time is known, and evolved through time: its space component is part of the physical system, its time component is encoded by the concrete dynamic chosen; 3. the value of a field at a point is known, and the field is evolved through time and explored through space: both the time component and the space component are encoded by concrete dynamics. The last point of view shows what we could call "dynamics of exploration": one has a setting which, when seen in its entirety, would be static, but is understood through the dynamics. A monad modelling this setting could for example be T A := A × X × T, where A is the system encoding the possible values of the field at a point of spacetime, X is the group of motions associated with the underlying space (encoding the exploratory dynamics) and T is the time object (encoding the evolutionary dynamics). Applications of these "dynamics of exploration" might include: 1. the modelling of empirical and experimental scenarios, where a physical system is explored by manipulating several knobs; 2. the modelling of games, where a starting position is known and the evolution of the game is an exploration of the tree of moves. This exploratory point of view, and its applications, will be the subject of future work. 6.2 The operational structure of time Operations for time We have just seen that the dynamics of the theory are nothing but actions of time on physical systems. We already know that time is a monoid (T, η, µ), and the rest of this section will cover some additional structure for it that might be of interest. Firstly, we are interested to know whether there is a canonical notion of static system, i.e. whether each system comes with a natural "static" dynamic which just discards time: Naturality of ǫ A is what determines this static nature: it yields a well defined functor making our bundles ǫ A : A ⊗ T ։ A into trivial bundles. We can understand the dynamic ǫ A as "doing nothing" to the physical system A. It is immediate to see that an erasure operation ǫ : T ։ I for time will do the job by setting ǫ A := id A ⊗ ǫ; vice versa we can always recover an erasure operation by taking ǫ := ǫ I , and the static systems will necessarily be given by ǫ A := id A ⊗ ǫ I (if we assume our theory has enough states 11 ). This idea that the dynamics ǫ A are static is further confirmed by their action on free histories, which discards time leaving the initial value untouched: Secondly, in section 6.3 we will need time to be a copiable resource. By this we mean that we have an erasure operation ǫ which can be extended to a comonoid (T, ǫ, δ) satisfying the bialgebra law and duplicating the initial time. 12 There is no need for all states I → T of time to be copiable like the initial time (states that do, though, are closed under µ). Thirdly, it is interesting to have a notion of time inversion, i.e. an involution i = i −1 : T → T that satisfies the Hopf law with the comonoid above. If physical systems form a dagger category, then we say that the dagger structure is compatible with time inversion, i.e. 
can be interpreted as abstracting time-reversal of transformations, if daggering the action of any time state (under some fixed dynamic α) yields the same result as letting the inverse time state act (see eq'n 6.10 for notation): Finally, we would like to mention the role of richer time structures found in compact closed symmetric monoidal categories. If the group structure of invertible time can be enriched to a full Frobenius algebra, then it can be shown that strong complementarity (see e.g. [11]) plays the role of time / frequency duality 13 , and can be used to characterise Fourier transforms in terms of a change of basis. More details about this can be found in the aforementioned [21]. Propagators How do we keep track of evolution of a state in time? The space of free histories gives us the perfect abstract tool to do that, as long as we're interested in knowing its history at states of time that can be copied. From now on, we will implicitly assume (unless explicitly stated otherwise) that time is commutative and copiable, and assumption that time is invertible will be made explicitly when needed; most of the results can be extended to the smooth case, but this will be the subject of separate, future work. Given a dynamical system α : A⊗T ։ A, we define its (time-translationally invariant) propagator [α] : A ⊗ T ⊗ T ։ A ⊗ T to be the following morphism: δ α µ (6.9) The propagator of α takes a system in state ψ at time t, a (copiable) amount ∆t of time to evolve the system for, and returns the evolved system state α · (ψ ⊗ ∆t) at the new time µ · (t ⊗ ∆t). It is worth mentioning that time-translation can be seen as a propagator µ A = [ǫ A ] for systems coming with static dynamics. It is easy to see that propagators of dynamics are themselves dynamics: the initial time and the multiplication are always copied, and the fact that both α and µ are algebras of the monads takes care of the rest; so propagators of dynamics for a system A give us our first examples of non-free dynamics for A ⊗ T. We now define the restriction of a dynamic α : A ⊗ T ։ T to a time state t : States[T]: Fixing any amount of time ∆t to evolve a dynamical α : A ⊗ T ։ A system for, we can turn a propagator [α] into an endomorphism [α]| ∆t : A ⊗ T → A ⊗ T; since we assumed time to be commutative, it can be proven that this endomorphism is in fact an endomorphism µ → µ of time. By taking α = µ, one can further show that time is commutative if and only if all α and all ∆t give an endomorphism of time (conditional to the existence of enough copiable time states). Propagators give us a way of tracking the evolution of a system under a fixed, time invariant dynamic; there are plenty of applications, on the other hand, where more flexibility is needed, flow of circuits and time-varying Hamiltonians amongst them. Luckily there are more dynamics for the spaces of free histories yet to come. We will refer to dynamics β : A ⊗ T ⊗ T ։ A ⊗ T in general as propagators, reserving the term propagators of dynamics, or time-translationally invariant propagators, to dynamics for A ⊗ T in the form [α]. We can see a propagator β equivalently as an endomorphism: For the purposes of modelling things like time-dependent Hamiltonians 14 , we will consider Markovian propagators, i.e. 
propagators that take the form: Discarding time, we immediately have a time-dependent family of generators: On the other hand, any morphism 15 (U t ) t : A ⊗ T → T can be turned into an endomorphism Λ, a candidate family of generators for a Markovian propagator; there is, unfortunately, no guarantee that a propagator β : A ⊗ T ⊗ T ։ A ⊗ T will exist s.t. β| 1 T = Λ. The conditions under which this definition of propagators from families is possible, e.g. compact closure and the existence of bases, are covered more in detail in the aforementioned [21], which also deals with a notion of causality for propagators. A first application of propagators is the evolution operator for pure state quantum mechanics [33] in the case of a time-dependent Hamiltonian H(t): time dependence makes the underlying Hilbert space H a non-closed system for the purposes of modelling the dynamics, but we can close it by keeping track of time as well, i.e. by working in H ⊗ T. We'll take evolution from time |n := n1 T to time |n + 1 to be given by the unitary: U | 1 T |n := exp[ i H|n ] (6.14) where we're working on a basis (|n ) n for T generated by |1 := 1 T . Then the usual time-evolution operator on H can be defined as the propagator: where the product is expanded right to left as j increases. Other applications are left to future work: 1. Network theory. Non time-translationally invariant, Markovian propagators can be used in the modelling of sequential electrical and signal flow circuits like those in [5][19] [4], while non-Markovian (and non-causal) propagators extend the modelling to loopy circuits. 2. Other graphical calculi. More in general one can model flow in graphical calculi associated with symmetric monoidal categories: for example, the author will be interested in working out the monadic dynamics for the Feynman diagrams calculi of [18][6]. 3. Time travel. Non-causal propagators can also be used to model some of the quantum mechanical time-travel proposals presented in [3]. 4. General relativity. Propagators can be used to generalise dynamics in non-singular spacetimes: this will be covered as part of the application of monadic dynamics to relativistic mechanics. Since the two processes should yield the same result, we will ask for the corresponding morphisms to coincide. This gives an operational characterisation of µ A as free dynamics for A in the form of the following diagram: For the mathematical theory of dynamical systems, this condition reads: Neutral dynamics Being itself a concrete dynamic of a physical system, the canonical evolution of free histories must respect the relevant intial surface: η T A then provides a form of neutral dynamics on A, as exemplified by the mathematical theory of dynamical systems: The canonical evolution also has to respect the free dynamics for the system T A: Initial surface revisited Not only the initial surface should allow us to specify initial value problems, but we also expect dynamics for a system A to somehow get lifted to η A : A → T A, i.e. to A seen as an initial surface. 
Under our structure-inducing viewpoint, we translate this into the requirement that concrete histories for subsystems d : D → A of a physical system A canonically correspond to concrete histories for η A · d, their embedding as subsystems of the initial surface: We further strengthen 16 this to the following, more operational condition: For the mathematical theory of dynamical systems, the condition above reads: 16 In fact most cases of interest have enough free histories and enough dynamics to separate morphisms from and to T A, in which case the two requirements are equivalent. that remains unbroken under that particular dynamic of system A: For the mathematical theory of dynamical systems 17 this condition reads as usual: while for time-dependent dynamical symmetries Φ : T A = A ⊗ T → A one gets: The double appearance of ∆t on the LHS means that time must have a separate identity from the system A, and that it must be possible to duplicate its relevant states: section 6.2 of the appendix explains how this condition can be formalised. Since we mentioned symmetries, it is worth remembering that they form a category Aut C [ ], with objects automorphisms φ : A → A of C and morphisms: Action of time on itself The canonical evolution gives an action of time on itself as: 14) The action is associative and reduces to the identity on the initial time, as shown in figure 1 (p.25). The proof of associativity is given by the following commuting diagram: 17 Not the general ones defined above, the smooth manifolds with 1-parameter groups of endomorphisms. where commutativity of the square labelled "see below" is given by the following detail: To prove the other unit law required to get a monoid ν : T ⊗ T ։ T, we need to use the t ′ A,B transformations from eq'n 3.2. We start by defining the symmetric partner of foliate A : Then the definition of commutative strong monads guarantees 18 that the following two definitions for the action of time on itself are consistent: The other unit law follows immediately by symmetry, and we have an associative monoid ν : Dynamics as actions of time on physical systems Foliation is not just a natural transformation, but in fact a morphism of monads: foliate : ( ⊗ T,η,μ) → (T, η, µ) (7.20) where we definedη A := id A ⊗ η I andμ A := id A ⊗ ν. Indeed by naturality of t (and thus of foliation) it's immediate to see that: Furthermore by associativity of t one gets the following, where the two morphisms on the top coincide because of naturality of foliation: Going back to the pulled back algebrasᾱ = α · foliate A from ??, we need to show that they allow to set initial value problems (from now on IVPs, see section 2.2.7) for the associated uniform monad ( ⊗ T, η I , ν), and respect its free dynamics (see section 2.2.8). The possibility of setting IVPs follows from the fact that t (and thus foliation) respects η, and that α allows for IVPs to be set for T : The free dynamics follow by naturality of foliate, the free dynamics for α, the fact that t respects µ, and the fact that t is associative, natural and respects the unitors: The proof of commutativity for the upper left quadrant goes by associativity and naturality, similarly to that of diagram 7.15 (p.24). As a final note, uniform monads are particularly nice to work with, as their structure is completely determined by the monoid encoding the action of time on itself, and there is a clear-cut separation between the transformations / subsystems of the original system and its free dynamical structure. 
Furthermore, it is easy to see that uniform monads are commutative 19 , and can be used to combine multiple independent notions of dynamics together. 7.5 The space of concrete histories 7.5.1 Spaces of concrete histories As an aside, we note that free histories in FreeHists[A] are exactly those concrete histories for the free dynamic id A ⊗ µ : A ⊗ T ⊗ T ։ A ⊗ T that take their initial value in A, i.e. that factor through the unit η A : A A ⊗ T at the initial time η. Finally, just like the free histories perspective is monadic in nature, the concrete histories perspective has a comonadic flavour to it, and lacks the dynamical character of the former. The morphisms hist α mapping states to the associated concrete histories are sections of the following bundle: · η : Hists[A] ։ States[A] (7.25) and are in fact the coalgebras of a certain comonad on the "higher path spaces" Hom C [T ⊗n , A]. 19 And therefore closed under composition (whereas general monads are known not to be). Appendix C: Conventions New definitions are introduced by boldface, while proofs are integrated with the narrative. Some of the notational conventions might also be slightly unusual: 9 Appendix D: Discussion of related work The general understanding that physical theories can be turned into categories of systems and transformations has a number of sources of inspiration, but is perhaps best presented by the collected works [22][10] [15]. The operational characterisation of dynamics is more affine to the spirit of categorical quantum mechanics and quantum theory [1][9] [12], while the idea of internalising time by physical simulation was inspired by [20]. Only an elementary understanding of category theory is required, as most of the constructions will be carried out explicitly: [7][8] will do well as references. The theory of monads, particularly in the context of symmetric monoidal categories, is exposed in the seminal works [24][25] [26][28] [36]. Furthermore, the 2-categorical nature of monads suggests that the higher quantum theory programme of [37] will play a significant role in the application of monadic dynamics to quantum theory. The mathematical theory of dynamical systems [23] will provide a wealth of examples for the initial construction, a germ of the free histories idea for this theory having already appeared 20 in [30]; applications to ergodic and chaos theory [32] will be covered by future work. Certainly the main application at present, the dynamic of quantum theory is presented in the separate upcoming work [21] by the author; a similar, but unrelated, (co)monadic construction appeared in [14][13] in the context of measurements. The time / energy duality refers to the complementary structures of [11], while the issues associated with infinite time are related to the work in [2]. Our framework can be applied to classic and relativistic mechanics following the evolution space construction of [34][35] [27]: this application will be the subject of future work. Furthermore, monads have already played a prominent role as abstract models of computation for the past 20 years, to the point that the authors of [31] assert that "Notions of computation determine monads": 21 : future work will explore the application of the monadic dynamics framework to some computational monads from [29]. The propagator fragment of the framework can be used to model flow in graphical calculi, in particular electrical and control flow networks [4]
Hepatitis B virus infection reactivation in patients under immunosuppressive therapies: Pathogenesis, screening, prevention and treatment With a 5.3% of the global population involved, hepatitis B virus (HBV) is a major public health challenge requiring an urgent response. After a possible acute phase, the natural history of HBV infection can progress in chronicity. Patients with overt or occult HBV infection can undergo HBV reactivation (HBVr) in course of immunosuppressive treatments that, apart from oncological and hem-atological diseases, are also used in rheumatologic, gastrointestinal, neurological and dermatological settings, as well as to treat severe acute respiratory syndrome coronavirus 2 infection. The risk of HBV reactivation is related to the immune status of the patient and the baseline HBV infection condition. The aim of the present paper is to investigate the risk of HBVr in those not oncological settings in order to suggest strategies for preventing and treating this occurrence. The main studies about HBVr for patients with occult hepatitis B infection and chronic HBV infection affected by non-oncologic diseases eligible for immunosuppressive treatment have been analyzed. The occurrence of this challenging event can be reduced screening the population eligible for immunosuppressant to assess the best strategies according to any virological status. Further prospective studies are needed to increase data on the risk of HBVr related to newer immunomodulant agents employed in non-oncological setting. INTRODUCTION Hepatitis B Virus (HBV) is a major public health challenge requiring an urgent response. According to the Global Hepatitis Report endorsed by World Health Organization (WHO) in 2017, the proportion of children 5 years old become chronically infected felt to 1.3% in 2015, compared with 4.7% of the prevaccine era, ranging 1980s to 2000s worldwide [1]. The spread of HBV vaccination during the childhood reduced the incidence of new HBV infections and the related possible chronicity [1]. However, it is estimated that about 3.5% of the global population (257 million people) in 2015 are affected by chronic HBV infection, most of them born before the availability of HBV vaccination: 68% of them are localized in Africa and in Western Pacific Region [1]. About 2.7 million of persons are co-infected with HBV, HDV and HIV and, among those with hepatitis, the estimated cumulative 5 years incidence of progression is estimated around 8%-20%[2] and 5%-15% of cirrhotic patients develop hepatocellular cancer (HCC) during the lifetime [2]. HBV belongs to the Hepadnaviridae family. It is a double stranded DNA virus with a lipoprotein envelope and a high hepatic tropism. Its transmission happens through the vertical route or intra-family contacts among infants and by sexual or parenteral contact. The first case is typical in regions with the highest prevalence determining the high endemicity described in these areas and the associated high rate of chronicization. The second case is common in regions with low prevalence among adults; nevertheless, high Hepatitis B surface antigen (HBsAg) prevalence there, can be encountered among immigrants from high HBV endemic area, People Who Inject Drugs (PWID), Men who have Sex with Men and People Living With HIV [3]. 
After a possible acute phase, the natural history of HBV infection can progress to chronicity, which consists of 5 phases, based on the HBeAg serostatus, the viral load, the transaminase levels and the grading/staging of the liver disease [4][5][6]. During the first one, once known as the "immunotolerant phase" and currently named "HBeAg positive chronic infection", the immune response against the virus is limited or absent: thus, there is high viral replication with HBeAg positivity, while transaminases and the liver parenchyma remain unchanged. The second phase, called "HBeAg positive chronic hepatitis", is characterized by an active immune response of the host against viral antigens, with a reduction of the viral load and an increase of transaminase levels along with liver inflammation. If the immune response controls the infection, the disease moves to the third phase, known as "HBeAg negative chronic infection", with HBeAg sero-clearance, low viral replication (HBV-DNA < 2000 IU/mL), normalization of transaminase levels and mild liver inflammation. However, severe liver inflammation and rapid progression of disease can still occur, despite the presence of HBeAb, in case of mutation of the pre-core or basal core promoter regions. The fourth phase is the "HBeAg negative chronic hepatitis" one, with detectable anti HBe, moderate levels of serum HBV-DNA and ALT, and hepatic necroinflammation. The last phase is the HBsAg negative phase, with serum negative HBsAg and positive anti HBc, with or without anti HBs. This phase is also called "occult hepatitis B virus infection" (OBI), defined as the presence of replication-competent HBV DNA in the liver and blood in the absence of detectable HBsAg, which contributes to the advancement of liver fibrosis and the development of HCC. Patients with overt or occult hepatitis B virus infection can undergo HBV reactivation (HBVr) in the course of immunosuppressive treatments. Apart from oncological and hematological diseases, immunosuppressants are also used in rheumatologic, gastrointestinal, neurological and dermatological settings, as well as to treat severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. The aim of the present paper is to investigate the risk of HBVr in these non-oncological settings in order to suggest strategies for preventing and treating this occurrence. HBVr can be defined as the de novo detection of HBV DNA, or a ≥ 10 fold increase in HBV DNA level compared to its baseline value, in an HBsAg positive subject, or as seroreversion to HBsAg positive status in previously negative patients [7]. The viral genome can be detected as cccDNA in hepatocytes. The risk of HBVr following immunosuppressive treatments depends on the patient's characteristics and the kind of immunosuppressive agent employed. As regards host characteristics, apart from male gender [8], older age [9] and any underlying lymphoproliferative disease [10], the serostatus during immunosuppression is crucial. In fact, patients affected by chronic HBV infection have a greater risk of reactivation compared to those with OBI. Moreover, the presence of anti HBs among HBsAg negative subjects is related to a lower risk of reactivation even in the hematologic setting, according to Seto et al [11]. Regarding immunosuppressants, the risk of related HBVr can be classified as high, with a frequency of reactivation > 10% without prophylaxis [7]; medium, with a frequency of reactivation of 1%-10% [12]; or low, with a frequency of reactivation < 1% [13].
A high risk of reactivation is described with the administration of B cell depleting agents [14], anthracycline derivatives [15] and corticosteroids at high dose, for treatments of more than 4 wk[7], along with inhibitors of cytokine, integrin[16], tyrosine kinases [17] and JAK kinases inhibitors [18]. The risk of HBVr is related to the immune status of the patient and the baseline HBV infection condition. The risk of developing HBVr is quite low for HBsAg positive or negative patients under csDMARDs and short low dose cortisone based therapy. The same risk is however higher for patients under anti-TNFs and tyrosine kinase inhibitors: when in combination, the risk is the highest. Here reported are the main studies about HBVr for patients with OBI and chronic HBV infection affected by non-oncologic diseases eligible for immunosuppressive treatment. RISK OF HBVR IN PATIENTS AFFECTED BY CORONAVIRUS DISEASE 2019 The ongoing SARS-CoV-2 pandemic, responsible for more than 50 million cases from 2020, still represents a challenge for the scientific community, not only regarding its pathogenesis but mostly its treatment. In fact, despite there is no available curative option yet, several immunosuppressive and immunomodulating agents have been proposed for the treatment of coronavirus disease 2019 (COVID-19) pneumonia in those last two years. Corticosteroids are currently recommended by the WHO for severe COVID-19; other employed immunosuppressive agents are interleukin 6 inhibitors (such as tocilizumab), JAK inhibitors (such as baricitinib, tofacitinib and ruxolitinib) associated with risk of HBVr in other settings [19]. Apart from a couple of retrospective studies reporting HBVr among patients receiving methylprednisolone[20] and tocilizumab [21], no data are already available in literature about the risk of HBVr among patients with COVID-19 treated with immunosuppressants. The short duration of immunosuppressive treatment in this specific setting probably limits the risk of HBVr. However, all the patients with COVID-19 pneumonia eligible for corticosteroid or immunosuppressants are routinely screened for HBV infection according to national and international guidelines to evaluate the risk of HBVr prior to prescribe those above mentioned drugs and start antiviral prophylaxis when needed. HBVr is quite common among unvaccinated people with rheumatic diseases (RD); Canzoni et al [25] reports that 2% of this study population (292 patients) affected by RD had a prevalence of HBsAg positivity and any kind of HBV infection markers retrieved in 24% of cases (70 patients): At least, 30% of those tested positive patients were unaware of their condition [25]. Despite European Association for the Study of Liver (EASL)[4] and AASLD [23] indication about HBV routine screening schedule before starting immunosuppressive therapies, the coverage still appears inadequate as in 2015 Lin et al[27] demonstrated in a retrospective cross national comparison of hepatic testing in rheumatic arthritis (RA) patients eligible to DMARD between the US and Taiwan [26]. The authors found that only 20.3% of patients in the US and 24.5% of patients in Taiwan were tested for HBV infection [27]. Similar results were found in Japan [28] where laboratory test for HBsAg, anti HBs and anti HBc were performed only in 28.33%, 12.52% and 14.63% of patients with RA, at baseline [28]. 
The deleterious role of HBV infection in recovery of patients with RA has been investigated by Chen et al [29]: Their case control study evaluated 32 patients with RA and chronic HBV infection, eligible to glucocorticosteroids, DMARDs and biologics. The study records, in a year, a worsening of hepatopathy of patients with chronic HBV September 25, 2022 Volume 11 Issue 5 infection under immunosuppressant with no antiviral intervention; moreover patients failed in achieving the therapeutic target in 6 mo. HBVr was reported in 34% of patients at one year follow up. Among those 32 studied patients, 14 were treated with prophylaxis with lamivudine, adefovir or entecavir: 4 of them developed HBVr and 2 of them also a hepatitis flare. The remaining 18 patients enrolled did not received antiviral prophylaxis and 7 of them experienced HBVr. HBVR IN DERMATOLOGIC SETTING cDMARDs such as acitretin, methotrexate and cyclosporin A along with bDMARDs including etanercept, infliximab, golimumab, certolizumab, adalimumab and secukinumab are currently used in several different dermatologic diseases, like psoriasis. The safety of those immunosuppressive drugs is not properly investigated, since trials conceived to explore new efficient drugs barely involve HBV patients. However, the reactivation risk of HBV in 14 (11 HBsAg positive, 3 HBsAg negative/HBcAb positive) patients with psoriasis eligible for ustekinumab based therapy has been evaluated by Chiu et al [30]. No reactivation was observed among all the HBsAg negative HBcAb positive patients, while HBVr was registered among two of the HBsAg positive patients under ustekinumab not receiving prophylaxis [30]. RISK OF HBVR IN GASTROENTEROLOGICAL SETTING The use of immunosuppressants is often required for patients affected by autoimmune, inflammatory gastroenterological disorders like Crohn disease and ulcerative colitis. The drug selected depends on the disease severity and the relapsing or remitting cause of the inflammatory bowel disease (IBD). Corticosteroids, immunomodulatory agents (methotrexate, azathioprine, mercaptopurine), anti IL12/23 p40 antibodies, JAK inhibitors, anti-adhesion therapies and biological therapies such as TNF inhibitors are widely used. Studies performed to evaluate the risk of HBVr in HBsAg positive patients with gastroenteric diseases under immunosuppressive agents clearly demonstrated that the use of more than two immunosuppressive agents is an independent predictor of HBVr [32]. A lower rate of reactivation has been registered for patients treated with antiviral prophylaxis [33]. Few cases of HBVr have been reported among HBsAg negative/HBcAb positive patients with IBD under immunosuppressants [34][35][36][37]. Thus, a complete serology for HBV is required in IBD patients to determine the active/inactive carrier status of IBD patients eligible for immunosuppressants in order to determine whether to treat, prescribe prophylaxis or monitor them, according to their HBV profile. HBsAg positive patients with IBD should undergo prophylaxis with nucleotide or nucleoside analogues before starting moderate or high doses steroids for more than 4 wk, anti TNF drugs, azathioprine or ustekinumab. This prophylaxis should last for at least one year after discontinuing immunomodulants. No standardized approach exists for HBsAg negative/HBcAb positive patients with IBD. 
In fact, while the American Gastroenterology Association recommends antiviral prophylaxis for this population under anti TNF or corticosteroids at moderate/high doses [38], the EASL and The European Crohn and Colitis Organization both recommend close monitoring of this population and the use of antiviral agents only after detection of HBV DNA viremia or seroreversion to HBsAg positivity [4,39]. HBVR RISK IN NEUROLOGICAL SETTING Among neurodegenerative diseases requiring disease modifying drugs to be treated, multiple sclerosis (MS) is one of the most frequent. MS causes chronic inflammation of the central nervous system, demyelination and disability. Apart from glucocorticoids, widely used in the acute phase of MS, DMD such as anti CD52 antibodies (alemtuzumab), anti CD20, a4b1 integrin inhibitor, sphingosine 1 phosphate inhibitors and its modulators (namely, fingolimod and siponimod), anti CD20 monoclonal antibodies [40] are employed to treat MS. Since limited data concerning the risk of HBVr in neurological setting are available from literature, there is no clear, definitive consensus on the best strategies to prevent HBVr in subjects with neurologic diseases requiring immunosuppressive drugs [41]. However, HBVr in a patient with a story of HBV infection and no proper prohylaxis, under ocrelizumab treatment for MS, has been reported by Ciardi et al[42], highlighting the need for antiviral prophylaxis in this setting. PREVENTION AND TREATMENT OF HBVR The risk of HBVr following immunosuppressive treatments depends mostly on type, duration and intensity of the iatrogenic immunosuppression. It is necessary to modulate any kind of therapeutic strategies to avoid HBVr, according to the risk profile of reactivation itself. Close monitoring of liver function test and qualitative/quantitative HBV DNA viral load is necessary at baseline, during and after the discontinuation of immunosuppressive therapy, taking into account that HBVr can still occur after the interruption of immunosuppressants. The management of HBVr in patients under immunosuppressant for non-oncological diseases depends, firstly, on HBsAg laboratory tests. In fact, in case of HBsAg positive value, patients with chronic hepatitis must undergo treatments of HBV with high genetic barrier nucleo(t)side analogues (entecavir, tenofovir, tenofovir alafenamide) [4,38,[43][44][45], while those with chronic infection must be considered for prophylaxis with lamivudine in case of undetectable HBV DNA or in case of expected duration of prophylaxis less than 6 mo [38]. Otherwise, because of emergence of resistance to lamivudine in patients requiring therapy for more than 6 mo long duration, the above mentioned newer nucleoside agents can represent an effective option for antiviral prophylaxis in this setting. In case of HBsAg negative and HBcAb positive laboratory test results, the HBV DNA viral load can guide physicians in determining if the patient requires prophylaxis or clinical and laboratory's close monitoring, followed, where appropriate, by preemptive therapy [4,38]. In fact, in case of HBV DNA positivity or in case of HBV DNA negativity occurred in patients under agents at moderate or high risk of immunosuppression, or affected by liver cirrhosis, a trimestral monitoring of HBsAg/HBsAb and HBV DNA is enough, and a preemptive therapy can be considered in case of reactivation [4,38]. 
The prophylaxis must be started before the immunosuppressive regimen and continued up to 12-18 mo after the end of the immunosuppressive treatment [38,[46][47][48]. In Figure 1 briefly is summarized the algorithm of HBVr diagnosis and management in patients eligible for immunosuppressant in non-oncological setting. September 25, 2022 Volume 11 Issue 5 CONCLUSION The widespread use of immunosuppressive and immunomodulant therapies in non-oncological setting highlighted the risk of HBVr in patients with overt or occult hepatitis B virus infection. The occurrence of this challenging event can be reduced screening the population eligible for immunosuppressant to assess the best strategies according to any virological status. Further prospective studies are needed to increase data on the risk of HBVr related to newer immunomodulant agents employed in nononcological setting, in order to better prevent and treat HBVr recurrence.
How many hospitalizations has the COVID-19 vaccination already prevented in São Paulo? Departamento de Estatistica, Universidade Federal de Sao Carlos, Sao Carlos, SP, BR. Programa de Computacao Cientifica, Fundacao Oswaldo Cruz, Rio de Janeiro, RJ, BR. Laboratorio de Funcao Pulmonar, Divisao Respiratoria, Departamento de Medicina, Escola Paulista de Medicina, Universidade Federal de Sao Paulo (UNIFESP EPM), Sao Paulo, SP, BR. IV INSPER Instituto de Ensino e Pesquisa, Sao Paulo, SP, BR. Estatikos Assessoria e Consultoria Estatistica, Sao
The high hospitalization rate of patients with coronavirus disease (COVID-19) has led to the collapse of healthcare systems in many countries. Vaccines for COVID-19 emerged in late 2020, and they have been shown to be a powerful tool to decrease hospitalization numbers in countries that were able to quickly vaccinate a majority of their populations (1). The goal of this study was to estimate how many COVID-19-related hospitalizations were prevented in the state of São Paulo (Brazil) because of vaccines as of May 28, 2021. We used data from the SIVEP-Gripe database, which was created by the Brazilian Ministry of Health in 2019 to record deaths and hospitalized cases with severe acute respiratory illness (SARI) (2). The SIVEP-Gripe dataset is open-access and is updated weekly on the website: https://opendatasus.saude.gov.br/dataset. We used the 14-day rolling average of the daily number of patients hospitalized for SARI because of COVID-19 (SARI-COVID). Until May 28, 2021, only individuals aged ≥65 years were vaccinated. Thus, we only modeled hospitalizations that were avoided in this particular group. We created a counterfactual model that used a linear regression to predict the number of SARI-COVID hospitalizations in people aged ≥65 years that was based on the number of hospitalizations for patients between ages 55 and 62 years (the rationale for this choice is described below). The model was estimated using only data before the first vaccination, which occurred on February 8, 2021. The counterfactual curve was then estimated by applying the fitted model to the period after February 8, 2021. Figure 1 shows the actual number of hospitalizations in patients aged ≥65 years, as well as the counterfactual curve with 95% prediction intervals (3) and fitted pre-vaccination model. The vertical lines indicate when the vaccination was initiated for different age groups. Figure 1 shows that the counterfactual curve is always higher than the observed data, indicating that hospitalization levels would have been much higher if these groups had not been vaccinated. The difference between these curves became even larger in May of 2021. The reduction in hospitalizations reached its maximum of 66.1% (95% confidence interval: 65.7%, 67.5%) on May 28, 2021. The effect of vaccinations on the hospitalization rate of individuals aged ≥65 years is likely to improve: the Ministry of Health of Brazil has adopted a policy of using a 1- to 3-month interval between doses depending on the vaccine; thus, many individuals have only received one dose so far. This counterfactual analysis has three main assumptions: 1) Without vaccination, the relationship between the number of hospitalizations in people aged ≥65 years and the younger group would have been the same after February 8, 2021. 2) Vaccination had no substantial effect on the younger group before May 28, 2021. 3) The linear model is a good approximation.
While Assumption 1 cannot be verified, we defined the age range of the younger group such that Assumptions 2 and 3 would approximately hold. Vaccination of individuals aged <62 years only started on May 6, 2021; thus, there was not enough time to observe its effect by May 28, 2021. Moreover, the blue curve in Figure 1 indicates that the linear model is a good approximation, possibly because the younger group was close in age to the group aged ≥65 years. Using the areas between the curves, we observed that approximately 24,364 hospitalizations were avoided in São Paulo because of vaccination before May 28, 2021. Considering that the estimated mortality rate of hospitalized patients aged ≥65 years with SARI-COVID-19 is approximately 45%, approximately 10,964 deaths might have been prevented by vaccination during this period. Moreover, using estimates of the average COVID-19 hospitalization costs, US $297 million may have been saved, which is enough to purchase almost 30 million additional doses of vaccine (at US $10 each). In conclusion, we provide evidence that vaccines are an effective way to reduce mortality because of COVID-19, as well as to save substantial financial resources during the pandemic.
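As a rough illustration of the counterfactual construction described above, the sketch below fits the pre-vaccination linear model, builds the counterfactual curve with 95% prediction intervals, and estimates the averted hospitalizations as the area between the counterfactual and observed curves. It runs on synthetic data standing in for the SIVEP-Gripe extract; the column names, dates and magnitudes are illustrative only.

```python
# Sketch of the counterfactual regression described above, using synthetic data in
# place of the SIVEP-Gripe extract; column names and numbers are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
dates = pd.date_range("2020-07-01", "2021-05-28", freq="D")
younger = 200 + 80 * np.sin(np.linspace(0, 6, len(dates))) + rng.normal(0, 10, len(dates))
older = 1.4 * younger + rng.normal(0, 15, len(dates))       # pre-vaccination relationship
vacc_start = pd.Timestamp("2021-02-08")
post = dates >= vacc_start
older[post] *= np.linspace(1.0, 0.4, post.sum())            # mimic the observed decline

df = pd.DataFrame({"date": dates, "hosp_55_62": younger, "hosp_65_plus": older})

# Fit the linear model on the pre-vaccination period only.
pre = df[df.date < vacc_start]
X_pre = sm.add_constant(pre["hosp_55_62"])
model = sm.OLS(pre["hosp_65_plus"], X_pre).fit()

# Counterfactual for the post-vaccination period, with 95% prediction intervals.
post_df = df[df.date >= vacc_start]
X_post = sm.add_constant(post_df["hosp_55_62"])
pred = model.get_prediction(X_post).summary_frame(alpha=0.05)
lower, upper = pred["obs_ci_lower"], pred["obs_ci_upper"]   # 95% prediction band

# Averted hospitalizations: area between the counterfactual and observed curves.
averted = (pred["mean"] - post_df["hosp_65_plus"].values).sum()
print(f"Estimated hospitalizations averted: {averted:,.0f}")
```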
The role of noninvasive methods in assessing airway inflammation and structural changes in asthma and COPD Asthma and chronic obstructive pulmonary disease (COPD) share a condition of chronic inflammation of the airways, which is followed, to various extents and with different features, by a healing process that may lead to airway remodelling. Although the mechanisms of remodelling appear to be heterogeneous, and abnormal extracellular matrix (ECM) degradation and deposition may play an important role in the development of structural alterations of the airways, contributing to airway stiffness and to irreversible airflow obstruction [1]. Such remodelling within the airway wall is to be attributed mainly to qualitative and quantitative changes of ECM proteins resulting from an imbalance between proteases and their inhibitors. Inflammation in asthma and COPD is associated with increased production of active elastase, which can promote fibroblast migration through the ECM, as well as the degradation of elastic fibers [2, 3] ECM homeostasis is also influenced by the balance between metalloproteinases (MMP) and their specific tissue-inhibitor metalloproteinases (TIMP), in such a way that increases of TIMP over MMP (or vice versa) can either lead to collagen deposition or degradation [4, 5]. With the advances in the field of lung imaging, it is now possible, to non-invasively quantify structural changes of the airways both in asthma and COPD [6, 7], making it possible to establish a close relationship between structural and functional abnormalities. Introduction Asthma and chronic obstructive pulmonary disease (COPD) share a condition of chronic inflammation of the airways, which is followed, to various extents and with different features, by a healing process that may lead to airway remodelling.Although the mechanisms of remodelling appear to be heterogeneous, and abnormal extracellular matrix (ECM) degradation and deposition may play an important role in the development of structural alterations of the airways, contributing to airway stiffness and to irreversible airflow obstruction [1].Such remodelling within the airway wall is to be attributed mainly to qualitative and quantitative changes of ECM proteins resulting from an imbalance between proteases and their inhibitors. Inflammation in asthma and COPD is associated with increased production of active elastase, which can promote fibroblast migration through the ECM, as well as the degradation of elastic fibers [2,3] ECM homeostasis is also influenced by the balance between metalloproteinases (MMP) and their specific tissue-inhibitor metalloproteinases (TIMP), in such a way that increases of TIMP over MMP (or vice versa) can either lead to collagen deposition or degradation [4,5]. With the advances in the field of lung imaging, it is now possible, to non-invasively quantify structural changes of the airways both in asthma and COPD [6,7], making it possible to establish a close relationship between structural and functional abnormalities. 
Diseases and Department of pathology at Jawaharlal Nehru Medical College A.M.U., Aligarh from January 2006 to August 2007.50 asthmatic and 46 COPD patients were selected.Bronchial asthma patients were selected according to criteria of American Thoracic society [8].Asthma severity was assessed according to Global initiative for Asthma (GINA) Criteria.Current smokers, patients who had an upper or lower respiratory tract infection during the month preceding the test and patients who experienced a severe exacerbation of asthma resulting in hospitalisation were excluded from study.COPD patients with FEV 1 /FVC ≤70% were selected as per Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines 2004.Patients having the following conditions were excluded from the study; patients with a history of perennial allergic rhinitis, Patients with an improvement of FEV 1 capacity of >12% from baseline or an absolute value of 200 ml following inhalation of 200 µg salbutamol, patients with any upper or lower respiratory tract infection during the month preceding the test.The individual cases (50 Asthmatics and 46 COPD patients) were studied with emphasis on detailed history and clinical examination, Routine laboratory investigations, Pulmonary function testing, Chest X-ray PA and lateral view, HRCT Thorax, Biochemical and cellular analysis of sputum.Quality control and procedures of pulmonary function test were performed according to the European Respiratory Society guidelines [10,11]. Computed tomography (CT) scans of the chest were performed with a spiral CT scanner in high-resolution mode according to the method of Mayo et al. [12].In all subjects both end-inspiratory and end-expiratory HRCT scans were obtained using the following parameters 125 kV, 310 mAs, matrix size of 512 × 512, and a slice thickness of 1-1.25 mm.A 20°cranial inclination of the gantry was used to improve CT analysis at segmental and sub-segmental bronchi; the scanning time ranged from 1.5 to 3 sec.A window level of -600 Hounsfield units (HU) was chosen, with a width of 1,600 HU, as generally recommended for the analysis of the bronchi and lung parenchyma [12,13]. Sputum induction and processing were performed according to the methods of Fahy and colleagues [14] with slight modifications [2].Patients were exposed to an aerosol of 3% hypertonic saline solution, early in the morning, in a fasting condition for 20 min.The subjects were encouraged to cough throughout the procedure, and regularly interrupted their inhalation of hypertonic saline in order to expectorate sputum into 50-ml sterile ampoules.The aerosol was administered by an ultrasonic nebuliser (Fisoneb; Fisons Italchimici Spa, Rome, Italy), which generates particles with a median diameter of 2.5 µm and has an output of 1 ml/min.Sputum samples were immediately treated with 10% dithiotreitol (Sigma Chemical, St Louis, MO, USA) and centrifuged.The supernatants were immediately frozen for subsequent analysis.After fixation, the slides were stained by Haematoxylin and Eosin Method and Papanicolau staining method.At least 200 cells per slide were counted and the differential cell counts were expressed as corrected percentage. Results are given as mean ± standard deviation.Chi-square test was used to analyse qualitative data.F-test was used to analyse quantitative data.For multiple comparisons, Bonferroni's correction was applied.P-values of ≤0.05 were considered statistically significant. 
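As an aside on the multiple-comparison adjustment mentioned above, the Bonferroni correction simply rescales each pairwise p-value by the number of tests performed; the short sketch below illustrates the procedure with made-up p-values that do not come from the study.

```python
# Illustrative Bonferroni adjustment for a set of pairwise comparisons; the
# p-values below are made up for demonstration and are not taken from the study.
from statsmodels.stats.multitest import multipletests

pairwise_p = [0.012, 0.034, 0.0004, 0.210, 0.049, 0.003]
reject, p_adjusted, _, _ = multipletests(pairwise_p, alpha=0.05, method="bonferroni")

for raw, adj, sig in zip(pairwise_p, p_adjusted, reject):
    verdict = "significant" if sig else "not significant"
    print(f"raw p = {raw:.4f} -> adjusted p = {adj:.4f} ({verdict})")
```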
Results Out of the total 96 patients, 50 had asthma and the remaining 46 had COPD. Out of the 50 asthmatics, 23 (46%) were males and 27 (54%) were females. Out of the 46 COPD patients, 34 (73.91%) were males and 12 (26.09%) were females. The mean duration of disease in asthmatics was 9.06±6.89 years. Out of the 50 asthmatics, 10 (20%) cases belonged to the intermittent and mild persistent groups each, and 15 (30%) belonged to each of the moderate and severe persistent groups. The mean duration of the disease in COPD patients was 17.52±8.44 years. In COPD, most of the patients (37%) belonged to the moderate COPD group, followed by 26.1% in the very severe group. The frequencies of abnormal CT findings and their severity in each severity group of asthmatics are shown in table 1. Bronchial wall thickening was the most prominent abnormality among the HRCT findings and increased in frequency and severity as the severity of the disease increased. It was present in 12 (80%) patients with moderate persistent asthma and 15 (100%) patients with severe persistent asthma. Most of the HRCT abnormalities increased in frequency as the severity of asthma increased (figure 1); similarly, the MMP-1 and TIMP-1 levels increased from intermittent to severe persistent asthma. Only bronchial wall thickening, bronchiectasis and air trapping correlated well with disease severity in asthmatics (p<0.05) (figure 2). Other HRCT scan abnormalities did not differ among the disease severity groups in asthmatics (p>0.05). The frequencies of abnormal HRCT findings and their severity in COPD are shown in table 2. All abnormal HRCT findings increased in frequency as the disease progressed, paralleling the levels of the markers of remodelling MMP-1 and TIMP-1 in sputum. All the abnormal HRCT findings correlated well with disease severity in COPD patients (p<0.05). Emphysema increased from 16.7% in mild COPD to 83.3% in the very severe group. Centrilobular emphysema was the most common form present (figure 3). There was no bronchiectasis in mild COPD; it increased in frequency from 5.9% in moderate COPD to 66.7% in the very severe form of COPD. Cylindrical bronchiectasis was the most common form of bronchiectasis, and the right lower lobe was the most commonly involved lobe (figure 4). All the physiological parameters of the pulmonary function test, such as FVC%, FEV1/FVC% and PEFR (L/sec), correlated well with disease severity (p<0.001) in asthmatic patients. There was a statistically significant difference between the severity groups in asthma (p<0.001). Similarly, all the pulmonary function test values differed significantly (p<0.001) among the COPD severity groups.
Except for squamous cells, the percentage of other cells differed significantly among the disease severity groups in asthma in contrast the percentage of squamous cells was significantly lower in those with very severe disease compared to the other groups in COPD.As can be seen from (table 3), the percentage of macrophages in intermittent and severe persistent asthmatics differed significantly from the other groups while the percentage of macrophages decreased significantly with increasing severity of disease in COPD patients (table 4).The percentage of neutrophils differs significantly among different groups in asthma while the percentage of neutrophils was significantly higher in those with very severe COPD when compared to those with mild or moderate disease but did not differ significantly from those with severe disease.Except for intermittent v/s moderate persistent & mild v/s severe persistent pairs, the percentage of eosinophils differed among the other pairs in asthma while the percentage of eosinophils did not differ among the COPD severity groups (figure 5). The percentage of lymphocytes in those with mild persistent asthma differed significantly from other groups in contrast the percentage of lymphocytes was highest in those with moderate COPD.The MMP-9 levels differed significantly among the disease severity groups in asthmatics.The TIMP-1 levels in those with severe persistent asthma were significantly higher than the other three groups. The MMP-9/TIMP-1 ratio was significantly higher in those with mild persistent asthma compared to those with severe persistent asthma.All the other pair wise differences for MMP-9/TIMP-1 ratio were insignificant (table 3).Applying bonferroni multiple comparison test p and correlations signifi-cant negative correlation was observed between MMP-9 and TIMP-1 levels and macrophages & lymphocytes (p<0.001).The levels of MMP-9 and TIMP-1 showed a significant positive correlation with neutrophils (p<0.001) and eosinophils (p=0.001).The levels of MMP-9 and TIMP-1 increased significantly with increasing disease severity in COPD.The difference in the levels of both the mediators was however insignificant between the mild and moderate groups.The MMP-9/TIMP-1 ratio decreased with increasing disease severity in COPD (table 4).The differences were however significant only between the mild v/s very severe, moderate v/s severe, and moderate v/s very severe groups.A significant negative correlation was observed between MMP-9 & TIMP-1 levels and macrophages, lymphocytes and eosinophils.The Patients with COPD were significantly older than those with asthma.The percentage of squamous cells did not differ among asthmatics & COPD patients.The percentage of neutrophils was significantly higher in COPD patients (p<0.001).The percentage of other cells was significantly higher in asthmatics (table 5).The levels of MMP-9 were not significantly different among the two groups.TIMP-1 levels showed a significantly higher mean value in COPD patients (p<0.001).MMP-9/TIMP-1 ratio was, however, reduced in COPD patients as compared to asthmatics (p=0.028)(table 6).The most relevant correlations between MMP9/TIMP-1 between different severity group of asthma and COPD has been demonstrated separately shown in table 7 and 8. Mucoid impaction & emphysema were significantly more common in COPD patients, while bronchial wall thickening was more common in asthmatics.The frequency of other HRCT abnormalities did not differ among the two groups. 
Discussion

The main result of this study is that a relationship exists between increased levels of cells and metalloproteinases in induced sputum and HRCT abnormalities in patients with bronchial asthma and COPD. The marker levels also increased with the severity of the disease, with a statistically significant difference between the severity groups (p<0.001). The MMP-9/TIMP-1 ratio was significantly higher in those with mild persistent asthma compared to those with severe persistent asthma. All the other pairwise differences for the MMP-9/TIMP-1 ratio were insignificant. The mean values of MMP-9 and TIMP-1 for COPD patients were 86.546±49.849 and 570.180±496.258, respectively, with a statistically significant difference among the severity groups of COPD patients. The mean value of the MMP-9/TIMP-1 ratio for COPD patients was 0.22904±0.17369, and there was a statistically significant difference among the various severity groups of COPD; the ratio decreased with increasing disease severity.

In our study, except for squamous cells (p=0.065), the percentages of macrophages, neutrophils, eosinophils and lymphocytes differed significantly among the various severity groups (p<0.001) of asthma.

The percentage of eosinophils did not differ among the COPD severity groups. The percentage of macrophages decreased significantly with increasing severity of disease in COPD patients. The percentage of neutrophils was significantly higher in those with very severe COPD than in those with mild or moderate disease, but did not differ significantly from those with severe disease. The percentage of lymphocytes was highest in those with moderate COPD.

Patients with COPD were significantly older than those with asthma. The percentage of squamous cells did not differ between asthmatics and COPD patients. The percentage of neutrophils was significantly higher in COPD patients (p<0.001). The percentage of other cells was significantly higher in asthmatics. The levels of MMP-9 were not significantly different between the two groups. TIMP-1 levels showed a significantly higher mean value in COPD patients (p<0.001). The MMP-9/TIMP-1 ratio was, however, reduced in COPD patients as compared to asthmatics (p=0.028).

The results of our study are similar to those of previous studies assessing airway remodelling in asthma and COPD. Vignola et al. [5] showed that MMP-9 and TIMP-1 concentrations were greater in the sputum of patients with asthma and chronic bronchitis than in control subjects. The molar ratio between MMP-9 and TIMP-1 was lower in asthmatics and chronic bronchitis patients than in control subjects, and positively correlated with FEV1 values. In asthma, MMP-9 levels were significantly correlated with the numbers of macrophages and neutrophils. That study showed that airway inflammation in asthma and chronic bronchitis is associated with an imbalance between MMP-9 and TIMP-1, which may have a role in the pathogenesis of ECM remodelling and airflow obstruction.

Another study, by Mattos et al. [15], showed similar results when assessing the expression of MMP-9 in asthma. Patients with severe asthma had increased levels and activity of MMP-9 in their sputum compared with patients with mild asthma and normal subjects. In 2002, Beeh et al.
[16] conducted a study on the role of MMP-9, TIMP-1, and their molar ratio in patients with COPD, idiopathic pulmonary fibrosis (IPF) and healthy subjects. Sputum MMP-9 levels were highest in COPD patients compared with IPF patients and controls (P<0.001, both comparisons). Sputum TIMP-1 was also elevated in COPD and IPF patients compared with healthy controls (P<0.01, both comparisons). The MMP-9/TIMP-1 ratio was significantly higher in COPD patients than in those with IPF and controls (P<0.01, both comparisons). Sputum levels of TNF were similar in all three groups (P>0.2, all comparisons). In this study, they showed that there was a positive correlation of sputum MMP-9 with sputum neutrophils in all subjects (rho=0.68, P<0.001). There was a strong correlation of MMP-9 with sputum TNF in COPD patients (rho=0.76, P=0.004), but not in IPF patients or healthy subjects.

Yildiz et al. [17] in 2003 conducted a study to demonstrate and compare the relative proportions of the cells in induced sputum samples from patients with asthma and COPD. Induced sputum total cell counts were higher in the COPD group than in the asthmatics, but the difference did not reach statistical significance (P>0.05). Sputum differential cell counts showed a predominance of neutrophils in COPD patients, while eosinophils, lymphocytes and macrophages were more frequently seen in asthma patients. All these differences between the two groups were statistically significant.

The major sources of MMP-9 in the human lung are macrophages [18], but this enzyme may also be released by eosinophils [19]. Our results suggest that, in addition to macrophages and eosinophils, neutrophils are a major cellular source of MMP-9 in the airways. In the induced sputum of asthmatic and chronic bronchitis patients, Vignola et al. [5] had previously shown a significant correlation between the percentage of neutrophils and the concentrations of free elastase [2]. The percentage of neutrophils has also been found to correlate with the content of MMP-9 in several lung diseases [20,21]. Thus, this study confirms previous evidence and suggests that neutrophils have the potential to destroy ECM and determine chronic injury of the lung in asthma and chronic bronchitis.

The increased concentrations of TIMP-1 found in asthma and chronic bronchitis can be the result of the effects of several mediators released during the development of airway inflammation in these diseases. Among these mediators, an important role may be played by transforming growth factor-β (TGF-β), which is capable of increasing the production of TIMPs [22] and the expression of which is increased in the airways of asthmatic and chronic bronchitis patients [23]. This study also shows that macrophages and neutrophils are an important cellular source of TIMP-1, and points out the involvement of these cells in airway remodelling in asthma and COPD. The results of the present study extend previous evidence obtained by Vignola et al., showing that airway macrophages isolated from the bronchoalveolar lavage of asthmatic and chronic bronchitis patients release high concentrations of the profibrotic growth factor TGF-β and of fibronectin [24], and therefore lend support to the concept that macrophages and neutrophils actively participate in the remodelling of the airways.
HRCT scanning has been proposed as an additional tool to assess pulmonary changes in long-standing diseases such as asthma and COPD [25,26,27]. Our study was also based on HRCT scanning, and we found that, in the asthmatic group, bronchial wall thickening, bronchiectasis and air trapping correlated well with disease severity. In COPD patients, all abnormal HRCT findings correlated well with disease severity. Emphysema, bronchial wall thickening and bronchiectasis were the most frequently occurring abnormalities. HRCT scanning has previously been used to quantify abnormalities of the airways due to airway remodelling, and it has been found that the HRCT scan score correlated with the severity of asthma and airflow obstruction [28,29]; conversely, conflicting results have been obtained on the relationship between HRCT-documented thickness of the airway wall and hyper-responsiveness [30,31], and from these results it can be inferred that airway responsiveness is perhaps a more complex phenomenon than airway obstruction. In the current study, the HRCT outcome is represented by a final score obtained by counting the number of observed lesions, because this avoids the risk of underestimating the degree of pulmonary involvement. Indeed, this approach has the advantage of combining information from both the airways and the parenchyma. The observation of bronchiectasis in COPD patients is not controversial. Associations between emphysema and bronchiectasis are frequent and related to processes of traction and cicatrisation, reflecting a process of peri-bronchial fibrosis [32] and abnormal synthesis and degradation of ECM proteins, such as collagen and elastin. In the context of COPD, the current authors were also able to assess the parenchymal involvement, mostly emphysema.

An important finding of our study was the high prevalence of radiological bronchiectasis among patients diagnosed with COPD. CT scanning with a high-resolution algorithm is now the investigation of choice to confirm a diagnosis of bronchiectasis [33] and, using generally accepted criteria [33,34], we found that almost one third of the patients had bronchiectasis. It is likely that bronchiectasis is generally underdiagnosed, particularly in smokers, in whom cough and sputum production are assumed to be due to cigarette smoke and COPD [33]. In secondary care, Currie et al. [35] found an incidence of bronchiectasis of 70% by bronchography, and Smith et al. [36] reported an incidence of 68% by HRCT scanning.

Focal hyperlucency was found in 48% of asthmatic patients and 58.6% of COPD patients. These results are in accordance with Harmanci et al. [37], in which focal hyperlucency was seen in 37% of the asthmatics, not different from that observed in COPD patients (33.3%).

Small centrilobular opacities were found in 36% of asthmatic patients and 45.6% of COPD patients. Similarly, Harmanci et al. [37] found that small centrilobular opacities, caused by peribronchiolar inflammation and muscular hypertrophy, are present in 24.2% of asthmatics, a prevalence that is not different from that in COPD patients (40.7%). Lynch et al. [37] reported a proportion of 10%, whereas Grenier et al.
[38] reported that 21% of asthmatics have centrilobular prominence. We found that these features correlated with clinical severity and decreased FEV1 values. Focal hyperlucency and small centrilobular opacities are known as small airway abnormalities; accordingly, more recent studies have focused on the role of the distal airways and suggest that the more peripheral airways are also involved in asthma [39]. The present study concludes that the MMP-9/TIMP-1 ratio decreases with increasing severity in asthma and COPD, and that a close relationship exists between high-resolution computed tomography scan alterations and changes in the levels of markers of airway remodelling; these biological markers of remodelling might reflect the extent of structural changes occurring within the airways. The present study also provides evidence in support of the usefulness of the HRCT scan technique to assess structural alterations of the airways.

Fig. 1. - High-resolution CT scan through the bilateral lower lobes shows dilated, thick-walled airways consistent with bronchiectasis, with mucoid impaction in the left lower lobe. Also shows small centrilobular nodules and branching linear opacities (tree-in-bud) in the right lower lobe.

Fig. 3. - High-resolution CT scan through the bilateral upper lobes shows non-uniformly distributed areas of low attenuation without visible walls, with a predominant centrilobular location in the bilateral upper lobes, typical of centrilobular emphysema.

Table 1. - Frequencies of HRCT findings and their severity according to clinical severity groups in asthmatics

Table 2. - Frequencies of HRCT findings and their severity according to clinical severity groups in COPD

Table 3. - Cells and mediators in the severity groups of asthma

Table 4. - Cells and mediators in the severity groups of COPD

Table 5. - Baseline characteristics, cells and mediators in asthma

Table 6. - Baseline characteristics, cells and mediators in COPD patients
2018-04-03T03:11:04.585Z
2012-03-01T00:00:00.000
{ "year": 2012, "sha1": "1df72c6bd38ce82392cbe2aa216c63811c58d55a", "oa_license": "CCBYNC", "oa_url": "https://monaldi-archives.org/index.php/macd/article/download/161/149", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1df72c6bd38ce82392cbe2aa216c63811c58d55a", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233424869
pes2o/s2orc
v3-fos-license
Anti-TNF-α Compounds as a Treatment for Depression Millions of people around the world suffer from psychiatric illnesses, causing unbearable burden and immense distress to patients and their families. Accumulating evidence suggests that inflammation may contribute to the pathophysiology of psychiatric disorders such as major depression and bipolar disorder. Copious studies have consistently shown that patients with mood disorders have increased levels of plasma tumor necrosis factor (TNF)-α. Given these findings, selective anti-TNF-α compounds were tested as a potential therapeutic strategy for mood disorders. This mini-review summarizes the results of studies that examined the mood-modulating effects of anti-TNF-α drugs. Mood Disorders Millions of people around the world suffer from psychiatric illnesses, causing unbearable burden and immense distress to patients and their families [1]. Moreover, psychiatric disorders are associated with extensive financial costs to patients, the health care system and society in general [2,3]. Patients with mood disorders such as bipolar disorder and depressive disorders are of a higher likelihood to suffer from suicidal death and various comorbidities, leading to increased mortality rates in comparison to matched-healthy subjects [4][5][6]. The lifetime prevalence of bipolar disorder in the general population is between 0.7-1.5% [7,8] and that of depressive disorders is between 10-20% [1,9]. These estimations likely depict only a fraction of the true numbers, suggesting that there are presumably myriads of concealed and undiagnosed cases, and acknowledges that there is societal and cultural variance in recognition and interpretation of psychiatric symptoms [1,10]. Bipolar disorder is recognized as one of the most complex and difficult-to-treat psychiatric illnesses. Patients with bipolar disorder suffer alternating periods of mania and depression [11,12]. Mania is characterized by euphoric mood, impaired judgment, hyperactivity and excitement, increased erotic thoughts and engagement in sexual activity, among other features [11,12]. Depression is a rampant and devastating mental disorder [1,9], and is more prevalent in women than in men [1]. Melancholy is the primary feature/manifestation of depression [13][14][15][16]. Patients with depression may have alternative or accompanying symptoms including anxiety, low self-esteem, changes in appetite, social isolation, diminished interest in hedonic activities, insomnia or hypersomnia, and suicidal thoughts and/or attempts, among others [13][14][15][16]. Expectedly, the severity of symptoms and duration of depressive episodes vary significantly and, understandably, depressive episodes can impact even the most basic aspects of patients' lives. Occasionally, depression presents without a known triggering cause. However, sometimes a prominent emotional stimulus, such as a death of a close relative, precedes the inception of depression. The most widely used treatment strategy for bipolar disorder is pharmacotherapy [11,12,17]. Other approaches include electroconvulsive therapy [18,19] and cognitive behavioral therapy [20]. Similarly, pharmacotherapy, psychotherapy and electroconvulsive therapy are the three most frequently used treatments for depressive disorders [17,18,[21][22][23][24]. Among these, pharmacotherapy is the most common and it includes a wide variety of medications [23,24]. 
The treatment of depressive disorders is dictated by a number of factors including: (i) risk of suicide, (ii) the patient's ability to understand and follow instructions (adherence to treatment), (iii) level of supportive resources, (iv) level of encountered stressors, and, (v) level of functional impairment [17,24]. The availability of abundant and diverse medication options available for the treatment of mood disorders notwithstanding, a high proportion of patients present a poor response to treatment [11,12,14,17,[22][23][24]. Moreover, many patients suffer a plethora of unpleasant side effects (some of which may be severe and irreversible) further encouraging poor compliance to treatment [11,12,14,17,[22][23][24][25][26][27]. These limitations accentuate the necessity for new treatment strategies for mood disorders in an effort to supply hope for additional sub-groups of patients. Tumor Necrosis Factor (TNF)-α TNF-α is a multi-functional cytokine which plays central roles in numerous physiological as well as pathological processes in mammals [28][29][30][31]. It was recognized early on for its ability to induce necrosis of tumor cells [32], but was subsequently associated with plentiful biological functions [28][29][30][31]. TNF-α is synthetized and secreted mainly by macrophages though several cell types (including glia cells and neurons in the brain) are capable of producing it [28][29][30][31][32][33][34][35]. Newly synthesized TNF-α localizes in cell membrane until it undergoes proteolytic cleavage by TNF-α-converting enzyme, which releases the soluble form of the protein [36,37] (see Figure 1 for illustration). Both the transmembrane and the soluble form of the protein are biologically active-binding to and activating TNF receptor 1 (TNFR1) as well as TNFR2 [30,31,38,39] (Figure 1). TNFR1 and TNFR2 share some similar functions (e.g., advancement of immune defense mechanisms, induction of inflammation, and promotion of cell proliferation and survival) but, they also have distinct, sometimes opposite, biological activities [30,31,38,39]. Principally, TNFR1 is connected to pathological processes such as inflammation, apoptosis and necrosis, while TNFR2 is mostly linked to physiological responses such as host defense, tissue repair and regeneration [30,31,38,39]. However, delineating these receptors with distinctive pathological versus physiological tasks would be an over-simplification of a more complex biological reality. Thorough research has indicated TNF-α to be mostly linked to immune and inflammatory functions [30,31]. It has also been associated with cancer pathophysiology [29]. It is involved in various immune and inflammatory responses (usually acting as a proinflammatory mediator) contributing to host defense [30,31,38,39]. Under certain conditions, TNF-α facilitates apoptosis and cell death especially in cancer cells [29][30][31]38,39]. Nevertheless, and despite its common association with pathological conditions, TNF-α plays a crucial role in numerous physiological processes, particularly in the central nervous system (CNS-the brain and the spinal cord) [28,39]. For example, in the brain, TNF-α has a direct impact on neuronal function and survival, regulating production and secretion of neurotransmitters, controlling synaptic transmission, and contributing to myelin synthesis and preservation [28,[39][40][41][42][43][44][45]. TNF-α was found to increase the permeability of the bloodbrain barrier (BBB) which is accompanied by depressive behavior [46][47][48]. 
Dysfunction of the BBB hastens the penetration of inflammatory mediators and peripheral immune cells into the CNS, leading to behavioral abnormalities and mood disorders [49,50]. Thus, taking into account the various crucial functions of TNF-α, it is expected that disruption of its activity would cause profound biological consequences, including alteration of neurological function.

Figure 1. Transmembrane TNF-α (mTNF-α) undergoes proteolytic cleavage by TNF-α-converting enzyme (TACE), which generates the soluble form of the protein (sTNF-α). Both mTNF-α and sTNF-α are biologically active; they bind to and activate TNF receptor (TNFR) 1 and TNFR2. Arrows indicate that mTNF-α is also capable of activating TNFR1 and TNFR2.

Brain Inflammation, TNF-α and Mood Disorders

The CNS consists of two main types of cells: neurons and glia cells [33,34]. There are three types of glia cells: astrocytes, microglia, and oligodendrocytes [33,34]. The role of microglia cells in the CNS is comparable to that of macrophages in peripheral tissues. Astrocytes have important immune-inflammatory roles, and support the function and survival of neurons [33,34,51]. Oligodendrocytes produce myelin, the insulating substance that surrounds nerve cell axons. Microglia and astrocytes are involved in various neuroinflammatory processes and are associated with numerous CNS pathologies [28,34,35,51-54].

Despite the presence of the BBB, the activity of the "peripheral" immune system still manages to impact the CNS. It has been consistently recognized that illnesses associated with systemic inflammation (e.g., rheumatoid arthritis and coronary artery disease) frequently present with behavioral abnormalities and symptoms of depression. Systemic inflammatory responses to infectious agents affect brain function and, in turn, evoke significant changes in behavior [54]. This association has revealed itself to be more than just speculation, as even early studies suggested that dysregulation of the immune system may lead to depression [55,56]. Subsequently, many studies reported that immune dysregulation and inflammation contribute to the pathophysiology of mood disorders. It was found that patients with depression had elevated levels of pro-inflammatory markers [57-70], while levels of anti-inflammatory mediators were comparable [71,72]; increased TNF-α levels in particular have been reported repeatedly in patients with mood disorders [72,74-85]. Abnormalities in TNF-α levels have been shown to influence the severity of psychiatric symptoms and response to treatment. For example, a recent study showed that elevated baseline plasma TNF-α levels in patients with major depression may predict a better improvement in the intensity of suicidal thoughts [86]. Patients with bipolar disorder [87] and depression [88] were reported to have altered levels of TNFR1 and TNFR2, respectively. Interestingly, the latter two studies [87,88] did not demonstrate abnormal TNF-α levels in their populations. However, despite the large body of data attesting to alterations in inflammatory mediator levels among patients with mood disorders, some studies have reported opposite findings [80,85]. Additional support for the inflammation hypothesis of mood disorders came from studies showing that treatment with various anti-inflammatory/immune-modulating drugs reduced symptom severity and improved the condition of patients with mood disorders [58,117-123].
Mainly, selective cyclooxygenase-2 inhibitors (e.g., celecoxib) were found beneficial as add-on therapy to psychotropic drugs in patients with mood disorders [58,120]. Nevertheless, here too, studies published negative findings regarding the effectiveness of anti-inflammatory/immune-modulating medications as a treatment for mood disorders [124,125]. Among the various anti-inflammatory drugs that have been explored as a potential treatment for mood disorders, selective TNF-α antagonists were given special attention. The following section summarizes the mood-modulating effects of clinically used anti-TNF-α compounds. Search Strategy The search strategy was based on surveying the following electronic databases for inclusive criteria: PubMed, Web of Science, and Google Scholar, for English language papers published in peer-reviewed journals reporting on the use of anti-TNF-a compounds in subjects with mood disorders. The customized search was restricted to the years 1990 (the year when the first report on the anti-TNF-a activity and beneficial therapeutic effects of infliximab was published [141]) to 2020. The search field contained the name of each compound, including: infliximab, etanercept, onercept, adalimumab, golimumab, humicade, certolizumab pegol, and pentoxifylline; together with each of the following keywords: depression, melancholia, depressive disorder, mania, bipolar disorder, manicdepressive illness. The search strategy resulted in many hits that were irrelevant to the purpose of the article. On the other hand, no relevant papers reporting on the effects of onercept, golimumab, humicade and certolizumab pegol in subjects with mood disorders were found. We included most relevant papers reporting on animal studies and almost all papers reporting on studies conducted in human subjects, because the latter were the main focus of the manuscript. Infliximab Infliximab is a chimeric TNF-α-specific neutralizing monoclonal antibody consisting of a human IgG Fc region and a murine Fv region (see Figure 2 for illustration). It is recognized as a potent selective TNF-α antagonist with powerful neutralizing effects against soluble TNF-α and, to a lesser extent, on transmembrane TNF-α [133,[142][143][144]. Infliximab is capable of binding to both monomeric and trimeric forms of soluble TNF-α. Each infliximab molecule can bind to two TNF-α molecules, while a single TNF-α homotrimer can bind to up to three infliximab molecules [133,[142][143][144]. Infliximab is administered intravenously and thus has a maximized (100%) bioavailability; it has a low clearance rate (~11 mL/hour) and a plasma half-life of nearly 8-10 days [133,143]. Infliximab has been used for the treatment of various rheumatoid and inflammatory-associated diseases such as rheumatoid arthritis, psoriasis, ankylosing spondylitis, and Crohn's disease, among others [30,133]. Several studies examined the effects of infliximab on depressive symptoms among patients with Crohn's disease [134,135] and ankylosing spondylitis [136,145,146] revealing encouraging results. Animal studies also demonstrated an antidepressant-like effect for infliximab [147,148]. Raison et al. [149] evaluated the antidepressant effect of infliximab in patients with treatment-resistant depression. Sixty patients were randomly allocated to receive either infliximab (n = 30) or a placebo (n = 30). Infliximab showed a significant therapeutic effect-mitigated depressive symptoms-but only in patients who had increased levels of inflammatory markers [149]. 
Consistent with these results, a recent meta-analysis study which evaluated the antidepressant efficacy of infliximab revealed that it was effective exclusively in patients with elevated levels of inflammatory markers such TNF-α and C-reactive protein [150]. The efficacy of infliximab was also tested in patients with bipolar depression [151][152][153][154]. McIntyre et al. [151] conducted a randomized, double-blind, placebo-controlled trial in which 29 patients were treated with infliximab and 31 patients with a placebo. Twelve weeks of infliximab treatment did not cause a significant reduction in severity of depressive symptoms. Only in a sub-group of patients with a history of childhood physical abuse infliximab (as compared to the placebo) led to a significant depletion in depressive symptoms [151]. Lee et al. [152] conducted a randomized, double-blind trial of adjunctive treatment with infliximab (together with standard pharmacotherapy) and a placebo for 12 weeks in patients with bipolar depression. They reported a significant improvement in a measure of anhedonia in infliximab-treated patients; however, the positive effect was short-lived and did not show sustainable positive results, dissipating within six weeks after the final infusion of the drug. Mansur et al. also reported positive therapeutic effects of infliximab on depressive symptoms [153] and cognitive function [154] in patients with bipolar depression. A recent study by the same group of investigators also demonstrated beneficial effects of infliximab on bipolar patients [155]. In a 12-week, randomized, double-blind trial, infliximab treatment was associated with a significant decrease in prefrontal levels of glutamate and a cognitive improvement in patients with bipolar depression [155]. Together, these findings (see summary of the findings in Table 1) suggest that infliximab produces antidepressant effects in particular sub-groups of depressive patients. Etanercept Etanercept is a human recombinant fusion protein of TNFR2 that neutralizes/inhibits TNF-α activity [30] (Figure 2). It is regarded as a less powerful TNF-α antagonist when compared to infliximab, but similarly to infliximab, it has a much stronger antagonizing effect against soluble TNF-α than transmembrane TNF-α [133,[142][143][144]. Etanercept binds only to the trimeric form of soluble TNF-α and each etanercept molecule is capable of binding to one TNF-α molecule [133,[142][143][144]. Etanercept is administered subcutaneously and has a bioavailability of nearly 75%; it has a relatively high but varying clearance rate (80-240 mL/hour) and a plasma half-life of 3-5.5 days [133,143]. Early pre-clinical studies showed that etanercept reduced depressive-like behavior in rats [156,157]. More recently, a study in rats showed that etanercept significantly decreased depressive-like behavior and improved cognitive function [158]. Similarly, a study in mice showed that etanercept exerted a potent antidepressant-like effect and an anxiolytic-like effect [159]. In line with these pre-clinical results, etanercept was found to significantly decrease the severity of fatigue, depression and anxiety symptoms among patients with psoriasis (Table 1) [137,138,160,161]. Moreover, non-randomized trials showed that addition of etanercept to standard therapy significantly reduced depressive and anxiety symptoms among patients with psoriasis [162][163][164] and rheumatoid arthritis [165,166]. For example, a prospective cohort study by Yang et al. 
[167] demonstrated that addition of etanercept to standard treatment was associated with a sustained significant reduction in depression and anxiety symptoms in psoriasis patients. In contrast to these findings, a study in patients with rheumatoid arthritis found that addition of etanercept to methotrexate (an immunemodulating drug) did not significantly improve depressive and anxiety symptoms [139]. Collectively, these results suggest that etanercept exhibits antidepressant and anxiolytic effects at least in some sub-groups of patients. Adalimumab Adalimumab is another human TNF-α-specific neutralizing monoclonal antibody ( Figure 2). It has similar pharmacokinetic properties to infliximab. Each adalimumab molecule can bind to two TNF-α molecules, while a single TNF-α homotrimer can bind to up to three adalimumab molecules [133,[142][143][144]. Adalimumab is administered subcutaneously and has a bioavailability of nearly 65%; it has a low clearance rate (~12 mL/hour) and a long but variable plasma half-life ranging from 10 to 20 days [133,143]. Randomized and non-randomized clinical trials showed that adalimumab exerts antidepressant and anxiolytic effects when administered to patients with chronic physical illnesses such as Crohn's disease [140], psoriasis [128,129,[168][169][170] and hidradenitis suppurativa [130] ( Table 1). To the best of our knowledge, the mood-modulating effects of adalimumab have not been directly tested in psychiatric patients with mood disorders. Pentoxifylline Pentoxifylline is a methylxanthine drug (Figure 2) that for many years has been used for the treatment of different clinical conditions such as peripheral vascular disease [171,172], idiopathic and ischemic cardiomyopathy [173][174][175], coronary artery disease [176], chronic kidney disease [177], alcoholic hepatitis [178], among other illnesses [171,179,180]. Pentoxifylline is administered orally and has a relatively high bioavailability, depending on the used formulation [160]. It has a low binding rate to plasma proteins (minimizing the chance for drug-drug interactions) and distributes vastly throughout body tissues, extending to the brain. Pentoxifylline undergoes extensive metabolism (mainly through reduction and oxidation) and has a short plasma half-life ranging between 1 to 4 h, again, depending on the used formulation [160]. The therapeutic efficacy of pentoxifylline in the treatment of peripheral vascular disease seems to be derived from its ability to improve the deformability of red blood cells, decrease blood fibrinogen levels and inhibit platelet aggregation [172]. Moreover, pentoxifylline inhibits the enzyme phosphodiesterase [181]. In the context of the present article, pentoxifylline is recognized as a potent inhibitor of TNF-α [173][174][175][176][177]179,[181][182][183][184][185][186]. Numerous studies showed that pentoxifylline inhibits the production of TNF-α in vitro and in vivo (in animals and humans) [173][174][175][176][177]179,[181][182][183][184][185][186]. Thus, pentoxifylline is regarded as a strong non-selective TNF-α inhibitor (as it exerts other pharmacological properties). Owing to the large body of data which linked TNF-α to the pathophysiology of depression, many pre-clinical studies have investigated the antidepressant potential of pentoxifylline [182,183,187]. Bah et al. [187] demonstrated that pentoxifylline exerted antidepressantlike effects in rats that were subjected to an experimental model of myocardial infarction. 
Pentoxifylline significantly increased sucrose preference and significantly decreased immobility time in the forced swim test (both indicative of an antidepressant-like effect) in post-infarction rats [187]. Mohamed et al. [182] observed that treatment with pentoxifylline for three weeks significantly increased sucrose preference in rats that were subjected to a chronic mild stress protocol. The chronic mild stress paradigm is used to induce depressive-like phenotypes in animals. Another study showed that pentoxifylline significantly decreased immobility time in rats that were exposed both to an inflammatory stimulus (lipopolysaccharide) and to chronic mild stress [183]. Collectively, these studies [182,183,187] (among others) demonstrated that pentoxifylline has strong antidepressant-like effects in various behavioral models, including the sucrose preference test and the forced swim test. Consistent with these positive pre-clinical results, a randomized, double-blind, placebo-controlled clinical trial showed that adjunctive pentoxifylline treatment was associated with a significant anti-depressive effect [188]. Addition of pentoxifylline (400 mg/day) to escitalopram (20 mg/day) for 12 weeks significantly reduced depressive symptoms in patients with major depression [188]. Moreover, pentoxifylline caused a significant decrease in plasma TNF-α and IL-6 levels (suggestive of a potent anti-inflammatory effect) and a significant increase in plasma serotonin and brain-derived neurotrophic factor levels (suggestive of favorable behavioral/neuroprotective biochemical effects) [188]. These encouraging findings underscore the need for more randomized trials of pentoxifylline in patients with mood disorders.

Table 1.

Infliximab:
- Prospective, non-randomized trial; n = 100; Crohn's disease; all patients treated with infliximab + standard therapy (4 weeks); significant decrease in the proportion of depressed patients [134].
- Prospective, non-randomized trial; n = 14; all patients treated with infliximab + standard therapy (4 weeks); significant reduction in depressive symptoms [135].
- Prospective, non-randomized trial; n = 29; ankylosing spondylitis; all patients treated with three doses of infliximab + standard therapy (6 weeks); significant reduction in depressive symptoms [136].
- Randomized, placebo-controlled trial; n = 23; standard therapy + placebo vs. standard therapy + infliximab, followed by infliximab-only treatment (54 weeks); significant reduction in depressive symptoms [146].
- Randomized, double-blind, placebo-controlled trial; n = 60; major depressive disorder (treatment-resistant); antidepressant(s) or medication free + placebo vs. antidepressant(s) or medication free + infliximab (12 weeks); overall, no significant difference between groups; infliximab significantly decreased depressive symptoms in a sub-group of patients with high baseline CRP levels [149].
- Systematic review and meta-analysis of four randomized controlled trials; n = 152; standard therapy + placebo vs. standard therapy + infliximab; adjunctive infliximab treatment did not have a significant effect on depressive symptoms [150].
- Randomized, double-blind, placebo-controlled trial; n = 60; bipolar depression with higher inflammatory activity; standard therapy + placebo vs. standard therapy + infliximab (12 weeks); no significant difference between groups;
infliximab significantly decreased depressive symptoms in a sub-group of patients with a history of childhood physical abuse [151].
- Randomized, double-blind, placebo-controlled trial; n = 60; standard therapy + placebo vs. standard therapy + infliximab (12 weeks); adjunctive infliximab treatment led to a significant although transient anti-anhedonic effect.
- Significant decrease in depressive symptoms [188].

* Type of comparison and follow-up duration are indicated in the table only if they were clearly mentioned in the reporting article. CRP denotes C-reactive protein.

Summary

Several clinical trials attested to the antidepressant efficacy of anti-TNF-α compounds (in patients with medical illnesses, major depression, or bipolar depression) [70]. Selective TNF-α antagonists such as infliximab and etanercept showed favorable neurological/antidepressant effects in specific sub-groups of patients. However, it is important to emphasize that most of the available data regarding the antidepressant effects of selective TNF-α antagonists are derived from studies in non-psychiatric patients (i.e., patients with inflammatory-associated diseases who presented depressive symptoms). Moreover, some evidence suggests that there is no connection between anti-TNF-α therapy and improvement in mood symptoms [139,150,151]. Therefore, new randomized, placebo-controlled clinical trials are necessary for direct examination of the mood-modulating effects of TNF-α antagonists in patients with mood disorders. In this regard, concerns have recently been raised regarding the efficacy of selective TNF-α antagonists as a therapeutic strategy for mood disorders [139,151,189,190]. It is important to mention that most clinically available anti-TNF-α compounds possess low-to-null ability to cross the BBB, mainly due to their large molecular weight [191-193]. This suggests that the reported beneficial behavioral (antidepressant) effects of these compounds are derived from peripheral inhibition of TNF-α activity rather than a direct effect on the brain. Potent peripheral inhibition of TNF-α activity may be sufficient for diminishing brain inflammation. Therefore, it is important to continue studying the therapeutic mechanism of action and effectiveness of selective TNF-α antagonists as a treatment for mood disorders.
2021-04-29T05:18:32.632Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "4665ddf5953fc4e478496344e5d95b74f2870319", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/26/8/2368/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4665ddf5953fc4e478496344e5d95b74f2870319", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268644461
pes2o/s2orc
v3-fos-license
Characterization and Antimicrobial Activity of Protease and α-Amylase Inhibitors from Immature Fruits of Capsicum chinense Jacq.

Antimicrobial peptides (AMPs) are small groups of proteins obtained from plants and animals. AMPs participate in the immune response, as they provide a quick line of defense against infections, while others may be related to the plant's defense against certain pests and pathogens. The objective of the present study was to evaluate the inhibitory activity of fractions obtained from immature fruits of Capsicum chinense Jacq. (accession UENF 1755) on trypsin, chymotrypsin and α-amylase families and on yeast growth. The peptides were obtained from the immature fruits using saline extraction. The extract was semi-purified by DEAE-Sepharose chromatography into two fractions: D1 (non-retained fraction) and D2 (retained fraction), and analyzed using SDS-tricine-gel electrophoresis. The antifungal activity of these fractions was tested on Candida albicans, Candida buinensis, Candida parapsilosis and Candida tropicalis. To elucidate the antimicrobial mechanism of these fractions, membrane permeabilization and endogenous reactive oxygen species (ROS) induction assays were performed. The fractions were also tested for inhibition of the trypsin, chymotrypsin and α-amylase enzymes. The two fractions, D1 and D2, inhibited yeast growth.

INTRODUCTION

The adverse effects of chemical pesticides, the frequent emergence of drug-resistant bacteria and fungi and, consequently, the discontinued use of some traditional antibiotics have led to the selection of increasingly resistant microorganisms, which directs us toward identifying new antimicrobial agents [1,2].

Plant antimicrobial peptides (AMPs) are molecules that exhibit hydrophobic and cationic properties, rich in positively charged arginine and lysine residues, favoring their interaction with microbial cytoplasmic membranes. Rarely, there are also anionic AMPs, mainly in the plant kingdom, that adopt an amphipathic structure, carrying a high proportion of hydrophobic residues [3].

Generally, AMPs contain cysteine-rich residues and show different secondary structures, such as β-sheets stabilized by two or three disulfide bonds, and often exhibit a helical amphipathic structure [4]. Plant AMPs are grouped into different classes based on features such as type of charge, cyclic structure, presence of disulfide bonds, and mechanism of action. The most common classes of AMPs reported so far include defensins, hevein-like proteins, cyclotides, knotin-like proteins, lipid transfer proteins (LTPs), thionins and protease inhibitors (PIs) [5]. Among these peptides, PIs, thionin-like peptides, defensins, vicilin-like peptides and some other AMPs have been isolated and identified from the Solanaceae plant family [6].

AMPs of plant origin have a variety of amino acid compositions and structures, many of which exhibit strong broad-spectrum antimicrobial activity and are capable of rapidly killing microbes [5]. Several AMPs have been isolated from different plants and plant organs, such as stem, root, seed, flower and leaf, and have exhibited antimicrobial activities against different microorganisms (fungi, viruses, bacteria, parasites and protozoa) [7].
In addition to antimicrobial activity, AMPs may have other biological properties, such as protease and carbohydrase inhibition. The antimicrobial mechanism by which these inhibitors act is attributed to their action on protein digestion, where a reduction in the availability of amino acids prevents the synthesis of new proteins necessary for the normal development of the pathogen's metabolism. This property suggests the potential of these proteins as biotechnological tools [8,9].

Over the last few years, our research group has isolated and characterized antimicrobial peptides present in the seeds of different plant species, which have shown inhibitory activity on proteases, especially serine proteases. In plants of the genus Capsicum, peptides characterized as trypsin and chymotrypsin protease inhibitors with antifungal activity (MIC 50-250 μg.mL-1) were identified, mainly in C. annuum and C. chinense [10,11]. The antifungal activity of these AMPs was characterized by visualization of cell agglomeration and pseudohyphae formation or by hyphal morphological changes, as well as by membrane permeabilization due to reactive oxygen species (ROS) induction [11,12].

Plant material

Capsicum chinense seeds, accession UENF 1755, were provided by the Laboratório de Melhoramento Genético Vegetal (LMGV), Centro de Ciências e Tecnologias Agropecuárias (CCTA), Universidade Estadual do Norte Fluminense Darcy Ribeiro (UENF), Campos dos Goytacazes, Rio de Janeiro, Brazil. The seeds were sown in a 128-cell Styrofoam tray containing commercial substrate (4% nitrogen; 14% phosphorus; 8% potassium) (Vivatto, Brazil) and maintained in a growth chamber with a controlled temperature of 28 ºC and a photoperiod of 12 h light/dark. After the seedlings were more than 10 cm high (about 20 days), they were transplanted into 5 L pots and placed in a greenhouse, where they were irrigated with water once a day. The flowers were marked at anthesis, and fruits were harvested 30 days after anthesis for immature fruits and 45 days after anthesis for ripe fruits; the harvested fruits were used for protein extraction.

Protein extraction from C. chinense fruits

The protein extraction from immature and ripe C. chinense fruits was performed according to the methodology proposed by Taveira and coauthors (2014) [13]. Briefly, the peduncles of all the fruits were removed and discarded, and 40 g of each immature and ripe fruit (without seeds) were collected and weighed. Further, they were separated and used for protein extraction. The fruits were ground in 200 mL of phosphate buffer (15 mM NaH2PO4; 10 mM Na2HPO4; 100 mM KCl; 1.5% ethylenediamine tetraacetic acid (EDTA), pH 5.4) (Sigma-Aldrich Co., St.
Louis, MO, USA), for 15 min with the aid of a multiprocessor (Mix 3x1, Philips), and the homogenate was placed in a refrigerator under stirring for 2 hours. Further, the homogenate was centrifuged at 15400 × g for 45 min at 4 °C, the supernatant was filtered using a paper filter, and the precipitate was discarded. Ammonium sulfate was then added to 70% saturation under agitation for 40 minutes, and the solution was placed in a refrigerator overnight. The solution was centrifuged at 15400 × g for 45 min at 4 °C, the supernatant was discarded and the precipitate was heated in a water bath for 15 min at 80 °C. After centrifugation for 30 min at 15400 × g, the precipitate was discarded and the supernatant was dialyzed (benzoylated dialysis tube, Sigma-Aldrich) against distilled water and lyophilized (Lyotop K105 Lyophilizer, SP, Brazil). The final extract obtained was named peptide-rich extract (PRE).

Partial purification by anion exchange chromatography

A DEAE-Sepharose anion exchange column (Sigma-Aldrich) was used, with 50 mL of resin. The column was packed under the action of gravity and was equilibrated with 100 mM Tris-HCl buffer pH 8.0 for separation of the peptides. After preparation of the column, 50 mg of the protein extract from immature fruits was dissolved in 10 mL of the equilibration buffer, centrifuged at 16000 × g for 3 min at room temperature, and the supernatant was applied onto the column. Fractions of 3 mL were collected at a flow rate of 60 mL.h-1, in a total of 80 tubes. The first 50 fractions were eluted with equilibration buffer (D1), and the retained proteins (D2) were eluted using equilibration buffer containing 1 M NaCl (Merck KGaA, Darmstadt, Germany). The absorbance of the fractions was measured at 280 nm [15] using a spectrophotometer (BEL LGS 53).

Inhibition of trypsin and chymotrypsin enzyme activities

The inhibitory activity of the peptides was determined by measuring the residual hydrolytic activity of trypsin and chymotrypsin using the substrates N-benzoyl-D,L-arginine-p-nitroanilide (BApNA) and N-benzoyl-L-tyrosine p-nitroanilide (BTpNA) (Sigma-Aldrich), respectively, after pre-incubation with the protein fractions. Proteolytic activity was measured using a synthetic p-nitroanilide-derived peptide in 50 mM Tris-HCl buffer pH 8.0 at 37 °C, in a final volume of 200 µL. The reaction was stopped by adding 100 µL of 30% acetic acid (v/v). Next, the photometric reading of the treatments was taken based on the extent of p-nitroanilide release from the substrates at 405 nm using a spectrophotometer (Spectroquant Pharo 100, Merck KGaA), according to the methodology described by Ribeiro and coauthors (2013) [16].

Temperature stability

For the thermal stability assay of the trypsin inhibitory activity, fractions D1 and D2 (50 μg.μL-1) were pre-incubated at various temperatures (40, 60, 80 and 100 °C) for 30 min in a water bath. After heat treatment, the aliquots were cooled on ice and the residual enzymatic activity of trypsin was tested as described in the item "Inhibition of trypsin and chymotrypsin enzyme activities" [16].
Reverse zymographic detection of protease inhibition

Protease inhibition was detected using the methodology of Felicioli and coauthors (1997) [17]. Fractions D1 and D2 were separated on a polyacrylamide gel (12% SDS-PAGE) co-polymerized with 0.1% gelatin under semi-denaturing conditions (without SDS and β-mercaptoethanol in the sample buffer). After electrophoresis, the gel was washed twice using wash buffer (0.1 M Tris-HCl pH 8.0 containing 2.5% Triton X-100) for 60 min to remove the SDS present in the running buffer. The gel was then immersed in the incubation buffer (50 mM Tris-HCl pH 8.0, containing 20 mM CaCl2 and 50 µg.mL-1 trypsin) at 37 °C for 1 h. Thereafter, it was rinsed with distilled water to remove excess trypsin. The gel was then stained using a solution containing 0.2% Coomassie Brilliant Blue G-250, 45% methanol and 10% acetic acid for 30 min, followed by destaining. The presence of protease inhibitors was assessed based on the inability of trypsin to digest gelatin, according to the appearance of bands in the gel. We used 15 µL of a commercial trypsin inhibitor (soybean Kunitz inhibitor, Merck KGaA) as a control.

α-Amylase inhibition assay

The enzymatic activity assay for intestinal α-amylases from Tenebrio molitor was performed as described by Da Silva and coauthors (2018) [18], with some modifications. Larval intestines were macerated at 4 °C in sterile saline and subjected to centrifugation at 12000 × g for 10 min. The protein content in the supernatant was quantified using the bicinchoninic acid protein assay as described by Smith and coauthors (1985) [19]. Initially, starch hydrolysis was quantified by reducing sugar liberation based on the colorimetric assay with 3,5-dinitrosalicylic acid (DNS). A reaction mixture containing different concentrations of intestinal α-amylase extracted from T. molitor with 25 μL of 1% starch (Sigma-Aldrich), in a final volume of 200 μL in water, was incubated at 37 °C for 30 minutes. Subsequently, 400 μL of DNS solution was added to the reaction and heated at 100 °C for 5 min, after which the samples were read at 540 nm (Spectroquant Pharo 100, Merck KGaA). One unit of activity (U) was defined as the quantity of the intestinal enzyme extract (in μg) that increased the absorbance at 540 nm by 0.1 absorbance unit over 30 min.

For the inhibition assay, the D1 and D2 fractions (25, 50, 75 and 100 µg.mL-1) were previously incubated with 10 U (4 μL) of intestinal α-amylase extract at 37 °C for 30 minutes. The residual enzyme activity was determined as described above. EDTA (5 mM, Sigma-Aldrich) was used as a positive control and 50 μg.mL-1 of bovine serum albumin (Sigma-Aldrich) as a negative control. The percentage inhibition was calculated considering the control (enzyme only) as 100% enzyme activity.
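As a rough arithmetic illustration of the unit-of-activity definition and the percentage-inhibition calculation described above, a minimal sketch with invented absorbance readings is given below; the helper functions and values are hypothetical and simplify the assay (for example, they ignore the conversion from absorbance to μg of extract).

# Minimal sketch of the DNS-assay arithmetic described above; all readings are hypothetical.

def units_of_activity(delta_abs_540nm: float) -> float:
    # One unit (U) corresponds to a 0.1 increase in A540 over 30 min,
    # so activity here is simply the A540 rise divided by 0.1.
    return delta_abs_540nm / 0.1

def percent_inhibition(abs_with_inhibitor: float, abs_enzyme_only: float, abs_blank: float) -> float:
    # Residual activity relative to the enzyme-only control (taken as 100% activity).
    residual = (abs_with_inhibitor - abs_blank) / (abs_enzyme_only - abs_blank)
    return (1.0 - residual) * 100.0

print(units_of_activity(1.0))                 # a rise of 1.0 A540 corresponds to 10 U here
print(percent_inhibition(0.35, 0.90, 0.05))   # about 64.7% inhibition for these made-up readings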
Antifungal activity assay

Candida yeasts were transferred from the stock and grown on Sabouraud agar medium (Merck KGaA) for approximately 24 h at 30 °C. A portion of the culture was resuspended in 10 mL of Sabouraud broth (Merck KGaA). Cell numbers were quantified using a Neubauer chamber (LaborOptik Ltd, United Kingdom) under an optical microscope. The quantitative test for fungal growth inhibition was performed using the protocol developed by Broekaert and coauthors (1990) [20], with modifications. To verify the effect of the immature and ripe fruit fractions (D1 and D2) on yeast growth, 1 × 10^4 cells.mL-1 were incubated in 100 μL of Sabouraud medium at 30 °C in a 96-well microplate (Thermo Fisher Scientific Inc, Waltham, MA, USA) in the presence of the protein extracts at a concentration of 100 µg.mL-1. The optical readings were measured at 620 nm (EZ Read 400, Biochrom Ltd, Cambridge, UK) after 24 hours. Fungal growth without the addition of the fractions was also determined. The experiments were performed in triplicate.

Fungal membrane permeabilization assay

The membrane permeabilization of yeast cells treated with the fractions obtained by anion exchange DEAE-Sepharose chromatography was evaluated using the SYTOX Green fluorescent probe (Invitrogen, Carlsbad, CA, USA), according to the methodology described by Thevissen and coauthors (1999) [21], with modifications. SYTOX Green dye penetrates the plasma membrane of structurally compromised cells, binds to nucleic acids and leads to cellular fluorescence. Immediately after the 24 h incubation of the fungal cells with the protein fractions, 100 µL aliquots of cells were incubated in the dark for 15 min with the fluorescent dye SYTOX Green, at a final concentration of 0.2 µM, according to the instructions provided by the manufacturer. Control cells were incubated only with the SYTOX Green dye under the same conditions. After incubation, 100 µL of fungal cell suspension was incubated with 0.2 µM SYTOX Green and 10 µg.mL-1 of propidium iodide (PI) in microcentrifuge tubes for 10 min at 30 °C with constant agitation. The cells were then analyzed using an optical microscope (Axioplan.A2, Zeiss, Germany) coupled to an AxioCam MRc5 (Zeiss) camera, and the images were analyzed using the Axiovision software, version 4.0 (Zeiss). The microscope is equipped with a set of fluorescent filters for the detection of fluorescein (excitation at wavelengths between 450 and 490 nm and emission at 500 nm).

Determination of ROS induction

The production of intracellular ROS in the fungal cells of C. albicans, C. parapsilosis, C. buinensis and C. tropicalis, with or without the protein fractions, was measured by incubation with the fluorescent probe 2′,7′-dichlorofluorescein diacetate (H2DCFDA) (Calbiochem-EMD, San Diego, CA, USA) at a concentration of 20 µM. The samples were incubated in the dark for 30 min at 30 °C, under constant agitation. Fungal cells were analyzed under an optical microscope (Axioplan.A2, Zeiss) equipped with a set of fluorescent filters for fluorescein detection (excitation at wavelengths between 450 and 490 nm and emission at 500 nm) [22].
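The growth-inhibition readout described in the antifungal activity assay above compares optical density at 620 nm after 24 h of incubation with and without the fractions; the sketch below shows one way relative growth inhibition could be computed from such readings. All sample labels and values are placeholders, not the study's data, and the blank-subtraction step is an assumption about how the readings would be handled.

# Hypothetical OD620 readings after 24 h; "control" is growth without added fractions.
readings = {
    "control": 0.82,
    "D1 100 ug/mL": 0.31,
    "D2 100 ug/mL": 0.45,
    "medium blank": 0.05,
}

blank = readings["medium blank"]
control_growth = readings["control"] - blank

for name in ("D1 100 ug/mL", "D2 100 ug/mL"):
    growth = readings[name] - blank
    inhibition = (1 - growth / control_growth) * 100  # percent growth inhibition vs. control
    print(f"{name}: {inhibition:.1f}% inhibition")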
Statistical analysis

Assays were performed in triplicate and repeated three times. All statistical analyses were performed using the GraphPad Prism software (version 8.0 for Windows). Statistical differences were assessed using one-way analysis of variance (ANOVA) with Tukey's test. The test takes into account the analysis of variance between the means of the control and the treatments, at the 5% significance level (p < 0.05).

Extraction and characterization of peptides from immature and ripe fruits

Antimicrobial peptides extracted from different plant organs, such as seeds and fruits of the genus Capsicum, have already been described and characterized previously [13,15,23-27]. Silva and coauthors (2014) [12] showed the antimicrobial potential of protein fractions obtained from seeds of C. chinense Jacq. accession UENF 1755 against the fungi Colletotrichum gloeosporioides, C. lindemuthianum, Fusarium oxysporum and F. solani, in addition to the inhibition of the activity of the trypsin and α-amylase enzymes. The AMPs studied in this work were obtained from immature fruits of Capsicum chinense Jacq. accession UENF 1755.

In order to evaluate which type of extraction would be more advantageous, two extractions were performed, acidic and saline, according to Dias and coauthors (2013) [28] and Taveira and coauthors (2014) [13], respectively. Inhibition of the enzymatic activity of trypsin showed that the saline extraction was more effective in terms of obtaining AMPs and possible protease inhibitors (data not shown).

Saline protein extraction from immature and ripe fruits of C. chinense accession UENF 1755 showed the presence of major protein bands of approximately 6.5 kDa (Figure 1a) in tricine SDS-PAGE analysis in both the immature fruit extract (IFE) and the ripe fruit extract (RFE). Further, trypsin activity inhibition assays using the immature and ripe fruit extracts were performed at two different concentrations, 10 μg.mL-1 and 50 μg.mL-1, to identify which of the two maturation stages showed the highest inhibitory activity. Significant inhibition of trypsin activity (p < 0.05) was observed: 34% and 37%, respectively, for the ripe fruit and 90% and 96%, respectively, for the immature fruit (Figure 1b).

Protease inhibitors are found in plants, mainly in storage tissues, acting as a defense by inhibiting digestive proteases of pests and pathogens [29,30]. Ribeiro and coauthors (2007) [31] isolated an inhibitor from Capsicum annuum seeds, called CaTI, with a molecular mass of 6 kDa, which showed antifungal activity against different yeasts. Silva and coauthors (2017) [11], corroborating these results, demonstrated that the CaTI inhibitor was able to inhibit the growth of the phytopathogenic fungi C. lindemuthianum and C. gloeosporioides. Dias and coauthors (2013) [28] isolated protease inhibitors from C. chinense seeds and demonstrated that these inhibitors were able to inhibit the growth of several yeast species, such as Saccharomyces cerevisiae, C. albicans, C. tropicalis, Pichia membranifaciens and Kluyveromyces marxianus.
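A minimal sketch of the one-way ANOVA followed by Tukey's test described in the statistical analysis section is shown below, using invented triplicate values; this is only an illustration in Python, not the authors' GraphPad Prism workflow, and the group names are hypothetical.

# Illustrative one-way ANOVA followed by Tukey's HSD on hypothetical triplicate inhibition data.
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import numpy as np

control = [2.1, 3.0, 2.5]        # % inhibition, made-up values
d1_50 = [93.8, 95.1, 94.9]
d2_50 = [84.7, 86.2, 85.9]

f_stat, p_value = f_oneway(control, d1_50, d2_50)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, d1_50, d2_50])
labels = ["control"] * 3 + ["D1 50 ug/mL"] * 3 + ["D2 50 ug/mL"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise comparisons at the 5% level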
Maracahipes and coauthors (2019) [25] demonstrated that peptide fractions obtained from mature and immature Capsicum fruits differ in their expression, antifungal activity and enzymatic inhibition of trypsin. Pearce and coauthors (1988) [32] described that, during the immature phase, wild tomato species show a significant amount of protease inhibitors, acting as a means of defense against herbivores, and that with the maturation of these fruits the levels of inhibitors decrease, making the fruits edible and promoting the dispersion of their seeds. Therefore, to confirm this result, we tested the two protein extracts from the saline extraction, IFE and RFE of C. chinense, for inhibition of the trypsin enzyme, and we observed a significant difference between the immature and ripe fruits; we thus chose to continue the experiments only with IFE (Figure 1b).

The immature fruit extracts were subjected to anion exchange chromatography on a DEAE-Sepharose column, and two fractions named D1 (non-retained fraction) and D2 (retained fraction) were obtained. Figure 2a shows the chromatographic profile corresponding to these fractions. The fractions obtained by DEAE-Sepharose chromatography were analyzed by Tricine SDS-PAGE. In the D1 fraction it was possible to observe protein bands with molecular masses between 6.5 and 14 kDa, and in the D2 fraction a protein band of approximately 6.5 kDa (Figure 2b).

Characterization of the inhibition of the trypsin and chymotrypsin enzymes by fractions D1 and D2

To assess the inhibition of serine protease activity, fractions D1 and D2 were assayed at 10 μg.mL⁻¹ and 50 μg.mL⁻¹; at these concentrations they inhibited trypsin by 15.5% and 94.65% (D1) and by 30.75% and 85.73% (D2), respectively (Figure 3a), and significant inhibition was observed for both fractions at the concentration of 50 μg.mL⁻¹.
To corroborate the in vitro trypsin inhibition assays, reverse zymography electrophoretic assays were performed. The assay was performed using 100 μg.mL⁻¹ of the D1 and D2 fractions. However, the protein bands appeared white in color after Coomassie staining instead of blue, both in the first assay and in the repetition. Despite the staining, it was possible to visualize protein bands with molecular weights between 45 and 66 kDa in both fractions, showing a positive result for protease inhibition and suggesting aggregation of these AMPs, as indicated by the arrows in Figure 3b. Fractions D1 and D2 were also submitted to inhibition assays of the enzyme chymotrypsin at a concentration of 50 μg.mL⁻¹, where significant inhibition (p < 0.05) of the enzyme activity (93% and 30%, respectively) was observed (Figure 3c). The stability of fractions D1 and D2 at different temperatures was also evaluated: D1 inhibited an average of 97% of trypsin activity after heating at 40, 60, 80 and 100 °C, and fraction D2 was likewise resistant to temperature variations, with an average of 95% enzyme inhibition (Figure 3d). The main antimicrobial mechanism of the inhibitors on these microorganisms is the inhibition of protein digestion, reducing the availability of amino acids for the synthesis of new proteins, which are necessary for the metabolic development of the pathogen [9]. Some inhibitors also have the property of inhibiting serine proteases and α-amylases together, and are called α-amylase/trypsin inhibitors [33-35]. It has already been reported in the literature that these inhibitors are thermostable at different temperatures, which is consistent with our results. Tamhane and coauthors (2007) [36] demonstrated a bifunctional inhibitor called CanPI-7 from C. annuum that was able to inhibit the activity of trypsin, chymotrypsin and intestinal proteases of the moth Helicoverpa armigera.
Inhibition of α-amylase enzyme activity by fractions D1 and D2

For the present study, we assessed whether fractions D1 and D2, from the DEAE-Sepharose anion exchange chromatography (Figures 2a and 2b), would also have inhibitory capacity not only against trypsin and chymotrypsin, but also against the intestinal α-amylase from Tenebrio molitor. An inhibition curve was obtained at the following concentrations of each fraction: 25, 50, 75 and 100 μg.mL⁻¹, where differences between the inhibitions caused by D1 and D2 can be observed. The D1 fraction remained within a narrow inhibitory range: 71% at 25 μg.mL⁻¹, 73% at 50 μg.mL⁻¹, 76% at 75 μg.mL⁻¹ and 79% at 100 μg.mL⁻¹. We observe that the D1 fraction does not need high concentrations to inhibit the action of α-amylase. The D2 fraction, however, presented a different curve, in which inhibition increased considerably as the fraction concentration increased: 11% at 25 μg.mL⁻¹, 28% at 50 μg.mL⁻¹, 48% at 75 μg.mL⁻¹ and 72% at 100 μg.mL⁻¹, which indicates that the D2 fraction needs to be at higher concentrations to act as an α-amylase inhibitor. Asterisks indicate significance by the ANOVA test, and differences in mean values were considered significant (p < 0.05). 5 mM EDTA was used in these assays as an inhibitor in the positive control (Figure 4). It has been shown that the D1 fraction has the ability to inhibit proteolytic enzymes (trypsin, chymotrypsin) and glycosidases such as α-amylase, while the D2 fraction has inhibitory activity against trypsin and α-amylase, requiring a higher concentration of the fraction for significant inhibition of α-amylase to occur, unlike D1. Thus, we can suggest that both fractions, D1 and D2, may have a bifunctional inhibitory potential, which requires further assessment.

Many inhibitors extracted from plants are already known, many of which are proteins. A well-known example is the amaranth α-amylase inhibitor, isolated from the plant Amaranthus hypochondriacus, which is a knottin-type inhibitor [37]. Within the same knottin family, other inhibitors also extracted from plants, and with similarity in their sequences, are found, such as Mj-AMP1 and Mj-AMP2, antimicrobial peptides from the seed of Mirabilis jalapa L. [38]; PAFP-S, an antifungal peptide from Phytolacca americana seed [39]; WR-AI1, a cystine-knot α-amylase inhibitor from Wrightia religiosa [40]; and AC-AI1, a cystine-knot α-amylase inhibitor from Allamanda cathartica [41]. Aguieiras and coauthors (2021) [15] recently described an AMP from the defensin family, called CcDef3, extracted from the fruits of the C. chinense pepper and capable of significantly inhibiting the activity of the α-amylase enzyme. Pereira and coauthors (2018) [35] showed that the protein extract from the leaves of C. annuum was able to inhibit the activity of the human salivary α-amylase enzyme at all concentrations tested. In addition, these extracts showed the ability to inhibit trypsin enzyme activity. Diz and coauthors (2011) [24] found that a lipid transfer protein (LTP) called Ca-LTP1, isolated from C. annuum seeds, was able to inhibit human salivary α-amylase activity.
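For illustration, the D2 concentration needed for a given level of α-amylase inhibition can be read off the curve above by simple interpolation. The 50%-inhibition estimate in the sketch below is only illustrative; the study does not report an IC50.

```python
# Minimal sketch: estimate the D2 concentration giving ~50% alpha-amylase inhibition
# by linear interpolation of the values reported above (illustrative only).
import numpy as np

conc = np.array([25.0, 50.0, 75.0, 100.0])      # ug/mL
inhib_d2 = np.array([11.0, 28.0, 48.0, 72.0])   # % inhibition reported for D2

ic50_d2 = np.interp(50.0, inhib_d2, conc)        # x = inhibition, y = concentration
print(f"D2 reaches ~50% inhibition at about {ic50_d2:.0f} ug/mL")   # ~77 ug/mL here
```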
Yeast growth inhibition by fractions D1 and D2

The D1 and D2 fractions were tested for their potential to inhibit the growth of the yeasts in vitro. Optical density readings of growth were taken 24 h after incubation of the cells with 100 µg.mL⁻¹ of the fractions. Inhibition of 56% was observed for C. buinensis in the presence of the D1 fraction and of 53% in the presence of the D2 fraction. Further, the D1 and D2 fractions showed inhibition of 40% and 28%, respectively, for C. tropicalis. For the yeast C. albicans, the D1 fraction showed a significant inhibition of 62% while the D2 fraction revealed 59% inhibition. The D1 and D2 fractions displayed inhibition of 44% and 64%, respectively, for C. parapsilosis. Absorbance values are represented as the mean (± standard deviation) of triplicates. Asterisks indicate significance by ANOVA, and differences in mean values were considered significant at p < 0.05 (Figure 5).

Although the mode of action of AMPs in cells is still not completely understood, it is already known that they have the ability to cause membrane damage and, consequently, cell death [42,43].

Effect of fractions D1 and D2 on membrane permeabilization

In order to understand the possible mechanism responsible for the inhibition of yeast growth, aliquots of cells treated with fractions D1 and D2 were incubated with SYTOX Green and propidium iodide (Figure 6). A considerable decrease in the number of cells was observed after treatment when compared to the control C. buinensis cells. In the presence of 100 μg.mL⁻¹ of fractions D1 and D2, the cultured cells showed SYTOX Green and propidium iodide staining only for the D2 fraction treatment; however, due to the reduced number of cells the result was not significant.

For C. tropicalis and C. albicans, a considerable decrease in the number of cells was evident when compared to the control. There was no significant labeling of cells in the control. When the cells were grown in the presence of the D1 and D2 fractions (100 μg.mL⁻¹), SYTOX Green and propidium iodide staining was observed, indicating that these fractions were able to cause permeabilization of the cell membrane and that these cells were no longer viable, especially cells treated with D1. For C. parapsilosis, we visualized a considerable decrease in the number of cells when compared to the control, and morphological alterations, such as the formation of pseudohyphae, were observed. When cells were grown in the presence of 100 μg.mL⁻¹ of fractions D1 and D2, they did not show significant SYTOX Green and propidium iodide staining.

Some researchers suggest that the ability to damage the membrane is an efficient mechanism, reducing the chances of the pathogen developing resistance, since it has already been elucidated that AMPs have amphipathic characteristics, which gives them the ability to interact with biological membranes [44].

Protease inhibitors are able to permeabilize the membrane of different pathogens, as is the case for the trypsin inhibitor CaTI: S. cerevisiae and C. albicans cells showed SYTOX Green staining, indicating the permeabilization of their membranes [26]. Dib and coauthors (2019) [45] demonstrated a protease inhibitor (IETI), isolated from Inga edulis seeds, which inhibited the yeasts C. buinensis and C. tropicalis, caused membrane permeabilization and consequently affected cell viability. It is important to emphasize that membrane permeabilization and inhibition of microorganism growth are not necessarily related phenomena. It is possible to find peptides that inhibit the growth of microorganisms and do not permeabilize membranes, and others that permeabilize membranes and do not inhibit growth [46].
Effect of fractions D1 and D2 on the induction of intracellular ROS

Another approach to elucidate the mechanism of action of AMPs is through the induction of intracellular accumulation of ROS, which can generate DNA damage and oxidation of proteins, carbohydrates and lipids, including activation of apoptotic pathways [43]. Reactive oxygen species are generated as a natural byproduct of normal oxygen metabolism and play important roles in cell signaling. However, under stress conditions, ROS levels can increase dramatically, causing significant cell damage [47].

To assess whether 100 μg.mL⁻¹ of the D1 and D2 fractions would cause an increase in endogenous ROS production, the H2DCFDA probe was used with the different yeast species (Figure 7). For C. buinensis, fluorescence was observed in the treatment with D1, but not with D2. Here, it is worth mentioning that the number of C. buinensis cells decreased considerably compared to the control. Moreover, for C. albicans, staining with the dye was not observed for either fraction. For the yeast C. tropicalis, we observed fluorescent staining in the treatments with D1 and D2; however, there was a difference when compared to the control, since the control cells also showed endogenous fluorescence. This yeast proved to be more sensitive to the D1 fraction, and even with an absorbance that was not very high for the microscopy assays, we could observe a decrease in microbial growth and several morphological changes, coupled with the fluorescent staining with propidium iodide, indicating that those cells were no longer viable. Conversely, no significant labeling was found in C. parapsilosis following D1 treatment, while for the D2 fraction there was probe labeling, which leads us to believe that the D2 fraction increased the level of ROS when compared to the control.

CONCLUSION

The results described in this work show, for the first time, that immature fruits of C. chinense accession UENF 1755 present fractions rich in AMPs with antifungal activity against yeasts. Further analyses of the fractions obtained from immature fruits of C. chinense (accession UENF 1755) are necessary. Purification and more specific studies of the D1 fraction are of particular interest: in this work, D1 proved to be the most promising protein fraction, showing the highest antifungal activity, being able to inhibit the four yeasts tested and to cause morphological changes, with SYTOX Green and ROS labeling observed in C. albicans and C. tropicalis, two yeasts of great medical interest due to the high rates of serious hospital infections caused by these species [48], in addition to significant inhibition of the activity of the trypsin, chymotrypsin and α-amylase enzymes. It would be of interest to identify the peptides present in this fraction and to elucidate their mechanisms of action in future studies, opening the possibility of obtaining purified peptides with potential for both medical and agronomic applications.

Figure 2.
(a) Chromatographic profile of the protein extract from immature fruits, obtained in anion exchange chromatography on a DEAE-Sepharose column. Fraction D1 (not retained) was eluted in equilibration buffer and fraction D2 (retained) was eluted in equilibration buffer containing 1 M NaCl; (b) Electrophoretic visualization by Tricine SDS-PAGE of fractions D1 and D2. M - Low molecular mass marker (kDa).

Figure 5. Inhibition of yeast growth after 24 hours of incubation with D1 and D2 at a concentration of 100 µg.mL⁻¹. (%) Percentage of inhibition caused by the fractions. Asterisks indicate differences (p < 0.05) between the experimental and control treatments by Tukey's test.

Figure 6. Images of the membrane permeabilization assay in C. buinensis, C. tropicalis, C. albicans and C. parapsilosis cells after treatment with fractions D1 and D2 at a concentration of 100 µg.mL⁻¹ for 24 hours. Cells were treated with the fluorescent probes SYTOX Green and propidium iodide. Bars = 20 µm.
Observation of wall-vortex composite defects in a spinor Bose-Einstein condensate

We report the observation of spin domain walls bounded by half-quantum vortices (HQVs) in a spin-1 Bose-Einstein condensate with antiferromagnetic interactions. A spinor condensate is initially prepared in the easy-plane polar phase, and then, suddenly quenched into the easy-axis polar phase. Domain walls are created via the spontaneous $\mathbb{Z}_2$ symmetry breaking in the phase transition and the walls dynamically split into composite defects due to snake instability. The end points of the defects are identified as HQVs for the polar order parameter and the mass supercurrent in their proximity is demonstrated using Bragg scattering. In a strong quench regime, we observe that singly charged quantum vortices are formed with the relaxation of free wall-vortex composite defects. Our results demonstrate a nucleation mechanism for composite defects via phase transition dynamics.

Topological defects in a continuous ordered system are a splendid manifestation of symmetry breaking, with their fundamental types, such as walls, strings, and monopoles, inevitably determined by the topology of the order parameter space. However, if there is a hierarchy of energy (length) scales with different symmetries, composite defects, such as domain walls bounded by strings and strings terminated by monopoles, may exist in the system [1]. In cosmology, it has been noted that such composite defects can be nucleated through successive phase transitions with different symmetry breaking in grand unification theories; furthermore, composite defect formation has been proposed as a possible mechanism for galaxy formation [1,2] and baryogenesis [3] in the early Universe. Spinful superfluid systems with multiple symmetry breaking provide an experimental platform for studying the physics of composite defects and, thus, to examine the cosmological scenario. In superfluid ³He-B, it has been observed that a spin-mass vortex, on which a planar soliton terminates, can survive after phase transitions by being pinned on the vortex lattice [4,5] or nafen [6]. Composite defects have also been theoretically studied in the atomic Bose-Einstein condensate (BEC) system. Vortex confinement with a domain wall was predicted to occur in a two-component BEC under coherent intercomponent coupling [7]. In particular, for a spin-1 Bose gas with antiferromagnetic interactions, half-quantum vortices (HQVs) joined by a spin domain wall were anticipated to be responsible for the emergence of an exotic 2D superfluid phase with spin-singlet pair correlations [8,9].
In this Letter, we report the experimental observation of wall-vortex composite defects in a quasi-2D antiferromagnetic spin-1 BEC. The composite defects are nucleated via a two-step instability mechanism in quantum quench dynamics from the easy-plane polar (EPP) phase into the easy-axis polar (EAP) phase. In the first step, spontaneous Z₂ symmetry breaking causes domain wall formation, the core of which is occupied by the EPP phase. In the second step, the snake instability splits the domain walls into segments, with each segment forming a composite defect, which is a domain wall terminating on a HQV [10]. The mass supercurrent in proximity to the wall end point is demonstrated using Bragg scattering [11]. We also observe that singly charged quantum vortices (QVs) can be formed by the relaxation of free composite defects. Our results directly demonstrate the existence of wall-vortex composite defects and their nucleation mechanism via phase transition dynamics in a spinful superfluid system.

The experiment is performed with a BEC of ²³Na atoms in the F = 1 hyperfine state, having an antiferromagnetic spin interaction coefficient c₂ > 0 [12]. The ground state of a spin-1 antiferromagnetic BEC is a polar state with ⟨F⟩ = 0 [13,14], where F = (F_x, F_y, F_z) is the spin operator of the particle. The order parameter of the BEC is parametrized with the superfluid phase φ and a real unit vector d̂ = (d_x, d_y, d_z) for the spin director, and is expressed as

(ψ_{+1}, ψ_0, ψ_{-1})^T = √n e^{iφ} ( -(d_x - i d_y)/√2, d_z, (d_x + i d_y)/√2 )^T,

where ψ_{m_z=0,±1} is the condensate wave function of the |m_z⟩ Zeeman component and n is the particle density. In the presence of an external magnetic field, e.g., along the z axis, uniaxial spin anisotropy is imposed by the quadratic Zeeman energy E_z = q(1 − d_z²), and the ground state of the system is the EAP state with d̂ = ±ẑ for q > 0 and the EPP state with d̂ ⊥ ẑ for q < 0.

FIG. 1. The order parameter of the polar phase has a discrete symmetry under the operation of (φ, d̂) → (φ + π, −d̂). The double-head arrow denotes the spin director d̂ and the color of each arrow head indicates the superfluid phase φ. (a) Domain wall at the interface of two domains with opposite spin directions. d̂ flips to the opposite direction across the wall. The density distributions n_{0,±1} of the three m_z = 0, ±1 spin components are displayed. (b) Domain wall bounded by a HQV. As the two domains are continuously connected to each other with changing φ by π, the domain wall spatially terminates and a HQV is formed at the wall end point.

As a means of creating wall-vortex composite defects, we employ the quantum quench dynamics from the EPP phase to the EAP phase via a sudden change of spin anisotropy, which can be implemented by dynamically controlling the q value [15,16]. Because of the positional difference between the two phases in the order parameter space, the quench dynamics involves spontaneous Z₂ symmetry breaking, as d̂ ⊥ ẑ → d̂ = ±ẑ. For q > 0, the initial EPP state is dynamically unstable so that spin fluctuations will be exponentially amplified via the spin exchange process of |+1⟩|−1⟩ → |0⟩|0⟩ [17]. The microscopic origin of the Z₂ symmetry breaking arises from the two equivalent choices for the phase of the |0⟩ component. A rapid quench can give rise to a complex network of domain walls in a uniform system according to the Kibble-Zurek mechanism [18,19]. The spatial structure of a domain wall is described in Fig. 1(a), which is formed at the interface between two domains with opposite spin directions.
Here d̂ is denoted by a double-head arrow and the superfluid phase is indicated by the color of each arrow head, reflecting the discrete symmetry of the order parameter under the operation of (φ, d̂) → (φ + π, −d̂) [20]. In the wall region, d̂ continuously flips to the opposite direction and the |±1⟩ components are present, sandwiched by the |0⟩ component. The wall thickness is determined by the competition between the quadratic Zeeman energy and the gradient energy associated with the vector field d̂(r), giving a characteristic length scale of ξ_q = ℏ/√(2mq) for q ≪ µ, with the particle mass m and the chemical potential µ [21]. An interesting observation is that the two domains separated by the wall comprise only the |0⟩ component, which means that they can be continuously connected to each other by varying φ without flipping d̂, thus allowing spatial termination of the domain wall as shown in Fig. 1(b). In this case, the wall end point exhibits a superfluid phase winding of π, forming a HQV [10]. This is the wall-vortex composite defect expected in the EAP phase. The spatial structure of the composite defect is analogous to that of the spin-mass vortex, also referred to as a θ soliton, in superfluid ³He [4,6,21].

We prepare a condensate containing N_c ≈ 8.0 × 10⁶ atoms in the |F = 1, m_F = 0⟩ hyperfine spin state in an optical dipole trap with trapping frequencies of (ω_x, ω_y, ω_z) = 2π × (3.8, 5.5, 402) Hz. The Thomas-Fermi radii for the trapped condensate are (R_x, R_y, R_z) ≈ (230, 160, 2.2) µm. The external magnetic field is B_z = 33 mG, giving q/h = 0.3 Hz, and the field gradient is controlled to be less than 0.1 mG/cm [22]. The EPP-to-EAP quench dynamics is initiated by rotating d̂ from z to the xy plane by applying a short rf pulse and then suddenly changing the q value to a target value q_f > 0 using a microwave dressing technique [10]. The post-quench evolution of the BEC is examined by measuring the spatial density distributions of the three spin components at a variable hold time t, taking an absorption image after Stern-Gerlach (SG) spin separation for 24 ms time-of-flight [22]. The spin healing length is ξ_s = ℏ/√(2mc₂n₀) ≈ 4.0 µm for the peak atom density n₀, and our highly oblate sample with R_z < ξ_s constitutes a quasi-2D system for spin dynamics. In our experiment, q_f, which represents the initial excitation energy per particle with respect to the ground state, is much smaller than µ ≈ h × 880 Hz, so the creation of density perturbations is energetically improbable. Note that q_f ≪ µ sets a clear hierarchy of energy scales in the system [21].

Figure 2 shows several images of the quenched condensate after various hold times for q_f/h = 1.0 Hz. In the early stage of the quench dynamics, a few line defects are clearly observed to appear across the condensate [Fig. 2(a)]. The |0⟩ component shows density-depleted trenches and the trench regions are filled by both of the |±1⟩ components, consistent with the spin distribution for the domain wall described in Fig. 1(a). The high visibility of the trench in the |0⟩ component after such a long time-of-flight reflects the nature of a topological soliton. At later t > 0.2 s, the line defects are typically observed to end in the middle of the condensate [Figs. 2(b) and 2(c)]. The profile of the total condensate density was confirmed to remain unperturbed by imaging without SG spin separation. Such a smooth wall termination is the key characteristic of the wall-vortex composite defects.
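As an illustrative consistency check (not part of the paper's analysis), the length scales quoted above can be evaluated directly for ²³Na; the spin interaction energy printed at the end is simply inferred by inverting ξ_s ≈ 4.0 µm.

```python
# Minimal sketch: evaluate xi_q = hbar/sqrt(2*m*q) and invert xi_s = hbar/sqrt(2*m*c2*n0)
# for 23Na, using the q_f values and xi_s quoted in the text.
import numpy as np

hbar = 1.054571817e-34          # J*s
h = 2 * np.pi * hbar            # J*s
m = 23 * 1.66053907e-27         # kg, mass of a 23Na atom

def xi(energy_joule):
    """Healing-type length hbar / sqrt(2 m E)."""
    return hbar / np.sqrt(2 * m * energy_joule)

for qf_hz in (1.0, 10.6):       # quench values q_f/h used in the experiment
    print(f"xi_q at q_f/h = {qf_hz:4.1f} Hz : {xi(h * qf_hz) * 1e6:5.1f} um")
    # ~15 um and ~4.5 um: the domain wall is thinner for the stronger quench

xi_s = 4.0e-6                   # m, spin healing length quoted in the text
c2n0_hz = hbar**2 / (2 * m * xi_s**2) / h
print(f"implied c2*n0/h ~ {c2n0_hz:.0f} Hz")   # ~14 Hz, comparable to q_f/h = 10.6 Hz
```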
As t increases, the end point seems to recede toward the boundary of the condensate with the decrease of the domain wall length [ Fig. 2(d)], which we attribute to the wall tension due to the quadratic Zeeman energy. It is noteworthy that ring-shaped domain walls were sporadically observed [21], which is reminiscent of a 2D skyrmion that is also a topologically allowed defect for the polar phase [23]. To corroborate the existence of HQVs at the wall end points, we measure the mass superflow distribution using a spatially resolved Bragg scattering method [11]. Before applying a pulse of magnetic field gradient for the SG spin separation, we irradiate two pairs of counterpropagating Bragg laser beams onto the sample in the xy plane for 0.8 ms. The frequencies of the laser beams are set to be resonant to atoms with a velocity of 0.3 mm/s ≈ 0.4 mξs so that the atoms that have such high velocities near the HQV cores may be scattered out of the condensate. The HQV core size is characterized by ξ s [10,24,25]. Then, the mass circulation around the HQVs can be identified by examining the spatial distribution of the scattered atoms with respect to the wall end points [21,26]. Two examples of data of the Bragg scattering measurement are provided in Fig. 3. In the case of Fig. 3(a), a single domain wall terminates in the center region and a strong scattering signal is detected at the front side of its end point, consistent with the mass flow expected from a HQV with counterclockwise circulation at the end point [ Fig. 3(e)]. Figure 3(b) presents another case in which two end points are close to each other. The Bragg signal shows that a superflow passes through the gap between the two walls, indicating that the two HQVs at their end points have opposite circulations [ Fig. 3(f)]. The spatial configuration of the two walls, together with the superflow pattern, conjures up the possibility that they might be formed by breaking a single domain wall that initially traverses the condensate [ Fig. 2(a)]. The domain wall can be viewed as a three-component soliton [27][28][29], where a dark soliton of the |0 component with a π phase step coexists with the bright solitons of the |±1 components. A dark soliton in a scalar BEC is dynamically unstable due to snake instability, which causes a local Josephson current by breaking the soliton [30][31][32]. If a similar mechanism is active for the domain wall, the wall can break into many free composite defects, i.e., domain walls bounded by two HQVs at both ends. In the experiment with q f /h = 1.0 Hz, however, we rarely observed free composite defects detached from the condensate boundary, which means that the domain wall does not suffer much from the snake instability. It is the |±1 components at the wall core that suppress the snake instability by providing an effective pinning potential, which, thus, stabilizes the domain wall. In other words, in the quench dynamics for large q f , the snake instability can be enhanced with the domain wall becoming thinner, thus leading to proliferation of free composite defects. We perform the same quench experiment with a higher q f /h = 10.6 Hz. The Bogoliubov analysis of the dynamic instability of the initial EPP state shows that the characteristic length and time scales for the quench dynamics are proportional to √ q f and 1/ √ q f , respectively, for q f c 2 n 0 [17], and it is expected that domain walls will be nucleated in a denser pattern within a faster time scale. 
Indeed, we observe that a characteristically dense network of thinner domain walls develops within 60 ms and, also, that the domain walls subsequently break into many composite defects [ Fig. 4(a)], demonstrating the enhancement of the snake instability. After the fast wall splitting process, short free defects are clearly identified in the center region of the condensate at t > 0.2 s. The free defects have a spatial size 5ξ s with various shapes. Some of the defects appear very round [ Fig. 4(a)A], while others show dumbbell shapes, implying splitting [ Fig. 4(a) ation evolution, the defect number N d decays with a 1/e lifetime of ≈ 5 s, where it is observed that most long-lived free defects exhibit round shapes [ Fig. 4(b)]. At t > 0.8 s, the fractional population of the round defects increases to over 80%. Here we count a defect as a round one if its aspect ratio (≥ 1) is smaller than 1.2. From in situ magnetization imaging [10], we find that the long-lived defects can have nonzero axial magnetization, i.e., contain unequal |±1 spin populations at their cores [ Fig. 4(e)], which indicates that the spin current dynamics is intricately involved in the defect formation process. The free wall-vortex composite defects are classified into two types according to the net circulation around them. One type has a circulation of h/m with two HQVs having the same circulation, which is topologically identical to a singly charged QV in a coarse-grained view [ Fig. 4(c)], and the other involves with two HQVs having opposite circulations, which might be described as a magnetic bubble having linear momentum [ Fig. 4(d)]. We refer to them as vortex-vortex (VV) and vortex-antivortex (VAV) types, respectively. Immediately after domain wall splitting, the system can contain statistically equal numbers of the defects for the two types. However, when the free defects become short, comparable to ξ s as observed, the defect dynamics of each type will be different because of different HQV interactions [33]. We may expect that small VV-type defects will survive longer with the topological character of singly charged QVs, whereas those of the VAV type will decay faster due to their linear motion in the trapped sample and possible vortex pair annihilation [34]. In the experiment, we identify the long-lived round defects as the VV type by confirming the superflow circulation around them with the Bragg scattering measurement [21,35]. Finally, we remark on the peculiarity of defect formation in the EPP-to-EAP quantum phase transition. Since the U(1) symmetry is simply broken in the final EAP ground state in a similar manner to the case of scalar BECs, one may expect a conventional Kibble-Zurek scenario for vortex nucleation in our system. However, we observe a two-step defect formation process: first, domain wall creation via the Z 2 symmetry breaking, and then, production of composite defects by a splitting of the domain walls. The two-step scenario is also confirmed in our numerical simulation for a uniform system, based on the Gross-Pitaevskii equation for spin-1 BECs [21,36]. Our observations show that defect formation in phase transition dynamics critically depends on the symmetry breaking sequence of the system [2]. In conclusion, we observed the creation of wall-vortex composite defects in the EPP-to-EAP quantum quench dynamics of an antiferromagnetic BEC and demonstrated the unconventional mechanism of defect formation in the phase transition dynamics. 
Our findings provide a different framework for the nucleation of composite defects via the Kibble-Zurek mechanism [1,5]. Additionally, the observation of the free composite defects encourages the efforts to search for the exotic superfluid phase in 2D antiferromagnetic spinor gases [9,37,38]. A1. Domain wall in the polar phase To describe the theme of spontaneous symmetry breaking in our system, we introduce a complex vector field in the Cartesian representation for spin-1 spinor BECs as In this representation, the energy density E(ψ) is written as In antiferromagnetic BECs with c 2 > 0, the ground state is the polar state, where we have the spin density vector S ≡ iψ × ψ * = 0 and the order parameter of the polar phase represented by with a complex scalar field ψ T = √ ne iφ and a real unit To capture the feature of domain walls, we consider a stationary solution for a planar domain wall perpendicular to the y axis in a uniform system;d → ±ẑ for y → ±∞ [ Fig. 1(a)]. The vectord varies continuously across the wall, withd⊥ẑ at the core (y = 0). We assume φ = 0 andd⊥ŷ for ψ +1 = −ψ −1 without a loss Here, n0 = µ/c0 is the bulk density. The profile is symmetric with respect to y = 0. The wall thickness is characterized by ξq = √ 2mq = ξ µ q for ξq ξ. In the core regime (|y| ξq), the total density n = |ψx| 2 + |ψz| 2 with ψy = 0 is almost constant for ξq ξ, while it decreases substantially for ξq ∼ ξ and finally vanishes when ξq/ξ reaches to a certain value. of generality. Then, the planar wall is evaluated by the reduced energy density for the real functions ψ x and ψ z , For the case of µ q, the total density n (= ψ 2 x + ψ 2 z ) is approximately constant; the wall is thus characterized by the parameter q. Comparing the gradient term and the quadratic Zeeman term, the thickness of a domain wall is characterized by 2mq . Figure S1 shows numerical solutions for the domain wall for different values of q. According to the expression for ξ q , the domain wall becomes thicker as q decreases. This behavior is qualitatively consistent with our experimental observations. A2. Structure of the wall-vortex composite defect The structure of a wall-vortex composite defect is understood in a similar manner to that reported in the literature for superfluid 3 He-B [4,5]. In the 3 He-B system, the composite defect is formed due to the existence of two different length scales; the coherence length and the dipolar healing length. The length of the former is defined by the superfluid condensation energy and is much shorter than that of the latter, which is related to the much weaker spin-orbit interaction. Correspondingly, our system of anti-ferromagnetic spinor BECs has also two different length scales; ξ = √ 2mµ and ξ q . The length ξ is defined by the term of c 0 (or µ) in Eq. (S1), which is associated with the condensation energy. The large length ξ q is related to the much weaker quadratic Zeeman energy with q µ; ξ ξ q . The length (energy) hierarchy supports the coexistence of different kinds of topological defects as composite defects. This argument is connected with the condition q µ to stabilize the domain wall as a part of the composite object. The two length scales ξ and ξ q are associated with a vortex and a domain wall, respectively. If the order parameter varies spatially on length scales much shorter than ξ q , the quadratic Zeeman term is unimportant compared with the gradient term. 
Then, the energy density (S1) can be reduced to the simplest form where again we assume ψ × ψ * = 0 by considering the polar phase. This form of energy density supports the existence of a HQV. A linear HQV is realized by considering a combination of two transformations, φ → φ + π andd → −d, e.g., with the cylindrical coordinate r = (ρ, θ, z) [ Fig. S2(a)]. Here, the mismatch between (φ = 0,d =ẑ) and (φ = π,d = −ẑ) along the x axis for x > 0 is avoided and the order parameter varies continuous there because of the discrete symmetry under the operation of (φ,d) → (φ + π, −d). Note thatd is ill-defined at the origin because of the discrepancy for its orientation. In the HQV core, the broken-axisymmetry (BA) phase is energetically preferred causing a magnetized vortex core with finite spin density of S⊥ẑ. The size of the magnetized core for the BA phase is at most on the order of the spin healing length ξ s = ξ c0 c2 . To understand the structure of a wall-vortex composite defect, it is instructive to show a way to build a composite defect by putting a HQV in the EAP state. A HQV is a high-energy object and should be deformed due to the quadratic Zeeman energy with q > 0. The order parameter field in Eq. (S4) is deformed to decrease the area of the region for d x = 0 since the energy density there is higher than that in thed = ±ẑ region. The phase φ is not affected directly by the quadratic Zeeman energy. After the deformation, thed⊥ẑ region condenses along the x axis for x > 0 to form a domain wall terminating at the HQV at the origin [ Fig. S2(b)]. Inside the wall core, the vector fieldd flips in a continuous manner fromd =ẑ tod = −ẑ with the phase φ fixed approximately. The phase φ rotates from 0 to π about the HQV core, while the vector fieldd is fixed to point along the z-direction outside the wall core. A3. Numerical time evolution of the quench dynamics in a uniform system Such composite defects can be nucleated in the course of our non-equilibrium quench dynamics. Domain walls nucleated via the Kibble-Zurek mechanism for the Z 2 symmetry breaking can transform into composite defects owing to the snake instability, which causes a local Josephson current and nucleates a pair of HQVs by breaking a domain wall into two segments. To demonstrate this scenario more clearly in a uniform system, we numerically solve the Gross-Pitaevskii equation for spin-1 BECs. The numerical simulations were done in a method similar to those of the phase transition dynamics described by the Gross-Pitaevskii equation for multi-component systems [36]. Figure S3 shows a twodimensional simulation for q f /h = 2.6 Hz from the initial EPP state of (ψ 0 , ψ ±1 ) = (0, ± n0 2 ) under the Neumann boundary condition at x = ± L 2 and y = ± L 2 with the system size L = 724ξ. The time evolution is quite consistent with the two-step scenario of the defect formation and explains well the experimental observations. A4. Superflow detection with Bragg scattering The superflow distribution in the BECs containing composite defects is investigated by employing a spatially-resolved Bragg scattering method [11]. When atoms are irradiated by a pair of counterpropagating Bragg laser beams along the x -direction, a two-photon Fig. 2. The |0 component shows a density-depleted region of ring geometry, which is filled by the |±1 components. The spatial distributions of the spin components are suggestive of a 2D Skyrmion spin texture which has the topological charge Q = 1 4π dxdyd · (∂xd × ∂yd) = 1 [23]. 
process may resonantly occur by imparting momentum p 0 = 2 k Lx and energy ε = δ to the atoms, where k L is the wavenumber of the two Bragg beams and δ is their frequency difference. The resonance condition is determined by ε = 2m . Because of the velocity dependence of δ, the spatial distribution of the Bragg scattering response for a BEC directly shows the corresponding velocity region in the BEC. Figure S5(a) illustrates the experimental sequence for our Bragg scattering measurement. We first turn off the optical dipole trap (ODT). After a short 200 µs time-offlight (TOF), we apply two pairs of counterpropagating Bragg laser beams to the sample for 800 µs [ Fig. S5(b)]. Then, we apply an external magnetic field gradient for 3 ms to spatially separate the |m z =±1 components from the |0 component. After a subsequent TOF, we take an absorption image of the sample including the scattered atom clouds [ Fig. S5(c)]. The Bragg signal S B (x , y ) is constructed as S B = n − −n + with n ± (x , y ) being the density distribution of the atoms scattered out to the ±x direction, translated back to the condensate frame [Figs. 3(c), 3(d), and S6(d)] [11]. The original density distribution of the |0 component can be obtained by combining that of the unscattered, remaining condensate with n + and n − , which is used for locating composite defects in the BEC [ Fig. 3(a) and 3(b)]. In our Bragg scattering experiments, the sample condition is slightly different from that in the experiments described in the main text. The condensate atom number is N c ≈ 5.8 × 10 6 and the Thomas-Fermi radii of the condensate are (R x , R y , R z ) ≈ (200, 143, 2.0) µm for the ODT with trapping frequencies of (ω x , ω y , ω z ) = 2π × (4.5, 6.3, 460) Hz. The chemical potential and the peak spin interaction energy are µ = h × 927 Hz and c 2 n 0 = h × 14.7 Hz, respectively. The Bragg beams are red-detuned by 1.7 GHz from the |F = 1 to |F = 2 transition and each beam has an intensity of 0.3 mW/cm 2 with a 1/e 2 width of 1.8 mm R x,y . The TOF duration before taking an absorption image was chosen from the range 6 to 11 ms for different measurements. Figure S6(c) presents one of the rare occasions in the Bragg scattering measurements for low q f /h = 2.6 Hz, where the quenched BEC contains free composite defects. Two free composite defects are shown to line up, detached from the boundary of the condensate. The circulation directions of the HQVs at their end points are assigned based on the measured Bragg signal distribution around them [ Fig. S6(d)] despite the signal near the condensate boundary being faint due to low atom density. The free composite defect on the left side shows a strong negative Bragg signal along its wall, indicating its linear motion in the +x -direction, with its moving direction consistent with the circulation directions of its HQVs. In Fig. S7, we display four example data of the Bragg scattering measurements at long hold times of t > 1 s for high q f /h = 10.0 Hz, where several free composite defects of small size are present in the quenched BEC. Taking into account the short distances between the free composite defects, the frequency difference of the Bragg beams was set to be δ d /2π = −2.0 kHz, corresponding to a higher velocity of v x = 0.6 mm/s ≈ 0.8 mξs , to probe a region closer to the HQVs. The Bragg signal around the long-lived, round-shaped defect shows opposite signs at the different lateral sides with respect to the Bragg beam axis line passing through the defect core. 
This demonstrates the nonzero superflow circulation around the defect, i.e., that the round-shaped defect is of the VV type. We also present a couple of images of dumbbell-shaped defects in Figs. S7(c) and (d). The Bragg signal indicates that such a defect is of the VAV type, having two HQVs with opposite circulations.
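As a rough, illustrative check of the velocity selectivity used in these Bragg measurements (0.3 and 0.6 mm/s), and assuming Bragg beams near the sodium D line at about 589 nm (a wavelength the text does not state), the velocity-dependent part of the two-photon resonance is the Doppler shift 2 k_L v_x.

```python
# Minimal sketch (assumption: Bragg beams near the Na D line, lambda ~ 589 nm):
# velocity-dependent part of the two-photon Bragg resonance, delta_v = 2 * k_L * v.
import numpy as np

lam = 589e-9                       # m, assumed Bragg laser wavelength
k_L = 2 * np.pi / lam              # m^-1

for v in (0.3e-3, 0.6e-3):         # target velocities quoted in the text (m/s)
    delta_hz = 2 * k_L * v / (2 * np.pi)
    print(f"v = {v * 1e3:.1f} mm/s  ->  Doppler shift of ~{delta_hz / 1e3:.1f} kHz")
# ~1 kHz and ~2 kHz; the latter matches in magnitude the -2.0 kHz value quoted
# for the 0.6 mm/s probe.
```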
Chemotyping the Essential Oil in Different Rosemary ( Rosmarinus officinalis L . ) Plants grown in Kashmir Valley The aim of the present study was to evaluate the yield, chemical constituents and determine the chemotype of the essential oil obtained from different rosemary plants growing in different accessions of rosemary fields. About four plant samples were analyzed for essential oil yield and the essential oil yield varied from 0.88% to 1.2%. The essential oil samples were further analyzed by Gas Chromatography (GC) for the purpose of identification of chemical constituents present in them. It was contended from the results that the selected plants differed from each other in terms of chemical constituents. Camphor content was found in higher amount in all the four samples, thus it could be inferred that the plants are camphor chemotype. Rosmarinus officinalis L. is a perennial herb with an evergreen needle-like leaves that belong to the Lamiaceae (mint) family [1][2][3] .Rosemary is a widely used aromatic and medicinal plant 4,5 .The leaf of rosemary is an indispensable spice of the French, Italian and Spanish cuisine.Rosemary is cultivated for the valuable oil, which can be extracted from the harvested plants when flowers are in buds 5 . Essential oils are natural, concentrated, hydrophobic liquids containing volatile aroma compounds from plants 1 .They are also known as volatile or ethereal oils.The volatile or essential oils correspond to a mixture of hemiterpenoids, monoterpenoids and some sesquiterpenoids that are in conjunction with oil.These mixtures are highly volatile when exposed to air at room temperature, thus the name ethereal oils.They are almost insoluble in water and soluble in alcohol and usually lighter than water.They have high refractive index and many of them are optically active.Essential oils are generally extracted by distillation.The chemical composition of an essential is quite different from one plant to another and the main chemical constituents present in essential oil determine its aroma, taste and biological activity. There is a wide range of techniques which have been used for extraction and concentration of essential oils as well as chromatographic separation and identification of chemical constituents present in essential oils.Extraction of volatile terpenoids from plant materials and from a wide variety of other matrices is often carried out using hydrodistillation or steam distillation 6 .Rosemary oil was notified for Generally Recognized as Safe (GRAS) status by the Fragrance and Essence Manufacturers Association of the USA (FEMA) in 1965 and has been listed by the U.S. Food and Drug Administration (FDA) for food use (GRAS) 2 .In 1970, the Council of Europe included rosemary oil in the list of substances, spices and seasonings deemed admissible for use, with a possible limitation of the active principles in the final product 2,7 . It is generally known that the components of essential oils from aromatic plants of the same scientific name could be different according to the plant's habitats, or parts and methods for extraction.This variation in composition is called chemotype 8 .Chemotype occurs when aromatic plants grow under different climatic and soil conditions 8,9 . The present study was therefore, carried out to evaluate the yield, chemical composition and determine the chemotype of different rosemary plants grown in Kashmir valley. Sample collection Fresh leaves of Rosmarinus officinalis L. 
were collected 15 minutes prior to distillation of essential oil, from the fields of the Indian Institute of Integrative Medicine (IIIM), Sanatnagar, Srinagar, Jammu & Kashmir, India.

Distillation of essential oil

The essential oil was obtained by hydrodistillation in a Clevenger apparatus (PERFIT, India) for 3 hours 3,10. Briefly, 250 g of freshly collected leaves were taken in a 500 mL round-bottom flask (PERFIT, India), followed by addition of water in the ratio of 1:6 (w/v), and distilled for about 3 hours. Essential oil from each of the samples was collected and dried over anhydrous sodium sulphate 3,11. The oil was then stored at 4 °C until analysis with gas chromatography (GC) 3.

Chromatography

The analysis of the oil was carried out following the method of Amin et al. (2013) on a Perkin Elmer Auto XL gas chromatograph equipped with a head space analyzer and FID, using a fused-silica capillary column (30 m x 0.32 mm; 0.25 μm film thickness). The oven temperature was programmed from 60 °C to 250 °C at 5 °C per minute. The injector and detector temperatures were set at 250 °C and 270 °C, respectively. Nitrogen at a pressure of 8 psi was used as the carrier gas. The identification was done on the basis of retention time, Kovats index, and MS library search (NIST & WILEY). Retention indices (RI) of the chemical components of the samples and of authentic compounds were determined. The relative amounts of the identified compounds were calculated based on GC peak areas without using correction factors.

Yield of essential oil

The yield of essential oil from each of the four samples ranged from 0.88% to 1.2% (Figure 1). It is clear from Figure 1 that the highest essential oil content was found in plant P55/B2 (1.2%) and the lowest yield in plant P57/B2 (0.88%).

Identification of compounds

Gas chromatographic analysis of the essential oil resulted in the identification of 18 different components representing about 97.2217% of the essential oil of P55/B2; 19 components representing 97.6266% of the essential oil of P57/B2; 17 components representing about 76.5109% of the essential oil of P67/B2; and 22 components representing about 86.0686% of the essential oil of P178/B4. All of the identified compounds and their percentages in each of the plants are summarized in Tables 1, 2, 3 & 4. Camphor was found to be present in the highest concentration (53.3871%), which is much higher than that reported by Verma et al. 10, followed by 1,8-cineole (11.6516%), alpha-pinene (5.6862%) and camphene (5.4258%). It is thus clear from the results that all the plants are of the camphor chemotype.

It is clear from the results that the yield of essential oil from samples of different plants varied considerably. The composition of the essential oil varies from plant to plant, as can be seen in Tables 2, 3, 4 and 5. The camphor content of the essential oil for all 4 samples is higher than the values of 15.64% and 22.01% reported by Verma et al. 10 and Shawl 12, respectively. The camphor content also varies among the samples. Since the camphor content of the essential oil is higher, all the samples are of the camphor chemotype. The chemical composition of the oil depends on how and where the plant was grown, harvested and distilled. When the conditions cause permanent variation in the chemical composition of the essential oil of rosemary plants, such plants are called chemotypes.
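The two quantities underlying these results, relative percentages from uncorrected GC peak areas and retention indices, can be computed as in the sketch below. The peak areas, retention times and alkane bracket are hypothetical, and the linear (van den Dool & Kratz) form of the index is assumed here because the run was temperature-programmed; this is not the authors' own processing script.

```python
# Minimal sketch: relative composition from raw GC peak areas (no correction factors)
# and a linear retention index for a temperature-programmed run (hypothetical inputs).
def relative_percent(peak_areas):
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

def retention_index(t_x, t_n, t_n1, n_carbons):
    """Linear (van den Dool & Kratz) index: analyte bracketed by n-alkanes Cn and Cn+1."""
    return 100.0 * (n_carbons + (t_x - t_n) / (t_n1 - t_n))

# areas chosen so the percentages mirror the reported composition (camphor ~53%)
areas = {"camphor": 53387, "1,8-cineole": 11652, "alpha-pinene": 5686, "others": 29275}
print(relative_percent(areas))

# hypothetical retention times (min): analyte elutes between n-nonane (C9) and n-decane (C10)
print(retention_index(t_x=12.4, t_n=11.0, t_n1=13.5, n_carbons=9))   # ~956
```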
Three principal chemotypes of R. officinalis L. have been reported, which include camphor/borneol, cineole and verbenone 13. Typical components of rosemary are 1,8-cineole, α-pinene and camphor 14, and the relatively stable ratio of these components defines each chemotype. It is known that rosemary oils are widely divided into two chemotypes by the ratio of major components: one with more than 40% of 1,8-cineole, and the other with almost the same percentage of 1,8-cineole, α-pinene and camphor 15. These findings are in agreement with our present results.
Effect of the Ecological Location of a Water Source on Entropy and other Spatio-temporal Behavioral Features: An extended and systematic replication

The continuous analysis of spatial behavioral dynamics under stimulus schedules has been a scarcely studied field in experimental psychology. A recent study conducted in our laboratory suggests that the features embedded in the spatial dynamics of behavior are affected by stimulus schedules at least as much as features embedded in discrete responses. In that study we compared the spatial behavioral dynamics under two time-based schedules (fixed vs. variable time) of water delivery and two different locations of water delivery (delivery in the central zone vs. the perimetral zone) on a Modified Open Field System (MOFS). The present work replicates those findings taking into consideration previously uncontrolled variables. In Experiment 1, three subjects were exposed to a Fixed Time 30 s water-delivery schedule. In the first phase the water dispenser was located at the perimetral zone. In the second condition, the water dispenser was located at the central zone. Each location was presented for 20 sessions. In Experiment 2, conditions were the same, but a Variable Time schedule was used. Measures of entropy were used to describe the spatial behavioral dynamics. We found higher levels of entropy under the central location of water delivery than under the perimetral location, and higher entropy under the Fixed than the Variable Time schedule, confirming previous findings under different sequences of dispenser locations. In general, a well-differentiated dynamic between experimental conditions was observed in terms of direction (distance to the dispenser) and variation (entropy) of spatial behavior. These findings are discussed under a systemic, parametric, ecological, and non-mediational framework.

In recent studies, the relevance of the integration of the ecological approach with the arbitrary approach in behavioral sciences has been acknowledged (Cabrera et al. 2019; León et al. 2020; Timberlake, 1990). While the ecological approach emphasizes the study of behavior in a context in which stimuli and responses have ecological relevance (e.g., water-seeking behavior, exploratory behavior, etc.) and the role of displacement activity on changes in contact stimulation, the arbitrary approach is characterized by its emphasis on the systematic variation of temporal parameters of stimuli and its effect on the rate of an arbitrary and discrete response. An integrated system of discrete responses (e.g., lever pressing, nose poking, head entries) and displacement patterns (e.g., displacement routes) could bring the arbitrary approach closer to the ecological approach, since it allows, from the arbitrary approach, the characterization and analysis of behavioral patterns with ecological relevance (e.g., water-seeking behavior) from a parametric perspective.

Focusing on spatio-temporal variation in displacement patterns, studies in behavioral science have shown that, using noncontingent schedules, changes in spatio-temporal variation in displacement patterns can be modulated by the schedule of reinforcement (Eldridge et al., 1988; Van Hest et al., 1986), the number of available dispensers in the experimental chamber (León et al. 2020) and variations in the location of reinforcement (León et al. 2020), among other variables. For example, from a parametrical approach, in a study conducted by León et al.
(2020), the authors analyzed changes in displacement patterns under different combinations of FT and VT schedules with water delivered in a constant or varied location. They found more variable patterns with the combination of VT and a varied location than with FT and a single location. On the other hand, studies from the ecological perspective (Martinez & Morato, 2004; Yaski et al. 2011; Whishaw et al. 2006), using the open field paradigm, have found differences in spatio-temporal patterns of behavior regarding different zones of the experimental chamber, even without programmed contingencies in it. For example, Yaski and colleagues found that rats exposed to an open field arena concentrated their displacement patterns in the peripheral zones in comparison to the central zones; they also found a higher velocity in the rats' movement in the central zone in comparison to the peripheral zone. For these authors, different zones of the open field arena have different ecological relevance: the zones close to the walls serve as a refuge or "safe area", while the central zone is an "insecure area" for organisms.

In a more recent study, León et al. (2020) brought together the ecological and the parametrical approach. Considering the two previously mentioned findings, they compared the spatial behavioral dynamics under two time-based schedules of water delivery, Fixed Time (FT, Experiment 1) and Variable Time (VT, Experiment 2), at two different locations, the perimetral zone and the central zone, on a Modified Open Field System (MOFS). Subjects in each experiment were 3 experimentally naïve, water-deprived Wistar rats. In Experiment 1, water was delivered according to a FT 30-s schedule. In Condition I, water was delivered at the center of the experimental chamber; in Condition II, water was delivered close to a wall of the chamber. Each condition lasted 20 sessions, and each session lasted 20 minutes. For both schedules, FT and VT, with the dispenser located at the center of the chamber, the distance from the rat to the dispenser was higher than the distance to the dispenser when it was located close to the wall. The accumulated time of stays in the different zones of the experimental chamber, when the dispenser was located at the center, was concentrated in the center and less in the peripheral zones, while with the dispenser located at the wall the distance was markedly lower. Also, the entropy index (an index that represents variation in displacement patterns) was higher when the dispenser was at the center than when it was located close to the wall. The difference between the VT and the FT schedules was that under the first one there was a lower behavioral dynamic in terms of variation of displacement patterns.

In León et al.'s study, the sequence of exposure to each location of the dispenser was the same for all subjects in both experiments, that is, center-wall, so it is possible that the decrement in the dynamics of displacement patterns was due to a sequencing effect, since it is well known that behavioral variation decreases with exposure to the stimulus schedule through the sessions (Iversen, 2017). In order to determine if this was the case, it would be important to conduct a study in which subjects were presented with the location of the dispenser at the wall first and then at the center. If the previously mentioned results hold, this would provide more robust evidence of the effect of the water location in the experimental chamber and its ecological relevance.
On the other hand, if results differing from those reported by León and colleagues were found, it would mean that the earlier result was an effect of the sequence of exposure and that the water location is not a relevant factor in behavioral dynamics, as it was supposed to be. In behavioral science, as well as in the sciences in general, an important component of scientific practice is the reproducibility of the obtained results. A recent survey conducted by Nature (Baker, 2016) with 1576 researchers showed that more than 70% of them have tried and failed to reproduce the work of other scientists and more than 50% have failed to reproduce their own studies. The importance of the replication of findings lies in the fact that, as Sidman (1960) stated, "The soundest empirical test of the reliability of data is provided by replication." Considering the importance of replicating León et al.'s (2020) results, data analysis, and methodology, the purpose of the present study was to conduct a direct replication of the León et al. study, presenting the location of water delivery in a reversed order: first with the water located at the wall and then at the center. In Experiment 1 a Fixed Time schedule was used, while in Experiment 2 a Variable Time schedule was employed.

Subjects

Four experimentally naïve Wistar rats, one male and three female, were used. All rats were three and a half months old at the beginning of the experiment. They were housed individually under a schedule of 23 hours of water deprivation with free access to water for 1 hour at the end of the experimental sessions. Food was freely available in their home cages. One session was conducted daily, 7 days a week. All procedures were conducted in agreement with university regulations of animal use and care and followed the official Mexican norm NOM-062-ZOO-1999 for Technical Specification for Production, Use and Care of Laboratory Animals. One subject died of unknown causes before finishing the experiment; because of this, data for only three rats are reported.

Apparatus

A Modified Open Field System (MOFS) was used (León et al., 2020). Figure 1 shows a diagram of the apparatus. Dimensions of the chamber were 100 cm × 100 cm. All four walls of the chamber as well as the floor were made of black Plexiglas panels. The floor contained 100 holes of 0.8 cm located 0.95 cm from each other. A water dispenser, based on a servo system and made by Walden Modular Equipment, was located close to the wall (Condition I) or close to the center of the MOFS (Condition II). When activated, it delivered 0.1 cc of water into a water cup that protruded 0.8 cm from the floor of the MOFS in one of the holes. The MOFS was illuminated by two low-intensity lights (3 W) located above the chamber on opposite sides of the room to avoid shadow zones. Once delivered, water remained available for 3 s for consumption. A texturized black patch, 9 × 9 cm with 16 dots/cm, printed on a 3D printer, was located in proximity (5.5 cm) to the water dispenser to facilitate its location. The experimental chamber was placed in an isolated room on top of a table 45 cm in height. The room served to isolate external noise. All programmed events were scheduled and recorded using Walden 1.0 software. Rats' movement was recorded by a Logitech C920 web camera located at the center, 1.80 m above the experimental chamber. Tracking data were analyzed using Walden 1.0 software. This software recorded the rat's location every 0.2 s in the experimental space using a system of x, y coordinates.
The system recorded the rats according to their center of mass. Data files obtained from this software were then analyzed using MOTUS® and SPATIUM software.

Procedure

Subjects were exposed to two consecutive conditions in the same order (see Table 1). In each condition, water was delivered using a Fixed Time (FT) 30-s schedule. When delivered, water remained available for 3 s. In Condition I, the water dispenser was located on the floor next to a wall of the experimental chamber (see Figure 1). In Condition II, the water dispenser was located on the floor at the center of the experimental arena. Each condition lasted 20 sessions. Each session lasted 20 minutes. Rats were directly exposed to the conditions without any previous training. The MOFS was cleaned using isopropyl alcohol between each experimental session.

Figure 1. Representation of a Modified Open Field System. Note. Panel A shows an isometric view of the system. Panel B represents the first condition, with the dispenser in the wall location, and Panel C represents the second condition, with the dispenser in the center location. The blue circle indicates the water dispenser location and the black square represents the texturized black patch. (Created with BioRender.com)

Figure 2. Complete routes for the last session for each rat for Conditions I and II for Experiment 1 (left panel) and Experiment 2 (right panel). Note. Each panel shows the analogic routes in the MOFS for a complete session. Black points show the rat's location in the arena at the first moment of water delivery (the first 0.2-s frame of the 3 s of water availability). Each row depicts data for one rat, and each column depicts data for session 20 (last session of the Wall condition) and session 40 (last session of the Center condition) for Experiment 1 (FT schedule) and Experiment 2 (VT schedule).

Results

The location of the rat at the first moment of water delivery is highlighted with a black mark. Since the water dispenser remained activated for 3 s, it was still possible that rats contacted the drop of water after the first 0.2 s, so the marks may not exactly correspond to the number of drops of water consumed. In Condition I, for the three rats, the patterns of displacement were located predominantly at the walls of the chamber, although there were some crossings between walls. In that condition, the rats' location at the time of delivery was close to the dispenser. In Condition II, for all rats, a back-and-forth pattern between the walls and the center of the chamber was found; the location at the time of delivery for R2 and R3 was distributed between the wall and the center of the chamber (close to the dispenser). The location of R4 at the time of delivery was away from the dispenser for most of the session.

Figure 3. Note. Each panel shows the relative value of the distance (0 = minimum to 1 = maximum) from the rat to the dispenser, for every 0.2-s frame (gray dots), and a moving average of 200 frames (red line) for a complete session. Each row depicts data for one rat, and each column depicts data for session 20 (last session of the Wall condition) and session 40 (last session of the Center condition) for Experiment 1 (FT schedule) and Experiment 2 (VT schedule).

Figure 3 (left side) shows the normalized, moment-to-moment distance (every 0.2 s) from the location of each rat to the dispenser on the last session of each condition. A value close to 1 means a greater distance from the rat to the dispenser and a value close to zero means minimum distance.
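As a companion to the distance measure just described, the sketch below shows one way the normalized, moment-to-moment distance to the dispenser and its 200-frame moving average could be derived from the 0.2-s x, y samples. The normalization by the arena diagonal and the particular smoothing are assumptions for illustration; the original software may normalize or smooth differently.

```python
import numpy as np

def normalized_distance(xy, dispenser_xy, arena_size=100.0):
    """Distance from the rat to the dispenser for every 0.2-s frame,
    scaled to 0-1 by the arena diagonal (assumed maximum possible distance)."""
    d = np.linalg.norm(np.asarray(xy) - np.asarray(dispenser_xy), axis=1)
    return d / (arena_size * np.sqrt(2))

def moving_average(values, window=200):
    """Simple moving average over `window` frames (~40 s at 0.2 s/frame)."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")

# Example: hypothetical dispenser position near a wall at (50, 2.5) cm,
# with synthetic tracking data standing in for a real session.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(6000, 2))
d = normalized_distance(xy, dispenser_xy=(50.0, 2.5))
d_smooth = moving_average(d, window=200)
```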
In Condition I, with the water dispenser located at a wall of the chamber, the distance values remained at low levels, although with some variability along the session. In Condition II, with the water dispenser located at the center of the chamber, the distance values remained at higher levels in comparison to the values obtained when it was located at the wall, with some variability along the session. With the same format as the previous figures, the left panel of Figure 5 shows the recurrence plots for subjects under the Fixed Time schedule. Both axes show time against time on a time frame of 0.2 s. These plots depict, with a black mark, the reiteration of the organism in a given location at different times (represented by the intersection X, Y). If, on the contrary, the rat's location was not reiterated between times (frames), a white mark is shown. The organism's location is defined as a given position value, for a given frame, within a set of 10 × 10 defined zones, comparing the rat's location, frame by frame, throughout the session (for a complete description see León et al., 2020). There are several aspects to consider in these plots. The densification and alternation of black and white, a checkered pattern, indicates high recurrence, that is, periodic returns of the organism to given regions. In checkered patterns, the size of the squares indicates the acceleration of the recurrence patterns. Checkered patterns with relatively bigger squares mean recurrence with lower acceleration, while relatively smaller squares mean recurrence with higher acceleration. Finally, a higher proportion of continuous black means longer stays in a given region, while a higher proportion of white means more transitions among regions. Under the wall-dispenser condition, all subjects showed a well-defined checkered pattern, that is, higher recurrence for the whole session, but with different acceleration (R2 depicts a significantly higher acceleration, that is, a well-defined checkered pattern with small squares). Under the center-dispenser condition, by contrast, checkered patterns faded and only for brief periods did they show a significantly higher acceleration. The predominant quality of the recurrence plots under the center-dispenser condition was the extended white zones, which means more transitions of the organisms among regions, with low recurrence and without longer stays (there is no continuous black).

Figure 4. Accumulated time of stays in each of the 100 zones of the MOFS for the last session for each rat for Conditions I and II for Experiment 1 (left panel) and Experiment 2 (right panel). Note. Each panel shows the accumulated time of stays in a square region from a configuration of 10 × 10 zones. Each row depicts data for one rat, and each column depicts data for session 20 (last session of the Wall condition) and session 40 (last session of the Center condition) for Experiment 1 (FT schedule) and Experiment 2 (VT schedule).

Figure 5. Recurrence plots for the last session for each rat for Conditions I and II for Experiment 1 (left panel) and Experiment 2 (right panel). Note. Each panel depicts the change of regions for each rat in a configuration of 10 × 10 defined zones every 0.2 s. Each row depicts data for one rat, and each column depicts data for session 20 (last session of the Wall condition) and session 40 (last session of the Center condition) for Experiment 1 (FT schedule) and Experiment 2 (VT schedule).

Method

Subjects

Four experimentally naïve Wistar rats, one male and three female, were used. All rats had the same characteristics as the subjects reported in Experiment 1.
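Before continuing with the apparatus and procedure of Experiment 2, a brief sketch of how recurrence plots of the kind described above can be built: each 0.2-s frame is mapped to one of the 10 × 10 zones, and cell (i, j) of the plot is marked black whenever the zone visited at frame i equals the zone visited at frame j. This is an illustrative reading of the published description, not the authors' exact code, and the helper names are hypothetical.

```python
import numpy as np

def zone_sequence(x, y, arena_size=100.0, n_bins=10):
    """Map each 0.2-s frame to a single zone id (0-99) of the 10 x 10 grid."""
    ix = np.clip((np.asarray(x) / arena_size * n_bins).astype(int), 0, n_bins - 1)
    iy = np.clip((np.asarray(y) / arena_size * n_bins).astype(int), 0, n_bins - 1)
    return ix * n_bins + iy

def recurrence_matrix(zones):
    """Boolean matrix R[i, j] = True when the same zone is occupied at
    frames i and j (the black marks of the recurrence plot)."""
    z = np.asarray(zones)
    return z[:, None] == z[None, :]

# Example on a short synthetic session (a full 20-min session has 6000 frames)
rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(500, 2))
R = recurrence_matrix(zone_sequence(xy[:, 0], xy[:, 1]))
print(R.shape, R.mean())  # proportion of recurrent frame pairs
```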
Apparatus

The apparatus was the same as the one described in Experiment 1.

Procedure

The procedure was identical to the one employed in Experiment 1 (see Table 1), with the difference that in Experiment 2 the schedule used was a Variable Time (VT) 30-s schedule. The list of values that comprised the VT schedule was 3, 7, 13, 21, 31, 47, and 88 s; one value was randomly taken on each occasion from the list without replacement.

Results

Figure 2 (right side) shows the rats' displacement in the experimental chamber every 0.2 s for a complete session under the VT schedule. The first column depicts routes for the last session of Condition I (wall) and the second column depicts routes for the last session of Condition II (center). The location of the rat in the first 0.2 s of water delivery is highlighted with a black mark. Since the water dispenser remained activated for 3 s, it was still possible that rats contacted the drop of water after the first 0.2 s. In Condition I, all rats were located close to the dispenser at the time of delivery, although there were some crossings between walls for R6 and R7. In that condition, the rats' location at the time of delivery was close to the dispenser located at the wall. In Condition II, for all rats, a back-and-forth pattern between the walls and the center of the chamber was found. The location of the rats at the time of delivery was distributed between the wall and the center of the chamber (close to the dispenser). Figure 3 (right side) shows the normalized, moment-to-moment distance (every 0.2 s) from the location of each rat to the dispenser on the last session of each condition. In Condition I, with the water dispenser located at a wall of the chamber, the distance values remained low, although with some variability along the session. In Condition II, with the water dispenser located at the center of the chamber, the distance values remained at higher levels in comparison to the values obtained when it was located at the wall, with some variability along the session.

Discussion

The purpose of this paper was to conduct a direct replication of the León et al. (2020) study, presenting the location of water delivery in a reversed order, that is, first with the water located at the wall and then at the center. With this purpose, we compared the spatial behavioral dynamics under two time-based schedules (fixed vs. variable time) of water delivery and two different locations of water delivery (perimetral zone vs. central zone). Similar to León et al. (2020), we found that, under both schedules (FT and VT), at the moment of water delivery, when the dispenser was in the center zone the animals were usually far from and scattered around the dispenser (they were near the perimeter zone), while when the dispenser was in the perimetral zone the animals were near the dispenser. It was interesting to notice that, with the dispenser at the center, back-and-forth patterns associated with the deliveries took place, while with the dispenser in the perimeter, patterns of stays took place and the distance to the dispenser was significantly reduced. On the other hand, the distribution of time spent in zones throughout the sessions confirmed that, under the dispenser-at-the-center condition, two functional segments with associated approach patterns emerged in different physical areas: the zone of water delivery, at the center, and the perimeter zone.
While, under dispenser at the wall condition, both functional segments, zone water delivery and safety area, converged in the perimetral zone. The routes, distance to the dispenser and time spent in zones, for each experimental condition in both experiments, resulted in well differentiated recurrence patterns and a robust difference in entropy: higher values at the center condition in comparison to the wall condition. The decrement of spatial dynamics, with the dispenser located in the perimeter, was more salient under the VT schedule than under FT, an expected finding given the literature (Eldridge et al., 1988;Van Hest et al., 1986). Nevertheless under VT with the dispenser at the center, a higher dynamics was observed in all subjects (e.g., back and forth patterns, higher and accelerated recurrence), a finding that could be unexpected if only the stimulus schedule is considered. The present work provides evidence that the decrement in the dynamic of displacement patterns was due to the experimental arrangement: location of water delivery and schedules of reinforcement, and not to a sequencing effect. As we previously stated, replication is an important process of the scientific practice since it allows to provide robust evidence about the reliability of the effects of our procedures (Baker, 2016). With the results of this study we can confidently state that the decrement in behavioral dynamics observed was a function of the location of water dispenser in combination with the schedule of delivery. In addition to the replication of previous findings controlling for sequencing effects, the present study replicates and strengthens the following assumptions of Leon et al. (2020a): a) there is a differential ecological segmentation of the experimental arena, regardless of scheduled contingencies; b) the functional relevance of the space on the dynamics of behavior depends both static (e.g. delimited zones, texture path and dispenser's location) and dynamic (e.g. water delivery and stimuli-schedule) arrangements of the environment; this would be referred to as: functional densification of space ; c) the proposed approach is a comprehensive and allows a broad characterization of the continuum of behavior that is difficult to obtain with approaches based only in discrete responses or other unique measure of the spatial dimension of behavior (Henton & Iversen, 2012); d) the behavior is an integrated functional system comprising an environmentsubsystem and an organism-subsystem, in which the ecological relevance of the events and segments of the space are co-determined by the qualities of the organism, defined in a phylogenetic and an ontogenetic way; e) recurrence plots and entropy are useful representations and non-first-order measures in order to characterize, in plausible way, the spatial dynamics of behavior, and its embedded features neglected under standard approaches (León et al., 2021). Finally, there is pending empirical work for future studies. The current study invites to the parametric study of the effect of the dispenser's location on spatial dynamics of behavior and it's transitions, because in the present work, and in the previous one, only the extreme values of this parameter were explored. Another pending task is the systematic evaluation of the effect of the texturized patch as discriminative segment or signal of water delivery zone.
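For readers who wish to reproduce the stimulus schedules compared here, the sketch below simulates delivery times under the FT 30-s schedule and under the VT 30-s schedule built from the values listed in the Procedure of Experiment 2 (3, 7, 13, 21, 31, 47, and 88 s, drawn without replacement). Reshuffling the list once it is exhausted is an assumption made for the simulation; the paper does not state what happened after the list was used up.

```python
import random

FT_INTERVAL = 30                          # seconds
VT_VALUES = [3, 7, 13, 21, 31, 47, 88]    # mean interval = 30 s

def ft_delivery_times(session_s=1200, interval=FT_INTERVAL):
    """Delivery times (s) under a Fixed Time schedule for a 20-min session."""
    return list(range(interval, session_s + 1, interval))

def vt_delivery_times(session_s=1200, values=VT_VALUES, seed=0):
    """Delivery times (s) under a Variable Time schedule: intervals drawn
    from `values` without replacement, reshuffling when the list is empty."""
    rng = random.Random(seed)
    times, t, pool = [], 0, []
    while True:
        if not pool:
            pool = values[:]
            rng.shuffle(pool)
        t += pool.pop()
        if t > session_s:
            return times
        times.append(t)

print(len(ft_delivery_times()), len(vt_delivery_times()))
```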
2021-07-20T13:25:53.272Z
2021-07-16T00:00:00.000
{ "year": 2021, "sha1": "9ebe41dcfb9614603d239216f0433c40b523dc5d", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/07/16/2021.07.15.452514.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "9ebe41dcfb9614603d239216f0433c40b523dc5d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
201664814
pes2o/s2orc
v3-fos-license
Wear Resistant Coatings with a High Friction Coefficient Produced by Plasma Electrolytic Oxidation of Al Alloys in Electrolytes with Basalt Mineral Powder Additions To achieve a better performance of engineering components, modern design approaches consider the replacement of steel with lightweight metals, such as aluminum alloys. However, bare aluminum cannot satisfy requirements for surface properties in situations where continuous friction is needed. Among the various surface modification techniques, plasma electrolytic oxidation (PEO) is considered as promising for structural applications, owing to its hard and well-adhered ceramic coatings. In this work, the surfaces of two Al alloys (2024 and 6061) have been modified by PEO coating (~180 µm) reinforced with basalt minerals, in order to increase the coefficient of friction and wear resistance. A slurry electrolyte, including a silicate-alkaline solution with addition of basalt mineral powder (<5 µm) has been used. The coating composition, surface morphology, and microstructure were studied using X-ray diffraction, scanning electron, and optical microscopy. Linear reciprocating wear tests were employed for the evaluation of the friction and wear behavior. It was found that the coatings reinforced with basalt mineral showed that the wear and friction coefficients reached values 10−6–10−7 (mm3 N−1 m−1) and 0.7–0.85, correspondingly (sliding distance of 100 m). In comparison with the characteristics of resin-based materials (10−5–10−4 (mm3 N−1 m−1) and 0.3–0.5, respectively), the employment of thin inorganic frictional composites may bring considerable improvement in the thermal stability, durability, and compactness, as well as a reduction in the weight of the final product. These coatings are considered an alternative to the reinforced resin composite materials on steel used in frictional components, for example, clutch disks and braking pads. It is expected that the smaller thickness of the active frictional material (180 μm) reduces the volume of the wear products, extending the service intervals associated with filter and lubricant maintenance. Introduction Using lightweight materials to reduce the weight of vehicles provides a potential solution to the topical problem of emission reduction in transport. Best industrial practices suggest using lighter aluminum instead of ferrous alloys. Aluminum alloys can have sufficient corrosion resistance and allow for up to 25-50% weight reduction in some parts [1]. However, because of their relatively low hardness and high friction coefficient, the parts made from Al alloys often have poor wear resistance, which limits their application range. While there are several coating techniques to harden the surface The mineral composition of basalt, which is a natural composite material, is defined by the microlites it consists of. These microlites are mostly composed of silicates, aluminosilicates, various silicon dioxides, and apatite. Its chemical composition may be represented as a set of metallic and non-metallic oxides with a variable content [27]. The water content in basalt can be as high as 10%, depending on its porosity and formation conditions [28]. It is known that solid materials may be incorporated into the PEO coating inertly (without considerable internal transformation), from slurry electrolytes in electrophoretic regime [29][30][31][32]. 
The goal of this study was to obtain ceramic coatings with high friction coefficients suitable for dry sliding, in order to avoid the disadvantages of conventional organic-based frictional materials. This paper discusses the original single-step PEO treatments of two examples among the most popular wrought series (20xx and 60xx) of aluminum alloys, carried out in an alkaline-silicate electrolyte with additions of mineral powder, as well as the general characteristics of the coatings with high and stable friction coefficients, which may offer new design solutions for clutch disk applications.

Materials and Methods

Samples were made of two Al alloys, 2024 (AlCu4.5Mg) and 6061 (AlMg), in the shape of a ring, with outer and inner diameters of 90 mm and 50 mm, respectively, and a thickness of 3 mm, providing a total surface area of 1 dm². PEO treatments were carried out in a 10-litre cylindrical stainless-steel tank equipped with a cooling jacket. The body of the tank served as the counter electrode. The process was carried out at constant temperature (T = 65–70 °C), with the electrolyte stirred constantly. A 50 Hz AC waveform with initial average current densities of J+ = J− = 10 A/dm² was provided by a capacitor-type power supply. The duration of the PEO treatment was chosen at 90 min to provide the final coating thickness of 180 µm. The coating thickness was measured by the "Quanix 1500" eddy-current thickness gauge (Quanix, Berlin, Germany) for dielectric coatings. Vickers microhardness was measured using a PMT-3 tester (LOMO, St. Petersburg, Russia) on polished surfaces under a 1.96 N load. The microhardness values were averaged from ten indentations. The coating phase composition was studied using a DRON-3M X-ray diffractometer (XRD) (Burevestnik, Moscow, Russia) utilising Cu Kα radiation, with scans performed in the normal coupled mode in the 2θ range from 15° to 60°, at a rate of 0.02° per second. Phase identification was performed using a PDF-2 database. The surface microstructure and elemental composition were studied by a Hitachi TM3000 Scanning Electron Microscope (Hitachi, Tokyo, Japan) equipped with an X-ray Energy Dispersive Spectroscopy (EDS) analyzer. Optical microscopy observations were carried out using a Neophote-2 (Carl Zeiss) metallographic microscope (Carl Zeiss, Jena, Germany) equipped with a 5 Mpx CMOS camera (Olympus, Tokyo, Japan). A linear reciprocating sliding ball-on-plate wear tester was used for the tribological tests, carried out under ambient atmospheric conditions against SAE 52100 bearing steel and WC-4%Co balls with a diameter of 6 mm at 5 N and 10 N normal loads, with a 10 mm stroke length (5 Hz), until the overall sliding distance of 100 m was reached. Other test conditions corresponded to ASTM G133-95. To evaluate the coating durability, the tests were performed on flat surfaces of the rings for four different residual thicknesses (180, 135, 90, and 45 µm). The three lower values were achieved by grinding the initial coating with abrasive paper of up to 4000 grit. After that, the samples were cleaned with acetone, washed with distilled water, and dried at 60 °C. The volume of the worn material was evaluated with a Dektak S3 profilometer (Bruker, Billerica, MA, USA) at a 7000 µm scan length with a ±1350 kÅ vertical limit. The mean average cross-sectional area of the wear scar was evaluated based on five measurements in random positions.
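To make explicit how a dimensional wear coefficient of the kind reported below (mm³ N⁻¹ m⁻¹) can be obtained from these measurements, the sketch below uses a common Archard-type normalization for reciprocating tests: the worn volume is approximated as the mean cross-sectional scar area times the stroke length, then divided by the normal load and the total sliding distance. The paper does not spell out its exact volume reconstruction, so this is an illustration under that assumption, and the numeric inputs are hypothetical.

```python
def wear_coefficient(mean_scar_area_mm2, stroke_length_mm,
                     normal_load_n, sliding_distance_m):
    """Dimensional wear coefficient k in mm^3 N^-1 m^-1.

    Worn volume is approximated as the mean cross-sectional scar area
    (from profilometer traces) times the stroke length of the test.
    """
    worn_volume_mm3 = mean_scar_area_mm2 * stroke_length_mm
    return worn_volume_mm3 / (normal_load_n * sliding_distance_m)

# Hypothetical inputs (not taken from the paper): 5e-3 mm^2 scar section,
# 10 mm stroke, 10 N load, 100 m total sliding distance.
k = wear_coefficient(5e-3, 10.0, 10.0, 100.0)
print(f"k = {k:.1e} mm^3 N^-1 m^-1")   # ~5e-5, same order as the outer layer
```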
Basalt Characterisation

As the basalt composition depends on its origin, samples of the powder material were analyzed by EDS and XRD before the PEO treatment. The results of the elemental analysis of the sample used in the experiments are presented in Table 1. The chemical composition of basalt can be expressed as a set of the following oxides: SiO2, Al2O3, MgO, Na2O, K2O, P2O5, CaO, TiO2, and FeO(Fe2O3). The elemental composition shows an excess of oxygen, caused most probably by hydration. An XRD pattern of the basalt powder used is depicted in Figure 1. The XRD analysis shows that the sample contains high- and low-temperature phases of quartz. In addition, two phases of cristobalite (SiO2, low and high temperature), stishovite (SiO2), and magnesium silicate (MgSiO3) have been identified. The expected oxides of aluminum, iron, calcium, titanium, or silicates have not been identified in the pattern, which can be associated with (i) a substantial amorphous plateau between 5° and 45° 2θ, indicating that the major part of the basalt material is likely to be in an X-ray amorphous form, and (ii) their relatively small quantities. In addition, under Cu Kα radiation, the mass absorption coefficient of iron is about 15 times greater than that of the other elements, and, in this case, no traces of Fe could be found in the patterns. It is evident that the major elemental and phase compositions of the studied sample are typical of basalt mineral.

Coatings Structure and Composition

The appearance of the aluminum parts with PEO coatings is shown in Figure 2. The total coating thickness of 180 µm was achieved for both of the substrate alloys. The surface plane optical micrographs of the obtained coatings and the corresponding cross sections are presented in Figure 3a,d and Figure 3b,e, respectively.
The surface morphology of the coatings on the 2024 and 6061 alloys is similar, featuring protuberances with a typical size of about 50–100 µm (Figure 3a,d). Moreover, the cross-sectional structures of the coatings on both alloys are similar, which is typical for PEO coatings on aluminum produced under alternating current (AC; Figure 3b,e). In both cases, the outer and inner layers can be clearly distinguished. Moreover, on the samples of the 2024 alloy (Figure 3b), the inner layer appears to be divided into two sub-layers, as follows: a dark sub-layer in the middle part of the coating and a white sub-layer at the metal-oxide interface (marked with an arrow in Figure 3b), which is often referred to as an "interfacial barrier" layer. The inner layer always appears dark when copper is incorporated as one of the substrate alloying components. In the case of the 6061 alloy, the interfacial barrier layer has the same white color as the internal hard layer.
The backscattered electron images (BSE) of the coatings (Figure 3c,f) show pancake-like craters (30–50 µm) and a large number of small breakdown channels (<5 µm); detailed information about the typical morphology of PEO coatings can be found elsewhere [33][34][35]. According to the data collected from the optical and SEM micrographs, there is no significant influence of the alloy composition on the morphology of the top coating. Therefore, we may assume that the proposed approach is suitable for different Al alloys whose compositions are close to those of the 2XXX and 6XXX series. The results of the elemental analysis of the coatings are presented in Table 2. It can be seen that the surface chemical compositions of both coatings are similar. It is likely that Na and Si originate mainly from the electrolyte solution, whereas K, Ca, Fe, and P originate from the basalt powder. According to the elemental analysis, the oxygen content exceeds the value calculated based on the stoichiometry of the corresponding oxides (14–15%), which can be attributed to the presence in the coating of either hydroxides or adsorbed water. The phase composition of the studied coatings is presented in Figure 4. The XRD patterns taken at both the original coating, with a thickness of 180 µm, and the polished plane (90 µm) show broad scattering between 15° and 35°, with a maximum at 25° 2θ, indicating that the major part of the coating material is X-ray amorphous. The diffraction patterns of the outer coating layers on the studied alloys are slightly different. The magnitude of the amorphous halo is significantly higher for the coating on the Al 2024 alloy. Moreover, a noticeable peak of the high-temperature quartz phase can be seen in Figure 4a. Furthermore, a small peak corresponding to the γ-alumina phase can also be found. In contrast, the outer coating layer on the Al alloy 6061 is X-ray amorphous (Figure 4b).
However, the inner regions of both coatings with a residual thickness of 90 µm have a similar phase composition, comprising γ- and α-alumina. As the electrolyte solution includes noticeable concentrations of solid basalt particles and soluble silicate, the mechanism of the coating formation may comprise electrochemical substrate oxidation and electrophoretic precipitation from the electrolyte. The outer layer in PEO coatings obtained from alkaline-silicate solutions in a 50 Hz AC regime typically comprises ~1/3 of the total coating thickness [19,36]. In our case, the outer layer reached half of the total coating thickness (see Figure 3b,e), confirming an electrophoretic mass transfer of basalt particles into the coating. Two residual thickness values of 135 and 90 µm were chosen for the microhardness evaluation within the outer layer (incorporating basalt), and the third measurement was performed in the middle of the inner layer, containing α- and γ-alumina phases, at a residual thickness of 45 µm (see Figure 3b). The microhardness values (as averages of 10 indentations) at residual thicknesses of 135, 90, and 45 µm were 1110 ± 320 HV, 1450 ± 210 HV, and 1850 ± 130 HV, respectively. It can be seen that the microhardness of the outer layer is significantly higher than that typically observed in the corresponding regions of coatings obtained from alkaline-silicate solutions without particle additions (600–800 HV), whereas the microhardness of the inner layer is comparable to the typical values of 1900–2200 HV [8,36].

Tribological Properties

The evolution of the friction coefficient under linear reciprocating wear test (LRWT) conditions for the original coatings on the Al 2024 and 6061 alloys during the 100 m of unlubricated sliding against both SAE 52100 bearing steel and WC-4%Co balls is depicted in Figure 5. The friction coefficients measured on the coated samples obtained under identical conditions demonstrate similar behavior, in that their values are rather high at 0.55–0.60 and remain constant or increase slightly throughout the test. This may be accounted for by an increase in the contact area between the coated surface and the ball counterface as a result of progressive wear during reciprocal sliding. The observed similarity in the frictional behaviors could be attributed to the interaction of the counterfaces with the outer layer of the coating, whose composition and properties did not depend on the composition of the substrate (see Section 3.2).
The values of the friction coefficient at the residual thicknesses of 90 and 135 µm are similar, and vary from 0.55 to 0.65 (see Figure 6a,b). As the surfaces of the coatings with residual thicknesses of 180 and 135 µm are situated within the outer layer, and the residual thickness of 90 µm is situated within the intermediate region between the outer and inner layers, similar results for the friction coefficient values are not surprising. Figure 7 illustrates the variation of the friction coefficient depending on the employed counterface material. The increase in loading from 5 to 10 N did not lead to noticeable changes in the friction coefficient; however, different counterface materials showed a variation in final values from 0.60 to 0.85 (for 135 µm residual thickness). Moreover, for the WC-4%Co counterface, the final coefficient of friction was noticeably greater at the end of the test. A similar behavior was observed for the coatings with a residual thickness of 90 µm. In contrast, once the residual thickness reaches 45 µm, sliding occurs on the surface of the inner layer enriched with α-Al2O3 (~2200 HV), demonstrating a different behavior of the friction coefficient (Figure 8a,b). With the load increasing to 10 N, a decrease of the friction coefficient with respect to that at the 5 N load was observed for both counterfaces at the final distance of 100 m.
However, from Figure 8, it can be clearly seen that the counterface material does matter in this case. Sliding tests on the inner layer have shown the highest friction coefficients of all the performed tests (up to 0.85 for WC-Co and 0.7 for bearing steel). As each residual thickness was achieved by grinding, the coating surface at 45 µm was relatively smooth and was well adhered (200–300 MPa [8]) to the metal substrate, with no additional particles in the sliding zone, which excludes an abrasive wear mechanism in this case. At the normal load of 5 N, both counterfaces showed the same behavior as that for the higher residual thicknesses, although it should be noticed that the sliding against WC-Co corresponds to an extended initial breaking-in period (Figure 8a). The latter is likely due to the approximately equal microhardness of the inner coating layer and the WC-Co counterface. In contrast, the sliding of the relatively soft bearing steel against the inner coating layer could be accompanied by fast plastic deformation of the steel ball surface, flattening of the contacting area, and stabilization of the sliding conditions. Figure 8 illustrates that once the normal load increased to 10 N, the sliding conditions changed considerably.
It should be noted here that, in spite of the high hardness and wear resistance of the inner layer of the PEO coating, the interaction of the counterface and coating surface is influenced by the mechanical properties of the substrate material. It has been known [36] that the inner layer of PEO coatings includes hard grains of α-Al2O3 incorporated in a softer alumina-silicate matrix. Such structures possess a relatively low elastic modulus (10–40 MPa) compared with polycrystalline alumina (370 MPa) [37]. Thus, under the increased load (10 N), it is likely that elastic deformation of the substrate beneath the coating takes place, considerably affecting the sliding conditions and the values of the friction coefficient. These findings may be used as an estimation of the maximum working load of the designed component with a given residual coating thickness. The wear coefficients (k) derived from the results of the tribological tests for the coatings on the Al 2024 alloy with different residual thicknesses are depicted in Figure 9. The coatings showed wear coefficients of the order of 10⁻⁵ to 10⁻⁷ mm³ N⁻¹ m⁻¹, with a general trend to decrease across the coating thickness from the original surface (180 µm) to the inner layer (45 µm), which is in accordance with the pattern of the hardness evolution. The wear coefficients of the coatings tested against the bearing steel were slightly smaller than those obtained in the tests against tungsten carbide. In addition, the wear rates increased from the lower (5 N) to the higher load (10 N), probably because of changes in the interaction mechanism. Higher values of the wear coefficients at the initial part of the tests (Figure 9) indicate that the coatings after PEO may require some post-treatment in order to remove the roughest top part of the coating (Figure 3). However, this additional step is compensated by improved heat resistance, durability, and compactness, and a reduction in weight. Moreover, the smaller thickness of the active frictional material reduces the amount of wear products, thereby potentially increasing the service intervals accompanied by filter and lubricant maintenance.

Analysis of the Modified Layer Wear Mechanism

As information about the friction and wear behavior of the inner dense layer of PEO coatings on Al alloys has already been widely reported (see references in the Introduction), we will focus our attention on the wear mechanism of the uppermost coating part enriched by basalt powder. For these purposes, additional 18-m wear tests against WC-4%Co and bearing steel counterfaces have been performed with a 10 N load, under the same conditions as described in the experimental section. The optical and electron microscopy images of the wear scars are depicted in Figure 10. The wear scar resulting from the interaction with WC-Co is noticeably narrower than that for the bearing steel (Figure 10). The interaction of the basalt-enriched layer of the PEO coating and the bearing steel counterface is accompanied by considerable transfer of ball material into the contact area. This is evident from the specific brown color of the wear scar (Figure 10b); from the domination of white areas in the SEM BSE image (Figure 10d), which is attributed to the presence of heavier elements; and from the direct EDS analysis of both rectangular areas (Figure 10d, Table 3) and line scans across the wear scars (Figure 11).
In contrast, the wear scar developed in the test against the WC-Co ball looks clean (Figure 10a,c), with negligible material transfer in the friction contact (Figure 10c, Table 3); neither tungsten nor cobalt contamination has been detected by the EDS analysis. Both wear scars demonstrate crack networks at the bottom of the sliding area; this is particularly distinguishable in the scar left by the WC-Co counterface (Figure 10c). To study the propagation of those cracks in depth, the coating cross sections have been investigated after the wear tests (Figure 12a,b). It can be seen that the cracks are mainly concentrated in the 5–15 µm uppermost part of the coating, whereas the main coating thickness demonstrates only a few relatively deep cracks, typical for the PEO coatings tested against both counterfaces. At a higher magnification (Figure 12c), it is evident that the cracks tend to propagate in a horizontal direction (yellow arrows), causing further delamination of coating fragments during the test against WC-Co. On the other hand, after the test against bearing steel (Figure 12d), the degree of fragmentation in the top layer is much smaller.
As shown in Section 3.2, the top layer of the PEO coating possesses an average microhardness of 780 to 1420 HV, attributed to the basalt and quartz particles incorporated into the relatively soft amorphous silicate matrix. Therefore, the wear mechanism for the studied counterfaces (tungsten carbide and bearing steel) is expected to be different, because of the noticeable difference of their microhardness (~2000 HV and ~850 HV, respectively) in respect to that of the coating. Indeed, the cross sectional analysis of wear scars at a higher magnification revealed that the surface profile contour produced by the harder WC-Co ball is smooth, reproducing the shape of the ball curvature (Figure 12c), whereas after the softer bearing steel, the surface is wavy with noticeable asperities denoted by arrows in Figure 12d. As ceramic-like materials possess a very low capacity for plastic flow, the brittle fracture provides a significant contribution to wear, when the counter-face has a similar or higher hardness. This is exactly what was observed for the PEO coating interacted with the WC-Co ball, where fragmentation was present in both the surface plane and cross sectional directions. In contrast, the interaction of the PEO coating with the softer bearing steel ball demonstrated an abrasive wear mechanism, including micro-cutting of the steel by the coating asperities, with intensive material transfer from the ball into the sliding area (Figure 11b and Table 3). This is evident from the micrograph of the ball surface after the test (Figure 10b inset), where scoring marks along the oscillations can be easily observed. However, the wear rate analysis (Figure 9) revealed that the top layer of the coating demonstrates similar wear rates against the WC-Co and bearings steel counterfaces (~1.8 × 10 −5 mm 3 N −1 m −1 ), despite the much lower microhardness of the latter. This could be due to the effect of the wear-induced debris acting as the third body in the tribological interaction with the bearing steel. The size of the debris is assumed to be in the submicron level, which is evident from the shining smooth surface of the scar on the bearing steel counterface, enveloping the scoring marks from the interaction with the coating asperities. As shown in Section 3.2, the top layer of the PEO coating possesses an average microhardness of 780 to 1420 HV, attributed to the basalt and quartz particles incorporated into the relatively soft amorphous silicate matrix. Therefore, the wear mechanism for the studied counterfaces (tungsten carbide and bearing steel) is expected to be different, because of the noticeable difference of their microhardness (~2000 HV and~850 HV, respectively) in respect to that of the coating. Indeed, the cross sectional analysis of wear scars at a higher magnification revealed that the surface profile contour produced by the harder WC-Co ball is smooth, reproducing the shape of the ball curvature (Figure 12c), whereas after the softer bearing steel, the surface is wavy with noticeable asperities denoted by arrows in Figure 12d. As ceramic-like materials possess a very low capacity for plastic flow, the brittle fracture provides a significant contribution to wear, when the counter-face has a similar or higher hardness. This is exactly what was observed for the PEO coating interacted with the WC-Co ball, where fragmentation was present in both the surface plane and cross sectional directions. 
As the wear tests have been carried out under ambient conditions, with water vapor and oxygen as the main corrosive agents, we can reasonably assume that the debris acting as the third body also included a noticeable amount of iron (hydr-)oxides, rather than metal particles, in addition to the removed coating material. Because of the large amount of background oxygen in the PEO coating, we were unable to distinguish the iron (hydr-)oxides from metallic iron using EDS. However, from a comparison of the optical microscopy images (Figure 10a,b), the characteristic brown color of the wear scar on the coating tested against steel appears to support this assumption. Finally, we can provide a simplified estimation of the mean average contact pressure (P) developed at the end of the test, as P = F_load/A_contact, where F_load = 10 N is the normal load and A_contact is 0.418 or 0.698 mm² for WC-Co and bearing steel, respectively. The average contact pressures were around 14 MPa for the bearing steel and 24 MPa for tungsten carbide. From Figure 6, it can be seen that after ~10 m of sliding distance the tribological contact came to a steady state, so the obtained values of the contact pressures can be considered steady state too. Taking into account the typical loads in clutch applications (0.05–5 MPa), the calculated values are three to four times greater, and the absolute values of the wear coefficient are expected to be even lower than those demonstrated in our experiments. Therefore, the load-bearing capacity and wear resistance of the PEO coating reinforced with basalt powder can be considered sufficient to provide an alternative to frictional materials based on organic resins.

Conclusions

Thick (180 µm) PEO coatings with an outer layer reinforced by basalt mineral powder were fabricated on two commercial Al alloys, 2024 and 6061. For both alloys, the coatings showed a similar surface morphology featuring protuberances with a typical size of about 50–100 µm and large numbers of small (<5 µm) breakdown channels. Although the coatings had a layered microstructure typical of PEO coatings produced in silicate-alkaline electrolytes under alternating polarization conditions, the incorporation of basalt particles into the outer porous layer resulted in a rather uniform distribution of the mechanical properties across the coating thickness.
Conclusions Thick (180 µm) PEO coatings with an outer layer reinforced by basalt mineral powder were fabricated on two commercial Al alloys, 2024 and 6061. For both alloys, the coatings showed a similar surface morphology featuring protuberances with a typical size of about 50-100 µm and large numbers of small (<5 µm) breakdown channels. Although the coatings had a layered microstructure typical of PEO coatings produced in silicate-alkaline electrolytes under alternating polarization conditions, the incorporation of basalt particles into the outer porous layer resulted in a rather uniform distribution of the mechanical properties across the coating thickness. The Vickers microhardness values for the outer layer of the PEO coating were within 1110 ± 320 HV, which is noticeably higher than for a similar layer without basalt reinforcement (700 ± 100 HV). With most of the coating material being X-ray amorphous, the presence of crystalline phases of α- and γ-alumina in the inner region, located about 90 µm from the interface, resulted in a hardness of 1850 ± 130 HV. The PEO coatings on both Al alloys showed a similar tribological behavior, with friction coefficients recorded in the ranges of 0.60 to 0.85 and 0.50 to 0.70 against the WC-Co and bearing steel counterfaces, respectively; these values are higher than those typical of reinforced resins (0.3-0.5). The coatings showed stable friction behavior, with wear coefficients of the order of 10⁻⁵ to 10⁻⁶ and 10⁻⁶ to 10⁻⁷ mm³ N⁻¹ m⁻¹ for the outer and inner regions of the coating, respectively, which is two to three orders of magnitude lower than for the resin-based materials. Moreover, the wear coefficient of the outer layer reinforced by basalt particles was almost independent of the counterface material, although different wear mechanisms were observed, including brittle fracture in sliding against WC-4%Co and abrasive three-body interaction against bearing steel. Because of its inorganic nature, the coating material is expected to provide improved heat resistance, durability, and compactness of friction components. The PEO coatings are much thinner than conventional resin-based pads (180 µm versus 1-10 mm), which is expected to improve the thermal conditions of sliding because of their lower thermal resistance, and to reduce the weight and dimensions of the moving parts (e.g., by decreasing the clutch/brake piston traveling distance). Therefore, the developed PEO coatings may be considered an alternative to frictional materials based on reinforced organic binders.
Atypical, Composite, or Blended Phenotypes: How Different Molecular Mechanisms Could Associate in Double-Diagnosed Patients In the last few years, trio-Whole Exome Sequencing (WES) analysis has revolutionized the diagnostic process for patients with rare genetic syndromes, demonstrating its potential even in non-specific clinical pictures and in atypical presentations of known diseases. Multiple disorders in a single patient have been estimated to occur in approximately 2-7.5% of diagnosed cases, with higher frequency in consanguineous families. Here, we report the clinical and molecular characterisation of eight illustrative patients for whom trio-WES allowed for identifying more than one genetic condition. Double homozygosity represented the causal mechanism in only half of them, whereas the other half showed peculiar multilocus combinations. The paper takes into consideration difficulties and lessons learned from our experience and therefore supports the powerful role of wide analyses for ascertaining multiple genetic diseases in complex patients, especially when a clinical suspicion could account for the majority of clinical signs. It finally makes clear how a patient's "deep phenotyping" might not be sufficient to suggest the presence of multiple genetic diagnoses but remains essential to validate an unexpected multilocus result from genetic tests. Introduction Traditionally, in clinical genetic settings, identifying the correct diagnosis in a patient requires collecting all the history and physical hallmarks and then recognizing a pattern of a single known genetic condition that could explain all of them, in a "single-disorder" paradigm. The presence of additional clinical features that do not fit into the known pattern of the condition could either suggest a phenotype expansion or an apparently new condition [1,2]. The finding of a pathogenic variant consistent with the majority of the patient's clinical features usually stops any further genetic testing. However, the expanding use of wide next-generation sequencing (NGS) analyses has brought to light a non-negligible number of patients whose phenotype is caused by the association of multiple genetic conditions. The proportion of multiple disorders in a single patient has been estimated at approximately 2-7.5% of diagnosed cases, largely depending on the studied cohort [1][2][3][4]. In this scenario, single-gene tests or gene panels focused on a limited number of genes might be potentially inadequate, hiding possible additional genetic variants behind an apparent phenotypic expansion. Smith and colleagues indeed performed a retrospective review of multiple findings in diagnostic exome sequencing and observed that they were three times more frequent in patients from consanguineous families compared to patients from non-consanguineous families, due to co-inheritance of recessive disorders [2]. Although it is the most frequently described mechanism, double homozygosity in autosomal recessive disease genes is not the only inheritance mechanism underlying multiple diagnoses, just as parental consanguinity is not the only aspect to consider when suspecting multiple genetic defects in a patient. For example, in a large cohort of patients, Posey et al. found that the most commonly observed pattern of double diagnoses was two pathogenic variants in autosomal dominant disease genes [1].
In this paper, we report the clinical and molecular characterization of eight representative patients for whom trio-based Whole Exome Sequencing (WES) allowed the identification of more than one genetic condition with different inheritance patterns. The article thus highlights the role of WES in providing complete and fast diagnosis in patients with complex presentations of rare genetic syndromes, with important implications in the assessment of recurrence risk. Materials and Methods Over the last 7-year period (2015-2021), 2573 patients were referred to our laboratory for trio-WES analysis. This study complied with the Declaration of Helsinki and was approved by the Ethics Committee of ASST Papa Giovanni XXIII of Bergamo as part of the RARE project (Rapid Analysis for Rapid Care) and the GENE project (Genomic analysis Evaluation NEtwork). After genetic counselling and written informed consent, genomic DNA was extracted from peripheral blood samples of probands and parents using standard procedures. The exonic regions and flanking splice junctions of the genome were captured using the Clinical Research Exome v2 kit (Agilent Technologies, Santa Clara, CA, USA). Sequencing was done on a NextSeq500 Illumina system with 150 bp paired-end reads. Reads were aligned to human genome build GRCh37/UCSC hg19 and analyzed for sequence variants using a custom-developed analysis tool [9]. The variant call file (vcf), including single nucleotide variants and indels, was annotated by querying population frequency databases and mutation databases, including the Genome Aggregation Database (http://gnomad.broadinstitute.org/, accessed on 19 June 2022), ClinVar (https://www.ncbi.nlm.nih.gov/clinvar/, accessed on 19 June 2022), and Human Gene Mutation Database Professional (HGMD, Release 2017.4). To prioritize variants, a sequential filtering strategy was applied, retaining only variants with the following characteristics: (a) potential effect on protein and transcript (splicing, missense, nonsense, and frameshift); (b) consistency with the patient's phenotype according to the Human Phenotype Ontology classification (www.human-phenotype-ontology.org/, accessed on 19 June 2022); (c) consistency with the suspected inheritance model (autosomal recessive or de novo), with a frequency in the general population compatible with the prevalence and incidence of the disease and showing a pathogenic mechanism corresponding to the one expected for the disease [9]. Variants were classified based on ACMG guidelines [10] (Supplementary Table S1). The potential causative variants were subsequently confirmed by Sanger sequencing in the proband and parents using an independent DNA sample. Two pipelines were used to identify copy number variants (CNVs): one based on ExomeDepth and one created in-house, as previously described [11]. All the CNVs detected by both pipelines were annotated by matching every call with the genes involved and related diseases, and classified according to ACMG and ClinGen guidelines [12]. Results Among the diagnosed cases of our cohort, trio-WES allowed us to detect two independent genetic conditions in about 2.5% of them. Herein, we retrospectively describe the clinical and molecular characterization of eight representative double-diagnosed patients.
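The sequential filtering strategy described above lends itself to a simple programmatic illustration. The sketch below is a hypothetical, simplified rendering of such a prioritisation step; the field names, frequency threshold, and the annotated-variant structure are invented for illustration and do not correspond to the authors' actual pipeline.

```python
# Hypothetical sketch of a sequential variant-prioritisation filter of the kind
# described in the Methods: (a) effect on protein/transcript, (b) phenotype
# consistency (HPO terms), (c) inheritance model and population frequency.
# All field names and thresholds are illustrative, not the authors' pipeline.

DAMAGING_EFFECTS = {"missense", "nonsense", "frameshift", "splicing"}

def prioritise(variants, patient_hpo_terms, max_af=0.001):
    retained = []
    for v in variants:
        if v["effect"] not in DAMAGING_EFFECTS:                     # criterion (a)
            continue
        if not set(v["gene_hpo_terms"]) & set(patient_hpo_terms):   # criterion (b)
            continue
        if v["gnomad_af"] > max_af:                                 # criterion (c): frequency
            continue
        if v["inheritance"] not in ("autosomal_recessive", "de_novo"):  # criterion (c): model
            continue
        retained.append(v)
    return retained

# Example call with a single toy variant record:
toy = [{"effect": "missense", "gene_hpo_terms": ["HP:0001250"],
        "gnomad_af": 0.0002, "inheritance": "de_novo"}]
print(prioritise(toy, patient_hpo_terms=["HP:0001250", "HP:0001263"]))
```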
Case 1 A 16-year-old boy, born from a non-consanguineous Italian couple, was referred to genetic evaluation for a polymalformative clinical picture characterized by mild hypoplastic cerebellum at brain magnetic resonance imaging (MRI), cleft palate, heptadactyly of the left hand, bilateral clubfoot with bilateral postaxial hexadactyly, agenesis of the right kidney, hypo-dysplasia of the left one (requiring renal transplantation at 4 years old), anal stenosis with ano-cutaneous fistula, and shawl scrotum. Karyotype analysis, performed right after birth, turned out normal (46, XY). At the first genetic evaluation, at approximately 16 years old, his weight was at 75-90th centile, height was at 25-50th centile and head circumference was at 90th centile. Minor facial anomalies were observed, in particular small ears, bushy eyebrows, and a mildly short philtrum. He also presented brachydactyly with bilateral clinodactyly of the fifth finger, a proximal implant of both halluces and slightly shortened-appearing limbs. The patient had normal cognitive functions and was successfully attending secondary school. An array-comparative genomic hybridization (aCGH), together with GLI3 analysis (NGS and Multiplex Ligation-dependent Probe Amplification) were carried out, with normal results. At 18 years old, he underwent left hip joint replacement because of femoral head osteonecrosis, which was also present to a lesser degree on the contralateral leg. Taking into consideration the non-specific clinical presentation, trio-based WES analysis was performed and detected two different homozygous variants. The first one was a missense pathogenic variant in SLC26A2 (NM_000112.4:c.835C>T; p.Arg279Trp), previously reported in Recessive Multiple Epiphyseal Dysplasia type 4 (OMIM #226900) [13] and accounting for his cleft palate, short limb appearance, and hip dysplasia that probably led to femoral head necrosis. The other was a novel and likely pathogenic variant found in IFT27 (NM_006860.4:c.350G>A; p.Gly117Asp), associated with Bardet-Biedl Syndrome 19 (OMIM #615996) [14,15]. Parents were heterozygous for both variants; the analysis also revealed multiple regions of genome-wide homozygosity in different chromosomes, suggesting a probable "territorial" consanguinity. Case 2 and Case 3 The second patient was a girl born from two Tunisian second cousins. She was diagnosed at 5 years old with complete situs viscerum inversus (dextrocardia and abdominal organs) and progressive intrahepatic cholestasis, which led to liver cirrhosis. She underwent her first orthotopic liver transplant at 10 years old and required three subsequent re-grafts because of thrombotic complications. At the last evaluation (15 years old), her height and weight were under the 3rd centile and she presented a pubertal delay. Her third-grade cousin (case 3) was a 2-year old girl affected by dextrocardia and mild psychomotor delay. Her little brother presented dextrocardia too. At approximately 1 year old, she was admitted to the hospital for persistent pruritus; an abdomen ultrasound revealed hepatosplenomegaly, whereas her blood tests showed severe cryptogenic hypertransaminasemia and elevated levels of bile acids, without evidence of infection by hepatotoxic viruses. An abdominal MRI confirmed complete situs viscerum inversus with polysplenia and hypertrophic hepatic lobes without nodular findings. At the evaluation, minor and non-peculiar facial anomalies were observed (epicanthal folds, pointed helixes, and saddle nose). 
Case 4 Genetic consultation was requested for a newborn born to Bangladeshi first-cousin parents who presented with diffuse skin macules with hyperchromic and hypochromic patches, variable in size and with jagged edges, some of which followed Blaschko lines; she also had multiple soft café-au-lait macules on the thoracic and dorsal regions. No freckling, dysmorphisms, or body asymmetries were detected. To investigate the presence of Lisch nodules, she underwent appropriate ophthalmologic and fundus oculi examinations that showed no abnormalities. After 5 months, she was admitted to the hospital for feeding difficulties, growth arrest, and worsening respiratory distress. Chest computed tomography (CT) showed severe bilateral pneumonia, and she progressively developed respiratory failure that required invasive ventilation. A karyotype on skin fibroblasts was performed, with normal results (46, XX). Her blood tests revealed severe lymphopenia, agammaglobulinemia, and abnormal lymphocyte subpopulations (absence of T and B lymphocytes with a normal natural killer cell count); these findings led to the suspicion of severe combined immunodeficiency. Because of the worsening clinical course, urgent trio-WES was performed on peripheral blood and detected a homozygous pathogenic variant in DCLRE1C (NM_001033855.3:c.95C>G; p.Ser32Cys) [16,17], consistent with the diagnosis of severe combined immunodeficiency (OMIM #602450). Furthermore, her skin phenotype was explained by the identification of the novel homozygous likely pathogenic variant p.Met2158fs in the ATM gene (NM_000051.4:c.6472_6473del), responsible for Ataxia-telangiectasia syndrome (OMIM #208900). The parents were heterozygous carriers for each variant. Thanks to immunoglobulin replacement and appropriate antimicrobial drug administration, she gradually improved and could be extubated. Case 5 Trio-WES was performed on a 13-year-old girl for a complex neurological clinical picture. Her epilepsy began at 4 years old with a generalised tonic-clonic seizure during sleep. Despite multiple anticonvulsive treatments, she continued having episodes of sudden loss of leg tone with consequent falls to the ground and multi-daily crises of oral automatisms and hyperventilation, with suspected loss of contact. She also presented with a moderate intellectual disability, a dyskinetic movement disorder characterised by bradykinesia and stiffness, hypotonia, sleep disturbance, stereotypic movements, and feeding difficulties. Repeated brain MRIs showed minor non-specific anomalies and a lack of myelination in the bilateral parieto-occipital lobes, whereas MR spectroscopy detected a markedly decreased N-acetylaspartate (NAA) signal of unexplained origin. At the genetic evaluation, she presented a long and narrow face, downslanting palpebral fissures, simplified ears, and a constantly open mouth with thick gums and an ogival palate. She also had slender and long limbs, hypotrophic thenar and hypothenar eminences, articular limitation of the knees, and valgus feet. Her height was at the 25-50th centile, weight at the <3rd centile, and OFC at the 50-75th centile; her arm span was proportionate. Karyotype and fluorescence in situ hybridization (FISH) analyses for chromosome 22q11.2, aCGH, and an NGS panel of 28 genes related to Rett syndrome and its differential diagnoses had previously been carried out, all with normal results. Trio-WES identified a double genetic cause of her complex neurocognitive disorder.
It was partly explained by a de novo likely pathogenic variant in the GRIN2B gene (NM_000834.5:c.1246T>C; p.Phe416Leu), which causes a dominant neurodevelopmental disorder (OMIM #616139) in which abnormal movement disorders such as dystonia or dyskinesia have been described [18]. Moreover, WES identified two novel likely pathogenic variants in the SLC25A12 gene, responsible for an autosomal recessive epileptic encephalopathy: a paternally inherited missense variant p.Arg586Gln (NM_003705.5:c.1757G>A) combined with a frameshift variant p.Phe39fs (NM_003705.5:c.116_117del), occurring de novo on the maternal allele. Case 6 A 3-year-and-8-month-old boy was referred for genetic evaluation because of sensorineural deafness and ichthyosis. He was born to consanguineous parents (first-grade cousins) from Pakistan; both parents and two brothers were normal-hearing. He presented with normal motor development, but no language due to severe-to-profound sensorineural hearing loss across all frequencies. A brain MRI and a CT showed a regular cochleovestibular apparatus; therefore, he received a right cochlear implant. Eye examination and fundus oculi results were normal. He had normal staturo-ponderal growth, with no facial dysmorphisms or anomalies of the ear auricles. Dermatologists had already examined the child and made a diagnosis of ichthyosis, with dark brownish scales more evident on the abdomen, arms, and legs; his palms and soles were lesion-free. A single genetic cause for his clinical picture was assumed, but trio-WES unexpectedly revealed two conditions: it highlighted two variants of unknown significance, the paternal p.Arg202Gln (NM_144672.4:c.605G>A) and the maternal p.Thr322Ile (NM_144672.4:c.965C>T) in the OTOA gene, whose biallelic mutations underlie moderate to severe prelingual sensorineural deafness (OMIM #607039). The simultaneous use of a CNV-detection tool led to the identification of a maternally inherited microdeletion on the short arm of the X chromosome, of approximately 928 kb (ChrX:6966861-7895483, GRCh37/hg19), that included the STS gene. Deletions of STS are implicated in 80-90% of cases affected by X-linked ichthyosis (OMIM #308100) [19].
An electroencephalogram detected focal anomalies over the bilateral posterior regions and diffuse wave sequences of an uncertain nature. A brain MRI showed cerebral and cerebellar atrophy, hypomyelination, particularly of the periventricular zones, and abnormal signals in the basal ganglia. On suspicion of a neurodegenerative disorder in addition to her PWS diagnosis, an urgent trio-WES was performed and detected the homozygous pathogenic variant p.Tyr142fs (NM_017882.3:c.424dup) in the CLN6 gene, consistent with an infantile form of neuronal ceroid lipofuscinosis (OMIM #601708). The variant was inherited only from the mother; its homozygosity was explained by the localisation of the CLN6 gene on chromosome 15q23, inside the region of maternal uniparental disomy. Case 8 A 16-year-old male with neonatally diagnosed Williams-Beuren syndrome (WBS) (OMIM #194050) was referred for trio-WES analysis for a suspected comorbid condition. He was born to unrelated Italian parents; his family history was significant because his father, paternal grandfather, and grandfather's brother all presented with congenital anosmia. In the first months of life, WBS was diagnosed based on a large perimembranous ventricular septal defect with mild supravalvar aortic stenosis, developmental delay, feeding difficulties with poor weight gain, and distinctive facies. An array-CGH indeed revealed a de novo 7q11.23 microdeletion of about 1.4 Mb. From 3 years old, he also developed hypothyroidism, mild myopia, and important language regression. Focal epilepsy arose at approximately 12 years old, which was resistant to combined anticonvulsive therapy, and a subsequent brain MRI revealed a lesion with an altered signal at the paramedian pontine zone of uncertain origin and hypoplasia of the olfactory tracts. Nevertheless, his condition gradually evolved into a severe intellectual disability, with language regression and scoliosis. At the last auxological evaluation, his weight and height were at −2 SD on specific WBS growth curves [20]. Discussion The reported double-diagnosed cases are a good illustration, on top of previously described ones [5][6][7][8], of how different genetic conditions and molecular mechanisms could combine in a single patient, causing peculiar and complex clinical pictures. In Table 1, we summarised, for our eight cases together with two other cases previously published by Cianci et al. [5] and Pezzani et al. [7] and likewise diagnosed in our laboratory, the combined diagnoses and their contribution to each patient's phenotype. In line with what was reported by Smith and colleagues [2], the most prevalent causal mechanism in our cohort was double homozygosity (cases 1 to 4), especially for children of consanguineous parents. Nevertheless, it was not the only one; our fifth case presented a de novo mutation in a dominant gene, along with a defective recessive one, because of a paternally inherited mutation and a de novo variant on the other allele. The sixth case, in spite of parental consanguinity, showed compound heterozygous variants in a recessive gene, combined with a maternal pathogenic microdeletion revealed by WES. The seventh case was an already diagnosed maternal UPD that concurrently unmasked a pathogenic maternal variant in an included gene. The last case was affected by two de novo diseases: a recurrent microdeletion and an additional dominant TNPO2-related neurodevelopmental disorder; interestingly, only 15 other patients carrying de novo pathogenic variants in TNPO2 have been reported to date [21].
The natural tendency in medical genetics has always been to find a diagnosis that could explain all the patient's characteristics, in a "single-disorder" paradigm; in doing so, the non-classical signs attributed to the detected diagnoses were mainly classified as "phenotypic expansions" or "atypical presentations" [4]. However, in the last few years, it became evident that a significant part of so-called atypical cases actually represented "blended" (i.e., mixed phenotypes with overlapping features) or "composite" (i.e., distinct phenotypes that each explain part of the patient's characteristics) cases with multiple genetic conditions. For clinicians, suspecting comorbid conditions could be more straightforward when the patient has multiple features that do not fit one single diagnosis, when the clinical picture is more severe than expected from the first diagnosis, or when there are only some signs that independently segregate in the family, especially with a history of consanguineous parents [2,3]. For example, in our last case, a comorbid condition besides the WBS diagnosis was easily suspected because of the unusual epileptic encephalopathy together with a severe speech impairment; moreover, the patient's hypoplasia of the olfactory tracts and the congenital anosmia segregating in his paternal family suggest a possible third unrevealed condition. On the other hand, presuming multiple diagnoses is much more challenging when the conditions are overlapping, i.e., when one or more features could be attributable to both diseases [2]; and it is even more arduous when the overlapping features are the main clinical signs of both diseases (as in our cases 4 and 5). Another tricky situation, rarely discussed in previous reports, is when multiple conditions, despite having distinct phenotypes, steer the suspicion towards a third disease that includes all the patient's features; a representative case is our sixth patient, in whom the initial suspicion was a "Keratitis-ichthyosis-deafness syndrome" whereas trio-WES returned two independent genetic causes. Taking into account the above, the probability of correctly diagnosing multiple genetic conditions strictly depends on the patient's "deep phenotyping" and on what type of genetic tests are used in the diagnostic journey. Concerning this, it seems evident that wide analyses, such as whole-exome and whole-genome sequencing, are the most powerful tools to bring to light multiple conditions in one single patient. This could be even more appropriate thanks to the ongoing incorporation of technologies for copy-number variant (CNV) detection in the WES pipeline, uncovering at the same time monogenic and genomic disorders [22], as in Case 6. For example, copy number variants and single-nucleotide variants as a part of multiple diagnoses were reported in 11.9% of double-diagnosed patients by Posey et al. [1], whereas Chen et al. recently applied simultaneous CNV-seq and WES analysis on a large cohort of malformed fetuses and detected coincidental pathogenic CNVs and single-gene variants in about 1% of them. In some of these cases, using the standard workflow of sequential karyotype, aCGH and WES might lead to a premature halt in the diagnostic path [23]. Striving to find the complete diagnosis in genetic patients is essential for tailoring the correct management and follow-up.
It also has strong implications for the rest of the family, particularly for the correct assessment of the parents' reproductive risk and for providing correct prenatal options; the two conditions can in fact segregate independently, or they could be linked to each other, as in case 7. Furthermore, in a few rare cases, it could have important repercussions for the parents' own health, as with the mother of case 4, who, being a heterozygous carrier of an ATM mutation, has to undergo stricter breast cancer screening [24]. Conclusions The report illustrates how different co-occurring phenotypes and inheritance patterns might cause blended or composite clinical pictures, sometimes in a misleading and challenging way for clinicians. In this scenario, the patient's "deep phenotyping" might not be enough to suggest the presence of multiple genetic diagnoses; however, it remains essential to validate an unexpected multilocus result from genetic tests. In line with several studies, our cases further support the increasing evidence of how the adoption of genome-wide sequencing analyses (such as trio-WES) as first-tier sequencing tests can ensure accurate and time-saving diagnosis, particularly for complex patients [25,26]. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes13071275/s1, Supplementary Table S1: Supporting criteria for classification of variants identified with trio-WES in our cases [27,28]. Funding: The work was partially supported by the "PG23/FROM 2017 Call for Independent Research" as part of the RARE (Rapid Analysis for Rapid carE) project and by "Progetti di innovazione in ambito sanitario e socio sanitario Regione Lombardia, bando ex decreto n. 2713 del 28/02/2018" as part of the GENE (Genomic analysis Evaluation Network) project. Institutional Review Board Statement: The analyses were approved by the Ethics Committee of ASST Papa Giovanni XXIII of Bergamo as part of the RARE and GENE projects. Informed Consent Statement: Informed consent for molecular analysis and publication was obtained from all subjects involved in the study. Data Availability Statement: The WES data supporting the findings of this study are available on request from the last author (M.I.). The data are not publicly available due to privacy/ethical restrictions.
Rare Case of Surgical Treatment of a Giant Aortic Arch False Aneurysm The authors present a clinical case of a 44-year-old male patient with a chronic giant aortic arch pseudoaneurysm with a diameter of 136 × 72 mm. The open resection of the false aneurysm was accomplished without artificial circulation. Repair was performed with a temporary ascending-to-descending and brachiocephalic bypass, without cardiopulmonary bypass. Introduction Damage to the aorta as a result of blunt chest trauma is a rare and dangerous condition with a high mortality rate (80-90%) due to massive bleeding. However, in some patients, despite the rupture of the thoracic aortic wall, mediastinal tissues "restrain" the hematoma, preventing the development of fatal bleeding. Subsequently, this leads to the formation of a false aneurysm. According to Parmley et al,1 the most frequent site of aortic rupture in such injuries is the zone of its isthmus. This location of injury at the isthmus has been attributed to the embryogenesis of the aorta and its anatomical features.1 The localization of the rupture in the region of the aortic arch, as in our case, is an extremely rare observation. Case Presentation A 44-year-old male patient was admitted to our Vascular Surgery Department. In 2001, the patient was in a traffic accident, resulting in a blunt injury to the chest and pelvis. This, presumably, was the mechanism of development of the aortic arch aneurysm. In 2012, on a plain chest X-ray, an abnormal mass lesion was found, but computed tomographic (CT) verification was not performed for unknown reasons. In 2014, the patient was hospitalized in our department, where we confirmed the diagnosis of aortic arch pseudoaneurysm (►Fig. 1). CT imaging identified a giant pseudoaneurysm with a maximum size of 136 × 72 mm. The size of the posterior aortic arch wall defect was 28 mm. There were no signs of aortic dissection. We performed an operation: elimination of the aortic arch pseudoaneurysm and repair of the posterior wall tear and false aneurysm in the mediastinum, without the use of cardiopulmonary bypass. The patient was positioned on his back with his left hand fixed above his head. Under total anesthesia, through an L-shaped median sternotomy and a left 5th intercostal thoracotomy, we identified and exposed the ascending aorta, aortic arch, left common carotid and subclavian arteries, and the mid part of the descending aorta (►Fig. 2). The brachiocephalic trunk could not be mobilized because it was intimately fused with the anterior wall of the false aneurysm. Therefore, the right subclavian artery was controlled. A temporary bypass (TB) shunt of 20 mm between the ascending and descending aorta was created. In addition, from this bypass an anastomosis with a bifurcation prosthesis for temporary blood supply to the brachiocephalic trunk and left common carotid artery was formed. The first branch of the bifurcated bypass was anastomosed to the right subclavian artery, and the second was connected through cannulation to the left carotid artery. The bloodstream was allowed to run through all temporary shunts. The ascending aorta was clamped distal to the shunt, and the descending aorta was clamped proximal to the shunts. Single clamps were placed on the brachiocephalic trunk, left carotid, and left subclavian arteries. Then, a longitudinal aortotomy was made on the front wall of the aortic arch.
On the posterior aortic wall, a defect (with smooth edges, 35 × 20 mm) was found, leading into the cavity of the giant pseudoaneurysm, which was partially filled with old thrombotic material. The posterior aortic wall defect was closed with a Dacron patch. The anterior aortic wall was restored by closing the incision in the aortic wall, with Teflon felt reinforcement. Blood flow was sequentially restored in the aorta and its branches (►Fig. 3). During the entire operation, blood pressure in the right brachial and femoral arteries did not fall below 85 and 90 mm Hg. The duration of the operation was 480 minutes. The duration of anesthesia was 680 minutes. Total blood loss was 1,500 mL, with approximately 700 mL from the aneurysm cavity. There were no complications after surgery. On the first day after the operation, a right-sided pneumothorax was diagnosed, which was treated with active drainage. On the second day, the patient was extubated. An additional drain was placed in the left pleural cavity on the fifth day due to a persistent left-sided limited pneumothorax. The patient was discharged in good condition on the 19th day after the operation. A CT scan at 8 months showed a persistently closed defect (►Fig. 4). The size of the aneurysm halved over the 8 months of observation. Discussion Currently, there are two treatment methods for surgery of aortic arch aneurysms. Most often, aneurysm resection with aortic prosthetic replacement is performed using artificial circulation and circulatory arrest with antegrade cerebral perfusion. The second method is a hybrid operation on the aortic arch with complete debranching and implantation of stent graft modules.2,3 The method of treatment applied by our team for this giant false aneurysm of the aortic arch made it possible to perform this operation in a department without cardiac surgical or endovascular equipment and teams. The use of a TB between the ascending and descending aorta accomplishes an adequate unloading of the left chambers of the heart during the thoracic aorta clamping and provides enough blood flow and pressure to the lower extremities, abdominal aorta, and visceral arteries. Temporary debranching of the brachiocephalic arteries maintains adequate perfusion of the brain.
Optimizing Semiconductor Laser PIDNN Decoupling Control Based on CPSO The coupling model of a semiconductor laser is established for optical port components. The decoupling controller is designed with a PIDNN, which enables the controller to respond quickly and to tune its parameters. The initial values of the PID parameters are optimized by CPSO, and appropriate initial values can make the system stable and reach the expected value. CPSO can effectively prevent the weight parameter search from falling into a local optimum, and has a fast search speed. Introduction The semiconductor laser, as an important light source for optical fiber communication systems, has attracted wide attention from academia and engineers. Because of its small size, light weight, high reliability, low-voltage drive, and other advantages, semiconductor lasers are also widely used in scientific research, medical, and military fields [1]. Semiconductor lasers are very sensitive to operating temperature and driving current, and the output optical power depends on the PN junction temperature and the driving current inside the laser. There is a serious coupling effect between the junction temperature and the output optical power of a semiconductor laser: an increase in junction temperature inevitably leads to a decrease in quantum efficiency, which leads to an increase in threshold current and heat dissipation, and thus to a further temperature increase in the semiconductor laser. The increase in temperature also increases the kinetic energy of free electrons, which raises the optical absorption rate and causes a further temperature rise. Only by dealing well with the coupling effect between junction temperature and optical power can the semiconductor laser work properly [2,3]. In actual engineering, the semiconductor laser realizes temperature control by changing the operating current through the TEC control circuit, and the optical power of the semiconductor laser is mainly determined by the driving current of the laser [4,5], as shown in Figure 1. The plant Gs is a multivariable nonlinear system with complexity and uncertainty [6], which is difficult to model accurately. Therefore, this paper attempts to use the PID neural network (PIDNN), which requires no control model, as the controller, and adopts initial weights optimized by chaotic particle swarm optimization (CPSO) to ensure the convergence speed and accuracy of the PIDNN. CPSO has the advantages of a simple architecture, fast convergence speed, global search ability, and prevention of getting stuck in local optimal solutions. CPSO combined with PIDNN can effectively improve the convergence speed and accuracy [7,8]. PIDNN structure In order to overcome the difficulty of tuning traditional PID controller parameters in real time and to improve controller performance, a new control strategy combining the PID controller with a neural network appeared at the end of the last century. The PID controller combined with a neural network has three forms: neural-network tuning of PID parameters, the single-neuron PID controller, and the PID neural network controller (PIDNN). The controller of a communication laser is required to respond quickly to commands; the PIDNN structure was selected and the decoupling controller designed considering its simple structure and quick parameter convergence. The structure of the controller is shown in Figure 3. The nodes of the hidden layer in the PIDNN correspond to proportional, integral, and differential control, respectively.
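To make the hidden-layer structure above concrete, the sketch below implements the three hidden neurons of a single-output PIDNN in the commonly used discrete form: the proportional neuron passes its net input through, the integral neuron accumulates it, and the differential neuron outputs its first difference. This is a generic illustration of the standard PIDNN formulation, not a reproduction of the paper's exact equations (3)-(5), and all variable names are ours.

```python
import numpy as np

# Hidden layer of a single-output PIDNN (SPIDNN) in its standard discrete form.
# net_j(k) = sum_i w_ij * x_i(k); the three hidden neurons then act as P, I and D
# elements on their own net inputs. Names and structure are illustrative only.

class SPIDNNHiddenLayer:
    def __init__(self, w_in):          # w_in: 2x3 weights, input layer -> hidden layer
        self.w_in = np.asarray(w_in, dtype=float)
        self.u_prev = np.zeros(3)      # previous hidden states (needed by the I neuron)
        self.net_prev = np.zeros(3)    # previous net inputs (needed by the D neuron)

    def step(self, x):                 # x: outputs of the two input-layer neurons at step k
        net = np.asarray(x, dtype=float) @ self.w_in
        u = np.empty(3)
        u[0] = net[0]                            # proportional neuron
        u[1] = self.u_prev[1] + net[1]           # integral neuron (accumulates)
        u[2] = net[2] - self.net_prev[2]         # differential neuron (first difference)
        self.u_prev, self.net_prev = u, net
        return u                                  # fed to the output neuron via its weights

# Example: PID-like initial weights (set-point enters with +, feedback with -)
layer = SPIDNNHiddenLayer(w_in=[[1.0, 1.0, 1.0], [-1.0, -1.0, -1.0]])
print(layer.step([0.6, 0.4]))   # one step with set-point 0.6 and measured output 0.4
```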
There are 2n neurons in the input layer, and n = 2 in the laser controller [9]. The basic model structure of the single-output PIDNN (SPIDNN) is shown in Figure 4. The input to each hidden neuron is the weighted sum of the input-layer outputs, net'_sj(k) = Σ_i w_sij·x_si(k), where w_sij is the connection weight, x_si(k) is the output value of the neurons in the input layer of each sub-net, and j is the neuron number in the sub-net hidden layer; the superscript " ' " indicates a hidden-layer variable. The proportional, integral, and differential neuron forms of the discrete system are given in equations (3), (4), and (5). Neural network learning algorithm The learning objective of the generalized network formed by the PID neural network and the multivariable control object is to minimize the output bias or error, J = Σ_p Σ_k [r_p(k) − y_p(k)]², where r_p(k) is the command of the system, y_p(k) is the output or response of the system, m is the number of sampling points per batch, and n is the number of controlled variables. The weights of the PID neural network are adjusted by the gradient method, W(n+1) = W(n) − η·∂J/∂W; applying the chain rule yields the weight iteration formula from the hidden layer to the output layer and, in the same way, the weight iteration formula from the input layer to the hidden layer. When the learning step η of the PIDNN is chosen appropriately, the control system converges in the learning process, where W represents the connection weights of the neural network. CPSO Optimizes Initial Weights Reasonable selection of the initial values of the network weights can accelerate network learning and parameter convergence. A prominent advantage of the PID neural network is that the initial values of the connection weights can be set according to the basic principles of the PID control law; they can be determined using the large amount of existing experience data from PID control. Based on these initial values, the network can be trained and adjusted so that it learns quickly. CPSO was selected to optimize the initial weight values, determining, respectively, the connection weights of the proportional, integral, and differential elements from the input layer to the hidden layer, and the initial values of the network weights from the hidden layer to the output layer. Through this weight selection, the PIDNN is initially equivalent to several independent PID controllers: the multi-output PIDNN becomes n independent sub-networks, that is, n single-output PID neural networks, and the equivalent PID control law can be obtained. Due to its nonlinear mapping characteristics, the PIDNN controller acquires the decoupling control ability. During training and learning, the controller itself does not know whether the task is decoupling or control; it only completes the mapping from system input to system output according to the requirements of the objective function. Therefore, according to the input and output of the system, the PIDNN adjusts the connection weights gradually according to the learning algorithm, so that the decoupling control performance of the system can reach the set point. Decoupling is the means and control is the end: the PIDNN controller integrates decoupling and control, and they complement each other and are closely related. Thus, the PIDNN is used in the laser controller to simplify the design and realize decoupling control. Because of the strongly linear characteristics of the laser parameters, the system can keep the parameters in a small range to obtain good output characteristics. Proper initial weights can stabilize the system quickly at the desired output value.
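As a rough illustration of the batch learning step described above (not the paper's analytic chain-rule update), the sketch below performs one gradient-descent iteration on the squared output error, with the gradient computed numerically; the run_batch function is a stand-in for simulating the closed loop for one batch with a given weight matrix, and all names and values are illustrative.

```python
import numpy as np

# Generic sketch of the PIDNN weight update: minimise J = sum_k sum_p (r_p(k) - y_p(k))^2
# over one batch, then move each weight against the gradient, W <- W - eta * dJ/dW.
# run_batch stands in for simulating the closed loop for m steps with weights W.

def objective(run_batch, W):
    r, y = run_batch(W)                  # arrays of shape (m, n): commands and responses
    return np.sum((r - y) ** 2)

def gradient_step(run_batch, W, eta=1e-3, eps=1e-6):
    grad = np.zeros_like(W)
    base = objective(run_batch, W)
    for idx in np.ndindex(W.shape):      # numerical gradient, one weight at a time
        W_p = W.copy(); W_p[idx] += eps
        grad[idx] = (objective(run_batch, W_p) - base) / eps
    return W - eta * grad                # gradient-descent weight iteration

# Toy demonstration: a fake "batch" whose response is a scaled version of the command.
def toy_batch(W):
    r = np.ones((5, 2))                  # five sampling points, two controlled variables
    y = r * W.mean()                     # stand-in for the simulated closed-loop response
    return r, y

W0 = np.array([[0.5, 0.5], [0.5, 0.5]])
print(objective(toy_batch, gradient_step(toy_batch, W0)) < objective(toy_batch, W0))  # True
```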
By introducing chaotic mapping, the dynamic characteristics of the CPSO become chaotic, so that the optimization avoids falling into a local optimal solution. By adding chaos to the particle motion, the local search can be made more refined. Common chaotic mappings include the Logistic map, the Lorenz map, and the Henon map [10]. The Logistic map was used in this paper, and the motion equation of the particle swarm with the chaos factor is given in equation (15), in which the random factors of the velocity update are random numbers generated with a chaotic effect. By limiting the search scope around the global optimal point and increasing the number of searches within that range, a more refined point may be obtained. The iteration of the random numbers is carried out according to the Logistic map. A fitness function must be set for the evaluation of the population positions; for the laser system, a fitness function with the output deviation of the controller as the main factor should be considered. The control strategy flow of the CPSO optimization is as follows: Step 1. Initialize the population particles and positions, and perform the chaotic mapping in preparation for the particle swarm update. Step 2. Determine the PIDNN weights from the particle swarm, thus determining the controller parameters, and run the control system model. Step 3. Evaluate the system output and search for the optimal location. The chaotic map is used to update the particle swarm positions. Step 4. Calculate the weights of the new positions and run the control system under the same conditions. Compare the system outputs and determine the update direction. The current global optimal point is then searched in detail according to the chaotic map. Step 5. Terminate the search and fix the initial weight values when the control target is met; otherwise, exit the search at the optimal location after the set search time. Step 6. After the system starts to operate, monitor the output of the system in real time. When the output deviation exceeds the limit, return to Step 3 for control. Analysis of Examples The coupling model of the semiconductor laser was established according to the literature [3], and two control strategies, the PIDNN controller and the PIDNN controller with CPSO-optimized initial values, were tested in simulation. The control goal for the system temperature was set at 25 °C, and the target system output optical power was 0.6 W. After several simulation runs with different initial values, the typical experimental results are shown below. The decoupling control output of the CPSO-optimized PIDNN is shown in Figures 5 and 6, and the population search is shown in Figure 7. In order to compare the optimization effect of CPSO, the PIDNN control performance without CPSO optimization under the same parameter conditions is shown in Figure 8. Conclusion The results show that the PIDNN controller can decouple the laser system effectively. The introduction of CPSO did not significantly slow down the response speed of the system. In the case of disturbance, the response of the system with CPSO optimization was better, and it could effectively avoid the problems of falling into a local optimal point and of output deviation in the process of neural network weight learning.
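As a compact illustration of the chaotic particle update and the Step 1-Step 6 flow described in the preceding section, the sketch below runs a small chaotic PSO in which the random factors of the velocity update are drawn from a Logistic map (z ← 4z(1−z)) rather than a uniform generator. The fitness function, bounds, and parameter values are placeholders standing in for the PIDNN initial-weight search; none of them are taken from the paper.

```python
import numpy as np

# Chaotic PSO sketch: standard velocity/position update, with r1, r2 generated by a
# Logistic map (mu = 4) instead of a uniform RNG, to help escape local optima.
# The quadratic fitness below is a placeholder for "run the PIDNN control loop and
# return the output error"; bounds and coefficients are illustrative.

def logistic(z):                      # chaotic map on (0, 1)
    return 4.0 * z * (1.0 - z)

def cpso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    z1, z2 = 0.37, 0.61               # seeds of the two chaotic sequences
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        z1, z2 = logistic(z1), logistic(z2)
        v = w * v + c1 * z1 * (pbest - x) + c2 * z2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Placeholder fitness: distance of a 6-dimensional "weight vector" from an arbitrary target.
target = np.array([1.0, 1.0, 0.1, -1.0, -1.0, -0.1])
best, best_f = cpso(lambda p: float(np.sum((p - target) ** 2)), dim=6)
print(best.round(2), best_f)
```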
An update on one-dose HPV vaccine studies, immunobridging and humoral immune responses - A meeting report Highlights • Clinical trial subjects followed up for up to 10 years after receiving one dose of HPV vaccine show stable antibody levels. • One-dose vaccine efficacy against vaccine-type incident persistent infection in the India and Costa Rica cohorts is > 80%. • Further study is needed for one-dose HPV vaccination in HIV+ individuals, but it shouldn't delay general adoption. • HPV vaccine immune response plateaus at 24 months. Studies with 24+ months of follow-up are needed for accurate comparison. • Anchored International Standards for 9 HPV vaccine genotypes will facilitate immunogenicity reporting and assay optimisation. Introduction The HPV Prevention and Control Board is an independent, international, and multidisciplinary group of experts, created in 2015 to provide evidence-based guidance and reflection on strategic, technical, and policy issues regarding the implementation and sustainability of HPV prevention and control programmes. The board aims to disseminate and amplify relevant information on HPV prevention and control to a broad array of stakeholders by organising two meetings every year: a technical meeting covering topics such as vaccine characteristics, vaccine safety, screening technologies and landscape, treatment strategies, the role of healthcare providers in vaccination programmes, and dealing with antivaccine messages (Vorsters et al., 2017; Vorsters et al., 2019; Waheed et al., 2021); and a country meeting, covering a strengths, weaknesses, opportunities, and threats (SWOT) analysis of a country's or region's HPV prevention and control programmes (Vorsters et al., 2020). The objectives of the meeting were focused on: 1) One-dose HPV vaccination studies: - Update on efficacy and effectiveness data on the HPV vaccination one-dose schedule. - Evaluate immunobridging results from current one-dose HPV vaccination trials against historical one-dose efficacy data. - Discuss the global recommendation landscape of the HPV vaccination one-dose schedule. 2) Humoral immune responses upon HPV immunization: - Availability and use of standardized measurements for reporting humoral immune responses. - Characterisation of humoral immune responses for evaluating HPV vaccine immunogenicity. - First-void urine as a sampling mechanism to evaluate humoral immune responses. This report presents an overview of available data and discussions taking place during the June 2022 meeting. It is worth noting that not all available studies on the topics are included, due to data availability. Furthermore, the allocated time of 0.5 days for this topic within the meeting restricted the time available for in-depth discussion of certain subtopics. Despite these limitations, the discussions held among authors and various experts from academia, regulatory authorities, and other stakeholders are unique and provide important discussion points essential for future research.
Updates on one-dose HPV vaccination studies Over the past few years, there has been growing evidence that a single dose of HPV vaccine can provide protection against cervical cancer. Long-term follow-up data are available from the Costa Rica HPV Vaccine Trial (CVT, launched in 2004), conducted prior to licensure of 2vHPV (Cervarix®) in 18-25-year-old women using three doses; in the CVT trial, the vaccine efficacy (VE) of the one-dose group for prevalent HPV16/18 infection was 82.1% (40.2-97.0) at 11.3 years after vaccination. HPV16 serum antibodies are stable after a follow-up of 11 years in participants that received one, two, and three doses. Immunologic follow-up is set to continue for up to 20 years (Kreimer et al., 2020). See Table 1. Furthermore, the India IARC trial, a multicentric cohort study to compare the efficacy of a two-dose versus three-dose 4vHPV (Gardasil-4®) schedule in 10-18-year-old females in India, provides 10-year follow-up data on one-dose efficacy (Basu et al., 2021). After the suspension of recruitment and vaccination, the study became a longitudinal, prospective cohort study by default. Participants were allocated to four cohorts based on the number and timing of vaccine doses received. See Table 1 for further details. At 10 years post-vaccination, 96% and 97% of one-dose recipients had detectable HPV16 and HPV18 antibodies, respectively, with titres 15 and 10 times higher than natural immunity. All vaccinated cohorts had a similar incidence of HPV16/18 infections [one-dose cohort 3.1 (2.6-3.8) vs two-dose regimes 2.6 (2.0-3.3) vs three-dose cohort 3.0 (2.3-3.8)], while the control arm had an increased incidence [unvaccinated 9.7 (8.2-11.3)]. Adjusted VE for incident persistent HPV16/18 infections was 93.3% (77.5-99.7) for the three-dose cohort, 93.1% (77.3-99.8) for the two-dose cohort, and 95.4% (85.0-99.9) for the one-dose cohort. Two important randomised controlled trials aiming to evaluate one-dose HPV VE are currently ongoing in Tanzania and Kenya. The KEN SHE Study compares the VE of one-dose Gardasil-9® (9vHPV) and Cervarix® (2vHPV) vaccination against incident persistent HPV infection among sexually active adolescent girls and young women in Kenya. At month 18, the incidence of persistent non-vaccine-type HPV infections was similar between study arms, ranging from 22.2 to 24.5 per 100 woman-years. However, the incidence of persistent HPV16/18 infections was significantly lower in the 2vHPV and 9vHPV arms than in the control arm (0.17 in both study arms vs 6.83 per 100 woman-years), with a VE of 97.5% (81.6-99.7) for 2vHPV and 97.5% (81.7-99.7) for 9vHPV. See Table 1 (Barnabas et al., 2022). Finally, the incidence of HPV16/18/31/33/45/52/58 infections (the vaccine types in the nonavalent vaccine) was significantly lower in the 9vHPV study arm than in the control arm (1.03 vs 9.42 per 100 woman-years, VE = 88.9% (68.5-96.1)). See Table 1.
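The KEN SHE efficacy figures quoted above follow directly from the reported incidence rates. As a simple check (our illustration, using the standard relation VE = 1 − incidence in vaccinated / incidence in controls, which ignores the confidence-interval and covariate adjustments of the trial analysis):

```python
# Rough check of the KEN SHE vaccine-efficacy point estimates from the incidence
# rates quoted in the text (per 100 woman-years). VE = 1 - I_vaccinated / I_control;
# this simple ratio ignores the adjustments used in the trial analysis.

incidence = {
    "HPV16/18, 2vHPV arm": (0.17, 6.83),
    "HPV16/18, 9vHPV arm": (0.17, 6.83),
    "HPV16/18/31/33/45/52/58, 9vHPV arm": (1.03, 9.42),
}

for label, (i_vax, i_ctrl) in incidence.items():
    ve = 1 - i_vax / i_ctrl
    print(f"{label}: VE ~ {ve:.1%}")
# -> ~97.5% for HPV16/18 in both arms and ~89% for the nonavalent types,
#    consistent with the 97.5% and 88.9% reported above.
```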
The Dose Reduction Immunobridging & Safety Study (DoRIS) offered one, two or three doses of the 2vHPV (Cervarix®) or 9vHPV (Gardasil-9®) vaccine in order to demonstrate non-inferiority of HPV16/18 seroconversion after one dose compared with two or three doses of the same vaccine. At month 36, one dose was non-inferior to two doses and three doses for HPV16 for both vaccines, but for HPV18 non-inferiority was only met for two doses versus three doses for both 2vHPV and 9vHPV. HPV16/18 antibody levels after one dose reached a plateau from month 12 to month 36 for both vaccines (in the two- and three-dose cohorts, antibody concentrations declined after a peak at month 7, whereas the one-dose concentrations remained essentially constant from month 7; see Table 1). The HPV16/18 avidity index was very similar between one dose, two doses, and three doses, for both vaccines and both HPV types, with avidity index ratios close to 1. Given the challenge of recruiting and sampling younger age cohorts to evaluate the efficacy of HPV vaccines, immune responses were bridged to populations where efficacy has been shown. Similar to the original licensure of HPV vaccines, non-inferiority of immune responses was used to infer efficacy in younger girls through a comparison of anti-HPV ELISA titres (IARC, 2014). Historic efficacy data used for immunobridging include antibody levels from CVT (11 years of follow-up), India IARC (10 years of follow-up) and KEN SHE (18 months). For all of these trials, antibody levels after a one-dose HPV vaccination schedule in DoRIS were shown to be non-inferior for HPV16 and HPV18, for both 2vHPV and 9vHPV, in comparison with the corresponding vaccine in the older-aged efficacy populations. See Table 1. Prospects: Studies investigating the efficacy and impact of one-dose HPV vaccination in preventing cervical cancer The ESCUDDO trial compares the efficacy of one versus two doses of the 2vHPV (Cervarix®) and 9vHPV (Gardasil-9®) vaccines in Costa Rican girls aged 12-16, with results expected in 2025. The PRIMAVERA trial is an immunobridging trial comparing antibody levels in girls receiving one dose of 2vHPV from ESCUDDO to those receiving three doses of 4vHPV (Gardasil-4®) in historical cohorts, with results expected by 2023/2024. The PRISMA study evaluates the efficacy of one dose of the 2vHPV (Cervarix®) and 9vHPV (Gardasil-9®) vaccines against persistent HPV16/18 cervical infections in HPV16/18-DNA baseline-negative women aged 18-30, with data expected by 2027. This will provide an opportunity to protect additional women from HPV-related disease, as a one-dose schedule in adult women may allow for a massive, one-time catch-up. The HOPE study was set up to monitor the impact of a two-dose and one-dose HPV vaccination schedule on community-level HPV prevalence using repeat cross-sectional surveys collected from independent
cohorts of South African adolescent girls, from before ("pre-vaccine cohort") and after ("vaccine eligible cohort") implementation of the programme. Additionally, the investigators wanted to measure the population impact of a one-dose vaccine schedule, delivered as a catch-up to Grade 10 pupils in one district, in protecting against infection with HPV16 and/or 18 (Machalek et al., 2022). Of 6,673 potential recipients, 4,807 (72%) received a single HPV vaccine dose. The median age of the vaccine recipients was 16 (interquartile range 15-17) years. The primary reason for non-vaccination was lack of signed parental consent or absenteeism (98%). Analysis of the data is currently in progress. See Table 1 for further details of these trials. Evidence-based impact projections of single-dose HPV vaccination in India The EpiMetHeos model was used to predict the impact of HPV vaccination under the one-dose schedule in India, looking at its effectiveness on HPV infection and cervical cancer, the potential for elimination according to different indicators, the relative efficacy of the one-dose compared to the two-dose schedule, the impact of catch-up, and the variability of impact across India (Man et al., 2022). Four scenarios were used, based on the India IARC trial 10-year efficacy data, where vaccine efficacy against incident persistent infection for HPV16/18 is 95% and for HPV31/33/45 is 9%. Scenario A assumed lifelong vaccine protection for both single-dose and two-dose HPV vaccination. Scenario B assumed similar initial HPV16/18 VE (95%/95%) but waning of protection for single-dose vaccination. Scenario C was similar to scenario B, but with lower initial HPV16/18 VE (90%/85%) and faster waning of protection for single-dose vaccination. Lastly, scenario D assumed lower initial HPV16/18 VE (85%/55%) and faster waning of protection for single-dose vaccination. These assumptions were derived from the lower bound of efficacy estimated by the IARC India HPV vaccine trial and by projecting the time until HPV16 and HPV18 antibody levels observed in the trial decreased below predefined thresholds. The base-case scenario reached the WHO elimination threshold in the long term, with a 71% reduction in cervical cancer risk in the first five vaccinated cohorts. Under the three alternative scenarios, elimination was still attained in most cases. Furthermore, under any scenario, the two-dose schedule needed more doses than the one-dose schedule to prevent one case of cancer, 26% more under the less favourable set of assumptions. Hence, in most scenarios the one-dose schedule is cost-saving (when undiscounted) and cost-effective (when discounted), whereas introducing a second dose is not cost-effective (Man et al., 2022). These projections indicate that single-dose vaccination can substantially decrease the cervical cancer burden across India and that some Indian states with the highest burden would benefit from additional control measures.
Update on SAGE advice on HPV schedule optimisation and the permissive single-dose recommendation in younger women

High interest in HPV vaccination by countries across all income groups has resulted in increased demand in the past several years. However, a combination of factors, primarily linked to continued supply constraints, has slowed the pace of introductions, particularly in low-resource settings (World Health Organization, 2022). It was in this setting that the WHO Strategic Advisory Group of Experts on Immunization (SAGE) took up a review of the new evidence around reduced-dose HPV schedules in 2022.

SAGE advised that the target population for vaccination should remain 9-14-year-old girls. For this target population, one dose or two doses can be used. Similarly, for 15-20-year-olds, one dose or two doses can be used, whereas, from the age of 21, a two-dose schedule can be used. Finally, regardless of age, at least two doses should be used in immunocompromised patients, while ideally three doses are recommended. SAGE recommended that countries, where feasible and affordable, prioritise a catch-up of older cohorts and missed girls through multi-age cohort vaccination. Introducing the vaccination of boys and older females should be carefully managed until the global supply situation is fully resolved. Where gender-neutral vaccination is introduced, males can receive the same schedule as females (World Health Organization, 2022).

Data gaps that still exist are: first, the immunogenicity, protective efficacy, and duration of protection of reduced-dose schedules in immunocompromised individuals, especially the level of protection provided when HIV seroconversion happens after one dose of HPV vaccine; second, the long-term immunogenicity, efficacy, and duration of protection of a one-dose HPV vaccine schedule in girls and boys 9-14 years old; third, the use of one-dose schedules in older adults and children below 9 years of age; and finally, implementation research to identify strategies to improve HPV vaccine coverage, including among populations at high risk of early HPV infection and immunocompromised individuals (World Health Organization, 2022).
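The age-based options summarised above lend themselves to a small decision helper. The sketch below simply encodes the schedule options as stated in the preceding paragraphs; it is not an official WHO tool, and the WHO position paper remains the authoritative source (immunocompromised individuals in particular require individual clinical assessment).

def sage_hpv_dose_options(age, immunocompromised=False):
    # Dose options for girls and women as summarised in the text above
    if immunocompromised:
        return "at least 2 doses; ideally 3 doses"
    if 9 <= age <= 14:
        return "1 or 2 doses (primary target population)"
    if 15 <= age <= 20:
        return "1 or 2 doses"
    if age >= 21:
        return "2 doses"
    return "below the age range for routine HPV vaccination"

for age, immuno in [(12, False), (17, False), (25, False), (13, True)]:
    print(age, immuno, "->", sage_hpv_dose_options(age, immuno))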
Panel discussion on one-dose HPV vaccine trials and introduction of a one-dose schedule

During the panel discussion, experts involved in the previously presented clinical trials had the opportunity to discuss amongst themselves, raising relevant points and concerns arising from the evidence presented.

Background HPV prevalence and selection bias

Several attendees raised the concern that one-dose HPV vaccine performance might be subject to the difference in background HPV prevalence in the settings where these trials were carried out. The data from the KEN SHE study seem to argue against this suggestion, as single-dose HPV vaccination provides high efficacy against incident persistent HPV16/18 infection even in the setting of a high incidence of HPV infection. Although the efficacy follow-up in KEN SHE is relatively short (18 months) and the antibody levels in participants receiving two- or three-dose schedules are considerably higher across studies, there are suggestions that immune responses will be stable over time, as presented from the African setting study (DoRIS) in Tanzania. Moreover, in all presented trials measuring a one-dose schedule, immunogenicity data show minimal HPV16/18 antibody decay after the plateau. Similar observations were presented from Costa Rica and India, where 10 years of follow-up have been completed. DoRIS data presented during the meeting showed a stable immune response at 3 years after a single dose of HPV vaccine. This may suggest that, regardless of the number of doses administered, a stable long-lived plasma cell niche is produced which continues to generate antibodies. Results from the HOPE study are expected to provide insight into the impact and effectiveness of a one-dose HPV vaccination schedule at a wider level.

Genotype replacement

The level of protection elicited by one-dose HPV vaccination was raised as a point of concern, specifically the uncertainty of whether reduced-dose schedules might allow type replacement by oncogenic genotypes. However, for type replacement to take place there needs to be competition, which is not seen at the lesion level. Although multiple infections can be found, lesions are known to be driven by only one genotype. Furthermore, characterisation of humoral responses following HPV vaccination has not shown results suggesting type replacement; on the contrary, the responses appear to be more cross-protective than expected. According to transmission modelling, however, it is still too early to preclude type replacement, and monitoring of non-vaccine types remains pivotal (Man et al., 2021).

Clinical trials are a good setting to investigate possible vaccine failures by looking at breakthrough cases. In the CVT trials, there are currently eight possible vaccine failures among the ~3000 women in the vaccinated arm, who had on average 11 years of follow-up. Investigations to confirm these cases as vaccine failures include persistence of HPV infection that had not been detected in previous follow-up, antibody levels and avidity, and viral variant studies.

Implementation of an HPV one-dose schedule in settings with high HIV prevalence

Questions were raised regarding the impact of HPV vaccine-induced protection among people who acquire HIV after being vaccinated with one dose of HPV vaccine. This is especially concerning in Sub-Saharan Africa, where six out of seven new HIV infections in adolescents aged 15-19 years occur in girls (Schiller and Müller, 2015; UNAIDS data, 2022).
In persons living with HIV, HPV vaccination induces high rates of antibody seroconversion (Toft et al., 2014; Faust et al., 2016) and vaccine-induced antibody responses are sustained for at least four years (Levin et al., 2017), but cross-reactive antibody responses were diminished compared with those reported in HIV-negative populations. Despite the reasonable evidence supporting the immunogenicity of HPV vaccines in HIV-positive individuals, the corresponding efficacy data are inconsistent (Lacey, 2019). Further research is needed to understand the functional and anatomical immunologic remodelling that occurs in HIV infection with regard to HPV-vaccine-induced protection. A major question for further research is the impact of an HIV infection acquired after HPV vaccination. This can only be investigated in areas with high HIV incidence. The question remains whether this will reduce the protection gained through vaccination. Results from the HOPE study, measuring the community-wide impact of one-dose HPV vaccination, are likely to provide further insight given the high prevalence of HIV in South Africa.

Lessons learned & the way forward: one-dose HPV vaccination studies

The IARC India trial and the Costa Rica trial (CVT) present long-term follow-up data on a substantial number of subjects who received one dose. These ten-year follow-up results show sustained HPV16/18 antibody levels and >80% VE against incident persistent HPV infection. The DoRIS and KEN SHE trials provide further insight into vaccine-induced antibodies up to 36 months after vaccination with a one-dose regimen. These studies are especially important because they are carried out in countries with the highest HPV incidence/attack rate in the world. HPV type replacement is currently not an issue, as no evidence of genotype competition has been demonstrated. However, surveillance of non-vaccine types remains warranted.

Modelling studies in India, which considered different scenarios for one-dose vaccination including stable or waning protection based on detection and seropositivity thresholds, show the one-dose strategy to be cost-saving and cost-effective in most scenarios.

To ensure accurate comparison of immunobridging responses, samples from prospective and historical trials must be from the same sampling timepoint and tested in the same laboratory with the same validated serologic assay.

There is an increased need for evidence-based data and policies on the immunogenicity, protective efficacy, and duration of protection of HPV immunisation in immunocompromised individuals.

One-dose HPV vaccination in cohorts at high risk of HIV acquisition should be studied further. However, this should not be a reason to delay adoption of the one-dose schedule in the general population.

Immunogenicity of HPV prophylactic vaccines: serology assays and their use in HPV vaccine evaluation and development: importance of international units for reporting immune response

HPV serology is used to evaluate vaccine-induced antibody duration and antibody levels. HPV serology can also be used to determine the quality of vaccine-induced antibodies and to report humoral immune responses in a standardised manner regardless of the serologic assay, laboratory or vaccine used. In all these cases, the availability, relevance, and proficiency of internationally standardised tests are important to report HPV immunogenicity and to validate improvements in serologic HPV assays. International standards are required to define an International Unit.
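To make the role of an International Unit concrete, the sketch below shows, with hypothetical dilution-response data, the basic logic of a parallel-line potency estimate: responses for a test serum and a reference standard are modelled as parallel straight lines against log dilution, the horizontal shift between the lines gives the relative potency, and the result is reported in IU/mL by multiplying that relative potency by the unitage assigned to the standard. Validated assays use formal parallel-line analysis with parallelism testing; this is only a simplified illustration with invented numbers.

import numpy as np

def relative_potency(log_dil, resp_ref, resp_test):
    # Simplified parallel-line estimate: force a common slope, compare intercepts
    slope_ref, _ = np.polyfit(log_dil, resp_ref, 1)
    slope_test, _ = np.polyfit(log_dil, resp_test, 1)
    slope = (slope_ref + slope_test) / 2            # parallelism assumed, not tested
    a_ref = np.mean(resp_ref - slope * log_dil)     # intercepts under the common slope
    a_test = np.mean(resp_test - slope * log_dil)
    return float(np.exp((a_test - a_ref) / slope))  # horizontal shift on the log scale

# Hypothetical ELISA readouts at four serial dilutions (not real assay data)
log_dil = np.log(np.array([1/50, 1/200, 1/800, 1/3200]))
reference = np.array([2.1, 1.6, 1.1, 0.6])   # international standard serum
test      = np.array([1.9, 1.4, 0.9, 0.4])   # test serum

ASSIGNED_IU_PER_ML = 10.0                    # hypothetical unitage of the standard
rp = relative_potency(log_dil, reference, test)
print(f"relative potency {rp:.2f}; test serum approx. {rp * ASSIGNED_IU_PER_ML:.1f} IU/mL")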
International monospecific standard sera have been established for HPV16 and HPV18 (Faust et al., 2016; Ferguson et al., 2011), while work on sera for HPV6, 11, 31, 33, 45, 52 and 58 is ongoing. Secondly, reproducible methods for analysing readouts should be used. The parallel-line method is the method of choice, as it increases reproducibility (Grabowska et al., 2002).

Neutralising and cross-neutralising antibody levels to HPV following vaccination

HPV vaccines induce a type-specific neutralising antibody (NAb) response directed at the L1 loop regions exposed on the HPV capsid surface. Anti-L1 antibodies can reach the cervix via transudation from the systemic circulation and are postulated to be the primary mechanism of protection against HPV infection. In human studies, antibody-induced neutralisation responses measured in vitro correlate well with the observed endpoints, including protection against HPV-caused pre-malignant lesions and prevention of persistent infection (defined as infections lasting >6 months) (Schiller et al., 2012). NAbs are, therefore, convenient correlates of protection, but the minimal protective levels are currently unknown.

A head-to-head comparison study with serum samples collected from participants of the PATRICIA (2vHPV Cervarix®) clinical trial in Finland and the India clinical trial (4vHPV), who had received three doses of the vaccines when aged 16-17 years old, showed that 2vHPV recipients had significantly higher HPV16/18 peak antibody levels than 4vHPV recipients, as determined by a semi-automated high-throughput Pseudovirion-Based Neutralisation Assay (PBNA). Furthermore, cross-neutralising HPV31/33/45/52/58 Abs were induced by the 2vHPV Cervarix® significantly more frequently and at higher concentrations than by the 4vHPV (Mariz et al., 2020). Similarly, analysis of serum samples from 4vHPV recipients and 2vHPV Cervarix® recipients who were enrolled in the FUTURE and PATRICIA clinical trials and then followed up by the population-based Finnish Maternity Cohort showed that NAbs to HPV16/18 were generally found up to 12 years after vaccination, as were HPV6 antibodies in 2vHPV Cervarix® recipients (Mariz et al., 2021). However, 15% of the 4vHPV recipients had no detectable HPV18 NAbs 2-12 years after vaccination, whereas all corresponding 2vHPV recipients had HPV18 NAbs. Cross-neutralising Abs to HPV31, 33, 45, 52, and 58 were more prevalent in the 2vHPV Cervarix® recipients, but similar GMCs to vaccine types were found up to 12 years after vaccination in both vaccine cohorts.

When comparing the immunogenicity and reactogenicity of the 2vHPV Cervarix® and 4vHPV vaccines in HIV-positive adult recipients of a three-dose vaccination schedule, anti-HPV18 NAb titres were higher in the bivalent group than in the quadrivalent group at seven and twelve months (Toft et al., 2014). Interestingly, only moderate NAb seroconversion (50%), limited to non-vaccine HPV31, was observed in 2vHPV Cervarix® recipients in this HIV-positive cohort (Faust et al., 2016). Finally, children with well-controlled HIV infection who receive three doses of the 4vHPV vaccine maintain NAbs for at least four years (Levin et al., 2017). Although the 2vHPV Cervarix® provides slightly broader long-term protection than the 4vHPV in participants of the PATRICIA and FUTURE trials, the cross-reactivity induced in HIV-positive adults seems diminished relative to HIV-negative cohorts, whereas data on long-term immunity following HPV vaccination in HIV cohorts are scarce.
Although NAb titres after vaccination are correlated with protection against persistent infection for vaccine HPV types, the correlation is weaker for non-vaccine types in 4vHPV recipients (Mariz et al., 2021). It is still to be confirmed whether cross-NAb responses are the main effectors of protection against non-vaccine HPV types. Other Ab-mediated cellular cytotoxicity responses, which are not measured by in vitro neutralisation assays, may contribute to preventing infection and to virus clearance (Wang et al., 2018). Vaccination with HPV virus-like particles (VLP) also triggers cell-mediated responses (Pinto et al., 2003; Stanley, 2006) to T helper epitopes conserved across distinct genotypes (Pinto et al., 2003), which may play some role in both cross-protection and immunological memory.

In contrast, for considerably more 4vHPV recipients, NAb titres remained below test sensitivity, notably for HPV18. This triggered a discussion about the impact of the valency (the number of genotypes included in the vaccine) of a given vaccine on the immune response against that vaccine. The immunogenicity data suggest that HPV16 VLP are immunodominant, because at similar (2vHPV Cervarix®) or lower concentrations (4vHPV) these particles induce higher NAb titres than HPV18 VLP. Considering that adjuvants are key factors determining the balance of antigenic immunodominance (Chen et al., 2021; Maeda et al., 2017), the distinct adjuvant systems employed by these vaccines, in addition to the valency and antigen concentration, are likely to impact the resulting Ab levels differently. Nevertheless, while the NAb titres induced by each of these vaccines to HPV16 and HPV18 are different, their effectiveness against the corresponding infections seems to be comparable.

Current status of using urine samples to monitor HPV vaccination status

First-void urine (FVU), or the initial stream of urine, captures impurities lining the urethra opening. These impurities include transudated Abs and biomarker-containing mucus and debris from exfoliated cells originating from the female genital tract. As it is a non-invasive sample, which can be obtained at home, it is an interesting option for reaching non-attendees of the cervical cancer screening programme (Pattyn et al., 2019). Several studies have demonstrated that first-void urine is a suitable sample for detecting HPV DNA, and vaccine-induced HPV Abs originating from female genital tract secretions are detectable in FVU as well (Arbyn et al., 2018; Pathak et al., 2014). This presents an opportunity for non-invasive sampling to monitor HPV Ab status in women participating in large epidemiological studies and HPV vaccine trials (Pattyn et al., 2019, 2020; Van Keer et al., 2019). The simultaneous assessment of both HPV infection and immunogenicity on a non-invasive, readily obtained sample is particularly attractive.
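Agreement between antibody measurements in paired FVU and serum samples, as in the study described next, is typically summarised with a rank correlation. A minimal sketch with hypothetical paired titres (not data from the V503-004 study):

from scipy.stats import spearmanr

# Hypothetical paired antibody titres for the same participants (arbitrary units)
serum_titres = [5.0, 12.0, 30.0, 48.0, 75.0, 120.0, 200.0, 310.0]
fvu_titres   = [0.2,  0.5,  1.8,  0.9,  2.1,   5.5,   4.0,   9.0]

rs, p_value = spearmanr(serum_titres, fvu_titres)
print(f"Spearman rs = {rs:.2f}, p = {p_value:.3g}")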
Paired FVU and serum samples from female volunteers who participated in a 9vHPV trial (HPV V503-004 study) were collected before vaccination (month 0), one month after the third dose (month 7), and approximately three years after the third dose (month 43) (manuscript in preparation). HPV-specific antibody concentrations in FVU were detected in 0-16% of samples at month 0, 95-100% at month 7, and 84-100% at month 43. In addition, the results show significant Spearman correlations between the HPV antibody titres of paired FVU and serum samples (month 0 rs = 0.52, month 7 rs = 0.69, month 43 rs = 0.80). Notably, HIV Abs can also be detected in urine, which might make FVU a valid sample in LMICs with a high HIV burden. However, due to biological differences in the genital tract, FVU is probably not an appropriate sample for HPV DNA or antibody detection in men.

Optimisation of current antibody neutralisation assays

An important remark was made that not detecting neutralising antibodies 12 years post-vaccination in recipients of 2vHPV Cervarix® and 4vHPV does not necessarily mean that individuals are unprotected, as other factors should be considered, including the suboptimal sensitivity of current NAb serological assays. Important advances to address these research gaps are underway, including objective comparison between available testing platforms through the use of international units.

Cross-protective vaccine-induced neutralising antibodies

A head-to-head comparison between 2vHPV Cervarix® and 9vHPV provided important insight: the 9vHPV vaccine elicited higher cross-protective antibody levels against HPV35 (Arroyo Mühr et al., 2022). This is particularly important in Africa, where about 10% of cervical cancers have been shown to be caused by this genotype. Further research is needed to validate and understand the protective value of these antibodies against persistent infection. This could present an opportunity for implementation of these vaccines in countries and populations that are most affected by cervical cancer caused by the oncogenic genotype HPV35.

Feasibility of using first-void urine to assess vaccine status and measure impact

Settings with suboptimal vaccine registries, in HICs and LMICs alike, could benefit from the use of FVU sampling as a non-invasive strategy to collect impact data. However, there were doubts regarding the sensitivity of the sample for detecting vaccine-induced antibodies in recipients of a one-dose schedule. Data presented during the meeting show that, although stable, one-dose HPV vaccination yields lower antibody titres in serum. While further validation and optimisation of this strategy are needed, promising results have been reported, including the detection of antibodies after natural infection in urine and a very good correlation between serum and FVU antibody titres, making this sampling strategy a very promising asset for HPV vaccine effectiveness assessment worldwide.

Lessons learned & the way forward: humoral immune responses upon HPV vaccination

The availability of international standards is relevant, as it will facilitate HPV immunogenicity reporting and accurate data interpretation across labs and testing batches.
VLPs are highly immunogenic, resulting in high-affinity Abs. Due to intramuscular administration with adjuvant, the resulting Abs are of better quality than Abs resulting from natural infection. Humoral immune responses following HPV vaccination reach a plateau at 24 months, irrespective of the number of doses administered. As such, it is essential to analyse data from studies with a minimum follow-up period of 24 months in order to accurately compare results across studies. First-void urine sampling is a non-invasive, home-based sampling method that allows the detection of HPV-specific antibodies. International Standards for the 9 HPV vaccine genotypes need to be anchored, and, for instance, peer reviewers should ask for international units when reviewing manuscripts.

Ethics approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

HPV Prevention and Control Board meetings are invitation-only meetings. All participants accepted the invitation and attended the meeting of their own free will. The HPV Prevention and Control Board asked the participants to fill out a 'consent form', agreeing that videos and photos of the meetings can be published online. The speakers are also asked to fill out a consent form to agree/disagree that their presentation can be published on the website, included in the meeting report or used for publication.

Availability of data and material

All the presentations of the meeting report are published on the website (https://www.hpvboard.org) after speakers' approval.

Funding

The HPV Prevention and Control Board is supported by in-kind contributions and support from the international experts involved and their institutions. To set up the activities and support publication costs, the secretariat obtained unrestricted grants from industry (GlaxoSmithKline Biologicals, Merck). All funds were handled according to the rules of the University of Antwerp. No remuneration for experts or speakers was provided.

Competing interests

AV: the University of Antwerp obtained unrestricted educational grants from GSK, Merck, Roche and Hologic, an investigator-initiated grant from Merck, and speaker fees from Merck. MB received medical writing fees from Merck, SPMSD and GSK. DWJ received funding from GSK Biologicals for a clinical trial of HPV vaccine and a donation of Gardasil® from Merck and Co. MS is part of the Global Advisory Board HPV vaccines, MSD Merck.

For the authors identified as personnel of the IARC or WHO, the authors alone are responsible for the views expressed in this Article, and they do not necessarily represent the decisions, policies, or views of the IARC or WHO. The designations used and the presentation of the material in this Article do not imply the expression of any opinion whatsoever on the part of WHO and the IARC about the legal status of any country, territory, city, or area, or of its authorities, or concerning the delimitation of its frontiers or boundaries.

Authors' contributions

AV, MS, IB, FRB, DNW: defining the meeting objectives, speakers, and the program. CE, FCM, NM, DWJ: presenting, chairing sessions, leading discussions, providing and validating the meeting conclusions. MB, FRB, DNW, LT, AV: drafting the manuscript. All authors have contributed to editing the manuscript.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Table 1. Summary of studies evaluating the efficacy and immunogenicity of cohorts that received a one-dose regimen of any HPV vaccine product.
A case-referent study on acute myeloid leukemia, background radiation and exposure to solvents and other agents
Scand J Work Environ Health 7 (1981) 169-178.

The effect of potential risk factors for acute myeloid leukemia was evaluated in a case-referent study encompassing 42 cases and 244 referents, all deceased. Information on exposure was obtained with questionnaires mailed to the next of kin. Particularly the effect of background radiation was evaluated, as assessed with a gamma radiation index weighing the time spent outdoors and indoors and considering the building material (stone, wood, etc) in the homes and the workplaces of the subjects. Especially between the ages of 20 and 49 a, to some extent also between 50 and 69 a but not above 70, there seemed to be an effect from background radiation and a trend suggesting an exposure-effect relationship. There was also about a sixfold increase in the rate ratio with regard to solvent exposure, which also seemed to modify the effect of background radiation. Other exposures were associated with relatively modest increases in the rate ratios and/or very small numbers of exposed individuals. It would be worthwhile to undertake further cancer epidemiologic studies of background radiation in which effective study designs are applied and a variety of potential confounders and modifiers of effect are identified and accounted for.

Epidemiologic studies of leukemia have focused on genetic, infectious, and environmental risk factors. Thus a familial aggregation tends to occur (13, 25), and there are some suggestions of a viral etiology (11, 16), particularly in relation to contacts with cats, poultry and cattle. Exposure to benzene seems to be associated with an increased risk of leukemia (19, 34), especially of the acute myeloid type, but also other, more ill-defined exposures to petroleum products might be of etiologic importance (6). Ionizing radiation has been firmly established as a cause of leukemias of all types except the chronic lymphatic kind (8, 23), but it is still rather unclear whether or not low-level radiation might play an etiologic role in the development of leukemia (9, 21). The principal question of the etiologic role of background radiation in cancer causation has gained further interest during the past few years in view of recent concerns about increasing indoor concentrations of radon as a potential lung cancer hazard. A preliminary observation of an association between lung cancer and stone-house residency, as presumably reflecting higher exposure to radon and its daughters (3, 4), and a puzzling finding of an increased rate ratio for leukemias among office personnel in a city in southern Sweden (5), a phenomenon that could be related to background indoor radiation, have provoked our interest in further studies in this respect. This case-referent (case-control) study of acute myeloid leukemia (International Classification of Diseases 1965, ICD 205.00), and particularly background radiation, reflects this interest and is part of an attempt to study leukemias in relation to various occupational and environmental exposures.

Material and methods

The general principles followed in this investigation have already been described in a methodological paper (2).
More specifically, the study considers various exposures, especially background radiation, among individuals deceased from acute myeloid leukemia and among individuals deceased from other causes of death, malignancies excluded, within the county of Ostergotland and during the period 1972-1978. The county of Ostergotland is a lowland area about 50-150 m above sea level, located in southeastern Sweden; there is differentiated industry, but also farming and some forestry.

Source of subjects

The cases were primarily obtained from the Linkoping University Hospital register of deaths, and all individuals with acute myeloid leukemia diagnosed during the study period were included. In a second step, these cases were identified in the parish registers of deaths and burials, from which the referents were then chosen (all deaths are registered in the home parishes, irrespective of where the individual has died). The number of available cases amounted to 46, out of which one newly deceased individual was directly excluded for ethical reasons. Six referents for each case were chosen from the parish registers; they were those in the nearest three register positions before and after each case if they fulfilled certain requirements. Thus they should be of the same gender as the case and similar in age (± 7 a) and without a cancer diagnosis according to the death certificate, since there might be relationships between cancer and the various risk factors for leukemia, and the inclusion of cancer diagnoses among the referents would then result in a distortion of the exposure frequency among the referents as compared to the source population of the cases. In one rather small parish no individuals fulfilled these criteria, and therefore the case from this parish was also excluded. For another case there were only five individuals available as referents, and another three referents turned out to have been selected more than once. Therefore, 44 cases and 260 referents remained for the study. However, out of these 304 subjects, one case was found to have lived abroad for a very long time, and another had suffered primarily from a multiple myeloma and had been treated with cytotoxic agents. These two cases were therefore excluded. Thus the material for the study encompassed 42 cases and 260 referents as selected through a procedure providing a reasonable homogeneity of the material with regard to age, gender, and domicile.

Assessment of exposure

Information about various types of exposure among the cases and referents was obtained with a nine-page questionnaire, preceded by an introductory letter and sent out by mail to the next of kin. The questionnaire contained 30 main questions, out of which 15 concerned occupational exposures, some of them further subspecified with regard to certain details. Four questions were devoted to medical care, particularly the use of drugs and X-ray examinations and treatments. Furthermore, smoking habits were asked for. Aspects of residency were covered in six main questions, and another four questions were given in reference to various environmental aspects, leisure-time activities, etc, as well as information about urban or rural domicile during the lifetime of the individual.
Of the referents sent the questionnaire, 16 were not included in the final analyses: five due to refusal to participate, five due to inability of the next of kin to reply to the questionnaire, and another three due to the impossibility of tracing the next of kin, since the mailing address turned out to be incorrect. Two referents had very recently died and were therefore excluded for ethical reasons, and one referent was found to be a recent immigrant; thus information was finally obtained from 42 cases and 244 referents.

Classification of exposure

For the main purpose of revealing confounding and modification of effect, various potential risk factors for the disease were evaluated on the basis of the exposure situation of the subjects during the 20 a prior to death (table 2). The exposure to background radiation was assessed with a radiation index created to estimate exposures 5-25 a prior to death. For five subjects younger than 25 a, the remaining time period was considered. In principle the background gamma radiation was accounted for in this index, but there should also be a relatively good positive correlation with radon and radon daughters, although these are not known to be of importance for the development of leukemia. The use of a "time window" in looking at exposure allows for an induction-latency requirement and also leads to the disregarding of remote exposure, which might have little or no effect on the development of acute myeloid leukemia; this view would be consistent with the incidence of leukemia peaking about a decade after exposure among the Japanese A-bomb survivors (32).

A couple of Swedish investigations (17, 33) have shown that the gamma radiation in wooden houses is rather similar to outdoor background radiation (the average absorbed dose being about 1 mGy/a or 0.1 rad/a, cosmic radiation included), whereas stone houses seem to produce about double that exposure, and plastered houses and brick houses are in between. The gamma radiation index was assessed blindly with regard to the case-referent status of the subjects and was time-weighted over the years with respect to outdoor and indoor work, leisure time spent outdoors or indoors, and type of residency; furthermore, the contribution to the exposure from various types of building materials at workplaces and in homes was estimated. More specifically, one-third of the time was assumed to have been devoted to each of three main activities, ie, work, "leisure time," and sleeping. The gamma radiation index being denoted by GRI, the estimated background radiation dose in the various situations by r, exposure time in years by t, and the total of the considered time period by T (ie, the "time window" of 20 a, 5 to 25 a prior to death), the described calculations were made according to

GRI = Σi Σj rij tij / T

with summation over the three main activities, i, and over time periods, j, during the "time window." The background radiation, r, was taken as 2 for stone houses, 1 for wooden houses, and 1.5 for "mixed" types of houses (eg, plastered and/or brick houses) on a relative scale. In general, the minimum score always amounted to three points and the maximum score amounted to six in terms of this gamma radiation index. In the data analysis, three exposure levels were chosen, category I encompassing individuals with less than four points, category II being those with four but less than five points, and category III being those having achieved five points or more.
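A minimal sketch of how such an index could be computed from the description above. The subject data are hypothetical, and the way periods are coded (years per building material, for each of the three activities, over the 20-year window) is an assumption about how the questionnaire answers were tabulated, not a reconstruction of the original worksheets.

# Relative gamma radiation scores per building material, as given in the text
R_SCORE = {"wood": 1.0, "mixed": 1.5, "stone": 2.0}
WINDOW_YEARS = 20  # the "time window" of 5-25 a prior to death

def gamma_radiation_index(activity_histories):
    # GRI = sum over activities i and periods j of r_ij * t_ij / T
    gri = 0.0
    for periods in activity_histories.values():
        gri += sum(years * R_SCORE[material] for years, material in periods) / WINDOW_YEARS
    return gri

# Hypothetical subject: 12 a of work in a stone building then 8 a outdoors
# (outdoor exposure scored like a wooden house, since the text notes they are similar),
# leisure time in a plastered ("mixed") home, sleeping in a wooden house.
subject = {
    "work":    [(12, "stone"), (8, "wood")],
    "leisure": [(20, "mixed")],
    "sleep":   [(20, "wood")],
}

gri = gamma_radiation_index(subject)
category = "I" if gri < 4 else ("II" if gri < 5 else "III")
print(f"GRI = {gri:.2f} -> exposure category {category}")  # 4.10 -> category II

The minimum of three points and maximum of six mentioned in the text then correspond to a subject spending the whole window in wooden or in stone surroundings, respectively, for all three activities.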
Exposure to other factors was directly obtained from the questionnaire and only crudely weighted with regard to intensity and duration, requiring a minimum of 1 a of exposure or at least five episodes of X-ray examination or one series of treatment 5 a prior to death. Various leisure-time exposures to chemicals were disregarded as not fulfilling this requirement. Furthermore, all exposures beyond 20 a prior to death were disregarded, and no additional latency-time criterion was applied.

Statistical methods

The statistical analyses of the data were based on the Mantel-Haenszel procedures (27) and the Mantel extension of the Mantel-Haenszel test (26) with regard to trends over the categories of exposure. The principles applied for the determination of the standardized rate ratios have been outlined by Miettinen (29), along with useful principles for evaluating confounding (28) and a method for calculating the confidence interval of a rate ratio (31).

Results

Out of the 42 cases and 244 referents finally included in the study, 71 and 55 %, respectively, were found to belong to exposure category II or III, ie, the crude rate ratio (odds ratio) was 2.1 and the Mantel-Haenszel point estimate 2.0 (table 1). There was a slight trend towards an exposure-effect relationship over the exposure categories [Mantel extension, χ²(1) = 3.34]. Since concerns of validity might affect the reference entity with regard to violent deaths, as perhaps not being representative of the exposure frequency in the source population, the numbers of violent deaths are given in parentheses in table 1. However, the exclusion of violent deaths would have almost no influence on the measures of effect as reflected in the various rate ratios given in the table, and it seems appropriate, therefore, to include violent deaths among the referents.

Since there is only stratification for age in table 1, there should be concern about other, uncontrolled confounders. To identify potential confounding factors, a number of crude rate ratio analyses were also undertaken according to table 2. They showed some effect of exposure to pesticides, X-ray treatment, contacts with animals, and solvent exposure, whereas the effects from other exposures were less pronounced. Furthermore, among the referents (table 3), there turned out to be a positive relationship between a low gamma radiation index and contacts with animals, as well as exposure to pesticides, ie, negative confounding, whereas the relationship to X-ray treatment was positive. Solvent exposure was comparatively common in category II of the gamma radiation index, whereas the distribution was similar for categories I and III. Due to the possible confounding properties of pesticides and X-ray treatment, the material was restricted through the exclusion of individuals with these risk indicators. (The exclusion of individuals with pesticide exposure made the material homogeneous with respect to animal contacts as well.) The results are shown in table 4; the material was somewhat "strengthened," although the trend for the standardized rate ratio became less clear, seemingly due to the influence from solvent exposure (table 3). Stratifying on solvent exposure in addition to age resulted in table 5, with a relatively strong trend over the exposure categories [Mantel extension for the trend, χ²(1) = 4.07]. Notice the lack of positive confounding from solvents between exposure categories I and III in table 3 and the rather strong χ²(1) = 5.12 in table 4.
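To illustrate the age-stratified Mantel-Haenszel approach referred to in these results, the sketch below computes a Mantel-Haenszel odds (rate) ratio and the corresponding chi-square over hypothetical 2 x 2 tables; the counts are invented for illustration and are not the study data.

# Each age stratum: (exposed cases, unexposed cases, exposed referents, unexposed referents)
strata = [
    (6, 4, 20, 40),   # hypothetical younger stratum
    (9, 5, 35, 45),   # hypothetical middle stratum
    (5, 3, 30, 34),   # hypothetical older stratum
]

num = den = a_sum = e_sum = v_sum = 0.0
for a, b, c, d in strata:
    n = a + b + c + d
    num += a * d / n                       # Mantel-Haenszel numerator
    den += b * c / n                       # Mantel-Haenszel denominator
    m1, m0 = a + b, c + d                  # case and referent totals
    n1, n0 = a + c, b + d                  # exposed and unexposed totals
    a_sum += a
    e_sum += m1 * n1 / n                   # expected exposed cases under the null
    v_sum += m1 * m0 * n1 * n0 / (n * n * (n - 1))

or_mh = num / den
chi2 = (a_sum - e_sum) ** 2 / v_sum        # Mantel-Haenszel chi-square, 1 df, no continuity correction
print(f"Mantel-Haenszel odds ratio = {or_mh:.2f}, chi-square(1) = {chi2:.2f}")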
The tendency towards (a numerically strong) effect modification from solvent exposure is noteworthy. Evaluating table 5 with regard to the effect from solvents resulted in a crude rate ratio of 6.0, Mantel-Haenszel χ²(1) = 12.25, the Mantel-Haenszel rate ratio being 6.7 with an approximate 95 % confidence interval of 2.1-14.4. It was not possible to obtain any more-detailed and definite information
Pathological tau deposition in Motor Neurone Disease and frontotemporal lobar degeneration associated with TDP-43 proteinopathy

It has been suggested that patients with motor neurone disease (MND) and those with MND combined with behavioural variant frontotemporal dementia (bvFTD) (ie FTD + MND) or with FTD alone might exist on a continuum based on commonalities of neuropathology and/or genetic risk. Moreover, it has been reported that both a neuronal and a glial cell tauopathy can accompany the TDP-43 proteinopathy in patients with motor neurone disease (MND) with cognitive changes, and that the tauopathy may be fundamental to disease pathogenesis and clinical phenotype. In the present study, we sought to substantiate these latter findings, and test this concept of a pathological continuum, in a consecutive series of 41 patients with MND, 16 with FTD + MND and 23 with FTD without MND. Paraffin sections of frontal, entorhinal, temporal and occipital cortex and hippocampus were immunostained for tau pathology using anti-tau antibodies, AT8, pThr175 and pThr217, and for amyloid β protein (Aβ) using 4G8 antibody. Twenty-four (59 %) patients with MND, 7 (44 %) patients with FTD + MND and 10 (43 %) patients with FTD showed 'significant' tau pathology (ie more than just an isolated neurofibrillary tangle or a few neuropil threads in one or more brain regions examined). In most instances, this bore the histological characteristics of an Alzheimer's disease process involving entorhinal cortex, hippocampus, temporal cortex, frontal cortex and occipital cortex in decreasing frequency, accompanied by a deposition of Aβ up to Thal phase 3, though 2 patients with MND and 1 with FTD did show tau pathology beyond Braak stage III. Four other patients with MND showed novel neuronal tau pathology, within the frontal cortex alone, specifically detected by pThr175 antibody, which was characterised by a fine granular or more clumped aggregation of tau without neurofibrillary tangles or neuropil threads. However, none of these 4 patients had clinically evident cognitive disorder, and this type of tau pathology was not seen in any of the FTD + MND or FTD patients. Finally, two patients, one with MND and one with FTD, showed a tau pathology consistent with Argyrophilic Grain Disease (AGD). Western blotting and the use of 3- and 4-repeat tau antibodies confirmed the histological interpretation of Alzheimer's disease type pathology in all instances except for those patients with accompanying AGD, where the banding pattern on western blot, and immunohistochemistry, confirmed a 4-repeat tauopathy. In all 3 patient groups, amyloid pathology was more likely to be present in patients dying after 65 years of age, and in the presence of the APOE ε4 allele. We conclude that tau pathological changes are equally common amongst patients with MND, FTD + MND and FTD though, in most instances, these are limited in extent. In patients with MND, when cognitive impairment is present this is most likely due to an accompanying/evolving (coincidental) Alzheimer's disease process or, as in a single case, Dementia with Lewy bodies, within the cerebral cortex, rather than as a result of TDP-43 proteinopathy. Conversely, in FTD and FTD + MND, dementia is more likely to be associated with TDP-43 proteinopathy than tau. Hence, the present study shows no progression in severity of (tau) pathology from MND through FTD + MND to FTD, and does not support the concept of these conditions forming a continuum of clinical or pathological change.
Introduction

Motor Neurone Disease (MND), also known as Amyotrophic Lateral Sclerosis (ALS), is classically described as a neurodegenerative disorder of the locomotor system, characterised by degeneration and loss of upper and lower motor neurones, leading to a progressive weakness and wasting of limb, bulbar and trunk musculature, with death usually occurring within 2-3 years of symptom onset [3]. It affects 2-3 people in 100,000 worldwide, males slightly more than females. While about 90 % of cases appear to be sporadic in nature, with no known genetic cause, at least 6 genes are implicated in the pathogenesis of the remaining 10 % of familial cases [3]. These, in order of frequency, are expansions in C9orf72, and point mutations in the SOD-1, FUS, TARDBP, UBQLN1 and VAPB genes. In histological terms, all sporadic, and most familial cases (those associated with C9orf72, TARDBP, UBQLN1 or VAPB), are characterised by the presence of neuronal cytoplasmic inclusions (NCI) within spinal and brainstem motor neurones composed of the TAR DNA binding protein of 43 kDa, TDP-43, whereas cases associated with mutations in SOD-1 and FUS display NCI within these same cell types containing these respective proteins [3].

However, MND is becoming increasingly recognised as a multisystem disorder in which behavioural changes and cognitive deficits can occur [12]. Cognitive change, particularly in executive functions, has been reported in up to half of patients [19,28]. Of these, about 10-15 % of patients fulfil criteria for behavioural variant frontotemporal dementia (bvFTD) [13,28]. In keeping with the pattern of cognitive change, frontal lobe abnormalities have been demonstrated in MND both on structural [2,17] and functional [1,18] imaging. bvFTD may precede, follow or coincide with the onset of motor symptoms [24], reinforcing the inter-relationship between the two disorders. The pathological substrate of dementia in MND, when combined clinically with bvFTD (henceforth termed FTD + MND), has been consistently linked to TDP-43 rather than tau pathology [4,26]. However, the basis for changes underlying cognitive deficits in MND, which do not match up to fully fledged bvFTD, remains unclear. It is of interest, therefore, that in a recent study, Yang and Strong [36] found evidence of both TDP-43 and tau pathology in MND patients with and without cognitive impairment. These authors employed novel tau polyclonal antibodies to investigate tau pathology in 10 patients with clinically and pathologically confirmed Amyotrophic Lateral Sclerosis (ALS) (aka MND). Five showed cognitive impairment (ALSci), as defined by Strong [31], whereas five showed no cognitive impairment. In patients with ALS alone, an antibody directed against tau phosphorylated at Thr175 (pThr175) detected limited neuronal tau aggregates predominantly within entorhinal cortex and amygdala, whereas an antibody directed against tau phosphorylated at Thr217 (pThr217) detected astrocytic tau deposition in frontal cortex as well as in entorhinal cortex and amygdala. In patients with ALSci, a more extensive spread of neuronal pThr175 was seen, involving the frontal lobe, whereas for pThr217 a more extensive astrocytic involvement than in MND alone was observed. These findings prompted Yang and Strong [34] to suggest that in patients with MND with cognitive changes, a coincidental tau and TDP-43 pathology is present, and that widespread (astrocytic) tau pathology may be fundamental to pathogenesis.
Furthermore, Bieniek et al [6] noted excessive tau pathology in a higher proportion of patients with FTLD-TDP associated with an expansion in C9orf72, and in others with FTLD-TDP with no known mutation, when compared to cases of FTLD-TDP with GRN mutations, and suggested that some forms of TDP-43 proteinopathy might favour or promote the development of tauopathy. Hence, the overlap between TDP-43 and tau pathologies in ALS [36] and FTLD [6], and the more marked tau pathology in patients with MND with, rather than without, cognitive impairment [36], could be interpreted as supporting the spectrum/continuum notion of the relationship between ALS and bvFTD. If this were so, it might be postulated that the clinical combination of FTD + MND could be driven in either direction from FTD or from MND through a common pathogenetic pathway. By this argument, it might be anticipated that tauopathy in MND would be exacerbated in FTD + MND, and even more so in FTD. In order to test this hypothesis we have used the same tau polyclonal antibodies used by Yang and Strong [36] to evaluate tau pathology in an independent cohort of patients with MND, as well as in patients with FTD + MND and those with FTD without MND.

Patients

The study group consisted of 80 patients, 41 with a clinical diagnosis of MND (27 males, 14 females; patients #1-41), 16 clinically diagnosed with FTD + MND (10 males, 6 females; patients #42-57) and 23 patients with FTD but without MND (15 males, 8 females; patients #58-80) (Table 1). Fifteen of the FTD group of patients had a predominantly bvFTD phenotype whereas the other 8 patients had a predominant language phenotype (Table 1); for purposes of comparison all were subsumed under the rubric of FTD without MND. Notably, all 23 patients within this FTD group shared a common TDP-43 histological phenotype (see below). No patients were available in which the clinical syndromes of Progressive Non-Fluent Aphasia (PNFA) or Semantic Dementia were combined with MND. The brains of these patients were consecutively acquired by the Manchester Brain Bank over the years 1986 to 2015. All patients were from the North West of England and North Wales, and tissues were obtained through appropriate consenting procedures for the collection and use of human brain tissues. The 16 patients with FTD + MND and the 23 patients with FTD without MND fulfilled relevant clinical diagnostic criteria [14,25,27]. They had all been investigated longitudinally within a specialist dementia clinic using the Manchester Neuropsychological Profile (Man-NP) [30,34] to determine and characterise the nature of their dementia. Some of the MND patients had also undergone this formal neuropsychological assessment, though in most others where this had not been performed, the presence of cognitive impairment (in patients #35 and 36) was deduced from inspection of clinical notes and medical correspondence by specialist neuropsychologists. All 41 patients with MND fulfilled El Escorial criteria [9]. Comparison of the three patient groups showed no significant differences in gender distribution (χ² = 0.095, p = 0.953) or mean age at onset of disease (F(2,65) = 0.89, p = 0.416). However, mean age at death and duration of illness did differ (F(2,65) = 6.0, p = 0.004 and F(2,65) = 33.8, p < 0.001, respectively).
Patients with FTD alone died at a later age than those with MND (p = 0.003), and both patients with MND, and those with FTD + MND, had a shorter disease duration than those with FTD alone (p < 0.001), though those with MND and FTD + MND did not differ in this respect (Table 2). Four patients with MND (patients #10, 14, 30 and 38), 4 with FTD + MND (patients #44, 53, 54 and 56) and 7 with FTD (patients #58, 71-74, 77 and 80) bore an expansion in C9orf72, as evidenced by Southern blot and/or repeat-primed PCR [11,20] (Table 1). Twelve of the other patients with FTD (patients #59-66, 69, 70, 75 and 76) bore a mutation in the progranulin gene (GRN). No mutation was known to be present in the remaining 4 patients (patients #67, 68, 79 and 80) (Table 1). There were no significant differences in age at onset (F(2,65) = 0.158, p = 0.854) or age at death (F(2,65) = 2.10, p = 0.130) between carriers of GRN mutation, C9orf72 expansion or those with no known mutation, though duration of illness did vary significantly between the three groups (F(2,65) = 21.2, p < 0.001), with bearers of GRN mutation having a significantly longer disease course than either those with C9orf72 expansion or those without known mutation (p < 0.001 in both instances), which did not differ from each other (p = 0.140) (Table 2).

Previous pathological diagnostic investigations had shown all MND and FTD + MND patients to display atrophy and loss of motor neurones from trigeminal and hypoglossal cranial nerve nuclei, and anterior horn cells (where spinal cord was available), with the presence of skein-like, or rounded, more solid, TDP-43 immunoreactive neuronal cytoplasmic inclusions (NCI) within surviving cells, or with fine, particulate accumulations of TDP-43, in which the nucleus has been 'cleared' of its normal immunoreactivity (Additional file 1: Figure S1). Patient #35 with MND also had isocortical DLB [22], along with Alzheimer-type pathology, though typical TDP-43 pathology was still seen in anterior horn cells of the spinal cord (see Additional file 1: Figure S1). Thirty-four MND patients showed no extramotor TDP-43 pathology at all, whereas 7 MND patients showed occasional or moderate numbers of NCI within dentate gyrus granule cells, four of whom also displayed moderate numbers of, or many, TDP-43 immunopositive granules within the cytoplasm of small pyramidal cells of layer II of the frontal and temporal cortex, though well-formed NCI were only rarely present. On the other hand, all 16 patients with FTD + MND showed widespread TDP-43 immunoreactive NCI within hippocampal dentate gyrus granule cells, and numerous cells in layer II of the frontal and temporal cortex contained TDP-43 immunopositive granules, with well-formed NCI in others, in the relative absence of TDP-43 immunoreactive neurites, consistent with neuropathological classification of FTLD-TDP type B [20]. Additionally, there was loss of motor neurones from trigeminal and hypoglossal cranial nerve nuclei, and anterior horn cells (where spinal cord was available), with TDP-43 immunoreactive NCI within surviving cells. Conversely, all 23 patients with FTD alone showed numerous TDP-43 immunoreactive NCI and neurites in layer II of the frontal and temporal cortex with variable numbers of TDP-43 immunoreactive NCI in granule cells of the dentate gyrus of the hippocampus, consistent with pathological classification of FTLD-TDP type A [20].
Additionally, those patients bearing GRN mutations showed variable presence of TDP-43 immunoreactive neuronal intranuclear inclusions (NII) in neurones of layer II of frontal and temporal cortex, but these were not seen in those patients bearing an expansion in C9orf72, or in the 4 cases without known mutation. Dipeptide repeat proteins consisting of poly-GA, poly-GP and poly-GR proteins were present in CA4 neurones of hippocampus and granule cells of the dentate gyrus and cerebellum in all 16 C9orf72 expansion bearers, irrespective of clinical phenotype [11].

Immunohistochemistry

Paraffin sections were cut at 6 μm from formalin-fixed blocks of frontal lobe (BA8/9), temporal lobe (BA21/22) including anterior and posterior hippocampus and entorhinal cortex, occipital lobe (BA17/18), corpus striatum and cerebellum from all individuals. We did not include sections from 'neighbouring' areas such as insular and cingulate cortex in the study, as previous diagnostic neuropathological analyses had not revealed these to be different (in terms of tau pathology) from the chosen areas of temporal and frontal cortex, respectively. Following titration to determine optimal immunostaining, antibodies were identically employed in a standard IHC protocol, as described previously [11,21]. Frontal, temporal (to include hippocampus and entorhinal cortex) and occipital lobe sections were immunostained for tau proteins. The following tau antibodies were employed: AT8 (1:750), pThr175 and pThr217 (both of which were used at 1:1000 dilution). These latter antibodies are polyclonal phospho-tau antibodies generated against the sequences Ac-SLP[pT]PPTREPC-amide and Ac-RIPAK[pT]PPAPKC-amide, respectively. Full details regarding the production and specificity of these antibodies have been presented elsewhere [36]. Negative controls omitting pThr175 and pThr217 antibody, and normal brain sections known to be free from tau pathology using AT8 antibody, were employed to substantiate the specificity of the pThr175 and pThr217 antibodies. Selected sections of frontal and temporal cortex (see later) were immunostained for 3-repeat (3-R) and 4-repeat (4-R) tau proteins using RD3 and RD4 antibodies (Millipore), at dilutions of 1:1500 and 1:200, respectively. For each tau antibody, antigen unmasking was performed by pressure cooking in citrate buffer (pH 6.0, 10 mM) for 30 min, reaching 120 °C and >15 kPa pressure. Additional sections of frontal, temporal (to include hippocampus and entorhinal cortex) and occipital cortex, along with those of corpus striatum and cerebellum, were immunostained for amyloid plaques using 4G8 antibody (1:3000). Antigen retrieval was in this case performed by immersion in 95 % formic acid for 5 min prior to incubation in primary antibody. Sections of frontal and temporal cortex were also immunostained for TDP-43 and phosphorylated α-synuclein as above.

AT8, pThr175 and pThr217 immunostained sections were scored microscopically at an objective magnification of x25 (overall magnification of x250) for the presence and severity of tau pathological changes, as visualised by each of the tau antibodies, employing the following rating scale:

0 = no tau pathology present.
0.5 = rare (ie 1-5 tau immunoreactive neurofibrillary tangles/neurites per section).
1 = 1-5 tau immunoreactive neurofibrillary tangles/neurites per x250 microscope field.
2 = 5-10 tau immunoreactive neurofibrillary tangles/neurites per x250 microscope field.
3 = more than 10 tau immunoreactive neurofibrillary tangles/neurites per x250 microscope field.

Cases were also assessed for the extent and distribution of neurofibrillary (AT8) and amyloid plaque (4G8) pathology, employing Braak and Braak [7] and Thal [33] staging procedures, respectively. Cases where no tau pathology whatsoever was present were staged 0; those where only rare neurofibrillary tangles were present in entorhinal cortex alone were staged 0-I. Stage I cases showed abundant tangles in entorhinal cortex.

Table 2. Mean (±SD) values for age at onset of symptoms, age at death and duration of illness for patients with Motor Neurone Disease (MND), behavioural variant Frontotemporal Dementia and Motor Neurone Disease (FTD + MND) and FTD. Also shown are mean (±SD) values for age at onset of symptoms, age at death and duration of illness for those cases of MND, FTD + MND and FTD, collectively, with mutations in GRN, expansion in C9orf72, or no known mutation, along with mean (±SD) values for age at onset of symptoms, age at death and duration of illness for those cases of MND, FTD + MND and FTD, collectively, with and without amyloid pathology, and those with and without (any type of) tau pathology.

Western blotting

200-500 mg samples of frozen frontal and temporal cortex were dissected from selected tau-immunopositive cases (see later) and subjected to western blot analysis of insoluble tau, as we have described elsewhere [32]. Briefly, sarkosyl-insoluble pellets were prepared by homogenization of tissue samples in 20 vol (v/w) of extraction buffer containing 10 mM Tris-HCl (pH 7.5), 0.8 M NaCl, 10 % sucrose, 1 mM EGTA and 2 % sarkosyl, and incubated for 30 min at 37 °C. After centrifugation at 20,000 g for 10 min at 25 °C, the supernatants were taken, transferred to 1.5 mL tubes and ultracentrifuged at 100,000 g for 20 min at 25 °C. The pellets were washed by ultracentrifugation with 0.5 mL of sterile saline, solubilized in SDS-sample buffer and subjected to SDS-PAGE on a 4-20 % gradient polyacrylamide gel (Wako). Proteins were transferred to PVDF membrane, incubated overnight with the anti-tau monoclonal antibody T46 (Thermo Scientific), followed by biotinylated secondary antibody and avidin-biotin complex (Vector), and developed with diaminobenzidine and nickel chloride.

Statistical analysis

Comparisons of semiquantitative scores for severity of AT8, pThr175 and pThr217 immunostaining in frontal and temporal cortex, entorhinal cortex and CA1 region of hippocampus were performed using the Kruskal-Wallis test, with post-hoc Mann-Whitney tests where the Kruskal-Wallis test yielded a significant difference between antibody staining scores. Comparisons of APOE ε4 allele frequency between MND, FTD + MND and FTD groups, and between cases of MND, FTD + MND and FTD, collectively, with and without amyloid deposition, were made using the chi-squared test. Comparisons of mean age at onset, age at death and duration of illness between patients with MND, FTD + MND and FTD, with and without amyloid deposition, were made using unpaired t-tests. Significance levels were set at p < 0.05 throughout.

All research reported in the paper was performed with ethical approval under the Manchester Brain Bank Generic Tissue Bank Ethics approved by the Newcastle and North Tyneside Ethics Committee.
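The group comparisons described under Statistical analysis can be illustrated with a small sketch: semiquantitative scores for the three tau antibodies in one region are compared with a Kruskal-Wallis test, followed by pairwise Mann-Whitney tests only if the overall test is significant. The score vectors below are hypothetical and are not the study data.

from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical semiquantitative scores (0-3 scale) for one brain region
scores = {
    "AT8":     [0, 0.5, 1, 2, 0, 1, 3, 0.5],
    "pThr175": [0, 1,   1, 2, 0, 2, 3, 1],
    "pThr217": [0, 0.5, 1, 1, 0, 1, 2, 0.5],
}

h_stat, p_overall = kruskal(*scores.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_overall:.3f}")

if p_overall < 0.05:  # post-hoc pairwise tests only when the overall test is significant
    for (name_a, a), (name_b, b) in combinations(scores.items(), 2):
        u_stat, p = mannwhitneyu(a, b, alternative="two-sided")
        print(f"{name_a} vs {name_b}: U = {u_stat:.1f}, p = {p:.3f}")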
Eleven patients (27 %) with MND (patients #18-25 and #27-29), 7 patients (44 %) with FTD + MND (patients #51-57) and 9 with FTD (39 %) showed sparse tau neuronal pathology (ie a single or a few neurofibrillary tangles and/or a few neuropil threads per section, usually only in a single brain region, and then most often in the entorhinal cortex) with any, or all, of the 3 tau antibodies employed. These 27 patients were classed as Braak stage 0-I/I. The remaining 13 patients (31 %) with MND (patients #26, 30-41) (but none with FTD + MND and only 1 with FTD (patient #79 with PNFA)) displayed a 'significant degree of tau pathology', as defined by the presence of a few to many neurofibrillary tangles and/or neuropil threads in several brain regions, usually with all 3 tau antibodies and again usually to a similar extent and with a similar distribution. Eight of these patients (patients #26, 30-36 and 79) were classed as having Braak stages II and greater. Seven patients (patients #30-36) showed moderate to severe involvement of entorhinal cortex with mild to severe involvement of CA1 region of hippocampus, but this was without neocortical involvement in patients #30-34, consistent with Braak stages I-II/II. The other 3 patients (patients #35, 36 and 79) (see Additional file 2 for full clinical and neuropathological details of patient #35) also showed moderate or severe involvement of inferior temporal gyri and superior frontal cortex (Braak stage IV), and 2 of these (patients #36 and 79) had some involvement of the visual association cortex, but not primary visual cortex, consistent with Braak stages IV-V. The pattern of tau pathology in the remaining 5 MND patients (patients #37-41) was such that it was not possible to Braak stage these cases (see later). Patients with MND were no more, or no less, likely to display some/any degree of tau pathology than those with FTD + MND, or those with FTD (χ2 = 0.037, p = 0.982). Three major patterns of tau pathology were noted, and patients were grouped accordingly. Group 1 tau staining pattern was most common, irrespective of the actual amount of staining present, being seen in 19 (of the 24 tau positive) patients with MND (patients #18-36), in all 7 patients with FTD + MND (patients #51-57) and in 9 of the patients with FTD (patients #71-79). The staining pattern resembled that of an Alzheimer's disease-type process (Fig. 1a-f), with a few to many neurofibrillary tangles and neuropil threads being present within entorhinal cortex (in all 19 tau-positive patients with MND, all 7 tau-positive patients with FTD + MND and all 10 tau-positive patients with FTD), CA1 region of hippocampus (16/17 patients with MND, all 7 patients with FTD + MND and 8/9 patients with FTD), inferior and middle temporal gyri (15/17 patients with MND, 4/7 patients with FTD + MND and 5/9 patients with FTD), superior frontal cortex (9/17 patients with MND, 1/5 patients with FTD + MND and 5/9 patients with FTD) and visual association cortex (1/17 patients with MND and 1/9 patients with FTD) (see Fig. 1). Interestingly, 4 patients with MND (patients #19, 20, 25 and 26) also showed extensive CA2 tau pathology. Group 2 tau staining pattern was seen in 4 patients (patients #37-40) (Fig. 2).
Here, there was occasional to frequent tau immunostaining of neurones of superior frontal cortex, and to a lesser extent inferior temporal cortex, but no involvement of entorhinal cortex, occipital cortex or CA1 region of hippocampus, with pThr 175 antibody: no such immunostaining was seen with either AT8 or pThr 217 antibodies. In contrast to the above group of patients, the tau immunostaining appeared either finely, or coarsely, granular with no neurofibrillary tangle-like structures, or neuropil threads, being seen. Although one of the patients showing the group 2 form of tau pathology (patient #38) bore an expansion in C9orf72, the tau pathology did not appear to be specifically associated with this genetic change as none of the other 3 patients with this particular tau pathology bore an expansion in C9orf72, nor did any of the other 7 expansion carriers display group 2 type changes in tau. In patients #41 (see Additional file 2 for full clinical and neuropathological details) and 80 (group 3), a third pattern of tau pathology was seen (Fig. 3). In this, there was mild neurofibrillary tangle formation in granule cells of the dentate gyrus of the hippocampus, and in areas CA3 and CA4. However, there was total involvement of CA2 region with all cells being affected by neurofibrillary tangles or containing amorphous tau but without apparent cell loss. There was severe loss of cells from CA1 and subiculum, with severe hippocampal sclerosis, with the remainder containing neurofibrillary tangles. Likewise, the entorhinal cortex was severely affected (especially layer II stellate cells) and this extended into layers III and V of the adjoining inferior temporal gyrus, thinning out to minimal involvement in superior temporal gyrus, and superior frontal gyrus. In addition to the neuronal pathology, there was dense oligodendroglial cell involvement in the form of tangles resembling coiled bodies. These were most numerous in white matter in entorhinal cortex and inferior temporal gyrus, becoming infrequent in superior temporal and superior frontal gyri. This pattern of tau pathology was consistent with Argyrophilic Grain Disease (AGD). In some patients, occasional glial cells, resembling astrocytes, also showed some granular, or fibrillary, tau immunoreactivity with all 3 tau antibodies, though for the most part this did not adopt a consistent pattern, nor was it present in anything but isolated cells. Notably, we did not observe any specific immunostaining of glial cells of the kind described by Yang and Strong [36] using pThr 217 antibody in any patient.
Comparisons between immunostaining with AT8, pThr 175 and pThr 217 antibodies
Semiquantitative scores for tau pathology, as detected by AT8, pThr 175 and pThr 217 antibodies, were compared in each of entorhinal cortex, CA1 region of hippocampus, temporal and frontal neocortex by Kruskal-Wallis test. No significant difference between the degree of tau antibody staining was detected for CA1 region, entorhinal cortex, temporal cortex, or frontal cortex either when all 80 patients were grouped together, or when split according to clinical grouping (Table 3). Patients were also grouped according to their pattern of tau pathology (as described above) including, as group 4, those patients with no or isolated tau (ie patients #1-17, #42-50 and #58-70).
Again, no significant difference between the degree of tau staining with each of the three antibodies was detected for CA1 region, entorhinal cortex, temporal cortex, or frontal cortex for tau groups 1, 3 and 4. However, for tau group 2 there was a significant difference between the degree of tau antibody staining in frontal cortex (χ2 = 10.51, p = 0.005), but not in the other 3 regions (Table 3). Post hoc analysis showed that the level of tau staining was significantly higher with pThr 175 than with pThr 217 (p = 0.029) or AT8 (p = 0.029) antibodies, but the latter 2 did not differ significantly (p = 0.999), thereby bearing out microscopic observations. Tau isoform analysis In order to further characterise the molecular nature of the tau pathology present in each tau group, sections of frontal and/or temporal cortex from selected patients (ie tau group 1, patients #23, 31, 35, 36, 53, 56, 78 and 80; tau group 2 patients #37-40; tau group 3, patients #41 and 79) were subjected to immunostaining with 3-R (RD3) and 4-R (RD4) tau antibodies. These patients were selected because they showed the greatest levels of tau pathology within each of their respective groups, and were therefore considered to be most informative as regards the 3 patterns of tau pathology seen on AT8, pThr 175 and pThr 217 immunostaining. Sections of temporal and frontal cortex from tau group 1 cases showed neurofibrillary tangles, neuropil threads and neuritic plaques to be strongly immunoreactive for 4-R tau (Fig. 4a, b) and also, but less intensely so, for 3-R tau proteins (not shown). Sections of frontal cortex from tau group 2 cases showed neurones to be weakly immunoreactive for 3-R tau (not shown), but more strongly for 4-R tau protein (Fig. 4c). In tau group 3, there was strong 4-R tau immunostaining of neurofibrillary tangles and amorphous tau (pretangle) in cells of CA1 region, and amorphous tau staining in CA2 neurones, with tangles also being present in some CA2 cells (Fig. 4d). Tau grains were also strongly 4-R tau immunoreactive (Fig. 4e), as were oligodendroglial cells with coiled bodies in the adjoining white matter (Fig. 4f). The well-formed neurofibrillary tangles in CA2 region were also 3-R tau immunoreactive, but grains and glial cells were negative for 3-R tau (not shown). Where available, frozen tissue samples of frontal and/or temporal cortex were taken from selected patients in each tau group (group 1, patients #23, 26, 30, 31, 34, 35, 55 and 80; group 2, patients #39 and 40; group 3, patient #41) and subjected to western blot analysis. Unfortunately, in most patients the amount of insoluble tau extractable from the tissue samples was too low to detect on blotting, even on 5-fold enrichment of applied sample. However, in patients #30 and 35 (tau group 1), 40 (tau group 2) and 41 (tau group 3) clear banding patterns were obtained from temporal, but not frontal, cortical samples which enabled molecular classification of the pathological tau proteins present (Fig. 5). Patients #30, 35 and 40 (lanes 1-3) showed an Alzheimer's disease-like triplet banding pattern comprising bands of hyperphosphorylated full-length tau at 60, 64 and 68 kDa, though various C-terminal fragments and smears were also detected. In contrast, the banding pattern in patient #41 with AGD (lane 4) is characteristic of 4-repeat tauopathy with major bands at 64 and 68 kDa. 
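As a quick illustration of the molecular classification used above (an Alzheimer-type 60/64/68 kDa triplet of hyperphosphorylated tau versus a 64/68 kDa doublet typical of a 4-repeat tauopathy such as AGD), the following Python sketch maps a set of observed band positions to a class label. The tolerance value and the rule itself are our own simplification of the description in the text, not part of the published analysis.

```python
def classify_tau_blot(bands_kda, tol=1.0):
    """Assign a crude molecular class from sarkosyl-insoluble tau band positions (kDa)."""
    def has_band(target):
        return any(abs(b - target) <= tol for b in bands_kda)

    if has_band(60) and has_band(64) and has_band(68):
        return "Alzheimer-type triplet (3-R + 4-R tau)"
    if has_band(64) and has_band(68):
        return "4-repeat tauopathy-like doublet (e.g. AGD)"
    return "unclassifiable / insufficient signal"


# Patients #30, #35 and #40 showed the triplet; patient #41 (AGD) the 64/68 kDa doublet
print(classify_tau_blot([60, 64, 68]))   # Alzheimer-type triplet
print(classify_tau_blot([64, 68]))       # 4-repeat tauopathy-like doublet
```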
Cerebral amyloid angiopathy (CAA) was generally absent in all 3 diagnostic groups, totally so in the FTD + MND group, and affecting only a single patient in the FTD group. Occasional leptomeningeal arteries were affected in 4 MND patients (patients #15, 16, 29 and 36), though in 3 other MND patients (patients #17, 31 and 33) and the FTD patient with PNFA phenotype (patient #79) leptomeningeal CAA was more extensive, particularly in the occipital cortex, and in one of these (patient #33) this also involved capillaries within the primary visual cortex.
Consequently, there were 13 patients with MND (patients #18-26, 37-39 and 41), 6 patients with FTD + MND (patients #51-56) and 6 patients with FTD (patients #71-76) that showed some degree of tau pathology but no amyloid plaque pathology at all. All of these were at Braak stage 0-I/I except patient #41 where no Braak classification was possible. Conversely, there were 3 patients with MND (patients #15-17) and 2 with FTD (patients #69 and 70) who showed amyloid plaque formation without tau pathology, 2 being at Thal phase 1 (patients #15 and 16) and 3 being at Thal phase 2 (patients #15, 69 and 70).
Amyloid, tau, age and Apolipoprotein E (APOE) genotype
Where relevant age at onset and death data was available, patients with MND, MND + FTD or FTD, collectively, showing amyloid plaque formation were significantly older, both at onset (p = 0.003) and at death (p = 0.001), than those not showing amyloid plaque formation. However, duration of illness did not differ between each group (p = 0.343) (Table 2). By contrast, there were no significant differences between patients with MND, MND + FTD or FTD, collectively, showing any type of tau pathology and those without tau pathology at all, for age at onset (p = 0.493), age at death (p = 0.726) or duration of illness (p = 0.364) (Table 2). Nonetheless, there was a significant effect of age. Irrespective of APOE genotype, 14 of the 15 patients showing amyloid plaques were over 65 years of age at death. Conversely, only 23 of the 53 patients without amyloid were over 65 years of age at death, and of these only 3 bore APOE ε4 allele. The other 5 APOE ε4 allele bearers not showing amyloid plaque formation were all under 65 years of age at death. Patients with MND, MND + FTD and FTD, collectively, were therefore significantly more likely to show amyloid in their brains if they died after the age of 65 years (χ2 = 11.7, p < 0.001). Indeed, all except 1 of the 10 patients with MND, MND + FTD and FTD, collectively, who showed both amyloid and tau in their brains were over 65 years of age at death, and of these 9 patients, 6 were bearers of APOE ε4 allele. Hence, patients with MND, MND + FTD and FTD most likely to show amyloid in their brains were those who died after the age of 65 years and bore APOE ε4 allele.
Discussion
Although the presence of some degree of tau pathological changes in patients with ALS/MND [16,29] or FTLD [29] has been anecdotally reported, there has been little work in which this has been systematically studied. Using novel antibodies to tau phosphorylated at Ser208/210, Thr175 and Thr217, Yang and Strong [36] investigated 5 MND patients with cognitive impairment (ALSci) and 5 others with no cognitive impairment (ALS) (as defined by Strong et al. [31]).
In the ALS patients, they observed a limited number of intraneuronal tau inclusions (neurofibrillary tangles) and neuropil threads in temporal lobe structures, mostly in entorhinal cortex and amygdala, less so in hippocampus and frontal and cingulate cortex, in 1-3/5 cases, which were broadly similarly immunoreactive with all 3 antibodies. A similar type of tau pathology was seen in the entorhinal cortex, amygdala and hippocampus in the 5 ALSci cases, but in these the frontal cortex and cingulate gyrus were more often involved (usually in 3-5/5 cases). Again, the level of immunostaining with all 3 antibodies was roughly similar. Although Yang and Strong [36] did not perform Braak staging for neuronal tau, from their descriptions it can be inferred that cases of ALS were at Braak stages 0/I, whereas those with ALSci may have been at Braak stages III-IV. In the present study we have shown there to be 'significant' neuronal tau pathology in 59 % patients with MND, 44 % patients with FTD + MND and 44 % patients with FTD, whereas some degree of amyloid pathology was present in only 34 % patients with MND, 7 % patients with FTD + MND and 26 % patients with FTD. In this study, we have also employed the same antibodies to tau phosphorylated at Thr175 and Thr217, along with commercial AT8 antibody, and have supplemented these observations with 3-R and 4-R tau immunostaining and western blotting, on selected patients. Analysis of the patterns of tau and amyloid plaque pathologies suggested several 'profiles' to be present. Firstly, in those 9 patients with MND (patients #18-26), 6 with MND + FTD (patients #51-56) and 6 with FTD (patients #71-76), where minimal temporal lobe tau (Braak stage 0-I/I) but no amyloid plaque pathologies were present, the tau changes might be simply considered to be 'age-related' and unlikely to be associated with (early stage) Alzheimer's disease, given the lack of amyloid pathology [23]. Nonetheless, the concept of Primary Age-Related Tauopathy (PART) has been promoted to describe cases where tau pathology, especially medial temporal lobe tau, occurs in the complete absence of amyloid plaque deposition (Thal phase zero), or with at most minimal amounts [10,15]. Such a designation would encompass pathologies formerly described as 'tangle only dementia', or 'tangle predominant senile dementia', where extensive tau pathology, but usually not beyond Braak stage III-IV, is seen (sometimes) in the presence of an identifiable dementia or cognitive impairment [5,15,35]. Unfortunately, because of the low level of tau pathology present, and despite a 5-fold enrichment of sample, it was not possible to demonstrate any tau banding patterns on western blot in either frontal or temporal cortex in any of the patients, which might have illuminated the molecular nature of this staining. Nonetheless, the neurofibrillary changes present were detected by both 3-R and 4-R tau immunostaining, as is typical for Alzheimer's disease, and as has been reported in PART by others [15]. Consequently, these 21 patients with limited temporal lobe tau pathology, but no amyloid, might alternatively be considered to fall under the 'umbrella' of PART.
Secondly, in those 10 patients with MND (patients #27-36), 1 with FTD + MND (patient #57) and 3 with FTD (patients #77-79), where both tau AND amyloid pathology was present, the pattern and distribution of tau pathology within the temporal lobe (and other regions when present) was of the type associated with Alzheimer's disease, ie neurofibrillary tangles, neuropil threads and occasionally neuritic plaques. However, in most instances the extent of neurofibrillary pathology clearly fell well short of that associated with fully developed Alzheimer's disease, and none of the patients met pathological diagnostic criteria for (a high probability of) Alzheimer's disease [23]. For the most part, this can be interpreted as 'incidental' and probably age-related, being of that type commonly seen in many older, healthy individuals and considered unlikely to generate significant clinical dysfunction [8]. Nonetheless, three patients did meet pathological criteria for an intermediate likelihood of Alzheimer's disease [23]. One of these patients (patient #36) showed mild cognitive impairment, another (patient #35) also had isocortical DLB and was clinically demented, and the third (patient #79) had FTD (with PNFA). Where tau pathology was sufficiently extensive to make western blotting possible (in patients #30 and 35) this produced a banding pattern consistent with Alzheimer's disease, and neurofibrillary changes were detected by 3-R and 4-R tau immunostaining, again consistent with (an evolving) Alzheimer's disease pathology. Thirdly, in 4 patients (patients #37-40) an unusual pattern of tau pathology (fine or coarse granules) was seen, which was only demonstrated by pThr 175 immunostaining, and not at all with pThr 217 or AT8 antibodies. Such changes were most prominent in frontal cortex, being uncommon in, or absent from, temporal cortex. None of the 4 patients were considered to have shown overt clinical evidence of cognitive impairment, although this had not been formally assessed in any of the 4 patients. Again, despite 5-fold enrichment of sample, it was not possible in patients #37-39 to demonstrate on western blot from the frontal cortex any tau banding pattern relevant to the pThr 175 tau pathology seen histologically in the frontal cortex of these patients. However, in patient #40 a pattern resembling that of Alzheimer's disease was seen in the temporal cortex sample, consistent with the presence of limited tau neurofibrillary tangle formation and mild amyloid deposition on histological inspection. Immunostaining for both 3-R and 4-R tau showed occasional nerve cells in frontal cortex to be immunoreactive for both, again consistent with the presence of mild Alzheimer-type pathology within temporal lobe only. Consequently, the nosology, and significance, of the pThr 175 frontal cortical tau pathology of these cases (including case #40) presently remains uncertain. Yang and Strong reported the presence of tau-immunoreactive astrocytes, especially within the frontal cortex, amygdala and entorhinal cortex, that were generally much more common in ALSci than ALS, and were more strongly detected using pThr 217 antibody than pThr 175 or pSer 208/210 antibodies [36]. From the descriptions presented, it is difficult to ascertain precisely just how common this glial cell pathology might have been, but from inspection of the tabulated data, it would appear to be sparse in any region of brain in ALS in most patients, being relatively frequent only in isolated individuals (in 1/5 studied).
In ALSci, tau-positive glial cells were seen in a greater proportion of patients (at least in frontal cortex), but were not seemingly present in any greater numbers than in ALS alone. In the present study, tau-positive astrocytes were not, or only very rarely, seen irrespective of diagnosis, these being equally detected by AT8, pThr 175 and pThr 217 antibodies. The reasons for this discrepancy are not clear, but may relate to case selection or tissue processing. In the present study, cases of MND, FTD + MND and FTD were unselected, representing consecutive cases entering Manchester Brain Bank from 1986 onwards. The MND patients, with the exception of two, were not thought to exhibit cognitive change, although in the absence of formal neuropsychological assessments, the presence of subtle changes cannot be excluded. Patients with FTD + MND and FTD had undergone extensive neuropsychological assessment and their pattern of behavioural, personality and cognitive change was well documented [30,34]. The degree of clinical and pathological overlap between the ALSci cases reported by Yang and Strong [36] and those in the current series is open to debate. Hence, in the present study, we were able to substantiate Yang and Strong's findings of neuronal/neuritic tau pathology in over half of patients with MND, this also being similarly present in around 40 % of FTD + MND and FTD. The tau pathology was of a type similar to that seen in Alzheimer's disease, albeit to a much more limited extent, usually confined to temporal lobe structures, sometimes restricted to entorhinal cortex. The clear inference from the present observations is that when cognitive impairment does occur in MND, this is most likely to be associated with Alzheimer's disease pathology, particularly involving medial temporal lobe structures. Exacerbation of this extent of pathology in ALS/MND might explain the cognitive deficits seen in patients with ALSci reported by Yang and Strong [36].
2023-01-31T14:33:43.730Z
2016-03-31T00:00:00.000
{ "year": 2016, "sha1": "3f87d38a011ae363e57dc94fb3b77e45a384dd35", "oa_license": "CCBY", "oa_url": "https://actaneurocomms.biomedcentral.com/track/pdf/10.1186/s40478-016-0301-z", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "3f87d38a011ae363e57dc94fb3b77e45a384dd35", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
215820099
pes2o/s2orc
v3-fos-license
Pesticide use and risk of Hodgkin lymphoma: results from the North American Pooled Project (NAPP)
Purpose The purpose of this study was to investigate associations between pesticide exposures and risk of Hodgkin lymphoma (HL) using data from the North American Pooled Project (NAPP). Methods Three population-based studies conducted in Kansas, Nebraska, and six Canadian provinces (HL = 507, Controls = 3886) were pooled to estimate odds ratios and 95% confidence intervals for single (never/ever) and multiple (0, 1, 2–4, ≥ 5) pesticides used, duration (years) and, for select pesticides, frequency (days/year) using adjusted logistic regression models. An age-stratified analysis (≤ 40/ > 40 years) was conducted when numbers were sufficient. Results In an analysis of 26 individual pesticides, ever use of terbufos was significantly associated with HL (OR: 2.53, 95% CI 1.04–6.17). In age-stratified analyses, associations were stronger among those ≤ 40 years of age. No significant associations were noted among those > 40 years old; however, HL cases ≤ 40 were three times more likely to report ever using dimethoate (OR: 3.76, 95% CI 1.02–33.84) and almost twice as likely to have ever used malathion (OR: 1.86, 95% CI 1.00–3.47). Those ≤ 40 years of age reporting use of 5 + organophosphate insecticides had triple the odds of HL (OR: 3.00, 95% CI 1.28–7.03). Longer duration of use of 2,4-D, ≥ 6 vs. 0 years, was associated with elevated odds of HL (OR: 2.59, 95% CI 1.34–4.97). Conclusion In the NAPP, insecticide use may increase the risk of HL, but results are based on small numbers. Electronic supplementary material The online version of this article (10.1007/s10552-020-01301-4) contains supplementary material, which is available to authorized users.
Introduction
Hodgkin lymphoma (HL) is a cancer of the lymphatic system with an estimated 8500 new cases in the USA and 990 new cases in Canada in 2017 [1,2]. Men are at slightly greater risk of HL compared to women [3]. Epidemiological evidence suggests a viral etiology with Epstein-Barr virus (EBV), which also causes mononucleosis [3]. Those with a history of EBV infection are at 2-3 times greater risk of developing HL [4]. However, this link is more clearly established for classical Hodgkin lymphoma and less so for the other major subtype, nodular lymphocyte-predominant Hodgkin lymphoma [4]. Family history, genetics (many susceptibility polymorphisms map to genes that affect immune function such as the human leukocyte antigen (HLA) region [5]), autoimmune disorders, immunodeficiency and tobacco use are also associated with HL [3]. The North American Pooled Project (NAPP) [35] included case-control studies from a broad geographic range with diverse agricultural practices and different occupational and non-occupational exposures to pesticides. The aim of this analysis was to use these pooled case-control data from the NAPP to investigate possible associations between self-reported pesticide use and HL. The association with the use of multiple pesticides, the association with individual pesticides and the association with duration and frequency of use for select pesticides was investigated.
The North American Pooled Project (NAPP)
The NAPP is composed of four case-control studies conducted in Kansas, Iowa and Minnesota, and Nebraska (1980s) in the United States and the Cross-Canada Study of Pesticides and Health (CCSPH, 1990s).
Data from three of these studies were harmonized to provide larger numbers to evaluate possible associations between agricultural exposures and risk of several lymphatic and hematopoietic cancers. Complete details of study design, participant recruitment, data collection and harmonization are described elsewhere (NAPP [35], Iowa and Minnesota [36], Kansas [37], Nebraska [38] and CCSPH [39]). The questionnaire design in the CCSPH was modeled after the studies in Kansas and Nebraska, allowing for efficient harmonization and pooling of data.
Population selection and outcome ascertainment
The study from Iowa and Minnesota [36] did not recruit HL cases; thus, controls from this study were excluded from this analysis, which includes cases and controls from three of the four NAPP studies. In Kansas, data were obtained from 121 newly diagnosed, pathologist-confirmed HL cases (international classification of disease (ICD)-9 code 201), among men aged 21 years and older, identified from a population-based registry covering the state of Kansas from 1976 to 1982 [37]. Population-based controls (N = 948) identified via random-digit dialing, Medicare, or state mortality files were frequency matched to cases on age (± 2 years) and vital status. The response rate was 69.9% among HL cases and 94.0% among controls. For the Nebraska study, men and women 21 years of age and older, with a first diagnosis of HL from 1983 to 1986, were identified through the Nebraska Lymphoma Study Group and area hospitals in eastern Nebraska [38]. Controls were selected from the same area with 3:1 frequency matching by race, sex, vital status and age (± 2 years) by two-stage random-digit dialing, Medicare, or state mortality files. Response rates were 91.0% for cases and 87.2% for controls. Proxy respondents were used for participants unable to complete study questionnaires independently in Kansas (26.5% cases and 52.3% controls) and Nebraska (22.9% cases and 43.6% controls). In the CCSPH, incident cases among men, 19 years of age and older, with a first diagnosis of HL between 1991 and 1994 were ascertained from cancer registries in six Canadian provinces, except for hospital ascertainment in Quebec [39]. Controls were men selected from provincial health insurance records (Alberta, Saskatchewan, Manitoba and Quebec), telephone listings (Ontario), or voters' lists (British Columbia) stratified by age (± 2 years) and province. The studies recruited cases with multiple lymphatic and hematopoietic cancers. Response rates were 48.0% for controls and 67.1% for NHL cases. Response rates for HL were not reported. A total of 507 HL cases and 3886 population-based controls were available from Kansas, Nebraska and the CCSPH. The controls were matched to the overall age groupings of all cancer cases recruited by the NAPP and not Hodgkin lymphoma cases specifically. Therefore, the age distribution varies between the cases and controls included in this analysis.
Pesticide use in the North American Pooled Project
Pesticide use was self-reported via interviewer-administered questionnaire by telephone in Kansas and Nebraska and by mailed questionnaire in the CCSPH. In the Kansas study, cases and controls, or their proxy respondents, were interviewed by telephone and reported information on pesticides used and the names and locations of companies where pesticides were purchased.
To corroborate self-reported pesticide use, pesticide suppliers for 110 participants from the Kansas study were contacted and asked to provide information on the participants' crops and their herbicide and insecticide purchases. Approximately 60% of self-reports agreed with suppliers' records of purchases [37,40]. In the Nebraska study, blinded telephone interviews conducted by interviewers who were not aware of the participants' case-control status were used to collect information on pesticide use, years of use, and the average annual number of days of use on the farm [38]. CCSPH questionnaires were modified versions of questionnaires used in the Kansas and Nebraska studies [39]. In a validation pilot study conducted on the modified questionnaires, the suppliers of 27 farmers were contacted for access to their purchase records. Agreement between self-reported pesticide use and pesticide purchase records was reported as excellent [39]. In the CCSPH, pesticide use data were collected using a two-stage approach. Canadian participants who reported 10 or more hours per year of pesticide use on a postal questionnaire and a 15% random sample of the remaining participants were contacted by telephone for a detailed interview to collect information on their use of major classes of pesticides and individual compounds. For each self-reported individual pesticide used, information on ever use (yes/no), duration of use (years), and frequency of use (number of days/year) was collected in Nebraska and the CCSPH. Duration and frequency of use of specific pesticide compounds were not collected in Kansas. Participants were prompted to report use of individual pesticides using a list of chemicals and trade names provided in the CCSPH and Nebraska. However, in Kansas this prompt was not employed; instead, an open-ended question was used to ask participants to recall the specific chemicals. Use of over 120 different insecticides, herbicides, and fungicides was reported in the NAPP. Information on demographic characteristics, occupation, lifestyle, medical history and other possible cancer risk factors was also collected in the questionnaires.
Statistical analysis
Descriptive analyses, which included frequencies, means and standard deviations, and median and range values, were conducted to determine the distribution of covariates. Wald 95% confidence intervals (CI) and odds ratios (OR) were estimated using logistic regression (Fisher's scoring method) in SAS v 9.4 (SAS Institute, Cary, NC). Models were selected based on a theoretical consideration of the relationships between variables using the directed acyclic graph approach [41][42][43] (Supplementary Fig. S1). In addition to the design variables of age, sex, province or state of residence and respondent type, variables considered as potential confounders included level of education, a history of allergies, a history of doctor-diagnosed mononucleosis, family history of lymphatic or hematopoietic cancers and having worked or lived on farmland. A multivariable model was selected using the change-in-estimate approach, with a 10% change in the coefficient estimate considered as meaningful. Final models were adjusted for the design variables of age group (< 30, 30-39, 40-49, 50-59, ≥ 60), sex (male, female), province or state of residence (Nebraska, Kansas, Quebec, Ontario, Manitoba, Saskatchewan, Alberta, British Columbia) and respondent type (proxy, self).
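Outside SAS, the same kind of adjusted model and Wald interval can be sketched with open-source tools. The snippet below is purely illustrative: the file name and column names (hl, ever_terbufos, age_group, sex, region, respondent) are placeholders, not the actual NAPP variables, and it simply follows the general form described above (OR = exp(β), with a Wald 95% CI of exp(β ± 1.96·SE)).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("napp_analytic_file.csv")   # hypothetical analytic file

# Ever/never use of one pesticide, adjusted for the design variables
model = smf.logit(
    "hl ~ ever_terbufos + C(age_group) + C(sex) + C(region) + C(respondent)",
    data=df,
).fit()

beta = model.params["ever_terbufos"]
se = model.bse["ever_terbufos"]
or_, lo, hi = np.exp([beta, beta - 1.96 * se, beta + 1.96 * se])
print(f"Adjusted OR = {or_:.2f} (Wald 95% CI {lo:.2f}-{hi:.2f})")
```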
Observations with missing adjustment covariate values were removed from the analysis (listwise deletion) for a complete-case analysis (97.5% of data) that included 496 HL cases and 3789 controls. Exposures to multiple pesticides (0, 1, 2-4, ≥ 5 pesticides), multiple pesticides grouped by functional group (fungicide, herbicide, insecticide) and select major chemical groups (organophosphate, organochlorine, and carbamate insecticides) were considered. Analyses for individual pesticides were performed for compounds with at least five exposed cases. Given the bimodal incidence of HL, suggesting multiple etiologies, effect modification by age, ≤ 40 and > 40 years old [23], was explored for compounds with a sufficient number of exposed cases. Furthermore, when there were enough exposed cases (malathion, methoxychlor, 2,4-dichlorophenoxyacetic acid (2,4-D), glyphosate) the effect of duration of use (years) and frequency of use (days per year) was investigated. In analysis of duration and frequency of pesticide use, observations with missing duration and frequency were dropped from the analysis. P-trend values for duration (0, 3, 9.5 years) and frequency of pesticide use categories (0, 1, 6 days/year) were calculated as Wald statistics from respective logistic regression models. Sensitivity analyses Three sensitivity analyses were performed: (1) re-sampling controls to match the age-frequency distribution of the cases, (2) excluding proxy respondents and (3) fitting models with a random effects parameter for province or state of residence. Controls were resampled to match the age distribution of cases stratified for state or province of residence and age group category. Odds ratios and 95% confidence intervals were estimated using multiple logistic regression adjusted for age category, sex, respondent status and province or state of residence. Results are presented in Supplementary Tables S1-S4. The results from logistic regression models adjusted for age group, sex and province or state of residence excluding proxy respondents are presented in Supplementary Tables S7-S9. Results from mixed logistic regression models, with a random effects parameter for province or state of residence, and adjusted for age, sex and respondent status are presented in Supplementary Tables S10-S13. Ethics The NAPP received ethics approval from the University of Toronto Research Ethics Board and an exemption from the NIH Office of Human Subjects Research Protection. All case-control studies included in the NAPP received ethics approval at the time the studies were conducted, and informed consent was obtained from all individual participants included. Participant characteristics Cases were on average younger than controls, more likely to have a family history of lymphatic or hematopoietic cancer and more likely to have a history of doctor diagnosed mononucleosis. Educational attainment, having lived or worked on farmland, smoking cigarettes and medically diagnosed allergies were not associated with HL (Table 1). Subtype information was missing for 41.2% of HL cases. However, among those cases with complete subtype information nodular sclerosis was the most common HL subtype in the NAPP overall (26.6%), followed by mixed cellularity (12.4%). Among those 40 and younger with complete HL subtype information, nodular sclerosis (35.6%) was the most common HL subtype. 
However, among those older than 40 there was a more even distribution of HL subtypes (nodular sclerosis (17.7%), mixed cellularity (14.3%) and other (21.2%)) (Table 2). Subtype information was not collected in the case-control studies from Nebraska and Kansas.
Use of multiple pesticides
The use of multiple pesticides as a group was not associated with HL, nor was the use of multiple herbicides, fungicides, or organochlorine insecticides (Table 3). Hodgkin lymphoma cases had 1.85 (95% CI 1.05-3.24) times the odds of reporting use of five or more insecticides than controls and 1.59 (95% CI 0.96-2.63) times the odds of reporting use of two or more organophosphate insecticides than controls. Cases were also more likely to report use of two or more carbamate insecticides (OR: 2.56, 95% CI 1.07-6.15). Among those older than 40, no statistically significant associations were observed with the use of multiple pesticides. However, among those 40 and younger, the use of five or more insecticides (OR: 2.45, 95% CI 0.93-6.44) and two or more organophosphate insecticides (OR: 2.96, 95% CI 1.33-6.61, p-trend < 0.01) was significantly associated with HL. Furthermore, the interaction between age and ever use of organophosphate insecticides was statistically significant (p = 0.01).
Ever use of individual pesticides
Self-reported ever use of 16 individual insecticides was investigated and results are presented in Table 4. In the NAPP overall, ever use of terbufos was statistically significantly associated with HL (OR: 2.58, 95% CI 1.06-6.25). Among those 40 and younger, elevated odds ratios were noted for dimethoate and malathion, although the estimate for dimethoate was based on small numbers (8 exposed cases, 5 exposed controls). Furthermore, the interactions between age and malathion (p-interaction = 0.005), age and dimethoate (p = 0.03) and age and chlordane (p = 0.03) were statistically significant. Among those older than 40, no statistically significant associations were found. The association with HL was investigated for ever use of 12 individual herbicides (Table 5). No elevated odds ratios or statistically significant associations were observed in the NAPP overall. In an age-stratified analysis, HL cases 40 and younger were two times more likely to report ever use of dicamba (OR: 2.09, 95% CI 0.91-4.81) and 1.69 times more likely to report ever use of trifluralin (95% CI 0.78-3.67). In general, elevated odds were not observed for the individual herbicides investigated among those older than 40 years. HL was not associated with ever use of captan and thiram, the fungicides with enough exposed cases to be considered in the individual analysis (Table 5).
Duration of use and frequency of use of select pesticides
Results for duration of use and frequency of use of malathion, methoxychlor, 2,4-D, and glyphosate in relation to the risk of HL are presented in Table 6. Numbers of exposed cases were small, and CIs were wide. No obvious exposure-response trends were observed in the analysis for duration and frequency of use of any specific pesticide. However, among the 40 and younger age group, duration of use of 1-5 years and frequency of use of 1-2 days/year of malathion were statistically significantly associated with HL (p-interaction with age = 0.03). Odds ratios for duration of use of 2,4-D in the NAPP overall tended to be slightly elevated, but no statistically significant associations were observed. We noted a statistically significant interaction with age for 2,4-D (p-interaction < 0.001).
In those 40 and younger, HL cases had 2.58 times the odds of reporting ≥ 6 years of 2,4-D use than controls (95% CI 1.38-4.83, p-trend < 0.01).
Sensitivity analyses
In general, the results of the sensitivity analyses are qualitatively like those from the primary analysis, with statistically significant associations noted for use of five or more insecticides and the use of 2 or more carbamate insecticides. From logistic regression models, on data where controls were age-frequency re-matched to the HL cases, the OR for use of 5 + insecticides (relative to 0) is 1.75, 95% CI 0.92-3.31 and the OR for use of 2 + carbamate insecticides (relative to 0) is 3.37, 95% CI 1.16-9.77 (Supplementary Table S1). Similarly, the odds ratio for those reporting use of 5 + insecticides relative to those who reported not having used any insecticides, from mixed logistic regression models that included a random effects parameter for province or state of residence, is 1.76.
Discussion
The epidemiologic evaluation of cancer risk resulting from pesticide exposure is challenging because of intermittent exposures of varying levels and changes in use patterns over time. We attempted to address these complexities by assessing associations using a variety of analytical approaches: by functional group (herbicide, fungicide, insecticide), by major chemical class (organophosphate, organochlorine, and carbamate), and when data were available, by the use of different exposure metrics (ever vs never, duration of use and frequency of use) for individual compounds within these classes. The analyses of broad groupings of pesticides showed a few interesting trends. First, ORs for those 40 years of age or younger tended to be elevated in comparison to those in the over 40 age group. No analysis for the over 40 group showed a statistically significant trend with the number of different pesticides used, in contrast to the 40 and under group. Second, a simple counting of pesticides used in the different groups was sometimes associated with an increased risk. This points to the need to perform more sophisticated analyses in additional studies, with available information on the timing of pesticide use, to address the issue of multiple and overlapping exposures. Our results showed an elevated risk of Hodgkin lymphoma with exposure to multiple organophosphate insecticides among those under 40 years of age. Navaranjan et al. [23] had previously noted this in the CCSPH for acetylcholinesterase inhibitors, as had Orsi et al. in a French case-control study [26]. The two main classes of cholinesterase inhibiting pesticides are organophosphates and carbamates. We observed an elevated odds ratio for the organophosphate terbufos, and for those 40 and younger for dimethoate and malathion. Using our pooled NAPP data, we did not observe much of the previously reported association between HL and the organophosphorus insecticide chlorpyrifos [24], although there were only five exposed cases. HL risk in relation to dichlorprop [25] was not investigated in this study as information on the use of this herbicide was only collected by the CCSPH. Similarly, we were not able to confirm the previous associations with the chemical groups of pyrethrin insecticides and picoline, amide and urea herbicides [26], as we did not have exposure information for these pesticides. The effect modification by age observed in this study may be due to the distribution of different subtypes of HL by age.
Nodular sclerosis typically develops in teens and young adults 15-35 years of age. In our study population, nodular sclerosis was 2.4 (95% CI 1.21-4.57) times more common among younger HL cases (≤ 40 years of age). It is possible that the differences in the association between HL and pesticide use observed between the two age groups represent different etiologies by HL subtype. However, because many of our study participants were missing subtype information, we did not have enough exposed cases to make any conclusive comments about the association between pesticide use and subtype of HL. A history of doctor diagnosed mononucleosis was also more common among those 40 and younger than those older than 40 (≤ 40: 9.5%, > 40: 1.1%, p value < 0.0001). Additionally, there were some differences in pesticide exposure patterns by age. A higher proportion of those 40 and younger reported having used fungicides (3.7%; Table S5).
Table 5 Adjusted odds ratios and corresponding 95% confidence intervals for Hodgkin lymphoma and ever use of specific self-reported herbicides and a stratified analysis for age, ≤ 40 and > 40 years old, in the North American Pooled Project (odds ratios adjusted for age group, sex, province/state of residence and respondent status).
Table 6 Adjusted odds ratios and corresponding 95% confidence intervals for Hodgkin lymphoma and duration (years used) and frequency of use (days/year) of select pesticides in the North American Pooled Project.
In contrast, a higher proportion of those older than 40 reported use of organochlorine insecticides, specifically aldrin, dieldrin, DDT, and lindane. Although we found some elevations in risk associated with pesticide use, limitations of this study must be acknowledged and include exposure measurement error and potentially uncontrolled confounding. The studies pooled in the NAPP relied on self-reported information on the personal use of pesticides, often years prior to the interview, and in some instances information on use was reported by proxy respondents. Non-differential exposure misclassification would undoubtedly occur and has been demonstrated in methodological investigations of the NAPP and other farm populations [72][73][74][75]. Non-differential misclassification would tend to bias estimates of relative risk toward the null [75]. However, the effects of measurement error, particularly in combination with unmeasured confounding, can be unpredictable. Because the NAPP is composed of case-control studies, there is also a possibility of differential exposure misclassification, which can bias relative risks toward or away from the null depending upon the location and magnitude of the error. A specific methodological effort in the Nebraska case-control study [40] found no evidence for case-response bias regarding pesticides in that study.
Previously, year-to-year repeatability based on questionnaires for 11 commonly used pesticides in a group of 4088 pesticide applicators was reported to be high for self-reported pesticide use, varying from 70 to 90%. Agreement for duration and frequency was lower, varying from 50 to 59% for days of use per year and 50-77% for duration of use in years. However, inter-individual variation in exposure in a single day of pesticide use can be high. It has previously been reported that the number of acres treated in a day by farmers, the number of pesticides mixed, the pounds of active pesticide ingredients handled, and daily urinary concentrations of pesticide metabolites can vary considerably between individuals [76][77][78]. Nevertheless, previous studies suggest that while self-reported pesticide use information lacks the precision required to detect dose-response effects, ever users and frequent users of individual pesticides can be differentiated from never users or infrequent users with reasonable accuracy. Approximately 31% of pesticide use reports in the NAPP were from proxy respondents. A previous study investigating the accuracy of information collected from proxy respondents showed that proxies were more likely to report use of fewer pesticides, unknown use or no use of specific pesticides, while farmers were three to five times more likely to report use of five or more pesticides [40]. We performed a sensitivity analysis excluding proxy respondents. Qualitatively, results were like those reported for the primary analysis. For Kansas and the CCSPH, response rates for cases were lower than those for controls. If the choice to participate in the study was related to pesticide use (i.e., cases who enrolled in the study were higher users of pesticides), then this is a form of selection bias that would impact the estimated magnitude of the association between pesticide use and HL. If cases were more likely to participate because they had used pesticides, this would overestimate the exposed proportion among cases and overestimate the true effect. Confounding is always a concern in epidemiologic studies. These pooled studies, however, collected information on many potential risk factors and statistical adjustment was employed in the analysis, although there are not many risk factors that are likely to be associated with pesticide use to cause confounding [79]. Given the age distribution of the cases and controls, there is the potential for residual confounding by age in our study population. The controls were generally older than the cases, with a relative lack of younger controls to match the case distribution. We performed a sensitivity analysis, presented in Supplementary Tables S1-S4, in which we resampled the controls to match the age distribution of the cases stratified by state or province of residence. Qualitatively, the results were the same in the resampled subset as in the full data. We elected to use the full data to maximize statistical power. Furthermore, many comparisons were made, some based on small numbers, and these may generate chance findings. Although the pooling of data for the NAPP resulted in an increased overall sample size, the numbers of exposed participants for many pesticides were still low, limiting our ability to assess their effect on HL risk. Few participants reported long-term exposure to the insecticides investigated.
While selection bias, residual bias due to unmeasured confounding, and measurement error likely have an impact on the reported results, a formal quantitative bias analysis is not warranted given the imprecision that is already reflected in the confidence intervals presented for the observed association estimates. It is important to note that the pesticide exposure information used in this study was collected in the late 1980s to early 1990s. However, many of the pesticides investigated are still in use today, including captan, carbaryl, 2,4-D, glyphosate, diazinon, dicamba, dimethoate, chlorpyrifos, malathion, atrazine, thiram, trifluralin, and alachlor. Finally, given that < 7% of participants were female, the results of this study cannot be generalized to women, who make up a considerable proportion of HL cases in adults. Strengths of this study include the large sample size of the NAPP, which did allow us to investigate a larger number of individual pesticides as compared to previous studies, as well as to investigate different metrics of pesticide exposure, and to consider age differences that are associated with HL subtypes. The availability of medical history, including family history of cancer and lymphatic or hematopoietic cancer, prior diagnosis with allergy (food and drug), asthma, hay fever, mononucleosis, rheumatoid arthritis and tuberculosis allowed us to explore and adjust for possible confounding. Furthermore, the exposure ascertainment methodology employed in the CCSPH included the mailing of a list of pesticides with both trade and generic chemical names to participants followed by a telephone interview, permitting the collection of more comprehensive pesticide exposure information [39]. Synergistic interactions are a concern as pesticides are currently tested and regulated on a single compound basis. The presence of synergistic effects when multiple pesticides are used, such that they jointly exert a larger effect than predicted, is an important consideration for risk assessment given that pesticide exposures co-occur in agricultural or other settings. A review of 73 reported pesticide synergistic interactions, 69 binary and the remainder consisting of combinations of three to eight compounds, from 36 experimental studies, showed that organophosphate and carbamate insecticides (cholinesterase inhibitors), azole fungicides, triazine herbicides and pyrethroid insecticides were overrepresented in the synergistic mixtures [80]. In future studies, if combinations of pesticides that are likely to induce synergistic interaction, or even additivity, can be identified, this information can be used to inform pesticide use practices, mitigate exposure and improve risk assessment. Our results suggest possible associations between HL and insecticide exposures. Assessment of risk by several approaches, including general functional categories, exposure to specific chemical classes followed by assessment of risk for specific chemical compounds demonstrates the complexity of the relationships between pesticide exposures and HL risk. Although individual pesticides could be related to HL, evaluation of specific combinations of exposure may also be warranted.
Acknowledgements
The authors wish to thank all participants for their contribution to this study. In addition, the authors acknowledge the efforts of the principal investigators of the individual case-control studies pooled to form the North American Pooled Project (NAPP). The contributions of Dr. Leo F. Skinnider and the late Dr.
Helen McDuffie to the Cross-Canada Study of Pesticides and Health, and of the late Dr. Leon Burmeister to the U.S. studies, are recognized. The authors thank Mr. Joe Barker at IMS Inc. for his programming services to pool data from the CCSPH and the U.S. case-control datasets. This analysis was conducted with the support of a Canadian Cancer Society (CCS) Prevention Research Grant (#703055). CCS was not involved in the design of the NAPP, collection, analysis, or interpretation of data, writing of this manuscript, or submission for publication. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2020-04-20T14:39:28.629Z
2020-04-20T00:00:00.000
{ "year": 2020, "sha1": "76290bbccc17baf15b4b55f2a9e404b2171dded9", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10552-020-01301-4.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "76290bbccc17baf15b4b55f2a9e404b2171dded9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52153770
pes2o/s2orc
v3-fos-license
Automatic Event Salience Identification Identifying the salience (i.e. importance) of discourse units is an important task in language understanding. While events play important roles in text documents, little research exists on analyzing their saliency status. This paper empirically studies Event Salience and proposes two salience detection models based on discourse relations. The first is a feature based salience model that incorporates cohesion among discourse units. The second is a neural model that captures more complex interactions between discourse units. In our new large-scale event salience corpus, both methods significantly outperform the strong frequency baseline, while our neural model further improves the feature based one by a large margin. Our analyses demonstrate that our neural model captures interesting connections between salience and discourse unit relations (e.g., scripts and frame structures). Introduction Automatic extraction of prominent information from text has always been a core problem in language research. While traditional methods mostly concentrate on the word level, researchers start to analyze higher-level discourse units in text, such as entities (Dunietz and Gillick, 2014) and events (Choubey et al., 2018). Events are important discourse units that form the backbone of our communication. They play various roles in documents. Some are more central in discourse: connecting other entities and events, or providing key information of a story. Others are less relevant, but not easily identifiable by NLP systems. Hence it is important to be able to quantify the "importance" of events. For example, Figure 1 is a news excerpt describing a debate around a jurisdiction process: "trial" is central as the main discussing topic, while "war" is not. Researchers are aware of the need to identify central events in applications like detecting salient relations (Zhang et al., 2015), and identifying climax in storyline (Vossen and Caselli, 2015). Generally, the salience of discourse units is important for language understanding tasks, such as document analysis (Barzilay and Lapata, 2008), information retrieval (Xiong et al., 2018), and semantic role labeling (Cheng and Erk, 2018). Thus, proper models for finding important events are desired. In this work, we study the task of event salience detection, to find events that are most relevant to the main content of documents. To build a salience detection model, one core observation is that salient discourse units are forming discourse relations. In Figure 1, the "trial" event is connected to many other events: "charge" is pressed before "trial"; "trial" is being "delayed". We present two salience detection systems based on the observations. First is a feature based learning to rank model. Beyond basic features like frequency and discourse location, we design features using cosine similarities among events and entities, to estimate the content organization (Grimes, 1975): how lexical meaning of elements relates to each other. Similarities from within-sentence or across the whole document are used to capture interactions on both local and global aspects ( §4). The model significantly outperforms a strong "Frequency" baseline in our experiments. However, there are other discourse relations beyond lexical similarity. 
Figure 1 showcases some: the script relation (Schank and Abelson, 1977) 1 between "charge" and "trial", and the frame relation (Baker et al., 1998) between "attacks" and "trial" ("attacks" fills the "charges" role of "trial"). Since it is unclear which ones contribute more to salience, we design a Kernel based Centrality Estimation (KCE) model ( §5) to capture salient specific interactions between discourse units automatically. In KCE, discourse units are projected to embeddings, which are trained end-to-end towards the salience task to capture rich semantic information. A set of soft-count kernels are trained to weigh salient specific latent relations between discourse units. With the capacity to model richer relations, KCE outperforms the feature-based model by a large margin ( §7.1). Our analysis shows that KCE is exploiting several relations between discourse units: including script and frames (Table 5). To further understand the nature of KCE, we conduct an intrusion test ( §6.2), which requires a model to identify events from another document. The test shows salient events form tightly related groups with relations captured by KCE. The notion of salience is subjective and may vary from person to person. We follow the empirical approaches used in entity salience research (Dunietz and Gillick, 2014). We consider the summarization test: an event is considered salient if a summary written by a human is likely to include it, since events about the main content are more likely to appear in a summary. This approach allows us to create a large-scale corpus ( §3). In this paper, we make three main contributions. First, we present two event salience detection systems, which capture rich relations among discourse units. Second, we observe interesting connections between salience and various discourse relations ( §7.1 and Table 5), implying potential research on these areas. Finally, we construct a large scale event salience corpus, providing a testbed for future research. Our code, dataset and models are publicly available 2 . 1 Scripts are prototypical sequences of events: a restaurant script normally contains events like "order", "eat" and "pay". However, studies on event salience are premature. Some previous work attempts to approximate event salience with word frequency or discourse position (Vossen and Caselli, 2015;Zhang et al., 2015). Parallel to ours, Choubey et al. (2018) propose a task to find the most dominant event in news articles. They draw connections between event coreference and importance, on hundreds of closeddomain documents, using several oracle event attributes. In contrast, our proposed models are fully learned and applied on more general domains and at a larger scale. We also do not restrict to a single most important event per document. There is a small but growing line of work on entity salience (Dunietz and Gillick, 2014;Dojchinovski et al., 2016;Xiong et al., 2018;Ponza et al., 2018). In this work, we study the case for events. Text relations have been studied in tasks like text summarization, which mainly focused on cohesion (Halliday and Hasan, 1976). Grammatical cohesion methods make use of document level structures such as anaphora relations (Baldwin and Morton, 1998) and discourse parse trees (Marcu, 1999). Lexical cohesion based methods focus on repetitions and synonyms on the lexical level (Skorochod'ko, 1971;Morris and Hirst, 1991;Erkan and Radev, 2004). 
Though sharing similar intuitions, our proposed models are designed to learn richer semantic relations in the embedding space. Compared to the traditional summarization task, we focus on events, which are at a different granularity. Our experiments also unveil interesting phenomena among events and other discourse units. The Event Salience Corpus This section introduces our approach to constructing a large-scale event salience corpus, including methods for finding event mentions and obtaining saliency labels. The studies are based on the Annotated New York Times corpus (Sandhaus, 2008), a newswire corpus with expert-written abstracts. Automatic Corpus Creation Event Mention Annotation: Despite many annotation attempts on events (Pustejovsky et al., 2002; Brown et al., 2017), automatically labeling them in the general domain remains an open problem. Most of the previous work follows empirical approaches. For example, Chambers and Jurafsky (2008) consider all verbs together with their subject and object as events. Do et al. (2011) additionally include nominal predicates, using the nominal form of verbs and lexical items under the Event frame in FrameNet (Baker et al., 1998). There are two main challenges in labeling event mentions. First, we need to decide which lexical items are event triggers. Second, we have to disambiguate the word sense to correctly identify events. For example, the word "phone" can refer to an entity (a physical phone) or an event (a phone call event). We use FrameNet to solve these problems. We first use a FrameNet-based parser, Semafor (Das and Smith, 2011), to find and disambiguate triggers into frame classes. We then use the FrameNet ontology to select event mentions. Our frame-based selection method follows the Vendler classes (Vendler, 1957), a four-way classification of eventuality: states, activities, accomplishments and achievements. The last three classes involve state change, and are normally considered as events. Following this, we create an "event-evoking frame" list using the following procedure: 1. We keep frames that are subframes of Event and Process in the FrameNet ontology. 2. We discard frames that are subframes of state, entity and attribute frames, such as Entity, Attributes, Locale, etc. 3. We manually inspect frames that are not subframes of the above-mentioned ones (around 200) to keep event-related ones (including subframes), such as Arson, Delivery, etc. This gives us a total of 569 frames. We parse the documents with Semafor and consider predicates that trigger a frame in the list as candidates. We finish the process by removing the light verbs 3 and reporting events 4 from the candidates, similar to previous research (Recasens et al., 2013). For each abstract-bearing document in the Annotated New York Times corpus, we extract event mentions in this way. We then label an event mention as salient if we can find its lemma in the corresponding abstract (Mitamura et al. (2015) showed that lemma matching is a strong baseline for event coreference). For example, in Figure 1, event mentions in bold and red are found in the abstract, and are thus labeled as salient. The data split is detailed in Table 1 and §6. Annotation Quality While the automatic method enables us to create a dataset at scale, it is important to understand the quality of the dataset. For this purpose, we have conducted two small manual evaluation studies. Our lemma-based salience annotation method rests on the assumption that lemma matching is a strong detector for event coreference.
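As an illustration of the labeling rule above, the sketch below marks an event mention as salient when its head lemma also occurs in the expert-written abstract. It is a minimal reconstruction rather than the original pipeline: event mentions are assumed to be already extracted by an upstream frame parser, and spaCy is used here only as a convenient stand-in lemmatizer; function and variable names are illustrative.

```python
# Minimal sketch of lemma-match salience labeling (illustrative only).
# Assumes `event_mentions` are head words already extracted by an upstream
# event detector; spaCy serves purely as a convenient lemmatizer here.
import spacy

nlp = spacy.load("en_core_web_sm")

def abstract_lemmas(abstract_text):
    """Return the set of lemmas of all non-punctuation tokens in the abstract."""
    return {tok.lemma_.lower() for tok in nlp(abstract_text) if not tok.is_punct}

def label_salience(event_mentions, abstract_text):
    """Label each event mention 1 (salient) if its head lemma appears in the
    abstract, otherwise 0."""
    lemmas = abstract_lemmas(abstract_text)
    labels = {}
    for mention in event_mentions:
        head_lemma = nlp(mention)[0].lemma_.lower()
        labels[mention] = int(head_lemma in lemmas)
    return labels

# Toy usage: "trial" is labeled salient, "war" non-salient, given this abstract.
print(label_salience(["trial", "war"], "The trial was delayed again."))
```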
In order to validate this assumption, one of the authors manually examined 10 documents and identified 82 coreferential event mention pairs between the text body and the abstract. The automatic lemma rule identified 72 such pairs: 64 of these match the human decisions, producing a precision of 88.9% (64/72) and a recall of 78% (64/82). There are 18 coreferential pairs missed by the rule. The next question is: is an event really important if it is mentioned in the abstract? Although prior work (Dunietz and Gillick, 2014) shows the assumption to be valid for entities, we study the case for events. We asked two annotators to manually annotate 10 documents (around 300 events) using a 5-point Likert scale for salience. We compute the agreement score using Cohen's Kappa (Cohen, 1960). We find the task to be challenging for humans: annotators do not agree well on the 5-point scale (Cohen's Kappa = 0.29). However, if we collapse the scale to binary decisions, the Kappa between the annotators rises to 0.67. Further, the Kappa values between each annotator and the automatic labels are 0.49 and 0.42 respectively. These agreement scores are also close to those reported for entity salience tasks (Dunietz and Gillick, 2014). While errors inevitably exist in the automatic annotation process, we find the error rate to be reasonable for a large-scale dataset. Further, our study indicates the difficulty for humans of rating salience on a finer scale. We leave the investigation of continuous salience scores to future work. Feature-Based Event Salience Model This section presents the feature-based model, including the features and the learning process. Features Our features are summarized in Table 2 (Frequency: the frequency of the event lemma in the document; Sentence Location: the location of the first sentence that contains the event; Event Voting: average cosine similarity with other events in the document; Entity Voting: average cosine similarity with other entities in the document; Local Entity Voting: average cosine similarity with entities in the sentence). Basic Discourse Features: We first use two basic features similar to Dunietz and Gillick (2014): Frequency and Sentence Location. Frequency is the lemma count of the mention's syntactic head word (Manning et al., 2014). Sentence Location is the sentence index of the mention, since the first few sentences are normally more important. These two features are often used to estimate salience (Barzilay and Lapata, 2008; Vossen and Caselli, 2015). Content Features: We then design several lexical similarity features, to reflect Grimes' content relatedness (Grimes, 1975). In addition to events, the relations between events and entities are also important. For example, Figure 1 shows some related entities in the legal domain, such as "prosecutors" and "court". Ideally, they should help promote the salience status of the event "trial". Lexical relations can be found both within-sentence (local) and across sentences (global) (Halliday and Hasan, 1976). We compute the local part by averaging similarity scores from other units in the same sentence. The global part is computed by averaging similarity scores from other units in the document. All similarity scores are computed using cosine similarities on pre-trained embeddings (Mikolov et al., 2013). These lead to 3 content features: Event Voting, the average similarity to other events in the document; Entity Voting, the average similarity to entities in the document; Local Entity Voting, the average similarity to entities in the same sentence. Local event voting is not used since a sentence often contains only 1 event. Model A Learning to Rank (LeToR) model (Liu, 2009) is used to combine the features. Let ev_i denote the i-th event in a document d. Its salience score is computed as: salience(ev_i) = W_f · F(ev_i) + b, (1) where F(ev_i) is the feature vector of ev_i (Table 2); W_f and b are the parameters to learn.
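The content "voting" features described above reduce to average cosine similarities over pre-trained embeddings. Below is a minimal sketch, assuming each event or entity mention is represented by a single embedding vector; the function names are illustrative, not from the paper's released code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def avg_similarity(target_vec, other_vecs):
    """Average cosine similarity of a target embedding to a list of embeddings."""
    if not other_vecs:
        return 0.0
    return float(np.mean([cosine(target_vec, v) for v in other_vecs]))

def voting_features(event_vec, other_event_vecs, entity_vecs, local_entity_vecs):
    """Event Voting, Entity Voting and Local Entity Voting for one event mention.

    other_event_vecs   - embeddings of the other events in the document
    entity_vecs        - embeddings of all entities in the document
    local_entity_vecs  - embeddings of entities in the same sentence
    """
    return {
        "event_voting": avg_similarity(event_vec, other_event_vecs),
        "entity_voting": avg_similarity(event_vec, entity_vecs),
        "local_entity_voting": avg_similarity(event_vec, local_entity_vecs),
    }
```

In the full feature-based model these three values are concatenated with Frequency and Sentence Location to form F(ev_i), which the linear scorer of equation (1) combines.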
The model is trained with a pairwise hinge loss: L = Σ_(ev+, ev−) max(0, 1 − salience(ev+) + salience(ev−)), (2) where ev+ and ev− represent the salient and non-salient events according to the gold standard function y. Learning can be done by standard gradient methods. Neural Event Salience Model As discussed in §1, the salience of discourse units is reflected by rich relations beyond lexical similarities, for example, script ("charge" and "trial") and frame (a "trial" of "attacks"). The relations between these words are specific to the salience task, and thus difficult to capture by raw cosine scores that are optimized for word similarities. In this section, we present a neural model to exploit the embedding space more effectively, in order to capture relations for event salience estimation. Kernel-based Centrality Estimation Inspired by the kernel ranking model (Xiong et al., 2017), we propose Kernel-based Centrality Estimation (KCE), to find and weight semantic relations of interest, in order to better estimate salience. Formally, given a document d and the set of annotated events V = {ev_1, . . . , ev_i, . . . , ev_n}, KCE first embeds each event ev_i into vector space as →ev_i (its embedding vector); the embedding function is initialized with pre-trained embeddings. It then extracts K kernel features for each ev_i: Φ_K(ev_i, V) = [φ_1(ev_i, V), . . . , φ_K(ev_i, V)], where φ_k(ev_i, V) = Σ_(ev_j ∈ V, j ≠ i) exp(−(cos(→ev_i, →ev_j) − μ_k)^2 / (2σ_k^2)) is the k-th Gaussian kernel with mean μ_k and variance σ_k^2. It models the interactions between events in its kernel range defined by μ_k and σ_k. Φ_K(ev_i, V) enforces multi-level interactions among events: relations that contribute similarly to salience are expected to be grouped into the same kernels. Such interactions greatly improve the capacity of the model with a negligible increase in the number of parameters. Empirical evidence (Xiong et al., 2017) has shown that kernels in this form are effective at learning weights for task-specific term pairs. The final salience score is computed as: salience(ev_i) = W_v · Φ_K(ev_i, V) + b, where W_v is learned to weight the contribution of the relations captured by each kernel. We then use the exact same learning objective as in equation (2). The pairwise loss is first backpropagated through the network to update the kernel weights W_v, assigning higher weights to relevant regions. Then the kernels use the gradients to update the embeddings, in order to capture the meaningful discourse relations for salience. Since the features and KCE capture different aspects, combining them may give superior performance. This can be done by combining the two vectors in the final linear layer: salience(ev_i) = W · [F(ev_i); Φ_K(ev_i, V)] + b, where [· ; ·] denotes vector concatenation. Integrating Entities into KCE KCE is also used to model the relations between events and entities. For example, in Figure 1, the entity "court" is a frame element of the event "trial"; "United States" is a frame element of the event "war". It is not clear which pair contributes more to salience. We again let KCE learn it. Formally, let E be the list of entities in the document, i.e. E = {en_1, . . . , en_i, . . . , en_n}, where en_i is the i-th entity in document d. KCE extracts the kernel features about entity-event relations in the same way: Φ_K(ev_i, E) = [φ_1(ev_i, E), . . . , φ_K(ev_i, E)], where φ_k(ev_i, E) = Σ_(en_j ∈ E) exp(−(cos(→ev_i, →en_j) − μ_k)^2 / (2σ_k^2)); similarly, each entity en_i is embedded as →en_i, which is initialized by pre-trained entity embeddings.
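The kernel features can be made concrete with a short sketch of Gaussian kernel pooling over cosine similarities, in the spirit of the kernel ranking model cited above. The kernel means and width below are illustrative choices, not values taken from the paper, and the learned weights are represented simply as vectors passed in; in the trained model both the kernel weights and the embeddings are updated end-to-end with the pairwise loss of equation (2).

```python
import numpy as np

# Illustrative kernel means spanning the cosine range, plus an exact-match kernel.
MUS = np.arange(-0.9, 1.0, 0.2).tolist() + [1.0]
SIGMA = 0.1

def _cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def kernel_features(event_vec, context_vecs, mus=MUS, sigma=SIGMA):
    """Soft-count kernel features Phi_K(ev_i, context).

    For each kernel k, sum exp(-(cos(ev_i, unit_j) - mu_k)^2 / (2 sigma^2))
    over all discourse units in the context (other events, or entities).
    """
    sims = np.array([_cosine(event_vec, v) for v in context_vecs])
    return np.array([np.exp(-(sims - mu) ** 2 / (2 * sigma ** 2)).sum() for mu in mus])

def kce_score(event_vec, other_event_vecs, entity_vecs, w_event, w_entity, bias=0.0):
    """Linear combination of event-event and event-entity kernel features.

    w_event and w_entity are learned weight vectors, one entry per kernel;
    the full model additionally concatenates the hand-crafted features F(ev_i).
    """
    phi_v = kernel_features(event_vec, other_event_vecs)
    phi_e = kernel_features(event_vec, entity_vecs)
    return float(np.dot(w_event, phi_v) + np.dot(w_entity, phi_e) + bias)
```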
We reach the full KCE model by combining all the vectors using a linear layer: The model is again trained by equation (2). Experimental Methodology This section describes our experiment settings. Event Salience Detection Dataset: We conduct our experiments on the salience corpus described in §3. Among the 664,911 articles with abstracts, we sample 10% of the data as the test set and then randomly leave out another 10% documents for development. Overall, there are 4359 distinct event lexical items, at a similar scale with previous work (Chambers and Jurafsky, 2008;Do et al., 2011). The corpus statistics are summarized in Table 1. Input: The inputs to models are the documents and the extracted events. The models are required to rank the events from the most to least salience. Baselines: Three methods from previous researches are used as baselines: Frequency, Location and PageRank. The first two are often used to simulate saliency (Barzilay and Lapata, 2008;Vossen and Caselli, 2015). The Frequency baseline ranks events based on the count of the headword lemma; the Location baseline ranks events using the order of their appearances in discourse. Ties are broken randomly. Similar to entity salience ranking with PageRank scores (Xiong et al., 2018), our PageRank baseline runs PageRank on a fully connected graph whose nodes are the events in documents. The edges are weighted by the embedding similarities between event pairs. We conduct supervised PageRank on this graph, using the same pairwise loss setup as in KCE. We report the best performance obtained by linearly combining Frequency with the scores obtained after a one-step random walk. Evaluation Metric: Since the importance of events is on a continuous scale, the boundary between "important" and "not important" is vague. Hence we evaluate it as a ranking problem. The metrics are the precision and recall value at 1, 5 and 10 respectively. It is adequate to stop at 10 since there are less than 9 salient events per document on average (Table 1). We also report Area Under Curve (AUC). Statistical significance values are tested by permutation (randomization) test with p < 0.05. Implementation Details: We pre-trained word embeddings with 128 dimensions on the whole Annotated New York Times corpus using Word2Vec (Mikolov et al., 2013). Entities are extracted using the TagMe entity linking toolkit (Ferragina and Scaiella, 2010). Words or entities that appear only once in training are replaced with special "unknown" tokens. The parameters of the models are optimized by Adam (Kingma and Ba, 2015), with batch size 128. The vectors of entities are initialized by the pre-trained embeddings. Event embeddings are initialized by their headword embedding. The Event Intrusion Test: A Study KCE is designed to estimate salience by modeling relations between discourse units. To better understand its behavior, we design the following event intrusion test, following the word intrusion test used to assess topic model quality (Chang et al., 2009). Event Intrusion Test: The test will present to a model a set of events, including: the origins, all events from one document; the intruders, some events from another document. Intuitively, if events inside a document are organized around the core content, a model capturing their relations well should easily identify the intruder(s). Specifically, we take a bag of unordered events {O 1 , O 2 , . . . , O p }, from a document O, as the origins. We insert into it intruders, events drawn from another document, I: {I 1 , I 2 , . 
. . , I q }. We ask a model to rank the mixed event set M = {O 1 , I 1 , O 2 , I 2 , . . .}. We expect a model to rank the intruders I i below the origins O i . Intrusion Instances: From the development set, we randomly sample 15,000 origin and intruding document pairs. To simplify the analysis, we only take documents with at least 5 salient events. The intruder events, together with the entities in the same sentences, are added to the origin document. Metrics: AUC is used to quantify ranking quality, where events in O are positive and events in I are negative. To observe the ranking among the salient origins, we compute a separate AUC score between the intruders and the salient origins, denoted as SA-AUC. In other words, SA-AUC is the AUC score on the list with non-salient origins removed. Experiments Details: We take the full KCE model to compute salient scores for events in the mixed event set M , which are directly used for ranking. Frequency is recounted. All other features (Table 2) are set to 0 to emphasize the relational aspects, We experiment with two settings: 1. adding only the salient intruders. 2. adding only the non-salient intruders. Under both settings, the intruders are added one by one, allowing us to observe the score change regarding the number of intruders added. For comparison, we add a Frequency baseline, that directly ranks events by the Frequency feature. Evaluation Results This section presents the evaluations and analyses. Event Salience Performance We summarize the main results in Table 3. Baselines: Frequency is the best performing baseline. Its precision at 1 and 5 are higher than 40%. PageRank performs worse than Frequency on all Case Study: We inspect some pairs of events and entities in different kernels and list some examples in Table 5. The pre-trained embeddings are changed a lot. Pairs of units with different raw similarity values are now placed in the same bin. The pairs in Table 3 exhibit interesting types of relations: e.g.,"arrest-charge" and "attack-kill" form script-like chains; "911 attack" forms a quasiidentity relation (Recasens et al., 2010) with "attack"; "business" and "increase" are candidates as frame-argument structure. While these pairs have different raw cosine similarities, they are all useful in predicting salience. KCE learns to gather these relations into bins assigned with higher weights, which is not achieved by pure embedding based methods. The KCE has changed the embedding space and the scoring functions significantly from the original space after training. This partially explains why the raw voting features and PageRank are not as effective. The left figure shows that KCE successfully finds the non-salient intruders. The SA-AUC is higher than 0.8. Yet the AUC scores, which include the rankings of non-salience events, are rather close to random. This shows that the salient events in the origin documents form a more cohesive group, making them more robust against the intruders; the non-salient ones are not as cohesive. Intrusion Test Results In both settings, KCE produces higher SA-AUC than Frequency at the first 30%. However, in setting 2, KCE starts to produce lower SA-AUC than Frequency after 30%, then gradually drops to 0.5 (random). This phenomenon is expected since the asymmetry between origins and intruders allow KCE to distinguish them at the beginning. When all intruders are added, KCE performs worse because it relies heavily on the relations, which can be also formed by the salient intruders. 
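The ranking metrics used in these experiments (precision and recall at k over the predicted event ranking, and AUC over origin-versus-intruder labels, with SA-AUC computed after dropping the non-salient origins) are straightforward to compute; a minimal sketch with illustrative function names, using scikit-learn only for the AUC:

```python
from sklearn.metrics import roc_auc_score

def precision_recall_at_k(ranked_events, gold_salient, k):
    """Precision@k and Recall@k for a ranked list of events.

    ranked_events - events sorted from most to least salient by the model
    gold_salient  - set of events labeled salient in the gold standard
    """
    top_k = ranked_events[:k]
    hits = sum(1 for ev in top_k if ev in gold_salient)
    precision = hits / k
    recall = hits / max(len(gold_salient), 1)
    return precision, recall

def intrusion_auc(scores, is_origin):
    """AUC for the intrusion test: origin events should outrank intruders.

    scores    - salience scores assigned to the mixed event set M
    is_origin - 1 for events from the origin document, 0 for intruders
    """
    return roc_auc_score(is_origin, scores)

# SA-AUC is the same computation with the non-salient origin events removed
# from both `scores` and `is_origin` before calling intrusion_auc.
```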
This phenomenon is observed only on the salient intruders, which again confirms the cohesive relations are found among salient events. In conclusion, we observe that the salient events form tight groups connected by discourse relations while the non-salient events are not as related. The observations imply that the main scripts in documents are mostly anchored by small groups of salient events (such as the "Trial" script in Example 1). Other events may serve as "backgrounds" (Cheung et al., 2013). Similarly, Choubey et al. (2018) find that relations like event coreference and sequence are important for saliency. Conclusion We propose two salient detection models, based on lexical relatedness and semantic relations. The feature-based model with lexical similarities is effective, but cannot capture semantic relations like scripts and frames. The KCE model uses kernels and embeddings to capture these relations, thus outperforms the baselines and feature-based models significantly. All the results are tested on our newly created large-scale event salience dataset. While the automatic method inevitably introduces noises to the dataset, the scale enables us to study complex event interactions, which is infeasible via costly expert labeling. Our case study shows that the salience model finds and utilize a variety of discourse relations: script chain (attack and kill), frame argument relation (business and increase), quasi-identity (911 attack and attack). Such complex relations are not as prominent in the raw word embedding space. The core message is that a salience detection module automatically discovers connections between salience and relations. This goes beyond prior centering analysis work that focuses on lexical and syntax and provide a new semantic view from the script and frame perspective. In the intrusion test, we observe that the small number of salient events are forming tight connected groups. While KCE captures these relations quite effectively, it can be confused by salient intrusion events. The phenomenon indicates that the salient events are tightly connected, which form the main scripts of documents. This paper empirically reveals many interesting connections between discourse phenomena and salience. The results also suggest that core script information may reside mostly in the salient events. Limited by the data acquisition method, this paper only models discourse salience as binary decisions. However, salience value may be continuous and may even have more than one aspects. In the future, we plan to investigate these complex settings. Another direction of study is large-scale semantic relation discovery, for example, frames and scripts, with a focus on salient discourse units.
2018-09-03T16:35:07.000Z
2018-09-03T00:00:00.000
{ "year": 2018, "sha1": "b4f825893fc40cbcd33c38f6f9cb04e0dcdb104d", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/D18-1154.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "b4f825893fc40cbcd33c38f6f9cb04e0dcdb104d", "s2fieldsofstudy": [ "Sociology", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
4997228
pes2o/s2orc
v3-fos-license
Certain Actions from the Functional Movement Screen Do Not Provide an Indication of Dynamic Stability Dynamic stability is an essential physical component for team sport athletes. Certain Functional Movement Screen (FMS) exercises (deep squat; left- and right-leg hurdle step; left- and right-leg in-line lunge [ILL]; left- and right-leg active straight-leg raise; and trunk stability push-up [TSPU]) have been suggested as providing an indication of dynamic stability. No research has investigated relationships between these screens and an established test of dynamic stability such as the modified Star Excursion Balance Test (mSEBT), which measures lower-limb reach distance in posteromedial, medial, and anteromedial directions, in team sport athletes. Forty-one male and female team sport athletes completed the screens and the mSEBT. Participants were split into high-, intermediate-, and low-performing groups according to the mean of the excursions when both the left and right legs were used for the mSEBT stance. Any between-group differences in the screens and mSEBT were determined via a one-way analysis of variance with Bonferroni post hoc adjustment (p < 0.05). Data was pooled for a correlation analysis (p < 0.05). There were no between-group differences in any of the screens, and only two positive correlations between the screens and the mSEBT (TSPU and right stance leg posteromedial excursion, r = 0.37; left-leg ILL and left stance leg posteromedial excursion, r = 0.46). The mSEBT clearly indicated participants with different dynamic stability capabilities. In contrast to the mSEBT, the selected FMS exercises investigated in this study have a limited capacity to identify dynamic stability in team sport athletes. Introduction The Functional Movement Screen (FMS) is often used to monitor functional capacity, as the actions have been described as challenging an individual's ability to expedite movement in a proximal-to-distal fashion (Cook et al., 2006a). Traditionally, the FMS has been used as a potential indicator of injury risk in athletes (Chorba et al., 2010;Kiesel et al., 2007), although further research is needed to confirm this relationship (Teyhen et al., 2014). More recently, the FMS has been investigated with regard to its relationship to athletic performance (Lockie et al., 2013a;Lockie et al., 2015;Parchmann and McBride, 2011), given that effective movement patterns are needed for sport. However, research has found limitations with the FMS in providing an indication of ineffective movement patterns that influence athletic performance. For example, multidirectional speed has been found to have Journal of Human Kinetics -volume 47/2015 http://www.johk.pl minimal relationships with the FMS, including 20 m sprint and T-test performance in collegiate golfers (Parchmann and McBride, 2011), and 20 m sprint, 505 change-of-direction speed test, and modified T-test performance in male team sport athletes (Lockie et al., 2015). Nonetheless, it should be noted that multidirectional speed incorporates a number of physical capacities, one of which includes dynamic stability (Sheppard and Young, 2006). In recent times, this capacity has been investigated in team sport athletes (Lockie et al., 2013b;Lockie et al., 2014b, in press;Thorpe and Ebersole, 2008). Within multidirectional movements, athletes must maintain stability when transitioning from a dynamic (deceleration) to a static (stopping in preparation to change direction), before returning to a dynamic (reacceleration) state. 
A valid and popular assessment of dynamic stability is the Star Excursion Balance Test (SEBT), which utilizes functional reaching of the legs from a unilateral stance in eight directions (anterior, anterolateral, lateral, posterolateral, posterior, posteromedial, medial, and anteromedial) (Olmsted et al., 2002;Robinson and Gribble, 2008). The SEBT is a valuable test, as it may predict the risk of leg injuries in athletes (Dallinga et al., 2012;Plisky et al., 2006), while more importantly for this study, also relates to athletic performance (Lockie et al., in press;Thorpe and Ebersole, 2008). When compared to non-athletes, collegiate female soccer players could reach further in anterior and posterior directions (Thorpe and Ebersole, 2008). Lockie et al. (in press) found that faster male team sport athletes in assessments such as the 40 m sprint, T-test, and change-of-direction and acceleration tests, could reach further in the medial and posteromedial directions. Given the importance of dynamic stability for team sport athletes (Lockie et al., 2014b, in press;Sheppard and Young, 2006), there is value for strength and conditioning coaches to understand whether other tests also provide an indication of this physical quality, and potentially identify physical deficiencies affecting performance. Although the FMS has been found not to relate to multidirectional sprinting itself (Lockie et al., 2015;Parchmann and McBride, 2011), screens that require a stable base during movement may be able to provide an indication of a component of speed in dynamic stability. In addition to this, FMS literature has implied the importance of dynamic stability to the screening movements (Cook et al., 2006a(Cook et al., , 2006b. Indeed, Teyhen et al. (2014) found small-to-moderate correlations between the Y-balance test and the deep squat (correlation and coefficient [r] = 0.38), hurdle step (r = 0.34), and in-line lunge (r = 0.40), in male and female active duty service members. Research investigating relationships between the FMS and an established test of dynamic stability specific to team sport athletes could provide strength and conditioning coaches the opportunity to use certain screening exercises as a means to identifying movement limitations affecting this capacity. This would also confirm whether anecdotal recommendations as to the importance of dynamic stability within screening exercises are appropriate. Therefore, this study analyzed the relationship between individual FMS assessments (a deep squat, a hurdle step, an in-line lunge, an active straight-leg raise, and a trunk stability push-up) with performance in a modified SEBT (mSEBT) in team sport athletes. The mSEBT utilizes only the posteromedial, medial, and anteromedial excursions, and eliminates redundant measurements to make the assessment more efficient (Hertel et al., 2006). Participants were split into high-, intermediate-, and lowperforming groups according to the mean of reach scores attained for each leg when used for the stance in the mSEBT. This demonstrated whether athletes who had better dynamic stability were superior in the selected screens from the FMS. As these screens had been said to require some form of dynamic stability and movement control (Cook et al., 2006a(Cook et al., , 2006b, it was hypothesized that participants who demonstrated superior dynamic stability would also perform better in these screens. Additionally, higher scores in the hurdle step and the in-line lunge would correlate with further excursion distances. 
Participants Forty-one recreational team sport athletes (age = 22.80 ± 4.13 years; body height = 1.76 ± 0.09 m; body mass = 76.05 ± 12.85 kg), including 32 males (age = 22.84 ± 3.90 years; body height = 1.79 ± 0.07 m; body mass = 79.37 ± 12.49 kg) and 9 © Editorial Committee of Journal of Human Kinetics females (age = 22.67 ± 5.12 years; body height = 1.66 ± 0.05 m; body mass = 64.22 ± 4.44 kg), volunteered for this study. Mixed-gender groups have been previously used in the FMS (Okada et al., 2011;Parchmann and McBride, 2011;Teyhen et al., 2014), and sport (Eikenberry et al., 2008;Guissard et al., 1992;Lockie et al., 2012;Spiteri et al., 2013) research. Participants were recruited if they: currently played a team sport (soccer, netball, basketball, rugby, Australian football, touch football); were currently training for a team sport (≥three times per week); and had a training history (≥two times per week) extending over the previous year. Although there may be certain differences in traits between different sport participants, the analysis of performance with regard to physical characteristics common to athletes from assorted team sports had been consistently conducted within the literature (Lockie et al., 2014a;Lockie et al., 2011;Sassi et al., 2009;Sekulic et al., 2013;Spiteri et al., 2013). To limit the influence of any injuries that could affect FMS scoring, participants were only included if they had not sustained an injury in the previous 30 days that prohibited them from full participation in regular training and competition (Chorba et al., 2010). The study occurred within the competition season for all participants, and the procedures were approved by the University of Newcastle ethics committee. All subjects received a clear explanation of the study, including the risks and benefits of participation, and written informed consent was obtained prior to testing. Procedures Data was collected over two sessions, separated by one week. The first session involved the FMS assessments, while the second testing session incorporated the mSEBT. Prior to the FMS assessment in the first session, each participant's age, body height, and body mass were recorded. Body height was measured using a stadiometer (Ecomed Trading, Seven Hills, Australia), while body mass was recorded using electronic digital scales (Tanita Corporation, Tokyo, Japan). Participants then completed the selected screens. In the second session, the mSEBT warm-up consisted of low-intensity cycling on a bicycle ergometer, followed by circuits of the mSEBT, the specifics of which will be documented. Participants were tested at the same time of day for both sessions and in the same order, did not eat for 2-3 hours prior to their testing sessions, and refrained from taking any stimulants such as caffeine, or intensive lower-body exercise, in the 24 hours prior to testing. Functional Movement Screen (FMS) Five movements were used from the FMS for this study, and the intra-rater reliability of these screens had been previously established (Minick et al., 2010;Onate et al., 2012). Although Shultz et al. (2013) documented some limitations in the inter-rater reliability of the FMS, as will be detailed, the procedures adopted in this study sought to limit the influence of this. The selected screening tests, as described by Frost et al. (2012), were completed in the following order: 1. deep squat: a dowel was held overhead with arms extended, and the participant squatted as low as possible; 2. 
hurdle step: a dowel was held across the shoulders, and the participant stepped over a hurdle in front of them that was level with their tibial tuberosity; 3. in-line lunge: with a dowel held vertically behind the participant such that it contacted the head, back and sacrum, and with the feet aligned, the participant performed a split squat; 4. straight-leg raise: lying supine with their head on the ground, the participant actively raised one leg as high as possible; and 5. trunk stability push-up: the participant performed a push-up with their hands shoulder-width apart. As stated, these screens were selected as they had been said to require some form of dynamic stability (Cook et al., 2006a(Cook et al., , 2006b. The shoulder mobility test was not used as it consists of completely isolated movement to the glenohumeral joint (Cook et al., 2006b). The rotary stability test was excluded because previous research had stated that it was not a practical test for athletic populations (Schneiders et al., 2011). A clearing test was employed for the trunk stability push-up, where the participant performed a press-up from the push-up start position, while maintaining contact between the hips and the ground (Cook et al., 2006b). FMS scoring checklists had been presented in the literature (Cook et al., 2006a(Cook et al., , 2006bFrost et al., 2012;Okada et al., 2011), and were used for this study. Three repetitions of each task were completed, and the best performed repetition was graded. Approximately five seconds of rest were provided between trials, one Journal of Human Kinetics -volume 47/2015 http://www.johk.pl minute of rest between tests, and participants returned to the starting position between each trial (Okada et al., 2011). Participants were recorded by two video camcorders (Sony Electronics Inc., Tokyo, Japan), positioned anteriorly and laterally. Two qualified exercise scientists, trained and experienced with the FMS, analyzed participants live and later reviewed the video footage if required, and scored each participant individually. Movements were scored from 0-3. Scores of 3, 2, 1, and 0, represented, according to relevant criteria: 'performed without compensation', 'performed with a single compensation', 'performed with multiple compensations or could not perform', and 'pain', respectively (Cook et al., 2006a(Cook et al., , 2006bFrost et al., 2012). If there was any scoring discrepancy between the investigators, they reviewed the footage and discussed the result until a resolution was reached. This was done to minimize any discrepancies that may result between scorers (Shultz et al., 2013). Except for the deep squat and the trunk stability push-up, each side of the body was assessed within the movements, and all scores were considered in the analysis for this study. Modified Star Excursion Balance Test (mSEBT) Dynamic balance was assessed by using the mSEBT through three excursions (posteromedial, medial, and anteromedial), which are shown in Figure 1. The testing grid consisted of 120-centimeter long tape measures taped to the laboratory floor. Each tape measure extended from an origin at 45º increments, measured by a goniometer. Participants stood on the center marker of the mSEBT, with the ankle malleoli aligned with lateral tape measures, which were visually assessed by the researcher. Participants then used their free leg to reach in the aforementioned order. 
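The leg-length normalization above is a one-line calculation; the sketch below also averages the three excursions to give the per-stance-leg score later used to rank participants. It is an illustrative helper with our own function names, not the study's analysis script.

```python
def relative_reach(reach_cm, leg_length_cm):
    """Reach distance expressed as a percentage of leg length."""
    return reach_cm / leg_length_cm * 100

def mean_stance_score(posteromedial_cm, medial_cm, anteromedial_cm, leg_length_cm):
    """Mean relative reach across the three mSEBT excursions for one stance leg."""
    excursions = (posteromedial_cm, medial_cm, anteromedial_cm)
    return sum(relative_reach(d, leg_length_cm) for d in excursions) / len(excursions)

# Example: a 95 cm posteromedial reach with a 90 cm leg length is ~105.6%.
print(round(relative_reach(95, 90), 1))
```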
With each attempt, the participant attempted to reach as far as possible along each line and make a light touch on the ground with the most distal part of the reaching leg. The participant then returned the reaching leg to a bilateral stance, without allowing this movement to affect overall balance. A researcher noted the distance after each attempt. Participants placed their hands on their hips during the mSEBT, and kept them there throughout all reach attempts. A trial was disregarded if the researcher felt the participant used the reaching leg for an extended period of support, removed the stance leg from the grid, removed their hands from their hips, or did not maintain balance. A minimum of three practice trials were used prior to data collection to familiarize participants to the movements required, and to serve as a warm-up. The order of the stance leg used during testing was randomized across participants. Reach distances were considered relative to leg length, and expressed as a percentage: relative reach distance = reach distance/leg length x 100 (Gribble and Hertel, 2003;Lockie et al., in press). Statistical Analysis All statistics were computed using the Statistics Package for Social Sciences Version 22.0 (IBM, Armonk, United States of America). Descriptive statistics (mean ± standard deviation) were used to profile each parameter. The Levene statistic determined homogeneity of variance of the data. Following established procedures (Frost and Cronin, 2011;Lockie et al., 2011;Lockie et al., 2013b;Spiteri et al., 2013), participants were ranked and split into high-, intermediate-, and low-performing dynamic stability groups according to two methods. The two ranking methods were the mean of reach distances when the right leg was used for the stance in the mSEBT, and the mean of reach distances when the left leg was used for the stance. As there is a tendency for dichotomized data to regress towards the mean, the participants ranked 14 and 28 for each dichotomization method were removed from the analysis, and groups of 13 participants each were established. This was done to ensure each group comprised participants of different dynamic stability levels. Thus, participants ranked 1-13 were in the highperforming group; participants ranked 15-27 were placed in the intermediate-performing group; and participants ranked 29-41 became the lowperforming group. According to these groups, a one-way analysis of variance computed any significant (p < 0.05) differences between the selected individual screening exercises and mSEBT reach distances. Post hoc analysis was conducted for between-group pairwise comparisons using a Bonferroni adjustment for multiple comparisons. Data was then pooled (n = 41) for a Pearson's correlation analysis (p < 0.05) conducted between the deep squat, the left and right leg © Editorial Committee of Journal of Human Kinetics hurdle step, the in-line lunge, the active straightleg raise, the trunk stability push-up, and the mSEBT scores. This analysis determined the relationships between performance in the individual screens, and dynamic stability as measured by functional reach distance. The strength of the correlation coefficient (r) was designated as per Hopkins (2009). An r value between 0 to 0.30, or 0 to -0.30, was considered small; 0.31 to 0.49, or -0.31 to -0.49, moderate; 0.50 to 0.69, or -0.50 to -0.69, large; 0.70 to 0.89, or -0.70 to -0.89, very large; and 0.90 to 1, or -0.90 to -1, near perfect for predicting relationships. 
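The ranking and grouping procedure, and Hopkins' descriptors for correlation strength, can be sketched as follows; this is an illustrative reconstruction (ranks 14 and 28 dropped, three groups of 13), not the SPSS workflow used in the study.

```python
import numpy as np
from scipy.stats import pearsonr

def split_performance_groups(mean_msebt_scores):
    """Rank 41 athletes by mean mSEBT score and form three groups of 13.

    Ranks 14 and 28 are removed so the high (1-13), intermediate (15-27) and
    low (29-41) groups do not share borderline performers.
    """
    order = np.argsort(mean_msebt_scores)[::-1]   # best performer first
    high = order[0:13]
    intermediate = order[14:27]                   # skips rank 14 (index 13)
    low = order[28:41]                            # skips rank 28 (index 27)
    return high, intermediate, low

def correlation_strength(r):
    """Qualitative descriptor for |r| following Hopkins (2009)."""
    r = abs(r)
    if r <= 0.30:
        return "small"
    if r <= 0.49:
        return "moderate"
    if r <= 0.69:
        return "large"
    if r <= 0.89:
        return "very large"
    return "near perfect"

# Example pooled correlation between a screen score and an excursion distance:
# r, p = pearsonr(fms_scores, msebt_reaches); print(correlation_strength(r))
```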
Table 1 displays the participants' descriptive data and screening scores for each group when both the right (left leg reach), and left (right leg reach) legs were used for the mSEBT stance. No participant scored 0 for any of the screening exercises. There were no between-group differences for age (p = 0.47-1.00), body height (p = 1.00 for all between-group comparisons) or body mass (p = 1.00) for either grouping condition. There were also no significant differences in the deep squat (p = 1.00), the trunk stability push-up (p = 0.90-1.00), or the hurdle step (p = 0.06-1.00), the in-line lunge (p = 0.11-1.00) and the activestraight leg raise (p = 0.08-1.00) for either leg, for each mSEBT stance group dichotomization. Table 2 shows the mSEBT reach distances when the right and left stance leg mSEBT totals were used to delineate the groups. When both legs were used for the stance, the high-performing group was significantly (p ≤ 0.02) better than the low-performing group for all excursion measures, and significantly (p ≤ 0.01) superior in all but the anteromedial excursions when compared to the intermediate group. The intermediate-performing group performed significantly (p ≤ 0.01) better in all but the anteromedial excursions when compared to the low-performing group. Results The correlations between mSEBT and FMS scores are shown in Table 3. The trunk stability push-up had a moderate positive relationship (p = 0.02) with the right stance leg posteromedial excursion, and moderate negative relationships (p = 0.04) with the right and left stance leg anteromedial excursions. The left leg in-line lunge had a moderate positive relationship (p < 0.01) with the right-leg posteromedial excursion when the left leg was used for the stance. There were no other significant relationships between the mSEBT and the screen scores. Discussion To the authors' knowledge, this is the first study to investigate relationships between specific FMS exercises and dynamic stability as measured by the mSEBT in team sport athletes. The results of this study generally showed that there were no relationships between the screens and dynamic stability as measured by the mSEBT. When participants were dichotomized into high-, intermediate-, and low-performing dynamic stability groups, there were no significant differences in performance of any screening exercise (Table 1). Furthermore, only four correlations between the mSEBT and FMS exercises were significant, and two of these significant relationships suggested that a poorer score in the screen (the trunk-stability push-up) related to a further anteromedial excursion (Table 3). This was counter to the studies' hypothesis, and occurred even through the analyzed screens are said to challenge dynamic stability within a functional movement (Cook et al., 2006a(Cook et al., , 2006b. The results from this study appear to support the research that found the FMS to have limited to no relationship to athletic performance (Lockie et al., 2015;Okada et al., 2011;Parchmann and McBride, 2011). If the deep squat, the hurdle step, the inline lunge, the active straight-leg raise, and the trunk stability push-up had provided an indication of dynamic stability, it would have been assumed team sport athletes who exhibit better dynamic stability would also perform better in these screens. However, this was not the case. There were no differences between the groups comprising participants with high, intermediate, or low dynamic stability capabilities (Table 1). 
The results from this study imply that the qualities measured from functional lower-limb reaching and the mSEBT, which are valid tests of dynamic stability (Hertel et al., 2006;Olmsted et al., 2002;Robinson and Gribble, 2008), appear to be relatively disparate from that assessed in the FMS by the hurdle step and the in-line lunge. Journal of Human Kinetics -volume 47/2015 http://www.johk.pl These findings were also reinforced by the results from the correlation analyses (Table 3). There were only two significant positive relationships between the screens and the mSEBT (the trunk stability push-up and the in-line lunge with posteromedial excursions). This was despite previous research finding significant correlations between FMS exercises and a different measure of dynamic stability in the Y-balance test in soldiers (Teyhen et al., 2014). Nevertheless, even though there were significant relationships found by Teyhen et al. (2014) with screens including the deep squat, the hurdle step, and the in-line lunge, using parameters set by Hopkins (2009), the strength of these correlations documented was still relatively weak. Taken together with the between-group analysis from this study, any suggestion that exercises from the FMS can provide some type of measure of dynamic stability appear to be questionable. This is an important concern for strength and conditioning coaches who may use a screening tool such as the FMS, and what they can surmise about the results they attain from their athletes. Coaches would be better served to use valid assessments such as the mSEBT, which is also reinforced by findings from the current research. When either leg was used for the stance, the mSEBT distinguished team sport athletes with different dynamic stability capabilities (Table 2). This supports the work of Hertel et al. (2006), who stated that the posteromedial, medial, and anteromedial excursions best represented dynamic stability measured by reach distances. Furthermore, the mSEBT and its variations have been shown to relate to multidirectional speed (Lockie et al., in press), and can be improved through specific training (Filipa et al., 2010;Lockie et al., 2014b;Valovich McLeod et al., 2009). Therefore, strength and conditioning coaches could use the mSEBT to assess dynamic stability in their athletes, with the knowledge that it is applicable to team sport athletes, will delineate between athletes of different dynamic stability capabilities, and can be enhanced through appropriate training. There were certain limitations associated with this study. Although it is a valid test (Hertel et al., 2006), the mSEBT was the only measure of dynamic stability utilized. Indeed, there are several different dynamic stability assessments used by practitioners in the field (Dallinga et al., 2012), including the Y-balance (Teyhen et al., 2014) or hop-and-balance (Myer et al., 2006) tests. The FMS could potentially relate to these alternate assessments. Males and females can demonstrate different movement biomechanics during certain actions (McLean et al., 2004), and the combined gender approach may have influenced the study results. However, this approach had been used in previous FMS (Okada et al., 2011;Parchmann and McBride, 2011;Teyhen et al., 2014) and sports technique (Eikenberry et al., 2008;Guissard et al., 1992;Lockie et al., 2012;Spiteri et al., 2013) research, and thus was viewed as appropriate. 
Correlation analyses do not establish cause-andeffect between variables, in that factors such as the participants' physical characteristics, flexibility, technique, and strength can influence the statistical models that are derived (Brughelli et al., 2008). Lastly, the use of other methods of analysis, such as electromyography or force plates, would also be useful to elucidate any technical similarities between the characteristics of the FMS exercises and the mSEBT. Electromyography has been used in the literature to demonstrate leg muscle activation patterns during SEBT excursions (Earl and Hertel, 2001;Norris and Trudelle-Jackson, 2011), while a force plate has been used to track postural sway and the center of pressure pattern during a stability task (Brown and Mynark, 2007;Gribble et al., 2007). Nonetheless, this research is still valuable for strength and conditioning coaches, as the findings demonstrate that unlike the mSEBT, FMS exercises such as the deep squat, the hurdle step, the in-line lunge, the active straight-leg raise, and the trunk stability push-up have a limited capacity to indicate dynamic stability in team sport athletes. The results of the current study document the limited application of FMS exercises to provide some indication of dynamic stability in team sport athletes. The FMS may have value in monitoring movement deficits that could increase the risk of injury in athletes, although this is still to be confirmed. However, as for previous research (Lockie et al., 2013a;Lockie et al., 2015;Okada et al., 2011;Parchmann and McBride, 2011), the screens have restricted application to athletic performance. In contrast, the mSEBT can be used to delineate between team sport athletes © Editorial Committee of Journal of Human Kinetics of different dynamic stability capabilities. Strength and conditioning coaches who use the FMS as a measure of dynamic stability should be aware that the attained scores may not provide an accurate assessment of this capacity in their athletes. Thus, an assessment such as the mSEBT should also be included in an athlete's testing protocol. Coaches who use the mSEBT can be confident that they will be utilizing an assessment that will provide a valid assessment of dynamic stability in team sport athletes, which may also provide useful data for training progress or team selection.
2018-04-03T02:28:58.079Z
2015-09-29T00:00:00.000
{ "year": 2015, "sha1": "86b96bcda10dc19f91ea8e7fbd68e74dc54d3301", "oa_license": "CCBY", "oa_url": "https://content.sciendo.com/downloadpdf/journals/hukin/47/1/article-p19.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "86b96bcda10dc19f91ea8e7fbd68e74dc54d3301", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246600591
pes2o/s2orc
v3-fos-license
Dietary Effects of Carotenoid on Growth Performance and Pigmentation in Bighead Catfish ( Clarias macrocephalus Günther, 1864) : This study investigates the effects of supplemental carotenoid pigments on growth and color performance in bighead catfish ( Clarias macrocephalus ). Two experiments were undertaken to determine the appropriate types, feed duration, and dose of astaxanthin (As), canthaxanthin (Ca), and xanthophyll (Xa) pigments individually and in combination. In the first experiment, fish were fed with one control diet (basic diet), six experimental diets comprised of three diets of As, Ca, and Xa at a 100 mg/kg rate of supplementation, respectively, and three diets combinations of As + Ca, As + Xa, and Ca + Xa at a supplement rate of 50 mg + 50 mg/kg. The results showed no significant difference in weight gain (WG), specific growth rate (SGR), survival rate (SR), and feed conversion ratio of fish among treatments ( p > 0.05) after 6 weeks. The L* (Lightness) and a* (redness) values in the Xa diet were significantly lower than other treatments, while b* (yellowness) was significantly higher than in the control and others treatments ( p < 0.05). These values peaked after 4 weeks and remained stable until the end of the experiment. Consistently, the highest muscle carotenoid content (16.89 ± 0.60 mg/100 g) was found in the fish fed with the Xa diet. The Xa diet was selected for the second experiment. This experiment consisted of four Xa supplemented diets at rates of 25, 50, 75, and 100 mg/kg and a basal diet without any Xa supplementation. The results showed that there was no difference in the SGR or SR of fish fed various Xa levels ( p > 0.05). Fish fed the Xa diet of 75 mg/kg were the most preferred by consumers for the natural “yellowness” of muscle. Thus, the results suggested that additional carotenoid pigments did not affect the growth performance of fish. Farmers and feed producers could utilize Xa at an optimal dose of 75 mg/kg to enhance color performance in the market size of bighead catfish for at least 4 weeks prior to harvest. Introduction Bighead catfish Clarias macrocephalus is one of the most popular and economically important indigenous fish in Southeast Asia [1][2][3][4][5]. The fish has become one of the most important freshwater species for the aquaculture industry in Vietnam [3][4][5]. The market values of this fish not only depend on meat quality and taste but also rely on skin and muscle pigmentation performance. Buyers and consumers alike prefer good quality bighead catfish to have yellowish skin and muscle tone. This is the most important characteristic of fish quality at the market. Other studies reported that the color of the fish is the first characteristic perceived and is a determinant selection criterion, directly related to subsequent acceptance or rejection [6,7]. Farmed bighead catfish typically exhibit pale muscle color and do not have the natural attractive color found in wild fish. This issue negatively affects profitability for farmers because of reduced market prices and reduced consumer demand. To overcome this issue, Experimental Fish Juvenile bighead catfish (46.11 ± 1.19 g) were obtained from a reliable hatchery (Tam Loc hatchery, Can Tho City, Vietnam). Fish were placed in plastic containers with gentle aeration then transferred to the wet lab of the College of Aquaculture and Fisheries, Can Tho University. 
The fish were acclimatized to the experimental conditions for 2 weeks prior to use for the trial, and they were fed with a basal diet during this period. Experimental Design Two experiments were conducted to examine the appropriate dietary pigments supplementation for optimal growth and color performance of bighead catfish. In the first experiment, fish were fed with seven dietary levels or combinations of astaxanthin (As), canthaxanthin (Ca), and xanthophyll (Xa) (Manufacturer BASF, Germany), including a control diet without additional pigment supplementation (Basic diet only, Table 1). Three diets were supplemented with As, Ca, and Xa at the level of 100 mg/kg of feed, and three combination diets were supplemented with 50 mg As + 50 mg Ca/kg of feed, 50 mg As + 50 mg Xa/kg of feed, and 50 mg Ca + 50 mg Xa/kg of feed. Overall, the diets were labelled as control, As, Ca, Xa, As + Ca, As + Xa, and Ca + Xa diets, respectively. These reference levels were based upon the approved carotenoid pigments level for use in aquaculture feed in the United States and the European Union and range from 80 to 135 mg/kg of feed (USA: 21CFR Section and EU Code No. (EC, 2003b), regulation No: CD70/524/EEC). Table 1. Chemical composition of basal diet (dry matter basis). Ingredient Amount (%) Fish meal 1 25.0 Defatted soybean meal 2 35.0 Blood meal 3 7.00 Rice bran 4 15.0 Cassava meal 5 14.5 Fish oil 6 1.00 Premix mineral and vitamin 7 1.00 Shrimp soluble extract 8 1.00 Guar gum 9 0.5 Total 100 Proximate analysis (% as dry matter basis) Crude Protein 43.8 Crude Lipid 6.82 Ash 11.9 Carbohydrate 37.5 Gross energy (KJ/g) 19.5 1 Ca Mau fishmeal Vietnam. 2 Defatted soybean meal Maharashtra Solvent extraction LTD India. 3,4,5,6 Blood meal, rice bran, fish oil, cassava meal were imported and supplied by Viet Thang feed mill. 7 Feed formulation and proximate analysis of the basal experimental diet are shown in Table 1. This diet meets the optimum nutrient requirement for bighead catfish [4,5]. Experimental diets were prepared and processed according to Hien et al. [4,5]. In brief, the ingredients were ground in a hammer mill to pass through a mesh screen size of 0.8 mm. All ingredients were then mixed thoroughly by a mixer. Thereafter, extruded feed (2.0 mm) was dried in an oven at 45-50 • C for about 8-10 h and then stored at −20 • C until used. The chemical composition of diets was analyzed following the methods of AOAC [20]. Fish were held in a recirculating experimental tank system at a stocking density of 60 fish/200 L tank (approximately 13.8 kg/m 3 ). A total of 28 round composite tanks (250 L/tank) were used. In total, there were seven recycling systems, each consisting of four experimental tanks and filter tanks of 350 L. Biofilter tanks contained 87.5 litters of biofilter media (RKPlast Bioelement, Brorup, Denmark, surface area 750 m 2 /m 3 ) and a settlement tank (solid tank) of 120 L. All experimental recirculating systems were prepared at least 21 days prior to use, during which time water flow rates remained constant at 0.8-1 L/min. The experiment lasted for 6 weeks. The second experiment, this experiment was designed to assess the effect of xanthophyll pigment on growth and color in bighead catfish. Results from Experiment 1 showed that dietary xanthophyll gave the best improvement in color after 4 weeks. This experiment was carried out to examine the appropriate supplemental doses of xanthophyll pigment for growth and color performance to meet consumer tastes. 
Fish were fed with five diets consisting of 0.0 mg (Basal diet), 25, 50, 75, and 100 mg Xa/kg of feed, hereafter called control, 25 mg Xa, 50 mg Xa, 75 mg Xa, and 100 mg Xa. The basal diet (Table 1) and experimental design and set-up were also the same as in Experiment 1. All experiments were carried out in accordance with national guidelines on the protection and experimental animal welfare in Vietnam, Law of Animal Health, 2015 (Report number: VM5068). Growth Performance Parameters Fish weight was measured at the beginning of the experiment and at two-week intervals until the end of the experiment. Growth performance parameters such as weight gain (WG), specific growth rate (SGR), and survival rate (SR) were calculated using the followings equations: Survival rate (SR, %) = (Final number of fish)/(Initial number of fish)× 100 (3) where: W f and W i are the final and initial wet weight of bighead catfish; T = time duration of the experiment; Ln = normal logarithm. Skin and Muscle Pigmentation The colour performance in bighead catfish was assessed by a combination of three different methods, a colorimeter, sensory assessment methods, and by examining accumulated carotenoids in the muscle of fish as follows: The color change was examined using a CR200 Colorimeter (Minolta Camera Ltd., Osaka, Japan) [21,22]. Fish were measured at the beginning of the experiment and every two weeks until the end of the experiment. Here, 12 fish per treatment (3 fish per tank) were assessed. All measurements were shown in the colorimetric space L*, a*, b* according to Commission Internationale de l'Éclairage guidance [23]. Each measurement determined and recorded standard L*, a*, b* values. The L*, a*, b* values were measured at various positions on each fish (body skin, abdominal skin, and muscle), as shown in Figure 1. Each measurement was repeated three times for body skin, abdominal skin, and muscle. Finally, average color values L*, a*, b* for experimental fish were calculated and recorded. The L* value represents the lightness from black to white on a scale between 0 and 100, while the a* value represents a shade from red (+) to green (−), and the b* value represents shade from yellow (+) to blue (−) in the color measurement of fish. To evaluate the effect of different treatments on the catfish's color, the mean (L*, a*, b*) color value of the bighead catfish for each treatment was compared to that of the control group. The comparison metric was the color difference defined by the International Commission on Illumination (CIE) in 1976, which has been extensively applied in various studies related to food color measurement and comparison [24][25][26]. Accordingly, the mean color difference between a non-control catfish with the mean color c(L*, a*, b*) and a treated catfish with the mean color c(L t *, a t *, b t *) in their L*, a*, b* color space was calculated as seen in Equation (4). Sensory Evaluation Method The appearance of bighead catfish color was also assessed through the sensory evaluation method of Meilgaard et al. [27]. The apparent color of the fish was scored on a scale from 1 to 9. The color of the control was scored at 6, while the other experimental samples were coded and scored in comparison. Scores increased more than 6 if the yellow was darker than the control and decreased less than 6 if the yellow color was lighter than the control. 
This evaluation method was carried out by 10 independent assessors who had normal colour vision and were able to detect anomalies in the appearance of fish in a consistent manner. Carotenoid Analysis At the end of the experiment, the accumulated carotenoid level in the flesh (muscle tissue and skin) of fish was also analyzed following the method described by previous studies [28,29]. Samples were randomly taken from three fish per tank. After anesthetizing with clove oil, each fish was carefully dissected, and muscle was immediately sampled for carotenoid analysis. Briefly, the flesh samples were randomly collected from 12 fish per treatment (three fish per tank). The flesh samples were frozen at −20 °C and then homogenized in frozen conditions using a grinder. Carotenoids were extracted from representative 5.0 g subsamples using 3 × 25 mL of acetone. Acetone extraction was carried out three times until the solvent became colorless. After the last acetone addition, extraction was allowed to proceed for 24 h, and the samples were then centrifuged at 4000 RPM for 5 min. The absorbance of the extracts was recorded at 470 nm using a Hitachi U5100 (Tokyo, Japan) spectrophotometer, and the carotenoid concentration (mg/mL) was determined by reference to a standard curve. The carotenoid concentration in the muscle sample (mg/100 g) was calculated based on the dilution and the weight of muscle samples.
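For readers who want to reproduce the growth and colour metrics defined above, the short sketch below implements the weight gain, specific growth rate, survival rate, and the CIE 1976 colour difference of Equation (4). It is an illustrative re-implementation, not code from the study, and the example input values are hypothetical.

```python
import math

def weight_gain(w_initial_g, w_final_g):
    """WG (%) = (Wf - Wi) / Wi * 100."""
    return (w_final_g - w_initial_g) / w_initial_g * 100.0

def specific_growth_rate(w_initial_g, w_final_g, days):
    """SGR (%/day) = (ln Wf - ln Wi) / T * 100."""
    return (math.log(w_final_g) - math.log(w_initial_g)) / days * 100.0

def survival_rate(n_initial, n_final):
    """SR (%) = final number of fish / initial number of fish * 100."""
    return n_final / n_initial * 100.0

def delta_e_cie76(lab_reference, lab_sample):
    """CIE 1976 colour difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(lab_reference, lab_sample)))

# Hypothetical example values (not taken from the study's tables):
print(round(weight_gain(50.0, 80.0), 1))                 # 60.0 %
print(round(specific_growth_rate(50.0, 80.0, 56), 2))    # ~0.84 %/day
print(round(survival_rate(60, 52), 1))                   # 86.7 %
print(round(delta_e_cie76((52.0, 3.1, 8.0), (49.5, 2.4, 12.6)), 2))  # ~5.28
```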
Statistical Analysis All data were calculated as mean values and standardized deviations (Mean ± SD) using Microsoft Excel 2013. Two factorials analyses of various pigments and times were employed by Two-way ANOVA (IBM SPSS Statistics 21, SPSS Inc., Chicago, IL, USA). Mean comparisons between treatments were made using a one-way ANOVA. Differences between means were evaluated for significant differences by Duncan's test at p < 0.05. Growth Performance, Feed Utilization, and Survival Rate The growth performance, feed utilization, and survival rate of bighead catfish fed with the various dietary pigments are shown in Table 2. The results show that the highest final weight (W f ) was found in those fish fed the xanthophyll diet. However, statistical analysis by one-way ANOVA showed no significant difference in specific growth rate (SGR) among the dietary pigments treatments (p > 0.05). The average final weight (W f ) varied from 56.8 to 59.5 g/fish, and the SGR fluctuated from 0.35 to 0.43%/day. High survival rates ranging from 80.8 to 89.2% were recorded for all treatments. Color Performance Statistical analysis by two-way ANOVA showed that there was a significant difference in the L*, a*, and b* values for body skin, abdominal skin, and muscle of bighead catfish in all treatments over the sampling weeks (p < 0.001) ( Table 3). Fish fed with xanthophyll supplemented diet showed the most yellow color in body skin, abdominal skin, and muscle compared with other diets (Figures 2 and 3). The highest values for the fish fed the Xa diet were achieved at 4 weeks and remained stable or reduced gradually at week 6. These values were relatively improved and consistent for body skin, abdominal skin, and muscle in all diets after two weeks of the feeding trial. Among the diets, the L* (Lightness) and a* (redness) values in the xanthophyll supplemented diet (Xa diet) were significantly lower than in the other treatments, but not significantly compared to the control treatment, while b* (yellowness) was significantly higher than the control and others treatments (p < 0.05). Lightness values of the body skin, abdominal skin, and muscle of fish are shown in Figure 4. Statistical analysis by one-way ANOVA showed that there were no significant differences in the L* values of body skin, abdominal skin, and muscle for the control treatment during the feeding trial (p > 0.05). The body skin lightness of fish fed with carotenoid supplemented diets showed a slight increase with increasing feeding duration, while the abdominal skin and muscle lightness showed a slight decrease. Values are mean ± standard deviation (SD) of triplicates (n = 3); a,b,c,d,e,A,B are statistical symbols. a,b Mean values with difference superscripts in the same column within sampling time (week) are significantly different (p < 0.05); A,B Mean values with difference superscripts in the same column are significantly different (p < 0.05) in color performance of animal between sampling time (week) by two-way ANOVA. Abbreviation: As, Astaxanthin; Ca, Canthaxanthin; Xa, Xanthophyll. Redness values of the abdominal skin of all treatments tended to increase during the experiment, while this phenomenon did not appear on the body or in muscles tissue ( Figure 5). Both Ca and Ca combination groups showed lower a* values on the abdominal skin than those of As and Xa groups at the end of the experiment. 
As to the redness of the body and abdominal skin, fish fed with carotenoid supplemented diets were redder than those of the control group. The yellowness (b*) values of fish fed with carotenoid pigments supplemented diets tended to increase over the experiment in body skin, abdominal skin, and muscle tissues (Figure 6). The highest body skin yellowness of the Xa group was significantly higher than those of the control group (p < 0.05) at 2, 4, and 6 weeks (Figure 6A). However, the b* values of the abdominal skin of fish fed with the control treatment showed no significant difference (p > 0.05) to those of As, Ca, and/or the combined groups (Figure 6B). In contrast to As and As combination groups, Xa and Ca + Xa groups exhibited significantly higher b* values in muscle tissue. Fish fed with the Xa group reached the highest b* values in muscles at 4 weeks, significantly (p < 0.05) higher compared to other groups (Figure 6C). The sensory evaluation and accumulated carotenoids in the muscle of fish fed with the various pigments diets are presented in Table 4. Sensory evaluation showed that the body skin and muscle color of the bighead catfish fed Xa diet was the most appreciated, followed by the combination of As + Xa pigments. Statistical analysis showed that there was a significant (p < 0.05) color change of fish fed the Xa diet compared with other pigments diets. Similarly, accumulated carotenoid levels in the muscle (16.89 mg/100 g) of fish fed the Xa diet were significantly (p < 0.05) higher compared to other treatments. Growth Performance and Survival The growth performance and survival rates of bighead catfish fed different rates of Xa diet are summarized in Table 5.
Statistical analysis showed that there was no difference in the SGR or SR of fish fed various Xa levels (p > 0.05). However, feed utilization of the Xa diet at 75 mg/kg of feed was more efficient than in the other treatments. Table 5. Growth performance, initial weight (Wi), final weight (Wf), specific growth rate (SGR), survival rate (SR), feed conversion ratio (FCR) of bighead catfish fed various xanthophyll diets for 4 weeks. Color Performance The color performance of bighead catfish fed various supplementation levels of Xa is shown in Table 6 and Figure 7. The L* value of the body differed significantly (p < 0.05) from the control treatment after 4 weeks. In contrast, the a* value of the pigment supplemented treatments was reduced. Furthermore, the b* values for fish fed the diets supplemented with 75 mg (b*: 10.6) and 100 mg (b*: 11.3) xanthophyll were significantly higher (p < 0.05) compared with the other treatments. Values are means of three replicates ± S.D; within a column, values with the same letters are not significantly different (p > 0.05). Abbreviation: As, Astaxanthin; Ca, Canthaxanthin; Xa, Xanthophyll. Sensory evaluation and accumulated carotenoids in the muscle tissue of fish fed various levels of xanthophyll in the diet are presented in Table 7 and Figure 8. The highest score was also observed for the body skin and muscle color of those fish fed diets supplemented with 75 mg xanthophyll, followed by the 100 mg/kg treatment. Carotenoid accumulation in the muscle of bighead catfish fed a diet containing 75 mg/kg of feed was significantly higher (p < 0.05) than in other treatments. Discussion Besides color enhancement, carotenoids have also been found to have various other beneficial functions in aquatic species, including the improvement of broodstock performance [30,31], improved disease resistance [32,33], and improved growth performance [34,35]. In the present study, the growth performance and survival rate of bighead catfish were not significantly affected by the dietary supplementation of carotenoid pigments. Similarly, previous studies have also found that supplementing carotenoids into the diet did not affect the growth and survival of Atlantic salmon [36,37], rainbow trout [33], gilthead seabream [38], and flame-red dwarf gourami Colisa lalia [20]. A similar study was also conducted on the yellow cichlid Labidochromis caeruleus: the growth performance and survival of the fish were not affected by the xanthophyll diet, and adding xanthophyll to the feed may improve the feed coefficient [39]. Moreover, the results of the present study show that the efficiency of feed utilization in fish fed dietary pigments was improved. The color performance of fish fed dietary carotenoid supplements varies depending on the type and feeding period of the dietary pigmentation [7]. In the present study, statistical analysis by two-way ANOVA showed a clear significant difference in the color performance of bighead catfish fed various types of dietary pigmentation for various feeding periods (Table 3).
Among diets, bighead catfish displayed a golden yellow appearance after 4 weeks of feeding with a xanthophyll diet and differed significantly compared to fish fed diets containing other carotenoid pigments. The color lasted for 6 weeks (Figures 2 and 3). Another study showed that adding astaxanthin pigment to sea bream feed took 8 weeks [38], whilst adding astaxanthin pigment to flounder feed or adding carotenoids to hybrid catfish feed required 12 weeks [40]. The difference may be due to feed intake and its utilization by fish in the present study compared to previous studies. In agreement with the previous study, the ability to metabolize, absorb, and accumulate the pigmentation in the skin and muscle tissue varies according to the species [7]. A high a* value in the presence of astaxanthin was found in the present study. Bighead catfish is a species characterized by its yellow color, and by adding astaxanthin in this experiment, it lost its natural color because of increased redness. Similar results were found for yellow croakers when combining xanthophyll and astaxanthin [12]. When astaxanthin was added, the red color appeared, and the xanthophyll supplement appeared more yellow. The b* value was yellow; therefore, it had a high value in the treatments with Xa supplementation.
Therefore, for improving the appearance and marketability of catfish, a xanthophyll supplement is suggested because it gives the skin and muscle a yellow color, in contrast to the addition of other carotenoids. The degree of pigmentation in the muscle tissue of aquatic animals when pigments are added to the feed depends on the species [7,12]. In the present study, the amount of carotenoids accumulated in muscle varied with the type of carotenoid supplied (Table 4). The carotenoid content was lowest in the control group, while the highest muscle carotenoid accumulation was found in the treatment supplemented with the xanthophyll diet. Similar results were also found in other fish species [8,34,35,41], and additional synthetic pigments and/or natural carotenoid sources also enhanced carotenoid levels in the muscle of European seabass [17] and spinefoot rabbitfish [18]. Other studies reported that fish were better able to accumulate yellow pigments (lutein and zeaxanthin) than red pigments (canthaxanthin and astaxanthin) [14,42,43]. The addition of astaxanthin pigment (120 mg/kg feed) to pompano for 30 days showed the clearest color and highest accumulation in fish muscle tissue. In addition, the sensory evaluation also provided clear support for the yellowness of bighead catfish fed the xanthophyll diet (Tables 4 and 7). In the other treatments, where astaxanthin or canthaxanthin (orange) alone were added to the diet, fish had a dark coloring, but the yellow color was not clearly shown. The combination of the two pigments As + Xa also gave a golden yellow color with good sensory scores, but not as high as Xa alone. Similarly, a combination of astaxanthin and xanthophyll at a ratio of 1:1 also showed an improvement in large golden-yellow croakers [44]. The amount of pigment added to the feed affects the color and the level of carotenoid accumulation of the fish. At 25 and 50 mg Xa/kg of feed, the color of the fish did not meet requirements, as seen from the low sensory scores and low accumulated carotenoids in the flesh. The 75 mg Xa/kg application showed the best results in bighead catfish. A similar conclusion on the use and rate of Xa (75 mg) to achieve the best color was reached for the golden croaker [12]. Conclusions This study examined appropriate carotenoid pigments for enhancing the color performance of bighead catfish. Dietary xanthophyll showed the best result for color performance after 4 weeks. Adding xanthophyll pigment at a level of 75 mg/kg of feed over a feeding period of 4 weeks is recommended for the achievement of golden-yellow skin and muscle tissue in bighead catfish C. macrocephalus, matching market demands and consumers' appreciation.
2022-02-06T16:18:06.370Z
2022-02-04T00:00:00.000
{ "year": 2022, "sha1": "966202aa0c87522e0a11d0c255f696ca9d836ed4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2410-3888/7/1/37/pdf?version=1643959440", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "75ac8ea78b90294c37ac681bbbbd4571b412a2a8", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
1787505
pes2o/s2orc
v3-fos-license
Towards scalable entangled photon sources with self-assembled InAs/GaAs quantum dots The biexciton cascade process in self-assembled quantum dots (QDs) provides an ideal system for a deterministic entangled photon pair source, which is essential in quantum information science. Entangled photon pairs have recently been realized in experiments after eliminating the FSS of the exciton using a number of different methods. However, so far QD entangled photon sources are not scalable, because the emission wavelengths of the QDs differ from dot to dot. Here we propose a wavelength-tunable entangled photon emitter on a three-dimensional stressor, in which the FSS and exciton energy can be tuned independently, allowing photon entanglement between dissimilar QDs. We confirm these results using atomistic pseudopotential calculations. This provides a first step towards the future realization of scalable entangled photon generators for quantum information applications. Entangled photon pairs play a crucial role in quantum information applications, including quantum teleportation [6], quantum cryptography [7] and distributed quantum computation [8], etc. The biexciton cascade process in a self-assembled QD has been proposed [1] to generate "event-ready" entangled photon pairs. As shown in Fig. 1(a), a biexciton decays into two photons via two paths of different polarizations |H⟩ and |V⟩. If the two paths are indistinguishable, the final result is a polarization-entangled photon pair state [1,3] (|H_XX H_X⟩ + |V_XX V_X⟩)/√2. However, the |H⟩- and |V⟩-polarized photons have a small energy difference, known as the fine structure splitting (FSS), which is typically about −40 to +80 µeV in InAs/GaAs QDs [9][10][11], much larger than the radiative linewidth (∼1.0 µeV) [3,12]. Such a splitting therefore provides "which way" information about the photon decay path that can destroy the photon entanglement, leaving only classically correlated photon pairs [3,12]. Great efforts have been made to eliminate the FSS of excitons in QDs, and significant progress has been made in understanding [13][14][15][16] and manipulating the FSS in self-assembled QDs in recent years. Various techniques have been developed to eliminate the FSS in QDs [4,[17][18][19][20][21][22][23]. In particular, it was recently found that by applying combined uniaxial stresses, or stress together with an electric field, it is possible to reduce the FSS to nearly zero for general self-assembled InAs/GaAs QDs [5,23,24]. However, to build practical QD devices for applications in quantum information science, they must be scalable. One possible application of scalable entangled photon emitters is shown in Fig. 1(b): a quantum repeater to distribute entanglement over long distances. The set-up of Fig. 1(b) can also be used to generate multi-photon entanglement [25,26]. On-demand entangled photon emitters have great advantages over the traditional parametric down-conversion process for generating multi-photon entanglement, which has a finite probability of generating more than one photon pair in an excitation cycle [7]. In these applications, the wavelengths of the joint photons have to be identical, i.e., λ2 = λ3 in Fig. 1(b). Besides, one often needs to interface the entangled photon pairs with other quantum systems, such as NV centers, cold atoms, or other solid-state quantum systems. These applications also require the wavelengths of the QDs to be tunable while at the same time keeping the FSS nearly zero.
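To make the "which-way" argument quantitative, the sketch below (not from the paper) models the two-photon state of the cascade when the intermediate exciton is split by a finite FSS S: the relative phase between |H_XX H_X⟩ and |V_XX V_X⟩ precesses at S/ħ during the exciton lifetime, and averaging over the exponentially distributed emission time suppresses the coherence. Entanglement is quantified by the Wootters concurrence; the lifetime τ = 1 ns and the FSS values swept below are illustrative assumptions.

```python
import numpy as np

HBAR_UEV_NS = 0.6582  # reduced Planck constant in ueV * ns

def cascade_density_matrix(fss_uev, tau_ns, n_steps=5000):
    """Two-photon polarization density matrix, time-averaged over the exciton decay.

    |psi(t)> = (|HH> + exp(i*S*t/hbar) |VV>) / sqrt(2), with the exciton emission
    time t exponentially distributed with mean tau_ns."""
    hh = np.array([1, 0, 0, 0], dtype=complex)
    vv = np.array([0, 0, 0, 1], dtype=complex)
    t = np.linspace(0.0, 20.0 * tau_ns, n_steps)
    w = np.exp(-t / tau_ns)
    w /= w.sum()                      # discrete waiting-time weights
    rho = np.zeros((4, 4), dtype=complex)
    for ti, wi in zip(t, w):
        psi = (hh + np.exp(1j * fss_uev * ti / HBAR_UEV_NS) * vv) / np.sqrt(2.0)
        rho += wi * np.outer(psi, psi.conj())
    return rho

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)))
    lam = np.sort(lam.real)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

for fss in (0.0, 0.5, 1.0, 2.0, 5.0):   # FSS in ueV
    c = concurrence(cascade_density_matrix(fss, tau_ns=1.0))
    print(f"FSS = {fss:3.1f} ueV  ->  concurrence ~ {c:.2f}")
```

In this toy model the concurrence drops from 1 towards zero once the splitting exceeds roughly ħ/τ (about 1 µeV for a 1 ns lifetime), which is consistent with the statement that the FSS must be pushed below the radiative linewidth.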
However, it was found that there are strong correlations between the exciton energy and the FSS of the exciton [18,27]. Furthermore, because of the random alloy distribution and other uncontrollable effects, the physical properties of QDs differ dramatically from dot to dot. Therefore, it is still a great challenge to build such scalable entangled photon generators using dissimilar quantum dots. The independent tunability of the FSS and exciton energy is therefore essential for scalable entangled photon emitters. We demonstrate such a tunability by proposing a three-dimensional stressor for QDs. Our basic setup is schematically shown in Fig. 2(a). We consider QDs that are tightly glued to the yz plane of a piezoelectric lead zirconate titanate (PZT) ceramic stack [17]. The [100] axis of the QD sample is aligned to the polar (z) axis of the PZT, whereas the [010] and [001] axes of the QDs are aligned to the y and x axes of the PZT, respectively. Two independent in-plane electric voltages, Vz and Vy, are applied to the PZT device as shown in Fig. 2(a), which generate electric fields Fz and Fy along the PZT z and y axes, respectively. The electric fields induce in-plane strains in the QDs; here d33, d31 and d15 are the piezoelectric coefficients of the PZT, d⊥ = (d33 + d31)S12/(S11 + S12), and S11, S12 and S44 are the elastic compliance constants of GaAs. The in-plane strains generated in the QDs by the electric fields Fz and Fy are shown in Fig. 2; as demonstrated previously [24], one can almost fully eliminate the FSS in a general InAs/GaAs QD by a suitable combination of such strains. Figure 2. The two bias voltages Vz and Vy are applied to generate in-plane strain, which is used to tune the FSS of the exciton. The pressure p[001] is used to tune the exciton energy. The blue and red structures represent the shapes of the QDs before and after the applied voltages and stress, respectively. To tune the energy of the exciton, we apply a stress along the [001] direction of the QD sample [see Fig. 2(b)], which can be easily implemented in experiments. This pressure generates the strain e_zz in the QDs. Now we have a device that can freely tune the 3D strain of the QDs. Next we show that the device is able to tune the exciton emission energy in a wide range while keeping the FSS minimal (< 0.1 µeV). To see if our device really works, we perform atomistic pseudopotential calculations (see Methods) to confirm the above predictions. We have calculated 8 (In,Ga)As/GaAs dots. The details of the structure and alloy composition are given in Table S4 of the Supplementary materials [28]. The results of two dots, QD-A and QD-B, are presented in Fig. 3(a). The results are obtained in the following way: First, in the absence of p[001], we carefully choose the in-plane electric fields Fz and Fy to tune the strain tensor so that the FSS of the exciton is reduced to nearly zero [24]. For QD-A, the applied in-plane electric fields are Fz(A) = 9.6 kV/cm and Fy(A) = 3.3 kV/cm, whereas for QD-B, the electric fields are Fz(B) = 3.5 kV/cm and Fy(B) = 4.3 kV/cm, respectively. We then switch on the perpendicular stress to study the evolution of the exciton energy and FSS as functions of p[001]. Figure 3(a) depicts the exciton and biexciton emission energies for QD-A and QD-B as functions of p[001], while keeping the in-plane electric fields Fz and Fy (and thus the in-plane strain) unchanged. Although, in practice, one can only apply positive (compressive) pressure to the QDs in our device, we plot the results for negative pressure as well for theoretical interest.
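The full field-to-strain mapping is given in the original reference, but the auxiliary coefficient d⊥ quoted above is easy to reproduce, and the overall strain scale can be estimated. The sketch below computes d⊥ from assumed PZT piezoelectric coefficients (typical ceramic values, not the paper's) and handbook GaAs compliances, and then gives an order-of-magnitude strain for the fields quoted for QD-A, assuming simply e ~ d33·F along the corresponding PZT axis. The results described next come from the atomistic calculations; this snippet is only a back-of-the-envelope check.

```python
# Assumed PZT piezoelectric coefficients (typical ceramic values, not from the paper), in m/V
d33, d31 = 500e-12, -210e-12
# GaAs elastic compliances (handbook values), in 1/Pa
S11, S12 = 1.17e-11, -0.366e-11

# Auxiliary coefficient defined in the text
d_perp = (d33 + d31) * S12 / (S11 + S12)
print(f"d_perp = {d_perp:.3e} m/V")

def strain_scale(field_kV_cm, d=d33):
    """Order-of-magnitude strain transferred to the dot for a given field.
    Assumes e ~ d * F along the corresponding PZT axis (illustrative only)."""
    return d * field_kV_cm * 1e5          # 1 kV/cm = 1e5 V/m

# Fields quoted for QD-A: Fz = 9.6 kV/cm, Fy = 3.3 kV/cm
for name, f in (("Fz(A)", 9.6), ("Fy(A)", 3.3)):
    print(f"{name} = {f} kV/cm  ->  strain ~ {strain_scale(f):.1e}")
```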
We find that the exciton energy can be tuned in a wide range of about 20 meV when p [001] change from -200 MPa to 200 MPa, with the slope of ∼ 6 meV/100 MPa for both QDs. The change of exciton energy is comparable with the full width at the half maximum of a general QDs ensemble. These results suggest that in principle, the exciton energies of most QDs grown in the same sample can be tuned to identical using our scheme. The corresponding results for FSS are presented in Fig. 3(b). Remarkably, the FSS change with p [001] is rather small. For QD-A, the FSS [the red dots in Fig. 3 has somehow stronger dependence of p [001] , which reaches approximately 0.5 µeV at p [001] =± 200 MPa. This is nevertheless still smaller than the homogeneous broadening of the spectral (∼ 1 µeV), which is the upper limit for entangled photon generation. In this situation, it is possible to further reduce the FSS at given p [001] , by tuning the in-plane electric fields F z and F y . The blue dots are the FSS of QD-B after such optimization. By slightly changing F y (B) from 4.3 kV/cm to 4.5 kV/cm, the FSS reduces from approximately 0.5 µeV to approximately 0.08 µeV at p [001] =200 MPa. This change will shift the exciton energy by only about 0.02 meV. This energy shift can be compensated by increasing p [001] by 0.36 MPa, which hardly change the FSS. In such way, we can tune the FSS to nearly zero at any given exciton energy in the range in only one or two iterations. We also calculate the exciton radiative lifetimes under p [001] . The exciton lifetimes for QD A and QD B are around 1 ns, and change little under p [001] , which is good for the proposed device applications. More results for dots with different geometries and alloy compositions are given in Table S5 of the supplemen-tary materials [28]. We fit the atomic pseudopotential calculated results by a 2×2 model [16,24]. Although it is easy to understand that in-principle the FSS and exciton energy can be tuned simultaneously to desired values by suitable combination of three linearly independent external fields from the 2×2 model, there is an additional advantage that in our scheme the exciton energy and FSS can be tuned almost separately, i.e., the in-plane strain have very strong effects on the FSS, and relatively small effect to the exciton energy. In contrast, p [001] have strong effect on the exciton energy, but rather small effect to the FSS. The (nearly) independent tuning of FSS and exciton energy is an enormous advantage for the scalable entangled photon sources. The electric field may also be used to tune the FSS [4,23]. However, at the same time the exciton energies change dramatically under electric field due to the stark effects. It is therefore harder to tune both quantities to the target values, which requires to tune the three external fields simultaneously. Now we try to understand the above results in several different levels. First we would like to understand why in-plane strains have small effects on E X , but p [001] have large effect on E X ? Because the envelope functions of the electron and hole states change little if the external strain is not very large, the direct electron-hole Coulomb interaction also change little (See Figure S1 in the supplementary materials[28]). The change of exciton energy is therefore mainly determined by the single-particle energies gap E g . We can estimate the slope of exciton emission energy (or recombination energy) to the stress as, dE(X 0 ) dp ≈ dE g dp . 
If we neglect the O(p²) terms, the slope of the band gap under stress along the [001] direction can be written according to the Bir-Pikus model [28]. Here a_g = a_c − a_v = −6.08 eV is the deformation potential for the band gap, and a_c, a_v are the deformation potentials for the conduction band and valence bands, respectively; b_v = −1.8 eV is the biaxial deformation potential of the valence bands. Because of the cancellation between the first term and the second term in Eq. 4, the in-plane stresses have small effects on the band gap. On the other hand, the stress along the [001] direction has a much larger impact on the exciton energy because the first term adds up with the second term. The second question is why the in-plane stresses (strains) have a more important influence on the FSS than the [001] stress (strain). Intuitively, as shown in Fig. 2(b), Fz and Fy change the in-plane anisotropy of the QDs, whereas p[001] does not. The microscopic mechanism of strain tuning of the FSS in self-assembled InAs/GaAs QDs has been studied in Ref. 29, where some of us derived analytically the change of the FSS of excitons under external stresses using the Bir-Pikus model. For simplicity, we illustrate the results using a 6×6 model (Eq. 5), where K_od is the off-diagonal element of the exchange integral matrix, equivalent to half the FSS, and κ, δ and K are exchange integrals over different orbital functions [29]. In particular, 2K ∼ 300-400 µeV is approximately the dark-bright exciton energy splitting. The exchange integrals over different orbital functions change only slightly under external strain. The change of the FSS is mainly due to band mixing [29] (Eq. 6), where R, Q, ∆, S are parameters of the Bir-Pikus model (see Supplementary materials [28]). As one can see from Eq. 6, Q only appears in the denominator and has a much larger value than R and S; therefore, the change of ε+ under stress mainly depends on the slopes of R and S. To conclude, we proposed a novel portable device that allows the FSS and exciton energies of (In,Ga)As/GaAs QDs to be tuned (nearly) independently. This provides a first step towards the future realization of scalable entangled photon pair generators for quantum information applications, such as long-distance entanglement distribution, multi-photon entanglement, and interfaces to other quantum systems, etc. The device can be implemented using current experimental techniques. METHODS We model the InAs/GaAs quantum dots by embedding the InAs dots into a 60×60×60 8-atom GaAs supercell. The QDs are assumed to be grown along the [001] direction, on top of a one-monolayer InAs wetting layer [30]. To calculate the exciton energies and their FSS, we first have to obtain the single-particle energy levels and wavefunctions by solving the Schrödinger equation, where V_ps(r) = V_SO + Σ_{n,α} v_α(r − R_{n,α}) is the superposition of the local screened atomic pseudopotentials v_α(r) and the total (non-local) spin-orbit (SO) potential V_SO. The atom positions {R_{n,α}} of atom type α at site n are obtained by minimizing the total strain energy arising from the dot-matrix lattice mismatch using the valence force field (VFF) method [31]. The pseudopotentials of the InAs/GaAs QDs are taken from Ref. 32 and have been well tested. The Schrödinger equations are solved via the Linear Combination of Bulk Bands (LCBB) method [33].
The exciton energies are calculated via the many-particle configuration interaction (CI) method [34], in which the many-particle exciton wavefunctions are expanded in Slater determinants for single excitons and biexcitons constructed from all of the confined single-particle electron and hole states. The exciton energy is obtained by diagonalizing the full Hamiltonian in the above basis, where the Coulomb and exchange integrals are computed numerically from the pseudopotential single-particle states, using the microscopic position-dependent dielectric constant. Including spin, the ground-state exciton is fourfold degenerate. The electron-hole Coulomb interactions leave this fourfold degeneracy intact. The FSS arises from the asymmetric electron-hole exchange matrix [13]. The piezoelectric effects were ignored in the calculation, as it was shown in Ref. 35 that the FSS does not change much in InAs/GaAs QDs when the piezoelectric effects are included.
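As a toy illustration of how the FSS emerges from the asymmetric electron-hole exchange matrix mentioned above (not part of the paper's CI machinery), one can diagonalize the effective 2×2 Hamiltonian of the two bright exciton states: the off-diagonal exchange element mixes the two bright states and splits them. The numerical values below are hypothetical.

```python
import numpy as np

def bright_exciton_fss(delta1_uev, delta2_uev, e0_uev=0.0):
    """Fine-structure splitting of the two bright exciton states.

    H = [[e0 + delta1, delta2],
         [delta2,      e0 - delta1]]
    delta1: diagonal exchange asymmetry, delta2: off-diagonal exchange element.
    The eigenvalue splitting is 2 * sqrt(delta1**2 + delta2**2)."""
    h = np.array([[e0_uev + delta1_uev, delta2_uev],
                  [delta2_uev, e0_uev - delta1_uev]])
    e = np.linalg.eigvalsh(h)          # ascending eigenvalues
    return e[1] - e[0]

# Hypothetical exchange parameters in ueV
print(bright_exciton_fss(20.0, 15.0))   # 2 * sqrt(400 + 225) = 50.0 ueV
print(bright_exciton_fss(0.0, 0.0))     # degenerate bright doublet -> FSS = 0
```

External strain or electric fields tune delta1 and delta2; driving both towards zero is what brings the FSS below the radiative linewidth in the schemes discussed above.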
2014-12-09T01:31:44.000Z
2014-12-09T00:00:00.000
{ "year": 2014, "sha1": "8f0ab4fed20751ace4f3207a2ee961ee9b121462", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1412.2826", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8f0ab4fed20751ace4f3207a2ee961ee9b121462", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
213994812
pes2o/s2orc
v3-fos-license
Dynamic analysis of the elasto-plastic behaviour of buildings and structures in the SCAD ++ software package The problem formulation of nonlinear equations of motion obtained by the finite element method and applied to the dynamic analysis of buildings and structures is presented. The elasto-plastic behaviour of reinforced concrete structural elements is described using the plastic flow theory; moreover, for concrete, the Drucker-Prager yield criterion is used for bending elements, and the yield surface, which coincides in shape with the Geniev strength surface, for compressed-bending elements. Concrete degradation due to crack opening is simulated by the descending branch of the σ − ε diagram. The plastic flow theory with the von Mises yield criterion describes the behaviour of reinforcement. Using the aforementioned constitutive models, a finite element library was developed, including plane shell quadrilateral and triangular finite elements based on the Mindlin-Reissner shear theory as well as a two-node spatial frame finite element based on the Timoshenko shear theory. The seismic analysis of a 3-D model of a multi-storey building is considered as an example. Introduction Nonlinear dynamic analysis allows the behaviour of the design model to be brought significantly closer to the real behaviour of buildings and structures, especially when dynamic loads cause damage and partial destruction of structural elements. In this article, we will limit ourselves to taking into account only physical nonlinearity and the impact of seismic loads. We will consider the elasto-plastic behaviour of reinforced concrete structures, and the degradation of concrete caused by the formation of cracks will be simulated by the descending branch of the σε diagram for concrete. In many works, for example, in [1], [16], [17], [18], [22], [24], simplified nonlinear models are used. Most often, these are nonlinear hinges that realize lumped plasticity in one way or another, in which it is assumed that all other elements work linearly and elastically. In addition, a number of papers use nonlinear pushover analysis and/or a unimodal approximation of dynamic analysis, which can be correct if the first natural vibration mode contains a significant percentage of the modal masses. However, for many design models, including design models of multi-storey buildings, eigenmodes with a small percentage of modal masses are in the lower part of the spectrum; therefore nonlinear pushover analysis, as well as a unimodal approximation of dynamic analysis, significantly underestimates the calculated values of forces in structural elements. There are far fewer works in which each nonlinear finite element is considered in an elasto-plastic formulation (distributed plasticity) than works in which the simplified approaches mentioned above are used. Without pretending to be a complete review, we give only some of them. Using the MARC.MSC software package, in [5] a seismic analysis of a containment vessel was performed using quadrilateral thin-walled shell finite elements for the shell itself and 8-node volumetric finite elements for the supporting structures. The reinforcement in the containment vessel was modeled with quadrilateral membrane finite elements, and in the support structures with 2-node rod finite elements, which work only in tension-compression.
The behavior of concrete and reinforcement is described by the plastic flow theory; moreover, the Buyukozuturk yield criterion is used for concrete, and the von Mises yield criterion is used for reinforcement. An extensive report [19] for the reinforced concrete elements of ordinary bridges compares the results obtained by using lumped plasticity and distributed plasticity for concrete and reinforcement. It is noted that computational models created on the basis of distributed plasticity not only require significantly greater computational effort but are also much more difficult for engineers to understand than computational models using lumped plasticity. In [20], the problem of structure-soil interaction in a 3-D formulation is considered; the soil is modeled by volumetric finite elements using the Drucker-Prager plastic flow theory, and the concrete by volumetric finite elements applying a microplane model. Volumetric finite elements and the plastic flow theory with the von Mises yield criterion are also used for the reinforcement. Given the computational complexity of seismic analysis based on the direct integration of nonlinear equations of motion, we tried to reasonably simplify the calculation model. On the one hand, we rejected the simplifications associated with the unimodal approximation, as well as with static nonlinear pushover analysis, which originally arose as a tool used to certify structures in the USA, but not as a type of strength analysis. In addition, we abandon the technique of nonlinear hinges and consider a complete elasto-plastic formulation for each nonlinear finite element, which we use both for bar systems and for plates and shells, for which the use of nonlinear hinges is extremely difficult. On the other hand, we do not consider multilevel models for concrete based on the interaction of macro-level and micro-level models, but restrict ourselves to applying to concrete the theory of plasticity with elements of degradation modeling the softening of concrete when cracking, and to reinforcement the plastic flow theory using the von Mises yield criterion. Problem formulation We neglect the anisotropy of concrete compared with the structural anisotropy caused by reinforcement. Unlike many industrial FEA packages that take into account the work of reinforcement only in tension-compression, we also take into account the work of reinforcement in transverse shear. This makes it possible to significantly improve the stability of the numerical approach in those cases when the concrete in a finite element has completely collapsed and the reinforcement has not yet [7], [8]. The S.P. Timoshenko shear theory is applied for the bar finite element, and the Mindlin-Reissner one for the shell finite elements [7], [8]. It is assumed that the reinforcement does not slip in the concrete, which agrees well with taking into account the softening of concrete that simulates crack appearance and with the kinematic hypotheses of the Timoshenko and Mindlin-Reissner models. It is known that the descending branch of the σε diagram gives rise to a mesh-dependent finite element solution [15]. In this work, in the presence of descending branches of the σε diagram, to ensure the stability of numerical results during mesh refinement, a simple engineering idea is used, consisting in the fact that the reinforcement, whose elastic modulus is an order of magnitude greater than the concrete deformation modulus, does not have a descending branch in its σε diagram and must regularize the numerical solutions.
Thus, we consider only reinforced concrete structures in which the presence of reinforcement is mandatory. The condition under which the reinforcement stabilizes the numerical solution is determined by the length of the descending branch of the σε diagram for concrete and is given in [7], [8]. The finite element library The finite element library of the SCAD ++ FEA software [14], taking into account physical nonlinearity, contains quadrilateral and triangular isoparametric finite elements, as well as a 2-node finite element of the spatial frame. All presented finite elements can be used to model the behavior of homogeneous materials, such as steel. In this case, they do not contain inclusions. In this paper, we consider reinforced concrete structures for which each of the finite elements contains inclusions modeling reinforcement. In the case of shell finite elements (figure 1), the reinforcement is smeared in the plane of the finite element because we assume that the spacing between the reinforcing rods is quite small. At the same time, the discreteness of reinforcement over the thickness of the element is maintained. In this way rebar layers are formed. Each rebar layer is formed by identical reinforcing rods located at the same spacing, with their axes parallel to each other. The number of rebar layers is not limited. s1, s2, ... are the directions of the axes of the rebar layers, coinciding with the directions of the axes of the rods forming each layer. Figure 1. Quadrilateral and triangular finite elements Each of the axes s1, s2, ... can be rotated by an arbitrary angle about the axis Oz relative to the local coordinate axis Ox, which allows us to consider structures with a geometric shape of any complexity for any configuration of the finite element mesh. The principle of virtual work is used to obtain the tangent stiffness matrix and the vector of internal forces, where V is the volume of the finite element, Ω is the in-plane domain of the finite element, σ and ε are the stress and strain tensors for concrete, the sum over s covers all rebar layers, A_s and h_s are the cross-section area and spacing of the rods forming rebar layer s, σ_s, τ_xy^s, τ_xz^s and ε_s, γ_xy^s, γ_xz^s are the components of the stress and strain tensors for the rods of rebar layer s, m_s = 0.66 is the shear correction factor for the circular cross-section of a reinforcement rod, and δA_ext is the virtual work of external forces. The expression in parentheses appears due to taking into account the work of reinforcement in transverse shear. When calculating the integrals over the volume of the finite element, the trapezoid method is used for integration over the thickness and the Gauss-Legendre method for integration over the domain Ω, according to the 2 × 2 scheme for a quadrilateral finite element and with a single Gauss point for a triangular one; the finite element is divided into layers through its thickness, and linear shape functions are used. The two-node finite element of the spatial frame is shown in figure 2.
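Before turning to the frame element, the following sketch illustrates the integration scheme just described for the shell elements: 2 × 2 Gauss-Legendre points in-plane and the trapezoid rule through the thickness over a stack of layers. It is a generic numerical-quadrature example over a unit-square element with an arbitrary integrand standing in for the stress field, not code from SCAD++.

```python
import numpy as np

def trapezoid(values, z):
    """Trapezoid rule along the through-thickness coordinate z."""
    return 0.5 * np.sum((values[1:] + values[:-1]) * np.diff(z))

def integrate_shell(f, thickness=0.2, n_layers=8):
    """Integrate f(x, y, z) over a unit-square shell element of given thickness.

    In-plane: 2 x 2 Gauss-Legendre points (weights 1, Jacobian 1/4 for [0,1]^2).
    Through the thickness: trapezoid rule over n_layers layers."""
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)
    xy = 0.5 * (gp + 1.0)                                # Gauss points mapped to [0, 1]
    z = np.linspace(-0.5 * thickness, 0.5 * thickness, n_layers + 1)
    total = 0.0
    for xi in xy:
        for yi in xy:
            column = np.array([f(xi, yi, zi) for zi in z])
            total += 0.25 * trapezoid(column, z)
    return total

# A linear "bending" distribution integrates to zero over the symmetric thickness,
# while a uniform "membrane" distribution integrates to area * thickness.
print(round(integrate_shell(lambda x, y, z: z), 6))     # 0.0
print(round(integrate_shell(lambda x, y, z: 1.0), 6))   # 0.2
```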
The principle of virtual work is applied to obtain the tangent stiffness matrix as well as the vector of internal efforts: where a is the length of the finite element, x is the coordinate along the longitudinal axis OX, the sum over s covers all the rods of the longitudinal reinforcement, and all other designations correspond to the above. The linear shape functions are used, and to avoid shear locking, the following expressions for the transverse shear deformations are assumed: Details for both shell finite elements and spatial frame finite elements are given in [8]. The constitutive relations Since in this paper we consider the behaviour of a structure under the action of both a constant load in time and a seismic one, in which deformations in structural elements change cyclically, the application of the deformation theory of plasticity to such problems seems unreasonable. Therefore, we apply the plastic flow theory. 2.2.1 Concrete. For concrete, in the case of bending structural elements, such as frame crossbars or floor slabs, the Drucker-Prager yield criterion is used. However, in the case of significant compressive stresses, the Drucker-Prager model ceases to adequately describe the behaviour of concrete, therefore, for the columns and walls, a yield surface in the form of a circular or non-circular paraboloid is used. The initial shape of the paraboloid is taken as the strength surface proposed in [13], therefore, for brevity, we will call the corresponding yield criterion the Geniev criterion. The yield surface equation is as follows: where a = σ c •σ t , b = σ c + σ t , 0.531 <  ≤ 1/3 1/2 , β = 1 -3 2  , σ 0 = 3a 2  , σ c and σ t are the compressive and tensile strength of concrete, respectively, I 1 is the first invariant of the stress tensor, J 2 and J 3 are the second and third invariants of the deviator of stresses. The parameter  defines the deviation of the paraboloid from a circular shape. The σε diagram is adopted as it is presented in figure 3. Section AA՛ corresponds to the linear work of the material (Hooke's law), and sections AB and A՛ B՛ correspond to descending branches in the compression and tension zones simulating concrete degradation during crack propagation. Parameter E is the initial modulus of deformation of concrete, E c and E t are the softening modules in the compression and tension zones. The parameters α c and α t determine the residual strength of the concrete and are usually either 0 or α с < 1, α t < 1. When the image point moves along the descending branch, the σ c or σ t decreases, which leads to compression and moving of the yield surface in the space of principal stresses, and the softening of concrete occurs. This relates both to the Drucker-Prager yield surface and to the Geniev one. Further, for brevity, we will call the model of reinforced concrete using the Drucker-Prager yield criterion, CM2 (CM -Constitutive Model), and the model using the Geniev yield criterion, CM3. The designation CM1 refers to the deformation theory of plasticity, which is not used in this paper. Numerous numerical experiments have shown that the CM3 model describes well the behavior of compressed concrete, but it does not always successfully cope with the softening of concrete in the tensile zone [8]. For this reason, it is recommended to use the CM2 model for bending reinforced concrete elements. 2.2.2 Reinforcement. 
When using the plastic flow theory for concrete (CM2 and CM3), the plastic flow theory with the von Mises yield criterion, bilinear σε diagram and kinematic hardening is also used for reinforcement. 2.3 Equations of motion. Nonlinear equations of motion are represented as the Cauchy problem: where M and C are mass and dissipation matrices, u is displacement vector, N(u) is a nonlinear operator, returning a vector of internal forces, Γ is the diagonal matrix, arising due to the application of the penalty function method [25] allowing us to take into account the imposed displacements   t u , f ext (t) is a vector of external forces. In this approach, the equations of motion are formulated in terms of absolute displacements, which allows us to naturally describe the asynchronous excitation of the supports during seismic action. In the case of linear equations of motion, this idea is presented in detail in [11]. The technique for choosing the penalty parameters is similar to that given in [6]. The dissipation matrix C is accepted as follows: where α and β are the proportionality coefficients. In contrast to the proportional damping in linear problem, the tangent stiffness matrix K t (u) = ∂N(u)/∂u depends on the displacement vector u(t), therefore, the dissipation matrix C(u) also depends on time. The external forces vector where f stat is a vector of static loads, 0 u is a static imposed displacements, f dyn (t) is a vector of dynamic loads and   t u is the dynamic imposed displacements. Since the problem is nonlinear, we cannot separately solve the static problem from the action of a static load alone, then solve only the dynamics problem from the action of only a dynamic load, and then add these solutions [4]. We must consider the combined action of both static and dynamic loads since the principle of superposition for nonlinear problems is not fulfilled. On the other hand, if, when integrating the Cauchy problem (5) under uniform initial conditions for displacements u 0 = 0, suddenly apply only one static load f stat or only imposed displacements 0 u , then the system will oscillate, which does not correspond to the physical meaning of the problem. Therefore, at the first stage, a nonlinear static problem is solved where we consider the action only statically applied loads and obtain static displacements u 0 . At the second stage, static displacements u 0 is taken as non-uniform initial conditions for the Cauchy problem (5). If now suddenly we apply only those loads that caused the non-uniform initial conditions u 0 and set C = 0, then the system will not make any oscillations. Next, we numerically integrate the Cauchy problem (5) with given static and dynamic loads (7) and non-uniform initial displacements obtained from (8). Typically, a seismic load is specified as an accelerogram. However, the proposed approach requires the seismic impact to be represented in the form of the seismogram   t u . Therefore, the specified accelerogram must be integrated two times, for which the SCAD Office accelerogram editor is used [14]. Besides, it turned out that when imposed displacements are given, the use of linear interpolation of the time function leads to the appearance of spurious rapidly oscillating components of accelerations of nodes adjacent to nodes in which imposed displacements are applied. Therefore, we use cubic interpolation of the time function for imposed displacements. The details are in [11]. 
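Although the full SCAD++ formulation involves large matrices, tangent-stiffness-proportional damping, and the α-HHT predictor-corrector scheme discussed next, the essential structure of integrating the Cauchy problem (5) can be illustrated on a single elasto-plastic degree of freedom. The sketch below uses the simpler average-acceleration Newmark scheme with a Newton-Raphson corrector and Rayleigh-type damping c = αm + βk_t; all numerical parameters (mass, stiffness, yield force, load history) are hypothetical and chosen only for illustration. Note that in such a step energy is dissipated both through the viscous term and through the hysteresis of the restoring force.

```python
import numpy as np

def ep_force(u, up, k=2000.0, fy=30.0):
    """Elastic-perfectly-plastic restoring force with return mapping.
    Returns (force, updated plastic displacement, tangent stiffness)."""
    trial = k * (u - up)
    if abs(trial) <= fy:
        return trial, up, k
    sign = np.sign(trial)
    return sign * fy, u - sign * fy / k, 0.0

def newmark_ep(p, dt, m=1.0, alpha=0.3, beta_r=0.002, k=2000.0, fy=30.0,
               beta=0.25, gamma=0.5, tol=1e-8, max_iter=30):
    """Average-acceleration Newmark integration of m*a + c*v + f_s(u) = p(t)
    for one elasto-plastic degree of freedom, with Rayleigh-type damping
    c = alpha*m + beta_r*k_t built from the current tangent stiffness."""
    n = len(p)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    up = 0.0
    f0, up, kt = ep_force(u[0], up, k, fy)
    a[0] = (p[0] - f0) / m
    for i in range(n - 1):
        ui = u[i]                                   # predictor: previous displacement
        for _ in range(max_iter):
            ai = (ui - u[i] - dt * v[i]) / (beta * dt**2) - (0.5 / beta - 1.0) * a[i]
            vi = v[i] + dt * ((1 - gamma) * a[i] + gamma * ai)
            fs, up_new, kt = ep_force(ui, up, k, fy)
            c = alpha * m + beta_r * max(kt, 0.0)
            r = m * ai + c * vi + fs - p[i + 1]     # residual of the equation of motion
            if abs(r) < tol:
                break
            k_eff = max(kt, 1e-6 * k) + gamma * c / (beta * dt) + m / (beta * dt**2)
            ui -= r / k_eff                         # Newton-Raphson correction
        u[i + 1], v[i + 1], a[i + 1], up = ui, vi, ai, up_new
    return u

# Hypothetical load history: a short sine pulse of equivalent force
t = np.arange(0.0, 2.0, 1e-3)
u = newmark_ep(40.0 * np.sin(2 * np.pi * 2.0 * t) * (t < 1.0), dt=1e-3)
print(f"peak displacement ~ {np.max(np.abs(u)):.4f}")
```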
The predictor-corrector approach [12] is applied for the numerical integration of (5). The Newton-Raphson method is used to depress the residual vector during the corrector's iterations. The α-HHT method [21] considerably improves the numerical stability of our approach due to the damping of high-oscillated modes, which cannot be approximated well by accepted time step Δt (ωΔt > 1, where ω is the frequency of the considered vibration mode). The details are in [8]. Numeric results The study was performed on a computer with a 12-core Intel Core i9 processor -9920X 3.50 GHz. To reduce the analysis time, multithreaded parallelization of the main algorithms was produced, namely: the evaluation of internal forces procedure [9], the assembling of a consistent tangent stiffness matrix [9], solving the system of linear algebraic equations [10] and the procedure of forward and back substitutions. The verification and validation of the developed finite elements for a single static loading are carried out based on comparison with the results of well-established experiments and the reliable numerical solutions published in peer-reviewed articles and is given in [8]. 3.1 Test on cyclic loading. Static cyclic loading experiments are much simpler to perform and interpret than a fully dynamic one. For this reason, static cyclic loading tests are often used to verify and validate numerical approaches intended to perform structural analysis for cyclic loading, both static and dynamic. The last type of load also includes seismic impact. The static cyclic loading test mainly checks how the nonlinear operator N(u) works. Figure 4 presents the scheme and loading schedule for an experimental study [23]. The design model uses 16 finite elements of the spatial frame. The location of the reinforcement, the mesh of triangulation and the section dimensions are shown in figure 5.a. A static force P st = 157 KN and the imposed displacements in accordance with the loading schedule presented in figure 4.b is applied to the upper end of the column. Тhe Figure 5.b demonstrates the acceptable correspondence of the results obtained by numerical solution with the experimental results, which confirms the possibility of applying the finite element of the spatial frame proposed in this paper to solve the problems of cyclic loading and seismic impact with considerable plastic deformations of structural elements. The structure is subject to the action of static loads in the form of dead load and operating load. Also, horizontal seismic load with synchronous excitations of the supports and with a specified accelerogram shown in figure 7 and reduced to magnitude 9 is applied in the horizontal direction along Peaks of efforts obtained on the basis of traditional linear analysis (solid curve) turn out to be several times larger than with full nonlinear analysis (dotted curve). The efforts peaks obtained on the basis of linear analysis using finite elements that take into account reinforcement (dashed curve) turn out to be larger than in the case of full nonlinear analysis (dotted curve), but smaller than in traditional linear analysis (solid curve). Multi-storey building under the action of seismic load Damaged finite elements are shown in figure 9. In the marked columns, the concrete of the compressed zone in several or all fibers passed the limit point of the σε diagram and is at the descending path. The reinforcement is within elastic deformations. 
In the marked finite elements of the walls, the concrete of the compressed zone has also passed the limit point of the σ-ε diagram in several layers over the thickness, and in some or all of the reinforcement rods the yield stress is exceeded. We consider such elements destroyed. In the marked finite elements of the floor slabs, the

It turned out that when performing the traditional linear analysis, the integration step Δt at which satisfactory convergence of the numerical solution was obtained, not only in displacements but also in efforts, is ten times smaller (Δt = 10⁻⁵ s) than when performing the fully nonlinear analysis (Δt = 10⁻⁴ s). We attribute this to the fact that nonlinear elasto-plastic models are models of dissipative type, in which mechanical energy is dissipated as the plastic strains grow. In addition, hysteretic damping occurs during the oscillations. Therefore, in elasto-plastic systems, in addition to the vibration damping caused by viscous friction (the term C u̇ in (5)), which is typical for elastic systems, there is also dissipation due to hysteresis effects. This leads to a more intense suppression of the highly oscillating modes, which are not approximated sufficiently by the accepted integration step Δt and contribute to the accumulation of computational error. In addition, we believe that this effect is one of the reasons that the elasto-plastic analysis leads, as a rule, to lower oscillation amplitudes than the elastic one. The time to solve the linear problem on the above-mentioned computer was 13 hours 48 minutes 21 seconds, and the nonlinear problem took 36 hours 24 minutes 22 seconds.

Conclusions

Accounting for the elasto-plastic behaviour of the material, the type of the σ-ε diagram, the type of yield surface, the choice of the dissipation model, as well as the level of plastic deformations achieved under static loading, have a significant impact on the response of the system to seismic impact.
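For illustration only, a minimal sketch of one implicit time step with Newton-Raphson correction of the dynamic residual is given below. It uses the standard constant-average-acceleration Newmark update in place of the α-HHT scheme actually employed, treats the dissipation matrix as frozen within the step, and all names are illustrative assumptions rather than the authors' code.

import numpy as np

def implicit_step(M, C, N, K_t, u_n, v_n, a_n, f_next, dt,
                  beta=0.25, gamma=0.5, tol=1e-8, max_iter=30):
    # One constant-average-acceleration Newmark step; the Newton-Raphson loop
    # drives the dynamic residual r = f - M*a - C*v - N(u) towards zero.
    u = u_n + dt * v_n + 0.5 * dt**2 * a_n                    # predictor
    for _ in range(max_iter):
        a = (u - u_n - dt * v_n - (0.5 - beta) * dt**2 * a_n) / (beta * dt**2)
        v = v_n + dt * ((1.0 - gamma) * a_n + gamma * a)
        r = f_next - M @ a - C @ v - N(u)                     # dynamic residual
        if np.linalg.norm(r) < tol:
            break
        # consistent effective tangent of the residual with respect to u
        K_eff = M / (beta * dt**2) + C * (gamma / (beta * dt)) + K_t(u)
        u = u + np.linalg.solve(K_eff, r)                     # corrector iteration
    return u, v, a

The α-HHT scheme differs from this sketch mainly in how the internal, damping and external forces are weighted between the two time levels, which is what provides the additional numerical damping of the poorly resolved high-frequency modes discussed above.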
2020-01-09T09:15:32.794Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "1eba43dfe70aece20d1324dbf1a24d571f0e8738", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1425/1/012041", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "7cfaa51d208dd38f47b979bdb85ddb2952ccfe53", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
208643769
pes2o/s2orc
v3-fos-license
Hierarchical assembly governs TRIM5α recognition of HIV-1 and retroviral capsids TRIM5α combines distinct modes of binding into successively higher-order structures to recognize HIV-1 and retroviral capsids. INTRODUCTION Mammalian cells express a variety of innate immune receptors that sense the presence of invading viruses and induce defensive countermeasures. TRIM5 is an E3 ubiquitin ligase that senses incoming retroviruses by binding to the capsid coat that protects the viral core, subsequently inducing premature core dissociation and inhibiting reverse transcription of the viral genome [ (1,2) and reviewed in (3)]. TRIM5 recognizes retroviral capsids by assembling a lattice with complementary hexagonal symmetry and spacing to the capsid lattice, thereby aligning otherwise very weak interaction epitopes and enabling avid binding (4). Structural insights on the TRIM5 lattice have been derived from crystallographic studies of oligomeric subcomplexes and low-resolution cryo-electron microscopy of biochemically reconstituted TRIM5/capsid complexes (4)(5)(6)(7)(8). However, the extent to which the TRIM lattice can cover the capsid and how TRIM5 directly contacts the capsid surface have not been established. Retroviral capsids are organized as fullerene structures comprising several hundred viral CA protein hexamers and 12 CA pentamers (9,10). These capsids display a remarkable degree of polymorphism [reviewed in (11)]. For example, a typical HIV-1 capsid is cone shaped and displays highly variable surface curvature (9,10). These capsids can also be cylindrical, spherical, or polyhedral; the different shapes arise from differing distributions of the hexamers and pentamers (12). Individual capsids can use different numbers of CA subunits (ranging from around 1200 to around 2000), and so capsid size can also vary. Thus, there is considerable structural variation both within a single capsid particle and across different capsids, even within a single retrovirus species. To function effectively, TRIM5 must have requisite flexibility to accommodate these variations, yet the molecular basis of such flexibility is not yet fully established. Purified recombinant HIV-1 CA proteins can assemble in vitro into long helical tubes that recapitulate the structural and functional properties of the hexagonal capsid lattice (13,14). TRIM5-bound HIV-1 CA tubes can also be reconstituted in vitro (7,8). We applied cryo-electron tomography and subtomogram averaging on these complexes to obtain a series of reconstructions that collectively describe how the TRIM5 lattice recognizes and binds the HIV-1 capsid lattice. Our maps show that the TRIM5 capsid-binding domains act as dimeric units and contact the capsid surface in multiple different ways. These contacts are organized in a hierarchy of structures, which constitute a TRIM5 lattice that completely cages a retroviral capsid. RESULTS We reconstituted TRIM5/capsid complexes by coincubating purified TRIM5 and HIV-1 CA proteins (7,8). Cryotomograms of the resulting tubes, collected at high defocus values (high contrast), exhibited patches of clearly resolved, large hexagonal rings on the tube surface (Fig. 1A). Thus, binding of TRIM5 to the capsid-like tubes was evident in individual raw images. To visualize higher resolution, we performed subtomogram averaging (15) of seven tubes from lowdefocus cryotomograms. Each CA tube belongs to a distinct helical family with differing diameter (fig. S1). 
Collectively, the tubes therefore sample the structural variations found within authentic capsids (but not pentamer-containing declinations). Average structures were calculated for the CA hexamer (the repeating unit of the capsid-like tubes) ( fig. S2, A We visualized the global architecture of the TRIM5/capsid complexes by generating lattice maps from the positions and orientations of subunit densities as determined by subtomogram averaging (Fig. 1, B to E). The tubes consist of an inner wall of CA hexamers, similar to previous helical reconstructions (Fig. 1B) (13, 14). The TRIM5 proteins make an essentially contiguous network of interactions, forming a hexagonal wire cage that completely surrounds the CA tube ( Fig. 1, C to E). Like other tripartite motif family members, TRIM5 contains an N-terminal RBCC motif-consisting of RING, B-box 2, and coiledcoil domains-followed by a SPRY domain that directly binds the capsid ( Fig. 2A). The coiled-coil domain forms a long  helix that dimerizes in an antiparallel orientation, making an elongated rod that is capped at each end by the B-box 2 domain (5,6). The B-box 2 domain makes a trimer that links dimers into a hexagonal lattice (8,16). Although our reconstructed maps are of limited resolution, fitting the crystal structures of B-box 2/coiled-coil dimers (6) and trimers (8) resulted in an unambiguous solution (Fig. 2, B to E, and fig. S4). This is because both the B-box 2/coiled-coil trimer crystal structure (8) and our corresponding trimer reconstruction here have pronounced curvature, with the concave surface facing the capsid. In the fitted model, the N-terminal end of the B-box 2 domain is found on the cytoplasmic (convex) face of the trimer, whereas the C-terminal end of the coiled-coil domain is found on the capsid (concave) side. Our interpretation is further bolstered by an additional density feature on the cytoplasmic side of the B-box 2 trimer and adjacent to the fitted B-box N termini. This extra density becomes more pronounced at low contour levels ( fig. S5A), and we established that it is due to the RING domain by comparison with reconstructions from TRIM5/CA complexes made with a TRIM5 RING deletion mutant ( fig. S5B). Our reconstructions therefore confirm the proposed organization of the TRIM5 hexagonal lattice that was deduced from isolated structures of the subcomplexes (8). After modeling the RBCC domains, the only remaining density feature projects downward from the center of the coiled coil, which we therefore assigned to the SPRY domain (Fig. 2, C and E). This assignment is consistent with previous analyses, including difference density comparisons of flattened TRIM5 lattices, which also localized SPRY to the center of the hexagon edges (4, 7). The SPRY density appears as a symmetric closed-packed dimer, even in the trimer reconstruction, which was averaged with imposed threefold (but not twofold) symmetry ( fig. S3B). This observation supports the proposal that the two SPRY domains within a TRIM5 dimer act as a single bivalent unit that simultaneously engages two binding epitopes (5,6,8,17). In the dimer reconstruction, the SPRY density is more clearly bilobed and flares out before joining with the capsid surface (Fig. 3, A and B). Guided by overlapping residues in separate crystal structures of the coiled-coil and SPRY domains (6,18,19), a computational model of the coiled-coil/SPRY substructure was generated (17). 
Fitting of this model positions two copies of SPRY well within the dimeric density, with only minimal adjustments ( fig. S4). Although more precise details will have to await an experimentally determined higher-resolution structure, our SPRY domain positioning satisfies multiple constraints from previous studies. Each SPRY domain is packed against the coiled coil through a short helix and an amphipathic interface previously shown to be important for capsid binding and restriction (17,20). The V1 loops are positioned at the flared regions that contact the capsid surface (magenta in Fig. 3, A and B), consistent with studies indicating that V1 directly binds the CA subunits (18,19,(21)(22)(23). Furthermore, our model also suggests that a short segment ( 430 IVPLSVIIC 438 in rhesus TRIM5) that includes the outermost strand of the SPRY -sandwich fold may mediate lateral SPRY/SPRY contacts (asterisk in Fig. 3A). The V435K/I436K mutations within this segment were previously shown to disrupt capsid binding and restriction activity (24). In the averaged CA reconstruction, the hexamers are well defined (Fig. 1B), whereas in both the TRIM dimer and trimer maps, the capsid surface is essentially featureless (Fig. 2, B to E). This indicates that the SPRY domains adopt multiple different orientations relative to the underlying CA hexamers. To examine this further, we projected the centroid SPRY dimer positions onto the same plane and analyzed their distribution relative to the nearest seven CA hexamers (with the hexamer closest to the SPRY in the center) (Fig. 3, C and D). Although the distribution showed substantial overall scatter, clustering was also evident, which appeared most pronounced above the three capsid symmetry axes (Fig. 3C). These results not only show that the SPRY dimer indeed has a degenerate set of binding modes relative to the CA hexamer but also suggest that certain binding modes are preferred. The clustering pattern has pronounced anisotropy that follows the long axis of the capsid tube. This provides further support for the notion that the assembling TRIM lattice can detect the curvature of the underlying CA lattice. Guided by the lattice maps, we identified and extracted 550 subvolumes that each encompassed an entire TRIM hexagon. After an initial round of refinement, the resulting map had well-defined densities for the TRIM5 hexagon, and one of the helical lines for the capsid lattice was resolved ( fig. S6A). This indicated to us that the average was composed of only a discrete number of configurations. The subvolumes could be classified into two subsets: Class 1 having 335 particles and Class 2 having 215 particles. The two classes differ in the relative rotation of the TRIM hexagon relative to the long axis of the CA tube ( fig. S6B). In the Class 2 average, two helical lines of the capsid lattice were now visible, whereas in the Class 1 average, the CA hexamers were resolved. We therefore focused on Class 1. Two additional refinement rounds produced a map in which both the TRIM hexagon and underlying CA hexamers are resolved and interpretable ( Fig. 4A and fig. S6D). In this reconstruction, the TRIM hexagon covers an area equivalent to about 11 CA hexamers. All six SPRY domain dimers in the hexagon edges and connecting densities to the CA hexamers are visible. We observed four distinct modes of SPRY/CA interactions (Fig. 4B). SPRY dimers connect two adjacent CA hexamers in edges ii, iv, and v, with edge ii having the opposite handedness as edges iv and v. 
In edges iii and vi, the SPRY dimer is positioned asymmetrically above a single CA hexamer. Last, in edge i, the SPRY dimer is almost directly above a CA hexamer. We therefore conclude that indeed, TRIM5 contacts the capsid surface in a degenerate manner, but the SPRY domains have preferred modes of binding to the CA subunits. Although a more accurate accounting of the actual number of SPRY/CA interaction modes and the precise details of how the SPRY V1 loops contact the CA subunits will have to await further studies, such degenerate positioning agrees very well with results from mapping studies of susceptibility and resistance determinants on CA (25)(26)(27)(28)(29)(30)(31)(32)(33). The Class 1 map is globally asymmetric because the two lattices are offset translationally. This is also evident from the nonsymmetric arrangement of the six hexagon edges (Fig. 4C). Therefore, the Class 1 map cannot be the repeating unit of a TRIM5/CA superlattice (Fig. 4D). We therefore asked whether the Class 1 map represents a smaller portion of a larger asymmetric unit that can be tessellated (or tiled) into a superlattice. Because the TRIM hexagons must share edges, such a unit would require SPRY/CA contacts on opposite edges to be oriented in the same way or related by translational symmetry. This is only true for the iii,vi edge pair (Fig. 4D). However, three of the edges are formally twofold rotationally symmetric (edges ii, iv, and v), and one is pseudo-twofold symmetric (edge i) (Fig. 4, B and C). Therefore, one can generate a larger asymmetric unit-a dihexagon-of a putative TRIM5/CA superlattice by rotating a second copy of the map around edge i and then overlapping this with the equivalent edge in the first copy (Fig. 4E). This dihexagon can now be tessellated into a planar P2 lattice (Fig. 4F). The above analysis explains how the TRIM hexagonal lattice can undergo limited extensions beyond the initial seed by using only four distinct types of SPRY/CA contacts. But how can TRIM5 cover the entire capsid lattice? Closer examination of the Class 2 particles indicated that these can be further classified into two additional subsets, which now differ from each other by translation relative to the underlying capsid lattice (fig. S6C). Unfortunately, the reconstructions cannot be improved further because of the limited number of particles. However, it is likely that, as with the Class 1 hexagon, these Class 2 hexagon subsets are also one-half of two other dihexagon units (or each is half of a single dihexagon unit). Regardless, we surmise that the Class 2 reconstructions present distinct arrangements of the same four types of SPRY/CA contacts identified in the Class 1 reconstruction. Lattice mapping of the Class 1 and 2 particles revealed that these form separate patches of TRIM5/CA superlattices in the tubes (Figs. 1, G and H, and 5). The patches are small, and each comprises only a few dihexagon units (Fig. 5, A and B). Because each patch differs in the relative rotation and translation of the TRIM lattice relative to the CA lattice, adjacent patches cannot be joined without creating a seam in one of the two lattices. The capsid lattice is contiguous in the tubes, and so it is the TRIM lattice that makes the adjustments by joining the TRIM hexagon patches with TRIM pentagons and heptagons (Figs. 1F and 5). 
Such a phenomenon-having small discrete patches of hexagonal lattice joined together by pentagon and heptagon insertions-is well documented in single-layer paracrystalline arrays of carbon graphene; the pentagon/heptagon insertions are called grain boundaries (34,35). We therefore conclude that just like graphene, the TRIM5 lattices assembled on the surfaces of the HIV-1 capsid tubes are paracrystalline, composed of small patches of hexagonal order joined together by pentagon-and heptagon-containing grain boundaries. DISCUSSION Although the lattice-lattice matching mechanism of capsid recognition by TRIM5 is now a well-established model (3), the molecular details have been quite challenging to characterize structurally. We and others have previously used a "divide-and-conquer" approach to obtain high-resolution x-ray crystal structures of the separate repeating structural units in the HIV-1 capsid and TRIM5 lattices (6,8,16,36). Our key goal in this current study is to deconstruct how conformational variations within the viral capsids are accommodated by the bound TRIM5 lattice, by using in vitro-assembled TRIM5/HIV-1 CA complexes as a model system. Our studies also highlight the general challenge that is inherent to structural characterization of these types of systems, which arises from the fact that high-resolution structures are obtained by averaging structurally identical (or at least highly similar) particles. In this case, each tube that we examined (23 total, with 7 selected for analysis here) belongs to a different helical family and hence has a different diameter and degree of surface curvature. By using lattice mapping and subtomogram averaging (15), different structural subclasses that provide complementary structural information could be identified. Although gathering sufficient numbers of particles for high-resolution reconstruction of each subclass is significantly limiting, we nevertheless were able to generate a series of maps of sufficient resolution for meaningful interpretation, including a low-resolution map of part of an "asymmetric unit" of a putative TRIM5/CA superlattice. By integrating the low-resolution reconstructions with previously determined x-ray crystal structures, we achieved a more sophisticated understanding of how TRIM5 proteins recognize and bind retroviral capsids. The reconstructed maps of the TRIM5 dimer and trimer confirm the molecular architecture of the TRIM hexagonal lattice that we previously deduced from crystallographic structures of TRIM5 domain fragments (8). Our maps also provide direct experimental evidence that the two SPRY domains of a TRIM5 dimer are indeed bound to the center of the coiled-coil domain and form a closepacked dimeric unit as proposed (5,6,8,17), consistent with coordinated, simultaneous binding of the two SPRYs to CA. By definition, a key feature of avidity-driven binding is the correspondence in relative spacing of the interacting elements (37), in this case between the TRIM5 SPRY domains and as yet unknown epitopes on the CA subunits. Ideally, these spacings are strictly matched, yet it is evident that this is unlikely with retroviral capsids, because their continuously varying curvature necessarily generates varying distances between equivalent surface epitopes on CA. Furthermore, TRIM5 must accommodate not only the variations in spacings but also variations in relative rotations of these equivalent epitopes, the retroviral capsids being made of CA hexamers and pentamers. 
Our studies now reveal that TRIM5 accomplishes this through hierarchical assembly, in which a limited number of basal interaction modes between the SPRY and CA subunits are successively organized in increasingly higher-order structures that culminate in a cage surrounding the retroviral capsid. Specifically, we identified at least four distinct types of basal SPRY/CA interactions that allow the SPRY domain to juxtapose the HIV-1 CA hexamer in multiple different ways. At the next level, the four types of SPRY/CA contacts are mixed and matched in a limited number of higher-order arrangements, which we observed as two (perhaps three) distinct classes of TRIM dihexagon-containing asymmetric units. These dihexagon units, in turn, form distinct patches of TRIM5/CA superlattices. The patches are small because supercrystalline order or complementarity between the two component lattices can be only sustained over short distances. Last, the patches are connected by grain boundaries made of adjacent pentagons and heptagons, analogous to paracrystalline carbon arrays. Further studies are now required to elucidate the dynamics of TRIM5 assembly on retroviral capsid templates. We envision that the "minimal recognition unit" of TRIM5 constitutes a ditriskelion-a central TRIM5 dimer with two "arms" at each end-which forms the central scaffold of the dihexagon. A ditriskelion satisfies all three functional requirements of capsid recognition: direct binding of the SPRY domain to CA, dimerization of the coiled coil, and higher-order assembly (trimerization) of the B-box 2 domain [reviewed in (3)]. A ditriskelion can act both as a molecular ruler (by matching the spacings, in a degenerate manner, of the arrayed SPRY domain dimers and CA hexamers) and as a protractor (because binding of the flanking arms locks the central dimer in its bound position and defines the local lattice vector of the assembling TRIM lattice relative to the underlying capsid lattice). We further envision that as assembly progresses, joining and locking of each additional TRIM5 dimer within a ditriskelion effectively constitute repeated measurements of the capsid lattice. This allows the growing TRIM lattice to detect changes in capsid surface curvature and adjust accordingly. The ability of TRIM5 to form pentagons and other shapes is also likely to be an important mechanism to accommodate sharp capsid surface declinations containing CA pentamers. Although TRIM5 restriction is associated with nonproductive, accelerated uncoating of retroviral cores (2,38,39), the capsid lattice is intact in our reconstructions. This indicates that, contrary to previous reports (40)(41)(42), the TRIM5 cage may not be intrinsically destabilizing to the capsid. In support of this interpretation, a variety of studies have detected stable TRIM5/capsid complexes in the cytoplasm under conditions where the proteasome or self-ubiquitination of TRIM5 is inhibited (43)(44)(45). These observations also imply that the proteasome or some ubiquitin-dependent cellular machinery is required to accelerate uncoating. We also found that stable complexes are formed in vitro when TRIM5 assembles de novo around preformed capsid-like particles, provided that the recently described capsid stability factor-inositol hexakisphosphate (46,47)-is present ( fig. S7). 
Under nonrestricted infection conditions, reverse transcription inside the core is thought to induce uncoating (48)(49)(50)(51), likely by increasing pressure from within that eventually ruptures the capsid (52). The capsid-binding inhibitor PF74 stabilizes a ruptured capsid (53,54) and delays uncoating in vitro despite continued reverse transcription (54). We propose that TRIM5 may also stabilize the capsid lattice against rising pressure from inside the core. Recruitment of proteasomes [or autophagosomes (55)] would then destroy the entire assemblage and halt reverse transcription. Under conditions where ubiquitination is inhibited, reverse transcription can proceed to completion within the TRIM5-bound capsid (43). Nevertheless, virus replication remains restricted, perhaps because the surrounding TRIM5 cage would interfere with other functions of the capsid, such as engagement of nuclear import and integration machinery. Thus, we propose that cage formation constitutes the restriction mechanism of TRIM5. Sample preparation TRIM5 and HIV-1 CA proteins were purified, and recombinant TRIM5/CA complexes were prepared as described (7,8). For this study, we used TRIM5 from African green monkey because this variant is active against HIV-1 and efficiently assembles into hexagonal lattices in vitro (7). The recombinant TRIM5 protein contained an L81F mutation in the RING domain that allows in vitro coassembly with HIV-1 CA more efficiently than wild type and does not affect the ubiquitination activity of the RING domain or overall restriction activity of the protein. Data acquisition and processing A 20-l aliquot of the coassembled sample was mixed with an equal volume of 10-nm BSA Gold Tracer (Electron Microscopy Sciences); 3.5 l was applied on glow-discharged C-flat grids (Protochips) and then plunge-frozen into liquid ethane. Cryotomograms were acquired using an FEI Titan Krios electron microscope operating at 300 kV and equipped with a Falcon II camera. Tilt series were collected using the data collection software Tomography 3.0 (FEI) with an angular range of −60° to +60°, an angular increment of 1°, defocus values of 2.5 to 4 m, and a nominal magnification of ×29,000, which corresponds to a pixel (px) size of 2.92 Å. One dataset was collected at a defocus value of 9 m and used to generate initial reference-free maps for the TRIM5 trimer and dimer. Tilt series were aligned by using IMOD (56). Weighted back-projection was used to reconstruct tomograms, and the contrast transfer function was applied in IMOD. Subtomogram averaging was carried out using the Dynamo software package (57). Reconstruction of the CA hexamer Subvolumes were extracted from 2× binned data in 100 × 100 × 100 px uniformly distributed along the length of each tube, spaced by 17 px (fig. S2A). Initial Euler angles were assigned on the basis of the centroid position of each volume relative to the tube axis ["backbone" as defined in Dynamo (57)]. Initial averaging was performed via sixdimensional search (16 iterations), applying no symmetry. The resulting average map of the tube segment was then used to determine the positions of CA hexamers throughout the length of each tube. Subvolumes of 64 × 64 × 64 px centered on these positions were then reextracted from the 2× binned tomograms and assigned initial Euler angles in reference to the tube axis. An initial hexamer search template was generated by averaging the subvolumes using only azimuthal refinement, applying sixfold symmetry. 
Subtomogram averaging was then performed separately for each tube, applying twofold symmetry, a low-pass filter of 30 Å, and default masks in Dynamo. Upon convergence, lattice maps were generated as described (58) and visually examined. Particles that were clearly misaligned and/or had very low cross-correlation values were discarded. After another round of averaging and examination, 9684 subvolumes were extracted from unbinned tomograms (128 × 128 × 128 px) and split into even/odd subsets. The two subsets were treated independently from this point forward. For each subset, an initial template was generated by averaging all particles according to the Euler angles and positions determined from the previous refinement. Four iterations of refinement were performed, applying a soft-edged spherical mask of 35-px radius and progressively narrower angular and positional search ranges. The final map was calculated with a low-pass filter of 20 Å. Reconstruction of the TRIM5 dimer and trimer Initial maps of the trimer and dimer were generated by hand-picking ~60 particles from a single tomogram (defocus value of 9 m), assigning initial Euler angles in reference to the tube backbone as defined above, and performing one round of azimuthal refinement with threeor twofold symmetry, as appropriate. These maps were used as initial search templates for the each of the seven tubes, as described below. For each tube, the optimized tube backbone was defined in reference to the refined CA hexamer positions. This, in turn, was used to generate a tubular mesh with a 50-px radius; this mesh oversampled the TRIM lattice by at least 25× for the dimer and 35× for the trimer. Subvolumes of 64 × 64 × 64 px whose centers were uniformly distributed on this mesh were extracted from 2× binned tomograms. Initial polar angles were assigned in reference to the tube backbone, whereas azimuthal angles were randomized. One iteration of azimuthal and positional refinement was performed, using the far-from-focus trimer and dimer models as search templates, again applying three-or twofold symmetry as appropriate and a low-pass filter of 40 Å (fig. S3A). The averaged maps from these first rounds were then used as search template in all subsequent refinements. In the first three iterations, refined positions that were within 4 px of each other were averaged, reextracted from the tomograms, and reassigned Euler angles as above. On the fourth iteration, lattice maps were examined, and particles that migrated to unrealistic positions and/or had very low cross-correlation values were discarded. Subvolumes were then reextracted (128 × 128 × 128 px) from unbinned tomograms (3204 dimers and 2108 trimers from seven tubes) and split into even/odd subsets, which were treated independently from this point forward. Refinement iterations were performed until convergence (which required six iterations for the dimer and eight for the trimer), with progressively narrower angular and positional search ranges and a low-pass filter set at 30 Å. Soft-edged tubular and ellipsoidal masks were used for the dimer and trimer, respectively. Reconstruction of the TRIM5/CA complex Guided by the combined TRIM and CA lattice maps, 550 subvolumes encompassing entire TRIM hexagons were extracted (64 × 64 × 64 px) from 4× binned tomograms and assigned polar Euler angles in reference to the underlying CA lattice [which was modeled as "surface" in Dynamo (57)]. Azimuthal angles were randomized. 
One iteration of azimuthal refinement was performed to generate an initial model ( fig. S6A). Classification was performed by multireference alignment in Dynamo, using as reference two copies of the initial model with random noise added, a cylindrical alignment mask that covered both the TRIM and CA densities, and a cylindrical classification mask that covered only the CA densities. The classification separated the particles into two classes (Classes 1 and 2) according to the rotation of the TRIM hexagon relative to the long axis of the tube (fig. S6, A and B). A second classification run was performed on the Class 2 particles, which further separated the particles into two subclasses (Classes 2a and 2b) that differed in translation of the TRIM hexagon relative to the underlying CA lattice (fig. S6, A and C). Class 1 particles from above (335 particles) were recropped from 2× binned tomograms (128 × 128 × 128 px) and refined for two additional iterations with progressively narrower angular and positional search ranges. Resolution estimation for the Class 1 TRIM5/CA complex was performed as follows. The central hexamer was aligned with the CA hexamer reconstruction as reference with Chimera (59), and the correlation between the two maps was calculated with the dfsc subroutine in Dynamo (fig. S6D, green curve) (57). The obtained 0.143 cutoff value was 23.8 Å. We also aligned each of the six hexagon edges with the TRIM dimer reconstruction as reference, which gave an average value of 31.7 ± 1.8 Å at the 0.143 cutoff ( fig. S4B, blue curves). Structural analysis and visualization Map examination, PDB model fitting, and figure rendering were all performed with Chimera (59). SUPPLEMENTARY MATERIALS Supplementary material for this article is available at http://advances.sciencemag.org/cgi/ content/full/5/11/eaaw3631/DC1 Fig. S1. Gallery of TRIM5-coated HIV-1 CA tubes analyzed in this study. View/request a protocol for this paper from Bio-protocol.
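As a minimal illustration of the resolution estimate described above (this is not the dfsc subroutine in Dynamo), the Fourier shell correlation between two aligned maps can be computed and read off at the 0.143 cutoff roughly as follows; the cubic grid, the simple integer shell binning and the function name are assumptions made for this sketch.

import numpy as np

def fsc_resolution(map_a, map_b, pixel_size, cutoff=0.143):
    # Fourier shell correlation between two aligned cubic maps; the resolution is
    # reported at the first shell where the correlation drops below the cutoff.
    n = map_a.shape[0]
    fa, fb = np.fft.fftn(map_a), np.fft.fftn(map_b)
    freq = np.fft.fftfreq(n)                                  # cycles per pixel
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    shell = np.floor(np.sqrt(kx**2 + ky**2 + kz**2) * n).astype(int)
    for s in range(1, n // 2):
        sel = shell == s
        num = np.sum(fa[sel] * np.conj(fb[sel])).real
        den = np.sqrt(np.sum(np.abs(fa[sel])**2) * np.sum(np.abs(fb[sel])**2))
        if den > 0 and num / den < cutoff:
            return n * pixel_size / s        # resolution in the units of pixel_size
    return 2.0 * pixel_size                  # curve never fell below the cutoff

A curve of this kind, computed between the Class 1 map and the CA hexamer or TRIM dimer references, is what yields the 23.8 Å and 31.7 ± 1.8 Å values quoted above at the 0.143 cutoff.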
2019-12-05T09:25:40.613Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "3f817e4b316acf2e4db4b838ba7757676699d465", "oa_license": "CCBYNC", "oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.aaw3631?download=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1443128dfb05c1670e71d4773ce9db92df16aa01", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
211232954
pes2o/s2orc
v3-fos-license
The efficacy of antibiotics to control colibacillosis in broiler poultry: a systematic review Abstract The objective of this systematic review was to evaluate the efficacy of antibiotics to prevent or control colibacillosis in broilers. Studies found eligible were conducted controlled trials in broilers that evaluated an antibiotic intervention, with at least one of the following outcomes: mortality, feed conversion ratio (FCR), condemnations at slaughter, or total antibiotic use. Four electronic databases plus the gray literature were searched. Abstracts were screened for eligibility and data were extracted from eligible trials. Risk of bias was evaluated. Seven trials reported eligible outcomes in a format that allowed data extraction; all reported results for FCR and one also reported mortality. Due to the heterogeneity in the interventions and outcomes evaluated, it was not feasible to conduct meta-analysis. Qualitatively, for FCR, comparisons between an antibiotic and an alternative product did not show a significant benefit for either. Some of the comparisons between an antibiotic and a no-treatment placebo showed a numerical benefit to antibiotics, but with wide confidence intervals. The risk-of-bias assessment revealed concerns with reporting of key trial features. The results of this review do not provide compelling evidence for or against the efficacy of antibiotics for the control of colibacillosis. Rationale Escherichia coli (E. coli) are a diverse group of bacteria that are a normal part of poultry microflora. E. coli are found throughout the intestinal and upper respiratory tracts, as well as on the skin and feathers of healthy birds (Nolan et al., 2013). Although most strains of E. coli are not detrimental to bird health, some are capable of causing disease outside of the intestinal tract. Those that are capable of causing disease in birds, or cause disease when host defenses have been impaired, are referred to as avian pathogenic E. coli (APEC) (Dziva and Stevens, 2008). Colibacillosis refers specifically to a localized or systemic infection caused by an APEC and is a leading cause of morbidity and mortality in the global poultry industry (Guabiraba and Schouler, 2015). Syndromes of APEC-associated disease include colisepticemia, hemorrhagic septicemia, coligranuloma (Hjarre's disease), airsacculitis (chronic respiratory disease, CRD), swollen head syndrome, polyserositis, enteritis, venereal colibacillosis, coliform cellulitis (inflammatory or infectious process, IP), peritonitis, salpingitis, orchitis, osteomyelitis/synovitis (including turkey osteomyelitis complex), panophthalmitis, and omphalitis/yolk sac infection (Barnes et al., 2008;Nolan et al., 2013;Guabiraba and Schouler, 2015). Colibacillosis can develop as a primary infection, or as a secondary infection alongside other viral or bacterial pathogens (Nolan et al., 2013). Prevention and control of colibacillosis can be challenging, as E. coli are part of the normal intestinal flora of birds, and approaches that focus on management strategies have limited success. Approaches to prevention of colibacillosis include biosecurity to manage access of personnel and the movement of birds to limit the introduction of pathogenic E. coli and reduce exposure of the flock. Ensuring adequate environmental sanitation and optimal climate conditions, such as humidity, ventilation, and temperature can also help minimize pathogen growth in the flock and reduce the numbers of E. coli in the water and feed. 
In addition, protecting flocks from other bacterial or viral infections that can decrease host resistance can reduce the risk of colibacillosis (Nolan et al., 2013). Antibiotics are also used for APEC, either in flocks where the birds are not diseased but may at risk of illness in order to prevent illness (prophylaxis) or in flocks where some birds are already ill with the intention to prevent further illness or mortality (metaphylaxis) (Singer and Hofacre, 2006). Current challenges in the prevention and control of colibacillosis include the limited availability of drugs and the emergence of strains that are highly virulent and resistant due to virulence and resistance plasmids (Johnson et al., 2005(Johnson et al., , 2006. There is a global consensus that antibiotics should be used prudently in humans and animals to reduce the risk of antimicrobial resistance; both the World Health Organization (WHO) and the World Organization for Animal Health (OIE) have published recommendations on the judicious use of antimicrobials in response to the threat of antimicrobial resistance (WHO, 2015;OIE, 2018). In order to reduce antibiotic use, veterinarians and poultry specialists need access to unbiased, and accurate evidence regarding the efficiency of antimicrobials for the prevention and control of colibacillosis in broilers. Such information enables informed comparisons between the benefits and the harms associated with various antibiotics, which in turn allows practitioners to select the most appropriate and effective preventive applications or treatments. Systematic reviews provide a rigorous and transparent method of identifying and summarizing the available literature to address a specific question related to the efficacy of an intervention (European Food Safety Authority, 2010;Higgins and Green, 2011;Sargeant and O'Connor, 2014). Systematic reviews follow defined steps and require the involvement of multiple reviewers at each stage to reduce the potential for bias. When sufficient data exist, the results from multiple studies can be combined in a statistical meta-analysis to provide a summary measure of the effect size of an intervention across studies (Higgins and Green, 2011;. Where there are multiple treatment options for a specific disease or condition, a network meta-analysis (NMA) provides a method for evaluating the comparative efficacy of the treatment choices (Salanti, 2012). Research synthesis methods such as systematic reviews, metaanalyses, and NMA are therefore powerful tools that can provide scientifically valid information about the scope and conclusions of the existing literature on preventive approaches to colibacillosis in broiler poultry; these syntheses can in turn support evidencebased decision-making by managers and practitioners. Objectives Our objective was to conduct a systematic review and a network meta-analysis, if supported by the data, to address the following review question: 'What is the efficacy of antibiotics to prevent or control colibacillosis in broiler chickens?' Protocol and registration An a priori protocol for this review was prepared and is archived in the University of Guelph's institutional repository (The Atrium; https://atrium.lib.uoguelph.ca/xmlui/handle/10214/14349). The protocol was also published online on the Systematic Reviews for Animals and Food (SYREAF) website (available at http://www. syreaf.org/). 
The review protocol was reported in accordance with PRISMA-P guidelines (Moher et al., 2015), and this systematic review is reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement guidelines Moher et al., 2009). Eligibility criteria Primary research studies available in English were eligible for inclusion in the systematic review. In addition, studies must have been conducted in broiler chickens (the target population) and must have evaluated an antibiotic regime licensed for use in broilers in ovo, by injection, in feed, or in drinking water at doses consistent with therapeutic or prophylactic use (target intervention). Studies must have compared the antibiotic intervention to a placebo, an untreated control group, a non-antibiotic intervention, or a different antibiotic treatment. In the protocol, the eligible antibiotic regimes included any antibiotic used in treating or preventing colibacillosis in poultry that is included in the OIE list of approved antimicrobial agents of veterinary importance (OIE, 2015), regardless of their importance to human medicine. However, this was later modified to include any antibiotic regime included in a published study, due to the difference in approved antibiotic regimes over time and among countries. Eligible studies must have examined at least one of the following outcomes: mortality, feed conversion ratio (FCR), condemnations at slaughter due to colibacillosis, or total antibiotic use. Although FCR is a performance measure, it was included as an eligible outcome because it is likely to reflect the clinical and subclinical disease experience of a flock. Only controlled trials with natural disease exposure were eligible for inclusion, although we documented the number of controlled trials with deliberate disease challenge and the number of analytical observational studies evaluating eligible interventions and outcomes that were captured during the full-text screening stage. Information sources The databases searched were MEDLINE (via PubMed; 1946 to date of search), CAB Abstracts (via the University of Guelph CAB interface; 1900 to date of search), Science Citation Index and Conference Proceedings Citation Index -Science (via Web of Science; 1900 to date of search), and AGRICOLA (via ProQuest; 1970 to date of search). Additionally, a single reviewer hand-searched the proceedings of the Western Poultry Disease Conference and the section of the United States Food and Drug Administration website dedicated to recent animal drug approvals for relevant data. Search Initially, the search strategy was designed around the concepts of poultry, antibiotic interventions, colibacillosis, and antibiotics to exclusively capture primary studies that examined prophylactic uses of antibiotics to prevent colibacillosis in poultry. However, during preliminary screening, articles were identified in which antibiotic interventions were applied as metaphylaxis to control colibacillosis (i.e. to prevent further illness or death in flocks where colibacillosis infections were present in some birds). 
In these instances, the authors referred to the interventions as 'treatment for disease,' The American Veterinary Medical Association defines 'disease control' at the population level as the use of antimicrobials to reduce the incidence of illness in groups of animals where some are already showing signs of disease or infection and 'disease treatment' at the population level as administration of an antimicrobial to those animals within the group with evidence of disease (https://www.avma.org/KB/Policies/Pages/ AVMA-Definitions-of-Antimicrobial-Use-for-Treatment-Controland-Prevention.aspx). The terminology used in the colibacillosis literature is not entirely consistent with the AVMA definitions, while noting that these concepts may not have been as explicitly defined at the time of publication of all of the relevant articles. We therefore deviated from the review protocol and modified the search to include search terms related to antibiotic use for prevention, control, or treatment, regardless of the specific terminology used by the authors. The updated search was conducted on 28 October 2018 through the University of Guelph, Canada. The searches were not limited by date, language, or publication type. Table 1 shows the modified search strategy as it was applied in the Science Citation Index database (via the Web of Science platform). The search string formatting was modified as needed to reflect differences in database interfaces for each of the remaining databases. Search results were uploaded to EndNoteX7 (Clarivate Analytics, Philadelphia, PA) and duplicate citations were removed. Citations were then uploaded to the systematic review management software DistillerSR (Evidence Partners Inc., Ottawa, ON) and additional duplicates were removed. When the same data were presented in both a conference proceeding and a journal article, the conference proceeding was removed. Study selection DistillerSR was used to manage the screening, data extraction, and risk of bias assessment stages of the review. Initially, titles and abstracts of all citations identified in the search were screened for eligibility. All reviewers had training in epidemiology and systematic review methods, and all reviewers participated in a pretest of the first 250 titles and abstracts to resolve any uncertainties about the wording of the screening questions. Thereafter, two reviewers independently evaluated each citation. The following questions were used to assess relevance: (1) Is this a primary study evaluating the use of one or more antibiotics to prevent or treat* colibacillosis in broilers? [* question wording differs from the protocol, based on a protocol deviation to allow metaphylactic use in infected flocks to be included as eligible] YES, NO (EXCLUDE), UNCLEAR (2) Is there a concurrent comparison group? (i.e. controlled trial with natural or deliberate disease exposure or analytical observational study) YES, NO (EXCLUDE), UNCLEAR (3) Is the full text available in English? YES (include for full-text screening), NO (EXCLUDE), UNCLEAR (include for full-text screening) Citations were excluded if both reviewers responded 'no' to any of the screening questions. Disagreements whether to include or exclude were resolved by consensus. If consensus could not be reached, the article was marked as 'unclear' and advanced to fulltext screening. Following the title and abstract screening, full-text articles were retrieved and were subject to additional eligibility screening. 
Two reviewers independently evaluated the full-text articles, with agreement required at the question level. Any disagreements were resolved by consensus, or if consensus could not be reached, a third reviewer arbitrated the decision. All reviewers conducted a pre-test on the first ten full-texts to ensure that all eligibility questions were clear. The same three questions that were applied at the title and abstract screening level were applied again in the full-text screening, but reviewers could select only 'yes' (neutral) or 'no' (EXCLUDE). The full-text screening form also included the following questions: (1) Is the full text available with >500 words? YES, NO (EXCLUDE) (2) Does the study assess the use of any antibiotic intervention for the prevention of colibacillosis or pathogenic E. coli, either as prophylaxis in healthy birds or as metaphylaxis to prevent further illness/death when colibacillosis is present in the flock? YES, NO (EXCLUDE) [Italics represent deviations from the original protocol] (3) Are at least one of the following outcomes described: mortality, FCR, condemnations due to colibacillosis, or total antibiotic use? YES, NO (EXCLUDE) (4) Eligible study design: Is the study a controlled trial with natural disease exposure? YES (moves to data extraction stage), NO, the study is a controlled trial with deliberate disease induction (indicate the antibiotic(s) evaluated, but exclude from data extraction) NO, the study is an observational study (indicate the antibiotic(s) evaluated, but exclude from data extraction) Data collection process Two reviewers used a standardized form to extract data from all citations that met the full-text screening criteria. Nested forms were created in DistillerSR to facilitate data extraction for multiple intervention comparisons or outcomes within a trial. All reviewers were trained in the use of nested forms, and all reviewers piloted the forms on the first five articles to ensure consistency. Discrepancies in data extraction were resolved by consensus, or if consensus could not be reached, by a discussion with CBW or JMS. Study characteristics Study-level data extracted included year and country of conduct, months of data collection, setting (research or commercial flock (s)), strain of birds, sex of birds, number of flocks/farms enrolled, inclusion criteria at the flock level, rearing conditions (conventional, organic, antibiotic-free) and whether the treatment was given as prophylaxis (all birds free of colibacillosis at the start of treatment) or as metaphylaxis (some birds ill at the time of treatment initiation). These definitions are consistent with the American Veterinary Medical Association (AVMA) definitions of antimicrobial use for prevention and treatment (AVMA, 2019). Data on study characteristics were extracted for all studies included after the full-text screening. Further data on the effect sizes of the interventions were only collected, and risk of bias assessment was only undertaken if sufficient data were presented for one or more of the eligible outcomes. Intervention details Details on the interventions evaluated in each study were recorded, including a description of the intervention (antibiotic name, dose, route, and frequency of administration), a description of the comparison group(s), the number of birds, and flocks enrolled, the length of follow-up, any losses to follow-up, and descriptions of concurrent treatments. 
Eligible outcomes Outcomes eligible for data extraction were mortality, FCR, condemnations at slaughter due to colibacillosis, and total antibiotic use. For each outcome reported in a study, if an adjusted summary effect was presented (adjusted odds ratio (OR) or risk ratio (RR) if the outcome was binary, or least square mean differences if the outcome was continuous), these data were extracted. Variables included in the adjustment and the corresponding precision estimates were recorded. If an adjusted measure was not reported, unadjusted summary effect size (second priority) or arm-level data (third priority) were recorded along with applicable variance components. Data were not extracted if they were presented without variance measures and if a measure of variance could not be calculated. Risk of bias in individual studies The Cochrane Risk of Bias tool for Randomized Trials (RoB 2.0, 2016 version) was used to assess the risk of bias at the outcome level for all outcomes with extracted data (Higgins et al., 2016). Signaling questions were modified for the use in livestock and poultry trials. The following domains of bias were assessed: bias arising from the randomization process, bias due to deviations from the intended interventions, bias due to missing outcome data, bias in the measurement of the outcome, and bias in the selection of reported results. In the Cochrane risk of bias instrument, a single question in the 'bias due to randomization' domain asks whether the authors described the method for generating the random sequence. We modified this question to include a response category for studies in which the authors reported that allocation to the intervention groups was 'random,' but did not provide details on the actual method for generating the random sequence. Under the risk of bias domain related to deviations from the intended intervention, there is a question on whether the participants were aware of their assigned interventions; in the present review, the 'participants' in all applicable trials were broiler chickens, and so this question was always answered as 'no'. Another question under this domain asks whether study personnel were blinded; for the purposes of this review, the animal caregivers were considered to be the relevant study personnel. The overall risk of bias within each domain was calculated as per Higgins et al. (2016), with one exception: for bias due to the randomization process domain, we did not include allocation concealment in the algorithm because all animals within a flock are included in the type of trial involved in this review. Further, it is unlikely that a producer or investigator would have any treatment preference for a given flock, as the differential economic value of a flock would not be known at the time of allocation. This approach has been used in a previous synthesis study evaluating the risk of bias in livestock trials (Moura et al., 2019). Summary measures An effect size (OR or mean difference) was calculated for the results from individual studies where the data were presented at the arm level (i.e. raw data on the number of events and the total number of observations for each intervention group were reported). For binary data reported at the arm level, the OR and 95% confidence intervals were calculated using Epi Tools Epidemiological Calculators, available at: http://epitools.ausvet.com.au/content.php?page=2by2Table. 
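For a binary outcome reported at the arm level, this calculation amounts to a standard 2 × 2 table odds ratio with a Woolf (log-scale) confidence interval. The short Python sketch below, with hypothetical counts rather than data from the review, illustrates the arithmetic but is not the Epi Tools implementation; mean differences for continuous outcomes, described next, are handled analogously.

from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a, b: events and non-events in the antibiotic arm; c, d: events and
    # non-events in the comparison arm (assumes no zero cells).
    odds_ratio = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf standard error of ln(OR)
    lower = exp(log(odds_ratio) - z * se_log_or)
    upper = exp(log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical example: 4/100 birds dead in the antibiotic arm and 9/100 in the
# untreated arm gives an odds ratio of about 0.42 (95% CI roughly 0.13 to 1.42).
print(odds_ratio_ci(4, 96, 9, 91))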
For continuous data presented at the arm level, the mean difference and confidence intervals were calculated using OpenEpi, available at: https://www.openepi.com/Mean/t_testMean.htm.

Synthesis of results

As described in the protocol, the intention of this review was to conduct a network meta-analysis. However, due to the heterogeneity of the interventions and outcomes in the eligible studies that were captured in the search, no quantitative synthesis was performed. Trial results were presented in a forest plot for purposes of visualization, but no summary measure was calculated and heterogeneity was not formally assessed.

Risk of bias across studies

Risk of bias across studies ('publication bias') is usually evaluated by examining a funnel plot for small-study effects using pairwise comparisons. Two steps are generally recommended: a visual evaluation of the symmetry in the funnel plots, and a formal statistical test for symmetry, if sufficient data are available (>10 studies) (Higgins and Green, 2011). In this dataset, too few observations were available for each intervention, so any assessments of symmetry could not be reliable. Therefore, an evaluation of the risk of bias across studies was not conducted.

Additional analyses

No additional analyses were conducted.

Study selection

Of the 3425 unique citations identified by the search, 301 were advanced to the full-text screening (Fig. 1). There were 73 articles at the full-text screening stage that evaluated antibiotics in broilers and included at least one eligible outcome, but ultimately were excluded because they involved a deliberate disease exposure (i.e. challenge trials). No observational studies with relevant exposures and outcomes were identified. Nine controlled trials with natural disease exposure were included in the review.

Study characteristics

The study characteristics for the eligible trials are shown in Table 2. Reporting of the characteristics of interest was not complete for some trials, particularly concerning the months and years during which some trials were conducted. The included trials were conducted in several countries in both commercial and research flocks. Two trials included an assessment of one or more relevant outcomes, but the data were not presented in a form that could be extracted (Cracknell et al., 1986; Huff et al., 2004). In all of the studies with extractable data, antibiotics were used prophylactically (i.e. for the prevention of colibacillosis).

Risk of bias within studies

Six of the seven trials with one or more outcomes assessed for bias reported the use of random allocation to treatment group (Jamroz et al., 2003; Olnood et al., 2007; Baurhoo et al., 2009; Amerah et al., 2012; Bostami et al., 2016; Vineetha et al., 2017), although none provided information on the method used to generate the random sequence. For the remaining domains of bias, none of the trials provided the information necessary to evaluate the potential risks of bias, and therefore there were 'some concerns' about the potential risk of bias for all bias domains for all outcomes.

Results of individual studies

The most commonly reported outcome was FCR, which was evaluated in seven trials with a total of 25 treatment comparisons with at least one eligible antibiotic (Table 3). Results were presented at the arm level for all of the trials and thus the effect sizes shown in Table 3 were calculated post hoc.
The antibiotics evaluated in one or more treatment comparisons were bacitracin (four studies, 14 comparisons), virginiamycin (one study, three comparisons), avoparcin (one study, three comparisons), chlortetracycline (one study, two comparisons), and avilamycin (two studies, two comparisons). Comparison groups varied but included groups receiving no intervention, various probiotic products, mannan-oligosaccharides, herbal mixtures, grape derivatives, and one direct comparison of different antibiotics. All interventions in all trials were applied at the group (flock) level. Comparisons for FCR are shown in Fig. 2. None of the comparisons between an antibiotic intervention and an alternative intervention showed a benefit of one intervention over another. The comparisons between an antibiotic and a no-treatment control group showed numeric benefits of the antibiotic treatment, although many of the confidence intervals were wide and generally included the null value. The heterogeneity among specific interventions and comparators precluded a quantitative summary of antibiotic efficacy, and thus no summary effect or evaluation of heterogeneity is included in Fig. 2. None of the included trials reported results for condemnations at slaughter due to colibacillosis, and none of the trials reported total antibiotic use by the group. A single trial reported mortality outcomes for two comparisons: one between a group receiving bacitracin methylene disalicylate and a no-treatment control group, and another between the antibiotic group and a probiotic control group (Amerah et al., 2012) (Table 3). The results of both comparisons had corresponding confidence intervals that included the null value.

Synthesis of results

The research synthesis approach proposed in the protocol was not conducted due to the sparsity of data available to address the review question.

Risk of bias across studies

Not conducted due to insufficient data.

Additional analysis

None conducted.

Summary of evidence

Although antibiotic use in the poultry industry has decreased significantly as a result of the Veterinary Feed Directive (VFD; https://www.fda.gov/animalveterinary/developmentapprovalprocess/ucm071807.htm) and consumer pressure for antibiotic-free products, antibiotics are still used to manage some diseases. Given the societal imperative to use antibiotics judiciously, it is important to consider the scientific efficacy of specific antibiotics for specific diseases as a part of the treatment decision-making process. Based on the available scientific literature, there is no strong scientific evidence for the efficacy of antibiotics to prevent or treat colibacillosis in broilers. However, it is important to consider that only a small volume of literature exists, and existing trials examined heterogeneous interventions and comparison groups. In addition, most trials had poor reporting of key trial design features that are necessary to assess the validity of the research. As a result, there may be some uncertainty about the true efficacy of antibiotics for the prevention or control of colibacillosis in broilers. Although there is no compelling evidence that antibiotics are effective, there is also no compelling evidence that they are not. We only included controlled trials with natural disease exposure in this review.
For interventions where it is feasible to allocate individuals to treatment groups, results from controlled trials provide higher evidentiary value compared to studies using an observational design (Sargeant et al., 2014a; Roudebush et al., 2004). Challenge trials may be a useful component of the development and validation process for an intervention, as they may provide proof of concept for efficacy. However, the conditions of the disease challenge may not be representative of natural disease exposure, and in addition, challenge trials tend to be conducted in more controlled settings than what is typical of commercial operations (Sargeant et al., 2014a, 2014b). Published challenge trials also tend to result in exaggerated treatment effects compared to natural disease exposure trials evaluating the same intervention and outcome (Egger et al., 1997; Wisener et al., 2014). This is partially related to publication bias; challenge trials often involve smaller numbers of animals than natural disease exposure trials, and small studies are more likely to be published if they show statistically significant results (Egger and Smith, 1998). Thus, small challenge trials that show a statistically significant intervention benefit are more likely to be published compared to small challenge trials that show no effect of the intervention. Therefore, although challenge trials represented a larger body of literature (73 challenge trials versus 9 trials with natural disease exposure met the eligibility criteria for population, intervention, and outcome), we chose not to include challenge studies in this review. The eligible outcomes used in this review were selected based on their importance for decision-making concerning the use of antibiotics. The most consistently reported outcome was FCR measured across the entire growing period. Feed conversion ratio measures the amount of feed consumed per unit of body weight gained, and is thus a measure of bird performance. The antibiotic interventions examined in the trials included in this review were intended to prevent or treat illness, rather than for growth promotion purposes. However, FCR was a commonly reported measure in the literature and is of importance to poultry producers. Meta-analysis can be conducted when there is a minimum of two studies reporting the same outcome. In this review, six trials measured FCR, but there was essentially no replication of interventions or comparison groups within the trials measuring this outcome. The results of a single study represent one observation within a distribution of possible study results, which may vary due to nuanced differences in the populations, interventions, outcome measurements, and disease exposures between trials. Therefore, without replication, it is not possible to evaluate whether the results from a single study represent the true efficacy of an intervention or whether the results represent an outlier (e.g. a type I error, where the results suggest a statistical difference when none is present, or a type II error, where there is an actual difference but it is not identified in the sample population). Comparison groups in the captured trials included treatments with other antibiotics, non-treated controls, and alternative (non-antibiotic) treatments. Given the importance of identifying effective alternatives to antibiotics, alternative products may be the most appropriate control group.
In this review, eligible studies needed to include at least one antibiotic treatment arm. Since the goal of this review was not to assess the efficacy of non-antibiotic interventions, our search was not designed to identify all trials evaluating the efficacy of non-antibiotic interventions. However, future reviews could investigate the number and quality of trials examining alternative treatments. Such reviews could provide valuable syntheses of the existing evidence for the efficacy of non-antibiotic interventions, which would further inform treatment decision-making. The results of this review highlight a number of issues related to the completeness of reporting primary studies. These issues included reporting of the characteristics of the trials, most notably related to reporting of the country where the trial was conducted, the month(s) and year of the trial, and whether the trial was conducted in a commercial setting. Information on the study setting and population(s) is necessary for the reader to make judgements about the external validity of trial results in their own context. Without knowledge of the specific context of a trial, a reader cannot evaluate the various context-specific factors that might impact the application of the research findings to other contexts. Information on key trial design features is necessary to allow readers to evaluate the potential for bias in the results of the trial. Additionally, poor reporting of design features is associated with exaggerated treatment effects (Schulz et al., 1995; Moher et al., 1998; Burns and O'Connor, 2008; Sargeant et al., 2009a, 2009b). The information needed to assess the risk of bias in the domains included in the Cochrane Risk of Bias instrument (Higgins et al., 2016) was not provided in any of the trials included in this review. The Cochrane risk of bias tool was developed for evaluating randomized controlled trials in human healthcare. However, the general tenets of good trial design do not vary between human and veterinary medical research. It is possible that study authors did not conduct the trials using accepted methods for high-quality trial design, or they may have conducted the trials with a high degree of methodological rigor but did not report information on key design features. Nonetheless, a reader has only the information provided in the publication by which to assess the methodological rigor, and if that information is not available, then the reader cannot judge the appropriateness of the approach, methods, and ultimately the results of the study. Deficiencies in reporting have been documented in numerous publications examining reporting quality in food animal studies (Wellman and O'Connor, 2007; Burns and O'Connor, 2008; Sargeant et al., 2009a, 2009b; Brace et al., 2010; Winder et al., 2019).
Fig. 2. Forest plot to illustrate the efficacy of antibiotics compared to alternative treatments or to non-treated controls, from a systematic review on the efficacy of antibiotics to prevent or treat colibacillosis in broiler chickens.
The REFLECT statement was developed by an expert consensus process in response to concerns about the quality of reporting in clinical trials in livestock. The REFLECT statement consists of a 22-item checklist to provide guidance on what should be reported in livestock trials, as well as an explanation and elaboration document that provides additional details for each item on the checklist.
The REFLECT statement methods and elaboration documents were co-published in multiple journals (O'Connor et al., 2010c, 2010d; Sargeant et al., 2010a, 2010b), and are also available online (http://www.reflect-statement.org/; https://meridian.cvm.iastate.edu/). Improved reporting of trials will allow readers to make a clearer, more accurate judgement about the validity of study results.
Limitations
In searching for relevant literature, we used multiple electronic databases, as well as some gray literature sources. However, it is known (although difficult to document) that there is considerable research conducted in-house within the vertically integrated poultry industry, and the results of this research may not be publicly available. Thus, it is possible that our results do not reflect the body of research that has been conducted to evaluate the efficacy of antibiotics for preventing or treating colibacillosis. However, our results do reflect the publicly available literature for decision-making by those outside of poultry groups who conduct their own research.
Conclusions
In conclusion, based on a small volume of heterogeneous studies, there is no strong evidence for or against the efficacy of antibiotics to prevent or treat colibacillosis in broilers. Reporting of study characteristics and key trial design features was generally poor and could be improved by following recommended reporting guidelines for controlled trials.
Author contributions. JMS developed the review protocol, coordinated the project team, assisted with data analysis, interpreting the results, and wrote the manuscript drafts. MB, KC, KD, BD, JD, MR conducted relevance screening, extracted data, conducted risk of bias assessments, commented on manuscript drafts and approved the final manuscript version. AMOC assisted with the development of the review protocol, provided guidance on the interpretation of the results, commented on manuscript drafts and approved the final manuscript draft. CML and AV assisted with the development of the review protocol, provided guidance on the interpretation of the results, commented on manuscript drafts, and approved the final manuscript. CBW assisted with the development of the review protocol, assisted with data screening, data extraction and risk of bias assessment, conducted the analysis, provided guidance on the interpretation of the results, commented on manuscript drafts, and approved the final manuscript draft.
Financial support. Support for this project was provided by The Pew Charitable Trusts.
Conflicts of interest. None of the authors has conflicts to declare.
2020-02-22T14:04:04.126Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "1e18dece0bceaafc4b8f5e0ae10c56e2917c3da7", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/EBFA33A3ECD41B85B7F0480CD514907C/S1466252319000264a.pdf/div-class-title-the-efficacy-of-antibiotics-to-control-colibacillosis-in-broiler-poultry-a-systematic-review-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "2f50eb55f597570c04bee3914b4d66b96fa746ae", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6102497
pes2o/s2orc
v3-fos-license
Axonal neuropathy with neuromyotonia: there is a HINT
Mutations in HINT1 cause a common, autosomal-recessive, axonal Charcot-Marie-Tooth neuropathy, often with neuromyotonia. Peeters et al. summarize neurological aspects of the disease, epidemiology and mutation spectrum, and structural and functional characteristics of the affected protein. They propose guidelines to recognize and differentiate HINT1 neuropathy and suggest strategies to treat common symptoms.
Introduction
Hereditary peripheral neuropathies are a clinically and genetically heterogeneous group of disorders, characterized by muscle weakness, wasting and sensory loss, starting in the distal parts of the limbs and slowly progressing in a length-dependent manner (Boerkoel et al., 2002; Patzko and Shy, 2012). In 2012, we identified recessive mutations in the gene encoding the histidine triad nucleotide binding protein 1 (HINT1) causing axonal, motor-predominant Charcot-Marie-Tooth (CMT) neuropathy, frequently associated with neuromyotonia (Zimon et al., 2012). HINT1 represents a global cause of CMT, with 79 patients of European, North American and Chinese ancestry identified to date (Zimon et al., 2012; Caetano et al., 2014; Zhao et al., 2014; Boaretto et al., 2015; Jerath et al., 2015; Lassuthova et al., 2015; Rauchenzauner et al., 2016; Veltsista and Chroni, 2016). The frequency of HINT1 mutations in a heterogeneous cohort of recessive CMT patients is approximately 10% and rises to 80% when focusing on individuals with axonal neuropathy having the clinical hallmark of neuromyotonia (Zimon et al., 2012, 2015). Thus, HINT1-associated peripheral neuropathy represents a distinct clinical and genetic entity that needs to be differentiated among the numerous subtypes of CMT, and from myotonic dystrophy and the various channelopathies causing non-dystrophic forms of myotonia. This update summarizes the current knowledge on the clinical and electrophysiological aspects of the HINT1 neuropathy, the overlap with other clinical entities, the epidemiology and mutation spectrum, and the structural and functional characteristics of the encoded protein.
Epidemiology
HINT1 neuropathy has a non-random distribution (Fig. 1). The majority of diagnosed individuals are of European origin, a fact attributed to three founder mutations (R37P, C84R, H112N). R37P is the most common among them, displaying a gradient of distribution increasing from west to east in Europe. Forty-eight families described to date carry this mutation, most of them inhabiting or originating from central or south-eastern Europe and Turkey (Fig. 1). The R37P carrier frequency in outbred populations living in this geographic area is as high as 1:67-182, making HINT1 neuropathy one of the most common autosomal recessive disorders in this part of the world (Zimon et al., 2012; Lassuthova et al., 2015).
Figure 1 Worldwide distribution of HINT1 mutations. Pie chart size represents the number of patients identified per country and colours indicate which founder HINT1 mutations they are carrying. Dashed lines point out the country of origin of the identified patients. Enlarged panel below shows the regions in Europe where most patients are clustered. Note the gradient of distribution for the most common HINT1 mutation (R37P), increasing in central and south-eastern Europe.
The high R37P carrier rate can even lead to 'pseudo-dominant' inheritance of CMT, with affected individuals in two consecutive generations due to the influx of unrelated heterozygous carriers (Jordanova A.
and Tournev I., unpublished results). In the Czech population, HINT1 neuropathy is among the most frequent causes of inherited neuropathy, only surpassed by CMT1A/HNPP and mutations in GJB1 (previously known as Cx32) and MPZ (Lassuthova et al., 2015). Because 90% of the Czech HINT1 patients carry R37P, genetic diagnosis becomes straightforward. Moreover, the USbased patients homozygous for R37P have central European origin (Zimon et al., 2012;Jerath et al., 2015). H112N is another founder mutation, with five families reported of Italian, Turkish, Bulgarian and (Portuguese) Roma origin. Finally, C84R is present in homozygous or compound heterozygous state in four Belgian families. Overall, the genetic epidemiology suggests that HINT1 neuropathy should be considered in the diagnostic work-up of patients of European descent presenting with axonal CMT. Clinical features The phenotype initially related to mutations in HINT1 encompasses axonal, motor-greater-than-sensory polyneuropathy with an onset mostly in the first decade of life, combined with action neuromyotonia (more pronounced in the hands) and neuromyotonic or myokymic discharges on needle EMG (Zimon et al., 2012;Caetano et al., 2014;Lassuthova et al., 2015;Rauchenzauner et al., 2016). The identification of additional patients extended the clinical spectrum; including a later disease onset (up to 28 years of age) (Zhao et al., 2014), asymmetric gait involvement (Rauchenzauner et al., 2016) or a pure distal motor neuropathy (dHMN) without neuromyotonia (Zhao et al., 2014;Boaretto et al., 2015). The initial complaints are distal lower limb weakness with gait impairment, combined with muscle stiffness, fasciculations and cramps in hands and legs, worsened by cold. When specifically asked, most patients report difficulties in releasing grip after a strong voluntary hand contraction, dating back from childhood. The disorder is slowly progressive; none of the reported patients lose ambulation until the sixth decade of life. Upon clinical examination, foot/toe extension and flexion weakness to plegia are present in almost all cases (Zimon et al., 2012;Caetano et al., 2014;Boaretto et al., 2015;Lassuthova et al., 2015). Achilles tendon reflexes are diminished to absent, depending on the stage of progression. Upper limbs become involved later in the disease course, usually in the first or second decade. Calf and intrinsic hand and foot muscle wasting is almost always observed to a variable degree ( Fig. 2A-E). The hypotrophy of the intrinsic hand muscles, particularly of the hypothenar and thenar eminence is pronounced, leading to flexion contractures of the fingers, even in cases with mild muscle weakness ( Fig. 2D and E). Mild distal sensory impairment can be present (Zimon et al., 2012;Caetano et al., 2014;Lassuthova et al., 2015). Neuromyotonia Neuromyotonia is present in 70-80% of patients and is a diagnostic hallmark. It is characterized by spontaneous muscular activity at rest (myokymia), impaired muscle relaxation (pseudomyotonia), and contractures of hands and feet (Maddison, 2006); and can be observed with or without overt peripheral neuropathy (Hahn et al., 1991(Hahn et al., , 2000. In contrast to myotonia, in which abnormal muscle activity occurs only after voluntary or induced muscle contraction, neuromyotonia results from spontaneously occurring peripheral nerve discharges often accentuated by voluntary muscle contraction (Rauchenzauner et al., 2016). 
This phenomenon was comprehensively characterized in two sibs of a Canadian family (Hahn et al., 1991), where subsequently HINT1 mutations were identified (Zimon et al., 2012). The abnormal electrical activity can be enhanced by nerve ischaemia, but not by mechanical or electrical stimulation of the nerve supplying the muscle, thus suggesting that the nerve hyperexcitability is a generalized phenomenon related to a functional or structural abnormality of the axonal membrane. The neuronal origin of neuromyotonia was subsequently proven by regional neuromuscular blockade with curare and nerve block with xylocaine. HINT1 patients display action myotonia (delayed muscle relaxation of the hands after strong flexion of the fingers), while percussion myotonia of the thenar eminence is not typical (Zimon et al., 2012;Caetano et al., 2014;Boaretto et al., 2015;Lassuthova et al., 2015). Unfortunately, the symptoms of peripheral nerve excitability can be easily missed from patients' history or from the neurological examination. Various types of skeletal deformities are noted in HINT1 patients. Foot deformities (pes cavus, pes equinovarus, pes cavovarus or Achilles tendon shortening) are present in a great proportion of cases (Zimon et al., 2012;Lassuthova et al., 2015). Flexion contractures of the fingers are typical, occurring up to several years after the lower limb involvement (Tournev I., unpublished results). Scoliosis is reported in one-third of the patients (Lassuthova et al., 2015;Jerath et al., 2015). In some patients, mild-to-moderate elevation of creatine kinase levels is observed (Zimon et al., 2012;Jerath et al., 2015), probably related to the chronic neurogenic muscle atrophy in combination with the neuromyotonia. Electrophysiology Electrophysiological studies of peripheral nerves are compatible with axonal polyneuropathy; either motor-and-sensory (42/64; 66%) (Zimon et al., 2012;Lassuthova et al., 2015) or pure motor (22/64; 34%) (Zimon et al., 2012;Zhao et al., 2014;Boaretto et al., 2015). Conduction velocities of motor and sensory fibres are (nearly) normal, while the amplitudes of compound muscle action potential or sensory nerve action potential are decreased. No markers of demyelination (conduction slowing, temporal dispersion or conduction block) are present. Needle EMG shows increased amplitude of motor unit action potentials and reduction of recruitment pattern with temporal summation. Concentric needle EMG from proximal and distal muscles often displays neuromyotonic discharges (Fig. 2F) occurring spontaneously or provoked by needle movement or muscle contraction (Zimon et al., 2012;Lassuthova et al., 2015). They are characterized by high frequency , decrementing, repetitive discharges of a single motor unit with motor unit action potential morphology. Myokymic discharges, representing rhythmic, grouped discharges of the same motor unit, are also observed. The firing frequency within the burst is 2-60 Hz followed by a short period (up to a few seconds) of silence, and then recurrence of the burst at regular intervals (Kucukali et al., 2015). Hyperexcitability and ectopic impulse generation can occur along the whole length of the axons, including the terminal arborizations (Hahn et al., 1991). Although considered an EMG hallmark, neuromyotonic or myokymic discharges are absent in around 20-30% of patients, thus complicating the differential diagnosis (Zimon et al., 2012;Zhao et al., 2014;Boaretto et al., 2015). 
Moreover, they may occur in the later stages of the disease (Zimon et al., 2012;Caetano et al., 2014;Boaretto et al., 2015;Lassuthova et al., 2015). Nerve biopsy The changes observed in the sural nerve of five HINT1 patients are consistent with an axonal neuropathy, even when no clinical features of sensory neuropathy are present (Zimon et al., 2012). Differential diagnosis The diagnosis of HINT1-associated hereditary neuropathy requires consideration whether the phenotype is genetic or acquired. Due to the recessive pattern of inheritance this is not always straightforward, especially in sporadic cases. Detailed genealogy, neurological examination, nerve conduction studies and EMG are crucial. Diagnostic guidelines to recognize HINT1 neuropathy are represented in Fig. 2G. The differential diagnosis includes several acquired and inherited disease entities, associated with abnormal spontaneous muscle/nerve hyperexcitability and/or weakness ( Table 1). As neuromyotonia can be absent or under-recognized, other types of hereditary axonal CMT and pure motor neuropathies should be considered (Rossor et al., 2012;Zimon et al., 2012;Zhao et al., 2014). Treatment strategies There is no curative treatment for patients with HINT1 neuropathy, therefore regular physical therapy, ankle-foot orthoses and/or special shoes remain mandatory. In the stage of limb deformities, surgical orthopaedic corrections are beneficial. These include soft-tissue procedures (plantar fascia release, tendon release or transfer), osteotomy (metatarsal, midfoot, calcaneal), and joint-stabilizing procedures (triple arthrodesis) (Caetano et al., 2014;Boaretto et al., 2015;Lassuthova et al., 2015;and Tournev I., unpublished results). Additionally, to decrease the symptoms of neuromyotonia and the abnormal spontaneous discharges on EMG, a favourable therapeutic response has been elicited with medications blocking sodium channels, such as antiepileptic drugs (diphenylhydantoin and carbamazepine) (Hahn et al., 1991;Tournev I., unpublished results) and anti-arrhythmics (tocainid) (Hahn et al., 1991). HINT1 structure and enzymatic activity HINT1 is a member of the histidine triad (HIT) protein family, sharing a characteristic HIT motif (His-x-His-x-Hisx-x, where x is a hydrophobic residue) in the catalytic pocket (Seraphin, 1992;. Mammalian HINT1 orthologues are nearly identical, and even though sequence similarity is lower with other eukaryotes, HINT1 function is evolutionary conserved (Bieganowski et al., 2002). The protein is ubiquitous, but highly expressed in brain and spinal cord (Barbier et al., 2007;Liu et al., 2008;Zimon et al., 2012), suggesting its important role in the nervous system. HINT1 is a globular protein of 13.7 kDa that acts as a homodimer and binds purine nucleosides and nucleotides . Each monomer has a nucleotidebinding cleft containing the HIT motif (Brenner et al., 1997). The nucleotide-contacting residues in this cleft are strictly conserved throughout the HIT superfamily (Brenner et al., 1997), but substrate specificity is dependent on the sequence of the C-terminal loop . Furthermore, dimerization is required to maintain sufficient catalytic activity . HINT1 functions HINT1 hydrolyses aminoacyl adenylates, intermediary products of the charging reaction of tRNAs with their cognate amino acids by aminoacyl-tRNA synthetases (ARS); it was isolated in complexes with lysyl-tRNA synthetase (KARS) and transcription factors (Lee et al., 2004;Lee and Razin, 2005). 
In the presence of KARS and ATP, HINT1 is adenylated in a lysine-dependent manner, suggesting that the HINT1-AMP formation relies upon the production of lysyl-AMP by KARS. Similarly, HINT1 reacts with other aminoacyl adenylates (Ala-AMP, Asp-AMP, Met-AMP, His-AMP) produced by their respective cognate (and no other) ARS. Thus, by hydrolysis of the aminoacyl adenylate intermediate, HINT1 might mediate ARS activity and influence the overall level of tRNA aminoacylation. Intriguingly, like HINT1, several ARS (YARS, GARS, AARS, KARS, MARS, HARS) are causal genes for CMT (Antonellis et al., 2003; Jordanova et al., 2006; Latour et al., 2010; McLaughlin et al., 2010; Gonzalez et al., 2013; Safka Brozkova et al., 2015). Since they form part of the same functional network, a common pathomechanism could exist, linking HINT1 and ARS jointly to disorders of the peripheral nervous system. HINT1 has also been reported to catalyse reactions producing free H2S inside the cell. By influencing the levels of this signalling molecule, HINT1 might regulate physiological processes, like post-translational protein modification, targeting of ATP-sensitive potassium channels and modulation of vascular tone (Koenitzer et al., 2007). HINT1 acts as a transcriptional suppressor via direct binding to transcription factors. The HINT1-transcription factor complex can be dissociated through the sequestering of HINT1, consequently leading to transcriptional activation. Known sequestrants of HINT1 are diadenosine tetraphosphate (AP4A), a side-product of KARS activity (Brevet et al., 1982), and the N-terminal intracellular domain of teneurin-1 (TEN1-ICD), a cleaved-off peptide that translocates to the nucleus (Scholer et al., 2015). Basal AP4A and TEN1-ICD levels thus determine the delicate balance between HINT1-mediated transcriptional repression and activation. Known transcription factors directly regulated by HINT1 are MITF, USF2, pontin and reptin (Razin et al., 1999; Lee and Razin, 2005; Weiske and Huber, 2005). Independent of its enzymatic activity, HINT1 has an overall repressive effect on the T cell factor 4 (TCF-4)-β-catenin transcriptional activity, neutralizing the activating effect of pontin and strengthening the transcriptional repression by reptin (Weiske and Huber, 2006). Also unrelated to its catalytic activity, HINT1 inhibits the activator protein-1 (AP1) transcription factor by binding to the POSH-JNK2 complex and inhibiting c-Jun phosphorylation (Wang et al., 2007). Finally, HINT1 interacts with the cyclin-dependent kinase 7 (CDK7), part of the TFIIK component of the general transcription factor TFIIH (Keogh et al., 2002). It is presumed that catalysis by HINT1 is a prerequisite for proper formation of the TFIIH complex (Bieganowski et al., 2002). Loss of HINT1 increases susceptibility to carcinogenesis in mice (Su et al., 2003; Li et al., 2006), suggesting a role as a tumour suppressor. Hint1-/- mouse embryonic fibroblasts display an augmented growth rate, spontaneous immortalization and an increased resistance to ionizing radiation (Su et al., 2003). In some cancer cell lines, HINT1 expression is decreased due to epigenetic silencing, and its subsequent upregulation then halts cell proliferation, independent of the HINT1 catalytic activity (Wang et al., 2007; Zhang et al., 2009). HINT1 interacts with the μ-opioid receptor (MOR), the major molecular target for morphine analgesia (Guang et al., 2004; Rodriguez-Munoz et al., 2011).
HINT1 functions as an adaptor coupling protein kinase C gamma (PKCγ) to the MOR to downregulate its signalling capacity (Ajit et al., 2007). The role of HINT1 in the MOR pathway is unrelated to its enzymatic activity, as this function does not depend on HINT1 dimerization (Rodriguez-Munoz et al., 2008). HINT1 is implicated in the regulation of mood and behaviour, suggesting an additional role in the CNS. HINT1 levels are increased in the dorsolateral prefrontal cortex of patients with major depressive disorder and, conversely, decreased in the same brain regions of patients with schizophrenia (Varadarajulu et al., 2012). Furthermore, association studies reveal HINT1 as a susceptibility gene for schizophrenia (Chen et al., 2008; Kurotaki et al., 2011), bipolar disorder (Elashoff et al., 2007) and nicotine dependence (Jackson et al., 2011; Fang et al., 2014). So far, CMT patients carrying HINT1 mutations have not been neuropsychiatrically evaluated. Such examinations could help reveal putative common pathomechanisms for disorders of the peripheral and central nervous systems.
CMT mutations cause loss of HINT1 function
The 12 known CMT-causing mutations (Zimon et al., 2012; Zhao et al., 2014; Boaretto et al., 2015; Lassuthova et al., 2015; Rauchenzauner et al., 2016) (Fig. 3B-D) cause loss of functional HINT1 protein, because they: (i) affect residues critical for the catalytic activity of HINT1 (H112N, H114R) (Bieganowski et al., 2002; Ozga et al., 2010); (ii) putatively lead to nonsense-mediated decay of the mutant transcript (H51Ffs*18, Q62*); or (iii) are proven to cause protein instability and subsequent proteasome-mediated degradation (R37P, H51R, C84R, W123*) (Zimon et al., 2012). Five of the mutations (R37P, H51R, C84R, H112N, W123*) were modelled in a yeast strain that is deficient for the orthologous gene, HNT1 (Zimon et al., 2012). This strain does not grow on synthetic galactose-containing media at 39°C (Bieganowski et al., 2002). Under standard culturing conditions, however, the HNT1 knockout strain is perfectly viable and indistinguishable from the wild-type, indicating that HNT1 is a non-essential gene. Unlike wild-type human HINT1, the CMT-causing proteins cannot complement the growth phenotype of this strain, thus providing further evidence that loss of functional HINT1 leads to peripheral neuropathy. It is currently unclear which one of the multiple HINT1 functions is most affected by the CMT mutations. However, it is likely that this function is dependent on the enzymatic activity of the protein, as stable but catalytically inactive HINT1 versions (e.g. H112N) are capable of causing the neuropathy.
Hint1 knockout mice do not show signs of peripheral neuropathy
Homozygous and heterozygous Hint1 knockout mice display normal foetal and adult development and appearance. Yet, they are more susceptible to chemically induced carcinogenesis and to spontaneous tumour development on ageing (Su et al., 2003; Li et al., 2006). These findings are indicative of the role of HINT1 as a haploinsufficient tumour suppressor. Additionally, ablation of HINT1 leads to major reprogramming of lipid homeostasis (Beyoglu et al., 2014), likely due to increased proliferative signalling and reduced pro-apoptotic signalling in the liver of Hint1 knockout mice. Intriguingly, unlike in humans, Hint1-/- mice do not have overt signs of neuropathy (Seburn et al., 2014).
Thorough examination for relevant neurological phenotypes, including motor performance, nerve, muscle and neuromuscular junction anatomy, nerve conduction studies and EMG does not show any evidence of axonal degeneration or neuromyotonia. Mice were aged to more than 1 year and, additionally, they were subjected to external stressors such as low temperature and a potassium channel-blocking agent to provoke neuromyotonia (Shillito et al., 1995); yet all without the appearance of neuropathy-related phenotypes. This finding supports the notion that, similar to yeast, HINT1 is a non-essential gene in mammals and that alternative pathways exist that can functionally complement organismal HINT1 deficiency. This suggests that activation or upregulation of such pathways in patients with HINT1 mutations may provide an attractive route for the development of therapeutic strategies for HINT1 neuropathy. Conclusion Recessive, loss-of-function mutations in HINT1 cause an early-onset, axonal form of motor-predominant peripheral neuropathy, often accompanied by the characteristic feature of neuromyotonia. The considerable prevalence of the disorder, especially in patients of European ancestry, is largely due to the existence of founder mutations, of which R37P is by far the most frequent. Here, we propose guidelines to recognize and differentiate HINT1-related neuropathy and suggest treatment strategies to manage common symptoms. As a recent player in the field of hereditary neuropathies, the function of HINT1 in peripheral nerves is still completely unexplored. The gene is ubiquitously expressed, playing a role in manifold transcriptional and signalling pathways. Moreover, previous studies have indicated a relation of HINT1 to CNS functioning and pathology, yet, it was highly unexpected to find this housekeeping gene causing a disorder affecting the peripheral nerves exclusively. The high prevalence and significant burden of the HINT1 neuropathy warrant further investigations into its underlying pathomechanisms, with the aim of finding therapeutic strategies to treat this incurable disorder.
2018-04-03T05:53:22.264Z
2016-12-21T00:00:00.000
{ "year": 2016, "sha1": "6ee4cbe837a8418f6f651be895f73d141a06eba1", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/brain/article-pdf/140/4/868/24174825/aww301.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "6ee4cbe837a8418f6f651be895f73d141a06eba1", "s2fieldsofstudy": [ "Medicine", "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
174802925
pes2o/s2orc
v3-fos-license
A Generic Synchronous Dataflow Architecture to Rapidly Prototype and Deploy Robot Controllers
The paper presents a software architecture to optimize the process of prototyping and deploying robot controllers that are synthesized using model-based design methodologies. The architecture is composed of a framework and a pipeline. Therefore, the contribution of the paper is twofold. First, we introduce an open-source actor-oriented framework that abstracts the common robotic uses of middlewares, optimizers, and simulators. Using this framework, we then present a pipeline that implements the model-based design methodology. The components of the proposed framework are generic, and they can be interfaced with any tool supporting model-based design. We demonstrate the effectiveness of the approach by describing the application of the resulting synchronous dataflow architecture to the design of a balancing controller for the YARP-based humanoid robot iCub. This example exploits the interfacing with Simulink and Simulink Coder.
Introduction
In the past few decades, robotics has experienced a continuous shift from applications in constrained industrial environments to those involving autonomy, interaction, and collaboration with external agents. The adaptation of robotic devices to new tasks often presents significant challenges in both cost and time. Thus, the capability of prototyping a new controller and rapidly deploying it to the target robotic device is becoming increasingly important. The canonical approach to develop a robotic controller can be summarized in two distinct phases. 1 In the first phase, the robotic controller is synthesized, tuned, analyzed, and possibly tested in a simulated environment. Arbitrarily complex models of the controlled system are typically exploited. In the second phase, the controller is ported to the real device, tuned again and executed. Each minor change to the controller requires iterating this entire process from the start, and a lot of effort is spent to minimize manual operations. Model-based design 2 (MBD) is a methodology that emerged to deal with the challenges introduced by the need to continuously improve complex systems. MBD aims to simplify the development by providing a common environment shared by people of different disciplines involved in the different design phases. 3 Later changes to the original design, whether due to early mistakes or modified requirements, are easier to propagate; therefore the time and cost of development can be reduced. 4 A characteristic of model-based design is that the iterative process of continuous improvement is performed with unified visual tools, typically based on dataflow programming languages and frameworks. The dataflow naming originates from the view of programs as directed graphs of computations, where nodes represent operations and edges represent the data exchanged between them. Actor-oriented programming (AOP) builds on this view, modelling the computational nodes as actors that expose a well-defined interface.
The application of MBD to robotic controller development can narrow the gap between control engineers, who are used to approaching systems with block diagrams, and software engineers, who are used to procedural and object-oriented programming. However, this approach is not exempt from the complications introduced by system integration, which often creates time-consuming obstacles. Particularly for what concerns robotics, actor-oriented programming languages by themselves are not the final solution. In fact, object-oriented programming still has a central role in the development of low-level algorithms. The aim of these actor-oriented languages is not to substitute OOP, but to complement it.
In fact AOP is more suitable to target the creation of applications that belong to higher abstraction layers, implementing a design principle compatible with the separation of concerns. 13,14 It represents a valid choice to ease the interconnection of self-contained black-box functionalities, which represent the building blocks of any robotic controller. In this work, we propose a software architecture composed of a framework inspired by AOP and a pipeline for its application to robot controllers design. The framework intends to reduce the effort spent on system integration while minimizing both code and functionality duplication. The pipeline implements all the MBD stages and it aims to minimize the controllers lead time while automatizing as much as possible the prototyping and deployment processes. Rapid prototyping and continuous deployment are achieved interfacing the framework respectively with Simulink and Simulink ® Coder™. 15 Simulink provides out-of-the-box a wide library of black-box functionality exposed as blocks and also allows to be extended and integrated with external algorithms. Its status of visual programming and debugging is very mature and well documented. Simulink Coder provides the automatic code generation capability that aid the implementation of the deployment stage of MBD, removing the need to port or adapt the controller to another domain before being executed in the target platform. 16 Despite our tools selection, the framework has been designed in such a way to simplify the integration with other existing actor-oriented frameworks. The logic of the presented black-box functions (developed in OOP) is independent of them. This design allows to effectively separate the two programming domains while exploiting the best features from both. More specifically, the work presented in this paper is based on a previously introduced framework. 17 From the status described in that work, the underlying software architecture considerably changed, but a big effort was spent to maintain as much as possible the same user experience. The whole-body interface layer proposed in the original work has been entirely removed, moving the responsibility of the robot abstraction to the middleware layer. Moreover, most of the improvements detailed in the same study have been adapted to the new architecture and implemented. Beyond a radical architectural advancement, the main extension presented in this work is the fulfillment of the automatic code generation support, fundamental to complete the implementation of model-based design. The paper is structured as follows. First, we list tools and frameworks belonging to actor-oriented programming and implementing the model-based design pattern, and define a common terminology used throughout the paper. Then, we present the architecture of the proposed software framework and outline how we implemented AOP for robotic controllers design. Successively, we describe how the proposed framework can be exploited to obtain a pipeline that implements the typical stages of model-based design. We present a development cycle example for a balancing controller that targets a humanoid robot. We proceed by discussing the current limitations of this workflow and future improvements. Finally, we draw conclusions. Related Software Model-based design is a methodology that covers many software layers. 
Following a top-down view, the conventional unified tools typical of MBD usually share the following features: • Support to automatically generate real-time code from a model Providing a complete taxonomy of the existing tools and frameworks is not a trivial task. To simplify the analysis, we limit the overview to frameworks that use the synchronous dataflow model of computation, ignoring those that also support concurrent actors. Considering the scope of the current work, we separate the existing solutions in two categories: hybrid and discrete-only. Hybrid tools are the most generic and complete, they typically allow performing both continuous and discrete simulations. Since they provide solvers for each of the two domains, hybrid tools can execute both offline simulations of continuous ODE systems and their discrete equivalent which is compatible with real-time usage. Discrete-only tools, instead, target only discrete-time systems, and their execution is limited to call an equivalent step function. Given their discrete nature, this second category is compatible by design with real-time usage. Given these definitions, engines that belong to the hybrid group are Drake, 18 OpenModelica, 19 and the commercial software Simulink, Dymola ®20 and LABVIEW. 12 Excluding Drake, all the others are unified visual tools which fully enter into the model-based design framework. Simulink, in particular, is the engine that became the de-facto standard for model-based design. It implements all the MBD stages providing great flexibility and very simple user experience. Other available engines are represented by those that emerged in the context of software engineering for robotics, all belonging to the category of the discrete-only tools. These type of engines are typically designed to support the development of software that runs in real-time on a robotic platform, and so they do not support simulation-specific features such as continuous time system modeling. Examples of this software are Stack of Task's Dynamic Graph, 21 Genom3, 22 OpenRTM 23 and Orocos. 24 A features comparison of the tools listed in this section is shown in Table 1. For what concerns the deployment stage of MBD, we can identify few suitable frameworks that provide support of automatically generate code. The scope of this process is to convert a model prototyped as a directed graph to a low-level procedural representation. Nowadays, automatic code generation is a standard feature of the MATLAB system. The Simulink Coder toolbox allows generating optimized C and C ++ code from a Simulink model, and it provides support to customize the sources injecting custom code during the generation process. Other frameworks that are worth mentioning are most of the software suites based on the Modelica 11 language, which typically support generating low-level code from their models. The Functional Mock-up Interface 25 (FMI), despite being outside the categorization described above, is still relevant to this overview. FMI is a standardized interface widely used in industry for model-based development. It is a feature-rich and production-grade tool with a clear standard, constantly improving at each release but, as most of the tools listed in this section, it was not available when we started the development of our software stack. In any case, its adoption in its current form would not be possible due to the lack of the support of vector messages between actors, shortcoming that will be removed in the upcoming version of the standard. 
Table 1: Features comparison of the listed software (columns: Software, Hybrid, Visual Tool, Code Generation, Real-time Native, Open Source).
Figure 1: Visualization of the implementation of an actor: the block.
Other interesting frameworks for controller design which are related to the cited engines are the Robotics Toolbox 26 and the Robotics System Toolbox. 27 The latter, particularly, is one of the few unified frameworks that fully implements MBD specifically for robotic controllers. It is based on the ROS 28 middleware and it implements many of its features. However, the support of kinematics and dynamics has been added only recently, and it cannot be extended to interface with third-party robotic libraries.
Terminology
The majority of the hybrid and discrete-only software listed in the previous section share a software architecture composed of similar components. In view of the AOP architecture used in this work, we will make use of the following terminology, illustrated in Figure 1:
Blocks are elements that provide self-contained functionality. They wrap algorithms exposing a black-box interface composed of inputs, outputs, and parameters.
Ports are virtual elements associated with block inputs and outputs. They store information to identify which kind of data is supported by the block (typically size and type).
Signals are the elements that connect ports of different blocks. When two ports are connected, they share their data.
Engines control the channel through which the blocks communicate. Engines typically create the computational graph and assign the blocks' execution order. They also collect the block outputs and propagate them to the handled channel. They usually provide graphical tools to visualize blocks and help their interconnection by creating signals between them.
These terms naturally translate to the definitions of the AOP framework: blocks map to actors, signals map to channels, and ports represent the interface between actors and channels.
Figure 2: Software architecture outline. BLOCKFACTORY provides the Block and BlockInformation API along with the necessary tools to interface with the supported engines. The BLOCKFACTORY plugins contain the logic of the blocks, and they are loaded during runtime from the implementation of the engine API. This figure illustrates well the abstraction of the components of the architecture. The engine is only aware of its own API, which represents the entry point that allows interfacing with it. The implementation of the engine API, once blocks are loaded, can call their functionality only through the Block API. Finally, the blocks contained in the plugins can communicate with the engine only through the abstraction provided by the BlockInformation API.
Framework Software Architecture
This section describes the software architecture of the proposed framework. Firstly, the factory pattern and the plugin concept are introduced. Their combined usage has direct applicability within an AOP context. Secondly, we describe in detail the two components that form the framework: BLOCKFACTORY 29 and WHOLE-BODY TOOLBOX. 30 BLOCKFACTORY provides the support for actor-oriented programming and the interfacing with third-party frameworks. WHOLE-BODY TOOLBOX provides a plugin library containing the actors that expose the robotic stack used for controller design: robotic middlewares, rigid-body dynamics libraries, and robotic simulators. An overview of the main classes of these two projects is shown in Figure 3.
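To make this terminology and the block/engine separation of Figure 2 more concrete, the following minimal C++ sketch shows how such interfaces could look. The names (Block, EngineContext, PortInfo) and method signatures are illustrative assumptions and do not reproduce the actual BLOCKFACTORY API.

#include <string>
#include <vector>

// Hypothetical port metadata: which data a block input/output accepts.
struct PortInfo {
    std::string name;
    int size;  // number of elements in the vector signal
};

// Abstraction of the engine, in the spirit of BlockInformation: blocks query
// parameters and signal buffers only through this interface, never the engine directly.
class EngineContext {
public:
    virtual ~EngineContext() = default;
    virtual double parameter(const std::string& name) const = 0;
    virtual const std::vector<double>& input(int port) const = 0;
    virtual std::vector<double>& output(int port) = 0;
};

// Abstraction of an actor, in the spirit of the Block interface: the engine
// calls the block only through these methods, regardless of where it runs.
class Block {
public:
    virtual ~Block() = default;
    virtual std::vector<PortInfo> inputPorts() const = 0;
    virtual std::vector<PortInfo> outputPorts() const = 0;
    virtual bool initialize(EngineContext& ctx) = 0;
    virtual bool output(EngineContext& ctx) = 0;  // one synchronous dataflow step
    virtual bool terminate(EngineContext& ctx) = 0;
};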
In other terms, WHOLE-BODY TOOLBOX provides the algorithms, BLOCKFACTORY provides the back-end of the software infrastructure that abstracts blocks and engines. The solvers and the front-end are instead provided by the selected engine. Factory pattern and plugin libraries Third-party engines typically offer a set of APIs that can be used to integrate external software inside their framework. In order to detach effectively the block implementations from the third-party engine, the combination of the factory pattern and dynamically loaded plugins represents one of the canonical solution. 31 With the factory pattern, objects are created from a factory function without the need to specify their class. Typically a label or identifier is associated with this kind of objects, and only this information is required during their instantiation. Unfortunately, this is not enough to achieve the separation between engines and blocks, because the factory function only hides their allocation and the engine still needs to link against their implementation. This shortcoming can be overcome with plugin libraries dynamically loaded during runtime. In this case, the engine needs to have two information: the label associated with the implementation of the block and the name of the shared library that contains it. Once the plugin is dynamically loaded, the engine can instantiate block objects using a factory function without knowing anything about the class that implements them. Then, it can call their functionality through the common interface. The implementation of block classes is not constrained to any model of computation of AOP. In the most general form they can be asynchronous and concurrent. The combined architecture of factory and plugins represents a natural implementation of actor-oriented programming. In fact, the limitation of the engine to access the functionality of the blocks through their exposed abstraction layer enforces one of the key characteristic of actors: the exposure of a well-defined interface. For what concerns robotic controllers, the separation layer introduced by the plugin-based factory pattern provides a great help in system integration. In fact, since the plugin libraries containing the blocks are engine-agnostic, they can be loaded from each engine without the need to recompile them. This means that a controller prototyped with one engine can load the same library of the deployed controller. The code duplication is hence minimized and the robustness of the system is improved because the logic of the blocks is shared. Another benefit of this architecture to the system integration is about dependencies. The standalone plugins can link against any third-party library without the need to operate on the layer specific to the engine. BlockFactory The concepts defined by actor-oriented programming are implemented in a tool called BLOCKFACTORY. It allows creating blocks (the actors) that exchange data between each other through the signals connected to their exposed ports, as illustrated in Figure 1. The entities of AOP are mapped to C ++ classes and interfaces, reported in Figure 3. BLOCKFACTORY also implements the factory pattern and provides support to dynamically load during runtime plugins that contain block objects. In order to obtain engine-agnostic blocks, the information exchanged between blocks and engines needs to be abstracted. For this scope, BLOCKFACTORY provides an abstraction layer called BlockInformation placed between blocks and engines. 
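The plugin-based factory pattern described above can be sketched as follows, reusing the hypothetical Block and EngineContext interfaces from the previous example (assumed to live in a "Block.h" header). The exported createBlock symbol, the PassThrough block, and the use of dlopen are assumptions made for illustration; they are not the actual BLOCKFACTORY loading mechanism.

// toy_plugin.cpp -- compiled into a shared library such as libToyBlocks.so.
#include "Block.h"  // hypothetical header containing the Block/EngineContext sketch above
#include <cstring>
#include <vector>

// A trivial block that copies its single input to its single output.
class PassThrough : public Block {
public:
    std::vector<PortInfo> inputPorts() const override  { return {{"in", 1}}; }
    std::vector<PortInfo> outputPorts() const override { return {{"out", 1}}; }
    bool initialize(EngineContext&) override { return true; }
    bool output(EngineContext& ctx) override { ctx.output(0) = ctx.input(0); return true; }
    bool terminate(EngineContext&) override { return true; }
};

// C factory exported by the plugin: the engine creates blocks by label only.
extern "C" Block* createBlock(const char* label) {
    return std::strcmp(label, "PassThrough") == 0 ? new PassThrough() : nullptr;
}

// engine_loader.cpp -- the engine knows only the library path and the label.
#include <dlfcn.h>

Block* loadBlock(const char* libraryPath, const char* label) {
    void* handle = dlopen(libraryPath, RTLD_LAZY);
    if (handle == nullptr) { return nullptr; }
    using Factory = Block* (*)(const char*);
    auto factory = reinterpret_cast<Factory>(dlsym(handle, "createBlock"));
    return factory != nullptr ? factory(label) : nullptr;
}

With this kind of split, the engine only needs a library path and a block label, so the same plugin can in principle be loaded unchanged by a visual engine, by autogenerated code, or by any other host, which is the property the text attributes to the plugin-based factory pattern.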
As shown in Figure 2, blocks can query information from the engine only through the BlockInformation interface, and engines can only call block functionalities through the Block interface. The interfacing with third-part engines can be achieved in two steps. Firstly, their own API or callbacks need to be implemented for loading during runtime the plugins containing the block logic. Secondly, in order to provide blocks the information from the engine they need, the BlockInformation interface needs to be implemented for the selected engine. In the current version of BLOCKFACTORY, we provide support of the Simulink and Simulink Coder engines. In this case, the implementation of their API corresponds in developing respectively a C MEX S-function and a Target Language Compiler (TLC). BLOCKFACTORY provides these two files that are independent from the block implementation, and can load generic objects implementing the Block interface. The actor-oriented applications that can be built with BLOCKFACTORY are universal, and not related by any means to robotic controllers. BLOCKFACTORY is engine-agnostic, and can be interfaced with engines specific to the target application. System integration is simplified since it contains only a small number of classes and it has no dependencies. Beyond the scope of the presented work, BLOCKFACTORY can find applicability in fields such as electrical drives, communication systems, power converters, etc. Generally, it can cover all use-cases that need exposing to the engines custom logic (either inlined in the block or wrapping external libraries) or interfacing with external devices while exposing only a simple and unified interface. Whole-Body Toolbox WHOLE-BODY TOOLBOX is a C ++ plugin library that exposes canonical algorithms and utilities commonly used to develop robotic controllers, such as rigid-body dynamics algorithms and communication capabilities with robotic devices mediated by middlewares. These functionalities are wrapped as block entities and they can be loaded independently by all the third-party engines supported by BLOCKFACTORY. In order to use the blocks in a Simulink model, the toolbox also provides a Simulink library that exposes the C ++ classes as visual blocks, which can be imported by drag-and-drop and configured through text boxes and drop-down menus. For historical reasons the middleware we actively support is YARP. 32 Our main target platform is the iCub humanoid robot, 33 even though all YARP-compatible real and simulated robots are supported out-of-the-box. As an example, a previous work 17 showed a simulated whole-body controller running on both iCub and Walkman 34 robots. Historically WHOLE-BODY TOOLBOX was developed for whole-body control, 35 hence the name. In its last revisions, it became a generic robotic toolbox that can be used for any type of controller. The blocks implementing dynamics and kinematics algorithms are mainly based on iDynTree 36 and do not depend on any middleware. They can be used also with robots which are not YARP-based, outsourcing, in this case, the interfacing with the target platform to third-party plugin libraries. The only requirement for using the provided algorithms is the availability of an URDF 3 description of the robot to control. A complete software stack for robotic controllers typically involves the interaction with a physic simulator. The robotic simulator we chose to support is Gazebo. 
37 The interaction between Simulink and Gazebo follows a co-simulation pattern, where the former is the master that issues forward step commands to the physics engine at each simulation step. The controller transparency between the real and the simulated robot is achieved by exposing the same network interface, exploiting the abstraction layers provided by the YARP middleware. In the case of the simulated robot, the implementation of these interfaces is provided by Gazebo Yarp Plugins. 38 The toolbox also provides generic utilities for robotic applications, such as discrete filters, Cartesian trajectory controllers, 39 and quadratic programming solvers based on QpOASES. 40
The Pipeline
In the previous section, we introduced the proposed framework, described its architecture, and discussed how its components interact with each other. In this section, we will describe the pipeline that implements MBD from the point of view of the control engineer, detailing how it is practically used and how the components of the framework relate to each step of the development. The proposed pipeline implements all four stages on which the model-based design pattern is based. We will demonstrate a practical usage showing the steps to rapidly prototype and deploy a balancing controller, 41, 42 executed on the humanoid robot iCub. 33 A simplified overview of the theory behind the controller is reported in the preceding work. 17 Since we managed to maintain the compatibility of the controllers designed with the previous architecture, the experimental results of that study that use Simulink correspond to the prototyping phase of this pipeline. As explained in more detail below, thanks to the abstraction between the controller and the robot provided by the YARP interfaces, the pipeline includes a few intermediate steps in addition to the stages defined by MBD. The first stage of MBD is plant modeling. For controller applications, the plant is typically composed of the robot and the environment where it operates. In our case, the model of iCub is represented by a URDF file, which stores its kinematic and dynamic properties. The model of the robot is generated semi-automatically from its CAD design, a solution that allows obtaining a very detailed description of the robot.
Figure 4: Overview of the pipeline implementing model-based design. The prototyping phase, in the first row, assumes the availability of a model of the robot. In step 1, a Simulink model of the controller is created. In step 2, the controller is tested in the Gazebo simulator using the robot model. In step 3, the same controller used in simulation is tested on the real robot, leveraging the robot transparency provided by exploiting the same YARP interfaces. All the computations of this phase are executed from an external machine running Simulink. The communication with the real robot is achieved through the YARP middleware. The second row illustrates the deployment phase. Exploiting Simulink Coder, in step 4, C++ code is automatically generated from the Simulink controller. Steps 5 and 6 perform the same tests as the previous phase from the same external machine, respectively on the simulated and real robot. This time, though, the autogenerated controller is executed. Finally, in step 7, the controller is deployed to the computer in the robot head and runs standalone.
For what concerns the environment, we use the default empty world provided by the physics engine running inside the simulator.
The implementation of the remaining stages of MBD is illustrated in Figure 4. The first row shows the prototyping phase of the pipeline and the second one shows the deploying phase.

Figure 4: Overview of the pipeline implementing model-based design. The prototyping phase, in the first row, assumes the availability of a model of the robot. In step 1, a Simulink model of the controller is created. In step 2, the controller is tested in the Gazebo simulator using the robot model. In step 3, the same controller used in simulation is tested on the real robot, leveraging the robot transparency provided by exploiting the same YARP interfaces. All the computations of this phase are executed from an external machine running Simulink. The communication with the real robot is achieved through the YARP middleware. The second row illustrates the deploying phase. Exploiting Simulink Coder, in step 4, C++ code is automatically generated from the Simulink controller. Steps 5 and 6 perform the same tests of the previous phase from the same external machine, on the simulated and real robot respectively. This time, though, the autogenerated controller is executed. Eventually, in step 7, the controller is deployed to the computer in the robot head and runs standalone.

Referring to the figure, the depicted steps are as follows:

1. This first step implements the controller prototyping stage of MBD. The controller is designed in Simulink using the default system blocks and the blocks provided by WHOLE-BODY TOOLBOX. When the user drops a block into the model, the S-function contained in BLOCKFACTORY loads the plugin library and, using the factory method, allocates the object that implements the block's logic.

2. When the controller is ready, it can be executed on the simulated robot model. WHOLE-BODY TOOLBOX provides a block for interfacing with Gazebo, synchronizing it with the simulation loop running in Simulink. This step implements the system simulation stage.

3. In this additional third step, the control designer can connect the controller, still running in Simulink on an external machine, to the real robot. Since the controller now needs to run in a real-time setting, the block used to interface with the simulator is replaced with a block that keeps the simulation loop synchronized with the real clock. Measurements and reference signals are gathered and streamed in real time.

4. At this point, the controller is already functional on both the simulated and the real robot. The last stage, controller deployment, starts with step 4. Exploiting the capabilities of Simulink Coder, the directed graph visually created in Simulink is translated into an automatically generated C++ class. In our software architecture, Simulink Coder is handled as another engine (as reported in Figure 3), and a different implementation of the BLOCKFACTORY interface that abstracts the engine is used. A very important detail of this process is that the logic implemented by the WHOLE-BODY TOOLBOX blocks is not inlined in the autogenerated class. Instead, analogously to the behavior of any engine supported by BLOCKFACTORY, the plugin-based factory pattern is used. This means that the autogenerated C++ class loads the same plugin containing the logic of the robotic blocks that was used in the Simulink engine. Firstly, this helps keep the behavior of the controllers running in different engines aligned. Secondly, assuming a constant controller graph, it simplifies the delivery of updates and fixes to the logic of the WHOLE-BODY TOOLBOX blocks: updated blocks can be deployed to the target platform simply by distributing an updated plugin library, removing the need to regenerate the sources and rebuild the application. Finally, it is worth noting that once the class has been generated and compiled, the presence of Simulink is no longer necessary.

5. This step corresponds to step 2; in this case, though, the automatically generated controller is executed on the simulated robot.

6. Similarly, this step corresponds to step 3, with the automatically generated controller.

7. The real deployment to the target platform is represented by this last step. Until now, the controller always ran from the external machine, communicating with the real robot through the network by exploiting the YARP observer pattern. 31 The automatically generated class of the controller and the BLOCKFACTORY plugin are now compiled (or cross-compiled) for the on-board machine of the robot and, lastly, deployed.
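Steps 4-7 rely on the plugin-based factory pattern: the autogenerated controller does not inline the robotic block logic, but loads it at runtime from the same shared library used by the Simulink engine. The snippet below is a generic illustration of that pattern using plain POSIX dlopen/dlsym; it is not BLOCKFACTORY's actual loader, and the plugin path, exported symbol name, and block class name are placeholders.

```cpp
// Generic illustration of the plugin-based factory pattern used in steps 4-7:
// the application (here, a stand-in for the autogenerated controller) loads
// the block logic at runtime from a shared library instead of linking it in.
// Plain POSIX dlopen/dlsym is used for clarity; BLOCKFACTORY's own loader and
// symbol names differ. Plugin path and class name below are placeholders.
// Build on Linux with: g++ loader.cpp -ldl
#include <dlfcn.h>
#include <iostream>
#include <memory>

// Minimal stand-in for the block interface (see the earlier sketch).
class Block {
public:
    virtual ~Block() = default;
    virtual bool output() = 0;
};

// The plugin is expected to export a C factory function creating blocks by name.
using BlockFactoryFn = Block* (*)(const char* blockClassName);

std::unique_ptr<Block> loadBlock(const char* pluginPath, const char* className) {
    void* handle = dlopen(pluginPath, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        std::cerr << "cannot load plugin: " << dlerror() << "\n";
        return nullptr;
    }
    auto factory = reinterpret_cast<BlockFactoryFn>(dlsym(handle, "createBlock"));
    if (!factory) {
        std::cerr << "factory symbol not found\n";
        return nullptr;
    }
    return std::unique_ptr<Block>(factory(className));
}

int main() {
    // The Simulink S-function and the autogenerated C++ class can both go
    // through this kind of loader: same plugin, same logic, different engines.
    auto block = loadBlock("./libToolboxPlugin.so", "InverseDynamics");
    std::cout << (block ? "block created\n" : "block not available\n");
}
```

Because the logic lives in the plugin, shipping a fixed or updated plugin library is enough to update every engine that loads it, which is the delivery property discussed in step 4.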
The comments about the choice of the plugin-based factory pattern in step 4 become even more relevant at this last stage. In this example, the controlled robots -simulated and real-share the same kinematic structure. One may wonder which modifications would be necessary, in this new architecture, to run the controller on a robot with a different number of degrees of freedom. One of the new features of WHOLE-BODY TOOLBOX is a configuration block, in which it is possible to specify runtime information such as the name of the URDF model, the names of the controlled joints, and the name of the robot used to set up the YARP context. Excluding edge cases, this is enough to make controllers independent of the robot.

Blockfactory

The dataflow framework BLOCKFACTORY provides, as described in the previous sections, the abstraction layers between engines and black-box functions, supplied e.g. by plugins such as WHOLE-BODY TOOLBOX. This means that BLOCKFACTORY is responsible for exposing blocks in such a way that they can be properly configured by the solvers included in the engines. Currently, it only supports engines that provide fixed-step discrete solvers. This is the only requirement for models that have to be executed on a real-time system. However, BLOCKFACTORY was conceived as a generic dataflow framework and, when deployment is not the final target, it should also provide compatibility with continuous solvers, which typically need to operate on the derivatives of the block state. In its current state, the block interface is modeled as a stateless system. The engine can only trigger the evolution of a block's state by calling its output method, since blocks are treated as instantaneous functions. However, stateful blocks can be extremely convenient in some use cases. Indeed, WHOLE-BODY TOOLBOX already contains blocks that hold an internal state, but the state is hidden inside the implementation. One consequence of this hidden state is that blocks that need to know the step size cannot obtain it directly from the engine; this information must instead be passed as a parameter. This behavior can be unintuitive for the end user. Furthermore, it is more error prone, since every time the user changes the step size, the parameters of all the blocks requiring it must be updated accordingly. This would not be necessary if the blocks were modeled so as to expose their hidden state and rely on engine features to address this shortcoming.
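As a small illustration of this shortcoming, consider a first-order discrete low-pass filter of the kind the toolbox provides as a utility block. Because its state is hidden and the engine does not hand it the step size, the step size must be duplicated as a block parameter. The class and parameter names below are illustrative, not the toolbox's actual ones.

```cpp
// Illustration of the shortcoming described above: a first-order discrete
// low-pass filter block keeps its state hidden and must receive the step
// size as a parameter, duplicating information the engine already has.
#include <iostream>

class LowPassFilterBlock {
    double dt_;          // step size, duplicated from the model configuration
    double tau_;         // filter time constant
    double state_ = 0.0; // hidden internal state (invisible to the engine)
public:
    LowPassFilterBlock(double dt, double timeConstant)
        : dt_(dt), tau_(timeConstant) {}
    double output(double input) {
        const double alpha = dt_ / (tau_ + dt_);  // discretized first-order lag
        state_ += alpha * (input - state_);
        return state_;
    }
};

int main() {
    const double dt = 0.01;  // must be kept in sync with the model's fixed step by hand
    LowPassFilterBlock filter(dt, 0.1);
    for (int k = 0; k < 5; ++k) {
        std::cout << filter.output(1.0) << "\n";  // first samples of the step response
    }
}
```

If the model's fixed step is later changed, every block configured this way must be edited by hand, which is exactly the error-prone duplication discussed above; exposing the state to the engine would let the step size be queried instead of copied.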
The Functional Mock-up Interface 25 is a common standard that could serve as an alternative to the provided interfaces. Rather than a complete substitution, though, being able to expose blocks as their counterparts, called Functional Mock-up Units, would be a valuable addition. This would open up interoperability with the many tools that already support FMI, improving the integration of models designed with BLOCKFACTORY into complex co-simulation environments.

Whole-Body Toolbox

WHOLE-BODY TOOLBOX currently grounds its interfacing with robots on the YARP middleware, and we are aware that there are not many existing YARP-based robots. Although implementing the YARP interfaces for a new platform is not an insurmountable task, this requirement might limit the applicability of the pipeline. Going in this direction, a native implementation of the more common ROS middleware would widen the adoption of the proposed tools. A proof-of-concept of a ROS plugin implementing its publisher-subscriber pattern is already available. 43 Along the same lines, allowing WHOLE-BODY TOOLBOX to be installed without its YARP component would be another possible improvement. In fact, the majority of the blocks are middleware-agnostic and could already be used on systems without any middleware installed; for instance, many use cases might benefit from the included rigid-body dynamics algorithms. The current support for simulating a kinematic structure consists of a co-simulation setup between Simulink and Gazebo, which communicate through YARP messages thanks to the Gazebo Yarp Plugins. This system has worked well for us in the past; however, its use is not as straightforward as it could be. In fact, in order to obtain correct synchronization between the two simulators, all the components of this system have to be started with extra options. To simplify this process, it would be beneficial to embed the physics simulator inside a new block, treating it as a regular node of the graph. In this way, the synchronization could be greatly simplified by taking advantage of the information available when the simulator is executed as part of the computational graph. This would also make it possible to run headless simulations and to open the graphical user interface only when visual feedback is required. A limitation of WHOLE-BODY TOOLBOX that might restrict its applicability to generic tasks is the lack of maturity of its robotic perception stack. The main scope of our applications is balancing and locomotion; therefore, we have so far ignored perception and focused mainly on dynamics. Our controllers currently operate only on flat terrain, where perception is not required. However, creating new blocks to retrieve sensory data would be straightforward. Improved perception would then allow controllers to handle more structured scenarios, which can already be simulated in Gazebo by inserting the robot model into a structured world. In the long run, we would like to add support for existing machine-learning frameworks in order to embed networks and function approximators into our robotic controllers. Furthermore, we plan to introduce the possibility of exporting controllers with an interface that exposes a set of parameters, which would allow applying reinforcement learning algorithms.

Pipeline

The description of the pipeline reported in the previous section offers a general overview of its functionalities. However, it hides a few caveats which might not be obvious. In step 2, obtaining a model that can be effectively actuated in Gazebo requires tuning its PID gains. Finding a proper configuration is not straightforward, and many iterations are necessary. Furthermore, this process has to be repeated in step 3, when the controller is executed on the real robot. Once the right gains have been found, they can be reused in steps 5 and 6. These low-level configurations, however, are not strictly specific to this pipeline; they are related to the YARP implementation of the robot, and these parameters are meant to be abstracted away by the YARP interfaces. Similarly, it is interesting to analyze the factors that might differ between running the autogenerated controller from the external machine and from the on-board device of the robot.
The communication between the controller and the robot -typically consisting of sensor measurements and references-is mediated in both cases by the transport layer handled by YARP. In the first case, since the controller runs on an external machine, the data exchange occurs through the network transport layer. This type of data transfer introduces overhead and delays that might affect the performance of the controller. Deploying the controller to the on-board machine provides a great opportunity to mitigate this problem. However, this is not exempt from side effects. In fact, controllers are very sensitive to time delays, and dealing with them is still an open problem in many applications. Assuming that the same gains can be applied might hold surprises; in our experience, though, the controllers did not need any retuning. In any case, moving the computation of fundamental tasks such as motion control as close as possible to the actuators offers a tremendous opportunity to enhance system robustness. Furthermore, the deployed controller is an optimized version of the one executed in Simulink. If the rate of the controller is slower than the rate of the robot measurements and the actuation bandwidth, the gain in speed might allow increasing the controller frequency, which typically translates into better performance. A current limitation of the autogeneration process is how controller parameters stored in Simulink are handled. With the current BLOCKFACTORY version, due to how the code is generated, accessing them from the code is not very intuitive. As a consequence, it is not yet possible to obtain an autogenerated controller that can be used on different YARP-based robots without regenerating the sources.

Conclusions

In this paper, we presented a rapid prototyping and deployment architecture for robotic controllers based on the principles of model-based design. The architecture is composed of a framework and a pipeline. Developing and maintaining a controller in pure C++ is typically extremely demanding, and even minor architectural changes might require considerable effort. In light of the fast-prototyping aims, developing controllers using visual tools and then automatically generating optimized C++ code represents a great speedup. As the first component of the framework, we presented the actor-oriented tool BLOCKFACTORY. It abstracts generic algorithms and allows embedding them in generic applications modeled as directed graphs. The black-box functions that BLOCKFACTORY exposes are modeled as blocks with a predefined interface and are stored in collections as shared libraries. These libraries can then be loaded by third-party software that implements the model-based design pattern. Among all the available possibilities, BLOCKFACTORY allows interfacing with Simulink and Simulink Coder. Moreover, it streamlines the extension to other frameworks by providing a second interface that abstracts the engines from the block implementations. Different kinds of robotic controllers are typically based on a limited set of elemental functionalities, and complex logic can be achieved by their composition. In this work, as the second component of the framework, we presented WHOLE-BODY TOOLBOX, a collection of black-box functions for robotics representing the building elements of generic robotic controllers. This toolbox wraps a number of existing open-source projects belonging to the categories of robotic middlewares, rigid-body dynamics libraries, and mathematical optimization tools.
These two projects serve as the primary components of the proposed pipeline to rapidly prototype and deploy robotic controllers. In particular, the presented pipeline implements the rapid prototyping capability -an idiomatic feature of model-based design-by interfacing with the Simulink engine. Rapid deployment, instead, is achieved by exploiting the automatic code generation support provided by Simulink Coder. We explained step by step how the entire process works, detailing how the stages of model-based design have been implemented. Finally, we discussed the shortcomings of the current status of both components of the presented framework and of the resulting pipeline, together with our plans to address them. The present condition of these projects is the outcome of many years of development, during which the architecture changed often and gave us the opportunity to learn from our mistakes. Despite these continuous changes, a great deal of effort has been spent keeping the experience of the control engineers who use this framework as consistent as possible. As attempted in the previous papers, we have tried to be as critical as possible of our choices, being aware that the presented pipeline still leaves ample room for improvement. To conclude, we would like to remark that the development of all the presented tools has followed, from the beginning, an open-source and community-driven approach. On the one hand, we could never have achieved the current development status and our results had we not been able to interface with existing open-source software such as middlewares, simulators, and libraries; we are grateful to the entire robotics community for providing and maintaining them over time. On the other hand, collaborations with other research institutes -mainly belonging to the community built around the iCub humanoid robot-have helped us improve the robustness of the entire framework by using it within different contexts. A notable example concerns the interfacing with MATLAB: due to licensing limitations, we cannot test the pipeline thoroughly against many versions, and the use of continuous integration pipelines is also restricted. A wider user base with diverse setups has helped us debug problems we would probably never have encountered otherwise.
Effects of Atmospheric Pressure Plasmas on Isolated and Cellular DNA—A Review Atmospheric Pressure Plasma (APP) is being used widely in a variety of biomedical applications. Extensive research in the field of plasma medicine has shown the induction of DNA damage by APP in a dose-dependent manner in both prokaryotic and eukaryotic systems. Recent evidence suggests that APP-induced DNA damage shows potential benefits in many applications, such as sterilization and cancer therapy. However, in several other applications, such as wound healing and dentistry, DNA damage can be detrimental. This review reports on the extensive investigations devoted to APP interactions with DNA, with an emphasis on the critical role of reactive species in plasma-induced damage to DNA. The review consists of three main sections dedicated to fundamental knowledge of the interactions of reactive oxygen species (ROS)/reactive nitrogen species (RNS) with DNA and its components, as well as the effects of APP on isolated and cellular DNA in prokaryotes and eukaryotes. Introduction The nascent field of plasma medicine is a rapidly growing and innovative interdisciplinary endeavor encompassing plasma physics, life sciences, biochemistry, engineering and clinical medicine [1]. Electrical plasma ignited in gas under ambient conditions, called an "atmospheric pressure plasma" (APP), is an ionized gas composed of charged particles (electrons, positive and negative ions), radicals, neutral species (excited atoms and molecules), photons (visible and UV) and electromagnetic fields. An important feature of non-equilibrium (cold) APP is its ability to produce a mixture of biologically active agents, such as reactive oxygen species (ROS) and reactive nitrogen species (RNS), while remaining close to ambient temperature, which enables its safe application to living cells and tissues. The physical and chemical properties of the APP, and thus, the formation of plasma products, can be modified by using different types of APP (e.g., APP jets (APPJs), dielectric barrier discharge (DBD)), various configurations of plasma sources, or by varying the voltage applied, type of feed gas and its flow rate [2][3][4]. Thus, the type and dose of reactive species, as well as their distribution and penetration into the tissue, can be readily controlled. One of the APPs widely used for the direct treatment of cells and tissues is a DBD ignited in ambient air [5][6][7][8][9]. The DBD is also known as a "silent discharge" and typically consists of two electrodes, one connected to a high voltage and the other grounded, with either one or both of the electrodes covered with a dielectric material [10]. While DBD plasma provides the delivery of high concentrations of ROS/RNS directly to the treatment material, it is unable to treat non-homogenous surfaces. The floating electrode-DBD (FE-DBD) developed by Fridman et al. [6,9,11] for in vivo applications has the dielectric material covering the high-voltage electrode, while the tissue acts as the ground electrode. This configuration greatly reduces the flow of current to the treatment tissue. Another commonly used APP is APPJ, which is an indirect source since the plasma generated between two electrodes is transported to the treatment material using a feed gas, typically helium, argon or nitrogen [12][13][14]. The concentration of ROS/RNS reaching the treatment material is typically lower than that obtained with direct DBD. APPJ offers the advantage of treating irregular surfaces and oddly shaped objects. 
In addition to the above-mentioned direct and indirect APP sources, Isbary et al. [15,16] developed several hybrid plasma sources that provide the advantages of both direct and indirect APPs. Two such hybrid sources include FlatPlaSter and MiniFlatPlaSter, which are based on a surface microdischarge (SMD) technology. The SMD technology, in which a dielectric material is sandwiched between a high-voltage and a ground wire mesh electrode, has the advantage of generating a homogenous plasma discharge in atmospheric air without the need for special voltage requirements [15,16]. The hybrid sources allow direct treatment of living objects while eliminating the risk of current flowing through it. Typical DBD, APPJ and hybrid sources are shown in Figure 1, and their production and applications have been reviewed in detail by [1,4,17]. Several studies have attempted to characterize DNA damage and the associated cellular responses induced by APPs (Table 1). In this review, we briefly describe the various ROS/RNS involved in DNA damage. The DNA damage response and repair mechanisms in eukaryotic systems pertaining to oxidative stress are also summarized. Further, the effects induced on isolated and cellular DNA by the interactions of ROS/RNS present and/or produced in biological systems due to APP treatment are outlined in detail. The high levels of bicarbonate in interstitial (30 mM) and intracellular (12 mM) fluids suggest that the reaction between ONOO − and CO2 is the major pathway of decay of peroxynitrite in biological systems [94,95]. The redox potentials of some of the ROS and RNS are given in Table 3. Among ROS, • OH with a redox potential of 1.89 V vs. the potential of normal hydrogen electrode (NHE) is a strong oxidant. The • OH has the ability to abstract the hydrogen atom from the C-H bond. The • OH can also be added to C=C bonds at a faster rate than that for hydrogen abstraction [96]. Carbonate radical anions (CO3 •− ) with a redox potential of 1.59 V vs. NHE can oxidize biomolecules selectively by one-electron abstraction mechanisms [97]. In comparison, • NO2 is a milder oxidant. The redox potentials of the DNA bases are 1.7, 1.6, 1.42, and 1.29 V for thymine (T), cytosine (C), adenine (A), and guanine (G), respectively [98,99]. Among these radicals, • OH, CO3 •− , and • NO2 are capable of damaging biomolecules, and show different reactivity towards DNA residues and DNA itself. DNA is composed of two polynucleotide strands wound around each other to form a three-dimensional double-helix structure. Each nucleotide is, in turn, comprised of a five-carbon (deoxyribose) sugar, a phosphate group and a nitrogenous base. The nucleotides in each strand are covalently linked by the phosphodiester bond between the sugar and phosphate molecules, thus forming the sugar-phosphate backbone of the DNA strand. There are two basic categories of bases: the purines (adenine and guanine) and the pyrimidines (thymine and cytosine). The base is attached to the deoxyribose via the N-glycosidic bond. The two antiparallel strands of the DNA are held together by hydrogen bonds between the complementary base pairs, A-T and G-C. With regards to the hydrolytic stability of the various bonds in DNA, the most labile under physiological conditions is the N-glycosidic bond. Any modification to DNA nucleobases such as oxidation by ROS/RNS can hydrolyze the N-glycosidic bond, thus separating the nucleobase from the deoxyribose leaving an apurinic/apyrimidinic (AP) site. 
The fundamental chemistry and radical generation, as well as usual reactivity trends with DNA and its components [100], and with amino acids, peptides and proteins [101], have been summarized previously. Both the deoxyribose sugar and the nucleobases of DNA are susceptible to direct oxidative/nitrosative attacks by ROS/RNS. Under physiological conditions, O2 •− and H2O2 appear incapable of directly causing strand breaks or nucleobase modifications in DNA [100,102]. However, treatment of mammalian cells with H2O2 has been reported to induce DNA strand breakage, which is abrogated in the presence of • OH scavengers [103]. Hence, it appears that the toxicity of species such as O2 •− and H2O2 in vivo likely results from their conversion into • OH radicals via the Fenton reaction [100,102]. Moreover, the binding of Fe 2+ to DNA observed in vivo also promotes production of • OH radicals in the vicinity of DNA, facilitating the alteration of the nucleobase and deoxyribose moieties [104]. Interestingly, several researchers have demonstrated that O2 •− also extracts iron from iron-sulfur (4Fe-4S) clusters in dehydratases present in Escherichia coli (E. coli), thus increasing cytosolic iron concentration, and facilitating increased production of • OH radicals [105][106][107]. The • OH radicals react with all the purine/pyrimidine bases as well as the deoxyribose backbone generating both base-derived and sugar-derived products. In addition, • OH reactions with proteins surrounding DNA (e.g., histone) can produce DNA-protein cross-links. Apart from • OH radicals, several other ROS, such as 1 O2 and O3, are also capable of reacting directly with DNA. Of the four DNA nucleobases, 1 O2 oxidizes only guanine, which is the most oxidizable of the nucleobases [108][109][110]. In addition, 1 O2 induces strand breaks in DNA, however, it is much less frequent than oxidation of guanine to 8-oxo-guanine [108][109][110][111]. Studies have also shown that 1 O2-induced strand breaks in plasmid DNA are increased in the presence of thiols, glutathione, and cysteine, etc. [109]. O3 causes DNA damage both directly and indirectly [112,113]. Ito et al. [113] have shown experimentally that O3 reacts directly with DNA to produce the base oxidation product 8-oxo-guanine, while O3-induced strand breaks proceed via • OH radical production. Overall, the reactivity of O2 •− and 1 O2 is orders of magnitude lower than that of the • OH radical. The • OH radical is the most reactive oxidant, with nearly diffusion-controlled rate constants. However, the half-lives and the diffusion distance of ROS, as well as the location of residues in DNA, control the efficiency of inactivation and must also be considered. For example, O2 •− has a longer half-time than the • OH radical and therefore may possibly diffuse at great distances to react with DNA residues. DNA nucleobases can also be modified by hydrated electrons (eaq) and H atoms which are typically produced by ionizing radiation in water; however, they are far less reactive than • OH radicals [114]. While H atoms induce single strand breaks (SSBs) in DNA, they are not caused by a direct reaction with the deoxyribose backbone. Instead, the H atom reacts with a nucleobase to form a nucleobase radical, which then abstracts an H atom from the deoxyribose sugar, causing a strand break [100]. It has also been shown, both experimentally and theoretically, that hydrated electrons cannot induce strand breaks in DNA [114,115]. 
Similar to O2 •− , nitric oxide ( • NO) also does not react directly with DNA despite being a free radical. Instead, • NO toxicity is attributed to its conversion into other RNS such as ONOO − , HNO2, and N2O3. These species are capable of modifying nucleobases and inducing DNA strand breaks via nitration and deamination. It has been observed that, at physiological pH, N2O3 is formed from • NO. N2O3 directly reacts with DNA, causing nitrosation of the primary amines in DNA, which in turn lead to deamination. Specifically, N2O3 deaminates the nucleobases guanine, adenine and cytosine to xanthine, hypoxanthine and uracil, respectively. The nucleobase deamination by N2O3 causes mispairing during replication leading to mutation. Moreover, the unstable xanthine can depurinate, eventually leaving an AP site, which may then be cleaved by endonucleases to form SSBs. While N2O3 shows reactivity to several nucleobases, ONOO − reacts only with guanine. Guanine can undergo oxidation or nitrosation by ONOO − to produce 8-oxo-guanine and 8-nitro-guanine, respectively. Interestingly, 8-oxo-guanine is more susceptible to oxidation by ONOO − than guanine itself. Base modification by ONOO − also leaves an AP site which can lead to the formation of an SSB. ONOO − concentrations as low as 2 μM have been shown to cause strand breaks [116]. ONOO − also directly attacks the sugar phosphate backbone of the DNA by abstracting an H atom from the deoxyribose, which then opens the deoxyribose sugar generating strand breaks. This section describes the kinetics of the reactions with nucleobases and DNA, and summarizes well-established reactions and products that result from DNA residue modifications upon interactions with ROS/RNS that are produced either in APPs or in biological systems (e.g., culture medium, cells) treated by APPs. Reactivity of ROS towards Nucleobases The kinetics of the ROS reactions with nucleobases were determined using pulse radiolysis and laser flash techniques [93,[117][118][119][120]. The oxidized products of nucleobases and DNA have been analyzed using many analytical techniques, including capillary electrophoresis (CE), thin-layer chromatography (TLC), liquid chromatography (LC), LC-mass spectrometry (LC-MS), gas chromatography-mass spectrometry (GC-MS), and immune-based detection. Descriptions and advances made in these techniques can be found elsewhere [121]. Cadet et al. [122] reported that a superoxide radical does not oxidize DNA. Among the nucleobases, guanine is oxidized most easily, but it has no reactivity with O2 •− [123]. However, the guanine radical, observed in several oxidative systems, can be oxidized by O2 •− , yielding derivatives of guanine 5-hydroperoxides, imidazolone, and oxazolone as the oxidized products [123]. A study performed by Lafleur et al. [124] showed that oxidation of guanine only occurs in the reaction of 1 O2 with DNA. This selective oxidation was observed to yield derivatives of 8-oxo-7,8-dihydroguanine, guanidinohydantoin, dehydroguanidinohydantoin, and spiroiminodihydantoin [92,125]. The rate constants for the reactions of nucleobases and DNA with • OH are given in Table 4. The diffusion-controlled rate constants represent the electrophilic nature of the • OH radicals. Thus, • OH radicals may damage DNA, and they can attack different components of DNA indiscriminately. The • OH radicals react mainly with heterocyclic bases, resulting in heterocyclic-derived radicals that are irreversibly transformed. 
The products of the oxidation of thymine, cytosine, and guanine by • OH radicals and one-electron oxidants are presented in Figure 2 [117], which also demonstrates the basic similarities and differences between • OH and a one-electron oxidant.

DNA Strand Breaks Induced by ROS/RNS

Strand breaks can occur either directly by oxidation of the deoxyribose sugar by ROS/RNS (sugar damage) or indirectly by enzymatic cleavage of the phosphodiester backbone during repair of the oxidized bases via base excision repair (BER) or nucleotide excision repair (NER) processes (repair processes detailed in Section 4.1.3). In general, base modifications induced by ROS/RNS do not produce altered sugars or strand breaks unless the altered nucleobase labilizes the N-glycosidic bond to form an AP site, which is then removed by β-elimination. Damage to the sugar moiety typically occurs due to hydrogen abstraction from the deoxyribose. H atom abstraction from the C4' position of deoxyribose generates a deoxyribose radical [100], which in turn reacts further, causing the release of intact nucleobases, alteration of other deoxyribose moieties, and eventually strand breaks in the DNA. While some of the altered deoxyribose is released from the DNA backbone, some remains in the backbone, forming "alkali-labile" sites. Some of the typical products of • OH radical interaction with deoxyribose in DNA identified using the GC/MS technique are shown in Figure 6a. The • OH-induced sugar products include 2,5-dideoxypentos-4-ulose, 2,3-dideoxypentos-4-ulose, 2-deoxypentos-4-ulose, 2-deoxytetrodialdose, 2-deoxypentonic acid and erythrose. The SSBs induced by ROS/RNS have blocked termini such as 3'-phosphoglycolate, 3'-phosphate, 5'-OH and 5'-deoxyribosephosphate, as shown in Figure 6b [131].

Figure 6: (a) Typical products of • OH radical interaction with deoxyribose in DNA (adapted from [102] with permission from Elsevier, Inc., 1991); and (b) ROS-induced SSBs containing blocked termini such as 3′-phosphoglycolate, 3′-phosphate, 5′-OH and 5′-deoxyribosephosphate (adapted from [131], 2014).

During the repair of nucleobases altered by ROS/RNS via the BER and NER processes, the excision of two altered nucleobases located close to each other on opposite strands can cause a double strand break (DSB) in the DNA [43,132,133]. Moreover, SSBs generated by ROS/RNS can also be converted into DSBs during normal replication of the DNA [134][135][136]. In addition to attacking DNA directly, ROS/RNS can also damage DNA indirectly through reaction products generated via their interaction with other biomolecules such as lipids and proteins [91,103,137]. The end-products of lipid oxidation by reactive species, such as malondialdehyde, can bind to DNA to induce mutations. Moreover, ROS/RNS may also directly damage DNA repair enzymes and polymerases, thus slowing the repair processes or preventing replication altogether [91,103,137]. Another factor affecting the stability of DNA structure is pH [138]. A pH of less than 4 (mildly acidic) results in the hydrolysis of the N-glycosidic bond, thus separating nucleobases from the deoxyribose backbone. A pH of less than 1 (very acidic) leads to hydrolysis of both the N-glycosidic bond and the phosphodiester bond, separating nucleobases, deoxyribose and phosphates [138]. In comparison, a pH of more than 11.3 (basic) alters the polarity of hydrogen-bonded groups and causes the separation of the two complementary strands, leading to DNA denaturation [138]. In summary, nucleobases are susceptible to damage by ROS and RNS.
Modifications of the nucleobases alter the specificity of their hydrogen bonding. As a consequence, nucleobase oxidation and deamination products, if left unrepaired, can cause base mispairing (G→T transversion and G:C→A:T transitions) during replication, thereby causing mutations. In mammalian cells, a complex signaling pathway called "DNA Damage Response" (DDR) is activated in response to DNA damage and ultimately decides the fate of a cell-cell cycle arrest and DNA repair, cell death, or mutation (detailed in Section 4.1). While DNA repair systems exist in biological systems for the successful removal of modified bases, failure to repair these irregular bases can have serious biological consequences. DNA glycosylases are a family of enzymes that initiate repair processes by hydrolyzing the N-glycosidic bond and thereby isolating the modified base from the deoxyribose moiety of the DNA. DNA glycosylases are involved in the repair of both oxidized and deaminated bases. Removal of the modified base creates an AP site, which is then processed by AP endonucleases that cleave the phosphodiester bond at the AP site and create a nick in the strand. Typically, DNA polymerase β then adds a single nucleotide, and DNA ligase seals the nick. However, failure to do so will leave a break in the strand, thus creating SSBs and DSBs. APP Interactions with Isolated DNA A comprehensive understanding of the physical and chemical processes governing DNA damage under various APP conditions is crucial for the development of biomedical applications using plasma. The control of DNA damage initiated by plasma treatment can be beneficial in some applications (e.g., cancer therapy); however, for other applications (e.g., wound healing), it is necessary to avoid DNA damage. Therefore, in order to elucidate plasma-mediated DNA-alteration, as well as DNA protection mechanisms against plasma, it is necessary to investigate the effects of APP on DNA that is isolated or surrounded by compounds that can be found in the vicinity of the DNA in a cell. This section primarily describes the experimental efforts of a number of research groups involved in investigating plasma exposure conditions that govern DNA strand break formation. These groups have also made an attempt to evaluate the physical and chemical factors in plasma that are responsible for alterations in DNA. Agarose gel electrophoresis has been used for the assessment of damage in different types of plasmid DNA treated by APPs (e.g., pBR322 [45,47,53,54], pUC18 [12,57,58], pAHC25 [55], pCDNA3.1 [52], and hrGFP-II-I [29]). This technique can be used to separate DNA fragments with respect to their different lengths or the topological conformations that result from strand break formation [139]. A typical agarose gel image taken by a UV imager is presented in Figure 7. The fastest, middle, and slowest bands represent the supercoiled conformer (indicating undamaged plasmid DNA), the linearized conformer that forms due to a single event of DSB formation, and the open circular conformer that results from SSBs, respectively. The fluorescent intensity of these bands represents the amount of the corresponding conformers in DNA samples, which were treated under specific conditions and then stained with SYBR Green or ethidium bromide dyes. Molecular combing, which is used for single molecule observations, measures the length of individual linear DNA molecules (e.g., λDNA [59][60][61]). 
In this technique, fluorescently stained DNA molecules from solution are adsorbed and combed on a glass coverslip. The coverslip is then dried and the sample is observed under a fluorescence microscope. The DNA length measured shows significant changes after plasma exposure by comparison to non-irradiated samples, as shown in Figure 8. The rate of strand breakage can be determined using a simple mathematical model from the measurement of relative changes in the length of the DNA as a function of plasma exposure [61]. Some of the other techniques used include polymerase chain reaction (PCR) [55], Fourier transform infrared (FTIR) spectroscopy [58], Raman spectroscopy [57], matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF) [50], and high-performance liquid chromatography-tandem mass spectrometry (HPLC-EIS-MS/MS) method [62]. Most of these methods are used for the detection of DNA constituents, such as oligonucleotides (single- [57,58] and double-stranded [57]) and 2-deoxyguanosine [62]. A number of research groups have focused on the formation of strand breaks in plasmid DNA that are caused by plasma exposure and lead to alterations in topology. In order to eliminate contributions from the medium in which DNA is placed during plasma treatment, plasmid DNA was dried and exposed directly to APPs [49,53,57,58]. Ptasinska et al. [53] and Kim et al. [49] observed rapid degradation of supercoiled DNA within the first few seconds of plasma treatment for APPs ignited both in inert gas (i.e., He) [53] and in the He/O2 mixture [49]. The increase in DNA damage reached ~80% after 10 min of He APP irradiation. This 10-min APP treatment yielded 70% production of SSBs and 10% production of DSBs. A more dramatic damaging effect on dry plasmid DNA was observed when the APP was used with an oxygen admixture [49]. Exposure for longer than 20 s resulted in complete DNA degradation due to the production of multiple fragments. Most of the studies on strand break formations in DNA were performed in an aqueous DNA solution [12,29,47,48,52,54,55,61], and all of the studies showed that water failed to protect the plasmid from APP species, even under different experimental conditions. Lackmann et al. [57,58] performed a series of experiments in which different plasma components (i.e., vacuum UV (VUV) and reactive particle components) were separated. These components were used for DNA plasmid treatment in a He environment. The authors detected SSB and DSB formations in plasmid DNA exposed to particle components of the APP, while SSB and dimers formed due to the VUV component [58]. Moreover, they transformed plasma-treated DNA into E. coli cells that resulted in reduction of transformation efficiencies by comparison to untreated DNA, most likely due to mutagenic effects [57]. As for results with dry DNA, Yan et al. [55] reported that the abundance of the supercoiled form of plasmid DNA in aqueous solution decreased, while that of the open circular and linearized forms of plasmid DNA increased with increased treatment times. The authors observed approximately 90%, 40% and no supercoiled DNA conformers at plasma treatment times of 1, 2, and 4 min, respectively. Further increases in plasma exposure exhibited a gradual degradation of linear DNA and formation of smaller DNA fragments, which were detected as smeared bands on the agarose gel image [50,55]. Leduc et al. 
[29] observed that, under their experimental conditions, APP exposure for 30 s was sufficient to degrade the plasmid completely. In the studies mentioned above, the distance between the plasma source and the DNA sample was fixed; however, it is known that the distribution of reactive species varies depending on the location within and around the APP jets [140]. Bahnev et al. [47] measured the radial and axial lengths of the visible zone of the plasma jet to be 0.4 and 5.5 cm, respectively, whereas plasmid DNA damage was detected at distances of 2 cm radially and 25 cm axially from the source. The highest damage to DNA detected (~60%) was at the tip of the plasma jet, followed by a gradual decrease in the axial direction. In contrast, using another APP source [12], the level of damage was shown to remain constant (90%-80%) along the entire length of the visible zone of the jet, after which it dropped dramatically outside the zone. These discrepancies can be explained by different experimental parameters of the APP sources used, such as the input power, DC pulses vs. AC pulses, and so on. For example, the contribution to plasmid DNA damage was studied by varying both the distance from the plasma source and the exposure time for two different electrical parameter settings with respect to the power of the plasma source [12]. The trends in DNA damage observed for both spatial and temporal factors were comparable, but showed a difference in the relative yield of damage. Moreover, the formation of DSBs was observed only in the higher power plasma source condition. Li et al. [50] performed similar studies in which the genetic effects of the plasma jet became more significant with an increase in the source power, with other parameters held constant. In the lower power range (10-20 W), the authors primarily observed the formation of SSBs, while above 60 W there was a significant yield of DSBs. A further increase in power led to the total degradation of plasmid DNA. In addition, varying other parameters of the plasma source (e.g., gas flow rate affects fluxes of chemically active species and induces evaporation of an irradiated sample, thus affecting the volume and concentration of a DNA solution) can influence the degree of DNA damage [50]. In order to study APP effects on the formation of strand breaks in DNA under more realistic conditions, plasmid DNA was resuspended in the following buffers: PBS (phosphate buffered saline, which contains sodium chloride and sodium phosphate) [29,45,51,52], and TE (tris-EDTA) [52,56]. Two water and PBS comparison studies of plasmid DNA degradation reported contradictory results [29,52], which may be due to different buffer concentrations. Leduc et al. [29] suggested that the buffer is partially responsible for plasmid DNA protection, because PBS reduced plasmid DNA damage compared to DNA in aqueous solution. However, an experiment performed by O'Connell et al. [52] resulted in a yield of DNA damage that was quantitatively similar but showed a different rate for strand break formation in plasma-irradiated DNA in aqueous and PBS solutions. The time scale for total degradation of supercoiled DNA was approximately one order of magnitude longer in PBS than in water. The rate of SSB and DSB formation for DNA in PBS is presented in Figure 9. As seen for dry plasmid DNA and for plasmid DNA resuspended in aqueous solution, Alkawareek et al. [45] reported rapid damage upon plasma exposure, with complete loss of the supercoiled DNA conformation after 90 s. 
The formation of DSBs occurred as early as 10 s and reached ~35% at 60 s. While PBS is a moderate radical scavenger, TE is known to be a strong radical scavenger, in particular of OH radicals [141]. Therefore, DNA in TE buffer can be used to evaluate damage to DNA induced by plasma radicals and/or can mimic the radical scavenging environment found in cells in order to protect DNA from damage. As reported by O'Connell et al. [52], DNA damage upon plasma exposure is reduced significantly compared to damage in aqueous or PBS solutions. In their investigations, there was clear evidence of DNA damage in water, whereas there was only minor formation of strand breaks in TE solution. These studies definitely indicate the importance of radicals in DNA damage [52]. However, in an experiment performed by Kim et al. [56], an enhancement of strand break yields for DNA in TE buffer was observed by adding oxygen to the flow of the inert gas to increase oxygen reactive species. To approach realistic conditions even more closely, a study of the influence of amino acids on DNA strand break formation was performed by Stypczynska et al. [54]. Plasma irradiations were conducted for different molar ratios and for two different amino acids, glycine and arginine, and the authors observed a decrease in the strand break yields for both amino acids. In order to quench the occurrence of DSBs, the addition of a small amount of amino acid was sufficient (e.g., an amino acid to nucleotide ratio of 0.5:1), while the yield of SSBs remained the same up to an amino acid to nucleotide ratio of approximately 4:1. The authors concluded that the changes in the yield of strand breaks due to the presence of amino acids were determined not only by the physical shielding of DNA, but also by the interactions of radicals formed from amino acids upon plasma irradiation. These two competing processes, protection and damage due to plasma-induced radicals, can occur in the cell and depend on the type of compounds that surround the DNA. It is worth noting that other bio-macromolecules (e.g., proteinase K) were also altered during plasma irradiation; however, the rate of inactivation was significantly lower than the damage rate of plasmid DNA under the same experimental conditions [45]. This was explained by the fact that, in order to inactivate an enzyme, many different physicochemical events must be accumulated, whereas plasmid DNA can be damaged by just a single DSB event [45]. The authors concluded that DNA might be a more sensitive cellular target than some enzymes. Leduc et al. [29] treated plasmid DNA in a complex culture medium that consisted of a carbonate buffer with salts, amino acids, a phenol indicator and vitamins required for cell growth. The results were compared to those for DNA in aqueous and PBS solutions irradiated under the same plasma experimental conditions. They observed that even after maximum operational plasma exposure, the DNA in the medium remained unaffected. The authors suggested that some components in the medium were able to protect the plasmid DNA from plasma degradation, but the effect of the composition of the medium on plasma degradation was not explained. The same group obtained similar results using another plasma source to assess the possible effects of direct and indirect plasma treatment on isolated plasmid DNA in these three different environments [46]. 
DNA was unaffected when plasma treatment was carried out in the culture medium, whereas the plasmid was destroyed completely after 30 and 60 s of plasma treatment in aqueous and PBS solutions, respectively. In comparing results from the two different plasma sources, the authors reported that direct plasma treatment is more severe than indirect treatment, which confirmed previous studies [142]. Significant changes in the length of isolated linear DNA molecules were also recorded [59][60][61]. Studies of plasma-irradiated DNA in an aqueous solution resulted in a number of fragmented, short DNA molecules, which indicates that the relative DNA length decreased exponentially with increased exposure time [60], as shown in Figure 10, and as observed for strand break formation detected by the gel electrophoresis technique [12,29,47,48,52,54,55,61]. Moreover, the DNA DSB cutting rate for DNA fragmentation increased proportionally to the discharge power [60]. In an experiment by Antoniu et al. [59], DNA in solutions with different pH was irradiated by APP, and DNA fragmentation was correlated to cell viability under the same plasma treatment conditions. In the case of citric acid (pH 4), DNA samples were exposed to plasma for up to 2 min, while PBS (physiological pH) required longer treatment times, most likely because of the nontoxicity of PBS and its ability to maintain the pH and prevent the denaturing of cellular DNA [59]. The reduction in the average relative length of DNA, in both citric acid and PBS buffers, was significant after exposure to APP. The molecule length decreased by ~40% after 2 min, yielding a DNA DSB cutting rate of 0.17/min and 0.21/min in PBS and citric acid buffers, respectively. After 20 min of APP treatment, the average relative length reached only 20% of the normalized length value of DNA in PBS [59]. The authors reported that DNA experienced an average of 2.5 DSBs/molecule and the E. coli decontamination value was 14.5 min when treated in the PBS buffer, whereas 0.29 DSBs/molecule and a decontamination value of 1.4 min were obtained when treated in the citric acid buffer [59]. These results suggest that the citric acid medium is approximately ten times safer for DNA and more effective for sterilization [59]. Further, Kurita et al. [61] evaluated the protective effect on DNA of antioxidant agents such as ascorbic acid, glucose, and sodium azide. The relative DNA length decreased gradually with increased exposure time for all three agents [61]. However, in the cases of ascorbic acid and glucose, this decrease was suppressed with increasing concentrations of the antioxidant reagents. The authors reported that even several tens of micromoles of these two agents were sufficient to prevent a length reduction in half of the DNA molecules. Their experiment also showed that glucose exhibited a higher protection potential than did ascorbic acid. For sodium azide, which is a specific scavenger of 1 O2, no significant protective effect was detected. In addition, the authors reported that the pH of water remained above 6 following APP treatment; hence, the influence of pH on DNA damage can be considered negligible [61]. It is interesting to note that even when protection of a plasmid DNA against plasma treatment was observed, DNA damage was detected in more complex environments, (i.e., in the cell). This point will be discussed further in the next section. 
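To connect the cutting rates quoted above with the reported per-molecule break counts and length reductions, one simple back-of-the-envelope model (a sketch only, not necessarily the model actually fitted in [59] or [61]) assumes that DSBs accumulate at a constant rate $k$ per molecule and that cuts fall uniformly along the molecule:

\[
N(t) = k\,t, \qquad \frac{\langle L(t)\rangle}{L_0} \approx \frac{1}{1 + k\,t} \approx e^{-k\,t} \quad (k\,t \ll 1),
\]

where $N(t)$ is the expected number of DSBs per molecule after exposure time $t$ and $L_0$ is the initial molecule length. With the PBS value $k \approx 0.17$ min$^{-1}$ quoted above, this gives $N \approx 0.17 \times 14.5 \approx 2.5$ DSBs per molecule at the reported decontamination time, and $\langle L\rangle/L_0 \approx 1/(1 + 0.17 \times 20) \approx 0.23$ after 20 min, both close to the values reported for [59]; the small-$k\,t$ limit also recovers the near-exponential length decay reported in [60].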
Other techniques (e.g., MALDI-TOF, HPLC) have shown strong plasma-induced fragmentation of DNA analogues, such as oligonucleotides [50], or the production of oxidized nucleosides from 2-deoxyguanosine [62]. In a comparison of mass spectra of treated vs. untreated oligonucleotide samples, Li et al. [50] found additional evidence for the formation of small fragments induced by APP. Sousa et al. [62] reported that oxidized nucleoside production increased nearly linearly with exposure to 1 O2 formed by atmospheric pressure microdischarges. The authors showed that 1 O2 induced the formation of hydroxy-8-oxo-4,8-dihydro-2-deoxyguanosine followed by nucleoside oxidation. Further, other oxidized nucleoside products formed by secondary decomposition of transient oxidized species were observed [62]. They also observed a decrease in pH, from 6.8 and 5.5 before treatment (for two different sources of water) to a pH of 4 after only 2 min of APP treatment, while no pH change was observed in a buffered aqueous solution. Hence, they concluded that the pH of the treatment solution influences the type and amount of APP-induced DNA damage [63]. Lackmann et al. studied oligonucleotides consisting of 18 nucleobases of T (dT18) [58], C (dC18) and G (dG18) [57]. A comparison of FTIR spectra for single-stranded dT18 before and after treatment with the VUV component showed loss of C=C bonds and formation of C-C bonds, indicating thymine dimer formation [58]. Raman spectra of single-stranded dG18 treated by APP indicated breakage or modification of DNA strands and nucleobase alteration, while spectra of double-stranded dG18:dC18 showed only minor changes [57]. These results show that single-stranded DNA has a higher sensitivity to APP treatment than does double-stranded DNA [57]. By using the PCR technique for amplification of specific segments of DNA, Yan et al. [55] conducted experiments with three types of genes that were exposed to APP treatment. The authors concluded that, under proper conditions, APP exposure does not affect the genes of plasmid DNA.

Evaluation of Effects of APP Components on Strand Break Formation

As presented above, the effects of APP on isolated DNA in different environments have been studied extensively; however, a question remains: What is the mechanism of strand break formation? In order to approach this issue, many groups have explored which plasma components are the most efficient in producing DNA damage. A recent review summarized results from computational simulations of plasma-biomolecule and plasma-tissue interactions in which two types of operative factors, chemical and physical, were considered [143]. Chemical factors, which lead to chemical reactions, include radicals, ions, and neutral molecules, while physical factors include heat, electric fields, UV radiation and surface charging. Here, our primary focus is on the findings that deal with these two operative factors from an experimental point of view. There is a strong consensus among many research groups that the factor most likely to cause strand breaks in DNA is the chemically active species. The technique most commonly used to determine reactive species in APP is optical emission spectroscopy. Two parameters influence the gas composition in APP: the input power and the type of discharge gas [59].
Most of the studies of APP effects on DNA were performed using a pure inert gas (e.g., He [12,47,50,53,54,59] or Ar [60,61]), or an inert gas with O, NO, or an air admixture (e.g., He/O2 [45,49,51,52,[55][56][57][58], Ar/air [48] and He/O2/NO [62]). The most dominant emission bands observed in optical spectra of an APP zone corresponded to excited chemically nonreactive N2, N2 + , and He species. Relatively lower-intensity emission bands were observed for chemically reactive oxygen and nitrogen species, such as O, O3, 1 O2, • OH, and • NO. All of these reactive species are known to be very destructive to biomolecules and are likely involved in synergistic processes that lead to DNA damage [52]. In order to further increase the reactivity of APP, molecular gases were added to the inert gas flow. However, it has been observed that only a limited amount of oxygen can be introduced to the inert gas flow in which APP can be sustained [56]. This drawback prevents a higher production of reactive species in the APP zone. However, despite the relatively low concentration of • OH or other ROS, the concentration of these species increased significantly due to collisions between plasma components and molecules in the surrounding air [47,48]. O'Connell et al. [52] and Niemi et al. [51] correlated the formation of DSBs with atomic oxygen density. The authors measured rates for SSB and DSB formation as a function of absolute atomic oxygen density formed in the core of the plasma bulk. However, they reported that the assumption that the atomic oxygen itself is responsible for DSB production has not been confirmed. They also stated that the density of other neutral components in the APP jet can be effective in inducing DNA damage. In contrast, the rate of SSB formation showed no evidence of any correlation with atomic oxygen density. The possibility of DNA damage by radicals was also suggested by Leduc et al. [29] from their experiment in which plasmid DNA was treated under the same conditions, but with DNA resuspended in water, PBS, and culture media. There was no DNA damage observed in the media, which contained radical scavengers, such as vitamins (present in the media, but not in the PBS). Therefore, the protective effect of the media was attributed to the presence of components that scavenge radicals and to the presence of charged plasma species, as well as to the different buffering capability of the media [29]. These findings were confirmed by Kurita et al. [61], who found that DNA fragmentation was reduced significantly due to protection from antioxidant agents during APP exposure. Sousa et al. [62] also stressed the importance of oxygen radicals on nucleoside modifications; however, other reactive species that can be byproducts of NO (i.e., NO2, NO3, N2O5, and HNO3) should not be ruled out, particularly in the formation of decomposition products. Ptasinska et al. [53] estimated that ~60% of the total damage to plasmid DNA is caused by excited and reactive species. Regarding physical operative factors, Li et al. [50] concluded that no thermal factor contributes to DNA damage, because the APP jet used in their experiment had a very low temperature. Moreover, no intense electric field was detected, therefore this was also excluded as a contributor to strand break formation. 
Similar findings were obtained from molecular dynamics simulations, which demonstrated that, although the field applied effectively created electroporation of the cell membrane, the internal structure of DNA was largely unaffected [143]. Ptasinska et al. [53] likewise concluded that 10% of the DNA damage observed was due to UV light that induced SSBs; however, no DSBs were detected. Li et al. [50] also reported a small effect of UV radiation on DNA damage. Additionally, by using electric probe measurements, they showed that the concentration of charged particles in the APP zone is relatively low and that, therefore, these particles do not contribute to DNA damage. In contrast, Ptasinska et al. [53] estimated that ~30% of the total DNA damage was due to the charged particles passing through a high-transmission metallic mesh with a corresponding applied voltage and polarity. However, using an electric probe or metallic mesh can perturb the electric field, and therefore, induce different plasma conditions than would be present without a probe or mesh. In another approach taken by Lackmann et al. [57,58], the VUV or reactive particle (primarily O3 and O) component was isolated and the resultant effects were compared to the total effect of APP. The VUV radiation induced SSBs, dimerization of DNA, and chemical modifications of nucleobases in single-stranded oligonucleotides. In contrast, the reactive particle component led to negligible changes in nucleobases, but induced both SSB and DSB formation. These approaches were tested in order to find the most effective plasma component involved in DNA damage. However, the synergistic effect of many plasma components may play the most significant role in the mechanism of DNA strand breaks. Indeed, as was reported by Lackmann et al. [57], the effects observed for DNA treated with APP containing all components indicated much more significant changes than the sum of effects from particular APP components, thus proving the synergy of plasma components. Formation of strand breaks induced by APPs has been studied extensively, but other changes to DNA, such as base modification, base release, oxidative DNA damage, and DNA-protein cross-links still need to be investigated further. Therefore, detection of these DNA alterations continues to be encouraged, because the outcome of such investigations will give a more comprehensive picture of DNA damage by APPs. APP Interactions with Cellular DNA APPs produce a variety of ROS and RNS including • OH, H2O2, 1 O2, O2 •− , • NO, ONOO − , etc., species that can also be generated by eukaryotic cells via normal cellular metabolism. During plasma treatment, cells or tissues are exposed to numerous ROS/RNS produced directly by the plasma as well as those produced through interaction of the APPs with the surrounding medium. Inadequate neutralization of these ROS/RNS by the cellular antioxidant defense system may lead to oxidative stress, which subsequently, may induce many cytoplasmic and nuclear responses, including DNA damage, cell cycle modification and apoptosis. APP treatment of dry or aqueous isolated DNA, detailed in the previous section, offered a simple approach to understanding the effects of various plasma species in inducing DNA damage and provided information primarily about the different types of damage to DNA. 
However, it is imperative to study the plasma-induced DNA modifications in the context of living cells, as some of the damaging or protective effects observed in the case of isolated DNA may either be enhanced or quenched by the complex interplay between DNA damage sensing and repair mechanisms in the cell. Eukaryotic cells have a well-developed DNA damage repair system; hence, certain types of plasma-induced lesions observed in isolated DNA might not even be visible in cellular DNA. On the other hand, if not repaired properly, certain types of DNA damage are converted to a different type of DNA lesion. For example, Vilenchik et al. [144] estimated that during each cell cycle in a eukaryotic cell, ~1% of SSBs are converted to DSBs, amounting to ~50 DSBs per cell cycle. Taking this into consideration, it may be assumed that even if plasma treatment induces only SSBs, during the course of DNA damage repair, some of those may be converted to DSBs. Hence, in addition to exploring APP-induced effects on isolated DNA, plasma researchers are also investigating APP effects on cellular DNA. In this section, we briefly describe the cellular responses associated with DNA damage in eukaryotic cells, and the various repair mechanisms activated in response to oxidative DNA damage at various phases of the cell cycle. The current state of knowledge regarding APP effects on eukaryotic and prokaryotic cellular DNA is also outlined. DNA Damage Response (DDR) and Cell Cycle Checkpoints To ensure normal functioning and survival of a eukaryotic organism, it is extremely important to conserve and accurately transmit its genetic information from each cell to its daughter cells. However, cells are continuously exposed to endogenous and exogenous genotoxic agents that damage DNA, including oxidative stress. This, in turn, triggers an intricate signaling pathway known as the DNA Damage Response (DDR), which ultimately determines the fate of a cell following DNA damage. The DNA lesions are detected by several sensor proteins upstream of the DDR pathway, and this information is then relayed to a family of phosphoinositide 3-kinase related serine/threonine protein kinases (PIKKs), such as ataxia telangiectasia mutated (ATM), ATM and Rad3-related (ATR), and DNA-dependent protein kinase (DNA-PK). The PIKKs then convey these DNA damage signals to checkpoint control proteins. ATR and ATM bind to the chromosomes at the site of DNA damage and trigger the activation of two other kinases, Chk1 and Chk2. This leads to activation of cell-cycle checkpoints that arrest the cell cycle briefly to provide time for cells to appropriately repair the DNA lesions. The cell cycle of a dividing eukaryotic cell involves four different phases: Gap1 (G1), Synthesis (S), Gap2 (G2), and Mitosis (M). However, metabolically active and viable cells that stop dividing enter a resting phase called Gap0 (G0). The G1, S, G2, and M phases in cells grown in culture last approximately 12, 6, 4, and 0.5 h, respectively [145]. DNA damage checkpoints, controlled by PIKKs, can be classified into the G1/S checkpoint, which prevents replication of damaged DNA, the intra-S phase checkpoint, which monitors cell cycle progression and decreases the rate of DNA synthesis following DNA damage, and the G2/M checkpoint, which allows suspension of the cell cycle prior to chromosome segregation. Once the damage has been repaired, checkpoint-arrested cells resume progression of the cell cycle.
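To put the phase durations quoted above into perspective, the short sketch below estimates what fraction of an unsynchronized, continuously cycling population sits in each phase at a given moment, simply by weighting each phase by its duration. This is only a rough, illustrative calculation (it ignores the age-structure correction needed for an exponentially growing culture and uses the example durations cited above), but it becomes useful later when considering why populations with a large S-phase fraction appear more sensitive to plasma treatment.

# Rough estimate of the fraction of an asynchronous, cycling cell population
# found in each cell cycle phase, assuming occupancy is proportional to phase
# duration (the exponential age-structure correction is ignored, so the
# numbers are only illustrative).
phase_hours = {"G1": 12.0, "S": 6.0, "G2": 4.0, "M": 0.5}  # durations cited above [145]
cycle_length = sum(phase_hours.values())  # ~22.5 h in total
for phase, hours in phase_hours.items():
    fraction = hours / cycle_length
    print(f"{phase}: {hours:>4.1f} h -> ~{fraction:.0%} of cycling cells")
# Approximate output: G1 ~53%, S ~27%, G2 ~18%, M ~2%. A rapidly proliferating
# population with a relatively long S phase would therefore show a larger
# S-phase fraction at any instant.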
However, rapid accumulation of unrepaired DNA lesions at the checkpoint can induce permanent cell cycle arrest (senescence), or if the damage is too severe to be repaired, the cell may undergo programmed cell death (apoptosis). If the DNA damage is not repaired properly, it can cause errors during DNA replication, thus transmitting error-prone genetic information that lead to mutations. While ATM and DNA-PK respond mainly to DSBs caused by ionizing radiation and radiomimetic drugs, ATR is activated by a broader spectrum of DNA damage, including stalled DNA replication forks, SSBs and bulky adducts induced by UV light and oxidative stress [146,147]. However, several studies have also determined that ATM can be activated by ATR and vice versa [148][149][150][151][152]. Jazayeri et al. [148] demonstrated that ATR is activated in response to DSBs in an ATM-dependent manner in the S and G2 phases of the cell cycle, while Adams et al. [149] reported that ATR is activated following ATM activation in response to ionizing radiation-induced DNA damage in the G1 and S cell cycle phases. On the other hand, Stiff et al. [152] showed that ATM is activated in an ATR-dependent manner in response to UV radiation and stalled replication forks. Dual Function of Tumor Suppressor p53 Tumor suppressor p53, a downstream target of ATM/ATR, plays an important role in mediating cellular responses to DNA damage. Under normal conditions, p53 levels are kept low in the nucleus by ubiquitination and proteosomal degradation. Following phosphorylation by ATM/ATR on serine-15, p53 is stabilized, which leads to its accumulation in the nucleus. This activated p53 may then trigger either cell cycle arrest and DNA repair, or alternatively, induce apoptosis if the DNA damage is too severe [153]. Transient cell cycle arrest at the G1/S and G2/M checkpoints is maintained by p53 through increased expression of the cyclin-dependent kinase (CDK) inhibitor, p21. Increased levels of p21 induce cell-cycle arrest by inhibiting the activity of the cyclin-CDK complex that regulates cell cycle progression [154]. However, under stress conditions, p21 can also be induced by a p53-independent mechanism [154]. Phosphorylated p53 may also up-regulate the expression of the pro-apoptotic factors Puma, Bax and Noxa, thereby inducing apoptosis. Moreover, activated p53 may also be involved in inducing cell senescence through induction of the CDK inhibitor p16 and tumor suppressor p19. DNA Damage Repair Mechanisms in Response to Oxidative Stress ROS have been implicated in a multitude of DNA modifications, including sugar and base modifications, DNA-protein cross-linking, and SSBs and DSBs. DSBs are the most severe form of DNA damage in eukaryotic cells, as inefficient repair may cause mutations or even cell death. Depending on the extent and type of DNA lesion and the stage of the cell cycle, various DNA damage repair systems are activated in eukaryotic cells, including base excision repair (BER) for SSBs, nucleotide excision repair (NER) for bulky adducts, non-homologous end joining (NHEJ) and homologous recombination (HR) for DSBs, and DNA mismatch repair (MMR) for correction of replication errors, such as base-pair mismatches and loops/bubbles arising from a series of mismatches ( Figure 11) [132]. Whenever a homologous sequence, e.g., a sister chromatid, is available as a template, such as in the G2 and S phases of the cell cycle, a DSB is repaired by HR [134][135][136]. 
However, the absence of a homologous sequence in the G1 phase and the highly condensed chromatin structure in the G2/M phase decreases HR activity, and instead, recruits NHEJ for DSB repair [134][135][136]. NHEJ is active throughout the cell cycle, but is a highly error-prone repair mechanism. For instance, NHEJ repair of DNA cross-links induced by the drug cisplatin produced DSBs [155]. MMR plays an important role in removing mismatches during replication in the S phase. Figure 11. Types of DNA damage and repair. Various types of DNA damage can occur in cells as a result of endogenous agents, such as replication stress or free radicals from oxidative metabolism, and exogenous agents, such as ionizing or UV radiation and chemotherapeutics. These agents can cause SSBs or DSBs in the DNA, base modifications, helix-distorting bulky lesions or cross-links of DNA strands that are repaired by biochemically distinct DNA repair pathways. Adapted from [132] with permission from Macmillan Publishers, Ltd., 2009. Non-bulky base damages resulting from oxidation are removed primarily by BER [43,133]. One example of base damage that is widely studied is the oxidation of guanine to generate 8-oxo-guanine (8-oxo-G), which can cause mutations if unrepaired. This base damage is removed by short-patch BER via the action of a DNA glycosylase, 8-oxoguanine glycosylase (OGG1), which cleaves the N-glycosidic bond between the sugar-phosphate backbone and 8-oxo-G. However, this leaves an abasic/apurinic site (AP), which is still considered DNA damage, and is eventually processed by AP endonuclease that cleaves the phosphodiester bond at the AP site. DNA polymerase β then adds a single nucleotide (in this case guanine) to the AP site and DNA ligase to seal the nick [133]. If left unrepaired, 8-oxo-G can cause a mismatch in the nucleotide sequence during replication by base pairing with thymine rather than cytosine, resulting in T-A base pairing instead of G-C. While 8-oxo-G is regarded as the most common product of non-bulky oxidative damage to purine bases, thymine glycol is the most frequent product of damage to pyrimidine bases [156]. BER also repairs ROS-induced SSB that has a blocking residue at the 3ʹ terminal of the cleaved site [43,133]. This type of SSB is usually produced by the action of ROS on the sugar residues producing 3'-phosphoglycolate, 3'-phosphate or 3'-phosphoglycoaldehyde [157]. ROS have also been implicated in the formation of bulky adducts following direct reaction with DNA [158]. When a purine base forms a covalent bond with the 5'-carbon of the deoxyribose sugar of the same nucleoside and the closest pyrimidine base, these interactions produce two types of bulky adducts; purine cyclonucleosides and base-base intrastrand cross-links, respectively [158]. These lesions are mostly repaired by the NER pathway [159]. The damage repair begins with the unwinding of the DNA helix by XPB and XPD helicases. This is followed by dual excision by the endonucleases XPG and ERCC1/XPF of only one DNA strand at the 3' and 5' ends of the region containing the lesion, which removes the damaged nucleotides [160]. Using the complementary DNA strand as a template, the resulting gap is filled with new nucleotides by DNA polymerases δ or ε and associated replication factors. Finally, DNA ligase seals the nick in the new strand and thus completes the repair process [160]. 
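As a compact recap of the lesion-to-pathway assignments described above (and summarized in Figure 11), the illustrative lookup table below encodes them as a small Python dictionary; the categories and assignments are taken from the text, while the helper function is only a hypothetical convenience for querying them, not part of any published analysis.

# Illustrative lookup table of the repair pathway assignments described above:
# BER for SSBs and non-bulky oxidative base damage, NER for bulky adducts,
# HR or NHEJ for DSBs depending on cell cycle phase, and MMR for replication
# mismatches. This is a summary aid, not a complete model of DNA repair.
REPAIR_PATHWAYS = {
    "SSB / non-bulky base oxidation (e.g., 8-oxo-G, thymine glycol)": "BER",
    "bulky adduct (e.g., purine cyclonucleoside, intrastrand cross-link)": "NER",
    "DSB with sister chromatid available (S/G2)": "HR",
    "DSB without homologous template (G1, condensed G2/M chromatin)": "NHEJ (error-prone)",
    "replication mismatch or insertion/deletion loop": "MMR",
}

def repair_pathway(lesion: str) -> str:
    """Return the pathway assigned to a lesion category, if listed above."""
    return REPAIR_PATHWAYS.get(lesion, "unlisted lesion type")

for lesion, pathway in REPAIR_PATHWAYS.items():
    print(f"{pathway:<18} <- {lesion}")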
Lesions such as cyclobutane pyrimidine dimers (CPDs) and pyrimidine-6,4-pyrimidone photoproducts ((6-4) photoproducts) induced in DNA following exposure to UV radiation and certain cytotoxic chemicals are also repaired by NER [161]. Phosphorylated H2AX, a well-known DNA damage marker, was employed by several plasma groups to detect DNA damage in eukaryotic cells following APP treatment. The phosphorylation of H2AX, a variant of the H2A family of histone protein, on the serine 139 residue referred to as γ-H2AX, is one of the earliest events that occurs in response to DNA damage [164]. Once phosphorylated, H2AX acts as a docking site for multiple DDR proteins that accumulate at the site of DNA damage and result in the formation of a nuclear foci that can be detected by several techniques, such as immunofluorescence microscopy, immunoblotting and flow cytometry. Additionally, flow cytometry is an excellent technique to study the changes in γ-H2AX intensity in relation to the distribution of cells in the various phases of the cell cycle. Interestingly, in addition to genotoxic agents, DNA fragmentation during apoptosis has also been shown to generate a large number of SSBs and DSBs that also result in extensive H2AX phosphorylation [146,[165][166][167]. Hence, careful measurement and analysis with respect to morphology and kinetics of γ-H2AX should be conducted by plasma researchers to distinguish between γ-H2AX induced by direct DNA damage and that associated with apoptosis (which may also be induced by damage to other cellular components, such as the cell membrane), and also when making conclusions about the type of DNA lesion (SSB, DSB, bulky adducts, thymine dimer, etc.) based only on γ-H2AX staining [168]. Depending on the type of plasma source, dosage and cell type, plasma treatment has been shown to elicit multiple responses to the DNA damage induced, ranging from cell cycle arrest to DNA repair or apoptosis. An earlier study by Kim et al. [69] demonstrated that a surface type APP in air induced apoptosis in a dose-dependent manner in B16F10 melanoma cancer cells in vitro. At higher doses, they reported an increase in the DNA damage marker γ-H2AX, p53 tumor suppressor gene, and caspase-3, a downstream apoptosis effector, 3 h after plasma treatment. This was accompanied by an accumulation of cells in the sub-G1 phase of the cell cycle 24 h after plasma treatment, thus indicating DNA damage leading to apoptosis. Besides damage to DNA, this surface type APP also caused damage to the mitochondrial membrane and induced cytochrome C release. While not shown experimentally, they ascribed APP-induced DNA damage to the high concentration of O3 produced by their APP. While Kim et al. [69] attributed melanoma cell apoptosis to DNA damage, Leduc et al. [46] concluded that the DNA damage observed in their study might not be responsible for the cancer cell apoptosis observed. Leduc et al. [46] compared the effects of reactive species produced by a direct APP and an indirect APP, both ignited in He gas, on human adenocarcinoma HeLa cells in vitro. Immediately after plasma treatment, an increase in intracellular reactive species was observed in direct APP treated cells, as measured by an increase in the fluorescence intensity of the general ROS detection dye 2,7-dichlorodihydro-fluorescein diacetate (carboxy-H2DCFDA), while no increase was observed in indirect APP-treated cells. However, they attributed the lack of a fluorescence signal in indirect APP-treated cells to cell loss due to detachment. 
While DNA damage increased gradually up to 24 h post-treatment in both direct and indirect APP-treated cells, interestingly, caspase-3 increased only in the direct case. Apoptosis in HeLa cells was claimed to be induced by direct APP via oxidative stress and not by DNA damage, as no apoptosis was observed in indirect APP-treated cells, despite the fact that indirect APP also induced DNA damage. Several research groups have designed DNA studies to explore the spatial extent [13,72] and penetration depths [73] of plasma effects on cellular systems. Han et al. [13,66] conducted a spatial distribution study to investigate the extent of DNA damage induced in SCC-25 oral cancer cells by an APP ignited in N2 gas. This type of study provides valuable information on the target area achieved by plasma treatment and hence, has significant clinical relevance with respect to cancer therapy. Interestingly, 3D mapping of the coverslips with cancer cells treated by plasma provided detailed information on the effective damage area and damage levels with respect to the plasma jet dimensions [66]. In general, for a relatively small plasma jet tip diameter of ~1 mm, a much larger effective area was observed even with 10 s of plasma treatment. A longer treatment time resulted in a wider effective area of DNA damage, as indicated by γ-H2AX staining; however, the number of cells with DNA damage decreased farther from the treatment center. Because the tip diameter is comparatively smaller than the effective damage area, they attributed the damaging effects to secondary interactions due to diffusion of reactive species and electrons produced by the N2 APP, which triggered complex chemical reactions that induced DNA damage in cells. In another study, Morales-Ramirez et al. [72] looked at the effect of axial distance from the source on DNA damage induced in mouse leukocytes embedded in agarose using a radio-frequency APP generated by a He plasma needle. Employing a single-cell gel electrophoresis assay, also known as a comet assay, they showed exposure time-dependent DNA damage at a treatment distance of 0.5 cm, beginning with slight damage and proceeding to complete DNA fragmentation. However, complete fragmentation of the DNA close to the needle (0.1 cm) was observed for all treatment times. They also indicated that plasma-induced DNA damage was caused primarily by oxidative radicals rather than by UV light. Plewa et al. [73] recently investigated plasma penetrative effects using multicellular tumor spheroids (MCTS) that mimic a microtumor in terms of 3D organization, and cell-cell and cell-environment interactions. They showed that an APP ignited in He gas dose-dependently inhibited the growth of colon carcinoma HCT116 MCTS (400 µm in diameter) and reduced the expression of the proliferation marker Ki67 [73]. They correlated these observations with a dose-dependent increase in γ-H2AX staining 4 h after plasma exposure that indicated DNA damage [73]. Interestingly, an ROS scavenger, N-acetyl cysteine, abrogated plasma-induced growth inhibition and DNA damage, and increased Ki67 staining, thus indicating that ROS are responsible for MCTS DNA damage and growth inhibition. The addition of conditioned media to MCTS also induced DNA damage, suggesting that reactive species produced by plasma in culture media play a major role in DNA damage. APP treatment of U87MG human glioblastoma and colorectal carcinoma HCT-116 cells by Vandamme et al.
[163] induced DNA damage 1 h after treatment that resulted in cell cycle arrest in the S and G2/M phases of the cell cycle. A comparison between directly treating cells in culture medium vs. adding treated medium to cells demonstrated that plasma-generated species in culture medium were responsible for inducing DNA damage that eventually led to cell cycle arrest and the induction of apoptosis in cancer cells. They also treated U87MG-bearing mice in vivo, and observed an S phase accumulation and apoptosis of tumor cells in the entire tumor volume, indicating either penetration of plasma effects or induction of ROS production inside the tissue. While it was not confirmed experimentally in vivo, they attributed the observed plasma effects to formation of DNA strand breaks. Another APP generated by a single electrode plasma jet device also induced cell cycle arrest at the G2/M cell cycle phase, leading to apoptosis of HepG2 human hepatocellular carcinoma cells in vitro [75]. Increased expression of p53 and p21 were also observed, with a corresponding decrease at the transcriptional level of two regulatory proteins, cyclin B1 and cdc2, which normally control G2 to M cell cycle progression. Volotskova et al. [74] demonstrated that APP inhibited the cell cycle progression in mouse skin cancer cells (transformed keratinocytes) by accumulating them at the G2/M checkpoint ~24 h after plasma treatment. This observation correlated with an increase in DNA damage (γ-H2AX) and a decrease in DNA replication in the S-phase of the cell cycle. In short, they inferred that cancer cells are more sensitive to APP effects because a higher percentage of cells are in the S-phase ( Figure 12). Moreover, analysis of the kinetics of H2AX phosphorylation suggested that the observed DNA damage was not DSBs. While Vandamme et al. [163], Yan et al. [75], and Volotskova et al. [74] have observed G2/M cell cycle arrest in plasma-treated cancer cells, as mentioned above, Wende et al. [14] and Blackert et al. [8] made a similar observation in normal cells. Wende et al. [14] showed a dose-dependent reduction in cell number and DNA synthesis of human HaCaT keratinocytes treated by an Ar APP (kINPen 09). They employed a single-cell gel electrophoresis assay in alkaline and neutral modes to identify DNA SSBs and DSBs, respectively. Interestingly, SSBs were detected immediately after treatment, declined within 4 h and returned to control levels after 24 h. In comparison, UV-B irradiation also immediately induced SSBs, but sustained higher than control levels for up to 48 h. In contrast, DSBs increased slowly over time and peaked at 6-12 h, which was attributed to apoptosis-associated DNA fragmentation and may not be due to direct DNA oxidation by the plasma, which also dropped to control levels within 24 h. APP treatment also resulted in a dose-dependent accumulation of the HaCaT keratinocytes in the G2/M phase of the cell cycle in response to the DNA damage observed. They concluded that APP induced transient and reversible DNA damage (SSBs) that slowed down cell cycle progression and ultimately, reduced DNA synthesis and resulted in decreased cell proliferation. These effects were attributed to intracellular ROS levels post-plasma treatment, which depended heavily on the ROS scavenging capacity of the treatment medium. In a similar study by Blackert et al. [8], HaCaT cells treated with a direct APP also reduced cell viability, while increasing both intracellular ROS levels and the accumulation of cells in the G2/M phase. 
Alkaline single-cell gel electrophoresis (SCGE) showed a dose-dependent increase in DNA damage within 1 h, which, except at high doses, returned to control values at 24 h. These effects were diminished when the treatment medium was replaced immediately after plasma treatment, thus indicating the role of long-living reactive species produced by the interaction of plasma ROS with medium components such as amino acids, vitamins, etc., that induced DNA damage and cell-cycle arrest. In order to further elucidate the molecular mechanism associated with APP-induced DNA damage and cell-cycle arrest, additional studies were conducted to identify which pathway, ATM or ATR, triggered the DNA damage response in eukaryotic cells in response to plasma treatment [7,65,70]. These studies also attempted to identify the role of tumor suppressor p53 in determining cell fate following plasma exposure [65,71]. A recent study by Chang et al. [65] observed that a spray-type APP ignited in a He/O2 mixture induced DNA damage and apoptosis in both wild-type (SCC25) and p53-mutated (MSK QLL1, SCC1843 and SCC15) oral squamous carcinoma cells (OSCC). An increase in γ-H2AX foci in SCC25 cells was observed 24 h after plasma treatment (Figure 13a). A comet assay revealed cells containing long tails, indicating breaks in DNA (Figure 13b). However, the plasma triggered a sub-G1 cell cycle arrest only in wild-type SCC25 cells. Interestingly, they also detected increased expression of ATM, p21 and p53 in SCC25 cells, indicating activation of the ATM/p53 pathway in response to DNA damage and leading to cell cycle arrest and apoptosis of SCC-25 cancer cells. Additional investigation is required, as it was found that in addition to ATM activation, plasma also induced ATR phosphorylation. This study was supported by the findings of Kim et al. [70], who detected increased levels of phospho-p53 and γ-H2AX in N2 APP-treated ATM-complemented YZ5 cells, but not in ATM-deficient S7 cells. Furthermore, they observed increased H2AX phosphorylation in HCT15 human colon cancer cells with wild-type Chk2 compared to kinase-dead Chk2. Hence, they concluded that APP-induced DNA damage activated the ATM-Chk2 pathway and the p53 tumor suppressor protein, leading to apoptosis. In contrast, Kalghatgi et al. [7] and Lazovic et al. [77] determined that plasma-induced phosphorylation of H2AX is ATR-dependent and not ATM-dependent. In their dose-dependent study of APP treatment of MCF10A human breast epithelial cells in vitro, Kalghatgi et al. [7] observed cell proliferation at low doses and apoptosis at high doses. They attributed these dose-dependent effects to the formation of intracellular ROS. They also demonstrated that neutral plasma ROS and not UV radiation or charged particles were instrumental in phosphorylation of H2AX, likely due to the formation of organic peroxides in the culture medium. However, no bulky adducts or thymine dimers were formed. Hence, they presumed that the increase observed in γ-H2AX staining may have been due to formation of DNA SSBs or replication arrest. In addition, the same group demonstrated that APP induced lipid peroxidation in MCF10A cells; however, they concluded that plasma-induced DNA damage is not mediated via plasma-induced lipid peroxidation [67]. They also demonstrated that the DNA damage observed was not mediated by plasma-produced ozone [68]. Lazovic et al. [77] showed that a capacitively-coupled APP ignited in He directly caused SSBs and bulky lesions in fibroblasts, but also induced DSBs as a consequence of DNA repair.
They also observed small γ-H2AX foci typical of ATR-induced H2AX phosphorylation following APP exposures. Ma et al. [71] conducted an extensive study on 17 mammalian cell lines to investigate the anti-tumorigenic effects of APP generated in He gas. At the same treatment dose, APP selectively induced DNA damage and apoptosis in cancer cells compared to normal cells and stem cells. Interestingly, for the same treatment conditions, p53-deficient (p53 −/− ) cancer cells showed hypersensitivity to plasma by comparison to p53-proficient (p53 +/+ ) cancer cells. The apoptotic effect of plasma was greater for p53-deficient cells, while artificial p53 expression in p53-deficient cells decreased sensitivity to plasma. They concluded that, in p53-proficient cells, plasma-induced DNA damage activated p53 and the downstream apoptotic factors Puma and Bax, causing a G1 cell cycle delay that eventually led to cell apoptosis. Meanwhile, in p53-deficient cells, plasma-induced DNA damage accelerated apoptosis independent of the p53 pathway and without a G1 delay. The presence of ROS scavengers, N-acetyl cysteine and sodium pyruvate, abrogated DNA damage and apoptosis, indicating that ROS generated by APP are crucial in inducing DNA damage and apoptosis. Moreover, APP also induced DNA damage and apoptosis in chemotherapeutic drug-resistant cancer cell lines. Poly(ADP-ribose)polymerase-1 (PARP-1) is a nuclear enzyme activated in response to DNA damage, primarily SSBs, to initiate DNA damage repair [169]. However, the proteolytic cleavage of 116 kDa PARP-1 by caspases-3 and -7 to 85 and 24 kDa fragments is a characteristic event of apoptosis [170,171]. Hence, measurement of PARP-1 cleavage indicates DNA damage leading to apoptosis. An N2 APP generated with a micronozzle array induced DNA damage and increased the apoptosis marker proteins caspase-3 and poly(ADP-ribose) polymerase (PARP) in human embryonic kidney 293T cells [70]. In another approach, Ar microwave plasma treatment of skin cells (NIH3T3 mouse fibroblasts and HaCaT keratinocytes) by Choi et al. [76] showed no induction of p53 and PARP cleavage, thus indicating the absence of DNA damage-induced apoptosis. However, they observed cell cycle arrest in the G2 phase and a p53-independent increase in p21, but no cell death [76]. The cell cycle arrest was abrogated upon replacement with fresh media immediately after treatment, indicating the role of plasma-produced components in the cell culture medium. A few studies have also compared DNA damage induced by plasma with that induced by UV [64], X-ray [162] and gamma [77] radiation. A study conducted by Brun et al. [64] demonstrated a decrease in microbial load that did not affect the viability of ocular cells (keratinocytes and conjunctival fibroblasts) after treatment with an APP. However, they observed a transient increase in the level of the oxidized base 8-oxodeoxyguanosine (8-OHdG) in plasma-treated keratinocytes, which returned to control levels within 24 h. Furthermore, they observed an increase in the expression of OGG1, a DNA glycosylase enzyme involved in the removal of mutagenic 8-OHdG by BER. APP treatment of human cornea ex vivo showed an increase in OGG1 mRNA and protein levels; however, no thymine dimerization was observed in the nuclei of APP-treated corneal tissue. By comparison, UV treatment of corneal tissue ex vivo induced significant formation of thymine dimers. Graham et al.
[162] investigated the response of MDA-MB-231 human breast cancer cells exposed to an APP generated directly in the growth medium. A linear dependence between the average number of DNA damage foci, detected by γ-H2AX staining, and the number of plasma pulses applied was observed based on a Poisson damage distribution curve. Correspondingly, a decrease in the viability of cells was also observed. Interestingly, they observed a similar damage pattern on the same cell line exposed to 160 keV X-ray irradiation, and deduced that 100 plasma pulses would cause similar DNA damage as 1 Gy of X-ray irradiation in MDA-MB-231 breast cancer cells. They concluded that APP-liquid interaction and radiolysis follow similar liquid chemistry, ultimately leading to their biological effects. Lazovic et al. [77] compared the effects of APP and gamma (Co 60 γ-ray) irradiation on fibroblasts by measuring DSBs via γ-H2AX staining at various times following treatment. Interestingly, maximum DSB induction was detected 30 min and 2 h after gamma irradiation and APP treatment, respectively. In the case of gamma irradiation, the number of γ-H2AX foci increased linearly with treatment dose, while for APP treatment, it increased with both treatment time and power. Comparing the number of γ-H2AX foci per cell after gamma and plasma treatment, they also obtained the effective doses of plasma irradiation comparable to gamma irradiation. As mentioned before, they demonstrated that APP-induced H2AX phosphorylation was ATR-dependent, while it was ATM-dependent in the case of gamma irradiation. In addition, they observed heavily damaged nuclei typically caused by charged particles in APP-treated samples. Because in vitro studies showed induction of apoptosis via DNA damage in both cancer and normal cells in a dose-dependent manner after APP treatments, follow-up ex vivo [15] and in vivo [9] studies were conducted to investigate those processes under more realistic conditions. Isbary et al. [15] treated human skin with two APP devices based on surface microdischarge (SMD) technology ex vivo and showed, over shorter treatment times, significantly higher, as well as significantly lower, DNA damage in plasma-treated skin compared to control skin samples. Higher DNA damage was observed with a treatment time of 120 s compared to the control. Interestingly, they also observed that a higher initial cell load provided a protective effect from DNA damage for other cells. However, the damage was not localized to the higher cell layers, thereby warranting further investigation into the penetration of plasma effects into deeper cell layers. A preliminary toxicity study conducted in vivo by Wu et al. [9] investigated the effects of a direct APP on DNA damage in intact and wounded skin of Yorkshire pigs. They observed significant accumulation of γ-H2AX only in skin exposed to more than a 5 min treatment at a power setting of 0.17 W/cm 2 , while lower treatment times showed no H2AX phosphorylation, indicating the absence of DNA damage [9]. These studies also concluded that there were dose-dependent effects for DNA damage induction by plasma treatment. In order to understand APP effects on DNA damage, repair and recovery in mammalian systems, several groups have conducted experiments on a model microbe for eukaryotic cells, Saccharomyces cerevisiae (budding yeast). A recent study by Lee et al. [78] reported the induction of DSBs in yeast by an APP ignited in air, leading to loss of cell viability in a dose-dependent manner. 
Interestingly, these effects were enhanced in rad51 mutants lacking the Rad51 protein required for the repair of DNA DSB via homologous recombination. They also observed that, compared to wild-type yeast cells, cells deficient in other HR proteins, such as Rad52 and Mec1 (yeast analog of human ATR), were also more susceptible to air plasma treatment. Because the antioxidant N-acetyl cysteine and NO scavenger c-PTIO failed to rescue the cells from cell death, they concluded that DSBs induced by plasma do not occur via ROS/RNS generation. Ryu et al. [79] observed differential inactivation of yeast treated with an Ar APP in various liquid environments (water, saline, and yeast extract-peptone-dextrose (YPD) medium). The highest inactivation of yeast cells was obtained in water and the lowest was in YPD. Agarose gel electrophoresis analysis of genomic DNA extracted from treated yeast cells showed significant DNA damage after plasma exposure in saline and water, but no damage in YPD. Besides DNA damage, plasma treatment in the presence of water and saline also induced lipid peroxidation and damage to proteins. Higher levels of • OH radicals were also detected in plasma-treated water and saline compared to YPD. These results indicate a crucial role of the liquid environment of microbes in determining the outcome following exposure to plasma. APP-induced DNA Damage in Prokaryotic Cells and Associated Response In order to cope with various types of DNA damage, bacteria possess a dedicated mechanism known as the SOS response. There are two key proteins that control the SOS response: LexA and RecA. In the absence of DNA damage, the repressor protein LexA binds to the SOS box (a ~20 base pair operator sequence found in the promoters of SOS genes, including lexA and recA), thereby switching off the SOS response, while the inducer protein RecA scans for DNA damage. In the event of DNA damage, RecA binds to single-stranded DNA at the damage site and promotes cleavage of LexA, resulting in activation of the SOS response, which leads to up-regulation of SOS genes. The first genes induced are the uvr genes involved in the NER pathway, followed by the lexA and recA genes. If the DNA damage is too severe, genes encoding the highly error-prone repair DNA polymerases polB, dinB, umuC and umuD are activated. Over the years, several groups have demonstrated rapid inactivation of gram-positive and gram-negative bacteria, including both vegetative cells and spores, by low-temperature APPs [57,58,[80][81][82][83][84][85]. Several of these studies investigating the mechanism of plasma inactivation of microbes reported DNA damage as one of the detrimental effects induced by APP. A brief summary of these studies with a particular focus on DNA damage is presented in this section. Lu et al. [86] observed that the extent of genomic DNA damage following exposure to APP depended on the type of bacteria (gram-positive/gram-negative) and treatment time. PCR amplification of DNA extracted from treated bacteria showed that a short treatment time (5 s) had no effect on DNA damage, while a 30 s exposure to APP induced significant DNA damage in L. monocytogenes, which correlated with its higher inactivation. Besides DNA damage, they also observed significant damage to the membrane in E. coli compared to L. monocytogenes as indicated by leakage of intracellular components. They concluded that the different damage patterns observed in the two bacterial strains were likely due to the difference in their membrane structure and resistance to damaging agents. Joshi et al.
[80] reported dose- and concentration-dependent inactivation of E. coli treated with a direct APP. Further investigation revealed depolarization of the bacterial cell membrane, as well as lipid peroxidation that led to loss of membrane integrity. They also measured significant levels of the DNA damage marker 8-OHdG. They concluded that the membrane damage induced by plasma propagated into the cell, causing DNA damage, and finally, E. coli cell death. On the other hand, Kvam et al. [81] observed only a minor increase in damage to DNA and protein following direct APP treatment. Hence, they concluded that DNA damage and oxidative stress were not responsible for the observed inactivation of multidrug-resistant microbes. Tseng et al. [82] reported inactivation of spores of Bacillus and Clostridium species treated with a He APP. Interestingly, the inactivation of Bacillus subtilis vegetative cells was achieved with a shorter treatment time than for Bacillus subtilis spores. However, gel electrophoresis showed no visible degradation of DNA extracted from vegetative cells and spores that had been subjected to 20 min of plasma treatment prior to extraction. On the other hand, naked DNA extracted from the vegetative cells and spores showed damage after 5 min of plasma treatment, with severe fragmentation after 20 min treatment. Hence, they concluded that plasma-induced DNA damage may not be the reason for the inactivation of vegetative cells and spores. In fact, they attributed the inactivation to spore coat damage and leakage, indicated by an increase in released dipicolinic acid (DPA). To study the differential regulation of genes in microbes in response to plasma treatment, several groups have conducted extensive transcriptome analysis using a DNA microarray [83][84][85]. Mols et al. [83] achieved 99.9% inactivation of B. cereus vegetative cells on surfaces within 5 min of treatment with an APP ignited in N2 gas. However, they observed that the nucleotide excision repair genes involved in the SOS response, such as uvrA and uvrB, were not affected following plasma treatment. In contrast, Winter et al. [85] observed the up-regulation of uvrA, uvrB and uvrC in gram-positive B. subtilis 168 cells in liquid treated with an Ar APP. Moreover, the induction of recA, lexA, dinB, yhaZ and ydgG genes in addition to uvrABC genes led them to conclude that plasma-induced DNA damage is primarily due to UV. Supporting the results of Winter et al., Sharma et al. [84] also observed up-regulation of uvrA and uvrB genes in plasma-treated gram-negative E. coli, indicating induction of DNA damage following plasma exposure. However, the lack of induction of the uvrC, uvrD, and polA genes involved in the NER pathway indicated incomplete induction of DNA damage repair. They also suggested a synergistic involvement of plasma-produced UV and reactive species in inducing DNA damage and inactivation of E. coli. Reporter gene studies conducted by Lackmann et al. [58] observed thymine dimer formation in E. coli DH5α cells treated with an APP jet. They identified VUV radiation, and not particles, emitted from the APP as responsible for the dimerization observed. Another study conducted by the same group monitored gene expression using various reporter gene fusions and observed UV-induced DNA damage (monitored using recA) in B. subtilis vegetative cells treated in liquid [57]. However, they concluded that the DNA damage observed was relatively less significant compared to protein damage and oxidative stress in inactivating B.
subtilis under their experimental conditions. Employing bacteriophages as surrogates for viruses, several groups have also investigated the potential of APPs in inactivating viruses [87,88]. Venezia et al. [87] observed that APP treatment of temperate λ bacteriophage C-17 and lytic bacteriophages for 10 min rendered them inactive. Interestingly, they observed damage to the cell wall, but no damage to the DNA. Yasuda et al. [88] observed rapid inactivation of λ phages within 20 s of APP treatment. Even though they observed increased DNA damage with increased plasma treatment, they concluded that the observed inactivation of bacteriophages is not due to DNA damage, but due to damage to coat proteins [88]. Prokaryotes respond to irreparable DNA damage differently from multicellular eukaryotes. In multicellular organisms, any DNA damage left unrepaired can cause mutations leading to uncontrolled proliferation that is detrimental to the organism. As the survival of the whole organism is more important than the survival of individual cells, the response to unrepaired DNA damage is permanent cell cycle arrest or apoptosis. However, in the case of prokaryotes, each cell is an organism whose survival is dependent on continued cell division, and therefore, continued division with unrepaired DNA damage is advantageous regardless of the risks. Interestingly, several groups have also reported mutation in microbes treated with APP. While Wang et al. [89] reported mutation in Streptomyces avermitilis spores following exposure to a He APP, Fang et al. [90] induced mutation in a filamentous cyanobacterium, Spirulina platensis, using an APP ignited in air. While these mutations were beneficial in the studies reported above, they highlight the potential mutagenic effects of APP treatment that should be investigated carefully. Conclusions Advancements and developments in plasma medicine and its successful applications require continual research in parallel with clinical trials in order to enhance our knowledge about the exact physical, chemical and biological processes operating at the molecular level. Over the last fifteen years, great effort has been made to understand the effects of APPs, which can be used to further develop plasma sources to deliver precise doses and a specific type of ROS/RNS for a variety of biomedical applications. A summary of the APP effects observed on isolated and cellular DNA is shown in Figure 14. Figure 14. Summary of APP effects on isolated and cellular DNA. Studies on isolated DNA have shown that APP induces strand breaks, dimerization and base modifications. In prokaryotic cells, APP induced thymine dimerization and oxidation of DNA bases leading to formation of 8-OHdG. Depending on the extent of damage, DNA damage repair or cell death was initiated. However, mutation in response to DNA damage was also reported in prokaryotic cells. In response to DNA damage in eukaryotic cells, ATM and/or ATR were activated, which then phosphorylated p53. This in turn activated p21 and subsequent DNA repair mechanisms. Increased levels of p21 induced cell-cycle arrest by inhibiting the activity of the cyclinB-cdc2 complex leading to G2/M cell cycle arrest. In the event of irreparable DNA damage, p53 activation also caused activation of pro-apoptotic factors, such as Puma, Bax and caspase-3, which lead to apoptosis. This review emphasized the importance of understanding the underlying mechanisms regarding plasma-induced damage to DNA.
It also revealed, through the sometimes conflicting results, the challenging nature of determining its effects. Due to the intrinsic complexity of plasma interactions with DNA, such interactions need to be investigated through different scientific approaches at various scales, ranging from small segments of DNA to DNA in a cellular environment, to enable their true scientific benefit to be understood. Their potential usage and impact warrant this further study. Acknowledgments This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences under Award Number DE-FC02-04ER15533. This is contribution number NDRL 5037 from the Notre Dame Radiation Laboratory. Author Contributions Virender K. Sharma and Krishna Priya Arjunan collected the literature and wrote the section on reactive species involved in DNA damage. Sylwia Ptasinska and Krishna Priya Arjunan collected the literature and wrote the sections on APP interactions with isolated DNA, and APP interactions with cellular DNA, respectively. All authors were involved in the correction of the manuscript.
Cancer treatment-induced NAD+ depletion in premature senescence and late cardiovascular complications Numerous studies have revealed the critical role of premature senescence induced by various cancer treatment modalities in the pathogenesis of aging-related diseases. Senescence-associated secretory phenotype (SASP) can be induced by telomere dysfunction. Telomeric DNA damage response induced by some cancer treatments can persist for months, possibly accounting for long-term sequelae of cancer treatments. Telomeric DNA damage-induced mitochondrial dysfunction and increased reactive oxygen species production are hallmarks of premature senescence. Recently, we reported that the nucleus-mitochondria positive feedback loop formed by p90 ribosomal S6 kinase (p90RSK) and phosphorylation of S496 on ERK5 (a unique member of the mitogen-activated protein kinase family that is not only a kinase but also a transcriptional co-activator) was a vital signaling event that played a crucial role in linking mitochondrial dysfunction, nuclear telomere dysfunction, persistent SASP induction, and atherosclerosis. In this review, we will discuss the role of NAD+ depletion in instigating SASP and its downstream signaling and regulatory mechanisms that lead to the premature onset of atherosclerotic cardiovascular diseases in cancer survivors. From the perspective of the Fibonacci mathematical modeling, hand grip strength (HGS) can be a physical biomarker or an indicator of aging [2,21]. A meta-analysis of 53,476 participants [22] revealed that higher HGS was associated with reduced all-cause mortality. This association appeared to be compromised in participants with an average age of 60. As such, cancer treatment-induced premature senescence can be reflected by HGS and by its association with increased incidence of CVD, all-cause mortality, and cardiovascular mortality. An impaired physical function, as evidenced by the decreased 6-minute walking distance (6MWD), was observed in long-term survivors after allogeneic hematopoietic stem cell transplantation (allo-HSCT) relative to their siblings [23]. In the peripheral blood of cancer survivors, an increased level of both serum interleukin 6 (IL-6) and CDKN2A [p16(INK4A)] was noted [24][25][26]. These observations suggest an association between allo-HSCT and premature senescence [23][24][25][26]. Diverse cancer treatment modalities are utilized in different cancer types [3][4][5]. However, survivors of different cancer treatment modalities exhibit common phenotypes with late cardiovascular complications [27][28][29][30][31]. Among cancer patients who had or had not been previously exposed to known cardiotoxic agents: (i) those who survived chemotherapy agents without acute cardiotoxicity (as defined by Children's Oncology Group guidelines) also exhibited increased cardiac dysfunction, body mass index, fasting serum non-high-density lipoprotein cholesterol, insulin, and C-reactive protein compared to non-cancer siblings; and (ii) those who survived chemotherapy agents with acute cardiotoxicity exhibited similar phenotypes to those described in (i). These data demonstrated that chemotherapy agents with acute cardiotoxicity contributed only little to the incidence of CVD and related conditions, including hypertension, dyslipidemia, and obesity [32]. Without pre-exposure to acute cardiotoxicity-inducing agents, cancer survivors also exhibited late cardiovascular complications.
These findings suggested that, by inducing premature senescence, different cancer treatment modalities can cause common cardiovascular complications long after the completion of cancer treatments [33][34][35]. WHAT IS SENESCENCE? In 1961, Hayflick and Moorhead first introduced the concept of senescence based on their observation that the proliferation of human diploid fibroblasts was irreversibly arrested after serial passage in vitro. This type of time-dependent growth or proliferation arrest was termed replicative senescence (RS) [36,37]. Further studies revealed that both internal and external stimuli, including cellular stress, reactive oxygen species (ROS), radiation, and mitochondrial dysfunction, can induce cell cycle arrest, i.e., "stress-induced premature senescence (SIPS)" [38,39]. RS and SIPS may follow different molecular mechanisms and time frames. RS is accompanied by the shortening of telomeres until a critical length at which senescence is induced, known as the Hayflick limit [40]. SIPS may or may not be associated with telomere shortening [41][42][43][44]. For example, telomerase-immortalized human foreskin fibroblast (hTERT-BJ1) cells exposed to ultraviolet B light or H 2 O 2 develop SIPS, suggesting that SIPS can be independent of telomerase activity and telomere shortening [42]. It is important to emphasize that both telomeric and non-telomeric DNA damage contribute to the induction of cellular senescence with time; both senescence and organismal aging are accompanied by increased DNA damage, as evidenced by γH2AX foci formation [45]. Nakamura et al. [45] examined the chromosomal location of senescence-associated γH2AX foci that may be found at either uncapped telomeres or non-telomeric DNA damage in human and murine cells, and found that telomeric and non-telomeric DNA damage responses (DDR) play equivalent roles in inducing senescence, which is mainly regulated by telomere length rather than species differences. It is also important to note that genomic DNA damage is repaired relatively faster (within 24 hours) than telomeric DNA damage [46,47]. Accordingly, telomeric DNA damage can persist for longer periods of time and can cause a persistent DDR for months [48]. As such, the long-lasting effect of cancer therapy-induced SIPS may be explained by this delayed and sustained telomeric DDR. DDR recognizes DNA damage to activate pathways to repair the damage. Double-strand breaks (DSB) can be repaired by (i) non-homologous end joining (NHEJ) machinery that joins two chromosomal ends with no or minimal base-pairing at the junction; and/or by (ii) 5′-to-3′ resection of the DSB ends to generate 3′-ended single-stranded DNA tails, which are then repaired by homology-dependent recombination pathways [64][65][66][67][68]. Through binding telomeric DNA at the chromosome ends, Shelterin protects telomeric DNA from being recognized as DNA damage by DDR. Loss of Shelterin protective effects and/or telomere shortening leads to telomere dysfunction. Dysfunctional telomeres are recognized by DDR and repaired by NHEJ machinery and/or homology-dependent recombination apparatus. As a result, chromosomal abnormalities are generated, cell cycle arrest is induced, p53 signaling is upregulated, and the PPARγ co-activators 1-α and -β (PGC1-α and -β) are repressed. Consequently, mitochondrial biogenesis and function are hampered, leading to the upregulation of mitochondrial ROS production.
Therefore, mitochondrial dysfunction and elevated levels of ROS can be early events in premature senescence induced by telomere dysfunction [69]. Unlike proliferative cells, post-mitotic cardiomyocytes develop senescent-like phenotypes through a mechanism independent of cell division and telomere length. As characterized by persistent telomeric DNA damage, post-mitotic cardiomyocyte senescence can be mediated by mitochondrial dysfunction, which activates p21CIP and p16INK4a, resulting in a non-canonical senescence-associated secretory phenotype (SASP) [70,71]. Studies revealed the critical role of cardiomyocyte mitochondria in cardiac function [72]. As radiation therapy induces cardiomyocyte mitochondrial dysfunction [73,74], it is reasonable to speculate that cancer treatments induce post-mitotic cardiomyocyte senescence through persistent telomeric DNA damage-mediated mitochondrial dysfunction, independent of cell division and telomere length. CANCER TREATMENT-INDUCED PREMATURE SENESCENCE IN DIFFERENT CELL TYPES Numerous in vitro and in vivo studies have shown that cancer treatments including chemotherapy and radiation therapy can induce premature senescence in different cell types [75]. For instance, cancer cell senescence was detected in clinical cancer samples of breast cancer patients after preoperative neoadjuvant chemotherapy. Cyclin-dependent kinase 4/6 (CDK4/6) small-molecule inhibitors mediated the induction of cancer cell senescence [8]. IR induces endothelial cell senescence, as evidenced by decreased NO production and thrombomodulin expression, increased adhesion molecule expression, elevated ROS production and inflammatory cytokines, and impairment of proliferative capacity as well as of the formation of capillary-like structures. Endothelial cell senescence can cause endothelial dysfunction through dysregulation of vasodilation and hemostasis, inducing oxidative stress and inflammation and inhibition of angiogenesis, which are involved in IR-mediated late effects [76]. Doxorubicin and IR induce myeloid cell senescence by triggering metabolite changes with nicotinamide adenine dinucleotide (NAD) depletion and mitochondrial stunning [77]. Low doses of doxorubicin induce senescence in human primary vascular smooth muscle cells (VSMC) through the mediation of TRF2 ubiquitination and proteasomal degradation [78]. Doxorubicin and IR induce stem cell premature senescence by mediating expression of the senescence marker p16(INK4a) in human cardiac progenitor cells, and consequently impair their regenerative capacity, leading to cardiotoxicity and heart failure in cancer survivors [79]. Cancer treatments induce SIPS not only in cardiovascular cells, but also in stem cells, which may have an important role in the long-lasting effects of cancer treatments leading to CVD. STEM CELL SENESCENCE IN CANCER SURVIVORS Stem cells are cells with self-renewal properties for an unlimited or prolonged period of time and have the potential to differentiate into other cell lineages [80]. Stem cell exhaustion and diminished activity of hematopoietic stem cells (HSCs) are hallmarks of aging [81,82]. Thus, the overall disease-free survival after bone marrow transplantation depends on the age of stem cell donors, i.e., young donors can provide recipients with longer disease-free survival [83,84]. Importantly, hematopoietic cell transplantation (HCT) causes significant stress on HSCs [6], leading to SIPS, and subsequently reduces the repopulating potency of HSCs [85,86].
HCT also elicits telomere shortening in HSCs of the recipients, irrespective of myeloablative or non-myeloablative conditioning regimens [85] .

PHENOTYPE

Senescent cells communicate with neighboring cells, such as immune and cancer cells, by secreting cytokines, chemokines, matrix metalloproteinases, etc., as well as through direct intercellular protein transfer (IPT) [87] . In particular, senescent cells secrete a cocktail of proinflammatory cytokines, chemokines, growth factors, pro-angiogenic factors, ROS, and proteases, collectively termed the senescence-associated secretory phenotype (SASP). These secreted cytokines and chemokines recruit T cells, macrophages, and natural killer cells, which help remove senescent cells. Of note, the timely clearance of senescent cells is critical in tissue homeostasis, in which immune cells play a vital role [88][89][90] . In the direct IPT process, proteins from senescent cells are transferred directly to neighboring cells, activating signaling pathways and ultimately changing the behavior of those cells [87] . Unlike apoptotic or quiescent cells, senescent cells exhibit high metabolic activity. Studies have demonstrated that this high metabolic activity directs energy toward activities related to the senescent state, including the induction of the SASP and the modulation of immune responses within the senescent microenvironment. As in cancer cells, the glycolytic state of senescent cells, for example senescent human diploid fibroblasts (HDF), was higher than that of their young counterparts, even in high-oxygen conditions [91,92] . Senescent HDF displayed increased expression of key glycolytic enzymes including hexokinase, phosphoglycerate kinase, and phosphoglycerate mutase [93][94][95] . Metabolic activity is controlled at various levels. The oxidized form of NAD (NAD+) is a vital cofactor that controls metabolic activities using its electron transfer function in redox reactions. Functioning as a co-enzyme, NAD+ regulates glycolysis, the tricarboxylic acid (TCA, Krebs) cycle, and fatty acid oxidation, in which it is reduced to NADH [96,97] . In glycolysis, NAD+ is reduced to form NADH + H+ [96] . Numerous studies have shown that cellular NAD+ levels are reduced in senescent cells, promoting premature senescence and age-related diseases [98,99] . Our recent study revealed that in myeloid cells, NAD+ is reduced after treatment with doxorubicin or IR. Specifically, we observed sustained SASP induction and an upregulation of p90RSK-mediated ERK5 S496 phosphorylation as well as downstream inflammatory signaling pathways in myeloid cells treated with doxorubicin or IR [77] . To gain insight into the underlying molecular mechanism, we discovered that doxorubicin and IR activated poly(ADP-ribose) polymerase (PARP), with subsequent NAD+ depletion. Consequently, reversible mitochondrial dysfunction occurred without cell death, even when ATP was depleted. We also noted that, although low-dose IR inhibited both oxidative phosphorylation (OXPHOS) and glycolysis without causing cellular necrosis or apoptosis, significant upregulation of mitochondrial ROS production and succinate production occurred, attesting to the metabolically active status of the SASP [77] .

MAINTAINS THE NAD+ LEVEL

NAD+ biosynthesis

There are three independent biosynthetic pathways for generating NAD+, namely, de novo biosynthesis from tryptophan, the Preiss-Handler pathway, and the NAD+ salvage pathway [100] .
The de novo (kynurenine) pathway uses tryptophan, which enters the cell via the plasma membrane transporters SLC7A5 and SLC36A4 [Figure 1]. The functional contribution of the kynurenine pathway to the production of NAD+ remains unclear, because not all of the enzymes of the kynurenine pathway are expressed in most cells other than the liver and immune cells, including macrophages. Nicotinamide (NAM) generated by tryptophan metabolism in the liver is released into the circulation and is taken up by other cells for conversion to NAD+ via the salvage pathway, as described below. The second pathway is the Preiss-Handler pathway, in which dietary nicotinic acid (NA), entering the cell via the SLC5A8 or SLC22A13 transporters, serves as the precursor and is converted to nicotinic acid mononucleotide (NAMN) by NA phosphoribosyltransferase (NAPRT). NAMN is the common intermediate produced by the kynurenine and Preiss-Handler pathways and is converted to nicotinic acid adenine dinucleotide (NAAD) by the nicotinamide mononucleotide adenylyl transferases (NMNAT1, NMNAT2, and NMNAT3). Finally, NAD+ synthetase (NADS) converts NAAD into NAD+ [Figure 1]. The third pathway is the salvage pathway, which generates NAD+ not only from extracellular nicotinamide riboside (NR) or nicotinamide mononucleotide (NMN) but also by recycling NAM to NMN, which is then converted to NAD+ by the NMNATs. In the extracellular space, the ectoenzymes CD38 and CD157 convert NAD+ to NAM, which is converted to NMN by extracellular nicotinamide phosphoribosyltransferase (eNAMPT). CD73 dephosphorylates NMN to generate nicotinamide riboside (NR), which is imported into the cell by an as yet unidentified transporter and converted back to NMN by nicotinamide riboside kinases 1 and 2 (NRK1 and NRK2). In addition, there is an NMN-specific transporter (SLC12A8) for importing NMN into the cell. Ultimately, NMNAT1-3 convert NMN to NAD+ [Figure 1].

Subcellular compartment-specific NAD+ metabolism

There are subcellular compartment-specific NAD+-consuming and NAD+-generating enzymes, and subcellular NAD+ homeostasis and levels are regulated by this compartmentalization. For example, intracellular NAMPT (iNAMPT) and NMNAT2 localize in the cytoplasm and generate NAD+ there. The NMNAT isoform NMNAT3 specifically localizes in the mitochondria. In addition, the NAD+-dependent mitochondrial sirtuins SIRT3, SIRT4, and SIRT5 can consume NAD+ and convert it to NAM in the mitochondria. A nucleus-specific NMNAT isoform (NMNAT1) converts NMN to NAD+ in the nucleus. Although NAD+ is regulated in a location-specific manner by these compartment-specific enzymes, NAD+ levels in each subcellular compartment are also co-regulated by various shuttling mechanisms among compartments. Recent studies showed that the mammalian mitochondrial NAD+ transporter SLC25A51 plays a crucial role in the uptake of intact NAD+ from the cytoplasm into mitochondria [101] . The malate/aspartate shuttle system can also shuttle NAD+/NADH between the cytoplasm and mitochondria. Cytosolic NADH imported into mitochondria via the malate/aspartate shuttle is oxidized by complex I of the electron transport chain (ETC) and converted back to NAD+ [99,102,103] . The nuclear NAD+ pool equilibrates with the cytosolic NAD+ pool by diffusion through the nuclear pore, but the details of this mechanism remain unclear [99,103] .
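To keep the three routes straight, the short sketch below encodes the pathway steps described above as simple precursor → enzyme → product chains. Names follow the text; this is a lookup aid rather than a kinetic model, and the multi-step kynurenine segment is collapsed into a single entry:

```python
# Simplified encoding of the three NAD+ biosynthetic routes described above.
# Each step is (substrate, enzyme, product); enzyme names follow the text.

NAD_PATHWAYS = {
    "de novo (kynurenine)": [
        ("tryptophan", "kynurenine pathway (multi-step)", "NAMN"),
        ("NAMN", "NMNAT1-3", "NAAD"),
        ("NAAD", "NADS", "NAD+"),
    ],
    "Preiss-Handler": [
        ("nicotinic acid (NA)", "NAPRT", "NAMN"),
        ("NAMN", "NMNAT1-3", "NAAD"),
        ("NAAD", "NADS", "NAD+"),
    ],
    "salvage": [
        ("nicotinamide (NAM)", "NAMPT", "NMN"),
        ("nicotinamide riboside (NR)", "NRK1/NRK2", "NMN"),
        ("NMN", "NMNAT1-3", "NAD+"),
    ],
}

def route_to_nad(pathway: str) -> str:
    """Render one pathway as a readable chain of steps."""
    steps = NAD_PATHWAYS[pathway]
    return " ; ".join(f"{s} --{e}--> {p}" for s, e, p in steps)

if __name__ == "__main__":
    for name in NAD_PATHWAYS:
        print(f"{name}: {route_to_nad(name)}")
```

Note that NAMN is the shared node of the de novo and Preiss-Handler branches, whereas the salvage branch converges on NMN, consistent with the description above.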
NAD+-consuming enzymes and the aging process

NAD+ concentration decreases during aging in humans and in animal models [104][105][106][107] . Increased NAD+ availability through NAD+ precursors counteracts the effects of aging in various experimental models [108,109] , demonstrating the critical role of NAD+ depletion in aging. NAMPT expression declines with aging, supporting the involvement of NAMPT in NAD+ depletion [110][111][112] . As NAMPT plays an important role in regulating circadian oscillation, the decline of circadian oscillation with aging may be indirectly involved in aging-mediated NAD+ depletion [113] . Recent studies suggested that NAD+-consuming enzymes may also play a regulatory role in aging-mediated NAD+ depletion. These NAD+-consuming enzymes are the sirtuins, the poly(ADP-ribose) polymerases (PARPs), Sterile Alpha and TIR Motif Containing 1 (SARM1), and the ectoenzymes CD38 and CD157 [114,115] .

Poly(ADP-ribose) polymerases (PARPs). PARPs are expressed in all eukaryotes except yeast and are involved in numerous cellular processes such as DNA repair and apoptosis, gene regulation, and chromatin remodeling. PARPs can transfer one (mono) or more (poly) ADP-ribose moieties from NAD+ to substrates to form poly(ADP-ribose) (PAR) chains of varying lengths and contents. There are 17 PARP isoforms that share a conserved catalytic domain combined with various other domains, such as zinc finger, BRCT, SAM, SAP, ankyrin, and macro domains [123][124][125] . PARPs consume NAD+ and convert it to NAM [99,124] . PARP1 is the best characterized PARP member, and PARP1 activation plays distinct roles depending on the context. Under normal (non-stressed) conditions, PARP1 protects the replication fork. In aging, PARP1 maintains telomere length and telomerase activity [126,127] . As a sensor of DNA damage, PARP1 activation increases in response to DNA damage [123,128] and mediates NAD+ depletion [99] . PARP1 activation is involved in various DDR mechanisms, such as single-strand break (SSB) repair, DSB repair, homologous recombination (HR), and NHEJ. Under stressed (DNA damage) conditions, activated PARP1 recruits DDR machineries such as the scaffold protein XRCC1 (in the case of SSBs) and MRE11, EXO1, BRCA1, and BRCA2 (in the case of DSBs) [129] . Through activating PARP1 and downstream NF-κB signaling, anti-melanoma DNA-damaging drugs induce a melanoma cell SASP [130] . However, excessive DNA damage leads to PARP1 overactivation, severe NAD+ depletion, and cell death. A PARP1-dependent, DNA damage-induced programmed cell death pathway, termed "parthanatos", has been implicated in heart diseases [131] . In cancer cells, such as breast and ovarian cancers, HR is less effective due to mutations in DNA repair genes, e.g., BRCA1 and BRCA2. Consequently, DNA damage accumulates and induces genome instability, and these cancer cells rely on other DDR systems, including PARP1. In such cases, PARP1 inhibitors are helpful, as they cause apoptosis and eliminate the mutant, damaged cells [132] . As such, PARP inhibitors, e.g., the isoindolinone-based PARP inhibitor INO-1001 (ClinicalTrials.gov NCT00271765, NCT00271167) and Olaparib (NCT03782818), are in early stages of evaluation for the treatment of atherosclerotic CVD and pulmonary arterial hypertension.

CD38 and CD157. CD38 and CD157 are paralogues, and both are located on chromosome 4 (4p15) [133] .
CD38 is a type II transmembrane ectoenzyme [134] with multiple functional roles in tumorigenesis and aging through the regulation of NAD+ levels and extracellular nucleotide homeostasis [134] . This ectoenzyme sits on the cell surface with its catalytic site facing the extracellular environment. CD38 can also regulate NAD+ metabolism by controlling the metabolism of its extracellular precursor nicotinamide mononucleotide (NMN) [105] . CD38 mRNA and protein expression, along with CD38 NADase enzymatic activity, increase significantly during aging. Furthermore, CD38 induction correlates with NAD+ depletion in the liver, adipose tissue, spleen, and skeletal muscle of aged mice [105] . CD157 is expressed on myeloid cells, B-cell progenitors, and endothelial cells [99] . In vitro, macrophage polarization from M0 to proinflammatory M1 macrophages increased the expression of CD38 and, to a smaller extent, CD157, which has lower NADase activity. Like CD38, CD157 has also been suggested to consume extracellular NAD+ [133] . However, it is becoming clear that CD157 consumes mostly the NAD+ precursor nicotinamide riboside (NR) [135] and is not a significant NAD+ consumer [105,115,136,137] . Both CD38 and CD157 expression are increased in the epididymal white adipose tissue of aged (25-month-old) wild-type mice compared with 6-month-old mice [137,138] . Therefore, it is possible that the expression of CD38 and CD157 plays a significant role in aging-mediated NAD+ depletion. CD38 is activated in endothelial cells of the heart by hypoxia-reoxygenation and triggers NAD+ depletion [139] and endothelial dysfunction. In in vivo models, CD38 inhibitors (thiazoloquin(az)olin(on)es and luteolinidin) can block CD38 activity and prevent endothelial and myocardial cell damage in the post-ischemic heart [137,140][141][142] .

Human sterile alpha and HEAT/Armadillo motif containing 1 (SARM1). SARM1 has multiple functional domains, including a mitochondrial targeting signal (MTS), an auto-inhibitory N-terminal region with armadillo (ARM) and HEAT motifs, two sterile alpha motifs (SAM), and a Toll/interleukin-1 receptor (TIR) domain. SARM1 has two types of NADase enzymatic activity mediated by the TIR domain: (1) hydrolysis of NAD+ to NAM and ADP-ribose (ADPR), and (2) ADP-ribosyl cyclase activity generating NAM and cyclic ADPR from NAD+ [143] . The NADase activity of SARM1 is regulated by phosphorylation, and NAM acts as a feedback inhibitor of SARM1 NADase activity [96] . SARM1 is reported to be a key mediator of axonal degeneration via the breakdown of NAD+ after neural injury or disease [144] , and SARM1 regulates the neuronal intrinsic immune response to axonal injury through activation of JNK-c-Jun signaling [145] . Under oxidative stress, activated JNK phosphorylates SARM1, thereby increasing its NADase activity, reducing NAD+ levels, and suppressing mitochondrial respiration [146] . Recently, it has been reported that depletion of SARM1 inhibited NMNAT2-deficiency-mediated axonopathy during aging without any phenotypic manifestations [147] . Sur et al. [148] reported an important role of the SARM1-induced inflammatory response in age-dependent susceptibility to rotenone-induced neurotoxicity. These data suggest that SARM1 plays a crucial role in the increased susceptibility to age-associated neuronal loss.
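Taken together, cellular NAD+ reflects a balance between synthesis (chiefly NAMPT-dependent salvage) and consumption by sirtuins, PARPs, CD38/CD157, and SARM1. The deliberately simple mass-balance sketch below (arbitrary rate constants chosen purely for illustration) shows why sustained activation of the consumers, as occurs with aging or DNA damage, lowers the steady-state NAD+ level:

```python
# Illustrative, not quantitative: d[NAD+]/dt = synthesis - k_consume * [NAD+],
# where k_consume lumps together PARP, CD38/CD157, SARM1 and sirtuin activity.

def simulate_nad(synthesis=1.0, consumer_activity=1.0, nad0=100.0,
                 dt=0.01, steps=5000):
    """Euler integration of a one-pool NAD+ mass balance (arbitrary units)."""
    nad = nad0
    for _ in range(steps):
        d_nad = synthesis * 10.0 - consumer_activity * 0.1 * nad
        nad += d_nad * dt
    return nad

if __name__ == "__main__":
    baseline = simulate_nad(consumer_activity=1.0)   # settles near 100
    damaged  = simulate_nad(consumer_activity=4.0)   # e.g., PARP/CD38 overactivation
    print(f"baseline NAD+: {baseline:.1f}; after consumer activation: {damaged:.1f}")
```

The same logic explains why either boosting synthesis (NMN/NR supplementation) or blocking a dominant consumer (PARP or CD38 inhibitors) can restore the NAD+ pool.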
In the context of cytotoxic chemotherapy for cancer, the induction of chemotherapy-induced peripheral neuropathy (CIPN) through axonal degeneration may be due to the loss of NAD+ via SARM1 activity [149] . SARM1 is also expressed at high levels in neurons in the brain and is linked to neuronal cell death after glucose deprivation, ischemia, viral infection, or axonal damage [150][151][152] . Declines in cellular NAD+ levels and in NAMPT, the rate-limiting enzyme of NAD+ biosynthesis, may play pathogenic roles in age-related cognitive decline, and treatment with NMN to increase NAD+ may improve cognition in the setting of aging. Chemotherapy-induced cognitive impairment (CICI) has been reported in almost three-quarters of cancer patients treated with chemotherapy, and a significant fraction of patients experience continued cognitive decline. The metabolic pathways involving NAD+ contribute significantly to CICI, and treatment with NMN to increase NAD+ may prevent CICI [153] .

NAD+ DEPLETION AND THE AGING PROCESS

NAD+ levels decline during aging, and this decline can be linked to aging-related diseases, including atherosclerosis, arthritis, diabetes, cognitive dysfunction, and cancer. The relationship between NAD+ and the various hallmarks of aging has been extensively reviewed elsewhere [154] . In this review, we focus on the role of NAD+ depletion in "inflammaging" and telomere dysfunction.

INFLAMMAGING

The term "inflammaging" refers to a state of systemic, low-grade chronic inflammation in the absence of infection. Inflammaging represents a central biological process in aging [155,156] and reflects the strong link between chronic inflammation and systemic metabolism, including NAD+ depletion. First, studies have shown a strong correlation between NAD+ depletion and activation of innate immunity. CD38 expression increases with M1 (pro-inflammatory) polarization in macrophages, leading to a significant increase in NAD+ consumption and subsequent NAD+ depletion [136,138,157] . The NAD+ precursors NMN and NR can inhibit the glycolytic shift observed in M1 macrophage polarization and counteract CD38-mediated NAD+ depletion, attenuating the inflammatory response [158] . Aging is associated with a sustained increase in ROS, which upregulates NLRP3 inflammasome activation [138,159,160] . ROS induced by proinflammatory cytokines can also induce DNA damage, thus activating PARPs and CD38 and leading to aging-related NAD+ depletion. Therefore, enhanced expression of proinflammatory cytokines or ROS drives a vicious cycle of inflammaging through a positive feedback loop, in which major NAD+ consumers such as CD38 and PARPs are activated by ROS-mediated DNA damage and accelerate the physiological age-related decline [99] . In contrast, M2-like (anti-inflammatory) macrophages show increased NAMPT expression and subsequently upregulated NAD+ production [138] . Therefore, it is possible that M2-like macrophages can inhibit the process of inflammaging; further investigation will be necessary to clarify this issue. The aging process also has a profound impact on adaptive immunity, and age-related immune dysfunction, which includes remodeling of lymphoid organs and impairment of adaptive immunity, is referred to as immunosenescence. Studies have reported decreased levels of naïve T and B cells and increased levels of memory T cells, including cytotoxic CD8+CD28− populations, during aging.
This cytotoxic CD8+CD28− population is characterized by reduced SIRT1 and FOXO1 levels [161] , which can be reversed by inhibition of CD38 [162] . Another T-cell population that increases with aging is the exhausted T cell. These T cells show increased expression of inhibitory receptor molecules (PD1 and TIM3) and decreased proliferative capacity and effector functions [163,164] . PD1 inhibitors can restore the effector function of aged T cells [165] . Both adoptive CAR-T and anti-PD1 immune checkpoint blockade mouse models demonstrated that NAD+ supplementation enhanced the tumor-killing efficacy of T cells in vivo. NAD+ supplementation may promote tumor killing by tumor-infiltrating T cells after anti-PD1 immune checkpoint inhibitor treatment or adoptive chimeric antigen receptor (CAR) T-cell therapy by rescuing defective TUB-mediated NAMPT transcription [166] . Interestingly, in cancers that are resistant to PD1 inhibitors, CD38 expression is upregulated in exhausted CD8 T-cell populations [167,168] , but a critical role of CD38-mediated NAD+ depletion in resistance to PD1-inhibitor cancer therapy requires further investigation.

CHEMO-RADIATION

Telomere shortening and telomeric DNA damage induced by aging and cancer treatments can cause NAD+ depletion by activating PARP [77] or increasing CD38 expression [169,170] . PARP1 overactivation can lead to a catastrophic decrease in cytosolic NAD+, thereby directly inhibiting glycolysis and causing cell death [117,171,172] . Recently, however, we found that various cancer treatments (IR and doxorubicin) activate the p90RSK/ERK5-S496 inflammatory complex, leading to the formation of a positive feedback loop that induces mitochondrial stunning and a persistent SASP. This positive feedback loop is formed by the following steps: (i) cancer treatments increase mitochondrial ROS production; (ii) mitochondrial ROS activates the p90RSK/ERK5-S496 complex, thereby decreasing NRF2 transcriptional activity; (iii) the reduction of NRF2 transcriptional activity inhibits antioxidant gene expression (HO1 and Trx1), which is involved in the initiation of a persistent SASP, including senescence, inflammation, mitochondrial ROS production, and impaired efferocytosis; (iv) steps (i) to (iii) are required for IR and doxorubicin to induce telomere shortening; (v) telomeric DNA damage activates PARP [127] ; (vi) PARP activation causes mitochondrial damage and cell death [148][149][150][173][174][175] [ Figure 2]. As low-dose IR and doxorubicin did not trigger immediate cell death, and the depletion of NAD+ and ATP was recovered by PARP and p90RSK inhibitors, this mitochondrial dysfunction is reversible. Accordingly, we refer to this unique reversible form of mitochondrial dysfunction as "mitochondrial stunning"; (vii) cancer treatment-induced "mitochondrial stunning" was unique in that the cells remained metabolically active even under ATP-depleted conditions. Of note, the increase in mitochondrial ROS and succinate production after low-dose IR was sustained, and the late (but not early) phase of mitochondrial ROS and succinate production is p90RSK dependent. We also found that complex II activity is required for mitochondrial stunning-triggered mitochondrial ROS production. As such, sustained mitochondrial ROS production without killing the cells is critical for the chronic inflammation and unceasing SASP status that we noted even long after the completion of cancer therapy (late effects) [77] .
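The qualitative behaviour of this loop, sustained ROS and SASP signaling that relaxes only when the loop is broken, can be illustrated with a toy dynamical sketch. All rate constants below are arbitrary illustrations rather than measured values, and the p90RSK-inhibitor flag simply mimics pharmacological interruption of step (ii):

```python
# Toy model of the positive feedback loop described above: mitochondrial ROS
# activates p90RSK/ERK5-S496, which lowers NRF2-driven antioxidant capacity,
# which in turn allows ROS to stay elevated. Illustrative parameters only.

def simulate_ros(initial_ros=1.0, insult=2.0, p90rsk_inhibitor=False,
                 dt=0.05, steps=400):
    ros = initial_ros + insult                             # acute burst from IR/doxorubicin
    for _ in range(steps):
        p90rsk = 0.0 if p90rsk_inhibitor else min(ros, 3.0)  # saturating activation
        antioxidant = max(2.0 - 0.5 * p90rsk, 0.2)            # NRF2 targets repressed
        d_ros = 0.4 * p90rsk - antioxidant * (ros - 1.0)      # feedback drive vs clearance
        ros += d_ros * dt
    return ros

if __name__ == "__main__":
    print("ROS long after insult, loop intact:   %.2f" % simulate_ros())
    print("ROS long after insult, p90RSK blocked: %.2f" % simulate_ros(p90rsk_inhibitor=True))
```

With the loop intact, ROS settles at an elevated level long after the initial insult; breaking the loop lets it relax back to baseline, which is the qualitative pattern reported for PARP and p90RSK inhibition above.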
A positive feedback loop formed by the p90RSK/ERK5-S496 inflammatory complex thus contributes to persistent SASP induction, linking (i) telomeric DNA damage to senescence; (ii) ERK5 S496 phosphorylation, NRF2 suppression, and PARP activation to inflammation and ROS [176,177] ; (iii) mitochondrial stunning to mitochondrial ROS; and (iv) p90RSK-mediated ERK5 S496 phosphorylation to impaired efferocytosis [ Figure 2] [176] . We anticipate that this positive feedback loop can explain the persistent SASP status seen in many cancer survivors with increased late CVD risks.

AGING-RELATED DISEASES IN CANCER SURVIVORS

With the recent advancements in cancer detection and therapeutics, the life expectancy of patients with cancer has significantly increased [178] . Nearly 70% of patients with cancer will live at least 5 years from diagnosis, and 18% will live 20 years or longer [179] . In 2016, almost 10 million elderly cancer survivors (≥65 years old) were living in the US, and by 2040 this number is projected to grow to 19 million [180,181] . Even though this represents a great accomplishment of modern medicine, these accomplishments come at a cost. Cancer and cancer treatments accelerate the process of aging in cancer survivors, manifesting as earlier onset and higher incidence of aging-related diseases, including CVD, compared with the general population [6] . Aging is a well-known risk factor for the development of coronary and peripheral artery disease, hypertension, heart failure, valvular disease, and atrial fibrillation [182,183] . The pathogenetic processes behind these clinical manifestations include atherosclerosis, decreased arterial elasticity, arterial and myocardial fibrosis, calcification, and decreased myocardial relaxation [183] . These conditions are now seen significantly more often among cancer survivors, years after their cancer diagnosis and treatment. NAD+ biosynthetic and metabolic processes are mechanistically involved in aging, cancer, and many age-associated comorbidities [184] , and therapies aimed at raising intracellular NAD+ may remedy accelerated aging and age-associated diseases in cancer survivors. In fact, several ongoing clinical studies are examining supplementation with NAD+ precursors as a therapeutic option for age-related diseases [ Table 1]. Reports on childhood cancer survivors make it easier to disentangle the contribution of aging later in life from the direct effects of cancer and cancer therapies on the development of cardiovascular disease. In the Childhood Cancer Survivor Study, a retrospective study of over 10,000 adults who survived childhood cancer between 1970 and 1986, coronary artery disease was 10.4 times more frequent later in the life of cancer survivors compared with their siblings [185] . Furthermore, congestive heart failure was 15.1 times and cerebrovascular accidents were 9.3 times more frequent in survivors of childhood cancer than in their siblings [185] . A different study, using the registry of the Pediatric Oncology Group of Ontario Networked Information System and including over 7,000 childhood cancer survivors, showed that heart failure was 9.7 times more frequent in cancer survivors than in age-, gender-, and postal code-matched control individuals [186] . Additionally, coronary artery disease was 3.4 times, valvular disease 4.7 times, and arrhythmia 1.8 times more frequent [186] . A smaller prospective study of 92 childhood cancer survivors from Germany further supports the theory of premature cardiovascular aging in cancer survivors.
When compared with healthy controls, childhood cancer survivors had significantly reduced health-related physical fitness and significantly increased systolic and reduced diastolic blood pressure (a wide pulse pressure), consistent with premature arterial stiffening and matching the cardiovascular phenotype of older individuals [187] . A different study, including 19 long-term (>10 years) high-risk neuroblastoma survivors, revealed significantly higher levels of high-sensitivity CRP, which correlated with increased common carotid artery intima-media thickness in cancer survivors compared with age- and gender-matched controls, another marker of premature cardiovascular aging [188] . Despite current evidence supporting a higher prevalence and earlier onset of age-related CVD in cancer survivors, methodological challenges have limited efforts to thoroughly study the aging-related consequences of cancer and cancer treatment [189] . To overcome these challenges, in July 2018 the National Cancer Institute convened basic, clinical, and translational science experts, who identified several research and resource needs to be addressed immediately. The main items included: i) the need for longitudinal studies examining aging trajectories that include detailed data prior to, during, and after cancer treatment; ii) mechanistic studies that investigate the pathways leading to the development of aging phenotypes in cancer survivors; and iii) long-term clinical surveillance studies to assess late effects [189] . Addressing these needs will allow a better understanding of aging in cancer survivors and will help to better identify, predict, and mitigate the aging-related consequences of cancer and cancer treatment [189] . Yet there is a paucity of mechanistic investigations into the role of NAD+ metabolism and its regulatory pathways in the accelerated aging of cancer survivors.

WITH ORGAN-CHIPS

Organ-Chips are contemporary preclinical experimental models for disease research and drug discovery. An Organ-Chip is a microfluidic cell culture device typically consisting of two or more cell types arranged to simulate tissue- and organ-level physiology under continuous perfusion. These systems are capable of forming physiologically relevant tissue architecture in 3D, thus recreating organ-level functionality not possible with conventional 2D or 3D culture systems. Importantly, they offer a reductionist approach to studying signaling pathways and are supported by high-resolution, real-time imaging and in vitro analysis of the biochemical, genetic, and metabolic activities of living cells. Organ-on-chip technology for blood vessels has been transformational over the last decade; it has allowed functional analyses, continuous nutrition, intercellular transport, removal of byproducts, and secretion and biochemical assessment of co-cultured vascular cells not possible before with traditional experimental models [190] . Therefore, these systems are well suited to model aging and senescence associated with cancer therapy in a manner complementary to animal models. Recent work by Jain et al. [193,195] demonstrated a 3D anatomical Vessel-Chip that supports the co-culture of endothelial cells (ECs) and mural cells and recapitulates their bi-directional signaling under cyclic flow. The platform allows the construction of a wide range of luminal diameters and muscular layer thicknesses, thus providing a toolbox to create variable anatomy [191,192,194] .
In this device, smooth muscle cells (SMCs) align circumferentially while ECs align axially under flow, a pattern previously observed in vivo and only rarely in in vitro models. This system successfully characterizes the dynamics of cell size, density, growth, and alignment arising from co-culture and shear. The matrix used in this system has bulk mechanical properties close to those of in vivo vessels. Another significant feature of our Vessel-Chip is that the subendothelial gap (the distance between ECs and SMCs) is on the same scale (≈5-10 µm in thickness) as in vivo, which is an important physiological feature. This platform technology can be extended to include tissue-resident myeloid cells and other circulating immune cells and to simulate senescence. The platform will also allow the investigation of changes in metabolites (e.g., NAD+ and its metabolites) in the microvascular microenvironment.

CONCLUSION

Premature aging in cancer survivors is now well documented. In this review, we discussed how various cancer treatments can drive premature aging and induce the SASP in cancer survivors. Importantly, we also highlighted the role of NAD+ in the accelerated senescence and aging of these patients. However, the role of NAD+ in different cell types, and the cross talk among them that contributes to accelerated aging and the SASP, remains to be explored. The outcomes of several ongoing clinical trials of drugs targeting NAD+ and related factors in the NAD+ metabolism pathway may provide clues as to whether NAD+ can be a potential target for long-term management to reduce cardiovascular events in cancer survivors. Novel approaches, including the 3D anatomical Vessel-Chip model, can be valuable tools for assessing how NAD+ depletion-mediated secreted factors shape the interplay among different cell types, resulting in cancer therapy-induced vessel senescence.

Financial support and sponsorship

This work was partially supported by grants from the National Institutes of Health (NIH) to Drs. Abe (AI156921), Cooke (HL-149303), Chini (AG26094, AG58812, and CA233790), and Le (HL-149303), and from the Cancer Prevention and Research Institute of Texas (CPRIT) to Drs. Abe and Schadler (RP190256), as well as by the Glenn Foundation for Medical Research via the Paul F. Glenn Laboratories for the Biology of Aging and Calico Life Sciences LLC to E.N.C.
Designing MOF Nanoarchitectures for Electrochemical Water Splitting

Electrochemical water splitting has attracted significant attention as a key pathway for the development of renewable energy systems. Fabricating efficient electrocatalysts for these processes is intensely desired to reduce their overpotentials and facilitate practical applications. Recently, metal-organic framework (MOF) nanoarchitectures featuring ultrahigh surface areas, tunable nanostructures, and excellent porosities have emerged as promising materials for the development of highly active catalysts for electrochemical water splitting. Herein, the most pivotal advances in recent research on engineering MOF nanoarchitectures for efficient electrochemical water splitting are presented. First, the design of catalytic centers for MOF-based/derived electrocatalysts is summarized and compared from the aspects of chemical composition optimization and structural functionalization at the atomic and molecular levels. Subsequently, the fast-growing breakthroughs in catalytic activities, identification of highly active sites, and fundamental mechanisms are thoroughly discussed. Finally, a comprehensive commentary on the current primary challenges and future perspectives in water splitting and its commercialization for hydrogen production is provided. Hereby, new insights into the synthetic principles and electrocatalysis for designing MOF nanoarchitectures for the practical utilization of water splitting are offered, thus further promoting their future prosperity for a wide range of applications.

Introduction

Considering the growing global energy crises and environmental concerns, it is crucial to develop green and sustainable energy resources to substitute for nonrenewable sources such as fossil fuels. Hydrogen (H2) is one such energy carrier, and electrochemical water splitting is an attractive route to produce it. Because the benchmark catalysts for water splitting rely on scarce precious metals, developing non-precious alternatives with excellent activity and durability is a challenging but essential task. As a new class of highly porous materials, metal-organic frameworks (MOFs), consisting of organic ligands and metal ions or clusters, have high crystallinity and long-range order. [10,11] Owing to their intrinsic features of large surface areas, adjustable chemical components, tunable pore structures, and diverse topologies, a large number of MOFs have been employed for electrochemical water splitting. [12][13][14][15][16][17] Moreover, the properties of MOFs can be improved or modified by coupling them with various functional materials, including polyoxometalates (POMs), metal compounds, carbon nanotubes (CNTs), and other conductive substrates, to form guests@MOFs or MOF/substrates. [18][19][20][21][22] Superior electrochemical water-splitting performance can be achieved from the combined advantages of more active sites and enhanced conductivity through such functionalization. Additionally, the MOF-based skeleton allows the rearrangement of the elements at the molecular and atomic levels during pyrolysis. Thus, MOFs or MOF-based composites can act as templates for the synthesis of MOF-derived porous, carbon-based nanomaterials, such as metals, metal compounds, and single-atom catalysts (SACs), under thermal treatment. The pyrolysis of MOFs with controlled calcination modulates various characteristics, such as conductivity, porosity, surface area, stability, and catalytic activity; hence, such derivatives are highly interesting for water splitting. [23][24][25][26][27][28][29][30]
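For reference, the thermodynamic frame that all of the catalysts discussed below work against is set by the two half-reactions of water splitting. These are standard textbook relations (written here for alkaline media) rather than anything specific to the systems reviewed:

```latex
\begin{align*}
\text{HER (cathode):}\quad & 2\,\mathrm{H_2O} + 2e^- \rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} \\
\text{OER (anode):}\quad & 4\,\mathrm{OH^-} \rightarrow \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^- \\
\text{Overall:}\quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}, \qquad E^{\circ}_{\mathrm{cell}} = 1.23\ \mathrm{V}
\end{align*}
```

The practical cell voltage exceeds 1.23 V by the HER and OER overpotentials plus ohmic losses, which is why lowering these overpotentials is the central figure of merit quoted throughout this report.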
Based on these guidelines, diverse MOF-based/derived materials have been reported in the past five years. However, a comprehensive review summarizing MOF-based/derived materials with well-defined synthetic methods, chemical compositions, nanostructured morphologies, electrocatalytic activities, and reaction mechanisms is urgently needed to provide strong inspiration and direct future developments in engineering MOF-based/derived electrocatalysts for water splitting. Herein, this new progress report provides pivotal advances and commentaries on recent research on engineering MOF nanoarchitectures for efficient electrochemical water splitting. First, the design of catalytic centers for MOF-based/derived electrocatalysts is summarized and compared from the aspects of synthetic strategy, chemical composition optimization, and structural functionalization at the atomic and molecular levels (Scheme 1). Second, we focus on the electrocatalytic performance of MOF-based/derived materials for the HER, the OER, and bifunctional catalysis. In particular, significant attention is paid to summarizing the fast-growing breakthroughs in catalytic activity, identification of the highly active sites, and fundamental mechanisms of MOF-based/derived electrocatalysts with unprecedented water-splitting performance. Finally, we provide a comprehensive commentary on the current primary challenges and future perspectives for the design and commercialization of MOFs and their derived electrocatalysts for water splitting. We believe that this progress report may offer new insights into the synthetic principles and electrocatalysis involved in designing MOF nanoarchitectures for practical utilization in water splitting, thus further promoting their future prosperity for a wide range of applications.

Pristine MOFs

Much attention has been paid to the selection and optimization of metal sites and organic ligands at the atomic level, which is considered a powerful strategy to regulate the electrocatalytic behavior of pristine MOFs. Component design can readily modulate the physical and chemical properties of MOFs, such as electronic structure, conductivity, bonding energy of the intermediates, and stability. The commonly used strategies to optimize the catalytic performance of pristine MOFs are introducing multivalent metal sites and heterometallic doping, incorporating functional groups into organic ligands, adsorbing additional ions onto the organic ligands or metal nodes, and immobilizing conjugated organic ligands in the skeletons.

Structural Design of Metal Nodes in Pristine MOFs: Some monometallic MOFs, such as Co-MOFs, [31] Cu-MOFs, [32] and Zr-MOFs, [33] have witnessed rapid and significant development in electrocatalysis. Moreover, bimetallic and trimetallic sites have shown catalytic activity in the OER. Zhao et al. designed and synthesized NiCo bimetal-organic framework nanosheets (NiCo-UMOFNs) that achieved extraordinary electrocatalytic activity toward the OER under alkaline conditions (Figure 1Aa-d). [34] Among the four catalysts compared, the NiCo-UMOFNs achieved a very low overpotential of 250 mV at 10 mA cm−2 and a low Tafel slope (42 mV dec−1) in an N2-saturated 1 m KOH solution (Figure 1Ae,f). Subsequent density functional theory (DFT) studies confirmed that the high electrocatalytic activity is attributable to the coordinatively unsaturated metal centers and the coupling effect between Co and Ni.
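Since the overpotential at 10 mA cm−2 and the Tafel slope are the two figures of merit quoted throughout this report, a short sketch of how they are extracted from a polarization (LSV) curve may be useful. The data below are synthetic values chosen only to mimic the NiCo-UMOFNs numbers, not points digitized from the cited work:

```python
# How the two headline metrics are obtained from polarization data.
# Synthetic, idealized data for illustration only.

import numpy as np

j = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])   # current density, mA cm^-2
eta = 250.0 + 42.0 * np.log10(j / 10.0)           # overpotential, mV (idealized Tafel behaviour)

# Tafel slope b from a linear fit of eta vs log10(j): eta = a + b*log10(j)
b, a = np.polyfit(np.log10(j), eta, 1)
print(f"Tafel slope: {b:.1f} mV dec^-1")

# Overpotential at the benchmark current density of 10 mA cm^-2
eta_at_10 = np.interp(10.0, j, eta)
print(f"Overpotential at 10 mA cm^-2: {eta_at_10:.0f} mV")
```

A smaller Tafel slope means a smaller increase in overpotential is needed for each decade of current, i.e., faster apparent kinetics, which is why low slopes such as 42 mV dec−1 are highlighted below.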
Zhang et al. reported the facile ambient-temperature synthesis of a unique trimetallic MOF nanofoam (Figure 1Ba) with controllable molar ratios. [35] The transmission electron microscopy (TEM) image of the (Ni2Co1)1−xFex-MOF-NF and the high-resolution TEM (HRTEM) analysis after the OER are shown in Figure 1Bb,c. The overpotentials needed for different molar ratios to achieve a current density of 10 mA cm−2 in 1.0 m KOH are summarized in Figure 1Bd, in which (Ni2Co1)0.925Fe0.075-MOF-NF has the lowest overpotential (257 mV). The component design of metal nodes is a simple and efficient strategy to purposefully introduce catalytically active metal nodes into MOFs for various types of electrocatalysis and to take advantage of the synergistic effect between the multiple metal elements. However, challenges still need to be addressed: 1) it is difficult to introduce multiple metal nodes into MOFs while retaining the original character of the structure; and 2) the real active sites and the catalytic mechanism are difficult to identify.

Structural Design of Organic Ligands in Pristine MOFs: Another way to adjust the characteristics of MOFs for electrocatalysis is to modify the chemical composition of the organic ligands. It has been shown theoretically and experimentally that UiO-66 can be functionalized with single- and dual-functionalized linkers (OH, NH2, or SH). [36] Syzgantseva et al. studied and summarized the impact of the functional groups and scaffolds of ligands in MOFs. [37] The influence of F, Cl, Br, I, OH, SH, CN, NH2, NO2, SO3H, PO3H2, and NMe2 in MIL-125 was studied systematically; for UiO-66, F, Cl, Br, I, OH, SH, CN, NH2, and NO2 were considered. The results implied that electron-donating groups boost the energy of the ligand-centered states, whereas electron-withdrawing groups have the opposite effect. With an increasing degree of conjugation in the organic ligand, fewer electrons are localized on the ligand. These observations support the idea that the component design of organic ligands can adjust the electronic structure of MOFs and thereby optimize the catalytic performance. Nevertheless, little attention has been paid to modifying organic ligands by introducing functional groups in the field of electrocatalysis, owing to the complex and varying effects of the side groups. Structural defects may offer further opportunities to tune and optimize the performance of electrocatalysts through modulation of the electronic and geometric structures. Recent work on missing-linker defects in the UiO-66-type framework unambiguously demonstrated that structural defects mostly affect the local node geometry and therefore offer an alternative route to node modification. [38] Furthermore, Zheng et al. applied a NaBH4 treatment to modulate the defect concentration and optimize the electrochemical performance. [39] It has been revealed that the defects created in MOFs lead to redistributed electronic configurations, which may provide defective conducting channels and thus result in enhanced OER catalytic activity. Additionally, the coordinatively unsaturated sites associated with organic linker defects can serve as catalytic sites and enhance the intrinsic activity of the catalytic sites.
For instance, the missing ligands of Co2(OH)2(C8H4O4) (CoBDC) modified the local coordination geometry of Co2+ and generated unsaturated Co2+ sites, achieving remarkable OER catalytic activity with an extremely low overpotential of 241 mV at 100 mA cm−2 with Ni foam as the substrate. [14] From the above examples, the defect engineering of MOFs with a controllable density of defects is generally accompanied by electron localization, lattice distortion, and bond breaking and reforming, resulting in a larger number of active sites. Nonetheless, several significant challenges still exist: 1) high densities of defects may reduce the electrical conductivity, thereby reducing the electrocatalytic activity; and 2) it is difficult to define the actual reactive sites owing to the various types of defects.

[Figure 1. A) c) Energy-dispersive X-ray spectroscopy (EDS) mapping and d) high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) images of NiCo-UMOFNs, showing metal atoms (pink), light elements (blue), and background (green); e) linear sweep voltammetry (LSV) curves and f) Tafel plots of various catalysts. Reproduced with permission. [34] Copyright 2016, Springer Nature. B) a) Synthesis process and b) TEM image of hierarchical (Ni2Co1)0.95Fe0.05-MOF-NF (NF: Ni foam); c) HRTEM analysis after the OER; d) overpotentials of different molar ratios at 10 mA cm−2. Reproduced with permission. [35] Copyright 2019, Wiley-VCH.]

Guests@MOFs

In many cases, owing to inherent drawbacks of MOFs such as poor conductivity and inferior functionality, pristine MOFs cannot fully meet the requirements of electrocatalysis; functional guests can therefore be introduced into their pores or cavities through various interactions or bonds (electrostatic, π-π interactions, host-guest interactions). In many cases, the synergistic effect between the guests and MOFs can also improve the catalytic activity. The immobilization of precious metals on functional supports can be an effective strategy to obtain excellent electrocatalysts because of the desirable dispersity and regulated interfaces. [45,46] Rui et al. prepared a 2D Ni-MOF@Pt hybrid with well-defined interfaces via in situ deposition of Pt nanoparticles, [20] achieving improved electrochemical HER performance under both acidic and alkaline conditions (Figure 2Aa,b). As shown in Figure 2, the hybrid reached 10 mA cm−2 at an overpotential of only 102 mV, outperforming commercial Pt/C (Figure 2Ae-f). The strong Pt-O covalent bonds in Ni-MOF@Pt are believed to enable ideal interfacial interaction and facilitate electron transfer to the Pt nanoparticles. This interface engineering method provides a broad prospect for developing new functional MOFs and other 2D nanocomposites with great potential for water-splitting applications. The incorporation of base metal complexes into MOFs has also been investigated to enhance the catalytic properties. Lin et al. reported that UiO-67 doped with [Ru(tpy)(dcbpy)OH2]2+ (tpy = 2,2′:6′,2″-terpyridine, dcbpy = 5,5′-dicarboxy-2,2′-bipyridine) via coordination bonds achieved a high turnover frequency (TOF) and good electrochemical stability for electrochemical water oxidation in a buffered solution (pH 7). [47]

[Figure 2 (partial caption). Reproduced with permission. [49] Copyright 2020, American Chemical Society. C) a) 3D POM-encapsulated metal-organic nanotube; b) diagram of the POM linkage mode in a 1D POM-encapsulated metal-organic nanotube; c) LSV curves of the catalysts; d) cycling stability tests of the catalysts; HUST-200 (X = P) and HUST-201 (X = As). Reproduced with permission. [44] Copyright 2018, American Chemical Society.]
Meanwhile, incorporating metal complexes such as porphyrins and phthalocyanines into MOFs can further improve the stability under the highly oxidative environment of OER catalysis, thus guaranteeing efficient and stable catalytic performance. For instance, a Co-tetramethoxyphenyl porphyrin has been incorporated into the cavity of ZIF-8 via host/guest interaction and functions as a high-performance bifunctional electrocatalyst for both the OER and the oxygen reduction reaction (ORR). The strong interactions between the guest molecule and the ZIF-8 host ensure excellent structural and electrochemical stability. [48] Recently, the metal-salen complex, one of the closest analogs to metal porphyrin complexes, has also received increasing attention for OER catalysis. An Fe-salen complex and a POM were co-loaded into ZIF-8, as shown in Figure 2Ba. [49] Figure 2Bb,c presents the field-emission SEM (FESEM) images of FSZ-8 and FSWZ-8, respectively. The LSV of FSWZ-8 and FSZ-8 (Figure 2Bd) demonstrates that FSWZ-8 could achieve a higher water oxidation current than FSZ-8 at the same potential, proving the benefits of the strong interactions between the co-encapsulated Fe-salen and POM. POMs are highly soluble inorganic nanoclusters composed of polyanion clusters and counter cations, which can be immobilized in the pores of MOFs via covalent or noncovalent bonds to serve as different kinds of active sites for many catalytic reactions. [50,51] For example, POM-encapsulated metal-organic nanotube frameworks (HUST-200 and HUST-201) are shown in Figure 2Ca,b. [44] These two types of POM-encapsulated MOF with Cu-O covalent bonds show high activity and stability toward the HER under acidic conditions (Figure 2Cc,d). In particular, the best catalyst achieved a relatively low overpotential of 131 mV to reach a current density of 10 mA cm−2. It has also been reported that POMs can migrate within MOFs when induced by thermal treatment. [21] Additional investigation is required to insert other types of POMs and explore their effects on HER or OER catalytic activity. Bimetallic alloys or core-shell nanoparticles on MOF substrates are also promising catalysts because they typically present higher catalytic activity than their monometallic counterparts. Ma et al. synthesized MOF-encapsulated bimetallic nanoparticles with enhanced OER performance and stability via the in situ etching of Cu-Ni nanostructures. [52] TEM/STEM images and EDS mapping showed no significant changes in the structure and element distribution after the electrochemical measurements. In particular, Ni-Cu@Cu-Ni-MOF presented the lowest overpotential at a current density of 10 mA cm−2 and the smallest Tafel slope of 98 mV dec−1. Notably, for the strategy of introducing electrocatalytic nanomaterials into MOFs, the location and loading of the functional materials must be regulated and controlled meticulously to ensure maximum utilization of the active sites. Uniform dispersion of the guest materials inside the MOF nanocrystals can significantly shorten the charge-transfer distance and facilitate the catalytic reaction. In brief, loading functional materials inside MOFs is a promising strategy to yield heterogeneous catalysts with high activity and stability: the confined materials can serve as active sites and deliver outstanding catalytic functionality, while their uniform dispersion inside the MOF crystals shortens the charge-transfer distance and facilitates the catalytic reaction.
However, because the organic ligands separate the metal nodes of MOFs from the guest nanoparticles, the interaction between the encapsulated metal nanoparticles and the MOFs is weak. Strengthening the interaction between host and guest (for example, through the Pt-O and Cu-O covalent bonds mentioned above) is the key to enhancing the structural stability and catalytic activity.

MOF/Substrates

Using functional materials as substrates to support MOFs is another way of introducing functional materials into pristine MOFs. Quite a few MOFs have been assembled onto various substrates as building blocks; this not only provides more exposure of the active sites of the catalysts but also improves the conductivity of the MOFs. For instance, Duan et al. prepared ultrathin nanosheet arrays of 2D MOFs on various supports through a facile one-step chemical bath deposition route, showing superior performance for the OER, the HER, and overall water splitting. [53] An NiFe-MOF grown on Ni foam (NF) as the substrate demonstrated the best OER performance in a 0.1 m KOH electrolyte, achieving an overpotential of 240 mV at 10 mA cm−2 and a small Tafel slope of 34 mV dec−1. Recently, Cheng et al. reported a lattice-strained NiFe-MOF nanosheet array on foamed Ni as the substrate, synthesized through a low-temperature hydrothermal approach and exhibiting excellent performance as a bifunctional oxygen electrocatalyst. [54] The mechanism of lattice expansion of the NiFe-MOF under ultraviolet light is shown in Figure 3Aa. Ultraviolet treatment enlarges the interlayer spacing of the lattice-strained NiFe-MOF from 11.6 to 12.1 Å (Figure 3Ab,c). Lattice-strained NiFe-MOFs exhibit extraordinary OER and ORR activities, comparable to those of RuO2 and Pt/C (Figure 3Ae). Notably, the 4.3%-MOF catalyst shows mass activities of 2000 and 3100 A gmetal−1 at overpotentials of 300 and 400 mV, respectively, which are significantly higher than those of commercial RuO2 and pristine NiFe-MOF (Figure 3Af). The catalyst also exhibited superior Faradaic efficiency and desirable stability after 200 h of continuous OER operation (Figure 3Ag). As illustrated in Figure 3Ad, the improvement in the OER and ORR catalytic performance of the NiFe-MOF may be ascribed to the change in the electronic structure driven by the tensile lattice strain. The OER catalytic mechanism diagram in Figure 3Ah shows that the NiFe-MOF undergoes a fast and efficient four-electron pathway for oxygen electrocatalysis, for either the ORR or the OER, as the key intermediate superoxide (*OOH) species emerge on the high-valence Ni4+ active sites. Khalid et al. reported a bimetallic NiCo-MOF that was directly grown on Ni-mesh and wrapped by a graphene oxide aerosol skeleton, providing a highly accessible active surface area and showing improved electrocatalytic activity for the HER in an alkaline medium (Figure 3Ba). [18] Figure 3Bb,c shows that the NiCo-MOF was grown homogeneously on the Ni-mesh. The catalytic performance increased appreciably owing to the electronic coupling between the NiCo-MOF, the rGOAS, and the Ni-mesh substrate, ensuring strong electron transfer within the composite. The LSV curves and Tafel values suggested that the rGOAS-covered nanoflocks led to a profound improvement in the HER performance (Figure 3Bd,e).
The physicochemical interactions between the rGOAS and the NiCo-MOF, as well as its unique architecture, guarantee excellent electrical conductivity, good mass transport of the electrolyte, and high exposure of the active sites, leading to a superior HER catalyst. Growing a MOF on a substrate can not only retain the advantages of the pristine MOF but also provide a flexible and effective way to increase the macro- or mesoporosity for mass transport and facilitate the exposure of active sites during electrocatalysis. Moreover, attaching MOFs to highly conductive substrates may overcome the limited electrical conductivity of pristine MOFs. Nonetheless, MOF/substrate catalysts are still in their infancy, and some underdeveloped issues still need to be resolved: 1) hybridization with substrates may block the intrinsic micropores of the MOFs, resulting in poor mass transport; and 2) it is critical to design MOF/substrate catalysts with enhanced stability in aqueous media, particularly acidic and basic media. In summary, pristine MOFs can theoretically be employed as electrocatalysts to maximize the electrocatalytic surface area and to adjust the active sites precisely and straightforwardly through the rational selection and modification of the organic ligands and metal nodes. For example, incorporating nonbridging ligands into a MOF can significantly improve the electrocatalytic performance. Converting bulk MOF crystals into 2D nanosheets can also expose more of the active surface sites. Designing bimetallic MOFs may further optimize the electrocatalytic performance for water splitting because of the synergistic effect between the multiple metal elements. Nevertheless, the inferior conductivity, controversial stability, and generally poor activity of MOFs hinder the extensive development of pristine MOFs for electrocatalysis. Synthesizing π-conjugated structures with transition metal atoms and aromatic organic ligands as precursors could provide a promising pathway to highly conductive MOF-based electrocatalysts. Furthermore, benefiting from the tunable pore structures of MOFs, functional nanomaterials such as metal nanoparticles, metal complexes, and POMs can be encapsulated inside MOFs to form guests@MOFs. The resulting MOF nanocomposites can achieve multifaceted catalytic activity or significantly improved conductivity, and in many cases the synergistic and strong interactions between the guests and the MOFs further enhance the catalytic performance, while the uniform dispersion of the guest materials inside the MOF crystals improves the stability of the catalysts. For MOF/substrates, the functional materials serve as substrates that generally contribute to the dispersion and stability of the MOFs and, in many cases, also improve the electrical conductivity.

MOF-Derived Electrocatalytic Materials

Since MOF-5-derived carbons were first reported in 2008, [55] carbon-based materials derived from MOFs, featuring high conductivity and well-dispersed catalytic sites, have been increasingly employed as highly efficient electrocatalysts. During the pyrolysis of MOFs, the organic linkers are converted into a highly porous graphitic carbon matrix, and the metal nodes transform into metal compounds, alloys, or single-atom dopants in the carbon materials. Herein, the design strategies for MOF-derived carbon-based catalysts are presented in detail.
MOF-Derived Metal-Free Carbon Electrocatalysts

Carbon materials with heteroatomic dopants (e.g., B, N, P, and S) have drawn increasing attention because of their high conductivity, corrosion resistance, and excellent catalytic performance. Some reports have proved that metal-free carbon nanomaterials can catalyze the OER [56,57] and the HER. [58,59] Recently, MOFs have been regarded as ideal templates for producing such carbon nanomaterials owing to their large surface area, high conductivity after carbonization, and affordable price. Heteroatom-containing MOFs can be employed as precursors to obtain metal-free carbon-based electrocatalysts by carbonization. Lei et al. demonstrated that MOF-derived N- and O-doped carbon materials could be utilized for the electrochemical splitting of water. This bifunctional catalyst was prepared by calcining the ZIF-8 precursor, followed by electrochemical activation of the catalytic sites (Figure 4a). [60] As revealed by energy-dispersive X-ray spectroscopy (EDX), N and O were uniformly dispersed and anchored in the MOF-derived porous carbon matrix (Figure 4b). The excellent electrocatalytic behavior could be attributed to the modulated N- and O-containing surface groups created by the electrochemical activation (Figure 4c-e).

[Figure 4 (partial caption). c) HER and d) OER LSV polarization curves; e) photograph of the electrolyzer using ZIF-8-C6 as the HER electrocatalyst and ZIF-8-C4 as the OER electrocatalyst. a-e) Reproduced with permission. [60] Copyright 2018, Royal Society of Chemistry.]

Carbonaceous materials with heteroatomic dopants can also be synthesized by calcining MOFs with a highly porous nature and controllable nanostructures under certain atmospheres, such as NH3, H2S, and PH3 gases. [23,61,62] For instance, Liu et al. reported an efficient trifunctional electrocatalyst with N- and P-doping synthesized by calcining MOF precursors under a PH3 atmosphere. [63] For the synthesis of heteroatom-doped carbons using heteroatom-containing organic ligands as precursors, heteroatom doping is more uniform, but the design of appropriate precursors is more complex. Using certain gas environments (e.g., NH3, H2S, and PH3) as an external heteroatom source is a feasible and straightforward strategy, but the dispersion of the heteroatoms is relatively poor. To date, MOF-derived metal-free carbon materials for electrochemical water splitting have rarely been reported, which may be caused by the limited intrinsic electrocatalytic activity of these metal-free sites. It is still a great challenge to design MOF-derived metal-free carbon materials with excellent electrochemical water-splitting performance.

Metal- and Alloy-Doped Carbon Electrocatalysts

In recent years, metal and alloy nanoparticles supported by heteroatom-doped carbon materials have become a thriving topic for designing highly efficient electrocatalysts. [64,65] Through the pyrolysis of MOFs in the presence of carbon or external reductive agents, metal ions around the organic ligands (e.g., Co, Ni, and Fe ions) can be reduced in situ to metal or alloy nanoparticles encapsulated in a heteroatom-doped carbon frame, which exhibit excellent catalytic performance and stability owing to their highly adjustable metal composition and robust carbon structure. [66][67][68][69][70][71]
[66][67][68][69][70][71] As representatives of MOFs, there are many reports about zeolite imidazole frameworks (ZIFs) as precursors to prepare metal-nitrogen-carbon electrocatalysts, [24,[72][73][74][75][76][77][78][79][80][81] for instance, the porous cage structure of N-doped carbon nanotubes (NCNTs) synthesized via the simple pyrolysis of polyhedral ZIF-67 particles. [72] Thus, enhanced electrocatalytic performance and durability for the ORR and OER were observed, which were mainly attributed to the synergistic effect between the N dopants and restricted Co nanoparticles in the CNTs, the NCNTs structure, and the rugged porous cage structure. Li et al. developed a series of Co/Zn bimetallic zeolitic imidazolate frameworks (BMZIF) that served as precursors to synthesize porous carbon nanomaterials loaded with Co nanoparticles (Co@NC-x/y) and exhibited exceedingly high activity for bifunctional oxygen electrocatalysis. [73] Recently, Wang et al. reported a 2D dual-metal (Co/Zn), leaf-like ZIF-pyrolysis routine for scalable preparation to encapsulate Co nanoparticles within N-doped CNTs. [74] The resultant Co-N-CNTs were shown to be excellent bifunctional air electrodes for primary and rechargeable Zn-air batteries. Noble-metal electrocatalysts can also be synthesized using MOFs as precursors. Qiu et al. synthesized Ru-based electrocatalysts that exposed massive Ru active sites (Ru-HPC, Ru-decorated hierarchically porous carbon) with bimetallic CuRu-MOFs serving as templates for highly efficient hydrogen evolution (Figure 5Aa,b). [70] As revealed by X-ray diffraction (XRD) patterns (Figure 5Ae), the less active Cu species of CuRu-C were etched by an FeCl 3 solution to achieve Ru-HPC. Meanwhile, the results of Brunauer-Emmett-Teller (BET) surface areas and TEM images revealed that abundant meso-and macropores were generated in situ (Figure 5Ac,d). Remarkably, Ru-HPC presented desirable catalytic activity for the HER, outperforming the commercial Pt/C by achieving a current density of 25 mA cm −2 at an overpotential of 22.7 mV (Figure 5Af). Metal alloy nanomaterials, including FeNi, [82,83] FeCo, [84][85][86] IrCo, [87] and CoNi alloys, [88,89] can also be obtained from MOFs by pyrolysis for application in the field of electrocatalysts. Recently, Zhang et al. reported FeCo bimetallic N-doped porous carbons (FeCo-C/N) obtained from the calcination of yolk-shell-structured ZIFs. [84] The obtained FeCo-C/N exhibited excellent ORR performance and good OER activity because of its unique structural and compositional features. Xu et al. reported a self-template approach to preparing open carbon cages with a hydrangea-like superstructure by the morphology-controlled thermal transformation of core@shell MOFs. [85] The direct calcination of core@shell Zn@Co-MOFs could be used to construct well-defined open-wall carbon cages. However, the introduction of guest Fe 3+ ions into the Zn@Co-MOF precursor will lead to the self-assembly of open carbon cages into a hydrangea-like 3D superstructure connected by CNTs, which are grown in situ on the Fe-Co alloy nanoparticles formed during the calcination of Fe-doped Zn@Co-MOFs. The as-prepared composite exhibits excellent performance as an air cathode catalyst in a Zn-air battery owing to its unique superstructure. Xiong et al. reported a group of optimized bimetallic MOF-derived Co-Fe alloys trapped within the carbon nanocomposites via a combination of the typical self-assembly of MOFs and a guest-host method. 
[86] Among them, Zn 6 Co has been proven to be a compositionally optimal precursor for synthesizing bimetallic nanoparticle-carbon composite materials with the incorporation of external Fe (Figure 5Ba,b). As revealed by STEM and EDX spectroscopy (Figure 5Bc-f), Co 0.9 Fe 0.1 bimetallic nanoparticles, with a uniform distribution of Co and Fe and a Co/Fe ratio of 9:1, were uniformly dispersed and anchored in the MOF-derived porous carbon matrix. The resulting nanocomposite exhibited excellent stability after 30 000 cycles in alkaline solution due to its compositional and structural integrity (Figure 5Bg). (Figure 5, partial caption: A) d) N 2 adsorption/desorption isotherms of Ru-HPC and CuRu-C, with the inset showing the corresponding pore size distributions; e) XRD patterns of Ru-HPC and Ru-C; f) HER polarization curves of the catalysts in 1 m KOH solution. A) Reproduced with permission. [70] Copyright 2019, Elsevier Ltd. B) a,b) SEM images of pyrolyzed BMOF_Zn 6 Co (a) and Zn 6 Co_Fe (b); c) STEM image and the corresponding EELS elemental maps of Co (red), Fe (green), and the composite map (Co vs Fe); d) low-magnification STEM image of BMOF; e) atomic-scale STEM image of a Co 0.9 Fe 0.1 nanoparticle with five subregions on the [110] zone axis, with the inset showing the corresponding Fourier transform with five pairs of {111} diffraction spots; f) STEM-EDX spectrum with Fe Kα and Co Kα,β edges; g) EDX patterns of BMOF before and after 30 000 cycles. B) Reproduced with permission. [86] Copyright 2019, American Chemical Society.)
Metal-Compound-Doped Carbon Electrocatalysts
Monometallic-Compound-Doped Carbon Electrocatalysts: Monometallic compounds, including metal carbides, [90,91] oxides, [92,93] nitrides, [94] phosphides, [25,[95][96][97][98][99][100] and chalcogenides, [101][102][103] can be directly synthesized from the pyrolysis of MOFs. Recently, by making use of the unique characteristics of highly and uniformly dispersed metal nodes and the suitable thermostability of MOFs, Deng et al. reported an efficient Bi 2 O 3 @C catalyst, which was prepared by an oxidation treatment after the carbonization of Bi-based MOFs. [93] In addition to the optimal pyrolysis time and temperature, suitable ligands play a critical role in forming metal compounds. For example, Cu 3 P/CNS composites were directly prepared by annealing a MOF whose ligand contained the P atom. [98] However, additional heteroatom-containing sources are usually necessary to prepare metal phosphides, nitrides, and chalcogenides. Kang et al. fabricated 3D and mesoporous Co 3 N@AN-C nanocubes (NCs) using in situ nitridation and calcination processes under an N 2 (200 sccm)/NH 3 (10-100 sccm) atmosphere via a Prussian blue analog (PBA) of Co 3 [Co(CN) 6 ] 2 NC precursors (Figure 6Aa,b). [94] The corresponding TEM images are shown in Figure 6Ac. The synthesis of a dual-phased MoC-Mo 2 C carbide catalyst is illustrated in Figure 6Ba. [90] The TEM images (Figure 6Bb,c) and XRD pattern (Figure 6Bd) show the presence of ultrafine nanocrystals for MoC-Mo 2 C. Furthermore, the strong coupling interactions between MC and M 2 C afford favorable sites for both water dissociation and hydrogen desorption, which endows the dual-phased carbide nanocrystals with much better catalytic activity than that of the single-phase MC or M 2 C (Figure 6Be-g).
Multiple-Metal-Compound-Doped Carbon Electrocatalysts: MOF-derived bimetallic compounds may exhibit superior catalytic activities compared to their monometallic counterparts due to the strong synergistic effects that overcome the sluggish kinetics of multiple electron transfer processes.
Directly annealing bimetallic MOF precursors is a common approach to prepare bimetallic compounds. [104][105][106][107][108][109] Lou et al. designed Ni-doped FeP/C hollow nanorods with Ni-Fe bimetallic MIL-88A as the template and phytic acid as the etching agent and phosphorus source. [105] The optimized hollow nanorods obtained via the pyrolysis process exhibited pH-universal HER activity. XPS and DFT calculations attributed the efficiency to the synergistic modulation of the active components and the structural and electronic properties. Recently, Ouyang et al. employed CoMo-MOF as the precursor to synthesize a magnetically functionalized Co 2 Mo 3 O 8 @NC-800 consisting of highly crystallized Co 2 Mo 3 O 8 and ultrathin N-rich carbon via an NaCl-assisted pyrolysis strategy (Figure 7Aa). [109] According to the XRD results, Co 2 Mo 3 O 8 @NC-800 presents a hexagonal crystal structure with the space group P6 3 mc (Figure 7Ab). The successful formation of Co 2 Mo 3 O 8 with high crystallinity was confirmed by atomic-scale STEM (Figure 7Ac,d). Besides, the magnetic and theoretical calculation results reveal that Co 2 Mo 3 O 8 with T d Co 2+ (high spin, t 2 3 e 4 ) atoms as the active sites is beneficial to the rate-determining step that forms *OOH, consequently enhancing the OER performance (Figure 7Ae,f). A solvothermal reaction is a facile synthesis strategy for introducing metal ions into the precursor, which is further converted to multimetallic, compound-doped carbons through a post-annealing process. [26,[110][111][112][113][114][115] Li et al. reported a novel dispersing-etching-holing (DEH) approach to fabricate a 3D open nanonet-cage electrocatalyst (Figure 7Ba,b). [114] The operando XAS results confirmed that ZnO could be etched in situ during the HER process, while the provided RuIr alloy acted as the active sites (Figure 7Bc,d). The DEH method might significantly enhance the electrochemically active surface area (ECSA) by providing a porous nanocage with a large number of exposed active sites and 3D accessibility for substrate molecules, as shown in Figure 7Be,f. Recently, Guo et al. reported a new strongly coupled NiCoN/C hybrid nanocage. [115] First, ZIF-67 and Ni(NO 3 ) 2 were used to synthesize NiCo LDH nanoboxes via a chemical etching method under sonication. Then, the nanoboxes were chemically converted into strongly coupled NiCoN/C hybrid nanocages by a low-temperature thermal ammonolysis treatment. The mass activity of the catalyst in the 1.0 m KOH electrolyte was 0.204 mA µg −1 at an overpotential of 200 mV. Furthermore, through the pyrolysis of MOF precursors, complex metal compounds with multiple nanostructures and compositions can be obtained. [116][117][118][119][120][121][122] Liang et al. synthesized bifunctional Co-NC@Mo 2 C complex catalysts that showed excellent catalytic performance for overall water splitting with a low cell voltage of 1.685 V at 10 mA cm −2 . [119] The superior HER and OER performance could be ascribed to the synergistic effects of Mo 2 C and Co-NC. Remarkably, the coating structure of Mo 2 C not only protects the Co nanoparticles from electrolyte erosion but also provides more catalytic sites. A recently reported Ru-modified Co-based electrocatalyst, which was anchored in an N-doped carbon (NC) matrix and presented a rationally designed Mott−Schottky heterostructure (RuO 2 /Co 3 O 4 -RuCo@NC), achieved outstanding activity and stability for overall water splitting under strongly acidic conditions.
[121] RuO 2 /Co 3 O 4 -RuCo@ NC was synthesized via a three-step process: pyrolysis of Co-MOF, galvanic replacement reaction between Co and Ru, and controlled partial oxidation. Notably, the composite with rich metal-semiconductor interfaces obtained by partial oxidation could promote the charge-transfer process; thus, the catalytic performance would be further improved. In brief, for the design of MOF-derived metal-based electrocatalysts, choosing the appropriate MOF precursors is a commonly adopted strategy. For instance, directly annealing bimetallic MOF precursors is a common approach to prepare bimetallic compounds. The advantages of this strategy are that it is simple to adjust the proportion of the metal elements, and the resulting catalysts can be evenly dispersed on the carbon substrates, resulting in excellent catalytic performance. Manipulating the conversion conditions and the introduction of additional precursors can also be employed to regulate the chemical composition of MOF-derived metal-based electrocatalysts. This strategy is applied widely, but it is difficult to adjust the ratio of the components accurately. Indeed, these strategies are used simultaneously in many cases to obtain excellent electrocatalytic activity and stability. Metal-Based Single-Atom Catalysts With high catalytic activity, selectivity, and maximum metal atom utilization efficiency, single-atom catalysts (SACs) have drawn considerable attention in the field of catalysis. However, under realistic reaction conditions, the isolated atoms can easily migrate and aggregate into nanoparticles, owing to the high surface energy of the monatomic catalysts. To overcome this problem, MOFs have become promising precursors to develop SACs owing to their porous structures and precisely designable components. The direct pyrolysis of MOFs is a facile strategy to synthesize SACs. [123,124] Recently, other strategies have also been reported. MOF-derived N-doped carbon is an effective scaffold for the adsorption of metal ions and the subsequent formation of SACs via thermal treatment. [125,126] Li et al. developed a simple method to create atomic-dispersed Fe-N 4 active sites embedded into carbon phases, which are synthesized by the carbonization of ZIF-8 precursors (Figure 8a). [125] Benefiting from this method, researchers can fine-tune the Fe-N 4 site structure and density while maintaining the carbon matrix and N doping. Upon pyrolysis and etching, SACs can be obtained using bimetallic MOFs as the precursor. [127][128][129][130] For a facile adsorption strategy, engineering the structure and composition of the carbon substrates is typically recognized as the focus of future research. A single-atom Ni electrocatalyst was designed using bimetallic MgNi-MOF-74 as a precursor (Figure 8b). [127] It is worth noting that the spatial distance of adjacent Ni atoms can be extended by introducing Mg 2+ ions into MgNi-MOF-74. The N coordination numbers of single-atom Ni catalysts could be adjusted and controlled by regulating the pyrolysis temperature. The pyrolysis-etching strategy is regarded as one of the most facile strategies. However, low metal loading resulting from the activation process and a limited number of precursors are the disadvantages of the strategy. Another commonly used strategy for synthesizing SACs is the MOF-assisted host-guest strategy. 
[27,[131][132][133][134][135][136][137][138][139] Typically, the extra metal precursor was encapsulated in the cavity or skeleton of the MOF, and the pyrolysis process was performed to obtain the SACs. Recently, Xiong et al. reported single-atom dispersed Rh embedded on N-doped carbon (SA-Rh/CN) with favorable electrocatalytic performance. ZIF-8 with molecular-scale cavities was used as a precursor for the substrate to disperse and anchor Rh(acac) 3 because the size of Rh(acac) 3 (9.36 Å) is between that of the large holes (diameter of 11.6 Å) and small pores (diameter of 3.4 Å) of ZIF-8. Thus, Rh(acac) 3 could be immobilized within the molecular cages of ZIF-8 (denoted Rh/ZIF-8), which was reduced in situ to synthesize SA-Rh/CN by pyrolysis (Figure 8c). The spatial distribution and structure of the Rh species were elucidated by AC HAADF-STEM images and XAFS spectroscopy (Figure 8d-f). The host-guest strategy could effectively restrain the migration of the metal species during calcination. Nevertheless, mononuclear, metal-based guests with appropriate sizes below those of the MOF pores should be considered. Recently, Fan et al. synthesized Ni-based SACs (A-Ni-C) with graphitized carbon materials. [140] The A-Ni-C was produced by the carbonization of a Ni-MOF, followed by HCl etching and electrochemical activation (Figure 9Aa). The presence of single Ni atoms was elucidated by HAADF STEM imaging and XRD (Figure 9Ab,c). The A-Ni-C electrocatalysts exhibited significantly improved HER performance after electrochemical activation, which could remove the Ni nanoparticles protected by graphitic carbons and create single-atom Ni sites (Figure 9Ad). Moreover, the host is crucial to the design of SACs because it influences the space and electronic environment of the metal center. [141] Besides the carbon frameworks, a variety of metal crystals doped with single-metal sites have also been identified as promising SACs for electrochemical water splitting. [142,143] Recently, Lou et al. reported a series of metal-atom-doped Co 3 O 4 nanosheets for efficient OER using MOF precursors. [143] These novel electrocatalysts were fabricated by a cooperative etchingcoordination-reorganization approach with ZIF-67 nanoplates. Remarkably, the Fe-doped Co 3 O 4 nanosheets exhibited superior OER activity with an overpotential of 262 mV at 10 mA cm −2 , which is comparable to that of commercialized noble-metal OER catalysts. Significant progress has been achieved toward the design of SACs for electrochemical catalysis, as summarized in Table 1, including SACs from MOF precursors. However, a relatively limited number of these MOF-derived SACs have been applied to electrochemical water splitting, especially in OER catalysis. Because of the advantages afforded by the use of SACs for scalable production and electrochemical reactions, we believe that much more effort is needed to enhance further the current intrinsic catalytic activities of MOF-derived SACs in both the HER and OER. For instance, DFT calculations have been extensively used to investigate the coordination environment of catalysts and reaction mechanisms and to analyze and design the active sites for electrochemical reactions. Mohajeri et al. investigated a single transition metal from 3d atoms (TM/B 36 , TM = Sc-Zn) with finite-sized B clusters, B 36 , as the substrate. 
[144] Among the mentioned catalysts, Ni/B 36 was recognized as the most efficient OER electrocatalyst, which could be attributed to the appropriate binding strengths of the various adsorbates (Figure 9Ba,b). The Ti/B 36 electrode showed the highest HER activity owing to the lowest ΔG H* (0.12 eV) (Figure 9Bc). (Figure 8, credits: a) reproduced with permission, [125] Copyright 2019, Wiley-VCH; b) reproduced with permission, [127] Copyright 2019, Wiley-VCH; c-f) reproduced with permission, [138] Copyright 2020, The Authors, published by Springer Nature.) In brief, we have summarized the composition and structural design of MOF-derived carbon-based electrocatalysts. MOF-derived carbon materials with heteroatomic dopants (e.g., B, N, P, S, etc.) have drawn increasing attention because of their high conductivity, erosion resistance, and excellent catalytic performance. The improvement of the electrocatalysis behavior is due to charge accumulation and spin polarization caused by heteroatom doping. We believe that carbon substrates with nonmetallic heteroatomic dopants can be considered when designing MOF-derived electrocatalysts. For metal-based-material-doped carbons derived from MOFs for electrochemical water splitting, more attention should be paid to the chemical and structural composition in the design process. For instance, Mo- and W-based materials have been widely used to catalyze the HER. Besides the metal oxides and hydroxides commonly employed as OER catalysts, other types of metal compounds (e.g., phosphides, nitrides, and chalcogenides) exhibit outstanding HER performance. A limited number of studies on MOF-derived SACs for electrochemical water splitting have been reported. When the adsorption-calcination method is applied to the design of SACs, the calcination temperature should be carefully considered. The host-guest strategy is usually employed for the synthesis of MOF-derived SACs, and the regulation of pore size and guest matching has a substantial effect on the successful preparation of SACs. The SACs can also be obtained using bimetallic MOFs as the precursor through a pyrolysis-etching strategy, which is relatively simple but is limited by the need to form a bimetallic MOF precursor. Furthermore, to control the chemical composition of MOF derivatives, choosing appropriate MOF precursors is a commonly adopted strategy. Manipulating the conversion conditions and introducing additional precursors can also be employed to regulate the chemical composition of the electrocatalysts. Indeed, these strategies are used simultaneously in many cases to obtain excellent electrocatalytic activity and stability.
Catalysts for the Hydrogen Evolution Reaction
Hydrogen is a promising green energy source as a substitute for traditional fossil fuel energy. The HER is a key half-reaction of electrochemical water splitting, which is an efficient strategy for converting electricity into storable hydrogen. The HER proceeds as follows: in acidic media, 2H + + 2e − → H 2 , and in alkaline or neutral media, 2H 2 O + 2e − → H 2 + 2OH − . The evaluation parameters for the HER include the onset potential, the value of the potential at a current density of 10 mA cm −2 (the overpotential η 10 ), and the Tafel slope, among others. To concisely describe these concepts, the overpotential (η) is a measure of the additional potential needed above the thermodynamic potential (E 0 ) required for an electrocatalytic reaction at a certain current density.
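For reference, the overpotential defined above and the Tafel relation discussed in the next paragraph can be written compactly (standard electrochemical kinetics, not specific to any catalyst reviewed here) as

\[
\eta(j) = E_{\mathrm{applied}}(j) - E_{0}, \qquad \eta = b\,\log_{10}\!\left(\frac{j}{j_{0}}\right),
\]

where j is the current density, j 0 is the exchange current density, and b is the Tafel slope extracted from the linear region of an η versus log 10 (j) plot.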
The Tafel slope is a parameter of kinetic measurement, which describes the relationship between the overpotential and the base 10 logarithm of the current density. A lower Tafel slope reflects better HER kinetics. During the HER process, hydrogen intermediates (H*) are adsorbed onto the catalytic sites, so the hydrogen adsorption energy is crucial for the sensible selection of active HER catalysts. According to the HER mechanism, the active site with an H* adsorption free energy (ΔG H* ) of 0 could achieve the best HER activity. [4,5] MOFs have been extensively utilized as preferred heterogeneous catalysts for the HER because of their large surface area and controllable structure. [1,147,148] In particular, the conductive MOFs exhibit significantly more electron transfer than traditional MOFs. Generally, to improve the intrinsic electrical conductivity, one strategy is to synthesize a π-conjugated structure with transition metal atoms (Ni, Cu, and Co) and aromatic organic ligands (such as 1,3,5-triaminobenzene-2,4,6-trithiol, hexaaminobenzene, and benzenehexathiolate) as precursors. [17,[149][150][151][152] Recently, hexaiminohexaazatrinaphthalene (HAHATN), an analog of hexaazatriphenylene (HATN), was fabricated as an organic ligand to prepare different bimetallic conductive MOFs with in-plane mesoporous structures (2.7 nm) (Figure 10Aa,b). [17] The obtained Ni 3 (Ni 3 •HAHAT) 2 bimetallic, conductive MOFs exhibited outstanding HER catalytic activity, achieving a rather low overpotential of 115 mV at 10 mA cm −2 in an alkaline medium (Figure 10Ac). The DFT calculations suggest that the Ni-N 2 groups have a stronger ability to absorb and bond protons, which can remarkably enhance the HER performance of Ni 3 (Ni 3 •HAHATN) 2 compared to that of the traditional Ni 3 (HITP) 2 conductive MOF (Figure 10Ad-f). Recent research has revealed that combining MOFs with functional materials is an effective strategy to synthesize MOF composites, which can not only overcome the deficiencies of traditional MOFs, such as poor conductivity and limited functionality but also inherit their strengths. [153] In addition, numerous investigations have demonstrated that 2D MOFs could be utilized as promising electrocatalytic materials because of their intrinsic advantages, such as fast mass and electron transfer, tunable structures, and more exposed active sites. [8,20] In one study, novel 2D Co-BDC/MoS 2 hybrid nanosheets (Figure 10Ba) were designed and fabricated as efficient electrocatalysts for alkaline HER via a simple sonication-assisted solution strategy. [154] As shown in Figure 10Bb, the introduction of Co-BDC in the Co-BDC/MoS 2 resulted in a partial phase transfer from 2H-MoS 2 to 1T-MoS 2 , which contributes significantly to enhanced HER activity. The Co-BDC/MoS 2 required a lower overpotential at −10 mA cm −2 , lower Tafel slope, and lower charge-transfer resistance than bare Co-BDC and MoS 2 in 1 m KOH (Figure 10Bc,e,f). More importantly, a well-designed Co-BDC/MoS 2 interface is highly desirable for the alkaline HER. As shown in Figure 10Bd, Co-BDC facilitates the kinetics of the rate-determining water dissociation step of the alkaline HER, while modified MoS 2 is beneficial for the subsequent H 2 -generation step. The high conductivity of substrates such as CNTs and rGO can be used to support MOF particles and improve the mechanical stability of pristine MOFs. [8,18] Khalid et al. 
reported nanoflocks of a bimetallic organic framework (NiCo-MOF), which was grown on a Ni mesh and covered with a graphene oxide aerosol skeleton by utilizing a nebulizer air compressor. [18] The obtained composites showed enhanced electrocatalytic behavior for the HER and excellent stability in the alkaline electrolyte compared to the pristine nanoflocks. Moreover, coupling MOFs with conductive materials, such as acetylene black (AB), have been reported to enhance the HER catalytic properties efficiently. [19,155] For instance, Li et al. designed and fabricated a series of composites containing [Co 1.5 (TTAB) 0.5 (4,4′-bipy)(H 2 O)] (CTGU-9) and AB, which demonstrated a distinct electrocatalytic activity for the HER with an overpotential of 128 mV at 10 mA cm −2 , a small Tafel slope of 87 mV dec −1 , and excellent long-term stability of no less than 21 h. [19] In situ synthesis of hybrid catalysts combining metal compounds and MOFs for the HER has been considered as an ideal strategy to enhance further the HER activity owing to the strong interaction of the MOF-based composites. [2,156] Recently, Liu et al. synthesized CoP-doped MOF-based electrocatalysts for the pH-universal HER via a controllable partial phosphorization strategy. [156] The CoP/Co-MOF hybrid also showed excellent electrocatalytic activity for the HER with overpotentials of 27, 49, and 34 mV at a current density of 10 mA cm −2 in 0.5 m H 2 SO 4 , 1 m phosphate buffer solution (PBS, pH 7.0), and 1 m KOH, respectively. The results of DFT calculations and experiments show that the MOF-based HER electrocatalysts not only possess the optimal adsorption energy of H 2 O (ΔG H2O* ) and hydrogen (ΔG H* ) but also take advantage of the well-defined channel structure of MOFs. Huang et al. introduced a facile in situ sulfurization strategy to synthesize a hybrid catalyst containing good conductive Fe 3 S 4 ultrasmall nanosheets attached on the surface of 3D MIL-53(Fe) for the HER under acidic solutions. [2] The Fe 3 S 4 /MIL-53(Fe) hybrid catalysts maintain the advantages and overcome the deficiencies of the individual components, which contributes to the remarkable HER performance. Recently, transition metal compounds and composites derived from MOF precursors, featuring their well-defined structure and large surface area, have been applied as excellent electrocatalysts. With MOFs as precursors, Ni-based catalysts, such as Ni 2 P, [118,157,158] Ni/NiO, [159,160] NiSe, [161] and NiFeP, [105,162] have been widely used to catalyze the HER. [105,118,140,[157][158][159][160][161][162][163][164] For instance, Jiao et al. designed and fabricated a Ni/NiO nanoparticle with subtle lattice distortions using Ni-MOF as the precursor and template. [160] Notably, the incorporation of Ni 3+ in Ni/NiO heterostructures led to a subtle atomic rearrangement and exposed more electrochemically active reaction sites, which promoted HER activity with a rather low overpotential of 41 mV at 10 mA cm −2 . Recently, Lou et al. reported a simple method to prepare Ni-doped FeP/C hollow nanorods with adjustable aspect ratios by the etching and coordination reaction between MOFs and phytic acid followed by a pyrolysis process. [105] The obtained Ni-doped FeP/C hollow nanorods show outstanding electrocatalytic activity and robust stability in solutions for the HER over a full range of pH values because of their abundant active sites and the shortening of the diffusion distance for both mass and electron transport. 
The overpotential at 10 mA cm −2 of the Ni-doped FeP/C hollow nanorods was only 72, 117, and 95 mV in acidic, neutral, and alkaline media, respectively. Moreover, the DFT-calculated electronic structures indicate that Ni doping can further improve charge transfer. Co-based catalysts derived from MOFs, such as CoSe 2 , [165] CoP, [100,166] CoPS, [167] IrCo nanoalloys, [87] Co x Ni y N, [168] CoFeP, [107] and CoNiP, [169] have also been widely reported to facilitate the HER. Jiang et al. designed a one-step annealing strategy for Ir-doped MOFs to prepare IrCo nanoalloys coated with N-doped graphene shells (IrCo@NC) (Figure 11Aa,b). [87] The as-prepared IrCo@NC shows excellent catalytic activity for the HER with an exceedingly low Tafel slope of 23 mV dec −1 and an overpotential of only 24 mV at a current density of 10 mA cm −2 under acidic conditions (Figure 11Ac,d). The remarkable HER activity is even more outstanding than that of commercial Pt/C catalysts, which results from the significantly reduced ΔG H* ( Figure 11Ae). Feng et al. fabricated porous, rodlike Co-Ni bimetal nitrides (Co x Ni y N) as high-efficiency HER electrocatalysts in all pH environments through nitridation from a bimetallic MOF-74 precursor. [168] The obtained Co x Ni y N presents several advantages, such as large specific surface area, abundant mesoporous structure, and perfect active site dispersion, resulting in enhanced HER catalytic activity. In recent years, MOF-derived SACs have been explored as promising electrocatalysts. [132,170,171] Chen et al. designed and synthesized a W-SAC, with W atoms immobilized on a N-doped carbon substrate derived from a MOF for HER applications. [132] WCl 5 /UiO-66-NH 2 was annealed at 950 °C and then treated with a hydrofluoric acid solution to etch the zirconic oxide (Figure 11Ba). HAADF-STEM and XAFS analyses reveal the uniform dispersion of the W atoms (Figure 11Bb,c). The W-SAC exhibited a small overpotential of 85 mV at 10 mA cm −2 and a low Tafel slope of 53 mV dec −1 in a 0.1 m KOH solution (Figure 11Bd,e). DFT calculations suggested that the ΔG H* of the W-SAC approached the ideal value of zero, which is closely related to the HER activity of the catalyst (Figure 11Bf). Recently, Li et al. reported novel synthesis tactics utilizing an in situ phosphatizing of triphenylphosphine embedded within MOFs to obtain an atomic Co 1 -P 1 N 3 interfacial structure, where one single Co atom is coordinated with one P atom and three N atoms (denoted as Co-SA/P-in-situ). [170] In the acid solution, the as-prepared Co-SA/P-in-situ exhibited excellent HER activity with an overpotential of 98 mV at 10 mA cm −2 and a Tafel slope of 47 mV dec −1 , which are better values than those of the catalyst with the Co-N 4 interfacial structure. Moreover, in situ XAFS analysis and DFT calculations supported the explanation that the enhanced HER performance was ascribed to the bond-length-extended, high-valence Co 1 -P 1 N 3 atomic interface structure. The most recently reported MOF-based/derived electrocatalysts with different substrate that are favorable for the HER are systematically summarized in Table 2. Catalysts for the Oxygen Evolution Reaction The OER, with an equilibrium potential of 1.229 V versus RHE, is vital for several energy-related applications, including rechargeable metal-air batteries and water electrolysis. 
[174] The OER proceeds as follows: in alkaline media, 4OH − → O 2 + 2H 2 O + 4e − , and in acidic media, 2H 2 O → O 2 + 4H + + 4e − . The parameters commonly utilized to assess the activity of OER electrocatalysts include the onset potential, η 10 , and the Tafel slope, which resemble those of HER electrocatalysts. The OER is a complicated 4-electron process, in which the oxygen intermediates are adsorbed onto the active sites. In order to further optimize the durability of earth-abundant transition-metal-based OER catalysts, and to improve the kinetics, the OER is usually carried out in strongly alkaline environments (pH = 13 or 14). More details of the MOF-based/derived materials for OER electrocatalysis are discussed in the subsequent sections. Recent studies have shown that Co-, [13,175,176] Mn-, [177] Ni-, [178,179] and Fe-based MOFs [180,181] with enhanced electrocatalytic activity have been widely investigated as novel materials for the OER owing to their tunable structures, well-defined pores, and high specific surface areas. For instance, Zhang et al. prepared an Fe-MOF nanosheet array on Ni foam (Fe-MOF/NF) by hydrothermal treatment, which showed outstanding electrocatalytic performance for the OER with an overpotential of approximately 240 mV at 50 mA cm −2 and a relatively low Tafel slope of 72 mV dec −1 in 1.0 m KOH. [181] Moreover, Fe-MOF/NF showed robust long-term electrochemical stability, with its catalytic activity being maintained for no less than 30 h. Recently, Jiang et al. employed a microwave-induced plasma engraving strategy to achieve a fine regulation of the coordinatively unsaturated metal sites of Co-MOF-74 with a distinctly improved OER activity and no damage to the integrity of its phase. [176] The hydrogen-plasma-engraved Co-MOF-74 exhibited superior OER activity in a 0.1 m KOH electrolyte with a relatively low overpotential of 337 mV at 15 mA cm −2 , a high TOF of 0.0219 s −1 , and a large mass activity of 54.3 A g −1 . Theoretical calculations suggest that the electronic structure of MOFs can be regulated by introducing missing linkers, which enhances the OER activity of the MOF. [14] Inspired by this, Xue et al. reported a universal strategy to introduce different missing linkers, such as carboxyferrocene (Fc), to regulate the electronic structure of the layered-pillared MOF Co 2 (OH) 2 (C 8 H 4 O 4 ) (CoBDC) (Figure 12Aa-c). The calculated density of states (DOS) of CoBDC and CoBDC-Fc revealed that new electronic states near the Fermi level were generated after introducing missing linkers, demonstrating that CoBDC-Fc has a more conductive structure (Figure 12Ad). This conductive structure plays a critical role in improving the OER activity, as introducing missing linkers into the MOF could reduce the energy barrier (Figure 12Ae). The self-supported CoBDC nanoarrays with Fc missing linkers and NF serving as the substrate (CoBDC-Fc-NF) exhibited remarkable OER activity with an ultralow overpotential of 241 mV at 100 mA cm −2 and a small Tafel slope of 51 mV dec −1 (Figure 12Af,g). More recently, Ji et al. proposed a facile linker scission strategy to induce lattice strain in MOF catalysts by partially replacing binary carboxylic acids with monocarboxylic acids. [182] The strained NiFe-MOFs with 6% lattice expansion showed excellent activity for the OER in an alkaline electrolyte with a low overpotential of 230 mV at a current density of 10 mA cm −2 and a small Tafel slope of 86.6 mV dec −1 .
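As background for the four-electron pathway mentioned at the start of this section, the conventionally assumed sequence of adsorbed intermediates for the OER in alkaline media can be written (a textbook mechanism, not specific to the MOF catalysts reviewed here) as

\[
\begin{aligned}
\mathrm{M} + \mathrm{OH^-} &\rightarrow \mathrm{M{-}OH} + e^- \\
\mathrm{M{-}OH} + \mathrm{OH^-} &\rightarrow \mathrm{M{-}O} + \mathrm{H_2O} + e^- \\
\mathrm{M{-}O} + \mathrm{OH^-} &\rightarrow \mathrm{M{-}OOH} + e^- \\
\mathrm{M{-}OOH} + \mathrm{OH^-} &\rightarrow \mathrm{M} + \mathrm{O_2} + \mathrm{H_2O} + e^-,
\end{aligned}
\]

where M denotes a surface active site; the *OOH-forming step is often rate-determining, as noted earlier for the Co 2 Mo 3 O 8 @NC-800 catalyst.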
A 2D sheet-like nanostructure could not only facilitate mass and electron transfer but also enable higher exposure of the active catalytic sites of unsaturated coordinated metal atoms, which remarkably enhances the intrinsic OER activity. Huang et al. developed a typical self-dissociation-assembly method to fabricate well-defined, ultrathin CoNi-MOF nanosheet arrays (CoNi-MOFNA), which could be employed as a highly active OER electrode. [183] The catalyst achieved an extremely low overpotential of 215 mV at a current density of 10 mA cm −2 , and its mass activity was 14 times that of commercial RuO 2 . Besides, the current density of CoNi-MOFNA showed no significant degradation even after 300 h of continuous electrolysis. Pang et al. reported a new method for preparing single-layer metal-organic nanosheets with capping solvent molecules from 3D layered-pillared MOFs, using sonication in a solvent over a comparatively short period (30 min). [15] The as-obtained single-layer metal-organic nanosheets exhibited higher OER activity than other heterogeneous catalysts. Multimetallic MOFs, such as CoFe-, [187,190,192] NiCu-, [188] FeNi-, [189,191,193] NiCo-, [194] and NiCoFe-based MOFs, [35] have been reported to show enhanced catalytic performance for the OER relative to their monometallic counterparts. For example, Sun et al. proposed a self-templating approach to growing NiFe-based MOF nanosheets, such as MIL-53(FeNi)/NF, on foamed nickel in situ using a one-step solvothermal synthesis. [193] An overpotential of 233 mV at 10 mA cm −2 , a mass activity of 19.02 A g −1 , a Tafel slope of 31.3 mV dec −1 , and desirable stability were obtained for MIL-53(FeNi)/NF in 1 m KOH electrolyte, demonstrating its superior OER performance relative to MIL-53(Ni). The encapsulation of the Fe species into MIL-53 could readily facilitate the modulation of the electronic structure, facilitate electron transport, and increase the electrochemically active area to improve the catalytic performance. MOF-based hybrids have also been reported to accelerate the OER. In a recent study, 2D Ni-BDC/Ni(OH) 2 hybrid nanosheets were fabricated by a simple sonication-assisted solution route. [195] Ni-BDC/Ni(OH) 2 showed excellent activity, desirable kinetics, and long-term stability for the OER. Because of the strong electronic interactions between Ni(OH) 2 and Ni-BDC, the electronic structure of Ni(OH) 2 was fine-tuned. As a result, Ni(OH) 2 with higher oxidation states could be obtained; thus, the OER catalytic performance could be further improved. Notably, the OER current density of Ni-BDC/Ni(OH) 2 was 82.5 mA cm −2 at 1.6 V versus RHE, outperforming the benchmark commercial Ir/C catalyst by up to three times. Ultrafine CoFeO x nanoparticles were immobilized in the lattice of a poly[Co 2 (benzimidazole) 4 ] (PCB) layer, forming a hybrid denoted M-PCBN (Figure 12Ba). [16] The TEM image reveals the formation of M-PCBN with an ultrathin heterogeneous nanosheet structure (Figure 12Bb). Structural characterization and analysis suggested a changed 3d electronic configuration and a higher valence for the interfacial Co sites between the metal oxide nanoparticles and the monolayered MOF, resulting in an enhanced OER activity, which was consistent with the theoretical calculations and electrochemical measurements (Figure 12Bc-f). Electrochemical tests indicated the excellent OER activity and durability of the M-PCBN, with an overpotential of 232 mV at 10 mA cm −2 and a Tafel slope of 32 mV dec −1 .
Many researchers have also found that MOFs can act as attractive templates or precursors to prepare porous carbon-encapsulated metal (M@C) materials via a pyrolysis method. [79,[214][215][216][217][218][219][220] The M@C materials retain not only the large surface area of their parental MOFs but also show excellent conductivity and catalytic activity. Meanwhile, the metal nodes around organic ligands can be reduced in situ to single atomic sites [221] or bimetal alloy [82,88,89,222] cores encapsulated in a heteroatom-doped carbon frame after pyrolysis under an inert gas atmosphere. The resulting materials possess a unique electronic effect and extra synergistic effect, resulting in the acceleration of the catalytic reaction kinetics for the OER. Recently, Lin et al. reported a topology-guided synthetic procedure for a novel 2D Ni-based MOF hexagonal nanoplate (HXP). [218] Under the inhibition and modulation of pyridine in the substitutionsuppression process, the morphology could be changed from hexagonal nanorods (HXR) to nanodisks (HXD) and nanoplates with controllable thickness by regulating the amount of pyridine. Moreover, a subsequent calcination process transformed the nanoplates into nitrogen-doped Ni@carbon nanocomposites (Figure 13Ba), which showed a low overpotential of 307 mV at a current density of 10 mA cm −2 , Tafel slope as low as 48 mV dec −1 , and satisfactory durability in the OER (Figure 13Bb-d). Bu et al. developed a versatile, ultrafast microwave-assisted chemical vapor deposition (CVD)-like route to synthesize a variety of uniformly dispersed mono-or few-layer N-doped graphene shell immobilized metal nanocrystals (M@NC) within 10 s using MOFs on graphene as the precursors in the presence of carbon cloth (CC). [82] Among the M@NCs, the as-synthesized FeNi@ NC/graphene exhibited the best electrocatalytic activity toward the OER with the lowest overpotential (261 mV) at 10 mA cm −2 in 1 m KOH, the smallest Tafel slope of 40 mV dec −1 , and robust stability for no less than 120 h. Studies have demonstrated that MOF-derived materials without high-temperature treatment were also employed as OER catalysts. For instance, Xu et al. proposed a new hybrid nanostructure CeO x /CoS consisting of hollow CoS derived from ZIF-67, which was decorated with CeO x nanoparticles grown in situ (Figure 13Ca). [223] The TEM images of the precursor CoS and 14.6% CeO x /CoS are displayed in Figure 13Cb,c, respectively. The obtained CeO x /CoS hybrid exhibited outstanding OER performance by regulating the surface electron states and producing defective sites, which is related to the increase in the Co 2+ /Co 3+ molar ratio and defect number. Additionally, the deposition of CeO x in situ on the surface of the CoS hollow structures can avoid the corrosion of CoS. As expected, the CeO x /CoS exhibited a low overpotential of 269 mV at 10 mA cm −2 , a low Tafel slope of 50 mV dec −1 , and significant operational stability in an alkaline electrolyte (Figure 13Cd-f). Hollow (Ni, Co)Se 2 arrays immobilized on flexible CC originating from cobalt-MOF via an ion-exchange/etching reaction, and following solvothermal selenization exhibited excellent OER performance, as reported by Song et al. [224] Similarly, Li et al. successfully synthesized a hierarchical hollow (Co, Ni)Se 2 @NiFe layered double hydroxide (LDH) nanocage originating from ZIF-67 for efficient OER with a facile ion-exchanged method and then decorated the nanocage with NiFe LDH to enhance the electrocatalytic kinetics further. 
[225] Qiu et al. reported a universal and straightforward approach to preparing a boundary defect-rich, ultrathin Co(OH) 2 (D-U-Co(OH) 2 ) nanoarray at room temperature for the OER by in situ etching of a Co-MOF (ZIF-L-Co). [226] Here, the remarkable OER performances of recently reported MOF-based/derived electrocatalysts are summarized (Table 3).
Bifunctional Electrochemical Catalysts (HER + OER)
Electrochemical water splitting consists of two half-reactions: the HER occurs at the cathode and the OER occurs at the anode. The cell voltage needed to afford a specific current density (generally 10 mA cm −2 ) in an electrolysis cell is generally applied to assess the HER/OER bifunctional performance of the electrocatalyst. In this section, MOF-based/derived catalytic materials with activities for both the OER and HER will be discussed. Only a small number of MOF-based materials have emerged as electrocatalysts for water splitting, mainly due to their low electrical conductivity and limited stability. [12,227,228] 2D MOFs have been demonstrated to have enhanced conductivity and more exposed active sites. For instance, Duan et al. developed a 2D bimetallic MOF on conductive substrates for high-efficiency water electrolysis via a dissolution-crystallization process. [53] Recently, Xu et al. reported another 2D MOF-based catalyst (ultrathin Ni-ZIF/Ni-B NSs with massive crystalline-amorphous phase interfaces), which was derived from Ni-ZIF nanorods through a facile, room-temperature boronization strategy. [229] Remarkably, the Ni-ZIF/Ni-B@NF required an extremely low cell voltage of 1.54 V for overall water splitting to achieve a current density of 10 mA cm −2 . By hybridizing the 2D Ni-MOF and noble Pt nanocrystals into one heterostructure, an interfacial-bond-induced charge transfer takes place and electronically optimizes the active sites further to modify intermediate adsorption (Figure 14Aa), providing significant electrocatalysis behavior. [230] The aberration-corrected TEM image reveals the interfacial structures of the Pt-NC/Ni-MOF at the atomic scale (Figure 14Ab). The positive shift of A 1 and A 2 is attributed to the electronic state transition from low-energy Ni 2p 3/2 to high-energy Ni 3d, optimizing the adsorption of OH* (Figure 14Ac). The as-prepared Pt-NC/Ni-MOF presented outstanding electrocatalytic performance for both the HER and OER (Figure 14Ae) and outstanding stability toward the HER (Figure 14Ad). Lu et al. synthesized a Ni- and Fe-based bimetallic MOF on a conducting Ni foam, NFN-MOF/NF, which is an efficient and stable bifunctional electrocatalyst for water splitting (Figure 14Ba). [231] These NFN-MOF/NF materials are nanosheets with thicknesses of approximately 15 nm and are bundled into micrometer-sized clusters (Figure 14Bb). The NFN-MOF/NF catalyst can provide a current density of 10 mA cm −2 at a low cell voltage of 1.56 V, which is better than the performance of the Pt-C/NF//IrO 2 /NF couple, the accepted benchmark catalysts (Figure 14Bc). The Tafel slope of 143 mV dec −1 obtained for the NFN-MOF/NF//NFN-MOF/NF couple is also considerably lower than that of the benchmark couple (160 mV dec −1 ) (Figure 14Bd). Moreover, the NFN-MOF/NF catalysts possess remarkable durability, presenting a negligible chronopotentiometry decay of 7.8% at 500 mA cm −2 after 30 h (Figure 14Be). Recently, Lu et al. further developed well-blended Fe- and Ni-MOFs [232] and modulated Fe-rich FeNi(BDC)(DMF,F) and
Ni-rich FeNi(BDC)(F), [233] grown in situ on NF, to obtain MOF/NF composite electrodes, which showed remarkable electrocatalytic activity for water splitting as well as outstanding durability at a high current density. In recent years, to reduce the overpotential (η) resulting from the OER on the anode and the HER on the cathode, a wide variety of MOF-derived carbon-based materials have been thoroughly explored (e.g., noble metals, [70] non-noble metals/alloys, [64,65,76,[234][235][236] metal carbides, [237] oxides, [238] chalcogenides, [239][240][241] phosphides, [242][243][244][245][246][247][248][249][250][251][252] etc. [253][254][255] ). Among them, transition metal phosphides (Fe 2 P, [243] CoP, [246] Ni 2 P, [251,252] etc.) are promising for overall water splitting because of their remarkable activity, excellent stability, and low fabrication cost. In particular, bimetallic phosphides can further enhance the electrocatalytic activity by adjusting the atomic coordination and electronic structure. [242,245,[247][248][249][250] Recently, Sun et al. reported that an Fe-doped Ni(BDC) MOF (BDC = 1,4-benzenedicarboxylate) was utilized as the precursor to synthesize Fe-doped Ni 2 P/C toward highly efficient water splitting (Figure 15Aa). [249] As shown in Figure 15Ab, Fe-doped Ni 2 P nanoparticles were encapsulated in the CNTs after the phosphorization process. More importantly, Sun et al. employed DFT calculations and a series of experiments to systematically analyze and evaluate the effect of phosphorization and Fe doping (Figure 15Ac,d). The results show that while phosphorization is more beneficial for the OER than the HER, Fe doping not only tunes the micromorphology of the catalyst but also modulates the electronic structure, synergistically resulting in enhanced HER and OER performance. Consequently, the hybrid displayed outstanding electrocatalytic performance for overall water splitting with a cell voltage of 1.66 V at 500 mA cm −2 , which is far better than the standard electrode couple consisting of Pt/C and RuO 2 (Figure 15Ae). Cao et al. designed a 3D bifunctional porous Fe-CoP electrocatalyst formed by directly growing a Co-Fe PBA on Ni foam with further phosphorization, showing excellent performance toward large-current-density OER and overall water splitting. [250] The obtained Fe-CoP/NF catalyst with meso- and macropores presented high electrocatalytic efficiency and excellent stability for the OER and HER, reaching a current density of 10 mA cm −2 with a rather low cell voltage of 1.49 V in 1.0 m KOH, which far outperforms that of the electrolyzer with IrO 2 -Pt/C as the electrode couple.
Figure 15. A) a) Schematic illustration of Fe-doped Ni 2 P/C catalyst preparation. b) TEM image of Fe 2 -Ni 2 P/C. c) Free-energy diagrams of the intermediates on different modeled surfaces for the OER. d) ΔG H* for the HER. Inset: Volcano plot depicting the HER overpotentials as a function of ΔG H* . e) Polarization curves of overall water splitting in a water electrolyzer. A) Reproduced with permission. [249] Copyright 2019, American Chemical Society. B) a) Schematic illustration of the synthetic procedure for NG-NiFe@MoC 2 . b) HRTEM image of NG-NiFe@MoC 2 . c) LSV of water electrolysis with non-noble NG-NiFe@MoC 2 and a noble couple of Pt/C//RuO 2 in 1 m KOH condition. Inset: stability testing of the electrolyzer at 10 mA cm −2 . B) Reproduced with permission. [256] Copyright 2018, Elsevier Ltd.
Notably, the catalyst showed remarkable electrocatalytic performance for the OER and provided high
current densities of 500 and 1000 mA cm −2 , only requiring ultralow overpotentials of 295 and 428 mV, respectively, to satisfy the rigorous criteria for practical industrial applications. In recent times, a versatile strategy for designing high-performance electrocatalysts has been to controllably introduce two different metal species into a single nanostructure, namely Co-NC@Mo 2 C, [119] Co 3 O 4 -RuCo@NC, [121] NG-NiFe@MoC 2 , [256] Co/Co 9 S 8 @NSOC, [122] NiO/Co 3 O 4 , [120] Co@Ir/NC, [257] Ni 2 P/CoN-PCP, [258] among others, [92,[259][260][261][262][263][264] to further facilitate and accelerate the activation process of the reactants. For instance, Hu et al. synthesized MoC 2 -doped NiFe alloy nanoparticles (NPs) embedded within several-layer-thick N-doped graphene (NG-NiFe@MoC 2 ) using one-step calcination of hybrid precursors composed of PVP-encapsulated NiFe-PBA and grafted Mo 6+ cations (Figure 15Ba). [256] The HRTEM image of NG-NiFe@MoC 2 (Figure 15Bb) demonstrated that the majority of NPs were embedded within several layers of the graphene shell. An NG-NiFe@MoC 2 -based water electrolyzer required a potential of 1.53 V to reach a current density of 10 mA cm −2 in 1.0 m KOH with impressive durability of 10 h, exceeding the noble Pt/C//RuO 2 -based electrolyzer (Figure 15Bc). Recently, Du et al. [122] proposed a facile route to fabricate Co/Co 9 S 8 nanoparticles incorporated into an N, S, and O ternary-doped carbon support with a Co-based MOF (Co-NSOMOF) as a single precursor. The optimized Co/Co 9 S 8 @NSOC exhibited impressive performance for overall water splitting, resulting from the synergistic effects and the protection of the ternary-doped carbon shell, requiring a rather low cell voltage of 1.56 V at 10 mA cm −2 . The electrochemical performances of recent MOF-based/derived catalysts for water splitting considered in this review are listed in Table 4. To briefly conclude, MOF-based/derived materials show promise for widespread application as water-splitting electrocatalysts. For the HER under alkaline conditions, the reacting species are H 2 O or OH − , whose conversion to H* is kinetically much slower than the conversion from H + to H* under acidic conditions. As a result, the HER performance under acidic conditions is generally superior to that under alkaline conditions, as identified in the works reviewed above. In contrast to the OER electrocatalysts, Mo- and W-based materials have been widely used to catalyze the HER. Rather than the metal oxides and hydroxides commonly employed as OER catalysts, other types of metal compounds (e.g., phosphides, nitrides, and chalcogenides) exhibit outstanding HER performance. As shown in Figure 16a,b, a MOF-based/derived catalyst with an overpotential of 150 mV at a current density of 10 mA cm −2 can be considered an excellent catalyst for the HER. Various kinds of MOF-based electrocatalysts with remarkable OER electrocatalysis behavior have been reported (Figure 16c). Apart from the catalytic performance, electrocatalysts for the OER need excellent stability under harsh alkaline environments. Thus, for MOF-derived OER catalysts, a high extent of graphitization of the carbon substrate and the existence of metal-based constituents are favored. Moreover, a limited number of studies on MOF-derived SACs for water oxidation have been reported, resulting from the SACs migrating and aggregating into NPs under harsh reaction conditions.
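To connect the single-electrode metrics above with the two-electrode benchmarks quoted below, the operating voltage of a full water electrolyzer can be decomposed (a standard textbook approximation, not a result taken from the works reviewed here) as

\[
E_{\mathrm{cell}}(j) \approx 1.23\ \mathrm{V} + \eta_{\mathrm{OER}}(j) + \eta_{\mathrm{HER}}(j) + j\,R_{\mathrm{ohmic}},
\]

so a cell voltage below roughly 1.6 V at 10 mA cm −2 implies that the combined HER and OER overpotentials, plus ohmic losses, are kept below about 0.37 V.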
Figure 16d shows, through appropriate design of the composition and structure, some MOF-derived electrocatalysts can attain high OER activity with an overpotential of no more than 300 mV at 10 mA cm −2 . Water electrolysis is the main application of HER/OER bifunctional catalysts, which can effectively optimize the energy utilization of the water electrolyzer. The cell voltage to afford a particular current density (generally 10 mA cm −2 ) in an electrolysis cell is commonly applied to estimate the activity of an HER/OER bifunctional electrocatalyst. A large variety of MOFbased materials and MOF-derived carbon-based materials (e.g., metal NPs, metal carbides, phosphides, and complicated metal compounds) have been explored for efficient water splitting (Figure 16e,f). The MOF-based/derived catalyst with a cell voltage of less than 1.6 V at 10 mA cm −2 can be regarded as an excellent catalyst for water splitting. Conclusions and Perspectives We have reviewed pivotal advances and provided commentary on recent research on engineering MOF nanoarchitectures for efficient electrochemical water splitting. Benefiting from the large surface area, adjustable chemical components, tunable pore structure, controllable topology, and well-defined surface functionality, various MOF-based/derived materials with excellent water splitting performance have been developed. A range of new synthetic strategies for chemical composition optimization and structural functionalization to improve the electrocatalytic performance of catalytic sites are highlighted herein, especially at the molecular and atomic scales and for tailored nanoarchitectures and configurations. Meanwhile, the fast-growing breakthroughs in catalytic activity, identification of highly active sites, fundamental mechanisms, and recent designs of electrolyzers to promote the commercialization of electrochemical hydrogen production are also fully discussed and summarized. Although abundant efforts have been devoted to this newly emerging area, and rapid and promising progress has been achieved in recent years, there are several challenges for the future engineering of high-efficiency and durable MOF-based/derived electrocatalysts for water splitting. The general comments are summarized below. 1) For MOF-based materials, poor electron conductivity is the biggest obstacle for electrochemical applications and needs to be further enhanced by adjusting the chemical properties and structure of the ligands to increase the electron transport and electrocatalytic reaction rate, especially for pristine MOFs. In addition to the poor conductivity, the controversial stability should also be overcome to satisfy the requirements of actual water splitting. The characterization and regulation of the synergistic effects between the functional components and MOFs remain challenges for guest@MOF and MOFs/ substrates. 2) The high cost of ligands, harsh synthetic conditions, and limited synthetic strategies comprise the major impediment to the large-scale preparation of MOFs, which is crucial for the development of cost-effective water splitting electrocatalysts. 3) For MOF-derived materials, high-temperature calcination usually causes severe structural damage and the collapse of porous frameworks, which hamper the rationally controlled design of MOF derivatives and decrease the accessibility of catalytic centers and their intrinsic electrocatalytic performance. 
4) Although other single-atom electrocatalytic systems have been used in applications for the HER and OER, MOF-derived SACs with efficient catalytic activity and stability for water splitting are rarely reported. Considering the maximum atom-utilization efficiency and low cost, the development of MOF-derived SACs is urgently required for water electrolysis. 5) Research on the synthetic mechanisms and characterizations of intrinsic active sites for MOF-based/derived electrocatalysts are still in the initial stage. More comprehensive and thorough mechanistic studies are urgently needed to guide the future rational synthesis of MOF-based/derived electrocatalysts with well-defined catalytic structures and durable activity. 6) For the industrialization of water splitting, numerous challenges still exist, such as the formation of explosive H 2 /O 2 gas mixtures and reactive oxygen species, the limited HER rate due to the more sluggish kinetics of the OER, as well as the lack of cost-efficient H 2 storage and transport systems. To address the above challenges, some strategies and perspectives are detailed below. 1) Numerous efficient and versatile strategies have been proven to strengthen the electrocatalytic activity of pristine MOFs. For instance, incorporating nonbridging ligands into the MOF could significantly improve the electrocatalytic performance. Synthesizing the π-conjugated structure with transition metal atoms and aromatic organic ligands as precursors could enhance the conductivity. Converting bulk MOF crystals into 2D nanosheets could also enable the higher exposure of the active surface sites. Designing bimetallic MOFs may further optimize the electrocatalytic performance for water splitting because of the synergistic effect between the multi-metal element. Furthermore, MOFs could be combined with various functional materials (e.g., metal NPs, molecule complexes, and graphene) to achieve more exposure of the active catalytic sites, thus significantly improving the conductivity for the resulting MOF composites. 2) Solvothermal reactions with extra salt solutions and the synthesis of bimetallic MOFs are commonly employed strategies to introduce the second metal component into MOFs, followed by pyrolysis, to realize heteroatom doping, more accessible active sites, well-controlled nanostructures, welldesigned synergistic effects, and robust structural stability. During the calcination process, the low-boiling-point metals in the frameworks evaporate, causing the formation of additional porous structures for the MOF-derived materials. 3) Under reaction conditions for water splitting, carbons are usually not chemically or thermodynamically stable; thus, a carbon matrix with a high level of graphitization is preferred to maintain the high conductivity of the catalysts. 4) With the advances in characterization techniques (e.g., XAFS, HAADF-STEM, and Raman spectroscopy) and theoretical calculations, systematic studies will provide a new perspective to identify the intrinsic catalytic active sites and many new possibilities for the rational design and performance breakthroughs in state-of-the-art water-splitting electrocatalysts. 5) Low-cost and high-performance non-precious electrocatalysts for overall water splitting are urgently needed to promote the development of affordable water-splitting electrolyzers, thus accelerating future industrialization. 
Meanwhile, new techniques to fabricate electrodes or electrolyzers are required for the rational assembly of water splitting devices for scalable and safe production or utilization of hydrogen. For instance, decoupled water electrolyzers have recently been developed to separate the HER and OER processes to avoid the formation of explosive H 2 /O 2 gas mixtures, which is usually achieved by redox mediators (Figure 17a). Due to the lack of cost-efficient hydrogen storage and transport systems, tandem water electrolysis has been developed; as shown in Figure 17b, the produced hydrogen can be converted in situ to valuable chemicals (e.g., NH 3 , CH 4 , etc.). [265]

6) Low-temperature water electrolyzer systems can use either a proton exchange membrane (PEM) or an alkaline anion exchange membrane (AEM) as the electrolyte. Nevertheless, the high capital cost of the cell stack and the large noble metal loading required for the electrodes are notable disadvantages of the PEM-based electrolyzer. It has been found that alkaline AEM electrolysis can offer the possibility of replacing the PEM-based electrolyzer, including the possible use of noble metal-free catalysts, without significant loss of catalytic performance. For instance, Li et al. recently reported several quaternized polystyrene electrode binders, which show excellent activity for the HER and OER in AEM electrolyzers. [266] The NiFe-anode-catalyzed AEM electrolyzer showed operational performance comparable to that of a state-of-the-art PEM electrolyzer using noble metals (Figure 17c,d).

In summary, from a practical standpoint, the development of low-cost and high-efficiency electrocatalysts is crucial to address the challenges, both in water splitting and in other energy devices, for overcoming the ever-growing energy crisis and environmental concerns. We hope that this review will advance the rapid exploration of MOF-based/derived electrocatalysts for water splitting. Furthermore, we anticipate that, with the development of inexpensive and efficient electrocatalysts, a new technological revolution will take place in the field of electrochemical hydrogen production in the not-too-distant future.

Figure 17. a) Decoupled water electrolyzer system. b) Tandem water electrolyzer system. a,b) Reproduced with permission. [265] Copyright 2018, American Chemical Society. c) The chemical structures of the AEM electrolytes. TMA: trimethyl ammonium functionalized polystyrenes. HTMA-DAPP: hexamethyl trimethyl ammonium-functionalized Diels-Alder polyphenylene. d) Performance of the AEM electrolyzer equipped with HTMA-DAPP. c,d) Reproduced with permission. [266] Copyright 2020, Springer Nature.
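To put the activity benchmarks quoted in this review (an OER overpotential of no more than 300 mV at 10 mA cm −2 and a full-cell voltage below 1.6 V at the same current density) in concrete numerical terms, the short sketch below relates measured potentials to those thresholds. The half-cell and cell readings in it are hypothetical placeholders rather than values for any catalyst discussed here; only the 1.23 V thermodynamic potential and the benchmark thresholds themselves come from the text.

# Minimal sketch: relating the benchmarks quoted above (overpotential and cell
# voltage at 10 mA cm^-2) to measured potentials. Numerical inputs below are
# hypothetical placeholders, not data from the review.

E_OER_EQ = 1.23   # V vs. RHE, equilibrium potential of the OER
E_HER_EQ = 0.00   # V vs. RHE, equilibrium potential of the HER
V_THERMO = 1.23   # V, thermodynamic minimum voltage for water splitting

def overpotential(measured_potential_v, equilibrium_potential_v):
    """Overpotential eta = |E_measured - E_equilibrium| at the chosen current density."""
    return abs(measured_potential_v - equilibrium_potential_v)

def voltage_efficiency(cell_voltage_v):
    """Fraction of the applied cell voltage doing the thermodynamically required work."""
    return V_THERMO / cell_voltage_v

# Hypothetical half-cell readings at 10 mA cm^-2
eta_oer = overpotential(1.52, E_OER_EQ)    # 0.29 V
eta_her = overpotential(-0.08, E_HER_EQ)   # 0.08 V

# Hypothetical two-electrode cell voltage at 10 mA cm^-2
u_cell = 1.58  # V

print(f"OER overpotential: {eta_oer*1e3:.0f} mV "
      f"({'meets' if eta_oer <= 0.300 else 'misses'} the <=300 mV benchmark)")
print(f"Cell voltage: {u_cell:.2f} V "
      f"({'meets' if u_cell < 1.6 else 'misses'} the <1.6 V benchmark), "
      f"voltage efficiency ~{voltage_efficiency(u_cell)*100:.0f}%")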
2021-03-23T06:16:43.981Z
2021-03-22T00:00:00.000
{ "year": 2021, "sha1": "7e7dfa83803b270e9330e50b6849a47c9c7cff6e", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adma.202006042", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "6e619729dfe89b069632e50e424ceb28b80efb23", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
53290501
pes2o/s2orc
v3-fos-license
Genetic algorithm for optimal distribution in cities

The problem addressed in this project is the routing of electric vehicles: finding the best routes for this type of vehicle so that they reach their destinations without running out of power, while optimizing transportation costs as much as possible. The importance of this problem lies mainly in the shipping sector of the near future, when obsolete energy sources are replaced with renewable ones. Each vehicle carries a number of packages that must be delivered at specific points in the city, but, being electric, it does not have an optimal battery life, so having the ideal routes traced out is vital for its proper functioning. Nowadays, applications of this problem can already be seen in the cleaning sector, specifically with the trucks responsible for collecting garbage, which aim to cover the entire city in the most efficient way without letting excessive garbage accumulate.

INTRODUCTION

The accelerated development of new technologies has brought with it the problem of using them in the best way, without wasting their potential and while taking into account all the limitations they bring. The situation with electric vehicles is no different: they have a very large field of action and bring a significant improvement to both society and the environment, but they also mean a shorter range per trip because of the limitations of electric batteries compared with fuels. This document discusses different possible solutions that can reduce this problem as much as possible, through algorithms that analyze, in different ways, the data structures in which the information of each "map" is stored.

SIMILAR PROBLEMS

3.1 Ant Colony Optimization (ACO)

The ant colony algorithm aims to mimic the behavior of these insects, which move from a start node to an end node leaving a trail that fades over time and is followed by the other ants. As the trail evaporates, the longer paths from the start node to the end node are forgotten, while the short and efficient ones are reinforced by the ants and continue to be used.

Genetic Algorithms

Genetic algorithms are algorithms that, as the name says, "evolve" to find the best solution to a problem. They work by taking a number of inputs and, from them, generating random candidate solutions. After many candidates have been evaluated, the algorithm chooses the ones that have provided the best results and combines and mutates them, continuing to test and mix solutions until a sufficiently good result is reached that provides a near-optimal solution to the problem.

Constructive heuristics

A constructive heuristic is an algorithm that builds a complete solution from an empty one by adding, in each iteration, the best local choice from a set of possible choices. This method has been used to solve problems such as the travelling salesman problem; however, despite producing a complete solution, it is usually not the most effective.
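As a minimal illustration of the constructive heuristic just described (an illustrative sketch, not code from the project; the delivery points and distances are invented for the example), a nearest-neighbour rule builds a complete route by always taking the shortest arc from the current node to a node that has not been visited yet:

# Illustrative sketch: a nearest-neighbour constructive heuristic for a
# travelling-salesman-style delivery route. At each step the shortest arc to an
# unvisited node is chosen. The points below are hypothetical.
import math

points = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (6, 4), "D": (1, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(start="depot"):
    unvisited = set(points) - {start}
    route, current, total = [start], start, 0.0
    while unvisited:
        # Greedy local choice: closest unvisited node from the current node.
        nxt = min(unvisited, key=lambda n: dist(points[current], points[n]))
        total += dist(points[current], points[nxt])
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    total += dist(points[current], points[start])  # return to the depot
    return route + [start], total

route, length = nearest_neighbour_route()
print(route, round(length, 2))  # complete, but generally suboptimal, tour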
The example above shows a constructive heuristic that always chooses the shortest arc leaving the current node towards an unvisited node.

Tabu search

Created by Fred W. Glover, tabu search is a mathematical optimization method that iteratively generates different solutions and stores them in a memory structure until a certain stopping condition is met; at the end, it selects the best of the generated solutions as the final one. For example, in the travelling salesman problem, an initial solution is generated with a constructive heuristic as described above, and from it new solutions are produced by randomly exchanging the order in which cities are visited until a given number of iterations is reached.

4. GENETIC ALGORITHM

These algorithms evolve a population of individuals by subjecting it to random actions similar to those that act in biological evolution, as well as to selection according to some criterion that decides which individuals are the best adapted and survive, and which are the least apt and are discarded.

Operation of the data structure

(Figure: the values represent the fitness of the individuals.)

Design criteria of the data structure

Some of the criteria we rely on are that this type of algorithm operates simultaneously on several solutions instead of working sequentially like traditional techniques, and that it is very easy to execute on modern massively parallel architectures. In addition, when genetic algorithms are used for optimization problems that maximize an objective function, they are less affected by local maxima (false solutions).

CONCLUSIONS

In conclusion, this is a problem we will have to deal with sooner or later, perhaps earlier than imagined, which is why we must work on alternative solutions to the problems that are to come. The most important lessons we learned from this solution are the ones listed above in the design criteria, some of which show that this is a very suitable approach.

Future work

In future work we would like to start earlier, so that we have time to develop other data structures and compare results.

Acknowledgements

We would like to thank our teachers, EAFIT University, and our classmates for helping us develop this project.
2018-11-13T14:02:29.000Z
2018-11-13T00:00:00.000
{ "year": 2018, "sha1": "0ac8b205c5b4d9598c9fbb219b1c66806ee13b4a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0ac8b205c5b4d9598c9fbb219b1c66806ee13b4a", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
4779908
pes2o/s2orc
v3-fos-license
Intrinsic Photoconductivity of Ultracold Fermions in Optical Lattices We report on the experimental observation of an analog to a persistent alternating photocurrent in an ultracold gas of fermionic atoms in an optical lattice. The dynamics is induced and sustained by an external harmonic confinement. While particles in the excited band exhibit long-lived oscillations with a momentum dependent frequency a strikingly different behavior is observed for holes in the lowest band. An initial fast collapse is followed by subsequent periodic revivals. Both observations are fully explained by mapping the system onto a nonlinear pendulum. We report on the first experimental observation of a persistent alternating photocurrent in an ultracold gas of fermionic atoms in an optical lattice. The dynamics is induced and sustained by an external harmonic confinement. We find a counterintuitively momentum-dependent oscillation frequency for excited particles and a fast decay of holes which we attribute to spatial trapping. Lifetime measurements reveal a significant enhancement of particle-hole recombination with increasing interactions. Photoconductivity describes the change of a materials conductivity following an excitation with incident photons. If the photon energy is resonant with a band transition, electrons are excited from the valence band to the conduction band and an initial insulator becomes conducting [1]. Today, photoconductivity is widely used in technological applications such as semiconductor photodiodes and photoresistors. It also provides a powerful probe for novel materials, such as graphene [2], transistors made from carbon nanotubes [3] or semiconductor nanowires [4]. To extend the understanding of such complex materials, atomic quantum gases have proven to be powerful model systems. In this context, however, it is desirable to develop and adopt versatile probing methods [5][6][7]. Owing to its excitational structure in several bands, photoconductivity can provide deeper insight into intra-and interband dynamics as well as orbital effects, which gained much interest in recent years. In the field of quantum gases, multiband interactions and dynamics have been experimentally studied mainly with bosonic atoms [8][9][10][11][12][13][14][15][16][17]. For fermionic atoms, population transfers into higher bands have been recently demonstrated via Landau-Zener transitions across Dirac cones [18] and momentum-resolved lattice amplitude modulation [19]. In this letter, we study experimentally and theoretically the photoconductivity of fermionic atoms in an optical lattice. We create uncoupled particle and hole excitations using lattice amplitude modulation and thoroughly investigate the dynamics of the atoms in the second excited band as well as the holes in the valence band. The lattice modulation corresponds to the incident photons in solid state photoconductivity experiments, while the atoms (holes) correspond to the electrons (holes) in the conduction (valence) band. A typical difference between ultracold gas experiments and real materials is the presence of an overall harmonic confinement, which -in our experiment -plays the role of an external force. It induces pronounced oscillations of the atoms in the excited band with a counterintuitive dispersion relation which differs maximally from the zero lattice case for shallow lattices. We explain this with a reduced effective mass in the conduction band. 
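As a rough numerical illustration of this effective-mass picture (a sketch under stated assumptions, not the authors' code), the band structure of the one-dimensional lattice potential s E r cos 2 (k BZ x) can be diagonalized in a plane-wave basis; the curvature of the conduction band at q = 0 then gives the effective mass m* and, through the relation ω̃ = √(m/m*) ω 0 used in Eq. (1) below, the renormalized oscillation frequency.

# Illustrative sketch (not the authors' code): effective mass at the bottom of a
# Bloch band of V(x) = s*Er*cos^2(k_BZ*x) by plane-wave diagonalization, and the
# resulting renormalized trap frequency w~ = sqrt(m/m*)*w0.
import numpy as np

def band_energy(q, s, band=2, n_pw=21):
    """Energy (units of Er) of the given band at quasimomentum q (units of k_BZ)."""
    n = np.arange(-(n_pw // 2), n_pw // 2 + 1)
    # cos^2 potential: s/2 on the diagonal, s/4 coupling plane waves whose
    # wave numbers differ by 2*k_BZ.
    H = np.diag((q + 2.0 * n) ** 2 + s / 2.0)
    H += np.diag(np.full(n_pw - 1, s / 4.0), 1) + np.diag(np.full(n_pw - 1, s / 4.0), -1)
    return np.sort(np.linalg.eigvalsh(H))[band]

s = 10.0    # lattice depth in units of the recoil energy Er
dq = 1e-3   # finite-difference step in units of k_BZ

# Curvature of the conduction band (second excited band) at its minimum q = 0.
d2E = (band_energy(dq, s) - 2.0 * band_energy(0.0, s) + band_energy(-dq, s)) / dq**2
# Free particle: E = Er*(q/k_BZ)^2, i.e. d2E/dq^2 = 2*Er, so m/m* = d2E/(2*Er).
mass_ratio = d2E / 2.0
print(f"m/m* = {mass_ratio:.2f}, renormalized frequency w~/w0 = {np.sqrt(mass_ratio):.2f}")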
In strong contrast to the long-lived coherent dynamics of the atoms, we observe a dramatically shorter lifetime of the holes in the valence band. We attribute this effect, generic to harmonically trapped lattice systems, to spatial trapping at localized states at the edge of the atomic cloud. As a key result of this paper, photoconduction of fermionic atoms is mainly connected to long-lived populations in excited bands, forming the majority component, whereas holes in the valence band decay quickly.

For the experiments we prepare either a spin-polarized Fermi gas with m = 9/2 or an interacting binary spin mixture of m = −9/2 and m = −5/2 in the f = 9/2 ground-state manifold of 40 K (for details see [20]). The atoms are held in the combined potential of an optical dipole trap with variable trapping frequency ω 0 and an optical lattice at λ = 1030 nm. To induce a photocurrent, we excite the system via lattice modulation spectroscopy [21,22] as explained in [19]. This two-photon process creates particle excitations in the conduction band and leaves vacancies (holes) in the valence band. Both have the same initial quasimomentum q 0 . Due to the different curvature of the involved bands, q 0 can be arbitrarily tuned by choosing a particular modulation frequency [see Fig. 1(a)]. To record the time evolution of the excited system, we detect the quasimomentum distribution by performing adiabatic bandmapping followed by resonant absorption imaging after 15 ms time-of-flight [19,23]. Recall that the bandmapping technique maps particles in the second excited band to real momenta in the intervals k/k BZ = [2,3] and k/k BZ = [−3, −2], constituting the third Brillouin zone. Particles in the lowest band are mapped to k/k BZ = [−1, 1], with k BZ = 2π/λ.

A typical time evolution for a spin-polarized gas is shown in Fig. 1(b) after an excitation at a quasimomentum q 0 = 0.5 and a lattice depth of s = 10 in units of the recoil energy E r = ℏ 2 k 2 BZ /2m, with m the mass of 40 K. The atoms in the conduction band show a pronounced oscillation in momentum space. The decay time of the excitations is of the order of 100 ms. This indicates a very slow recombination between particles and holes. However, the holes in the valence band undergo a fast decay and are only visible in momentum space for the first 2 ms. This indicates a local trapping of the holes within the valence band, reminiscent of the trapping of charge carriers in photoconducting solids [1]. Consequently, the excited atoms form the majority component of the photocurrent in our system. We concentrate on their behavior in the following and afterwards analyze the trapping of the holes in more detail. Our method is related to the measurements in [13] using bosonic atoms, where oscillations in the spatial domain were studied. Hence, neither holes nor excitations with small q 0 could be investigated.

We first examine the effect of the harmonic trapping potential on the dynamics of the particles. Figure 1(c) shows two typical measurements with different harmonic trapping frequencies ω 0 for otherwise equal initial parameters.

Figure 1(c) caption: Center-of-mass quasimomentum of the particle excitation in the conduction band at 10 Er for two different trapping frequencies ω0 = 2π × 66 Hz (circles) and ω0 = 2π × 50 Hz (diamonds). Solid lines are fits to the data. Extracted oscillation frequencies are Ω = 2π × 295 ± 9 Hz (circles) and Ω = 2π × 213 ± 3 Hz (diamonds).
Fits to the data reveal that the oscillation frequency depends linearly on ω 0 but is substantially higher than the bare trapping frequency. This result significantly deviates from the dynamics in harmonic potentials without any lattice, where the oscillation frequency is equal to ω 0 . To understand the main features of the photocurrent we first present two simplified descriptions before we describe the complete experimental and numerical results.

Consider the case of a particle at a given quasimomentum in an excited band of an optical lattice. This is depicted in Fig. 2(a) using the extended zone scheme. In the presence of a harmonic confinement different quasimomenta are coupled, which causes an oscillation in momentum space. The existence of a substantial band gap between the first and second excited bands leads to a striking modification of the simple harmonic oscillator dynamics: when a particle with initial quasimomentum q 0 reaches the gap, it is Bragg reflected and thus omits a huge part of the extended zone scheme. The oscillation continues until the particle reaches −q 0 and is reversed. Thus, depending on q 0 , the intraband dynamics is limited to different fractions of the harmonic oscillation. This leads to a strong momentum dependence of the oscillation frequency ω(q 0 ). In summary, by omitting the inner part of the spectrum, the oscillation period is dramatically decreased in comparison to the pure harmonic case. Recall that the oscillation is still driven by the harmonic trap and thus the frequency is proportional to ω 0 , as observed experimentally.

For oscillations with small initial quasimomentum, there is a complementary explanation which exploits the form of the bands in the reduced zone scheme. It shows that the photocurrent frequency non-trivially depends on the lattice depth: the presence of an optical lattice results in a modified kinetic energy with a momentum-dependent effective mass. Around q = 0, where the conduction band has a minimum, this effective kinetic energy can be approximated as a quadratic dispersion with a renormalized mass m*. The effective Hamiltonian is thus given by

H eff = p²/(2m*) + (1/2) m ω 0 ² x² = p²/(2m*) + (1/2) m* ω̃² x² ,   (1)

where m is the real mass of the particles and ω̃ = √(m/m*) ω 0 is the renormalized trapping frequency. Especially for shallow lattices the effective mass m* becomes much smaller than m and thus ω̃ becomes very large, as shown in Fig. 2(b). This result leads to the counterintuitive feature that the dynamics in shallow lattices maximally differs from the zero-lattice case. This apparent contradiction is resolved by the onset of Landau-Zener tunneling to the first excited band in very shallow lattices. The bandgap becomes small and vanishes in the zero-lattice limit, where the atoms oscillate in the harmonic confinement with the bare trapping frequency ω 0 . In contrast to the conduction band, in the valence band the effective mass m* is always larger than m for any lattice depth and thus the oscillation frequency is smaller than ω 0 . Therefore, photoconductivity in the conduction band qualitatively differs from the conduction in the valence band.

We measured the photocurrent in our system to investigate all abovementioned dependencies. Figure 3(a) shows our measurements of the photocurrent frequency as a function of the lattice depth and the initial quasimomentum. For spin-polarized gases, we measured the frequency of the conduction band photocurrent at s = 8 and s = 10 for different q 0 .
In the interacting case we investigated the three lattice depths s = 4, 8, 16 at the background scattering length of 168.5 a 0 [24]. All data exhibit the behavior expected from the argumentation presented in connection to Fig. 2: The oscillation frequency ω decreases with increasing lattice depth and increasing q 0 . To obtain a quantitative prediction for the dispersion relation ω(q 0 , s) of the photocurrent, we performed nu-merical calculations. We assume a single particle in the conduction band, represented by a Gaussian distribution of Bloch states centered around a given quasimomentum q 0 . This state is subject to the full potential of the optical lattice and the harmonic confinement. For further details see [20]. The results of our calculation are presented in Fig. 3(a) in comparison with our experimental data, showing very good agreement. Figure 3(b) displays an exemplary comparison of experimental data in momentum space and a corresponding calculation. The oscillatory behavior with a renormalized frequency is clearly reproduced by the numerical results. In particular, the calculations confirm the experimentally determined linear dependence between ω and the bare trapping frequency ω 0 for all q 0 . Moreover, we find that for small q 0 , where (1) is valid, it agrees with the numerical results. For the interacting case, no significant deviations from the single-particle picture could be observed in the oscillation frequency. In contrast to the spin-polarized case, however, we find a substantially enhanced decay rate of atoms from the conduction band. This includes both, loss of particles from the trap and decay of particles into lower bands. To investigate the decay-dependence on the interaction strength, we used a Feshbach resonance at 224 G [25] with a width of 7.6 G [26]. The corresponding data is shown in Fig. 3(c). The total decay rate from the conduction band strongly depends on the interaction, while the total atom loss is independent of the scattering length, which we checked independently. Thus, the results are a measure for the time needed for recombination of free charge carriers, which increases with decreasing interaction as naively expected and shows a maximum at vanishing interaction. As an application, this effect might be used as a new probe to characterize more precisely the zero-crossing of Feshbach resonances. In conclusion, we find very good agreement of experiment and numerical calculations for many different parameters, which shows, that photoconductivity of excited fermions in optical lattice with superimposed harmonic traps can be thoroughly understood. In addition to the excited particles in the conduction band, one can clearly observe holes in the valence band in the photoconduction measurement of Fig. 1(b). As shown in more detail in Fig. 4(a), the holes decay, however, very fast on the timescale of a few ms, which cannot be explained by recombination with excited atoms, which have a much longer lifetime [see Fig. 3(c)]. The decay of the holes in momentum space instead indicates spatial trapping, which is also observed in photoconducting normal insulators [1]. In these systems, local imperfections of the periodic potential typically lead to trapping of the holes, which can no longer participate in the photocurrent anymore. In ultracold atom experiments, it is well known, that the harmonic potential leads to localized states at the edge of the system [27]. 
These states are off-resonant for tunneling in the lattice, since the lo- cal potential difference between two neighboring sites is larger than the bandwidth of the lattice. This is true both for particles and for holes. However, for a hole to be dynamically trapped in localized states, in analogy to real solids, it is necessary, that these state are occupied by particles before the excitation. Since the localized states are at the edge of the atomic cloud, their occupation increases with the filling and the lifetime of the holes should decrease accordingly. To quantitatively analyze the trapping of the holes in our experiment, we adopt a theoretical description from solid state physics, where holes in an otherwise completely filled valence band can be described as particles with negative mass. We have simulated the holes in analogy to the excited particles by assuming a single particle with negative mass in the valence band of the combined periodic and harmonic potential (see [20] for details). Figure 4(a) shows, that the decay of the holes can be very accurately described by our model for various regimes. Examplarily, in Fig. 4(b) a comparison of experimental and numerical data in momentum space shows the fast decay of a hole in less than 1 ms. As for the excited atoms in the conduction band, we investigated the influence of the harmonic trapping in detail. Figure 4(c) shows the decay of a hole in dependence of ω 0 . The lifetime of the holes clearly decreases with increasing ω 0 in good agreement with our numerical calculations. The main effect of the increased confinement is to increase the local potential difference between two neighboring sites. This creates more localized states near the center of the system. At constant total filling, the result is a higher relative population of such localized states. Assuming local trapping of the holes, this explains the decrease in lifetime. We also numerically checked the dependence on the filling at constant trapping strength. The results are shown in Fig. 4(d) for two different trapping frequencies ω 0 . At high filling the holes decay much faster than for a system with low filling. This supports our experimental findings from Fig. 4(c) and our intuitive explanation. In conclusion we have presented a comprehensive study of photoconductivity in an ultracold gas of fermionic atoms. By independently analyzing the dynamics of excited particles and holes we have shown that atoms in the conduction band constitute the majority charge carriers in our system. The observed long-lived oscillatory dynamics could be reproduced very well by our numerical simulations and proves to be very sensitive to the harmonic as well as the lattice potential. In particular we measure that counterintuitively the dynamics maximally differs from the purely harmonically trapped case in very shallow lattices. Opposed to the persistent dynamics of the atoms in the excited band, we observe a very short free-carrier lifetime of the holes which we attribute to trapping in localized states at the edge of the combined lattice and harmonic potential. These results may prove crucial for further studies on particle-hole excitations such as excitons. The presented measurements extend the available techniques to explore dynamical properties of optical lattice systems and equally important emphasize the increased role of the harmonic confinement for experiments involvong excited spatial bands. 
Finally, our results constitute an important contribution for the understanding of fundamental dynamical properties of fermionic quantum gases in optical lattices. We thank P. Törmä for valuable discussion. We acknowledge financial support by DFG via Grant No. FOR801. * J. Heinze, J. S. Krauser SUPPLEMENTAL INFORMATION This supplemental information discusses the preparation of our atomic sample (S1), details of the fitting procedures for the experimental data (S2,S3) and the theoretical calculations both for particle (S4) and hole (S5) excitations. S1. PREPARATION OF THE ATOMIC SAMPLE By sympathetic cooling, we create a mixture of spinpolarized 87 Rb and 40 K atoms in a magnetic trap. The atoms are transferred adiabatically to a crossed optical dipole trap operated at 811 nm with a 1/e 2 radius of 120 µm. After switching off the magnetic trap, we remove the rubidium atoms from the trap using a resonant light pulse. For experiments with spin polarized atoms, the preparation is finished her. For experiments with interacting mixtures, we use a series of rf-pulses and -sweeps to prepare an equal mixture of the hyperfine states m = −9/2 and m = −5/2 in the hyperfine manifold f = 9/2. This mixture is evaporatively cooled in the following by reducing the laser power of the optical dipole trap. The final particle number is about N = 5·10 4 atoms at typical temperatures of 0.2 T F . After the preparation we linearly ramp up an optical lattice within 100 ms. The lattice consists of up to three orthogonal retro-reflected laser beams at λ = 1030 nm with a 1/e 2 radius of 200 µm. For measurements at the Feshbach resonance, the magnetic field was set to the final value 50 ms prior the 100 ms optical lattice ramp. To initialize the photocurrent, we modulate the amplitude of one of the lattice directions for 1 ms with a frequency, that is resonant with a transition from the lowest energy band to the second excited band. This excites a fraction of particles and leaves vacancies in the lowest band. Due to the different curvature of the bands, the resonance condition depends on the quasimomentum. By tuning the modulation frequency, we have full control over the quasimomentum of the excited particles. Since lattice amplitude modulation does not imprint any quasimomentum, the holes in the lowest energy band have the same quasimomentum as the particles. S2. ANALYSIS OF EXPERIMENTAL DATA: PARTICLE EXCITATIONS In this section we describe how we extract the oscillation frequency of the excited atoms from the experimental data. For each time step we determine the quasimomentum of the excitation by taking the center-of-masses of the atoms in the conduction band independently at positive and negative momentum. To be insensitive to global displacements, we measure the difference of both excitation centers instead of their absolute positions. We extract the oscillation frequencies Ω from the differential center-of-masses by fitting an exponentially damped cosine of the form ∆q(t) = A exp(−Γt) cos(Ωt + Φ) + C , with oscillation amplitude A, damping rate Γ, a phase shift Φ and a constant offset C. In the experiment, always two excitations with opposite initial quasimomenta q 0 and −q 0 are created. Both excitations independently perform oscillations in the combined potential of lattice and harmonic trap. After a quarter of an oscillation period, both excitations arrive at q = 0, are Bragg reflected and continue at the other side of the Brioullin zone, respectively. 
Since the excitations are not distinguishable, our center of mass determination cannot resolve this. Therefore the extracted data artificially exhibits a turning point of the oscillations at this position. The data thus shows an oscillation with twice the fundamental frequency: Ω = 2ω. Consequently, the factor of 2 is corrected in all experimental data except for Fig. 1, such that we always show ω instead of Ω. S3. ANALYSIS OF EXPERIMENTAL DATA: HOLE EXCITATIONS To determine the depth of the hole excitation in the valence band, we fit a sum of three Gaussians to the first Brioullin zone in the time-of-flight picture. One of the Gaussians represents the atomic background density in the valence band. The other two represent the holes at positive and negative q, respectively. The background is determined from the momentum distribution directly after the excitation for each time series individually. For the dynamical evolution we take the form of the background to be constant for all times and solely allow for a variation of the absolute magnitude as a function of time. S4. CALCULATION OF CONDUCTION BAND DYNAMICS In this section we describe the numerical calculations for the particle excitations in the conduction band. To derive quantitative predictions for the oscillations for all initial quasi-momenta, we performed calculations including both the periodic and the harmonic confinement. We assume a single particle confined to the full potential By diagonalizing the homogeneous lattice Hamiltonian H 0 = p 2 /2m + sE r cos(k BZ x) 2 for a single particle, we obtain the Bloch states for a given lattice depth s. In energy space, the system has bands of allowed states divided by gaps where no states are located. In the experiment, the initial excitation is produced by a lattice amplitude modulation pulse of t = 1 ms duration. This pulse width is much larger than the trapping frequency ω 0 . For the parameters discussed here, it holds in general, that where E C is the band width of the conduction band. Therefore, the harmonic confinement is negligible during the preparation. In all calculations we assume a distribution of Bloch states centered at a certain quasimomentum q 0 . For simplicity we use a Gaussian distribution, with a variance σ = 364.5 Hz, corresponding to the width of the 1 ms lattice modulation. We obtain the time evolution by exact diagonalization of the full Hamiltonian H which leads to a non-trivial dynamics as shown in Fig. 3(b). To extract the oscillation frequencies from the dynamics, we calculate the center-of-mass of the quasi-momentum distribution with respect to the homogeneous lattice Hamiltonian H 0 and fit a cosine to the data. For a typical calculation we use up to 400 quasi-momenta and 11 bands. In the calculations we only use one excitation which has a positive quasi-momentum. We clearly observe the Bragg reflection at q = 0 and obtain an oscillation frequency in agreement with the approximation of equation (1). As mentioned in section S2, the analysis of the experimental data results in twice the oscillation frequency, since excitations at positive and negative quasimomentum are indistinguishable. This effect is consistently reproduced by the calculations, if we incoherently overlap two excitations with opposite sign in the quasimomentum, as shown in Fig. 3(b). S5. CALCULATION OF HOLE DYNAMICS For electron gases in solids a missing electron can be described as a single particle with negative mass. Such a hole is in complete analogy to an electron. 
We adopted this prescription for holes in the valence band of our harmonically trapped quantum gas. To describe this situation numerically, we assumed a single particle with negative mass m * = −m in the valence band and calculated its evolution in the presence of the harmonic confinement and the lattice potential. The excitation has the same shape as in the conduction band, since the missing atoms in the valence band directly correspond to excited atoms in the conduction band. We also take into account the finite filling of the valence band. This is done by using only the lowest energy states in the time evolution of the initial state. Finally, we include finite temperature by using a Fermi-Dirac distribution instead of a sharp edge when weighting the included eigenstates of H. To match the occupation number to the experimental situation, we take the total atom number and the trapping frequencies and calculate the number of occupied states in the appropriate direction.
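As a concrete illustration of the frequency-extraction step described in S2 (a sketch using synthetic data, not the analysis code used for the experiment), the differential center-of-mass signal can be fitted with the exponentially damped cosine ∆q(t) = A exp(−Γt) cos(Ωt + Φ) + C. The numbers below are placeholders chosen only to mimic the order of magnitude of the measured frequencies.

# Illustrative sketch with synthetic data: extract the oscillation frequency by
# fitting dq(t) = A*exp(-Gamma*t)*cos(Omega*t + Phi) + C, as described in S2.
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, A, Gamma, Omega, Phi, C):
    return A * np.exp(-Gamma * t) * np.cos(Omega * t + Phi) + C

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.015, 80)  # 15 ms of hold time, in seconds
true = dict(A=0.4, Gamma=30.0, Omega=2*np.pi*295.0, Phi=0.0, C=0.0)  # placeholder values
dq = damped_cosine(t, **true) + rng.normal(0.0, 0.02, t.size)        # synthetic signal

p0 = [0.5, 10.0, 2*np.pi*280.0, 0.0, 0.0]  # rough initial guess
popt, pcov = curve_fit(damped_cosine, t, dq, p0=p0)
Omega_fit, Omega_err = popt[2], np.sqrt(np.diag(pcov))[2]
# Note: in the paper the fitted Omega equals 2*omega, because excitations at +q0
# and -q0 are indistinguishable in the center-of-mass signal (see S2).
print(f"Omega/2pi = {Omega_fit/(2*np.pi):.0f} +/- {Omega_err/(2*np.pi):.0f} Hz")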
2019-04-22T13:03:53.688Z
2013-01-01T00:00:00.000
{ "year": 2012, "sha1": "1309190acd5bd0c7605f50c6aaf41bd64c02e80b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1208.4020", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1309190acd5bd0c7605f50c6aaf41bd64c02e80b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
149715122
pes2o/s2orc
v3-fos-license
Impact of Psychological Risk Factors on the Buying Decisions of the Consumers

The Internet has made a paradigm shift in the market by enabling buying and selling with a click. The large number of options available through the Internet leads online retailers to explore the various determinants affecting the behaviour of online shoppers. Consumer behaviour is defined as the activities people undertake when obtaining, consuming, and disposing of products and services. There are various factors that induce consumers to buy goods, whereas in the case of online shopping there are some risk factors that restrain buyers from making their decisions. In the current study we tried to explore the fear factors, or filtering elements, that keep consumers from buying online.

INTRODUCTION:

Online shopping is a form of e-commerce that allows consumers to buy goods or services directly from a seller over the Internet. Increasing usage of the Internet and the availability of facilities like smartphones enable consumers to buy online. For buying online, the customer must have access to the Internet and an online payment method. As the revenues from online sales continued to grow, researchers identified different types of online shoppers. Dange and Kumar (2012) explained the FFF model of online consumer behaviour. The model explains that there are three factors that affect the buying motives of consumers. They are:

1. External factors: External factors are those which are beyond the control of the consumers. These factors include demographics, socio-economic status, culture, sub-culture, reference groups and marketing factors.

2. Internal factors: Internal factors are the personal traits and attributes of the consumers, which include attitudes, learning, perceptions, motivation and self-image.

3. Filtering elements: Filtering elements are the hurdles that the buying motives need to cross to become a filtered buying motive that actually leads to a purchase decision. Kumar and Dange recognised security concern, privacy concern and trustworthiness as the filtering elements.

Compared with shopping from stores, online shopping is perceived to carry more risk factors, and these affect the consumers' decision-making process. These filtering elements filter the buying motives, which ultimately leads to the buying decision of the online consumers. Among all factors, gender plays an important role as an influential factor that affects buying decisions. Consumers' gender-related buying behaviour will vary because of the effect of symbolic consumption and social comparison. Men's motives for shopping appear to be more utilitarian, whereas women's shopping motives tend to be hedonic (Wahyuddin, Setyawan and Nugrogo, 2017). Seock and Bailey (2007) examined the differences between male and female consumers in their shopping orientations, online information searches and purchase experiences. Seven shopping orientation constructs were identified, i.e. shopping enjoyment, brand/fashion consciousness, price consciousness, shopping confidence, convenience/time consciousness, in-home shopping tendency and brand/store loyalty. As per a study by Erasmus University, men are more loyal to brands, whereas women are loyal to good service.

REVIEW OF LITERATURE:

Katawetawaraks and Wang (2011) conducted a study to provide an overview of the online shopping decision process by comparing offline and online decision making and identifying the factors that motivate online customers to decide, or not, to buy online.
It is found that marketing communication process differs between offline and online consumer decision. Managerial implications should be developed for online stores to improve their website. Uzun and Poturuk (2014) conducted a research to find out what factors affect consumers in the context of electronic commerce, also to see the relationship between e-satisfaction and e-loyalty. As Internet has become a channel were online transactions have been done, and this created need for companies to understand how consumers perceive online buying. Seven hypotheses were formulated regarding to consumers previous experiences with e-commerce. Data gathering was carried out by the survey which was sent online to 200 randomly selected citizens, from which 104 responded. Through the survey, the results showed that factors that affect consumers while shopping online, and that affect satisfaction, are convenience, and trust as the most important variables, the next which are important for them are prices and quality of products. Those variables are the most essential ones for consumers when they decide to shop online. According to collected answers, they are very suspicious. And the cause of this may be raised cheating and fraud on the Internet. If the price on the Internet and in some local store is approximately identical, the consumers will give more attention and interest on selection of goods rather than to price. Haider and Nasir (2016) conducted a study with a the purpose to analyse the different factors that usually cause to fluctuate the online shopping behaviour of customers in Pakistan. Because of the newness and apparently complicated nature of this phenomenon, there is very little information to which the customers have a direct access. Therefore, the objective of this research was to study and uncover different factors that affect the online shopping behaviour of people in Pakistan. The research was conducted with the help of a model that will examine the impact of factors like financial risks, convenience risks, non-delivery risks, return policy risks and product risks on the behaviour of online consumers in Pakistan. Results of hypotheses testing indicated that financial risk and non-delivery risk has negative effect on attitude toward online shopping behaviour. That is, eretailers should make their website safer and assure customers for delivery of their products. Shailesh and Taruna (2016) conducted a research with focuses on the specifications affecting the purchasing behaviour of smart mobiles and usage pattern smart phones of consumers in Lucknow city. The descriptive research method has been used in this study. The information related to Smartphone consumers were collected through a well-defined questionnaire. The convenience sampling method was used by the researcher to collect the data. Primary as well as Secondary sources of data was used. The sample size is 100. With the help of (SPSS) software, the data collected was modified, coded and administered. The arithmetical tools are used for F-Test and T-Test. There is a major difference between the gender of the respondents and the level of satisfaction of smart phone users. The difficulties faced by the smart are Hanging of phone, problem of charging, Language of phone, Battery life of phone, Slow internet, Network issue, Call Drop, High call rates, Improper support from call centre, Complex Technology, etc. Lakshmi et al (2017) studied the impact of gender on the consumer purchasing behaviour. 
Study was conducted with a view to identify the men and women approach for shopping with different needs, perspectives, rationales and consideration. As the males and females have different requirements for products due to their upbringing and Socialization along with various factors like social, psychological etc. The study concluded that gender is an important factor among all. The study showed that women are more internally focussed where as men ought to be externally focussed. OBJECTIVE: The current study is aimed at extracting various Filtering factors that affect the buying behaviour of the consumers shopping online. Design of the Study: Descriptive method of research is used to know the factors that affect the buying behavior of the consumers who are shopping online. Population and Sample Size: The term research population refers to all members of the group of interest to the researcher. The population of the present research are the customers who are doing online shopping. The sample of 500 customers are randomly drawn from areas in and around Chandigarh. Research Instrument Used: The Questionnaire prepared consisted of Likert's five-point scale for measuring attitudes & behavior of the customers where strongly disagree is coded as 5 while strongly agree is coded as 1. Reliability: Its reliability has been tested by applying the Cronbach Alpha whose value came out to be 0.911 which is acceptable indicating that the internal consistency of the questionnaire is good. ANALYSIS & FINDINGS: The current study is aimed to study the filtering elements that affect the buyers behavior. For analysis demographic factors have been explored through various percentage method and for filtering elements factor analysis has been used. Table 1.1 shows the demographic profile of the respondents. The study showed that majority of respondents are females with 67.9% in comparison to males with 32.1%. In case of their age groups, the majority of respondents belong to the age category of 26-35 years with 36% respondents followed by 18-25 years age group with 21% respondents, 36-45 years and 46-55 years. The study indicates that for online shopping age is not an influencing factor. People are getting more advanced towards Internet and so for online shopping. If we talk about their marital status, majority of the respondents are married with 70% than unmarried with 30%. This reports that married people have more tendency to buy online. Qualification profile of the respondents reported that majority of the respondent are Graduates with 33.4% followed by Post graduates with 31% and Undergraduates with 22.2% which shows that qualification does not impact the buying behaviour of the consumers while buying online. The study also shows that majority of respondents are from the salary group 40000-60000 with 26% followed with a minor difference of income group of 60000-80000 with 25% and 20000-40000 with 23%. FACTOR ANALYSIS: Factor analysis is a tool for data reduction and structure detection. Different factors have been studied and the role of factor analysis is to keep the significant factors and omit the non significant factors. The method followed here is the Principal Component Analysis along with rotation procedure of Varimax for summarizing the original information with minimum factors and optimal coverage. Here the Kaiser-Meyer-Olkin measure of sample adequacy test is followed and Bartlett's test of sphericity if followed to check if the factor model is appropriate. 
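A minimal sketch of this kind of workflow is given below. It is illustrative only and is not the authors' analysis; it assumes the open-source factor_analyzer Python package and a hypothetical file survey_items.csv holding the Likert-scale item responses, and it approximates the principal component extraction with varimax rotation described above.

# Illustrative sketch of a KMO / Bartlett / varimax-rotated factor analysis workflow.
# Not the authors' analysis code; assumes the `factor_analyzer` package and a
# hypothetical CSV file whose columns are the Likert-scale items.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

responses = pd.read_csv("survey_items.csv")  # hypothetical item-level responses

# Sampling adequacy and sphericity checks
_, kmo_total = calculate_kmo(responses)
chi_square, p_value = calculate_bartlett_sphericity(responses)
print(f"KMO = {kmo_total:.3f} (values above 0.5 are conventionally acceptable)")
print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")

# Number of factors by the Kaiser criterion (eigenvalue > 1), then varimax rotation
fa = FactorAnalyzer(rotation=None)
fa.fit(responses)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings[loadings.abs() > 0.5].fillna(""))  # keep loadings above the 0.5 cut-off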
Bartlett's test of sphericity was significant (sig. = .000). As per the Kaiser criterion, we retain only those components whose eigenvalue is greater than 1; unless a factor extracts at least as much as the equivalent of one original variable, we drop it. The KMO measure of sampling adequacy should be greater than 0.5 for a satisfactory factor analysis, and in our study the KMO value is 0.949. By the Kaiser criterion, the degree of common variance among all variables is marvelous: if a factor analysis is conducted, the factors extracted will account for a substantial amount of variance. The table shows that three components have an eigenvalue of more than 1, so these three components were used. The cumulative percentage of the variance explained by these factors is 81.860%.

Among the reported item loadings are "Uncomfortable feeling on thought of purchasing online" (.997) and "Bodily discomfort due to poor while purchasing apparels" (.997).

The statements were factor analyzed by principal component analysis using varimax rotation. The analysis yields three components, i.e. Privacy, Security and Trustworthiness. Out of 15 statements, one statement was deleted during rotation as its factor loading was less than 0.5.

The first factor is labelled Privacy. The factor is loaded with four items. It is clear from these items that they are related to the privacy of the consumers while buying online. These items include no possibility to touch the product, their credit card details and their fear of losing their social contacts. So privacy concern is an important factor that affects the buying motive of the consumers. The second factor is labelled Security. The factor is loaded with only one item, that is, fear of not getting the delivery on time. So consumers are more concerned about the security risk that, having already paid, they will not get the product on time. The third factor is labelled Trustworthiness. The factor is loaded with nine items. Trustworthiness has emerged as a very important factor affecting the buying motive of the consumers. The items include concern about the performance of the product, fear of information, whether the delivered product would match the one shown on the website, whether the wrong product will be chosen, fear that friends will think they are showing off, and personal health issues.

CONCLUSION:

The current study was aimed at extracting the fear factors or filtering elements that affect the buying motive of the customers. There are various demographic, psychographic, social and cultural factors that impact consumer behaviour, but in the case of online shopping there are some filtering elements or fear factors that further filter the buying motives of the respondents. The current study showed that three components have been extracted that filter the buying motive, i.e. privacy concern, security concern and trustworthiness. Among the three components, trustworthiness has emerged as the most heavily loaded factor, with nine items, which shows that consumers are most concerned about the trust they can place in online merchants.
2019-05-12T14:23:17.698Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "a4c11e0b8891bf393b2f7717352dc7a310092458", "oa_license": null, "oa_url": "https://doi.org/10.18843/ijms/v5i3(7)/06", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "3c314da24eb6ed422fee4bbf0ae840f4eb933906", "s2fieldsofstudy": [ "Psychology", "Business" ], "extfieldsofstudy": [ "Psychology" ] }
241276721
pes2o/s2orc
v3-fos-license
Stress and Coping Strategies in Public Speaking: Comparative Case Studies of Japanese and Malaysian Undergraduates Public speaking competency is one of the core skills that is essential for personal and professional growth. Students who display effective public speaking skills are able to get their messages across while projecting confidence, clarity, and conviction which enhance their job prospects for future employability. Conversely, students’ failure to cope with the stress faced in preparing for public speaking may affect their speaking effectiveness. This comparative case study explored the stress and coping strategies among Japanese and Malaysian undergraduates in two universities to understand the similarities, differences, and patterns across these two groups that share a common focus. Study participants were selected through a purposive sampling technique in which relevant data was collected using semistructured interviews. Data gathered were then analysed thematically to identify the stressors and coping strategies in public speaking across the two groups. Results indicate similar stressors experienced by participants which are external speech stress factors, resource deficit, and anticipatory speech anxiety in public speaking. Personal, social, and academic-oriented strategies were the participants’ strategies to cope with the stress. These results suggest that similar stressors are faced by undergraduate students in performing public speaking, as well as and coping strategies used. This shows that the stress of public speaking is a prevalent occurrence and that institutional intervention can be developed by tertiary institutions to minimise its detrimental effects. The mastery of public speaking skills ensures that students can convey their messages effectively and confidently. These skills are also highly valuable as the ability to conquer public speaking skills such as speaking with confidence, projecting controlled body language and presenting good enunciation and pronunciation could impress employers and enhance job prospects for professionalism and employability (Mousawa & Elyas, 2015). However, public speaking anxiety (PSA) remains a persistent occurrence among undergraduate students despite continuous research over the years. Students' failure to cope with the stress faced in preparing for public speaking may lower their performance and negatively affect their ability to communicate with clarity and conviction in both academic and non-academic settings. This research explores the stress and coping strategies faced in public speaking among undergraduate students in two universities (Japan and Malaysia) for a comparative study to identify the similarities, differences, and patterns across the two groups of students. The study focuses on the following research questions: 1. What are the stressors faced in preparing and performing in public speaking among undergraduate students? 2. What are students' strategies to cope with stress related to performing public speaking? Literature Review Stress, Appraisal, and Coping One of the most significant cornerstones in stress research can be found in Lazarus and Folkman's (1984) theory of stress, appraisal, and coping. This theory posits that stress is a product of a transaction between a person (cognitively, physiologically, affectively, psychologically or neurologically) and his or her environment. 
Stress is experienced when a person perceives that the "demands exceed the personal and social resources the individual is able to mobilize." This transactional framework of stress is fundamentally different from earlier stress research (Selye, 1956) which views stress as a response or stimulus. Therefore, it focuses on cognitions and perceptions, or appraisals, that mediate the response to stressful events (Lazarus, 1999). Within this framework, the appraisal theory defines an appraisal as the individual cognitions of a specific event or a stressor. According to Lazarus and Folkman (1984), cognitive appraisal occurs when a person considers two major factors that majorly contribute to his response to stress. These factors are the threatening tendency of the stress to the individual and the assessment of resources required to minimise, tolerate or eradicate the stressor and the stress it produces. These two factors form the basis of the two types of appraisal that happen simultaneously; primary appraisal and secondary appraisal. Primary appraisal happens when the individual evaluates an event or situation as a potential hazard to their well-being (Matthieu & Ivanoff, 2006). In other words, the appraisal focuses on the magnitude of the event and whether it poses a threat to the individual. On the other hand, the secondary appraisal is related to the individual's evaluation of their ability to handle the event or situation. This evaluation is subjective to, but not necessarily after, the primary appraisal of the event or situation (Lazarus, 1999). It is suggested that after both primary and secondary appraisals of a stress-inducing situation are made, an individual will be able to move from thinking to action. In response to primary and secondary appraisals, a behaviour called coping takes place. Coping is defined as a process of "constantly changing cognitive and behavioural efforts to manage specific external and/or internal demands that are appraised as taxing or exceeding the resources of the person" (Lazarus & Folkman, 1984). In this stage of the framework, coping strategies are employed based on their two forms: problem-focused coping and emotion-focused coping. Problem-focused coping strategies involve directing all efforts to handle distressing situations with task-oriented actions like gathering information, making decisions, resolving conflict, and acquiring necessary resources such as knowledge, skills, and abilities (Folkman & Moskowitz, 2000). These strategies allow individuals to focus on the specific goals of the situations and align their behavioural responses to achieve them. Emotion-focused strategies, on the other hand, involve the practice of positive reappraisal to regulate stress. This process of cognitively reframing typically complex thoughts in a more positive light impacts the individual's evaluation of the difficult situation and helps them alter the interaction between perceived stress and its appraisal, facilitating better coping strategies (Lazarus, 1999). Public Speaking Anxiety Public speaking anxiety (PSA) is widely regarded as one of the most common social phobias. Language anxiety is elaborated by Horwitz et al (1986) as "a distinct complex of selfperceptions, beliefs, feelings, and behaviours related to classroom language learning arising from uniqueness of the language learning process" (p. 128). This is further divided into three categories: communication apprehension, test anxiety, and fear of negative evaluation to provide teachers with anxiety. 
Specifically, communication apprehension is defined as "a type of shyness characterised by the fear of or anxiety about communicating with people", test anxiety as "a type of performance anxiety stemming from a fear of failure," and fear of negative evaluation as "apprehension about other's evaluation, avoidance of evaluative situations and expectations that others would evaluate negatively" (Tercan & Dikilitaş, 2015). Several studies on speaking anxiety among students have been conducted over the years, specifically in the Malaysian setting. For instance, a study by Zulkurnain and Kaur (2014) revealed that the communication difficulties experienced by the group of university students were consistent with four categories of communication difficulties stated by Dornyei and Scott (1997) which are namely; resource deficit, processing time pressure, own-performance problem and other-performance problems. The study also identified limited English vocabulary as being the most common barrier. Next, another study by Miskam and Saidalvi (2019) adapted the Foreign Language Speaking Anxiety Scale (FLSAS) (Balemir, 2009;Huang, 2004) to measure the level of undergraduate students' speaking anxiety. It was found that a majority of these students had a moderate level of speaking anxiety. The dominant factor contributing to this issue was communication apprehension for high and moderate anxiety learners, while test anxiety was observed in low anxiety learners. Another noteworthy study on this issue can also be observed in Long, Yih, and Lin (2019) on undergraduate students from two public institutions of higher learning in Sarawak. While this study reported that the students generally experienced an average of speaking anxiety, which was a similar finding to Miskam and Saidalvi's (2019) study, one interesting observation was that female undergraduate experienced a significantly higher level of speaking anxiety compared to their male counterparts. Research on public speaking anxiety (PSA) among students remains pertinent as this affective variable plays a significant factor in predicting their speaking performance. While a reasonable amount of anxiety can lead to the students being more motivated and focused in preparing for a speaking task, students' excessive anxiety may lead to them having low achievement levels due to their inability to regulate their stress effectively (Tercan & Dikilitaş, 2015). Methodology The present study investigates 24 undergraduate students from a Malaysian and a Japanese university using a qualitative case study approach. Yin (1994, p.51) defined a case study as "an empirical inquiry that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident". Yin (1994) suggested the case study as a particular style of educational research that may be appropriate for investigating the concept of public speaking-related stress and anxiety. In this study, purposive sampling was utilized to select prospective participants. The participant selection was to select undergraduate students who were undergoing a public speaking course in English at the time of data collection. All 24 participants were solicited during formal and informal meetings of the course. The search for prospective participants ended when the data reached saturation level, in which there was sufficient information to replicate the study and no new data emerged. 
The data gathered were able to answer the research questions, no additional new information was being obtained, and further coding was no longer feasible (Glesne & Peshkin, 1992).

Data Analysis

In this study, participants' views, opinions and responses were detailed and descriptive. Semi-structured interviews were used as the primary data collection method to capture the experiences gained in fulfilling a public speaking task. The recordings were transcribed, and field notes were examined immediately after each session to ensure accurate analysis. Observations were also conducted to gain an insider's perspective on the phenomenon under study and to serve as another source for triangulation of data, increasing the study's trustworthiness and filling any gaps between what research participants narrated through interviews and what actually happened. The researchers attempted to record what was actually seen and noted in each session. Reflections were written at the end of each session as an addition to the field notes. Some of the data obtained in the observations were also used as talking points in the post-observation interviews. Data were then analysed using thematic analysis. Thematic analysis is a method for identifying, analysing, and reporting themes in data (Boyatzis, 1998). This study uses inductive thematic analysis, in which the themes identified are closely related to the data itself (Patton, 1990). Inductive thematic analysis examines themes in the data in detail, without being constrained by themes reported in previous research.

Public Speaking Stress

The interview sessions revealed that all 24 participants from Malaysia and Japan felt a sense of fear in public speaking. The analysis shows three main themes of stressors that lead to public speaking stress: external speech stress factors, resource deficit, and anticipatory speech anxiety.

External Speech Stress Factors

Social-related Anxiety

The fear of being assessed while speaking in public was mentioned most by Malaysian participants. This fear included being the centre of attention and having their speaking ability evaluated. In contrast, only one Japanese participant remarked that being assessed on his language ability contributed to his stress in English public speaking. The results show that Malaysian participants were more concerned about social-related anxiety than Japanese participants. Another concern among Malaysian participants was the audience's reaction. The participants expressed that they would get anxious if they felt that their audience was getting bored with their presentations. Participants from both countries agreed that their nervousness was due to speaking publicly and to perceived audience facial expressions. In addition, the tendency to compare themselves with other speakers also contributed to this concern about the audience's reaction.

Situational-related Anxiety

The fear of becoming forgetful and of being the last speaker during public speaking was acknowledged mainly by Malaysian participants. The participants were concerned about forgetting their lines or scripts or losing their ideas in the middle of the presentation. Meanwhile, Japanese participants reported that the audience, place, and situation affected their confidence to speak publicly.

Resource Deficit

Low English Proficiency

As non-native speakers, participants from both countries identified low English proficiency as a factor that contributed to their lack of confidence in public speaking.
For Malaysian participants this was particularly due to limited vocabulary and mispronunciation of words, while Japanese participants felt that mastery of the English language itself was challenging.

Preparation Time

Preparation prior to public speaking also determined participants' readiness for their presentations, and participants mentioned time and technical issues as stress factors. It is interesting that participants from both countries responded that insufficient preparation time and a lack of preparation affected their ability to present well. Participants stated that a lack of preparation for a speech would cause them to feel nervous during public speaking. They also mentioned preparation time as one of the fear factors in public speaking. Script memorisation was an important step before the presentation.

Technical Preparation

When it comes to technical preparation, Japanese participants felt more stressed than Malaysian students, and the factors cited included writing the speech, initiating the speech, and being given unfamiliar and uninteresting speech topics.

Anticipatory Speech Anxiety

The participants also reported anticipatory speech anxiety, which was divided into two categories: self-consciousness and confidence issues.

Self-conscious Issues

More Malaysian participants than Japanese participants feared being in the limelight. Apart from that, they were also concerned about being perceived as stupid and confused, and about feeling shy.

Confidence Issues

Participants from both countries also mentioned issues with self-confidence. While most Malaysian participants were concerned about making mistakes, Japanese participants expressed that their low level of confidence was a fear factor in public speaking.

Public Speaking Stress Coping Strategies

In coping with the stress they faced, the Japanese and Malaysian students shared several strategies for dealing with public speaking-related stress: personal, social, and academic-oriented strategies.

Personal Coping Strategies

Cognitive

Interestingly, most participants used a self-persuasion strategy to overcome speaking anxiety, and this was the strategy mentioned most often in the interviews. Relatedly, being optimistic was considered crucial by the participants in coping with public speaking stress. Additionally, they preferred to be in solitude before the session in search of tranquillity.

Physical

The data show that more Japanese participants practised physical activities to overcome public speaking anxiety. Activities like listening to music, playing sports, reading books, and watching movies or videos helped them ease stress. Interestingly, several Malaysian participants opted for singing as a coping strategy.

Behavioural

Besides coping with anxiety cognitively and physically, both Malaysian and Japanese students opted for behavioural actions such as taking deep breaths, sputtering, holding on to handphones or stress-reliever tools, eating and sleeping.

Social Coping Strategies

Peer Support

In the interviews, participants from both countries, particularly Malaysians, expressed that peer support played a major role in easing their anxiety; talking to their friends and receiving supportive reactions from them helped.

Instructor Support

The participants also sought the instructor's support; as with peer support, receiving supportive reactions from the instructors helped ease their public speaking anxiety.
Several participants specifically mentioned that they would talk directly to their professors whenever they needed help.

Academic-oriented Coping Strategies

Speech Preparation

The interviews revealed that speech preparation, which included preparing scripts and presentation slides, helped reduce public speaking anxiety. More Malaysian participants mentioned this strategy, while only one Japanese participant stated that the task would not be difficult for him given sufficient preparation time. Additionally, participants disclosed that experience allowed them to perform better in oral presentations.

Speech Practice

Besides preparing scripts, participants noted that frequent practice was also key to overcoming public speaking anxiety. More than half of the Japanese participants expressed that frequent practice helped them manage their stress better, and a few Malaysian participants felt the same.

Speech Delivery

Establishing eye contact with the audience was used by both Japanese and Malaysian students as a coping strategy for public speaking stress. Other strategies that also involved the audience included interacting with them, using humour, and attracting their attention.

Discussion

This study aimed to investigate the factors that lead to public speaking anxiety in Japanese and Malaysian students and their coping mechanisms. Oral assessment has become the norm in courses and is often made compulsory. Even though this practice has long been integrated into schools, the participants admitted to feeling stressed about public speaking. This has frequently caused students to avoid speaking in English even outside of their classroom (Sadighi & Dastpak, 2017). The results show that students felt very stressed when their instructors assessed them, which affected the way they presented their speeches. This is probably related to fear of negative evaluation, which may in turn lead them to obtain lower marks for the assessment (Akkakoson, 2016). Apart from their instructors, they were also worried that the audience might judge their appearance and stage presence. They worried about appearing stupid and confused in front of the audience, especially because they became the centre of attention when they spoke. This further supports Lazarus and Folkman's theory, which states that whenever individuals experience a situation in which they are expected to provide more than they can, they will feel stressed. Students' confidence level also played an important part in delivering effective public speeches. According to the results, students who felt they lacked confidence ended up with speech anxiety. It is also interesting to note that Japanese students were less affected by social-related anxiety than Malaysian students. Nevertheless, it is evident that the fear of being assessed by others had a major impact in inducing participants' public speaking anxiety. A low level of English proficiency was also considered to influence students' anxiety in public speaking, mainly among students from Malaysia. Students admitted that limited vocabulary caused them to be anxious, as they would be lost for words and end up using too many fillers while speaking, hence interrupting the flow of their speech. Akkakoson (2016) stated that students would have difficulty understanding others or expressing their viewpoints without sufficient vocabulary. Since the participants are non-native speakers, they may find it challenging to construct complete sentences without the help of pre-written scripts.
However, instructors usually prohibit their students from relying too heavily on scripts, to prevent students from reading aloud instead of demonstrating their speaking skills. This could also trigger another element of public speaking anxiety: mispronunciation of words due to limited vocabulary, which is ultimately perceived as poor communication skill. The analysis shows that more Malaysian students were worried about their English proficiency level than Japanese students. Nevertheless, a few Japanese students mentioned that making mistakes caused them to be anxious in public speaking. One student also confessed that English is a complex language to learn and practise, which is understandable since it is deemed a foreign language in the land of the rising sun. In fact, Japan is placed in the "low proficiency" band after the country dropped to 53rd place in global English proficiency, and it is currently refining its English educational curriculum in schools (Margolis, 2020). Thus, this further explains why English proficiency was not a major factor for Japanese students: the language is not perceived as being of the utmost importance to the country's socio-economic standing. Besides English proficiency level, preparation time and technical preparation were also part of the resource deficit contributing to public speaking anxiety. It is clear that students were aware of the significance of being well prepared before any public speaking session. Insufficient preparation time, which might be due to other assignments or poor time management, can also become a source of anxiety; it is possible that some students waited until the eleventh hour to prepare for the task. In support of this view, students also listed speech writing and unfamiliar topics as stressors. Unless it is a public speaking competition, instructors typically provide a stipulated period for students to prepare for their assessments, and technical issues like unfamiliar topics and incomplete slides would be less likely to emerge as stress factors if students had better time management skills. On the other hand, what is interesting in the findings is the similarity in the strategies adopted by students from both countries to overcome public speaking stress. The most frequently mentioned coping strategy was self-persuasion or self-talk. This shows that students were self-conscious and aware of their weaknesses, and they employed this strategy to improve their speaking skills and complete the speech. This strategy was also found in a study by El-Sakka (2016), where self-talk was shown to reduce speaking anxiety in forty English-major students at an Egyptian university. Another coping strategy involved support, mainly from friends and instructors, which is not surprising as these are the people that students usually turn to in educational matters. Mistakes are unavoidable for language learners, but when instructors tolerate the errors and assist students, this "releases pressure" in public speaking (He, 2017). It can be concluded that simply talking to them or seeing their positive reactions helped students feel calmer and more confident about completing oral activities. Earlier, insufficient speech preparation was identified as one of the main factors inducing public speaking anxiety among students from Japan and Malaysia. Interestingly, preparation was also among the most frequently mentioned coping strategies, in the form of proper preparation of the speech and frequent practice.
A similar result regarding prior preparation was also found in a study by Rafieyan and Yamanashi (2016). This further supports Lazarus and Folkman's problem-focused coping theory, whereby a person can gain control over a difficult situation once the source of the problem is identified. It is worth noting that students were aware of their limitations in public speaking, which led to anxiety; however, they were also attentive to their coping strategies in order to improve their skills.

Conclusion

The triangulation of data shows clearly that the participants acknowledged the presence of stress in fulfilling public speaking tasks. The researchers identified external speech stress factors, resource deficit, and anticipatory speech anxiety as the stressors that significantly influence their performance in public speaking. Nevertheless, the results also revealed that all participants in the study appraised the stress cognitively in order to seek possible solutions. Personal, social, and academic-oriented strategies were used as coping mechanisms to manage and alter the stress or to regulate the response to it. These findings advance the understanding of English education at the tertiary level by providing subject- and context-specific insights into the lived experiences and perceptions of the participants. The understanding gained could inform recommendations for improving educational practices, thus positively impacting students' performance and university reputation.
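The comparative tallying of coded themes described in the Data Analysis section above can be made concrete with a minimal sketch. This is not the authors' procedure or instrument: the coded excerpts, theme labels and column names below are hypothetical, and the snippet merely shows how coded interview segments might be counted by theme and nationality once inductive coding is complete.

```python
# Illustrative sketch (hypothetical data): tallying coded interview excerpts
# by theme and nationality after inductive thematic coding.
import pandas as pd

# One row per coded segment from the transcripts (hypothetical examples).
coded_segments = pd.DataFrame([
    {"participant": "M01", "country": "Malaysia", "theme": "social-related anxiety"},
    {"participant": "M02", "country": "Malaysia", "theme": "low English proficiency"},
    {"participant": "J01", "country": "Japan",    "theme": "technical preparation"},
    {"participant": "J02", "country": "Japan",    "theme": "confidence issues"},
    {"participant": "M03", "country": "Malaysia", "theme": "social-related anxiety"},
])

# Count how many participants from each country mentioned each theme at least once.
mentions = (coded_segments
            .drop_duplicates(["participant", "theme"])
            .pivot_table(index="theme", columns="country",
                         values="participant", aggfunc="count", fill_value=0))
print(mentions)
```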
Elevated inflammatory biomarkers during unemployment: modification by age and country in the UK Background There is raised risk of mortality following unemployment, and reviews have consistently found worse psychological health among the unemployed. Inflammation is increasingly implicated as a mediating factor relating stress to physical disease and is strongly linked to depression. Inflammation may, therefore, be implicated in processes associated with excess mortality and morbidity during unemployment. This study examined associations of unemployment with inflammatory markers among working-age men and women from England and Scotland. Methods Cross-sectional analyses using data from the Health Survey for England and the Scottish Health Survey collected between 1998 and 2010. Systemic inflammation was indexed by serum concentrations of C reactive protein (CRP) and fibrinogen, and compared between participants currently employed/self-employed, currently unemployed and other groups. Results CRP, fibrinogen and odds of CRP >3 mg/L were all significantly raised for the unemployed, as compared to the employed participants (eg, OR for CRP >3 mg/L=1.43, CI 1.15 to 1.78 N=23 025), following adjustment for age, gender, occupational social class, housing tenure, smoking, alcohol consumption, body mass index, long-term illness and depressive/anxiety symptoms. Strengths of associations varied considerably by both age and country/region, with effects mainly driven by participants aged ≥48 and participants from Scotland, which had comparatively high unemployment during this time. Conclusions Current unemployment is associated with elevated inflammatory markers using data from two large-scale, nationally representative UK studies. Effect modification by age suggests inflammation may be particularly involved in processes leading to ill-health among the older unemployed. Country/regional effects may suggest the relationship of unemployment with inflammation is strongly influenced by contextual factors, and/or reflect life course accumulation processes. INTRODUCTION There is raised risk of mortality following unemployment 1-3 and reviews have consistently found worse psychological health among the unemployed. [4][5][6] Unemployment is a stressful life event, often involving loss not only of financial resources but psychosocial assets, such as time structure, status and social support. 7 The inflammatory response occurs in response to infection or injury, where it helps to fight infection and repair damaged tissue, 8 but can also occur for extended periods of time in the absence of infection or injury. Such 'systemic' inflammation is increasingly implicated as a mediating factor relating stress to cardiovascular disease 9 and is strongly linked to depression. 10 It is, therefore, plausible that inflammatory markers may be elevated in the unemployed, and reflect processes associated with the excess morbidity and mortality in this group. To our knowledge, two small-scale studies have examined associations of unemployment with inflammatory markers. We explored this association in a large data set of 23 025 participants from the Health Survey for England (HSE) and Scottish Health Survey (SHeS), allowing for a wide range of potential confounders and mediators to be explored. METHODS Participants The HSE and the SHeS are annual government surveys, each comprising a new sample every year, with core samples nationally representative of residents with private addresses. 
11 12 Each has a stratified two-stage sampling design, with households selected from primary sampling units. 13 This analysis was restricted to core-sample participants of working age, defined as 16-64 last birthday. Surveys consisted of a face-to-face interview followed by a nurse visit during which clinical measurements were taken, including serum C reactive protein (CRP) and fibrinogen, both markers of systemic inflammation. Data was aggregated from nine HSE and SHeS surveys in which CRP and fibrinogen measurements were taken for the core sample: HSE 1998, 2003, 2006, and SHeS 2003, 2009. From HSE 1999 and from the 2008 SHeS only a subsample of core sample adults were targeted for a nurse visit; so only these participants had measurements of CRP and fibrinogen. Observations from SHeS 2011 were not used, because introduction of a different CRP analyser resulted in measured CRP concentrations on average 15 mmol/L higher, leading to concerns about consistency. 14 The initial sample comprised all core sample working-age adults from nine surveys targeted for a blood sample (N=49 385). Of these, 43 129 (87.3%) consented to a nurse visit but only 30 103 (61%) consented to a blood sample. Problems in taking samples, laboratory problems with samples obtained, and nonmeasurement of fibrinogen for participants taking fibrates resulted in 27 366 CRP measurements and 24 551 fibrinogen measurements. Participants with CRP >10 mg/L were excluded from CRP (N=1453) and fibrinogen analyses (N=1237) since this is considered evidence of current infection, rather than chronic processes. 15 Of remaining observations, only 25 participants were missing employment status, but a further 2863 and 2568 participants were excluded due to missing covariates (mostly for body mass index, BMI, occupational social class and General Health Questionnaire (GHQ-12) score, missingness of 5.3%, 3.6% and 3%, respectively, in CRP analyses). The final complete-case sample sizes were 23 025 for CRP models, and 20 724 for fibrinogen models. Measures Current employment status was assessed by questionnaire. Using the International Labour Organisation definition, we considered participants unemployed if they were without work and seeking work, or waiting to take up work. 16 The baseline group in all analyses was participants in paid employment or self-employment. Participants out of the labour force, due to sickness/disability or otherwise economically inactive (including homemakers, the retired, full-time students, participants in government training or doing unpaid work), were analysed separately. Participants who were unemployed but temporarily prevented from seeking work due to illness were included with the sick/disabled group. In all surveys, serum CRP concentrations were analysed by the Biochemistry Department of the Royal Victoria Infirmary, Newcastle, using the N Latex CRP mono immunoassay on the Behring Nephelometer II Analyser. 17 Imprecision at the low end of the analytical range results in a coefficient of variation of <6% for this analyser. 13 The limit of detection was 0.1 mg/L. Fibrinogen was analysed at the Royal Victoria Infirmary Haematology Department using a modified Clauss thrombin clotting method. The Organon Teknika MDA 180 analyser was used until HSE 2006 18-21 when the Auto Coagulation lab (TOP) CTS analyser was introduced. 13 22-25 A correlation of 0.96 indicates results from the two analysers are comparable. 25 The limit of detection was 0.2 g/L.
Fibrinogen was not measured for participants taking drugs known to affect fibrinogen. Mean values of CRP and fibrinogen differed between surveys (see online supplementary appendix A). All covariates except BMI were assessed by questionnaire. Socioeconomic position was indexed by occupational social class (Registrar General's Social Classification) from current or most recent employment, and housing tenure (classified as owns home outright, buying with a mortgage or loan or renting). Smoking was categorised as never smoker, ex-smoker, current (<10/day), current (10-19/day) and current (20+/day). Alcohol intake was assessed by frequency of drinking occasions in the past year (every couple of months or less, 1-2 times per month, 1-2 times per week, 3-4 times per week, 5+ times per week or never). Height and weight were assessed by the nurse and BMI calculated, with WHO BMI categories (<18.5, 18.5-24.99, 25.0-29.99, 30+) used as measure of adiposity. Long-term illness (mental or physical) was categorised as none, limiting and non-limiting. Total GHQ-12 score (dichotomised using the standard cut-off of 4+) was included to account for depressive/anxiety symptoms. Non-steroidal anti-inflammatory drugs, systemic corticosteroids, corticosteroid injections, lipid-lowering drugs, β-blockers, diclofenac sodium for gout and aspirin or ibuprofen as an analgesic or antiplatelet were classified as medications that would influence inflammatory marker levels. Data analysis Multivariate linear regression was used to examine associations of unemployment with serum concentrations of CRP and fibrinogen (both log-transformed), and multivariate logistic regression to investigate associations of unemployment with odds of raised CRP, defined as >3 mg/L-the standard cut-off in CRP analyses in recognition of the clinically significant increase in cardiovascular risk past this point. 26 All analyses used STATA's svyset command to account for clustering in the primary sampling unit. Sensitivity analyses To investigate whether bias could have resulted from conducting a complete-case analysis, we compared age-adjusted, genderadjusted, country-adjusted and year-adjusted associations between unemployment and inflammatory markers in participants lacking covariate data and other participants. In total 12.7% of the final CRP sample was taking medications with potentially anti-inflammatory effects. To investigate whether their inclusion could have affected results, we compared associations between unemployment and inflammatory markers in these participants and other participants. RESULTS Compared to those excluded, participants retained in final models were older and more likely to be male (both significant p<0.001). The original and final analytic samples are compared in table 1. Age-adjusted associations of inflammatory markers with covariates are shown in online supplementary appendix D. Unemployment was higher among Scottish participants than English participants at 2.6%, compared to 2.1% in the final CRP sample (table 2). Within England, it was lowest in the Southwest at 1.4%. Unemployment and inflammation Across the whole sample, log-transformed CRP, log-transformed fibrinogen and odds of CRP >3 mg/L were significantly raised for unemployed, compared to employed participants (table 3). Effects were robust to adjustment for age, gender, socioeconomic position, long-term illness, GHQ-12 score and health behaviours. 
For all three markers, attenuation occurred with adjustment for SEP (table 3) but additional adjustment made little difference. Significant interactions were found for age band and country, although not for gender. Age-stratified and country-stratified analyses were conducted to investigate further. Within England, interactions of unemployment and government office region were tested for with the Southeast (the largest group) as baseline. Stratification by age band, country and region Associations of unemployment with CRP and fibrinogen were significantly stronger for participants aged 48-64, compared to those aged 16-31 (interaction p=0.004 and p=0.001, respectively). Stratification by age band (table 4) showed that associations with all three markers were strong for those aged 48 and over, but non-significant in the younger groups. Associations with CRP and fibrinogen were considerably stronger for Scottish participants (interactions p<0.001 and p=0.007). Stratification by country (table 5) showed that among English participants, only odds of CRP >3 mg/L was significantly raised for unemployed participants after full adjustment but in Scotland, associations with all three measures of inflammation were robust. Within England, there were significant regional interactions for CRP and fibrinogen (interactions p=0.03 and p=0.02). This was driven by differences in the Southwest, where associations of all three inflammatory markers with unemployment were found to be negative (table 5). Sensitivity analyses Age-adjusted, gender-adjusted, country-adjusted and yearadjusted associations between unemployment and inflammatory markers did not differ between participants lacking covariate data and other participants, indicating their exclusion had not produced bias. Associations did not differ between participants taking anti-inflammatory medicines other participants, indicating their inclusion had not produced bias. Since the years of data collection differed between the two countries, we considered whether country differences might reflect secular changes in associations of unemployment and health due to the onset of the recession. Analyses were re-run and restricted to 2003, a year well before the recession when large numbers of observations were taken in both countries, but significant country interactions remained for both CRP ( p=0.01) and fibrinogen ( p=0.05). To explore whether country/regional differences were due to climate, English observations were stratified into latitudinal bands: The North West, North East and Yorkshire, the Midlands and East Anglia and London and the South. No latitude effect was observed. In both countries (see online supplementary appendices B and C), attenuation occurred with adjustment for SEP on all measures of inflammation. In contrast, additional adjustment for long-term illness made no difference in either country. Adjustment for health behaviours produced modest attenuation in Scotland, but not for England. DISCUSSION Unemployment and inflammation In a large data set representing working-age people in England and Scotland, we found elevations in CRP and fibrinogen among unemployed men and women, compared to their employed counterparts. Results were robust to adjustment for pre-existing illness, social position, health behaviours and symptoms of depression/anxiety. 
This suggests unemployment is linked to inflammation via pathways independent of these factors and that inflammation may help explain the increased morbidity and mortality repeatedly observed in this group. Our findings accord with research linking inflammation to social stressors, including bereavement 27 and caregiving 28 and disadvantaged socioeconomic position. 29 30 To our knowledge, two studies have explored associations between unemployment and inflammation. Both were small, with sample sizes of 225 31 and 1227, 32 and neither carried out in a UK population. Both report that inflammatory markers (CRP and/or Interleukin-6) were raised in unemployed participants compared to working counterparts. Our findings serve to confirm and extend these findings using data from large scale, nationally representative UK studies. Our results do not support a model whereby the poor health of the unemployed can be explained by direct selection due to poor health. However, in both countries, substantial attenuation occurred with adjustment for SEP, supporting indirect selection by socioeconomic position. While unemployment is associated with adverse health behaviours, 33 in our study this did not explain the association of unemployment with raised inflammatory markers. Modest attenuation with adjustment for smoking, drinking and BMI was observed in Scotland, but not in England. This may reflect inaccuracies in measurement of tobacco and alcohol consumption in large-scale health surveys, limiting how effectively these factors can be controlled for. Alternatively, results may support the idea that the relationship of unemployment with health behaviours may itself vary by context. 34 Associations were largely independent of psychological distress as measured by the GHQ-12. Measurement of psychological distress may not have been optimal in our analyses, since there is more evidence that inflammation is associated with depression than anxiety and the GHQ-12 may be a relatively poor measure of depression. Disadvantaged groups may also tend to under-report symptoms of minor psychiatric disorder as measured by the GHQ, 35 potentially leading to discrepancies in the accuracy of measurements for biomarkers and psychiatric symptoms. Age and country/regional effects The age modification observed could reflect unemployment being more stressful for older jobseekers, for instance due to outdated skills, or real or perceived job discrimination. 5 Alternatively, it could reflect accumulation of exposure over the life course. There is substantial evidence that unemployment spells cluster longitudinally within individuals, due to loss of skills or impact on perceived employability. 36 37 There is also evidence that effects of unemployment on inflammation are lasting and could act additively over time. 32 Hence, late-career unemployment may be acting as a marker for longer term unemployment and/or more past unemployment, with plausibly greater effects on inflammation. It is unclear what is driving the country/regional modifications. Sensitivity analyses allowed us to discount differential medication use by country, proximity of data collection to the recession and latitude as explanations. Furthermore, country differences are not consistent with differential selection effects due to variation in background unemployment rate. 
'Direct selection'-the idea that poor health of the unemployed can be largely explained by selection into unemployment of the unhealthy, and/or selection of the healthier unemployed back into employment-predicts weaker associations of unemployment and ill-health in times and places when unemployment is higher. Against a high background unemployment rate, job loss should be less discriminating, selection minimised and the unemployed more 'normal' as a result. 2 Since unemployment benefit rates are determined by central UK government, country effects are unlikely to stem from differential financial impacts of unemployment. Hence, if the differences are not due to any of these processes and persist after full adjustment, results may implicate a genuinely greater impact of unemployment in Scotland via alternative pathways, such as psychosocial stress. While selection predicts stronger associations of unemployment and ill-health against a low background unemployment rate, there are also theoretical reasons to expect the opposite. It has been suggested that unemployment may be a more stressful experience, with worse effects on health where unemployment is high because jobseekers will perceive prospects for re-employment as worse. 38 This could produce stronger associations of unemployment and ill-health, despite weaker selection effects. A final possibility is that country and regional differences may again reflect life course accumulation processes. If unemployment was more widespread in Scotland at the time of data collection and had been during much of these participants' working lives, then it is likely that unemployed Scottish participants will have been unemployed for longer than their counterparts elsewhere or accumulated more lifetime unemployment, with plausibly greater effects on inflammation. Indeed, this explanation is supported by the stronger associations observed for older participants, since differences stemming from accumulation processes would be expected to emerge later in life. While this cannot be tested within this cross-sectional data set, support comes from other UK data sources for this period. An analysis of unemployment duration between 1991 and 2006 using the British Household Panel Survey 39 found probability of re-employment during follow-up was lower in Scotland than in every English region (0.655, compared to the South East). The negative effects in the South West require a different explanation. Unemployed participants in the South West did not appear different in terms of demographics or health behaviours, but this region had the least unemployment, in accordance with Labour Force Survey data from this period. It is, therefore, likely that these participants will have been unemployed for less time than their counterparts elsewhere, perhaps with better perceptions of re-employment prospects playing an additional protective role. However, these factors cannot explain why inflammatory markers were actually lower for the unemployed compared to employed participants in this region. Given the small sample sizes in regionally-stratified models, negative effects in the South West could be type 1 errors. Alternatively, differences in three-way selection between the employed, unemployed and economically inactive could be involved. 
For people with sufficient health problems to claim sickness/disability benefits, the financial incentive to exit the labour market altogether is considerably greater for those who are unemployed than employed, and people do appear to follow these incentives. 40 Such differential labour market exit would mean that, all else equal, the unemployed should be more selected for good health than the employed. Of course, other processes-such as selection of healthy jobseekers back into employment plus any negative causal influences of unemployment on health-would act in the opposite direction, potentially obscuring effects of differential labour market exit. However, in a context of very low unemployment, these effects could plausibly come to the fore, possibly accounting for the negative associations in the South West. If so, effects reported for Scotland, and England overall, should be considered underestimates. This analysis had several advantages; our sample was much larger than the two previous studies, and contained both men and women from across the working-age range, increasing generalisability of results. By considering a wide range of potential confounders and mediators, we were able to explore confounding by socioeconomic position, by pre-existing illness and the role of health behaviours. Participants who were temporarily sick during a spell of unemployment were excluded, leading to conservative estimates. This analysis has three main limitations. The first concerns loss of data between those targeted for a blood sample, and the usable CRP and fibrinogen measurements actually obtained; resultant bias cannot be ruled out. Second, comparatively few unemployed women in the sample meant gender modifications could not be fully explored. Third, analysis of current unemployment in the context of life histories was not possible. This would have allowed further exploration of effect modifications by age and region. CONCLUSIONS This analysis found robust elevations in CRP, fibrinogen and odds of CRP >3 mg/L among English and Scottish unemployed men and women compared to their employed counterparts, but strength of effects varied considerably by both age and country/ region, suggesting the relationship of unemployment with inflammation may be strongly influenced by environmental or contextual factors. Alternatively, if these differences reflect life course accumulation processes, they may indicate long-term or repeated unemployment as especially damaging to aspects of health related to inflammation. What is already known on this subject ▸ Systemic inflammation is increasingly implicated as a mediating factor relating stress to morbidity and mortality. ▸ Both morbidity and mortality are elevated during unemployment, but questions remain regarding the direction of causation and mediating mechanisms involved. ▸ Two small-scale studies have reported elevated inflammatory markers in unemployed participants, consistent with an impact of unemployment on health via psychosocial stress. The bold text signifies associations in the stratified analyses which are significant at p<0.05. *Adjusted for age in years, gender, country, survey year, occupational social class, housing tenure, presence of a long-term illness, smoking, alcohol consumption, categorised BMI and dichotomised GHQ-12. BMI, body mass index; CRP, C reactive protein; GHQ-12, General Health Questionnaire. The bold text signifies associations in the stratified analyses which are significant at p<0.05. 
What this study adds ▸ We confirm and extend these findings using data from two large-scale, nationally representative studies, and explore this association in a UK context for the first time. ▸ While current unemployment was robustly associated with elevated inflammatory markers, effect modifications by both age and region suggest the relationship may be strongly influenced by contextual factors and/or accumulation processes. Competing interests None. Provenance and peer review Not commissioned; externally peer reviewed. Data sharing statement This analysis was entirely conducted on data publicly available from the UK Data Service. Open Access This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/
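The core model described in the Data analysis section (logistic regression of raised CRP on unemployment with covariate adjustment, accounting for clustering in the primary sampling unit) was fitted in Stata using svyset. The snippet below is only a rough, illustrative analogue, not the authors' code: the file and variable names are hypothetical, and clustering on the primary sampling unit is approximated with cluster-robust standard errors rather than the full survey design.

```python
# Illustrative sketch (hypothetical variable names): logistic regression of raised CRP
# (>3 mg/L) on unemployment with covariate adjustment, clustering standard errors on
# the primary sampling unit as a rough stand-in for a full survey-design analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hse_shes_pooled.csv")          # hypothetical pooled survey extract
df = df[df["crp"] <= 10]                          # drop CRP >10 mg/L (acute infection)
df = df.dropna()                                  # complete-case analysis, as in the paper
df["crp_high"] = (df["crp"] > 3).astype(int)      # standard cut-off for raised CRP

model = smf.logit(
    "crp_high ~ unemployed + age + C(sex) + C(social_class) + C(tenure)"
    " + C(smoking) + C(alcohol_freq) + C(bmi_cat) + C(longterm_illness) + ghq_case",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["psu"]})

print(np.exp(model.params["unemployed"]))          # odds ratio for unemployment
print(np.exp(model.conf_int().loc["unemployed"]))  # 95% CI on the odds ratio scale
```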
Measuring Developmental Differences With an Age-of-Attainment Method The sensitive measurement of variation in rate of attainment is an underutilized but useful indicator of individual differences in development. To assess such individuality, we used longitudinal parental diary checklists of infant attainments to estimate the ages at which ubiquitous developmental milestones like sitting and walking were reached. Parents using this diary checklist have been shown to be valid reporters of milestone attainments. Present analyses show that multiple definitions of milestone onset have high reliability as well. Babies differ considerably in their rates of development, and such individual differences in rates may be predicted from other variables with survival (event history) analysis. Ages of attainment for sustained sitting, crawling, and walking were calculated for 519 infants and predicted using 11 common covariates. Our discovery that babies of younger mothers reach these milestones sooner than those of older mothers reveals the value of an age-of-attainment (AOA) approach. A framework with a SAS program for collecting and analyzing AOA data is presented. The timely achievement of motor, physical, language, and cognitive milestones defines healthy development, and the developmentally delayed infant or child may be at risk of various problems (Dosman, Andrews, & Goulden, 2012;Young, 2010). Consequently, much of developmental measurement is concerned with appropriately characterizing differences among children and with identifying those who are lagging their agemates developmentally. The typical approach to measuring individual differences in development was pioneered a century ago by Binet and Simon (1916). This psychometric approach has usually involved measuring some feature of development on one or more age-related tasks and combining the results into an overall score. Task performance could be reflected in many types of outcomes, such as the number of correct answers on a vocabulary test or the latency of response. To translate specific, but variable, outcome measures onto a developmental metric, the individual's performance was expressed in relation to the average for others of the same age. This became the de facto way to express the developmental status of an individual, as relative to an age-based norm. There is no external criterion for development. Rather, the group average is treated as the standard of typical development. Such an approach has been very useful in many contexts. However, a group-normed approach is limited because of the possibility of secular changes in the group average. For example, Flynn (1984Flynn ( , 1987 has documented an historical increase in population IQ averages in a number of industrialized countries. The causal factors responsible for a population increase could also be operative at the individual level, so the use of the mean as a standard against which to compare the individual may miss important causal influences. A measure of individual developmental progress that is independent of group norms would be preferable, but it would require a developmental scale on which the individual could be located. There are countless concepts that develop, from expressive vocabulary size to theory of mind, and there is little professional reward for doing the painstaking work of developmental scaling for each concept. 
Fortunately, Wohlwill (1973) articulated an overlooked alternative approach that scales development on a readily understood and measurable metric, time to attainment. Moreover, an attainment approach can be applied to diverse developmental phenomena, and it is to this approach that we now turn. The Age-of-Attainment (AOA) Alternative Many, if not most, developmental processes involve an identifiable moment when a new developmental ability or feature first appears. The more salient of these are often called milestones, such as when a first step is taken or when the first two-word sentence is uttered, but they may be subtle and largely unremarked. Wohlwill's (1973) key idea was to incorporate chronological age with a specific milestone, so that the focus is on the age of milestone attainment. Once a specific developmental destination is identified, individual differences can be captured by the variation in the ages at which children reach the milestone. In the sections below, we describe and illustrate the AOA method. In essence, different developmental phenomena are expressed on a common metric, chronological age. This is done by identifying significant age-related events and expressing those events in terms of the age at which the child reached them. Such an event-centered approach has been considered before (e.g., Campbell & Weech, 1941), though rarely. However, determining the age at which a developmental event occurs requires either longitudinal measurement (always difficult and expensive) or retrospective recall (suspect by many). As we shall see, methodological and technical developments are converging to make the longitudinal collection of attainment data quite feasible. Thus, stumbling blocks to the use of prospective milestone observations are receding at the same time that the advantages are becoming more apparent. An advantage of the AOA method is its applicability to situations where interval scales of measurement are not available. Instead, all that is needed is an important observable developmental (age-related) qualitative event, and there are many: first steps, menarche, reading a sign without assistance, and becoming a parent. With an event defined, the individual's chronological age at the date of attainment is the key outcome variable of interest. In essence, age shifts from being a predictor of other variables to being the outcome explained by those other variables (Wohlwill, 1973). The implications of such a shift are far-reaching, which we will illustrate by applying the AOA method to some key motor attainments of infancy. Infant Gross Motor Development Nearly all infants transform themselves from being relatively immobile and horizontal to being upright and capable of walking in two years. Bipedal locomotion is a major evolutionary adaptation and a defining characteristic of our species. Not surprisingly, the processes involved in its achievement are many and complex. From an AOA perspective, we want to identify those processes that influence rate of development (i.e., covary with AOA), and in so doing further our understanding of how development works. In the illustration that follows, we will specify various factors thought to predict the speed with which infants reach three key gross motor achievements: (a) sitting without support, (b) crawling on hands and knees, and (c) walking without support.
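The shift that makes chronological age the outcome can be made concrete with a small sketch. This is purely illustrative and not the authors' SAS framework; the dates and the function name are hypothetical. It simply shows the bookkeeping implied above: each infant's age on the date a milestone is credited becomes the quantity to be explained.

```python
# Minimal sketch of age-of-attainment bookkeeping: the outcome for an infant is the
# chronological age (in weeks) on the date a milestone is first credited.
# The dates below are hypothetical.
from datetime import date

def age_in_weeks(birth: date, attained: date) -> float:
    """Chronological age at attainment, expressed in weeks."""
    return (attained - birth).days / 7.0

# A hypothetical infant born 1 March 2013 who is credited with walking on 21 March 2014.
walk_aoa = age_in_weeks(birth=date(2013, 3, 1), attained=date(2014, 3, 21))
print(round(walk_aoa, 1))  # 55.0 weeks: this age, not a test score, is the outcome
```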
Many infant motor achievements are readily observable by parents, who can be recruited to make recordings. Our study utilized parents in just this way and, as we learned, parents are very interested in watching for infant accomplishments. Parents usually see more of their infant's development than do non-family members, and our methodology capitalizes on their privileged vantage point. Thus, longitudinal study of observable milestones may be more feasible than we have assumed, and an AOA approach to differences in rate of development becomes an attractive method. An AOA method appeals, too, because a well-developed analytic procedure called survival analysis (Allison, 2010) is readily applicable to longitudinal AOA data. Survival Analysis This method, also known as event-history analysis, was originally developed for predicting how long persons would survive (Singer & Willett, 2003). However, survival analysis is much more generally applicable because it can be used for any time-situated event, which is defined as a qualitative change from one discrete state to another (Allison, 2010, p. 2). This type of analysis is relatively rare in developmental research, which is somewhat surprising given its emphasis on time-based events. Gross motor milestones are well suited to survival analysis because they are qualitative transitions that can be located on a time (age) continuum. Developmental transitions do not occur instantaneously, but as long as the interval in which the transition occurs is brief relative to the total duration under consideration, they can be appropriately analyzed with survival analysis (Allison, 2010). More importantly, survival analysis can include estimates of how much a milestone is shifted in age by the presence of significant predictors (e.g., does high socioeconomic status [SES] lower the age when babies will walk?). With survival analysis, not only can we say "we are there," we can also identify variables that may speed or slow the trip. Daily Checklist of Motor Milestones We needed a longitudinal checklist of easily observable developmental milestones for use by parents of pre-walking infants. We reasoned that by having prospective, rather than retrospective, reports of milestone attainment, we could minimize the potential problem of faulty recall. These requirements follow the suggestions of Fenson et al. (1994), who developed a parent-based measure for measuring language acquisition. As for format, our checklist items were modeled after those used by Adolph, Biu, Pethkongathan, and Young (2002), and we included items like those on the Denver Developmental Screening Test (Frankenburg, Dodds, Archer, Shapiro, & Bresnick, 1990) and the Alberta Infant Motor Scale (Piper & Darrah, 1994). Our checklist requested daily entries, which made the process routine for parents. With such daily recording we obtained a fine-grained longitudinal record of developmental change during infancy, a time when "even weekly observations may miss the critical transitions" (Thelen & Smith, 1998, p. 602). A crucial question is whether parents who used our checklist could be dependable reporters of their baby's attainments. Bodnarchuk and Eaton (2004) addressed this validity question by having home visitors, who were blind to what the parent had reported, observe 95 babies using Piper and Darrah's (1994) Alberta Infant Motor Scale. 
Twelve parent checklist milestones were matched to Alberta Infant Motor Scale (AIMS)' items to assess the level of agreement between what the parent had reported on the checklist and what the visitors saw. The checklist-visitor concordance rates ranged from 69% to 98%, and kappa's ranged from .31 to .96. These results clearly confirmed that parents can provide dependable reports of milestone attainment. Defining an Event Our daily checklist approach produced for each infant an array of daily readings, each with one of three possible values, observed, not observed, or missing. To apply survival analysis to such data, one needs to define a time-situated event that represents a transition from one discrete state to another. Consequently, an initial goal was to identify appropriate criteria for event definition. The choice of an appropriate event definition depends in part on the nature of the milestone and how abruptly it is attained. For some milestones, the transition from one status to another may be gradual; for others the transition may be sudden (Bushnell & Boudreau, 1993). In part, then, the appropriateness of an event-threshold definition is an empirical question, and one that we address below by considering different event definitions and their reliabilities. A non-walking baby might walk one day and then not do so again for many days. How is a milestone attainment to be determined if a transition is not abrupt and consistent? The first day a baby walks would be the obvious choice as the event, but a single observation is more vulnerable to errors than an event criteria based on multiple days of observations (Epstein, 1979). On the other hand, aggregation over multiple days would make the estimate of AOA less precise and more prone to loss due to missing observations. Because there are multiple ways to define an attainment, we considered different criteria for deciding that a milestone had been reached. The Problem of Unobserved Events Babies entered and left our study at different ages, which meant that a given infant may have attained one or more of the milestones before or after the period of parental observation. This reality leads to a complex data set, as depicted in Figure 1, which illustrates various possibilities for three milestones 1, 2, and 3. Some babies are observed to reach all three milestones, others are not. Baby B leaves the study before Milestone 3 is reached and Baby D enters the study after Milestone 1 had been attained. To exclude Baby F from analytic consideration of Milestones 1 and 2 would lead to an underestimation of the average ages at when babies reach those milestones. Cases in which the event is not observed are known as censored cases in survival analysis (e.g., Baby B Milestone 3 and Baby D Milestones 1 and 3). Because survival analysis assesses the risk of an event occurring at a specific time, both event occurrences and non-occurrences are informative, and survival analysis makes better use of the available information than more traditional analytic approaches for AOA data (see Singer & Willett, 2003, for a non-mathematical discussion of these issues). Predictors of AOA Survival analysis has another advantage: It allows for the statistical evaluation of covariates' influence on the timing of an event. We identified 11 commonly used predictors of infant development (e.g., gestational age, mother education, family income, etc.) and evaluated their potency in accounting for variation in attainment. 
By applying survival analysis to our diary data, we could identify factors related to individual variation in developmental rate. Event Definitions We focused on three age-related events. For each, the baby had to sustain the posture or activity over time, as the following descriptions from the parent instructions illustrate. Drawings for each milestone were provided to the parent, and descriptions of the three milestones follow: Sit. "Sits up alone (not propped on pillows or a chair) without using hands for support for at least 30 seconds. Back is straight. Baby often uses hands to play with a toy." Crawl. "Uses only hands and knees for support. Baby's back is straight and doesn't sag. The knees are under the hips, and the elbows are under the shoulders. Only check this skill if you see your baby continuously go 10 feet or more (this will involve several consecutive crawling steps)." Walk. "Walks alone more than 10ft (3m). This item should be marked as observed when the baby uses walking as the main means of getting around, although the baby may still fall. Baby can walk across the room without your help and without holding onto furniture for support." Recruitment and Procedure Participating families were recruited primarily from a brochure distributed to new mothers at the largest hospital maternity ward in the city and from a packet for new mothers at a second hospital. The brochure invited parents to call our study office. Others learned of the research in a variety of ways: from a newspaper article about the study, from a news segment on a local television news program, from attending a birth fair, and from friends and relatives. Interested parents (N = 784) contacted the project coordinator and were told about the general nature of the study. If they agreed to participate, our coordinator recorded some initial information, which included the infant and mother's birth dates and the sex of the infant. When the baby was 2 months old, the coordinator mailed the parent a packet containing a consent form, the checklist, and postage-paid envelopes for returning the consent forms and checklists. Those who contacted us with infants older than 2 months were sent a package of materials immediately. Parents mailed back completed forms monthly, and after they ceased reporting, we sent them a small gift and a Baby of Science diploma. Participants General information about the participants was obtained and covered issues such as family income, mother education, smoking and alcohol use during pregnancy, and birth order, birth weight, and gestational age. The median ages of the infants at the start and end of recording were 10.1 weeks (range = 4.1-53.3) and 44.1 weeks (range = 8.1-98.1), respectively. Information about the infants is summarized for 11 variables shown in Table 1. The infant's birth order in the family was recorded, as was the type of delivery. Information about the pregnancy, such as maternal smoking and alcohol ingestion, was coded dichotomously, and gestational age in weeks was calculated as the difference in weeks between the actual birth date and the mother-reported due date. Other birth information used was birth length and ponderal index, a measure of infant chubbiness (birth weight in grams / birth length in cm cubed × 100).
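As a quick worked example of the ponderal index formula just given (the weight and length below are hypothetical values, not taken from the sample):

```python
# Worked example of the ponderal index defined above: weight (g) / length (cm)^3 * 100.
# The birth weight and length are hypothetical values for a term newborn.
birth_weight_g = 3400.0
birth_length_cm = 50.0
ponderal_index = birth_weight_g / birth_length_cm ** 3 * 100
print(round(ponderal_index, 2))  # 2.72, within the range typically seen at term
```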
Checklist Data Of those initially registered for the project, 78% (n = 613) completed and returned at least one monthly checklist (the median number of monthly forms returned was 7). Thus, each participant had multiple records, one for each day of recording, which produced a total of 117,354 records, each with information about 31 different milestones (3.6 million bits of milestone information). Because daily checklists produce a huge amount of data, its management requires complex data manipulation programming. The SAS programming language has the necessary procedures for such data manipulation (see Eaton & Bodnarchuk, 2013). Attainment Event Definitions Age of first attainment (AOF). The simplest attainment event definition is the first observation of a milestone, and we calculated the AOF by subtracting the baby's birth date from the day of first observation and converted to weeks. One complication arises in that the milestone may have been reached prior to the start of observation. We handled this possibility by establishing from the checklist that a milestone had not been previously seen. More specifically, we evaluated the 7 days prior to the first observed attainment; at least 4 or more of those 7 days had to have been recorded as "not observed" (up to 3 days of the 7 days could have missing observations). The power of Proc Expand. As noted earlier, the AOF is not the only possible event definition, and one of our goals was to assess the reliability of additional threshold definitions. Given the large volume of daily observations, it is impractical to hand calculate alternatives to AOF. Fortunately, SAS software provides a solution with its Expand procedure, which is designed for the manipulation of timeseries data. It enables one to select intervals of varying lengths (e.g., 3, 5, or 7 days) and to calculate a wide variety of transformations from values in the chosen interval (e.g., to identify the median value). Moreover, one can apply the transformations to successive intervals (e.g., first to Days 1 to 5, then to Days 2 to 6, 3 to 7, etc.). We used Proc Expand to calculate and test several different threshold definitions. More stringent attainment criteria. In addition to AOF, we considered three other event definitions that used increasingly larger observational windows from which the attainment was determined. A window of an established number of days was successively applied to the date-ordered array of observations for a given baby. This moving window began when the checklist was started and ended for a particular milestone when a specified number of cases of the milestone being observed were first seen. For a 3-day window, we required that the first such window in which 2 passes were observed would encompass the threshold of attainment; we used the middle day of the three as the exact day of attainment. In a similar fashion, 5-and 7-day windows with three-and four-pass thresholds were also considered. Thus, we had a 2-of-3-day criterion, a 3-of-5-day criterion, and a 4-of-7-day criterion. From this perspective, the AOF attainment would be a 1-of-1-day criterion. The operation of the four definitions is illustrated in Figure 2, which illustrates how different patterns of observations (e.g., a pattern of saltatory change) will interact with the different definitions. Reliability. To assess the reliability, we divided an infant's daily records into two samples, one from even-numbered calendar days and the other from odd-numbered days. 
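To make the window-based definitions and the even/odd split concrete, the short Python sketch below applies a k-of-n moving window to one infant's date-ordered observations. It is only an illustrative analogue of the Proc Expand manipulations described above (the observation series and variable names are invented), and edge windows are handled loosely.

import numpy as np
import pandas as pd

# Date-ordered daily observations for one infant: 1 = observed, 0 = not observed, NaN = missing.
obs = pd.Series([0, 0, 0, 1, 0, 1, 1, np.nan, 1, 1, 1],
                index=pd.RangeIndex(300, 311, name="age_days"))

def attainment_day(obs, window, passes):
    """Middle day of the first window-day span containing at least `passes` observations.
    (1, 1) corresponds to AOF; (3, 2), (5, 3) and (7, 4) to the stricter criteria."""
    hits = obs.rolling(window, center=True, min_periods=1).sum()
    meets = hits[hits >= passes]
    return None if meets.empty else meets.index[0]

for window, passes in [(1, 1), (3, 2), (5, 3), (7, 4)]:
    print(window, passes, attainment_day(obs, window, passes))

# For the split-half check, the same definition is applied separately to the
# even- and odd-numbered days, and the two resulting ages are correlated across infants.
even_half = obs[obs.index % 2 == 0]
odd_half = obs[obs.index % 2 == 1]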
We then applied each event definition to each sample, first to the even-days' recordings and second to the odd-days' recordings. With two estimated dates of attainment for each baby, we could estimate a split-half reliability coefficient. This we did for each of three milestones and four event definitions. Predictor Variables We identified 11 individual difference variables, widely used as predictors of infant development, for use in the survival analysis (see Table 1). Because missing values for a predictor are not permissible in our survival analysis, we considered the 519 cases with complete data. To make the parameter estimates of survival analysis more readily interpretable, all predictors were transformed to have a zero value, either by centering or by assigning zero to one level of the variable (see Table 1). Reliability Analysis With three milestones and four event definitions each, we calculated 12 reliability estimates (see Table 2). Reliabilities for each of the three milestones are uniformly high and vary little by event definition. Apparently, once the event was observed for the first time, it was observed on most subsequent days. An abrupt onset means that the specific definition chosen does not much influence the calculated day of attainment (see Saltatory Change in Figure 2). Thus, we chose the simplest definition, AOF, for subsequent analyses because it minimizes missing data. Survival Analysis Participants joined and left our study at different ages, so we used an accelerated failure time regression model implemented with the SAS Lifereg procedure. We specified the most general distribution model, gamma, because it can accommodate many distribution shapes. A key product of the analysis is the hazard function (see Figure 3), which depicts the momentary "hazard" of attaining a particular milestone by age (if one has not reached it already). A related and more intuitively useful curve is the cumulative distribution function (see Figure 4), which presents the proportion of infants estimated to reach a milestone by age. Based on our data, 50% of infants would be expected to demonstrate sustained sitting, crawling, and walking by 25.6, 38.3, and 55.6 weeks, respectively. Interesting though such point estimates may be, the real advantages of survival analysis lie in its ability to relate various predictors to the age of event attainment. Parameter estimates for each of our 11 predictor variables are presented in Table 3. A positive coefficient indicates that an increase in the predictor is associated with an increase in the time to the event (later AOA), whereas a negative coefficient indicates that an increase in the predictor is associated with a decrease in time to the event (earlier AOA). Thus, for all three milestones, the positive coefficients for mother age mean that additional years of mother age predict later attainment. In contrast, gestational age has negative coefficients, which mean that longer gestation shifts attainment to a younger age. An advantage of an AOA approach is that a variable's influence can be expressed on an easily understood metric, age. We illustrate this by estimating the ages at which the babies of different-aged mothers will sit, crawl, and walk. We selected mother ages of 26 and 36 years, which are approximately 1 SD on either side of the median mother age of 31 years.
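A rough Python analogue of this regression, and of the mother-age comparison described next, is sketched below with the lifelines package. Note the substitutions: lifelines' Weibull accelerated failure time model stands in for the generalized gamma distribution fitted with SAS PROC LIFEREG, and the simulated data frame and column names are invented for illustration.

import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(0)
n = 200
mom_age_c = rng.normal(0, 5, n)        # mother age centred on 31 years
gest_age_c = rng.normal(0, 1.5, n)     # gestational age centred on its mean
# Toy attainment ages (weeks): later for older mothers, earlier for longer gestation.
weeks = 38 * np.exp(0.01 * mom_age_c - 0.02 * gest_age_c) * rng.weibull(8, n)
event = (weeks < 60).astype(int)       # right-censor any infant not yet attained by week 60
weeks = np.minimum(weeks, 60)

df = pd.DataFrame({"weeks": weeks, "event": event,
                   "mom_age_c": mom_age_c, "gest_age_c": gest_age_c})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="weeks", event_col="event")
aft.print_summary()  # positive coefficient = later attainment; negative = earlier

# Predicted median attainment age for mothers aged 26 vs. 36 (all other covariates at 0),
# the analogue of the two appended contrast observations used with PROC LIFEREG.
contrast = pd.DataFrame({"mom_age_c": [-5.0, 5.0], "gest_age_c": [0.0, 0.0]})
print(aft.predict_median(contrast))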
We then created two contrast observations where all covariates are constant (set to 0) except for mother age; one was set to 26 years (-5) and the other to 36 years (+5). These observations were appended to our actual data set following Allison (2010, p. 110), and PROC LIFEREG was rerun. This procedure generates predictions for the two contrast observations without influencing the estimation process. The resulting estimates are presented in Table 4, where it can be seen that the baby of the 26-year-old is predicted to reach these motor milestones earlier than the baby of the 36-year-old. Discussion Not one of the methodological elements of this article is new, nor has any one of them been developed by us. Indeed, all of them can be found in the scientific literature: age of milestone attainment (Shirley, 1933), prospective diary checklists (Adolph et al., 2002), the SAS Expand procedure (Low et al., 2006), and survival analysis applied to child development data (Singer, Fuller, Keiley, & Wolf, 1998). What is new here is the combination of these elements into a framework that makes available to developmental researchers a feasible, flexible, and practical approach to (a) collect AOA data, (b) manipulate and summarize it, and (c) analyze it with appropriate statistical techniques. Furthermore, this approach has revealed substantive findings, to which we now turn. Little methodological work has been done on the measurement of motor milestone events, and we implemented a practical split-half approach. The reliability issue highlighted for us the importance of how the onset of an event is defined, an issue central in survival analysis. Although we ultimately used the first day of attainment in our analysis, we considered other event definitions. In this regard, the SAS Expand procedure is tremendously flexible and powerful and can readily implement almost any event definition one might articulate numerically. This procedure also allows for the processing of the large volume of data generated by the longitudinal implementation of daily checklists. Our reliability results tell us that differences due to measurement factors (odd-vs. even-day recording; or 1-, 3-, 5-, or 7-day aggregation intervals) are miniscule compared to differences among babies. Of course this conclusion is limited to the present milestones, and there may well be milestones whose onset is more gradual and intermittent. For those cases, our age-of-first-attainment event definition may be less appropriate, and reliability results may be poorer. In the case of sitting, crawling, and walking, however, our reliability results confirmed that differences among babies generalize beyond the details of the specific definitions we considered. The question then becomes, what is responsible for this variation? Survival analysis provides some new clues. Two predictors emerged from our analysis, gestational age and mother age. The finding that later gestational age at birth is associated with earlier attainment has been reported in twin studies (Peter, Vainder, & Livshits, 1999) and is consonant with the idea that conceptual age is important. However, a 1-week difference in gestational age is associated with less than a 1-week shift in milestone attainment, which suggests that post-gestational events are influential. More surprising was our finding of a link between mother age and gross motor attainment, a link that, to our knowledge, has not been made previously. 
The babies of younger mothers tended to reach these milestones sooner, even after we controlled for 10 other factors, including birth order. Mother age is undoubtedly a crude proxy for other influences, from biological variables associated with pregnancy to post-natal social factors, and we do not know which of these influences are critical. Having been alerted to the possible importance of mother age, we have found two related findings. Schum et al. (2001) reported, without comment, that earlier completion of toilet training is associated with younger maternal age, and Adams, Jones, Esmail, and Mitchell (2004) found that "the younger the mother, the sooner the baby slept through the night" (p. 98). Such results hint at some kind of general maternal age effect on developmental rate. Our initial inclusion of mother age in the analysis was a pro forma choice on our part, so we were surprised when it emerged as a predictor of motor milestones. Our findings reveal the potential of an AOA approach and buttress Bornstein, Putnick, Suwalsky, and Gini's (2006) call for more research attention to mother age as an influence on development. Our milestones study also uncovered another unanticipated phenomenon-parents' great enthusiasm for observing their own baby. With little incentive, prodding, or follow-up from us, those parents who started the daily checklist procedure persisted for many months (7 on average). Of course, their enthusiasm could be a testament to the unique power of babies to capture the attention of adults, but we know that parental regard and concern extends to older offspring as well. The interest of parents in observing their infants provides researchers with an opportunity to show parents how their infant develops. Parents are often unaware of what to look for or what constitutes a change in development. The diary provides a guide that essentially translates a vague concept of motor development into specific, observable facets of behavior. Parents thus gain a greater understanding and appreciation of their infant's progress. We believe that the milestones approach could be successfully applied to older groups if the recording task is simple and convenient. Technological developments (e.g., automated messaging, e-mail, and mobile apps) may well make feasible AOA studies that would have been prohibitively expensive in the past. The downside of parental concern is the possibility of bias in their observations. We minimized this potential by focusing on overt behaviors, low-inference coding definitions, and same-day observations and found strong evidence for validity (Bodnarchuk & Eaton, 2004). There are reasons for optimism about the validity issue. First, there are many important developmental phenomena about which parents have few preconceived expectations. For example, a child's ability to point to an interesting event has implications for a theory of mind, but few parents would have any expectation about when a baby "should" point. Furthermore, investigators could include checklist items designed to identify suspect or careless recording. A parent-based AOA model has many potential applications. For example, nutrition studies typically use standardized tests like the Bayley Scales of Infant Development (Bayley, 2005) as outcome measures. Such tests are expensive and are usually restricted to one post-treatment occasion. In contrast, parental AOA checklist measures would be more economical and might well be more sensitive to nutritional interventions. 
This tool also has the potential to improve our methods of developmental surveillance through the development and use of simple forms that parents, with regular observations, could use to track their child's developmental progress. Not only can children be followed and assessed before they reach school-age, as recommended by school and health practitioners, but because parents can report from a distance, the technique could be useful in remote locations. An AOA approach has the potential to identify at an earlier age children who lag their peers. Early identification could, in turn, facilitate more timely intervention. An AOA approach to developmental differences specifies not only when a developmental event is typically reached but also what other variables may influence it, and it combines diary checklist methods with existing analytic tools that are within the reach of most investigators. Such an approach makes age part of the dependent variable (Wohlwill, 1973), and between-individual variation in rate of development then provides clues about causal processes (e.g., mother age). This method also engaged and interested parents, who maintained a high level of cooperation and enthusiasm over many months. Researchers should capitalize on such parental enthusiasm by following the examples of human enterprises that successfully harnessed volunteer contributions (e.g., Winchester, 2003). An AOA methodology has the potential to do so. Authors' Note The participation of the parent and infant participants of the Milestone Study provided the foundation for this work, which was built upon by the combined efforts of Wendy Guenette, Kara Bazylewski, Meghan Duncan, Cori Syrnyk, Denee Ryz, Amy de Jaeger, and Carolyn Barg.
2019-05-06T14:06:46.512Z
2014-04-03T00:00:00.000
{ "year": 2014, "sha1": "cb2cd7e31ae03c109129f338b21228028ae54967", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1177/2158244014529775", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "40dcf0c0cc222b71dd1a1a6335f00e3d8e3344ce", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
140032376
pes2o/s2orc
v3-fos-license
Area-Selective Growth of Aligned ZnO Nanorod Arrays for MEMS Device Applications † ZnO nanorod (NR) arrays with good vertical alignment were selectively grown on microscale patterned surfaces by a MEMS-compatible, low-temperature chemical-bath deposition (CBD) method. The direct-current (DC) sputtered and subsequently annealed ZnO seed-layer was found to have a crucial effect on ZnO NR growth. Depending on the pre-annealing temperature between 200 °C and 700 °C, which is compatible with our microcantilever fabrication process, diameters and area densities of the NRs of 60–99 nm and 17–27 μm⁻² were observed, respectively, with the best alignment at 600 °C. A surface-area enlargement factor of 48 was achieved with respect to a planar ZnO layer, indicating the potential of ZnO NR arrays for MEMS applications, such as gas sensing. Introduction In recent years, one-dimensional (1D) ZnO nanostructures have received great interest due to their potential applications in electronic and optoelectronic devices, such as solar cells, gas sensors, photodetectors, light-emitting diodes and surface acoustic wave devices. Various methods have been studied to fabricate aligned 1D ZnO nanostructures, including thermal vacuum evaporation (TVE), electron beam gun evaporation, molecular beam epitaxy (MBE), metal-organic chemical vapor deposition (MOCVD), sputtering techniques, chemical bath deposition (CBD), etc. [1]. However, it remains challenging to grow functional ZnO nanostructures as highly aligned and oriented arrays on micro- and nanoscale surfaces, which is crucial for the performance of these devices. The low-temperature CBD method is a simple, catalyst-free and low-cost process that requires only basic equipment [2]. However, before the growth of ZnO NRs, an aqueous spin-coating method or a sol-gel alkaline solution is usually required for seed-layer (SL) coating, approaches that are not compatible with MEMS fabrication [3]. In this paper, we report a CBD-based two-step process that uses a DC sputtering/annealing (S/A) method for SL deposition. The area-selective growth of aligned ZnO NR arrays was achieved on n-type silicon microcantilevers, indicating its applicability for MEMS device fabrication. In comparison with a sol-gel seed-layer deposition method, the DC-sputtered ZnO seed-layer has some advantages, such as easy thickness control, good morphology and high process repeatability. Compared with radio-frequency (RF) magnetron sputtering, which has a base pressure of about 9 × 10⁻⁵ Pa and a working Ar pressure of 5 × 10⁻³ Pa [2], DC sputtering can be operated at a moderate working pressure of 640 Pa and has a much lower power consumption. Furthermore, it has been found that the properties of ZnO NRs depend on the pre-annealing of the seed-layer, whether deposited by sol-gel methods or by RF magnetron sputtering, so in this paper DC-sputtered zinc films were annealed at different temperatures to prepare seed-layers for subsequent ZnO NR growth by CBD. We find that the crystallinity, resistivity and morphology of the ZnO NRs strongly depend on the pre-annealing of the seed-layers.
Results and Discussion Figure 2 shows SEM images, taken at a tilt angle of 30°, of the ZnO NRs grown on the ZnO seed-layers annealed at different temperatures; all the NRs were grown in the chemical bath at 90 °C for 3 h. As can be seen from the images, the obtained ZnO NRs were vertically oriented with respect to the substrates, and the NRs based on the seed-layers annealed at 600 °C (NRs-600) tend to have the best orientation. The densities and average diameters of the corresponding ZnO NRs on the seed-layers annealed at different temperatures were calculated from Figure 2 and are listed in Table 1. The NRs-600 arrays possess the highest density and a relatively large diameter, which means a high surface-to-volume ratio, an important factor for improving MEMS device performance. XRD was further used to characterize the crystallinity of the different NR arrays. As shown in Figure 3, the indexed diffraction peaks are consistent with the standard values of the bulk ZnO crystal (JCPDS 36-1451) and all the ZnO NRs have a wurtzite structure. The sharp and strong (002) peak indicates that the ZnO NRs have a preferential c-axis orientation on sputtered/annealed ZnO SLs. Besides, we did not observe the (100) and (101) peaks which can be found in some samples grown on sol-gel seed-layers [5]. The (002) diffraction peak of NRs-600 has the highest intensity, showing that the NRs-600 arrays have the strongest (002) c-axis orientation preference, consistent with the best vertical alignment seen in Figure 2. Furthermore, the resistivity of the ZnO NR arrays was measured using a four-point probe, and the results are depicted in Figure 4. The highest resistivity, observed for NRs-600, is thought to be caused by their superior vertical alignment, as visible in Figures 2 and 3. We reported the fabrication and humidity-sensing performance of ZnO-NR-patterned piezoresistive silicon MEMS microcantilevers [6], based on the aforementioned two-step deposition method. In the present study, ZnO NRs were coated on the back surface of the microcantilevers; a schematic and an SEM image of a microcantilever coated with NRs of 6 µm in length solely on its back surface are displayed in Figures 5 and 6, respectively. A surface-area enlargement factor of 48 was found, indicating considerable application potential for MEMS devices. Conclusions In this work, a two-step ZnO NR array growth method based on a DC-sputtered/annealed ZnO seed-layer and chemical-bath deposition was introduced, and ZnO NR arrays grown on seed-layers annealed at different temperatures were characterized to study their area density, diameter, crystallinity and resistivity. The SEM images, XRD patterns and four-point-probe resistivity measurements illustrate that the NR arrays grown on the seed-layer annealed at 600 °C have the best c-axis orientation and vertical alignment, as well as the highest surface-area enlargement factor of 48 (for NRs 6 µm in length). Next, further properties relevant for MEMS device applications, e.g., growth of NRs on different materials, vacancy concentrations and optical properties, will be investigated.
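As a rough plausibility check on the reported enlargement factor (our own back-of-the-envelope estimate, not a calculation given in the paper), the rods can be modelled as upright cylinders, so that the factor relative to a flat film is approximately 1 + (area density) × π × diameter × length. With representative values from Table 1 and the 6 µm rod length:

import math

density = 27      # rods per square micrometre (upper value reported in Table 1)
diameter = 0.095  # micrometres, representative of the reported 60-99 nm range
length = 6.0      # micrometres, rod length quoted for the cantilever coating

enlargement = 1 + density * math.pi * diameter * length
print(round(enlargement, 1))  # ~49, the same order as the reported factor of 48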
Figure 1 depicts the fabrication process steps for the area-selective growth of aligned ZnO nanorod (NR) arrays on n-type silicon, and the details are described as follows: (a) The fabrication started from a sample with dimensions of 30 × 30 mm²; the sample was cut from an n-type bulk-silicon wafer (crystal orientation: <100>; resistivity: 1-10 Ω·cm; thickness: 275 ± 15 µm; diameter: 100 ± 0.13 mm) and cleaned by putting it into a boiling acid mixture (H₂O₂ (30%) and H₂SO₄ (96%), v:v = 1:1) for 5 min. (b) A positive photoresist (AZ 5214E, Merck, Kenilworth, NJ, USA) was utilized during the subsequent photolithography step, and an MJB4 mask aligner (SÜSS MicroTec AG, Garching, Germany) was used to expose the pattern area. Prior to the exposure, the photoresist spin-coating procedure was run at a speed of 5000 rpm for 35 s to create a homogeneous photoresist layer 1.5 µm in thickness. After the exposure, the sample was dipped and developed in AZ 726 MIF developer solution (MicroChemicals, Merck) for 60 s, followed by DI water rinsing and nitrogen purging. (c) Afterwards, a polycrystalline Zn film was prepared by sputtering Zn (99.99%) using high-purity Ar (99.99%) gas under 50 µA direct current (DC), at room temperature (25 °C) and a working pressure of 640 Pa. To obtain selective deposition of the sputtered film on the patterned area, the excess ZnO was removed using photoresist lift-off. (d) The sputtered Zn film was then annealed in an oven in open atmosphere; to investigate the influence of the annealing temperature on nanorod growth, the sputtered samples were annealed at 200 °C, 300 °C, 400 °C, 500 °C, 600 °C and 700 °C, respectively. (e) Once the seed-layer had been prepared, a subsequent photolithography step corresponding to step (b) was implemented to protect the substrate during the next CBD process. (f) ZnO NRs were grown by dipping the sample in an aqueous solution consisting of 30 mmol/L zinc nitrate (Zn(NO₃)₂) and 30 mmol/L hexamethylenetetramine (HMT, C₆H₁₂N₄). The deposition was carried out in a temperature-controlled chemical reactor, additionally equipped with a thermometer and a reflux condenser, for 3 h at 90 °C. After the reaction, the sample was cleaned with acetone and deionized water, successively [4]. Figure 1. Schematic diagram of the area-selective growth of aligned ZnO nanorods (NRs) on n-type silicon. Figure 2. Inclined-view (30°) SEM graphs of ZnO NRs grown on S/A SLs at different pre-annealing temperatures from 200 °C to 700 °C. Figure 3. XRD spectra of ZnO NRs grown on SLs pre-annealed at different temperatures. Figure 4. Resistivity of ZnO NR arrays grown on SLs annealed at different temperatures. Figure 5. Schematic graph of a silicon microcantilever patterned with ZnO NRs on its back surface. Figure 6. Inclined-view SEM graph of a Si microcantilever with ZnO NRs grown on its back surface (inset). Table 1. Summary of the characterized parameters of NRs grown at different pre-annealing temperatures.
2019-03-07T12:26:49.626Z
2018-11-23T00:00:00.000
{ "year": 2018, "sha1": "5a1c461bde5827ea688b5cad37ee3813f8d4b88c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2504-3900/2/13/887/pdf?version=1542965916", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "5a1c461bde5827ea688b5cad37ee3813f8d4b88c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
53982367
pes2o/s2orc
v3-fos-license
Evaluating the difference in employee engagement before and after business and cultural transformation interventions Levels of engagement within an organization can have substantial and measurable impacts upon the outputs of an organization. The objective of this exploratory study was to establish the difference between employee engagement before and after a business and culture transformation intervention in the workplace. The participants, from an IT firm, represented all employee levels in the organization. The pre-intervention and post-intervention samples consisted of 427 and 253 individuals, respectively. The Gallup q12 method was used to determine if differences exist in employee engagement before and after a two-year pre- and post-merger intervention. The main findings of the research indicated that the business and culture transformation interventions had a limited impact on employee engagement, and these findings are discussed accordingly. INTRODUCTION Organizations' work practices and the workforce have changed dramatically over the past 25 years, due to technological advances, demographic shifts and continual demands for innovation (Kampschroer and Heerwagen, 2005). According to Pech and Slade (2006), globalization, speed, and ambiguity in the business landscape demand the highest levels of fitness to facilitate organizational survival. In such volatile environments, competitors with the correct combination of economic output, trust, innovation and leadership have the greatest prospects of survival. Pech and Slade further state that no organization can afford to underutilize its employee energy, and that employee engagement is a critical element of this underlying energy. In support of this, Buckingham and Coffman (1999) are of the opinion that the payoff for an energized work environment is enormous: improved retention, productivity and employee engagement, and therefore reduced turnover. Traditional approaches to organizational and people development, however, tend to focus more on the laws of economics, with a view to maximizing financial return on employer investment. These approaches can be traced back to the influential, innovative writings of Taylor (1911), in which strategies for optimizing organizational deliverables focused on matters such as recruitment, job design and motivation based on financial incentive. Researchers in the human resource discipline (McGregor, 1957; Mayo, 1949) were in opposition to so-called Taylorism, and argued that the mechanistic approach of Taylor and his followers was both flawed and unsustainable, largely because it neglected the importance of group dynamics, which contribute both to employees' attitudes to work and to their output. Such views initiated a range of theories in the 1950s and 1960s which focused not only on reducing work to its bare elements, but on enriching it by attending to motivators of individual and team development (Herzberg, 1966). This research led to the so-called Human Relations Approach, which focuses on workers themselves and suggests strong worker relationships, recognition and achievement as motivators for increased productivity (Daft, 1997).
During the 1980s and early 1990s the hard-line economic rationalist view surfaced anew, although more recently this has again given way to rhetoric built around ideas of job satisfaction, employee empowerment, self-determination and the need to harness human, not just economic, capital (Leana and Van Buren, 1999; Nahapiet and Ghoshal, 1998). Also during the 1980s, the popularity of examining the concepts of organizational culture and organizational climate surged as leaders around the world became increasingly aware of the ways in which organizational culture and climate can affect organizations and employees. Warren Bennis uses the analogy that a good organization can articulate its culture, but an incompetent one is incapable of explaining its culture (Bennis, 1989). Organizational culture helps people better understand the hidden and complex aspects of organizational life (Schein, 1992). According to Harrison and Stokes (1992), an organization's culture is made up of those aspects of the organization that give it a particular climate or feel; culture is to an organization what personality is to an individual; it is that distinctive constellation of beliefs, values, work styles and relationships that distinguishes one organization from another. Many authors, including Schein (1992), have drawn sharp lines of demarcation between the constructs of organizational culture and climate. Rousseau (2004) differentiated between the two constructs on the basis of climate being the descriptive beliefs and perceptions individuals hold of the organization, whereas culture is the shared values, beliefs and expectations that develop from social interactions with the organization. Schneider and Bowen (1995) refer to the climate in an organization as the perceptions that employees share about what is important in the organization, obtained through their experiences on the job and their perceptions of the kinds of behaviors that management expects and supports. Work climate within an organization refers to how organizational environments are perceived and interpreted by its employees (James and James, 1989; James and Jones, 1974). Despite the fact that the interdependence between the concepts of organizational culture and climate is of vital importance for both theoretical and practical reasons, most researchers have ignored the similarities and differences between organizational climate and organizational culture (Fey, 2002). In this regard, a large part of the studies (Denison and Mishra, 1995; Kotter and Hesket, 1992; Deal and Kennedy, 1982) examined the relationship between the overall performance of organizations and organizational culture. Another part of the studies focused on examining the association between organizational culture and climate and relevant organizational issues such as person-environment fit, creativity, innovation or managerial values (Wallace, Hunts and Richards, 1999; Verbeke, Volgering and Hessels, 1998).
Recent developments in employee opinion research and emerging models of effective leadership have introduced the term "employee engagement" to management literature.Employee engagement has been defined in many different ways and the definitions and measures often sound much like other better known and established constructs like organizational commitment and organizational citizenship behavior (Robinson et al., 2004).Most often it has been defined broadly as emotional and intellectual commitment to the organization or the amount of discretionary effort exhibited by employees in their jobs.Employee engagement can be contributed to employees being involved in their work to such an extent that it has a positive impact on the organization's interest thus aiding in any change-process like cultural-transformation which the company is experiencing (Baumruk, 2004;Richman, 2006;Shaw, 2005).Through measurement of this engagement before and after the change-process, a clearer distinction can be made on the variance of engagement through-out this transformation. The Charter Institute for Personnel and Development (CIPD, 2006a) discusses the impact that engagement has on the sense of community within an organization.Whilst managerial actions are important, the results of the CIPD survey (CIPD 2006c) suggest that relationships among fellow workers are just as important in contributing towards job satisfaction.On the other hand, the impact of the organizational climate and the extent to which engagement is embedded in the organization (or individual team or department) are vital for employees' willingness to stay on with their employer and for the extent to which they advocate their organization.This "affective engagement" is strongly related to positive discretionary behavior -or "going the extra mile".(CIPD, 2006) Organizations have traditionally relied on financial measures or hard numbers to evaluate their performance, value and health.According to Pfeffer (1998), although metrics such as profitability, revenue, and cash flow remain important financial indicators of effective performance, the so-called "soft", human-oriented measures such as employee attitudes, traits and perceptions are also now being recognized as important predictors of employee behavior and performance (Pfeffer,1998).For instance, researchers have found a significant positive relationship between employee cognitive attitudes and performance (Petty et al., 1984;Ostroff, 1992), personality traits and job performance (Barrick and Mount, 1991;Tett et al., 1991), and emotions and favorable job outcomes (Staw et al., 1994). Moreover, a recent meta-analysis conducted by the Gallup Organization concluded that the most profitable work units of companies consist of people doing what they do best, with people they like, and with a strong sense of psychological ownership for the outcomes of their work (GWJ, 2006). As highlighted by Robinson et al. (2004), it makes sense for organizations to monitor the engagement levels of employees and to increase these levels if necessary.CIPD (2007a) also highlights the importance of monitoring levels of employee engagement as a key element in managing the organization's human capital.These findings imply that levels of engagement within an organization can have substantial and measurable impacts on an organization's outputs, be they profit, productivity, customer satisfaction, achievement of strategies and objectives, successful implementation of change, or transformation initiatives. 
Purpose of the study This study will focus on the analysis of survey data collected from a recent consulting effort in an Information Technology (IT) company in South Africa, to demonstrate through application the utility and validity of survey-based feedback as a tool for organizational change and development, and in an effort to justify the importance of assessing worker engagement through use of the Gallup q12. As aforementioned, the organization examined in this assessment operates in the IT sector, and the vast majority of changes taking place in the organization were prompted by a two-year process of post-merger business and culture transformation. For the purpose of this study, the feedback from a data-based survey will be assessed. The Gallup q12 was selected as the diagnostic tool to explore the progress and success of the change interventions implemented over a period of two years. According to the Gallup Organization (1998), feedback serves fundamentally as a powerful tool for change. Moreover, it is a particularly powerful method for examining the relationship between employee attitudes and perceptions, and actual behaviour in the workplace. Employee engagement is more often the intended outcome of employee surveying. The survey is the first step in building a chain of values that underpins the sort of organizational environment that supports and contributes to organizational success. The primary goal is to identify and measure the pre- and post-intervention scores of the employee-engagement elements that are most powerfully linked to culture transformation. One particularly important area that was specifically examined during the change initiative is the dispositions and attitudes of organizational members at all levels of the organization: employees, managers and leaders. Hypothesis In all employee engagement studies and methodologies, the importance of measuring the impact of change interventions is emphasised. For purposes of this study it is assumed that there are barriers to measuring levels of employee engagement during business and culture change interventions. From the above-mentioned empirical objectives, a hypothesis for the empirical investigation is formulated as follows: H1: There is no statistically significant difference between the mean employee engagement before and after business and culture transformation interventions. Rationale Based on the fact that no evidence could be found in the relevant literature to support a significant difference between the mean employee engagement before and after business and culture transformation interventions, the hypothesis is stated in a non-directional way. In order to test the hypothesis, ANOVA was used to compare the pre- and post-intervention groups. The results of these analyses are reported in Tables 5 and 7. The hypothesis is therefore supported by the empirical evidence. LITERATURE REVIEW A synthesis and evaluation of the literature is presented in this section. Aspects that are addressed include: the definition of employee engagement; the positioning of employee engagement; key drivers of engagement; employee engagement surveys; and the business case for employee engagement. Exploring employee engagement In recent years, there has been a great deal of interest in employee engagement. Many researchers claimed that employee engagement predicts employee outcomes, organizational success and financial performance, e.g.
total shareholder return (Bates, 2004;Baumruk, 2004;Harter et al., 2002;Richman, 2006).At the same time, it was reported that employee engagement was on the decline and that there was a deepening disengagement among employees today (Bates, 2004;Richman, 2006).It has even been reported that today the majority of workers are not fully engaged or are disengaged, leading to a so-called engagement gap, that is costing e.g.US businesses $300 billion a year in lost productivity (Bates, 2004;Johnson, 2004;Kowalski, 2003). Unfortunately, much of what has been written about employee engagement comes from practitioner literature and consulting firms.There is a surprising lack of research on employee engagement in the academic literature (Robinson et al., 2004). According to the Gallup Organization, USA (GWJ 2006), the engaged employee is someone who is 100 percent psychologically committed to their role.They thrill to the challenge of their daily work.They are in roles that utilize their talents, they know the scope of their job, and are always looking for new and different ways of achieving the outcomes of their role. In academic literature, a number of definitions have been provided.Kahn (1992, p. 694) defines personal engagement as the harnessing of organization members' selves to their work roles; in engagement, people employ and express themselves physically, cognitively, and emotionally during role performances.Thus, according to Kahn (1990Kahn ( , 1992)), engagement means to be psychologically present when occupying and performing an organizational role.Rothbard (2001, p. 656) also defines engagement as psychological presence and, furthermore, states that it involves two critical components: attention and absorption.Attention refers to cognitive availability and the amount of time one spends thinking about a role, while absorption means being engrossed in a role and refers to the intensity of one's focus on a role. Employee engagement is a multidimensional construct.Employees can be emotionally, cognitively or physically engaged.In their study, Luthans and Peterson (2002) proposed Kahn's (1990Kahn's ( , 1992) ) work on personal engagement, which provides a convergent theory for Gallup's empirically derived employee engagement.Schmidt (2004) defines engagement as bringing satisfaction and commitment together.Whilst satisfaction addresses the more emotional or attitudinal element, commitment has bearing on the motivational and physical elements.Schmidt (2004) contends that although satisfaction and commitment are the two key elements of engagement, either of them on its own is sufficient to guarantee engagement.Schaufeli et al. (2004, p. 293) define engagement as a positive, fulfilling, work-related state of mind that is characterized by vigour, dedication, and absorption.They further state that engagement is not a momentary and specific state, but rather a more persistent and pervasive affective-cognitive state that is not focused on any particular object, event, individual, or behavior (p.293). Engagement is, however, different from satisfaction.Gubman (2004, p. 
13) states that engagement means "a heightened emotional connection to a job and organization that goes beyond satisfaction", that enables people to perform well, and makes them want to stay with their employers and say good things about them.The CIPD Annual Survey report (2006c) defines engagement in terms of three dimensions of employee engagement: The survey report states that the very engaged will speak out as advocates of their organization, in what they describe as a 'win-win' situation for both the employee and the employer, thus driving productivity. Some writers discuss the varying degrees of engagement that employees can experience.Meere (2005) describes three levels of engagement: 1. Engaged -employees who work with passion and feel a profound connection with their organization.They drive innovation and move the organization forward; 2.Not engagedemployees who attend and participate at work but are merely time-serving and put no passion or energy into their work; and 3. Disengagedemployees who are unhappy at work and who act out their unhappiness at work.According to Meere (2005), these employees undermine the work of their engaged colleagues on a daily basis. While the link between employee attitudes, perceptions and job performance has been mixed in past research, it has been demonstrated that engagement, as defined here, has a stronger relationship with important employee and organizational outcomes.The reason being that engagement is a work-specific attitude and therefore likely to impact directly on work-related activities and attitudes. Having reviewed the literature of Tasker, The Gallup Organization, the CIPD, Buckingham and Coffman, Kahn, Schmidt, Meere and others, the commentary on the evolution of employee engagement is summarized in the following points: 1. Definitions of engagement, or characteristics of an engaged workforce, focus on motivation, satisfaction and commitment, finding meaning at work, taking pride in and advocating the organization.Besides, having some connection with the organization's overall strategy and objectives, and wanting and being able to work to achieve them, are key elements of engagement.2. There is no 'one size fits all' model of engagement leadership.Effective management, open two-way communication, pay and benefits, fair and equal treatment, employing the 'right' workforce, career development and training, working hours, as well as health and safety are all aspects of the work environment which organizations are able control and influence levels of employee engagement.3. A notable feature of these definitions is, in fact, their lack of precision or definition.Furthermore the demands placed upon the organization regarding the quality of its leadership systems, and the structure and design of roles to achieve or maintain the desired engagement are not expressed explicitly. 4. 
Each of the definitions listed above apparently indicates that an increase in employee engagement supports improved productivity, continuous improvement, better staff retention and a commitment to the organization's success.5.It builds upon and goes further than 'commitment' and 'motivation' in the management literature (Woodruffe, 2005, as cited in CIPD, 2006a) Despite the lack of a definitional consensus on employee engagement, there is a clear indication in the above definitions of employee engagement providing evidence of the connection between increased employee engagement and improved business outcomes.Research done by the Gallup Organization over the past 30 years supports these views (Gallup, 1992(Gallup, -1998)). To summarise: although the definition and meaning of engagement in the practitioner literature often overlaps with other constructs, academic literature has defined it as a distinct and unique construct that consists of cognitive, emotional and behavioral components associated with individual role performance.Furthermore, engagement can be distinguished from several related constructs, most notably organizational commitment, organizational citizenship, behaviour, and job involvement.As noted in the above literature, employees can be motivated and committed to their jobs without necessarily engaging with the overall strategies and objectives of the organization, or without really feeling the wider impact of their outputs and efforts. Positioning of employee engagement The main reason behind the popularity of employee engagement is its predominantly positive consequences for organizations.There is a general belief that there is a connection between employee engagement and business results (Harter et al., 2002).In point of fact, Luthans and Peterson (2002) state that Gallup has empirically determined employee engagement to be a significant predictor of desirable organizational outcomes such as customer satisfaction, retention, productivity and profitability (see Buckingham and Coffman, 1999). According to Joo and Mclean (2006), engaged employees are strong organizational assets for sustained competitive advantage, as well as a strategic asset.A strategic asset can be defined as a set of difficult-totrade-and-imitate, scarce, and specialized resources and capabilities that present an organization's competitive advantage (Amit and Shoemaker, 1993). 
Engaged employees provide organizations with a competitive advantage, as explained by the resourcebased view (RBV) of an organization (Joo and Mclean, 2006), and therefore employees should be engaged on a continuous basis.The resource-based view posits that human and organizational resources, more than physical, technical or financial resources, can provide an organization with sustained competitive advantage because these resources are particularly difficult to emulate (Lado and Wilson, 1994).The RBV points out firms can develop sustained competitive advantage only by creating value in a way that is rare and difficult for competitors to imitate (e.g.Barney, 1991, Grant, 1991;Peteraf, 1993;Foss, 1997).Effective talent-management policies and practices demonstrate commitment to human capital, resulting in more engaged employees and lower turnover.Consequently, employee engagement has a substantial impact on employee productivity and retention of talent.Employee engagement can, in fact, make or break any organization, according to Lockwood (2006).Martel (2003) is of the opinion that, in order to obtain high performance in post-industrial, intangible work that demands innovation, flexibility and speed, employers need to engage their employeesespecially by affording them participation, freedom and trust; this is the most comprehensive response to the ascendant post-industrial values of self-realization and self-actualization. Key drivers of engagement According to Gallup (Gallup Journal, 2006), supported by Aon Consulting and Hewitt Consulting, key drivers of employee engagement typically include the following: 1. Encouragement to develop skillsfocus on career planning and individual growth and development.2. Work-life balanceestablishment of a culture where leaders are role models of a balanced work-life.3. Belief in the organization's direction and leadershipawareness and understanding of the strategic direction of the organization.4. Praise/recognition for good workreward and recognition mechanisms 5. Being cared about as a personculture of caring.6. Competitive compensation and benefits programsformal mechanisms in place, e.g.incentive programs.7. Clear job expectationsawareness and understanding of what is expected of them.8. Resources for effective job performanceavailability of sufficient equipment and resources to all employees.9. Opportunity to use skillsequal opportunities to utilize current skills and develop new ones.Not all employees have the same sources of motivation or can be influenced to initiate action and change behavior by considering the same factors.Engagement is an individual construct and if it does not lead to business results, it must first impact on individual-level outcomes.Therefore, one of the biggest challenges that leaders face in the 21 st century, is how to motivate effectively, initiate change and sustain improved performance among employees.Factors that contribute to an employee's level of engagement are specific or variable for each individual.It then becomes imperative for leaders to determine which organizational factors contribute to employee engagement and must be able to enhance and maintain these factors, both at individual and group level (Harter et al., 2002). 
Employee engagement surveys Consulting firms like Gallup and Aon agree that one of the best ways to measure employee perceptions of such drivers remains the time-tested, solid employee opinion survey that takes employee engagement into account. Employee surveys are one of the most common forms of data collection used by researchers and practitioners. Employee engagement surveys are also a good tool for soliciting the ideas, perceptions, and opinions of employees in an organization. The results of these surveys communicate to employees what management deems important. According to a study by Accord Management Systems (AMS, 2004), employee engagement surveys provide data to help employees understand their organization's and work group's strengths and opportunities for improvement. They also provide a baseline of historic and normative comparisons, enabling employees to know how their organization is doing compared to others. The research done by AMS underlines, however, that surveying employees does not in itself create totally engaged or committed employees. One of the most effective tools for understanding and diagnosing perceptual issues such as employee engagement is the organizational survey. Data-based feedback, either using large-scale surveys or behaviourally based management rating scales, is one of the most powerful and effective forms of inducing positive change (Goodstein and Burke, 1991; Kanter, 1983). According to the Corporate Leadership Council (CLC, 2004) and Martel (2003), employee engagement surveys are designed to gauge employee engagement based on employees' perceptions of the work environment. Furthermore, when designed and executed well, practices that support talent management also support employee engagement (e.g. work-life balance programs such as flexi-time and shorter work-weeks, programs of reward and recognition, and performance management systems). Among both researchers and practitioners, employee surveys are being used increasingly for the simultaneous measurement of a broad range of work outcomes (such as job satisfaction or the now popular construct of employee engagement) as well as a multitude of potential determinants of these outcomes (CLC, 2004). The Gallup Organization designed and developed their Employee Engagement Surveys by initiating thousands of focus groups in 2,500 business, healthcare and education units worldwide. The questions were factor-analyzed and subjected to confirmatory factor analyses. Linking the empirical evidence to an established theory in management research seems desirable. A theoretical framework can be of use in further validation, understanding and testing of Gallup's conceptualization of engagement (Luthans and Peterson, 2002). They further state that by conceptually comparing the Gallup Workplace Audit, the Gallup q12 (Buckingham and Coffman, 1999), with Kahn's (1990) theoretically derived dimensions of engagement, there seems to be a conceptual fit, thus establishing a theoretical grounding for better understanding of employee engagement and a way to measure it through the Gallup q12 (Luthans and Peterson, 2002).
Employees are one of the most important role players within organizational structures, and it is because of their involvement, commitment and engagement that an organization remains competitive.Today's high-performance organizations recognize the fact that an active process of consultation with employees is essential in implementing strategy successfully, building the employer brand, and raising overall performance levels.The employee survey is a valuable management tool, and it is here to stay. Business case for employee engagement For more than 20 years, researchers and organizations have been looking at the organizational factors which engage (or disengage) employees.The idea of creating a more engaged workforce is no new idea.Research studies have been conducted to determine the connection between an engaged workforce and organizational performance.Although some research remains inconclusive, there is a growing body of work that suggests the existence of a connection between employee engagement and organizational performance.Marcus Buckingham and Curt Coffman (1999) found that employees who responded favourably to survey questions regarding engagement, worked in business units with higher levels of productivity, profit, retention and customer satisfaction.These researchers also found that the leader or manager, and not the pay, benefits, perks, etc., was the key to building and sustaining a strong workplace and an engaged workforce. Companies with engaged employees have better productivity, improved customer satisfaction, greater profitability and lower turnover than companies whose employees are not engaged in their work (Buckingham and Coffman, 1999).They further state that today we can't merely employ people's hands and tell them to leave their hearts, minds and spirit at home; and that today's workers are looking for various things in an employment relationship, amongst others a meaningful partnership with their workplace.They are of the opinion that workplaces which provide meaning and purpose and are fun, engaging, and energising will enjoy greater retention, higher productivity and lower turnover; and that leadership performance plays an integral role in creating this work environment. An analysis by Harter et al., 2002, of business-unitlevel relationships among employee satisfaction, engagement and business results, also found that employee engagement was linked directly to profitability, customer satisfaction/loyalty/sales, employee retention, productivity, and safety. The Corporate Leadership Council (CLC) ( 2004) completed a study of engagement levels of over 50,000 employees across the globe and found that those employees who are most committed: 1. Perform 20% better, which CLC ( 2004) claims infers that moving from low to high engagement levels will induce an increase in employee performance of 20 percentage points; and 2. Are 87% less likely to leave the organization, which CLC ( 2004) states indicate the significance of engagement to organizational performance. In the CLC's report on Engaging the Workforce ( 2004), the Council's research findings on the business impact of engagement are clearhigh levels of engagement can generate a performance improvement of up to 20 percentage points.The business impact of engagement creates a clear need for engagement strategies to focus on business outcomes.However, organizations' current use of engagement datapreferring it as a broad metric rather than a focused strategic toolmay not support this objective. 
On the other hand, in reporting on the costs of employee disengagement, Meere (2005) discusses a survey carried out by ISR on 360,000 employees from 41 companies in the world's 10 largest economies, which found that in companies with low engagement both operating margin and net profit margin had declined over a three-year period, whilst in companies with high levels of engagement both measures had increased over the same period. An organization can only realize large revenues through the optimal engagement of its employees' knowledge, skills, abilities and motivation.

The results of DDI's Leadership Forecast 2003 Study show the connection between leadership, productivity and engagement: employees with strong leaders are more engaged, satisfied and loyal than those with weak leaders. Another study on the service-profit chain by the Institute for Employment Studies, entitled From People to Profits, indicated that an increase in employee commitment has a significant influence on sales, both directly and, through increasing customers' satisfaction with service, indirectly. The January 2003 issue of the Harvard Business Review highlighted substantial research done on workforce motivation over the years and revealed consistent findings emphasizing the need to let employees become fully engaged in their work in order to gain employee commitment. Giving employees more responsibility for their work and the way they do it, along with clear measures of accountability, reinforces employee productivity and inspires employees to be more committed to their work (HBR, January 2003).

A recent SHRM Conference (2006) reported the results of a new global employee engagement study showing a dramatic difference in bottom-line results between organizations with highly engaged employees and organizations whose employees had low engagement scores. The study, based on surveys of over 664,000 employees from around the world, analyzed three traditional financial performance measures (operating income, net income and earnings per share (EPS)) over a 12-month period. Most dramatic among its findings was the almost 52% difference in one-year performance improvement in operating income between organizations with highly engaged employees and organizations whose employees have low engagement scores. Furthermore, when done well, practices that support talent management also support employee engagement (e.g. work-life balance programs such as flexi-time, telecommuting and compressed workweeks, reward programs, and performance management systems), according to the Corporate Leadership Council (2004) and Martel (2003). Employee engagement begins with an onboarding program and is essentially part of the human capital or talent pipeline, as some researchers have determined (e.g. Romans and Lardner, 2005).

Companies with highly engaged employees articulate their values and attributes through signature experiences: visible, distinctive elements of the work environment that send out powerful messages about the organization's aspirations, and about the skills, stamina and commitment employees will need in order to succeed in these organizations (Erickson and Gratton, 2007). Harter et al.
(2002), states clearly that, over the past two decades, a properly executed employee survey which can measure employee engagement, has emerged as a strategic tool for top management.Organizations need to implement the survey with care, developing a valid and reliable methodology tailored to meet the needs of the organization and its employees.One size does not fit all, and the time, effort and expense to implement a survey project properly, should not be underestimated.Data analysis, reporting, action planning and follow-up are where the real return on the investment will be realized. Measuring employee engagement The organizational leaders could benefit from an assessment providing them with a comprehensive overview of the attitudes and perceptions of staff in the midst of all of these changes.A further benefit would be to analyze how these events impacted on organizational outcomes such as turnover, absenteeism, stress and individual performance.The comprehensive nature of this information could also be used as a guide by organizational leaders for planning corrective action to address any weaknesses in the change implementation process.Thus, plans for future changes could be adjusted to reduce any undesirable impact these changes had had on employee attitudes, perceptions, and related organizational outcomes.In the event that organizational leaders implemented specific intervention(s) to improve employee attitudes or perceptions, the information obtained through these assessments could also serve as a useful benchmark to determine if the interventions achieved the desired effects. The measurable impact of employee engagement depends, in part, on how it is defined.For example, the Corporate Leadership Council ( 2004) reports outcomes ranging from shareholder return to absenteeism to sales.Other researchers describe engagement as "involvement and satisfaction, as well as enthusiasm for work (Harter, Schmidt and Hayes, 2002). Employee perceptions are difficult to track and respond to, so leading organizations throughout the world invest large amounts of time, energy, and financial resources in conducting employee surveys.According to Mercer Human Resource Consulting What's Working™ research (Mercer website), upwards of 50 percent of employers in Sweden, Japan, Singapore, the USA, Brazil, Australia, Canada, the UK and Ireland regularly conduct employee surveys.It is also becoming a more regular aspect of change interventions in South Africa (Verwey, 2007). Over the course of the past 30 years, researchers with The Gallup Organization have held thousands of qualitative focus groups across a wide variety of industries.During the mid '80's, The Gallup Organization decided to create a better feedback process for employers large and small: an opinion-based tool that would both release and direct the powers of feedback.The primary goal was to identify and measure the elements of worker engagement that are most powerfully linked to improved business outcomes -be they sales growth, productivity, customer loyalty, and so forth -and the generating of value (GWJ, 2006). 
Over a decade ago, the Gallup Organization reviewed its database of more than one million employee and manager interviews to 'identify the elements most important in sustaining workplace excellence'. Twelve key elements were identified in the Gallup q12 employee climate survey, which was first published in the Gallup Organization's book First, Break All the Rules in 1999. The Gallup q12 explores a number of questions about the quality of systems and leadership, especially team leadership, as experienced by team members at every level of the business.

The Gallup research revealed a link between teams in the top quartile of engagement scores on the Gallup q12 survey and better employee performance; this in turn resulted in significantly better business outcomes than for teams in the lowest quartile. The most recent 2006 meta-analysis by the Gallup Organization involved 681,000 employees of 23,910 business units in 125 organizations across 37 industries. The study identified that teams within a business unit with a high level of engagement performed better than those with a low level of engagement: 12 percent higher for customer satisfaction, 62 percent higher for safety and 12 percent higher for profitability (Verwey, 2007). The approach underlying this research is founded on what might be called "positive psychology" (Seligman and Csikszentmihalyi, 2000), specifically the study of the characteristics of successful employees and managers, as well as productive work groups. In developing measures of employee perceptions, Gallup researchers have focused on the enduringly important human resource issues for which managers can develop specific action plans.

Throughout the workplace research conducted by Gallup, both qualitative and quantitative data have indicated the importance of the supervisor or manager and his or her influence over the engagement level of employees and their satisfaction with their company. In Gallup's research, items that measure environmental aspects which can be directly influenced by supervisors explain most of the variance in lengthier job-satisfaction and employee-opinion surveys. This finding has been mirrored in individual-level meta-analyses (e.g. Judge et al., 2001), in which the specific facet of satisfaction most highly related to performance was satisfaction with the supervisor/manager/leader.

The instrument, the Gallup Workplace Audit, the Gallup q12 (GWA; The Gallup Organization, 1992 to 1999), is composed of an overall satisfaction item plus 12 items that measure employee perceptions of work characteristics. These 13 items were developed to measure employee perceptions of the quality of people-related management practices in business units. The criteria for selecting these questions came from focus groups, research, and management and scientific studies of the aspects of employee satisfaction and engagement that are important and can be influenced by managers at the business-unit or work-group level.
The Gallup q12 was designed to reflect two broad categories of employee survey items: those measuring attitudinal outcomes (satisfaction, loyalty, pride, customer service intent, and intent to stay with the company) and those measuring or identifying issues within a manager's control that are antecedents to attitudinal outcomes. The Gallup q12 includes one outcome item referring to overall satisfaction with one's company, which can be seen as a generalised summary of specific affect-based reactions to work (GWJ, 1998). The questions were derived from thousands of focus groups in over 2,500 business, healthcare and education units, and were factor-analyzed and subjected to confirmatory factor analyses.

Gallup has accumulated overwhelming empirical evidence over the years of the relationship between its measure of employee engagement and desirable organizational outcomes (e.g. profit, productivity, safety, retention and customer satisfaction) (Buckingham and Coffman, 1999). However, linking the engagement construct to an established theory in the management literature also seems desirable, for two reasons: firstly, such a theoretical framework can aid further validation, understanding and testing of Gallup's conceptualization of engagement; secondly, there may be other, perhaps overlooked, theoretically based mechanisms or mediators which could help explain and add value to the relationship between employee engagement and the effectiveness of managers in today's organizations.

To identify the elements of worker engagement, Gallup conducted hundreds of focus groups and many thousands of worker interviews in all kinds of organizations, at all levels, in most industries and in many countries. From these inquiries researchers pinpointed, out of hundreds of variables, 12 key employee expectations which, when satisfied, form the foundation of strong feelings of engagement. The result was a 12-question survey asking employees to rate their response to each question on a scale of one to five. One of the best ways to measure employee perceptions of such drivers remains a time-tested, solid employee-opinion survey which takes employee engagement into account. Analysis of employee-attitude responses across companies and cultures demonstrates that these 12 key areas consistently relate to employee retention, business-unit productivity, profitability and customer loyalty, and in effect boil down to employee engagement. The 12 areas have been distilled into statements through which employees can understand their existence within their own company; the Gallup q12 Workplace Audit (GWA) statements are reproduced at the end of this article.

Engaged employees are better equipped to handle workplace relationships, stress and change, according to the latest national Gallup Management Journal (2006) survey. Companies that understand this, and assist employees to improve their well-being, can boost their productivity. The Gallup Organization found that employee responses to these crucial 12 items tend to fall into three distinct categories. As the Gallup Organization puts it: "The success of your organization doesn't depend on your understanding of economics, or organizational development, or marketing. It depends, quite simply, on your understanding of psychology: how each individual employee connects with your company; how each individual employee connects with your customers" (Gallup website, 2008).
Possible reasons for survey failure

Despite good intentions, employee surveys often fail in their strategic aims. Through its work on more than 1,000 survey projects, Mercer (Mercer website) identified ten key areas within the survey process that consistently stand out as potential stumbling blocks to survey success. By being aware of these potential blocks and adopting best practices to avoid them, organizations can significantly improve the odds of conducting a survey that produces meaningful, actionable results, builds employee engagement and enhances organizational performance. The ten key areas are: (i) proper project planning; (ii) engaging senior management; (iii) communication; (iv) data delivery; (v) questionnaire design; (vi) follow-up support; (vii) timing; (viii) monitoring and accountability; (ix) prioritising issues; and (x) linking survey results to business outcomes. According to Sanchez (1993), survey projects can be complex, with any number of risks which are difficult to anticipate. In the absence of proper planning, problems can arise in all aspects of the process, including the survey field work, survey return rates and the timely delivery of results. When things go wrong early in the process, the survey loses credibility in the eyes of management and employees, and the follow-up process fails to secure the time and resources required for success.

METHODOLOGY

A quantitative approach was followed in this exploratory study. The primary data are based on the pre- and post-intervention application of the Gallup q12. Descriptive statistics of the sample group are provided, and thereafter the differences between the pre- and post-intervention scores are examined.

Participants and sampling strategy

This study formed part of a larger project. The CEO of the IT company gave his approval to use his organization as a participant in the research study. The data used for the study are based on a sample from an information technology organization in South Africa. The company used the Gallup q12 during a two-year process of post-merger business and culture transformation. The data are based on the pre- and post-intervention application of the Gallup q12, and the respondents represent all levels of the organization, ranging from ground-level employees to top management.

Data collection and recording

All the staff members completed and submitted the questionnaires electronically to the researcher. Communication regarding the purpose of the study and any problems experienced was handled in the same manner. For the purpose of this study one survey form was used, the Gallup q12, which measures employee engagement. This instrument is based on 25 years of management research through thousands of focus groups (Buckingham and Coffman, 1999). It consists of an overall organizational satisfaction item and 12 employee engagement items that measure the respondent's perceptions of his or her workplace characteristics. In an unpublished study by Brand (2008), the scale reliability of the Gallup q12 for the sample was calculated using only the 12 items measuring employee engagement. In the present study the reliability based on Cronbach's alpha is highly satisfactory at 0.922. This result is very similar to those reported in other studies: a meta-analysis of 4,172 business units (Harter, 1999) obtained a Cronbach's alpha of 0.91, and an unpublished study by Janse van Rensburg (2004) on the relationship between leadership styles and work-related attitudes reported a Cronbach's alpha of 0.90 (based on an aggregate data set of N = 36).
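To make the reliability figures above concrete, the following sketch shows how a Cronbach's alpha of this kind can be computed from raw item responses. It is only an illustration, not the authors' actual analysis (which may well have been run in a statistical package such as SPSS); the simulated 1-5 responses and the q1-q12 column names are hypothetical.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of survey items (rows = respondents, columns = items)."""
    k = items.shape[1]                          # number of items (12 engagement items here)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item across respondents
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents rating 12 items on a 1-5 scale
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(200, 12)),
                         columns=[f"q{i}" for i in range(1, 13)])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")

With real item data the same function would reproduce the kind of values cited above (0.90 to 0.92); the uncorrelated random data used here only serve to make the snippet runnable and will give an alpha close to zero.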
Data description

As can be seen from Table 1, the pre-intervention sample consisted of all employees, while the post-intervention sample consisted of 253 individuals. For the purposes of this research the sample size was reduced, since only questionnaires completed by individuals who (1) had completed the pre-intervention survey and (2) had participated in one or more of the change interventions were used. Table 2 shows that the sample composition over the two measurement periods is very similar, and that the sample in both cases can be described as predominantly technical and customer-site staff who have been in their current positions with the organization for 2 to 10 years. No further analyses were done on the variables of job type, job tenure and company tenure, because there were no statistically significant differences in the results of these three groups.

Sample consistency

To assess the degree to which the pre- and post-intervention samples are statistically similar, chi-squared analyses were performed for each of the three variables in Table 3.

RESULTS

The research results supported the hypothesis, which stated that there is no statistically significant difference between mean employee engagement before and after the business and culture transformation interventions. Given that the pre- and post-intervention samples were shown to be statistically similar, the next step was to test for differences between the means for each item and dimension of the Gallup q12 for the pre- and post-intervention sample groups.

As can be seen from Figure 1, the pre and post measures show a very similar pattern. As can be seen from Table 4 (Levene's test for equality of variances), the pre- and post-intervention groups showed a statistically significant difference on only one of the twelve Gallup q12 items, this being "Over the past six months I have made progress at work". A number of items (I know what is expected of me at work; At work, I have the opportunity to do what I do best every day; I share a sense of commitment to the work we do with my colleagues; At work, my opinions seem to count; and I have opportunities to learn and grow at work) also showed similarity of variance. As can be seen from Table 5 (ANOVA), the pre- and post-intervention groups likewise showed a statistically significant difference on only one of the twelve Gallup q12 items, again "Over the past six months I have made progress at work".

As can be seen from Figure 2, the pre and post measures show a very similar pattern. As can be seen from Table 6 (Levene's test), the pre- and post-intervention groups showed a statistically significant difference on only one of the Gallup q12 dimensions, this being "How we Grow", whilst the dimension "What I Give" also showed similarity of variance. As can be seen from Table 7, the pre- and post-intervention groups showed a statistically significant difference on only one of the Gallup q12 dimensions, this being "How we Grow". The effect size was, however, very small, meaning that in practice no real difference occurred.

DISCUSSION

As stated earlier, the research objective of this study was to determine the difference in employee engagement before and after a business and culture transformation intervention in the workplace. The research was conducted in one organization in the information technology industry in South Africa, with the significant advantage that variance due to differences in organizational culture, leadership style and other factors that may impact on employee engagement was eliminated.
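The item-level comparisons reported above rest on standard tests: a chi-squared test for sample consistency, Levene's test for equality of variances, and a one-way ANOVA for differences in means between the pre- and post-intervention groups. The sketch below illustrates, with invented data, how such tests can be run; it is not the authors' own analysis, and the group sizes, scores and contingency counts are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical 1-5 scores on one Gallup q12 item for the two groups
pre  = rng.integers(1, 6, size=250).astype(float)   # pre-intervention respondents
post = rng.integers(1, 6, size=253).astype(float)   # post-intervention respondents

# Levene's test: do the two groups have similar variances? (cf. Tables 4 and 6)
w_stat, levene_p = stats.levene(pre, post)

# One-way ANOVA on the means; with two groups this is equivalent to an
# independent-samples t-test (cf. Table 5)
f_stat, anova_p = stats.f_oneway(pre, post)

print(f"Levene: W = {w_stat:.3f}, p = {levene_p:.3f}")
print(f"ANOVA:  F = {f_stat:.3f}, p = {anova_p:.3f}")

# Chi-squared test of sample consistency on a categorical variable such as job type,
# using an invented 2 x 3 contingency table of counts per group
contingency = np.array([[120, 80, 50],    # pre-intervention counts per job type
                        [118, 85, 50]])   # post-intervention counts per job type
chi2, chi_p, dof, _ = stats.chi2_contingency(contingency)
print(f"Chi-squared: chi2 = {chi2:.3f}, df = {dof}, p = {chi_p:.3f}")

A non-significant ANOVA p-value for an item corresponds to the 'no real difference' pattern reported for eleven of the twelve items above.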
The research results supported the hypothesis, which stated that there is no statistically significant difference between mean employee engagement before and after business and culture transformation interventions. The pre- and post-intervention results show that the interventions described had a limited impact on the Gallup q12 items and dimensions. This has a number of possible explanations:

1. The interventions did not succeed in impacting significantly on employee engagement as measured through the research instrument; or
2. The Gallup q12 is not a sufficiently sensitive instrument to assess the impact of the interventions used; or
3. The time period between the pre and post measures was not sufficient to allow the impact of the interventions to be assessed adequately through the research instrument; or
4. A combination of the above.

However, there are sufficient indications in the literature to draw some broad conclusions, even if these are not necessarily strongly underpinned by objective evidence. The key conclusions drawn from the literature are as follows:

1. Employee engagement does matter, but the extent to which it can lead to a step-change in organizational performance is not conclusive. Even where there is a clear vision and understanding of what needs to be done, there can be significant barriers to effecting 'change on the ground', for example if staff are generally opposed to change or if the capacity to implement change is limited by resource constraints. Harter et al. (2002) state that the factors that contribute to an employee's level of engagement are specific to, or variable for, each individual. It then becomes imperative for leaders to determine which organizational factors contribute to employee engagement, and to be able to enhance and maintain these factors at both the individual and the group level;
2. Some of the approaches aimed at improving employee engagement can significantly increase engagement (as measured by staff surveys) and, in turn, this can have a measurable impact on human resource variables such as retention and staff illness. Wider impacts in areas such as client service, satisfaction levels and, for private sector businesses, turnover and profitability tend to be more insubstantial. Pech and Slade (2006) state that no organization can afford to underutilize its employee energy, and that employee engagement is a critical element of this underlying energy. In support of this, Buckingham and Coffman (1999) are of the opinion that the payoff for an energised work environment is enormous: improved retention, productivity and employee engagement, and therefore reduced turnover;
3. Increasing employee engagement is highly dependent on leadership and on establishing two-way communication in which people's work and views are valued and respected. There are many ways in which an organization can work towards better employee engagement without incurring high costs, as long as there is the organizational determination to focus on this issue. According to the Gallup Organization (Gallup Journal, 2006), belief in the organization's direction and leadership (awareness and understanding of the strategic direction of the organization) is a critical driver of employee engagement. Even in the absence of robust impact data, the principle of employee engagement is to be endorsed in terms of good practice regarding people management, and the softer benefits this confers on organizations; and
4. As stated by Harter et al.
(2002), a properly executed employee survey which can measure employee engagement has emerged as a strategic tool for top management, and organizations need to implement the survey with care, developing a valid and reliable methodology tailored to meet the needs of the organization and its employees. Data analysis, reporting, action planning and follow-up are where the real return on the investment will be realized.

CONCLUSION AND RECOMMENDATIONS

This study provides insights into employee engagement, as well as into the possibility of employee engagement improving through business and culture transformation interventions. In this study we have attempted to make a contribution by providing a diagnostic framework for enhancing the quality of work-life of organizational members, which can also be used in other organizations, and by demonstrating, through application, the usefulness of survey-based feedback as a tool for improvement. For organizations wishing to embark on a process of using the Gallup q12 as a means of measuring the impact of business and culture transformation interventions, it is important to ensure, right from the design phase of the entire process, that the Gallup q12 and the interventions are in fact aligned in terms of their theoretical foundation. The expectation that significant change should normally be experienced within one year is perhaps not a realistic one.

The research reported on in this study shows that the transformation interventions described had limited impact on employee engagement as measured through the Gallup q12. This finding raises a number of related questions to be researched. The fact that the results do not show a significant impact of the transformation initiatives creates further research opportunities, specifically in terms of the construct validity of the Gallup q12. From a practitioner perspective, it also implies that use of the Gallup q12 requires careful consideration to ensure that the definition and measurement are applicable to the specific organization and the interventions launched.

It should be clear from this study that organizational surveys are a highly effective tool and/or catalyst for organizational change efforts. As a tool, surveys provide the means for assessing the current state of an organization as regards employees' understanding of areas such as mission and strategy, the degree of change being achieved, leadership and management practices, organizational culture, reward systems, communication flow, motivation, and individual needs and values. Surveys can also serve as catalysts for change by communicating desired messages (what the core values of the company should be; how managers are expected to act) and by involving people in the development, interpretation and action-planning stages of the effort. Finally, surveys are a powerful means for identifying, through statistical methods, specific levers for changing the conditions of people's lives in organizations. Surveys should always be seen as the starting point of a process, however, not the end result.

In conclusion, it can be stated that relatively few studies have been done to evaluate the difference in employee engagement before and after business and culture transformation interventions. While some research remains inconclusive, there is a growing body of research suggesting that a link between employee engagement and organizational performance does exist.
Marcus Buckingham and Curt Coffman (1999) found that employees who responded favorably to survey questions on engagement, also worked in business units with higher levels of productivity, profit, retention, and customer satisfaction and that the leader or manager, and not the pay, benefits, perks, etc., was the key in building and sustaining a strong workplace and an engaged workforce. It is believed that this study, although exploratory in nature, indicates the value of measuring employee engagement.In terms of future research, it would be worthwhile to extend this study over a longer period so that the difference in employee engagement before and after specific culture transformation interventions can be explored more fully. Limitations From the discussion of the limitations directions are also given with regard to future research projects.Although this particular study is not robust as far as sample size and statistical analyses are concerned, it can be used as comparison with other engagement surveys.This could have assisted in determining whether the lack of demonstrated impact is due to the Gallup q 12 or some of the other potential reasons listed.Moreover, repeated measures in future years will also be useful in evaluating the rate at which transformation initiatives have a significant impact on organizations.Employee engagement has become a key corporate priority and employeeopinion surveys are the accepted means of measuring, monitoring and managing it.Seven of the pitfalls that could impair the effectiveness of employee surveys, are: 1. Bad timing; 2. Surveying the entire organization, 3. Surveying too frequently, 4. Over-simplifying surveys, 5. Too small sample, 6. Tying results to performance bonuses, and 7. Setting arbitrary survey goals. In view of the comments made in the introduction to this chapter, there are a number of shortcomings in the literature, as well as gaps not currently covered.These are indicated below: 1.There is an inherent positive bias in the literature -as noted above; 2. The literature tends to emphasise that improvements to employee engagement are always positive.There is no consideration that a certain level of employee engagement might already be optimal and may also vary in different organizations; 3.In this respect, further study is required to determine where the focus of the intervention should be.Current literature seems to steer us towards addressing the disenfranchised majority, but says little about the seriously 'disaffected' minority.If, for example, significant parts of the workforce were disengaged, it would impact negatively on the organization as a whole.Employers would then need to think carefully about how to identify this portion of the workforce and how to address the problem (e.g. 
through further engagement measures or letting this section of the workforce go);
4. There is also the related issue of how organizations go about recruiting staff who are likely to have a higher engagement propensity. Although several articles were identified in which this issue was discussed, this area would undoubtedly benefit from more specific research on employee engagement;
5. The importance of the different factors underpinning employee engagement has not really been tested. For example, even though pay and working conditions are not emphasised, a number of empirical studies in this field show that pay and conditions are critical to job satisfaction for some individuals and organizational types. More detailed disaggregation of employee surveys by organizational and employee type, as drivers of engagement, would be very useful in assessing whether employee engagement is dependent on the factors stipulated in the literature;
6. The degree to which effective implementation of any new initiative depends on the readiness of staff to engage with the change is unclear. This is especially critical within the public sector, as surveys show more resistance to change there;
7. There is no real consideration of the cost of achieving higher levels of employee engagement;
8. The small number of studies attempting to quantify impact relies on identifying relationships between factors (e.g. current employee engagement and future profitability). Such correlational data cannot determine cause and effect (e.g. the extent to which employee engagement can directly influence future profitability); and
9. There is no evidence that the models for employee engagement are equally applicable to all types of work across the board. In jobs which are quite unpleasant or very money-focused (such as stock market dealing), monetary rewards are more successfully used as incentives. Besides, it is likely that individuals will be motivated differently by different factors, a fact that is not reflected in the current models of employee engagement.

Future research could aim to explore the managerial findings and lessons on business and culture transformation interventions, as well as to provide recommendations on suitable measures for the problem at hand.

Figure 2. Mean per group per dimension.

The Gallup q12 Workplace Audit (GWA) statements: I. I know what is expected of me at work. II. I have the materials and equipment I need to do my work correctly. III. At work, I have the opportunity to do what I do best every day. IV. In the last seven days, I have received recognition or praise for doing good work. V. My supervisor or someone at work seems to care about me as a person. VI. There is someone at work who encourages my development. VII. At work, my opinions seem to count. VIII. The mission/purpose of my company makes me feel my job is important. IX. My associates (fellow employees) are committed to delivering quality work. X. I have a best friend at work. XI. In the last six months, someone at work has talked to me about my progress. XII. This last year, I have had opportunities at work to learn and grow. (All statements ©1997-1999 The Gallup Organization.)

Table 4. Levene's test for equality of variances on Gallup q12 items.
Table 5. ANOVA for Gallup q12 items.
Table 6. Levene's test for equality of variances on Gallup q12 dimensions.
2018-11-30T12:36:10.376Z
2011-09-30T00:00:00.000
{ "year": 2011, "sha1": "ac36bdbd8ca2bf04e10ab9dd4cc00a16e6a8049c", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJBM/article-full-text-pdf/59D2B7219667.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "6b93cf42e0f4da94c923ca868eb728211fb3d3f7", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Psychology" ] }
83777488
pes2o/s2orc
v3-fos-license
Note on breeding and parental care behaviours of albino Hoary-bellied Squirrel Callosciurus pygerythrus (Rodentia: Sciuridae) in Sibsagar District of Assam, India

Acknowledgements: The author is thankful to the Oil and Natural Gas Corporation Ltd. India, the Assam State Department of Forest and Social Forestry Division, Sibsagar district of Assam, the Society for Zoology and Nature, and Guwahati College and its Principal for their help in various ways and their cooperation during the study. The author is also thankful to Dr. M.M. Goswami, Professor, Department of Zoology, Gauhati University, for his encouragement in revising the scientific paper.

During a field survey conducted by the team of the Society for Zoology and Nature, Guwahati, between October 1997 and June 1998, a few individuals of albino squirrel (Image 1) belonging to the species Callosciurus pygerythrus were recorded from Sibsagar district of Assam, India (Kalita 1998). Albinism in this species is rare and has not been widely reported in India (Bhattacharyya & Murmu 2004; Sharma 2004; Mahabal et al. 2005). The present paper deals with some of the habitats, ecology and feeding habits of the albino Callosciurus pygerythrus. One female albino individual of Callosciurus pygerythrus collected from its natural habitat, together with a normal male of the same species, was observed in captive conditions for five years (from 2000 to 2005) and their breeding biology and subsequent parental care behaviour recorded.

Methods

The approach to locating the albino squirrels of the species Callosciurus pygerythrus was through questionnaire surveys to ascertain the presence of albino squirrels, and thorough surveys of the entire Sibsagar District. Communities in direct contact with the forest, such as tribal people and the Muga keepers, were interviewed for vital information on sightings of this species.
One individual albino squirrel out of two encountered was captured with the help of the villagers of that area for observation on its breeding and parental care behaviour.The squirrel was housed in a 6x4.5x3.6 m cage of iron net constructed in a private place, in a well ventilated condition, and a normal male individual of the same species was introduced into the cage.The roof of the cage was covered with thatch.Shade was also arranged to protect the cage from high temperatures.The inside of the cage was decorated with potted shrubs and dry pieces of bamboo with holes at internodes.The cage was also provided with a small tray for drinking water.The water was renewed and the cage was cleaned every day.Fresh fruits and nuts like bananas, oranges, pineapples, coconuts, betel nut and seeds or fruits of Ficus spp., Azadirachta indica, Nerium indicum (ripe fruit) and Bombax ceiba, were provided as normal diet.Fruits, nuts and seeds were selected from observations in its natural habitat.Some times orthopteran insects like grasshoppers and crickets were also included in the diet.Curd was the favoured food of the albino variety and was a convenient medium for administering oral drugs and vitamins.No special treatment was provided for their breeding in captivity, except giving a drop of vitamin E from a freshly punctured IP 200mg liquid vitamin E capsule along with 20ml curd every morning during September 2002 to October 2004. Observations Morphologically, the albino squirrel is completely white, the tail faded white, eyes red and ears untufted; fore limbs with 4 toes and hind limbs with 5 toes.Total length is 30cm, body 12cm and tail 18cm. Information regarding the existence of albino squirrels in the study areas is quite localized.The squirrels were observed to prefer a rich and varied habitat consisting predominantly of Bamboo, Ficus spp., Azadirachta indica, Bombax ceiba, and various other fruit plants and wild shrubs and trees.This is owing to the proximity to the two major rivers, the Disang and the Dikho cutting through the Sibsagar District.Observed habitat of albino squirrel is plain villages near two main rivers of Sibsagar district namely, Disang and Dikho.The population of this albino variety in its habitat is observed to be very low.Maximum 3 individuals were recorded in 1995 in a village of that area.A detailed list of its occurrence in the studied areas has been incorporated in Table 1. Albino squirrels in their natural habitat are quite lethargic.They never go up to the tip of their supporting trees.In most cases they are observed up to a maximum height of 20feet.During the day they mostly remain sitting on the low branches of bushy trees.However, they become active at dawn and at dusk. In their natural habitat they were observed consuming fruits, nuts, bark born fungus and insects infesting Ficus spp., Olea europaea, Bombax ceiba, Bambusa spp.and Dendrocalamus spp. 
Breeding and parental care habit of albino squirrel in captivity The male Callosciurus pygerythrus in captivity exhibited hostile behaviour towards the albino female.In the first two years of captivity (2000 & 2001) breeding was not observed.However, after adding vitamin E to the diet in the third year (2002) the pair exhibited courtship behaviour from the last week of September (2002) and were observed occupying the same hole during that period.From first fortnight of October, the female was observed collecting dry leaves and thatch as nest building material.In the last week of April 2003, the female gave birth to a single male pup.The new born baby was black in colour and about 13cm in length (Image 2).Its eyes remained closed for 22 days (Image 5).The individual was born in a hairless condition.However, normal hairs developed and grew thicker and longer to resemble the normal male individual except for a morphological dissimilarity of a slightly tufted ear (Image 6).During that period the mother squirrel was observed to be aggressive towards the male driving him away from the hole/drey.She suckled the pup at intervals of 40 to 70 minutes.She exhibited parental care behaviour by transferring the pup as and when there was a disturbance by carrying it in her mouth to a safer place in the cage.Even after every handling of the pup for photography she transferred it to a new drey (Images 3 & 4).The first baby survived in captivity till August, 2003. Breeding was observed in the second consecutive year too, and a male offspring was born on the 4th of May 2004 and attained its full grown stage.During the study, the albino squirrel did not exhibit pairing tendency with her male baby.The adult male was not aggressive towards the male baby and play between them was often observed.However, he was aggressive towards the baby during the time of feeding. The mother albino squirrel survived till 18th February, 2005.During her survival no courtship behaviour with her baby was observed. Discussion Albinism in wild animals is not very common.However, there are previous reports on albinism in some species of mammals including squirrel in India and abroad (Gee 1959;Walker 1968;Tehsin & Chawra 1994;Kalita 1998;Bhattacharyya & Murmu 2004;Sharma 2004;Mahabal et al. 2005).The occurrence of the present albino variety of Callosciurus pygerythrus in the studied areas in Sibsagar district of Assam was discovered as early as 1995.The emergence of this albino species to such a new territory might be due to its migration from other territories of forest cover, which is yet to be ascertained.It is known that variations in coat colour may develop among squirrels living in the same place (Prater 1980).However, it is very difficult to ascertain the species status of albinos unless one goes through a genetic study.The present albino variety is found together with the colony of gray C. pygerythrus.Individuals of albinos differ from the gray C. pygerythrus in only the following points. -The coat colour of the albino variety is snow white; but, the gray squirrel bears gray hair on its dorsal surface and smoke white to the ventral. -Eyes of the albino variety are red, but it is black in gray species. -Tail fur in albino variety is thicker and longer than the gray species.-Gray squirrels are not friendly to the albino variety. -Albinos are lethargic in comparison to the gray species. 
All these differences may be due to their body physiology, and there is scope for further study. Although there appears to be no significant difference between the albino and the gray variety of the studied species apart from the above, the breeding performance in captivity supports the idea of considering it under the same species, C. pygerythrus.

Accurate observation of the life history of most rodents in the wild is very difficult, as they are evasive and fleet in nature (Lang 1925). Vitamin E is a long-investigated substance in the reproductive physiology of different animals, particularly rodents (Hafez 1970), and it was tried here. It was earlier investigated in the laboratory reproduction of squirrels such as Citellus tridecemlineatus pallidus Allen by George & Wade (1931). However, in rodents the requirement of vitamin E for reproduction is species related (Hafez 1970). Although administration of vitamin E as an oral dose in ground squirrels such as C. tridecemlineatus pallidus Allen does not affect reproduction (George & Wade 1931), the present observation suggests its value as an oral dose in the reproduction of Callosciurus pygerythrus.

The parental care behaviour of carrying the offspring, shown by this squirrel and other rodents, is unlike that of cat species (Lang 1925); however, they too use the mouth to carry their young during danger (Lang 1925). Lang (1925) confirmed the baby's cooperation in maintaining balance while being shifted from one place to another by the mother, as studied in a Central American squirrel, Sciurus hoffmanni. The present study contradicts this, in that in the early stages the babies are too weak to support themselves by holding onto their mother with the tail or legs (Images 3 & 4); the baby supports its mother only by bending its head and tail parts in an inward direction (Images 3 & 4).

It is known that the litter size in most of these species is normally 2-4 pups and that they breed 2-3 times in a year (Walker 1968; Prater 1980). However, the development of a single fetus during April-May in the present observations is notable and may be a reason for the lower population in the study area. The lethargic nature of the albino variety in the daytime may be related to a biological inability to cope with strong daylight.

The present study may be regarded as baseline information for breeding experiments on the albino squirrel in the region. Further study is required for more information regarding the breeding and parental care behaviour of the animal. However, albinos are at a distinct disadvantage in nature, as they are easy prey and subject to killing out of curiosity. The urgent need for conservation measures for this rare animal is emphasized.

Table 1. Occurrence of albino Hoary-bellied Squirrel in villages of Sibsagar District surveyed during 1997-98 (data based on local villagers' information). (From Table 1: at Kujibali and Barpatragohaingaon villages, one individual was killed by some villagers and the remaining one was captured by a villager.)

Image 1. An albino Hoary-bellied Squirrel in the wild. Image 2. A new-born baby of the albino squirrel. Images 3 & 4. Albino squirrel showing parental care by carrying her baby in her mouth. Image 5. Close-up view of a 15-day-old male baby born to the albino squirrel. Image 6. Mother albino squirrel with her full-grown male baby.
2018-12-15T01:15:33.029Z
2009-06-26T00:00:00.000
{ "year": 2009, "sha1": "dae1be7ac32db24fcfb1a18886bb1cbafb490870", "oa_license": "CCBY", "oa_url": "https://threatenedtaxa.org/index.php/JoTT/article/download/397/645", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dae1be7ac32db24fcfb1a18886bb1cbafb490870", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
55066620
pes2o/s2orc
v3-fos-license
TEACHERS' COMPREHENSIONS, PERCEPTIONS, AND ATTITUDES TOWARDS INCLUSIVE EDUCATION

This study aims to find out 1) the comprehension of Islamic Elementary School (MI) teachers in Salatiga of inclusive education; 2) their perceptions of it; and 3) their attitudes towards it. This is survey research. The primary data collection method used in this research was questionnaires, alongside structured interviews. Based upon the calculation of the questionnaire results, the writer found that 1) the teachers' comprehension of inclusive education is still low (88.1%), with only 4.7% feeling that they have a little knowledge about it; 2) 50.02% have negative perceptions of it, and only 21.46% have positive perceptions; and 3) nevertheless, 43.89% have positive attitudes and 37.81% have negative attitudes towards inclusive education.

Introduction

Inclusion is a worldwide agenda, and so is inclusive education. The issue of inclusive education has become an important part of discussions on the progress of education systems at the worldwide level, and inclusive education practices have been implemented in several schools in diversified education systems (Ballard, 2005:1). Worldwide, inclusion has become a major focus of the policies of many governments (Armstrong et al., 2010:3). Indonesia, as one of these countries, has conceptualized the ideas of inclusive education. The ideas have been formulated in Indonesian Republic Regulation Number 20 Year 2003 on the National Education System, in Chapter IV, article 5, verses 1, 2, 3, 4, and 5. Based upon these regulations, the concept of inclusive education goes beyond special education because it accommodates differences possessed by students due not only to mental but also to non-mental retardation. For these reasons, inclusive education is a very big undertaking which needs to be conceptually and operationally understood.

Since the ideas of inclusive education are still universal, academic experts and practitioners have not yet agreed on its conceptualization, and interpretations of inclusive education also vary. Furthermore, if one refers to the ideas proposed in the regulations, there is no clear-cut definition of inclusive education: who should be included and who should be excluded from inclusion when there is no separation between students with special needs and regular students, such as in the United States? When inclusive students are separated from regular students, as in the Netherlands, what does special education look like? How many special schools should be built around the country, in cities, towns, districts, and villages? To what extent are parents able to send their children when there is only one special school in a town or a city? And to what extent is the government able to provide well-educated and skilled inclusion teachers, inclusive curricula, approaches, methods, teaching aids, school buildings, infrastructure, and the like?

Inclusive education, as framed in the regulations, is a big umbrella: within inclusive education there can be many forms of special education, but the problems of these special educations themselves vary from one to another. Some students cannot be integrated into regular schools because their difficulties cannot be addressed in those schools, for example those who have been permanently deaf and blind since they were children. Therefore, they might be separated from regular students.
On the other hand, there are many students who are able to be integrated in regular schools because their difficulties could be coped with by the schools; thus there is no reason for segregation, as students with special intelligences who are able to join in accelerated classes in regular schools. The case in Yogyakarta City is one of several examples (Sindo, April 7 th , 2012). In the city, inclusive classroom for students with special intelligences was developed to follow up some establishments of some inclusive school practices such as Muhammadiyah Elementary School in Sapen and State Elementary School in Giwangan; State Junior High School 2 and 5 and Muhammadiyah 2 Junior High School; and State Senior High School 1 and 3, and Muhammadiyah 1 Senior High School. How are about students with high-special needs as categorized by Jere Brophy (1996) andby Brenda Freeman (1994) who are characterized as (1) passive, (2) aggressive, (3) attention problems, (4) perfectionist, and (5) socially inept (Marzano, 2003: 55)? Should they be left behind even though they have the same rights as the students with special intelligences? Since there are still multi interpretations and practices concerning the conceptualizations and practices of inclusive education, the relating educational agents should take a seat together to conceptualize operationally what inclusive education means. In addition, this must be followed by socialization and preparation of the infrastructures needed in the inclusion. In the writer's points of view, one of the fundamental steps for inclusion practice namely socialization should be taken into account. The fact, however, it has not yet been conducted comprehensively. The socialization does not only discuss the concep-tualization, but also the specification and the implementation ways of inclusive education. In the processes all the participants should also talk about all related sources needed for the conduct of the inclusive education. Among others are teachers' knowledge and skills for inclusion, curricula, teaching media, teaching approaches, books, and the like. In the pre-research interviews conducted in October 2012, the writer found out polarizing facts particularly concerning teachers' comprehension towards inclusive education. Most of the interviewees in State Islamic Elementary Schools (MI) Gamol Salatiga argued that they were still confused about the terms and of course the implementation of inclusive education. They were confused because whether or not inclusive education includes those who are disabling mentally and physically. Were they any limitations? Mentally and physically, are all disable students should be fused in regular classes. When there are limitations, what are the limitations and how to implement such a concept? What should be done by teachers when there are passive students, extra-naughty students, aggressive students, and etcetera? Besides confusion in the conceptualization, most of teachers also could not access information about inclusive education from the responsible education agents. So far there is no socialization, seminar or workshop which is conducted by the agents such as the Ministry of Religious Affairs or for the extent the Ministry of Education Affairs as well as University Institutions. The comprehensions of inclusive education are much needed because in their comprehensions they always face to face with inclusion. By comprehending the concept of inclusive education, they expect that they are able to cope with problems of inclusion. 
Borrowing Rose's ideas (2010: 1-2), it is understandable that understanding demands efforts. Even at the age of increased mass communication technology when it is relatively simple to share ideas and perspectives but it is not always easy to interpret the meaning of the messages we receive. Accordingly, direct access to the inclusion through socialization, seminar, and workshop is very important. Teachers have insufficient knowledge towards inclusive education because of several reasons: 1. Inclusive education is considered as a new education phenomenon so most of the teachers do not take into account. 2. Inclusive education is necessary to understand but they do not have sufficient access to know more about it. 3. Inclusive education is necessary to understand but they may be reluctant to know more about it. 4. Teachers are not aware they will always encounter inclusion in their day-to-day teaching-learning processes. 5. Teachers may be waiting for knowledge sharing from the relating education agents. Based upon the explanations and reviews above, the writer formulates the problems of the study as the following: Teachers towards inclusive education? The Terms of Inclusion The definition of inclusion goes more beyond students with disabilities and views the innumerable ways that students differ from one another as the differences in race, class, gender, ethnicity, family background, sexual orientation, language, ability, size, religion, and the like (Mara Sapon-Shevin, 2007: 10). James McLeskey and Nancy L. L. Waldron (2000:50) compare between inclusion and non-inclusion as follows: Table 1 Compare Between Inclusion And Non-Inclusion The Terms of Inclusive Education Armstrong at.al. (2010) note that: 1. The idea of 'inclusive education', actually goes well beyond special education in terms of its approach to social integration although historically it is closely related to debates and reforms in the field of special education; 2. This should be comprehended in terms of an approach to the 'problems' of social diversity as the result of social changes after the Second World War including the end of colonialism, the increase of labor-force mobility, and the tension between global and local cultures; 3. There are consequential contradictions between conceptions and practices because education systems attempt to manage the social and economic complexities of national and cultural identity in societies that are highly diversified internally but globally interconnected; 4. The increase of 'inclusive education', particularly in the developing countries indicates the efforts of the countries to promote access of social and educational advantages to schooling and educational resources, as well as reflecting the borrowing of the first-world thoughts to countries which reinforces dependency and what Paulo Friere calls 'the culture of silence'. One of the debates, for instance, focuses on whether students with special needs should be integrated or separated with the common students. Some disagree that students with disability should be separated because they have the same rights with the regular students. But some support the separation because the former needs special attention and treatments. 
In addition, the terms of inclusion are also polarized because this is not only limited to the disabled students but broader as those who are disadvantageous as a result of poverty, sexuality, minority ethnic status, or other characteristics assigned significance by the dominant culture in their society (Ballard, 2005:2). Villa (2005: 5) says that inclusive education means embracing everyone and making a commitment to provide every student with a community, every citizen with a democracy and the undeniable right to belong. Inclusion supports that living and learning together are advantageous everyone, not just children who are labeled as having a difference (e.g., those who are gifted, are non-English proficient, or have a disability). She concludes that inclusion is a belief system, not just a set of strategies. Separated education creates a permanent underclass of students and conveys a strong message to those students that they do not measure up, fit in, or belongs. The inclusive education should not be segregated in school institutions since it is obvious that we can understand and appreciate differences only if we are surrounded by different people. Otherwise, our understanding, our acceptance, and even our tolerance are only academic paradigms. Within inclusive environments, students are not only showed by great varieties of people and their differences, but also learn how to talk about their differences, ask critical questions, and get along with the others (Mara Sapon-Shevin, 2007: 18). On the contrary, Cor J.W.Meijer, Sip Jan Pijl and Seamus Hegarty, 2002:1) suggest that the separate system used to be seen as an expression of the care for pupils with special needs. They learn from the educational system in the Netherlands. In this country, the educational system is divided into two namely regular schools and special schools. Compared with many other European countries, the Dutch special education system is extensive, differentiated, and segregated. In New Zealand, as elsewhere, the disabled students are those who were most obviously excluded from ordinary schools, classrooms and learning opportunities, and their integration was a project originated by their parents and extending across many years (Sonntag, cited by Ballard, 2005:1). In Europe, North America and Australia throughout the twentieth century, the disabled children are those who are categorized as having 'handicap' or 'impairment' and growth in the number of schools outside the mainstream for children whose needs were seen as different to those of 'normal' children. In these countries, however, the concept of special educational needs was never simply synonymous with 'impairment' (Armstrong, 2010:5). Since special education is polarized, special education is conceived differently in different parts of the world and is practiced variously accordingly (Pijl,2002: xi). The terms of inclusive education in Indonesia is closely related to wider education context. Therefore, it is not limited to the disabled students but for those who are disadvantageous in terms of emotional, mental, intellectual, and social aspects, including students who live in a very remote areas. The rights and obligatory responsibilities of Indonesian citizens in terms of inclusive education are implicitly formulated in Chapter IV, article 5, verses 1,2,3,4, and 5, of Indonesian Republic Regulations Number 20 Year 2003 about National Education Systems (Undang-Undang RI Nomor 20 Tahun 2003 Tentang Sistem Pendidikan Nasional). 
The regulation states that 1) every citizen has the same right to a quality education; 2) citizens with physical, emotional, mental, and/or social disabilities are entitled to special education; 3) citizens who live in very remote places, suburban areas, or very rural communities have the right to special-service education; 4) citizens with high intellectual potential and special talents have the right to special education; and 5) every citizen has the right to opportunities for life-long education. From these five verses, inclusive education, and to some extent 'special education', within the Indonesian National Education System is comprehensive. The regulations cover not only students with physical disabilities but also those who are disadvantaged emotionally, mentally, and socially, for example because of poverty, retardation, or illiteracy. In addition, the national education system also attends to students who live in remote areas or very rural communities, as well as gifted students and students with specific talents. All of this should be accomplished as a life-long process. To realize such an ideal education system, at least three processes are required: transformation of knowledge, comprehension of the transferred knowledge, and implementation in day-to-day practice. Two parties are responsible for carrying out these three prerequisites: education policy makers, in this case the stakeholders of school institutions, and practitioners, in this case teachers. The two sides should go hand in hand and strengthen their cooperation, because inclusive education is a very large agenda. The policy makers are responsible for socializing the policies and regulations for implementation. The practitioners, in turn, need to comprehend that knowledge and implement it in educational practice. The position of teachers is central to the implementation of inclusive education, because teachers directly encounter students with disabilities in school practice. They are considered the most important persons responsible for the conduct of inclusive education. They should be objective in that conduct, because subjectivity is not itself inclusive but exclusive. To be inclusive teachers, they should know holistically what inclusive education is. The Concept and Implementation of Inclusive Education Some studies show that inclusive education is a complex issue both conceptually and in its implementation. Conceptually, there are two main paradigms of inclusive education, namely integration and separation. Supporters of the former say that inclusive education should be integrated into regular school institutions, so there should be no separate inclusive schools. The second paradigm suggests that regular students and students with disabilities should be separated, as in the Netherlands, which divides its education system into regular schools and special schools (J. Pijl, Ysbrand, et al., 2005: 10). Besides this polarization in conceptualization, problems also appear in the implementation of inclusive education. Pijl and Meijer (2005) identify three factors that may influence the conduct of inclusive education: teacher factors, school factors, and external factors. 
Because of these factors, special education may be conceived differently in different parts of the world and practiced in varied ways accordingly (Pijl, 2002: xi). The concept of inclusive education extends to the wider context of 'social inclusion', addressed to those who are marginalized, unproductive, or non-participative in society, whether in the family, friendships, the community, education, the workplace, or leisure activities. Shucksmith (2000), cited by Topping et al. (2005: 2), says that social exclusion is associated with complicated problems such as poor skills, unemployment, low incomes, poor housing, high-crime environments, bad health, and family breakdown. It is not a purely urban phenomenon but may also affect non-urban communities. Research conducted by Pavlovic Slavica (Department of Education Science, Faculty of Sciences and Education, University of Mostar, Mostar, Bosnia and Herzegovina) shows that 1) more than 80% of primary teachers in Herzegovina-Neretva Canton (HNC) are neither prepared nor educated enough for the implementation of inclusive education in their schools; 2) almost half of primary school teachers in HNC (49.52%) strongly agree that schools are not ready for inclusive education, a view also supported by more than a third of teachers (37.14%); and 3) more than 90% of teachers say that they need additional education and training to be able to work with pupils with special needs. Research by İsa Korkmaz of Selcuk University, Konya, Turkey, entitled Elementary Teachers' Perceptions about Implementation of Inclusive Education, shows varying tendencies towards inclusive education. Of the 66 participants, 1) 34 said that inclusive education is acceptable if the classroom is small, 2) 18 agreed that students with disabilities should be partly included in classrooms, and 3) 14 said that students with any kind of disability should not be included in classroom activities. A further study, conducted by Yoon-Suk Hwang (Queensland University of Technology) and David Evans (University of Sydney) and entitled Attitudes towards Inclusion: Gaps between Belief and Practice, examined 33 Korean general education teachers from three primary schools in Seoul concerning their attitudes towards, and willingness to accommodate, the needs of students with disabilities. The results show that 41.37% of general education teachers had positive attitudes towards inclusion programs, while 55.16% were unwilling to actually participate. In other words, even though some general education teachers in the Republic of Korea (41.37%) agree with including students with disabilities in general education settings, more than half (55.16%) are unwilling to include such students in their own settings. Clearly, schools should consider numerous differences, since everyone has multiple identities, including racial, ethnic, religious, familial, language, gender, and other identities. Accordingly, inclusive schools require that teachers be responsible for all children and not simply for one aspect or characteristic (Mara Sapon-Shevin, 2007: 11). These studies demonstrate that teachers play an important role in the comprehension and perception of inclusive education and in its implementation. Indeed, Sip Jan Pijl and Cor J. W. Meijer (2002) identify the teacher as one of the three important factors for the successful implementation of inclusive education. 
The conduct of inclusive education depends greatly on teachers' attitudes towards pupils with special needs and on the resources available. Hegarty (1994), cited by Sip Jan Pijl and Cor J. W. Meijer (2002), reminds us that in a number of studies the attitude of teachers towards educating pupils with special needs has been put forward as a decisive factor in making schools more inclusive. They conclude that teachers' attitudes, the time available, teachers' knowledge and skills, teaching methods, and materials seem to be important prerequisites for meeting special needs. Rose (2010: 1-2) says that understanding demands effort: even in an age of mass communication technology, when it is relatively simple to share ideas and perspectives, it is not always easy to interpret the meaning of the messages we receive. James McLeskey and Nancy L. L. Waldron (2000: 48) say that if teachers and administrators do not understand the need to examine and change some of their beliefs concerning inclusion and schooling, an inclusive program will likely entail only superficial change. The concept and implementation of inclusive education have not yet spread to all areas in Indonesia, not even to all universities. Although inclusive education began in the 1980s in Western countries, it has not yet received a proportional response in this country. This is evidenced by an international workshop entitled 'Toward Inclusive Education for Universities in Indonesia', conducted by Brawijaya University in cooperation with the Directorate General of Higher Education, Ministry of National Education, on 10-11 November 2012 and attended by participants from around 50 (fifty) state and private universities in Indonesia. One of the recommendations of the workshop is a commitment to implement inclusive education for disabled students in Indonesia; in addition, disabled students must have wide access to an education equivalent to that of other students (accessed on November 15th, 2012 at 10:15 AM: http://www.dikti.go.id/?p=6958&lang=id). Since the concept of inclusive education has not yet spread everywhere, its implementation and the research concerning it are likewise uneven. Inclusive education is understood and implemented in various ways from place to place, from area to area, and to some extent from university to university. One research project was conducted by Zaenal Alimin in 2011, describing the Implementation Profile of Inclusive Education in Bandung City. His samples were 10 classrooms from four elementary schools in the city. He set out to determine 1) the presence of students with special needs in elementary schools which conduct inclusive education; 2) the inclusion indexes achieved by elementary schools which conduct inclusive education; and 3) the ways in which those schools achieve their inclusion indexes. 
His findings show that 1) there are about one to four students, on average two, with special needs per classroom, with total class sizes ranging from 20 to 46 students; 2) the average inclusion index is 38.58, against an ideal index of 54; and 3) higher inclusion indexes are achieved by classrooms that have more than one teacher, have more students with special needs, have a smaller total number of students, and have teachers who frequently join training in inclusive education (accessed on November 11th 2012 at 11.22 AM: http://repository.upi.edu/operator/upload/pros_ui-uitm_2011_zaenal_profil_implementasi_pendidikan_inklusif.pdf). Besides this, there is much other research; the writer draws on the last point, concerning the comprehension processes needed for the wider implementation of inclusive education. In short, the more often teachers join training in inclusive education, the better their understanding of inclusion, and to some extent the better their ability to implement it. In addition, the samples used in the present research are regular schools which do not conduct inclusive education. Based upon the theories, explanations, and the implementation example above, it is clear that teachers can come to comprehend inclusive education in many ways. Introduction by the relevant educational agents is the most expected way; in addition, teachers can comprehend the concept of inclusive education through seminars, training, or workshops conducted by these agents, including university institutions. Since the conceptualization of inclusive education varies, its implementation may also vary. However, the primary message of inclusive education must remain the same, namely giving disabled students the same opportunities to access education as regular students. Whatever approach is used, the better the understanding of inclusive education, the better its implementation. Research Methodology Based upon the research procedures that were subsequently carried out, this study is classified as quantitative research (Sugiyono, 2008: v). In terms of its natural setting, it is classified as survey research (Sugiyono, 2008: 4). Survey research is used to find out facts in a natural setting, with the researcher using instruments to collect the data; these can take the form of questionnaires, tests, structured interviews, and the like (Sugiyono, 2008: 6). The research participants of this study were all Islamic Elementary School teachers in the town of Salatiga. According to preliminary information from the Ministry of Religious Affairs of Salatiga (2008), there are twelve Islamic Elementary Schools in Salatiga; one is a state school, and the others are private institutions run by various social-mass organizations. These schools employ one hundred and twenty-two teachers: eight work in the State Islamic Elementary School and one hundred and fourteen work in private Islamic Elementary Schools. The sample of this research was seventeen teachers out of the one hundred and twenty-two, or about 14% of the population. The sampling technique was probability-based simple random sampling. The writer used a questionnaire as the primary data collection method and structured interviews as the secondary data collection method. 
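To make the sampling and tabulation arithmetic above concrete, the sketch below draws a 17-teacher simple random sample from the 122-teacher population and tallies Likert-style answers into percentages. The per-option counts are illustrative assumptions back-calculated from the percentages reported later in the Analysis section (for example, 1 of 14 returned questionnaires is roughly 7.1%); they are not the study's raw data.

```python
import random
from collections import Counter

# Sketch of the sampling and tabulation steps described above.
# The per-option counts below are illustrative assumptions only:
# back-calculated from the reported percentages
# (e.g. 1/14 ~ 7.1%, 2/14 ~ 14.3%, 11/14 ~ 78.6%), not raw data.

population = [f"teacher_{i:03d}" for i in range(1, 123)]  # 122 teachers in Salatiga
random.seed(42)
sample = random.sample(population, 17)                    # ~14% simple random sample
print(f"sampled {len(sample)} of {len(population)} teachers "
      f"({100 * len(sample) / len(population):.1f}%)")

# Hypothetical answers from the 14 returned questionnaires for one item.
answers = ["sufficient"] * 1 + ["neutral"] * 2 + ["insufficient"] * 11

def percentages(responses):
    """Tally responses and express each option as a percentage of all responses."""
    counts = Counter(responses)
    total = len(responses)
    return {option: round(100 * n / total, 1) for option, n in counts.items()}

print(percentages(answers))
# -> {'sufficient': 7.1, 'neutral': 14.3, 'insufficient': 78.6}
```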
The data collected were analyzed using the Interactive Model developed by A. Michael Huberman and Matthew B. Miles, consisting of data reduction, data display, and conclusion drawing/verification (Miles & Huberman, 1984, 1994). Analysis The writer divides this part into several sub-sections, namely data collection, data display, data reduction, data analysis, and discussion. The research data were primarily collected through a questionnaire. There were two types of questionnaire items, namely completion and option items. The former were used to gather general information concerning gender, age, education level, and teaching experience; the latter were used to gather specific information concerning teachers' comprehensions, perceptions, and attitudes towards inclusive education. The secondary data collection method, the interview, was used to complement the questionnaire. Of the seventeen participants, only fourteen returned the questionnaires. The results of the questionnaires are presented as follows: 1. Teachers' comprehension of inclusive education. Table 1. Teachers' Comprehensions on Inclusive Education. Based on Table 1, it appears that most teachers in the Islamic Elementary Schools do not yet have sufficient comprehension of inclusive education. Only 7.1% report sufficient comprehension and 14.4% are neutral, while 78.5% report insufficient comprehension. This may be caused by both their pre-service education and their subsequent professional development. During their pre-service education, almost all of the teachers (92.9%) received insufficient exposure to inclusive education, and 7.1% are unsure whether they received any (neutral). After their formal education, they have almost never taken part in a seminar or workshop on inclusive education. Consequently, most of them (92.9%) feel they need improvement concerning inclusive education, and only 7.1% are unsure about needing such improvement. In brief interviews, most of the teachers said they greatly need contributions from the Ministry of Religious Affairs, the Ministry of Education, and universities to improve their comprehension of inclusive education, for instance through intensive socialization, seminars, or workshops. In this way, they expect to become able to identify and manage classrooms containing students with special needs. Table 2. Teachers' Perceptions of Inclusive Education. As with teachers' comprehension of inclusive education, teachers' perceptions of it also need to be examined carefully. Table 2 shows that 21.4% strongly disagree and 42.9% disagree that the educational facilities provided accommodate the needs of students with special needs; 28.7% are neutral and only 7.1% agree that the facilities are adequate. This means that about 64.3% say that the educational facilities are insufficient to meet the needs of students with special needs. However, the teachers do not believe that school institutions are unable to provide such facilities: 42.9% disagree that there are such difficulties, 42.9% are neutral, while 7.1% strongly agree and 14.4% agree that school institutions are able to provide the facilities needed by students with special needs. In addition to the facilities, the curricula are not yet prepared for students with special needs. 
For the curricula, 7.1% strongly disagree and 50% disagree that the curricula accommodate the needs of disabled students; only 21.4% are neutral and 21.4% agree that the existing curricula are appropriate for disabled students. Given these facts, the split between teachers who feel ready and those who do not is roughly fifty-fifty. About 7.1% strongly disagree and 28.7% disagree that they are ready to prepare inclusive education, the same proportions as those who strongly agree (7.12%) and agree (28.7%) with such preparation; the remaining participants (28.7%) are neutral. Since they are not entirely ready to prepare an inclusive classroom, they are also not ready (50%) to manage one, 28.7% are neutral, and only 21.5% feel ready to manage an inclusive classroom. These perceptions of the inclusive classroom are understandable, because the teachers have insufficient comprehension of inclusion. Table 3. Teachers' Attitudes towards Inclusive Education. Teachers' attitudes towards inclusive education sit relatively in the middle: they neither absolutely disagree nor absolutely agree with inclusion. The first item concerns whether disabled students and regular students should be mixed in the same classroom. 7.1% of participants strongly disagree and 35.7% disagree with disabled students studying together with regular students; some (28.7%) are neutral, while 28.5% agree with mixing them in one classroom (7.1% strongly agree and 21.4% agree). This division between the disagreement and agreement poles regarding mixing is mirrored in the second item, which deals with separation. Just as about 42.8% disagree with mixing, the same percentage (42.8%) agree with separation; at the same time, 50% disagree with separation and only 7.1% are neutral. This means that supporters of mixing outnumber supporters of separation. The third item reflects the first two: 43.1% of participants do not believe that disabled students are able to follow teaching-learning processes like regular students, while 50% still believe in their capability to follow the class like regular students and only 7.1% are neutral about that capability. The same trend can be seen in the next item, in which 43.1% of participants feel that disabled students will not gain more self-esteem even though they study together with regular students, whereas 57.3% believe that mixing will increase disabled students' self-esteem. Along the same lines, 14.2% of participants do not believe that disabled students are able to cooperate with regular students, 21.4% are neutral, and 64.2% (7.1% strongly agree and 57.3% agree) believe that disabled students are able to cooperate with regular students. The last two items, concerning whether teaching-learning processes are disturbed when disabled students study together with regular students, follow the same tendencies. Around 42.8% of participants believe that the class will be disrupted if disabled students are placed in it, 42.9% are neutral, and only 14.4% agree that teaching-learning processes will not be disturbed when students with special needs study together with regular students. 
The last item concerns the assumption that disabled students have roughly the same academic capabilities as regular students. Here, 64.3% of participants do not believe that disabled students have roughly the same academic capabilities as regular students, 21.4% are neutral, and 14.4% agree that students with special needs have roughly the same academic capabilities as regular students. To be clear, the writer now presents the overall calculations, namely those for teachers' comprehensions, perceptions, and attitudes towards inclusive education. The results of this research show that teachers' comprehension of inclusive education (as in Table 1 above) is still very low: 88.1% of participants feel that their comprehension of, and education in, inclusive education needs to be improved. In interviews, they expressed the hope that educational agents such as the Ministry of Religious Affairs, the Ministry of Education, and university institutions would involve them in such improvement. So far, there has been no intensive socialization, seminar, or workshop from these agents; in the future, they are expected to contribute substantially to such improvement programs. Borrowing Rose's ideas (2010: 1-2), it is not always easy to make sense of information even in an age of mass communication technology. In any case, it is necessary to reconsider the importance of reading as recommended in the holy Qur'an. The first revelation (surah al-'Alaq: 1-5) given to the Prophet Muhammad SAW reminds all Muslims to intensify their daily reading. Reading should not only be a routine but a necessity, because reading is the window to the world. Moreover, reading should be strongly encouraged in Islamic institutions, which have so far been regarded as second-tier educational institutions; that negative label will change only if Muslims voluntarily change it themselves, for otherwise there will be no change (Surah Ar-Ra'd: 11). Since the teachers' comprehension is low, it is unsurprising that their perceptions of inclusive education are negative, because comprehension of something shapes the perception of it. Table 2 shows that 50.02% of the participants are doubtful about the available facilities, the readiness to provide facilities and curricula, and their own readiness to prepare and manage an inclusive classroom; only 21.46% of participants have positive expectations about preparing for the practice of inclusion. The last section of this research portrays the teachers' attitudes towards inclusive education. Based on the results in Table 3, it is fortunate that even though the teachers' comprehension is very low and their perceptions are negative, their attitudes towards inclusive education are still better: those with positive attitudes (43.89%) outnumber those with negative attitudes to inclusion (37.81%). For Islamic institutions, and to some extent for Islamic teachings, these findings deserve careful thought, because Islam is 'rahmatan lil 'alamiin' (Al Anbiyaa: 21). To that end, all Islamic institutions should strive hard to achieve this ambition. As said before, The Almighty God 'Allah s.w.t.' will never change the Muslims' fate unless they strive to change themselves, including changing the quality of their education. In addition, they should work hand in hand and cooperate to make the hard work easier (Al Maidah: 2). 
Since Islam was sent to be 'rahmatan lil 'alamiin', Muslims should have positive attitudes towards students, including students with special needs. They should be aware that there will be students requiring inclusion in every class. As explained in the previous sections, inclusion goes beyond special education. Teachers may therefore find in their classrooms students with special intelligences, students who are very passive or aggressive, students with attention problems, perfectionists, students who are socially inept, and the like, all of whom have special needs. They have the same right to an education. Conclusion Based upon the data analyses and discussions above, the writer draws the following conclusions: 1. Most Islamic Elementary School teachers regard inclusive education as a relatively new phenomenon in both their thinking and their practice. The survey results show that most teachers (88.1%) have little comprehension of inclusive education, and only 4.7% feel they have even a little knowledge of it. Their comprehension is poor because they did not receive sufficient knowledge while studying at university, and they also have little experience of intensive seminars or workshops on inclusive education conducted by related institutions such as the Ministry of Religious Affairs, the Ministry of Education, or university institutions. 2. Most Islamic Elementary School teachers (50.02%) have negative perceptions of inclusive education, and only 21.46% have positive perceptions of it. Their negative perceptions concern the availability of the educational facilities needed by disabled students, even though school institutions may be able to provide such facilities. In addition, the curricula do not yet accommodate the needs of these students. Consequently, most teachers are pessimistic about preparing for inclusive education and about managing an inclusive classroom. 3. Even though most Islamic Elementary School teachers have little comprehension of and negative perceptions towards inclusive education, they still hold positive attitudes towards it (43.89%), while the others (37.81%) hold negative attitudes towards inclusion. Their expectation of the successful practice of inclusive education outweighs their expectation of failure, provided that related institutions such as policy makers, university institutions, and school institutions are able to cooperate in socialization, seminars, and workshops concerning inclusion.
2018-12-07T14:20:19.583Z
2012-12-01T00:00:00.000
{ "year": 2012, "sha1": "00ef54f862c805978eee935ba92551969f4a4fc5", "oa_license": "CCBYSA", "oa_url": "http://inferensi.iainsalatiga.ac.id/index.php/inferensi/article/download/192/152", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "00ef54f862c805978eee935ba92551969f4a4fc5", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Political Science" ] }
11997286
pes2o/s2orc
v3-fos-license
Critical Role of Aberrant Angiogenesis in the Development of Tumor Hypoxia and Associated Radioresistance Newly formed microvessels in most solid tumors show an abnormal morphology and thus do not fulfil the metabolic demands of the growing tumor mass. Due to the chaotic and heterogeneous tumor microcirculation, a hostile tumor microenvironment develops that is characterized inter alia by local hypoxia, which in turn can stimulate the HIF system. The latter can lead to tumor progression and may be involved in hypoxia-mediated radioresistance of tumor cells. Herein, cellular and molecular mechanisms in tumor angiogenesis are discussed that, among others, might impact hypoxia-related radioresistance. Introduction Endothelial cells build up the first 'barrier' between the blood, the interstitial space (and stroma), and parenchymal cells. A dense network of blood vessels is necessary to provide an adequate supply of oxygen and nutrients, and efficient drainage of waste products. Although the turnover rate of endothelial cells is generally slow in adult organs, endothelial cell growth can be induced under (patho-)physiological conditions like wound healing, the menstrual cycle or placenta formation. As in normal tissues, the growth of solid tumors also depends on blood vessels. Vasculogenesis, arteriogenesis and angiogenesis are the three major principles by which new vessels are built. Vasculogenesis is a process that involves undifferentiated progenitor cells in order to form a vascular network and is required for the de novo formation of a vascular network in embryogenesis and growth [1]. In contrast to vasculogenesis, arteriogenesis refers to the remodelling of pre-existing arterioles to form arteries upon, e.g., increased shear stress. Arteriogenesis is based on chemokine/growth factor-induced growth processes and enlargement of vascular wall structures at larger shear stress that is induced by increased blood flow rates in arteries [2]. During angiogenesis, vessels are formed from the existing microvasculature. The mechanism of angiogenesis involves either sprouting from pre-existing vessels or splitting through intussusception [3,4]. Apart from the female reproductive organs, during pregnancy and in wound healing [5,6], the vasculature rarely forms new branches in adults. However, endothelial cells retain their plasticity to sense and respond to angiogenic signals during their whole life-time. In general, angiogenesis is tightly regulated by a fine balance of activating and inhibiting signals. Cytokines, hormones, circulating progenitor cells (whose role is not completely understood), endothelial cell migration, and destabilization of the vessel wall, the basal lamina, and the interstitial matrix can impact on angiogenesis. Apart from physiological parameters, microenvironmental factors such as hypoxia and nutrient deficiencies can also trigger the angiogenic switch. Angiogenesis is also a crucial player in the pathogenesis of autoreactive diseases such as age-related macular degeneration, rheumatoid diseases, inflammation, arteriosclerosis, vascular restenosis and different vasculopathies. A close link between inflammation and angiogenesis is indicated by hallmark factors of acute and chronic inflammation such as VEGF-A and the angiopoietins [7]. Vessel Formation in Malignant Tumors Tumor angiogenesis involves the production and release of growth factors, permeability-regulating factors, migration-stimulating factors, proteolytic enzymes, extracellular matrix and adhesion molecules. 
These factors can be released either by tumor, stromal and/or inflammatory cells that are located within or in close proximity to the tumor. Growth factors of tumor angiogenesis can either involve specific vascular endothelium factors (i.e., vascular endothelial growth factor (VEGF), angiopoietin and ephrin family members), or non-specific factors (i.e., platelet-derived growth factor (PDGF), transforming growth factor-beta (TGF-β), fibroblast growth factor (FGF) or tumor necrosis factor-α (TNF-α)) [8]. In principle, the progression of tumor growth is critically dependent on oxygen and nutrient supply and the drainage of metabolites [9], since diffusion without the involvement of blood vessels allows transport processes only over very short distances of less than 500 µm. The physiology of tumors is different from that of normal tissues. It is characterized inter alia by O2 depletion (hypoxia or anoxia), extracellular acidosis, high lactate and adenosine levels, glucose and bicarbonate deprivation, energy impoverishment, significant interstitial fluid flow, interstitial hypertension, and other adverse conditions characterizing the metabolic tumor microenvironment [10][11][12][13][14]. This hostile microenvironment is largely determined by an abnormal tumor microcirculation. When considering the continuous and indiscriminate formation of a vascular network in growing tumors, the following pathogenetic mechanisms can be involved either alone or in combination: (a) Angiogenesis by endothelial sprouting from pre-existing venules [15,16]. (b) Co-option of existing vessels [17]. (c) Vasculogenesis (de novo vessel formation) through incorporation of circulating endothelial precursor cells [17]. (d) Intussusception (splitting of the lumen of a vessel into two). (e) Formation of pseudo-vascular channels lined by tumor cells rather than endothelial cells ('vascular mimicry'). (f) Microvessel formation by a subset of bone marrow-derived myeloid cells infiltrating the tumor [17]. Despite these various possibilities for the formation of tumor microvessels, the tumor vasculature often lacks the signals to mature and, therefore, the tumor vasculature is also termed an 'aberrant monster' [18]. Tumor vessels are characterized by vigorous proliferation which leads to immature, structurally defective and, in terms of perfusion, ineffective microvessels (Figure 1). Consequently, tumor blood flow is chaotic and heterogeneous, and the vascular supply and the metabolic microenvironment are inadequate and hostile. However, due to the spatio-temporal heterogeneity of pro-angiogenic signals, not all vessels are totally immature in clinical cancers. Some of them actually retain contractile properties [19]. Angiogenic Switch in Tumors Small tumors can stay dormant until the so-called angiogenic switch occurs. Neovascularization is driven by pro-angiogenic factors that facilitate the formation of new microvessels from pre-existing blood vessels. This angiogenic switch, by which an avascular tumor nodule is converted into a fast growing, vascularized and aggressive tumor, is important in tumor growth and dissemination [20]. The angiogenic switch is regulated by various pro-angiogenic factors such as VEGF, IL-8, bFGF, EGF, PDGF, MMP-2/-9, uPA, Notch-1/-4, osteopontin, and angiogenin and anti-angiogenic factors such as angiostatin, thrombospondin, IFN-γ, IL-1/-4/-12/-18/-21. Among them, members of the VEGF family (VEGF-A, B, C, D) have been identified as the dominant players in tumorigenesis. 
VEGF-C has been shown to activate the VEGFR-3 and Notch signalling pathways [21]. Under normoxic conditions, VEGF is mainly regulated by hypoxia-inducible factor 1 (HIF-1) that can be activated by pro-inflammatory cytokines such as IL-1 and TNF via PI3K and NF-κB [22,23], or indirectly by IL-6 and PGE 2 in an autocrine manner [24,25]. In human tumors the VEGF expression is frequently upregulated [26]. High VEGF levels promote the production of abnormal vessels as mentioned before [27] and binding of VEGF to its corresponding receptor leads to the activation of the PI3K/Akt/mTOR and Ras/Raf/MAPK pathways that in turn promote not only angiogenesis but also proliferation, differentiation and survival of tumor cells [28]. Human osteosarcoma and pancreatic adenocarcinoma cells have been found to spontaneously release high amounts of VEGF, MMP-2, and IL-8 that enhance the invasiveness of tumors [29][30][31]. Via an auto-regulatory loop, the pro-inflammatory cytokine IL-1 further up-regulates the secretion of the pro-angiogenic factor IL-8 and thus supports the growth of tumors [30]. In SW1353 chondrosarcoma cells IL-1 induces a massive release of the pro-angiogenic factors MMP-1 and MMP-13. These findings highlight the crucial impact of inflammatory mediators in tumor angiogenesis [32]. Epidermal growth factor receptor (EGFR) that is also frequently overexpressed in tumors is associated with poor prognosis, high resistance to radiochemotherapy and increased metastatic spread [32]. EGFR predominantly induces the Ras/Raf/MAPK and the PI3K/Akt pathway that are both responsible for anti-apoptotic and pro-survival signals. Pro-angiogenic factors orchestrating the complex angiogenesis in solid tumors also include metabolites/catabolites such as lactate, 3-hydroxybutyrate, succinate and fumarate [33]. The role of irradiation in angiogenesis is still a matter of debate. Whereas some groups speculate that radiation can induce angiogenesis others report on a repression of angiogenesis by ionizing irradiation [34][35][36][37]. Tumor Microcirculation As already mentioned, newly formed microvessels in most solid tumors do not conform to the normal morphology of the host tissue vasculature. The tumor vasculature can be described as a system that is maximally stimulated, yet only minimally fulfils the metabolic demands of the growing tumor that it supplies. Microvessels in solid tumors exhibit a large series of severe structural and functional abnormalities. They are often dilated, tortuous, elongated, and saccular. It is of note that not only the quantity of microvessels counts, but also-or even more so-the quality of vascular function in terms of the tumor tissue supply or drainage [37,38]. There is significant arterio-venous shunt perfusion accompanied by a chaotic vascular organization that lacks any regulation matched to the metabolic demands or functional status of the tissue [11]. Excessive branching is a common finding, often coinciding with blind vascular endings. Incomplete or even missing endothelial lining and interrupted basement membranes result in an increased vascular permeability with extravasation of blood plasma and/or red blood cells expanding the interstitial fluid space and drastically increasing the hydrostatic pressure in the tumor interstitium (interstitial fluid pressure). In solid tumors, there is a rise in viscous resistance to flow mainly caused by the hemoconcentration (hematocrit increase ranging from 5 to 14%) [39,40]. 
Aberrant vascular morphology and a decrease in vessel density are responsible for an increase in geometric resistance to flow, which can lead to an inadequate perfusion. Substantial spatial heterogeneity in the distribution of tumor vessels and significant temporal heterogeneity in the microcirculation within a tumor [41][42][43][44] may result in a considerably anisotropic distribution of tumor tissue oxygenation and a number of other factors, which are usually closely linked and which define the so-called pathophysiological microenvironment. Variations in these relevant parameters in different tumors are often more pronounced than differences occurring between different locations or microareas within a tumor [12,45,46]. Tumor Blood Flow Rates A number of studies on blood flow through human tumors have been reported. Some of them are anecdotal reports rather than systematic investigations, and therefore, definite conclusions cannot be drawn partly due to the use of non-validated techniques to measure flow in volume flow rate units. Considering the presently available data, the following conclusions can be drawn when flow data derived from different reports are pooled (for reviews see [11,14,17,47]): (a) Blood flow can vary considerably despite similar histological classification and primary site (0.01-2.9 mL/g/min; [17,48,49]). (b) Tumors can have flow rates which are similar to those measured in organs with a high metabolic rate such as liver, heart or brain. (c) Some tumors exhibit flow rates which are even lower than those of tissues with a low metabolic rate such as skin, resting muscle or adipose tissue. (d) Blood flow in human tumors can be higher or lower than that of the tissue of origin, depending on the functional state of the latter tissue (e.g., average blood flow in breast cancers is substantially higher than that of postmenopausal breast and significantly lower than flow data obtained in the lactating, parenchymal breast). (e) The average perfusion rate of carcinomas does not deviate substantially from that of tissue sarcomas. (f) Metastatic lesions exhibit a blood supply which is comparable to that of the primary tumor [11]. (g) In some tumor entities, blood flow in the periphery is distinctly higher than in the center whereas in others, blood flow is significantly higher at the tumor center compared to the tumor edge. (h) Flow data from multiple sites of measurement show marked heterogeneity within individual tumors. In cervical cancer, the intra-tumor heterogeneity was similar to the inter-tumor heterogeneity [50]. (i) There is substantial temporal flow heterogeneity on a microscopic level within human tumors as shown by multichannel laser Doppler flowmetry [51,52]. (j) There is no association between tumor size and blood flow in many cancers [48,53]. (k) Tumor blood flow is not regulated according to the metabolic demand as is the case in normal tissues. With regard to the efficacy of radiotherapy the effectiveness of blood flow greatly influences the oxygen supply of tumors. Therefore, the responsiveness of solid tumors to radiotherapy (and chemotherapy) profoundly depends on blood perfusion [54]. Arterio-Venous Shunt Perfusion in Tumors First rough estimations concerning the arterio-venous shunt flow in malignant tumors showed that at least 30% of the arterial blood can pass through experimental tumors without participating in the microcirculatory exchange processes [55][56][57]. 
In patients receiving intra-arterial chemotherapy for head and neck cancer, shunt flow is reported to be 8% to 43% of total tumor blood flow, the latter consistently exceeding normal tissue perfusion on the scalp [58]. The mean fractional shunt perfusion of tumors was 23% ± 13% in studies utilizing 99m Tc-labeled micro-aggregated albumin (diameter of the particles, 15-90 µm). The significance of this shunt flow on local, intra-tumoral pharmacokinetics, on the development of hypoxia, and on other relevant metabolic phenomena has not yet been systematically studied and remains speculative. High amounts of shunt flow through solid tumors not only impact on pharmacokinetics of anti-cancer agents, but also limit the effectiveness of radiotherapy due to the development of diffusion-limited, chronic hypoxia [44]. Tumor Hypoxia and HIF Aberrant microcirculation is a major causative factor for the development of hypoxia in solid tumors [59]. Hypoxia is strongly associated with radio-resistance of malignant tumors, tumor recurrence after radiation therapy, and poor prognosis in patients subjected to radiation therapy [50,60]. On the one hand, free radicals that are produced by radiation, either directly or indirectly from an interaction with other molecules such as water, can react with H + in the absence of oxygen and thus the target can be chemically restored to its original form. On the other hand, hypoxia can stimulate the HIF system which in turn can lead to tumor progression. It is hypothesized that the heterodimeric transcription factor HIF-1 is also involved in hypoxia-mediated radioresistance of tumor cells [61,62]. However, in vitro data from our group indicate that high HIF-1α levels in lung cancer cell lines are not associated with radioresistance [63]. Apart from its role in the development of radioresistance, HIF-1α is crucially involved in tumor angiogenesis, invasion, survival, and growth [64]. Harada et al. have demonstrated that irradiation causes an up-regulation of intra-tumoral HIF-1α protein and activity in regions of radiation-induced re-oxygenation of the solid tumor via the PI3K/Akt/mTOR pathway which is responsible for synthesis, stabilization and accumulation of HIF-1α, the oxygen-regulated subunit of HIF-1 [62]. From these results it can be concluded that Akt/mTOR-dependent translation of HIF-1α plays a critical role in the post-irradiation up-regulation of intra-tumoral HIF-1 activity in response to radiation-induced alterations of oxygen availability in solid tumors. Stability of HIF-1α can also be regulated in an oxygen-independent manner by RACK-1 (receptor of activated protein kinase C) through competition with Hsp90 and recruitment of the elongin-C/B ubiquitin ligase complex [65]. As stated by Yoshimura et al., the transactivational activity of HIF-1 is critically regulated by the MAPK/ERK pathway [66]. HIF-1 transactivational activity was found as being suppressed by FIH-1 (factor inhibiting HIF-1) under normoxic conditions via HIF-1α hydroxylation and concomitant blockage of adapter molecule binding [67]. Furthermore, the irradiation-induced HIF-1 activation can also depend on the availability of nitric oxide [68]. Therapeutic Interventions to Overcome Hypoxia-Related Radioresistance Since hypoxia is known to protect tumor cells from standard radiation therapy, several strategies have been developed to interfere with hypoxia-related radioresistance of solid tumors [66]. 
Hyperbaric oxygen therapy, carbogen, nicotinamide and other 'flow modifiers', as well as modification of the hemoglobin-O2 affinity, have been tested to facilitate oxygen delivery to hypoxic regions. Nitroimidazole derivatives such as misonidazole and nimorazole have been used to sensitize tumors to radiation by mimicking the effect of oxygen. Hypoxic cytotoxins (tirapazamine and analogues) are meant to directly kill tumor cells by hydroxyl radicals or oxidizing radicals. Combined treatment strategies consisting of tirapazamine analogues and HIF-1 inhibitors, such as YC-1, have been tested to increase the radio-responsiveness of tumors [69][70][71][72][73]. However, due to the chaotic vascularization in most solid tumors, none of these approaches could significantly improve the sensitivity towards ionizing irradiation. Anti-angiogenesis is another approach that may affect the radiosensitivity of tumors. The key angiogenic factor VEGF facilitates tumor growth and survival, and therefore most anti-angiogenic strategies aim to interrupt the VEGF pathway. This idea has led to the development of several anti-VEGF reagents including the anti-VEGF antibody bevacizumab, the anti-VEGF receptor antibody ramucirumab and the VEGF antagonist aflibercept. Despite some promising results in clinical trials, the blockade of VEGF signalling also exerts adverse effects such as resistance to VEGF inhibitors as well as hemorrhagic and thrombotic events due to the damage of healthy vessels [71]. In molecular biology a small molecule is defined as a reagent with a low molecular weight of less than approximately 900 Da. These molecules harbour the capacity to rapidly diffuse across cell membranes and thus can enter cells. Small molecule drugs in pharmacology frequently serve as signalling molecules. A wealth of evidence indicates that small-molecule tyrosine kinase inhibitors such as axitinib, brivanib, cediranib, imatinib, motesanib, pazopanib, sorafenib, sunitinib as well as vatalanib and vandetanib harbor promising activity and safety in certain cancer subtypes (for reviews see [66,74,75]). Attempts to target the tumor microenvironment in order to improve the effects of radiotherapy also comprise the endogenous angiogenesis inhibitors angiostatin [76] and endostatin [77][78][79][80]. Preclinical results of Ke et al. demonstrated that the recombinant human endostatin, endostar, can increase the radiation sensitivity of nasopharyngeal carcinomas in a nude mouse model by lowering the VEGF expression [79]. Interestingly, in patients with advanced cervical cancer the combination of endostar with standard chemoradiotherapy was found to improve the early therapy outcome with acceptable adverse effects [79]. Due to the small sample size and the relatively short follow-up period, further investigations are needed with respect to long-term effects. Despite its history as a human teratogen, thalidomide was tested as a putative drug to disrupt tumor angiogenesis. Although thalidomide monotherapy in patients with therapy-resistant uterine carcinomas prolonged the progression-free survival in a phase II trial [81], a phase III trial did not reveal any survival benefit for patients with brain metastases that have been treated with thalidomide in combination with radiotherapy compared to radiotherapy alone [82]. A meta-analysis of eight randomized trials with 2,317 patients with brain tumors confirmed this observation. 
Whole brain radiotherapy (WBRT) combined with the potential 'radiosensitizer' thalidomide did not significantly improve the overall survival, local control and tumor response compared to WBRT alone [83]. Targeting tumor cells with the EGFR inhibitor erlotinib followed by radiation delayed tumor re-growth to a greater extent than radiation alone [85]. The increase in radiosensitivity by erlotinib was accompanied by a down-regulation of HIF-1 and VEGF, decreased vascular permeability, an increase in tumor blood flow, and a decrease in hypoxia. In a phase I trial, the safety and tolerability of therapy with the mTOR inhibitor everolimus in combination with radiation and temozolomide (TMZ) was evaluated in patients with newly diagnosed glioblastoma multiforme (GBM) [86]. As demonstrated in this study, the combination of everolimus with standard chemoradiotherapy in patients with GBM was reasonably well tolerated. Moreover, early FDG-PET imaging one week after everolimus monotherapy revealed a partial metabolic response in a subset of the patients. Adding the anti-VEGF antibody bevacizumab and everolimus to standard radiation therapy plus TMZ in the first-line therapy of patients with glioblastoma has been shown to be feasible and safe [87]. The progression-free survival was improved compared to standard radiation therapy plus TMZ. These data are in line with results achieved in other phase II trials in which bevacizumab was used as a first-line therapy [87]. At present, phase III clinical trials are ongoing to clarify the role of bevacizumab in glioblastoma patients. A novel approach to radiosensitize tumors is the use of the Hsp90 inhibitor NVP-AUY922. This compound was found to radiosensitize cervical, colorectal and head and neck squamous cell carcinoma (HNSCC) cell lines with a greater potency than any other tested Hsp90 inhibitor in vitro and in vivo [88]. Moreover, NVP-AUY922 in combination with radiotherapy resulted in a delayed growth of human prostate cancer cells in a mouse model in a supra-additive manner [89]. This effect is due to an oxygen-independent degradation of HIF-1α [65]. These data indicate that interference with signalling pathways related to hypoxia might improve the radiosensitivity of tumors. Conclusions As summarized in this review, the dynamic and complex tumor microenvironment is largely determined by an aberrant tumor microcirculation characterized, among others, by hypoxia leading to radioresistance of malignant tumors and promoting tumor progression via stimulation of HIF-1. However, more detailed information is needed to characterize the dynamic aspects of tumor hypoxia during treatment. A better understanding of signalling pathways related to hypoxia in the tumor microenvironment should help to develop clinical approaches that address radioresistant hypoxic tumors.
2016-03-14T22:51:50.573Z
2014-04-08T00:00:00.000
{ "year": 2014, "sha1": "70927d532628a8310ddefc2fef6cea9e31106229", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/6/2/813/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "70927d532628a8310ddefc2fef6cea9e31106229", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
265404463
pes2o/s2orc
v3-fos-license
Electrode sharpness and insertion speed reduce tissue damage near high-density penetrating arrays Objective. Over the past decade, neural electrodes have played a crucial role in bridging biological tissues with electronic and robotic devices. This study focuses on evaluating the optimal tip profile and insertion speed for effectively implanting Paradromics' high-density fine microwire array (FμA) prototypes into the primary visual cortex (V1) of mice and rats, addressing the challenges associated with the 'bed-of-nails' effect and tissue dimpling. Approach. Tissue response was assessed by investigating the impact of electrodes on the blood-brain barrier (BBB) and cellular damage, with a specific emphasis on tailored insertion strategies to minimize tissue disruption during electrode implantation. Main results. Electro-sharpened arrays demonstrated a marked reduction in cellular damage within 50 μm of the electrode tip compared to blunt and angled arrays. Histological analysis revealed that slow insertion speeds led to greater BBB compromise than fast and pneumatic methods. Successful single-unit recordings validated the efficacy of the optimized electro-sharpened arrays in capturing neural activity. Significance. These findings underscore the critical role of tailored insertion strategies in minimizing tissue damage during electrode implantation, highlighting the suitability of electro-sharpened arrays for long-term implant applications. This research contributes to a deeper understanding of the complexities associated with high-channel-count microelectrode array implantation, emphasizing the importance of meticulous assessment and optimization of key parameters for effective integration and minimal tissue disruption. By elucidating the interplay between insertion parameters and tissue response, our study lays a strong foundation for the development of advanced implantable devices with a reduction in reactive gliosis and improved performance in neural recording applications. Early studies of cortical reactive gliosis indicate that the size of implantable devices did not impact the overall foreign body response [33]. However, recent studies have emphasized the significance of 'subcellular-sized' microelectrodes in mitigating the adverse tissue responses typically associated with conventional-sized electrodes [34]. This reduced tissue response is attributed to the diminutive dimensions of these microelectrodes, which facilitate improved neural density around the probes, a decrease in gliosis and scar tissue formation, and improved recording performance [35,36]. In turn, this research has underscored the critical role of design considerations, particularly because of the steep signal drop-off [2] (approximately 1/r to 1/r²) of recorded action potentials that occurs with microelectrodes, which typically have a small recording radius of ∼100 µm [31] even with low impedances [36]. As expected, functional microelectrodes with subcellular dimensions made from silicon or carbon demonstrated improved recording performance compared to traditionally sized devices [35][36][37][38][39][40]. However, a number of technical hurdles remain for implementing these technologies on a high-channel-count scale [40]. 
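The steep 1/r to 1/r² drop-off noted above is the main reason the effective recording radius is only on the order of 100 µm. The sketch below illustrates the scaling with assumed numbers (a 100 µV spike at a 10 µm reference distance and a pure power-law decay); it is an illustration, not the authors' model.

```python
# Illustrative power-law decay of extracellular spike amplitude with distance.
# Assumptions (not from the paper): 100 µV amplitude at a 10 µm reference
# distance, and pure 1/r or 1/r^2 scaling beyond that.

def spike_amplitude_uv(distance_um, a0_uv=100.0, r0_um=10.0, exponent=1.0):
    """Amplitude at distance_um for a spike of a0_uv at r0_um, decaying as 1/r^exponent."""
    return a0_uv * (r0_um / distance_um) ** exponent

for r in (10, 25, 50, 100, 200):
    print(f"r = {r:>3} µm | 1/r: {spike_amplitude_uv(r, exponent=1):6.1f} µV"
          f" | 1/r^2: {spike_amplitude_uv(r, exponent=2):6.1f} µV")
# At 100 µm the signal has fallen to ~10 µV (1/r) or ~1 µV (1/r^2), i.e. near
# or below typical noise floors, consistent with a ~100 µm recording radius.
```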
The implementation of high-channel-count arrays on a large scale poses substantial technical challenges [35,39,40]. One of the primary hurdles is the manual assembly process associated with ultrasmall carbon fiber devices, making it impractical to build arrays with hundreds or thousands of channels efficiently [41]. Furthermore, the transition to subcellular-sized devices has presented an additional trade-off, with a need to address the increased fragility of these devices [2,42]. Strategies involving cylindrical geometries devoid of hard corners have been proposed to mitigate this issue [2,41]. However, blunt probes pose several challenges during tissue penetration, including increased insertion forces, a higher likelihood of missing anatomical targets, and heightened potential for tissue trauma [43][44][45][46]. Additionally, the crucial phase of electrode insertion demands careful consideration, as sharper tip profiles and controlled insertion speeds have been shown to be instrumental in preserving tissue integrity and optimizing electrode performance [47][48][49]. Although 16-channel Utah arrays can easily be inserted at a slow speed [30], 100-channel arrays need to be inserted at ballistic speeds using a pneumatic inserter [50]. With an increasing number of channels, it becomes increasingly difficult to insert bed-of-needle arrays (microscale 'bed-of-nails') without compressing and damaging the underlying brain tissue. Once penetration through the brain surface is achieved, implantation of intracortical arrays leads to severing and rupturing of blood vessels [47,48,51]; tearing of neuronal cell membranes and calcium activation [52]; injury to axons and myelin [27,52,53]; and activation of glial cells such as microglia, astrocytes, and oligodendrocyte progenitor cells [22][23][24][54][55][56][57][58][59][60][61]. In addition, as the number of channels increases, the total volume of the device that is implanted also increases, requiring the displacement of a greater volume of brain tissue [2] (see the back-of-the-envelope sketch below). As the pitch of individual shafts is made smaller, the tissue response eventually treats the individual sub-cellular shafts as one large electrode shaft [40]. Careful consideration of interdependent design parameters, together with innovative design strategies, is necessary for the practical and functional implementation of high-density, high-channel-count arrays [2]. Furthermore, the challenges associated with the practical implementation of high-channel-count arrays extend to the intricate aspects of device connectivity and the effective extraction of neural signals amidst external noise sources. The limitations imposed by connector bandwidths and computational power necessitate innovative approaches, including the integration of onboard multiplexers to streamline data transmission [32]. However, the use of multiplexers introduces capacitively coupled artifacts in the recording data, constraining the sampling frequency of the channels and posing limitations on the comprehensive recording of wideband data. To address these concerns, Paradromics, Inc. has embarked on developing a scalable, high-throughput strategy for multichannel electrophysiology, emphasizing the critical role of a viable front-end interface that seamlessly integrates with the brain tissue while upholding the functionality of nearby neurons [62]. 
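To put the displaced-volume argument in rough numbers, the sketch below estimates the tissue volume displaced by N cylindrical microwires. The 20 µm diameter matches the wires described in the Microelectrodes section below, while the channel counts and the 1 mm insertion depth are illustrative assumptions rather than values reported in the paper.

```python
import math

# Back-of-the-envelope estimate of brain tissue displaced by a microwire array.
# Wire diameter (20 µm) matches the FµA described later; the channel counts
# and 1 mm insertion depth are illustrative assumptions.

def displaced_volume_mm3(n_wires, diameter_um=20.0, depth_mm=1.0):
    """Total volume (mm^3) displaced by n_wires cylindrical shanks."""
    radius_mm = (diameter_um / 1000.0) / 2.0
    return n_wires * math.pi * radius_mm**2 * depth_mm

for n in (16, 100, 1000):
    print(f"{n:>5} wires -> {displaced_volume_mm3(n):.4f} mm^3 displaced")
# 16 wires displace ~0.005 mm^3 while 1000 wires displace ~0.31 mm^3 at the
# same depth: the displaced volume grows linearly with channel count even
# when each individual shank stays subcellular in scale.
```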
Nevertheless, the success of this strategy is predicated on a front-end interface successfully implanting into the brain while maintaining the viability of nearby neurons. The current study seeks to evaluate the efficacy of Paradromics' front-end prototype arrays (fine microwire arrays, FµA) through an extensive, iterative assessment of various parameter spaces, aimed at achieving a high-channel-count microelectrode array. In particular, we focus on bypassing the 'bed-of-nails' effect, where the insertion force of the electrode array is distributed over a larger surface area, diminishing the ability of any single electrode in the array to penetrate the tissue and leading to dimpling of the brain. The evaluation process entails a comprehensive examination of multiple pitch sizes, tip profiles, and insertion speeds conducive to successful intracortical insertion. A central objective of this study is to meticulously characterize the tissue response to the microelectrode array, facilitating a high-throughput evaluation across diverse parameter configurations. This characterization involves the assessment of cell membrane rupture through the use of propidium iodide (PI) and the analysis of blood-brain barrier (BBB) integrity and leakage via immunoglobulin G (IgG) staining. Moreover, a comparative evaluation between the FµA and commercially available Blackrock arrays is conducted, shedding light on the specific array configurations and insertion parameters that minimize the impact on the BBB and surrounding tissue from the electrode sites.

Microelectrodes
Fine microwire electrode prototype arrays (FµA) (∼24 channels or ⩾60 channels), with a diameter of 20 µm for each electrode, were provided by Paradromics, Inc., Austin, TX, USA (paradromics.com) and tested to address optimal insertion parameters for larger bundle arrays. The proprietary prototype arrays were microwires with glass insulation as previously described [63]. FµA were provided with three different tip profiles: blunt, angled cut (30-degree angle), and electro-sharpened with a short pinpoint tip (∼8-degree angle) (figure 1(A)) [63]. The blunt and angled Au arrays had a 100 µm pitch (∼100 electrodes); the electro-sharpened W arrays had a 300 µm pitch (∼160 electrodes); and the control arrays (16 electrodes) had a 400 µm pitch. These FµA were tested (figures 1(A)-(D)) and compared to the Utah Electrode Array (UEA) (Blackrock Microsystems, Salt Lake City, UT) as the control array (figure 1(E)). Tungsten (W) was chosen to replace Au wires for the electro-sharpened arrays because of concerns that Au would be too soft to maintain the sharpened shape. In addition, to test the recordability of the individual wires of the FµA implants, single wire electrodes (20 µm in diameter) were implanted and electrophysiologically evaluated.
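As a rough sense of how the implanted volume scales with channel count for bundles of these 20 µm wires (one of the design pressures noted earlier), the sketch below computes the tissue volume displaced by idealized cylindrical shanks inserted 1 mm deep. It ignores tip taper, insulation differences between prototypes, and tissue compression, so it is a back-of-the-envelope illustration rather than a statement about the actual devices.

import math

WIRE_DIAMETER_UM = 20.0
INSERTION_DEPTH_MM = 1.0

def displaced_volume_mm3(n_wires: int) -> float:
    # Treat each microwire as a solid cylinder of the stated diameter.
    radius_mm = (WIRE_DIAMETER_UM / 1000.0) / 2.0
    return n_wires * math.pi * radius_mm ** 2 * INSERTION_DEPTH_MM

for n in (16, 24, 60, 100, 160):
    print(f"{n:>3} wires -> ~{displaced_volume_mm3(n):.4f} mm^3 displaced")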
Surgical procedures for probe insertion in mice for two-photon imaging
Seven mice were induced with 75 mg kg−1 of ketamine with 7 mg kg−1 xylazine via an intraperitoneal (IP) injection, and the skull was then shaved and prepped for surgery. The animal was maintained with ketamine (40 mg kg−1) so that it did not respond to a toe pinch and remained properly anesthetized throughout the experiment. The animal was placed in the stereotaxic frame on top of a heating pad with a rectal probe for maintaining proper body temperature (37 °C). An ocular ointment was placed on the eyes to prevent desiccation. Next, the area was prepped with three cycles of betadine scrub and alcohol washes. The skin was removed over the skull, the connective tissue was reflected back, and the bone was exposed. Vetbond (3M) was placed over the bone surface and two bone screws were placed over the frontal cortex areas. Dental cement was used to anchor the bone screws and to create an imaging well for holding the saline for the water-immersion objective lenses. Next, a 4 mm by 6 mm craniotomy was made over the visual cortex, centrally located 1.5 mm lateral and 1.5 mm rostral from lambda. The implant area was frequently flushed with sterile saline to prevent thermal heating due to drilling and to prevent the tissue from desiccating during the procedure.

FµA probes (∼24 channel FµA) were inserted at a 30-35 degree angle from the horizontal plane and parallel to midline down to layers II/III (∼300 µm down from the pia) in the mouse visual cortex. The arrays were inserted with an oil hydraulic drive (MO-81, Narishige, Japan) at a speed of 500 nm s−1. For animals that were imaged for cellular damage, propidium iodide (PI, 1 mg ml−1 in saline) was placed on the surface of the brain post insertion. After 20 min the surface of the brain was flushed 3 times with saline prior to imaging. Also, for the electro-sharpened insertions, sulforhodamine 101 (SR101) was injected (IP) for visualizing the vasculature around the electrode sites. Images were acquired at 1024 × 1024 pixels (407.5 × 407.5 µm) over ∼4.8 s using Prairie View software. Stacks were acquired along depth (ZT stacks) every 2 µm from the surface to 100 µm below the tip of the electrode. Images were taken from the individual Z stacks to calculate the number of damaged cells around the implanted electrode wires, and distances were measured using the 'Measure' function in ImageJ (National Institutes of Health). All procedures were approved by the Division of Laboratory Animal Resources and the Institutional Animal Care and Use Committee of the University of Pittsburgh and were in accordance with the standards set by the Animal Welfare Act and the National Institutes of Health Guide for the Care and Use of Laboratory Animals.
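The damaged-cell counts described above reduce to converting pixel coordinates in the 1024 × 1024 (407.5 × 407.5 µm) frames into micrometre distances from the electrode and binning them. The study did this with ImageJ's Measure function; the Python sketch below is only an illustration of the same bookkeeping, and the cell and electrode coordinates in it are hypothetical.

import math

UM_PER_PX = 407.5 / 1024.0  # ~0.398 um per pixel at this field of view

def distance_um(cell_px, electrode_px):
    # Euclidean distance between two pixel coordinates, converted to micrometres.
    dx = (cell_px[0] - electrode_px[0]) * UM_PER_PX
    dy = (cell_px[1] - electrode_px[1]) * UM_PER_PX
    return math.hypot(dx, dy)

electrode_px = (512, 512)                           # hypothetical electrode tip position
pi_cells_px = [(540, 520), (600, 580), (700, 512)]  # hypothetical PI+ cell positions

bin_edges_um = [0, 50, 100, 150]
counts = [0] * (len(bin_edges_um) - 1)
for cell in pi_cells_px:
    d = distance_um(cell, electrode_px)
    for i in range(len(counts)):
        if bin_edges_um[i] <= d < bin_edges_um[i + 1]:
            counts[i] += 1
print(dict(zip(["0-50 um", "50-100 um", "100-150 um"], counts)))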
Surgical procedures in rats for perpendicular electrode implantation
Acute experiments were performed on 19 male Sprague Dawley rats (Charles River; 400 ± 50 g) implanted with non-functional high-density arrays (⩾60 electrodes) in each hemisphere as previously described [53]. For 1-week chronic experiments, 8 animals were implanted. Prior to surgery, animals were given a 75 mg kg−1 ketamine and 7 mg kg−1 xylazine cocktail via an intraperitoneal (IP) injection with regular supplemental doses of 40 mg kg−1 ketamine IP. The anesthetic level was confirmed via the absence of a reaction to a toe pinch. The surgical site was shaved, the animal was placed in the stereotaxic frame (David Kopf Instruments, CA), and vitals were monitored throughout the procedure. Proper body temperature was maintained (37 °C) via a heating pad. Next, ocular ointment was placed on the eyes and the surgical area was prepped with betadine surgical scrub and then cleaned with 70% isopropyl alcohol. An incision was made over the scalp exposing the surgical area. The fascia was reflected, the periosteum was removed from the bone, and a thin layer of cyanoacrylate (Vetbond, 3M) was placed over the skull. A pinhole was made with a manual drill, and rongeurs were used to complete the craniotomy. The tissue was frequently irrigated to prevent desiccation and wash away bone debris. The craniotomy was placed over the V1 area, approximately −5 mm to −7 mm from bregma and −2 mm to −4 mm lateral from midline, and was about 5 × 5 mm in dimensions. After the durotomy, a micromanipulator was used to insert the electrodes into the brain, and physiological saline was used to keep the brain moist during the procedures. The arrays were inserted at a 90° angle into the brain, approximately 1 mm down, with either the slow, fast or pneumatic insertion method (as described in the following section). After the electrodes were inserted, propidium iodide (PI, 1 mg ml−1 in saline) was placed over the surface of the cortex for 20 min to label damaged or compromised cell membranes (n = 15 of the acutely implanted animals), a method used in previous studies [52, 69]. Following the incubation period, the PI was rinsed with three flushes of saline and the craniotomy was sealed with dental cement (Henry Schein, Flowable Composite 101-6773). Then, the animals were transcardially perfused with PBS and then with 4% paraformaldehyde for fixation [69-72].

Insertion methods in rats
Fast insertions were performed using a linear translator (V551-4B stage, Physik Instrumente). Probes were positioned at −22.162 mm from the brain surface to achieve a velocity of 80 mm s−1 at the time of contact with the brain surface, which was followed by a deceleration length of <0.75 mm (supplementary figure 1). Slow insertions were performed with a manual micromanipulator and inserted slowly by hand at approximately 1-2 mm s−1. Also, the pneumatic inserter (Blackrock Microsystems) was used to compare the tissue response to the various insertion methods. The pneumatic inserter uses a vacuum pump to reach very high velocities for inserting surface probe arrays [30, 72]. Proper calibration of the pneumatic inserter was performed to allow only a '1 hit' insertion, preventing a 'double-hit'. All electrodes were zeroed to the surface of the pia and then advanced 1 mm down into the brain. The control array was tested across slow and pneumatic insertions and used as a comparison to the other tested methods [30].
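The fast-insertion parameters above (a 22.162 mm run-up to reach 80 mm s−1 at the brain surface, then stopping within 0.75 mm) can be sanity-checked with simple kinematics. The sketch below assumes constant acceleration and deceleration, which is my simplification; the actual motion profile of the V551-4B stage is not specified here.

V_CONTACT_MM_S = 80.0   # target velocity at the brain surface
RUN_UP_MM = 22.162      # travel distance before contact
DECEL_LEN_MM = 0.75     # allowed stopping distance after contact

# v^2 = 2*a*d for constant acceleration from rest (and the mirror case for stopping).
accel_needed = V_CONTACT_MM_S ** 2 / (2 * RUN_UP_MM)
decel_needed = V_CONTACT_MM_S ** 2 / (2 * DECEL_LEN_MM)

print(f"~{accel_needed:.0f} mm/s^2 to reach {V_CONTACT_MM_S} mm/s over {RUN_UP_MM} mm")
print(f"~{decel_needed:.0f} mm/s^2 to stop within {DECEL_LEN_MM} mm after contact")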
Histological processing
After the animal was perfused, the skull was dissected and placed in 4% paraformaldehyde for up to ∼12 h. Next, the brain was dissected out and placed in 15% sucrose until the tissue sank (∼12-18 h). The brain was then transferred to a 30% sucrose solution until the tissue reached equilibrium. Once the tissue sank in the 30% sucrose (∼1-2 d), the tissue was blocked and frozen in a mold in a 2:1 mixture of 20% sucrose and optimal cutting temperature (OCT) compound on a shallow dish of 2-methylbutane sitting on dry ice. Finally, the tissue was sectioned serially (15 µm thick horizontal sections) and stored at −20 °C until staining.

Immunostaining & fluorescence imaging
Sections were re-hydrated within a week with 1X PBS for 15 min followed by the following stains: Hoechst 33342, Nissl (500/525) and donkey anti-rat IgG for analysis. Slides were coverslipped with Fluoromount-G and stored at 4 °C in the dark until imaged within the following week. Images were acquired with a Leica microscope and processed using ImageJ for analysis with the I.N.T.E.N.S.I.T.Y. Analyzer [23, 73]. The sections were analyzed with radial image binning around the probe site to calculate the changes in fluorescence intensity from the IgG. The background intensity for each image was calculated from the 4 corners of the image; pixels within 1 standard deviation (STD) of the mean corner intensity were treated as background, and the image threshold was set to 1 STD. The region of interest was selected and binned at 10 µm with a total of 32 bins, and the electrode diameter was 20 µm for each image. For the control array, the electrode diameter was set to 25 µm and binned at 10 µm with a total of 32 bins. The single wire electrodes that were inserted were 20 µm in diameter and the control (Microprobe single wire) was 81 µm in diameter. The single wires were also binned at 10 µm with 32 bins for image analysis.
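The radial IgG analysis described above (corner-based background estimate, then mean intensity in 10 µm annuli out to 32 bins around the probe site) can be sketched in a few lines of NumPy. The published analysis used the I.N.T.E.N.S.I.T.Y. Analyzer; the function below is an assumed re-implementation of the described steps, not that tool's actual code, and the pixel calibration and corner-patch size are placeholders.

import numpy as np

UM_PER_PX = 1.0        # placeholder calibration; set from the acquisition metadata
BIN_UM, N_BINS = 10, 32

def radial_igg_profile(img, centre_rc, corner_px=20):
    # Estimate background from the four image corners; threshold ~1 SD above their mean.
    corners = np.concatenate([
        img[:corner_px, :corner_px].ravel(), img[:corner_px, -corner_px:].ravel(),
        img[-corner_px:, :corner_px].ravel(), img[-corner_px:, -corner_px:].ravel()])
    background = corners.mean() + corners.std()

    rows, cols = np.indices(img.shape)
    r_um = np.hypot(rows - centre_rc[0], cols - centre_rc[1]) * UM_PER_PX

    profile = []
    for b in range(N_BINS):
        in_bin = (r_um >= b * BIN_UM) & (r_um < (b + 1) * BIN_UM)
        above = img[in_bin].astype(float) - background
        profile.append(above[above > 0].mean() if np.any(above > 0) else 0.0)
    return np.array(profile)

# Example with synthetic data standing in for an IgG-stained section.
section = np.random.poisson(50, (512, 512)).astype(float)
print(radial_igg_profile(section, centre_rc=(256, 256))[:5])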
Electrophysiology
Sprague-Dawley (SD) rats (n = 3) were induced with ketamine and xylazine and maintained with ketamine as listed under the surgical procedures. The electrophysiological recordings were taken inside a Faraday cage while single wire electrodes were implanted into the left visual cortex and a monitor was placed outside the cage in the right visual field, as described in previous studies [3, 26, 28, 30, 37, 41, 74-78]. The electrode impedance was 0.17 MΩ at 1 kHz and recordings were made at multiple depths (0, 520 and 1000 µm) in the brain. A Tucker-Davis Technologies system (Medusa preamp, Tucker-Davis Technologies, Alachua, FL) was used to record the cortical signals (high pass 300 Hz and low pass 5000 Hz) for 3 min trials. The visual stimulus was made up of 8 different directional translations of drifting gratings that moved across the monitor (1 s) while the neural response was recorded from a single wire (20 µm in diameter) electrode [75]. The 8 directional drifting grating stimulations were repeated 8 times, for a total of 64 stimulation trials. Once units were sorted as previously described [35], the firing rate was determined for each unit and for unsorted multi-unit threshold crossings. The units were binned into 50 ms peri-stimulus time histograms (PSTHs) spanning from 1 s before the visual stimulation to 1 s after the stimulation. The visual cortex was chosen for assessment, allowing the evaluation of functional connectivity of recorded single-units through visual stimulation of the contralateral retina under acute anesthetized conditions. This choice ensures confidence in distinguishing between functionally evoked neural activity and potential spontaneous dysfunction or epileptic neural activity due to ruptured axons from insertion strain that potentially disconnected neurons near the electrode from the broader neural network. This distinction would not be possible in M1 under acute anesthetized conditions.

Data analysis
Intensity values were extracted with the I.N.T.E.N.S.I.T.Y. Analyzer per image (⩾2 animals and ⩾3 electrode sites per condition) and reported as mean ± standard error as a function of distance from the insertion probe site. The intensity data were averaged over 150 µm away from the electrode site to generate bar graphs reporting the mean ± standard error for each insertion condition. Also, the number of damaged cells was counted manually by identifying neural and cell body staining (Hoechst/Nissl) that showed co-localization with PI staining. The ImageJ measurement tool was used to count and calculate the distance of each damaged neuron from the center of the electrode site. Measurements were taken 0-150 µm away from the center of the insertion site for the two-photon in vivo images. Cell count images were binned at 50 µm up to 150 µm away from the center of the electrode site. Cell counts were averaged per bin and reported as mean ± standard error as a function of distance from the electrode site.

Statistics
Bar plots were compared using an unequal variance Welch's t-test followed by a Bonferroni correction to correct for repeated measures. In addition, a two-way ANOVA with a post-hoc Tukey test was performed on line plots to examine interactions between the insertion parameters: tip profiles (blunt, angled, electro-sharpened, and control) and speeds (slow, fast, and pneumatic).
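A minimal sketch of the bar-plot statistics just described, using SciPy's unequal-variance (Welch's) t-test with a Bonferroni correction across the pairwise comparisons. The intensity values below are hypothetical placeholders, not data from the study.

import numpy as np
from scipy import stats

groups = {  # hypothetical normalized IgG intensities per insertion condition
    "slow":      np.array([1.8, 2.1, 1.9, 2.4, 2.0]),
    "fast":      np.array([1.5, 1.7, 1.6, 1.9]),
    "pneumatic": np.array([1.2, 1.4, 1.1, 1.3, 1.5]),
}

pairs = [("slow", "fast"), ("slow", "pneumatic"), ("fast", "pneumatic")]
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b], equal_var=False)  # Welch's t-test
    p_bonf = min(p * len(pairs), 1.0)                              # Bonferroni correction
    print(f"{a} vs {b}: t = {t:.2f}, corrected p = {p_bonf:.3f}")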
Results The study focused on assessing insertion strategies for implanting high-channel-count microelectrode arrays (FµA) with specific dimensions (⩾60 channels, ∼20 µm diameter, ∼100-300 µm pitch), employing three distinct tip profiles and three insertion speeds.Successful insertions were determined based on the identification of probe tracks in cortical layers housing neuronal cell bodies.Additionally, the investigation included the evaluation of cell membrane rupture and BBB leakage to quantify acute insertion-related injuries, facilitating the identification of optimal array and insertion parameters.Furthermore, the study aimed to verify the suitability of the final electrode design parameters for the array by conducting single-unit recordings from individual FµA wires, ensuring their compatibility with electrophysiological recording requirements. In vivo two-photon insertion in mice Implantation of microelectrodes is known to induce tissue compression [42] and compromise neuronal membranes, as evidenced by membraneimpermeable dyes such as PI [51].Given that multiple shanks of an array can contribute to overall tissue strain [2] in a geometrically dependent manner [42], we sought to understand how the tip geometry of high-density arrays influences acute neuronal injury.To address this, we compared various tip profiles, including blunt, angle polished (30-degree angle cut), and electro-sharpened ∼24 channel arrays during in vivo two-photon insertion analysis.In vivo images highlighted damaged cells (PI+) and Ca 2+ active neurons around the electrode tip profiles (figure 2(A)).The findings indicated a lower incidence of membrane injured cells around the electro-sharpened tips, while the blunt and angled tips caused significantly greater membrane damage around the insertion site (unequal variance t-test p < 0.05; figure 2(B)).Taken together, implanting electro-sharpened tipped arrays in vivo exhibited less cellular injury as indicated by PI labeling in the first 50 µm from the electrode tip compared to blunt or angle polished tip profiles (figure 2(B)). 
Tip profile-Angle, Electro-sharpened, and Control in rats Given the efficacy of electro-sharpened arrays in reducing neuronal injury during angled insertion in mice, we aimed to determine whether these results held true for larger channel arrays implanted perpendicularly into rats.To address this, we evaluated the same electrode tip profiles integrated into larger arrays (⩾60 channels) by measuring the disruption of the BBB leakage with IgG and cellular damage during insertion of neurons with Nissl + PI (figure 3(A)).All three profiles and the control Blackrock arrays were inserted at the same slow speed (1-2 mm s −1 ) to ensure an unbiased analysis of the electrode profiles.Neurons were labeled with Nissl, while PI was used to identify cells with compromised cellular membranes.The co-localization of Nissl and PI staining highlighted the areas of damaged cells or neurons, indicated by the yellow arrow (see figure 3(A)).Notably, the blunt tips did not insert and only caused dimpling of the tissue.The other large bundles were successfully inserted, although all arrays showed some tissue dimpling in the histological images.Careful attention was paid to avoid post-processing artifacts and prevent any tissue damage during array removal.The analysis indicated minimal IgG differences among the three profiles within 0-150 µm from the electrode site (p > 0.05; figure 3(B)), suggesting a comparable level of BBB injury.However, it should be noted that a substantial outlier in IgG leakage was induced by one of the large Blackrock shanks, attributed to its penetration of an invisible arteriole.Furthermore, the analysis did not reveal a significant difference between the electrode profiles in terms of the BBB response (p > 0.05; figure 3(C)).However, the angled tip exhibited a significantly higher number of PI+ labeled cells compared to the other tip profiles within the first 50 µm of the electrode site (p < 0.05).Nonetheless, no significant differences were observed between the bins and groups (p > 0.05; figure 3(D)).Taken together, the results highlight the role of tip profiles and insertion dynamics in influencing the number of damaged cells during electrode implantation, demonstrating the potential benefits of tailored insertion strategies in minimizing acute tissue damage and enhancing the efficacy of high-density array integration. 
Insertion speed-Slow vs fast vs pneumatic in rats Given that tip profiles of high-density arrays did influence dimpling, insertion, and neuronal membrane injury, we further investigated the impact of insertion speed on BBB injury and neuronal membrane damage.Prior research focused mainly on single shank microelectrodes, making it important to examine this relationship in the context of array insertions [42,47].Three insertion speeds, namely slow (1-2 mm s −1 ), fast (80 mm s −1 ), and pneumatic (∼200 mm sec −1 ), were evaluated for their effects on BBB leakage (IgG) and neuronal membrane damage (Nissl and PI) (figure 4(A)).Co-localization of Nissl and PI staining revealed the regions of cellular damage or neuronal impairment (figure 4(A)).The angled tip profile arrays (30-degree bevel cut tip) were used for comparing the various insertion speeds as these arrays were more abundantly available due to the ease of fabrication compared to the proprietary electro-sharpened arrays.Slow insertion speed demonstrated the greatest amount of BBB leakage compared to the fast and pneumatic methods.In contrast, IgG intensity values within 0-150 µm from the electrode sites showed the lowest values with the pneumatic insertion, which demonstrated notably higher insertion velocity.While the study did not find a significant difference between slow and fast insertion methods (p > 0.5), slower speeds were found to significantly impact the BBB compared to pneumatic insertions (p < 0.05; figure 4(B)).Moreover, the IgG intensity levels varied significantly among the groups (p < 0.05) but not between the bins and groups (p > 0.05; figure 4(C)).Interestingly, the pneumatic insertion method elicited a comparatively lower BBB response, but the area surrounding the implant site exhibited a greater number of PI-stained cells (p < 0.05; figure 4(D)).Taken together, our findings underscore the complex interplay between insertion dynamics, tip profiles, and insertion speeds, emphasizing the necessity for meticulous consideration of these parameters to minimize tissue damage. 
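The group-versus-distance comparisons reported here (and in the chronic analysis that follows) rest on a two-way ANOVA over insertion condition and distance bin with Tukey post-hoc tests. A hedged sketch of that workflow with the statsmodels formula API is shown below; the data frame is synthetic and far smaller than the real data set.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic example: normalized IgG intensity by insertion speed and 50 um distance bin.
df = pd.DataFrame({
    "igg":   [2.1, 1.9, 1.4, 1.2, 1.0, 0.9, 1.6, 1.5, 1.1, 1.0, 0.8, 0.7],
    "speed": ["slow"] * 6 + ["pneumatic"] * 6,
    "bin":   ["0-50", "0-50", "50-100", "50-100", "100-150", "100-150"] * 2,
})

model = ols("igg ~ C(speed) * C(bin)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))            # main effects and interaction

print(pairwise_tukeyhsd(df["igg"], df["speed"]))  # post-hoc comparison across speeds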
Chronic response to large bundle insertions-Slow vs fast & acute response in rats
Given that slow and fast insertions led to the least amount of neuronal membrane injury, we proceeded to investigate whether electro-sharpened array insertion speeds led to increased BBB leakage one week post-implantation. To assess the tissue response at this stage, slow and fast insertions were performed in animals that underwent recovery after surgery (n = 3). Neuronal staining with Nissl and IgG labeling for BBB leakage was employed (figure 5(A)). We compared chronic fast insertions to acute fast insertions to analyze the temporal changes in BBB integrity over the course of a week. Analysis revealed a significant disparity in IgG levels between the acute and chronic fast insertions. Further investigations were conducted by implanting animals for one week using various insertion speeds (slow, fast, and control). The acute control involved the slow insertion Utah array. IgG intensity values were measured within 0-150 µm of the insertion site, indicating that both chronic slow and fast methods resulted in lower BBB leakage compared to the control after one week of implantation (p < 0.05) (figure 5(D)). The overall results for normalized IgG intensity values suggested that the high-density arrays with both slow and fast insertion methods had a reduced impact on the BBB compared to the control Blackrock arrays. Two-way ANOVA also identified significant effects of insertion group (p < 0.01), distance (p < 0.01), and the interaction between insertion group and distance (p < 0.01) on IgG leakage. Tukey's post-hoc tests indicated a significant increase in IgG for acute animals at 50-100 µm compared to all other groups (figure 5(E)). Taken together, the results from the chronic insertion experiments underscore the nuanced interplay between insertion speed and chronic BBB leakage, emphasizing the critical role of controlled insertion strategies in mitigating chronic BBB damage from high-density electrode implantation.

Single wire insertions and recordings in rats
Having demonstrated successful implantation of the FµA into the brain, we then investigated whether the electro-sharpening process affected the electrodes' ability to record neuronal action potentials in layer 4 of the primary visual cortex of rats. Acute single wire recordings were conducted on 20 µm wires, confirming their capability to capture characteristic neural activity within the V1 cortex and respond to visual stimuli, as evidenced by robust neuronal single-unit waveforms (figures 6(A) and (B)) and responsiveness to drifting grating stimuli (figure 6(C)). Additionally, impedance values before and after the experiment remained stable and within the optimal range for acute neural recording settings (∼0.17 MΩ at 1 kHz).
Comparative analysis of recording wires between the experimental 20 µm FµA single-wire and the control 81 µm Microprobe wire revealed that the larger diameter wire exhibited a greater number of damaged cells at distances of 50-100 µm from the insertion site (unequal variance t-test; p < 0.05) (figure 6(D)).This suggests that the larger diameter wires cause greater compression related damage to distant cells, while nearby the electrode (0-50 µm) the damage is saturated and/or has large variability.Histological images, stained with Nissl, Hoechst, PI, and Immunoglobulin G, highlighted the areas of damaged cells or neurons (figure 6(E)), confirming the reduced cellular damage associated with the 20 µm FµA single-wire implantation.Taken together, these results emphasize the effective recording capabilities and reduced tissue impact of the electro-sharpened 20 µm FµA wires, positioning it as a promising candidate for precise and reliable acute neural recordings. Discussion The manufacturing and performance of Paradromics' arrays have been previously reported elsewhere [62].Our primary objective was to evaluate a range of array design and insertion parameters to identify optimal settings for achieving improved bio-integration of high-channel-count microelectrode arrays, with a particular focus on bypassing the bed-of-nails issue.Functional performances of implants require careful consideration of design space parameters and the viability of the underlying tissue [2,21,22].The study extensively assessed different tip profiles and insertion speeds, shedding light on the interplay between these variables and their impact on array insertability and adjacent tissue damage, highlighting the critical role of the BBB in assessing the success of these implantations. Previous research has demonstrated a strong relationship between BBB leakage and recording performance [79, 80], leading us to assess the extent of damage through acute quantification of BBB leakage (IgG) and cell membrane rupture (PI).During the device insertion process, the probe could potentially tear through neurites or strain the surrounding tissue, resulting in the rupture of nearby cell membranes [51].Although PI selectively labels cells with compromised membranes, it remains unclear whether these labeled cells are capable of recovery and long-term survival, emphasizing the need for additional studies to ascertain the fate of these PI-labeled neurons.Nonetheless, the presence of PI-negative, GCaMP6-positive cells near the electrodes suggests the acute presence of viable neurons surrounding the electrode tips. 
In the in vivo two-photon experiments, we implanted multi-shank (∼24 channel) arrays into the cortex at a 30-degree angle. Due to the constraints posed by the skull, objective, probe, and tissue scattering, only the top shanks were observable while the deeper shanks were obscured, although all shanks successfully inserted into the brain. However, because the arrays were not slanted, when implanted at a 30-degree angle the deeper shanks inserted first. While this method provided a rapid screening for optimal array tips and spacing, it does not fully represent the insertion mechanics associated with large bed-of-needle arrays, even though it enabled us to visualize viable GCaMP-active neurons during the insertions. Consequently, perpendicular acute insertions were carried out with larger arrays. Although blunt-tipped arrays could insert at an angle because the shanks inserted through the dural and pial surface row by row [81], blunt-tipped arrays cause a 'bed-of-nails' effect resulting in brain tissue dimpling. Notably, the substantial strain on the tissue was evident in the two-photon data, revealing a large number of PI-labeled cells with blunt-tipped arrays when compared to angled or electro-sharpened tips. Electro-sharpened microwire bundles effectively alleviated the bed-of-needles effect compared to the other tip profiles in this study, providing critical insights into the benefits of such designs in reducing tissue strain during insertion.

Although the production of electro-sharpened microwires entails greater overhead costs compared to blunt or angled microwires, the study's findings underscore their superior performance in mitigating tissue strain during insertion. Highlighting the significant role of neurotechnology companies, it is essential to emphasize the dual responsibility of ensuring the safety of implanted subjects 'at all costs' and sustaining the viability of the company. Ethical considerations arise due to the harm inflicted on patients left with obsolete implants when companies fail [82]. In this study, however, the results clearly indicate that electro-sharpened microwires are necessary both for inserting into the brain tissue to enable neural recordings and for significantly reducing dimpling-related tissue injury. Subsequent work by Paradromics has therefore prioritized electro-sharpened microwires in their arrays [62].

Contrary to expectations, the angled tip caused the greatest number of cells to experience membrane damage, whereas the blunt tip caused greater BBB damage. Our data demonstrate that electro-sharpened microwire bundles alleviate the 'bed-of-nails' effect more effectively than the other tip profiles studied. The evaluation of insertion speeds aimed to facilitate the insertion of angle-tipped arrays and avoid the need for electro-sharpening the tips. Intriguingly, pneumatic insertion led to reduced IgG leakage but resulted in a significantly higher number of cells showing damage, as indicated by PI labeling. These findings suggest that the BBB/vascular wall may exhibit greater structural stability compared to individual cells around the implant site at pneumatic insertion speeds. This aligns with previous observations that it takes several days for single-units to be detectable on pneumatically inserted Blackrock arrays in non-human primates [29]. Consequently, the study emphasizes that fast insertion speeds represent a compromise that minimizes acute BBB and cell membrane rupture.
The process of electro-sharpening the microwire tip leads to a significantly thinner recording interface region due to tapering, which could change the surface area and geometry of the recording sites.However, our results demonstrate that individual electro-sharpened microwires can effectively record visually evoked single-unit activity with high fidelity in the V1 region of rats.Thus, the alteration in geometry not only facilitated easier and reliable implantation of high-density arrays but also effectively maintained single-unit signal fidelity.Our results confirm that electro-sharpened arrays with fast insertion speeds can successfully insert into the brain.Furthermore, the study underscores the critical role that insertion speeds play in determining the extent to which the BBB is compromised during implantation.Interestingly, the findings revealed no positive correlation between BBB leakage and initial cell membrane damage when varying insertion speed.However, it remains imperative to evaluate the longterm impacts of cell membrane damage.Nonetheless, the study presents promising parameters that can be leveraged in the development of a high-density, high-channel-count array.The evaluation of scalable electro-sharpened arrays with higher density in future studies is warranted. Limitations of this study Several limitations are acknowledged in this study.The evaluation focused on the first 5 generations of prototypes of the Paradromics high-density arrays, resulting in a limited number of device replicates and incomplete exploration of the design space.Despite these constraints, the study yields statistically significant outcomes, offering insights into the impact of tip geometry, insertion speed, and microwire diameter on the cellular-level spatial scale of neuronal membrane integrity, BBB leakage, and successful cortical tissue insertion (figures 2-6).Future investigations should consider quantifying insertion force using ultra-sensitive sensors [43,46], especially one that is compatible with a pneumatic inserter. Another limitation associated with this study was its use of mice and rats.The choice of mice was essential for visualizing how electrode tip geometry impacts neuronal cell membrane injury under a twophoton microscope, providing novel insights into the spatial scale of cellular injury.However, the thinner dura and pia in mice, compared to clinically relevant thickness, result in less required insertion force for penetration, while the in vivo two-photon imaging necessitates a 30-degree angled insertion, increasing brain surface contact with array tips and requiring greater insertion force. In contrast to mice, rat dura has a comparable thickness to human pia [83][84][85][86].While the rat model efficiently excludes prototype designs that are unable to perpendicularly penetrate the brain, the assessment of higher channel count arrays necessitates evaluation in large animal models.Paradromics, building on insights gained from the mice and rats reported in this study, has advanced this research by developing a 30 000-channel array.Successfully implanted in a sheep model, this array demonstrated singleunit recordings on hundreds of channels [62].Despite these advancements, the present study remains valuable for its contribution to fundamental basic science knowledge regarding the interplay between array design, insertion conditions, and the cellular spatial scale of tissue injury. 
Finally, the rapid prototypes were insulated with relatively thick glass [63]. The conventional hypothesis, which posits that high modulus materials like glass, ceramics, and carbon perpetuate micromotion in low modulus brain tissue, driving the foreign body response and glial scarring [87], aligns with studies on flexible tethering to the skull contributing to glial activation [88, 89]. However, extensive documentation highlighting the significant variability in glial scarring around stiff devices suggests the involvement of additional contributors to this undesirable tissue reaction [25, 28, 41, 90-92]. For example, Rousche and Normann showed that two adjacent shanks with the same stiff material properties and insertion condition can generate highly disparate degrees of foreign body response [90]. Similarly, Williams et al showed that identical arrays can generate different tissue responses and impedance spectra as well as recording performance [91, 92]. Likewise, our prior work demonstrated large variability in tissue response and recording performance with identical silicon arrays [25, 28, 93]. These results suggest there are other driving contributors [25, 50, 93]. Considerations should also extend to the dielectric stability of low elastic modulus materials [2] and their contribution to performance, as well as the fact that decreasing the cross-sectional area of implants can increase the overall flexibility of the device [35]. Similarly, it is crucial to note that not all glass and ceramics exhibit equal brittleness or flexibility [62, 94], and the materials used in the current Paradromics arrays differ from the rapid prototype devices used in this study [62]. Lastly, limited basic science knowledge exists on the collective impact of a large number of implanted shanks [62] on long-term tissue response and device performance, necessitating careful evaluation due to the intricate interplay of design parameters [2].

Conclusion
This comprehensive study systematically evaluated various array parameters, including tip profiles and insertion speeds, to optimize the design of a high-channel-count microelectrode array for neural recordings. By carefully assessing the impact on tissue viability and the BBB, we were able to identify key parameters that contribute to successful implantation and recording performance. The findings underscore the critical role of tip geometry and insertion speed in minimizing acute tissue damage, with electro-sharpened arrays and controlled insertion speeds demonstrating superior performance. These results lay a strong foundation for the development of future high-density arrays, promising significant advancements in both fundamental neuroscience research and the practical application of neural interface technology. Further investigations into long-term tissue responses and the scalability of these optimized parameters are essential to ensure the continued progress of this promising technology in the field of neuroprosthetics and cognitive neuroscience.

Figure 2.
(A) Two-photon in vivo microscopy was performed for the different tip profiles (blunt, angled and electro-sharpened). Green: neurons (blunt/angled); microglia (electro-sharpened). Red: cells stained with PI, indicating damaged cells; sulforhodamine 101 (SR101) was used to localize blood vessels. The blue dotted line indicates the location of the electrode shanks. Electrodes were inserted at a ∼30-degree angle so that the electrode could be viewed during insertion. The results show that there are fewer injured cells around the electro-sharpened arrays compared to the angled and blunt tip profiles. The white scale bar = 100 µm. (B) The bar plot shows the distribution of damaged cells from the electrode shank. The blunt and angled tip profiles show the greatest amount of PI-stained cells. The electro-sharpened insertion had the least amount of damage to the cells around the implant sites. * indicates p < 0.05 with an unequal variance t-test. (N: blunt = 2, angled = 2, electro-sharpened = 3).

Figure 3. (A) Fluorescence staining for Hoechst (blue); Nissl (green); PI (red); and IgG (white) shows the tissue response to the different electrode tip profiles (angled, electro-sharpened and control). The control array (Blackrock array) and all tip profiles were inserted at the same slow speed. The yellow arrows identify example areas of co-localization of the Nissl and propidium iodide (PI) stained cells and indicate areas of damaged cells or neurons. White scale bar = 100 µm. (B) Normalized IgG intensity within 0-150 µm from the probe hole. (C) Normalized IgG intensity within 10 µm bins up to 300 µm away from the probe hole. (D) The number of damaged cells (PI+) labeled was measured up to 350 µm away from the probe hole and averaged within 50 µm bins. * indicates p < 0.05 with an unequal variance t-test. (N: angled = 4, electro-sharpened = 5, control = 2. Nsites: angled = 4, electro-sharpened = 13, control = 11).

Figure 4. (A) Fluorescence staining for Hoechst (blue); Nissl (green); PI (red); and IgG (white) displays the tissue response to the different insertion speeds (slow, fast and pneumatic). The control speed was the pneumatic insertion method, and all arrays had the angled tip profile across the different insertion speeds. The yellow arrows identify example areas of co-localization of the Nissl and PI stained cells and indicate areas of damaged cells or neurons around the insertion site. White scale bar = 100 µm. (B) Normalized IgG intensity within 0-150 µm from the probe hole for the different insertion speeds. (Nelectrode: slow = 7, fast = 3, pneumatic = 6). (C) Normalized IgG intensity within 10 µm bins up to 300 µm away from the probe hole. (D) The number of damaged cells (PI+) measured up to 300 µm away from the probe hole and averaged within 50 µm bins. * indicates p < 0.05 with an unequal variance t-test. (Nrats: slow = 4, fast = 3, pneumatic = 2).
Figure 5. To test the chronic response that the high-density arrays have on the blood-brain barrier (BBB), animals were implanted (N = 3 rats/grp) for 1 week. (A) Fluorescence staining for Hoechst (blue); Nissl (green); and IgG (white) displays the tissue response to the chronic insertion speeds (slow vs fast). The control speed was the acute slow insertion method. White scale bar = 100 µm. (B) Normalized IgG intensity within 0-150 µm from the probe hole for the acute (N = 10) and chronic (N = 4) fast insertion speeds. (C) Normalized IgG intensity within 10 µm bins up to 300 µm away from the probe hole for the acute and chronic fast insertion speeds. (D) Normalized IgG intensity within 0-150 µm from the probe hole for the chronic implants with varied insertion speeds (slow (N = 8), fast (N = 5), and control (N = 7)). (E) Normalized IgG intensity within 10 µm bins up to 300 µm away from the probe hole for the acute and chronic fast insertion speeds. * indicates p < 0.05 and ** indicates p < 0.01 with an unequal variance t-test with Bonferroni correction for repeated measures.

Figure 6. (A) Single wire (20 µm) recordings from the HDA were made 520 µm down into the visual cortex (V1 area of the cortex). The electrode recorded cortical activity while a visual stimulus was played. The raw spike trace for an isolated neuron firing (green) and multi-unit activity (blue) was recorded. (B) Mean spike waveforms for the blue and green units. (C) Raw spike stream filtered between 500 and 3000 Hz. (D) Average peri-stimulus time histogram of 64 visual stimulations (On) to the contralateral eye showing evoked activity at stimulation onset at 0 s and an offset response at 1 s. Green and blue indicate spikes sorted from (A)-(B) and black indicates unsorted multi-unit activity. (E) The number of damaged cells (PI+) measured up to 100 µm away from the probe hole and averaged within 50 µm bins. (F) Fluorescence staining for Hoechst (blue); Nissl (green); PI (red); and IgG (white) displays the tissue response to the different single electrode diameters (81 µm vs 20 µm; N = 2 rats/electrode/bin). The yellow arrows identify areas of co-localization of the Nissl and PI stained cells and indicate areas of damaged cells or neurons around the insertion site. White scale bar = 100 µm. * indicates p < 0.05 with an unequal variance t-test.

[69] Roth T L, Nayak D, Atanasijevic T, Koretsky A P, Latour L L and McGavern D B 2014 Transcranial amelioration of inflammation and cell death after brain injury Nature 505 223-8
[70] Kolarcik C L, Catt K, Rost E, Albrecht I N, Bourbeau D, Du Z, Kozai T D Y, Luo X, Weber D J and Tracy Cui X 2015 Evaluation of poly(3,4-ethylenedioxythiophene)/carbon nanotube neural electrode coatings for stimulation in the dorsal root ganglion J. Neural Eng. 12 016008
[71] Biran R, Martin D C and Tresco P A 2005 Neuronal cell loss accompanies the brain tissue response to chronically implanted silicon microelectrode arrays Exp. Neurol. 195 115-26
[72] Rousche P J and Normann R A 1992 A method for pneumatically inserting an array of penetrating electrodes into cortical tissue Ann. Biomed. Eng. 20 413-22
[73] Du Z J, Kolarcik C L, Kozai T D Y, Luebben S D, Sapp S A, Zheng X S, Nabity J A and Cui X T 2017 Ultrasoft microwire neural electrodes improve chronic tissue integration Acta Biomater. 53 46-58
[74] Kolarcik C L et al 2015 Elastomeric and soft conducting microwires for implantable neural interfaces Soft Matter 11 4847-61
[75] Kozai T D Y, Du Z, Gugel Z V, Smith M A, Chase S M, Bodily L M, Caparosa E M, Friedlander R M and Cui X T 2015 Comprehensive chronic laminar single-unit, multi-unit, and local field potential recording performance with planar single shank electrode arrays J. Neurosci. Methods 242 15-40
[76] Meliza C D and Dan Y 2006 Receptive-field modification in rat visual cortex induced by paired visual stimulation and single-cell spiking Neuron 49 183-9
[77] Chen K, Cambi F and Kozai T D Y 2023 Pro-myelinating Clemastine administration improves recording performance of chronically implanted microelectrodes and nearby neuronal health Biomaterials 301 122210
[78] Wellman S M, Guzman K, Stieger K C, Brink L E, Sridhar S, Dubaniewicz M T, Li L, Cambi F and Kozai T D Y 2020 Cuprizone-induced oligodendrocyte loss and demyelination impairs recording performance of chronically implanted neural interfaces Biomaterials 239 119842
[79] Nolta N F, Christensen M B, Crane P D, Skousen J L and Tresco P A 2015 BBB leakage, astrogliosis, and tissue loss correlate with silicon microelectrode array recording performance Biomaterials 53 753-62
[80] Saxena T, Karumbaiah L, Gaupp E A, Patkar R, Patil K, Betancur M, Stanley G B and Bellamkonda R V 2013 The impact of chronic blood-brain barrier breach on intracortical electrode function Biomaterials 34 4703-13
[81] Escamilla-Mackert T, Langhals N B, Kozai T D Y and Kipke D R 2009 Insertion of a three dimensional silicon microelectrode assembly through a thick meningeal membrane Conf. Proc. IEEE Engineering in Medicine and Biology Society vol 2009 pp 1616-8
[82] Strickland E and Harris M 2022 What happens when a bionic body part becomes obsolete?: blind people with Second Sight's retinal implants found out IEEE Spectr. 59 24-31
[83] Reina M A, Casasola O D L, Villanueva M C, Lopez A, Maches F and De Andres J A 2004 Ultrastructural findings in human spinal pia mater in relation to subarachnoid anesthesia Anesth. Analg. 98 1479-85
[84] Sparrey C J, Manley G T and Keaveny T M 2009 Effects of white, grey, and pia mater properties on tissue level stresses and strains in the compressed spinal cord J. Neurotrauma 26 585-95
[85] Maikos J T, Elias R A and Shreiber D I 2008 Mechanical properties of dura mater from the rat brain and spinal cord J. Neurotrauma 25 38-51
[86] Wittek A, Kikinis R, Warfield S K and Miller K 2005 Brain shift computation using a fully nonlinear biomechanical model Medical Image Computing and Computer-Assisted Intervention-MICCAI 2005: 8th Int. Conf. (Palm Springs, CA, USA, 26-29 October 2005) (Springer) pp 583-90
[87] Subbaroyan J, Martin D C and Kipke D R 2005 A finite-element model of the mechanical effects of implantable microelectrodes in the cerebral cortex J. Neural Eng. 2 103
[88] Biran R, Martin D C and Tresco P A 2007 The brain tissue response to implanted silicon microelectrode arrays is increased when the device is tethered to the skull J. Biomed. Mater. Res. A 82 169-78
[89] Markwardt N T, Stokol J and Rennaker II R L 2013 Sub-meninges implantation reduces immune response to neural implants J. Neurosci. Methods 214 119-25
[90] Rousche P J and Normann R A 1998 Chronic recording capability of the Utah Intracortical Electrode Array in cat sensory cortex J. Neurosci. Methods 82 1-15
[91] Williams J C, Hippensteel J A, Dilgen J, Shain W and Kipke D R 2007 Complex impedance spectroscopy for monitoring tissue responses to inserted neural implants J. Neural Eng. 4 410-23
[92] Williams J C, Rennaker R L and Kipke D R 1999 Long-term neural recording characteristics of wire microelectrode arrays implanted in cerebral cortex Brain Res. Brain Res. Protocols 4 303-13
[93] Kozai T D Y, Jaquins-Gerstl A, Vazquez A L, Michael A C and Cui X T 2015 Brain tissue responses to neural implants impact signal sensitivity and intervention strategies ACS Chem. Neurosci. 6 48-67
[94] Kennedy P 2012 Reliable neural interface: the first quarter century of the Neurotrophic Electrode 2012 Annual Int. Conf. IEEE Engineering in Medicine and Biology Society (IEEE) pp 3332-5
… dynamics of astrocyte reactivity following neural electrode implantation Biomaterials 289 121784
[61] Chen K, Wellman S M, Yaxiaer Y, Eles J R and Kozai T D Y 2021 In vivo spatiotemporal patterns of oligodendrocyte and myelin damage at the neural electrode interface Biomaterials 268 120526
[62] Sahasrabuddhe K et al 2021 The Argo: a high channel count recording system for neural recording in vivo J. Neural Eng.
2023-11-25T14:12:24.739Z
2024-02-20T00:00:00.000
{ "year": 2024, "sha1": "acf1a26ea9a578194ea8bcd74e6e12f6f8318d54", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1741-2552/ad36e1/pdf", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "0b84bf8700ddbc2a2a26b7ecc51c85f1e78ff947", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
92564064
pes2o/s2orc
v3-fos-license
Bioactive compounds and antibacterial activity of endophytic fungi isolated from Black Mangrove (Avicennia africana) leaves

The antibacterial potential of fungal endophytes from black mangrove against pathogenic marine bacteria was evaluated and the bioactive compounds identified and quantified. Black mangrove (Avicennia africana) healthy leaves were obtained from a mangrove forest in Eagle Island, Port Harcourt, Nigeria. The fungal endophytes were cultured on acidified potato dextrose agar plates for 5 days at 28 °C. The isolated fungal endophytes were identified based on microscopic and colonial morphologies. Different concentrations (0, 20, 40, 60, 80 and 100 mg/ml) of ethyl acetate extract of the fungal isolates were screened against pathogenic marine bacteria (Salmonella spp., Staphylococcus aureus and Shigella spp.) via an agar well diffusion assay. The most active isolate was identified using a molecular method. Gas chromatography–mass spectrometry was employed in the identification and quantification of its bioactive secondary products. Fungi of the genera Aspergillus, Penicillium, Fusarium, Colletotrichum, Phomopsis, Epicoccum and Rhizopus were isolated. Two bioactive compounds were identified in the ethyl acetate extract of the Fusarium sp., which was molecularly identified as Fusarium phyllophilum KU350622.1. Dibutyl phthalate (C16H22O4) is the major compound at 55.926% peak area. The results showed that fungal endophytes from black mangrove exhibited antibacterial action against pathogenic marine bacteria. The bioactive secondary products identified have vast potentials for use in agriculture and industries.

Introduction
Mangroves are extremely useful ecosystems with different essential economic and ecological functions (Bandaranayake, 2002). Black Mangrove (Avicennia africana), locally known in Nigeria as Ogbun or Ofun (Odugbemi and Akinsulire, 2008), is a grey-barked small tree or shrub. The leaves have glands where salt is excreted. Yellow-centred white flowers appear during the dry season at the axillary stalk (Steentoft, 1988). This ecological unit is known for intermittent tidal flooding, which causes ecological factors like nutrient availability and salinity to be greatly inconsistent, with definite environmental uniqueness (Holguin et al., 2006). The mangrove ecosystem, therefore, is a discrete environment inhabiting various groups of microbes (Thatol et al., 2013). Among such microbes of unique significance are marine fungi, which inhabit diverse marine ecosystems and are capable of producing a number of new bioactive compounds with extensive biological activities (Amira et al., 2009). A major group of fungi found in the marine habitat is mangrove-endophytic fungi, which are found in most species of plants (Hyde, 2008; Rodriguez et al., 2009). They colonize inner tissues of plants with no obvious harmful effects (Darshan and Shishupala, 2014). They flourish under severe environments that cause them to develop unique metabolic pathways and generate distinctive chemicals that make it possible for them to bear such tense environmental settings. A number of these chemicals are long-established to be of immense potential as a source of new agents for diverse applications in industries (Eldeen and Effendy, 2013). There is need for regular exploration for new antimicrobial compounds as a result of constant antibiotic resistance by pathogenic microbes (Pucci and Bush, 2013). Through isolation of endophytic fungi, novel species that are regarded as an exceptional source of bioactive compounds are being revealed. The antibacterial potential of indigenous red mangrove-leaf fungal endophytes and their bioactive compounds has been documented (Ariole and Akinduyite, 2016). In the present study, endophytic fungi from the leaf of indigenous Avicennia africana (black mangrove) were evaluated for their antibacterial potential. The bioactive compounds of the most active endophyte were identified and quantified.

Isolation of endophytic fungi: The method described by Suryanarayanan et al. (2003), with some modifications, was employed for the endophytic fungi isolation. The leaves were carefully washed with running tap water, cut with sterile precautions into small fragments (0.5-1 cm) and surface sterilized with 1% sodium hypochlorite for 1 minute and then 75% ethanol for 30 seconds. Sodium hypochlorite and alcohol traces were removed by rinsing with sterile distilled water. The leaf fragments were dried on sterile blotting paper. The cut surfaces of the leaf tissue were placed aseptically on acidified potato dextrose agar plates. The plates were incubated at 28 °C for 7 days. Tips of fungi growing out of the leaf tissue were selected and cultivated in pure culture on potato dextrose agar. They were identified on the basis of their cultural and microscopic characteristics (Barnett and Hunter, 1998). The pure cultures of endophytic fungal strains were maintained on potato dextrose agar slants at 4 °C.

Bioactive compounds extraction: A mycelial plug (1 cm diameter) of a 7 day-old culture of each fungal isolate was inoculated into a 1 L Erlenmeyer flask containing 300 ml sterile potato dextrose broth. The flasks were incubated at 28 °C for 21 days under static conditions. The liquid culture of each flask was filtered using sterile cheesecloth. Then, ethyl acetate (50 ml) was added to each filtrate and centrifuged for 10 min at 1500 rpm.

… mg/ml) and 0.1 ml sterile distilled water (0 mg/ml extract) served as positive control and negative control respectively. The plates were incubated for 24-48 hr at 37 °C. The clearance zones around the wells were measured in millimetres and used as an indicator of antibacterial activity.

Molecular identification of endophytic fungi: DNA extraction was performed using Norgen's Yeast/Fungi Genomic DNA Isolation Kit. Genomic DNA was proficiently extracted from the cells according to the method employed by Zhang et al. (2010). Spin column chromatography was used for purification. The purified genomic DNA was completely digestible with restriction enzymes. DNA quantification was carried out using a DNA standard and the absorbance measured at 450 nm. Polymerase chain reaction (PCR) master mix from Norgen Biotek, Canada was employed for PCR analysis, which was analysed on agarose gel. A DNA ladder (100 bp) was employed as the DNA molecular weight marker. Electrophoresis was carried out at 80 V for 1½ h. The gel was stained with ethidium bromide and viewed using UV light. The sequence was observed by the use of ChromasLite for base calling. Then, BioEdit was employed for sequence editing. A Basic Local Alignment Search Tool (BLAST) search was performed using the National Centre for Biotechnology Information (NCBI) database. Related sequences were aligned with ClustalW after downloading. The phylogenetic tree was constructed with Molecular Evolutionary Genetics Analysis (MEGA) version 6 (Tamura et al., 2013).

Gas chromatography-mass spectrometry (GC-MS) analysis: An Agilent 7890A-5975C GC-MS system (Tao et al., 2011) was employed. Exactly 0.5 µl of the most active fungal extract was injected into the GC-MS system with an injector temperature of 250 °C. Nitrogen was used as the carrier gas during the compounds' separation, which was carried out on a 60 m HP-INNOWAX capillary column (0.25 mm). The flow of the carrier gas was 1 ml/min with a split ratio of 10:1. The temperature was ramped to 280 °C at 5 °C/min and held for 9 min, and an ionization energy of 70 eV was employed. The mass spectra of the unidentified bioactive compounds were compared with mass spectra of identified compounds in the National Institute of Standards and Technology (NIST) database. The molecular weight, name and peak area (%) of the bioactive compounds were determined.

Results
Fungal isolation: A total of seven (7) endophytic fungi were obtained from black mangrove (Avicennia africana) leaves. The fungal genera isolated were: Penicillium, Fusarium, Colletotrichum, Rhizopus, Aspergillus, Epicoccum and Phomopsis.

Antibacterial activity: The result of the antibacterial assay presented in Table 1 revealed that all the endophytic fungi extracts at concentrations of 80-100 mg/ml showed antibacterial activity against at least one of the tested pathogens. However, the different concentrations (20-100 mg/ml) of ethyl acetate extracts of WB2 (Fusarium sp.) were active against all the tested pathogens. The zones of inhibition obtained were between 8 and 17.67 mm. Furthermore, the gram negative bacteria (Salmonella sp. and Shigella sp.) were more resistant to the fungal extracts than Staphylococcus aureus (a gram positive bacterium). Table 1: Antimicrobial activity of ethyl acetate extract of black mangrove leaf-endophytic fungi against pathogenic bacteria.

Molecular characteristics of the most active endophytic fungus: The fungal isolate (WB2), which was the most active fungus, was identified as Fusarium phyllophilum KU350622.1. The phylogenetic tree is presented in Figure 1. [Figure 1: phylogenetic tree; reference taxa include Fusarium phyllophilum PEN6 (KR909206.1), CBS 216.76 (AB587006.1, AB586912.1), MRC7543 (KR909430.1, KR909359.1), BCCM/IHEM:10241 (KJ126553.1), CBS 239.56 (AY526489.2) and CBS 137.35 (HQ232206.1).]

Bioactive compounds in the ethyl acetate extract of WB2: The bioactive compounds, their retention time, molecular weight, molecular structures and the peak area percentage of each compound identified in the extract are presented in Table 2. The gas chromatogram of bioactive compounds in the ethyl acetate extract of WB2 (Fusarium phyllophilum KU350622.1) is presented in Figure 2. Figure 3: Mass spectrum of the major bioactive compound (dibutyl phthalate) in the ethyl acetate extract of Fusarium phyllophilum KU350622.1 from black mangrove leaves.
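The molecular identification workflow above (base calling, sequence editing, then a BLAST search against the NCBI database before alignment and tree building) can be illustrated with a short Biopython sketch. This is only an illustration of the BLAST step: the study used the NCBI web interface, and the query sequence below is a placeholder fragment, not the actual WB2 sequence.

from Bio.Blast import NCBIWWW, NCBIXML

query_seq = "TCCGTAGGTGAACCTGCGGAAGGATCATTACCGAGTTT"  # hypothetical placeholder fragment

# Remote BLASTN search of the nucleotide (nt) database, then report the top hits.
result_handle = NCBIWWW.qblast("blastn", "nt", query_seq)
record = NCBIXML.read(result_handle)

for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  identity = {identity:.1f}%")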
2019-04-03T13:11:53.925Z
2019-03-04T00:00:00.000
{ "year": 2019, "sha1": "c450f9bf05f386c3879482ca10944cd7cef13bf1", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/njb/article/download/184109/173478", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c450f9bf05f386c3879482ca10944cd7cef13bf1", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
1650726
pes2o/s2orc
v3-fos-license
Mortality and cancer rates among workers in the Swedish PVC processing industry. Personnel lists from four PVC-processing industries were collected, covering production employees with at least three months of employment between the beginning of 1945 and December 31, 1974. Of 2073 persons, 103 could not be followed up, because they had moved abroad. The remaining persons comprise the cohort of 1970 individuals who were analyzed and compared with the national population with respect to mortality from various diseases and cancer morbidity. The death risk from myocardial infarction is elevated in the cohort. This elevation is most clearly apparent in the subcohort which had at least two years of exposure time and where the analysis was directed at circumstances chronologically close to the time of exposure. The myocardial infarction risk related to vinyl chloride exposure is discussed in relation to earlier studies on the vascular effects of vinyl chloride. An indication of an elevated risk of morbidity and mortality from tumors in the digestive organs is also present. However, this is not statistically confirmed. A few future follow-ups of the present study are necessary in order to clarify any possible elevated risk of tumors in the PVC-processing industry. Vinyl chloride has been shown to cause sclerodermia, Raynaud's phenomenon, acroosteolysis, liver damage and liver cancer (hemangiosarcoma) (1) in workers exposed to vinyl chloride monomer (VCM). This has been shown in studies (2,3) performed at companies which fabricate poly(vinyl chloride) (PVC). In animal experimental studies it has been reported that inhalation of VCM causes malignant tumors in different organs in rodents (4-6). In Sweden, in 1974, two cases of liver hemangiosarcoma were diagnosed in employees at a company engaged in the processing of VCM and PVC (7). Later another two cases occurred at the same factory. Studies on other forms of cancer (8,9) suggest that VCM-exposed workers in the PVC fabricating industries may possibly run an elevated risk of contracting forms of cancer other than hemangiosarcoma in the liver. Earlier, an excess mortality from cardiovascular diseases was also observed (10) in employees in the PVC manufacturing industry.
The present retrospective cohort study was performed for the purpose of determining the pattern of morbidity and mortality in the PVC processing industry. The PVC processing industry, generally speaking, has had a lower level of exposure to VCM than the fabrication industry. In the Swedish PVC processing industry, at present, about 5000 persons are employed in production. Material and Methods Information for the study was collected from four PVC processing companies. The four companies all used PVC which, after additions of various chemicals, is heat-treated for fabrication into floor covering, lace, pipes, and food packaging. Data Collection The following data were collected from the personnel lists at the companies: personal number, name, beginning and end of exposure (year and month) and class of exposure. In order for a person to be included in the original cohort, at least three months of employment was required in the period beginning in 1945 and ending December 31, 1974. The exposure was classified as follows: class 3 (high), work in the mixing department; class 2 (medium), heat treatment machines; and class 1 (low), other production departments. The collected data were transferred to punched cards and magnetic tape for statistical processing. The magnetic tape was coordinated with the national total population and the so-called death tapes for the 1961-1976 period and checked against the cancer registry by the Central Bureau of Statistics (SCB). The personnel numbers which could not be recovered at this time were checked by the national taxation office. The original cohort included a total of 2,073 persons. Of these, 103 persons (5%) dropped out, 70 of whom had moved abroad, 5 were found in the missing persons register of the tax office, and 28 could not be traced. Study Cohorts For the statistical processing, the results of the cohort of 1970 persons were divided into a number of subcohorts (study cohorts): (1) all persons with at least three months of exposure (follow-up time from the beginning of exposure and through 1976); (2) all persons with at least six months of exposure excluding those who stopped before 1961 (follow-up time from beginning of exposure but no earlier than 1961 and through 1976); (3) all persons with at least six months of exposure and where the exposure began no earlier than 1961 (follow-up time from beginning of exposure and through 1976); (4) all persons with at least two years exposure (follow-up time from two years after beginning of exposure but no earlier than 1961 and through 1976 but no more than ten years after exposure stopped); (5) all persons with at least two years exposure (follow-up time from ten years after exposure began but no earlier than 1961 and through 1976). The two latter named study cohorts were chosen in order to study whether any differences existed in the death cause pattern with respect to when the deaths occurred after the beginning of exposure. The first of the study cohorts was intended to shed light on possible causes of death which occur relatively early, e.g., accidents caused by the job. The second was intended to shed light on such death causes as occurred after a longer time had passed. Tumors caused by occupational exposure, for example, often have a long latency period, 5-10 yr or longer. Results The original cohort was relatively young at the beginning of exposure. The age distribution in the different exposure classes is given in Table 1.
One finds various dissimilarities between the exposure classes. In class 1 (low), 41.7% were younger than 35 years at the beginning of exposure; in class 2 (average), 47.7%; and in class 3 (high), 50.6%. There were also dissimilarities in the length of exposure with respect to exposure class (Table 2). However, it should be noted that the table includes cases which were still under exposure at the final date for entrance into the cohort (December 31, 1974), for which reason a certain bias toward short exposure times is found. Regardless of this, exposure class 3 has longer exposure times on the average. The cohort as a whole reveals no noteworthy increase in the total risk compared with the national average, nor is there any indication of this in the subgroups making up the study cohort. Study cohort 1, which includes everyone with at least six months of exposure and with calculation of the risk from the beginning of exposure, is somewhat remarkable in that the anticipated number of deaths is significantly higher than that observed up to 1964 (Fig. 1). This is commented on further in the discussion. Study cohort 2 (Tables 3 and 4; Fig. 2) includes persons with at least six months of exposure, excluding those who stopped before 1961. The risk calculation is made from the beginning of the exposure, but no earlier than 1961 and up to the end of the follow-up time (1976). The observed number of deaths is somewhat lower than expected, much lower in exposure class 2. Classes 2 and 3 are relatively small and are sensitive to random deviations in this type of analysis. In order for random deviations not to influence the results, the classes were combined. This is true of all study cohorts. The distribution with respect to the various death causes is shown in Table 4. The observed and anticipated number of deaths during the 1961-1968 period is relatively small, only a few cases, and the death cause classification was modified as mentioned earlier in 1969, for which reason the 1961-1968 period is not discussed separately. By and large, the picture is the same there as for the 1969-1976 period reported on. From Table 4, one sees that the observed number of deaths, especially those from tumors of the digestive tract, myocardial infarction and accidents, is somewhat higher than anticipated. However, the differences are not significant. Study cohort 3 (Tables 5 and 6; Figs. 3-5), which pertains to those who began working in 1961 or later but which otherwise satisfies the same criteria as study cohort 2, displays a similar picture. An analysis of study cohort 3 according to formula B (Figs. 4 and 5) indicates that the annual risk during the first year of exposure is somewhat lower than the anticipated one, but that after about ten years, an increased risk occurs so that the observed risk becomes higher than the anticipated. FIGURE 5. Cumulative survival probability (in percent) of those who began exposure in 1961 or later and have at least six months of exposure. Study cohort 3 (1428 persons). In study cohort 4 (Table 7), which concerns time during ongoing exposure or a relatively short time after the end of exposure, i.e., "short-term perspective," one sees an increased death risk from myocardial infarction. Other causes are somewhat lower here than expected.
In study cohort 5 (Table 8), finally, one finds an indication of an increase in the death risk as regards tumors, but also for myocardial infarction. The differences between the observed and anticipated numbers are not, however, statistically confirmed at the 5% level. The result with respect to mortality can be summarized as follows. In the study cohorts, overall, one finds no noticeable increase in mortality. On the other hand, there are indications of a shift in the death cause pattern compared with the national average. This shift is expressed primarily in the fact that the number of myocardial infarctions is noticeably higher during ongoing exposure or within a relatively short period of time after the end of exposure. There are also indications that the death risk from tumors can be elevated among persons with a long latency period (Tables 7 and 8). In the question of cancer morbidity, there is no certain increase in study cohort 2 (Table 9 and Fig. 6). In the question of tumors of the digestive organs, in the same study cohort, 11 cases were observed as opposed to an anticipated 8.5. The difference is not statistically verified. One of these eleven tumors was liver cancer (ICD 155.0). Discussion A noteworthy finding which arises in the analysis of the total cohort mortality (Fig. 1) is that the number of deaths at the beginning of the observation period (1947-1964) is significantly lower than one would expect in relation to the national average. This difference is so great that one cannot directly consider it to be randomly conditioned, nor can it be entirely ascribed to the so-called healthy worker effect. Theoretically, of course, the possibility exists that the selected cohort, in the question of mortality and the factors which influence said mortality, deviates from the general population. A more credible possibility is, however, that the personnel register that was available at the company involved at the time of this study was incomplete in the matter of hirings during this early period. A later purging (thinning out) of the register of persons who began employment before 1960 could lead to the difference mentioned above. The companies involved reported that such a purging did not occur, so far as they knew. If such a purging (thinning out) nevertheless occurred, this would have resulted in the elimination of persons with a long observation time at the time of follow-up. In the present study, the risk calculations were limited to beginning no earlier than 1961. This means a limitation of the analysis to pertain to the group of employees who were living at the beginning of 1961 and where the risk of such an elimination is positively excluded. This limitation, however, signifies a weakening of the analysis, since parts of the cohort with long follow-up times are excluded. Basically, this weakening signifies a poorer possibility of discovering an elevated incidence of cancer if one exists. The myocardial infarction mortality (ICD 410.90) is elevated in the cohort. This elevation occurs most clearly in the category of the total cohort which has at least two years of employment time and where the analysis was directed at the period of time following two years after the beginning of employment and extending to no more than five years after the beginning of employment. Therefore, this involves that fraction of the mortality from myocardial infarction which chronologically is relatively closely connected to the time of employment.
It is impossible on the basis of such observations to draw conclusions that the elevation was caused by exposure to vinyl chloride. The observed increase in myocardial infarction mortality is, however, so striking that it, in combination with the known facts about the toxic properties of vinyl chloride, must be given consideration. There are no reasons to assume that varying diagnostics, standards or practices in filling out the death certificates alone could provide an explanation. A natural conclusion is, therefore, that if one disregards the possibility of a random local phenomenon, the increased frequency is to be ascribed either to selection of individuals susceptible to the risk or an outbreak of risk factors in the close environment of employees. A combination of these two circumstances is, of course, also possible theoretically. In this connection, it should be noted that many risk factors for myocardial infarction are environmentally conditioned in the fact that they constitute part of the lifestyle of the modern social environment in an industrialized country. Cigarette smoking, physical inactivity, overweight and high blood lipids constitute environmental factors which are related to social behavior. It is a well known fact that the risk of coronary vascular disease in the heart varies, inter alia, with the total load of risk factors. Among other risk factors, one can also name hereditary characteristics and high blood pressure. In this connection, there is reason to recollect that the causal network of coronary disease is multifactorial and that the disease has an environmental relationship in the broad sense. There is also reason to recall the aspect that the total risk increases when several risk factors, known or unknown, are allowed to collaborate (11,12). It has not been possible to establish the distribution of such already known risk factors for coronary disease in the cohorts studied with respect to the national population in general. Therefore, no continued analysis of the matter of the causal relationship between close environment and heart disease morbidity can be made within the limits of this study. Exposure classes 2 and 3 constitute subcohorts that are too small, in the present study, to allow a meaningful discussion of the myocardial infarction risks relative to the various exposure levels in the processing industry. In this connection, one should also consider the circumstance that the exposure classes in this study are based on interviews with the employees directed at the work environment at the time in question some 10 to 15 years ago. Therefore, this involves an environment which has subsequently undergone changes. Objective classification criteria in the matter of exposure, e.g., in the form of environmental measurements, do not exist. The distribution into exposure classes is, for this reason, fraught with uncertainty. In animal experiments, it has been found that the toxicity picture in rodents chronically exposed to VCM involves the blood vessels. Besides hemangiosarcoma in the liver and the other organs (4,5) the inhalation of VCM is also believed to cause development of telangiectasis (4) in the liver of mice which can lead to death from hemocoele. Changes in the sinus cells have been observed in liver biopsies in VCM-exposed workers (13).
Capillary changes in the skin of the fingers have also been observed (14-16), both in VCM-exposed workers with other vascular-involved diseases, such as acroosteolysis, Raynaud's phenomenon, and sclerodermia, and in VCM-exposed workers without such diseases. An over-representation of deaths from cardiovascular diseases has also been observed in a study on the PVC-fabricating industries (10). Animal experimental and previous medical studies of VCM-exposed populations therefore support the assumption that the increased risk of myocardial infarction observed in the present study could possibly be ascribed to VCM exposure. As regards the mortality and morbidity from tumors, the results are uncertain. There are certain indications of an elevation, but the differences are not statistically confirmed. One can think of two possibilities here: (1) in reality, there is no increase in the risk of tumors; (2) there is indeed an increased risk of tumors. The results neither confirm nor refute this. Tumors do not occur until after a long latency period. The majority of the persons included in the study did not begin their exposure until the 60's and 70's and therefore could not be followed for a sufficiently long time. An accurate follow-up of the present cohort during the coming five-year period should bring greater clarity into this. In the present connection, it is of interest that in a recently published mortality study (17) on almost 4300 deaths in the American PVC-processing industry, an overrepresentation in cancer mortality appears to exist (all cancer), especially gastrointestinal cancer in both sexes.
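The comparisons of observed with anticipated deaths in this cohort amount to standardized mortality/morbidity ratios judged against a Poisson expectation. As a worked illustration, the short Python sketch below applies an exact Poisson test to the 11 observed versus 8.5 anticipated digestive-organ tumors quoted above; treating the expected count as fixed and doubling the one-sided P value are common conventions assumed here, not necessarily the original authors' procedure.
```python
# Sketch: observed vs expected events expressed as an SMR with an exact Poisson test.
# Assumptions: the expected count (from national rates) is treated as fixed, and a
# two-sided P value is obtained by simple doubling of the one-sided tail probability.
from scipy.stats import poisson

observed = 11     # digestive-organ tumors observed in study cohort 2 (from the text)
expected = 8.5    # anticipated number based on national rates (from the text)

smr = observed / expected
p_upper = poisson.sf(observed - 1, expected)   # P(X >= observed) if true mean = expected
p_two_sided = min(1.0, 2 * p_upper)

print(f"SMR = {smr:.2f}, one-sided P = {p_upper:.3f}, two-sided P = {p_two_sided:.3f}")
# SMR is about 1.29 with P well above 0.05, consistent with the paper's statement that
# the excess of digestive-organ tumors is not statistically confirmed.
```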
2014-10-01T00:00:00.000Z
1981-10-01T00:00:00.000
{ "year": 1981, "sha1": "3bbb9bad9dd2ac3455ab62ff84dfa6830397b911", "oa_license": "pd", "oa_url": "https://doi.org/10.1289/ehp.8141145", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3bbb9bad9dd2ac3455ab62ff84dfa6830397b911", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221613948
pes2o/s2orc
v3-fos-license
Phyllodes Tumors of the Breast: A Literature Review Phyllodes tumors (PTs) of the breast are rare fibroepithelial neoplasms and are challenging for both pathologists and surgeons. The World Health Organization (WHO) has classified PTs histologically as benign, borderline, and malignant. PTs can be detected at all ages; however, the median age of presentation is 45 years. PTs can mimic fibroadenoma in clinical presentation. Breast imaging is also similar to that of fibroadenomas. Cytological diagnosis of PTs by biopsy is usually unreliable. However, a core needle biopsy is superior to fine-needle aspiration. Surgery is considered the mainstay treatment for PTs of the breast, with a goal of achieving negative margins. Adjuvant chemotherapy and radiation therapy use for malignant PTs are controversial. Introduction And Background Phyllodes tumors (PTs) of the breast are an infrequent fibroepithelial neoplasm that accounts for less than 1% of all breast neoplasms. The incidence of PTs is low, 0.3%-0.9% of all breast tumors [1,2]. PTs were initially described by Muller in 1838 as Cystosarcoma phyllodes. Phyllodes derives from the Latin Phyllodium, which means 'leaf-like', based on a gross pathological description of a leafy, bulky, cystic, and fleshy tumor of the breast [3]. In 1982, the World Health Organization (WHO) classified PTs histologically as benign, borderline, and malignant based on their histopathologic characteristics, which has been accepted widely (Table 1) [4]. Benign PTs are the most frequent, constituting between 35% and 64% of cases, borderline PTs between 7% and 40% of cases, while malignant PTs reach up to 30% [5,6]. Triple assessment by clinical, radiological, and histological examination is the initial assessment to evaluate PTs. It is difficult to differentiate PTs from other breast tumors before surgical excision. PTs have unpredictable behavior regardless of their histological grade. Local recurrences and distant metastases rarely occur in benign PTs, while they are common in borderline and malignant PTs. Surgery with clear resection margins remains the mainstay treatment for PTs of the breast. Local recurrence is reported to be approximately 8% for benign PTs and 21% for borderline cases [7]. Review Epidemiology and risk factors PTs are considered rare tumors. Because of limited data, the etiology of PTs is unknown, and the risk factors are not yet clearly identified; however, Latin women and East Asians who were born in Central or South America and living in the United States have a higher risk [8-10]. Additionally, genetic mutations in the chromosomal regions of +1q, +5p, +7, +8, −9p, −10p, −6, and −13 correlated with borderline and malignant PT of the breast [11]. Few studies have exhibited an association between family relatives and PTs [12,13]. Women with Li-Fraumeni syndrome have an increased risk for PTs [14]. PTs occur almost exclusively in females; however, a few cases have been reported in men, all of which were associated with gynecomastia [15,16]. The median age of presentation in PTs is 45 years, with age ranging between 9 and 93 years [8,17], with Asians diagnosed at a significantly earlier age than other groups [18]. Clinical presentation and diagnosis PTs usually present clinically as a benign breast mass, sometimes with rapid growth. In some patients, the lesion may present with rapid growth after being present for many years.
It can associate with blue discoloration, dilated skin veins, skin ulcers, nipple retraction, and palpable axillary lymph nodes in rare cases [17][18][19]. It rarely involves the nipple-areola complex or causes ulceration to the skin. The most common presenting symptom is a breast lump, usually located at the upper outer quadrant of the breast, and rarely bilateral in 1.8% [15]. The size of PTs varies between 0.5 and 30 cm with a mean between 5 and 7.2 cm [20,21]. Triple assessment including, clinical, radiological, and histopathological evaluations of suspected breast lumps are considered to be the standard of care. By ultrasound, it appears as a solid mass, inhomogeneous, with a radiolucent halo, lobulated border, and sometimes coarse microcalcifications. The presence of a solid mass with multiple or single, round or cleft like cystic spaces with posterior acoustic enhancement suggest the diagnosis of PT, but Intramural cysts with the absence of posterior acoustic enhancement can also be present. High vascularity is usually present in solid components. On mammographic imaging, they emerge as hyperdense, large, round, or oval, well-circumscribed lesions [22,23]. There is no clear indicator of malignancy observed on either ultrasounds or mammography, most of the time they have features similar to fibroadenoma on mammography and ultrasonography, however, with a higher mammographic density for PTs [24]. Even though magnetic resonance imaging (MRI) is considered to be extremely sensitive for the detection of breast cancer, it is still difficult to differentiate PTs from other breast tumor types [25,26]. On MRI, they are seen as oval, round, or lobulated masses with circumscribed margins as with the mammography. Although it is still difficult to differentiate phyllodes tumors from other breast tumors, PTs have higher signal intensities on T1-weighted images and lower or equal signal intensity on T2-weighted images than normal breast parenchyma [17,27]. The role of MRI in diagnosing PTs still under argument and not yet understood, although some authors have found evidence suggesting that MRI may have a high concordance rate with histopathology [17,27]. Cytological diagnosis of PTs by biopsies is usually unreliable [17,28]. Diagnosis of PTs by fineneedle aspiration cytology (FNAC) is difficult [17,29], as it is unreliable in distinguishing PTs from fibroadenoma cytologically. Scolyer et al. [29], compared the cytology of PTs and fibroadenomas, and concluded that if hypercellular stromal fragments are seen on FNAC, the possibility of PT should be raised and excision recommended. Foxcroft et al. [28], reviewed 83 cases of PTs, found cytology proposed PT in only 23% by FNA guided biopsies, while it was 65% of PTs on core biopsy. Pathology PTs are fibroepithelial tumors characterized by epithelial and stromal proliferation. On gross examination, PT mimics the fibroadenoma, but in the cut surface exhibits cleft like spaces with distributed nodular stromal growth, and the color differs from tan to yellowish gray. Also, stromal overgrowth, mitotic activity, and increased stromal cellularity are present [30]. Hemorrhage and necrosis can be seen in malignant type, also, a malignant type can have a fleshy sarcoma-like cut surface which is softer than benign PTs or fibroadenoma. Histologically, PTs are classified into different grades by the World Health Organization to determine their prognosis and clinical behavior [4]. 
These include benign, borderline, and malignant PTs based on histologic criteria, which include stromal cellularity, degree of cellular pleomorphism, mitotic activity, tumor margin, and stromal pattern ( Table 1) [4]. Benign PTs constitute between 35 % to 64 %, whereas the malignant form accounts for about 25 % of cases [5,6,31]. A benign PT is characterized by well-defined tumor borders, mild stromal cellularity, none to mild atypia, < 5 mitotic figures per 10 high power field (HPF), and lack of stromal overgrowth or malignant heterologous components [32]. A borderline PT is characterized by typically well-defined or focally permeative tumor borders, absent or focal stromal overgrowth, moderate stromal cellularity, mild or moderate stromal atypia, and no malignant heterologous components [32]. Mitotic activity is in the range of 5-9 per 10 HPF. A malignant PT characterized by marked stromal cellularity and atypia, permeative margins, stromal overgrowth, and mitotic activity of at least 10/10 HPFs [33]. Treatment Surgery is the mainstay treatment for PTs of the breast. However, due to their unclear clinical presentation, vague pathological behavior, and difficult preoperative diagnosis, there still seems to be a predicament in their treatment plans. In the past, simple mastectomy was the recommended treatment for borderline and malignant PTs. Breast-conserving surgery (BCS) was safe and adequate even for malignant PTs if complete excisions achieved [34,35]. The extent of surgery remains arguable because the surgical resection margin is thought to be associated with the local recurrence of PTs. Also, numerous clinical studies recommend wide excision of the tumor with 1 cm clear margin [22,31,36], which can cause a major difficulty in achieving good cosmetic results. However, recent studies show that there is no direct relationship between local recurrence rate and the width of negative margins [21,37]. Jang et al. reviewed 164 PT cases, found that the only factor that strongly predicted local recurrence was the presence of tumor cells on the resection margin, while the width of the resection margin did not correlate with local recurrence risk [21]. Onkendi et al. have shown that disease-free survival was not affected by the extent of surgical resection in patients with borderline and malignant PTs [37]. For benign and borderline PTs have a less aggressive disease course and the recurrence rates are low regardless of the resection margin status [38,39]. Adjuvant radiation therapy use for malignant PTs is controversial [19,36,40]. Gnerlich et al. [41] in an analysis of cases collected from National Cancer Data Base from 1998 to 2009, demonstrated that there were an increase in time to LR and a significant decrease in LR in women who received adjuvant radiotherapy in comparison to those women who had surgery alone for malignant PTs but without a significant improvement in disease-free or overall survival. Belkacemi et al. [42] demonstrated that adjuvant radiotherapy for borderline and malignant PTs yielded a superior 10-year local control rate (86% with radiation versus 59% without radiation). Also, Barth et al. [7] found that no local recurrence was observed after a median follow-up of 56 months for women who received breast conservative surgery and adjuvant radiotherapy for borderline and malignant PTs with a confirmed margin-negative. 
However, the current National Comprehensive Cancer Network (NCCN) guidelines recommend consideration of radiotherapy for malignant phyllodes only in the setting of local recurrence (level 2B evidence) [43]. Adjuvant chemotherapy is more controversial, and evidence for its effect in PTs is lacking. Adjuvant cytotoxic chemotherapy has not been shown to reduce local recurrences or to improve disease-free or overall survival. However, it can be considered for large tumors, when adjacent structures such as the chest wall are involved, or in unresectable distant metastases [44]. Endocrine therapy is not proven to have an effect in PTs, although they pathologically contain estrogen receptors in 58% and progesterone receptors in 75% of cases [45]. Chemotherapy, radiotherapy, and hormonal therapies can be considered to treat metastatic disease, but without clear evidence of their efficacy [46]. The management summary for PTs is shown in Figure 1.
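The WHO-style histologic thresholds summarized earlier in this review (tumor border, stromal cellularity and atypia, stromal overgrowth, and mitoses per 10 high-power fields) lend themselves to a simple rule-based triage. The Python sketch below encodes those thresholds for illustration only; the field names and the simplified decision order are assumptions, and real grading integrates all criteria on whole slides rather than a single count.
```python
# Illustrative, simplified triage of a phyllodes tumor using the thresholds quoted in
# this review (mitoses per 10 HPF: <5 benign, 5-9 borderline, >=10 malignant; malignant
# also requires marked atypia, stromal overgrowth, and permeative borders).
# Field names and decision order are assumptions for the sketch, not a clinical tool.
from dataclasses import dataclass

@dataclass
class Histology:
    mitoses_per_10_hpf: int
    stromal_overgrowth: bool   # marked stromal overgrowth present
    permeative_border: bool    # infiltrative rather than well-defined border
    marked_atypia: bool        # marked stromal atypia

def grade_phyllodes(h: Histology) -> str:
    if (h.mitoses_per_10_hpf >= 10 and h.stromal_overgrowth
            and h.permeative_border and h.marked_atypia):
        return "malignant"
    if h.mitoses_per_10_hpf >= 5 or h.permeative_border or h.stromal_overgrowth:
        return "borderline"
    return "benign"

print(grade_phyllodes(Histology(3, False, False, False)))   # benign
print(grade_phyllodes(Histology(7, False, True, False)))    # borderline
print(grade_phyllodes(Histology(12, True, True, True)))     # malignant
```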
2020-09-10T10:24:22.887Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "1bf253fc7e659ddfc79ea3a72b56fa45da800c0a", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/40363-phyllodes-tumors-of-the-breast-a-literature-review.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "880ff4691b5baaa9106331ca7f3bc521a8848a02", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
239025309
pes2o/s2orc
v3-fos-license
ATS Core Curriculum 2021. Pediatric Pulmonary Medicine: Pulmonary Infections The following is a concise review of the Pediatric Pulmonary Medicine Core reviewing pediatric pulmonary infections, diagnostic assays, and imaging techniques presented at the 2021 American Thoracic Society Core Curriculum. Molecular methods have revolutionized microbiology. We highlight the need to collect appropriate samples for detection of specific pathogens or for panels and understand the limitations of the assays. Considerable progress has been made in imaging modalities for detecting pediatric pulmonary infections. Specifically, lung ultrasound and lung magnetic resonance imaging are promising radiation-free diagnostic tools, with results comparable with their radiation-exposing counterparts, for the evaluation and management of pulmonary infections. Clinicians caring for children with pulmonary disease should ensure that patients at risk for nontuberculous mycobacteria disease are identified and receive appropriate nontuberculous mycobacteria screening, monitoring, and treatment. Children with coronavirus disease (COVID-19) typically present with mild symptoms, but some may develop severe disease. Treatment is mainly supportive care, and most patients make a full recovery. Anticipatory guidance and appropriate counseling from pediatricians on social distancing and diagnostic testing remain vital to curbing the pandemic. The pediatric immunocompromised patient is at risk for invasive and opportunistic pulmonary infections. Prompt recognition of predisposing risk factors, combined with knowledge of clinical characteristics of microbial pathogens, can assist in the diagnosis and treatment of specific bacterial, viral, or fungal diseases. A variety of organisms infect the airways and lung parenchyma, including bacteria, mycobacteria, viruses, and fungi. Diagnostic methods have improved greatly in recent years, yet culture remains the clinical gold standard for bacteria, mycobacteria, and fungi. Disadvantages of culture include a limited spectrum of culturable organisms, the time until results, and personnel efforts. Advantages include quantitative results and the ability to further test the organism. Besides conventional identification of organisms by growth characteristics and biochemical testing, 16S rRNA sequencing and mass spectrometry (matrix-assisted laser desorption ionization-time-of-flight mass spectrometry) have enhanced the sensitivity and accuracy of species-level identification. Culture-independent methods typically rely on amplification of bacterial or viral nucleic acids and subsequent identification (1). Polymerase chain reaction (PCR) with or without quantitation has been available the longest, and newer, refined methods of nucleic acid extraction and amplification have streamlined the process, including isothermal nucleic acid extraction and amplification for high-throughput and point-of care testing. Several syndromic panels for upper and lower respiratory tract infections are commercially available (2). These multiplex panels include bacterial, atypical bacterial, and viral codetection, often with concomitant detection of bacterial resistance genes. Sensitivity and specificity vary among panels and specimen types. Sensitivity for different targets may differ within a sample, and detection of bacterial resistance in a mixed infection may not be pathogen specific (3). 
Increasingly, panels are semiquantitative to decrease uncertainty about contamination versus a clinically relevant bacterial load. Figure 1 demonstrates standard pathways for respiratory organism identification. Clinicians should understand the appropriate specimen type (swab, induced or expectorated sputum, bronchoalveolar lavage fluid) relevant to the clinical question and recognize that sample quality and contamination are a few of the issues that may affect interpretation, especially for molecular diagnostics (4). Viral Diagnostics Viral detection is largely PCR-based, with many point-of-care methods for single or multiplex detection being available. Separate panels are optimized and Food and Drug Administration-approved for upper and lower respiratory tract samples. Antigen-based viral detection is widely used (e.g., influenza, severe acute respiratory syndrome coronavirus 2 [SARS-CoV-2]) and has rapid results but lower performance. Fungal Diagnostics Fungal diagnostics are often used in immunocompromised patients (ICPs). Culture and histopathology of secretions or tissues with fungus-specific staining remain valuable. Molecular methods include panfungal PCR and pathogen-specific PCR. Some syndromic lower respiratory infection panels include targets for Aspergillus fumigatus, Pneumocystis jirovecii, and Cryptococcus neoformans (5). To date, there are few published evaluations of those tests. Antigen detection assays are available for Cryptococcus, Aspergillus (galactomannan), Candida (enolase), and β-glucan as nonspecific markers of fungal infection. Given the concern of contamination from upper airway secretions, evaluation of the host's immune response to fungus can provide specificity. Assays include complement fixation, immunodiffusion, and enzyme immune assays. Mycobacterial Diagnostics Diagnosis of mycobacterial infections remains challenging, given the fastidious nature of the organisms. Culture remains the gold standard, followed by Mycobacterium tuberculosis-specific PCR (6). Skin and serologic tests are routinely used for tuberculosis, with cautious interpretation being used for ICPs. For nontuberculous mycobacteria (NTM), culture remains the standard in most settings; however, molecular assays and panmycobacterial PCR with gene sequencing for subtype identification are becoming available (7). NEW AND EVOLVING IMAGING MODALITIES Diana Y. Chen and Nazia Hossain In addition to clinical assessment and microbial tests, imaging studies can aid in the diagnosis and management of pulmonary infections. Although plain chest radiography (CXR) is often the preferred initial modality, findings are often normal or nonspecific. Chest computed tomography (CT) remains the gold standard for characterization of pulmonary infections but increases the total-dose radiation exposure. Lung ultrasound (LUS) and lung magnetic resonance imaging (MRI) have gained attention as radiation-free alternatives for pulmonary imaging. Video 1 shows common radiologic findings for pulmonary infections from the imaging modalities described in this review. LUS Although the diagnosis of pneumonia is made in a clinical context, CXR studies are often obtained, although appreciable false-negative rates and wide inter- and intraobserver interpretation variability are reported.
LUS is an advantageous alternative diagnostic modality that can detect the presence of consolidations, focal B-lines, pleural line abnormalities, and effusions with significantly better sensitivity than (95.5% vs. 86.8%) and specificity similar to (95.3% vs. 98.2%) CXR alone or in combination with clinical findings, and the use of LUS results in less interpretation variability (1,2). LUS is better for diagnosing and characterizing pleural effusions and identifying mediastinal lymphadenopathy in suspected pulmonary tuberculosis than CXR (3,4). LUS techniques yield quality imaging results in children because of their unique anatomical features, including a thinner chest wall and smaller thoracic width. Advantages to LUS include the lack of radiation exposure, lower relative cost, potential for expanded access in low-resource settings, and rapid availability of results (5). Chest CT versus MRI CT is the gold standard for detecting pulmonary infection and for diagnosing bronchiectasis and air trapping. Despite routine use of pediatric low-dose CT protocols, minimizing the cumulative dose of radiation exposure in children remains crucial, particularly in patients with DNA-repairing deficiencies and oncologic conditions. With advances in imaging quality, emerging studies have aimed to validate MRI as a radiation-free alternative. The advancements and development of MRI with fast-imaging sequences and higher field strengths have enhanced image resolution while reducing total scan times and the need for prolonged sedation (6). NTM are ubiquitous in the environment, particularly in soil and water sources. They can cause uncommon, but significant, pulmonary disease in children (1,2). These atypical pathogens are slow growing compared with typical bacteria and require special culture conditions for proper isolation and identification (3). NTM species associated with pulmonary disease in children include the M. abscessus group (MAB) (which includes the subspecies abscessus, massiliense, and bolletii), which typically grow within 1 week in culture, as well as slower growing species such as M. avium complex (MAC) (which includes M. avium, M. intracellulare, and M. chimaera, among others) and M. kansasii, which can take weeks to grow in culture (4). In the pediatric population, NTM infections most commonly occur in children with cystic fibrosis (CF). Pulmonary NTM can also be diagnosed in children with non-CF bronchiectasis, including primary ciliary dyskinesia, autosomal dominant hyper-IgE syndrome, and primary or secondary immunodeficiencies (5). MAC and MAB are the most common species associated with pulmonary disease in the United States. Screening and Diagnosis Annual screening for NTM should be performed in older children with CF via an expectorated or induced sputum culture, even if they are asymptomatic. Routine screening should also be considered in children with non-CF bronchiectasis. In addition, the presence of NTM should be considered in pulmonary infections or exacerbations that are unresponsive to treatment of typical pathogens or that present with an unexpected clinical decline. Because NTM can cause transient or indolent infection, or be a contaminant, it is imperative that careful consideration of the cause and a formal diagnosis of NTM pulmonary disease is made before starting treatment. Diagnostic criteria are shown in Figure 2. If a patient is on chronic azithromycin therapy for immunomodulation, it should be stopped upon isolating NTM to avoid partial treatment and induced resistance.
Management Management of NTM disease in children requires a prolonged course of multiple antibiotics, with the treatment choice being based on the species or subspecies and guided by susceptibilities. Expert consultation is recommended, especially for drug-resistant MAC or MAB (4). Standard treatment approaches are shown in Table 1 for the most common NTM (4,6). Newer antibiotics that show in vitro activity against NTM are included on the basis of updated treatment guidelines (6). Typically, MAC and M. kansasii can be treated with an oral antibiotic regimen. MAB requires an intensive phase of intravenous and oral antibiotics, which is followed by a continuation phase of oral and inhaled antibiotics. Daily treatment is recommended in children (vs. thrice-weekly treatment in adults) because of drug metabolism. Any patient started on NTM treatment requires frequent follow-up with intensive drug-toxicity monitoring (e.g., audiograms, vision testing, electrocardiography, and laboratory testing). The goal of treatment is to achieve 12 months of consecutive negative culture results. If microbiologic conversion cannot be achieved, drug concentrations should be measured and alternate antibiotics can be trialed. Pathogenesis The SARS-CoV-2 virus gains entry into cells by binding to the ACE2 receptor (3). The reason children experience less severe symptoms is unknown, and hypotheses include children having lower levels of ACE2, differential immune responses, and fewer comorbidities (4). To date, only age (less than 1 yr old) and having a preexisting medical condition have been identified as risk factors for severe disease in children (5). Diagnostic Testing Diagnosis of COVID-19 is typically through detection of viral RNA from a respiratory sample with reverse transcription-PCR. The accuracy of testing results depends on numerous factors, such as the sample source and the viral load present. Consultation with an infectious disease expert is recommended in certain clinical scenarios (e.g., congenital or acquired immunodeficiency) to ensure proper test interpretation and determine the need for possible retesting (8). Although laboratory findings have limited diagnostic and prognostic value, lymphopenia as well as elevated creatine kinase, liver enzymes, and procalcitonin have been documented (7). CT abnormalities can present before clinical symptoms and may include ground-glass opacities (GGOs) and consolidations (6,7). Treatment and Management Currently, there are no specific therapies for children with COVID-19. Stable patients are advised to quarantine at home (8). Treatment strategies for hospitalized patients include supplemental oxygen, fluid resuscitation, and empiric antibiotics when indicated (8). Systemic corticosteroids are not recommended. Patients in whom supplemental oxygen fails should be escalated to noninvasive positive pressure ventilation before mechanical ventilation with lung-protective strategies (9). Monoclonal antibodies to SARS-CoV-2 can be administered to high-risk pediatric patients 12 years or older who test positive for SARS-CoV-2 (10). To date, the Pfizer-BioNTech COVID-19 mRNA vaccine is the only vaccine available to patients 12 years or older. Clinical trials for patients under 12 years of age are currently ongoing. Complications and Prognosis An estimated 13.3% of pediatric patients with COVID-19 are admitted to the hospital, with 3.5% requiring intensive care unit (ICU) care. Mortality is low and is estimated to be less than 1% (5).
Multisystemic inflammatory syndrome in children is a rare complication defined by fever, systemic inflammation, and multiorgan dysfunction seen after COVID-19 infection in patients 3-12 years old. There are no established guidelines, and treatment includes intravenous immunoglobulin and glucocorticoids. With multisystemic inflammatory syndrome in children, 80% of patients require ICU care, and mortality is estimated at 2% (11). INFECTIONS IN THE IMMUNOCOMPROMISED HOST Stephen Kirkby and Robin Ortenberg The pediatric ICP is at risk for invasive and opportunistic pulmonary infections. Prompt recognition of predisposing risk factors, combined with knowledge of clinical characteristics of microbial pathogens, can assist in the diagnosis and treatment of specific bacterial, viral, or fungal diseases. Classification of Patients at Risk for Infection There are hundreds of primary immunodeficiencies, further characterized by specific cellular defects (1). Disorders affecting T cells are likely to present with recurrent or severe viral and fungal infections. B-cell abnormalities lead to decreased antibody production and recurrent bacterial pneumonias, specifically caused by encapsulated organisms (2). Secondary immunodeficiencies vary in severity depending on the underlying cause of disease and include diabetes, malignancy, malnutrition, sickle-cell disease, human immunodeficiency virus, and immunomodulatory medication use. In addition, significant immunosuppressive states exist related to solid organ or bone marrow transplantation (3). Diagnostic Evaluation Evaluation of the ICP with respiratory symptoms should aim to quickly identify the source organism and rule out noninfectious etiologies. Imaging with CXR or CT can assist in identifying pathognomonic patterns of disease, in conjunction with specific laboratory studies. Bronchoscopy with bronchoalveolar lavage remains a key tool for diagnosis through cultures, cytology, and molecular diagnostic testing (such as PCR) (4). There may also be a limited role for lung biopsy (transbronchial or radiology-guided biopsy) or sampling of other tissues, such as lymph nodes or skin (Table 2). Infectious disease consultants can offer valuable assistance in specific testing, particularly in complicated cases. Bacterial Infections Bacterial infections are commonly encountered among ICPs and may be acquired in the community or healthcare setting. Clinicians should have high clinical suspicion and quickly institute appropriate broad antibiotic coverage for gram-positive and gram-negative species, as the typical treatment for community-acquired pneumonia in immunocompetent hosts may be insufficient (5). Slow-growing bacterial species including mycobacteria and Nocardia species may also be identified in ICPs. Practitioners can tailor treatment to a specific pathogen once cultures and drug susceptibility patterns are identified. Fungal infections Although many fungi represent commensal or nonpathologic organisms, others can cause true infection in the ICP. P. jirovecii is a common infection among organ transplant recipients as well as patients with chronic systemic glucocorticoid therapy or prolonged neutropenia or lymphopenia. Prophylactic strategies are often used in these high-risk patients. Aspergillus species, including A. fumigatus, may cause invasive pulmonary aspergillosis with sequelae such as necrotizing pneumonia, vascular invasion, and hematologic spread. These patients may present with hemoptysis or pulmonary infarction. 
The classic radiologic finding in invasive pulmonary aspergillosis is a cavitary lesion with surrounding GGO, the so-called "halo sign." The highest-risk patients include those with hematologic malignancies and bone marrow transplant recipients. Other important fungal pathogens to consider in ICPs include mucormycosis, cryptococcosis, and Candida infections. Published guidelines detail the diagnosis of pulmonary fungal infections (6). Viral Infections Seasonal community-acquired viral infections, including with influenza, respiratory syncytial virus, adenovirus, and other common upper respiratory tract viruses, are more likely to cause clinical pneumonia in pediatric ICPs. A high level of suspicion and prompt diagnostic testing via PCR panels are indicated. Solid organ and bone marrow transplant recipients are at particularly high risk for cytomegalovirus, which can cause pneumonitis and other organ disease. Risk is based in part on the seropositivity of the donor and recipient, and prophylactic strategies are often employed (7).
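Reported sensitivity and specificity, such as the LUS figures quoted earlier in this review (sensitivity 95.5% vs. 86.8% and specificity 95.3% vs. 98.2% relative to CXR), only translate into bedside probabilities once disease prevalence is considered. The short Python sketch below works through that conversion; the 20% pretest probability of pneumonia is an illustrative assumption, not a figure from the review.
```python
# Sketch: convert sensitivity/specificity into predictive values at an assumed prevalence.
# The 0.20 pretest probability of pneumonia is an illustrative assumption only.
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

for name, sens, spec in [("LUS", 0.955, 0.953), ("CXR", 0.868, 0.982)]:
    ppv, npv = predictive_values(sens, spec, prevalence=0.20)
    print(f"{name}: PPV={ppv:.2f}, NPV={npv:.2f}")
# At 20% prevalence this gives roughly PPV 0.84 / NPV 0.99 for LUS and
# PPV 0.92 / NPV 0.97 for CXR, illustrating why both metrics matter at the bedside.
```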
2021-10-20T05:24:03.877Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "9a5934c2cff96805f50c0cfb7a17063540e791ed", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.34197/ats-scholar.2021-0034re", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9a5934c2cff96805f50c0cfb7a17063540e791ed", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4708784
pes2o/s2orc
v3-fos-license
The response strategy and the place strategy in a plus‐maze have different sensitivities to devaluation of expected outcome Abstract Previous studies have suggested that spatial navigation can be achieved with at least two distinct learning processes, involving either cognitive map‐like representations of the local environment, referred to as the "place strategy", or simple stimulus‐response (S‐R) associations, the "response strategy". A similar distinction between cognitive/behavioral processes has been made in the context of non‐spatial, instrumental conditioning, with the definition of two processes concerning the sensitivity of a given behavior to the expected value of its outcome as well as to the response‐outcome contingency ("goal‐directed action" and "S‐R habit"). Here we investigated whether these two versions of dichotomist definitions of learned behavior, one spatial and the other non‐spatial, correspond to each other in a formal way. Specifically, we assessed the goal‐directed nature of two navigational strategies, using a combination of an outcome devaluation procedure and a spatial probe trial frequently used to dissociate the two navigational strategies. In Experiment 1, rats trained in a dual‐solution T‐maze task were subjected to an extinction probe trial from the opposite start arm, with or without prefeeding‐induced devaluation of the expected outcome. We found that a non‐significant preference for the place strategy in the non‐devalued condition was completely reversed after devaluation, such that significantly more animals displayed the use of the response strategy. The result suggests that the place strategy is sensitive to the expected value of the outcome, while the response strategy is not. In Experiment 2, rats with hippocampal lesions showed significant reliance on the response strategy, regardless of whether the expected outcome was devalued or not. The result thus offers further evidence that the response strategy conforms to the definition of an outcome‐insensitive, habitual form of instrumental behavior. These results together attest a formal correspondence between two types of dual‐process accounts of animal learning and behavior. | INTRODUCTION In the early half of the twentieth century the major point of dispute in behavioral psychology was what exactly animals learn when they learn. Early theorists viewed animal learning merely as an association between a stimulus (S) and a subsequent response (R), the strength of which is mechanistically modified, or reinforced, by an event that follows the response (e.g., Hull, 1943;Thorndike, 1911). This simple S-R view was later challenged by a series of findings showing that animals appear to possess detailed expectations about the outcome (O) of an action and act purposively to obtain or avoid that outcome (e.g., Tinklepaugh, 1928;Tolman, 1948;Tolman & Gleitman, 1949;Tolman & Honzik, 1930). The debate between the behaviorist and cognitivist camps was instrumental in fostering at least two types of dual-process accounts of learning.
On one hand, the debate concerned how animals, in most cases rats, learn to navigate in space, which led to an idea that they can navigate with two different strategies; a place strategy that is based on a "mental map-like representation" of the absolute spatial position of the goal in relation to various stimuli within the environment (e.g., Tolman, 1948), and a response strategy that relies on the formation of an association between a specific cue from the maze and the animal's own kinesthetic response such as turning in a specific direction (e.g., Hull, 1943;Spence & Lippitt, 1946). Various behavioral techniques have been developed to dissociate the two types of navigation strategies. In one version of such experiments, a rat is trained initially in a Tmaze discrimination, in which learning both the appropriate bodily response and the spatial position of the goal are effective solutions (i.e., "dual-solution" T-maze). The rat is then subjected to a probe trial in which it starts from an arm opposite to that used during the training (Tolman, Ritchie, & Kalish, 1947). A rat that had acquired the original discrimination based on the response strategy would make the same turning response and find itself in the opposite location from the original goal location. By contrast, a rat that had learned the location of the goal during training would show a preference for the arm leading to the same goal location. The accumulated evidence has suggested that both types of learning can occur, depending on experimental variables such as the availability and distinctiveness of cues outside the maze, the use of a correction procedure, and the amount of training (for reviews, see Packard & Goodman, 2013;Restle, 1957). On the other hand, the same behavioral-cognitive debate also led to more detailed behavioral analyses of instrumental conditioning, often conducted in testing chambers with lever press as a target instrumental behavior; that is, the analysis of the free operant in a nonspatial context. Through the use of ingenious behavioral assays such as post-conditioning outcome devaluation (e.g., Adams & Dickinson, 1981) and contingency degradation (e.g., Hammond, 1980), it was made possible to dissociate and more finely define S-R/reinforcement learning (the behavior controlled by this S-R process is called "habit") and the purposive, or goal-directed, form of instrumental learning that depends on an R-O association (the behavior governed by this process is called "goal-directed action"; for reviews see Dickinson, 1985Dickinson, , 1994. In the typical outcome devaluation procedure, the value of the reinforcer is first decreased by pairing it with an aversive event such as illness, or by taking advantage of sensory-specific satiety by pre-feeding the animal with the reinforcer in a context that does not provide the animal with the opportunity to make the instrumental response. The animal's propensity to perform the instrumental response that had previously produced the now-devalued outcome is then tested. Crucially, this test takes place in extinction. Thus, throughout the devaluation and extinction phases, there is no opportunity for the animals to experience the devalued outcome as a result of the instrumental response. 
Therefore, the devalued reinforcer has no opportunity to modify the strength of an S-R connection directly, and hence any change in the animal's propensity to perform the instrumental response during the extinction test must be attributed to its use of expectation about the current value of the instrumental outcome. If, on the other hand, the instrumental response had been established through an S-R/reinforcement process during training and controlled by the same process during the extinction test, the performance should be insensitive to whether or not the outcome is devalued. The devaluation procedure thus offers a diagnostic tool with which one can assess whether a given instrumental response is an S-R habit or a goal-directed action. In this paradigm, therefore, the dichotomy is made, not on the basis of animals' spatial dispositions, but based on the sensitivity of a given behavior to the expected value of the outcome; that is, its goal-directedness. Given the common historical background in the literature, it is surprising that rather little is known about the relationship between these two types of dual-process accounts of learning; the two spatial learning strategies and the two instrumental learning processes (e.g., Sage & Knowlton, 2000;de Leonibus et al., 2011, see Section 8 for the details of these studies). In the current study, we aimed to find a formal correspondence between the two types of dual-process theories, by combining those assays used in each area of research; the opposite start arm test and outcome devaluation. The specific question we asked was whether place-and response strategies in the spatial domain formally correspond to goal-directed and habitual instrumental learning processes, respectively. In Experiment 1, we trained rats on a dual-solution T-maze spatial discrimination and conducted a probe trial from the opposite start arm, before which the value of the food outcome was lowered by the off-baseline specific satiety procedure. In Experiment 2, we addressed the same question using rats with hippocampal lesions, thereby forcing the animals to rely predominantly on the response strategy to acquire the original discrimination (Packard & McGaugh, 1996). | EXPERIMENT 1 In Experiment 1, hungry rats were trained on a dual-solution T-maze discrimination for a food reinforcer. After reaching the learning criterion and immediately before being tested in a probe trial from the opposite start arm, the rats were prefed with either the reinforcer pellets, to induce a sensory-specific satiety, or the maintenance diet, to preserve the outcome value while equating the general deprivation level. The rats were given two probe trials on separate days, one under each of the two devaluation conditions, with a retraining session conducted in between. The order of devalued and non-devalued probe trials was counterbalanced across animals. | Subjects The subjects were 32 experimentally naïve male Lister Hooded rats purchased from Charles River, UK. They were about 5 months old at the start of the experiment. They weighed on average 476.3 g (SD = 38.7), and were food-deprived to 85% of their free-feeding body weight. | Apparatus Training and testing took place in an eight-arm radial maze, which consisted of an octagonal central platform (34-cm diameter) and eight equally spaced radial arms (87 × 10 cm; Figure 1). The floors of the central platform and the arms were made of wood painted white, while the walls of the arms were made from clear acrylic panels (24-cm high).
At the end of each arm was a circular food well (2 cm in diameter and 0.5 cm deep). At the base of each arm was a transparent Perspex guillotine door (12 cm high) that controlled access to each arm. Each door was operated manually with a string attached to a pulley system. Only three arms, forming a T-maze, were open and accessible at any given time. Access to each of the remaining arms was blocked by the guillotine door. The entire maze was on a stand (63 cm high) that could be revolved. The maze was installed approximately at the center of a rectangular room (255 × 330 × 260 cm). Illumination was provided by two banks of fluorescent strip lights (0.5 m long, luminance 1022 lux) positioned over the center of the maze. There were various types of visual stimuli around the experimental room, such as posters on the walls, a door, and a small table and a stool close to one wall. Forty-five-mg chocolate-flavored sucrose pellets (Sandown Scientific, England) were used as reinforcers. Prefeeding occurred in eight identical consumption cages installed in a rack in the holding room. Each cage contained a ceramic ramekin (8 cm diameter and 4 cm deep) that was filled with the chocolate pellets or the maintenance diet, depending on the devaluation condition.

[Figure 1 caption: (Left panel) The rats were trained on a dual-solution T-maze task in which the start arm and the correct arm were fixed throughout the training. (Middle panel) Immediately preceding the probe trial, rats were given 1-hr free access to either the reinforcer pellets, for the devalued condition, or the maintenance diet for the non-devalued condition. (Right panel) The probe trial was conducted in extinction, and the animals were released from the novel arm opposite to the start arm that had been used during training. Each animal received two probe trials, on separate days, one after prefeeding of the reinforcer (devalued probe trial) and the other after prefeeding of the maintenance diet (non-devalued probe trial). The assignment of the start arm from the four possible arms (N, E, S, and W), the correct arm (left or right), and the order of probe trials (non-devalued and devalued) were fully counterbalanced across animals.]

[Figure 2 caption: The mean percentage of correct choices (left-hand panel) and the mean latency to reach a goal location (right-hand panel) across training sessions in Experiment 1. Note that from Session 4 onwards each session contained a progressively smaller number of animals as more animals reached the criterion. The numbers on the plots on Sessions 4, 5, and 6 indicate the number of animals that were run on the session.]

| Habituation On the first habituation session, pairs of animals from the same home cage were placed together at the far end of the start arm, and allowed to explore the maze for 10 min. During the first session, a total of 12 pellets were scattered across the entire maze. Three out of eight doors were open to allow access to the start arm and the two choice arms. On the second and third habituation sessions, each animal was run individually for 5 min. Five pellets were placed in each of the two choice arms; three in a food well and two in the alley. If the rat failed to collect all 10 pellets in the third session, additional sessions were run. The start arm was chosen from four arms (N, E, S, and W) for different animals in a counterbalanced way. For each animal, the start arm was consistent throughout the experiment.
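Because the design crosses three factors (start arm, correct side, probe-trial order), a short sketch may help make the counterbalancing explicit. The code below is purely illustrative and not the authors' own; it simply shows that the 4 start arms × 2 correct sides × 2 probe orders yield 16 cells, so 32 rats allow exactly 2 rats per cell.

    # Illustrative sketch of the fully counterbalanced assignment (not from the paper).
    from itertools import product

    start_arms = ["N", "E", "S", "W"]
    correct_sides = ["left", "right"]
    probe_orders = ["non-devalued first", "devalued first"]

    cells = list(product(start_arms, correct_sides, probe_orders))
    assert len(cells) == 16            # 4 x 2 x 2 combinations
    assignment = {rat: cells[rat % len(cells)] for rat in range(32)}  # 2 rats per cell
    for rat in range(4):               # show the first few assignments
        arm, side, order = assignment[rat]
        print(f"rat {rat}: start={arm}, correct={side}, probe order={order}")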
| Training Each daily session consisted of eight trials for the first two sessions, and nine trials including one omission trial, inserted at a random position in the trial series, except for the first and the last trials, from Session 3 onward. The omission trial was included in an attempt to make animals' responding more resistant to extinction as we scheduled multiple probe trials. On each training trial, the rat was placed at the end of the start arm, facing the center of the maze, and allowed to run down the alley and make a choice. The choice was deemed to have occurred once all four legs of the rat were inside the choice arm. If the choice was correct, the rats were able to retrieve two 45-mg chocolate-flavored pellets baited in a food well at the end of the arm. The rat was removed from the correct arm 10 sec after finding the reinforcer. If the choice was incorrect, the animals were allowed to stay in the incorrect arm for up to 10 sec before removal, but no track back beyond the choice point was allowed (non-correction procedure). If the rat attempted a track-back beyond the choice point to the central octagonal arena, the experimenter picked up the rat and returned it to the holding cage, and recorded an error. No separate correction trial was included. The assignment of the correct arm (left or right) was consistent for each animal throughout the experiment and counterbalanced across animals, orthogonal to the counterbalance of start arm. After an ITI of 60 s, the same animal was run on the next trial until it completed all trials in the session. During the ITI, the experimenter wiped clean the start arm and the two goal arms first with 70% ethanol and then wiped them dry with clean paper towel, and re-baited the correct arm with two chocolate pellets. | Devaluation by prefeeding and probe trial In order to enable within-subject comparison, the outcome devaluation was achieved using a prefeeding procedure. Each animal was given two probe trials, one after prefeeding of the chocolate pellets that had been used as a reinforcer during training (devalued probe trial), and the other after prefeeding of the maintenance diet (non-devalued probe trial). The two probe trials were conducted on separate days, intervened by a retraining session which was run in the same manner as in the original training. During the prefeeding sessions, each animal was individually placed in a consumption cage and allowed 1-hr free access to one of the food types. Immediately after this prefeeding period, animals were moved to the testing room and run on a probe trial. In the probe trial, the start arm was opposite to that used during training, and no reinforcer was available at either goal location. If the rat chose an arm which led to the same goal location as that which had been rewarded during training, then the choice was deemed to be based on the "place strategy." If the rat made the same turning response as that which had been reinforced during training, thereby leading to the location opposite to that which had been rewarded during training, then the choice was deemed to depend upon the "response strategy." | Learning criterion A learning criterion was set such that the rat was required to make fewer than four errors across two sessions of training, as well as making the correct response on the first trial on both of the final two sessions. 
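Because the criterion combines an error count across two sessions with a first-trial requirement, a literal coding of the rule may be useful. The exact bookkeeping used by the authors is not reported, so the function below is only one plausible reading of the criterion as stated.

    # A minimal sketch of the stated learning criterion (one plausible reading, not the authors' code).
    # Each session is summarized as (number_of_errors, first_trial_correct).
    def reached_criterion(sessions):
        if len(sessions) < 2:
            return False
        (err1, first1), (err2, first2) = sessions[-2], sessions[-1]
        return (err1 + err2) < 4 and first1 and first2

    history = [(5, False), (3, True), (2, True), (1, True)]
    print(reached_criterion(history))  # True: 3 errors over the last two sessions, both first trials correct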
Regardless of the performance, all animals completed at least three sessions of training (25 trials; 2 × eight trials and 1 × nine trials) before being tested in the probe trials. Those animals that failed to reach the criterion within six sessions (52 trials) were removed from the experiment. | Acquisition Seven rats that failed to reach the criterion within six sessions of training and one rat that was incorrectly assigned the wrong arm in one training session were removed from subsequent phases of the experiment. Figure 2 shows the acquisition of the T-maze discrimination from the remaining 24 animals. Note that the data points from Session 4 onward include progressively fewer animals due to attainment of learning criterion. The mean number of sessions required to reach the criterion was 4.42, and the mean number of trials to reach the criterion was 37.75.

[Figure caption: The number of rats that displayed the place strategy (gray bars) and the response strategy (black bars) following the prefeeding of the maintenance diet (non-devalued probe trial; left) and following the prefeeding of the instrumental reinforcers (devalued probe trial; right).]

The mean latency to reach the goal location on the last session in which each animal reached the criterion was 6.33 s (SD = 4.26). On the retraining session, which intervened between the two probe sessions, the mean latency was 7.15 s (SD = 3.76), which was not statistically different from the last training session for each animal (paired t test, t = 0.996, p > .1). The mean percent correct choice on the retraining session was 85.65, which again was not different from the last training session, 89.35 (paired t test, t = 1.556, p > .1). The latency to reach the goal location in the non-devalued and devalued trials was 9.8 s (SD = 6.22) and 11.7 s (SD = 9.43), respectively. The difference was statistically not significant (paired t test, t = 0.796, df = 23, p > .1). Thus, when the expected outcome was devalued, the rats showed a marked preference for the use of the response strategy, whereas the animals were indifferent to either strategy when the outcome was not devalued. | Probe trial As the probe trial was repeated twice for each subject (one non-devalued trial and the other devalued trial), we investigated whether the repetition of the probe affected the pattern of strategy expression. The results showed that whether the devalued test was conducted first or second did not affect the pattern of strategy expression in the devalued test; of the 12 animals that experienced the devalued trial first, 9 showed the response strategy and 3 showed the place strategy, and the pattern was identical to the other 12 animals for which the devalued test was conducted second (9 response performers and 3 place performers). Similarly, the result of the non-devalued probe trial was not affected by the order of the test; of the 12 animals that experienced the non-devalued trial first, 6 showed the place strategy and the other 6 showed the response strategy. Of the other 12 animals (non-devalued test second), 8 displayed the place strategy and 4 displayed the response strategy. The fact that the animals did not show a significant reliance on the place strategy in the non-devalued trial may appear inconsistent with some previous studies which, without involving a devaluation procedure, showed a significant preference for the place strategy early in training (e.g., Packard & McGaugh, 1996).
However, as noted in the Introduction, whether animals typically rely on the place or response strategy is affected by various factors, among which is the distinctiveness of the extramaze cues (for a review, see Restle, 1957). Therefore, it is difficult to make an a priori assumption about the dominant strategy in a given set of experimental variables. For example, Yin and Knowlton (2004) did not observe a predominance of the place strategy when they tested their rats after just 28 trials of training. It seems likely that in the current experiment, the relatively inconspicuous nature of the extramaze cues, the central illumination that lit the maze evenly from above, and the use of a non-correction procedure made the task more difficult to solve on the basis of a place strategy, and therefore the animals took longer to acquire the task by recruiting the response strategy to a greater extent than in some previous studies. The argument is also supported by the fact that seven out of 31 rats (22.6%) never reached the learning criterion within six sessions or 52 trials. The most important finding in the current experiment is that the mild, non-significant preference for the place strategy in the nondevalued probe trial was completely reversed if the expected outcome was devalued. There are two possible explanations for such a pattern of results. First, it may be possible that the rats simply tried to avoid the devalued outcome. This account implies that the animals' behavior during the devalued probe trial was controlled by the knowledge about the place of the outcome (i.e., place strategy) and the current value of the outcome (i.e., goal-directed process). That is, the animals did not rely on a response strategy for any part of their behavior in the probe trials. This is unlikely, however, given that the same animals did not show an equally strong preference for the place strategy in the nondevalued trial. Moreover, we should expect the latency to be longer for such goal-directed avoidance of the devalued outcome (Sage & Knowlton, 2000), because there is no need for animals to make a choice if not motivated (i.e., paralleling the lower response rates for goaldirected lever pressing after devaluation; e.g., Adams & Dickinson, 1981), and yet we did not observe a difference in the choice latency. The second explanation is that the differential expressions of place and response strategies in the non-devalued and devalued trials reflected different goal-sensitivities of the two strategies. If it is assumed that the place strategy is inherently sensitive and the response strategy is insensitive to the current value of the expected outcome, then in the probe trial after devaluation the choice should be biased to the one controlled by the goal-insensitive response strategy, to the extent that the response strategy had been acquired. | EX PE R IM E N T 2 The results of Experiment 1 suggested that spatial navigation controlled by the response strategy is insensitive to the current value of the expected outcome, whereas the navigation based on the place strategy comprises a representation of the expected value of the outcome. A prediction that naturally follows is that a spatial behavior governed solely by the response strategy in the first place should show a reduced sensitivity to the outcome devaluation. We conducted Experiment 2 to confirm the prediction. 
One way to test that question would be to use a so-called 'singlesolution' maze task, such as a response-only-relevant maze task, in which animals are released from different start arms across trials and the food is consistently placed in an arm that bears a consistent angular relation to the start arm across trials (e.g., Tolman, Ritchie, & Kalish, 1946;Chang & Gold, 2003;Gibson & Shettleworth, 2005). A specific problem expected to arise from the use of such a task in the current context is that the response task usually takes longer to acquire as compared to the dual-solution task or the place-only-relevant task, presumably because of the animals' predisposition to initially rely on the place strategy, which interferes with acquisition of the response task (Tolman et al., 1946;Chang & Gold, 2003). As the amount of training is a critical variable which controls the transition from actions to habits (Adams, 1982;Dickinson, Balleine, Watt, Gonzalez, & Boakes, 1995;Killcross & Coutureau, 2003), we could end up confounding the effects of the strategy required for the solution of a task, and the amount of training, on the sensitivities to the outcome value. Consequently, we adopted an alternative strategy to test the question, by making selective lesions to the hippocampus (HPC) in rats. It is now widely accepted that a functioning HPC is required for the acquisition and expression of the place strategy. For instance, rats with HPC inactivation or lesions rely less on the place strategy and more on alternative strategies during a conflict probe trial (McDonald & White, 1994;Packard & McGaugh, 1996). Hippocampus-lesioned animals also show impaired acquisition of a place-only-relevant version of a plusmaze task (Chang & Gold, 2003;Compton, 2004), a place-relevant component of a dual-solution water maze task (Pearce, Roberts, & Good, 1998;Kosaki, Poulter, Austen, & McGregor, 2015), and a passive place learning in the water maze, which precludes the involvement of a response component (Kosaki, Lin, Horne, Pearce, & Gilroy, 2014). On the other hand, Corbit and Balleine (2000), and Corbit, Ostlund, and Balleine (2002), demonstrated that HPC lesions did not impair rats' sensitivity to the expected value of the outcome. Therefore, it is theoretically possible that a spatial behavior governed by the HPCindependent response strategy, after HPC lesions, remains sensitive to the expected value of the outcome. On the other hand, if the devaluation treatment did not affect the choice pattern of hippocampallesioned rats during the opposite-start probe trial, then it would offer further support for the conclusion from Experiment 1 that the response strategy is intrinsically an outcome-insensitive, habitual form of instrumental behavior. | Subject The subjects were 22 male Lister Hooded rats, about 5 months old at the start of the current experiment. They weighed on average 476.5 g (SD 5 48.21) before surgery. Following the surgery, they were given 2 weeks of recovery before participating in an unrelated spatial learning experiment in a water maze. Approximately three weeks after the completion of the water maze experiment, the animals were subjected to a food-deprivation schedule, under which their body weight was maintained at 85% of their baseline body weight throughout the experimental period. The animals were naïve to the experimental room, the apparatus, food reinforcement and all other aspects of the current experiment. 
| Surgery During the surgery, the rats were anaesthetised with a mixture of isoflurane (1%-5%) and oxygen and placed in a stereotaxic frame (David Kopf Instruments). The incisor bar was set at −3.3 mm. The scalp was incised at the midline to expose the skull. A dental drill was used to remove the skull over the target sites. A 2-µl Hamilton syringe was used to infuse 63 mM ibotenic acid (Tocris Bioscience, Bristol, UK) dissolved in buffered saline bilaterally into the target region. The infusion was made with an infusion pump at the rate of 0.03 µl/min, and each infusion was followed by a 2-min diffusion time before the syringe was removed. For the HPC lesions, the coordinates for injections and the volume of each injection followed those described by Jarrard (1989); briefly, the lesion of the whole hippocampus was produced with a total of 28 infusions of ibotenic acid bilaterally. For sham lesions, the skull was removed, the dura was exposed and pierced through with the syringe needle at three points per side, but the syringe was not lowered down into the brain. After the infusions of toxin or the sham procedure were complete, the wound was sutured and the rats were allowed to recover in a warm chamber until conscious. A 10-ml mixture of glucose and saline was injected subcutaneously after surgery to aid recovery. Buprenorphine (0.012 mg/kg) was injected subcutaneously before and after the surgery for pain relief. | Apparatus The testing took place in an eight-arm radial maze that was similarly constructed to that used for Experiment 1 and installed in a different room of similar size. Each arm measured 10 cm wide and 70 cm long. Each part of the maze, including the floor, was made of clear acrylic panels, except for the octagonal central platform (10 cm per side) that was made of wood and painted gray. In between the transparent floor panel and a base panel beneath it, uniform gray paper was inserted so that the color was matched between the floors of the arms and the central octagonal platform. At the end of each arm was a small circular hole (4-cm diameter), into which a metal cup (5-cm diameter) could be inserted with its lip hanging on around the perimeter of the hole. The center of the food cup was placed 4.5 cm from the end of each arm. Access to each arm could be blocked by a frosted acrylic panel vertically inserted at the base of each arm, where the arm met the central platform. The entire maze sat centrally on a rotating round table (180-cm diameter, 30-cm high from the floor of the room). The maze floor was raised by 12 cm from the surface of the round base. The maze apparatus was installed in a rectangular room (340 × 300 × 245 cm high), equidistant from the two long walls, but closer to one of the short walls with a 50-cm gap between the edge of the round base and the near short wall. The maze was lit unevenly with two desk lamps placed in the two corners at the opposite ends of the short wall that was closer to the maze. Each lamp gave illumination towards the corner, not to the maze. Extramaze cues were provided by the two lamps, different posters and cards of different shapes and sizes pasted on the wall, an air purifier installed on the floor close to one wall, which also constantly emitted light through indicator LEDs, a small desk with a TV monitor on it, and a dark blue curtain that was hung from the ceiling to the floor outside the edge of the round table, covering about an eighth of the perimeter of the table.
These arrangements were taken in order to increase the control by the extramaze cues and hence by the place strategy, as a pilot experiment using this maze revealed that normal rats did not show a place strategy even with a minimum amount of training when the maze was installed centrally in a larger experimental room and lit brightly and evenly by non-directional ceiling lights. | Procedure The procedure was identical to that described for Experiment 1, except for the following detail. The start arm assigned for each animal, consistent throughout training, was chosen from three arms, each separated by 90° (S, E, W; number of animals started from S; HPC: n = 4, Sham: | Probe trial The result of primary interest is from the probe trials conducted under the non-devalued and devalued conditions. The number of animals that displayed the place or the response strategy in each condition is depicted in Figure 6a. The devaluation did not affect the distribution of choices in the sham animals, McNemar test, p > .1. Thus the data from the non-devalued and devalued probe trials were combined and subjected to a binomial test, which revealed that Sham rats exhibited an overall preference for the use of the place strategy (binomial test, p < .05). The HPC-lesioned rats, by contrast, showed a substantial preference for the choice that conformed to the response strategy, regardless of whether the reinforcer was devalued or not. A McNemar test revealed no difference in the distribution of place and response choices in the non-devalued and devalued trials for HPC rats, p > .1. The data from the two probe trials combined were subjected to a binomial test, which revealed that there were significantly more HPC animals displaying the response strategy than the place strategy, p < .01. Although latency to make a choice was not sensitive to devaluation in Experiment 1, it appeared to be so in the current experiment, as latency data during the probe trials demonstrate (Figure 6b).

[Figure 6 caption: (a) The number of rats that displayed the place strategy (gray bars) and the response strategy (black bars) in probe trials following the prefeeding of the maintenance diet (Non-Deval) and following the prefeeding of the instrumental reinforcers (Deval) in Experiment 2. (b) The mean latencies to reach one of the goal locations during the non-devalued and devalued probe trials in Experiment 2. Error bars represent ± SEM.]

The fact that the devaluation in Sham animals affected the response latency, rather than the choice of arms, in Experiment 2 may appear inconsistent with the result of Experiment 1. The results, however, can be consistently explained by the difference in the extents to which animals learned the response strategy in the two experiments. In Experiment 1, animals developed the response strategy to some extent so that around the half of them showed the response choice in the non-devalued test. In Experiment 2, however, Sham animals showed a much weaker response strategy and instead demonstrated predominantly the place strategy. It is important to note that whether or not the place strategy predominates at the early training phase depends upon the availability and salience of extramaze cues. In fact, we intended to enhance the normal animals' reliance on the extramaze cues in Experiment 2 based on a pilot experiment conducted in the same room (see Section 6).
When the environment in Experiment 2 favored the preferential use of the place strategy as revealed in the choice (unlike in Experiment 1), the devaluation increased the latency to make a choice but did not change the choice. Such a pattern of results in Experiment 1 and Experiment 2 collectively indicates that the devaluation confers the response strategy behavioral control (i.e., animals express the response strategy) only if animals had developed the response strategy to some extent, as was the case in Experiment 1. Otherwise, as in Experiment 2, the animals relying only on the place strategy would perform a goal-directed run with a longer latency, which is analogous to the lower response rates after devaluation in freeoperant experiments. In addition, the increased latency after devaluation is consistent with a previous study that used a win-shift working memory version of the radial maze task (Sage & Knowlton, 2000; this study will be discussed later). Therefore, the results from the unoperated rats in the two experiments collectively confirm our original prediction that that the response strategy is insensitive to devaluation (habit) and the place strategy is sensitive (action). Another, and theoretically more important, finding of Experiment 2 was from the HPC rats. First, the HPC-lesioned rats showed a substantial reliance on the response strategy during the non-devalued probe trial, as expected from previous findings in a variety of conflict tests in spatial navigation paradigms (Devan & White, 1999;Kosaki et al., 2015;Lee, Duman, & Pittenger, 2008;Mitchell & Hall, 1988;Packard & McGaugh, 1996). Crucially, the HPC animals' reliance on the response strategy was not affected by the outcome devaluation. The latter result thus offers a further support to the conclusion derived from Experiment 1; the response-strategy-based navigation formally conforms to the habitual form of instrumental behavior. | D I SCUSSION The aim of the present study was to seek a formal correspondence between the two types of dual-process accounts of learned behavior; place versus response strategies in spatial navigation and responseoutcome (action) versus stimulus-response (habit) processes in instrumental learning. In Experiment 1, the mild preference for the use of place strategy in a T-maze under the non-devalued condition was completely reversed when the expected outcome was devalued, such that the majority of animals displayed the response strategy. The pattern of results indicated that the probe trial performance was concurrently mediated by two processes, the place strategy and the response strategy, and the former was sensitive while the latter was insensitive to the expected value of the outcome. The result therefore suggests that the response strategy meets the formal definition of instrumental habit, whereas the place strategy is a form of goal-directed action that is sensitive to the expected value of the outcome. Experiment 2 was conducted in an attempt to answer the same question with one group of animals being forced to rely on the response strategy by means of HPC lesions. On the non-devalued probe trials, the HPC-lesioned rats showed predominantly the response strategy as expected, and, crucially, the reliance on the response strategy was unchanged after the outcome devaluation. Again, the result confirms that the response strategy in the spatial domain is a habitual form of instrumental behavior, which is insensitive to the expected goal value. 
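For readers who wish to check the choice analyses against the counts reported in the Results of Experiment 1, a minimal scipy sketch is given below. The counts are those stated above (9 + 9 = 18 of 24 response-strategy choices after devaluation; 6 + 4 = 10 of 24 without devaluation); the original analysis may have used different procedures.

    # Minimal sketch (assumes scipy >= 1.7) of two-sided binomial tests on the
    # Experiment 1 probe-trial counts reported above; illustrative only.
    from scipy.stats import binomtest

    # Devalued probe: 18 of 24 rats made the response-strategy choice.
    print(binomtest(18, n=24, p=0.5).pvalue)   # ~0.02, a reliable bias toward "response"

    # Non-devalued probe: 10 of 24 rats made the response-strategy choice.
    print(binomtest(10, n=24, p=0.5).pvalue)   # ~0.54, no reliable preference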
As previous studies have shown that the HPC is not involved in the representation of the expected value of an instrumental outcome (Corbit & Balleine, 2000;Corbit et al., 2002), the result of Experiment 2 is unlikely to reflect an impairment in the encoding of outcome value per se, which would otherwise explain the result independently of the intrinsic associative property of the response strategy. Instead, the result supports the conclusion that the response strategy-based spatial navigation is inherently insensitive to the goal value, thus meeting the criterion of S-R habit. The conclusion is also congruent with previous findings that both the response strategy in the spatial domain and the instrumental S-R habit in non-spatial domain depend upon the integrity of the same neural substrate, the dorsolateral striatum (e.g., Packard & McGaugh, 1996;Yin, Knowlton, & Balleine, 2004). Previously, there have been only a few studies that assessed the issue of outcome representations in spatial navigation with modern behavioral techniques to devalue the reinforcer. Sage and Knowlton (2000) used an eight-arm radial maze to train rats either on a "winstay" S-R version of the task, in which four randomly selected correct arms were signaled by lights on each trial, or on a "win-shift" working memory version of the task, in which animals needed to remember the four unsignaled baited arms on the first run and to choose the other four arms on the second run. Post-training devaluation of the food reinforcer did not affect the choice accuracy in either task, but rather increased the choice latency in the win-shift (working memory) task as well as in the early phase of the win-stay (reference memory) task. The results indicate that the place-based working memory performance is a form of goal-directed behavior, and so is the early-stage performance in the cue-approaching win-stay task, but after extended training cueapproaching becomes autonomous of goal representations. Thus, the results by Sage & Knowlton are consistent with our current results, despite the difference in nature of the tasks in that our task required animals to make an egocentric left-right choice rather than an approach to a randomly-lit arm. Both tasks do not tax allocentric spatial processing, and commonly depend upon the integrity of the dorsal striatum (McDonald & White, 1993;Packard & McGaugh, 1992, 1996. Given the importance of clarifying the nature of potential interactions between multiple memory systems (e.g., Gibson & Shettleworth, 2005;Poldrack & Packard, 2003), the current results are also interesting as they show that training with more conspicuous place cues somehow make the response strategy underdeveloped. This is a conclusion which is difficult to reach with a standard conflict test without devaluation, as the weak expression of the response strategy in such a test is most likely due to the predominance of the place strategy. Another study of direct relevance to the current issue of outcome representation in spatial learning was conducted by De Leonibus et al. (2011), in which mice were trained in a dual-solution T-maze task just as in the current study. They showed that the choice in a probe trial after overtraining was not affected by outcome devaluation when the mice started from the original start arm, but the devaluation did affect the strategy expression if the animals were tested using the opposite start arm. While the former result is consistent with the current conclusion, the latter appears contradictory. 
Although a direct comparison of results obtained from different species must be taken with some caution, there appears to be room for explanation for the discrepancy. In the study by De Leonibus et al., the devaluation was achieved by means of conditioned taste aversion across six days and, critically, in parallel with the normal maze training. Such an arrangement effectively allowed the animals to experience the devalued outcome in the goal location a number of times. This raises the possibility that the reduced expression of the response strategy was due to a direct punishment of the S-R habit, rather than reflecting a goal-directedness of the response strategy. This reduced S-R habit might have been still sufficient to support the animals expressing the response strategy when the stimulus was exactly the same as before on the probe trial with the original start arm, but may have been weakened so as to suffer from generalization decrement when the stimulus to which a response should be made was completely changed except for the intramaze cue, on the probe trial with the opposite start arm. Importantly, when a different group of mice was subjected to even more extensive training on the same task (61 days with 15 trials per day), the response strategy was immune to devaluation regardless of the start arm. The result is thus consistent with the current conclusion, and our results complement their result by showing that place strategy-based spatial navigation in the dual-solution T-maze is sensitive to goal devaluation. It may require a comment as to the different effects of devaluation for sham animals in Experiments 1 and 2. In Experiment 1, the devaluation resulted in a change in preference from the use of the place strategy to the response strategy while not affecting choice latency. In Experiment 2, the devaluation increased the choice latency while not affecting the choice of arms, which overall indicated the predominance of the place strategy. As already noted in the discussion for Experiment 2, an important difference between Experiment 1 and 2 was that the rats under the non-devalued condition did not reveal a preference for one strategy over the other in Experiment 1, whereas the shamlesioned rats showed a preference for the use of place strategy in Experiment 2. Thus, the devaluation brought about the expression of the response strategy only if the animals had acquired the response strategy to some extent in the first place. The different degrees to which the animals acquired the response strategy in the two experiments, then, are most likely to reflect the different availability of extramaze cues; in fact, we intended to enhance the normal animals' reliance on the extramaze cues in Experiment 2 by ways described in the Method section. In other words, with the response strategy underdeveloped in Experiment 2, the animals had no other option but to rely on the place strategy. With the place strategy, navigation came under the control of the currently lowered value of the outcome and therefore the animals performed the run with longer latencies. This is analogous to the low response rate in instrumental lever presses after outcome devaluation (e.g., Adams & Dickinson, 1981;Kosaki & Dickinson, 2010; for a review see Dickinson, 1985). 
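The argument in the preceding paragraph — that devaluation shifts the expressed strategy only when a response habit has already been acquired, and otherwise merely slows a goal-directed run — can be illustrated with a deliberately simple toy arbitration scheme. All numbers below are hypothetical, and this is not the authors' computational model; it only restates the verbal logic.

    # Toy arbitration between a goal-directed place controller (scaled by the current
    # outcome value) and a habitual response controller (insensitive to it). Hypothetical strengths.
    def expressed_strategy(place_strength, habit_strength, outcome_value):
        goal_directed_drive = place_strength * outcome_value
        return "place" if goal_directed_drive > habit_strength else "response"

    # Experiment 1-like case: the habit was acquired to a comparable degree.
    print(expressed_strategy(1.0, 0.9, outcome_value=1.0))  # place
    print(expressed_strategy(1.0, 0.9, outcome_value=0.2))  # response (choice reverses after devaluation)

    # Experiment 2 sham-like case: habit underdeveloped, so devaluation cannot reverse
    # the choice; it would instead appear as a slower, less motivated run.
    print(expressed_strategy(1.0, 0.1, outcome_value=0.2))  # still place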
The current conclusion about the instrumental status of place- and response-strategies could offer some account for the apparently inconsistent findings that the above-mentioned associative phenomena, such as overshadowing and blocking, are not always found, especially when one of the competing cues is provided by the geometry of an arena (e.g., Cheng, 1986;Doeller & Burgess, 2008;Hayward, McGregor, Good, & Pearce, 2003;Kelly, Spetch, & Heth, 1998;McGregor, Horne, Esber, & Pearce, 2009). We have offered at least two, not mutually exclusive, explanations for such an inconsistency (Austen et al., 2013). The present results could serve to further elucidate when spatial learning follows associative rules and when not. For instance, associative learning principles might apply to spatial navigation only when the training regime makes animals invariably experience a single stimulus-response-outcome contingency over many repeated trials, as in typical reference-memory type spatial learning tasks, a condition that favors the development of S-R habits (Holland, 2004; Kosaki & Dickinson, 2010). This may not apply when animals continue to be exposed to multiple stimulus-response-outcome contingencies concurrently (as in trial-unique working memory tasks, with two goal locations, or when testing animals at a pre-asymptotic level of discrimination where animals' choice still retains some variability), a condition that has been shown to keep the behavior under goal-directed control even after extended training (Kosaki & Dickinson, 2010). With regard to this issue, it is interesting to note that previous studies have shown that stress can facilitate the use of S-R strategies (Schwabe, Dalm, Schachinger, & Oitzl, 2008;Schwabe, Hoffken, Tegenthoff, & Wolf, 2011;Schwabe & Wolf, 2009;Kim, Lee, Han, & Packard, 2001), and that many of the demonstrations of associative phenomena in the spatial domain, especially where conflicting results exist, were achieved in the water maze, a stressful environment for animals with negative reinforcement as an underlying learning process. Related to this point is a finding by Asem and Holland (2013), who showed that in a submerged plus-maze in water the rats relied more on the response strategy early in training before switching to the place strategy. Thus, the identification of the precise behavioral process underlying a given spatial behavior is important not only in its own right but also because it merits an attempt to formally relate spatial navigation to non-spatial learning and behavior, which in turn is critical in fully understanding the neural basis of goal-directed navigation. In conclusion, we have demonstrated in two experiments that the two spatial learning strategies, the response strategy and the place strategy, are differentially sensitive to the current value of the expected outcome, and thus each conforms to one of the definitions of S-R habit and goal-directed action, respectively.
2018-04-26T23:26:32.652Z
2018-04-23T00:00:00.000
{ "year": 2018, "sha1": "4fe8152f9a0a313fbb8b514637b5b5207fd675b2", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/hipo.22847", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4fe8152f9a0a313fbb8b514637b5b5207fd675b2", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
114962
pes2o/s2orc
v3-fos-license
Detection of Leukotrienes in the Serum of Asthmatic and Psoriatic Patients Uotila P, Punnonen K, Tammivaara R, Jansén CT. Detection of leukotrienes in the serum of asthmatic and psoriatic patients. Acta Derm Venereol (Stockh) 1986; 66: 381-385. Purified serum samples from asthmatic and psoriatic patients and healthy controls were analysed by high-pressure liquid chromatography (HPLC) and the amounts of leukotrienes were measured from the corresponding HPLC fractions by specific radioimmunoassays. In the serum of healthy controls the amounts of leukotrienes B4, C4 and D4 were very small or negligible. A rather great amount of leukotriene B4 was, however, detected in the serum of many asthmatic and psoriatic patients. The amount of leukotriene B4 in the serum of asthmatic patients was 120±20 pmol/ml (n=11, mean±SEM) and in that of psoriatic patients 100±10 pmol/ml (n=10). The amounts of leukotrienes C4 and D4 were rather small in the serum of most patients. The amount of leukotriene C4 was, however, very high (250 pmol/ml) in the serum of one psoriatic patient. A significant amount of leukotriene D4 was also detected in the serum of this patient. The present study indicated that leukotrienes are formed during blood clotting in the leukocytes of asthmatic and psoriatic patients and that the rate of formation is so high that leukotrienes may have a role in these diseases. Leukotrienes are a new group of biologically active compounds which obviously have a role in immediate hypersensitivity reactions and inflammation (1). They are formed by the 5-lipoxygenase enzyme from arachidonic acid and two other polyunsaturated fatty acids. When arachidonic acid is released from cellular phospholipids, it can be metabolized by the cyclo-oxygenase enzyme to prostaglandins and thromboxanes or by different lipoxygenases to hydroxy acids and leukotrienes (1, 2). Arachidonic acid is metabolized by the 5-lipoxygenase enzyme first to unstable 5-HPETE, which can be converted into leukotriene A4 (LTA4) or 5-hydroxyeicosatetraenoic acid. LTA4 can be hydrated into LTB4, a dihydroxy acid, or conjugated with glutathione to LTC4, which can be metabolized to LTD4 and LTE4 (1). LTB4 causes adhesion and chemotactic movement of leukocytes and stimulates the aggregation of neutrophils (1). LTC4, LTD4 and LTE4 are potent bronchoconstrictors and they increase vascular permeability in postcapillary venules (1, 3). As the 5-lipoxygenase enzyme is usually not active, leukotrienes are not generally formed. Leukotrienes were first identified from stimulated leukocytes (1, 4, 5). Now they have been detected also in allergen-stimulated lung tissue of asthmatics (6), and after allergen challenge in the tear fluid (7) and nasal washes (8, 9) of allergic subjects, as well as in the skin of psoriatic patients (10, 11). As some leukotrienes are rapidly metabolized in the circulation (cf. 12), it has not been possible to detect leukotrienes in the plasma of asthmatic patients (13). Arachidonic acid is released from phospholipids by the action of phospholipases (14, 15). Because the phospholipases are activated in platelets during blood clotting, arachidonic acid is released from platelet phospholipids and is subsequently metabolized in platelets to thromboxane A2 and 12-HETE during clotting (16). We suggested that a similar activation of phospholipases could occur also in leukocytes during clotting and that this could result in the formation of leukotrienes if the 5-lipoxygenase enzyme is active.
Therefore we have analysed the amount of leukotrienes in the serum of asthmatic and psoriatic patients and healthy controls. PATIENTS AND METHODS Eleven asthmatic patients (aged 20 to 59 years; 2 female and 9 male), ten psoriatic patients (aged 14 to 71 years; 3 female and 7 male) and seven healthy subjects (aged 21 to 56 years; 1 female and 6 male) were involved in this study. Seven asthmatic patients had an intrinsic type and four an extrinsic type of asthma. The blood samples were taken when the patients were on conventional long-term treatment for asthma or psoriasis. Two asthmatic patients had an inhaled steroid. Blood samples were taken from the antecubital vein into glass tubes. The blood samples from control and asthmatic subjects were allowed to clot for one hour at 37°C in a shaking water bath and those from psoriatic patients at room temperature for two hours. When serum was separated it was stored at −20°C until analysed. The serum samples (1 ml) were mixed with ethanol (2 ml) and prostaglandin B2 ("internal standard", 5 µl, 20 µg/ml), and were then centrifuged to remove precipitated material. Then 2 ml of the supernatant was mixed with 9 ml of phosphate buffer (pH 8.0) and this sample was purified using a SEP-PAK C18 column (Waters, Milford, MA, USA) (17). The SEP-PAK column was activated with ethanol (15 ml) and washed with water (15 ml) before the sample in phosphate buffer (containing ethanol) was applied twice to the column using reduced pressure. After washing the column with 15 ml of ethanol : water (1:9) the sample was eluted from the column with 10 ml of methanol. This methanol fraction was evaporated to dryness under nitrogen, redissolved in 50 µl of methanol : water (65:35) and then analysed with a high-pressure liquid chromatograph (Shimadzu LC-4A, Kyoto, Japan) using a reverse-phase column (Bondapak C18, 2 mm × 30 cm, Waters) and a UV detector (Shimadzu SPD-2AS). A representative HPLC chromatogram is shown in Fig. 1. Two eluting solvents were used: solvent A, methanol : water (65:35, pH 6.5), and solvent B, methanol : water (68:32, pH 5.6). The flow rate was 0.3 ml/min. After the injection the compounds were first eluted with solvent A for 30 min. Then the solvent was changed during one minute by a linear gradient to solvent B, which was used until the end of the analysis (90 min). The effluent was monitored by a spectrophotometer first at 280 nm (to detect conjugated trienes) and after 42 min at 235 nm (to detect monohydroxy acids). The elution of reference compounds (LTB4, LTC4, LTD4, 5-HETE, PGB2) was checked every day. When the serum samples were analysed, the effluent fractions corresponding to the reference compounds as well as some intermediate fractions were collected for radioimmunoassays. Appropriate amounts of the HPLC effluents were taken for the radioimmunoassays. The effluent fractions were first neutralized and then evaporated to remove methanol. The LTB4 radioimmunoassay was performed as described earlier (18). The antiserum for LTB4 and unlabelled LTB4 were from Wellcome (RP93, Dartford, England) and 3H-LTB4 was from New England Nuclear (Boston, MA, USA). The cross-reactivities of the antiserum for LTB4 were, according to the manufacturer (18): 12-HETE 2%, LTC4 0.03%, LTD4 0.03%. The detection limit for LTB4 was 0.03 pmol in the radioimmunoassay, corresponding to 3 pmol/ml in the serum samples. The radioimmunoassay kit for LTC4 was purchased from New England Nuclear (NEK-030).
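As a side note on the LTB4 detection limit just quoted (0.03 pmol per assay tube, corresponding to 3 pmol/ml serum), the two figures together imply how much serum-equivalent ended up in each assay tube after extraction, HPLC separation and aliquoting. The value below is inferred from the stated numbers rather than reported explicitly.

    # Inferred serum-equivalent per assay tube (illustrative arithmetic, not stated in the paper).
    ltb4_assay_limit_pmol = 0.03       # detection limit per assay tube
    ltb4_serum_limit_pmol_per_ml = 3   # corresponding serum detection limit
    serum_equivalent_ml = ltb4_assay_limit_pmol / ltb4_serum_limit_pmol_per_ml
    print(serum_equivalent_ml)         # 0.01 ml, i.e. ~1/100 of the original 1-ml serum sample per tube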
The cross-reactions of the antiserum for LTC4 were, according to the manufacturer: LTB4 0.006%, LTD4 55%. The detection limit for LTC4 was 0.1 pmol in the radioimmunoassay, corresponding to 5 pmol/ml in the serum samples. As the cross-reactivity of the LTC4 antiserum was 55% with LTD4, it could be used to measure also the amount of LTD4 from the corresponding HPLC fraction. Unlabelled LTD4 was a generous gift from Dr J. Rokach (Merck Frosst Canada Inc., Canada). RESULTS When purified serum samples from control, asthmatic and psoriatic subjects were analysed by HPLC, the main peak in the chromatogram was due to 12-HETE (UV absorbance at 235 nm). A peak corresponding to the LTB4 standard was usually detected in the chromatogram (UV absorbance at 280 nm) of asthmatic and psoriatic patients. As this peak may be due to LTB4 or its isomers, the amount of LTB4 was measured from this HPLC fraction by a specific radioimmunoassay. Immunoreactive LTB4 was detected in these HPLC fractions of all serum samples, but not in other fractions. Many asthmatic and psoriatic patients had a rather high concentration of LTB4, and both the HPLC chromatogram and the radioimmunoassay indicated that the amount of LTB4 was very small or negligible in the serum of healthy controls (Fig. 2). The serum concentration of immunoreactive LTB4 was 120±20 pmol/ml (mean±SEM, n=11) for asthmatic patients, 100±10 pmol/ml (n=10) for psoriatic patients and 20±3 pmol/ml (n=7) for healthy controls. Using HPLC analysis and the radioimmunoassay, only small amounts (5-10 pmol/ml) of LTC4 were detected in the serum of all control and asthmatic subjects and most psoriatic patients. The amount of LTC4 was under the detection limit (5 pmol/ml) in the serum of two out of ten control and three out of eleven asthmatic subjects and in none of the psoriatic patients. In the HPLC chromatogram of one psoriatic patient a clear LTC4 peak was detected. The radioimmunoassay from this HPLC effluent fraction indicated that the concentration of LTC4 in the serum of this patient was as much as 250 pmol/ml. The total leukocyte count of this patient was 6.0 × 10^6 per ml of blood, and the proportional amount of eosinophils was 3%. The highest level of immunoreactive LTD4 was also detected in the serum of this patient (60 pmol/ml). The amount of LTD4 in the serum of other psoriatic patients was very low (5-10 pmol/ml) or below the detection limit (5 pmol/ml). In the serum of asthmatic and control subjects the amount of immunoreactive LTD4 was usually below the detection limit. A distinct 5-HETE peak was detected in the HPLC chromatogram (235 nm) of six psoriatic and three asthmatic patients. The 5-HETE peak was usually clearly smaller than that of 12-HETE but greater than the LTB4 peak. No 5-HETE peak was detected in the HPLC chromatograms of healthy controls. DISCUSSION The present study suggests that leukotrienes are formed during blood clotting in asthmatic and psoriatic patients. Plasma levels of leukotrienes B4 and C4 have in our pilot studies (unpublished) been consistently below the detection limits. Therefore, it is apparent that the leukotrienes detected in the serum samples have been formed during blood clotting. As the 5-lipoxygenase enzyme is present in leukocytes but not in platelets (19), leukotrienes were obviously formed in leukocytes. Thus, the present study suggests that the activation
of phospholipases occurs also in leukocytes during blood clotting, and that arachidonic acid is released and metabolized to leukotrienes in the leukocytes of asthmatic and psoriatic patients. As no significant amounts of leukotrienes or 5-HETE were detected in the serum of healthy controls, it is obvious that the 5-lipoxygenase enzyme was not active in the leukocytes of control persons. LTB4 was the main leukotriene detected in the present study. LTB4 has been reported to be formed in stimulated polymorphonuclear leukocytes and LTC4 predominantly in stimulated eosinophils, especially in the presence of glutathione (20, 21). In the present study significant amounts of leukotrienes C4 and D4 were detected in the serum of only one psoriatic patient, whose leukocyte count and proportional amount of eosinophils were, however, normal. As a great amount of 12-HETE is formed in platelets during blood clotting (16) and 12-HETE has a cross-reactivity of 2% in the LTB4 radioimmunoassay (18), LTB4 was separated from 12-HETE by HPLC before the radioimmunoassay. As the LTB4 peak was rather distinct in the HPLC chromatogram of many asthmatic and psoriatic patients, and the amount of immunoreactive LTB4 was also great in these HPLC fractions, but not in other fractions, the results can be considered to be reliable. Leukotrienes have earlier been detected in stimulated lung tissue of asthmatics (6) and in psoriatic skin (10, 11). Our study indicates that a part of the leukotrienes detected in the lung tissue and psoriatic skin could have been formed in leukocytes present in the tissues. Irrespective of their cellular origin, the detection of significant amounts of leukotrienes in the tissues and body fluids of asthmatics and psoriatics points to a possible significance of these inflammatory mediators in these diseases.
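To make the cross-reactivity argument above concrete: even a 2% cross-reactivity matters when the cross-reacting compound is far more abundant than the analyte, which is why LTB4 was chromatographically separated from 12-HETE before the assay. The 12-HETE level used below is hypothetical, chosen only to illustrate the order of magnitude of the potential artefact.

    # Illustrative arithmetic only; the 12-HETE concentration is hypothetical.
    cross_reactivity = 0.02             # 2%, the manufacturer's figure quoted above
    serum_12_hete_pmol_per_ml = 5000.0  # hypothetical post-clotting 12-HETE level
    apparent_ltb4 = cross_reactivity * serum_12_hete_pmol_per_ml
    print(apparent_ltb4)                # 100 pmol/ml of spurious "LTB4" signal without HPLC separation,
                                        # i.e. of the same order as the patient values (100-120 pmol/ml)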
2018-04-03T03:06:38.823Z
1986-09-01T00:00:00.000
{ "year": 1986, "sha1": "32244557f2a864344d9f17d0362d939f87a7ff64", "oa_license": "CCBYNC", "oa_url": "https://medicaljournalssweden.se/actadv/article/download/6711/10060", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "816a4d32738d16256ad474a994a7a56ece78d9c2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237625446
pes2o/s2orc
v3-fos-license
Ischaemic brain changes associated with catheter-based diagnostic cerebral angiography: a diffusion-weighted imaging study Purpose This study aims to evaluate the incidence of clinically silent embolic cerebral infarctions and associated risk factors following diagnostic cerebral angiography with diffusion-weighted imaging (DWI). Material and methods A total of 71 cerebral digital subtraction angiograms (42 male, 29 female, average age: 56.0 ± 15.0) obtained using nonionic contrast material were prospectively evaluated. To assess embolic events, before and after (1-3 days) angiography, DWI was performed. The risk factors for embolic ischemic brain changes such as the patient’s age and sex, atherosclerotic vessel wall disease, type of indication for catheter angiography, the number and size of the catheters, anatomic variants, selective/nonselective catheterization, contrast media volume, and time of procedure were determined. Fisher’s exact tests and Student t-tests were used for the statistical analyses of outcomes. Results Thirteen new silent ischemic lesions were identified in 7 out of 71 patients who underwent diagnostic cerebral angiography. Embolic cerebral lesions were often 6-10mm in diameter. According to the findings in this study, there was a strong correlation between diffusion abnormality and patient age, which was considered risk factors (p < 0.05). However, there were no significant correlations between other risk factors and the lesions’ appearance (p > 0.05). Conclusions In elderly patients, the angiographic procedures should be performed meticulously and DWI in all patients obtained routinely, even if the regular neurological examination shows they are healthy. In this way, the presence of microemboli and clinical results can be evaluated. Introduction Diagnostic angiography is an imaging modality accepted as the gold standard for vascular system imaging. Advancements in catheter technology and the digital subtraction angiography (DSA) technique have decreased the invasiveness and enabled the procedure to become minimally invasive [1]. In several studies, however, it has been reported that asymptomatic embolism also may oc-cur with cerebral angiography. Patient age, sex, vascular structure and anatomical variants, and many risk factors related to the procedure were reported for such embolic events [2]. Assessment of peri-procedural embolic complications was based on neurological examination findings. It is possible to detect embolic events presenting such neurological symptoms. With diffusion-weighted imaging (DWI), which is highly sensitive to acute ischaemic cerebral changes, there is a chance to define not only macro embolic events showing clinical symptoms but also micro embolic events. These occur more commonly and are mostly subclinical, and thus more comprehensive information about actual sizes of the embolic events can be gained due to the procedure [2,3]. The objective of this study was to investigate the incidence of acute emerging cerebral ischaemic lesions on DWI following diagnostic cerebral angiography and possible correlations between the risk factors defined related to the patient, vascular system, procedure, and DWI positivity. Study population This study was prospectively conducted between November 2009 and September 2010. Patients who required DSA to diagnose cerebrovascular diseases such as aneurysm, arteriovenous fistula, and ischaemic cerebral attack were included in the study. 
The following patient groups were excluded from the study: patients requiring emergency DSA but without time for DWI, patients requiring any intracranial endovascular interventions (embolization, stenting, balloon angioplasty), patients with claustrophobia, patients under 18 years of age, and patients who did not consent. Seventy-one patients who underwent diagnostic angiography were included in the study. Forty-two of the patients were male and 29 were female, aged between 22 and 83 years, with a mean age of 56.0 ± 15.0 years. All the patients included in the study underwent only DWI pre-operation and post-operation within the same day. The study was approved by the Ethics Board of Erciyes University (Number: 2009/126). Study protocol DWI examinations were performed with a 1.5 T magnetic resonance scanner (Gyroscan Intera Release, Philips, Netherlands). Diffusion-weighted images were acquired with a multisection, single-shot, spin-echo EPI sequence. Diffusion gradients were applied using 2 different 'b' values (b = 0 s/mm² and b = 1000 s/mm²) in the x, y, and z directions. Imaging parameters were TE = 145 ms, FoV = 260 × 260 mm, 128 × 96 matrix, section thickness = 5 mm, and intersection gap = 1 mm. The scanner automatically created ADC maps of the isotropic images. Angiography procedures were carried out with a cerebral DSA device (Philips Integris 5000, Philips Medical Systems, Netherlands). Anticoagulant therapy was discontinued a day before the procedure in heparinized patients because the patients' DSA procedures were planned under elective conditions. A preparation including 25,000 IU/5 ml heparin sodium (Nevparin®, Mustafa Nevzat İlaç San., Istanbul, Turkey) was added at 1000 IU/1000 mL to the catheter cleaning solution of all patients. A nonionic contrast agent (Ultravist 370/100, Schering, Berlin, Germany) was used in correct doses (0.2-6 ml/kg) according to the patient's age and weight, and the amount of the given agent was recorded for all patients. The contrast agent was injected with an automatic injection device at a rate of 15-30 ml/s in arcus aortography, 6-12 ml/s in nonselective CCA catheterization, and 5-10 ml/s in selective ICA catheterization, filming until the late phases. Pulse number, arterial blood pressure, and carbon dioxide and oxygen saturation were monitored in all patients during the procedure. Arcus and bovine-type arcus were defined for all patients. A standard 0.035-inch guidewire (Radifocus, 0.035 inch diameter, Terumo, Tokyo, Japan) and a 5F Simmons 2 (5-F, 100 cm length, SIM2 Super Torque; Cordis Corporation, Miami Lakes, USA) (SIM2) or 5F Headhunter (Cordis Neurovascular, Miami Lakes, USA) (H1) catheter were used according to the arcus type. Arterial images of the cerebral carotid system were obtained in routine projections by selective introduction into the ICA, or by non-selective catheterization of the CCA in the patients with ICA stenosis. The procedure duration started with femoral artery access and ended with removal of the last used catheter, and this duration was recorded for all patients. A single academic neuroradiologist performed all the angiography procedures. Data collection Two separate radiologists randomly assessed DWI outcomes. Hyperintensities on DWI were classified as < 6 mm, 6-10 mm, and > 10 mm for acutely emerged ischaemic lesions corresponding to "low ADC values" on ADC maps. For lesion numbers, the presence of 1 lesion was considered single, while 2 or more were considered multiple.
The lesions were described as cortical, subcortical, and white matter lesions regarding their location. Moreover, the vascular distribution of the lesions was defined. The patients were assessed in 2 groups, DWI positive and DWI negative, and the risk parameters for embolic ischaemic lesion development were defined as 1) patient characteristics (age, sex); 2) additional problems related to the vascular system complicating catheter manipulation and the procedure, such as the presence of atherosclerotic plaque and severe anatomic conditions, including tortuosity, angled ICA outlet, and arcus types (bovine type arcus, type 2 and type 3 arcus); and 3) procedure-related parameters (specifications of the catheter used, selective or nonselective ICA catheterization, amount of the contrast agent used, and procedure duration). Statistical analysis SPSS for Windows 15.0 software was used for the statistical analysis. The patients with and without lesions on diffusion imaging were compared in terms of these parameters to investigate the possible risk factors for lesion development. Figure 1. A 63-year-old female examined with an H1 catheter. Angiographic images (A, B) showing short-segment atherosclerosis of the type-2 arcus aorta and the right internal carotid artery bulb. Diffusion-weighted imaging before (C, D) and after (E, F) the procedure shows two subcortical acute ischaemic lesions (arrows) in the right middle cerebral artery territory. Figure 2. A 50-year-old female examined with an H1 catheter. Angiographic images (A, B) showing a type-1 arcus aorta and an aneurysm in the anterior cerebral artery localization. Diffusion-weighted imaging before (C, D) and after (E, F) the procedure shows an acute ischaemic lesion (arrow) in the right middle cerebral artery territory. Categorical variables such as sex, atherosclerosis, anatomical variants, selectivity, and catheter type were expressed as number (n) and percentage (%), while continuous variables such as age, procedure duration, and the amount of contrast agent were stated as mean ± standard deviation (mean ± SD). Comparison of categorical variables was carried out using chi-square independence tests (Pearson's χ², continuity correction, and Fisher's exact test), whereas Student's t-test was used to compare the continuous variables. P-values < 0.05 were considered statistically significant in all analyses. Results When examining the patients' clinical indications, the most common cause of diagnostic angiography was atherosclerotic vascular disease in 22 patients, followed by cerebrovascular disease (CVD) in 19 patients and vascular lesions in 16 patients. Diffusion-weighted imaging findings Following the procedure, an acutely emerging ischaemic lesion that corresponded to a "low ADC value" on ADC maps and was hyperintense on DWI was observed in 9.9% (n = 7) of the patients. All the lesions found on DWI represented silent ischaemia without clinical symptoms. A total of 13 lesions were identified in the 7 diffusion-positive patients, and the median lesion load per diffusion-positive patient was 2 (range 1-4). Of the lesions, 69.2% (n = 9) had cortical and subcortical localization, 69.2% (n = 9) were in the middle cerebral artery (MCA) irrigation area (Figures 1 and 2), 61.5% (n = 8) were single lesions, and 53.9% (n = 7) of the lesions were between 6 and 10 mm in size (Table 1).
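The group comparisons described in the statistical analysis above follow a standard pattern: Fisher's exact (and other chi-square) tests for categorical risk factors and Student's t-test for continuous ones, comparing the DWI-positive and DWI-negative groups. The following minimal sketch with scipy shows this pattern; all counts and samples are illustrative placeholders, not the study data.

```python
from scipy import stats
import numpy as np

# Categorical risk factor (e.g. presence of atherosclerosis) vs. DWI positivity:
# 2 x 2 contingency table [[positive & factor, positive & no factor],
#                          [negative & factor, negative & no factor]].
table = [[6, 1], [30, 34]]  # illustrative counts only
odds_ratio, p_cat = stats.fisher_exact(table, alternative="two-sided")

# Continuous risk factor (e.g. age in years) vs. DWI positivity, Student's t-test.
rng = np.random.default_rng(0)
age_dwi_positive = rng.normal(66, 12, size=7)    # illustrative samples
age_dwi_negative = rng.normal(54, 15, size=64)
t_stat, p_cont = stats.ttest_ind(age_dwi_positive, age_dwi_negative, equal_var=True)

print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_cat:.3f}")
print(f"Student's t-test: t = {t_stat:.2f}, p = {p_cont:.3f}")
```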
When considering the distribution of the patients with newly developed lesions regarding the clinical indications, new lesions developed in 3 patients who underwent diagnostic angiography due to atherosclerotic vascular disease, in 2 patients from the cerebrovascular disease group, in 1 patient undergoing angiography due to transient ischaemic attack (TIA), and in 1 patient undergoing angiography due to a pre-diagnosis of carotid-cavernous fistula (CCF) (Table 2). Four (57.1%) of the patients with newly developed lesions according to DWI findings were males and 3 (42.9%) were females (p > 0.05). The mean age of the patients with newly developed lesions according to DWI findings was 66.4 ± 11.8 years, while the mean age of patients without newly emerging lesions according to DWI findings was 53.7 ± 14.9 years (p < 0.05) (Table 3). According to DWI findings, the mean amount of contrast agent was 139.2 ± 55.9 ml in patients with newly developed lesions, while it was 148.9 ± 36.2 ml in patients without newly emerging lesions. The mean procedure duration was 20.9 ± 6.2 min for patients with newly developed lesions according to DWI findings, while it was 22.7 ± 7.8 min for patients without newly developed lesions (p > 0.05) (Table 3). Discussion Diagnostic angiography is an imaging modality accepted as the gold standard for vascular system imaging. The main indications for diagnostic angiography are to obtain more detailed information about cerebrovascular deficiency conditions due to an obstruction in the carotid or vertebrobasilar system or as a result of stenosis, and about vascular lesions and tumoural formations such as aneurysm, arteriovenous fistula (AVF), and arteriovenous malformation (AVM) [4]. The most typical indication is atherosclerotic occlusive cerebrovascular disease [1]. Seventy-one patients were included in this study, and diagnostic angiography was performed in 22 of these due to atherosclerotic vascular disease. This rate is consistent with the literature when assessed in terms of the incidence of indications. Technological advances have provided a significant reduction in DSA-related complications: the examinations can be carried out in a shorter time, and less contrast agent is used. Moreover, softer catheters with various tip shapes have been produced owing to advances in catheterization technology, and the reduction in complications has also been attributed to this. The risk factors reported to lead to angiographic complications are advanced age, presence of systemic disease, frequent catheter alterations, contrast agent quantity, long examination time, presence of cerebrovascular disease, frequent transient ischaemic attacks, advanced carotid artery disease, and examination performed by people without sufficient experience [5]. All kinds of intraarterial procedures carry a risk of cerebral embolism [6,7]. Microscopic air embolisms do not present neurological symptoms, and thromboembolic changes cannot be detected with clinical examinations [6]. In studies comparing stroke incidence, in which thromboembolic complications are detected on the basis of neurological examinations alone, it is impossible to demonstrate the true incidence and extent of thromboembolism.
Today, there is a chance to obtain comprehensive information about the real size of the embolism by defining not only the macroembolic events presenting clinical symptoms but also clinically silent microembolic events with DWI, which is highly sensitive in revealing cerebral ischaemia in the early period. Embolic ischaemic events that are not revealed clinically can be detected in the first minutes with DWI [8]. On reviewing the literature, silent ischaemia incidence is between 5 and 23% on DWI following diagnostic angiography [9][10][11]. The study with the most extensive series on this subject was conducted by Krings et al. [12], and silent ischaemia was defined in 11.1% of the total of 107 patients. In this study, the asymptomatic ischaemic lesion incidence with DWI was 9.9%, consistent with the literature. The number of lesions was 1-2 in 92.31% of the positive diffusion patients, mostly 6-10 mm size (53.85%), and mainly located in MCA irrigation (69.23%) and subcortical (69.23%) areas. In their studies, Büsing et al. [13] and Bendszus et al. [14] found similar results for newly revealed DWI lesions. They stated the lesions were mostly single and had cortical location in the MCA territory, and the events were compliant with the embolic pattern. Similarly, in this study, the topographic distribution of the lesions supports the view that the embolic pattern is the primary mechanism. In this study, a statistically significant difference was found between age and embolism development (p < 0.05). Although there was no statistically significant difference between the presence of atherosclerosis and new ischaemic lesion development (p > 0.05), this difference was numerically remarkable. Diffusion anomalies occurring after angiography were stated in the literature to depend on the patient factor and not to the procedure itself [3,12]. This conclusion suggests that the age of the patient and the presence of atherosclerotic lesions may cause an ischaemic lesion. Atherosclerotic plaques, especially severe atherosclerosis of the aorta, are an essential risk indicator for thromboembolism [14,15]. In the study by Willinsky et al. [16], the neurological complication rate was demonstrated to increase prominently with age 55 years and over. They found the complication rate to be 1.8% in cases aged 55 years and over, while this rate was 0.09% in cases under 55 years old. In their series with 3636 cases, Fifi et al. [17] reported that the complication rate for diagnostic angiographies performed by an experienced neuroradiologist in a modern academic centre was 0.3% and the most critical risk factor was advanced age (65 years and over). The mean age in this study was 55.97 ± 15.04 years. The mean age of the cases with newly emerged lesions, according to DWI findings, was 66.43 ± 11.76 years. When the presence or absence of development of new symptoms was compared with the ages of the cases, a statistically significant difference was defined between them (p < 0.05). Furthermore, while there was no atherosclerosis in 1 case (14.3%) from the newly developed lesions according to the DWI findings group, 6 patients (85.7%) in this group had atherosclerosis. This finding shows that age and atherosclerosis, which are essential factors in numerous literature reports, were also the most critical factors in our study in causing the diffusion anomalies. Many more diffusion anomalies in advanced age with atherosclerotic vascular structure suggest the development of procedural atheroembolism. 
Therefore, it was emphasized that catheterization techniques should be applied rigorously to prevent silent or symptomatic cerebral embolic ischaemia occurring following catheterization, particularly in atherosclerotic cases [18,19]. In silent embolic ischaemia occurring following angiography, scratching the vessel wall and removing the atherosclerotic plaques were demonstrated. In interventional radiology, the catheters used for diagnostic purposes are preferred to have 5F diameter. In addition, there are several studies indicating that the complication rate decreases with decreasing catheter diameter [20]. However, using catheters with a diameter of less than 4F was reported to decrease the vascular opacification and prolong the procedure duration [21]. Thus, we chose to use a 5F catheter in this study. Two types of catheters (5F H1 and 5F SIM2) were used in this study, and these were not altered. There is no specific study on the effectiveness of both the catheters. In the virtual simulation course, which is a new endovascular training method, Riddick et al. [22] compared complex (SIM2) and simple type catheters in difficult conditions for ischaemic stroke due to carotid catheterization. For the static and dynamic performance of the catheters, similarly to our study, they found that the procedure duration was longer with simple catheters than with SIM2, while there were numerically more vascular complications. However, the course participants found the performance of the SIM2 catheter to be better. There was a statistically significant difference between the ages of the cases regarding catheter use (p > 0.05). The SIM2 catheter was used in all type-3 arcus with difficult catheterization, and H1 was not used in these cases. We believe the type-3 arcus seen more in elderly and atherosclerotic cases numerically increased the ischaemic lesion development following catheterization with SIM2. Earnest et al. reported a clearly increased complication rate with the coexistence of diffuse atherosclerosis, tortuous vascular structure, and cerebrovascular disease along with catheter alterations, rather than the type and size of the catheter [23]. In this study, we did not evaluate the complications resulting from the catheter selection according to arcus type and catheter alterations. Selective catheterization of ICA is known to increase the complication rate and to cause dissection [24]. In this study, new lesions were detected during 4 selective procedures and 3 nonselective procedures. However, there was no statistically significant difference between the groups with and without lesions emerging (p > 0.05), although a higher number of lesions developed in the selective procedures. Arcus aorta types and outlet variations in the primary vascular structures may affect the interventional procedures. Type-2 and type-3 aortic arcus, and bovine type arcus may cause difficulties during catheterization. Moreover, complicated anatomies, like tortuosity, have an increasing effect on thromboembolic events because they increase the catheter manipulation [5]. There was complicated anatomy in 6 of our 7 cases who developed lesions (p > 0.05) in our study. Especially the effects of the type-2 and type-3 arcus are seen more in the elderly and atherosclerotic population, and the results should be considered with caution based on the small number of our cases. Contrast agent-related complications are systemic complications that depend on the toxicity of the contrast agent. 
Complication rates were significantly reduced with the use of nonionic contrast agents that have lower osmolality [5]. Bendszus et al. [10] reported the amount of contrast agent in the groups without and with lesions as 110 and 149 ml, respectively, and this difference was statistically significant. In this study, the mean amount of contrast agent was 139.2 ± 55.9 ml in the group with newly developed lesions and 148.9 ± 36.2 ml in the group without; overall, more contrast agent was used than in the study by Bendszus et al. This is because cerebrovascular lesions such as aneurysm, AVM, and CCF were included in this study, and more images were acquired from various angles for better visualization of these lesions. The contrast agent amount in the group in which lesions developed did not differ significantly from that in the group without lesions (p > 0.05). The duration for which the catheter remains in the arterial structure is essential and directly associated with angiography-related complications [15]. In their study of 1517 patients in which the mean procedure time was 46 min, Earnest et al. [23] found a statistical correlation between complications and increases in age, contrast agent amount, and procedure duration. The mean procedure duration was 22.49 ± 7.58 min in our study (p > 0.05). This can be attributed to the selection of catheters according to arcus type, the high performance of the SIM2 catheter, and the technical competence of the operator. Moreover, the duration has been stated to be a criterion reflecting the competence of the neuroradiologist who performs the procedure [25]. Another multicentre study included 5000 cerebral angiographies, and the complication rate was 3.9% in training hospitals and 0.9% in other hospitals. Complication rates were 1.8% for radiologists and 0.07% for neuroradiologists, and the difference between the 2 groups who performed the operations was statistically significant [26]. Because our procedures were performed by a single experienced neuroradiologist, a comparison between operators could not be made. There were no obvious systemic or local complications in this study, which suggests that performance of all procedures by a single specialist neuroradiologist contributed to this result. It has been reported that, if there is no alternative to diagnostic angiography in high-risk situations such as atherosclerosis and vasculitis, angiography performed by an experienced neuroradiologist decreases the silent embolism rate [12]. There is a general tendency to refer to lesions identified on DWI but not producing symptoms on routine neurological examination as silent ischaemia. However, in a study by Vermeer et al. [27], silent ischaemia was reported to cause cognitive impairment in the general population. Therefore, these lesions should not be referred to as silent ischaemia unless neurophysiological and neuropsychiatric tests have been performed. Conclusions The diagnostic angiography procedure should be performed rigorously in patients with cerebral embolism risk factors such as advanced age, atherosclerosis, and anatomical variants. Following angiography, DWI should be included in the routine examinations, and the presence of silent ischaemia should be investigated. Applying neurophysiological and neuropsychiatric tests to cases in which silent ischaemia develops will provide more information about the importance of these lesions in the future.
2021-09-26T05:18:45.857Z
2021-08-12T00:00:00.000
{ "year": 2021, "sha1": "73daf8a2e42ae50a6b6226de099ba2646b0af786", "oa_license": "CCBYNCND", "oa_url": "https://www.termedia.pl/Journal/-126/pdf-45053-10?filename=Ischaemic%20brain%20changes.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "73daf8a2e42ae50a6b6226de099ba2646b0af786", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253283515
pes2o/s2orc
v3-fos-license
Model Signatures for the Integration of Simulation Models into System Models : Model-based systems engineering (MBSE) is an auspicious approach to the virtual development of cyber-physical systems. The behavior of the system’s elements is thus represented by specialized simulation models that are integrated into the descriptive SysML-based system model. Although many simulation models have been developed in research for the common system elements for various purposes and fidelities, their integration remains a major challenge: the parameter interfaces of the simulation models must be coupled with each other and with the parameters of the system elements in such a way that they are correctly parameterized. So far, this coupling can only be carried out by model experts in a time-consuming and error-prone manner. Therefore, in this paper, we first propose a concept that structures the system element parameters for targeted use in validation and design cases. Second, we propose a model signature for simulation models that differentiates its parameters by input, internal, output, and model parameters and specifies them with spatial and temporal dimensions as well as admissible ranges, among others. Based on the two contributions, domain models can be validly and automatable coupled and used for the virtual development of system elements in model-based systems engineering. Introduction Model-based systems engineering (MBSE) is a promising approach for the accelerated, virtual development of cyber physical systems (CPS) with reusable models [1]. Thereby, the system to be developed (e.g., a connected vehicle) is represented by a system model consisting of a hierarchical structure of system elements for the individual subsystems (e.g., top-down: mechatronic drive train, electric engine, bearing, cylindrical roller bearing, lubricated rolling contact) in accordance to [2]. Low-level system elements are also referred to as solution elements in [3,4]. Regardless the terminology, however, the modular and generic entities describe fundamental interrelationships, which are often reused in a wide variety of higher-level systems. Therefore, these low level system elements can be identified as enablers for accelerated virtual development of CPS [3,5]. Testing requirements and ensuring the functionality of the overall system concerning the mechanical domain, requires all system elements to be validated with regard to their behavior. To validate the behavior of a system element, different purposes (e.g., lifetime, friction losses, pressures, temperatures, lubrication conditions) have to be considered [3]. Since the behavior usually cannot be validated with a descriptive, SysML-based system model alone [6,7], a system element needs appropriate domain models to account for all the different purposes. A domain model is, by its nature, a representation of the original system that has been shortened or abstracted in terms of scale, detail and/or functionality [8] and typically a specialized simulation model that runs in an external software tool and needs to be integrated into the SysML-based system model [6,[9][10][11][12][13]. One of the open challenges with such an integration is to provide a well-defined, modular and consistent interface between domain models and their system element counterparts, for instance to assure a correct parameter exchange. The term parameter is used in this paper analogously to [3] as a generic term for all quantitative attributes of the interfaces of domain models. 
In the current state of research, numerous domain models for a specific purpose are published, which differ in terms of their used parameters, fidelity, model assumptions, computational effort, and other criteria. This leads to the challenge of selecting the most appropriate domain models, which in return requires that domain models must be identifiable with respect to relevant criteria in order to be able to clearly assign them to system elements [14]. To this end, suitable and efficient methods must be explored that enable the large variety of existing models to be sustainably utilized for model-based product development. Another challenge is that many purposes of a system element are interdependent and coupled. As an example, pressure, temperature, and manufacturing accuracy result in certain lubrication conditions, which in turn affect service life. Neglecting these coupling effects via an isolated consideration of these purposes is not sufficient in the development process und would lead to significant errors in the system model [15,16]. Instead, it must be possible to couple the respective domain models in order to virtually represent and validate their feedback mechanisms. In doing so, it is crucial that the coupling of the domain models is consistent and correct for the specific validation question. Hence, this calls for novel research approaches to evaluate the compatibility of individual domain models in a systematic and potentially automated way. Methods for the latter are actively being worked on, yet proposed solutions are often restricted to specific simulation tools or data exchange standards. While a decomposition of the system model into system elements is often used and common practice, low-level domain models are individually proposed and investigated in the literature, yet not analyzed in the context of a system model. Domain models are hence not systematically structured by an appropriate taxonomy that would be accessible from system models: Up to now, it has not been possible to standardize the descriptions of the numerous and often only gradually different domain models in terms of content and form. As a result, system engineers have not yet been able to identify domain models unambiguously and efficiently, assign them to associated system elements, and reuse them. Instead, system elements are usually modeled individually, often with considerable effort and expert knowledge, although they and their domain models are actually fundamentally known [14]. Since there are no standardized concepts or methods either for the parameter interfaces of the domain models or for the parameters of the system element, a great deal of effort is involved, especially in linking these parameters. Furthermore, due to the poor documentation described above, it is not possible to clearly and efficiently evaluate if two domain models can be coupled in an automated way. As a result, system engineers either need a lot of time to reliably assess the compatibility or domain models are partially coupled incorrectly. Depending on when these errors become apparent, this leads to change costs in the development phase or, in the worst case, high recall costs in the use phase of the system. 
Imagining an ideal development process, each system element and domain model would have an unambiguous and machine-readable interface description of its relevant model parameters, so that within the required time, cost and quality for the development of CPS: • A clear and automated assignment of domain models to system elements is possible; • The compatibility of domain models can be automatically evaluated; • The most appropriate combination of domain models given a certain requirements-driven goal can be identified. To reach this goal, we propose a parameter concept for system elements (cf. Section 4) as well as an unambiguous and machine-readable model signature for domain models (cf. Section 5). Finally, we discuss to what extent the parameter concept and model signatures help in combination to uniquely identify and correctly link domain models for system elements (cf. Section 6). For exemplary application and explanation of the results, in this paper we use the lubricated rolling contact as a typical machine element in mechanics: Two convex or convex/concave cylinder surfaces touch each other in a lubricated state at a narrow contact area where a thin lubricant film transfers mechanical forces from one surface to the other (Figure 1) [17]. State of the Art Nowadays, and in the future, increasingly, CPS are being developed. CPS are characterized by interacting subsystems of the mechanical, electrical and software domain. The different subsystems and development processes of the domains lead to an immanent complexity in the development of CPS [18,19]. Function-Oriented Model-Based Systems Engineering A promising approach to multidisciplinary CPS development is function-oriented model-based systems engineering whose key element is a cross-domain functional architecture typically modeled with the Systems Modeling Language (SysML) [7] or an advanced profile based on SysML [20]. This functional architecture is derived from the requirements and comprises functional flows as interfaces between the functions [20,21].
Based on this functional architecture, all involved domains develop system elements that realize the individual functions assigned to them. These system elements inherit the functional flows of the functions and can thus be developed modularly within the specific domains. Due to this encapsulation, the system elements have a low complexity and jointly represent the behavior of the superordinate system. As a rule, several function-oriented decomposition steps are necessary to reduce the typical CPS complexity within the system elements to a manageable level. This results in a system architecture consisting of system elements across multiple hierarchy levels. The elementary system elements at the lowest level describe very small and fundamental relationships (Figure 2) [3]. One example of such a function-oriented and model-based development approach is the motego method, which has already been applied in several research projects and is continuously being extended [4,22-24]. Figure 2 shows the system element lubricated rolling contact, which comprises three main constituents: the principle solution, domain models and workflows [3]. The principle solution is an established concept in design methodology to describe solutions based on physical effects and active surfaces with certain geometric and material properties [21,25-27]. The physical effect is modeled as a constraint and typically establishes a mathematical relationship between active surfaces, material properties and functional flows. This means that the equation of the physical effect comprises, e.g., the length l, which is of course also a parameter of the two active surfaces (Figure 2). To avoid redundant or inconsistent parameters, these parameters must be linked.
Figure 2. Extract of two system elements linked via functional flows from a function-oriented system architecture. Even if system elements sometimes describe only a small scope of a technical system, the parametric description of the active surfaces including material properties, the physical effect, the incoming and outgoing functional flows as well as other relevant physical quantities quickly results in a large number of parameters, most of which must be linked together. When domain models are integrated into the system element, the number of parameters (to be linked) increases again significantly. Since there is no simplifying structuring for the parameters occurring in the system element so far, the linkage is complex, effortful and error-prone [3,21]. The domain model section in the system element contains and structures all models relevant for the development of the scope (e.g., lubricated rolling contact). At the top level, a differentiation is made between engineering, production and controlling models, whereby only the engineering domain is considered in this publication, which typically applies models of an analytical and numerical nature calculating the physical behavior of system elements. Here, the models are classified according to their computational purpose, such as the deformation of the active surfaces or the temperature in the lubricated rolling contact [3,4,14]. Workflows are the third area in the system element. Since domain models must be coupled for specific issues in the development process [14,24,28,29], these coupled models are also stored in a reusable manner and differentiated between validation, design and optimization workflows [3].
The joint storage of principle solution, domain models and workflows enables the specification of the system element (principle solution) to be reusable and consistently linked to the behavior description (domain models) and efficiently applicable in the development (workflows) [3]. Integration and Coupling of Simulation Models The system model with the central functional structure and the system elements provides a descriptive representation of the system under development. In order to validate system elements against requirements or to design them with respect to requirements during development, domain models that describe the behavior of the system element need to be integrated and correctly linked to parameters of the other constituents of the system element [30]. Furthermore, typically not only one but several domain models of different purposes and suitable fidelities are necessary to test and design system elements during development. This results in the fact that several domain models must be coupled with each other [3,15,30]. In order for the coupled domain models to perform valid calculations, it is essential that the domain models themselves and their parameter interfaces must be compatible with each other. Research on the design and of mechanical system elements has built up a large number of models over the last decades. Even within a certain scope (e.g., lubricated rolling contact) and purpose (e.g., lubrication), a large number of domain models of different fidelities can be found, resulting from different (empirical) approaches, boundary conditions and simplifications [31,32]. As a result, a high two-or even three-digit number of domain models is typically available for common system elements such as bearings, gears, shaft-hub connections, or fasteners, respectively. If several domain models have to be coupled with each other, e.g., for service life calculations and wear predictions, a simple combinatoric consideration results in a very large number of potential model configurations. The naive number of model combinations can be significantly reduced, when focusing on the model configurations that are physically compatible. To avoid manual efforts and to use the potential of existing domain models, an unambiguous and machine-processable description of the models and their parameter interfaces is necessary. For this reason, several approaches for the interaction of system model and domain models have been developed in the past. A good overview of the basic strategies for data exchange between models in general is provided by [33] and with a focus on the parameter exchange between SysML-based system models and domain models by [9,34,35]. In some approaches, SysML profiles were developed to enable data exchange, e.g., for the model transformation between system models and Modelica-based simulation models [36] or for the automatic generation of analysis models from system models [10]. In this context, [9] states that the developed interfaces are often limited to specific simulation tools and compatibility issues frequently arise due to different versions of exchange standards (e.g., FMI [37,38]). Another approach is to orchestrate the data exchange between domain models and the system model by SysML diagrams [30,39]. Often, the approaches develop a specific interface and do not address the fundamental question of how the parametric interfaces of a domain model must be formalized generally in order to enable the valid coupling of domain models inside system elements. 
Therefore, it makes sense to analyze the parameter and model definitions of data exchange standards such as Functional Mock-up Interface (FMI) [37], which among other things aim to integrate Modelica domain models into SysML-based system models [40]. The FMI standard requires in particular that each functional mock-up unit contains an XML file describing the model. In addition to their name and description, the parameters of the model are characterized by their causality and variability. The causality specifies whether the parameter is an input or output parameter, a parameter that controls the model, or a calculated, independent or local parameter. The variability defines whether a parameter is constant, fixed after the initialization, tunable or discrete. Thereby, the FMI standard allows only certain combinations of the attributes 'causality' and 'variability'. In addition, it is possible to specify start, nominal, minimum and maximum values for the different types of parameters [37]. Since the FMI standard is relatively advanced, the model signature developed in this contribution should ensure its logical compatibility to FMIs. In addition to FMI as a cross-tool standard, there is also research on model classification or signatures for specific tools. [41] aims to improve the quality of Modelica models by adding information on traceability, uncertainty and calibration in a standardized way; [42] proposes a signature for Simulink subsystems as a generalization of the interface including input and output ports as well as data stores. [43] introduces a model identity card capturing classifiers of input and output parameters as well as the expectable quality. Preliminary work on validity and credibility exists in a fundamental nature by [44] and with a focus on software intense embedded systems by [45], who developed a framework to assess and formalize the validity range of simulation models. Many of the approaches mentioned contain classifiers that are very specifically adapted to the needs and possibilities of certain software tools and only partially offer generally valid methods for the lack of logical systematization of domain models in the context of system elements for model-based development described in the introduction. Another important research approach that has been established in software engineering is contract-based design algebra. Here, system components can be combined to form systems on the basis of predefined sets of rules [46], for example in order to automatically generate consistent design variants that meet requirements [47]. A modeling approach for evaluating compatibility between SysML blocks was introduced in [48]. This approach considers the conformance and direction of data types as well as the compatibility of the value ranges of two parameter interfaces but not on domain model level. Research Question In the introduction (cf. Section 1), three challenges were described. First, the parameters occurring in system elements are not classified in such a way that parameter associations cannot be efficiently identified when integrating domain models and workflows. Secondly, it is not possible to assess without expert knowledge and high effort whether an existing simulation model is suitable for the calculation of certain properties of a system element. Third, multiple simulation models can only be coupled manually and with a certain error rate, which can potentially lead to high damages and costs [16]. 
These challenges have not yet been overcome by the current state of research (cf. Section 2). Therefore, the research question addressed in this publication is: How can an unambiguous parameter relationship be established between system elements and domain models for their identification and coupling? Two subordinate questions can be derived from this research question: 1. How can the parameters in the system element be structured for testing and design with domain models? 2. How can model signatures for domain models be defined unambiguously and machine-readable? The following two sections address the two derived questions: In Section 4 a parameter concept for system elements is proposed and in Section 5 a model signature for domain models based on requirements from the development process is elaborated. In Section 6, research findings are discussed, concluded, and an outlook on necessary and possible future research directions are outlined. System Element Parameter Concept As described in Section 2.1, system elements which are typically used in functionoriented model-based development consist of inherited function ports, physical effects, active surfaces with material properties, domain models, and other physical parameters. All these constituents of the system element are formalized with parameters [3,21] resulting in a large number of parameters, which can complicate the integration of domain models with their parameter interfaces. Therefore, we propose the differentiation of the following three types of parameters: Functional flow parameters are all parameters comprised in the functional flows entering and leaving the system element. These parameters are imposed on the system element by the environment or functionally dependent system elements and reflect operating and environmental conditions. Examples include the pressure of a fluid flow and the rotational speed of a mechanical energy flow. Design parameters can be set directly by the engineer, written into the engineering drawing, and imposed on the real product via manufacturing. These parameters may also change over time due to operation (e.g., wear) or the environment (e.g., ambient temperature), but the initial value is set by the engineer and the manufacturing process. Examples might be the diameter of an active surface or the Young's modulus of a material. State variables cannot be set directly by the engineer. These parameters (e.g., tensile stress) adjust themselves depending on the functional flows from the environment and operation (e.g., force) as well as the design parameters (e.g., cross-sectional area) according to the laws of physics. It is the engineer's task to define the design parameters in such a way that the state variables are within certain value ranges in all relevant operational and environmental scenarios experienced by the system element via the functional flows. The proposed differentiation of parameter types helps in the integration and coupling of domain models for validation and design of system elements. The validation of system elements with workflows [30] involves checking whether the behavior of the system element meets the requirements. These requirements can relate to state variables (e.g., a maximum permissible temperature) or to functional flow parameters (e.g., the minimum required torque of a drive system). In both cases the design parameters are already known or at least estimated. 
This means that such domain models have to be selected and coupled with each other, which take known design parameters as input and calculate the state variable or functional flow parameter to be validated as output (Figure 3, orange). In the case of the design of a system element, it is the other way around. One or more design parameters are to be determined such that the state variables are within the ranges of validity and the functional flow parameters are generated as required by the operating case. Therefore, such domain models must be selected and coupled in a way that the desired functional flow parameters and limits of the state variables can be taken as input and the sought design parameters are calculated as output (Figure 3, green). Of course, in addition to the sought design parameter, there are also design parameters that are already fixed or at least should not be calculated in the design workflow under consideration. These subordinate design parameters may also be an input. Figure 3 only shows the flow directions of the main parameters considered in the respective workflow in a simplified way. Thus, the parameters of the system element are meaningfully structured for validation and design. For the appropriate selection and coupling of the domain models, these still lack an unambiguous description of the parameter interfaces, which is proposed in the following section. Model Signature for Domain Models Model signatures are an approach to describe domain models and their interfaces unambiguously and in a machine-processable way, thus enabling the valid selection and combination of domain models within a system element. Since a large number of individually and inconsistently documented domain models is actively being used, our approach to tackle the research question is to consider a collection of well-known domain models for a specific example, and to derive requirements for model signatures based on their content and form (Section 5.1). From these requirements we propose an approach for model signatures (Section 5.2). Domain Model Requirements for the Model Signature An extract of known domain models for the system element 'lubricated rolling contact' is shown in Figure 4. As already mentioned, they can be distinguished by purpose and fidelity [14] whereby the term fidelity is used here in the combined sense of validity and detail of [44]. In our example three domain models of various fidelity can be differentiated for the purpose 'temperature calculation' ranging from the assumption of a constant temperature to a fully spatially resolved, transient temperature evolution. Depending on the required modeling fidelity of (thermo-)elastohydrodynamic lubrication calculations in the lubricated rolling contact, different modeling strategies can be applied as demonstrated in Figure 4.
For instance, in order to model the lubrication film, either the pressure dependence of the viscosity (represented by the Barus equation) or its temperature dependence (represented by the Vogel equation) or the combination of both (represented by Eyring, Barus, and Vogel) can be considered to reach the desired fidelity levels in simulations. Analogously, different approaches for temperature, deformation and pressure calculations can be used [31,32,49]. Please note that the domain models shown are only a small excerpt. Both in the published research literature and in companies, such as a bearing manufacturer, a significantly higher number of models will be found. Based on the extent shown here, an elastohydrodynamic (EHD) calculation (Figure 4, green line) and a thermo-elastohydrodynamic (TEHD) calculation (Figure 4, yellow line) can be identified as meaningful model configurations and performed as calculations. Figure 4. Engineering domain models of the system element "lubricated rolling contact" classified by their purposes and fidelities (in accordance with [13,14]). The assessment of domain model compatibility requires expert knowledge or a formalized and evaluable domain model signature. Only the latter can later be utilized in automated validation tests. Figure 5 shows the parameters which are exchanged between the domain models if a TEHD calculation is executed. The depicted workflow (Figure 5, top left corner) combines the Reynolds equation, half-space theory, the energy equation and fluid models for viscosity. After iteratively solving the equations for the required parameters with given boundary conditions, the film thickness and pressure distribution in the contact area are obtained as the result of the simulation model (Figure 5, top right corner). Apparently, mainly state variables as well as design parameters are exchanged, which constitute input and output parameters of the domain model. Besides these input and output parameters, however, internal parameters are also needed within the individual domain models. These internal parameters only exist inside domain models, where they can be changed in the model's code, and cannot be accessed from outside. This leads to the challenge of possible inconsistencies between invisible instances of the same internal parameter in two different domain models, which is still a common problem in system modeling. This consideration leads to the conclusion that the model signature of a domain model should not only contain input and output parameters, but also internal parameters explicitly.
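As an aside, the different parameter interfaces of the fluid models named above can be made explicit in a few lines of code. The sketch below shows a Barus-type (pressure-dependent) and a Vogel-type (temperature-dependent) viscosity relation and their combination; the coefficient values are illustrative assumptions rather than data from the cited sources, and the Eyring contribution (non-Newtonian shear behaviour) is omitted for brevity. The point is simply that each variant expects different input parameters, which is exactly the information a model signature has to capture.

```python
import math

# Illustrative lubricant coefficients (assumed values, not from the cited sources).
ETA_0 = 0.05                          # reference dynamic viscosity [Pa s]
ALPHA_P = 2.0e-8                      # pressure-viscosity coefficient [1/Pa] (Barus)
A_V, B_V, C_V = 1.0e-4, 900.0, 175.0  # Vogel constants [Pa s], [K], [K]

def eta_barus(p):
    """Pressure dependence only: eta(p) = eta_0 * exp(alpha * p)."""
    return ETA_0 * math.exp(ALPHA_P * p)

def eta_vogel(T):
    """Temperature dependence only: eta(T) = A * exp(B / (T - C)), T in kelvin."""
    return A_V * math.exp(B_V / (T - C_V))

def eta_combined(p, T):
    """Combined pressure and temperature dependence (Barus-type correction on Vogel)."""
    return eta_vogel(T) * math.exp(ALPHA_P * p)

# The three variants expose different input parameters (p, T, or both), which a
# model signature must make explicit before the models can be validly coupled.
print(eta_barus(5.0e8), eta_vogel(353.15), eta_combined(5.0e8, 353.15))
```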
Another challenge is posed by undefined spatial and temporal resolutions of parameters. If a parameter occurs in several domain models, these instances must be linked together (e.g., the dynamic viscosity between both domain models in Figure 5) and match in particular with respect to their spatial and temporal resolutions as well as admissible physical or operational regimes. While, e.g., the spatial dimensions (x, y and z) of the parameters have to match completely, a partial match of the regimes can be sufficient to execute two coupled domain models. As a final point, it can be stated that properties resulting from the model building must also be compatible with each other. For instance, the computation times of coupled models should be harmonized in order to guarantee an efficient execution. While input and output relations can be represented in today's SysML, the admissibility regimes require a linguistic extension of SysML. This is also the case if regime compatibility at higher hierarchical levels is to be tested with the system element parameter concept building on this contribution. Another important aspect for the model signature is the variability of parameters. Depending on the validation and design question, the developer may want to specifically keep individual parameters constant or allow them to change. Therefore, when integrating a domain model, it must be transparent whether the model keeps the individual parameters constant or varies them partially during the calculation. Hence, the model signature for domain models should explicitly contain the variability of the parameters in addition to the classification according to input, internal and output, dimensions, regimes and execution times. Proposal of a Model Signature for Domain Models From the requirements identified based on domain models (cf. Section 5.1), the following proposal of a domain model signature is derived comprising four constituents (Figure 6). Among the input parameters, all parameters are collected which are needed as input for the specific calculation purpose of the domain model. Similarly, the output parameters are also specified, which are the result of the calculation purpose of the particular domain model. In addition to the input and output parameters, the internal parameters are also included as a third constituent; these are characterized by the fact that they cannot be specified or read out externally of the domain model calculation.
Figure 6. Model signature of the domain model 'Eyring, Barus, Vogel' (partly based on data from [49,50]). The domain model signature specifies all input, output and internal parameters concerning their name, dimension, data type, physical quantity and unit, spatial and temporal resolution as well as admissible regimes (Figure 6). Additionally, it is indicated whether the parameter is fixed or tunable inside the model. For example, it is defined that the domain model 'Eyring, Barus, Vogel' needs an input parameter 'pressure' with unit 'Pa', which is resolved in the x and y directions as well as in time. This parameter is fixed since it is not changed or optimized inside this particular domain model. This fluid model is valid for moderate temperatures [50] and low pressures [49]. To fix the admissible regimes in the proposed model signature, temperatures up to 100 °C and pressures of 100 kPa to about 1 GPa are assumed as an example. The parameter specification (Figure 6, right) is a suggested notation that allows an algorithm-based evaluation of parameter compatibilities. For example, the unit Pascal is expressed via the exponents of the power product of the seven standardized SI units [51]. As a last parameter group, the domain model signature also contains the model parameters. These model parameters have no equivalent on the modeled system, but arise from the way the model is built. They include, for example, the computation time, time steps or termination criteria. Discussion, Conclusion and Outlook In this section, we discuss and summarize the results and provide an outlook for future research. Discussion Besides the advantage of an unambiguous and machine-readable description, the model signature also offers the possibility to evaluate the formal compatibility of domain models. The domain models considered in this example for the system element 'lubricated rolling contact' can theoretically be combined into 81 different model chains (Figure 4). This example is still idealized, such that in reality many more combinations can be expected. Of course, not all of the model chains can be technically coupled. Even with in-depth knowledge of the domain models, it is not possible to reliably and reproducibly filter out incompatible model chains without error and with acceptable effort.
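A minimal, machine-processable rendering of such a signature entry and of the resulting compatibility test can be sketched as follows. The field names mirror the attributes described above (SI-unit exponents, spatial and temporal resolution, admissible regime, variability), but the concrete encoding is an illustrative assumption, not a normative exchange format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParameterSpec:
    name: str
    si_exponents: tuple   # powers of (m, kg, s, A, K, mol, cd); e.g. Pa = (-1, 1, -2, 0, 0, 0, 0)
    resolution: frozenset # resolved dimensions, e.g. frozenset({"x", "y", "t"})
    regime: tuple         # admissible range (min, max) in SI units
    variability: str      # "fixed" or "tunable"

def regimes_overlap(a, b):
    """A partial overlap of the admissible regimes is treated as sufficient."""
    return max(a[0], b[0]) <= min(a[1], b[1])

def compatible(out_param, in_param):
    """Check whether an output parameter of one model can feed an input of another."""
    return (out_param.si_exponents == in_param.si_exponents
            and out_param.resolution == in_param.resolution
            and regimes_overlap(out_param.regime, in_param.regime))

# 'pressure' as produced by a contact model and as consumed by the fluid model
# (the regime values follow the example assumed in the text: 100 kPa to about 1 GPa).
p_out = ParameterSpec("pressure", (-1, 1, -2, 0, 0, 0, 0), frozenset({"x", "y", "t"}), (1e5, 3e9), "tunable")
p_in = ParameterSpec("pressure", (-1, 1, -2, 0, 0, 0, 0), frozenset({"x", "y", "t"}), (1e5, 1e9), "fixed")
print(compatible(p_out, p_in))  # True: units, resolutions and (partially) regimes match
```

Such a check would, for instance, accept the coupling of two parameters whose admissible regimes only partially overlap, in line with the requirement formulated above that a partial match of the regimes can be sufficient.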
The proposed model signatures make it possible to determine easily and unambiguously whether the respective coupled parameters match in terms of dimensions, data type, unit, spatial and temporal resolution, and regime. In order to reduce the set of possible model configurations to compatible ones with the proposed model signature, it makes sense to implement the model signature as an extension of SysML in a language profile. For example, the mechanisms of structural expressions in system modeling environments such as Cameo could be used to automatically evaluate compatible domain models [52].

Conclusions
In function-oriented model-based system development, executable domain models must be integrated into the SysML-based descriptive system model in order to virtually validate and design its system elements. Since all constituents of a system element are formalized via parameters, the challenge arises on the one hand of how to structure these parameters in order to connect them in a meaningful way with the domain models. At the same time, a large number of domain models exist for typical mechanical system elements, which are not documented in a standardized manner and therefore, on the other hand, can only be integrated into the system element and coupled with each other in an effortful and failure-prone manner. Therefore, we proposed a parameter concept for system elements and a domain model signature, which are harmonized with each other and allow the integration and unambiguous coupling of domain models inside system elements. The parameter concept for system elements distinguishes its parameters into design parameters that need to be defined by engineers or models, state variables that cannot be set directly by engineers and adjust themselves according to the laws of physics, as well as functional flows that enter and leave the system element, representing operating and environmental conditions. The proposed notion of model signatures specifies domain models concerning the following attributes. All input, internal and output parameters are defined by their respective name and their physical quantity. Furthermore, the physical unit is indicated by the powers of the seven standardized SI units. The admissible regime is specified by a basically unrestricted set of constraints. Thus, several disjoint ranges of validity can also be expressed by minimum and maximum values or a formulaic relationship. Furthermore, the spatial and temporal resolution as well as the variability are provided. The latter categorizes whether the value of a parameter is fixed or tunable through changes or optimizations inside the domain model. The domain model signature additionally includes the model parameters as a final parameter group. These model parameters are a result of how the model is constructed rather than having an equivalent in the modeled system. This unambiguous and machine-processable description allows domain models to be validly coupled with each other. In combination with the parameter concept, the domain models can read and calculate parameters of the system element according to the particular validation and design cases.

Outlook
In this article, we motivated the necessity of model signatures and investigated their realization based on a specific example. The conceptual approach, however, is not restricted to system elements representing the lubricated rolling contact in a gearing box, but can be generalized to other system elements.
In order to further develop and establish the concept of model signatures, it will therefore be important to apply it to additional, typical system elements in the course of further research. In this context, it makes sense to extend SysML with a possibility to specify resolutions and regimes in order to formulate model signatures with this language in the future. In preparation for application, it is also necessary to develop algorithms for automated compatibility checking and coupling of domain models. The proposed notion of model signatures is also reminiscent of software structures used in multi-physics software systems that choose an object-oriented approach, in which 'model classes' exist that encapsulate a certain process model to facilitate hierarchical modeling [53] or reproducibility [54]. A specific simulation is then an object of such a model class with certain parameters (constraining the physical regime) and certain underlying mathematical and numerical methods (that define the spatiotemporal resolution). Such an object-oriented software structure also helps to orchestrate high-throughput simulations such as those needed for model-based uncertainty management. Additionally, ontologies could provide a way to semantically express, and make usable, the information needed to select and link simulation models from a model-building perspective [55,56]. Combining these closely related concepts will offer new pathways towards a conceptual integration of system models with high-fidelity simulation models.
2022-11-04T19:28:38.206Z
2022-10-29T00:00:00.000
{ "year": 2022, "sha1": "d21356333209ed04d52059240c9c5f292197bf54", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-8954/10/6/199/pdf?version=1667036425", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ec2741309836934e3cee9a98c7ce6f3b096c5103", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
257402968
pes2o/s2orc
v3-fos-license
Diagnostic criteria and proposed management of immune-related endocrinopathies following immune checkpoint inhibitor therapy for cancer Checkpoint inhibitors are now widely used in the management of many cancers. Endocrine toxicity is amongst the most common side effects. These endocrinopathies differ from most other immune-related toxicities in frequently being irreversible and rarely requiring cessation of checkpoint inhibitor therapy. This review considers an approach to the presentation and diagnosis of endocrinopathies, compared to classical endocrine diagnosis, suggesting improvements to classification and treatment based on fundamental endocrine principles. These will help to align management with other similar endocrine conditions and standardise the diagnosis and reporting of endocrine toxicity of checkpoint inhibitors to improve both endocrine and oncological care. In particular, the importance of considering any inflammatory phase (such as painful thyroiditis or hypophysitis resulting in the pituitary enlargement), from the endocrine consequences (transient hyperthyroidism followed by hypothyroidism, pan-hypopituitarism or isolated adrenocorticotrophic hormone deficiency), is highlighted. It is also important to consider the potential confounder of exogenous corticosteroids in adrenal suppression. Introduction Endocrinopathies are amongst the most frequent adverse events of immune checkpoint inhibitors (ICPIs). In common with all other side effects of systemic anticancer treatment, they are currently reported according to the Common Terminology Criteria for Adverse Events (CTCAE, currently updated to version V1 for reporting and case management) (1). In this standardized grading system, symptoms' severity and need for intervention determine the grading stratification (1 -mild, 5 -death) and guide decisions regarding continuation of immunotherapy and need for an immunomodulatory intervention (Table 1). However, endocrinopathies differ from other immune-related adverse events (irAEs) in several aspects. The long-term clinical manifestations stem from the hormone deficiency, whilst the inflammatory process itself may yield only minimal manifestations. As the hormone production of an involved gland ceases completely and irreversibly in most cases, lifelong hormone replacement should be initiated, whilst discontinuation of immunotherapy and/or immunomodulation with glucocorticoid therapy is usually unnecessary, with immunosuppressive doses of glucocorticoids only rarely required where the initial inflammation causes severe symptoms. Lastly, optimal exogenous correction of hormone deficiencies is readily available, based on an early and accurate diagnosis. Thus, even severe symptoms usually resolve rapidly after the initiation of hormone replacement. These unique characteristics mean a classification based predominately on symptom severity at presentation is not helpful in guiding decisions about hormone replacement and continuation or discontinuation of ICPI therapy for endocrinopathies (2). Immunerelated endocrinopathies do not fit into a five-tier symptoms severity stratification either diagnostically or therapeutically as they require binary diagnostic criteria based on the demonstration of absolute or relative hormone deficiency. Consequently, documentation and reporting of the incidence, severity and nomenclature of endocrinopathies (e.g. hypophysitis vs isolated adrenocorticotrophic hormone (ACTH) deficiency) are currently challenging. 
There might be an underrecognition of asymptomatic hormonal abnormalities that may still benefit from long-term treatment such as sub-clinical hypothyroidism. There are also implications for trial protocols, as endocrinopathies already present an exception, where current guidelines support the continuation of ICPIs, having initiated hormone replacement therapy, regardless of the grade. Furthermore, some endocrine irAEs require adjustment and correlation with standard endocrine diagnostic criteria (e.g. hyperglycaemia). These recommendations complement published guidance, including emergency management guidelines endorsed by the Society for Endocrinology (3), and recent European guidelines (4). They extend these by addressing the requirement for a reliable classification, combined with an approach to the management of endocrine irAEs which will accurately reflect the clinical status, need for hormone replacement and ensure consistency with wider endocrine practice. General considerations for endocrine irAEs The potential mechanisms underlying ICPI toxicity have been reviewed elsewhere (5). When compared to the classic, non-iatrogenic autoimmune endocrinopathies, ICPI-induced endocrinopathies (ICPI-IEs) differ both in their presentation and immune features. Although classic autoantibodies such as GAD-65 (6) and anti-TPO antibodies (7,8) can be present, the literature to date suggests that they are less frequently detected than in classic autoimmune disease, suggesting a different mechanism of gland dysfunction. The timescale often differs as well, with rapid evolution into complete gland destruction frequently described with ICPI-IE (9,10). In contrast to non-endocrine irAEs, the active autoimmune phase of endocrine irAEs is usually clinically silent excluding two exceptions, both related to specific anatomical and functional characteristics. First, hypophysitis can present with headache, and occasionally visual field defects, resulting from pituitary oedema and swelling within the osseous borders of the sela turcica (11,12,13,14,15). Secondly, acute thyroiditis can result in local thyroid pain, typically alongside symptoms of thyroid hormone excess (16). Whilst CTCAE may accurately reflect these shortterm inflammatory symptoms, it less accurately reflects the clinical significance of the hormone deficiencies. Hypophysitis and hypopituitarism Pituitary abnormalities are reported in between 1.8 and 18.3% of patients treated with ipilimumab-based regimens (17), usually resulting in panhypopituitarism, associated with inflammatory symptoms such as headache and pituitary enlargement in 50% of cases (13,14,15). Hypopituitarism, though not a universal finding (16), usually follows, with ACTH deficiency the most common abnormality, followed by thyroid-stimulating hormone (TSH) and gonadotrophin deficiency (12,18,19). We, therefore, propose that hypophysitis should be reserved to describe either the symptomatic phase with headache or MRI findings of enlarged pituitary, whilst ICPI-induced hypopituitarism is used to describe the resultant long-term deficiencies of at least two anterior pituitary hormones (following exclusion of sick euthyroid or sick eugonadal states). Isolated ACTH deficiency Isolated ACTH deficiency (IAD) is induced by PD-1 and PD-L1 inhibition and usually manifests with weakness and loss of appetite without clinical or laboratory or radiological evidence of wider pituitary dysfunction (20,21). 
IAD is therefore a clinically distinct pituitary abnormality, caused by specific classes of ICPIs, with different presentations, treatments and prognoses, and should be classified separately from hypophysitis and hypopituitarism. IAD presents as hypocortisolism resulting from ACTH deficiency, with the remaining pituitary axes intact and preserved aldosterone secretion.

Adrenalitis
Direct autoimmune damage to the adrenal cortex is relatively rare. This endocrine irAE poses a challenge in terms of evaluating the precise incidence, mainly due to the lack of unified criteria for immune-related endocrine adverse effects in clinical trials and also due to incomplete endocrine profiling. As in classic Addison's disease, depletion of both glucocorticoids and mineralocorticoids yields more pronounced manifestations, including haemodynamic and electrolyte compromise, compared to secondary adrenal insufficiency (22,23,24). Presentation is with hypocortisolaemia with elevated ACTH and renin levels.

Posterior pituitary – central diabetes insipidus
Several cases of central diabetes insipidus have been reported so far, presenting with polyuria, polydipsia, hypernatraemia and dilute urine with low/absent antidiuretic hormone levels (25,26).

Challenges in diagnosing HPA axis endocrinopathies
Excluding adrenal suppression due to exogenous glucocorticoid treatment
As systemic and topical glucocorticoids are frequently used to treat non-endocrine immune-related toxicity, some patients will develop adrenal suppression. However, due to the risk of multiple toxicities, it is possible that some will also have ACTH deficiency and require long-term glucocorticoid replacement. It is, therefore, important to exclude exogenous glucocorticoid use before making a diagnosis of ACTH deficiency. If exogenous steroids have been used, then standard approaches to weaning should be followed, but if after a prolonged period there has been no recovery of endogenous cortisol production, then the possibility of co-existent ACTH deficiency should be considered. Hence, exclusion of HPA axis suppression by exogenous glucocorticoids should be the first step in the endocrine workup, requiring a careful history to include not just systemic glucocorticoids for the treatment of ICPI-related toxicity, but also topical glucocorticoids such as creams, inhalers or nasal sprays, including those used for non-ICPI-related indications (e.g. asthma). Where potentially suppressive doses of corticosteroids have been used (27), a careful weaning protocol should be followed (such as that in Supplementary Appendix 1, see section on supplementary materials given at the end of this article), and a diagnosis of ACTH deficiency should be reconsidered if there is no adrenal axis recovery.

Replacing single-axis dynamic testing with an integrated 'frozen section' of the HPA axis
An accurate and rapid diagnosis of HPA axis autoimmune injury is essential in the context of a cortisol-deficient cancer patient. The distinction between hypophysitis, IAD and adrenalitis has long-term consequences regarding hormone replacement therapy. Whilst hormonal deficiencies require physiological replacement, rare cases of hypophysitis accompanied by severe oedema and pressure on the optic chiasm require treatment with high-dose steroids. A recent study identified an incidence of biochemical hypocortisolaemia in 4.7% of ICPI-treated patients.
Using robust endocrine criteria, 14 cases of isolated ACTH deficiency were identified, along with 6 cases of hypophysitis (29), confirming the heterogeneity of presentations and the need for a precise endocrine diagnosis. Whilst ACTH stimulation tests (Synacthen® test) are used for the assessment of the adrenal glands' stress response, in accordance with the traditional endocrine dynamic diagnostic paradigm, this test does not discriminate between primary and secondary adrenal insufficiency. A Synacthen® test may demonstrate a rise in cortisol during the first weeks of secondary adrenal insufficiency before adrenal atrophy has occurred. This is of particular concern in ICPI-induced pituitary disorders, given the often rapid onset of cortisol deficiency following checkpoint inhibitors, when a Synacthen® test may be falsely reassuring. Therefore, the diagnosis of adrenal insufficiency has to rely largely on the measurement of baseline morning cortisol levels, whilst ACTH and renin levels can help to distinguish the rare cases of primary adrenal insufficiency from those with ACTH deficiency. Especially where regimens including ipilimumab have been used, a full baseline assessment of other pituitary hormones is required (TSH and free thyroxine (T4), luteinizing hormone, follicle-stimulating hormone, growth hormone, prolactin, insulin-like growth factor 1, total testosterone for men and oestradiol for pre-menopausal women). Hypocortisolism accompanied by low or inappropriately normal ACTH levels, with all other anterior pituitary hormones and target glands functioning within the normal range, is indicative of IAD. Two or more depleted pituitary axes, reflected by hormone levels below the normal range of target glands (thyroid hormones, etc.) and low or inappropriately normal pituitary hormone levels, indicate hypopituitarism (with or without hypophysitis), whilst low cortisol and aldosterone levels with a compensatory elevation in ACTH levels and in renin levels or renin activity correspond with primary adrenal failure due to adrenalitis. Notably, this approach may lead to a degree of underdiagnosis in those with partial ACTH deficiency, and in the presence of symptoms compatible with adrenal insufficiency and indeterminate basal cortisol levels, insulin stress testing may be required where there are no contra-indications. Long-term follow-up data will be required to fully elucidate the outcomes in those in this category.

Management of HPA abnormalities
We propose that assessment of patients with HPA axis abnormalities after receipt of ICPI should be focused on two principles:
a. Management of any active inflammatory/hypophysitis phase
b. Prompt assessment for, and replacement of, any endocrine deficiencies.
In those presenting with severe symptoms of hypophysitis, for example headache, urgent imaging of the pituitary is required, both to confirm pituitary enlargement and to exclude other causes such as brain metastases. High-dose corticosteroids in the form of methylprednisolone are only indicated in those with pituitary enlargement that may lead to chiasmal compression, as this treatment is associated with poorer oncological outcomes (29) and does not lead to recovery of endocrine function in those with hypopituitarism (30). Oral prednisolone at 30-40 mg daily could be considered if necessary for short-term control of inflammatory symptoms such as headache. An algorithm for the assessment of those presenting with symptoms of hypophysitis is shown in Fig. 1.
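As an illustration of the 'frozen section' interpretation described above, the following sketch classifies a baseline hormone panel into isolated ACTH deficiency, hypopituitarism or primary adrenal insufficiency. The inputs and thresholds are simplified assumptions for demonstration only; real assessment uses assay-specific reference ranges and clinical judgement, and this sketch is not a substitute for the algorithm in Fig. 1.

```python
def classify_hpa_panel(low_morning_cortisol: bool,
                       acth_elevated: bool,
                       renin_elevated: bool,
                       other_deficient_axes: int,
                       exogenous_glucocorticoids: bool) -> str:
    """Very simplified triage of a baseline ('frozen section') HPA panel.
    Inputs are assumed to have been judged against assay-specific
    reference ranges upstream."""
    if exogenous_glucocorticoids:
        # Adrenal suppression by exogenous steroids must be excluded first
        return "wean steroids and re-test before diagnosing ACTH deficiency"
    if not low_morning_cortisol:
        return "no biochemical hypocortisolism on this panel"
    if acth_elevated or renin_elevated:
        # Compensatory ACTH/renin rise points to the adrenal gland itself
        return "primary adrenal insufficiency (adrenalitis)"
    if other_deficient_axes >= 1:
        # Cortisol deficiency plus at least one further deficient axis
        return "hypopituitarism (two or more deficient axes)"
    return "isolated ACTH deficiency"

print(classify_hpa_panel(True, False, False, 0, False))  # isolated ACTH deficiency
```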
Those who present acutely unwell or with signs of adrenal crisis should be managed as per standard approaches, as outlined in the Society for Endocrinology guidelines for the acute management of the endocrine complications of checkpoint inhibitor therapy (3). This should not be delayed whilst awaiting imaging, even in those with symptoms of hypophysitis. However, there is increasing experience with the use of physiological replacement doses of glucocorticoids (e.g. hydrocortisone 20 mg daily in three divided doses or prednisolone 2-4 mg od) on an out-patient basis in those who are not systemically unwell (31) (Fig. 2). Replacement with levothyroxine and sex hormones may be required as per standard approaches in time, although morning cortisol should always be checked and replacement initiated prior to starting levothyroxine. Whilst there is some evidence of recovery of the thyroid and gonadal axes in some patients, necessitating re-testing over time, ACTH deficiency is mostly permanent (19). Although primary adrenal insufficiency is rare, if the ACTH or renin level was elevated then mineralocorticoid replacement with fludrocortisone is required.

Screening for pituitary abnormalities
Most patients with pituitary abnormalities from ICPI treatment are diagnosed after presenting with symptoms of either hypophysitis, hypopituitarism or isolated ACTH deficiency. Although pituitary dysfunction may be detected on routine testing during ICPI therapy, there is little evidence to guide a screening strategy. All available guidelines and summaries of product characteristics on ICPI-IE recommend checking thyroid function on each cycle of treatment, and changes in TSH (13) and free T4 (18) have been shown to precede the development of pan-hypopituitarism in those treated with ipilimumab. A fall in early morning cortisol has also been used to detect adrenal insufficiency at an early stage, increasing out-patient treatment (32), but this may be limited by the challenges of early morning testing. Therefore, a high index of clinical suspicion for symptoms of either hypophysitis or hypoadrenalism is currently required.

Thyroid
Thyroid abnormalities are amongst the most common irAEs described after ICPI therapy (33,34,35,36,37,38,39). Both hyperthyroidism and hypothyroidism occur, sometimes in the same patients. Sub-clinical thyroid abnormalities are even more common and do not always require therapy (33). Patients with pre-existing thyroid abnormalities (manifested either as the presence of thyroid autoantibodies (40) or a higher TSH at baseline (33)) and prior TKI therapy (41) are at increased risk of thyroid dysfunction. Two mechanisms have been proposed: an immune-mediated destructive thyroiditis and Graves' disease.

Thyroiditis with transient hyperthyroidism
Thyroiditis is the most common endocrinopathy, affecting 20-30% of ICPI-treated patients. It usually starts a few weeks after the initial introduction of checkpoint inhibitors, with a transient thyrotoxic phase presumed to reflect spillage of pre-formed hormone from the inflamed gland. Interestingly, this phase is usually less symptomatic than would be expected in other thyrotoxic states (42).
In contrast to Hashimoto's disease, which may evolve at various paces and extents, from a partial, intermittent thyroid insufficiency to complete elimination of thyroid function, the robust cytotoxic activity induced by ICPIs leads to a rapid and complete tissue consumption in the majority of cases. Hence, hypothyroidism usually follows, resulting in a requirement for lifelong thyroid hormone replacement. Thyroiditis can also present as increased uptake in the thyroid on FDG PET scanning (43). Primary hypothyroidism Of note, hypothyroidism can develop without a preceding thyrotoxic phase. Whilst this may present with symptoms of hypothyroidism, it is more commonly detected on screening, as most protocols recommend testing thyroid function with each cycle of ICPI. Graves' disease Rarer are reports of Graves' disease resulting in persistent thyrotoxicosis requiring antithyroid drugs such as carbimazole (44,45,46,47,48). Graves' disease should be considered if thyrotoxicosis persists for more than 4 weeks and in those who present with signs of thyroid eye disease. In these cases, we recommend checking TSH receptor antibodies and/or a thyroid uptake scan, and treatment with antithyroid drugs may be required. The CTCAE criteria for hyperthyroidism and hypothyroidism (Table 1) are based on symptoms, regardless of hormone levels and specific disease or hormonal pattern. Indeed, patients with a typical thyroiditis pattern, with hyper-then hypothyroidism may fall into both criteria over time. Furthermore, subclinical changes in thyroid hormone levels are common, may still require treatment, or at least monitoring, and therefore reporting by CTCAE criteria can underestimate the true incidence of thyroid hormone abnormalities, without clearly separating those with Graves' disease requiring a different treatment paradigm. Management of thyroid dysfunction Hypothyroidism, whether occurring as the first manifestation of ICPI-related thyroid disease or after a period of thyrotoxicosis, is managed as per standard approaches in the management of primary hypothyroidism (49). This is crucial in those with secondary hypothyroidism (see 'Pituitary' section). In particular, levothyroxine should be initiated if the free T4 is low or TSH sustained at >10 mIU/L, at an initial dose of 1.6 µg/kg rounded to the nearest 25 µg, unless there are comorbidities such as uncontrolled ischaemic heart disease or atrial fibrillation, or in those over 65 where an initial dose of 25-50 µg daily can be used. If there is any clinical suspicion of adrenal insufficiency, a morning cortisol should be checked prior to initiating levothyroxine, as increased thyroid hormone levels can precipitate an adrenal crisis (50). It may also be appropriate to initiate at a lower dose if a patient presented with both hypothyroidism and signs of ICPI-induced myocarditis. Thyroid hormone replacement is likely to be lifelong, and there is some evidence that ICPI-induced hypothyroidism may require a higher average dose than Hashimoto's thyroiditis (51). In contrast, as the hyperthyroid phase of thyroiditis is usually short-lived, treatment is usually supportive. As in other forms of thyroiditis, beta-blockers such as propranolol can be used for symptomatic relief, and occasionally, patients with severe neck pain may require systemic glucocorticoids. There is a limited role for antithyroid drugs. 
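A minimal sketch of the thyroid-management logic described in this and the preceding section is given below: it triages a thyrotoxic or hypothyroid pattern and applies the weight-based levothyroxine initiation rule (1.6 µg/kg rounded to the nearest 25 µg, with a cautious 25-50 µg start in those over 65 or with relevant cardiac comorbidity). Function and field names are illustrative assumptions; this is not a clinical decision tool.

```python
def initial_levothyroxine_dose_ug(weight_kg: float, age_years: int,
                                  cardiac_comorbidity: bool = False) -> int:
    """Illustrative starting dose for ICPI-induced primary hypothyroidism."""
    if age_years > 65 or cardiac_comorbidity:
        return 50                                    # cautious start (25-50 ug range)
    return int(round(1.6 * weight_kg / 25) * 25)     # 1.6 ug/kg to nearest 25 ug

def triage_thyroid(thyrotoxic: bool, weeks_thyrotoxic: float,
                   eye_disease: bool, hypothyroid: bool) -> str:
    """Simplified triage following the text: transient thyrotoxicosis is
    managed supportively; persistent cases or eye disease raise the
    question of Graves' disease; hypothyroidism triggers replacement."""
    if thyrotoxic:
        if eye_disease or weeks_thyrotoxic > 4:
            return "check TSH receptor antibodies / uptake scan; consider carbimazole"
        return "supportive care (e.g. propranolol); re-test thyroid function"
    if hypothyroid:
        return "check morning cortisol, then start levothyroxine"
    return "continue routine monitoring each treatment cycle"

print(triage_thyroid(thyrotoxic=False, weeks_thyrotoxic=0,
                     eye_disease=False, hypothyroid=True))
print(initial_levothyroxine_dose_ug(80, 55))  # 125
```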
However, in those who present with signs of thyroid eye disease or whose thyrotoxicosis persists for more than 4 weeks, we recommend checking TSH receptor antibodies and/or a thyroid uptake scan, and considering antithyroid drugs such as carbimazole if these are positive or in persistent thyrotoxicosis, due to the rare occurrence of Graves' disease. In these rare cases of severe thyrotoxicosis due to immune-related Graves' disease, it may be necessary to withhold the ICPI, at least until the thyrotoxicosis is controlled, and possibly permanently in those with significant thyroid eye disease.

Parathyroid
Symptomatic hypocalcaemia accompanied by low levels of parathyroid hormone (PTH) has been described in several case reports (52,53). ICPI-induced hypoparathyroidism differs mechanistically from the other endocrine irAEs: activating autoantibodies against the calcium-sensing receptor disrupt the gland's activity, rather than causing autoimmune gland destruction (54,55,56). This endocrinopathy was shown to be reversible when a patient with concomitant high-grade immune-related colitis received high-dose glucocorticoids (52), presumably due to improved immune regulation leading to attenuation of the activating antibodies to the calcium-sensing receptor. The reversal indicates a controllable, antibody-mediated process rather than irreversible gland destruction.

Management
Initial management is calcium replacement, which may require intravenous calcium infusion, and the use of vitamin D analogues, as per other forms of hypoparathyroidism (57). However, consideration should be given to a trial of glucocorticoids, especially if calcium-sensing receptor antibodies are detected, given the potential reversibility.

Table 2. Proposed diagnostic classification and criteria for ICPI-induced endocrinopathy (pathology/description with proposed diagnostic criteria).

Thyroid
1. Thyroiditis – active inflammation, usually clinically silent, with rare cases of gland swelling or tenderness. Tissue damage coincides with spillage of preformed thyroid hormones into the bloodstream, manifesting as a rapid elevation of free T4 levels and consequent TSH suppression, followed most commonly by subsequent hypothyroidism or, in rare cases, by a return to normal thyroid function. Criteria: hyperthyroidism, defined as elevated free T4 or T3 with low or suppressed TSH, in the absence of eye signs of thyroid eye disease, that resolves within 6 weeks to either euthyroidism or hypothyroidism; local symptoms of thyroiditis support but are not required for this diagnosis.
2. Post-ICPI hypothyroidism – complete and permanent hypothyroidism following a thyrotoxic phase in the majority of cases, or as a single-phase decline in thyroid function. Criteria: low free T4 with elevated TSH, with or without a prior thyrotoxic phase.
3. Graves' disease – thyrotoxicosis persisting for more than 6 weeks, with elevated free T3, with or without thyroid eye disease or positive thyroid-stimulating antibodies. Criteria: elevated free T4 or T3 with low or suppressed TSH, with either positive TSH receptor antibodies, increased uptake on an isotope scan or co-existent thyroid eye signs, or thyrotoxicosis that persists for more than three months.

Anterior pituitary
Hypopituitarism – deficiency of at least two anterior pituitary axes, demonstrated by two or more of the following:
a. 08:00-10:00 h cortisol below the assay-specific reference range with non-elevated ACTH, in the absence of exogenous glucocorticoids (b)
b. Free T4 below the reference range with non-elevated TSH
c. Morning testosterone (males) below the reference range with non-elevated gonadotrophins on more than one occasion
d. Secondary amenorrhoea with oestradiol < 100 pmol/L and non-elevated gonadotrophins (pre-menopausal females)
e. Prolactin above or below the reference range can support a diagnosis
Isolated ACTH deficiency, usually without hypophysitis (a). Criteria: 09:00 h cortisol below the assay-specific reference range with non-elevated ACTH, in the absence of exogenous glucocorticoids (b); other pituitary axes intact.

Adrenal
Primary adrenal insufficiency – cortisol deficiency with either elevated ACTH or renin > 2 × ULN. Criteria: 08:00-10:00 h cortisol below the assay-specific reference range with elevated ACTH or renin > 2 × ULN.

Posterior pituitary
Central diabetes insipidus – new-onset polyuria and polydipsia. Criteria: 24-h urine volume greater than 50 mL/kg body weight; water deprivation test revealing dilute urine (osmolality below 100 mOsm/kg) when serum osmolality exceeds 295 mOsm/kg, followed by a desmopressin challenge leading to urine concentration above 300 mOsm/kg.

Endocrine pancreas
1. New-onset hyperglycaemia. Criteria: blood glucose > 11.1 mmol/L (200 mg/dL) in the absence of steroid treatment, on two or more occasions after immunotherapy, with C-peptide < 100 pmol/L when available (c); positive islet cell/IA2/GAD antibody titres are supportive but not mandatory.
2. Diabetic ketoacidosis. Criteria: new-onset hyperglycaemia as above with pH < 7.30 and/or bicarbonate < 15.0 mmol/L and capillary or blood ketones > 3.0 mmol/L.

Parathyroid
New-onset hypocalcaemia – activating antibodies to the calcium-sensing receptor (CaSR), of IgG1 and IgG3 subclasses with affinity to functional epitopes on the receptor, causing hypocalcaemia. Criteria: albumin-corrected calcium below 2.1 mmol/L with low PTH levels and normal magnesium levels.

Footnotes: (a) Exclude use of exogenous glucocorticoids prior to diagnosing isolated ACTH deficiency; reconsider if there is failure of adrenal recovery after the standard withdrawal approach. (b) In those with ongoing symptoms of adrenal insufficiency but a 09:00 h cortisol within the reference range, consider an insulin tolerance test to confirm or rule out HPA axis dysfunction if there are no contra-indications. (c) In those treated with high-dose glucocorticoids, without ketosis, steroid-induced hyperglycaemia should be considered.

Diabetes and hyperglycaemia
Programmed Death-1 (PD-1)- and Programmed Death-Ligand 1 (PD-L1)-based treatment regimens can result in a rapid-onset, insulin-deficient state closely resembling type 1 diabetes, although autoantibodies are detected in less than 50% of cases (6,9,58,59,60,61). This results in a need for lifelong insulin treatment, with all the associated complexities. Those with pre-existing type 2 diabetes may also develop this important and potentially life-threatening complication (62,63). However, a third of patients require high-dose glucocorticoids for non-endocrine reasons and some of these will develop steroid-induced hyperglycaemia (64,65). Currently, both will be categorised under hyperglycaemia toxicity, with the grade determined by the need for intervention or by glucose levels that do not match the established levels for diagnosing diabetes, despite the very different mechanisms.
Notably, the long-term implications of permanent checkpoint inhibitor-associated insulin-deficient diabetes and of other, temporary forms of hyperglycaemia, including steroid-induced hyperglycaemia, are very different, with the former requiring long-term insulin therapy, whilst the latter, representing an indirect side effect, frequently resolves on cessation of the glucocorticoids. Management is based on a careful assessment of the cause, distinguishing steroid-induced hyperglycaemia from immunotherapy-induced diabetes, according to standard diabetes approaches to these conditions (66,67); a diagnostic approach to safely manage new-onset hyperglycaemia has recently been proposed (65).

Summary
The current CTCAE classification of endocrine irAEs is based on short-term symptoms rather than accurate diagnostic criteria and the implications of long-term hormone replacement. This makes accurate documentation and reporting of the severity, nomenclature and incidence of endocrinopathies challenging, resulting in the under-recognition of largely asymptomatic hormonal abnormalities. This issue, if not addressed in clinical practice, will hamper the development of more evidence-based strategies for investigation and treatment, as well as studies into the longer-term implications of endocrinopathy. We therefore propose a new classification of endocrine toxicity of ICPI therapy that accurately reflects the pathology, the hormonal disturbance and the treatment (Table 2). We also propose diagnostic criteria, based on an assessment of hormonal function combined with imaging and clinical assessment where appropriate, to enable standardised diagnosis and reporting of endocrine outcomes. This can apply both to endocrine toxicity presenting symptomatically and to that diagnosed on the basis of laboratory abnormalities detected during screening, as in standard endocrine practice. Expected benefits from the new diagnostic system:
1. More accurate documentation and reporting of endocrinopathies.
2. Appropriate clinical management based on a long-term perspective regarding hormone replacement therapy rather than short-term symptoms.
3. Prevention of unnecessary cessation or interruption of ICPI therapy and unnecessary corticosteroid therapy.
4. Improved reporting and future development of trial protocols, as endocrinopathies already present an exception, where current guidelines support the continuation of ICPI, having initiated hormone replacement therapy, regardless of the grade. The proposed classification would make the development of guidelines easier as the treatment would more clearly reflect the underlying diagnosis.
5. Enhanced confidence in clinical diagnostic coding (by applying these criteria when formalising the diagnosis) to facilitate improved accuracy in determining incidence, prevalence and outcomes in prospective and retrospective real-world data analysis. Future audits of clinical practice will also be refined by clearer diagnostic criteria and the development of specific diagnostic codes for ICPI-IEs.
To support this, and to facilitate future audits, Table 3 contains suggested SNOMED codes to document the different endocrine irAEs in electronic health records in a standardised manner. This classification will allow the identification of those with irAEs whilst also allowing linkage to standard endocrine diagnostic terms.
We propose the adoption of these new diagnostic criteria for ICPI-induced endocrine dysfunction that recognise their unique characteristics compared to other forms of irAEs and to classic endocrine diseases. The revised criteria would enable accurate assessment of different endocrinopathies, matching the required treatment, and allow care planning to occur, alongside existing management guidelines (4,68). Supplementary materials This is linked to the online version of the paper at https://doi.org/10.1530/ EC-22-0513. Declaration of interest DM reports personal fees from Bristol Meyer Squibb, personal fees from MSD, personal fees from Roche. SC reports personal fees from Bristol Myer Squibb. RP, SA and KY report no declarations. Funding This study did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector.
2023-03-09T06:16:36.221Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "2ba3d7b95c283a9fd44108d9ee63b7289f37a038", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "140426007f0d5cc41cf1911c7d356ac8b1daa7ed", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
245101553
pes2o/s2orc
v3-fos-license
Turbulence statistics from three different nacelle lidars . Atmospheric turbulence can be characterized by the Reynolds stress tensor, which consists of the second-order moments of the wind field components. Most of the commercial nacelle lidars cannot estimate all components of the Reynolds stress tensor due to their limited number of beams; most can estimate the along-wind velocity variance relatively well. Other components are however also important to understand the behavior of, e.g., the vertical wind profile and meandering of wakes. The SpinnerLidar, a research lidar with multiple beams and a very high sampling frequency, was deployed together with two commercial lidars in a forward-looking mode on the nacelle of a Vestas V52 turbine to scan the inflow. Here, we compare the lidar-derived turbulence estimates with those from a sonic anemometer using both numerical simulations and measurements from a nearby mast. We show that from these lidars, the SpinnerLidar is the only one able to retrieve all Reynolds stress components. For the two-and four-beam lidars, we study different methods to compute the along-wind velocity variance. By using the SpinnerLidar’s Doppler spectra of the radial velocity, we can partly compensate for the lidar’s probe volume averaging effect and thus reduce the systematic error of turbulence estimates. We find that the variances of the radial velocities estimated from the maximum of the Doppler spectrum are less affected by the lidar probe volume compared to those estimated from the median or the centroid of the Doppler spectrum. Introduction Understanding and measuring atmospheric turbulence are essential for the effective use of wind energy, to assess wind turbine site conditions, and for the assessment of the structural integrity of wind turbines.Traditionally, in situ anemometers installed on meteorological (met) masts are used to measure turbulence.However, with the increasing size of modern wind turbines, installing and operating a met mast that reaches the top of the rotor disk are becoming more and more expensive and infeasible.Nacelle lidars are compact and portable.They yaw with the wind turbine and scan over an area comparable to the rotor plane. The Reynolds stress tensor is one of the most important turbulence statistics used in the wind energy industry.It consists of the second-order moments (variances and covariances) of the wind field components.One of the Reynolds stress components, the along-wind velocity variance, is used in the definition of turbulence intensity (IEC, 2019) and ap-plied in different aspects of wind energy.Other components are also essential in wind energy and boundary-layer meteorology.For example, the vertical wind shear is connected to the friction velocity (Wyngaard, 2010), which can be computed using the momentum fluxes (two covariances); the momentum fluxes can also be used to crudely estimate the height of the boundary layer (Stull, 1988).The turbulence kinetic energy, expressed as half the sum of the three velocity components' variances, is a key parameter for investigating the turbulence structure in, e.g., wind turbine wakes (Kumer et al., 2016). 
The main objective of this study is to investigate the benefit of using multiple-beam nacelle lidars for measuring inflow turbulence.Most commercial nacelle lidars are not able to estimate all components of the Reynolds stress tensor due to the limited number of beams and the scanning configuration.The SpinnerLidar is a research continuous-wave (CW) Doppler nacelle lidar.It scans at 400 positions at a high sampling frequency, which enables characterizing the inflow in detail.We evaluate and compare the turbulence characterization performance of a two-and a four-beam commercial lidar, and the SpinnerLidar through both numerical simulations and measurements inter-comparisons with in situ anemometers.For the latter, we deployed the three lidars in a forward-looking mode on the nacelle of a V52 wind turbine.Measurements from sonic anemometers on a met mast are used as reference for evaluation of the lidar-derived turbulence characteristics. Assuming statistical homogeneity, we estimate the Reynolds stress components by fitting lidar radial velocity variances from the beams over the scanning pattern using a least-squares-based method.To determine the six components of the Reynolds stress tensor, we require at least six radial velocity variances measured in different beam orientations in analogy to the method by Eberhard et al. (1989).Here, we discuss the limitations of using different methods and assumptions to estimate the along-wind velocity variance with fewer than six radial velocity variances.We focus on this variance because it is a key parameter for load validation (Dimitrov et al., 2019;Conti et al., 2021), power performance assessment (Wagner et al., 2014(Wagner et al., , 2015;;Borracino et al., 2017) and wind turbine control (Schlipf et al., 2014(Schlipf et al., , 2020)). Measurements of turbulence by lidars are affected by spatial average filtering effects caused by the lidar probe volume and cross-contamination effects from combining lineof-sight velocities at different locations assuming instantaneous homogeneity and not only statistical homogeneity (Sathe and Mann, 2013;Kelberlau and Mann, 2020).Both effects contribute to the systematic error of turbulence estimation using lidars.As a consequence of the first effect, a lidar estimates turbulence essentially through a low-pass filter and cannot detect high-frequency variations, which yields the so-called "filtered variances".Held and Mann (2018) showed that different methods of deriving the radial velocity from the lidar Doppler spectrum influence the degree of the turbulence attenuation.We explore the ability of these methods for turbulence estimation with the SpinnerLidar measurements.We also compensate for the probe volume filtering effect and compute "unfiltered variances" of the radial velocity using Doppler radial velocity spectra from the Spin-nerLidar measurements.Peña et al. (2017) used Doppler radial velocity spectra and showed that the along-wind unfiltered variance from a conically scanning lidar agreed well with the one from a cup anemometer on a met mast.However, other lidar-derived estimates of velocity-component variances were largely biased due to the lidar scanning configuration. 
This paper is organized as follows. Section 2 describes the turbulence spectral model, the maximum, median and centroid methods to derive the lidar radial velocities from the Doppler spectrum, the filtered and the unfiltered radial velocity variances, the least-squares method to compute the Reynolds stress tensor, and the numerical lidar simulations. Section 3 provides information on the measurement campaign and the employed nacelle lidars. Section 4 describes how we filter and post-process the high-frequency lidar radial velocities and the Doppler radial velocity spectra. Section 5 shows the inter-comparison of turbulence characteristics between the three nacelle lidars and a mast-mounted sonic anemometer at turbine hub height. Discussions and conclusions are given in Sects. 6 and 7, respectively.

Turbulence spectral model
Assuming Taylor's frozen turbulence hypothesis (Taylor, 1938), the wind field can be described by a vector field u(x) = (u, v, w) = (u_1, u_2, u_3), where u is the horizontal along-wind component, v the horizontal lateral component, w the vertical component, and x = (x, y, z) the position vector defined in a right-handed coordinate system. The mean value of the homogeneous velocity field is ⟨u(x)⟩ = (U, 0, 0), so the coordinate x is in the mean wind direction. The turbulence spectral properties of the three-dimensional homogeneous wind field are described by the spectral velocity tensor $\Phi_{ij}(\boldsymbol{k})$ (Kristensen et al., 1989):

$\Phi_{ij}(\boldsymbol{k}) = \frac{1}{(2\pi)^3}\int R_{ij}(\boldsymbol{r})\,\exp(-\mathrm{i}\,\boldsymbol{k}\cdot\boldsymbol{r})\,\mathrm{d}\boldsymbol{r},$  (1)

which is the Fourier transform of the covariance tensor $R_{ij}(\boldsymbol{r}) \equiv \langle u_i(\boldsymbol{x})\,u_j(\boldsymbol{x}+\boldsymbol{r})\rangle$, where ⟨ ⟩ denotes ensemble averaging, r is the separation vector, u_i are the fluctuations around the mean and k = (k_1, k_2, k_3) is the wave vector in the (x, y, z) directions. We assume that the spectral velocity tensor $\Phi_{ij}(\boldsymbol{k})$ can be described by the model of Mann (1994) (hereafter the Mann model), which, besides k, only contains three parameters (known as Mann parameters): αε^{2/3} is the product of the spectral Kolmogorov constant α and the turbulent energy dissipation rate ε to the two-thirds power, L is a length scale related to the size of the energy-containing eddies, and Γ is a parameter describing the anisotropy of the turbulence. From the spectral tensor, the one-point spectra of velocity fluctuations are calculated by

$F_{ij}(k_1) = \iint \Phi_{ij}(\boldsymbol{k})\,\mathrm{d}k_2\,\mathrm{d}k_3.$  (2)

The wind velocity components have the three auto-spectra F_{11} (= F_u), F_{22} and F_{33}. The auto-spectra can be evaluated using Eq. (2).
The variances of the velocity components are

$\sigma_i^2 = \int_{-\infty}^{\infty} F_{ii}(k_1)\,\mathrm{d}k_1,$  (3)

and these, together with the covariances, are the components of the Reynolds stress tensor:

$\mathbf{R} = \begin{pmatrix} \sigma_u^2 & \langle uv \rangle & \langle uw \rangle \\ \langle uv \rangle & \sigma_v^2 & \langle vw \rangle \\ \langle uw \rangle & \langle vw \rangle & \sigma_w^2 \end{pmatrix}.$  (4)

Nacelle lidar
The unit vector n describing the beam orientation of a nacelle lidar can be expressed as (Peña et al., 2017)

$\boldsymbol{n}(\phi,\theta) = (-\cos\phi,\ \cos\theta\sin\phi,\ \sin\theta\sin\phi),$  (5)

where θ is the angle between the y axis and n projected onto the y-z plane and φ is the angle between the beam and the negative x axis (hereafter half-cone opening angle). As with any other Doppler lidar, nacelle lidars only measure the radial velocity (also known as the line-of-sight velocity) along the laser beam. Thus, the radial velocity can be expressed as (Mann et al., 2010)

$v_r(f_d) = \int_{-\infty}^{\infty} \varphi(s)\,\boldsymbol{n}\cdot\boldsymbol{u}\big(\boldsymbol{n}(f_d+s)\big)\,\mathrm{d}s,$  (6)

where ϕ is the lidar weighting function that accounts for the probe volume, s is the distance from the focus point along the beam and f_d is the focus distance. This equation assumes that v_r is determined from the Doppler spectrum by the centroid or center-of-gravity method. For the case of the investigated CW lidars, their weighting functions are assumed to be of the Lorentzian form (Sonnenschein and Horrigan, 1971):

$\varphi(s) = \frac{1}{\pi}\frac{z_R}{z_R^2 + s^2},$  (7)

where z_R is the Rayleigh length that can be estimated as

$z_R = \frac{\lambda f_d^2}{\pi r_b^2},$  (8)

where λ is the laser wavelength and r_b the beam radius at the output lens. If we assume that the lidars measure at a point, instead of over a probe volume, and that u, v and w do not change over the scanned area, the radial velocity in Eq. (6) can be estimated as the sum of the projection of the three-dimensional wind components on the beam pointing direction:

$v_r = -u\cos\phi + v\cos\theta\sin\phi + w\sin\theta\sin\phi.$  (9)

The variance of the radial velocity $\sigma_{v_r}^2$ can be derived by taking the variance of Eq. (9) (Eberhard et al., 1989):

$\sigma_{v_r}^2 = \sigma_u^2\cos^2\phi + \sigma_v^2\cos^2\theta\sin^2\phi + \sigma_w^2\sin^2\theta\sin^2\phi - 2\langle uv\rangle\cos\phi\cos\theta\sin\phi - 2\langle uw\rangle\cos\phi\sin\theta\sin\phi + 2\langle vw\rangle\cos\theta\sin\theta\sin^2\phi.$  (10)

Equation (10) provides accurate velocity-component variance and covariance estimates if the radial velocity variance is unfiltered, i.e., if we are able to account for the lidar probe volume. In practice, if the Doppler radial velocity spectrum is available, we have means to estimate the unfiltered radial velocity variance. This will be described in Sect. 2.4.

Estimation of the radial velocity and the filtered radial velocity variance
Three methods are used here to determine the dominant frequency from the Doppler radial velocity spectrum to compute the radial velocity. The centroid method computes the characteristic frequency f in the Doppler radial velocity spectrum p(f) as

$f_c = \frac{\int f\,p(f)\,\mathrm{d}f}{\int p(f)\,\mathrm{d}f}.$  (11)

The maximum method finds the frequency bin where the maximum peak in the Doppler spectrum occurs. The median method treats the Doppler spectrum as a probability distribution and finds the frequency bin that corresponds to the median value. These frequencies are then converted to radial velocity estimates according to the sampling frequency of the digitizer, the length of the fast Fourier transform, and the lidar's laser wavelength. Since none of these methods considers the whole Doppler radial velocity spectrum, turbulence statistics computed from these radial velocities are filtered. Therefore, we use the term filtered radial velocity variance, $\sigma^2_{v_r,\mathrm{filt}}$.
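As a concrete illustration of the three estimators described above, the sketch below computes the centroid, median and maximum radial velocity estimates from a discretized Doppler spectrum. The array names and the synthetic spectrum are assumptions for demonstration and do not reproduce any of the lidars' actual processing chains.

```python
import numpy as np

def doppler_velocity_estimates(v_bins, spectrum):
    """Return (centroid, median, maximum) radial velocity estimates from a
    background-subtracted Doppler spectrum sampled at the bin centres v_bins."""
    dv = v_bins[1] - v_bins[0]
    p = spectrum / (spectrum.sum() * dv)          # area-normalized spectrum
    centroid = np.sum(v_bins * p) * dv            # first moment (cf. Eq. 11)
    cdf = np.cumsum(p) * dv
    median = v_bins[np.searchsorted(cdf, 0.5)]    # bin at the 50th percentile
    maximum = v_bins[np.argmax(spectrum)]         # bin of the spectral peak
    return centroid, median, maximum

# Synthetic example: a slightly skewed spectrum centred near 8 m/s
v_bins = np.arange(0.0, 20.0, 0.15)
spectrum = np.exp(-0.5 * ((v_bins - 8.0) / 0.8) ** 2) \
         + 0.3 * np.exp(-0.5 * ((v_bins - 9.5) / 0.8) ** 2)
print(doppler_velocity_estimates(v_bins, spectrum))
```

For a skewed spectrum such as this one, the three estimates differ slightly, which is why the choice of method influences the degree of turbulence attenuation discussed above.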
Estimation of the unfiltered radial velocity variance Here, we use the Doppler radial velocity spectrum to estimate the unfiltered radial velocity variance σ 2 v r ,unf of the lidar beams.Since the investigated nacelle lidars measure at small opening angles over a relatively homogeneous inflow, the effect of the radial velocity gradient within the probe volume is negligible (see Mann et al., 2010, for a detailed discussion).Therefore, σ 2 v r ,unf can be estimated as the second central statistical moment of the ensemble-average Doppler spectrum of the radial velocity.The mean radial velocity can be estimated from the area-normalized mean Doppler spectrum p(v r ) as https://doi.org/10.5194/wes-7-831-2022Wind Energ.Sci., 7, 831-848, 2022 and its variance as Assuming all radial velocity contributions to the Doppler spectrum are due to turbulence, σ 2 v r in Eq. ( 13) provides an estimate of σ 2 v r ,unf .This can be used to extract the velocity variances using Eq. ( 10), which gives the components of the Reynolds stress tensor. Estimation of the mean wind velocity Radial velocity measurements from different beam directions can be combined to reconstruct the mean wind.In the following sections, we show that different approaches are used for different lidars. First approach A least-squares formulation is used to find the mean wind vector U = (U, V , W ) over all beam positions.Here, we minimize the sum of squared differences between the beamprojected wind and the measured radial velocities: The integral dµ could be an area-weighted average of the beam measurements.In practice, the integral could simply be the sum over all pairs of radial velocity v r and the corresponding beam unit vectors n among the scanning area.The vector U that minimizes the integral must fulfill Expanding the integral and isolating U we get This approach assumes wind homogeneity over the scanning area.To get the three mean wind components, we need at least three values of v r measured in different orientations.This approach is used for deriving the mean wind vector from SpinnerLidar multi-beam measurements. Second approach Assuming that the inflow wind is horizontal, i.e., w = 0 m s −1 , Eq. ( 9) can here be reduced to To compute the mean wind components, we need at least two radial velocities measurements and the corresponding beam positions (φ and θ ) assuming that u and v are identical at the focus points of a pair of beams.Therefore, a two-beam nacelle lidar can compute u and v: A similar approach can be used for a four-beam nacelle lidar. The two upper beams and two lower beams are used separately (Larvol, 2016) to estimate u and v at two different heights.Here, we average the estimates at the two heights to represent the mean inflow velocity. Induction correction Due to the presence of the wind turbine, the wind slows down as it approaches the rotor.We perform the correction of the slowdown in speed (also referred as the induction correction) to the estimates of lidars and the sonic anemometer using the method in Simley et al. 
(2016): where U ∞ is the undisturbed free stream wind speed, x is the distance between the lidar scanning plane and the rotor, and a is the axial induction factor.The induction factor a is determined using the same procedure as the one in Held and Mann (2019) assuming the effect of the induction is constant over a 10 min period.A steady-state thrust curve of the V52 turbine and the 10 min mean wind speeds measured by the cup anemometer at 44 m are used to look up the thrust coefficient C t .Then, we compute the induction factor using axial momentum theory, i.e., C t = 4a(1 − a). Estimation of the Reynolds stress tensor We assume that the Reynolds stresses R ij ≡ u i u j are homogeneous over the rotor plane irrespective of the mean wind field.We apply a least-squares fit to the radial velocity variances and the corresponding beam unit vectors to estimate the Reynolds stresses: The matrix R that minimizes the integral must fulfill This can be written as Wind Energ.Sci., 7, 831-848, 2022 https://doi.org/10.5194/wes-7-831-2022 The right side of Eq. ( 22) is written as a vector having the length of six using the six combinations of indices (i, j ) = (1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3) with n 1 = − cos φ, n 2 = cos θ sin φ and n 3 = sin θ sin φ (as given in Eq. 5).Similarly, on the left side of Eq. ( 22), R kl is rearranged to a length six vector, where n k n l n i n j dµ is a 6-by-6 matrix with both (k, l) and (i, j ) going through the same combinations of indices: To compute the six Reynolds stresses, we need at least six radial velocity variances from different beam directions to ensure that the large matrix in Eq. ( 23) is not degenerate (i.e., its determinant is not zero) (Sathe et al., 2015).If fewer than six variances of the radial velocity are available, we have fewer knowns than unknowns.If the nacelle lidar beams have only one opening angle φ, the equations will be linearly dependent, and so the determinant will be zero and Eq. ( 23) will have infinite solutions.In those cases, only σ 2 u can be well determined, and the stresses involving the lateral component will be more noisy (Peña et al., 2019).In this study, we use all radial velocity variances from the SpinnerLidar to calculate the six Reynolds stresses. Numerical simulations We generate three-dimensional random turbulence fields using the Mann model (Mann, 1998) with typical values of the model parameters: αε 2/3 = 0.05 m 4/3 s −1 , L = 61 m and = 3.2.We furthermore assume Taylor's frozen turbulence hypothesis: so the wind field at any given time can be obtained by translating the wind field at time t = 0.The turbulence boxes are 18 km long in the along-wind and 128 m long in both the vertical and lateral directions.The number of grid points in the simulation in the three directions is (N x , N y , N z ) = (8192, 64, 64).A total of 100 turbulence boxes with the same Mann parameters but different seeds were generated.For simulating lidar measurements, we add a mean wind U and a linear vertical shear dU/dz to the along-wind velocity com- ponent u in each box: where U = 10 m s −1 , dU/dz = 0.0288 s −1 , z rotor is the turbine hub height in the turbulence box, i.e., the middle grid point in the z coordinate, and u is the fluctuation around the mean from the turbulence box. 
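To make the least-squares retrieval of the Reynolds stresses described in the previous subsection (Eqs. 21-23) concrete, the following sketch builds the six-column design matrix from the beam unit vectors and solves for the six stresses given radial velocity variances. The beam geometry and the synthetic input values are assumptions for illustration only.

```python
import numpy as np

def reynolds_stresses_from_beams(phis, thetas, sigma2_vr):
    """Least-squares fit of R_uu, R_uv, R_uw, R_vv, R_vw, R_ww from radial
    velocity variances measured along beams with half-cone opening angles
    phis and azimuth angles thetas (angles in radians)."""
    pairs = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    A = np.zeros((len(phis), 6))
    for m, (phi, theta) in enumerate(zip(phis, thetas)):
        n = np.array([-np.cos(phi),
                      np.cos(theta) * np.sin(phi),
                      np.sin(theta) * np.sin(phi)])
        for c, (i, j) in enumerate(pairs):
            factor = 1.0 if i == j else 2.0   # off-diagonal stresses appear twice
            A[m, c] = factor * n[i] * n[j]
    R, *_ = np.linalg.lstsq(A, np.asarray(sigma2_vr), rcond=None)
    return dict(zip(["uu", "uv", "uw", "vv", "vw", "ww"], R))

# Synthetic example: six beams on two cones (degenerate with a single cone)
phis = np.deg2rad([15, 15, 15, 25, 25, 25])
thetas = np.deg2rad([0, 120, 240, 60, 180, 300])
true_R = np.array([[1.0, 0.1, -0.2], [0.1, 0.8, 0.05], [-0.2, 0.05, 0.5]])
sigma2 = []
for phi, theta in zip(phis, thetas):
    n = np.array([-np.cos(phi), np.cos(theta) * np.sin(phi),
                  np.sin(theta) * np.sin(phi)])
    sigma2.append(n @ true_R @ n)
print(reynolds_stresses_from_beams(phis, thetas, sigma2))  # recovers true_R
```

With all beams at a single opening angle the design matrix becomes degenerate, as stated in the text, which is why at least six radial velocity variances in sufficiently different orientations are needed.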
We also account for the lidar probe volume.The lidar Doppler spectrum S(v r , t) is (Held and Mann, 2018) where δ is the Dirac delta function and M is the distance along the beam that we use to truncate the integral due to the finite length of the turbulence boxes.Figure 1 shows an example of an instantaneous Doppler radial velocity spectrum simulated in a turbulence box for one arbitrary beam of the SpinnerLidar, in which the radial velocity is determined by the three methods introduced in Sect.2.3.The velocity bin resolution is 0.1 m s −1 bin −1 and M = 8z R , which is hereafter always used. 3 Experiment setup Measurement campaign A measurement campaign on a Vestas V52 Nacelle lidars Three forward-looking nacelle lidars are investigated here. All lidars are based on a CW system and they all were scanning at a single plane (see Fig. 4).The specifications for three nacelle lidars can be found in Table 1.The Spinner-Lidar (Peña et al., 2019) scans in a rosette-curve pattern and generates 400 radial velocities in one full scan.For this measurement campaign, the SpinnerLidar was set up to perform a full scan every 2 s at a focus distance of 62 m.The system also recorded the instantaneous Doppler spectrum of the radial velocity, which is used here both to derive the radial velocity using different methods and to estimate the unfiltered radial velocity variance.The SpinnerLidar streams out average Doppler spectra at a rate of 200 Hz.Each Doppler spectrum is represented in 256 frequency bins with a spectral resolution of 195.3 kHz corresponding to a radial velocity resolution of 0.1528 m s −1 per bin.In addition, it recorded the signal strength (here called "power") of the instantaneous spectrum.We also use the inclination and the azimuthal positions from the SpinnerLidar sensors to correct the scanned locations. The two-beam WindEye (hereafter W2) and the four-beam WindVision (hereafter W4) are two commercial lidars from Windar Photonics A/S (Windar Photonics, 2020).W2 measured at 37 m and has similar width of the probe volume (indicated by the Rayleigh length) as the SpinnerLidar.Note that the largest probe volume and the smallest half-cone opening angle are those of the four-beam system.The azimuthal angle in Table 1 refers to the position of the beams on the scanning cone surface (from the top of the cone).The two beams from W2 are aligned horizontally, while the four beams from W4 focuses at each quadrant of the rotor area.Both systems complete a scan in 1 s. Data selection and filtering The measurements were collected between 1 October 2020 and 30 April 2021.We analyze the time series of all data and their statistics within 10 min periods (in total 30 492 periods of 10 min).There are three types of measurements: the supervisory control and data acquisition of the wind turbine, the mast measurements and the measurements from three lidars.We concentrate our analysis on the wind sectors, which are relatively aligned with the mast-turbine direction (291 • ) to exclude the influence of the wakes from the nearby wind turbines to the greatest extent.We select 10 min periods for the analysis using the following criteria: Wind Energ.Sci., 7, 831-848, 2022 https://doi.org/10.5194/wes-7-831-2022-All lidars and the V52 turbine should be concurrently operating.The turbine status is indicated by the rotor speed, which should be higher than 14 rpm.This leaves us 19 190 periods of 10 min. 
-The wind direction measured by the wind vane and the yaw angle of the turbine are both between 261-321 • .The absolute difference between these two directions is lower than 5 • .Since the dominant wind direction at this site is west and south-west, we have 2457 periods of 10 min left after applying this filter. -The wind speed measured by the cup anemometer at the turbine hub height is higher than 3 m s −1 . -No precipitation is detected during the 10 min period. After filtering, the number of the 10 min periods for the analysis is 2348. Data filtering We process the SpinnerLidar measurements for the selected 2348 periods of 10 min.The SpinnerLidar measurements are further filtered based on both the system-reported radial velocity, which is the median estimate from the raw Doppler radial velocity spectrum, and the power of the spectrum.The following criteria are applied (Fig. 5 shows an example of results of the SpinnerLidar filtering within an arbitrary 10 min period): -We filter out all measurements with system-reported radial velocity estimates below 3.2 m s −1 , which is the reference minimal detectable radial velocity by the Spin-nerLidar due to the interference of the turbine blades (Karen Enevoldsen, personal communication, 2021). -We simulate the radial velocity of all possible blade returns as (Angelou et al., 2015) where is the 10 min mean rotor speed, S y is the lateral component of the unit vector with reference to the Spin-nerLidar in the y-z plane, and h SL is the vertical displacement between the SpinnerLidar scan head and the wind turbine rotation axis.Equation ( 27) does not consider the misalignment between the SpinnerLidar and the nacelle, which is negligible (below 0.5 marked in red in Fig. 5.We discriminate the wind speed signal from the blade return signal from Eq. ( 27) when the difference between them is above 0.2 m s −1 . -We filter out all measurements exceeding power values above 100 (Peña et al., 2019) (this signal strength has arbitrary units).We can see from Fig. 6 that some measurements close to the middle of the pattern are filtered out with this criterion. -Finally, we filter out radial velocities exceeding its mean ± 3 times its standard deviation within the 10 min period. Further, there should be at least half of the raw measurements left for the analysis to consider a 10 min period of SpinnerLidar measurements, which leaves us 1605 periods of 10 min for the later post-processing. Gridding the scans We estimate the lidar scan locations using the average azimuthal and inclination angles of the SpinnerLidar within the 10 min period, i.e., the system-reported coordinates are rotated along the longitudinal and lateral axis of the Spinner-Lidar scanhead, respectively.Figure 6a shows the scan locations in blue and the non-rotated locations in orange within a 10 min period (26 February 2021 at 14:10:00), where the average inclination angle is 3.15 • and the average azimuthal angle is 0.34 • .Due to the turbine movement and SpinnerLidar slack, we aggregate the azimuthal-and inclination-corrected scan locations within a grid of 1 m resolution in the y-z plane, as shown in Fig. 
6b.The coordinates of the grid cells, which are marked in light grey, are given by the resolution and extension of the grid.The "gridded" rosette pattern is shown in black (some are covered by red color as explained later).All radial velocity spectra for the scans lying within each grid cell in the given 10 min period are accumulated.We use only grid cells, where there are more than 30 instantaneous Doppler radial velocity spectra.In Fig. 6b, we show in red the grid cells satisfying this criterion.Finally, we only use those 10 min periods in which we have 900 grid cells satisfying the criterion. Doppler spectra processing and usage Figure 7 shows an example of the processing of the Doppler radial velocity spectra from the accumulated measurements within a grid cell close to the middle of the scan.The raw Doppler radial velocity spectra within that grid cell are shown in Fig. 7a.For this 10 min period (26 February 2021 at 14:10:00), the vane measures a wind direction of 291.6 • and the yaw angle is 291.0 • .The lidar unit vector pointing onto this grid cell is almost parallel to the terrain (φ is around 1.4 • ), thus close to the main wind direction.As shown in Fig. 7a, high spectral values "contaminate" the spectra in the first few velocity bins due to, e.g., optical reflections from the bore point (i.e., the beam hitting the telescope lens perpendicularly) or few left blade signals.To ease the spectra processing, we define a threshold for each individual spectrum, which defines the limit above which a Doppler spectrum is considered to be caused by the wind.The calculation of the threshold is based on the mean value (µ) plus a number of standard deviations (σ ) within a frequency range where no radial velocity signals are anticipated.Angelou et al. (2012) showed that a systematic selection of the threshold level should take into account the shape of the Doppler spectrum relative to the variation of the spectrum noise level.The number of standard deviations is thus different for the case of a wide Doppler velocity spectrum (high turbulence level) and a narrow one (low turbulence level).An overestimation of the threshold removes low-intensity fluctuations and, subsequently, biases the estimation of the radial velocity and reduces its variance.Here, we select a threshold of µ+3σ of the spectral values in the last 50 frequency bins.After thresholding, we remove the spectral values up to the bin corresponding to 2.3 m s −1 , which filters out the high spectral peaks in unrealistic low-velocity bins (Fig. 7b). 
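A compact sketch of the spectral cleaning just described, together with the three radial velocity estimators, is given below. It is only illustrative: the array names are ours, and details not stated above (e.g., how bins at or below the threshold are treated) are assumptions.

```python
import numpy as np

def clean_doppler_spectrum(spec, v_bins, v_min=2.3, n_noise_bins=50, n_sigma=3.0):
    """Threshold a raw Doppler spectrum at mu + n_sigma*sigma of the last
    n_noise_bins frequency bins (assumed to contain only noise) and discard
    bins below v_min (m/s). Bins at or below the threshold are set to zero,
    which is one possible reading of the thresholding step."""
    noise = spec[-n_noise_bins:]
    threshold = noise.mean() + n_sigma * noise.std()
    cleaned = np.where(spec > threshold, spec, 0.0)
    cleaned[v_bins < v_min] = 0.0
    return cleaned

def radial_velocity_estimates(spec, v_bins):
    """Maximum-, median- and centroid-derived radial velocities from a cleaned
    Doppler spectrum (v_bins in m/s, spec non-negative)."""
    v_max = v_bins[np.argmax(spec)]               # velocity bin of the peak
    cdf = np.cumsum(spec) / np.sum(spec)
    v_med = v_bins[np.searchsorted(cdf, 0.5)]     # bin where half the area is reached
    v_cen = np.sum(v_bins * spec) / np.sum(spec)  # first spectral moment
    return v_max, v_med, v_cen
```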
Each "cleaned" spectrum is then area-normalized. Figure 7c shows the ensemble-average Doppler radial velocity spectrum from all normalized, thresholded and cleaned spectra. We also show the normalized distribution of sonic measurements at hub height within the same 10 min period projected to the direction of the grid cell unit vector, which as illustrated are in good agreement with the ensemble-average Doppler radial velocity spectrum. We use the ensemble-averaged Doppler radial velocity spectrum to derive both the unfiltered radial velocity variance and the radial velocity estimates (maximum, centroid and median), which are later used for the reconstruction of the mean wind. All grid cells with at least 900 Doppler radial velocity spectra within each 10 min period are considered for the reconstruction of the mean wind and the Reynolds stresses. The three-dimensional mean wind vector is computed from the median-, maximum- and centroid-derived radial velocities, using the approach in Sect. 2.5.1. Figure 8a shows a contour map of the median-derived radial velocity for an arbitrary 10 min period of SpinnerLidar measurements. As expected, the highest radial velocities are found in the middle-top part of the scan. This radial velocity contour map shows a similar pattern as that from the average of SpinnerLidar simulations using 30 turbulence boxes (Fig. 8b).

Data filtering

The measurements for the W2 and W4 nacelle lidars are processed at 2 and 4 Hz, respectively. Therefore, within a 10 min period, the optimal amount of radial velocities per beam for W2 is 1200 and for W4 is 2400. We remove outliers of radial velocities and apply the same blade filtering using the method described in Sect. 4.2. We set a criterion that there should be at least 90 % of the optimal amount of data left after the filtering for a 10 min period. We do not account for the radial velocities of a full scan when data from any beam are missing. This leaves us 1499 periods of 10 min for the intercomparison.

Methods to compute the along-wind velocity variance

The along-wind and lateral velocities are reconstructed for each scan (i.e., for every 1 s) using the approach described in Sect. 2.5.2, and we compute 10 min statistics from these velocities. Due to the limited number of beams and the unavailability of Doppler radial velocity spectra, we only compute the filtered along-wind variance using two methods. We can compute the wind speed variance directly from the time series of reconstructed along-wind velocity U within the 10 min periods (hereafter denoted as the "U-variance" method). We can also compute σ²_u using Eq. (23) with some assumptions, and three are investigated here. The first is to assume that all Reynolds stress components apart from σ²_u are zero (hereafter denoted as the "LSP-σ²_u" method). This basically means that Eq. (23) reduces to σ²_vr,i = cos²(φ_i) σ²_u for each beam i, so that σ²_u follows from scaling the measured radial velocity variances by 1/cos²φ (Eq. 28). Since the half-cone opening angle of nacelle lidars is usually small, this method tends to overestimate σ²_u. The second is to assume turbulence isotropy; i.e., the auto-variance of the three velocity components is the same and they are uncorrelated (hereafter denoted as the "LSP-isotropy" method). From Eq. (23), this means that σ²_u is then the average of the radial velocity variances of the lidar beams. The third option is to assume that σ_v = 0.7σ_u and σ_w = 0.5σ_u, as suggested in IEC (2019) (hereafter denoted as the "LSP-IEC" method).
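The three assumption-based estimates of σ²_u can be written down directly from the beam geometry. The sketch below is our own minimal illustration (the exact way the individual beams are combined is not specified above, so a least-squares combination is assumed), taking the measured radial velocity variances and the beam angles as input.

```python
import numpy as np

def sigma_u2_lsp(var_vr, phi):
    """'LSP-sigma_u2': all Reynolds stresses except sigma_u^2 are assumed zero,
    so each beam gives var_vr = cos(phi)^2 * sigma_u^2; the beams are combined
    here in a least-squares sense (an assumption on our part)."""
    c2 = np.cos(phi) ** 2
    return np.sum(c2 * var_vr) / np.sum(c2 ** 2)

def sigma_u2_isotropy(var_vr):
    """'LSP-isotropy': with equal auto-variances and zero covariances, the
    radial velocity variance equals sigma_u^2 for any beam direction, so
    sigma_u^2 is the average over the beams."""
    return np.mean(var_vr)

def sigma_u2_iec(var_vr, phi, theta):
    """'LSP-IEC': assume sigma_v = 0.7 sigma_u, sigma_w = 0.5 sigma_u and zero
    covariances, so each beam measures
    var_vr = (n1^2 + 0.49 n2^2 + 0.25 n3^2) * sigma_u^2."""
    n1 = -np.cos(phi)
    n2 = np.cos(theta) * np.sin(phi)
    n3 = np.sin(theta) * np.sin(phi)
    g = n1 ** 2 + 0.49 * n2 ** 2 + 0.25 * n3 ** 2
    return np.sum(g * var_vr) / np.sum(g ** 2)

# Made-up numbers for a four-beam lidar (angles in degrees, variances in m^2 s^-2)
phi = np.radians([18.0, 18.0, 18.0, 18.0])
theta = np.radians([45.0, 135.0, 225.0, 315.0])
var_vr = np.array([0.82, 0.95, 0.90, 0.88])
print(sigma_u2_lsp(var_vr, phi), sigma_u2_isotropy(var_vr),
      sigma_u2_iec(var_vr, phi, theta))
```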
Sonic measurements We use the 20 Hz raw sonic measurements at hub height (44 m) to calculate the mean horizontal wind speed and its variance for all selected 10 min periods.Figure 9a shows that the horizontal speed measured by the cup and the sonic anemometer is nearly the same.When looking at the computed variance in Fig. 9b, a bias of 3.4 % is found.We rotate the sonic-measured 3-D wind components, which are defined in the main wind coordinate system, to the coordinate system fixed with the wind turbine so that the sonic u velocity is aligned with the rotation axis of the turbine.We use the velocity and the variance of the rotated sonic-measured mean wind components as the reference for the comparison with the estimates from the nacelle lidars. Mean wind speed We perform comparisons of the 1499 10 min mean alongwind velocity component reconstructed from the lidar measurements with that from sonic measurements at 44 m (see Fig. 10).The estimates from lidars and the sonic anemometer are corrected for the induction using the method in Sect.2.5.3.The lidar-derived estimate is a rotor-effective mean velocity since measurements at all scanning positions are considered.As illustrated, there is a high correlation for all nacelle lidars, as expected.The W2 and the SpinnerLidar estimates are slightly higher than that from the sonic anemometer while the estimate of W4 is 2.6 % lower.From the numerical simulation with 30 turbulence boxes, we found that all nacelle lidars are able to estimate the along-wind velocity well (not shown here); the uncertainties in the mean wind obtained from lidar are as large as those from the sonic anemometer. Radial velocity variance Figure 11a shows the simulated ratio of the unfiltered radial velocity variance to the u-velocity variance of the sonic anemometer among the scanning area.As the simulated wind field is based on the Mann model, the major source of crosscontamination on the radial velocity comes from the spectral tensor components involving w.As seen from the plot, the ratio is higher than one above the center and lower than one beneath it, which is due to the positive and negative contribu-Wind Energ.Sci., 7, 831-848, 2022 https://doi.org/10.5194/wes-7-831-2022tion of u w , respectively, to the beam radial variance.Figure 11b shows the result from the measurement campaign as a scatter plot between the unfiltered radial velocity variance of the central grid cell (y = 0 m, z = 48 m) from the Spin-nerLidar to the u variance of the sonic anemometer measurements at 44 m.From the measurements, the unfiltered radial velocity variance of the central beam reaches 91.5 % of the sonic variance, whereas the simulations show a zero bias for that central beam.We attribute this difference to our rather conservative method to clean Doppler radial velocity spectra, which attempts to eliminate any possible noise.However, this might lead to reduction of true turbulence contained in the Doppler radial velocity spectrum.In Fig. 
12a, we show the probe volume filtering effect on the scanning pattern by plotting the ratio of the filtered to the unfiltered radial velocity variance from the simulations.Here, the filtered radial velocity variance is computed from the centroid-derived radial velocity, because the cen-troid method experiences the most turbulence attenuation caused by the probe volume (Held and Mann, 2018).The filtering effect due to probe volume is very similar throughout the pattern.The highest ratios are found around the center of the pattern, where the beam aligns with the along-wind velocity component.As the beam moves from the center, the ratio decreases because the beam's opening angle increases and the cross-contamination from other velocity components increases.The amount of the cross-contamination depends highly on the anisotropy of turbulence .Our simulation was conducted with a set of typical Mann parameters (see Sect. 2.7), so the degree of simulated filtering can be different from that of measurements.Figure 12b shows the comparison between the filtered and unfiltered radial velocity variance at the grid cell (y = 0 m, z = 48 m) from the measurement campaign.The correlation is very high, as expected, and the unfiltered radial velocity variance is around 9 % higher than the centroid-derived filtered one. https://doi.org/10.5194/wes-7-831-2022 Wind Energ.Sci., 7, 831-848, 2022 Turbulence estimates Using the methodology described in Sect.2.6, we estimate the six components of the Reynolds stress tensor from the SpinnerLidar unfiltered radial velocity variances and compare them against the computed components from the sonic anemometer measurements at 44 m for the 1499 periods of 10 min.Figure 13 shows the inter-comparison for σ 2 u .From the simulation with 30 turbulence boxes, we get a nearly perfect correlation and a bias of 1.4 %, whereas from the measurements the bias is 8.9 %.The bias is higher in the measurements mainly because we cannot guarantee that some variance of the radial velocity is lost when processing the Doppler radial velocity spectra. We perform the comparison of all Reynolds stresses computed from the SpinnerLidar scans with those from the sonic anemometer at 44 m in Fig. 14.The Reynolds stresses from the measurement campaign are normalized by U 2 with which they are roughly proportional.The unfiltered variances from simulations were derived by the same method (see Sect. 2.4) as for the measurements.The numerical simulations show that we can accurately estimate all components of the Reynolds stress tensor using the SpinnerLidar compared to the sonic anemometer.The SpinnerLidar uncertainties of u u are not very different from those of the sonic anemometer, while the uncertainties of other components are larger.This is mainly because all other components where u fluctuations are not included are driven by fluctuations of components largely misaligned with the beams.Results from the measurements show that all Reynolds stress components estimated from SpinnerLidar are close to those from the sonic anemometer but biased.We even observe negative values for v v and w w .This is discussed in Sect.6.2. 
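The scatter comparisons in this and the following subsections are summarized by a linear regression forced through the origin, a coefficient of determination and a relative bias (cf. Fig. 9). The exact definitions behind those numbers are not spelled out in the text, so the sketch below is only one plausible implementation, shown here with invented data.

```python
import numpy as np

def zero_intercept_fit(x, y):
    """Slope of a regression through the origin (y ~ slope * x), coefficient of
    determination of that fit, and the relative bias of y with respect to x in
    percent (one plausible definition; others are possible)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope = np.sum(x * y) / np.sum(x * x)
    ss_res = np.sum((y - slope * x) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    bias_pct = 100.0 * (np.mean(y) / np.mean(x) - 1.0)
    return slope, r2, bias_pct

# x: sonic-derived values, y: lidar-derived values (invented numbers)
x = np.array([0.4, 0.7, 1.1, 1.6, 2.0])
y = np.array([0.38, 0.66, 1.02, 1.49, 1.85])
print(zero_intercept_fit(x, y))
```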
Figure 15 shows the comparison of the SpinnerLidar estimations of the maximum-, median-and centroid-derived filtered variances of the along-wind velocity component with those from the 44 m sonic measurements.Results from both the simulations using 30 turbulence boxes and the measurements indicate that turbulence attenuation is most severe using the centroid method from the Doppler radial velocity spectrum, while the maximum method gives the closest value, as expected (Held and Mann, 2018). Figure 16 shows the comparison of the Windar lidar reconstructed filtered σ 2 u using different methods against σ 2 u values from the 44 m sonic anemometer.As illustrated, about 37 % of the variance is filtered out for both W4 and W2, when the variance is computed by taking the statistics of the reconstructed U time series.This is still the common practice in the wind energy community.The degree of filtering is similar for both lidars although W4 has a larger probe volume.From Eq. ( 28), we note that by using the "LSPσ 2 u " method, we can overestimate the along-wind variance when all beams are scanning horizontally (or close to).Estimates using the "LSP-isotropy" method take the average of all beam variances.When the scanning geometry is symmetrical in the two-dimensional y-z plane (like in the W4 case), the contributions from u w might (nearly) cancel out.The method "LSP-IEC" is perhaps a fairer procedure when compared to the other methods, as it assumes relations between velocity components' variances that are close to those we can observe within the atmospheric surface layer. Estimates from the "LSP-IEC" and "LSP-isotropy" methods can be computed by scaling those from method "LSP-σ 2 u "; that explains the same correlations in Fig. 16a-c and e-g.All inter-comparison results of the estimated along-wind components are summarized in Table 2. Influence of spectra processing on the unfiltered variances The way we process the Doppler radial velocity spectra influences the unfiltered variance estimates.Therefore, we investigate the sensitivity of using a more rigorous method to further alleviate the contamination of the Doppler spectra from, e.g., noise.This method first determines the peak of the Doppler signal and then moves forwards and backwards in the vicinity of the peak velocity bin to find the two locations (velocity bins) where the Doppler signal reaches zero.Only Doppler signals between these two velocity bins are used to compute the variance.The unfiltered along-wind velocity variance estimated from the SpinnerLidar measurements shows a bias reduction of ≈ 3.0 % using the more rigorous spectra-processing when compared to the relatively "moderate" method, which is used in Sect.4.2.3.The coefficient of the determination reduces from 97 % to 96.6 %. Negative SpinnerLidar-derived variances Negative variances might result when using SpinnerLidar measurements to estimate the Reynolds stress tensor.We find randomly occurring negative values of σ 2 v in 7 % and of σ 2 w in 15 % of the 10 min periods that are used for the intercomparison.We investigate the conditions in which this occurs by simulating measurements of a nacelle lidar with 30 beams such that they cover the extent of rotor at hub height https://doi.org/10.5194/wes-7-831-2022Wind Energ.Sci., 7, 831-848, 2022 (see Fig. 
17a). Figure 17b shows the simulated radial velocity variances (marked in blue) of the beams across the rotor. Each point corresponds to the average radial velocity variance from five turbulence fields. With increasing opening angle, the simulated radial velocity variance decreases. By using the method in Sect. 2.6 to derive the velocity variances, we obtain positive values of all velocity components and σ²_u > σ²_v, as expected. We obtain negative σ²_v values when the radial velocity variances highly decrease with increasing opening angle (high decrease marked in green in Fig. 17b). In this case, the turbulence homogeneity assumption is not satisfied. Further, we find σ²_u ≈ σ²_v when σ²_vr slowly decreases with increasing opening angle (low decrease). Figure 18a shows the pattern of unfiltered radial velocity variances in one of the 10 min periods where we estimate negative σ²_v and σ²_w variances. As illustrated, the pattern shows a strong decrease of σ²_vr particularly around the right side of the scans. Figure 18b shows a period with σ²_u ≈ σ²_v > σ²_w. The occurrence of the negative variances is less frequent in our measurements when we perform the turbulence estimation every 30 min, as expected.

Conclusions

In this study, we analyzed measurements of three forward-looking nacelle lidars with different scanning configurations to investigate the benefit of multi-beam nacelle lidars for turbulence characterization. For the first time, the SpinnerLidar measurements were compared with those of commercial nacelle lidars. We focused our analysis on wind sectors in which the inflow is relatively homogeneous. The inflow characteristics estimated by the three lidars were compared with those from a nearby sonic anemometer at hub height.

Our results from the analysis of numerical simulations and measurements showed that all lidars were able to estimate the mean wind velocity well compared to the sonic anemometer. We also found that the SpinnerLidar was the only one out of the three nacelle lidars that is able to measure the six Reynolds stress components accurately. This is due to both its multi-beam capability and its ability to measure unfiltered radial velocity variances.

By using the information from the Doppler radial velocity spectrum, one can partly compensate for the probe volume averaging effect and reduce the error of turbulence estimation. We showed that using maximum-derived radial velocities to compute the along-wind velocity variance mitigates best the turbulence attenuation caused by the lidar probe volume.

For the commercial lidars, one can estimate the along-wind velocity variance using three different methods: scaling the radial velocity variance with a factor of cos²φ, assuming σ_v = 0.7σ_u and σ_w = 0.5σ_u, or assuming isotropic turbulence. We found the smallest bias in the estimates using the first method when compared to the sonic anemometer values. However, the first method can overestimate the along-wind variance when all beams are scanning horizontally. The second method is the fairest procedure among the three methods. All methods showed smaller bias when compared to computing the variance from the reconstructed along-wind velocity values in the time series.

Figure 1. Example of a Doppler radial velocity spectrum simulated in a turbulence box, including the radial velocity estimates using the maximum (max), the median (med) and the centroid (cen) methods.
Figure 2. The Risø test site in Roskilde, Denmark, on a digital surface elevation model (UTM32 WGS84). The V52 meteorological mast is shown in a red square. The wind turbines are shown in circles (in red the reference V52 wind turbine). The color bar indicates the height above mean sea level in meters.
Figure 4. (a) The scanning trajectory of the nacelle lidars. (b) An upwind view of the theoretical scanning pattern performed by the W2, W4 and the SpinnerLidar.
Figure 5. Radial velocity as function of (a) index of the 400 beams in each full scan, (b) the lateral component of the unit vector S_y and (c) power for an arbitrary 10 min period. Filtered data are shown in black and data left after filtering in blue. The red color in the top panel represents the simulated radial velocity from the possible blade returns.
Figure 6.
Figure 7. An example of Doppler radial velocity spectra analysis within a 10 min period (26 February 2021 at 14:10:00). The location of the grid cell y = 0 m, z = 48 m is shown in the scanning pattern in Fig. 6b. (a) Raw and scaled Doppler radial velocity spectra. (b) Cleaned and normalized Doppler spectra. (c) The average Doppler spectrum (black), the distribution of the sonic measurements at hub height (orange) and the three radial velocity estimates, which can be clearly seen in the inset (d).
Figure 8. (a) Contour map of the median-derived radial velocity from the ensemble-average Doppler spectra in a 10 min period. Black dots indicate the location of the grid cells with more than 30 Doppler spectra. (b) Contour map of the average median-derived radial velocity from SpinnerLidar simulations using 30 turbulence boxes.
Figure 9. Comparison of the 10 min mean horizontal (a) wind speed and (b) variance between the sonic and the cup anemometers at 44 m. Each 10 min is shown in blue markers, a 1:1 relation is shown in the red dashed line, and a linear regression fit to origin in the black dashed line (results of the regression are given on the top of the plot, where R² is the coefficient of determination).
Figure 10. Comparison between the reconstructed along-wind mean velocity from the sonic anemometer at 44 m and (a) W2, (b) W4 and (c) SpinnerLidar. All estimates are corrected for the induction. Features regarding the red and black dashed lines as in Fig. 9.
Figure 11. (a) Ratio of the unfiltered radial velocity variance to the u-velocity variance of the sonic anemometer from the simulations (30 turbulence boxes are used). (b) Comparison between the unfiltered radial velocity variance at the central grid cell (y = 0 m, z = 48 m) and the u variance of the sonic anemometer at 44 m from the measurements. Features regarding the red and black dashed lines as in Fig. 9.
Figure 12. (a) Ratio of the filtered to the unfiltered radial velocity variance from simulations (30 turbulence boxes are used). (b) Comparison between the SpinnerLidar filtered and unfiltered radial velocity variance at the central grid cell (y = 0 m, z = 48 m) from the measurements. Features regarding the red and black dashed lines as in Fig. 9.
Figure 13. Comparison of the unfiltered variance of the along-wind velocity component between the SpinnerLidar and the sonic anemometer at 44 m from (a) numerical simulation using 30 turbulence boxes and (b) measurement campaign.
Figure 14. Reynolds stresses derived from the SpinnerLidar and sonic anemometer, (a) numerical simulations using 100 turbulence boxes and (b) measurements. The markers are the means and the error bars are ±1 standard deviation.
Figure 15. Comparison of the filtered variance of the along-wind velocity component between the SpinnerLidar and the sonic anemometer at 44 m. (a-c) Numerical simulations using 30 turbulence boxes. (d-f) Measurement campaign.
Figure 16.
Figure 17. (a) Scanning pattern of a nacelle lidar with beams across the rotor. (b) The radial velocity variances of the beams across the rotor from simulations using five turbulence boxes.
Figure 18. Contour plots of the radial velocity variance over the SpinnerLidar scanning pattern during two 10 min periods, (a) a case with negative σ²_v and σ²_w values, (b) a case with σ²_u ≈ σ²_v > σ²_w.
Table 1. Specifications of the nacelle lidars for the measurement campaign.
Table 2. Bias and coefficient of determination between the lidar-derived along-wind velocity variance using different lidars and methods and that from the sonic anemometer at 44 m.
FR-900098, an antimalarial development candidate that inhibits the non-mevalonate isoprenoid biosynthesis pathway, shows no evidence of acute toxicity and genotoxicity ABSTRACT FR-900098 is an inhibitor of 1-deoxy-d-xylulose-5-phosphate (DXP) reductoisomerase, the second enzyme in the non-mevalonate isoprenoid biosynthesis pathway. In previous studies, FR-900098 was shown to possess potent antimalarial activity in vitro and in a murine malaria model. In order to provide a basis for further preclinical and clinical development, we studied the acute toxicity and genotoxicity of FR-900098. We observed no acute toxicity in rats, i.e. there were no clinical signs of toxicity and no substance-related deaths after the administration of a single dose of 3000 mg/kg body weight orally or 400 mg/kg body weight intravenously. No mutagenic potential was detected in the Salmonella typhimurium reverse mutation assay (Ames test) or an in vitro mammalian cell gene mutation test using mouse lymphoma L5178Y/TK+/− cells (clone 3.7.2C), both with and without metabolic activation. In addition, FR-900098 demonstrated no clastogenic or aneugenic capability or significant adverse effects on blood formation in an in vivo micronucleus test with bone marrow erythrocytes from NMRI mice. We conclude that FR-900098 lacks acute toxicity and genotoxicity, supporting its further development as an antimalarial drug. Introduction Isoprenoid biosynthesis in Plasmodium falciparum, the causative agent of malignant tertian malaria, solely depends on the 1-deoxy-D-xylulose-5-phosphate (DXP) pathway, also known as the 2-C-methyl-D-erythritol-4phosphate (MEP) pathway, whereas isoprenoids in humans are derived from the unrelated mevalonate pathway. The DXP pathway is used by most bacteria and is also found in the plastids of algae and higher plants. Likewise, DXP pathway enzymes in P. falciparum are located in a plastid-like organelle that is present in most parasites of the phylum Apicomplexa, and is therefore called the apicoplast. 1 DXP reductoisomerase, the second enzyme in the DXP pathway, is inhibited by the natural antimicrobial compound fosmidomycin and its close derivative FR-900098. 2,3 Both compounds display potent in vitro antimalarial activity, but FR-900098, which differs from fosmidomycin by the presence of a single additional methyl group (Fig. 1), inhibits the growth of cultured P. falciparum parasites with approximately twice the efficacy of fosmidomycin. 4 The activity of fosmidomycin and FR-900098 against 34 fresh clinical Cameroonian P. falciparum isolates was compared by Tahar and Basco. 5 The geometric mean IC 50 values (95% confidence interval) were 301 nM (245-370 nM) for fosmidomycin and 118 nM (93.3-149 nM) for FR-900098. Furthermore, FR-900098 also displayed twice the activity of fosmidomycin in the P. vinckei mouse model, following intraperitoneal and oral administration. 4,6 The IC 50 value of FR900098 for recombinant P. falciparum DXP reductoisomerase was 18 nM compared to 32 nM for fosmidomycin, 7 suggesting that the more potent activity of FR-900098 against malaria parasites results mainly from its higher affinity for the target enzyme. Indeed, the structural analysis of P. falciparum DXP reductoisomerase bound to FR-900098 revealed that the additional methyl group of FR-900098 forms a van der Waals contact with the side chain of a tryptophan residue, which could explain why FR-900098 is more active than fosmidomycin. 
8 Before the discovery of the DXP pathway, 9,10 fosmidomycin and FR-900098 were isolated as natural antibacterial compounds from the culture broth of Streptomyces lavendulae and S. rubellomurinus, respectively. 11,12 Both compounds were found to be active against a number of clinically important Gram-negative bacteria, 13 but only fosmidomycin was developed further due to its superior antibacterial activity. In a phase I study of 127 healthy male volunteers, fosmidomycin was administered intravenously (i.v.) at a dose of 2 g every 6 h for 7 days, intramuscularly (i.m.) at a dose of 1 g every 6 h for 5 days, or orally (p.o.) at a dose of 1 g every 6 h for 7 d. 14,15 No adverse events were reported except for mild to moderate irritation at the site of injection in the i.v. and i.m. treatment groups. The strong antibacterial efficacy of fosmidomycin was confirmed in a pilot phase II trial of 70 patients with acute urinary tract infections although no details were published, and only minor adverse effects were reported including cases of nausea, vomiting and loose stools. 15 It is unclear why the clinical development of fosmidomycin as an antibacterial agent was discontinued at that time, but probable reasons include lower efficacy compared to other antibiotics in development, the lack of activity against streptococci and staphylococci, and the development of resistance. Interest in fosmidomycin was renewed following the discovery of its molecular target and its potential use as an antimalarial drug. In clinical phase II studies, oral treatment with fosmidomycin led to the rapid reduction of parasitemia in patients with acute, uncomplicated P. falciparum malaria. 16,17 However, a high rate of recrudescent infections precludes the use of fosmidomycin as a monotherapy. Nevertheless, the combination of fosmidomycin with clindamycin emerged as a new potential antimalarial treatment, wherein the antimalarial activity of clindamycin is probably mediated by inhibiting the prokaryotic-like protein synthesis of the apicoplast. Clindamycin, if administered as a single agent, results in a peculiar delayed onset of parasite growth inhibition (sometimes referred to as delayed kill effect), making it unsuitable for monotherapy. 18 Three-day regimens with 2 doses per day of fosmidomycin (30 mg/kg body weight) and clindamycin (10 mg/kg body weight) resulted in 28-day cure rates of approximately 90% in Gabon and Thailand. [19][20][21][22] Cure rates of 100% were achieved with longer treatment durations (4 and 7 d in Gabon and Thailand, respectively). 19,23,24 Lower cure rates (62% and 45.9% after a 3-day regimen in 2 different studies) were observed in children younger than 3 y. 20,25 Because the relatively low efficacy in this group of patients probably reflects inadequate formulation, the authors of a recent meta-analysis advocate the further clinical development of fosmidomycin. 26 Currently, a combination of fosmidomycin with piperaquine is under investigation in a phase IIa proof-of-concept study in Lambar en e, Gabon (ClinicalTrials. gov Identifier: NCT02198807). In the case of FR-900098, the paucity of toxicological data currently hinders its clinical evaluation, despite demonstrably superior antimalarial activity in vitro and in mice. Here, we present preliminary studies concerning the toxicology of FR-900098 to promote its further development as an antimalarial drug. 
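The geometric mean IC50 values and 95% confidence intervals quoted in the Introduction are obtained on log-transformed data. Purely as an illustration of that calculation, with invented IC50 values rather than the study data, one could compute:

```python
import numpy as np
from scipy import stats

def geometric_mean_ci(values, confidence=0.95):
    """Geometric mean and confidence interval of positive values (e.g., IC50s
    in nM), from a t-interval computed on the log-transformed data."""
    logs = np.log(np.asarray(values, dtype=float))
    mean, sem = logs.mean(), stats.sem(logs)
    lo, hi = stats.t.interval(confidence, len(logs) - 1, loc=mean, scale=sem)
    return float(np.exp(mean)), (float(np.exp(lo)), float(np.exp(hi)))

ic50_nM = [95, 150, 80, 210, 130, 100, 170]  # invented example values, in nM
print(geometric_mean_ci(ic50_nM))
```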
Results and discussion Acute oral and intravenous toxicity in rats FR-900098 acute toxicity testing in Wistar (WU) rats was carried out as a limit test with single doses of 3000 mg/kg body weight p.o. and 400 mg/kg body weight i.v. No clinical signs of toxicity were observed in either group. Furthermore, FR-900098 did not affect body weight gain in either group during the 14-day observation period. Therefore, the LD 50 (approximate lethal range) value of FR-900098 under the described conditions was > 3000 mg/kg body weight following oral administration and > 400 mg/kg body weight following intravenous administration. In a previous preliminary FR-900098 toxicity study in ICR mice, 5 animals received a single i.v. dose of 100 mg (4000-5000 mg/kg body weight). All mice survived without toxic symptoms during 14 d of observation after injection. 11 These data indicate that FR-900098 may also lack significant toxicity in humans at therapeutically relevant doses of 15-30 mg/kg body weight. Salmonella typhimurium reverse mutation assay (Ames test) The mutagenic potential of FR-900098 in bacteria was determined using the Ames test, which is part of the standard battery of genotoxicity tests for pharmaceuticals [ICH S2(R1)] and is therefore a regulatory requirement before novel drugs can be registered. The test is performed in different Salmonella typhimurium strains with mutations in genes involved in histidine synthesis. 27 Reverse mutation (his ¡ ! his C ) in the presence of mutagens restores the ability of bacteria to grow on histidine-free substrates. The use of tailor-made tester strains that specifically test for frameshifts (TA 98, TA 1537) and substitutions (TA 100, TA 102, TA 1535) in the genes required for histidine synthesis allows the detection of mutagens with different modes of action. A preliminary test with the S. typhimurium strain TA 100 was performed in order to examine the direct antibacterial activity of FR-900098 under the relevant assay conditions, employing a plate incorporation test without metabolic activation. The pretest was performed in duplicate with 10 concentrations ranging from 0.0316-1000 mg per plate. No revertant colonies appeared at 1000 and 316 mg per plate. At 100 mg per plate, the number of revertants was reduced (45 and 48 colonies on the 2 plates, respectively) compared to the solvent control (118 and 110 colonies, respectively). Therefore, we selected 100 mg per plate as the maximum concentration in the full series of tests. For the main study, 2 independent test designs were used (the plate incorporation and pre-incubation methods) and the experiments were each carried out with and without metabolic activation using rat liver S9-mix. In contrast to the preliminary test, there was no reduction of the number of revertant colonies at 100 mg per plate, which might reflect slight differences in the actual experimental conditions. Nevertheless, a scarce background lawn was indicative for the direct antibacterial activity of FR-900098 at this dose. The background lawn typically appearing in the Ames test results from limited growth of the non-revertant bacteria due to the trace of histidine added to the top agar. At all dose levels, FR-900098 did not increase the number of revertant colonies in any of the 5 test strains when compared with the negative control plates treated solely with the solvent dimethylsulfoxide (DMSO) regardless of the test design and the presence or absence of S9-mix (Table 1; Table 2). 
The positive controls with and without S9-mix resulted in the induction of revertant colonies in all 5 strains, indicating that the test worked correctly and that the S9-mix had sufficient activity. The inability of FR-900098 to induce the formation of revertant colonies therefore confirmed the lack of mutagenic activity of FR-900098 in the S. typhimurium test strains up to the concentration causing direct antibacterial activity. In vitro mammalian cell gene mutation test in mouse lymphoma L5178Y/TK C/¡ cells The genotoxicity of compounds can differ substantially depending on whether they are tested against bacteria or mammalian cells, so a bacterial mutagenicity assay (Ames test) must be complemented by an in vitro genotoxicity test using mammalian cells prior to the approval of new pharmaceuticals. The Ames test was thus complemented by an in vitro mouse lymphoma assay (MLA) using L5178Y/TK C/¡ cells, representing the second (mammalian-based) genotoxicity test in option 1 of the updated ICH S2(R1) guidance. This MLA is based on the quantification of forward mutations in the thymidine kinase (TK) locus induced by test substances. 28 L5178Y/TK C/¡ cells possess TK activity and are therefore sensitive to the cytotoxic effects of the nucleoside analog trifluorothymidine (TFT), which substitutes for thymidine in the salvage pathway. However, TK-deficient cells generated by the forward mutation TK C/¡ ! TK ¡/¡ are resistant to TFT and continue to grow, because thymidine can also be synthesized de novo. The cytotoxicity of FR-900098 toward L5178Y/TK C/¡ cells was evaluated in a pre-test in the presence and absence of S9-mix, using a broad range of concentrations (0, 3.2, 8, 20, 50, 125, 250, 500, 1000 and 2200 mg/ml) up to the limit concentration of 10 mM for nontoxic compounds set out in OECD Guideline No. 476 and ICH guidance S2(R1). After exposure to FR-900098 for 4 h, there were only minor indications of direct cytotoxicity both in the presence and absence of S9-mix, without evidence of a concentration-dependent effect. The maximum reduction of relative cell counts was observed at 20 mg/ml FR-900098 (to 63% of the vehicle control) in the absence of S9-mix, and at 50 mg/ml FR-900098 (to 76% of the vehicle control) in the presence of S9-mix. The relative cell counts at the top dose of 2200 mg/ml were 100% and 116% in the absence and presence of S9mix, respectively. These results indicated that FR-900098 was relatively non-toxic toward mouse lymphoma cells and that the requirement for the MLA to test up to 80-90% reduction in relative total growth (RTG) could not be achieved at concentrations up to the defined limit concentration. Therefore, a broad range of concentrations was also used for the main series of tests, covering both the OECD-defined limit concentration of 10 mM (approximately 2200 mg/ml) and low concentrations to detect hormesis-like phenomena. The main MLA experiments were carried out using FR-900098 concentrations of 1, 10, 25, 125, 250 and 2200 mg/ ml (without S9-mix), and 1, 10, 50, 250 and 2200 mg/ml (with S9-mix). Similar to the pre-test, there was no evidence for direct cytotoxicity of FR-900098 as judged by suspension growth (SG) and RTG (Table 3). Without S9mix, FR-900098 induced a marginal and not relevant increase in the mean mutant frequency (MF) after exposure for 4 h, i.e., 97.6 (D 128%) per 10 6 viable cells at 25 mg/ml and 98.7 (D 130%) at 125 mg/ml, compared to 76.0 (= 100%) for the negative control (Table 3). 
This remained within our historical range for negative controls (66.7-169.7; 13 independent experiments) and the corresponding negative control range of 50-170 resistant mutants per 10 6 viable cells, as proposed by Moore et al. 29 As expected, the positive control methyl methanesulfonate (MMS) caused a relevant increase in the mean MF, based on the concept of the Global Evaluation Factor, 29,30 which amounted to 475.5 mutants per 10 6 viable cells (D 626% compared to the negative controls). After exposure to FR- Mammalian erythrocyte micronucleus test The micronucleus test in mice is a cytogenetic in vivo assay with bone marrow erythrocytes. 32 Like the Ames test and the MLA, the micronucleus test is a prerequisite for the registration of new drugs and represents the in vivo part of the standard battery of genotoxicity tests for pharmaceuticals [ICH S2(R1)]. Micronuclei arise from chromosomal fragments or chromosomes that are not included in the daughter nuclei at cell division. They are easy to detect in young erythrocytes because the main nucleus is expelled a few hours after the final mitosis is completed, but micronuclei persist in the cytoplasm. For acute treatment regimens, micronuclei are analyzed in young erythrocytes to be sure they were induced by the test substance. Immature polychromatic erythrocytes (PCE) are less than 1 day old and contain fragments of nuclear material in the cytoplasm. This material consists mainly of RNA, which gradually disappears with maturation and stains blue with Giemsa. In contrast, mature normochromatic erythrocytes (NCE) stain pink with Giemsa due to the absence of RNA in the cytoplasm. The spontaneous incidence of micronucleated PCE in NMRI mice is »0.2%. Chemicals with chromosome-breaking (clastogenic) or spindle-disrupting (aneugenic) activity can increase the frequency of micronucleated PCE. Before the main micronucleus test in NMRI mice, a preliminary test with 5 male and 5 female animals was carried out using an FR-900098 dose of 2000 mg/kg body weight, representing the defined limit dose for non-toxic substances (OECD Guideline No. 474). The total dose was administered as 2 consecutive oral gavages of 1000 mg/kg body weight at intervals of 6 h in order to achieve prolonged exposure. FR-900098 showed no signs of toxicity, and there was no reduction in body weight. For the main test, a 3-dose study design was chosen: 2 £ 1000 mg/kg body weight (total 2000 mg/kg body weight), 2 £ 200 mg/kg body weight (total 400 mg/kg body weight) and 2 £ 40 mg/kg body weight (total 80 mg/kg body weight). This decision was taken for safety reasons, even though a limit test with the limit concentration alone may have been sufficient. The additional low and mid doses were included to exclude any potential confounding anti-proliferative effects of FR-900098 on bone marrow at the high limit concentration of 2000 mg/kg body weight. The p.o. treatment of 5 male and 5 female animals in each treatment group indicated no significant inhibition of blood formation in the bone marrow, 24 and 48 h after application, compared to the control animals (Table 4). In male animals at the intermediate dose, there was a slight but statistically significant induction of blood formation, amounting to 123 § 10.1 PCE per 200 red blood cells (RBC) compared to 110 § 8.1 PCE per 200 RBC for the corresponding negative control animals. However, absence of concentration-dependency means that this outcome is unlikely to be biologically relevant. 
In the present study, FR-900098 did not significantly increase the number of micronucleated PCE in male and female mice at any of the doses tested, either 24 or 48 h after oral treatment. A very slight trend toward a higher frequency of micronucleated PCE was observed, particularly in FR-900098 treated female mice after 24 h (2 £ 200 mg/kg body weight), but was also evident in female and male mice after 48 h (2 £ 1000 mg/kg body weight). Micronucleus frequencies amounted to 0.25 § 0.061, 0.24 § 0.119, and 0.25 § 0.122%, respectively, as compared to 0.12 § 0.097, 0.18 § 164, and 0.20 § 0.141% for the concurrent negative controls. These slightly higher mean values were not considered relevant, because e.g. the micronucleus frequency of the FR-900098 treated females, 24 h after administration, was still in the range of the respective historical negative control data (0.12-0.55% micronucleated PCE), with a very low group mean value of the concurrent negative controls of 0.12% § 0.097, compared to 0.24 § 0.14% for the respective historical mean value. In contrast, the positive control animals showed a statistically significant increase in the number of micronucleated PCE in the bone marrow, with group mean values of 5.57 § 1.717% for males and 3.72 § 0.690% for females, confirming that the test worked as expected. As sufficient systemic availability of FR-900098 after oral administration can be derived from former in vivo studies in the P. vinckei mouse model, 4,6 our results strongly suggest that FR-900098 does not have clastogenic and/or aneugenic effects in NMRI mice at the dose levels tested, agreeing with the in vitro mutagenicity tests. Conclusions In conclusion, the present study indicates that FR-900098 does not cause acute toxicity in rats, and there was no evidence for genotoxicity or mutagenicity in 2 different in vitro assays and one in vivo assay in mice. The exceptionally low toxicity of FR-900098 may not only reflect the absence of the molecular target DXP reductoisomerase in mammalian cells, but may also indicate a specific uptake mechanism in erythrocytes infected with P. falciparum, which appears to be absent in mammalian cells. Radiolabeled FR-900098 does not penetrate human fibroblasts or uninfected erythrocytes, but enters infected erythrocytes, because the parasite can influence cell membrane permeability. 33 FR-900098 has a novel and highly specific mode of action, low toxicity in rats, and no significant genotoxicity according to the comprehensive assays required by both the former ICH S2A and S2B guidelines and the current ICH S2(R1) guidance on genotoxicity testing of pharmaceuticals intended for human use, thus strongly encouraging the further development of FR-900098 as an antimalarial drug. 73226-73-0) was synthesized using a combination of previously described methods. 34,35 Infrared spectrometry and 1 H-, 13 C-and 31 P-nuclear magnetic resonance spectrometry were used to confirm the identity of the substance. The purity of batch AR62 used in these experiments was 97.6% as determined by ion chromatography and photometry. For the in vivo tests, FR-900098 was dissolved in water and homogenized by ultrasonication for 15 min in a water bath directly before application. For the Ames test, FR-900098 was dissolved in DMSO for sterilization and was further diluted to the desired concentration using sterile water. For the mouse lymphoma assay, FR-900098 was accurately weighed and initially dissolved in pure ethanol in a sterile tube to avoid bacterial contamination. 
The ethanol was then allowed to evaporate under a sterile hood. Directly before use, FR-900098 was dissolved in treatment medium (culture medium with 5% rather than 10% heat-inactivated horse serum) by stirring, and then diluted to the desired concentrations. The stability of FR-900098 was confirmed both in ethanol and aqueous solution. Relevant guidelines and regulations at the time of testing All animal experiments complied with the regulations of the German Animal Protection Law (Tierschutzgesetz, May 18, 2006). The acute oral and intravenous toxicity study with rats and the genotoxicity modules were both conducted in compliance with the Principles of Good Laboratory Practice (GLP, German Chemical Law x 19a, Appendix 1, July 02, 2008) and with the appropriate Organization for Economic Cooperation and Development (OECD) Guidelines for the Testing of Chemicals Both experimental designs were carried out with and without metabolic activation using S9-mix, consisting of a post-mitochondrial fraction (S9 fraction) and the corresponding co-factors. 27 The S9 fraction was prepared from rats treated with Aroclor 1254 (Analabs, USA), as described by Maron In vitro mammalian cell gene mutation test with mouse lymphoma L5178Y/TK C/¡ cells The mutagenic potential of FR-900098 in mammalian cells was determined in vitro using the microwell method of the mouse lymphoma TK mutation assay (MLA), according to Honma et al. 28 Heterozygous L5178Y/TK C/¡ mouse lymphoma cells (clone 3.7.2C; the model system) were provided by Dr. Heike Schramke (Philip Morris Research Laboratories GmbH, Germany). The cells were cultured in RPMI-1640 medium, containing 2 mM glutamine supplemented with 100 U/ml penicillin G, 100 mg/ml streptomycin sulfate, and 10% heat-inactivated horse serum (all components were purchased from GIBCO/ Invitrogen, Germany) at 37 C and 5% CO 2 in a humidified atmosphere. Prior to use, spontaneous TK ¡/¡ mutants were removed as described by Clive and Spector. 37 Cultures of proliferating L5178Y/TK C/¡ cells (1 £ 10 7 ) in 20 ml treatment medium were exposed to FR-900098 at concentrations of 1, 10, 25, 125, 250 and 2200 mg/ml (without S9-mix) and 1, 10, 50, 250 and 2200 mg/ml (with S9-mix). These concentrations were based on a preliminary cytotoxicity test complying with the relevant guidelines. Negative controls (medium alone) and positive controls (methyl methanesulfonate for assays without metabolic activation and cyclophosphamide monohydrate for assays with metabolic activation; both substances from Sigma, Germany) were included. The cells were exposed for 4 h in the presence or absence of S9 fraction from phenobarbital and b-naphthoflavone treated rats (RCC Cytotest Cell Research GmbH, Germany) and the appropriate co-factors, as described by Maron and Ames. 36 The cells were subsequently washed and subcultured for 48 h to determine cytotoxicity (cell counts post-treatment and over the 2-day expression period as well as plating efficiency) and to allow for phenotypic expression (with daily cell population adjustment to 6 £ 10 6 cells/30 ml of culture medium) prior to mutant selection. The cytotoxicity directly after the treatment period (survivor I) and after the expression period (survivor II) was determined by plating »1. Mammalian erythrocyte micronucleus test The genotoxic potential of FR-900098 in vivo was determined using the bone marrow micronucleus test in NMRI mice as described by Hayashi et al. 
32 Young adult male and female NMRI mice (8-12 weeks at delivery) were obtained from Harlan Winkelmann (Germany) and were randomized by weight into the different treatment groups of 5 male and 5 female animals. Body weights were recorded at arrival, prior to treatment and before bone marrow preparation. Before administration of the test and reference substances, the animals were starved overnight. FR-900098 was dissolved in water and administered at a dose of 10 ml/kg body weight by oral gavage using a stomach tube. Three dose groups were used. Because the limit dose of 2000 mg/kg body weight (administered at 2 doses of 1000 mg/kg body weight) did not induce clinical signs of toxicity in a preliminary toxicity test, the limit dose was chosen as maximum dose, 2 £ 200 mg/kg body weight was chosen as the mid-range dose, and 2 £ 40 mg/kg body weight was chosen as the low dose. In each case, the 2 doses were administered at intervals of 6 h. The positive control cyclophosphamide monohydrate was administered once orally 24 h before sacrifice at a dose of 60 mg/kg body weight. After administration, animals were observed at defined intervals of 0.5, 2.5, 5 and 24 h after the first dose, and 0.5, 2.5, and 24 h following the second dose to promptly detect toxic effects and treatment-related suffering. Bone marrow was sampled 24 and 48 h after the first dose of FR-900098. At the first sampling interval, animals in all 5 treatment groups (negative control, positive control and 3 FR-900098 dose levels) were prepared for necropsy. At the second sampling interval, additional animals in the highest dose group as well as additional negative control animals were prepared for necropsy. Two femurs were isolated from each mouse, the ends of the femurs were removed, and the bone marrow was transferred to a tube by washing out with fetal calf serum. The bone marrow suspension was gently pulled up and down in the tube to achieve a fine cell suspension. The bone marrow was then centrifuged for 5 min and most of the supernatant was discarded. The cell pellet was carefully re-suspended in a small volume of fetal calf serum, yielding about 2 drops of bone marrow cell suspension per animal. From this suspension 2 smears (A and B) were prepared on defatted clean slides. The smears were air-dried for 24 h and stained with May-Gr€ unwald and Giemsa solutions. The slides were coded prior to analysis. For each animal, the number of micronucleated cells per 2000 PCE was determined and the number of PCE and NCE per 200 erythrocytes was scored to determine the toxic effects of FR-90098 on bone marrow cells and thus blood formation. [38][39][40] The slides were decoded post-analysis. Disclosure of potential conflicts of interest No potential conflicts of interest were disclosed.
Cyclin-dependent Kinase-9 Is a Component of the p300/GATA4 Complex Required for Phenylephrine-induced Hypertrophy in Cardiomyocytes* A zinc finger protein GATA4 is one of the hypertrophy-responsive transcription factors and forms a complex with an intrinsic histone acetyltransferase, p300. Disruption of this complex results in the inhibition of cardiomyocyte hypertrophy and heart failure in vivo. By tandem affinity purification and mass spectrometric analyses, we identified cyclin-dependent kinase-9 (Cdk9) as a novel GATA4-binding partner. Cdk9 also formed a complex with p300 as well as GATA4 and cyclin T1. We showed that p300 was required for the interaction of GATA4 with Cdk9 and for the kinase activity of Cdk9. Conversely, Cdk9 kinase activity was required for the p300-induced transcriptional activities, DNA binding, and acetylation of GATA4. Furthermore, the kinase activity of Cdk9 was required for the phosphorylation of p300 as well as for cardiomyocyte hypertrophy. These findings demonstrate that Cdk9 forms a functional complex with the p300/GATA4 and is required for p300/GATA4- transcriptional pathway during cardiomyocyte hypertrophy. Heart failure results from a variety of cardiovascular disorders including myocardial infarction and hypertension, and is a principal cause of death and disability in humans (1). A major morphogenic change in failing hearts is hypertrophy of each cardiomyocyte, an increase in its cell volume (2). Hence, intense investigation has focused on elucidating the mechanisms of cardiomyocyte hypertrophy that eventually leads to the development of heart failure. At the transcriptional level, cardiomyocyte hypertrophy is characterized by changes in specific gene expressions controlled by a subset of hypertrophy-responsive transcription factors including MEF2, SRF, and a zinc fin-ger protein, GATA4 (3,4). GATA4 functionally and physically interacts with other transcription factors, including NFAT-3, GATA6, MEF-2, STAT, and SRF (5)(6)(7)(8)(9). Whereas these interactions regulate the transcriptional potential of GATA4 downstream of hypertrophy signaling pathways, disruption of this complex results in the inhibition of hypertrophic responses. Therefore, identifying novel GATA4 binding partners is critical to elucidate the precise mechanisms that mediate hypertrophic responses in cardiac myocytes. A transcriptional co-activator, p300, also directly interacts with GATA4 to synergistically activate the atrial natriuretic factor (ANF) 2 and ␤-myosin heavy chain (␤-MHC) promoters during myocardial cell hypertrophy (10,11). Through its histone acetyltransferase (HAT) activity, p300 acetylates not only histone to promote an active chromatin configuration, but also GATA4 to increase its DNA binding and transcriptional activities (12). HAT activity of p300 is required for myocardial hypertrophy in vitro and for the promotion of left ventricular remodeling in vivo (13). Recently, we and others have reported that a natural p300-specific HAT inhibitor, curcumin, prevents the development of cardiomyocyte hypertrophy and heart failure in vivo, further emphasizing the importance of p300 in these processes (14,15). In addition, p300 acts as a scaffold protein in the assembly of multisubunit transcription factor complexes for specific cardiac promoters, thereby conferring further specificity. Interestingly, the characterized interaction of GATA4 with FOG-2 is mediated through p300 (16). 
Hence, p300 may play a central role in the functional and physical interactions of multiple transcription factors with GATA4 and facilitate the formation of multisubunit complexes. A mechanism for hypertrophic growth must involve a global increase in the RNA content per cell. Among several regulatory factors that specifically target transcriptional elongation, positive transcription elongation factor b (P-TEFb) induces hyperphosphorylation of the C-terminal domain in RNA pol II, a critical, essential step to produce messenger RNA (17). P-TEFb, a heterodimer composed of cyclin-dependent kinase 9 (Cdk9) and cyclin T1 (or the minor forms T2 or K) (18), not only plays an important role in most RNA pol II-dependent transcription (17,18), but also is recruited to cellular promoters by interacting with a variety of transcription factors. However, the mechanisms regulating the recruitment of P-TEFb to cardiac hypertrophy-responsive promoters remain poorly understood. To identify novel binding partners of GATA4, we used a proteomics strategy to purify the GATA4 complex. Herein, we report that Cdk9 is a novel component of the p300/GATA4 complex and is required for the p300-induced acetylation of GATA4. Moreover, Cdk9 contributed to hypertrophy-responsive transcription through its recruitment of the p300/GATA4 complex to the transcriptional machinery. Purification of the GATA4 Complexes-The GATA4 complexes were purified from nuclear extracts prepared from HeLa cells expressing the mouse GATA4 protein fused with N-terminal FLAG and HA epitope tags (e-GATA4) by immunoprecipitation on anti-FLAG antibody-conjugated agarose. The bound polypeptides were eluted with the FLAG peptide and were further affinity-purified by anti-HA antibody-conjugated agarose, as described (22). Mass spectrometry was performed by the Taplin Biological Mass Spectrometry Facility, Cell Biology, Harvard Medical School. Data analysis was performed with the ProFound software. In Vitro Binding Assay-GST fusion proteins were immobilized on glutathione-agarose beads (GE Healthcare) and used for in vitro protein interaction assays. Portions (10 μl) of glutathione-agarose beads bearing equal amounts of either GST or the fusion proteins (1 to 2 μg) were mixed with His6 fusion proteins (300 ng) in 200 μl of binding buffer (20 mM Tris-HCl, pH 8.0, 100 mM KCl, 5 mM MgCl2, 0.2 mM EDTA, 10% glycerol, 0.1% Tween 20, 10 mM 2-mercaptoethanol, and 0.25 mM phenylmethylsulfonyl fluoride). The binding reaction mixtures were gently rocked on a rotating wheel at room temperature for 1 h. The beads were then washed four times with 600 μl of the same buffer, resuspended in NuPAGE LDS sample buffer (Invitrogen), and analyzed by SDS-PAGE. GST-fused proteins were visualized by Coomassie Brilliant Blue staining. His6-tagged proteins were analyzed by blotting with individual antibodies: anti-GATA4 antibody, anti-Cdk9 antibody, and anti-HA antibody for the p300 fragment. Transfection and Dual-Luciferase Assays-Primary neonatal rat cardiac myocytes were prepared and co-transfected with the indicated amounts of DNA using Lipofectamine Plus (Invitrogen), as described previously (16). COS7 cells were maintained and transfected with DNA using FuGENE 6 reagent (Roche Diagnostics), as described previously (20). Activities of firefly and sea pansy luciferase were measured in the same cell lysate using the PicaGene Dual kit (TOYO B-Net). The relative promoter activities were calculated as the ratio of firefly to sea pansy luciferase.
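The dual-luciferase readout described above reduces to a simple per-well normalization: firefly counts (promoter activity) divided by sea pansy (Renilla) counts from the same lysate, then expressed relative to a control transfection. A minimal sketch of that arithmetic, using made-up luminescence counts rather than any values from the paper:

```python
from statistics import mean, stdev

def relative_activity(firefly, renilla):
    """Normalize firefly (promoter) signal to the sea pansy (Renilla)
    transfection control measured in the same lysate."""
    return [f / r for f, r in zip(firefly, renilla)]

def fold_activation(treated_ratios, control_ratios):
    """Express treated wells as fold change over the mean control ratio."""
    baseline = mean(control_ratios)
    return [r / baseline for r in treated_ratios]

# Hypothetical raw luminescence counts from triplicate wells (not values
# from the paper), illustrating the ratio calculation only.
control = relative_activity([12000, 11500, 12800], [250000, 240000, 260000])
gata4_p300 = relative_activity([61000, 58000, 66000], [255000, 245000, 252000])

folds = fold_activation(gata4_p300, control)
print(f"fold activation = {mean(folds):.1f} +/- {stdev(folds):.1f} (n = 3)")
```

Normalizing within the same lysate cancels well-to-well differences in transfection efficiency and cell number, which is why the firefly/sea pansy ratio rather than the raw firefly signal is reported.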
Immunoprecipitation and Western Blotting-Nuclear extracts were prepared from HeLa cells, COS7 cells, or cardiac myocytes, and immunoprecipitation and Western blots were performed as described previously (16). For immunoprecipitation, mouse monoclonal FLAG (M5) antibody was purchased from Sigma, rabbit polyclonal anti-p300 (a mixture of N-15 and C-20), goat polyclonal anti-Cdk9, and rabbit polyclonal anti-GATA4 antibodies were from Santa Cruz Biotechnology, and normal mouse, rabbit, or goat IgG were from Jackson ImmunoResearch Laboratories. For Western blots, rabbit polyclonal antibody against acetylated lysine was from Cell Signaling; rabbit anti-GATA4 polyclonal antibody, rabbit polyclonal anti-Cdk9 antibody, rabbit polyclonal anti-cyclin T1 antibody, mouse monoclonal anti-HA probe antibody for the p300 fragment, rabbit polyclonal anti-p300 antibody, and rabbit polyclonal anti-RNA pol II antibody were from Santa Cruz Biotechnology; mouse monoclonal anti-phospho-Ser/Thr-Pro antibody was from Upstate; mouse monoclonal anti-β-actin antibody was from Sigma; and mouse monoclonal anti-GAPDH antibody was from Molecular Probes. For the analysis of the total amount of GATA4 after immunoprecipitation, the membrane was reprobed with goat polyclonal anti-GATA4 antibody (Santa Cruz Biotechnology). The levels of signals were estimated using photographs taken with LAS1000 plus (FUJIFILM) and by quantification with Multi Gauge V3.0 (FUJIFILM). Chromatin Immunoprecipitation (ChIP) Assay and Re-precipitation (re-ChIP) Assay-Primary cardiac myocytes (~1 × 10^6) were treated with 30 μM PE or saline. One hour after stimulation, ChIP assays were performed using the ChIP assay kit (Upstate Biotech), as previously described (23), with the following modifications. In brief, after fixation of the genomic DNA and nuclear proteins with formalin, extracts were sonicated, subsequently immunoprecipitated with goat polyclonal anti-GATA4 antibody (Santa Cruz Biotechnology), goat polyclonal anti-Cdk9 antibody (Santa Cruz Biotechnology), or control goat IgG, and immunocomplexes were captured by adding protein G beads. After the precipitates were washed four times in a low stringency buffer, DNA was purified by phenol-chloroform extraction and precipitated with ethanol. To detect the ANF promoter containing a GATA site, collected DNA was subjected to PCR analysis using a thermal cycler with the specific primers for the ANF promoter. For quantitative real-time PCR, the reaction was performed with a SYBR Green PCR master mix (Applied Biosystems), and the products were analyzed with a thermal cycler (ABI Prism 7900HT sequence detection system). Levels of GAPDH transcript were used to normalize cDNA levels. Sequences of the primers were as follows: 5′-CTGAGGCGAGCGCCCAGGAAGATA-3′ (sense for the rat ANF promoter), 5′-AAGATGCCCTTTTAAAGTTATCAG-3′ (antisense for the rat ANF promoter), 5′-GTCATTGAGAGCAATGCCAG-3′ (sense for the rat GAPDH promoter), and 5′-GTGTTCCTACCCCCAATGTG-3′ (antisense for the rat GAPDH promoter). Re-ChIP assays were performed as previously described (24), with the following modifications. In brief, the primary immunocomplexes obtained with rabbit polyclonal anti-p300 (a mixture of N-15 and C-20) or goat polyclonal anti-Cdk9 antibodies were eluted by 10 mM dithiothreitol with agitation at 37°C for 30 min. The eluate was diluted 20 times with re-ChIP buffer (0.01% SDS, 1% Triton X-100, 1 mM EDTA, 150 mM NaCl, 15 mM Tris-HCl, pH 7.9) and immunoprecipitated with rabbit polyclonal anti-GATA4 antibody.
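The ChIP-qPCR step above states only that GAPDH transcript levels were used to normalize the ANF-promoter signal; the exact formula is not given, so the sketch below shows one common way such normalization could be computed (a ΔΔCt-style comparison against the IgG control), with hypothetical Ct values rather than data from the paper:

```python
def relative_enrichment(ct_target_ip, ct_ref_ip, ct_target_ctrl, ct_ref_ctrl):
    """GAPDH-normalized enrichment of the ANF GATA element, expressed as
    fold over the IgG (or saline) control: 2^-(dCt_IP - dCt_control)."""
    d_ct_ip = ct_target_ip - ct_ref_ip        # ANF promoter vs GAPDH, specific IP
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # same contrast in the control IP
    return 2 ** -(d_ct_ip - d_ct_ctrl)

# Hypothetical Ct values (not from the paper) for a GATA4 ChIP on
# PE-stimulated myocytes versus a goat IgG control.
fold = relative_enrichment(ct_target_ip=26.0, ct_ref_ip=24.5,
                           ct_target_ctrl=30.5, ct_ref_ctrl=25.0)
print(f"ANF GATA-element enrichment over IgG: {fold:.1f}-fold")
```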
Immunocytochemistry and Measurement of Cell Surface Area-The cardiac myocytes were grown in flask-style chambers with glass slides (Nalgen Nunc) and stained with anti-β-MHC antibody that reacts with both α- and cardiac β-MHC using the indirect immunoperoxidase method, as previously described (12). Then, the surface area of these cells was measured semiautomatically with the aid of an image analyzer (Image Pro-Plus), as described previously (12). Construction and Production of Lentiviral Vector Expressing shRNA-targeting Cdk9 Gene-The BLOCK-iT Lentiviral RNAi Pol II miR Expression System with GFP (Invitrogen) was used for the construction of the lentiviral expression construct according to the manufacturer's instructions. Short pairs of sense and antisense DNA oligos encoding a sense-loop-antisense sequence targeting the rat Cdk9 gene were synthesized for the validated corresponding shRNA, and the sequences are as follows: 5′-TGCTGAGTAGAGGCCATTCAGCAGCAGTTTTGGCCACTGACTGACTGCTGCTGTGGCCTCTACT-3′ (sense for the rat shRNA-Cdk9) and 5′-CCTGAGTAGAGGCCACAGCAGCAGTCAGTCAGTGGCCAAAACTGCTGCTGAATGGCCTCTACTC-3′ (antisense for the rat shRNA-Cdk9). The complementary DNA oligos were annealed and ligated to the entry vectors and subcloned into the pLenti6.3/V5-DEST vector (Invitrogen). All the cloned sequences were confirmed by DNA sequencing. The recombinant lentiviral vectors and the pLenti6.3/V5-GW/EmGFP vector, as a control, were individually co-transfected with ViraPower packaging mix (Invitrogen) by using Lipofectamine 2000 and packaged into pseudotyped lentivirus by using 293FT cells. Viral supernatants were harvested 48 h after transfection and filtered through a 0.45-μm filter. Statistical Analysis-Data are presented as the mean ± S.E. Statistical comparisons were performed with the use of unpaired 2-tailed Student t-tests or ANOVA with Scheffé's test when appropriate, with a probability value <0.05 taken to indicate significance. RESULTS Purification of the GATA4 Complex-In this study, the tandem affinity purification method (22) (supplemental Fig. S1) was employed to identify proteins that are associated with GATA4. A murine GATA4 cDNA tagged with HA and FLAG at the N terminus (e-GATA4) was cloned into a retroviral expression vector. By transducing this vector into HeLaS3 cells, GATA4 was stably expressed in these cells. Nuclear extracts from these cells were subjected to immunoprecipitation with anti-FLAG and anti-HA antibodies. Then, the immunopurified GATA4 complexes were separated on SDS gels (Fig. 1A). The identity of these bands was determined by liquid chromatography tandem mass spectrometry. One of the proteins purified with e-GATA4 was Cdk9, a component of P-TEFb. GATA4 formed a functional complex with p300 during cardiomyocyte hypertrophy. Therefore, we examined the binding of p300/Cdk9 in addition to p300/GATA4 and GATA4/Cdk9 in HeLa cells expressing GATA4 with FLAG epitope peptides. Furthermore, we examined the binding of cyclin T1, another component of P-TEFb, with p300 and GATA4 as well as Cdk9. Nuclear extracts from HeLa cells were subjected to immunoprecipitation with anti-FLAG, anti-Cdk9, or anti-p300 antibodies, followed by Western blotting with each of these antibodies and with anti-cyclin T1 antibody. As shown in Fig. 1B, we observed the interaction of p300/GATA4 (compare lanes 3 and 4 in the 4th panel of B) and GATA4/Cdk9 (compare lanes 3 and 4 in the 2nd panel of B), as expected. Cdk9 physically interacted with not only GATA4 but also p300 (lanes 9 and 10 in the 1st and 4th panels of B).
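Under Statistical Analysis above, two-group comparisons use unpaired two-tailed Student's t tests and multi-group comparisons use ANOVA with Scheffé's test, with p < 0.05 as the significance threshold. The sketch below is a minimal illustration of both tests on hypothetical densitometry values (not data from the paper); the Scheffé post hoc step is noted but not implemented here:

```python
from scipy import stats

# Hypothetical normalized band intensities (e.g. acetyl-GATA4/total GATA4)
# from three independent experiments per condition; not data from the paper.
p300_plus_cdk9    = [1.00, 1.12, 0.95]
p300_plus_dn_cdk9 = [0.42, 0.38, 0.51]
control           = [0.20, 0.25, 0.18]

# Two groups: unpaired two-tailed Student's t test.
t_stat, p_two = stats.ttest_ind(p300_plus_cdk9, p300_plus_dn_cdk9)
print(f"t test: t = {t_stat:.2f}, p = {p_two:.4f}")

# Three or more groups: one-way ANOVA; a significant result (p < 0.05)
# would then be followed by Scheffe's post hoc pairwise comparisons.
f_stat, p_anova = stats.f_oneway(p300_plus_cdk9, p300_plus_dn_cdk9, control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```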
Moreover, cyclin T1 formed a complex with p300 and GATA4 as well as Cdk9 (lanes 3-11 in the 3rd panel of B). By GST pull-down assay using His6-GATA4 and GST-Cdk9 (Fig. 1C) or His6-Cdk9 and GST-GATA4 (Fig. 1D), we confirmed that GATA4 and Cdk9 directly and physically interact with each other. To determine the domain of each protein involved in the interaction between GATA4 and Cdk9, a series of GST pull-down assays was performed. We fused various N-, C-, or both N- and C-terminal deletion mutants with GST (Fig. 2, A and C). As shown in Fig. 2B, deletion of the N-terminal region of GATA4 up to residues 179 did not affect its ability to bind to Cdk9. However, the deletion of GATA4 residues 180-255 abrogated the interaction with Cdk9, whereas a small GATA4 fragment containing residues 180-255 was able to bind to Cdk9. These findings suggest that GATA4 amino acid sequences 180-255 containing the N-terminal zinc finger domain are crucial for its interaction with Cdk9, although the data do not exclude the possibility that GATA4-(N1-179) containing the transactivation domain has a binding site for Cdk9. Next, we performed opposite experiments to determine the GATA4-binding site within Cdk9. Deletion of the N-terminal region of Cdk9 up to residue 82 did not affect its ability to bind to GATA4. The deletion of Cdk9 residues 83 to 129 abrogated the interaction with GATA4 (Fig. 2D). This suggests that the fragment 83-129 of Cdk9 includes sequences that bind to GATA4. Because the C/H-3 domain of p300 has many binding sites for sequence-specific transcription factors, we used a GST pull-down assay to examine the binding between Cdk9 and the p300 fragment (1514-1922) containing the C/H-3 domain (25). This p300 fragment (1514-1922) inhibits the function of endogenous p300 as a dominant-negative mutant in many transcriptional pathways. In cardiomyocytes, this fragment suppresses hypertrophic responses as well as GATA4-dependent transcription (12). As shown in Fig. 1E, this p300 fragment tagged with GST was pulled down with Cdk9. On the other hand, as revealed in Fig. 1F, Cdk9 tagged with GST was pulled down with the p300 fragment tagged with the HA probe. To determine a Cdk9 domain that interacts with p300, we used a series of Cdk9 deletion mutants tagged with GST and the HA-tagged p300-(1514-1922) fragment (Fig. 2C). As shown in Fig. 2E, whereas the full length of Cdk9 and the N-terminal region of Cdk9 (residues 1-82) bound p300, deletion of the N-terminal Cdk9 region, residues 1-82, abrogated the interaction with p300. These observations suggest that fragment 1-82 of Cdk9 includes sequences that bind to p300. We investigated whether GATA6, another member of the GATA family involved in hypertrophic responses in cardiomyocytes (26,27), can bind to Cdk9. COS7 cells were transfected with either GATA4 or GATA6 in addition to Cdk9. Nuclear extracts from these cells were subjected to immunoprecipitation with anti-Cdk9, or IgG as a control, followed by Western blotting with anti-GATA4, anti-GATA6, or anti-Cdk9 antibody. The results clearly demonstrated that not only GATA4 but also GATA6 interacted with Cdk9 (supplemental Fig. S2D). p300 Is Involved in Cdk9 Activities-A fragment of p300-(1514-1922) is able to interact with GATA4, but does not possess HAT activity. This fragment blocks the binding between p300 and GATA4, represses the p300-induced acetylation of GATA4, and reduces GATA4-dependent transcriptional activity, thus serving as a dominant-negative mutant.
To determine whether p300-(1514 -1922) will affect the ability of Cdk9 to interact with GATA4 and to phosphorylate RNA pol II, expression plasmids encoding FLAG-tagged GATA4, full-length p300, and HA-tagged p300-(1514 -1922) were co-transfected into COS7 cells, as indicated in Fig. 3, A and B. To monitor RNA pol II phosphorylation, we performed FIGURE 1. Cdk9 is a novel binding partner of p300/GATA4. A, silver staining of the mouse GATA4 complex. Nuclear extracts prepared from HeLa cells expressing FLAG-HA-epitope-tagged mouse GATA4 (lane 2) or those expressing FLAG-HA tag (lane 1) were immunoprecipitated with anti-FLAG and anti-HA antibodies. Immunopurified GATA4 complexes were fractionated on SDS-PAGE. Bands were analyzed by mass spectrometry. Arrow, e-GATA4 B, nuclear extracts from HeLa cells expressing FLAG-tagged GATA4 were immunoprecipitated with anti-FLAG, anti-Cdk9, or anti-p300 antibody, followed by Western blotting with indicated antibodies. C and D, GST fusions were incubated with GATA4 (C) or Cdk9 (D) and precipitated by glutathione-agarose affinity chromatography. Bounded proteins were immunoblotted with anti-GATA4 or anti-Cdk9 antibody (top). Coomassie Blue staining of GST proteins is shown at the bottom. E and F, GST fusions were incubated with Cdk9 (E) or HA-tagged p300-(1514 -1922) (F), and precipitated by glutathione-agarose affinity chromatography. Proteins bound to GST fusion mutants were immunoblotted with anti-Cdk9 or anti-HA antibody (upper panel). Coomassie Blue staining of GST proteins is shown at the lower panel. Western blotting with antibody that recognizes hyperphosphorylated (IIo) and hypophosphorylated (IIa) RNA pol II. Nuclear extracts from these cells were subjected to immunoprecipitation with anti-FLAG antibody. Western blotting of these precipitates demonstrated that the expression of p300-(1514 -1922) completely inhibited p300-induced GATA4 acetylation, as expected (1st panel of Fig. 3B, compare 2nd and 3rd bars of Fig. 3D). The co-expression of p300 in addition to GATA4 and Cdk9 induced the phosphorylation of RNA pol II (5th and 6th panels of Fig. 3A, 1st and 2nd bars of Fig. 3C) and increased the interaction between GATA4 and Cdk9 (4th panel of Fig. 3B, 1st and 2nd bars of Fig. 3E). However, the expression of p300-(1514 -1922) in addition to p300 inhibited p300-induced RNA pol II phosphorylation (5th and 6th panels of Fig. 3A, 2nd and 3rd bars of Fig. 3C) and disrupted the GATA4/Cdk9 interaction (4th panel of Fig. 3B, 2nd and 3rd bars of Fig. 3E). The expression of p300 and p300-(1514 -1922) did not influence the amount of GATA4 (1st panel of Fig. 3A) or Cdk9 (3rd panel of Fig. 3A). These results suggest that p300 is involved in the phosphorylation of RNA pol II and in the ability of GATA4 to interact with Cdk9. Kinase Activity of Cdk9 Is Required for Phosphorylation of p300, p300-induced Acetylation of GATA4, and Transcription of GATA4-To determine the role of Cdk9 kinase activity in the formation of the p300/GATA4 complex and p300induced acetylation of GATA4, we utilized a dominant-negative form of Cdk9 (DN-Cdk9) lacking kinase activity due to a single amino acid substitution. Expression plasmids encoding FLAG-tagged GATA4, p300, and Cdk9 or DN-Cdk9 were transfected into COS7 cells. Nuclear extracts from these cells were subjected to Western blotting or immunoprecipitation with anti-FLAG antibody followed by Western blotting. In the presence of wildtype Cdk9, p300 induced the phosphorylation of RNA pol II (4th and 5th panels of Fig. 
4A, 1st and 2nd bars of Fig. 4D), promoted the formation of the p300/GATA4/ Cdk9 complex (3rd and 4th panels of Fig. 4B), and increased GATA4 acetylation (1st panel of Fig. 4B, 1st and 2nd bars of Fig. 4E). However, in the presence of DN-Cdk9 instead of wild-type Cdk9, p300 was unable to achieve these changes (4th and 5th panels of Fig. 4A, 2nd and 3rd bars of Fig. 4D, 1st, 3rd, and 4th panels of Fig. 4B, 2nd and 3rd bars of Fig. 4E). These observations demonstrate that Cdk9 kinase activity is required for the p300-induced acetylation of GATA4. Because Cdk9 binds to p300, we hypothesized that Cdk9, by its kinase activity, phosphorylates p300. To test this hypothesis, the same nuclear extracts used for Fig. 4, A and B were immu- Cdk-9 and p300/GATA4 in Cardiomyocytes noprecipitated with anti-p300 antibody followed by Western blotting using anti-phospho-serine/threonine antibody. DN-Cdk9 reduced the phosphorylation of full length p300, indicating that Cdk9 kinase activity is required for the phosphoryla-tion of p300 (1st panel of Fig. 4C). Next, to determine whether Cdk9 can phosphorylate the p300 domain that interacts with Cdk9, the FLAGtagged expression plasmid encoding the p300-(1514 -1922) fragment and the Cdk9 or DN-Cdk9 expression plasmid were co-transfected into COS7 cells, as indicated in Fig. 5, A and B. Nuclear extracts from these cells were subjected to immunoprecipitation with anti-FLAG antibody, followed by Western blotting. Wild-type Cdk9 induced phosphorylation of the p300-(1514 -1922) fragment (1st panel of Fig. 5B) and promoted the formation of p300-(1514 -1922)/Cdk9 (3rd panel of Fig. 5B). However, while DN-Cdk9 interacted with p300 (3rd panel of Fig. 5B), DN-Cdk9 was unable to induce the phosphorylation of the p300 fragment (1st panel of Fig. 5, B and C). These observations demonstrate that Cdk9 kinase activity is required for the phosphorylation of p300. To assess whether Cdk9 kinase activity affects the p300-induced acetylation of histones, expression plasmids encoding full-length p300 and Cdk9 or DN-Cdk9 were co-transfected into COS7 cells, as shown in Fig. 4F. Protein extracts from these cells were subjected to immunoblotting with an antibody against the acetylated form of histone-3. DN-Cdk9 inhibited p300induced acetylation of histone-3 (1st panel of Fig. 4, F and G). Then, to investigate whether Cdk9 kinase activity is required for p300-mediated activation of the GATA4-dependent promoter activities, we performed reporter assays in COS7 cells. GATA4 and p300 synergistically activated ANF and ET-1 promoters. DN-Cdk9 significantly inhibited the p300/GATA4induced activities of the ANF (Fig. 5D) and ET-1 (Fig. 5F) promoters in a dose-dependent manner. Because these promoter sequences contain binding sites for GATA4, we investigated the role of these sites in p300/GATA4-induced transcription. The mutation of GATA sites within ANF (Fig. 5E) and ET-1 (Fig. 5G) promoters abolished p300/GATA4-responsive transcription. Taken together, these data demonstrate that FIGURE 3. p300 is involved in the ability of Cdk9 to phosphorylate RNA pol II and bind with GATA4. A, COS7 cells were co-transfected with 6 g of pCMVwtp300, 4 g of pcDNA3. 2/V5-DEST-HA-p300-(1514 -1922), 1 g of pcDNA3.2/V5-DEST-FLAG-GATA4, and 1 g of pcDNA-hCdk9, as indicated. The total DNA content was equalized in each sample with pCMV-␤-gal. Nuclear extracts from these cells were subjected to Western blotting with indicated antibodies. 
B, immunoprecipitated samples with anti-FLAG antibody were subjected to Western blotting with indicated antibodies. C-E, amounts of hyperphosphorylated-RNA polymerase II (pol IIo)/hypophosphorylated-RNA polymerase II (pol IIa) (C) from the 6th and 7th panels of A, acetylated-GATA4/total-GATA4 (D) from the 1st and 2nd panels of B, and GATA4-associated Cdk9/total-GATA4 (E) from the 2nd and 4th panels of B, were quantified by densitometry with the use of Multi Gauge V3.0 (FUJIFILM). The data shown are the mean Ϯ S.D. from three independent experiments. Cdk9 kinase activity is required for the p300-induced DNA binding and the transcriptional activation of GATA4. Cdk9/p300/GATA4 Complex Is Recruited onto the ANF Promoter in Phenylephrine (PE)-stimulated Cardiomyocytes-To determine whether Cdk9 and p300 are recruited on the GATA4dependent regulatory region (GATA element), we performed re-chromatin immunoprecipitation (re-ChIP) (supplemental Fig. S3A), DNA pull-down assay (supplemental Fig. S3B), and electrophoretic mobility shift assays (EMSAs) (supplemental Fig. S4). These date indicate that the association of Cdk9 and p300 with GATA4 on the GATA element of hypertrophic response genes. Next, to investigate whether the endogenous Cdk9/p300/GATA4 complex is recruited by PE stimulation onto the promoter of the representative hypertrophy-responsive gene, ANF, we performed ChIP and re-ChIP assays. Cardiomyocytes were stimulated with either PE or saline as a control, followed by formaldehyde cross-linking. These cells were lysed, sonicated, and then subjected to immunoprecipitation by an antibody against GATA4 or Cdk9, or goat IgG as a control. Anti-GATA4 and anti-Cdk9 antibodies efficiently precipitated the ANF promoter, which includes the GATA element in PE-stimulated cardiomyocytes, but not in salinetreated myocytes (compare lanes 5 and 6 in upper panel of Fig. 6, A and B, compare lanes 7 and 8 in upper panel of Fig. 6, A and C). These findings indicate that hypertrophic stimuli induce the cosegregation of GATA4 and Cdk9 with the ANF GATA element. To investigate whether Cdk9 or p300 associates with GATA4 on the ANF promoter in vivo, we performed re-ChIP analysis. Cross-linked chromatins from PE-or saline-treated cardiomyocytes were first treated with the anti-Cdk9 or p300 antibody. The resulting immunocomplex was then eluted and subjected to immunoprecipitation with anti-GATA4 antibody. We found that the 3Ј-enhancer region of the ANF gene present in the first immunocomplex was pulled-down again by the GATA4 antibody, indicating that Cdk9 and p300 are associated with GATA4 at the gene enhancer, and that this association is promoted by PE stimulation (compare lanes 5 and 6 in upper panel of Fig. 6, D and E, compare lanes 7 and 8 in upper panel of Fig. 6, D and F). To examine whether hypertrophic stimuli affect the binding of GATA4 with Cdk9 in cardiomyocytes, cultured myocytes prepared from neonatal rats were incubated with 30 M of PE for indicated periods. Nuclear extracts from these cells were subjected to immunoprecipitation with the anti-Cdk9 antibody FIGURE 4. The kinase activity of Cdk9 is required for phosphorylation of p300 as well as p300-induced acetylation of GATA4 and histone. A, COS7 cells were co-transfected with 10 g of pCMVwtp300, 1 g of pcDNA-hCdk9, pcDNA-DNCdk9, and 1 g of pcDNA3.2/V5-DEST-FLAG-GATA4, as indicated. The total DNA content was equalized in each sample with pCMV-␤-gal. Nuclear extracts from these cells were subjected to Western blotting with indicated antibodies. 
B and C, immunoprecipitated samples with anti-FLAG antibody (B) or anti-p300 antibody (C) were subjected to Western blotting with indicated antibodies. D and E, amounts of hyperphosphorylated-RNA polymerase II (pol IIo)/hypophosphorylated-RNA polymerase II (pol IIa) (D) from the 4th and 5th panels of A and acetylated-GATA4/total-GATA4 (E) from the 1st and 2nd panels of B were quantified. The data shown are the mean Ϯ S.E. from three independent experiments. F, COS7 cells were co-transfected with 1 g of pcDNA-hCdk9 or pcDNA-DNCdk9 in addition to 11 g of pCMVwtp300. Proteins isolated by acid extraction from these cells were subjected to Western blotting for acetylated histone-3 or total histone-3 as indicated. G, amount of acetylated histone-3/total histone-3 from the 1st and 2nd panels of F was quantified. The data shown are the mean Ϯ S.E. from three independent experiments. followed by Western blotting with anti-GATA4 antibody. The expression levels of GATA4, Cdk9, cyclin T1, and ␤-actin in cardiomyocytes did not change after PE stimulation (Fig. 6G). PE treatment increased the binding of Cdk9 with GATA4, reaching a maximum at 3.5 h after PE stimulation (Fig. 6H). Kinase Activity of Cdk9 Is Required for PE-induced Phosphorylation of p300 and Acetylation of GATA4 in Cardiomyocytes-Next, to evaluate the effects of Cdk9 kinase activity on acetylations of GATA4 during myocardial cell hypertrophy, we used a Cdk9 kinase inhibitor, 5,6-dichloro-1-h-ribofuranosyl-benzimidazole (DRB). Cultured myocytes from neonatal rats were preincubated with DRB or its solvent, DMSO, as a control for 2 h. Then, these cells ware stimulated with PE or saline for 48 h. As shown in Fig. 7A, the expression levels of GATA4, Cdk9, RNA pol II, and ␤-actin were similar in salineand PE-stimulated cardiac myocytes. In contrast, p300 protein levels markedly increased on PE stimulation. DRB treatment decreased the phosphorylation of RNA pol II, but not the expression level of p300. Nuclear extracts from these cells were subjected to immunoprecipitation with anti-GATA4 antibody followed by Western blotting with anti-acetylated lysine and anti-p300 antibodies. As shown in Fig. 7B, stimulation of cardiomyocytes with PE markedly increased the level of the acetylated form of GATA4. This increase was significantly inhibited by DRB treatment (1st panel of Fig. 7, B and C). Moreover, to investigate the effects of Cdk9 kinase activity on phosphorylation of p300 during myocardial cell hypertrophy, cardiac myocytes were pretreated with DRB or its solvent, DMSO for 2 h and stimulated with PE or saline for 1 h. Nuclear extracts from these cells were subjected to immunoprecipitation with anti-p300 antibody followed by Western blotting with anti-phospho-serine/threonine, anti-p300 and anti-Cdk9 antibodies. As . The kinase activity of Cdk9 is required for phosphorylation of p300 as well as p300-induced transcriptional activities of GATA4. A, COS7 cells were co-transfected with 10 g of pcDNA-hCdk9, pcDNA-DNCdk9, or pCMV-␤-gal in addition to 2 g of pcDNA3.2/V5-DEST-FLAG-p300-(1514 -1922). Nuclear extracts from these cells were subjected to Western blotting with indicated antibodies. B, same samples were immunoprecipitated with anti-FLAG antibody, followed by Western blotting with indicated antibodies. C, amounts of phosphorylated-p300-(1514 -1922)/total p300-(1514 -1922) from the 1st and 2nd panels of B were quantified by densitometry with the use of Multi Gauge V3.0. The data shown are the mean Ϯ S.E. from three independent experiments. 
D-G, COS7 cells were co-transfected with 0.5 g pANF-luc (D), pmutGATA-ANF-luc (E), pET-luc (F), or pmutGATA-ET-luc (G), 0.001 g of pRL-SV40 in the presence (ϩ) or absence (Ϫ) of 0.1 g of pCMVwtp300, 0.0005 (ϩ) or 0.002 (ϩϩ)g of pcDNAG4, 0.1 g of pcDNA-hCdk9, and 0.01, 0.03, or 0.1 g of pcDNA-DNCdk9, as indicated. The total DNA content was equalized in each sample with pCMV-␤-gal. The relative promoter activities were calculated from the ratio of firefly to sea pansy Luc activity. The data shown are the mean Ϯ S.E. from three independent experiments, each carried out in duplicate. shown in Fig. 7D, stimulation of cardiomyocytes with PE markedly increased the level of the phosphorylated form of p300. This increase was significantly inhibited by DRB treatment (1st panel of Fig. 7, D and E). These observations demonstrate that Cdk9 kinase activity is required for p300 phosphorylation and its HAT activity during myocardial cell hypertrophy. To test whether Cdk9 kinase activity is involved in the PE-induced ANF or ET-1 promoter activity, we performed reporter assays with cardiomyocytes by administrating a Cdk9 kinase inhibitor, DRB, or co-transfecting with DN-Cdk9. DRB and co-transfection with DN-Cdk9 repressed PE-induced ANF and ET-1 transcription in a dose-dependent manner (Fig. 7, F, G, I, and J). We reported that the GATA elements are required for the PE-induced transcription of ANF and ET-1 in cardiomyocytes. In this study, we investigated whether activation of the hypertrophy-responsive gene program by Cdk9 depends on these GATA elements (26). The mutation of GATA sites within ANF (Fig. 7H) and ET-1 (Fig. 7K) promoters abolished Cdk9-induced transcription. Cdk9 Is Required for PE-induced Cardiomyocyte Hypertrophy-Next to investigate whether DRB inhibits PE-induced myocardial cell hypertrophy, cardiomyocytes were subjected to immunocytochemical staining with the anti-␤-MHC antibody. As shown in Fig. 8A, cardiomyocytes stimulated with PE showed an increase in cell size (myocardial cell surface area) and myofibrillar organization compared with saline-treated cells. Treatment with DRB in addition to PE markedly inhibited these PE-induced changes. DRB inhibited PE-induced increments in cell size dose-dependently, but did not affect cell size at the basal state (Fig. 8B). To confirm the requirement of Cdk9 for PE-induced cardiomyocyte hypertrophy, we used short hairpin RNA (shRNA) to knockdown Cdk9. Cdk9-shRNA or control lentiviruses were introduced into cardiomyocytes, and nuclear extracts from these cells were subjected to Western blots. The introduction of Cdk9 RNAi reduced the protein level of Cdk9 by nearly 70% (Fig. 8C). Knocking-down Cdk9 by Cdk9 RNAi inhibited the PE-induced increase in cell size, but did not affect the cell size at the basal state (Fig. 8, D and E). DISCUSSION Growing evidence suggests that the acetylation of non-histone proteins as well as histones plays a critical role in transcriptional regulation (28). During cardiomyocyte hypertrophy, an intrinsic HAT p300 acetylates not only histones but also a cardiac zinc finger transcription factor, GATA4, increases its DNA binding, and promotes GATA4-dependent transcription (12). In this study, we found that Cdk9 is a novel component of the FIGURE 6. PE increases the recruitment of Cdk9/p300/GATA4 complex onto the ANF promoter. 
A-C, primary rat cardiomyocytes were stimulated with 30 M PE or saline for 1 h and cross-linked and subjected to sonication and immunoprecipitation (IP) with anti-GATA4 antibody, anti-Cdk9 antibody, or IgG. The precipitated chromatin was amplified by PCR using the primers flanking the GATA element of the rat ANF gene. Each of the PCRs was done three times, and a representative photograph is shown (A). The quantitative results from three real time PCR reactions are shown as the mean Ϯ S.E. (B and C). D-F, Re-ChIP assays were first performed with anti-Cdk9 or anti-p300 antibody, and the immunocomplexes were eluted by 10 mM dithiothreitol. Then the aliquots of the diluted elution were immunoprecipitated with anti-GATA4 antibody. The precipitated chromatin was amplified by PCR using the primers flanking the GATA element of the rat ANF gene (D). The result is expressed as the mean Ϯ S.E. from three real time PCR reactions (E and F). G and H, primary cardiac myocytes from neonatal rats were stimulated with saline or 30 M PE for various periods. Nuclear extracts prepared from these cells were subjected to Western blotting with anti-GATA4, anti-Cdk9, anti-cyclin T1, or anti-␤-actin antibody (G), The same nuclear extracts (100 g of proteins) were immunoprecipitated with anti-Cdk9 antibody and sequentially subjected to Western blotting with anti-GATA4, anti-Cdk9, or anti-cyclin T1 antibody (H). Nuclear extracts prepared from these cells were subjected to Western blotting using indicated antibodies (A). The same nuclear extracts (100 g of proteins) were immunoprecipitated with anti-GATA4 antibody and sequentially subjected to Western blotting with anti-acetylated-lysine and anti-GATA4 antibodies (B). C, the levels of signals for acetylated GATA4 relative to those for total GATA4 from the 1st and 2nd panels of B were quantified. Results are expressed as the mean Ϯ S.E. from three independent experiments. D, primary cardiac myocytes from neonatal rats were preincubated with 10 M DRB or its solvent, DMSO, for 2 h and subsequently stimulated with saline or PE for 1 h. Nuclear extracts prepared from these cells were subjected to immunoprecipitation with anti-p300 antibody followed by Western blotting with anti-phospho-serine/threonine and anti-p300 antibodies. E, levels of signals for phospho-p300 relative to those for total p300 from the 1st and 2nd panels of D were quantified. Results are expressed as the mean Ϯ S.E. from three independent experiments. F and I, cardiac myocytes were co-transfected with 1.4 g of pANF-luc (F) or pET-luc (I), and 0.014 g of pRL-SV40 and preincubated with 2 or 10 M DRB or a corresponding amount of its vehicle (DMSO) for 2 h, and sequentially stimulated with 30 M PE or saline for 48 h. The data shown are the mean Ϯ S.E. from three independent experiments, each carried out in duplicate. G and J, cardiac myocytes were co-transfected with 0.9 g of pANF-luc (G) or pET-luc (J), 0.009 g of pRL-SV40, and 0.2 or 0.5 g of pcDNA-DNCdk9. The total DNA content was equalized in each sample with pCMV-␤-gal. These cells were subsequently treated with saline or 30 M PE for 48 h. The data shown are the mean Ϯ S.E. from three independent experiments, each carried out in duplicate. H and K, cardiac myocytes were co-transfected with 0.9 g of pANF-luc or pmutGATA-ANF-luc (H) or pET-luc or pmutGATA-ET-luc (K), 0.009 g of pRL-SV40, and in the presence (ϩ) or absence (Ϫ) of 0.5 g of pcDNA-hCdk9. The total DNA content was equalized in each sample with pCMV-␤-gal. 
These cells were subsequently treated with saline or 30 M PE for 48 h. The data shown are the mean Ϯ S.E. from three independent experiments, each carried out in duplicate. MARCH 26, 2010 • VOLUME 285 • NUMBER 13 p300/GATA4 complex. We showed that p300 binds to amino acid sequences 1-82 of Cdk9 containing an ATP binding domain and PITALRE, which, is similar to the conserved PSTAIRE motif found among most members of the Cdk family. On the other hand, GATA4 binds to Cdk9 amino acid sequences 83-129, where no conserved domains have been reported so far. Associations of Cdk9 with p300 and GATA4 at different sequences suggest that p300 and GATA4 can simultaneously bind to Cdk9. We also showed that p300 induces the phosphorylation of RNA pol II as well as acetylation of GATA4. Conversely, a dominantnegative form of p300 inhibited the p300-induced RNA pol II phosphorylation and GATA4 acetylation. Fu et al. (29) reported that p300 acetylates highly conserved Lys-44 of Cdk9, and that a single K44R mutation disabled the ability of Cdk9 kinase to phosphorylate the C-terminal domain of RNA pol II. These findings suggest that p300 is involved in myocardial cell hypertrophy by up-regulating Cdk9 kinase activity and increasing RNA pol II-dependent transcription, as well as up-regulating GATA4 activities and increasing cardiac-specific transcription. Cdk-9 and p300/GATA4 in Cardiomyocytes Studies using either RNA interference or highly specific P-TEFb inhibitors have implicated P-TEFb as an important factor in global transcriptional elongation (17). However, a series of reports have shown that several DNA binding transcription factors and activators, including CIITA, NF-B, Myc, and MyoD (30 -33), exhibit an ability to recruit P-TEFb to specific promoters. A previous report demonstrated that MyoD recruits P-TEFb, as well as p300, onto the promoters and enhancers of muscle-specific genes during skeletal myogenesis (34). By chromatin-IP and DNA pulldown assays, we have proven that GATA4, one of the transcription factors that control cardiomyocyte hypertrophy, recruits Cdk9 onto the GATA element of the ET-1 promoter. Thus, GATA4 may utilize Cdk9 in addition to p300 during hypertrophy-responsive transcription. A large number of in vitro and genetic studies (28,35) have indicated that p300 levels are tightly limited in cells and that multiple transcription factors compete for access to this shared coactivator. The precise regulatory mechanisms that control the cellular supply of p300 under stress conditions remain to be elucidated. As a complex important for p300-induced transcription, regulation of the FIGURE 8. Cdk9 inhibitor, DRB, and siRNA repress the phenylephrine-induced hypertrophic responses in cardiomyocytes. A, cardiac myocytes were preincubated with 10 M DRB or its solvent, DMSO, as a control for 2 h, subsequently treated with 30 M PE or saline for 48 h, and subjected to immunocytochemistry using the primary antibody against cardiac MHC, followed by staining with a secondary antibody conjugated with peroxidase (brown signals). Scale bar, 10 m. C and D, cardiac myocytes were infected with siRNA-Cdk9 or GFP control lentiviruses as indicated for 48 h. Nuclear extracts prepared from these cells were subjected to Western blotting using indicated antibodies (C). Cardiac myocytes were subjected to immunocytochemistry using the primary antibody against cardiac MHC, followed by staining with a secondary antibody conjugated with peroxidase (brown signals) (D). Scale bar, 10 m. 
B and E, myocardial cell-surface area was measured as described under "Experimental Procedures." Values in each group are the mean Ϯ S.E. from 50 cells. HAT activity of p300 is a subject of intensive study. Previous studies have reported the post-transcriptional modifications of p300 and CBP by protein kinases, such as CaMKIV, MAPK, Akt, Rho, or PKC (36 -40). For example, Ser-1834 of p300 was shown to be phosphorylated in vivo and in vitro by Akt. This Ser-1834 phosphorylation is critical for the transcriptional activation by stimulating p300 HAT activity, assembling transcription factors, and recruiting basal transcription machinery to the ICAM-1 promoter (38). The present study demonstrated that the p300 fragment (1514 -1922) containing the C/H3 domain is the target of Cdk9 kinase activity. The p300 C/H3 domain contains 2 Ser-Pro and 8 Thr-Pro sequences, the conserved target sequences of Cdk9 kinase. Furthermore, we identified the phosphorylation of p300 by an antibody that recognizes phospho-Ser-Pro and phospho-Thr-Pro. Therefore, within the p300 C/H3 domain, Cdk9 may phosphorylate Ser-Pro and/or Thr-Pro, distinct from residues phosphorylated by Akt. Whereas we showed the phosphorylation of p300 due to Cdk9 kinase activity, we could not detect the phosphorylation of GATA4 by Cdk9 using our experimental protocol (data not shown). It has been reported that p300 and its homolog CREBbinding protein (CBP), are more important targets of protein kinases than DNA-binding transcription factors (41). Thus, Cdk9 might dominantly phosphorylate p300 rather than GATA4. Herein, we showed that p300 phosphorylation by Cdk9 is strongly correlated with increases in the ability of p300 to interact with GATA4, and to induce acetylation, and DNA binding of GATA4. Increases in p300 HAT activity by Cdk9 are compatible with the report by Ait-Si-Ali S et al. (42) showing that CBP, a p300 homolog, increases its HAT activity through phosphorylation at the G1/S boundary by another cyclin-dependent kinase, Cdk2. Given that P-TEFb is a global transcriptional elongation factor, it is tempting to propose that, besides its ability to promote general transcription through the phosphorylation of RNA pol II, P-TEFb is also involved in transcriptional regulation during myocardial cell growth by up-regulating p300 HAT activity. The present study demonstrated that a component of Cdk9 directly interacted with GATA4 in cardiomyocytes in a timedependent manner after PE stimulation. Another P-TEFb component, Cyclin T, was reported to bind to several transcription factors, aryl hydrocarbon receptor (43), MyoD (33), and Brd4 (45). We also showed that cyclin T1 is associated with GATA4 by Western blots. However, this association appears to be indirect because we could not demonstrate the interaction of GATA4 with cyclin T1 using a GST pull-down assay (data not shown). The question arises as to how Cdk9 kinase is activated during cardiomyocyte hypertrophy. It has been reported that Cdk9 kinase activity is regulated by cyclin T1 levels (46) and that the overexpression of cyclin T1 is sufficient to induce cardiac hypertrophy in vivo (19). Liou et al. showed that cyclin T1 protein expression in freshly isolated monocytes is very low, increases early during macrophage differentiation, and decreases to low levels about 1 week after culturing. The kinase and transcriptional activities of P-TEFb parallel the changes in cyclin T1 protein levels. 
However, we showed that not only protein levels of Cdk9 and cyclin T1, but also binding between Cdk9 and cyclin T1 was constant after stimulation with PE, a representative hypertrophic stimulus. Therefore, other mechanisms may be involved in the activation of Cdk9 in cardiomyocytes. Cyclin T-Cdk9 complexes are physically associated with 7SK snRNA/HEXIM1, an endogenous inhibitor of Cdk9 (47,48). Sano M et al. (19) reported that the dissociation of 7SK occurred within 15 min of hypertrophic stimulation. Because the acetylation of GATA4 occurred at much later time points, the activation of Cdk9 kinase activity may be the first step in a series of p300/GATA4-dependent transcriptional activations during myocardial cell hypertrophy. Schneider and co-workers (49) reported that Cdk9 binds to PGC-1, a master regulator of mitochondrial biogenesis and function, and that Cdk9 represses PGC-1 transcriptional activity and impairs the expression of mitochondrial proteins. Because the mitochondrion is a major energy source of the heart, Cdk9 activation may lead to heart failure by repressing mitochondrial function through PGC-1. The present study demonstrated that Cdk9 is involved in pathological cardiomyocyte overgrowth, which also leads to the development of heart failure. Taken together, these two facts raise a hypothesis that Cdk9 is a key regulator that determines cardiac function and is one of important pharmacological targets of heart failure therapy. We and others (14,15) have recently reported that a p300specific HAT inhibitor, curcumin, can prevent the development of cardiomyocyte hypertrophy and heart failure in vivo. The present study provides evidence that Cdk9 kinase activity is required for p300 HAT activity and for the p300/GATA4 transcriptional pathway during hypertrophic responses in cardiomyocytes. Previous studies also showed that a marked decrease in the cardiac p300 protein level by doxorubicin leads to the down-regulated expression of cardiac-specific genes and the development of myocardial cell apoptosis (50,44). To clarify whether the inhibition of Cdk9 kinase activity will specifically inhibit the transcription involved in pathological cardiomyocyte hypertrophy, or generally inhibit the transcription required for maintaining its differentiated phenotype, additional studies are required. However, whether a specific inhibitor of Cdk9 can be used as an agent for heart failure therapy in vivo is a matter of special interest and should be investigated in future studies.
2018-04-03T00:21:10.543Z
2010-01-17T00:00:00.000
{ "year": 2010, "sha1": "1a7446fe37966a745df315d09995ccdb822bcb20", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/285/13/9556.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "52b5503ce068e3f910a0557b112631d8ab24b1cc", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
266784270
pes2o/s2orc
v3-fos-license
Sea Urchin-like NiCo2O4 Catalyst Activated Peroxymonosulfate for Degradation of Phenol: Performance and Mechanism How to efficiently activate peroxymonosulfate (PMS) in a complex water matrix to degrade organic pollutants still needs greater efforts, and cobalt-based bimetallic nanomaterials are desirable catalysts. In this paper, sea urchin-like NiCo2O4 nanomaterials were successfully prepared and comprehensively characterized for their structural, morphological and chemical properties via techniques, such as X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), among others. The sea urchin-like NiCo2O4 nanomaterials exhibited remarkable catalytic performance in activating PMS to degrade phenol. Within the NiCo2O4/PMS system, the removal rate of phenol (50 mg L−1, 250 mL) reached 100% after 45 min, with a reaction rate constant k of 0.091 min−1, which was 1.4-times higher than that of the monometallic compound Co3O4/PMS system. The outstanding catalytic activity of sea urchin-like NiCo2O4 primarily arises from the synergistic effect between Ni and Co ions. Additionally, a comprehensive analysis of key parameters influencing the catalytic activity of the sea urchin-like NiCo2O4/PMS system, including reaction temperature, initial pH of solution, initial concentration, catalyst and PMS dosages and coexisting anions (HCO3−, Cl−, NO3− and humic acid), was conducted. Cycling experiments show that the material has good chemical stability. Electron paramagnetic resonance (EPR) and quenching experiments verified that both radical activation (SO4•−, •OH, O2•−) and nonradical activation (1O2) are present in the NiCo2O4/PMS system. Finally, the possible degradation pathways in the NiCo2O4/PMS system were proposed based on gas chromatography–mass spectrometry (GC-MS). Favorably, sea urchin-like NiCo2O4-activated PMS is a promising technology for environmental treatment and the remediation of phenol-induced water pollution problems. Introduction Recently, the increasing water pollution caused by the discharge of large quantities of industrial wastewater has aroused concern and worry.In particular, industrial wastewater containing phenol and Rhodamine B is considered persistent, difficult to degrade and composed of hazardous organic pollutants [1,2].This can adversely affect the quality of human life by way of the food chain and environmental cycles.Therefore, developing economical and efficient technologies is urgent to remove these organic pollutants from water bodies.Among the many wastewater treatment technologies, advanced oxidation processes (AOPs) based on peroxymonosulfate (PMS) are considered as promising technologies for the degradation of organic pollutants because of their simplicity, high efficiency, good reproducibility and reduced secondary pollution [3]. 
Molecules 2024, 29, 152 2 of 17 Previous studies have explored various strategies for the degradation of organic pollutants in water by PMS-based AOPs [4][5][6], where most of the degradation mechanisms have been attributed to independent radical or non-radical pathways.The radical pathway refers to the fact that PMS can generate reactive free radicals with high redox potentials, including SO 4 •− and • OH, through certain external conditions (acoustic, optical, electrical, thermal, transition metals and their oxides, etc.), which can lead to the complete mineralization of organic pollutants [7][8][9].The non-radical pathway relies on reactive oxygen species other than free radicals and oxidative processes, such as 1 O 2, and direct electron transfer from organic electron donors to the PMS on the catalyst surface [10].The radical pathway and non-radical pathway each have advantages and disadvantages.Radical-based AOPs with high oxidation potentials have excellent degradation performance, but side reactions and the corresponding by-products usually occur due to the non-targeted attack of free radicals [11,12].In contrast, the non-radical pathway is more selective for certain organics, such as electron-donating compounds [13].However, effective degradation by non-radical processes only occurs for electron-donating contaminants (e.g., aniline) and not for electron-absorbing contaminants (e.g., benzoic acid) [13].It has been shown that the simultaneous action of radical and non-radical pathways on pollutants is more effective than single pathway treatment due to the synergistic effect.Therefore, the design of a catalyst that can efficiently activate PMS with the presence of both radical and non-radical pathways is very promising in the field of PMS-based AOPs. The literature suggests that cobalt is the most powerful element for PMS activation in AOPs [14,15].Several stable cobalt oxides (e.g., CoO and Co 3 O 4 ) are frequently used as activators for PMS [16,17].Nevertheless, the leaching of toxic divalent cobalt ions, the small specific surface area and few active sites limit the practical application of monometallic cobalt oxides [18].The introduction of other polyvalent transition metal elements may alter the morphology and structure of the material in comparison to monometallic cobalt oxides, which not only helps to improve the catalytic performance of the materials but also reduces the leaching of cobalt ions [19,20].Therefore, it is a wise strategy to improve the above defects and maintain the excellent catalytic activity by introducing other transition metal elements. In recent years, a broad range of cobalt-based bimetallic catalysts have been widely used in AOP because of their excellent catalytic activity.Yu et al. [21] successfully synthesized MgCo 2 O 4 spinel via a hydrothermal method and tested its catalytic performance for PMS activation using bisphenol A (BPA) as the target pollutant.The results showed that the MgCo 2 O 4 /PMS system could effectively degrade 99.6% of BPA within 10 min at pH 7.2.In this case, the tetrahedral Mg 2+ may make MgCo 2 O 4 more stable and promote the redox cycle of Co 2+ /Co 3+ , which ultimately leads to the degradation of BPA through both radical and non-radical pathways.A.Q.K. Nguyen et al. 
[22] reported that CoWO4 nanoparticles synthesized by adjusting the pH during hydrothermal synthesis can efficiently degrade 4-chlorophenol by activated PMS. The experimental results showed that the excellent performance of the CoWO4 catalyst (CoWO4-10) synthesized at pH 10 is attributed to its large specific surface area, its good charge transfer properties and the synergistic effect between Co and W ions. Ultimately, the organic compounds were rapidly degraded via both free radical and non-free radical pathways. In addition, NiCo2O4 is also an excellent semiconductor material for various catalytic applications, and its higher conductivity helps in electron transfer [23][24][25]. Based on the above factors, sea urchin-like NiCo2O4 was rapidly synthesized via a simple hydrothermal method and thermal treatment for the catalytic degradation of phenol in water by activated PMS. The effects of important factors, such as catalyst and PMS dosage, initial solution concentration, initial pH, reaction temperature and coexisting anions and humic acid (HA), on the catalytic activity of sea urchin-like NiCo2O4 were explored. The synergistic effect of Co2+-Co3+/Ni3+-Ni2+ in the sea urchin-like NiCo2O4 promoted the generation of more reactive oxygen species (ROS). The quenching experiments verified that both radical and non-radical pathways participated in the activation of PMS. Additionally, the GC-MS explored the intermediate products of the degradation of phenol in the NiCo2O4/PMS system. The present study suggests that sea urchin-like NiCo2O4-activated PMS is a promising technology for environmental treatment and remediation in response to phenol-induced water pollution problems.
Characterizations of the Sea Urchin-like NiCo2O4 Catalysts The sea urchin-like NiCo2O4 was rapidly synthesized using a simple hydrothermal and thermal treatment method (Figure 1a). The crystal phase compositions of the synthesized NiCo2O4, Co3O4 and NiO were characterized by XRD (Figures 1b and S1a,b). The synthesized NiCo2O4 exhibits characteristic diffraction peaks, where the diffraction peaks at 31.2°, 36.7°, 38.4°, 44.6°, 55.4°, 59.1°, 65.0° and 77.0° correspond to the (220), (311), (222), (400), (422), (511), (440) and (533) crystalline planes of NiCo2O4 (JCPDS 73-1702), respectively. Similarly, the XRD patterns of the synthesized Co3O4 and NiO corresponded to their standard spectra. These results indicated that NiCo2O4 bimetallic oxides as well as monometallic oxides of Co and Ni were successfully prepared [26]. The morphology and structure of the sea urchin-like NiCo2O4 were described using SEM and TEM. According to Figure 1c,d, the synthesized NiCo2O4 appears as sea urchin-like microspheres with uniform size (~8 µm). The microspheres are composed of an orderly combination of needle-like structures with a solid interior, and the needle-like structures constituting the sea urchin-like NiCo2O4 microspheres are formed by nanoparticles. Accordingly, TEM images of NiCo2O4 (Figure 1e,f) further confirmed that the microspheres were composed of an orderly combination of a needle-like structure with a diameter of about 200 nm, which was accumulated by nanoparticles. The BET results (Figure S1c,d) further verified that NiCo2O4 is a mesoporous material based on the obvious H3 hysteresis loop, and its specific surface area is 40.15 m2 g−1. Obviously, the unique structure and large surface area can provide a certain number of reaction sites for the surface reactions, and the sea urchin-like structure can maintain the structural stability of the material [26][27][28]. As a comparison, the morphologies of the monometallic oxides NiO and Co3O4 are microsphere structures with relatively uniform size and rod-shaped, respectively (Figure S2). Furthermore, according to the TEM image of NiCo2O4 (Figure 1g), the lattice stripes with a spacing of 0.242 nm correspond to the (311) crystalline surface of NiCo2O4. Selected-area electron diffraction in the inset of Figure 1g also shows well-defined diffraction rings, which coincide with the aforementioned XRD results of the NiCo2O4 material. EDS analyses determined the presence of nickel and cobalt metals (Figure 1h). Additionally, the EDS mapping image of the sea urchin-like NiCo2O4 material showed a uniform distribution of Ni, Co and O (Figure 1i), indicating the successful synthesis of bimetallic oxides.
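The peak positions and the 0.242 nm lattice fringe quoted above can be cross-checked with Bragg's law and the cubic relation a = d·sqrt(h² + k² + l²) for a spinel cell. The sketch below assumes Cu Kα radiation (λ = 1.5406 Å), which this excerpt does not state explicitly:

```python
import math

CU_K_ALPHA = 1.5406  # X-ray wavelength in angstroms (Cu Kalpha, assumed source)

def d_spacing(two_theta_deg, wavelength=CU_K_ALPHA):
    """Bragg's law: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

def cubic_lattice_parameter(d, hkl):
    """For a cubic cell, a = d * sqrt(h^2 + k^2 + l^2)."""
    h, k, l = hkl
    return d * math.sqrt(h * h + k * k + l * l)

# 2-theta positions and (hkl) assignments quoted above for spinel NiCo2O4.
peaks = {(2, 2, 0): 31.2, (3, 1, 1): 36.7, (4, 0, 0): 44.6, (4, 4, 0): 65.0}

for hkl, tt in peaks.items():
    d = d_spacing(tt)
    a = cubic_lattice_parameter(d, hkl)
    print(f"{hkl}: d = {d / 10:.3f} nm, a = {a:.2f} angstrom")
# Each reflection should return a lattice parameter near 8.1 angstrom if the
# spinel indexing is consistent, and the (311) d-spacing comes out close to
# the 0.242 nm fringe seen in the HRTEM image.
```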
The chemical composition and surface electronic valence states of the sea urchin-like NiCo2O4 catalyst were further studied by XPS experiments, as shown in Figure 2. The XPS full spectrum in Figure 2a shows that the sample contains Ni, Co and O and no miscellaneous peaks of other elements, which is in perfect agreement with the XRD test results. The XPS fine spectrum after Ni fitting (Figure 2b) shows two satellite peaks and four binding energy fitting peaks at 854.2 eV, 855.9 eV, 872.2 eV and 874.1 eV. Among them, two main peaks at 854.2 eV and 872.2 eV proved the presence of Ni2+, while two main peaks at 855.9 eV and 874.1 eV proved the presence of Ni3+. Similarly, the XPS fine spectrum of Co (Figure 2c) likewise shows two satellite peaks and four binding energy-fitted peaks at 779.3 eV, 780.9 eV, 794.9 eV and 795.5 eV. The two main peaks at 780.9 eV and 795.5 eV prove the presence of Co2+, while the two main peaks at 779.3 eV and 794.9 eV prove the presence of Co3+. Based on the above experimental results, it can be speculated that on the surface of the sea urchin-like NiCo2O4 catalyst, the interaction of Ni and Co with different valence states can generate additional electron holes, which is conducive to the transfer of electrons, thus improving the catalytic activity of the sea urchin-like NiCo2O4 material [28].
Catalytic Performance

To evaluate the catalytic activity of the sea urchin-like NiCo2O4 catalyst, phenol was selected as the model contaminant for this experiment. The degradation of phenol by PMS autoxidation and the adsorption of phenol by different kinds of catalysts were investigated through controlled experiments. As shown in Figure 3a, under the condition of PMS alone, the removal of phenol was less than 1% in 90 min, which indicated that PMS had no obvious degradation effect on phenol, and the degradation effect of PMS autoxidation was negligible. In the absence of PMS, the NiO, Co3O4 and sea urchin-like NiCo2O4 catalysts all showed similar adsorption effects on phenol, which were less than 1%, and the above results indicated that the adsorption effect of the catalysts was also negligible. In addition, three catalytic degradation systems, NiO/PMS, Co3O4/PMS and sea urchin-like NiCo2O4/PMS, were explored for phenol degradation under the same experimental conditions. Complete degradation of phenol in the sea urchin-like NiCo2O4/PMS system took 45 min, complete degradation of phenol in the Co3O4/PMS system took 60 min, and the removal of phenol in the NiO/PMS system was only about 2.5% in 90 min. The above experimental results show that the introduction of Ni substantially enhances the catalytic activity of pure cobalt oxides. This is mainly due to the synergistic effect between Ni and Co, which accelerates the electron transfer rate and, thus, improves the catalytic activity of the material [29]. Additionally, the mineralization of phenol and RhB reached 67.5% and 62.7%, as displayed in Figure S3a,b, respectively, indicating that significant quantities of organic compounds were degraded to inorganic carbon in the NiCo2O4/PMS system. In summary, the sea urchin-like NiCo2O4 exhibits the best catalytic activity.
Furthermore, the kinetics for the degradation of phenol using different catalyst/PMS systems also confirmed the remarkable catalytic performance of the NiCo2O4/PMS system (Figure 3b). The degradation rate constants of the sea urchin-like NiCo2O4/PMS system (k = 0.09139 min−1) were 1.4-times and 450-times higher than those of the Co3O4/PMS system (k = 0.06465 min−1) and the NiO/PMS system (k = 0.00020 min−1), respectively. The result suggests that the doping of Ni plays an important role in activating PMS to degrade phenol. Moreover, Table S1 lists the catalytic properties of some catalysts in the literature compared with the NiCo2O4 catalyst in this work for phenol degradation [30][31][32][33]. As can be seen from the table, the sea urchin-like NiCo2O4/PMS system shows excellent catalytic performance in phenol degradation. Overall, the great catalytic activity of the sea urchin-like NiCo2O4 was attributed to the synergistic effect between nickel and cobalt, which accelerated the electron transfer rate and, thus, the phenol degradation reaction [34].
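For readers who wish to reproduce this kinetic analysis, the short sketch below illustrates how an apparent rate constant of this kind is typically extracted: ln(Ct/C0) is regressed against reaction time and the slope gives −k (the quasi-first-order model formalized later in the Calculation Methods section). The concentration-time values in the script are hypothetical placeholders rather than data from this study; only the two rate constants in the final comparison are taken from the text above.

# Sketch: estimating an apparent pseudo-first-order rate constant k from
# concentration-time data via ln(Ct/C0) = -k*t. The data points are
# hypothetical and serve only to illustrate the fitting procedure.
import numpy as np

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])     # reaction time, min
ct = np.array([20.0, 8.1, 3.3, 1.3, 0.55])      # phenol concentration, mg/L (hypothetical)

y = np.log(ct / ct[0])                          # ln(Ct/C0)
slope, intercept = np.polyfit(t, y, 1)          # linear fit; slope = -k
k = -slope
print(f"apparent rate constant k = {k:.4f} min^-1")

# Comparing two systems by their fitted rate constants, as reported above:
k_nico2o4, k_co3o4 = 0.09139, 0.06465           # min^-1, values quoted in the text
print(f"k(NiCo2O4/PMS) / k(Co3O4/PMS) = {k_nico2o4 / k_co3o4:.1f}")  # ~1.4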
Influence of Reaction Parameters on Phenol Removal

Effect of Catalyst and PMS Dosages

Firstly, the effect of the sea urchin-like NiCo2O4 dosage in the catalyst/PMS system on the phenol removal rate was investigated. According to Figures 4a and S4a, the higher the amount of sea urchin-like NiCo2O4 present in the system, the higher the k value of the phenol degradation reaction, which accelerated the effective degradation of the pollutant. As the catalyst dosage was increased from 0.1 g L−1 to 0.2 g L−1, the phenol removal rate increased from 75% to 100% in 50 min. As the catalyst content in the system continued to increase to 0.3 g L−1, phenol was completely decomposed in 35 min. Based on Figure S4a, the k value increased from the initial 0.05139 min−1 to 0.0745 min−1 and 0.09859 min−1 when the dosage of sea urchin-like NiCo2O4 was increased from 0.1 g L−1 to 0.2 g L−1 and 0.3 g L−1. The higher value of k may be attributed to the fact that more catalyst provided more reactive sites, which accelerated the PMS activation and, hence, promoted the decomposition process of phenol [35]. In summary, a 0.2 g L−1 catalyst concentration was selected for the subsequent study based on practical application and economic efficiency.

Figures 4b and S4b show the influence of the dosage of PMS on phenol degradation. In the sea urchin-like NiCo2O4/PMS system, the removal rate of phenol increased sequentially from
92.7% to 98.5% and 99.4% within 40 min as the PMS concentration was increased sequentially from 1 g L −1 to 2 g L −1 and 3 g L −1 .The degradation kinetic constants of phenol revealed that the k value increased sequentially from the initial 0.07384 min −1 to 0.09139 min −1 and 0.10416 min −1 (Figure S4b).Normally, an increase in PMS concentration increases the amounts of active substances in the sea urchin-like NiCo 2 O 4 /PMS system, which, in turn, improves the efficiency of pollutant degradation.However, the degradation efficiency of phenol and the kinetic constant k of the reaction changed very little when the PMS concentration was increased from 2 g L −1 to 3 g L −1 .This was caused by the fact that the non-activated PMS reacts with active species (SO 4 •− and • OH) by self-bursting, thus becoming a limiting factor for the degradation reaction [36].Therefore, an optimum PMS concentration of 2 g L −1 was determined as the actual dosage in this study. Effect of Initial Phenol Concentration Figures 4c and S4c show the effect of different initial concentrations of phenol (between 20 mg L −1 -75 mg L −1 ) on the degradation process.Both the degradation efficiency of phenol and the rate constant of the reaction decreased with an increasing phenol concentration.At a low phenol concentration (20 mg L −1 ), 100% degradation of phenol could be achieved within 25 min.As the concentration of phenol further increased to 50 mg L −1 and 75 mg L −1 , the time required for its complete degradation increased to 50 min and 70 min, respectively.The corresponding k values decreased from 0.14122 min −1 to 0.07450 min −1 and 0.05621 min −1 , respectively.The main reason is that the number of active species is certain and, thus, a longer degradation time is needed with a higher concentration of phenol solution.In addition, more intermediates adsorbed on the catalyst surface are generated in highly concentrated phenol solutions, thus preventing PMS from binding to the catalyst active sites. 
Effect of Initial pH

The activation efficiency of PMS in advanced oxidation processes (AOPs) depends on the initial pH of the solution. According to Figures 4d and S4d, the effect of different pH values on phenol degradation was ascertained. As the pH increased from 2.3 to 3.6, 6.4, 7.6 and 8.7, the phenol removal decreased sequentially, and the k value of the degradation reaction decreased sequentially from an initial value of 0.13892 min−1 to 0.12461 min−1, 0.09139 min−1, 0.06666 min−1 and 0.06238 min−1, respectively. The result indicated that the degradation reaction was biased towards acidic conditions. In addition, according to the results of previous studies, the interaction between the catalyst surface and phenol molecules can be improved by adjusting the solution pH, which can accelerate the decomposition of pollutants through the formation of a variety of active substances on the catalyst surface [37,38]. The surface of sea urchin-like NiCo2O4 is positively charged in acidic solutions, which effectively attracts HSO5− near the material to generate more SO4•− [37]. However, HSO5− cannot be stabilized in alkaline environments and reacts as shown in Equation (1), resulting in greatly reduced phenol removal rates [39].

Effect of Temperature

The effect of reaction temperatures (25 °C, 30 °C and 35 °C) on the phenol removal was investigated in the sea urchin-like NiCo2O4/PMS system. As shown in Figure 4e, the time required for the complete degradation of phenol was reduced to 45 min, 25 min and 20 min at 25 °C, 30 °C and 35 °C, respectively. According to Figure S4e, the rate constant k of the reaction increases significantly with increasing reaction temperature, and the k value at 35 °C (k = 0.15630 min−1) is more than twice the k value at 25 °C (k = 0.06364 min−1). The experimental results showed that phenol and PMS molecules were more active at higher temperatures and, thus, provided more opportunities for the PMS to collide with the active sites of the catalyst [40]. In addition, the activation energy (Ea) of the reaction was obtained using the Arrhenius equation (Equation (21)). The Ea in the sea urchin-like NiCo2O4/PMS system was calculated to be 68.89 kJ mol−1 by fitting the equation (ln k = −8.286/T + 25.135, R2 = 0.871). The Ea value in the system is greater than that of a diffusion-controlled reaction (10-13 kJ mol−1), suggesting that the degradation of phenol in the system is attributable to a surface-mediated intrinsic chemical reaction rather than mass transfer [41]. Therefore, higher temperatures would increase the activation of PMS by sea urchin-like NiCo2O4 to produce more reactive oxygen species, thus improving the catalytic degradation efficiency.

As mentioned earlier, both temperature and the catalyst have a large effect on the Ea during the catalytic degradation reaction of phenol. The activation energies for the degradation of phenol by several catalytic systems are listed in Table S2 [42,43]. The data shown in Table S2 indicate that the activation energy of the sea urchin-like NiCo2O4/PMS system for phenol degradation is lower than that of most of the catalyst/PMS systems in the literature, which suggests that the sea urchin-like NiCo2O4/PMS system has the advantage of good catalytic degradation of phenol.
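As a quick cross-check on the reported activation energy, the sketch below applies the Arrhenius relation (Equation (21) in the Calculation Methods) to just the two rate constants quoted above for 25 °C and 35 °C. It is an illustrative two-point estimate rather than the authors' regression over all three temperatures, so it only approximates the reported 68.89 kJ mol−1.

# Sketch: two-point Arrhenius estimate of the activation energy from the
# rate constants quoted in the text for 25 degC and 35 degC.
import math

R = 8.314                     # molar gas constant, J mol^-1 K^-1
k1, T1 = 0.06364, 298.15      # k at 25 degC (min^-1); T in K
k2, T2 = 0.15630, 308.15      # k at 35 degC (min^-1); T in K

# From ln k = -Ea/(R*T) + ln A it follows that ln(k2/k1) = (Ea/R)*(1/T1 - 1/T2)
Ea = R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)   # J mol^-1

print(f"two-point estimate: Ea = {Ea / 1000:.1f} kJ mol^-1")   # ~68.6 kJ mol^-1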
Influences of Inorganic Ions and Humic Acid

Large amounts of inorganic anions and humic acids (HAs) are universally present in natural aquatic systems. Undoubtedly, they may affect the degradation process of pollutants due to their reaction with free radicals. To investigate the practical value of the NiCo2O4/PMS system in environmental aquatic systems, the effects of inorganic anions (HCO3−, Cl−, NO3−) and HA on the degradation performance of phenol were systematically investigated in this experiment. The specific experimental results and analysis are as follows.

Effect of coexisting HCO3−: according to Figure 5a, the complete degradation time of phenol was shortened from the initial 45 min to 40 min and 8 min as the concentration of HCO3− in the system was increased from 0 mM to 1 mM and 5 mM, respectively. Therefore, HCO3− can promote the activation of PMS to generate more ROS, thereby promoting the degradation of phenol [44,45].

Effect of coexisting Cl−: the phenol removal rate was significantly enhanced with the increase in the Cl− concentration from 0 mM to 5 mM, and the time for complete degradation was shortened from 45 to 30 min (Figure 5b). According to the literature, Cl− can activate PMS to generate HOCl (Equation (2)), which can selectively react with electron-rich phenols to accelerate phenol degradation. The degradation of phenol is also facilitated by the reaction between Cl− and •ClOH− to form •Cl2− (Equations (3) and (4)) [46,47].
Effect of coexisting NO3−: As shown in Figure 5c, NO3− only shows a slight inhibitory effect on the phenol degradation process. The complete degradation time of phenol was prolonged from 45 min to 50 min when the concentration of NO3− was increased from 0 mM to 5 mM, which indicated that the presence of NO3− had a slight inhibitory effect on the degradation process of phenol. The reason could be that NO3− reacts with the reactive species (SO4•− and •OH) (Equations (5) and (6)) [37] to produce the less reactive NO3• radical.

Effect of coexisting HA: HA, as an important component of natural organic substances, cannot be ignored in the actual pollutant management process. Therefore, the effect of HA on phenol degradation was also investigated in this study (Figure 5d). As the HA concentration in the sea urchin-like NiCo2O4/PMS degradation system was increased from 0 mg L−1 to 85 mg L−1, the removal efficiency of phenol was gradually inhibited. It can be seen that the presence of HA has a serious inhibitory effect on the degradation of phenol, and this inhibitory effect becomes more and more obvious with the increase in HA concentration, which is attributed to the following reasons: (1) HA usually acts as a radical scavenger competing for the active radicals; (2) HA is more readily adsorbed onto the surface of the catalyst through its own hydroxyl and carboxyl groups and blocks the reaction sites [44,48].

On the whole, both HCO3− and Cl− can promote phenol degradation, the presence of NO3− has almost no effect on phenol degradation, and HA can inhibit phenol degradation in the sea urchin-like NiCo2O4/PMS system. Research has shown that nonradical reactions are less sensitive to co-existing anions in water than free radical reactions, indicating that a nonradical pathway participated in the reaction in the sea urchin-like NiCo2O4/PMS system [49].

Reusability and Stability of the Sea Urchin-like NiCo2O4

The stability and reusability of catalysts are crucial in practical applications. Therefore, the reusability of the sea urchin-like NiCo2O4 catalyst and its crystal phase composition before and after use were further investigated (Figure 6). According to Figure 6a,c, the catalytic activity of sea urchin-like NiCo2O4 showed an overall decreasing trend over five cycling experiments, which may be attributed to the adsorption of some intermediates on the catalyst surface, which could not be removed by filtration and washing with water. Subsequently, after the fourth cycle, the material was recalcined at 300 °C for 4 h, and the degradation efficiency of the catalyst was found to increase from 53.6% to 85.6%, indicating that calcination helps to remove the adhering substances on the catalyst surface and fully expose its active sites. Furthermore, Figure 6b shows the variation in the k value for each cycle, which is consistent with the trend of the phenol removal rate in each cycle. The above results reveal that the sea urchin-like NiCo2O4 material can be efficiently recycled several times after recalcination. In addition, comparing the XRD patterns of the used and fresh catalysts (Figure 6d), no obvious change in the XRD curves was found, which further confirmed the good chemical stability and durability of the sea urchin-like NiCo2O4.
Catalytic Mechanism and Phenol Degradation Pathway To investigate the catalytic mechanism, a series of quenching experiments was performed to determine the types and contributions of reactive oxygen species in the sea urchin-like NiCo 2 O 4 /PMS system.As reported in the literature [50], it is known that MeOH is usually used to scavenge SO 4 •− and • OH, and TBA is a bursting agent for • OH.From Figure 7a, phenol was completely removed after 45 min without adding any quenching agent, whereas the degradation of phenol was only about 90% and 29% with the addition of 0.5 M and 10 M MeOH, respectively.Correspondingly, the k value was reduced from 0.09139 min −1 (not added) to 0.02639 min −1 (added 0.5 M methanol) and 0.00387 min −1 (added 10 M methanol) (Figure 7b).When 0.5 M TBA was added to the sea urchin-like NiCo 2 O 4 /PMS system, the removal of phenol remained almost constant, although the k value of the degradation reaction decreased slightly.With increasing the concentration of TBA to 10 M, the phenol removal rate reduced to about 53%, and the k value decreased to 0.00855 min −1 (Figure 7a,b).The results showed that the addition of either a TBA or MeOH quencher significantly inhibited phenol degradation, and the inhibition of phenol degradation gradually increased with the increase in the concentration of the TBA or MeOH quencher, indicating that both SO 4 •− and • OH were involved in the activation of PMS.In addition, EPR experiments (Figure 7c) further verified that both SO 4 •− and • OH were generated in the NiCo 2 O 4 /PMS system based on a typical seven-line EPR signal [46].Notably, phenol degradation is still not completely inhibited within 10 M MeOH, suggesting that other ROS are also involved in the degradation reaction.For this reason, p-benzoquinone (p-BQ) and lev histidine (L-His) were used as O 2 •− and 1 O 2 bursting agents, respectively, to determine whether they were involved in the phenol degradation process [51,52].As shown in Figure 7a,b, both the phenol removal rate and degradation rate constant k decreased dramatically after the addition of 2 mM p-BQ and 50 mM L-His, which suggests that O 2 •− and 1 O 2 also play an essential role in phenol degradation.Based on the above XPS analysis of the catalyst (Figure 2) and quenching experiments (Figure 7a), the possible reaction mechanism of phenol degradation by sea urchin-like NiCo 2 O 4 -activated PMS can be proposed as follows.First, the ≡Co and ≡Ni ions combine with H 2 O as Lewis acid/base sites to form ≡Co-− OH and ≡Ni-− OH.Subsequently, the ≡Co 2+ -− OH and ≡Ni 2+ -− OH species on the surface of the urchin-like NiCo 2 O 4 activate PMS to form surface-bound SO 4 •− (Equations ( 7) and ( 9)).Meanwhile, the formed ≡Co 3+ -− OH and ≡Ni 3+ -− OH react with PMS to regenerate more ≡Co 2+ -− OH and ≡Ni 2+ -− OH species (Equations ( 8) and ( 10)), respectively.Furthermore, according to the XPS analysis, redox reactions occur between Co 3+ /Co 2+ and Ni 3+ /Ni 2+ on the surface of the sea urchinlike NiCo 2 O 4 material (Equation ( 11)), which may enhance the electron transfer effect and accelerate PMS activation.The action of these two redox pairs is similar to the Fenton reaction based on the Harber-Weiss cycle [53].Namely, the synergistic effect between the Ni and Co can facilitate the activation of PMS to generate SO 4 •− .Based on the analysis of previous studies [54,55], the partial SO 4 •− directly participates in the decomposition of phenol, and the remaining SO 4 •− can react with H 2 O/OH − to form • OH (Equations ( 
12) and (13)). Subsequently, a portion of •OH directly participates in the decomposition process, and the remaining •OH is involved in the generation of O2•− (Equations (14)-(17)). Finally, the nonradical species 1O2 comes from the O2•− (Equation (18)). Since the above reactions proceed in a cyclic manner, phenol in solution is under constant assault by the radical pathway based on the ROSs (SO4•−, •OH, O2•−) and the nonradical pathway (1O2) until it is completely decomposed to CO2 and H2O (Equation (19)). The reaction mechanism of phenol degradation in the sea urchin-like NiCo2O4/PMS system is shown schematically in Figure 7e.

To further investigate the degradation pathway of phenol in the sea urchin-like NiCo2O4/PMS system, GC-MS was employed to measure the intermediates produced during phenol degradation. A few compounds, like dihydroxybenzene and benzoquinone, were initially determined (Figure S6). Furthermore, the UV spectrograms at different time points during the phenol degradation process verified that some intermediate products were produced (Figure S5). Based on the present experimental results and previous studies [56], possible degradation pathways for phenol were proposed (Figure 7d). First, the para or ortho positions relative to the hydroxyl (−OH) group on the ring are attacked by ROS to produce dihydroxybenzene, which then continues to be oxidized by ROS to produce p-benzoquinone, whose benzene ring and carbon-carbon double bond are destroyed by ROS and converted into oxalic acid, which is finally broken down into CO2 and H2O by a decarboxylation process. The high TOC removal rate (67.5%) in Figure S3a indirectly indicated that the final products were transformed into CO2 and H2O, which finally achieved the effective mineralization of the pollutants.

Preparation of the Sea Urchin-like NiCo2O4 Catalysts

The sea urchin-like NiCo2O4 catalysts were prepared by a hydrothermal and thermal treatment method. To start with, 4 mmol of CoCl2·6H2O, 2 mmol of NiCl2·6H2O and 6 mmol of urea were dissolved in deionized water and stirred well. The mixed solution was then transferred to a hydrothermal reactor and heated at 120 °C for 6 h, and a purple-pink powder was obtained. Finally, the sea urchin-like NiCo2O4 catalysts were obtained by calcining the prepared purple-pink powder at 300 °C for 4 h under an air atmosphere. For comparison, the monometallic nickel and cobalt oxides were synthesized separately using the same method but omitting the cobalt or nickel source, respectively.
Degradation Experiments

The catalytic degradation experiments were performed in a quartz reactor containing 250 mL of phenol solution, placed on a magnetic stirrer equipped with a thermal sensor and a water-bath kettle (the stirring speed was set to 500 rpm). Firstly, a quantitative amount of catalyst was added to the phenol solution for half an hour to reach the adsorption-desorption equilibrium of the material. A quantitative amount of the oxidant PMS was then added to the contaminant solution to begin removal of the phenol. At set time intervals, 1 mL of sample solution was taken out, filtered through a 0.22 µm membrane and quenched by the addition of 0.5 mL of methanol, followed by UV and HPLC tests, respectively. Experimental parameters in the different investigations were varied one at a time, with all other conditions held constant. Diluted HCl and NaOH solutions were used to regulate the pH value of the original solution. After the degradation reaction, the catalyst in the solution was filtered and reused; with the degradation rate of phenol as the index, the stability and reusability of the catalyst were tested. In addition, to compare the catalytic performance of different catalysts, the same degradation experiments were carried out with the synthesized single metal oxide catalysts. In the quenching experiments, different quenchers were added to the phenol solution. The hydroxyl radical (•OH) was quenched by TBA; •OH and the sulfate radical (SO4•−) were quenched by MeOH. Superoxide radicals (O2•−) and 1O2 were quenched by para-benzoquinone (p-BQ) and L-histidine (L-His), respectively. The main reactive oxygen species (ROS) in the system were determined by comparing the removal rates of phenol.

Characterizations

X-ray diffraction (XRD, D max/RB diffractometer, Rigaku Corporation, Tokyo, Japan) with Cu Kα radiation (λ = 1.5406 Å) was carried out to measure the crystal structure and purity of the materials. The microscopic morphologies and structural features of the synthesized materials were obtained via scanning electron microscopy (SEM, Shimadzu S4800, Hitachi Corporation, Hitachi, Japan) with an X-ray energy-dispersive spectrometer (EDS) and a transmission electron microscope (TEM, JEM-200CX, JEOL, Akishima, Japan). The Brunauer-Emmett-Teller (BET) specific surface area, pore size distribution and pore volume of the catalysts were characterized using a physisorption instrument (JW-ZK222). The elemental composition and chemical valence states of the catalysts were recorded by X-ray photoelectron spectroscopy (XPS, ESCALAB 250Xi, Thermo Scientific, Waltham, MA, USA). During the phenol degradation process, a total organic carbon analyzer (TOC, Shimadzu-VCSH, Kyoto, Japan) was used to determine the TOC content in the solution. The concentration of phenol was determined by HPLC (Elite-EClassical 3100, Dalian, China) on a C-18 HPLC column (5 µm, 4.6 × 250 mm) with an ultraviolet detection wavelength of 270 nm. Typically, acetonitrile and ultrapure water were used as the mobile phase (40% organic phase and 60% aqueous phase). The flow rate was set to 1 mL min−1. The effects of the catalysts on phenol degradation efficiency were monitored by ultraviolet-visible absorption spectroscopy (UV-Vis, TU-1810PC, Beijing, China). In order to better identify the reactive oxygen species (ROSs; SO4•−, •OH, O2•− and 1O2) during the catalytic reaction, the effects of ROS were determined by free radical quenching experiments and electron paramagnetic resonance trials (EPR, Bruker A300). The intermediates
produced at different time points in the degradation process were identified by gas chromatography-mass spectrometry (GC-MS, Shimadzu-QP2020, Kyoto, Japan).

Calculation Methods

The degradation kinetics of phenol was fitted by the quasi-first-order reaction equation, and its apparent reaction rate constant k was calculated according to Equation (20):

ln(Ct/C0) = −kt (20)

where t is the reaction time, and Ct and C0 represent the phenol concentration at time t and the initial phenol concentration, respectively [57]. With t as the abscissa and ln(Ct/C0) as the ordinate, the slope obtained after fitting is the apparent reaction constant k of the system (unit: min−1). Furthermore, the reaction activation energy (Ea) during the phenol degradation reaction could be calculated through the Arrhenius equation (Equation (21)):

ln k = −Ea/RT + ln A (21)

where k represents the reaction rate constant (min−1), Ea is the activation energy (kJ mol−1), R is the molar gas constant (8.314 J mol−1 K−1), T represents the thermodynamic temperature (K) and A is a constant [58].

Conclusions

In this study, sea urchin-like NiCo2O4 microspheres were successfully synthesized using a simple hydrothermal method followed by thermal treatment and applied to phenol degradation via the activation of PMS. Thanks to the synergistic redox cycle between the Ni and Co ions and the stable structure of the sea urchin-like NiCo2O4 microspheres, the catalyst showed good catalytic performance in activating PMS for the degradation of phenol. In the sea urchin-like NiCo2O4/PMS system, phenol could be completely removed within 45 min with a good mineralization rate, which is attributed to the activation of radical species. The sea urchin-like NiCo2O4 exhibits enhanced PMS activation across a broad spectrum of pH values. In simulated environmental aquatic systems, both HCO3− and Cl− can promote phenol degradation, and the presence of NO3− has almost no effect on phenol degradation in the sea urchin-like NiCo2O4/PMS system, which is due to the participation of non-radical species. In addition, the sea urchin-like NiCo2O4 microspheres exhibited extraordinary reusability. The quenching experiments and EPR experiments confirmed that both radical species (SO4•−, •OH and O2•−) and the non-radical species (1O2) are important reactive oxygen species in the sea urchin-like NiCo2O4/PMS system. Furthermore, the degradation pathway of phenol was proposed based on the intermediates detected via GC-MS. This study suggests that sea urchin-like NiCo2O4-activated PMS is a promising technology for environmental treatment and remediation of phenol-induced water pollution problems.

Figure 1. (a) Schematic diagram of the preparation of the sea urchin-like NiCo2O4 catalysts. (b) XRD pattern of the sea urchin-like NiCo2O4. (c,d) SEM images of the sea urchin-like NiCo2O4. (e,f) TEM images of the sea urchin-like NiCo2O4. (g) TEM image of the sea urchin-like NiCo2O4 (inset: selected area electron diffraction pattern). (h) EDS element content image. (i) EDS mapping images of sea urchin-like NiCo2O4.
Figure 3. (a) Phenol removal in different systems. (b) The degradation rate constants (k) of phenol in different systems.

Figure 6. (a) Ct/C0-t diagram of phenol degradation in the cycling experiment. (b) The first-order kinetic simulation of the reaction in the cycling experiment. (c) Histogram of the phenol removal rate at 60 min in the cycling experiment. (d) Fresh and used XRD patterns of sea urchin-like NiCo2O4.

Figure S1: (a) XRD pattern of Co3O4; (b) XRD pattern of NiO; (c) N2 adsorption/desorption isotherms of the sea urchin-like NiCo2O4; (d) pore size distribution image of the sea urchin-like NiCo2O4; Figure S2: SEM images of (a,b) Co3O4 and (c,d) NiO; Figure S3: (a) TOC removal rate of phenol in the NiCo2O4/PMS degradation system; (b) TOC removal rate of RhB in the NiCo2O4/PMS degradation system; Figure S4: First-order kinetic simulation diagram of phenol degradation under the influence of reaction parameters: (a) catalyst dosages, (b) PMS dosages, (c) initial phenol concentrations, (d) initial pH, (e) reaction temperatures; Figure S5: The SEM image of used sea urchin-like NiCo2O4 catalysts; Figure S6: UV-vis spectral changes of phenol in the sea urchin-like NiCo2O4/PMS degradation system; Figure S7: GC (a-c) chromatogram for the phenol degradation in the sea urchin-like NiCo2O4/PMS system; (d-f) MS spectrum of the intermediates from phenol degradation; Table S1: Comparison with other catalysts for phenol
2024-01-06T16:29:41.826Z
2023-12-26T00:00:00.000
{ "year": 2023, "sha1": "c4a2dbd6eb964e467dd5bd8061375926b21c1584", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/29/1/152/pdf?version=1703668172", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "53928c667d9066c3219bcd0e81f03178b35dcf0f", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
233466196
pes2o/s2orc
v3-fos-license
Treatment of Post-biopsy Arteriovenous Fistula of a Renal Graft by Selective Embolization The development of an arteriovenous fistula (AVF) after renal graft biopsy is a rare complication that is associated in most cases with spontaneous resolution. However, interventional therapies are required in some cases to prevent graft loss. Selective embolization has been described as an alternative treatment. In the present study, we describe our experience with post-biopsy AVF in kidney transplant patients managed with selective embolization. From 2005 to 2015, a total of 452 kidney transplant biopsies were performed, of which 12 resulted in an AVF requiring embolization. In 92% of cases, this was successful. Beforehand, mean serum creatinine was 2.45 mg/dL; after the procedure, it increased to 3.05 mg/dL; however, 3 months later, mean creatinine dropped to 1.85 mg/dL. Graft survival after 2 follow-up years was 72%. Our experience demonstrates that selective embolization of the AVF after kidney transplant biopsy is a safe procedure, and that transplant function can be maintained in patients with this complication.

Introduction

Renal biopsy is recognized as the gold standard for the evaluation of renal graft dysfunction; this allows for differential diagnosis and treatment of the multiple conditions that can be associated. [1] Despite its invasive nature, percutaneous renal biopsy is a relatively safe procedure, [2] with a low incidence of vascular complications, ranging from 0.2% to 2% of patients, including bleeding and the development of arteriovenous fistulas (AVFs). [2,3] The incidence of AVF after kidney graft biopsy is reported in up to 16% of cases, a complication that largely improves spontaneously without any therapy. [2] However, sometimes it can lead to graft dysfunction secondary to compromise of renal parenchymal perfusion, which requires prompt treatment to avoid graft loss. [4] Risk factors associated with the development of AVF are uncontrolled arterial hypertension, nephroangiosclerosis, allograft fibrosis, large needle size, penetration into the medulla, coagulation abnormalities, and certain immunosuppressant drugs. [5,6] Regarding treatment, due to frequent spontaneous resolution, some groups suggest conservative management; however, there are some signs and symptoms that support early intervention, including increased size of the AVF on follow-up Doppler, impaired graft perfusion, difficult-to-control post-biopsy hypertension, and persistent hematuria. [2] Endovascular therapy is among the therapeutic options for this complication; [3] in the case of uncontrolled bleeding, a nephrectomy may be necessary. [4] There are few reports of embolization in patients with a post-biopsy AVF of a renal graft.
[2] Our study aimed to describe our short- and long-term clinical experience with patients who developed an AVF after kidney graft biopsy. This was a retrospective cohort in which renal transplant patients with a diagnosis of AVF secondary to renal graft biopsy, evaluated and managed with selective embolization during the years 2005 to 2015 at the Pablo Tobón Uribe Hospital (HPTU, Spanish initials), were included. Included patients were older than 18 years. The following variables were examined: baseline and demographic characteristics; etiology of the kidney disease; type of donor (living or deceased); immunosuppressive therapy used; indication for renal biopsy; renal function before biopsy; findings on Doppler of the graft, both pre- and post-biopsy; the number of punctures in the biopsy; the number of samples compatible with renal tissue; hemoglobin and hematocrit pre- and post-biopsy; systolic and diastolic blood pressure at the beginning of the biopsy; signs and symptoms suggestive of AVF; histological diagnosis of the biopsy; treatments received; complications; the time between biopsy and embolization; graft loss; renal function at 3, 6, 12, and 24 months after biopsy; reentry to dialysis; and death. Data were obtained from the patients' electronic medical records; they were recorded in an Excel database, and then exported to SPSS (Chicago, IL, USA) for statistical analysis. Standard precautions for graft biopsy were taken. Ultrasound-guided biopsy was performed at one of the poles of the renal graft, preferably the lateral or upper pole, using a biopsy gun (ProMag Biopsy Needle, 18-gauge × 25 cm, Ref. 765018250, Argon Medical Devices, Frisco, TX, USA). Absolute rest was prescribed for 6 h with monitoring of vital signs, the puncture site, and urine characteristics. Routine subsequent Doppler ultrasound control was not performed. The diagnosis of post-biopsy AVF was made on Doppler ultrasound. Embolization was indicated in the case of impaired graft function or a significant increase in the AVF.

Results

From 2005 to 2015, 452 allograft biopsies were performed, in which 12 patients developed an AVF requiring embolization. Six were men and six women, with a median age of 42 years (p25-75: 36-50.5). Four (33.3%) patients presented with macroscopic hematuria and seven (58.3%) with renal dysfunction. Eleven patients (91.7%) received induction therapy, and all received triple immunosuppressive therapy [Table 1]. The indication for renal biopsy was graft dysfunction in all patients. Acute graft rejection was confirmed in 83.3% of patients [Table 1]. Kidney biopsy was performed, on average, 2.92 months (percentiles 25-75: 0.34-48.5) after renal transplantation; mean follow-up time from the biopsy until the last consultation was 42 months (p25-75: 5.25-52.25). Other characteristics that were evaluated can be found in Table 1. Patients had an average of two punctures for each renal biopsy. Rejection was reported in 11 (91.7%), of which 4 were cellular rejection, 6 humoral, and 1 mixed rejection. The median time interval between the renal biopsy and diagnosis of AVF was 8.5 months (p25-75: 3.25-46). In 75% of them, renal Doppler showed compromised renal flow due to the fistula, while 25% showed a significant increase in fistula size. The velocity of renal flow had a median value of 276.5 cm/s (p25-75: 169.5-360). Some laboratory parameters before and after the renal biopsy are shown in Table 2.

Discussion

Percutaneous renal graft biopsy is an indispensable procedure in the management of renal transplant patients with graft dysfunction. [7,9] However, its invasive nature does not render it a risk-free procedure, as it is one of the causes of iatrogenic vascular complications (AVF and pseudoaneurysms).
[3,7] AVF is a complication occurring in 1 to 15% of patients; [3,4] it is caused by damage to the arterial and venous walls, [10] and is diagnosed by graft Doppler ultrasound. [11,14] Among the available therapeutic options, there is currently no standard therapy. [3,15] Some groups suggest expectant management, considering that in up to 70% of patients the AVF resolves spontaneously within the first 2 years. [2,12] Barrios et al. suggest performing Doppler ultrasound to check the graft every week for a month, and then monthly until the AVF is closed. [8] However, some groups suggest early intervention; Fossaceca et al. found that endovascular therapy was optimal in symptomatic AVF or impairment of renal function after kidney biopsy. [3] Concerning endovascular therapy, selective angioembolization is considered the therapy of choice as a safe and effective procedure, [16] which allows occlusion of the AVF without inducing a lesion in the renal parenchyma.

One of the complications of endovascular therapy is the risk of partial infarction or renal ischemia, [2] a complication that did not occur in our patient cohort. Although serum creatinine values increased significantly after renal embolization, this was expected, owing to the contrast medium, ischemia and inflammation. These levels decreased again when creatinine values were assessed in subsequent controls. Renal function after 2 years of follow-up post-embolization was preserved in 71.4% of patients.

Long-term follow-up of the patients evaluated suggests that embolization is a safe procedure in those with a diagnosis of AVF of the renal graft, a complication that can compromise graft function after biopsy; it allows function to be preserved and graft loss to be avoided in most patients.

Financial support and sponsorship

The study was supported by HPTU, Medellin, Colombia.

Conflicts of interest

There are no conflicts of interest.

Figure 1: Percutaneous embolization. (a) Pre-embolization of the AV fistula (AVF): selective arteriography of the transplanted kidney shows an arterial anastomosis to the external iliac artery, with high-grade AVF of the segmental artery of the lower renal pole. (b) Angiographic control post-embolization: complete closure of the AVF was observed, as well as improvement of the parenchymogram.
2021-05-01T13:42:47.421Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "47db0e1582b16c0a48924966efbfee8d015ba11a", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ijn.ijn_351_19", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8bb8bdd7c848e8f376e8a1da497cd1164d7209a6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
156315376
pes2o/s2orc
v3-fos-license
Moving Forward on Africa’s Regional Integration Agenda: Learning from Best Practices and Efforts

Introduction

The African continent is home to 14 percent of the global population. However, it accounts for less than 3 percent of global GDP, only 1.8 percent of global imports, 3.6 percent of global exports, and 3 percent of foreign direct investment. Africa's growth edged up from 3.7 percent in 2013 to 3.9 percent in 2014 (ECA, 2013) 1. The GDP growth rate is expected to increase to 4.5% and 4.9% in 2015 and 2016, respectively [1]. Against this backdrop, African countries are increasingly realising the virtue of regional co-operation and integration as a strategy to help them achieve robust and self-sustaining economic transformation and thereby become an important and effective player in the global economy. Most of the 54 African countries are small in size and income levels. Besides, 16 countries 2 are land-locked. Regional integration and the creation of an African common market have been the vision of African leaders since the early years of independence. There are several critical reasons for this:

• First, a common market combining Africa's 54 mostly small and fragmented economies will lead to economies of scale that make countries competitive.

• Second, it would provide access to a wider trading and investment environment, inducing backward and forward linkages and promoting exports to regional markets, building experience to enter global markets.

• And third, it would provide a framework for African countries to cooperate in developing common services for finance, transport, and communications.

This vision will be achieved through the Abuja Treaty Establishing the African Economic Community, which is now backed by the Constitutive Act of the African Union. Through this process, the continent can pool its capacities, endowments and energies to transform itself, and thereby help uplift the lives of millions of its peoples. The direct and indirect multiplier effects of economies of scale, bigger markets and diversified production will result in greater wealth creation.
The RECs are expected to evolve into free trade areas and customs unions and, through horizontal co-ordination and harmonization, eventually culminate in a common market embracing the entire continent. In this regard, we highlight some key milestones:

Trade and market integration

The Abuja Treaty anticipates that all RECs will satisfy the requirements of an FTA by 2017. All RECs have made significant efforts to move ahead on this objective by implementing trade liberalization schemes. Though there are some variations in performance, REC members are for the most part adhering to their commitments. ECOWAS has established an FTA and customs union jointly with UEMOA. COMESA launched its Customs Union in June 2009. EAC has recently moved to the common market stage, and the COMESA-SADC-EAC Tripartite FTA is approaching its formal launch date in June 2015 [9][10][11][12].

Macro-economic policy convergence

Macro-economic policy convergence is one of the key objectives of RECs. Attainment of macro-economic convergence would lead to a stable macro-economic environment in the region and impact positively on economic growth. The main variables are: price stability/control of inflation; fiscal restraint/restraint on budget deficit financing; maintenance of sufficient levels of gross external reserves; and growth performance. To monitor the process of convergence, a number of RECs have established macro-economic surveillance mechanisms: e.g., ECOWAS has set up the West African Monetary Agency, and SADC has set up a Macroeconomic Monitoring Surveillance and Performance Unit (MSPU) [13].

Monetary and financial integration

COMESA and ECOWAS have plans to become monetary unions within a span of 5 to 10 years. SACU, whose members (South Africa, Botswana, Lesotho, Namibia and Swaziland) are part of SADC, has a common monetary area (CMA) where the South African Rand circulates officially as a common currency exchangeable with the domestic currencies. COMESA has a regional bank (the PTA Bank) and has put in place the COMESA Fund to finance local activities and to act as seed money for the mobilization of further funding [14]. In SADC, the South African Development Bank has taken responsibility for serving the interests of all members. In ECOWAS, the ECOWAS Fund has been instrumental in supporting adjustment costs within the Community arising from the implementation of trade liberalization schemes.

Free movement of people

A number of measures have been adopted to promote the free movement of people, particularly the abolition of entry visas and the issuance of common travel documents by some RECs. Best practices can be found in ECOWAS and EAC; others have simply facilitated movement for special categories of people (mostly businessmen).

Physical integration

Physical connectivity has advanced in many areas. The West African sub-region has a relatively well-linked road network through the Trans-West African Highway system. UMA has a fairly well-developed road network, and SADC and EAC have good levels of road and rail linkages. All RECs have instruments in one form or another to achieve unimpeded transit facilitation, reduce costs and improve overall efficiency [15][16][17][18].
Peace and security

A peace and security architecture is in place: there are fewer conflicts in Africa now than in the past, and coups d'état are now virtually a thing of the past. The AU Charter on Human and Peoples' Rights provides a code of conduct for member States in terms of adherence to principles of respect for human rights and human dignity. The Protocol establishing the African Court on Human and Peoples' Rights has been in force since January 2005. The African Peer Review Mechanism (APRM) has become a mutually agreed instrument voluntarily acceded to by the member States of the African Union (AU) as an African self-monitoring mechanism.

Currently, a number of political impetuses are being brought to bear on Africa's integration process to further speed up progress.

First is the launch of the COMESA-SADC-EAC Tripartite Free Trade Area of 26 countries with a combined population of nearly 600 million people and a total Gross Domestic Product (GDP) of approximately US$1.0 trillion. The Tripartite FTA makes up half of the African Union (AU) in terms of membership, just over 58% in terms of contribution to GDP, and 57% of the total population of the African Union. The Tripartite FTA is anchored on three pillars, namely: market integration based on the Tripartite Free Trade Area (FTA); infrastructure development to enhance connectivity and reduce the costs of doing business; and industrial development to address productive capacity constraints [19].

Second is the January 2012 AU Summit Decision to fast track the establishment of a Continental Free Trade Area by 2017 (indicative) and implement a parallel comprehensive Action Plan to boost intra-African trade. ECA studies (ARIA V and VI) 3 show that the CFTA plus robust trade facilitation (TF), in terms of reformed customs procedures and port handling, would expand trade flows among African countries. It could add up to US $34.6 billion (about 52%) to the baseline in 2022. Imports of African countries from the rest of the world would decrease by US $10.2 billion due to the increase in intra-African trade (ECA, 2012, ARIA V). The process is being steadfastly driven by a Continental Task Force composed of representatives of the African Union Commission, the RECs and ECA, under close stewardship by the African Union Conference of Ministers of Trade and high-level political oversight of the Chairs of the RECs, who are Heads of State.

The above political push is designed to build on the "acquis" in terms of the existing successes and milestones of the integration agenda [20]. The purpose of this short paper is to identify and highlight some of the best examples and practices at both national and regional levels that are helping to move Africa's integration agenda forward, and from which other member States and RECs can learn.
Trade facilitation efforts

Broadly speaking, trade facilitation efforts are critical in addressing the logistics of moving goods through ports and other trading corridors, and in moving the documentation associated with cross-border trade more efficiently. They also include the environment in which trade transactions take place, that is, the transparency and professionalism of customs and regulatory environments, as well as the harmonization of standards and conformity to international or regional regulations. The International Chamber of Commerce (ICC) defines trade facilitation as the adoption of a comprehensive and integrated approach to simplifying and reducing the cost of international trade transactions, and ensuring that the relevant activities take place in an efficient, transparent and predictable manner based on internationally accepted norms, standards and best practices [21].

Trade facilitation is particularly important for developing countries, as studies show that they stand to gain the most from more efficient trade procedures, although achieving it may be more challenging for these economies than for the developed world. But even modest reductions in the cost of trade transactions would have a positive impact on trade for both the developed and the developing world. Trade facilitation should not be perceived only as a "transportation or customs problem", but rather as a broader issue which straddles many aspects of the weak capacities that exist in many developing countries and inhibit their effective participation in international trade. Nowhere is this more true than in Africa. To this end, RECs have been at the forefront of trade facilitation at the sub-regional level. Most of their efforts are focused on, but not limited to, the removal of non-physical transport barriers along major transit corridors, especially those connecting landlocked countries to seaports. This section therefore focuses on some of the efforts aimed at improving transport and customs operations.

Addressing high transaction costs through regional payment systems: A regional payment system remains one of the key tools for making payments safe, sound, secure and timely, with certainty and at minimum cost. In spite of more than two decades of financial reforms, African payment systems for transacting trade and business have remained very cumbersome, underdeveloped, fragmented, costly and inefficient. For instance, it takes just a few minutes to wire money from one corner of the world to another, but transacting business payments across borders in Africa may take days, weeks or even months. Efforts are being made at all levels to address the problem.

COMESA Regional Payment and Settlement System: The system allows member countries to transfer funds within COMESA on the same day and at a lower cost. It is designed to benefit traders by reducing or minimizing the cost of money transfers. It thus saves money for traders, as the funds are transferred through a member country's central bank, with customers using commercial banks to deposit and withdraw cash. Under the Regional Payment and Settlement System, traders pay a flat rate of 0.25% to transfer euros or dollars across borders, an improvement on standard bank fees which can be as high as 5% of the transfer.
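To make the fee gap concrete, the short sketch below compares the cost of a single cross-border transfer under the 0.25% flat rate quoted above with a conventional bank fee of 5%. The US$10,000 invoice value is a hypothetical figure chosen purely for illustration, not a number from the paper.

def transfer_fee(amount_usd, fee_rate):
    # Fee charged on a cross-border payment of the given amount.
    return amount_usd * fee_rate

invoice = 10_000                               # hypothetical invoice value in US$
regional_fee = transfer_fee(invoice, 0.0025)   # 0.25% flat rate under the regional system
bank_fee = transfer_fee(invoice, 0.05)         # standard bank fee of up to 5%
print(regional_fee, bank_fee, bank_fee - regional_fee)   # 25.0 500.0 475.0

On these assumptions a trader would save US$475 on a single US$10,000 payment, which illustrates why regional payment systems feature prominently in the trade facilitation agenda.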
East African Payment System (EAPS): This is a Real Time Gross Settlement (RTGS) system operated by the Bank of Uganda, the Central Bank of Kenya and the Bank of Tanzania. EAPS enables traders to make or receive cross-border payments in real time and in their respective local currencies. Under this system all traders are charged a flat rate (US$6.25) to transfer bulk funds within the three EAC countries. A study by the Overseas Development Institute indicates that before the EAPS, it used to cost three times more to transfer money within the EAC than to send a similar amount from Europe into the region: sending US$200 from one EAC country to another used to cost up to US$50, whereas sending the same amount from the UK into the region costs US$15, while the global average cost is US$8.

The SADC Integrated Regional Electronic Settlement System (SIRESS) successfully implemented its first stage in four countries (South Africa, Namibia, Lesotho and Swaziland) in 2013 [22]. This is the first step towards a common electronic payment system for all 15 SADC countries. It aims to boost socio-economic development through harmonization in areas of common interest such as trade tariffs and border controls, and integration in areas such as telecommunications and financial infrastructure. To single out one national example, the Zimbabwe Revenue Authority (ZIMRA) has introduced an e-payment facility, whereby funds can be electronically transferred into ZIMRA's bank account and will automatically appear in the client's account within the Automated System for Customs Data (ASYCUDA).

Using transit guarantee schemes to facilitate goods in transit: Transit guarantee schemes are facilities designed to facilitate the movement of goods in transit across regions by providing adequate security of guarantee to the transit countries to recover duties and taxes should the goods in transit be illegally diverted for home consumption in the country of transit. Under such a scheme, only one bond is issued, as opposed to current practice whereby a bond is required for each and every country through which the goods transit.

The COMESA carnet: Under this scheme, governments commit themselves to ensuring that goods in transit are not resold in countries other than those for which they are destined. The scheme plays an important role in ensuring the efficient movement of goods in the region, thereby facilitating the realization of the COMESA Customs Union objective. Under this scheme, a trader from Kenya carrying goods to Tanzania, for example, will only have to use a single document to clear the goods from Mombasa to Tanzania instead of several documents.

COMESA-SADC Transit Management System (TMS): Under this instrument, a single bond is issued for goods throughout the transit period in place of multiple bonds.

SADC Corridor Management Committee (CMC): The CMC reviews the condition of corridors regularly and makes recommendations for improvements in terms of physical infrastructure and other related bottlenecks that hinder mobility along the corridor. Operations along the corridor are therefore subject to constant scrutiny so that bottlenecks and abnormal practices can be quickly addressed. Similar corridor management arrangements also exist in other RECs, such as the Abidjan-Lagos Corridor Organization (ALCO) in ECOWAS and the Walvis Bay Corridor Group that caters for the Eastern and Southern Africa region [23].
ECOWAS Brown Card and COMESA Yellow Card: These cards are basically third-party liability insurance cover for accidents involving motor vehicles crossing borders within the respective RECs. The card protects car owners from third-party liabilities incurred in foreign countries, since they are fully covered under the auspices of the protocol. Drivers are no longer jailed for accidents causing damage under third-party liability, since the Brown Card cover is sufficient evidence of the insurer's ability to pay compensation for such liabilities. The insurers have put in place a compensation fund to be used in paying claims under these schemes.

Harmonization and simplification of customs procedures, leading ultimately to borderless and paperless trade: Excessive customs procedures and documentary requirements, coupled with the lack of harmonized trade and market integration policies and instruments, can be administratively difficult to manage at the national level. A country belonging to two or more RECs with differing trade liberalization mechanisms would have to cope with policy contradictions, varying instruments, multiple procedures and formats. Customs officials would, for instance, have to deal with different tariff reduction rates, rules of origin, trade documentation, statistical nomenclatures, etc., applicable to different RECs. According to estimates by UNCTAD, an average customs transaction involves 20-30 different parties, 40 documents and 200 data elements, 30 of which are repeated at least 30 times, and the re-keying of 60-70% of all data at least once. Such a multiplicity and diversity of steps complicates rather than simplifies customs procedures and paperwork, causes possible confusion as to which rules, forms and procedures to apply, adds to the cost of doing business, and provides scope for rent-seeking practices by customs and other related officials.

Efforts are therefore being made to enhance border cooperation among the agencies of member States responsible for border controls and procedures dealing with the importation, exportation and transit of goods. A number of African countries are in the process of implementing Integrated Border Management or Coordinated Border Management, including the following:

Single window

Single windows are being established in a number of African countries. Notable effective single windows include Senegal (Customs Computer System, GAINDE 2000), Ghana (Ghana Community Network Services Ltd., GCNet), Tunisia (Tunisia TradeNet), Cameroon (GUCE) and Mauritius, among others. The following countries are in the process of establishing single window systems: Kenya, Burkina Faso, Libya, Morocco, the Republic of Congo, Rwanda and Uganda. Although setting up a Single Window system involves some cost and complexity, the benefits are known to far outweigh the costs. The Zimbabwe Revenue Authority is currently at an advanced stage in the implementation of a Single Window environment at its major ports of entry, including the Beitbridge border post.

To monitor the time spent in clearing goods at the border, countries have started to establish and publish the average release times taken during the whole process. For instance, using surveys conducted with the support of the East African Community and the World Customs Organization, Uganda has data available on average release times at its major border posts.
The Chirundu One Stop Border Post (OSBP) at the border crossing between Zambia and Zimbabwe, often cited as a best practice, has helped to significantly improve the efficiency of border crossing procedures. Before the one stop border post became operational, clearing times were between 3 and 5 days; now clearance is done on the same day. An average of 480 trucks cross Chirundu every day, so a total of 960 to 1,920 travel days per day are being saved. This translates, at a conservative estimate, to between US$288,000 and US$576,000 in savings every day. Box 1 shows other OSBP efforts in progress.

A Memorandum of Intent has been signed between the Mozambique and Zimbabwe customs administrations with a view to establishing OSBPs at the Nyamapanda/Cuchamano and Forbes/Machipanda border locations. South Africa and Zimbabwe are also currently in negotiations to improve operations at the Beitbridge border post and eventually plan to create an OSBP between the two countries. The EAC has passed a bill on OSBPs, indicating the importance attached to the concept in the sub-region. Currently, there are OSBPs involving Kenya and Uganda; Tanzania and Uganda; Rwanda and Uganda; and Sudan and Uganda. The concept of OSBP is being complemented in Eastern Africa with the introduction of the practice of customs clearance at the first port of entry. In 2012, the EAC adopted in principle the destination model of clearance of goods, under which assessment and collection of revenue take place at the first point of entry and revenues are remitted to the destination partner States, subject to the fulfilment of key pre-conditions to be developed by a High Level Task Force. Efforts to establish the Cinkase OSBP between Burkina Faso and Ghana under the auspices of UEMOA are advanced, and ECOWAS is also now fully involved. Source: Compiled by the authors from official sources and RECs' websites.

Box 1: Efforts in establishing OSBPs on the continent.

The lack of, or insufficient use of, automated processes and information technology is also a major source of delays, costs and inefficiencies. Many African countries have recognized the need to simplify and speed up customs procedures by using automated systems, introducing the Automated System for Customs Data (ASYCUDA++, now called ASYCUDA World). ASYCUDA is part of the modernization of customs clearance procedures, aimed at achieving efficiency and effectiveness in the clearing of goods through customs. With the help of COMESA and many other RECs, the cumbersome paperwork, delays and bureaucratic procedures associated with customs clearance are now a thing of the past in many of these countries. Currently, 16 COMESA countries use ASYCUDA in addition to the much simplified Customs Declaration Document (COMESA CD), a standardized document currently being implemented for customs transit traffic control. Holders of the COMESA CD are able to significantly minimize the time spent at border checkpoints. The document caters for imports, exports, transit and warehousing and has replaced, on average, 32 documents in some member States [24]. Once data is entered at an initial customs checkpoint, it automatically becomes available on the system at all other customs checkpoints. ECOWAS countries also use ASYCUDA systems. Three ECOWAS member States, namely Côte d'Ivoire, Liberia and Mali, are already using ASYCUDA World, the new version of ASYCUDA.
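As a rough consistency check on the Chirundu figures quoted earlier in this section, the sketch below reproduces the savings arithmetic. The 2-4 days saved per truck and the value of roughly US$300 per truck travel day are both inferred from the paper's own totals (960-1,920 travel days and US$288,000-576,000) rather than stated directly, so they are assumptions of this illustration.

trucks_per_day = 480                      # average daily truck crossings at Chirundu
days_saved_low, days_saved_high = 2, 4    # days saved per truck implied by the 960-1,920 travel-day figure
value_per_travel_day = 300                # implied US$ value of one truck travel day (288,000 / 960)

travel_days_saved = (trucks_per_day * days_saved_low, trucks_per_day * days_saved_high)
savings = tuple(days * value_per_travel_day for days in travel_days_saved)
print(travel_days_saved)   # (960, 1920) travel days saved per day
print(savings)             # (288000, 576000) US$ saved per day

Under these assumptions the quoted range of US$288,000 to US$576,000 in daily savings is reproduced exactly.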
Getting rid of non-tariff and other man-made barriers: Non-tariff barriers are the set of trade-distorting measures and policies other than tariffs. In a narrow sense, non-tariff measures are quantitative restrictions that are explicitly recognized as trade barriers, such as quotas. In a broader sense, non-tariff measures include unfair measures or the misuse of policies, such as technical barriers to trade and unfair government policies. Other non-tariff barriers include illegal practices on major trading corridors by both uniformed and non-uniformed personnel. There are other man-made barriers which create or add to price differences. They include policies such as import restrictions, special incentives or restrictions on exports, foreign exchange policies, and preferential national treatment. They also include policies which increase the costs imposed by natural barriers, such as regulations in the transportation and communications areas that keep the prices of these services artificially high. Documentation for clearing goods is of great concern in many African countries and appears to be particularly burdensome by international standards.

Importing activities in Africa cost more than in the rest of the world. On average, the import of one standard container takes 37 days and costs US$2,567. This compares with 22 days and US$958 in East Asia and the Pacific, 19 days and US$1,612 in Latin America and the Caribbean, and 33 days and US$1,736 in South Asia. Though exports are more costly in Eastern Europe and Central Asia, Africa excluding Northern Africa still compares rather poorly with the remaining regions: the export of one standard container takes on average 31 days and costs US$1,990 in the sub-Saharan African region; that is 10 days and US$1,067 more than from East Asia and the Pacific, 14 days and US$722 more than from Latin America, and 1 day less but US$387 more than from South Asia (ECA, 2014, Trade Facilitation from an African Perspective).

The COMESA region has registered increased growth in intra-regional trade due to efforts to eliminate non-tariff barriers. Of all reported NTBs, 72% have been removed, and member States continue to streamline their trade facilitation programs. The 14 member States participating in the COMESA FTA are a testimony to the regional trade liberalization regime (FTA). In the EAC region, all EAC partner States are using national and regional committees to monitor and address the challenges posed by NTBs. In terms of best practice at the national level in the elimination of NTBs, Rwanda can be considered a pacemaker, using different approaches including the online reporting mechanism and institutional committees. Further, it has removed all checkpoints after the border, compared with an average of 10 checkpoints in some EAC countries. Intra-EAC trade grew to US$5.5 billion in 2012, up from US$4.5 billion recorded in 2011, even as the five member states (Kenya, Uganda, Rwanda, Tanzania and Burundi) dithered on the elimination of non-tariff barriers (NTBs). These figures do not include informal cross-border trade, estimated to be as much as 40% of formal trade. Intra-trade in the EAC region grew from US$2.6 billion to US$8.6 billion between 2004 and 2014.

An online reporting scheme by COMESA, EAC and SADC is being implemented through the NTB Reporting and Monitoring Mechanism (ORM). The mechanism is designed to enable private and public sector operators to register complaints about NTBs, which can then be resolved bilaterally. To date, 329 complaints have been registered on the system, of which about 227 (69%) have been resolved (Table 1).

Intra-REC export trends: Trade facilitation efforts have also had a positive impact on intra-REC export trade, as shown in Table 2.

Greater benefits from trade facilitation efforts

There is ample evidence that the benefits of improved global trade facilitation far exceed those available from further tariff reduction. Box 2 highlights positive trends in intra-REC trade. The CFTA plus trade facilitation efforts is predicted to bring even greater benefits, as pointed out earlier in this paper.

Free Movement of People and Right of Establishment

As one of the key objectives of Africa's regional integration agenda, the issue of free movement of people often unleashes strong passions and criticisms of governments' policies on visas, immigration and nationality laws.

Best practices in free movement

a. ECOWAS citizens do not need a visa to travel to any ECOWAS country. The region has made significant progress in introducing the biometric identity card, replacing residence permits, and the programme is set to be launched in 2016. It is designed to ensure the unrestricted movement of citizens of member nations of the community. The ID card will circulate alongside the national identity card in each member state for a set period of time, until the new ID card is fully implemented within a specified timeframe.

b. EAC countries grant a 3-month visa to citizens holding only their national passports. If they hold the common EAC passport, they are entitled to a 6-month visa. The EAC common passport is valid for 5 years and is recognized by all the member States; it is, however, not valid for international travel beyond the EAC. For a number of years now, the tourism industry has been lobbying the governments in Southern Africa to come up with a single UNIVISA that will allow the free movement of tourists within the region, encouraging tourism. Finally, the single visa was officially launched by heads of state from the three countries in 2014. The revolutionary proposal seeks to boost the flow of tourists into the EAC region through the removal of visa barriers. According to the proposal, tourists will only need to use the same visa they acquire at the point of entry into the EAC region, including Rwanda, Kenya and Uganda, to access all three countries without paying any extra fees or stopping at other embassies.

c. Since the signing of the SADC Protocol on the Development of Tourism in 1998, there has been little progress in its implementation. The provision for a univisa was set for 2002, but did not materialize. Despite the challenges, the South African Institute of International Affairs (SAIIA) has indicated the importance of the univisa, which among other things could promote tourism in the region, as it would allow smooth entry for regional and international visitors, especially within trans-frontier conservation areas. It is estimated that the univisa will bring about 3%-5% annual growth to the region.

Efforts to Speed up Physical Connectivity

Good infrastructure is critical for the long-term economic development of a country. Key infrastructure assets create additional economic benefits by supporting industrial development and stronger trade links. African countries therefore recognize the importance of infrastructure in general, and regional infrastructure networks in particular, to their development and physical integration. To this end, a number of continental infrastructure development initiatives have emerged over the years. Among the most ambitious of these initiatives are the Trans-African Highways network, conceived in the early 1970s as a network of good all-weather highways linking Africa's capitals and major economic production areas to promote the integration of African peoples and economies, and, more recently, the Programme for Infrastructure Development in Africa (PIDA) and the Presidential Infrastructure Champion Initiative (PICI). Some of the key recent efforts by RECs and member States in advancing the infrastructure agenda are highlighted below.

Transport (Road, Rail, Port, Pipeline)

• There is a 4,500 km Trans-Saharan highway linking Algiers (Algeria) to Lagos (Nigeria). Once completed, the highway will facilitate trade and economic and social links between North African countries and other African regions.

• A US$25 billion infrastructure development programme has been launched by the governments of Kenya, Ethiopia and South Sudan, which includes the construction of a highway linking the three countries. Donors have pledged over US$2 billion to support its implementation.
• The Heads of State and Government of Côte d'Ivoire, Ghana, Togo, Benin and Nigeria have signed a Treaty for the modernisation of the Abidjan-Lagos Corridor, expanding the existing road into a 6-lane dual carriageway linking the 5 countries. The Treaty establishes a supra-national corridor management organisation and a US$50 million Seed Fund to accelerate the implementation of the highway. The ECOWAS Commission is spearheading the ongoing efforts to modernise the Abidjan-Lagos Corridor.

• As part of the Presidential Infrastructure Champion Initiative (PICI), Egypt is spearheading the construction of a navigable waterway linking Lake Victoria and the Mediterranean Sea. The pre-feasibility study of the transport corridor has been completed. Efforts are ongoing to initiate the feasibility studies.

• The construction of the LAPSSET Corridor was launched in March 2012 at the site of the Lamu Port in Kenya. The initiative consists of the following components: Lamu Port; LAPSSET Railway; Highway; Oil Pipeline; Oil Refinery; Resort Cities; and Lamu Airport, at an estimated investment cost of US$16,481 million. The detailed engineering designs for three berths and associated infrastructure have been completed for the Lamu Port, and funds are available to start the construction. About 365 km of the LAPSSET Road in Kenya and Ethiopia has been completed, while work is ongoing in several other sections. The construction of the LAPSSET Corridor Railways is on course, and the Government of Kenya signed a Memorandum of Understanding with the China Civil Engineering Construction Corporation in October 2014. The preliminary design and feasibility study has been completed.

• African countries are revamping their railway networks, including those with a regional dimension. Construction is underway on the Djibouti-Ethiopia railway, while Kenya is making progress on the Mombasa-Nairobi railway construction. UEMOA is spearheading the construction of the Dakar-Bamako rail project, which is at the study phase, in the context of PICI.

Energy infrastructure

Energy supply, especially electricity supply, is still weak and patchy in many African countries, including even large economies such as Nigeria and South Africa. In 2012, the total generation capacity of SSA was 90,000 MW, and approximately 50% of this capacity was in South Africa. Energy demand in the region shows remarkable growth and grew by 45% from 2000 to 2012, but is still only 4% of the world's total. However, in recent years there has been positive development in addressing the energy infrastructure deficit. Unlike many other infrastructure types, energy projects have a long gestation period, sometimes spanning over eight years. Currently, under PIDA, which aims to connect more than one country, there are many projects in the implementation stage. In total, there are 15 energy sector projects in the PIDA PAP, at a total cost of US$40.3 billion (excluding the Nigeria-Algeria Gas Pipeline). A selection of projects at an advanced stage of implementation is presented in Table 3.
The highlights of these projects are the following:

• In April 2011, the late Ethiopian Prime Minister, Meles Zenawi, formally launched the largest engineering project ever attempted in Ethiopia: the Grand Ethiopian Renaissance Hydropower Dam Project (GERHDP) on the River Abbai (Blue Nile). The main objective of the project is to generate electric power with an installed capacity of 5,250 MW and an annual energy production of 15,130 GWh/year. What is noteworthy about this project is that most of the estimated US$4.7 billion is sourced internally through Ethiopian bonds and taxpayers.

• The Kenyan and Ugandan governments are jointly developing the Kenya-Uganda oil pipeline project. It will have the significant impact of replacing road tankers as the primary means of transporting oil products from Kenya to Uganda. On completion, this pipeline will deliver white petroleum products from Kenya to Uganda and vice versa. The governments of Kenya and Uganda together hold a 49% (24.5% each) share in the pipeline project, while private investors will have a 51% stake. The private investors are yet to be confirmed for the project.

• The Trans-Sahara Gas Pipeline Project is a "veritable vehicle for strengthening the bilateral economic relations between Nigeria, Algeria and Niger". The project, which is currently under construction, is expected to deliver first gas around 2015.

Information and communication technology (ICT)

• Some SADC countries have performed well in ICT development. Between 2000 and 2008, Mauritius and Seychelles registered an average of 13.6% penetration of fixed telephony lines, while South Africa and Tanzania registered about 19% and 5%, respectively, of mobile subscriptions in Africa. Notable contributions in 2000-2008 by SADC countries in terms of internet users were 1.8% (South Africa), 1.4% (Zimbabwe), 0.7% (Zambia) and 0.5% (Angola), with internet penetration rates of 38% (Seychelles) and 30% (Mauritius). Implementation of sound policy reforms in the region has led this sector towards privatization and progressive liberalization of ICT services. Notable policies include the development of up-to-date harmonized cyber laws in the region, reflecting international best practice.

• The EAC is the leading example in implementing a one-network system on the continent. Uganda, Kenya and Rwanda have adopted the use of a One Network Area, aimed at reducing the cost of calling within the region. A One Network Area is a regional framework comprising countries that have agreed to waive or manage roaming charges and other surcharges for telecommunications traffic. One network allows member countries to exempt regional calls from the surcharges applied by member States on international incoming calls and to remove any additional charges to subscribers on account of roaming within the region. This means that subscribers will not be charged for roaming within the region for receiving calls while travelling within the member states.

• According to the Kenyan Communications Authority report, Kenyans are now calling more within the East African region following the scrapping of taxes imposed on incoming voice calls by the Rwandan and Ugandan governments. Statistics for the last three months of 2014 indicate that the number of outgoing voice minutes rose by 24.4% to 17 billion, up from 14 billion in the previous quarter. The number of calls from the neighbouring countries also went up by 8.2% to 20 billion minutes from 19 billion. Kenya is the only East African state that does not levy any taxes on cross-border calls.
• The East Africa region is set to achieve more in ICT development than any other region in Africa. It started with cheaper voice communication and is now moving to data and mobile money services. The region is determined to allocate more funds to ICT projects, from less than 3% to a minimum of 5% of national budgets.

Peace and security

RECs play a critical role in promoting peace and security within their regions. First, by virtue of their mandate in terms of helping to integrate countries both economically and politically, they provide a framework that should help limit inter-state conflict. By helping to create commonalities of interest among their different member countries and governments across their regions, they help militate against internal conflict, as there are economic interests vested in maintaining a well-functioning internal market. Economic and political integration can therefore only be successful if it is facilitated by peace and security. This notwithstanding, economic integration is also a process that is not totally devoid of friction among the members. Successful integration therefore requires strong institutional mechanisms for containing friction and resolving disputes. And when conflicts erupt among member countries, RECs should be in a position to resolve them. It is in this perspective that a number of RECs have endeavoured to put in place dispute settlement and peacekeeping or conflict resolution mechanisms. Among the key peacekeeping interventions they offer to countries facing conflicts is a standby arrangement that has military, police and civilian components.

For instance, SADC approved the formation of a SADC Standby Brigade (SB) in 2007, to which all member States are expected to contribute for the purpose of intervening in conflict situations in the SADC region. The Heads of State have also recently decided, during their 2015 Summit in Maputo, to deploy a peacekeeping force of about 4,000 troops in the Democratic Republic of Congo (DRC). This decision will strengthen the ongoing peacekeeping by MONUSCO in the DRC. SADC also intervened in Lesotho to help restore peace following actions by the army in that country that appeared like a putsch, although the main protagonists claimed it was nothing more than an anti-terrorism operation.

ECOWAS has a Protocol on Mutual Defense Assistance, which provides for a non-standing military force to intervene in the defense of a member State under external attack, as well as a Protocol for Conflict Prevention, Management, Resolution, Peace-keeping and Security. The Protocol provides for the establishment of various organs and mechanisms for conflict prevention, management and resolution in West Africa.

Notable achievements include the intervention operations of ECOMOG in support of legitimate governments of ECOWAS member states which were victims of armed attack by rebel groups. The region managed to deploy ECOMOG troops to prevent a total breakdown of law and order in the affected ECOWAS member States. Furthermore, the ECOWAS region undertook operations in other countries, such as Sierra Leone and Equatorial Guinea, to restore peace when the parties could not reach a peace agreement towards the settlement of their disputes. Other countries assisted by ECOWAS include Guinea Bissau and Côte d'Ivoire during the times when the two countries had internal conflicts.
Conclusion

Regional integration remains critical in transforming African economies and changing the lives of the many people living in deep poverty. Many African countries, in close collaboration with the Regional Economic Communities and other development partners, remain committed to implementing regional integration. Significant progress can be seen in a number of areas, including: Trade and Market Integration; Free Movement of People and Right of Establishment; Macroeconomic Convergence; Peace and Security; and Transport and Communication.

Progress in trade facilitation, particularly regional payment systems, will promote trade by facilitating payment for goods and services in all regions without difficulty and at a lower cost. African citizens will be able to transact their business in a very smooth and effective manner. From the analysis in this paper, traders would incur less cost as a result of regional payment systems, which in many cases have resulted in traders paying flat rates for their transactions. The establishment of regional payment systems will also be key in resolving the current problem of currency convertibility arising from intra-African trade. Regional Economic Communities which have not yet established regional payment systems are encouraged to do so in order to start reaping the benefits demonstrated in this paper.

Transit guarantee schemes have indeed made a big difference in facilitating the movement of goods among African countries. Countries will save a lot of money by issuing a single bond for all countries through which goods transit. Transit guarantee schemes will also assist in improving the condition of the corridors through corridor management committees, which are established to oversee the activities and movement of goods. Through these committees, member States are bound to implement specific corridor programs which could not easily be achieved when pursued individually.

Borderless and paperless trade has yielded significant results in many African countries. Countries which have implemented One Stop Border Posts have seen a big improvement in the reduction of operating hours and procedures. Regional Economic Communities should work in close collaboration with member States in implementing the decision on the establishment of OSBPs. The use of automated processes and information technology such as ASYCUDA has played a big role in reducing delays in customs procedures. Implementation of ASYCUDA by all countries is a step ahead in having consolidated trade data among the member States of Regional Economic Communities. Regional Economic Communities and member States are called upon to enhance their efforts in implementing measures such as ASYCUDA and OSBPs, among others. Governments should consider regional integration as part of their broader national development strategies. In this regard, regional integration decisions should be given priority during the planning stages of national programs and strategies. Continued political commitment by African leaders is key if the continent is to achieve its regional integration agenda.
In moving forward, African leaders need to pool their efforts and resources in the implementation of agreed decisions pertaining to regional integration. There is a need to learn from best practices. Efforts should be made by all countries to learn more from one another in terms of best practices and experiences in implementing agreed programs in regional integration. As the implementation of regional integration activities and programs requires substantial amounts of money, countries need to invest money and effort in these programs by allocating funds for them in their national budgets.

Monitoring the implementation of these decisions should be taken as one of the key priorities by all the key stakeholders dealing with regional integration issues. The ongoing work by the three pan-African institutions (ECA, AUC, AfDB) on the establishment of the regional integration index should be supported, as this will indeed help in putting checks and balances on the implementation of the regional integration agenda. This work needs to be supported by all the RECs and member States. In addition, other monitoring mechanisms need to be put in place by all parties. A recommendation can be made for the online reporting function, which is a regional instrument for the monitoring, reporting and elimination of NTBs within and across the three Regional Economic Communities (COMESA, EAC and SADC).

Box 2: Trade figures in selected RECs

Trade within the SADC has increased since the implementation of the SADC Protocol on Trade, which has been in effect since 25 January 2000. Exports increased from US$5.8 billion in 2000 to US$11.7 billion in 2010. Intra-SADC trade as a percentage of SADC's total trade has, however, remained stagnant at roughly 15% over the entire period of implementation. Intra-trade in the SADC region grew from US$20 billion to US$72 billion between 2004 and 2014. The existence of the FTA has in part led to a rise in intra-COMESA trade from US$3.1 billion in 2000 to US$19.3 billion in 2012, reflecting a 523% growth rate over the period, or 44% per annum on average. Global trade for COMESA in 2012 grew by 9%, from US$240 billion in 2011 to US$262 billion in 2012. Total exports rose by 12%, from US$96 billion in 2011 to US$108 billion in 2012, while imports also registered 7% growth, from US$144 billion in 2011 to US$155 billion in 2012. Intra-COMESA total trade grew by 5% in 2012 over 2011 levels, from US$18.4 billion in 2011 to US$19.3 billion in 2012. Among the countries contributing to this growth were Libya, Zambia and Rwanda, all with growth in both intra-exports and intra-imports in 2012. Intra-trade in the COMESA region grew from US$8 billion to US$22 billion between 2004 and 2014. The combined intra-African trade of the three Tripartite RECs grew from around US$30 billion in 2004, a more than threefold increase over the 10 years to 2014. However, this growth has taken place on the basis of the individual FTAs of the three RECs. Source: Computed from the Tralac website.
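As a quick check on the COMESA growth figures quoted in Box 2, the snippet below reproduces the arithmetic: the 523% figure is total growth over 2000-2012, and the quoted 44% per annum is the simple (arithmetic) average of that total across 12 years rather than a compound rate; the compound-rate line is included only for comparison and is not a figure from the paper.

start, end, years = 3.1, 19.3, 12                     # intra-COMESA trade, US$ billion, 2000 and 2012
total_growth = (end - start) / start                  # ~5.23, i.e. about 523%
simple_annual = total_growth / years                  # ~0.44, i.e. about 44% per annum (as quoted)
compound_annual = (end / start) ** (1 / years) - 1    # ~0.165, i.e. about 16.5% per annum compounded
print(f"{total_growth:.0%} total, {simple_annual:.0%} simple p.a., {compound_annual:.1%} compound p.a.")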
Further examples of best practices in free movement include the following:

a) Rwanda has waived visas for all African countries, with visas issued upon arrival.
b) Rwanda and Kenya have waived work permits for EAC citizens.
c) Kenya has waived the work permit fees charged on citizens of Uganda, Burundi and Tanzania. Uganda does so on a reciprocal basis.
d) All of Kenya's border posts have harmonized immigration procedures, while eight are operating 24 hours.
e) Rwanda allows citizens from all EAC member countries to live and work in the country.
f) Zambia waives visas and visa fees for all COMESA nationals on official business.
g) Mauritius and Seychelles have waived visas for all COMESA citizens.
h) Holders of diplomatic passports are exempted from visas in the CEN-SAD region.

Table 2: Intra-trade of regional and trade groups by product, annual, 2005-2013 (US dollars, millions, and percentages). Source: Compiled from UNCTAD database.
2019-05-18T13:02:48.863Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "c72aa48e88f6187b118114598b84fcdff1a1db8f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2162-6359.1000314", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ea8863f5d64121e3d66f1902d93e5bb1f142b296", "s2fieldsofstudy": [ "Political Science", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
540642
pes2o/s2orc
v3-fos-license
Patient–doctor continuity and diagnosis of cancer: electronic medical records study in general practice

Background: Continuity of care may affect the diagnostic process in cancer but there is little research.

Aim: To estimate associations between patient–doctor continuity and time to diagnosis and referral of three common cancers.

Design and setting: Retrospective cohort study in general practices in England.

Method: This study used data from the General Practice Research Database for patients aged ≥40 years with a diagnosis of breast, colorectal, or lung cancer. Relevant cancer symptoms or signs were identified up to 12 months before diagnosis. Patient–doctor continuity (fraction-of-care index adjusted for number of consultations) was calculated up to 24 months before diagnosis. Time ratios (TRs) were estimated using accelerated failure time regression models.

Results: Patient–doctor continuity in the 24 months before diagnosis was associated with a slightly later diagnosis of colorectal cancer (time ratio [TR] = 1.01, 95% confidence interval [CI] = 1.01 to 1.02) but not breast (TR = 1.00, 0.99 to 1.01) or lung cancer (TR = 1.00, 0.99 to 1.00). Secondary analyses suggested that for colorectal and lung cancer, continuity of doctor before the index consultation was associated with a later diagnosis but continuity after the index consultation was associated with an earlier diagnosis, with no such effects for breast cancer. For all three cancers, most of the delay to diagnosis occurred after referral.

Conclusion: Any effect for patient–doctor continuity appears to be small. Future studies should compare investigations, referrals, and diagnoses in patients with and without cancer who present with possible cancer symptoms or signs, and focus on 'difficult to diagnose' types of cancer.

The analysis included only cases where symptoms or signs were recorded in consultations with GPs (partner, salaried, registrar, or locum) in relevant types of encounter (mainly surgery, telephone, or home visit). The few male patients in the breast cancer dataset were also excluded. Relevant symptoms and signs (classified as high-risk or low-risk) for each cancer (Table 1) were based on the National Institute for Health and Care Excellence Referral Guidelines for Suspected Cancer. 9
These were updated by reference to recent systematic reviews on colorectal cancer 10 and breast cancer, 11 and a case-control study of lung cancer. 12 Symptoms and signs were identified using Read Codes only, which were independently identified and agreed by the GPs on the team. High-risk took precedence over low-risk symptoms or signs where both were recorded in the index consultation. The presence or absence of risk factors for each type of cancer was also identified: family history (breast cancer); ulcerative colitis (colorectal cancer); and current/ex-smoker or chronic obstructive pulmonary disease (lung cancer). Referrals, appropriate to each cancer, were identified for 'definitive' investigations (for example, colonoscopy for colorectal cancer) or secondary care opinion (for example, respiratory physician for lung cancer). The list of diagnostic codes used has been developed previously as part of DISCOVERY (http://discovery-programme.org/; a 5-year programme of work designed to improve the diagnosis of cancer) and has supported several publications. 13

Patient-doctor continuity

The index consultation (and hence the index doctor) was defined as the first consultation in the 12 months before diagnosis when a relevant cancer symptom or sign was recorded by a GP. Patients needed at least one other contact with the index GP in the 24 months before diagnosis to be included in the study. Patient-doctor continuity was summarised using the fraction-of-care (f) index, which is the proportion of doctor encounters during a continuity-defining period that were made to the current provider (that is, the index GP). Because f is sensitive to utilisation levels (that is, people who visit infrequently), it was adjusted for the number of consultations in all analyses (f'). In the statistical models, f' was multiplied by 10 so that the regression coefficients represent the change in outcome associated with a 10% difference in continuity. In the primary analysis, the effect of patient-doctor continuity was explored over the whole 24 months before diagnosis. In secondary analyses, the intervals before the index consultation and after the index consultation were examined separately.

Outcomes

This study investigated the effect of patient-doctor continuity on time to diagnosis and time to referral, expressed as the number of days from the first recorded sign or symptom of cancer until the date of cancer diagnosis or the date of referral.

How this fits in

Continuity of care is a core value in general practice yet, nowadays, patients are less likely to see the same doctor. It is unknown whether seeing the same doctor leads to a faster or slower diagnosis of cancer among patients who present with symptoms. Overall, this study found that any effect of patient-doctor continuity on time to diagnosis of breast, colorectal, or lung cancer was small. While GPs should be cautious not to dismiss potentially significant symptoms or signs among patients they know well, it may be prudent for doctors to personally follow up patients with 'low-risk but not no-risk' symptoms.

Time to diagnosis (or diagnostic interval) was chosen as the primary end-point because the date of diagnosis is usually easily determined; previous studies have shown an effect of organisational change (that is, the introduction of the '2-week wait' system) on this interval; 20 and it allows the findings to be easily compared with most other studies. 21
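The fraction-of-care index described above is simply the share of a patient's GP encounters in the continuity-defining period that were with the index doctor. The sketch below illustrates the raw calculation; the paper does not spell out here how f is adjusted for the number of consultations to give f', so in this illustration the adjustment is represented only by carrying the consultation count forward alongside f as a model covariate, which is an assumption rather than the authors' exact procedure. The encounter history and GP identifiers are hypothetical.

from collections import Counter

def fraction_of_care(doctor_ids, index_doctor):
    # Proportion of encounters in the continuity-defining period seen by the index doctor.
    counts = Counter(doctor_ids)
    return counts[index_doctor] / len(doctor_ids)

# Hypothetical encounter history in the 24 months before diagnosis
encounters = ["GP_A", "GP_B", "GP_A", "GP_A", "GP_C", "GP_A"]
f = fraction_of_care(encounters, index_doctor="GP_A")
n_consultations = len(encounters)
print(f, n_consultations)   # 0.666..., 6 -> both quantities would feed into the regression model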
Time to referral (to the relevant secondary care specialty or for definitive investigation) was explored as a secondary end-point because events after the referral (which are within the control of secondary care rather than primary care) may cause delays between referral and diagnosis. Where referral was made on the same day as the index consultation (around one-third of patients), 1 day was added so that the statistical model could be fitted.

Analysis

All analyses were carried out using Stata (version 12). First, a simple descriptive analysis was undertaken to examine the characteristics of participating patients, doctors, and their practices; patient-doctor consultation rates; the number and type of symptoms/signs at the index consultation; and patient-doctor continuity before and after the index consultation. Next, regression models (accelerated failure time) were constructed to examine univariable and multivariable associations between patient-doctor continuity and time to diagnosis and time to referral. The accelerated failure time model is a parametric model that provides an alternative to the proportional hazards models commonly used in time-to-event analyses. 22 It allows the derivation of a time ratio, which is more readily interpretable than a ratio of two hazards generated by other survival analysis approaches: a time ratio >1 for a covariate implies that it prolongs the time to the event, while a time ratio <1 indicates that an earlier event is more likely. 23 Plots were constructed to check model assumptions. Log-normal, log-logistic, generalised gamma, and Weibull distributions were used to represent the survival data. The Akaike information criterion, a measure of the goodness of fit of an estimated statistical model, was used to select the best model. Alternative models for time to referral and time to diagnosis, using the different continuity-defining periods, were constructed. The following covariates were included in each model: patient age, sex, multimorbidity, and cancer-specific risk factor(s); index doctor sex and status; index consultation type; and number of symptoms/signs at the index consultation. Interactions between patient-doctor continuity and symptom/sign type (high/low risk) were added to the models, but none with a likelihood ratio test P<0.05 were found. The extent of clustering by practice was estimated and adjusted for as necessary in all models.
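As a rough illustration of the accelerated failure time approach described above, the sketch below fits a log-normal AFT model with the Python lifelines package (an assumption of this illustration; the paper itself used Stata 12) and reads the exponentiated coefficient as a time ratio. The data frame and column names are hypothetical toy values, not study data.

import pandas as pd
from lifelines import LogNormalAFTFitter

# Toy data: days from first symptom to diagnosis, an event indicator,
# and continuity expressed in units of 10%, as in the paper.
df = pd.DataFrame({
    "days_to_diagnosis": [30, 90, 45, 120, 60, 75, 40, 150],
    "diagnosed":         [1, 1, 1, 1, 1, 1, 1, 1],
    "continuity_10pct":  [8.0, 2.5, 6.0, 1.0, 5.0, 3.5, 7.0, 0.5],
})

aft = LogNormalAFTFitter()
aft.fit(df, duration_col="days_to_diagnosis", event_col="diagnosed")

# exp(coef) for continuity_10pct is the time ratio per 10% increase in continuity:
# a value above 1 implies a later diagnosis, below 1 an earlier one.
print(aft.summary[["coef", "exp(coef)"]])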
RESULTS

Table 2 shows the initial and final number of patients (with a relevant cancer, symptoms/signs in the 12 months prior to diagnosis, and qualifying consultations) analysed in each cancer dataset. The characteristics of participants and patients' consultations are given in Table 3. Patients with breast cancer were more likely to present initially with at least one high-risk symptom or sign (n = 2797, 94.2%) than those with subsequent colorectal (n = 2528, 34.2%) or lung (n = 636, 7.8%) cancer diagnoses (Table 3); and 86.6% (n = 2559) of patients with breast cancer were referred on the same day as the index consultation.

Patient-doctor continuity and diagnosis of breast, colorectal, and lung cancer

The crude and adjusted associations with time to diagnosis for patient-doctor continuity, symptoms/signs, and patient, doctor, and consultation characteristics for the three different cancers are shown in Table 4. There was no evidence of any association between patient-doctor continuity and time to diagnosis for breast cancer (adjusted TRbreast = 1.00, 95% CI = 0.99 to 1.01, P = 0.90) or lung cancer. Further analysis examined whether there was a relationship between patient-doctor continuity before or after the index consultation and time to diagnosis (Table 5). There was no evidence of an effect for patient-doctor continuity on time to diagnosis over any continuity-defining period for breast cancer. There was some evidence that increased continuity before the index consultation increased the time to diagnosis for both colorectal cancer (adjusted TRcolorectal = 1.02, 95% CI = 1.01 to 1.02, P<0.01) and lung cancer (TRlung = 1.01, 95% CI = 1.00 to 1.01, P<0.01). Conversely, there was evidence that seeing the same doctor after the index consultation reduced the delay to diagnosis (adjusted TRcolorectal = 0.98, 95% CI = 0.98 to 0.99, P<0.01; TRlung = 0.98, 95% CI = 0.97 to 0.98, P<0.01). Finally, evidence of an effect for patient-doctor continuity before the index consultation on time to referral was found for patients with breast cancer only (adjusted TRbreast = 0.90, 95% CI = 0.85 to 0.95, P<0.01) (Table 5).

DISCUSSION

Summary

Overall, patient-doctor continuity was not associated with clinically important changes in time to diagnosis for patients with breast, colorectal, or lung cancer. In the primary analyses, the association seen with later diagnosis of colorectal cancer equates to a maximum delay of around 7 days; while in the secondary analyses the maximum reductions in time to diagnosis for patients with colorectal or lung cancer who see the same doctor after the index consultation are up to 14 and 18 days, respectively. For all cancers, the most significant factor predicting earlier diagnosis was first presentation with a high-risk symptom or sign; and the greatest delay for diagnosis of all three cancers occurred after the patients had been referred.

Strengths and limitations

This is the first study to explore, using a large, reliable, and validated dataset, the effect of patient-doctor continuity on the diagnostic process of three common cancers (breast, colorectal, and lung). However, the analyses were restricted to between 26.3% (n = 2955, breast) and 47.6% (n = 8143, lung) of the original datasets; most patients were excluded because they had no relevant Read-coded symptoms or signs in the 12 months before diagnosis. It is important to remember that the data for this study come from medical records whose primary purpose is clinical care, rather than research, so it is likely that relevant symptoms and signs in both included and excluded patients were not coded. In addition, the final route by which patients obtained their diagnoses is not known. A significant proportion of patients may have been diagnosed after being admitted through the emergency department, independently of their GP. 24 This study highlights the methodological challenges of operationalising continuity in this type of research. 25
It was decided to quantify continuity in relation to the doctor seen at the index consultation and, while other approaches are possible (for example, defining continuity in terms of the 'usual doctor'), the authors believe this is the most appropriate for the research question posed: 'Does seeing the same GP (around the time of first presentation of possible cancer symptoms or signs) reduce time to diagnosis of three common cancers?'. A modified form of an established continuity index (fraction-of-care) was used, but the findings were the same when the analyses were repeated using another more widely used index (Continuity of Care; available from the authors on request). 26 This study improves our understanding of the role of patient-doctor continuity in patients who present with symptoms and signs and are subsequently diagnosed with cancer, but not those with other outcomes or diagnoses. Also, any association between patient-doctor continuity and earlier diagnosis will be affected by variation in individual doctors' thresholds for investigating symptoms and making referrals. That is, if doctors who provide low continuity also have a high referral rate, their patients will have a short delay to diagnosis of cancer, but at the expense of a high number of referrals that do not lead to a cancer diagnosis.

Comparison with existing literature

Several studies have examined the role of continuity in relation to cancer screening 27-30 but the authors are aware of only two studies concerning diagnosis. 31,32 Both were conducted in the US and neither found that continuity at a primary care level was associated with an earlier stage of cancer at diagnosis. The continuity literature provides reasons to support and explain the observation in this study that seeing a known doctor at first presentation appears to delay diagnosis, yet seeing the same doctor afterwards promotes earlier diagnosis. In the case of the former, familiarity with the patient and their problems may mean that doctors make assumptions and become closed to other diagnoses; 33 the doctor may 'fail to see the wood for the trees' and misattribute symptoms or dismiss them. 5 However, when seeing the same doctor afterwards, the doctor may assume greater responsibility for the patient in ensuring that complaints are followed up and that symptoms are either explained or resolved. 5 It is noteworthy that the mean number of consultations in the 12 months before diagnosis for each cancer is higher than might be expected for populations in these age groups, 34 although there is wide variation, as reflected in the standard deviations. Consultation frequency itself may be a cause for concern in the prediagnosis period. 35

Implications for research and practice

Future studies should examine the value of patient-doctor continuity in relation to the investigations and referrals that doctors make for patients who present with possible cancer symptoms or signs and who do and do not go on to be given a cancer diagnosis. Ideally, future research should be prospective and incorporate other important patient characteristics (disclosure of symptoms and signs) and doctor characteristics (tolerance of uncertainty and personal thresholds for organising investigations and referrals), so that the relationship between continuity and these other factors can be assessed comprehensively. Finally, it would be worth repeating this work in 'hard to diagnose' cancers, in particular those which are associated with a larger number of consultations before referral. 36,37
What should GPs and policy makers do meanwhile? In keeping with much of the continuity literature in relation to patient outcomes, this study does not provide strong evidence that patient-doctor continuity reduces the time to diagnosis of breast, colorectal, or lung cancer. Rather, it suggests that doctors working in primary care should be cautioned against overlooking potentially worrying symptoms or signs among patients who they know well. Previous work has highlighted the potential problems of 'over-familiarity' and the potential benefit of having a 'fresh set of eyes'. 5,38 However, that is not to negate the psychological benefits that some patients may derive from 'following through' a cancer diagnosis with the same GP. Until further work is carried out, it would seem sensible to recommend that practices encourage patients to follow new problems up with the same doctor, especially for patients whose symptoms or signs at the initial consultation may represent an underlying cancer but do not in themselves warrant immediate investigation or referral. Finally, although much attention has been given to reducing delays to referral from general practice for patients with symptoms suggestive of cancer, these data suggest that more attention should be given to the process of care between referral and diagnosis. This is the main source of delay and where there is most scope for reductions in the time to diagnosis.
2016-05-04T20:20:58.661Z
2015-04-27T00:00:00.000
{ "year": 2015, "sha1": "c4eacf3ce9255a1a079c7244c6a4e1f5696129b7", "oa_license": "CCBY", "oa_url": "https://bjgp.org/content/bjgp/65/634/e305.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c4eacf3ce9255a1a079c7244c6a4e1f5696129b7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
46844835
pes2o/s2orc
v3-fos-license
Application of a Thermodynamic Concept for the Analysis of Structural Degradation of Soap Thickened Lubricating Greases Lubricating greases are special lubricants with a wide range of applications. The tribologically stressed grease is treated as a tribological system and is finally modeled as an open thermodynamic system. This study investigated the phenomenon of self-optimization and applied it to the process of shearing a grease. The conditions for self-optimization and the consequences of the resulting dissipative structures are investigated on the basis of the interpreted literature.

Introduction Friction and wear are irreversible processes characterized by energy dissipation with a continuous production of entropy. In general, we observe a cause-effect chain that we can relate to the friction and wear phenomena. In this paper, friction is considered as an energy expenditure and supply into a tribo-system. Wear is characterized as the dissipation of this energy (loss of material is the most important type of wear). Similar to the solid rubbing bodies of a tribo-system, a lubricating grease undergoes irreversible changes caused by friction. These changes can be detected as a degradation of the thickener network.

Tribo-systems are energy-driven systems. In a number of publications the system reaction is described from an energy point of view. Abdel-Aal [1] creates an open thermodynamic system to observe the energetic situation of a solid-solid contact during a tribo-process. He described an entropy balance and correlated the wear (as loss of material) with the entropy production. Very early on, Fleischer [2] noticed that wear particles leave the tribo-system by taking an amount of energy with them. From another point of view, Federov [3] developed an energy balance of the friction process and defined an adaptive and a dissipative space of the tribo-process. In these references the system reaction is described from an energy point of view, showing that stable conditions and a minimum of energy expenditure are reached. Observing the stressed lubricating grease as a subsystem, the same fundamental behaviour can be detected.

Interesting investigations of the structural change inside a grease come from [4][5][6]. Paszkowski and Olsztynska-Janus [4] investigated the degradation process of Li-soap lubricating greases. They reported that the change of structural viscosity during a shear process is caused by destruction of hydrogen bonds (OH-groups) between Li-soap fibres. These authors stated that the degree of disintegration of the grease's micro-structure depends mainly on the shear rate. Zhou et al. [5] carried out experimental work using a rheometer. They show a continuous change of the thickener structure with a continuous entropy production and distinguish between a non-stationary and a stationary period. Rezasoltani and Khonsari [6] use the net penetration to describe the degree of structural degradation. They observe that the shear process breaks down the structure of the grease and results in heat and entropy generation. The authors define a degradation coefficient by correlating the rate of entropy generation with mechanical degradation.

A number of SEM and AFM investigations [7,8] present important information about the degradation process. Delgado [7] shows the sensitivity of a grease structure with SEM pictures. He observes the change of the structure influenced by composition and different conditions. Roman et al. [8] investigate the degradation of the thickener structure with AFM and show, for selected samples, the influence of shear rate and shear time.
The drop of shear stress in a rotational rheometer test is a good example of the indirect measurement of structural changes during a friction process (see Figure 1).

Investigation for the stationary state We follow the idea that, under certain conditions, the system dissipates energy at a particular intensity; in the stationary state the entropy production rate is constant, Ṡ = const. Many of the well-known phenomena in tribology, like the so-called running-in period of surface roughness, the running-in process for abrasion wear, or the formation of a tribofilm, can be characterized as self-optimizing processes. That means that the system follows the natural effort to come into a stable situation after working in instability, with the help of self-optimized parameters (roughness height, roughness distribution, tribo-film thickness or spatial and temporal patterns). It is assumed that these phenomena also occur in the stressed lubricating grease and lead to an optimized dissipative grease structure. One aim of the current paper is an analytical investigation of this assumption.

Process Stability Self-optimization, as a reaction of the system to an arising instability, can be investigated by observing the transition from stability to instability [9,10]. The criterion, Equation (3), is the non-negativity of the second variation of the entropy production rate [11]. If Equation (3) is violated, destabilization occurs and facilitates the process of self-optimization. Analogous to the process of solid friction [9], self-optimization during the fluid friction inside a grease film is investigated. Nosonovsky [9] proposed the entropy production rate Ṡ = F F_N V / T, with F the friction coefficient, F_N the normal force, V the velocity and T the temperature. For the fluid friction it is proposed as Ṡ = τ γ̇ V_0 / T, with τ the shear stress, γ̇ the shear rate and V_0 the stressed volume. Consider first the situation in which the entropy production depends on the shear rate γ̇; other influences like shear stress, temperature etc. are not involved in this step of the investigation. It then follows that, if the slope of a friction curve (∂τ/∂γ̇) becomes negative, a process destabilization occurs.

An important parameter of lubricating greases is the content of solid material, called thickener. For a metal soap grease this parameter is the soap content (%). That thickener content, described as ψ, is a micro-geometric parameter and stays constant during the friction process. If the slope of a τ-γ̇(ψ) curve is 0, the transition from the stationary to the non-stationary state occurs.
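To make explicit why the sign of ∂τ/∂γ̇ governs stability, the following sketch uses the excess-entropy-production form of the criterion with the flux-force pair J = τ and X = γ̇ V_0 / T. This pairing is an assumption consistent with the fluid-friction entropy production rate given above, not a formula quoted from the original paper.

```latex
% Excess entropy production for a perturbation \delta\dot{\gamma} of the shear rate,
% assuming the flux J = \tau and the force X = \dot{\gamma} V_0 / T:
\frac{1}{2}\,\frac{\partial}{\partial t}\,\delta^{2} S
   \;=\; \delta J \, \delta X
   \;=\; \frac{V_0}{T}\,\frac{\partial \tau}{\partial \dot{\gamma}}\,
         \left(\delta\dot{\gamma}\right)^{2}
   \;\geq\; 0 .
% Since V_0/T > 0 and (\delta\dot{\gamma})^2 > 0, the criterion is violated exactly
% when \partial\tau/\partial\dot{\gamma} < 0, i.e. on a descending branch of the
% flow curve, which is where destabilization and self-optimization can set in.
```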
It is known that there exists a correlation between thickener content and the formation of the thickener structure (see Figure 2). Formation, distribution and geometry of the thickener characterize the grease structure. The parameter ξ describes a temporary state of the grease structure and is not constant during the time-dependent friction process. The stability criterion is now formulated with respect to ξ. Suppose that a certain value ψ_crit exists, with a primary structure ξ_prim, for which the slope of the friction curve ∂τ/∂ξ = 0. It characterizes the transition from stability to instability. In other words, in this case the friction of the initial structure does not depend on the structure evolution. The friction maximum at the beginning of the process depends on ψ. If there is a value ψ < ψ_crit with a lower friction compared with ψ_crit, a destabilization occurs and an optimization process will be activated. The result is a new stationary state with a new stationary structure ξ_stat (Figure 3).
To get information about the friction behaviour inside a grease film, rheometer tests were carried out. A rheometer MCR 302 by Anton Paar Germany GmbH, Ostfildern, Germany, with a plate-plate system was used. The test temperature was T = 25 °C. To compare the friction behaviour, oscillating tests (amplitude sweep) with a frequency of 10 Hz were made and the expended mechanical energy was quantified at a deformation of γ = 0.03%. These conditions were realized in all tests and for all samples. The tests were repeated 3 times. The sample used was a grease with a naphthenic base oil (115 mm²/s), a Li-soap and a percentage of 2.5% polyethylene. The tests were made with samples with 5 different Li-soap contents. The value of the energy density on the ordinate represents the friction behaviour. In the light of the described criterion for stabilization, a process of self-optimization occurs in the range of 10% thickener content. Of course, further investigations are necessary to prove this assumption.

Period of Self-Optimization In this section we assume a process of self-optimization with an optimized grease structure. The consequence of that process is a decreasing friction and wear process. It delivers, in our case, a decreasing structural degradation during an increasing tribological stress. An experimental example is shown in Figure 5. The experimental procedure used to obtain the results presented in this figure is described in [12]. The period of decreasing structural degradation (wear of a lubricant) with an increasing shear rate is investigated below. This analysis follows the work of [13].
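For the oscillatory (amplitude sweep) tests described above, the mechanical energy dissipated per unit volume and per cycle can be estimated from the standard rheological relation E = π γ₀ τ₀ sin δ. The short sketch below only illustrates this generic relation with made-up numbers; the exact procedure and quantity used in [12] and plotted in Figures 3 and 4 are not specified here.

```python
import numpy as np

def dissipated_energy_density(strain_amplitude, stress_amplitude, phase_angle_rad):
    """Energy dissipated per unit volume and per oscillation cycle in an
    oscillatory shear test: E = pi * gamma_0 * tau_0 * sin(delta), in J/m^3
    when the stress amplitude is given in Pa."""
    return np.pi * strain_amplitude * stress_amplitude * np.sin(phase_angle_rad)

# Illustrative values only: gamma_0 = 0.03 % = 3e-4, tau_0 = 50 Pa, delta = 20 degrees.
E = dissipated_energy_density(3e-4, 50.0, np.deg2rad(20.0))
print(f"dissipated energy density per cycle: {E:.4f} J/m^3")
```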
The system entropy is described as dS = dS_i + dS_e, with S_i being the entropy production and S_e the entropy flow. The tribological process leads to a gradient of density inside the grease film [14]. The assumption follows that a diffusion process happens between the unchanged and the changed grease structure. An example of the evolution of the density distribution is presented in Figure 6 (from IR-microscopy). Observing the entropy production of the described tribo-system (stressed grease film), two dissipative processes were considered. These processes are friction and diffusion. In the thermodynamics of irreversible processes, linear relations between thermodynamic forces X and fluxes J are assumed [15]. For the investigated system, the entropy production rate for the stationary state follows from the thermal conductivity process (according to [13]) and the diffusion process, with T the temperature, λ the thermal conductivity coefficient, φ the chemical potential and γ_D the transport coefficient, and an expression is proposed for J_1. The assumption was made (in application of [13]) that the degradation of the grease structure is proportional to the transport coefficient γ_D. To analyze the stationary state, we assume that γ_D is influenced by (τ·γ̇). The entropy production rate is then written with γ_D0 as the integration constant. For stationary conditions it can be seen from Equation (15) that with an increase in (τ γ̇) a decrease of γ_D can be expected. Analogous to [13], it is assumed that the transport coefficient γ_D is proportional to wear or, in other words, to the process of structural degradation. We find the phenomenon that, with an increasing tribological stress (τ γ̇) in a stationary period, the structural degradation decreases (as presented in Figure 5). In a second step, the dependence of γ_D on T is investigated. As seen in Equation (20), with increasing temperature the parameter comes closer to a limiting value. For the investigated topic, this limiting value could be represented by the base oil properties. The degradation curve will never fall below the base oil curve. The behaviour during the optimized region seems similar to the well-known process behaviour.

Conclusions Creating a sub-tribo-system by investigating the stressed lubricating grease delivers the possibility to observe some special phenomena. The assumption is made that the formation of dissipative structures is possible for stressed greases too. Checking the stability criterion delivers information about the process conditions for the optimization. Consideration of micro-geometrical parameters opens new possibilities to describe the correlation between the evolution of the structure and the friction behaviour during the optimization period. Under the assumption of an optimization process, the phenomenon of decreasing wear with increasing stress is investigated. An observation of an experimental result is described by formulating the entropy production rate.

Figure 2. Primary grease structure for different soap content. (a) Low soap content; (b) middle soap content; (c) high soap content. Li-model grease with the same base oil, 5000× magnification [7].
Figure 3. Shear stress as a function of ξ-parameter for different ψ-values.
Figure 4. Friction behaviour of a grease sample for different thickener content (axes: energy density [J/m3] versus soap content [%]).
Figure 5. Evolution of the structural degradation with an increasing tribological stress (rheometer tests).
Figure 6. Example of the density distribution of differently stressed greases to illustrate the effect of friction.
2018-02-12T09:00:13.439Z
2018-01-11T00:00:00.000
{ "year": 2018, "sha1": "a1a24010c44ebdac4adec9f9d761359dedf26163", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4442/6/1/7/pdf?version=1515695381", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a1a24010c44ebdac4adec9f9d761359dedf26163", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
258079231
pes2o/s2orc
v3-fos-license
Fundamental and second-order dark soliton solutions of 2- and 3-component Manakov equations in the defocusing regime We present exact multi-parameter families of soliton solutions for two- and three-component Manakov equations in the \emph{defocusing} regime. Existence diagrams for such solutions in the space of parameters are presented. Fundamental soliton solutions exist only in finite areas on the plane of parameters. Within these areas, the solutions demonstrate rich spatio-temporal dynamics. The complexity increases in the case of 3-component solutions. The fundamental solutions are dark solitons with complex oscillating patterns in the individual wave components. At the boundaries of existence, the solutions are transformed into plain (non-oscillating) vector dark solitons. The superposition of two dark solitons in the solution adds more frequencies in the patterns of oscillating dynamics. These solutions admit degeneracy when the eigenvalues of the fundamental solitons in the superposition coincide.

I. INTRODUCTION The variety of oscillating localised structures associated with the scalar nonlinear Schrödinger equation (NLSE) is enormous [1][2][3][4]. Oscillating nonlinear solutions are commonly dubbed 'breathers' [5,6]. They are multi-parameter families of solutions that are periodic either in space or in time, with the periods being the free parameters of the families. A more general family, the lowest-order double-periodic solution, is periodic both in space and in time [7]. It contains particular subsets such as Akhmediev breathers [8] and Kuznetsov-Ma solitons [9]. Each of them is still a family of solutions with a free parameter. Their limiting cases, when each period is infinite, lead to a special solution known as the Peregrine rogue wave [10]. The vector (two-component) generalisation of the NLSE describes more complex systems such as the nonlinear interaction of two wave components in optical fibres [22], two-atom Bose-Einstein condensates (BECs) [23,24], and the two-way wave propagation in the ocean (crossing seas) [25]. The integrable version of this system is known as the set of Manakov equations [26]. As mentioned, oscillating structures do exist in the defocusing NLSE case as well [21]. Their investigation can be naturally extended to the case of Manakov equations [27][28][29][30][31][32][33][34]. For example, vector defocusing rogue waves have been predicted in [29] and observed experimentally in fiber optics [33,34]. Vector Akhmediev breathers also exist in the defocusing regime [31,32] and they can exhibit unique 'hidden' dynamics in the nonlinear stage [31]. We can expect a variety of other interesting phenomena when dealing with the whole family of exact solutions of Manakov equations in the defocusing regime. Even fundamental (lowest-order) solutions of the Manakov model are not as simple as we would initially expect. Clearly, superposition of these solutions produces highly nontrivial structures, especially when the number of components in the model exceeds two. Among these phenomena are multisoliton complexes [35][36][37], 'beating solitons' [38][39][40], non-degenerate solitons [41][42][43][44], etc. Another physical phenomenon is the multi-valley dark structure that exists in the defocusing regime when the number of components N ≥ 3 [44]. Solitons on a background are one type of structure that exists in these systems [45,46]. In the defocusing media, solitons on a background are dark solitons. The study of these objects for Manakov equations is still incomplete.
In this paper, we fill this gap in knowledge. In particular, we have found several new types of dark solitons in the defocusing regime of the Manakov system and revealed their properties. The paper is organised as follows. Exact fundamental (lowest-order) soliton solutions in the defocusing regime of the Manakov system and their symmetries are presented in Section II. Existence diagrams and characteristics of these solutions in the two- and three-component cases of the Manakov system are given in Sections III and IV, respectively. A special case when all background amplitudes are equal, a_j = a, is considered in Section VII. Two other special cases, when one (a_1 = a_2 = a, a_3 = 0) or two (a_1 = a, a_2 = a_3 = 0) background amplitudes are zero, are considered in Sections VIII and IX, respectively. Finally, Section X contains our conclusions.

II. FUNDAMENTAL SOLITON SOLUTIONS AND THEIR SYMMETRIES We consider here the set of Manakov equations generally consisting of N wave components. In dimensionless form, they are given by i ∂ψ^(j)/∂t + (1/2) ∂²ψ^(j)/∂x² + σ (Σ_{k=1}^{N} |ψ^(k)|²) ψ^(j) = 0, (1), where ψ^(j)(t, x) are the nonlinearly coupled wave components of the vector wave field. The physical meaning of the independent variables x and t depends on the particular physical problem of interest. We have normalized Eqs. (1) in a way such that σ = ±1. Note that in the case σ = 1, Eqs. (1) refer to either the focusing (or anomalous dispersion) regime in optics or the attractive interaction between the atomic components of a BEC; in the case σ = −1, Eqs. (1) refer to either the defocusing (or normal dispersion) regime in optics or the repulsive interaction between the atomic components of a BEC. In our previous work [47], we have demonstrated the dynamics of vector solitons in the focusing regime σ = 1 for the basic Manakov system, when N = 2. In contrast, we present here an exact multi-parameter family of fundamental soliton solutions in the defocusing regime σ = −1 of the N-component Manakov equations when N = 2 and N = 3. We reveal the existence conditions and the exact dynamics of solitons separately for N = 2 and N = 3. This is different from the fundamental dark-dark and bright-dark soliton solutions of the defocusing Manakov equations reported recently [48,49]. The applicability of Eqs. (1) with N = 2 in physics has been verified experimentally in optics [50][51][52][53] and for the description of multicomponent BECs [54,55]. This task becomes significantly more difficult when the number of components in Eqs. (1) increases. Nevertheless, recent experiments [56] confirmed the physical relevance of Eqs. (1) with N = 3 by observing bright-dark-bright solitons in BECs with repulsive forces between the atomic components. Our present theoretical results may provide a basis for observing more complex wave patterns in such experiments.

A. Fundamental soliton solutions in general form A fundamental (first-order) vector soliton solution of Eqs. (1) can be obtained using a Darboux transformation scheme [57] with the seed in the form of a plane wave. In compact form, it is given by Eq. (2), where ψ_0^(j)(t, x) is the seed plane wave solution (3), with the real parameters a_j and β_j being the amplitudes and wave numbers, respectively, and where χ̃ = χ + iα, with α (≠ 0) being a real parameter. One can readily confirm that |ρ^(j)| = 1. Moreover, further relations hold, where the subscripts r and i denote the real and imaginary parts of the complex parameter χ, respectively. The latter denotes the eigenvalue of the Manakov system (1), which obeys the relation given in Eq. (8). In principle, the N-component model can admit 2N roots for χ.
The one-to-one correspondence between the eigenvalue and the spectral parameter of the associated Lax pair is given by: The remaining notations in Eq. (5) are: Clearly, the solution (2) depends on the background wave parameters a j , β j , and the real parameter α ( = 0). For any N -component Manakov system, the solution (2) describes fundamental dark vector soliton with the plane wave background (3) around it. The solution (2) is the direct analog of the dark soliton of the single component NLSE in the defocusing (σ = −1) regime. At any t, the deviation of the soliton profile from the background is localised in x with the width ∼ 1/α. These solitons can move with the group velocity V g = −χ r . The new notable feature of the dark soliton of the Manakov system is that its components may exchange energy and therefore may oscillate in t. Period of these oscillations as we can see from (5) is 2π/Ω. Additional oscillations may appear when two dark solitons are superposed at the same location. The frequency of these oscillations will be equal to the beating frequency of two dark solitons. Such superpositions will be considered below. The choice of parameters a j , β j strongly influences the dynamics of solitons. As there are several of them, variety of possible dynamics is very large. First, let us consider the case of identical background amplitudes a j = a. Such condition (a j = a) has been used in experimental observations of optical rogue waves in the two-component Manakov system [33,34]. As particular cases, we consider the characteristics of dark solitons when one or two of the background amplitudes vanish. As for the wave numbers β j , we set them as follows: B. Symmetries of the solutions Before entering the details, let us consider the two main symmetries of the fundamental soliton solution (2). Taking them into account will simplify the analysis. The first one is the symmetry of the solution (2) relative to the sign change of β and simultaneous change of the wave component. For the case of identical background amplitudes a j = a, we have The second symmetry involving the eigenvalue χ, is not that simple. Namely, if where r j = 2 arg(ρ j ) denotes a constant phase, and x = x + ∆x, t = t + ∆t, with ∆x and ∆t fixed constant shifts along the x and t axes, respectively. They are given by Thus, the symmetry (14) defines the periods in the oscillating patterns of dark soliton. The symmetries (12)- (14) provide more insight in revealing the richness of soliton properties as it is demonstrated below. Let us start with the analysis of dark solitons in the two-component Manakov system. III. DARK SOLITONS IN THE TWO-COMPONENT MANAKOV SYSTEM In the defocusing regime of the two-component Manakov system (N = 2), Eq. (8) admits four roots for the eigenvalue χ. For the case of identical background amplitudes a j = a, and for β 1 = −β 2 = β, the explicit expressions for them are given by where κ = β 2 + a 2 − α 2 /4, and η = a 4 + 4a 2 β 2 − α 2 β 2 . It follows, from (16), that Then, from (14), it also follows that the wave components {ψ 1 (χ 4 )} have the same amplitude profiles. The only difference between them is the shifts in x and t equal to ∆x, ∆t. A direct analysis shows that χ 3i = χ 4i ≡ −α/2, implying that Ω ≡ 0. This indicates that the period of ψ (j) 1 (χ 3 ) in t (i.e., 2π/Ω) becomes infinite (no oscillations). Moreover, the solutions ψ to the background level everywhere on the (x,t)-plane. Thus, these two eigenvalues describe trivial background wave solutions. 
They can be ignored in further analysis. For illustration, Figures 1(a) and 1(b) show the individual and total component profiles of the fundamental dark soliton on the (x,t)-plane that corresponds to the eigenvalue χ 1 . Two different relative wavenumbers β = 0.3 and β = 1.0 are used. The individual components are periodic in t due to the energy exchange between them. Figure 1 (a) shows a 'four-petal' pattern in each period of oscillations with two areas of depressed and two areas of elevated amplitudes diagonally located relative to the centre. The central point in this pattern is a saddle. Fig. 1(b) displays a similar pattern but with the amplitude at the central point being transformed from a saddle to a minimum. The two areas with depressed amplitudes are now combined into a single one. With further increase of β, the oscillations disappear and each component is gradually transformed into a plain (non-periodic) dark soliton. The total amplitudes of the dark soliton |ψ| = |ψ (1) 1 | 2 shown in the r.h.s. columns of Fig. 1 are also oscillating. The minima are located at the centres of each four-petal patterns in (a) or coincide with the minima of the two components in (b). Thus, the solution (2) generally describes oscillating dark solitons. Clearly, the choice of the parameters α, β is not arbitrary. We need to analyse Hessian matrix for (2) in order to find the regions of existence of these solutions. Using the technique presented in [47], we constructed the existence diagrams for the solutions for each of the eigenvalues. Dark soliton solutions do not exist in grey areas. The existence diagrams for χ 1 and χ 2 are identical. In the limiting case of α = 0, dark solitons are transformed into vector rogue waves [29]. As mentioned, the eigenvalues χ 3 and χ 4 describe only trivial solutions. Thus, the diagrams corresponding to these eigenvalues are fully grey in Fig. 2. The analytical expression for the boundary of the dark soliton existence in Fig. 2 (the red solid curves) can be extracted from the conditions Namely, from Eqs. (16) we obtain where β c denotes the critical wavenumber. Dark solitons do exist in the region confined by the condition β 2 < β 2 c . When this is the case, the two eigenvalues χ 1 and χ 2 are purely imaginary (χ r = 0). This implies that these dark solitons have zero velocity (v g = 0). Two examples are shown in Fig. 1. Consequently, the total amplitude |ψ (1) DS | 2 of the dark soliton also has a dark-soliton shape. When β = 0, dark solitons are located on the black solid line in Figs. 2(a) and 2(b). They are confined by the condition α 2 < 2a 2 (α = 0). Analytical expression for these dark solitons can be found from Eq. (2): where . andψ The solution (23) is the same as (22) but reversed in space. The two components of the dark soliton are oscillating in t with the frequency α 2 /2. This follows from Eq. (25). The soliton profiles for this case are shown in Fig. 3(b). The two components are oscillating in the opposite phases. The elevations (depressions) in ψ (1) V BS correspond to the depressions (elevations) in ψ (2) V BS . This allows the total amplitude profile |ψ 1 | 2 of the dark soliton to be constant in t. The explicit expression for it is given by: This is the same profile as for the dark soliton (20). Indeed, Eqs. (21) and (27)] are the same. This can also be seen from the comparison of Figs. 3(a) and 3(b). When β 1 = β 2 = 0, the dark soliton acquires nonzero velocity. The solution can be derived either directly from Eq. 
(2) or obtained from the expressions (22-23) using Galilean transformation. For the absolute values of the components, we have The amplitude profiles of this solution is shown in Fig. 3(c). This dark soliton propagates with the velocity β 1 . Two of its components remain oscillating. The shape of the total amplitude remains fixed in t. It coincides with the shape of dark solitons in Figs 3(a) and 3(b). IV. DARK SOLITONS IN THE THREE -COMPONENT MANAKOV SYSTEM Now, we explore the properties of three-component (N = 3) vector dark solitons in the defocusing regime. There are six eigenvalues χ j in this case. The explicit expressions for them when a j = a, β 1 = −β 3 = β, and β 2 = 0 are given by Here Similar to the case N = 2 considered above, here, not all eigenvalues describe a soliton. We have found that χ 5 and χ 6 correspond to the trivial background solutions, while four other eigenvalues do correspond to dark solitons. They obey the relations: The corresponding amplitude profiles satisfy the symmetry (14): This means that ψ This means that for given values of a, β and α, we have two different dark solitons with opposite group velocities (χ 1r = −χ 3r ). The amplitude profiles of these two solitons, |ψ Figs. 4(a) and 4(b) respectively. Oscillations are now due to the energy exchange between the three wave components. The first two components in Fig. 4(a) show the four-petal patterns in each period with a saddle point at the centre. The third component has a minimum at the centre. The total amplitude (r.h.s. panel) is still an oscillating dark soliton. The two solitons shown in Figs. 4(a) and 4(b) have the same oscillating period. The patterns in the second case are reversed as well as the direction of propagation. Nonlinear superposition of these two dark vector solitons is a second-order 'non-degenerate' dark soliton (see Section VII). Figure 5 shows the existence diagrams of dark vector soliton components on the (α, β)-plane for three eigenvalues χ 1 , χ 3 , and χ 5 . Dark solitons do exist only for the case of the eigenvalues χ 1 and χ 3 . In these two cases, solitons are confined to the eye-shape areas bounded by the red solid curves. Due to the symmetry (12), the existence regions of ψ Dark solitons do not exist in the grey areas. The regions of dark soliton existence for case N = 3 are limited by the red solid curves obtained from the condition: At this boundary, the vector dark soitons have the form: The difference from the dark solitons in the case N = 2, Eq. (20), is that the group velocity −χ r is not zero. The amplitude profiles for these solitons is shown in Fig. 6(a). Inside the red solid lines, the dark soliton components are oscillating. Taking β = 0 (i.e., β j = 0, j = 1, 2, 3), we obtain the soliton solution from (2). The explicit expressions can be represented in the following forms: and 1 | 2 of the three-component dark solitons (2) corresponding to the eigenvalues χ1, and χ3, given by Eqs. (29). Parameters a = 1, β = 0.3, and α = 0.5. Here, we separated the solutions into a 'bright', ψ (j) BS , and 'dark', ψ DS , parts: where Oscillations in Eqs. (37,38) are caused by the 'bright' parts. The corresponding wave profiles in each component together with the total soliton amplitude are shown in Figs. 6(b) and 6(c), respectively. These can be considered as the special cases of the solutions shown in Figs. 4(a) and 4(b) but with β = 0. Also, from Eqs. 
(37,38), we find that This means that the components of |ψ On the other hand, in each case, all three components are different. Like in the case N = 2, the total amplitude always has the shape of a dark soliton that does not change in t. When β j = 0, solitons (37,38) have zero velocity. Using a Galilean transformation, we obtain the moving dark soliton solution for the case β 1 = β 2 = β 3 = 0. It is given by In contrast to the case N = 2, the three-component Manakov equations have additional degree of freedom influencing the dynamics of components. For the same case β 1 = β 2 = β 3 = 0, performing the Darboux transformation with a Lax spectral parameter where χ 1 = −β 1 − iα, we can obtain another family of dark soliton solutions given by: where and d = 1 2 ln 3 2α 2 − 1 2a 2 . In contrast to the dark soliton solutions (37,38), the oscillations in the solution (46) occur only in ψ (1) and ψ (2) components. The ψ (3) component is a plain dark soliton. This solution is shown in Fig. 6(d). The soliton propagates with the group velocity v g = β 1 . Only the components ψ (1) V BS and ψ (2) V BS periodically exchange energy. The total amplitude profile is the same as in Fig. 6(c). The solutions (46) satisfy a simple transformation. Namely, the solution obtained by swapping the components ψ is still the solution of Eqs. (1). The nonlinear superposition of (46) and (49) with different α produces the second-order dark soliton. It is presented below. Each of the fundamental dark solitons can be part of the nonlinear superposition of more complex structures. As in the previous works related to the scalar NLSE case [58][59][60], the nonlinear superposition of fundamental dark solitons in the Manakov system can be constructed using next steps in the Darboux transformation (see Appendix A 1). For the two-component Manakov system, the fundamental solution on a constant background can be obtained by using the vector eigenfunctions of the transformed Lax pair with the coefficients {1, 1, 0} (see Appendix A 1). A particular case is a dark soliton solution (2) for N = 2. First, we consider the case with equal background amplitudes a j = a. Two types of second-order dark solitons are obtained below: i) when the wavenumbers are unequal β 1 = −β 2 = β = 0; ii) when the wavenumbers are equal β 1 = β 2 . In each case, the solitons have the same (zero) velocity. Then, the second-order solution is a bound state of two dark solitons. A. Second-order dark soliton with β1 = −β2 = β = 0 The two components of the fundamental dark soliton (2) for the cases α = 0.5 and α = 0.4 are shown in Figs. 7(a) and 7(b) respectively. These components are periodic with 'four-petal' type patterns in each period of oscillations. The average velocity of the dark soliton is zero. The nonlinear superposition of these two fundamental solitons is shown in Fig. 7(c). The result of the superposition is the soliton structure oscillating with two periods. From the fundamental solution (2), the beating period of the bound state is given by where α 1 = 0.5 and α 2 = 0.4. The average velocity of this combined structure is also zero. When β 1 = β 2 , the exact solution is given by Eqs. (22), (23). The two components and the total amplitude for two different values of α are illustrated in Figs. 8(a) and (b) respectively. The nonlinear superposition of these two dark solitons again produces 'double-beating' soliton pattern with two frequencies of oscillation. It is shown in Fig. 8(c). However, the total amplitude profile (r.h.s. 
panel) shows only a single beating frequency. Let us now consider higher-order dark solitons for N = 2 when one of the background amplitudes vanishes, e.g., a 1 = 0, a 2 = 0. For the defocusing Manakov systems, all background components cannot be simultaneously equal to zero. We can use two approaches to investigate the properties of these solutions. The first one is to consider the limit a 2 → 0 in the solution presented above. The second one is to construct directly the new exact solution with a 2 = 0. Here, we use the second technique and present the new exact solution although both of them lead to the same result. We first consider the valid eigenvalues of the soliton from the general relation (8). The associated Lax spectral parameters follow from Eq. (9). Finally, the corresponding soliton solutions can be constructed by performing the Darboux transformation with these spectral parameters. The spectral parameter for the case a 1 = 0, a 2 = 0 follows from (9), It is given by: where χ = β 1 − iα is the only valid eigenvalue obtained from (8). Using the Darboux transformation with the spectral parameter (51), we obtain the higher-order dark soliton. The explicit form of this solution is given by: where d = 1 2 ln In contrast to the case N = 2, the three-component Manakov system admits more eigenvalues. This has been shown in Section IV. Then the number of possibilities in constructing higher-order dark solitons increases. On the other hand, there are two combinations of the vector eigenfunctions of the transformed Lax pair to generate different fundamental solutions for N = 3. Namely, using the combination with the coefficients of vector eigenfunctions {1, 1, 0, 0}, we obtain the fundamental dark soliton which coincides with (2). The alternative combination with the coefficients {1, 0, 1, 0} yields the fundamental solution describing the dynamics of general breathers [see Appendix B 1 a]. Nonlinear superposition of these two fundamental solutions can produce new wave formation. A. Second-order solutions with β1 = −β3 = β = 0, β2 = 0 We first consider the second-order solutions formed by the nonlinear superposition of two fundamental dark solitons corresponding to two different eigenvalues χ 1 , and χ 3 . These superpositions also depend on the set of initial parameters a, β, and α. The details of derivation of exact solutions are given in Appendix (B 1 a). Figure 11(a) shows the nonlinear superposition of two fundamental dark solitons shown in Fig. 4. As the two original dark solitons have velocities of opposite sign (χ 1r = −χ 3r ), the two dark solitons cross each other at t = 0. This superposition exhibits a typical X shape. The first, ψ (1) , and the third, ψ (3) , wave components are mirror images of each other. The second wave component, ψ (2) , is symmetric relative to the t and x axes. Moreover, the superposition of two dark solitons shows a typical elastic collision. This can be proved strictly by the asymptotic analysis shown in Appendix C. In order to confirm the accuracy of exact solutions, we used direct numerical simulations of Manakov equations. Figure 11(b) shows the results of numerical simulations with the initial conditions extracted from the exact solution at t = 0, namely, ψ (j) (x, t = 0). Comparison of the upper half of the solution in Fig. 11(a) with the results of numerical simulations in Fig. 11(b) shows that the exact solutions are indeed correct. 
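The paper does not state which numerical scheme was used for the direct simulations mentioned above; a common and simple choice for Eqs. (1) is a split-step (Strang) Fourier method. The sketch below is such a generic integrator, written for the defocusing case σ = −1 with periodic boundary conditions. The grid, time step and the crude tanh-shaped initial condition are illustrative assumptions, not the authors' actual setup, which starts from the exact solution ψ^(j)(x, t = 0).

```python
import numpy as np

def manakov_split_step(psi, dx, dt, nsteps, sigma=-1.0):
    """Evolve i psi_t^(j) + 0.5 psi_xx^(j) + sigma*(sum_k |psi^(k)|^2) psi^(j) = 0
    with a Strang split-step Fourier scheme. psi has shape (N, M): N components
    sampled on M periodic grid points."""
    k = 2.0 * np.pi * np.fft.fftfreq(psi.shape[1], d=dx)
    half_linear = np.exp(-0.5j * k**2 * (dt / 2.0))   # half step of the dispersive part
    psi = psi.astype(complex)
    for _ in range(nsteps):
        psi = np.fft.ifft(half_linear * np.fft.fft(psi, axis=1), axis=1)
        # nonlinear step: the total intensity is constant during this sub-step,
        # so the phase rotation below is exact
        total_intensity = np.sum(np.abs(psi)**2, axis=0)
        psi *= np.exp(1j * sigma * total_intensity * dt)
        psi = np.fft.ifft(half_linear * np.fft.fft(psi, axis=1), axis=1)
    return psi

# Illustrative run (N = 3): a rough dark dip in the first component on a constant
# background. A single tanh profile is not strictly periodic, so a large box is used
# to keep boundary effects away from the soliton during the simulated time window.
x = np.linspace(-60.0, 60.0, 2048, endpoint=False)
dx = x[1] - x[0]
a = 1.0
psi0 = np.stack([a * np.tanh(x) + 0j,
                 a * np.ones_like(x, dtype=complex),
                 a * np.ones_like(x, dtype=complex)])
psi_end = manakov_split_step(psi0, dx, dt=1e-3, nsteps=10_000)   # evolve to t = 10
```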
Let us now consider the nonlinear superposition of two fundamental solutions corresponding to different combinations of the vector eigenfunctions of the transformed Lax pair, i.e., {1, 1, 0, 0} and {1, 0, 1, 0}. The first combination produces fundamental dark soliton while the latter combination {1, 0, 1, 0} produces fundamental general breather (GB). It is given by: where with Here, d and r are: while The coefficients G j , H j , M j and U j are respectively: and The values χ = χ 1 or χ 3 , and χ can be found by solving Eq.(B2) numerically. Here we use χ = χ 1,c . Due to the condition χ r = χ r , this solution describes general breather rather than a dark soliton (2). The dark soliton (2) and the general breather (53), each with the eigenvalue χ = χ 1 are shown in Figs. 12(a) and 12(b) for the cases α = 0.5 and α = 0.4 respectively. The nonlinear superposition of these two solutions is shown in Fig. 12(c). B. Second-order solutions with β1 = β2 = β3 When N = 3, the number of possible higher-order combinations is larger than in the case N = 2. First, we consider the nonlinear superposition of solitons with two different eigenvalues given by Eqs. (46) and (49). The details of derivation are presented in Appendix B 1 b. Figure 13(a) shows the amplitude profiles of dark soliton (46) with α = 0.5 while Fig. 13(b) shows the amplitude profiles of the dark soliton (49) with α = 0.4. These two dark solitons have the same velocity β 1 . Their superposition is shown in Fig. 13(c). It shows complex beating pattern in each component. The total amplitude is a bound state of two dark solitons with weakly attractive interaction around the centre (t, x) = (0, 0). Now, let us consider the nonlinear superposition of solutions given by Eqs. (37) and (38) that satisfy the condition β 1 = β 2 = β 3 = 0. Their amplitude profiles are shown in Figs. 6(b) and 6(c) respectively. The eigenval- ues χ 1 = χ 3 = −iα of these solutions are identical. Thus, for any α, the nonlinear superposition of these solutions is degenerate. This solution is shown in Fig. 14(a). Due to the degeneracy, in every component, the two localised waves are well-separated in x despite the zero shifts of individual solitons in the solution. The ψ (1) and ψ (3) components consist of two oscillating solitons while the ψ (2) component is a combination of oscillating and nonoscillating dark soliton. Moreover, the amplitudes of the r.h.s. solitons in ψ (1) and ψ (3) components are complementary. As the total amplitude should be dark solitons without oscillation, the r.h.s. soliton in the ψ (2) component exhibits a pure dark structure. The total amplitude also shows two well-separated dark solitons. This is in sharp contrast to the non-degenerate case shown in Fig. 13(c). In order to confirm the validity of the exact solution, we performed numerical simulations starting from the initial condition which is the exact solution at t = 0. The resulting wave profiles (dotted lines) after propagation of 40 units (t = 40) are presented in Fig. 14(b). The wave profiles according to the exact solutions are shown on the same plots by solid lines. As expected, the two profiles in each plot coincide. VIII. SECOND-ORDER SOLUTIONS WITH Next, we consider the cases when one or two of the background amplitudes vanish, namely i) a 1 = a 2 = 0, a 3 = 0; ii) a 1 = 0, a 2 = a 3 = 0. The case when all a j = 0 is not allowed in the defocusing Manakov systems. Each of the two cases produces new nonlinear superposition. 
In this section, let us focus our attention on the case a 1 = a 2 = 0, a 3 = 0. The corresponding spectral parameter (9) reduces to where χ denotes the valid eigenvalue determined below. Only complex eigenvalues χ 1 , χ 2 , χ 3 are valid. They are related to each other as follows This means that the two eigenvalues χ 1 , χ 2 in vector soliton formation play the same role. If we use χ 1 or χ 2 as the eigenvalue, we obtain vector solitons in ψ (1) and ψ (2) wave components and a zero solution in ψ (3) wave component. The derivation is given in Appendix B 2 a. However, if we use the eigenvalue χ 3 = β − iα, we obtain the solution in the form of dark-dark-bright solitons. Its explicit form is given by: where σ = 1 2 ln 1 − a 2 (β1+χ)(β1+χ * ) − a 2 χχ * , and λ is given by Eq. (63). Velocities of fundamental solutions shown in Figs. 16(a) and 16(b) are unequal but very close. As a result, their interaction shows complex oscillations of a quasibound state of two solitons shown in Fig. 16(c). The first two components ψ (1) and ψ (2) are oscillating dark solitons on the plane wave background while the component ψ (3) is an oscillating bright soliton. It has a two-peak wave profile. The nonlinear superposition of these two fundamental solutions is shown in Fig. 17(c). It reveals complex oscillating patterns in ψ (1) and ψ (2) wave components. The third wave component ψ (3) is the two-hump bright soliton also with oscillating structure. This result is a limiting case of the solution shown in Fig. 13 when a 3 → 0. Let us now consider the fundamental dark solitons (37) and (38) and their nonlinear superposition when a 3 = 0. Similar to the case shown in Fig. 17, the solution (37) reduces to (69) with zero velocity, β j = 0. It is shown in Fig. 18(a). This solution oscillates in ψ (1) and ψ (2) wave components and has zero third component. The solution (38) becomes a dark-dark-bright soliton (66) with zero velocity. It is shown in Fig. 18(b). For a given α, these two solitons have identical eigenvalues. Thus, their nonlinear superposition is a degenerate second-order soliton. It is shown in Fig. 18(c). Due to the degeneracy, the two solitons are separated in space. They have the same period in t. However, their phases are shifted relative to each other. Oscillations are observed only in ψ (1) and ψ (2) Now, we consider the case when two background components are zero a 2 = a 3 = 0, but a 1 = 0. In this case, the resulting Lax spectral parameter becomes where χ is determined below. Second-order exact soliton solution is constructed at the second step of Darboux transformation using the spectral parameter (72). As in Section VIII, two different combinations of the coefficients of the vector eigenfunctions of the transformed Lax pair are used, i.e., {1, 1, 0, 0} and {1, 0, 1, 0}. In the first case, {1, 1, 0, 0}, the explicit form of the soliton solution is given by: where This solution describes dark and bright solitons in ψ (1) and ψ (2) wave components, respectively. The third component is zero. The second combination, {1, 0, 1, 0}, provides a similar solution but with zero in the second component: Below, we consider these two solitons for specific eigenvalues. A. Second-order solutions when β1 = −β3 = β = 0, and β2 = 0 The four eigenvalues in this case are given by: As discussed, only complex eigenvalues χ 1 , χ 3 are valid. Figure 19(a) shows the evolution of amplitude profiles of the vector soliton (73) corresponding to the eigenvalue χ 1 = −iα. 
The solution is a zero velocity dark soliton in the first component ψ (1) and a bright soliton in the second component ψ (2) . The third component ψ (3) is zero. The amplitude profiles of the vector soliton (75) with the eigenvalue, χ 3 = −iα + β, are shown in Fig. 19(b). This soliton has a non-zero velocity −β. It is a dark soliton in the first component, ψ (1) , and bright soliton in the third component, ψ (3) . The second component, ψ (2) , is zero. The nonlinear superposition of these two solitons is shown in Fig. 19(c). The plot shows an elastic collision of two vector solitons with a phase shift at t = 0. The phase shift can be clearly seen also in the second and third components of the wave field. These results are the limiting cases of those shown in Fig. 11 when a 2 → 0, a 3 → 0. In order to confirm the validity of the exact solutions, we performed numerical simulations of this higher-order solution starting from the initial conditions provided by the exact solution at t = 0. The amplitude profiles of the exact solution (solid curves) and the numerical simulations (dashed curves) at t = 40 are shown in Fig. 19(d). There is an excellent agreement between them, as expected. Another type of a higher-order solution is formed by the nonlinear superposition of solutions (73) and (75) corresponding to a single eigenvalue (either χ 1 or χ 3 ) but with different α. To be specific, we chosen the eigenvalue χ 1 . The corresponding fundamental soliton solutions are shown in Figs. 20(a) and 20(b). These two solitons have zero velocity due to the condition χ 1r = 0. Their superposition results in a new form of bound state of two solitons. It is shown in Fig. 20(c). The first component, ψ (1) , is an asymmetric bound state of two dark solitons with the profile that has two unequal dips. The ψ (2) and ψ (3) components are the asymmetric bound states of two bright solitons with two unequal humps. These results are limiting cases of those shown in Fig. 12 when a 2 → 0, a 3 → 0. Namely, the moving vector solitons shown in Fig. 12(c) reduce to the ones shown in Fig. 20(c) when a 2 → 0, and a 3 → 0. Validity of the results shown in Fig. 20(c) are confirmed using numerical simulations starting from the initial conditions provided by the exact solution at t = 0. Figure 20(d) shows the wave profiles at t = 80 obtained from the exact solutions (solid curves) and from the numerical simulations (dashed curves). As expected, the two profiles coincide. B. Second-order solutions when β1 = β2 = β3 In the case of equal wavenumbers β 1 = β 2 = β 3 , the eigenvalues are given by Here, only one of the complex eigenvalues (either χ 1 or χ 3 ) is valid. Using χ 1 , we construct the fundamental vector soliton solutions for different combinations of the eigenfunctions. These solutions are given by (73) Fig. 21(c). Similar to the case shown in Fig. 20, the superposition is a bound state of two solitons but with finite velocity β 1 = 0.3. The ψ (1) component is a bound state of two equal dark solitons. On the other hand, the ψ (2) (ψ (3) ) component is a bound state of two unequal in-phase (out-of-phase) bright solitons. These results are the limiting case of those shown in Fig. 13 when a 2 → 0, and a 3 → 0. Figure 21(d) shows the comparison of the wave profiles at t = 40 obtained from the exact solutions and numerical simulations. This way, we confirmed that the exact solutions shown here are indeed correct. We finally consider the second-order vector solitons corresponding to the solution shown in Fig. 
14(a) when a 3 → 0 (or the solution shown in Fig. 18(c) when both Fig. 22(c). It is a degenerate solution because the eigenvalues of the two fundamental solutions coincide. The situation is similar to the one shown in Fig. 18(c). The resulting second-order solution consists of two equal well separated dark-bright solitons propagating in parallel. Each of the wave profiles shown in Fig. 22(d) is symmetric in x. The parallel solitons shown in Figs. 20(c), 21(c), and 22(c), come from the nonlinear superposition between two fundamental dark-bright solitons with the same velocity, each associated with a zero solution in different components. X. CONCLUSIONS In conclusion, we have studied fundamental vector solitons and their interaction in the defocusing regime of Manakov equations. We derived multi-parameter family of fundamental vector soliton solutions in analytic form and presented the existence diagrams of these solitons for the two-and three-component Manakov equations. We have found that vector solitons exist only in finite areas of the (α, β) plane. Within these areas, the dark soliton components oscillate. At the boundaries of the existence diagrams, vector solitons are transformed into plain vector dark solitons. We have also provided exact solutions for the interaction of fundamental solitons. These are nonlinear superpositions of fundamental vector dark solitons. We found a rich variety of interaction patterns of two solitons each with it own eigenvalue. The two eigenvalues may differ or they can coincide. The corresponding solutions are nondegenerate or degenerate second-order solutions respectively. We confirmed the correctness of our theoretical results using numerical simulations. Because of the widespread fundamental and practical interest to physical systems described by the set of Manakov equations in the defocusing regime, we believe that our results may have a significant impact on experimental with the matrices Here, the vector function ψ= ψ (1) , ψ (2) , ..., ψ (j) T , † denotes the matrix transpose and complex conjugate, I is an identity matrix, λ is the spectral parameter, and a 2 = j=N j=1 a 2 j . The system of Manakov equations (1) follows from the compatibility condition For N = 2, using a diagonal matrix S=diag(1, e −iθ1 , e −iθ2 ), the Lax pair can be rewritten as: The linear eigenvalue problem in terms of the transformed Lax pair (A5) is given by Eq. (A6) admits three eigenvalues χ n,l , (l = a, b, c). To obtain the solution of Eq. (A2), we further diagonalise the matricesŨ andṼ. Namely, we have where the transformation matrix H is: Solving Eq. (A7), we have ϕ n,l = c n,l exp iχ n,l x + 1 2 where c n,1 (l = a, b, c) are arbitrary constants corresponding to the vector eigenfunctions of the transformed Lax pair ϕ n,l . Finally, the eigenfunctions Ψ n = (R n , S n , W n ) are given by R n = ϕ n,a + ϕ n,b + ϕ n,c , The fundamental (first-order, n = 1) vector solution of the two-component Manakov equations can be obtained through the Darboux transformation [57]. Namely If one of the c 1,l is zero, we obtain exact solutions that describe the dynamics of a single soliton. However, for the defocusing case, we have to set c 1,c = 0 so that ϕ 1,c = 0. Below we clarify this point. b. Vector solitons when β1 = β2 The general soliton solution derived above reduces to the soliton solution with zero velocity when β = 0. The moving soliton can be obtained by employing a Galilean transformation. 
Below, we show how to obtain the general vector soliton solution via the Darboux transformation. When β 1 = β 2 , the eigenvalue χ n,a = −β 1 − iα, χ n,b = −β 1 . The transformation matrix H can be rewritten as: The corresponding eigenfunctions (R n , S n , W n ) are given by R n = ϕ n,a + ϕ n,c , Here, we still use the coefficients: When a 2 = 0, the spectral parameter is given by (51). The transformation matrix H takes the form: Using (51) and solving the associated Lax pair, we have the eigenfunctions: R n = ϕ n,a + ϕ n,c , S n = ψ (1) 0 ϕ n,a β 1 + χ n,a + ϕ n,c β 1 + χ n,c , W n = ϕ n,b exp (iθ 2 ). The second-order solutions can be obtained in the next iteration of the Darboux transformation. They are: Here, ψ (j) 1 denote the fundamental vector solution obtained above. (P) 1i represents the element of the matrix (P) in the first row and i-th column, and where Λ = diag(1, −1, −1). When the two eigenvalues are different, these solutions describe the second-order non-degenerate solitons on the same vector background. For N = 3, the Lax pair can be rewritten as: by using a diagonal matrix S=diag(1, e −iθ1 , e −iθ2 , e −iθ3 ). The linear eigenvalue problem of the transformed Lax pair (B1) is given by There are four eigenvalues χ n,l (l = a, b, c, d). The transformation matrix in this case is: The vector solution of the transformed Lax pair is: ϕ n,l = c n,l exp {iχ n,l x + 1 2 (4a 2 + χ 2 n,l )t}, where l = a, b, c, d. The corresponding eigenfunctions (R n , S n , W n , X n ) are given by R n = ϕ n,a + ϕ n,b + ϕ n,c + ϕ n,d , The fundamental vector soliton solution (n = 1) can be obtained at the first step of the Darboux transformation: To obtain the solution describing a single soliton, two of the coefficients c 1,l (l = a, b, c, d) should be zero. Below, we show that three combinations of the coefficients can be used to generate valid solution. The linear eigenvalue problem (B2) directly leads to where the eigenvalue χ is given by Eq. (8). Substituting one valid eigenvalue χ 1 given by (29) into (B7), we obtain the corresponding Lax spectrum λ 1 . Using this spectrum and solving (B2), we have four different eigenvalues χ 1,a , χ 1,b , χ 1,c , and χ 1,d . The valid ones are: χ 1,a = χ 1 , χ 1,b = χ 2 = χ 1 + iα. One of the eigenvalues is invalid. Without loss of generality, we let χ 1,d to be invalid. There are three combinations of the coefficients that can be used to generate fundamental soliton solutions. Here (P) 1i is the matrix element of P in the first row and i-th column, and where Λ = diag(1, −1, −1, −1) and † denotes the matrix transpose and complex conjugate.
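As an illustration of the kind of numerical check described above (taking the exact solution at t = 0 as the initial condition, propagating it, and comparing the profiles at a later time such as t = 40 or t = 80), a minimal split-step Fourier integrator for the N-component defocusing Manakov system can be sketched as follows. It assumes the normalization i ψ_t + ½ ψ_xx − (Σ_k |ψ^{(k)}|²) ψ = 0 and periodic boundary conditions on a wide box, so the non-vanishing dark-soliton backgrounds must be accommodated by a sufficiently large domain; the paper does not specify its numerical scheme, so this is only a plausible stand-in, with function and parameter names invented here.

```python
import numpy as np

def split_step_manakov(psi0, L, dt, n_steps):
    """Strang split-step Fourier integrator for the N-component defocusing
    Manakov system  i psi_t + 0.5 psi_xx - (sum_k |psi_k|^2) psi = 0.

    psi0    : complex array of shape (N, M) -- initial fields on M grid points
    L       : domain length (grid assumed periodic; a wide box is needed so the
              dark-soliton background is effectively periodic)
    dt      : time step
    n_steps : number of steps to propagate
    """
    psi = psi0.astype(complex).copy()
    M = psi.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(M, d=L / M)      # angular wavenumbers
    linear_half = np.exp(-0.5j * k**2 * (dt / 2.0))   # half linear (dispersive) step
    for _ in range(n_steps):
        psi = np.fft.ifft(np.fft.fft(psi, axis=1) * linear_half, axis=1)
        rho = np.sum(np.abs(psi)**2, axis=0)          # total intensity (conserved in this sub-step)
        psi = psi * np.exp(-1j * rho * dt)            # full nonlinear step
        psi = np.fft.ifft(np.fft.fft(psi, axis=1) * linear_half, axis=1)
    return psi

# Example: propagate an exact second-order solution sampled at t = 0 to t = 40
# psi_t40 = split_step_manakov(psi_exact_t0, L=200.0, dt=1e-3, n_steps=40_000)
```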
The mediator role of attitude towards aging and elderliness in the effect of the meaning and purpose of life on death anxiety Older adults can frequently serve as a reminder of death to younger adults. People can develop a negative attitude towards aging and elderliness because they see old age as an obstacle in reaching their goals and what they want to do, which they see as the purpose of their lives. This research was conducted to answer to the question of whether attitudes towards aging and elderliness have a mediating role in the relationship between meaning and purpose of life and death anxiety. Relational screening model was used in the research. The research was conducted with 422 participants between the ages of 18-59. In the analysis of the data, Pearson Correlation Analysis and Regression analysis were performed. In addition, Hayes Macro was used in SPSS program to analyze the mediator variable effect. As a result, it was determined that the attitude towards aging had a significant mediator role in the effect of the meaning and purpose of life on death anxiety. It was found that there was a moderate positive correlation between the attitude towards aging and elderliness and death anxiety, a moderate negative correlation between the attitude towards aging and elderliness and the meaning and purpose of life, and a weak negative correlation between death anxiety and the meaning and purpose of life. Introduction Death has been tried to be explained by many definitions in the literature, but each researcher and theorist has differentiated it as a result of the meanings they attribute to death. In general, death has been defined as the complete and definitive end of life, which we cannot directly experience, but if it happens, we can no longer exist, and it has been accepted as an important life event in all cultures and ages (Tanhan, 2007). Death can reveal the feelings of impotence, isolation, loss of control, and meaninglessness, and for some people, fear of death can prevent them from achieving satisfaction and happiness (MacLeod et al., 2019). Although death is known as a great fact of human life, human beings cannot feel fully ready for such a reality throughout their life (Kımter & Köftegül, 2017). Individual views on death and dying differ greatly due to a variety of factors (Lehto & Stein, 2009;Rooda et al., 1999;Yalom, 2008). Based on these differences, factors such as the culture, personality, religion, vocation and age of the individual have differentiative impact in this regard (Karakuş et al., 2012). For example, the death anxiety of nurses working in the service where the death rate is high and the teachers, academicians and bankers who are less likely to face death were compared and it was determined that the death anxiety levels of the nurses were higher (Aktürk & Şahin, 2019). Individuals' attitudes towards death itself and death of the people around them can change and these attitudes are handled in four categories such as wanting death, accepting death, not accepting death and fighting against death while upon the death of the people around them, the mourning period is included in these concepts. (Karakuş et al., 2012;Thorson & Powell, 1988). Although it is not often mentioned among the attitudes about death, according to the data of the World Health Organization, the act of suicide, which kills approximately 1 million people a year and is called a death challenge, can be among these attitudes (De Berardis et al., 2018). 
Like death, death anxiety has been demonstrated to affect conduct as well as life and death decisions (Dadfar & Lester, 2014). Death anxiety has been considered as a feeling that lasts from the moment we are born until the end of one's life and develops when the person realizes that they will no longer exist, that they will lose the world and themselves, and that they may disappear (Yalom, 1980). While Abdel-Khalek et al. (2009) expresses death anxiety as the conceptualization of the anxiety that exists when we become aware of death, for Nyatanga and Vocht (2006) death anxiety is a disturbing emotion created by multidimensional, existential-centered worries that arise with thoughts about the death of oneself or another handled as. According to the Terror Management Theory (TMT) developed by Greenberg et al. (2014) the thought that the existence of the individual will result in death creates an existential anxiety for every person, and even this anxiety creates a terror effect on people. Death anxiety and variables such as gender, age, religious beliefs, personality traits, having children and marital status are frequently studied and are variables that affect death anxiety (Aday, 1984;Lehto & Stein, 2009;Erdoğdu & Özkan, 2007;Schumaker et al., 1988;Abdel-Khalek et al., 2009). The meaning of life is another variable thought to be related to death anxiety; however, the findings of empirical studies on the effect of meaning on death attitude are ambiguous (Ardelt, 2003(Ardelt, , 2008. The need in explaining and commenting on the events that people have experienced in their lives and to make sense of the world they live in has been expressed in different ways. Every person seeks the purpose and meaning of their life, but the meaning might vary based on how people interact with the new situations, how they relate them with the people around themselves or with their past life, and on time and environment they are living in (Akıncı, 2005;Eagleton, 2012;Öcal, 2010). In the literature, the meaning of life is most often expressed as the purpose of life, and while these two concepts are distinct, they are sometimes used interchangeably. The meaning of life points out to the coherence, understanding and interpretation while the purpose of life points out to the goal and intention (Akıncı, 2005;Aydıner, 2011;Reker & Wong, 1983). Individuals can develop defense mechanisms by acquiring certain occupations and responsibilities in order to reduce the negative effects of death anxiety in their lives and cope with this reality, and in this direction, individuals must give meaning to life (Özcan et al., 2020;Langs & Giovacchini, 2018;Becker, 2014;Ulu, 2018). According to TMT, people turn to structures that provide meaning in order to cope with the knowledge that they are mortal (Routledge et al., 2008). The theory also stated that structures such as culture and worldview can be tied to values that add meaning to people's lives, protecting individuals from death anxiety (Greenberg et al., 2014). It is thought that individuals with a purpose in life are more attached to life, are able to cope better with the difficulties and problems that life brings, and can continue their existence longer (Aydıner, 2011;Eschleman et al., 2010;Lindsey & Hills, 1992;Kobasa et al., 1982). Aside from the meaning of life, attitudes toward aging and old age are thought to influence death anxiety (Depaola et al., 2003). 
Death is an evolutionary event that every human being will experience, which cannot be prevented or eliminated (Erdoğdu & Özkan, 2007). TMT stated that the elderly remind of death for young adults, the death anxiety of the young people who are in close contact with the elderly increases, their self-esteem decreases, and interpersonal communication atrophies (Rababa et al., 2021). It is thought that the development of negative attitudes towards old age may be due to the fact that young people see old age as an obstacle in reaching their goals and what they want to do, which they see as the purpose of their lives (Suhail & Akram, 2002;Öz, 2002;Akdemir et al., 2007). Losses increase in parallel with the age, and among these losses, the person may experience physiological losses as well as losses such as routine responsibilities, position and dignity of life (Arpacı et al., 2011). Although these losses are considered normal for the aging process, experiencing these losses at an early age makes it difficult for individuals to accept (Koç, 2003). Therefore, while the death of the elderly is acceptable for the evolutionary cycle, it is unexpected for the young (Arpacı et al., 2011). It is thought that the fact that old age results in a regression and death may have an effect on the change in attitudes towards old age and aging and on the level of death anxiety within the scope of these attitudes. At the same time, it is thought that the meaning and purpose of the individual's life may be effective in reducing the anxiety they have developed against death. We investigated the cross-sectional associations between attitudes towards aging and elderliness, death anxiety, meaning and purpose of life using a terror management theory framework and the aforementioned research. Considering all these, we proposed the following hypotheses: (a) meaning and purpose of life would be negatively associated with death anxiety; (b) death anxiety would positively associated with attitude towards aging and elderliness; (c) attitude towards aging would have a significant mediator role in the effect of the meaning and purpose of life on death anxiety. Enriching our understanding of the relationships between these constructs, as well as the mediating role of attitudes toward aging and elderliness in an understudied adult population, can shed light on the relevance of attitudes toward aging and elderliness in this population. Participants After a brief description of the study and informed consent, participants were invited to fill in an online survey. A total of 422 participants were recruited between March and May, 2021. Of the 422 participants, 61.4% were female and 38.6% were male. 42.9% of the participants indicated their marital status by ticking the option married and 57.1% single. While 31.3% of the participants have children, 68.7% of them do not. While 75.4% of the participants consider themselves to be religious believers, 24.6% do not consider themselves religious believers. 25.1% of the participants are in the age range of 18-24, 43.4% are in the age range of 25-34, 13.7% are in the age range of 35-40, 12.8% are in the age range of 41-50, 5% are in the age range of 51-59. Measures Demographics The demographics questionnaire gathered data such as gender, relationship status, number of children, religious affiliation, and age. Meaning and Purpose of Life Scale It was developed to measure the meaning of individuals in their lives and their level to attach a meaning to their lives (Aydın et al., 2015). 
The scale consists of 17 items. The scale is scored according to a 5-point Likert-type rating (strongly disagree = 1, strongly agree = 5). The highest score that can be obtained on the scale is 85, and the lowest score is 17. The reliability study was conducted with retests, and the Cronbach Alpha value was found to be .91. Death Anxiety Scale The scale was developed by Sarıkaya (2013) and was updated by Sarıkaya and Baloğlu (2016). It consists of 20 items. It is scored according to a 5-point Likert-type rating (never = 0, always = 4). The highest score that can be obtained on the scale is 80, and the lowest score is 0. A high total score from the scale is interpreted as high death anxiety, and a low score is interpreted as low death anxiety. The reliability study was carried out with retests and the Cronbach Alpha value was found to be .95. Attitude Scale Towards Aging and Elderliness The scale was developed to measure the attitudes of individuals aged 18 and over towards the elderly and aging (Otrar, 2016). It consists of 45 items. The scale is scored according to a 5-point Likert-type rating (strongly disagree = 1, strongly agree = 5). The highest score that can be obtained on the scale is 225, and the lowest score is 45. As the total score of the scale increases, the negative attitude towards aging and old age increases. In the reliability study, the internal consistency value was found to be .97. Process of Data Collection Ethical approval for this study was obtained from the Beykent University Publication Ethics Committee for Social Sciences and Humanities. After obtaining ethical approval, surveys were created using the Google Forms platform, and participants were invited to complete the Turkish versions of the data collection tools. Participants were informed about the process and asked to consent to participate before data collection began. The study announcement was distributed via social media channels. To increase the representativeness of the sample, convenience and snowball sampling were used to recruit participants. The age range of 18-60 was the inclusion criterion. There were no exclusion criteria in place. The data collection tools were completed voluntarily by 422 participants. Statistical Analyses SPSS 25.0 was used to process the data and conduct the descriptive and correlation analysis. Process macro for SPSS was used to conduct mediator role analysis. The "Bootstrap" method is used in the analysis of mediating variables performed with this method. In this method, sub-samples are randomly generated from the research data and the tested mediation model is analyzed for these sub-samples, and the analysis results of the larger research sample and sub-samples are compared with each other (Preacher & Hayes, 2008). In the current study, 5000 bootstrap samples were used during mediator variable analysis, as suggested by Hayes (2009). The fact that the lower and upper limit values of the confidence interval (LLCI; ULCI) do not include "0" indicates that the effect between variables is significant. In addition, the results of the Sobel test, created by Sobel (1982), were used to examine the mediating impact over regression models. Table 1 shows that a moderately significant positive correlation was found between ASTAE total scores and DAS total scores (r = .435; p < .01). A moderately significant negative correlation was found between ASTAE total scores and MPLS total scores (r = −.349; p < .01). 
There was a weak, negative, and significant correlation between DAS total scores and MPLS total scores (r = −.114; p < .05). Note: ASTAE = Attitude Scale Towards Aging and Elderliness; DAS = Death Anxiety Scale; MPLS = Meaning and Purpose of Life Scale. Table 2 shows the results of the regression analysis regarding the mediating role of attitudes towards aging and elderliness in the effect of meaning and purpose of life on death anxiety. The model resulting from the regression analysis is also given in Fig. 1. In the first stage of the regression analysis, the predictive effect of the meaning and purpose of life on attitudes towards aging and elderliness was examined (a). The meaning and purpose of life (β = −.349, t = −7.632, p < .01) explained 12.1% of the variance in attitudes towards aging and elderliness, and the model was statistically significant (R = .349, R² = .121, F(1,420) = 58.258, p = .000). In the second stage, the predictive effect of the meaning and purpose of life on death anxiety was examined (c). The meaning and purpose of life (β = −.113, t = −2.347, p < .05) explained 1.3% of the variance in death anxiety, and the model was statistically significant (R = .113, R² = .013, F(1,420) = 5.512, p = .019). In the last stage, the joint predictive effect of the meaning and purpose of life and attitudes towards aging and elderliness on death anxiety was examined (c'). The meaning and purpose of life (β = .043, t = .924, p > .05) and attitude towards aging and elderliness (β = .450, t = 9.605, p < .01) together explained 19.1% of the variance in death anxiety, and the model was statistically significant (R = .437, R² = .191, F(2,419) = 49.488, p = .000). All assumptions required to test the mediating effect were met. When the beta coefficients were evaluated, the effect of the meaning and purpose of life on death anxiety became non-significant once the attitude towards aging and elderliness was entered into the model as a mediator variable. The lower and upper bound values of the confidence interval were in the same direction (LLCI = −.451, ULCI = −.217). The Sobel test, conducted to determine the significance of the mediating effect, confirmed that the attitude towards aging and elderliness had a significant mediator role in the effect of the meaning and purpose of life on death anxiety (z = −6.04, se = .05, p = .000).

Discussion

In this study, we examined whether the attitude towards aging and elderliness has a mediating role in the effect of the meaning and purpose of life on individuals' death anxiety. The attitude towards aging and elderliness was found to have a significant mediator role in the effect of the meaning and purpose of life on death anxiety. Although the direct effect of the meaning and purpose of life on death anxiety was not significant in the full regression model, the attitude towards aging and elderliness had a significant full mediator role between these two variables. A greater sense of purpose in life has been linked to a lower level of death anxiety (Routledge & Juhl, 2010). According to Wong (2013), human beings are meaning-seeking and meaning-making creatures, and the pursuit of meaning in life is the best way to alleviate death anxiety.
The lack of a purpose that an individual aim to achieve throughout their life and a meaning that they see as the reason for their existence may cause the person not to be able to connect to life and cope with the difficulties of life, and it is thought that death anxiety levels may increase with the anxiety of having lived a meaningless and purposeless life. Similarly, attitudes toward aging and elderliness may reduce one's sense of purpose in life, paving the way for attitudes toward aging and elderliness to play a mediating role. TMT is also at a point that supports the study's findings. According to the theory, the emergence of mortality in an individual's life heightens the importance of values, meaning, and purpose in life (Bulut, 2015). At the same time, it is stated that the fact that old age is a harbinger of death and that the individual is viewed as an impediment to achieving one's goals increases death anxiety (Rababa et al., 2021). In the study, a moderately significant negative correlation was found between the meaning and purpose of life and attitudes towards aging and elderliness. When the concept of meaning is examined from an existential point of view, it is considered as a source of life that shapes the life of the person and makes his life consistent (Aydın et al., 2015). Individuals who think that they have a meaningful and purposeful life are thought to be not a negative phenomenon for them, and in this context, they do not develop a negative attitude towards aging. In this study, it was determined that the level of meaning and purpose of life decreased and the attitudes towards aging and elderliness increased. Considering this result, it is thought that the lack of a purpose for which individuals will strive to maintain their existence throughout their lives and the lack of a meaning that will be the basis of their lives, and the aging of the individual on the other hand, may prevent the formation of these goals and increase the negative attitudes towards aging in people in this way (Suhail & Akram, 2002;Akdemir et al., 2007;Öz, 2002). It is thought that death, which is considered as an inevitable result of old age, will increase death anxiety in people without a purpose and meaning in the life. A positive and moderately significant relationship was found between attitudes towards aging and old age and death anxiety. There are many factors that cause an increase in negative attitudes towards aging. Among these, negative attitudes towards aging can be developed due to the difficulties of coping with the difficulties brought by aging (Uncu, 2003), the negative views of the social environment against aging (Bilginer et al., 1997), the regression and deformation of the body in many ways (Biçer, 1996). The common point of these reasons that increase the negative attitude is that it is known that the end of old age will result in death. Although it is known that death can occur at any age, individuals think that death is closer to older individuals. In this context, it is thought that the increase in the negative attitudes of individuals towards old age will also increase their death anxiety. In a study that looked at the factors that increase the death anxiety of the elderly, it was found that death anxiety would increase with the decrease in the health perceptions, vitality and psychological health of the elderly individuals that would affect their quality of life (Öztürk et al., 2011). 
It is thought that death anxiety levels will decrease with the decrease of negative attitudes towards elders in society and increasing the value and importance given to old age. In a study examining the death anxiety of nurses and their attitudes towards aging, it was also discussed that the death anxiety levels of the nurses who care for the elderly with terminal illness increased and they developed negative and stereotyped ideas about aging (Kızılkaya & Koştu, 2006). It is thought that some negative effects of old age (loss of healthy bodies, inability to realize dreams, etc.) increase negative attitudes towards aging and that the anxiety brought about by knowing that old age will result in death (Biçer, 1996;Thorson & Powell, 1990) and attitudes towards old age increase death anxiety. It is also thought that individuals in old age witness the death of their loved ones and friends more, and their being more exposed to death may increase negative attitudes towards old age, and the relationship between exposure to death and death may be due to this. At the same time, another factor that increases death anxiety is the belief that death will occur through suffering. It is thought that the perception brought by the fact that the elderly people usually die by suffering from diseases may also be effective in developing negative attitudes towards old age. Limitations and Suggestions The present study has some limitations. This study lacked sufficient constructs, and it would be improved if it included more variables and it did not incorporate a contextual framework. In this regard mediating or moderating constructs worthy of attention can be studied and incorporated in terms of terror management theory and other frameworks such as positive psychology. Furthermore, the presented findings are of a correlative nature, which limits the ability to draw conclusions thus, caution is required in interpreting the obtained material. For this reason, understanding the experiences of attitudes toward aging and old age through a hybrid quantitative and qualitative methodology has the potential to provide a more detailed perception of the role of this construct in this population. While this study was being carried out, the difficulties brought by the COVID-19 pandemic that surrounded the whole world caused the participants of the study to be reached via online platforms. This situation had some risks that would affect the findings of the research, such as not understanding the questions directed to the participants, not being able to answer them correctly, and not being able to reach the targeted participants. Based on this study, researchers who will study the meaning and purpose of life, death anxiety, attitude towards aging and elderliness are recommended to recruit participants which creates equal distributions in demographic variables. In future studies, these variables can be examined with larger samples and different sample groups (white collar workers, women, elderly people, etc.). Although there are certain limitations in the study, the results of the research revealed important findings in terms of the meaning and purpose of life, death anxiety, attitude towards aging and old age, and it aimed to contribute to this field, which has limited studies in the literature. The current study adds to the empirical foundation of an influential but not fully empirically supported model in the area of the meaning and purpose of life and death anxiety. 
The study revealed that death anxiety was positively associated with attitudes towards aging and elderliness. Professionals working with younger adults who suffer from death anxiety are encouraged to incorporate consideration of attitudes towards aging and elderliness into routine assessment and intervention practices. Further educational and religious interventions focusing on aging and death may improve understanding of the aging and death processes and, as a result, younger adults' attitudes toward older adults (Chonody et al., 2014).
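For readers who want to reproduce the logic of the analysis, the percentile-bootstrap test of the indirect effect described in the Statistical Analyses section (5,000 resamples, with significance judged by whether the LLCI-ULCI interval excludes zero) can be sketched as below. The study itself used Hayes' PROCESS macro in SPSS together with a Sobel test; this NumPy version, with placeholder variable names standing in for the MPLS, ASTAE, and DAS totals as x, m, and y, is only an illustration of the method, not the original analysis (the analytic Sobel z, which needs the standard errors of the two paths, is omitted for brevity).

```python
import numpy as np

def paths_ab(x, m, y):
    """Return (a, b): a from the regression m ~ x, b from y ~ x + m."""
    a = np.polyfit(x, m, 1)[0]                         # slope of mediator on predictor
    X = np.column_stack([np.ones_like(x, dtype=float), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]        # slope of outcome on mediator, controlling for x
    return a, b

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b (PROCESS Model 4 style)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample participants with replacement
        a, b = paths_ab(x[idx], m[idx], y[idx])
        boots[i] = a * b
    a, b = paths_ab(x, m, y)
    llci, ulci = np.percentile(boots, [2.5, 97.5])
    return a * b, (llci, ulci)                         # interval excluding 0 => significant mediation

# Hypothetical call (array names are placeholders, not the study's data):
# indirect, (llci, ulci) = bootstrap_indirect(mpls_total, astae_total, das_total)
```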
Consumption of a High-Fat Diet in Adulthood Ameliorates the Effects of Neonatal Parathion Exposure on Acetylcholine Systems in Rat Brain Regions Background Developmental exposure to a wide variety of developmental neurotoxicants, including organophosphate pesticides, evokes late-emerging and persistent abnormalities in acetylcholine (ACh) systems. We are seeking interventions that can ameliorate or reverse the effects later in life. Objectives We administered parathion to neonatal rats and then evaluated whether a high-fat diet begun in adulthood could reverse the effects on ACh systems. Methods Neonatal rats received parathion on postnatal days 1–4 at 0.1 or 0.2 mg/kg/day, straddling the cholinesterase inhibition threshold. In adulthood, half the animals were switched to a high-fat diet for 8 weeks. We assessed three indices of ACh synaptic function: nicotinic ACh receptor binding, choline acetyltransferase activity, and hemicholinium-3 binding. Determinations were performed in brain regions comprising all the major ACh projections and cell bodies. Results Neonatal parathion exposure evoked widespread abnormalities in ACh synaptic markers, encompassing effects in brain regions possessing ACh projections and ACh cell bodies. In general, males were affected more than females. Of 17 regional ACh marker abnormalities (10 male, 7 female), 15 were reversed by the high-fat diet. Conclusions A high-fat diet reverses neurodevelopmental effects of neonatal parathion exposure on ACh systems. This points to the potential for nonpharmacologic interventions to offset the effects of developmental neurotoxicants. Further, cryptic neurodevelopmental deficits evoked by environmental exposures may thus engender a later preference for a high-fat diet to maintain normal ACh function, ultimately contributing to obesity. Research Recent data indicate an alarming increase in the incidence of neurodevelopmental disorders [for review, see Grandjean and Landrigan 2006;Landrigan et al. 1994;Szpir 2006aSzpir , 2006bWeiss et al. 2004;Weiss and Bellinger 2006)], currently involving up to 17% of U.S. school children, including attention deficit hyper activity disorder, learning disabilities, and autism spectrum disorders, at an annual cost of $80-170 billion (Szpir 2006a(Szpir , 2006b. Exposures to environmental chemicals are strongly suspected to play a key role in these increases, contributing to what has been termed a "silent pandemic" (Grandjean and Landrigan 2006). Given that thousands of new chemicals are introduced each year, most of which are untested for developmental neurotoxicity, it seems unlikely that exposures to such toxicants will decline. Accordingly, we already have a legacy of millions of children with environmentally related neurodevelopmental disorders, and we can expect this problem to continue or even increase. Toxicologic research most often focuses on the identification of specific toxicants and their mechanisms of action, but it is equally valid to examine whether subsequent interventions can offset or ameliorate the consequences of exposure. In our previous work, we showed how identification of specific neuro transmitter or signaling pathways affected by diverse neurotoxicants can lead to therapies that restore both synaptic and behavioral function (Beer et al. 2005;Izrael et al. 2004;Shahak et al. 2003;Slotkin et al. 2001b;Steingart et al. 2000aSteingart et al. , 2000bYanai et al. 2002Yanai et al. , 2004Yanai et al. 
, 2005; in particular, we were able to repair deficient function of acetylcholine (ACh) projections in the hippo campus by transplanting neural progenitor cells or to achieve pharmacologic reversal of symptoms by chronic treatment with ACh receptor agonists. Here, we focus on a strategy that may be more readily applicable to neuro develop mental disorders, namely dietary manipulation in which the majority of calories are derived from fat (i.e., a "ketogenic" diet). This approach has met with some success in drug-resistant childhood epilepsies, although the specific mechanism for the improvement is unknown (Bough and Rho 2007;Connolly et al. 2006;Hallbook et al. 2007;Hartman and Vining 2007). Interestingly, the same approach has been tried in treating attention deficit hyperactivity disorder and autism in pilot studies (Connolly et al. 2006;Evangeliou et al. 2003;Pulsifer et al. 2001), again with some evidence of success. However, it must be noted that such trials obviously cannot be blinded because the diet is altered in a way known to the subject. A few animal studies examined the effects of a high-fat diet on neural function, providing some evidence for a generalized decrease in excitability and reduced motor activity (Murphy and Burnham 2006;Murphy et al. 2005); notably, in keeping with the pilot human studies, high fat diets ameliorate behavioral anomalies in genetically modified mice (Teegarden et al. 2008). The ability of dietary manipulations to evoke widespread changes in neuro transmitter function is likely due to changes in the composition of membrane lipids in which receptors and cell signaling molecules are embedded (Gudbjarnason and Benediktsdottir 1996;Ponsard et al. 1999) and therefore can span multiple brain regions and neurotransmitter systems. Indeed, diet-induced changes in neural membrane lipids are known to alter neuro transmitter uptake and release, as well as function of neurotransmitter receptors and their signaling pathways (Clandinin et al. 1983;Geiser 1990;Kelly et al. 1995). In the present study, we present a "proof of principle" by evaluating the ability of a high-fat diet to reverse the ACh-related synaptic abnormalities evoked by neo natal exposure to an organophosphate pesticide, parathion. Organophosphates are the most widely used insecticides (Casida and Quistad 2004) and human exposures are virtually ubiquitous (Casida and Quistad 2004;Morgan et al. 2005). These agents are under going increased scrutiny specifically because of their propensity to elicit developmental neuro toxicity at levels below those required for any signs of systemic exposure (Colborn 2006;Costa 2006;Pope 1999;Slotkin 2004Slotkin , 2005. Here, we evaluated the effects of a brief neonatal parathion exposure at doses straddling the threshold for cholinesterase inhibition and the first signs of toxicity (Slotkin et al. 2006a). In adulthood, we switched some of the animals to a ketogenic, high-fat diet that more than doubles serum β-hydroxybutyrate concentrations (Lassiter Background: Developmental exposure to a wide variety of developmental neurotoxicants, including organophosphate pesticides, evokes late-emerging and persistent abnormalities in acetylcholine (ACh) systems. We are seeking interventions that can ameliorate or reverse the effects later in life. oBjectives: We administered parathion to neonatal rats and then evaluated whether a high-fat diet begun in adulthood could reverse the effects on ACh systems. 
methods: Neonatal rats received parathion on postnatal days 1-4 at 0.1 or 0.2 mg/kg/day, straddling the cholinesterase inhibition threshold. In adulthood, half the animals were switched to a high-fat diet for 8 weeks. We assessed three indices of ACh synaptic function: nicotinic ACh receptor binding, choline acetyltransferase activity, and hemicholinium-3 binding. Determinations were performed in brain regions comprising all the major ACh projections and cell bodies. results: Neonatal parathion exposure evoked widespread abnormalities in ACh synaptic markers, encompassing effects in brain regions possessing ACh projections and ACh cell bodies. In general, males were affected more than females. Of 17 regional ACh marker abnormalities (10 male, 7 female), 15 were reversed by the high-fat diet. conclusions: A high-fat diet reverses neurodevelopmental effects of neonatal parathion exposure on ACh systems. This points to the potential for nonpharmacologic interventions to offset the effects of developmental neurotoxicants. Further, cryptic neurodevelopmental deficits evoked by environmental exposures may thus engender a later preference for a high-fat diet to maintain normal ACh function, ultimately contributing to obesity. key words: acetylcholine, brain development, high-fat diet, organophosphate insecticides, parathion. et al. 2008). We performed assessments in brain regions encompassing major ACh projections as well as those containing the corresponding cell bodies, focusing on three markers of ACh synaptic function that are targeted by developmental exposure to parathion and that contribute to ACh-related behavioral impairment by organophosphates (Slotkin et al. 2006a(Slotkin et al. , 2007a(Slotkin et al. , 2008bTimofeeva et al. 2008b): activity of choline acetyltransferase (ChAT), cell membrane binding of hemi cholinium-3 (HC3) to the presynaptic high-affinity choline transporter, and the concentration of α4β2 nicotinic ACh receptors (nAChRs). ChAT is the enzyme that synthesizes ACh and, because it is a constitutive component of ACh nerve terminals, its activity provides an index of the development of ACh projections (Dam et al. 1999;Happe and Murrin 1992;Monnet-Tschudi et al. 2000;Qiao et al. 2003;Richardson and Chambers 2005;Slotkin et al. 2001a). Although HC3 binding to the choline transporter is also a constituent of ACh nerve terminals, its expression is directly responsive to neuronal activity (Klemm and Kuhar 1979;Simon et al. 1976), so that comparative effects on HC3 binding and ChAT enables the characterization of both the development of innervation and presynaptic activity. Last, the α4β2 nAChR is a key player in the ability of ACh systems to release other neuro transmitters involved in reward, cognition, and mood Bertrand 2001, 2002;Dani and De Biasi 2001;Fenster et al. 1999;Quick and Lester 2002) and is also the most abundant nAChR subtype in the mammalian brain (Flores et al. 1992;Happe et al. 1994;Lindstrom 1987, 1988). Materials and Methods Animal treatments and diet. All experiments were carried out humanely and with regard for alleviation of suffering, with protocols approved by the Duke University Institutional Animal Care and Use Committee and in accordance with all federal and state guidelines. Timed-pregnant Sprague-Dawley rats (Charles River, Raleigh, NC) were housed in breeding cages, with a 12 hr light-dark cycle and free access to water and food (LabDiet 5001; PMI Nutrition, St. Louis, MO). 
On the day after birth, all pups were randomized and redistributed to the dams with a litter size of 10 (5 males, 5 females) to maintain a standard nutritional status. Parathion (99.2% purity; Chem Service, West Chester, PA) was dissolved in DMSO to provide consistent absorption (Slotkin et al. 2006a(Slotkin et al. , 2006bWhitney et al. 1995) and was injected subcutaneously in a volume of 1 mL/kg once daily on postnatal days 1-4; control animals received equivalent injections of the DMSO vehicle. Doses of 0.1 and 0.2 mg/kg/day were chosen because they straddle the threshold for barely detectable cholinesterase inhibition and the first signs of reduced weight gain or impaired viability (Slotkin et al. 2006a(Slotkin et al. , 2006b). Brain cholinesterase inhibition 24 hr after the last dose of 0.1 mg/kg parathion is reduced 5-10%, well below the 70% threshold necessary for symptoms of cholinergic hyperstimulation (Clegg and van Gemert 1999). Randomization of pup litter assignments within treatment groups was repeated at intervals of several days up until weaning; in addition, dams were rotated among litters to distribute any maternal caretaking differences randomly across litters and treatment groups. Offspring were weaned on postnatal day 21. Beginning at 15 weeks of age, half the rats were switched to a high-fat diet (OpenSource D12330; Research Diets Inc., New Brunswick, NJ), providing 58% of total calories as fat; 93% of the fat is hydrogenated coconut oil. The remaining rats continued on the standard LabDiet 5001 diet, which provides 13.5% of total calories as fat; with this diet, 27% of the fat is saturated. Although the high-fat diet contains 37% more calories per gram, we found that animals on this diet reduced their food intake by approximately the same proportion , so the total dietary intake is isocaloric; never theless, animals gained excess weight because of the higher fat content . During the 24th post natal week, animals were decapitated and brains were dissected into the frontal/parietal cortex, temporal/occipital cortex, hippo campus, striatum, mid brain, and brain stem. Neurochemical determinations were made on regions from six rats per treatment group for each sex and with each diet, with no more than one male and one female derived from a given litter in each group. Assays. Tissues were thawed in 79 volumes of ice-cold 10 mM sodium-potassium phosphate buffer (pH 7.4) and homogenized with a Polytron (Brinkmann Instruments, Westbury, NY). Duplicate aliquots of the homogenate were assayed for ChAT using established procedures (Qiao et al. 2003(Qiao et al. , 2004. Each tube contained 50 µM [ 14 C]acetyl-coenzyme A (specific activity 6.7 mCi/mmol; PerkinElmer Life Sciences, Boston, MA) as a substrate, and activity was determined as the amount of labeled ACh produced relative to tissue protein (Smith et al. 1985). For measurements of HC3 binding, the cell membrane fraction was prepared from an aliquot of the same tissue homogenate by sedimentation at 40,000 × g for 15 min. The pellet was resuspended and washed, and the resultant pellet was assayed by established procedures (Qiao et al. 2003(Qiao et al. , 2004, using a ligand concentration of 2 nM [ 3 H]HC3 (specific activity, 125 Ci/mmol; PerkinElmer) with or without 10 µM unlabeled HC3 (Sigma Chemical Co., St. Louis, MO) to displace specific binding. 
Determinations of nAChR binding were carried out in another aliquot, each assay containing 1 nM [3H]cytisine (specific activity 35 Ci/mmol; PerkinElmer) with or without 10 µM nicotine (Sigma) to displace specific binding. Binding was calculated relative to the membrane protein concentration. Data analysis. Data were compiled as means ± SE. Because we evaluated three neurochemical measures that were all related to ACh synapses, the initial comparisons were conducted by a global analysis of variance (ANOVA; data log-transformed because of heterogeneous variance among regions and measures) incorporating all the variables and measurements in order to avoid an increased probability of type 1 errors that might otherwise result from multiple tests of the same data set. The variables in the global test were treatment (control, parathion 0.1 mg/kg, parathion 0.2 mg/kg), diet (normal, high-fat), brain region, sex, and measure (nAChR binding, ChAT, HC3 binding); the latter was considered a repeated measure because all three determinations were derived from the same sample. Where we identified interactions of treatment with the other variables, data were then subdivided for lower-order ANOVAs to evaluate treatments that differed from the corresponding control. Where permitted by the interaction terms, individual groups that differed from control were identified with Fisher's protected least significant difference test. Significance was assumed at the level of p < 0.05. To ensure that treatment and diet effects could be compared across all groups, we conducted all three assays simultaneously on all samples for a given region and sex, but technical limitations dictated that each region and sex had to be performed in divided runs. Accordingly, the control values for region versus region or for males versus females cannot be compared directly, since each region was assayed separately, as was each sex. However, treatment and diet effects and their interactions with region and sex can be interpreted because these depend solely on the internal comparison to the matched control groups that were run together. In evaluating the magnitude of the changes elicited by parathion administration, we used entire brain regions rather than specific nuclei, which means that even drastic effects on a specific population of neurons show up as smaller changes due to dilution with unaffected areas. Despite this limitation, we found statistically significant alterations for both treatment paradigms in multiple regions.

Results

The global ANOVA identified a main effect of parathion treatment (p < 0.0001) as well as interactions of treatment with all the other variables: p < 0.01 for treatment × sex, p < 0.0001 for treatment × ACh measure, p < 0.007 for treatment × sex × diet, p < 0.05 for treatment × diet × region, p < 0.002 for treatment × sex × ACh measure, p < 0.05 for treatment × region × ACh measure, p < 0.0001 for treatment × sex × diet × ACh measure, p < 0.006 for treatment × diet × region × ACh measure, and p < 0.05 for treatment × sex × diet × region × ACh measure. In light of these interactions and of previous findings of strong sex differences in the effects of neonatal parathion exposure on neurodevelopment (Slotkin et al. 2008b, 2009), we also conducted a lower-order test separately for males and females.
Again, parathion treatment interacted with each of the other variables, but on the whole, the effects in males were more statistically robust than those in females (males: p < 0.0001 for the main effect of treatment, p < 0.0001 for treatment × ACh measure, p < 0.02 for treatment × diet, p < 0.03 for treatment × region, p < 0.01 for treatment × diet × ACh measure, p < 0.002 for treatment × diet × region × ACh measure; females: p < 0.0002 for treatment × diet × ACh meas ure, p < 0.002 for treatment × diet × region × ACh measure). As required by these inter actions, we then separated the results according to region, diet, sex, and ACh measure for presentation. For the control group, the high-fat diet alone had no statistically significant effect on any of the measures when evaluated in a global test (factors of diet, sex, region, ACh measure) or separately for each of the measures. Accordingly, where apparent changes are caused by diet alone in the control group, the overall incidence of such "differences" cannot be distinguished from random; indeed, in the global ANOVA, we found no main effect of diet or interaction of diet × region (exclusive of the interactions with parathion treatment) for males or females for any of the variables. Further, because binding affinity can be influenced by the lipid milieu in which the proteins are embedded (Gudbjarnason and Benediktsdottir 1996;Ponsard et al. 1999), the effects of parathion must be compared to the corresponding, diet-matched control group, whereas interpreting apparent differences in the two control groups is problematic. Parathion had differential effects on the ACh measures in the various brain regions of males and females, and most of these were reversed by the high fat diet. In the frontal/ parietal cortex, neonatal parathion exposure had little or no effect on nAChR binding in males, regardless of whether animals were consuming a normal or high-fat diet ( Figure 1A). In females, parathion evoked a dose-dependent decrement in frontal/parietal cortical nAChR binding that was completely reversed by a high-fat diet. For ChAT activity in frontal/ parietal cortex, males showed significant reductions at either parathion dose; again, the high-fat diet reversed the defects ( Figure 1B). In females, neo natal parathion exposure elicited substantial increases in ChAT in the same region; in this case, the high-fat diet completely reversed the pattern, such that animals exposed to 0.2 mg/kg parathion exhibited significantly lower values than controls. For HC3 binding in the frontal/parietal cortex, neonatal parathion exposure caused significant reductions in both males and females, whereas animals on a high-fat diet did not display any deficits ( Figure 1C). In the temporal/occipital cortex, neonatal parathion exposure elicited significant reductions in nAChR binding in males on the normal diet but not in those on the high-fat diet; we saw no parathion effects in females on either diet ( Figure 1D). Also in this region we observed no significant changes in ChAT with parathion alone or in combination with a high-fat diet ( Figure 1E). For HC3 binding in the temporal/occipital cortex, males showed a significant reduction caused by neonatal parathion; however, for this parameter, the high-fat diet provided no protection ( Figure 1F); females showed no significant effects on HC3 binding. 
In the hippocampus, males exposed to the high dose of parathion displayed significant elevations in nAChR binding that were reversed by the high-fat diet ( Figure 2A); females showed no significant effects of parathion in either dietary group. In animals consuming a normal diet, hippocampal ChAT was significantly reduced at both doses in males and at the low dose in females, but no such changes were seen on the high-fat diet ( Figure 2B). In contrast, hippocampal HC3 binding was unaffected by parathion with or without a high-fat diet ( Figure 2C). In the striatum, nAChR binding was unaffected by parathion exposure ( Figure 2D). However, for striatal ChAT, parathion evoked significant reductions in males but not females, and the high-fat diet was unable to reverse the effect ( Figure 2E). Striatal HC3 binding evinced no significant differences ( Figure 2F). In the midbrain, males on the normal diet showed a significant parathion-induced reduction in nAChR binding that was not seen in animals on the high-fat diet ( Figure 3A); females showed no significant effects. For midbrain ChAT, both sexes showed significant reductions evoked by neonatal parathion exposure, involving the high dose for males and the low dose for females ( Figure 3B); the high-fat diet eliminated both defects. Midbrain HC3 binding was generally unaffected by parathion ( Figure 3C). In the brainstem, neonatal parathion exposure evoked nAChR up-regulation restricted to males, and once again, the high-fat diet completely reversed the effect ( Figure 3D). In contrast, parathion-exposed females but not males showed elevated ChAT ( Figure 3E) and suppressed HC3 binding ( Figure 3F) in the brainstem, and both of these effects were offset by the high-fat diet. We presented the effects of parathion treatments and dietary manipulations on body weights in this model previously ); because we used animals from the same cohort in the present study, the results are not presented here. In brief, parathion alone produced a small (2-3%) but significant elevation in weight at the low dose in males and reductions of about 4% at either dose in females. The high-fat diet alone produced significant increases in body weight for both males (10% increase) and females (30% increase). In males, neonatal parathion treatment did not affect the body weight response to high fat, whereas the dietary effect was diminished at the high parathion dose in females. Discussion Our results clearly indicate that consuming a high-fat diet in adulthood can ameliorate many of the long-term ACh synaptic abnormalities evoked by neonatal parathion exposure. We assessed three ACh measures for each of six brain regions, for a total of 18 sets of determinations for each sex. In males, parathion exposure evoked significant changes in 10 of these measures, of which 8 were reversed by the high-fat diet; in females, 7 of the measures were affected, and all of them were reversed by dietary manipulation, albeit that one measure then became abnormal in the opposite direction (ChAT in the frontal/ parietal cortex). These findings provide a proof of principle that dietary interventions are capable of offsetting the ACh synaptic defects caused by developmental neuro toxicant exposure; future studies clearly need to address whether behavioral performance is similarly restored by dietary manipulation, as is the case for anomalies evoked by genetic manipulations (Teegarden et al. 2008). 
Our results thus open a new avenue for developing general amelioration strategies that may prove useful for diffuse and diverse neurotoxicants. Obviously, the use of a high-fat diet poses serious metabolic problems that may preclude its generalized use. Indeed, in our earlier work, we showed that neonatal exposure to organophosphates evokes long-term changes in metabolic function that contribute to obesity, prediabetes, and cardio vascular risk factors such as elevated serum lipids (Lassiter and Brimijoin 2008;Roegge et al. 2008;. The metabolic abnormalities were exacerbated by a high-fat diet Roegge et al. 2008). It will therefore be important to establish whether there are specific components of the diet that are the key elements responsible for the reversal of ACh synaptic abnormalities that may permit use of a less injurious dietary manipulation. Nevertheless, the effects of parathion on ACh systems comprise an alteration in the "trajectory" of ACh synaptic development and function, rather than representing an outright initial injury that simply continues into adulthood. Indeed, none of these synaptic changes is apparent in the immediate post treatment period (Slotkin et al. 2006a); instead, the effects emerge over an extended time frame ranging from adolescence to adulthood (Slotkin et al. 2008b(Slotkin et al. , 2009. This means that there may be specific developmental "windows" in which a short-term intervention can redirect the trajectory of ACh synaptic develop ment, thus avoiding the need for life-long intervention. Again, this will be an important subject for future studies. The results of the present study also confirm and extend earlier work on the developmental neuro toxicity of parathion that indicated sex-selectivity and regional differences in the effects on ACh systems as well as other neuro transmitters (Slotkin et al. 2008b(Slotkin et al. , 2009. Developmental exposure to , 2005Dam et al. 2000;Levin et al. 2001Levin et al. , 2002Moser et al. 1998;Ricceri et al. 2006;Roegge et al. 2008;Slotkin et al. 2001aSlotkin et al. , 2002Slotkin et al. , 2006bSlotkin et al. , 2008aSlotkin et al. , 2008cSeidler 2005, 2007;Timofeeva et al. 2008a) because of interference with sexual differentiation of the brain (Aldridge et al. 2005;Levin et al. 2001;Slotkin 1999Slotkin , 2004Slotkin , 2005. However, in comparing our results in the present study at 5 months of age to those seen in adolescence (1 month of age) or young adulthood (2-3 months of age) (Slotkin et al. 2008b), we observed that many of the sex-selective differences intensified with time. This is consonant with the fact that, even where an initial injury might be equivalent in males and females, subsequent repair processes are generally greater in females, thus contributing to even further differences in the trajectory of ACh synaptic development and function (Amateau and McCarthy 2002;Hilton et al. 2004;McEwen 2002;Nunez and McCarthy 2003;Slotkin et al. 2007b;Tanapat et al. 1999). Similar effects are likely to play a role in the fact that the high-fat diet successfully reversed all of the parathion-induced changes in females, whereas some of the abnormalities persisted in males. As in a number of earlier studies of the effects of developmental organophosphate exposure (Levin et al. 2002;Slotkin et al. 2008a;Timofeeva et al. 
2008a), we found some effects of parathion that were nonmonotonic, with significant alterations at the low dose but not at the high dose (ChAT in female hippocampus and midbrain, HC3 binding in female brainstem). This likely represents the fact that the higher dose of parathion elicits some systemic toxicity (Slotkin et al. 2006a), which will by itself produce additional changes in ACh function. Additionally, cholinesterase inhibition at the higher dose can provide a positive trophic effect by increasing the levels of ACh in the developing brain (Hohmann 2003;Lauder and Schambra 1999;Picciotto and Zoli 2008;Slotkin et al. 2006b). Indeed, a carefully chosen, small dose of chlorpyrifos can actually enhance some aspects of neuro development while damaging other components (Laviola et al. 2006). In the present study, the highfat diet reversed the anomalies regardless of whether they were non monotonic or monotonic, so the presence or absence of these additional components does not appear to be critical to the ameliorating effect of dietary manipulation. Finally, results of the present study point to an important consideration in the explosive worldwide increase in obesity. It is clear that neuro developmental disorders can influence apparent life style choices, most notably in the increased incidence of drug abuse or cigarette smoking (Deas and Brown 2006;Kandel et al. 1994;Pliszka and Pliszka 2003;Weissman et al. 1999;Wilens 2004). In a recent study Jacobsen et al. (2007) showed that abstinence from smoking in adolescent smokers whose mothers smoked during pregnancy leads to cognitive impairment, whereas those who were born to non smokers showed cognitive improvement upon abstinence from smoking. In other words, where there was pre existing neuro developmental damage from prenatal tobacco exposure, adolescents were able to offset cognitive impairment by smoking; this likely contributes to the higher likelihood of the children born to smoking women becoming smokers themselves (Kandel et al. 1994;Weissman et al. 1999). By the same token, our studies point to the possibility that exposure to develop mental neuro toxicants could contribute to a subsequent preference for a high-fat diet as a way of ameliorating the effects, thus providing an indirect but potentially potent driving force for consuming an unhealthy diet. If this turns out to be true, then our findings point to a potentially important contributory factor in the increased incidence of obesity and diabetes, expanding the public health implications of the "silent pandemic" caused by developmental neurotoxicant exposure (Grandjean and Landrigan 2006). Developmental exposure of rats to chlorpyrifos leads to behavioral alterations in adulthood, involving serotonergic mechanisms and resembling animal models of depression. Environ Health Perspect 113:527-531. Aldridge JE, Seidler FJ, Slotkin TA. 2004. Developmental exposure to chlorpyrifos elicits sex-selective alterations of serotonergic synaptic function in adulthood: critical periods and regional selectivity for effects on the serotonin transporter, receptor subtypes, and cell signaling.
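To make the statistical design described under "Data analysis" more concrete, a log-transformed factorial ANOVA of this general shape could be set up as sketched below. This is only an outline under stated assumptions: the data-frame column names are invented, the three ACh markers are treated here as an ordinary crossed factor rather than as a true repeated measure on the same sample, and the follow-up lower-order ANOVAs and Fisher's protected LSD contrasts are not shown, so it should not be read as the authors' actual analysis pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to hold one row per determination, with hypothetical columns:
#   'value'     - raw ACh marker value (ChAT activity, HC3 or nAChR binding)
#   'treatment' - control / parathion 0.1 mg/kg / parathion 0.2 mg/kg
#   'diet'      - normal / high-fat
#   'sex', 'region', 'measure' - remaining factors in the global design
def global_anova(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["log_value"] = np.log(df["value"])   # log-transform for heterogeneous variance
    model = smf.ols(
        "log_value ~ C(treatment) * C(diet) * C(sex) * C(region) * C(measure)",
        data=df,
    ).fit()
    return sm.stats.anova_lm(model, typ=2)  # ANOVA table with main effects and interactions

# anova_table = global_anova(df)
# print(anova_table[anova_table["PR(>F)"] < 0.05])  # terms involving treatment guide lower-order tests
```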
2014-10-01T00:00:00.000Z
2009-02-03T00:00:00.000
{ "year": 2009, "sha1": "e1b7204a0d0227f27d283a78b1c678ea3c3a0769", "oa_license": "CC0", "oa_url": "https://doi.org/10.1289/ehp.0800459", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e1b7204a0d0227f27d283a78b1c678ea3c3a0769", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
270007043
pes2o/s2orc
v3-fos-license
New giant genus of Parabathynellidae (Crustacea: Bathynellacea): first record of Bathynellacea in an Australian cave. A new genus and species of Parabathynellidae (Crustacea: Bathynellacea), Megabathynella totemensis Camacho & Abrams gen. et sp. nov., is described from the Northern Territory, Australia. This species is the first to be described from an Australian cave. It is a new giant species (4 to 6 mm). The new species displays several unique morphological character states within Parabathynellidae and is the only known species with: more than 12 articles on the antennules, with a short, curved barbed seta on each article from the fifth; eight setae on the last article of the antennae; more than three setae on the mandibular palp; up to 17 articles on the exopod of the thoracopods, without ctenidia but with a strong spine on each article at the base of the external seta; a strong row of paired spines on the latero-external side of the second article of the endopod in all thoracopods; a male thoracopod VIII different from all those known; more than 50 spines on the sympod of the uropod and more than 35 spines on the furcal rami. Specimens of the new species are morphologically different from all known species, but more closely resemble some giant species of the genera Kampucheabathynella (Asia), and Billibathynella and Brevisomabathynella (Australia). Introduction Within the crustacean class Malacostraca Latreille, 1802, the family Parabathynellidae Noodt, 1965 has a worldwide distribution, with 45 genera and 220 species recognized thus far (Camacho & Leclerc 2022). According to Abrams et al. (2013), 47 species were known from Australia (including two in Tasmania), distributed in 10 genera across mainland Australia. Since then, one more genus and species have been described, Lockyerenella danschmidti Camacho & Little, 2017 from Queensland, together with two new species of Hexabathynella Schminke, 1972 from Rottnest Island (Western Australia) (Perina et al. 2023a) and two new species of Atopobathynella Schminke, 1973 from the Pilbara region (Perina et al. 2023b). In addition, a recent paper by Matthews et al. (2020), using a combined approach of molecular species delimitation methods, identified between eight and 24 putative new species from remote subterranean habitats in the Pilbara region of Western Australia (WA). In the last 10 years, the delineation of parabathynellid species using molecular methods has focused on Queensland (QLD) (Cook et al. 2012; Little et al. 2016; Little & Camacho 2017), South Australia (SA) (Abrams et al. 2012, 2013), New South Wales (NSW) (Asmyhr & Cooper 2012), and the Yilgarn region, WA (Guzik et al. 2008; Abrams et al. 2012, 2013). Therefore, the diversity of species is much greater than the 52 species formally described and assigned to 11 genera.
As previously noted by Cho & Humphreys (2010), each new finding not only increases the known diversity in the area, but also the morphological novelties. The description of this new genus does not disappoint in this sense, since it presents several morphological characters that have not been previously observed in the group and that expand the known morphospace of the family Parabathynellidae. It is a new giant genus and species, more than 4 mm long. Other genera that contain giant species include the Australian genera Brevisomabathynella Cho, Park & Ranga Reddy, 2006 and Billibathynella Cho, 2005, and the Cambodian genus Kampucheabathynella Cho, Kry & Chhenh, 2015. Here, we consider species larger than 4 mm to be 'giant' and those of 2.5-4 mm to be 'large'. Unfortunately, despite multiple DNA extraction attempts on five specimens, we were unable to obtain sequences for this new genus; therefore, we were not able to place them in current global parabathynellid molecular phylogenetic frameworks (Little et al. 2016; Matthews et al. 2020; Camacho et al. 2021; Perina et al. 2023a, 2023b). Study area Totem Pole Cave is located in the Pungalina Karst area (-16°48′0.9972″S, 137°27′1.0002″E) on the eastern margin of the Northern Territory, near the border with Queensland in the Gulf of Carpentaria. The karst is a Precambrian dolomite, and stromatolite fossils are common throughout the karst area. Specimens were collected from a shallow pool of clear water (15 cm deep and 1 m²) in a cave passage approximately 150 m from the entrance and approximately 10 m above the phreatic zone. Numerous pools of water were present in the cave above the phreatic zone due to the recent cyclone and associated rainfall in the area. Specimens were collected by hand using forceps and placed into 70% ethanol. The small pool was densely populated, with an estimated 80-100 individuals. Several adjacent pools of similar size contained individuals, but in lower abundance. No other specimens were observed in the remainder of the cave. Morphological study Ten specimens used for the study are listed in Table 1. Two entire specimens (mounted completely) and eight completely dissected ones (all body appendages mounted separately) were preserved as permanent slides (special metal slides, with glycerine-gelatine stained with methylene blue and paraffin as the mounting medium; see Perina & Camacho 2016). Anatomical examinations were performed using an oil immersion lens. We used the terminology proposed by Serban (1972, 1980, 1985) to describe the mandible and male thoracopod VIII.
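The size conventions adopted above ('giant' for species longer than 4 mm, 'large' for 2.5-4 mm) are simple to encode when tabulating body lengths across genera. A minimal Python sketch is given below; the helper merely restates the thresholds from the text, the holotype and allotype lengths are those reported later in the description, and the juvenile entry is a made-up placeholder.

```python
# Size-class helper reflecting the conventions used in this paper:
# body length > 4 mm -> "giant"; 2.5-4 mm -> "large"; otherwise "typical".

def size_class(body_length_mm: float) -> str:
    """Return the informal size category used in the text."""
    if body_length_mm > 4.0:
        return "giant"
    if body_length_mm >= 2.5:
        return "large"
    return "typical"

specimens = {
    "holotype (male)": 5.9,        # length reported in the description below
    "allotype (female)": 6.2,      # length reported in the description below
    "hypothetical juvenile": 2.1,  # invented placeholder value
}

for label, length in specimens.items():
    print(f"{label}: {length} mm -> {size_class(length)}")
```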
Diagnosis AI multisegmented with more than 12 articles with terminal aesthetascs present on fi fth to last segments.AII 7-segmented.Labrum fl at, free margin dentate.Md pars molaris (molar process) protruding; proximal tooth present; mandibular palp bi-segmented with several setae (more than six).Proximal endite of MxI with four claws; ten to 11 claws present on distal endite.Exopod of ThI to ThVII multisegmented (more than 11 articles); epipod present from ThI to ThVII; fi rst and second article of endopod of ThI to ThVII each with a plumose dorsal seta.Male ThVIII unusually large, twice as long as wide; basal region of penial complex supports three independent lobes: inner lobe, outer lobe and dentate lobe; dentate lobe as inner lobe, rounded and as long as the outer lobe; very long curved outer lobe, as a fi nger; large basipod with a seta and a pronounced crest-like on internal face; dentate small exopod; endopod large with two setae and with the distal end rounded with four teeth and two setae of different sizes, the longest barbed.Female ThVIII one-segmented, almost triangular with three long barbed terminal setae.Pleopods absent.Inhomonomous sympod of uropod, with more than 50 spines, distalmost 25% longer than the rest; endopod with spines, the two terminal spines stronger than the rest, and setae, two barbed apical setae and one subterminal and two basal plumose setae; exopod with more than 16 setae.Furca very enlarged with more than 35 spines.Giant species, more than 4 mm long. Differential diagnosis Megabathynella Camacho & Abrams gen.nov.bears some resemblance to the giant genera Billibathynella, Brevisomabathynella and Kampucheabathynella (see Table 2).The new genus has more than 12 segments in AI, while the other genera have seven or 10; however, the new genus shows few teeth on the labrum, 12 to 14 with few lateral denticles, less than the other giant genera (between 12 and 63 teeth); the new genus presents a two-segmented mandibular palp with more than six setae, the general condition is one or up to three setae found in some species of Billibathynella and Brevisomabathynella; the new genus has up to 18 articles in the exopod of the Ths and always more than 10, while the largest number found never exceeded 13 articles.The maximum number of spines found in the furca of giant species is 23, while the new genus always has more than 35.Similarly, the sympod of the uropod of the new genus has more than 50 spines, in comparison, the maximum counted to date is 28 spines in species of Billibathynella.The male ThVIII of the new genus also shows several signifi cant differences: it is very large, with a very special endopod, the rounded and protruded distal end with small teeth and setae is totally novel. Etymology This prefi x 'Mega-' comes from the Greek 'μέγας', which means 'big'.The name Megabathynella refers to the unusually large size of the new genus. 
Distribution Australia, Northern Territory (present study).ThVII; fi rst and second article of endopod of ThI to ThVII each with a plumose dorsal seta.Male ThVIII unusually large, twice as long as wide; basal region of penial complex supports three independent lobes: inner lobe, outer lobe and dentate lobe; dentate lobe as inner lobe, rounded and as long as the outer lobe; very long curved outer lobe, as a fi nger; large basipod with a seta and a pronounced crest on internal face; dentate small exopod; endopod large with two basal setae and with the distal end rounded with four teeth and two setae of different length, the longest barbed.Female ThVIII one-segmented, almost triangular with three long barbed terminal setae.Pleopods absent.Inhomonomous sympod of uropod, with many long spines (56-85), distalmost 25% longer than the rest; endopod with four to six spines, the two terminal spines stronger than the rest, and setae, two barbed apical ones, one subterminal and two basal plumose setae; exopod with 15 to 18 setae.Furca very enlarged with 37 to 46 spines, the two distal ones twice as long as the rest.Anal operculum not pronounced.Giant species, more than 4 mm long. Etymology The specifi c name 'totemensis' is dedicated to the Totem Pole Cave where the new species was found. Type locality Ten specimens, six females and four males, were collected at the type locality by T.A. Moulds on 24 Jul.2006. Description MEASUREMENTS AND APPEARANCE.Body total length of male holotype 5.9 mm, allotype 6.2 mm.Body elongated (Figs 1-2), segments widening towards posterior end ~8 × as long as wide.Head slightly longer than broad.All drawings are of the holotype, except ThVIII (Fig. 5C), labrum (Fig. 3D), mandibular palp (Fig. 3F) and ThI (Fig. 4B) which are of the allotype.ANTENNULES (Figs 3A, 6A) (AI).Almost 35% longer than AII.13-segmented; fi rst three articles as long as the next six and slightly longer than the last four articles combined; fi rst article longest, similar to third, second slightly shorter, but just as thick; the fourth article is the shortest; the fi fth to eighth articles are similar in length, short and thick, and the last fi ve are longer and narrower than all the previous ones.First article with three smooth dorsal setae and three plumose ones (Fig. 3A, I).Second article with four plumose setae and eight smooth setae on inner margin.Inner fl agellum on third article, small and almost square with three smooth setae.Third article (Fig. 3J) with two smooth and one plumose outer lateral setae and eight smooth setae of different sizes on inner margin.Fourth article with two plumose setae on outer distal apophysis, one more dorsal plumose seta and one small, plumose stub on dorsal margin.Fifth to ninth articles (Fig. 3A, K) with three smooth setae, one strong and short curved seta with long setules and two aesthetascs, of similar size, on inner margin, and one smooth dorsal seta.Articles 10 to 12 with similar setation as previous ones but with three terminal aesthetascs instead of two.Last article with three subterminal aesthetascs and four terminal smooth setae. MAXILLULES (Figs 3G, 6G-H) (MXI). Proximal endite with four unequal strong claws with strong setation; distal endite with ten claws (as spines), along inner edge, one apical smooth and very large and strong, remaining claws denticulate with strong row of denticles and setae, basal claw smallest, half length of others; three smooth setae subdistally on outer margin of endite as fi gured. MAXILLAE (Figs 3H, 6I ). 
Four-segmented, setal formula: 8, 16, 31, 8. 4A-B, 7A-D) (THI TO VII).Well developed; length gradually increasing from ThI to ThIV (Figs 1-2), ThV to VII similar in length; small epipod on ThI (Figs 4A, 7A) to VII, each about ⅓ of length of corresponding basipod.Basipod of all Ths with several (three to fi ve) distolateral, barbed setae.Exopods multi-segmented, 14 to 17 articles; number of exopodal articles of thoracopods I to VII: 14-16-16-17-17-17-16; basal article very long and wide with several barbed setae plumose at base on each side; following eight articles almost square and last ones elongated, all with barbed seta (plumose at base) on each side and one strong spine at base of inner seta.Endopod 4-segmented; fi rst article short, second and third long and similar in length and fourth article reduced with two smooth strong spinulose claws and two simple setae; fi rst article with distal plumose inner seta as second article; second article with inner and outer barbed setae, and cluster of pairs of strong spinules along inner margin from base to fi rst seta; third article with barbed setae on inner margin and one barbed distoventral seta.All thoracopods similar to ThI, but varying in number of articles of exopod, number of setae in basal article, number of setae on basipod and on fi rst three articles of endopod.Size ratios between articles, of exopod and endopod, similar to those of ThI (Fig. 4A).B, 8A-J, 9F).Unusually large, twice as long as wide; basal region of penial complex supports three independent lobes: inner lobe, outer lobe and dentate lobe; inclined dentate lobe, as long as outer lobe, with rounded distal end and big teeth (Fig. 5A-B); inner lobe rounded and longer than outer lobe; very long (6 × as long as wide) and curved outer lobe (as a fi nger) (Fig. 5A, 8B) that covers end of dentate lobe and not extending beyond basipod; large basipod with seta and pronounced crest-like protuberance (Fig. 5A-B, 8C) on internal face, and small almost triangular exopod with several denticles; endopod large with two basal setae and with distal end rounded, like skullcap or "helmet" (Figs 5A-B, 8F-G, J) with four teeth and two setae of different length, longest one barbed. THORACOPODS (Figs PLEOTELSON.Small ventral plumose seta.Anal operculum not pronounced, almost fl at. FIRST PLEOPODS.Absent.5D-E, 7F-G).Sympod 6.5 times as long as wide, 15% longer than exopod and almost three times as long as endopod, with 59 barbed spines subequal, except distalmost spine, slightly longer than rest.Endopod 2.5 times as long as exopod, with three spines along distal half of inner margin and two stronger and enlarged distal ones; distolateral angle of ramous with one subterminal plumose and two terminal barbed setae; basal part of ramous with two plumose setae of different length.Exopod with 18 barbed setae, 14 lateral and four terminal. UROPODS (Figs FURCAL RAMI (Figs 5F,7F).Each ramus almost triangular, very enlarged, with 37 barbed spines, six distal ones of different sizes, four slightly longer than fi rst 31 and two terminal ones two times as long as others; two plumose setae of different lengths oriented dorsally, shorter one without reaching tip of terminal spines.11. 
An analysis of the oligomerization of appendages in relation to the size of the species, as a whole, would be required to determine whether it can be explained by simple allometry; however, this is outside the scope of this study The new species is most similar overall to species in the genera Billibathynella, Brevisomabathynella and Kampucheabathynella and shares characters with some species of the other Australian and Asian genera as shown in Table 2. The new species has the highest number of articles known to date in AI, viz.15.Kampucheabathynella and Sinobathynella have the next highest number of articles (10).The new species is unique in having a very strong, recurved, short, thick and plumose seta from the fi fth to the penultimate article (Fig. 3A, K). The new genus has numerous claws in the distal endite of the MxI, up to 11, and only some species of Billibathynella come close, with 10 claws, but the most common state is seven claws. Brevisomabathynella uramurdahensis has a great profusion of setae in MxII but not as many as the new species. The number of articles of the Th exopods of Megabathynella totemensis gen.et sp.nov. is unusually high (18 in some Th) in comparison with other genera, for example some species of Billibathynella, Brevisomabathynella and Kampucheabathynella have 10 or 12 articles.Sinobathynella decamera Camacho, Trontelj & Zagmajster 2006 also has a relatively high number of articles of the exopod of some Ths (9-10).The new species has an epipod on all Ths, as all species of the genera Billibathynella, Brevisomabathynella, Arkaroolabathynella and Lockyerenella, and some species of Allobathynella and Notobathynella, but the absence of the fi rst and second epipods is common in Asian and Australian genera as in Kampucheabathynella, and even in the third epipod as occurs in some species of Paraeobathynella and Allobathynella.Megabathynella totemensis has several setae on the basipod of all Ths, which is also found in Kampucheabathynella.However, the new species has a large number of setae on the fi rst article (Table 1) of the exopod of all Ths, which makes it diffi cult to see where the second article begins.In other genera, there are few setae, 1 or 2 on each side only.Another exclusive character is the lack of the ubiquitous ctenidia at the base of the setae (which are plumose at the base and barbed elsewhere) of all articles of the exopod that all other species of the family Parabathynellidae show; instead, it presents a strong spine at the base of the outer seta of all articles, at the inner margin.The new species shows a cluster of pairs of strong spinules along the inner margin from the base to the fi rst seta on the second article of the endopod of the Ths; both the second and third articles can have barbed setae on the inner face in a variable number (Tables 1-2); in Brevisomabathynella uramurdahensis, B. magna and some other species of this genus and in Notobathynella octocamura Camacho & Hancock, 2011 there are also barbed setae on the inner face of the second article and some species of Allobathynella have a small seta, as a spinula, but never on the third article as in the new species. The female ThVIII of the new species is very large with three distal setae as only occurs in Kampucheabathynella khaeiptouka Cho, Kry & Chhenh, 2015.The male ThVIII (Fig. 
10C) is distinctive in comparison with all other genera, as it is much larger and more elongated than that of any known species 10).Some examples of male ThVIII in Australian and Asian species are shown in Figs 9 and 10 for comparison.Perhaps the greatest resemblance of this appendage is to the genus Lockyerenella (Fig. 9C) due to the similarities of the long, curved, fi nger-shaped outer lobe and the crestlike protuberance on the internal face of the basipod, smaller than those of Megabathynella totemensis gen.et sp.nov.Due to its elongated appearance, male thoracopod VIII resembles Australian genera (Fig. 9) more than Asian genera (Fig. 10), in which the general appearance is more square.However, the gigantic size and unique morphology of the endopod of the male ThVIII of the new species makes it completely different from all other male ThVIIIs of other species. The absence of the fi rst pair of pleopods in the new species is quite common in most species of Parabathynellidae. No species known to date shows such a large number of spines on the sympod of the uropod, nor on the furca as the new species.However, some species of Billibathynella outnumber the new species in the number of setae on the uropod exopod (Table 2). The unique combination of characters along with the exclusive characters we have listed justifi es the erection of a new genus with similarities to Australian and Asian genera.Elucidating whether the oligomerization of AI and the Th exopods is correlated with size is beyond the scope of this study but will be an interesting question for future comprehensive analyses of the family Parabathynellidae. Distribution of bathynellaceans of northern Australia The diversity and distributions of bathynellaceans of the Northern Territory are poorly known, with only a single described species, Atopobathynella readi Cho, Humphreys & Lee, 2005, recorded in the Ngalia Basin, approximately 941 km south-west of the type locality of Megabathynella totemensis gen.et sp.nov.(Fig. 11).However, there are records of a likely new species of possibly Brevisomabathynella from the Cambrian Limestone Aquifer in the Beetaloo Sub-basin (Oberprieler et al. 2021) (Fig. 11).The region has not been extensively sampled for stygofauna and it is likely that new taxa will be collected with further survey.Far west of Totem Pole Cave, near the border with Western Australia, a rich parabathynellid assemblage has been recorded from the eastern Kimberley, with fi ve species of Kimberleybathynella occurring in alluvial and regolith substrata in the Ord River catchment (Cho et al. 2005), and multiple undescribed species have been collected from environmental impact surveys (Humphreys 1999;Bennelongia 2012). As new species continue to be discovered and described in Australia and around the world, it is interesting to observe that large and giant species occur on every continent (Table 3) (Fig. 
11).Although almost half of them (15 species) have been recorded from Australia, fi ve occur in Europe (Portugal, Spain and France) and fi ve occur in South Korea.One large species has been collected each from Thailand, China, South Africa, USA, Madagascar, Morocco, Cambodia and Japan to date.It is likely that numerous new species will be discovered in these and other countries with further survey of prospective habitats, as is the case in Australia.Future studies could explore the factors that lead to this unusually large size and proliferation of articles and setae observed.It would also be interesting to investigate the paleobiogeographic aspects that have led to this worldwide distribution of genera with large species on all continents when more comprehensive and robust phylogenies based on both molecular and morphological data will have become available. Rect Table 2(continued).Character variability in different genera of the 'big' and 'giant' (2.5-4 mm and > 4.0 mm in length respectively) Australian and Asian Parabathynellidae and genera with more than four-segmented exopod of some thoracopods in the world. MANDIBLES (Figs 3E, 6D-F) (MD).Pars distalis with six well-developed teeth and six small and one triangular strong proximal tooth as in fi gures 3E and 6D; pars molaris (molar process) very big, with row of 18 claws, all strong and denticulate, with two more distal setulose ones, joined basally (Figs 3E, 6E); exceptional two-segmented mandibular palp with seven setae (Figs 3E, 6D, F) not exceeding distal part of Md. Table 1 Zeiss interference microscope equipped with a drawing tube.Material is deposited in the Museum and Art Gallery of the Northern Territory (MAGNT) and Western Australian Museum (WAM) collections and two specimens of the type series are deposited in the Arthropod Collection of the Museo Nacional de Ciencias Naturales de Madrid (Spain) (MNCN). Australia Results Systematic account Camacho & Abrams gen.et sp.nov.
2024-05-26T16:01:32.552Z
2024-05-22T00:00:00.000
{ "year": 2024, "sha1": "3b74d0e814d809b07d5d73576be5f1e4f1c35a8a", "oa_license": "CCBY", "oa_url": "https://europeanjournaloftaxonomy.eu/index.php/ejt/article/download/2545/11485", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e048d602096fb920eda21855bafe746f8bbc22cb", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [] }
239422781
pes2o/s2orc
v3-fos-license
ESTIMATED EFFICIENCY OF RADIAL DRILLING TECHNOLOGY FOR TOURNAISIAN PRODUCTION ZONES OF THE PERM REGION The reservoirs of the Tournaisian deposits in the Perm Region are characterized by high heterogeneity of the geological section, low thickness and relatively low well productivity. For the rational development of such deposits, intended to enhance the oil recovery ratio, various well interventions are required. The paper compares the effectiveness of such low-cost operations as acid treatments and radial drilling. It further considers the radial drilling technology in detail. Radial drilling operations at the Tournaisian deposits of the Perm Region oilfields are analysed. To assess the projected oil production, a graph of the actual decline in the incremental oil production from the time of operation over the years has been plotted. The paper presents the methods of estimating well intervention effectiveness. The main method is hydrodynamic modelling; however, it has significant drawbacks in predicting the effectiveness of radial drilling technology. In the authors' opinion, the optimal well intervention effectiveness forecast methods are statistical methods taking the combined impact of geological and technological parameters into account. In the course of the research, Student's t-test was used to identify the main geological and technological parameters that have an impact on the efficiency of radial drilling. With the identified parameters and the linear discriminant analysis method, a predictive model for estimating the increase in oil production for the first year after the operation was built. For the wells selected as the training and test samples, the error of the oil output growth forecast for the first year after the operation was assessed. The errors in the forecast calculations were then compared to the forecast error estimation obtained using the hydrodynamic model. The output of the study is the developed method for determining the total incremental production using radial drilling technology.
Introduction In the structure of the remaining oil in place in the Perm Region, Tournaisian and Fammenian carbonate deposits account for the major part, 44 % (222.7 out of 506.2 million tons of oil). In the platform part of the Perm Region, these deposits are developed at 130 production zones, with over 80 % of the deposits located solely in the Tournaisian strata. Hereinafter, all the Tournaisian and Fammenian deposits in the studied territory are referred to as the "T-formation". The permeability values used for estimation of the oil reserves in the T-formation range from 3 to 676 mD, while formation oil viscosity varies from 0.8 to 87 mPa·s. Characteristics of Tournaisian reservoirs of the Perm Region oilfields In the platform part of the Perm Region, the T-formation reservoirs are mainly represented by organogenic and detrital, fine detrital and lumpy algal limestones. The core material from the oil-saturated part of the geological section has been studied, in particular, using X-ray tomography to visualize the structure of the rock porous space [1][2][3]. In carbonate reservoirs, X-ray tomography is used as a method of visualizing the cavernosity and fracturing of rocks in high definition [4][5][6][7]. The core material analysis, combined with hydrodynamic well testing performed with the method described in papers [8,9], has shown that the Tournaisian reservoirs in the platform part of the Perm Region mainly belong to the porous (granular) type. The presence of fractures is generally untypical of these production zones. Core analysis has shown that the open porosity (K p) of the Tournaisian reservoirs falls within a broad range (from 8 to 19 %), with an average value of 12 %. Low-porosity reservoirs (K p < 12 %) belong mainly to the granular type (Fig. 1, a), while in more porous ones the void space also includes dissolved cavities (Fig. 1, b). The reservoirs of the Tournaisian deposits are characterized by high heterogeneity of the geological section, low thickness and, as a rule, not very high well productivity. Field experience shows that at all stages of oilfield development the permeability of the bottomhole area deteriorates due to the sealing of rocks, increasing water saturation followed by the reduction of phase permeability for oil, salting and paraffin buildup [10]. In such conditions, the most cost-efficient well intervention techniques for carbonate reservoirs are acid treatment and radial drilling [11]. Acid treatment is one of the most common well interventions used on carbonate reservoirs to recover and improve the fluid flow in the bottomhole area, increasing the productivity of production wells and the injectivity of injection wells [12]. The radial drilling technology, in this context, means drilling small-diameter horizontal radial channels with a jet nozzle.
The channel length does not exceed 100 m, while their number usually ranges from 2 to 4. The radial drilling technology is considered not only to improve the productivity of the wells but also to engage the previously undrained reserves into the production, materially expanding the displacement process [13,14]. Comparing the efficiency of acid treatment and radial drilling technologies In the course of study, the technological efficiency of acid treatment and radial drilling in respect of the Tournaisian formations of the Perm Region was compared. The results of employing acid treatment to the carbonate formations in the Perm Region are provided in paper [15], while the summary of international experience is provided in papers [16][17][18][19][20]. In foreign literature, acid treatment efficiency studies focus more on the composition of rocks, e.g. carbonate content, clay particles content, cement type and port space structure [18][19][20]. Table 1 presents the results of comparing the efficiency of acid treatment and radial drilling for the Tournaisian formations of the Perm Region fields. The number of wells covered by the analysis: acid treatment -148 wells, radial drilling -115 wells. The comparison was carried out using Student's t-test. Table 1 shows the average values of the efficiency indicators (for acid treatment and radial drilling, respectively), the t-score values and the achievable level of p. In respect of the considered indicators of the Tournaisian formations, the efficiency of the radial drilling technology is statistically significant (p < 0.05) and exceeds that of the acid treatment. Thus, the technological effect of the radial drilling varies in a broad range and mostly depends on the geotechnical conditions of the method implementation. At the same time, by this moment there are no formalized efficiency criteria for the radial drilling technology [21]. There is a number of purely technical aspects that complicate the implementation of radial drilling technology. First of all, the problems were caused by significant hydrodynamic loads on jet nozzles (up to 100 MPa) arising from high-speed jets (up to 400 m/sec) of flush fluid. That creates a large area of penetration of the emulsion filtrate with non-stationary rheological characteristics [22] in the area of destruction. The second important technical issue is the impossibility to exercise prompt control over the trajectory of the channels due to low bending stiffness of the assembly and different densities of the jet-broken rocks. In exceptional cases, this can result in the preformation by the channels of the water formations, which is absolutely inadmissible. All of the above points to the necessity of careful selection of the wells based primarily of their geological and physical characteristics. Developing statistical models for projecting the efficiency of radial drilling for the Tournaisian formations of the Perm Region Due to their fracturing, the Tournaisian formations feature significant incremental production during the first year after intervention. However, the fractures tend to collapse in the production process due to the pressure drop, resulting in a sharp decline of the effect [23-26]. Both Russian and foreign literature pays much attention to the impact of fracturing on the field development efficiency [27][28][29][30]. In paper [31], a forecast crossplot was developed based on the statistical analysis of technological effect's dynamics in respect of the radial drilling. For this crossplot (Fig. 
2), the wells were divided into classes based on incremental production for tracking the effect decline for each well group. Thus, knowing the value of incremental production for the first year, the decline of production with time can be forecasted by means of extrapolation. When used at the stage of selecting candidate wells for the radial drilling technology, this crossplot requires the assessment of the amount of incremental oil production in the first year after the well intervention (∆q n ), which is decisive for the production dynamics over time. At the moment, the primary method for assessing the effectiveness of well interventions for oil and gas producing companies of the Perm Region, including radial drilling, is the geological and hydrodynamic modelling [32-33]. At the same time, it is quite hard to develop a fluid flow model of the carbonate reservoirs that would accommodate the impact of the fracturing component on the fluid flow, which materially impairs the models' reliability [34 -36]. As applicable to the radial drilling technology, the efficiency of geological and hydrodynamic modelling continues to decline as radial channels of 100 m will be represented by one or two cells in the model. This being said, even this cell of the model can be characterized only in subjective and quantitative terms by the channel connectivity with the formation (φ) and a screen factor (S). In practice, the φ-value is based on the necessary production of fluid after the well intervention, and the skin-factor is introduced based on the well tests, though they do not provide reliable results for all the wells. Therefore, the total error magnitude of this approach is quite high. A possible alternative to forecasting the efficiency of well interventions can be the use of displacement characteristics when the technological parameters are extrapolated with account of possible incremental production. However, in this case, at least the impact of geological and technical parameters of the given wells is not taken into account, which will obviously reduce the efficiency of well interventions for them. Given the above, in the opinion of the authors, the statistical methods are more suitable for the evaluation of the radial drilling efficiency. They are applied to the production formations treated with the radial drilling technology to identify the geological and technical indicators making the greatest impact on the well intervention efficiency. Such an approach is suitable for express efficiency analysis of the technology, while the forecasting model makes it possible to rank the wells by the priority of radial drilling use [37][38][39][40]. The papers [41,42] describe the methodology and provide successful examples of the statistical modelling results as applicable to the forecast of various well interventions. The incremental oil production during the first year after the radial drilling (∆q n ) is ultimately determined by a set of indicators of well interventions. The method chosen to evaluate the statistical impact of the indicators of the well intervention efficiency is linear discriminative analysis (LDA). The methodological aspects of evaluating the technological effect of well interventions using LDA method are discussed in paper [42]. For the LDA implementation, the wells that had not undergone acid treatment before radial drilling (for 15 years). 
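The well screening introduced here, and spelled out in the next paragraph, rests on a two-sample comparison: first-year incremental production is compared between wells with and without acid treatment in the 15 years preceding radial drilling, using Student's t-test and the p < 0.05 threshold applied throughout the paper. A minimal sketch of such a check is shown below; the production values are invented placeholders, not the study's field data.

```python
# Sketch of the two-sample screening described in the text: first-year
# incremental oil production dq_n (t/day) compared between wells without and
# with acid treatment in the 15 years preceding radial drilling.
# The arrays are made-up illustrative values, not the field dataset.
import numpy as np
from scipy import stats

dq_no_prior_acid = np.array([6.1, 7.4, 5.8, 9.0, 4.9, 6.6, 8.2])
dq_prior_acid    = np.array([5.2, 6.3, 4.1, 5.9, 4.4, 5.0, 6.8])

t_stat, p_value = stats.ttest_ind(dq_no_prior_acid, dq_prior_acid)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:
    print("Significant difference: exclude (or correct) previously acid-treated "
          "wells before fitting the forecast model.")
else:
    print("No significant difference detected in this toy sample.")
```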
The statistical analysis based on Student's t-test has shown a significant difference in the efficiency of the radial drilling without prior acid treatment and after it both in respect of incremental oil production (t-test = 2.03; p = 0.04), and average daily growth (t-test = 2.37; p = 0.02). For the Tournaisian formations, the acid treatment performed within 1-15 years prior to the radial drilling, reduce the ∆q n value by 0.8 tons per day on average. As preliminary acid treatment takes away a part of the potential increments production, such wells were excluded from the statistical analysis. Moreover, wells that underwent radial drilling in the new perforation interval were also excluded from the analysis scope, since in this case the increment is achieved primarily by involving new reservoir pay zones. As a result, the analysis covered 41 wells treated with the radial drilling technology in the current perforation interval. At the same time, the analyzed wells were also classified by the efficiency of the radial drilling results based on the cutoff value (∆q n ). With account of the geological and technical characteristics and the calculated increment range, the cutoff efficiency value was set to 5.5 tons per day. As a result, 12 wells were classified as having less effective well interventions (∆q n < 5.5 tons per day), and 27 wells were identified to have more effective well interventions (∆q n > 5.5 tons per day). The LDA method foresees finding linear combinations of the attributes that define the two classes best. As a result of LDA, the parameters that have the highest impact on ∆q n were identified and the following Linear Discriminant Functions (LDF) was obtained: where q n is oil production before the intervention; K sand is net-to-gross ratio; μ n is oil viscosity; ρ chan is channel density; χ is piezoconductivity; h intl is the oilsaturated layer thickness; S is skin factor. In general, the LDA method correctly identifies 25 out of 27 formations (93%) with an increment below 5.5 tons per day and 9 out of 14 (64%) with the increment above 5.5 tons per day. In this LDF, the highest impact on the classification results was rendered by the net-to-gross ratio K sand (the standardized rate is R st = 0.90), piezoconductivity χ (R st = 0.66) and oil production before the intervention q n (R st = -0.46), and to a lesser extent, by the sand-to-shale ratio K shale , oil viscosity µ, channel density ƍ chan , skin factor S, and specific thickness of the oilsaturated layer h intl . The higher is the Z value in this LDF, the higher is the probability of a successful well intervention (∆q n > 5.5 tons per day). To proceed to the probabilistic estimate, the dependence of the probability of classification as a more effective well intervention P(Z) on the estimates characteristic Z (Fig. 3) shall be used. After the probability for the training wells samples has been calculated, it shall be compared with the actual oil production growth. Fig. 4 shows the dependence of ∆q n on the probabilistic estimate P(Z), which can be approximated with the following linear function: Δq n = 6,2·P(Z) + 3,216 with r = 0,99. Based on the dependency ∆q n = f(P(Z)), the average daily growth for the first year after the radial drilling can be forecasted. To verify the method reliability, the values of ∆q n were calculated for the training sample, and then, in comparison to the actual values, the estimated errors were determined. Fig. 
5, а presents the graph of minimal disparities for the LDA method compared to those of the calculations based on hydrodynamic models. The comparison was carried out for the training sample wells. When the LDA-based methodology is used, the forecast disparities range from -5.6 (overstatement of ∆q n ) to +4.9 tons per day (understatement of ∆q n ). At that, over a half of the wells (68% -28 wells) got into the error interval from -2 to 2 tons per day. The class with a disparity of over 4 tons per day comprised 4 wells. The error distribution range of the standard currently used methodology (see Fig. 5, а) is materially broader, ranging from -10 to +28 tons per day, while the number of forecasts with the minimal error (from -2 to 2 tons per day) is significantly less than 18 wells (44 %). There were 7 wells (17% of the total number of wells) falling in the disparity interval of over 4 tons per day. To control the forecast results, an additional estimate of the errors for the validation sampling was performed, which included the calculation of ∆q n values for the wells not involved in the development of a statistical model. Those were the wells than underwent the acid treatment prior to the radial drilling. Considering the adjustment explained above (0.8 tons per day), the actual data were compared to the forecasted values for the wells of the validation sampling. The disparities in the actual and forecasted values of ∆q n , estimated according to the LDA method the hydrodynamic model calculations for the validation sample are provided in Fig. 5, b. According to the method proposed above, the maximum number of disparities between the actual and forecasted ∆q n values(27 wells) fall in the range from -2 to 2 tons per day, while the maximum error ranges from -6 to +4 tons per day. The class with the disparity of over 2 tons per day included 15 wells with only 2 of them falling in the range of disparities exceeding 4 tons per day (see Fig. 5, b). The results of the comparison of the estimations based on the LDA statistical method for the T-formation should be admitted to be very good. When compared to the actual results of the hydrodynamic modelling-based standard forecast method, the disparity range significantly expands (from -20 to +8 tons per day), with an evident tendency to overstate the projected production for the previously acid-treated wells. The minimal disparity interval (from -2 to +2 tons per day) includes 20 wells, which is much worse than fo the LDA-based forecast. The disparities exceeding 4 tons per day relate to 10 wells (compared to 2 wells found with the LDA method), with the three well interventions of which showing the disparity over 6 tons per day (see Fig. 5, b). Thus, the comparison of disparities for the LDA-based and standard methods performed on the Tournaisian reservoirs of the platform part of the Perm Region shows a significantly higher efficiency of the former. The statistical approach significantly improves the accuracy of the oil production growth forecast for the first year after the radial drilling. The developed forecasted statistical models assessing the efficiency of the radial drilling technology were provided to an oilproducing company. Conclusion 1. The paper provides a characteristic of the Tournaisian deposits of the platform part of the Perm Region. The reservoirs are characterized by high heterogeneity of the geological section, low thickness and, as a rule, relatively low well productivity. 2. 
The efficiency of the acid treatment and radial drilling methods for the Tournaisian formations of the Perm Region fields was compared. The efficiency of the radial drilling technology was found to be significantly higher than that of the acid treatment, which is statistically confirmed. 3. A graph showing the dynamics of the effect decline over time after radial drilling for the Tournaisian formations of the Perm Region was developed. 4. The advantages and disadvantages of radial drilling technology have been analyzed. Among the key advantages of the technology are the relatively low cost of the intervention and the generally satisfactory growth of oil production. The key drawbacks include the impossibility of controlling the channel trajectory during drilling and an unstable effect due to the lack of formalized applicability criteria. 5. The radial drilling technology efficiency forecasting methods were analysed, and the selection of the LDA-based statistical method was substantiated. The main geotechnical parameters influencing the increment of oil production during the first year after the radial drilling were identified. 6. A forecast model for the evaluation of the incremental oil production for the first year after the radial drilling was developed. The developed method was tested on the wells of the training and validation samples. The forecasts of the radial drilling efficiency based on the statistical approach were concluded to be more accurate as compared to the standard method.
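The forecasting workflow summarised in the conclusions can be prototyped in a few lines: classify wells as more or less effective around the 5.5 t/day cutoff, map the discriminant score to a probability P(Z), convert it to a first-year increment with the linear relation Δq_n = 6.2·P(Z) + 3.216 quoted above, and bin the forecast errors the way the paper does (within ±2 t/day, above 4 t/day). The sketch below uses scikit-learn's LDA in place of the authors' own discriminant-function fitting, and the feature table is entirely synthetic; only the cutoff, the error bins and the linear conversion are taken from the text.

```python
# Minimal sketch of the LDA-based forecast described above. The seven features
# loosely follow the LDF arguments named in the paper (net-to-gross ratio,
# piezoconductivity, pre-intervention oil rate, oil viscosity, channel density,
# oil-saturated thickness, skin factor); all numbers are invented.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_wells = 40
X = np.column_stack([
    rng.uniform(0.3, 0.9, n_wells),   # K_sand, net-to-gross ratio
    rng.uniform(50, 500, n_wells),    # piezoconductivity
    rng.uniform(1, 15, n_wells),      # q_n, oil rate before intervention, t/day
    rng.uniform(1, 40, n_wells),      # oil viscosity, mPa*s
    rng.uniform(1, 4, n_wells),       # radial channel density
    rng.uniform(2, 12, n_wells),      # oil-saturated thickness, m
    rng.uniform(-3, 5, n_wells),      # skin factor
])
dq_actual = 2.0 + 8.0 * X[:, 0] + rng.normal(0.0, 1.5, n_wells)  # toy response, t/day

y = (dq_actual > 5.5).astype(int)            # effectiveness cutoff from the paper

lda = LinearDiscriminantAnalysis().fit(X, y)
p_effective = lda.predict_proba(X)[:, 1]     # stands in for P(Z)

dq_forecast = 6.2 * p_effective + 3.216      # linear conversion quoted in the text
err = dq_forecast - dq_actual
print("wells with |error| <= 2 t/day:", int(np.sum(np.abs(err) <= 2)))
print("wells with |error| >  4 t/day:", int(np.sum(np.abs(err) > 4)))
```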
2020-07-16T09:03:44.113Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "84bdcf6eb8538b8badc08a2c938f7b274a30223a", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.15593/2224-9923/2019.3.6", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "65a7ee81994e09a08f7948cfd9b88112a4937fef", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Geology" ], "extfieldsofstudy": [] }
15729441
pes2o/s2orc
v3-fos-license
Classical ultra-relativistic scattering in ADD The classical differential cross-section is calculated for high-energy small-angle gravitational scattering in the factorizable model with toroidal extra dimensions. The three main features of the classical computation are: (a) It involves summation over the infinite Kaluza-Klein towers but, contrary to the Born amplitude, it is finite with no need of an ultraviolet cutoff. (b) It is shown to correspond to the non-perturbative saddle-point approximation of the eikonal amplitude, obtained by the summation of an infinite number of ladder graphs of the quantum theory. (c) In the absence of extra dimensions it reproduces all previously known results. Introduction The search for large extra dimensions (LED), especially in view of the forthcoming experiments at LHC, constitutes a very exciting direction in high energy physics. After precursor ideas about the Universe as a topological defect in higher-dimensional space-time [1] and the proposal of TeV-scale internal dimensions related to supersymmetry breaking in string theory [2], large extra dimensions were discussed in several contexts. Among the conceptually and technically simplest is the ADD picture [3], according to which the standard model particles live in a four-dimensional space (the brane), embedded in a D-dimensional bulk inhabited only by gravity, and with the extra δ = D − 4 dimensions compactified on a torus. The D-dimensional Planck mass M * is supposed to lie in the TeV region and the LED have submillimeter size. In this scenario there is an infinite tower of massive Kaluza-Klein (KK) gravitons [4] whose existence may be detected at present and future colliders [5], either as missing energy in collisions due to emission of KK gravitons (which are weakly interacting after being created), or via processes which would otherwise be impossible or very much suppressed. Quantum collision processes with exchange of virtual KK gravitons are a very useful tool to test the model [5]. Unfortunately, tree diagrams containing the propagator of the KK tower diverge in the ultraviolet (UV) due to an infinite sum over the KK graviton masses [4]. The divergence appears already at tree level and is due to the emission of infinite momenta into the compactified dimensions. Several proposals were made in order to cure this problem. One was to introduce a UV cutoff [4,6], another to take into account brane oscillations and the associated Nambu-Goldstone modes [7], another yet to introduce brane thickness [8], or finally, to sum up ladder diagrams within the eikonal approximation [9]. However, none of them solves the problem completely. The UV cutoff at Planckian energies while it certainly does exist, does not look very natural in this context. It would mean that we are able to probe the Planckian regime using (relatively) low energy processes. Appealing to features associated with the physical brane, such as tension and thickness, introduces additional elements into the ADD model, which seem to spoil its overall consistency. Indeed, for a brane of non-zero tension one can not use any more the flat Minkowski background. Instead, one has to deal with solutions of the Einstein equations, which are non-trivial even in the absence of matter on the brane. In the case of codimension one, such a solution is well-known and is the basis of the Randall-Sundrum II model [10], with a different graviton spectrum. 
Finally, the eikonal calculation amounts to using the Fourier-transform of the Born amplitude and thus, it also suffers from ambiguities associated with the divergence of the latter. Indeed, the evaluation of the eikonal phase gives a finite result if a certain order of integration is used in calculating the Fourier integral, while it is divergent if one merely changes the order of integration. On the other hand, it has been argued that at ultrahigh energies, particle scattering in four dimensions not only becomes dominated by gravity, but in addition it involves only classical gravitational dynamics [11,12,13,14]. Indeed, quantum gravity effects should not, by definition, be important in the classical limit → 0. This, in terms of the two relevant lengths, i.e. the Planck length l Pl = ( G 4 /c 3 ) 1/2 = /M Pl c and the gravitational radius associated with the energy of the collision r g = G 4 √ s/c 4 , implies that r g ≫ l Pl , which is equivalent to the condition √ s ≫ M Pl c 2 of transplanckian energies. Thus, to study the scattering of two point-like particles with Planck energy, 't Hooft used a shock wave approximation for the field of the moving particle and obtained a result similar to the Veneziano amplitude. Later on, it was shown that in four dimensional quantum gravity [15], as well as in string theory [13], the eikonal approximation (free in both cases of the ambiguities mentioned above) reproduced the result of 't Hooft. The Planck length l * and the gravitational radius r * g in the D-dimensional ADD model are, correspondingly where M * is the TeV-scale mass parameter. The above reasoning remains essentially the same and shows that in the transplanckian regime √ s ≫ M * c 2 scattering is also classical, at least for some range of momentum transfer. Moreover, the particularly exciting proposals of black hole creation at LHC or in cosmic rays [16] [17] were based on a purely classical picture. Although the eikonal approximation of particle scattering in ADD has been discussed by a number of authors [9], [8], no classical calculation of the cross-section was found in the literature. The purpose of the present paper is to fill in this gap, and in addition to provide an independent check of the validity of the eikonal approach, whose applicability in the context of ADD is not yet rigorously proven. Indeed, it is hereby demonstrated explicitly that the classical theory reproduces the saddle-point result of the eikonal approximation and is essentially non-perturbative in the quantum sense. Thus, classical theory gives non-trivial results in ADD, in contrast to four dimensional gravity where the classical elastic scattering cross-section coincides with the Born approximation and is perturbative. Finally, it should be pointed out that the reason the classical cross-section is finite, even though it too involves an infinite summation over massive KK modes, is because in the classical calculation the potentially divergent integrals contain oscillating factors, which effectively cutoff the modes that cannot be excited by the source. This applies to all kinematical regimes and, in particular, is analogous to the situation in the classical derivation of Newton's law [18] [19]. The setup Consider the Fierz-Pauli lagrangian in D-dimensional Minkowski space, with δ ≡ D − 4 of its spatial dimensions being a torus T δ with equal radii R and are periodic under the translations y j → y j + 2πR, j = 1, . . . , δ. Thus, for instance where V = (2πR) δ is the volume of the torus. 
In what follows we abbreviate the sum over the KK modes as n and denote the momenta transversal to the brane as p i T = n i /R. The tensor h M N (x P ) has been split into an infinite sum of four-dimensional KK modes h n M N (x µ ) with (mass) 2 equal to p 2 T . In the harmonic gauge ∂ N h M N = 1 2 ∂ M h the Einstein equations for h M N read: According to the ADD scenario it is assumed that the matter stress-tensor is localized on the brane, located at y = 0, and carries only four-dimensional indices: It is worth noting, however, that this assumption is consistent with Einstein's equations only at the linearized level, with matter dynamics governed by the non-gravitational forces alone. Given (2.5) it is consistent to set the graviphotons h iµ and the non-diagonal part of the scalar matrix h ij to zero. Finally, the diagonal components of h ij are all equal and generated by the trace of the energy-momentum tensor: Their zero modes n i = 0 are the so called radions, which describe deformations of the torus caused by the presence of matter on the brane. The massive components n i = 0 of h n M N could be rearranged into the massive four-dimensional graviton and massive scalars in the usual Higgs mechanism language [20], [4], but this is not necessary for the present discussion. The D-dimensional Planck mass M * is defined by The retarded Green's function of the D'Alembert equation satisfies (2.7) Its Fourier transform reads: The solution of (2.4) with the source localized on the brane (2.10) Its restriction to the brane can be rewritten using the amputated propagator The momentum space four-dimensional retarded propagator thus reads (2.14) The ultra-relativistic elastic scattering cross-section Consider next the small angle ultrarelativistic scattering of two particles on the brane, with masses m and m ′ respectively, interacting via D-dimensional gravity. Using for notational simplicity the same parameter τ for both trajectories, a convenient way to describe them is with δz, δz ′ treated perturbatively. The asymptotic values of their momenta are while momentum conservation implies P µ + P ′µ =P µ +P ′µ . The Mandelstam variables s, t are and we consider the ultrarelativistic regime and the small-angle approximation in which case s = (p + p ′ ) 2 . Since the momentum transfer q µ depends only on the deviation δz µ of one of the particles, it will be convenient to work in the rest frame of m ′ before collision. In that frame p ′µ = m ′ (1, 0, 0, 0), and with no loss of generality one may set in addition where b is the impact parameter. The particles equations of motion following from the (2.1) are D-dimensional geodesic equations in the metric but it is easy to show that the particle moving on the brane in zero order in κ D will remain on the brane. So the matter stress-tensor of the two particle system is four-dimensional: Leaving aside the classical mass renormalization needed to take into account the gravitational self-action, we have to take as h µν in the equation for the particle m the retarded field of the partner particle m ′ . 
In the small-angle approximation it is assumed that the deviation from the unperturbed rectilinear motion is small, and one obtains perturbatively the equation (3.9) for $\delta z^\mu$, in which $\Pi^{\mu\nu}$ is the projector onto the space transverse to the momentum $p^\mu$. The gravitational field $h_{\mu\nu}$ of the particle m′ is given by (2.14), in which one has to substitute the Fourier transform of the second term of the stress-tensor (3.8). Using the reparametrization invariance of the particle trajectories one may choose τ in such a way that $p_\mu \delta\dot z^\mu = 0$ for both particles. In this gauge the solution of (3.9) takes a simple form in terms of a vector $Q^\mu$. Upon differentiation with respect to τ and integration over $k^0$, using that in the chosen Lorentz frame $\delta(k\cdot p') = \delta(m' k^0)$, one obtains an integral representation of $\delta\dot z^\mu$.

To calculate the momentum transfer in (3.3) we need the asymptotic values of $\delta\dot z^\mu$ as $\tau \to \pm\infty$, which are computed next. In the chosen Lorentz frame the exponent of the integrand in (3.15) is given by (3.16).

(a) Start with the integral over $k_z$ in (3.15) and define the coefficients A, B and C of its decomposition. The terms B and C, as well as the one proportional to $A k_z$, vanish in the limit $\tau \to \pm\infty$, while the term proportional to $A k^0$ is zero because it is evaluated at $k^0 = 0$. The y component vanishes by parity, so the only component left is the one proportional to $A k_x$; in this case, parity implies that only the sine part of the exponential contributes. The $k_z$ integration can then be performed; inserting A from (3.14), one obtains the $\tau \to \pm\infty$ limit (3.20).

(b) Next, denote $K^2 = k_y^2 + p_T^2$ and perform the integral over $k_x$, with the result (3.21).

(c) Insert this into (3.15) and replace the sum over KK modes by a continuous integration to obtain (3.23).

(d) To perform the remaining integrations, pass to polar coordinates $p_T = K\cos\alpha$, $k_y = K\sin\alpha$, so that $p_T^{\delta-1}\, dk_y\, dp_T = K^{\delta}\, dK\, \cos^{\delta-1}\!\alpha\, d\alpha$, and integrate over K from zero to infinity and over α from −π/2 to π/2.

The asymptotic values of the velocity of m in the field of m′ then follow. Inserting them into (3.3), the square of the momentum transfer is given by (3.25). In the ultrarelativistic limit $\gamma \gg 1$, $v \simeq 1$, one has $s = 2\,p\cdot p' \gg m m'$ and the expression simplifies to (3.26). Finally, the differential cross-section, defined as usual by $d\sigma = 2\pi b\, db$, is given by (3.27). In particular, for δ = 0 it coincides with the well-known formula for small-angle scattering in General Relativity [21]. The scattering angle θ satisfies $\tan\theta = \sqrt{-t}/(m\gamma v)$; thus small scattering angles mean $|t| \ll m^2\gamma^2 v^2$, which for ultrarelativistic velocities sets the range of validity of our approximation.

Relation to the eikonal approximation

Consider the elastic scattering of massive scalar particles on the brane in the high-energy limit $s \gg m^2$. The Born amplitude contains the t-channel propagator involving the sum over Kaluza-Klein modes. Passing to the continuous integration over the momentum $p_T$ of the gravitons in the extra dimensions, one obtains the expression (4.1) [4]. This integral in the general case requires a UV cutoff. An alternative way to obtain the final amplitude for small-angle high-energy scattering is to use the eikonalized form (4.2) of the amplitude [9,4,8], in which the two-dimensional vectors q, b lie in the transverse plane, with b the impact-parameter vector. The transverse component q of the momentum transfer in this approximation satisfies $q^2 \approx -q_\mu q^\mu$, so that $t \simeq -q^2$. In the usual four-dimensional theory this expression corresponds to the summation of ladder and crossed-ladder diagrams (for a detailed calculation within quantized linearized General Relativity see [15]).
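For reference, the eikonal representation invoked above is usually written in the following form; this is a standard normalization, and the paper's factors of 2s and sign conventions may differ:

$$ \mathcal{M}_{eik}(s,t) \;=\; \frac{2s}{i}\int d^2b\; e^{\,i\,\mathbf{q}\cdot\mathbf{b}} \left( e^{\,i\chi(s,\,b)} - 1 \right), \qquad \chi(s,b) \;=\; \frac{1}{2s}\int \frac{d^2q}{(2\pi)^2}\; e^{-i\,\mathbf{q}\cdot\mathbf{b}}\; \mathcal{M}_{Born}(s,-q^2), $$

with $t \simeq -\mathbf{q}^2$. Expanding the exponential to first order in χ reproduces the Born amplitude, consistent with the statement below that χ is extracted as the inverse Fourier transform of the Born amplitude.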
Actually, this involves UV-divergent loop diagrams, but it can be shown that the leading contribution in the high-energy limit is independent of the cutoff. In the ADD linearized gravity the situation is believed to be the same, though no explicit analysis is available. Therefore our classical calculation provides an independent check of the applicability of the eikonal approximation in the ADD framework.

The Born amplitude corresponds to the first term in the expansion of the exponential in (4.2) and is used to extract the eikonal χ as its inverse Fourier transform, equation (4.4). Notice that although the Born amplitude itself may be divergent, the integral (4.4) is finite if one first integrates over q, but not over $p_T$. Indeed, choosing the coordinates as in the previous section, one first integrates over $q_x$ by contour integration, which produces an exponential factor cutting off the potentially divergent integral over $p_T$. Then, integrating over $q_y = \kappa\cos\alpha$ and $p_T = \kappa\sin\alpha$, one obtains the eikonal phase in closed form. The eikonal amplitude (4.2) can then be evaluated. The unity in the parenthesis gives no contribution; in the remaining part, and in the regime $q b_c \gg 1$ of interest here, one may replace the Bessel function by its asymptotic form to obtain (4.14). As advertised, it is identical to our classical result.

Conclusions

A purely classical calculation of the high-energy elastic scattering cross-section in the ADD scenario was presented. Our approach is entirely free of the ambiguities associated with the divergence of the Born amplitude for virtual graviton exchange, typical of ADD. The ultrarelativistic small-angle gravitational collision in four dimensions is a special case, in which our result agrees with 't Hooft's, which in turn coincides with the Born quantum cross-section. In the presence of extra dimensions it was shown that the lowest-order small-angle classical approximation reproduces the essentially non-perturbative result of the quantum eikonal calculation in the saddle-point approximation. Thus, the classical computation in the above kinematical regime is non-trivial, unambiguous and reliable and, therefore, worth applying to other processes such as bremsstrahlung [22].
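As a quick numerical illustration of the small-angle regime discussed above, the following sketch (with hypothetical parameter values, not taken from the paper) evaluates the scattering angle from $\tan\theta = \sqrt{-t}/(m\gamma v)$ and checks the validity condition $|t| \ll m^2\gamma^2 v^2$:

```python
import math

# Hypothetical parameter values (illustrative only, not from the paper):
m = 0.938       # particle mass in GeV (proton-like)
gamma = 1.0e4   # Lorentz factor of the projectile in the rest frame of m'
t = -1.0e2      # Mandelstam t in GeV^2 (spacelike, so t < 0)

v = math.sqrt(1.0 - 1.0 / gamma**2)        # velocity in units of c
q = math.sqrt(-t)                          # transverse momentum transfer, GeV
theta = math.atan(q / (m * gamma * v))     # small-angle scattering angle, radians

# Validity of the small-angle approximation: |t| must be much smaller than (m*gamma*v)^2
ratio = -t / (m * gamma * v)**2

print(f"scattering angle theta = {theta:.3e} rad")
print(f"|t| / (m*gamma*v)^2 = {ratio:.3e}  (should be << 1 for the approximation to hold)")
```

For these illustrative values the ratio is of order $10^{-6}$, comfortably inside the small-angle regime, while the scattering angle is of order a milliradian.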
2009-04-21T16:48:42.000Z
2009-03-17T00:00:00.000
{ "year": 2009, "sha1": "e2f2499b90c96b20cc127fd0309cf3483b1ef1e3", "oa_license": null, "oa_url": "http://iopscience.iop.org/article/10.1088/1126-6708/2009/05/074/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "e2f2499b90c96b20cc127fd0309cf3483b1ef1e3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }