Critical Time Intervention for People Leaving Shelters in the Netherlands: Assessing Fidelity and Exploring Facilitators and Barriers
International dissemination of evidence-based interventions calls for rigorous evaluation. As part of an evaluation of critical time intervention (CTI) for homeless people and abused women leaving Dutch shelters, this study assessed fidelity in two service delivery systems and explored factors influencing model adherence. Data collection entailed chart review (n = 70) and two focus groups with CTI workers (n = 11). The intervention obtained an overall score of three out of five (fairly implemented) for compliance fidelity and chart quality combined. Fidelity did not differ significantly between service systems, supporting its suitability for a range of populations. The eight themes that emerged from the focus groups as affecting model adherence provide guidance for future implementation efforts. Electronic supplementary material The online version of this article (doi:10.1007/s10488-015-0699-9) contains supplementary material, which is available to authorized users.
Introduction
For people leaving homeless or women's shelters, the transition to community living can be challenging. Because much has to be arranged during this stressful time, people are often in need of practical and emotional support. They can no longer utilize shelter services, which are generally terminated after shelter exit, and most of them have few supports that they can rely on in their new living environment. Relationships with family members and other potential social supports may need to be repaired first, and ties to professional supports in the community may be weak or not yet established due to waiting lists. As a result, people leaving shelters experience a discontinuity of support. Post-shelter services are, therefore, vital in preventing negative outcomes such as recurrent homelessness and re-abuse (Caton et al. 1993; McQuistion et al. 2014; Tan et al. 1995; Tutty 1996).
Critical time intervention (CTI) is a time-limited, strengths-based case management model designed to prevent adverse outcomes in vulnerable people at the time of a critical transition in their lives, such as following discharge from institutional settings. CTI facilitates community integration and continuity of care by ensuring that a person has enduring ties to their community and support systems during these critical periods. It has been recognized by the Substance Abuse and Mental Health Services Administration (SAMHSA 2006), the Public Health Agency of Canada (2009), and the Coalition for Evidence-Based Policy (2013) as an evidence-based practice (EBP). In the United States, this intervention has been found to be effective in preventing recurrent homelessness and re-hospitalization as well as reducing psychiatric symptoms and substance use after the transition from shelters, hospitals, and other institutions to community living in people with severe mental illness (Herman et al. 2011; Kasprow and Rosenheck 2007; Susser et al. 1997). Furthermore, CTI is a cost-effective alternative to usual care for mentally ill men moving from a shelter to the community (Jones et al. 2003).
Few evidence-based interventions for vulnerable people leaving institutional settings have been tested rigorously outside the United States (Jonker et al. 2015). Before EBPs are widely implemented internationally, it is necessary to test whether they are effective, because most of these practices have been developed to address place- and time-specific social issues. In addition, different nations usually have distinct systems of care, which may influence the effectiveness of interventions (Toro 2007). Differences between systems of care might require adaptations of an intervention during implementation. These adaptations should be consistent with the model, so that its active ingredients are preserved. By evaluating whether CTI is effective outside the United States, we could possibly add to the evidence base supporting that this intervention's mechanisms of effect are not dependent on a particular social context or health care system. We initiated two multi-center randomized controlled trials (RCTs) to test the effectiveness and model fidelity of CTI for homeless people and abused women in the Netherlands.
In modern effectiveness research, the development and use of fidelity criteria is considered obligatory to assess model adherence, that is, the degree to which a given intervention has been implemented in accordance with essential theoretical and procedural aspects of the model (Bond et al. 2000; Hogue et al. 2005). Earlier research shows that faithfully implemented EBPs produce better outcomes. For example, high fidelity to assertive community treatment (ACT) and strengths-based case management has been found to have a positive effect on client-level outcomes (Cuddeback et al. 2013; Fukui et al. 2012; McHugo et al. 1999).
So far, only one study has published CTI fidelity scores (Olivet 2013). This study was conducted by the Center for Social Innovation (C4) to assess differences in implementation and client outcomes between face-to-face and online CTI training. Fidelity was measured with the CTI fidelity scale, a quantitative tool developed by Conover and Herman (2007). The CTI fidelity scale consists of 20 items, which are rated on a five-point scale ranging from not implemented to ideally implemented. Item-level ratings can be combined to compute an overall fidelity score (Conover 2012). In the C4 study, overall fidelity scores were calculated nine months after training and were based on compliance fidelity, which is the degree to which providers implemented the key elements of the CTI model (eight items), and chart quality, which measures how well the intervention was documented (four items). The 15 North American homeless service agencies that participated in the C4 study obtained an average overall score of three on the five-point scale, which corresponds to fairly implemented according to the CTI Fidelity Scale Manual (Conover 2012). The present study was designed to provide insight into the implementation of CTI practice in three different ways. Firstly, we also conducted a fidelity assessment, which would allow us to examine whether a similar fidelity score would be achieved in the Netherlands as was obtained by the C4 study in North America. Secondly, we set out to compare the level of fidelity between two distinct service delivery systems (services for homeless people and services for abused women), which was possible because the two RCTs on CTI employed the same ongoing training and monitoring efforts during the same period in each service delivery system (Lako et al. 2013). Earlier studies of the effectiveness of CTI have already demonstrated that the CTI model can be successfully adapted for several types of populations (Herman and Mandiberg 2010). However, the hypothesis that CTI is suitable for a range of populations would be supported further if similar levels of model fidelity could be obtained in different service delivery contexts with the same implementation approach. Lastly, we aimed to provide insight into facilitators and barriers to CTI practice by conducting focus groups with the case managers trained in CTI (referred to as "CTI workers"). This will provide important information on which key aspects should be paid attention to when implementing CTI.
The present study will answer the following three research questions: What is the fidelity of CTI for homeless people and abused women making the transition from shelters to community living in the Netherlands? Is it possible to obtain similar fidelity ratings in two distinct service delivery systems (i.e., services for homeless people and services for abused women) with the same implementation approach? And which factors may have facilitated or impeded CTI workers' adherence to the CTI model in these service delivery systems?
Method
Procedure and Participants
This study is part of two RCTs examining the effectiveness of CTI for adult homeless people and abused women who are about to move to housing in the community and are willing to accept case management services during and after shelter exit. The two RCTs were initiated by the Academic Collaborative Center for Shelter and Recovery. The 18 shelter organizations that participated in these trials were members of this platform. In the Netherlands, services for homeless people are operated in a service delivery system that is separate from services provided to abused women; these two distinct service delivery systems will be referred to as services for homeless people and services for abused women in the remainder of this article.
Participant recruitment began in December 2010 and was completed in December 2012. In total, we recruited 183 clients from 18 homeless shelters, who had been rehoused within 14 months of entering the shelter, and 136 clients from 19 women's shelters, who had been victims of violence committed by an intimate partner (intimate partner violence) or violence committed to protect or restore the family honor (honor-related violence) and had stayed in the shelter for at least 6 weeks before being rehoused. The trials comply with the criteria for approval by an accredited Medical Research Ethics Committee (aMREC). Upon consultation, the aMREC region Arnhem-Nijmegen concluded that these studies were exempt from formal review (registration numbers 2010/038 and 2010/247). The methods of the two RCTs have been reported elsewhere in more detail (Lako et al. 2013).
Written informed consent to share client charts with the research team was obtained before participants were randomly allocated to CTI or care-as-usual. To assess the intervention's fidelity to the CTI model, we randomly selected a sample of 70 charts, stratified by service delivery system, from participants allocated to the experimental condition. [Socio-demographic characteristics of these 70 participants are presented in the online supplement to this article.] In the two trials, 164 participants were allocated to CTI. In July 2013, we assessed which client charts were available for the fidelity assessment. Fifteen participants allocated to CTI had never been assigned a CTI worker and, as a result, did not have a CTI client chart that could be included in the assessment. Reasons for not assigning a CTI worker were that participants refused to receive services after randomization (n = 8), organizations were unable to provide CTI due to full caseloads or participants' place of residence (n = 5), or participants were mistakenly assigned to another case manager (n = 2). For 17 participants, who had been allocated to CTI in the last 6 months of recruitment, the intervention had not yet ended and their CTI workers were therefore unable to supply these clients' charts. Earlier research has shown that implementation of an EBP with a sufficient level of fidelity takes time (Fukui et al. 2012; Rapp et al. 2010b) and, therefore, CTI workers were expected to adhere more closely to the model at the end of the study than at the beginning. Because we aimed to draw a sample of charts representative of the study period as a whole, we decided to create temporal balance by excluding charts from participants who had been allocated to CTI in the first 6 months of recruitment (n = 33). To select a sample from the remaining charts available (n = 99), a computer-generated list of random numbers was used.
Tailoring the Model
CTI is divided into three phases, of 3 months each, with decreasing intensity of support over time (see Fig. 1). During the intervention the CTI worker provides practical and emotional support and helps to extend and strengthen the client's social and professional network. Gradually, responsibility for the client's care is transferred from the CTI worker to significant members from the client's social and professional support system. Timing is crucial: An important principle of the model is that the CTI worker and the client have started building a working relationship before the actual transition begins (Herman and Mandiberg 2010).
When CTI was first introduced to the Netherlands, the model was adapted to enhance continuity of services for people with schizophrenia and a history of homelessness (Valencia et al. 2007). A pilot study tested the feasibility of implementing the adapted intervention. Adaptations were informed by data on housing instability among schizophrenia patients, interviews with clinicians and peer specialists, and the investigators' clinical and research experience with hard-to-engage populations (van Hemert n.d.). One of the adaptations was a more flexible time frame compared to the original model. A cardinal element in the CTI model is that the phase transition is automatically made at the three-month time point rather than driven by readiness criteria. In the adapted intervention, the time frame could be altered depending on the complexity of clients' needs and problems, clients' and case managers' skills, and community factors, such as limited access to services due to waiting lists (Valencia et al. 2006). This adaptation fits well with a growing interest in the Netherlands in the concept of providing personalized care (Evers et al. 2012). Decisions to transition to a subsequent phase were made by CTI workers and their supervisors during team meetings, which is an adaptation that was also incorporated in the implementation of CTI in the present study.
In addition to this model adaptation, implementation of CTI was also adapted to include elements from the strengths model (Rapp and Goscha 2011). Since most of the participating organizations had implemented a strengths-based approach to shelter services shortly before the start of the trial, principles from both the CTI and strengths model were integrated to ensure continuity in service approach during the transition from shelter to community. Because the strengths model stimulates clients' capacity for autonomy and self-reliance by focusing on their strengths (Rapp and Goscha 2011), it is very compatible with CTI.
Besides modifications to improve the fit of the CTI model with the health care system and shelter services in the Netherlands, the intervention was also tailored to meet the special needs of women (and their children) who have experienced abuse. Although the idea for CTI was conceived in the mid-1980s when many people with psychiatric disorders were becoming homeless, this model also seems to suit the complex service needs of women who have experienced abuse. Earlier research has shown that when these women successfully obtain desired community resources and increase their social support, their overall quality of life is enhanced. This improvement in well-being appears to serve as a protective factor against subsequent abuse (Bybee and Sullivan 2002). We adapted the CTI model to employ practices familiar to the field, such as motivational interviewing (Miller and Rollnick 2002), and include a number of key components geared toward helping these women address and prevent problems that they and their children face. The original six CTI areas of intervention, which were selected because these had been identified as the most essential for treatment of people with a severe mental illness during a 'critical time' of transition, were adapted in consultation with managers and practitioners from shelter organizations in the Academic Collaborative Center for Shelter and Recovery. The final 10 areas of intervention were based on experiences of practitioners as well as literature on risk and protective factors for re-abuse. These factors were also incorporated in the Risk and Needs Assessment, a tool in the CTI client chart that helps to assess individual risks for recurrent homelessness and/or re-abuse and discontinuity of care.
Training, Monitoring, and Support
Two or three case managers were drawn from existing staff of participating organizations to participate in the trials as CTI workers; they were generally part of service teams working in the community with vulnerable clients. Most of these case managers did not have any responsibilities within shelters. In order to qualify, staff members needed to have a bachelor's degree in social work or a related field. In the fall of 2010, potential CTI workers were introduced to CTI by the research team and experienced trainers. The CTI workers completed three one-day training sessions to become familiar with CTI's theoretical and procedural aspects and to acquire essential skills for CTI practice.
In addition to the initial training, CTI workers from all participating organizations attended centralized half-day training sessions: (bi)monthly during the first year and quarterly during the second year of the study. With the aim of enhancing CTI practice, the research team and CTI trainer facilitated discussions in which workers from the participating organizations exchanged experiences, and offered workshops on how to use CTI chart forms as tools to improve clients' care. Participating organizations were required to assign an internal coach, who was responsible for ensuring sufficient organizational support for the CTI workers and monitoring the model fidelity of the intervention. To this end, CTI workers had biweekly face-to-face supervision with their internal coach. Coaches received a one-day training session at the start of the trials and four half-day training sessions during the study period.
[Fig. 1 The three CTI phases, with the intensity of services decreasing over time]
For the implementation of an EBP to be effective, leaders in an organization need to be committed to the change process (Brownson et al. 2012; McHugh and Barlow 2012). Several steps were taken to secure leadership buy-in. Firstly, the RCTs were initiated by the Academic Collaborative Center for Shelter and Recovery and designed in consultation with this platform's steering committees and working groups, consisting of directors, managers, and practitioners from the member organizations. Secondly, each participating organization was visited at least twice by the research team before the start of the CTI training. During the first site visit, any possible challenges to the implementation of CTI were discussed with directors and managers. The second site visit was aimed at team leaders and practitioners and was intended to generate enthusiasm for the intervention. Lastly, presentations and workshops were conducted regularly at conferences and meetings to highlight the importance of the intervention, the trials' objectives, and the study progress. The aim of organizing and attending these conferences and meetings was to ensure (continued) leadership buy-in of the participating shelters and policy makers in local government and other funding bodies.
Fidelity Scale and Measures
Fidelity was measured with the CTI fidelity scale, a quantitative tool developed by two of the authors. The CTI fidelity scale has been applied in a number of settings; however, this scale has not been formally validated so far (Herman and Mandiberg 2010). For the purpose of the two RCTs, the fidelity scale was adapted in consultation with the original authors and translated into Dutch. [The adapted version of the CTI fidelity scale and the rationale for each item in the original scale are available in the online supplement to this article.] Adaptations to the fidelity scale concerned language as well as elements from the strengths model. Items were not adapted to account for the planned change in the model with regard to flexibility in the time frame. Hence, the fidelity scale provided the opportunity to measure the deviation from the original model that resulted from this more flexible time frame.
Each item of the CTI fidelity scale consists of one to five criteria, which can be rated positively or negatively. In order to obtain fidelity ratings at item-level, the number of positively rated criteria is divided by the total number of criteria to calculate percentages. These percentages are then converted into a five-point scale rating (see Fig. 2). Finally, all item-level ratings are added up, divided by the number of fidelity items, and rounded to the nearest integer to compute an overall fidelity score (Conover 2012).
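To make the scoring arithmetic concrete, the sketch below (in Python, not taken from the study or the CTI Fidelity Scale Manual) converts the share of positively rated criteria per item into a five-point rating using the cut-off points shown in Fig. 2; mapping percentages of 40% and below to a rating of 1 is an assumption, since only the upper four bands are labelled in the figure.

```python
# Illustrative sketch of the item-level rating and overall score computation.
# Cut-offs follow Fig. 2 (41-55%, 56-70%, 71-85%, >85%); the lowest band is assumed.

def item_rating(criteria_met, criteria_total):
    """Convert the share of positively rated criteria for one item into a 1-5 rating."""
    pct = 100.0 * criteria_met / criteria_total
    if pct > 85:
        return 5  # ideally implemented
    if pct >= 71:
        return 4  # well implemented
    if pct >= 56:
        return 3  # fairly implemented
    if pct >= 41:
        return 2  # poorly implemented
    return 1      # not implemented

def overall_score(items):
    """items: list of (criteria_met, criteria_total) pairs, one per fidelity item."""
    ratings = [item_rating(met, total) for met, total in items]
    # Python's round() uses round-half-to-even; the Manual does not specify tie-breaking.
    return round(sum(ratings) / len(ratings))

# Example: three items with 2/3, 4/5 and 1/2 criteria rated positively.
print(overall_score([(2, 3), (4, 5), (1, 2)]))  # -> 3 (fairly implemented)
```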
The 20 items of the original CTI fidelity scale belong to one of three sections that each measure a different component of model fidelity: compliance fidelity, competence fidelity, and context fidelity (Conover 2012). The first section, compliance fidelity, is the degree to which workers practiced the key elements of the CTI model and is measured by eight items. Four of these indicate whether the intervention was delivered according to the intended CTI structure (i.e., as a nine-month intervention divided into three equal phases with a focus on up to three intervention areas): Three Phases, Nine-Month Follow-Up, Time-Limited, and Focused. The other four items are concerned with developing relationships with clients and their social and professional support systems: Early Engagement, Early Linking, Outreach, and Monitoring. The second section, competence fidelity, refers to the extent to which these key elements were delivered to clients with skill and attention to the craft (Fixsen et al. 2005) and is measured by nine items. Four of them rate how well the intervention was documented: Intake Assessment, Phase Planning, Progress Notes, and Closing Note. The other five items measure program quality: Worker's Role With Client, Worker's Role With Linkages, Clinical Supervision, Fieldwork Coordination, and Organizational Support (Olivet 2013). The third section, context fidelity, indicates whether the organizational requirements were met to allow the intervention's practice to operate smoothly (Fixsen et al. 2005). Context fidelity is indicated by three items: Caseload Size, Team Meetings, and Case Review.
The organizations that participated in this study did not implement CTI throughout their organizations, but instead two to three CTI workers, who had other cases besides their CTI caseloads, operated independently within larger service teams for the benefit of the two RCTs. Due to this deviation in team structure and the small number of ''active'' CTI clients per worker at any time, conducting site visits as outlined in the CTI Fidelity Scale Manual (Conover 2012) was not appropriate. Therefore, the items that measure program quality (five items) and context fidelity (three items) could not be rated and were excluded from the assessment. The remaining 12 items of the CTI fidelity scale, which measure compliance fidelity and chart quality, were retained. The CTI Fidelity Scale Manual prescribes that these 12 items are rated by reviewing client charts.
Fidelity Assessment
For the sample of 70 charts, we collected all the CTI chart forms that the CTI workers had completed. At a minimum, each CTI client chart had to contain an Intake Form, a Strengths Assessment, a Risk and Needs Assessment, an Activity Log, a Personal Recovery Plan for each phase, and a Closing Note. [The content and function of each CTI chart form are described in the online supplement to this article.] The Strengths Assessment and Personal Recovery Plan originate from the strengths model (Rapp and Goscha 2011); these chart forms were adapted to include elements that increase their compatibility with the CTI model and that are essential for their use during fidelity assessment (Wolf et al. 2012). CTI workers sent copies of the chart forms to the research team using postage-paid envelopes or e-mail. The research team tracked receipt of all forms in a password protected database. Digital copies of CTI chart forms were stored on a secure server and hard-copies of CTI chart forms were stored in locked cabinets.
Review of CTI chart forms and additional notes was conducted by two fidelity assessors, who were part of the research team and both had extensive knowledge of the CTI model. Agreement between assessors, derived from an independently rated subsample of 17 charts, was very high (Cohen's κ = .80). The fidelity assessors used CTI fidelity worksheets (Conover 2012), which had been modified in line with the adaptation of the CTI fidelity scale, to record and rate the criteria of each item during chart review.
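As an illustration of the agreement check reported above, the following hypothetical snippet computes Cohen's kappa from two assessors' parallel criterion ratings using scikit-learn; the binary ratings shown are made-up values, not data from the study.

```python
# Hypothetical inter-rater agreement check on an independently rated subsample.
from sklearn.metrics import cohen_kappa_score

assessor_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 1 = criterion rated positively
assessor_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]

print(f"Cohen's kappa = {cohen_kappa_score(assessor_a, assessor_b):.2f}")
```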
In addition to chart review, we conducted two focus groups with a convenience sample of CTI workers to assess which factors may have helped or hindered them in adhering to the basic components of the CTI model. The first focus group was conducted in February 2013 with CTI workers who supported abused women (n = 5) and the second group discussion was carried out in April 2013 with CTI workers who provided services to formerly homeless people (n = 6). Before the start of the focus groups, we obtained written informed consent from the participants. The questioning route was determined in advance and each focus group lasted approximately 110 min. During the interview process, the group moderator regularly restated or summarized information and then questioned the participants to determine accuracy. The group discussions were recorded and transcribed verbatim. Six weeks after the focus groups took place, meetings were organized to verify the results with CTI workers and internal coaches. Preliminary codes and themes, and carefully selected fragments from the focus group transcripts to illustrate these, were presented to the attendees, who could respond by correcting misinterpretations or adding more information.
Analysis
For each item of the CTI fidelity scale, percentages of positively rated criteria were calculated at client-level using IBM SPSS Statistics for Windows, Version 20.0. Mean percentages for all client charts together and separately for services for homeless people and services for abused women were subsequently converted into fidelity ratings and an overall fidelity score for compliance fidelity and chart quality.
Because fidelity ratings on separate items and the overall fidelity score could not be calculated at client-level, we tested for differences between services for homeless people and services for abused women before converting percentages into the five-point scale ratings. Mann-Whitney U tests were conducted to test for differences in percentages of positively rated criteria at item-level. An independent samples t-test was employed for the average percentage across all items. Because the group sizes are relatively small, and the analyses may lack statistical power as a result, we also calculated effect sizes.
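The snippet below sketches these comparisons in Python with SciPy, using made-up percentages rather than the study data; the effect size for the Mann-Whitney test is derived from the normal approximation as r = Z / sqrt(N), which is one common convention and not necessarily the exact method used by the authors.

```python
# Minimal sketch of the item-level and overall comparisons described above,
# using simulated percentages of positively rated criteria (not the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
homeless = rng.uniform(40, 90, size=35)      # % of criteria met per chart, one item
abused_women = rng.uniform(40, 90, size=35)

# Item-level comparison: Mann-Whitney U with an approximate effect size r.
u, p = stats.mannwhitneyu(homeless, abused_women, alternative="two-sided")
n1, n2 = len(homeless), len(abused_women)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r = z / np.sqrt(n1 + n2)
print(f"U = {u:.1f}, p = {p:.3f}, r = {r:.2f}")

# Overall comparison: independent samples t-test on the average percentage across
# all items (here a single simulated item stands in for that average).
t, p_t = stats.ttest_ind(homeless, abused_women)
print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p_t:.3f}")
```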
Transcripts from the group discussions were explored using thematic analysis. The two lead authors (RV and DL) familiarized themselves with the data by listening to the recordings and rereading the transcripts. From one of the transcripts, they independently selected fragments considered to be relevant to the third research question. The supervising author (JW) reconsidered the relevance of extracted fragments and coded them inductively, developing an initial code frame. The lead authors used this frame to code the second transcript, using deductive and inductive analysis. To determine the validity of the information obtained and the code frame, a second data source was consulted, which consisted of questions and concerns about implementation from the CTI workers and internal coaches, and responses to these questions and concerns from the research team, collected during the study period. This document was continuously updated and disseminated during the centralized half-day training sessions. One of the lead authors (RV) combined the final codes into overarching themes, which were reviewed by two other authors (JW and MB). Existing themes were refined and finalized in consensus among the authors.
[Fig. 2 Conversion of percentages of positively rated criteria into five-point scale ratings: 41%-55%, 56%-70%, 71%-85%, >85%]
Results
Fidelity Ratings
Table 1 presents the percentages of positively rated criteria and fidelity ratings at item-level as well as the overall fidelity score for all client charts together (n = 70) and separately for services for homeless people (n = 35) and services for abused women (n = 35). Ratings of Monitoring (item 8) are based on a subsample of 63 client charts, because for seven clients (four from services for homeless people and three from services for abused women) the intervention had ended before phase 3 had begun. For all client charts together, the overall fidelity score for compliance fidelity and chart quality is three out of five, which according to the CTI Fidelity Scale Manual indicates that fidelity to the CTI model is fair. On eight of the 12 items, CTI workers adhered fairly or well to the model; the other four items were not or poorly implemented. In relation to the intervention's structure, CTI workers had generally divided the intervention into three phases, but failed most of the time to start and end each phase within a two-week margin of the intended three-phase structure. As a result, Three Phases (item 1) received a rating of 1, indicating this aspect of CTI had not been implemented. CTI workers scored well on Nine-Month Follow-Up (item 2), indicating that most of the time they managed to stay in touch with their clients for nine months and there were few major gaps where clients disappeared. They found it more difficult, however, to also end the intervention on time; Time-Limited (item 3) received a fair rating. A fair rating was also obtained on being Focused (item 4), which prescribes that the intervention should be limited to a maximum of three intervention areas.
With regard to relationship development, CTI workers should have met clients several times before shelter exit in order to gain an understanding of their clients' histories; this Early Engagement (item 5) received a fair rating. Early Linking (item 6), which was also implemented fairly, prescribes that CTI workers maintain a high level of client contact during the first weeks after discharge and convene a joint meeting with family members and service providers to ensure continuity during this critical transition period. An element that was put into practice well is Outreach (item 7), which indicates that CTI workers regularly met in the community with clients and people in their support systems during phase 1. The poor rating on Monitoring (item 8) shows that, in phase 3, CTI workers had difficulty with adapting to their monitoring role; often, they met with or spoke to clients too frequently in that last phase. With respect to chart quality, required sections of the Strengths Assessment and the Risk and Needs Assessment, which are both part of the Intake Assessment (item 9), and the Progress Notes (item 11) in the Activity Log had generally been completed; CTI workers scored well on these items. Unfortunately, Phase Planning (item 10) information on the Personal Recovery Plans was often incomplete. In addition, important elements were missing from the Closing Note (item 12) most of the time. These items were not or poorly implemented.
Differences Between Service Delivery Systems
To compare the level of model fidelity between services for homeless people and services for abused women, we tested for differences in percentages of positively rated criteria at item-level and in the average percentage across all items. According to the independent samples t-test, the average percentage of positively rated criteria across all items did not differ between the two service delivery systems (t(68) = -1.42, p > .05). When percentages of criteria met at item-level were compared, we found a trend for three items (p < .10). CTI workers providing services to homeless people seem to be more careful to complete their Progress Notes (item 11; U = 461.50, p = .07), while CTI workers providing services to abused women seem to adhere better to the criteria regarding the Intake Assessment (item 9; U = 469.00, p = .06) and Phase Planning (item 10; U = 461.00, p = .06). For all three items, the effect size was small (r = -.22).
CTI Workers' Perceptions
The eight factors that emerged as prominent themes affecting model adherence are discharge and shelter services, working relationship, clients' needs and attitudes, community support system, perceived effectiveness, model adaptation and trial design, organizational and team support, and tools and training. These themes are described below.
Discharge and Shelter Services
During the focus groups, CTI workers confirmed that continuity of care is crucial for a smooth transition from shelter to community living. Filling out an Intake Form together with a client and shelter case manager before discharge resulted in fewer loose ends once the client had moved. Most of the workers agreed that if they had been unable to engage clients before discharge, three months was too short for the first phase (Three Phases). Being assigned to clients who had already left the shelter made organizing a meeting with the client and shelter case manager (Early Engagement) more difficult, because often shelter case managers would be unavailable or clients too preoccupied, according to the CTI workers.
So after a month and a half I got [the meeting with the shelter case manager]. That's how it works… Every time it's like: The person that's responsible is never there when you need them and that's why it goes wrong all the time. And so I didn't have any information from the client's chart at all, so I just had to completely rely on the client at that point, and I really felt the lack of that meeting.
With regard to shelter services, working towards similar goals and with similar chart forms during shelter stay facilitated adherence to the CTI model. If clients had already completed, for instance, a Strengths Assessment in the shelter, then this version could be used by the CTI workers as a basis to expand from (Intake Assessment). CTI workers indicated that if clients had worked on strengthening their informal network with their shelter case manager, they seemed more willing to accept help in this area after discharge, as the following comment by a CTI worker reflects: But one thing you can sort out [in the shelter], I think, has to do with their social network…. If the network doesn't get mobilized while they're in the shelter, then it's very hard to mobilize it once they get their own place, because I've noticed clients are then like: I don't need that any more.… So I think the time to seek help is in the shelter. If you engage [the network] at that point, then you can keep it involved later.
Working Relationship
In the CTI workers' view, having a good working relationship with a client was also instrumental in model adherence. Workers indicated that it could take several weeks, or even months, to build a positive working relationship. Being able to engage clients early to start building a positive relationship was an important facilitating factor. CTI workers were very positive about having a meeting with clients together with their shelter case managers before shelter exit: And the reason why that worked so effectively was… well, it gives the client a sense of safety, like: 'Hey, my [shelter] case manager also thinks it's a good thing that I'm going to start working with you.' Quite primal, actually.
During the intervention, a trusting relationship between client and CTI worker appeared to be essential in helping to motivate clients. For example, several workers indicated that, even though some clients were reluctant at first, they had been successful in organizing a joint meeting with social supports (Early Linking) by following the client's lead.
Clients' Needs and Attitudes
Clients' support needs, as well as their attitudes towards receiving support, also had an influence on model fidelity. Some clients, for instance, were quite hesitant to accept support from other professionals besides their CTI worker. Workers experienced that, even though other supports were available, certain clients would keep appealing to them, resulting in frequent contact during the last phase of the intervention. Workers also felt inclined to increase the intensity of the intervention if a client's situation suddenly deteriorated, for example, due to an emotional or financial crisis. This could help to explain why the fidelity rating for Monitoring was poor. A crisis situation, however, could also motivate a client to become more accepting of help from others (Outreach), according to some of the CTI workers.
Halfway through the second phase, they discovered a spot on my client's lungs. So then everything basically stood still for a while, but because of that, we did get to know his social network and could start drawing on that.
Community Support System
Model adherence also depended on workers' success in developing community support. During the focus groups, CTI workers indicated that sufficient community support was necessary to allow them to decrease and eventually terminate contact with a client (Time-Limited) and that a client's support system could help them gain more insight into a client's situation and restore contact with a client when it had been disrupted due to frequent no-shows (Nine-Month Follow-Up). Several workers experienced difficulty linking clients to professionals due to austerity measures and this lack of access had hindered them in moving from the first to second phase (Three Phases): [She] had an intellectual disability -or at least they [shelter staff] said 'suspicions of' -and as soon as she was home again [living in the community], you could tell. I basically ran into a brick wall trying to refer her. From pillar to post: Go there, try this and that. And at some point that frustrated her so much that she started rejecting everything.
Others experienced that, due to time constraints, professionals were often unwilling to attend joint meetings with a client's support system, which according to the CTI model should be organized at the start and end of the intervention (Early Linking and Closing Note).
Perceived Effectiveness
Whether the workers perceived a certain component of the intervention as effective seemed to have had an influence on their willingness to adhere to the model. Several workers mentioned that CTI's three-phase structure fitted well with their clients' process of adaptation to community living. For some of the workers, the decreasing intensity of CTI in the second and third phase meant they could spend more time with their clients during that first, crucial phase; they felt that they were able to match service intensity to their clients' needs thanks to the implementation of the CTI model. During the focus groups, workers discussed how the time-limited nature of the CTI model had helped them change their mindset: They would make better use of supports in clients' networks instead of providing support directly, especially in the second and third phase.
You're already aware: During the first three months, I too will have to work very hard arranging and setting up practical things. But in the second phase you already start asking the client: 'Okay, how would you do that, what things could you consider? Who can you turn to?' And in the last phase you've resolved all that. You're in a different position then.
The CTI workers expressed that, as a result, they had generally been comfortable with ending the intervention at nine months (Time-Limited).
CTI workers expressed far less motivation to adhere to model components that they did not regard as beneficial. For example, the model prescribes that CTI workers organize a transfer-of-care meeting at the end of the intervention (which should be documented in the Closing Note). The transfer-of-care meeting is a joint meeting during which significant members from the informal and formal support system, along with the client, reach a consensus about the components of such an ongoing support system. In the view of several workers, having such a transfer-of-care meeting was unnecessary, because each member's role in the support system had already been discussed during a joint meeting in the first phase, and the system had been functioning well. Their perception of this element as redundant has most likely contributed to the poor fidelity rating for the Closing Note.
Model Adaptation and Trial Design
CTI workers mentioned that decisions about whether to move to a next phase with a client were made together with the internal coach and other CTI workers and were often based on a checklist of requirements for each phase, referred to as anchor points (Wolf et al. 2012), which had been provided to them during the training sessions. The workers found this checklist a helpful tool in deciding whether they could move on to subsequent phases: Something to fall back on [anchor points] is great, because that's what you're working to achieve…. But you're also both aware - the client and the practitioner - that you've got that set amount of time to sort out those basic things…. So you just start to work.
That's great.
The decision to move a client to a subsequent phase was ultimately made by the CTI workers to enable them to provide personalized health care. This represents a considerable deviation from the CTI model, which most likely contributed to the poor degree of fidelity to Three Phases.
As mentioned before, CTI workers had other cases besides their CTI caseloads and operated independently within larger service teams. At the beginning of the recruitment period and at certain recruitment sites where few clients were eligible to participate, workers had few active CTI clients, which, according to the CTI workers, made it difficult for them to internalize the CTI model. Some of the CTI workers mentioned they were expected to have full standard caseloads at all times by their team supervisor. If a new participant had been assigned to CTI, they would often have to transfer clients receiving usual services to colleagues, or would sometimes be pressured to work overtime.
Organizational and Team Support
CTI workers indicated that generally they felt supported by their organizations, although organizational support was lacking in some organizations with respect to chart documentation. Several workers had to maintain a second client chart that met all of the organization's standards, which may have resulted in less time spent on and lower quality of the CTI client chart. Furthermore, in one organization, standard procedures with regard to ending services after several no-shows were enforced for clients assigned to CTI, which directly conflicts with Nine-Month Follow-Up.
Having team meetings on a regular basis was crucial in adhering to the CTI model. According to the CTI workers, these meetings helped to reflect upon the delivery of the intervention and thereby reinforced activities that were consistent with CTI principles: Reflection [during team meetings]. But also right in the middle of your work when someone suddenly reminds you: 'Why haven't you reached that point yet?' Although the CTI model stipulates that team meetings should be organized every 2 weeks, several CTI workers mentioned that they met less often and did not feel properly supported by their internal coach. Reasons for having infrequent team meetings were having to travel large distances to meet, having too little time to meet due to full caseloads, and having little reason to meet due to the small number of active CTI clients.
Tools and Training
Having the right tools, and sufficient training to use them to a client's advantage, facilitated adherence to the model as well. For example, CTI workers mentioned that the Personal Recovery Plan helped clients to set attainable short-term goals (Phase Planning), because clients had to indicate on a five-point scale how likely they were to achieve each goal in the next three months. Several workers mentioned that the ecogram, a tool to visually map support systems (Hartman 1978), proved to be helpful, especially with clients who relied heavily on their CTI workers. Drawing an ecogram together with the client made clear who else was available for support in their network, which, in turn, made it easier for the CTI worker to "pull back" when the intervention progressed (Time-Limited).
For example, I'd had a client make an ecogram…. Then I covered up somebody's name with my thumb and said, 'What happens if she's not around?' That was somebody who was to come hang the light fixtures. 'Oh, well then I'll get my uncle to come round.' And then we did a few more, and at some point I put my thumb on my own name, and then she said something like, 'Yeah… well perhaps I could phone my aunt sometime.' And then it dawned on her: Hey, who could I call on then if he's not available anymore?
Discussion
The first and second aims of the study were to establish fidelity to the CTI model of an intervention for homeless people and abused women moving from shelters to community living in the Netherlands and to show whether it is possible to obtain similar CTI fidelity ratings in two distinct service delivery systems (i.e., services for homeless people and services for abused women) when the same implementation approach is employed during the same period. With an average of 60% of positively rated criteria across all items, the intervention received an overall fidelity score for compliance fidelity and chart quality of three out of five, which indicates CTI was fairly implemented according to the CTI Fidelity Scale Manual. This finding is similar to the overall fidelity rating in a previous multisite CTI study conducted with 15 service agencies in the United States and Canada (Olivet 2013). In the present study, the degree of fidelity on individual items ranged between not implemented (Three Phases and Closing Note) and well implemented (Nine-Month Follow-Up, Outreach, Intake Assessment, and Progress Notes). The two service delivery systems did not differ significantly on any of the items, although trends on three items related to chart quality were found. Effect sizes for these trends were small. This finding supports the hypothesis that CTI can be adapted for use with various populations, as suggested by Herman and Mandiberg (2010). Further research is needed to investigate whether this assertion holds when context fidelity and program quality, which are measured with the eight items from the CTI fidelity scale that were omitted in the present study, are taken into consideration. So far, however, the evidence seems to support that CTI's context-sensitive timing is applicable to a range of service delivery systems that serve vulnerable populations. Perhaps that is because its program components were developed in collaboration with practitioners, which led to a pragmatic intervention that may be somewhat atheoretical in nature (Jenson 2014).
The third study aim was to report CTI workers' views on factors that may have facilitated or impeded adherence to the CTI model. From these factors, eight overarching themes emerged: discharge and shelter services, working relationship, clients' needs and attitudes, community support system, perceived effectiveness, model adaptation and trial design, organizational and team support, and tools and training. CTI workers' perceptions of factors that influence service delivery have been studied previously in a sample of 12 practitioners using CTI in a community agency or clinical trial setting in New York City (Chen 2012; Chen and Ogden 2012). Four of the themes that emerged in the present study (discharge and shelter services, working relationship, community support system, and organizational and team support) relate to the findings of this earlier study. Similarly to the CTI workers in the present study, practitioners interviewed by Chen (2012) stressed the importance of establishing contact with a client before the transition to a community residence. Not only have the benefits of early engagement been reported by practitioners, but its effects on housing outcomes have also been empirically established (Herman et al. 2011). The present study corroborates the importance of fostering a trusting relationship to enhance client motivation and following clients' leads as a practice strategy, as previously established by Chen and Ogden (2012). Furthermore, CTI workers at community agencies in New York City revealed making frequent use of their own agencies' existing service programs (Chen 2012), which highlights the importance of easy access to community supports. In addition, they experienced that organizational policy occasionally conflicted with essential elements of the CTI approach, which was also the case in the present study.
Although the other four themes that emerged from the present study (clients' needs and attitudes, perceived effectiveness, tools and training, and model adaptation and trial design) were not corroborated by earlier research on CTI practice, parallels can be drawn with findings from other studies of EBP implementation in mental health services. In a study conducted in child and adolescent mental health settings, clients' concerns (for example, about the fit of an EBP with their own needs) and clients' values were identified as factors affecting implementation (Aarons et al. 2009). In adult mental health services, clients have also expressed concerns that EBPs will result in limited choice in service options and less say in the specifics of their services (Scheyett et al. 2006). Integrating recovery principles with evidence-based interventions could be a good strategy to address concerns about the fit of EBPs with clients' support needs and attitudes towards receiving support (Torrey et al. 2005). Concerning perceived effectiveness, Rapp et al. (2010a) identified practitioners' resistance toward an EBP as a barrier to implementation at several community mental health centers; this initial resistance emanated from the practitioners' assumptions about what works that conflicted with the EBP. Although the CTI workers participating in the present study were generally enthusiastic about the intervention, their assumptions did have a negative influence on their commitment to implement certain model elements. For instance, workers who deemed the transfer-of-care meeting to be unnecessary when the support system was functioning well were unlikely to organize such a meeting at the end of the intervention. The importance of tools and training is addressed in another paper by Rapp et al. (2010b), which describes strategies for successful implementation of EBPs. The authors emphasize the importance of reinforcing the application of tools to achieve results, for example, by developing training units that focus specifically on the use of certain tools (such as the Strengths Assessment) in practice and by including tools in all aspects of systematic case review during team meetings. Regarding model adaptation, CTI workers' views on phase transitions, as well as the fidelity rating for Three Phases, pointed towards a deviation from the original model in the present study. This deviation, however, was in line with an a priori decision to adapt the model by focusing on readiness instead of making phase transitions automatically at each three-month time point. Fidelity scales can be a useful instrument in measuring model adaptation of EBPs, as illustrated by a study that focused on transferring clients from an ACT program to a less intensive adaptation of the ACT model (Salyers et al. 1998). Although programs which are more faithful to the original model have demonstrated better client outcomes, the need for adapting EBPs, which are generally developed in a particular socio-cultural and economic context, to local conditions has also been recognized (Bond et al. 2000).
In this article, we have distinguished eight factors that influence model fidelity. Whether other factors that have been identified previously as facilitators of or barriers to EBP implementation also apply to CTI practice in Dutch shelter services warrants further research.
Strengths and Limitations
Together with an evaluation of a strengths-based intervention for homeless young adults (Krabbenborg et al. 2015), this study is the first to conduct a fidelity assessment of an evidence-based intervention in Dutch shelter services. Generally, few results from assessments of fidelity to the CTI model have been published (Herman 2014) and none of these previous studies have compared levels of model fidelity in two distinct service delivery systems. Moreover, this study contributes to a better understanding of model fidelity and implementation, because it combined quantitative and qualitative data to answer the research questions related to this topic rather than using either approach on its own (Robins et al. 2008). However, several limitations of the study need to be recognized as well.
In the CTI Fidelity Scale Manual, cut-off points are provided to convert percentages of positively rated criteria into five-point fidelity ratings. In addition, norms are provided for how to interpret these ratings, ranging from not implemented (one out of five) to ideally implemented (five out of five). However, the CTI fidelity scale has not been formally validated so far and, as such, norms for good implementation have also not yet been established. Appropriate validation of the CTI fidelity scale is needed to determine whether the existing cut-off points and norms can be upheld.
Another limitation of the present study is that fidelity scores were calculated based on a subset of items from the original CTI fidelity scale. Because CTI was delivered to clients in a research context, participating organizations did not implement CTI throughout their organizations and the number of active CTI clients per worker was generally small. As a result, conducting site visits was not appropriate and eight of the 20 items of the original CTI fidelity scale, which measure program quality and context fidelity, had to be excluded from the fidelity assessment. If the omitted items had been included, this could have altered the overall fidelity score as well as the interpretation of the results. Inferences drawn based on the fidelity assessment are strictly limited to compliance fidelity and chart quality and, based on these findings, no assumptions can be made about program quality or context fidelity of the intervention.
Nevertheless, valuable information about the context in which the intervention was delivered was obtained from CTI workers in focus groups. The use of focus groups, however, has certain limitations that should be highlighted, such as the possibility of social desirability and recall bias. Furthermore, data collected as the session progresses may represent opinions that are shaped by the group discussion (Carey 1995). The members of the group should, therefore, feel comfortable with each other. In the present study, focus group participants knew each other and the researchers well through the ongoing training sessions and were assured that the information they provided would be anonymously reported on. Therefore, we expect the data to accurately reflect the opinions of the focus group participants.
Implications for Policy and Practice
The CTI fidelity scale and the assessment provide agencies and local policy makers with a framework for the development and quality assurance of EBPs that support vulnerable citizens during transitions in their lives. The identified facilitators and barriers to implementation might be used by policy makers and practitioners to improve fidelity to EBPs in shelter services and to provide the necessary conditions for successful implementation. Several recommendations for successful implementation of CTI can be made based on the study findings. First, staff should be committed to recovery and CTI principles, including the importance of fostering a good working relationship with clients. Important to model adherence is also their perception of the intervention's components as effective. Assessing whether these core principles are part of the organization's culture and the intervention's components are integrated into work processes before implementation, and, if necessary, helping staff to internalize those principles through knowledge transfer (Rapp et al. 2010a), would be recommended. Sufficient access to a community support system is also important; CTI programs are unlikely to reach high fidelity in environments where access to informal as well as formal supports is very limited. In addition, CTI workers should be provided with sufficient organizational and team support as well as ongoing coaching. Coaching should foster mutual learning by reflecting together on the CTI model during regular case review and on the use of CTI chart forms as tools to improve clients' care. Furthermore, workers should have full CTI caseloads to gain ample experience. Lastly, fidelity to the CTI model would improve if organizations integrate similar tools and principles in their residential shelter services and CTI workers are assigned at least several weeks before clients exit the shelter, which will enhance continuity of care during the transition from institutional to community living. In addition, training for shelter staff in how to enhance communication and collaboration pre-discharge could maximize the potential benefits from early engagement, as suggested by Chen (2012).
Conclusions
This study shows that CTI was fairly implemented in the two multi-center RCTs testing the effectiveness of CTI for homeless people and abused women in the Netherlands. In these distinct service delivery systems-services for homeless people and services for abused women-the same implementation approach, employed during the same time period, resulted in very similar overall and item-level fidelity ratings. These findings are in line with the results from earlier studies that found CTI to be effective in different service delivery contexts: CTI seems to be an intervention suitable for a range of vulnerable groups who are going through a transition in their lives. Analyzing CTI workers' perspectives on factors that may have influenced model fidelity has yielded important recommendations for successful implementation of CTI in other service delivery systems.
"year": 2015,
"sha1": "cdfd932b688d70bd7a558ecf8f241a8738ee6914",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10488-015-0699-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "856620fefb276d1a6c8f26d314d9047cf627931a",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Force Field Parameters for Fe2+4S2−4 Clusters of Dihydropyrimidine Dehydrogenase, the 5-Fluorouracil Cancer Drug Deactivation Protein: A Step towards In Silico Pharmacogenomics Studies
The dimeric dihydropyrimidine dehydrogenase (DPD), metalloenzyme, an adjunct anti-cancer drug target, contains highly specialized 4 × Fe2+4S2−4 clusters per chain. These clusters facilitate the catalysis of the rate-limiting step in the pyrimidine degradation pathway through a harmonized electron transfer cascade that triggers a redox catabolic reaction. In the process, the bulk of the administered 5-fluorouracil (5-FU) cancer drug is inactivated, while a small proportion is activated to nucleic acid antimetabolites. The occurrence of missense mutations in DPD protein within the general population, including those of African descent, has adverse toxicity effects due to altered 5-FU metabolism. Thus, deciphering mutation effects on protein structure and function is vital, especially for precision medicine purposes. We previously proposed combining molecular dynamics (MD) and dynamic residue network (DRN) analysis to decipher the molecular mechanisms of missense mutations in other proteins. However, the presence of Fe2+4S2−4 clusters in DPD poses a challenge for such in silico studies. The existing AMBER force field parameters cannot accurately describe the Fe2+ center coordination exhibited by this enzyme. Therefore, this study aimed to derive AMBER force field parameters for DPD enzyme Fe2+ centers, using the original Seminario method and the collation features Visual Force Field Derivation Toolkit as a supportive approach. All-atom MD simulations were performed to validate the results. Both approaches generated similar force field parameters, which accurately described the human DPD protein Fe2+4S2−4 cluster architecture. This information is crucial and opens new avenues for in silico cancer pharmacogenomics and drug discovery related research on 5-FU drug efficacy and toxicity issues.
The Study
Although there is increased interest in protein-metal interactions, prompted by the essential physiological roles played by metal ions [15][16][17], the Fe-S (Gln) coordination in cluster 1026 is yet to be reported in other Fe 2+ 4 S 2− 4 cluster containing proteins [1,2]. Metal ions such as iron (Fe 2+ ) are crucial components of a protein's electron transport machinery. We can gain insights into metal coordinating environments through computational studies, especially via molecular dynamics (MD) simulation. However, MD calculations are highly dependent on force fields derived through quantum mechanics (QM) and molecular mechanics (MM) approaches [20,21]. MM methods employ classical-type models to predict the amount of energy in a molecule, based on its conformation [22]. Compared to QM approaches, MM methods are computationally cheaper and sufficient for describing atomic interactions and dynamics of a purely organic system. However, most of the available MM force fields cannot accurately describe the metal/organic interface occurring in metalloproteins, as they ignore the induced explicit electronic degrees of freedom [23]. To account for the electronic effects of the metals, de novo QM/MM calculations have been employed to describe the precise electron structure of atoms around a metal center [24][25][26]. Due to the importance of metals in protein function, the development of novel force field parameters using either hybrid QM/MM or pure QM approaches for describing various transition metal architectures is gaining pace [27]. This has led to numerous modified force fields that have been incorporated in several force field families, such as the optimized potentials for liquid simulations (OPLS-AA) [28], Groningen molecular simulation (GROMOS) [29], chemistry at Harvard molecular mechanics (CHARMM) [30,31], and assisted model building with energy refinement (AMBER) [32]. Both CHARMM and AMBER are widely used. They provide a large palette of atom types, allowing several organic molecules to be represented by assigning atom types based on chemical similarity [33,34]. OPLS-AA [35,36] optimizations focus on the condensed phase properties of small molecules, and have since been extended to include a diverse set of small molecule model compounds; however, atom type assignment must be done manually. It is worth noting that a commercial implementation of OPLS-AA with atom typing functionality is available [37]. CHARMM, on the other hand, has been enhanced with the CHARMM general force field (CGenFF), which not only covers a wide range of chemical groups found in biomolecules and drug-like molecules, but also many heterocyclic scaffolds [38,39]. Furthermore, a web interface for automatic atom typing and analogy-based parameter and charge assignment is now available [40,41]. The GROMOS force field atom type palette offers a pool of diversity for the construction of small molecule models with a force field derived from biopolymer parameters [29]. The general AMBER force field (GAFF) [42] and the antechamber toolkit are now included in AMBER [33,43,44], allowing the user to generate an AMBER [32,45] force field model for any input molecule. Besides the associated simulation speeds and exportable parameters, the development of a Python-based metal parameter builder (MCPB.py) [46], which supports various AMBER force fields and >80 metal ions, has made the parametrization of inorganic constituents in proteins more facile.
These advantages make AMBER the most preferred platform for the development of metal parameters for use in simulations involving metalloproteins. Hitherto, various methods, such as the polarization model and the non-bonded, semi-bonded, and bonded models, have been implemented to characterize metalloproteins. The non-bonded model uses non-covalent (van der Waals and electrostatic) interactions to define metal centers [43,44], whereas semi-bonded models [47,48] place dummy atoms around metals to resemble electron orbitals. However, these two methods are incapable of taking into account charge transfer and polarization effects around the metal centers [49]. These shortcomings have been addressed by incorporating charge transfer and polarization effects in potential energy function models [50,51]. Contrastingly, the bonded model utilizes defined harmonic energy terms, which are introduced into the potential energy function to account for bond formation between atoms and metal centers [48,52,53]. The approaches mentioned above have been extensively used in studies to characterize Fe 2+ centers in a range of metalloproteins [52][53][54][55]. Among other Fe 2+ clusters, Carvalho and colleagues [54] satisfactorily generated AMBER force field parameters for Fe 2+ 4 S 2− 4 clusters coordinated by cysteine residues. However, none of these parameters featured glutamine residue coordination to the Fe 2+ center, developed parameters for composite structures of multiple clusters, or applied two independent derivation approaches. To the best of our knowledge, this is the first study to determine the human DPD protein metal force field parameters.
Collectively, the current study integrates MM with QM techniques to determine accurate force field parameters for the 8 × Fe 2+ 4 S 2− 4 cluster complexes of the modeled human DPD proteins. We utilized the bonded model and Seminario-type QM techniques in our calculations [27]. More specifically, the density functional theory (DFT) approach of QM was used to derive Fe 2+ center AMBER parameters for two models using different Seminario methods. The first method (viz. Model 1) used the original Seminario method [56] as implemented in MCPB.py [46], whereas the second method (viz. Model 2) used the collation-features Seminario method of the Visual Force Field Derivation Toolkit (VFFDT) [57]. A comparison of the parameters from the two methods was performed and their reliability evaluated via all-atom MD simulations. For the first time, the current study reports novel force field parameters for multiple Fe 2+ 4 S 2− 4 clusters coordinated to both cysteine and glutamine residues. Furthermore, the reliability of the two parameter generation approaches was evaluated and found to be comparable. The newly derived force field parameters can be adopted by other systems depicting a similar Fe 2+ coordinating environment. More importantly, the establishment of these parameters creates an avenue for further molecular studies to fully understand the functional mechanism of the human DPD protein, and to decipher the effects of missense mutations on drug metabolism and cancer drug toxicity issues. As part of our ongoing investigations into the effects of known variants in human DPD, especially on its structure and stability, the reliability of the current parameters has been confirmed and the findings will be published as a follow-up study. Furthermore, different methods, such as the identification of new mutants coupled with structural analysis and clinical studies, i.e., phenotyping of DPD, have had a great impact on the understanding of the structural and functional effects of these mutations [6]. Together, these results will be crucial, not only for understanding how mutations lead to 5-FU toxicities, but also to better inform the implementation of precision medicine in cancer treatment.
Human DPD 3D Wild Type (WT) Complete Structure Determined via Homology Modeling Approaches
The availability of accurate and complete 3D structural information is a fundamental aspect for molecular studies aimed at understanding protein function. In the absence of a human DPD X-ray structure in the protein data bank (PDB) [8], homology modeling approaches were used to calculate accurate models of the human DPD enzyme using MODELLER v9.15 [58], Discovery Studio 4.5 [59], and the pig X-ray structure (PDB ID: 1H7X, 2.01 Å) as a template [1,2]. The choice of the template was guided by the high sequence identity (93%) with the target human DPD enzyme. Additionally, it was in complex with the drug of interest (5-FU) and had a complete query coverage of 100%. Using the very slow refinement level in MODELLER v9.15, 100 apo protein models were generated. The three best models, with the lowest z-DOPE scores of −1.36, −1.36, and −0.88, were chosen for further validation. The z-DOPE score evaluates the closeness of a model to the native structure, based on an atomic distance-dependent statistical potential, with a score of ≤−1.0 being considered near-native [60,61]. Consequently, holo (apo and cofactors) and holo-drug (5-FU) complex structures were generated by incorporating the non-protein coordinates from the template in Discovery Studio 4.5 [59]. Additional model quality assessment (Table S1) was performed using the VERIFY3D webserver [62], qualitative model energy analysis (QMEAN) [63], protein structure analysis (ProSA) [64], and the program to check the stereochemical quality of protein structures (PROCHECK) [65]. VERIFY3D utilizes pairwise interaction derived energy potentials to evaluate the local quality of a model, based on each residue's structural environment [62]. High-quality structures are predicted to have more than 80% of their residues with a 3D-1D score of 0.2 or higher [62]. The modeled structures had 3D-1D scores of 0.2 or higher (Table S1) in 85.01% of their residues. QMEAN estimates the quality of the submitted model based on its physicochemical properties, then derives a value corresponding to the overall quality of the structure and compares it to the calculated QMEAN scores of 9766 high-resolution experimental structures [63]. The modelled structures of the DPD holo and holo-drug complexes had QMEAN scores of 0.90 and 0.89, which are similar to those of high-resolution experimental structures. ProSA assesses the quality of the submitted model by calculating its potential energy and comparing the resulting score to that of the experimental structures available in the PDB [64]. The Z-score of each monomer of the holo and holo-drug complexes was between −13.41 and −13.56, which is similar to NMR structures of the same size.
PROCHECK assesses the stereochemical quality of the submitted protein models based on their phi/psi angle arrangement and then produces Ramachandran plots, which show the positions of the protein residues in the most favored, allowed, and disallowed regions [65]. Each generated model had more than 83.8% of its residues in the most favored regions, with 16.0% and 0.2% in the allowed and disallowed regions, respectively, suggesting a good distribution of torsion angles (Table S1). Overall, constructed holo and holo-drug complexes with consistently high-quality scores were obtained.
To remove steric clashes in the generated models (holo and holo-drug), 100 steps of minimization with the steepest descent algorithm were performed using the GROMACS 5.14 MD simulation package [66], and the minimized structures were deemed suitable for subsequent calculations.
AMBER Force Field Parameters Generated Using Bonded Approaches
The metal coordination geometries in proteins are highly dependent on the protonation states of the residues involved. Thus, to achieve the correct geometry arrangements in the human DPD protein, the protonation states of all titratable residues were determined at a pH of 7.5, using the H++ webserver (http://biophysics.cs.vt.edu/H++, accessed on 12 December 2019) [67] (Table S2). To ensure correct protonation, a visual inspection of all titratable residues was performed and corrected using Schrödinger Maestro version 11.8 [68]. Table 1 shows the protonation states of residues forming a bond with the metal ions in the Fe 2+ 4 S 2− 4 clusters. Cys was protonated as CYM and interacted with the Fe 2+ center through a sulfur (SG) bond. On the other hand, Gln was protonated as GLH to coordinate with the Fe 2+ ion through the oxygen (OE) atom. The AMBER force field parameters of the Fe 2+ 4 S 2− 4 clusters in the human DPD protein were calculated using two approaches: the original Seminario method (Model 1) and the collation-features Seminario approach in the visual force field derivation tool (VFFDT) (Model 2). In each chain, two distinct residue coordinating environments were identified. Cluster 1026 (4 × Fe 2+ , 4 × S 2− , 3 × Cys and 1 × Gln) coordination was different from those of clusters 1027, 1028, and 1029 (4 × Fe 2+ , 4 × S 2− and 4 × Cys). The four Fe 2+ ions (FE1, FE2, FE3, FE4) bonded to the four S 2− ions (S1, S2, S3, S4) to form the internal coordinates, and the Model 1 parameters for these coordinates were derived at the B3LYP level of theory [69][70][71]. Model 2 calculations failed at the B3LYP level of theory; therefore, the parameters for single internal coordinates (S3 and FE3) were obtained using a Los Alamos double-zeta basis (LSDA/LANL2DZ) approach [72], whereas those for the external coordinates ((Cys and Fe 2+ ) and (Gln and Fe 2+ )) were derived using the geometry, frequency, noncovalent, extended tight-binding (GFN1-xTB) method ( Figure S1) [73,74].
Geometry Optimization
The subset structures for Model 1 attained their local minima at step 238 of the optimization process ( Figure 2C,D). During the optimization, a significant energy variation between steps 120 and 230 was observed. The main cause of this energy variation was the formation of a repulsive Fe 2+ -Fe 2+ contact instead of the expected Fe 2+ -S 2− bonds in cluster 1026. Nevertheless, the subset structures achieved correct optimization while maintaining their geometry, as seen in Figure 2B.
The original Seminario method derived individual point-value parameters for the subsets in Model 1 (Table S3). Contrastingly, the VFFDT (Model 2) approach generated averaged parameters for the internal bond lengths and angles, whereas the external parameters were averaged manually (Table S4). The equilibrium bond length and angle values obtained from QM (Models 1 and 2) showed some deviation from the crystal structure (Tables 2-4). These disparities might have been due to deficient phase information in the X-ray structure, which gives only a static snapshot of a dynamic structure and can contribute spurious values [75]. Moreover, the disparity might have resulted from the lack of solvent effects and intermolecular interactions during the QM gas-phase optimization step [75,76]. As expected, the average bond lengths and angles for Model 2 were within the range of those obtained from Model 1. Furthermore, consistent with previous studies, in both models the Gln(OE)-Fe 2+ bond distance was shorter (Model 1: 1.92 Å and Model 2: 1.93 Å) ( Table 2) than the Cys(S)-Fe 2+ bond, with force constants of 60.40 and 24.97 kcal·mol−1·Å−2, respectively. The short bond length might be attributed to the smaller atomic radius of oxygen in Gln compared to that of sulfur in Cys [1,2]. These values coincided with those obtained from previous related studies concerning Fe 2+ and Cys [54,77,78]. However, there is limited literature on Fe 2+ and Gln force field interactions, which has been sufficiently addressed in this study.
Despite the slight differences, the force constant values from both systems (Model 1 and 2) were within the same range, and consistent with those obtained from previous studies [54,78]. Commonly, force field parameter values of a model derived under different systems are not exact, but fall within an expected range [56,57,79]. In generating new parameters, the state of the structural geometry optimization is thought to be a contributing factor to the varied observations [80]. Previous findings [81] ascribed the discrepancies to the different methods used in obtaining the force constants and the opposite manners in which the connectivities were defined. Most importantly, the derived values showed that both models maintained the subsets' structural geometry following the optimization step.
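The force constants discussed above come from the Seminario procedure, which extracts them from 3 × 3 interatomic sub-blocks of the QM Cartesian Hessian. The following minimal numpy sketch of the bond term is an illustration only, not the MCPB.py or VFFDT implementation; the Hessian sub-block and coordinates are hypothetical placeholders and unit conversion to kcal·mol−1·Å−2 is omitted:

import numpy as np

def seminario_bond_k(hess_ab, r_a, r_b):
    # Harmonic bond force constant between atoms A and B from the 3x3
    # Cartesian Hessian sub-block d2E/(dxA dxB), following Seminario (1996).
    u_ab = r_b - r_a
    u_ab /= np.linalg.norm(u_ab)                 # unit vector along the A-B bond
    k_mat = -0.5 * (hess_ab + hess_ab.T)         # symmetrized, negated sub-block
    eigval, eigvec = np.linalg.eigh(k_mat)       # eigen-analysis of the sub-block
    # Sum of eigenvalues weighted by their projection onto the bond direction
    return float(sum(abs(eigval[i]) * abs(np.dot(u_ab, eigvec[:, i])) for i in range(3)))

# Hypothetical numbers purely for illustration (atomic units):
hess_ab = np.diag([-0.25, -0.20, -0.22])
k_au = seminario_bond_k(hess_ab, np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 2.2]))
print(f"bond force constant (a.u.): {k_au:.3f}")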
RESP Charges
Partial atomic charge calculations were derived for each atom interacting with the Fe 2+ center for the optimized subset structures. Figure S2 and Table S5, illustrate differences in the WT DPD atomic charge distribution in the oxidized subsets. The RESP method derived these charges by fitting the molecular electrostatic potential obtained from the QM calculation, based on the atom-centered point charge model. In their oxidized state, atoms within the DPD Fe 2+ (S 2− , Gln and Cys) center exhibited varied atomic charges due to the large electrostatic environment around the protein's metal sphere. Such variations are known to influence charge transfer at the redox center bringing stability around the coordinating sphere of metalloproteins [79]. As such, they are vital components in the achievement of accurate inter-and intra-molecular potential electrostatic interaction [75]. The newly generated Fe 2+ force field parameters for subsets 1026-A and 1027-A (Tables S6 and S7) were inferred to the remaining Model 1 DPD clusters corresponding to their geometries mentioned earlier. Similarly, the generated internal and external parameters (Table S4) for Model 2 were also inferred to the corresponding clusters, accordingly. At the end, each model featured a holo and a holo-drug (5-FU cancer drug) protein complex, totaling 64 internal (Fe-S) and 32 external (30 Cys-Fe; 2 Gln-Fe) parameter calculations for the DPD (Fe 2+ 4 S 2− 4 ) clusters. In terms of energy profile and range of force constants for Model 1 and 2, there were no significant differences observed in terms of DPD Fe 2+ ion coordination to Cys, Gln residues, and S 2− ions. Tables 2-4 show a summary of equilibrium bond length, angle, and related force constants, with detailed information available in the supporting information (Tables S6 and S7). Dihedral-related force constants were derived manually from the respective structures (Table S8). Accurate parameters are necessary for maintaining the coordinating geometry of a metal center in metalloproteins [55]. Therefore, to evaluate the accuracy and reliability of the derived parameters (Model 1 and 2), all atom MD simulations (150 ns) for holo system and holo-drug complexes were performed. The derived parameters were validated by assessing the root mean square deviation (RMSD) (Figure 3A), the radius of gyration (Rg) ( Figure 3B), and root mean square fluctuation (RMSF) ( Figure 3C). Simulations of both models for holo and holo-ligand complexes showed minimal deviation from their initial structures, which were maintained across the simulation process ( Figure 3A). Model 1 systems (holo and holo-drug) displayed a multimodal RMSD density distribution, implying they sampled various local minima, whereas each of the Model 2 proteins attained a single local minimum (unimodal distribution). The Rg ( Figure 3B) revealed that the compactness of the various protein models remained the same during dynamics. However, differences were observed between the holo and holo-drug bound proteins. The ligand-bound protein was seen to generally have a higher Rg than the non-ligand bound protein in both model systems. This may be attributed to the presence of the drug. Proteins from both models exhibited similar RMSF profiles ( Figure 3C). However, the ligand-bound proteins appeared slightly more flexible than the non-ligand bound ones. 
As expected, the loop regions, which constitute ~43% of the entire protein structure, including the active-site loop (residues 675-679), were the most flexible regions, while the metal site residues displayed minimal fluctuation ( Figure S3). Visualization of the different trajectories through visual molecular dynamics (VMD) [82] verified a high conformational change of the loop areas, while the protein central core containing Fe 2+ clusters had vibrational-like movements.
The profiles of the RMSDs ( Figure 3A) exhibited higher variation in conformational changes across all systems. These variations were more apparent in the Model 1 system's proteins compared to the Model 2 system. Considering the similarity of protein behavior with drug binding, it is apparent that both models showed similar atomic tendencies in the drug and non-drug bound systems. The disparities arising from conformational changes were because of the slight differences in the approaches used in the models' preparation. For instance, fixed bond parameters were assigned between Fe-S, Fe-Fe, and the connecting residues (Fe-Cys or Fe-Gln) of Model 2, based on averages of crystallographic structure (Table S2), whereas Model 1 parameters were attained from single point atom calculation of the crystallographic structure. The RMSF values of both the holo and holo-drug bound complexes demonstrated a region of higher flexibility between residues in all models ( Figure 3C).
Proteins are dynamic entities and as such they undergo conformational changes as part of their functionality. Elucidating these changes is necessary for understanding how their functionality is maintained [83]. Hence, we evaluated the conformational variations sampled by each system during the simulation by plotting the free energy of each system snapshot as a function of RMSD and Rg using the Boltzmann constant ( Figure 4). In both models, free energy investigations revealed similar tendencies to the kernel density map in all the systems. Both holo and holo-drug bound proteins populated three main conformations in Model 1. However, the holo bound protein attained three energy minima at 0.18, 0.20, and 0.25 nm, while the drug-bound protein energy minima were attained later, at 0.22, 0.25, and 0.35 nm. On the other hand, Model 2 equilibrated at single energy minima for both the drug (0.28 nm) and holo (0.22 nm) bound complexes. Model 1 proteins repeatedly attempted to find a high probability region that guaranteed more thermodynamic stability for its conformational state than Model 2. However, upon drug binding the conformation entropy was increased in both models, which destabilized the transitional state and simultaneously slowed down the protein equilibration. Visualization of the trajectories in VMD for establishing the cause of the trimodal ensemble showed alternating movements in the loop regions, including the C-terminal, N-terminal, and active-site loop areas. More importantly, the Fe 2+ 4 S 2− 4 cluster's geometry was maintained during the simulation ( Figure S4).
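The free-energy surfaces in Figure 4 follow from the relative population of (RMSD, Rg) bins, with dG = -kB*T*ln(P/Pmax). A small, self-contained numpy sketch of that reweighting is given below; the rmsd and rg arrays are random placeholders standing in for per-frame values extracted from the trajectories:

import numpy as np

KB = 0.0083144621   # Boltzmann constant in kJ/(mol*K)
T = 300.0           # simulation temperature in K

def free_energy_landscape(rmsd, rg, bins=40):
    # 2D free-energy surface (kJ/mol) from RMSD and Rg time series
    hist, xedges, yedges = np.histogram2d(rmsd, rg, bins=bins, density=True)
    prob = hist / hist.max()
    with np.errstate(divide="ignore"):
        dg = -KB * T * np.log(prob)   # empty (unsampled) bins become +inf
    return dg, xedges, yedges

rng = np.random.default_rng(1)
rmsd = rng.normal(0.22, 0.03, 15000)   # placeholder RMSD values (nm)
rg = rng.normal(3.10, 0.02, 15000)     # placeholder Rg values (nm)
dg, xe, ye = free_energy_landscape(rmsd, rg)
print("lowest basin bin:", np.unravel_index(np.nanargmin(dg), dg.shape))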
Fe 2+ 4 S 2− 4 Clusters Exhibited Stability during MD Simulations
Assessment of the inter-or intra-molecular distances between groups of interest can be used to investigate stability changes during MD simulations [84]. In this study, distances between the center of mass (COM) of; 1) the entire DPD protein and each of the eight Fe 2+ 4 S 2− 4 clusters ( Figure 5A); 2) each chain and the four Fe 2+ 4 S 2− 4 clusters therein ( Figure 5B); and 3) the active site of each chain and its Fe 2+ 4 S 2− 4 clusters, were evaluated ( Figure 5C) for each model (Model 1 and 2: holo and holo-drug). From these calculations, the overall stability of the key components involved in the electron transfer process was evaluated. Generally, the inter-COM distances between the various groups in both models were nearly the same ( Figure 5A-C). Moreover, data were distributed with a less standard deviation (uni-modal distribution), as seen from most kernel density plots, suggesting the distances within metal clusters remained in the same range across the 150 ns simulation and maintained stability within the metal clusters. Thus, the two methods can reliably be used to achieve accurate parameters for other metalloproteins.
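A hedged sketch of how such inter-COM distances can be extracted in Python with MDAnalysis is shown below; the analyses in this work used the GROMACS gmx distance module, so the file names and the cluster selection string here are purely illustrative:

import numpy as np
import MDAnalysis as mda

# File names and the cluster selection are illustrative placeholders.
u = mda.Universe("dpd_holo.gro", "dpd_holo_150ns.xtc")
protein = u.select_atoms("protein")
cluster = u.select_atoms("resid 1026 and segid A")   # one Fe4S4 cluster (hypothetical numbering)

com_dist = []
for ts in u.trajectory:
    d = np.linalg.norm(protein.center_of_mass() - cluster.center_of_mass())
    com_dist.append(d)   # distances in Angstrom (MDAnalysis default units)

com_dist = np.asarray(com_dist)
print(f"mean protein-cluster COM distance: {com_dist.mean():.2f} +/- {com_dist.std():.2f} A")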
In addition to the group inter-COM distance calculations, the distances between the Fe 2+ centers and the coordinating residues were also determined for the holo-drug complexes in both models ( Figure 6). Using this approach, the integrity of the coordinating geometry could be assessed during the simulations. From the results, a high bond length consistency was observed within all Fe 2+ 4 S 2− 4 centers; an indication that the derived parameters were accurately describing the cluster geometries. Furthermore, the obtained bond lengths were in agreement with those reported previously [54,55]. The maintenance of the bond distances signified that the desired functionality and stability had not been jeopardized, given that these are dependent on the protein environment [54]. Notably, Zheng et al.'s protocol for the evaluation of metal-binding structures confirmed that the coordinating tetrahedral geometry of the Fe 2+ 4 S 2− 4 clusters was maintained during the entire simulation run. Although our calculations agreed with previous findings [54,56,77,78], it is worth noting that, to the best of the authors' knowledge, none of these studies featured force field parameters for glutamine interaction with single or multiple Fe 2+ 4 S 2− 4 clusters in a single protein.
Validation of Derived Parameters in the 1H7X Crystal Structure
The derived Fe 2+ 4 S 2− 4 parameters, coordinated uniquely to Cys and Gln residues, were inferred to the template structure (PDB ID: 1H7X) for additional validation purposes. As with the modelled human structures, the four Fe 2+ 4 S 2− 4 clusters in each chain of the template maintained the correct geometry, as shown in Figure S5.
Essential Motions of Protein in Phase Space
Proteins are dynamic entities, whose molecular motions are associated with many biological functions, including redox reactions. Collective coordinates derived from atomic fluctuation principal component analysis (PCA) are widely used to predict a low-dimensional subspace in which essential protein motion is expected to occur [85]. These molecular motions are critical in biological function. Therefore, PCA was calculated to investigate the 3D conformational evolution and internal dynamics of the holo and holo-drug complexes of both models (Model 1 and Model 2). The first (PC1) and the second (PC2) principal components captured the dominant protein motions of all atoms in the 150 ns MD simulation (Figure 7). Both holo structures (Model 1 and Model 2) showed a U-shaped time evolution from an unfolded state (yellow), emerging from simple Brownian motion and ending in a native state (dark blue), over 150 ns. Strikingly, the projections of the holo-drug complexes from both models (1 and 2) adopted a V-shaped time evolution space, emerging from an unfolded state (yellow) and ending in a native state (dark blue). Model 1 and Model 2 holo structures accounted for 44.95% of the total global structural variance. The holo-drug complexes displayed 48.95% and 36.5% of the global total variance for Model 1 and Model 2, respectively. Overall, the holo-drug complexes (Model 1 and Model 2) exhibited an altered conformational evolution over time in comparison to their respective holo structures, suggesting that the newly derived force field parameters in both models did not alter protein function.
Materials and Methods
A graphical workflow of the methods and tools used in this study is presented in Figure 8.
Homology Modeling of Native DPD Protein.
Due to the absence of human DPD protein crystal structural information in the Protein Data Bank (PDB) database [10], a homology modeling approach was used to obtain a complete 3D structure using MODELLER v9.15 [61]. This technique has become indispensable for obtaining 3D model structures of proteins with unknown structures and their assemblies by satisfying spatial constraints based on similar proteins with known structural information [86]. The restraints are derived automatically from associated structures and their alignment with the target sequence. The input consists of the alignment of the sequence to be modeled with a template protein whose structure has been resolved, and a script file (Table S9). At first, the target sequence (human DPD enzyme: UniProt accession: Q12882) was obtained from the Universal Protein Resources [87]. Both HHPred [88] and PRIMO [89] were used to identify a suitable template for modeling the human DPD protein.
From the potential templates listed by the two webservers, PDB 1H7X, a DPD crystal structure from pig with a resolution of 2.01 Å, was identified as the top structural template, having a sequence identity of 93% [1,2]. A PIR alignment file was prepared between the UniProt (accession: Q12882) target sequence and that of the template using multiple sequence comparison by log-expectation (MUSCLE). Therefore, the template PDB ID: 1H7X was utilized. In MODELLER v9.15 [90], a total of 100 human DPD holo models were generated at the "very-slow" refinement level, guided by the selected template. The resulting models, devoid of both the 5-FU drug and the cofactors, were ranked based on their lowest normalized discrete optimized protein energy (z-DOPE) score [60], and the top three models were selected for further modeling. To incorporate the non-protein structural information, each of the selected models was separately superimposed onto the template in Discovery Studio 4.5 [59], and all non-protein information was copied. The coordinates for cofactors and the drug were then transferred directly to the modeled structures. Further quality assessment of the resulting complexes was performed using VERIFY3D [62], PROCHECK [65], QMEAN [63], and ProSA [64]. The best model showing a consistently high-quality score across the different validation programs was chosen for further studies.
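For readers wishing to reproduce the modeling step, a minimal MODELLER script of the kind described above is sketched here; the alignment file name and sequence labels are placeholders, and only the "very-slow" refinement and 100-model settings mirror the protocol in the text:

from modeller import environ
from modeller.automodel import automodel, assess, refine

env = environ()
env.io.hetatm = False   # cofactors and 5-FU were added afterwards in Discovery Studio

# 'dpd_pig.ali' is a placeholder PIR alignment of Q12882 against template 1H7X
a = automodel(env,
              alnfile="dpd_pig.ali",
              knowns="1H7X",
              sequence="DPD_human",
              assess_methods=(assess.DOPE, assess.normalized_dope))
a.starting_model = 1
a.ending_model = 100
a.md_level = refine.very_slow   # "very-slow" refinement level used in the text
a.make()

# Rank the models by normalized DOPE (z-DOPE) and report the best three
ok = [m for m in a.outputs if m["failure"] is None]
ok.sort(key=lambda m: m["Normalized DOPE score"])
print([(m["name"], round(m["Normalized DOPE score"], 2)) for m in ok[:3]])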
Protonation of Titratable Residues.
To account for the correct protonation states of the system, all DPD titratable residues were protonated at pH 7.5 [1], a system salinity of 0.5 M, and internal and external default dielectric constants of 80 and 10, respectively, in the H++ web server [67]. System coordinates (crd) and topology (top) files were used to build protonated protein structure files. A visual inspection of all titratable residues was performed, and incorrect protonation corrected using Schrödinger Maestro version 11.8.
New Force Field Parameter Generation.
Prior to the parameter generation process, the residue coordinations present in the chain-A and chain-B Fe 2+ 4 S 2− 4 centers were evaluated to identify representative subsets. Two unique coordination subset arrangements, viz. 1026A (4 × Fe 2+ , 4 × S 2− , 3 × Cys and 1 × Gln) and 1027B (4 × Fe 2+ , 4 × S 2− and 4 × Cys), were identified. The two subsets (1026A and 1027B) represented the coordinating geometry of all Fe 2+ 4 S 2− 4 clusters in the protein. Subsequently, force field parameters describing the coordinating interactions in these unique centers were determined via two approaches. First, the original Seminario method (Model 1) was implemented using the bonded model approach in AmberTools16 [57] and the Python-based metal center parameter builder (MCPB) [46]. Gaussian 09 [91,92] input files (com) of the protonated protein incorporating the subset structures (1026A and 1027B) were prepared. Thereafter, their geometries were optimized utilizing the hybrid DFT method at the B3LYP correlation functional level of theory. This process utilized a double split-valence basis set with polarization [6-31G(d)] [71,92] (Table S1). Sub-matrices of the Cartesian Hessian matrix were used in the derivation of the metal geometry force field parameters [56]. Bond and angle force constants were obtained via fitting to harmonic potentials. The potential energy of the relative position of each atom in the system was determined from the AMBER force field energy function, Equation (1):

E = Σ_bonds K_r (r − r_eq)² + Σ_angles K_θ (θ − θ_eq)² + Σ_dihedrals (V_n/2)[1 + cos(nφ − γ)] + Σ_(i<j) [A_ij/R_ij¹² − B_ij/R_ij⁶ + q_i q_j/(ε R_ij)], (1)

from which the bond lengths, angle values, torsion values, and interatomic distances were obtained. The first and second terms of the harmonic potential energy function relate to bond stretching and angle bending, respectively, whereas the torsion angles are described by the third term. Lastly, the van der Waals forces and electrostatic interactions are given by the non-bonded energy terms involving the Lennard-Jones (12-6) potential and the Coulomb potential, respectively [32,56]. The optimized/minimized structures were then visualized in GaussView 5.0.9 [93] to confirm that the bonds in the centers were intact. The atomic charges of the optimized subset structures were then derived from the electrostatic potential (ESP). However, ESP assigns unreasonable charge values to buried atoms, which impairs their conformational transferability. Therefore, the restrained electrostatic potential (RESP) fitting technique, which considers the Coulomb potential for the calculation of electrostatic interactions, was employed to address these issues. This method has been highly regarded and widely used for assigning partial charges to various molecules utilizing the B3LYP/6-31G(d) gas phase [45]. Restraints, in terms of penalty functions, are applied to the buried atoms, leading to multiple possible charge values; hence, the quality of fit to the QM ESP is not compromised [94]. Herein, a default Merz-Kollman restrained electrostatic potential (RESP) radius of 2.8 Å was allocated to the metal centers. An additional approach (herein named Model 2), using the collation-features Seminario method of the VFFDT program, was used [57]. Analysis data were acquired following optimization of the subset Fe 2+ -S 2− , Fe 2+ -Cys, and Fe 2+ -Gln coordination; the calculations were performed using density functional theory (DFT) featuring LSDA/LANL2DZ (Table S2) [72]. This factored in the internal covalent bonds; note that the calculation was not successful at the B3LYP level of theory [69]. The external non-covalent bond calculation was determined by GFN1-xTB [73,74].
Retrieval of the force field parameters for the entire molecule was done through the Protocol menu item "FF" for the whole "General Small Molecule". Since the system in this study was symmetrical, the atom types were left identical to Fe or S. The AMBER force field parameters for the Fe 2+ metal center bonds and angles were then generated automatically. Individual detailed statistics were derived, but only the final values were utilized for further calculations. The obtained parameters were then inferred to the other clusters in the modeled structures, as well as the template crystal structure (PDB ID: 1H7X), using the LEaP [95] program. This was based on the similarity of the clusters' coordinating geometry. As such, the parameters for cluster 1026A were inferred to 1026B, and those for 1027A were inferred to 1027B, 1028A, 1028B, 1029A, and 1029B, as they depict an identical coordination geometry. In total, 2 × ([Fe 2+ 4 S 2− 4 (S-Cys) 3 (S-Gln)]) and 6 × ([Fe 2+ 4 S 2− 4 (S-Cys) 4 ]) cluster parameters were derived for each model. No other 3D structure with metal centers such as those of the human DPD coordinating environment was available in the PDB. Therefore, the pig crystal structure was used to validate the reliability and accuracy of the newly generated force field parameters.
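A compact orchestration sketch of the Model 1 (MCPB.py) workflow is given below. It is written in Python for illustration only; the input-file name and tleap output name are placeholders, and the exact MCPB.py input keywords should be taken from the AMBER manual rather than from this sketch:

import subprocess

MCPB_INPUT = "dpd_1026A.in"   # placeholder MCPB.py input describing the cluster and force field

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Step 1: build the small/large metal-site models and the Gaussian input files
run(f"MCPB.py -i {MCPB_INPUT} -s 1")
# ... run the Gaussian 09 geometry optimization / force-constant and Merz-Kollman ESP jobs here ...
# Step 2: derive bond and angle parameters from the Hessian (Seminario method)
run(f"MCPB.py -i {MCPB_INPUT} -s 2")
# Step 3: RESP charge fitting for the metal-site residues
run(f"MCPB.py -i {MCPB_INPUT} -s 3")
# Step 4: write the tleap input that assembles the final topology
run(f"MCPB.py -i {MCPB_INPUT} -s 4")
run("tleap -s -f dpd_1026A_tleap.in")   # tleap input file name is illustrative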
Force Field Parameters Validation and Analysis
To evaluate the reliability of the generated parameters derived from the original and automated Seminario approaches, duplicate all-atom MD simulations were conducted using the GROMACS 5.14 MD package [66]. For each model system (Model 1, Model 2, 1H7X crystal structure), the holo (protein with only cofactors) and holo-drug (5-FU) complexes were considered for simulation studies. At first, AMBER topologies for each system were generated with LEaP using the AMBER ff14SB force field to incorporate all the generated parameters [96]. The resulting system topologies were converted to GROMACS-compatible input files for the structure (gro) and the topology (top), with the correct atom types and charges, using the AnteChamber Python Parser interface (ACPYPE) tool [97]. The systems were then solvated in an octahedron box using the simple point charge (SPC216) water model [98], with a padding distance of 10 Å set between the protein surface and the box face. The net charge of each system was subsequently neutralized by adding 0.15 M NaCl counter-ions [99]. The neutralized systems were then subjected to an energy minimization phase (without constraints) using the steepest descent integrator with a step size of 0.01 nm, until a maximum force tolerance of 1000 kJ·mol−1·nm−1 was attained. This was necessary to get rid of steric clashes that may have resulted during incorporation of the parameters and water molecules. Subsequently, the systems were equilibrated to ensure that they attained the correct temperature and pressure using a two-step ensemble protocol (each step 100 ps). First, the temperature was set at 300 K in the canonical ensemble (NVT: constant number of particles, volume, and temperature) using a modified Berendsen thermostat. This was followed by pressure equilibration at 1 atm (NPT: constant number of particles, pressure, and temperature) using the Parrinello-Rahman barostat algorithm [100]. The ensembles utilized the particle mesh Ewald (PME) [101] method for long-range electrostatic interactions with a cut-off of 8.0 Å, and the LINCS algorithm was used to constrain bonds between all atoms [102]. Finally, production MD simulations of 150 ns were performed for all the systems at the Centre for High Performance Computing (CHPC) in Cape Town, South Africa, using 72 Linux CPU cores, with a time integration step of 2 fs. Coordinates were written to file every 10 ps. The obtained MD trajectories were stripped of all periodic boundary conditions (PBC) and fitted to the reference starting structure.
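The simulation protocol above can be scripted end to end. The following is a hedged sketch of the GROMACS command sequence driven from Python; the mdp files, input/output file names, and ACPYPE output names are placeholders, and the exact options used in this work are those stated in the text:

import subprocess

def gmx(args):
    # Thin wrapper so each GROMACS step is logged before it runs
    cmd = "gmx " + args
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# ACPYPE output (dpd.gro / dpd.top) is assumed; names below are placeholders.
gmx("editconf -f dpd.gro -o boxed.gro -bt octahedron -d 1.0")            # 10 A padding
gmx("solvate -cp boxed.gro -cs spc216.gro -p dpd.top -o solvated.gro")
gmx("grompp -f ions.mdp -c solvated.gro -p dpd.top -o ions.tpr")
gmx("genion -s ions.tpr -p dpd.top -o neutral.gro -pname NA -nname CL -neutral -conc 0.15")  # select SOL when prompted
gmx("grompp -f minim.mdp -c neutral.gro -p dpd.top -o em.tpr")
gmx("mdrun -deffnm em")                                                  # steepest-descent minimization
gmx("grompp -f nvt.mdp -c em.gro -r em.gro -p dpd.top -o nvt.tpr")
gmx("mdrun -deffnm nvt")                                                 # 100 ps NVT at 300 K
gmx("grompp -f npt.mdp -c nvt.gro -r nvt.gro -t nvt.cpt -p dpd.top -o npt.tpr")
gmx("mdrun -deffnm npt")                                                 # 100 ps NPT at 1 atm
gmx("grompp -f md.mdp -c npt.gro -t npt.cpt -p dpd.top -o md_150ns.tpr")
gmx("mdrun -deffnm md_150ns")                                            # 150 ns production run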
Root Mean Square Deviation, Root Mean Square Fluctuation, and Radius of Gyration Analysis
Global and local conformational behaviors of the replicate ensembles were determined using various GROMACS modules, viz. gmx rms, gmx rmsf, gmx gyrate, and gmx distance, and analyzed in RStudio [103]. These modules were used to calculate the root mean square deviation (RMSD), root mean square fluctuation (RMSF), radius of gyration (Rg), and inter-center-of-mass distances between groups of interest, respectively. The overall conformational changes per system were inspected using visual molecular dynamics (VMD) [82] to ensure that the derived parameters correctly maintained the geometry of the various Fe 2+ 4 S 2− 4 clusters.
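As a cross-check of the gmx modules listed above, the same observables can be computed in Python. The sketch below uses MDAnalysis purely as an illustration, not the analysis pipeline of the study; the file names are placeholders and the trajectory is assumed to be PBC-corrected and aligned:

import MDAnalysis as mda
from MDAnalysis.analysis import rms

u = mda.Universe("dpd_holo.gro", "dpd_holo_150ns.xtc")   # placeholder file names

# RMSD of backbone atoms relative to the first frame
rmsd = rms.RMSD(u, u, select="backbone").run()
print("final backbone RMSD (A):", round(rmsd.results.rmsd[-1, 2], 2))

# Per-residue RMSF of C-alpha atoms (assumes a pre-aligned trajectory)
calphas = u.select_atoms("name CA")
rmsf = rms.RMSF(calphas).run()
print("most flexible residue:", calphas.resids[rmsf.results.rmsf.argmax()])

# Radius of gyration time series for the whole protein
protein = u.select_atoms("protein")
rg = [protein.radius_of_gyration() for ts in u.trajectory]
print("mean Rg (A):", round(sum(rg) / len(rg), 2))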
Principal Component Analysis
Principal component analysis (PCA) was conducted in MDM-TASK-web to investigate the time evolution of the protein's conformational changes in the MD trajectories [85,104]. PCA is a linear transformation technique that extracts the most important elements from a data set by using a covariance matrix built from the atomic coordinates defining the protein's accessible degrees of freedom. The calculation of the coordinate covariance matrix for the Cα and Cβ atoms was performed after an RMS best-fit of the trajectories to an average structure [85,104]. The corresponding eigenvectors and eigenvalues were then obtained from the diagonalized matrix, and the protein coordinates were projected onto the eigenvectors. PC1 versus PC2 plots were then derived from the normalized primary and secondary projections.
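A bare-bones version of this essential-dynamics calculation (performed here with MDM-TASK-web) can be written with MDAnalysis and scikit-learn. The sketch below is illustrative only; the file names are placeholders and the trajectory is assumed to have been RMS-fitted to an average structure beforehand:

import numpy as np
import MDAnalysis as mda
from sklearn.decomposition import PCA

u = mda.Universe("dpd_holo.gro", "dpd_holo_150ns.xtc")   # placeholder file names
atoms = u.select_atoms("name CA CB")                      # C-alpha and C-beta atoms, as in the text

# Collect coordinates frame by frame into an (n_frames, 3N) matrix
coords = np.array([atoms.positions.ravel() for ts in u.trajectory])

pca = PCA(n_components=2)
proj = pca.fit_transform(coords - coords.mean(axis=0))    # PC1/PC2 projections per frame
print("variance captured by PC1+PC2: "
      f"{100 * pca.explained_variance_ratio_.sum():.1f}%")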
To ascertain how accurate the generated force field parameters were, the average bond lengths and force constants from the derived parameters were compared to those of the X-ray structure. All statistical calculations were performed using the Welch t-test in RStudio v1.1.456 [103], with a p-value < 0.05 considered significant.
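The statistical comparison described here was done with a Welch t-test in RStudio; an equivalent call in Python is shown below for illustration, with placeholder arrays standing in for the derived and crystallographic Fe-S bond lengths:

from scipy import stats

# Placeholder values standing in for derived vs. crystallographic Fe-S bond lengths (A)
derived = [2.29, 2.31, 2.28, 2.33, 2.30, 2.27, 2.32, 2.29]
crystal = [2.28, 2.30, 2.29, 2.31, 2.27, 2.30, 2.32, 2.28]

# equal_var=False gives the Welch (unequal-variance) form of the t-test
t_stat, p_value = stats.ttest_ind(derived, crystal, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.3f}")   # p < 0.05 would indicate a significant difference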
Conclusions
In addition to the nucleotide metabolizing function of the DPD metalloenzyme in humans, the dimeric protein also serves as an important anti-cancer drug target [4][5][6]. Deficiency or dysfunction of the enzyme, because of mutations, results in increased exposure to active fluoropyrimidine metabolites, leading to severe toxicity effects. Computational approaches such as MD simulations have become integral components of elucidating protein function, as well as the effects of mutations [4]. MD simulations allow the elucidation of the conformational evolution of protein systems over time during a reaction process [26,31,32]. MD simulations require appropriate mathematical functions and a set of parameters, collectively known as force fields, which describe the protein energy as a function of its atomic coordinates. In cases where adequate parameters are lacking, especially those describing non-protein components in a system, additional descriptors are necessary. In this work, which forms a platform for future studies towards anti-cancer personalized medicine, we reported new validated AMBER parameters that can be used to accurately describe the complex Fe 2+ 4 S 2− 4 clusters in the DPD protein and related systems. This was motivated by the absence of ready-to-use force field parameters enabling in silico studies on the DPD system. The development of combined QM/MM methods has provided an effective and accurate theoretical description of molecular systems [92]. They enable a comprehensive analysis of the structural, functional, and coordinating environment in metal-binding sites [26]. Thus, we highlighted the capabilities of two similar methods, which differ in their approaches and algorithmic details, for deriving authentic force field parameters for Fe 2+ centers in the DPD protein.
First and foremost, we reported the generation of force field parameters using the original Seminario method [56]. We went further and exploited the collation features of the VFFDT Seminario method for obtaining the force field parameters of the same Fe 2+ ions as a supportive measure [57]. This was performed by considering the dimeric functionality of the human DPD protein, which relies on the well-organized inter-chain electron transfer across an eight Fe 2+ 4 S 2− 4 cluster complex. A double displacement reaction across the two chains leads to the activation and deactivation of the third most commonly prescribed anticancer (5-FU) drug globally [111]. It was remarkable that we successfully derived the desired force constants and bond distances for the Fe 2+ centers using both Seminario approaches. The parameters obtained from other studies [54] did not address the coordinating geometry of the clusters in this study. Moreover, none of the studies focused on force field parameters for multiple clusters in a protein. Therefore, from the range of force field parameters generated from both approaches, it would be best to obtain averages of such force fields for future use in other similar systems. These averaged values will allow for some degree of transferability.
Above all, the derived parameters can easily be incorporated into established MM packages. Furthermore, we ascertained that, irrespective of the DFT algorithm applied (B3LYP HF/6-31G*, LSDA/LANL2DZ, or GFN1-xTB), the original Seminario approach is not inferior to the modified Seminario approach (the collation features of VFFDT). Despite the role of DFT calculations (such as B3LYP) in deciphering the reactivity mechanisms of DPD systems, the method has the major limitation of neglecting dispersion interactions [112]. As a result, additional correction approaches, such as the DFT-D3 [113], DFT-D [114], and BJ-damping [115] methods, are included in the calculations. Where dispersion interactions were most critical, in Model 2, the DFT-D3 correction that is part of Grimme's GFN1-xTB was used; for Model 1, the most suitable DFT dispersion correction will be applied in future calculations. Owing to possible paramagnetic effects arising from unpaired electrons in the non-trivial Fe²⁺₄S²⁻₄ clusters of the DPD system, an attempt to implement unrestricted calculations in Model 2 resulted in a higher energy than that obtained under restricted conditions.
The validation of the Fe²⁺ force field parameters obtained from this study using MD simulations produced satisfactory results. These parameters will provide further atomistic and electronic insight into the effects of site-specific interactions on the reaction path in the DPD protein and its detrimental mutants [26,31,32].
Most importantly, concerning the generation of AMBER force field parameters, the authors are not aware of any other compatible parameters for this unique system. The derived novel force field parameters have paved the way for further simulations and an enhanced mechanistic understanding of metal cluster function in the human DPD protein through higher-level MD simulation methods. Additionally, the derived parameters are currently being applied to study the structural and stability changes caused by existing mutations in the human DPD protein. Together, the results from these studies will provide atomistic details of mutation effects in the DPD protein and open a platform for in silico cancer pharmacogenomics and drug discovery research on 5-FU efficacy and toxicity. | 2021-04-28T13:26:46.752Z | 2021-04-21T00:00:00.000 | {
"year": 2021,
"sha1": "f282f7967f49a2c85dae508bbb2687354de3a276",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/molecules26102929",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "897e0319922a130565b08b19435bdb038d5da0f2",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
18314433 | pes2o/s2orc | v3-fos-license | Nature of Ground State Incongruence in Two-Dimensional Spin Glasses
We rigorously rule out the appearance of multiple domain walls between ground states in 2D Edwards-Anderson Ising spin glasses (with periodic boundary conditions and, e.g., Gaussian couplings). This supports the conjecture that there is only a single pair of ground states in these models.
A fundamental problem in spin glass physics is the multiplicity of infinite-volume ground states in finite-dimensional short-ranged systems, such as the Edwards-Anderson (EA) [1] Ising spin glass. In 1D, there is no frustration and only a single pair of (spin-reversed) ground states. In the mean-field Sherrington-Kirkpatrick (SK) model [2], there are presumed to be (in some suitably defined sense) infinitely many ground state pairs (GSP's) [3]. One conjecture, in analogy with the SK model, is that finite-D realistic models with frustration have infinitely many GSP's; for a review, see [4,5]. A different conjecture, based on droplet-scaling theories [6,7,8], is that there is only a single GSP in all finite D. In 2D and 3D, the latter scenario has received support from recent simulations, some [9,10] based on "chaotic size dependence" [11] and some [12] using other techniques.
In this paper, we provide a significant analytic step towards a resolution of this problem in 2D, by ruling out the presence of multiple domain walls between ground states. We anticipate that the ideas and techniques introduced here will ultimately yield a solution to the problem of ground state multiplicity in two dimensions, and that at least some of them may prove to be useful in higher dimensions as well. Though our result is more general, we confine our attention to the nearest-neighbor EA Ising spin glass, with Hamiltonian

H_J(σ) = − Σ_{⟨x,y⟩} J_xy σ_x σ_y ,    (1)

where J denotes a specific realization of the couplings J_xy, the spins σ_x = ±1 and the sum is over nearest-neighbor pairs x, y only, with the sites x, y on the square lattice Z². The J_xy's are independently chosen from a mean-zero Gaussian (or any other symmetric, continuous distribution with unbounded support) and the overall disorder measure is denoted ν(J). A ground state is an infinite-volume spin configuration whose energy (governed by Eq. (1)) cannot be lowered by flipping any finite subset of spins. That is, all ground state spin configurations must satisfy the constraint

Σ_{⟨x,y⟩ ∈ C} J_xy σ_x σ_y ≥ 0    (2)

along any closed loop C in the dual lattice (the sum running over the couplings crossed by C). In any L × L square S_L (centered at the origin) with, e.g., periodic b.c.'s, there is (with probability one) only a single finite-volume GSP (the spin configurations of lowest energy subject to the b.c.). An infinite-volume ground state can be understood as a limit of finite-volume ones: consider the ground state σ^(L0,L) inside any given S_L0, but with b.c.'s imposed on S_L and L ≫ L0. An infinite-volume ground state (satisfying Eq. (2)) is generated whenever, for each (fixed) L0, σ^(L0,L) converges to a limit as L → ∞ (for some sequence of b.c.'s, which may depend on the coupling realization). If many infinite-volume GSP's exist, then a sequence as L → ∞ of finite-volume GSP's with coupling-independent b.c.'s will generally not converge to a single limit (i.e., σ^(L0,L) continually changes as L → ∞), a phenomenon we call chaotic size dependence [11]. So a numerical signal of the existence of many ground states is that the GSP in S_L with periodic b.c.'s varies chaotically as L changes [9,10,11]. It is important to distinguish between two types of multiplicity. The symmetric difference α∆β between two GSP's α and β is the set of all couplings that are satisfied in one and not the other. A domain wall (always defined relative to two GSP's) is a cluster (in the dual lattice) of the couplings satisfied in one but not the other state. So α∆β is the union of all of their domain walls, and may consist of a single one or many. Two distinct GSP's are incongruent [13] if α∆β has nonvanishing density in the set of all bonds; otherwise the two are regionally congruent. Incongruent GSP's can in principle have one or more positive-density domain walls, or instead infinitely many of zero density.
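As an illustrative numerical aside (not part of the analysis in this paper), the following Python sketch brute-forces the finite-volume GSP of a small EA sample with Gaussian couplings and periodic b.c.'s, and verifies that no single-spin flip lowers its energy; the lattice size and random seed are arbitrary choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
L = 4                                   # small enough to enumerate all 2^(L*L) configurations
Jh = rng.standard_normal((L, L))        # coupling to the right neighbor (periodic wrap)
Jv = rng.standard_normal((L, L))        # coupling to the neighbor below (periodic wrap)

def energy(s):
    """EA Hamiltonian with periodic b.c.: H = -sum J_xy s_x s_y over nearest-neighbor pairs."""
    return -(np.sum(Jh * s * np.roll(s, -1, axis=1)) +
             np.sum(Jv * s * np.roll(s, -1, axis=0)))

best, best_e = None, np.inf
for bits in itertools.product([-1, 1], repeat=L * L):
    s = np.array(bits).reshape(L, L)
    e = energy(s)
    if e < best_e:
        best, best_e = s, e

# Illustrate the defining ground-state property for the simplest finite flips:
# flipping any single spin cannot lower the energy of the finite-volume GSP.
for i in range(L):
    for j in range(L):
        t = best.copy()
        t[i, j] *= -1
        assert energy(t) >= best_e
print("finite-volume GSP energy:", best_e)
```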
If there are multiple GSP's, the interesting, and physically relevant, situation is the existence of incongruent states. Regional congruence is of mathematical interest, but to see it would require a choice of b.c.'s carefully conditioned on the coupling realization J . It is not currently known how to choose such b.c.'s. Numerical treatments that look for multiple GSP's implicitly search for incongruent ground states, and it is the question of their existence and nature in 2D that we treat here.
To state our result precisely, we introduce the concept of a metastate. For spin glasses, this was proposed in the context of low temperature states for large finite volumes [14] (and shown to be equivalent to an earlier construct of Aizenman and Wehr [15]), and its properties were further analyzed in [16,17]. In the current context, a (periodic b.c.) metastate is a measure on GSP's constructed via an infinite sequence of squares S L , with both the L's and the (periodic) b.c.'s coupling-independent. Roughly speaking, the metastate here provides the probability (as L → ∞) of various GSP's appearing inside any fixed S L 0 . It is believed (but not proved) that different sequences of L's yield the same (periodic b.c.) metastate.
If there are infinitely many (incongruent) GSP's, a metastate should be dispersed over them, giving their relative likelihood of appearance in typical large volumes. If there is no incongruence, the metastate would be unique and supported on a single GSP, and that GSP will appear in most (i.e., a fraction one) of the S L 's [18].
We now state the main result of this paper. It shows that if more than a single GSP is present in the periodic b.c. metastates, then two distinct GSP's cannot differ by more than a single domain wall. After we present the proof of this statement, we will discuss why this result supports the existence of only a single GSP in 2D.
Theorem. In the 2D EA Ising spin glass with Hamiltonian (1) and couplings as specified earlier, two infinite-volume GSP's chosen from the periodic b.c. metastates are either the same or else differ by a single, non-self-intersecting domain wall, which has positive density.
We sketch the proof of this theorem in several steps; a full presentation will be given elsewhere [19]. First, some elementary properties of (zero-temperature) domain walls: Lemma 1. A 2D domain wall is infinite and contains no loops or dangling ends. Proof. A domain wall between two spin configurations is a boundary separating regions of agreement from disagreement and thus cannot have dangling ends. To rule out loops, note that the sum Σ_{⟨x,y⟩} J_xy σ_x σ_y along any such loop must have opposite signs in the two GSP's, violating Eq. (2), unless the sum vanishes. But this occurs with probability zero because the couplings are chosen independently from a continuous distribution.
We now construct a periodic b.c. metastate κ_J, which will provide a measure on the domain walls between GSP's (that appear in κ_J). As in construction II of [20] (but at zero temperature), consider for each square S_L two sets of variables, the couplings J^(L) (chosen from, e.g., the Gaussian distribution) and the bond variables σ_x^(L) σ_y^(L) for the GSP ±σ^(L). Consider fixed sets of both random variables as L → ∞; by compactness, there exists a subset of L's along which the joint distribution converges to a translation-invariant infinite-volume (joint) measure. This limit distribution is supported on J's that arise from ν, the usual independent (e.g., Gaussian) distribution on the couplings, and the conditional (on J) distribution κ_J is supported on (infinite-volume) GSP's for that J.
A metastate κ J yields a measure D J on domain walls. This is done by taking two (replica) GSP's from κ J to obtain a configuration of (unions of) domain walls (i.e., the set of domain walls one would see from two GSP's chosen randomly from κ J ). If one then integrates out the couplings, one is left with a translation-invariant measure D on the domain wall configurations themselves.
This leads to important percolation-theoretic features of domain walls between GSP's in κ J . Some of these [21] are stated in the following: Lemma 2. Distinct 2D GSP's α and β from κ J must (with probability one) be incongruent and the domain walls of their symmetric difference α∆β must be non-intersecting, non-branching paths, that together divide Z 2 into infinite strips and/or half-spaces.
Proof. This lemma, from [21], uses a technique introduced in [22]. First we note that by the translation-invariance of D, any "geometrically defined event", e.g., that a bond belongs to a domain wall, either occurs nowhere or else occurs with strictly positive density. This immediately yields incongruence. Suppose now that an intersection/branching occurs at some site z (in the dual lattice). Then there are at least three (actually four) infinite paths in α∆β that start from z, and they cannot intersect in another place, because that would form a loop, violating Lemma 1. But then translation-invariance implies a positive density of such z's. The tree-like structure of α∆β implies that in a square with p such z's, the number of distinct such paths crossing its boundary is at least proportional to p. Since p scales like L 2 , there is a contradiction as L → ∞, because the number of distinct paths cannot be larger than the perimeter, which scales like L. Similar arguments complete the proof.
The picture we now have for α∆β is a union of one or more infinite domain walls (each of which divides the plane into two infinite disjoint parts) that neither branch, intersect, nor form loops, and that mostly remain within O(1) distance from one another. We now begin a lengthy argument to show that there in fact cannot be more than a single domain wall. The first step is to introduce the notion of a "rung" between adjacent domain walls.
A rung R in α∆β is a path of bonds in the dual lattice connecting two distinct domain walls, and with only the first and last sites in R on any domain wall. So each of the couplings in R is satisfied in both α and β or unsatisfied in both. The energy E_R of R is defined to be

E_R = Σ_{⟨x,y⟩ ∈ R} J_xy σ_x σ_y ,    (3)

with σ_x σ_y taken from α (or equivalently, β). It must be that E_R > 0 (with probability one) for the following reason. Suppose that a rung could be found with negative energy; by translation-invariance (and arguments somewhat like those used for Lemma 2), there would then be an infinite set of rungs with negative energy connecting some two domain walls. Consider the "rectangle" that is bounded by two such rungs and the connecting domain wall pieces. The sum of J_xy σ_x σ_y along the couplings in the two domain wall pieces would be positive in one of α, β and negative in the other; hence, the loop formed by the boundary of this rectangle would violate Eq. (2) in α or β, leading to a contradiction. However, we can impose a more serious constraint on E_R; namely, that it must be bounded away from zero for all R between two fixed domain walls. To explain this, we first consider a single arbitrary bond b, an S_L large enough to contain b, a coupling realization J^(L) and the corresponding GSP α^(L). Now let J_b vary with all other couplings fixed. It is easy to see that there will be a transition value K_b^(L) (depending on the other couplings but not on J_b) such that the GSP is one spin configuration when J_b is below K_b^(L) and a different one, α_b^(L), when it is above. What happens when L → ∞? As in the construction of metastates, we obtain a translation-invariant infinite-volume joint probability distribution on J (the couplings J_b), α (a GSP for J), K (transition values K_b for J, α) and α* (α_b's for J, α, K). In this limit, J is chosen from the usual disorder distribution ν, then α from the metastate κ_J and finally K and α* from some measure κ_{J,α}. The symmetric difference α∆α_b may consist of a single finite loop or else of one or more infinite disconnected paths, but in all cases some part must pass through b. The lack of dependence of K_b^(L) on J_b implies that even after L → ∞, K_b and J_b are independent random variables; this independence leads to the next two lemmas.
Lemma 3. With probability one, no coupling J b is exactly at its transition value K b . Proof. From the independence of J b and K b , and the continuity of the distribution of J b , it follows that there is probability zero that J b − K b = 0, much like in the proof of Lemma 1.
Lemma 4. The rung energies E_R′ between two fixed (adjacent) domain walls cannot be arbitrarily small; i.e., there is zero probability that E′, the infimum of all such E_R′'s, will be zero.
Proof. Were this not so, there would be (by translation-invariance arguments) an infinite set of rungs R′ with E_R′ < ε, for any ε > 0. That implies (by the "rectangular" construction below Eq. (3)) that each J_b along the two domain walls would be at the transition value K_b, either for α or for β, violating Lemma 3.
The next lemma relates the location of the droplet boundary, α∆α_a, when α_a replaces α, to the "flexibility" of a. The flexibility F_a of a bond a (in a (J, α, K, α*) configuration) is defined as |J_a − K_a|; the larger the flexibility, the more stable is α under changes of J_a. Lemma 5. If F_b > F_a, then there is zero probability that α∆α_a passes through b.
Proof. For finite L, this is an elementary consequence of the fact that for e = a or b, F_e^(L) ≡ |J_e − K_e^(L)| is the minimum, over all droplets whose boundary passes through e, of the droplet flip energy cost. After L → ∞, such a characterization of F_e may not survive, but what does survive is that α∆α_a does not go through b.
The next lemma completes our proof that for GSP's α and β chosen from κ_J, α∆β cannot consist of more than a single domain wall, since otherwise there would be an immediate contradiction with Lemma 4. For the proof, we need the notion of "super-satisfied". It is easy to see that a coupling J_xy is satisfied in every ground state if |J_xy| > min{M_x, M_y}, where M_x is the sum of the three other coupling magnitudes |J_xz| touching x, and M_y is defined similarly. Such a coupling J_xy, called super-satisfied, clearly cannot be part of any domain wall. Lemma 6. There is zero probability that E′ > 0.
Proof. Suppose E ′ > 0 (with positive probability); we show this leads to a contradiction. First we find, as in Fig. 1, a rung R with E R − E ′ = δ strictly less than the flexibility values (for both α and β) of two couplings b 1 , b 2 along the "left" of the two domain walls, b 1 "above" and b 2 "below" the rung. Such an R, b 1 and b 2 must exist by Lemma 3 (and translation-invariance arguments).
But we also want a situation, as in Fig. 1, where all the (dual lattice) non-domain-wall couplings that touch the left domain wall between b 1 and b 2 (other than the first coupling J a in R) are super-satisfied, and remain so regardless of changes of J a . How do we know that such a situation will occur (with non-zero probability)? If necessary, one can first adjust the signs and then increase the magnitudes (in an appropriate order) of these (ten) couplings, so that they first become satisfied and then super-satisfied. This can be done in an "allowed" way because of our assumption that the distribution of individual couplings has unbounded support. Also, this can be done without causing a replacement of either α or β, without changing E R , without decreasing any other E R ′ and without decreasing the flexibilities of b 1 or b 2 . Starting from a positive probability event, such an (allowed) change of finitely many couplings in J yields an event which still has non-zero probability.
Next, suppose we move J a toward its transition value K a by an amount slightly greater than δ. The geometry (of Fig. 1) and Lemma 5 forbid the replacement of either α or β, because it is impossible, under the conditions given, for α∆α a or β∆β a to connect to the left end of bond a. But this move reduces E R below E R ′ for any R ′ not containing a, contradicting translation-invariance.
This completes the proof of the theorem: if distinct α, β occur, they differ by at most a single domain wall. Although this does not yet rule out many ground states in the 2D periodic b.c. metastate, it greatly simplifies the problem by ruling out all but one possibility about how GSP's may differ.
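The "super-satisfied" criterion used in the proof above is straightforward to evaluate for a given coupling realization; a minimal sketch, with hypothetical container types for the couplings and the neighbor lists:

```python
def is_super_satisfied(J, x, y, neighbors):
    """Check |J_xy| > min(M_x, M_y), where M_x sums the magnitudes of the three
    other couplings touching x (and similarly for M_y). `J` maps a frozenset of
    two sites to the coupling value; `neighbors` maps a site to its four nearest
    neighbors. Both containers are hypothetical, for illustration only."""
    Jxy = J[frozenset((x, y))]
    M_x = sum(abs(J[frozenset((x, z))]) for z in neighbors[x] if z != y)
    M_y = sum(abs(J[frozenset((y, z))]) for z in neighbors[y] if z != x)
    return abs(Jxy) > min(M_x, M_y)
```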
We expect, though, that these single domain walls do not exist. There are reasonable arguments and conjectures indicating that this is so, and that even if they do exist, it remains unlikely that there exists an infinite multiplicity of states. We will discuss these in turn.
First, we note that although, for technical reasons, we have not extended our proof to rule out single domain walls, our previous results indicate that it is natural to expect that the "pseudo-rungs" that connect sections of the domain wall that are close in Euclidean distance, but greatly separated in distance along the domain wall, can have arbitrarily low (positive) energies. If these "pseudo-rungs" also connect arbitrarily large pieces of the domain wall containing some fixed bond (and we emphasize that these properties are not yet rigorously proved), then single domain walls would be ruled out in a similar manner as above. The consequence would be that the periodic b.c. metastate in the 2D EA Ising spin glass with Gaussian couplings is supported on a single GSP.
In the unlikely event that single positive-density domain walls do appear, our theorem could still rule out an infinite multiplicity of GSP's in 2D. This would be a consequence of the following conjecture (which presents an interesting problem in the topology of random curves): Conjecture: There exists no translation-invariant measure on infinite sequences (a_1, a_2, ...) of distinct bond configurations on Z² such that each a_i and each a_i∆a_j is a single, doubly-infinite, self-avoiding path.
The above conjecture, if true, would rule out the presence of infinitely many distinct GSP's α 0 , α 1 , . . . (in one or more metastates for a given J ) since taking a i = α 0 ∆α i would contradict the conjecture.
These considerations, taken together, make it appear unlikely that an infinite multiplicity of GSP's, constructed from periodic (or antiperiodic [17]) boundary conditions, can exist for the 2D EA Ising spin glass with Gaussian (or similar) couplings. | 2018-04-03T01:14:58.392Z | 2000-03-01T00:00:00.000 | {
"year": 2000,
"sha1": "f565867e814f4dd8e1dbdc7db92ef7da0b7f916f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0003083",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f565867e814f4dd8e1dbdc7db92ef7da0b7f916f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Mathematics"
]
} |
2688936 | pes2o/s2orc | v3-fos-license | On Size and Life.
It consists of forty-seven chapters, divided into five sections. The first section is a general, elementary, and brief introduction to the bacterial cell and its physiology, as well as to immunoassays and other techniques used in identification and classification of bacterial species.
Section two, with 22 chapters, has been devoted to systemic bacteriology. These chapters, although brief, are well organized; almost all include a discussion about microscopy, culture appearances, biochemical reactions, serological characteristics, and, finally, infections of pathogenic members of each genus. There is, however, hardly any material about the clinical management or treatment of infections, although, where applicable, immunization has been discussed. A brief discussion about the clinical management of each bacterial infection would have been appropriate. It is also worth noting that mention of the epidemiology of bacterial species in this book pertains mostly, though not exclusively, to Britain. For example, in the discussion of infections caused by mycobacteria and corynebacteria, the main focus in terms of epidemiology is England.
The third section explains the diagnostic methods used in bacteriology laboratories. This section is written almost like a manual; each chapter consists of two parts: specimen collection and laboratory procedure. A concise figure providing an overview of the laboratory procedure accompanies each chapter.
Sections four and five consider a few of the important pathogenic species of protozoa and fungi, respectively.
The book contains 151 excellent, mostly colorful figures, supplemented by 13 tables. (An error was noted in the presentation of biochemical reactions in differentiation of bacterial species in figure 22, where the description of an upper row of test tubes has been mistakenly given to a lower row and vice versa.) There is a detailed index; however, no bibliography or list of references is given.
Overall, Bacteriology Illustrated is well organized and clearly written; about one-fourth of the information and many of the illustrations presented are hard to find in many bacteriology textbooks, making it a fine "supplement to more complete texts."

There is more to being small than getting stepped on. Also complicating the life of the little guy are low Reynolds numbers, low attainable kinetic energies, significant intermolecular attractions, and a low ratio of volume to surface area. Yet smallness has some distinct advantages, including the ability to fall large distances with nary a bruise, to crawl up walls and across ceilings, and maybe even to walk on water, although extricating one's self from a drop of it can be daunting. Using a quantitative and analytic approach, On Size and Life examines the consequences of size for living things, and there are many. Authors Thomas A. McMahon and John Tyler Bonner draw on the work of a number of pioneering scientists, including J.S. Huxley, Max Kleiber, and Yale ecologist G.E. Hutchinson. With tools like dimensional analysis, allometric formulas, and logarithmic plotting, the authors show that mountains of raw biological data can be reduced to manageable mathematical statements. In some cases these equations describe reality with surprising accuracy. For example, analysis of mammalian bone structure reveals that small skeletons are not just scale models of skeletons of larger related species. The bones of large mammals are relatively thicker; in fact, bone thickness varies as the length raised to the 3/2 power. Thus mammalian skeletons are what mechanical engineers call elastically similar: long bones resist bending due to gravity and other forces as effectively as short bones. The authors use an eclectic set of examples, from musical instrument sound frequencies to submarine hydrodynamics, in order to illustrate the principles of physics and engineering which apply to living things. And even those with a poor appetite for algebra will have little trouble digesting On Size and Life. McMahon and Bonner keep their mathematical highjinks to a minimum, relying on a few equations and a lot of intuitive arguments to make clear important concepts (although several chapters do have appendices with more rigorous derivations for the purists).
What emerges from all the number crunching and logarithmic plotting presented in On Size and Life is yet more proof of the order which governs life in its bewildering variety. The simple beauty of these revelations will charm not only the general reader but even the experienced scientist. The book is striking visually as well; as is usual with Scientific American publications, the illustrations are superb. To read On Size and Life is at once pleasant and challenging, and very satisfying for the curious scientist, whether amateur or professional.
ROBERT

A man in his fifties has always had an unexplained fear of being grabbed from behind. He avoids crowds and sits with his back to the wall. By chance he runs into an old childhood acquaintance who says, "Remember when you were a boy, I grabbed you from behind in the grocery store and you fainted?" After fifty years of bewilderment, the origin of his phobia has finally been uncovered.
Phobias, and related anxiety disorders, are the subject of this short and readable volume by Stewart Agras, a Stanford-based psychiatrist. Agras is quick to point out that the example above is the rare exception; most phobics never discover such a neat link between trauma and phobia. In fact, the lack of a psychoanalytic explanation for most phobics' fears underlies one of the book's fundamental conclusions-that behavioral therapy, rather than psychotherapy, is usually the treatment of choice.
The volume is loosely divided into ten chapters. The first few chapters set down the working definitions of the anxiety spectrum: from common fears, to phobias, to the full-blown panic syndrome. The three are distinguished from each other by the degree of disability associated with the fear. The panic syndrome is the most disabling of the three, often leaving the victim housebound for fear of experiencing an attack outside the home. Physiologically, the symptoms simulate those of a genuine heart attack: racing, erratic heartbeat; chest pain; sweating; tingling; and fear of impending doom, though the stimulus is psychological.
The dichotomy of modern man-a civilized being trapped inside a primitive | 2018-05-08T18:17:38.303Z | 1985-11-01T00:00:00.000 | {
"year": 1985,
"sha1": "5f71a7e23a9bd875da99b6db0b441abdbda70c73",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5f71a7e23a9bd875da99b6db0b441abdbda70c73",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
8451280 | pes2o/s2orc | v3-fos-license | Tropical Forest and Carbon Stock's Valuation: A Monitoring Policy
Introduction
Carbon is the fourth most abundant element on Earth. It is estimated that the world's forests store 283 gigatonnes (1 Gt = 1 billion tons) of carbon in their biomass alone and 638 Gt of carbon in the ecosystem as a whole (to a soil depth of 30 cm). Thus, forests contain more carbon than the entire atmosphere. Carbon is found in forest biomass and dead wood, as well as in soil and litterfall [1]. Consequently, changes in forest carbon storage, resulting from a shift in land use, have a significant impact on global climate change [2].
Changes in climate occur naturally, through processes operating on a geologic time scale.For example, the main species presently inhabiting the planet have survived climate changes during the Pleistocene, adjusting their geographical distribution to weather conditions.However, the speed and magnitude of changes that have been occurring in the Earth's climate system since the Industrial Revolution are currently of great concern.In 1991, the Intergovernmental Panel on Climate Change (IPCC) published a first report about global temperature increases caused by the intensification of the greenhouse effect.After this official announcement, the IPCC has established different working groups with scientists from various parts of the world in order for them to meet and compile as much information as possible and to update scientific predictions about the climatic future of the planet.The reports that have been produced by the international scientific community are considered as the main reference for global climate change.
Currently, scientific societies question the capacity of the present biota to tolerate such changes, in an environment that has been highly fragmented by human intervention and where what is still left intact is confined within protected areas.Changes within biota can result in changes in the ecosystem services they provide.Human well-being depends directly and indirectly on the environmental services provided for free by the natural world, including climate regulation, soil formation, erosion control, carbon storage, nutrient cycling, provision of water (both quality and quantity), maintenance of hydrological cycles, preservation of genetic resources, scenic beauty, among others [3].Furthermore, tropical forests contain 50% of all world species and are considered mega-diverse environments.Therefore, changes in any of these services can have serious consequences for biodiversity, for the natural carbon cycle and the hydrological cycle, which may in turn alter the world economy and affect the everyday life of humans and other species on the planet.
How can these changes be monitored? One way to monitor biodiversity and carbon stocks over large areas is through the establishment of forest inventories. These are effective tools for estimating the type, amount and condition of forest resources over large areas [4]. The regular collection of measurements within Permanent Monitoring Plots (PMPs), combined with the use of statistical techniques, provides a baseline for assessing changes in the structure and dynamics of a forest and permits the construction of predictive models [5]. In the last decade, there has been a large increase in the installation of PMPs in different tropical forest sites around the world, especially in the Amazon Rainforest, where large monitoring networks (TEAM, PELD, CTFS, RAINFOR, LBA, REDEFLOR, PDBFFE and CIFOR) have been established. These programs increase the level of understanding of ecological systems, transforming the knowledge base [6]. However, there are still serious deficiencies in estimating carbon stocks and other components of other types of tropical forest.
Within the current international political and environmental situation, it is vital that all countries, whether or not they are signatories of the Kyoto Protocol, promote initiatives to monitor their biodiversity and their carbon stocks. These data are strategic for each country because they indicate where and how the management of natural resources can bring benefits to local people (local scale), they support the creation of public policies that can become part of the country's legislation (regional scale) and they promote policies for adaptation to an increased vulnerability to climate change (global scale). This chapter, "Tropical Forest and Carbon Stock's Valuation: A Monitoring Policy", incorporates parts of the TEAM (Tropical Ecology Assessment Monitoring) protocol [7] and the knowledge generated over six years of monitoring permanent plots in an area of the Atlantic Rainforest in Brazil. It aims to discuss the importance of planning and implementation of PMPs, the main techniques used, and the errors associated with them. Biomass and carbon stock calculation techniques and data analysis will also be discussed, among other topics. Data collection and analysis have greater value when incorporated into natural resource management policies, such as Payment for Environmental Services (PES), which are provided by nature. A comprehensive approach involving stakeholders at all levels, from the local to the global scale, is essential for the success of integrated policies. Each of the topics listed below will be presented with the aid of practical examples, figures and tables, in order to give readers the opportunity to fully engage with the subject matter and, most importantly, to begin to understand how to apply these practices in their own social and environmental contexts.
Methods for establishing Permanent Monitoring Plots (PMPs)
The establishment of vegetation monitoring networks is a strategy that aims to develop an integrated database through systematized collections using a single monitoring protocol on various sites.In the vegetation network implementation, it is extremely important that the database management team be clear about the questions to be asked and the objectives for the collection of field data.This systemization has implications directly related to the method of collection and the definition of the protocol for implementation and monitoring.The primary analyses to be conducted also must be predefined as they too have a direct impact on the sample design and the means of data collection.
During the planning of a monitoring network, it is important to keep in mind that the key objective is to conduct large-scale analyses that can speak to physiognomy, biomes and wider generalizations.This scale of work is fundamental in order to accomplish robust analyses and to study broad-scale ecological processes.However, it should be noted that local and regional data and publications are also part of this network as they promote the development of local scientific knowledge, along with the participation of the team responsible for the collection of field data.These initiatives encourage cooperation and sharing of experience, in addition to motivating those who are responsible at the local level to continue the work of monitoring once the objectives and results of the initiative are made clear to all involved.
The means of disseminating results should also be defined in the planning phase.For example, during this phase, contact can be made with the editors of scientific journal where there is an intention to publish, in order to establish a connection with the journal and develop credibility for a strong relationship.The sharing of the monitoring protocol, the initial results and the key conclusions at national and international conferences provides visibility for the project and stimulates ongoing discussions with other researchers in the topic area.This interaction and sharing of experience always benefit the project as they increase quality and strengthen key elements.The network planning team should also identify other forms of communication for scientific dissemination, such as specialized documentaries, news networks, community sites and scientific blogs.These promote dissemination and constructive discussion of the conclusions and methods of the published initiative.Another tactic that can make a significant contribution to successful monitoring over the long term by strengthening relationships with local teams is the development of news releases in the local language where the data was collected.
As with any good plan, the protocol must be rigorous.Several protocols for monitoring tropical forests are available including RAINFOR's [8], TEAM's [7] and the Smithsonian's Center for Tropical Forest Science [9].However, it must also be flexible enough to be adapted and to evolve naturally according to the knowledge generated during the planning process, as well as to the local reality of each site.Ongoing workshops with the local team guarantee that acquired experience is formally recorded, in addition to facilitating the continuous improvement of the protocol by applying experience acquired through its execution in situ.
Geoprocessing techniques for area selection
Many field procedures involve high costs due to transportation and logistics. Therefore, prior to any field procedure, errors in area selection can be minimized by careful planning using GIS techniques. In addition to playing an important role in the preliminary phase (planning), these tools are also very useful in the data analysis phase. When these instruments are used extensively by a qualified professional, significant economies of time and financial resources can be achieved.
After clearly defining the objectives for the implementation of the monitoring network, the next phase is the selection of potential areas to house the plots. The use of GIS allows for a more confident selection of the target areas since it works with georeferenced bases and shapes which allow for simulation of PMPs in practically any location in the world. These areas can be selected by a process of elimination of those that, for example, do not have the required attributes, or by selection of multiple criteria involving the interpolation of various bases. Through experience acquired in the implementation and monitoring of PMPs, we understand that the minimal criteria for exclusion of target areas for monitoring include:
- Areas that possess accentuated declivity;
- Areas that are not easily accessible and complicate field logistics;
- Areas with creeks, swamps, lakes and rivers;
- Areas that have significant spatial heterogeneity;
- Areas that have variations in the type of soil.
Assuming that the objective of monitoring is to evaluate the temporal dynamics of primary vegetation areas, the areas that are not located in Conservation Units can be excluded first. It is understood that forested areas protected by law in any part of the world represent the highest percentage of protected primary areas. After this first filter, the layers or shapes that meet the exclusion criteria cited above are applied. This type of cut is made relatively quickly, while still in the office, but can reduce a universe of potential samples by more than 90% in certain regions of the world, thus optimizing the accuracy and use of the project's financial resources.
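As an illustrative sketch of this screening logic (independent of any particular GIS package), the following Python fragment filters a table of candidate areas against the exclusion criteria listed above; the column names and thresholds are hypothetical.

```python
import pandas as pd

# Hypothetical attribute table exported from the GIS layer of candidate areas
candidates = pd.DataFrame({
    "area_id":              ["A1", "A2", "A3", "A4"],
    "in_conservation_unit": [True, True, False, True],
    "slope_deg":            [8, 35, 5, 12],       # accentuated declivity
    "has_watercourse":      [False, False, True, False],
    "soil_classes":         [1, 1, 2, 3],          # >1 indicates soil-type variation
    "access_ok":            [True, False, True, True],
})

mask = (
    candidates["in_conservation_unit"]
    & (candidates["slope_deg"] < 20)               # hypothetical declivity threshold
    & ~candidates["has_watercourse"]
    & (candidates["soil_classes"] == 1)
    & candidates["access_ok"]
)
eligible = candidates[mask]
print(eligible["area_id"].tolist())                # areas kept for random PMP selection
```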
Following elimination of the areas not selected for the sample, the professional responsible for the GIS technology should create polygons capable of housing the future PMPs so that random samples can be selected from within the universe of possible options, thus establishing statistical confidence for the sample.Another important point is that the PMPs should be replicated in areas where there is similar physiognomy, so that means, errors and reliable statistics can be obtained.
It is of fundamental importance for the field team that thematic maps be developed by the GIS team.These maps should be easy to visualize and understand, with current satellite images and superimposed colored sketches of the PMPs in various layers.Essential factors for successful field work include the standardization of symbols, language and scale of work, as well as pre-definition of a standard datum, and being in a system of unique coordinates compatible with the use of local GPSs.The field maps should also be plasticized to avoid stains and tears which can often occur with the use of these materials in the middle of the forest.
Choosing target areas
The field team should also be very clear about the objective of monitoring.When the project's primary issue is related to the dynamics of areas in recovery or to the differences between primary and secondary vegetation areas, area selection involves different parameters.When the question is focused on temporal variations in areas of intact vegetation in the climactic stage, area selection will be directed primarily toward areas protected by legal mechanisms in each region, ensuring that there will be no interference in the plot throughout the years of monitoring.Depending on the objective, criteria for intersite analysis can also be established, such as a latitudinal gradient temperature or rainfall gradient, soil gradient, etc.
Once all of the criteria have been established, the field team should depart in order to locate and validate the target areas in situ. In addition to being accompanied by local guides, the team should be supplied with basic field supplies as well as thematic maps developed by the GIS team, a GPS, a compass, and a camera for the validation or invalidation of areas previously defined by the GIS team. Additionally, the field team should have in their GPS all points and layers that were previously prepared by the GIS team. For example, see [10] for a complete data transfer protocol.
It is important that the field team be fully trained on the monitoring protocol and have the ability to independently decide at any given moment if an area truly possesses the defined selection criteria or if it would be better to search for a new area.This decision is a key since all monitoring throughout the years ahead will depend on the correct choice and demarcation of these plots.In order to select the best areas for PMPs to be implemented, various factors should be taken into consideration, including the homogeneity of the forest typology to be sampled, the existence of water courses, logistics, access, type of soil and inclination of the terrain.
Due to difficulties of orientation and localization in interior bush areas, the geographical coordinates should be checked and the location of the field team confirmed upon arrival at the target area.Once the location has been verified, a marker should be placed in the ground (a PVC tube of about 1.3 m can be used) to be the point of coordinates 0,0 (X, Y), which will serve as a reference point for the validation of the area as well as for future plot implementation.This point will be used to evaluate the area to decide whether or not it will be selected for PMP implementation.Thus, using a compass, the direction of the course should be read, so that the angle of the directions has a difference of 90° (straight angle).The course is followed in the first direction (X), remaining aligned with the lead angle on the compass, stopping every 20 meters to check the coordinates and the direction of the course.
In the field, detours are very common during a walk/hike due to natural obstacles such as fallen trees and branches, the presence of lianas or holes in the ground, or large trees that have to be circumvented.It is important in this verification phase, as well as in the PMP implementation phase, that knives and scythes are not to be used to open trails or forest passages as they can have a long term impact with significant implications on the dynamics of vegetation.Thus, when faced with a natural obstacle, the ideal would be for the team to circumvent it and return to the defined course in order to continue with area verification.
The team should be aware of sudden changes in the type of soil, the existence of accentuated declivity that was not possible to identify in the satellite images, or any other element that strongly differentiates the landscape and that could negatively impact the monitoring or the homogeneity of the plot.This should be recorded in a designated worksheet in order to justify the decision not to use the area in question.Once line X has been verified, the same procedure is conducted with line Y beginning from ground zero.If an area does not possess significant heterogeneity, the selection of the plot must be validated, assigning a number and a syllable to be used throughout the entire period of monitoring and analysis of that area (e.g.01-LP).
Implementation of PMPs in the field
Once the entire validation process is complete, the actual marking of the PMP in the field is undertaken.On the day prior to departure, a checklist should be reviewed of all equipment required for field implementation, such as PVC tubes, rubber hammer, colored tape, polypropylene cord, compass, GPS, binoculars, clipboard, collection worksheets, plastic bags, masking tape, pencils, erasers and pens.In addition to support materials, specialized clothing must also be taken, such as boots, leggings and field jackets (with many pockets).
The PMP implementation team should be comprised of at least 4 people, primarily to divide the weight of materials to be taken to the selected PMP area, as the tubes or stakes used to mark the chosen spots are very heavy and bulky.
Upon arrival at the PMP location previously marked as 0,0, a suitable location to leave all of the equipment should be identified, as well as an appropriate place to have snacks or lunch while in the field.This location, named "Support Station -SS" should be located in the outlying area of the PMP so that it does not interfere with the vegetation to be monitored on the plot.The ground should be covered by a light blue tarp (or any color that strongly contrasts the forest floor), upon which all of the equipment should be placed to avoid loss.Again, it is imperative that the team be careful not to allow any type of vegetation (lianas, branches or shrubs) to be cut during plot implementation.
In the following example, we simulate the implementation of a 1 ha PMP (10,000 m²) according to the TEAM protocol for vegetation monitoring [7]. The size of the PMP will depend on the initial objective outlined by the team responsible for managing the project. The size of 1 ha is widely used in permanent plots whose objectives are related to monitoring the dynamics and carbon stocks of the site in question.
Starting at 0,0, two baselines (X, Y) should be projected, at 90° perpendicular angles, which will serve as reference points throughout the PMP implementation. Each baseline should be spiked every 20 meters, with the distances verified using a measuring tape and the direction verified by reading the course angle on the compass. After the 6 spikes for each baseline have been duly marked and inserted into the ground, the entire line should be measured to confirm its length, which should be a total of 100 meters. Each spike placed every 20 meters should be sequentially numbered, as well as having its Cartesian coordinates on the plot recorded (e.g. 20,0; 40,0; 60,0; ...). Once line X has been completed, the formation of line Y can be undertaken using the same procedures previously followed.
Once the two baselines have been formed, the internal squares of the PMP can be developed. In order to close a PMP, two basic methods can be used: creating 5 lines parallel to baseline Y (Figure 1-B) or creating small 400 m² squares, forming sequential lines until the entire PMP is closed (Figure 1-A).
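A small sketch of the stake layout implied above: generating the Cartesian coordinates (in meters) of the 36 stakes of a 1 ha plot, with stakes every 20 m along both axes (a simple illustration, not part of the TEAM protocol itself).

```python
# Generate stake coordinates for a 100 m x 100 m (1 ha) plot, staked every 20 m.
SPACING = 20
SIDE = 100

stakes = []
for y in range(0, SIDE + 1, SPACING):          # 0, 20, ..., 100
    for x in range(0, SIDE + 1, SPACING):
        stakes.append((x, y))

print(len(stakes))                              # 36 stakes (6 x 6 grid)
print(stakes[:6])                               # first baseline: (0,0), (20,0), ..., (100,0)
```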
Marking trees
After marking the PMP, the individuals to be monitored are marked and data collection is undertaken.For studies related to long-term monitoring of the structure and dynamics of vegetation, it is common for the sample to include all individuals in the forest that have DBH ≥ 10 cm (Diameter at Breast Height).For studies of biomass and carbon stocks, individuals with DBH < 10 cm are not included, due to their low contribution to the total stocks of the PMP.In general, if the objective is to monitor changes in floristic composition and the biodiversity of the PMP, these smaller individuals should be incorporated into the monitoring.
In this case, all of the trees, palms and lianas with a DBH greater than or equal to 10 cm should be marked and measured. The POM (Point of Measurement) is the point on the tree or liana where the diameter is measured. The POM is marked at 1.30 m with the help of a PVC tube graded at 1.60 m and 1.30 m, to avoid error related to the different heights of the field markers. However, for individuals with tabular roots, sapopemas or buttress roots, the POM should be identified at 50 cm above the highest root (Figure 2). This is a valid change, since it is common in forest inventories to find all stems with their DBHs measured at 1.30 m. When these data are inserted into allometric equations to calculate biomass, they overestimate biomass, increasing the standard error of these calculations [9][10][11][12][13].
In the case of trees that have many deformities at the POM, a modular ladder up to 12 meters (4 modules of 3 meters each, to make it easy to transport in the forest) should be used so that the best location on the tree can be selected for diameter measurement (Figure 2 and Figure 3). Leaning or fallen trees should have their DBH measured following the methodology above; however, the distance from the base should be measured from the underside of the tree (Figure 2) in order to obtain an accurate distance. For trees with multiple trunks, where forking occurs below 1.30 m, each trunk should be considered a separate individual (Figure 2), with the number of measurements matching the number of trunks for the tree.
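Since the text repeatedly refers to inserting DBH data into allometric equations to estimate biomass and carbon, here is a minimal, hedged sketch of that calculation. The ln-ln allometric form and its coefficients are placeholders to be replaced by the published equation appropriate for the forest type, and the 0.47 carbon fraction is a commonly used approximation rather than a value from this chapter.

```python
import math

# Placeholder coefficients: substitute those of the allometric equation chosen for the site
A, B = -2.0, 2.4            # hypothetical values, NOT from a published model
CARBON_FRACTION = 0.47      # commonly assumed carbon content of dry biomass

def agb_kg(dbh_cm):
    """Above-ground biomass (kg) from DBH (cm), using a generic ln(AGB) = A + B*ln(DBH) form."""
    return math.exp(A + B * math.log(dbh_cm))

# Example: plot-level carbon stock from a list of DBH measurements (DBH >= 10 cm)
dbh_values = [12.5, 18.0, 34.2, 51.7]            # hypothetical census data (cm)
total_agb = sum(agb_kg(d) for d in dbh_values)    # kg of biomass in the plot
total_carbon = total_agb * CARBON_FRACTION        # kg of carbon
print(f"AGB = {total_agb:.1f} kg, carbon = {total_carbon:.1f} kg")
```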
Once the best area for DBH measurement has been selected, it should be painted with yellow paint.This can also be done with a type of stamp (stencil) that can be made out of a sheet of hard plastic that is cut in the center in the following dimension: 20 x 3 cm.After selecting the location to be painted, the stamp (stencil) is placed on the tree and the POM is painted (Figure 3).In addition to facilitating field work, this stamp also standardizes the width of the paint marking on the trees, thus reducing the possibility of errors in future tree measurements.
This marking should be re-done every two years so that the specific POM is not lost.In order to avoid errors related to POM marking, the height at which the POM is marked should be recorded in a designated field worksheet.This procedure, along with painting the POM, guarantees that the measurement will be done at the exact same point during recensus throughout the monitoring period.
All of the individuals selected should be marked with nails and aluminium tags, using increasing numbers according to the layout within the PMP. The nail should always be at a distance of 40 cm from the POM so that the nail hole does not damage the trunk and consequently alter the POM. It is very common to see trees in the forest that have significant deformities resulting from a small nail hole: bacteria and pathogens can enter through this small orifice and cause significant stress to tree trunks. Another important point is that the nail should be pointed downward, with the tag touching the head of the nail, since it is common to see trees grow around and engulf tags that have been left resting against the trunk.
After numeration and marking are complete, each individual should be identified at the highest taxonomic level possible in the field.It is highly recommended that photos be taken of the collected branches and that a collection of each species within the PMP be maintained as a botanical collection specific to each region.The data should be recorded in field worksheets and branch samples that are not identified should be taken for laboratory activities, herbarium consultations and completion of taxonomic identification by specialists.
All field collections should be labelled with masking tape, recording their PMP number and reference code.With the collection and identification of botanical material, local guidebooks can be developed for the identification of trees registered within the PMPs.The guidebook could include photos of dried plants, taxonomic identification, location of the species, whether or not there are medicinal purposes, and details about flowers or fruits.In collaboration with local experts, the production of this type of material strengthens relationships between project managers and the execution team, in addition to producing registered material that is easily understood by the local population.
Calibration of diameter tape
As a result of the measurement process, diameter tape can become stretched or it may come from the factory already with small defects.Considering that the annual growth rate of a tree stratum in the forest is ~0.2 cm/year [14] small measurement errors can have a strong impact on the final results.In order to avoid this type of error, the diametric tape should be calibrated using an aluminium ruler prior to each census, thus maximizing the level of precision in the results.
Measurement calibration
Errors in reading the diametric tape or errors in the position of the tape on the tree can be common during the census, negatively impacting the processing and analysis of data.Therefore, prior to each census, it is also necessary to calibrate the technician responsible for measuring the trees.
On the first day of the census, all possible measurements should be completed within a given PMP. One or two days later, the same person who measured the trees on day one should return to the same area and re-measure all of the previously measured trees. The results are considered good if at least 70% of the re-measurements agree exactly with the original readings, or if 90% differ by less than 1 mm. If these thresholds are not reached, the procedure is repeated, even with other people doing the measuring, until the required precision is obtained. The objective at each stage is to minimize the errors that commonly occur in field activities and that can substantially affect data analysis.
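As a rough illustration, the re-measurement check described above can be scripted. The sketch below is written in Python; the function name, tolerance handling and the example DBH values are ours and are only meant to show the logic, not a prescribed implementation.

```python
# Illustrative check of measurement calibration between day-1 and repeat DBH
# readings (values in cm). Thresholds follow the protocol described above:
# at least 70% exact agreement, or 90% of readings within 1 mm (0.1 cm).

def calibration_ok(day1, repeat, exact_frac=0.70, close_frac=0.90, tol_cm=0.1):
    """Return True if the technician's re-measurements meet the precision targets."""
    if len(day1) != len(repeat) or not day1:
        raise ValueError("The two measurement lists must be non-empty and the same length.")
    n = len(day1)
    exact = sum(1 for a, b in zip(day1, repeat) if a == b)
    close = sum(1 for a, b in zip(day1, repeat) if abs(a - b) <= tol_cm)
    return exact / n >= exact_frac or close / n >= close_frac

# Example: DBH readings (cm) for five trees measured twice.
day1_dbh   = [12.4, 33.0, 57.8, 10.1, 21.5]
repeat_dbh = [12.4, 33.1, 57.8, 10.1, 21.4]
print(calibration_ok(day1_dbh, repeat_dbh))  # True: all repeats within 1 mm
```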
Census and re-census
The measurement of the individuals located in a PMP is the heart of the entire initiative. The measurements conducted during the first census should be done with careful attention so that the complete methodology for measurement and marking is constantly being verified and validated. Even though the technicians responsible for data collection are fully trained in the methodology, a copy of the measurement protocol and its specifications should be available for consultation in the field.
It is important to remember that the period for plot measurement (the first census and subsequent re-censuses) should be defined by analysing the historical rainfall series of the region under study, so that the measurements are always done at the same time of year, that is, in the month with the least precipitation. This strategy takes advantage of the best transportation logistics, generally by ground, and avoids the influence of rain on the diameter measurements, since tree bark can become saturated with water and thus distort growth data.
For the individual measurement of trees, it is recommended that diameter tape (e.g. Diameter Tape - Forest Suppliers) be used exclusively. Using tapes that measure the circumference of individual trees, to be converted to diameter later, increases estimation errors. The technician responsible for measurement should note, tree by tree, any loose bark, lichens, lianas or other factors that could affect the diameter measurement, clean the measurement area by passing his/her hand along the trunk, and then run the diameter tape around it. The technician responsible for recording the worksheet data should assist the one measuring, primarily when evaluating large trees, by verifying the correct position of the tape and checking whether there is anything between the tape and the tree.
During the annual re-censuses, the technician responsible for recording data in the worksheets should pay even greater attention to values that diverge from the previous year's records, as these are likely due to an error in reading the diametric tape. If an error is found, the technician should request a re-measurement and a re-reading of the diameter before recording it in the worksheet.
Another important activity undertaken during the re-censuses is an active search throughout the PMP for new individuals to be included in the sample (recruits) and individuals that no longer exhibit vegetative activity (dead). All of the new trees, palms and lianas that have met the inclusion criteria (DBH ≥ 10 cm) are included in the sample and the same marking methodology is followed. Individuals marked in the first census but which, during the re-census, did not exhibit vegetative activity or were not found after a detailed sweep of the plot, should be considered dead.
It is also possible that trees recorded as dead in the previous year show renewed activity, through diametric growth or new sprouting. In this case, the processing worksheet should be modified, correcting the data recorded the previous year and including this individual in the sample once again, since it was not actually dead.
Tabulation of data
For all field activities related to planning, implementation and monitoring of PMPs, there should be specific worksheets. Standardizing how this information is entered is of fundamental importance to guarantee the quality of the data. Each worksheet should include, at a minimum, the PMP name and abbreviation, the complete date of the collection, the names of the team members, the number of each individual, the sub-plot to which each individual belongs, the POM and DBH data, and any pertinent observations. Upon completion of the field work, all of the worksheets used should be scanned and saved as digital files, and the originals stored in a dry, safe place. These procedures ensure that the original worksheets can be consulted in the case of duplicate or conflicting information, typing errors, or mistakes made when noting information in the field.
After digitizing the worksheet data, the new worksheets should be printed and evaluated by pairs for accuracy, followed by the correction of any confirmed errors.
Spatial mapping
Spatial mapping of the individuals marked in the PMPs makes it possible to analyse the distribution of species or guilds in the forest. For these analyses, indices of aggregation, such as Morisita [15] or McGuinness [16], can be used to classify the spatial distribution of the individuals as aggregated, random or regular. This knowledge is fundamental to ecological analyses as it facilitates an understanding of how a certain species uses the available resources in the forest. Because the aggregation factor can vary within a species across different diametric classes, it also shows how the life stages of an individual can change the way it uses an available resource.
For mapping, each individual should have its Cartesian coordinates X and Y measured within the PMP. The distances can be measured using a 50 meter measuring tape or a digital measuring device. It is important that a compass always be used to support the measurements so that the distances are consistently taken in a straight line with respect to the position within each sub-plot. In the example below (Figure 4), the individual marked in the PMP has Cartesian coordinates of X = 56.2 meters and Y = 74.3 meters.
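For readers who wish to compute an aggregation index directly from the mapped coordinates, the sketch below illustrates one possibility in Python. The Morisita formula used here, Id = q Σ ni(ni − 1) / [N(N − 1)], is the standard textbook form and is our assumption, since the equation is not reproduced in this chapter; the plot size, quadrat size and stem coordinates are illustrative.

```python
# Hypothetical sketch: Morisita index of dispersion computed from mapped stem
# coordinates in a 100 m x 100 m PMP divided into 20 m x 20 m quadrats.
# Assumed formula: Id = q * sum(n_i*(n_i-1)) / (N*(N-1)),
# with Id ~ 1 random, > 1 aggregated, < 1 regular.
from collections import Counter

def morisita_index(coords, plot_size=100.0, cell=20.0):
    counts = Counter()
    for x, y in coords:
        counts[(int(x // cell), int(y // cell))] += 1
    q = int(plot_size // cell) ** 2          # total number of quadrats
    n_total = sum(counts.values())
    if n_total < 2:
        raise ValueError("Need at least two mapped individuals.")
    return q * sum(n * (n - 1) for n in counts.values()) / (n_total * (n_total - 1))

# Example using the stem from Figure 4 plus a few invented neighbours.
stems = [(56.2, 74.3), (57.0, 73.1), (58.4, 75.0), (12.3, 20.1), (80.0, 5.5)]
print(round(morisita_index(stems), 2))
```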
Estimates of biomass and carbon stocks
The estimates of aboveground live biomass and the resulting carbon stocks can be obtained using two key methods. The first, based on destructive sampling (direct method), involves cutting, drying and weighing separately (roots, trunk and leaves) all of the trees in a specific area. This technique is unviable for monitoring, since it destroys the very individuals that would be followed over the life of the vegetation. The second method (indirect method) consists of estimating biomass and carbon stocks by measuring field variables without having to fell the tree. In this case, DBH data and/or total height of the trees (Ht) and/or specific density of the wood (p) are inserted into previously developed allometric equations in order to estimate the biomass and carbon stocks of the PMP.
Table 1 shows examples of allometric equations already developed that can be used to calculate biomass. The selection of the best equation should be based on the objective of the project and on the questions to be answered. Allometric models that offer greater precision should be given preference [17].
In order to conduct accurate comparisons with other areas, or to serve as a potential indicator of carbon stocks for a specific region, simpler allometric equations with only one variable (DBH) can be used [17]. In this case, it is not necessary to collect data related to the height or wood density of individuals, so the inventory can be completed much faster. An important detail regarding the selection of the equation is that some equations return fresh biomass, others return dry biomass, and still others provide results as carbon quantity.
As previously mentioned, the ideal is to use the allometric model that provides the highest degree of confidence: the best model explains most of the variation in the data or has the lowest AIC (Akaike Information Criterion). In most cases, equations that use multiple entries, with three variables per individual (DBH, Ht and p), perform better. DBH data are easily collected as previously outlined. Data related to tree height are generally more complicated to collect, due to the error associated with height estimations and the need for more time in the field, which makes inventories more expensive. In order to optimize this work, tree height can be estimated by creating an allometric equation adjusted to the diameter and height measurements of a specific number of trees in the plot (Figure 5). This requires the collection of height data for part of the plot. These data should be collected with the greatest precision possible, using cords, ladders or equipment such as a rangefinder. It is recommended that height be measured for a random sample of 20% of the individuals of a PMP and then related to the diameters, producing a site-specific height equation (Figure 5). For specific wood density, there are protocols for extracting and obtaining these values for each tree in a PMP. To save time and project resources, existing databases can be used, for example the Global Wood Density Database [24,25], which provides wood density values for species from almost every part of the world.

With biomass calculated, many different analyses become possible. For example, biomass can be compared between primary and secondary areas or between one year and another, and total biomass can be calculated for the PMP and extrapolated to large forested areas of the same typology. In addition to comparing these data to the average annual increment of biomass (or of carbon) of a PMP, analyses of the change in biomass between years can also be conducted. This is achieved by subtracting the biomass in year 0 from the biomass in year 1, remembering that the year 1 value should include the biomass of recruits while the biomass of individuals considered dead is subtracted. The number of days between censuses can also be taken into account in order to standardize the calculations for a specific period. For an annual rate, the following equation would be used:

ΔAGB = [(AGBt2 − AGBt1) / (DTt2 − DTt1)] × 365

where AGBt2 refers to biomass in year 2, AGBt1 to biomass in year 1, DTt2 to the date the census was taken in year 2, and DTt1 to the date the census was taken in year 1 (D. Clark, personal communication).
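A minimal numerical sketch of these calculations is given below. The allometric model used (AGB = 0.0509 p DBH² Ht, the Chave et al. 2005 moist-forest equation) is included purely as an example and may not be the appropriate choice for a given PMP; the tree values and census dates are invented.

```python
# Illustrative biomass calculation for a PMP census. The allometric model used
# here, AGB = 0.0509 * p * DBH^2 * Ht (Chave et al. 2005, moist forests; DBH in cm,
# Ht in m, wood density p in g/cm^3, result in kg), is only an example -- the
# appropriate equation must be chosen for the forest type under study.
from datetime import date

def agb_kg(dbh_cm, ht_m, density):
    return 0.0509 * density * dbh_cm ** 2 * ht_m

def annualised_change(agb_t1, agb_t2, date_t1, date_t2):
    """(AGBt2 - AGBt1) / (DTt2 - DTt1) * 365, as described in the text."""
    days = (date_t2 - date_t1).days
    return (agb_t2 - agb_t1) / days * 365

# Plot-level totals (kg) for two censuses, including a recruit in census 2.
census1 = sum(agb_kg(d, h, p) for d, h, p in [(25.0, 18.0, 0.62), (40.5, 26.0, 0.71)])
census2 = sum(agb_kg(d, h, p) for d, h, p in [(25.6, 18.3, 0.62), (41.1, 26.4, 0.71), (10.2, 9.0, 0.55)])
print(round(annualised_change(census1, census2, date(2010, 7, 10), date(2011, 7, 22)), 1), "kg/year")
```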
Recruitment and mortality rates
Calculations of the annual rates of recruitment (Eq. 2) and mortality (Eq. 3) can be done using the equations of Sheil and May [42]. These rates are an excellent indicator of forest dynamics, providing a solid understanding of how the forest responds to seasonal events that cause variations in water availability or to extreme climatic events, and they can be used for multiple comparisons. Since everything depends on the proposed objective, forest dynamics can be compared, for example, between individuals belonging to the higher diametric classes and those belonging to the lower ones, or between different species, among other possibilities.
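Because Eqs. 2 and 3 are not reproduced here, the short sketch below uses one commonly applied exponential formulation of annual mortality and recruitment rates; it should be checked against reference [42] before use, and the census numbers are illustrative.

```python
# Hypothetical sketch of annual mortality and recruitment rates. The exact
# Eqs. 2 and 3 from reference [42] are not reproduced in the text, so the
# commonly used formulation below is an assumption:
#   m = 1 - ((N0 - Nm) / N0) ** (1 / t)   and   r = 1 - (1 - Nr / Nt) ** (1 / t)
# where N0 is the initial count, Nm the number of deaths, Nr the number of
# recruits, Nt the count at the end of the interval, and t the interval in years.

def annual_mortality(n0, n_dead, t_years):
    return 1.0 - ((n0 - n_dead) / n0) ** (1.0 / t_years)

def annual_recruitment(n_final, n_recruits, t_years):
    return 1.0 - (1.0 - n_recruits / n_final) ** (1.0 / t_years)

# Example: 500 stems at census 1, 12 deaths and 15 recruits over a 2-year interval.
n0, deaths, recruits, t = 500, 12, 15, 2.0
nt = n0 - deaths + recruits
print(f"mortality = {annual_mortality(n0, deaths, t):.3%} per year")
print(f"recruitment = {annual_recruitment(nt, recruits, t):.3%} per year")
```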
The valuation of tropical forests
In this section, we explain how to assign value to the carbon stock estimates obtained from data collected in PMPs. We also discuss issues regarding Payment for Environmental Services (PES) provided by tropical forests, as connected to major international protocols and signed agreements.
Assigning value
For forest conservation strategies to be effective, local communities must first be meaningfully involved, and they must believe in the importance of biodiversity for guaranteeing quality of life; these communities are the key to a conservation network. The second step is to invest financially in these initiatives. A project should clearly demonstrate that forest conservation is more economically attractive than the opportunity costs of alternative land uses in a given region, for example cattle-raising. Therefore, investing in the protection of biodiversity in order to encourage the social and economic development of local communities is one of the best long-term conservation strategies for biodiversity and the ecosystem services it generates.
One of the difficulties in assigning value to biodiversity and the services it offers is how to quantify this value. Many of its natural attributes are essentially immeasurable, such as the services offered by bees when pollinating plantations throughout the world or the atmospheric regulation offered by forests (see [44]). The carbon valuation and commercialization market thus has an advantage, since prices per ton are already known by the market. Despite being affected by countries' economic changes, a ton of carbon (sequestered or saved) has its own regulations derived from agreements such as the Kyoto Protocol, or from mechanisms such as the CDM (Clean Development Mechanism) and REDD (Reducing Emissions from Deforestation and Forest Degradation). Therefore, projects that seek to assign economic value to environmental services can include carbon valuation as a more precise indicator of the technical reliability of the project.
Forest projects began to participate in the global carbon credit market when companies formed partnerships to preserve forests and plant trees with the goal of neutralizing their greenhouse gas emissions [3]. Due to the initial difficulty of negotiating these credits within a regulated (compliance) market, many of these initiatives turned to the voluntary market [3] and to other financial transactions that could neutralize emissions through carbon capture by trees. These new mechanisms opened the door for a wide variety of carbon projects, including voluntary initiatives such as payment for the recovery of degraded areas as a means of neutralizing emissions, and even responsibility for conserving existing forest areas.
Development of public policies
These widely diverse ongoing carbon projects have one objective in common: to take advantage of existing market mechanisms in order to assign economic value to rainforests. Today, the REDD+ mechanism is considered one of the most interesting, since it focuses on creating the institutional structure and economic incentives required for developing countries to substantially reduce their CO2 emissions resulting from deforestation and forest degradation [45].
A practical example of implementing public policies connected to carbon projects is the program called Bolsa Floresta (Forest Fund), created by the state of Amazonas through Law no. 3135 on 05/06/2007. Through this initiative, the Government pays R$50 (~USD $30) per month to registered families who live in State Conservation Units and who have signed a collective agreement to stop deforestation [45]. In the state of Minas Gerais, the Government created an initiative called Bolsa Verde (Green Fund) (Law no. 17.127 of 2008), whose objective is to help conserve native vegetation cover in the State by paying property owners for environmental services if they already preserve or are committed to restoring native vegetation on their properties. In this case, the financial incentive is proportional to the size of the protected area, with priority given to family farms and rural producers. Thus, REDD+ offers a comprehensive rural planning strategy that values rainforests and their recovery, as well as supporting the sustainable development of rural livelihoods [45] and facilitating true socio-environmental gains.
For all of these initiatives to work, there must be reliable data on existing carbon stocks to serve as a baseline for the projects. Permanent Monitoring Plots are technically considered the best way to obtain these data. For forest recovery projects where it is not possible to implement PMPs, plots can be implemented in adjacent areas, or existing data can be used to extrapolate biomass values. During the monitoring of carbon projects, the random distribution of PMPs serves as a statistically equivalent sample area for forest recovery monitoring. As an example of other monitoring sites using a standardized methodology, we can cite the TEAM network (http://teamnetwork.org/), which has more than 15 monitoring sites in tropical forests; in Brazil, two of these sites, Manaus and Caxiuanã, have five PMPs each. Another success case in the monitoring area is the LBA project (http://lba.inpa.gov.br/lba/), which has a vast network of PMPs in the Amazon forest and which, in ten years, has trained more than 500 master's and doctoral students in Brazil and published ~1,000 articles in specialized journals.
Regardless of the type of project or the mechanism used to implement it, projects that use a ton of carbon (sequestered or saved) as their basis guarantee the long-term presence of these stocks in nature. Most importantly, these projects require assured quality of the data they propose to collect. These data should undergo Measurement, Reporting and Verification (MRV) to guarantee the technical quality of the project (e.g. see the CCBA - Climate, Community and Biodiversity Alliance - and VCS - Voluntary Carbon Standard). In order to guarantee viability, these projects should also have local community involvement as a goal, whether in the implementation phase or during monitoring, in order to improve quality of life and generate the resulting socio-environmental gains. In addition to the socio-environmental benefits already outlined, the implementation of local PMPs has a powerful differential: calibrating the calculations of international methodologies with highly reliable data, collected locally using a standardized methodology [27].
Figure 1 .
Figure 1. Sample structure of the Permanent Monitoring Plot (PMP), with 25 sub-plots. A - Means of closure using squares; B - Means of closure using lines. Adapted from TEAM (2010).
Figure 2 .
Figure 2. Details for marking trees with deformities in the field. A - For tabular roots, the POM is measured 50 cm above the last root; B - For multiple trunks, each is measured as a separate individual, provided the forking is below 1.30 m; C - For fallen trees, the distance is taken from the underside; D - For tall trees, the measurement should be done with the support of modular ladders.
Figure 3 .
Figure 3. Details of marking big trees at Rio Doce State Park: use of a modular ladder up to 12 meters and the POM painting process using a stencil. Source: Metzker, T.
PMP name and abbreviation; Complete date when the collection was done; Names of each of the team members; Number of each individual; Registration number of the sub-plot to which each individual belongs; Data related to the POM and DBH; Pertinent observations.
Figure 4 .
Figure 4. Example of the result of spatial mapping of the field individuals within the PMP at Rio Doce State Park -Minas Gerais, Brazil.
Figure 5 .
Figure 5. Examples of the development of a site-specific equation for the calculation of height from tree diameters [17] and of equation adjustment based on inspection of the normality of the residuals.
N0 equals the number of individuals at time 0; Nm is the number of dead individuals within the interval; and Nr is the number of individuals recruited in the same time interval (t).
Figure 6 .
Figure 6. Example taken from Phillips et al. (2008) [43], referring to the analysis of mortality rates (grey lines) and recruitment rates (black lines) over a monitoring period of 25 years. Solid lines are means and dotted lines are 95% CIs.
Table 1 .
Examples of allometric equations used to estimate the aboveground biomass (kg) of trees, palms and lianas in different tropical forest types. DBH - diameter at breast height; Ht - total height; p - mean wood density (g/cm3).
Table 2 .
Aboveground biomass in different forest typologies at neotropical sites. Adapted from Alves et al. (2010). | 2017-09-17T11:19:22.740Z | 2012-08-29T00:00:00.000 | {
"year": 2012,
"sha1": "12a2efe2d6e14a7259854cd0065e514b0da46258",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/38675",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8d44bfea753dc0054013455c42317e4099743969",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Business"
]
} |
261380589 | pes2o/s2orc | v3-fos-license | Open your black box classifier
Abstract A priority for machine learning in healthcare and other high stakes applications is to enable end‐users to easily interpret individual predictions. This opinion piece outlines recent developments in interpretable classifiers and methods to open black box models.
In many computer-based decision support applications, clinical attributes take the form of tabular data. Being so prevalent, not just in medicine but also for risk models in other domains ranging from banking to insurance, this class of data deserves particular focus and it is the subject of the rest of this piece. For tabular data specifically, one way around the issue of transparency is with models that are interpretable by design [6].
Interpretability by design has long been known to be possible with linear-in-the-parameters models and with decision trees, albeit at the expense of classification performance. Although rule-based predictors [7] and risk scores derived from logistic regression models [8] have been effective in aiding decision making in clinical practice, and indeed have performance levels that are competitive even against modern approaches such as deep learning [9], there are significant shortcomings. In order to cope with non-linear dependence on clinical attributes with linear models, input variables are frequently discretised. An example of this would be to group age intervals into multiple categories. However, if age bands are, for instance, by decades, this would treat someone aged 39 as more similar to a 30-year-old than to a 40-year-old. Discretisation masks variation within each group and, furthermore, it can lead to considerable loss of power and residual confounding [10].
One way to manage non-linearities with interpretable models is to fit a Generalised Additive Model (GAM), estimating the dependence on individual variables with splines [11]. This class of flexible models is in fact a gold standard for interpretability [12]. They are self-explaining [13] and new formulations are emerging which do not require careful tuning of spline parameters but replace them with machine learning modules. In the case of Explainable Boosting Machines [14] the modules are random forests and gradient boosted trees, whereas Neural Additive Models [15] have the structure of a self-explaining neural network (SENN). Both are bespoke models and estimate the component functions of the GAM in tandem with inferring an optimal sparse model structure. Along with linear and logistic regression, GAMs lend themselves to practical implementation in the form of nomograms, which are already familiar to clinicians for visualisation of risk scores [16,17].
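As an illustration of how such a model can be fitted in practice, the sketch below uses the open-source interpret package's Explainable Boosting Machine; the data and feature names are invented, and the exact API details are assumptions that may differ between package versions.

```python
# Minimal sketch (assumptions: the open-source `interpret` package is installed
# and its API matches the version we have in mind; the clinical features are invented).
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # e.g. age, blood pressure, biomarker level
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=500) > 0).astype(int)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)
print(ebm.predict_proba(X[:5]))          # individual risk predictions
# ebm.explain_global() returns the fitted shape functions, i.e. the additive
# components that make each prediction inspectable.
```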
But what about existing machine learning models?
A key to opening probabilistic black box classifiers without sacrificing predictive performance is an old statistical tool, Analysis of Variance (ANOVA). It is well known that ANOVA decompositions can express any function as an exact sum of functions of fewer variables, comprising main effects for individual variables together with interaction terms [18]. This is a natural way to derive additive functions with gradually increasing complexity. The derived functions are non-linear and mutually orthogonal, ensuring that the terms involving several variables do not overlap with the information contained in the simpler component functions.
All black box models generate multivariate response functions and hence can be expressed in the form of GAMs using ANOVA. For probabilistic models, this can be applied to the logit of the predicted probabilities. Selecting univariate and bivariate additive terms provides interpretability. The black box is then explained by replacing the original data columns with the ANOVA terms and selecting the most informative components with an appropriate statistical model, such as the Least Absolute Shrinkage and Selection Operator (Lasso) [19].
There are two measures that can be applied in ANOVA, both related to the commonly used partial dependence functions. The Dirac measure corresponds to a cut across the predicted surface and the Lebesgue measure is an average over the same surface, sampled over the training data by setting the values of only the variables in the argument of each component function and sweeping them across their full range. In practice, the main difference between the two measures is a small variation in the models that are selected. This framework is remarkably stable, showing that partial dependence functions, normally used only for visualisation, work very well for model selection and are effective for prediction.
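The sketch below illustrates the idea with a deliberately simplified recipe: univariate components are obtained by averaging the logit of a black box classifier over the training sample (in the spirit of the Lebesgue measure described above), and an L1-penalised logistic regression stands in for the Lasso selection step. All names and data are illustrative, and the original method involves further steps (bivariate terms, orthogonalisation) not shown here.

```python
# Simplified sketch: extract univariate partial-dependence components of a black
# box on the logit scale, then select the informative ones with an L1 penalty.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def logit(p, eps=1e-6):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = (np.sin(X[:, 0]) + X[:, 1] > 0).astype(int)

black_box = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Univariate component for feature j, evaluated at each observed value of x_j:
# pin x_j, average the black-box logit over the sample, then centre the result.
def component(j):
    vals = np.empty(len(X))
    for i, xj in enumerate(X[:, j]):
        Xtmp = X.copy()
        Xtmp[:, j] = xj
        vals[i] = logit(black_box.predict_proba(Xtmp)[:, 1]).mean()
    return vals - vals.mean()

Z = np.column_stack([component(j) for j in range(X.shape[1])])
selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Z, y)
print(np.round(selector.coef_, 2))   # near-zero weights flag uninformative components
```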
Once the black box has been mapped onto a GAM, from there onwards the two measures yield exactly the same component functions. Interestingly, the Shapley additive values, already used in medicine [4], are exactly the terms in the GAM expansion [20].
A natural next step is to replicate the interpretable model derived from the black box by implementing it in the form of a Generalised Additive Neural Network (GANN), also known as a SENN. This will ensure that the univariate and bivariate component functions can be further optimised given the selected structure. Model refinement is possible by a renewed application of the ANOVA decomposition, this time to separate and orthogonalise the first- and second-order terms in the GANN/SENN [20] rather than the original Multi-Layer Perceptron (MLP). This results in a streamlined model that is optimised to the final sparse structure. A schematic of the model inference process is shown in Figure 1.
Second-order terms appear to be sufficient to achieve strong performance [20], no doubt due to the inherent noise in the data. Moreover, starting with a black box model, the structure and form of the original interpretable model are generally very close to those of the GANN/SENN estimated de novo by reinitialising and re-training, as are the predictive performances of the two models [20].
The derived GAMs make clinically plausible predictions for real-world data and buck the performance-transparency trade-off even against deep learning [21]. They solve one of the biggest hurdles for AI by enabling physicians and other end-users to easily interpret the results of the models. Arguably, transparency has arrived for tabular data, setting a new benchmark for the clinical application of flexible classifiers.
FIGURE 1
FIGURE 1 Schematic of the mapping of black box classifiers into Generalised Additive Models for tabular data. | 2023-08-31T15:15:21.003Z | 2023-08-29T00:00:00.000 | {
"year": 2023,
"sha1": "d1cc95ae7a864196e651ad64ef1bc8c64fc42074",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1049/htl2.12050",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a742fd9e63ec5222ebf377d0898bbc415be1abc",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
14384721 | pes2o/s2orc | v3-fos-license | The Impact of Feedback on Disk Galaxy Scaling Relations
We use a disk formation model to study the effects of galactic outflows (a.k.a. feedback) on the rotation velocity - stellar mass - disk size, gas fraction - stellar mass, and gas phase metallicity - stellar mass scaling relations of disk galaxies. We show that models without outflows are unable to explain these scaling relations, having both the wrong slopes and normalization. The problem can be traced to the model galaxies having too many baryons. Models with outflows can solve this "over-cooling" problem by removing gas before it has time to turn into stars. Models with both momentum and energy driven winds can reproduce the observed scaling relations. However, these models predict different slopes which, with better observations, may be used to discriminate between these models.
Introduction
Galactic outflows are widely observed in galaxies that are undergoing, or have recently undergone, intense star formation: e.g. Nearby starburst and IR bright galaxies (Martin 2005); Post starburst galaxies at redshift z ≃ 0.6 (Tremonti et al. 2007); Normal Star forming galaxies at redshift z = 1.4 (Weiner et al. 2009) and Lyman Break Galaxies at redshifts z ≃ 3 (Shapley et al. 2003). However, whether or not galactic outflows play an important role in determining the properties of galaxies has yet to be determined.
A clue that outflows might play an important role in galaxy formation comes from the fact that galaxy formation is inefficient. The galaxy formation efficiency, ǫ GF , defined as the ratio between the galaxy mass (in stars and cold gas) and the total baryons available to that galaxy (the cosmic baryon fraction times the total virial mass of the halo), peaks at ≃ 33%. This has been determined by galaxy-galaxy weak lensing studies (Hoekstra et al. 2005; Mandelbaum et al. 2006), which are independent of ΛCDM, and galaxy-halo number abundance matching (e.g. Conroy & Wechsler 2009), which assumes the ΛCDM halo mass function as a prior.
A low peak galaxy formation efficiency is a problem because cooling is expected to be efficient in typical galaxy mass haloes (with virial velocities ranging from V vir ≃ 60 to ≃ 150 km/s). At low masses (below V vir ≃ 30 km/s) cooling is suppressed by UV photo heating, while at high masses (and high virial temperatures) cooling is inefficient due to the physics of radiative cooling. Thus another mechanism is needed to suppress galaxy formation in the halo mass regime where one would expect it to be highly efficient. Galactic outflows driven by supernovae (SN) or young massive stars are the prime candidate, having been successfully invoked in semianalytic galaxy formation models to explain the shallow faint end of the galaxy luminosity function (e.g. Benson et al. 2003).
Simple feedback models
The simplest, physically motivated, feedback models can be described by two parameters: the mass loading factor, η, defined as the ratio between the mass outflow rate and the star formation rate; and the wind velocity, V wind. These two parameters are related by the mechanism that drives the wind, and by the relation between the wind velocity and the escape velocity, V esc. Feedback models can be divided into three broad categories:

• Constant Velocity Wind: Assumes V wind = const., which implies η = const. A popular example is that implemented by Springel & Hernquist (2003), which assumes V wind = 484 km/s and η = 2. This corresponds to 25% of the SN energy being transferred to the wind (i.e. ǫ FB = 0.25).

• Energy Driven Wind: Assumes V wind = V esc; energy conservation implies η = 10 ǫ FB (300/V wind)^2, where ǫ FB is the fraction of SN energy that ends up in the outflow (e.g. van den Bosch 2001).

Finlator & Davé (2008) showed that models with the momentum driven wind provide a better match to the stellar mass - gas phase metallicity relation at z ≃ 2 than models with a constant velocity energy driven wind, or models without galaxy winds. However, it is not clear that this is a convincing argument against energy driven winds, because Finlator & Davé (2008) did not consider an energy driven wind with the same assumption that they made for the momentum driven wind, i.e. V wind ≃ V esc.
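For concreteness, the two parametrizations quoted above can be evaluated numerically as in the sketch below; the momentum driven scaling is omitted because its expression is not given here, and the escape velocities are illustrative.

```python
# Mass loading factors for the two wind parametrizations quoted above.
# Constant-velocity wind (Springel & Hernquist 2003): eta = 2, V_wind = 484 km/s.
# Energy-driven wind: eta = 10 * eps_FB * (300 / V_wind)^2 with V_wind = V_esc.
# The momentum-driven scaling is not written out in the text and is omitted here.

def eta_constant_wind():
    return 2.0

def eta_energy_driven(v_wind_kms, eps_fb=0.25):
    return 10.0 * eps_fb * (300.0 / v_wind_kms) ** 2

for v_esc in (60.0, 150.0, 300.0):          # km/s, spanning typical disk-galaxy haloes
    print(f"V_esc = {v_esc:5.0f} km/s  ->  eta_energy = {eta_energy_driven(v_esc):.2f}")
```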
Here we use a semi-analytic disk galaxy formation model to discuss the observational signatures of different feedback models on the scaling relations of disk galaxies. We address the following questions: 1) Can models without outflows explain these relations? 2) Can models with outflow explain these relations? and 3) Can the scaling relations be used to discriminate between different wind models?
The Disk Galaxy Formation Model
Here we give a brief overview of the disk galaxy evolution model used in this proceedings. This model is described in detail in Dutton & van den Bosch (2009). The key difference with almost all disk evolution models is that in this model the inflow (due to gas cooling), outflow (due to SN driven winds), star formation rates, and metallicity are computed as a function of galactocentric radius, rather than being treated as global parameters. The main assumptions that characterize the framework of these models are the following:

1. Mass Accretion History: Dark matter haloes around disk galaxies grow by the smooth accretion of mass, which we model with the Wechsler et al. (2002) mass accretion history (MAH). The shape of this MAH is specified by the concentration of the halo at redshift zero;

2. Halo Structure: The structure of the halo is given by the NFW profile (Navarro, Frenk, & White 1997), which is specified by two parameters: the mass and concentration. The evolution of the concentration parameter is given by the Bullock et al. (2001) model with parameters for a WMAP 5th year cosmology (Macciò et al. 2008);

3. Angular Momentum: Gas that enters the halo is shock heated to the virial temperature, and acquires the same distribution of specific angular momentum as the dark matter. We use the angular momentum distributions of the halo as parametrized by Sharma & Steinmetz (2005);

4. Gas Cooling: Gas cools radiatively, conserving its specific angular momentum, and forms a disk in centrifugal equilibrium;

5. Star Formation: Star formation occurs according to a Schmidt type law on the dense molecular gas, which is computed following Blitz & Rosolowsky (2006);

6. Supernova Feedback: Supernova feedback re-heats some of the cold gas, ejecting it from the disk and halo;

7. Metal Enrichment: Stars eject metals into the inter stellar medium, enriching the cold gas.
8. Stellar Populations: Bruzual & Charlot (2003) stellar population synthesis models are convolved with the star formation histories and metallicities to derive luminosities and surface brightness profiles.

Impact of Feedback on Velocity, Stellar Mass and Disk Size

Fig. 1 shows the impact of feedback on the rotation velocity, stellar mass, and disk size of a galaxy that forms in a halo with virial mass M vir = 6.3 × 10^11 h^-1 M ⊙, and which has the median halo concentration and angular momentum parameters for haloes of this mass. The green lines show the observed scaling relations from (Dutton et al. 2007 and Shen et al. 2003). The circles show models with feedback efficiency varying from ǫ FB = 0 to 1. The model without feedback results in a galaxy that is too small and which rotates too fast. The upper right panel shows the galaxy mass fraction m gal = M gal /M vir , and galaxy spin parameter λ gal = (j gal /m gal )λ, where λ is the spin parameter of the halo and j gal = J gal /J vir is the angular momentum fraction of the galaxy, versus the feedback efficiency. This shows that the model galaxy without feedback has acquired 85% of the available baryons and 80% of the available angular momentum. The mass and angular momentum fractions are less than unity because cooling is not 100% efficient. The angular momentum fraction is less than the galaxy mass fraction because cooling occurs from the inside-out.
As the feedback efficiency is increased, the galaxy stellar mass decreases, the disk size increases and the rotation velocity decreases. These changes are primarily driven by the decrease in the galaxy mass fraction, m gal , and secondarily by the increase in the galaxy spin parameter, λ gal (upper right panel). The increase in galaxy spin parameter is the result of preferential loss of low angular momentum material, which helps to explain the origin of exponential galaxy disks, which are otherwise not naturally produced in CDM cosmologies (Dutton 2009).
The upper left panel shows that models with adiabatic contraction (Blumenthal et al. 1986) (red points and lines) rotate too fast for all feedback efficiencies. For a model without adiabatic contraction (open circles and black lines) the zero point of the VM relation is reproduced for feedback efficiencies of ǫ FB ≃ 0.1 − 0.5. In order for our models to produce realistic rotation velocities, in the models that follow we will assume the halo does not contract in response to galaxy formation.
Impact of feedback on disk sizes, gas fractions and metallicity
Here we discuss the impact of feedback on the scaling relations between disk size, gas fractions and gas phase metallicity with stellar mass. We discuss three feedback models: 1) no feedback; 2) momentum driven feedback; 3) energy driven feedback with ǫ FB = 0.25. For each model we generate a Monte Carlo sample of galaxies, with halo masses logarithmically sampled from M vir = 10^10 − 10^13 h^-1 M ⊙ , and log-normal scatter in halo spin parameter λ, halo concentration, c, and angular momentum distribution shape, α.
Disk Sizes: The upper panels of Fig. 2 show the disk size-stellar mass relation for our three models. As expected from Fig. 1, the model without feedback produces a size-mass relation with the wrong zero point, but also with the wrong slope. Models with feedback reproduce the zero point of the size-mass relation, but they predict different slopes: 0.26 for the momentum driven wind and 0.14 for the energy driven wind. The energy driven wind predicts a shallower slope because it is more efficient at removing gas from lower mass haloes, which (see Fig. 1) moves galaxies to lower masses and larger sizes. Observationally the correct slope is not clear, with values of 0.24 (Pizagno et al. 2005) and 0.28 (Dutton et al. 2007) and 0.14 (at low masses) to 0.39 (at high masses) from Shen et al. (2003) being reported. Thus a more accurate observational determination of the size-stellar mass relation would provide useful constraints to these models.
Gas Fractions: It has emerged in the last few years (Springel & Hernquist 2005;Hopkins et al. 2009) that the gas fraction of galaxies plays an important role in determining the morphology of galaxies after mergers. In particular galaxies with high gas fractions can re-form their disks after major and intermediate mass mergers. This removes a potential problem for the formation of bulgeless and low bulge fraction galaxies in ΛCDM, where intermediate and major mergers occur in the lifetime of essentially all dark matter haloes.
The middle panels of Fig. 2 show the gas fraction vs. stellar mass relation. The green points show observations from Garnett (2002), with a fit to the mean and scatter of this data shown by the solid and dashed lines. The model without feedback (left) produces galaxies that are too gas poor, especially for lower mass galaxies. This problem is the result of the disks being too small, and hence too high surface density, which results in more efficient star formation. The models with feedback both reproduce the observed relation, with the energy driven wind predicting a higher zero point.
Figure 2. Dependence of disk size, gas fraction and gas metallicity on feedback. Upper panels: disk size - stellar mass; Middle panels: gas fraction - stellar mass; Lower panels: gas phase metallicity - stellar mass. The observed relations are given by green lines, points and symbols. The model galaxies are given by grey points, with the black lines showing the 14th and 86th percentiles in stellar mass bins. For the size-mass and gas fraction-mass scaling relations the data (Dutton et al. 2007; Shen et al. 2003; Garnett 2002) and models are for redshift z = 0. For the metallicity-mass relation the data (Erb et al. 2006) and models are for redshift z = 2.26. The sizes, gas fractions and metallicities are coupled, and yield different slopes for different feedback models.

Mass Metallicity: Finlator & Davé (2008) used the mass metallicity relation at redshift z ≃ 2 to argue in favor of momentum driven winds over energy driven winds (of constant velocity). The lower panels of Fig. 2 show the stellar mass - gas metallicity relation at z = 2.26. We confirm the result of Finlator & Davé (2008) that models without feedback do not reproduce the mass-metallicity relation, and that models with momentum driven winds provide a good match to the observations. However, we also show that models with energy driven winds provide an equally good match to the data. The energy and momentum driven winds do predict different slopes: 0.17 for momentum and 0.32 for energy, and thus more accurate observations, especially at lower stellar masses, may be able to distinguish between these two models.
Summary
We have used a semi-analytic disk galaxy formation model to investigate the effects of galaxy outflows on the scaling relations of disk galaxies. We find that 1) None of the scaling relations can be reproduced in models without outflows: model galaxies rotate too fast, are too small, are too gas poor and are too metal rich. These problems are driven by the high baryonic mass fractions of these galaxies.
2) Models with outflows can solve this problem by removing gas from galaxies before it has had time to turn into stars.
3) Models with momentum and energy driven winds provide acceptable fits to the observed disk size-stellar mass, gas fraction stellar mass, and gas metallicity -stellar mass relations. However, these models predict different slopes (due to the different scaling between mass loading factor and wind velocity). Thus more accurate observations will be able to discriminate between these models. | 2009-05-04T02:16:41.000Z | 2008-10-28T00:00:00.000 | {
"year": 2010,
"sha1": "5705a9817db2568bb32e7a27cf55f5c5962fa7eb",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/396/1/141/4069299/mnras0396-0141.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "5705a9817db2568bb32e7a27cf55f5c5962fa7eb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
4844962 | pes2o/s2orc | v3-fos-license | Evaluation of BODE index and its relationship with systemic inflammation mediated by proinflammatory biomarkers in patients with COPD
Introduction: BODE index, a multidimensional grading system based on Body mass index, airway Obstruction, Dyspnea scale, and Exercise capacity, has been increasingly used for the evaluation of chronic obstructive pulmonary disease (COPD). Many of the systemic manifestations of COPD are shown to be mediated by elevated levels of proinflammatory biomarkers.

Objective: We aimed to investigate the relationship between the BODE index, its components, disease severity, and proinflammatory biomarkers like C-reactive protein (CRP), tumor necrosis factor (TNF)-α, and interleukin (IL)-6.

Materials and methods: A cross-sectional study which included 290 clinically stable COPD patients and 80 smoker controls was conducted. Medical history, body mass index, pulmonary function tests, the 6-minute walking test, and the modified Medical Research Council (mMRC) dyspnea scale were evaluated. BODE scores were determined. Systemic inflammation was evaluated with the measurement of CRP, TNF-α, and IL-6 in the serum samples of all studied subjects. The correlation between inflammatory biomarkers and BODE index was assessed in COPD patients.

Results: We found a significant relationship between COPD stages and BODE index. Our analysis showed significant association between systemic biomarkers and components of the BODE index. Both TNF-α and CRP levels exhibited weak but significant correlation with BODE index. Serum IL-6 concentrations exhibited significant correlation with the 6-minute walking test, mMRC, and BODE index (r=0.201, P=0.004; r=0.068, P=0.001; and r=0.530, P=0.001, respectively). Also, an inverse and significant correlation was observed between BODE index and FEV1 (r=0.567, P=0.001). IL-6 exhibited a highly significant and inverse correlation with FEV1 (r=−0.580, P=0.001).

Conclusion: BODE index should be considered for evaluating patients with COPD. Also, IL-6 seems to be a potential biomarker that may enable determination of the severity and prediction of the course of the disease.
Introduction
Chronic obstructive pulmonary disease (COPD) is a disorder characterized by progressive development of airflow limitation and an enhanced chronic inflammatory response in the airways, 1 and is predicted to become the third most frequent cause of death in the world by 2030. 2 The major manifestation of airflow obstruction in COPD is the reduction in forced expiratory volume in 1 second (FEV1). 3 However, according to the European Respiratory Society and the American Thoracic Society (ATS), the measurement of FEV1 alone does not represent the complex clinical consequences of COPD and additional parameters should be assessed and evaluated. 4 Recent developments in the past decades have led to better understanding of the systemic nature of COPD, which has given rise to the multidimensional classification system that systematically predicts the degree of mortality in individuals with COPD. The BODE index, which is based on Body mass index (BMI), airway Obstruction, Dyspnea scale, and Exercise capacity, includes both symptoms and physiological measurements, and now it is being considered as better indicator than FEV1 for predicting mortality and severity of COPD. 5 According to the Global initiative for chronic Obstructive Lung Disease (GOLD), the BODE index gives more comprehensive information in predicting mortality from any cause as well as respiratory causes than FEV1-based staging system. 5,6 It is now recognized that COPD is characterized by lowgrade chronic systemic inflammation, and therefore, it is an important component of COPD. 7 The role of inflammatory cytokines has also been widely investigated in the natural history of COPD. The inflammation in the respiratory tract of COPD patients seems to be an amplification of the normal inflammatory response of the respiratory tract to chronic irritants. Inflammatory cells like macrophages, neutrophils, and lymphocytes release inflammatory mediators which interact with structural cells in the airways and the lung parenchyma. Various inflammatory mediators, such as cytokines, chemokines, growth factors, and reactive oxygen species, are found to be increased in COPD patients. 8 C-reactive protein (CRP) levels have been shown to be elevated in the serum of even stable COPD patients and found to be associated with disease severity, quality of life, exercise capacity, response to treatment, and mortality. 9 Raised levels of proinflammatory cytokines such as interleukin (IL)-6 have been reported in the circulation of stable COPD patients 7,10 and shown to be associated with impaired functional capacity, 11 reduced daily physical activity, 12 and decreased health status. 9,10 Tumor necrosis factor-α (TNF-α) has been reported to play an important role in muscle wasting and weight loss occurring in COPD patients. 13 Several studies have shown elevated levels of TNF-α and its receptors in the circulation of COPD patients. 14,15 It is not clear whether these cytokines are simply markers of the inflammatory process in COPD or differences in inflammation are related to different phenotypes of the disease. 16 The inflammatory processes that underlie COPD are probably mediated by a multitude of cytokines of proinflammatory cascade. Systemic inflammatory biomarkers may offer new strategies for the diagnosis and may be valuable in assessing the prognosis of patients with COPD.
There are only a few studies that have investigated the relationship between the clinical characteristics of this disease severity (measured by the BODE index) and the systemic inflammation mediated by proinflammatory cytokines, despite there being increasing scientific evidences. However, no study has reported the significance of the differences in the levels of multiple proinflammatory cytokines among different scores and quartiles of the BODE index in patients with COPD in the Indian population.
In the present study, we hypothesize that the BODE index would be a better predictor of systemic inflammation and health status in COPD patients than FEV1 alone, and would present significant associations with biomarkers of systemic inflammation. We also planned to investigate the correlation between the components of BODE index and the BODE index itself with systemic inflammatory biomarkers in patients with stable COPD.
Ethics statement
The study was approved by the Institutional Ethics Committee review board of Maulana Azad Medical College and Associated Lok Nayak Hospital, New Delhi, India. All the subjects received written information and provided written informed consent prior to participation in the study.
Patient selection
Two hundred and ninety stable COPD patients diagnosed according to the GOLD guidelines in the pulmonary outpatient clinic of Maulana Azad Medical College and associated Lok Nayak Hospital were included in the study between November 2011 and December 2013. Eighty smokers (65 males and 15 females) from the general population were recruited as the control group. The inclusion criterion was: COPD patients in stable conditions (no exacerbations due to any reason in the last 4 weeks). Stable COPD patients had been receiving inhaled bronchodilator therapy in the form of long-acting β2-agonists and/or anticholinergic agents. Severe/very severe COPD patients were on inhaled corticosteroids as well. COPD was defined as FEV1/forced vital capacity (FVC) ratio of less than 70% at 20 minutes after salbutamol administration. The exclusion criteria were: presence of a respiratory disorder other than COPD or other inflammatory diseases (inflammatory bowel disease, vasculitis), presence of atopy, and history of myocardial infarction
Smoker controls
Male/female subjects aged 35-75 years without any significant disease as determined by history and physical examination and current and ex-smokers with a smoking history of ≥10 pack-years with normal pulmonary function were enrolled as controls.
Demographic features and medical history of the subjects were recorded. Weight, height, and dyspnea severity were measured, and the 6-minute walking test (6MWT) and the pulmonary function tests (PFTs) were performed. Measurement of serum levels of inflammatory biomarkers (CRP, TNF-α, IL-6) was also performed.
PFTs (spirometry)
PFTs were done for stable COPD patients in the outpatient clinic. FEV1 and FVC were measured with a calibrated spirometer (Spirolab III, MIR, Via Del Maggiolino, Roma, Italy; a portable pulmonary function apparatus). Patients with FEV1 ≥80% of the predicted value were considered mild, those with 50% ≤ FEV1 < 80% of the predicted value were considered moderate, those with 30% ≤ FEV1 < 50% of the predicted value were considered severe, and patients with FEV1 <30% of the predicted value were considered very severe. The best value from three consecutive tests was accepted. FEV1, FVC, and FEV1/FVC were measured according to ATS criteria. COPD staging was done according to GOLD 2011. 6

Anthropometric, body composition, exercise capacity, and dyspnea assessments

During the clinical visit, the information about demographics and a detailed medical history were obtained from the patients. Height was measured in centimeters and weight in kilograms by a calibrated scale with the subjects not wearing their shoes. The BMI was calculated as weight (in kilograms) divided by the square of height (in meters). Functional exercise capacity was measured with the 6MWT in accordance with the ATS recommendations. 17 The 6MWT was performed in a level, covered hospital corridor of approximately 50 m in length. Three tests were performed and the test with the maximum 6-minute walking distance (6MWD) was considered for analysis. Each patient received standard instructions and encouragement during the test. The magnitude of dyspnea was assessed using the modified scale of Medical Research Council (mMRC). 18 Patients were asked about their perceived breathlessness and were then classified into the mMRC five dyspnea grades (0 minimum to 4 maximum).
BODE index
The BODE index was calculated for each participant based on the combination of four variables, with the following scores: a measure of body composition (BMI), from 0 to 1 point; a measure of the intensity of airflow obstruction (FEV1% predicted, postbronchodilator), from 0 to 3 points; a measure of the subjective sensation of dyspnea (mMRC scale), from 0 to 3 points; and a measure of exercise capacity (distance walked in the 6MWT), from 0 to 3 points. The final score of the BODE index ranges from 0 to 10 points; the higher the index value, the worse the patient's condition. The participants were divided into four quartiles for the analysis according to their BODE index score, as previously described by Celli et al. 5 Quartile I is a score of 0-2 points, quartile II 3-4 points, quartile III 5-6 points, and quartile IV a score of 7-10 points.
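For illustration, the scoring can be expressed as a short function. The component cut points used below follow the original BODE publication by Celli et al and are not restated in the text above, so they are an assumption to be verified against that source; the example patient values are invented.

```python
# Sketch of the BODE score as described above (0-10 points, higher is worse).
# The component cut points below are taken from the original BODE publication
# (Celli et al. 2004); they are not restated in the text, so verify before use.

def bode_index(bmi, fev1_pct, mmrc, walk_m):
    bmi_pts = 0 if bmi > 21 else 1
    if fev1_pct >= 65: fev1_pts = 0
    elif fev1_pct >= 50: fev1_pts = 1
    elif fev1_pct >= 36: fev1_pts = 2
    else: fev1_pts = 3
    mmrc_pts = 0 if mmrc <= 1 else mmrc - 1          # mMRC 0-1 -> 0, 2 -> 1, 3 -> 2, 4 -> 3
    if walk_m >= 350: walk_pts = 0
    elif walk_m >= 250: walk_pts = 1
    elif walk_m >= 150: walk_pts = 2
    else: walk_pts = 3
    return bmi_pts + fev1_pts + mmrc_pts + walk_pts

def bode_quartile(score):
    if score <= 2: return "I"
    if score <= 4: return "II"
    if score <= 6: return "III"
    return "IV"

score = bode_index(bmi=20.5, fev1_pct=42, mmrc=3, walk_m=280)
print(score, bode_quartile(score))   # 1 + 2 + 2 + 1 = 6 -> quartile III
```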
Measurement of inflammatory biomarkers (CRP, TNF-α, IL-6)
Five milliliters of venous blood sample was collected and centrifuged at 1,000 ×g for 15 minutes. The serum samples were separated, aliquoted, and preserved at −80°C until further analysis. Serum CRP levels were measured on an automated analyzer (Unicel DxC 800 Synchron Clinical Systems; Beckman Coulter Inc., Galway, Ireland). Serum TNF-α, IL-6 (Thermo Fisher Scientific, Waltham, MA, USA), and CRP (Thermo Fisher Scientific) were measured according to the manufacturer's instructions with the enzyme-linked immunosorbent assay method. The intra-assay coefficient of variation (%CV) values for the TNF-α kit were 4.2% for 87.5 pg/mL and 4.5% for 369 pg/mL, while the interassay coefficients of variation were 5.2% for 92.8 pg/mL and 5.0% for 384 pg/mL. The lowest limit of detection for the TNF-α kit was <2 pg/mL. The intra-assay precision value for the IL-6 kit was ≤9.4%, while the interassay precision was ≤8.6%. The lowest measurement level for the IL-6 kit was <1.0 pg/mL. The intra-assay coefficient of variation (%CV) values for the CRP kit were 6.02% for 51.01 pg/mL and 9.82% for 205.01 pg/mL, while the interassay coefficients of variation were 9.98% for 56.98 pg/mL and 9.82% for 229.33 pg/mL.
Statistical analysis
Data were analyzed with the Statistical Package for the Social Sciences (SPSS) 20.0 program (IBM Corporation, Armonk, NY, USA). The normality of data distribution was analyzed by the Kolmogorov-Smirnov test. The results are presented as mean ± standard deviation (SD), median (range), or proportions (percentage), depending upon their distribution and the measurement scale. Baseline differences between the studied groups were determined with the unpaired Student's t-test and one-way ANOVA. In general, the Kruskal-Wallis test with Dunn's post-test was used to compare the variables of the participants in different quartiles of the BODE index, the Mann-Whitney test to evaluate the differences in the biomarkers' levels between the groups "quartiles I-II" and "quartiles III-IV", and the Spearman correlation coefficient was used to correlate the BODE index and the studied biomarker variables. Statistical significance was set at P-values <0.05 (two-sided) for all analyses.
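As a purely illustrative sketch, the nonparametric tests named above can be run in Python with SciPy as follows; the IL-6 values and BODE scores in the example are invented and serve only to demonstrate the function calls.

```python
# Toy illustration of the nonparametric tests named above, using SciPy.
# The IL-6 values (pg/mL) and BODE scores are invented example data.
from scipy.stats import kruskal, mannwhitneyu, spearmanr

il6_q1, il6_q2, il6_q3, il6_q4 = [4.1, 5.0, 3.8], [6.2, 7.4, 5.9], [9.1, 8.3, 10.2], [12.5, 14.0, 11.8]

h, p_kw = kruskal(il6_q1, il6_q2, il6_q3, il6_q4)          # across BODE quartiles I-IV
u, p_mw = mannwhitneyu(il6_q1 + il6_q2, il6_q3 + il6_q4)   # quartiles I-II vs III-IV
rho, p_sp = spearmanr([0, 2, 4, 5, 7, 9], [3.8, 5.1, 6.0, 8.2, 9.5, 13.1])  # BODE score vs IL-6

print(f"Kruskal-Wallis p = {p_kw:.3f}, Mann-Whitney p = {p_mw:.3f}, Spearman rho = {rho:.2f}")
```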
Results
Demographics and clinical characteristics of the study subjects

Table 1 presents the main demographic and clinical characteristics of all participants at recruitment. On average, COPD patients had moderate to severe airflow limitation and, as expected, they complained of more symptoms than controls. A total of 290 COPD patients (254 males and 36 females) with a mean age of 58.3±16.1 years were recruited in the study. Also, 80 smokers (65 males and 15 females) were recruited in the study from the general population (mean age 47.2±11.0 years). There was a higher proportion of males in patients and controls compared with the healthy smokers.
COPD patients had more pack-years of smoking (38.6±17.4 vs 17.4±9.5 years) and clear functional exercise intolerance (6MWD = 329±122 vs 510±133 m). As expected, COPD patients had significant moderate to severe airflow obstruction (FEV1 = 54.3±12.8), compared to healthy smokers (FEV1 = 102.8±16.3) who had normal spirometry (P<0.001). COPD patients and smoker controls did not differ in BMI (P=0.42). BODE index score and mMRC score were found to be significantly higher in COPD patients compared to smoker controls (P<0.001) ( Table 1). On crude comparison, we found that COPD patients had higher concentrations of serum inflammatory biomarkers (TNF-α, IL-6, and CRP) than the controls and they differed significantly (P<0.001). Table 2 shows the characteristics of the COPD patients stratified according to different stages of GOLD. The patients represented three stages of COPD severity. One hundred and forty-seven COPD patients were in GOLD II stage, 104 in GOLD III stage, and 39 were in GOLD IV stage. Age, BMI, pack-years, and pulmonary function parameters differed significantly with increasing GOLD stage. As expected, exercise capacity measured by 6MWT decreased significantly with increasing disease severity. All the studied biomarker values changed significantly and increased with worsened disease severity ( Figure 1). Interestingly, among the three biomarkers studied, IL-6 was found to be the most consistent for the severity as assessed by GOLD (P=0.002). BODE index scores also increased significantly with increasing disease severity (P<0.001) ( Table 2).
Relationship between proinflammatory biomarkers and the clinical parameters of COPD
In COPD subjects, serum concentrations of inflammatory biomarkers were assessed according to different quartiles of BODE index scores. In general, all the studied biomarker values changed significantly with increasing disease severity (quartiles I-IV) and the differences were found to be concordant with the severity assessed by BODE index scores. The mMRC score and airflow limitation increased significantly in proportion to different quartiles of BODE index (I-IV) and 6MWD decreased significantly from quartile I to IV of BODE index ( Figure 2). Age, BMI, and pack-years also differed significantly with worsened disease severity as assessed by BODE index scores. Of the three biomarkers studied, IL-6 was found to be the most consistent biomarker that differed significantly with increased disease severity as stratified by low quartile to high quartile of BODE index (P<0.001) ( Table 3).
Associations of the biomarker panel with FEV1, BMI, mMRC, 6MWD, and BODE index
In COPD patients, the relationships between inflammatory biomarker concentrations, BMI, 6MWT, mMRC, and BODE index were evaluated (Figure 3). The relationship of individual biomarkers with BMI, mMRC, 6MWD, and BODE index is shown in Table 4; a weak, negative correlation was observed (Figure 4). All the markers studied were associated with the physiological indicators of disease, but the strength of the association differed among the biomarkers. Therefore, in our study, IL-6 was the biomarker that correlated significantly with the greatest number of functional characteristics in COPD patients.
Discussion
This study aimed to investigate the correlation between the components of the BODE index and systemic inflammatory biomarkers in patients with stable COPD. We propose that multidimensional assessment of COPD by means of serum biomarkers might be more accurate in predicting the risk and course of disease progression than assessment based on a single biomarker or on clinical variables alone. In the current study, the biomarker panel appears to furnish information additional to the individual clinical variables and to the BODE index. To our knowledge, this is the first study from India primarily designed to investigate whether adding a panel of proinflammatory biomarkers could add to the value of the BODE index in predicting disease severity. This cross-sectional study provides three relevant observations. Firstly, it suggests that there exists a proinflammatory state characterized by increased circulation of many inflammatory cytokines in stable COPD patients. Secondly, the most common clinical component associated with risk prediction in COPD is the severity of airflow obstruction as assessed by FEV1% predicted. 19 However, COPD, being a heterogeneous disease, has several extrapulmonary manifestations that are not related to lung function itself. Hence, it was thought worthwhile to combine different variables into a multidimensional index that captures the complexity of COPD, such as the BODE index, along with systemic biomarkers, in order to predict the risk of disease severity and progression more accurately than predictors based on clinical variables alone or any single biomarker.
Several previous studies have demonstrated only a weak correlation between PFTs, especially FEV1, and clinical outcomes including the severity of dyspnea and other symptoms, mortality, quality of life, and the frequency of exacerbations, in contrast to the BODE index. Ong et al 20 found that the BODE index showed a significant but weak relationship with the number of emergency visits and with FEV1. 21 Another study demonstrated the BODE index to be more useful than FEV1 for determining the severity of COPD exacerbations. 22 Similarly, in our study, we found that the BODE index correlated significantly with FEV1. The above observations 21,22,23 highlight the significance of the BODE index, as FEV1 alone does not capture the complexity of the disease.
Some of our findings are consistent with those reported in previous cross-sectional studies. Celli et al 5 have shown that BMI and BODE index are inversely related. Similar results were observed in our study, which showed a significant decrease in BMI as the BODE index score increased.
BMI plays an important role in defining the phenotypes of COPD patients; low BMI has been shown to be associated with the disease severity and is an independent predictor of mortality in COPD. 23,24 In the current study, BMI did not correlate with the inflammatory biomarkers studied, suggesting that the loss of body mass may not be mediated by systemic inflammation.
Our study demonstrated higher serum TNF-α, CRP, and IL-6 levels in COPD patients than in smoker controls. Several previous studies have also shown higher concentrations of these proinflammatory biomarkers in COPD patients than in healthy smokers and nonsmoker controls. 7,10,12 Bon et al reported a statistically significant association between the degree of emphysema shown by chest computed tomography and the serum levels of TNF-α and IL-6 in COPD subjects. 25 TNF-α is also known to stimulate the synthesis of IL-6, which plays an important role in COPD pathobiology. Hence, our findings are consistent with those of previous researchers. Elevated levels of CRP have been associated with impaired exercise capacity, severity of airway obstruction, and impaired quality of life in COPD patients. 10,12 Elevated serum IL-6 levels have been found in COPD patients compared to healthy smokers, 12,25,26 which is in agreement with our current findings. A previous study showed a relationship between elevated TNF-α levels and weight loss in COPD. 27 In our study, we did not find a correlation between TNF-α levels and BMI; however, we observed a significant correlation with the BODE index. Our findings are not fully in line with those of Sarioglu et al, 28 who observed no correlation between TNF-α or CRP levels and BMI; they also did not report a correlation between these biomarkers and the BODE index, which contrasts with our findings. Their failure to establish a significant correlation between the BODE index and systemic biomarkers may be attributable to the smaller sample size of their study compared with the larger number of patients included in ours. In the current study, CRP levels failed to show any significant correlation with the 6MWT and mMRC, whereas serum IL-6 concentrations exhibited significant correlations with the 6MWT, mMRC, and BODE index.
IL-6 is a proinflammatory biomarker that plays a key role in COPD pathobiology. It is released by the bronchial epithelium and by CD8+ and CD4+ T-cells. 29 Monocytes from COPD patients are thought to overreact and thereby release more IL-6 than those of healthy subjects. 30 In the current study, IL-6 concentrations were significantly higher in COPD patients than in healthy smokers, indicating its potential involvement in the inflammatory process of COPD. In agreement with our finding, Pinto-Plata et al 31 have also shown previously that serum IL-6 levels are elevated in patients with advanced-stage COPD. 32 In our study, serum IL-6 was a better marker than TNF-α and CRP with respect to the BODE index, as it showed significant correlations with the greatest number of functional components of the BODE index (BMI, 6MWT, FEV1, and dyspnea). Our findings contrast with the results of Gaki et al, 33 who studied the role of systemic biomarkers, including IL-6, in 222 stable COPD patients and could not find a significant association between IL-6 and the BODE index. Their failure to find a significant association might be related to the fact that they addressed age as a confounding factor in their study. The significant association of IL-6 with the BODE index and its components further supports the notion that a combination of systemic biomarkers with BODE index assessment can be more accurate in predicting disease severity and progression. IL-6 is thought to be directly involved in systemic inflammation and may be used as an additional parameter for risk assessment in patients with COPD.
Clinical implications of the current findings
Many systemic biomarkers predicting clinical outcomes, in terms of their relationship with mortality or exacerbations, have recently been reported in COPD. Very few studies have examined the predictive value of combining these biomarkers with the multidimensional BODE index in assessing disease severity. We have shown an association between the expression of the serum biomarkers and the integrated systemic manifestations of the disease as represented by functional capacity and the BODE index. The present analysis demonstrated that serum IL-6 concentration and the BODE index combined can be a powerful prognostic determinant of disease severity as well as its progression in patients with stable COPD. Of all the proinflammatory biomarkers studied, IL-6 correlated most accurately and significantly with the greatest number of components of the BODE index.
Strengths and limitations
Our study has several strengths and limitations that merit consideration. To date, only a few studies have highlighted a relationship between disease severity measured by the BODE index and proinflammatory biomarkers in COPD patients. To the best of our knowledge, our study provides the largest hospital-based cross-sectional investigation evaluating the relationship of systemic inflammatory biomarkers with the BODE index in a group of stable COPD patients while comparing the results with those of healthy smokers. Furthermore, we assessed a well-characterized, modest-sized, monocentric cohort of COPD patients seeking treatment in the respiratory division of a premier tertiary academic institution of North India.
Our study also has some potential limitations. Firstly, this is a descriptive study, which depicts only associations; hence, we acknowledge that our analyses and conclusions will need to be replicated, either prospectively in a study powered for these hypotheses or in other cohorts containing similar data. Secondly, the biology of the inflammatory response is complex and we studied only a small panel of biomarkers. We did not study markers of tissue repair, and it is likely that the balance between inflammation and repair is important for the pathobiology of COPD. Lastly, we had only a few women in our study, so we cannot generalize our findings to both sexes. Nevertheless, the findings of the current study need to be confirmed in analyses with a larger sample size and a long-term follow-up period, involving different populations.
Conclusion
In summary, serum IL-6 combined with the multidimensional BODE index appears to be an important and accurate biomarker for predicting disease severity in patients with stable COPD. In this setting, IL-6 correlated significantly with more components of the BODE index, as well as with the BODE index itself, than the other biomarkers studied. Thus, we speculate that this novel biomarker may enable determination of the severity and prediction of the course of the disease.

substantially as a coauthor by overseeing all data entry, helping in experimental and statistical analyses of the study, and revising the manuscript. GM contributed substantially as a coauthor by assisting in data collection, data entry, performing spirometry of the patients, and revising the manuscript. NK contributed substantially as a coauthor by designing the study, assisting in data collection, interpretation of the data, and revising the manuscript critically and approving the final version of the manuscript. SAH contributed substantially as a senior author by designing the study, providing supervision of the project, revising the manuscript, and approving the final version of the manuscript.
Disclosure
The authors report no conflicts of interest in this work. | 2018-04-03T05:46:47.679Z | 2016-11-18T00:00:00.000 | {
"year": 2016,
"sha1": "ac25003f21193dafeda76746f13fe37409575dde",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=33656",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13465c44dc660300648d81517782106aa6c25579",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239628223 | pes2o/s2orc | v3-fos-license | Compound Dihuang Granule Protects against 6-OHDA Induced Toxicity in Parkinson’s Disease Rats by Suppressing the Phosphorylation of MAPK/ERK1/2
Parkinson's disease (PD) is a multifactorial neurodegenerative disorder characterized by progressive loss of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNpc) and the presence of Lewy bodies (LBs) consisting of misfolded α-synuclein protein. Compound Dihuang Granule (CDG), a famous traditional Chinese medicine (TCM), has been clinically used in PD therapy with curative effects. However, its specific functions and mechanism of action remained unclear. This study explored the therapeutic effects and potential mechanisms of CDG in PD rats induced by 6-OHDA toxicity. Methods: The PD rat model was induced by unilateral stereotactic injection of 6-OHDA. The behavioral performances of rats were evaluated by the rotation test, muscle strength assessment, and balance beam walking test. The striatal contents of neurotransmitters were detected by HPLC. The numbers of dopaminergic (DA) neurons were determined with immunohistochemistry (IHC) staining and Western blotting. Indicators of oxidative stress were determined with colorimetric methods. Apoptotic cells were detected by TUNEL assay. The expression levels of neurotrophic factors were examined with IHC staining and real-time quantitative PCR. Related protein expression levels were determined with Western blotting.
Introduction
PD is the second most common neurodegenerative disease, affecting approximately 2% of people aged over 60 years worldwide [1]. PD is clinically manifested by static tremor, bradykinesia, rigidity, and abnormal posture [2], and pathologically characterized by progressive loss of DA neurons and deposition of α-synuclein-containing Lewy bodies (LBs) in the SNpc [3]. The etiology and pathogenesis of PD are complicated. So far, no treatment is available to effectively slow down or halt PD progression [3].
Levodopa is the only valid treatment reported to extend the life expectancy of PD patients [4]. However, with long-term use of the drug, the therapeutic effects become increasingly less beneficial, and more than 50% of patients eventually experience highly disabling fluctuations, dyskinesia, and agonist-induced sleep attacks [5,6]. Therefore, there is an urgent need to find alternative therapies with fewer toxic side effects.
A variety of intracellular processes are involved in the pathogenesis of PD, including mitochondrial dysfunction, oxidative stress, cell apoptosis, and neurotrophic factor deprivation [7,8]. Oxidative stress plays an undeniable role in the complex, progressing neurodegenerative cascade, and its inhibition attenuates DA neuron loss in PD models [7,8]. Neurotrophic factors are endogenous proteins promoting the survival of different neural cells [9], and their upregulation has been an effective approach in physical and medical treatments to protect against neurotoxicity-induced neurodegeneration in PD [10,11]. Apoptosis is one of the main mechanisms responsible for neuronal death in PD. Apoptosis is mediated by a number of initiator and executioner caspases and occurs via the intrinsic or extrinsic pathways [12]. The activation of MAPK/ERK1/2 has been commonly implicated in promoting neuroregeneration and cell apoptosis [13,14], in which the activation of CREB plays a synergistic role with the MAPK pathway [15,16].
Traditional Chinese medicine (TCM) is famous for its multidimensional clinical outcomes in medical treatments. CDG has been clinically used in PD therapy to improve motor and non-motor symptoms and to reduce the side effects of long-term levodopa usage. CDG was shown to alleviate excess levodopa-induced dyskinesia in a PD rat model [17]. In our previous study, CDG inhibited nigrostriatal pathway apoptosis in PD rats by suppressing the JNK/AP-1 pathway [18]. Moreover, verbascoside, one of the main extracts of Rehmannia glutinosa root (the sovereign drug of CDG), was effective in treating PD and increased the TH content of PD rats [19]. However, the protective effects and specific mechanisms of CDG in PD therapy remain to be further investigated.
In this study, the beneficial effects of CDG were documented in 6-OHDA-induced PD rats and the mechanisms of action were investigated. We evaluated the abnormal motor symptoms and the nigrostriatal loss of dopaminergic neurons of PD rats with or without CDG treatment. We found that CDG treatment significantly improved the 6-OHDA-induced motor disorders and brain injuries of PD rats.
Moreover, the 6-OHDA-induced oxidative stress and cell apoptosis were alleviated by CDG treatment. CDG also increased the protein expression of striatal neurotrophic factors. Furthermore, an inhibitor of MAPK/ERK1/2 was applied, and we found that CDG significantly suppressed MAPK/ERK1/2 phosphorylation in the striatum of PD rats. These findings elucidate that CDG treatment could improve the parkinsonian phenotype of 6-OHDA-lesioned rats by suppressing MAPK/ERK1/2 phosphorylation.

Rats in the CDG group were intragastrically given 7 g/kg/d CDG (1 mL/100 g), and 10% DMSO was injected intraperitoneally. Rats in the Madopar group were gavaged with 150 mg/kg Madopar, and 10% DMSO was injected intraperitoneally. Rats in the SL327 group were intraperitoneally given 25 mg/kg SL327 solution (dissolved in 10% DMSO) and gavaged with 1 mL/100 g of saline. Rats in the CDG + SL327 group were intraperitoneally given 25 mg/kg SL327 immediately after gavage with 7 g/kg/d CDG.
Rats in the sham group and Model group were gavaged with 1 mL/100 g of saline and intraperitoneally given 10% DMSO solution. The intraperitoneal injection volume was 0.1 mL/100 g, twice weekly for 6 weeks.
Rotation test
Two weeks after the operation, the rats' contralateral rotations induced by Apomorphine (APO) were measured and recorded with a video camera at 2 weeks, 4 weeks and 6 weeks. The duration of each recording time was 30 min. Rats with a rotating frequency of over 7 turns per minute were included in the PD model [20].
Muscle strength assessment
A wire rope (length 100 cm, diameter 0.15 cm) was fixed at both ends, 70 cm above the ground, with a sponge pad (5 cm thick) placed below to prevent rats from falling. During the test, the rat's two front paws were placed on the wire rope and then released; the rat's behavior was observed and its suspension time recorded. Muscle strength was scored as follows: 3 points, hanging on the rope for more than 5 s with the hind limbs also placed on the rope; 2 points, hanging on the rope for more than 5 s; 1 point, hanging on the rope for 3-4 s; 0 points, hanging on the rope for 0-2 s.
Balance beam walking test
The rats walked the entire length of a standard balance beam (80 cm in length, 2.5 cm in width, 100 cm off the floor) steadily without falling off [21,22]. Briefly, a subjective observation was conducted for 60 seconds. A score of 0 indicates stable balance; a score of 4 indicates falling off. The test was performed in triplicate at weeks 0, 2, 4, and 6, and the average balance score of each rat was calculated.
Western blotting analysis
The striatum tissues were lysed in T-PER Tissue Protein Extraction Reagent (Thermo Scientific, USA) containing complete protease inhibitor. Protein concentrations were measured using a BCA kit (Beyotime, Shanghai, China). Forty micrograms of protein from each group was separated on 10% SDS-PAGE gels and subsequently transferred onto a PVDF membrane (0.45 μm, EMD Millipore, MA, USA). BSA (3%; Sigma-Aldrich, MO, USA) was used to block the membranes for 2 h at room temperature (RT). The membranes were then incubated with primary antibodies against TH, Bax, Bcl-2, ERK1/2, p-ERK1/2, CREB, and p-CREB (CST, MA, USA) overnight at 4 °C. After the membranes were washed three times with Tris-buffered saline containing 0.1% Tween-20 (TBST), they were incubated with the horseradish peroxidase-conjugated secondary antibody for 1 h at RT; mouse monoclonal β-actin and GAPDH antibodies (Proteintech, Rosemont, USA) were used as loading controls. After the final wash, signals were detected using the LI-COR Odyssey infrared laser imaging system (CLx-1259, LI-COR Biosciences, USA), and ImageJ software was used to analyze band optical density.
Detection of mRNA expression by real-time fluorescence quantitative PCR
Total RNA from the rats in each group was extracted with RNAiso Plus and reverse-transcribed into cDNA with a reverse transcription kit (Takara, Beijing, China) according to the manufacturer's protocol. RT-PCR was performed on an ABI StepOnePlus real-time fluorescent quantitative PCR system (ABI, USA).
Taking β-actin as the endogenous reference, the relative amount of mRNA was determined with the 2^(−ΔΔCT) method. The primer sequences for TH, GDNF, BDNF, and NGF are listed in Table 2.
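For reference, the 2^(−ΔΔCT) calculation normalizes the target gene's threshold cycle (Ct) to the endogenous reference (here β-actin) and then to the mean of the control group. A minimal sketch with made-up Ct values (not data from this study) is shown below:

```r
# Hypothetical Ct values for one target gene (e.g., BDNF) and beta-actin
ct_target_control <- c(27.9, 28.1, 28.3)   # control (sham) animals
ct_actin_control  <- c(17.0, 17.2, 17.1)
ct_target_treated <- c(26.3, 26.6, 26.1)   # treated animals
ct_actin_treated  <- c(17.1, 16.9, 17.2)

# Step 1: delta Ct = Ct(target) - Ct(reference) for each animal
dct_control <- ct_target_control - ct_actin_control
dct_treated <- ct_target_treated - ct_actin_treated

# Step 2: delta-delta Ct = delta Ct - mean delta Ct of the control group
ddct <- dct_treated - mean(dct_control)

# Step 3: relative expression (fold change versus control)
rel_expression <- 2^(-ddct)
round(rel_expression, 2)
```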
Immunohistochemistry (IHC)
The 20-μm-thick slices of rat brain tissue from each group were selected to be as similar as possible. Frozen slices were subjected to citrate buffer (0.1 M, pH 6.0) at 95 °C for 10 min for antigen retrieval. After the tissue was washed three times with phosphate-buffered saline containing 0.2% Tween-20 (PBST) for 10 min, the sections were treated with 0.5% Triton X-100 for 10 min and blocked with 5% bovine serum albumin (BSA) for 1 h at RT. After blocking, the sections were incubated with anti-mouse TH (Cambridge, MA, USA), prepared in PBST (0.5% Triton X-100)/1% sheep serum, at 37 °C for 2 h and then at 4 °C overnight. The samples were then incubated with the horseradish peroxidase-conjugated secondary antibody (Cambridge, MA, USA) for 1 h and developed with 3,3'-diaminobenzidine (DAB) for 2-3 min. Finally, the sections were cover-slipped with neutral balsam and observed with an Olympus BA51 photomicroscope (Tokyo, Japan). Image Pro Plus 6.0 software (Media Cybernetics, MD, USA) was used for cell counting. For the detection of neurotrophic factors, 3.5-μm-thick paraffin sections were mounted on glass slides and baked for 1 h at 62 °C, after which they were deparaffinized and endogenous peroxidase activity was quenched. The primary antibodies against NGF (Abcam, UK), BDNF (Abcam, UK), and GDNF (Abcam, UK) were incubated on the slides for 12 h at 4 °C. After rinsing three times with phosphate-buffered saline solution containing Tween, the horseradish peroxidase-conjugated secondary antibody (Huaan, Hangzhou, China) was incubated for 20 min at RT, and staining was visualized after incubation with 3,3'-diaminobenzidine for 10 min at RT. The sections were then counterstained with hematoxylin to mark the nuclei. Finally, the binding sites were sealed with neutral resin. Images were obtained with a 20× objective lens. The numbers of positive cells were counted with ImageJ software.
Immuno uorescence staining
Brain frozen slices with 20μm thick from each group were washed with PBS ve times for 3 min each.
Then, 0.5% (wt/vol) Triton X-100 and blocking serum were added successively and incubated for 10 min and 1.5 h, respectively. The tissue was incubated in primary antibody, anti-mouse TH (Cambridge, MA, USA), at 4 ℃ overnight. After being washed four times with PBS, the sections were incubated with the secondary antibody (Alexa Fluor 488; A-11007, Alexa Fluor 555; Invitrogen, CA, USA) for 1h at RT and protected from light. Images were obtained with confocal microscopy. The number of positive cells was calculated with Image J software.
Measurement of oxidative stress
Rats from each group were anesthetized with pentobarbital sodium (50 mg/kg) and decapitated, and their brains were removed; the right substantia nigra was then dissected out and weighed. Briefly, 0.5 g of brain tissue was rinsed in cold saline to remove blood, blotted dry with filter paper, and placed in a 5 mL beaker. Cold 0.9% saline (0.65 mL) was added to the beaker and the brain block was cut up with ophthalmic scissors as quickly as possible. The brain tissue suspension was then transferred to a homogenization tube, 0.3 mL of cold 0.86% saline was added, and the tissue was homogenized for 3-5 min to prepare a 10% brain tissue homogenate, which was centrifuged at 12,000 ×g for 10 minutes at 4 °C. All of the above steps were carried out on ice. Appropriate amounts of the supernatant were taken for SOD, MDA, GSH, and GSH-Px detection, performed strictly according to the instructions of the assay kits from the Nanjing Institute of Biological Engineering (Nanjing, China).
TUNEL assay
Appropriate brain slices from each group of rats were selected from the in situ hybridization protection solution. TUNEL staining was performed as described previously according to the manufacturer's protocol with minor modifications. Briefly, the TUNEL assay was performed on 20-μm-thick frozen sections using an in situ cell death detection kit (Roche, Basel, Switzerland). All images were acquired using a confocal microscope (Leica TCS SP2, Solms, Germany). Nuclei were stained with DAPI (blue), and apoptotic cells appeared green. Image Pro Plus 6.0 software (Media Cybernetics, MD, USA) was used for cell counting.
Statistical analysis
Experimental data are expressed as mean ± standard error of the mean (mean ± SEM). Two groups were compared using the t-test, and multiple groups were analyzed by one-way ANOVA or two-way ANOVA followed by Tukey's multiple comparison post hoc test. Differences were considered statistically significant at P < 0.05.
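As an illustration of this analysis scheme, the sketch below runs a one-way ANOVA with Tukey's post hoc test in base R on hypothetical rotation counts for four treatment groups (the values are invented, not data from this study):

```r
set.seed(2)

# Hypothetical apomorphine-induced rotation counts per 30 min for four groups of 9 rats
group     <- factor(rep(c("Sham", "Model", "Madopar", "CDG"), each = 9))
rotations <- c(rnorm(9, 2, 1), rnorm(9, 240, 30), rnorm(9, 150, 25), rnorm(9, 140, 25))

# One-way ANOVA across the four groups
fit <- aov(rotations ~ group)
summary(fit)

# Tukey's multiple comparison post hoc test for pairwise group differences
TukeyHSD(fit)
```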
CDG ameliorated behavioral symptoms of 6-OHDA induced PD rats
With induction by APO, no rotational behavior was observed throughout the test in sham-operated rats, while the number of rotations of PD rats significantly increased after surgery and then gradually decreased week by week (P<0.001). The number of rotations in the Madopar group decreased significantly compared with the 6-OHDA-lesioned group (P<0.01) at 6 weeks. With 4 weeks and 6 weeks of treatment, the number of rotations in the CDG group and Madopar group significantly decreased compared with the Model group (P<0.001, Fig. 3). There was no significant difference in the number of rotations between the Madopar group and the CDG group (P>0.05). These results suggest that CDG reduced motor dysfunction in PD rats.
CDG attenuated nigrostriatal dopamine loss of PD rats
Loss of striatal DA and its metabolites is closely related to the dyskinesia of PD. In this study, the striatal contents of the neurotransmitter DA and its intermediate metabolites, including DOPAC and HVA, were determined with HPLC. Compared with the Sham group, 6-OHDA toxicity induced a significant reduction of DA, DOPAC, and HVA levels in the striatum of Model rats. Compared with the Model group, the striatal contents of DA, DOPAC, and HVA all increased significantly in both the Madopar and CDG groups (P<0.01, Fig. 4 A-C); however, there was no difference between the two groups (P>0.05). These results showed that CDG treatment increased the contents of neurotransmitters, including DA, DOPAC, and HVA, in the striatum of 6-OHDA-induced PD rats.
To examine the DA neuronal injuries in the SNpc of PD rats, the protein expression of TH in the striatum was determined. Compared with the sham-operated group, the expression of TH protein in the striatum of PD rats was significantly decreased (P<0.01). With 6 weeks of treatment with Madopar or CDG, the protein expression levels of TH were significantly increased in the striatum of PD rats (P<0.05 and P<0.01, Fig. 4 D-E). However, there was no significant difference between the Madopar and CDG groups (P>0.05). These results suggested that CDG increased TH protein expression in the nigrostriatal pathway of PD rats.
CDG attenuated 6-OHDA induced loss of nigrostriatal DA neurons in the PD rats
Immunohistochemistry staining was used to evaluate the injuries of nigrostriatal DA neurons (Fig. 5 A-C). The number of DA neurons in the SNpc of Model rats significantly decreased compared with the Sham group. Compared with the Model group, the number of DA neurons increased significantly in the SNpc of rats with Madopar or CDG treatment (P<0.05). In addition, the density of TH neuronal terminals in the striatum of rats was calculated. With 6-OHDA toxicity, the TH average optical density (AOI) of the striatum significantly decreased in the Model rats, while both CDG and Madopar treatment improved the density of TH neuronal terminals in the striatum of PD rats. However, there was no significant difference between the two groups (P>0.05). These results demonstrated that CDG treatment could attenuate the 6-OHDA-induced DA neuronal injuries.
CDG alleviated 6-OHDA induced oxidative stress in the striatum of PD rats
Compared with the Sham group, the SOD content and the GSH and GSH-Px activities in the striatum of the Model group decreased significantly, while the MDA level significantly increased. Compared with the Model group, the MDA level of rats in the CDG and Madopar groups significantly decreased, while the SOD content and the GSH and GSH-Px activities significantly increased (Fig. 6 A-D). However, there was no significant difference between the CDG and Madopar groups (P>0.05). These results showed that CDG alleviated oxidative stress in PD rats.
CDG increased the expression of neurotrophic factors in the SNpc of PD rats
The protein expression levels of neurotrophic factors, including NGF, BDNF, and GDNF, were determined with immunohistochemical staining in the SNpc of PD rats, and cell counting was conducted. Compared to the Sham group, the number of NGF-positive cells was significantly decreased in the SNpc of the Model group (Fig. 7 A), while both Madopar and CDG treatment efficiently rescued this decline, as indicated by the statistical analyses (Fig. 7 B). Moreover, the increase in NGF-positive cells was more significant in the CDG group than in the Madopar group when each was compared to the Model group (P<0.01 and P<0.001), although the changes in the two treatment groups were comparable. Changes in BDNF- and GDNF-positive cells in the SNpc of the four groups were consistent with the NGF expression (Fig. 7 A, C&D). The results suggest that CDG treatment significantly increased the protein expression levels of the neurotrophic factors NGF, BDNF, and GDNF in the nigrostriatal pathway of 6-OHDA-induced PD rats.
CDG reduced 6-OHDA induced cell apoptosis in the nigrostriatal pathway of PD rats
In view of the decreased oxidative stress injuries and increased expression of neurotrophic factors in the brains of CDG-treated rats, we asked whether CDG could reduce 6-OHDA-induced cell death. Apoptotic cells in the SNpc were examined with the TUNEL assay (Fig. 8 A-B). With co-staining for TH (red) protein, the apoptotic neurons in the SNpc of rats were marked by the TUNEL (green) assay. Compared with the Sham group, the number of apoptotic neurons significantly increased in the SNpc of Model rats (P<0.001); however, this increase was significantly attenuated by Madopar treatment (P<0.05, Model vs. Madopar). Moreover, CDG treatment showed effects comparable to Madopar and was even more significant in reducing the apoptotic neurons in the SNpc (P<0.01, Model vs. CDG).
Meanwhile, the protein expression levels of Bcl-2 and Bax in the striatum of rats were determined with Western blotting (Fig. 8 C-D). Compared with the Sham group, Bax protein significantly increased while Bcl-2 protein decreased in the striatum of the Model group. Consistently, the ratio of Bcl-2/Bax protein expression significantly decreased in the striatum of the Model group (P<0.001). Corresponding to the number of apoptotic neurons in the SNpc, the changes in Bax and Bcl-2 protein induced by 6-OHDA toxicity were both attenuated with Madopar and CDG treatment (P<0.05 and P<0.01, respectively). Together, the results showed that CDG treatment could suppress the apoptosis of dopaminergic neurons in the nigrostriatal pathway of 6-OHDA-induced PD rats.
The parallel effects between CDG and SL327 in 6-OHDA induced PD rats
Neuroregeneration and cell apoptosis have been widely reported to be regulated by MAPK/ERK1/2 phosphorylation [23][24][25]; thus, we further investigated whether CDG treatment delivered its protective effects by regulating the MAPK/ERK1/2 pathway by introducing the MAPK/ERK inhibitor SL327.
The behavioral performances of rats were examined, including the apomorphine-induced rotation test, muscle strength assessment, and balance beam walking test. SL327 or CDG alone each significantly alleviated the abnormal rotation behaviors of Model rats after 2 weeks, 4 weeks, and 6 weeks of treatment (P<0.05), while their combined use showed more prominent effects (SL327+CDG vs. SL327 or CDG, P<0.05; Fig. 9 A-C). In the balance beam walking test, compared with the Sham group, all other groups of rats spent much more time crossing the beam, suggesting motor deficits induced by unilateral injection of 6-OHDA into the SNpc. However, compared with the Model group, SL327, CDG, and their combined use all showed improving effects after 6 weeks of treatment (P<0.05 each), although the improvements among the three groups were comparable (Fig. 9 B). In the muscle strength assessment, the score of Model rats significantly decreased compared to the Sham rats after the 6-OHDA injection. CDG significantly improved the performance of Model rats with 2, 4, and 6 weeks of treatment, while SL327 showed no effect either alone or combined with CDG (P<0.001) (Fig. 9 C).
To further evaluate the effects of CDG on suppressing MAPK/ERK phosphorylation, the striatal expression levels of TH were determined with Western blotting and real-time quantitative PCR (Fig. 9 D-F). As expected, compared to the Sham group, striatal TH protein significantly decreased in the Model rats (Sham vs. Model, P<0.001); however, SL327, CDG, and their combined use all attenuated the loss of TH protein (each vs. Model, P<0.05), and there was no difference among the three groups (Fig. 9 E-F). Consistently, the striatal TH mRNA levels showed the same changes as the TH protein expression. Altogether, CDG and SL327 both significantly improved the motor symptoms and striatal DA loss of 6-OHDA-induced PD rats, and their combined use showed even greater effects, suggesting that CDG shares protective effects with the MAPK/ERK1/2 phosphorylation inhibitor.
CDG inhibited the phosphorylation of MAPK/ERK1/2 induced by 6-OHDA toxicity
Protein expression levels of the upstream regulatory MAPK/ERK1/2 and CREB proteins were determined (Fig. 10 A-C). The striatal protein expression levels of ERK and CREB were similar among all groups. Compared with the Sham group, the phosphorylation of ERK and CREB both significantly increased in the striatum of Model rats, and this increase was significantly inhibited by SL327 treatment. CDG treatment showed similar effects in suppressing ERK and CREB phosphorylation. Moreover, the combined use of CDG and SL327 enhanced the inhibition of CREB phosphorylation compared with SL327 alone, indicating synergistic effects of CDG with SL327. Therefore, the results showed that CDG could suppress the phosphorylation of MAPK/ERK1/2 and CREB and enhance the inhibitory effects of the MAPK/ERK1/2 inhibitor SL327.
The mRNA expression levels of the neurotrophic factors in the striatum of rats were further examined. Corresponding to the protein expression levels, the striatal mRNA levels of NGF, BDNF, and GDNF in the Model group all significantly decreased compared with the Sham group, and this decrease was significantly rescued by SL327 or CDG treatment. However, the combined use of both showed no better effect than either alone (Fig. 10 D-F). Taken together, the MAPK/ERK1/2 signaling pathway is involved in the protective effects of CDG treatment against 6-OHDA toxicity.
Discussion
The incidence of PD is increasing greatly worldwide, and a large number of studies have shown that TCM treatment can not only improve clinical efficacy but also reduce the side effects of chemically synthesized drugs in PD therapy [26]. CDG has been clinically applied in PD treatment and significantly improves the UPDRS score of PD patients compared to patients receiving Madopar alone [27]. In our previous studies, CDG was shown to alleviate excess levodopa-induced dyskinesia in a PD rat model [17], and CDG inhibited nigrostriatal pathway apoptosis in PD rats by suppressing the JNK/AP-1 pathway [18]. In this paper, the protective effects of CDG in PD were systematically studied in a 6-OHDA toxicity-induced PD rat model. We obtained experimental evidence that CDG attenuated nigrostriatal DA loss, improved oxidative stress, inhibited cell apoptosis, and increased the expression of neurotrophic factors in the 6-OHDA-induced PD rats, demonstrating the protective role of CDG in PD therapy, which was mainly mediated by suppressing the phosphorylation of MAPK/ERK1/2.
Herein, the protective effects of CDG for PD were demonstrated in the rat model induced by unilateral injection of 6-OHDA into the medial forebrain bundle and SNpc. 6-OHDA selectively damages dopaminergic neurons and produces prolonged injuries, and its toxicity is frequently used to construct in vivo and in vitro Parkinson's disease models; for example, the protective effects of L. stoechas methanol extract were investigated against 6-OHDA-induced cytotoxicity and oxidative damage in PC12 cells [28]. In this study, the PD rat model with prolonged DA neuronal damage allowed the delivery of a 6-week or even longer treatment, during which the rotational behaviors and loss of DA neurons were alleviated, and the DA neurotransmitter and its metabolites, including DOPAC and HVA, were significantly increased by CDG treatment. Therefore, the therapeutic effects of CDG in alleviating parkinsonian features were well demonstrated.
Madopar was applied in this study as a positive control, and its 6-week treatment delivered similar protective effects in the 6-OHDA-induced PD rats. Madopar, consisting of two active ingredients, levodopa and benserazide, has been clinically applied in PD therapy to improve clinical symptoms [29]. In this model, the rotational behaviors and loss of DA neurons were alleviated, and the DA neurotransmitter and its metabolites, including DOPAC and HVA, were significantly increased by both Madopar and CDG treatment. The improvements in PD rats were even more significant with CDG treatment than with Madopar treatment in the examination of behavioral performance and cell apoptosis. Therefore, CDG showed therapeutic effects comparable to Madopar in the 6-OHDA-induced PD rats.
Overwhelming evidence suggests that oxidative stress plays a vital role in the degeneration of dopaminergic neurons, and its suppression directly protects DA neurons in the midbrain [7]. In this study, 6-OHDA induced an increase in MDA and a decrease in antioxidants, including SOD, GSH, and GSH-Px, in the striatum; however, this induced oxidative stress was reversed by CDG treatment, suggesting the protective effects of CDG against oxidative stress injuries. Neurotrophic factors (NTFs) can reduce neuronal apoptosis and promote neurite regeneration [30], and BDNF promotes the survival, differentiation, and growth of DA neurons [31]. In this study, CDG significantly increased the numbers of NGF-, BDNF-, and GDNF-positive neurons in the SNpc of PD rats and also increased the mRNA expression levels of these NTFs in the striatum, thereby revealing increased expression of neurotrophic factors in the brain as one of the neuroprotective mechanisms of CDG. The Bcl-2 protein family, including the pro-apoptotic proteins (such as Bax) and anti-apoptotic proteins (such as Bcl-2), plays a vital role in the process of apoptosis [32]. With exposure to oxidative stress, pro-apoptotic proteins translocate to the outer membrane of mitochondria, triggering the release of apoptosis-inducing factors and thus inducing apoptosis [33]. The number of TUNEL-positive neurons in the SNpc of 6-OHDA-induced PD rats was significantly decreased by CDG treatment, along with an increased ratio of Bcl-2/Bax protein expression, suggesting a protective role of CDG against cell apoptosis. However, the specific mechanisms remained undefined.
The molecular mechanisms involved in the protective effects of CDG were further explored. By introducing the inhibitor of the MAPK/ERK1/2 pathway, the role of CDG in regulating ERK phosphorylation was studied. CDG significantly inhibited the phosphorylation of ERK protein and of the upstream molecule CREB in the striatum of PD rats, similar to the effects of the ERK inhibitor SL327, demonstrating the suppression of MAPK/ERK1/2 by CDG in 6-OHDA-induced injuries. Moreover, the effects of CDG were compared with SL327 by examining the behavioral performances and brain injuries of PD rats. SL327 also showed protective effects in attenuating the motor deficits and striatal TH loss of PD rats. However, CDG treatment alone and the combined use of CDG and SL327 both showed better effects than SL327 alone in improving the rotational behaviors of rats. Therefore, CDG played a synergistic role with SL327 in alleviating the motor symptoms of 6-OHDA-induced PD rats. Moreover, CDG alone showed better effects than SL327, indicating that CDG orchestrates more than the MAPK/ERK1/2 pathway to deliver its protective effects on 6-OHDA-induced brain injuries, which is consistent with the multifaceted actions of TCM. In conclusion, CDG treatment can alleviate DA neuron loss and motor deficits of PD by suppressing phosphorylation of the MAPK/ERK1/2 signaling pathway.
Conclusions
In summary, this study reveals that CDG, a compound traditional Chinese medicine, protected against parkinsonian pathologies in 6-OHDA-induced PD rats, improving the neurobehavioral performance of PD rats and attenuating the nigrostriatal loss of DA neurons. Moreover, the oxidative stress and cell apoptosis induced by 6-OHDA toxicity were also ameliorated by CDG treatment. CDG increased the expression of neurotrophic factors while inhibiting the phosphorylation of MAPK/ERK1/2 and CREB proteins in the nigrostriatal pathway of PD rats, implicating the potential mechanisms of action. Collectively, these results provide evidence for the protective role of CDG treatment in PD therapy, which extends our understanding of the treatment of neurodegenerative diseases with traditional Chinese medicine.
(Figure legend fragment: statistical analysis was performed with repeated measures and multivariate analysis of variance (ANOVA), n = 9; significant differences are indicated by * P < 0.05, ** P < 0.01, *** P < 0.001.)
Supplementary Files
This is a list of supplementary files associated with this preprint. Click to download. | 2021-10-22T15:48:27.198Z | 2021-09-02T00:00:00.000 | {
"year": 2021,
"sha1": "223a4d0a7252411748353b2f8dd014f78e2e2a50",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21203/rs.3.rs-847269/v1",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9bdee95c0462f76690e98de58dbd4dd1b60eacc2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
268231122 | pes2o/s2orc | v3-fos-license | Resistance Gene Association and Inference Network (ReGAIN): A Bioinformatics Pipeline for Assessing Probabilistic Co-Occurrence Between Resistance Genes in Bacterial Pathogens
The rampant rise of multidrug resistant (MDR) bacterial pathogens poses a severe health threat, necessitating innovative tools to unravel the complex genetic underpinnings of antimicrobial resistance. Despite significant strides in developing genomic tools for detecting resistance genes, a gap remains in analyzing organism-specific patterns of resistance gene co-occurrence. Addressing this deficiency, we developed the Resistance Gene Association and Inference Network (ReGAIN), a novel web-based and command line genomic platform that uses Bayesian network structure learning to identify and map resistance gene networks in bacterial pathogens. ReGAIN not only detects resistance genes using well-established methods, but also elucidates their complex interplay, critical for understanding MDR phenotypes. Focusing on ESKAPE pathogens, ReGAIN yielded a queryable database for investigating resistance gene co-occurrence, enriching resistome analyses, and providing new insights into the dynamics of antimicrobial resistance. Furthermore, the versatility of ReGAIN extends beyond antibiotic resistance genes to include assessment of co-occurrence patterns among heavy metal resistance and virulence determinants, providing a comprehensive overview of key gene relationships impacting both disease progression and treatment outcomes.
Introduction
Increasing rates of multidrug resistance in clinically important bacterial pathogens represent a monumental threat to global human health (1−4). Among antibiotic resistant (AR) bacteria, those classified as multidrug resistant (MDR) represent the most challenging health threat (5,6), as they can encode extensive genetic machinery to evade elimination by anti-infective agents (7−9), thereby significantly limiting treatment options. Continued increases in rates of MDR bacteria have historically been attributed to improper management of antibiotics, including misuse of antibiotics in treating non-bacterial infections and antibiotic overuse in agriculture (10−13). Moreover, the increase in multidrug resistance is significantly driven by the dissemination and acquisition of resistance genes within microbial communities through horizontal gene transfer (14−21). Indeed, many multidrug resistance gene operons are flanked by transposable elements, further indicating the mobility of not just small operons, but large genomic islands containing extensive resistance gene diversity (7,22−25).
The ability of bacteria to easily exchange antibiotic resistance genes (ARGs) makes antibiotic and multidrug resistance an ever-evolving problem (10,26). Furthermore, exposure to heavy metals in the environment, combined with genetically encoded heavy metal resistance genes (HMGs), could aid in the transfer of ARGs through co-selection (27−29). To better monitor the spread and dissemination of resistance genes, careful curation of genomic datasets, including isolation source, region, and date, should be a priority.
With advancements in whole-genome sequencing, a wealth of bacterial genomic data has become available, offering unprecedented opportunities to understand the evolution of multidrug resistance and map common patterns of resistance gene co-occurrence in problematic bacterial species. This is especially important in monitoring patterns of co-occurrence with genes conferring resistance to last-line antibiotics, such as the mcr-family of colistin resistance genes (16,30,31). Though powerful tools have been developed to identify antibiotic resistance genes, e.g., AMRfinderPlus and ResFinder (32,33), these tools are predominantly confined to gene identification, overlooking the complex interplay between genes that contribute to the MDR phenotype. Several in silico methods designed to expand the analysis of resistome data have been published. However, these methods generally require researchers to have extensive computational experience (27,34,35) or focus on resistance gene abundance within metagenomic datasets rather than organism-specific patterns of resistance gene co-occurrence (35,36). Currently, to the best of our knowledge, there is not a publicly available bioinformatic platform designed to measure antibiotic resistance gene co-occurrence.
In response to the urgent need to not only identify resistance genes but also catalog patterns of resistance gene co-occurrence, we developed the Resistance Gene Association and Inference Network (ReGAIN) genomic pipeline (Figure 1). Leveraging the foundational capabilities of the National Center for Biotechnology Information (NCBI) software AMRfinderPlus (32), ReGAIN's core pipeline employs a robust Bayesian network structure learning approach to elucidate probabilistic patterns of resistance or virulence gene co-occurrence indicative of multidrug resistance in microbes. The ReGAIN pipeline is not limited to resistance genes and extends to virulence determinants through a parallel pathway, thus providing a comprehensive view of microbial defense mechanisms. Designed to be flexible and user-friendly, the ReGAIN web application simplifies the bioinformatic workflow into two core modules: data acquisition and Bayesian network analysis. Users are directed to upload genome files in FASTA format to the data acquisition module, specify an analytical focus (resistance or virulence), and initiate the analysis through a simple submission form. Using externally prepared data or files generated from the data acquisition module, users can then upload data to the Bayesian network analysis module. This module provides interactive Bayesian networks and probabilistic measurements, like conditional probability and relative risk ratios, along with confidence intervals and standard deviation. Further distinguishing the ReGAIN pipeline are two novel post-hoc analyses that compute the bidirectional strength of gene co-occurrence, which offer a quantitative method to explore and interpret asymmetry in gene-gene relationships. These metrics enhance our understanding of resistance gene networks, offering new insights into the dynamics of multidrug resistance. For researchers who prefer to have more control over the analyses, ReGAIN is additionally being developed as a command line program.
As an initial step towards identifying patterns of resistance gene co-occurrence, Bayesian networks were constructed using publicly available genomic data for a diverse set of common, clinically important bacterial pathogens. These included Escherichia coli and Enterococcus faecalis and the ESKAPE pathogens Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp., which are responsible for the majority of nosocomial infections worldwide and are often resistant to multiple antibiotics (1,6,8,13,37−39). Our large-scale Bayesian network analyses offer valuable insight into general patterns and pairwise probabilities of resistance gene co-occurrence among pathogens and provide a platform that can also reveal patterns of heavy metal resistance and virulence gene co-occurrence.
Overview of ReGAIN
The ReGAIN platform is comprised of two primary modules: a data acquisition and dataset construction module and a Bayesian network structure learning module. The front-end input of the ReGAIN web server data acquisition module allows users to upload assembled bacterial genomes in FASTA format.
After genome submission, users are directed to select an organism-specific or non-specific pipeline.
The organism-specific pipeline takes advantage of the AMRfinderPlus software (32), which analyzes the genomic dataset for organism-specific resistance-conferring point mutations and genes. Currently, 1).
Additional options allow users to set a minimum/maximum gene occurrence threshold, as the exclusion of very low or high abundance genes is an important step in reducing network noise. Because discrete Bayesian network analyses require variables to exist in at least two states within a dataset, it is necessary to exclude ubiquitously occurring genes from the analysis. The ReGAIN data acquisition workflow creates a presence/absence matrix of genes across the genomic population ('1' = present, '0' = absent). Thus, for ReGAIN datasets, any given gene must occur in both the 'present' and 'absent' states. To exclude ubiquitously occurring genes while still allowing user-end flexibility, ReGAIN permits users to set the minimum/maximum values themselves; for n genomes, the maximum value for any given gene can be at most n − 1. From here, users may choose to assess their genomes for resistance or virulence genes. Resistance genes include antibiotic and heavy metal resistance genes, as well as multiple stress response genes.
Using the NCBI AMRfinderPlus software, user-uploaded genomic datasets can be assessed for the presence of core antibiotic resistance genes (organism non-specific analysis) or for core resistance genes along with species-specific point mutations and virulence genes (organism specific analysis).
AMRfinderPlus utilizes both Hidden Markov Models and manually curated BLAST cutoffs to identify resistance and virulence genes based on the large Reference Gene Catalog consisting of over 6,000 genes (32). Identification of genes and creation of the data files required for downstream analysis are automated within the data acquisition workflow. First, the distribution of genes across the genomic dataset is reduced to a binary format and a presence/absence data matrix is built. This data matrix is then curated based on the user-input minimum/maximum gene occurrence values. Finally, a metadata file is generated, which contains a list of all genes identified in the analysis, independent of minimum/maximum gene values. For resistance genes, additional information associated with gene function, such as resistance type or gene class (i.e., aminoglycoside, sulfonamide, β-lactamase, etc.), is included. Output files from the data acquisition module include results files for each uploaded genome, an initial data matrix, a curated final data matrix, and a metadata file. From here, the final data matrix and metadata file can be submitted to the Bayesian network structure learning module.
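To make the matrix-construction step concrete, the sketch below builds a toy presence/absence matrix from per-genome gene lists and applies a minimum/maximum occurrence filter. The gene names, genome names, and thresholds are hypothetical, and the code is an illustrative analogue of this step rather than the actual ReGAIN implementation:

```r
# Hypothetical AMRfinderPlus-style hits: one character vector of gene symbols per genome
hits <- list(
  genome1 = c("blaKPC-2", "aac(6')-Ib", "sul1"),
  genome2 = c("blaKPC-2", "sul1", "tet(A)"),
  genome3 = c("aac(6')-Ib", "tet(A)"),
  genome4 = c("blaKPC-2", "aac(6')-Ib", "sul1", "tet(A)")
)

# Build the binary presence/absence matrix (rows = genomes, columns = genes)
all_genes <- sort(unique(unlist(hits)))
mat <- t(sapply(hits, function(g) as.integer(all_genes %in% g)))
colnames(mat) <- all_genes

# Filter genes by user-defined minimum/maximum occurrence counts;
# the maximum is capped at n - 1 so every retained gene is absent in at least one genome
n_genomes <- nrow(mat)
min_occ <- 2
max_occ <- n_genomes - 1
keep <- colSums(mat) >= min_occ & colSums(mat) <= max_occ
final_matrix <- mat[, keep, drop = FALSE]

final_matrix
```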
Bayesian Network Structure Learning Analysis
The Bayesian network analysis module has been optimized to handle both very large datasets (≥ 100 genes) and small-to-moderate sized datasets (<100 genes). The module's backend employs the bnlearn (40) and gRain (41) packages in R. Given the computational complexity associated with Bayesian network queries, two pipelines were designed based on the size of the input dataset to optimize performance and accuracy. For small-to-moderate size datasets, ReGAIN utilizes the gRain 'querygrain' function. This function is adept at handling smaller networks where exhaustive computations can be performed without prohibitive computational costs and full graph inference is manageable, thus allowing direct computation of conditional dependencies. Conversely, for very large datasets, we leverage 'cpquery', a function designed for large-scale Bayesian networks. As the number of variables increases, the computational requirements for direct calculations increase exponentially, quickly resulting in intractable computation costs. 'cpquery' addresses this issue by estimating conditional probabilities through Monte Carlo simulations that scale linearly with the number of variables, thereby facilitating the analysis of large genomic networks (40,42,43). Altogether, these methods ensure that our pipeline remains computationally feasible while maintaining a high degree of accuracy in the probabilistic assessment of gene co-occurrence. Although the Bayesian network analysis module is designed to interface with the results output from the data acquisition module, instructions on how to format externally prepared datasets are provided for users to run this module independently of the Data Acquisition pipeline (Figure S1).
Once executed, the Bayesian network analysis module employs a systematic approach to analyze the genomic data. First, Bayesian network structure learning using a Hill Climbing algorithm and the Bayesian Dirichlet equivalent score is used to construct the network. The combination of these two functions guides the algorithm in learning the most probable network structure given the data. A critical aspect of the Bayesian network workflow is the implementation of bootstrapping to resample data multiple times. Users are directed to use between 300 and 500 bootstraps to enhance the robustness of the network analysis. Following bootstrapping, the module applies a significance threshold of 0.5 to filter out weakly supported gene pairs from the network. The refined network then undergoes further analysis through a resampling process. This step involves fitting multiple Bayesian networks to subsets of the data, which are randomly sampled with replacement, such that all values within the dataset have an equal probability of being selected one or multiple times. To enhance the statistical reliability of the results, the resampled datasets were used 100 times to fit the Bayesian network. Finally, statistical analyses are performed on the fitted networks. For each gene pair, the median conditional probability (Equation 1), median relative risk (Equation 2), empirical confidence intervals, and standard deviation are calculated. Conditional probability is described as the probability of observing Gene A given the presence of Gene B, P(A|B), while relative risk is the ratio of the conditional probability of observing Gene A given Gene B to the conditional probability of only observing Gene A in the absence of Gene B, P(A|B)/P(A|¬B).
Whereas conditional probability is expressed on a scale of 0 to 1 (e.g., a conditional probability of 0.65 indicates a 65% probability of observing the gene pair), relative risk, being a ratio, offers a more descriptive scale. For instance, a relative risk of 1 suggests that Gene A and Gene B are likely independent of each other, A ⫫ B. A value greater than 1 suggests that it is more likely to observe Gene A in the presence of Gene B, as P(A|B) > P(A|¬B). Conversely, a value less than 1 indicates that Gene A is more likely to be observed in the absence of Gene B, as P(A|B) < P(A|¬B). Aside from a table containing statistical summaries, conditional probability, and relative risk values, the output of the Bayesian network analysis module includes the refined network as an interactive HTML file.
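The per-pair statistics can be approximated directly from a presence/absence matrix. The sketch below uses plain empirical frequencies with bootstrap resampling rather than queries against fitted Bayesian networks, so it reproduces the quantities of Equations 1 and 2 only in spirit; the column names and the small smoothing floor are illustrative.

```python
import numpy as np

def pair_statistics(matrix, gene_a, gene_b, n_resamples=100, seed=0):
    """Median P(A|B), median relative risk, and 95% bootstrap intervals for one
    gene pair, computed from a genomes-by-genes 0/1 pandas DataFrame."""
    rng = np.random.default_rng(seed)
    a = matrix[gene_a].to_numpy()
    b = matrix[gene_b].to_numpy()
    n = len(a)

    cond_probs, rel_risks = [], []
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)            # rows sampled with replacement
        ra, rb = a[idx], b[idx]
        if rb.sum() == 0 or rb.sum() == n:          # skip degenerate resamples
            continue
        p_a_given_b = ra[rb == 1].mean()
        p_a_given_not_b = ra[rb == 0].mean()
        cond_probs.append(p_a_given_b)
        # Small floor avoids division by zero; ReGAIN uses a different,
        # variable-dependent smoothing (see Methods).
        rel_risks.append(p_a_given_b / max(p_a_given_not_b, 1e-6))

    return {
        "cond_prob_median": float(np.median(cond_probs)),
        "rel_risk_median": float(np.median(rel_risks)),
        "cond_prob_95CI": tuple(np.percentile(cond_probs, [2.5, 97.5])),
        "rel_risk_95CI": tuple(np.percentile(rel_risks, [2.5, 97.5])),
    }
```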
To further analyze the probabilistic gene co-occurrence relationships, two post-hoc analyses are performed. The bidirectional probability score (BDPS) (Equation 3) describes the directional strength of the conditional probabilities of a gene pair. Because the conditional probability P(A|B) depends on the presence or absence of Gene B, the reciprocal conditional probability is often not equal, P(A|B) ≠ P(B|A).
Therefore, a better assessment of the overall strength of the relationship can be achieved by taking the ratio of these two values. If BDPS = 1, equal bidirectional strength can be assumed. If BDPS > 1, the probability of observing Gene A given Gene B is stronger, P(A|B) > P(B|A). Conversely, if BDPS < 1, the probability of observing Gene B given Gene A is stronger, P(A|B) < P(B|A). Using relative risk, a fold change can be calculated (Equation 4), an additional post-hoc analysis analogous to BDPS, and these scores are output to a second CSV file. Fold change is interpreted inversely to BDPS; due to the nature of how relative risk ratios are calculated, a fold change > 1 indicates that the probability of observing Gene A given Gene B is weaker, RR(A|B) < RR(B|A), while a fold change < 1 indicates that the probability of observing Gene A given the presence of Gene B is stronger, RR(A|B) > RR(B|A).
Additionally, a fold change value of 1 may indicate either equal bidirectional probability or variable independence, depending on the constituent relative risk values. To expand on this, if RR(A|B) and the reciprocal RR(B|A) both equal 1, the fold change would also be 1. Because of this, it is important to use these post-hoc analyses to supplement the conditional probability and relative risk of each gene pair. Taken together, the Bayesian network structure learning module offers valuable insight into both the probability and directional strength of gene co-occurrence.
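Both post-hoc scores are simple ratios of quantities already computed for each gene pair; a minimal sketch, with illustrative argument names, is shown below.

```python
def post_hoc_scores(p_a_given_b, p_b_given_a, rr_a_given_b, rr_b_given_a):
    """Bidirectional probability score (Equation 3) and fold change (Equation 4).

    BDPS compares the two conditional probabilities directly; fold change is
    the analogous ratio of the two relative risks and, as described in the
    text, is read inversely to BDPS.
    """
    bdps = p_a_given_b / p_b_given_a
    fold_change = rr_a_given_b / rr_b_given_a
    return bdps, fold_change
```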
Additional Statistical Analyses
In addition to Bayesian network structure learning, the ReGAIN platform offers a multivariate analysis module (MVA) for exploring datasets. While Bayesian networks can reveal probabilistic relationships and dependencies between variables, MVA can quickly provide insight into the patterns and structures of variables within the dataset. Further, the use of a k-means clustering algorithm allows for the categorization of data into clusters based on similarity measures, which is independent of the probabilistic models and may assist in identifying distinct groups or outliers. This can be an important analysis in understanding the spread and development of resistance patterns, especially as a dataset grows over time and becomes more complex.
The ReGAIN MVA module begins by constructing a distance matrix based on a user-defined distance measurement, including, but not limited to, the Jaccard, Bray-Curtis, Euclidean, and Manhattan measures of distance. Principal coordinates analysis (PCoA) is then applied to the distance matrix to reduce dimensionality, facilitating a visual exploration of the data. The percentage variance explained by the first two principal coordinates is calculated to guide interpretation. A k-means clustering algorithm then categorizes the data into a user-defined number of clusters, and confidence ellipses are drawn around the clusters to add an additional layer of statistical validation to the analysis (Figure 2) (44).
The resulting MVA graphical representation serves as a tool for visual exploration of gene co-occurrence patterns and is designed to complement the Bayesian network structure learning module by providing initial insight into a dataset.
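A rough Python equivalent of these steps is sketched below; it performs a classical PCoA via eigendecomposition of the double-centred squared distance matrix and is only an approximation of the R implementation built on vegan and ellipse.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

def pcoa_kmeans(matrix, n_clusters=3, metric="jaccard"):
    """Classical PCoA on a 0/1 genomes-by-genes matrix, followed by k-means."""
    d = squareform(pdist(np.asarray(matrix, dtype=bool), metric=metric))
    n = d.shape[0]

    # Gower double-centring of the squared distance matrix.
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j

    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # First two principal coordinates and the variance they explain.
    coords = eigvecs[:, :2] * np.sqrt(np.maximum(eigvals[:2], 0.0))
    explained = eigvals[:2] / max(eigvals[eigvals > 0].sum(), 1e-12)

    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(coords)
    return coords, explained, labels
```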
Probabilistic Resistance Gene Co-Occurrence in Gram-Negative Pathogens
To assess the effectiveness of the ReGAIN pipeline, we investigated the co-occurrence of resistance genes in thirteen clinically relevant Gram-positive and Gram-negative bacterial species. This analysis involved between 15 and 2464 genomes per species (Figures 3 and 4, Table 1). Initially, each genomic population was examined for the presence of resistance genes (Figure S2, Tables S1 and S2).
Bayesian networks were then constructed with a focus on Gram-negative pathogens. Using E. coli as a proof of concept, we curated a dataset containing 179 antibiotic and heavy metal resistance genes across 1491 genomes (Figure S2A), resulting in 30802 gene−gene comparisons (Figure 3, Table S1A).
While investigating co-occurrence patterns of resistance genes, we initially focused on the widely used combination trimethoprim-sulfamethoxazole, commonly employed in treating Gram-negative infections such as urinary tract infections (47,48), expecting to see strong probabilistic relationships between trimethoprim and sulfonamide resistance genes. Though not as expansive as initially predicted, co-occurrence of trimethoprim resistance-conferring dihydrofolate reductase dfrA genes and sulfonamide resistance sul genes was detected in all analyzed Gram-negative bacteria except A. baumannii (Tables S1B and S3). Interestingly, the strongest relationship observed involving a trimethoprim resistance gene in A. baumannii was dfrA1 with the streptothricin resistance-conferring N-acetyltransferase gene, sat2 (cond. prob. 0.97, rel. risk 94.6).
Resistance Gene Co-Occurrence in Gram-Positive Bacterial Pathogens
In addition to Gram-negative pathogens, four Gram-positive bacterial pathogens − E. faecalis, E. faecium, S. aureus, and S. pneumoniae − were analyzed (Table 1), though it is worth noting that S. pneumoniae represents our smallest Bayesian network (10 nodes). To assess the accuracy of our system, the Bayesian networks were initially analyzed for the presence of previously observed co-resistance patterns, including the co-occurrence of phenicol and lincosamide resistance genes (38,52).
However, in all three Gram-positive strains, the strongest probabilistic relationships were between the vancomycin resistance genes themselves, which is unsurprising, given that genes conferring resistance to vancomycin are often clustered together (39).
Antibiotic and Heavy Metal Resistance Genes Exhibit Strong Patterns of Co-Occurrence
As heavy metal resistance genes (HMGs) may facilitate co-selection of ARGs under environmental pressures (27), we were interested in investigating whether antibiotic and heavy metal resistance genes displayed probabilistic patterns of co-occurrence within our genomic datasets. Among the Gram-negative pathogens, only E. coli, P. aeruginosa, S. enterica, A. baumannii, and K. pneumoniae encoded known heavy metal resistance genes. Importantly, moderate-to-strong relationships were observed between every type of HMG and ARGs representing nearly every antibiotic class, including the quinolones, trimethoprim, β-lactamases, chloramphenicol, rifamycin, aminoglycosides, colistin, and tetracyclines.
Discussion
Rising multidrug resistance rates represent a global health threat that is unlikely to be solved anytime soon (1,5,6,8,13). However, the effective utilization of probabilistic statistical models can offer predictive insight into trends and patterns of resistance gene co-occurrence. The development and application of the ReGAIN bioinformatic pipeline, which leverages a wealth of publicly available genomic data, offers a reproducible method of measuring resistance gene co-occurrence.
Additionally, ReGAIN may facilitate the identification of previously unrecognized gene cohorts that work synergistically or provide avenues for resistance gene co-selection under clinical or environmental pressures (29). Identifying these relationships allows genes to be prioritized for further characterization.
In this study, several interesting probabilistic relationships were identified in Gram-negative and Gram-positive bacterial pathogens, including several unexpected relationships between clinically important ARGs and various chloramphenicol resistance genes (Table 2). Although chloramphenicol is infrequently used in clinical human health settings in the United States (53), the NCBI database contains genomes deposited from all over the world. Thus, it is plausible that our datasets contain genomes isolated from patients in other countries, which could explain the strong relationships observed. However, we also observed similar relationships in a genomic population of E. coli isolated from hospitalized patients in Salt Lake City, Utah (unpublished data). This suggests that the co-occurrence of chloramphenicol resistance genes with other resistance genes could be evidence of the mobilization of MDR strains through the food chain, as bacteria harboring phenicol resistance genes have been identified in food animals (46,54).
Besides mapping common patterns of resistance gene co-occurrence, we were also interested in investigating ARG co-occurrence involving problematic resistance genes, such as those conferring resistance to vancomycin, methicillin, or last-line antibiotics like colistin. Though several mcr-family colistin resistance genes have been observed occurring together (55), to the best of our knowledge, this study is the first to explore co-occurrence of mcr-family genes with other antibiotic resistance genes using a large-scale genomic analysis. Indeed, several potentially problematic relationships involving colistin resistance genes were identified, including co-occurrence with tetracycline, sulfonamide, and aminoglycoside resistance genes.
Multidrug resistance is an ever-evolving threat, and it is important that trends of gene co-occurrence are identified and cataloged. Among resistance genes, relationships between antibiotic and heavy metal resistance genes are still not fully understood, though there is evidence of synergy or co-regulation between these gene categories (28,29,56). Surprisingly, this study identified multiple examples of ARG-HMG co-occurrence in both Gram-negative and Gram-positive bacterial pathogens (Figures 5 and 6). Moreover, several ARG-HMG gene pairs displayed BDPS scores indicating that the presence of the HMG was influenced by the presence of the ARG (Table S7). Examples include the mercury resistance gene merT and the carbapenem resistance β-lactamase gene bla IMP-1 in P. aeruginosa (BDPS 14.3); the silver resistance gene silC and gentamicin resistance gene aac(3)-Via in S. enterica (BDPS 32.6); and in S. aureus, the copper resistance gene mco and lincosamide resistance gene vgaE (BDPS 13.9) as well as the arsenic resistance gene arsC and trimethoprim resistance gene dfrE (BDPS 13.2). In each case, the probability of observing the HMG was 13- to 32-fold higher given the presence of the ARG. Conversely, multiple examples were identified where the probability of observing the ARG was higher given the presence of the HMG, indicating that the presence of the ARG may be influenced by the presence of the HMG (Table S7). Taken together, these results could indicate co-selection of specific heavy metal and antibiotic resistance genes. By using relative risk (rel. risk < 1 indicates a higher probability of observing Gene A in the absence of Gene B), we can also identify antagonistic relationships, revealing patterns of genes that either cannot or do not co-occur.
Though Bayesian network structure learning represents a powerful statistical approach to identifying and mapping co-occurrence of resistance genes, it is important to understand the limitations of these models. Sample size and isolation source can skew results, especially if the sample size is very small or the isolation source is biased towards one environment. To ensure an accurate overview of organism-specific patterns of resistance, it is important that large sample sizes representing myriad isolation sources, including clinical, environmental, and agricultural sources, are used. Furthermore, to track movement of resistance genes, the isolation source, origin, and year of collection should be documented when genomic data are deposited. Using well-curated genomic databases will help facilitate tracking the spread of resistance genes in a region- and time-specific manner.
Overall, the utilization of the ReGAIN platform offers a broad look at common patterns of antibiotic and heavy metal resistance gene co-occurrence in clinically relevant and high-threat bacterial pathogens. By using large genomic populations pulled from publicly available databases, this study works to mitigate potential bias introduced by oversampling from either clinical or environmental sources, which allows for general inferences to be made concerning commonly co-occurring resistance genes. Furthermore, making the ReGAIN bioinformatic pipeline available as both a web interface and command-line software provides researchers with a reproducible probabilistic statistical method of measuring resistance gene co-occurrence.
Data Acquisition
All genomic data were downloaded from the National Center for Biotechnology Information (NCBI) using either the NCBI web server or the NCBI Datasets command-line software (57). To ensure that only full genomes or large plasmids were included in each analysis, FASTA files containing < 3500 nucleotides were filtered out and excluded using a custom Python script. Antibiotic resistance genes were identified with AMRfinderPlus v.3.10.40 (32) using the appropriate organism flag. The ReGAIN data acquisition workflow was built using Python v.3.10.12.
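The length filter mentioned above can be illustrated with a short Biopython sketch; this is a stand-in for the custom script, not the script itself, and the file extension is an assumption.

```python
from pathlib import Path
from Bio import SeqIO

def keep_large_fastas(fasta_dir, min_length=3500):
    """Return FASTA files whose total sequence length is at least min_length nt."""
    kept = []
    for path in Path(fasta_dir).glob("*.fasta"):
        total = sum(len(record.seq) for record in SeqIO.parse(str(path), "fasta"))
        if total >= min_length:
            kept.append(path)
    return kept
```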
Bayesian Analysis and Results Visualization
The Bayesian network structure learning module was built using R v.4.3.1 and the following packages: bnlearn v.4.8.1 (40), gRain v.1.3.13 (41), gRbase v.1.8.9 (58), visNetwork v.2.1.2, and igraph v.1.5.0 (59). Results were visualized using ggplot2 v.3.4.2 (60). Bayesian networks were built using the bnlearn and gRain packages with 500 bootstraps; the data were additionally resampled and fitted to networks 100 times to calculate confidence. Laplace smoothing that scaled with the number of variables was implemented to avoid divide-by-zero errors while calculating relative risk. Given N variables, conditional probabilities were adjusted by [(N + 0.5) / (N + 1)], which maintains the appropriate scale while avoiding infinite values. For each genomic dataset, only genes occurring ≥ 5 times were included. PCoA was created using the Jaccard method of distance and the following R packages: ellipse v.0.3.2 and vegan v.2.6-4 (61). To reduce noise in the network, genes occurring fewer than five times within each genomic population were not included in the analysis. Antibiotic and heavy metal resistance gene association networks were created using ggraph v.2.1.0 (62). Not determined (ND) values indicate that a gene pair was excluded from the analysis due to low confidence (< 0.5) or because the pair would introduce cycles in the Bayesian network.
Equation 1. Conditional probability. The probability of observing 'Gene A' given the presence of 'Gene B'.
P(A|B) = P(B|A) · P(A) / P(B)
Equation 2. Relative risk. The ratio of the conditional probability of observing 'Gene A' given the presence of 'Gene B' to the conditional probability of observing 'Gene A' in the absence of 'Gene B'. This calculation adjusts for the occurrence of Gene A on its own in the dataset, offering a more in-depth understanding of the probability and rate of co-occurrence.
RR = P(A|B) / P(A|¬B)
Equation 3. Bidirectional probability score. The ratio of the conditional probability of observing 'Gene A' given the presence of 'Gene B' to the conditional probability of observing 'Gene B' given the presence of 'Gene A', BDPS = P(A|B) / P(B|A).
Equation 4. Fold change. The ratio of the relative risk of 'Gene A' to 'Gene B' to the relative risk of 'Gene B' to 'Gene A'.
Figure 1. Outline of the ReGAIN bioinformatic pipeline. The first core module uses AMRfinderPlus to identify resistance/virulence genes from the input genomes in nucleotide FASTA format. After all genomes have been processed, two major output CSV files are created: a presence/absence matrix of genes across genomes and a metadata file containing all identified genes and their associated resistance type. All downstream ReGAIN analyses can be performed using these two output files. The second core module, the Bayesian analysis pipeline, utilizes 100 resamples of the submitted dataset to generate a table of pairwise conditional probability and relative risk scores for each gene pair in the dataset, including low and high confidence intervals and standard deviation. Additional post-hoc analyses generate bidirectional probability scores based on both conditional probability and relative risk for each gene pair cohort. Both tables are output in CSV format. Finally, an interactive Bayesian network HTML web page is generated. The optional additional analyses module includes Association Rule Mining and multivariate analyses.
Figure 2. Example Principal Coordinate Analysis (PCoA) output from ReGAIN's Additional Analyses Module, based on 39 resistance genes identified in a genomic population of 502 Enterococcus faecium genomes. The PCoA plot was generated using the Jaccard measure of distance. Ellipses represent 95% confidence, with three clusters specified.
Figure 3. Heatmap illustrating conditional probabilities of resistance gene co-occurrence in E. coli. From the Bayesian network, 179 genes were queried pairwise to calculate the conditional probability of co-occurrence. Grey squares represent gene pairs excluded from the analysis due to low confidence, generation of loops in the Bayesian network, or to avoid self-query (Gene A = Gene B).
Figure 4. Heatmap illustrating conditional probabilities of resistance gene co-occurrence in S. aureus. From the Bayesian network, 91 genes were queried pairwise to calculate the conditional probability of co-occurrence. Grey squares represent gene pairs excluded from the analysis due to low confidence, generation of loops in the Bayesian network, or to avoid self-query (Gene A = Gene B).
Figure 5. Antibiotic and heavy metal resistance gene co-occurrence in Escherichia coli, Salmonella enterica, Pseudomonas aeruginosa, Klebsiella pneumoniae, and Acinetobacter baumannii. A. Association network of patterns of antibiotic and heavy metal resistance gene co-occurrence. Edge width and color are scaled by the magnitude of relative risk. B. Bubble plot illustrating antibiotic and heavy metal gene co-occurrence by organism. Bubble size is scaled by conditional probability and color is scaled by relative risk.
Table 1. Genomic sample size of ReGAIN databases. Database size describes the number of genomes and the number of antibiotic and heavy metal resistance genes included in each database.
Table 2. Co-occurrence of chloramphenicol resistance genes in Gram-negative bacteria. The Gene A column includes only genes associated with chloramphenicol resistance. The Gene B column lists co-occurring genes, and the antibiotics they confer resistance to are listed in the Gene B Class column. ND, not determined. | 2024-03-05T14:13:22.953Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "ecff00b892dd0fe82bfcf9f40ccc4d58aa7376b7",
"oa_license": "CCBYNCND",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2024/03/01/2024.02.26.582197.full.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c9f7b66710117111f4bbed8ae17190b15cb9141d",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
1784886 | pes2o/s2orc | v3-fos-license | A note on regular Ramsey graphs
We prove that there is an absolute constant $C>0$ so that for every natural $n$ there exists a triangle-free \emph{regular} graph on $n$ vertices with no independent set of size at least $C\sqrt{n\log n}$.
Introduction
A major problem in extremal combinatorics asks to determine the maximal n for which there exists a graph G on n vertices such that G contains no triangles and no independent set of size t. This Ramsey-type problem was settled asymptotically by Kim [6] in 1995, after a long line of research; Kim showed that n = Θ(t 2 / log t). Recently, Bohman [1] gave an alternative proof of Kim's result by analyzing the so-called triangle-free process, as proposed by Erdős, Suen and Winkler [3], which is a natural way of generating a triangle-free graph. Consider now the above problem with the additional constraint that G must be regular. In this short note we show that the same asymptotic results hold up to constant factors. The main ingredient of the proof is a gadget-like construction that transforms a triangle-free graph with no independent set of size t, which is not too far from being regular, into a triangle-free regular graph with no independent set of size 2t.
Our main result can be stated as follows.
Theorem 1.1. There is a positive constant C so that for every natural n there exists a regular triangle-free graph G on n vertices whose independence number satisfies α(G) ≤ C √ n log n.
Denote by R(k, ℓ) the maximal n for which there exists a graph on n vertices which contains neither a complete subgraph on k vertices nor an independent set on ℓ vertices. Let R_reg(k, ℓ) denote the maximal n for which there exists a regular graph on n vertices which contains neither a complete subgraph on k vertices nor an independent set on ℓ vertices. Clearly, for every k and ℓ one has R_reg(k, ℓ) ≤ R(k, ℓ). Theorem 1.1 states that R_reg(3, t) = Θ(R(3, t)) = Θ(t^2/log t).
2 Proof of Theorem 1.1
Note first that the statement of the theorem is trivial for small values of n. Indeed, for every n_0 one can choose the constant C in the theorem so that for n ≤ n_0, C√(n log n) ≥ n, implying that for such values of n a graph with no edges satisfies the assertion of the theorem. We thus may and will assume, whenever this is needed during the proof, that n is sufficiently large.
The following well-known theorem due to Gale and Ryser gives a necessary and sufficient condition for two lists of non-negative integers to be the degree sequences of the classes of vertices of a simple bipartite graph. The proof follows easily from the max-flow min-cut condition on the appropriate network flow graph (see e.g. [8, Theorem 4.3.18]).
then there exists a simple bipartite graph with degree sequence d on each side. In particular, this holds for Proof. By Theorem 2.1 it suffices to check that for every s, Suppose this is not the case and there is some s as above so that Observe that by doing so the left hand side of (2) increases by d_1 − d_i, whereas the right hand side increases by at most this quantity, hence (2) still holds with this new value of d_i. We can thus assume that (2), as the left hand side does not change, whereas the right hand side can only decrease. Moreover, the new sequence still satisfies (1). Thus we may assume that in (2) (2) gives Therefore [(a + 1)s − m]d > s^2, implying that (a + 1)s − m > 0, that is, s > m/(a+1), and The function g(s) = s^2/((a+1)s − m) attains its minimum in the range m/(a+1) < s ≤ m at s = 2m/(a+1) and its value at this point is 4m/(a+1)^2. We thus conclude from (3) that d > 4m/(a+1)^2 and hence that d_1 = ad > 4am/(a+1)^2, contradicting the assumption (1). This completes the proof.
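The displayed conditions referred to as (1)–(3) above did not survive extraction. For orientation only, Theorem 2.1 presumably asserts the classical Gale–Ryser criterion; in the symmetric setting used here, with a non-increasing sequence d_1 ≥ . . . ≥ d_m on each side, that criterion reads (reconstructed, not verbatim):

```latex
\[
  \sum_{i=1}^{s} d_i \;\le\; \sum_{j=1}^{m} \min(d_j,\, s)
  \qquad \text{for every } 1 \le s \le m .
\]
```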
then there is no simple bipartite graph whose degree sequence on each side is (d_1, d_2, . . . , d_m). This follows from Theorem 2.1.
Let R(n, 3, t) denote the set of all triangle-free graphs G on n vertices with α(G) < t. As usual, let ∆(G) and δ(G) denote the respective maximal and minimal degrees of G. Proof. Construct a new graph G′ as follows. Take two copies of G, and color each of these copies by the same equitable coloring using ∆(G) + 1 colors with all color classes of cardinality either ⌊n/(∆(G) + 1)⌋ or ⌈n/(∆(G) + 1)⌉ using the Hajnal-Szemerédi Theorem [4] (see also a shorter proof due to Kierstead and Kostochka [5]). Let C and C′ be the same color class in each of the copies of G. Denote the degree sequence of the vertices of C in G by d According to Corollary 2.2 there exists a simple bipartite graph with m vertices on each side, where the degree sequence of each side is d_1 ≥ . . . ≥ d_m, as the maximal degree d_1 = d + ∆(G) − δ(G) ≤ 2d, the minimal degree d_m ≥ d, and by our assumption on G we have d_1 ≤ 8m/9. We can thus connect the vertices of C and C′ using this bipartite graph such that all vertices in C ∪ C′ have degree d + ∆(G). By following this method for every color class, we create the graph G′ which is (d + ∆(G))-regular, triangle-free and has no independent set of cardinality 2t − 1.
The H-free process and Bohman's result
Consider the following randomized greedy algorithm to generate a graph on n labeled vertices with no H-subgraph for some fixed graph H. Given a set of n vertices, a sequence of graphs {G_i} is generated. The process terminates at step t, the first time that no potential unselected pair e_{t+1} exists. This algorithm is called the H-free process.
The K_3-free process was proposed by Erdős, Suen and Winkler [3] and was further analyzed by Spencer [7]. Recently, Bohman [1], extending and improving previous results, was able to analyze the K_3-free process and to show that with high probability it passes through an almost regular Ramsey-type graph. Remark 2.6. Item (3) can be derived implicitly from [1], or alternatively, it follows from [2, Theorem 1.4], as the degree of every vertex is a trackable extension variable.
Note that Proposition 2.4 in conjunction with Theorem 2.5 completes the proof of Theorem 1.1 for every large enough even integer n. To fully complete the proof, we describe how to deal with the case of n odd. So, let now n be large enough and odd. Our aim is to show the existence of a regular triangle-free graph G_n on n vertices with α(G_n) = O(√(n log n)). The approach we take to achieve this goal is to construct a "big" graph satisfying our Ramsey conditions on an even number of vertices, and to add to it a "small" graph with an odd number of vertices without affecting the asymptotic results claimed.
For every k ≡ 0 (mod 5), and every even r ≤ 2k/5, let H_{k,r} denote a graph constructed as follows. Start with a copy of C_5 blown up by a factor of k/5 and delete from the resulting graph (2k/5 − r/2) disjoint 2-factors (which exist by Petersen's Theorem, see e.g. [8, Theorem 3.3.9]). H_{k,r} is hence a triangle-free r-regular graph on k vertices.
Denote by F_n an r-regular triangle-free graph on 2n vertices with α(F_n) ≤ C√(n log n) for some absolute constant C, and furthermore assume r is even (this can be achieved by choosing the appropriate parameter d in Proposition 2.4, as we have much room to spare with the values we plug in from Theorem 2.5). Let n_0 = (n − k)/2, where k ≡ 5 (mod 10), and k = (1 + o(1))(5C/2)√(n log n). Clearly, n_0 is an integer. The graph F_{n_0} is r-regular for some even r ≤ α(F_{n_0}), is triangle-free on 2n_0 vertices, and satisfies α(F_{n_0}) ≤ C√(n_0 log n_0) ≤ C√(n log n). Now, define G_n to be a disjoint union of F_{n_0} and H_{k,r}. Clearly, G_n has 2n_0 + k = n vertices, is r-regular, triangle-free and satisfies α(G_n) = α(F_{n_0}) + α(H_{k,r}) ≤ α(F_{n_0}) + k ≤ C√(n log n) + k = O(√(n log n)).
Discussion
A natural question that extends the above is to try and determine R_reg(k, ℓ) for other values of k and ℓ (in particular for fixed values of k > 3 and ℓ arbitrarily large), and also to try and investigate its relation with R(k, ℓ). The following conjecture seems plausible.
"year": 2008,
"sha1": "407d2ba0eb963eda391363694bc6a07362989453",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0812.2386",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "985555fc5f801ab05f42f6a6c4bf467d9c70a60f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
237166150 | pes2o/s2orc | v3-fos-license | An optimal window of platelet reactivity by LTA assay for patients undergoing percutaneous coronary intervention
Objective This study aimed to determine how platelet reactivity (PR) on dual antiplatelet therapy predicts ischemic and bleeding events in patients who underwent percutaneous coronary intervention (PCI). Design A total of 2768 patients who had received coronary stent implantation and had taken aspirin 100 mg in combination with clopidogrel 75 mg daily for > 5 days were consecutively screened and 1885 were enrolled. The recruited patients were followed up for 12 months. The primary end-point was the net adverse clinical events (NACE) of cardiovascular death, nonfatal myocardial infarction (MI), target vessel revascularization (TVR), stent thrombosis (ST) and any bleeding. Results 1709 patients completed the clinical follow-up. Using receiver operating characteristic (ROC) curve analysis, the optimal cut-off values were found to be 37.5 and 25.5% respectively in predicting ischemic and bleeding events. Patients were classified into 2 groups according to PR: inside the window group (IW) [adenosine diphosphate (ADP)-induced platelet aggregation (PLADP) 25.5–37.4%] and outside the window group (OW) (PLADP < 25.5% or ≥ 37.5%). The incidence of NACE was 16.8 and 23.1% respectively in the IW and OW groups. The hazard ratio of NACE in the IW group was significantly lower [0.69 (95% CI, 0.54–0.89, P = 0.004)] than that in the OW group during 12-month follow-up. Conclusion An optimal therapeutic window of 25.5–37.4% for PLADP predicts the lowest risk of NACE, which could serve as a reference for tailored antiplatelet treatment when using the LTA assay. Trial registration Trial registration number: ClinicalTrials.gov NCT01968499. Registered 18 October 2013 - Retrospectively registered.
Introduction
Dual antiplatelet therapy with aspirin and an adenosine diphosphate (ADP)-receptor (P2Y 12 ) inhibitor is a cornerstone of the pharmacological treatment for patients with coronary artery disease undergoing percutaneous coronary intervention (PCI) [1].
Clopidogrel is one of the most widely used P2Y 12 inhibitors, which undergoes a two-step metabolic transformation before binding to the platelet P2Y 12 receptor [2]. Studies have shown wide variability of platelet clopidogrel response [3], indicating that a substantial proportion of patients have inappropriate platelet inhibition at a regular dose of clopidogrel 75 mg once daily. It has been reported that high on-treatment platelet reactivity (HOPR) detected by platelet aggregometry leads to increased risk of thrombotic events [4][5][6][7][8], while low on-treatment platelet reactivity (LOPR) leads to increased risk of bleeding after PCI [9,10]. Thus, it is important to identify an optimal platelet inhibition or on-treatment platelet reactivity (PR) by platelet aggregometry [11,12].
This study was to investigate an optimal therapeutic window for PR determined by light transmission aggregometry (LTA) to predict the lowest ischemic and bleeding risks in patients underwent PCI and treated with dual antiplatelet agents.
Methods
This is a prospective, single-center, registration study conducted at the First Affiliated Hospital of Nanjing Medical University, Nanjing, China. The study was registered at URL: https://www.clinicaltrials.gov (Unique identifier: NCT019684 99) and was approved by the ethics committee of the First Affiliated Hospital of Nanjing Medical University. Written informed consent was obtained from each patient.
Study population
A total of 2768 patients were consecutively screened from April 2011 to October 2016 in the First Affiliated Hospital of Nanjing Medical University, among which 883 declined to participate, and the remaining 1885 patients were enrolled in the study (Fig. 1).
The inclusion criteria were patients who had undergone coronary stent implantation and taken aspirin 100 mg in combination with clopidogrel 75 mg daily for > 5 days [7]. Exclusion criteria were patients: 1) intolerant to aspirin or clopidogrel (e.g. history of allergic reactions or gastrointestinal bleeding); 2) taking any other antiplatelet agents in addition to aspirin and clopidogrel (e.g. cilostazol); 3) taking any anticoagulant agents (e.g. vitamin K antagonists, new oral anticoagulants); 4) with myelodysplastic syndrome or abnormal baseline platelet counts of < 80 × 10 9 / L or > 450 × 10 9 /L; 5) with hemoglobin < 90 g/L; 6) with cancer or any other complications that may not suitable to be recruited at the discretion of the investigators.
PR measurements
Six milliliters of venous blood were collected into 3.2% citrate vacutainer tubes in the morning 2 h after the patients took clopidogrel (if glycoprotein (GP) IIb/IIIa inhibitors were used, testing would be performed 24 h after drug discontinuation). Blood samples were subjected to platelet function testing by LTA within 2 h as previously described [13]. In brief, samples were centrifuged at 200 g for 8 min to obtain platelet-rich plasma (PRP). Platelet-poor plasma (PPP) was prepared by centrifuging the remaining blood at 2465 g for 10 min. Platelet counts were adjusted by the addition of PPP to the PRP to achieve a count of 250 × 10^9/L. The ADP-induced platelet aggregation (PL ADP ) was recorded using the maximum platelet aggregation within 8 min after addition of ADP (final concentration 5 μmol/L) by a Chronolog Model 700 aggregometer (Chrono-log Corporation, Havertown, PA, USA) [13].
Study end-points
The primary end-point was set as the net adverse clinical events (NACE), a composite of ischemic events including cardiovascular death, nonfatal myocardial infarction (MI), target vessel revascularization (TVR), stent thrombosis (ST) and any bleeding defined by the Thrombolysis in Myocardial Infarction (TIMI) criteria [14]. MI was defined in accordance with the Third Universal Definition proposed in 2007 [15]. ST was defined as definite or probable according to the Academic Research Consortium definitions [16]. All the clinical events were independently adjudicated by two investigators blinded to the results of PR tests. Disagreements were resolved by discussion or consultation with a third investigator (Li).
The outcome data were collected by 2 investigators who were blinded to the results of platelet reactivity testing. The patients were followed up in the clinic and less preferably by telephone call if they were unable to attend the clinic. A standard case report form was used to record the outcome.
Statistical analysis
Statistical analysis was performed using SPSS 22.0 software (SPSS, Chicago, IL, USA). Continuous variables are expressed as means ± standard deviations (SD) or medians (range [or Inter Quartile Range]). Categorical variables are expressed as frequencies and percentages. Two-sided Mann-Whitney tests were used to compare PL ADP between groups. The time to primary endpoint between groups was compared using the Kaplan-Meier method. Survival curves were compared using the logrank test and hazard ratios were calculated using Cox's regression models. Sensitivity and specificity of PL ADP in predicting thrombotic events were calculated at different thresholds by receiver operating characteristic (ROC) curve analysis. A two-sided P < 0.05 was statistically significant.
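The original analysis was run in SPSS; as a rough illustration of how such a data-driven cut-off can be obtained from ROC coordinates, a Python sketch using Youden's index is shown below. The column names are invented, and whether Youden's index was the exact criterion applied in this study is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def optimal_cutoff(df, score_col="PL_ADP", event_col="ischemic_event"):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1.

    Assumes higher PL_ADP predicts a higher risk of the binary event; for
    bleeding, the score would be negated so that low PL_ADP maps to high risk.
    """
    y = df[event_col].to_numpy()
    score = df[score_col].to_numpy()
    fpr, tpr, thresholds = roc_curve(y, score)
    best = int(np.argmax(tpr - fpr))
    return {
        "threshold": float(thresholds[best]),
        "sensitivity": float(tpr[best]),
        "specificity": float(1.0 - fpr[best]),
        "auc": float(roc_auc_score(y, score)),
    }
```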
Relationship between PR and 1-year outcome
The average time from PCI to PR test reached 2.50 days. Patients with ischemic events during follow-up had a higher PL ADP level compared to those without (36% [IQR: vs.29% [IQR: 20-40]; P = 0.054). ROC analysis was performed to evaluate the value of PL ADP in predicting ischemic events. As a result, a PL ADP cut-off value of 37.5% provided a sensitivity of 48.9%, specificity of 70%, and the largest area under the curve value of 0.58 (Fig. 2a). By comparison, the recommended cut-off value of 46% by LTA provides a sensitivity of 20% and a specificity of 84.3% [12]. While adopting 37.5% as a new cut-off value, 521 patients (30.5%) were defined with HOPR, who experienced a higher rate of ischemic events compared with those without (4.2% vs. 1.9%; P = 0.007, Fig. 3a).
On the other hand, patients who experienced bleeding events had significantly lower PL ADP compared with those without bleeding (25% [IQR vs.30% [IQR 21-41]; P < 0.001). By ROC analysis, a cut-off value of 25.5% provided a sensitivity of 50.3%, a specificity of 62.6%, and the largest area under the curve of 0.57 in predicting bleeding (Fig. 2b). Using this new cut-off value, 682 (39.9%) patients were defined with LOPR, who experienced a higher rate of bleeding events compared to those without (24.2% vs. 15.9%; P < 0.001, Fig. 3b).
The risk of ischemic events and NACE was nonsignificantly higher in patients with HOPR compared with those in normal responders ( Fig. 4).
Optimal PR or therapeutic window of PR to prevent ischemic and bleeding events According to the ROC curve analysis, we defined an optimal window of PL ADP between 25.5 and 37.5% after dual antiplatelet treatment. As a result, 29.6% of the study population was comprised within this therapeutic window in this study.
We classified the patients into 2 groups according to PR: inside the window group (IW) [PL ADP (25.5-37.4%)] and outside the window group (OW) (PL ADP < 25.5% or ≥ 37.5%). The baseline demographic characteristics, clinical, angiographic and biological characteristics and medication history were described in Table 2. There were no significant differences in all the baseline characteristics between the 2 groups.
We further analyzed the prognosis according to the newly defined therapeutic window. The NACE rate of the IW group patients was lower than that of the OW group patients (16.8% vs. 23.1%; P = 0.004) (Fig. 3c). Kaplan-Meier analysis showed a significant difference in NACE and bleeding between patients within and outside the window, although no significant difference was found in ischemic events (P = 0.438, 0.024 and 0.004, for ischemic events, bleeding and NACE, respectively) (Fig. 5). The hazard ratio of NACE for OW group was significantly higher during the 12-month follow-up compared with IW group [1.44 (95% CI: 1.12-1.85; P = 0.004)] after adjusting for age, gender, body mass index (BMI), history of Fig. 4 1-year Adverse Events in Groups of Different Level of PL ADP . Patients were stratified into groups of NOPR (25.5-37.4%), HOPR (≥37.5%) and LOPR (< 25.5%). ** represents P < 0.001 for bleeding events compared with the NOPR group. † † represents P < 0.001 for net adverse clinical events compared with the NOPR group. PL ADP , ADP induced platelet aggregation; NOPR, normal on-treatment platelet reactivity; HOPR, high ontreatment platelet reactivity; LOPR, low on-treatment platelet reactivity smoking, hypertension, diabetes, coronary artery bypass grafting (CABG), PCI, hemoglobin, platelet count, estimated glomerular filtration rate (eGFR), activated partial thromboplastin time (APTT), and international normalized ratio (INR) ( Table 3). The total bleeding rate was also significantly higher in OW than IW after adjusting for the confounders [1.33 (95% CI: 1.03-1.72; P = 0.028)], which turned out to be the main contributor to NACE (Table 3).
Discussion
In this study, we identified an optimal range of platelet reactivity of 25.5-37.4% for PL ADP as determined by LTA in patients who underwent PCI and were treated with regular-dose aspirin and clopidogrel, and approximately one third (29.6%) of the patients fell within this therapeutic window. Patients inside the window presented a significantly lower risk of NACE than those outside the window during 12-month follow-up. Several studies have tried to identify a threshold of PR that could stratify patients at risk of ischemic events. Bliden et al. [17] found that HOPR (defined as PL ADP ≥ 50% measured by LTA with ADP concentration of 5 μmol/L) was the only variable significantly related to ischemic events after adjusting for hypertension, diabetes and use of calcium channel inhibitors. Gurbel et al. [6] demonstrated that HOPR (defined as PL ADP ≥ 46% measured by LTA [12] with ADP concentration of 5 μmol/L) was an independent risk factor for ischemic events within 2 years of non-emergent PCI (OR = 3.9, P < 0.001).
The cut-off value of PL ADP in our study is 37.5%, which is lower than the previous study. However, as demonstrated by the GRAVITAS trial, when HOPR was defined as ≥230 P2Y 12 reaction units (PRU) by Verify-Now P2Y 12 test, high-dose clopidogrel compared with standard-dose clopidogrel did not reduce the incidence of major adverse cardiovascular events [18], while the post-hoc analysis found that the achievement of a PRU < 208 was associated with significantly improved clinical outcomes. Consistent with the GRAVITAS trial, our result suggests that a lower cut-off value of PL ADP might bring more low responders to the intensified antiplatelet treatment and consequently reduce ischemic events.
In addition to recurrent ischemic events, the prognostic importance of bleeding complications following PCI has also been established. ADAPT-DES trial showed that HOPR (defined by > 208 PRU, by VerifyNow P2Y 12 test) was inversely related to TIMI major bleeding (adjusted HR: 0.73, 95% CI: 0.61 to 0.89, P = 0.002) [3]. Studies suggested a possible link between LOPR and bleeding [7][8][9][18][19][20][21][22][23]. With the LTA method, Tsukahara et al. [24] found that high-responsiveness was the independent predictor of major bleeding in patients receiving drugeluting stents and treated with thienopyridine. Parodi et al. [25] reported that LOPR (PL ADP < 40%, 10 μmol/L ADP, LTA assay) were the independent predictor of bleeding events. Consistent with previous studies, we confirmed the predictive value of PR on the occurrence of bleeding events after PCI as measured with the LTA assay, and we suggested a cut-off value of PL ADP < 25.5% to predict the bleeding events.
The optimal therapeutic window of PL ADP is uncertain, Campo [26] and Mangiacapra et al. [1] have reported two therapeutic windows for PR measured with the VerifyNow P2Y 12 assay. However, in Campo's study, they reported all clinical events (ischemic and bleeding) after 1 month and up to 1 year of follow-up. Patients with adverse events during the first month were excluded. In Mangiacapra's study, only short-term outcome of 1-month clinical events were analyzed. By contrast, using the two thresholds for ischemic and bleeding events, we found an optimal therapeutic window for PL ADP by LTA assay, ranging from 25.5 to 37.4%, which was associated with the lowest 1-year incidence of NACE. To the best of our knowledge, our study was the first that use LTA method to demonstrate an optimal therapeutic window for PL ADP regarding the 1-year clinical outcome.
Our study has important clinical implications. According to the results, post-PCI evaluation of PR carries important prognostic information, and the antiplatelet treatment should be guided referring to optimal therapeutic window of PR instead of single cut-off value. In particular, for patients with HOPR and higher ischemic risk, more aggressive antiplatelet strategies might be useful. On the other hand, for patients with LOPR and higher bleeding risk, conservative antiplatelet therapies should also be indicated until PR falls within the desired range.
The present study has potential limitations. First, the limited funding support prevented us to perform another cohort to validate the study results. Thus, a prospective study would be needed before using such an assay to try to predict outcomes. Second, the sample size was modest, so we could not analyze the optimal ranges of platelet reactivity for different age groups. Third, platelet reactivity could vary while patients taking clopidogrel treatment for longer term. However, we could not further extend the time of platelet reactivity test due to the limited hospitalization period. Besides, patients would be on high risk of thrombotic events early after PCI, so clopidogrel response in early stage of stent implantation would be more important to overcome or predict the thrombotic events.
Conclusion
An optimal therapeutic window of 25.5-37.4% for PL ADP predicts the lowest risk of net adverse clinical events, and could serve as a reference for tailored antiplatelet treatment when platelet aggregation is assessed by light transmittance aggregometry.
Code availability Not applicable.
Authors' contributions
Jing Wang, Jing Wang, Tong Wang, Jiazheng Ma and Jianzhen Teng analyzed data and wrote the manuscript; Xiaofeng Zhang, Jing Wang, Qian Gu, Zekang Ye, Inam Ullah, Chuchu Tan, Samee Abdus, Lu Shi and Xiaoxuan Gong provided patients, collected data, and critically reviewed the manuscript; Chunjian Li designed the study and critically reviewed the manuscript. All authors approved the manuscript for submission.
Availability of data and materials
The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate This study was approved by the ethics committee of the First Affiliated Hospital of Nanjing Medical University based on the Declaration of Helsinki. Written informed consent was obtained from each patient.
Consent for publication
Written informed consent for publication was obtained from all participants. | 2021-08-19T00:08:35.813Z | 2021-06-18T00:00:00.000 | {
"year": 2021,
"sha1": "55a606a552c424bfd35c6753a0052a1770350a28",
"oa_license": "CCBY",
"oa_url": "https://thrombosisjournal.biomedcentral.com/track/pdf/10.1186/s12959-021-00323-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "00e29952e5f3c4921c39007ab2d22fee17a581de",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119377883 | pes2o/s2orc | v3-fos-license | Modeling of gate controlled Kondo effect at carbon point-defects in graphene
We study the magnetic properties in the vicinity of a single carbon defect in a monolayer of graphene. We include the unbound $\sigma$ orbital and the vacancy induced bound $\pi$ state in an effective two-orbital single impurity model. The local magnetic moments are stabilized by the Coulomb interaction as well as a significant ferromagnetic Hund's rule coupling between the orbitals predicted by a density functional theory calculation. A hybridization between the orbitals and the Dirac fermions is generated by the curvature of the graphene sheet in the vicinity of the vacancy. We present results for the local spectral function calculated using Wilson's numerical renormalization group approach for a realistic graphene band structure and find three different regimes depending on the filling, the controlling chemical potential, and the hybridization strength. These different regions are characterized by different magnetic properties. The calculated spectral functions qualitatively agree with recent scanning tunneling spectra on graphene vacancies.
I. INTRODUCTION
The two π orbital sub-bands of a honeycomb lattice in pristine graphene realize perfect Dirac fermions [1,2] with a linearly vanishing density of states (DOS) at the Dirac point. As a result, graphene is a semimetal with a half-filled conduction band [2,3]. While vacancies in metals typically act as non-magnetic scattering centres on the conduction electrons, it has been shown [4][5][6] that removing a carbon atom induces a local magnetic moment that can undergo a Kondo screening at low temperatures [7] with substantial Kondo temperatures of order T K ∼ 50 K. A tunable Kondo resonance in combination with the possibility to manipulate the spin degrees of freedom [8,9] may be of fundamental use for practical graphene based spintronics.
Resistivity measurements [7] or the magnetic response of graphene [6] with vacancies induced by irradiation provide information on ensemble averaged impurity properties while a scanning tunneling microscopy (STM) gives direct access to single vacancy qualities. Since the graphene DOS vanishes linearly at the Dirac point such a system appears to be an ideal system for observing quantum criticality [10,11]. While the Kondo effect is absent for arbitrary coupling at the Dirac point and particle-hole (PH) symmetry in a linearly vanishing pseudo-gap DOS [12,13], finite doping controlled by an external gate voltage or strong PH asymmetry can lead to Kondo screening at low enough temperatures. In addition, the Kondo physics for adatoms such as Co on graphene [11,[14][15][16][17] has drawn some attention in recent years.
While the 3d transition metal Co carries an intrinsic magnetic moment, the generation of spin moments at carbon vacancies in graphene is less clear. Locally, two-orbitals of the three broken sp 2 bonds form a spin singlet bond leaving one radical with free moment [6]. Since these σ orbitals are orthogonal to the π orbital subsystem responsible for the Dirac fermions, they do not couple to the conduction band prohibiting the Kondo effect. Furthermore, the carbon vacancy induces a bound π state in the vicinity of the defect that is energetically located at or close to the Dirac point [18][19][20]. In a tight-binding formulation for the itinerant π electrons, the exact position of the bound state depends on the nearest and next nearest neighbor hopping t and t ′ . A vanishing t ′ results in a symmetrical DOS and a Dirac Point that coincides with the bound state. Those bound states are orthogonal to the itinerant states forming the conduction electron continuum and it is unclear [11] whether these states are responsible for the experimentally observed magnetic moments [6]. Due to the local graphene curvature in the vicinity of the carbon vacancy, the σ orbitals and the neighboring π orbitals start to hybridize and the possibility of a Kondo effect emerges.
In this paper, we investigate the two-orbital model for single vacancies in graphene originally proposed by Cazalilla et al. [20] using Wilson's numerical renormalization group (NRG) approach [21,22]. We calculate local spectra as functions of the gate voltage controlled chemical potential as well as the hybridization strength which is connected to graphene's curvature in the vicinity of the vacancy. The model comprises one unbound σ orbital, the locally bound π state and the coupling to the remaining π continuum.
Mitchell and Fritz used this model [16] as a starting point and constructed an effective Kondo model in the local moment regime comprising a spin-1/2 coupled to a logarithmically divergent effective conduction band density of states [20,23] for a non-interacting π subsystem. The authors include the vacancy-induced bound state in the DOS, which is singular at the DP. Thus, they neglect the Coulomb interaction between the σ and π orbitals and treat the problem as an effective s = 1/2 single impurity Kondo problem. However, the divergent DOS at the DP is irrelevant for doping away from charge neutrality. Since in this paper we are interested in local fluctuations in particular and treat Coulomb interactions explicitly, we incorporate the bound state in the impurity as an effective orbital and use an effective DOS comprising only the itinerant graphene π states.
The scanning tunneling spectra (STS) [24] indicate that the system is located closely to local charge degeneracy points at which the local orbital occupation is changed by one indicating that the external gate voltage not only alters the filling in the graphene layer but also the local charge configuration. Charge fluctuations become important for understanding the STS questioning the applicability of the Schrieffer Wolff transformation in the experimentally relevant parameter space. Since a density functional theory (LDA) calculation [19] predicted a substantial ferromagnetic Hund's rule coupling J H ≈ 0.35 eV between the π and the σ orbital, combining the π subsystem into a single locally projected density of states (PDOS) of non-interacting degrees of freedom appears to be an oversimplification.
Previously, single orbital Kondo and Anderson models with a graphene type pseudo-gap DOS have been studied using the NRG see also the reviews [11,14]. Miranda et al. [25] focused on a disorder-mediated Kondo effect. Ruiz-Tijerina et al. [26] concentrated on the investigation of a single orbital Anderson impurity model for different geometries in the context of a dilute ensemble of atomic impurities in graphene.
The inclusion of the ferromagnetic coupling [19] between both orbitals, the σ orbital and the zero-mode (ZM), can favor a local triplet formation in the n doped region leading to distinct local spin configurations in different parameter spaces, and consequently to different STS responses as function of the local graphene curvature that match the recent experimental findings [24] as we will show below. We have also explicitly taken into account the hybridization between the σ orbital and the locally bound π state as included in the starting point of Ref. [16]. This induces a level repulsion in the twoorbital model as function of the hybridization strength that turned out to be crucial for the development of different local charge and spin configurations. We investigate the competition between these different charge and spin states as function of the chemical potential and the hybridization caused by the graphene curvature. We are able to qualitatively explain the different experimental STS spectra obtained on different carbon vacancy locations using the same two-orbital model.
The paper is organized as follows. After giving a brief overview on the experimental motivation and the STS data in Ref. [24] in Sec. II, we summarize the literature on the electronic configuration around carbon vacancies in graphene in Sec. III A and introduce a two-orbital model including our parameterization of the local curvature. First, we focus on the charge and spin configurations of the isolated impurity in Sec. III B in order to set the stage for the detailed NRG calculation presented in Sec. IV. Sec. III C is devoted to the basics of the Kondo effect in pseudo-gap systems such as graphene. We begin the result section IV by examining each of the three relevant hybridization parameter regimes independently -Sec. IV A, IV B, and IV C -before we combine a full parameter scan of hybridization and chemical potential into a finite temperature diagram in Sec. IV E. A detailed analysis of the Kondo temperatures is presented in Sec. IV G. We discuss the impact of parameter changes within the model on the spectral functions as well as possible extensions of the model in Sec. IV H and end with a summary (Sec. V) of our findings.
II. EXPERIMENTAL MOTIVATION
In a recent STM study [24], Mao et al. have shown that carbon vacancies in graphene exhibit Kondo physics adjustable by an external gate voltage V G . They identified two different classes of vacancies clearly distinguishable by their characteristic scanning tunneling spectra (STS) as a function of V G . The gate voltage V G can be directly converted into a chemical potential µ separating the system by either p (µ < 0) or n (µ > 0) doping.
The STS of the first class of vacancies show a Kondo resonance for electron and hole doping with a vanishing T K close to charge neutrality as expected from the pseudo-gap DOS in graphene. For the second class of vacancies a Kondo peak is only visible in the hole doped regime and disappears upon approaching charge neutrality [24].
We see the same characteristic behavior in our NRG calculations where we use the hybridization strength to distinguish both scenarios. Thus, we will refer to first scenario as the 'intermediate hybridization' and the second as the 'strong hybridization' regime throughout this paper. In our calculations, we also identified a third regime, where a Kondo peak is only found for µ > 0. This so-called 'weak hybridization' regime has been experimentally observed as well [24]. Note, that we are dealing with broad crossovers between these regimes and not sharp phase boundaries. Therefore, we will do without giving absolute values for the boundaries of the different regimes.
The goal of this paper is to give a physical explanation of all three regimes using the same two-orbital impurity model and calculate the spectral function ρ(ω) for those classes by small parameter changes related to the local curvature of the graphene sheet at the location of the impurity.
A. Modeling vacancies by a two-orbital Anderson model
Modeling the electronic degrees of freedom in the vicinity of a graphene vacancy has a long history [27]. Removing a carbon atom from a 2D monolayer of graphene leaves three dangling σ orbitals on the neighboring atoms. Two of them form a new chemical bond with a doubly occupied orbital, causing a Jahn-Teller-like distortion of the local lattice. Only the third, unpaired σ orbital remains electronically active [20] (see also Fig. 1 (left)). Charge neutrality and a weakly screened local Coulomb interaction cause a free local moment to form in this chemical radical, which can undergo Kondo screening at low temperature.
There is a long-standing debate about the possibility of Kondo screening of this moment by coupling to the π conduction electrons since the π Wannier states are orthogonal to the σ orbital in a flat 2D monolayer of graphene and, therefore, do not hybridize. This situation however changes in the presence of a local curvature which is naturally induced by either suspending the sample or by supporting it on a corrugated substrate such as SiO 2 . Due to the local curvature, the neighboring π orbitals acquire a finite overlap with the localized σ orbital and start to hybridize with a hybridization strength that depends on the local curvature: the larger the curvature, the stronger the coupling.
In addition, removing a carbon atom from the lattice also influences the π band in the vicinity of the impurity. Treating this as a single-particle scattering problem [18] demonstrates that the π subsystem then comprises itinerant states, responsible for the typical graphene density of states, and one additional bound state that is weakly localized in the vicinity of the vacancy and orthogonal to the itinerant π states. This bound state is often referred to as a zero-mode since it is located at the Dirac point in the tight-binding description [18]. Furthermore, density functional theory predicts a rather significant ferromagnetic coupling between the local π state and the σ orbital of J_H ≈ 0.35 eV [19].
When setting up an effective model for the vacancy, one has to carefully review the local density of states of the effective non-interacting conduction band, particularly when hunting for the very subtle changes of the many-body wave function due to the Kondo effect.
A first guess would be to consider the local density of states (LDOS) of the π orbitals neighboring the dangling σ orbital. In this LDOS [18,27] the bound π state causes a singularity at the Dirac point. Using such an LDOS is somewhat misleading, since (i) the zero-mode stems from a δ-distribution that (ii) is associated with an interacting orbital with a significant Hund's rule coupling [19], which is neglected in the LDOS description.
Alternatively, this bound π state could be ignored by setting up a simplified single-orbital pseudo-gap Anderson model. Such a model may be sufficient to qualitatively explore the limit of strong hybridization, where only occupation fluctuations between a singly and a doubly occupied σ orbital are relevant (see below). However, it fails to capture the physics of the weak hybridization regime without major modifications and assumptions (e.g. a hybridization-dependent Coulomb interaction) which have to be added by hand. This simplified model also does not address the issue of the bound state: the resonance resides at ω = 0, yielding a singular contribution to the DOS that must be properly dealt with. Furthermore, the ferromagnetic interaction between the π bound state and the dangling σ orbital is expected to be quite strong and should still be taken into account explicitly.

FIG. 1. Left: Schematic picture of the remaining electronic states for a missing carbon atom. The free σ orbital hybridizes with both adjacent π orbitals. The resulting impurity has the characteristic triangle shape. Right: Schematic two-orbital model. The d orbital represents the free σ orbital and is coupled directly to the conduction band. The π orbital represents the vacancy-induced zero-mode bound state.
Cazalilla et al. [20] proposed the following two-orbital Anderson model for the description of a carbon vacancy in graphene, which will be the starting point of this paper. The local two-orbital Hamiltonian is given by

H_loc = Σ_σ (ǫ_d d†_dσ d_dσ + ǫ_π d†_πσ d_πσ) + U_dd n_d↑ n_d↓ + U_ππ n_π↑ n_π↓ + U_dπ n_d n_π − J_H S_d · S_π ,   (2)

where ǫ_d and ǫ_π denote the single-particle energies of the orbitals, with the corresponding creation operators d†_π(d)σ creating an electron in the π (respectively d) orbital with spin σ (we use the subscript d for the σ orbital to avoid interference with the spin index). U_ππ and U_dd label the intra-orbital Coulomb repulsions and U_dπ the inter-orbital Coulomb interaction (see Fig. 1 (right)). In order to ensure rotational invariance in spin space [28], the Hund's rule coupling J_H has to be augmented with a pair-hopping term with the same coupling strength.

FIG. 2. The two hybridization functions Γ(ω) [29,30]. We performed a summation over the first Brillouin zone and evaluated the well-known single-particle dispersion Eq. (4). The dashed curve shows a simplified approximate Γ(ω). The frequency is shifted in both cases such that the Dirac point lies at ω = 0 for µ = 0.
Since we have included the bound state of the π system in the effective impurity, we treat the π continuum by a simple tight-binding model for a bipartite honeycomb lattice,

H_π-band = −t Σ_{<i,j>,σ} (a†_iσ b_jσ + h.c.) − t′ Σ_{≪i,j≫,σ} (a†_iσ a_jσ + b†_iσ b_jσ + h.c.) ,   (3)

where ≪ i, j ≫ denotes the summation over next-nearest neighbors [3]. The operators a(†) and b(†) destroy (create) an electron on the respective sublattice A or B. Fitting the hopping integrals to DFT calculations for the band structure yields the values t ≈ 2.91 eV and t′ ≈ −0.16 eV [30], where the inclusion of the next-nearest neighbor hopping t′ leads to an asymmetric DOS and a shift of the Dirac point. Throughout the paper, µ = 0 refers to charge neutrality of the system, where the Dirac point coincides with the Fermi energy. In STM experiments µ is taken as the zero of energy, and its variation with the gate-induced doping is monitored by the motion of the Dirac point relative to µ. Likewise, the energy spectrum is symmetrically discretized around the chemical potential in the NRG. Therefore, the spectral functions are also calculated with respect to µ as the zero of energy.
The tight-binding Hamiltonian (3) can be easily diagonalized, which yields the two energy bands (cf. [2,3])

ǫ_τ(k) = τ t √(3 + f(k)) − t′ f(k) ,   f(k) = 2 cos(√3 k_y a) + 4 cos(√3 k_y a / 2) cos(3 k_x a / 2) ,   (4)

where τ = ± denotes the band index and a is the carbon-carbon distance. The hybridization between the σ orbital and the band takes the standard bilinear form, and the influence of the π band on the impurity dynamics can be fully encoded into a complex hybridization function. This defines a local Wannier orbital, created by c†_0σ, and an effective hybridization strength V. Taking the imaginary part yields the coupling function Γ(ω) ∝ Σ_{k,τ} δ(ω − ǫ_τ(k)), where ǫ_τ(k) is the one-particle tight-binding solution, Eq. (4). Since {c†_0σ, c_0σ′} = δ_σσ′, V is fixed by a normalization condition and is related to the effective hybridization strength Γ_0 by an integral over the band, where we choose D = 8 eV for the bandwidth of graphene.
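As a cross-check of Eq. (4), the closed-form dispersion can be evaluated numerically. The short sketch below uses the t, t′ values quoted above and an assumed carbon-carbon distance a = 1.42 Å; it reproduces the shift of the Dirac point to 3t′ caused by the next-nearest neighbor hopping.

import numpy as np

# Evaluation of the two tight-binding pi bands of graphene with nearest (t) and
# next-nearest (t') neighbor hopping in the standard closed form; the carbon-carbon
# distance a is an assumed value, t and t' follow the text.
t, tp = 2.91, -0.16     # eV
a = 1.42e-10            # m, carbon-carbon distance (assumption)

def f(kx, ky):
    return (2.0 * np.cos(np.sqrt(3.0) * ky * a)
            + 4.0 * np.cos(0.5 * np.sqrt(3.0) * ky * a) * np.cos(1.5 * kx * a))

def bands(kx, ky):
    """Return (E_minus, E_plus) in eV for wave vector (kx, ky)."""
    root = np.sqrt(np.clip(3.0 + f(kx, ky), 0.0, None))   # clip guards tiny negative rounding
    return -t * root - tp * f(kx, ky), +t * root - tp * f(kx, ky)

print("Gamma point:", bands(0.0, 0.0))
# Near a Dirac point K the two bands touch at 3*t' ~ -0.48 eV (Dirac-point shift from t')
print("K point:    ", bands(2*np.pi/(3*a), 2*np.pi/(3*np.sqrt(3)*a)))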
In order to capture the essence of the pseudo-gap density of states, we also used a parameterization of Γ(ω) by the analytical expression Eq. (12), in which Γ(ω) rises linearly with |ω| close to the Dirac point; D_eff = 3 eV pins the van Hove singularities in the DOS, while D = 8 eV determines the band edges, beyond which Γ(ω) vanishes.
The calculations presented below are performed for both hybridization functions Γ(ω), i.e. (i) a direct k-space summation of the single-particle dispersion ǫ_τ(k) and (ii) the approximation (12). The different Γ(ω) are depicted in Fig. 2.
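The following sketch illustrates how such hybridization functions can be generated in practice: the "realistic" Γ(ω) as a histogram of the band energies over one reciprocal-space period, and a simple pseudo-gap parameterization with D_eff = 3 eV and band edges at D = 8 eV. Since the explicit form of Eq. (12) and the normalization convention relating Γ(ω) to Γ_0 are not reproduced above, the piecewise shape beyond D_eff and the normalization used below are assumptions for illustration only.

import numpy as np

# Sketch: local hybridization Gamma(omega) proportional to the pi-band density of states,
# obtained by histogramming the t, t' band energies over one reciprocal-space period, and
# a simple pseudo-gap parameterization (assumed: linear up to D_eff, constant up to D).
# Normalization convention (band average equals Gamma_0) is an assumption of this sketch.
t, tp = 2.91, -0.16            # eV
GAMMA0, D, D_EFF = 1.0, 8.0, 3.0

def f(kx, ky):                  # carbon-carbon distance set to 1; only k*a enters
    return 2*np.cos(np.sqrt(3)*ky) + 4*np.cos(0.5*np.sqrt(3)*ky)*np.cos(1.5*kx)

kx = np.linspace(0.0, 4*np.pi/3, 800, endpoint=False)           # one period in kx
ky = np.linspace(0.0, 4*np.pi/np.sqrt(3), 800, endpoint=False)  # one period in ky
KX, KY = np.meshgrid(kx, ky)
F = f(KX, KY)
root = np.sqrt(np.clip(3.0 + F, 0.0, None))
energies = np.concatenate(((+t*root - tp*F).ravel(), (-t*root - tp*F).ravel()))

# energies outside (-D, D) are simply discarded in this sketch
hist, edges = np.histogram(energies, bins=400, range=(-D, D), density=True)
omega = 0.5*(edges[:-1] + edges[1:])
gamma_band = GAMMA0 * hist / hist.mean()                         # "realistic" Gamma(omega)

gamma_approx = np.where(np.abs(omega) < D_EFF,
                        GAMMA0*np.abs(omega)/D_EFF,              # linear pseudo-gap region
                        GAMMA0)                                  # assumed constant up to D

i = np.argmin(np.abs(omega - 1.0))
print(f"Gamma_band(1 eV) ~ {gamma_band[i]:.2f} Gamma_0, Gamma_approx(1 eV) = {gamma_approx[i]:.2f} Gamma_0")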
Γ 0 is used as a free parameter to adjust the vacancy specific hybridization introduced above. Thus, it serves as a direct parameterization of the local curvature in the vicinity of the impurity.
B. Local scenarios and choice of parameters
In order to set the stage for the full solution of the correlated two-orbital impurity problem, we will discuss the different local configurations of the model which will become relevant for the explanation of the three different regimes found in the experiments.
In this section, we restrict ourselves to H loc defined in Eq. (2). At charge neutrality, the d orbital and the π bound state should be singly occupied. In the p doped region, the d orbital remains half-filled while the π bound state is unoccupied and thus the total local occupation is typically close to N imp = 1 for µ ≪ 0.
Three distinct local configurations are of particular interest for positive µ: A second electron can populate either the d or π orbital depending on the local parameters. In addition, a third electron will occupy the π bound state in some parameter regimes if the additional energy gain µ is larger than the differences in the impurity energies. This results in a doublet state where the π orbital is fully occupied and forms a local singlet whereas the d electron's spin is providing a local moment for a spin-1/2 Kondo effect.
First, consider the case N_imp = 2. The total energy E_S of the doubly occupied d orbital (singlet state) competes with the energy E_T of the triplet state in which each orbital is singly occupied. A local singlet ground state formed by two electrons on the d orbital prohibits the Kondo effect once conduction electrons are coupled to the orbitals. The local triplet state, on the other hand, still leaves the possibility for the underscreened Kondo regime with a twofold degenerate ground state from the π orbital spin. For a total orbital occupation of three electrons, the energy of the doublet state is denoted E_D. Here, the spin of the d electron may also be screened by the conduction electrons, showing Kondo physics. Since the state |D⟩ contains three electrons, its occupation in a decoupled impurity is governed by ∆E = E_D − 3µ. This state becomes favored over the singlet and the triplet configurations with N_imp = 2 as µ increases and can push the system close to a charge instability.
The local scenario depends sensitively on the parameters of the local two-orbital Hamiltonian Eq. (2). The exchange interaction J_H is crucial in order to stabilize the local triplet state. Ab-initio calculations predict a substantial ferromagnetic Hund's rule coupling of J_H ≈ 0.35 eV [19] between the orthogonal π state and the σ orbital. We propose that the difference between the two classes of impurities detected experimentally and labeled as the intermediate and strong hybridization regimes is related to a hybridization-controlled level crossing between the local singlet state with energy E_S and the local triplet state with energy E_T.
DFT calculations [31,32] and recent experimental studies [24] suggest that U_dd = 2 eV [33]. We will adopt this value but address the consequences of a possibly smaller U in Sec. IV H 1.
For the remaining Coulomb matrix elements, we comply with the parameters stated in the supplementary material of Ref. [20] where U dπ ≈ 0.1 eV. In order to stabilize the triplet state in the strong hybridization regime, we slightly increase the onsite Coulomb repulsion on the π orbital, U ππ = 0.01 eV.
Our goal is to switch between the different scenarios by just a small change in parameters that are solely caused by the variations in the local curvature of graphene in the vicinity of the different impurities. We chose the impurity single-particle energies such that the system is located close to the local crossover between singlet and triplet state outlined above.
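A small worked example may help to make this singlet-triplet-doublet competition concrete. The Coulomb parameters below are the ones quoted above; the single-particle energies and the simple energy expressions (full Hund gain J_H in the triplet, none in the doublet) are assumptions of this sketch and not the authors' exact expressions for E_S, E_T, and E_D.

# Illustrative comparison of the three local configurations discussed above.
# Coulomb parameters follow the text; EPS_D, EPS_PI and the energy formulas below
# are assumptions of this sketch (conventions for the Hund term vary).
U_DD, U_DPI, U_PIPI, J_H = 2.0, 0.1, 0.01, 0.35    # eV
EPS_D, EPS_PI = -1.35, 0.30                         # eV, assumed example values

E_singlet = 2*EPS_D + U_DD                          # two electrons on the d orbital
E_triplet = EPS_D + EPS_PI + U_DPI - J_H            # one electron per orbital, spins aligned
E_doublet = EPS_D + 2*EPS_PI + U_PIPI + 2*U_DPI     # d singly + pi doubly occupied (3 electrons)

print(f"E_S = {E_singlet:+.3f} eV, E_T = {E_triplet:+.3f} eV, E_D = {E_doublet:+.3f} eV")
# Within these assumed conventions, the doublet is favored over the N_imp = 2 states once
# the extra chemical-potential gain mu exceeds E_doublet - min(E_singlet, E_triplet).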
Without a microscopic ab-initio theory for each impurity configuration at hand, we assume that in the experiment only two parameters, Γ_0 and µ, are varied: the former by the vacancy location, i.e. the local curvature, the latter by the external control parameter V_G.
The effect of the hybridization between the π orbitals on the neighboring carbon sites [34] and the local d orbital is twofold: (i) It couples the d orbital to the π band continuum and (ii) generates a hopping term between the bound π state and the d orbital.
A single-particle Green function approach [19] has been employed for the tight-binding model H_π-band with one carbon vacancy to determine the fraction z of the local π orbital contributing to the bound π state. The single-particle wave function of this bound state decays only as ∝ r^−1. Due to the extended nature of the wave function, the z factor is relatively small and has been calculated as z ≈ 0.07 [19]. Using the hybridization parameter V and diagonalizing the impurity single-particle matrix with the appropriate transformation matrix U determines the new eigenenergies ǫ′_d and ǫ′_π of the effective orbitals. Combining an estimate for ǫ_d and ǫ_π with this hybridization-dependent level repulsion induced by the local curvature reduces the parameter space to Γ_0 and µ, as desired. We have also considered a simplified linear shift that interpolates the level energies linearly in Γ_0 between two reference hybridizations. Here, Γ_0^{i,s} stands for the hybridization at which the system is in the intermediate or strong hybridization regime, respectively. The disadvantage is obvious: there is no microscopic justification, and four reference parameters have to be determined for this interpolation to arbitrary Γ_0.
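A minimal sketch of this curvature-induced level repulsion is given below: the 2×2 single-particle matrix of the d orbital and the bound π state, coupled by an off-diagonal element of order √z V, is diagonalized for a few hybridization strengths. The bare level positions and the identification of the coupling with √z V are assumptions made purely for illustration.

import numpy as np

# Sketch of the hybridization-induced level repulsion between the d (sigma) orbital and
# the bound pi state. Bare energies and the coupling sqrt(z)*V are assumed values.
Z = 0.07

def shifted_levels(eps_d, eps_pi, v_hyb):
    """Eigenenergies (eps_d', eps_pi') after mixing by sqrt(z)*v_hyb (all in eV)."""
    h = np.array([[eps_d, np.sqrt(Z) * v_hyb],
                  [np.sqrt(Z) * v_hyb, eps_pi]])
    return np.sort(np.linalg.eigvalsh(h))

for v in (0.5, 1.0, 2.0):
    lo, hi = shifted_levels(-1.35, 0.30, v)
    print(f"V = {v:.1f} eV -> eps_d' = {lo:+.3f} eV, eps_pi' = {hi:+.3f} eV")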
In the following, we omit the prime and implicitly use the shifted Γ_0-dependent orbital energies. We also refer to the extended bound π state simply as the π orbital, since the physical π orbitals have been decomposed into their contributions to Γ(ω) and to the bound π state. Since the Coulomb matrix elements are not determined by an ab-initio approach, we assume that they remain constant as a function of Γ_0 and are always defined with respect to the final orbital basis to keep the modeling simple.
In addition to the previous considerations, the level positions are subject to a dynamical, Γ 0 -dependent shift stemming from the interaction with the conduction band continuum that is explicitly included in our numerical renormalization group approach. As a consequence, the local estimations for E T and E S can only be regarded as an approximate guideline for the level positions and the charge instability.
For p doping (µ < 0), the lower-lying d orbital is singly occupied in all cases, eventually resulting in Kondo screening. The exact value of T_K strongly depends on Γ_0 and µ. Fig. 3 summarizes the different situations for positive µ. For small Γ_0, the double occupation of the bound π state becomes favorable, leaving a local moment s = 1/2 in an N_imp = 3 ground state that can undergo Kondo screening at large enough µ. Increasing Γ_0 will favor the spin-triplet formation in the local moment regime with s = 1, which can exhibit an underscreened Kondo regime at very low temperatures. Increasing Γ_0 further reduces the single-particle energy ǫ_d enough that the doubly occupied d orbital singlet state is locally favored, and the possibility of a Kondo effect is excluded.

C. Kondo effect in pseudo-gap systems

The Kondo problem has been a major point of interest in solid-state theory for multiple decades. It is a prime example of a correlated many-body problem naturally arising in metallic hosts containing a small concentration of magnetic impurities. At sufficiently low temperatures T < T_K, the antiferromagnetic interaction between the impurity spin and the spins of the host electrons gives rise to the formation of a singlet state which screens the magnetic moment of the impurity. The standard Kondo Hamiltonian [21,35] takes the form

H = H_band + J_0 S · s_0 + V_0 Σ_σ c†_0σ c_0σ ,

where J_0 is the Kondo coupling between the impurity spin S and the local conduction electron spin density s_0, and V_0 accounts for an additional potential scattering term arising from particle-hole symmetry breaking in the local moment (LM) regime. For a constant metallic DOS ρ(ω) ≡ ρ_0, the Kondo temperature shows an exponential dependence on J_0 [21,35], T_K ∝ exp[−1/(ρ_0 J_0)]. For a pseudo-gap density of states, i.e. ρ(ε) ∝ |ε|^r, Withoff and Fradkin were the first to point out the existence of a critical coupling [12] using a perturbative renormalization group argument. This model exhibits a wide variety of different phases for r > 0. These phases are characterized by different fixed points (FP) whose properties and occurrence depend on the bath exponent r as well as on particle-hole symmetry or the absence of it. For 0 < r < 1/2, there exists a critical coupling strength Γ_c [13,36-38] governing the transition between a LM phase for weak coupling and a strong-coupling (SC) phase for large coupling to the metallic host.
As discussed in Sec. III A and depicted in Fig. 2, the case r = 1 is relevant for graphene. In the following, we briefly summarize the results of the literature [10,11,13]. For µ = 0 and particle-hole symmetry the system will always flow to the stable LM fixed point, independent of the Kondo coupling J_0. For µ ≠ 0, Kondo screening will arise at any coupling J_0 due to the finite density of states, and we expect an exponentially suppressed Kondo scale ln(T_K) ∝ −1/|µ|^r [10]. This picture changes in the case of broken particle-hole symmetry. At µ = 0, screening is possible above a critical coupling J_c and for strong enough asymmetry V_0 > V_c. Thus, there is an unstable quantum critical point separating an unscreened LM and a screened ASC phase. For finite chemical potential and near J_c the system will eventually arrive at a strong-coupling fixed point, but in a different way for particle or hole doping [10]. As a result, close to the quantum critical point (QCP) but for p doping, the Kondo temperature scales as T_K = κ|µ|, while for µ > 0 it is proportional to |µ|^x, where x ≈ 2.6 and κ is a prefactor of order O(1) that depends only on the bath exponent r (see Fig. 4 in Ref. [10]).
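The asymmetry of these two scaling laws can be made explicit with a trivial numerical comparison; the prefactors are set to unity here since only the functional dependence on |µ| is of interest.

import numpy as np

# Illustration of the asymmetric T_K scaling near the pseudo-gap QCP quoted above (r = 1):
# T_K ~ kappa*|mu| on the hole side and T_K ~ |mu|^x with x ~ 2.6 on the electron side.
# kappa and the prefactor on the electron side are set to 1 purely for illustration.
mu = np.linspace(10e-3, 100e-3, 10)          # |mu| in eV
tk_hole = 1.0 * mu                           # p doping: linear in |mu|
tk_electron = 1.0 * mu**2.6                  # n doping: power law with x ~ 2.6

for m, th, te in zip(mu, tk_hole, tk_electron):
    print(f"|mu| = {m*1e3:5.1f} meV   T_K(p) ~ {th:.3e}   T_K(n) ~ {te:.3e}  (arb. units)")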
D. NRG approach to quantum impurity systems
In general, we are interested in the dynamics and the thermodynamics of an interacting quantum-mechanical many-body system with an approximately continuous quasi-particle excitation spectrum. The importance of a wide range of different energy scales, from several eV on the scale of the bandwidth to arbitrarily small excitations between degenerate states, gives rise to infrared divergences in the perturbation theory for the antiferromagnetic Kondo problem or Anderson impurity models. One successful approach is the renormalization group (RG), which allows for a non-perturbative treatment of all energy scales. The RG forms the foundation of Wilson's Numerical Renormalization Group (NRG) [22] approach, which was first applied to solve the Kondo problem [21,39,40].
The Hamiltonian of such a quantum impurity problem takes the three-part form of Eq. (1), where the effect of the impurity (band) is completely encoded in the H_loc (H_band) term. The bath is divided into intervals on a logarithmic mesh around the chemical potential, characterized by the discretization parameter Λ, where the limit Λ → 1 analytically recovers the original problem. One then proceeds by mapping this star geometry via a Householder transformation onto a half-infinite tight-binding chain, the so-called Wilson chain. The hopping parameters between neighboring sites n and n + 1 decay exponentially, t_n ∝ Λ^−n/2, owing to the logarithmic discretization. The problem is then solved in an iterative fashion, where the starting point is the easily diagonalizable bare impurity devoid of any bath degrees of freedom. In each iteration one additional site of the Wilson chain is included, the new eigenvalue problem is solved numerically, and high-energy excitations are discarded in order to control the otherwise exponential growth of the many-body Fock space. Each iteration provides access to a smaller temperature scale due to the exponentially decreasing hopping parameters. The iterative procedure is then continued until the desired temperature is reached. In the end, the set of all eigenstates and eigenenergies makes up an approximate solution to the original many-body spectrum down to the final temperature.
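The discretization and chain mapping can be illustrated with a few lines of code. The sketch below logarithmically discretizes a featureless flat hybridization on [−1, 1], tridiagonalizes the resulting star Hamiltonian by a Lanczos recursion, and verifies that the chain hoppings decay as t_n ∝ Λ^−n/2. It deliberately omits z-averaging, the pseudo-gap shape of Γ(ω), and the interacting impurity, and it is not the production NRG code used for the results below.

import numpy as np

# Minimal sketch of the NRG pre-processing: logarithmic discretization of a flat band
# on [-1, 1] followed by Lanczos tridiagonalization of the star Hamiltonian, yielding
# Wilson-chain hoppings t_n ~ Lambda^(-n/2).
LAMBDA, N_INTERVALS = 1.81, 30

# one representative level per logarithmic interval on each side of the Fermi energy
xi, gamma = [], []
for m in range(N_INTERVALS):
    lo, hi = LAMBDA**-(m + 1), LAMBDA**-m
    for sign in (+1, -1):
        xi.append(sign * 0.5 * (lo + hi))        # interval-averaged level position
        gamma.append(np.sqrt(hi - lo))           # coupling^2 = weight of the interval
xi, gamma = np.array(xi), np.array(gamma)

# Lanczos tridiagonalization of H_star = diag(xi), started from the normalized coupling vector
v_prev = np.zeros_like(xi)
v = gamma / np.linalg.norm(gamma)
t_hop, beta_prev = [], 0.0
for n in range(15):
    w = xi * v - beta_prev * v_prev
    alpha = v @ w
    w -= alpha * v
    beta = np.linalg.norm(w)
    t_hop.append(beta)
    v_prev, v, beta_prev = v, w / beta, beta

for n, t_n in enumerate(t_hop[:10]):
    # the rescaled column approaches a constant, confirming t_n ~ Lambda^(-n/2)
    print(f"n = {n:2d}   t_n = {t_n:.4e}   t_n * Lambda^(n/2) = {t_n * LAMBDA**(n/2):.4f}")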
The NRG algorithm is not limited to equilibrium calculations and has been successfully extended to nonequilibrium problems [41,42]. The identification of a complete basis set is the foundation for a sum-rule conserving formulation of the impurity Green's function [43,44] and eventually for the present non-equilibrium extensions. Equilibrium NRG results provide almost exact agreement with analytical results where available and often serve as a benchmark for other methods. For a more in-depth examination we refer to state-of-the-art reviews such as Ref. [22].
IV. RESULTS
This section starts by presenting our results for the three different generic scenarios characterized by Γ_0 before we compile everything into a finite-temperature as well as a T → 0 regime diagram. These diagrams will be presented in Sec. IV E. Fig. 3, which summarizes the local configurations for positive chemical potential and three different generic values of Γ_0, serves as a road map. The scenarios represent the different classes of vacancies identified by the STM experiments [24].
We used a discretization parameter Λ = 1.81 and kept N s = 2000 states after each iteration in all our NRG calculations. The calculations were done for finite temperature T = 4.2 K unless stated otherwise to be comparable to experimental results.
We employed three different approximations for Γ(ω) and the single-particle orbital energies: (i) the approximate Γ(ω) stated in Eq. (12) and a simple linear interpolation, (ii) the realistic Γ(ω) generated from the t, t ′ dispersion and a linear interpolation of ε d/π , and (iii) the realistic Γ(ω) as well as a re-diagonalization based on the z factor for the bound π state. In all cases, the single particle energies ǫ d and ǫ π are functions of the selected Γ 0 .
The parameters used in the NRG calculations are mainly extracted from different sources in the literature, as discussed in Sec. III B and as summarized in Table I. The question of whether different sets of parameters, especially smaller values of U_dd, result in similar findings is addressed in Sec. IV H 1. In order to reach the desired temperature, we choose β = 1.225, and the number of NRG iterations is given by N = 34.
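As a consistency check of these numbers, the Wilson temperature scale reached after N iterations can be estimated as T_N ≈ D Λ^−(N−1)/2 / β̄; the exact prefactor convention differs between NRG implementations, so the formula below is a rough sketch. With the values quoted above it indeed lands close to the experimental 4.2 K.

# Rough consistency check of the quoted NRG parameters (prefactor convention is assumed).
K_PER_EV = 11604.5
LAMBDA, BETA_BAR, N_ITER, D_BAND = 1.81, 1.225, 34, 8.0

t_n = D_BAND * LAMBDA**(-(N_ITER - 1) / 2) / BETA_BAR      # temperature scale in eV
print(f"T_N ~ {t_n * K_PER_EV:.1f} K")                      # comes out close to 4.2 K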
If not stated otherwise, the spectral functions are for the d orbital only, i.e. ρ d (ω). The effect of the π contribution to the spectrum is discussed in Sec. IV D.
A. Weak hybridization regime

Fig. 4 shows the calculated spectral functions ρ_d(ω) [43,44] for the d orbital for a weak hybridization (Γ_0 ≈ 1 eV). The results are depicted in Fig. 4(a) for µ < 0 and in Fig. 4(b) for µ ≥ 0, and cover a range from µ = −100 meV to µ = 100 meV. For clarity, the spectral functions are shifted by a constant for each µ and normalized by ρ_0, the maximum of all spectral functions. While ρ_d(ω) remains featureless in the p doped regime and reflects the pseudo-gap DOS of the conduction band, Kondo resonances are visible in the spectra for µ > 60 meV.
In order to relate the NRG results to the scenarios presented in Fig. 3, we plot the local orbital occupations as a function of µ in Fig. 5(a). In the weak hybridization regime, the lower d orbital remains half-filled and nearly independent of µ, while the occupation of the π orbital depends on the chemical potential. For µ < 0 the π orbital is also half-filled due to the energy gain from the Hund's rule coupling J_H, forming a local triplet. Increasing µ to µ > 50 meV adds another electron to the π orbital, since the U_ππ interaction is weak, and a local moment with s = 1/2 remains on the d orbital. Therefore, we identify an underscreened (undercompensated [45]) Kondo problem for µ < 0, while we find a conventional s = 1/2 pseudo-gap Kondo problem for µ > 0 since n_π = 2. The observation of a lower T_K for an underscreened s = 1 Kondo problem in recent NRG calculations (see Ref. [46]) is compatible with our findings, where T_K ≪ T = 4.2 K for µ < 0.
Additionally, the influence of the chemical potential on the Kondo temperature is asymmetric with respect to the sign change of µ even for a fixed s = 1/2 Kondo problem [10,11].
B. Intermediate hybridization regime
Increasing Γ_0 moves the system into the intermediate hybridization regime. Here, a Kondo peak is found in ρ_d(ω) for both strong p doping and strong n doping, as clearly depicted in Fig. 6. The spectral functions are normalized as before for clarity. For small doping, the Kondo effect breaks down since Γ(ω) vanishes linearly at ω = 0, suppressing screening close to the Dirac point. The Kondo temperature as a function of µ is extracted using a Goldhaber-Gordon fit [47] of the zero-bias conductance G(T), with exponent s = 0.22 and G_0 = G(T = 0). T_K(µ) shows an exponential dependence on the chemical potential close to |µ| → 0, albeit with different slopes for p and n doping (not shown).
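For reference, a sketch of such a Goldhaber-Gordon extraction is given below. It assumes the standard empirical form G(T) = G_0 [1 + (2^(1/s) − 1)(T/T_K)²]^(−s) with s = 0.22; the conductance data in the example are synthetic placeholders, not NRG results.

import numpy as np
from scipy.optimize import curve_fit

# Sketch of the Goldhaber-Gordon extraction of T_K from the zero-bias conductance G(T).
S_EXP = 0.22

def g_of_t(temperature, g0, t_k):
    return g0 * (1.0 + (2.0**(1.0/S_EXP) - 1.0) * (temperature/t_k)**2) ** (-S_EXP)

# synthetic "measured" conductance generated with T_K = 60 K plus a little noise
t_data = np.logspace(0, 2.5, 25)                        # 1 K ... ~316 K
rng = np.random.default_rng(1)
g_data = g_of_t(t_data, 1.0, 60.0) * (1 + 0.02*rng.standard_normal(t_data.size))

popt, _ = curve_fit(g_of_t, t_data, g_data, p0=(1.0, 30.0))
print(f"fitted G0 = {popt[0]:.3f}, T_K = {popt[1]:.1f} K")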
We have chosen the impurity parameters in the intermediate hybridization regime in such a way that the occupation of the π orbital changes around µ = 0 [19], as depicted in Fig. 5(b). The d orbital remains roughly half-filled as a function of µ, while the occupancy of the π orbital changes rapidly from zero to single occupation. There is only a very small difference between the results obtained from the approximate Γ(ω), Eq. (12), and the realistic hybridization function, emphasizing the generic character of the power-law approximation of Γ(ω). Since the local occupancy changes from N_imp = 1 to N_imp = 2, the underlying Kondo effect is fundamentally different for both types of doping. For µ < 0 the local d spin moment is screened by the bath, forming a conventional Kondo singlet. For positive µ, however, the two spins form a local triplet state due to the strong ferromagnetic Hund's rule coupling J_H. Only a fraction of the resulting s = 1 moment can be screened by the bath, resulting in an underscreened Kondo effect and a dangling s = 1/2 moment [45].
C. Strong hybridization regime
When Γ 0 is increased further compared to the intermediate regime, eventually the strong hybridization regime is reached.
For µ < 0, it essentially resembles the intermediate regime. The single occupied d orbital provides an unpaired spin that is screened to a Kondo singlet, and ρ d (ω) shows a Kondo resonance (Fig. 7(a)). The π orbital, however, remains mostly empty for all chemical potentials considered here and the physics is entirely governed by the d orbital; this regime could also be described by a single orbital Anderson model [24].
Starting around µ = 0 and increasing µ, the d orbital gets slightly filled such that the total occupation approaches N imp ≈ 1.2 -see also Fig. 5(c). Therefore, the system approaches the intermediate valence (IV) FP governed by local charge fluctuations [40,48]. A resonance develops in ρ d (ω) close to the DP that moves towards lower energies as µ increases as shown in Fig. 7(b). This coincides with an increase of the d orbital occupation. The Kondo resonance for µ < 0 evolves continuously to a charge fluctuation resonance close to µ = 0 since the local N imp = 1 and N imp = 2 charge configurations become energetically degenerate for µ ≈ 0. While a doubly occupied d orbital is predicted in a purely local picture, the substantial hybridization favors the itinerancy of the impurity electrons. The local occupation of the d orbital is shown in Fig. 5(c) and approaches n dσ ≈ 2/3 indicating the IV FP.
D. Zero-mode peak
We only dealt with the spectral functions for the d orbital in the previous sections. However, the tunneling STS current measured in the experiments [24] results from a superposition of three parts: the d orbital, the π orbital, and the substrate. The d orbital is responsible for the Kondo physics, but the π orbital contributes to the total experimental dI/dV curves in the form of the zero-mode. The orbital occupations in Fig. 5 give a rough estimate of the position of the zero-mode peak relative to the Fermi energy in the different regimes: a vanishing occupation implies a zero-mode that lies above E_F. We used the same parameters as in the previous sections. The spectral functions depicted in Fig. 8 are calculated from the Lehmann representation of the NRG data [43,44] using a logarithmic Gaussian broadening b (see the NRG review [22] for details).
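A sketch of this broadening step is given below; the logarithmic-Gaussian kernel used here is one common variant from the NRG literature, and the poles and weights are synthetic placeholders rather than actual NRG excitation data.

import numpy as np

# Sketch: broaden discrete (Lehmann) delta peaks into a continuous spectral function
# using a logarithmic-Gaussian kernel with broadening parameter b (one common variant).
def log_gauss_broaden(omega, poles, weights, b):
    """Broaden delta peaks at `poles` with weights `weights` onto the grid `omega`."""
    rho = np.zeros_like(omega)
    for w_n, a_n in zip(poles, weights):
        mask = (np.sign(omega) == np.sign(w_n)) & (omega != 0.0)
        x = np.log(np.abs(omega[mask] / w_n))
        rho[mask] += (a_n * np.exp(-b*b/4.0) / (b * np.abs(w_n) * np.sqrt(np.pi))
                      * np.exp(-(x / b)**2))
    return rho

omega = np.linspace(-0.5, 0.5, 2001)
poles = np.array([0.35, 0.35*1.81**-1, 0.35*1.81**-2, -0.2])   # eV, placeholder excitations
weights = np.array([0.4, 0.3, 0.2, 0.1])
for b in (0.3, 0.6, 1.0):
    rho = log_gauss_broaden(omega, poles, weights, b)
    print(f"b = {b:.1f}: peak height ~ {rho.max():.2f}")       # larger b -> broader, lower peaks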
The π-spectral functions in the intermediate hybridization regime, Γ 0 = 1.21 eV, are shown for varying the broadening parameter b [22] and µ = −60 meV in Fig. 8(a). This demonstrates the very sharp nature of the ZM excitation whose width is artificially broadened by b. In addition, a Hund's rule mediated excitation at slightly higher energies ω ≈ 0.35 eV emerges for small b which results from the exchange coupling between the d orbital and the π orbital.
The zero-mode peak exhibits a simple µ dependence as shown in Fig. 8(b): A higher chemical potential shifts the peak towards ω = 0. This is the exact same behavior observed in the experiments (Fig. 2(a) in [24]).
Since we are mainly interested in the Kondo physics, which is only contained in the d orbital spectral function, we will not discuss the π spectral function in the following. After presenting the d orbital spectral functions for three different characteristic values of Γ_0, representing the three different classes of vacancies found in the STM experiments, we combine the results into a more elaborate scan of Γ_0 from weak to strong hybridization in a single regime diagram, where µ and Γ_0 are the only free parameters.
We used the realistic hybridization Γ(ω) calculated from the t, t′ DOS and the single-particle orbital energies calculated for a given Γ_0 via Eq. (16) in all NRG calculations in this section. For each data point (Γ_0, µ), we obtained the impurity entropy S_imp [22] and total orbital occupation N_loc from an individual NRG run. The calculations were performed at a fixed finite temperature T = 4.2 K to make contact with the STM experiments. The diagram depicted in Fig. 9 shows the color-coded product of the residual entropy and the total impurity occupation.
In Fig. 9 three different areas are separated by two naturally appearing lines that define a sharp, but temperature broadened crossover region characterized by N loc S imp = const. In the left part of the figure, N loc rapidly drops from three to two, while simultaneously the entropy increases from ln(2) to ln (3). The second line occurs where N loc = 2 → 1 and S imp /k B = ln(3) → ln (2).
We identify seven different regimes in Fig. 9. Increasing Γ 0 in the p doped region (µ < 0), we found three local moment and one strong coupling regime: s = 1/2 (LM-a), s = 1 (LM-b), s = 1/2 (LM-c), and s = 1/2 (SC-a). For positive µ, we find two strong coupling FPs (s = 1 SC-b and s = 1/2 SC-c) as well as a frozen impurity regime (FI). Note that the LM regimes mentioned above also extend partially to µ > 0 due to the vanishing Γ(ω) at the Dirac point.
For the smallest hybridization Γ 0 in the range of 0.5eV ≤ Γ 0 < 0.7eV, the π orbital is doubly occupied for positive µ, and we find a half-filled d orbital. This regime is labeled as the LM-a since the entropy tends towards S imp /k B ≈ ln(2) and the square effective magnetic moment [21] is given by µ 2 eff ≈ 0.25. This regime extends into p doping and crosses over into the SC-c regime upon simultaneous increase of µ and Γ 0 .
Increasing Γ 0 at a fixed µ enhances the level splitting between the two local orbitals: the doubly occupied π orbital becomes less favorable and the system crosses over to the s = 1 (LM-b) regime. The thick red line marks the crossover from a local occupation of N imp = 3 to N imp = 2 which also distinguishes LM-a and LM-b regimes. Thus, the entropy in the LM-b regime tends to S imp /k B ≈ ln(3) while the effective moment takes the value µ 2 eff ≈ 0.66. The ground state is a triplet state formed by both electrons which are coupled due to the strong ferromagnetic interaction J H .
Increasing the hybridization further delocalizes another impurity electron, and the system crosses over to the LM-c region where only a single d orbital spin is locally present. The spectral function already shows the onset of a Kondo peak (see Fig. 6) even though the entropy and the effective moment still signify a LM FP: S_imp/k_B ≈ ln(2) and µ²_eff ≈ 0.25. The s = 1/2 Kondo temperature rises significantly above 4.2 K at very large Γ_0 and µ ≪ 0, and the crossover from LM-c to the SC-a regime is observed. The entropy as well as the square effective magnetic moment µ²_eff tends to zero due to the formation of a Kondo singlet state in this regime.

Now we discuss the diagram for µ > 0, starting from the upper right area of Fig. 9. For strong enough hybridization and large µ, a doubly occupied d orbital and an empty π orbital are locally favored. Although that regime is labeled as the frozen impurity (FI) regime, it is more closely related to the IV FP since N_imp = 1.2 − 1.3. Interestingly enough, the FI ground state calculated by the NRG corresponds to a doubly occupied state, while the impurity occupation is closer to N_imp ≈ 1.3. Since the ground state is a local singlet state, the impurity decouples effectively from the conduction band. The coupling to the conduction band, however, leads to a gain of kinetic energy and partially delocalizes the d orbital electrons. The spectral function shows a pronounced excitation peak which shifts linearly with the chemical potential, as depicted in Fig. 7(b).
Upon the reduction of Γ_0, the local spin triplet state becomes energetically favorable. As in the p doped region, we see a broad crossover regime where entropy and magnetic moment still take their LM fixed point values although an underdeveloped Kondo peak is already visible (see Fig. 6(b) at µ = 60 meV). Both local electrons align ferromagnetically, and the hybridization is sufficient to initiate Kondo screening of the d electron spin. For sufficiently large Γ(µ), the underscreened Kondo fixed point (SC-b) is found, with a dangling π electron spin. The SC-b regime is characterized by an impurity entropy of ln(2) and a square effective moment of 0.25.
At small Γ_0 (upper left corner of Fig. 9) we observe another crossover from the LM-a to the strong-coupling regime SC-c. Both regimes are characterized by a fully occupied π orbital. Therefore, we end up with an effectively isolated half-filled d orbital which may now be screened by the conduction band electrons when Γ(µ) is increased with increasing µ > 0.
Additionally, we added three areas to the color contour plot Fig. 9 to indicate the three regimes with their distinct spectral evolution as a function of µ: (i) the weak, (ii) the intermediate, and (iii) the strong hybridization regime.
F. Fixed points T → 0
Away from the µ = 0 line, all LM regimes are associated with unstable RG FPs. The diagram in Fig. 10 is calculated for the same parameters as Fig. 9 but for T = O(10^−8) K. Clearly, the LM-c regime characterized by a single d orbital spin is almost completely suppressed and reduced to a narrow region in the vicinity of µ = 0, where a rapidly vanishing Γ(ω) exponentially reduces T_K. This regime has been replaced by the stable SC-a FP for µ < 0 and the FI FP for µ > 0. The areas of the two other stable SC FPs, SC-b and SC-c, have also increased, but the two other LM FPs, LM-a and LM-b, still cover a large area of the diagram. A further reduction to T = 10^−30 K (not shown here) reduces these regions in favor of the corresponding SC FPs. The question arises whether the crossover from LM-c to LM-b in the intermediate regime upon increasing µ will develop into a quantum phase transition at T = 0. Fig. 11 shows the spectral functions, panel (a), as well as the orbital occupation, panel (b), in the crossover region.
Already at finite T = 4.2 K the crossover takes place in a narrow range between µ = 8 meV and µ = 12 meV giving rise to the clearly visible distinction in Fig. 9. The spectral functions for µ < µ c ≈ 9.5meV are characterized by a peak at ω ≈ 50 meV whose spectral weight is shifted towards slightly higher energies ω ≈ 115 meV for µ > µ c .
The orbital occupation of the impurity illustrates the change from a crossover to a quantum phase transition at T = 0. The low lying d orbital remains at around half-filling regardless of temperature and µ. On the other hand, the π orbital changes its occupation from empty to half-filled in a smooth manner at finite T . This develops into a sharp transition for T → 0 indicating a real phase transition and a change of ground states.
We see two lines of quantum critical points for our parameters, where the impurity changes its total occupation by an integral number.
G. The Kondo temperature
In section IV E we have shown that the system has not yet reached its stable fixed points for a large part of the parameter space at finite T = 4.2 K. The broad crossover regimes show qualities which can be ascribed to either a LM or SC phase, such as an entropy ∝ ln(2) while simultaneously exhibiting an onset of a Kondo peak. Therefore, estimating the Kondo temperature by using the full width half maximum (FWHM) of the peak may result in values that differ significantly from the true low energy scale for the given parameter sets. In addition, the PH asymmetry and the fitting procedure does not uniquely determine T K [49].
We adopted a twofold strategy: (i) we estimated T_K^Fano using a Fano line shape for the Kondo peak in the spectral function in order to connect to the STS approach for extracting T_K from experimentally obtained spectra over a limited temperature range, and (ii) we calculated T_K from the temperature evolution of the zero-bias conductance (ZBC), originally proposed by Goldhaber-Gordon [47] in the context of quantum dots, which turns out [49] to coincide very accurately with Wilson's definition [21]. However, this method for extracting T_K might be questionable in the case of STS which shows Fano resonances [50]. Fig. 12 shows the calculated T_K values using both approaches. Note that at finite T = 4.2 K the ZBC has in general not yet reached its plateau value, rendering the Goldhaber-Gordon approach useless at that temperature alone, so that we iterated until the fixed point is reached. For all calculations we use the Γ(ω) that stems from the realistic DOS as well as local orbital energies obtained from H_loc^sp, Eq. (16). The first two curves (from top to bottom) are obtained for hybridizations Γ_0 = 2.1 and 2.4 eV and can be identified with the strong hybridization regime, the next two (Γ_0 = 1.5, 1.7 eV) show the occurrence of a Kondo peak characteristic of the intermediate regime, and the last one belongs to the weak hybridization regime.
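A sketch of the Fano-based estimate (i) is given below. It assumes the generic Fano form ρ(ω) ∝ (q + ε)²/(1 + ε²) with ε = (ω − ω_0)/(Γ/2) and identifies Γ/2 with k_B T_K^Fano; both the data and the exact convention relating the fitted width to T_K are placeholders/assumptions of this sketch.

import numpy as np
from scipy.optimize import curve_fit

# Sketch of a Fano-line-shape estimate of T_K from a spectral function / dI-dV curve.
K_B_EV = 8.617333e-5   # eV/K

def fano(omega, amp, q, omega0, gamma, offset):
    eps = (omega - omega0) / (gamma / 2.0)
    return amp * (q + eps)**2 / (1.0 + eps**2) + offset

omega = np.linspace(-0.05, 0.05, 401)                      # eV
rng = np.random.default_rng(2)
truth = fano(omega, 0.5, 1.5, 0.002, 0.01, 0.1)            # synthetic placeholder "data"
data = truth * (1 + 0.01 * rng.standard_normal(omega.size))

popt, _ = curve_fit(fano, omega, data, p0=(0.4, 1.0, 0.0, 0.02, 0.1))
gamma_fit = abs(popt[3])
print(f"Gamma = {gamma_fit*1e3:.2f} meV  ->  T_K^Fano ~ {gamma_fit/(2*K_B_EV):.0f} K")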
Note that the values of T K determined by the ZBC fit correspond to the crossover temperature for the entropy change towards the SC depicted in Fig. 9. The agreement between T GG K extracted from the ZBC fit and true thermodynamic energy scale T K characterizing the crossover to the stable FP in the NRG has been previously observed in the analysis of STS for Au-PTCDA complexes on Au surface [49].
H. Modification of the model
All features observed experimentally and reported in Fig. 4(a) by Mao et al. [24] are in qualitative agreement with Fig. 12. However, we also note some differences to the STS data present in Ref. [24]: (i) T Fano K calculated by the two-orbital model is about a factor of 1.5 − 2 too small and (ii) the Kondo resonance in the weak and intermediate regime sets in at around µ ≈ 40 meV at the earliest for finite T = 4.2K. Since the Kondo temperature is exponentially sensitive to the parameters of the model and the determination by a FWHM fit in the experiment is not reliable -see discussion in Ref. [49] -the difference between the experimental and the theoretical Fano fit T Fano K is clearly not as significant as the different onset of the Kondo effect for µ > 0.
1. Effect of a smaller on-site U
The exact value of the intra-orbital Coulomb interaction is not agreed upon in the literature, with predictions spanning roughly one order of magnitude [6,31,51,52]. In the light of a recent study, we investigate the effect of a reduced Coulomb matrix element U_dd = 0.5 eV, as proposed by Nair et al. [6]. A smaller Coulomb interaction enhances local charge fluctuations and, using the Schrieffer-Wolff transformation [53] as a guideline, can presumably enhance T_K. A larger U_dd, on the other hand, suppresses those charge fluctuations and, therefore, requires an even larger, possibly unrealistic hybridization Γ_0.
Note that U_dd = 0.5 eV is particularly small in comparison with the Hund's rule coupling J_H ≈ 0.35 eV obtained by density functional theory [19] and the inter-orbital interaction U_dπ ≈ 0.1 eV. In order to guarantee the occupation necessary for developing a stable local moment, we slightly increased it to U_dd = 0.65 eV.
The small on-site U dd had to be combined with a value of ǫ d = −0.37(−0.40) eV in the intermediate (strong) hybridization regime for the system to form a stable moment in order to find a Kondo resonance at the chemical potential. Furthermore, a smaller U dd tended to result in a vacant impurity ground state abruptly destroying the Kondo effect.
We restricted ourselves to the approximate Γ(ω) of Eq. (12) for these calculations in order to focus on the qualitative differences to U_dd = 2 eV. Fig. 13 shows ρ_d(ω) for the modified parameter set at T = 1.6 · 10^−5 K. The intermediate (solid lines) and strong hybridization (dashed lines) regimes are clearly visible with a pronounced Kondo peak. However, the width of the Kondo resonance is extremely small, corresponding to a Kondo temperature T_K < 1 K, which is not comparable to the recent experimental STM data (Ref. [24]).
Our calculations show that both regimes appear to be a generic feature of our two-orbital pseudo-gap model. Counterintuitively, the higher Coulomb repulsion U_dd = 2 eV yields a significantly higher Kondo temperature and better agreement with the STM experiments, but requires an increase of Γ_0. We could increase U_dd and find adequate parameters for Γ_0, but it is not clear whether the corresponding large hybridization matrix element V can be justified by a curvature-induced overlap between the local σ orbital and the neighboring tilted π orbital, when the π orbital matrix elements entering the band structure are given by t ≈ 2.9 eV.

FIG. 13. ρ_d(ω) for the modified parameter set. The calculations are done for T = 1.6 · 10^−5 K. The Kondo peak is clearly pronounced but very narrow, indicating a small T_K.
2. Possible modifications of the model by neglected interactions
In the literature as well as in our calculations, the starting point is always a perfectly flat ideal graphene sheet. The local modification by rippled and curved graphene is only included via (i) the shift of the local Dirac point and (ii) the finite σ-π orbital overlap resulting in a finite Γ_0. We recall that the local curvature is a consequence of an energetically favored 3D over a purely 2D graphene sheet, according to the Mermin-Wagner theorem, and of the underlying SiO2 substrate. This will also modify the phonon modes. Long-wavelength Goldstone phonons with vanishing phonon energies will be present in the material but can be neglected for our problem. Locally, however, there will be breathing modes of longitudinal vibrations of the curved graphene sheet that might slightly expand and contract the distance between the graphene and the STM tip. Clearly such a local vibration will have a finite frequency, similar to a molecular vibration.
If we assume the existence of a single local vibrational breathing mode, it will have two effects: (i) it will modify the Hamiltonian (1) describing the impurity dynamics, and (ii) it will modify the tunnel theory employed to interpret the STS data. The definition of H in Eq. (1) has to be augmented by the two additional terms in H_ph, where b annihilates a phonon with energy Ω_0. The first term accounts for free phonons, while the second term couples the local displacement operator X = b + b† to the impurity with the dimensionless electron-phonon coupling strength λ_ph.
Such an additional term has been investigated within the NRG in the context of quantum transport through a deformable molecular transistor [54]. In the limit of weak electron-phonon coupling, it was analytically shown that the Kondo temperature will increase due to an enhancement of the Kondo coupling. An investigation of the impact on T_K in the pseudo-gap two-orbital model as well as an estimation of inelastic tunnel processes is desirable but beyond the scope of this paper.
In the context of charge transport through a two-orbital molecule, the influence of an electron-phonon coupling on an additional spin anisotropy term in the spin-1 sector was investigated [55,56]. This spin anisotropy requires a spin-orbit coupling which is relevant in Co complexes but weak in carbon. The effect of a small spin anisotropy can be enhanced by a linear spin-lattice coupling in the anti-adiabatic limit, resulting in a reduction of T_K. Due to the weak spin-orbit coupling in carbon, we do not consider this effect to be of relevance for the spin dynamics of graphene vacancies. Furthermore, the zero-mode bound state is extended [19] and can only couple very weakly to a local phonon.
3. Anisotropic hybridization V
As indicated in Fig. 1, the dangling sp2 orbital hybridizes with both π orbitals located at the base of the distorted triangle [34]. The hybridization part of the Hamiltonian can be written as

H_hyb = Σ_σ (V_1 d†_σ a_1σ + V_2 d†_σ a_2σ + h.c.) ,   (23)

where a†_jσ creates an electron in the π orbital at the nearest-neighbor position δ_{1,2} = (a/2)(1, ±√3)^T relative to the vacancy. The localized σ orbital resides at δ_3 = −a(1, 0)^T. We consider the general case V_j = |V_j| e^{iϕ_j}, i.e. we allow the hybridizations to differ in phase and absolute value. A point defect distorts the non-planar geometry of graphene and may lead to a symmetry-breaking hybridization.
Combining the conduction band operators in (23) into the effective operator V c_0σ = V_1 a_1σ + V_2 a_2σ leads to a general hybridization function Γ̃(ω). For |V_1| = |V_2|, we can follow the approaches of Refs. [34,57,58] and combine both π orbitals situated opposite the free sp2 (or σ) orbital to arrive at a pair of new orthogonal operators (even and odd parity). The fermionic anticommutator determines the new effective DOS and hybridization function for even (+) and odd (−) parity, respectively. The corresponding hybridization functions Γ_±(ω), Eq. (25), take the form of a Brillouin-zone sum over δ(ω − ǫ_τ(k)) weighted by the relative phase between the two sites; here, ǫ_τ(k) is the dispersion of the τ energy band given by Eq. (4). In the case where both π orbitals hybridize equally with the free sp2 orbital, only the even linear combination couples to the impurity [34], while the odd-parity linear combination decouples from the sp2 orbital. The authors of Ref. [34] restricted themselves to a linear dispersion around both Dirac points and a fully symmetric hybridization, in which case Eq. (25) is analytically solvable for |V_k^d|² = V²/N and becomes proportional to the zeroth-order Bessel function.
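Since the explicit expressions are not reproduced above, the following is a minimal sketch of the standard parity decomposition for the symmetric case |V_1| = |V_2|; it is our assumed form and not necessarily identical to the authors' Eq. (25):

\begin{align}
  c_{\pm\sigma} &= \frac{1}{\sqrt{2}}\left(a_{1\sigma} \pm a_{2\sigma}\right), &
  \Gamma_{\pm}(\omega) &\propto \sum_{\vec{k},\tau}\left[1 \pm \cos\!\big(\vec{k}\cdot(\vec{\delta}_1-\vec{\delta}_2)\big)\right]
  \delta\big(\omega-\epsilon_{\tau}(\vec{k})\big).
\end{align}

Here τ labels the two bands of Eq. (4); the (1 ± cos) weight is what produces the different slopes of Γ_±(ω) near the Dirac point discussed below.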
Here, we make use of the full tight-binding dispersion including second nearest neighbors, Eq. (4), and evaluate the summation over the first Brillouin zone explicitly.
In the presence of a vacancy, the difference |δ_1 − δ_2| can be approximated by the lattice constant a [34]. Both hybridization functions, Γ_±(ω), are shown in Fig. 14, where we adopted the same t, t′ band parameters for the even and odd hybridization functions.
The even linear combination would suppress the Kondo effect significantly for a fixed |V_i| due to its reduced slope. In order to retain Kondo temperatures close to the experimental ones, one could simply increase Γ_0, which would eventually lead to questionably high values of Γ_0. A relative phase shift of δϕ = ϕ_2 − ϕ_1 = π would result in a swap between the even and odd hybridization and would increase T_K. In order for δϕ ≠ 0, either the configuration belongs to a local odd parity, or the parity is broken in the vicinity of the vacancy: both π orbitals are slanted differently with respect to the σ-plane. In order to reveal the effect of the different energy dependence of the hybridization functions for the same hybridization strength (see Eq. (11)), we present in Fig. 15 a direct comparison of spectral functions from the intermediate regime at T = 4.2 K, calculated for the Γ(ω) based on the realistic t, t′ DOS used throughout this paper and for the even/odd hybridizations defined in Eq. (25), using identical local impurity parameters. While the Kondo temperature is strongly suppressed for the even Γ_+(ω), we observe an enhanced Kondo temperature for Γ_−(ω), illustrating the effect of the significantly larger slope of the Γ(ω) ∝ |ω| region in the vicinity of the Dirac point. Note that, in order to compare the consequences of the changed hybridization on an equal footing, the parameters of the impurity would have to be adjusted accordingly.
In a flat graphene sheet with parity conservation at the location of the σ orbital, the even hybridization function Γ_+(ω) should be relevant [34]. Due to the orthogonality of the Wannier orbitals, however, a hybridization is only generated for curved graphene in the vicinity of the carbon vacancy. This locally distorted environment breaks parity in general. The matrix elements V_i will differ in modulus and phase, and the most general hybridization function Γ̃(ω) is given by Eq. (26), with |V_1|² + |V_2|² = |V|². If k_F(δ_1 − δ_2) + δϕ ≈ 0, then Γ̃(ω) ∝ Γ(ω) close to the Dirac point. Since the parity breaking should be taken into account and a detailed microscopic theory of the specific single-particle properties in the vicinity of a carbon vacancy is missing, we restricted ourselves to the Γ(ω) defined in Eq. (9).
V. SUMMARY AND CONCLUSION
We investigated an effective two-orbital quantum impurity model for a carbon vacancy in a locally curved monolayer of graphene. Combining the estimates for the local impurity parameters [19,20,31] with the experimental STM data [24], we identified three different regimes, namely the weak, intermediate, and strong hybridization regimes. Our NRG calculations are carried out for T = 4.2 K, matching the experimental temperature [24]. The intermediate regime is characterized by SC fixed points for both n and p doping, which cross over into unstable LM regimes at constant temperature when µ approaches the Dirac point, as the vanishing DOS of graphene exponentially suppresses screening of the impurity spins. The other two regimes only show a Kondo effect in the p doped region for large hybridization (curvature) or in the n doped region for small hybridization (curvature). For positive µ and sufficiently strong hybridization, the system is driven into a FI regime whose striking feature is a single peak in the spectral function shifting away from ω = 0 with increasing µ.
All three regimes are connected by changing the graphene curvature, which also changes the effective single-particle orbital energies: not only does the π conduction band hybridize with the σ orbital, but so does the vacancy-induced bound π state. This physical mechanism driving the transition from the weak to the intermediate and finally to the strong hybridization regime is related to the local rippling of graphene that has been verified experimentally [24]. All three regimes can be understood in terms of local spin and charge configurations of a two-orbital impurity model.
Our findings are in qualitative agreement with recent experimental studies by Mao et al. [24], where scanning tunneling microscope (STM) measurements on irradiated graphene sheets found all three scenarios, albeit with higher Kondo temperatures. We have shown that all three regimes are an intrinsic feature of our realistic two-orbital model and are not necessarily dependent on the exact form of the DOS, as long as the pseudo-gap slope for ω → 0 is present. Even for particularly small Coulomb interactions we can still reproduce the intermediate and strong hybridization regimes, although T_K is diminished substantially.
The only drawback of our approach is the relatively low Kondo temperature compared to the experiment. In order to predict larger T_K's, a substantial local hybridization is required [10,16] whose microscopic origin is unclear: note that the nearest-neighbor hopping matrix element t in pristine graphene is approximately 2.9 eV, and hybridization matrix elements of the same order are required to achieve T_K's of the order of 60 − 80 K. This might be caused by several aspects not included in our model. Firstly, in all calculations we assume that the Dirac fermion DOS prevails in the vicinity of the vacancy up to the bound state. However, Kondo temperatures depend sensitively on the local Γ(ω) close to the Dirac point. The true energy dependence of Γ(ω) in a curved environment (a result of the vacancy, the substrate, and the high gate voltages) might differ slightly from that of pristine graphene. Secondly, we assumed that the STS is directly proportional to the spectral function of the σ orbital, while in reality the STM tip can locally couple to several different orbitals [50]. The additional asymmetries of such a resulting spectrum, stemming from the interference between multiple transport channels, one to the local σ orbital and the others to the graphene p band, might lead to a larger value of T_K in a two-parameter Fano fit. Thirdly, the local rippling of graphene is caused by an instability of the ideal 2D graphene sheet. Local curvatures will modify the phonon spectrum, and it is expected that there is a significant electron-phonon coupling contributing to the hybridization. This electron-phonon interaction has been shown to enhance the effective hybridization [54] and therefore T_K. In addition, one expects inelastic contributions to the tunnel current if such a breathing mode leads to an oscillation of the distance between the vacancy and the STM tip. Such an additional contribution could modify the overall shape of the STS and therefore also change the FWHM used to estimate T_K.
"year": 2018,
"sha1": "a7dea276ff06bb09e2fc1e6e5a0ef73390721cbc",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.97.155419",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "a7dea276ff06bb09e2fc1e6e5a0ef73390721cbc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
Neoadjuvant intraperitoneal chemotherapy followed by radical surgery and HIPEC in patients with very advanced gastric cancer and peritoneal metastases: report of an initial experience in a western single center
Background
The association of preoperative systemic and intraperitoneal chemotherapy has been described in Eastern patients with very good outcomes in treatment responders. The aim of this paper is to describe the initial results of this multidisciplinary regimen in gastric cancer patients with very advanced peritoneal metastases.

Case presentation
We present here the first four cases who received the treatment protocol. They had a baseline PCI between 19 and 33. Two patients had received systemic chemotherapy prior to this regimen. Three of them had significant response and were taken to cytoreductive surgery, while one patient who had 12 cycles of chemotherapy previously showed signs of disease progression and subsequently died. There was no significant postoperative morbidity, and three patients remain alive, two of them with no signs of recurrence.

Conclusion
Systemic and intraperitoneal chemotherapy led to a marked response in peritoneal disease extent in our initial experience and allowed three of four patients with very advanced disease to be treated with cytoreductive surgery.
Background
Patients with advanced gastric cancer present with peritoneal metastases in about 30% of cases [1] and up to 50% of those treated with curative intent will develop relapse in the peritoneum. Standard treatment for these individuals is systemic chemotherapy, but median survival in this scenario is poor, around 3 to 6 months in most studies [2], reaching a little over 1 year in a recent Eastern trial [3].
Cytoreductive surgery (CRS) followed by hyperthermic intraperitoneal chemotherapy (HIPEC) for gastric cancer patients with peritoneal metastases has been associated with improved survival in a selected group of patients both in an Eastern [2] and in a large French multicenter series [4]. The results from both studies have emphasized the importance of patient selection, as the ones with the best results were treated with preoperative systemic chemotherapy, had limited peritoneal dissemination, measured by a low peritoneal cancer index (PCI), and were treated with a complete cytoreduction [4].
The association of neoadjuvant intraperitoneal and systemic chemotherapy has been investigated recently and seems to be a very important tool for patient selection. In a large Eastern series, individuals who had negative cytology after this preoperative regimen and were treated with CRS + HIPEC had improved survival compared to those with positive cytology [5]. The addition of laparoscopic HIPEC (L-HIPEC) and more effective systemic chemotherapy to this multidisciplinary treatment, labeled as bidirectional intraperitoneal and systemic chemotherapy (BISIC), has led to more significant response rates and improved survival in this set of patients [1].
The aim of this study was to report the first four consecutive cases of gastric cancer patients who presented with advanced disease and disseminated peritoneal metastases and were treated with L-HIPEC and BISIC, followed by CRS + HIPEC.
Methods
This is a retrospective, single-center case series based on routinely collected data extracted from patients' electronic charts. This paper was written in accordance with the CARE guidelines for case reports [6].
The inclusion criteria were diagnosis of synchronous metastatic gastric cancer with peritoneal dissemination as the sole site of metastatic disease and treatment with BISIC (as described below) between October 2015 and August 2017 (Table 1).
Treatment was adapted from Yonemura's original protocol (2006), which he later modified by adding more effective systemic chemotherapy and a different dosage of the intraperitoneal drugs [1,7]. As S-1 is not currently available in Brazil, Capecitabine was used instead. Briefly, during the first laparoscopy, before any treatment, extensive intraperitoneal lavage (EIPL) [8] and L-HIPEC (Cisplatin 30 mg/m2 + Docetaxel 30 mg/m2, for 1 h at 42 °C) were performed. At the end of the procedure, a peritoneal port-a-cath (DistricAth®, Districlass Médical, France) was placed with its tip directed toward the cul-de-sac. After a 15-day period of rest, patients initiated normothermic chemotherapy. On day 1, Cisplatin 30 mg/m2 and Docetaxel 30 mg/m2 were infused into the peritoneal cavity for 2 h after adequate pre-medication. On day 8, Cisplatin 30 mg/m2 and Docetaxel 30 mg/m2 were given intravenously in separate bags according to standard infusion protocols. Capecitabine 850 mg/m2 PO twice a day was administered from day 1 to day 14. Cycles were repeated every 21 days.
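To make the dosing arithmetic above concrete, here is a minimal sketch that converts the per-square-metre doses of the protocol into absolute doses for a hypothetical patient. The Mosteller body-surface-area formula and the example height and weight are illustrative assumptions, not details reported in this series.

```python
import math

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 via the Mosteller formula (an assumed choice)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def bisic_doses(height_cm: float, weight_kg: float) -> dict:
    """Absolute doses implied by the 30 mg/m2 cisplatin/docetaxel and
    850 mg/m2 twice-daily capecitabine components of the protocol."""
    bsa = mosteller_bsa(height_cm, weight_kg)
    return {
        "bsa_m2": round(bsa, 2),
        "cisplatin_mg": round(30 * bsa, 1),
        "docetaxel_mg": round(30 * bsa, 1),
        "capecitabine_mg_per_dose": round(850 * bsa),
    }

# Hypothetical patient of 160 cm and 50 kg (illustrative values only).
print(bisic_doses(160.0, 50.0))
```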
The treatment strategy consisted of repeating BISIC cycles three times, followed by CT scans, endoscopy, and a new laparoscopy. According to the surgical findings, either another three cycles of BISIC were administered or the patient was taken to CRS, which included gastrectomy with D2 lymphadenectomy, resection of peritoneal lesions, and HIPEC.
During treatment, patients were followed for toxicity at least once per cycle, and response evaluation with endoscopy and CT scans was performed every three cycles. After cytoreductive surgery, CT scans were repeated every 2-3 months.
All patients signed informed consent after extensive discussion with them and their relatives regarding the potential benefits and risks of the treatment. Toxicity was graded according to the Common Toxicity Criteria (CTC) 4.0.
Case 1
A 34-year-old female patient was admitted in September 2015 at our service. She complained of epigastric pain and reported a 13-kg weight loss. Her weight at this time was 39 kg. An upper endoscopy revealed a large gastric tumor that extended from the posterior wall of the greater curvature in the fundus to the gastric antrum. Biopsy confirmed the diagnosis of a poorly differentiated adenocarcinoma. Thoracic and abdominal CT scans were then performed and showed gastric wall thickening in the fundus, mild ascites, enlarged perigastric lymph nodes, and peritoneal nodules. CEA and CA 19-9 levels were 6.4 ng/mL and 555.4 U/mL, respectively.
A staging laparoscopy demonstrated multiple peritoneal nodules in the right and left diaphragm, greater and lesser omentum, pelvis, and parieto-colic gutters. PCI count was 33. Ascites was considered to be moderate, and its cytology was positive for the presence of free cancer cells. Biopsy of two nodules confirmed the diagnosis of adenocarcinoma. After informed consent was obtained, the patient started the protocol described in the "Methods" section.
Re-staging CT scans demonstrated a decrease in gastric wall thickening, ascites, and the number of peritoneal nodules. A new laparoscopy showed a decrease in the number of peritoneal nodules, but the PCI count remained high (20). Cytology of the peritoneal wash, however, was negative, as was the biopsy of one nodule in the right diaphragmatic peritoneum. After multidisciplinary discussion, we opted to treat the patient with three more BISIC cycles. After six cycles, the patient had regained her weight (50 kg), and her CT scans showed a significant reduction both in gastric wall thickening and in peritoneal nodules (Fig. 1). Her CA 19-9 was 13.1 U/mL, and her CEA level was below the detection level. During treatment with BISIC, the patient developed mild toxicities, including G1 nausea, vomiting, fatigue, alopecia, decreased appetite and diarrhea, and G2 constipation and infection (upper respiratory tract infection). Among hematological toxicities, only grade 2 anemia was observed. There were no dose reductions or treatment delays due to toxicity. No serious adverse event was reported.
The patient was then taken to surgery in June 2016. A new staging laparoscopy identified no peritoneal lesions (Fig. 2). We proceeded to a laparotomy, and a total gastrectomy with D2-lymphadenectomy was performed. Peritoneal areas in the right and left diaphragm, in the pelvis, and in the small bowel mesenterium were resected. EIPL was performed after resection, followed by HIPEC with Docetaxel 30 mg/m 2 and Cisplatin 30 mg/m 2 for 1 h. She had an uneventful recovery and was discharged on the tenth postoperative day.
The peritoneal wash cytology was negative, and the pathology report showed only acellular mucin and rare epithelial cells with nuclear atypia in the mucosa and submucosa of the gastric body and antrum. The tumor bed measured 10 × 4.5 cm. No lymphatic, perineural, or vascular invasion was identified. All margins were negative, and there were no metastases in the 38 dissected lymph nodes. All peritoneal areas that were resected had no signs of viable disease. Pathological staging was ypT1b ypN0 ypM0 (pathologic TNM I). Response to chemotherapy in the examined tissue was characterized as 5% viable tumor cells and 95% fibrosis.
After surgery, the patient received five additional cycles of capecitabine 750 mg/m2 PO twice a day from day 1 to day 14 every 21 days. At the last follow-up (January 2018), she was asymptomatic and exams showed no evidence of disease.
Case 2
A 55-year-old female presented in September 2016 with a long history of dyspeptic symptoms and an upper endoscopy that showed a Borrmann type IV lesion in the gastric body with a biopsy of signet-ring cell adenocarcinoma.
Staging was performed first with thoracic and abdominal CT scans, which showed diffuse gastric wall thickening and signs of peritoneal metastases. A staging laparoscopy revealed multiple peritoneal metastases, with a PCI count of 25.
After three cycles of treatment, re-staging endoscopy demonstrated a significant response to treatment, as no ulcerated lesions remained, only a fibrotic and substenotic area in the body-antrum transition (Fig. 3). Furthermore, CT showed a regression in the thickening area. A new laparoscopy was performed in February 2017, which revealed the presence of remaining peritoneal metastases, and a total PCI of 17. The recommendation as in the previous case was to maintain the BISIC regimen and re-evaluate after three more cycles.
After the sixth cycle, re-staging with endoscopy and CT identified the same signs of response to chemotherapy. During treatment, this patient experienced somewhat more toxicity than the previous patient. G1 vomiting, peripheral neuropathy, alopecia, and decreased appetite were noticed. Moreover, G2 nausea, fatigue, abdominal pain (due to chemical peritonitis), infection (upper respiratory tract infection), and myalgia were observed. Dose reductions of the intraperitoneal component of BISIC (due to chemical peritonitis) and dose delays were deemed necessary. Severe toxicities were not observed.
A new laparoscopy was performed in June 2017 and showed less spread of peritoneal nodules, with a PCI count of 12. Cytoreductive surgery was then performed, with a total gastrectomy, D2-lymphadenectomy, resection of the diaphragmatic peritoneum, and nodules in the small bowel mesentery. Three of these nodules were sent to frozen section biopsy, with no signs of viable disease in any of them. HIPEC followed, with Cisplatin 30 mg/m 2 and Docetaxel 30 mg/m 2 , perfused for 1 h.
This patient also had an uneventful recovery and was discharged from the hospital on the eleventh postoperative day. The pathology report described a signet-ring cell adenocarcinoma in the distal gastric body, with serosal infiltration and 4 metastatic lymph nodes out of a total of 23 dissected. Regarding the peritoneal nodules, viable disease was detected in the round ligament and in one small bowel implant (ypT3 ypN2 ypM1; pathologic TNM IV). She received four cycles of adjuvant chemotherapy (FOLFOX), from a planned total of six cycles. She developed peritoneal recurrence 3 months later and is now under treatment with a second-line chemotherapy regimen.
Case 3
The next two cases involve patients with a different treatment background. The first was a 29-year-old male, admitted in December 2015, with a history of epigastric pain and a 6-kg weight loss. He had an upper endoscopy that showed a Borrmann type IV lesion in the gastric body and a biopsy of mixed-type adenocarcinoma. Diffuse peritoneal metastases were identified on abdominal CT and on a PET-CT.
A staging laparoscopy confirmed multiple peritoneal metastases, with a PCI count of 20. The patient received systemic chemotherapy, with 12 cycles of modified DCF (Docetaxel, Cisplatin, and 5-Fluorouracil). After multidisciplinary discussion, a new laparoscopy was performed and the PCI was 15. L-HIPEC was then administered with Cisplatin 30 mg/m2 and Docetaxel 30 mg/m2, followed by three cycles of BISIC. The following staging procedure showed a PCI count of 17. Due to this finding of disease progression, surgery was not performed and the patient was started on second-line chemotherapy, with Paclitaxel and Ramucirumab. He had disease progression in the second cycle and died due to complications related to the tumor in April 2017.
Case 4
Case 4 refers to a 55-year-old male, who had a different treatment background as well. He was admitted to our service in October 2016 with a 6-month history of epigastric pain and a 15-kg weight loss. His upper endoscopy revealed an infiltrative lesion in the upper body of the stomach, which resembled linitis plastica. Biopsy revealed a signet-ring cell adenocarcinoma. He had a previous abdominal CT with findings of diffuse peritoneal metastases and had been treated with eight cycles of FLOT (5-Fluorouracil, Leucovorin, Oxaliplatin, and Docetaxel) at a cancer service in his hometown.
He underwent a staging laparoscopy in December 2016, which confirmed the peritoneal metastases with a PCI count of 19. L-HIPEC was performed as described in the previous two cases and the intraperitoneal port-a-cath was positioned. After two cycles of BISIC, also administered as previously described, a new laparoscopy was performed in April 2017 and a PCI count of 13 was identified. A midline incision followed and a total gastrectomy with D2-lymphadenectomy was performed, along with resection of the subdiaphragmatic and pelvic peritoneum, in which there were some fibrotic areas that could have harbored residual disease. HIPEC was also administered as in the other two patients. Pathology showed a residual mixed-type adenocarcinoma invading the gastric submucosa, 17 lymph node metastases out of 22 dissected nodes, and 30% viable tumor cells in the stomach. All peritoneal areas showed no residual disease (pT1b pN3b pM0; pathologic TNM IIB). His recovery had no significant events, and he was discharged on the twelfth postoperative day. He underwent six cycles of chemotherapy (FOLFOX) in his hometown and is now on follow-up, with no signs of recurrence.
(Fig. 3. Endoscopic aspect of the gastric lesion in case 2: (a) at diagnosis, (b) after three cycles of BISIC, and (c) after six cycles of BISIC, demonstrating significant response to treatment.)
Discussion
We report in this series the first four cases of gastric cancer patients with very advanced peritoneal disease who were treated with a multimodality regimen that included systemic and intraperitoneal chemotherapy in a recently described regimen known as BISIC [1], followed by cytoreductive surgery and HIPEC, in a single Western cancer center. The most important findings in these cases were the lack of postoperative morbidity and the significant response in peritoneal dissemination associated with treatment, which turned patients who would otherwise be candidates for palliative chemotherapy only into candidates for a more radical therapy and a chance of improved survival.
Three of the four patients reached a significant response that allowed them to be treated with a complete cytoreductive surgery and also received HIPEC with Docetaxel and Cisplatin for 60 min. The association of CRS and HIPEC has been proven superior to CRS alone in a Chinese randomized trial with 68 patients, in which the multimodality group had a median survival of 11 months compared to 6.5 months in the surgery only group. These poor numbers in both groups may be related to patient selection, as 42% of individuals in this study did not have a complete CRS with no residual macroscopic disease (CC-0) [9]. This trial confirmed the findings of a large French multicenter study that subjects who are candidates for this multimodality regimen should receive preoperative chemotherapy, should have low disease burden, expressed in PCI count, and should receive a CC-0 CRS as a mandatory step of their treatment [4].
Canbay et al. first described the use of neoadjuvant intraperitoneal chemotherapy combined with systemic chemotherapy in a large single-center series in Japan, with 194 patients. Out of these individuals, 152 (78%) were classified as responders, a classification that at the time included patients whose cytology became negative after the administration of two cycles of intraperitoneal Cisplatin and Docetaxel and six cycles of S-1. These responders were then taken to CRS and HIPEC, and the median survival of those who received CC-0 surgery was 18 months [5], which was an improvement compared to the results of chemotherapy alone, even in Eastern studies [10].
This regimen was later modified by the same group, and the concept of laparoscopic hyperthermic intraperitoneal chemotherapy perfusion (L-HIPEC) was introduced. Its main advantage would be a minimally invasive procedure with direct vision, allowing a more accurate assessment of disease extent in the peritoneal cavity. Also, a more effective systemic chemotherapy regimen was adopted, with the use of Cisplatin and Docetaxel intravenously associated with the intraperitoneal regimen (BISIC). This treatment has been described in detail in the literature [1,11]. The most important benefit of this association seems to be a higher response rate in peritoneal disease extent. In a study of this regimen in 105 patients, the association of L-HIPEC and BISIC led to a significant change in PCI, as 44% of patients had a baseline PCI under 12, compared to 67% after treatment. Also, 66% of all subjects experienced a decrease in or complete disappearance of peritoneal metastases at the re-staging laparoscopy [12]. We identified this response in three of our patients, with decreases of 100, 52, and 31%, numbers that were obtained in open surgery and that could have been undervalued in the staging laparoscopies. The less significant response and the finding of disease progression were identified in the two patients who had received systemic chemotherapy prior to L-HIPEC and BISIC. Although the sample size is too small to draw any conclusions, it is possible that this prior treatment could have helped induce drug resistance.
Another aspect that should be highlighted is the very low morbidity associated with the procedure. In the study cited above, no patient developed grade IV or V toxicity and only four had grade III events after L-HIPEC and BISIC. Our cases had similar results, with no grade III, IV, or V events after chemotherapy and no postoperative complications. We certainly do not expect zero morbidity in future cases, but this resembles our previous results with adjuvant HIPEC, in which no patient developed organ insufficiency and there was no mortality [13]. Very similar results have recently been reported in an American cancer center series, with 11% morbidity and no mortality associated with L-HIPEC [14]. This reinforces the notion that the morbidity and mortality associated with CRS and HIPEC are highly influenced by the extent of surgery. This is a very small case series, and interpretation of its results is therefore limited. However, this multimodality treatment has been performed extensively in a Japanese institution and its results have shown very acceptable toxicity and promising response rates, with long-term survival in selected patients [1]. We report here our group's preliminary experience, with favorable results in subjects with very advanced gastric cancer and diffuse peritoneal metastases. A larger number of cases is needed to confirm the validity of these results and provide more meaningful analyses.
Conclusions
In conclusion, the association of L-HIPEC and BISIC has led to a good response in peritoneal disease extent in our initial experience and allowed radical procedures to be performed in individuals who were otherwise candidates for palliative chemotherapy. | 2018-03-23T08:34:23.015Z | 2018-03-22T00:00:00.000 | {
"year": 2018,
"sha1": "c18b81133514cf7ecceb17828ce3cef8efe799d2",
"oa_license": "CCBY",
"oa_url": "https://wjso.biomedcentral.com/track/pdf/10.1186/s12957-018-1363-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c18b81133514cf7ecceb17828ce3cef8efe799d2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4314811 | pes2o/s2orc | v3-fos-license | Sarcopenic Obesity in Elderly Korean Women: A Nationwide Cross-sectional Study
Background Sarcopenia causes loss of muscle mass in the elderly and is associated with development of metabolic syndrome in those with obesity. This study evaluated the prevalence of sarcopenic obesity (SO) in healthy Korean elderly women. Methods This study was based on data from the Korea National Health and Nutrition Examination Survey IV and V, 2008–2011. Whole body dual energy X-ray absorptiometry and body mass index measurement were performed for all patients. Women aged 65 years or older were included in this study. Total appendicular extremity muscle mass was used to determine the skeletal muscle mass index. Results Of 2,396 women aged 65 years or older, a total of 1,491 (62.2%) were underweight, normal weight, or overweight, while 905 (37.8%) were obese. The prevalence of sarcopenia using a cut-off value of 5.4 kg/m2 was 64.9% (63/97) in underweight women, 38.2% (320/838) in normal weight women, 17.1% (95/556) in overweight women, and 6.1% (55/905) in obese women. Conclusions The prevalence of sarcopenia was different among groups. The prevalence rate in obese women was lower than that in non-obese women. SO is a new category of obesity in older adults with high adiposity coupled with low muscle mass. The prevalence of SO was lower than that in previous studies because of differences in the definition. A consensus definition of SO needs to be established.
INTRODUCTION
Loss of muscle mass is a predominant change of body composition in elderly people. Rosenberg [1] defined age-related reduction of muscle mass as sarcopenia. Sarcopenia is characterized by a decline in skeletal muscle mass and function that may result in reduced physical capability and poorer quality of life. [2,3] In recent years, the prevalence of obesity has been increasing rapidly. Aging and physical disability are also related to an increase in fat mass, particularly visceral fat, [4] an important factor in the development of metabolic syndrome and cardiovascular disease.
The concurrence of both obesity and sarcopenia, also known as sarcopenic obesity (SO), was first defined by Baumgartner [5]. It has been reported that SO can increase the risk of metabolic syndrome, physical disability, morbidity, and mortality compared to either sarcopenia or obesity alone. [6] The complex interplay of common pathophysiological mechanisms (such as increased proinflammatory cytokines, oxidative stress, insulin resistance, and hormonal changes) and decreased physical activity underlies the close relationship between sarcopenia and obesity. [7] Sarcopenia can reduce physical activity and total energy expenditure, thus increasing the risk of obesity. [8] In contrast, an increase in visceral fat induces inflammation, which contributes to the development of sarcopenia. [9] The association between sarcopenia and obesity likely sets up a vicious cycle, resulting in further loss of muscle mass and mobility, insulin resistance, and risk of metabolic syndrome development. [10] For this reason, SO and its relation to other metabolic diseases are important issues in the elderly population. Furthermore, we hypothesized that there would be a higher prevalence of sarcopenia among obese individuals due to this vicious cycle.
Korea is one of the most rapidly aging countries in the world. [11] Several studies have focused on the prevalence, etiology, and clinical issues of SO in Korea. However, the reported prevalence rates of SO were quite different among studies. The objective of this study was to evaluate the prevalence of SO in a large number of healthy Korean elderly women based on data from the Korea National Health and Nutrition Examination Survey (KNHANES) IV and V conducted in 2008 to 2011. We also compared the prevalence rates of sarcopenia according to obesity status.
Study population
This study was based on data obtained from the KNHANES IV and V. The KNHANES is a nationally representative survey conducted by the Korean Ministry of Health and Welfare. These surveys have been conducted periodically since 1998, using a rolling sampling design involving a complex, stratified, multistage, probability-cluster survey of a representative sample of the non-institutionalized civilian population in order to assess the health and nutritional status of the Korean population. Whole body dual energy X-ray absorptiometry (DXA) scans and body mass index (BMI) measurements were performed for individuals ≥10 years old from July 2008 to May 2011 (excluding pregnant women); individuals with a height of ≥196 cm or a weight of ≥136 kg were excluded in accordance with the KNHANES survey protocol. In addition, test results were treated as missing values in participants with implanted radio-opaque material (e.g., a prosthetic device, implant, or other radio-opaque object) that could affect DXA results. Postmenopausal women aged 65 years or more were included in this study. Written informed consent was obtained from all participants. Protocols for the KNHANES IV and V were approved by the Institutional Review Board of the Korean Center for Disease Control and Prevention.
Definition of sarcopenia and SO
Whole-body DXA examinations for the KNHANES study were conducted with a QDR4500A apparatus (Hologic Inc., Bedford, MA, USA). Data included values for bone mineral content (g), bone mineral density (g/cm2), fat mass (g), lean mass (g), and fat percentage of the whole body and of specific anatomical regions. Appendicular skeletal muscle mass (ASM) was obtained by adding the muscle masses of the four limbs, assuming that non-fat and non-bone masses were skeletal muscle. The skeletal muscle mass index (SMI) was defined as ASM/height2. The Asian Working Group for Sarcopenia (AWGS) suggested a classical approach to determining the cut-off value, namely the value two standard deviations below the mean of a young reference group, which is 5.4 kg/m2 for women. [12] We used 5.4 kg/m2 as the cut-off value to determine the prevalence of sarcopenia among elderly Korean women.
BMI is commonly used to classify overweight and obesity in adults. According to the International Obesity Task Force recommendation, the BMI categories were as follows: underweight (<18.5 kg/m2), normal (18.5 to 23 kg/m2), overweight (23 to 25 kg/m2), and obese (25 to 30 kg/m2). [13]
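A minimal sketch of how these definitions combine in practice, assuming DXA-derived ASM plus measured height and weight as inputs; the function names are illustrative, but the cut-offs (SMI < 5.4 kg/m2 for sarcopenia in women, BMI ≥ 25 kg/m2 for obesity) are the ones stated above.

```python
def smi(asm_kg: float, height_m: float) -> float:
    """Skeletal muscle mass index: appendicular skeletal muscle mass / height squared."""
    return asm_kg / height_m ** 2

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight / height squared."""
    return weight_kg / height_m ** 2

def classify_so(asm_kg: float, weight_kg: float, height_m: float) -> dict:
    """Apply the AWGS cut-off (SMI < 5.4 kg/m2 for women) and the BMI >= 25
    obesity cut-off; sarcopenic obesity (SO) is the overlap of the two."""
    sarcopenia = smi(asm_kg, height_m) < 5.4
    obesity = bmi(weight_kg, height_m) >= 25.0
    return {"sarcopenia": sarcopenia, "obesity": obesity,
            "sarcopenic_obesity": sarcopenia and obesity}

# Illustrative example: ASM 14 kg, weight 62 kg, height 1.52 m.
print(classify_so(14.0, 62.0, 1.52))
```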
Statistical analysis
The χ2 test was used to compare categorical measures between groups defined by BMI. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated for the associations between obesity and sarcopenia using logistic regression analyses. All P-values of less than 0.05 were considered statistically significant. All analyses were conducted using SPSS software version 20.0 (SPSS Inc., Chicago, IL, USA).
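For intuition about the reported association, the sketch below computes a crude odds ratio and Wald 95% confidence interval from a 2×2 table of obesity by sarcopenia status; the counts are reconstructed from the prevalence figures given in the abstract, whereas the study itself used logistic regression in SPSS.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio and Wald CI from a 2x2 table:
       a = obese & sarcopenic,     b = obese & non-sarcopenic,
       c = non-obese & sarcopenic, d = non-obese & non-sarcopenic."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Counts derived from the abstract: 55/905 obese vs (63+320+95)/1491 non-obese sarcopenic women.
print(odds_ratio_ci(55, 905 - 55, 63 + 320 + 95, 1491 - (63 + 320 + 95)))
# -> roughly OR 0.14 (95% CI 0.10-0.18), consistent with the reported estimate.
```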
The 2008 to 2011 KNHANES included 2,396 women ≥65 years (Fig. 1). Regarding age distribution, 1,611 were ... (Table 1). Characteristics of the participants (BMI, SMI, and bone mass density) are presented in Table 1 as mean±standard deviation or percentage distribution, as appropriate; the unweighted sample sizes are shown, but the results reflect the weighted sample; moderate physical activity was defined as 5 or more days of moderate-intensity activity for at least 30 min per day and walking physical activity as 5 or more days of walking of at least 30 min per day; significance was compared between the non-sarcopenia and sarcopenia groups using Student's t-test or the Pearson χ2 test (BMI, body mass index; SMI, skeletal muscle mass index). The overall prevalence of sarcopenia among women ≥65 years was 22.2% (Fig. 2). The prevalence of SO was 2.3% among elderly women ≥65 years (Table 2). Participants with a BMI of 25 or above had lower odds of having sarcopenia (OR, 0.14; 95% CI, 0.10-0.18; P<0.01) than those with a BMI <25.
DISCUSSION
This study evaluated the prevalence of SO in healthy elderly women in Korea. The results showed that the overall prevalence of SO was 2.3% in women of 65 years or older. The prevalence of sarcopenia was different among groups according to BMI. The prevalence rate of sarcopenia in obese women was lower than that in non-obese women.
To clarify the definition of sarcopenia, various working groups have published consensus papers. We addressed the prevalence of sarcopenia using 5.4 kg/m2, the cut-off value recommended by the AWGS. We also defined obesity as BMI ≥25 according to the International Obesity Task Force recommendation.
The prevalence of SO significantly differed from those of previous studies, [6,14-24] ranging from 0% to 48% depending on the background of the studied population, the parameters, and the cut-off values used for muscle volume quantification. Furthermore, the methods and cut-off values used to define obesity were different among these studies. In this study, the prevalence rate of SO was lower than in other previous studies. In general, SO prevalence is lower using height-adjusted SMI compared to weight- or BMI-adjusted SMI. This phenomenon is prominent in Korea because differences in height between age groups are greater among Koreans compared to those in other countries. [25] Moreover, SO prevalence is higher using % body fat compared to BMI as a parameter of obesity. [26] In this study, we used height-adjusted SMI and BMI as parameters of SO. This might have resulted in a lower prevalence of SO compared to that in other previous studies.
Patients with obesity were at decreased risk of prevalent sarcopenia in this study. This finding is consistent with findings of others. In a previous cohort study, low BMI was a risk factor for both current and future sarcopenia in very old age. The prevalence and incidence of sarcopenia were high in the underweight group in the present study. Low BMI might be a reasonable proxy for low lean mass. [27] Previously, BMI has been found to be the only predictor of skeletal muscle mass for women. [27] It is strongly and negatively correlated with the prevalence of sarcopenia. [28] Although BMI is commonly used as a surrogate parameter for obesity, it indicates weight-for-height without considering differences in body composition or contribution of body fat to overall body weight. The mass of muscle is reduced while that of fat tissues is increased in the elderly. Thus, BMI might be a poor marker of body fat because it does not distinguish between fat and lean body mass, especially in old ages. This is a kind of "Obesity Paradox" commonly mentioned in heart failure, cardiovascular disease, and other chronic diseases. [29] SO is a novel concept that has become more important in the elderly population. However, different definitions of sarcopenia and SO limit the discovery of clinical application of this disease with regard to other metabolic diseases and cardiovascular diseases. Therefore, consensus definition needs to be established for SO to promote standardized diagnosis and management for this disease in future study.
This study has several limitations. First, we only considered low muscle mass when defining sarcopenia regardless of muscle function. Second, we did not specifically exclude women with metabolic diseases such as hyperthyroidism or diabetes mellitus that might influence lean body
mass.
This study also has several strengths. A large number of participants were included and analyzed using data obtained from the KNHANES. Furthermore, the data of the KNHANES are representative of the entire Korean population. To the best of our knowledge, this is the first study in Korea addressing the prevalence of sarcopenia using the AWGS-recommended cut-off of 5.4 kg/m2.
In conclusion, the prevalence rate of sarcopenia in obese women was lower than that in non-obese women. We examined the prevalence of SO in Korean elderly women using the AWGS recommendation. The prevalence of SO was lower compared to other previous studies because of the different methods used to define it. A consensus definition of SO needs to be established. | 2018-04-03T00:56:53.218Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "f5b9cb00f2e1a68bf93a906d25b7079785c33b1d",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc5854823?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "f5b9cb00f2e1a68bf93a906d25b7079785c33b1d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270235692 | pes2o/s2orc | v3-fos-license | Application of a Modal Parameter Identification Method Based on Variational Mode Decomposition in Flight Flutter Testing
Signals of flight flutter testing exhibit non-stationary characteristics, closely spaced modes, and low signal-to-noise ratio, presenting challenges in data processing. In recent years, the variational mode decomposition (VMD) method has emerged as a promising approach to mitigate mode mixing and exhibit robust noise resistance. Therefore, a novel time-frequency domain modal parameter identification method based on VMD is proposed to process impulse response signals in flight flutter testing. The modal frequency and damping ratio are determined through a three-step process: VMD analysis, Hilbert transform, and least square fitting. The efficacy of the proposed method in identifying closely spaced modes and resisting noise is validated through a numerical example. Furthermore, this method is applied to analyze two types of pulse excitation signals in actual flight flutter testing: one induced by the pilot’s shaking stick and the other induced by small rocket excitation. The obtained modal parameters are compared with those from ground vibration tests and specialized software, respectively, to showcase the effectiveness and superiority of the proposed method.
Introduction
Flight flutter testing is mandated for all new aircraft or those undergoing significant structural modifications, and it is a globally acknowledged high-risk subject in the realm of flight testing. The primary objective of this testing is to acquire structural modal parameters (frequency and damping) for extrapolating the aircraft flutter boundaries. Effective excitation of structural modes and precise modal parameter identification are crucial for accurately obtaining these modal parameters and ensuring the safety of flight testing [1].
During flight flutter testing, commonly used excitation methods include turbulent excitation [2], frequency-swept excitation [3], and pulse excitation [4]. These methods can be applied individually or in combination depending on the specific testing conditions. The resulting response signals often demonstrate non-stationary behavior, presenting multiple modes with closely spaced frequencies and vulnerability to noise interference. This complexity poses challenges for the accurate estimation of modal parameters.
For turbulent and frequency-swept excitations, various modal parameter identification methods are available, while pulse excitation data has received comparatively less attention. Currently, specialized software such as "Prin80" is commonly utilized to analyze pulse excitation data, particularly in the context of national aircraft certification. This software employs a frequency-domain approach based on polynomial fitting [5]. However, it's worth noting that the underlying Fourier transform method is influenced by frequency resolution when processing short-time pulse data, and tends to exhibit suboptimal performance in identifying closely spaced modes.
As modal parameter identification techniques evolve, various time-frequency domain methods have been developed. Among them, the Hilbert-Huang transform (HHT) [6,7] stands out, comprising empirical mode decomposition (EMD) and the Hilbert transform. However, the core EMD technique suffers from mode mixing issues due to its recursive nature. While ensemble empirical mode decomposition (EEMD) [8] can mitigate this problem to some extent, it significantly increases computational complexity.
Variational mode decomposition (VMD) [9], introduced in 2014, represents an adaptive and non-recursive signal decomposition method. It transforms the mode decomposition challenge into a variational solution problem. Through an iterative search for the optimal solution to the variational model, the original signal is decomposed into a discrete number of components, each closely aligned with a corresponding center frequency. This process dynamically determines the center frequency and bandwidth of each component, enabling effective adaptive separation. Extensive research has validated the VMD method's solid theoretical foundation, showcasing superior anti-noise performance and anti-mode-mixing capabilities. In recent years, its successful application has extended across various domains, including machinery, electronics, biology, and energy, with notable achievements, particularly in the realm of mechanical fault diagnosis [10,11,12].
Given the noted advantages of the VMD method and the specific requirements of practical flight flutter testing, this paper proposes a novel time-frequency domain approach for modal parameter identification, termed the 'VMD-HT method'. This method combines variational mode decomposition (VMD) with the Hilbert transform and least square fitting. The application of the VMD-HT method to flutter signals induced by pulse excitations demonstrates its commendable accuracy in modal parameter identification, highlighting its significant engineering application value.
Variational mode decomposition
In the VMD algorithm, the intrinsic mode function (IMF) is redefined as an amplitude-modulated, frequency-modulated signal $u_k(t)$, written as $u_k(t) = A_k(t)\cos(\phi_k(t))$, where $A_k(t)$ and $\phi_k(t)$ are the instantaneous amplitude and phase of the kth IMF, respectively, at time t.
Then the instantaneous frequency can be obtained as $\omega_k(t) = \mathrm{d}\phi_k(t)/\mathrm{d}t$. Assuming that the multi-component signal $f(t)$ can be decomposed into $K$ IMF components, each with a central frequency $\omega_k$ and limited bandwidth, the constrained variational model can be established as
$$\min_{\{u_k\},\{\omega_k\}} \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \quad \text{s.t.} \quad \sum_{k=1}^{K} u_k(t) = f(t), \qquad (2)$$
where $\delta(t)$ is the unit impulse, $*$ denotes convolution, and $j = \sqrt{-1}$.
The optimal solution of the constrained variational problem in equation (2) can be obtained by converting it into an unconstrained problem; the generalized (augmented) Lagrange function is given as
$$L(\{u_k\},\{\omega_k\},\lambda) = \alpha \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_{k=1}^{K} u_k(t) \right\|_2^2 + \left\langle \lambda(t),\, f(t) - \sum_{k=1}^{K} u_k(t) \right\rangle,$$
where $\alpha$ is the quadratic penalty term and $\lambda(t)$ is the Lagrangian multiplier. To solve this variational problem, the alternating direction method of multipliers is used to repeatedly update each mode and its centre frequency until all the components in the frequency domain are determined.
In the frequency domain, the mode and centre-frequency updates take the form
$$\hat{u}_k^{\,n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k}\hat{u}_i(\omega) + \hat{\lambda}(\omega)/2}{1 + 2\alpha(\omega - \omega_k)^2}, \qquad \omega_k^{\,n+1} = \frac{\int_0^{\infty} \omega\,|\hat{u}_k^{\,n+1}(\omega)|^2\,\mathrm{d}\omega}{\int_0^{\infty} |\hat{u}_k^{\,n+1}(\omega)|^2\,\mathrm{d}\omega},$$
where the hat denotes the Fourier transform. It is important to note that the values of $K$ and $\alpha$ need to be predefined when using the VMD method. In this paper, $K$ is initially determined by the number of spectral peaks of the signal, while $\alpha$ has been found to have little impact on the decomposition results when it is of the order of several thousand. Therefore, no additional discussion on $\alpha$ is provided here.
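A compact way to prototype this update scheme is sketched below. It is a simplified implementation that works on the one-sided real FFT, omits the mirror extension and convergence checks of the original algorithm, and uses a fixed number of iterations; the interface and default values are illustrative choices rather than the settings used in this paper.

```python
import numpy as np

def vmd(signal, K, alpha, tau=0.0, n_iter=500, fs=1.0):
    """Simplified variational mode decomposition (after Dragomiretskiy and Zosso, 2014).
    Returns (modes, center_frequencies_in_Hz)."""
    N = len(signal)
    f_hat = np.fft.rfft(signal)                      # one-sided spectrum of the input
    freqs = np.fft.rfftfreq(N)                       # normalized frequencies in [0, 0.5]
    u_hat = np.zeros((K, freqs.size), dtype=complex) # spectra of the K modes
    omega = (np.arange(K) + 0.5) * 0.5 / K           # evenly spread initial center frequencies
    lam = np.zeros(freqs.size, dtype=complex)        # Lagrangian multiplier (frequency domain)

    for _ in range(n_iter):
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Wiener-filter-like update of mode k around its current center frequency
            u_hat[k] = (f_hat - others + lam / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2)
            power = np.abs(u_hat[k]) ** 2
            # New center frequency: power-weighted mean frequency of the mode
            omega[k] = np.sum(freqs * power) / (np.sum(power) + 1e-12)
        # Dual ascent on the reconstruction constraint (tau = 0 tolerates residual noise)
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))

    modes = np.vstack([np.fft.irfft(u_hat[k], n=N) for k in range(K)])
    return modes, omega * fs
```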
VMD-HT Method
The decoupled dynamic equation of a discrete multi-degree-of-freedom system is expressed, for the kth modal coordinate $q_k(t)$, as $\ddot{q}_k(t) + 2\zeta_k\omega_k\dot{q}_k(t) + \omega_k^2 q_k(t) = \Phi_k^{T} Q_k(t)/m_k$, where $\zeta_k$, $m_k$ and $\Phi_k$ denote the damping ratio, mass, and modal vector of the kth mode, respectively, and $Q_k(t)$ denotes the unit impulse force vector applied at $x_0$. For continuous systems, the steady-state solution of the equation under the zero initial value condition is obtained using the Duhamel integral to derive the displacement response of the kth modal coordinate. By taking the second-order derivative of the displacement, the acceleration response at any point in physical coordinates can be further derived as a superposition of exponentially decaying harmonics,
$$a(t) = \sum_{k=1}^{K} A_k\, e^{-\zeta_k\omega_k t} \cos(\omega_{dk} t + \theta_k), \qquad (7)$$
with $\omega_{dk} = \omega_k\sqrt{1-\zeta_k^2}$ the damped natural frequency. It can be observed in equation (7) that the acceleration signal is composed of single-frequency amplitude-modulated harmonics, which can be decomposed by VMD to obtain the different IMFs. A single component of equation (7) can be written as $u_k(t) = A_k e^{-\zeta_k\omega_k t}\cos(\omega_{dk} t + \theta_k)$. Performing the Hilbert transform of $u_k(t)$ to obtain $U_k(t)$, the analytic signal is established, i.e.
$z_k(t) = u_k(t) + jU_k(t)$. Therefore, the instantaneous amplitude $A_k(t) = |z_k(t)|$ and instantaneous phase $\varphi_k(t) = \arg z_k(t)$ are introduced. Then, the applicable part of the data is selected for interception (the interception method is shown in the numerical example), and the logarithm of the instantaneous amplitude and the instantaneous phase are obtained as
$$\ln A_k(t) = \ln A_k - \zeta_k\omega_k t, \qquad \varphi_k(t) = \omega_{dk} t + \theta_k. \qquad (10)$$
Utilizing equation (10), least square fitting of the intercepted data is carried out, and the slopes of the fitted lines for the instantaneous amplitude logarithm and the instantaneous phase are obtained, respectively, as
$$a_k = -\zeta_k\omega_k, \qquad b_k = \omega_{dk}. \qquad (11)$$
Finally, the natural frequency and damping ratio of each mode can be calculated by
$$\omega_k = \sqrt{a_k^2 + b_k^2}, \qquad f_k = \frac{\omega_k}{2\pi}, \qquad \zeta_k = \frac{-a_k}{\sqrt{a_k^2 + b_k^2}}. \qquad (12)$$
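The three steps just described (Hilbert transform, interception, and least square fitting) translate directly into a short routine. The sketch below assumes the IMF is a single exponentially decaying mode of the form in equation (7) and that the user supplies the interception window (for instance, the falling edge of the envelope); it returns the natural frequency in Hz and the damping ratio.

```python
import numpy as np
from scipy.signal import hilbert

def identify_mode(imf, fs, i_start, i_end):
    """Estimate (natural frequency in Hz, damping ratio) of a decaying IMF
    u(t) = A * exp(-zeta*omega_n*t) * cos(omega_d*t + theta) from the slopes
    of its log instantaneous amplitude and unwrapped instantaneous phase."""
    analytic = hilbert(imf)                         # analytic signal u + j*H[u]
    t = np.arange(len(imf)) / fs
    seg = slice(i_start, i_end)                     # interception window chosen by the user

    log_amp = np.log(np.abs(analytic[seg]))         # ln A(t) = ln A - zeta*omega_n*t
    phase = np.unwrap(np.angle(analytic[seg]))      # phi(t) = omega_d*t + theta

    a = np.polyfit(t[seg], log_amp, 1)[0]           # slope = -zeta*omega_n
    b = np.polyfit(t[seg], phase, 1)[0]             # slope = omega_d (rad/s)

    sigma = -a                                      # decay rate zeta*omega_n
    omega_d = abs(b)                                # damped natural frequency
    omega_n = np.hypot(sigma, omega_d)              # undamped natural frequency
    return omega_n / (2 * np.pi), sigma / omega_n   # (f in Hz, damping ratio zeta)
```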
Numerical example
To evaluate the ability of the VMD-HT method to identify closely spaced modes and resist noise, an impulse response signal resembling equation (7) is synthesized. The original composite signal f, expressed in equation (13), comprises modes with frequencies of 8 Hz, 8.3 Hz, and 16 Hz, respectively, along with additive Gaussian noise with a signal-to-noise ratio (SNR) of 10 dB. The signal has a duration of 5 s and is sampled at a rate of 256 Hz. Firstly, we employ the VMD-HT method to identify the modal parameters of the aforementioned signal. Here, K=3 and α=2000 are chosen. The time history and frequency spectrum of the original signal and the intrinsic mode functions (IMFs) obtained by VMD are displayed in Figure 1. Notably, due to the limited frequency resolution of 0.2 Hz for the 5 s signal, distinguishing the two closely spaced modes, 8.0 Hz and 8.3 Hz, in the spectrum of the original signal proves challenging. However, VMD can still successfully extract them as IMF1 and IMF2. After obtaining the instantaneous amplitude of each IMF through the Hilbert transform, we selectively intercept the data corresponding to the exponential attenuation part of the IMF for subsequent fitting, ensuring greater accuracy in the fitting result. Figure 2 demonstrates the selection of the instantaneous amplitude data of the falling edge based on the shape of IMF1. Subsequently, through least square fitting of the instantaneous amplitude logarithm and instantaneous phase of the intercepted data, the fitting curves in Figure 3 are generated, and the slopes of the lines are calculated using equations (10) and (11). The effective fitting ensures the accuracy of modal parameter determination. Finally, the frequency and damping ratio of the three modes are obtained according to equation (12). Additionally, the aforementioned signal is also analyzed using the polynomial fitting method in the Prin80 software, a common tool for pulse data in flight flutter testing, as illustrated in Figure 4. The red curves represent the fitting result of the frequency response function for different modes. Due to the influence of poor frequency resolution, the peak near 8 Hz in the spectrum is easily processed into a single mode, highlighting a limitation of the frequency domain method. However, in this case, we treat the peak as two modes to assess its ability to recognize closely spaced modes. Furthermore, the disparities between the results obtained from the two methods and the theoretical values are compared in Table 1. In general, accurately identifying damping is more challenging than frequency. As observed, the identification accuracy of the two methods for mode 2 and mode 3 is essentially the same. However, for mode 1, the VMD-HT method demonstrates higher accuracy in identifying its damping ratio, indicating that the VMD-HT method holds more promising advantages in processing multimode impulse signals with noise.
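For readers wishing to reproduce a comparable test case, the snippet below synthesizes a three-mode decaying signal at the quoted frequencies, duration, and sampling rate, and adds white Gaussian noise at 10 dB SNR. The amplitudes and damping ratios are illustrative assumptions, since the exact coefficients of equation (13) are not restated here; the result can then be passed through the vmd and identify_mode sketches above with K = 3.

```python
import numpy as np

fs, duration = 256, 5.0
t = np.arange(int(fs * duration)) / fs

# Assumed modal parameters (frequency in Hz, damping ratio, amplitude) for illustration.
modal_params = [(8.0, 0.02, 1.0), (8.3, 0.02, 1.0), (16.0, 0.03, 0.8)]

clean = np.zeros_like(t)
for f_n, zeta, amp in modal_params:
    omega_n = 2 * np.pi * f_n
    omega_d = omega_n * np.sqrt(1 - zeta ** 2)
    clean += amp * np.exp(-zeta * omega_n * t) * np.cos(omega_d * t)

# White Gaussian noise scaled to give a 10 dB signal-to-noise ratio.
noise_power = np.mean(clean ** 2) / 10 ** (10.0 / 10)
noisy = clean + np.random.default_rng(0).normal(0.0, np.sqrt(noise_power), t.size)
```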
Application in flight flutter testing
Pulse excitation is a prevalent technique in flight flutter testing. The subsequent examples showcase the practical application of the VMD-HT method, specifically in the context of the pilot's shaking stick excitation for low-frequency modes and small rocket excitation for high-frequency modes.
Pilot's shaking stick excitation
The pilot's shaking stick is a commonly employed and easily implemented excitation method that doesn't necessitate additional devices on the aircraft. However, due to the low frequency of the pilot's manual excitation, it is unable to stimulate the high-frequency modes of the aircraft. Consequently, this method is typically utilized as an auxiliary means of excitation. During actual flight tests, for certain low-frequency modes on the wing, the pilot deflects the aileron by shaking the stick. This action applies pulse excitation to the aircraft, eliciting the corresponding impulse response signal. The same approach is taken for exciting the tail modes. Figure 5 illustrates an example of employing the pilot's shaking stick to deflect the ailerons and subsequently acquiring the pulse response signals of the wings during the flight flutter testing of a civil aircraft. The vibration on the leading edge of the right wing, as depicted in Figure 5, is chosen as the original signal and analyzed using the VMD-HT method. Figure 6 shows the IMFs obtained through VMD analysis, using K=2 and α=2000. The spectrum of the IMFs aligns well with that of the original signal, as observed in Figure 7, highlighting the beneficial effect of the VMD method in extracting components.
Additionally, Table 2 presents a comparison between the calculated results of this method and those computed by the Prin80 software. The table also provides the two modal frequencies derived from the ground vibration test, which are used to determine the modal types. Due to the increased stiffness caused by aerodynamic forces in the air, the frequencies of the bending modes of the wing will be higher than those measured on the ground. The damping observed in the ground test is much smaller than that in the flight test, making it not suitable for reference here. The results show that the identification outcomes of the VMD-HT method for low-frequency modes are reasonable and reliable, consistent with the results obtained from the software and the ground vibration test. This demonstrates that the VMD-HT method can be effectively employed for the data processing of pilot's shaking stick excitation.
Small rocket excitation
A small rocket, sometimes referred to as "bonkers" in early literature, serves as an excitation device in flight flutter testing. It operates by igniting gunpowder through electric ignition, creating a pulse force, as illustrated in Figure 8. This device is affixed to the wing, tail, or an external component of the aircraft. Upon activation, sensors on the aircraft measure the resulting pulse acceleration response. In an actual flight test, as illustrated in Figure 9, four sensors are symmetrically positioned at the tip and center of both the left and right wings, respectively. The installation of a small rocket on the left wing induces an upward pulse force during excitation, effectively stimulating the antisymmetric modes of the wing. All four sensors recorded distinct responses with a sampling rate of 256 Hz. Subsequently, their data were individually analyzed using the VMD-HT method and the Prin80 software for comprehensive evaluation. In this context, we selected K=5 and α=2000 for the VMD analysis.
Recognizing the pivotal role of the VMD analysis quality in ensuring the accuracy of the VMD-HT method, Figures 10 and 11 present the modal decomposition results obtained from sensor LW2's data, successfully capturing five key modes spanning low to high frequencies. Finally, Figure 12 illustrates a three-dimensional graph simultaneously comparing modal parameters (frequency and damping ratio) derived from different sensors using both methods.
During flight flutter testing, it is essential to observe the variation trend of modal parameters. However, in general, there are discrepancies in the identification of the same mode among different wing sensors. If the differences are excessively large, engineers may face challenges in selecting the appropriate modal parameters. Based on the results presented in Figure 12, it is evident that, compared to the Prin80 software, the frequency and damping ratio obtained using the VMD-HT method show greater similarity across different sensor data, particularly for modes 2-4. Moreover, there is enhanced consistency in identification results among symmetrically mounted sensors. In contrast, the Prin80 software yields widely varying modal parameters for the same mode with sensors in different positions. The study demonstrates that the VMD-HT method, applied to small rocket excitation, improves the stability and robustness of modal parameter identification in flight flutter testing, regardless of sensor positions on the wings.
Conclusions
This study proposes a VMD-HT method for identifying modal parameters in impulse signals during flight flutter testing. Numerical results indicate that the VMD-HT method outperforms the traditional frequency-domain method found in the specialized software Prin80, especially in identifying closely spaced modes. Furthermore, the VMD-HT method is applied to impulse signals induced by the pilot's shaking stick and small rocket excitation during actual flight flutter testing, respectively. Through comparison and analysis, it is shown that the VMD-HT method effectively identifies both low-frequency and high-frequency structural modes, enhancing the accuracy and robustness of modal parameter identification in engineering applications. This method proves to be a valuable tool for post-processing and analyzing flight flutter data.
Figure 1. Time history (left) and frequency spectrum (right) of the original signal and IMFs.
Figure 3. The fitting curves for the instantaneous amplitude logarithm and instantaneous phase.
Figure 4. Diagram of the frequency response function obtained using the Prin80 software.
Figure 5. Diagram of aileron deviation and the corresponding vibration of the wing.
Figure 6. Time history of the original signal and IMFs obtained by VMD.
Figure 8. The small rocket excitation device used in flight flutter testing.
Figure 9. Position diagram of the sensors and the small rocket on the wing.
Figure 12. Comparison of the modal parameters obtained by the two methods between different sensors.
Table 1. Comparison between the calculated values and the theoretical values (SNR = 10 dB).
Table 2.
The comparison between the calculated value and the ground test | 2024-06-05T15:11:58.567Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "a85aa88e1d5d35d9ae67828189277321ca12edce",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/2762/1/012050/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "57ca1ffd7c21dc3ab9ac9a1af6e85d4124d193c2",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
2984282 | pes2o/s2orc | v3-fos-license | Investigating the shape bias in typically developing children and children with autism spectrum disorders
Young typically developing (TD) children have been observed to utilize word learning strategies such as the noun bias and shape bias; these improve their efficiency in acquiring and categorizing novel terms. Children using the shape bias extend object labels to new objects of the same shape; thus, the shape bias prompts the categorization of object words based on the global characteristic of shape over local, discrete details. Individuals with autism spectrum disorders (ASDs) frequently attend to minor details of objects rather than their global structure. Therefore, children with ASD may not use shape bias to acquire new words. Previous research with children with ASD has provided evidence that they parallel TD children in showing a noun bias, but not a shape bias (Tek et al., 2008). However, this sample was small and individual and item differences were not investigated in depth. In an extension of Tek et al. (2008) with twice the sample size and a wider developmental timespan, we tested 32 children with ASD and 35 TD children in a longitudinal study across 20 months using the intermodal preferential looking paradigm. Children saw five triads of novel objects (target, shape-match, color-match) in both NoName and Name trials; those who looked longer at the shape-match during the Name trials than the NoName trials demonstrated a shape bias. The TD group showed a significant shape bias at all visits, beginning at 20 months of age while the language-matched ASD group did not show a significant shape bias at any visit. Within the ASD group, though, some children did show a shape bias; these children had larger vocabularies concurrently and longitudinally. Degree of shape bias elicitation varied by item, but did not seem related to perceptual complexity. We conclude that shape does not appear to be an organizing factor for word learning by children with ASD.
Introduction
The shape bias is a principle or strategy that children utilize during language acquisition to rapidly learn new nouns. This bias is exhibited when a child extends the name of an object to new objects of the same shape rather than other characteristics such as color or texture . For example, a child first learning "ball" with reference to a round blue ball would extend that label to other round objects, rather than to other blue objects. The shape bias is robust among typically developing (TD) children older than 18 months or so (Landau et al., 1988;Graham and Poulin-Dubois, 1999;Samuelson and Smith, 2000;Perry and Samuelson, 2011); however, it is not yet clear whether children with neurodevelopmental disorders, such as autism spectrum disorder (ASD), also use the shape bias in word learning. The shape bias has been linked with noun learning (Samuelson, 2002;Perry and Samuelson, 2011); because many children with ASD appear to have little difficulty acquiring a vocabulary of nouns (Eigsti et al., 2007;Swensen et al., 2007), they might also be predicted to show a shape bias. However, the shape bias also requires children to attend to the overall shapes of objects rather than their smaller perceptual details and children with ASD are known to preferentially focus on such details (Happe and Frith, 2006); thus, acquiring a shape bias might be difficult for them (Tek et al., 2008). Furthermore, the ASD population is extremely heterogeneous, with some children apparently developing language typically whereas others manifest language impairments (Tager-Flusberg and Caronna, 2007); therefore, it is possible that a shape bias might be observed in some children with ASD but not others. In the current study, we address both of these issues with a longitudinal investigation of the shape bias in a sample of children with ASD. We address the question of perceptual focus by including stimuli that vary in visual detail and by assessing whether the children focus on overall shape during non-naming as well as naming trials. We address the question of subgroups by including a relatively large (n > 30) sample of children with ASD, who vary widely in their language abilities. This large and varied sample, together with the longitudinal design, also allows us to investigate a number of possible relationships between children's vocabulary size and eventual attainment of a shape bias.
In TD children, the shape bias has been proposed to emerge during the second year of life, in response to their early acquisition of a set of nouns whose referents are objects with differentiated shapes (Smith, 2000;Smith et al., 2002). Support for this proposal comes from studies showing that toddlers who are taught novel nouns with differentiated-shape referents demonstrate a shape bias earlier than children who are taught novel nouns organized by material (Samuelson, 2002;Smith et al., 2002). Moreover, Perry and Samuelson (2011) have recently reported that toddlers who have more words for solid objects organized by shape than for solid objects organized by material show a more consistent shape bias-i.e., the shape bias is seen across more trials. Learning the shape bias seems to have positive consequences for later vocabulary growth, as children who are shown to demonstrate a shape bias at one time point subsequently are reported to have larger vocabularies at later time points (Samuelson and Smith, 2000;Smith et al., 2002). Alternative frameworks have also been proposed, suggesting that the shape bias results from general conceptual mechanisms instead of from the noun-learning process. These frameworks emphasize the function of the creator's intent for a particular shaped object as the cause for generalization of the name .
Especially early in development, children's demonstration of a shape bias is also influenced by visual properties of the objects themselves. That is, even though object shape is a salient property to preverbal infants (Hupp, 2008), extracting shape similarities across diverse objects is not always a straightforward task. For example, Son et al. (2008) have demonstrated that TD toddlers show a stronger shape bias with perceptually simple objects (e.g., with a smooth shape and a single color) compared with more complex ones (e.g., with more edges and more than one color). Similarly, Tek et al. (2012) found that toddlers extended labels to new objects more consistently if those new objects matched the original only and exactly in shape and were paired with objects that matched the original only and exactly in color, whereas test object pairs that shared some color and shape details with each other were actually more likely to elicit a material bias.
Effects of perceptual detail might be expected to be even stronger in children with ASD, because of their tendency to focus on the small physical details of objects (Happe and Frith, 2006). While enhanced attention to detail can be a strength (e.g., Mottron et al., 2006), over-emphasizing small visual details to define objects can hinder children with ASD from noticing the overall shape similarities of those objects. Thus, they might develop a shape bias that is weaker, and/or emerges later, than that of their TD peers. Consistent with this hypothesis is the common observation that children with ASD manifest delays in the onset of language development; many also show significant impairments in pragmatic abilities and some show grammatical delays or impairments as well (Tager-Flusberg, 2004;Eigsti et al., 2007;Goodwin et al., 2012;Tek et al., 2014). However, researchers have also reported that many children with ASD acquire a substantial vocabulary (Eigsti et al., 2007;Tek et al., 2014). Similarly to TD children, their first words are usually object words, and they demonstrate a noun bias when presented with novel words that could be mapped onto objects or actions (Tager-Flusberg et al., 1990;Swensen et al., 2007). Thus, it is possible that at least some children with ASD have acquired a shape bias for use in learning new words.
To our knowledge, only one published study has investigated the existence of a shape bias in young children with ASD. Tek et al. (2008) examined a group of 15 children with ASD across 12 months of development beginning when they were between 2 and 3 years of age; a TD group (MA = 20 months), which was matched on language to the ASD group at the initial visit, was also tested. The method of assessing language was intermodal preferential looking (IPL; Golinkoff et al., 1987;Naigles and Tovar, 2012), in which children view side-by-side videos and hear a linguistic stimulus that matches only one of the videos. This method has elicited good comprehension of some aspects of language from young children with ASD, partly because it allows them to express their language skills without their social cognitive deficits impeding their performance (Swensen et al., 2007;Naigles et al., 2011;Sasson et al., 2013;Venker et al., 2013).
Indeed, Tek et al. (2008) found that both TD and ASD groups demonstrated usage of a noun bias via the IPL paradigm, in that they preferentially mapped novel words onto novel objects rather than novel actions. Both groups were also tested on the shape bias four times over the course of a year. Beginning at visit 2, when they averaged 24 months of age, the TD children looked significantly longer at the shape-match object during novel-name trials compared with control trials; thus, they demonstrated a shape bias. In contrast, the ASD group did not show the same effects even at the fourth visit, when they averaged 45 months of age and had a lexicon of more than 100 nouns. These children also completed a pointing version of the shape bias task, with 3-dimensional versions of the target and test objects. The pointing task elicited a shape bias from the TD group at 28 and 32 months of age, but no group-wide shape bias was observed from the ASD group at any visit. Thus, the IPL findings replicated those from the pointing task; the earlier demonstration of the shape bias in the TD group via IPL vs. pointing is consistent with other research showing that implicit tasks elicit evidence of linguistic knowledge developmentally earlier than explicit tasks (Hirsh-Pasek and Golinkoff, 1996;Graham and Poulin-Dubois, 1999;Naigles, 2002;Goodwin et al., 2012;Piotroski and Naigles, 2012;Golinkoff et al., 2013). Tek et al. (2008) concluded that these children with ASD did not have a shape bias.
The underlying bases for the absence of a shape bias in children with ASD are still unknown; moreover, this study clearly needs further replication and extension. For one thing, Tek et al.'s (2008) report was from a study still in progress; those children with ASD also viewed the shape bias video at two subsequent visits, when they averaged 49 and 54 months of age. Thus, it is possible that the original study was underpowered, and a reliable shape bias will be seen with more children and/or later in development. Furthermore, while Tek et al. (2008) reported some indications of individual differences, in that children with ASD who had higher vocabulary scores on the MacArthur-Bates Communicative Development Inventory (MB-CDI) showed a stronger shape bias at one visit, they did not investigate the longitudinal antecedents or consequences of an emergent shape bias. Moreover, Tek et al. (2008) did not compare the looking patterns elicited by the different items to see if their perceptual complexity played a role in eliciting a shape bias. In sum, with the current study we address three questions: (1) Will children with ASD demonstrate a shape bias, as a group, with a larger sample size and developmental timespan? Alternatively, will a shape bias be seen consistently in some subgroup(s) of children with ASD? Because the IPL task seems to be more sensitive to the onset of the shape bias (see also Graham and Poulin-Dubois, 1999), we only report IPL findings here. (2) If shape bias performance varies within the ASD group, are their shape-match preferences predicted by their vocabulary size or content, and does their degree of shape-match preference predict later good language skills? (3) Does the perceptual complexity of the individual items play a role in the shape bias performance of the ASD group?
Materials and Methods
Participants
Participants for this longitudinal study included 35 TD children (29 male, 6 female) and 32 children with ASD (27 male, 6 female). Participants with ASD resided in Connecticut, Massachusetts, New Jersey, New York, and Rhode Island. Upon beginning the study, the ages of the children with ASD ranged from 24 to 42 months (M = 32.8, SD = 5.4). These participants had received a professional diagnosis of ASD within the previous 6 months and had begun interventions including 5-30 h per week of applied behavior analysis (ABA) therapy. The diagnosis for each child was confirmed at the first visit.
TD participants resided in the state of Connecticut. Upon beginning the study, their ages ranged from 18 to 23 months (M = 20.3, SD = 1.5). Status as a TD participant was also confirmed at the initial visit. At the beginning of the study, the TD and ASD groups did not differ in language or cognitive levels, but they differed significantly in adaptive functioning. By visit 6, the groups differed significantly in cognitive, language, and adaptive behavior scores (see Table 1). Informed consent was obtained from each child's parent or guardian at each visit. The University of Connecticut Institutional Review Board for human subjects approved all materials and procedures involved in this study.
Apparatus
The IPL videos were shown to each participant on a large projector screen set up in their home. The child sat approximately four feet in front of the screen, either alone on a familiar seat of their choice or with a parent or visiting research assistant. Participating parents and research assistants wore headphones playing classical music in order to mask the audio stimuli. A digital camera, focused on the child's face, was placed centrally below the screen, aligned with the child and adjusted for individual height and choice of seating arrangement. The speaker projecting the auditory stimuli was located behind the projection screen and was also aligned centrally with the digital camera and child (Naigles and Tovar, 2012).
Materials
The shape bias video was the same as that used by Tek et al. (2008). Novel objects were constructed from simple wooden blocks or plastic toys. Wooden blocks were painted with solid, striped, and polka dot design variations. Plastic toys were of unfamiliar shapes and enhanced with decorative paper. Across the objects, the levels of complexity in detail (intuitively operationalized as the number of corners) varied from low to mid to high. Most objects had an element of curvature in their overall structure. In total there were five target objects, five color pattern match objects and five shape-match objects (each color-match and shape-match corresponding with one target). Ordered from lowest to highest complexity of detail, the five novel target objects were labeled Tiz, Pim, Zup, Dax, and Pilk (see Figure 1). Each object was filmed moving slowly back and forth; each clip was 4 s long.
The video included a set of five NoName (i.e., control) trials followed by a set of five Name (i.e., test) trials. A sample video layout for one block of NoName and Name trials is shown in Table 2; trial 4 is the NoName test and trial 8 is the Name test. During the interstimulus interval, the child was re-centered via a flashing red dot. The side of first presentation of each object varied across objects within the video in an LRLRL pattern, and was counterbalanced between children and across visits (i.e., half of the children viewed variant A at visits 1, 3, and 5 and variant B at visits 2, 4, and 6; the other half experienced the opposite pattern). Variants A and B were also differentiated by the side of presentation of the shape-match, which varied in a LRRLL or RLLRR pattern. When the target was initially presented on either the right or left side of the screen, the opposing side remained black, without a video stimulus. The order in which the objects were presented differed between NoName and Name blocks. The target object did not remain visible on the screen during the simultaneous presentation of color-match and shape-match objects; thus, the children had to remember how it looked during the test trials.
Standardized Test Measures
The Autism Diagnostic Observation Schedule (ADOS; Lord et al., 1989) is a series of structured play activities constructed as a diagnostic assessment of ASDs; this was administered at visits 1 and 5. The MB-CDI (Fenson et al., 1991) is a parent-report standardized assessment measuring the child's early language development. There are three versions: Infant, Toddler, and Level III. The CDI Infant version is intended for children ages 8-16 months and measures both language production and comprehension.
Part one of this version consists of a 396-word vocabulary inventory including nouns, verbs, adjectives, pronouns, prepositions, and quantifiers. Part two assesses the child's use of actions and gestures for early non-verbal communication. The CDI Toddler version is intended for children ages 16-30 months. Part one of this version contains a 608-word vocabulary inventory. Part two assesses morphological and syntactic usage. CDI Level III is a 100-word expressive vocabulary inventory with a questionnaire assessing complex semantic, pragmatic, and grammatical usage. For this study only the vocabulary inventories were analyzed. Parents of TD children filled out the Infant version at visit 1, the Toddler version at visits 2 and 3, and Level III at visits 4 through 6. For the children with ASD, the schedule was identical to that of the TD children for visits 1 through 3. Starting at visit 4, parents of children with ASD filled out either the Toddler version or the CDI III, depending on their child's language level. The Vineland Adaptive Behavior Scales (Sparrow et al., 2005) is a parent-report questionnaire assessing the child's developmental milestones across the areas of communication, daily living, socialization, and motor skills. Scores are standardized to compute overall adaptive functioning.
The Mullen Scales of Early Learning (Mullen, 1995) assess the overall intellectual development of the child across the areas of cognition, expressive and receptive language, and motor development. Both raw and standard scores were used in the analyses.
Procedure
Children were visited in their homes every 4 months for a total of six visits. The first visit was separated into two sessions. At the first session the ADOS, CDI, Vineland, and Mullen were administered and the child was introduced to the IPL paradigm. At the second session, 1 week later, the child was shown the IPL videos. At subsequent visits the IPL videos were presented prior to all other activities.
The shape bias video was shown to each TD participant at visits 1 through 4 and was shown to each participant with ASD at visits 1 through 6. For all participants at visits 1 and 2, the shape bias video was shown as the second of three IPL videos. At visits 3 through 6, the shape bias video was shown first (see Swensen et al., 2007;Naigles et al., 2011;Goodwin et al., 2012;Tovar et al., 2015, for descriptions of the findings from the other videos).
IPL Coding
The recording of the child's face was digitized and uploaded to a custom coding program after each visit. During coding, research assistants did not have access to the accompanying auditory stimuli. Each child's visual fixations were coded frame by frame as right, left, center, or away for all trials. Looking patterns during NoName and Name test trials (e.g., trials 4 and 8 in Table 2) were then calculated, yielding the primary dependent variable of percent looking to shape-match (i.e., seconds looking to the shape-match divided by total time looking to the shape- and color-matches). The latency of the first look to the shape- and color-matches was also calculated but proved uninformative and so will not be considered further (Potrzeba, 2014). For 50% of the participants, multiple research assistants coded the recordings until inter-rater reliability within 0.3 s for each trial was achieved by two coders. For the other participants, inter-rater reliability was assessed for 10% of the data set; correlations between the two coders averaged 0.975 (p < 0.0001).
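To make the dependent variable concrete, the sketch below shows one way the percent-looking measure could be computed from frame-by-frame gaze codes. This is a minimal illustration, not the authors' coding software; the 30 fps frame rate, the data layout, and the function name are assumptions made for the example.

```python
# Hedged sketch: percent looking to the shape-match from frame-by-frame codes.
# The 30 fps rate and the list-of-strings layout are assumptions, not the
# study's actual coding format.
FPS = 30  # hypothetical frame rate of the digitized recording

def percent_to_shape_match(frame_codes, shape_side):
    """frame_codes: one of 'left'/'right'/'center'/'away' per video frame.
    shape_side: 'left' or 'right', where the shape-match appeared on this trial."""
    color_side = 'right' if shape_side == 'left' else 'left'
    looking = [c for c in frame_codes if c != 'away']
    if len(looking) / FPS < 1.0:          # < 1 s total looking: trial dropped
        return None
    shape = sum(c == shape_side for c in looking)
    color = sum(c == color_side for c in looking)
    if shape + color == 0:
        return None
    return 100.0 * shape / (shape + color)

# Example: a 4-s trial at 30 fps with the shape-match on the left
codes = ['left'] * 70 + ['right'] * 30 + ['away'] * 20
print(percent_to_shape_match(codes, 'left'))  # -> 70.0
```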
Trial and Visit Elimination
For each of the visits, participants' data were eliminated for a number of reasons. Individual trials with a total looking time of less than 1 s were eliminated because the children's attention to the stimulus was too brief; these trials were designated as missing and not replaced. Individual participants were eliminated at a given visit if they provided a total of fewer than three paired (i.e., involving the same target item) Name and NoName trials, a side bias of greater than 75% across test trials, or a missed visit. For the ASD group, seven participants were eliminated at visit 1, three participants at visit 2, two participants at visit 3, two participants at visit 4, two participants at visit 5, and one participant at visit 6. For the TD group, 12 participants were eliminated at visit 1, three participants at visit 2, and one participant at visit 3 (see Table 3).
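A hedged sketch of how these visit-level exclusions could be applied is shown below; the per-trial record format (including a 'pct_right' field for the side-bias check) is an assumption made only for illustration.

```python
# Illustrative visit-level exclusion, assuming one dict per trial with keys
# 'item', 'phase' ('NoName'/'Name'), 'pct_shape' (None if the trial was
# dropped) and 'pct_right' (proportion of looking time to the right side).
def visit_is_usable(trials, min_pairs=3, max_side_bias=0.75):
    kept = [t for t in trials if t['pct_shape'] is not None]
    noname_items = {t['item'] for t in kept if t['phase'] == 'NoName'}
    name_items = {t['item'] for t in kept if t['phase'] == 'Name'}
    if len(noname_items & name_items) < min_pairs:   # fewer than three paired trials
        return False
    test = [t for t in kept if t['phase'] == 'Name']
    mean_right = sum(t['pct_right'] for t in test) / len(test)
    # a side bias greater than 75% to either side excludes the visit
    return (1 - max_side_bias) <= mean_right <= max_side_bias
```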
Individual and Item Designations
Children were designated as shape-biased at a given visit if they showed a percentage of looking time to the shape-match of greater than 50% in the Name trial, averaged across items. Correspondingly, they were designated as color-biased at a given visit if they displayed a percentage of looking time to the shape-match lower than 50% during the Name trial, again averaged across items; note that a low percentage of looking to the shape-match implies a high percentage of looking to the color-match, because uninformative trials in which the child was predominantly looking away from the screen and not attending to the items were excluded from the analysis. Children's looking patterns were also assessed for each item at each visit, as follows: first, each NoName and Name trial was assessed as to whether the percentage of looking time to the shape-match was above or below 50%. Percentages below 50% were designated as "Low" and percentages above 50% were designated as "High." The shift in percentage from the NoName to the Name trial placed that particular item, for that child at that visit, into one of four categories: "Low" for NoName coupled with "High" for Name was designated as shape-biased (LH); "High" for NoName coupled with "Low" for Name was designated as color-biased (HL). "High" for both a given NoName and Name pairing (HH) indicated an overall shape-match preference, regardless of whether the target had been named, and "Low" for a given NoName and Name pairing (LL) indicated an overall color-match preference, again regardless of whether the target object had been named. The proportion of children who provided LH, HL, HH, and LL patterns was calculated for each item, to investigate whether items varying in perceptual complexity elicited different levels of shape bias.
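The four item-level categories reduce to a two-letter code over each NoName/Name pair, as in this small illustrative sketch:

```python
# Sketch of the item-level designation from a NoName/Name pair of percentages.
def item_pattern(noname_pct, name_pct):
    """Returns 'LH' (shape-biased), 'HL' (color-biased), 'HH' (overall
    shape-match preference) or 'LL' (overall color-match preference)."""
    code = lambda p: 'H' if p > 50 else 'L'
    return code(noname_pct) + code(name_pct)

print(item_pattern(35, 72))  # 'LH': looking shifted to the shape-match when named
print(item_pattern(64, 41))  # 'HL': looking shifted to the color-match when named
```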
MB-CDI Coding
The Infant and Toddler versions of each participant's MB-CDIs were coded for three subcategories. Following Perry and Samuelson (2011), specific words were designated as shape organized (e.g., chair, cup), color organized (e.g., apple, snow), or as a descriptive term (e.g., red, blue). Apples might seem to be shape-organized, but young children typically experience apples in pieces, such that their color is more salient. For the Infant version, there were a possible total of 84 shape words, 48 color words, and three descriptive words. For the Toddler version, there were a possible 108 shape words, 100 color words, and nine descriptive words. Descriptive words were added to the color category. Totals and percentages were calculated to observe potential predominant word types. Only data from visits 1-3 were included because the CDI-III administered starting at visit 4 did not include enough relevant words.
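A small sketch of this coding step is given below; the word lists are illustrative stand-ins for the Perry and Samuelson (2011) designations rather than the actual coding sheets.

```python
# Hedged sketch of the MB-CDI word coding; the word sets are placeholders.
SHAPE_WORDS = {'chair', 'cup', 'ball', 'spoon'}     # shape-organized nouns
COLOR_WORDS = {'apple', 'snow', 'juice'}            # color/material-organized nouns
DESCRIPTIVE = {'red', 'blue', 'green'}              # folded into the color count

def cdi_counts(words_produced):
    produced = {w.lower() for w in words_produced}
    shape = len(produced & SHAPE_WORDS)
    color = len(produced & (COLOR_WORDS | DESCRIPTIVE))
    total = len(produced)
    return {'shape': shape,
            'color': color,
            'pct_shape': 100.0 * shape / total if total else 0.0,
            'pct_color': 100.0 * color / total if total else 0.0}

print(cdi_counts(['cup', 'ball', 'red', 'mommy']))
# {'shape': 2, 'color': 1, 'pct_shape': 50.0, 'pct_color': 25.0}
```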
Analysis Plan
We first conducted ANOVAs comparing NoName and Name trials collapsed across items, to determine whether the shape bias appeared at any visit for each group. We next explored potential subgroups in shape bias performance, ranging from children always exhibiting the shape bias (i.e., at 100% of their visits) to those rarely or never exhibiting it. Furthermore, we explored which individual differences (e.g., from the standardized test measures) correlated with shape bias performance, and we then examined in detail the extent to which each individual's particular vocabulary content might have influenced their shape bias performance. Lastly, we examined whether particular items elicited the shape bias more consistently than others.
Group Analyses
The TD group exhibited a consistent increase in percent looking to shape-match during the Name trials compared with the NoName trials, starting as early as 20 months of age; in contrast, the ASD group exhibited no consistent pattern. Table 4 displays the means and SDs by visit and group. A repeated-measures multivariate ANOVA [2 (group) × 4 (visit) × 2 (trial)] was conducted to compare the groups across NoName and Name trials for visits 1 through 4. A significant effect of trial was obtained [F(1,40) = 14.904, p < 0.001, η² = 0.271] as well as a significant interaction of trial by group [F(1,40) = 4.811, p = 0.034, η² = 0.107].
Two additional repeated-measures multivariate ANOVAs were then conducted to assess each group separately. For the TD group, the analysis was conducted across visits 1 through 4. A significant effect of trial was found [F(1,20) = 19.885, p < 0.001, η² = 0.499], with no other significant effects or interactions. Paired sample t-tests showed that the TD children looked significantly longer at the shape-match during the Name than the NoName trials at each visit (see Table 4; one-tailed tests are reported because the prediction is for greater looking to the shape-match during the Name trials). The effect sizes for the TD group are at similar levels to those reported in other IPL studies (e.g., Gertner et al., 2006; Wagner et al., 2009; Golinkoff et al., 2013). For the ASD group, the analysis was conducted across all six visits; no significant effects or interactions were observed. Paired sample one-tailed t-tests comparing the NoName and Name trials were performed, but none yielded significant effects (ps > 0.14). These analyses were repeated including the children's percent looking to shape-match during only the first or second halves of the test trials, with similar results.
Children in the TD and ASD groups were then assigned to one of four subgroups, according to the percent of visits at which they showed a shape bias (i.e., looked longer at the shape-match during Name compared to NoName trials). Children in the Always subgroup showed a shape bias at 100% of their visits, children in the Consistent group showed a shape bias at 60-95% of their visits, children in the Inconsistent group showed a shape bias at 40-55% of their visits, and children in the Rarely group showed a shape bias at 0-35% of their visits. The majority of children in the TD group demonstrated a shape bias at more than half of their visits (see Table 5); in contrast, performance in the ASD group was much more variable. Three children with ASD showed a shape bias at 100% of their visits; however, the majority of children with ASD showed a shape bias at fewer than 50% of their visits (see Table 5). A chi-square analysis revealed that the distributions of the two groups were significantly different [χ²(3) = 13.6, p = 0.003].
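The per-visit comparison, the subgroup assignment, and the group-distribution test can all be reproduced with standard tools; the sketch below uses SciPy with made-up numbers, and the subgroup cut-offs are rounded to the nearest band boundaries.

```python
# Illustrative analysis steps with placeholder data; scipy is assumed available.
import numpy as np
from scipy.stats import ttest_rel, chi2_contingency

# One-tailed paired t-test: Name vs. NoName percent looking to shape-match
name   = np.array([62.0, 55.0, 71.0, 48.0, 66.0])   # placeholder values
noname = np.array([51.0, 49.0, 50.0, 47.0, 52.0])
res = ttest_rel(name, noname, alternative='greater')
print(res.statistic, res.pvalue)

def subgroup(shape_bias_flags):
    """shape_bias_flags: one True/False per usable visit (Name > NoName)."""
    pct = 100.0 * np.mean(shape_bias_flags)
    if pct == 100.0:
        return 'Always'
    if pct >= 60.0:
        return 'Consistent'
    if pct >= 40.0:
        return 'Inconsistent'
    return 'Rarely'

print(subgroup([True, True, False, True]))  # -> 'Consistent'

# Chi-square on the 2 (group) x 4 (subgroup) frequency table (counts made up)
table = np.array([[14, 12, 6, 3],    # TD: Always, Consistent, Inconsistent, Rarely
                  [ 3,  7, 9, 13]])  # ASD
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p, dof)
```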
Individual Differences
Correlations were conducted to investigate the relationships between the children's standardized test scores, including the MB-CDI, Mullen, and Vineland, and their degree of shape bias (i.e., mean difference of percent looking to shape-match between NoName and Name trials) at each visit. No significant concurrent correlations emerged for the TD group, likely because of little variance in shape bias performance. However, for the ASD group significant concurrent correlations emerged at both visit 2 and visit 6. At visit 2, children's degree of shape bias positively correlated with their MB-CDI scores (r = 0.452, p = 0.014). At visit 6, their degree of shape bias positively correlated with their Vineland motor scores (r = 0.386, p = 0.035), Mullen fine motor raw scores (r = 0.363, p = 0.045), and Mullen receptive language raw scores (r = 0.359, p = 0.047). At both early and later visits, then, children with ASD with stronger shape biases had more advanced language skills. Furthermore, at the last visit children with ASD showing the shape bias also had stronger motor skills.
Cross-visit correlations were then conducted between the MB-CDI scores of the children with ASD and their shape bias performance. Four significant relationships were observed: children's vocabulary at visit 1 correlated significantly and positively with their shape bias performance at visits 2 (r = 0.499, p = 0.006) and 6 (r = 0.409, p = 0.022), children's vocabulary at visit 2 correlated significantly and positively with their shape bias performance at visit 6 (r = 0.364, p = 0.048), and children's shape bias performance at visit 4 correlated significantly and positively with their vocabulary at visit 6 (r = 0.409, p = 0.034). Multiple regressions were then performed to investigate whether the earlier vocabulary measures predicted later shape bias performance when controlling for early shape bias performance, and to investigate whether early shape bias performance predicted later vocabulary when controlling for early vocabulary. Three models were significant: MB-CDI at visit 1 significantly predicted shape bias performance at visit 2 (ΔR² = 0.218, β = 0.459, p = 0.025); shape bias performance at visit 1 did not contribute significantly to the model. Similarly, MB-CDI at visit 1 significantly predicted shape bias performance at visit 6 (ΔR² = 0.214, β = 0.462, p = 0.023); again, shape bias performance at visit 1 did not contribute significantly to the model. Finally, shape bias performance at visit 4 significantly predicted MB-CDI levels at visit 6 (ΔR² = 0.055, β = 0.237, p = 0.033), with vocabulary at visit 4 contributing significantly and independently to this model (ΔR² = 0.702, β = 0.798, p < 0.001). In sum, there seems to be a longitudinal and mutually facilitative connection between vocabulary size and shape bias performance, as children with ASD who had larger vocabularies at the early visits showed a stronger shape bias at one of the later visits, and children who showed a stronger shape bias in the middle of the study were reported to have a larger vocabulary at the last visit.
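One of these regressions could be set up as below; the data are random placeholders generated only to show the model structure (later shape bias regressed on early shape bias plus early vocabulary), and statsmodels is assumed to be installed.

```python
# Sketch of a cross-visit regression with a baseline control; data are fabricated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30
cdi_v1  = rng.normal(200, 80, n)       # visit-1 vocabulary (words produced)
bias_v1 = rng.normal(0, 10, n)         # visit-1 shape bias (Name - NoName, in %)
bias_v6 = 0.05 * cdi_v1 + rng.normal(0, 8, n)   # later shape bias (toy relation)

X = sm.add_constant(np.column_stack([bias_v1, cdi_v1]))
model = sm.OLS(bias_v6, X).fit()
print(model.params)    # [intercept, bias_v1 coefficient, cdi_v1 coefficient]
print(model.pvalues)
```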
We further explored this connection between vocabulary and the shape bias by considering whether learning a 'threshold number' of shape words was necessary to abstract the shape bias. If this were the case, then children who showed the shape bias more consistently should produce relatively more 'shape' words than children who showed the shape bias less consistently. Table 6 presents the mean percentages of 'shape' and 'color' words produced at visits 1, 2, and 3, organized by the shape bias subgroups, for the children with ASD. Because of the low number of children in the Always subgroup, the Always and Consistent children were combined into one subgroup for this analysis. As Table 6 shows, children generally produced a greater proportion of words in the 'shape' category than in the 'color' category; moreover, the children who showed the shape bias more consistently produced a greater proportion of words than the children who demonstrated the shape bias less consistently. However, no differences in the distribution of 'shape' vs. 'color' words were observed at any visit (all chi-squares < 1). That is, the higher proportion of 'shape' words produced by the Always/Consistent subgroup is mirrored by the higher proportion of 'color' words they produced. In other words, as also demonstrated by the correlation and regression analyses, children with more consistent shape bias performances produced more words overall; there is little indication of a special role for their 'shape' words.
Item Effects
Finally, we investigated the degree of shape- vs. color-preferences elicited by each item, for the ASD group. Figure 2 shows the percent of children with ASD who demonstrated the four looking patterns (LH, HL, HH, LL) for each item, combined across visits. The items are ordered left to right from simplest (fewest corners, tiz) to most complex (most corners, pilk). The items do seem to vary in the type of looking pattern they most commonly elicit, and this variability is confirmed by a significant chi-square analysis [χ²(12) = 28.9, p = 0.004]. However, the overall pattern is rather complex: if the dominant basis for eliciting a shape bias (or overall shape preference) were perceptual complexity, operationalized here by the number of corners on the object, then the green and blue bars would be highest for TIZ while the orange and red bars (LL, HL) would be highest for PILK. However, while ZUP, a 6-cornered object, clearly elicited more shape-match preferences and DAX, an 8-cornered object, tended to elicit more color-match preferences, the other objects do not fit into either a shape-oriented or a color-oriented pattern.
Discussion
Children in this study saw triads of novel objects (target, shape-match, color-match) in both NoName and Name trials; those who looked longer at the shape-match during the Name trials than the NoName trials demonstrated a shape bias. Target objects did not remain visible during the presentation of shape- or color-match objects; thus, a memory constraint was imposed. Children were tested across four (TD group) or six (ASD group) visits, 4 months apart. The TD group showed a significant shape bias at all visits, beginning at 20 months of age. The ASD group did not show a significant shape bias at any visit, even as late as 54 months of age. Considerable individual variation was observed, however, with slightly more than one-third of the sample demonstrating a shape bias at more than half of their visits, and slightly less than one-third demonstrating a shape bias at 2 or fewer visits. Children with ASD who had larger vocabularies showed a stronger shape bias both concurrently and longitudinally; moreover, children with ASD with a stronger shape bias at visit 4 had larger vocabularies at later visits. Finally, while the target objects varied in perceptual complexity and in the degree to which they elicited a shape bias from children with ASD, there was little indication that these two types of variance were related to each other. Taken together, these findings shed new light on the universality and underlying basis of the shape bias in young children.
Our demonstration of a shape bias in TD children as young as 20 months of age replicates many others in the field (Smith, 2000; Perry and Samuelson, 2011, passim): from midway through the second year of life through 2.5 years of age, children showing typical development preferentially extend the labels of objects to new instances of the same shape rather than color or pattern. Tek et al. (2008), using these same stimuli, had only found a shape bias in TD children by 24 months of age; however, their sample size was small. In this study we doubled the sample size and obtained a significant shape bias in the youngest group tested, indicating that the previous null effect was likely attributable to low power. Nonetheless, the effect size of the 20-month-olds was smaller than that of the older children, indicating that the shape bias increases in strength across this period of development.
In contrast, doubling the sample size for the ASD group, as well as extending the age range, did not change the effects reported by Tek et al. (2008). As a group, the children with ASD did not exhibit a preference for the shape-match during the Name trials compared with the NoName trials. In fact, the children with ASD appeared to look randomly during both the NoName and Name trials; that is, they looked preferentially neither at the shape- nor the color-matched objects, both when the target object had been named and when it had not. Thus, while they were not disposed, as a group, to sort the objects by shape, neither were they disposed to do so by color or pattern. This negative finding contrasts somewhat puzzlingly with the positive findings reported for this same sample of children during their other IPL tasks. That is, as a group, these children understand SVO word order and can learn novel verbs using Syntactic Bootstrapping (Naigles et al., 2011); they manifest a noun bias, mapping novel words onto novel objects rather than actions (Tek et al., 2008), and the majority of them also demonstrated understanding of subject- and object-wh-questions, as well as the -ing/-ed aspectual distinction, by visit 6 (Goodwin et al., 2012; Tovar et al., 2015). Thus, their difficulty as a group with the shape bias cannot be attributed to difficulties with the IPL tasks nor with general language comprehension. However, poor shape bias performance was not universal in our ASD sample, as 13 of the 32 children with ASD did demonstrate a shape bias during more than half of their visits. Many of these 13 were indeed high-functioning (Tek et al., 2013); however, two others were actually non-verbal and three were verbal but still quite delayed in their language development. And five children who had been designated as high-verbal showed a shape bias at fewer than half of their visits. Thus, whereas Samuelson and Smith (1999) and Perry and Samuelson (2011) have shown that among TD toddlers, a threshold level of 100 count nouns and/or some number of 'shape' words is associated with a shape bias, we did not observe such a threshold level for the ASD sample. Nonetheless, across the entire sample, children with ASD who had higher vocabulary scores, especially at visits 2 and 6, showed a stronger shape bias at the same visit. These findings replicate those from much younger TD children, showing that the shape bias is associated with overall vocabulary size (Samuelson and Smith, 1999). Moreover, our findings from the ASD group also replicated those involving TD children with respect to longer-term antecedents and consequences of shape bias performance. That is, Smith et al. (2002) and Smith and Samuelson (2006) have reported that children who develop a shape bias earlier in their second year have larger vocabularies during their third year, controlling for variation in vocabulary size at the initial time point. A similar relationship was observed in our ASD group, where children with a stronger shape bias at visit 4 were reported to produce a greater proportion of the available words on the MB-CDI 8 months later, controlling for their MB-CDI scores at visit 4. It is possible that, the more a child can use the shape bias strategy, the more words they are able to learn. The reciprocal relationship was also observed in our ASD group, in that children producing more words on the MB-CDI at visit 1 showed a stronger shape bias at visits 2 and 6, controlling for their degree of shape bias at visit 1.
That is, an early demonstrated ability to learn words evidently facilitates the later development of the shape bias. These relationships were observed only in our ASD group, possibly because the TD group was already showing a consistent shape bias at visit 1, and demonstrated much less within-group variability.
Interestingly, at visit 6 children with ASD with stronger shape biases also had higher fine motor scores, as judged by both parent report and administered tests. A relationship between the shape bias and motor ability has not previously been reported in the TD literature; however, we conjecture that this might be attributable to the developing facility with object manipulation in children with ASD. It seems likely, for example, that children who are becoming skillful at manipulating objects might be better at extracting those objects' global shape characteristics, which might then transfer to the visual extraction of shape in the IPL task. Furthermore, as a result of more skillful manipulation, the children may attribute more meaning and functionality to the object, which would allow for broader shape understanding.
Increasing the ASD sample size, then, was fruitful for illuminating how child-based constructs such as vocabulary size and motor skills might influence, and/or be influenced by, the development of a shape bias in children with ASD. In contrast, our second goal of shedding light on the role of object complexity did not bear much fruit. While our five objects varied in perceptual complexity as well as in degree of shape bias elicitation, these two properties did not seem to be related. Our study was limited in that we included only five objects, whose perceptual properties were not varied systematically. Including more objects, though, would have lengthened the video and so further tested the attention spans of the children with ASD.
In sum, one of the same factors that influence the development of the shape bias in TD children-vocabulary size-also seems to influence shape bias performance in children with ASD. This finding supports the universality of the role of the lexicon in the development of this construct. However, the current study also demonstrates that simple objects and sizeable 'shape word' vocabularies are not sufficient for children to demonstrate a shape bias, because most of our child participants with ASD displayed such a bias only inconsistently. So why might the shape bias be so challenging for children with ASD? One possibility is that our IPL task imposed more strenuous memory demands than the usual pointing tasks, because in the latter tasks the target object is still available during test. However, both the noun bias and syntactic bootstrapping videos placed similar memory constraints on these children, but for these latter videos the children with ASD were able to succeed, in that they demonstrated consistent looking at the same test stimulus as the TD children (Tek et al., 2008;Naigles et al., 2011).
Another possibility is that the shape bias actually requires more conceptual knowledge than theorists such as Smith (2000) have proposed. For example, it has been suggested that children extend object kinds by shape based on the object creator's intentions; therefore, the difficulties that children with ASD have with the shape bias might be related to their well-known difficulties with understanding the intentions of others. Along similar lines, the lack of a shape bias in children with ASD might be consistent with, and possibly symptomatic of, additional difficulties with categorization and lexical organization that have been reported for this population (Minshew et al., 2002; Dunn and Bates, 2005; Gastgeb et al., 2006). That is, the shape bias requires children to utilize words as indicators of category structure (i.e., that different objects are exemplars of the same category), and research with older children with ASD has demonstrated weaknesses and inconsistencies in their category structure. A future direction for our research will be to investigate the degree to which individual variation in shape bias performance during ages 2-4 is related to variation in category knowledge during school age and adolescence.
Limitations of this study include, as stated above, the lack of systematicity in the investigation of the role of perceptual complexity in developing a shape bias. Moreover, the heterogeneity of our ASD sample may limit generalization of these findings to other populations. Furthermore, it should be noted that this study was conducted with a particularly structured methodology, which may not yield findings generalizable across variations of stimuli or across more naturalistic settings. Finally, unlike Tek et al. (2008), we did not compare shape-match preferences measured via looking time in the IPL paradigm with those measured via pointing in a hands-on physical object manipulation task; such a comparison could be valuable in future research.
The shape bias is understood to be a beneficial mechanism for language development, and it could become a critical target for early intervention in children with ASD. Future research should aim to further differentiate between the children with ASD who do and do not exhibit the shape bias. Perhaps the children without the bias are not receiving adequate input pertaining to shape organization and need to be taught it more explicitly. Continuing this investigation will yield more knowledge about the irregularities displayed during language acquisition in children with autism. | 2016-06-18T02:08:50.730Z | 2015-04-21T00:00:00.000 | {
"year": 2015,
"sha1": "cc42feec6b1aa45437f9badbceaac0775f2a7e09",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00446/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc42feec6b1aa45437f9badbceaac0775f2a7e09",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
21676519 | pes2o/s2orc | v3-fos-license | Trajectory Design for Distributed Estimation in UAV Enabled Wireless Sensor Network
In this paper, we study an unmanned aerial vehicle (UAV)-enabled wireless sensor network, where a UAV is dispatched to collect the sensed data from distributed sensor nodes (SNs) for estimating an unknown parameter. It is revealed that in order to minimize the mean square error (MSE) for the estimation, the UAV should collect the data from as many SNs as possible, based on which an optimization problem is formulated to design the UAV's trajectory subject to its practical mobility constraints. Although the problem is non-convex and NP-hard, we show that the optimal UAV trajectory consists of connected line segments only. With this simplification, an efficient suboptimal solution is proposed by leveraging the classic traveling salesman problem (TSP) method and applying convex optimization techniques. Simulation results show that the proposed trajectory design achieves significant performance gains in terms of the number of SNs whose data are successfully collected, as compared to other benchmark schemes.
I. INTRODUCTION
The last two decades have witnessed a dramatic advancement in the research and development of wireless sensor network (WSN) for applications in various fields. A WSN typically consists of a large number of sensor nodes (SNs) that are distributed in a wide area of interest. SNs are typically low-cost and low-power devices, which are able to sense, process, store and transmit information. Although the SNs usually have limited sensing, processing and transmission capabilities individually, their collaborative estimation/detection can be highly efficient and reliable [1], [2].
One typical application of WSN is the estimation of an unknown parameter (such as pressure, temperature, etc.) in a given field based on noisy observations collected from distributed SNs. Specifically, each SN performs local sensing and signal quantization, then sends the quantized data to a Fusion Center (FC), where the received data from all SNs are jointly processed to produce a final estimate of the unknown parameter. Prior research on distributed estimation in WSN (see, e.g., [1], [2]) has mainly considered a static FC at a fixed location. As a result, SNs may require significantly different transmission power to send their data reliably to the FC due to their near-far distances from it, which results in inhomogeneous energy consumption rates of the SNs and thus limited lifetime of the WSN.
To overcome this issue, utilizing an unmanned aerial vehicle (UAV) as a mobile data collector for WSN has been proposed as a promising solution [3]-[5]. With on-board miniaturized transceivers that enable ground-to-air communications, UAV-enabled WSN has promising advantages, such as the ease of on-demand and swift deployment, the flexibility with fully-controllable mobility, and the high probability of having line-of-sight (LoS) communication links with the ground SNs. In contrast to fixed FCs, a UAV-enabled mobile data collector is able to fly sufficiently close to each SN to collect its sensed data more reliably, thus helping significantly reduce the SNs' energy consumptions, yet in a fairer manner. A fundamental problem in UAV-enabled WSN for distributed estimation is the design of the UAV's trajectory (see Fig. 1), which needs to take into account two important considerations. Firstly, for an SN to send its data reliably to the UAV, the UAV needs to fly sufficiently close to the SN (say, within a certain maximum distance, assuming an LoS channel between them). Secondly, given a finite flight duration, the UAV's trajectory should be designed to "cover" (with respect to the given maximum distance) as many SNs as possible to optimize the distributed estimation performance (e.g., minimizing the mean square error (MSE) for the estimated parameter). Notice that in our prior work [5], the SNs' wakeup schedule and the UAV's trajectory were jointly optimized to minimize the maximum energy consumption of all SNs, while ensuring that the required amount of data is collected reliably from each SN. In contrast to [5], where the UAV needs to collect independent data from all SNs, this paper considers that all SNs' data contain noisy observations of a common unknown parameter. As a result, the approaches for the UAV trajectory design are also fundamentally different.
UAV trajectory design for optimizing communication performance has received growing interest recently (see, e.g., [6]-[10]). In [6], the UAV's trajectory was jointly optimized with transmission power/rate for throughput maximization in a UAV-enabled mobile relaying system, subject to practical mobility constraints of the UAV. Energy-efficient UAV communication via optimizing the UAV's trajectory was studied in [7], which aims to strike an optimal balance between maximizing the communication rate and minimizing the UAV's propulsion power consumption. The deployment and movement of multiple UAVs, used as aerial base stations to collect data from ground Internet of Things (IoT) devices, was investigated in [8]. The work in [9] maximized the minimum throughput of a multi-UAV-enabled wireless network by optimizing the multiuser communication scheduling jointly with the UAVs' trajectory and power control. In [10], the UAV trajectory was designed to minimize the mission completion time for UAV-enabled multicasting. Different from the above work, this paper investigates the UAV trajectory design under a new setup for distributed estimation in WSN. The main contributions of this paper are summarized as follows:
• First, we show that for distributed estimation in a UAV-enabled WSN, minimizing the MSE is equivalent to maximizing the number of SNs whose sensed data are reliably collected by the UAV;
• Second, with a given UAV flight duration, we formulate an optimization problem for designing the UAV's trajectory to maximize the number of covered SNs, subject to the practical constraints on the initial and final locations of the UAV as well as its maximum speed. Although the problem is NP-hard, we show that the optimal UAV trajectory consists of connected line segments only;
• Third, with the above simplification, an efficient greedy algorithm is proposed to obtain a high-quality suboptimal trajectory solution by leveraging the classic traveling salesman problem (TSP) method and applying convex optimization techniques;
• Last, numerical results show that the proposed trajectory design achieves significant performance gains in terms of the number of SNs with successful data collection as compared to benchmark schemes.
II. SYSTEM MODEL AND PROBLEM FORMULATION
As shown in Fig. 1, we consider a WSN consisting of N SNs arbitrarily located on the ground, denoted by U = {u_1, u_2, . . . , u_N}. The horizontal coordinate of SN u_n is denoted by w_n ∈ R^{2×1}, n = 1, · · · , N. Each SN can observe, quantize and transmit its observation of an unknown parameter θ to the FC, which estimates the parameter based on the received information.
A. Distributed Estimation
Each SN u_i makes a noisy observation of a deterministic parameter θ (e.g., temperature). The real-valued observation y_i by SN u_i is modeled as y_i = θ + n_i, where n_i is the observation noise, assumed to be spatially uncorrelated across SNs with zero mean and variance σ_i². We further assume that the noise variances for all SNs are identical, i.e., σ_i² = σ², ∀i. Denote by [−W, W] the signal range that the sensors can observe, where W is a known constant that is typically determined by the sensor's dynamic range; in other words, −W ≤ y_i ≤ W, ∀i. The local processing at SN u_i consists of the following: (i) a uniform quantizer with 2^{S_i} quantization levels, where S_i denotes the number of quantization bits and Δ_i = 2W/(2^{S_i} − 1) represents the quantization step size; (ii) a modulator, which maps the S_i quantization bits into a number of symbols based on a certain modulation scheme, such as binary phase shift keying (BPSK); and (iii) transmission of the modulated symbols to the FC. It is shown in [11] that with the uniform quantizer, the quantization noise variance for u_i can be obtained as in (1). For a homogeneous sensor network with equal observation noise power for all SNs, we assume that all SNs generate the same number of quantization bits, i.e., S_i = S, ∀i [1]. The FC then performs linear estimation based on the received data from all SNs to recover θ using the Quasi Best Linear Unbiased Estimator (Quasi-BLUE) [2], and the corresponding MSE can be obtained as in (2), where K ≤ N is the number of SNs whose sensed data are reliably collected. The expression in (2) shows that, for the considered distributed estimation, the MSE is inversely proportional to the number K of SNs whose data are reliably collected. Therefore, in order to minimize the MSE for the distributed estimation, the FC should successfully collect data from as many SNs as possible.
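The qualitative point (more reliably collected SNs, lower MSE) can be checked with a toy simulation. The sketch below is illustrative only: it uses a simple uniform quantizer and averages the collected quantized readings, which is what the Quasi-BLUE estimator reduces to when all noise variances and quantizers are identical; all numbers are placeholders rather than values from the paper.

```python
# Toy check that the MSE of the fused estimate shrinks as more SNs are collected.
import numpy as np

rng = np.random.default_rng(1)
theta, W, sigma, S = 3.7, 10.0, 1.0, 10          # placeholder parameters
step = 2 * W / (2 ** S - 1)                       # quantization step size
levels = -W + step * np.arange(2 ** S)            # uniform quantizer levels

def quantize(y):
    y = np.clip(y, -W, W)
    idx = np.argmin(np.abs(levels[:, None] - y), axis=0)
    return levels[idx]

for K in (5, 20, 80):                             # number of SNs collected
    estimates = [np.mean(quantize(theta + sigma * rng.standard_normal(K)))
                 for _ in range(2000)]
    print(K, np.mean((np.array(estimates) - theta) ** 2))  # empirical MSE ~ 1/K
```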
B. UAV Data Collection
For the UAV-enabled WSN, a UAV is employed as a flying data collector/FC for a given time horizon T, which collects the quantized information from the SNs and jointly estimates the parameter θ. It is assumed that the UAV flies at a fixed altitude of H in meters (m) and that its maximum speed is V_max in meters/second (m/s). The initial and final UAV horizontal locations are pre-determined and denoted as q_0, q_F ∈ R^{2×1}, respectively, where ||q_F − q_0|| ≤ V_max T so that there exists at least one feasible trajectory for the UAV to fly from q_0 to q_F in a straight line within T. The UAV's flying trajectory projected on the ground is denoted as q(t) ∈ R^{2×1}, 0 ≤ t ≤ T. We assume that the transmit power for each SN is given (but can be different among SNs in general, depending on each SN's energy availability). Thus, in order to satisfy the minimum required signal-to-noise ratio (SNR) at the UAV for reliable data collection from each SN u_n, the UAV location projected on the ground should lie within its communication range, which is denoted by r_n. For each SN u_n, define the coverage area D_n = {q ∈ R^{2×1} : ||q − w_n|| ≤ r_n}. In general, an SN with smaller transmit power has a smaller r_n given the same S for all SNs. As a result, the UAV can collect the data reliably from u_n as long as it is within D_n, as shown in Fig. 1. In the following, we refer to the event that the UAV enters D_n as the UAV visiting u_n. For example, in Fig. 1, the UAV has visited SNs u_2, u_6, u_7 and u_8. Since the number of quantization bits is typically small for practical applications (e.g., S = 10 bits) [1], the required transmission time for the quantized information can be neglected compared to the UAV flight time. In other words, as long as the UAV visits u_n, we assume that the sensed data of u_n can be reliably collected by the UAV.
C. Problem Formulation
Define the indicator function Î_n(t) and the indicator variable I_n as follows:
Î_n(t) = 1 if q(t) ∈ D_n, and Î_n(t) = 0 otherwise, 0 ≤ t ≤ T,   (3)
I_n = 1 if Î_n(t) = 1 for some t ∈ [0, T], and I_n = 0 otherwise,   (4)
where Î_n(t) indicates whether the UAV is within D_n at each time instant t, and I_n indicates whether the UAV visits D_n (at least once) during the time horizon T. We assume that all the SNs' locations as well as their communication ranges are known to the UAV. The UAV trajectory design problem to maximize the number of visited SNs for distributed estimation is thus formulated as
(P1): max over q(t) of Σ_{n=1}^{N} I_n
s.t. ||q̇(t)|| ≤ V_max, 0 ≤ t ≤ T,   (5)
q(0) = q_0, q(T) = q_F.   (6)
In (P1), constraint (5) corresponds to the maximum UAV speed constraint, with q̇(t) denoting the time-derivative of q(t), and constraints (6) specify the initial and final locations of the UAV.
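For a trajectory sampled at discrete time instants, the indicators in (3) and (4) can be evaluated directly, as in this small sketch; positions, ranges, and the sampling grid are placeholders chosen only for illustration.

```python
# Sketch: evaluate the coverage indicators for a sampled ground trajectory.
import numpy as np

def coverage_indicators(traj, sn_pos, r):
    """traj: (T, 2) sampled q(t); sn_pos: (N, 2) SN locations; r: (N,) ranges."""
    dists = np.linalg.norm(traj[:, None, :] - sn_pos[None, :, :], axis=2)  # (T, N)
    I_hat = dists <= r            # indicator function over time, per SN
    I = I_hat.any(axis=0)         # indicator variable: visited at least once
    return I_hat, I

traj = np.linspace([-2000.0, -2000.0], [2000.0, 2000.0], 400)  # straight flight
sn_pos = np.array([[0.0, 50.0], [1500.0, -1200.0]])
r = np.array([200.0, 200.0])
_, visited = coverage_indicators(traj, sn_pos, r)
print(int(visited.sum()), "of", len(sn_pos), "SNs visited")     # -> 1 of 2
```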
III. PROPOSED SOLUTION
(P1) is a non-convex optimization problem, since the objective function is a non-concave function, which involves time-dependent indicator functions in terms of the UAV trajectory. In the following, we first show the structure of the optimal UAV trajectory solution to (P1).
A. Optimal Trajectory Structure and Problem Reformulation
Theorem 1. Without loss of optimality to problem (P1), the UAV trajectory can be assumed to consist of connected line segments only.
Proof. Theorem 1 is proved by showing that for any given feasible trajectory q(t) of (P1) that contains a curved path, we can always construct another feasible trajectory q′(t) consisting of only connected line segments, which satisfies the conditions in (5) and (6) and achieves the same objective value. Specifically, for any given q(t), the indicator variables I_n can be obtained based on (3) and (4), and the objective value of (P1) (i.e., the number of visited SNs) can be obtained as K = Σ_{n=1}^{N} I_n. Without loss of generality, we assume that the K visited SNs are u_{ω_1}, u_{ω_2}, . . . , u_{ω_K}, where ω_i is the index of the i-th visited SN in U, 1 ≤ i ≤ K. Let q_{ω_i} be the waypoint at which the UAV enters D_{ω_i} for the first time, 1 ≤ i ≤ K; then q_{ω_i} = q(t_{ω_i}), where t_{ω_i} = min{t | Î_{ω_i}(t) = 1, 0 ≤ t ≤ T}. We re-arrange the q_{ω_i} in increasing order of t_{ω_i} and obtain a sequence of ordered waypoints (q_{π_1}, . . . , q_{π_K}), where (π_1, . . . , π_K) is a permutation of (ω_1, . . . , ω_K). Let q_{π_0} = q_0 and q_{π_{K+1}} = q_F. Then we have T = Σ_{i=0}^{K} T_{π_i π_{i+1}}, where T_{π_i π_{i+1}} denotes the flying time between waypoints q_{π_i} and q_{π_{i+1}} along the given trajectory q(t). We can then replace any curved trajectory path between waypoints q_{π_i} and q_{π_{i+1}}, 0 ≤ i ≤ K, with a line segment and obtain the alternative trajectory q′(t). Thus, with the same flying time T_{π_i π_{i+1}}, the required flying speed along q′(t) can be reduced, since the line segment between any given pair of waypoints q_{π_i} and q_{π_{i+1}} yields the minimum distance. Therefore, q′(t) satisfies the constraints (5) and (6), and yet achieves the same objective value for (P1). This concludes the proof of Theorem 1.
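The construction in the proof can be mimicked numerically: take the first-entry waypoint of each visited SN from a sampled feasible trajectory, order the waypoints by entry time, and connect them (together with q_0 and q_F) by straight segments. The sketch below is an illustration of that construction, not the paper's code.

```python
# Sketch of the proof's construction: first-entry waypoints joined by segments.
import numpy as np

def line_segment_trajectory(traj, sn_pos, r, q0, qF):
    """traj: (T, 2) sampled feasible trajectory; returns the piecewise-linear
    path through the first-entry waypoints and its total length."""
    dists = np.linalg.norm(traj[:, None, :] - sn_pos[None, :, :], axis=2)
    inside = dists <= r
    waypoints = []
    for n in range(sn_pos.shape[0]):
        if inside[:, n].any():
            t_first = int(np.argmax(inside[:, n]))   # first sample inside D_n
            waypoints.append((t_first, traj[t_first]))
    waypoints.sort(key=lambda w: w[0])                # order by entry time
    path = [np.asarray(q0, float)] + [w[1] for w in waypoints] + [np.asarray(qF, float)]
    length = sum(float(np.linalg.norm(b - a)) for a, b in zip(path, path[1:]))
    return np.array(path), length      # same SNs visited, never a longer path
```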
Based on Theorem 1 and its proof, (P1) can be solved by determining the optimal subset of SNs to be visited, denoted as U_K ⊆ U with cardinality K, their optimal visiting order π = (π_1, . . . , π_K), and the optimal waypoints q_{π_k} ∈ R^{2×1}, one for each SN u_{π_k} ∈ U_K, such that the data from u_{π_k} can be received when the UAV is at q_{π_k} and the total distance of the resulting path p = (q_{π_0}, . . . , q_{π_{K+1}}) is no greater than V_max T. Therefore, (P1) can be reformulated as
(P2): max over U_K ⊆ U, π, {q_{π_k}} of K
s.t. q_{π_k} ∈ D_{π_k}, k = 1, . . . , K,
Σ_{k=0}^{K} ||q_{π_{k+1}} − q_{π_k}|| ≤ V_max T,
with q_{π_0} = q_0 and q_{π_{K+1}} = q_F. Consider a special case of (P2) with r_n = 0, ∀n (i.e., the UAV can collect data reliably from an SN only when it is directly above the SN); then only U_K and the visiting order π need to be determined to maximize the number of visited SNs within duration T. This problem is essentially equivalent to the selective TSP problem (or orienteering problem), which is known to be NP-hard [12]. Therefore, problem (P2) with r_n ≥ 0 is also NP-hard and more difficult to solve than the TSP in general.
B. Proposed Algorithm for (P2)
One straightforward approach for solving (P2) is via exhaustively searching all possible subsets U_K ⊆ U and the visiting order π of each U_K, and then determining whether the minimum distance of the path p that visits U_K with order π is no greater than V_max T. However, searching all possible subsets of U has an exponential complexity of O(2^N), which is infeasible for large values of N. Therefore, we propose an efficient suboptimal solution to (P2) via a greedy iterative algorithm.
The key idea of our proposed solution is to maintain a working set C containing the SNs that the UAV needs to visit, and to add only one additional SN to C at each iteration. Initially, C is set as empty, and we make a greedy choice to select u_k from the complement set U \ C which leads to the minimum traveling distance to visit C ∪ {u_k}. The above process iterates until C = U or the required visiting time is greater than T. Let w_0 = q_0 and w_{N+1} = q_F. The proposed greedy algorithm for (P2) is summarized in Algorithm 1. In Algorithm 1, d_max is the flying distance of the UAV that flies over each SN following the increasing index of all the SNs in U, which is an upper bound on the minimum flying distance required to visit all SNs in U.
Algorithm 1 Proposed algorithm for (P2)
5: for each u_k ∈ U \ C do
6:     U_K ← C ∪ {u_k};
7:     Given U_K, solve (P3) and denote the optimized objective value and trajectory as d_0 and Q_0;
8:     if d_0 ≤ V_max T and d_0 < d_min then
9:     ...
Note that in step 7 of Algorithm 1, the UAV trajectory is designed with a given SN set U_K to minimize the UAV traveling distance. The problem is formulated as
(P3): min over π, {q_{π_k}} of Σ_{k=0}^{K} ||q_{π_{k+1}} − q_{π_k}||
s.t. q_{π_k} ∈ D_{π_k}, k = 1, . . . , K,
with q_{π_0} = q_0 and q_{π_{K+1}} = q_F. In Algorithm 1, after executing the inner iteration from step 5 to step 11, if adding any additional SN in the complementary set U \ C leads to a traveling distance greater than V_max T, then step 9 will not be executed, d_min remains equal to d_max as initialized in step 4, and the outer iteration in step 3 terminates. Otherwise, step 9 will be executed and one additional SN will be added into C with d_min < d_max, and the outer iteration continues. Therefore, the size of C increases over the iterations until either C = U or adding any additional SN leads to a traveling distance greater than V_max T; thus, Algorithm 1 is guaranteed to converge. Furthermore, Algorithm 1 requires at most O(N²) iterations, which is significantly fewer than the O(2^N) required by exhaustive search. Thus, the remaining task for Algorithm 1 is to solve problem (P3). Note that solving (P3) includes determining π and the waypoints {q_{π_k}, 1 ≤ k ≤ K}. (P3) is essentially equivalent to the TSP with neighborhoods (TSPN), which is known to be NP-hard [13]. To solve (P3), we propose an efficient method for waypoint design based on the TSP method and convex optimization. Specifically, the visiting order π for the SNs in U_K is first determined by simply applying the TSP algorithm over the SNs in U_K while ignoring the coverage (disk) region of each SN. Since the initial and final points of the UAV are fixed, π can be obtained by using a variation of the TSP method (No-Return-Given-Origin-And-End TSP) [10]. Various algorithms have been proposed to find high-quality solutions to the TSP efficiently, e.g., with time complexity O(K²) [14]. With the visiting order π determined, the optimal waypoints q_{π_k} can be obtained by solving the following problem:
(P4): min over {q_{π_k}} of Σ_{k=0}^{K} ||q_{π_{k+1}} − q_{π_k}||
s.t. q_{π_k} ∈ D_{π_k}, k = 1, . . . , K.
Algorithm 2 Trajectory design algorithm for (P3)
1: Input: U_K;
2: Obtain visiting order π by using the No-Return-Given-Origin-And-End TSP method [10];
3: Solve (P4) to obtain q_{π_k} and d_K;
4: Construct trajectory Q based on π and q_{π_k} with line segments;
5: Q_0 ← Q; d_0 ← d_K;
6: Output: Q_0, d_0;
Note that the objective function of (P4) is a convex function with respect to q_{π_k}, and each coverage area D_{π_k} is a convex set. Thus, (P4) is a convex optimization problem, which can be solved by standard convex optimization techniques or existing software such as CVX [15], with polynomial complexity. Let d_K = Σ_{k=0}^{K} ||q_{π_{k+1}} − q_{π_k}|| denote the resulting minimum traveling distance. The trajectory design algorithm for (P3) with given U_K is summarized in Algorithm 2.
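As a concrete illustration of the (P4) step, the sketch below fixes a visiting order and chooses one waypoint inside each SN's disk to minimize the total path length, using cvxpy (assumed installed); the TSP ordering and the outer greedy loop of Algorithm 1 are omitted here, and all coordinates are placeholders.

```python
# Hedged sketch of the waypoint optimization (P4) for a fixed visiting order.
import numpy as np
import cvxpy as cp

def optimize_waypoints(order_pos, order_r, q0, qF):
    """order_pos: (K, 2) SN positions in visiting order; order_r: (K,) ranges."""
    K = order_pos.shape[0]
    Q = cp.Variable((K, 2))
    pts = [np.asarray(q0, float)] + [Q[k] for k in range(K)] + [np.asarray(qF, float)]
    length = sum(cp.norm(pts[k + 1] - pts[k]) for k in range(K + 1))
    constraints = [cp.norm(Q[k] - order_pos[k]) <= order_r[k] for k in range(K)]
    prob = cp.Problem(cp.Minimize(length), constraints)
    prob.solve()
    return Q.value, prob.value

waypoints, dist = optimize_waypoints(
    np.array([[0.0, 800.0], [900.0, 0.0]]), np.array([200.0, 200.0]),
    q0=[-1000.0, -1000.0], qF=[1500.0, 1000.0])
print(dist)   # total traveling distance with the optimized waypoints
```

In the context of Algorithm 1, a routine like this would be called once per candidate set U_K, after the TSP step has produced the visiting order.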
IV. NUMERICAL RESULTS
We consider a WSN with N = 40 SNs, which are randomly located within an area of size 4.0 km × 4.0 km. The following results are based on one random realization of the SN locations as shown in Fig. 2. The UAV's initial and final locations are respectively set as q_0 = [−2 km, −2 km]^T and q_F = [2 km, 2 km]^T, and V_max is set as 50 m/s. We assume that the communication range r_n of different SNs is identical, i.e., r_n = r, ∀n. If not stated otherwise, we set r = 200 m. For performance comparison, we also consider two benchmark schemes, namely strip-based and zig-zag line trajectories for the UAV, as described in the following.
For the strip-based trajectory, the area of interest is partitioned into rectangular strips that are perpendicular to the line connecting q_0 and q_F. Furthermore, each strip has width 2·min_n r_n = 2r, so that all SNs within a strip will be visited as the UAV travels along it, as shown in Fig. 2. If the rectangular strips exceed the boundary of the area of interest, then the UAV simply travels along the intersection between the borderlines of the area and these rectangular strips, which can be uniquely determined. With such a strip-based trajectory, the number of visited SNs increases with the height of the strips. Therefore, a bisection search can be used to determine the maximum height of the strips such that the total UAV flying distance is no greater than V_max T. The zig-zag line trajectory is similar to the strip-based trajectory, with the difference that the rectangular strips are replaced by zig-zag lines. The two benchmarks lead to rather intuitive UAV trajectories for different values of T. For example, when T is small, say T = T_min ≜ ‖q_F − q_0‖/V_max, the two benchmarks yield the same path, which directly connects q_0 and q_F. As T increases, the heights of the strips and zig-zag lines increase, since the width of the strips and the spacing of the parallel zig-zag lines are fixed at 2r. Therefore, when T is sufficiently large, both benchmark schemes result in UAV trajectories covering the entire area of interest, so that all SNs will be visited by the UAV.
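The bisection search over the strip height mentioned above can be sketched as follows, assuming a helper path_length(h) that returns the total length of the strip-based trajectory for strip height h and is non-decreasing in h; both the helper and the bounds are illustrative.

```python
def max_strip_height(path_length, v_max, T, lo=0.0, hi=None, tol=1e-3):
    """Bisection search for the largest strip height whose strip-based
    trajectory can be flown within time T (i.e., length <= v_max * T).

    path_length(h) is an assumed helper, taken to be non-decreasing in h.
    """
    if hi is None:
        hi = v_max * T                 # crude upper bound on any useful height
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if path_length(mid) <= v_max * T:
            lo = mid                   # feasible: try a larger height
        else:
            hi = mid                   # infeasible: shrink the interval
    return lo
```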
The optimized trajectories obtained with the different schemes for T = 400 s are shown in Fig. 2. It is observed that with our proposed solution, the UAV can visit more SNs than with the two benchmark schemes. In Fig. 3(a), we compare the number of SNs visited by our optimized trajectory with the two benchmark trajectories for different values of T. As expected, our proposed design significantly outperforms both benchmarks. The performance gain is more substantial for small T > T_min. As T becomes sufficiently large, all three trajectories can visit all SNs, but our proposed scheme requires much less time to do so. It is also observed that the strip-based trajectory gives better performance than the zig-zag line trajectory. This is because the zig-zag line trajectory in general has a smaller coverage area than the strip-based trajectory with the same traveling distance or T (see Fig. 2). Furthermore, we study the effect of the SNs' communication range r on the system performance. Fig. 3(b) plots the number of visited SNs versus r for T = 200 s. It is observed that for all three schemes, the number of visited SNs increases with r, as expected, and that the proposed trajectory outperforms the two benchmarks significantly, especially for small r.
V. CONCLUSION
This paper studies the trajectory design for distributed estimation in a UAV-enabled WSN to minimize the MSE of the estimation, which is shown to be equivalent to maximizing the number of SNs with successful data collection by the UAV. Although the formulated problem is NP-hard, we reveal that the optimal UAV trajectory consists of connected line segments only. We then propose a low-complexity greedy algorithm based on the TSP method and convex optimization to obtain a suboptimal trajectory solution. Numerical results demonstrate that the proposed design significantly increases the number of visited SNs and hence improves the estimation performance, as compared to the benchmark schemes. | 2018-05-11T12:49:01.000Z | 2018-05-11T00:00:00.000 | {
"year": 2018,
"sha1": "527b94d64d0c51cb255c71e0d7e78ad1d86bd20a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1805.04364",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9093f2a744f84862061c4e99bcdcc705c87cb4fb",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics",
"Engineering"
]
} |
84742174 | pes2o/s2orc | v3-fos-license | Effect of CO2 Enrichment on the Translocation and Partitioning of Carbon at the Early Grain-filling Stage in Rice (Oryza sativa L.)
Abstract Rice plants (Oryza sativa L.) were grown under normal (350 µL L-1 CO2) and CO2-enriched (660 µL L-1 CO2) conditions, and 13CO2 was supplied to the rice plants after heading to examine the translocation and partitioning of photosynthate at the early grain-filling stage. At 2 days after supplying 13CO2, no difference in the 13C content of the whole plant was observed between the plants grown under normal and CO2-enriched conditions, but translocation of 13C from the leaf blade to other plant organs seemed to be accelerated by CO2 enrichment. Up to 9 days after supplying 13CO2, the 13C fixed into sucrose was mainly used to synthesize starch in the stem rather than being translocated to the ear in plants grown under normal conditions. In contrast, the supplied 13C was rapidly translocated to the ear, and 13C stored as starch in the stem was also translocated to the ear, in plants grown under CO2-enriched conditions. Therefore, we concluded that CO2 enrichment accelerated the translocation of carbohydrates to the ear.
The global atmospheric CO2 concentration, presently averaging 360 µL L-1, is increasing and is predicted to double by the end of this century. Though CO2 enrichment can stimulate photosynthesis, crop production is affected not only by photosynthesis but also by the translocation and partitioning of photosynthate in the plant. It is important to know how elevated CO2 concentrations influence photosynthetic carbon fixation, the translocation and partitioning of photosynthates, and the grain-filling process in rice (Oryza sativa L.).
The photosynthetic responses of higher plants to CO2 enrichment, including acclimation, have mainly been studied during vegetative growth (Webber et al., 1994; Koch, 1996; Wolfe et al., 1998), and CO2 enrichment has been reported to increase the total concentration of non-structural carbohydrates in leaf blades, leaf sheaths, and culms (Rowland-Bamford et al., 1996). The enhancement of growth and yield by CO2 enrichment has been reported for many plant species, including rice (Baker et al., 1990a; Baker and Allen, 1993; Rowland-Bamford et al., 1996). However, the effect of CO2 enrichment on the translocation of photosynthates has only been examined during short-term CO2 enrichment (Grodzinski et al., 1998).
Grain filling can be limited by the photosynthetic activity of the plant (the source), the uptake capacity of the grain or the ear (the sink), or a combination of the two. Both the photosynthetic products transferred directly from the leaf blades and those repartitioned from vegetative tissues contribute to grain filling in rice (Ho, 1988). In many experiments designed to analyze these processes, the photosynthetic rate, photosynthate translocation, or spikelet number was studied (Iwasaki et al., 1992; Wada et al., 1993; Conocono et al., 1998). These experiments, however, were conducted under unnatural conditions (e.g., removal of panicles, spikelets or leaves), and the results might have been affected by the resulting physiological changes in the plant. Translocation has been evaluated based on the dry weight or carbohydrate content, but this approach may not show real translocation rates, because newly produced or used carbohydrates in each organ were not taken into consideration (Gebbing et al., 1998). For such investigations, 13C is useful because it is a stable carbon isotope and safe to use. For this reason, we used labeling with 13C for our analysis of the partitioning of carbon in the plants. Although it is not possible to study the flow of newly fixed photosynthate in plants based solely on the measurement of dry weight or carbohydrate content in each organ, labeling with 13C provides a powerful tool for monitoring the partitioning of newly fixed carbon. Furthermore, it may reveal the translocation of fixed carbohydrates from the culm and leaf sheath to the ears.
The carbohydrates redistributed from vegetative tissues contribute to grain filling in rice. The maximum starch content was attained in the leaf sheaths at 4 days after heading, and thereafter starch in the stem was digested into sucrose and translocated to the ears (Hirose et al., 1999). The translocation of fixed carbohydrates from the culms and leaf sheaths to the ears may be examined from the 13C content of the stems and the ears in the form of starch at several days after flowering and at the active grain-filling stage. Therefore, we examined the translocation and partitioning of 13C at the early grain-filling stage.
The objective of this study was to directly determine the effect of CO2 enrichment on the translocation and partitioning of carbohydrates in rice plants at the early grain-filling stage by examining the uptake, use, and translocation of 13C.
Cultivation of rice plants
Rice (Oryza sativa L. cv Nipponbare) was grown in 1998 at the National Institute of Agro-Environmental Sciences in Tsukuba, Japan. Rice seeds were sown in plug pots at 23 ºC, a relative humidity of 80 %, and a CO2 concentration of 350 or 660 µL L-1 on 20 April. One month later, the seedlings were transplanted into paddy boxes (150 × 150 × 30 cm high), with three seedlings per hill at a plant density of 20 × 20 cm, in 16-m3 computer-controlled environmental chambers. Each chamber contained two paddy boxes and was exposed to natural light, with air conditioning to maintain ambient air temperatures, a relative humidity of 80 %, and CO2 concentrations of 350 and 400 µL L-1 (day and night, respectively) for the normal CO2 conditions, and 660 and 700 µL L-1 for the CO2-enriched conditions (Sakai et al., 2001). The plants were supplied with fertilizer solution (5 g N m-2, 15 g P2O5 m-2, 15 g K2O m-2) just before transplanting, and an additional 3 g N m-2 was supplied 56 days after transplanting (DAT). Heading occurred on 23 August (100 DAT) and 22 August (99 DAT) under normal and CO2-enriched conditions, respectively.
Measurements of air temperature and PAR
Air temperature was measured with a platinum resistance thermometer that was shielded, aspirated and placed outside the chambers, and PAR was measured with an infrared compact sensor (IKS-25, Koito, Tokyo, Japan). Air temperature and PAR were monitored every 10 s, and 5 min means were recorded.
Overall CO2-exchange rate
Plant CO2 uptake in the daytime (C_uptake) and release at night (C_release) were estimated from the CO2 injection rate required to maintain a constant CO2 concentration in the chamber, according to the method of Sakai et al. (2001); this method takes account of the leakage rate of the chamber and the CO2 flux out of the paddy water and the soil.
Determination of sucrose and starch contents
The contents of soluble sugars and of starch were determined enzymatically according to the method reported by Nakamura and Yuki (1992).
Supply of 13C and sampling
13CO2 was supplied to six plants grown under normal conditions and six plants grown under CO2-enriched conditions on 24 August (101 DAT). Each plant was covered with a transparent bag made of 0.10-mm-thick polyvinyl chloride film that neither transmitted nor absorbed air or CO2. Plants took up 13CO2 gas liberated from 100 mg of Ba13CO3 powder mixed with 7.3 M H3PO4 inside the bag. To allow the plants to absorb the liberated 13CO2 entirely, we sealed the bag with water and exposed the plants to 13CO2 at about 1400 µmol m-2 s-1 PFD for 90 min. Three of the six plants supplied with 13CO2 under each CO2 condition were harvested 2 days (26 August) and 9 days (2 September) after supplying the 13CO2. The plants were then divided into culms, leaf sheaths, leaf blades, and ears. Each plant part was immediately stored at −80 ºC, then freeze-dried.
Extraction of structural and non-structural carbohydrates
The dried plant materials were weighed, then ground to a fine powder. Samples (approximately 500 mg) were incubated in 80 % ethanol at 80 ºC for 1 h, then centrifuged at 3000 × g for 10 min, after which the ethanol-soluble fraction was decanted. The ethanol-soluble fraction was dried using a centrifugal dryer under vacuum, then further fractionated using a mixture of 2 mL of distilled water and 2 mL of chloroform, and the aqueous phase was passed through a cation-exchange resin (Dowex-50, Dow Chemical, USA) to remove amino acids. The efflux was collected and used as the soluble sugar fraction. Distilled water was added to the ethanol-insoluble fraction, which had been dried using a centrifugal dryer under vacuum, and the suspension was boiled for 4 h. Thereafter, 20 units of amyloglucosidase in 0.5 mL of 100 mM acetate buffer (pH 4.6) was added to the suspension, which was then incubated for 2 h at 55 ºC to digest starch into glucose. After centrifugation at 3000 × g for 10 min, the water-soluble fraction was collected, then passed through Dowex-50 and a nitrocellulose filter (Advantec, Tokyo, Japan) to remove protein. The filtrate was collected and used as the insoluble sugar fraction (i.e., starch). Both the soluble sugar and the sugar from starch were dried under vacuum using a centrifugal dryer. Thereafter, the water-insoluble fraction (structural components) was washed twice with distilled water at 40 ºC and dried in an oven at 80 ºC.
Measurement of 13C content
The total carbon and 13C contents were determined using an elemental analyzer (NC2500, Thermoquest, San Jose, CA, USA) and a mass spectrometer (Delta Plus System, Thermoquest, San Jose, CA, USA). Each fraction was dried completely, as described above, and the 13C content was determined. The 13C content of each organ was calculated according to equation 1:

(13C content) = (total carbon atom content) × (13C atom excess %) × 13    (1)

where the 13C atom excess % is the difference in the 13C/(12C + 13C) ratio between the plants supplied with 13CO2 and those supplied with ordinary 12CO2. The increase in 13C content from 26 August to 2 September was calculated by subtracting the 13C content on 26 August from that on 2 September.

Results

Figure 1 shows the change in ear weight after heading. During the early grain-filling stage (2 September), the ear weight under CO2-enriched conditions increased more rapidly than under normal CO2 conditions, and the ear weight at maturity (13 October) under CO2-enriched conditions (31.2 ± 0.9 g hill-1) was significantly (P < 0.05, Student's t-test) heavier than under normal conditions (25.6 ± 0.9 g hill-1). The sink strength, which is potentially responsible for this difference, can be divided into two components: sink size and sink activity (including the activities of enzymes). Table 1 shows the influence of CO2 enrichment on spikelet number, which is an indication of sink size. The spikelet number per ear or per hill was not significantly higher under the CO2-enriched conditions (Table 1).

Table 1. Effect of CO2 enrichment on spikelet number and ear number. Plants from 12 rice hills were destructively sampled for each CO2 condition. The normal CO2 concentration was about 350 µL L-1 and the enriched CO2 concentration was about 660 µL L-1. Values represent means ± standard errors. Differences between the means for the two CO2 conditions were analyzed using Student's t-test (ns, non-significant). Differences between the means for the two CO2 conditions were analyzed using Student's t-test (**, P < 0.01; *, P < 0.05).

Though the difference in ear weight was not seen on 26 August, the influence of CO2 enrichment on ear weight was marked on 2 and 12 September (Fig. 1). To examine the translocation of fixed carbohydrate from the stems to the ears, we measured the carbon budget (carbohydrates) during the early grain-filling stage for a week (from 26 August to 2 September). Figure 2 shows the changes in air temperature and incident PAR from 24 August to 2 September. It was sunny and PAR was above 20 mol m-2 day-1 from 24 to 26 August. Thereafter (from 28 August to 2 September), it was cloudy or rainy, hence PAR and air temperature were lower than during the first 3 days. C_uptake was high under both normal and CO2-enriched conditions during the first 3 days (24, 25 and 26 August), when PAR was above 20 mol m-2 day-1 (Fig. 3). However, when PAR was below 13 mol m-2 day-1 (from 28 August to 2 September), C_uptake was low under both normal and CO2-enriched conditions; there was no significant difference in C_uptake between the two conditions from 24 August to 2 September. C_release was lower than C_uptake. Although the daily CO2 budget was significantly higher under the CO2-enriched conditions during the first 2 days, the total CO2 budget from 26 August to 2 September was not significantly different (4.3 ± 0.2 and 4.4 ± 0.2 g CO2 hill-1 under normal and CO2-enriched conditions, respectively). The dry weight of the ears (total weight in Fig. 4) increased from 26 August to 2 September by 2.4 and 5.1 g hill-1 under normal and CO2-enriched conditions, respectively, and the difference was significant (Fig. 4). The dry weights of the other plant organs did not increase significantly during this period. The starch content in the ears under CO2-enriched conditions increased significantly (to 1.45 g hill-1), and this increase was approximately three times the increase observed under normal conditions (0.45 g hill-1).

Fig. 3. Change in CO2 uptake and release from 24 August to 2 September. CO2 uptake in the daytime (C_uptake) and CO2 release at night (C_release) represent the amounts of absorbed and released CO2, respectively. The plants grown under normal and CO2-enriched conditions are shown by the broken and solid lines, respectively. Bars indicate the standard errors of three replications. Differences between the means under the two CO2 conditions were analyzed using Student's t-test (*, P < 0.05).

Fig. 4. Increase in total and structural weights of each plant organ from 26 August to 2 September, and increases in starch and sucrose contents over this period. W, L, S, and E indicate the whole plant, leaf blades, stems (including leaf sheaths and culms), and ears, respectively. Closed and open columns indicate CO2-enhanced and normal conditions, respectively, plus the standard errors for three replications. The increase in total and structural weights was calculated by subtracting the weight on 26 August from that on 2 September. Differences between the means were analyzed using Student's t-test (**, P < 0.01; *, P < 0.05).
Rice plants were supplied with 13CO2 to examine the translocation and partitioning of photosynthate in the plants. We preliminarily examined the absorption of 13CO2 liberated from 100 mg of Ba13CO3 powder into a rice plant (cv Fujihikari) at the heading stage under 900 µmol m-2 s-1 PFD of artificial light, which was lower than the PFD on 24 August 1998 (Fig. 5). Though 95 % of the supplied 13C had been absorbed into the plant at 60 min after the start of supplying 13CO2, all of the 13C had been absorbed at 90 min. To absorb the 13CO2 entirely, we exposed the plants to 13CO2 for 90 min in this experiment. At 2 days after supplying 13CO2, no difference was observed in the 13C content of the whole plant under normal (59.3 %) and CO2-enriched conditions (55.1 %), but the 13C content of leaf blades under CO2-enriched conditions (6.2 %) was lower than under normal conditions (9.0 %) (Table 2). In the whole plant, the 13C content of the sucrose fraction at 2 days after supplying 13CO2 was about 6 % under both conditions. Thereafter, the 13C labeling in sucrose of the whole plant decreased under both conditions, accompanied by an increase in 13C labeling of starch during the period we examined (from 26 August to 2 September) (Fig. 6). This indicates that sucrose was converted into starch during this period in the plant as a whole. In stems, 13C in starch increased significantly during this period under normal conditions, but decreased or did not change under CO2-enriched conditions. In the ears, the 13C content of starch under CO2-enriched conditions increased by 5.3 % during this period, but only by 0.8 % under normal conditions.

Fig. 5. The amount of 13C absorbed into the rice plant at the heading stage after exposure to 13CO2 for various periods. The 13CO2 was liberated from 100 mg of Ba13CO3 powder at 900 µmol m-2 s-1 PFD of artificial light. Bars indicate the standard errors of three replications.
Discussion
In most investigations concerned with the contribution of stored and newly produced photosynthate to grain filling, researchers have analyzed the long-term reserves and the growth of grains and other plant parts. However, estimates of translocation based solely on changes in dry-matter weight or carbohydrate content may not represent actual translocation, because newly produced or used carbohydrates in each organ are not taken into consideration (Gebbing et al., 1998). In the present study, we analyzed the growth of each plant organ together with the partitioning of photosynthate labeled with 13C, allowing us to estimate the contribution of the reserves in each organ to grain filling. Under normal conditions, the starch content of the stem did not change for 1 week after heading; thus, it seems that carbohydrates stored in the stem were not decomposed and translocated to the grains (Fig. 4). However, based on the results of the 13C analysis, it appears that a large amount of the soluble sugars in the stem was used to synthesize starch and was stored in the stem during this period (Fig. 6). These findings indicate that carbohydrates stored in the stem are translocated to the grains, and newly fixed carbon is stored in the stem.
In the present paper, the roles of the roots and of dead leaves were not considered. In another study conducted in a paddy field (Kim et al., 2001), the amount of 13C translocated to the roots was less than 1 % of the 13C supplied at the time of heading (data not shown). We also measured the amount of fallen leaves every 30 days, but the data cannot be presented here, because the difference between plants grown under normal and CO2-enriched conditions was very small and because the amount of leaves fallen during the 9-day study period was negligible in our analysis of 13C translocation. Though about 40 % of the 13C supplied on 24 August was released from the whole plant before 26 August, the 13C content of the whole plant did not decrease from 26 August to 2 September (Table 2, Fig. 6). Since about 50 % of the 13C supplied at 3 days after heading day was used for respiration within 2 days and thereafter little 13C in the whole plant was released (Hara et al., 1999), we inferred that about 40 % of the 13C supplied on 24 August was used for respiration within 2 days and that little 13C was used for respiration after 26 August.

Fig. 6. Increase in 13C content of the plant as a whole and that of structural components in different organs from 26 August to 2 September, and starch and sucrose contents after this period. The increase in 13C content was calculated by subtracting the 13C content on 26 August from that on 2 September. The amount of 13C supplied to the plants was set to 100 %. Black and white columns indicate CO2-enhanced and normal conditions, respectively, plus the standard errors for three replications. See Fig. 4 for details.
The heading day under CO2-enriched conditions was earlier than that under normal CO2 conditions, which might affect the translocation and partitioning of carbohydrate. However, since the difference in heading day was only one day and no differences were observed in the partitioning of 13C to the starch fractions of the stems and ears between normal and CO2-enriched conditions (Table 2), the influence of heading day was considered to be small at 2 days after supplying 13CO2. The translocation and partitioning of carbohydrate at 9 days after supplying 13CO2, during the active grain-filling stage, might also be affected by the difference in heading day. However, since the difference in heading days was only one day and the plants under both conditions were grown under the same light and temperature conditions, the influence of heading day was also considered to be small even at 9 days after supplying 13CO2.
Our finding that ear weight was increased by CO2 enrichment agrees with previous reports (Kimball, 1983, 1986; Kim et al., 2001). Although increased photosynthesis has been reported under long-term CO2 enrichment (Baker et al., 1990b; Monje and Bugbee, 1998; Sakai et al., 2001), the influence of CO2 enrichment on carbohydrate translocation has been reported by only a few researchers (e.g., Grüters, 1999). At 2 days after supplying 13CO2, the 13C content of leaf blades under CO2-enriched conditions was lower than under normal conditions (Table 2); thus the carbon fixed in leaf blades under CO2-enriched conditions was translocated to other organs more rapidly than under normal conditions. This indicates that CO2 enrichment accelerated the translocation of carbohydrates from the leaf blades. CO2 enrichment has been reported to increase the activity of leaf sucrose-phosphate synthase (SPS, EC 2.4.1.14), which would promote the translocation of photosynthates in rice plants (Hussain et al., 1999; Seneweera et al., 1995). These findings suggest that translocation might be increased because CO2 enrichment promotes the activities of enzymes such as SPS that are necessary for photosynthate translocation. Although effects of CO2 enrichment on the final grain yield have been reported (Kimball, 1983, 1986; Kim et al., 2001), its influence on grain filling has not been investigated in detail. The balance between source and sink was thought to determine carbohydrate storage in the grains during the grain-filling stage (Ho, 1988). However, our study revealed little difference in the total CO2 budget (source activity) and in the spikelet number per plant or per ear (sink size) between plants grown under normal and CO2-enriched conditions (Fig. 3, Table 1). We hypothesize that CO2 enrichment promotes the efficiency of carbon translocation. Newly fixed carbon was translocated more rapidly from the leaf blades under CO2-enriched conditions, but the 13C content in the form of starch in the ears at 2 days after supplying 13CO2 under CO2-enriched conditions was not greater than under normal conditions (Table 2). We considered that sink activity was too low to transport the large amount of carbon fixed in the leaf blades to the ear, and that excess carbon was stored as starch in the stem or as sucrose in the whole plant. At 9 days after supplying 13CO2, the 13C content of sucrose in the whole plant had decreased under both CO2 conditions, but the 13C content of starch in both the stem and the ear increased under normal conditions (Fig. 6). In contrast, the 13C content of starch in the ears increased more under CO2-enriched conditions than under normal conditions, while the 13C content of starch decreased in the stem under CO2 enrichment. The sink activity in the plant under normal CO2 conditions was thought to be lower than that under CO2-enriched conditions, and excess carbon under normal CO2 conditions was stored in the stem. Considering that the 13C content of starch decreased in the stem under CO2-enriched conditions, CO2 enrichment is considered to promote the translocation of carbohydrate from the culms and leaf sheaths, accompanied by an increase in sink activity. We suggest that CO2 enrichment promotes sink activity and metabolic activities related to the translocation of carbon to other plant organs.
We also suggest that the heavier ear weight at maturity under CO2-enriched conditions results from the increased translocation of carbon to the ear during the grain-filling period.
The effect of CO2 enrichment on enzymes related to carbohydrate metabolism has been studied in the leaf blade but not in the stem. In a previous study, we found that the levels of mRNA for cytosolic fructose 1,6-bisphosphatase (EC 3.1.3.11) and for SPS were increased in the leaf blade by CO2 enrichment (Aoki et al., 2003). We also found that CO2 enrichment accelerated the translocation of carbohydrates from the leaf blade. Although we also found that CO2 enrichment accelerated the translocation of carbohydrate from the stem to the ears, whether it increased enzymatic activity in the stem has not been investigated in sufficient depth. It is nonetheless possible that increased activities of enzymes related to the debranching and degradation of starch and to translocation will be observed under CO2-enrichment conditions.

Micrometeorology of the National Institute of Agro-Environmental Sciences for providing the data on ambient CO2 concentrations that were used for our calculation of chamber leakage. We measured 13C by mass spectrometry at the Asia Natural Environmental Science Center of the University of Tokyo. | 2019-03-21T13:10:01.056Z | 2005-01-01T00:00:00.000 | {
"year": 2005,
"sha1": "ef14b918624830b73c854e6ba41ca9d0c4e6f888",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1626/pps.8.8?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "94be919728ab90db6742b450ee74a13b457d0fa8",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
220602265 | pes2o/s2orc | v3-fos-license | A deep learning approach for staging embryonic tissue isolates with small data
Machine learning approaches are becoming increasingly widespread and are now present in most areas of research. Their recent surge can be explained in part due to our ability to generate and store enormous amounts of data with which to train these models. The requirement for large training sets is also responsible for limiting further potential applications of machine learning, particularly in fields where data tend to be scarce such as developmental biology. However, recent research seems to indicate that machine learning and Big Data can sometimes be decoupled to train models with modest amounts of data. In this work we set out to train a CNN-based classifier to stage zebrafish tail buds at four different stages of development using small information-rich data sets. Our results show that two and three dimensional convolutional neural networks can be trained to stage developing zebrafish tail buds based on both morphological and gene expression confocal microscopy images, achieving in each case up to 100% test accuracy scores. Importantly, we show that high accuracy can be achieved with data set sizes of under 100 images, much smaller than the typical training set size for a convolutional neural net. Furthermore, our classifier shows that it is possible to stage isolated embryonic structures without the need to refer to classic developmental landmarks in the whole embryo, which will be particularly useful to stage 3D culture in vitro systems such as organoids. We hope that this work will provide a proof of principle that will help dispel the myth that large data set sizes are always required to train CNNs, and encourage researchers in fields where data are scarce to also apply ML approaches.
Introduction
Machine learning (ML) approaches are not new, with early works dating as far back as the 1950s [1]. However, in the last two decades, the field has experienced an astonishing surge in productivity and progress. This surge can be explained, at least in part, by our new-found ability to generate and store ever larger amounts of data (Big Data) with which to train ML models, coupled with unprecedented computational speed and power. It is becoming difficult to find a […] rich, often consisting of stacks of high-resolution z-slices that together make up a complete 3D image of the embryo or region under study. Moreover, advances in the multiplexed detection of mRNA distribution mean that such 3D structural information can be coupled with information about the distribution and quantification of multiple mRNA species from the tissue to the sub-cellular level [49-52]. With ML approaches it is often unclear at the outset exactly how much data will be required to achieve a given test accuracy. Inspired by previous work showing that CNN-based classifiers could be trained with information-rich small data sets [43-48], we set out to see whether we would be able to train a CNN to classify zebrafish tail buds of different stages with the limited data that was available to us in the lab.
Zebrafish (Danio rerio) is a very popular model organism in developmental biology. Its embryos go from fertilization to free-swimming larvae in three days [53]. During the first day of their development, zebrafish embryos enter the segmentation period [53]. During this time, the embryo elongates, somites (precursors of the dermis, skeletal muscle, cartilage, tendons, and vertebrae) appear sequentially in an anterior-to-posterior manner, and the tail bud becomes more prominent (Fig 1A). Zebrafish embryos are routinely staged based on their overall shape and the total number of somites that they have formed. However, during these stages the overall shape of the tail bud also changes, although more subtly, to become shorter, thinner and overall straighter (Fig 1B).
In this work we set out to test whether it would be possible to use a small albeit information-rich data set of confocal images to train a CNN to accurately classify images of zebrafish tail buds at four different stages during the segmentation period. In addition to the challenge regarding the small size of our training set, we also wanted to see whether a CNN would be able to learn to classify based on subtle changes in the shape of an isolated embryonic structure, in this case the tail bud. Such a classifier would solve the problem of asynchronous development within clutches and save man-hours by automating the staging step in laboratories across the world.
In this paper we show that, contrary to popular belief, small information-rich data sets can also be used to train CNN-based classifiers to a high accuracy. We have focused on building and training CNNs to correctly stage two- and three-dimensional morphological and gene expression image data of zebrafish tail buds at four different stages during the segmentation period of development. We found that CNN-based classifiers can yield test accuracies of 100% when trained with fewer than 100 images. Furthermore, our results show that this is the case both when morphological and when gene expression image data were used as the training set. Surprisingly, higher-dimensional data (3D versus 2D) is not always associated with a higher accuracy score. We hope that our work will provide a precedent and encourage others in the life sciences to apply ML approaches even when their data are relatively scarce.
Data
In this work, we set out to train convolutional neural networks to classify 2D and 3D confocal images of dissected tail buds taken from zebrafish embryos at four close but distinct stages in development: the 16-18 somite stage, 20-22 somite stage, 24-26 somite stage and 28-30 somite stage (Fig 1A). The chosen classes each cover approximately 1.5 hrs of embryonic development and are fine enough for our general research purposes, which are aimed at understanding fate specification in the tail bud during the segmentation period.
Whole embryos were stained for three gene products, Tbxta, Tbx16 and Sox2, using HCR V.3 [54]. Tbxta is expressed in the notochord and mesoderm progenitor zone (Fig 2A and 2D), while Tbx16 is also expressed in the mesoderm progenitor zone and is present in the posterior pre-somitic mesoderm, but not in the notochord (Fig 2A and 2C) [55, 56]. Sox2 is a neural marker which is expressed in the neural progenitor zone (Figs 1C and 2A and 2B) [57]. All embryos were also stained with DAPI, a nuclear marker (Fig 1B), which allows us to visualise the morphology of the entire tail. The staining protocol takes three days and has been optimised to allow us to stain larger numbers of embryos at a time (see Materials and methods).
Once the tails had been staged and stained, they were imaged on a confocal microscope. The resulting images consist of four channels, one for each stained gene product plus DAPI, and we image over three spatial dimensions. Images were subsequently passed through a preprocessing pipeline in preparation to be used for network training and testing (see details in the Materials and methods section). It is important to note that the tails in all images presented to the CNN were in the same orientation: anterior to the left, posterior to the right, dorsal up and ventral down as in Fig 1B and 1C. We obtained a total of 120 images stained with DAPI, 56 images stained for tbxta, 56 for sox2 and 56 for tbx16. Of the 120 DAPI stained images obtained, 96 were used for training (with 24 images per class) and the remaining 24 images were used for testing (with 6 images per class). Of the 56 images obtained for each gene of interest, 48 were used for training (with 12 images per class) and the remaining 8 were used for testing (with 2 images per class) (for more details please see the Final Data Sets section in the Materials and methods and in particular Table 2). These are very small data set sizes compared to those usually used for deep learning classification problems, which typically range from the hundreds to the millions.
All images are three dimensional, with x, y and z axes. By adding an additional pre-processing step, we obtained a maximum intensity projection along the z axis for every image in order to obtain its two-dimensional representation. In this way we generated an associated 2D version of the 3D data set. This dimensionality reduction was accompanied by a reduction in the size of the images compared with their 3D counterparts. As a result we were able to increase the dimensions of the x and y axes to 128 x 128 pixels. All data are available on Figshare (see Materials and methods).
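As an illustration, the maximum-intensity projection used to build the 2D data set amounts to a one-line reduction over the z axis; the sketch below assumes a (z, y, x) numpy volume and uses scikit-image for the resampling to 128 x 128, which is an illustrative choice rather than the authors' exact Imaris export.

```python
import numpy as np
from skimage.transform import resize  # illustrative choice for resampling

def max_project_and_resize(volume, out_shape=(128, 128)):
    """Collapse a (z, y, x) image stack into a 2D maximum-intensity projection
    along z, then resample to the target x-y size used for the 2D data set."""
    projection = volume.max(axis=0)                      # maximum along z
    return resize(projection, out_shape, preserve_range=True)
```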
Convolutional neural network architecture
We chose a simple architecture, composed of two convolutional layers, each using rectified linear unit (ReLU) activation functions followed by a max pooling operation (Fig 3). The first convolutional operation employs a kernel size of 5x5x5 and a stride of 1x1x1, producing 32 channels, and the second uses a kernel size of 4x4x4 and a stride of 1x1x1, producing 64 channels. The number of channels produced by the convolution operation is arbitrary, and is often used as a hyperparameter to optimise a network. Both convolutions are followed by a max pooling layer with a kernel size of 3x3x3. Once passed through the convolutional layers, the data is flattened into a one-dimensional array of length 262144 (64x64x64) and passed into the multi-layered perceptron (MLP) part of the network. The MLP contains three fully-connected layers: two hidden layers followed by an output layer consisting of 4 units, one for each of our defined classes (16-18 somites, 20-22 somites, 24-26 somites and 28-30 somites). An activation function is implemented after each of the first two layers, and the dropout regularisation technique is applied with a dropout rate of 0.2. The final, fully-connected layer containing four units returns the log probabilities of each image belonging to each of the four classes. Finally, a softmax activation function is applied to these probabilities, which yields a percentage likelihood for each classification. During training, a cross entropy function is used as the loss function, which is optimised using the Adam optimiser. The architecture described above corresponds to the 3D implementation of the classifier. The 2D implementation is generally identical, except for the necessary reduction in one dimension (Fig 3A and 3B; more details in the Materials and methods section).
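A minimal PyTorch sketch consistent with this description is shown below: two 3D convolution/ReLU/max-pooling blocks feeding a three-layer MLP with dropout and four output units. The 500- and 400-unit hidden layers anticipate the parameter search reported in the Results, a LazyLinear layer is used so the sketch does not have to commit to a particular flattened size, and none of this is the authors' published code.

```python
import torch
import torch.nn as nn

class TailBudCNN3D(nn.Module):
    """Sketch of the 3D classifier described in the text: two conv/max-pool
    blocks followed by a small MLP ending in 4 class logits."""
    def __init__(self, n_classes=4, hidden1=500, hidden2=400, dropout=0.2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5, stride=1), nn.ReLU(), nn.MaxPool3d(3),
            nn.Conv3d(32, 64, kernel_size=4, stride=1), nn.ReLU(), nn.MaxPool3d(3),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(hidden1), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden1, hidden2), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden2, n_classes),   # class scores; softmax applied at evaluation
        )

    def forward(self, x):                    # x: (batch, 1, 64, 64, 64)
        return self.classifier(self.features(x))

# Typical training setup for the sketch above.
model = TailBudCNN3D()
_ = model(torch.zeros(1, 1, 64, 64, 64))     # dummy pass to materialise the LazyLinear layer
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
```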
CNN-based classifiers trained on less than 100 morphological images of zebrafish tail buds reach up to 100% accuracy
Our first objective was to train a classifier that would be able to distinguish between morphological (DAPI-stained) images of zebrafish tail buds at four discrete stages in development. The confocal images are three dimensional but can be reduced to two dimensions by performing a maximum intensity projection along one of the axes in the image analysis software Imaris, as detailed in the previous section. Given the small size of our data set, we wanted to see whether a 2D or a 3D data set would be more suitable to train a CNN to accurately classify zebrafish tail buds according to their developmental stage. To address this we trained both 2D and 3D versions of the network on 2D and 3D data sets respectively and compared them.
CNN reaches close to 80% accuracy classifying 3D morphological images of zebrafish tail buds. The 3D CNN was initially parametrised using the optimisation algorithm SigOpt (see Materials and methods section) to find the most appropriate activation functions and number of units in the first and second hidden layers. This procedure found that ReLU activation for both layers performed consistently better than Tanh activation for both layers, or a combination of the two. The algorithm settled on a first hidden layer composed of 501 units and a second composed of 397 units, which we rounded to 500 and 400 units respectively (Materials and methods, Table 5). All other network parameters remained as described in the previous section.
This network was further trained on the 3D DAPI data set for 100 epochs to allow learning convergence and optimal classification results. The total training data set consisted of 96 3D DAPI stained images, with 24 per class (see Table 2). As expected, extending the training time results in a significant increase in accuracy and an associated decrease in loss. The training accuracy peaked at a perfect score of 100% after 100 epochs, compared to the 62% accuracy reached after 10 epochs during the initial parameter optimisation round. Similarly, test accuracy reached a maximum value of 79.16%, compared to the initial 58.33% test accuracy (Materials and methods, Table 5). A test accuracy of almost 80% is already a very good score, and a surprising one too, considering the small size of the training set used.
CNN reaches 100% accuracy classifying 2D morphological images of zebrafish tail buds. Next, we proceeded to train the 2D version of the classifier on the associated 2D data set. As with the 3D classifier, we use an initial coarse-grained parameter optimisation strategy to find the best performing parameter combination before training the selected network for longer (more epochs, see Materials and methods). The resulting network also uses ReLU activation functions, this time with 429 and 330 units in the first and second hidden layers respectively (see Table 6, cyan). Using these parameters, the network converged after as few as 25 epochs, a small number compared to the 100 that were required for convergence of the 3D classifier. The accuracy when the network was applied to the test data set reached a perfect score of 100%. This was expected, since in the coarse-grained training this network had already scored 95.83% (see Table 6; the cyan row indicates the parameters that result in the best performance).
Performance comparison of 2D vs 3D CNN-based classifiers of morphological images of zebrafish tail buds. Both classifiers perform exceptionally well, especially considering that less than 100 images were used to train four different classes. The 3D classifier reached an accuracy of almost 80% while the 2D classifier achieved an accuracy of 100% (as shown in Table 1). We had initially expected the increased information contained in the 3D images, compared to their 2D counterparts, to result in a better performance of the 3D classifier. Instead we find the opposite. Furthermore, the 2D network obtained a higher test accuracy at a quarter of the computational time, converging after only 25 epochs as opposed to the 100 required to converge the 3D network. One possible explanation for this result might have to do with our methodology: to train the 2D network we initialise using some of the parameter values obtained from training the 3D network, in a somewhat unusual application of transfer learning. Transfer learning refers to using the weights obtained by training a network with a given data set as the starting point for training the network on a different data set and for another classification task [58]. This method has already been shown to be extremely effective at reducing the amount of labelled image data required for training CNNs on biological microscopy data [25, 41, 59-61]. Usually this approach is used to train networks on new data sets and classification challenges. In our case, transfer learning was applied to the same classification task and data, albeit in a different format, which yielded an improved performance and a faster convergence time.
In practical terms, this result dispels the myth that training a CNN to a high test accuracy always requires prohibitive amounts of data, and shows how transfer learning can help make the most out of the available data, in this case to further improve the levels of accuracy to obtain a perfect score on the 2D version of the same set. This result suggests that CNNs can be successfully applied to myriads of classification tasks in the life sciences, where data are scarce.
CNN-based classifiers trained on less than 50 3D images of gene expression domains in zebrafish tail buds reach up to 100% test accuracy
Next, we applied a traditional transfer learning approach to ask whether 2D and 3D classifiers could be trained to stage gene expression domains in zebrafish tail buds at the same four developmental stages considered before (Fig 2B-2D). To do this, we used the network architecture and weights obtained in the previous section from training on 2D and 3D morphological (DAPI-stained) images of zebrafish tail buds, and re-trained these networks using the gene expression data. Gene expression data are fundamentally different from the previously used DAPI stains (compare Fig 1A and 1B with Fig 2B-2D). While DAPI stains the nucleus of every cell, hence building an image of the tail bud's morphology, HCR V.3 stains the mRNA of the gene of interest anywhere in the cell and its surroundings, resulting in hazy coloured clouds. To add to the challenge of staging hazy gene expression domains, the data sets used were half the size of those used to train for morphological classification in the previous section (Table 2), with a training set size of 48 images for each gene.
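The re-training step can be sketched as a standard fine-tuning loop in PyTorch, reusing the DAPI-trained weights as the initial state. The weight file name, data loader and batch size below are placeholders, and TailBudCNN3D refers to the architecture sketch given earlier.

```python
import torch
from torch.utils.data import DataLoader

# Re-use the DAPI-trained weights as the starting point (transfer learning),
# then fine-tune on the smaller gene expression data set for 25 epochs.
model = TailBudCNN3D()                                   # architecture sketched earlier
model(torch.zeros(1, 1, 64, 64, 64))                     # materialise lazy layers before loading
model.load_state_dict(torch.load("dapi_3d_weights.pt"))  # placeholder file name
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

def fine_tune(model, gene_expression_dataset, epochs=25, batch_size=4):
    loader = DataLoader(gene_expression_dataset, batch_size=batch_size, shuffle=True)
    model.train()
    for _ in range(epochs):
        for volumes, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(volumes), labels)
            loss.backward()
            optimizer.step()
    return model
```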
CNNs stage 3D gene expression domains in zebrafish tail buds with a minimum of 87.5% accuracy. We used the 3D network obtained by training on the 3D morphological image data set (previous section) as the starting point from which to train three networks, each of which will learn to classify 3D gene expression image data for one gene (Sox2, Tbx16 or Ntl, respectively; see Fig 2). The gene expression image data set for each gene is smaller than the morphological image data sets used in the previous section, with each containing less than half the number of samples (56 as opposed to 120; see Materials and methods, Table 2). To reduce the risk of over-fitting that is commonplace when training networks with small data sets, we reduced the number of epochs to 25. Preliminary experiments showed that the models tended to converge more quickly with the smaller data sets, corroborating our choice of a reduced training time.

Table 2. Data set summary. Our data were divided into two data sets, the 2D and the 3D data sets. Within those data sets there were images stained for DAPI, Tbxta, Tbx16 and Sox2. The table shows the total number of images of each kind (Total column), and how many images were used for training (Training column) and testing (Testing column). In brackets, the number of images per class.
After 25 epochs, the networks trained to stage 3D Ntl and Sox2 expression domains in the tail bud both achieved a test accuracy of 100%, while the network trained to stage 3D Tbx16 expression patterns reached a lower, but also impressive, maximum test accuracy of 87.5% (Table 3). The observed differences in accuracy could be due to a number of reasons, including differences in the level of background staining between each of the three genes, or they could be rooted in existing differences between the shapes of the expression domains themselves. The rates of convergence during training did not differ significantly between the three networks (S1 Fig).

CNN-based classifiers trained on less than 50 2D images of gene expression domains in zebrafish tail buds score between 60 and 90% test accuracy. Again, we use a transfer learning approach, where the 2D network pre-trained on morphological image data (previous section) was taken as the starting point from which to train three new networks, each of which will learn to stage 2D gene expression domains. As in the 3D case discussed in the previous section, we train for 25 epochs to reduce potential over-fitting.
For each gene, the training accuracy of the 2D models plateaus soon after 10 epochs, achieving between 70% and 80% in each case (Table 4). The range of testing accuracies that we obtain for the 2D expression data is larger than that obtained during training, spanning between 60% and 90% (Table 4), and the 2D networks overall perform much worse than when the networks were trained on 3D expression data.
Performance comparison of 2D vs 3D CNN-based classifiers of gene expression images.
Contrary to what we found with the networks trained on morphological (DAPI-stained) tail bud images, networks trained on gene expression data sets tended to perform consistently better when trained on the 3D image data than when trained on 2D image data (except for Tbx16, where the test accuracies of the 3D and 2D networks are the same). It is difficult to pinpoint exactly the reasons underlying the improved performance of the 3D network; however, it is possible that for these very small data set sizes, the training benefits from the extra information contained in the 3D images and is able to take full advantage of it. The 2D images are rendered by projecting down all of the information in the Z axis, hence averaging it out. While it seems that this information is expendable when training on DAPI-stained images, it is definitely not when training on gene expression data. This suggests, at least for such small training set sizes, that the changes undergone by gene expression domains in all dimensions are learnt and used by the network.
Conclusion
Our results have shown that two- and three-dimensional convolutional neural networks can be trained to stage developing zebrafish tail buds based on both morphological and gene expression images. Importantly, we show that high accuracies can be achieved with data set sizes of under 100 images, much smaller than the typical training set size for a convolutional neural net, which tends to be at least in the tens of thousands, if not larger.
By showing that we can build a CNN-based classifier to stage isolated structures such as tail buds, our work also highlights that it is not necessary to always rely on established developmental landmarks such as somite formation for staging. This ability will make it possible to directly stage developmental processes that are difficult to otherwise stage accurately based solely on such landmarks. Furthermore, image series can be generated directly from tissue explants to train a CNN-based classifier, leaving out entirely the additional step of having to stage whole embryos. This approach should prove particularly useful considering the recent rise in the use of 3D culture methods to derive specific tissue derivatives, or organoids. Research on these systems requires the development of accurate staging systems, since a tissue or organ of interest developed in isolation in a dish can no longer be staged relative to the development of the whole embryo [62]. A recent example highlighting the importance of being able to stage such structures comes from work where the ability to order intestinal organoids along a common morphogenetic trajectory was key for determining the mechanisms of symmetry breaking at the early stages of their development [63]. We propose that the development of CNN-staging methods will offer a broad range of advantages, allowing us to follow developmental events in both embryonic explants and organoids.
Convolutional neural nets are becoming increasingly widespread and, thanks to the various user-friendly tools that are now available to implement them, we expect their use to soon extend even further. Although our results suggest that the data requirements for training a CNN-based classifier are highly dependent on the nature of the data themselves, this work constitutes a proof of principle that we hope will contribute to dispelling the myth that large data set sizes are always required to train CNNs, and encourage researchers in fields where data are scarce to apply ML approaches.
Ethics statement
This research was regulated under the Animals (Scientific Procedures) Act 1986 Amendment Regulations 2012 and approved following ethical review by the University of Cambridge Animal Welfare and Ethical Review Body (AWERB).
Zebrafish husbandry, manipulation and embryo collection
Zebrafish (Danio rerio) from the wild-type Tupfel Long-fin line were kept at 28.5˚C, as recommended by standard protocols. Crosses with one male and one female were set up and left overnight. The morning after, dividers were lifted and the fish allowed to mate for 15 minutes, after which the embryos were collected. This was done to favour the synchronous development of embryos in the same batch. Embryos were kept in E3 embryo medium (Westerfield, 2000) and incubated between 26˚C and 32˚C until they reached the correct somite stage range of one of the four classes (16-18 somites, 20-22 somites, 24-26 somites and 28-30 somites) according to the Kimmel et al (1995) staging table. Embryos were de-chorionated by hand, fixed in 4% paraformaldehyde (PFA) and stored overnight at 4˚C.
In situ hybridization
In situ hybridization was performed using third-generation DNA hybridization chain reaction (HCR V3.0), and carried out as described by Choi et al. (2018). In brief, embryos are incubated overnight at 37˚C in 30% probe hybridisation buffer containing 2pmol of each probe mixture. Excess probes were washed off with 30% probe wash buffer at 37˚C and 5xSSCT at room temperature. Embryos are then incubated overnight in the dark, at room temperature in amplification buffer containing 15pmol of each fluorescently labelled hairpin. A shaker was used to ensure a thorough mixing of the hairpin solution, which allowed us to increase the number of embryos per eppendorf approximately 10-fold. Following HCR, embryos were bathed in DAPI overnight at 4˚C. The protocol takes a total of three days. Probe sequences were designed and manufactured by Molecular Instruments, Inc.
Sample preparation and imaging
A precision scalpel was used to dissect the posterior unsegmented and tail bud regions from the embryo bodies while viewing through a Nikon Eclipse E200 in bright field with a 10x objective. Tail buds were then collected using a glass pipette, placed in the center of a glass-bottom dish and mounted using methyl-cellulose. Particular emphasis was put on trying not to make the cuts stereotypical, in order to reduce the likelihood of them being learnt by the neural net. Images were acquired using a Zeiss LSM 700 laser-scanning confocal microscope and the accompanying Zeiss Zen 6.0 software. The z-dimension and laser intensity settings were modified on a batch-by-batch basis to account for variance in the depth range of the samples between experiments, while the bit depth and pixel dimensions for the image capture were kept constant at 12-bit and 256 x 256 px, respectively.
All the raw data used in this project can be freely downloaded from https://figshare.com/articles/dataset/Data_for_staging_proejct_Deep_learning_for_classification_of_4_different_stages_of_embryos/13110599
Image processing
Image samples were processed using Imaris 9.2.2 (Bitplane) and FIJI (Schindelin et al., 2012). Each image was processed manually according to the following workflow: 1. Images are first saved in Zeiss' proprietary LSM (.lsm) format and are then imported into Imaris. Each image is repositioned using the 'Image Processing -> Free Rotate' function. The tail bud is moved in three dimensions such that the antero-posterior (AP) and dorsoventral (DV) axes are approximately positioned left-to-right and top-to-bottom, respectively.
2. A surface was created using the DAPI channel and used to segment the region of the image occupied by the unsegmented posterior embryonic axis and tail bud. Masks were then used to set all voxels outside this segment to zero on all four channels, leaving the target region intact while removing imaging noise and artifacts from the sample. 3D images of each of the four channels (DAPI, Tbxta, Sox2 and Tbx16) were exported individually in .TIF format. In addition, maximum intensity projection images (projections onto the z axis) were obtained and exported for each channel and used to assemble the 2D datasets (details below).
3. Once in .TIF format, and before presenting the images to the model for training, they were subjected to a simple programmatic pre-processing pipeline. First, the z axis was normalised across all training samples, so that images which initially had varying numbers of z-slices (a consequence of the batch-by-batch acquisition settings) all ended up with the same size, as required by the neural network. Next, the x and y dimensions of each slice were cropped to accommodate the computational bottleneck of fitting large 3D images into memory. The final pre-processing step before inputting the data into the network was to convert the images into NumPy array format (van der Walt et al., 2011). NumPy arrays are an efficient and robust way of representing image data as a multi-dimensional matrix, and are the form in which the downstream model processes the contents of the image.
4. Finally, images were converted to grayscale and reshaped to 1 x 64 x 64 x 64 pixels, where the first dimension represents the number of channels (1 for a grayscale image), before being presented to the network. A minimal sketch of steps 3 and 4 is given after this list.
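As a rough illustration of steps 3 and 4, the snippet below sketches one possible pre-processing routine in Python. It is not the authors' code (which is available in the repository linked under "Code and computer specifications"): the use of linear interpolation for z-normalisation, the centre-crop in x and y, the intensity rescaling and the SciPy/NumPy helpers are all assumptions made purely for illustration.

```python
# Hypothetical sketch of pre-processing steps 3 and 4 (not the authors' code).
# Assumed: volumes arrive as (z, y, x) arrays read from .TIF stacks, the target
# shape is 64 x 64 x 64, and the x/y crop is a centre-crop.
import numpy as np
from scipy.ndimage import zoom

TARGET = 64  # target size per axis, as stated in step 4

def preprocess_volume(volume):
    """volume: array of shape (z, y, x) with a batch-dependent number of z-slices."""
    z, y, x = volume.shape
    # Normalise the z axis so every sample has the same number of slices.
    volume = zoom(volume, (TARGET / z, 1.0, 1.0), order=1)
    # Crop x and y so that the 3D array fits comfortably in memory.
    y0 = (volume.shape[1] - TARGET) // 2
    x0 = (volume.shape[2] - TARGET) // 2
    volume = volume[:, y0:y0 + TARGET, x0:x0 + TARGET]
    # Convert to a float32 NumPy array of grayscale intensities in [0, 1].
    volume = volume.astype(np.float32)
    volume /= max(float(volume.max()), 1e-8)
    # Add a leading channel dimension: 1 x 64 x 64 x 64.
    return volume[np.newaxis, ...]
```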
Final datasets
The final data sets consist of 2D and 3D collections of images for DAPI-stained embryos, as well as embryos stained using HCR v3.0 for the products of three genes: Tbxta, Sox2 and Tbx16. The 2D and 3D DAPI image datasets each contain a total of 120 images: 96 are used for training (24 images per class) and 24 for testing (6 per class). The 2D and 3D HCR image datasets contain a total of 56 images for each gene: in each case, 48 are used for training (12 per class) and 8 for testing (2 per class). An equal split of samples per class was maintained in all cases. No data augmentation was used.
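To make this composition concrete, the sketch below shows one way the pre-processed volumes could be wrapped as a PyTorch dataset. The per-class folder layout, the .npy file format and the folder names are hypothetical conveniences for illustration, not a description of how the deposited data are actually organised.

```python
# Minimal PyTorch Dataset sketch for the 3D volumes (illustrative only).
import glob
import os
import numpy as np
import torch
from torch.utils.data import Dataset

CLASSES = ["16-18", "20-22", "24-26", "28-30"]  # the four somite-stage classes

class TailBudDataset(Dataset):
    """Assumes one folder per class containing pre-processed .npy volumes."""
    def __init__(self, root):
        self.items = []
        for label, cls in enumerate(CLASSES):
            for path in sorted(glob.glob(os.path.join(root, cls, "*.npy"))):
                self.items.append((path, label))

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        path, label = self.items[idx]
        volume = np.load(path)  # shape (1, 64, 64, 64) after pre-processing
        return torch.from_numpy(volume).float(), label

# For the DAPI data the text quotes 96 training and 24 test volumes (24/6 per class).
```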
2D CNN formulation
The CNN presented in the Results section corresponds to the 3D implementation of the network. The 2D implementation is essentially identical, apart from the necessary reduction of one dimension. To achieve this, the kernel size of the first convolution layer is reduced from 5x5x5 (in the 3D model) to 5x5. The flattened layer must also be adapted to the reduced dimensionality, and in the 2D implementation takes the form of a vector of length 65536. Fig 3A and 3B show a visual representation of the architecture of the 2D network. All other parameters were kept the same as in the 3D network.
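A hedged PyTorch sketch of such a 2D network is given below. It is not the authors' implementation (which is available in the linked repository): the number of convolution filters and the 2x2 max-pooling are assumptions chosen only so that the flattened vector has the stated length of 65536 for a 1 x 64 x 64 input, while the hidden-layer sizes of 420 and 330 units are the optimised values reported later in this section.

```python
# Illustrative 2D classifier with a 5x5 first-layer kernel and a 65536-long
# flattened vector; the filter count (64) and pooling are assumptions, not the
# authors' published architecture.
import torch
import torch.nn as nn

class StageClassifier2D(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, padding=2),  # 5x5 kernel, as in the text
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64 x 32 x 32 = 65536 values
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(65536, 420), nn.ReLU(),            # first MLP hidden layer
            nn.Linear(420, 330), nn.ReLU(),              # second MLP hidden layer
            nn.Linear(330, n_classes),                   # one score per stage class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Sanity check: one 64 x 64 grayscale image yields 4 class scores.
print(StageClassifier2D()(torch.zeros(1, 1, 64, 64)).shape)  # torch.Size([1, 4])
```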
Network training and initial parameter optimisation
The parameter optimisation tool SigOpt was used to parametrise the 3D classifier. Given a set of parameter options, it runs the network with different parameter combinations while optimising a previously defined target metric. The metric used in this case was the test accuracy, that is, the accuracy of the classifier on a subset of images not seen during training. Each parameter set was trained for only 10 epochs; this suffices to highlight the best-performing networks while economising on the time spent training the network. The parameters to be optimised were the type of activation function used in each of the first two layers of the MLP region of the network, and the number of hidden units in those same layers. The activation functions considered were ReLU and Tanh, and the search ranges for the number of units were 400-700 for the first hidden layer and 300-500 for the second.
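SigOpt itself is a hosted optimisation service, so its API is not reproduced here. Purely to make the search space concrete, the hypothetical stand-in below performs a plain random search over the same ranges; the train_and_eval callable is assumed to train the network for 10 epochs with the given settings and return the test accuracy.

```python
# Random-search stand-in for the hyper-parameter search (not SigOpt's API).
import random

def random_search(train_and_eval, n_trials=20, seed=0):
    """train_and_eval(act1, act2, units1, units2) -> test accuracy after 10 epochs."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = (rng.choice(["relu", "tanh"]),   # activation, first MLP layer
                  rng.choice(["relu", "tanh"]),   # activation, second MLP layer
                  rng.randint(400, 700),          # hidden units, first layer
                  rng.randint(300, 500))          # hidden units, second layer
        accuracy = train_and_eval(*params)
        if best is None or accuracy > best[0]:
            best = (accuracy, params)
    return best  # (best test accuracy, corresponding parameter tuple)
```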
The outcome of the optimisation revealed that, for this task, the ReLU activation function was consistently superior to Tanh, with the highest test accuracy scores coming from networks that used a ReLU activation function in both layers (Table 5, conditions 5 and 9, gray and cyan rows). Comparing these conditions shows that the precise number of hidden units in each layer has comparatively little influence on the resulting test accuracy. We chose the parameters of condition 9, rounded to the nearest 10 hidden units, as these gave the highest test accuracy (58.33%), and re-trained this network with a significantly longer training time (100 or 20 epochs depending on the data; see Results section) to evaluate its maximum performance.
As with the 3D network, we used an initial coarse-grained parameter optimisation strategy based on the SigOpt algorithm to find the best-performing parameter combination for the 2D network, before training the selected network for longer. We decided to use ReLU activation functions in order to keep the 3D and 2D networks as similar, and therefore as comparable, as possible; consequently, only the number of units in the hidden layers was optimised.
The optimisation process yielded a best configuration with a hidden unit count of 420 and 330 for the first and second layers, respectively (Table 6, condition 7, cyan row).
Code and computer specifications
The code for this project was written in Python (Python Software Foundation), making use of the large ecosystem of data manipulation tools and libraries available therein. The machine learning model development was set up using PyTorch, a Python API of the Torch ML framework. PyTorch allows operations to be performed on a graphics processing unit (GPU), which parallelises the large mathematical operations and significantly reduces computation time. The GPU used in this project was an Nvidia RTX 2080 Ti. All the code used in this project is available at: https://github.com/ajrp2/ss_classifier.
Table 5. Parameter optimisation results for the 3D CNN. The activation functions and hidden units refer to the parameters used in the first and second layers of the MLP part of the network, respectively. Test accuracy was obtained on a test data set of 24 images unseen by the network during training. | 2020-07-16T09:04:47.829Z | 2020-07-15T00:00:00.000 | {
"year": 2021,
"sha1": "5935028bf6d3b3d16ef22232870acad41d0e7b86",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0244151&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ece2fa3f4674b360a7426ee98fa8d09a2142b1fc",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science",
"Biology"
]
} |
16602839 | pes2o/s2orc | v3-fos-license | Super Yang-Mills on the lattice with domain wall fermions
The dynamical N=1, SU(2) Super Yang-Mills theory is studied on the lattice using a new lattice fermion regulator, domain wall fermions. This formulation even at non-zero lattice spacing does not require fine-tuning, has improved chiral properties and can produce topological zero-mode phenomena. Numerical simulations of the full theory on lattices with the topology of a torus indicate the formation of a gluino condensate which is sustained at the chiral limit. The condensate is non-zero even for small volume and small supersymmetry breaking mass where zero mode effects due to gauge fields with fractional topological charge appear to play a role.
Introduction
It is believed that super-symmetric (SUSY) field theories may play an important role in describing the physics beyond the Standard Model. Non-perturbative studies of these theories are of great interest. First-principles numerical simulations may be able to provide additional information and confirmation of existing analytical calculations. Typically first principles numerical simulations of field theories are done within the framework of the lattice regulator. A host of results have been produced in this way for many field theories and most notably QCD. Several SUSY theories can also be formulated on the lattice and be studied numerically. To be more specific consider the problems of putting a SUSY theory on the lattice (see for example [1,2,3]): 1) Since space-time is discrete only a subgroup of the Poincaré group survives and as a result SUSY is broken. This problem is not severe and is of the same nature as in QCD. The symmetry breaking operators that are allowed by the remaining symmetries are irrelevant. One can calculate at several lattice spacings a and then take the a → 0 limit. No fine tuning is needed.
2) If the SUSY theory under consideration involves scalar fields one can have scalar mass terms that break SUSY since typically they are not forbidden by some symmetry. Since these operators are relevant fine tuning will be needed in order to cancel their contributions. The four-dimensional N = 1 Super Yang-Mills (SYM) theory does not involve scalars and therefore it does not have this problem.
3) A naive regularization of fermions results in the well known doubling problem [4]. For each fermion species in the four-dimensional continuum 16 are generated on the lattice with total chirality of zero. This results in the wrong number of degrees of freedom and therefore breaks SUSY. However, this problem may be possible to treat as in QCD. This is the case for N = 1 SYM.
One possible way to remove the unwanted fermion degrees of freedom is to add an irrelevant operator (Wilson term [5]) that gives them heavy masses of the size of the cutoff. This term unavoidably breaks the chiral symmetry [4] and as a result a gluino mass term is no longer forbidden. Since such a term is relevant, fine tuning of the bare fermion mass is necessary as the continuum limit is approached in order to cancel its contribution. Although fine tuning is not a welcomed property this method makes it possible to recover the continuum target theory.
Therefore, it is possible to simulate numerically the N = 1 SYM theory using existing lattice "technology" since all three difficulties can be circumvented. This observation was made some time ago [1]. In particular, it was argued that, using a standard lattice gauge theory action with a pure gauge Wilson plaquette term and Wilson fermions in the adjoint representation, numerical simulations could be done. Pioneering work using these methods has already produced very interesting numerical results [6,7]. Also, for proposed lattice tests of SYM see [8]. For a supersymmetric formulation on the lattice using Kogut-Susskind [9] fermions see [10].
There are two unwelcomed difficulties in using Wilson fermions. The first has already been mentioned and it is the need for fine tuning. The second is of a technical nature. It turns out that the Pfaffian resulting from the fermionic integration is not positive definite [6] at finite lattice spacing. However, it does become positive definite as the continuum limit is approached and therefore as a "cure" only the absolute value of the Pfaffian is used [6,7]. However, this introduces non-analyticities that may make the approach to the continuum limit difficult.
Both of these difficulties can be brought under control by using an alternative fermion lattice regulator, domain wall fermions (DWF). The use of DWF in supersymmetric theories has been explored in the very nice work of [2,3]. The methods in this paper are along the lines of these references. Domain wall fermions were introduced in [11], were further developed in [12] and in [13,14]. They provide a new way for treating the unwelcomed chiral symmetry breaking that is introduced when the fermion doubler species are removed. Here a variant of this approach will be used [13,14]. For reviews on the subject please see [15] and references therein. DWF have already been used for numerical simulations of the two flavor dynamical Schwinger model [16], dynamical QCD [17], quenched QCD [18,19,20,21,22,23], as well as for simulations of 4-Fermi models [24]. The use of DWF in supersymmetric theories has also been explored in a different fashion in [25,26]. Furthermore, the use of overlap [12] type fermions has been explored in [12,27,28], and the use of other related types of fermions has been explored in [29,30].
In the lattice DWF formulation of a vector-like theory the fermionic fields are defined on a five dimensional space-time lattice using a local action. The fifth direction can be thought of as an extra space-time dimension or as a new internal flavor space. The gauge fields are introduced in the standard way in the four dimensional space-time and are coupled to the extra fermion degree of freedom in a diagonal fashion. The key ingredient is that the boundary conditions of the Dirac operator along the fifth direction are taken to be free. As a result, although all fermions are heavy, two chiral, exponentially bound surface states appear on the boundaries (domain walls) with the plus chirality localized on one wall and the minus chirality on the other. The two chiralities mix only by an amount that is exponentially small in L s , where L s is the number of lattice sites along the fifth direction, and form a Dirac spinor that propagates in the four-dimensional space-time with an exponentially small mass. Therefore, the amount of chiral symmetry breaking that is artificially induced by the regulator can be controlled by the new parameter L s . In the L s → ∞ limit the chiral symmetry is exact, even at finite lattice spacing, so there is no need for fine-tuning.
For the first time the approach to the chiral limit has been separated from the approach to the continuum limit. Furthermore, the computing requirement is linear in L s . This is to be contrasted with traditional lattice fermion regulators where the chiral limit is approached only as the continuum limit is taken, a process that is achieved at a large computing cost. Specifically, for algorithmic reasons, the computing cost to reduce the lattice spacing by a factor of two grows by a factor of 2^{8-10} in four dimensions. Therefore, the unique properties of DWF provide a way to bring under control the systematic chiral symmetry breaking effects using today's supercomputers.
The purpose of this paper is two-fold. First, the techniques for performing a numerical simulation of the full N = 1 SU(2) SYM theory using DWF are collected and it is demonstrated that they work as expected by performing numerical simulations of the full theory. Second, the gluino condensate is measured. It is expected that a non-zero gluino condensate must form [31,32,33,34]. However, there are also arguments that the theory has a phase where a gluino condensate does not form [35]. In the numerical simulations performed here it is found that a non-zero gluino condensate is sustained in the limit of zero gluino mass. This result is at a finite lattice spacing and therefore SUSY is still broken albeit by irrelevant operators.
It must be emphasized that in this work due to limited computer resources no attempt has been made to extrapolate to the continuum limit. It is possible that in this limit the gluino condensate may vanish. Future work using larger computer resources could calculate the gluino condensate at several lattice spacings and extract the continuum value. But even then, it will never be possible to numerically prove that the finite lattice spacing theory is not separated from the continuum theory by a phase transition. This problem is not particular to the case at hand but is in the nature of numerical investigations. They can provide strong evidence but not unquestionable proof. A well known case with similar problems relates to the question of confinement and chiral symmetry breaking in QCD.
This paper is organized as follows. In section 2, the DWF lattice formulation of N = 1 SU(2) SYM is presented. In section 3, analytical considerations relating to the gluino mass, the Ward identities and the effects of topology in the patterns of chiral symmetry breaking are given. The numerical methods used in the simulations are discussed in section 4. The numerical results are presented in section 5 and the paper is concluded in section 6.
Lattice formulation
In this section, the N = 1, SU(2) SYM lattice action and operators are presented. The approach is similar in spirit as to the case of Wilson fermions [1,6,7]. The DWF formulation for this theory is identical to [2] and [3]. It is presented below for the convenience of the reader and in order to establish notation.
The N = 1, SU(2) SYM theory is an SU(2) gauge theory with Majorana fermions in the adjoint representation. As such, the fermionic path integral results in the analytic square root of the corresponding Dirac determinant. This then is the Pfaffian of an antisymmetric matrix that has the same determinant as the Dirac operator. On the lattice, the Dirac operator can be defined using Wilson's approach as in [1,6,7] or the DWF approach as in [2] and [3].
The partition function is: U µ (x), µ = 1, 2, 3, 4 is the four-dimensional gauge field in the fundamental representation, Ψ(x, s) is a (real) five-dimensional Majorana field in the adjoint representation and Φ(x, s) is a (real) five-dimensional bosonic Pauli Villars (PV) type field with the same indices as the Majorana field. x is the coordinate in the four-dimensional space-time box with extent L along each of the four directions. The boundary conditions along these directions are taken to be periodic for all fields. The coordinate of the fifth direction is s = 0, 1, . . . , L s −1, where L s is the size of that direction and is taken to be an even number. The action S is given by: S G (U) is the pure gauge part and is defined using the standard single plaquette action of Wilson: where β = 4/g 2 and g is the gauge coupling. The fermion part S F (Ψ, U) is given by: where D F is the DWF Dirac operator in the form of [14]. Specifically it is: where V is the gauge field in the adjoint representation. It is related to the field in the fundamental representation by (see for example [6]): and where T a = 1 2 σ a with σ a the Pauli matrices. In the above equations m 0 is a five-dimensional mass representing the "height" of the domain wall and it controls the number of light flavors in the theory. In order to get one light species in the free theory one must set 0 < m 0 < 2 [11]. The parameter m f explicitly mixes the two chiralities and as a result it controls the bare fermion mass of the four-dimensional effective theory. The dependence of the bare fermion mass on m 0 and L s is discussed in section 3.1.
The fermion field Ψ is not independent but is related to Ψ by the equivalent of the Majorana condition for this 5-dimensional theory [3]: where R 5 is a reflection operator along the fifth direction and C the charge conjugation operator in Eucledean space which can be set to: Therefore, the fermion action can also be written as: where is an antisymmetric matrix as can be easily checked [2]. As a result the fermionic integral gives the anticipated Pfaffian: Because det(CR 5 ) = 1 one also has that det(M F ) = det(D F ) and therefore: The Pauli-Villars action S P V is designed to cancel the contribution of the heavy fermions [12]. Viewing the extra dimension as an internal flavor space [12] one can see that there are L s − 1 heavy fermions with masses near the cutoff and one light fermion. The PV subtraction subtracts the L s heavy particles. As was pointed in [2] this amounts to a "double" regularization of the light degree of freedom, first by the lattice and then by the PV field. The form of the PV subtraction used here is as in [16] and is given by: The integral over the PV fields results in: Green functions in this work are measured using four-dimensional fermion fields constructed from five-dimensional fermion fields using the projection prescription [14]: In the L s → ∞ limit of the theory these operators directly correspond to insertions in the overlap of appropriate creation and annihilation operators [12]. Using eq. 11 and 19 the Majorana condition on the four-dimensional fermion field is: Because this is the correct condition for a four-dimensional field one can see that the definition in eq. 11 not only produces an antisymmetric fermion matrix M F but is also consistent with the projection prescription in eq. 19 as expected.
Analytical Considerations
In this section some analytical considerations are presented. In the N = 1 SYM theory, a gluino mass term is the only relevant operator that can break supersymmetry and is also the only relevant operator that can break (at the classical level) the U(1) A symmetry. Therefore, the two symmetries are intimately related to the mechanisms that can introduce a bare gluino mass term. These mechanisms depend on the "extra" regulator parameters m 0 and L s . This is discussed below. Next the fate of the U(1) A chiral symmetry and the effects of topology are presented. The chiral and supersymmetric Ward identities are derived in the last subsection.
The "extra" DWF parameters
DWF introduce two extra parameters, the size of the fifth direction L s and the domain wall height or five-dimensional mass m 0 . These two parameters together with the explicit mass m f control the bare fermion mass m eff . In the free theory one finds [16]: In the interacting theory one would expect that m 0 as well as its range of values will be renormalized. From the above equation one can see that for the free theory the value of m 0 = 1 is optimal in the sense that finite L s effects do not contribute to m eff . In the interacting theory one would expect that there is no such "optimal value" since, in a heuristic sense, m 0 will fluctuate. For a more detailed analysis please see [36]. Then one would like L s to be large enough so that the second term in eq. 21 will be small allowing for simulations at reasonably small masses and/or for dependable extrapolations to the m f → 0, L s → ∞ limit. The effects of finite L s on the chiral symmetry can be best understood in the overlap formalism [12]. In that formalism a transfer matrix T along the extra direction is constructed. Because the gauge fields are not changing along that direction the product of transfer matrices simply results in T Ls . For L s = ∞ this is a projection operator that projects the reference vacuum state to a ground state. The fermion determinant is then the overlap of the reference vacuum state with that ground state. In [12] it was shown that, as a lattice gauge field configuration changes, from say the zero topological sector to sector one, an eigenvalue (or a degenerate set of eigenvalues) of the corresponding Hamiltonian H changes sign. As a result, the filling level of the ground state becomes different from that of the reference vacuum state. Then the overlap is zero indicating the presence of an exact zero mode. This remarkable property is maintained to a good degree even at finite L s as was found in [20]. Unfortunately, this property is also the reason for most of the difficulties with DWF. As the eigenvalue of the Hamiltonian H changes sign it crosses zero. In such a configuration the transfer matrix has an eigenvalue equal to one and therefore even at L s = ∞ there is no decay along the extra direction, the two chiralities do not decouple, and chiral symmetry can not be restored. Fortunately, configurations for which H has an exact zero eigenvalue (for a given m 0 ) are of measure zero [12,14] and therefore are of no consequence. However, configurations in their neighborhood are not of measure zero and such configurations will exhibit very slow decay rates. Therefore, in order to restore chiral symmetry, very large values of L s may be needed. Since one would expect that the neighborhoods of such configurations are suppressed closer to the continuum limit this problem should become less severe as that limit is taken. This has been observed in numerical simulations of the Schwinger model [16], of full QCD [17], and of quenched QCD [18,19,22,23].
In the region where it makes sense to parameterize these effects by a residual mass in an effective action it has been found that: where for dynamical QCD at the currently accessible lattice spacings the decay is found to be c 2 ≈ 0.02 [17]. For quenched QCD the situation is better because current computing resources can simulate lattices with smaller lattice spacing. There, a value of c 2 ≈ 0.1 is found [18,19,22]. Also in these studies the value of c 2 was a weakly changing function of m 0 indicating that for practical purposes there is no optimal value of m 0 .
In the case of the N = 1 SYM SU(2) theory the Hamiltonian corresponding to the five dimensional transfer matrix has eigenvalues that are doubly degenerate because the fermion fields are in the adjoint representation [28]. Therefore when there is a "topology" change two eigenvalues will have to cross through zero (as compared to one for fundamental fermions). This may make this theory harder to study than QCD in the sense that larger L s values may be required. On the other hand, since no massless Goldstone particles are expected, the sensitivity of the spectrum on L s may be considerably milder. In any case, in this paper the only fermionic observable that will be discussed is the gluino condensate. This quantity is known to approach its L s → ∞ limit with faster decay rates than the ones in m res (for a discussion and results for full QCD see [17]; there the decay rate for the chiral condensate was about five times faster than that for m res ).
As was discussed above, the range of m 0 is renormalized by the interactions. It has been found that as the lattice spacing increases and one moves away from the continuum limit this range shrinks in size and for currently accessible spacings in QCD that range is about [1.4, 2.0]. As one moves even farther away from the continuum limit this range can shrink to zero and then it will not be possible to have light DWF modes [2,39]. However, it must be emphasized that for as long as the range of allowed values of m 0 is not of zero size the overlap formalism, although it does not specify how it is approached, guarantees the existence of the L s → ∞ limit. In this work, m 0 = 1.9 and, as it will be shown in section 5, the behavior of the gluino condensate vs. L s is consistent with an exponential ansatz.
Chiral symmetry and topology
Fermions in the adjoint representation of the SU(N) gauge group have a Dirac operator with index 2Nν (eq. 23), where ν is the winding of the background field configuration. Classical instantons have integer winding and they cause condensation of operators with 2N Majorana fermions.
This results in the breaking of the U(1) A chiral symmetry down to the Z 2N symmetry by the corresponding anomaly. The remaining Z 2N symmetry may break spontaneously down to Z 2 [31]. Mechanisms for this further breaking have been explored for example in [32,33,34] where instantons and fractionally charged objects such as torons [37] or caloron monopole constituents [38] were investigated as the source of this symmetry breaking. Since in a toroidal geometry fractional winding numbers are possible [37], the partition function of the full theory can be expressed as: where θ is the vacuum angle and Z ν is the partition function on the sector with winding ν.
For the theory with a soft breaking by a mass m f the interplay of the volume and mass in the formation of the gluino condensate has been analyzed in [40]. The reader is referred to that reference for a very nice presentation on the subject. Assuming a mass gap is present in the theory, the authors of [40] show that when m f × V × χχ is small the non-zero contributions to the gluino condensate χχ come almost exclusively from the sectors with fractional winding ν = ±1/2. The above considerations result in an unusual picture. If the infinite volume limit is taken (followed by the massless limit) it is possible that a gluino condensate will form due to spontaneous breaking of the discrete symmetry Z 2N down to Z 2 . On the other hand, at a finite volume and zero mass a gluino condensate can form due to the presence of fractional winding configurations. Since the volume is finite, this can not be the result of spontaneous symmetry breaking. Instead, it is similar to symmetry breaking due to topological effects as, for example, in one flavor QCD. As pointed out above, the size of m f × V × χχ controls which "scenario" takes place.
On the lattice there is no clear definition of topology. The path integral over the SU(N) group space generates configurations of all possible windings. In order for the lattice theory to be able to reproduce phenomena that relate to topology it is essential that the lattice Dirac operator obeys the index theorem in a statistical sense. This is highly non-trivial since it is obviously related to the doubling problem. Traditional fermions (Wilson or staggered) do not exhibit exact zero modes at finite lattice spacing. On the other hand, as mentioned in section 3.1, DWF at L s = ∞ have exact zero modes and at finite L s have robust zero modes to a good approximation [20]. An approximate form of the index theorem has been found to be obeyed for fundamental fermions in the overlap formulation in quenched SU(2) [41] and in quenched SU(3) [42].
The index of adjoint fermions in the overlap formulation in quenched SU(2) has been studied in [28]. In that work it was pointed out that the index of the overlap Dirac operator for adjoint fermions in the SU(2) gauge group is necessarily even-valued. The question posed by the authors of [28] was then whether all even values are realized or only values that are multiples of four are present. The latter case corresponds to configurations with instantons. The former case corresponds to fractional winding numbers. Configurations with fractional winding were found and their presence persisted as the lattice spacing was decreased. In this paper DWF are used at finite L s and therefore some of the clarity present in the L s = ∞ case will be lost. However, the full theory (including the fermion determinant) is studied here. Furthermore, it is interesting to see if at a small volume and zero mass the gluino condensate still forms and, if it does, to what extent its value is due to zero mode effects. The numerical results are presented in section 5.
Ward identities
As discussed in the introduction and in section 3, the DWF formulation of the N = 1 SU(2) SYM theory at the L s → ∞ limit is expected to preserve the U(1) A chiral symmetry (at the classical level) and break supersymmetry only by irrelevant operators. Since the DWF formulation contains many more fields than the continuum theory, one may naturally wonder what are the SUSY transformations in terms of these fields. In particular, while the continuum theory has a single Majorana fermion the DWF lattice theory contains L s Majorana fermions and L s corresponding PV fields. Since all these fields, except for one Majorana fermion, have masses near the cutoff, one can expect that the SUSY transformations should only transform the gauge field and the light Majorana fermion represented by the boundary field χ of eq. 19. Similarly, the chiral symmetry transformations should only involve the field χ. However, one should expect that this choice of SUSY and chiral transformations is not unique. For example, see [14] for a different choice of QCD chiral transformations that involve all fermion fields in one half of the fifth direction transforming vectorially and all fermions in the other half also transforming vectorially but with opposite charge. That choice could also be appropriate here for the chiral transformations, but it may make the SUSY ones more complicated.
As a first step in deriving the Ward identities, the fermionic part of the action in eq. 4 is rewritten in terms of the boundary field χ: where S F 0 does not depend on the field χ and where are the "wrong" projected fields in the sense that they are defined on the opposite wall from where the corresponding light mode is localized. If indeed there is localization one would expect that in the L s → ∞ limit these fields will have no overlap with the light mode. The operator D / N is the naive part of the four-dimensional Wilson operator in eq. 6 and B is the symmetry breaking part (B is the equivalent of B in [12,14]): These operators have the following properties: First the Ward identity corresponding to the U(1) A symmetry is derived. The symmetry transformations are: where α(x) is an infinitesimal real number and δ A symbolizes the change under the chiral transformation. Then the Ward identity is: where the backward difference is defined as . The currents are: If in the above Ward identity O(y) = J 5 (y) one gets In this identity the term with J B will be responsible for producing the ABJ anomaly in the L s → ∞ limit. On the other hand, if L s is kept finite this term is similar to the one for Wilson fermions which, besides producing the ABJ anomaly, also produces a mass redefinition. For an analysis of QCD with DWF at finite L s see [19]. As mentioned earlier these chiral transformations are different than the ones in [14]. If the transformations relevant for a non-singlet current in QCD were done on the fields χ, χ, one obtains a Ward identity exactly as in [14] but with the currents A a µ (x) and J a 5q (x) replaced with: J a 5q (x) = 1 2 y χ(x)γ 5 λ a B(x, y)φ(y) + φ(y)γ 5 λ a B(y, x)χ(x) .
The derivation of the SUSY Ward identity is similar to the one for Wilson fermions. One can use the existing calculations for Wilson fermions [1,7,43] to elucidate the differences between the two formalisms. Here the derivation in [43] will be followed. The symmetry transformations are as in [43] and commute with parity.
The change of the pure gauge action with respect to the transformation of the gauge field is of course the same. In terms of the symmetry breaking part of the Ward identity it contributes a term denoted below by X 2 (x) + X 3 (x) where X 2 , X 3 are as in [43]. This term breaks SUSY because of the explicit breaking of the Lorentz symmetry. Using improved pure gauge lattice actions can alleviate the effects of this breaking. Such an improvement is not considered here.
The change of the fermion and Pauli-Villars parts of the DWF action with respect to the transformation of the gauge fields produces terms for all L s slices. In particular the variation of the fermion matrix D F of eq. 5 with respect to the gauge field is: One sees that δD F is independent of m f and is diagonal in the fifth direction. Furthermore δD /(x, x ′ ) is the same as the variation of the Wilson operator. Therefore, this variation contributes to the symmetry breaking part of the Ward identity the terms: and where X F 4 (x, s) is as X 4 in [43] except that the four-dimensional Wilson fermion fields that have their spin indices contracted are replaced by Ψ(x, s), Ψ(x, s) while the other Wilson fermion field is replaced by χ(x). Similarly X P V 4 (x, s) is as X 4 in [43] except that the fourdimensional Wilson fermion fields that have their spin indices contracted are replaced by the Pauli-Villars fields Φ T (x, s)CR 5 , Φ(x, s), the other Wilson fermion field is replaced by χ(x) and the sign of the second term in X 4 is minus instead of plus due to the commutativity of the Pauli-Villars fields.
The change of the action with respect to the fermion field transformations can be partially deduced from the corresponding Wilson fermion calculation. Since this transformation only involves the action S Fχ in eq. 26, one can observe that the first two terms of that action are identical with the action of naive fermions (Wilson fermions with r = 0). These will contribute identical terms as the r = 0 part of the Wilson action. They contribute to the divergence of the SUSY current and to the mass term of the Ward identity given below. Finally, the transformation of the last term of the action S Fχ in eq. 26 is easy to calculate and is denoted by X 1 (x): This term is closely related to X 1 of [43]. The Ward identity is: where the supersymmetric current S µ and the quantity D S are as in [43]. The symmetry breaking term X S (x) is also similar to the one in [43]: As mentioned above the symmetry breaking term X 2 (x) + X 3 (x) is due to the breaking of Lorentz symmetry by the lattice. The X F 4 (x) and X P V 4 (x) terms break the symmetry as in Wilson fermions. These terms do not cancel each other exactly 1 . However, one would expect large cancellations of heavy modes. The terms in X F 4 (x, s) that are proportional to the Wilson parameter involve fields that couple to the light modes by an amount that is exponentially small in L s . One would expect these terms to be nearly canceled by the corresponding Pauli-Villars terms resulting in exponentially small contributions. The remaining terms that involve fields away from the relevant domain walls should also yield similar cancellations. As a result the only terms that should make significant contributions should be the ones that involve fields of the "correct" chirality near the domain walls. These few terms would couple to the light modes and be further regularized by the corresponding Pauli-Villars terms. Clearly this analysis of cancellations is heuristic. A detailed calculation using for example perturbation theory or transfer matrix methods would be interesting but it is beyond the scope of this paper.
Finally, the symmetry breaking term X 1 (x) involves the field φ(x) that is expected to have no overlap with the light mode in the L s → ∞ limit. If L s is finite then DWF are similar to Wilson fermions and an analysis as in [1] should indicate that this term is responsible for the same mass redefinition as the one in the chiral Ward identity.
Numerical methods
As can be seen from section 2, the N = 1 SU(2) SYM theory can be simulated as a theory with 0.5 flavors of Dirac fermions in the adjoint representation. An efficient and popular algorithm that can be used to simulate any number of flavors is the hybrid molecular dynamics R (HMDR) algorithm of [44]. Because of the Grassmann nature of fermions these algorithms need to invert the matrix D F of eq. 5. That matrix is not Hermitian. This is a problem since some of the more efficient inversion algorithms require the matrix to be Hermitian. However, because: Then one can invert the Hermitian matrix D F D † F and then use the HMDR algorithm to take the appropriate power so that the desired number of flavors is simulated. This method is adopted here and the 0.25 power is taken in order to go from a theory with two Dirac fermions to a theory with one Majorana fermion. In other words, the fermion determinant that is used in the simulation is: where in the last equality use was made of the fact that for non-negative m f det[D F ] is also non-negative [2]. This approach was also taken in [7] for Wilson fermions. For an approach that uses Wilson fermions and the multibosonic algorithm [45] instead of the HMDR algorithm see [6]. However, as mentioned earlier, in the case of Wilson fermions the last equality in eq. 47 is not true for all gauge field configurations. The HMDR algorithm uses molecular dynamics methods in order to produce the correct statistical ensembles. Because the molecular dynamics step size δτ is finite, discretization errors are introduced. There are two ways one can deal with this problem. One is to simulate at various values of δτ and then extrapolate to δτ = 0. Another method is to use δτ small enough so that the errors are negligible when compared with the statistical errors.
1 We thank Y. Shamir for pointing this out to us.
In order to ensure this, one can simulate the two Dirac flavor theory at the same parameters and same δτ . For the two flavor theory, one has a local action and therefore, at the end of the evolution, one can employ a Metropolis accept-reject step. Then the finite δτ errors are "converted" to a non-ideal acceptance rate and in effect they are reflected in the final statistical errors. This is the exact hybrid Monte Carlo Φ (HMCΦ) algorithm of [46,44]. Therefore the acceptance rate is an indication of the size of the finite δτ errors in the HMD integration. By simulating the two Dirac flavor theory with (HMCΦ) one can set δτ so that the acceptance rate is high, say ≈ 90%. Since the coefficient of the finite δτ errors is proportional to the number of flavors one would expect that for 0.25 flavors the errors would be small and at the few percent level.
The only fermion observable measured in this work is the gluino condensate. By inserting appropriate source terms as in [7], the gluino condensate was measured as the trace of D −1 F with spin and fifth-direction indices restricted as dictated by eq. 19. The trace was calculated using a standard stochastic method. All inversions in this work were done using the conjugate gradient (CG) algorithm. An even-odd preconditioned form of the matrix D † F D F was inverted. For more details on the numerical algorithms and methods employed in DWF simulations see [16,17].
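As an aside for readers unfamiliar with the technique, the toy NumPy sketch below illustrates the generic idea of estimating the trace of an inverse operator with noise vectors. It is not the lattice code: the real measurement acts on the five-dimensional DWF operator with the spin and fifth-direction projections of eq. 19 and uses an even-odd preconditioned CG solver, whereas here a small dense Hermitian positive-definite matrix and Z_2 noise (one common choice) stand in for illustration.

```python
# Toy stochastic estimate of tr(A^{-1}) with Z_2 noise (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)              # Hermitian positive-definite stand-in

def stochastic_trace_inverse(A, n_hits=50):
    """Average eta^T A^{-1} eta over random Z_2 noise vectors eta."""
    estimate = 0.0
    for _ in range(n_hits):
        eta = rng.choice([-1.0, 1.0], size=A.shape[0])  # Z_2 noise vector
        x = np.linalg.solve(A, eta)                     # in the lattice code: CG solve
        estimate += eta @ x
    return estimate / n_hits

# The noisy estimate should be statistically consistent with the exact trace.
print(stochastic_trace_inverse(A), np.trace(np.linalg.inv(A)))
```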
Simulation parameters
In all simulations the domain wall height was chosen to be m 0 = 1.9. As mentioned in the previous section, the finite δτ errors were kept to the few percent level by using a small δτ . For all simulations the step size was set to δτ = 0.01 and the trajectory length to τ = 0.5. In order to confirm that this choice introduces finite step size errors that are small compared to the statistical errors an HMCΦ simulation for two Dirac flavors was run for L s = 12 and m f = 0.04. It produced an acceptance rate of ≈ 90% suggesting that the finite δτ errors of the 0.5 flavor theory are small. Furthermore, an HMDR simulation was also run for two Dirac flavors using the exact same parameters. The value of the gluino condensate obtained from these two simulations was the same within statistical errors.
The CG stopping condition for all simulations was set to 10 −6 for the evolution and to 10 −8 for the calculation of χχ. The number of CG iterations varied between ≈ 100 for m f = 0.08, L s = 12 and 250 for m f = 0.0, L s = 24.
The 8 4 volume simulations were done with β = 2.3. The value of β was chosen so that one is not close to the point where the box size becomes too small and a thermal transition takes place, but also not too deep in the strong coupling regime where the finite L s effects become severe. The transition point of the N t = 8 quenched theory is at β = 2.5115(40) [47]. In figure 1 the magnitude of the fundamental Wilson line |W | measured in quenched simulations in an 8 4 volume is plotted vs. β. In the quenched theory this is an order parameter. As can be seen from that figure, a rapid crossover takes place around β = 2.5. In the same figure the value of |W | from a simulation of the dynamical theory at β = 2.3 is also shown (cross). The quenched and dynamical values are very similar indicating that at β = 2.3 the dynamical theory is in a phase that "confines" fundamental sources. Therefore, the box size is large enough to avoid finite temperature effects that would of course spoil SUSY. Using the quenched theory as a guide the 4 4 simulations were done at β = 2.1 since the quenched transition at N t = 4 is known to take place at β = 2.2986(6) [47]. At β = 2.1 the lattice spacing is larger than at β = 2.3. However, the lattice sizes are small and do not allow a reliable measurement of the lattice spacing. According to [48], β = 2.1 − 2.3 is in the beginning of the weak coupling regime. Then if one uses the weak coupling form in [48] one finds that the lattice spacing at β = 2.1 is about a factor of two larger than the one at β = 2.3.
In order to estimate the necessary number of thermalization sweeps two simulations were run on an 8 4 lattice at β = 2.3, L s = 12 and m f = 0.04. The first simulation used an initial configuration with all gauge links set to the identity (ordered) and the other used an initially random configuration (dis-ordered). The evolutions in "computer time" are shown in figure 2. As can be seen, the two ensembles converged after about 100 sweeps. This number of thermalization sweeps was then used in all other simulations which were started from an ordered initial configuration. The number of measurements after thermalization for all simulations is about 200 with measurements done in every trajectory. The gluino condensate was measured with a single "hit" stochastic estimator.
The gluino condensate at the chiral limit
In order to be able to extrapolate to the chiral limit, corresponding to L s → ∞ and m f = 0, the mass m f and the size of the fifth direction L s were varied. The results of all simulations are given in tables 1 and 2 in the appendix. Three different methods were used to analyze the data and calculate the gluino condensate in the chiral limit.
I. For fixed L s , the data for m f = 0.08, 0.06, 0.04, 0.02 were fit to a function: This functional form is valid provided m f is small enough. Otherwise, higher order terms must also be included. The data and fits are shown in figure 3 and the results of the fits are given in table 3. Then the extrapolated values b 0 were fit to a form: This functional form is approximate but it is expected to be valid close enough to the continuum and has been found to be consistent in simulations of the Schwinger model [16] and of QCD even at relatively large lattice spacings (see for example [17]). The data and fit are shown in figure 4 and the results of the fit are given in table 4. II. For fixed m f the data for L s = 12, 16, 20, 24 were fit to the form of eq. 49. The data and fits are shown in figure 5 and the results of the fit are given in table 4. Then the extrapolated values c 0 were fit to the form of eq. 48. The data and fit are shown in figure 6 and the results of the fit are in table 3. III. Additional simulations were done for m f = 0 and L s = 12, 16, 20, 24. The data were fit to the form of eq. 49. The data and fits are shown in figure 7 and the results of the fit are in table 4.
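Since the explicit fit functions (eqs. 48 and 49) are not reproduced here, the sketch below illustrates the two-step extrapolation of method I with assumed forms: a dependence linear in m_f and an L_s dependence of the form c_0 + c_1 exp(-c_2 L_s), which is consistent with the surrounding discussion but should not be read as the paper's exact expressions. The data are synthetic stand-ins; the real measurements are listed in tables 1 and 2.

```python
# Hedged sketch of the chiral extrapolation (method I); forms and data are assumed.
import numpy as np
from scipy.optimize import curve_fit

def eq48(mf, b0, b1):                 # assumed m_f dependence: linear
    return b0 + b1 * mf

def eq49(Ls, c0, c1, c2):             # assumed L_s dependence: c0 + c1*exp(-c2*Ls)
    return c0 + c1 * np.exp(-c2 * Ls)

mf = np.array([0.08, 0.06, 0.04, 0.02])
Ls = np.array([12, 16, 20, 24])
data = {l: eq49(l, 0.004, 0.01, 0.15) + 0.06 * mf for l in Ls}   # synthetic stand-in

# Step 1: for each L_s, extrapolate to m_f = 0 (the fitted intercept b0).
b0 = np.array([curve_fit(eq48, mf, data[l])[0][0] for l in Ls])

# Step 2: extrapolate the b0 values to L_s -> infinity (the fitted constant c0).
(c0, c1, c2), _ = curve_fit(eq49, Ls, b0, p0=(0.004, 0.01, 0.1))
print("chiral-limit estimate of the condensate:", c0)
```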
The m f → 0 and L s → ∞ extrapolated values of the gluino condensate for each one of the above three methods are summarized in table 5. As can be seen, all values are consistent within the statistical errors. This suggests that the systematic errors inherent to the limited statistics and to fits onto functions that represent the data only for a limited range are small. Furthermore, it suggests that the fitting functions used are consistent (please see subsection 5.4 for more discussion on the validity of these fitting functions).
The telltale signals of topology in numerical simulations
In order to investigate the issues discussed in section 3.2 the gluino condensate was also calculated in a smaller 4 4 lattice volume at β = 2.1. It was measured only for m f = 0 and method III above was used to extrapolate to the L s → ∞ limit. The data and fit are shown in figure 8 and the fit results are given in table 4. The 8 4 data from figure 7 are presented again in this figure to aid comparison. The value has decreased indicating that scaling is violated. However, without more simulations at other lattice spacings and volumes one can not conclude much from this result. The β = 2.1 coupling is in the strong coupling region and furthermore the 4 4 lattice volume is rather small.
However, it is interesting to notice that the parameter V × χχ Ls→∞ ≈ 8.4 (a factor of 12 coming from the normalization of χχ has been included). Since m f = 0 the effective mass m eff gets its value from finite L s effects. As L s is increased m eff becomes small. From analysis of m eff in strong coupling QCD [17] one would roughly guess that m eff < 0.1. Then [m eff × V × χχ Ls→∞ ] < 1. In that case, the analysis of [40] can be followed and one would expect the value of the condensate in the 4 4 lattice to be mostly supported by configurations with total winding of ±1/2. Indeed, this can be seen from figure 9. In that figure the evolutions in "computer time" are shown. The "spikes" in the evolution are apparent and they become more pronounced and less frequent as L s is increased (and in effect m eff is decreased). This is exactly how the effect of zero modes for small [m eff × V × χχ Ls→∞ ] would present itself in a numerical simulation of the dynamical theory. As the fermion mass is made smaller, χχ is expected to receive most of its value from sectors with winding ±1/2. However, in these sectors the fermion determinant is very small because of the zero mode. Since the probability for the algorithm to generate a gauge field configuration is proportional to the fermion determinant one would expect that these sectors will be visited less and less frequently as the effective mass is decreased. When these sectors are visited the value of χχ will be very large (spikes) in order to compensate for the infrequent sampling. In this way the presence of the zero mode in the observable "balances" the presence of the zero mode in the determinant. As the mass is made smaller one would have to increase the size of the statistical sample in order to include enough of these increasingly "rare" but very large fluctuations. For similar results in the Schwinger model and QCD see [16,20].
A histogram of the values of χχ is presented in figure 10 (solid line). For small L s the effective mass is larger and χχ is distributed with a symmetric looking distribution around the mean value. However, for L s = 40 the effective mass is smaller and the distribution has a more pronounced "tail" towards larger values. In order to investigate this further numerical simulations at exactly the same parameters, but without the fermion determinant (quenched theory) were done. The histograms from these simulations are shown in the same figure for comparison (dotted lines). One can observe that the absence of the fermion determinant had the effect of shifting the distributions to higher values. This is expected since configurations with small eigenvalues that produce larger values of χχ are not suppressed anymore and are produced more frequently. Also, one can observe that the number of configurations with χχ larger than ≈ 0.007 that appeared as spikes in figure 9 have now increased in number. These observations lend support to the presence of small near-zero eigenvalues. Furthermore, configurations with fractional topological charge have already been found in quenched SU(2) simulations at similar couplings [28]. It would be very interesting to calculate the index for the configurations of figure 9 using the methods of [28] and see to what extent there is a correlation between fractional topological charge and the observed spikes. This correlation should be exact at L s → ∞ but it will be obscured at finite L s by the presence of non-zero m eff . This investigation is beyond the scope of this work.
Furthermore, it should also be noted that on the 8 4 lattice there are no visible spikes up to L s = 24. This can be seen in figure 11. Presumably this is because the product [m eff × V × χχ Ls→∞ ] is probably much larger than in the 4 4 lattice. Again, this statement is not exact since the value of m eff was not measured.
These results are consistent with the discussion in section 3.2. However, since even with m f = 0, an L s extrapolation is essentially an extrapolation from non zero masses these results are not necessarily the results of a simulation at exactly L s = ∞. It is still possible that if such a simulation were done one could have found that the gluino condensate is zero. This could happen since in a finite volume and zero mass the effects of spontaneous symmetry breaking are absent and the zero mode effects alluded to above may not be sufficient to sustain a non zero vacuum expectation value. This type of simulation is possible and can be done using the overlap formalism [12] or exact Neuberger fermions [2]. However, if one is to maintain exact chiral symmetry these methods will demand large computing resources.
The fine print
Perhaps the largest uncertainty in the analysis presented in the previous subsections has to do with the assumption of exponential decay as in eq. 49. For small enough lattice spacings and large enough L s this behavior is expected to be true. All data presented in this work were well represented by this ansatz. However, as with any numerical investigation, one can never completely disprove all other possibilities. While such an exercise over all possible functions would clearly be fruitless there are some alternative forms that may be reasonable to consider since they are based on analytical considerations.
Far from the continuum limit, the approach to the chiral limit may become power law [42] or even completely disappear [2,39]. In order to explore the possibility of power law behavior the m f = 0 data for the 8 4 and 4 4 volumes were fit to the form: The results of the fit are given in table 6 (the fits are not presented in any of the figures).
As can be seen from that table the χ 2 /dof of these fits is significantly larger than that of the corresponding exponential fits to the same data. Another possibility is decay to zero with two different exponential decay rates. Such a behavior was found to be consistent with investigations of the two flavor Schwinger model [16] for a quantity that is expected to vanish in the chiral limit. There it was argued that the fast decay rate is due to fluctuations within a given topological sector while the slow decay rate is due to the presence of topology changing configurations. Therefore, for m f = 0 one could try to fit the largest three L s points to the form: The results of the fit for the 8 4 , β = 2.3, m f = 0, and L s = 16, 20, 24 points as well as for the 4 4 , β = 2.1, m f = 0, and L s = 24, 32, 40 points are shown in table 7. The fit to the 8 4 data is acceptable. However, the fit to the 4 4 data has a rather large χ 2 /dof. Because this fit is for larger L s than the 8 4 fit one would expect that if there were a second exponential decaying to zero its effect would be more pronounced in the 4 4 fit. Therefore, the large χ 2 /dof of the 4 4 fit suggests that the presence of a second exponential decaying to zero is not likely. This could be made more precise if simulations with larger L s values for the 8 4 and 4 4 lattices were done. However, such simulations are beyond the computing resources of this project. Also, the analysis in [36] suggests functional forms with more parameters. It would be interesting to fit to these forms but that would require more data points and better statistics, both of which are also beyond the computing resources of this project. Finally, the SUSY breaking by the irrelevant terms may have non-negligible effects at the lattice spacings studied here. Although it was found that the chiral condensate is non-zero at the chiral limit at two lattice spacings, this is not enough to estimate its value in the continuum limit.
Conclusions
The formulation of N = 1, SU(2) Super Yang-Mills theory on the lattice with domain wall fermions (DWF) has several advantages over more traditional lattice fermion regulators. Even at non-zero lattice spacing the chiral limit can be taken by letting L s → ∞, where L s is the number of sites along the fifth auxiliary direction. Since in that limit there is no gluino mass term, supersymmetry is broken only by irrelevant operators and there is no need for fine tuning. Also, in that limit the theory has exact zero modes on non-trivial topological backgrounds.
However, even at finite L s , where numerical simulations are done, these properties are maintained to a good degree allowing extrapolations to the L s → ∞ limit. Furthermore, the Pfaffian resulting from the integration of Majorana fermions is positive definite at finite L s , non-zero lattice spacing and for any background gauge field configuration. As a result, one can unambiguously interpret it as a probability measure to be used by the numerical simulation for importance sampling. This property also allows the use of standard numerical algorithms where any number of flavors N f can be simulated. By contrast, Wilson fermions have this positivity property only at the continuum limit.
In this work, the full N = 1, SU(2) Super Yang-Mills theory was numerically simulated on the lattice using DWF. The gluino condensate ⟨χ̄χ⟩ was measured. These simulations did not present any unexpected technical difficulties.
A finite value of L_s breaks chiral symmetry and induces a small gluino mass. In addition, an explicit gluino mass m_f was used to provide extra control. Several m_f and L_s values were used (all corresponding to positive gluino mass) and the value of ⟨χ̄χ⟩ was extrapolated to the chiral limit using three different methods. All methods gave consistent results, indicating small systematic effects and suggesting that the functions used for the fits are appropriate. These simulations were done on a lattice with 8^4 lattice sites.
Additional simulations on a lattice with 4^4 lattice sites but approximately double the lattice spacing were done. Again, extrapolations to the chiral limit gave a non-zero ⟨χ̄χ⟩. On this lattice [mass × volume × ⟨χ̄χ⟩] < 1, and analytical considerations then suggest that the value of ⟨χ̄χ⟩ must come mostly from topological sectors with fractional topological charge of ±1/2. Indeed, as the mass was made smaller, unusually large values (spikes) were observed in the statistical sample of ⟨χ̄χ⟩, indicating the singular contribution of these sectors.
The spectrum of the theory is of great interest, but it was not possible to measure it on the small lattices considered here. Also, the gluino condensate was measured only at two different lattice spacings and therefore it was not possible to extrapolate to the continuum limit, where comparisons with analytical results would be possible. Future work could explore these very interesting topics. | 2014-10-01T00:00:00.000Z | 2000-08-10T00:00:00.000 | {
"year": 2000,
"sha1": "1406dc0aacf3ecd4d16469c436b8631737e1ef5d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/0008009",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1406dc0aacf3ecd4d16469c436b8631737e1ef5d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
17443210 | pes2o/s2orc | v3-fos-license | Polymyositis Associated with Hepatitis B Virus Cirrhosis and Advanced Hepatocellular Carcinoma
Polymyositis (PM) is an inflammatory condition of skeletal muscle and is believed to be a paraneoplastic syndrome associated with various types of cancer. PM associated with chronic hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is very rare. We report a case of advanced HCC with chronic HBV cirrhosis that presented with proximal muscle weakness. Further investigation showed elevation of muscle enzymes, myopathic pattern of electromyography (EMG), and evidence of myositis compatible with PM. Lamivudine and 1 mg/kg of oral prednisolone were given. Two sessions of transcatheter arterial chemoembolization (TACE) were performed and sorafenib was started. Muscle enzymes normalized after 6 weeks of treatment. Unfortunately, 5 months after treatment, patient was readmitted and died of severe bacterial pneumonia.
Introduction
Polymyositis (PM) is an idiopathic inflammatory myopathy. It is a systemic disease that affects skeletal muscles and results in proximal muscle weakness. PM is associated with malignancy in 10-15% of patients. [1][2][3] The 3 most commonly associated cancers are nasopharyngeal, lung, and breast cancer. 4 Hepatocellular carcinoma (HCC)-associated PM is quite rare. We report a case of hepatitis B virus (HBV) cirrhosis with advanced HCC presenting with PM.
Case Report
A previously healthy 56-year-old male presented with a 6-week history of fever. Two weeks prior to admission, he developed progressive proximal muscle weakness. Through work-up, he was diagnosed with chronic HBV cirrhosis (Child-Pugh B; MELD 7) with advanced HCC. On physical examination, body temperature was 38°C, blood pressure was 120/75, and pulse rate was 92 bpm. Examination of the limbs showed normal tone without muscle wasting or tenderness. There was bilateral proximal weakness in both upper and lower limbs, with grade 3 of 5 in strength of both flexion and extension based on the Medical Research Council scale (MRC), with normal strength in distal limbs. His neck muscles were also weak. Deep tendon reflexes and sensation were normal. He had hepatomegaly and signs of chronic liver stigmata, but no ascites and no signs of hepatic encephalopathy. His Eastern Cooperative Oncology Group (ECOG) performance status score was 2.
Laboratory tests revealed albumin 2.2 g/dL, total protein 4.4 g/dL, aspartate aminotransferase (AST) 724 IU/L, alanine aminotransferase (ALT) 236 IU/L, total bilirubin 1.0 mg/dL, alkaline phosphatase 148 U/L, prothrombin time (PT) 14.1 s, international normalized ratio (INR) 1.14, and creatinine 0.8 mg/dL. His creatine phosphokinase was 17,963 IU/L. Serum electrolytes and thyroid function tests were normal. His viral profiles were positive for HBV, with a DNA level by polymerase chain reaction (PCR) of 17,460 IU/mL. Tests for hepatitis C virus and HIV were negative. Alpha-fetoprotein (AFP) was 56,310 ng/mL (normal: <25 ng/mL). Autoantibodies including anti-dsDNA, anti-Jo-1, anti-neutrophilic cytoplasmic antibody, anti-RNP, anti-SSA, and anti-SSB were negative. Abdominal computed tomography (CT) showed liver cirrhosis with an ill-defined 12 x 7-cm, arterial-enhancing, heterogeneous, hypodense lesion in the left lobe of the liver with increased enhancement on the venous and delayed phases and central necrosis. The CT showed 2 additional arterial-enhancing lesions, 6 x 7 cm and 3 x 3 cm in size, with contrast washout in the venous phase at hepatic segments V/VIII and VII, and a left main portal vein thrombosis (Figure 1). Liver biopsy of the largest lesion was performed to exclude a secondary liver neoplasm. Histology showed poorly differentiated carcinoma positive for glypican 3 (GPC3) and negative for CK7, CK20, and hepatocyte paraffin 1 (Figure 2), which was consistent with the diagnosis of HCC.
Electromyography (EMG) showed low-amplitude, short-duration action potentials with an early recruitment pattern, with a normal nerve conduction study and repetitive nerve stimulation. These findings were highly suggestive of a myopathic pattern (Figure 3). Muscle biopsy showed increased endomysial connective tissue and lymphocyte infiltration with necrotic and regenerating myofibers (Figure 4). No vasculitis or perifascicular pattern was seen, and a diagnosis of polymyositis (PM) was confirmed. Prednisolone 1 mg/kg for PM treatment and lamivudine for preventing hepatitis B reactivation were given. Two sessions of transcatheter arterial chemoembolization (TACE) were performed. His creatine kinase level decreased to normal after 6 weeks of treatment, but his muscle strength did not improve. Unfortunately, 5 months after treatment, he was readmitted with severe bacterial pneumonia and died after 16 days of hospitalization.
Discussion
HCC-associated PM is a rare condition. Only 4 cases of PM associated with HCC have been reported (Table 1). 5-8 A previous study showed that large tumor size and a high AFP level were commonly found in HCC patients who had paraneoplastic syndromes. 9 Our patient had both a large tumor and a very high AFP level. The pathogenesis of PM has not been identified. A possible mechanism is that an autoimmune process triggered by the tumor leads to clonally expanded CD8-positive cytotoxic T-cells that invade muscle fibers expressing major histocompatibility complex (MHC) class I antigen and release cytokines, causing muscle inflammation. 10,11 HBV-associated PM has also been reported, 12 so it is possible that HCC, HBV, or both may have caused PM in this patient.
The role of steroids for treating HCC-associated PM is controversial. Physicians should be aware of an increased risk of infection when using high-dose corticosteroids in patients with advanced HCC and cirrhosis. Our patient's ability to maintain daily life was limited by his weakness; therefore, corticosteroids were given after discussion with him. Because of the corticosteroid therapy, lamivudine was needed for prevention of HBV reactivation. Lamivudine has good efficacy for this indication, particularly when the HBV DNA level is low, as in our patient. 13,14 Most patients with HCC-associated PM have had a poor prognosis and poor treatment outcomes. Despite high-dose corticosteroids and surgery/chemoembolization for HCC management, 5-8 few patients had improvement of muscle weakness, and all died from HCC-related causes within 6 months after diagnosis.
Disclosures
Author contributions: K. Thanapirom wrote the first draft, collected the data, and conducted the literature research. S. Aniwan conducted the literature research and drafted the article. S. Treeprasertsuk reviewed the final draft and is the article guarantor.
Financial disclosure: No conflicts of interest or sources of funding to report.
Informed consent was obtained for this case report. | 2016-05-04T20:20:58.661Z | 2014-04-01T00:00:00.000 | {
"year": 2014,
"sha1": "a15081c0a1b49e266df1a66c9a5dc43aecbc175d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.14309/crj.2014.39",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a15081c0a1b49e266df1a66c9a5dc43aecbc175d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225067412 | pes2o/s2orc | v3-fos-license | Ubiquitous Molecular Outflows in z>4 Massive, Dusty Galaxies II. Momentum-Driven Winds Powered by Star Formation in the Early Universe
Galactic outflows of molecular gas are a common occurrence in galaxies and may represent a mechanism by which galaxies self-regulate their growth, redistributing gas that could otherwise have formed stars. We previously presented the first survey of molecular outflows at z>4 towards a sample of massive, dusty galaxies. Here we characterize the physical properties of the molecular outflows discovered in our survey. Using low-redshift outflows as a training set, we find agreement at the factor-of-two level between several outflow rate estimates. We find molecular outflow rates 150-800Msun/yr and infer mass loading factors just below unity. Among the high-redshift sources, the molecular mass loading factor shows no strong correlations with any other measured quantity. The outflow energetics are consistent with expectations for momentum-driven winds with star formation as the driving source, with no need for energy-conserving phases. There is no evidence for AGN activity in our sample, and while we cannot rule out deeply-buried AGN, their presence is not required to explain the outflow energetics, in contrast to nearby obscured galaxies with fast outflows. The fraction of the outflowing gas that will escape into the circumgalactic medium (CGM), though highly uncertain, may be as high as 50%. This nevertheless constitutes only a small fraction of the total cool CGM mass based on a comparison to z~2-3 quasar absorption line studies, but could represent>~10% of the CGM metal mass. Our survey offers the first statistical characterization of molecular outflow properties in the very early universe.
INTRODUCTION
Corresponding author: Justin S. Spilker (spilkerj@gmail.com), NHFP Hubble Fellow.
Powerful galactic outflows or winds have been widely invoked in the establishment and regulation of many fundamental observed correlations in galaxies. Outflows driven by supermassive black hole feedback or processes related to star formation (e.g. stellar winds, supernovae, radiation pressure) are thought to regulate the growth of both the black hole and the stellar component of galaxies (e.g. Silk & Rees 1998; Fabian 1999; Gebhardt et al. 2000). Outflows are also invoked as an important mechanism regulating the metallicity of galaxies, capable of transporting heavy elements into the circumgalactic medium that surrounds galaxies (e.g. Tumlinson et al. 2017). They are also likely necessary to explain the rapid suppression ('quenching') of star formation in massive galaxies and the resulting global and spatially-resolved properties of the stars and gas in quenched galaxies at high redshift (e.g. Tacchella et al. 2015; Barro et al. 2016; Spilker et al. 2018a; Bezanson et al. 2019; Spilker et al. 2019).
Feedback and outflows are widely viewed as necessary in simulations in order to prevent over-cooling and consequently overly-massive galaxies. Feedback is typically included in ad hoc ways, and prescriptions differ greatly across simulations (see Somerville & Davé 2015 for a recent review). Recent high-resolution zoom simulations have been able to drive outflows self-consistently (e.g. Muratov et al. 2015;Agertz & Kravtsov 2016), but the galaxy parameter space probed is still limited (usually focusing on Milky Way-like halos). Thus, constraining outflow scaling relations is useful both for testing predictions from high-resolution simulations and for informing sub-grid prescriptions used in large-volume simulations.
Outflows appear to be ubiquitous in galaxies, and the winds are known to span many orders of magnitude in temperature and density (e.g. Thompson et al. 2016;Schneider & Robertson 2017), and as such various components of the winds are observable from X-ray to radio wavelengths (e.g. Leroy et al. 2015). The cold molecular component of outflows is of special interest for many reasons, not least of which is that molecular gas is the raw fuel for future star formation and appears to be the largest component by mass of most outflows (see Veilleux et al. 2020 for a recent review). The cold gas in outflows is notoriously difficult to reproduce in simulations because the thermal balance of outflowing gas depends on the detailed hydrodynamics and heating/cooling processes on spatial scales much smaller than typically achieved (e.g. Scannapieco 2013;Schneider & Robertson 2017). The molecular outflow properties of large samples of galaxies can thus provide a valuable constraint for cosmological galaxy formation simulations (e.g. Muratov et al. 2015;Davé et al. 2019;Hayward et al. 2020).
In the first paper in this series (hereafter Paper I), we presented the first sample of molecular outflows in the z > 4 universe, using Atacama Large Millimeter Array (ALMA) observations of the hydroxyl (OH) 119 µm doublet as an outflow tracer. The sample was selected from the South Pole Telescope (SPT) sample of gravitationally lensed dusty star-forming galaxies (DSFGs), targeting intrinsically luminous galaxies, log L_IR/L⊙ ∼ 12.5 − 13.5. We found unambiguous outflows in 8/11 (∼75%) of the sample, approximately tripling the number of known molecular winds at z > 4. The observations also spatially resolved the outflows, and we found evidence for clumpy substructure in the outflows on scales of ∼500 pc.
High-redshift DSFGs such as those targeted by our sample, in particular, can offer unique insight into the physics of feedback and its role in galaxy evolution. Their star formation rates (SFRs) and SFR surface densities are unprecedented in the local universe, and approach the theoretical maximum momentum injection rate from stellar feedback beyond which the remaining gas is unbound (e.g. Murray et al. 2005; Thompson et al. 2005). DSFGs are expected to trace the most massive dark matter halos of their epoch, offering insight into galaxy formation in dense environments (e.g. Marrone et al. 2018;Miller et al. 2018;Long et al. 2020). They are also one of few viable populations capable of producing massive quiescent galaxies now identified at z ∼ 4, the existence of which implies that the progenitor systems must have experienced powerful and effective feedback in order to suppress star formation (e.g. Straatman et al. 2014;Toft et al. 2014).
Our primary focus in this work is to understand the physical properties of the molecular outflows we have detected at z > 4 based on the measured OH 119 µm profiles. This is made difficult by the fact that the 119 µm line opacity is expected to be very high, τ OH 119µm 10 (Fischer et al. 2010). In the nearby universe, extensive observations of ULIRGs and obscured QSOs with Herschel /PACS allowed many OH lines to be detected towards the same objects, including some transitions with far lower optical depths and excited transitions that can only arise from the warmest and densest regions. With many OH transitions, self-consistent radiative transfer modeling can reproduce all observed line profiles as well as the dust continuum emission simultaneously (González-Alfonso et al. 2017; hereafter GA17). In the distant universe we are unlikely to possess such a rich trove of information until the next generation of far-IR space missions become reality. Even with ALMA at the highest redshifts the atmosphere precludes observations of the full suite of OH diagnostics, and the other OH transitions are typically weaker than the 119 µm ground state lines. It thus behooves us to understand whether and how well outflow properties can be determined if only the 119 µm OH doublet has been detected. We presented a first attempt at such an analysis in Spilker et al. (2018b, hereafter S18), which we expand upon here.
Readers interested only in our interpretation of the outflow properties we derive here are welcome to skip to Section 4, which presents our main findings and discussion. Section 2 gives an overview of our assumed outflow geometry, calculations of outflow properties, and the literature reference samples we use as a training set for our own sample galaxies. Section 3 describes the different methods we use to estimate the outflow rates and explores the level of agreement between the methods. We summarize and conclude in Section 5. We assume a flat ΛCDM cosmology with Ω m = 0.307 and H 0 = 67.7 km s −1 Mpc −1 (Planck Collaboration et al. 2016), and we take the total infrared and far-infrared luminosities L IR and L FIR to be integrated over rest-frame 8-1000 and 40-120 µm, respectively. We assume a conversion between L IR and SFR of SFR= 1.49 × 10 −10 L IR , with L IR in L and SFR in M /yr (Murphy et al. 2011). Tables of the outflow properties from this work, as well as the SPT sample properties from Paper I, are available in electronic form at https://github.com/spt-smg/publicdata.
Assumed Outflow Geometry
Where necessary throughout this work, we assume a spherical 'time-averaged thin shell' geometry widely used in the literature (see Rupke et al. 2005), in which a mass-conserving outflow with constant outflow rate expands following a density profile n ∝ r^−2. In this geometry, the outflow rate and mass are related through
Ṁ_out = M_out v_out / R_out = 4π μ m_H N_H R_out v_out, (1)
with v_out the characteristic outflow velocity, R_out the outflow inner radius, μ = 1.4 the mean mass per hydrogen atom (including the cosmological helium abundance), and m_H the mass of a hydrogen atom. These quantities are fundamentally linked to the column density of gas along the line of sight N_H responsible for generating the observed absorption profiles. If we drop the assumption of spherical symmetry, these quantities are also linearly proportional to the covering fraction f_cov, the fraction of the full 4π sr covered by wind material as seen from the source. For our sample, in which the outflows are detected in absorption, the inferred energetics obviously depend strongly on the orientation of the outflow since no absorption can be detected for outflowing material that does not intersect the line of sight to the galaxy. Redshifted receding material is also difficult to constrain for our high-redshift sample. The ALMA bandwidth probes only a limited range of redshifted velocities, and it is possible for the galaxy itself to be optically thick to emission from the receding material even if the spectral coverage were extended to more redshifted velocities. As has been discussed extensively in the literature, this assumed geometry leads to more conservative outflow energetics than other simple geometries (e.g. Veilleux et al. 2020, and references therein). In particular, the outflow rates are a factor of 3 lower than if the outflow volume is filled with uniform density (which implies a decreasing outflow rate over time for constant flow velocity), and a factor Δr/R_out lower than the 'local' or 'instantaneous' rate if the wind arises from a thin shell of width Δr. We take the characteristic velocity to be v_84, the velocity above which 84% of the absorption occurs. This is also fairly conservative: clearly the maximum velocity is not a 'characteristic' outflow velocity, but v_84 is more robust to uncertainties in the systemic redshift than the median absorption velocity v_50, given the deep absorption at systemic velocities present in most of our sources (Paper I). Finally, we take R_out to be r_dust, the effective radius of the dust emission at rest-frame ≈100 µm. This radius has been directly measured for the low-redshift literature reference sources by Herschel/PACS (Lutz et al. 2016) and from our lensing reconstructions of the ALMA OH continuum data, and is motivated by our observation that the OH outflow absorption is frequently strongest in equivalent width not in the nuclear regions but towards the outskirts (Paper I).
The momentum and kinetic energy outflow rates are then given by
ṗ_out = Ṁ_out v_out, (2)
Ė_out = (1/2) Ṁ_out v_out^2, (3)
where the expression for kinetic power assumes negligible contribution from turbulent (i.e. non-bulk) sources. Although maps of the molecular outflows at ∼500 pc resolution are available for our SPT sample due to their gravitationally lensed nature (Paper I), we do not attempt to match, or otherwise account for, the structures seen in these maps when estimating outflow rates.
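As a rough numerical sketch of Eqs. (1)-(3) under the thin-shell geometry above, the following snippet evaluates the outflow rate, momentum rate, and kinetic power from an assumed column density, velocity, radius, and covering fraction; the input values are placeholders rather than measurements from this work.

```python
# Thin-shell outflow energetics (cf. Eqs. 1-3); the numerical inputs below
# are placeholders, not measurements from this work.
import numpy as np

M_SUN = 1.989e33      # g
KM = 1.0e5            # cm
KPC = 3.086e21        # cm
YR = 3.156e7          # s
MU, M_H = 1.4, 1.673e-24   # mean mass per H atom, H atom mass (g)

def thin_shell_energetics(N_H_cm2, v_out_kms, R_out_kpc, f_cov=1.0):
    """Return (Mdot [Msun/yr], pdot [dyn], Edot [erg/s]) for an r^-2 wind."""
    v = v_out_kms * KM
    R = R_out_kpc * KPC
    mdot = 4.0 * np.pi * f_cov * MU * M_H * N_H_cm2 * R * v   # g/s
    return mdot / M_SUN * YR, mdot * v, 0.5 * mdot * v**2

print(thin_shell_energetics(N_H_cm2=1e22, v_out_kms=500.0, R_out_kpc=1.0, f_cov=0.3))
```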
OH Outflow Training Sample
In order to determine whether and how well the outflow properties for our high-redshift sample can be estimated given the sole available OH 119 µm transitions, we compare extensively to low-redshift ULIRGs and obscured QSOs with rich OH data and radiative trans-fer models from Herschel /PACS. In particular, GA17 self-consistently model 12 nearby IR-luminous galaxies with detections of the OH transitions at 119, 84, 79, and 65 µm, all of which showed either P Cygni profiles or blueshifted line wings in the 119 µm doublet. The 84 and 65 µm doublets are highly-excited lines with lower levels 120 and 300 K above the ground state that require an intense and warm IR radiation field to be detected, and the cross-ladder 79 µm doublet has an optical depth ≈40× lower than the 119 µm transition. We supplement this sample with one additional ULIRG with OH radiative transfer modeling (Tombesi et al. 2015;Veilleux et al. 2017), and an additional four sources with outflow rates based on the detection of high-velocity CO line wings that were also observed in OH 119 µm. These four sources have lower typical L IR and lower outflow rates than the primary OH-based sample. We consider these final sources because Lutz et al. (2020) find reasonable agreement between CO-based and OH-based outflow properties, but we exclude them from our later empirical outflow rate estimation out of an abundance of caution. These sources can at some level be considered akin to a small cross-validation sample, although the dynamic range in outflow properties they span is small.
For all sources, we remeasured various properties of the OH 119 µm spectra in the same way as for our high-redshift sample (Paper I), including the broadening from the FWHM≈300 km s −1 PACS instrumental spectral resolution at these wavelengths. As for our sample, we use the fits to the spectra to measure the velocities above which 50 and 84% of the absorption takes place, v 50 and v 84 , and the 'maximum' outflow velocity v max that we take to be the velocity above which 98% of the absorption takes place. We also measure the total equivalent widths of the absorption components as well as the equivalent widths integrated over various blueshifted velocity ranges; for example, EW v<−200 refers to the equivalent width integrated over velocities more blueshifted than −200 km s −1 . We note that all these quantities are non-parametric and therefore depend little on the exact methods used to fit the PACS spectra.
OUTFLOW RATE ESTIMATES
In this section we detail a number of different methods we use to estimate the outflow rates for our z > 4 SPT DSFG sample. For each method presented here, we estimate uncertainties on the derived outflow rates through a Monte Carlo procedure, repeatedly resampling the measurements within the uncertainties, redoing the fitting analysis, and remeasuring the predicted outflow rates based on the results of each fit. While we provide several empirical fitting formulae that can be used to estimate outflow rates from OH 119 µm data, we caution that the broad applicability of these formulae is questionable. The literature reference sources are not broadly representative of star-forming galaxies (nor is our z > 4 sample), consisting solely of IR-luminous galaxies. It is unclear if the conversions we find here can (or should) be extrapolated to less-extreme sources.
For each of our methods here, we aim to find correlations between the published outflow rates for our training sample and the measured OH and ancillary galaxy properties, as detailed in the following subsections. For our objects, we provide observational details and sample properties in Paper I, and briefly reprise here. Gravitational lensing magnification factors were measured from lens models of the rest-frame 119 µm dust continuum emission observed by ALMA along with the OH spectroscopy. From simple fits to the OH spectra, we measured basic observed properties such as equivalent widths and velocities. Molecular gas masses were estimated from CO(2-1) detections of all sources, following Aravena et al. (2016). Total IR luminosities were measured by fitting to the well-sampled far-IR/submm photometry, which spans rest-frame ≈15-600 µm for all sources. We constrain the contribution of AGN to the total luminosity using rest-frame mid-IR ∼15-30 µm photometry from Herschel /PACS sensitive to hot AGNheated dust near the torus. No source shows evidence of an AGN in the mid-IR (or in any other data, e.g. Ma et al. 2016), with fractional contributions to the total luminosity f AGN 0.1 − 0.45 (1σ; mean upper limit f AGN 0.25), depending on the source. It is possible that this method underestimates f AGN for heavilyobscured AGN, but we do not know whether such AGN are present in our sample or how common they are if so. In our subsequent analysis, we detail changes to our interpretation that would result from a factor-of-2 underestimate of f AGN (and consequent decrease in the fraction of L IR arising from star formation).
While our parent sample consists of 11 z > 4 DSFGs, all of which were detected in OH 119 µm absorption, we determined in Paper I that only 8 of these show unambiguous evidence for outflows. It is essentially not possible to set upper limits on the outflow properties for the remaining 3 sources, since this would require prior knowledge of, for example, the outflow velocities. Lack of sensitivity is not the issue; all 3 were detected in OH absorption, but ancillary spectral information from [CII] or CO data made the OH profiles difficult to interpret conclusively as evidence for outflows. The sources we selected for OH observations are not obviously biased with respect to the full sample of z > 4 SPT DSFGs in terms of L IR , dust mass, or effective dust temperature (Reuter et al. 2020), although they are by no means representative of 'typical' galaxies at this epoch.
The outflow rates, masses, and energetics we derive for our high-redshift sources from all methods are given in Table 1. These values, as well as the SPT DSFG observed properties given in Paper I we use to derive the outflow rates, are available in machine-readable format at https://github.com/spt-smg/publicdata.
Simple Optically-Thin Model
We first consider a simple analytic calculation of the outflow rates for our sources and the literature reference sample assuming the OH 119 µm absorption is optically thin. As already discussed, we expect this to be a very bad assumption, but this calculation does at least provide a hard lower bound on the true outflow rate and an opportunity to determine if some overall correction factor to the optically thin outflow rates could allow a more realistic estimate. While for many nearby galaxies other OH transitions with far lower line opacities can be observed (e.g. the 79 µm doublet, or lines of the lessabundant 18 OH isotopologue), we must instead attempt to find some other quantity that can provide an empirical correction.
Under the assumption that the absorption is optically thin, the minimum column density of OH molecules N_OH is given by
N_OH = (8π / (A_ul λ^3)) (Q_rot / g_u) exp(E_l / T_ex) [1 − exp(−hν / k_B T_ex)]^−1 ∫ τ dv,
where λ and ν are the wavelength and frequency of the transition, h and k_B are the Planck and Boltzmann constants, A_ul is the Einstein 'A' coefficient of the transition, g_u the degeneracy of the upper energy level, E_l the lower energy level in temperature units, Q_rot the rotational partition function evaluated at excitation temperature T_ex, and ∫ τ dv the integrated optical depth of the absorption profile (e.g. Mangum & Shirley 2015). For the OH 119 µm doublet transitions, E_l = 0 K, A_ul = 0.138 s^−1, and g_u = 6 (Müller et al. 2001, 2005). Tabulated values of Q_rot are available from the NASA JPL spectroscopic database (Pickett et al. 1998), and converting N_OH to a total hydrogen column N_H additionally requires an assumed OH abundance (Goicoechea & Cernicharo 2002).
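A minimal sketch of this optically thin column density calculation is given below; the excitation temperature, partition function value, and OH abundance used are placeholders that would need to be taken from the appropriate references.

```python
# Optically thin OH column density from the integrated 119 um optical depth,
# and an illustrative conversion to N_H. T_ex, Q_rot, and the OH abundance
# below are placeholders; Q_rot should be taken from the JPL catalog.
import numpy as np

H, K_B, C = 6.626e-27, 1.381e-16, 2.998e10   # cgs constants
LAM = 119.3e-4                # cm, OH 119 um doublet
NU = C / LAM                  # Hz
A_UL, G_U, E_L = 0.138, 6.0, 0.0   # Einstein A (s^-1), upper degeneracy, E_l (K)

def n_oh_thin(int_tau_dv_kms, T_ex, Q_rot):
    """Minimum (optically thin) OH column density in cm^-2."""
    int_tau_dv = int_tau_dv_kms * 1.0e5   # km/s -> cm/s
    boltz = np.exp(E_L / T_ex) / (1.0 - np.exp(-H * NU / (K_B * T_ex)))
    return 8.0 * np.pi / (A_UL * LAM**3) * (Q_rot / G_U) * boltz * int_tau_dv

N_OH = n_oh_thin(int_tau_dv_kms=200.0, T_ex=50.0, Q_rot=50.0)   # placeholder inputs
N_H = N_OH / 1.0e-6   # placeholder OH abundance, not the value adopted in this work
print(f"N_OH = {N_OH:.2e} cm^-2, N_H = {N_H:.2e} cm^-2")
```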
In order to isolate only outflowing material, it is common to integrate the optical depth over a limited range of velocities. Here we calculate the integrated optical depth over velocities more blueshifted than −200 km s^−1, a commonly-adopted threshold. Although this is not fast enough to be guaranteed to trace only outflowing material, we expect it to largely trace the outflows even in our sample DSFGs that often show broad CO or [CII] emission line profiles (Paper I). Under the assumption of optically-thin absorption, the integrated optical depth over this velocity range is equal to the equivalent width over the same range. In practice, because the outflow rate itself is proportional to v_out (Eq. 1), we instead calculate the velocity-weighted integral ∫ τ(v) v dv over velocities below −200 km s^−1 from our spectral fitting procedure, consequently incorporating the v_out term of Eq. 1 into the column density calculation directly. This allows us to include a first-order consideration of the shape of the absorption profiles while also removing the need to adopt a characteristic velocity in the outflow rate calculation. Finally, because we expect that the wind material does not fully cover the source, we adopt a covering fraction f_cov = 0.3, the average value determined for the low-redshift reference sample (GA17). We discuss covering fractions in detail in Paper I, where we estimate covering fractions ranging from a hard lower bound of ∼0.1 to upper limits of ∼0.7. These covering fractions are also not directly comparable: GA17 estimate f_cov using their multi-transition OH radiative transfer analysis, while our estimates are based on our lensing reconstructions with hard lower limits based on spectral analysis. While assuming a different value for f_cov would linearly rescale our optically-thin outflow rate estimates, this has no impact on our subsequent results, as we explain further below.
We use Eq. 1 to calculate the outflow rates for our own and the literature reference samples. For the reference sample, we use far-IR continuum sizes from PACS 100 µm imaging (Lutz et al. 2016), or assume the average size ≈1 kpc if no data are available. We derive minimum optically thin outflow rates spanning 8 − 370 M /yr for the literature sources and 5 − 120 M /yr for the SPT sample, and corresponding outflow masses log(M out /M ) ≈ 7 − 8.5. We emphasize that these values are strong lower limits given the expected high OH line opacities. Figure 1 (left) shows the ratio of the optically thin outflow rates to the published values from the OH radiative transfer models and CO line wings for the literature sample; upper limits in this plot correspond to those sources that were spatially unresolved by PACS and therefore have upper limits on R out . As expected, the optically thin assumption likely underestimates the
true outflow rate by a large factor, ≈4−30× for most sources. The fact that the outflow rates can be so drastically underestimated is at some level a testament to the sensitivity of OH 119 µm to even minute amounts of outflowing material: with ALMA at z > 4, in principle it is possible to detect molecular outflow rates of just ∼10 M⊙/yr in less than an hour of observing time. This is a consequence of both the relatively high OH abundance and especially the large value of the Einstein A_ul for the ground-state 119 µm transition, ∼10^5 times larger than for [CII] 158 µm or ∼10^6 times larger than for low-order CO transitions. The drawback to this sensitivity, of course, is that the line opacities are high and it is not easy to determine by what exact factor the true outflow rate has been underestimated, as Figure 1 shows. There is no obvious trend between the ratio of optically thin to published outflow rate and the published outflow rate itself; evidently the OH 119 µm line opacity varies by a large amount in different galactic winds. We do, however, identify correlations between this ratio of outflow rate estimates and various measures of the outflow velocity. Sources with the fastest outflows are also the closest to being consistent with optically thin absorption. Figure 1 (right) shows this in terms of v_max, but we obtain results consistent within the uncertainties from v_84 and v_50 as well; while v_max is somewhat more difficult to measure (Paper I), it shows the largest dynamic range among the outflow velocity metrics. Such a correlation makes intuitive sense: for a given column density of absorbing gas, if the total gas column extends over a larger range in velocity, the line opacity per unit velocity interval must necessarily be lower and therefore the line opacity averaged over the full absorption profile must also be lower. This leads to a less extreme 'correction factor' needed for sources with very fast outflows.
Table 1 (notes): outflow rate estimates are described in the text as follows. Ṁ^thin_out, Ṁ^thin,corr_out: Section 3.1; Ṁ^S18_out, Ṁ^HC20_out: Section 3.2; Ṁ^PLS_out: Section 3.3; Ṁ^joint_out: Section 3.4. We use the joint estimates Ṁ^joint_out and associated uncertainties throughout the remainder of the text, subsequently dropping the 'joint' superscript for simplicity. We estimate typical uncertainties on ṗ_out and Ė_out of ∼0.4 dex. This table is available in machine-readable format at https://github.com/spt-smg/publicdata.
Figure 1 (caption): ratio of the optically thin to the published outflow rates, plotted against the published outflow rates (left) and the maximum outflow velocity v_max (right). Literature objects with outflow rates derived from multi-transition OH radiative transfer are shown with circles, while those with only CO-based rates are shown as squares. Outflow rates assuming the OH 119 µm absorption is optically thin underestimate the true outflow rates by an amount that is correlated with the outflow velocity. The right panel shows a log-linear fit and 16-84th percentile confidence interval (including an intrinsic scatter of ±0.15 dex) used to 'correct' the optically thin outflow rates to more realistic estimates using the measured values of v_max for the high-redshift SPT sources (× symbols), which are placed along the best-fit line according to their measured v_max.
We fit a simple log-linear function to the data in Figure 1 (right), determining uncertainties on the fit using the same Monte Carlo resampling method we use for all outflow rate techniques. We find a best-fit relation for the outflow rates 'corrected' from the optically-thin values of
Ṁ^thin,corr_out = Ṁ^thin_out × 10^(m v_max + b),
with m = −6.4 +1.8 −1.7 × 10^−4 (km s^−1)^−1 and b = 0.91 +0.07 −0.06, and v_max in km s^−1. The analysis indicates an intrinsic dispersion of ∼0.15 dex around the best-fit relation in addition to the statistical uncertainties. We use this relation and the measured values of v_max to estimate the true outflow rates empirically corrected from the optically-thin assumption.
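The correction procedure can be sketched as follows: resample the reference measurements within their uncertainties, refit the log-linear relation between the outflow-rate ratio and v_max, and apply the resulting median coefficients. The arrays below are placeholders, not the reference-sample values.

```python
# Monte Carlo version of the optically-thin 'correction': resample the
# reference measurements, refit log10(Mdot_pub / Mdot_thin) = m * v_max + b,
# and apply the median coefficients. All arrays are placeholders.
import numpy as np

rng = np.random.default_rng(0)
v_max = np.array([400.0, 600.0, 800.0, 1000.0, 1200.0])    # km/s (placeholder)
log_ratio = np.array([0.80, 0.55, 0.40, 0.25, 0.10])        # log10(Mdot_pub / Mdot_thin)
sigma = np.full_like(log_ratio, 0.15)                       # assumed uncertainties

slopes, intercepts = [], []
for _ in range(2000):
    y = rng.normal(log_ratio, sigma)          # perturb within the errors
    m, b = np.polyfit(v_max, y, 1)            # refit the log-linear relation
    slopes.append(m)
    intercepts.append(b)
m_med, b_med = np.median(slopes), np.median(intercepts)

def corrected_rate(mdot_thin, vmax_kms):
    """Multiply the optically thin rate by the empirical factor 10**(m*vmax + b)."""
    return mdot_thin * 10.0 ** (m_med * vmax_kms + b_med)

print(m_med, b_med, corrected_rate(50.0, 800.0))
```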
We note again that our prior assumption of f cov = 0.3 in our calculation ofṀ thin out has no impact on our 'corrected' outflow rates, as a different assumed value propagates directly into b in the equation above. There is also no evidence for a correlation between f cov and other galaxy properties in the low-redshift training sample that could influence our outflow rates given the differences between the samples (GA17). While we do find a tentative correlation of f cov with L IR (Paper I), we expect those covering fractions to be upper limits on the true values and stress again that the methods used between low-and high-redshift are not directly comparable. Our assumed f cov = 0.3 lies well within the lower and upper limits we expect for the true values, so we do not expect this to add substantial additional uncertainty beyond the present estimates. Both the optically thin and the corrected outflow rates are given in Table 1.
Simple Empirical Estimates
As we expected, the optically thin outflow rates almost certainly severely underestimate the true outflow rates. While we derived a method to correct these values to more realistic outflow rates, the correction factors remain highly uncertain and in any case the general methodology deserves to be cross-checked by other methods. We now consider two simple empirical methods to provide alternative estimates of the outflow rates before moving to a more complex empirical method.
In S18 we made a simple estimate of the true outflow rates using a subset of the present literature reference sources for which outflows had also been detected in CO emission. In that work we took the OH 119 µm equivalent widths for the low-z literature sources integrated over the velocity ranges where high-velocity wings of CO emission had been detected (GA17), under the philosophy that both traced molecular outflows and that the outflows should appear over the same velocity range in both tracers. Here we follow a similar vein, now including an expanded reference sample. Instead of individually choosing velocity ranges over which to measure the OH equivalent widths, here we simply fit a linear relationship between the published literatureṀ out values and EW v<−100 from our re-measured 119 µm spectral fits, which gave equivalent widths similar to those we used in S18.
Figure 2 (left) shows the results of this analysis. We do not force this fit to have a zero intercept. Although this allows the unphysical scenario of positive outflow rates in the absence of any absorption, or even negative outflow rates for low equivalent widths, allowing this freedom in the model yields a better characterization of the uncertainty at low Ṁ_out. We find a best-fit expression for the outflow rate
Ṁ^S18_out = m EW_{v<−100} + b,
with m = 2.7 +0.4 −0.5 M⊙/yr/(km s^−1) and b = 55 ± 50 M⊙/yr, with the outflow rate in M⊙/yr and the equivalent width in km s^−1. We find an intrinsic dispersion of ±150 M⊙/yr around this relation in addition to the statistical uncertainties, which at least applies in the low-EW_{v<−100}, low-Ṁ_out regime. At higher Ṁ_out there are too few sources to quantify any additional scatter beyond the statistical uncertainties; we assume a constant 150 M⊙/yr scatter for all values of EW_{v<−100}.
The OH equivalent width is not expected to be the sole controlling parameter that predicts outflow rates, of course. Herrera-Camus et al. (2020) (abbreviated HC20) explored an alternative simple parameterization, fitting the outflow rates to the product EW v<−200 × √ L FIR . This was motivated by an expectation that the outflow rate should depend on both the column density of outflowing gas (related to EW v<−200 ) and the size of the source (proportional to √ L FIR through a Stefan-Boltzmann type relation), as in Eq. 1.
We repeat a similar analysis as HC20, with a couple of small modifications. First, we use √L_IR instead of √L_FIR, which is more readily available for all literature reference sources. Second, we do not fit a line forcing the y-intercept to be zero as done in HC20. Again, this allows us to better understand the uncertainties at low Ṁ_out. Our best-fit relation for the outflow rate in this way takes the form
Ṁ^HC20_out = m (EW_{v<−200} × √L_IR) + b,
with an intercept b whose uncertainty extends down to −30 M⊙/yr. We find an essentially identical intrinsic scatter around this relation as before, ≈150 M⊙/yr, where this is assumed to be constant due to the lack of sources with very high outflow rates.
Aside from a more physically-justified parameterization, this method also has a slightly higher dynamic range in the abscissa than the S18-style fit. Between the two methods, we have some preference for the HC20 parameterization. Outflow rates derived from both methods are given in Table 1.
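A schematic implementation of the two simple estimators is shown below; the S18-style coefficients follow the best-fit values quoted above, while the HC20-style coefficients are placeholders since the fitted values are not reproduced here.

```python
# Schematic versions of the two simple empirical estimators (Section 3.2).
# The S18-style coefficients are the best-fit values quoted in the text;
# the HC20-style coefficients below are placeholders only.
def mdot_s18(ew_v100_kms, m=2.7, b=55.0):
    """Outflow rate (Msun/yr) from the OH equivalent width at v < -100 km/s."""
    return m * ew_v100_kms + b

def mdot_hc20(ew_v200_kms, L_IR_Lsun, m=1.5e-6, b=0.0):
    """Outflow rate (Msun/yr) from EW(v < -200 km/s) * sqrt(L_IR); m and b
    here are placeholder values, not the fitted coefficients."""
    return m * ew_v200_kms * L_IR_Lsun ** 0.5 + b

print(mdot_s18(150.0))          # 460 Msun/yr for EW = 150 km/s
print(mdot_hc20(100.0, 1e13))   # illustrative only
```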
Multivariate Empirical Estimate
Finally, we consider a more complex empirical model to derive outflow rates. While the analyses in the previous subsection relied on specific linear correlations between observables and published outflow rates, there is no particular reason to choose those specific observables over others apart from some physical intuition about the likely important parameters. The fact that we find significant additional intrinsic scatter beyond the inferred uncertainties in the previous fits is a clue that a more complex model connecting the observables and the outflow rates is warranted. Indeed, both our reference and high-redshift samples have many more known properties than we have yet utilized, both from the OH spectra themselves as well as ancillary measurements from other data. Here we perform one final analysis that attempts to discern the most predictive relationship between all available measurements and the outflow rates, at the expense of linking the resulting relationship to any particular physical meaning.
To explore the complex relationship between outflow rates and all available measurements, we use a 'partial least squares' (PLS) technique (Wold 1966). PLS is both a regression and dimensionality reduction technique, and can be thought of as somewhat of a hybrid between standard multivariate linear regression and principal component analysis (PCA) or singular value decomposition. PLS is well-suited to cases such as ours where the number of objects in the reference sample is relatively few but the number of measured quantities for each sample object is large, with many of the measured quantities correlated with each other. In our case, for example, while v 84 and v max encode slightly different information about the shape of the OH absorption profile, they are still strongly correlated: fast outflows are fast regardless of the metric used. While PCA techniques are capable of describing the variance in the observables, not all principal components need be predictive of some other quantity (Ṁ out in our case). PLS addresses this by maximizing the covariance between the space of observables and the space of desired predicted quantities. PLS also performs better than some other techniques (e.g. random forest estimators) when some measured observables of the target sample (in this case the SPT objects) lie outside the dynamic range of the training sample -that is, when extrapolation is required for one or more observables. In the end this has little influence on our application because the most predictive observables (see below) are well-sampled by the reference objects and extrapolation is not generally required.
For our purposes, we use PLS to predict the outflow rates Ṁ_out from a variety of (sometimes strongly correlated) observed properties: several metrics of the OH velocity profiles and equivalent widths integrated over various velocity ranges, as well as ancillary galaxy properties such as L_IR, r_dust, the AGN contribution to the bolometric luminosity f_AGN, and the effective dust temperature T_dust. We experimented extensively with various numbers and combinations of observables and found consistent results for the predicted outflow rates of the SPT sources in almost all cases. Generally, regardless of the observables used, a maximum of four or five PLS components minimized the mean squared error in the predicted outflow rates of the reference sample (that is, the dimensionality of the problem could be reduced from the number of observables used to four or five, due to covariances between the observables employed). PLS also allows us to understand which observables are most responsible for driving predictions for the outflow rates. Of those we explored, the outflow velocity and equivalent width were the most predictive of the measured outflow rates, while f_AGN and r_dust generally had little predictive power, possibly due to the relatively small dynamic range in these quantities in the training and target samples. Figure 3 shows the comparison between predicted and published outflow rates for the combination of parameters that includes v_50, v_84, v_max, EW_{v<−100}, EW_{v<−200}, EW_total, L_IR, and f_AGN. Interestingly, unlike the previous methods, there is no longer any detectable intrinsic scatter between the predicted and published outflow rates; the grey shaded region in Figure 3 illustrates the approximate upper limit on the scatter we can set with the data available, ≈100 M⊙/yr. Although it remains rather unsatisfying to necessarily discard all physical interpretation of the resulting predictions, clearly PLS is capable of translating the complex measurement space into the desired output outflow rates. Predicted outflow rates from this method are provided in Table 1.
Figure 2 (caption, partial): each panel also shows a linear fit and 68 percent confidence interval (including an intrinsic scatter of ±150 M⊙/yr; dashed line and grey shaded region). We use these fits and the measured OH 119 µm spectral properties to infer outflow rates for the high-redshift SPT sources. All symbols as in Fig. 1.
Figure 3 (caption): predicted and measured outflow rates from the empirically-based PLS method, which does not presuppose any particular functional form between measured properties and the molecular outflow rate. Partial least squares (PLS) is a technique that combines dimensionality reduction with multivariate regression; see Section 3.3. The dashed line shows the one-to-one relation, while the grey shaded region shows the upper limit to the remaining intrinsic scatter, ±100 M⊙/yr.
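For reference, a minimal version of such a PLS estimator can be written with scikit-learn as below; the training and target arrays are random placeholders standing in for the reference-sample measurements and the SPT observables.

```python
# Minimal PLS outflow-rate estimator with scikit-learn; all arrays are random
# placeholders standing in for the reference-sample and SPT measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
# Columns: v50, v84, vmax, EW(v<-100), EW(v<-200), EW(total), log L_IR, f_AGN
X_train = rng.normal(size=(17, 8))                   # reference sample observables
y_train = rng.normal(loc=2.5, scale=0.4, size=17)    # e.g. log10(Mdot_out) labels
X_target = rng.normal(size=(8, 8))                   # high-redshift source observables

pls = PLSRegression(n_components=4)   # four or five components minimized the MSE
pls.fit(X_train, y_train)             # PLSRegression standardizes X and y internally
y_pred = np.asarray(pls.predict(X_target)).ravel()
print(y_pred)
```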
Summary and Method Comparison
We now have four different estimates for the molecular outflow rates applied to the high-redshift SPT objects: one corrected from the optically-thin assumption, two simple empirical estimators, and one more complex empirical estimate. Figure 4 compares these estimates for both the low-redshift reference sample and as applied to our z > 4 objects. Essentially by definition, this Figure shows good agreement between the various methods for the reference sample, since this sample was used to derive the conversions between observables and outflow rates in the first place. We also find generally good agreement between the estimators for the SPT objects, in particular between the multivariate PLS analysis and the simpler approach of HC20. Evidently these methods make use of the most salient predictive measurements from the OH spectra and ancillary galaxy properties. Figure 5 compares the outflow rates for the SPT sources in more detail, showing the outflow rates derived from each method for each source. This figure again demonstrates the generally good agreement between methods, although the PLS and HC20-like methods tend to yield slightly higher values than the other methods. We also show a joint distribution of the outflow rates created by equally combining the Monte Carlo trials from each method. While this should not be considered a true joint probability distribution of the outflow rates (the methods used to derive the input distributions are hardly independent, for example), it both highlights the level of agreement or disagreement between methods and summarizes the constraints we place on the outflow rates. These joint outflow rates are listed in Table 1, referred to as Ṁ^joint_out. We use these joint estimates and associated uncertainties throughout the remainder of the text as our 'best' estimates of the outflow rates, subsequently dropping the 'joint' superscript from the notation for simplicity.
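Schematically, the joint estimate for a single source amounts to pooling the Monte Carlo draws from each method with equal weight and quoting percentiles, as in the following sketch with placeholder draws.

```python
# Pool equal numbers of Monte Carlo draws from each outflow-rate method for
# one source and quote the 16th/50th/84th percentiles as the joint estimate.
# The draws below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
draws = {
    "thin_corr": rng.lognormal(mean=np.log(300.0), sigma=0.3, size=2000),
    "S18":       rng.lognormal(mean=np.log(400.0), sigma=0.3, size=2000),
    "HC20":      rng.lognormal(mean=np.log(500.0), sigma=0.3, size=2000),
    "PLS":       rng.lognormal(mean=np.log(450.0), sigma=0.3, size=2000),
}
pooled = np.concatenate(list(draws.values()))   # equal weight per method
lo, med, hi = np.percentile(pooled, [16, 50, 84])
print(f"Mdot_joint = {med:.0f} (+{hi - med:.0f} / -{med - lo:.0f}) Msun/yr")
```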
The joint distributions from each method suggest that we are able to estimate the outflow rates for our sources at about the factor-of-two level. The OH-based reference sources have quoted uncertainties at the ∼50% level; the higher level of uncertainty for our sources reflects the lack of additional OH data for our sample that propagates into the scatter seen in the four individual methods and thus into our final joint estimates.
We emphasize that for both samples these uncertainties are likely underestimated due to systematics in many of the assumptions, from the OH abundance to the assumed geometry and outflow history. For our high-redshift objects, while our estimates are empirically based, the methods we have described presume that low-redshift IR-luminous galaxies are sufficiently similar to our targets as to not render these calculations meaningless. While the observed characteristics of our sample are contained within the parameter space probed by the reference sample (Figures 1-3 and Paper I), it is certainly possible that some other unmeasured quantity has a strong influence on the outflow rates that is not accounted for by our methods. Thus while we propagate the uncertainties on the joint outflow rates in the remainder of the text, it is important to remember that these are probably more uncertain by some difficult-toquantify amount.
Outflow Driving Mechanisms
In low-redshift samples, correlations between outflow velocities and host galaxy properties such as SFRs or AGN luminosities have been used to shed light on the physical mechanism(s) responsible for launching the outflows. There are good theoretical reasons to believe that the energy and momentum imparted to the gas from star formation and/or AGN activity should play a role in driving galactic winds, and should then manifest in the properties of the outflows launched. Nevertheless, observations of neutral and low-ionizations species show at best weak correlations between outflow velocities and SFR from the local universe to z ∼ 1 (e.g. Weiner et al. 2009;Rubin et al. 2014;Chisholm et al. 2015; Roberts-Borsani & Saintonge 2019), with any trend mostly due to the weak outflows seen in very low-SFR galaxies (e.g. Heckman & Borthakur 2016).
While this could plausibly be because the neutral outflows are less strongly coupled to the driving source, similarly weak correlations have also been seen for the molecular phase traced by OH in nearby ULIRGs and QSOs (e.g. Veilleux et al. 2013). While still relatively weak, the strongest correlations between outflow velocities and galaxy properties are found with L AGN and f AGN , suggesting a connection between AGN and wind launching at least in these extreme nearby systems. Figure 6 shows three outflow velocity metrics, v 50 , v 84 , and v max , as a function of L IR , the IR surface density Σ IR , f AGN , and L AGN , where we now compare our highredshift objects to the combined sample of low-redshift ULIRGs and QSOs and nearby AGN-dominated systems (described in detail in Paper I). We distinguish between objects with outflows (filled symbols) and those without (empty), as determined by the original authors. We also note that the subset of low-redshift ULIRGs selected for OH radiative transfer modeling by GA17 is skewed towards sources with the fastest outflows, presumably because these were a more viable sample for multi-transition modeling. We return to this point several more times because it propagates into many of the differences we see with the low-z ULIRGs in other outflow properties as well.
The left column of Fig. 6 first shows outflow velocities as a function of L IR . In agreement with Veilleux et al. (2013), we see no evidence of a correlation in the expanded sample of nearby galaxies. Interestingly, however, we do see hints of a trend within the z > 4 SPT DSFGs when considered alone, with the most luminous sources also driving the fastest outflows. Whether this is a genuine difference between the outflows driven in lowand high-redshift galaxies remains to be seen; a larger sample of high-redshift objects that spans a wider range in L IR and other properties will be required to understand these tentative differences further.
Instead of L IR alone one might instead expect the outflow velocity to depend more strongly on the IR surface density Σ IR (or similarly the SFR surface density), for example in cases in which radiation pressure on dust grains drives the outflows (e.g. Thompson et al. 2015). The second column of Fig. 6 shows these quantities for the low-and high-redshift OH molecular outflow samples, where we have used far-IR sizes measured from Herschel /PACS imaging (Lutz et al. 2016(Lutz et al. , 2018 for the low-redshift samples and the sizes from our lensing reconstructions for the SPT sample (Paper I). We find no convincing evidence of correlation between these quantities even for the SPT sample considered alone.
Of the parameters investigated by Veilleux et al. (2013) for low-redshift ULIRGs and QSOs, the strongest correlations with outflow velocities were found with f AGN and L AGN , 1 which those authors argued could be due to obscuration effects whereby the fastest-moving material was more easily visible in AGN that had already cleared the nuclear regions or were oriented faceon. The subsequent addition of far less luminous AGNdominated systems by Stone et al. (2016) agreed with this picture although the number of sources with definite outflows was small. The third column of Fig. 6 shows outflow velocities against f AGN . We also fit a simple linear function to the low-redshift sources with molecular outflows, finding a marginally significant correlation with v 50 that becomes weaker with v 84 and v max ; the scatter is clearly large. The limits on f AGN for the SPT sources based on rest-frame mid-IR photometry do not clearly result in these objects being outliers, and they certainly would not be outliers even if we have underestimated f AGN by a substantial amount (Section 3).
Finally, the right column of Fig. 6 shows outflow velocities as a function of L AGN , which Stone et al. (2016) find to be strongly correlated in low-redshift sources in agreement with Veilleux et al. (2013). We also see some relationship between these quantities -namely, sources with low AGN luminosities rarely drive fast outflows. However, we note that while the Stone et al. (2016) sample certainly extends the dynamic range in L AGN probed, this now conflates samples selected in very different ways, with many other possible confounding variables (mass, for example). Regardless, we again find that the limits we can place on L AGN for the SPT sample again do not make them obvious outliers.
In summary, among the SPT DSFGs alone, the total L IR appears to be most strongly correlated with outflow velocity, although a larger sample size will be required to investigate whether this is genuine. While we recover correlations previously noted in low-redshift work with our larger combined literature sample, the OH outflow velocities appear to be at best weak indicators of the driving source of molecular outflows, with substantial scatter. While we currently have no evidence for AGN activity in the SPT DSFGs and only weak limits on f AGN , our objects are not obvious outliers in plots of outflow velocity and AGN properties given the substantial scatter seen amongst the low-redshift objects, and we thus cannot rule out that AGN are responsible for driving the molecular outflows we have observed.
Molecular Outflow Rate Scaling Relations
A number of recent works have explored scaling relations between molecular outflow properties and host galaxy properties, compiling samples of now dozens of objects (e.g. Cicone et al. 2014; González-Alfonso et al. 2017; Fluetsch et al. 2019; Lutz et al. 2020). While these studies focused exclusively on low-redshift galaxies, we now include our measurements for the first sample of molecular outflows in the early universe. Our primary comparison samples are the OH-based outflow measurements in nearby ULIRGs from GA17, as before, supplemented with the CO-based sample of Lutz et al. (2020), which extends to lower-luminosity systems. All samples assume the same outflow geometry of Section 2.1.
Figure 6 (caption, partial): outflow velocity metrics v_50, v_84, and v_max as functions of L_IR, Σ_IR, f_AGN, and L_AGN (Stone et al. 2016). Filled symbols indicate sources with outflows and empty symbols those without, as determined by the original authors of each study. The nearby ULIRGs with OH-based radiative transfer models to measure outflow rates (Section 2.2) are highlighted as larger navy circles. Previous low-redshift work indicated that f_AGN is correlated with the outflow velocities, so in the third column we show simple linear fits with 68 percent confidence intervals to the low-redshift objects with outflows. While we currently have no evidence of AGN activity in the high-redshift SPT sample, our objects are not obvious outliers in these plots, suggesting that we cannot rule out AGN as the driving mechanism of the outflows we have observed.
We note that Lutz et al. (2020) found that OH-based outflow rates tended to be ≈0.5 dex higher than CO-based rates in their comparison of galaxies observed in both tracers (while the total outflow masses M out were very similar). Because the CO-based sample spans a different range of parameter space than the other samples, we also comment on how our inferences in this section would change if the CO outflow rates were increased by 0.5 dex. We also detail changes to our interpretation that would result from doubling our present upper limits on f AGN to try to account for the effects of any heavily-obscured AGN that may not be detectable even in the rest-frame mid-IR. It is important to note that none of these samples at any redshift are complete or unbiased; the galaxies typically targeted for molecular outflow observations are highly biased towards luminous star-forming systems and/or quasars.

Figure 7 shows the molecular outflow rate Ṁ out and mass loading factor η out ≡ Ṁ out /SFR as a function of SFR. We find uniformly sub-unity mass loading factors for the high-redshift DSFGs, although the uncertainties of course remain significant. We would still find loading factors ≲ 1 even if we have underestimated our limits on f AGN by a factor of 2 (which would consequently lower the SFR). This is a perhaps surprising result - these galaxies are among the most luminous, highest-SFR objects known, yet drive relatively weaker outflows than many less-luminous nearby galaxies (though again, none of these samples is complete or unbiased). In particular, despite SFRs a few times higher than the low-redshift OH sample of GA17, the outflow rates we derive do not increase accordingly and the loading factors are consequently lower. At least some of this difference is likely due to the selection for fast outflows in the low-redshift work, but because the outflow rate depends only linearly on the velocity this is insufficient to explain the full difference. Our sample does appear, however, to follow the slightly sub-linear relationship seen in some low-redshift studies (e.g. Fluetsch et al. 2019), extended now to an order of magnitude higher SFR and the high-redshift universe. Fitting a power-law to the combined samples, we find a best-fit relationship log(Ṁ out ) = (0.72 ± 0.05) log(SFR) + (0.5 ± 0.1), with Ṁ out and SFR in M ⊙ /yr and an additional intrinsic scatter on the outflow rates of ≈0.25 dex. Thus, we find a transition from η out > 1 to sub-unity values near SFR ∼ 100 M ⊙ /yr. Because of the distribution of the CO-based Lutz et al. (2020) sample in SFR, increasing the CO-based outflow rates by 0.5 dex would further flatten the power-law slope to ≈0.5 but increase the transition SFR at η out = 1 to 500 M ⊙ /yr. On the other hand, if we lowered the SFRs of our sample by doubling f AGN to estimate the effect of possible highly-obscured AGN, the power-law slope would marginally increase to ≈ 0.8.
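To make the bookkeeping above concrete, the sketch below illustrates how mass loading factors and a power-law relation between log(Ṁ out ) and log(SFR) of the form quoted above can be computed. It is a minimal illustration with placeholder numbers, not the measurements or fitting code used in this work, and it omits the intrinsic-scatter term included in the fit above.

```python
# Minimal sketch (not the authors' fitting code): mass loading factors and a
# power-law fit log(Mdot_out) = a*log(SFR) + b. All arrays are placeholders.
import numpy as np
from scipy.optimize import curve_fit

sfr = np.array([80.0, 200.0, 450.0, 900.0, 1500.0])       # M_sun/yr (hypothetical)
mdot_out = np.array([120.0, 180.0, 350.0, 500.0, 800.0])   # M_sun/yr (hypothetical)

eta_out = mdot_out / sfr                                    # mass loading factor

def powerlaw(log_sfr, a, b):
    return a * log_sfr + b

(a, b), cov = curve_fit(powerlaw, np.log10(sfr), np.log10(mdot_out))

# SFR at which the fitted relation crosses eta_out = 1 (Mdot_out = SFR):
# log(SFR) = a*log(SFR) + b  ->  log(SFR) = b / (1 - a)
sfr_transition = 10 ** (b / (1.0 - a))
print(eta_out, a, b, sfr_transition)
```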
Interestingly, we find no evidence that the dominance of an AGN plays a secondary role in setting the outflow rate, once the overall trend with SFR (or L IR ) has been accounted for. The points in Fig. 7 show no clear secondary segregation with f AGN . At low redshift, Fluetsch et al. (2019) did report a secondary dependence of the outflow rate on AGN activity, whereas Lutz et al. (2020) find no such correlation, even though both studies use a largely-overlapping set of literature outflow detections (since we use the Lutz et al. 2020 CO-based literature compilation it is no surprise that we also find no correlation given the fairly small increase in dynamic range in SFR afforded by our sample). While a thorough analysis of this low-redshift discrepancy is beyond the scope of this paper, part of the difference may lie in how the outflow rates were calculated from the CO line wings, as Lutz et al. (2020) included only the wings of the broad CO component while Fluetsch et al. (2019) included the entire broad component (i.e. including emission at systemic velocities that may not actually be part of the outflow).

Figure 8 shows Ṁ out and η out as a function of Σ SFR instead. The SPT sample lies well within the scatter but shows typically lower values of η out at a given Σ SFR compared to the low-redshift samples. As noted above, this is at least partially explained by the overall sub-linear trend between Ṁ out and SFR. Similar to our investigation of outflow velocities with Σ IR above, we again find no significant correlation between these properties, a conclusion that would not change if we increased the CO-based outflow rates by 0.5 dex or adopted 2× higher limits on f AGN for our sample. As noted by Lutz et al. (2020), however, the combined literature sample (and our own, clearly) is not complete in Σ SFR . Severe selection effects stemming from the diverse selection criteria in individual studies comprising the combined literature sample may exist, particularly at low Σ SFR .

Finally, Figure 9 shows Ṁ out and η out as a function of L AGN . As seen in previous works, the low-redshift combined sample shows a clear relationship between Ṁ out and L AGN , with substantial scatter that increases at low L AGN . The distribution of low-redshift objects in this parameter space has been discussed extensively in the literature (e.g. Lutz et al. 2020). For our sample, we find that the limits on L AGN from the available rest-frame mid-IR data do not result in the high-redshift objects being clear outliers in Fig. 9, and they would not be outliers in the event that our limits on f AGN were underestimated by a factor of 2. As before in Section 4.1 but from a different perspective, the outflows we have detected do not require an AGN based on mass loading factor scaling relations, but AGN activity also cannot be ruled out in our objects.
Outflow Masses and Depletion Times
We now turn to estimates of the total molecular gas mass contained within the outflows. As described in Section 2.1, we use Eq. 1 to calculate the outflow masses M out for our sample. For the low-redshift samples, we use the original published masses for both the OH-based and CO-based outflows. Recalculating the masses for the low-redshift samples using our assumed geometry results in <10% differences in the median compared to the published values. Additionally, Lutz et al. (2020) find only a 0.06 dex offset and 0.3 dex dispersion between the masses for the low-redshift sources with outflows observed in both OH and CO. We make use of the total molecular gas masses for the low-redshift samples assembled by the original studies, all of which are based on low-J transitions of CO (CO(3-2) or lower). The conversion factor between CO luminosity and M H2 is known to vary based on various galaxy properties (e.g. Bolatto et al. 2013); we accept the values used by the original studies. For the SPT sample, we assume α CO = 0.8 M ⊙ (K km s −1 pc 2 ) −1 , which we have previously found to be appropriate for the IR-luminous galaxies in our sample (e.g. Spilker et al. 2015;Aravena et al. 2016).

Figure 10 shows the molecular outflow masses as a function of L IR , as well as the fraction of the total galaxy molecular gas mass contained in the outflows. Not unexpectedly, M out is clearly correlated with L IR , as has been previously noted many times in the literature. The z > 4 SPT DSFG sample has molecular outflow masses in the range log M out /M ⊙ = 8.6 − 9.1, unsurprisingly on the high end of the local samples. The masses of the two most intrinsically luminous sources in our sample, SPT2132-58 and SPT2311-54, are perhaps somewhat low in comparison to the extrapolation of the low-redshift samples, but are well within the observed scatter. From the lower panel of Fig. 10, meanwhile, we find that the molecular outflows in our sample contain 1-10% of the total molecular gas masses of the galaxies. These values are well within the range typically seen in low-redshift galaxies. Further, we find no discernible trend between M out /M H2 and L IR despite the increase in dynamic range in L IR afforded by our sample.

Figure 11 shows these same quantities as a function of L AGN . Together, Figures 10 and 11 are effectively the corresponding versions of Figures 7 and 9 for M out instead of Ṁ out . As with the outflow rates previously, the current limits on L AGN for the SPT DSFGs do not make them obvious outliers in Fig. 11. There are no indications from either of these figures that the dominance of the AGN plays any role either in determining M out in general or in defining the scatter in M out at a given L IR or L AGN , as evidenced by the lack of secondary trends in these figures with f AGN .

Meanwhile, Figure 12 compares the molecular gas depletion time scale due to outflows with that due to star formation. Here these depletion time scales are defined as the time it would take for the entire molecular gas reservoir of the galaxies to be removed by outflows or consumed by star formation, assuming the outflow rate or SFR remain constant. That is, t dep, out ≡ M H2 /Ṁ out and t dep, SF ≡ M H2 /SFR. These depletion times are only approximate estimates of the important time scales in the evolution of galaxies, given that both outflows and star formation operate simultaneously (which would give shorter depletion times), we ignore molecular gas destruction due to e.g.
photo-heating or shocks (which would also shorten the depletion times), gas accretion and/or cooling into the molecular phase are neglected (which would give longer depletion times), and Ṁ out and SFR are not in fact constant over time (which could push the depletion times either higher or lower depending on the time variability in Ṁ out and SFR). Note that changing estimates of M H2 for any object in Fig. 12 moves objects diagonally parallel to the one-to-one line, since M H2 is incorporated in both axes.
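A minimal sketch of these depletion-time definitions is given below, with placeholder numbers rather than measured values; it also shows the combined timescale obtained if star formation and the outflow act simultaneously, one of the caveats listed above.

```python
# Minimal sketch of the depletion-time bookkeeping; all inputs are placeholders.
M_H2 = 5e10        # total molecular gas mass [M_sun]
SFR = 1000.0       # star formation rate [M_sun/yr]
Mdot_out = 600.0   # molecular outflow rate [M_sun/yr]

t_dep_SF = M_H2 / SFR          # consumption by star formation [yr]
t_dep_out = M_H2 / Mdot_out    # removal by the molecular outflow [yr]

# Combined timescale if both processes act at once (shorter than either alone)
t_dep_total = M_H2 / (SFR + Mdot_out)
print(t_dep_SF / 1e6, t_dep_out / 1e6, t_dep_total / 1e6)  # in Myr
```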
For the z > 4 sample, we find for all sources that t dep, out > t dep, SF , a straightforward consequence of the sub-unity wind mass loading factors we determine (Section 4.2). This conclusion would also hold if we artificially decrease the SFRs of our sample by doubling f AGN as a crude approximation of the effects of heavily-obscured AGN, though the depletion times would be about equal in that case. As before with η out , this places our sample with a distinct minority of the low-redshift samples. We stress again that all of these samples are biased towards galaxies and quasars that do host powerful outflows, and these results may not hold for objects with less extreme outflows that would be difficult to detect. Unsurprisingly given their very high SFRs and outflow rates, both depletion times are very short, ∼10-100 Myr, and the fact that the two time scales are comparable points to the important role that outflows must play in regulating star formation in these galaxies.

Figure 12. Comparison of the molecular gas depletion timescales due to gas consumption through star formation (t dep, SF ≡ M H2 /SFR) and due to removal via molecular outflows (t dep, out ≡ M H2 /Ṁ out ). Symbols are as in Fig. 7, but are now color-coded by log L IR ; the legend distinguishes the z > 4 SPT DSFGs, nearby ULIRGs (OH-based), and nearby galaxies (CO-based). The solid line indicates the one-to-one relation. Unlike the majority of low-redshift galaxies, we find shorter timescales for gas consumption by star formation than for removal in molecular outflows (a reflection of the sub-unity mass loading factors we find in our sample; Section 4.2).
Outflow Momentum and Energetics
Galactic winds are often classified as either "energy-driven" if radiative losses in the outflowing gas are negligible or "momentum-driven" if they are not (regardless of the ultimate source(s) of the energy driving the wind). In the former case, the outflow is thought to be launched by the adiabatic expansion of a bubble of hot gas (e.g. Chevalier & Clegg 1985;Silk & Rees 1998) that either lofts cold gas entrained in the expanding hot wind or (re-)forms cold gas from the swept-up shocked material at larger radii where the gas can radiatively cool (e.g. Faucher-Giguère & Quataert 2012; Costa et al. 2014;Richings & Faucher-Giguère 2018). Similar to the energy-conserving Sedov-Taylor phase of supernova expansion, the resulting momentum in the outflowing gas can be 'boosted' well above the radiative momentum flux driving the wind. In the momentum-driven case, in which radiative cooling is significant, momentum transferred to the gas from ram pressure or radiation pressure on dust grains results in gentler acceleration that may allow cold gas to reach large radii and high velocities before it is destroyed (e.g. Murray et al. 2005, 2011;Thompson et al. 2016;Brennan et al. 2018). Real winds can of course be intermediate between these cases. For both momentum- and energy-driven winds and for winds driven by AGN or star formation, theoretical models provide estimates of the coupling efficiency between the input momentum and energy and the outflowing gas.
We calculate estimates of the outflow momentum and energy for our SPT sample as described in Section 2.1. For both the low-redshift OH- and CO-based samples we use the original published values of the outflow momentum and energy instead of those derived from our own assumptions in Sec. 2.1. A comparison between the published values and our estimates indicates that we may be overestimating the outflow momentum and energy by ∼30 and 70%, respectively, while we find no systematic difference in the mass outflow rates. While still within the substantial uncertainties, this probably means that the values of v 84 we use in Eq. 2 are higher than the outflow 'characteristic' velocity, and some lower velocity between v 50 and v 84 would provide more accurate outflow energetics. We continue with our use of v 84 ; our conclusions here would be further strengthened if the outflow momentum and energy were systematically lower.
In Figure 13 we show the fraction of the estimated outflow momentum rate compared to the total radiative momentum rate (ṗ rad = L/c) as a function of the estimated luminosities due to AGN and star formation for the sample of molecular outflows assembled from the literature and our high-redshift sample. In momentum-driven winds the AGN may provide a momentum rate up to ∼2L AGN /c, treating both radiation pressure on dust grains and the AGN inner winds as L AGN /c. Meanwhile, a continuous starburst can generate a maximum of ∼3.5L SF /c (e.g. Heckman et al. 2015) through a combination of radiation pressure and the pressure of hot wind material driven by supernova ejecta. As seen in Fig. 13, molecular outflows in low-redshift galaxies frequently show large momentum boosts, ∼2-30 times the radiative momentum provided by the AGN and/or star formation, often taken as evidence that an energy-driven wind phase is required to achieve such large boosts (though see also Thompson et al. 2015, who argue that radiation pressure on dust grains can also achieve large momentum boosts in conditions possibly realized in very dusty and gas-rich galaxies).
We find much more modest momentum ratios in our sample of high-redshift DSFGs, with maximum momentum boosts of ∼2 compared to the luminosity due to star formation, and all sources consistent with no momentum boost above the radiative momentum injection at all. This momentum boost is well within the range achievable by radiation pressure on dust in cases where the effective IR optical depth is of order unity (e.g. Thompson et al. 2015). Further, the momentum injection due to star formation alone is fully consistent with the observed outflow momentum fluxes; no additional radiative momentum from AGN is required. Indeed, it is not clear if the AGN alone could provide sufficient momentum to explain the observed outflows given the current limits on L AGN ; at least some substantial contribution from star formation would be required if AGN are relevant to the outflow energetics. This result would not change if we redistributed the total luminosity arising from the AGN and star formation by doubling f AGN compared to our current limits, although in that case the momentum flux from the AGN would also be sufficient to drive the outflows we observe. All sources would still show momentum boosts ≲ 3.5L SF /c, would still be consistent with momentum-driving due to star formation, and would not show momentum boosts as large as those seen in the local ULIRGs. For our sources to be >1σ inconsistent with the rough maximum ∼3.5L SF /c would require f AGN > 0.8 − 0.99 depending on the source, far above our current limits from the rest-frame mid-IR. Sources with such high f AGN typically show OH solely in emission in nearby objects (see discussion in Stone et al. 2016), while none of our sources show OH in emission. This could be taken as evidence that no source in our sample has f AGN ≳ 0.9.

Figure 13 (caption fragment; symbols and color-coding as in Fig. 7): Horizontal dashed lines indicate the approximate maximum momentum attributable to the AGN or star formation in momentum-driven winds. For the high-redshift SPT DSFGs, unlike in most local ULIRGs, the radiative momentum flux provided by star formation is fully sufficient to explain the observed outflows; neither AGN power nor energy-driven wind phases are required.

Figure 14 shows a similar plot for the outflowing kinetic power, following our outflow calculations in Section 2.1. Hot energy-driven winds from AGN are thought to be capable of supplying up to about ∼5% of the AGN power to the outflows, of which some fraction ∼1/2 can plausibly be converted into bulk kinetic energy in the wind (e.g. King & Pounds 2015;Faucher-Giguère & Quataert 2012). The mechanical luminosity generated by supernovae during a starburst, meanwhile,
may reach ∼2% of the total starburst luminosity, with perhaps ∼1/4 of this luminosity converted into kinetic motion in the ISM (e.g. Veilleux et al. 2005;Harrison et al. 2014). The outflow energetics in many low-redshift molecular winds exceed the expected coupling efficiency to the starburst luminosity while the AGN energetics are in better agreement (Fig. 14). This has been taken as evidence that the AGN must be primarily responsible for driving the low-redshift molecular outflows, and, in combination with the momentum rates in Fig. 13, that these winds must be at least partially energy-driven. In contrast to these low-redshift results, we find that the outflow kinetic energy rates in our z > 4 DSFGs are uniformly below the threshold coupling efficiency for supernova-driven winds, and would still be consistent with this coupling efficiency if we adopt limits on f AGN twice as high (or more) as current data indicate. As with the momentum rates, AGN are not required in order to explain the observed outflow energetics. Moreover, the AGN in our sample could be an order of magnitude less luminous than the current limits without the outflow kinetic power approaching the theoretical maximum ∼few percent of the AGN luminosity.
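As a rough illustration of the momentum and energy bookkeeping discussed above, the sketch below computes an outflow momentum rate ṗ out = Ṁ out v and kinetic power Ė out = ½ Ṁ out v² and compares them to the radiative momentum flux L/c and to the approximate star-formation coupling limits quoted in the text. All input values are hypothetical stand-ins (the velocity plays the role of v 84 ), and this is not the calculation of Section 2.1, which is not reproduced here.

```python
# Minimal sketch of the outflow momentum/energy budget; all values are placeholders.
import astropy.units as u
from astropy.constants import c

Mdot_out = 500.0 * u.Msun / u.yr      # hypothetical outflow rate
v_char   = 500.0 * u.km / u.s         # hypothetical characteristic velocity (~v84)
L_SF     = 5e12 * u.Lsun              # hypothetical star-formation luminosity
L_AGN    = 1e12 * u.Lsun              # hypothetical AGN luminosity upper limit

p_dot_out = (Mdot_out * v_char).to(u.g * u.cm / u.s**2)   # momentum rate
E_dot_out = (0.5 * Mdot_out * v_char**2).to(u.erg / u.s)  # kinetic power

# Momentum "boost" relative to the radiative momentum flux L/c
boost_SF  = (p_dot_out / (L_SF / c)).decompose()
boost_AGN = (p_dot_out / (L_AGN / c)).decompose()

# Rough maxima quoted in the text: ~3.5 L_SF/c for momentum and ~0.5% of L_SF
# (i.e. ~1/4 of the ~2% supernova mechanical luminosity) for kinetic power.
ok_momentum_SF = boost_SF < 3.5
ok_energy_SF   = E_dot_out < 0.005 * L_SF.to(u.erg / u.s)
print(boost_SF, boost_AGN, ok_momentum_SF, ok_energy_SF)
```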
Taken together, we conclude from Figures 13 and 14 that (1) the high-redshift molecular outflows we have observed are fully consistent with expectations for momentum-driven winds, with no need for partially or fully energy-conserving phases, and (2) the observed outflow energetics can be fully explained by the momentum and energy provided by star formation alone in these galaxies, with no need for additional driving by AGN. We emphasize that we do not conclude that AGN are not responsible for driving the observed outflows, merely that AGN are not required to explain the energetics. We note again that these conclusions are further strengthened if we adopt a somewhat lower characteristic velocity in Eq. 2 as it appears may be appropriate by comparison to the OH-based outflow energetics (GA17). Similarly, our conclusions are also not changed in the event that our present limits on f AGN are underestimated by a factor of two (or more) due to AGN so heavily obscured they are not detectable in the mid-IR. In that case either the AGN or star formation could be the ultimate driving source, but the outflow energetics would still not require AGN momentum or energy injection or energy-conserving phases.
Both these results are surprising and counter to the conclusions typically reached in low-redshift studies. Conventional wisdom dictates that AGN are necessary to regulate galaxy growth in massive galaxies, in part due to scaling relations such as that between black hole mass and galaxy or bulge mass. Yet in our high-redshift rapidly star-forming galaxies, we find no need for AGN in order to explain the molecular outflow energetics we have measured. While our sample objects are more luminous than almost all of the low-redshift sources, we have no reason to expect that the outflow energetics should not also increase concomitantly with luminosity. Additionally, the energy-conserving wind mode is generally thought to have the highest coupling efficiency with the ISM, capable of sweeping up a large fraction of the gas in the ISM (e.g. Zubovas & King 2012). Our results, however, show that such high-efficiency energy-driven winds are not necessary to explain the observed outflow momenta and kinetic energy rates in our sample.
The differences between the z > 4 DSFGs and nearby ULIRGs -both with outflow properties from OH spectroscopy -are particularly striking given their general similarities as highly dust-obscured and IR-luminous galaxies. GA17 find that additional energy injection from AGN is required to explain the energetics of most of the low-redshift molecular outflows in ULIRGs (Fig. 14) and that at least partially energy-conserving wind phases are likely necessary to explain the large momentum boosts (Fig. 13). Neither of these appears to be true for the high-redshift DSFG outflows. We also note that a similar conclusion appears to be true for the only other z > 4 object with detected OH absorption, a z = 6.1 quasar where we expect the total luminosity to be dominated by the AGN (HC20) in contrast to our own sample with only upper limits on L AGN . Although an estimate of the AGN and starburst luminosities separately is not available for this source and the OH detection was low-significance, applying the same outflow property calculations to this source as our sample would also place it in the general vicinity of our sample objects as long as f AGN ≳ 0.1, a condition easily met for luminous quasars.
It is tempting to ascribe at least some of the differences we see compared to the nearby ULIRGs to the overall difference in luminosities between the low- and high-redshift sources. Due to observational limitations the high-redshift objects are typically several times more luminous than the low-redshift ULIRGs. Increasing the luminosity of the ULIRGs would move them down and right in Figs. 13 and 14, in the direction that would be required to unify the low- and high-redshift objects. However, this would imply that the outflow momentum rates and kinetic power have essentially reached their maximum in low-redshift ULIRGs and no longer continue to increase in more-luminous systems as observed locally (GA17). It is also possible that the physics of outflows is qualitatively different between the low- and high-redshift samples. Multiple simulation efforts have found that star formation-driven outflows become inefficient in massive galaxies at z ≲ 1, so it could be that the low-redshift samples are predisposed towards AGN-driven winds by virtue of the fact that they have outflows detected at all (e.g. Muratov et al. 2015;Hayward & Hopkins 2017). It is clear that a larger sample at high redshift that spans a wider range in parameter space than current observations will be required to understand the dependencies of outflow energetics on galaxy properties.
There is also a probable selection effect that appears to be at play in the low-redshift sample. As shown in Figure 6, while our sample overlaps with the low-redshift samples by most metrics, the subset of low-z sources selected for detailed OH radiative transfer modeling by GA17 has preferentially higher outflow velocities than the low-redshift sample overall, likely because these sources presented a more tractable sample for their modeling. This may weight the low-redshift sample towards AGN-driven (fast) outflows. Additionally, a bias towards fast winds can sharply skew the outflow energetics because the outflow velocity enters at least linearly in the outflow momentum rates and at least quadratically in the kinetic power (the outflow rates themselves are also proportional to v out ). We thus expect that the local ULIRGs with slower outflows would show substantially lower momentum and kinetic energy outflow rates that extend down to the values we find for the SPT sources. For the majority of nearby ULIRGs, then, we expect that the outflow energetics would also be consistent with momentum-driven winds that do not require additional energy injection from the AGN.
Fate of the Outflowing Gas
The molecular outflows we have observed could plausibly affect the host galaxies over cosmological timescales, especially if large fractions of the cold gas in the outflows travel at sufficiently high velocity to escape the galaxy or even the dark matter halo virial radius. In the latter case, now unbound, the gas may never again be available for star formation. In the former, the gas becomes part of the circumgalactic medium and could recycle back into the galaxy unless continued energy injection or shock heating prevents the gas from cooling and condensing (see Tumlinson et al. 2017, for a recent review).
Figure 14. The ratio of the outflow kinetic power Ė out to the total luminosity of the AGN (left) or star formation (right) as a function of the AGN and star formation luminosity. Symbols and color-coding as in Fig. 7. Horizontal dashed lines indicate the approximate maximum fractions of the AGN or star formation luminosity that can couple to the outflowing gas. For the high-redshift SPT DSFGs, unlike many low-redshift galaxies, the luminosity provided by AGN is not necessary to explain the outflow energetics; the outflows we have observed are fully consistent with the energy input from star formation alone.

We make a simple estimate of the fraction of the outflowing molecular gas that will escape the host galaxies by assuming the outflowing mass as a function of velocity is directly proportional to the equivalent width
as a function of velocity, excluding the absorption components centered on systemic velocities (Paper I). To estimate the galaxy escape velocity for each source, we assume a spherical isothermal mass distribution truncated at a maximum radius r max /r = 10, following Arribas et al. (2014). Because the detection of absorption requires the presence of continuum emission, we take r to be the circularized effective size of the dust emission from our lensing reconstructions, r dust . We estimate galaxy masses from total molecular gas masses based on CO(2-1) observations, assuming a typical gas fraction for DSFGs at these redshifts . These masses are in reasonable agreement with simple dynamical mass estimates using the available [CII] or CO line widths and the lens model sizes (e.g. Spilker et al. 2015). We find escape velocities for our sources ranging from ∼400-1000 km s −1 (median ∼700 km s −1 ), which agree reasonably well with other simple estimates scaling from the CO or [CII] line widths or assuming pointlike mass distributions within r dust . Given the uncertainties in mass and shape of the gravitational potential, we estimate typical uncertainties on the galaxy escape velocities of ≈40%.
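A minimal sketch of this style of estimate is given below, assuming a truncated isothermal sphere with r max /r = 10 (as in Arribas et al. 2014) and an outflow mass profile proportional to the equivalent width in each velocity bin. The galaxy mass, size, and equivalent-width values are placeholders, not the measurements used for our sources.

```python
# Minimal sketch of the escape-velocity and escape-fraction estimate; all
# numerical inputs are hypothetical placeholders.
import numpy as np
import astropy.units as u
from astropy.constants import G

M_gal  = 3e10 * u.Msun     # hypothetical galaxy mass within r_dust
r_dust = 1.5 * u.kpc       # hypothetical circularized dust-emission size

v_circ = np.sqrt(G * M_gal / r_dust).to(u.km / u.s)
# Truncated isothermal sphere: v_esc = sqrt(2*(1 + ln(r_max/r))) * v_circ ~ 2.6 v_circ
v_esc = np.sqrt(2.0 * (1.0 + np.log(10.0))) * v_circ

# Outflow mass per velocity bin, here taken proportional to the blueshifted
# OH equivalent width in each bin (placeholder values).
v_bins = np.arange(100.0, 1100.0, 100.0) * u.km / u.s
ew = np.array([1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.45, 0.4, 0.35, 0.3])

f_escape = ew[v_bins > v_esc].sum() / ew.sum()
print(v_circ, v_esc, f_escape)
```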
In this calculation, we ignore any additional deceleration of the outflow caused by sweeping up additional material. We have also implicitly assumed that the out-flowing material is located at a typical distance from the galaxy center equal to the dust continuum emitting size, which seems reasonable based on our lensing reconstructions of the outflow material (Paper I), but we cannot rule out that much of this material is located deeper within the gravitational potential wells of the host galaxies. Both these effects would lower the fraction of outflowing gas that escapes the galaxies. On the other hand, we also assume that the outflowing gas is no longer being accelerated, which may not be the case if the winds are driven by the outward radiation pressure on dust grains, especially in the event of high far-IR optical depths and/or cosmic ray pressure. This would result in higher outflow escape fractions than our estimates. Figure 15 shows the cumulative outflow mass for each object in our sample as a function of the outflow velocity, normalized to the estimated escape velocity. We find typical galaxy escape fractions ∼20% with large variation within the sample. The three objects with estimated escape fractions >25% are SPT2132-58 and SPT2311-54, which have the fastest outflows of our sample, and SPT2319-55, which has an atypically low mass given its outflow velocity (or an atypically fast outflow given its mass). Only 10% of the outflowing gas is traveling at 1.5 times the escape velocity or faster, and essentially none is traveling at twice the escape velocity. Figure 15 also shows these escape fractions as a function of L IR , now including the local galaxy samples with stellar velocity dispersion measurements available to estimate the escape velocities. The uncertainties on the escape fractions include only those due to the uncertain escape velocities, but do not include the (unknown) contribution from any variations in the equivalent width to outflow mass proportionality (or the CO-H 2 conversion factor for the CO-based masses). We find a very wide range in estimated galaxy escape fractions from ≈0 up to 60%, reflective of the large range in outflow velocities and a rather limited dynamic range in galaxy mass. The escape fraction shows no obvious correlation with L IR or other observables, in agreement with our conclusions in Section 4.1.
The galaxy outflow escape fractions in Fig. 15 are substantially higher than those found by Fluetsch et al. (2019) even for the same objects. As discussed in Section 4.2, this is because those authors count the full broad CO component as belonging to the molecular outflows, thereby including a substantial amount of CO flux at systemic velocities that need not actually be outflowing. This additional flux (and therefore mass) artificially lowers the galaxy escape fractions well below the values we obtain following the more conservative definition of Lutz et al. (2020), who only consider the flux in the broad line wings in the outflow definition (excluding the core emission at systemic velocities). This more conservative definition results in total outflow masses a factor of ≈ 5 lower on average for the CO-based objects in Fig. 15 and consequently higher escape fractions compared to those found by Fluetsch et al. (2019).
While the uncertainties are large, the nearby ULIRGs in Figure 15 tend to show somewhat larger escape fractions than our own sample of high-redshift objects. These sources have a mean and median escape fraction ≈40%, about double that for our own sample. As previously discussed, this is most likely due to the fact that the sources with available OH-based radiative transfer models are preferentially also those with the fastest outflows (Fig. 6). Given the lack of correlation between outflow velocity and stellar velocity dispersion or stellar mass over the limited dynamic range probed by these samples (e.g. Veilleux et al. 2013), this results in outflow escape fractions skewed towards larger values. As in Section 4.4, we expect that a more complete sample of local ULIRGs would show substantially more overlap with the lower escape fractions we find for the high-redshift DSFGs.
The bulk of the molecular gas in the outflows is destined to remain within the galaxies, where it can become available for future star formation through a galactic fountain flow. At least in the cold molecular phase, most of the gas will not be permanently expelled and therefore these outflows cannot really be responsible for the very low gas fractions that are one of the hallmarks of quenched galaxies at lower redshifts (e.g. Young et al. 2011;Davis et al. 2016;Spilker et al. 2018a;Bezanson et al. 2019). Moreover, without continuous injection of thermal energy or turbulence over the long term, the CGM gas will develop a cooling flow resulting in significant gas accretion (e.g. Su et al. 2020).
Implications for Circumgalactic Medium Enrichment
Finally, we consider the impact of the outflowing molecular gas that probably will escape in the context of the CGM surrounding these high-redshift DSFGs. The top panel of Figure 16 shows the mass of the molecular outflows traveling at speeds greater than the galaxy escape velocity in each source. We assume all of this material enters the CGM, ignoring the loss of any material that escapes the larger dark matter halos (we expect this to be an exceedingly small fraction given the outflow velocity distributions in Fig. 15). For most of the SPT DSFGs, we expect a few ×10^8 M ⊙ of the outflowing molecular gas to become incorporated into the CGM of the host halos.
The typical CGM properties of DSFGs are virtually unknown. Based on a sample of 3 z ∼ 2 DSFGs with background quasar sightline absorption spectra, Fu et al. (2016) speculate that the CGM of DSFGs may be less massive and/or that DSFGs inhabit somewhat less massive dark matter halos than co-eval quasars. However, given the much better statistics available for quasars at these redshifts, Fig. 16 shows the typical range of total cool (T ≲ 10^4 K) CGM gas mass within the virial radius of 2 < z < 3 quasar host galaxies thought to reside in log M h /M ⊙ ∼ 12 − 13 mass halos (e.g. Lau et al. 2016).

Figure 15 (caption fragment), Right: Fraction of the outflowing material that will escape into the CGM as a function of L IR . Symbols and color-coding as in Fig. 7. On average only ≈20% of the molecular gas in the z > 4 DSFG winds we have observed is traveling fast enough to leave the host galaxies, but there is wide dispersion in this fraction within the sample.

Given the possible differences between the CGM of DSFGs and quasars and an expectation that the CGM grows in mass from z > 4
to 2.5, we expect this range to be an approximate upper bound on the total cool CGM mass surrounding the higher-redshift DSFGs in our sample. The bottom panel of Fig. 16 shows the total mass in metals being ejected into the CGM, under the simplifying assumption that the molecular outflows have approximately solar metallicity. If, as the outflow energetics suggest (Section 4.4), processes related to star formation are responsible for driving the molecular outflows, we may expect the outflowing gas to be enriched significantly beyond solar, moving the points upwards in the lower panel of Fig. 16. In comparison, the metallicity of the cool CGM gas surrounding 2 < z < 3 quasars is sub-solar, Z ∼ 0.1 − 0.3 Z ⊙ , likely because it is a mixture of metal-enriched outflow gas, less metal-rich material stripped or ejected from infalling satellites, and metal-poor material accreting from the cosmic web (e.g. Muratov et al. 2017;Hafen et al. 2019). The range of total CGM metal mass for the same z ∼ 2 − 3 quasar samples is also shown in Fig. 16.
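The metal-mass bookkeeping behind the lower panel of Fig. 16 amounts to multiplying the escaping gas mass by an assumed metal mass fraction; the back-of-the-envelope sketch below illustrates this, taking Z ⊙ ≈ 0.014 for the solar value and a placeholder escaping mass. The enhanced case simply scales the same number to mimic super-solar enrichment.

```python
# Back-of-the-envelope metal-mass estimate; the escaping mass is hypothetical.
Z_sun = 0.014                  # approximate solar metal mass fraction
M_escape = 4e8                 # escaping molecular gas mass [M_sun] (placeholder)

M_metals_solar = Z_sun * M_escape            # ejected metal mass at Z = Z_sun
M_metals_enhanced = 3.0 * Z_sun * M_escape   # e.g. if the wind is enriched to 3 Z_sun
print(M_metals_solar, M_metals_enhanced)
```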
Taken together, the two panels of Figure 16 give an intriguing (if admittedly speculative) picture of the relationship between molecular outflows and the CGM surrounding these galaxies. If the DSFGs in our sample will evolve to become like the quasars observed at slightly lower redshift, the current molecular outflow episodes will contribute only a small fraction, ∼1-10%, of the total cool CGM mass. Evidently the total CGM mass must be assembled from some combination of outflowing gas in warmer phases than we have observed, many repeated outflow events through the lifetime of the galaxies, and accretion of additional gas into the CGM from the cosmic web or infalling satellites. While observations of the multi-phase components of outflows are rare even in the nearby universe, it appears that in general the molecular phase contains a significant if not dominant portion of the total outflow mass (Fluetsch et al. 2019), so additional mechanisms beyond accounting for the unobserved warmer phases are likely required. On the other hand, the current outflow episodes can contribute some substantial fraction ∼10% or more of the total metals present in the CGM at lower redshift. This fraction would rise further if the outflowing molecular gas is enriched beyond solar metallicity. X-ray observations of the hot plasma in nearby winds typically find α/Fe elemental abundance patterns (i.e. including oxygen, of relevance to our OH observations) enhanced to several times the solar value (e.g. Nardini et al. 2013;Veilleux et al. 2014;Liu et al. 2019), though the composition of the molecular gas in outflows is unknown, even at low redshift.
The outflow metallicities of these highly-obscured galaxies are conceivably observable with future observations of far-IR fine structure lines (e.g. Nagao et al. 2011;Pereira-Santaella et al. 2017). Indeed, the [CII] 158 µm line has recently been detected on 10-30 kpc spatial scales surrounding co-eval lower mass galaxies through stacking and, in a few cases, direct individual detections (Fujimoto et al. 2019, 2020;Ginolfi et al. 2020). These studies conclude that metal-enriched outflows are the most likely source of the extended [CII] emission, as generally expected from simulations (e.g. Muratov et al. 2015;Hayward & Hopkins 2017;Pizzati et al. 2020).

Figure 16 (caption): Estimates of the molecular gas mass (upper panel) and metal mass (lower panel) contained in the observed z > 4 SPT DSFG outflows that will escape the host galaxies and enter the surrounding CGM, assuming solar metallicity for the lower panel. For comparison, the grey shaded regions show the estimated total cool (T ≲ 10^4 K) CGM mass and metal mass surrounding quasar host galaxies at slightly lower redshifts (e.g. Lau et al. 2016). The molecular phase of the outflow episodes we have observed conceivably contributes ∼10% of the total metals contained in the CGM at later times but only a small fraction of the total cool gas.
CONCLUSIONS
This work has focused primarily on deriving the physical properties of the largest sample of molecular outflows in the early universe to date. These outflows, detected with ALMA as blueshifted absorption line wings in the ground-state OH 119 µm doublet, appear ubiquitous among massive, IR-luminous DSFGs at z > 4. We rely heavily on observations of outflows in low-redshift galaxies with much richer OH spectroscopic data available, which we use as a 'training set' of objects to derive outflow rates for our high-redshift sample with only the ground-state OH lines observed. Comparing four methods for estimating outflow rates, we find agreement at the factor-of-two level. Future improvements in the outflow rate estimates will require either observations of shorter-wavelength OH lines (e.g. the 79 µm doublet) and/or the much less abundant 18 OH isotopologue, both of which have far lower line opacities than the 119 µm doublet currently available. Though the uncertainties on the outflow rates (and therefore the other outflow properties derived from the outflow rates) are large, we draw a number of conclusions from this first high-redshift outflow sample:

• We find tentative evidence that the outflow velocity correlates with L IR within the z > 4 sample (Fig. 6 and Section 4.1). The same is not true for the combined low-redshift galaxies with OH data. A larger sample at high redshift will be necessary to determine whether there is a legitimate difference between outflows in low- and high-redshift objects.
• We find high molecular outflow rates Ṁ out ranging from ∼150-800 M ⊙ /yr. This was not unexpected given the high IR luminosities of our sample. The wind mass loading factors are nevertheless slightly less than unity. The mass loading factors do not clearly correlate with any other quantity including SFR or Σ SFR . Gas consumption by star formation is more important than gas removal by outflows in regulating the molecular gas reservoirs of these objects (Figs. 7 and 12, Sections 4.2 and 4.3).
• The cold molecular mass of the outflows is also high, log M out /M ⊙ ≈ 8.5 − 9. This still only represents 1-10% of the total molecular gas mass of these gas-rich massive galaxies (Fig. 10 and Section 4.3).
• We find only very modest momentum boosts in the outflows compared to the radiative momentum, ṗ out /(L/c) < 3. These boosts are fully achievable by winds driven either by supernovae or radiation pressure on dust grains. The outflow kinetic energy fluxes, similarly, are always less than the expected maximum values for outflows driven by star formation. There is no need for partially or fully energy-conserving wind phases (Figures 13 and 14, Section 4.4).
• Following the previous conclusion, the outflows we have observed do not require an additional injection of momentum or energy from AGN in these galaxies. While we currently have no evidence for AGN activity in our sample objects, with limits from rest-frame mid-IR photometry, we cannot rule out that deeply buried AGN are present. The outflow energetics, however, do not require AGN as the primary driving source.
• We estimate that ≈20% of the gas in the molecular outflows is traveling fast enough to escape the galaxies and enter the CGM, on average, though with large uncertainties and a range from 0-50% within the sample. While an admittedly more speculative conclusion, we find that the molecular material moving fast enough to escape the galaxies represents only a small fraction of the total cool CGM mass but perhaps 10% of the metal mass observed in the CGM of massive halos at slightly lower redshifts (Figures 15 and 16, Sections 4.5 and 4.6).
While we have presented the largest currently available sample of molecular outflows at z > 4, it is by no means a cleanly selected or complete sample; our primary selection criterion was merely that the redshift of each target place the OH 119 µm lines in an atmospheric window for ALMA observations. Given the high success rate in detecting outflows in these galaxies, we hope to have motivated future observations of samples that span a wider range in galaxy properties in order to build a more comprehensive view of the statistical properties of molecular outflows in the early universe. The physical properties derived for the outflows assembled from our present sample and future samples will provide invaluable constraints for simulations of galaxy evolution, tracking the prevalence and consequences of molecular outflows through the history of the universe. | 2020-10-27T01:01:16.955Z | 2020-10-23T00:00:00.000 | {
"year": 2020,
"sha1": "183d1adc063166a1e4ac956b3dc11819b30b1eb3",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/abc4e6/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "4b2ec49a86f0309530bc6c4060942cc054e70718",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
218470561 | pes2o/s2orc | v3-fos-license | Multi-View Self-Attention for Interpretable Drug-Target Interaction Prediction
The drug discovery stage is a vital part of the drug development process and forms part of the initial stages of the development pipeline. In recent times, machine learning-based methods are actively being used to model drug-target interactions for rational drug discovery due to the successful application of these methods in other domains. In machine learning approaches, the numerical representation of molecules is vital to the performance of the model. While significant progress has been made in molecular representation engineering, this has resulted in several descriptors for both targets and compounds. Also, the interpretability of model predictions is a vital feature that could have several pharmacological applications. In this study, we propose a self-attention-based, multi-view representation learning approach for modeling drug-target interactions. We evaluated our approach using three large-scale kinase datasets and compared six variants of our method to 16 baselines. Our experimental results demonstrate the ability of our method to achieve high accuracy and offer biologically plausible interpretations using neural attention.
Introduction
In the pharmaceutical sciences, drug discovery is the process of elucidating the roles of compounds in bioactivity for developing novel drugs. The drug discovery stage is a vital part of the drug development process and forms part of the initial stages of the development pipeline. In recent times, traditional in vivo and in vitro methods for analyzing bioactivities have been enhanced with automated methods such as large-scale High-Throughput Screening (HTS). The automation is motivated by the quest to reduce the cost and time-to-market challenges that are associated with the drug development process. The cost of developing a single drug is estimated to be 1.8 billion US dollars, and development could take 10-15 years to complete [1]. While HTS provides a better alternative to wet-lab experiments, it is time-consuming (takes about 2-3 years) [2] and requires advanced chemogenomic libraries. Also, with HTS, an exhaustive screening of the known human proteome and the ∼10^60 synthetically feasible compounds is intractable [3,4]. Additionally, HTS has a high failure rate [5].
Lately, the availability of large-scale chemogenomic and pharmacological data (such as DrugBank [6], KEGG [7], STITCH [8], ChEMBL [9], Davis [10], KIBA [11], and PubChem [12]), coupled with advances in computational resources and algorithms, has engendered the growth of the in silico (computer-based) Virtual Screening (VS) domain. In silico methods have the potential to address the challenges mentioned above that plague HTS due to their ability to analyze assay data, unmask inherent relationships, and exploit such latent information for drug discovery tasks [13].
In VS, data-driven models are used to examine and predict Drug-Target Interactions (DTI) to systematically guide subsequent HTS or in vitro validation methods. DTI research using VS methods has applications in drug side-effect studies [14] and could be a key contributor to developing personalized medications [15] and to drug repurposing [16]. Also, it is worth noting that the use of in silico methods to optimize the drug development process could reduce healthcare costs and improve the accessibility of healthcare services.
Consequently, there are several in silico proposals in the literature about DTI prediction [3]. In terms of data usage, structure-based methods, ligand-based approaches, and Proteochemometric Modeling (PCM) constitute the taxonomy of existing in silico DTI studies. Structure-based methods use the 3D conformations of targets and compounds for bioactivity studies. Docking simulations are well-known instances of structure-based methods. Since the 3D conformations of several targets, such as G-Protein Coupled Receptors (GPCR) and Ion Channels (IC), are unknown, structure-based methods are limited in their application.
They are also computationally expensive since a protein could assume multiple conformations depending on its rotatable bonds [3]. Ligand-based methods operate on the assumption that similar compounds would interact with similar targets and vice-versa, tersely referred to as 'guilt-by-association.' Hence, ligand-based methods perform poorly when a target has few known binding ligands (< 100). The same applies in reverse.
On the other hand, PCM or chemogenomic methods, proposed in [17], model interactions using a drug (compound)-target (protein) pair as input. Since PCM methods do not suffer from the drawbacks of ligand-based and structure-based methods, there have been many studies in using such chemogenomic methods to study DTIs [18,19,20]. Also, PCM methods can use a wide range of drug and target representations. Qiu et al. provide a well-documented account of the growth of the PCM domain [21].
As regards computational methodologies, Chen et al. categorize existing models for DTI prediction into Network-based, Machine Learning (ML)-based, and other models [22]. Network-based methods approach the DTI prediction task using graph-theoretic algorithms where the nodes represent drugs and targets while the edges model the interactions between the nodes [23]. As a corollary, the DTI prediction task becomes a link prediction problem. While network-based methods can work well even on datasets with few samples, they do not generalize to samples out of the training set, among other shortcomings.
ML methods tackle the DTI prediction problem by training a parametric or non-parametric model iteratively with a finite independent and identically distributed training set made up of drug-target pairs using supervised, unsupervised, or semi-supervised algorithms. Probabilistic Matrix Factorization (MF) of an interaction matrix and certain forms of similarity-based methods also exist in the domain [24,25]. Rifaioglu et al. [3], in their analysis of recent progress of in silico methods, show that researchers in the domain are increasingly studying supervised ML methods. In this context, similarity-based and feature-based methods have been the main ML approaches. Similarity-based methods leverage the drug-drug, target-target, and drug-target similarities to predict new interactions [26,27,28].
Feature-based methods represent each drug or target using a numerical vector, which may reflect the entity's physicochemical and molecular properties.
These feature vectors are used to train an ML model to predict unknown interactions. Sachdev et al. provide a thorough discussion of the feature-based DTI methods [29]. Additionally, some proposals combine feature-based and similarity-based methods to model interactions [30,31]. Due to the recent success of the Deep Learning (DL) domain, a form of ML, in areas such as computer vision [32] and Natural Language Processing (NLP) [33], recent feature-based approaches have mainly been DL algorithms [34,35,36,37,2,15].
In feature-based methods, the construction of numerical vectors from the digital forms of drugs or targets is significant. This process is called featurization. The 2D structure of a compound can be represented using a line notation algorithm, such as the Simplified Molecular Input Line Entry System (SMILES) [38]. Likewise, a target can be encoded using its amino acid sequence.
The compound and target features can then be computed using libraries such as RDKit [39] and ProPy [40], respectively. While Wen et al. draw a line between descriptors and fingerprints, we refer to both as descriptors herein since they can be composed to form molecular representations [14].
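As an illustration of this featurization step, the sketch below computes a Morgan (ECFP-like) circular fingerprint from a SMILES string with RDKit. The SMILES shown (caffeine) is only an example, and the radius and bit-vector length are common but arbitrary choices rather than settings prescribed by any particular study.

```python
# Minimal sketch of compound featurization from a SMILES string using RDKit.
from rdkit import Chem
from rdkit.Chem import AllChem
import numpy as np

smiles = "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"   # caffeine, used only as an example
mol = Chem.MolFromSmiles(smiles)
fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)  # ECFP4-like
x = np.array(fp)    # 2048-dimensional binary feature vector for an ML model
print(x.shape, int(x.sum()))
```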
While significant progress has been made in molecular representation engineering, this has resulted in several descriptors for both targets and compounds [41,3,42]. Since the choice of descriptors or features significantly affects model skill, there is an inexorable dilemma for researchers in feature selection [43,44]. In some instances, the performance of molecular descriptors tends to be task-related [45], and different descriptors can offer complementary behaviors [46,47,42]. Therefore, the integration of these predefined descriptors is common and espoused by researchers to construct joint molecular views [48,3]. Although these descriptors tend to provide domain-related information, their predefined nature means they are unable to establish a closer relationship between the input and output space concerning the task at hand.
Indeed, several algorithms have been proposed over the past few years to learn compound and target features directly from their sequences, 2D, or 3D forms [49,34,35,36,50,51,2,15] using backpropagation. It has been shown that DTI models constructed in such a manner usually outperform predefined descriptors or provide competitive results [35,52,53]. Nonetheless, the proliferation of these end-to-end descriptor learning methods only exacerbates the dilemma mentioned above since these studies also demonstrate the capabilities of predefined methods such as the Extended Connectivity Fingerprint (ECFP) [54] method.
In another vein, most of the existing DTI studies in the literature have formulated the DTI prediction task as a Binary Classification (BC) problem. However, the nature of bioactivity is continuous. Also, DTI depends on the concentration of the two query molecules and their intermolecular associations [55]. Indeed, it is rare to have a ligand that binds to only one target [3]. While the binary classification approach provides a uniform approach to benchmark DTI proposals in the domain using the GPCR, IC, Enzymes (E), and Nuclear Receptor (NR) datasets of [23], treating DTI prediction as a binding affinity prediction problem leads to the construction of more realistic datasets [56,11]. Accordingly, the Metz [57], KIBA [11], and Davis [10] datasets serve as the benchmark datasets for regression-based DTI proposals, and their output values are measured in inhibition constant (K i ), the KIBA metric [11], and dissociation constant (K d ), respectively. Another significant feature of the regression-based datasets is that they do not introduce the class-imbalance problems seen with the BC datasets mentioned above. The BC-based algorithms typically address the class-imbalance problem using sampling techniques [42] or assume samples without reported interaction information to be non-interacting pairs. We argue that predicting continuous values enables the entire spectrum of interaction to be well-captured in developing DTI prediction models.
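Because the raw affinities in such regression datasets span many orders of magnitude, they are commonly placed on a log scale before modeling. The sketch below shows one such common transformation, converting K d in nM to pK d ; the exact preprocessing used by any particular benchmark or study may differ.

```python
# Common log-transform of binding affinities; shown for illustration only.
import numpy as np

def kd_to_pkd(kd_nM):
    """Convert a dissociation constant in nM to pKd = -log10(Kd in molar)."""
    return -np.log10(np.asarray(kd_nM, dtype=float) * 1e-9)

# Stronger binding (smaller Kd) maps to a larger pKd value.
print(kd_to_pkd([10000.0, 100.0, 1.0]))   # -> [5. 7. 9.]
```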
Furthermore, since in silico DTI models are typically not replacements for in vitro and in vivo validations, interpretability of their predictions is vital to guiding domain experts toward realizing the aforementioned benefits of advances in the domain. However, the application of multiple levels of non-linear transformation of the input means that DL models do not lend themselves easily to interpretation. In some studies, less powerful alternatives such as decision trees and L 1 regularization of linear models have been used to achieve interpretability of prediction results [58,59]. Recent progress in pooling and attention-based techniques [33,60,61] has also aided the ability to gain insights into DL-based prediction results [62,15]. We posit that such attention-based mechanisms offer a route to provide biologically plausible insights into DL-based DTI prediction models while leveraging the strength of DL models. Also, since attention-based methods can learn rich molecular representations, they could facilitate accurate predictions in other domains such as ligand-catalyst-target reactions [3].
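To make the attention mechanism referred to here concrete, the sketch below implements generic scaled dot-product self-attention in NumPy; the attention weight matrix is the quantity typically inspected for interpretation. This is a generic illustration of the mechanism, not the specific architecture proposed in this study.

```python
# Generic scaled dot-product attention (Vaswani et al., 2017-style) in NumPy.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_q, n_k) similarity scores
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)              # softmax over keys
    return w @ V, w                                    # context vectors, attention weights

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 16))    # e.g. 6 substructure/residue embeddings (hypothetical)
ctx, attn = scaled_dot_product_attention(X, X, X)      # self-attention over one view
print(ctx.shape, attn.shape)    # (6, 16), (6, 6); attn rows can be inspected/visualized
```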
To this end, our contributions to the domain are as follows:

• We propose a multi-view attention-based architecture for learning the representation of compounds and targets from different unimodal descriptor schemes (including end-to-end schemes) for DTI prediction.
• Our usage of neural attention enables our proposed approach to lend itself to the interpretation and discovery of biologically plausible insights in compound-target interactions across multiple views.
• We also experiment with several baselines and show how these seemingly different compound and target featurization proposals in the literature could be aggregated to leverage their complementary relationships for modeling DTIs.
The rest of our study is organized as follows: section 2 discusses the related work and baseline models of our study; section 3 describes the various featurization methods we use and our proposed architecture. The experiments we conducted are described in section 4, and we discuss the results in section 5.
Finally, we conclude our work in section 6.
Related Work
In silico methods provide a promising route to tackle some critical challenges in drug discovery effectively. Over the last decade, several studies have been conducted in modeling interactions, which has led to substantial progress in DTI prediction and other related tasks. We review some of these notable works which relate to our study in what follows.
One of the seminal works on integrating unimodal representations of drugs and targets is [63]. The authors note that the challenges with DTI prediction mean that the development of models that can leverage heterogeneous data is vital to the domain. Hence, the chemical space, genomic space, and pharmacological space are integrated. Subsequently, the compound-target pairwise relationships are studied using network or graph analysis. Shi et al. [26] also augment similarity information with non-structural features to perform DTI prediction using a network-based approach.
Additionally, Luo et al. [64] argue that multi-view representations enable modeling of bioactivities using diverse information. As a result, a DTI model is proposed in [64] that learns the contextual and topological properties of drug, disease, and target networks. Likewise, Wang et al. [65] also propose a random forest-based DTI prediction model that integrates features from drug, disease, target, and side-effect networks learned using GraRep [66]. These network-based DTI models are not scalable to large datasets and cannot be used on samples outside the dataset.
Also, other researchers have adopted collaborative filtering methods to predict DTIs. In [67], the authors propose a Matrix Factorization (MF) method for predicting the probability that a compound would interact with a given target.
Noting that traditional MF methods are unable to detect nonlinear properties, a deep MF (DMF) method is proposed in [68]. The DMF approach first constructs negative samples using a K-Nearest Neighbor (kNN) method and then builds an interaction matrix. The rows and columns of the interaction matrix then serve as the features of drugs and targets in a DL model, which finds the low-rank decomposition of the interaction matrix.
Similarly, Yasuo et al. [69] use a probabilistic MF approach to decompose an interaction matrix into a target-feature matrix and a feature-ligand matrix.
While these DL-based MF methods are able to learn nonlinear properties, viewing DTI prediction as a BC problem, as seen in these works, does not address the entire spectrum of bioactivity. In [70], the graph-regularized MF approach of [16] is also extended to a multi-view approach that integrates both chemical and structural views of compounds and targets. As mentioned earlier, in the BC setting, true negatives are mostly lacking, and using kNN, as in [68], introduces arbitrariness in determining negative samples.
On the other hand, similarity-based ML methods have also been proposed for DTI prediction. In this setting, compound and target similarity matrices are constructed and used in kernel-based algorithms such as Support Vector Machines (SVM) [71,72], and other well-known ML algorithms such as kNN and Regularized Least Squares (RLS). While compound similarities are typically constructed by considering their topological and chemical properties [73], target similarities are usually computed using metrics such as the Smith-Waterman (SW) score, which considers the alignment between sequences [74]. Nonetheless, these approaches use the BC problem formulation. Conversely, the work in [55] proposed a Kronecker RLS (KronRLS) method that predicts binding affinity measured in K d and K i .
Concerning ensemble ML algorithms, SimBoost is proposed in [31] as a GBT-based DTI prediction model. While KronRLS is a linear model, SimBoost can learn non-linear properties for predicting real-valued binding affinities. While [31] uses a feature-engineering step to select compound-target features for GBT training, the work in [42] integrates different representations of a target and uses a feature-selection algorithm to construct representations for GBT training. The work in [75] also proposes a feature-selection method for determining feature-subspaces for GBT training. Additionally, [76] proposes an AdaBoost model for DTI prediction. However, as noted in [77], Boosting methods are not well-suited for predicting probabilities.
In another vein, several DL methods have been proposed to learn the features of compounds and targets for DTI prediction [50,34,36,35], whereas others have proposed DL models that take predefined features as inputs. The work in [14] proposed a deep-belief network to model interactions using ECFP and Protein Sequence Composition (PSC) of compounds and targets, respectively. [78] also propose a DTI model that uses generative modeling to oversample the minority class in order to address the class imbalance problem. In [2], the sequence of a target is processed using a Convolutional Neural Network (CNN), whereas a compound is represented using its structural fingerprint. The compound and target feature vectors are concatenated and serve as input to a fully connected DL model. Using CNN means the temporal structure in the target sequence is sacrificed to capture local residue information.
In contrast, [62] used a Recurrent Neural Network (RNN) and Molecular Graph Convolution (MGC) to learn the representations of targets and compounds, respectively. These representations are then processed by a siamese network to predict interactions. A limitation of the approach in [62] is that extending it to multi-task networks requires training several siamese models. While all these works formulate DTI prediction as a BC problem, [56] proposes a DL model that predicts binding affinities given compound and protein encodings. The work in [15] also proposed a self-attention based DL model that predicts binding affinities. Using self-attention enables atom-atom relationships in a molecule to be adequately captured. Nonetheless, these studies do not leverage other unimodal representations of compounds and targets. Also, they do not adopt the split schemes proposed in [55] for developing chemogenomic models.
In what follows, we provide an introduction to the existing regression ML models for DTI prediction that are used as baselines in this study for completeness.
KronRLS
The KronRLS method proposed in [55] is a generalization of the RLS method in which the data are assumed to consist of pairs (compounds and targets, in this case). It is a kernel-based approach for predicting the binding affinity between a compound-target pair. Specifically, given a set of compound-target pairs X = {x_1, x_2, ..., x_m} as training data with their corresponding binding-affinity values Y = {y_1, y_2, ..., y_m}, 1 ≤ i ≤ m, KronRLS learns a real-valued function f(x) that minimizes the objective

$$J(f) = \sum_{i=1}^{m} \left( y_i - f(x_i) \right)^2 + \lambda \lVert f \rVert_k^2,$$

where λ is a regularization parameter and ||f||_k is the norm of the minimizer f associated with the kernel k. Based on the representer theorem, [55] defines the minimizer as

$$f(x) = \sum_{i=1}^{m} a_i\, k(x, x_i),$$

where the kernel function k is a symmetric similarity measure between two compound-target pairs. Given a dataset of m samples spanning all compound-target pairs, the pairwise kernel matrix can be written as the Kronecker product K = K_c ⊗ K_t, where K_c and K_t are the kernel matrices of the compounds and targets, respectively. In this context, the parameters a_i of f can be determined in closed form by solving a system of |C||T| linear equations,

$$\left( K_c \otimes K_t + \lambda I \right) a = y,$$

where C is the set of compounds, T is the set of targets, a ∈ R^m, y ∈ R^m, and I is the identity matrix.
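To make the closed-form solution concrete, the following is a minimal NumPy sketch of grid KronRLS, assuming a complete compound-target affinity matrix and precomputed kernel matrices (the function name and shapes are ours, not from [55]); it exploits the eigendecomposition of the two kernels rather than forming K_c ⊗ K_t explicitly.

```python
import numpy as np

def kronrls_fit_predict(Kc, Kt, Y, lam=1.0):
    """Grid KronRLS sketch: Kc (nc x nc), Kt (nt x nt), Y (nc x nt) binding affinities."""
    dc, Vc = np.linalg.eigh(Kc)                              # eigendecompose the compound kernel
    dt, Vt = np.linalg.eigh(Kt)                              # eigendecompose the target kernel
    Y_tilde = Vc.T @ Y @ Vt                                  # labels in the joint eigenbasis
    A = Vc @ (Y_tilde / (np.outer(dc, dt) + lam)) @ Vt.T     # dual coefficients a (matrix form)
    return Kc @ A @ Kt                                       # predicted affinities for every pair

# Toy usage with random positive semi-definite kernels and a random label grid
rng = np.random.default_rng(0)
C, T = rng.random((20, 8)), rng.random((15, 6))
F = kronrls_fit_predict(C @ C.T, T @ T.T, rng.random((20, 15)), lam=0.5)
```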
SimBoost
SimBoost, proposed in [31], is a gradient boosting approach to predict the binding affinity between a compound and a target. The authors propose three types of features to construct the feature vector x_i of a given compound-target pair. The predicted affinity is an additive ensemble of regression trees,

$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \qquad f_k \in \mathcal{F},$$

where F is the space of all possible trees and K is the number of regression trees. Using the additive ensemble training approach, the set of trees {f_k} is learned by minimizing the following regularized objective:

$$L = \sum_{i} l(\hat{y}_i, y_i) + \sum_{k=1}^{K} \Omega(f_k),$$

where Ω determines model complexity to control overfitting, l(·) is a differentiable loss function which evaluates the prediction error, and y_i is the true binding affinity corresponding to x_i.
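As an illustrative sketch (not the authors' full pipeline, which also engineers similarity-network and matrix-factorization features), a gradient-boosted regressor over per-pair feature vectors can be set up with XGBoost as follows; the feature matrix and affinities are random placeholders.

```python
import numpy as np
import xgboost as xgb

# Placeholder per-pair features; in SimBoost these would combine per-compound,
# per-target, and compound-target network/latent features.
rng = np.random.default_rng(0)
X = rng.random((1000, 64))
y = 5.0 + 4.0 * rng.random(1000)          # placeholder binding-affinity labels

model = xgb.XGBRegressor(n_estimators=400, max_depth=6, learning_rate=0.05,
                         objective="reg:squarederror")
model.fit(X, y)
y_hat = model.predict(X)
```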
PADME
In [52], PADME is proposed to model DTIs. The authors propose two variants of PADME: PADME-ECFP and PADME-GraphConv. The former variant constructs feature vectors of compounds using the ECFP scheme, whereas the latter learns the representations of compounds using Molecular Graph Convolution [37]. On the other hand, targets are represented using PSC [40]. After that, for a given compound-target pair, the feature vector x_i ∈ R^d is constructed by concatenating the compound and target representations, and a feed-forward network is trained to minimize the regularized squared error

$$L(\theta) = \sum_{i} \left( y_i - f(x_i; \theta) \right)^2 + \lambda \lVert \theta \rVert_2^2,$$

where f(x_i; θ) outputs ŷ_i as the predicted value using parameters θ, and λ is a regularization parameter to control overfitting.
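A minimal PyTorch sketch of a PADME-style single-view regressor is given below; the layer sizes and input dimensions are illustrative assumptions, and the L2 penalty is expressed through the optimizer's weight decay rather than an explicit term.

```python
import torch
import torch.nn as nn

class SingleViewRegressor(nn.Module):
    """Feed-forward DTI regressor over a concatenated compound||target feature vector."""
    def __init__(self, comp_dim=1024, targ_dim=8420, hidden=512):   # illustrative dims
        super().__init__()
        self.net = nn.Sequential(nn.Linear(comp_dim + targ_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, comp_vec, targ_vec):
        return self.net(torch.cat([comp_vec, targ_vec], dim=-1)).squeeze(-1)

model = SingleViewRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)  # weight decay ~ lambda
loss_fn = nn.MSELoss()
pred = model(torch.rand(4, 1024), torch.rand(4, 8420))   # toy batch of 4 compound-target pairs
```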
IVPGAN
In our previous study [53], we proposed IVPGAN to predict DTIs using a multi-view approach to represent a compound and PSC to construct the target feature vector. While ECFP is used to represent predefined compound features, MGC is used to learn the representation of a compound given the graphical structure encoded in its SMILES notation. Using an Adversarial Loss (AL) training technique, an objective combining a Mean Squared Error (MSE) term, denoted L_MSE^G, and an adversarial term, denoted L_AL^G, is minimized; θ_f and θ_g are trainable parameters, g(·) ∈ R^d, [···] is a concatenation operator, ||·||_k^2 is a norm operator, λ is a hyperparameter that is used to control the combination of the MSE and AL objectives, and β is a regularization parameter that controls overfitting. L_MSE^G is the MSE objective of the DTI prediction model, which is treated as the generator of a Generative Adversarial Network (GAN). L_AL^G is the generator objective component of the GAN, whose discriminator objective compares the distributions p and G derived from the neighborhood alignment matrices constructed from the labels and the predicted values, respectively, as explained in [53].
Problem Formulation
We consider the problem of predicting a real-valued binding affinity y_i between a given compound c_i and target t_i. The compound c_i takes the form of a SMILES [38] string, whereas the target t_i is encoded as an amino acid sequence. The SMILES string of c_i is an encoding of a chemical graph structure G_i = (V_i, E_i), where V_i is the set of atoms constituting c_i and E_i is a set of undirected chemical bonds between these atoms. Therefore, each data point in the training set is a tuple <c_i, t_i, y_i>. In this study, we refer to the SMILES of a compound and the amino acid sequence of a target as the 'raw' forms of these entities, respectively.
In order to use the compounds and targets in VS models, their respective raw forms have to be quantized to reflect their inherent physicochemical properties.
Accurately representing such properties is vital to reducing the generalization error of VS models [3]. We discuss the featurization methods considered in our study in sections 3.2 and 3.3.
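For concreteness, a small sketch of turning one training tuple into the graph form described above is given below; it assumes RDKit as the SMILES parser, which the text does not mandate, and the example compound and target strings are hypothetical.

```python
from rdkit import Chem

def smiles_to_graph(smiles):
    """Parse a SMILES string into (atom symbols, undirected bond list)."""
    mol = Chem.MolFromSmiles(smiles)
    atoms = [atom.GetSymbol() for atom in mol.GetAtoms()]                            # V_i
    bonds = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx()) for b in mol.GetBonds()]       # E_i
    return atoms, bonds

# One hypothetical data point <c_i, t_i, y_i>
compound, target, affinity = "CCO", "MKTAYIAKQRQISFVKSHFSRQ", 7.2
print(smiles_to_graph(compound))
```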
Extended Connectivity Fingerprint
The ECFP algorithm is a state-of-the-art circular fingerprint scheme for numerically encoding the topological features of a compound [54]. We use the implementation of the ECFP algorithm provided in [39] in our study.
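The exact implementation referenced as [39] is not spelled out here; as an assumed stand-in, RDKit's Morgan fingerprint (radius 4 corresponds roughly to ECFP8) can produce the kind of fixed-length bit vector that the predefined-feature models consume.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")                       # aspirin as an example
fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=4, nBits=1024)   # roughly ECFP8
ecfp = np.array(list(fp))                                               # 1024-d binary feature vector
```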
Molecular Graph Convolution
Motivated by recent progress in end-to-end representation learning, MGC is a class of algorithms that, for a given layer, apply the same differentiable function to the atoms of a molecule to learn the features of the molecule from its raw form. This operation is akin to the use of kernels in the CNN architecture.
Also, information about distant atoms is propagated radially through bonds, as found in circular fingerprints. Thus, composing several layers facilitates the learning of useful representations that are related to the learning objective. The earliest form of MGC is the work in [49]. It has been used in a notable number of studies and in various forms, such as that of [62], to model bioactivity. In [37], graph pool and graph gather operations are proposed to augment the neural graph fingerprints algorithm of [49]. Recent progress in the domain has also produced other forms of MGCs [50]. In our study, we use the GraphConv algorithm proposed by [37]. Atom vectors are initialized using predefined physicochemical properties. The main operations of GraphConv are: 1. Graph convolution: applies molecular graph convolution to each atom.
2. Graph pool: applies a pooling function to an atom and its neighbors to get the updated feature vector of the atom.
3. Graph gather: takes the feature vectors of all atoms and applies a downsampling function to compute the fixed-length compound feature vector x_molecule ∈ R^d.
We refer to the GraphConv implementation without the graph gather operation as GraphConv2D in this study. Hence, for a compound of n atoms, where a_i ∈ R^d, i ≤ n, is the vector of the ith atom, the output of GraphConv2D is x_molecule ∈ R^{n×d}.
A related scheme, Weave, likewise ends with a gather operation: Weave gather computes the compound feature vector x_molecule ∈ R^d as a function of all atom feature vectors.
We refer to the Weave implementation without the graph gather operation as Weave2D in this study. Thus, for a compound of n atoms, where a_i ∈ R^d, i ≤ n, is the vector of the ith atom, the output of Weave2D is x_molecule ∈ R^{n×d}.
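The distinction between the gathered and the '2D' variants can be illustrated with a toy sketch; the convolution below is our own simplified neighborhood mixing, not the exact GraphConv or Weave operators.

```python
import torch

def toy_graph_conv(X, A, W):
    """One illustrative convolution step: mix each atom's features with its neighbours'.
    X: (n_atoms, d) atom features, A: (n_atoms, n_atoms) adjacency, W: (d, d) weights."""
    return torch.relu((A + torch.eye(A.size(0))) @ X @ W)

def graph_gather(X):
    """Downsample the per-atom matrix into one fixed-length molecule vector."""
    return X.sum(dim=0)

n, d = 5, 8
X, A, W = torch.rand(n, d), torch.eye(n), torch.rand(d, d)
per_atom = toy_graph_conv(X, A, W)      # what the 2D variants hand to JoVA: shape (n, d)
x_molecule = graph_gather(per_atom)     # what the gathered variants output: shape (d,)
```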
Graph Neural Network
In [51], a Graph Neural Network (GNN) is proposed for molecular graphs.
GNN maps a given molecular graph to a fixed-length feature vector using two main phases: a representation-learning phase that repeatedly updates atom vectors from their neighborhoods, and a downsampling phase that pools them into a single vector. In our study, we use a variant of GNN dubbed GNN2D. This variant omits the downsampling phase of the GNN operation. Thus, for a compound of n atoms, where a_i ∈ R^d, i ≤ n, is the vector of the ith atom, the output of GNN2D is x_molecule ∈ R^{n×d}.
Protein Sequence Composition
As regards target quantization, PSC is a well-known predefined scheme for capturing subsequence information. It consists of Amino Acid Composition (AAC) together with higher-order subsequence composition descriptors.
Prot2Vec
Similar to compound featurization, efforts have been made to learn protein representations directly from their raw forms. Learning protein vectors is typically achieved by learning embedding vectors using NLP techniques such as the word2vec and GloVe models [79,80]. This approach also maintains the temporal properties in the target sequence. In [81], it is shown that the NLP approach could be used to develop rich target representations. Therefore, we construct a vocabulary of n-gram subsequences (biological words) following the splitting scheme of [81]. We set n = 3 in this study. In Figure 1, the approach we use to construct the 3-gram profile of a protein sequence is illustrated. The raw form of the protein is split into three non-overlapping representations. The words of all three sequences make up the vocabulary used in this study. We then move across the three splits to construct the overlapping 3-gram target profile. Each word in the dictionary D is mapped to a randomly initialized vector x i ∈ R d , i < |D|, that is updated during training.
In order to make computations tractable, we group subsequences using a non-overlapping window approach similar to the method in [51].
Setting the window size to 3, for didactic purposes, we group X as

$$X = \{\, [x_{1:3}],\; [x_{4:6}],\; [x_{7:9}],\; \ldots \,\},$$

where [···] is a concatenation operator. Also, x_{i:i+w−1} denotes the window {x_i, ..., x_{i+w−1}}, where w is the window size. Note that if |x_{i:i+w−1}| < w by k elements, we add z ∈ R^d to the window k times. Here, z is a vector of all zeros. Thus, each window is a wd-dimensional vector. Pooling functions or an RNN could then be used to process these windows/segments into a fixed-length representation of the target. In section 3.4 we show how we use our proposed approach to construct the fixed-length vector of a target.
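The 3-gram profile and the non-overlapping zero-padded windows can be sketched as follows; the embedding vectors are random stand-ins for the learned ones, and the profile below is built from a single frame rather than the three frames of Figure 1.

```python
import numpy as np

def three_gram_profile(seq, n=3):
    """Overlapping n-gram 'biological words' of a protein sequence (single-frame simplification)."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

def window_segments(vectors, w, dim):
    """Group word vectors into non-overlapping windows of size w, zero-padding the last window."""
    segments = []
    for i in range(0, len(vectors), w):
        window = list(vectors[i:i + w])
        window += [np.zeros(dim)] * (w - len(window))   # pad short windows with zero vectors z
        segments.append(np.concatenate(window))         # each segment is a w*dim-dimensional vector
    return np.stack(segments)

seq = "MKTAYIAKQRQISFVK"
words = three_gram_profile(seq)                              # ['MKT', 'KTA', 'TAY', ...]
embeddings = {w_: np.random.rand(8) for w_ in set(words)}    # stand-in for learned embeddings
segments = window_segments([embeddings[w_] for w_ in words], w=3, dim=8)
```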
Protein Convolutional Neural Network
Protein Convolutional Neural Network (PCNN), proposed by Tsubaki et al. [51], is another end-to-end representation learning scheme for target sequences. It uses a similar approach to the Prot2Vec method (see section 3.3.2), but with overlapping windows, to construct target representations. The subsequent discussion of the PCNN uses Prot2Vec to encode target data and also has a minor variation of the convolution operation in [51]. Given the output c_i^(l−1) of the (l−1)th convolution layer for segment i, PCNN computes

$$c_i^{(l)} = f\!\left( W^{(l)} c_i^{(l-1)} + b^{(l)} \right), \qquad (11)$$

where f(·) is a nonlinear activation function, W^(l) ∈ R^{wd×wd} is the kernel, and b^(l) ∈ R^{wd}. Applying equation 11 multiple times enables nonlinear properties to be learned at different levels of abstraction. In order to produce a d-dimensional vector c_i^(L) for the last PCNN layer L, we let W^(L) ∈ R^{d×wd} and b^(L) ∈ R^d. Thus, the final output is C^(L) ∈ R^{|C|×d}. We refer to the rows of C^(L) as segments.
To compute the vector representation of the target, [51] propose using the average pooling function. It is easy to realize that other differentiable pooling functions, such as the max and sum functions, could be employed. Moreover, an attention mechanism is proposed in [51], where the compound representation is used to compute attention weights for the segments of the target representation.
In this context, the compound vector dimension and the segment dimension must be equal. In this study, we refer to the attention variant as PCNN with Attention (PCNNA). We refer the reader to [51] for the exposition of PCNNA.
Additionally, we use a variant of the PCNN architecture called PCNN2D.
This variant omits the downsampling and attention phases of the PCNN method.
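A toy PyTorch rendering of the segment-wise convolution, the pooled PCNN output, and the PCNN2D variant is shown below; it is our simplification of [51], and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class PCNNLayer(nn.Module):
    """One segment-wise 'convolution': the same affine map + ReLU applied to every window."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, segments):                 # segments: (n_segments, in_dim)
        return torch.relu(self.lin(segments))

layers = nn.Sequential(PCNNLayer(3 * 8, 3 * 8),  # hidden layers keep the w*d dimension
                       PCNNLayer(3 * 8, 16))     # last layer maps each segment to d dimensions
segs = torch.rand(12, 24)                        # 12 windows of a w*d = 3*8 Prot2Vec encoding
out2d = layers(segs)                             # PCNN2D keeps this (n_segments, d) matrix
target_vec = out2d.mean(dim=0)                   # plain PCNN average-pools it into one vector
```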
Joint View Attention for DTI prediction
We propose a Joint View self-Attention (JoVA) approach to learn rich representations from different unimodal representations of compounds and targets for modeling bioactivity. Such a technique is significant when one considers that several molecular representations already exist in the domain and that further novel representation methods are likely to be proposed.
In Figure 2, we present our proposed DL architecture for predicting binding affinities between compounds and targets. Before discussing the details of the architecture, we explain the terminology it uses: • Entity: this refers to a compound or target.
• View: this refers to a unimodal representation of an entity.
• Segment: for an entity represented as X ∈ R |X|×d , we refer to the rows as the segments.
• Projector: projects an entity representation X ∈ R |X|×d into X ∈ R |X|×l , where l ∈ R is the latent space dimension.
• Combined Input Vector (CIV): a vector that is constructed by concatenating two or more vectors and used as the input of a function.
For a set of views V = {v 1 , v 2 , ..., v J |J ∈ R}, JoVA represents v j of an entity as X vj ∈ R |Xv j |×dj where |X vj | denotes the number of elements that compose the entity and d j ∈ R is the dimension of the feature vector of each of these elements of the j-th view. We write X vj as X j in subsequent discussions to simplify notation. For a compound, the segments are the atoms, whereas a window of n-gram subsequences is a segment of a target. Note that in the case where the result of an entity featurization is a vector before applying the JoVA method (e.g., ECFP and PSC), this is seen as X j ∈ R 1×dj . Thus, |X j | = 1.
Thereafter, a projection function p_j of v_j projects X_j into a latent space of dimension l to get X̄_j ∈ R^{|X_j|×l}. Note that the output dimension of each projection function is l. We refer to this operation as the latent dimension projection. The projected views are concatenated segment-wise, and the result serves as the input to the joint view attention module. Since we use a single data point in our discussion, we use X̄ ∈ R^{K×l} in subsequent discussions, where K is the total number of segments across all views. Figure 3 illustrates the detailed processes between the segment-wise concat and view-wise concat layers of Figure 2. Given the multi-view representation of an entity X̄, we apply a multihead self-attention mechanism and a segment-wise input transformation [61]. An attention mechanism could be thought of as determining the relationships between a query and a set of key-value pairs to compute an output. Here, the query, keys, values, and outputs are vectors. Therefore, given a matrix of queries Q, a matrix of keys K, and a matrix of values V, the output of the attention function is expressed as

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V,$$

where d_k is the dimension of K. In self-attention, we set X̄ as Q, K, and V. The use of X̄ as query, key, and value enables different unimodal segments to be related to all other views to compute the final representation of the compound-target pair. Following [61], several attention heads are computed in parallel and their outputs concatenated and projected,

$$\mathrm{MultiHead}(Q, K, V) = [\mathrm{head}_1; \ldots; \mathrm{head}_h]\,W^{O}, \qquad \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V}),$$

where the W matrices are learned projection parameters. Additionally, a segment-wise transformation sub-layer is used to transform each segment of the multihead attention sub-layer output non-linearly. Specifically, we compute

$$\bar{x}_i = \mathrm{ReLU}(a_i W_1 + b_1)\,W_2 + b_2, \qquad (15)$$

where a_i denotes the i-th segment, W_1 ∈ R^{l×d_seg}, and W_2 ∈ R^{d_seg×l}. We set d_seg = 2048 in this study, the same as found in [61].
Furthermore, the Add and Norm layers in Figure 3 implement a residual connection around the multihead and segment-wise transformation sub-layers. This is expressed as LayerNorm(a_i + Sublayer(a_i)).
At the segments splitter layer, X̄ is split into the constituent view representations X̄_j. To compute a fixed-length vector representation ν_j out of X̄_j, pooling functions could then be applied to each view's representation. This enables our approach to be independent of the number of segments of each view, which could vary among samples. In this study, ν_j ∈ R^l is computed with average pooling,

$$\nu_j = \frac{1}{m}\sum_{i=1}^{m} \bar{x}_{j,i},$$

where m = |X̄_j| and x̄_{j,i} denotes the i-th segment (row) of X̄_j.
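To summarize the pipeline between the projectors and the view-wise concat layer, the following PyTorch sketch mirrors the operations above; it is a simplification with illustrative dimensions, and it relies on the built-in multihead attention module rather than a custom implementation.

```python
import torch
import torch.nn as nn

class JoVABlock(nn.Module):
    """Minimal sketch of the joint view attention step (illustrative, not the full model)."""
    def __init__(self, view_dims, latent=64, heads=4, d_seg=256):
        super().__init__()
        self.projectors = nn.ModuleList([nn.Linear(d, latent) for d in view_dims])
        self.attn = nn.MultiheadAttention(latent, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(latent, d_seg), nn.ReLU(), nn.Linear(d_seg, latent))
        self.norm1, self.norm2 = nn.LayerNorm(latent), nn.LayerNorm(latent)

    def forward(self, views):                                # views: list of (n_segments_j, d_j)
        projected = [p(v) for p, v in zip(self.projectors, views)]   # latent dimension projection
        x = torch.cat(projected, dim=0).unsqueeze(0)         # segment-wise concat, add batch dim
        a, _ = self.attn(x, x, x)                            # multihead self-attention
        x = self.norm1(x + a)                                # Add & Norm
        x = self.norm2(x + self.ffn(x))                      # segment-wise transform + Add & Norm
        x = x.squeeze(0)
        sizes = [v.size(0) for v in projected]               # segments splitter
        pooled = [part.mean(dim=0) for part in torch.split(x, sizes, dim=0)]  # average pooling per view
        return torch.cat(pooled, dim=-1)                     # view-wise concat: the CIV

block = JoVABlock(view_dims=[1024, 75, 8420, 24])            # e.g. ECFP, GraphConv2D, PSC, Prot2Vec (illustrative)
civ = block([torch.rand(1, 1024), torch.rand(30, 75), torch.rand(1, 8420), torch.rand(12, 24)])
```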
Experiments Design
In this section, we present the details of the experiments used to evaluate our proposed approach for DTI prediction.
Datasets
The benchmark datasets used in this study are the Metz [57], KIBA [11], and Davis [10] datasets. These are kinase datasets that have been used to benchmark previous DTI studies using the regression problem formulation [55,73,31,52,15]. Members of the kinase family of proteins play active roles in cancer, cardiovascular, and other inflammatory diseases. However, their similarity makes it challenging to discriminate within the family. This similarity results in target promiscuity problems for binding ligands and, as a result, presents a challenging prediction task for ML models [55]. We use the version of these datasets curated by [52]. In [52], a filter threshold is applied to each dataset, and compounds and targets with a total number of samples not above the threshold are removed. We maintain these thresholds in our study. A summary of these datasets, after filtering, is presented in Table 1. Figure 4 shows the distribution of the binding affinities for the datasets.
Baselines
In line with the multi-view representation learning espoused by this study, we use the compound and target views listed in Table 3. We compare our proposed approach to the works in [55,31,52,51]. While [51] is a binary classification model, we replace its endpoint with a regression layer in our experiments. The labels we give to [55], [31], and [51] are KronRLS, SimBoost, and CPI, respectively. SimBoost and KronRLS are implemented as XGBoost and NumPy models, respectively, in our experiments.
As discussed in section 2.3, two DL models were proposed for DTI in [52]: (1) PADME-ECFP4 and (2) PADME-GraphConv. Here, we consider these two architectures as part of a broader family of models that use a single view of a compound and a single view of a target. The nomenclature of such models is 'compound view-target view'.
In summary, the list of baselines used in this study are presented in Table 4.
JoVA Models
In order to show the versatility of JoVA, we propose six variants using combinations of the views listed in Table 3. However, other representations not considered herein could be utilized. The primary condition is ensuring that a view's representation of an entity, before the joint view attention module of Figure 2, is in a matrix form. Indeed, that is the rationale for the 2D variants of the GraphConv, GNN, Weave, and PCNN models. Nonetheless, as earlier mentioned, the feature vector representations could be treated as a one-row matrix in order to make the JoVA computations possible. The six variants are shown in Table 5, and they are implemented as Pytorch models herein.
Model Training and Evaluation
In our experiments, we used a 5-fold Cross-Validation (CV) model training approach. The structure of each CV-fold is shown in Figure 5. Also, the following three main splitting schemes were used: • Warm split: Every drug or target in the validation and test sets is encountered in the training set.
• Cold-drug split: Every compound in the validation and test sets is absent from the training set.
• Cold-target split: Every target in the validation and test set is absent from the training set.
Since cold-start predictions are typically found in DTI use cases, the cold splits provide realistic and more challenging evaluation schemes for the models.
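A minimal way to realize the cold splits is to hold out whole compounds or targets rather than individual pairs; the helper below is an illustrative sketch, not the exact procedure of [55].

```python
import numpy as np

def cold_split(pairs, key, test_frac=0.2, seed=0):
    """Cold-drug (key=0) or cold-target (key=1) split over (compound_id, target_id, affinity) tuples."""
    rng = np.random.default_rng(seed)
    entities = sorted({p[key] for p in pairs})
    n_held = max(1, int(test_frac * len(entities)))
    held_out = set(rng.choice(entities, size=n_held, replace=False))
    train = [p for p in pairs if p[key] not in held_out]    # every held-out entity is absent here
    test = [p for p in pairs if p[key] in held_out]
    return train, test

pairs = [("C1", "T1", 7.1), ("C1", "T2", 5.3), ("C2", "T1", 6.4), ("C3", "T2", 8.0)]
train, test = cold_split(pairs, key=0)   # cold-drug split; use key=1 for cold-target
```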
We used Soek, a Python module based on scikit-learn, to determine the best performing hyperparameters for each model. We used the warm split of the Davis dataset and the validation set of each fold for the search. The determined hyperparameters were then kept fixed for all split schemes and datasets. This was done due to the enormous time and resource requirements needed to repeat the search in each case of the experiment. The only exception to this approach was the SimBoost model, for which we searched for the best performing latent dimension of the matrix factorization stage for each dataset. The test set of each fold was used to evaluate trained models.
As regards evaluation metrics, we measure the Root Mean Squared Error (RMSE) and Pearson correlation coefficient (R 2 ) on the test set in each CV-fold. Additionally, we measure the Concordance Index (CI) on the test set, as proposed by [55].
We follow the averaging CV approach, where the reported metrics are the averages across the different folds. We also repeat the CV evaluation for different random seeds to minimize randomness. After that, all metrics are averaged across the random seeds.
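For reference, the Concordance Index can be computed with a direct, quadratic-time sketch like the following; faster sorted implementations exist, but this form makes the definition explicit.

```python
import numpy as np

def concordance_index(y_true, y_pred):
    """Fraction of correctly ordered pairs among all pairs with different true affinities."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    num, den = 0.0, 0.0
    for i in range(len(y_true)):
        for j in range(len(y_true)):
            if y_true[i] > y_true[j]:
                den += 1.0
                if y_pred[i] > y_pred[j]:
                    num += 1.0
                elif y_pred[i] == y_pred[j]:
                    num += 0.5
    return num / den

ci = concordance_index([5.0, 6.2, 7.1], [5.5, 6.0, 7.4])   # 1.0 for a perfectly ordered toy example
```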
Results and Discussion
In this section, we discuss the results of all baseline and JoVA models of our study.
Here, performance is to be understood as referring to the CI, RMSE, and R 2 results of a given model. While a smaller RMSE value is desirable when comparing two models, larger values of CI and R 2 connote better performance.
In Figure 6, we present the performances of both the baseline and JoVA models on the Davis dataset. Generally, the cold drug split proved to be the most challenging scheme on the Davis dataset, with the cold target and warm splits following in that order. This trend on the Davis dataset implies that the entity with fewer samples may offer the toughest challenge in the cold splitting schemes of [55].
We realized that the models that utilized multiple unimodal representations of entities usually resulted in the best or competitive performance on the RMSE, CI, and R 2 metrics. In particular, the IntView and IVPGAN models performed best amongst all the models, with the IntView model attaining a marginal increase in performance over the IVPGAN model. The IVPGAN results observed in this study are also an improvement on the work in [53]. Nonetheless, the ECFP8-PSC model performed almost as well as the best performing multi-view methods, which we attribute to its simplicity. The performances on the Metz dataset are presented in Figure 7. Similar to the general trend of difficulty seen on the Davis dataset, the cold target regime proved to be the most challenging since the Metz dataset has fewer targets (see Table 1). This phenomenon is more evident among the baseline models than the JoVA models.
Furthermore, the DL-based baselines mostly performed poorly on the Metz dataset, whereas the multi-view models remained comparatively consistent. This performance consistency across all three CV splits also agrees with the scatter and joint plots shown in these figures.
Taken together, we believe that using self-attention to align multiple unimodal representations of atoms and amino acid residues to each other enables a better representational capacity, as is typical of most neural attention-based DL models.
DrugBank Case Study
In this section, we discuss a case study performed using the DrugBank [82] database. The warm split scheme was selected to evaluate the ability of our approach to predict novel and existing interactions. While these results demonstrate the ability of our proposed approach to improve the virtual screening stage of drug discovery, the novel predictions reported herein could become possible cancer therapeutics upon further investigation.
Interpretability Case Study
As mentioned earlier, the interpretability of DTI predictions could facilitate the drug discovery process. Also, being able to interpret an interaction in both the compound and target directions of the complex could reveal abstract intermolecular relationships.
Therefore, we performed two interpretability case studies using Brigatinib and Zanubrutinib as the ligands and EGFR (Protein Data Bank ID: 1M17) as the macromolecule. The EGFR structure was retrieved from the PDB and the ligand structures from DrugBank for the docking experiments. We used PyRx [83] to perform in silico docking and Discovery Studio (v20.1.0) to analyze the docking results. We then mapped the top-10 atoms and top-10 amino acid residues predicted by the JoVA model used in the DrugBank case study above onto the docking results. The attention outputs of the model were used in selecting these top-k segments. In Figure 12, the yellow sections of the macromolecule indicate the top-10 amino acid residues, whereas the top-10 atoms of the ligand are shown in red transparent circles in the interaction analysis results on the right of each complex.
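Selecting the top-10 segments from the attention outputs can be done with a simple ranking step of the kind sketched below; the aggregation over query positions and the block boundaries are our illustrative assumptions.

```python
import numpy as np

def top_k_segments(attn_weights, k=10):
    """Rank segments (ligand atoms or residue windows) by their total attention mass."""
    scores = np.asarray(attn_weights).sum(axis=0)   # aggregate over query positions (and heads)
    return np.argsort(scores)[::-1][:k]

# Hypothetical usage: slice the joint attention matrix into the ligand-atom block and the
# residue-window block, rank each block separately, then map the indices onto the docked pose.
attn = np.random.rand(40, 40)                        # placeholder joint attention over 40 segments
top_atoms = top_k_segments(attn[:, :25], k=10)       # assume the first 25 segments are ligand atoms
top_residue_windows = top_k_segments(attn[:, 25:], k=10)
```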
In the case of the EGFR-Brigatinib complex (see Figure 12a), we realized that the selected amino acid residues were mostly around the binding pocket of the complex.
While we show only the best pose of the ligand in Figure 12, the other selected amino acid residues were identified by the docking results to be for other poses of the ligand.
Also, the selected atoms of the ligand happen to be either involved in an intermolecular bond or around regions identified by the docking analysis to be essential for the interaction. Interestingly, the amino acids of the macromolecule identified to be intimately involved in the interaction and also among the top-10 residues are predominantly in van der Waals interactions with the ligand. Thus, the model considered the stability of the interaction at the active site to be significant in determining the binding affinity.
Likewise, the EGFR-Zanubrutinib case study yielded interpretable results upon examination. It could be seen in Figure 12b that the top-10 amino acid residues selected in the EGFR-Brigatinib case study were identified again. Thus, the model has learned to consistently detect the binding site in both case studies. Indeed, this consistency was also observed in several other experiments using EGFR-1M17 and other ligands 3 .
This aligns with knowledge in the domain, where an active site could be targeted by multiple ligands. The highlighted top-10 amino acid residues also contain three phosphorylation sites (Thr686, Tyr740, Ser744), according to the NetPhos 3.1 [84] server prediction results. Additionally, the interaction analysis of the EGFR-Zanubrutinib case study reveals that a number of the amino acids selected in the top-10 segments are involved in pi-interactions, which are vital to protein-ligand recognition. We also note that some of the selected atoms of Zanubrutinib are in the aromatic regions where these pi-interactions take place. In another vein, other selected amino acids are involved in van der Waals interactions, which reinforces the notion of stability being significant in determining the binding affinity.
In a nutshell, our approach is also able to offer biologically plausible cues to experts for understanding DTIs. Such an ability could be invaluable in improving existing virtual screening methods in rational drug discovery.
Conclusion
In this study, we have discussed the significance of studying DTI as a regression problem and also highlighted the advantages of leveraging multiple entity representations for DTI prediction. Our experimental results indicate the effectiveness of our proposed self-attention based method in predicting binding affinities and show that it offers biologically plausible interpretations via the examination of the attention outputs.
The ability to learn rich representations using the self-attention method could have applications in other cheminformatic and bioinformatic domains such as drug-drug and protein-protein studies.
"year": 2020,
"sha1": "7d6a3a51f93e05da511087727677c993fcd9192c",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.jbi.2020.103547",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "7d6a3a51f93e05da511087727677c993fcd9192c",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Mathematics"
]
} |
Physicochemical and Sensory Attributes of Intact and Restructured Chicken Breast Meat Supplemented with Transglutaminase
Simple Summary
Transglutaminases are enzymes used for joining cuts or fragments of meat together to make larger pieces that are easier to handle or a product that is more attractive to consumers. They react differently with various meats and at different inclusion levels, so this study investigated quality traits of intact chicken meat and restructured chicken meat supplemented with different proportions of transglutaminase. The results showed that enzyme-supplemented restructured meat had lower cooking loss and greater tenderness compared to intact meat. Sensory attributes were not affected by the supplemented enzyme, and there was no difference in these attributes compared to intact meat. Therefore, supplementation with transglutaminase could be undoubtedly considered as a valuable contributing agent in improving yield and texture of minced meat, and reducing other additives usually used in chicken meat processing.
Abstract
Transglutaminases (TG) are enzymes that improve the functional properties of proteins in meat products, contribute to the strong cohesion of meat without the further need for the addition of sodium chloride or phosphates, and have a positive effect on the texture of the meat product. This study aimed to investigate the physicochemical and sensory attributes of intact and restructured chicken meat supplemented with different TG proportions. The study was conducted on chicken breast meat samples (n = 40) originating from the line Ross 308. The intact samples were separated from the pectoralis major muscle, whereas the rest of the breast meat was ground, divided into equal parts, and supplemented with TG (0.2%; 0.4%; 0.6%; 0.8%; 1%). The intact meat had the highest cooking loss (19.84) when compared to 0.2% (15.51), 0.4% (15.04), 0.6% (14.95), 0.8% (14.95), and 1% (15.79) TG-supplemented meat. The intact meat had greater shear force (16.90) than 0.2% (5.16), 0.4% (5.39), 0.6% (5.16), 0.8% (5.98), and 1% (6.92) TG supplemented meat. There was no difference between intact meat and TG-supplemented meat in color, taste, odor, texture, and overall acceptability (p > 0.05). Therefore, TG supplementation can be used in improving yield and texture of minced chicken meat.
Introduction
Today's meat industry is faced with a challenge involving modification of different processing techniques (use of improved raw materials, reformulation of products, changing the technological process) that lead to meat quality required by consumers [1,2]. In general, it is known that consumers demand healthier meat products. Currently, the emphasis is on the minimum use of additives (such as phosphates or sodium chloride) traditionally involved in the meat industry [2][3][4]. However, numerous studies reported that exclusion of sodium chloride and phosphate led to meat products with poor physicochemical properties [5][6][7]. Marques et al. [7] reported that addition of transglutaminase enzyme (TG) had been used for inducing gelation and reducing or eliminating the need to add sodium chloride and phosphates in products. Stangierski et al. [8] reported that in the case of proteins, which do not form gels with desirable rheological properties after thermal processing, their functionality might also be improved by using TG. TG catalyzes the bonding of acyl transfer reactions between the γ-carboxyamide group of peptides of bound glutamine residues and a variety of primary amines. The reaction results in the formation of high molecular weight polymers. In the presence of primary amines, TG can crosslink the amines to the glutamines of a protein. In the absence of primary amines, water will react as a nucleophile and lead to deamidation of glutamines. The aforementioned reactions can influence the functional properties of proteins in food [9][10][11]. It has been proven that TG improves the functional properties of proteins in meat products, contributes to the strong cohesion of a block of meat without the further need for the addition of sodium chloride or phosphates, and increasing hardness has a positive effect on the texture of the meat product [11][12][13]. Literature reports revealed that TG reacted differently with various protein sources and at different inclusion levels [14]. Furthermore, the most important findings obtained from the studies regarding TG and chicken meat involve the use of TG in combination with other additives during processing of different meat products. Lantto et al. [5] investigated the effects of laccase and TG on the firmness and weight loss of cooked chicken breast meat homogenate gels. In addition to these enzymes, meat homogenate samples were mixed with phosphate (0.3%), ascorbic acid (0.06%), glucose (0.1%), and nitrite (0.012%). Uran and Yilmaz [11] investigated the effect of TG (0.2%, 0.4%, 0.6%, 0.8%, and 1%) on the quality characteristics of chicken burgers. The chicken burgers were processed following the common procedures for burger production (grounding of fresh chicken breast meat, mixing, adding of ice, emulsion fat, burger mix, antimicrobial substance, carmine, nitrite, and filling). Tseng et al. [15] researched the effect of TG (0%, 0.05%, 0.1%, 0.2%, 0.4%, and 1.0%) on the quality of low-salt chicken meatballs. Each formulation also contained 1.0% salt, 0.2% sodium tripolyphosphate, 0.8% monosodium glutamate, 3% sugar, 0.2% white pepper, and 0.1% seasoning powder. Simultaneous application of TG (0.3%) and high pressure on chicken batters with the addition of fresh egg yolk (10%), dehydrated egg white (10%), cold water (30%), and salt (1%) was investigated by Trespalacios and Pla [16]. Uran et al. 
[17] investigated the effect of TG (0%, 0.5%, and 1%) on the quality properties of chicken breast patties produced with the addition of non-specified amounts of salt and different spices.
The aim of this study was to investigate the physicochemical and sensory attributes of intact chicken breast meat and restructured chicken breast meat supplemented with different proportions of TG enzyme without any other additives.
Raw Material and Preparation of Restructured Chicken Breast Meat
The study was conducted on chicken breast meat samples originating from 40 chicken broilers from the line Ross 308. The study was conducted in accordance with Croatian legislation (The Animal Protection Act, The Official Gazette 102/17; The Regulation on the Protection of Animals Used for Scientific Purposes, The Official Gazette 55/13), and was approved by the Bioethical Committee for the Protection and Welfare of Animals at the University of Zagreb, Faculty of Agriculture, Croatia (Class: 114-04/20-03/10; Ref. 251-71-29-02/19-20-2, 30-11-2020). The animals were slaughtered at 35 d of age. The carcasses were eviscerated and chilled in a cold chamber at 4 °C for 24 h before dissection. Chicken breast meat was manually trimmed of skin, visible fat, and connective tissue. The intact meat samples were separated from the left lateral side of the pectoralis major muscle and were not supplemented with TG. The samples were weighed and stored at 4 °C for 24 h in a refrigerator pending further analysis. The rest of the breast meat was coarsely ground through a plate (Ø 10 mm). Immediately after that, the ground meat was divided into five portions of equal mass and supplemented with microbial TG (Special Ingredients Ltd., Chesterfield, UK) in concentrations as follows: 0.2%, 0.4%, 0.6%, 0.8%, and 1.0%. According to the producer, this TG product is also called 'Meat glue' and is comprised of sodium casein (E469), maltodextrin, transglutaminase, and sunflower lecithin (E322). Each portion of ground meat was manually homogenized for 10 min to allow even distribution of TG. There were no other additives in the mixtures. During the entire processing period, temperature of the meat was controlled and did not exceed 10 °C. Each mixture of restructured chicken breast meat (RCM) was formed firmly into 10 cylindrical shapes (50 mm in diameter and 160 mm in length) without visually entrapped air using polyvinyl chloride film (PVC). The ends of the PVC film were firmly twisted. The RCMs were stored for cold binding at 4 °C for 24 h in a refrigerator. Immediately after the binding stage, the RCMs were used for further analyses.
pH Value
The pH value of the samples was measured in triplicate using a penetrating electrode (InLab Solids Pro) adapted to the portable pH meter Seven2Go (Mettler Toledo, Greifensee, Switzerland). For the pH determination, the electrode was inserted in the center of each sample.
Color
The color parameters (L * a * b *) were successively measured in triplicate on the cross-section of the samples after a 1 h blooming period using a Chroma Meter (Minolta CR 400, Osaka, Japan), with measurements standardized with respect to the white calibration plate.
Cooking Loss
Cooking loss (CL) was determined using a method described by Honikel [18]. Each sample was weighed, placed in a polyethylene bag, and cooked in a water bath (85 °C) until the endpoint temperature of 75 °C in the sample center was attained. After that, the samples were cooled in ice slurry, dried, and weighed. CL was calculated on the basis of the weight loss (%). The cooked samples were stored at 4 °C for 24 h in a refrigerator and used for Warner-Bratzler shear force (WBSF) determination.
Warner-Bratzler Shear Force
The WBSF was evaluated by using an Instron Universal Testing System (Model 3345, Instron, Canton, MA, USA) equipped with a WBSF device. Each sample was cut into ten square cores (10 × 10 × 25 mm) that were sheared once perpendicular by using the Instron unit calibrated to a full scale with a 500 Newton load cell, a crosshead speed of 250 mm/min, and a sample rate of 10 points/s. The mean value of the ten replicates was taken as the maximum shear force value.
Sensory Evaluation
After cooking the meat samples as described above for CL, samples were evaluated by six panelists. The panel comprised students and staff of the Departments of Animal Science and Technology. The analyses were performed in a well-lit room, at a temperature of about 23 °C, and relative humidity of 60-70%. Panelists were asked to evaluate taste, color, odor, texture, and overall acceptability on a 7-point hedonic scale. The scale was defined as 1-very poor, 2-poor, 3-slightly poor, 4-fair, 5-moderate, 6-good, and 7-excellent [15]. Each sample was prepared in a uniform manner by removing the casing and cutting the cylinder into equal cross-sections (15 mm width). The samples were individually coded with three-digit numbers, and randomly presented to the panelists on a white porcelain plate. Each sample was evaluated in triplicate, i.e., three sessions. During each session, panelists were provided with water and bread to rinse and eat between tasting samples. The values were statistically calculated as median values and included in the further statistical analysis.
Statistical Analysis
Statistical analysis was performed using the GLM procedure of the SAS/STAT software package version 9.4 [19]. Post hoc comparison among the least square means was performed using a Bonferroni multiple test correction. The difference between means was considered significant at p < 0.05.
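As an illustrative Python analogue of this analysis (the study itself used the GLM procedure in SAS), a one-way comparison with Bonferroni-corrected pairwise tests could look like the sketch below; the measurements are random placeholders, not the study's data.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical placeholder WBSF measurements per treatment group
groups = {"intact": np.random.normal(17.0, 1.0, 10),
          "TG_0.2": np.random.normal(5.2, 0.5, 10),
          "TG_1.0": np.random.normal(6.9, 0.5, 10)}

f_stat, p_overall = stats.f_oneway(*groups.values())        # overall treatment effect
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)                          # Bonferroni multiple test correction
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    print(a, "vs", b, "significant" if p < alpha_corrected else "n.s.")
```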
Physicochemical Attributes
The pH value in RCM groups supplemented with TG enzyme was significantly greater than the pH value of intact meat (Table 1). Despite this difference, it should be noted that all determined pH values in the present study are within the 'normal' range of chicken meat [20]. Previous research also observed that a slight increase in the pH value of the samples was accompanied by a greater dosage of the TG enzyme [16,17,21]. Setiadi et al. [21] reported that greater pH values, as a result of TG supplementation, are due to the crosslinking reaction with the sample protein, which chemically produces the ammonia base molecule, and thus the more alkaline ammonia content is able to influence the pH value. However, this trend was not observed in the studies of Trespalacios and Pla [16], Uran et al. [17], and Uran and Yilmaz [11].
Table 1. pH value (pH), color parameters (L * a * b *), cooking loss (CL), and Warner-Bratzler shear force (WBSF) of intact chicken breast meat and restructured chicken breast meat (RCM) supplemented with 0.2%, 0.4%, 0.6%, 0.8%, and 1% of transglutaminase enzyme (LSM ± SE).
Comparison of the colorimetric values showed a significant difference in lightness (L *) between intact meat and RCM groups supplemented with TG enzyme (Table 1). The intact meat had the lowest L * value (56.44). Comparing the RCM groups supplemented with TG enzyme, no change was found between 0.2% and 0.4% groups, between 0.4%, 0.8%, and 1% groups, nor between 0.6%, 0.8%, and 1% groups. Uran et al. [17] did not find statistically significant differences in the L * values of chicken patties of the non-supplemented group (41.81) and those supplemented with TG of 0.5% (43.10) and 1% (42.15). Uran and Yilmaz [11] also did not report statistically significant differences in the L * values of chicken burgers supplemented with TG of 0.2% (54.93), 0.4% (56.53), 0.6% (53.78), 0.8% (54.04), and 1% (54.64). However, they found a significant difference in the L * value of the non-supplemented group when compared with those of TG supplemented groups. The L * value of the non-supplemented group was 50.29, whereas the highest L * value of chicken burgers supplemented with 0.4% of TG enzyme was 56.53.
When comparing values for redness (a *), the lowest value was found in the intact meat, which significantly differed from the 0.4% and 1.0% TG supplemented groups. Differences among all other groups were not statically significant. In chicken patties, Uran et al. [17] did not find significant difference between the a * values of the non-supplemented (7.42) group and other TG supplemented groups (0.5% = 6.98, and 1% = 6.96). In contrast, in chicken burgers Uran and Yilmaz [11] found the highest a * value for the 0.6% TG supplemented group (23.51), while the lowest value was found for the 0.4% (20.79) TG supplemented group. The a * value in chicken burgers with 0.4% TG was significantly different from the other TG supplemented groups (0.2%, 0.6%, 0.8%, and 1%). The authors pointed out that differences in a * values between the other TG supplemented groups (0.2%, 0.6%, 0.8%, and 1%) were not found (p > 0.05).
The results of the present study indicate that there was no significant difference between yellowness (b *) of intact meat and RCM groups supplemented with TG enzyme (p > 0.05; Table 1). Uran and Yilmaz [11] found the largest statistical difference in the b * value of chicken burgers in the 0.4% (9.36) and 0.8% (8.96) TG supplemented groups. Furthermore, they found statistically similar values for b * values between the non-supplemented group, and the 0.2%, 0.4%, 0.8%, and 1% TG supplemented groups, as well as the 0.6% and 0.8% TG supplemented groups. Uran et al. [17] did not find statistically significant difference in the b * values of chicken patties of the non-supplemented group and the 0.5% TG supplemented group. When the TG supplemented groups were evaluated, there was significant difference in b * values between the 0.5% (24.42) and 1% (23.07) TG supplemented groups.
Regarding CL in the present study, there was no significant difference in water-holding capacity between RCM groups supplemented with different TG enzyme proportions (Table 1). This result indicates that the ground meat supplemented with different TG could certainly have greater yield than the whole meat piece. It is valuable information that could be considered in processing of different types of ground (restructured) meat products (e.g., meatballs, patties, hamburgers, different types of sausages) for gaining greater yields without/with minimum use of other additives, such as phosphates, sodium chloride, or monosodium glutamate. Pietrasik et al. [22] and Mostafa [23] reported that TG enzyme increased the water-holding capacity of meat products by decreasing cooking and thawing losses. In the case of raw materials with the addition of TG, the gel-forming capability was improved and thus, indirectly, the water-holding capacity was improved as well. These authors pointed out that by improving ε-(γ-glutamyl) lysine peptide bonds, more water is retained, despite the temperature at which processing was performed. In accordance with this, Uran and Yilmaz [11] in chicken burgers, and Uran et al. [17] in chicken patties also confirmed that TG supplementation significantly decreased CL in comparison to the non-supplemented ground meat. Stangierski et al. [8], Stangierski and Baranowska [24], Stangierski et al. [25], and Stangierski and Kaczmarek [26] indicated that pre-incubation time, the amount of supplemented TG, and heating treatment are also major factors that could influence cooking loss and texture properties of the meat. Stangierski and Kaczmarek [26] investigated the effect of TG (0.1%, 0.2%, 0.3%, and 0.6%) on the quality of poultry surimi during an incubation time of 1, 3, 5, 8, and 24 h, and an incubation temperature of 6-7 °C. They found that a 0.3% concentration of TG was the most advantageous in reducing cooking loss from poultry surimi gels. Stangierski et al. [8] found that poultry meat supplemented with 0.3% of TG, pre-incubated for 3 h, and thermally processed at 70 °C had low cooking loss and improved texture properties.
The results of our study indicated that a slight increase in the WBSF of the RCM samples was accompanied by a greater dosage of the TG enzyme (Table 1). When statistically evaluated, there was no significant difference in WBSF among RCMs supplemented with TG enzyme. Given that one of the key properties of TG, and also the reason for its use in the meat industry, is the binding of myofibrils, increasing gel structure and thus increasing texture [8,22,25], a greater WBSF value in RCM with a higher proportion of TG was actually expected. The results also showed that intact meat had a significantly greater WBSF than the RCMs supplemented with TG enzyme. With regard to this, it is interesting that the RCMs with the highest enzyme dosage (1%) had significantly lower WBSF values than intact meat. Tseng et al. [15] found that the gel strength of low-salt chicken meatballs increased with increasing TG enzyme supplementation (0.05%, 0.1%, 0.2%, 0.4%, and 1%), and at proportions above 0.2% was significantly higher than the non-supplemented group. Uran and Yilmaz [11] found no statistically significant difference in WBSF between the non-supplemented group and the 0.2% and 0.4% TG enzyme groups, nor between the 0.6% and 0.8% TG enzyme groups. They found the largest statistically significant difference in the WBSF of chicken burgers supplemented with 1% TG enzyme. Uran et al. [17] also found a statistically significant difference in the WBSF of chicken patties supplemented with 0.5% and 1% TG enzyme, respectively. Significantly higher WBSF was found in chicken patties supplemented with 1% TG enzyme. Compared to the non-supplemented group, chicken patties supplemented with 1% TG enzyme had greater WBSF values. These differences between studies could be due to the fact that, besides TG, the samples were prepared with different additive supplementation (emulsion fat, carmine, nitrite, salt, monosodium glutamate, sodium tripolyphosphate, etc.). Furthermore, these differences in WBSF values could be related to the fact that in the present study the intact meat samples were used as the samples without TG supplementation, while in each of the above-mentioned studies the non-supplemented group was mechanically treated in the same manner as the supplemented groups of the relevant studies. Surely, as reported by Stangierski et al. [8], Stangierski and Baranowska [24], and Stangierski et al. [25], differences in WBSF could be the result of other aforementioned factors that were found to have an effect on the textural properties of the meat supplemented with TG but were not considered in the present study nor in the other aforementioned comparable studies.
Sensory Evaluation
According to the results, there was no significant difference between the intact meat and the RCM groups supplemented with TG enzyme in terms of color, taste, odor, texture, and overall acceptability (p > 0.05). Except for the color, the mean values between treatments ranged from 'moderate' to 'good' (5 = moderate; 6 = good; Table 2.). The results of the present study revealed that the increase in the proportion of TG did not cause any negative acceptance in terms of evaluated sensory attributes. Uran and Yilmaz [11] confirmed that TG in different proportions (0%, 0.2%, 0.6%, 0.8%, and 1%) did not significantly affect the color, taste, odor, texture, and general evaluation of chicken burgers. They also indicated that TG in chicken burgers did not cause any negative acceptance in evaluated sensory attributes (the same hedonic scale as in the present study). Tseng et al. [15] investigated the effect of TG supplementation (0%, 0.05%, 0.1%, 0.2%, 0.4%, and 1%) on the sensory properties of chicken meatballs. It was found that the addition of TG in different proportions did not significantly affect the appearance, color, and taste of chicken meatballs (p > 0.05). However, the results of that study showed that the meatballs supplemented with 1% of TG significantly differed from the other samples in texture, juiciness, and overall acceptability. These findings indicated that chicken meatballs with 1% TG had the highest gel strength, more complete gel clusters, and the highest yield. It was also noticed that there were no negative scores for the evaluated attributes (the same hedonic scale as in the present study) and they were predominantly scored as 'good'.
Conclusions
The present study revealed varying changes in pH value, color parameters (except for b * value), cooking loss, and shear force of intact meat and RCMs supplemented with different proportions of TG enzyme. The most interesting change was related to lower cooking loss and shear force values of RCMs supplemented with different proportions of TG enzyme compared to intact meat. Since there was no significant change in cooking loss and shear force values among RCM groups supplemented with different proportions of TG enzyme, lower inclusion rates could be considered in different processing technologies. Sensory attributes were not affected by the supplemented enzyme, and there was no difference in these attributes compared to intact meat. Therefore, TG enzyme supplementation could undoubtedly be considered as a valuable contributing agent in improving yield and texture of minced meat, and reducing other additives usually used in chicken meat processing. However, taking into consideration the changes in physicochemical attributes, it is important to consider further investigations that could give precise information about variations in meat quality attributes which are affected by the inclusion of TG enzyme.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author, A.K., upon reasonable request.
"year": 2021,
"sha1": "f6843980ef0baaaa5c60e0d26e33f39c55d82387",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/11/9/2641/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f6843980ef0baaaa5c60e0d26e33f39c55d82387",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Interactive comment on "Tropospheric column ozone response to ENSO in GEOS-5 assimilation of OMI and MLS ozone data"
The work focuses on natural phenomena, specifically the El Niño Southern Oscillation (ENSO), and its effects on tropospheric ozone with an emphasis on the extratropics. The article provides well-rounded background by stressing the importance of separating the natural signal from the anthropogenic signal when analyzing tropospheric ozone variability. For the analysis of tropospheric ozone, the study uses NASA's GEOS-5 data assimilation system (DAS) along with the Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) on the Earth Observing System Aura satellite. The study also utilizes the Global Modeling Initiative (GMI) chemical transport model (CTM) to show that 9 years of ozone assimilation (2005-2013) are consistent with the longer-term tropospheric ozone response. ENSO is represented by the Niño 3.4 index. Outgoing longwave radiation (OLR) data is used as a proxy for convection, which affects tropospheric ozone.
Abstract. We use GEOS-5 analyses of Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) ozone observations to investigate the magnitude and spatial distribution of the El Niño Southern Oscillation (ENSO) influence on tropospheric column ozone (TCO) into the middle latitudes. This study provides the first explicit spatially resolved characterization of the ENSO influence and demonstrates coherent patterns and teleconnections impacting the TCO in the extratropics. The response is evaluated and characterized by both the variance explained and sensitivity of TCO to the Niño 3.4 index. The tropospheric response in the tropics agrees well with previous studies and verifies the analyses. A two-lobed response symmetric about the Equator in the western Pacific/Indonesian region seen in some prior studies and not in others is confirmed here. This two-lobed response is consistent with the large-scale vertical transport. We also find that the large-scale transport in the tropics dominates the response compared to the small-scale convective transport. The ozone response is weaker in the middle latitudes, but a significant explained variance of the TCO is found over several small regions, including the central United States. However, the sensitivity of TCO to the Niño 3.4 index is statistically significant over a large area of the middle latitudes. The sensitivity maxima and minima coincide with anomalous anti-cyclonic and cyclonic circulations where the associated vertical transport is consistent with the sign of the sensitivity. Also, ENSO related changes to the mean tropopause height can contribute significantly to the midlatitude response. Comparisons to a 22-year chemical transport model simulation demonstrate that these results from the 9-year assimilation are representative of the longer term. This investigation brings insight to several seemingly disparate prior studies of the El Niño influence on tropospheric ozone in the middle latitudes.
Introduction
The contributions by natural phenomena to tropospheric ozone variability must be identified and quantified for robust assessments of the present and future anthropogenic influence. Here, we investigate the signal of the El Niño Southern Oscillation (ENSO) in extratropical tropospheric ozone in a global assimilation system. To the best of our knowledge, this study provides the first near-global, explicit, spatially resolved characterization of the ENSO influence, and reveals coherent patterns and mechanisms of the influence in the extratropics.
ENSO is well known to impact the magnitude of tropospheric ozone in the tropical Pacific. El Niño (La Niña) conditions are characterized by anomalous increases (decreases) in SSTs in the central and eastern Pacific. Opposite anomalies tend to occur in the western Pacific. In general, changes to convection and circulation patterns under El Niño conditions lead to reduced tropical tropospheric ozone in the central and eastern Pacific and enhanced ozone over the western Pacific and Indian Oceans. The response is highly linear in the tropics, so La Niña conditions produce an antisymmetric response (DeWeaver and Nigam, 2002). This influence on tropical tropospheric ozone has been observed in numerous previous studies.
The ENSO impact has also been demonstrated to extend to the subtropics. Using 40 years of ozone observations at Mauna Loa Observatory and a CTM, Lin et al. (2014) identified a strong link between El Niño events and lower tropospheric ozone enhancements over the subtropical eastern Pacific in winter and spring. They attribute this to the eastward extension and the equatorward shift of the subtropical jet stream during El Niño, which enhances the long-range transport of Asian pollution. Neu et al. (2014) examined mid-tropospheric ozone observations from TES during 2005-2010 and found increased and decreased zonal mean ozone below the Northern Hemisphere climatological subtropical jet during the 2009-2010 El Niño and 2007-2008 La Niña, respectively. In the extratropics, ENSO events have been shown to alter the circulation by modifying planetary wave driving, the North Pacific low, and the location and strength of the extratropical jets (e.g., Angell and Korshover, 1984; Langford, 1999; Trenberth et al., 2002; García-Herrera et al., 2006). Thus, it is reasonable to expect ENSO to have a dynamical impact on extratropical tropospheric ozone distribution and variability. However, the extratropical ozone response to ENSO has not been as extensively studied as the tropical ozone response and some results from prior studies appear to be contradictory. Oman et al. (2013) examined the ozone sensitivity to ENSO with Microwave Limb Sounder (MLS) and Tropospheric Emission Spectrometer (TES) observations in addition to a chemistry-climate model simulation. Although limited by just over 5 years of TES data (September 2004 through December 2009), they show statistically significant sensitivity in the lower midlatitude troposphere over two broad meridional bands centered on the Pacific and Indian Oceans. Balashov et al. (2014) find a correlation between ENSO and tropospheric ozone around South Africa using air quality monitoring station data from the early 1990s to the 2000s. Langford et al. (1998) and Langford (1999) show ozone enhancements in the free troposphere correlated with El Niño (with a several month lag) in lidar data from Boulder, Colorado between 1993 and 1998. Langford (1999) attributes this to the secondary circulation associated with an eastward shifted Pacific subtropical jet exit region under El Niño conditions. The transverse circulation carries ozone-rich air from the stratosphere across the jet, after which it is transported poleward. Lin et al. (2015) conclude that more frequent springtime stratospheric intrusions following La Niña winters contribute to increased ozone at the surface and in the free troposphere in the western United States.
In contrast, other observational and modeling studies have not found a significant relationship between ENSO and extratropical tropospheric ozone, suggesting that any such influence is weak or occurs only on a regional scale.For example, Vigouroux et al. (2015) use a stepwise multiple regression model including an ENSO proxy to examine ground-based Fourier transform infrared (FTIR) measurements from eight subtropical and extratropical stations of the Network for the Detection of Atmospheric Composition Change (NDACC).They did not find a significant ENSO impact on the tropospheric ozone column at any of the eight sites.Hess et al. (2015) also did not find a relation between ENSO and tropospheric ozone over extratropical regions in a four-member ensemble model simulation spanning 1953 to 2005.They suggest that ENSO may occasionally induce ozone anomalies but the correlation is weak.Thompson et al. (2014) remove the ENSO signal from ozonesonde data near South Africa to investigate middle tropospheric ozone trends.However, in contrast to the results of Balashov et al. ( 2014) using air quality station data, they find the correlation of the sonde data with ENSO is weak (A.Thompson, personal communication, 2016).
Determining the spatial extent of ENSO influence on tropospheric ozone from observations is difficult due to the sparse observation networks of sondes, FTIR, etc. The direct retrieval of tropospheric ozone from satellite observations is limited by coarse vertical resolution in the troposphere for nadir-viewing instruments and pressure broadening in the lower troposphere for limb-type instruments. Nevertheless, sonde and surface data combined with satellite observations have been used to derive a coarse global climatology of tropospheric ozone (Logan, 1999). Tropospheric ozone fields have also been derived by subtracting measured stratospheric column ozone from total column ozone (e.g., Fishman et al., 1990, 2003; Ziemke et al., 1998; Schoeberl et al., 2007). These residual methods are more robust at lower latitudes and have been used to show a large impact by ENSO on tropospheric ozone in the tropics (e.g., Chandra et al., 1998; Ziemke et al., 1998; Thompson and Hudson, 1999; Ziemke and Chandra, 2003; Fishman et al., 2005).
The goal of this paper is to use NASA's Goddard Earth Observing System Version 5 (GEOS-5) analyses of satellite measured ozone to investigate the spatial distribution, magnitude, and attribution of the tropospheric ozone response to ENSO.Assimilation provides the advantages of global, gridded fields constrained by observations.Ziemke et al. (2014) show that the ozone assimilation offers more robust tropospheric ozone fields for science applications in the lower and middle latitudes than residual methods.In the present study, the response in the tropics is evaluated and discussed alongside the midlatitude response.The relatively well-established tropical response is primarily included here for verification of the analyses, although several new findings are discussed.The comprehensive examination of the midlatitudes made possible by the ozone assimilation is novel to this study.In the midlatitudes, we show the tropospheric column ozone (TCO) has a statistically significant response to ENSO in some regions.This response can be explained by changes to circulation, convection, and tropopause height.These results will benefit both process-oriented evaluations of the regional ozone response in simulations and assessments of the anthropogenic impact on tropospheric ozone, including prediction of future tropospheric ozone and trends.
The following section discusses the data, assimilation system, and methods used in this study. The results are then presented in Sect. 3. A comparison of results to a CTM simulation is included to show that the 9-year time period of the EOS Aura observations is largely representative of longer periods. Additional discussion of the results is found in Sect. 4 before concluding with a brief summary.
2 Data, assimilation system, and methods

The ozone analyses used in this study were produced using a version of NASA's GEOS-5 data assimilation system (DAS), ingesting data from the Ozone Monitoring Instrument (OMI) and MLS on the Earth Observing System Aura satellite (EOS Aura), as described in Wargan et al. (2015). A brief description of the ozone data and assimilation system is provided in the following subsection. Subsequent subsections provide information on the ancillary data sets and the linear regression analysis used in this study.
Ozone data and GEOS-5 data assimilation system
The OMI and MLS instruments are both onboard the polar orbiting EOS Aura satellite launched on 15 July 2004.OMI is a nadir-viewing instrument that retrieves near-total column ozone across a 60-scene swath perpendicular to the orbit (Levelt et al., 2006).The footprint, or spatial resolution, of the nadir scene is 13 km along the orbital path by 24 km across the track.The cross-track scene width increases with distance from nadir to about 180 km at the end rows.OMI collection 3, version 8.5 retrieval algorithm data are used in the analyses considered here.The MLS instrument scans the atmospheric limb to retrieve the ozone vertical profile from microwave emissions.Version 3.3 data on the 38 layers between 261 and 0.02 hPa were used in the present analyses after screening based upon established guidelines (Livesey et al., 2011).
The GEOS-5.7.2 version of the data assimilation system is used to produce the ozone analyses.This is a modified version from the system used in the Modern-Era Retrospective analysis for Research and Applications (MERRA) (Rienecker et al., 2011).For the analyses used here, the system uses a 2.5 • × 2.0 • longitude-latitude grid with 72 layers from the surface to 0.01 hPa.The vertical resolution around the tropopause is about 1 km.Alongside the ozone data, a large number of in situ and space-based observations are included in the GEOS-5 analyses (Wargan et al., 2015).However, OMI and MLS ozone retrievals are the only data that directly modify the analysis ozone in this version of the DAS.Anthropogenic and biomass burning ozone production sources are not explicitly implemented in these analyses.Although tropospheric chemistry is not implemented in the assimilation system, ozone that is produced or lost due to emissions and other tropospheric chemistry sources and sinks is included in the analyses to the extent of the sensitivity of each OMI column retrieval at tropospheric altitudes.In general, the sensitivity decreases with decreasing altitude in the troposphere.Wargan et al. (2015) provides more details on the OMI tropospheric sensitivity and the retrieval "efficiency factors", or averaging kernels, used in the assimilation.Wargan et al. (2015) and Ziemke et al. (2014) previously evaluated these ozone analyses relative to sondes and other satellite data.Their assessments show that accounting for measurement and model errors in the assimilation greatly increases the precision of the tropospheric ozone over other methods of obtaining gridded TCO fields.Both Wargan et al. (2015) and Ziemke et al. (2014) show that there is greater disagreement of the tropospheric ozone analyses with sondes at high latitudes.For this reason, we restrict our discussion in the present study to the tropics and middle latitudes.
Global modeling initiative CTM simulation
We use a Global Modeling Initiative (GMI) CTM (Strahan et al., 2007; Duncan et al., 2008) simulation to determine if the results from the 9 years of ozone analyses are representative of the longer term. Stratospheric and tropospheric chemistry are combined in the GMI CTM with 124 species and over 400 chemical reactions. The tropospheric chemistry mechanism is a modified version originally from the GEOS-CHEM CTM (Bey et al., 2001). The simulation is driven using MERRA meteorological fields for 1991-2012 and run at the same resolution as the assimilation system. Observation-based, monthly varying anthropogenic and biomass burning emissions are used through 2010 with repeated 2010 monthly means for the final 2 years. Strode et al. (2015) provide more details on this specific simulation, which they refer to as the "standard hindcast simulation" in their study. Ziemke et al. (2014) show that the TCO from a similar GMI simulation compares well with sonde observations. In the present study we define, process, and analyze the CTM TCO fields in the same manner as the assimilation fields.
ENSO index and outgoing longwave radiation data
ENSO is characterized in this study by the monthly mean Niño 3.4 index available from the NOAA Climate Prediction Center (Climate Prediction Center, 2016). The index is based upon the mean tropical sea surface temperature between 5° N-5° S and 170-120° W. This time series is normalized using 1981-2010 as the base time period. Figure 1 shows the Niño 3.4 index time series from 1991 to 2013, which spans the years of the ozone analyses and GMI simulation. In this study, we define months with "strong" El Niño and La Niña conditions as months with index values greater than 0.75 and less than −0.75, respectively. The Climate Prediction Center uses threshold values of 0.5 and −0.5 to characterize El Niño and La Niña, respectively. The value of ±0.75 used here to characterize months of "strong" conditions is about 1 standard deviation (0.78) of the time series spanning the assimilation, 2005-2013. La Niña conditions were dominant during the ozone analyses time period (black line in Fig. 1). Months of strong El Niño conditions occurred in the boreal fall/winter of 2006/2007 and 2009/2010. Months of strong La Niña conditions occurred during the boreal fall/winter of 2005/2006, 2007/2008, 2008/2009, 2010/2011, and 2011/2012. We use outgoing longwave radiation (OLR) data as a proxy for convection to investigate the contribution from changes in convection associated with ENSO. The monthly, 1° × 1° data are provided by the NOAA Earth System Research Laboratory (Lee, 2014). Small values of OLR indicate substantial convection, and vice versa.
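As a concrete illustration of this thresholding, the sketch below flags months of strong El Niño and La Niña conditions from a monthly Niño 3.4 series; the variable names and the synthetic example values are hypothetical, and in practice the index would be read from the Climate Prediction Center file listed under Data availability.

```python
import numpy as np

def strong_enso_months(nino34, threshold=0.75):
    """Split a monthly Niño 3.4 series into strong El Niño / La Niña months.

    nino34 : 1-D array of monthly index values (K).
    Returns boolean masks for strong El Niño (> +threshold) and
    strong La Niña (< -threshold) months, following the +/-0.75 definition.
    """
    nino34 = np.asarray(nino34, dtype=float)
    return nino34 > threshold, nino34 < -threshold

# Synthetic example: one standard deviation of the 2005-2013 series is about
# 0.78, close to the +/-0.75 threshold used in the paper.
rng = np.random.default_rng(0)
fake_index = 0.78 * rng.standard_normal(108)          # 9 years of months
strong_nino, strong_nina = strong_enso_months(fake_index)
print(strong_nino.sum(), "strong El Niño months,", strong_nina.sum(), "strong La Niña months")
```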
Methods
For the present study, we use the 9 full years (2005-2013) of ozone analyses that have been completed. To calculate the TCO, we define the tropopause at each grid point as the lower of the 380 K potential temperature and 3.5 potential vorticity unit (1 PVU = 10^-6 m^2 K kg^-1 s^-1) surfaces. The daily TCO fields are smoothed horizontally by averaging each grid point with the eight adjacent neighboring points. Monthly mean TCO is computed from the daily values. We deseasonalize the TCO to remove the large seasonal variability by subtracting the respective 9-year mean for each month at each point.
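A minimal sketch of these two preprocessing steps is given below, assuming monthly mean TCO is already available as a (time, latitude, longitude) array; the treatment of the latitude boundaries is an assumption, since the paper does not describe it.

```python
import numpy as np

def smooth_8_neighbors(field):
    """Average each grid point with its eight neighbors (3x3 box mean).

    field : 2-D array (lat, lon). Longitude is treated as periodic, and
    the latitude edges simply replicate the nearest row (an assumption,
    the paper does not state its boundary handling).
    """
    padded = np.pad(field, ((1, 1), (0, 0)), mode="edge")
    padded = np.pad(padded, ((0, 0), (1, 1)), mode="wrap")
    out = np.zeros_like(field, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di: 1 + di + field.shape[0],
                          1 + dj: 1 + dj + field.shape[1]]
    return out / 9.0

def deseasonalize(monthly_tco):
    """Subtract the multi-year mean of each calendar month at every grid point.

    monthly_tco : array of shape (n_months, lat, lon), starting in January.
    """
    anomalies = monthly_tco.astype(float)
    for month in range(12):
        anomalies[month::12] -= monthly_tco[month::12].mean(axis=0)
    return anomalies
```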
We use multiple linear regression of the TCO monthly mean time series onto the Niño 3.4 index and the first four sine and cosine harmonics to evaluate the response of tropospheric ozone to ENSO. That is, TCO = Σ_i m_i X_i + ε, where the X_i are the index and harmonic time series, the m_i are the best-fit regression coefficients, and ε is the residual error. The regression is computed at every model grid point. The F-test is used to compute the confidence level of the explained variances (Draper and Smith, 1998). The calculated significance of the ozone sensitivity includes the impact from any autocorrelation in the residual time series (Tiao et al., 1990). We find that tests with time-lagged regressions from 1 to 6 months were generally no better than for zero-lag regressions. Therefore, the results presented herein are computed with no lag of the ozone time series. This is further discussed in Sect. 4.
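The essence of this regression at a single grid point can be sketched with ordinary least squares as follows. The inclusion of a constant term and the annual base period of the harmonics are assumptions, and the F-test and autocorrelation adjustment used in the paper are omitted here for brevity.

```python
import numpy as np

def enso_regression(tco_anom, nino34, n_harmonics=4, period=12.0):
    """Fit a deseasonalized TCO series at one grid point to the Niño 3.4 index
    plus the first n_harmonics sine/cosine pairs, by least squares.

    Returns the sensitivity (DU per K of Niño 3.4) and a rough fraction of
    variance attributable to the ENSO term alone (ignoring correlations
    between regressors, so only a first-order diagnostic).
    """
    tco_anom = np.asarray(tco_anom, dtype=float)
    t = np.arange(len(tco_anom), dtype=float)
    columns = [np.ones_like(t), np.asarray(nino34, dtype=float)]
    for j in range(1, n_harmonics + 1):
        columns.append(np.sin(2.0 * np.pi * j * t / period))
        columns.append(np.cos(2.0 * np.pi * j * t / period))
    design = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(design, tco_anom, rcond=None)
    sensitivity = coeffs[1]                         # DU per K of Niño 3.4
    enso_part = sensitivity * design[:, 1]
    explained = np.var(enso_part) / np.var(tco_anom)
    return sensitivity, explained
```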
Results
In this section, we examine the magnitude, spatial distribution, and mechanisms of the TCO response to ENSO. For reference, the multi-year annual mean TCO is shown in Fig. 2. The non-seasonal variability is indicated by overlaid contours of 1 standard deviation of the deseasonalized TCO expressed as a percent of the mean TCO. (Ziemke et al., 2014, illustrate the large seasonal variability.) The following two subsections present the explained variance and TCO sensitivity to the Niño 3.4 index. Changes to advection and convection contributing to the TCO response are examined in Sects. 3.3 and 3.4. Section 3.5 evaluates the ENSO-associated changes to the tropopause height and the impact on the TCO response. We conclude this section with a comparison to CTM results in Sect. 3.6 for the purpose of evaluating how robust the results from 9 years of ozone assimilation are compared to the longer term.
Explained variance
The percent variance of TCO explained by ENSO is shown in Fig. 3.The ENSO influence is greatest in the tropical Pacific where the variance explained has a maximum of about 55 %.This well-known tropical response is associated with increased convection and upwelling in the central and eastern Pacific during El Niño that lofts ozone-poor air into the mid-to upper-troposphere.The anomalous warm ocean current that runs southward along the South American coast during El Niño conditions (e.g., Trenberth, 1997) is evident in the tropospheric ozone response.A northeastward tongue of relatively large magnitude also extends towards and across Central America.An isolated significant maximum is also found between 20 and 30 • N in the subtropical Pacific with explained variance of greater than 20 %.
In the western Pacific and Indonesian region, ENSO is known to produce an opposite response to the central and eastern Pacific due to increased upward transport during La Niña conditions.Two lobes of significant explained variance of more than 20 % are symmetric around the equator in this region.Off the western coast of Australia, the southern lobe has a maximum of about 35 %.
The impact by ENSO is less in the subtropics and middle latitudes compared to the tropical Pacific.Still, the variance explained by ENSO is greater than 20 % and statistically significant in several isolated regions.Of particular note, the variance explained exceeds 25 % over South Africa and 20 % over the central United States.These areas correspond to locations where previous studies have found an ENSO signature in ground station, FTIR, and ozonesonde data (Balashov et al., 2014;Langford et al., 1998;Langford, 1999;Lin et al., 2015).The variance explained also exceeds 20 % in a small region south of New Zealand.Other midlatitude areas, such as the northern Pacific and Atlantic, exceed 10 % but are not statistically significant due to the length of the time series.
TCO sensitivity
The sensitivity of TCO per degree change in the Niño 3.4 index is another measure of the ozone response to ENSO determined by the regression analysis. The spatial distribution of the sensitivity is shown in Fig. 4. Over the time period studied here, we find the response to be linear with respect to the ENSO forcing. The large region of negative sensitivity in the central Pacific corresponding to the maximum in explained variance is a result of the increased lofting of ozone-poor air into the middle and upper troposphere under El Niño conditions. Thus, higher values of the Niño 3.4 index correspond to decreases in the TCO. The opposite sensitivity is found in the equatorially symmetric lobes over Indonesia and the eastern Indian Ocean where the increased lofting (decreased TCO) occurs with La Niña (negative Niño 3.4 values). In the subtropics, positive sensitivity is located between about 20 and 30° to the north and south of the large central Pacific minimum. In addition, relatively strong negative sensitivity exists over South Africa corresponding to the significant variance explained there. In the midlatitudes, a negative albeit weaker response is seen over the United States. Statistically significant negative responses are also found over the northern Pacific and Atlantic Oceans, and the Southern Ocean.
Changes in advection
We examine the differences in circulation patterns for strong El Niño and La Niña conditions to investigate the large-scale impact of the extratropical circulation relative to the ozone sensitivity. The streamlines of the difference in the mean winds at 200 hPa for months with Niño 3.4 index greater than 0.75 and less than −0.75 are overlaid on the ozone sensitivity contours in Fig. 4. In the Northern Hemisphere extratropics, anomalous cyclonic circulations coincide with the regions of negative sensitivity over central Asia, the north Pacific, the United States, and the north Atlantic. The north Pacific and United States circulations agree well with ENSO-associated upper-troposphere height anomalies observed by Mo and Livezey (1986) and Trenberth et al. (1998). Similar cyclonic circulations aligned with negative sensitivity in the Southern Hemisphere are seen over the southern Pacific Ocean and over the southern tip of South America. Similarly, anomalous anticyclonic flow is associated with positive sensitivity over much of the midlatitudes.
The meridional and vertical cross-section streamlines of the difference between the mean winds between 180 and 120° W for months with Niño 3.4 index greater than 0.75 and less than −0.75, respectively, are shown in Fig. 5. The positive and negative sensitivity patterns in this region shown in Fig. 4 coincide with the anomalous tropospheric downwelling and upwelling. In the tropics, the anomalous upwelling lofts ozone-poor air into the mid- and upper troposphere in agreement with previous studies. Northward of about 40° N, the tropospheric upwelling coincides with the cyclonic circulation and negative sensitivity shown in Fig. 4. This is consistent with increased upwelling induced by cyclonic circulation. Similarly, other anomalous cyclonic circulations associated with negative sensitivity over North America, the north Atlantic, and the southern tip of South America also correspond to regions of increased upwelling (not shown). The positive sensitivity between about 15 and 30° N corresponds with increased downwelling and evidence of increased cross-jet transport from the stratosphere into the troposphere in Fig. 5. Oman et al. (2013) find a similar positive sensitivity in this region and also in the Southern Hemisphere. The qualitative interpretation of the upwelling and downwelling shown in Fig. 5 is supported by comparison with the dynamical ozone tendency output by the assimilation system. Figure 6 shows the differences of the mean dynamical ozone tendencies averaged between 180 and 120° W for strong El Niño and La Niña months (the black line). The greatest differences occur in the mid to upper troposphere, so the net ozone tendencies are shown for the region between the tropopause and 350 hPa below the tropopause, which provides a constant mass comparison. In the tropics, the El Niño-La Niña difference in the dynamical tendencies ranges between −0.2 and −0.55 DU day^-1, consistent with greater upward transport of ozone-poor air during El Niño than La Niña. In the lower extratropics, the dynamical tendency differences increase to around 0.2 DU day^-1, corresponding with positive ENSO sensitivity in these regions and increased ozone during El Niño. Negative values of about −0.1 DU day^-1 exist between 40 and 50° latitude that correspond with negative sensitivity and upwelling. The small magnitudes at these latitudes are about 1/6 of the maximum tropical magnitude, which is consistent with the ratio of the sensitivities in these regions.
The positive sensitivity in the tropics around Indonesia corresponds with increased upwelling during La Niña conditions rather than with El Niño.This is evident in the downward oriented streamlines in Fig. 7 showing the circulation differences averaged between 85 and 120 • E for strong El Niño-La Niña months.In the tropics, the magnitude of the difference is smallest near the equator, resulting in the northern and southern tropical lobe structure of sensitivity maxima seen in Fig. 4. The difference is greater in the Southern Hemisphere and the streamlines indicate more stratosphere to troposphere transport than in the Northern Hemisphere as a possible reason for the greater sensitivity in the southern lobe located around 15 • S.
Changes in convection
In addition to the resolved advective vertical transport and stratosphere to troposphere transport, TCO can also respond to ENSO through changes in the vertical transport due to convection and mean depth of the tropospheric column (the tropopause height).This subsection examines the potential impact from convection using differences in OLR as a proxy.Changes in the tropopause height are presented in the following subsection.
The differences in the mean OLR for months with Niño 3.4 indices greater and less than 0.75 and −0.75 over the 9 years are shown in Fig. 8.The central Pacific is dominated by decreased OLR by up to 25 %, indicating greater convection under El Niño conditions.The maximum decrease is displaced to the west of the extrema of explained variance and TCO sensitivity to ENSO (Figs. 3 and 4,respectively).Over the Indonesian region, the OLR is increased by up to 16 %, indicating reduced convection.Here, the maximum OLR changes are offset to the east of the explained variance and sensitivity extrema.
These spatial offsets suggest that much of the tropical TCO sensitivity to ENSO is realized through the resolved advective transport. This is supported by the comparison of the analyses' convective and dynamical tendency differences. Figure 6 compares the El Niño-La Niña differences in the analysis mid- to upper-tropospheric convective ozone tendencies (red line) and dynamical tendencies (black line) between 180 and 120° W. In the tropics, the convective tendency differences range from −0.15 to 0.1 DU day^-1. The dynamical tendency differences are negative and the magnitudes are more than twice as great as the convective tendency differences. In the middle latitude north Pacific between 40 and 50° N, the magnitude of the El Niño-La Niña convective ozone tendency difference is similar to the dynamical tendency differences (Fig. 6). Thus, the impacts on the TCO sensitivity from the resolved transport and convection in this region are comparable, in contrast to the tropics where the resolved transport is dominant.
Impact from tropopause height differences
The sensitivity of the tropopause pressure to the Niño 3.4 index determined by regression analysis is shown in Fig. 9.The response of the tropopause pressure is generally symmetric about the equator over the Pacific Ocean.Under El Niño conditions, a slightly greater mean tropopause pressure (decreased height and shorter tropospheric column) occurs in the extratropics poleward of the climatological subtropical jet.Equatorward, decreased tropopause pressures occur with El Niño, except in the western tropical Pacific where there is a small positive response.The pattern of tropopause response in the Pacific is similar to the 200 hPa circulation anomalies in Fig. 4. The offset of the tropical response extrema to the north and south of the equatorial TCO response (Fig. 4) indicates that very little of the equatorial TCO response is attributable to changes in the depth of the tropospheric column.
The TCO response maxima around 25° N and 25° S generally coincide with regions where the tropopause height response is zero. This also suggests that the positive TCO response here may be impacted by increased stratosphere-to-troposphere transport of ozone-rich air across the subtropical jet.
Changes in the depth of the tropospheric column associated with ENSO have a greater impact on the TCO sensitivity in the middle latitudes than in the tropics.Throughout much of the midlatitudes, positive tropopause pressure sensitivity coincides with negative TCO sensitivity and vice versa.Particularly noteworthy in the extratropical Northern Hemisphere are the positive tropopause pressure sensitivity maxima over the northern Pacific, North America, northern Atlantic, and Asia.The positive and negative tropopause sensitivity over extratropical South America also aligns closely to the TCO response.
Both the changes in transport (including vertical advection, convection, and cross-tropopause transport) and the tropopause height can impact the magnitude of TCO.We use regression analysis of the mean tropospheric mixing ratio on the Niño 3.4 index to make a rough estimate of the relative influences of transport and tropopause height changes.The mean mixing ratio is directly sensitive to changes in the transport but not to the tropopause pressure.Note that the mean mixing ratio also inherently includes any dependence from changes in chemistry that are associated with ENSO (Sudo and Takahashi, 2001;Stevenson et al., 2005;Doherty et al., 2006).If the response is assumed linear with respect to changes in transport/chemistry and tropospheric column depth, the variances explained by the TCO and mean mixing ratio can provide a first order estimate of the relative roles of these factors.For example, if the TCO explained variance in a region is 25 % and the mixing ratio explained variance is 20 %, the tropopause height would account for an estimated 5 %, or 1/5, of the TCO response.
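Under the stated linearity assumption, this first-order attribution reduces to the small calculation below (a sketch of the reasoning, not the authors' code):

```python
def tropopause_fraction(ev_tco, ev_mixing_ratio):
    """First-order split of the TCO response between transport/chemistry and
    tropopause-height changes, assuming the two contributions add linearly.

    ev_tco          : variance of TCO explained by ENSO (e.g. 0.25 for 25 %).
    ev_mixing_ratio : variance of the mean tropospheric mixing ratio explained.
    Returns (fraction from transport/chemistry, fraction from tropopause height).
    """
    transport = ev_mixing_ratio / ev_tco
    return transport, 1.0 - transport

# Worked example from the text: 25 % vs. 20 % explained variance.
print(tropopause_fraction(0.25, 0.20))   # -> (0.8, 0.2), i.e. 1/5 from tropopause height
```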
The spatial pattern of the mean mixing ratio explained variance (not shown) is very similar to the TCO regression (Fig. 3) in both the tropics and midlatitudes.Throughout the tropics, the magnitudes of the variance explained are nearly identical.Thus, changes in transport/chemistry dominate the TCO response in this region.However, at middle latitudes the explained variance of mean mixing ratio is frequently less than that of the TCO, so the tropopause height plays a greater role.For the previously noted Northern Hemisphere negative sensitivity extrema, we estimate the tropopause height accounts for about a 1/4 of the TCO response to ENSO over the United States, 1/2 of the response over the North Pacific, and 2/3 of the North Atlantic sensitivity.The tropopause height is responsible for about 1/5 of the negative sensitivity around midlatitude South America.Also, only about 1/5 or less of the positive TCO response in the subtropical Pacific around the climatological subtropical jets is attributable to changes in the tropopause height.
Representativeness of the 9-year assimilation time series
We use the 22-year (1991-2012) GMI CTM simulation described in Sect.2.2 to show that the results from the 9 years of assimilation are representative of the longer-term TCO response to ENSO.The percentage of the simulated TCO variance explained by ENSO during 2005-2012 is shown in Fig. 10a for comparison with the assimilated ozone results over nearly the same time period (i.e., Fig. 3).The spatial distribution of the simulated TCO response is very similar.The maximum variance explained occurs in the central Pacific.
The northeast and southeast split towards Central and South America is evident, but the southern fork is not as prominent.In the area of Indonesia, the simulated explained variance exhibits the same lobe-like structure symmetric about the equator.The maximum over the subtropical Pacific and isolated maxima over the United States and South Africa also agree well with the assimilated ozone results.Likewise, the ozone sensitivity to ENSO in the simulation is very similar to the results from the assimilation (not shown).The sensitivity patterns previously discussed relative to the assimilation are well represented in the simulation although the magnitude of the sensitivity is generally slightly greater in the simulation.Regression analysis of the 22-year time span of the hindcast simulation reveals that much of the TCO response determined from the 9 years of assimilation is consistent with the longer-term response (Fig. 10b).Use of the longer time series also increases the area in which the explained variance is statistically different from zero, particularly in the middle latitudes.The shape and magnitude of the tropical explained variance is similar to the results from the shorter time period.Two differences are the reduced magnitude extending into the Northern Hemisphere Atlantic and the slight equatorward shift in the location of the Southern Hemispheric lobe in the Indonesian region.In the southern subtropical Pacific near 25 • S, the maximum in variance explained is more prominent.Conversely, the maximum in the northern subtropical Pacific is suppressed over the longer-term.However, there remains an enhancement of greater than 15 % explained variance near 135 • W between 15 and 30 • N that is consistent with the shift in the exit region of the subtropical jet and the associated secondary circulation (Langford, 1999).Lin et al. (2014) find a strong ENSO signature in free tropospheric ozone from 40 years of observations over Mauna Loa.This is in the region where the variance explained is reduced in our 22-year simulation compared to the shorter assimilated and simulated time series.The simulated ozone sensitivity around Mauna Loa in the longer time series is very similar to the sensitivity found using the shorter time series (not shown).However, the TCO variability is greater over the longer time period, at least partially accounting for the reduced variance explained.
In the extratropical northern Pacific, corresponding to the location of negative sensitivity in Fig. 4, the explained variance is 10-15 % and statistically significant.The signal over the United States and South Africa persists in the 22-year regression at over 20 % explained variance.Over midlatitude Europe and Asia, the spatial pattern of the explained variance differs between the 22-year and 8-year regression results.This may be indicative of the variability and trends of emissions being much more dominant than the ENSO influence in this region.
Tropical response
The tropical tropospheric ozone response to ENSO has been extensively studied in many previous observational and model investigations. The tropical response in the OMI/MLS ozone analyses agrees well with these prior investigations and verifies the analyses. However, many studies that evaluate the spatial distribution of the response do not show a two-lobe structure in the western Pacific/Indonesian region as seen in the present study (e.g., Ziemke and Chandra, 2003). Nevertheless, our results confirm that the two-lobed response to the 2006 El Niño seen in OMI-MLS TCO residual fields by Chandra et al. (2009) and in TES observations by Nassar et al. (2009) is a robust response evident when considering more than that single event. Furthermore, Nassar et al. (2009) used a tropospheric CTM to show that this structure is predominantly of dynamical origin rather than from biomass burning emissions. The two-lobe structure is also suggested in the ozone sensitivity computed from regression of 5 years of TES data shown by Oman et al. (2013) in their Fig. 5a. We find that the symmetric response is likewise well simulated by the GMI CTM driven by assimilated meteorology (Fig. 10). However, the free-running GEOS-5 Chemistry Climate Model simulation examined by Oman et al. (2013) produces a single, broad response centered on the Equator (their Fig. 5b) where the vertical wind differences are consistent with the single, centered response. This demonstrates that the ozone response is sensitive to changes in the advective transport that must be well simulated to reproduce the observed tropospheric response.
Timing of the response
As discussed in Sect.2, sensitivity tests of possible lags in the ozone response in the regression analysis did not increase the correlation between the regressed ozone and Niño 3.4 index or increase the explained variance.In general, the correlation and explained variance remain nearly constant or decreasing with lag times of 1 or 2 months in the middle latitudes.The correlations generally decrease rapidly with longer lag times.This lack of improved regressions using longer lag times indicates that there is minimal impact from long-range transport, including transport in the stratosphere that modulates lower stratospheric ozone concentrations and hence, the magnitude of large-scale stratosphere to troposphere exchange of ozone.This is consistent with previous studies that find little relation between ENSO and large-scale stratosphere-troposphere exchange at midlatitudes (e.g., Hsu and Prather, 2009;Hess et al., 2015).In the present study, the changes to transport and tropopause height contributing to the TCO response act over shorter timescales and potentially impact the entire or large portions of the tropospheric column.
Regional aspects of the midlatitude response
In the middle latitudes, the statistically significant variance explained by ENSO shown in this study occurs over small-scale regions, so it is not surprising that some previous studies fail to find an ENSO influence over large-scale regions or in many surface-based observations. For example, there is no statistically significant explained variance over the midlatitude regions of Canada, Central Europe, and Japan considered by Hess et al. (2015). These regions also remain insignificant in the 22-year CTM simulation in the present study.
Conversely, Langford et al. (1998) demonstrate a correlation of ENSO with lidar observations of ozone near Boulder, Colorado from 1993 to 1998. This coincides with the location of significant explained variance and negative sensitivity we show in Figs. 3 and 4. However, Langford et al. (1998) show a positive correlation of mid-tropospheric ozone with the ENSO time series where the ozone signal lags ENSO by a few months. The lidar ozone anomalies are correlated with the subtropical jet exit region in the northeastern Pacific (Langford, 1999). He hypothesizes that transverse circulation across an El Niño-shifted jet exit region brings stratospheric air into the subtropical troposphere where it descends with the secondary circulation and is then transported northward to the central United States. In the present study, the suggestion of increased localized stratosphere-to-troposphere transport and subsequent downwelling in the northern subtropical Pacific is supported by the meridional cross-section of the anomalous wind field (Fig. 5) and the relatively large TCO response evident in the explained variance and sensitivity (Figs. 3 and 4). It is possible that episodic events may bring anomalously high ozone air to the central United States from the subtropics that can impact at least a portion of the tropospheric column. However, we find that the immediate negative influence by the ENSO-driven vertical transport and tropopause height changes is dominant when considering the entire tropospheric column.
Furthermore, the model evaluation by Lin et al. (2015) reproduces the positive correlation over the Colorado region for the time period studied by Langford et al. (1998), but the correlation is not evident when they consider the longer time period from 1990 to 2012.They show that more frequent springtime stratospheric intrusions following La Niña winters contribute to increased ozone at the surface and free troposphere in the western United States.Since the stratospheric intrusions are associated with enhanced stratosphere to troposphere transport, this can significantly increase the TCO through an influx of ozone-rich air at lower altitudes.The negative sensitivity over the United States shown in the present study is consistent with these results of Lin et al. (2015).
South African region
We find significant explained variance and sensitivity of TCO around subtropical South Africa.This is consistent with the findings of Balashov et al. (2014) who show a correlation of surface observations of ozone with ENSO.They attribute this association to increased ozone formation from anthropogenic emissions under warmer and drier conditions occurring with El Niño.
Unlike most of the midlatitude TCO response, the processes that drive the TCO response in the southern Africa region are not clear considering the mechanisms investigated in this study.A meridional cross-section of the difference in the resolved advective winds averaged between 15 and 55 • E for strong El Niño and La Niña months (not shown) does not indicate coherent upwelling consistent with the negative sensitivity found there.Overall, there is weak anomalous downward transport between about 5 and 11 km in this region.The differences in OLR (Fig. 8) are also not consistent with unresolved convection as the source of the negative sensitivity.The tropopause height sensitivity to ENSO in this region (Fig. 9) is positive and similar to the spatial pattern of TCO sensitivity (Fig. 4) but is weak compared to the relatively strong TCO response.Therefore, much of the TCO response may be due to ENSO-related changes in the ozone chemistry, similar to the Balashov et al. ( 2014) results using surface ozone data, although this requires further investigation beyond the scope of this study.
Summary
The assimilation of OMI and MLS data enables this first comprehensive study of the TCO response along with the ancillary information to interpret and explain the results.We have used regression analysis of the TCO to provide an observationally constrained evaluation of the magnitude and spatial distribution of the ENSO impact on TCO throughout the middle latitudes.Prior results of the TCO response outside the tropics have been contradictory and limited by the spatial distribution and sparseness of available data.The present study is able to unify and explain many aspects of the seemingly disparate findings reported by previous studies.
While the examination of the response in the tropics is included primarily for completeness and verification of the analyses, we particularly note two results.We find that changes in the large-scale transport dominate the changes in convective transport to produce the TCO response throughout much of the tropics.We also show that a two-lobe response around Indonesia symmetric about the Equator, seen in prior studies of the 2006 El Niño, is not unique to that event.
The midlatitude ozone response to ENSO is not as strong as in the tropics.However, the explained variance is statistically significant over several small regions for the 9-year analysis, such as over the United States and south of New Zealand.Other areas have an explained variance of greater than 10 % that the 22-year CTM simulation suggests would be statistically significant with a longer observation period.These regions include the northern Pacific and around midlatitude South America.
The TCO sensitivity to ENSO is relatively small but statistically significant over much of the midlatitudes.These regions of negative (positive) sensitivity are coincident with anomalous cyclonic (anticyclonic) circulation.The anomalous circulations are associated with upwelling and downwelling that are consistent with the sign of sensitivity.In addition to the contribution by transport, changes in the tropopause height can contribute substantially to the middle latitude TCO response by altering the depth of the tropospheric column.
This study using analyses of OMI and MLS ozone provides the first explicit spatially resolved characterization of the ENSO influence and demonstrates coherent patterns and teleconnections impacting the TCO in the extratropics.Although relatively weak, the ENSO-driven variability needs to be considered in investigations of midlatitude tropospheric ozone, particularly on regional scales.The spatial variability of the TCO response indicates the ENSO influence is likely statistically insignificant for hemispheric studies or over other broad areas.However, the variance explained by ENSO can be 10 % or greater over smaller regions like the United States, midlatitude South America, and South Africa.Thus, it will be important in attributing the sources of variability and trends in TCO, such as by human-related activity.These results are potentially useful for evaluating the spatially dependent model response of TCO to ENSO forcing.In the extratropics, the ENSO signal is convolved with large extratropical circulation variability from other sources.Thus, additional factors may need to be considered when evaluating the midlatitude response in free-running models, particularly in ensemble simulations.
Data availability
The assimilated data used in this study are available through the Aura Validation Data Center website: http://avdc.gsfc.nasa.gov.The Niño 3.4 index used in this study is available from the NOAA Climate Prediction Center at http://www.cpc.ncep.noaa.gov/data/indices/.The OLR data are provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their web site at http://www.esrl.noaa.gov/psd/.
Figure 1. Time series of the Niño 3.4 index (K) from 1991 through 2013. The time period of ozone analyses is the black line (2005-2013). The red line indicates the additional years covered by the GMI simulation. Dashed lines are +0.75 and −0.75, which are considered strong El Niño and La Niña conditions in this study.
Figure 2. The 2005-2013 annual mean TCO (color contours) from the analyses. Black contours indicate one standard deviation of the deseasonalized TCO expressed as a percent of the annual mean TCO. Black contour interval is 0.5 %.
Figure 3. The deseasonalized TCO variance explained by ENSO from the linear regression over 2005-2013. Crosshatched areas denote where the confidence level of the explained variance being different from zero is less than 95 %. The increment of the white contours is 5 %.
Figure 4. The TCO sensitivity to the Niño 3.4 index from the linear regression over 2005-2013 (color contours). The sensitivity is expressed as the change in the TCO per degree change in the index (DU K^-1). Crosshatched regions denote where the sensitivity is not statistically different from zero at the 95 % confidence level. White contours are incremented every 0.3 DU K^-1. The streamlines show the difference between the mean winds at 200 hPa for months with strong El Niño conditions (Niño 3.4 index greater than 0.75) minus months of strong La Niña conditions (Niño 3.4 index less than −0.75). The thickness of the streamlines is scaled to the magnitude of the difference. Particularly note the midlatitude regions of negative and positive sensitivity aligned with anomalous cyclonic and anticyclonic circulations, as discussed in the text.
Figure 5. Streamlines of the difference between the mean vertical and meridional winds for months with strong El Niño conditions minus months of strong La Niña conditions from 2005 to 2013. The means are calculated between 180 and 120° W. The width of the streamlines is proportional to the magnitude of the difference. The dashed line indicates the mean tropopause pressure for strong El Niño months. Solid contours are the zonal mean wind for strong El Niño months.
Figure 6. The dynamical (black) and convective (red) ozone tendency differences between months of strong El Niño and La Niña conditions from the assimilation system over 2005-2013. The means are calculated between 180 and 120° W, matching that of Fig. 5.
Figure 8. Difference in the outgoing longwave radiation (OLR) for months with strong El Niño conditions minus months of strong La Niña conditions from 2005-2013. The differences are expressed as percent of the annual mean OLR. Thin white lines are incremented every 2 %.
Figure 9. The sensitivity of tropopause pressure to the Niño 3.4 index from linear regression over 2005-2013. The sensitivity is expressed as the change in tropopause pressure per degree change in the index (hPa K^-1). Crosshatched regions denote where the sensitivity is not statistically different from zero at the 95 % confidence level. White contours are incremented every 2 hPa K^-1.
Figure 10. The deseasonalized TCO variance explained by ENSO in the GMI CTM simulation for years (a) 2005-2012 and (b) 1991-2012. Crosshatched areas denote where the confidence level of the explained variance being different from zero is less than 95 %. The increment of the white contours is 5 %. | 2018-12-20T19:54:01.741Z | 2015-12-16T00:00:00.000 | {
"year": 2016,
"sha1": "76aa7136720acd04de3e5191ba1e1eb07994ec08",
"oa_license": "CCBY",
"oa_url": "https://www.atmos-chem-phys.net/16/7091/2016/acp-16-7091-2016.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "76aa7136720acd04de3e5191ba1e1eb07994ec08",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
201106232 | pes2o/s2orc | v3-fos-license | Online pseudo Marginal Sequential Monte Carlo smoother for general state spaces. Application to recursive maximum likelihood estimation of stochastic differential equations
This paper focuses on the estimation of smoothing distributions in general state space models where the transition density of the hidden Markov chain or the conditional likelihood of the observations given the latent state cannot be evaluated pointwise. The consistency and asymptotic normality of a pseudo marginal online algorithm to estimate smoothed expectations of additive functionals when these quantities are replaced by unbiased estimators are established. A recursive maximum likelihood estimation procedure is also introduced by combining this online algorithm with an estimation of the gradient of the filtering distributions, also known as the tangent filters, when the model is driven by unknown parameters. The performance of this estimator is assessed in the case of a partially observed stochastic differential equation.
Introduction
The data considered in this paper originate from general state space models, usually defined as bivariate stochastic processes {(X_k, Y_k)}_{1 ≤ k ≤ n} where {Y_k}_{1 ≤ k ≤ n} are the observations and {X_k}_{1 ≤ k ≤ n} are the latent states, commonly assumed to be a Markov chain. When both processes take values in general spaces, the estimation of the conditional distribution of a sequence of hidden states given a fixed observation record is a challenging task, required for instance to perform maximum likelihood inference. Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods (also known as particle filters or smoothers) are widespread solutions to propose consistent estimators of such distributions. This paper sets the focus on the special case where the conditional likelihood of an observation given the corresponding latent state (also known as the emission distribution) or the transition density of the hidden Markov chain cannot be evaluated pointwise, while they are pivotal tools of both MCMC and SMC approaches. The first objective of this paper is to prove that conditional expectations of additive functionals of the hidden states may still be estimated online with a consistent and asymptotically normal SMC algorithm. A recursive maximum likelihood estimation procedure based on this algorithm and using an approximation of the gradient of the filtering distributions, referred to as the tangent filters, is then introduced.
The use of latent data models is ubiquitous in time series analysis across a wide range of applied science and engineering domains such as signal processing [6], genomics [36,35], target tracking [33], and enhancement and segmentation of speech and audio signals [31]; see also [32,14,37] and the numerous references therein. Statistical inference for such models is likely to require the computation of conditional expectations of sequences of hidden states given observations. In this Bayesian setting, one of the most challenging problems is the approximation of expectations under the joint smoothing distribution, i.e. the posterior distribution of the sequence of states (X_1, ..., X_n) given the observations (Y_1, ..., Y_n) for some n ≥ 1. This computation is not tractable in the framework of this paper, where it is assumed that the transition density of the hidden process or the conditional likelihood of observations given states cannot be computed. This situation is common, for instance, in the case of partially observed stochastic differential equations (SDE), or in models where the emission distribution relies on a computationally prohibitive black-box routine.
Following [18,21], this paper concentrates on SMC methods to approximate smoothing distributions with a random set of states, the particles, associated with importance weights by combining importance sampling and resampling steps. This makes it possible to solve the filtering problem by combining an auxiliary particle filter with an unbiased estimate of the unknown densities. Then, the online smoother of [21] extends the particle-based rapid incremental smoother (PaRIS) of [28] to approximate, processing the data stream online, smoothed expectations of additive functionals when the unknown densities are replaced by unbiased estimates. This approach is an online version of the Forward Filtering Backward Simulation algorithm [11] specifically designed to approximate smoothed additive functionals. The crucial feature which makes the PaRIS algorithm appealing is the acceptance-rejection step, which benefits from the unbiased estimation. The extension of the usual alternative, named the Forward Filtering Backward Smoothing algorithm [15], is more sensitive as it involves ratios of these unknown quantities. Other smoothing algorithms such as two-filter based approaches [2,19,25] could be extended similarly, but they are intrinsically not online procedures as they require the time horizon and all observations to be available to initialize a backward information filter.
In [21], the only theoretical guarantee is that the accept-reject mechanism of the PaRIS algorithm is still correct when the transition densities are replaced by unbiased estimates. In this paper, the consistency of the algorithm as well as a central limit theorem (CLT) are established (see Proposition 4.2 and Proposition 4.3 in Section 4.2). This makes this pseudo marginal smoother the first algorithm to approximate such expectations in the general setting of this paper with theoretical guarantees and an explicit expression of the asymptotic variance. As a byproduct, the proofs of these results require establishing exponential deviation inequalities and a CLT for the PaRIS algorithm based on the auxiliary particle filter, see Section 4.1. This extends the result of [28], written only in the case of the bootstrap filter of [22]. This also extends the theoretical guarantees obtained for online sequential Monte Carlo smoothers given in [11,9,17,20].
The second part of the paper is devoted to recursive maximum likelihood estimation when the emission distributions or the transition densities depend on an unknown parameter, see Section 5. Following the filter sensitivity approach of [5,Section 10.2.4], the pseudo marginal smoother is used to estimate online the gradient of the one-step predictive likelihood of an observation given past observations. This procedure allows to perform online estimation in complex frameworks and is applied in Section 6 to partially observed SDE.
Online Sequential Monte Carlo smoother
Let n be a positive integer and X and Y two general state spaces. Consider a distribution χ on B(X) and the Markov transition kernels (Q_k)_{0 ≤ k ≤ n−1} on X × B(X) and (G_k)_{0 ≤ k ≤ n−1} on X × X × B(Y). Throughout this paper, for all 0 ≤ k ≤ n−1, G_k has a density g_k with respect to a reference measure ν on B(Y). In the following, F(Z) denotes the set of real valued measurable functions defined on the set Z. Let (Y_k)_{1 ≤ k ≤ n} be a sequence of observations in Y and define the joint smoothing distributions, for any 0 ≤ k_1 ≤ k_2 ≤ n and any function h ∈ F(X^{k_2−k_1+1}), by:

φ_{k_1:k_2|n}[h] = L_n^{−1} ∫ h(x_{k_1:k_2}) χ(dx_0) ∏_{k=0}^{n−1} Q_k(x_k, dx_{k+1}) g_k(x_k, x_{k+1}, Y_{k+1}),   (1)

where a_{u:v} is a short-hand notation for (a_u, ..., a_v) and

L_n = ∫ χ(dx_0) ∏_{k=0}^{n−1} Q_k(x_k, dx_{k+1}) g_k(x_k, x_{k+1}, Y_{k+1})   (2)

is the observed data likelihood. For all 0 ≤ k ≤ n−1, Q_k has a density q_k with respect to a reference measure µ on B(X). The initial measure χ is also assumed to have a density with respect to µ, which is also referred to as χ. For all 0 ≤ k ≤ n, φ_k = φ_{k:k|k} are the filtering distributions, π_{k+1} = φ_{k+1:k+1|k} are the one-step predictive distributions, while φ_{k|n} = φ_{k:k|n} are the marginal smoothing distributions.
Consider a latent Markov chain (X_k)_{0 ≤ k ≤ n} with initial distribution χ and Markov transition kernels (Q_k)_{0 ≤ k ≤ n−1}. The states (X_k)_{0 ≤ k ≤ n} are not available, so that any statistical inference procedure is performed using the sequence of observations (Y_k)_{1 ≤ k ≤ n} only. The observations are assumed to be independent conditionally on (X_k)_{0 ≤ k ≤ n} and such that, for all 1 ≤ ℓ ≤ n, the conditional distribution of Y_ℓ given (X_k)_{0 ≤ k ≤ n} is G_{ℓ−1}((X_{ℓ−1}, X_ℓ), ·). In this case, (1) may be interpreted as the conditional expectation of h(X_{k_1:k_2}) given Y_{1:n}. Note that, when for all 0 ≤ k ≤ n − 1 g_k only depends on its last two arguments, (2) is the likelihood of a standard hidden Markov model. In such models, computing (1) makes it possible to solve classical problems such as: i) path reconstruction, i.e. the reconstruction of the hidden states given the observations; ii) parameter inference, i.e., when q_k and g_k depend on some unknown parameter θ, the design of a consistent estimator of θ from the observations.
As (1) is, in general, not available explicitly, this paper focuses on a sequential Monte Carlo based approximation specifically designed for cases where q_k and/or g_k cannot be evaluated pointwise. Partially observed diffusion processes (POD) [27], where the latent process is the solution to a stochastic differential equation, are widespread examples in which q_k is not tractable.
Recursive formulation of (1) for additive functionals. For all 0 ≤ k ≤ n − 1, define the kernel L_k on X × B(X), acting on all x ∈ X and all f ∈ F(X); in the following, 1 denotes the constant function which equals 1 for all x ∈ X. Following for instance [4], the joint smoothing distributions φ_{0:n|n} may be decomposed using backward Markov kernels defined, for all 0 ≤ k ≤ n − 1, all x_{k+1} ∈ X and all f ∈ F(X), in terms of φ_k and q_k. Consequently, the joint smoothing distribution φ_{0:n|n} may be expressed, for all h ∈ F(X^{n+1}), as (5), where, for all Markov kernels K_1, K_2 on X × B(X), all f ∈ F(X²) and all x ∈ X, a standard product of kernels is used. In this paper, the focus is set on additive functionals of the form (7), with, for all 0 ≤ k ≤ n − 1, h̄_k : X × X → R^p for some p ≥ 1. The additive form of the function h_n defined in (7) allows the backward statistics (T_k h_k)_{k≥0} to be updated recursively, see [3,9]; the recursion is given in (8) for all k ≥ 0. By (5) and (8), the smoothed additive functional (5) can be updated recursively each time a new observation is available. However, its exact computation is not possible in general state spaces. In this paper, we propose to approximate φ_{0:n|n}[h_n] using SMC methods: φ_n in (5) and ←Q_{φ_k} in (8) are replaced by a set of random samples associated with nonnegative importance weights. These particle filter and smoother approximations combine sequential importance sampling steps, to update φ_n recursively, and importance resampling steps, to duplicate or discard particles according to their importance weights.
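Several of the displays referenced in the preceding paragraph did not survive extraction. As a point of reference only, and in notation that may differ from the original displays, the backward decomposition and the additive-functional recursion used by PaRIS-type smoothers are usually written as
\[
\overleftarrow{Q}_{\phi_k}(x_{k+1}, f) \;=\; \frac{\phi_k\!\left[q_k(\cdot, x_{k+1})\, f\right]}{\phi_k\!\left[q_k(\cdot, x_{k+1})\right]},
\qquad
T_{k+1} h_{k+1}(x) \;=\; \overleftarrow{Q}_{\phi_k}\!\left(x,\; T_k h_k + \bar h_k(\cdot, x)\right),
\]
with T_0 h_0 = 0 and φ_{0:n|n}[h_n] = φ_n[T_n h_n].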
Sequential Monte Carlo for additive functionals. Let (ξ_0^ℓ)_{ℓ=1}^N be independent and identically distributed according to the instrumental proposal density ρ_0 on X and define the associated importance weights ω_0^ℓ, 1 ≤ ℓ ≤ N
, see for instance [8]. Then, for all k ≥ 1, once the observation Y_k is available, the weighted particle sample {(ω_{k−1}^ℓ, ξ_{k−1}^ℓ)}_{ℓ=1}^N is transformed into a new weighted particle sample approximating φ_k. This update is carried through in two steps, selection and mutation, using the auxiliary sampler introduced in [29]. New indices and particles {(I_k^ℓ, ξ_k^ℓ)}_{ℓ=1}^N are simulated independently from an instrumental distribution with density on {1, ..., N} × X, where ϑ_{k−1} is an adjustment multiplier weight function and p_{k−1} a Markovian transition density. For any ℓ ∈ {1, ..., N}, ξ_k^ℓ is associated with the importance weight ω_k^ℓ defined by (10), to produce the approximation φ_k^N[f] of φ_k[f]. For all k ≥ 0 and all (x, f) ∈ X × F(X), a particle backward kernel is obtained by replacing φ_k by φ_k^N in (4). The forward-filtering backward-smoothing (FFBS) algorithm proposed in [9] consists in replacing ←Q_{φ_k} by this particle approximation in (8). Proceeding recursively, this produces a sequence of estimates (τ_k^i)_{1≤i≤N} for 0 ≤ k ≤ n. Starting with τ_0^i = 0 for all 1 ≤ i ≤ N, this yields the recursion (12) for all 0 ≤ k ≤ n − 1. Then, at each iteration 0 ≤ k ≤ n − 1, φ_{0:k+1|k}[h_{k+1}] and φ_{0:k+1|k+1}[h_{k+1}] are approximated using the weighted particles and the statistics τ_{k+1}^i. The computational complexity of the update (12) grows quadratically with the number of particles N. This computational cost can be reduced, following [28], by first replacing (12) by a Monte Carlo estimate based on backward index draws (J_{k+1}^{(i,j)}), where the sample size Ñ is typically small compared to N. Acceptance-rejection procedure. The computational complexity of the described approach is still of order N² since it requires the normalising constant of the backward weights for every particle ξ_{k+1}^i, 1 ≤ i ≤ N. A faster algorithm is obtained by applying the accept-reject sampling approach proposed in [11] and illustrated in [16], which presupposes that there exists a constant M > 0 such that r_k(x, x′) ≤ M for all (x, x′) ∈ X × X. Then, in order to sample from the distribution proportional to (ω_k^ℓ r_k(ξ_k^ℓ, ξ_{k+1}^i))_{ℓ=1}^N, a candidate J* ∼ (ω_k^i)_{i=1}^N is accepted with probability r_k(ξ_k^{J*}, ξ_{k+1}^i)/M. This procedure is repeated until acceptance. Under strong mixing assumptions it can be shown, see for instance [11, Proposition 2] and [28, Theorem 10], that the expected number of trials needed by this approach remains bounded, so that the overall complexity of the update is linear in N.
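To make the accept-reject step concrete, here is a minimal sketch of a PaRIS-type backward-index draw and statistic update. It assumes a generic transition density q with a known upper bound M, as above; the names, signatures and the choice of Python are illustrative only and are not taken from the paper or from any reference implementation.

```python
import numpy as np

def sample_backward_index(rng, xi_prev, log_w_prev, x_new, q_dens, M):
    """Draw one backward index J with P(J = l) proportional to w_prev[l] * q(xi_prev[l], x_new),
    by accept-reject using the bound q <= M."""
    w = np.exp(log_w_prev - log_w_prev.max())
    w /= w.sum()
    while True:
        j = rng.choice(len(w), p=w)              # candidate proposed from the filtering weights
        if rng.uniform() * M <= q_dens(xi_prev[j], x_new):   # accept with probability q / M
            return j

def paris_update(rng, xi_prev, log_w_prev, tau_prev, xi_new, h_bar, q_dens, M, n_tilde=2):
    """Update the backward statistics tau for each new particle using n_tilde backward draws."""
    N = len(xi_new)
    tau_new = np.zeros(N)
    for i in range(N):
        draws = [sample_backward_index(rng, xi_prev, log_w_prev, xi_new[i], q_dens, M)
                 for _ in range(n_tilde)]
        tau_new[i] = np.mean([tau_prev[j] + h_bar(xi_prev[j], xi_new[i]) for j in draws])
    return tau_new
```

Because each candidate is accepted with probability q/M, the accepted index is distributed proportionally to w_prev[l] * q(xi_prev[l], x_new), which is exactly the backward distribution targeted by the quadratic-cost update, but without computing its normalising constant.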
Pseudo marginal Sequential Monte Carlo smoother
In many applications, Sequential Monte Carlo methods cannot be used as the transition densities q_k or g_k, 0 ≤ k ≤ n − 1, are unknown. The following crucial steps, which rely on r_k, are then not tractable: (a) computation of the importance weights ω_k in (10); (b) computation of the acceptance ratio (14).
To overcome these issues, following [21], consider the following algorithm.
H1 There exist a Markov kernel R_k from X × X to (Z, B(Z)), where (Z, B(Z)) is a general state space, and a positive mapping r_k on X × X × Z such that, for all (x, x′) ∈ X², the mean of r_k(x, x′; ζ) when ζ has distribution R_k(x, x′, ·) equals r_k(x, x′).
Then, under H1, if, conditionally on F_k^N ∨ G_{k+1}^N, ζ_k^ℓ has distribution R_k evaluated at the corresponding pair of particles, the filtering weights become the estimated weights ω̃_k^ℓ of (16), and, for all f ∈ F(X) and all 0 ≤ k ≤ n, φ_k[f] is approximated by the corresponding weighted particle average. To solve issue (b), [21] ensured that, under several assumptions, the acceptance-rejection mechanism introduced to implement the PaRIS algorithm is still valid for stochastic differential equations. Consider the following assumption.
H2 For all 0 ≤ k ≤ n, there exists a random variable M_k measurable with respect to G_{k+1}^N such that sup_{x,y,ζ} r_k(x, y; ζ) ≤ M_k.
If this assumption holds, the accept-reject mechanism of the PaRIS algorithm is replaced by the following steps. For all 1 ≤ i ≤ N and all 1 ≤ j ≤ Ñ, a candidate J* is sampled in {1, ..., N} with probabilities proportional to (ω̃_k^i)_{i=1}^N and is accepted with probability r_k(ξ_k^{J*}, ξ_{k+1}^i; ζ)/M_k, where ζ has distribution R_k(ξ_k^{J*}, ξ_{k+1}^i; ·). Then, J_{k+1}^{(i,j)} is set to the accepted index.
Lemma 3.1. Assume that H1 and H2 hold. Then, for all 0 ≤ k ≤ n − 1 and all 1 ≤ i ≤ N, (J_{k+1}^{(i,j)})_{1≤j≤Ñ} are i.i.d. and independent of ω̃_{k+1}^i given F_k^N ∨ G_{k+1}^N, and such that, for all 1 ≤ ℓ ≤ N, the probability that J_{k+1}^{(i,j)} equals ℓ is proportional to ω̃_k^ℓ r_k(ξ_k^ℓ, ξ_{k+1}^i), where ω̃_k is defined by (16).
Proof. The proof follows the same lines as [21, Lemma 1].
The proposed algorithm therefore leads to an estimator of the expectation (1) in the general setting of this paper. The following section provides consistency and asymptotic normality results for this estimator.
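For illustration, the pseudo marginal modification of the accept-reject step described above can be sketched as follows. Here r_hat_sampler stands for the unbiased estimate postulated in H1 and M_k for the bound of H2; the helper names are hypothetical and the sketch is not taken from [21].

```python
import numpy as np

def sample_backward_index_pm(rng, xi_prev, w_tilde, x_new, r_hat_sampler, M_k):
    """Pseudo marginal accept-reject draw of a backward index.

    The unknown density r_k(x, x') is never evaluated: at every trial a fresh
    unbiased, nonnegative estimate is produced by r_hat_sampler(x, x'), which is
    assumed to draw zeta ~ R_k(x, x', .) internally and return r_k(x, x'; zeta),
    with sup of the estimate bounded by M_k."""
    p = np.asarray(w_tilde, dtype=float)
    p /= p.sum()
    while True:
        j = rng.choice(len(p), p=p)                  # candidate from the estimated filter weights
        estimate = r_hat_sampler(xi_prev[j], x_new)  # fresh unbiased estimate of r_k
        if rng.uniform() * M_k <= estimate:          # accept with probability estimate / M_k
            return j
```

Since a fresh estimate is drawn at every trial, the acceptance probability averages to r_k/M_k, which is why, by Lemma 3.1, the accepted indices have the same conditional distribution as if the true r_k had been used.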
Auxiliary Particle filter based PaRIS algorithms
In [26], the authors established the consistency and asymptotic normality of the PaRIS algorithm for the bootstrap filter, i.e. in the simple case where, for all 0 ≤ k ≤ n − 1, ϑ_k is the constant function equal to 1 and p_k = q_k. This section extends these convergence results to the general auxiliary particle filter based PaRIS algorithm, as such filters are required for the pseudo marginal smoother. Consider the following assumptions.
H3 For all 0 ≤ k ≤ n − 1, g_k is a positive function such that ‖g_k‖_∞ < ∞.
Lemma 4.1. Assume that H3 holds. Then, for all 0 ≤ k ≤ n, (f_k, f̃_k) ∈ F(X)² and Ñ ≥ 1, there exist (c_k, c̃_k) ∈ (R_+)² such that the corresponding exponential deviation inequality holds for all N ≥ 1 and all ε > 0.
Proof. The proof follows the same lines as the proof of [26, Theorem 1].
Lemma 4.2. Assume that H3 holds. Then, for all 0 ≤ k ≤ n, f_k ∈ F(X) and Ñ ≥ 1, the stated convergence holds, where η_0[f_0] = 0 and, for all 0 ≤ k ≤ n − 1, η_{k+1}[f_{k+1}] is given by the recursion (19).
Proof. The proof is postponed to Section B.1.
Following [26, Lemma 13], for all 0 ≤ k ≤ n and f_k ∈ F(X), the recursion given in Lemma 4.2 may also be expressed in the equivalent form used in the proofs below. Establishing a central limit theorem for PaRIS algorithms requires introducing the retro-prospective kernels D_{k,m}, defined for all 0 ≤ k ≤ m ≤ n, x_k ∈ X and h ∈ F(X^{m+1}).
Proposition 4.1. Assume that H3 holds. Then, for all 0 ≤ k ≤ n, a central limit theorem holds, where Z is a standard Gaussian random variable and the asymptotic variance satisfies an explicit recursion for all 0 ≤ k ≤ n − 1.
Proof. The proof is postponed to Section B.2.
Corollary 4.1. Assume that H3 holds. Then, for all 0 ≤ k ≤ n, the corresponding central limit theorem holds, where Z is a standard Gaussian random variable and the asymptotic variance admits an explicit expression.
Pseudo marginal PaRIS algorithms
Consider the following assumption, denoted H4.
Proposition 4.2. Assume that H1, H2 and H4 hold. Then, for all 0 ≤ k ≤ n, the corresponding exponential deviation inequality holds for the pseudo marginal approximations.
Proof. The proof follows the same lines as the proof of [26, Theorem 1].
Lemma 4.3. Assume that H1, H2 and H4 hold. Then, for all 0 ≤ k ≤ n, f_k ∈ F(X) and Ñ ≥ 1, the stated convergence holds, where, for all 0 ≤ k ≤ n, η_k[f_k] is defined in (19).
Proof. The proof is postponed to Section C.1.
Proposition 4.3. Assume that H1, H2 and H4 hold. Then, for all 0 ≤ k ≤ n, a central limit theorem holds, where Z is a standard Gaussian random variable and, for all 0 ≤ k ≤ n − 1, σ̃²_{k+1}(f_{k+1}; f̃_{k+1}) can be computed using an explicit recursive formula given in Appendix C.2.
Proof. The proof is postponed to Section C.2.
Corollary 4.2. Assume that H1, H2 and H4 hold. Then, for all 0 ≤ k ≤ n, the corresponding central limit theorem holds, where Z is a standard Gaussian random variable and σ̃²_k(h_k) can be computed using an explicit recursive formula given in Appendix C.2.
Tangent filters and online recursive maximum likelihood
Let Θ be a parameter space. This section considers a family of transition kernels (Q_{k;θ})_{θ∈Θ, 0≤k≤n−1} on X × B(X) and (G_{k;θ})_{θ∈Θ, 1≤k≤n} on X × B(Y), associated with densities q_{k;θ} and g_{k;θ} with respect to µ and ν. The joint smoothing distributions are then defined, for any θ ∈ Θ, 0 ≤ k1 ≤ k2 ≤ n and any function h ∈ F(X^{k2−k1+1}), as in (1). As noted for instance in [10, Section 2] and [26], for all θ ∈ Θ and all f_{0:n} ∈ F(X^{n+1}), the gradient of the corresponding smoothed expectation can be written as a smoothed additive functional, with increments indexed by 0 ≤ k ≤ n − 1. Considering an objective function f_n ∈ F(X) which depends on the last state x_n only, the tangent filter η_n is defined as the signed measure (21), where π_n = φ_{n:n|n−1} is the predictive measure. The particle based estimator of π_n[f] is given by the corresponding weighted particle average. Using the tower property, (4) and the backward decomposition (6), the tangent filter (21) can therefore be approximated on-the-fly using the statistics (τ_n^i)_{1≤i≤N} and the weighted particles. In cases where r_k, 0 ≤ k ≤ n − 1, is unknown and replaced by an unbiased estimate, the associated pseudo marginal particle-based approximation of the tangent filter is defined analogously. Given a set of observations Y_{1:n}, maximum likelihood estimation amounts to obtaining a parameter θ̂_n ∈ Θ such that θ̂_n = argmax_{θ∈Θ} ℓ_{θ;n}(Y_{1:n}), where ℓ_{θ;n}(Y_{1:n}) = log L_{θ;n}(Y_{1:n}) is the logarithm of the likelihood given in (2). There are many different approaches to compute an estimator of θ̂_n, see for instance [4, Chapter 10]. Following [12], under strong mixing assumptions, for all θ ∈ Θ, the extended process {(X_n, Y_n, π_n, η_n)}_{n≥0} is an ergodic Markov chain and the normalized score ∇_θ ℓ_θ(Y_{1:n})/n of the observations may be shown to converge. Assuming that the observations Y_{1:n} are generated by a model driven by a true parameter θ⋆, for all θ ∈ Θ this normalized score converges almost surely to a limiting quantity λ(θ, θ⋆) such that, under identifiability constraints, λ(θ⋆, θ⋆) = 0. A gradient ascent algorithm cannot be designed directly as the limiting function θ → λ(θ, θ⋆) is not available explicitly. Solving the equation λ(θ, θ⋆) = 0 may instead be cast into the framework of stochastic approximation to produce parameter estimates using the Robbins-Monro recursion (24), where ζ_{n+1} is a noisy observation of λ(θ_n, θ⋆). Obtaining such an observation exactly is not possible in practice and, following [26], this noisy observation is approximated by the particle estimate (26). In (26), the measures π_{n+1;θ_n} and η_{n+1;θ_n} depend on all the past parameter values. In the case of a finite state space X, the algorithm was studied in [24], which also provides assumptions under which the sequence {θ_n}_{n≥0} converges towards the parameter θ⋆ (see also [34] for refinements). In more general cases, these measures may be estimated online using the pseudo marginal smoother presented in this paper.
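A minimal sketch of the resulting stochastic approximation loop is given below. The callable score_increment is assumed to wrap the pseudo marginal particle filter, the tangent-filter update and the score estimate of (26); its name and signature are purely illustrative.

```python
import numpy as np

def recursive_ml(observations, theta0, score_increment, step_sizes):
    """Online recursive maximum likelihood via the Robbins-Monro recursion (24):
    theta_{n+1} = theta_n + gamma_{n+1} * zeta_{n+1}, where zeta_{n+1} is a noisy,
    particle-based estimate of the limiting score at the current parameter value."""
    theta, state = theta0, None
    path = []
    for y, gamma in zip(observations, step_sizes):
        zeta, state = score_increment(theta, y, state)  # hypothetical wrapper around the smoother
        theta = theta + gamma * zeta                     # stochastic gradient ascent step
        path.append(theta)
    return np.array(path)
```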
Application to partially observed SDE
Let (X_t)_{t≥0} be defined as a weak solution to the Stochastic Differential Equation (SDE) (27) in R^d, where (W_t)_{t≥0} is a standard Brownian motion and α_θ : X → X is the drift function. The inference procedure presented in this paper is applied in the case where the solution to (27) is assumed to be partially observed at times t_0 = 0, ..., t_n, for a given n ≥ 1, through an observation process (Y_k)_{0≤k≤n} taking values in R^m. For all 0 ≤ k ≤ n, the distribution of Y_k given (X_t)_{t≥0} depends on X_k = X_{t_k} only and has density g_{k;θ} with respect to ν. The distribution of X_0 has density χ with respect to µ and, for all 0 ≤ k ≤ n − 1, the conditional distribution of X_{k+1} given (X_t)_{0≤t≤t_k} has density q_{k+1;θ}(X_k, ·) with respect to µ. This unknown density can be expressed as an expectation of a Brownian bridge functional [7].
Let ω = (ω_s)_{0≤s≤∆} be the realization of a Brownian bridge starting at x at time 0 and ending at y at time ∆. The distribution of ω is denoted by W_x^{∆,y}. Moreover, suppose that for all θ ∈ Θ, α_θ is of gradient form, α_θ = ∇_x A_θ for some potential A_θ; the transition density then admits the representation (28), where ∆_k = t_{k+1} − t_k and, for all a > 0, φ_a is the probability density function of a centered Gaussian random variable with variance a.
The transition density therefore cannot be computed exactly, as it involves an integration over the whole path between x and y. To implement the algorithm proposed in this paper, we thus have to design a positive and unbiased estimator of q_{k+1;θ}(x, y). Moreover, maximum likelihood estimation of θ requires an unbiased estimator of ∇_θ log q_{k+1;θ}(x, y). Both estimators can be obtained using the General Poisson Estimator (GPE, [18]).
Unbiased GPE estimator for q_{k+1;θ}(x, y; ζ). Assume that there exist random variables m_θ and m̄_θ such that, for all 0 ≤ s ≤ ∆_k, m_θ ≤ ψ_θ(ω_s) ≤ m̄_θ. Let κ be a random variable taking values in N with distribution µ, ω = (ω_s)_{0≤s≤∆_k} be the realization of a Brownian bridge, (U_j)_{1≤j≤κ} be independent uniform random variables on (0, ∆_k), and set ζ = (κ, ω, U_1, ..., U_κ). As shown in [18], equation (28) then leads to a positive unbiased estimator of the transition density, denoted q_{k+1;θ}(x, y; ζ). On the other hand, the diffusion bridge S_{θ,x}^{∆_k,y} associated with the SDE (27) is absolutely continuous with respect to W_x^{∆_k,y}, with an explicit Radon-Nikodym derivative. This yields an expression of ∇_θ log q_{k+1;θ}(x, y), and an unbiased estimator of ∇_θ log q_{k+1;θ}(x, y) is given in terms of a random variable U, uniform on (0, 1) and independent of s_{θ,x,y,∆_k} ∼ S_{θ,x}^{∆_k,y}. In the context of the GPE, s_{θ,x,y,∆_k} can be simulated exactly using the exact algorithms for diffusion processes proposed in [1].
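The sketch below illustrates the Poisson-estimator identity on which GPE-type constructions rest, under simplifying assumptions (scalar state, a deterministic upper bound c on the integrand, a Poisson-distributed number of points). It is not the estimator of [18] verbatim: the full transition-density estimator additionally multiplies by closed-form factors coming from (28), which are omitted here.

```python
import numpy as np

def poisson_exp_integral_estimator(rng, x, y, delta, phi, c, lam):
    """Unbiased, nonnegative estimator of E[exp(-int_0^delta phi(omega_s) ds)], where omega is a
    Brownian bridge from x (time 0) to y (time delta), assuming 0 <= phi <= c on the path.

    kappa ~ Poisson(lam * delta), U_j ~ Uniform(0, delta); the bridge is only evaluated at the
    sampled times, simulated sequentially."""
    kappa = rng.poisson(lam * delta)
    times = np.sort(rng.uniform(0.0, delta, size=kappa))
    prod, t_prev, w_prev = 1.0, 0.0, x
    for t in times:
        mean = w_prev + (t - t_prev) / (delta - t_prev) * (y - w_prev)
        var = (t - t_prev) * (delta - t) / (delta - t_prev)
        w = rng.normal(mean, np.sqrt(var))
        prod *= (c - phi(w)) / lam        # nonnegative because phi <= c
        t_prev, w_prev = t, w
    return np.exp((lam - c) * delta) * prod
```

Averaging several independent replications of such an estimator, as done with M = 30 replications in the experiments below, reduces the variance of the resulting weight estimates.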
Experiments. Online recursive maximum likelihood using pseudo marginal SMC is illustrated when (27) has the specific form (29) of the SINE model, where θ is an unknown parameter ranging between 0 and 2π. For this numerical experiment, we suppose that a realization of (29) is observed only at times t_k = k for 0 ≤ k ≤ n, with n = 5000, through a noisy observation process (Y_k)_{0≤k≤n} such that, for all 0 ≤ k ≤ n, Y_k is a Gaussian perturbation of X_{t_k}, where (ε_k)_{0≤k≤n} are i.i.d. standard Gaussian random variables, independent of (W_t)_{t≥0}. In this case α_θ : x → sin(x − θ) and, for all x ∈ R, 0 ≤ ϕ_θ(x) = (α_θ²(x) + ∆A_θ(x))/2 + 1/2 ≤ 9/8, so that a GPE estimator of both the transition density and the gradient of its logarithm associated with the SINE model is straightforward to compute.
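As a quick numerical sanity check of the bound quoted above (using the convention that ∆A_θ coincides with the derivative of the gradient-form drift), one can verify 0 ≤ ϕ_θ ≤ 9/8 directly; θ only shifts the argument, so it does not affect the bounds.

```python
import numpy as np

# phi_theta(x) = (sin(x - theta)**2 + cos(x - theta))/2 + 1/2, evaluated here at theta = 0
x = np.linspace(-10.0, 10.0, 200001)
phi = (np.sin(x) ** 2 + np.cos(x)) / 2.0 + 0.5
print(phi.min(), phi.max())   # approximately 0.0 and 1.125 = 9/8
```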
A simulated data set is displayed in Figure 2, where θ⋆ = π/4. The solution to (29) is sampled at times (t_k)_{0≤k≤n} using the exact algorithm of [1]. For all 0 ≤ k ≤ n − 1, q_{k,θ} and the GPE unbiased estimator of ∇_θ q_{k,θ}(x, y) are estimated using M = 30 independent Monte Carlo replications of the general Poisson estimator. The estimates of θ⋆ are given for 50 independent runs started at random locations θ_0, with N = 100 particles and Ñ = 2 backward samples. Following [21], the proposal distribution of the particle filter is obtained using an approximation of the fully adapted particle filter, where q_{k,θ} is replaced by its Euler scheme approximation.
Sensitivity to the starting point θ_0. The inference procedure was performed on the same data set from 50 different starting points uniformly chosen in (0, 2π). The gradient step size γ_k of equation (24) was chosen constant (and equal to 0.5) for the first 300 time steps, and then decreasing at a rate proportional to k^{−0.6}. Results are given in Figure 3. There is no sensitivity to the starting point of the algorithm and, after a couple of hundred observations, the estimates all concentrate around the true value. As the gradient step size decreases, the estimates stay around the true value, following autocorrelated patterns that are common to all trajectories.
Asymptotic normality. The inference procedure was performed on 50 different data sets simulated with the same θ⋆. The 50 estimates were obtained from the same starting point (fixed to θ⋆, as Figure 3 shows no sensitivity to the starting point). Figure 4 shows the results for the raw and the averaged estimates. The averaged estimates (θ̄_k)_{k≥0} consist in averaging the values produced by the estimation procedure after a burn-in phase of n_0 time steps (here n_0 = 300 time steps). This procedure yields an estimator whose convergence rate does not depend on the step sizes chosen by the user, see [30,23]. For all 0 ≤ k ≤ n_0, θ̄_k = θ_k, and for all k > n_0, θ̄_k is the average of the raw estimates produced since the burn-in. As expected, the estimated distribution of the final estimates tends to be Gaussian, centered around the true value.
Step size influence. To illustrate the influence of the gradient step sizes, different settings are considered. In each scenario, the sequence (γ_k)_{k≥0} is of the form γ_k = γ_0 k^{−κ}, where γ_0 = 0.5. In this experiment, κ ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1}. The results are shown in Figure 5. As expected, the raw estimator shows different rates of convergence depending on κ, whereas the averaged estimator behaves similarly in all cases.
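The step-size schedule and the Polyak-Ruppert averaging used in these experiments can be written compactly as below; the constants follow the description above, while the function names are illustrative.

```python
import numpy as np

def step_sizes(n_steps, gamma0=0.5, kappa=0.6, burn_in=300):
    """Constant step size during the burn-in phase, then decreasing proportionally to k**(-kappa)."""
    gamma = np.full(n_steps, gamma0)
    decay = np.arange(1, max(n_steps - burn_in, 0) + 1, dtype=float) ** (-kappa)
    gamma[burn_in:] = gamma0 * decay
    return gamma

def polyak_average(raw_estimates, burn_in=300):
    """Polyak-Ruppert averaging: after the burn-in, return the running mean of the raw estimates,
    whose convergence rate no longer depends on the chosen step sizes."""
    raw = np.asarray(raw_estimates, dtype=float)
    out = raw.copy()
    tail = np.cumsum(raw[burn_in:]) / np.arange(1, len(raw) - burn_in + 1)
    out[burn_in:] = tail
    return out
```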
A Additional technical results
The proof of Lemma A.1 is given in [11]. The proof of Theorem A.1 is given in [13, Theorem A.3].
Theorem A.1. Let N be a positive integer, (U_{N,i})_{1≤i≤N} be random variables on a probability space (Ω, F, P) and (F_{N,i})_{0≤i≤N} be a filtration on (Ω, F, P). Assume that the two following conditions hold.
Lemma A.2. Assume that H3 holds. Let K be a transition kernel on (X, B(X)) with transition density k ∈ F(X × X) with respect to the reference measure µ, and let (ϕ_N)_{N≥1} be a sequence of functions in F(X) satisfying the stated conditions. Then, for all 0 ≤ k ≤ n, the corresponding convergence holds.
B Convergence results for PaRIS algorithms
For all 0 ≤ k ≤ n, define the following σ-fields F_k^N and G_k^N. Proof. The proof follows the same lines as [26, Lemma 12].
B.1 Proof of Lemma 4.2
Proof. The proof proceeds by induction. The case k = 0 is a direct consequence of the fact that T 0 h 0 = 0 and τ i 0 = 0 for all 1 i N . Assume that the result holds for some 0 k n − 1 and write Then, using that (ω i k+1 ) 1 i N are i.i.d. conditionally on F N k and , by Hoeffding inequality, since for all 1 i N , 0 ω i k+1 ω k+1 ∞ , Therefore, by Lemma 4.1, Since φ k [L k 1] > 0 it remains to establish the convergence in probability of (a N ) N 1 . On the other hand, by Hoeffding inequality, using that for all 1 and it is enough to obtain the limit of E[a N |F N k ] as N grows to infinity. Then, write The first term is given by By the induction hypothesis and Lemma 4.1, which yields The second term is given by with, for all x ∈ X,
For all x ∈ X, by Lemma 4.1, In addition, for all x ∈ X, by (8), ∞ , by the generalized Lebesgue dominated convergence theorem, see Lemma A.2, Using that The proof is concluded upon noting that
B.2 Proof of Proposition 4.1
Proof. The result is proved by induction on k. It holds for k = 0 as for all 1 i N , τ i 0 = 0. Assume now that the result holds for some 0 k n − 1 and that φ k+1 By Lemma B.1, Therefore, using the induction hypothesis, Slutsky's lemma and where Z is a standard Gaussian random variable. By Lemma B.1, where for all 1 i, j N and all x ∈ X, First, by Lemma 4.1,
The proof is then concluded by applying Slutsky's Lemma and Theorem A.1 to the sequence (υ i N ) 1 i N . By construction E[υ i N |F N k ] = 0 so that the proof of (i) is based on The first term of (30) is given by By Lemma 4.1, since by assumption and [26,Lemma 11] .
and by Lemma 4.2, On the other hand, by Lemma A.2 applied to Finally, so that using again Lemma A.2 and Lemma 4.1, Therefore, the first term of (30) satisfies which concludes the proof for the first term of (30). The second term of (30) is given by By assumption and [26, Therefore, The proof of (ii) is an immediate consequence of H3 since for all 1 i N , and then, by (19), By definition of the kernel D k+1,k+1 , It remains to prove the explicit expression of σ 2 k+1 f k+1 ; f k+1 from this recursion formula. First, following the proof of [26,Theorem 3], for all 0 s < k, In addition, 0 s < k, which concludes the proof.
C Convergence results for Pseudo marginal PaRIS algorithms
Lemma C.1. Assume that H1 and H2 hold. Then, for all 0 ≤ k ≤ n − 1, (f_{k+1}, f̃_{k+1}) ∈ F(X)² and N, Ñ ≥ 1, the random variables considered in the statement satisfy the announced identities. Proof. The proof follows the same lines as [21, Lemma 2]. Note first that the weights can be rewritten in terms of ω̄_k, defined by (18) in H3. Then, since, conditionally on F_k^N ∨ G_{k+1}^N, τ_{k+1}^1 is independent of ω_{k+1}^1, the result follows, which concludes the proof.
C.1 Proof of Lemma 4.3
Proof. The proof proceeds by induction and follows the same lines as [26,Lemma 13]. The case k = 0 is a direct consequence of the fact that T 0 h 0 = 0 and τ i 0 = 0 for all 1 i N . Assume that the result holds for some 0 k n − 1 and write The random variables ( ω i k+1 whereω k is defined by (18) in H3. Noting that by H4 for all 1 i N | ω i k+1 | ω k ∞ and , by Hoeffding's inequality, there exist positive constants c k and c k such that Therefore, by Proposition 4.2 and Lemma A.1, Since φ k [L k 1] > 0 it remains to establish the convergence in probability of (a N ) N 1 . On the other hand, by Hoeffding inequality, using that for all 1 Then, write By Lemma 3.1, the first term is given by By the induction hypothesis and Proposition 4.2, which yields The second term is given by with, for all x ∈ X, For all x ∈ X, by Proposition 4.2, Therefore, as ϕ N ∞ f k+1 ∞ h k+1 2 ∞ , by the generalized Lebesgue dominated convergence theorem, see Lemma A.2, This concludes the proof following the same steps as in the proof of Lemma 4.2.
C.2 Proof of Proposition 4.3
Proof. The result is proved by induction on k. It holds for k = 0 as for all 1 i N , τ i 0 = 0. Assume now that the result holds for some 0 k n − 1 and that φ k+1 By Lemma B.1, Therefore, using the induction hypothesis, Slutsky's lemma and where Z is a standard Gaussian random variable. By Lemma B.1, where for all 1 i, j N and all x ∈ X, Then, by construction, The first term of (31) is given by where, for all (x, y) ∈ X × X, k (x, y) = r k (x, y; z) ω k+1 (x, y; z)R k (x, y, dz) , Following the same steps as in the proof of Proposition 4.1, where Q φ k k : x → φ k [ k (., x)]/φ k [r k (., x)]. Therefore, the first term of (31) satisfies which concludes the proof for the first term of (31). The second term of (31) is given by where, for all x ∈ X, By assumption, φ k [T k h k L k f k+1 + L k ( h k f k+1 + f k+1 )] = 0 so that by Lemma A.2, Therefore, The proof of (ii) is an immediate consequence of H4 since for all 1 i N , Then, defining c k = 2 ω k+1 ∞ ( h k+1 ∞ f k+1 ∞ + f k+1 ∞ ), for all ε > 0, which concludes the proof. | 2019-08-20T09:53:17.000Z | 2019-07-25T00:00:00.000 | {
"year": 2019,
"sha1": "4c7cb84a9cc5c61c63220b047cbc7664b5f97fcc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4c7cb84a9cc5c61c63220b047cbc7664b5f97fcc",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
256922727 | pes2o/s2orc | v3-fos-license | Risk Factors for Influenza-Induced Exacerbations and Mortality in Non-Cystic Fibrosis Bronchiectasis
Influenza infection is a cause of exacerbations in patients with chronic pulmonary diseases. The aim of this study was to investigate the clinical outcomes and identify risk factors associated with hospitalization and mortality following influenza infection in adult patients with bronchiectasis. Using the Chang Gung Research Database, we identified patients with bronchiectasis and influenza-related infection (ICD-9-CM 487 and anti-viral medicine) between 2008 and 2017. The main outcomes were influenza-related hospitalization and in-hospital mortality rate. Eight hundred sixty-five patients with bronchiectasis and influenza infection were identified. Five hundred thirty-six (62%) patients with bronchiectasis were hospitalized for influenza-related infection and 118 (22%) patients had respiratory failure. Compared to the group only seen in clinic, the hospitalization group was older, with more male patients, a lower FEV1, higher bronchiectasis aetiology comorbidity index (BACI), and more acute exacerbations in the previous year. Co-infections were evident in 55.6% of hospitalized patients, mainly caused by Pseudomonas aeruginosa (15%), fungus (7%), and Klebsiella pneumoniae (6%). The respiratory failure group developed acute kidney injury (36% vs. 16%; p < 0.001), and shock (47% vs. 6%; p < 0.001) more often than influenza patients without respiratory failure. The overall mortality rate was 10.8% and the respiratory failure group exhibited significantly higher in-hospital mortality rates (27.1% vs. 6.2%; p < 0.001). Age, BACI, and previous exacerbations were independently associated with influenza-related hospitalization. Age, presence of shock, and low platelet counts were associated with increased hospital mortality. Influenza virus caused severe exacerbation in bronchiectasis, especially in those who were older and who had high BACI scores and previous exacerbations. A high risk of respiratory failure and mortality were observed in influenza-related hospitalization in bronchiectasis. We highlight the importance of preventing or treating influenza infection in bronchiectasis.
Introduction
Bronchiectasis is characterized by abnormal dilatation of the bronchi and systemic inflammation, and such patients are at an increased risk for respiratory tract infections [1,2], which play a major role in causing exacerbations of bronchiectasis. These exacerbations are associated with more severe disease and a higher mortality rate [3]. While the main pathogen for exacerbations is bacteria [3,4], other pathogens such as fungi, nontuberculous mycobacteria (NTMs), and viruses have been implicated but remain less well documented [5]. Viral infections such as rhinovirus, coronavirus, and influenza viruses may trigger exacerbations of bronchiectasis [6,7] and the virus is more frequently detected during the exacerbation than during a period of stability [6][7][8]. There is a paucity of clinical studies that have specifically investigated the role of influenza infection in clinical outcomes of bronchiectasis.
The interactions between pathogenic bacteria, viruses, and host defense responses, such as the immune reaction, epithelial injury, and repair delay after virus infection, have been demonstrated in chronic respiratory diseases, and may frequently lead to a lethal synergism in susceptible people with more severe disease [9,10]. In critical patients infected with influenza, bacterial co-infection and its complications, such as development of acute respiratory distress and acute kidney injury, contribute to the mortality associated with influenza [11][12][13][14][15]. In our recent study, the common existing bacterial pathogens in acute exacerbations of bronchiectasis were Pseudomonas aeruginosa, followed by Klebsiella pneumoniae and Haemophilus influenzae [16]. Whether this underlying chronic infection leaves bronchiectasis patients particularly vulnerable to influenza infections is not known. There are very limited studies that report the clinical outcomes and risk factors of viral infection during bronchiectasis exacerbations. The aim of this study was therefore to survey the clinical manifestations and prognoses of influenza infection in patients with bronchiectasis and to investigate the risk factors associated with influenza-related hospitalization and in-hospital mortality.
Bronchiectasis Cohort
This study analyzed the data of a multi-institutional bronchiectasis cohort in Chang Gung Research Database (CGRD). The CGRD provides the electronic medical records collected from the Chang Gung Memorial Hospital system, which includes three medical centers and four regional hospitals, as reported previously [17,18]. Patients with at least two bronchiectasis diagnoses (International Classification of Diseases, 9th Clinical Modification (ICD-9-CM) 494.0 or 494.1) from outpatient visits or from hospitalization records were identified in the cohort [19]. The diagnosis of bronchiectasis was made from clinical symptoms, history, and a high-resolution CT (HRCT) of the lungs by a radiologist and pulmonary specialist. The Institutional Review Board of Chang Gung Memorial Hospital approved this study (IRB number: 201800712B0C502).
Inclusion Criteria
This cohort included adult patients (aged ≥ 18 years) with diagnoses of bronchiectasis recorded in CGRD between January 2006 and June 2016. The inclusion criteria were bronchiectasis patients who had typical influenza-like illnesses with diagnoses of influenza (ICD-9-CM code 487) and concomitant use of anti-virus medicines [20][21][22][23]. Clinicians made the diagnoses of influenza infection based on typical influenza-like symptoms, positive reverse-transcription polymerase chain reactions (PCR), or influenza rapid antigen tests. The definition of influenza infection has been validated in previous publications of the national health insurance database of Taiwan [20][21][22][23].
Main Outcomes
The primary outcome was influenza-related severe exacerbations, defined as hospitalization for diagnoses of ICD-9-CM code 480-487 [20,24]. The secondary outcome was in-hospital mortality. Acute respiratory failure was defined by ICD-9-CM code 518.81 or 518.82, or ICD-10-CM code J96.0 with mechanical ventilator use [25].
Clinical Parameters
We retrieved demographic data, CT images, laboratory, microbiology and pulmonary function reports. The bronchiectasis aetiology comorbidity index (BACI) was calculated for each subject based on comorbidities obtained from CGRD diagnoses (ICD-9-CM and ICD-10) [26]. The etiology of bronchiectasis was determined as described in a previous study [19]. Clinicians requested an immune or autoimmune screen if signs of immunodeficiency and connective tissue diseases were present. When primary ciliary dyskinesia was suspected, nasal mucociliary clearance was measured by using the saccharin test. Evaluations of α1-antitrypsin were performed when an HRCT demonstrated the presence of emphysema affecting the lower lobes. Sweat tests were requested if signs and symptoms suggestive of cystic fibrosis were present. We collected sputum microbiology reports during hospitalization. Pulmonary bacterial co-infection was defined as the presence of pneumonia and a positive bacterial culture in sputum or bronchoalveolar lavage fluid during the period of hospitalization [12]. Shock was defined as the necessity for use of parenteral inotropic agents or vasopressors. A pulmonary function test was performed with a spirometer according to the American Thoracic Society and the European Respiratory Society criteria [27]. Medical treatment included anti-viral medicine, antibiotics, systemic corticosteroids, and inhalation medication. Acute kidney injury was defined as an increase in serum creatinine level of 50% or 0.3 mg/dL above baseline during hospital admission [28].
Statistical Analysis
Chi-square tests and two-sided Fisher exact tests were used for dichotomous variables, unpaired t-tests for normally distributed continuous variables, and Mann-Whitney U tests for non-normally distributed continuous data. p-values (two-sided) < 0.05 were considered statistically significant. A univariate descriptive analysis was performed to identify risk factors for hospitalization and mortality in patients with bronchiectasis and influenza infection. Variables with a significance level of p < 0.05 were selected. Next, a multivariate Cox proportional hazards regression was used to identify independent risk factors. An ROC analysis was performed to validate the Cox model. Statistical analyses were performed using SAS software, version 9.4 (SAS Institute, Cary, NC, USA).
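Purely for illustration, the multivariable Cox model and an ROC-style discrimination check could be reproduced in an open-source environment as sketched below (the original analysis used SAS 9.4). The file name and column names are hypothetical and do not come from the study data.

```python
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_auc_score

# One row per patient; hypothetical columns: 'time' (days to event or censoring),
# 'event' (1 = in-hospital death), and candidate predictors.
df = pd.read_csv("bronchiectasis_influenza.csv")
predictors = ["age", "baci", "prior_exacerbations", "shock", "platelets"]

cph = CoxPHFitter()
cph.fit(df[["time", "event"] + predictors], duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios with confidence intervals for each predictor

# Crude discrimination check of the fitted risk score, analogous to an ROC analysis
risk = cph.predict_partial_hazard(df[predictors])
print("AUC:", roc_auc_score(df["event"], risk))
```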
Results
There were 9516 patients with bronchiectasis retrieved from the CGRD between 2008 and 2017. A total of 865 bronchiectasis patients with influenza infection were identified; Figure 1 shows the number of incident cases in each year. Five hundred thirty-six (62%) patients with bronchiectasis were hospitalized for influenza-related infection. The demographic and clinical characteristics are summarized in Table 1. Compared to the clinic group (defined as no episode of hospitalization), the hospitalization group was older, with more male patients, lower FEV1, a higher BACI index, and higher acute exacerbation rates in the previous year. The hospitalization group had higher proportions of pre-existing COPD, connective tissue disease, and diabetes mellitus than the clinic group. The usage rate of anti-viral agents was similar in both groups. A higher proportion of bronchiectasis patients in the hospitalization group received antibiotics (94.9% vs. 9.1%, p < 0.001) and systemic corticosteroids (54.6% vs. 11.3%, p < 0.001) than those in the clinic group. Figure 2 shows the annual incident numbers of bronchiectasis with influenza infection in the CGRD from 2008 to 2017. The characteristics of the hospitalized patients with bronchiectasis and influenza infection are demonstrated in Table 2. One hundred eighteen (22%) patients had respiratory failure, with a mean age of 70.5 years. Patients with or without respiratory failure had similar ages, as well as similar lung function, BACI index, and acute exacerbation rates in the previous year. Bronchiectasis patients with respiratory failure exhibited significantly higher white blood cell counts as well as C-reactive protein levels. In addition, the influenza-infected bronchiectasis patients with respiratory failure were more likely to develop acute kidney injury (36% vs. 16%; p < 0.001) and shock (47% vs. 6%; p < 0.001) than those infected with influenza without respiratory failure. A systemic steroid was administered more frequently in the influenza-infected bronchiectasis patients with respiratory failure (80% vs. 48%; p < 0.001). The duration of hospital stay was shorter (11.5 ± 11.4 days) among the influenza-infected bronchiectasis patients without respiratory failure than among those with respiratory failure (24.2 ± 15.1 days, p < 0.001). The overall mortality rate was 10.8%; however, the influenza-infected bronchiectasis patients with respiratory failure exhibited a significantly higher in-hospital mortality rate (27.1% vs. 6.2%; p < 0.001) (Table 3). For the in-hospital mortality rate, a power analysis was added to the results: there was 73.44% power at a 0.05 level of significance. Twenty non-critical patients died of septic shock and six non-critical patients died of pneumonia with DNR consent. Because these 26 patients did not receive mechanical ventilation, they did not fulfill the criteria for the respiratory failure group in this study.
The characteristics and outcomes of the patients with influenza confirmed by positive PCR or influenza rapid antigen test are presented in the Supplementary Materials as a sensitivity analysis. The characteristics and in-hospital outcomes of the patients with laboratory-confirmed influenza were similar to those of the overall population. The influenza-infected bronchiectasis patients with respiratory failure exhibited a significantly higher one-year respiratory failure rate (17.4% vs. 5.7%; p = 0.012).
Discussion
In this study, we found that 62% patients with bronchiectasis were hospitalized after influenza infection, and 22% of the influenza-infected bronchiectasis patients developed respiratory failure during hospitalization. Patients with bronchiectasis and influenzarelated hospitalization were more likely to be male, older, have more comorbidities, have worse lung function, and have had increased exacerbations in the previous year than those who were not hospitalised. Influenza-infected bronchiectasis patients with respiratory failure had a greater incidence of bacterial co-infections, acute kidney injury, shock, and higher in-hospital mortality. Age, BACI, and previous exacerbations were independent risk factors of hospitalization after influenza infection in bronchiectasis.
Viral infection is a trigger of exacerbation in respiratory diseases, but the clinical outcomes of virus infection in bronchiectasis have not been reported before. Influenza infection may cause mild to severe exacerbations in patients with bronchiectasis. Our study showed that 38% of patients with bronchiectasis and influenza infection recovered after treatment in outpatient clinic, with 11% of them acquiring a mild exacerbation after influenza infection. After influenza infection, 62% patients with bronchiectasis in our database had severe exacerbations and hospitalizations, which was compatible to previous studies demonstrating that viral infection was associated with more frequent and more severe exacerbations of pulmonary diseases [29,30]. The possible reasons might be respiratory viruses inducing airway epithelial cell sloughing and enhancing inflammation, immune cell accumulation, as well as dilatation of capillaries and parenchymal edema [31,32]. There is also an increased risk of bacterial co-infection because of impaired mucociliary clearance and parenchymal destruction in bronchiectasis [31,32]. Thus, bacterial infection is believed to be the main pathogen that causes most exacerbations of bronchiectasis, and our work provides evidence that influenza viruses also play an important role in triggering severe bronchiectasis exacerbations, possibly through bacterial co-infection. We also show that the older a patient's age, the more BACI, and that an increased number of previous exacerbations were risk factors for hospitalization after influenza infection in bronchiectasis. Menendez et al. also found that age, severity of bronchiectasis, and more comorbidities were associated risk factors for acute exacerbations in bronchiectasis [33]. Another study also reported an increase in hospitalization among elderly men [34]. A prior history of exacerbations is demonstrated to be a predictor of future exacerbations, and patients with two or more exacerbations per year at baseline have an increased mortality risk [3]. Thus, greater bronchial and systemic inflammation in old age, greater extent of disease, and frequent acute exacerbations may contribute to perpetuating the infection-inflammation cycle and have negative impacts on prognosis in influenza infection. Our result highlights the importance of preventing or treating influenza infection in patients with bronchiectasis, particularly those with the identified risk factors.
Severe influenza infection may progress to acute respiratory failure [15,31,35,36]. Our data indicate that 20% of hospitalized bronchiectasis patients with influenza infection developed acute respiratory failure. The rate of acute respiratory rate was higher than the previously reported 5-10% in the general population [15,32]. Primary viral infection and bacterial co-infection are the main causes of respiratory failure [15,32]. Bacterial and fungal pulmonary co-infections are associated with critical illness, such as septic shock and acute respiratory distress syndrome, leading to increased mortality rates [15,32]. Among patients with influenza-related critical illness, bacterial co-infections ranged between 10 and 30% [15,32]. Pulmonary bacterial co-infections are mainly caused by Streptococcus pneumoniae in Europe and S. aureus in the United States [12,15]. In patients with bronchiectasis, because of airway destruction and the high risk of chronic infection, the common species of bacterial and fungus co-infection after influenza may be different from those of the general population. In this study, 20.1% of the hospitalized patients developed a pulmonary co-infection, and the proportion of co-infections in the bronchiectasis patients with respiratory failure was significantly higher than in those without respiratory failure. The most common pathogens included P. aeruginosa, fungus, and K. pneumoniae. S. aureus, and fungus co-infections were significantly higher in the influenza-infected bronchiectasis patients with respiratory failure than in those patients without respiratory failure. Our previous study [19] indicated that P. aeruginosa, K. pneumoniae, and S. aureus were the main pathogens existing in our bronchiectasis cohort, and chronic bacterial colonization may lead to increased bacterial and decreased immune defenses, which is quickly accompanied by severe deterioration in lung function [10,37]. In animal studies, the influenza virus infection worsened the destruction of lung tissue and/or the overproduction of cytokines, increased the number of inflammatory cells in the lung, and made mice with chronic P. aeruginosa infection more susceptible to severe pneumonia [38]. The influenza virus is also reported to attenuate neutrophil functions in vitro, including the inhibition of chemotaxis, oxidative function, and lysosome secretion [39,40]. The impaired immune response following virus infection may facilitate bacterial superinfections or promote biofilm formation and subsequent disease severity or even mortality [41,42].
Seasonal influenza-related mortality is up 12% in ICU patients, especially in older adults and COPD patients [15,32]. In our population, the overall in-hospital mortality was 10% and the mortality of patients with acute respiratory failure reached 27%. Respiratory failure and multi-organ failure are the main causes of death after influenza infection [15,32]. This study showed 35% of the patients with bronchiectasis who were admitted for influenzarelated infection developed acute kidney injury. Old age, shock, and low platelet counts are risk factors for influenza-related mortality. Other factors, such as comorbidities (cardiovascular, renal, liver diseases and immune deficiency) and acute kidney injury, have also been reported to be associated with increased influenza-related mortality [14,15,43]. In a pandemic influenza A study, severe infection with influenza damages the airway and alveolar epithelium, resulting in diffuse alveolar damage complicated by bacterial pneumonia, which may be another reason for the development of multiple organ failure and mortality [44]. The replication of the influenza virus in these airways and alveolar cells may contribute to the development of severe lung injury or increased susceptibility to secondary bacterial infection; therefore, limiting viral entry or replication can prevent or attenuate the severity of the infection [45,46]. Treatment with antivirus agents to stop viral replication soon after the onset of infection could improve the survival rate in patients with bronchiectasis.
Corticosteroid use is associated with increased mortality in patients with acute respiratory failure or ARDS due to influenza virus infections [15,47]. Short-term corticosteroid treatment is sometimes used to control bronchiectasis exacerbations and decrease airway inflammation [48]. Although no randomized, controlled trials could be identified to demonstrate the effects of oral corticosteroids during exacerbations, the British Thoracic Society guideline still suggests that systemic corticosteroid be used in bronchiectasis comorbid with COPD, asthma, allergic bronchopulmonary aspergillosis, or as mucoregulators [49]. In this study, more than half of the hospitalized patients received systemic corticosteroid treatment, and the usage of corticosteroids was up 80% in the influenza-infected patients with bronchiectasis and respiratory failure. Although the analysis did not reveal corticosteroids as an independent risk factor for increased mortality, corticosteroid use still should be cautiously used in patients who have bronchiectasis with influenza virus infection.
Our study has several strengths. Firstly, the CGRD database included patients from multiple medical centers and regional hospitals across Taiwan, providing strong real-world evidence of influenza infection in bronchiectasis. Moreover, the study presented 10 years of seasonal and pandemic influenza infection. To the best of our knowledge, no similar study has focused on the treatment and outcomes in influenza-associated hospitalizations in bronchiectasis to date. Secondly, we only included patients with influenza-associated infection. Previous studies included patients with bronchiectasis and viral infections from various pathogens [6,7]. This study provided a large population for investigating the impact of influenza on patients with bronchiectasis. Thirdly, we analyzed multiple outcomes of influenza infection in patients with bronchiectasis, including mild and severe exacerbation, respiratory failure and hospital mortality.
The limitations are as follows. First, we used the specific ICD-9-CM code 487 and anti-viral agents to identify episodes of influenza infection in the CGRD database. Although PCR confirmation is the gold standard of influenza infection [50], that examination is not routinely used in clinical practice in Taiwan because of the cost and time it takes to obtain the results. Besides, negative a nasal swab screen test does not rule out influenza infection [49]. However, the definition of influenza has been adopted in previous database studies and the diagnosis code of influenza infection has also been validated by comparison with laboratory data of the Taiwan CDC surveillance network [20][21][22][23]. Second, not all patients had sputum cultures performed during hospitalization. Third, we did not have complete information on the influenza vaccine in our cohort. Therefore, we could not analyze the effects of the influenza vaccine on the clinical outcomes. Fourth, treatment selection bias may exist when evaluating the effects of amantadine or systemic corticosteroids, since this was an observational study from a multi-institution database. Fifth, we acquired the data in 2018, so we only analyzed the influenza-induced exacerbations among patients with bronchiectasis before 2018. During the COVID-19 pandemic, the incidence of influenza infection might have been lower than during pre-COVID-19 periods because of public health strategy and enhanced personal hygiene. Updated data, especially after the COVID-19 pandemic, may be provided in a future study.
In conclusion, the influenza virus caused severe exacerbation in bronchiectasis, especially in those with increased age, BACI, and previous exacerbations. High risks of respiratory failure and mortality were observed in influenza-related hospitalizations in bronchiectasis. The study also demonstrated that the co-infection pathogens after influenza infection were different in bronchiectasis patients than in the general population. These results provide a better understanding of the clinical characteristics of influenza-associated infection, and the risk factors for severe exacerbation in bronchiectasis. This study highlights the importance of preventing or treating the early stage of influenza infection in bronchiectasis to lessen exacerbations and subsequent respiratory failure.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/v15020537/s1, Table S1: Baseline characteristics of bronchiectasis patients with influenza infection (PCR or rapid antigen test); Table S2: Clinical characteristics of bronchiectasis and influenza-related (PCR or rapid antigen test) hospitalization; Table S3: Clinical parameters and main clinical outcomes of influenza-related (PCR or rapid antigen test) hospitalization; Table S4: The Harrell's C-index of the significant parameters in Tables 4 and 5. | 2023-02-17T16:03:43.471Z | 2023-02-01T00:00:00.000 |
"year": 2023,
"sha1": "2c54427fe12820c16d250f3ae13a538241d7ee91",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "41613fe858ea41d6af9013aebd75ad21eecaec7a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
131714012 | pes2o/s2orc | v3-fos-license | Some Aspects of the Flow-Topography Interactions in the Taiwan Strait
The Taiwan Strait is a shallow channel with complicated topographic variations that connects the South China Sea with the East China Sea. Previous hydrographic and numerical studies have suggested that a distinct feature of the flow pattern in the strait is the transfer of the vertical stratification of the incoming flow from the south into the horizontal density gradient in the middle of the northern portion of the strait. This work applies a numerical model with a free surface to examine the influence of a change in the volume transport on the density distribution in the strait. According to the model results, a reduction in the incoming flow rate will cause a weakening of the original density gradient. The release of the available potential energy associated with the original density gradient causes the current to meander. The model results are consistent with the satellite sea surface temperature images on 12 and 17 August, 1998.
INTRODUCTION
The Taiwan Strait (TS) is a major channel for the exchange of water masses between the East China Sea and the South China Sea (SCS). Although the mean water depth of the TS is around 60 m, there is considerable topographic variation. Figure 1 depicts the bathymetry in the vicinity of the TS. The main features include the shallow Formosa Bank, the Peng-hu Channel (PHC), the Chang-yuen Ridge (CYR) and the Kuan-Yin Depression (KYD). During summer, the southwesterlies drive the SCS waters northward. Some of them pass through the PHC and enter the TS (Fan and Yu 1981), and part of this water reaches the western portion of the TS (Jan et al. 1994). Therefore, the vertical stratification in the PHC is transferred into the cross-strait density gradient north of the CYR. Most of the hydrographic surveys in the TS during the summer have shown that the temperature and salinity contours in the northern part of the TS are roughly parallel to the coast (Wang and Chern 1992).
However, as the northward transport in the TS becomes weak, corresponding to a period of calm southwesterly wind, the flow in the TS is no longer confined to the direction parallel to the coast. Some water begins to move perpendicular to the direction of the strait (Chern and Wang 1992).
In this paper, a three-dimensional primitive equation model with a free surface is used to study the flow adjustment in the TS when water motion across the strait occurs. We find that the decrease in the incoming transport causes a weakening of the cross-strait density gradient, and the release of the available potential energy causes the current to meander when the stratification of the incoming water in the PHC is sufficiently strong.
NUMERICAL MODEL
The basic equations for the model closely follow those in Semtner (1986). Under the Boussinesq and hydrostatic approximations, the model equations are as follows.
In these equations, (u, v, w) are the velocities in the (x, y, z) directions, ρ is the density, ρ_o is a reference water density, f is the Coriolis parameter, T is the temperature, S is the salinity, P is the pressure, (Am, Ah) are the horizontal mixing coefficients for momentum and temperature/salinity, respectively, ν is the vertical viscosity for momentum, and κ is the vertical diffusivity. The method of integration retains the split into barotropic and baroclinic modes. We define u = ū + u′ and v = v̄ + v′, where (ū, v̄) is the vertically averaged barotropic mode velocity and (u′, v′) is the baroclinic internal mode flow with no depth average. We still use the Semtner algorithm to compute the (u′, v′) velocity and the (T, S) fields. However, a code similar to that of the Princeton Ocean Model (POM) developed by Blumberg and Mellor (1987) is adopted to compute the barotropic mode motion. Our approach is also very similar to the model in Chao and Paluszkiewicz (1991). The model domain, as shown by the rectangle in Fig. 1, has 40 x 75 grid points in the horizontal plane. The mesh size is 5 km in both the x and y directions, where the x-axis is transverse to the strait and the y-axis is along the strait. There are ten levels in the vertical with a layer thickness of 10 m. The two sides along the strait of the model basin are regarded as rigid walls and the other two as open boundaries. Figure 2 shows the model basin and includes all the main topographic features shown in Fig. 1. The Peng-hu Islands and the Hai-tan Islands are treated as a single bigger island in the model basin.
The wind field in the region around Taiwan is dominated by the seasonal monsoon changes.
Figure 3 depicts the average surface wind distribution over the SCS and TS area, as calculated from the satellite SSM/I wind data during July and August 1996. We inferred that during summertime the winds in the TS are generally weaker than those in the SCS. Therefore, the summertime flow pattern in the TS is mainly influenced by the magnitude of the northward transport, which enters the TS through the PHC. The variation of the flow rate in the PHC certainly depends on the wind strength in the SCS. Jan et al. (1994) estimated that the volume transport is about 1 Sv in the PHC during summer, and the hydrographic field calculated from a rigid lid model based on this flow rate is very close to that from the observed data. In the following model studies, we also used this transport value as a standard case.
Initially, there is no motion in the model basin and the water has a constant temperature and salinity (24°C, 34 psu). The model ocean is driven with a volume transport of 1 Sv flowing in from the PHC. The velocity of the inflow at the PHC has no vertical variation and its horizontal profile is assumed to have a parabolic shape, in which Q is the flow rate, (D, h) are the depth and width of the PHC, chosen to be (100 m, 50 km), and x is the distance measured from the lower right corner of the model basin. At the open boundary to the west of Peng-hu and at the north of the model domain, we use the Orlanski explicit radiation condition for the water level and velocity fields. The no-slip condition is used at all rigid boundaries.
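A small sketch of the prescribed inflow is given below. The exact normalization of the parabolic profile appears in a display that did not survive extraction, so the sketch assumes the standard parabola that vanishes at both edges of the opening and integrates to the prescribed transport.

```python
import numpy as np

# PHC inflow parameters quoted in the text
Q = 1.0e6    # prescribed transport, 1 Sv = 1e6 m^3/s
D = 100.0    # depth of the Peng-hu Channel opening (m)
h = 50.0e3   # width of the opening (m)

# Assumed profile: v(x) = (6 Q / (D h)) * (x/h) * (1 - x/h), zero at both edges,
# integrating to Q over the D x h cross-section.
x = np.linspace(0.0, h, 11)   # eleven points across the 50 km opening (5 km grid)
v = 6.0 * Q / (D * h) * (x / h) * (1.0 - x / h)

print("peak inflow speed: %.2f m/s" % v.max())                      # about 0.30 m/s at mid-channel
print("recovered transport: %.2f Sv" % (np.trapz(v, x) * D / 1e6))  # about 1 Sv by construction
```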
Wang and Chem (1988) showed that water originating in the Kuroshio, the so-called "Kuroshio branch current", enters the TS only during the spring.After the onset of the southwesterly monsoon, the SCS water will be present in the TS.Therefore, we choose two different tem perature and salinity profiles, as shown in Fig. 4, for the water flowing into the PHC.These two profiles were observed at the entrance of the PHC on 2 September and 8 June, 1988 respectively, and can be regarded as representing the waters from the SCS, Case I, and the Kuroshio branch current, Case II.The SCS water has a stronger thermocline and less saline surface water, while the water from the Kuroshio branch current has a weaker vertical stratification.We use the no-normal-gradient condition for the temperature and salinity fields at the other open boundaries.The no-flux condition for the temperature and salinity is imposed at the sea surface and the rigid boundaries.The horizontal mixing coefficients for the momen tum and temperature/salinity balance equations, (Am,Ah) are 4• 106cm2 / s and 2• 1 Q6 cm2 / s, respectively.The vertical mixing coefficients, v and K, are 5cm2 Is for all the momentum, temperature and salinity fields.
The numerical model was integrated for 400 days. Figure 5 shows the time variations of the kinetic energy, temperature and salinity, averaged over the whole model basin, for Case I.
This figure indicates that the flow reaches a quasi-steady state after 250 days. Figure 6 depicts the horizontal distribution of the temperature, salinity and velocity fields at 15 m and 45 m for Case I on day 250. The temperature and salinity distributions of Fig. 6 are similar to the observed pattern of the hydrographic survey in the TS during 1 to 6 September 1988 (Fig. 3 in Wang and Chern 1992). Due to the blocking of the CYR, the cold and saline water in the lower half of the PHC is deflected westward. Some of this cold (< 27°C) and saline (> 33.6 psu) water flows out of the model domain through the open boundary to the west of Peng-hu, while the rest enters the western part of the strait north of the CYR. Transfer of the vertical stratification in the PHC into the cross-strait temperature/salinity gradients north of the CYR is evident. The flow pattern revealed in Fig. 6 resembles the results from Jan et al. (1994) and can be regarded as a typical case of summertime circulation in the TS. In the following studies, we use this pattern as a new initial condition and investigate the response in the TS when the volume transport through the PHC is changed. Since the salinity pattern in Fig. 6 is almost the same as that of the temperature distributions, low salinity is always associated with high temperature. Thus, we will only show the temperature and velocity fields in the following discussions.
MODEL RESULTS
As mentioned in the previous section, the wind over the TS is not strong during the summer.
The variation of the flow in the TS depends on the change of the volume transport in the PHC, which in turn depends on the wind variations over the South China Sea. Therefore we may expect a typical time scale for the velocity variation in the PHC to be of the order of a week to a month. In the following, we simulate a case, Case I-1, in which the result shown in Fig. 6 is used as a new initial condition and the transport in the PHC is reduced to 0.5 Sv.
As the velocity of the inflow in the PHC is reduced, the surface warm water can still flow over the CYR and enter the TS, but its ability to uplift the cold lower-layer water in the PHC and transport it to the western portion of the strait will be decreased. Hence the strength of the density gradient to the north of the CYR cannot be maintained, and some adjustment of the flow in the TS is activated. We integrate this model for 100 days. Although a final quasi-steady state, which has much weaker temperature/salinity gradients north of the CYR, can be reached, this final state is not relevant to the actual condition. We are only interested in the flow adjustment, in which the original frontal strength is retained. The model results show that this kind of flow pattern is present for about 10 days after the reduction of the transport in the PHC and the flow remains similar for up to 30 days of integration. Therefore, in this section, we only discuss the result on day 20.
vector in the vertical direction and τ is the bottom drag. A quadratic drag law is used, with the drag coefficient C taken to be 0.002 in our model. We then have a frictional decay rate R of about 10⁻⁵ s⁻¹. The relative vorticity generated by the topographic variation is damped by the bottom friction as the water travels a distance of about U/R, which is about 25 km in Case I.
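As a rough consistency check (an illustration under assumed values rather than a calculation taken from the paper), a quadratic drag acting on a water column of depth H gives a spin-down rate and damping length of order

R \approx \frac{C\,|u|}{H}, \qquad L \approx \frac{|u|}{R} \approx \frac{H}{C},

so with C = 0.002, an assumed representative depth H ≈ 50 m over the ridge region, and |u| of a few tens of cm/s, one recovers R ≈ 10⁻⁵ s⁻¹ and L ≈ 25 km, consistent with the values quoted above.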
Since the horizontal scale of the CYR is about 50 km, we may infer that the negative relative vorticity acquired by the water column as it approaches the CYR is lost before the water moves to the north of the CYR. Meanwhile, due to the stretching effect, the water column will obtain positive relative vorticity as it flows over the CYR, and some of the available potential energy is released if the density gradient is strong enough. This is a common geostrophic adjustment process, and has been studied quite extensively in a broad range of ocean environments, e.g., Tandon and Garrett (1994). The problem in the TS is more complicated, however, due to the fact that the CYR is almost perpendicular to the density front to the north of it (Fig. 6(b)). This front is maintained by the convergence of the warm water during summertime.
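The vorticity changes invoked here follow from conservation of potential vorticity for a shallow column of depth H, stated here for orientation rather than quoted from the paper:

\frac{D}{Dt}\left(\frac{f + \zeta}{H}\right) = 0,

so that stretching of the column (increasing H) on the lee side of the ridge generates positive relative vorticity ζ, while compression (decreasing H) generates negative ζ.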
Fig. 1. The bottom topography of the Taiwan Strait. The depth contours are in km; PHC denotes the Peng-hu Channel, CYR denotes the Chang-yuen Ridge and KYD denotes the Kuan-yin Depression. The rectangle inside the figure is our model region.
Fig. 2. Topography of the model basin; the annotations are grid numbers on the horizontal axes and the depths are in meters.
Fig. 3. The surface wind distribution, averaged over July and August 1996, over the seas surrounding the Taiwan area. Sta. A denotes Tungsha Island.
Here Q is the flow rate, (D, h) are the depth and width of the PHC, chosen to be (100 m, 50 km), and x is the distance measured from the lower right corner of the model basin. At the open boundary to the west of Peng-hu and at the north of the model domain, we use the Orlanski explicit radiation condition for the water level and velocity fields. The no-slip condition is used at all rigid boundaries.
Fig. 4. Typical vertical profiles of temperature (a) and salinity (b) of the SCS water and the Kuroshio branch current at the entrance of the PHC.
Fig. 5. Time variations of the averaged kinetic energy, temperature and salinity over the whole model basin.
Figure 8 shows a cross-strait plot, at grid Y = 35, of the downstream velocity component at 15 m for Case I and Case I-1. The region of higher velocity has moved from the eastern portion (between X-grid numbers 25 and 34) in Case I toward the middle portion of the strait (between X-grid numbers 15 and 25) in Case I-1. This is a general feature of the velocity distribution in the TS north of the CYR, as shown in Figs. 6(a) and 7(a). Due to its low velocity, the flow in
Figure 10 depicts the horizontal temperature and velocity distributions at 15 m and 45 m on day 20 of the flow which uses the result from Case II as a new initial condition and reduces the volume transport at the PHC to 0.5 Sv. Since the vertical stratification of the inflow in the PHC is much weaker in Cases II and II-1, the influence of the density structure on the flow in the TS becomes less important. The flow in the strait should depend mainly on the incoming flow rate in the PHC and the topographic variations. The Rossby numbers, based on the velocity of the incoming flow and the width of the PHC, are 0.1 and 0.05 for Case II and Case II-1, respectively, and can be regarded as small. Therefore we may expect that the flow pattern for Case II-1 is similar to that of Case II. The region of higher velocity occurs in the eastern portion of the strait in these two cases. Figure 11 depicts the cross-strait plot, at grid Y = 35, of the downstream velocity component at 15 m for Case II and Case II-1. The similarity of the velocity profiles in these two cases is evident.
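The quoted Rossby numbers can be checked with a short back-of-envelope script; it is illustrative only, and the latitude (about 23.5°N) and the use of the peak speed of a parabolic inflow profile (1.5 times the mean) are assumptions made here rather than values stated in the text:

import math

def rossby_number(transport_sv, depth_m=100.0, width_m=50e3, lat_deg=23.5):
    # Ro = U / (f * L), with U the peak inflow speed and L the channel width.
    f = 2.0 * 7.292e-5 * math.sin(math.radians(lat_deg))  # Coriolis parameter (1/s)
    u_mean = transport_sv * 1e6 / (depth_m * width_m)     # mean inflow speed (m/s)
    u_peak = 1.5 * u_mean                                  # peak of a parabolic profile
    return u_peak / (f * width_m)

print(round(rossby_number(1.0), 2))   # ~0.1  (Case II,   1.0 Sv)
print(round(rossby_number(0.5), 2))   # ~0.05 (Case II-1, 0.5 Sv)

Both values agree with the 0.1 and 0.05 quoted above to the precision given, which is only meant to confirm the order of magnitude.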
Fig. 12. Horizontal velocity and the vertical component of the relative vorticity distributions at 15 m for Case I (a) and Case II (b) on day 250. The solid curve is the contour line of zero relative vorticity and the shaded area denotes the region of positive relative vorticity.
As noted above, the stretching effect gives the water column positive relative vorticity as it flows over the CYR; hence there is a region of positive vorticity in the middle of the strait to the north of the CYR. The flow pattern associated with this positive vorticity distribution should have a stronger velocity along the eastern bank of the strait, especially in Case II (see Fig. 11). However, when the density gradient in the middle of the strait is strong enough (Case I), the thermal wind relation indicates that the flow should not be weak within the frontal zone. In this case, we may expect a more uniformly distributed velocity profile in the eastern half of the strait, as revealed in Fig. 8.
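For orientation, the thermal wind balance invoked here can be written in a standard Boussinesq form (the axis convention, with x the cross-strait coordinate and v the downstream, along-strait velocity, is ours rather than the paper's):

f \frac{\partial v}{\partial z} = -\frac{g}{\rho_0} \frac{\partial \rho}{\partial x},

so a strong cross-strait density front must be accompanied by vertical shear of the along-strait geostrophic flow, which is why the flow within the frontal zone cannot be weak throughout the water column when the front is strong.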
Figure 13 shows the distribution of velocity and the vertical component of the relative vorticity at 15 m for Case I-1 and Case II-1 on day 20. The relative vorticity distribution of Case II-1, Fig. 13(b), has a similar pattern to that of Case II, Fig. 12(b). However, the flow pattern for Case I-1 becomes very different from that of Case I. This is due to the fact that the density front in the middle of the strait is in equilibrium with the velocity field in Case I, and this equilibrium is disturbed when the flow rate is decreased in Case I-1. The warm water to the east of the front tends to move westward during the following geostrophic adjustment process. If the density contrast is strong enough, some of the available potential energy associated with the original front will be released. Since the topography to the north of the CYR has a cross-strait slope (Fig. 1), the westward spreading of the warm water is accompanied by compression of the water column. Therefore the relative vorticity in the middle of the strait north of the CYR changes from positive values in Case I into negative values in Case I-1.
Fig. 13. Horizontal velocity and the vertical component of the relative vorticity distributions at 15 m for Case I-1 (a) and Case II-1 (b) on day 20. The solid curve is the contour line of zero relative vorticity and the shaded area denotes the region of positive relative vorticity.
Figure 7(a) also shows that the meandering of the current in the northern TS is accompanied by an anti-cyclonic eddy in the KYD. This eddy tends to block the northward flow from the TS and pushes it westward. This eddy motion also enhances the westward intrusion of water to the north of Taiwan. Chern and Wang (1992) found that this kind of flow pattern occurs only when the outflow from the TS is weak, and this is consistent with the flow condition shown in Fig. 7. Figures 14(a) and (b) show the sea surface temperature (SST) contours derived from NOAA 12 and NOAA 14 images on 12 and 17 August 1998, respectively. The isotherms in Fig. 14(a) are roughly parallel to the coast, but in Fig. 14(b) both the westward intrusion of
Fig. 14. The sea surface temperature distribution (in °C) of the TS, derived from the SST images of NOAA 12 on 12 August 1998 (a) and NOAA 14 on 17 August 1998 (b). | 2019-04-25T13:08:03.613Z | 2000-12-01T00:00:00.000 | {
"year": 2000,
"sha1": "fb6c220a85eaa7a1f1254049d7c36d8c697160a8",
"oa_license": "CCBY",
"oa_url": "http://tao.cgu.org.tw/index.php/articles/archive/oceanic-science/item/download/2834_bab4cfb1ca2f302374768acc0666bf4a",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2b84e96493db5e86d9fd0323fb3044420496c43f",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Geology"
]
} |
9587053 | pes2o/s2orc | v3-fos-license | Accumulation of stress-related proteins within the glomeruli of the rat olfactory bulb following damage to olfactory receptor neurons.
The expression of stress-responsive proteins, such as nestin and a 27-kDa heat-shock protein (HSP27), was immunohistochemically examined in order to demonstrate glial responses in the rat olfactory bulb following sensory deprivation. At 3 days to 1 week after sensory deprivation, numerous nestin-expressing cells appeared within the glomerulus of the olfactory bulb. These cells were regarded as reactive astrocytes since they were immunoreactive for glial fibrillary acidic protein and showed hypertrophic features. The glomeruli, in which nestin-immunoreactive astrocytes were localized, were filled with degenerating terminals of olfactory receptor neurons and migrated microglia. A small population of nestin-immunoreactive cells was positive for a proliferating cell marker, Ki67 (8.0-9.7% at 3 days; 3.1 - 5.0% at 1 week). At 3 weeks, nestin-immunoreactive astrocytes were occasionally detected. At 6 weeks, when the olfactory receptor neurons had completely recovered, no nestin-immunoreactive astrocytes were detected. HSP 27 was also expressed within the glomerular astrocytes and showed a similar spatiotemporal expression pattern to nestin. The present study suggests that reactive astrocytes may be involved in axonal regeneration and synaptic remodeling in the olfactory system, through the recapitulation of developmentally regulated proteins, such as nestin and HSP27.
Introduction
The glomeruli of the olfactory bulb represent anatomical and functional entities in which the axons derived from the olfactory receptor neurons in the olfactory epithelium of the nose terminate to establish specialized synaptic interactions with dendrites of periglomerular, tufted, and mitral cells. This region of the brain is a unique region in adult mammals that exhibits plastic synapse formation, as the olfactory epithelium undergoes neurogenesis under physiological conditions and in response to lesions. Little is known about the role of glial cells in the synapse formation in glomeruli of the olfactory bulb, whereas an olfactory ensheathing cell, which is a unique type of Schwann cell enclosing axons of the olfactory receptor neurons within the olfactory nerve, is known to play a crucial role in promoting the long-distance growth of regenerating axons (reviewed by Boyd et al., 2003). Much attention has been paid to reports that discuss the active regulatory role of astrocytes, such as in inducing neurogenesis in adult neural stem cells (Song et al., 2002) and in regulating synapse formation (Mauch et al., 2001; Ullian et al., 2001), rather than focusing on the supportive role that has traditionally been assigned to them. Thus, it is assumed that the astrocytes in this specific region of the olfactory bulb may also be involved in plastic synapse formation.

The olfactory bulb was removed intact and placed in 4% PA in PB overnight. The tissues were immersed in a 20% sucrose buffer overnight, embedded in Embedding Matrix, and immediately frozen at −80°C. The olfactory bulb was cut frontally into 40-µm-thick serial sections. The free-floating sections were immersed in a mixture of glycerin and 0.02 M potassium phosphate-buffered saline (KPBS) and kept at −20°C. The olfactory mucosa of the upper back portion of the nasal septum was also removed and processed in the same manner in order to check the reversible lesion of the olfactory epithelium and to examine nestin expression in olfactory ensheathing cells. The tissue was cut into 10-µm-thick serial perpendicular sections on a cryostat and mounted onto glass slides.
All of the experimental procedures were reviewed by the Committee on Ethics for Animal Experiments of the Faculty of Medicine, Kyushu University and carried out according to the Guidelines for Animal Experiments of the University, and Law No. 105 and Notification No. 6 of the Japanese Government.
Immunohistochemistry
The immunohistochemical procedure used in the present study has been described elsewhere (Hirata et al., 2003; Yasuoka et al., 2004). Briefly, non-specific binding sites were blocked by preincubation with 1% bovine serum albumin in KPBS with 0.1% Triton X-100 at 4°C for 1 h. For nestin immunohistochemistry, free-floating serial sections of the olfactory bulb (240-µm interval) were first incubated with the primary antibody, a mouse mAb against nestin (Rat 401, developed by S. Hockfield and obtained from the Developmental Studies Hybridoma Bank [DSHB] maintained by The University of Iowa, Dept. of Biological Sciences, Iowa City, IA, USA 52242) (diluted 1:1,000 in KPBS) for 4 days at 4°C, and then with the secondary antibody, a FITC-conjugated anti-mouse IgG (Jackson, PA, USA), overnight at 4°C. Control sections were processed identically and in parallel; however, these were incubated with KPBS instead of with the primary antibodies. No labeling was detected in these controls. The FITC-labeled sections were nuclear-stained with propidium iodide (PI) using a Vectashield mounting medium with PI (Vector, CA, USA). For cellular identification of the nestin-immunoreactive elements, a double-immunofluorescence procedure for nestin with GFAP was performed. Sections were incubated with a mixture of mouse mAb against nestin and a rabbit polyclonal antibody (pAb) against GFAP (DAKO, Denmark) (1:10), followed with a mixture of FITC-conjugated donkey anti-mouse IgG and Texas Red-conjugated anti-rabbit IgG (Jackson).
The aim of the present study is to explore the response of glial cells, especially astrocytes, in the sensory-deprived olfactory bulb. To this end, we immunohistochemically examined the expression of nestin and a 27-kDa heat-shock protein (HSP27), both of which are known to be developmentally regulated proteins inducible under various situations of cellular stress and proliferation (nestin: Frisen et al., 1995;Duggal et al.,1997;Scorza et al., 2005, HSP27: reviewed by Ciocca et al. 1993;Plumier et al., 1996;Costigan et al., 1998;Lewis et al., 1999), within the olfactory bulb of adult rats in which the olfactory epithelium had been injured by a single subcutaneous injection of diethyldithiocarbamate (DDTC) (Ravi et al., 1997;Struble et al., 1998). The results demonstrated that nestin-expressing reactive astrocytes subacutely appeared within the glomerulus (mainly at 3 days to 1 week postinjection) and that they mostly coexpressed HSP27. Double-immunolabeling with a proliferating cell marker, Ki67, showed that part of them demonstrated mitotic activity. The role of these reactive astrocytes, which coexpress nestin and HSP27 within the glomerulus, is discussed.
Subjects
Adult male Wistar rats weighing 200-350 g were used for all the experiments. The animals were anaesthetized with diethyl ether and received a single subcutaneous injection of 600 mg/kg of 1.04 M sodium diethyldithiocarbamate (DDTC) (Sigma Chemical, St. Louis, MO, USA) dissolved in 0.01 M phosphate-buffered saline (PBS) at pH 7.4. This protocol results in severe damage to the olfactory epithelium at a few days post-injection, followed by a subsequent regeneration and recovery to normal levels by 5 weeks post-injection (Ravi et al., 1997; Struble et al., 1998). Animals were sacrificed after 1, 3, and 4 days, and 1, 3, and 6 weeks. Control animals received a PBS injection, while two animals remained untreated, in order to ensure that no damage was caused to the olfactory system by the injection. The animals were transcardially perfused with PBS, followed by a mixture of 2.8% paraformaldehyde (PA) and 0.2% picric acid in a 0.1 M phosphate buffer (PB) at pH 7.4.
To estimate the number of proliferating nestin-immunoreactive glomerular astrocytes, 5 serial frontal sections (240-µm interval) through the middle part of the olfactory bulbs obtained from 3 rats at each time point were double-immunofluorescence-labeled for nestin and Ki67 using a mixture of mAb against nestin and rabbit pAb against Ki67 (Yleiyi, Rome, Italy) as primary antibodies, and then using a mixture of FITC-conjugated donkey anti-mouse IgG and Texas Red-conjugated anti-rabbit IgG as secondary antibodies. The sections were further nuclear-stained with DAPI using a Vectashield mounting medium with DAPI (Vector) (cf. Fig. 3E).
The sections were observed under a fluorescence light microscope (Zeiss Axioplan) and images were taken by a DP70 camera and stored on a computer.
Alternatively, sections were incubated with a mixture of a biotinylated horse anti-mouse IgG (Vector) and Alexa488-conjugated anti-rabbit IgG (Molecular Probes, OR, USA), and finally with Texas Red-conjugated streptavidin (Jackson) for binding to the biotinylated secondary antibodies. The same procedure for nestin and/or GFAP was used on sections of the olfactory mucosa, except that the incubation conditions for each primary and secondary antibody were 1 day at 4°C and 4 h at room temperature, respectively.
To demonstrate the relationship between nestin-immunoreactive glomerular astrocytes and other cellular elements such as microglia and olfactory receptor neurons, and to compare the expression patterns of nestin and HSP27, a double-immunofluorescence procedure was performed on sections of the olfactory bulb in the same way using a mixture of mouse mAb against nestin and either a rabbit pAb against Iba1 (a microglial marker) (Wako, Japan), a goat pAb against olfactory marker protein (OMP) (a marker of olfactory receptor neurons) (Wako), or a goat pAb against HSP27 (Santa Cruz, CA, USA) as primary antibodies, followed by the appropriate secondary antibodies (Table 1).
Time course of regeneration of olfactory receptor neurons
The combination of light microscopy of the PI nuclear staining (Fig. 1A-F) with SEM (Fig. 1G-J) was useful for demonstrating the regenerative process of the olfactory receptor neurons following the DDTC injection.
In intact animals, the olfactory epithelium which contains olfactory receptor neurons is typically a tall pseudostratified epithelium of about 40 µm in thickness (Fig. 1A). At 1 day post-injection almost the entire olfactory epithelium had been removed, except for one or two layers of flattened cells covering the lamina propria (Fig. 1B). Subsequently, regeneration of the olfactory epithelium began and gradually progressed (Fig. 1C-D). At 3 weeks post-injection, the height of the olfactory epithelium had almost reached the normal level; however, it showed an irregular nuclear arrangement (Fig. 1E). At 6 weeks post-injection, it showed a regular nuclear arrangement as in the intact olfactory mucosa (Fig. 1F). Observations of the luminal surface of the intact olfactory epithelium by SEM showed a unique apical process of mature olfactory receptor neurons, known as an olfactory vesicle (Nomura et al., 2004). This possessed ten or more specialized cilia, the membrane of which is known to bear the receptors for odorants, extending horizontally in various directions (Fig. 1G). After the DDTC injection, the olfactory vesicles with specialized cilia completely disappeared. At 1 week post-injection, the surface was covered with fibrous structures, although a few olfactory vesicle-like structures were occasionally detected (Fig. 1H). The olfactory vesicle-like structures markedly increased in number after 3 weeks, although they still showed an immature profile because they lacked cilia (Fig. 1I). At 6 weeks, mature olfactory vesicles with specialized cilia, similar to those of the control, reappeared (Fig. 1J). Thus, the olfactory receptor neurons within the olfactory epithelium had been completely repaired at 6 weeks post-injection.
Appearance of nestin-immunoreactive olfactory ensheathing cells within the olfactory mucosa after the DDTC injection
GFAP immunohistochemistry revealed a unique profile of olfactory ensheathing cells within the olfactory nerve (Fig. 2A). The cells showed no marked nestin expression within the intact olfactory mucosa (Fig. 2B). After the
Confocal laser scanning microscopy (CLSM)
Double-fluorescence-labeled sections were imaged with a confocal laser scanning imaging system (LSM-GB200) attached to a microscope (Olympus). They were illuminated with an excitation wavelength of 488 nm (argon laser) for Alexa 488 and FITC, and 568 nm (krypton laser) for Texas Red and PI. To show the fine processes extending from the cell bodies, a series of optical sections at 1.5-µm intervals was projected and extended onto a single plane 10-20 µm in thickness (volume projection method) (cf. Figs. 2L-N and 4H-M). Green and red images were presented either as a superimposed image or separately as a grayscale image. The images were taken using 4×, 10×, 20×, or 40× objective lenses. All figures are confocal images, unless described as fluorescent micrographs in the figure legend.
SEM
Scanning electron microscopy (SEM) was carried out in order to examine the repair processes of the olfactory epithelium. Animals were perfused with a mixture of 4% PA-0.05% glutaraldehyde (GA) in a 0.1 M cacodylate buffer (CB). The olfactory mucosa was removed and postfixed with 2% GA in PB. The tissue was dehydrated in a graded ethanol series, transferred to a graded t-butyl alcohol series, and freeze-dried. Tissue was mounted onto double-sided carbon tape and coated with osmium in an HPC-1S osmium coater before being observed through a scanning electron microscope (JSM-840, Japan) at an accelerating voltage of 8 kV.
Semithin sections of nestin-immunolabeled olfactory bulb
To clarify the morphological details of the nestin-immunoreactive glomerular astrocytes, animals were perfused with a mixture of 4% PA-0.05% GA in CB. The olfactory bulb was removed and cut frontally into 50-µm-thick sections with a Vibratome. The tissue was processed for nestin immunohistochemistry using diaminobenzidine visualization. Then the sections were postfixed in 2% GA in PB, followed by 1% OsO4 in PB, before being embedded in Epon 812. Semithin sections were made and counterstained with 0.1% toluidine blue. The olfactory bulbs from control animals were processed in the same way, but without the nestin immunohistochemistry procedure.

No nestin-immunoreactive cells were detected within the nerve fiber layer where the olfactory ensheathing cells exist (Fig. 2H, J). Double-immunolabeling of nestin and GFAP (Fig. 2L-N) revealed that the nestin-immunoreactive cells coexpressed GFAP, suggesting that they were astrocytes. The density of the nestin-immunoreactive astrocytes ranged from 37.0 to 61.7/section at 3 days (3 rats) and from 40.8 to 63.8/section at 1 week post-injection (3 rats). At 3 weeks after the DDTC injection, the nestin-immunoreactive astrocytes were detected within only a few glomeruli (cf. Fig. 4D).
The density of the nestin-immunoreactive astrocytes ranged from 0.7 to 1.6/section at 3 weeks (3 rats). After that, they completely disappeared.
Appearance of nestin-immunoreactive astrocytes within the glomerulus after the DDTC injection
Double-immunolabeling of nestin and OMP showed that numerous nestin-immunoreactive cells appeared within the glomeruli of the olfactory bulb at 3 days (Fig. 2H, I) to 1 week post-injection (Fig. 2J), that is, at the early stage of regeneration in the olfactory receptor neurons, whereas no nestin-expression was seen within the glomeruli of the control olfactory bulb (Fig. 2K). The nestin-immunoreactive cells which had a star-like profile were located among scattered
Semiquantification of proliferating nestin-immunoreactive astrocytes
The number of proliferating nestin-immunoreactive astrocytes was counted in serial frontal sections that had been triple-labeled with nestin, Ki67 and DAPI. Only the cells with a clear nucleus were counted (Fig. 3E). The proportion of nestin/Ki67-double-immunoreactive astrocytes to all nestin-immunoreactive astrocytes was 8.0 -9.7% at 3 days, 3.1 -5.0% at 1 week, and 0% at 3 weeks. This suggests that proliferation peaked at the first appearance of nestin-immunoreactive astrocytes before decreasing.
Coexpression of nestin and HSP27
Double-immunolabeling of nestin and HSP27 of the olfactory bulb clearly demonstrated the coexpression of the two proteins after the DDTC injection. Changes in the expression pattern of the two proteins were readily observed in confocal images of the entire olfactory bulb (Fig. 4A-D). At 3 days post-injection, a considerable number of glomeruli expressed both nestin and HSP27 (Fig. 4B). At 1 week post-injection (Fig. 4C), the number of glomeruli expressing these two molecules appeared to be unchanged, except that the intensity of HSP27 in most glomeruli had decreased compared to that seen at 3 days. After 3 weeks, only a few glomeruli expressing the two molecules were found (Fig. 4D). Higher magnification showed that, after 3 days (Fig. 4E-G), almost all the nestin-immunoreactive astrocytes coexpressed HSP27, although the expression was too strong to delineate any cellular profile. After 1 week (Fig. 4H-J), almost all the nestin-immunoreactive astrocytes coexpressed HSP27. In nestin/HSP27-double-immunoreactive astrocytes, HSP27 immunoreactivity apparently filled the entire cytoplasm, so that the true hypertrophic configurations of nestin-containing cells and their fine processes were disclosed. Numerous fine processes appeared to be associated with each other within a glomerulus and some of them extended beyond the glomerulus toward the deeper layer (arrows in Fig. 4J). At 3 weeks after the DDTC injection (Fig. 4K-M), most of the fine processes had disappeared in the nestin/HSP27-double-immunoreactive astrocytes.

Double-immunolabeling of nestin and the microglial marker Iba1 showed that the nestin-immunoreactive astrocytes were closely associated with Iba1-immunoreactive microglia that had an ameboid figure, which is characteristic of activated microglia (Fig. 3A, B). In toluidine blue-stained semithin sections, the glomerulus of the intact olfactory bulb showed few cellular profiles except for the endothelial cells of the capillary (Fig. 3C). In contrast, the glomerulus of the sensory-deprived olfactory bulb, which had been immunostained for nestin, contained numerous migrated cells, including nestin-immunoreactive astrocytes and microglia, along with the debris of degenerating nerve elements (Fig. 3D). The nestin-immunoreactive astrocytes showed hypertrophic features with a large, round, pale nucleus (about 10 µm in diameter) characteristic of reactive astrocytes. These were often located at the periglomerular region, and their tapering processes often ended at the blood capillary (Fig. 3D).
Discussion
In the current study, severe damage to the olfactory epithelium was seen at 1 day after a single subcutaneous injection of 600 mg/kg of DDTC in rats. Subsequently, the damaged olfactory epithelium progressively recovered, as was reported in a DDTC injection model (Ravi et al., 1997; Struble et al., 1998) and also in some other olfactory epithelium lesion models that used axonal transection of the olfactory receptor neurons (Graziadei and Monti-Graziadei, 1978), inhalation of methyl bromide (Schwob et al., 1995), or irrigation with ZnSO4 (Williams et al., 2004). The SEM observation revealed the recovery process of the receptor site of the olfactory receptor neurons. Numerous olfactory vesicles with immature profiles were seen at 3 weeks; these matured with the formation of specialized cilia, which bear receptors for odorants, at 6 weeks post-lesion. The above time points correspond with the reinnervation of the olfactory bulb at about 3 weeks after axonal transection (Graziadei and Monti-Graziadei, 1980; Doucette et al., 1983) and restoration of synaptic contact with dendrites at 6 weeks (Doucette et al., 1983). Thus, the complete recovery of the receptor region observed by SEM may imply the reformation of their synaptic contact with secondary neurons within the glomeruli.
The present study demonstrated the transient appearance of nestin-expressing cells in the olfactory bulb at the start of regeneration of the olfactory epithelium. Based on the findings from the double-immunolabeling of nestin/GFAP, nestin/OMP, and nestin/Iba1, and of semithin sections of the olfactory bulb immunostained for nestin, the nestin-immunoreactive cells were regarded as reactive astrocytes that had migrated to the degenerating terminals of olfactory receptor neurons where activated microglia had also aggregated. The mechanism of nestin induction is still not fully understood, but gliotrophic factors and other diffusible factors released from degenerating neurons or infiltrating inflammatory cells are thought to be possible triggering factors (Chen et al., 2006). According to our findings, the nestin-expressing astrocytes were often located in the periphery of the glomeruli and extended fine long processes within the glomeruli or toward the deeper layers. Some of these astrocytes were proliferating. Valverde et al. (1992), who studied the development of the olfactory bulb using mAb Rat-401, demonstrated that, during the early postnatal days, olfactory glomeruli became complete through the transformation of nestin-immunoreactive radial glial cells into periglomerular astrocytes. They also demonstrated that nestin expression within these cells had virtually disappeared by the end of the first postnatal month. Given that nestin-expressing reactive astrocytes after CNS injury are originally derived from a nestin-expressing population (Frisen et al., 1995), it seems logical to consider that the nestin-immunoreactive reactive astrocytes within the glomeruli actually originate from the nestin-immunoreactive radial glial cells.

Fig. 3. Nestin-immunoreactive astrocytes intermingling with activated microglia in the glomeruli (A, B, and D) and proliferation of nestin-immunoreactive astrocytes (E). A and B: Double-immunolabeling of Iba1 (red) and nestin (green) of the olfactory bulb at 1 week post-injection. Note that nestin-immunoreactive astrocytes are closely associated with the Iba1-immunoreactive microglia that have migrated into the glomeruli (B). Scale bars = 10 µm. C and D: Toluidine blue staining of glomeruli of the intact olfactory bulb (C) and the olfactory bulb at 1 week post-injection (D). Brown represents the nestin-immunoreactive site. Note the periglomerular localization of nestin-immunoreactive astrocytes (arrows) and the contact of their processes with blood capillaries (arrowheads). Asterisks indicate the nuclei of migrated cells, most of which are presumably microglia. Scale bars = 10 µm. E: Double-immunolabeling of Ki67 (red) and nestin (green) with DAPI nuclear staining (blue) of the olfactory bulb at 3 days post-injection that was used for counting the number of proliferating nestin-immunoreactive astrocytes. An arrow indicates a Ki67/nestin-double-immunoreactive cell. A fluorescent micrograph. Scale bar = 10 µm.
Intriguingly, nestin expression was usually not detected within the nerve fiber layer of the olfactory bulb where an olfactory ensheathing cell encloses the axon of the olfactory receptor neurons although a simultaneous nestin expression was detected in some of the olfactory ensheathing cells within the nasal mucosa close to the damaged olfactory epithelium. This finding may reflect a unique property of the olfactory ensheathing cells, which maintain continuously open channels to allow for regrowth of olfactory nerve fibers without proliferation or migration, even though their enclosing axons are damaged (Li et al., 2005).
Of particular interest is the fact that HSP27 was also induced in reactive astrocytes almost concurrently with nestin. HSP27, a member of the small heat-shock protein family, is developmentally regulated (Costigan et al., 1998) and can be neuroprotective in the event of heat shock or other injuries (reviewed by Ciocca et al. 1993;Lewis et al., 1999;Plumier et al., 1996). HSP27 also participates in cytoskeletal dynamics by stabilization of the actin filaments that protect cells from insult (Lavoie et al., 1995) or via the assembly of the intermediate filament protein which functions as a molecular chaperone (Perng et al., 1999). This raises the possibility that HSP27 is also involved in the synthesis of the embryonic intermediate filament component, nestin. The coexpression of nestin and HSP27 within reactive astrocytes in cerebral abscesses was reported by Ha et al. (2002), who surmised that the increased synthesis of nestin was probably associated with small HSP synthesis via the MAP kinaseassociated pathway.
Accumulating evidence gained mainly by in vitro studies has revealed a previously unknown function of astrocytes: they may actively participate in synaptic plasticity (Mauch et al., 2001;Ullian et al., 2001). In addition, in injury models, nestin induction in reactive astrocytes has been proposed to be involved in local remodeling and repair of the mature brain, including the facilitation of synapse formation related to neural plasticity (Frisen et al., 1995;Duggal et al.,1997;Krum and Rosenstein 1999;Scorza et al., 2005). In the present study, reactive astrocytes with a coexpression of nestin and HSP27 appeared within the synaptic region of the olfactory bulb following sensory deprivation. Although their precise role still needs to be clarified, they may be involved in repair and synapse reformation. Recapitulation of these proteins suggests a structural plasticity that is thoroughly prepared for the continuous processing of olfactory information. These findings provide fundamental data for a further understanding of the mechanisms underlying axonal regeneration and synaptic remodeling in the olfactory system. | 2018-04-03T00:10:51.671Z | 2008-12-01T00:00:00.000 | {
"year": 2008,
"sha1": "a09b0cc416fc05f333e48207147bb689760c8ab7",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/aohc/71/4/71_4_265/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9de8d2b2caf0168dbbea3e368b0968207e91bb89",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
234093012 | pes2o/s2orc | v3-fos-license | The Legal Conundrum in the Implementation of the Convention on the Rights of the Child in Nigeria
DOI: 10.28946/slrev.Vol5.Iss1.603.pp1-13 An international law or treaty binds a state where that state has signed, ratified, acceded to, or domesticated it. In a monist State, ratification alone suffices for the international law or treaty to become binding, whereas in a dualist State the condition of domestication must first be satisfied. This is because of the peculiarities within various nations' legal systems (monist or dualist). In 1989, the United Nations Convention on the Rights of the Child (UNCRC), an international human rights instrument, came into force. Nigeria is a nation which became independent in the year 1960, now comprising 36 States and Abuja as its Federal Capital Territory, all under the Federal Government. Since the UNCRC's domestication as the Child Rights Act (CRA 2003) in Nigeria by the National Assembly, only about 24 States have enacted the law for onward enforcement. The remaining States are yet to comply, and this raises a question as to whether the said CRC has binding force in all the States of the Federation. This study aims to examine the extent to which the UNCRC and the CRA are being enforced in Nigeria. The research methodology of this study is purely doctrinal, where library materials such as books, articles from journals, and online articles have been carefully selected and analyzed for this research. This paper recommends establishing a global agency or organ that should be saddled with the responsibility of ensuring full compliance and enforcement of international laws or treaties. ©2021; This is an Open Access Research distributed under the term of the Creative Commons Attribution License (https://Creativecommons.org/licences/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
INTRODUCTION
The massive destruction of lives and properties during the Second World War and the dire need to protect human life led to the enactment of various human rights instruments such as the Universal Declaration of Human Rights (UDHR 1948), the International Covenant on Civil and Political Rights (ICCPR 1966), the International Covenant on Economic, Social and Cultural Rights (ICESCR 1966), and the Convention Against Torture (CAT 1987), among others. This paper basically examines child rights in the light of the domestication of international law, the Nigerian legal system and its dualistic nature, the UNCRC, the CRA, and the factors responsible for the non- and/or low implementation of the UNCRC and the CRA in Nigeria.
ANALYSIS AND DISCUSSION International Laws and International Legal Instruments
International law is an organized system of treaties and agreements entered into by two or more nations of the world that such a treaty or agreement governs how nations interact with each other, how citizens interact with each other, and how businesses are carried out among nation(s). 5 International law is also known as public international law is the set of rules, norms, and standards generally accepted by nations to govern their social, political, economic relations. 6 It establishes normative guidelines and a common conceptual framework to guide States across a broad range of domains, particularly on war, diplomacy, trade, and human rights, e.t.c. International law allows for the practice of a stable, consistent, and an organized international relation among nation(s) of the world. Some international law sources include international custom (general state practise accepted as law), treaties, and general principles. International law is quite different from a state or national law in that it is primarily applicable to countries rather than individuals. It also operates through consent, i.e. (acceptance, adoption, ratification and/or domestication) since there is no universally accepted authority to enforce it upon sovereign States. Consequently, States may choose not to abide by International law, and even break a treaty. However, such violations, exceptionally customary international law and peremptory norms, can be met with coercive actions ranging from military intervention to diplomatic and economic pressure. 7 On the other hand, international instruments are instruments that any State in the world can become a party to, for example, the Charter of the United Nations and International Criminal Law. 8 International instruments can further be described as established forms of broad foundations for protecting nations, entities and persons from harmful practices. Other examples of International instruments include the Universal Declaration of Human Rights (1948) which provides in Article 1 and 3 that all humans are born free, equal, and with dignity. As such, non shall be subjected to torture, inhumane, cruel or degrading treatment. There is also the ICCPR 1966, Convention of the Elimination of All Forms of Discrimination Against Women (CEDAW 1979), United Nations Convention on the Rights of a Child (UNCRC 1989) UN WOMEN, "Overview andInternational Legal Instruments," 2020, www.endvawnow.org/en/articles/582overview-and-international-legal-instruments.html [retrieved: December 24, 2020].
Furthermore, international instruments are the treaties and other international texts that generally serve as a legal source for international laws. 10 International instruments can be divided into two, i.e. international instruments to which any state in the world can be a party and regional instruments restricted to states in a particular region of the world, e.g. Africa, Asia, West Africa, etc. 11
Ratification or Domestication?
Ratification is the confirmation of an act of another person. 12 Ratification is an approval of an act of an agent that lacked the authority to bind the principal legally. To ratify a treaty, the State first signs it and then carries out its own established national legislative requirements. 13 Once the appropriate national organ makes a formal decision to be a party, ratification is said to be done. The ratification instrument is a formal sealed letter referring to the decision or choice which is signed by the state's appropriate authority. 14 Article 2 (1) (b) of the Vienna Convention on the Law of Treaties (VCLT 1969) provides that "ratification', 'acceptance', 'approval' and 'accession' mean in each case the international Act whereby a State establishes at the international plane its consent to be bound by a treaty". 15 Domestication in International law is a process when an International agreement becomes a part of the municipal law of a state. For International law or treaty to affect, the country must pass domestic legislation. In some states categorized as monist, treaties which are considered to be sufficient become law without passing a domestic legislation 16 while a dualist require domestic legislation to adopt such International Agreement after ratification. There are cases where treaties may enter into force immediately upon signature and binds the parties without ratification. It usually occurs in bilateral treaties. This principle was firmly established by the International Court of Justice in the case of Cameroon v. Nigeria. 17 On 29 March 1994, Cameroon brought an application before the International Court of Justice (ICJ) seeking the court to determine the question of sovereignty on the Bakassi Peninsula as well as a parcel of land around Lake Chad against Nigeria. 18 Cameroon also brought other demands which include a declaration of the maritime boundary between the two states, the instant and absolute retraction of Nigerian troops and properties from that area according to the fact that Nigeria had violated and was violating the fundamental principles of respect for frontiers inherited from colonization, i.e. uti possidetis which means "as you possess under the law". 10 O A. Hathaway "Do Human Rights Treaties Make a Difference?", The Yale Journal Vol. 111, No. 8 (Jun., 2002, pp. 1935-2042 It is simply a principle of International law that provides that newly formed sovereign states retain the internal borders that their preceding dependent area had before their independence 19 and the rules of conventional and customary international law, and that Nigeria's international responsibility was not complied with. 20 Nigeria came up with about seven (7) preliminary objections to the suit in June 1998, but the same was rejected. Some of the preliminary objections were that the court lacked jurisdiction and that Cameroon's application was inadmissible. Nigeria also laid claims on the disputed Lake Chad area based on historical consolidation of title, i.e. presence/ usage of land over a reasonable period and Cameroon's acquiescence or tacit consent and as such preference should be given to the holder of the title. 21 As per the court's jurisdiction, Cameroon relied on the declarations made between Cameroon and Nigeria under Article 36(2) of the ICJ Statute. In the court's judgment of 11 June 1998, the court rejected some of Nigeria's preliminary objections, and it also reserved some for consideration at the merit stage. 
22 Consequently, the court rejected Nigeria's argument that the Maroua Declaration 23 was invalid under International law even though it was signed by the Nigerian Head of State of the time, but it was never ratified. From the above, it is clear that an International law can become binding on a state upon signature regardless of whether ratification or domestication has been done. However, it is worthy of note that this position of the law is dependent on the circumstances of a case. 24 Thus, ratification is an act of the executive arm of the government at the international level. At the same time, domestication is primarily done by the legislature and is meant to bring the treaty into the domestic setting.
The Nigerian Legal System
A Legal System refers to the process of making, interpreting and enforcing the law of a state. 25 According to Ese Malemi, the legal system refers to the laws, courts personnel of the law, and the justice system's administration in a given state, country or geographical entity. Therefore, a legal system comprises four basic elements that include the laws, courts, personnel of the law and the administration of Justice System. 26 The amalgamation of the Northern and Southern Protectorates in 1914 by Great Britain gave birth to the present-day Nigeria. 27 Before the coming of the colonial masters, Nigerians existed as a people of different, independent and unrelated entities 28 as each of the occupants had their own Systems of Legislative, Judicial and Executive Administration. 29 The system of government adopted and practised in Nigeria is Federalism. A federal system of government is a system where powers are shared between the central and regional governments. In Nigeria, power is shared between the federal, state and local governments. The constitution adequately distinguishes the responsibilities between these three levels of government into the exclusive, concurrent and residual list. Where there is a clash or conflict between the levels of government, the Federal shall prevail over the state, and likewise, the state over the Local Government and such law or action by the lower level of government shall be void. 30 The Nigerian legal system sources include Customs and Traditions, Islamic law, English law, Local Legislations, Foreign laws and Legal Writings from erudite renowned scholars of the various classes of law. 31 The Dualistic Nature of the Nigerian Legal System In Nigeria, there exist two law making bodies which include the National Assembly 32 for the whole federation of Nigeria and the State Houses of Assembly 33 for each respective States of the federation. The various State Houses of Assembly make laws which are peculiar to the people of such state. Where the National Assembly makes a law, it remains an option to adopt the same before implementation and enforcement. It is at the discretion of such state to decide whether they want such law or not. This dual system is one of the factors responsible for the nonimplementation and/or non-enforcement of specific laws in individual States made by the National Assembly in Nigeria. Nigeria's dualistic legal system is one of the basic reasons responsible for the challenge in the implementation and enforcement of the UNCRC/ CRA in Nigeria.
The United Nations Convention on the Rights of the Child
In 1989, a notable contribution was made to protect children's rights through the enactment and adoption of the United Nations Convention on the Rights of the Child (UNCRC 1989). The Convention separated adulthood from childhood. Childhood is a particular time in which children should grow, play, develop, learn, and flourish with dignity. 34 The UNCRC is a human rights treaty that sets out to protect children's civil, political, economic, social, health and cultural rights. (Publishers Ltd, 1996). 30 Tobi N. 31 Tobi N. 32 Tobi N.; S 47, which provides that there shall be a National Assembly for the federation which shall consist of a Senate and a House of Representatives. 33 The Convention 36 provides that any person below the age of eighteen (18) years is a child until the majority is attained. The Black's Law Dictionary 37 defines a child as a person under the age of majority. The age of majority has since been seen as age eighteen (18). The African Union Charter on the Rights of Women and Children (ACURWC) defines a child as "every human being below the age of eighteen (18) years". The issue of a child's rights before the enactment of the UNCRC was locked up in jurisprudential debate among scholars and human rights activists until the UNCRC. 38 The grave atrocities committed during World War II and other factors prompted renewed human rights thinking and the decision to ensure international unity. 39 On the above, efforts were made to establish a legal and administrative set-up to protect children's rights. Consequently, the UN adopted the Declaration of Geneva 40 for the protection of the child in 1924. It was subsequently followed in 1945 by the United Nations Charter and the Universal Declaration of Human Rights (UDHR 1948). 41 The actual regime to protect the child came with the adoption of the 1959 Declaration on the Rights of the Child by the United Nations. The UNCRC was drafted in 1979, which is referred to as the Year of the Child. In 1989, the UNCRC was adopted, and on 2 September 1990 it came into force. It became the most widely ratified document on child rights, having the highest number of ratifications in the history of international instruments. 42 More than 196 countries are party to it, including every United Nations member except the United States. 43 The Convention is to ensure the exercise of parental responsibilities within States and also to serve as legislation against human trafficking, child slavery, child prostitution and pornography, among others, as it forbids children from being separated from their parents against their will except where it is in the best interest of the child. 44 In the case of Williams v Williams 45 the court described a child's best interest as comprising many factors for consideration, such as emotional attachment to a particular parent (mother or father) and adequacy of facilities such as educational, religious or other opportunities for proper upbringing.
The Child Rights Act in Nigeria
The UNCRC is a human rights treaty setting out the civil, cultural, social, economic, health and political rights children are entitled to. In 2003, Nigeria domesticated the UNCRC as the Child Rights Act through a due recognized process based on Section 12 of the Nigerian constitution 36 "United Nations Convention on the Rights of a Child 1989 Part 1 Art 1" (n.d. 1999 as amended. 46 The Act gives legal effect to the commitment made by Nigeria under the UNCRC, and the AUCRWC. This law was passed at the Federal level. The Child Rights Act was created to serve as legal documentation and protect children's rights and responsibilities in Nigeria. 47 Before enacting the CRA, the primary law dealing with matters affecting children in Nigeria was the Children and Young Person's Act (CYPA 1958) and the Labour Act 2004. The structure of the CRA has shown that there is a mandate to provide a legislation that will incorporate all the rights and responsibilities of children which consolidates all laws relating to children into one single legislation as well as specifying the duties and obligations of government, parents, authorities, organizations and other related bodies to children.
Some notable decided cases in Nigeria that buttress the extent of implementation of child rights include Otti v Otti, 48 where it was held that the responsibilities of parents to children entail the inherent responsibility to control, preserve and care for the child's person, food, clothing, etc.; a similar decision was reached in the case of Alabi v Alabi.
The delay by some States in Nigeria to adopt the Child Rights Act or to enact a replica of CRA has been described as a great hindrance to the development and protection of the Nigerian child by the United Nations Children Fund (UNICEF) Chief of Bauchi Field Office, Mr. Bhanu Pathak in a speech during the 2018 Children's Day Celebrations at UNICEF office in Bauchi (Nigeria). 49 Mr. Pathak emphasized that States are reluctant to adopt the Act for onward implementation despite the Federal Governments effort. He restated his expectations that he desires to see that all States of the federation have signed the law for implementation. 50 Despite that some Nigerian states have enacted the CRA, there still exists the problem of full implementation and full enforcement. Geoffery asserted that most States that have assented to the law appear reluctant to enforce it because they believe that it would make children grow wild and this is not true as he added. 51 This misconception is also at the root of the reluctance to pass the bill in the few states yet to implement the CRA. 52
The Enforceability of International Law under the Nigerian Legal System
The Nigerian constitution provides in Section 12 that "no treaty between the Federation and any other country shall have the force of law except to the extent to which any such treaty has been enacted into law by the National Assembly". The status of International law in Nigeria solely relies on this provision. Furthermore, customary international law can be Judicially Noticed as seen in the case of African Continental Bank v Eagles Super Pack Ltd 53 and Section 17 of the Evidence Act, 2011 even though such laws are not enacted in the state. In the case of African Continental Bank v Eagles Super Pack Ltd, 54 the issue for determination was whether the Uniform Custom and Practice (UCP) for documentary credit is applicable in Nigeria. The International Chamber made the UCP of Commerce with its headquarters in Paris with a view of having a universal standardization of letters of credit in banking and commercial transactions. At the trial court, Per Ononuju J held that the UCP is not applicable in Nigeria. However, it was held at the Court of Appeal that the UCP constitutes customary international law and can be judicially noticed and applied in Nigeria. 55 From the provision of section 17 of the Evidence Act, 2011, it is clear that an International Law, although not ratified in Nigeria, shall have the force of law in Nigeria if it has been adjudicated upon by a superior court of record. A visible example is the boundary case between Nigeria v Cameroon (Supra). By implication, laws established there from becomes binding on Nigeria even though Nigeria has not ratified such law.
Factors Responsible for the Failure of Full Implementation of the UNCRC and the CRA
The enactment of the Child Rights Act 2003 is a result of the signing and ratification of the UNCRC and the AUCRWC. The CRA serves as a necessary legal backing for the commitment made by Nigeria to the UNCRC and the AUCRWC.
All issues concerning children in Nigeria before 1993 were handled by the Department of Social Welfare of the Federal Ministry of Social Development and Culture. After the Children's Summit in 1990, a Commission was created, which has now been divided into two, i.e. the Ministry of Women Affairs and Youth Development, to take over issues relating to children. In 1994, the Federal Government inaugurated the National Child Rights Implementation Committee (NCRIC) to popularize the AUCRWC and the UNCRC. The body facilitated the signing and ratification of the two optional protocols of the UNCRC, which eventually led to the CRA's enactment. 56 Following the enactment of the CRA by the Federal Government of Nigeria, responsibility was placed on the NCRIC to ensure full implementation of the CRA in all states of the federation. Other bodies sharing this responsibility include the National Human Rights Commission (NHRC), 57 the National Agency for the Prohibition of Trafficking in Persons (NAPTIP), 58 among others.
The Government of Nigeria has taken steps to ensure compliance with and enforcement of the UNCRC, the AUCRWC and the CRA. However, there are no significant records of enforcement, especially in States that are yet to enact the CRA. 59 54 Yusuf Ali & Co. 55 Azoro, "The Place of Customary International Law in the Nigerian Legal System - A Jurisprudential Perspective." 56 The two protocols are the Optional Protocol on the Involvement of Children in Armed Conflict and the Optional Protocol on the Sale of Children, Child Prostitution and Child Pornography. 57 A commission established in 1995 under the NHRC Act, as amended in 2010, to serve as machinery for safeguarding the human rights of the Nigerian population. It monitors human rights in Nigeria, assists victims of human rights violations, and helps formulate the Nigerian government's policies on human rights. 58 NAPTIP is an agency of the government created in 2003 to combat human trafficking and other similar human rights violations; it is one of the agencies under the supervision of the Federal Ministry of Justice. 59 Although signed as an Act of the National Assembly, it becomes binding automatically on the FCT and on any other state that ratifies it; therefore, the scope of application and enforceability of the Act or Law, as the case may be, is restricted to the FCT and any State which consents to its application and enforcement through ratification. One of the factors responsible for the problem of non-implementation or poor implementation of the CRA is the attitude of Nigerian courts to international laws or treaties adopted for implementation in Nigeria. In Ogugu v State, 60 the Supreme Court of Nigeria held that the African Charter's provisions are enforceable, but through the Nigerian courts' already existing rules and procedures. In Chief Gani Fawehinmi v Sani Abacha, 61 the Supreme Court stated that the African Charter is below the constitution and, in cases of conflict, the constitution shall prevail. The combined effect of the above two cases implies that international law is subject to the Nigerian courts' rules and proceedings. If there are inconsistencies between international law and the constitution of Nigeria, the constitution shall prevail.
Secondly, the federal system of government in Nigeria impedes the full implementation of the Child Rights Act 2003. Each of the thirty-six States of the federation is autonomous and equal to the others. 62 These 36 States have their own Houses of Assembly saddled with the responsibility of making local laws suitable to the people of the state. Where the National Assembly makes a law, it becomes binding and enforceable in Nigeria as a federal law, but for such a law to be binding on a State, the State must enact it as a State law. This practice means that States will have two similar laws in force, i.e. the federal law and the State law. These two laws create contention as to which is to be enforced where there is a breach. It is important to note that the law enforcement agents in Nigeria operate as federal agencies; invariably, the federal law is what they enforce. Who then enforces the State laws? What is the essence of State law?
The customs and traditions of individual societies and states also contribute to the non-implementation or partial implementation of the CRA in some states in Nigeria. Obviously, some states' customs and practices, especially those of rural areas, conflict with the CRA.
These customs and practices range and vary among the peoples and societies inhabiting Nigeria, cutting across the North, East, West and South of the country. These practices include marrying off girls before they attain the age of 18, the killing of twins and albinos, discrimination against children with disabilities, promises of future marriage made on behalf of young children, unnecessary and severe labour imposed on children, especially on orphans and children living with guardians, female genital mutilation, etc. All the above-mentioned customary practices contravene the provisions of the Child Rights Act. The provisions contravened include section 1 (best interest of the child), section 4 (right to survival and development), section 10 (no deprivation merely because of the circumstance of birth), section 11 (right to dignity, i.e. no physical, mental or emotional injury, abuse, neglect or maltreatment, including sexual abuse, of a child), section 21 (prohibition of marriage under 18 years; such marriage is null and void), section 22 (no betrothal of young girls for future marriage), section 25 (exposure of children to trafficking of narcotic drugs and the use of children in other criminal activities), section 28 (prohibition of exploitative labour, i.e. no child should work except in family and agricultural work that is not so heavy as to adversely affect physical, mental, spiritual, moral or social development) and section 32 (other forms of exploitation of a child), which is an omnibus provision for the protection of a child. 63 It is pertinent to mention that people consider their customary practices sacred, which makes it complicated for such people to abolish such customs.
Furthermore, some States have deliberately refused to enact or replicate the CRA. This could be due to several reasons peculiar to those States. On the other hand, some other States have replicated the CRA but refused to fully implement the law. 64 More so, there is also the notion that the CRA gives so much liberty to children because it dispenses with the standard African style of disciplining children, which the CRA may regard as a breach of children's rights. African culture believes in and relies heavily on beating as a means of disciplining a child; to beat a child is regarded as correcting the child. The Child Rights Act frowns at any act of battery, torture, suffering or inhumane treatment of a child. This invariably amounts to a conflict between the Child Rights Act and the African system of child discipline. With this African perception, adoption and full implementation of the CRA would be practically impossible, because the Act would place restrictions on the African system of discipline.
CONCLUSION
Conclusively, laws are made to be obeyed. Invariably, there would be no need to make laws if there were no possibility of implementation and enforcement. The UNCRC and the CRA are both innovative landmark laws made specifically for the protection of children. It is worthy of note that these laws are enacted to protect the best interest of the child. The importance of the child's interest cannot be overemphasized; hence, these instruments are needed in every State of Nigeria. Adoption, enactment, implementation, and enforcement of the CRA imply growth and development and, most importantly, honour and respect for children's life and existence.
However, many factors have contributed to the problem of full implementation and full enforcement, most notably the federal system of government practised in Nigeria. The dual system of law-making also contributes to this problem. Owing to the CRA's importance, States are therefore enjoined to enact or replicate the CRA and ensure its full implementation. Even though there is no international law enforcement agency, it is the state's responsibility to ensure enforcement.
From the findings and discussions, this paper recommends as follows: 1) Adequate awareness should be created by the government agencies, NGOs, and social organizations handling children's issues in States that are yet to attain full implementation of the Act, on the innovations brought by the Child Rights Act to enhance the wellbeing of children; 2) Proper steps or protocols should be put in place for law enforcement agencies to ensure successful implementation and full enforcement of the Child Rights Act in the best interest of children; 3) It is further recommended that an international law enforcement agency would also assist in ensuring that nations or States that have signed an international law or treaty comply with the same. Such an agency should be given the power to investigate and recommend cases to appropriate authorities for prosecution of violators; 4) National and State Houses of Assembly must ensure that international laws are not just adopted, signed, ratified and domesticated; they should endeavour to give such laws adequate consideration and scrutiny with the sole aim of having better laws in society, through the instrumentality of States' rights of reservation and the open right to amend laws to suit a State's peculiar conditions; 5) Additionally, National and State Houses of Assembly must endeavour to sign and ratify international laws whose enforceability is achievable, so as not to have dormant laws in the state. 63 "Child Rights Act 2003" (n.d.). 64 Yusuf Ali & Co, "Nigeria Weekly Law Report." | 2021-04-28T14:51:17.334Z | 2021-01-31T00:00:00.000 | {
"year": 2021,
"sha1": "4ff4b3bc0d7a836e23e25674706bdcb7f1571718",
"oa_license": "CCBY",
"oa_url": "http://journal.fh.unsri.ac.id/index.php/sriwijayalawreview/article/download/603/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4ff4b3bc0d7a836e23e25674706bdcb7f1571718",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
21742369 | pes2o/s2orc | v3-fos-license | Human Immunodeficiency Virus Serodiscordance and Dual Contraceptive Method Use Among Human Immunodeficiency Virus-infected Men and Women in Lilongwe, Malawi
Our study of human immunodeficiency virus–infected clients attending ART clinics in Lilongwe, Malawi, found that being in a serodiscordant relationship was not associated with dual method contraceptive use at last sex.
Sub-Saharan Africa (SSA) remains disproportionately affected by the HIV pandemic compared with other regions in the world. As the region struggles to decrease HIV-related morbidity and mortality, new infections have remained persistently high, accounting for almost two thirds of the global total of new HIV infections. 1 Unintended pregnancies are also high in this part of the world. The high rate of unintended pregnancies contributes to the high rates of unsafe abortions, maternal morbidity, and maternal mortality in the region. 2 Malawi, a country in SSA, has an overall HIV prevalence of 10.6% 3 and an unintended pregnancy rate of 41%. 4 The high unintended pregnancy rate can be attributed to the low contraceptive prevalence rate (59%) and high unmet need for family planning (19%) among Malawian women. 4 HIV serodiscordant couples face the dual challenge of preventing HIV transmission to the uninfected partner and avoiding unintended pregnancy. The World Health Organization recommends dual (contraceptive) method use among serodiscordant couples. 5 Dual method use entails use of condoms together with another effective birth control method to avert unintended pregnancy and prevent transmission of HIV and other sexually transmitted infections (STIs) to uninfected partners. Thus, dual method use has the potential to reduce new HIV infections among serodiscordant sexual partners and prevent unintended pregnancies concurrently. Through prevention of unintended pregnancies, dual method use can also lead to reduced mother-to-child HIV transmission. These two reductions support achievement of Sustainable Development Goal number 3, which is to end the HIV/AIDS epidemic by 2030. 6 Despite the recommendation, low rates of dual method use among people living with HIV have been reported in Brazil, Nigeria, and Thailand. [7][8][9] In SSA, limited data exist on dual method use in the general population, as well as among HIV-infected men and women in serodiscordant relationships, a population at high risk of HIV transmission. Therefore, in this analysis, we used data from a cross-sectional study in Malawi to estimate the prevalence of dual method use among HIV-infected men and women and assess the association between couple serodiscordance and dual method use.
METHODS
We conducted a secondary analysis of data from a cross-sectional study that evaluated knowledge, attitudes, and practices for reproductive health among HIV-infected men and women receiving HIV care in Lilongwe, Malawi. The cross-sectional study was approved by the Malawi National Health Sciences Research Committee, the Emory University Institutional Review Board, and the University of North Carolina at Chapel Hill Institutional Review Board.
Details about the design and conduct of the cross-sectional study have already been described. 10 In brief, the study enrolled HIV-infected individuals attending the Lighthouse Trust clinics at Kamuzu Central Hospital and Bwaila Hospital (Martin Preuss Clinic) in Lilongwe, Malawi, from September 26, 2013, to December 20, 2013. Potential participants were eligible for enrollment in the study if they (1) were between the ages of 18 and 45 years, (2) spoke Chichewa (the most commonly spoken local language) fluently, (3) had a sexual partner within the past 6 months, (4) had a documented HIV-positive status, and (5) were a registered client at either Lighthouse clinic. Once enrolled, the study participant completed a face-to-face paper-based questionnaire administered by a trained research assistant.
The questionnaire captured information on (1) demographics, (2) partner HIV status, (3) condom use, (4) sexual history and current sexual behavior, (5) fertility intent, (6) contraceptive knowledge, attitudes, and use, (7) disclosure of HIV status to their partner, and (8) antiretroviral therapy (ART) use.
Main Exposure
Our primary exposure variable was serodiscordance, defined as the participant reporting that their most recent sexual partner was known to be HIV-uninfected.
Main Outcome
The primary outcome variable was dual method use. We identified participants as dual method users if they reported using condoms and another birth control method concurrently during the last time they had sex. The other birth control methods used included the intrauterine contraceptive device, levonorgestrel or etonogestrel implant, depot medroxyprogesterone acetate (DMPA) injectable, oral contraceptives (OC), emergency contraception, female sterilization/tubal ligation, and male sterilization/vasectomy. We then compared condom use versus modern contraceptive use among participants who were not dual method users to ascertain whether there was a gap in dual method use and to determine whether the gap was due to lack of condom use or lack of use of modern contraceptives.
Potential Confounders and Effect Measure Modification
Based on previous literature, we identified the following variables from the survey as potential confounders of the association between serodiscordance and dual contraceptive method use: age, education (no education, primary, secondary and > secondary), marital status (married and not currently married), partnership duration (≤1 year and > 1 year), number of sexual partners in the past month (none, 1, and > 1), desire for more children (yes or no), time since HIV diagnosis (<1 year, 1-5 years, 5-10 years, and > 10 years), whether participant was on ART or not, duration on ART, partner disclosure of HIV status (yes or no), ability to refuse sex if a partner did not want to use a condom (yes or no), and house floor materials (as a measure of socioeconomic status: earth/sand/dung, cement, and other).
To reduce sparse data in the multivariable model for women, we collapsed education into three categories: no education, primary, and ≥ secondary. The number of sexual partners in the past month was also collapsed into two categories: no partner and ≥ 1 partner.
HIV-infected individuals who are on ART would have a suppressed viral load and a reduced risk of transmitting HIV to their sexual partners. Individuals who are virally suppressed may be less willing to use a condom compared with those who are not on ART. We therefore assessed whether ART status is an effect measure modifier (EMM) of the association between serodiscordance and dual method use in this population. We also evaluated EMM by desire for more children (fertility intent), because those who desire more children would be expected to be less likely to use dual methods.
Statistical Analyses
Because report of contraceptive use may differ between men and women, we conducted separate analyses for men and women. In univariable analyses, we used Fisher exact test and Wilcoxon rank sum test to assess for associations between potential confounders and both serodiscordance and dual contraceptive method use. EMM was assessed using Breslow-Day test of homogeneity of the odds ratios. A priori, all variables that yielded a P value of 0.2 or less in the univariable analysis with either serodiscordance or dual method use were included in the multivariable model. Multivariable logistic regression was used to estimate adjusted odds ratios (aORs) and 95% confidence intervals (CIs) for the association between serodiscordance and dual contraceptive method. All participants who had missing data on either serodiscordance status or dual method use were excluded from the analyses.
All statistical analyses were performed in Stata 14.1 (Stata Corp LP, College Station, TX).
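For illustration only, the sketch below shows how an adjusted odds ratio and its 95% CI of the kind reported here can be extracted from a fitted multivariable logistic regression. The study's actual models were fitted in Stata 14.1; this Python/statsmodels analogue uses hypothetical file and column names (`dual_method`, `serodiscordant`, `age`, ...) and is not the authors' code.

```python
# Minimal sketch of estimating an adjusted odds ratio (aOR) with logistic regression.
# Hypothetical column names; the study's actual analyses were run in Stata 14.1.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("women_analytic_sample.csv")            # hypothetical analytic file
covariates = ["serodiscordant", "age", "education_secondary",
              "desires_more_children", "can_refuse_sex_without_condom"]

X = sm.add_constant(df[covariates])                       # add intercept term
y = df["dual_method"]                                     # 1 = condom + another modern method at last sex

fit = sm.Logit(y, X).fit(disp=False)                      # maximum-likelihood fit

aor = np.exp(fit.params["serodiscordant"])                # adjusted odds ratio for serodiscordance
ci_low, ci_high = np.exp(fit.conf_int().loc["serodiscordant"])
print(f"aOR = {aor:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```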
RESULTS
Of the 562 participants enrolled, 308 (54.8%) were women. For this analysis, we excluded 2 participants (1 man and 1 woman) who had missing data for serodiscordance and 5 women with missing data for dual method use.
Among Women
We included 302 women in the analyses, who had a median age of 32 years (interquartile range [IQR], 27-37). The median time since HIV diagnosis was 4 years (IQR, 2-7). Of the 302 women, 256 (84.8%) were married, 269 (91.5%) were in a sexual partnership for more than 1 year, and 246 (83.1%) had only 1 sexual partner in the past month (Table 1). Among the 268 (88.7%) women who were on ART, 52 (17.2%) had been on ART for less than 1 year. The number of women in a known serodiscordant relationship at last sex was 57 (18.9%). There were no statistically significant differences in age, education, marital status, partnership duration, number of sexual partners in the past month, fertility intent, disclosure of HIV status to recent partner, time since HIV diagnosis, ART status and duration, partner disclosure of STI, ability to refuse sex if a partner did not want to use a condom, and house floor materials between women who were in a known serodiscordant relationship at last sex and those who were in a known seroconcordant relationship.
Eighty (26.5%) women were using dual methods at last sex. Among dual method users, the DMPA injection (48.7%) and implants (27.5%) were the methods most commonly used together with condoms (Table 2). Compared with women who were not using dual methods at last sex, more women who were had completed secondary education (60.0% vs 24.3%, P = 0.01), and more had the ability to refuse sex if their partner did not want to use a condom (75.0% vs 45.9%, P < 0.001) (Table 3). Fewer women (20.0%) who were using dual methods at last sex desired to have more children compared with those who were not using dual methods (37.8%, P = 0.004). More women who were on ART (159/268, 59.3%) reported that their partners used condoms than women who were not on ART (13/34, 38.2%, P = 0.02). However, the effect of serodiscordance on dual method use did not significantly differ between women who were on ART (OR, 1.27; 95% CI, 0.61-2.57) and those who were not on ART (OR, 0.96; 95% CI, 0.12-12.50; P = 0.82). We did not find any evidence of effect measure modification of the association between serodiscordance and dual method use even when ART was categorized as 1 year or less versus ART use longer than 1 year. We also did not find evidence of effect measure modification of the association between serodiscordance and dual method use by fertility intent (P value of 0.97).
Among the 57 women who were in a serodiscordant relationship at last sex, 17 (29.8%) reported using dual methods at last sex. Among the 245 women who were in a seroconcordant relationship at last sex, 63 (25.7%) were using dual methods at last sex. Serodiscordance at last sex was not significantly associated with dual contraceptive method utilization among women in unadjusted analysis (OR, 1.23; 95% CI, 0.65-2.32). They were also not significantly associated after adjusting for age, education, number of sexual partners in the past month, desire for more children, ART
Among Men
We included 253 men in the analyses, who had a median age of 37 years (IQR, 33-41 years). The median time since HIV diagnosis was 3 years (IQR, 1-7 years). Of these, 236 men (93.3%) were married, 233 (92.8%) were in a sexual partnership for more than 1 year, and 208 (84.5%) had only 1 sexual partner in the past month (Table 4). Among the 221 (87.3%) men who were on ART, 61 (24.2%) had been on ART for less than 1 year. The number of men in a known serodiscordant relationship at last sex was 44 (17.4%). More men known to be in a serodiscordant relationship at last sex (65.9%) had completed secondary school than those known to be in a seroconcordant relationship (42.1%, P = 0.003). There were no notable differences in the other characteristics between men who were in a known serodiscordant relationship at last sex and those who were in a seroconcordant relationship.
Sixty-three (24.9%) men were in a relationship that used dual methods at last sex. Similar to women, the male dual method users reported that the DMPA injection (45.3%) and implants (25.0%) were the most commonly used methods with condoms (Table 2). More men who were using dual methods at last sex had the ability to refuse sex if their partner did not want to use a condom (82.5% vs 68.1%, P <0.04) than those who were not using dual methods (Table 5). Fewer men (17.5%) who were using dual methods at last sex desired to have more children than those who were not using dual methods (31.5%, P = 0.04). Among men, condom use did not differ by ART status for those on ART (151 [68.3%] of 221) versus those not on ART (19 [59.4%] of 32, P = 0.31). The effect of serodiscordance on dual method use also did not differ between men who were on ART (OR, 0.73; 95% CI, 0.27-1.76) and those who were not on ART (OR, 1.11; 95% CI, 0.02-23.96, P = 0.75). Among men, we did not find any evidence of effect measure modification of the association between serodiscordance and dual method use even when ART was categorized as 1 year or less versus ART use longer than 1 year. We also did not find evidence of effect measure modification of the association between serodiscordance and dual method use by fertility intent (P value of 0.18).
Among the 44 men who were in a serodiscordant relationship at last sex, 9 (20.5%) reported using dual methods at last sex. Among the 209 men who were in a seroconcordant relationship at last sex, 54 (25.8%) reported using dual methods at last sex. Serodiscordance at last sex was not significantly associated with dual contraceptive method utilization among men in unadjusted analysis (OR, 0.75; 95% CI, 0.33-1.63). The association also remained nonsignificant after adjusting for marital status, desire for more children, time on ART, ability to refuse sex without condoms, and house floor material (aOR, 0.62; 95% CI, 0.27-1.44). Among both women and men, a large proportion (30.5% for women, 42.3% for men) was using condoms but not modern contraceptives (Table 6). In contrast, only 17.5% of women and 12.3% of men were using a modern contraceptive but not condoms. The remaining 25.5% of women and 22.5% of men were using neither condoms nor modern contraceptives.
DISCUSSION
In our HIV-infected population in Lilongwe, we found couple serodiscordance rates of 18.9% and 17.4% among women and men, respectively, and dual method use rates to be only 26.5% and 24.9%, respectively. In contrast, a study in Brazil found that 72.0% of HIV-infected women were using dual methods. 9 However, our findings are in line with studies done in South Africa, where rates of dual method use in the general population were below 30% despite high rates of HIV. 11,12 Serodiscordance did not have a major impact on dual method use among our study participants. Among those in a serodiscordant relationship, only 29.8% of women and 20.5% of men reported dual method use at last sex. This finding is concerning because at least 30% of new HIV-1 transmissions in Africa occur within stable serodiscordant partnerships, making HIV-1 serodiscordant couples one of the highest-risk populations for HIV-1 transmission and a key group for targeting HIV-1 prevention interventions. 13 Encouraging these couples to use dual methods needs to be a priority for curbing the virus in a region where more than a million new infections occur every year. 14 When we compared condom use with modern contraceptive method use among nondual method users in our sample, we found that more men and women used condoms alone than modern contraceptive methods alone. This finding is consistent with findings from studies done in Thailand, 7 Brazil, 9 and India. 15 Therefore, to get more men and women to use dual methods, health workers need to remember to promote both condom use for prevention of STIs and modern contraception use for improved pregnancy protection, particularly among condom-only users who may think they are already using an effective method of contraception considering that condoms alone are not as effective at pregnancy prevention.
Our couple serodiscordance rates are similar to those reported in Nigeria, 8 but lower than those reported in Brazil among HIV-infected women, where it was found to be 47.0%. 9 Similar to our study, studies from Kenya and Brazil also found that dual method utilization was negatively associated with desire for more children and positively associated with ability to refuse sex if the partner did not want to use a condom. 16 Of particular significance is that among serodiscordant relationships in our study, 37% of HIV-infected men and 18% of HIV-infected women said that they desired more children. Safer conception strategies have been developed for serodiscordant couples in low-resource settings where the risk of HIV transmission is high. Use of ART in itself by infected partners in serodiscordant relationships has already been shown to reduce viral load and the risk of HIV transmission to their uninfected partners. 17 Because adherence to ART is imperfect and genital shedding of HIV may occur even in the presence of suppressed plasma viral load, couples are advised to seek additional methods to reduce transmission risk. 18 For serodiscordant couples with a seronegative female, condomless intercourse limited to the ovulation window, use of preexposure prophylaxis by the female while the male is virally suppressed on ART, and STI treatment for both partners are feasible options to reduce the risk of HIV transmission while promoting safe conception in resource-limited settings. 19,20 For serodiscordant couples with a seronegative male, voluntary male medical circumcision, vaginal sperm insemination, STI treatment, and use of preexposure prophylaxis by the male have been shown to be safer and cost-effective methods for conception in resource-limited settings. [21][22][23][24][25] We therefore recommend that health care providers and policy makers also promote awareness and use of these modalities to better meet the reproductive health needs of serodiscordant couples, in addition to broadening access to ART, which is by far the most effective public health approach to prevent HIV transmission in low-resource settings. We also found some notable associations with education in our population. Among women, we did find that higher education was associated with dual method use, which is in agreement with findings from studies done in Brazil, Uganda, the United States, and China. 9,12,26,27 In contrast, among men, having a higher education was not associated with dual method use. Women with higher education levels may have had greater knowledge about the benefits of dual method use or be more empowered to negotiate it than women with lower education levels. However, men's education did not influence dual method use, potentially because women, rather than men, are the users of hormonal contraception.
Our study stands among a few that have analyzed dual method use in both HIV-infected men and women, particularly those in serodiscordant partnerships, a population at high risk of HIV transmission. Performing separate analyses for men and women allowed us to examine if serodiscordance had a different effect on dual method use among the two populations. However, the study participants' responses may have been affected by recall or social desirability bias. Because we did not interview couple dyads and the contraceptive and serodiscordance responses were all based on self-report, we could not verify responses. This may particularly be important in couples where women may perceive partner disapproval of contraception and thus not disclose method use. With that said, for all methods, other than sterilization, the proportions of use by method were similar by gender.
Another study limitation is that we conducted our study in an urban setting at a center of excellence for ART care, and our study population was relatively older, primarily monogamous, and on ART for longer than 1 year. Hence, our findings may not be generalizable to other populations, and there was likely selection bias among those who chose to participate in our study. In addition, due to the small sample size, we did not use prevalence ratios, because we anticipated having several predictors and, as a result, were concerned that we could experience problems with model convergence in the multivariable model, a common problem with risk models. We could have used prevalence rate ratios for the unadjusted estimates, but for the sake of consistent reporting, we thought it would be better to report odds ratios for both unadjusted and adjusted estimates.
Finally, we were unable to assess for effect measure modification for the association between serodiscordance and dual method use by viral suppression because information on the participant's most recent viral load was not collected in the study. Instead, we assessed if ART use was an effect modifier. In addition, we were unable to ascertain from our data why the participants or their partners did or did not use dual methods. Further studies need to be done to determine why HIV-infected men and women do not use dual methods and if viral suppression affects its use, particularly those who are in serodiscordant relationships.
In conclusion, less than 20% of our HIV-infected participants reported that they were in a known serodiscordant relationship at last sex, and less than 30% reported dual method use. Serodiscordance was not associated with dual method use at last sex among either men or women. Given the low rate of dual method use in this HIV-infected population, we recommend greater efforts to encourage HIV providers to counsel their patients about the importance of dual method use to prevent unintended pregnancy, STIs, and HIV transmission. | 2018-05-21T23:03:43.250Z | 2018-10-11T00:00:00.000 | {
"year": 2018,
"sha1": "56bce9fbf53d15a50746a32097fc83a966c07069",
"oa_license": "CCBYNCND",
"oa_url": "https://www.hiv.health.gov.mw/images/Documents/MALAWIFactsheet.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "9dbff42b7bf9a00c72858309fecf5757787de9ec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239132666 | pes2o/s2orc | v3-fos-license | An Ultra-Fast Power Prediction Method Based on Simplified LSSVM Hyperparameters Optimization for PV Power Smoothing
With existing power prediction algorithms, it is difficult to satisfy the requirements for prediction accuracy and time when PV output power fluctuates sharply within seconds, so this paper proposes a high-precision and ultra-fast PV power prediction algorithm. Firstly, in order to shorten the optimization time and improve the optimization accuracy, the Single-iteration Gray Wolf Optimization (SiGWO) method is used to simplify the iteration process for the hyperparameters of the Least Squares Support Vector Machine (LSSVM), and then a hybrid local search algorithm composed of Iterative Local Search (ILS) and Self-adaptive Differential Evolution (SaDE) is used to improve the accuracy of the hyperparameters, so as to achieve high-precision and ultra-fast PV power prediction. The power prediction model is established, and the proposed algorithm is applied in a test experiment in which it completes the power prediction within 3 s with an RMSE of only 0.44%. Finally, combined with the PV-storage advanced smoothing control strategy, it is verified that the performance of the proposed algorithm can satisfy the system's requirements for prediction accuracy and time under the condition of power mutation in a PV power generation system.
Introduction
In recent years, the penetration rate of renewable energy such as solar energy has increased [1]. PV power generation is easily affected by the environment, resulting in power fluctuations lasting for several seconds to several minutes [2], in which the maximum instantaneous power fluctuation rate can reach 75%/s [3], causing grid voltage and frequency flicker, which will reduce power quality and power supply reliability [4,5].
It has been proven that the PV power station equipped with energy storage can smooth the power fluctuation effectively [6,7]; especially, the energy storage system with high power density can effectively smooth the short-term and severe PV power fluctuation [8]. In Guo T., Liu Y., Zhao J., et al. [9], a new robust dynamic wavelet-enabled method is proposed, which can optimize the wavelet parameters adaptively and adjust the state of charge (SOC) and depth of charge or discharge of the hybrid energy storage system (HESS) composed of supercapacitors and batteries so as to smooth the fluctuations of the output power. In Sun Y., Tang X., Sun X., et al. [10], an improved low-pass filtering algorithm (ILFA) is proposed to optimize the power distribution of the battery and the supercapacitor, and it combines with the fuzzy control (FC) to smooth the power fluctuations based on the SC priority control strategy. In Lamsal, D., Sreeram, V., et al. [7], a fuzzy-based
LSSVM Algorithm and Hyperparameters
Compared with wavelet analysis and neural networks, the support vector machine (SVM) has obvious advantages in self-learning, self-adaptation, and non-linear mapping, but its training speed for quadratic programming problems is slow. The least squares support vector machine (LSSVM), proposed by Suykens [20] on the basis of SVM, adopts the principle of structural risk minimization and converts the optimization problem into a form similar to ridge regression, which reduces the difficulty of solving to a certain extent and improves the solution speed. According to the principle of structural risk minimization, the regression problem is transformed into an equality-constrained optimization problem:

min_{β,e} g(β, e) = (1/2) β^T β + (1/2) C Σ_{i=1}^{N} e_i²,  subject to  y_i = β^T φ(x_i) + b + e_i,  i = 1, ..., N

where e_i is the error variable, β is the hyperplane normal vector in the high-dimensional space, b is the offset, φ(x) is the non-linear mapping function, and C is the regular parameter used to balance the complexity of the model. The model of the LSSVM is as follows:

y(x) = Σ_{i=1}^{N} α_i K(x, x_i) + b

where α_i are the Lagrange multipliers and K(x, x_i) is the kernel function. This paper selects the RBF kernel as the kernel function, defined as a monotonic function of the Euclidean distance from any point x in space to a certain center x_c [21].
The kernel function K(x, x_i) is as follows:

K(x, x_c) = exp(−‖x − x_c‖² / (2σ²))

where x is any point in space, x_c is the center point in space, and σ is the width parameter of the kernel function. The regression performance of the LSSVM depends on the choice of the hyperparameters, namely the regular parameter C and the width parameter σ of the kernel function [22].
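As a concrete illustration of the LSSVM formulation above, the following minimal sketch fits an RBF-kernel LSSVM regressor by solving the standard LSSVM linear system directly. The toy data and parameter values are assumptions chosen only for demonstration; the paper's own experiments were run in MATLAB.

```python
# Minimal LSSVM regression sketch with an RBF kernel (toy data, not the paper's dataset).
import numpy as np

def rbf_kernel(A, B, sigma):
    # K(x, x') = exp(-||x - x'||^2 / (2 * sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, C, sigma):
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    # Solve [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # offset b, Lagrange multipliers alpha

def lssvm_predict(X_new, X_train, b, alpha, sigma):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Tiny usage example with synthetic 1-minute samples (assumed values).
X_train = np.arange(10, dtype=float).reshape(-1, 1)
y_train = np.sin(X_train).ravel()
b, alpha = lssvm_fit(X_train, y_train, C=100.0, sigma=1.5)
print(lssvm_predict(np.array([[10.0]]), X_train, b, alpha, sigma=1.5))
```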
Hyperparameters Optimization for the First Time
Mirjalili imitated the hunting process of the gray wolf and proposed the Gray Wolf Optimization (GWO) algorithm [23]. Guided by the optimization ideology of GWO, this paper proposes a Single-iteration Gray Wolf Optimization (SiGWO) to optimize the hyperparameters for the first time. In the GWO algorithm, the iteration process is the process of wolf α, wolf β, and wolf δ constantly approaching their prey. The distances D_α, D_β, and D_δ between the ω wolves and wolf α, wolf β, and wolf δ are continuously shortened by the iterative calculation to narrow the encircling radius, and the optimal hyperparameters are obtained.
The schematic diagram of the hunting process of the gray wolves is shown in Figure 1.
According to the single-iteration principle of the SiGWO, the gray wolves hunt for the prey (the optimal solution) only once and narrow the hunting radius to the range r shown in Figure 1. The approximate position of the prey (the optimal solution) is defined and recorded as the position solution X_pos. The above process is recorded as the first optimization of the hyperparameters.
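To make the single-iteration idea concrete, the sketch below performs one GWO-style update over candidate hyperparameter pairs (C, σ): the three best candidates act as wolves α, β and δ, every other wolf takes a single step toward them, and the best resulting position is kept as X_pos. The fitness function, bounds and control parameter a are placeholders, and the update equations are the ones commonly used for GWO rather than the paper's exact settings.

```python
# Minimal sketch of a single-iteration GWO step over (C, sigma) candidates.
# fitness() is a placeholder, e.g. LSSVM validation error for a given hyperparameter pair.
import numpy as np

rng = np.random.default_rng(0)

def single_iteration_gwo(fitness, lb, ub, n_wolves=20, a=2.0):
    dim = len(lb)
    pack = lb + rng.random((n_wolves, dim)) * (ub - lb)       # initial positions
    scores = np.array([fitness(w) for w in pack])
    alpha, beta, delta = pack[np.argsort(scores)[:3]]          # three best wolves

    for i in range(n_wolves):                                  # one hunting step only
        new_pos = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A = 2 * a * r1 - a
            Cc = 2 * r2
            D = np.abs(Cc * leader - pack[i])
            new_pos += (leader - A * D) / 3.0                  # average of the three pulls
        pack[i] = np.clip(new_pos, lb, ub)

    scores = np.array([fitness(w) for w in pack])
    return pack[np.argmin(scores)]                             # X_pos: approximate optimum

# Example call with a placeholder fitness (sum of squares stands in for validation error).
x_pos = single_iteration_gwo(lambda w: np.sum(w ** 2),
                             lb=np.array([0.1, 0.01]), ub=np.array([1000.0, 10.0]))
print(x_pos)
```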
Chaos Initialization
Chaos is a widespread phenomenon in nonlinear systems, and chaotic mapping instead of a traditional probability distribution is used to initialize the population, which can enhance the traversal and uniformity of the population [24]. The cube map, which has better uniformity, is chosen to complete the initialization of the gray wolf pack. The formula for the cube map is as follows:

y(n + 1) = 4y(n)³ − 3y(n),  |y(n)| ≤ 1,  n = 0, 1, 2, ...

where y(n) is the chaos number generated by chaos initialization and n is the size of the gray wolf population.
Chaos is a complex system with unpredictable behavior, and mapping associates the chaotic behavior with a parameter through a function [24]. In the proposed algorithm, the original pseudo-random numbers are replaced by chaotic numbers, and the position is calculated as

Position = lb + (y(n) + 1)(ub − lb) / 2

where Position is the initialization position and ub and lb are the upper and lower bounds of the parameter value, respectively. This paper sums up the detailed introduction of the SiGWO algorithm and chaos initialization above and records the specific process of the algorithm in the form of pseudo code, as shown in Figure 2.
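A minimal sketch of this chaos-based initialization is given below, assuming the cube map on [−1, 1] and a linear mapping of the chaotic sequence onto the hyperparameter bounds [lb, ub]; the exact mapping used in the paper may differ.

```python
# Minimal sketch: cube-map chaotic sequence mapped onto hyperparameter bounds.
import numpy as np

def cube_map_sequence(n, y0=0.3):
    # y(n+1) = 4*y(n)^3 - 3*y(n), which keeps y within [-1, 1]
    ys = np.empty(n)
    y = y0
    for i in range(n):
        ys[i] = y
        y = 4 * y ** 3 - 3 * y
    return ys

def chaos_initialize(n_wolves, lb, ub, seed=0.3):
    dim = len(lb)
    ys = cube_map_sequence(n_wolves * dim, y0=seed).reshape(n_wolves, dim)
    # Map chaotic values from [-1, 1] onto [lb, ub] (assumed linear mapping).
    return lb + (ys + 1.0) / 2.0 * (ub - lb)

positions = chaos_initialize(20, lb=np.array([0.1, 0.01]), ub=np.array([1000.0, 10.0]))
print(positions[:3])
```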
Hyperparameters Accuracy Optimized by Hybrid Local Search
A hybrid local search is introduced to improve the accuracy of the hyperparameters, and X_pos, which has undergone chaos initialization, is used as the initial solution for the hybrid local search.
(1) Preliminary optimization of accuracy. Iterative local search (ILS) builds on the common characteristics of good solutions by adding a local disturbance to the existing position solution X_pos [25], using the Griewank function as the perturbation function [26], so that the existing solution is disturbed, jumps out of the local optimum, and a new, better position solution better_pos is found.
The formula of the Griewank function is as follows:

G = 1 + (1/4000) Σ_i x_i² − Π_i cos(x_i / √i)

where X_pos is the position solution obtained by the SiGWO algorithm, randn is a stochastic number between [−1, 1], x_i is the stochastic product randn · X_pos, and G is the disturbance output.
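The ILS step can be sketched as follows, using the standard Griewank form applied to randomly scaled copies of X_pos. The scaling of the disturbance is an assumption chosen only to illustrate the "perturb, re-evaluate, keep the better solution" loop, not the paper's exact perturbation.

```python
# Minimal sketch of the ILS step: perturb X_pos with a Griewank-based disturbance
# and keep a perturbed point only if its fitness improves.
import numpy as np

rng = np.random.default_rng(1)

def griewank(x):
    i = np.arange(1, len(x) + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def ils_step(fitness, x_pos, lb, ub, n_trials=30, step=0.05):
    best, best_score = x_pos.copy(), fitness(x_pos)
    for _ in range(n_trials):
        randn = rng.uniform(-1.0, 1.0, size=x_pos.shape)    # stochastic factors in [-1, 1]
        g = griewank(randn * x_pos)                           # disturbance magnitude (assumed scaling)
        candidate = np.clip(x_pos + step * g * randn * (ub - lb), lb, ub)
        score = fitness(candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best                                               # better_pos

better_pos = ils_step(lambda w: np.sum(w ** 2), x_pos=np.array([500.0, 5.0]),
                      lb=np.array([0.1, 0.01]), ub=np.array([1000.0, 10.0]))
print(better_pos)
```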
(2) Re-optimization of accuracy. In order to find the optimal hyperparameters, a Self-adaptive Differential Evolution algorithm (SaDE) is introduced. The second local search is used to confirm the position information of wolf α in the optimal population so as to obtain the optimal hyperparameters. The global or local search ability of the algorithm is affected by the definition of the variation factor F [27]. Therefore, the variation factor F of the SaDE algorithm is defined adaptively in terms of the initial variation factor F_0, the index i of the current iteration, and the maximum number of iterations Max_iteration. The mutation strategy of the SaDE algorithm is built from the adaptive parameter dd, the better position solution better_pos, the random vectors N_i2(t) and N_i3(t), and the position of the optimal solution best_pos. Based on the content above, the pseudo code of the hybrid local search algorithm is shown in Figure 3. Under the guidance of the principle of one-to-one correspondence between the population fitness and the position, the optimal position is determined by searching for the minimum population fitness. The information of the optimal hyperparameters is contained in the optimal position.
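The second local search can be sketched with a standard differential-evolution update, as below. The adaptive factor F follows a commonly used exponential-decay scheme and the "best/1" mutation is only a stand-in for the paper's SaDE mutation strategy, whose exact form (the parameter dd and the vectors N_i2, N_i3) is not reproduced here.

```python
# Minimal DE-style refinement sketch around better_pos (stand-in for the paper's SaDE step).
import numpy as np

rng = np.random.default_rng(2)

def sade_refine(fitness, better_pos, lb, ub, pop_size=15, max_iter=20, f0=0.5, cr=0.9):
    dim = len(better_pos)
    pop = np.clip(better_pos + 0.1 * (ub - lb) * rng.standard_normal((pop_size, dim)), lb, ub)
    scores = np.array([fitness(p) for p in pop])
    best = pop[np.argmin(scores)].copy()
    for it in range(1, max_iter + 1):
        F = f0 * 2.0 ** np.exp(1.0 - max_iter / (max_iter + 1.0 - it))   # assumed adaptive factor
        for i in range(pop_size):
            r2, r3 = rng.choice(pop_size, size=2, replace=False)
            mutant = np.clip(best + F * (pop[r2] - pop[r3]), lb, ub)      # DE/best/1 mutation
            cross = rng.random(dim) < cr                                  # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            if fitness(trial) < scores[i]:
                pop[i], scores[i] = trial, fitness(trial)
        best = pop[np.argmin(scores)].copy()
    return best                                                            # best_pos: optimal (C, sigma)

best_pos = sade_refine(lambda w: np.sum(w ** 2), better_pos=np.array([400.0, 4.0]),
                       lb=np.array([0.1, 0.01]), ub=np.array([1000.0, 10.0]))
print(best_pos)
```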
The main flow chart of the proposed algorithm is shown in Figure 4.
Data Collection
Existing power prediction methods are based on solar radiation intensity, historical power, and meteorological factors and complete the power prediction through statistical prediction or intelligent algorithms. However, these methods cannot adequately capture the power fluctuation characteristics, and the accuracy of the prediction is impacted by over-reliance on Numerical Weather Prediction (NWP), which is low-precision and high-cost [28].
This paper divides the historical output power of a PV power station in Shenzhen into 1-min intervals. The sampling power is updated based on the cyclic forecasting idea, adding the latest measured data and eliminating the oldest measured data. The basic weather conditions of the selected sample data are as follows: the temperature at the time of sample collection is 24-27 °C, cloudy, with a level-3 northeast wind.
Data Classification and Normalization
This paper selects the six-hour historical output power of the PV power as a sample, the last half-hour output power as the test data, and the rest as the training data.
The speed of convergence and the accuracy will be improved because the sample data are normalized. The min-max standardization method is selected to linearly transform the original data, so that the result value x* is mapped to [0, 1].
The conversion function is as follows:

x* = (x − x_min) / (x_max − x_min)

where x_max and x_min are the maximum and minimum values in the sample data, respectively, and x* is the normalized value.
Predictive Evaluation Index
A simulation is built to record the prediction time and the accuracy of the predicted power at the same time. The mean absolute percentage error (MAPE) and root mean square error (RMSE) are used to evaluate the accuracy of the predicted power.
MAPE = (100%/N) Σ_{i=1}^{N} |(y_i − ŷ_i) / y_i|

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)² )

where N is the number of training or test samples, y_i is the actual value, and ŷ_i is the predicted value.
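The preprocessing and evaluation steps above are straightforward to express in code; the sketch below assumes the power series is held in NumPy arrays, with illustrative values only.

```python
# Min-max normalization of the power samples and the MAPE / RMSE evaluation indices.
import numpy as np

def min_max_normalize(x):
    return (x - x.min()) / (x.max() - x.min())          # maps samples onto [0, 1]

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Example with assumed 1-minute power values (kW).
actual = np.array([48.0, 52.5, 60.1, 55.3])
predicted = np.array([47.2, 53.0, 59.4, 56.0])
print(f"MAPE = {mape(actual, predicted):.2f}%  RMSE = {rmse(actual, predicted):.3f} kW")
```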
Simulation Verification
Several existing high-precision power prediction algorithms such as QPSO-LSSVM, SaDE-GWO-LSSVM, and ABC-LSSVM are chosen as the control group, compared with the proposed algorithm, and run under the same sample data. MATLAB is used to simulate the above algorithm, and the prediction algorithm is evaluated from the two aspects of prediction accuracy and time.
In order to reduce the effect of prediction randomness, the average power after 20 times of prediction in the same period is selected as the final prediction power.
The fitting curve of the predicted power obtained by the power prediction algorithms and the actual power is shown in Figure 5. As shown in Figure 5, the degree of fitting between the predicted power and the actual power is at a high level, and the degree of fitting of the power curve is positively correlated with the accuracy. By comparing the deviation degree of each predicted power, we can gain the following prediction accuracy results: QPSO-LSSVM > HLSGWO-LSSVM > ABC-LSSVM > SaDE-GWO-LSSVM.
According to the definition of the power prediction evaluation index, this paper selects the RMSE to characterize the prediction error. Summing up the above, the error-time comparison chart is shown in Figure 6. As can be seen from Figure 6, the proposed algorithm can complete the power prediction within 3 s and greatly shortens the time required for power prediction. In this case, the sampling interval could be further shortened, and the sensitivity of the smoothing system to power fluctuations over an ultra-short period can be enhanced.
Understanding the occupancy of the power prediction in PV power generation systems is of great significance to the internal resource allocation of PV power generation systems. The paper uses AIDA64 software to monitor the computer CPU occupancy rate of each power prediction algorithm when it works. A histogram is used to represent the total occupancy rate of each power prediction algorithm in the PV power generation system, as shown in Figure 7.
Comprehensive Analysis of Predictive Power
The RMSE, which is sensitive to the abnormal values, is selected as the main evaluation index of the prediction error, and the RMSE between the predicted power and the actual power is calculated every minute in the future. The distribution diagram of the prediction error (RMSE) at each time point of power prediction is shown in Figure 8. The total occupancy rate of the prediction algorithm is the sum of the system occupancy rate in each time period when the prediction algorithm is running. The higher its value, the higher the performance requirements of the CPU, and the high total occupancy rate will affect the operation of other parts in the PV power generation system.
Comprehensive Analysis of Predictive Power
The RMSE, which is sensitive to the abnormal values, is selected as the main evaluation index of the prediction error, and the RMSE between the predicted power and the actual power is calculated every minute in the future. The distribution diagram of the prediction error (RMSE) at each time point of power prediction is shown in Figure 8.
In order to verify the universality of this discovery, it needs to be verified later. After verification, it is found that the power prediction curve of the same PV power station at different times or under different weather conditions can show a high fit with the actual power curve after translation change, and the translation range is relatively stable and fluctuates in a small range. Therefore, the above predicted power can be translated and calculated, and the fitting diagram of the translated power prediction curve can be drawn in Figure 9. As shown in Figure 8, by analyzing the RMSE index of each power prediction value, the prediction error RMSE curve of the proposed algorithm changes most gently. By comparing Figures 5 and 8, the power prediction curve of the proposed algorithm can be regarded as the translation of the actual power curve.
In order to verify the universality of this observation, further verification was carried out. After verification, it is found that the power prediction curve of the same PV power station at different times or under different weather conditions shows a high degree of fit with the actual power curve after a translation, and the translation range is relatively stable, fluctuating within a small range. Therefore, the predicted power above can be translated, and the fitting diagram of the translated power prediction curve is drawn in Figure 9. The new power prediction fitting curves after translation are summarized in Table 1. According to the comprehensive analysis of Table 1 and Figure 9, the proposed algorithm gives the best fit between the predicted power curve and the actual power curve, which greatly improves the accuracy of power prediction.
PV Power Generation System Equipped with HESS
The HESS, which is composed of energy storage batteries and supercapacitors, is selected to perform the power smoothing. The HESS combines the advantages of both, offering high energy density and high power density at the same time, which ensures that the energy storage can smooth the power fluctuations efficiently and quickly.
The schematic diagram of the PV power generation system equipped with HESS is shown in Figure 10. All parts in Figure 10 obey the law of conservation of energy, which can be summarized as the following formula:

P_pv + P_HESS = P_Grid (12)

where P_pv is the output power of the PV array, P_HESS is the charge or discharge power of the HESS, and P_Grid is the grid-connected power.
Related Parameter Settings
It is assumed that the energy storage system in this paper is an ideal energy storage, meaning that the capacity is sufficient to satisfy the requirements for power smoothing.
(1) Sampling interval T_0. PV power generation is a continuous process; as long as the power generation conditions are met, electric energy can be generated in real time. It is assumed that the power of PV generation during T_0 is a constant value.
(2) Power fluctuation rate ∆P. This is the output power difference between adjacent power sampling points divided by the time between them, where t is the tth sampling point after the prediction time, P(t) is the predicted power at time t, and ∆P(t) is the power fluctuation rate at time t.
(3) Target volatility D_et. D_et is a parameter that reflects the power grid's frequency modulation capability; it is the upper bound of the grid-connected power fluctuation allowed in the guidelines [29], which the grid-connected PV power needs to satisfy.
The Design of PV-Storage Advanced Smoothing Control Strategy
The charge or discharge action of the HESS is decided by comparing the fluctuation rate of the predicted power with the target volatility D_et. When the predicted power fluctuation rate is greater than D_et, the HESS will charge or discharge; otherwise, no action is taken [30].
The specific control strategy is given by Equation (15), where P_HESS(t) is the charge or discharge power of the HESS at time t.
The energy exchange can be completed in advance according to the predicted power, so as to ensure that the energy storage system can deliver the required charge or discharge.
The flow chart of PV-storage advanced smooth control is shown in Figure 11.
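Since Equation (15) is not reproduced in this copy, the following is only a plausible sketch of the control logic described in words above: the HESS acts only when the predicted ramp exceeds the target volatility D_et, and it charges or discharges just enough to keep the grid-connected ramp within that limit, consistent with the power balance P_pv + P_HESS = P_Grid of Equation (12). Function and variable names, and the choice of referencing the ramp to the last grid-connected value, are illustrative assumptions.

```python
import numpy as np

def advanced_smoothing(predicted_power, d_et, t0=1.0):
    """Hypothetical PV-storage advanced smoothing sketch (not the paper's Eq. (15)).

    predicted_power : predicted PV power at each sampling point (kW)
    d_et            : target volatility, maximum allowed ramp (kW per minute)
    t0              : sampling interval (min)
    Returns (grid_power, hess_power); hess_power > 0 means the HESS discharges.
    """
    grid = np.empty_like(predicted_power, dtype=float)
    hess = np.zeros_like(predicted_power, dtype=float)
    grid[0] = predicted_power[0]
    for t in range(1, len(predicted_power)):
        ramp = predicted_power[t] - grid[t - 1]          # predicted fluctuation
        if abs(ramp) > d_et * t0:                        # violates the target volatility
            limited = grid[t - 1] + np.sign(ramp) * d_et * t0
            hess[t] = limited - predicted_power[t]       # P_HESS = P_Grid - P_pv
            grid[t] = limited
        else:
            grid[t] = predicted_power[t]
    return grid, hess

if __name__ == "__main__":
    p = np.array([50, 52, 60, 41, 43, 44, 58, 57], dtype=float)  # synthetic kW values
    grid, hess = advanced_smoothing(p, d_et=2.0)                 # 2 kW/min target
    print(np.round(grid, 1), np.round(hess, 1))
```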
The Verification of Power Smoothing Simulation
The predicted power is obtained from the power prediction algorithm. The HESS follows the advanced smoothing control strategy to charge or discharge, in order to compensate for the difference between the predicted power and the grid-connected target value.
In addition, according to China's State Grid Enterprise Standard Q/GDW617-2011, "Technical Regulations for Connecting Photovoltaic Power Stations to the Grid", the maximum active power change of a small PV power station within 1 min is limited to 0.2 MW. Therefore, this paper sets the target volatility D_et to 2 kW/min.
A schematic diagram of power smoothing based on predicted power is shown in Figure 12.
As shown in Figure 12, power smoothing based on the predicted power can effectively solve the problem of sharp fluctuations within seconds. Combining Figure 8 with Table 1, the relationship between the power smoothing performance and the power prediction accuracy shows that the storage-based power smoothing performs well when it is guided by high-precision predicted power. A slight power fluctuation, such as the one at the 14th sampling time point, can be smoothed well by the HESS, and the smoothness of the grid-connected power can be improved, in order to guarantee the power quality of the whole power system.
The charge or discharge power of the HESS is shown in Figure 13. As can be seen from Figures 12 and 13, the grid-connected power under the guidance of high-precision power prediction has a higher degree of fit with the actual power.
The power required for a single power smoothing action is reduced by 4.5% to 5%. At the same time, power smoothing guided by the proposed algorithm reduces the capacity requirements of the energy storage equipment.
Conclusions
This paper proposes a high-precision and ultra-fast PV power prediction algorithm, addressing the difficulty existing power prediction algorithms have in simultaneously satisfying the requirements for prediction accuracy and computation time when the PV output power fluctuates sharply within seconds.
Compared with existing power prediction algorithms, the proposed algorithm can complete power prediction within 3 s, greatly reducing the time required for power prediction. According to the predicted power error distribution, the RMSE of the optimized proposed algorithm is only 0.44%.
The proposed algorithm is applied to the PV-storage advanced smoothing control to show that it can effectively guide the smoothing performed by the HESS and ensures the smoothness of the grid-connected power. In addition, the proposed algorithm can reduce the requirements on the energy storage capacity to a certain degree.
Patents
| 2021-10-20T16:17:54.318Z | 2021-09-13T00:00:00.000 | {
"year": 2021,
"sha1": "3fe43589f796c619c834fe89838fca88184b5e5d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/en14185752",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "026dcecbfc73c8d02b2950372dba9567463a1779",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
44066598 | pes2o/s2orc | v3-fos-license | Structure Learning from Time Series with False Discovery Control
We consider the Granger causal structure learning problem from time series data. Granger causal algorithms predict a 'Granger causal effect' between two variables by testing if prediction error of one decreases significantly in the absence of the other variable among the predictor covariates. Almost all existing Granger causal algorithms condition on a large number of variables (all but two variables) to test for effects between a pair of variables. We propose a new structure learning algorithm called MMPC-p inspired by the well known MMHC algorithm for non-time series data. We show that under some assumptions, the algorithm provides false discovery rate control. The algorithm is sound and complete when given access to perfect directed information testing oracles. We also outline a novel tester for the linear Gaussian case. We show through our extensive experiments that the MMPC-p algorithm scales to larger problems and has improved statistical power compared to existing state of the art for large sparse graphs. We also apply our algorithm on a global development dataset and validate our findings with subject matter experts.
I. INTRODUCTION
Decisions to set policies and conduct interventions ought to be supported by evidence and scenario analysis. Supporting decision making in this way, however, is difficult since it is usually not possible to conduct randomized controlled trials over policy decisions such as a country's investment in sanitation or primary education and see the effect on indicators such as number of patents produced or tuberculosis mortality rate. Understanding the causal structure of the many different variables, indicators, and possible interventions relevant to such decisions is challenging because of their intricate interdependencies and the small cardinality of noisy samples coupled with a large number of variables. One approach for developing such understanding is through painstaking theorization and validation of small sets of variables over many years [1]. An alternative that we focus on in this paper is to estimate the structure of a Bayesian network from an observed time series.
Existing approaches for the problem of structure learning from time series observations include the Granger Causality (GC) algorithm [2]. This has been recently formalized in terms of directed information graphs [3], [4] as a Bayesian network structure recovery problem on time series. The GC approach has good statistical properties because it conditions on all other variables to isolate the pair in question [5]. However, with finite samples and very large number of variables, the statistical power of the algorithm significantly reduces due to large conditioning sets. Inspired by the Peter and Clark (PC) algorithm for causal discovery for non-time series data, recently proposed variants perform tests that condition on a reduced subset, beginning with a complete graph and pruning it with pairwise tests [6]; this approach yields many false positives while also having scaling issues.
In this paper, we propose a new algorithm, MMPC-p, that is scalable and has provably strong p-value control to prevent false discoveries using techniques from [7], [8]. This proposed algorithm is inspired by the MMHC algorithm for causal structure learning in non-time series observational settings [9]. It begins with an empty graph, adds edges to form candidate parent sets, and subsequently prunes them in a two-phase approach. We show it to be sound, complete and equip it with false discovery rate (FDR) control under assumptions we describe in the sequel.
The proposed MMPC-p algorithm relies on some form of independency testing on pairs of random processes. Due to autocorrelations across time, we cannot use conditional independence tests directly. We consider two testers based on directed information, an information-theoretic measure of predictive information flow between processes for linear Gaussian models. The first is a naïve directed information test that ignores correlations across time but requires fewer samples. The second computes directed information in a more principled way but requires a greater number of samples. We conduct a detailed comparison study of GC, PC, and MMPC-p on generated data and find MMPC-p to have better scaling and error control empirically. From our synthetic experiments, we find that MMPC-p has higher statistical power for sparse large graphs than the alternatives: GC and PC. We also apply MMPC-p to a real-world data set of global development indicators from 186 countries over more than fifty years and compare the learned causal relationships to the validated relationships in the International Futures (IFs) program [1]. There are systematic differences in the two sets of relationships that we detail later, but the ones found by the proposed algorithm have some validity from the policy expert perspective.
The main contributions of this work are: (1) an MMPC-p algorithm for time series data inspired by the MMHC algorithm, (2) a method to control false discoveries with our approach under weak assumptions on Type II error, (3) exhaustive experiments comparing the performance of MMPC-p with the modified PC and modified GC algorithms [5] (we show that MMPC-p performs well for large sparse graphs in terms of both omission and commission errors), and (4) a case study on a global development dataset with input from subject matter experts.
II. RELATED WORK
Most recent work in causal structure learning has focused on issues related to undersampling. In [10], [11], causal time scale structures are learned from subsampled measurement time scale graphs and data. A variant of this was studied in [12], where the authors address the issue of causal structure learning from temporally aggregated time series. We consider the learning problem at measurement time scales only. Recent work [5] has drawn attention to the need for evaluating algorithms on the measurement time scale problem, which is used by some of the algorithms that deal with under sampling. Regression-based methods have also been used for estimating the causal impact [13], which is quantified as the counterfactual response in a synthetic control setting. In contrast, here we focus on the structure learning aspect of the problem. In [6], the PC algorithm is extended for time series under the assumption that the measurement and time scales were approximately equal. Another variant called modified PC has been presented in [5]. We actually compare our results to this variant in the empirical section. In [2], the authors present techniques for estimating multivariate Granger causality from time series data, both unconditional and conditional, in the time and frequency domains. In contrast, [14] explores combining graphical modeling with Granger causality to address climate change problems. These papers use the Granger causal algorithm that conditions on all the variables but the pair of variables in questions. We actually compare to a variant of these approaches as described in [5]. More recently, the importance of FDR control is being emphasized in causal structure learning problems. An approach for Bayesian network structure learning with FDR control was presented in [8]. In [15], the authors present p-value estimates for high-dimensional Granger causal inference under assumptions of sub-Gaussianity on the coefficients. PC-p [7] is an extension of PC which computes edge-specific p-values and controls the FDR across edges. In contrast to these algorithms, our MMPC-p is for time series, works under general assumptions and is inspired by the MMHC algorithm. We also formally prove FDR control guarantees and back it up with results in our empirical section.
III. FORMAL PROBLEM DEFINITION AND PRELIMINARIES
Notation. Consider m random processes over time slots 0 . . . T, and let X_i denote the ith random process, with X_i^(t0,t1) denoting the ith random process from time t_0 to t_1. Consider a subset of random processes A ⊂ [1 : m]. Then, the random variables of all the processes in A from time t_0 to t_1 are denoted X_A^(t0,t1). X_{A,t} denotes the random variables belonging to the set of processes A at time t. Let X_A and X_A^(t) denote the quantities analogous to the single random process case as described above.
We will primarily consider random processes that take values in a finite alphabet. This is to simplify the presentation, avoiding all measure-theoretic issues. The experiments, however, are performed with respect to real-valued random processes. We make the following assumptions.
Assumption 1 (Strict Causality). The m random processes follow order-1 Markov dynamics in which each variable at time t depends only on the values of all processes at time t − 1. This supposes that there are no instantaneous interactions in the system conditioned on the past, i.e. the system is strictly causal.
Following [3], we denote the causal conditioning over time in (1) by P(· || ·); this notation subsumes the recursive causal conditioning over time in (1).
Assumption 2 (Causal Sufficiency). There are no hidden confounders and all variables are measured. This is a sort of 'faithfulness assumption': the data does not exhibit any near-deterministic relationships. We review relevant results from [3] and [4] under the above assumptions. Definition 1. Causally conditioned directed information from random process X i to X j conditioned on the random processes in the set A is given by: Here, I(·; ·|·) represents the standard conditional mutual information measure in information theory.
In other words, it is the time average of the mutual information between process j until time t − 1 and process i at time t given the past of processes in i ∪ A until time t − 1. It is related to Granger causality, signifying the reduction in prediction loss that process j until t − 1 gives over and above the processes in i ∪ A until time t − 1. The notion is exact for prediction under log loss. However, [16] presents arguments as to why the log loss is the correct metric for measuring value of extra side information in prediction as only this measure satisfies a data-processing axiom.
This is a graph where every node is a random process. We interchangeably use i and X_i when talking about nodes in the graph G. Let Pa(i) = {j : (j, i) ∈ E} be the set of directed parents of node i in the DI graph G, and let Ch(i) = {j : (i, j) ∈ E} be the set of children of i. Theorem ([4]). Let Pa(i) be the set of directed parents according to the DI graph. Then, if the positivity condition holds for all m random processes over time and if the system is strictly causal, then almost surely: Corollary 1 (Local Causal Markov Property [3], [4]). When the system of m random processes satisfies the positivity constraint and satisfies strict causality:
IV. ALGORITHM: MMPC-P
Inspired by the MMHC algorithm for observational causal discovery [9] with i.i.d. data, we introduce an adaptation called the MMPC-p algorithm (max-min parents) for Granger causality. The MMPC-p algorithm uses a DI Tester as an oracle instead of a Conditional Independence (CI) Tester. We will prove an upper bound on the p-values of the edges obtained and show that p-value control is possible in this case under some weak assumptions.
DI Testing Oracle DI(i, j, A): This DI testing function outputs the probability (or p-value) of the event I(X_i → X_j || X_A) = 0 for any A ⊂ [1 : m] given the dataset. We will first assume access to such a p-value-returning oracle in order to specify our MMPC-p algorithm.
Let us assume we have a measure of association between a process and a target, and define the max-min association and argmax-min association functions in terms of it. We now describe the MMPC-p algorithm presented in Algorithm 1. It consists of two phases: the first phase picks candidate parents while the second prunes the list of candidate parents picked in the first phase.
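Because the formal definitions of the association measure and Algorithm 1 are not fully legible in this copy, the sketch below is only a schematic reconstruction of the two-phase idea, written around a generic callable di_pvalue(i, j, A) that returns a p-value for the null I(X_i → X_j || X_A) = 0. The bookkeeping, the use of the worst (largest) p-value as the association score, and the conditioning-set cap are illustrative assumptions, not the authors' exact Algorithm 1.

```python
from itertools import combinations

def mmpc_p_target(target, variables, di_pvalue, alpha=0.05, max_cond=3):
    """Schematic two-phase candidate-parent search for a single target process."""
    others = [v for v in variables if v != target]

    def worst_pvalue(v, conditioning_pool):
        # largest p-value of v -> target over subsets of the current candidates
        worst = 0.0
        for k in range(min(len(conditioning_pool), max_cond) + 1):
            for subset in combinations(conditioning_pool, k):
                worst = max(worst, di_pvalue(v, target, list(subset)))
        return worst

    # Phase I: grow the candidate-parent set by max-min association
    cp, remaining = [], list(others)
    while remaining:
        scores = {v: worst_pvalue(v, cp) for v in remaining}
        best = min(scores, key=scores.get)      # strongest min-association
        if scores[best] >= alpha:               # association vanishes, stop growing
            break
        cp.append(best)
        remaining.remove(best)

    # Phase II: prune candidates and record an edge-level p-value for FDR control
    edge_p = {}
    for v in list(cp):
        p = worst_pvalue(v, [u for u in cp if u != v])
        if p >= alpha:
            cp.remove(v)
        else:
            edge_p[(v, target)] = p
    return cp, edge_p
```

With a perfect oracle (p-value 1 whenever the DI is zero and 0 otherwise), and a cap large enough to cover Pa(target), this sketch keeps exactly the true parents, mirroring the sound-and-complete behaviour claimed later in Theorem 3.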
Assumption 5. If I(X_i → X_j || X_A) > 0, then DI(i, j; A) < α for the α used in Algorithm 1.
The above says that Type II errors are small. Related to the faithfulness assumption, it means there are no very weak dependencies in the system. Similar assumptions have been made for p-value control for causal inference with i.i.d. data [7]. Lemma 1. Type II error less than α (Assumption 5) implies that Pa(j) ⊆ CP(j) after Phase I of MMPC-p. Proof. We present a proof by contradiction. Suppose v ∈ Pa(j); then I(v → j|S) > 0 for all S. This implies that DI(v, j; S) < α for all S by Assumption 5. Suppose v is not included in CP(j) and Phase I of MMPC-p completes. This implies that when looking at v, there exists a subset S such that Assoc_α(v → T|S) = 0. This implies that DI(v, j; S) ≥ α for that subset S, yielding a contradiction. Therefore, node v will be included in CP(j) at the end of Phase I of MMPC-p.
Lemma 2 ([7]). Consider m CI testers and the following null and alternative hypotheses: H_0: at least one CI oracle outputs independent; H_1: all CI oracles output dependent. Assuming that the ith CI oracle outputs independently of all other oracles, we can bound the p-value in (5) as p ≤ max_{j=1,...,m} p_j.
Theorem 2. For all the edges A → T that finally remain after Phase II of MMPC-p, max(p-value, α) ≤ P(A → T).
Proof. After completion of Phase I, we wish to test whether the edge is present by conducting independence tests. We construct a hypothesis test with the following null and alternative: where T represents the target node. According to Lemma 1 and referring to lines 9 to 14 of Algorithm 1, the p-value for parents for a given target T will always be less than α, and would never be dropped.
Since all parents are in CP, testing for H_1 in (6) is equivalent to testing H_1 in (5). Similarly, testing for H_0 in (6) is equivalent to testing H_0 in (5). Hence, the hypothesis test defined in (6) is equivalent to the hypothesis test defined in (5), and our claim follows. FDR Control: We define FDR_BY(β), where R is the number of edges retained at the end of Phase II of MMPC-p, and let β* satisfy the corresponding threshold condition. Given a target false positive rate q, deleting directed edges A → T whose P(A → T) > β* ensures consistent FDR control provided β* ≤ β in Algorithm 1.
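The FDR_BY expression and the defining condition for β* are not legible in this copy. Assuming the subscript refers to the standard Benjamini-Yekutieli procedure applied to the p-values of the R retained edges, the thresholding step can be sketched as follows; this is an illustration under that assumption, not the authors' exact formula.

```python
import numpy as np

def benjamini_yekutieli_threshold(pvalues, q=0.05):
    """Return the rejection threshold beta* for the retained edge p-values.

    Edges with p-value <= beta* are kept; this is the standard Benjamini-Yekutieli
    step-up rule, which controls the FDR at level q under arbitrary dependence.
    """
    p = np.sort(np.asarray(pvalues, dtype=float))
    m = p.size
    c_m = np.sum(1.0 / np.arange(1, m + 1))           # harmonic correction factor
    crit = np.arange(1, m + 1) * q / (m * c_m)        # k * q / (m * c(m))
    below = np.nonzero(p <= crit)[0]
    return p[below[-1]] if below.size else 0.0

if __name__ == "__main__":
    edge_p = [0.001, 0.004, 0.03, 0.2, 0.6]           # hypothetical edge p-values
    beta_star = benjamini_yekutieli_threshold(edge_p, q=0.05)
    kept = [p for p in edge_p if p <= beta_star]
    print(beta_star, kept)                             # here only the two smallest survive
```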
Theorem 3. The MMPC-p algorithm with a perfect DI oracle is sound and complete.
Proof. A perfect DI oracle means that DI(i, j; F) = 1 if I(X_i → X_j || X_F) = 0 and DI(i, j; F) = 0 if I(X_i → X_j || X_F) > 0.
With this strong assumption and Lemma 1, for any node j, Pa(j) ⊆ CP (j) after the first phase of MMPC-p.
Next, we show that if i is not in Pa(j) then i ∉ CP(j) after the second phase. The reason is that one of the subsets of CP(j) after the first phase has to equal Pa(j) (as shown in the previous paragraph). Suppose i ∉ Pa(j); then we know that I(X_i → X_j || Pa(j)) = 0. Therefore, DI(i, j; Pa(j)) = 1. This means that for any α > 0, assoc_Y = 0 at Line 8 when Y = i, if i ∈ CP(j) after Phase I. This would cause i to be discarded in the second phase.
Remark. This algorithm is much simpler than the one it is inspired by: MMHC. The definition of DI and the role of time in its computation simplify the algorithm and its proof. Furthermore, we do not have problems of "descendants" staying after the two phases of the algorithm (there is a second part of the MMHC algorithm in the original paper where pairs were only considered if i ∈ CP (j) and j ∈ CP (i), that is not required here). However, the algorithm still retains the robustness of MMHC.
V. DI TESTERS FOR LINEAR MODELS AND GAUSSIAN PROCESSES
Let the scalar variable X follow a memory-1 autoregressive linear model with i.i.d. Gaussian noise given by X(t + 1) = Φ(t)X(t) + ξ(t), where ξ(t) ∼ N(0, σ²) and ξ(t) is independent across time. This generalizes to a set of random variables with an underlying Granger causal graph (the DI graph) in the sense of Section III: given the DI graph, for variable i there is a set Pa(i) (that does not depend on t) such that X_i(t + 1) is a linear function of {X_j(t) : j ∈ Pa(i)} plus independent Gaussian noise, where the φ_ij's are the coupling coefficients. Let T denote the number of time points sampled for every variable i, and let N be the number of i.i.d. copies of these time series. Every variable is essentially observed NT times, T across time for each i.i.d. sample. For jointly Gaussian autocorrelated time series processes, DI can be computed as in [17].
Here the relevant quantity is the asymptotic prediction error of X_{i,t} given the past of the process X_i, i.e. X_i^(t−1), and the past of the processes in A, X_A^(t−1). Hence, DI testing boils down to testing whether the two mean squared prediction errors are equal. Therefore, we form the mean squared test statistic in two ways, leading to two different DI testers.
Test 1(i, j, A): We follow the standard approach used in Granger causal studies [2]. If Φ is constant, we create a (T − 1) × 2 matrix whose rows are the consecutive pairs (X_{i,t}, X_{i,t+1}), t = 1, . . . , T − 1, taken from every time series. Stacking these matrices vertically over the i.i.d. copies creates an (NT − 1) × 2 matrix X̃_i. Let X̃_i[1, :] refer to the first column and X̃_i[2, :] refer to the second column. We solve the following two approximations through ordinary least squares regression: 1) min ‖X̃_j[2, :] − Σ_{l∈A} Φ̂_{lj} X̃_l[1, :]‖²; 2) min ‖X̃_j[2, :] − Σ_{l∈A∪i} Φ̂_{lj} X̃_l[1, :]‖². Let mse_1 be the mean squared error for the first least-squares approximation and mse_2 be the mean squared error for the second approximation. Then (NT − 1) ln(mse_1/mse_2) follows a χ² distribution with 1 degree of freedom, and the corresponding p-value is the p-value of the null hypothesis I(X_i → X_j || X_A) = 0 (when the process is stationary and jointly Gaussian). This is, therefore, a DI testing oracle and we call it Tester 1.
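A minimal numerical sketch of the Tester 1 idea, two nested ordinary-least-squares fits and a log-likelihood-ratio statistic referred to a χ² distribution with one degree of freedom, is given below. The data layout, the inclusion of an intercept and of the target's own past among the regressors, and all names are my reconstruction for illustration, not the authors' code.

```python
import numpy as np
from scipy import stats

def tester1_pvalue(X, i, j, A):
    """Chi-square(1) likelihood-ratio test of whether process i improves the
    one-step prediction of process j given the processes in A.

    X : array of shape (N, T, m) -- N i.i.d. copies, T time points, m processes.
    """
    N, T, m = X.shape
    past = X[:, :-1, :].reshape(-1, m)          # values at time t, stacked over copies
    future_j = X[:, 1:, j].reshape(-1)          # values of process j at time t + 1

    def mse(predictor_set):
        cols = sorted(predictor_set)
        # intercept and the target's own past are always included (a modelling choice)
        Z = np.column_stack([np.ones_like(future_j), past[:, [j]], past[:, cols]])
        coef, *_ = np.linalg.lstsq(Z, future_j, rcond=None)
        resid = future_j - Z @ coef
        return float(np.mean(resid ** 2))

    mse_restricted = mse(set(A))                # without process i
    mse_full = mse(set(A) | {i})                # with process i
    stat = past.shape[0] * np.log(mse_restricted / mse_full)
    return float(stats.chi2.sf(stat, df=1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, T = 20, 50
    x0 = rng.normal(size=(N, T))
    x1 = np.zeros((N, T))
    x1[:, 1:] = 0.6 * x0[:, :-1] + 0.3 * rng.normal(size=(N, T - 1))
    X = np.stack([x0, x1], axis=-1)             # process 0 drives process 1
    print(tester1_pvalue(X, i=0, j=1, A=[]))    # expect a very small p-value
```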
Test 2(i, j, A): The issue with Tester 1 is that it does regression with highly autocorrelated samples of the same time series stacked vertically. This is a good practice when the number of i.i.d. copies N is small. However, when N is comparable to T , autocorrelation amongst a specific process would decrease the performance of the tester. Instead of regression through stacking as in the previous case, we compute the asymptotic prediction error as follows.
1) We do two separate regressions for each pair of time points (t + 1, t) with N i.i.d. samples: one using variable i, A as a covariate to predict j and another without i and only with set A.
2) Now consider the N residues obtained after the regressions, ε_{j,A∪i,t,n}, 1 ≤ n ≤ N, for the first regression. Similarly, let the residues for the second regression be ε_{j,A,t,n}.
3) Denote by Σ_{j,A∪i} the covariance matrix whose entries are indexed by (t_1, t_2), t_1 ∈ [1 : T], t_2 ∈ [1 : T]. Σ_{j,A∪i}[t_1, t_2] is the covariance between ε_{j,A∪i,t_1,·} and ε_{j,A∪i,t_2,·} averaged over the N i.i.d. samples. Similarly, Σ_{j,A}[t_1, t_2] is the corresponding covariance matrix calculated from the ε_{j,A,·} residues. Now, since all variables are jointly Gaussian, the residues are also jointly Gaussian. Let Σ^(t)_{j,A} be the covariance sub-matrix involving points with time index up to t. Therefore, we compute the asymptotic prediction error given by the expression in [17]; similar expressions hold for ε²_∞(j, A ∪ i). This is motivated by the fact that for jointly Gaussian variables x_1 . . . x_n, the squared prediction error of x_n given the others is det Σ^(n)/det Σ^(n−1); the resulting test statistic is distributed as χ² with 1 degree of freedom. We call this Tester 2.
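The determinant identity invoked above (the conditional variance of x_n given x_1, ..., x_{n−1} equals det Σ^(n)/det Σ^(n−1) for jointly Gaussian variables) can be checked numerically. The snippet below compares that ratio with the conditional variance computed via the Schur complement on a random covariance matrix; it is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
sigma = A @ A.T + n * np.eye(n)            # a random positive-definite covariance

# Conditional variance of x_n given x_1..x_{n-1}: Schur complement of the leading block
s11 = sigma[:-1, :-1]
s12 = sigma[:-1, -1]
cond_var = sigma[-1, -1] - s12 @ np.linalg.solve(s11, s12)

det_ratio = np.linalg.det(sigma) / np.linalg.det(s11)

print(np.isclose(cond_var, det_ratio))     # True: the two expressions agree
```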
VI. COMPARATIVE STUDY
We perform a comparative study similar to reference [5]. We compare the results of MMPC-p to modified GC [2], [5], [18] and modified PC [5], [19]. For MMPC-p and PC we use Tester 1 and Tester 2; for modified GC we use only Tester 1. We fix an α value for all the testers. For the sake of brevity, we refer to modified GC and modified PC as GC and PC, respectively in the remainder of this paper.
Synthetic Datasets: We generate synthetic datasets as described in [5]. For a given density ρ and number of nodes N, we generate 50 datasets consisting of directed graphs of N nodes that contain at least one N-cycle, with coefficients of the AR(1) model in ±[0.2, 0.8] (before normalizing by the largest eigenvalue), such that the matrix has a density ρ. This method of constructing AR(1) models will generate matrices with very small eigenvalues, which is fixed by adding a scaled identity to the AR(1) model (essentially adding self-feedback loops). We do this 50 times for each of N = 10, 15, 20, 25, 30, 50 and for densities ρ = 0.1, 0.2, 0.3. For each of the datasets we generate 1000 samples, in the form of N_series time series with N_samples samples each, such that N_series × N_samples = 1000.
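The data-generation recipe can be sketched as follows: draw a random sparse coefficient matrix with entries in ±[0.2, 0.8], normalize by its spectral radius, add a scaled identity for stability, and simulate the AR(1) process. The scaling constants, the omission of the N-cycle constraint, and all names are illustrative guesses rather than the authors' script.

```python
import numpy as np

def random_sparse_ar1(n_nodes, density, rng):
    """Random sparse AR(1) coefficient matrix, loosely following the recipe in [5]."""
    mask = rng.random((n_nodes, n_nodes)) < density
    signs = rng.choice([-1.0, 1.0], size=(n_nodes, n_nodes))
    coef = mask * signs * rng.uniform(0.2, 0.8, size=(n_nodes, n_nodes))
    coef /= max(1e-9, np.max(np.abs(np.linalg.eigvals(coef))))   # normalize by spectral radius
    return 0.9 * coef + 0.05 * np.eye(n_nodes)                   # self-loops keep it stable (assumed constants)

def simulate(coef, n_series, n_samples, rng, noise=0.1):
    """Simulate n_series i.i.d. copies of the AR(1) process, n_samples points each."""
    n = coef.shape[0]
    out = np.zeros((n_series, n_samples, n))
    for s in range(n_series):
        x = rng.normal(size=n)
        for t in range(n_samples):
            x = coef @ x + noise * rng.normal(size=n)
            out[s, t] = x
    return out

rng = np.random.default_rng(42)
coef = random_sparse_ar1(n_nodes=10, density=0.1, rng=rng)
data = simulate(coef, n_series=10, n_samples=100, rng=rng)       # 10 x 100 = 1000 samples
```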
Metrics: We consider omission error rate: false negative edges normalized by the total number of edges and commission error rate: false positive edges normalized by the total number of non-edges.
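Both evaluation metrics follow directly from the true and estimated adjacency matrices; a small helper (names and the exclusion of self-loops are assumptions) is shown below.

```python
import numpy as np

def omission_commission(true_adj, est_adj):
    """Omission rate: missed edges / true edges.  Commission rate: spurious edges / true non-edges."""
    true_adj = np.asarray(true_adj, dtype=bool)
    est_adj = np.asarray(est_adj, dtype=bool)
    off_diag = ~np.eye(true_adj.shape[0], dtype=bool)   # ignore self-loops (a design choice)
    edges = true_adj & off_diag
    non_edges = ~true_adj & off_diag
    omission = (edges & ~est_adj).sum() / max(1, edges.sum())
    commission = (non_edges & est_adj).sum() / max(1, non_edges.sum())
    return omission, commission
```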
Discussion: The leftmost plots in Figure 1, Figure 2, Figure 3, and Figure 4 indicate that for large and sparse graphs both the omission and commission errors are well controlled for MMPC-p. The rightmost plots suggest that when the density is higher, commission errors for MMPC-p are still well controlled but the omission error increases.
We conduct analysis for 50 variables separately in Figure 5. The results indicate that PC has high commission error with Tester 1, and GC cannot run because of the conditioning set being large. However, both commission and omission errors for MMPC-p with Tester 1 are lower than PC and GC.
VII. GLOBAL DEVELOPMENT CASE STUDY
We consider a dataset of over 4,000 random processes, most of which begin in 1960, across a wide range of development issue areas. The sources are largely international organizations like the World Bank, the Food and Agriculture Organization of the United Nations, the UNESCO Institute for Statistics, and the International Monetary Fund, and the data have been standardized. In Table I, we show the parents obtained for four series (selected arbitrarily): AGCropProductionFAO, GDPCurDol, LaborAgricultureTotMale, and Population.
For the most part, the parents identified by MMPC-p do not match the causal drivers in IFs. They are a mixed bag: some are semantically similar to the IFs parents, such as AGCroptoFoodFAO and GDP; we have validated them to be semantically similar with domain experts. Others like Market for PC sales, Internet Subscribers, and Cooking Oil are spurious.
One of the main reasons for the mismatch between MMPC-p and the IFs model is that many variables used in the model do not have a direct corresponding data series. For example, one of the two direct drivers of crop production is yield, measured as tons per hectare. But the variable for yield used in the IFs model is initialized using data series for crop production and crop land (the quotient being yield). So, MMPC-p is unable to identify yield as a direct parent of crop production. Another reason for the mismatch is that the dataset used by MMPC-p contains many series that are aggregated for use in the IFs model, and are not directly causally connected to other variables. For example, calories per capita is an important development indicator, and a direct driver of hunger, but is initialized in the IFs model through the sum of ten series for calories per capita from different food sources. Finally, a technical reason for the mismatch could be that many of these relations are non-linear in nature. Other testers that do not require linearity could be used with more available data and perhaps yield results more similar to IFs.
VIII. CONCLUSION
In this paper, we have proposed a new algorithm for learning the Granger causal structure of observational time series and endowed it with strong FDR control. Named MMPC-p, it is inspired by the hill-climbing MMHC approach for causal structure learning in non-time series observations and inherits its scalability to large numbers of random processes. We conduct a comprehensive comparison to GC and PC with two different DI testers on large sparse graphs, finding that the proposed algorithm has better FDR control and scalability than the competing algorithms. We have also taken the first steps to using the algorithm in practice for a global development use case as an alternative to years-long modeling efforts. Our results are observed to be semantically similar for some variables when compared to the existing ground-truth. There is still room for improvement in better aligning with international studies practice; in fact, one piece of future work is to use the human-validated relationships not only as a comparison point for validating algorithm outputs, but as input for an improved algorithm that is a hybrid of deduction and data-driven inference. | 2018-05-24T21:34:17.000Z | 2018-05-24T00:00:00.000 | {
"year": 2018,
"sha1": "087d754018ebcb828a0e10de058e2a021441abab",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "087d754018ebcb828a0e10de058e2a021441abab",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
225756575 | pes2o/s2orc | v3-fos-license | Metal-Ceramic Bond Strength of Substrates Made With Different Fabrication Techniques: In Vitro Study
The purpose of this in vitro study was to determine the metal-ceramic bond strength between dental porcelain and cobalt-chromium (Co-Cr) metal substrates fabricated by different techniques. Forty Co-Cr metal substrates were fabricated according to ISO 9693-1, by casting, milling, soft milling, and DMLS. Forty additional substrates were fabricated, ten for each technique, to record the modulus of elasticity. A commercially available feldspathic porcelain was placed on the substrates, and the specimens were then tested for metal-ceramic bond strength with the 3-point bend test, according to ISO 9693-1. The fractured specimens were observed with optical and scanning electron microscopy using electron dispersive spectroscopy to define the mode of failure. X-ray diffraction spectroscopy was conducted to determine changes in crystalline phases after fabrication and the 3-point bend test. Statistical analysis was performed with 1-way analysis of variance and the Tukey post hoc test (α=.05). No statistically significant differences were found for modulus of elasticity among any of the groups. The metal-ceramic bond strength showed no statistically significant differences among the groups, and the mode of failure in all groups was cohesive. The metallographic analysis of the as-received, the after porcelain firing, and the after 3-point bend test specimens revealed changes in microstructure. The crystallographic microstructure revealed that the patterns had minor changes among the groups. The study revealed that all of the techniques showed similar results. The modulus of elasticity and metal-ceramic bond strengths presented no statistically significant differences, and the mode of failure was cohesive.
INTRODUCTION
Metal substrates for metal-ceramic restorations were traditionally constructed by casting dental alloys, but in recent years, new technologies have offered alternative solutions. The most common of these are the three-dimensional (3D) subtractive and additive techniques, such as milling and soft milling, which are methods that involve cutting the substrate from a soft material, followed by thermal treatment in an oven, and the direct metal laser sintering (DMLS) technique, which involves the construction of the metal substrates layer by layer with metal powder and a laser. Many researchers [1][2][3][4][5][6][7][8][9] have published articles concerning experiments with these techniques, using mostly cobalt-chromium (Co-Cr) alloys and testing different factors which influence the metal-ceramic bond strength. Also, many results have been published [10][11][12][13][14] regarding bond strength values related to microstructural changes of alloys used under different processing and thermal conditions.
In addition, Co-Cr alloys have been utilized with these new technologies because they involve less concern about biocompatibility than the traditional cast nickel-chromium (Ni-Cr) alloys, for metal-ceramic prostheses. The purpose of the present research was to study the bond strength between Co-Cr alloys and a feldspathic dental porcelain when casting, milling, soft milling, and DMLS techniques were used, according to ISO 9693-1, 15 to construct the substrates. The null hypothesis of the present study was that there would be no statistically significant differences in the metal and ceramic bond strengths for any of the fabrication techniques used for the construction of the metal substrates.
MATERIALS AND METHOD
Forty metal substrates were fabricated by casting, milling, soft milling, or DMLS techniques according to ISO 9693-1 requirements. The metal substrates were classified into 4 equal groups of 10, as shown in Table 1. In addition, 40 metal substrates were fabricated, 10 with each technique, to test the modulus of elasticity (E) ( Table 1). Twenty-one plastic patterns were fabricated using custom-made equipment 16 to the exact dimensions specified by ISO 9693-1 (length 25 ±1 mm, width 3 ±0.1 mm, and thickness 0.5 ±0.05 mm).
Twenty of the specimens were positioned in casting rings, with an investment material (Giroinvest Super; Amann Girrbach) and then cast using a Co-Cr alloy (Phase-C3; Unitech).
The investment material was removed by sandblasting with 110-μm Al2O3 particles (Cobra; Renfert) with 200 kPa pressure. The one plastic pattern left was used for the scanning procedure (Ceramill Map 400; Amann Girrbach) to create the digital prototype, and a standard tessellation language (STL) file was created for the milling, soft milling, and DMLS techniques.
The milling was performed with a YenaDent D40 5-axis cutting machine (YenaDent Europe), using a 10-mm Co-Cr Magnum Splendidum 4 disc (Mesa). The soft milling was performed with a Ceramill Motion 2 5-axis cutting machine (Amann Girrbach) using a 10-mm Co-Cr soft disc (Ceramill Sintron) from the same company. After the soft milling procedure, all of the substrates of this group were subjected to thermal processing in a Ceramill Argotherm 2 furnace (Amann Girrbach) to acquire the optimum mechanical properties.
The DMLS substrates were created using the 3Shape program with the STL file, and then with an MLab cusing machine (Concept Laser) using the 10-30 μm Co-Cr powder Remanium Star CL (Dentaurum). The compositions of the manufacturers 'Co-Cr alloys are presented in Table 2.
Ten substrates from each group were submitted to a 3-point bend test in a universal testing machine (Tensometer10; Monsanto). A standard load was applied with a crosshead speed of 1.5 mm/min and a distance between the supporting points of 20 mm. The E was calculated using the following formula: E = L³ΔP/(4bh³Δd), where L is the distance between the supporting rods (20 mm), b is the width of the specimen (3 mm), h is the thickness of the specimen (0.5 mm), and ΔP and Δd are the load and deflection increments, respectively, between 2 specific points in the elastic portion of the curves.
The porcelain was applied in layers. The first layer was the bonding agent (Metablend; Unitech), and the second and third layers were the opaque and dentin of Noritake EX-3 dental porcelain. After the preparation of the metal-ceramic specimens, a 3-point bend test was performed with the same equipment (Tensometer10; Monsanto) that was used for the determination of E. The load was applied on the opposite side from the porcelain layers. Fracture diagrams were obtained, and the bonding strength was calculated using the following formula: σ = 3FL/(2bd²), where σ is the stress, F is the maximum load applied (N), L is the distance between the supporting rods, and b and d are the width and thickness of the specimen. To verify the kind of material in the differently colored areas recorded with optical microscopy, selected areas of the fractured surfaces were observed in a scanning electron microscope (SEM; JEOL 6380LV) operating at an accelerating voltage of 30 kV, by secondary electron images at ×100 magnification. The qualitative and quantitative definitions of the elemental distribution were obtained by X-ray energy dispersive spectroscopy (EDS) using a super ultrathin beryllium window (Sapphire; Edax Intl). The EDS area (mapping) analysis was obtained on representative metal substrates as received from the manufacturer, for each group, as well as on the bonding agent and opaque porcelain before their use on the substrates. The EDS analysis was also conducted on specific areas of the fractured specimens.
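As a quick numerical illustration of the two 3-point-bend formulas above, the following sketch evaluates E and σ. The specimen dimensions are those prescribed by ISO 9693-1, while the load and deflection values are placeholders, not measurements from this study.

```python
def elastic_modulus(load_increment, deflection_increment, span=20.0, width=3.0, thickness=0.5):
    """Three-point-bend elastic modulus E = L^3*dP / (4*b*h^3*dd); N and mm give MPa."""
    return span**3 * load_increment / (4.0 * width * thickness**3 * deflection_increment)

def bond_strength(max_force, span=20.0, width=3.0, thickness=0.5):
    """Three-point-bend stress sigma = 3*F*L / (2*b*d^2); N and mm give MPa."""
    return 3.0 * max_force * span / (2.0 * width * thickness**2)

# Placeholder example: a 3 N load step producing 0.1 mm extra deflection, and a 1.5 N failure load
print(elastic_modulus(3.0, 0.1))   # about 160,000 MPa (160 GPa)
print(bond_strength(1.5))          # about 60 MPa
```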
Cohesive failure was defined as >50% of the fractured surface of the specimen covered by ceramic material (including bonding agent), whereas adhesive failure was defined as <50%.
The mode of failure in all of the groups in this study was cohesive.
Phase composition was analyzed by X-ray diffraction (XRD) using a Philips X'Pert diffractometer. In the present study, the XRD patterns were divided into groups of as-received, after porcelain firing, and after 3-point bend test patterns. The mean value and standard deviation were calculated for E.
Bond strength was statistically analyzed by 1-way analysis of variance (ANOVA) and Tukey post hoc test; P<.05 was considered statistically significant.
RESULTS AND DISCUSSION
The results for E of the 4 different techniques applied for the fabrication of the metal substrates are presented in Table 3. No statistically significant difference was recorded among the tested groups (P=0.268). The results for metal-ceramic bond strength are presented in Table 4. Again, no statistically significant difference was recorded among the tested groups (P<0.173). The results of the EDS area analysis of the as-received ceramic materials (bonding agent, porcelain) revealed that the bonding agent presented an increased amount of Ti, while Zr and Si were the main elements detected in the opaque porcelain. The mode of failure for all groups was cohesive.
Also, XRD analysis of the alloys used was conducted. The 2 typical primary phases of the casting group of Co-Cr alloys, γ-fcc (γ-Co-Cr-fcc) and ε-hcp (ε-Co-Cr-hcp), were found in the as-received alloys. Another phase that fit with the intermetallic Co7Cr8 was also revealed. After porcelain firing and after the 3-point bend test, the Co-Cr-fcc became the major phase. In addition, TiO2 and CeO2 were identified, showing that the presence of porcelain and of Co-Cr-hcp could not be discounted, because their peaks appeared in the same place as those of the additives from the opaque porcelain. The amorphous halo observed at low angles (below 25 degrees) after porcelain firing and after the 3-point bend test was the result of their additive content. As in the casting specimens, the patterns of the as-received milling group showed the 2 fcc/hcp Co-Cr phases together with an intermetallic Co7Cr8 phase. In the after porcelain firing (APF) and after 3-point bend test (A3PBT) patterns, the Co-Cr-fcc became the major phase, in combination with the TiO2 and CeO2 phases.
DISCUSSION
According to the results of the present study, the null hypothesis was verified, and no statistically significant difference was found among the tested groups concerning the metalceramic bond strength. The results for modulus of elasticity of all tested Co-Cr alloys were in accordance with the data provided by the companies and that reported in the literature.
Regarding XRD, the pure solid cobalt was (under equilibrium conditions) face-centered cubic (fcc) above 419ºC and hexagonal close-packed (hcp) below 419ºC. 17 The pure solid Cr showed the body-centered cubic phase (Cr-bcc) as departure phase; however, the solid-state transformation of Co from fcc to hcp was slow, so the C-fcc phase was retained under normal They found no significant difference in bond strength between cast and selective laser sintering Most of the previously mentioned studies were accompanied by testing for mode of failure.
The results revealed cohesive fractures in the majority of the tests, independent of the fracture method used. [1][2][3][4][5] Many researchers report mixed adhesive and cohesive failure, with cohesive failure the more prevalent. 4,5,9,19 Al Jabbari et al 10 in an experimental study, produced Co-Cr specimens by casting, powder metallurgy, and CAD-CAM, and analyzed those using EBSD EDS and XRD. Specimens were treated using different porcelain firings, and crystallography, grain size, and chemical composition were analyzed. for future research, it would be useful to analyze the longevity of each of these constructions, comparing the groups. Also, another possible study could include these 4 groups to reveal the best effect on tissues, and which one would be more suitable. Furthermore, it would be interesting to analyze the ions released from these materials. Given all of the new technologies that are now a part of the production of fixed dental prostheses and looking forward to new technologies that may emerge, a similar study including also 3D printers might give valuable insights.
CONCLUSION
Within the limitations of the present laboratory study, the following conclusions can be drawn: The modulus of elasticity of Co-Cr dental alloys fabricated by casting, milling, soft milling, or | 2020-07-09T09:12:52.074Z | 2020-06-22T00:00:00.000 | {
"year": 2020,
"sha1": "2ce05bbd04750a105982db51f3ce283a0cf7e547",
"oa_license": null,
"oa_url": "https://doi.org/10.46624/bjmhr.2020.v7.i6.011",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d4b98988a7bff3c5b45afe19a4e61e3bbd04eec6",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
2248613 | pes2o/s2orc | v3-fos-license | Kinetic Analysis of Mouse Brain Proteome Alterations Following Chikungunya Virus Infection before and after Appearance of Clinical Symptoms
Recent outbreaks of Chikungunya virus (CHIKV) infection have been characterized by an increasing number of severe cases with atypical manifestations including neurological complications. In parallel, the risk map of CHIKV outbreaks has expanded because of improved vector competence. These features make CHIKV infection a major public health concern that requires a better understanding of the underlying physiopathological processes for the development of antiviral strategies to protect individuals from severe disease. To decipher the mechanisms of CHIKV infection in the nervous system, a kinetic analysis on the host proteome modifications in the brain of CHIKV-infected mice sampled before and after the onset of clinical symptoms was performed. The combination of 2D-DIGE and iTRAQ proteomic approaches, followed by mass spectrometry protein identification revealed 177 significantly differentially expressed proteins. This kinetic analysis revealed a dramatic down-regulation of proteins before the appearance of the clinical symptoms followed by the increased expression of most of these proteins in the acute symptomatic phase. Bioinformatic analyses of the protein datasets enabled the identification of the major biological processes that were altered during the time course of CHIKV infection, such as integrin signaling and cytoskeleton dynamics, endosome machinery and receptor recycling related to virus transport and synapse function, regulation of gene expression, and the ubiquitin-proteasome pathway. These results reveal the putative mechanisms associated with severe CHIKV infection-mediated neurological disease and highlight the potential markers or targets that can be used to develop diagnostic and/or antiviral tools.
Introduction
Chikungunya virus (CHIKV), an Alphavirus belonging to the Togaviridae family, is an arthropod-borne virus transmitted to humans by Aedes spp. mosquitoes [1]. Since 2004, major epidemics of Chikungunya fever have occurred in Africa and spread rapidly to India and islands in the southwestern Indian Ocean, notably in La Réunion between 2005 and 2006, when more than one third of the population was affected with the infection, causing 203 deaths [2]. Although Ae. aegypti is the classical vector of CHIKV, an adaptive mutation of the virus to Ae. Albopictus during the La Réunion outbreak increased the viral transmissibility and dissemination [3]. Climate changes and increased international exchanges of products and people have favored the dissemination of Ae. albopictus that colonized the temperate regions of Europe and the Americas [4,5]. Therefore, the wide distribution of Ae. albopictus and its establishment in temperate regions have modified the risk map of CHIKV outbreaks [6]. A recent CHIKV epidemic in northeastern Italy highlighted the increased risk of the emergence of arboviruses transmitted by local competent mosquitoes in Europe [7,8]. These outbreaks were directly linked to the return of tourists from India and the affected islands in the Indian Ocean. The risk of CHIKV transmission arises from the simultaneous presence of the virus, well-adapted vectors and susceptible human hosts. The spread of the Chikungunya epidemic has caused significant social and economic losses (high economic cost and human suffering) [9]. CHIKV is now considered a global health concern. In the absence of a vaccine or specific treatment, the primary mechanism to protect individuals from CHIKV infection is the prevention of bites from infected Aedes spp. using a combination of personal protective measures and vector control strategies [10]. However, protection cannot be restricted to antivector measures. Antiviral strategies against CHIKV infection must be developed for the prevention and/or treatment of the clinical manifestations associated with this arboviral disease.
The symptomatology of CHIKV infection was first described in the mid-1950s after an outbreak of Dengue disease in Tanzania in 1952 [11,12]. Although five percent of the infected people are asymptomatic, the disease is mainly characterized by fever, rash, headache and incapacitating joint pain (arthralgia) [13]. Chikungunya fever is rarely fatal, and most symptoms are resolved within a few weeks; nevertheless, some patients have persistent joint pain in the form of recurrent or persistent episodes that last for months to years [14]. Whereas the neurological complications were described in the 1960s [15,16], the severe clinical forms involving the central nervous system (CNS) were not uncommon during the Chikungunya outbreak that occurred in La Réunion from March 2005 to April 2006 [17]. This outbreak was characterized by a large number of atypical manifestations, including neurological disorders, which are listed as a major cause of death among individuals with severe CHIKV infection [18]. The increased susceptibility of newborns and the elderly to neurological complications supported the age-dependent association of these severe forms [19]. Additionally, the first mouse model of CHIKV infection developed by Couderc and collaborators [20] revealed the dissemination of the virus to the choroids plexuses and leptomeninges in the CNS in severe infections. This animal model has improved knowledge about the pathogenicity and cell/tissue tropisms of the virus, confirming that CHIKV can disseminate in the CNS.
To prevent and/or treat severe neurological disease in humans, a better understanding of the neurological consequences of CHIKV infection before and after the appearance of neurological clinical symptoms is needed. To elucidate the pathogenesis of CHIKV infection and identify the host factors hijacked by CHIKV to complete its viral replication cycle, the protein profile changes following CHIKV infection in vitro and in vivo were analyzed using state-of-the art technology. The in vitro experiments using CHIKV-infected hepatic or microglial cell lines collected before cell death revealed the down-regulation of host proteins involved in diverse cellular pathways and biological functions, including transcription, translation, cell signaling and lipid and protein metabolism [21,22]. The in vivo experiments comparing the liver and brain protein expression patterns in mock-and CHIKV-infected mouse tissues collected at the peak symptomatic phase showed an alteration of the proteins involved in stress responses, inflammation, metabolism and apoptosis [23]. In contrast to the in vitro experiments, most of the differentially expressed proteins in the infected mouse brain tissue were upregulated. The differences between the global protein expression patterns could be attributed, in part, to the different time points chosen in each proteomics analysis. The in vitro experiments focused on early proteome alterations following CHIKV infection, whereas the in vivo experiments investigated the molecular consequences of viral infection after the appearance of clinical symptoms.
Therefore, to obtain a comprehensive view of the pathophysiological processes associated with the clinical onset of neurological CHIKV infection, a kinetic analysis was performed on the protein expression profiles in the brain of CHIKV-infected mice collected before and after the onset of clinical symptoms. The host proteome modifications were determined using two proteomic approaches (2D-DIGE and iTRAQ) followed by protein identification by mass spectrometry (MS). Ingenuity Pathway Analysis (IPA) of the total dataset of proteins that were differentially expressed at the early and late time-points enabled the determination of the main networks and pathways modified during CHIKV infection in the brain. Detailed analysis of the proteins involved in these networks and pathways provided insight into the protein interactions and biological processes that are involved in the pathogenesis of the neurological disease caused by CHIKV infection. This study also highlighted the biomarkers of severe atypical symptoms and potential targets for antiviral research.
Ethics statement
All animal experiments described in this paper have been conducted according to Dutch guidelines for animal experimentation and approved by the Animal Welfare Committee of the Erasmus Medical Centre, Rotterdam, the Netherlands. All efforts were made to minimize animal suffering.
Mouse infection
Nine-day-old female C57/Bl6 mice were infected intra-peritoneally (i.p.) with 10^5 TCID50 per mouse of CHIKV strain S27 in 100 μl volumes. Mock-infected mice (n = 6) received the same volume of medium and were sacrificed on day 2 post infection. CHIKV-infected mice were sacrificed on the first day that virus was present in the brain (day 2 post infection; n = 6) and on the first day of neurological symptoms (day 3 post infection). Two different disease manifestations were observed on day 3 post infection: mice exhibited paralysis-like symptoms (n = 6) or tetanus-like symptoms (n = 6). Brains were collected immediately after humane euthanasia by cervical dislocation under isoflurane anesthesia, and the cerebellum was separated to reduce protein background. Brains were cut in half and the left hemisphere of each mouse was washed rapidly in ice-cold PBS to remove residual blood contaminants. Brains were then snap-frozen and stored at −80°C until processing. Mice were maintained in isolator cages throughout the infection experiment, had a 12-hour day-night cycle and were fed ad libitum. Animal experiments were approved by the Animal Ethics Committee of Erasmus Medical Center. Virus presence in the brain was confirmed by means of detection of viral RNA and antigen in brain samples from all mice. Viral RNA was extracted from brain samples using the automated MagnaPure method (Total nucleic acid isolation kit, Roche Diagnostics, the Netherlands) according to the manufacturer's instructions, and detected using a one-step RT-PCR TaqMan protocol (EZ-kit, Applied Biosystems) in an ABI PRISM 7500 detection instrument. The primers and probe used for CHIKV RNA detection were: CHIKV-reverse CCAAATTGTCCGGGTCCTCCT; CHIKV-forward AAGCTCCGCGTCCTTTACCAAG and probe Fam-CCAATGTCTTCAGCCTGGACACCTTT-Tamra [24,25]. Viral antigen was detected in formalin-fixed, paraffin-embedded tissues as follows: 4-μm thick paraffin sections were deparaffinized in xylene, rehydrated in descending concentrations of ethanol and incubated for 10 min in 3% H2O2 diluted in PBS to block endogenous peroxidase activity. Antigen retrieval was performed by incubation for 15 min at 121°C in citrate buffer (0.01 M, pH 6.0). Sections were incubated overnight at 4°C with rabbit anti-CHIKV capsid antibody (1:5000), followed by secondary goat anti-rabbit IgG-PO antibody (1:100; Dako, The Netherlands). Sections were counterstained with Mayer's hematoxylin, mounted with Kaiser's glycerin-gelatin and analyzed using a light microscope.
Protein sample preparation
Half brain hemispheres from non-infected mice and mice infected with CHIKV were collected before and after the appearance of clinical signs, corresponding to early and late time-points, respectively. CHIKV-infected mice were separated into three groups: early (n = 6, CH-E1 to E6) sampled at day two, and two late groups showing two different symptoms (see the "mouse infection" section above for details); late "paralytic" (n = 6, CH-LP1 to LP6) and late "tetanus-like" (n = 6, CH-LT1 to LT6) sampled at day three. The control group (mock, n = 6, C1 to C6) was sampled at day two. Brain samples were stored at −80°C and further processed in a biosafety level 3 laboratory (Dept. of Virology, IRBA Marseille) until complete homogenization. Briefly, each brain sample was lysed with 1 ml of lysis buffer containing 2% SDS, 125 mM Tris-HCl pH = 6.8, 10% glycerol and 5% mercaptoethanol, and homogenised by mechanical disruption using metal beads and the Tissue Lyser apparatus (QIAGEN). The resulting homogenates were centrifuged for 15 min at 16,000 × g at 4°C and the supernatant was collected and stored at −80°C. The protein concentration of each sample was determined by the Lowry method (DC Protein assay Kit, Bio-Rad) according to the manufacturer's instructions.
CyDye labeling
Samples were processed with a 2-D Clean-Up kit (GE Healthcare) and the protein pellets were resuspended in standard cell lysis buffer containing 8 M urea, 2 M thiourea, 4% (w/v) CHAPS and 30 mM Tris, adjusted to pH 8.5 (UTC buffer), at a protein concentration of 2.5 mg/mL. Sample quality and protein amount were checked by loading 10 µg of each sample onto a 10% SDS-PAGE precast gel (Bio-Rad) stained with Imperial Protein Stain solution (Fisher Scientific) (data not shown). Proteins in each sample were minimally labeled with CyDye according to the manufacturer's recommended protocols and as previously described [26,27]. An internal standard pool was generated by combining an equal amount of each sample included in the study and was labeled with Cy2. Cy3-, Cy5- and Cy2-labeled samples were then pooled (Supplementary Tables S1 and S2), and an equal volume of UTC buffer containing 10 mM DTT and 1% (v/v) immobilized pH gradient (IPG) buffer, corresponding to the IPG strips used, was added.
Image analysis
After electrophoresis, the gels with CyDye-labeled proteins were scanned with a Typhoon Trio Image scanner (GE Healthcare UK). Prescans were performed to adjust the photomultiplier tube (PMT) voltage to obtain images with a maximum intensity of 60,000 to 80,000 U. Images were cropped with ImageQuant software (GE Healthcare UK) and further analyzed using the Progenesis SameSpots v2 software package (Nonlinear Dynamics, Newcastle upon Tyne, UK). Background subtraction and spot intensity normalization were performed automatically by Progenesis SameSpots. Protein spots that presented a significant abundance variation between the experimental groups (|ratio| ≥ 1.3, ANOVA p ≤ 0.05) were marked and submitted to MS for identification.
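For readers who want to reproduce this kind of spot selection outside Progenesis, the rule boils down to a one-way ANOVA across the groups combined with a fold-change cut-off. The sketch below illustrates the logic in Python with synthetic data; the group names mirror the experimental design, but the numbers and function names are invented for illustration, not taken from the study's output.

```python
# Minimal sketch of the DIGE spot-selection rule used above:
# keep a spot when |fold change| >= 1.3 and the one-way ANOVA p-value <= 0.05.
# The synthetic volumes below stand in for normalized spot volumes exported
# from the image-analysis software; they are not real data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# normalized volumes of one spot: 4 groups (M, E, LP, LT) x 6 gels per group
volumes = {g: rng.lognormal(mean=0.0, sigma=0.2, size=6) for g in ("M", "E", "LP", "LT")}

def spot_is_differential(volumes, fc_cutoff=1.3, p_cutoff=0.05):
    p_value = f_oneway(*volumes.values()).pvalue          # ANOVA across all groups
    means = {g: v.mean() for g, v in volumes.items()}
    max_fc = max(max(a, b) / min(a, b)                    # largest pairwise ratio of group means
                 for a in means.values() for b in means.values())
    return (max_fc >= fc_cutoff) and (p_value <= p_cutoff), max_fc, p_value

flag, fc, p = spot_is_differential(volumes)
print(f"differential={flag}, max fold change={fc:.2f}, ANOVA p={p:.3f}")
```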
In-gel Digestion
Based on the Progenesis SameSpots analysis, protein spots of interest from gels stained with Imperial™ Protein Stain solution were excised and digested using a Shimadzu Xcise automated gel processing platform (Shimadzu Biotech, Kyoto, Japan) as described previously [28] and stored at −20°C until their analysis by MS.
Mass spectrometry analysis of peptide mixture from gel elution and data analysis
The samples were subjected to nanoscale capillary liquid chromatography-tandem mass spectrometry (nano LC-MS/MS) analysis with a QTOF apparatus (Q-TOF Ultima, Waters, MA) as previously described [26]. The peak lists, generated in the Micromass pkl format, were then fed into a local search engine, Mascot Daemon v2.2.2 (Matrix Science, London, UK), and searched against a mixed in-house Mus musculus and Chikungunya virus protein database (SwissProt). Search parameters were set to allow one missed tryptic cleavage site, the carbamidomethylation of cysteine, and the possible oxidation of methionine; precursor and product ion mass error tolerance was < 0.2 Da. All identified proteins have a Mascot score greater than 35 (mixed Mus musculus and Chikungunya virus database, 16,487 sequences extracted from SwissProt_2012_02), corresponding to a statistically significant (p < 0.05) confident identification.
iTRAQ labeling
For iTRAQ labeling, a sample pool for each experimental group was generated by mixing an equal amount of each sample per group (mock, pool-C; early, pool-E; and late, pool-LP and pool-LT). Six mice per group were pooled. Each pool was then divided into two replicates (mock, pools C1 and C2; early, pools E1 and E2; and late, pools LP1-LP2 and LT1-LT2), each containing 100 µg of protein.
Proteins were precipitated with cold acetone for 2 h at −20°C, centrifuged for 15 min at 16,000 × g, dissolved in 20 µL of dissolution buffer, denatured, reduced, alkylated and digested with 10 µg of trypsin overnight at 37°C, following the manufacturer's protocol (iTRAQ Reagent Multiplex Buffer kit, Applied Biosystems, Foster City, CA, USA) and as previously described [29]. The resulting peptides were labeled with iTRAQ reagents (iTRAQ Reagent-8Plex multiplex kit, Applied Biosystems) according to the manufacturer's instructions and as presented in Supplementary Table S3. Before combining the samples, a pre-mix containing an aliquot of each sample, cleaned up using a ZipTip, was analysed by MS/MS to check the peptide labeling efficiency with the iTRAQ reagents and the homogeneity of labeling between samples. The content of each iTRAQ reagent-labeled sample was then pooled into one tube according to this test. The mixture was cleaned up using a cation exchange cartridge (SCX/ICAT cation exchange cartridge, AB Sciex, Foster City, USA) and a reverse-phase C18 cartridge (C18 SpinTips, Proteabio, Nîmes, France) prior to separation on an off-gel system (Agilent 3100 OFFGEL fractionator, Agilent Technologies), as previously described [27].
Mass spectrometry analysis of peptide fractions from off-gel separation
For nanoLC-MS measurements, approximately 5 µg of peptide sample was injected onto a nano-liquid chromatography system (UltiMate 3000 Rapid Separation LC (RSLC) system, Dionex, Sunnyvale, CA). After pre-concentration and washing of the sample on a Dionex Acclaim PepMap 100 C18 column (2 cm × 100 µm i.d., 100 Å, 5 µm particle size), peptides were separated on a Dionex Acclaim PepMap RSLC C18 column (15 cm × 75 µm i.d., 100 Å, 2 µm particle size) (Dionex, Amsterdam) using a linear 90 min gradient (4-40% acetonitrile/H₂O; 0.1% formic acid) at a flow rate of 300 nL/min. The separation of the peptides was monitored by a UV detector (absorption at 214 nm). The nanoLC was coupled to a nanospray source of a linear ion trap Orbitrap MS (LTQ Orbitrap Velos, Thermo Electron, Bremen, Germany). The LTQ spray voltage was 1.4 kV and the capillary temperature was set at 275°C. All samples were measured in a data-dependent acquisition mode. Each run was preceded by a blank MS run in order to monitor system background. The peptide masses were measured in a survey full scan (scan range 300-1700 m/z, with 30 K FWHM resolution at m/z = 400, target AGC value of 10⁶ and maximum injection time of 500 ms). In parallel to the high-resolution full scan in the Orbitrap, data-dependent CID scans of the 10 most intense precursor ions were fragmented and measured in the linear ion trap (normalized collision energy of 35%, activation time of 10 ms, target AGC value of 10⁴, maximum injection time 100 ms, isolation window 2 Da and wideband activation enabled). The fragment ion masses were measured in the linear ion trap to obtain maximum sensitivity and the maximum amount of MS/MS data. Dynamic exclusion was implemented with a repeat count of 1 and an exclusion duration of 37 sec.
Data analysis
Raw files generated from the MS analysis were combined and processed with Proteome Discoverer 1.1 (Thermo Fisher Scientific). This software was used for the extraction of MGF files. Protein identification and quantification were carried out using ProteinPilot version 4.0 (Applied Biosystems). The search was performed against a mixed database containing 55,536 sequences (54,080 sequences from Mus musculus, extracted from UniProt on 13 December 2011, plus 1,300 sequences from Chikungunya virus and 156 classical contaminant proteins). Data were processed as described previously [30].
Ingenuity pathway analysis
A dataset containing the deregulated proteins obtained from the 2D-DIGE and iTRAQ analyses and their corresponding expression values (fold-changes and p-values) was uploaded into the IPA software (Ingenuity Systems Inc., http://www.ingenuity.com), taking into account the evolution of protein expression according to clinical onset. Proteins whose expression was significantly deregulated (|fold-change| ≥ 1.3, p-value ≤ 0.05) were selected for the analysis. The IPA program uses a knowledge base (IPA KB) derived from the scientific literature to relate genes or proteins based on their interactions and functions. Ingenuity Pathway Analysis generates biological networks, canonical pathways and functions relevant to the uploaded dataset. A right-tailed Fisher's exact test is used to calculate p-values determining the probability that the association between the proteins in the dataset and a given function or canonical pathway can be explained by chance alone. The final scores are expressed as the negative log of the p-values and used for ranking. The scores are derived from the p-value (score = −log(p-value)) and indicate the likelihood that the focus proteins (i.e., the identified proteins within a network) are clustered together. Thus, these proteins and their associations with the IPA KB were used to generate networks and to perform functional and canonical pathway analyses.
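The enrichment statistic described in this paragraph — a right-tailed Fisher's exact test whose p-value is then reported as −log(p) — can be written out explicitly. In the sketch below all of the counts and the background size are invented purely to show the calculation; they do not correspond to any pathway in the study.

```python
# Sketch of the pathway/function enrichment test described above: a right-tailed
# Fisher's exact test on a 2x2 table (dataset vs background, annotated to the
# pathway vs not), scored as -log10(p). All counts are illustrative only.
from math import log10
from scipy.stats import fisher_exact

dataset_in_pathway     = 12      # differentially expressed proteins annotated to the pathway
dataset_not_in_pathway = 165     # remaining differentially expressed proteins
background_in_pathway  = 150     # pathway members in the reference knowledge base
background_not_in      = 20000   # remaining reference proteins

table = [[dataset_in_pathway, dataset_not_in_pathway],
         [background_in_pathway, background_not_in]]
_, p_value = fisher_exact(table, alternative="greater")   # right-tailed test
score = -log10(p_value)
print(f"p = {p_value:.2e}, score = {score:.2f}")
```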
Virus Infection Conditions
The presence of CHIKV RNA in the brains collected from CHIKV-infected mice at the indicated time points was verified by real-time RT-PCR using primers and probes targeting the capsid-encoding region of the viral RNA. Viral RNA was detected on days 2 and 3 post-infection. On day 3, 50- to 100-fold more viral RNA was detected in the brains of infected animals. There was no difference in the RNA level between animals with paralysis and tetanus-like symptoms. Using immunohistochemistry, viral antigen was detected in the brains of infected animals with paralysis and tetanus-like symptoms (Figure 1). The antigen was detected in the cortex and thalamus of brains collected at days two and three post-infection. Viral RNA and antigen were not detected in any of the mock-infected mice.
2D-DIGE analysis to detect the differentially expressed proteins following CHIKV infection
Using the Progenesis SameSpot v.2 software, the abundance of 23 protein spots was found to be significantly different (ANOVA, p-value ≤ 0.05) among the four groups (M, E, LP and LT), with fold changes ≥ 30% (|FC| ≥ 1.3), in the pH range 3-10 (Figure 2A). The majority of the protein spots were significantly altered at the late time points in the infected mice relative to the mock-infected mice (LP vs M, n = 15: eight up-regulated and seven down-regulated; LT vs M, n = 15: nine up-regulated and six down-regulated; Figures 2C and 2D). Nine protein spots were significantly different in the infected mice at the early time point relative to the mock-infected group (five up-regulated and four down-regulated, Figure 2B). Finally, seven and eight protein spots were significantly different in the LP and LT samples, respectively, relative to the early samples (LP vs E, three up-regulated and four down-regulated; LT vs E, three up-regulated and five down-regulated; Figures 2E and 2F). Notably, among the modified protein spots, few differences were observed between the LP and LT samples relative to either the mock or early samples.
To better understand the early pathophysiological processes following CHIKV infection and to improve the determination of host proteome changes before the appearance of clinical signs, 2D-DIGE analyses were performed on the mock-infected samples relative to the infected samples at the early time point using narrower pH range IPG strips (e.g., pH 4-7 and 6-11). Using the pH 4-7 IPG strips for the IEF, nine protein spots were found to be significantly different between the early and mock-infected samples (|fold-change| ≥ 1.3, p ≤ 0.05) (seven up-regulated and two down-regulated, Figure S1). However, no protein spots were found to be significantly different using the pH 6-11 IPG strips (data not shown). Considering both pH ranges (i.e., pH 3-10 and 4-7), the abundance of 18 protein spots was found to be significantly modified at the early time point compared to the mock-infected group. Overall, the DIGE analysis of the brain tissue following CHIKV infection revealed that the abundance of 32 protein spots was altered.
Identification of modified protein abundance following CHIKV infection using 2D-DIGE analyses
Among the 23 protein spots detected using the pH 3-10 range analysis, 18 (78.3%) were successfully identified with a high degree of confidence; these spots corresponded to 16 distinct proteins according to their accession numbers, including seven proteins identified in the early samples (Table 1). These proteins were grouped into functional categories according to their gene ontology (GO). No viral protein was identified. Two spots contained more than one identified protein (#469 and #628), and three proteins were identified in more than one spot (DPYSL2, DPYSL3 and AL1L1). Five protein spots were not identified, most likely because of insufficient amounts of protein or low MS spectral quality.
The nine spots that the pH 4-7 range analysis revealed to be differentially expressed in the infected samples at the early time point were all identified; these spots corresponded to eight distinct host proteins based on their SwissProt accession numbers (Supplementary Table S4). The dynactin subunit 1 (DCTN1) was identified in two spots. Similar to the pH 3-10 analysis, no viral protein was identified. Notably, the early up-regulation of the dihydropyrimidinase-related protein 2 (DPYSL2) was confirmed using the pH 4-7 analysis. Therefore, the use of the narrower pH range enabled the identification of seven proteins in addition to those identified using the pH 3-10 range; therefore, 14 distinct proteins were identified as differentially expressed before the appearance of clinical signs. Considering the results obtained using both pH ranges in the 2D-DIGE analyses and the different paired comparisons, 22 unique host proteins were identified.
Identification of differentially expressed proteins following CHIKV infection using iTRAQ-labeling
To further characterize the proteome changes in the mouse brain after CHIKV infection, an off-gel quantitative proteomic analysis was performed using iTRAQ reagents, which allowed the simultaneous comparison of the eight pooled samples (Supplementary Table S3). The data were analyzed with the ProteinPilot software using the parameters described above. More than 3,000 proteins were initially identified; following the application of a local False Discovery Rate (FDR) of 5% and the exclusion of contaminants, 2,686 proteins were identified and quantified and were included in the analysis.
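The 5% FDR filter mentioned here is applied by ProteinPilot's own local-FDR algorithm; as a rough illustration of the general idea, the sketch below applies a simple target–decoy FDR threshold to a ranked list of identifications. The scores and protein names are invented, and this is not the ProteinPilot implementation.

```python
# Generic sketch of FDR-based filtering of ranked identifications: keep hits from the
# top of the score-sorted list until the decoy-based FDR estimate exceeds 5%.
# This only illustrates the concept; it is not the local-FDR method used by ProteinPilot.
hits = [  # (protein_id, score, is_decoy) -- made-up example values
    ("P1", 95.0, False), ("P2", 80.0, False), ("DECOY_1", 60.0, True),
    ("P3", 55.0, False), ("P4", 52.0, False), ("DECOY_2", 50.0, True),
    ("P5", 48.0, False),
]

def filter_at_fdr(hits, max_fdr=0.05):
    kept, decoys, targets = [], 0, 0
    for pid, score, is_decoy in sorted(hits, key=lambda h: h[1], reverse=True):
        decoys += is_decoy
        targets += not is_decoy
        if decoys / max(targets, 1) > max_fdr:   # estimated FDR at this score threshold
            break
        if not is_decoy:
            kept.append((pid, score))
    return kept

print(filter_at_fdr(hits))
```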
A total of 178 distinct host proteins were found to be significantly different (p ≤ 0.05) among the four groups (M, E, LP and LT) with a fold-change ≥ 30% (|FC| ≥ 1.3) (Supplementary Table S5). Among these proteins, 129 were differentially expressed between the CHIKV-E- and mock-infected mice, with 87% of the proteins being down-regulated (17 up-regulated and 112 down-regulated). Between the CHIKV-LP- and early-infected mice, 138 proteins were differentially expressed, with 96% of the proteins being up-regulated (133 up-regulated and five down-regulated). Between the CHIKV-LT and early-infected samples, 144 proteins were differentially expressed, with more than 98% of the proteins being up-regulated (142 up-regulated and two down-regulated). Among the proteins differentially expressed at the late time-points compared to control samples, 91 proteins were significantly different between the CHIKV-LP- and mock-infected mice (51 up-regulated and 40 down-regulated) and 85 proteins were significantly different between the CHIKV-LT and mock-infected samples (55 up-regulated and 30 down-regulated).
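Per-comparison tallies like the ones above can be reproduced with a few lines of table manipulation. In the sketch below the column names, the signed-fold-change convention (negative values meaning down-regulation) and the toy values are all assumptions made for illustration; they are not the layout of Supplementary Table S5.

```python
# Sketch of how up-/down-regulated proteins can be counted per comparison from a
# table of fold-changes and p-values. Column names and values are assumed, not the
# actual Supplementary Table S5 headers; negative fold changes denote down-regulation.
import pandas as pd

def tally(df, comparison, fc_cutoff=1.3, p_cutoff=0.05):
    fc, p = df[f"FC_{comparison}"], df[f"p_{comparison}"]
    significant = (fc.abs() >= fc_cutoff) & (p <= p_cutoff)
    return int((significant & (fc > 0)).sum()), int((significant & (fc < 0)).sum())

df = pd.DataFrame({
    "FC_E_vs_M":  [-1.8, -1.5, 1.4, -1.1],
    "p_E_vs_M":   [0.01, 0.03, 0.02, 0.20],
    "FC_LP_vs_E": [2.1, 1.6, -1.4, 1.2],
    "p_LP_vs_E":  [0.001, 0.04, 0.03, 0.30],
})
for comp in ("E_vs_M", "LP_vs_E"):
    up, down = tally(df, comp)
    print(f"{comp}: {up} up-regulated, {down} down-regulated")
```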
Combination of in-gel (2D-DIGE) and off-gel (iTRAQ-labeling) analyses
Because the objective of this kinetic study was to determine the protein profile alterations during the evolution of clinical signs, only the proteins that were significantly differentially expressed between the late (LT or LP) and early-infected samples, and between the early-infected and mock samples, were included in the combined analysis of the DIGE and iTRAQ data. Considering the times at which the two late symptoms occurred, analysis of the three comparisons (E vs M, LP vs E and LT vs E) generated a total of 177 unique host proteins that were differentially expressed in the brain tissue samples after CHIKV infection at early and/or late time-points (2D-DIGE, n = 17; iTRAQ, n = 161; CRMP1 was detected using both methods). The subcellular distributions and the GO functional classifications were determined for the proteins that were significantly differentially expressed during CHIKV infection. The proteins were located mainly in the cytoplasm (>45%), the membrane (22%) and the nucleus (20%) (Figure 3A). The differentially expressed proteins were mainly involved in transcription/translation (14%), nervous system development (12%), cytoskeleton organization (11%), metabolism (11%) and transport (10%); others were related to the cell cycle, ubiquitination or apoptosis (>5%) (Figure 3B). Among the 177 differentially regulated proteins, the majority (n = 110, 62.1%) showed differential expression in all three comparisons, and 36 proteins showed differential expression in paired comparisons (Figure 3C). Finally, a few proteins were found to be differentially expressed in only one comparison (n = 31), particularly in the LP or LT groups compared to the early time-point, corresponding to six and three proteins, respectively. These results highlighted that the number of proteins differentially expressed between the two late time-points (LT and LP) is relatively low. The hierarchical cluster analysis clearly indicated that at the early time-point, the large majority of proteins were down-regulated (80%) and subsequently up-regulated when both of the late clinical symptoms occurred (>95%) (Figure 3D).
Figure 2. Representative 2D-DIGE analysis (10% SDS-polyacrylamide gel, pH 3-10) of mock (M), early (E), late paralytic (LP) and late tetanus-like (LT) CHIKV-infected brain samples, labeled with Cy5 or Cy3 cyanine dyes as described in Table S1; spots differentially regulated between the four experimental groups (|FC| ≥ 1.3 and p ≤ 0.05, Progenesis SameSpots) were submitted to mass spectrometry for identification, the annotated numbers correspond to master gel numbers of deregulated spots, and all spots were identified as Mus musculus proteins (listed in Table 1).
Table 1. Proteins identified from the 2D-DIGE (pH 3-10) analysis of mouse brain lysates collected at early (E), late paralytic (LP) or late tetanus-like (LT) time-points after CHIKV infection.
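The Venn-style breakdown described above (proteins shared by all three comparisons, by exactly two, or unique to one) reduces to set algebra over the three comparison lists once the DIGE and iTRAQ hits are merged. The protein names in the sketch below are placeholders chosen only to make the bookkeeping concrete.

```python
# Sketch of the set bookkeeping behind the Venn breakdown of differentially expressed
# proteins across the three comparisons (E vs M, LP vs E, LT vs E).
# The identifiers below are placeholders, not the study's protein lists.
e_vs_m  = {"DPYSL2", "RABEP1", "GABRA1", "ITGAV", "SYNGR3"}
lp_vs_e = {"DPYSL2", "RABEP1", "GABRA1", "ITGAV", "NSF"}
lt_vs_e = {"DPYSL2", "RABEP1", "GABRA1", "RNF213", "NSF"}

all_proteins = e_vs_m | lp_vs_e | lt_vs_e
in_all_three = e_vs_m & lp_vs_e & lt_vs_e
in_exactly_one = {p for p in all_proteins
                  if sum(p in s for s in (e_vs_m, lp_vs_e, lt_vs_e)) == 1}
in_exactly_two = all_proteins - in_all_three - in_exactly_one

print(f"total unique: {len(all_proteins)}")
print(f"in all three comparisons: {len(in_all_three)}")
print(f"in exactly two: {len(in_exactly_two)}")
print(f"in exactly one: {len(in_exactly_one)}")
```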
Networks, biological pathways and functions involved in the clinical evolution of CHIKV infection in the brain
The 177 unique host proteins whose levels were found to be significantly altered based on the three comparisons (E vs M, LP vs E and LT vs E) were uploaded into IPA to statistically determine the functions and pathways most strongly associated with the protein list and to establish interactions with other proteins in known networks. Eleven relevant networks were generated by IPA and the top 5 are listed in Table 2. Among these networks, the top two had clearly higher scores (≥ 37) and included more than 20 focus molecules involved in functions related to cell or tissue morphology and infectious disease (network 1), and cell-to-cell signaling and interaction, cellular assembly and organization, and cellular compromise (network 2). These networks were overlaid with the fold changes in protein expression determined in each comparison (E vs M, LP vs E and LT vs E) to highlight the proteins whose levels were altered during the time-course of the infection and according to the late symptoms (Figure 4) [32]. Network 1 contained proteins involved in actin cytoskeleton organization (Actin, Tubulin, Spectrin, CAPZ, and CORO2B), the nervous system and recycling machinery (DPYSL, GRIPAP1, and ARRB1) and gene expression regulation (SRRM2, EIF3H, and EIF3K). Network 2 contained proteins related to integrin signaling and cytoskeleton dynamics (ITGAV, ITGB1, and LAMB1), endocytosis and synapse plasticity (GABRA1, PICK1, RABEP1, and NSF), and ubiquitination (RBX1 and RNF213).
Of the 208 canonical pathways identified, 26 presented a significant association (−log(p-value) > 2.0, Supplementary Table S6). The most relevant pathways were related to cell junctions and integrin signaling or associated with endocytosis phenomena (i.e., clathrin-mediated endocytosis (CME) and virus entry via the endocytic pathways) and nervous system signaling (i.e., CDK5, semaphorin signaling in neurons, and neuregulin). In addition, the biological functions associated with the protein dataset, ranked by significance, corresponded to cellular assembly and organization (p-value: 1.36E-05; n = 54), cellular function and maintenance (p-value: 2.11E-05; n = 49) and the cell cycle (p-value: 4.40E-05; n = 32). Furthermore, 53 molecules were associated with cell death and survival (p-value: 1.80E-03). In terms of diseases and disorders, 48 proteins were significantly associated with neurological disease (p-value: 5.20E-04), consistent with the observation that 27 of the molecules were most significantly related to nervous system development and function (p-value: 1.68E-04).
Table 1 footnote: the proteins were identified by mass spectrometry following in-gel trypsin digestion; spot numbers correspond to those indicated in Figure 2; the identities of the spots, their SwissProt accession numbers, theoretical molecular masses and pI values, the number of peptide sequences, the corresponding percent sequence coverage and the Mascot score are listed for the MS/MS analysis; protein scores greater than 35 were considered significant (p < 0.05); paired average volume ratios and p-values (ANOVA) between the compared groups were defined using the Progenesis SameSpots software; n.i., no identification; M, mock-infected samples.
Several of the differentially expressed proteins identified by the proteomic analyses, including synaptogyrin (SYNGR3), were selected for validation by Western blot (WB). For all WB, each protein sample was labeled with the cyanine-3 dye to reveal any variations in sample loading, which were considered for the normalization and the calculation of the average band volume ratio detected by each specific antibody and revealed by a fluorescence-conjugated secondary antibody (FITC or ECL Plex system). Using these conditions, the protein levels during the course of CHIKV infection in the mouse brain were determined. The significant relative down-regulation between the mock- and early-infected samples and relative up-regulation between the early and LP or LT time points that were detected using the proteomic approaches were confirmed for RABEP1, SYNGR3, GRASP1, ARRB1, GABRA1 and ANXA2 by WB (Figure 5). However, the increased expression of GRASP1 in the LT samples compared to the early-infection samples was not significant in the WB analysis. Although the differential expression of ITGAV and NRAS measured by WB was consistent with the proteomic results according to clinical symptom onset, the protein variations observed were only significantly different between LT or LP vs E for ITGAV and between LT vs E for NRAS. The MYPT1 differential expression was not significant in the WB analysis, irrespective of the pair-wise comparison performed.
Collectively, for the majority of the selected proteins, the expression variations measured by WB analysis were consistent with the DIGE and iTRAQ analyses over the time course of the clinical symptoms. Nevertheless, the protein abundance changes measured by WB were lower than those detected using the proteomic approaches ( Figure 5, Tables 1, S4 and S5). Several previously described factors might affect this validation step [27].
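The normalization used for the WB quantification (each antigenic band volume scaled by the total Cy3-labeled protein signal of its lane, then compared between groups as a fold change) can be spelled out in a short calculation. The replicate values are placeholders, and the two-sample t-test at the end is an assumption used for illustration — the study does not state which statistical test produced the significance marks.

```python
# Sketch of the WB densitometry workflow described above: normalize each band volume
# to the total Cy3-labeled protein signal of its lane, then compare group means.
# All values are placeholders; the t-test is an illustrative choice, not the study's stated test.
import numpy as np
from scipy.stats import ttest_ind

def normalize(band_volumes, lane_totals):
    return np.asarray(band_volumes, dtype=float) / np.asarray(lane_totals, dtype=float)

early   = normalize([120, 140, 110], [9800, 10500, 9400])    # three biological replicates
late_lp = normalize([260, 240, 300], [10100, 9900, 10800])

fold_change = late_lp.mean() / early.mean()
p_value = ttest_ind(late_lp, early).pvalue
print(f"LP vs E fold change: {fold_change:.2f}, p = {p_value:.3f}")
```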
Discussion
To allow therapeutic intervention of the pathophysiological processes involved in the CHIKV neuro-invasive disease, knowledge of the kinetics of the host protein expression profiles is crucial. Therefore, mouse brain tissues were sampled before and after the appearance of severe clinical symptoms following CHIKV infection, and the protein expression profiles were monitored using comprehensive quantitative proteomic approaches. Unexpectedly, two distinct clinical manifestations were observed in the infected mice, one group showing paralytic symptoms (LP), and the other presenting tetanus-like symptoms (LT). Considering these two clinical signs and focusing the analysis on the alteration of protein profiles, comparisons were limited to successive time points (i.e., E vs M, LP vs E and LT vs E), allowing the determination of 177 unique proteins with significant differential expression.
The most striking results were the opposite trends in differential expression of proteins in the brain during the CHIKV infection in mice. Prior to the appearance of clinical signs, 80% of the proteins that were significantly differentially expressed were down-regulated compared with the mock-infected samples; however, after the appearance of the clinical signs, irrespective of the type of symptom, more than 95% of the proteins that were significantly differentially expressed were up-regulated. The changes in protein expression profiles occurred rapidly, within 24 hr of the onset of the acute phase of the disease. The changes in protein expression profiles were validated by WB analyses for the majority of the selected proteins. The observed contrasting expression profiles are consistent with recent proteomic studies using cell lines or mouse models infected with CHIKV [21,22,23]. In CHIKV-infected microglial cells harvested before cellular apoptosis, which was considered to be an early time point, almost all of the proteins that were significantly differentially expressed were down-regulated [21]. In another proteomic analysis, Thio and collaborators reported the down-regulation of 42 out of 50 proteins that were differentially expressed 24 h after the CHIKV infection of hepatic WRL-68 cells; this was considered a model system for early infection time points [22]. Conversely, the analysis of the proteome changes in the brain of mice infected by CHIKV and sampled during the phase of acute neurological symptoms revealed that more than 88% of the proteins were significantly up-regulated [23]. Consistent with these previously reported alterations in the protein expression profiles, the present study clearly indicated that CHIKV infection induces an early dramatic shut-off of host protein expression, followed by an up-regulation during the onset of clinical symptoms. These contrasting protein expression patterns observed using in vitro and in vivo CHIKV infection models likely reflect the nature of the replication cycle of this virus. Notably, the majority of proteins that were differentially regulated (> 62%) were similar in the three comparisons performed (E vs M, LP vs E and LT vs E), and the level of similarity was as high as 88% at the late time points (i.e., LP vs LT). Additionally, in the LP vs LT comparison, the expression levels of the common proteins varied in the same direction. These alterations in the protein expression patterns likely reflect the host response in combination with the hijacking of the host protein repertoire for successful viral multiplication and might have important consequences on viral pathogenicity and neurological symptoms.
The in silico analysis of proteins that were differentially expressed among the three compared groups (i.e., E vs M, LP vs E and LT vs E) revealed that the main functions and processes altered during the course of CHIKV infection in the mouse brains were as follows: i) integrin signaling and cytoskeleton dynamics, ii) endosome recycling machinery and synapse function, iii) regulation of host gene expression, and iv) modulation of the ubiquitin-proteasome pathway. Several proteins had roles in multiple biological functions, indicating the interconnectedness of these functions. The networks and pathways associated with the differentially expressed proteins point to potential pathophysiological processes of neurological CHIKV infection and offer possible biomarkers or therapeutic targets for the diagnosis and prevention of these severe manifestations, as discussed below.
i) Integrin signaling and cytoskeleton dynamics
Bioinformatic analysis pointed out that several proteins that were significantly differentially expressed were involved in the integrin signaling cascade and cytoskeleton dynamics, including ITGAV, ITGB1, LAMB1, ARPC1B, CORO2B, RAB35, PICK1, MYPT1, DNM1, and TBB3 (Figure 6). Interestingly, the integrins ITGAV and ITGB1 were found in almost all of the highlighted canonical pathways and in network 2 generated by IPA, suggesting the central role of these membrane proteins. Integrins are composed of alpha and beta subunits and are known to facilitate signal transduction and participate in a variety of processes, including cell growth, blood vessel permeability, tissue repair and immune response [33]. These membrane proteins also function as receptors for diverse viruses and act as signal transducers during virus entry [34]. Although some cell surface proteins are potential receptors for CHIKV entry into the host cells, the precise mechanism is still unclear [35,36]. In the present study, the down-regulation and up-regulation of αV integrin (ITGAV) and β1 integrin (ITGB1) were observed over the time course of the appearance of clinical signs. The αV/β1 integrin heterodimer is utilized by various adenovirus serotypes for cell entry [37,38]. In addition, members of the integrin superfamily serve as entry receptors for the Ross River alphavirus and the West Nile virus (WNV) [39,40]. Therefore, it will be interesting to test whether the αV/β1 integrins could be involved in CHIKV attachment or entry into the host cells. In this case, the decrease and subsequent increase in the level of this membrane receptor could first limit and then promote virus entry, respectively. The targeting of integrin receptors is a possible therapeutic strategy against tumor development [41]. Recently, it was shown that the use of αV/β1 antibodies or antagonists prevented the outgrowth or dissemination of carcinoma [42]. Therefore, the candidate drugs targeting αV/β1 integrins could be evaluated for their efficacy in the protection against CHIKV infection or prevention of severe cases.
Notably, the extracellular matrix component LAMB1 was found to be differentially expressed, consistent with previous results using resident brain macrophages infected with CHIKV [21]; LAMB1 can interact with integrin β1, triggering cytoskeleton modifications [43]. The integrin signaling cascade includes the phosphorylation/activation of several protein families, such as the Rho family of small GTPases [44]. Many viruses can hijack the Rho GTPase pathway for efficient virus replication and production [45]. In the present study, the expression levels of several proteins connected to the Rho GTPase pathway were found to be altered (Figure 6), for instance, the Rab GTPase RAB35, a protein that controls the endocytic recycling pathway [46] and participates in the maintenance of cellular adhesion by the inhibition of factors involved in integrin recycling [47]. Disturbance of the equilibrium between these factors could lead to the rupture of cell adhesion, leading to blood vessel permeability and facilitating the crossing of the blood-brain barrier by CHIKV. The altered expression of other proteins involved in microtubule filament formation (ARPC1B) [48], brain cytoskeleton rearrangement/motility (CORO2B, MYPT1) [49,50] and synaptic plasticity (PICK1) [51] highlighted the importance of changes in cellular cytoskeletal maintenance and molecular trafficking (Figure 6). Alteration of the coronin protein level (CORO1A) in CHIKV-infected brains was previously observed [23].
Figure 3. Classification of proteins significantly differentially regulated following CHIKV infection, identified by the combined 2D-DIGE and iTRAQ analyses: (A) sub-cellular location and (B) gene ontology functional categorization, with the percentages of proteins associated with each category in brackets; (C) Venn diagram of the unique host proteins significantly differentially regulated between E vs M, LP vs E and LT vs E; (D) hierarchical clustering of the mean ratios for these three comparisons, with up-regulated proteins in red, down-regulated proteins in green and proteins with no statistically significant change in black (generated using the Genesis program [87]).
Other cytoskeleton-related proteins such as dynamin (DNM1) and Tubulin beta-3 chain (TUBB3) were up-regulated only at the early time point. The role of the cytoskeletal network in CHIKV endocytosis has been demonstrated using actin-or tubulin-specific depolymerizing agents, highlighting the importance of intact actin filaments and microtubules for CHIKV infection [52].
ii) Endosome recycling machinery and synapse function
Similar to the cytoskeletal dynamics-related changes, several proteins that were differentially expressed during the time course of CHIKV infection in the brain were related to the endocytic machinery, including Rabaptin-5 (RABEP1), RAB35, N-ethylmaleimide-sensitive fusion protein (NSF), GRIP-associated protein 1 (GRIPAP1 or GRASP1), and annexin-A2 (ANXA2); this class of proteins was mainly down-regulated at the early stage of infection and then up-regulated at the later time-points ( Figure 6). The identification of the differential expression of the proteins involved in different steps of endosome trafficking, such as RABEP1, involved in early endosome fusion [53,54], NSF, in membrane fusion and transport [53,55], GRIPAP1, in endosome recycling [56], RAB35, in endosome recycling to the membrane [57,58] and ANXA2, in different steps from early endocytosis to endosome recycling [59], suggested the strong perturbation of this pathway following CHIKV infection. Diverse viruses including CHIKV use different stages of this endocytic pathway for efficient infection [52,60,61,62,63,64,65]. Endosome trafficking appears to play a key role for the CHIKV replication cycle, and the proteins identified in the present study could represent potential drug targets for the control of viral spreading. Drugs licensed for human use targeting the Rab GTPase network are already available [66]; these drugs could be tested against CHIKV infections.
Several differentially expressed proteins identified in the CHIKV-infected brain samples were associated with neurotransmitter receptor recycling, which is involved in synapse functions (Figure 6). The γ-aminobutyric acid receptor subunit alpha-1 (GABAAR1 or GABRA1), which mediates post-synaptic transmission in the vertebrate CNS, presented an early down-regulation and late up-regulation with the onset of clinical signs. The differential expression of GABRA1 is associated with the modulation of the phosphatase and tensin homolog (PTEN), a synaptic signaling protein [67], and arrestin-β1 (ARRB1), both of which activate the PI3K pathway, leading to the regulation of GABRA1 membrane expression and function [68,69]. ARRB1, NSF, PICK1 and GRIPAP1 were related to the endocytosis/recycling process of other neurotransmitter receptors (GPCRs or AMPARs) [56,70,71,72]. In addition, the synaptic vesicle protein synaptogyrin (SYNGR3), which is involved in neurotransmission [73,74], was also found to be differentially expressed in this study. The alteration of these protein levels in CHIKV-infected brains indicates a profound impairment of receptor signaling, leading to synapse dysfunction. The resulting alteration of neurotransmission may be, in part, associated with severe neurological CHIKV infection, as observed in patients with HIV encephalitis with differential expression of the genes encoding the neuronal molecules involved in synaptic plasticity, including synaptogyrin [75]. Further studies are needed to precisely determine the balance between receptor endocytosis, late recycling to the membrane and degradation, and the effect on synaptic plasticity from early to severe CHIKV infection. GABAARs are targets of several antiseizure pharmacological compounds such as benzodiazepines [76]. The potential beneficial effect of these molecules in neurological cases of CHIKV infection should be investigated. Together, these data showed that the endosomal machinery was affected by CHIKV infection in the brain, which could facilitate virus entry and spread and potentially cause the dysregulation of synapse function and neurotransmission (Figure 6).
iii) Regulation of gene expression
Because viral genomes are small with limited protein encoding ability, viruses require the host transcriptional and translational machineries to complete their replication cycle. Furthermore, the alphaviruses, including CHIKV, induce the cessation of host transcription/translation in infected cells [77,78,79]. Here, several proteins involved in mRNA processing and translation, including transcription factors (TFAM, PNN), splicing factors (SRRM2, SF3B2, U2AF1, RBM17, PHF5A) and translation initiation factors (EIF3H and EIF3K), were differentially expressed (Figure 6). Apart from SRRM2 and PNN, all these proteins were down-regulated and up-regulated at the early and late time points (i.e., LT and LP), respectively. The early down-regulation of proteins regulating gene expression and/or translation was reported in previous proteomic studies using CHIKV-infected cells [21,22]. The differential expression of splicing and translation proteins, including EIF3, has also been detected following infection with other viruses [26,80].
Altogether, these data supported that CHIKV infection, similar to infection with other viruses, perturbed the host protein synthesis machinery at different levels (transcription/translation and maturation), which could explain the down-regulation and up-regulation of the differentially expressed proteins at the early and late stages of the infection, respectively (Figure 6). Further studies on these regulators of gene expression will elucidate their contribution to CHIKV pathogenesis.
Figure 5. Western blot validations of differentially regulated proteins identified by the 2D-DIGE and/or iTRAQ analyses. (A) Protein samples from each group used for the proteomic analysis were minimally labeled with cyanine-3 dye; a representative protein profile of three biological replicates from brain lysates of mock (M), early (E), late paralytic (LP) and late tetanus-like (LT) samples, separated by 10% SDS-PAGE, is shown, with an overlaid fluorescent scan of the general protein patterns (Cy3 dye; green) and the specific immunoreactive proteins (FITC or Cy5 dye; red); corresponding cropped WB images are presented in grey levels. (B) The graphs correspond to the mean ± S.D. of protein quantity measured by densitometry of the antigenic bands (TotalLab Quant v12.2 software, Nonlinear Dynamics), normalized to the global protein pattern intensity; the values under each graph correspond to fold changes from paired comparisons, and the significance of differential protein expression is indicated as *, p < 0.05; **, p < 0.01; ***, p < 0.001; A.U., arbitrary units. ANXA2, annexin A2; ARRB1, β-arrestin; GABRA1, γ-aminobutyric acid receptor subunit alpha-1; GRASP1, GRIP-associated protein; ITGAV, integrin αV; MYPT1, myosin phosphatase target subunit 1; N-Ras, N-Ras; RABEP1, rabaptin-5; SYNGR3, synaptogyrin-3.
iv) Modulation of the ubiquitin-proteasome pathway (UPP)
The ubiquitin-proteasome pathway (UPP) is indispensable for a variety of cellular processes including the control of protein stability, protein trafficking, the regulation of signal transduction pathways and antiviral responses [81,82]. Conversely, the hijacking of the UPP by diverse viruses prevents viral protein degradation, aiding evasion of the host immune response, and also enables viral replication, viral particle assembly and egress [83]. In this study, six UPP-associated proteins were differentially expressed during the neurological CHIKV infection (i.e., RBX1, RNF213, RAD23A, GPS1, USP13 and UBXN6) (Figure 6). Ring box protein-1 (RBX1) and Ring finger protein 213 (RNF213) are E3 ubiquitin-ligases that function in protein ubiquitination. The opposing directions of differential expression of the two E3 ubiquitin-ligases before and after the onset of clinical symptoms indicated the nuanced regulation of these proteins. Because E3 ligases are the components of the ubiquitin cascade that confer substrate specificity, knowledge of their specific host and/or viral protein targets might clarify whether the observed E3 ligase expression patterns are more beneficial for the pathogen or the host. The alteration of E3 ligase expression was previously observed in the brain of mice infected with WNV [27], indicating that the expression of these proteins is particularly perturbed after viral infection. The nucleotide excision repair protein RAD23A interacts with the 19S subunit of the 26S proteasome to deliver poly-ubiquitinated proteins to the proteasome for degradation. Li and collaborators showed that in HIV-infected macrophage cells, the HIV-1 viral protein R (Vpr) interacts with the proteasome through RAD23A to promote host protein degradation [84]. The decreased expression of the RAD23A protein in macrophages led to a decrease in HIV replication, indicating the crucial role of RAD23A in viral multiplication. Li et al. suggested that the interaction between Vpr, RAD23A and the 26S proteasome could lead to the degradation of antiviral factors [84]. Finally, two deubiquitinating enzymes (DUBs), ubiquitin carboxyl-terminal hydrolase 13 (USP13) and ubiquitin regulatory X domain-containing protein-6 (UBXN6), were down-regulated and up-regulated, respectively, before and after the onset of clinical symptoms in mice infected with CHIKV. This is the first report of the differential expression of DUBs following an alphavirus infection. The consequences of the differential expression of the DUBs need to be explored to clarify their involvement in CHIKV infection. The large number of proteins from the UPP that are differentially expressed following CHIKV infection highlights the crucial role of this pathway. Recently, the rapid down-regulation of the UPP proteins was observed following CHIKV infection in a hepatic cell line; however, none of the same proteins were detected in the present study [22]. Notably, several molecules targeting E3-ligases, DUBs or the proteasome are candidates for cancer treatment [85], which highlights novel potential therapeutic strategies targeting the UPP in CHIKV infection.
Figure 6. Diagram summarizing the main interconnected pathways and biological processes altered following CHIKV infection in the mouse brain. Proteins found to be differentially expressed in the study are shown in bold; related proteins are shown in italics.
Conclusion
The analysis of differential protein expression during the time course of CHIKV infection indicated the profound down-regulation of proteins in the early stage, followed by a rapid up-regulation after the appearance of clinical symptoms, consistent with the previously reported dramatic cessation of protein expression in CHIKV-infected cells. These analyses revealed that the differentially expressed proteins were involved in several biological processes including i) cell signaling via integrins, which is related to cytoskeletal dynamics, ii) endocytosis and receptor recycling, which are associated with virus circulation and synapse function, iii) the host translational machinery, which reflects the regulation of protein expression, and iv) modulation of the ubiquitin-proteasome pathway. By revealing the protein expression profiles correlated with the onset of clinical symptoms, this study paves the way for further studies on the evaluation of potential disease markers and the development of new therapeutic targets to prevent CHIKV neurological infection.
Supporting Information
Figure S1 2D-DIGE analysis (pH 4-7) of mock- (M) and early (E) CHIKV-infected brain samples. Representative data from a 2D-DIGE experiment using a 10% SDS-polyacrylamide gel with the pH 4-7 range are shown. Proteins from mock- and early-CHIKV-infected brain samples were labeled with Cy3 and Cy5 cyanine dyes, respectively. As determined by the Progenesis SameSpot software, protein spots that were differentially regulated between the two experimental conditions (|FC| ≥ 1.3 and p ≤ 0.05) were submitted to mass spectrometry for identification. The numbers annotated on the gel correspond to master gel numbers of deregulated protein spots. All spots were identified as Mus musculus proteins and are listed in Supplementary Table S4. Red and blue numbers correspond to up- and down-regulated spots, respectively. (TIF) Table S4 Proteins identified from the differential 2-D DIGE (pH 4-7) analysis of mouse brain lysates collected at the early time-point compared to the mock group after CHIKV infection.
(DOC)
Table S5 Dataset of proteins identified by iTRAQ labeling and tandem mass spectrometry as differentially expressed between mock-(M), early-(E) and late paralytic (LP) or late tetanus-like (LT) CHIKV-infected samples, indicating fold-changes and p-values in each comparison, and GO subcellular location and biological function. (DOC) | 2017-05-31T18:55:17.156Z | 2014-03-11T00:00:00.000 | {
"year": 2014,
"sha1": "ad50d73f761b386073f8606f97009bca788aa5d1",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0091397&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad50d73f761b386073f8606f97009bca788aa5d1",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
18058847 | pes2o/s2orc | v3-fos-license | Association between single nucleotide polymorphisms (SNPs) of XRCC2 and XRCC3 homologous recombination repair genes and triple-negative breast cancer in Polish women
XRCC2 and XRCC3 genes involved in homologous recombination repair (HRR) of DNA and in the maintenance of the genome integrity play a crucial role in protecting against mutations that lead to cancer. The aim of the present work was to evaluate associations between the risk of triple-negative breast cancer (TNBC) and polymorphisms in the genes, encoding for two key proteins of HRR: XRCC2 Arg188His (c. 563 G>A; rs3218536, Genbank Accession Number NT 007914) and XRCC3 Thr241Met (c. 722 C>T; rs861539, Genbank Accession Number NT 026437). The polymorphisms of the XRCC2 and XRCC3 were investigated by PCR–RFLP in 70 patients with TNBC and 70 age- and sex-matched non-cancer controls. In the present work, a relationship was identified between XRCC2 Arg188His polymorphism and the incidence of triple-negative breast cancer. The 188His allele and 188His/His homozygous variant increased cancer risk. An association was confirmed between XRCC2 Arg188His and XRCC3 Thr241Met polymorphisms and TNBC progression, assessed by the degree of lymph node metastases and histological grades. In conclusion, XRCC2 Arg188His and XRCC3 Thr241Met polymorphisms may be regarded as predictive factors of triple-negative breast cancer in female population.
Molecular profiling indicated that triple-negative breast cancer represents a heterogeneous subgroup of breast cancer. Triple-negative breast cancer shares histological and genetic abnormalities with the basal-like subtype of breast cancer; however, this overlap is incomplete. Triple-negative breast cancers do not benefit from hormonal therapies or treatments targeted against HER2 [1][2][3][4][5]. Many targeted therapeutic agents show promise in early-stage studies, but their clinical performance has yet to be definitively proven.
Molecular epidemiological studies have provided the evidence that an individual's susceptibility to precancerous lesions and cancer is modulated by both genetic and environmental factors [6,7]. Genomic rearrangements (translocations, deletions, and duplications) are extremely frequent in breast cancer cells [8][9][10][11]. These rearrangements are believed to result from an aberrant repair of DNA double-strand breaks (DSBs).
Double-strand DNA breaks are the most dangerous type of DNA damage; if not repaired, they lead to down-regulation of transcription and the development of various cancers [12,13]. DSBs are repaired by two mechanisms: homologous recombination (HR) and non-homologous end joining (NHEJ) [14,15].
A recent study on the Caucasian population has provided the first epidemiological evidence, supporting the association between DSBs repair gene variants and breast cancer development [16].
Polymorphisms in DNA repair genes may alter the activity of the proteins and thus modulate cancer susceptibility [17].
RAD51 is involved in homologous recombination, in the repair of double-strand DNA breaks and DNA crosslinks, and in the maintenance of chromosome stability. The RAD51 gene is highly polymorphic in nature. In the literature, many reports confirm the significance of the RAD51 G135C polymorphism (c. 98 G>C; rs1801320; Genbank Accession Number NT 010194) with regard to the risk of breast carcinoma [21][22][23][24].
The XRCC2 Arg188His polymorphism (c. 563 G>A; rs3218536, Genbank Accession Number NT 007914) may have a limited effect on gene activity, although it can modify breast cancer risk in female patients with low levels of plasma α-carotene or plasma folate [16,25].
The C722T substitution is the most thoroughly analyzed polymorphism in the XRCC3 gene (c. 722 C>T; rs861539, Genbank Accession Number NT 026437). Although the functional relevance of the XRCC3 Thr241Met variation is unknown, some studies have reported that the 722T/T genotype is associated with increased risk of breast cancer [26][27][28].
In the present study, the association between the Arg188His polymorphism of the XRCC2 gene and the Thr241Met polymorphism of the XRCC3 gene and triple-negative breast cancer in the population of Polish women was investigated.
Patients
In the reported study, paraffin-embedded tumor tissue was collected from 70 women with triple-negative breast carcinoma, treated at the Department of Oncology, Institute of Polish Mother's Memorial Hospital, Lodz, Poland. The age of the patients ranged from 36 to 68 years (mean age 46.2 ± 10.12). No distant metastases were found in any of the patients at the time of treatment onset. The median follow-up of patients at the time of analysis was 38 months (range 2-70 months). The average tumor size was 20 mm (range 17-32 mm). All the tumors were graded by a method based on the criteria of Scarff-Bloom-Richardson. The demographic data and the pathologic features of the patients are summarized in Table 1.
The breast tissue samples (cancerous and non-cancerous) were fixed routinely in formaldehyde, embedded in paraffin, cut into thin slices, and stained with hematoxylin/eosin for pathological examination. DNA for analysis was obtained from archival pathological paraffin-embedded tumor and non-cancerous breast samples, which were deparaffinized in xylene and rehydrated in ethanol and distilled water. In order to ensure that the chosen histological material was representative of cancerous and non-cancerous tissue, every tissue sample qualified for DNA extraction was initially checked by a pathologist. DNA was extracted from the material using the commercially available QIAamp DNA purification kit (Qiagen GmbH, Hilden, Germany) according to the manufacturer's instructions.
Genotyping
Polymorphisms of the XRCC2 and XRCC3 genes were determined by PCR-RFLP (polymerase chain reaction–restriction fragment length polymorphism) using the appropriate primers (Table 2).
Determination of XRCC2 genotype
The 25 µL PCR mixture contained 100 ng of DNA, 12.5 pmol of each primer, 0.2 mmol/l of dNTPs, 2 mmol/l of MgCl2
Results
The genotype frequency of the XRCC2 Arg188His polymorphism in the TNBC patients and controls is summarized in Table 3. It can be seen from the Table that there are significant differences in the frequency of genotypes (p < 0.05) between the two investigated groups. A weak association was observed between triple-negative breast carcinoma occurrence and the presence of at least one 188His allele. A stronger association was observed for the 188His/His homozygous variant than for the 188Arg/His heterozygous variant. In the case of the Arg188His polymorphism of the XRCC2 gene, the distribution of genotypes in the patients differed significantly from that expected from the Hardy-Weinberg equilibrium (p < 0.05).
No statistically significant differences were observed in the genotype frequencies of the XRCC3 Thr241Met polymorphism between the control group and the TNBC patients (see Table 4). Among the patients, none of the genotype distributions differed significantly (p > 0.05) from those expected from the Hardy-Weinberg equilibrium.
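The Hardy-Weinberg equilibrium checks reported in these paragraphs are chi-square goodness-of-fit tests comparing observed genotype counts with those expected from the estimated allele frequencies. The sketch below shows the calculation with invented genotype counts; it does not use the study's data.

```python
# Sketch of a Hardy-Weinberg equilibrium test: compare observed genotype counts with
# those expected from the allele frequencies using a chi-square test with 1 degree
# of freedom. The counts below are invented, not the study's genotype data.
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)                 # frequency of allele A
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_Aa, n_aa)
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(chi_sq, df=1)                 # 3 classes - 1 - 1 estimated parameter
    return chi_sq, p_value

print(hwe_chi2(n_AA=38, n_Aa=26, n_aa=6))
```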
Histological grading was related to the XRCC2 Arg188His and XRCC3 Thr241Met polymorphisms. Histological stages were evaluated in all the cases (n = 70) and were as follows: stage I, 20 cases; stage II, 45 cases; and stage III, 5 cases. Stages II and III were combined for statistical analysis (see Table 5). Some correlation was observed between the XRCC2-Arg188His and XRCC3-Thr241Met polymorphisms and triple-negative breast cancer invasiveness. An increase was observed in the frequency of 188Arg/His heterozygotes (OR 2.45; 95 % CI …); that increase was, however, not statistically significant. Table 6 shows the distribution of genotypes and the frequency of alleles in patients with (N+) and without (N−) lymph node metastases. A tendency for a decreased risk of breast cancer was observed with the occurrence of the 188His/His genotype and 188His allele of XRCC2 and the 241Met/Met genotype and 241Met allele of XRCC3. That decrease was, however, not statistically significant (p > 0.05) (see Table 6). There were no differences in either the distribution of genotypes or the frequency of alleles between groups of patients with different tumor sizes (Table 6).
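The odds ratios with 95% confidence intervals quoted in this section follow the standard 2×2-table calculation, with the interval usually obtained by Woolf's logit method. The counts in the sketch below are illustrative only and do not reproduce any cell of Tables 3-6.

```python
# Sketch of an odds ratio with a 95% confidence interval from a 2x2 table
# (cases/controls x genotype present/absent), using Woolf's logit method.
# The counts are illustrative only and are not taken from the study's tables.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a: cases with the genotype, b: cases without, c: controls with, d: controls without."""
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, (exp(log(or_) - z * se_log_or), exp(log(or_) + z * se_log_or))

print(odds_ratio_ci(a=25, b=45, c=12, d=58))
```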
Discussion
According to our data, this is the first time that polymorphisms in the XRCC2 and XRCC3 genes involved in the DNA repair pathway have been analyzed in a population of Polish women with TNBC. The combined effect of XRCC2 and XRCC3 polymorphisms on TNBC occurrence had not been investigated before. The study was performed on an ethnically homogenous population, which may improve our knowledge regarding the extent to which genotype–phenotype relationship variations are population-related.
The polymorphisms chosen for the study had previously been shown to have functional significance and to be responsible for a low DNA repair capacity phenotype, characteristic of patients with cancer, including those with breast carcinoma [20].
The genes involved in DNA repair and in the maintenance of genome integrity play a crucial role in providing protection against mutations that may lead to cancer [29].
XRCC2 and XRCC3 proteins are structurally and functionally related to RAD51, which plays an important role in the homologous recombination, the process being frequently involved in cancer transformation [30].
The RAD51, XRCC2, and XRCC3 genes are highly polymorphic. A single nucleotide polymorphism, 135G/C, has been identified in the 5′ untranslated region of the RAD51 gene and has been shown to influence gene transcription activity [31]. As mentioned in the Introduction above, reports on the relationship between the RAD51 G135C polymorphism and breast cancer incidence suggest that the RAD51 135C variant allele is associated with an increased risk of female breast cancer [22,23,32,33]. By contrast, Brooks et al. [34] showed that RAD51 gene variants were not associated with breast cancer risk.
Other studies have shown that the RAD51 135C variant allele was associated with an increased risk of female breast cancer [35][36][37].
The 135C/C genotype may be associated with an elevated risk of sporadic breast cancer among European populations [36]. Similar results were obtained in the Polish population [38].
In our earlier study, RAD51 135C allele variant was associated with an elevated risk of triple-negative breast cancer in the Polish women [39].
It is possible that the presence of C allele remains in a linkage disequilibrium with another, so far unknown, mutation located outside the coding region in the RAD51 gene, which may be important, regarding RAD51 concentrations in plasma.
In the present study, the XRCC2 Arg188His genotype was associated with an elevated risk of triple-negative breast cancer in the Polish population. There was a 6.25-fold increased risk of TNBC for individuals carrying the XRCC2-188His/His genotype compared with subjects carrying the XRCC2-188Arg/Arg or 188Arg/His genotype. The XRCC2 Arg188His polymorphism was not related to tumor size, cancer type, or grade.
In the reported study, the Arg188His polymorphism of the XRCC2 gene and the Thr241Met polymorphism of XRCC3 were correlated with breast carcinoma progression. Arg188His and Thr241Met heterozygotes were associated with an increased risk of stage I breast cancer. However, contrasting literature data have also been reported [40][41][42]. No significant associations were observed between Thr241Met and breast cancer in Iowa and Cypriot women [40,43].
In the Polish population, the Thr241Met genotype of the XRCC3 polymorphism increased the risk of breast cancer development [41,42,44].
Similar to our observation, recent reports demonstrate that the XRCC3 Thr241Met allele seems to be associated with an elevated breast cancer risk in non-Chinese subjects [28].
The role of position 188 in the amino acid chain for XRCC2 protein functionality is still unknown. Several data suggest that the XRCC2 Arg188His polymorphism is not directly associated with breast cancer risk [45,46].
In conclusion, the reported study provides further evidence for the significance of the Thr241Met and Arg188His genotypes in breast carcinoma staging.
The obtained data show that the Arg188His and Thr241Met polymorphisms of the XRCC2/3 genes may be associated with the risk of triple-negative breast carcinoma occurrence. On the other hand, a protective effect of all the polymorphisms was observed in patients without lymph node metastases (N−). The obtained data suggest that the reported study may be the first observation of polymorphisms in the XRCC2 and XRCC3 genes, involved in the DNA repair pathway, being associated with triple-negative breast carcinoma risk in the population of Polish women.
Finally, it is postulated that these polymorphisms may be used as predictive factors for TNBC in the Polish female population. Further studies, conducted on a larger group, are suggested to clarify this point. | 2016-05-12T22:15:10.714Z | 2014-04-13T00:00:00.000 | {
"year": 2014,
"sha1": "8a5bc76b64d0378046036f09163753da9a8b8c04",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10238-014-0284-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a5bc76b64d0378046036f09163753da9a8b8c04",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
248619115 | pes2o/s2orc | v3-fos-license | Exploring Performance Management in China's Family SMEs Based on Structural Equation Modelling and Back-Propagation Neural Network
Because of the growing competition and challenges within the global business environment, understanding performance management has become essential to small and medium-sized enterprises (SMEs) as they have traditionally dominated the Chinese economy. In recognition of the limited studies with a specific focus on Chinese family SMEs, this study modelled and tested performance management by analysing four factors and eighteen indicators using structural equation modelling (SEM) and back-propagation neural network (BPNN). Secondary data from the Chinese Stock Market and Accounting Research (CSMAR) database were collected for this study. The results provide a better understanding of the proposed relationships between these variables through a review of their impact and correlations. This study suggested that four factors, including financial performance, external environment, internal environment, and enterprise development potential, will significantly impact performance management.
of performance measures that directly apply to the SME context (Garengo et al., 2005;Bahri et al., 2010). Studies focusing on performance management in Chinese family SMEs are especially scarce.
Most SMEs in China are family-owned and managed enterprises, which historically have inherent organisational advantages because of their business structure (Chen & Hsu, 2009). However, due to changes within the business environment, their structural limitations have become more apparent, especially in relation to enterprise management issues caused by the difficulty in adjusting their existing framework (Chen et al., 2013). Notably, even when the scale of a family enterprise expands, its management model often remains concentrated at an entrepreneurial stage (Fang et al., 2021). In China, the success of family SMEs has received much attention due to their importance to the economy (Yang et al., 2020). However, previous studies have noted that the rate of failure among China's family SMEs is quite high, with 68% failing within their first five years of operations, 19% surviving up to around 6-10 years, and only 13% of these firms having a lifespan that exceeds ten years (Zhu et al., 2012). This study attempts to understand performance management in Chinese family SMEs by testing a sample of firms listed in the Chinese Stock Market & Accounting Research (CSMAR) database using structural equation modelling (SEM) and back-propagation neural network (BPNN).
Performance Management Tools
Today, organizations worldwide implement a range of performance management tools. These include popular methods such as Key Performance Indicators (KPI), 360-degree Feedback Evaluation, the Balanced Scorecard (BSC), and Management by Objectives (MBO). KPI is a method of performance management that ensures the achievement of corporate strategic objectives by extracting the key factors of an enterprise's success and transmitting them to grass-roots units based on goal management (Tairova et al., 2021). BSC evaluates performance by utilizing both financial and non-financial indicators. Unlike traditional methods that focus solely on financial indicator analysis, BSC establishes a system that integrates internal processes, customers, learning, and development while incorporating finance as a component (Darestani & Nillofar, 2019). MBO is conducted as follows: First, the enterprise puts forward specific business objectives to be realised in the future according to the actual operational situation. Next, employees of all departments and ranks determine their personal goals based on the business objectives of the enterprise and their current situation. Finally, practical actions are taken by employees to realize their personal goals so that the enterprise can achieve its overall goals (Zhou & Xiong, 2020). 360-degree feedback evaluation focuses on the objectives of performance behaviours in an enterprise, reviewed against the feedback assessments of stakeholders such as customers, management, and employees (Priya & Renjitha, 2019).
Although widely used, the above methods share a common disadvantage: they are adapted and replicated from Western firms that are structurally different from family SMEs in China. Implementing these methods requires rich experience and systematic theoretical knowledge, while the results are linked only to task-oriented objectives and salaries. They do not align directly with employees' promotion and career development, and therefore they may limit effective management.
Factors Influencing Information Management of Performance in SMEs
Recognising the complexity within enterprises, studies have noted the limitations of relying only on traditional financial indicators in performance management and have highlighted the importance of including non-financial factors such as customers, products and markets, internal governance, and the development potential of an enterprise (Muhammad et al., 2021;Thapa et al., 2020;Wu et al., 2022). For the SEM analysis, the methodological approach taken by Wickramesekera and Oczkowski (2004) was followed, then BPNN analysis was undertaken using a sample of 195 Chinese family SMEs.
Financial Performance of an Enterprise
All business enterprises focus on achieving profitability; thus, including financial-related indices in performance indicators is a fundamental requirement (Omneya et al., 2021; Sun et al., 2020; Gian et al., 2019). However, other relevant indices, derived from the evaluation of an enterprise's internal and external environment and the realization of its crucial success factors, should also be evaluated. If an enterprise's overall performance improves, yet it cannot significantly improve its financial performance, this implies that managers should reconsider their strategies and implementation plans. Based on this understanding, we propose the following hypotheses:
H1: An enterprise's growth ability will positively affect performance management (O'sullivan & Abela, 2007).
H1a: Profitability will positively affect performance management (Beard & Dess, 1981).
H1b: Solvency will positively affect performance management (Taffler, 1983).
H1c: Cash flow will positively affect performance management (Afrifa & Tingbani, 2018).
H1d: Asset management will positively affect performance management (Wang & Wang, 2012).
H1e: Asset quality status will positively affect performance management (Said, 2018).
External Environment of an Enterprise
An enterprise's customers are essential to its long-term success and sustainability in its markets. Customers are concerned with quality, time, service, and cost when purchasing and using an enterprise's products (Hirons et al., 1998; Rolstadås, 1998). Therefore, an enterprise needs to clarify its goals in terms of quality, time, service, and cost in the appraisal process and translate them into specific measurement indicators (Dragnić, 2014; Hanggraeni et al., 2019). Based on these considerations, the following hypotheses are proposed:
H2: Customer satisfaction will positively affect performance management (Blessing & Natter, 2019).
H2a: Market situation will positively affect performance management (Erdem, 2020).
H2b: Enterprise product control will positively affect performance management (Estrin & Rosevear, 1999).
Internal Governance of an Enterprise
An enterprise's internal governance guides its procedures, decisions, and behaviours within its operations. Therefore, performance evaluation of internal activities should consider factors that may affect operational issues such as cycle time, quality, employee skills, and productivity (Huang et al., 2015). In addition, managers should cultivate continuous improvement in corporate governance as part of the enterprise's core capabilities, foster project management procedures, and enhance safety standards (Ehler, 2003; Eldenburg et al., 2004; Schultz et al., 2010). Thus, the following hypotheses are proposed:
H3: Internal satisfaction of an enterprise will positively affect performance management (Vu & Nwachukwu, 2021).
H3a: The internal governance structure will positively affect performance management (Teece, 1985).
H3b: Management rules on stakeholder engagement will positively affect performance management (Ciemleja & Lace, 2011).
H3c: Corporate governance structure will positively affect performance management (Said, 2018).
Development Potential
An enterprise's potential for development impacts its sustainability and success, and as such, emphasis should be placed on its long-term operations, future investment, employee quality improvement, risk management, and technological innovation and learning ability (Ееhа, 2015; Wang et al., 2020). Specific objectives should be put in place to enhance staff quality, establish a good information system, and improve abilities to encourage learning and growth. The following hypotheses are proposed:
H4: Industry development status will positively affect performance management (Ofori, 2000).
H4a: An enterprise's core technology and innovation ability will positively affect performance management (Leung & Sharma, 2021).
H4b: Risk factors will positively affect performance management (Jia & Bradbury, 2021).
H4c: Quality of staff will positively affect performance management (Leung & Sharma, 2021).
H4d: Market growth potential will positively affect performance management (Rialp-Criado & Rialp-Criado, 2018).
Structural Equation Modelling (SEM)
SEM is a widely-used statistical modelling technique for testing and analysing complex multivariate research data (Thomas & Hayes, 2021;Yu et al., 2021;Liu et al., 2021). SEM methodology incorporates a measurement model that examines the relationship between latent variables and their measures and a structural model that allows the testing of hypothetical dependencies in a model (Hoyle, 1995).
The Measurement Model
In SEM, the measurement model examines the relationship between latent variables and their measures, and its specific form is shown in equations (1) and (2) below (Sanita et al., 2021):

x = Λ_x ξ + δ  (1)
y = Λ_y η + ε  (2)

In these equations, x is a vector of exogenous indicators, y is a vector of endogenous indicators, Λ_x is the factor loading matrix of the exogenous indicators on the exogenous latent variables, Λ_y is the factor loading matrix of the endogenous indicators on the endogenous latent variables, and δ and ε are the error terms.
The Structural Model
The structural model allows the testing of relationships between latent variables, and its specific form is shown in equation (3) below (Shipley & Douma, 2021):

η = Bη + Γξ + ζ  (3)

In this equation, η denotes the endogenous latent variables, ξ represents the exogenous latent variables, B refers to the relationships between the endogenous latent variables, Γ indicates the influence of the exogenous latent variables on the endogenous latent variables, and ζ is the residual term that accounts for the unexplained part of η.
According to SEM rules, the above model needs to meet the following four conditions: first, there is no relationship between ε and η; second, there is no relationship between δ and ξ; third, there is no relationship between ζ and ξ; and fourth, there is no relationship among ζ, ε, and δ.
From these equations, coupled with the proposed models, each parameter can be calculated in the SEM through an iterative solution process.
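As a concrete illustration of how the measurement and structural equations fit together, the following sketch simulates indicator data from assumed parameter matrices in Python with NumPy. All loadings and path coefficients here are made-up placeholder values, not the estimates reported later in this study; the point is only that, given B and Γ, the endogenous latent variables can be obtained by solving (I − B)η = Γξ + ζ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters: 1 exogenous latent variable (external environment),
# 3 endogenous latent variables, 3 x-indicators and 9 y-indicators.
Lambda_x = np.array([[0.8], [0.7], [0.9]])        # loadings of x on xi
Lambda_y = rng.uniform(0.6, 0.9, size=(9, 3))     # loadings of y on eta
B = np.array([[0.0, 0.2, 0.0],                    # paths among endogenous latents
              [0.0, 0.0, 0.0],
              [0.3, 0.1, 0.0]])
Gamma = np.array([[0.5], [0.4], [0.6]])           # paths from xi to eta

xi = rng.normal(size=1)
zeta = rng.normal(scale=0.3, size=3)

# Structural model (3): eta = B eta + Gamma xi + zeta  =>  (I - B) eta = Gamma xi + zeta
eta = np.linalg.solve(np.eye(3) - B, Gamma @ xi + zeta)

# Measurement models (1) and (2): x = Lambda_x xi + delta, y = Lambda_y eta + epsilon
x = Lambda_x @ xi + rng.normal(scale=0.2, size=3)
y = Lambda_y @ eta + rng.normal(scale=0.2, size=9)
print(eta.round(3), x.shape, y.shape)
```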
Utilizing SEM as a research method has numerous advantages (Abe et al., 2021). First, it can account for measurement errors in both independent and dependent variables to avoid biased results. Next, the method allows multiple dependent variables to be processed at once while simultaneously estimating the structure and the inter-factor relationships. Finally, SEM allows a more flexible measurement model, the ability to evaluate the complete model's goodness of fit, and a path diagram that can reveal the complex correlations between the variables in the study.
Back-Propagation Neural Network (BPNN) Model
Artificial neural networks (ANNs) originated in the middle of the 20th century. Today, ANN systems are recognised for their abilities in self-learning, distributed information storage, and parallel processing. Hence, ANN tools have frequently been used in information processing and pattern recognition, and they have also performed well in intelligent modelling and intelligent control. ANNs can be applied through a multi-layer perceptron using a back-propagation algorithm as an analytical tool. A BPNN is a network composed of several layers of neurons, consisting of input layer nodes, output layer nodes, and one or more hidden layers of nodes. It can approximate any continuous function with arbitrary precision, so BPNN has wide application in nonlinear modelling, function approximation, and pattern classification. A typical BPNN model with input, output, and hidden layers is shown in Figure 1. Structurally, the three-layer BPNN includes an input layer, a hidden layer, and an output layer. There are no connections between nodes in the same layer, and signals propagate from one layer forward to the next. The number of nodes in the input layer corresponds to the number of inputs the BP network can sense, and the number of nodes in the output layer corresponds to the number of outputs. The number of nodes in the hidden layer should be adjusted according to the actual situation: more hidden layer nodes lead to higher accuracy, but also to a more time-consuming training process. The specific function between each layer is shown in the following equations (Moldovanu et al., 2021). The output function of the nodes in the hidden layer reads:

b_r = f(Σ_i W_ir a_i − T_r)  (4)

The output function of the nodes in the output layer reads:

c_j = f(Σ_r V_rj b_r − θ_j)  (5)

Here a_i is an input layer node, and W_ir indicates the connection weight from a_i to the hidden layer node b_r.
V_rj is the connection weight between the hidden layer node b_r and the output layer node c_j. T_r and θ_j are the thresholds of the hidden layer node and the output layer node, respectively. f(·) is the transfer function, for which the S-type (sigmoid) function is usually selected, as shown in equation (6):

f(x) = 1 / (1 + e^(−x))  (6)

Its maximum is close to 1 and its minimum is close to 0, and 0.5 is normally selected as the threshold. In performance management research, a completely correlated variable corresponds to 1 and an uncorrelated variable to 0. If the value obtained by the calculation is higher than 0.5, the degree of correlation of the variable is high; if it is lower than 0.5, the correlation between variables is low.
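A minimal NumPy sketch of the forward pass defined by equations (4)-(6) is given below. The weights, thresholds, and the 14-indicator input are randomly generated placeholders, not values from this study; the sketch only demonstrates how a hidden-layer vector b and a single output c are computed with the sigmoid transfer function.

```python
import numpy as np

def sigmoid(x):
    # S-type transfer function of equation (6)
    return 1.0 / (1.0 + np.exp(-x))

def forward(a, W, T, V, theta):
    """Forward pass of a three-layer BPNN.
    a: inputs; W: input-to-hidden weights; T: hidden thresholds;
    V: hidden-to-output weights; theta: output thresholds."""
    b = sigmoid(W.T @ a - T)       # hidden-layer outputs, equation (4)
    c = sigmoid(V.T @ b - theta)   # output-layer outputs, equation (5)
    return b, c

rng = np.random.default_rng(0)
a = rng.random(14)                                     # 14 input indicators (placeholder values)
W, T = rng.normal(size=(14, 11)), rng.normal(size=11)  # 11 hidden nodes, arbitrary here
V, theta = rng.normal(size=(11, 1)), rng.normal(size=1)
print(forward(a, W, T, V, theta)[1])
```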
The specific learning process of the BP network is as follows: (1) Randomly assign small initial values to W_ir, T_r, V_rj, and θ_j.
(2) For each training pattern, perform the following operations: allocate the output error back to the hidden layer nodes, and adjust the connection weights W_ir between the input layer and the hidden layer nodes as well as the hidden layer thresholds T_r. This step is repeated, for j = 1, 2, ..., n and k = 1, 2, ..., p, until the error E_AV, which is the target function of training, becomes small enough. After multiple training iterations, the error E_AV meets the accuracy required for the specific problem. The learning process of the BPNN algorithm can be summarised into the following two phases (Huang et al., 2021).
(1) Forward propagation of the working signal: the input signal is transmitted from the input layer through the hidden units to the output layer, and the output signal is generated at the output end; this is the forward propagation of the working signal. During the forward transmission of the signal, the weights of the network are fixed, and the state of the neurons in each layer only influences the state of the neurons in the next layer. If the desired output cannot be obtained at the output layer, the process switches to back-propagation of the error signal.
(2) Back-propagation of the error signal: the error signal is the difference between the actual output of the network and the expected output. It propagates backward, layer by layer, from the output end; this is the back-propagation of the error signal. In this process, the error feedback adjusts the network's weights, and repeated correction of the weights brings the network's actual output closer to the expected output.
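The two phases can be written compactly as a gradient-descent loop. The sketch below is a simplified batch implementation with sigmoid activations, without node thresholds and without the Trainrp variant used later in this study; the learning rate, iteration count, and error target are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bpnn(X, Y, n_hidden=11, lr=0.5, max_iter=500, target_error=1e-4, seed=0):
    """Minimal batch back-propagation for one hidden layer (thresholds omitted)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))   # input -> hidden weights
    V = rng.normal(scale=0.1, size=(n_hidden, Y.shape[1]))   # hidden -> output weights
    for _ in range(max_iter):
        # Phase 1: forward propagation of the working signal
        B = sigmoid(X @ W)
        C = sigmoid(B @ V)
        E = Y - C
        if 0.5 * np.mean(np.sum(E ** 2, axis=1)) < target_error:
            break
        # Phase 2: back-propagation of the error signal and weight correction
        delta_out = E * C * (1 - C)
        delta_hid = (delta_out @ V.T) * B * (1 - B)
        V += lr * B.T @ delta_out / len(X)
        W += lr * X.T @ delta_hid / len(X)
    return W, V

# Toy usage with random data shaped like this study's inputs (195 firms, 14 indicators):
rng = np.random.default_rng(1)
W, V = train_bpnn(rng.random((195, 14)), rng.random((195, 1)))
```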
Data Source and Sample
This study screened secondary data on Chinese family SMEs listed in the Chinese Stock Market and Accounting Research (CSMAR) database. The CSMAR database is valued as a reliable and reputable data source that consolidates a comprehensive collection of statistics on Chinese listed companies, and it has been extensively used by researchers studying Chinese businesses (Krause et al., 2019; Li et al., 2019). In China, listed family SMEs are concentrated on the Chinese SME Board, and these firms are required to disclose detailed and comprehensive operations information (Zhou et al., 2015). Therefore, family firms listed on the SME Board were sampled for this research, yielding a sample of 195 firms. This sample size meets the sample size requirement for using SEM (Hair et al., 2018).
Construction of the Structural Equation and Path Diagram
As noted earlier, this study models performance management in Chinese family SMEs by focusing on four factors: financial performance, external environment, internal governance, and enterprise development potential. Drawing on established research, hypotheses are proposed based on the perceived relationships between the variables, and the model was tested (Don et al., 2020). We propose that the external environment of an enterprise is an exogenous latent variable, while financial performance, internal governance, and the development potential of the enterprise are endogenous latent variables. Furthermore, each endogenous latent variable is influenced by its corresponding observed variables. Based on these proposed relationships, the path diagram of this study is detailed in Figure 2.
The specific meaning of each parameter in the structural equation is as follows: η: 3×1 vector of endogenous latent variables; ξ: exogenous latent variable; ζ: residual of the structural equation; B: relationship coefficients between the endogenous latent variables; Γ: influence of the exogenous latent variable on the endogenous latent variables.
BPNN Structural Design
The BPNN structural design used the following five procedures.
1) Determination of the number of neurons in the input layer: The input layer receives the input data, so the number of nodes in the input layer of the BPNN model equals the number of input variables, which depends on the number of indicators. After filtering through SEM, the number of nodes in the input layer of the BPNN is determined.
2) Determination of the number of neurons in the output layer: The selection of output nodes corresponds to the evaluation results, so it is necessary first to determine the expected output (Yang et al., 2020). The expected output of the research object is the overall evaluation of enterprise information management of performance appraisal, so the number of neurons in the output layer is set to 1. The evaluation set has five levels: good, better, common, worse, and worst. With the lowest and highest scores set to 0 and 1, results can be evaluated by the following principles: 1 > output result ≥ 0.9, high correlation; 0.9 > output result ≥ 0.7, higher correlation; 0.7 > output result ≥ 0.5, common correlation; 0.5 > output result ≥ 0.3, lower correlation; output result < 0.3, low correlation.
3) Determination of the number of neurons in the hidden layer: The number of neurons in the hidden layer of the BPNN is related to the number of neurons in the input and output layers, as well as to factors such as the complexity of the problem, the form of the transfer function, and the characteristics of the sample data. With too few neurons in the hidden layer, the network may not reach the required accuracy or may not be properly trained. Although including more neurons can reduce system error, the training process may be prolonged, and the training may easily fall into a local minimum without reaching the global optimum; this risk is also a potential internal cause of over-fitting. To avoid over-fitting during training and ensure sufficiently high network performance, the primary principle for determining the number of neurons in the hidden layer is to make the structure as compact as possible while meeting the accuracy requirements.
4) Setting training parameters: In the BPNN model, setting the error target to 0.0001 can enhance result accuracy. Generally, the BPNN model can achieve satisfactory results with 500 iterations of training and learning (Wen et al., 2020).
5) Designing the transfer function: The transfer function is very important to the performance of the BPNN.
In academia, sigmoid-type functions are generally used. The transfer function from the input layer to the hidden layer is the logsig-type transfer function, while the transfer function from the hidden layer to the output layer is the tansig function (Wu, 2020).
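For completeness, the five-level evaluation scale described in step 2 can be expressed as a simple mapping from the network's single output to a level label. The pairing of the labels (good to worst) with the correlation bands is an assumption made here for illustration; the study defines the bands but does not provide such a function.

```python
def evaluation_level(output):
    """Map a BPNN output in [0, 1] to the five-level evaluation scale."""
    if output >= 0.9:
        return "good (high correlation)"
    if output >= 0.7:
        return "better (higher correlation)"
    if output >= 0.5:
        return "common (common correlation)"
    if output >= 0.3:
        return "worse (lower correlation)"
    return "worst (low correlation)"

print(evaluation_level(0.92))  # -> "good (high correlation)"
```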
Reliability Analysis and Fit Indices
For reliability analysis, the statistical software SPSS was used to evaluate the internal consistency coefficient and to measure the reliability of the sample data. The internal consistency coefficients α of the four research factors, namely financial performance, internal environment, external environment, and enterprise development potential, were calculated, and the results are shown in Figure 3. Figure 3 shows that the coefficient values are above 0.7, in line with accepted statistical guidelines requiring the value to exceed 0.7 (Hair et al., 2018). Therefore, it is appropriate for this study to use these four predictor variables to predict a firm's overall performance.
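For reference, the internal consistency coefficient reported here is Cronbach's α, which can be computed directly from the indicator scores; the sketch below uses randomly generated toy scores rather than the CSMAR data analysed with SPSS in this study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of indicator scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(size=(195, 1))                       # one common factor
scores = latent + rng.normal(scale=0.6, size=(195, 5))   # five toy indicators
print(round(cronbach_alpha(scores), 3))                  # well above 0.7 for these toy data
```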
Next, the AMOS fit indices were reviewed to assess model fit. The normed chi-square (χ2/df) is 3.211. The Goodness of Fit Index (GFI) is 0.932, the Normed Fit Index (NFI) is 0.981, the Comparative Fit Index (CFI) is 0.976, and the Root Mean Square Error of Approximation (RMSEA) is 0.021. These values are within the recommended guidelines for acceptable to good fit (Hair et al., 2018).
SEM Analysis Results
The proposed model was tested using the Analysis of Moment Structures (AMOS) software. AMOS is commonly used in SEM-based research studies and is a suitable tool for covariance structure analysis and the analysis of complex multivariate data. Figure 4 presents the analysis results. From this analysis, the relationships between the four latent variables and performance management can be explained as follows:
(1) The impact of financial performance on performance management is significantly positive, with an impact coefficient of 0.86; financial performance plays a crucial role in performance management.
(2) The impact of the internal environment on performance management is positive, but the impact coefficient is weak at 0.32, which suggests a small impact of the internal environment on performance management.
(3) The impact of the development potential on performance management is also positive but limited, with a coefficient of 0.38, suggesting that the development potential has little impact on performance management.
(4) The impact of the external environment on performance management is significantly positive and moderate in size, with an impact coefficient of 0.69.
The results of the hypothesis validation analysis for the model's eighteen indicators are detailed in Table 1. Based on this analysis, the indicators enterprise internal governance structure (X8), corporate external governance structure (X9), corporate governance structure (X10), and risk factors (X13) have the weakest correlation with performance management. To further examine the importance of the correlated indicators, the 14 indicators rated '++' and '+++' were selected as input items for the BPNN simulation.
BPNN Simulation Results
After the 14 nodes in the input layer were determined, the range of the number of nodes in the hidden layer of the BPNN was set to 5-20. Using MATLAB software, the network was programmed before the sample data were input. After learning and training, the failure rate and training time were calculated. Next, the number of nodes in the hidden layer was adjusted, with the final number determined to be 11. Training times and results are shown in Figure 5. After 500 iterations of learning and training with the sample data using the Trainrp function, the result met the requirement of the error target of 0.0001 (Figure 5). The final result was obtained after the specific output was compared with the expected value, as shown in Figure 6. From the above simulation analysis, seven of the fourteen indicators have the greatest impact on performance management in China's family SMEs: profitability, solvency, cash flow, enterprise core technology and innovation capabilities, quality of staff, customer satisfaction, and market conditions. The other seven indicators have a low-to-moderate impact on performance management in China's family SMEs: enterprise growth ability, asset management indicators, asset quality status, internal satisfaction of the enterprise, industry development status, market growth potential, and enterprise product control. The BPNN simulation supports the SEM analysis with similar results and provides an effective, confirming robustness check on the analysis.
Contributions, Limitations, and Conclusion
This research addresses the current gap in studies that emphasize Chinese family SMEs. Using SEM and BPNN, performance management in Chinese family SMEs was analysed by testing four factors and eighteen related indicators. The generated results provide a better understanding of the proposed and hypothesized relationships between the variables in this study, highlighting the importance of profitability, solvency, cash flow, enterprise core technology and innovation ability, quality of staff, customer satisfaction, and market conditions for performance management. Thus, this study enriches the SME performance literature. Furthermore, as a practical contribution, this study has the potential to provide suggestions to listed Chinese family SMEs regarding which kinds of performance indicators can be emphasized to improve performance management. For example, the results show that financial performance plays a key role in the performance management process.
The limitations of this study should be noted. First, the focus on China's SMEs may restrict this study as the findings may not be generalisable to foreign firms outside China and non-SMEs. Next, in this exploratory study, we had a limited sample size (N=195), preventing a test and retest of the proposed model. This was partly overcome by using BPNN, but a test and retest should be conducted in future studies. Also, this research analyzed only secondary data obtained through the CSMAR database; only indicators with recorded information on the database were considered. Subsequent follow-up studies should use a range of primary data to analyze the variables and their proposed relationships. | 2022-05-10T15:49:19.230Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "8a1d3a7308abca3368f7949926ce289055bc5fb0",
"oa_license": null,
"oa_url": "https://www.igi-global.com/ViewTitle.aspx?TitleId=301613&isxn=9781668464434",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0873ebc3e7590d56a14d548c8de12ba00bd202b9",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
16489898 | pes2o/s2orc | v3-fos-license | Targeting intracellular p-aminobenzoic acid production potentiates the anti-tubercular action of antifolates
The ability to revitalize and re-purpose existing drugs offers a powerful approach for novel treatment options against Mycobacterium tuberculosis and other infectious agents. Antifolates are an underutilized drug class in tuberculosis (TB) therapy, capable of disrupting the biosynthesis of tetrahydrofolate, an essential cellular cofactor. Based on the observation that exogenously supplied p-aminobenzoic acid (PABA) can antagonize the action of antifolates that interact with dihydropteroate synthase (DHPS), such as sulfonamides and p-aminosalicylic acid (PAS), we hypothesized that bacterial PABA biosynthesis contributes to intrinsic antifolate resistance. Herein, we demonstrate that disruption of PABA biosynthesis potentiates the anti-tubercular action of DHPS inhibitors and PAS by up to 1000 fold. Disruption of PABA biosynthesis is also demonstrated to lead to loss of viability over time. Further, we demonstrate that this strategy restores the wild type level of PAS susceptibility in a previously characterized PAS resistant strain of M. tuberculosis. Finally, we demonstrate selective inhibition of PABA biosynthesis in M. tuberculosis using the small molecule MAC173979. This study reveals that the M. tuberculosis PABA biosynthetic pathway is responsible for intrinsic resistance to various antifolates and this pathway is a chemically vulnerable target whose disruption could potentiate the tuberculocidal activity of an underutilized class of antimicrobial agents.
M. tuberculosis DHPS 10,14 . Evidence for the potentiation of sulfonamides has been demonstrated in the related organism, Mycobacterium smegmatis 15 . Thus, it is possible that more potent activity of antifolates can be revealed in M. tuberculosis through overcoming intrinsic resistance mechanisms.
DHPS inhibitors and PAS directly compete with PABA for binding to DHPS, and consequently supplementation with PABA antagonizes the activity of these compounds in both whole cell and enzymatic assays 10,16 . Since PABA is an essential precursor for tetrahydrofolate biosynthesis, M. tuberculosis must maintain a basal level of intracellular PABA, as was confirmed by a recent metabolomic analysis of M. tuberculosis 17 . Importantly, these studies demonstrated a dose dependent increase in intracellular PABA in response to treatment with DHPS inhibitors and PAS 17 . However, the effect of intracellular PABA on the activity of these antimicrobial agents remains unexplored. We reasoned that since internal PABA levels are significantly increased following antifolate treatment, PABA synthesis might play a crucial role in intrinsic resistance to these drugs. Furthermore, we hypothesized that impairing synthesis of PABA would abolish this intrinsic drug resistance and lead to potentiation of antifolate action against M. tuberculosis.
Figure 1 (partial caption): (b,c) The growth of indicated strains was assessed in PABA-free supplemented 7H9 medium. Growth was determined by OD 600 measurements taken every 2 days over a 10 day period. Growth curves were conducted in biological triplicate. Error bars denote standard deviation. (d,e) Exogenous PABA supplementation restores growth. The growth of indicated strains was assessed in PABA-free supplemented 7H9 medium amended with 10 μg/ml PABA. Growth was determined by OD 600 measurements taken every 2 days over a 10 day period. Growth curves were performed in biological triplicate. Error bars denote standard deviation.
Results
Disruption of PABA biosynthesis potentiates anti-tubercular antifolate action. To begin assessing the effects of intracellular PABA concentration on potency of DHPS inhibitors, M. tuberculosis transposon mutant strains deficient for PABA production were isolated. By screening a library of 5,000 independent transposon insertion mutants, we identified two strains that were unable to grow on PABA-free agar medium. These PABA auxotrophic strains were found to harbor transposon insertions in pabC, encoding the 4-amino-4-deoxychorismate lyase responsible for catalyzing the final step in PABA biosynthesis (Fig. 1a). Growth in PABA-free medium was assessed and a growth defect was observed in the pabC::Tn disruption strain relative to wild type (Fig. 1b). This growth defect was alleviated by expression of a wild type copy of pabC from an integrative mycobacterial vector (Fig. 1b) or with the addition of exogenous PABA to the growth medium (Fig. 1d). The minimum concentration of PABA necessary to restore wild type growth in the pabC::Tn strain was determined to be 5 pg/ml. While disruption of pabC caused auxotrophy on plates, the observed bradytrophic phenotype in liquid culture suggests a low level of sustained PABA production.
Next we examined the susceptibility of the pabC::Tn strain to the DHPS inhibitors sulfamethoxazole, sulfathiazole and dapsone ( Table 1). The pabC::Tn strain showed a dramatic increase in susceptibility to DHPS inhibitors, ranging from 8 to greater than 500 fold, compared to susceptibility of the wild type strain. Importantly, these drugs also showed greatly enhanced bactericidal activity at biologically relevant concentrations against the pabC::Tn strain while remaining bacteriostatic against the wild type strain. Susceptibility to isoniazid was evaluated to determine whether enhanced drug susceptibility of this mutant strain was a general phenomenon or specific to anti-folate drugs. As anticipated, the isoniazid MIC 90 was indistinguishable for the wild type and pabC::Tn strains, indicating that enhanced drug susceptibility associated with disruption of PABA biosynthesis was specific for anti-folates. These data indicate that inhibition of PABA biosynthesis can sensitize M. tuberculosis to otherwise less effective anti-folate drugs.
We sought to examine whether this potentiation effect could be applied to p-aminosalicylic acid (PAS), a second line drug commonly used to treat MDR TB 1 . PAS is also a structural analog of PABA and competes with PABA for activation by the concerted action of DHPS and dihydrofolate synthase. Once incorporated, PAS is able to poison folate metabolism likely through inhibition of dihydrofolate reductase ( Fig. 1a) 17,18 . Similar to DHPS inhibitors, PAS anti-tubercular action can be antagonized by exogenously supplied PABA 13,16 . Thus, we reasoned that intracellular PABA levels could also provide intrinsic resistance to PAS. The PAS minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) for the pabC::Tn strain were both found to be reduced by greater than 1000-fold, relative to those for the wild type strain (Table 1).
PabB is a novel, bactericidal target in M. tuberculosis. The bradytrophic phenotype of the M. tuberculosis pabC::Tn strain suggests that PabC would not be an ideal stand-alone target; thus, we sought to determine whether disruption of alternative steps in PABA biosynthesis would result in a more severe defect. PABA production in M. tuberculosis occurs in a two-step pathway from chorismate. The pabB gene encodes a putative 4-amino-4-deoxychorismate synthase that catalyzes the amination of chorismate to afford 4-amino-4-deoxychorismate (ADC) while PabC catalyzes the subsequent conversion of ADC to PABA (Fig. 1a). Because the conversion of ADC to PABA could occur spontaneously through a concerted pericyclic pathway 19 , but the transformation of chorismate to ADC cannot occur spontaneously, we hypothesized that disruption of PabB would generate a stronger auxotrophic phenotype. We used the specialized transduction approach 20 to construct an M. tuberculosis pabB::hyg strain by allelic exchange. The pabB::hyg strain was found to be auxotrophic for PABA on both solid agar and in broth culture (Fig. 1c). Unabated growth of this strain was restored by expression of pabB from a mycobacterial expression vector (Fig. 1c) as well as by supplementation of the growth medium with exogenous PABA (Fig. 1e). The minimum concentration of PABA required to restore wild type growth kinetics was determined to be 1 ng/ml. To determine whether the PABA auxotrophy resulted in bacteriostatic or bactericidal effects, we evaluated survival of the pabB::hyg strain in PABA-free broth culture. Culture viability, as assessed by determining colony forming units (CFU) per ml, decreased over time in PABA-free medium (Fig. 2a). These data suggest PABA synthesis as a potential target for discovery of bactericidal anti-tubercular agents. We then tested susceptibility of the pabB::hyg strain to PAS treatment. Since the pabB::hyg strain is auxotrophic for growth in broth culture, we performed PAS MICs in medium containing various concentrations of PABA. We found the PAS MIC to be dependent on the concentration of exogenous PABA (Fig. 2b). Similar to the pabC::Tn strain, deletion of pabB greatly potentiated PAS anti-tubercular action in limiting PABA conditions. Again, isoniazid was included as a control. In line with the observations for the pabC::Tn strain, the level of isoniazid susceptibility between the wild type and pabB::hyg did not change (Fig. S1), supporting the idea that disruption of PABA biosynthesis specifically enhances anti-folate susceptibility in M. tuberculosis. Collectively, these data indicate that intracellular PABA levels play a role in intrinsic resistance to PAS, confirm PabB as a bactericidal target, and demonstrate synergy of PAS with deletion of pabB.
Disruption of PABA synthesis restores PAS susceptibility in a resistant strain. As disruption of PABA synthesis drastically improves PAS efficacy, we reasoned that this strategy could restore susceptibility to PAS in a resistant mutant strain. Mutations in folC, encoding dihydrofolate synthase, confer PAS resistance in both laboratory and clinical isolates 18,21,22 . We deleted pabB in a previously described PAS resistant folC mutant containing a glutamate to alanine substitution at position 153 (folC E153A ) 22 . The PAS MIC was determined using various concentrations of exogenous PABA. The M. tuberculosis folC E153A strain was resistant to PAS at concentrations up to 20 μ g/ml (Fig. 2c). However, disruption of pabB lowered the MIC to < 0.156 μ g/ml, a reduction greater than 2-fold below the PAS susceptible parental strain MIC during PABA limitation (Fig. 2c). These results demonstrate that targeting intrinsic resistance pathways can restore PAS susceptibility in resistant strains.
Chemical targeting of PABA biosynthesis.
Since not all biosynthetic pathways can be targeted by small molecules, we sought a chemical inhibitor of PABA biosynthesis to recapitulate our genetic findings. To date, no specific chemical inhibitor for M. tuberculosis PABA biosynthesis has been described. However, a recent high-throughput screen identified a compound, MAC173979, capable of inhibiting E. coli de novo PABA biosynthesis and growth 23 . We synthesized MAC173979 and tested its antimicrobial activity against M. tuberculosis. We found that treatment of M. tuberculosis with MAC173979 phenocopied deletion of pabB, inhibited growth with an MIC of 75 ng/ml, and could be antagonized by supplementation with exogenous PABA (Fig. 3). Checkerboard assays were performed with MAC173979 and PAS, and the fractional inhibitory concentration index (FICI) was calculated to assess the drug interaction between these compounds (Table 2). In these assays, the FICI ranged from 0.5 to 0.87, indicating a mild synergistic effect.
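The FICI arithmetic behind Table 2 is simple: for a checkerboard well that inhibits growth, the FICI is the sum of each drug's concentration in that well divided by its MIC alone, with values at or below 0.5 conventionally read as synergy. The sketch below uses the reported MAC173979 MIC of 75 ng/ml but otherwise hypothetical concentrations; it does not reproduce the measured values in Table 2.

```python
def fici(conc_a_combo, mic_a_alone, conc_b_combo, mic_b_alone):
    """Fractional inhibitory concentration index for one inhibitory checkerboard well."""
    return conc_a_combo / mic_a_alone + conc_b_combo / mic_b_alone

# Hypothetical example: PAS MIC alone 300 ng/ml, MAC173979 MIC alone 75 ng/ml,
# with growth inhibited at 75 ng/ml PAS + 18.75 ng/ml MAC173979 in combination.
print(fici(75, 300, 18.75, 75))  # -> 0.5, at the conventional synergy cutoff
```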
Discussion
Repurposing existing drugs offers an opportunity to fill the gap in the drug discovery pipeline 24 . The ability to improve drug efficacy through co-targeting strategies broadens the therapeutic tools available to treat infectious diseases. One well-known example of this augmentation is the combined use of β-lactam antibiotics with clavulanic acid in order to neutralize β-lactamases 25 . However, enzymes capable of inactivating drugs are only one factor contributing to intrinsic drug resistance in bacteria. Other factors including membrane penetration, efflux mechanisms, and endogenous native substrate competition contribute to intrinsic drug resistance [26][27][28] . As antimicrobial resistance is a major concern in M. tuberculosis infections, we examined the impact that co-targeting intrinsic resistance mechanisms has on re-sensitizing bacteria to drug treatment. Mutations in folC are one of the most common mechanisms of resistance to PAS found in clinical isolates 22 . We observed that susceptibility to PAS could be restored to below wild type levels by genetically disrupting PABA biosynthesis in a folC mutant strain, indicating that it is possible to circumvent previously developed resistance.
While many essential pathways might look like ideal drug targets, not all enzymes are suitable candidates for small molecule inhibitors 24,29,30 . For proof of concept, we synthesized the small molecule inhibitor MAC173979 to evaluate the PABA biosynthetic pathway as a cellular target in M. tuberculosis. We tested MAC173979 efficacy in the presence and absence of PABA to ascertain whether it was specifically targeting PABA biosynthesis. In line with observations made using E. coli, MAC173979 inhibited M. tuberculosis growth in vitro at nanomolar concentrations and targets PABA biosynthesis, as exogenous PABA antagonized its action by 4- to 8-fold. Further, MAC173979 demonstrated mild synergy in combination treatment with PAS. Based on the genetic data, a stronger interaction might be expected between these drugs; however, it is not yet clear if there are additional undefined cellular responses to MAC173979 that may impact the anti-tubercular effects of PAS. Difficulties with MAC173979 highlight a need to develop more potent PABA biosynthesis inhibitors for further pursuit of potential antifolate co-treatment strategies.
Taken together, these data suggest that co-targeting intracellular competitor production opens new avenues for therapeutic interventions that take advantage of well-established antimicrobials. Targeting intrinsic resistance not only improves antimicrobial action, but offers a route to circumvent and prevent emerging drug resistance. Here we demonstrate the potential to improve the antitubercular potency of multiple antifolates by several orders of magnitude in both drug susceptible and drug resistant M. tuberculosis. Furthermore, as DHPS inhibitors are used to treat a wide range of infections, co-targeting PABA biosynthesis could be a viable option for improving treatment of many other types of infections. Developing novel therapeutic approaches to treat M. tuberculosis infection is paramount to combat the increasing prevalence of drug resistance.
Screening for transposon insertion mutants.
Transposon mutagenesis was performed on M. tuberculosis strain mc 2 7000 as previously described 31 . Briefly, transposon mutants were isolated on supplemented 7H10 agar medium containing 10 μ g/ml PABA and 50 μ g/ml kanamycin. Colonies were picked and patched to supplemented 7H10 agar medium without or with 10 μ g/ml PABA (Sigma-Aldrich). Mutants unable to grow on the PABA-free medium were selected for further evaluation. Transposon insertion sites were identified as previously described 32 .
Cloning methods. For complementation of the M. tuberculosis pabC and pabB mutant strains, respective genes were cloned in the integrative mycobacterial vectors pJT6a 33 containing a hygromycin resistance cassette or pMV306 34 containing a kanamycin resistance cassette (Table S2). Briefly, the pabC and pabB genes were amplified by PCR using primers pabC_F and pabC_R, and, pabB_F and pabB_R respectively (Table S3). The amplicons and vectors were cut with HindIII and EcoRI and ligated together to produce pJT6a-pabC and pMV306-pabB respectively (Table S2). The recombinant plasmids were propagated in E. coli DH5α and maintained with hygromycin or kanamycin selection.
Construction of deletion strains. A previously described allelic exchange system was utilized to replace the pabB locus with a hygromycin resistance cassette in relevant parental strains 20 . Briefly, the allelic exchange substrate containing flanking regions (~1000 bp) homologous to the flanking regions of pabB was constructed using plasmid p0004S (Table S2) to generate p1005. Flanking regions were amplified by the primer pairs: pabB_ Up_For, pabB_Up_Rev and pabB_Dwn_For, pabB_Dwn_Rv (Table S3). This plasmid was ligated into the PacI site of the specialized transducing phage phAE159 (Table S2) to generate ph1005. ph1005 was propagated in M. smegmatis mc 2 155 to produce high titer phage. Phage ph1005 was then used to deliver the pabB deletion substrate into M. tuberculosis strain H37Ra using specialized transduction. Transductants were plated on supplemented 7H10 agar plates containing 10 μ g/ml PABA and hygromycin to select for recombinant strains. Genomic DNA was prepared and deletion of pabB was verified using PCR and sequencing of respective amplicons.
Growth experiments. Growth was assessed by measuring optical density at 600 nm (OD 600 ). Initially, all strains were passaged in PABA-free supplemented 7H9 broth to titrate out PABA. Exponentially growing cultures were washed twice in an equal volume of PABA-free, supplemented 7H9 broth and diluted to an OD 600 of 0.01 in PABA-free supplemented 7H9 broth for incubation at 37 °C. Supplemental PABA was added as indicated.
Determining pabB loss of viability. Strains were passaged in PABA-free supplemented 7H9 broth to titrate out PABA. Exponentially growing strains were then washed twice in an equal volume of PABA-free supplemented 7H9 broth and diluted to an OD 600 of 0.01 in PABA-free, supplemented 7H9 broth for incubation at 37 °C. The samples were serially diluted and plated on supplemented 7H10 plates containing 10 μ g/ml PABA to determine CFU every 7 days over a 21 day time-course.
Determination of antifolate MICs and MBCs.
The minimum inhibitory concentration (MIC 90 ) was defined as the concentration of drug required to inhibit 90% of growth relative to a no drug control. Growth was assessed by optical density (OD 600 ). Exponentially growing strains were washed twice in an equal volume of PABA-free supplemented 7H9 broth and diluted to an OD 600 of 0.01 in PABA-free supplemented 7H9. The PABA-free exponentially growing cultures were diluted to an OD 600 of 0.01. Exogenous PABA was added at 100 ng/ml, 10 ng/ml, or 1 ng/ml as necessary for pabB::hyg strains. Drugs were added using a log 2 dilution scheme, cultures were incubated at 37 °C and the OD 600 was measured at day 14 to determine the MIC 90 . Cultures were also serially diluted and plated for CFU on supplemented 7H10 containing 10 μ g/ml PABA on day 0 and day 14 to determine the minimum bactericidal concentration (MBC 99 ) required to kill 99% of the initial population.
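Reading an MIC90 off a two-fold dilution series amounts to finding the lowest tested concentration whose day-14 OD600 is at or below 10% of the no-drug control. The helper below illustrates that logic with made-up readings; it is not the analysis script used for this study's tables.

```python
import numpy as np

def mic90(concs, od600, od600_no_drug):
    """Lowest tested concentration inhibiting >= 90% of growth vs. the no-drug control."""
    concs = np.asarray(concs, dtype=float)
    od600 = np.asarray(od600, dtype=float)
    inhibited = od600 <= 0.10 * od600_no_drug
    return concs[inhibited].min() if inhibited.any() else None  # None: MIC above tested range

# Hypothetical day-14 readings across a log2 dilution series (concentrations in ug/ml):
concs = [10, 5, 2.5, 1.25, 0.625, 0.3125]
ods = [0.01, 0.02, 0.03, 0.25, 0.60, 0.85]
print(mic90(concs, ods, od600_no_drug=0.90))  # -> 2.5
```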
Evaluation of antitubercular activity of MAC173979. MAC173979 was synthesized as described previously 23 with minor modifications as described in the Supporting Information. M9 minimal medium supplemented with 0.4% glucose, 0.2% glycerol, 0.05% tyloxapol, and 50 µg/ml pantothenate was used for all experiments involving MAC173979. Initial MIC determination and PABA antagonism assays were conducted using a 96-well plate format. Briefly, mc 2 7000 was grown to exponential phase, diluted to an OD 600 of 0.01 and inoculated into 200 μ l of medium. MAC173979 was added using a log 2 dilution scheme and 10 μ g/ml of PABA was added as appropriate to determine antagonism. Cultures were incubated at 37 °C and the OD 600 was measured at day 14 to determine the MIC 90 .
Evaluation of synergy between PAS and MAC173979. Synergy was evaluated by performing checkerboard assays. Briefly, M9 minimal medium supplemented with 0.4% glucose, 0.2% glycerol, 0.05% tyloxapol, and 50 µg/ml pantothenate was used to culture mc 2 7000 to exponential phase. Upon reaching exponential phase, mc 2 7000 was diluted to an OD 600 of 0.01 and inoculated into 5 ml of supplemented M9 medium. Bottles were arrayed as 7 rows with 10 columns. PAS was added to each column in a log 2 dilution scheme with column 1 containing 300 ng/ml and column 10 as the no drug control. MAC173979 was then added in a log 2 dilution scheme to each row with row 1 containing 120 ng/ml and row 7 as the no drug control. Bottles were incubated at 37 °C and the OD 600 was measured at day 10 35 . | 2018-04-03T04:33:16.123Z | 2016-12-01T00:00:00.000 | {
"year": 2016,
"sha1": "e3b9bfa6388eac8acc2b7101d742615d497b8bdb",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep38083.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e3b9bfa6388eac8acc2b7101d742615d497b8bdb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
207514695 | pes2o/s2orc | v3-fos-license | G protein α subunit may help zoospore to find the infection site and influence the expression of RGS protein
Sensing chemical signals secreted from host roots and finding the best site for penetration are crucial for initiating infection by Phytophthora zoospores. The G protein α subunit of P. sojae participates not only in chemotaxis to soybean isoflavones but also in finding the penetration site. Furthermore, although the calcium signaling pathway has been proven to be influenced by Gα, other signaling pathways that are also influenced by the G protein remain to be identified. In this addendum, we describe an RGS protein, PsRGS6, that is downregulated in zoospores of the Gα-silenced mutant. This result indicates that the expression of Gα and the RGS protein may be influenced by each other. Some differences between Gα mutants of P. infestans and P. sojae may be due to their different developmental processes.
There are many phytopathogenic microbes in the Phytophthora genus, which cause substantial losses worldwide every year. In the field, the fungus-like mycelia of Phytophthora differentiate to form multinucleate asexual sporangia, which can germinate to infect the host plant or cleave their cytoplasm to form and release uninucleate, biflagellate zoospores. 1 Motile zoospores swim toward the host plant by sensing chemical signals secreted from host roots and begin a new infection. 2 Consequently, asexual development, including sporangium formation, zoospore motility, and chemotaxis to specific chemical signals, determines the dispersal and host-invasion ability of Phytophthora. Recently, we proved that a Gα-mediated signal transduction pathway controls P. sojae zoospore chemotaxis to soybean isoflavones. 3 Silencing of the PsGPA1 gene, which encodes Gα, caused upregulation of calcium-binding proteins including calmodulin and a protein kinase. Furthermore, we found other phenotypic changes caused by PsGPA1 gene silencing and additional downstream targets regulated by Gα. In this addendum, we describe the function of Gα in helping zoospores find penetration sites, effectors whose transcription is influenced by Gα, and the different functions of Gα between P. infestans and P. sojae.
Gα Helps P. sojae Zoospores to Find Not Only the Host, but also the Penetration Site
Gα participates in the signal transduction pathway that controls P. sojae zoospore swimming toward the soybean isoflavone daidzein and some amino acids. In addition, it could help P. sojae zoospores find the best penetration site. We dropped zoospore suspensions of the P. sojae Gα-silenced mutant and the wild-type strain onto onion epidermis. After 30 min at 25°C, zoospore encystment on the onion epidermis was observed under a microscope. As shown in Figure 1, most zoospores of the wild-type strain encysted at the gap between two cells and aggregated together, while zoospores of the Gα-silenced mutant encysted randomly on the surface of the epidermis without aggregation. However, the same phenomenon was not observed on soybean epidermis. Zoospores of the P. sojae wild-type strain can penetrate soybean cells from either the intercellular gap or the normal cell wall (Fig. 2), which may be due to differences in the structure or components of the plant cells. This result indicates that Gα has a putative function in helping zoospores find the best site on the host for infection.
Gα and RGS Protein
Gα transmits extracellular signals from G protein-coupled receptors, which bind ligands, to downstream targets including adenylyl cyclase, phospholipases, and ion channels. 4 It has been proven that silencing of Gα can cause upregulation of some calcium-binding proteins. 3 In addition, some genes are also expressed at lower levels following silencing of Gα. For example, the expression of a gene encoding a regulator of G protein signaling (RGS), named PsRGS6, was analyzed in the Gα-silenced mutant and the wild-type strain. The result showed that PsRGS6 was expressed at a lower level in the Gα-silenced mutant than in the wild-type strain (Fig. 3). RGS proteins stimulate the GTPase activity of Gα, which hydrolyzes GTP to GDP. This allows Gα-GTP to convert to Gα-GDP, which reassociates with the Gβγ dimer and inhibits Gα-mediated signal transduction. 5 Based on our results, PsRGS6 may function as an inhibitor of G protein signaling whose transcription is regulated by Gα.

P. infestans is an air-borne species, whose sporangia can be separated from sporangiophores and are usually spread by wind or water to new potential sites of infection. Sporangial cleavage of P. infestans needs cool and moist conditions. In contrast, P. sojae is a soil-borne species, whose sporangia cannot be separated from sporangiophores, and its sporangial cleavage does not need cool conditions. The most important difference is that P. sojae zoospores can be attracted not only by some amino acids, as is the case for P. infestans, but also by isoflavones secreted from soybean roots, which may be partly determined by host range. These differences may be the reason that Gα mutants of P. infestans and P. sojae have some different phenotypes. There is only one copy of the Gα subunit gene in all the sequenced Phytophthora genomes. From the results of Northern blot and RT-PCR analyses, Gα is not expressed in nutrient mycelium, while it is expressed most highly in sporangia or sporulating mycelia. 3,6 However, there are still some differences in the expression patterns of Gα during asexual development between P. sojae and P. infestans. A zoospore has to encyst on the host surface before penetration. Gα is expressed at similar levels in zoospores and cysts of P. infestans. 6 However, in P. sojae, it is expressed at a higher level in zoospores than in cysts (Fig. 4). Accordingly, zoospores of the Gα-silenced mutant encyst much more quickly than those of the wild-type strain, 3 while the P. infestans Gα-silenced mutant did not show the same phenotype in zoospore encystment. The Gα mutants of P. sojae and P. infestans reduced their pathogenicity by different mechanisms: the P. sojae mutant zoospores showed reduced cyst germination, while the P. infestans mutant showed reduced appressorium formation. For P. infestans, the expression level of Gα in sporangia is much higher than that in zoospores, while for P. sojae the expression levels in those two stages are similar. The reason may lie in the different functions of Gα in P. sojae and P. infestans. This indicates that the G protein signaling pathway in different Phytophthora spp. may participate in different signal transduction pathways.
Conclusions and Future Directions
The functions of Gα in sporangial cleavage, zoospore motility and pathogenesis of Phytophthora have been established. However, there is no evidence that Gα participates in the sexual development of Phytophthora. We suppose that hormone signals may be transmitted independently of the Gα-mediated signal transduction pathway. There are 24 GPCRs in the P. sojae genome, 12 of which are fused to a PIPK domain and are similar to Dictyostelium RpkA. 7 These GPCR-PIPKs may transmit extracellular signals into the cell, triggering phosphoinositide second-messenger synthesis and activating downstream signaling pathways that control the sexual development of P. sojae. Although transcription of some putative downstream targets of the G protein, such as calcium binding proteins and an RGS protein, was found to be regulated by Gα, more effectors that bind to and are regulated by Gα remain to be identified. The signal transduction mechanisms involving G proteins also need to be analyzed in greater depth to inform disease control. | 2018-04-03T02:46:35.652Z | 2009-03-01T00:00:00.000 | {
"year": 2009,
"sha1": "f6855ee3abc20a1b8ff2cb7bc74659e81c9915e2",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4161/cib.7525",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "a1819ef361434f3457cd526fdd66966fe4947b72",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
253383922 | pes2o/s2orc | v3-fos-license | CASA: Category-agnostic Skeletal Animal Reconstruction
Recovering the skeletal shape of an animal from a monocular video is a longstanding challenge. Prevailing animal reconstruction methods often adopt a control-point driven animation model and optimize bone transforms individually without considering skeletal topology, yielding unsatisfactory shape and articulation. In contrast, humans can easily infer the articulation structure of an unknown animal by associating it with a seen articulated character in their memory. Inspired by this fact, we present CASA, a novel Category-Agnostic Skeletal Animal reconstruction method consisting of two major components: a video-to-shape retrieval process and a neural inverse graphics framework. During inference, CASA first retrieves an articulated shape from a 3D character assets bank so that the input video scores highly with the rendered image, according to a pretrained language-vision model. CASA then integrates the retrieved character into an inverse graphics framework and jointly infers the shape deformation, skeleton structure, and skinning weights through optimization. Experiments validate the efficacy of CASA regarding shape reconstruction and articulation. We further demonstrate that the resulting skeletal-animated characters can be used for re-animation.
Introduction
Recovering the shape, articulation, and dynamics of animals from images and videos is a longstanding task in computer vision and graphics. Achieving this goal will enable numerous future applications for 3D modeling and reanimation of animals. Nevertheless, accurately reasoning about the geometry and kinematics of animals in the wild remains an ambitious problem for three reasons: 1) partial visibility of the captured animal, 2) variability of shape across different categories, and 3) ambiguity of unknown kinematics. Take the camel in Fig. 1 as an example - to faithfully model it requires: 1) hallucinating the unobserved (e.g., occluded, back-side) region, 2) recovering its unique shape and scale, and 3) predicting its kinematic structure. Most prior approaches sidestep these difficulties by making strong class-specific assumptions (e.g., humans [34, 55-57, 78, 80] or quadruped animals [83][84][85]). However, these assumptions greatly limit the applicability of reconstruction systems, which fail to generalize to animals in the wild.
We aim to reconstruct arbitrary articulated animals using monocular videos casually captured in the wild. Recent works [74-76, 70, 45] demonstrate promising results. However, they often impose non-realistic assumptions on articulation, such as control-point-driven deformation [74][75][76][70] or freeform deformation [45,50]. As a result, they fall short of the goal of modeling skeletal characters that can be realistically re-animated in downstream applications. Furthermore, there remains significant room for improvement in the quality of the inferred animal shape.
In this work, we propose CASA, a novel solution for Category-Agnostic Skeletal Animal reconstruction in the wild. CASA jointly estimates an arbitrary animal's 3D shape, kinematic structure, rigging weight, and articulated poses of each frame from a monocular video (Fig. 1). Unlike existing nonrigid reconstruction works [74,75,70], we exploit a skeleton-based articulation model as well as forward kinematics, ensuring the realism of the resulting skeletal shape ( § 3.1). Specifically, we propose two novel components: a video-to-shape retrieval process and a skeleton-based neural inverse graphics framework. Given an input video, CASA first finds a template shape from a 3D character assets bank so that the input video scores highly with the rendered image, according to a pretrained language-vision model [47] ( § 3.2). Using the retrieved character as initialization, we jointly optimize bone length, joint angle, 3D shape, and blend skinning weight so that the final outputs are consistent with visual evidence, i.e., input video ( § 3.3).
Another issue that hinders the study of animal reconstruction is that the existing datasets [4,13,31] lack realistic video footage and ground-truth labels across different dynamic animals. To address it, we introduce a photo-realistic synthetic dataset PlanetZoo, which is generated using the physicalbased rendering [7] and rich simulated 3D assets [35]. PlanetZoo is large-scale and consists of 251 different articulated animals. Importantly, PlanetZoo provides ground-truth 3D shapes, skeletons, joint angles, and rigging, allowing evaluation of category-agnostic 4D reconstruction holistically ( § 4).
We evaluate CASA on both PlanetZoo and the real-world dataset DAVIS [46]. Experiments demonstrate that CASA recovers fine shape and realistic skeleton topology, handles a wide variety of animals, and adapts well to unseen categories. Additionally, we showed that CASA reconstructs a skeletal-animatable character readily compatible with downstream re-animation and simulation tasks.
In summary, we make the following contributions: 1) We propose a simple, effective, and generalizable video-to-shape retrieval algorithm based on a pretrained CLIP model. 2) We introduce a novel neural inverse graphics optimization framework that incorporates stretchable skeletal models for category-agnostic articulated shape reconstruction. 3) We present a large-scale yet diverse skeletal shape dataset PlanetZoo.

Figure 2: Overview. Given an input video, a video-to-shape retrieval process is first conducted with the guidance of pre-trained CLIP ( § 3.2). Initialized by the retrieved shape, we jointly optimize shape, skeleton (bone length and joint angle), and skinning through inverse rendering ( § 3.3).
Skeletal animation drives a character with a skeleton structure that controls the motion. Each skeleton is a hierarchical set of bones, and each bone is associated with a portion of the vertices (skinning). A bone's transformation is determined through a forward kinematics process. As the character is animated, the bones change their transformations over time.
Linear Blend Skinning (LBS) [37,28] is a standard way of modeling skeletal deformation, which deforms each vertex of the shape based on a linear combination of bone transformations. As an improvement, multi-weight enveloping [68,38] is used to overcome the issue of shape collapse near joints when related bones are rotated or moved. Dual-quaternion blending (DQB) [20] adopts a quaternion rotation representation to avoid artifacts when blending bone rotations, and STBS [15] extends LBS to include the stretching and twisting of bones. Recent works [74][75][76] employ LBS for modeling the motion of shapes recovered from video clips. However, these methods do not enforce a skeleton-based forward kinematic structure. Hence their recovered animated shapes are not interpretable and cannot be directly used in skeletal animation and simulation pipelines.
Language-vision approaches. Self-supervised language-vision models have gone through rapid advances in recent years [62,66,48] due to their impressive generalizability. The seminal work CLIP [66] learns a joint language-vision embedding using more than 400 million text-image pairs. The learned representation is semantically meaningful and expressive, and has thus been adapted to various downstream tasks [82,71,49,79,65]. In this work, we adopt CLIP in the retrieval process.
3D reconstruction dataset. A plethora of synthetic 3D datasets [61,81,16,52] and interactive simulated 3D environments [58,59,9,67] have been proposed in recent years. ShapeNet [8] provides a benchmark for common static 3D object shape reconstruction. Among all of these datasets, only a few aim for dynamic object reconstruction [31]. Given that the synthetic dataset has a large domain gap to real-world settings, photo-realistic rendered samples are strongly preferred. To this end, we propose a photo-realistic synthetic dataset PlanetZoo to study the dynamic animal reconstruction problem. PlanetZoo contains high-fidelity assets and covers a wide range of animal categories.
3 Category-agnostic skeletal animal reconstruction
CASA aims to reconstruct various dynamic animals in the wild from monocular videos. It takes as input the RGB frames {I t } 1...T from a monocular video, object masks {M t } 1...T , and optical flow maps {F t } 1...T computed from consecutive frames. From these inputs, we aim to recover an animal shape s_0 and its articulated deformed shape s_t at each time t. Fig. 2 gives an overview of our approach. Our method exploits the skeletal articulation model and forward kinematics to ensure the realism of the resulting skeletal shape ( § 3.1). It consists of two phases. In the retrieval phase ( § 3.2), CASA finds a template character from an existing asset bank based on its similarity to the input video in a deep embedding space, using a pre-trained encoder. The retrieved template is fed into the neural inverse graphics phase as initialization ( § 3.3). Finally, we jointly infer the final shape, skeleton, rigging, and the articulated pose of each frame through an energy minimization framework.
Figure 3: Skeletal shape parametric model. Our parametric model consists of joint angles q, bone length b, skinning weight w, and vertex positions v. Joint angles q change per frame and the others are shared per video. Utilizing this model, animal shapes are tuned by vertex deformation as well as the stretching of bones. The target articulation is fit by predicting per-frame joint angles.
Articulation model
Skeletal parameters. We exploit a stretchable bone-based skeletal model for our deformable shape parameterization, as shown in Fig. 3. This articulated model consists of three components: 1) a triangular mesh, consisting of a set of vertices and faces in the canonical pose, describing the object's shape; 2) a set of bones connected by a kinematic tree, where each bone has a bone length parameter and an associated rigging weight over each vertex; 3) a joint angle describing the relative transformation between each pair of adjacent bones. To summarize, a deformed shape s^t at time t is parameterized by the following quantities (Eq. 1): v_i is the vertex position; f_j represents a triangle face, parameterized as a tuple of three vertex indices; w_i is the rigging weight, constrained to lie in the K-dimensional simplex Δ^K = {(w_0, ..., w_K) ∈ R^K | Σ_{k=0}^{K} w_k = 1 and w_k ≥ 0}; b_k is the scalar bone length of each bone; and q_k^t is the joint angle of each bone at time t, represented as a unit quaternion in SO(3). Note that all the variables except the joint angles are shared across time.
Our skeletal model is grounded in the nature of many articulated objects. Unlike the commonly used control-point based articulation [41,75,74], our model reflects the constraints imposed by bones and joints, leading to more natural articulated deformation. Compared to category-specific parametric models [34,85], it is more flexible and generalizable.
Forward kinematics. We use the forward kinematic model [26] to compute the transformation of each bone along the kinematic tree. Specifically, the rigid transformation of a bone is uniquely defined by the joint angles and bone length scales of the bone itself and of its ancestors along the kinematic tree. Given a bone k, the rigid transform between its own frame and the root can be computed by recursively applying the relative rigid transformation along the chain: T_k^t = T_pa(k)^t · T_k,pa(k)^t(q_k^t, b_k) (Eq. 2), where T_k^t and T_pa(k)^t are the transformations of bone k and of its parent node at frame t; through this recursion, T_k^t depends on the joint angles q_ans(k) and bone lengths b_ans(k) of all the ancestors of the target bone k. The relative rigid transform T_k,pa(k)^t(q_k^t, b_k) between the bone and its parent frame consists of a translation of the joint center along the z-axis determined by the bone length, b_k e_z = [0, 0, b_k], and a rotation around the joint center, R(q_k^t).

Linear Blend Skinning (LBS). Given the rigid transformation of each bone, we adopt LBS to compute the transform of each point. Specifically, the deformation of each point is determined by linearly composing the rigid transforms of the bones through its skinning weights, v_i^t = Σ_k w_i,k T_k^t v_i, where T_k^t is the transformation of bone k at frame t as defined in Eq. 2. For simplicity, we omit its dependent variables.
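To make the forward-kinematics recursion and the LBS step concrete, the following Python sketch (our own illustration, not the authors' code; the quaternion convention, the rotation/translation composition order, and helper names such as `rel_transform` are assumptions) walks a kinematic tree from the root and blends the resulting bone transforms over the mesh vertices.

```python
import numpy as np

def rel_transform(q, b):
    """Relative bone-to-parent transform: rotation by unit quaternion q = (w, x, y, z)
    and a joint-center translation of length b along the z-axis (composition order assumed)."""
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.array([0.0, 0.0, b])
    return T

def forward_kinematics(parent, q, b):
    """parent[k] = index of parent bone (-1 for the root); bones assumed topologically sorted.
    Returns the world transform of every bone."""
    K = len(parent)
    T = [np.eye(4)] * K
    for k in range(K):
        local = rel_transform(q[k], b[k])
        T[k] = local if parent[k] < 0 else T[parent[k]] @ local
    return np.stack(T)

def linear_blend_skinning(V, W, T):
    """V: (N,3) rest-pose vertices, W: (N,K) skinning weights, T: (K,4,4) bone transforms."""
    Vh = np.concatenate([V, np.ones((len(V), 1))], axis=1)   # homogeneous coordinates
    per_bone = np.einsum('kij,nj->nki', T, Vh)[..., :3]      # each bone's transform of each vertex
    return np.einsum('nk,nki->ni', W, per_bone)              # weight-blend across bones
```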
Stretchable bones. The aforementioned nonrigid deformation assumes independent shape and bone lengths. Naturally, the shape should deform accordingly as bone length changes. To model such a relationship, we exploit a stretchable bone deformation model [15]. For each bone, we compute an additional scaling transform along the bone vector direction based on its bone length change and apply this scaling transform to any points associated with this bone. Fig. 3 provides an illustration of stretching. For mathematical details on stretching, we refer readers to our supplementary document.
Stretchable bones bring a two-fold advantage: 1) they allow us to model shape variations within the same topological structure; 2) they make the topological structure adjustable by "shrinking" the bones, e.g., a quadrupedal animal can be evolved into a seal-like skeletal model through optimization.
Reparameterization. Optimizing each vertex position offers flexibility, yet it might lead to undesirable meshes due to the lack of regularization. We use a neural displacement field to re-parameterize the vertex deformation and thereby incorporate regularization, i.e., smooth and structured vertex deformation. We use a coordinate-based multi-layer perceptron (MLP) to define this displacement field V. In addition, we incorporate a global scale scalar u in our framework to handle the shape misalignment between our initialization and the target. The updated position v_i of each vertex is defined as the scaled position u v_i at the canonical pose plus the vertex offset V_θ(u p_i) predicted by the displacement field parameterized by θ. During inference, instead of directly optimizing v_i, we only optimize the parameters of the displacement network. This displacement field reparameterization allows us to smoothly deform the canonical shape during inference. In practice we find that this implicit regularization compares favorably with explicit smoothness terms such as as-rigid-as-possible and Laplacian smoothness.
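A minimal sketch of this reparameterization is given below, assuming a small coordinate-based MLP; the layer sizes, activation, and initialization are illustrative choices, not taken from the paper.

```python
import numpy as np

class DisplacementField:
    """Coordinate-based MLP: maps a scaled canonical vertex position to a 3D offset."""
    def __init__(self, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (3, hidden)); self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, 3)); self.b2 = np.zeros(3)

    def __call__(self, x):                      # x: (N, 3) scaled canonical positions
        h = np.tanh(x @ self.W1 + self.b1)
        return h @ self.W2 + self.b2            # (N, 3) predicted vertex offsets

def reparameterized_vertices(V_canonical, u, field):
    """Updated vertex = scaled canonical position plus the offset predicted by the field."""
    x = u * V_canonical
    return x + field(x)
```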
Video-to-shape retrieval
Directly optimizing all skeletal parameters defined in Eq. 1 is challenging due to the variability and highly structured skeletal parameterization. Many deformable objects share similar topology structures, albeit with significant shape differences. To address it, we propose initializing skeletal shapes in a data-driven manner by choosing the best matching template from an asset bank.
Videos and skeletal shapes are in two different modalities; hence establishing a similarity measure is hard. Inspired by the recent success of language-vision pretraining models for 3D [66,39], we utilize realistic rendering and pretraining image embedding models [47] to bridge this gap. Specifically, we first pre-render video footage for each character in the asset bank through a physically based renderer [7]. The environment lighting, background, articulated poses, and camera pose for each video are randomized to gain diversity and robustness. We then extract the image embedding features for each video using CLIP [47], which is a language-vision embedding model that is pretrained on a large-scale image-caption dataset. It captures the underlying semantic similarity between images well despite the large appearance and viewpoint difference. This is particularly suitable for retrieving shapes with similar kinematic structures, as the kinematics of animals is often related to semantics.
During inference, we extract embedding features from a given input video and measure the L2 distance between the input video and the rendered video of each object in the embedding space. We then select the highest-scoring articulated shape as the retrieved character. Please refer to the supplementary material for implementation details.
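The retrieval step therefore reduces to a nearest-neighbour search in the CLIP embedding space. A minimal sketch is given below; the embedding arrays are assumed to be precomputed (e.g., with an off-the-shelf CLIP image encoder), and the aggregation over frames and poses shown here is one plausible choice rather than the authors' exact rule.

```python
import numpy as np

def retrieve_template(video_emb, asset_embs):
    """
    video_emb:  (T, D)    embeddings of the input video frames
    asset_embs: (N, V, D) embeddings of V rendered poses for each of N asset-bank characters
    Returns the index of the best-matching character.
    """
    # Pairwise L2 distances between every input frame and every rendered pose of every asset.
    diff = video_emb[None, :, None, :] - asset_embs[:, None, :, :]   # (N, T, V, D)
    dist = np.linalg.norm(diff, axis=-1)                             # (N, T, V)
    # Score each asset by its best frame/pose match, then pick the closest asset overall.
    best_per_asset = dist.min(axis=(1, 2))                           # (N,)
    return int(np.argmin(best_per_asset))
```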
Neural inverse graphics via energy minimization
We expect our final output skeletal shape to 1) be consistent with the input video observation and 2) reflect prior knowledge about articulated shapes, such as symmetry and smoothness. Inspired by this, we formulate our task as an energy minimization problem.
Energy formulation. We exploit visual cues from each frame, including the segmentation mask {M_t} and the optical flow {F_t}, obtained through off-the-shelf neural networks [63,74]. We also incorporate two types of prior knowledge: motion smoothness for the joint angles q_k^t of the bones across frames, and a symmetry constraint for the per-vertex offset v_i at the reference frame. In particular, let {I_t, M_t, F_t} be the input video frames and corresponding visual cues, and {s_t} be the predicted shapes of all frames; we formulate the energy for minimization (Eq. 4) as the sum of three terms, where E_cue measures the consistency between the articulated shape and the visual cues, E_smooth promotes smooth transitions over time, and E_symm encodes the fact that many articulated objects are symmetric. The three energy terms complement each other, helping our model capture the motions of the object according to visual observations while constraining the deformed object to be natural and temporally consistent. We describe details of each term below.

Visual cue consistency. The visual cue consistency energy measures the agreement between the rendered maps of the articulated shape and the 2D evidence (flow and silhouettes) from the videos. We use a differentiable renderer [33] to generate projected object silhouettes. Additionally, we project vertices of the predicted shapes at two consecutive frames to the camera view and compute the projected 2D displacement to render optical flow, following prior work [74]. We leverage PointRend [22] for object segmentation and a volumetric correspondence net [73] for flow prediction. The energy measures the difference between the rendered and inferred cues in ℓ2 distance, where β is a trade-off weight, π_flow(s_t) is the rendered 2D flow map given the deformed shape s_t, and π_seg(s_t) is the rendered object mask. Similar to previous inverse rendering work [75], the object-camera transform is encoded as the root node transform.
Motion smoothness. This energy term encodes the prior that the motions of animals should be smooth and continuous across frames. We impose a constraint that the difference in the joint angles of a bone between two consecutive frames should be slight. We implement this by composing the joint quaternion at the current frame with the inverse of the joint quaternion at the next frame (under the quaternion composition operator); the result should be close to the identity quaternion q = (0, 0, 0, 1).
Symmetry offset. This term encourages the resulting shape in canonical space to be symmetric at the reference frame. It is inspired by the fact that most animals in the real world are symmetric when put into a certain canonical pose (e.g., a 'T-pose' for bipedal animals). Following previous work [74], we enforce this property on the canonical shape (where the joint angles are all zero). Specifically, we calculate the Chamfer distance between the set of vertices {v_i} of the canonical shape and its reflection under the Householder reflection matrix H.
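The sketch below illustrates how these three energy terms could be assembled in code. It is our own schematic, not the authors' implementation: the rendered masks and flows are assumed to come from a differentiable renderer upstream, unit trade-off weights are used, the quaternion convention here is (w, x, y, z), and the reflection plane is assumed to be x = 0.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_inv(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0]) / np.dot(q, q)

def e_smooth(joint_quats):
    """joint_quats: (T, K, 4). Penalize deviation of q_t composed with inverse(q_{t+1}) from identity."""
    identity = np.array([1.0, 0.0, 0.0, 0.0])
    cost = 0.0
    for t in range(len(joint_quats) - 1):
        for k in range(joint_quats.shape[1]):
            d = quat_mul(joint_quats[t, k], quat_inv(joint_quats[t + 1, k]))
            cost += np.sum((d - identity) ** 2)
    return cost

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets (N,3) and (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def e_symm(verts):
    """Reflect canonical vertices across the x = 0 plane (a Householder reflection) and compare."""
    H = np.diag([-1.0, 1.0, 1.0])
    return chamfer(verts, verts @ H.T)

def e_cue(rendered_masks, gt_masks, rendered_flows, gt_flows, beta=1.0):
    """L2 consistency between rendered and observed silhouettes / optical flows."""
    mask_term = sum(np.sum((r - g) ** 2) for r, g in zip(rendered_masks, gt_masks))
    flow_term = sum(np.sum((r - g) ** 2) for r, g in zip(rendered_flows, gt_flows))
    return mask_term + beta * flow_term

def total_energy(rendered_masks, gt_masks, rendered_flows, gt_flows, joint_quats, canonical_verts):
    return (e_cue(rendered_masks, gt_masks, rendered_flows, gt_flows)
            + e_smooth(joint_quats)
            + e_symm(canonical_verts))
```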
Inference
We infer the skeletal shape by minimizing the energy defined in Eq. 4. Our optimization variables include the vertex positions v_i at the canonical pose, the rigging weights w_i, the bone lengths b_k, as well as the joint angles q_k^t. All the variables except the joint angles are shared across time, and the joint angles are optimized per frame.
Initialization. We initialize the vertex position, bone length, and the rigging weight v i , b k , w i of canonical shape using the retrieved template character. The skeleton tree structure of this character is also taken as the basis of our skeletal parameterization. We initialize all the joint rotations as a unit quaternion.
Optimization. The energy function is fully differentiable and can be optimized end-to-end. We use Adam optimizer [21] to learn all the optimization variables. We adopt a scheduling strategy to avoid getting stuck at a local minimum. Specifically, we first optimize the mesh scaling factor based on silhouette correspondences. We then jointly update the bone length scale, joint angles, and the neural offset field by minimizing the energy function defined in Eq. 4.
PlanetZoo dataset
Benchmarking 4D animal reconstruction requires a large amount of nonrigid action sequences with ground truth articulated 3D shapes. Towards this goal, we construct a synthetic dataset PlanetZoo consisting of hundreds of animated animals with textures and skeletons from different categories. Appendix Fig. 12 depicts a snapshot of the assets and rendering images from our dataset, demonstrating the diversity and quality.
Data generation. We extracted animal meshes from the zoo simulator Planetzoo [35]. Cobratools [12] are used to extract those meshes along with their skeleton. With the extracted mesh and skeleton, we further render RGB maps, segmentation masks and optical flow for each frame using Blender.
Assets. To create diverse environments, we use random HDRI images as backgrounds for environmental lighting and random materials from ambientCG for the floor textures. To reduce the gap between synthetic and real data, we vary the location and strength of the light source to simulate different real-world environments.
Camera. To generate realistic action sequences, we randomly change camera locations every 12 frames, resulting in a constant view-point change following the animal. The camera is allowed to rotate by a certain angle, ranging from 15° to 22.5°.
Articulation. In order to obtain animated animal sequences, for every 12 frames, 8 bones are selected from the skeleton tree with a transformation attached. The angle value of rotation for each bone is sampled from a uniform distribution. By doing this, we are able to cover the whole action space, providing diverse action sequences.
Rendering. We generate silhouettes, optical flow, depth map, camera parameters, and RGB images using Vision Blender [7]. The physically based renderer is capable of showing fine details such as fur, making the rendering results more realistic. For each animal in the dataset, 180 frames are rendered.
Experiments
In this section, we first introduce our experimental setting ( § 5.1). We then compare CASA against a comprehensive set of articulated baselines in various reconstruction and articulation metrics on both simulation ( § 5.2) and real-world datasets ( § 5.3). Finally, we demonstrate our inferred shape can be used for downstream reanimation tasks ( § 5.4).
Experimental setup
Benchmarks. We validate our proposed method on two datasets: our proposed photorealistic rendering dataset PlanetZoo, and the real-world animal video dataset DAVIS [46], which contains multiple real animal videos with mask annotations. For PlanetZoo, we choose 24 out of 249 total animals for testing and use the rest for validation and training. The test set includes diverse articulated animals from unseen categories, including multiple quadruped, biped, and bird categories, as well as unseen articulation topologies such as pinnipeds.
Metrics. We measure reconstruction quality by Intersection over Union (IOU) and Chamfer distance, as well as skinning distance, joint distance, and re-animation quality on PlanetZoo. 1) mean Intersection Over Union (mIOU) measures the volumetric similarity between two shapes: we voxelize the reference and the predicted shape into occupancy grids and calculate the IOU ratio.
2) mean Chamfer Distance (mCham) computes the bidirectional vertex-to-vertex distances between the reference and the predicted shape.
3) Joint is the symmetric Chamfer Distance between joints. We evaluate CD-J2J following [72]. Given a predicted shape, we compute the Euclidean distance between each joint and its nearest joint in the reference shape, then divide it by the total joint number.
4) Skinning distance measures the similarity between the skinning weights. We first extract the vertices associated with each joint using the skinning weights. For each pair of joints from the ground truth and the prediction, we calculate the Chamfer distance between their associated vertices. Finally, we use the Jonker-Volgenant algorithm to find the minimum-distance matching between prediction and reference.
5) Re-animation measures how well we can re-pose an articulated shape to a target shape. Specifically, we minimize the Chamfer distance between the reference shape and the predicted shape by optimizing the joint angles of each bone for a skeletal shape, or the rigid transforms for a control-point-based shape. We consider this a holistic metric that jointly reflects the quality of shapes, skinning, and skeleton. (A minimal sketch of the voxel-IOU and Chamfer computations used by these metrics follows this list.)
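A minimal sketch of the two basic shape-quality metrics (our own illustration; the voxel resolution and the averaging conventions are assumptions):

```python
import numpy as np

def voxel_iou(points_a, points_b, resolution=32):
    """Voxelize two point sets into a shared occupancy grid and compute the IOU ratio."""
    lo = np.minimum(points_a.min(0), points_b.min(0))
    hi = np.maximum(points_a.max(0), points_b.max(0))
    def occupancy(pts):
        idx = ((pts - lo) / (hi - lo + 1e-8) * (resolution - 1)).astype(int)
        grid = np.zeros((resolution,) * 3, dtype=bool)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
        return grid
    a, b = occupancy(points_a), occupancy(points_b)
    return np.logical_and(a, b).sum() / max(np.logical_or(a, b).sum(), 1)

def chamfer_distance(points_a, points_b):
    """Symmetric (bidirectional) vertex-to-vertex Chamfer distance."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```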
Baselines. We compare with state-of-the-art approaches for monocular-video articulated shape reconstruction, including LASR [74], ViSER [75], and BANMo [76] 3 . Similar to our method, they utilize 2D supervision from videos for training, including segmentation masks and optical flows. We download their open-source versions from GitHub. For the input data, we use our ground-truth silhouettes from either the PlanetZoo or the DAVIS [46] dataset. We follow the optimization scripts in their code to obtain the baseline results.
3D reconstruction from a monocular video inevitably brings scale ambiguities. To alleviate the issue of scale differences, for every predicted shape among all the competing baselines, we conduct a line search to find the optimal scale that maximizes the IOU between the reference and the predicted shape.
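This scale alignment can be implemented as a simple line search; a sketch under assumed search bounds is shown below, where `iou_fn` stands for any IOU implementation (e.g., the voxel-based one sketched earlier).

```python
import numpy as np

def best_scale(pred_points, ref_points, iou_fn, scales=np.linspace(0.5, 2.0, 61)):
    """Line search over a global scale applied to the predicted shape, maximizing IOU with the reference."""
    ious = [iou_fn(s * pred_points, ref_points) for s in scales]
    k = int(np.argmax(ious))
    return scales[k], ious[k]
```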
Results on PlanetZoo
We present quantitative results on PlanetZoo in Tab. 1. CASA achieves higher performance on the mIOU and mCham metrics, demonstrating shapes with higher fidelity. Our approach also outperforms the competing methods in re-animation, demonstrating the superior performance of holistic articulated shape reasoning and the potential for downstream re-animation tasks. CASA compares less favorably to LASR in skinning quality; we conjecture this is due to the additional degrees of freedom afforded by per-control-point articulation. We also show the reconstructed meshes from both the camera view and the opposite view. The qualitative results show that CASA can recover accurate shapes across various view angles under partial observation. In contrast, the baselines fail to reconstruct reliable 3D meshes when the objects are partially observed. In addition, CASA produces meshes with higher fidelity and more local detail in the visible region.
Real-world reconstruction
We demonstrate qualitative results of the competing algorithms on the real-world animal video dataset DAVIS [46] in Fig. 6 and Fig. 4. They show that both our method and the baselines obtain good reconstruction results from the camera perspective. However, both baselines fail to reconstruct the unseen parts. In contrast, CASA can reliably reason about the shape and articulation in the unseen regions, thanks to its symmetry constraints, skeletal parameterization, and 3D template retrieval.
Reanimation
We now show how the reconstructed articulated 3D shape can be retargeted to new poses. Given an inferred skeletal shape and a target GT shape at a different articulated pose, we apply inverse kinematics to compute the articulated pose that reanimates the inferred shape. Specifically, we optimize the articulated transforms such that the reanimated mesh is as close as possible to the target in Chamfer distance. Joint quaternions are optimized for the skeletal mesh, and the articulation transforms are optimized for the control-point-based method [74]. Fig. 7 shows a comparison between the retargeted meshes of our method and LASR on PlanetZoo, with a GT target mesh as a reference. We observe that our retargeted meshes look realistic and accurate and preserve geometric details. Despite having more degrees of freedom in re-animation, LASR fails to produce realistic retargeting results. Tab. 1 reports a quantitative comparison in Chamfer distance between the retargeted and the reference meshes, demonstrating that our approach outperforms competing methods by a large margin.
Ablation study
We provide an ablation study to demonstrate the efficacy of each design choice in our framework.
Energy terms. In Tab. 2, we ablate different terms of our energy function. Specifically, we consider the mask consistency, flow consistency, symmetry and smoothness regularization separately. We find that the mask part is crucial for the performance, while the optical flow part also improves the framework results. Removing the symmetry offset term results in performance degradation since this term plays a vital role in regularizing the neural offset, which can alleviate issues brought by the ambiguity of single-view monocular video. The smooth term leads to better qualitative results.
Optimization. We compare different optimization settings in Tab. 3. The results show that: 1) using a neural offset field is superior to a per-vertex offset or to not deforming the shape (CASA vs. w/o offset vs. w/o disp. field); 2) the stretchable skeleton helps (CASA vs. w/o skeleton); and 3) removing the shape scaling step leads to performance degradation (CASA vs. w/o scaling).
Retrieval strategies. In Tab. 4, we test different retrieval strategies. Our result demonstrates that CLIP is the preferred backbone for retrieval, most likely due to training with significantly richer semantic information than ImageNet pretrained models.
Initialization. We test different initialization strategies for skinning weights in Tab. 5. We replace the rigging from the retrieved animal by using k-means for initializing weights. The comparison in the table shows that high-quality rigging weight initialization is essential for good shape predictions. The results of sphere initialization also confirm the necessity of the proposed retrieval phase.
Stretchable bone. Fig. 8 shows qualitative results with and without stretchable bone parameterization. Compared to merely deforming each vertex, stretchable bone deformation ensures global consistency and smoothness. As noted in the figure, without stretchable bones we see a discontinuity at the nose of the animal and a mismatch between the lower and upper mouth.

While demonstrating superior performance in animal reconstruction, our method has a few remaining drawbacks. Firstly, the source asset bank restricts the diversity of the retrieved articulation topology. Bone length optimization partially alleviates this limitation by "shrinking" bones, but it cannot add new bones to the kinematic tree; e.g., we cannot create a spider from a quadrupedal template. Secondly, our method so far does not impose the constraint that bones lie inside the mesh. We plan to tackle these challenges in the future.
Conclusion
In this paper, we propose CASA, a novel category-agnostic animation reconstruction algorithm. Our method can take a monocular RGB video and predict a 4D skeletal mesh, including the surface, skeleton structure, skinning weight, and the joint angle at each frame. Our experimental results show that the proposed method achieves state-of-the-art performances in two challenging datasets. Importantly, we demonstrate that we could retarget our reconstructed 3D skeletal character and generate new animated sequences.
A Additional Results
We provide additional qualitative results on animals from PlanetZoo in Fig. 9. As shown in the figure, CASA recovers the shape and topology of the target animals well from the input videos. The sample of the ostrich in this figure illustrates the limitations of CASA: 1) partial observation from a monocular video leads to ambiguity in animal poses; the predicted shape fits the target well from the camera view, but from an alternative view we find that the wings of the prediction are in a wrong pose; 2) we do not impose a constraint that bones should lie inside the mesh, so some joints in the head end up outside the shape.
In order to demonstrate that CASA is able to adjust the topology of the initial shape, we provide the evolution of predictions for a seal in Fig. 10. Although the retrieved animal is a quadrupedal animal (otter), CASA gradually warps the predicted shapes to the target seal, whose topology is significantly different from quadrupeds.
Figure 10: Given a retrieved animal with the wrong articulation topology, CASA can still recover a reasonable skeletal shape, thanks to our stretchable bone formulation.
B Implementation Details
We minimize the energy function using the Adam optimizer [21] with β1 = 0.9 and β2 = 0.999. We optimize the framework for 200 epochs in total, adopting a scheduling strategy. We optimize a scaling factor for the first 60 epochs by minimizing the mask term. This factor is initialized by aligning the bounding boxes between the rendered mask of the initial shape and the ground-truth mask. For the remaining 140 epochs, we jointly optimize the bone length scales, joint angles, and the neural offset field. The learning rate is set to 5e-2 for the first stage and 4e-3 for all parameters in the second stage, except for the mesh scale and the neural offset, for which it is set to 1e-3. The trade-off weights for the mask term, flow term, smoothing term, and symmetry term are empirically set to 1e4, 1e6, 1e6, and 1e4, respectively. The neural offset field is parameterized by a four-layer MLP.
C Retrieval Result
Fig. 11 shows some retrieval results. The input animals have various colors, skin textures, and poses, together with different backgrounds and lighting conditions in their corresponding videos. Still, our method can retrieve animals from the database with a topology similar to the input, as CLIP [66] captures high-level semantic information and avoids distractions caused by different appearances and background environments. The sample of the seal at the bottom right is a failure case, as the two animals have different topologies. However, as shown in Fig. 10, CASA is able to bridge the gap between the retrieved and target animals.
D Stretchable Bone Formulation
To incorporate the stretchable bone parameterization into our linear blend skinning model, we have to re-formulate the skinning equation. The original formulation of linear blend skinning is v_i^t = Σ_k w_i,k T_k^t v_i, where v_i^t and v_i are the positions of vertex i at frame t and in the rest pose, respectively, w_i,k is the rigging weight, and T_k^t is the transformation of bone k at frame t. Since the bone transformation T_k^t is rigid, we can replace it by a translation and a rotation. In particular, given the head positions of each bone before and after deformation (which can be calculated by performing transformations according to forward kinematics), the skinning model can be rewritten in terms of c_k and c_k^t, the head positions of bone k before and after deformation, and R_k^t, the rotation matrix decomposed from the rigid transformation T_k^t. Simply adding a scaling term to the skinning model can result in shape explosion, especially for vertices located near the tails of bones, as illustrated in [15]. Following the solution proposed in [15], we define endpoint weight functions e_k(p) for all bones to control shape deformation under scaling. With these functions, each vertex p_i receives a stretching contribution e_k(p_i) s_k from bone k, where s_k = (b_k − 1)(d_k − c_k) is the full stretching vector at the tail d_k of bone k and b_k is the bone scaling factor.
Typically the value of the endpoint weight function increases from 0 to 1 along the rest vector (d_k − c_k) of each bone. We define these functions by projecting each vertex v_i onto the nearest point of each bone and taking as the weight the fraction of the way that this projection falls between the head and the tail of the bone: e_k(v_i) = ||proj_k(v_i) − c_k|| / ||d_k − c_k||, where c_k and d_k are the head and tail positions of bone k and proj_k(v_i) is the projection of v_i onto the bone.
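A small sketch of this endpoint weight computation (our own illustration; clamping the weight to [0, 1] for vertices whose projection falls outside the bone segment is an assumption):

```python
import numpy as np

def endpoint_weight(v, c, d):
    """Fraction of the way the projection of vertex v onto bone segment (c -> d) lies from head to tail."""
    axis = d - c
    t = np.dot(v - c, axis) / max(np.dot(axis, axis), 1e-12)
    return float(np.clip(t, 0.0, 1.0))

def stretch_offset(v, c, d, bone_scale):
    """Per-vertex stretching contribution e_k(v) * s_k, with s_k = (b_k - 1) * (d - c)."""
    s = (bone_scale - 1.0) * (d - c)
    return endpoint_weight(v, c, d) * s
```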
Figure 11: Retrieval results.
E Template-based Baseline
We compare CASA with an additional template-based baseline in Tab. 6. To ensure a fair comparison, we only include quadruped animals from the PlanetZoo test set in these experiments, since ACFM is a category-specific method. The results show that CASA outperforms ACFM despite not requiring pre-training on a large-scale database.
F Optimization Framework Comparison
We compare CASA-retrieval + LASR with the entire CASA pipeline. For LASR, rather than using the whole coarse-to-fine optimization schedule, we utilized only the final step. The quantitative comparison can be found in Tab. 7.
G Efficiency
We compare the efficiency of CASA with that of two other state-of-the-art methods, LASR [74] and ViSER [75]. We test their average running times for animals in the testing set of PlanetZoo, using one RTX 2080Ti. The results are shown in Tab. 8. Our method is approximately 3× faster than LASR and 5× faster than ViSER, demonstrating that our method is highly efficient. This is natural, as CASA does not need to train any complex neural networks.
H Detailed Implementation of Retrieval
We denote the input monocular video containing the animal to be reconstructed as {I_t}_{t=1,...,T1}. The 3D asset bank used for retrieval includes N categories of animals. For each animal j, the asset bank includes a 3D mesh M_j, a skeleton S_j, and a realistic rendering video of length T2. In the retrieval stage, our goal is to find the animal in the asset bank that is most similar to the input animal, and the output is its 3D shape s_j. In short, the retrieval stage defines a mapping from a 2D video to a 3D mesh. Since the input and output of retrieval are in different modalities, this mapping is not easy to establish directly. We propose to measure the similarity between the input video {I_t} and all rendering videos in the asset bank using CLIP, obtain the most similar rendering video {I_t^r}, and take the corresponding animal mesh M_r as the retrieval result.
To be specific, given the input video and any animal shape in the asset bank, we utilize the pre-trained CLIP image encoder to embed all frames of the input video, F_t^input = g_CLIP(I_t), and the photo-realistic renderings, F_v^j = g_CLIP(π(s_j, q_v)), where {I_t}_{t=1,...,T1} is the input video, s_j is the j-th animal shape, g_CLIP is the image embedding network of the CLIP model, and π(s_j, q_v) is the photo-realistic rendering of the articulated shape s_j at a randomized skeletal pose q_v. T1 is the length of the input video and V is the number of randomized skeletal poses. Note that we render only one frame for each pose.
With the corresponding encoding features {F_t^input} and {F_v^j}, we first calculate the similarity S(I_t, π(s_j, q_v)) between the t-th input frame and the v-th skeletal pose of the j-th shape as the (negative) distance between their embedding features in the CLIP space. With this similarity, we retrieve the animal category r in the asset bank that is most similar to the input animal: r = arg max_j max_t max_v S(I_t, π(s_j, q_v)). We then obtain the 3D animal shape s_r, together with its corresponding skeleton and skinning weights, from the asset bank and use it to initialize the following optimization process.
I Initialization Strategy of CASA
We provide details of our initialization strategy for CASA.
In the skeletal optimization stage of CASA, the optimized parameters include a displacement field, the skinning weights, joint rotation angles for each bone, a root transformation, and bone length scales. During initialization, the displacement field is defined by a Multi-Layer Perceptron (MLP) with random network parameters. The skinning weights are set according to the retrieval results. Bone length scales are all set to 1. The joint angles are set to the T-pose.
The root transformation is represented as a combination of rotation and translation. For the skeletal optimization to converge, the root transformation should ensure that the mesh in camera coordinates is consistent with the ground truth. Instead of estimating this transformation in world coordinates, we propose to predict the camera parameters for each video and to fix the root transformation to the identity. In practice, for synthetic videos in which the camera parameters are available, we directly use them as initialization. For videos where camera parameters are not available, we adopt a simple strategy to predict them.
Specifically, we first optimize the camera intrinsics and the focus-point offset from the origin by minimizing the mask loss between the rendering and the ground truth. In each iteration, we randomly sample a point as the camera location from a sphere centered at the origin, with radius R set according to the animal scale.

Once the convergence criteria are met, we fix the camera intrinsics and focus-point offset. We then pick the previously sampled camera locations with the ten lowest mask loss values. For each location, we randomly sample points in its neighborhood as new camera locations, calculate the corresponding mask loss values, and select the location with the lowest value as our final camera location.
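A schematic version of this camera-initialization search is sketched below (our own illustration; `mask_loss` stands for the silhouette discrepancy between a rendering from a candidate camera location and the ground-truth mask, and the sample counts and perturbation scale are assumptions).

```python
import numpy as np

def sample_sphere(radius, n, rng):
    """Uniformly sample n points on a sphere of the given radius."""
    v = rng.normal(size=(n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

def search_camera_location(mask_loss, radius, rng=np.random.default_rng(0),
                           n_coarse=256, n_refine=32, top_k=10, local_scale=0.1):
    # Coarse stage: sample candidate camera locations on a sphere and rank them by mask loss.
    coarse = sample_sphere(radius, n_coarse, rng)
    losses = np.array([mask_loss(c) for c in coarse])
    best = coarse[np.argsort(losses)[:top_k]]
    # Refinement stage: perturb the top-k locations and keep the overall best candidate.
    candidates = [best]
    for c in best:
        candidates.append(c + local_scale * radius * rng.normal(size=(n_refine, 3)))
    candidates = np.concatenate(candidates, axis=0)
    refined_losses = np.array([mask_loss(c) for c in candidates])
    return candidates[int(np.argmin(refined_losses))]
```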
J More information on PlanetZoo
We provide the names of all categories in PlanetZoo in Tab. 9. We also show an overview figure of PlanetZoo in Fig. 12. Figure 12: Overview of PlanetZoo. We collect a novel dataset PlanetZoo containing high-fidelity and animatable 3D animal models. On the right we show renderings for the elephant with different poses. | 2022-11-08T06:42:53.820Z | 2022-11-04T00:00:00.000 | {
"year": 2022,
"sha1": "10afac32f90faf66227788e80c62ca9a10213d19",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "10afac32f90faf66227788e80c62ca9a10213d19",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
3063538 | pes2o/s2orc | v3-fos-license | A Maximum Entropy Approach to FrameNet Tagging
The development of FrameNet, a large database of semantically annotated sentences, has primed research into statistical methods for semantic tagging. We advance previous work by adopting a Maximum Entropy approach and by using Viterbi search to find the highest probability tag sequence for a given sentence. Further we examine the use of syntactic pattern based re-ranking to further increase performance. We analyze our strategy using both extracted and human generated syntactic features. Experiments indicate 85.7% accuracy using human annotations on a held out test set.
Introduction
The ability to develop automatic methods for semantic classification has been hampered by the lack of large semantically annotated corpora. Recent work in the development of FrameNet, a large database of semantically annotated sentences, has laid the foundation for the use of statistical approaches to automatic semantic classification.
The FrameNet project seeks to annotate a large subset of the British National Corpus with semantic information.
Annotations are based on Frame Semantics (Fillmore, 1976), in which frames are defined as schematic representations of situations involving various Frame Elements such as participants, props, and other conceptual roles.
In each FrameNet sentence, a single target predicate is identified and all of its relevant Frame Elements are tagged with their element-type (e.g., Agent, Judge), their syntactic Phrase Type (e.g., NP, PP), and their Grammatical Function (e.g., External Argument, Object Argument). Figure 1 shows an example of an annotated sentence and its appropriate semantic frame.
To our knowledge, Gildea and Jurafsky (2000) is the only work that uses FrameNet to build a statistical semantic classifier. They split the problem into two distinct sub-tasks: Frame Element identification and Frame Element classification. In the identification phase, they use syntactic information extracted from a parse tree to learn the boundaries of Frame Elements in sentences. The work presented here focuses only on the second phase: classification. Gildea and Jurafsky (2000) describe a system that uses purely syntactic features to classify the Frame Elements in a sentence. They extract features from a parse tree and model the conditional probability of a semantic role given those features. They report an accuracy of 76.9% on a held-out test set.
Figure 1 (example sentence): "She clapped her hands in inspiration."

We extend Gildea and Jurafsky (2000)'s initial effort in three ways. First, we adopt a Maximum Entropy (ME) framework to better learn the feature weights associated with the classification model. Second, we recast the classification task as a tagging problem in which an n-gram model of Frame Elements is applied to find the most probable tag sequence (as opposed to the most probable individual tags). Finally, we implement a re-ranking system that takes advantage of the sentence-level syntactic patterns of each sequence. We analyze our results using syntactic features extracted from a parse tree generated by the Collins parser (Collins, 1997) and compare them to models built using features extracted from FrameNet's human annotations.
Data
Training (32,251 sentences), development (3,491 sentences), and held out test sets (3,398 sentences) were generated from the June 2002 FrameNet release following the divisions used in Gildea and Jurafsky (2000) 1 .
Because human-annotated syntactic information could only be obtained for a subset of their data, the training, development, and test sets used here are approximately 10% smaller than those used in Gildea and Jurafsky (2000). 2 There are on average 2.2 Frame Elements per sentence, falling into one of 126 unique classes.
Maximum Entropy
ME models implement the intuition that the best model will be the one that is consistent with all the evidence but otherwise is as uniform as possible (Berger et al., 1996). Following recent successes using ME for many NLP tasks (Och and Ney, 2002; Koeling, 2000), we use it to implement a Frame Element classifier.
We use the YASMET ME package (Och, 2002) to train an approximation of the model below:
P(r| pt, voice, position, target, gf, h)
Here r indicates the element type, pt the phrase type, gf the grammatical function, h the head word, and target the target predicate. Due to data sparsity issues, we do not calculate this model directly, but rather, model various feature combinations as described in Gildea and Jurafsky (2000).
The classifier was trained using only features that occurred at least once in training, and training continued until performance on the development set ceased to improve. Feature weights were smoothed using a Bayesian method, such that weights are Gaussian distributed with mean 0 and standard deviation 1.
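As a rough illustration of a classifier of the form P(r | pt, voice, position, target, gf, h), the sketch below trains a maximum-entropy (multinomial logistic regression) model over one-hot encoded features. It is a stand-in for the YASMET package actually used in the paper; the toy feature values, role labels, and the scikit-learn implementation are our own assumptions.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training examples: one dict of (feature -> value) per Frame Element, with its role label r.
train_X = [
    {"pt": "NP", "voice": "active", "position": "before", "target": "clap", "gf": "Ext",  "h": "she"},
    {"pt": "NP", "voice": "active", "position": "after",  "target": "clap", "gf": "Obj",  "h": "hands"},
    {"pt": "PP", "voice": "active", "position": "after",  "target": "clap", "gf": "Comp", "h": "inspiration"},
]
train_y = ["Agent", "BodyPart", "InternalCause"]

vec = DictVectorizer()
X = vec.fit_transform(train_X)

# Multinomial logistic regression with an L2 penalty corresponds to a maximum-entropy
# model with a Gaussian prior on the feature weights.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, train_y)

test = vec.transform([{"pt": "NP", "voice": "active", "position": "before",
                       "target": "clap", "gf": "Ext", "h": "he"}])
print(dict(zip(clf.classes_, clf.predict_proba(test)[0])))  # estimated P(r | features)
```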
Tagging
Frame Elements do not occur in isolation, but rather, depend very much on what other Elements occur in a sentence. For example, if a Frame Element is tagged as an Agent it is highly unlikely that the next Element will also be an Agent. We exploit this dependency by treating the Frame Element classification task as a tagging problem.
The YASMET MEtagger was used to apply an n-gram tag model to the classification task (Bender et al., 2003).

(Footnote 1: Divisions given by Dan Gildea via personal communication. Footnote 2: Gildea and Jurafsky (2000) use 36,995 training, 4,000 development, and 3,865 test sentences; they do not report results using hand-annotated syntactic information.)

The feature set for the training data was augmented to include information about the tags of the previous one and two Frame Elements in the sentence:
P(r | pt, voice, position, target, gf, h, r_-1, r_-1 + r_-2)
Viterbi search was then used to find the most probable tag sequence through all possible sequences.
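A compact Viterbi sketch for finding the highest-probability Frame Element tag sequence under a first-order (bigram) version of the model is given below; the scoring function combining the local ME probabilities with the tag-context terms is abstracted as `local_score`, and all names here are illustrative rather than taken from the YASMET MEtagger.

```python
import numpy as np

def viterbi(n_positions, tags, local_score):
    """
    n_positions: number of Frame Elements in the sentence
    tags:        candidate element types (e.g., ["Agent", "Judge", ...])
    local_score: function (position, tag, prev_tag) -> log-probability of `tag` at `position`
    Returns the highest-scoring tag sequence.
    """
    T = len(tags)
    score = np.full((n_positions, T), -np.inf)
    back = np.zeros((n_positions, T), dtype=int)
    for j in range(T):                                   # first element: no previous tag
        score[0, j] = local_score(0, tags[j], None)
    for i in range(1, n_positions):
        for j in range(T):
            cand = [score[i - 1, k] + local_score(i, tags[j], tags[k]) for k in range(T)]
            back[i, j] = int(np.argmax(cand))
            score[i, j] = cand[back[i, j]]
    # Backtrack from the best final tag.
    path = [int(np.argmax(score[-1]))]
    for i in range(n_positions - 1, 0, -1):
        path.append(back[i, path[-1]])
    return [tags[j] for j in reversed(path)]
```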
Pattern Features
A great deal of information useful for classification can be found in the syntactic patterns associated with each sequence of Frame Elements. A typical syntactic pattern is exhibited by the sentence "Alexandra bent her head." Here "Alexandra" is an external argument Noun Phrase, "bent" is the target, and "her head" is an object argument Noun Phrase. In the training data, the syntactic pattern NP-ext, target, NP-obj, given the predicate bend, was associated 100% of the time with the Frame Element pattern "Agent target BodyPart", thus providing powerful evidence for the classification of those Frame Elements. We exploit these sentence-level patterns by implementing a re-ranking system that chooses among the n-best tagger outputs. The re-ranker was trained on a development corpus, which was first tagged using the MEtagger described above. For each sentence in the development corpus, the 10 best tag sequences are output by the classifier and described by three probabilities: 1) the sequence's probability given by the ME classifier (ME); 2) the conditional probability of that sequence given the syntactic pattern and the target predicate (pat+target); and 3) a back-off conditional probability of the tag sequence given just the syntactic pattern (pat). An ME model is then used to combine the logs of these probabilities, giving a model of the form P(tag-seq | ME, pat+target, pat).

Figure 2 shows the performance of the base ME model, the base model within a tagging framework, and the base model within a tagging framework plus the re-ranker. Results are shown for data sets trained and tested using human-annotated syntactic features and trained and tested using automatically extracted syntactic features. In both cases the training and test sets are identical.
For both the extracted and human conditions, adopting a tagging framework improves results by over 1%. However, while the syntactic pattern based reranker increases performance using human annotations by nearly 2%, the effect when using automatically extracted information is only 0.5%. This is reasonable considering that the re-ranker's effectiveness is correlated with the level of noise in the syntactic patterns upon which it is based.
The difference in performance between the models under both human and extracted conditions was relatively consistent: averaging 8.7% with a standard deviation of 0.7.
As a further analysis, we have examined the performance of our base ME model on the same test set as that used in Gildea and Jurafsky (2000). Using only extracted information, we achieve an accuracy of 74.9%, two percent lower than their reported results. This result is not unreasonable, however, because, due to limited time, very little effort was spent tuning the parameters of the model.

Figure 2. Performance of models on held-out test data. ME refers to results of the base Maximum Entropy model, Tagger to a combined ME and Viterbi search model, and Re-Rank to the Tagger augmented with a re-ranker. Extracted refers to models trained using features extracted from parse trees, Human to models using features from FrameNet's human annotations.
Conclusion
It is clear that using a tagging framework and syntactic patterns improves performance of the semantic classifier when features are extracted from either automatically generated parse trees or human annotations. The most striking result of these experiments, however, is the dramatic decrease in performance associated with using features extracted from a parse tree.
This decrease in performance can be traced to at least two aspects of the automatic extraction process: noisy parser output and limited grammatical information.
To compensate for noisy parser output, our current work is focusing on two strategies. First, we are looking at using shallower but more reliable methods for syntactic feature generation, such as part of speech tagging and text chunking, to either replace or augment the syntactic parser. Second, we are using ontological information, such as word classes and synonyms, in the hopes that semantic information may supplement the noisy syntactic information.
The models trained on features extracted from parse trees do not have access to rich grammatical information. Following Gildea and Jurafsky (2000), automatic extraction of grammatical information here is limited to the governing category of a Noun Phrase. The FrameNet annotations, however, are much richer and include information about complements, modifiers, etc. We are looking at ways to include such information either by using alternative parsers (Hermjakob, 1997) or as a post processing task (Blaheta and Charniak, 2000).
In future work, we will extend the strategies outlined here to incorporate Frame Element identification into our model. By treating semantic classification as a single tagging problem, we hope to create a unified, practical, and high performance system for Frame Element tagging.
| 2014-07-01T00:00:00.000Z | 2003-05-27T00:00:00.000 | {
"year": 2003,
"sha1": "7eff3ef2c978dcbc8b5a4ab99fa4f0a187afa5ed",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1073491&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "7eff3ef2c978dcbc8b5a4ab99fa4f0a187afa5ed",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
118452630 | pes2o/s2orc | v3-fos-license | Theoretical Estimates of 2-point Shear Correlation Functions Using Tangled Magnetic Field Power Spectrum
The existence of primordial magnetic fields can induce matter perturbations with additional power at small scales as compared to the usual $\Lambda$CDM model. We study its implication within the context of two-point shear correlation function from gravitational lensing. We show that primordial magnetic field can leave its imprints on the shear correlation function at angular scales $\lesssim \hbox{a few arcmin}$. The results are compared with CFHTLS data, which yields some of the strongest known constraints on the parameters (strength and spectral index) of the primordial magnetic field. We also discuss the possibility of detecting sub-nano Gauss fields using future missions such as SNAP.
Introduction
In recent years, weak gravitational lensing has proved to be one of best probes of the matter power spectrum of the universe. In particular, this method can reliably estimate the matter power spectrum at small scales which are not directly accessible to other methods e.g. galaxy surveys (for details and further references see e.g. Munshi et al. (2008); Hoekstra & Jain (2008); Refregier (2003); Bartelmann & Schneider (2001)).
Magnetic fields play an important role in many areas of astrophysics and are ubiquitously seen in the universe. They have been observed in galaxies and clusters of galaxies with coherence lengths up to ≃ 10-100 kpc (for a review see e.g. Widrow (2002)). There is also evidence of coherent magnetic fields up to super-cluster scales (Kim et al. 1989). Still little is known about the origin of cosmic magnetic fields and their role in the evolutionary history of the universe. These fields could have originated from dynamo amplification of very tiny seed magnetic fields ≃ 10^-20 G (e.g. Parker (1979); Zeldovich, Ruzmaikin & Sokolov (1983); Ruzmaikin, Sokolov & Shukurov (1988)).
It has been shown that the dynamo mechanism can amplify fields to significant values in collapsing objects at high redshifts (Ryu et al. 2008; Schleicher et al. 2010; Arshakian et al. 2009; de Souza & Opher 2010; Federrath et al. 2011a,b; Schober et al. 2011). It is also possible that much larger primordial magnetic fields (≃ 10^-9 G) were generated during the inflationary phase (Turner & Widrow 1988; Ratra 1992) and that the large-scale magnetic fields observed today are the relics of these fields. In the latter case, of interest to us in this paper, the magnetic field starts with a large value in the intergalactic medium, while in the former case large magnetic fields are confined to bound objects.
While the presence of primordial magnetic fields has the potential to explain the observed magnetic fields coherent over a range of scales in the present universe, such fields also leave detectable signatures in important observables at cosmological scales.
More recently, lower bounds of ≃ $10^{-15}$ G on the strength of magnetic fields have been obtained based on observations of high-energy γ-ray photons (e.g. Dolag (2010); Neronov & Vovk (2010); Tavecchio et al. (2010); Taylor et al. (2011)). Wasserman (1978) showed that primordial magnetic fields can induce density perturbations in the post-recombination universe. Further work along these lines has investigated the impact of this effect on the formation of the first structures, the reionization of the universe, and the signal from the redshifted HI line from the epoch of reionization (e.g. Kim et al. 1996; Gopal & Sethi 2003; Sethi & Subramanian 2005; Tashiro & Sugiyama 2006; Schliecher, Banerjee, Klessen 2009; Sethi & Subramanian 2009). The matter power spectrum induced by primordial magnetic fields can dominate the matter power spectrum of the standard ΛCDM model at small scales. Weak gravitational lensing can directly probe this difference and therefore reveal the presence of primordial fields or put additional constraints on their strength.
In this paper we attempt to constrain primordial magnetic fields within the framework of the two-point shear correlation function induced by gravitational lensing, including the contribution of matter perturbations induced by these magnetic fields. We compare our results with the CFHTLS Wide data (Fu et al. 2008).
Matter Power Spectrum
Tangled magnetic fields can be characterized by a power-law power spectrum: $M(k) = A k^{n}$. In the pre-recombination era, the magnetic fields are dissipated at scales below a scale corresponding to $k_{\rm max} \simeq 200 \times (10^{-9}\,{\rm G}/B_{\rm eff})$ (e.g. Jedamzik, Katalinić, & Olinto 1998; Subramanian & Barrow 1998A). Here $B_{\rm eff}$ is the RMS at this cut-off scale for a given value of the spectral index, $n$. Tangled magnetic fields induce matter perturbations in the post-recombination era which grow by gravitational collapse. The matter power spectrum of these perturbations is given by $P(k) \propto k^{2n+7}$ for $n < -1.5$, the range of spectral indices we consider here (Wasserman 1978; Kim et al. 1996; Gopal & Sethi 2003).
The magnetic field induced matter power spectrum is cut off at the magnetic field Jeans' wave number: $k_J \simeq 15\,(10^{-9}\,{\rm G}/B_{\rm eff})$ (e.g. Kim et al. 1996; Kahniashvili et al. 2010).
The dissipation of tangled magnetic fields in the post-recombination era also results in an increase in the thermal Jeans' length (Sethi & Subramanian 2005; Sethi et al. 2008). For most of the range of magnetic field strengths considered here, the scale corresponding to $k_J$ generally exceeds or is comparable to the thermal Jeans' length (Figure 4 of Sethi et al. (2008)).
For our computation, we need to know the time evolution of the matter power spectrum induced by tangled magnetic fields. It can be shown that the dominant growing mode in this case has the same time dependence as in the ΛCDM model (see e.g. Gopal & Sethi (2003) and references therein).
Weak Lensing & Cosmic Shear
The cosmic shear power spectrum $P_\kappa(\ell)$, or lensing convergence power spectrum, is a measure of the projection of the matter power spectrum $P_\delta$ and is given by the standard expression of Bartelmann & Schneider (2001) (Eq. 1), in which $\chi$ is the comoving distance along the light ray and $\chi_{\rm lim}$ is the limiting comoving distance of the survey; $f_K(\chi)$ is the comoving angular diameter distance; for a spatially flat ($K = 0$) universe $f_K(\chi)$ is numerically equal to $\chi$, with $\chi(z)$ given by Eq. (2) below; $n(z)$ is the redshift distribution of the sources; and $\ell$ is the modulus of a two-dimensional wave vector perpendicular to the line of sight. In this paper, we use the tangled magnetic field matter power spectrum as $P_\delta$ to compute the shear power spectrum for the magnetic cases.
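The equations themselves did not survive extraction. For reference, the standard Limber-approximation forms they refer to, in our reconstruction following Bartelmann & Schneider (2001) and Fu et al. (2008) (so the notation may differ in detail from the authors' Eqs. (1) and (2)), are

$$ P_\kappa(\ell) = \frac{9 H_0^4 \Omega_m^2}{4 c^4} \int_0^{\chi_{\rm lim}} \frac{{\rm d}\chi}{a^2(\chi)} \left[ \int_\chi^{\chi_{\rm lim}} {\rm d}\chi'\, n(\chi')\, \frac{f_K(\chi' - \chi)}{f_K(\chi')} \right]^2 P_\delta\!\left( \frac{\ell}{f_K(\chi)}, \chi \right), $$

and, for a spatially flat universe,

$$ \chi(z) = \int_0^z \frac{c\,{\rm d}z'}{H(z')}, \qquad H(z) = H_0 \sqrt{\Omega_m (1 + z)^3 + \Omega_\Lambda}. $$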
The cosmological shear field induced by density perturbations is a curl-free quantity and is denoted as an E-type field. One can decompose the observed shear signal into E (non-rotational) and B (rotational) components. Detection of non-zero B-modes indicates a non-gravitational contribution to the shear field, which might be caused by systematic contamination of the lensing signal. (Vector modes can also source these B modes and are likely to play a more dominant role at the angular scales of interest to us in this paper; we hope to explore this possibility in a future work.) The decomposed correlation functions $\xi_E$ and $\xi_B$ are expressed in terms of $\xi_+$, $\xi_-$ and an auxiliary function $\xi'$; here $\xi_+$ and $\xi_-$ are the two-point shear correlation functions, which are related to the matter power spectrum through the convergence power spectrum, $\theta$ is the angular separation between the galaxy pairs, and $J_{0,4}$ are Bessel functions of the first kind.
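The relations themselves are also missing from the extracted text. The standard forms the passage appears to describe, in our reconstruction following Crittenden et al. (2002) and Fu et al. (2008) (so the correspondence to the authors' Eqs. (3)-(5) is approximate), are

$$ \xi_\pm(\theta) = \frac{1}{2\pi} \int_0^{\infty} {\rm d}\ell\, \ell\, P_\kappa(\ell)\, J_{0,4}(\ell\theta), $$

$$ \xi_E(\theta) = \frac{\xi_+(\theta) + \xi'(\theta)}{2}, \qquad \xi_B(\theta) = \frac{\xi_+(\theta) - \xi'(\theta)}{2}, $$

$$ \xi'(\theta) = \xi_-(\theta) + 4 \int_\theta^{\infty} \frac{{\rm d}\vartheta}{\vartheta}\, \xi_-(\vartheta) - 12\, \theta^2 \int_\theta^{\infty} \frac{{\rm d}\vartheta}{\vartheta^3}\, \xi_-(\vartheta). $$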
Shear power spectrum from tangled magnetic field power spectrum
We use the tangled magnetic field matter power spectrum $P_\delta$ to compute the shear power spectrum $P_\kappa(\ell)$, which in turn is used to calculate $\xi_+$, $\xi_-$, $\xi_E$ and $\xi_B$ using Eqs. (3), (4) & (5). We have used the same source redshift distribution $n(z)$ as in Fu et al. (2008), with $z_{\rm max} = 6$. The values of the parameters $a$, $b$, $c$ & $A$ are taken from the same paper (Fu et al. 2008): $a = 0.612 \pm 0.043$; $b = 8.125 \pm 0.871$; $c = 0.620 \pm 0.065$; $A = 1.555$. To evaluate the integral (1) we changed the variable from $\chi$ to $z$ using (2).
In this integral $k = \ell/\chi(z)$. Again, $P_\delta(k, z)$ can be written in terms of the present-day spectrum and the growth factor $D(z)$, which, as noted above, is the same as for the flat ΛCDM model and is given by Peebles (1993). We took $z_{\rm lim} = 2.5$ for our calculations, as in Fu et al. (2008).
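The corresponding expressions were also lost in extraction; our reconstruction of the standard forms used by Fu et al. (2008) and Peebles (1993), which may differ in notation from the authors' own equations, is: the source redshift distribution

$$ n(z) = A\,\frac{z^{a} + z^{ab}}{z^{b} + c}, \qquad 0 \le z \le z_{\rm max}, $$

and the time dependence of the matter power spectrum

$$ P_\delta(k, z) = P_\delta(k, 0)\, D^2(z), \qquad D(z) \propto H(z) \int_z^{\infty} \frac{(1 + z')\,{\rm d}z'}{H^3(z')}, $$

with $D(z)$ normalized to unity at $z = 0$.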
For comparison, we also compute all the relevant quantities for the linear and non-linear ΛCDM models. For the ΛCDM linear power spectrum we used $P(k, z) = A k T^2(k) D^2(z)$, where the transfer function $T(k)$ is given by Bond & Efstathiou (1984). For the non-linear ΛCDM case we followed the prescription given in Peacock & Dodds (1996).
Results
In Figure 1 we show the tangled magnetic field matter power spectra for a range of spectral indices $n$ and magnetic field strengths $B_0$ at $z = 0$. The matter power spectra are plotted for $k < k_J$; a sharp cut-off below this scale is assumed for our computation.
For comparison, we have also displayed the linear and non-linear ΛCDM matter power spectra (the non-linear power spectrum is obtained following the method introduced by Peacock & Dodds (1996)). The figure shows that the magnetic field induced matter power spectra can dominate over the ΛCDM case at small scales; the implications of this excess power for the lensing observables are explored below. In Figure 2 we show the shear power spectra for magnetic and non-magnetic cases.
The green and red curves show the shear power spectrum for the ΛCDM linear and non-linear matter power spectra, respectively. The blue curve shows the shear power spectrum for the tangled magnetic field power spectrum ($B_{\rm eff} = 3.0$ nG and $n = -2.9$). In this figure we can see the impact of the additional power in the tangled magnetic field-induced matter power spectrum as an enhancement in the shear power spectrum on angular scales $\simeq 1'$.
The peaks of the matter power spectra of both the ΛCDM model and the magnetic-field induced case are also seen in the shear power spectra. The ratio of the angular scales at the peaks in the two cases corresponds to the ratio of the peak positions of the matter power spectra, $k_{\rm eq}/k_J$. In the ΛCDM model the power at small scales falls as $k^{-3}$, while $k_J$ imposes a sharp cut-off in the magnetic case. In both cases there is power at angular scales smaller than the peak of the matter power spectrum, but the sharp cut-off in the matter power spectrum at $k > k_J$ results in a steeper drop in the shear power spectrum as compared to the ΛCDM case. This cut-off ensures that the magnetic field-induced effects dominate the shear power spectrum over only a small range of angular scales.
In Figure 3, the two-point shear correlation functions $\xi_E$ and $\xi_B$ are shown for the magnetic and non-magnetic cases. As noted in the previous section, we use the parameters of the paper of Fu et al. (2008) for all our computations, which allows us to directly compare our results with their data, shown in Figure 3.
For a detailed comparison with the data of Fu et al. (2008), we performed a $\chi^2$ analysis including the effect of both the ΛCDM signal (non-linear model with the best-fit parameters obtained by Fu et al. (2008)) and the magnetic field induced signal. We fitted the sum of these two signals, $(\xi_E)_B + (\xi_E)_{\Lambda{\rm CDM}}$, against the CFHTLS data to obtain limits on the magnetic field strength $B_0$ and the spectral index $n$. As seen in Figure 3, the magnetic field induced signal dominates the data over only a small range of angular scales below a few arc-minutes.
However, this can put stringent constraints on the magnetic field model. Our best-fit values are $B_0 = 1.5$ nG and $n = -2.96$. In Figure 4, we show the allowed contours of these parameters for a range of $\Delta\chi^2 = \chi^2_i - \chi^2_{\rm min}$. It should be noted that $B_0 = 0$ is an acceptable fit to the data because we fix the best-fit parameters obtained by Fu et al. (2008).
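A minimal sketch of the kind of grid-based χ² fit described above, assuming binned ξ_E data with diagonal errors; the authors' actual fit may use the full covariance and different binning, and `magnetic_xi_E` is a hypothetical placeholder for the model prediction:

```python
import numpy as np

def chi2_grid(theta, xi_data, sigma, xi_lcdm, magnetic_xi_E, B0_grid, n_grid):
    """Scan a (B0, n) grid and return the chi^2 surface.

    theta    : angular bins [arcmin]
    xi_data  : measured xi_E in each bin
    sigma    : 1-sigma errors (diagonal approximation)
    xi_lcdm  : best-fit non-linear LCDM prediction per bin (held fixed)
    magnetic_xi_E(B0, n, theta) : magnetic-field-induced model signal
    """
    chi2 = np.zeros((len(B0_grid), len(n_grid)))
    for i, B0 in enumerate(B0_grid):
        for j, n in enumerate(n_grid):
            model = xi_lcdm + magnetic_xi_E(B0, n, theta)  # sum of the two signals
            chi2[i, j] = np.sum(((xi_data - model) / sigma) ** 2)
    return chi2

# Contours of Delta chi^2 = chi^2 - chi^2_min then delimit the allowed (B0, n) region.
```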
Discussion
Primordial magnetic fields leave their signatures in a host of observables in the universe.
Their impact on CMBR temperature and polarization anisotropies has been extensively studied. Yamazaki (2010) computes the allowed region in the $\{B_0, n\}$ plane by comparing the predictions of primordial magnetic field models with existing CMBR observations. Other constraints come from the early formation of structures, Faraday rotation of the CMBR polarization (e.g. Kahniashvili et al. 2010) and reionization in the presence of magnetic fields (Schleicher & Miniati 2011).
In addition to the upper bounds on the magnetic field strength obtained from these observables, recent results suggest that there might be a lower bound of $\simeq 10^{-15}$ G on the magnetic field strength (e.g. Dolag (2010); Taylor et al. (2011)). This would suggest that the magnetic field lies in the range $10^{-15}\,{\rm G} < B_0 <$ a few $\times\,10^{-9}$ G. This range is still too large for a precise determination of the magnetic field strength.
How do our constraints (Figure 4) compare with the existing bounds on primordial magnetic fields? CMBR constraints (e.g. Figure 1 of Yamazaki (2010)) are stronger than our constraints for $n < -2.95$. For the entire range of spectral indices above this value, we obtain stronger upper limits on $B_0$. Our limits are comparable to bounds obtained from the formation of early structures, which also arise from the excess power in the magnetic field-induced matter power spectrum (e.g. Kahniashvili et al. (2010)).
Can primordial magnetic fields be detected in the weak lensing data? As seen in the comparison with the Fu et al. (2008) data, the present data are noisy at the scales at which magnetic fields begin to make a significant contribution, at least partly owing to errors inherent in ground-based measurements of shear, e.g. correction for the point spread function, etc. (e.g. Figure 4 of Schrabback et al. (2010); a brief look at this figure might suggest that their measurements would already put stronger constraints on the magnetic field strength than presented here). Future proposed space missions such as SNAP are likely to greatly improve the errors on these measurements. (Displaced figure caption: comparison with the data of Fu et al. (2008); the shaded area is the 1-σ allowed region; the three curves, from top to bottom, are contours at the 5σ, 3σ and 1σ levels.)
A comparison of Figure 4 of the white paper on weak lensing with SNAP (Albert et al. 2005) with the Figure 3 of this paper suggests that SNAP would easily be able to probe sub-nano Gauss magnetic fields.
The magnetic field signal could be degenerate with the overall normalization of the ΛCDM model as measured by $\sigma_8$; WMAP 7-year data give $\sigma_8 = 0.801 \pm 0.030$ (Larson et al. 2011). The WMAP results are in reasonable agreement with the value of $\sigma_8$ inferred from the weak lensing data. This error is not sufficient to mimic the much larger signal from the magnetic field strengths considered in this paper (e.g. Figure 4 of Schrabback et al. (2010)).
However, a more careful analysis will be needed to distinguish the error in σ 8 from the sub-nano Gauss magnetic fields.
One uncertainty in our analysis is that the magnetic Jeans' scale, unlike the thermal Jeans' scale which is well defined in linear perturbation theory, is obtained within an approximation in which the backreaction of the magnetic field on the matter is not exactly captured (e.g. Kim et al. 1996; Sethi & Subramanian 2005). Even though our results capture qualitatively the impact of such a scale, there could be more power on sub-Jeans' scales which is lost owing to our approximation of a sharp cut-off in $k$. As noted in section 2, the cut-off scale is the larger of the magnetic Jeans' length and the thermal Jeans' length. Magnetic field dissipation can raise the temperature of the medium to $\simeq 10^4$ K, thereby raising the thermal Jeans' length of the medium (see Figure 4 of Sethi et al. (2008) for a comparison between the two scales for different magnetic field strengths). For $B_0 \gtrsim 10^{-9}$ G, the magnetic Jeans' scale is the larger of the two scales, as the maximum temperature of the medium reached owing to this process doesn't exceed $10^4$ K. This would also be true in the more general case, as photoionization of the medium by other sources, e.g. the sources that could have caused the reionization of the universe at $z \simeq 10$, also results in comparable temperatures. For magnetic field strengths smaller than those considered in this paper, the cut-off scale is likely to be determined by the thermal Jeans' scale, set by photoionization of the medium by sources other than magnetic field dissipation. Our approximation allows us to identify the important length and angular scales for our study (Figures 2 and 3). However, further work along these lines could extend our analysis by taking into account the physical effects at sub-magnetic-Jeans' scales.
The analysis of the Lyman-α forest in the redshift range $2 \lesssim z \lesssim 4$ is another powerful probe of the matter power spectrum at small scales (e.g. Croft et al. 2002). Primordial magnetic fields can alter this interpretation in many ways: (a) more small-scale power owing to the magnetic field induced matter power spectrum (Figure 1), (b) dissipation of the magnetic field can change the thermal state of the Lyman-α clouds (e.g. Sethi, Haiman, Pandey 2010; Sethi & Subramanian 2005), (c) the magnetic Jeans' length can reduce the power at the smallest probed scales. We hope to undertake this study in a future work. | 2012-01-25T07:58:46.000Z | 2012-01-18T00:00:00.000 | {
"year": 2012,
"sha1": "2141f3a61d00bacd670fa3cc4160509064ff4fa4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1201.3619",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2141f3a61d00bacd670fa3cc4160509064ff4fa4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
4690031 | pes2o/s2orc | v3-fos-license | Multiple Margin Positivity of Frozen Section Is an Independent Risk Factor for Local Recurrence in Breast-Conserving Surgery
Purpose Breast-conserving surgery (BCS) with radiotherapy has become a standard treatment for early stage breast cancer, since the installation of NSABP B-06. One of the serious problems in BCS is that of local recurrence. There are many risk factors for local recurrence, such as large tumor size, multiple tumors, axillary lymph node involvement, young age, high nuclear grade, and so on. The aim of this study is to identify patients with a higher risk of local recurrence of breast cancer. Methods Between January 2002 and December 2006, 447 patients with breast cancer, and who had undergone BCS with immediate breast reconstruction, were enrolled in the study. The follow-up period was 5 years from the time of operation and we analyzed local recurrence, disease-free survival (DFS), and overall survival (OS). The analysis included various clinicopathological factors such as age, chemotherapy, radiotherapy, hormone therapy, pathologic characteristics, and margin status. Statistical analysis was performed with log-rank test and Kaplan-Meier method. The p-value <0.05 was considered statistically significant. Results The mean follow-up period was 88 months and local recurrence of breast cancer occurred only in 16 cases (3.6%). The actual 5-year DFS, and OS rates were 90.6% and 93.3%, respectively. For the local recurrence of breast cancer, positive margin status, multiple margin positivity, conversed margin cases, T/N stages showed statistical significance in univariate analysis. However, only multiple margin positivity was identified as an independent risk factor for local recurrence in multivariate analysis. Conclusion When the multiple margin positivity is diagnosed on intraoperative frozen biopsy, surgeons should consider a much wider excision of the breast and a more aggressive management.
INTRODUCTION
Breast-conserving surgery with radiotherapy has become a standard treatment for early breast cancer since the treatment was determined to have an equivalent survival rate to mastectomy [1][2][3][4]. One of the serious problems in breast-conserving surgery is local recurrence and hence, the most important factor for successful breast-conserving surgery is the complete resection of the tumor.
The local recurrence rate (LRR) of breast cancer has been reported as 3% to 20% [5][6][7][8][9]. Although several studies have reported that local recurrence does not greatly influence overall survival (OS), it still poses concerns since re-excision of the tumor is necessary and the incidence of contralateral breast cancer becomes much higher in cases with local recurrence [10].
The risk factors for local recurrence of breast cancer which have been reported include large tumor size [8,11], multifocality [9,[12][13][14][15][16], axillary lymph node involvement [11], young age [9,17,18], high nuclear grade [12,19], extent of the intraductal component and positive surgical margin status. Among these, the most important factor involved in local recurrence is positive surgical margin status [7,9]. In order to perform a complete excision with clear resection margin, it is important to secure 1 to 2 cm of distance from tumor and perform an intraoperative evaluation of the margin status. When a positive surgical margin is reported on intraoperative frozen biopsy, or when a pathologic report for margin status shows a result to be reversed from negative to positive, re-excision should be performed to prevent local recurrence. These are very stressful situations for both patients and surgical oncologists. Thus, evaluation of the tumor size, number, location, and morphologic features should be done via imaging modalities prior to surgical planning.
The aim of this study was to identify the most important high risk factors for local recurrence of breast cancer. The authors investigated additional risk factors besides those which have been previously suggested and focused on multiple surgical margin positivity.
METHODS
Between January 2002 and December 2006, the data of 447 patients with breast cancer who underwent breast-conserving surgery were collected for this study. Data were recorded prospectively and were analyzed retrospectively. Exclusion criteria included stage IV breast cancer, synchronous or metachronous malignancy in other organs.
All breast cancer was diagnosed by needle or excision biopsy, and the size, number, and location of the tumor were identified through mammography, ultrasonography, and breast magnetic resonance imaging (MRI) prior to surgery. According to the tumor stage and characteristics, neoadjuvant chemotherapy, adjuvant chemotherapy, radiotherapy, or hormone therapy was applied in each case.
Informed consent was obtained from all patients and the protocol used in this study was approved by the Institutional Review Board Committee of the Pusan National University Hospital (H-1211-005-012).
Clinicopathological factors
A follow-up period of 5 years was set up from the time of operation. Local recurrence, disease-free survival (DFS), and OS were investigated during this period. Disease follow-up was performed every 6 months based on blood tests with a tumor marker, chest plain X-ray, mammography, breast and abdomen ultrasonography, bone scan, brain computed tomography (CT), and positron emission tomography/computed tomography (PET/CT).
The patients were divided into two age groups, based on 50 years as the supposed age of menopause. Patients were also classified into groups that received neoadjuvant chemotherapy, adjuvant chemotherapy, radiotherapy or hormone therapy. Information on surgical margin status included simple positive results, multiple positive results, and conversion cases which were negative in the frozen section but positive in the final pathologic report. Surgical margin positivity was defined as the presence of atypical cells, carcinoma in situ, or invasive cancer cells within 5 mm of the cut surface. Multiple margin positivity was defined as a positive margin result found more than twice in the same site or in more than two different sites simultaneously (Figure 1). A negative surgical margin was defined as a margin with at least 5 mm of free distance in the frozen section or in the final pathologic report. Conversion cases which underwent re-operation were included among the positive margin cases.
The morphologic features of the tumor margin were classified as round, irregular, spiculated, or amorphous, according to the images on breast MRI. Tumor type, stage, nuclear grade, histologic grade, presence of estrogen receptor or progesterone receptor, HER2/neu gene expression and triple negative were also verified from pathologic reports.
Surgical technique with assessment of margin status
The surgical margin was set at 2 cm from the tumor, as determined by preoperative ultrasonography. When the tumor was not palpable, we performed ultrasound-guided H-wire localization so as not to miss the tumor during the operation. Either sentinel node biopsy or axillary lymph node dissection was performed according to the axillary lymph node status. To evaluate the surgical margin status, tissues were obtained using the circumferential method from 12 directions of the surgical cavity. Determination of the surgical margin was performed by three different pathologists in random order. Re-excision and secondary margin evaluation were performed when a positive margin was diagnosed in the intraoperative frozen section. After negative surgical margins were confirmed, breast reconstruction methods such as local flap, thoraco-epigastric flap, lateral thoracic fasciocutaneous flap, and latissimus dorsi myocutaneous flap were applied according to the volume and location of the removed breast tissue.
Statistical analysis
Statistical analyses were performed with SPSS version 16.0 (SPSS Inc., Chicago, USA). Categorical variables were compared using the chi-square test, and actual 5-year DFS and OS were evaluated with the Kaplan-Meier method. Comparison of local recurrence-free survival between two groups was examined using the log-rank test in univariate analysis. A Cox proportional hazards model was used to analyze various prognostic factors in multivariate analysis. A p-value < 0.05 was considered statistically significant.
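The original analyses were run in SPSS; purely as an illustration, the same three steps (Kaplan-Meier estimation, a log-rank comparison between two groups, and a multivariate Cox model) could be reproduced in Python with the lifelines package, assuming a data frame with hypothetical columns `time`, `event`, `multi_margin`, `t_stage` and `n_stage`:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("bcs_cohort.csv")   # hypothetical file: one row per patient

# Kaplan-Meier estimate of local-recurrence-free survival
kmf = KaplanMeierFitter()
kmf.fit(df["time"], event_observed=df["event"])

# Univariate comparison (log-rank) between multiple-margin-positive and other patients
pos = df["multi_margin"] == 1
lr = logrank_test(df.loc[pos, "time"], df.loc[~pos, "time"],
                  event_observed_A=df.loc[pos, "event"],
                  event_observed_B=df.loc[~pos, "event"])
print(lr.p_value)

# Multivariate Cox proportional hazards model over the candidate risk factors
cph = CoxPHFitter()
cph.fit(df[["time", "event", "multi_margin", "t_stage", "n_stage"]],
        duration_col="time", event_col="event")
cph.print_summary()
```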
RESULTS
The mean age of the patients was 46.6 years (range, 24-75 years) and the mean follow-up period was 88 months. In our study, local recurrence of breast cancer occurred only in 16 cases (3.6%) and actual 5-year DFS, and OS rates were 90.6% and 93.3%, respectively ( Figure 2).
Three hundred four patients (68.0%) were below the age of 50 years, and those under 50 years of age presented with a higher LRR than those over 50 years of age. There was, however, no statistical significance in the LRRs between the two age groups. Neoadjuvant chemotherapy and adjuvant chemotherapy were applied to 282 patients (63.1%) and 406 patients (90.8%), respectively. Moreover, irradiation was applied to nearly all patients (434, 97.1%) after breast-conserving surgery.
The local recurrence of breast cancer was identified only in the group who received radiotherapy, and eight cases of local recurrence occurred in each of the hormone therapy groups. These treatment modalities, however, were not statistically significant (Table 1). The mean tumor size was 2.0 cm, and the morphological features of the tumor margin were not associated with the local recurrence of breast cancer. However, the overall, T, and N stages significantly contributed to the LRR of breast cancer (p < 0.001, p = 0.001, and p = 0.008, respectively). The tumor types were classified as ductal, lobular, mucinous lesion, and others. There were 52 cases (11.6%) of carcinoma in situ and 366 cases (81.8%) of invasive carcinoma lesions. We found no influence of tumor type on the risk of local recurrence.
No differences were found in nuclear grade, histologic grade, lymphovascular invasion, and perineural invasion among the groups. Information of hormone receptors was available in 444 cases (99.3%), but again, no significant differences were found with regard to recurrence. In our study, HER2/neu gene positive rate, and triple negative rate were 25.3% and 14.8%, respectively and this was similar to other previous studies. However, unlike previous reports, HER2/neu positivity or triple negativity did not show a statistically significant association with LRR.
Among 90 cases (20.1%) with a positive surgical margin, multiple margin positivity was identified in 37 cases (8.3%). In 10 cases, the final pathologic report was converted relative to the frozen biopsy result, from a less to a more aggressive lesion. Re-excision was performed in 4 of these cases and the other 6 cases received only irradiation. There were 10 (11.1%) and 9 (24.3%) cases of local recurrence in margin positive cases and multiple margin positive cases, respectively. Pathologically, the LRR after breast-conserving surgery was attributable to a positive surgical margin, multiple margin positivity and conversion results (p < 0.001, p < 0.001, and p = 0.008, respectively). However, the types of positive margin and of multiple margin positivity were not associated with local recurrence (p = 0.699, p = 0.424) (Table 2).
Among various clinicopathological factors, the margin positive cases and multiple margin positive cases showed statistical significance only with T stage (p = 0.019, p = 0.011), and the pathologic characteristics of positive margin cases are shown in Table 3.
The adjusted multivariate analysis for risk factor of local recurrence included a positive surgical margin, multiple margin positivity, conversion cases, T stage, N stage and overall stage which showed significance in univariate analysis. However, only multiple margin positivity was an independent risk factor for local recurrence of breast cancer (p= 0.031) ( Table 4).
In univariate analysis of OS, statistical significance was shown in N stage (p= 0.007), overall stage (p< 0.001), lymphovascular invasion (p = 0.001), and presence of progesterone receptor (p = 0.021). However, only N stage and overall stage were associated with OS independently, and the local recurrence was not associated with OS in our study.
DISCUSSION
LRRs after breast-conserving surgery for breast cancer range from 2% to 20%, and for early breast cancer were 12% and 20% at 5 and 10 years, respectively [5][6][7][8][9]20,21]. Although previous randomized trials have reported that groups with a high incidence of local recurrence demonstrate the same OS as those with a low incidence of local recurrence [1,2], patients with local recurrence of breast cancer experience not only anxiety in relation to the recurrence, but also stressful situations such as re-operation, chemotherapy, etc. Hence, identification of patients with high risk factors for local recurrence is very important.
The risk factors of local recurrence in breast cancer have been reported as young age, large tumor, positive surgical margin, extensive intraductal component, multifocality, axillary lymph node involvement, extranodal extension, and high nuclear grade. Among these factors, positive surgical margin is the most important factor because it is the only factor which can be controlled by surgeons [6].
Generally, breast cancer in young age groups shows more aggressive tumor progression. However, there are some conflicting reports about the incidence of local recurrence between age groups [12,13,17]. In the present study, no difference in local recurrence was found between patients younger than 50 and those older than 50. However, it is difficult to draw conclusions since treatment strategies were not strictly controlled.
Chemotherapy and radiotherapy have been reported to be associated with local control and OS. However, in our study, there was no significant association of the incidence of local recurrence with chemotherapy and radiotherapy. Nevertheless, the result is not reliable because most patients in this study received these treatments.
Several studies have reported margin positive rates in primary excision of 4% to 14% [5,7,9]. When a positive surgical margin is diagnosed, surgeons should perform re-excision immediately until a negative result is confirmed. The authors assessed the tumor size and location with preoperative imaging modalities and evaluated the frozen sections for surgical margins. In our study, the primary margin positive rate and multiple margin positive rate were 20.1% and 8.3%, respectively. Both showed statistical significance with local recurrence of breast cancer in univariate analysis. However, multiple margin positivity was associated only with the pathological T stage, and not with the clinical T stage. Multiple margin positivity was the only independent risk factor for local recurrence in the adjusted multivariate analysis. This means that multiple margin positivity would be detected more often in cases of larger pathologic tumors, and that the risk of local recurrence would be higher with inadequate resection, even if surgeons confirmed a negative surgical margin during the operation and performed the standard treatments. In these cases, surgeons should consider a much larger scale of surgery, or mastectomy, to remove any remaining tumor cells and prevent local recurrence. Of course, for successful results, surgeons and patients should discuss their options before surgery, and the surgeon should ideally be proficient in various oncoplastic techniques. According to previous reports, multiple margin positivity has been considered a feature of extensive ductal carcinoma in situ. However, positive results in our study included cases of atypical cells, carcinoma in situ, and invasive carcinoma. Ductal carcinoma in situ showed no significance in our study, in terms of its relationship with either the type of cancer or the stage of cancer.
The surgical margin status has been accepted to be the most important risk factor, because it is the only risk factor which is controllable by surgeons. For the accurate diagnosis of surgical margin, there are several requirements. First, a guideline for the margin status should be established. There is no clear consensus as to the definition of a positive surgical margin. To establish a treatment guideline, an in-depth discussion between surgeons and pathologists, in order to achieve a definition of a positive result is needed. Based on previous reports, it might be reasonable for positive surgical margins to include atypical cells, in situ or invasive cancer cells within 5 mm from cut surface [22][23][24]. Second, adequate specimens should be obtained. There are some limitations to performing frozen biopsies when margin specimens contain an artifact of electrocautery or fat tissue. Margin tissue should be taken with surgical scissors from the breast parenchyma. Third, the surgical margin evaluation should be performed by surgeons. Only surgeons can recognize correct directions and decide for re-excision. Many authors are using "total-circumference intraoperative frozen method" and taking the tissues from the remnant breast cavity for margin evaluation, due to its varying directions and farther tissues from the tumor [25].
In the pathologic assessments, there were conversion cases which were diagnosed as negative in the frozen section but as positive in the final pathologic report. When the surgical procedure is incomplete, re-excision is recommended for more than a focal degree of margin positivity, while irradiation treatment alone is sufficient for a focal degree of margin positivity [6]. There were 6 cases with irradiation treatment only in this study, and one case of local recurrence was confirmed during the follow-up. However, conversion was not an independent risk factor for local recurrence.
Large tumor size, positive lymph nodes, estrogen-negative status, high histologic grade, and lymphovascular invasion are also risk factors for the local recurrence of breast cancer [7,26,27]. Although the overall stage and T/N stages in this study showed statistical significance in univariate analysis, they were not independent factors, and other pathologic characteristics, including hormone status, were not associated with the local recurrence of breast cancer, even in univariate analysis.
The most important point of this study is that multiple margin positivity would be an independent risk factor for the local recurrence of breast cancer, and the identification of tumor size and location, multifocality, and morphologic features of the tumor should be assessed before surgery in order to prevent multiple margin positive cases. However, only a few reports have described multiple margin positivity and they did not suggest guidelines for margin evaluation or treatment strategies.
There are, of course, some limitations to our study. The follow-up period was only 5 years, and this was a single-institution investigation with a small population. According to rules of thumb such as 10 or more events per predictor variable, the 16 cases of local recurrence in our study are not a sufficient number for reliable prediction [28]. If a multicenter investigation with a larger population and long-term follow-up is performed, a much more concrete conclusion can be drawn.
Based on our results, multiple margin positivity of breast cancer is an independent risk factor for the local recurrence of breast cancer. In conclusion, authors recommend a much larger scale of oncoplastic surgery, equivalent to mastectomy, for successful breast-conserving surgery when multiple margin positivity is confirmed. | 2018-04-03T02:48:13.425Z | 2012-12-01T00:00:00.000 | {
"year": 2012,
"sha1": "4a774dd7e1c2ef5711f77ebbdf1d1811cffac997",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4048/jbc.2012.15.4.420",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a774dd7e1c2ef5711f77ebbdf1d1811cffac997",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249152154 | pes2o/s2orc | v3-fos-license | Double Deep Q Networks for Sensor Management in Space Situational Awareness
We present a novel Double Deep Q Network (DDQN) application to a sensor management problem in space situational awareness (SSA). Frequent launches of satellites into Earth orbit pose a significant sensor management challenge, whereby a limited number of sensors are required to detect and track an increasing number of objects. In this paper, we demonstrate the use of reinforcement learning to develop a sensor management policy for SSA. We simulate a controllable Earth-based telescope, which is trained to maximise the number of satellites tracked using an extended Kalman filter. The estimated state covariance matrices for satellites observed under the DDQN policy are greatly reduced compared to those generated by an alternate (random) policy. This work provides the basis for further advancements and motivates the use of reinforcement learning for SSA.
I. INTRODUCTION
In an era of regular and frequent launches of satellites to low Earth orbit (LEO), the possibility of collisions between resident space objects continues to increase, and poses a significant threat to space based infrastructure. The Kessler Syndrome -a cascade of collisions that could render any satellite use in LEO extremely difficult and costly -becomes an increasing risk [1]. Between one third and one half of the capacity of LEO space has already been occupied [2].
Space Situational Awareness (SSA) is the understanding of the complex orbital domain, involving man made objects, and natural phenomena [3]. Ground-based surveillance and tracking of man made objects in orbit can be achieved with a variety of instruments, including radars and optical telescopes. Measurements are required to be able to predict the trajectories of the objects, to assess the risk of potential collisions. However, measurements must be made repeatedly, as the orbit of any satellite is subject to change. These changes may be small perturbations, but the accumulation of small changes over time can be significant. In the LEO environment, there are several factors that could affect the orbit of satellites -most notably intentional manoeuvres, atmospheric drag, or solar radiation pressure could alter the orbit from a predicted trajectory. With limited sensor availability, efficient sensor management (SM) algorithms are necessary for long-term SSA. Given the large number of objects in LEO, the problem suffers from a combinatorial explosion as the number of possible actions increases [4]. The European Space Agency is investing in improving the long-term sustainability of the space domain [5], and employing novel methods to improve SSA and help accomplish this goal. Objects orbiting the earth in LEO have short orbital periods, meaning they cannot be observed reliably from a single site; and these sites are often constrained to making measurements in clear weather and of restricted patches of sky. Therefore, using multiple sensors located around the globe is highly beneficial, but comes with a cost and is a considerable SM challenge.
Deep reinforcement learning (DRL) is one possible solution to this problem. DRL is the combination of standard reinforcement learning algorithms with neural networks to solve Markov decision processes (MDPs). DRL has been applied to various fields with large action spaces, and has produced impressive results [6]- [8].
II. FILTERING AND STATE ESTIMATION
In this paper, we aim to estimate {X}, the set of state vectors describing satellites' positions and velocities, using measurements {Y }. The optimal state estimation algorithm for linear, Gaussian systems is the well known Kalman filter (KF). The Kalman filter is the best linear estimator for reducing the mean square error [9]. For slightly nonlinear systems, adaptations of the KF exist to attempt state estimation while overcoming some of these non-linearities. The extended Kalman filter (EKF) employs state transition and measurement functions, as opposed to simple matrices, to propagate the state estimates. However, the covariance is propagated linearly through the step, so the EKF is only suitable for systems with modest non-linearities. The unscented Kalman filter (UKF) develops this further, by generating sigma points around the target position, and propagating these through the nonlinearity, and reforming the covariance [10]. In this paper we will only use an EKF for simplicity but the approach is readily extendable to UKF or other state estimation methods.
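For reference, a minimal numpy sketch of a single EKF predict/update cycle is given below; the functions f, h and their Jacobians stand in for the orbital propagation and measurement models described later, and this is a generic illustration rather than the Pysatellite implementation:

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter predict/update cycle.

    x, P : prior state estimate and covariance
    z    : new measurement (already in the filter's reference frame)
    f, h : nonlinear transition and measurement functions
    F_jac, H_jac : functions returning their Jacobians at a given state
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state through the nonlinear dynamics,
    # but propagate the covariance linearly via the Jacobian.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update: standard Kalman gain computed from the linearized measurement model.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```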
III. REINFORCEMENT LEARNING
Reinforcement learning (RL) is a machine learning method in which an intelligent agent must make decisions to maximise its received reward, which is determined by the results of the actions it takes [11], [12]. RL differs from other machine learning areas in that the model can be unknown: the agent need only know the actions and the reward, as well as some observation of the environment's transition into new time steps as the environment evolves over time.
Observations are usually related to some value in the environment that determines the amount of reward returned. This can be ideal for SM applications, particularly in SSA, where we do not need to model a potentially complex environment for the agent to interpret. This means RL can work in much higher dimensions than other dynamic programming approaches.
Markov decision processes (MDPs) are the underlying formulations that RL algorithms are built upon. MDPs operate discretely: at each time step, an action is taken, the state reacts to this action via a transition, and a reward is given. The transition is defined as $P_{ss'} = \mathbb{P}[S_{t+1} = s' \mid S_t = s]$, where $P_{ss'}$ is the state transition probability, $s$ is the Markov state at time $t$ and $s'$ is the successor state. The goal of an MDP is to find a policy, matching states to actions, that receives the maximum reward.
Previous work into RL for SSA includes applications of DRL in [13] and [14]. They show proof-of-concept results for applying RL to the SSA problem, using Actor-Critic methods. The actor refers to the policy, that asks the estimated value function, or critic, about the next possible state values, which the critic improves during learning. More recently, extensions to the previous work were completed that added more complexity and required less intensive compute resources to run [15], [16]. We present the first implementation of a DDQN to the sensor management problem for SSA, as opposed to the Actor-Critic methods cited above.
A. Q-learning
Q-learning is a simple value iteration update on a Markov decision process. Q-values, or quality-values, are state-action values, and refer to the expected reward gained by taking a certain action in a given state. Q-learning attempts to first find Q-values for a range of states and actions, and to then exploit the Q-values by selecting the action that returns the highest reward at any state, in a greedy policy. Q-learning is different from a Q-value iteration algorithm, as the transition probabilities and rewards are initially unknown.
Q-learning is defined by an update rule in which Q, the expected reward, is a function of the action a and state s at time t; α is the learning rate, and 0 < γ < 1 is the discount factor. α is a tuning parameter that determines how quickly the algorithm learns new information. γ is a parameter required for convergence of the algorithm, and determines how much weight is given to information in the future. If γ is close to 1, the future is valued almost as much as the present. If γ is close to 0, immediate information is much more highly valued [17].
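The update rule itself did not survive extraction; the standard tabular Q-learning update that this passage describes is

$$ Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]. $$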
B. Deep Q Network
A DQN is an implementation of Q-learning. The main issue with Q-learning is that it does not scale to larger problems with larger action and state spaces. Deep Q-learning was developed to overcome this. Deep Q-learning uses neural networks (NNs) and experience replay to use a random sample of prior actions instead of just the most recent action. Some well-known DQNs use convolutional NNs: hierarchical layers of tiled convolutional filters to mimic the effects of receptive fields [18]. Receptive fields are defined as the association of input fields to output fields. Experience replay is the use of batches of sampled transitions for better data efficiency and stability. DQN is the term given to implementations of Q-learning algorithms applied to such NNs.
C. Double Deep Q Network
It has been shown that DQNs commonly overestimate action values in certain situations and produce over-confident Q-values [19]. To solve this problem, Double Deep Q Networks were developed. In DQNs, the same max operator is used both to select and to evaluate actions, which leads to overly confident value estimates. By using two sets of weights, θ and θ′, with one used to determine the policy and the other to evaluate it, this problem is effectively overcome.
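A minimal sketch of how the double-Q target described above can be formed, assuming hypothetical `q_online` and `q_target` callables that return per-action Q-values (TF-Agents constructs this target internally; the names here are illustrative only):

```python
import numpy as np

def ddqn_target(reward, next_state, gamma, q_online, q_target):
    """Double DQN target: select the action with the online network (theta),
    evaluate it with the periodically copied target network (theta').
    Terminal-state masking is omitted for brevity."""
    best_action = np.argmax(q_online(next_state))               # selection with theta
    return reward + gamma * q_target(next_state)[best_action]   # evaluation with theta'
```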
IV. PROBLEM SIMULATION
In this scenario, we create a satellite simulation using the Python package Pysatellite, a GitHub repository being developed by the author [20]. This package implements orbit generation, reference frame transformations, and target tracking - in this scenario through the use of an EKF. We generate 25 LEO satellites using a Keplerian model, visualised in Fig. 1. Higher-order terms such as solar radiation pressure and atmospheric drag will be included in further iterations. In this paper, it is assumed that all satellites follow circular orbits at a radius from the centre of the Earth of $R = 7 \times 10^6$ metres. For this implementation, we find that using an EKF is adequate to handle the non-linearities of the system, but in future work a UKF or particle filter may be more suitable.
Detections are generated by simulating a telescope on the surface of the Earth which measures azimuth, elevation, and range with additive Gaussian noise. Fig. 2 shows the satellite paths from the telescope's point of view. The measurements are transformed into the Earth-centred inertial (ECI) reference frame, a Cartesian frame with its origin at the centre of the Earth, through which the Earth rotates. The EKF operates in an ECI reference frame, with a state vector $X = (x, y, z, x_v, y_v, z_v)$, which encodes the Cartesian position and velocity of the satellite. Measurements are transformed from azimuth, elevation, and range to ECI via an intermediate local frame, where φ, θ, and R are the azimuth, elevation and range coordinates of the satellite respectively, $Y_{NED}$ refers to the commonly used North, East, Down reference frame, $\phi_0$ and $\lambda_0$ are the latitude and longitude of the sensor, respectively, ω is the Earth rotation rate, and t is the sidereal time. In our simulated problem, we define our own ECI and ECEF reference frames related to the time elapsed between frames. For dealing with real data, ECI and ECEF reference frames that relate to common time bases must be used. The method explained here does not account for higher-order specificities associated with real-world reference frames, but is suitable for a simple geoid simulation.
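The transformation equations themselves did not survive extraction; the sketch below shows one standard way to chain AER → NED → ECEF → ECI for a spherical-Earth simulation. The exact sign and frame conventions may differ from the authors' Pysatellite implementation, and all names here are illustrative:

```python
import numpy as np

def aer_to_eci(az, el, rng, lat0, lon0, sensor_ecef, omega, t):
    """Convert an (azimuth, elevation, range) measurement to a simple ECI frame.

    lat0, lon0  : sensor latitude / longitude [rad]
    sensor_ecef : sensor position in the ECEF frame [m]
    omega, t    : Earth rotation rate [rad/s] and elapsed sidereal time [s]
    """
    # AER -> local North-East-Down (NED) vector
    ned = rng * np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          -np.sin(el)])

    # NED -> ECEF rotation at the sensor site
    sin_p, cos_p = np.sin(lat0), np.cos(lat0)
    sin_l, cos_l = np.sin(lon0), np.cos(lon0)
    ned_to_ecef = np.array([[-sin_p * cos_l, -sin_l, -cos_p * cos_l],
                            [-sin_p * sin_l,  cos_l, -cos_p * sin_l],
                            [ cos_p,          0.0,   -sin_p]])
    ecef = sensor_ecef + ned_to_ecef @ ned

    # ECEF -> ECI: rotate about the z-axis by the Earth rotation angle
    a = omega * t
    rot_z = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
    return rot_z @ ecef
```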
A measurement noise matrix is generated in the AER frame for each measurement. We simulate ideal diffraction-limited measurements, where $\sigma_\theta$ and $\sigma_r$ are the standard deviations expected for ideal angle and range measurements. This noise matrix is converted to the ECI frame by calculating the Jacobian matrix J of the measurement through the above transformation from AER to ECI and applying it to the measurement noise matrix. For the learning environment, we use TensorFlow Agents, an accessible and approachable RL framework in Python. Agents allows users to create their own environments and apply them to a range of different RL algorithms, including DQN, Deep Deterministic Policy Gradient (DDPG), and others; for our simulation we used a DDQN. The DDQN uses a replay buffer and stochastic gradient descent to calculate the loss.
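The matrices referred to above are missing from the extraction; a natural reading of the passage (our reconstruction, with the ordering of the diagonal entries assumed) is

$$ R_{\rm AER} = {\rm diag}\left( \sigma_\theta^2,\ \sigma_\theta^2,\ \sigma_r^2 \right), \qquad R_{\rm ECI} = J\, R_{\rm AER}\, J^{\mathsf T}. $$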
We state the following definitions for clarity: iterations refer to the number of training episodes that have occurred. Episodes refer to one full run of an environment, made up of t time-steps. At each iteration, the step transitions are added to a circular buffer that stores the last n number of iterations. During learning, a small sample of the buffer is used to calculate the loss, instead of just the last transition. This provides two benefits: as each transition is sampled many times, a higher data efficiency is achieved, and using uncorrelated transitions leads to a better stability of data.
We model a telescope control scenario, with a sensor state $T = (\phi, \theta)$, where $\phi$ is the telescope's azimuth pointing and $\theta$ is the telescope's elevation pointing. At each time step of the environment, the agent can choose from 5 possible actions: move up, down, left, right, or do nothing. As the environment is discretised, we assume that when an action is taken, the telescope state in the next time-step will be equal to the maximum distance travelled in that direction, based on a telescope slew rate of 2°/s. Up and down refer to the telescope's elevation pointing, and left and right refer to the telescope's azimuth pointing. If actions are taken that would be unfeasible, such as the telescope pointing below the horizon, no action is taken. As elevation can cross the zenith at $\pi/2$ rad, and azimuth is bounded by $0 < \phi < 2\pi$ rad, actions that would take the telescope direction out of this range are wrapped. For each satellite, we find the difference between the centre of the telescope Field of View (FoV) and the satellite's azimuth and elevation position.
Here $d_\phi$ and $d_\theta$ are the differences in azimuth and elevation, respectively, $s_\phi$ and $s_\theta$ are the satellite's azimuth and elevation positions, respectively, and $T_\phi$ and $T_\theta$ are the telescope's centre azimuth and elevation pointing, respectively. If both the azimuth and elevation differences are within the FoV of the telescope, then a reward is given. The reward is cumulative in each time-step, so if multiple satellites are detected, the reward will increase accordingly. Algorithm 1 shows the basic operation of the RL environment. Algorithm 2 shows the training loop used to improve the agent's policy and select the best action at each time-step. The agent trains for a certain number of iterations, and its performance is periodically evaluated using a sample of episodes executed with the current policy. We then run another episode to generate measurements using the current policy, which are used in the EKF.
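A condensed sketch of the per-step logic described above (Algorithm 1 in the paper), assuming a half field of view of fov/2 and simplified handling of zenith crossings and azimuth wrap-around; the structure and names are illustrative rather than the authors' code:

```python
import numpy as np

ACTIONS = {0: (0.0, 0.0),    # do nothing
           1: (0.0, +1.0),   # up    (elevation)
           2: (0.0, -1.0),   # down
           3: (-1.0, 0.0),   # left  (azimuth)
           4: (+1.0, 0.0)}   # right

def step(tel_az, tel_el, action, sat_positions, fov, slew_rate, dt):
    """Advance the telescope one time step and return the new pointing and reward."""
    d_az, d_el = ACTIONS[action]
    step_size = slew_rate * dt                       # maximum slew per step (2 deg/s here)
    new_az = (tel_az + d_az * step_size) % (2 * np.pi)
    new_el = np.clip(tel_el + d_el * step_size, 0.0, np.pi / 2)  # zenith crossing simplified

    reward = 0
    for sat_az, sat_el in sat_positions:             # current AER positions of all satellites
        if abs(sat_az - new_az) < fov / 2 and abs(sat_el - new_el) < fov / 2:
            reward += 1                              # cumulative: one unit per satellite in the FoV
    return new_az, new_el, reward
```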
V. RESULTS
After training the above environment on a DDQN for 20,000 iterations, and sampling the average reward at every 1,000 iterations, Fig. 3 shows that the DDQN agent clearly outperforms the same environment run with a random policy, where at a given time-step, an action is picked at random, instead of choosing the action that will maximise the cumulative reward. We choose to compare against a random policy to show a baseline for learning, and to show clearly the improvement of the DDQN over increasing iterations. Future work will compare the DDQN to other RL implementations. Each iteration trains the agent with 10 episodes of the environment, with each episode consisting of 20 time-steps. We use a seeded simulation to create the same satellite orbits, and run the environment for 5 sets of iterations to generate average returns.
It is clear that after ∼8,000 iterations, the DDQN begins to converge on an optimal policy that far exceeds the random policy shown, which has no obvious improvement over time, as expected. After this, the DDQN remains at or near the optimal policy, with no loss of return from catastrophic forgetting, a common problem in RL algorithms [21]. Catastrophic forgetting occurs when, as an agent explores its environment, it learns things that break its previously learnt information, causing the agent to forget past information and return poor rewards. The shaded region shows the standard deviation of 5 runs of the same environment in the DDQN. The standard deviation decreases once the algorithm reaches the plateau, showing its increased confidence in this region of training. The maximum possible return of the DDQN is less than the number of satellites simulated because only some of the satellites cross the telescope field of view during the length of the environment simulation, as exemplified in Fig. 4. In Fig. 5, we show the log of the trace of the covariance matrices associated with each satellite after they have been tracked for the length of the episode. The value for each point in the graph is the trace of the covariance matrix after applying an EKF to the satellite for n steps, where n is the number of steps used in the RL environment, based on the measurements generated in the DDQN.
If a satellite is captured within the telescope FoV, a measurement is generated; conversely if the satellite is not observed by the telescope, no measurement can be made. As the policy improves and the number of satellites seen in the FoV of the telescope increases, more measurements are generated as iterations increase. By having more measurements for each satellite, the EKF is able to reduce the uncertainty of the target position and velocity, which can be seen in the lines at the bottom of the graph. Where limited or no measurements are made, the EKF can only predict the satellite position and velocity, giving increasing uncertainty -seen at the top of the graph. We see that over half of the visible satellites are measured more consistently as the DDQN trains, leading to reductions in the final uncertainty of the satellite. We compare this with Fig. 6, which is the result of tracking on measurements generated from a random policy. Here we see no overall improvement on the tracking performance.
In Fig. 7 and Fig. 8, we show the outcome of tracking in the final iteration, when the agent has attained the optimal reward. In the trained run, we see that a majority of the satellites are detected by the telescope, meaning the measurements made are able to reduce the uncertainty. Some satellites are never seen in the FoV, which is shown by the top line in the graph, where the EKF becomes increasingly uncertain about its state. In the random action run, we can see that no satellite has consistent measurements, meaning that the uncertainties are not lowered as well. This shows a SM algorithm that is better than random pointing, proved by a covariance-based metric.
VI. FUTURE WORK
In future work, we hope to expand on the work completed here, with the inclusion of target tracking performance metrics. Two such metrics that would likely prove fruitful in this scenario are the Posterior Cramér-Rao Bound (PCRB) [22] and the Generalised Optimal Sub-Pattern Assignment (GOSPA) [23]. The PCRB would be useful in situations where the geometry affects the resulting information of the targets, such as in cases where the covariance of a target is very thin but long. The GOSPA metric will be more suitable in scenarios where there is clutter, false detections, and missed targets [24], all of which are likely in the SSA domain. Further improvements will be made to increase the complexity of the satellite dynamics and how they are tracked, including more robust reference frame transformations, for example. Including effects like solar radiation pressure and atmospheric drag will increase the realism of the scenario, and will require more advanced tracking algorithms, such as a UKF. Other advancements include using angle-only measurement models, and continuous control agents to more accurately reflect the use of real telescopes.
VII. CONCLUSIONS
In this paper, we simulate a controllable Earth-based telescope viewing satellites in low Earth orbit in a reinforcement learning environment. We present a novel application of a Double Deep Q Network to space situational awareness. We maximise the number of satellites observed during a time period, and increase the number of successful measurements made. We use the generated measurements in an extended Kalman filter, in which we see a significant reduction in position and velocity uncertainty for observed satellites as a result of increasing observations, as opposed to observations made from a random policy. This forms the basis of a framework for future research into applying deep reinforcement learning to space situational awareness. | 2022-05-30T01:16:00.515Z | 2022-05-27T00:00:00.000 | {
"year": 2022,
"sha1": "a4bcc20c1c972ba4f9ac02d9550d0b5a470beb92",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a4bcc20c1c972ba4f9ac02d9550d0b5a470beb92",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
225792547 | pes2o/s2orc | v3-fos-license | To the Methodology of Phase Transition Temperature Determination in Aqueous Solutions of Thermo-Sensitive Polymers
An advanced methodology for determining the phase transition in aqueous solutions of thermo-sensitive polymers by means of the phase portraits method has been suggested. The methodology allows highly accurate determination of the temperature at which exactly half of the molecules lose solubility (out of the maximum number that can pass into the other phase state under given conditions). It is shown that, since the phase transition usually takes place over a rather wide temperature interval, this indicator should be used as a quantitative parameter characterizing the phase transition process. Additionally, the suggested methodology allows introducing one more quantitative parameter that reflects the sharpness of the phase transition. The methodology is verified by an example of a phase transition study in aqueous solutions of thermo-sensitive copolymers based on N-vinylpyrrolidone and vinyl propyl ether. Article info Received: 8 October 2019 Received in revised form: 20 December 2019 Accepted: 19 February 2020
Introduction
Thermo-sensitive polymers are one of the most important classes of hydrophilic macromolecules [1][2][3]. Studying them is of considerable interest both in academic terms (e.g. for studying the hydrophobic-hydrophilic balance that defines the solubility of multicomponent copolymers [4]) and in purely applied aspects. In particular, it is shown in [5,6] that new systems of information visualization may be implemented based on thermo-sensitive polymers.
The phase transition experienced by thermo-sensitive polymers upon variation of the solution temperature is one of their most important properties; such transitions have been studied in many works [1,[5][6][7][8][9][10][11][12]. To date, it has been firmly established that the character of the phase transition can differ significantly between thermo-sensitive polymers. In particular, the width of the transition region, where macromolecules that have lost solubility coexist with those that remain in solution, varies considerably on the temperature scale from polymer to polymer. The fact that the transition temperature range can be quite wide leads to certain difficulties in determining the phase transition temperature. The various kinds of graphic constructions sometimes used for this purpose lead to noticeable, and at times systematic, errors.
It is shown in this work that a methodology for high-precision determination of the phase transition temperature in solutions of thermo-sensitive polymers can be developed on the basis of the experimental dependencies of solution turbidity on temperature. The methodology also provides a quantitative indicator that reflects the «degree of heat sensitivity», i.e. how sharp the phase transition is. The methodology is verified by an example of a phase transition study in aqueous solutions of thermo-sensitive copolymers based on N-vinylpyrrolidone (NVP) and vinyl propyl ether (VPE). These copolymers were synthesized earlier in the work [13], where their temperature-responsive properties were also studied.
Experimental part
N-Vinyl-2-pyrrolidone and vinyl propyl ether were purchased from Sigma-Aldrich (UK) and purified by distillation. 2,2′-Azobis(isobutyronitrile) (AIBN) was purchased from Acros and recrystallized from ethanol before use. Ethanol was purchased from Fisher Scientific (UK) and used without purification.
Copolymers NVP-VPE were synthesized by free radical copolymerization at 60 °C in ethanol solutions. The polymerization was conducted for 26 h with AIBN (0.01 mol/L) used as a radical initiator. Before copolymerization, the monomer mixtures were saturated with argon by bubbling for 10 min. Polymerization was terminated after 26 h by cooling the reaction vials with cold water. The copolymers were purified by dialysis against deionized water (volume 5 L, 20 changes during 4 days) and were recovered by freeze-drying. The copolymer composition was determined by elemental analysis for the content of nitrogen, which is present in NVP only.
Methods
The thermo-responsive behavior of NVP-VPE copolymers in aqueous solutions was studied by dynamic light scattering (DLS) at 10-60 °C using a Malvern Zetasizer Nano-S (Malvern Instruments, UK). Each DLS experiment was repeated in triplicate by preparing and analyzing solutions of each polymer sample separately. The dependence of the light intensity scattered by the polymer solution on temperature was recorded in the experiments. This provides the necessary information about the character of the phase transition, since the transition is accompanied by turbidity of the solution and, consequently, by its ability to efficiently scatter light. Figure 1 shows the experimentally obtained temperature dependencies of the light intensities scattered by NVP-VPE copolymer aqueous solutions at different concentrations (points). It is seen that the light scattering intensity grows with increasing temperature, and when the temperature approaches the phase transition temperature the scattering rises sharply. The same figure (solid lines) shows the theoretical curves obtained using the technique considered below, based on the phase portraits method.
Results and discussion
It is seen that the considered copolymer does experience a phase transition; however, it is quite smooth: the width of the transition region is about 25 °C. Figure 2 shows similar dependencies for the case of a water-alcohol mixture.
It can be seen that the presence of ethanol in the solution shifts the hydrophobic-hydrophilic balance, and at high ethanol concentrations (30 vol.%) the phase transition is practically not observed. The experimental phase portraits were obtained by numerical differentiation of the data using an approximate difference formula (points in Figs. 3 and 4). It is seen that the phase portraits of the curves presented in Figs. 1 and 2 are described by parabolic dependencies with high accuracy (dashed lines in Figs. 3 and 4).
This suggests that the dependence of the degree of transparency (turbidity) of the solution on temperature at the phase transition obeys a first-order differential equation with a parabolic right-hand side,

dD/dT = aD^2 + bD + c, (2)

where D is the optical density (turbidity) and a, b and c are the coefficients obtained by the least squares method from a parabolic approximation of the experimentally obtained phase portrait. The solution of Eq. (2) has the logistic form

D(T) = D_0 / (1 + exp[-(T - T_ph)/T_0]), (3)

where T_ph is a parameter interpreted as the temperature of the phase transition. Equation (3) also allows establishing the physical meaning of the parameters included in Eq. (2): T_0 is a parameter that defines the slope (sharpness) of the phase transition, and D_0 is the extrapolated limiting value of the optical density.
The dependencies shown as solid curves in Figs. 1 and 2 have been calculated by Eq. (3). It can be seen that there is a good correspondence between the dependencies obtained by the phase portrait method and the initial experimental data. The parameters of the dependencies shown with solid curves in Figs. 1 and 2 are presented in Tables 1 and 2, respectively (Table 1: parameters of the theoretical dependences for the curves of Fig. 1). The tables show that the suggested method allows detecting very small phase transition temperature variations due to changes in the polymer concentration in solution or changes in the thermodynamic quality of the medium. From the point of view of the main goal of this work, i.e. determining the temperature of the phase transition, the obtained solution is of interest since it actually corresponds to the logistic curve. Indeed, if we put T = T_ph in Eq. (3), the value under the sign of the exponent becomes equal to zero, and accordingly D(T_ph) = D_0/2. In other words, T_ph is the temperature at which exactly half of the molecules in solution have experienced the phase transition. This is an exact value that it is appropriate to accept as the temperature of the phase transition, especially in cases when the transition proceeds quite smoothly. Thus, the method of phase portraits allows obtaining simple and reliable approximations of the dependencies of solution turbidity on temperature, and it allows determining a characteristic value that can serve as an effective measure of the phase transition temperature with high accuracy.
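As an illustration of the procedure described above, the following is a minimal sketch (not the authors' code; the turbidity data here are synthetic placeholders): it numerically differentiates a turbidity curve, fits the phase portrait dD/dT versus D with a parabola by least squares, and recovers D_0, T_ph (where D crosses D_0/2) and the sharpness parameter T_0.

```python
import numpy as np

# Synthetic turbidity data following a logistic curve (placeholder for the
# measured light-scattering intensities): T_ph = 35, T0 = 4, D0 = 100.
T = np.linspace(10, 60, 200)
D = 100.0 / (1.0 + np.exp(-(T - 35.0) / 4.0))
D = D + np.random.default_rng(1).normal(0, 0.5, T.size)

# Phase portrait: numerical derivative dD/dT as a function of D.
dDdT = np.gradient(D, T)

# Least-squares parabolic approximation of the phase portrait, dD/dT = a*D^2 + b*D + c.
a, b, c = np.polyfit(D, dDdT, 2)

# The roots of the parabola are the two stationary turbidity levels; the upper one is D0.
roots = np.sort(np.roots([a, b, c]).real)
D0 = roots[-1]

# T_ph: the temperature at which exactly half of the macromolecules have lost
# solubility, i.e. where the turbidity crosses D0/2.
T_ph = T[np.argmin(np.abs(D - D0 / 2.0))]

# Sharpness parameter T0 of the logistic solution: dD/dT at D = D0/2 equals D0/(4*T0).
T0 = D0 / (4.0 * np.polyval([a, b, c], D0 / 2.0))

print(f"D0 = {D0:.1f}, T_ph = {T_ph:.2f}, T0 = {T0:.2f}")
```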
Conclusion
Thus, the temperature dependencies of the optical density of a solution undergoing a phase transition are described by logistic curves. Confirmation of this is provided by the phase portraits method: the logistic curve corresponds to a parabolic phase portrait, which is exactly what is shown by the experimental data presented in this work. This fact allows developing a methodology for accurate estimation of the parameter characterizing the phase transition temperature. Namely, it is appropriate to accept as the temperature of the phase transition the temperature at which exactly half of the macromolecules in solution change conformation and lose solubility. | 2020-08-13T10:08:14.524Z | 2020-06-30T00:00:00.000 | {
"year": 2020,
"sha1": "127fb97a7d4b60fd4a5da48f28557987bbbf63d1",
"oa_license": "CCBY",
"oa_url": "https://ect-journal.kz/index.php/ectj/article/download/960/755",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f265d08c9a644fba8f85311c6f543c4d0fdf734f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
59946375 | pes2o/s2orc | v3-fos-license | Axiomatic quantum mechanics: Necessity and benefits for the physics studies
The ongoing progress in quantum theory emphasizes the crucial role of its very basic principles. However, this is not properly reflected in the teaching of quantum mechanics at the graduate and undergraduate levels of physics studies. The existing textbooks typically avoid an axiomatic presentation of the theory. We emphasize the usefulness of a systematic, axiomatic approach to the basics of quantum theory, as well as its importance in the light of the modern scientific-research context.
probability and the basic course in linear algebra-the latter, e.g., a la Vujičić [18], on which basis the Hilbert and the rigged Hilbert space can be smoothly introduced [19]. Equipped with this knowledge, a student will easily follow the contents of quantum mechanics with the benefits emphasized in Section 3 of this paper. Everywhere in this paper we use the standard Dirac notation.
Quantum kinematics
Postulate I: Quantum States. Every state of a quantum system is represented by an element (a vector) of a linear vector space, which is the state space of the system, and vice versa: every element of the vector space is a possible state of the quantum system. Two vectors, |ϕ⟩ and |χ⟩, that satisfy the equality |ϕ⟩ = e^{iδ}|χ⟩ (P.1) for arbitrary δ should be regarded as the same quantum state.
Postulate II: Quantum Observables. Every variable of a classical system is represented by a Hermitian (self-adjoint) operator on the state space of the system (that is established by Postulate I). And vice versa: every Hermitian operator on the system's quantum state space corresponds to a physical variable (physical quantity) that, in principle, can be physically measured ("observed").
In classical mechanics, the basic pair of variables of a particle, position r and momentum p, determines the particle's state (r, p). That is, everything that is needed is in one place: on the one hand, r and p are the physical variables, i.e. the measurable quantities.
On the other hand, as a pair, those variables define the system's state; the "phase space" of states is itself a linear vector space. Every classical state (r, p) uniquely determines the value of every possible classical variable A = A(r, p).
This elegant classical picture is destroyed by the above quantum postulates. In quantum theory, "states" are still elements of a linear vector (state) space, but the observables do not reduce to functions on that space: a given quantum state does not, in general, assign a definite value to every observable.
Remarks
The state spaces are different for different kinds of physical systems. It can be finite or infinite dimensional and determined by certain phenomenological rules or constraints, such as e.g. the superselection rules. Building the state space for a system is at the root of doing quantum mechanics, with the free choice of representation.
Comments Linear superposition of quantum states (of the state-space vectors) is a reminiscence of the classical "superposition of waves", which may lead to interference. Nevertheless, a quantum state is neither a classical wave nor a classical particle, nor it is uniquely related to the ordinary three-dimensional space. An emphasis should be placed on the fact, that some states need not be in the domain of certain quantum observables, see Postulate II below.
Research
The set of the available (accessible, or allowed) states for the system can be dynamically redefined, e.g. in quantum decoherence-dynamical superselection rules induced by the system's environment. Whether quantum state is "realistic" or merely a source of information (i.e. "epistemic") is a long standing problem of vivid current interest.
Remarks
Measurability of a quantum observable assumes that its eigenstates form a complete set, i.e. an orthonormalized basis in the state space.
Comments Measurability of an observable should be taken with care. It may regard operational accessibility (in a lab) or be a subject of additional (e.g. phenomenological) rules.
Research
The set of the practically measurable observables may be dynamically determined-e.g. the environment-induced preferred observable(s) in quantum decoherence.
Bearing (Q) in mind, it seems unavoidable to expect uncertainty, i.e. probabilistic quantum theory. That is, in order to avoid probabilities, we need one-to-one relation between the states and the values of all observables-as it is the case in classical mechanics.
Postulate III: Measurement Probabilities. For a measurement of an observable Â, with the spectral form Â = Σ_n a_n P̂_n + ∫_α^β |a⟩ a ⟨a| da, that is performed on the system in the state |ϕ⟩, the probability for the result to fall within some interval (c, d) reads p(c, d) = ⟨ϕ| P̂_(c,d)(Â) |ϕ⟩, (P.2) where P̂_(c,d)(Â) is the so-called spectral measure for the interval (c, d) determined by the observable Â.
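As a small numerical illustration of this postulate (added here, not part of the original text), the sketch below builds an arbitrary Hermitian observable, forms the spectral measure for an interval (c, d) as the sum of eigenprojectors with eigenvalues in that interval, and evaluates the Born-rule probability ⟨ϕ|P̂_(c,d)(Â)|ϕ⟩ for a random normalized state; the dimension and the interval are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian observable A on a 4-dimensional state space.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2

eigvals, eigvecs = np.linalg.eigh(A)            # spectral decomposition of A

# Spectral measure P_(c,d)(A): sum of the projectors onto eigenvectors whose
# eigenvalues a_n lie in the interval (c, d).
c, d = -0.5, 1.0
P_cd = np.zeros((4, 4), dtype=complex)
for n, a_n in enumerate(eigvals):
    if c < a_n < d:
        P_cd += np.outer(eigvecs[:, n], eigvecs[:, n].conj())

# A normalized state |phi>.
phi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi /= np.linalg.norm(phi)

# Born rule: p(c, d) = <phi| P_(c,d)(A) |phi>, a number between 0 and 1.
prob = np.real(phi.conj() @ P_cd @ phi)
print("Probability that the result falls in (c, d):", prob)
```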
Remarks
The postulate regards measurement of a single observable and is historically known as the Born's rule. Whether or not some observables can be simultaneously measured requires additional concepts-e.g. "complete observable" (equivalently, the complete set of mutually commuting observables).
Comments The measurement result of an observable is determined by the set of its eigenvalues and the spectral form. Extending the postulate by stating the form of the final state of the object of measurement should be separately performed. To this end, the so-called projective (von Neumann) measurement is the ultimate basis of the "generalized measurements" formalism. Noncommutativity of certain observables implies non-existence of the spectral measure, which might be common for the noncommuting observables.
Research
It is a true challenge to detect the situations in which joint measurements of non-commmuting observables may be possible, at least partly. The so-called quasidistributions (like the Wigner function) may be useful in this regard.
Modern statistical (so-called "device-independent") approaches to quantum foundations may go even beyond the above presented basic postulates.
Postulate IV: Quantization. Transition from the classical variables to the quantum observables, i.e. quantization of the classical variables of a system, is such that a linear combination of variables is mapped into the same linear combination of the corresponding observables, αA + βB → αÂ + βB̂, α, β ∈ R; (c) a product of a pair of variables is mapped into the "symmetrized" product of the observables: AB → (1/2)(ÂB̂ + B̂Â) (see the short numerical aside after this postulate).
(d) A Poisson bracket is mapped to a commutator: {A, B} → (1/(iħ))[Â, B̂], with the (reduced) Planck constant ħ.
(e) Transition from the classical to the quantum quantities is continuous.
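A short numerical aside on item (c) (added here, not in the original text): for two non-commuting Hermitian matrices the plain product ÂB̂ is generally not Hermitian, while the symmetrized product (1/2)(ÂB̂ + B̂Â) always is, which is why the symmetrization rule is needed. The two matrices below are arbitrary illustrative choices.

```python
import numpy as np

# Two Hermitian "observables" that do not commute (Pauli x and z as examples).
A = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
B = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_z

plain = A @ B                                   # not Hermitian in general
sym = (A @ B + B @ A) / 2                       # symmetrized product, always Hermitian

print("A B Hermitian?      ", np.allclose(plain, plain.conj().T))   # False
print("sym(A, B) Hermitian?", np.allclose(sym, sym.conj().T))       # True
print("commutator [A, B]:\n", A @ B - B @ A)    # non-zero, the quantum image of the Poisson bracket
```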
Remarks
There are alternative quantization schemes. Here presented one is the most common in the nonrelativistic context.
Comments Every classical degree of freedom, q_i, is accompanied by its conjugate momentum, p_i.
Research
The absence of non-commutativity for classical variables poses a challenge for the transition from the quantum to the classical formalism-a subject of e.g.
quantum decoherence and the quantum measurement theory.
The following three postulates (V-VII) may not be universally acknowledged.
Postulate V: Quantum degrees of freedom. Quantization of a classical degree of freedom, q_i, gives the quantum mechanical observable q̂_i, which (together with its conjugate observable, if such exists) acts on a related Hilbert space H_i. The total state space of a system, H, is the tensor product of the ("factor") spaces corresponding to the individual degrees of freedom: H = H_1 ⊗ H_2 ⊗ · · · = ⊗_i H_i.
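As a numerical illustration of this postulate (added here, not part of the original text), the Kronecker product below builds the composite state space of two degrees of freedom and a product state in it; the dimensions, vectors and the choice of observable are arbitrary.

```python
import numpy as np

# State spaces of two degrees of freedom: dim(H1) = 2 (e.g. a spin-1/2),
# dim(H2) = 3 (e.g. a truncated orbital degree of freedom).
psi1 = np.array([1.0, 0.0])                     # a vector in H1
psi2 = np.array([0.0, 1.0, 0.0])                # a vector in H2

# Total state space H = H1 (x) H2 has dimension 2 * 3 = 6;
# a product state is the Kronecker product of the factors.
psi = np.kron(psi1, psi2)
print("dim(H) =", psi.size)                     # 6

# An observable acting only on the first degree of freedom is extended as A (x) I.
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
A_total = np.kron(sigma_z, np.eye(3))
print("<psi| sigma_z (x) 1 |psi> =", psi @ A_total @ psi)   # +1 for this product state
```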
Remarks
The postulate equally regards the classically known, mutually independent, degrees of freedom (such as the Descartes x, y and z coordinates) as well as the phenomenologically defined "internal" degrees of freedom, such as the spin, whose components do not mutually commute.
Comments Some internal degrees of freedom, e.g. the spin, do not mutually commute and therefore all act on the same, non-factorizable state space.
Research
Alternative degrees of freedom can be obtained via the classically-analogous,
Comments This is the ultimate basis for introducing the "mixed" quantum states, when the observer is not sure in which ("pure") state the measured single system actually is. Knowing the quantum state uniquely, i.e. with certainty, to be some "pure state" |ϕ⟩ is the situation of the maximum possible information about the system.
Research
Description of single systems and their behavior is essential in certain applications (e.g. quantum metrology and the emerging technologies) as well as for the interpretational corpus of the quantum theory.
Remarks
Quantum mechanics is not sensitive to the number of the constituent particles of a composite system. That is, the number N in eq.(P.4) may in principle go to infinity. Nevertheless, physically interesting are the finite systems-finite N .
Comments Every subsystem of a composite system may itself have whatever degrees of freedom (including the internal ones, such as the spin). For every degree of freedom, arbitrary representation may be chosen.
Research
Where is the line dividing small (micro, i.e. quantum) and the macro (i.e. many-particle, classical) systems? This is an aspect of the measurement problem, but also of the modern open systems theory (and quantum decoherence), quantum foundations of the thermodynamic relaxation, also of interest in the interpretations of quantum mechanics.
Definition 1. By isolated quantum system, it is assumed a system that is not in interaction with any other physical system. An isolated system may be subjected to some external field, which may be time-dependent.
This map is a dynamical map that is generated by the system's Hamiltonian, so that the state of an isolated system evolves according to the Schrödinger equation, iħ d|ϕ(t)⟩/dt = Ĥ|ϕ(t)⟩, i.e. via the unitary map |ϕ(t)⟩ = Û(t)|ϕ(0)⟩.
Usefulness of the axiomatic approach
Axiomatic quantum mechanics provides the shortest and most efficient path to familiarizing with the basics and universal use of quantum mechanics. It provides the basic methodological core and approach to every scientifically-useful presentation of the quantum-mechanical theory as well as its upgrades towards the diverse applications. Below, we emphasize some specific benefits of making the students familiar with axiomatic quantum mechanics. Representation-invariance is also a precursor for the quantum field theory, notably for the free Dirac field.
I) It is possible to clearly distinguish quantum kinematics from quantum dynamics-very much like the standard attitude and benefits known from the classical mechanics. Certain specific benefits are emphasized below.
II) From Postulates I and IX, it is clear that the "wave function", ϕ(r, t), is nothing but one out of the many possible representations, the so-called position representation of the quantum state |ϕ(t)⟩; in Dirac notation, ϕ(r, t) = ⟨r|ϕ(t)⟩. Representation should be avoided as long as possible, since the basic formulas can all be written in the representation-independent Dirac form. E.g. the measurement probability, see (P.2) for the notation:
P(Â, |ϕ(t)⟩, a_n) = ⟨ϕ(t)|P̂_n|ϕ(t)⟩. (1)
The choice of the representation should be made such that the calculation of (1) is made easier. The same applies to the choice of the dynamical picture; e.g., in the Heisenberg picture, eq. (1) reads
P(Â, |ϕ(t)⟩, a_n) = ⟨ϕ(0)|P̂_n(t)|ϕ(0)⟩, with P̂_n(t) = Û†(t) P̂_n Û(t). (2)
That is, all the basic expressions practically directly follow from the Postulates; no need for memorizing.
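As a numerical check (added here, not from the original text) that eqs. (1) and (2) give the same number, the sketch below evolves the state in the Schrödinger picture and the projector in the Heisenberg picture with the same Hamiltonian; all matrices are arbitrary illustrative choices and ħ is set to 1.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H = random_hermitian(3)                               # Hamiltonian (hbar = 1)
A = random_hermitian(3)                               # observable
eigvals, eigvecs = np.linalg.eigh(A)
P_n = np.outer(eigvecs[:, 0], eigvecs[:, 0].conj())   # projector onto one eigenstate of A

phi0 = rng.normal(size=3) + 1j * rng.normal(size=3)
phi0 /= np.linalg.norm(phi0)

t = 2.5
U = expm(-1j * H * t)

# Schroedinger picture: evolve the state, keep the projector fixed.
phi_t = U @ phi0
p_schroedinger = np.real(phi_t.conj() @ P_n @ phi_t)

# Heisenberg picture: evolve the projector, keep the state fixed.
P_n_t = U.conj().T @ P_n @ U
p_heisenberg = np.real(phi0.conj() @ P_n_t @ phi0)

print(p_schroedinger, p_heisenberg)                   # identical up to rounding
```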
III) Non-commutativity of the position and momentum operators implies both the nonexistence of a common representation, i.e. of a representation of the form ϕ(r, p, t), and the nonexistence of a common spectral measure, which might otherwise allow exact simultaneous measurements of those observables.
IV) Postulates I and II point out and emphasize the substantial divorce of "quantum" from "classical" in that, as distinct from the classical systems, in quantum theory there is the [quantum] information limit. That is, every pure state carries the maximum information about the system that can be acquired by measurement. Nevertheless, for every pure state |ϕ exist certain quantum observables for which the state is not an eigenstate. Hence the non-unique values of such observables for the system in the state |ϕ . In contrast to this, "pure" classical states (e.g. the points in the classical phase space) give unique value for every possible physical variable of the system. For this reason, it is said that there are no dispersion-free quantum ensembles. This is the essence of "quantum uncertainty" that is so often misused (or even abused) in presentation of the quantum theory.
The uncertainty relations due to Robertson [20] are a direct consequence of Postulates I and II, i.e. a theorem of quantum mechanics: ΔA ΔB ≥ (1/2)|⟨[Â, B̂]⟩|, where ΔA and ΔB are the standard deviations of the two observables in the given state.
An isolated system evolves unitarily, and its energy eigenstates are stationary, so the "quantum jumps", |ϕ_m⟩ → |ϕ_n⟩, m ≠ n, are not possible for an isolated atom.
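A quick numerical illustration of the Robertson inequality stated above (added here, not in the original): for two arbitrary Hermitian matrices and a random normalized state, the product of the standard deviations is never smaller than half the modulus of the expectation value of the commutator.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(4), random_hermitian(4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi /= np.linalg.norm(phi)

def expect(O):
    return phi.conj() @ O @ phi

dA = np.sqrt(np.real(expect(A @ A) - expect(A) ** 2))   # standard deviation of A
dB = np.sqrt(np.real(expect(B @ B) - expect(B) ** 2))   # standard deviation of B
bound = 0.5 * np.abs(expect(A @ B - B @ A))              # (1/2)|<[A, B]>|

print(dA * dB, ">=", bound, ":", dA * dB >= bound)       # Robertson inequality holds
```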
Quantum modeling of the composite system "atom+EMF" may be a matter of taste [2,5,6,8] but the fact that every atom is an open system is a direct consequence of Postulate IX in conjunction with the atomic phenomenology.
X) Following the postulates, it is straightforward to "decipher" the physical meaning of certain formulas not known to the student, given the basic information about the underlying model. An example is the standard expression of the electric quadrupole moment of the atomic nucleus, which regards the nucleus as a rotating rigid body, not as a point-like particle (e.g. eq. (15.35) in Ref. [21]). The classical model of a "rigid body" assumes the external center-of-mass (CM) and the Euler angles for the nucleus' rotational degrees of freedom. In this model, the internal spatial degrees of freedom are defined for the bulk of the rigid body: the position vector r, accompanied by the mass and electric charge densities (the total positive charge being Ze). Ignoring the CM dynamics can be achieved by placing the reference frame at the CM. There then remain the internal spatial and the external rotational degrees of freedom; the "factor" space H_J regards the total angular momentum of the nucleus, i.e. the observable Ĵ = L̂ + Ŝ, where L̂ stands for the nucleus' total orbital angular momentum and Ŝ for the nucleus' total spin observable. The classical definition of the electric quadrupole moment, Q = 3z^2 − r^2, then, due to Postulate V, directly gives the observable Q̂ = 3ẑ^2 − r̂^2 for the internal spatial observable r̂.
Of course, every nucleon's state space (for every index i and j in eq. (8)) is the tensor product of the orbital and the spin factor spaces, H^(orbital) ⊗ H^(spin). Adopting spherical coordinates for every nucleon separately, as described above, introduces the individual nucleon's state-space factorization H_r ⊗ H_Ω ⊗ H^(spin). Inserting this and grouping the angular and the spin factor spaces gives a factorization in which the H_J space regards the nucleus' total angular momentum as defined above. Ignoring the neutrons gives rise to the electric quadrupole moment in the position representation [22], as a discrete form of eq. (6), while the factorization eq. (9) incorporates eq. (7).
XI) One is now ready to build on the postulates of Section 2 in order to introduce: (A) the axiomatic theory of "mixed states", as distinguished in Table VI. Notably, the need for Postulate III emphasizes the insufficiency of Postulate IX for describing the process of quantum measurement. Just like the atoms commented on in VII, the object of quantum measurement is not an isolated but an open quantum system. The description of quantum measurements is part of the ongoing efforts to describe the dynamics and behavior of general open quantum systems [2,8], still with significant contributions from the interpretational corpus of quantum mechanical theory.
Discussion and conclusions
The desired "visualizations" and "explanations" of the quantum mechanical formalism should better be left to the specialized applications and interpretations of quantum mechanics. Often, they produce the puzzles and make the theory "mysterious" before becoming useful for certain limited purposes. For example, the idea of the electron orbiting around the proton in the hydrogen atom may be useful in certain limited contexts of the atomic physics but appears to come to a flat contradiction with the fact [7] that the electron and the proton are quantum-mechanically entangled with each other. Equally unreliable are the statements regarding the fate of a single object of quantum measurement, especially in attempts of "explaining" quantum uncertainty (of any kind).
In our opinion, the main goal of teaching axiomatic quantum mechanics should be to emphasize its basic, methodological character, which enables the upgrades towards the specialized courses in non-relativistic quantum physics and some of the prominent current scientific research in physics and emerging technologies. We believe that such a course can be properly presented in about 150 pages. To this end, we recognize Chapter 2 of Nielsen and Chuang's book [1], which is announced in the book's Preface as follows: "Aside from classes on quantum computation and quantum information, there is another way we hope the book will be used, which is as the text for an introductory class in quantum mechanics for physics students." The needs of quantum information and computation mainly regard finite-dimensional quantum systems (qubits and their realizations). Nevertheless, if equipped with the formalism needed for continuous ("continuous variable") systems, the Nielsen-Chuang introduction to quantum mechanics (currently around 60 pages) could indeed be used as a basic course in quantum mechanics at the undergraduate/graduate level. Therefore we conclude that we are still missing a proper textbook that would present axiomatic quantum theory in a concise yet sufficient form for the first encounter of physics students with quantum mechanical theory.
From Sections 2 and 3 we construct the following sketch of the Curriculum for the introductory course of quantum mechanics for physics students:
-Quantum kinematics: Postulates I through VIII.
-Building the functional state space for the continuous systems.
-The general theory of angular momentum.
-The Stern-Gerlach experiment and the theory of the spin-1/2.
-Solutions of the Schrödinger equation (conservative systems, bound states): simple one-dimensional models, harmonic oscillator and the hydrogen atom.
-Non-relativistic quantum symmetries (kinematical: the extended Galilei group; dynamical: symmetry group of the system's Hamiltonian).
Hence the Lego-dice-like upgrades toward the modern topics, such as:
-Basics of the quantum scattering theory (continuous spectrum of the Hamiltonian, related to the so-called scattering states in the Hilbert state space).
-Composite quantum systems: Quantum entanglement (kinematical aspect: the Schmidt canonical form; dynamical aspect: interactions in composite systems).
-Non-classical correlations and their measures.
-Axiomatic formalism of the "mixed" quantum states.
-Quantum subsystems: Non-unitary dynamics, "improper" mixed states, basic concepts of the open-systems theory.
-Selected chapters of quantum interpretations (physical nature of "quantum state"; quantum measurement and the transition from quantum to classical; hidden variables) etc.
Certain combinations of these upgrades with the basic Curriculum may be useful for studies of some related fields and applications in modern science and technology.
Our experience in teaching quantum mechanics, nuclear physics and quantum information emphasizes the usefulness of axiomatic quantum theory. E.g., of a total of 94 students, 57% successfully passed the exam. If we do not count the students who did not regularly attend the lectures, the percentage rises to more than 76%! The average mark is between C and D (numerically 7.43). Similarly, teaching nuclear physics is much easier with the underlying basic course of quantum mechanics. Of a total of 60 students, around 40% passed the exam. However, counting only the students who regularly attended the lessons raises the percentage to approximately 73%! The average mark is approximately C (numerically 8.3). We find these scores encouraging: the students who were properly prepared and active during the lessons did not have any serious problems in adopting the topics of the basic course in quantum mechanics or its application in nuclear physics. Thus, in our teaching practice, we are convinced of the famous statement of Boltzmann: "Nothing is more practical than a good theory." We therefore conclude: the needs of applying quantum theory should be clearly separated from the basic formalism at the graduate/undergraduate level of physics studies. An introductory course of quantum mechanics can be formulated in axiomatic form, with significant benefits for students regarding both more specialized applications and familiarization with current scientific research. | 2017-11-05T12:47:24.000Z | 2017-11-01T00:00:00.000 | {
"year": 2017,
"sha1": "369bd626fd9d641d78f62f06347421dbd29ee407",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "369bd626fd9d641d78f62f06347421dbd29ee407",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
237644961 | pes2o/s2orc | v3-fos-license | Study on Diesel Low-Nitrogen or Nitrogen-Free Combustion Performance in Constant Volume Combustion Vessels and Contributory Factors
: This paper studies the combustion performance of diesel in constant volume combustion vessels under different conditions of mixed low-nitrogen (O 2 and N 2 ) or non-nitrogen (O 2 and CO 2 ) in varying proportions. The high-speed camera is used to shoot the combustion flame in the constant volume combustion vessel. The process and morphology of the combustion flame are amplified in both time and space to study and analyze the effects of different compositions and concentrations in gases on the combustion performance of diesel and conduct a study on the contributory factors in the performance of diesel with no nitrogen. According to the study, in the condition of low nitrogen, the O 2 concentration is more than 60%, the ignition delay period is shortened, the combustion flame is bright and slender, it spreads quickly, and the blue flame appears when the O 2 concentration reaches 70%; While for nitrogen-free combustion, only when the O 2 concentration reaches 30% is the combustion close to the air condition; when the O 2 concentration reaches 40%, the combustion condition is optimized obviously and the combustion flame is relatively slender compared to the air working condition. Similarly, with the increase of the O 2 concentration, the ignition delay period of nitrogen-free diesel is shortened, the duration is extended, and the combustion performance is optimized. In addition, when the O 2 concentration reaches 50%, with the decrease of the initial temperature, the ignition delay period is prolonged, and the duration is shortened obviously. When the temperature is lower than 700 K, there is no ignition. The increase of the diesel injection pressure is beneficial to optimize the ignition performance of diesel non-nitrogen combustion and shorten its ignition delay period and combustion duration. Related research has important guiding significance to optimize nitrogen-free combustion technology, which produces no NOx of the diesel engine.
Introduction
With the shortage of natural resources and the deterioration of the environment, people's awareness of energy conservation and environmental protection has gradually increased [1]. Diesel engines, as one of the main sources of pollutant emissions, have drawn people's attention [2,3]. Many laws and regulations at home and abroad have put strict requirements on carbide and nitrogen oxide emissions, and effectively promote the research and development of energy conservation and emission reduction technologies for diesel engines [4][5][6]. At present, there are technologies based on the intake charge composition, such as EGR [7,8], intake oxygen enrichment/hypoxia [9][10][11][12], intake air humidification [13,14], and in-cylinder water injection [15,16], to optimize the combustion and emission performance of diesel engines. All these technologies are based on changing the gas components in the combustion environment. To reduce NOx, the temperature in the combustor has to be lowered during the combustion process, at the expense of the fuel economy of the diesel engine [17,18]. Some research institutes have carried out a comprehensive optimization of EGR and oxygen-enriched intake, which can make SOOT-NOx emissions lower than in the original engine; however, the improvement is limited [19,20]. Therefore, the SOOT-NOx emission bottleneck has become the key constraint on the development of combustion and emission technologies for diesel engines [21,22]. The O 2 and N 2 concentrations in the combustion environment are of great significance for the production of NOx [23,24]. This paper aims to find new solutions to optimize diesel emissions through basic studies on diesel low-nitrogen or nitrogen-free combustion.
Since Home and Steinburg first proposed the nitrogen-free coal combustion technology in 1981, relevant applications and research have been carried out at home and abroad [25,26]. Its NOx emissions are much smaller than under air conditions, which has crucial reference value for nitrogen-free combustion studies in other fields [27,28]. At present, there are relatively few studies on nitrogen-free combustion of engines. Wang Zuofeng and other researchers carried out nitrogen-free combustion experiments and numerical modelling studies based on the ZS195 single-cylinder diesel engine. The results showed that the diesel engine can idle and reduce NOx emissions (to approximately zero) under the inlet condition of 60% O 2 and 40% CO 2 , while its HC and CO emissions increased only slightly compared to air conditions. When the O 2 concentration is lower than 50%, the diesel engine cannot ignite, and when the O 2 concentration reaches 65%, the inlet condition becomes relatively better and can be further optimized by changing the fuel supply advance angle [29,30]. Zhao Tianpeng and other researchers conducted numerical simulations of diesel combustion and a visualization study in a constant volume combustion vessel, which showed that under the condition of 65% O 2 and 35% CO 2 , the ignition delay period of diesel was reduced by 50% compared to air conditions. Under the condition of 50% O 2 and 50% CO 2 , diesel can also ignite and the ignition delay period is shortened by 35% [31,32]. Tan Qinming and others carried out numerical simulation calculations and bench tests under low-nitrogen and nitrogen-free environments based on the 4135ACa diesel engine. The results showed that the diesel engine can start and operate continuously under the condition of 50% O 2 and 50% CO 2 , but the combustion was abnormally poor and the fuel consumption rate was significantly higher compared to air conditions. When the O 2 concentration increased to 60%, the combustion improved but was still not perfect, and visible carbides even appeared [33,34]. Zhu Changji and others carried out experimental studies on the characteristics of homogeneous premixed combustion of gasoline in a nitrogen-free environment, which showed that the ignition delay period and duration were shortened by 50%, and the pressure rise rate increased by 180% under the condition of 40% O 2 and 60% CO 2 [35,36]. The above nitrogen-free combustion results are broadly consistent, although different test conditions led to different results [29,36]. At present, there are few studies on the factors that affect diesel nitrogen-free combustion.
Experiments were performed in a constant volume combustion vessel, with fuel injected into the vessel at selected mixed air, pressure, and temperature conditions [37,38]. In this paper, the low-nitrogen or nitrogen-free mixed gases in varying proportions are used to inflate the fixed volume combustion vessel, and the mixed gases are heated to a certain temperature so that the fuel can be ignited and burned. The high-speed camera is used to shoot the combustion flame in the constant volume combustion vessel. The combustion flame is analyzed from time and space. The performance of diesel that has low nitrogen and is nitrogen free is discussed from the aspects of flame morphology, ignition delay period, combustion duration, and flame intensity [39][40][41]. In addition, this paper studies the effects of the initial temperature of the mixer and the fuel injection pressure on the diesel nitrogen-free combustion performance, which provides an important guide to optimize the technology that produces no NOx of diesel engines.
Experimental Method
Many experimental works have been performed using constant volume combustion bombs with adjustable initial conditions [42][43][44]. The principle of the constant volume combustion vessel and its visual test system are shown in Figure 1. The maximum volume of the constant volume combustion vessel is 43 L. The effective volume is 12 L and the maximum pressure is 60 bar. Various fuel injection and combustion tests can be performed. Different proportions of mixed gases are used to inflate the bomb, and heat it through the heating tile. The maximum heating temperature can reach 900 K, and the accuracy is ±10 K. The constant volume system is equipped with a 0.14 mm single injector orifice. The fuel high-pressure common rail injection system provides an injection pressure of 600-1800 bar and can regulate the injection cycle, times, pressure, and pulse. The Schott HAB-150W halogen lamp was used as the constant light source during the test. The Photron FASTCAM SA5 CMOS high-speed camera was used to collect images of the flame in the constant volume combustion vessel during the combustion process under different conditions. The shooting speed was 10,000 fps. The image resolution was 768 × 768 and the low f number was 17. The fuel injection of the entire experimental bench, high-speed camera shooting, and the collection of thermal parameters were synchronously triggered to ensure the synchronization of data acquisition under different working conditions.
Test Method
The low-nitrogen oxygen-enriched tests of the constant volume combustion vessel were carried out mainly under conditions of 60% O 2 and 40% N 2 , 70% O 2 and 30% N 2 , and 80% O 2 and N 2 20% to explore the limit of the oxygen enrichment concentration in low-nitrogen combustion that can reduce NOx emission under the premise of combustion safety. For nitrogen-free combustion, tests were carried out by mixing gases of 21% O 2 and 79% CO 2 , 30% O 2 and 70% CO 2 , 40% O 2 and 60% CO 2 , and 50% O 2 and 50% CO 2 . The initial pressure of mixed gases in the bomb was 40 bar, the initial temperature was 800 K, and the diesel was injected with 16 mg with 1200 bar pressure to conduct the combustion test. Under the nitrogen-free environment of 50% O 2 and 50% CO 2 , two experiments were studied in order to optimize the nitrogen-free combustion performance of the diesel engine: firstly, the diesel nitrogen-free combustion performance (with 1200 bar injection pressure) of diesel fuel with the gas mixture temperature of 750 or 700 K, and secondly, the effect of 900 bar or 600 bar diesel injection pressure on its nitrogen-free combustion performance (with the temperature of 800 K of gas mixture). Through the process and analysis of the diesel combustion flame images taken by the high-speed camera, the effects of the O 2 /N 2 /CO 2 concentration, initial mixture temperature, and diesel injection pressure on the diesel combustion process were studied.
Pictures taken by the high-speed camera were processed as follows: take a screenshot and draw a scale to analyze the length, width, and appearance of the burning flame; convert the picture to a grayscale picture; and set pictures less than 5 to 0 intensity to eliminate the influence of the background value. By calculating the total light intensity value of each horizontal row, the starting moment of the combustion flame and the combustion duration with total light intensity exceeding 100 cd were analyzed. Studying the change law of the combustion start position of the flames at any time allowed comparative analysis of the total light intensity of the flame.
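To make the image-processing steps explicit, the following is a minimal sketch (not the authors' code): the frame array, the calibration factor converting pixel counts to candela, and the synthetic example are placeholders. It thresholds each grayscale frame, sums the intensity of every horizontal row, and extracts the ignition delay and combustion duration from the frames whose total light intensity exceeds the 100 cd criterion.

```python
import numpy as np

# frames: array of shape (n_frames, height, width) holding the grayscale
# high-speed-camera images (loading them from the camera files is omitted here).
def analyse_flame(frames, fps=10_000, cd_per_count=1.0e-3):
    totals = []
    for img in frames.astype(float):
        img[img < 5] = 0.0                       # suppress the background (grey values below 5)
        row_sums = img.sum(axis=1)               # total light intensity of every horizontal row
        totals.append(row_sums.sum() * cd_per_count)
    totals = np.array(totals)

    burning = totals > 100.0                     # frames exceeding the 100 cd criterion
    if not burning.any():
        return None                              # no ignition detected
    first, last = np.flatnonzero(burning)[[0, -1]]
    ignition_delay_ms = first / fps * 1e3        # camera and injection are triggered together
    duration_ms = (last - first + 1) / fps * 1e3
    return ignition_delay_ms, duration_ms, totals

# Example with a synthetic frame stack standing in for the recorded images:
frames = np.zeros((40, 768, 768))
frames[12:35, 300:400, 350:420] = 40.0           # a bright flame region appearing at frame 12
print(analyse_flame(frames)[:2])                 # ignition delay and duration in ms
```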
In order to ensure the validity of the data of the diesel engine's ignition delay period, the high-speed camera was triggered at the same time of the fuel injection to perform synchronous shooting. Pictures were taken at a certain frequency, and the data were averaged by multiple injection combustion tests. Due to the large volume of gas working fluid in the bomb (40 L), the release heat of 16 mg diesel combustion showed too little influence on the gas working fluid pressure in the constant volume combustion vessel, and the relevant exothermic analysis was not performed. Additionally, the total volume of the waste gas after the single combustion test was not enough for the detection and analysis of NOx, carbide, and other emissions. Relevant experiments can be further carried out under favorable experimental conditions.
Low-Nitrogen Combustion
Under low-nitrogen (O 2 and N 2 ) conditions, part of the photos of the flame during the diesel combustion process is shown in Figure 2. The first photo of the air condition was taken at 1.6 ms after the injection, and those of the other conditions were taken 0.6 ms after the injection. The time interval was 0.2 ms. Compared with the flame in the air condition, when the O 2 concentration reached 60%, the ignition delay period was shortened from 2.07 ms to 1.06 ms, the flame brightness was obviously enhanced, the flame shape was slender, the longest flame length was about 60 mm, and the width was about 10 mm. The flame shape of the air working condition was relatively large, with the longest flame about 50 mm long and the maximum width nearly 40 mm. This shows that under high-concentration oxygen-enriched conditions the combustion flame develops significantly faster than in the air condition. Especially when the O 2 concentration reached 70% or 80%, the burning moment was further advanced, and a blue flame appeared. However, there was no abnormal phenomenon in the constant volume combustion vessel body: in this test the amount of gas was large and the amount of fuel injected was small, so the heat of combustion was limited and the temperature of the combustion flame could not reach a level that would damage the constant volume vessel body. The above results effectively support the phenomenon mentioned in the literature [33]: sparking in the exhaust pipe of the diesel engine appeared under the 60% O 2 oxygen-enriched, low-nitrogen condition.
In order to quantitatively analyze the combustion process, data on the total light intensity, combustion start position (the distance between the first burning flame and the injector nozzle), ignition delay period, and duration, obtained by processing the photos, are shown in Figure 3 and Table 1. The rise in total light intensity under low-nitrogen conditions begins earlier, and the steep curve indicates that the rate of rise is much faster than in the air condition. The maximum light intensity under low-nitrogen conditions is slightly larger than that of air, and the maximum light intensity lasts longer than 2 ms. The flat-top curve is strikingly different from the single-peak curve of the air working condition. The reason is that the O 2 concentration in the combustion environment rises greatly, which is beneficial to the ignition and combustion of diesel fuel. As a result, the combustion propagation speed accelerates and the efficiency improves. The high-efficiency combustion in the early stage of low-nitrogen combustion accelerates the consumption of diesel fuel, resulting in a decrease in the amount of diesel in the late combustion stage, which causes the sharp fall in total light intensity, rather than the slow decline of the air condition. It is worth noting that the total light intensity at a given time under low-nitrogen conditions does not keep increasing with the rise of the O 2 concentration: the total light intensity under 70% or 80% O 2 concentrations is lower than under 60%. The main reason for this phenomenon is that when the O 2 concentration reaches a certain value, the combustion of diesel reaches its limit; this behaviour is caused by thermal-physical differences between O 2 and N 2 .
Nitrogen-Free Combustion
The constant volume combustion vessel test was carried out by replacing the N2 (21% O2 and 79% CO2) in the air with CO2. As shown in Figure 4, the first photo of each working condition was taken at 1.6 ms after the injection. Compared with Figure 3, although the combustion time is not much different from the air condition (about 2.1 ms), the combustion flame was very dark and the flame shape was slightly smaller. It indicates that the presence of a great deal of CO2 inhibits the combustion chemical reaction that produces CO2, thereby inhibiting the combustion process. Additionally, the CO2 heat transfer performance is much lower than that of N2, which weakens the flame propagation. This test indicates that diesel undergoes a chemical reaction in a 21% O2 and 79% CO2 environment. The decrease in the ignition delay period is sure to make the starting position is closer to the injector nozzle. In case the O 2 concentration is 60%, the distance between the flame and the injector nozzle is less than 20 mm and it will be further shortened with the further increase of the O 2 concentration, which is consistent with the change law of the ignition delay period. Additionally, the starting position of the flame stays close to the injector nozzle, and does not move down with the combustion process until the flame is extinguished in the later stage of combustion. The position of the combustion flame in the air condition gradually moves down as the combustion process progresses, which is especially fast in the later stage of combustion. Therefore, it is further explained that the combustion speed and efficiency of the diesel in the low-nitrogen high-concentration oxygen-enriched conditions are significantly better than the air conditions.
The ignition delay period and combustion duration of four combustion processes under the same gas working condition were averaged and analyzed as shown in Table 1. The standard deviation of the ignition delay period of each working condition is about 1.5 ms, and the standard deviation of the burning duration is about 1, indicating that the data are relatively stable. The average ignition delay period of air conditions is about 2.07 ms, and when the O 2 concentration reaches 60%, the ignition delay period is obviously shortened to 1.06 ms. With the increase of the O 2 concentration, the period is further shortened, and when the O 2 concentration reaches 80%, the burning period is only 0.98 ms.
The constant volume combustion vessel test was carried out by increasing the O 2 concentration to 30%, 40%, or 50% in a nitrogen-free environment, as shown in Figure 4.
With the increase of the O 2 concentration, the diesel-free ignition delay period was shortened by nearly 0.6 ms, and the combustion flame gradually became brighter. When the O 2 concentration reached 30%, the combustion flame became obviously better, but it was still inferior to the air working condition, and was especially weak and discontinuous in the late combustion stage. When the oxygen concentration reached 40%, the combustion flame brightness was close to the air working condition. Additionally, its flame appearance became slender. It shows that the flame propagation speed increases as the O 2 concentration rises. In particular, the flame in the 50% O 2 condition is slender and bright, indicating that the nitrogen-free combustion in the 50% O 2 concentration condition can continue and the combustion is perfect. The ignition delay period and combustion duration of the five combustion processes in each gas working condition were averaged and analyzed, as shown in Table 2. The standard deviation between the ignition delay period and the combustion duration of each working condition is about 1.5 ms, indicating that the data are stable. In working conditions of 21%, 30%, 40%, and 50% O 2 , the non-nitrogen ignition delay period is 2.01, 1.59, 1.46, and 1.18 ms, respectively. Therefore, the ignition delay period is mainly affected by the O 2 concentration in the environment. The combustion duration was shortened from 3.2 to 2.7 ms.
As shown in Figure 5, during the diesel combustion process of 21% O 2 and 79% CO 2 , the total intensity of light at each moment did not exceed 20,000 cd, a big difference from the air condition. When the O 2 concentration of nitrogen-free combustion increased to 30%, the combustion start time was earlier than the air condition, that is, the ignition delay period was shortened from 2.07 to 1.59 ms. Additionally, the total light intensity rise curve was parallel to the air-condition rise curve, but the total intensity of light at each moment was lower than that of the air condition. The gradual decrease of the total light intensity after reaching its maximum value was consistent with the air condition, mainly due to the small amount of diesel burned at the early stage, so the late combustion continued. When the O 2 concentration increased to 40%, the total light intensity rise rate was higher than in the air condition, and the total light intensity was maintained at a relatively high level and then rapidly decreased. This indicates that the diesel combustion under this working condition takes place mainly in the early and middle stages, but its maximum light intensity is still significantly lower than in the air condition. When the oxygen concentration reaches 50%, the combustion advances and the total intensity of the light rises rapidly to a high point (close to air conditions), and the maximum light intensity lasts for about 1.8 ms, longer than the single-peak curve of the air conditions. The total intensity of the combustion flame indirectly reflects the diesel combustion, which indicates that the diesel combustion is better than the air condition under this working condition.
Factors Affecting Nitrogen-Free Combustion Performance
The above results show that the diesel combustion process is superior to the air condition when diesel is injected at a fuel injection pressure of 1200 bar into a 50% O2 and 50% CO2 nitrogen-free environment at 800 K. In contrast, previously published results indicate that diesel combustion in a 50% O2 nitrogen-free environment is poor, and that combustion only becomes satisfactory when the oxygen concentration reaches 65%. The literature 29-31 even reports that combustion in diesel engines with a 60% O2 intake is imperfect, with visible carbon particles. The main reason for this inconsistency is the influence of the initial temperature of the mixture at the time of injection and of the diesel injection pressure. The following studies were therefore carried out to provide a reference for the later optimization of the nitrogen-free combustion performance of diesel.
Initial Temperature
Selected photos of the combustion flame taken at temperatures of 800 and 750 K in the constant volume combustion vessel are shown in Figure 6. The first photo corresponds to 1.6 ms after injection, and the interval between photos is 0.2 ms. It can be seen from the figure that the diesel ignition delay period in the 750 K condition is quite long (about 2.9 ms), which is much longer than the 1.9 ms at 800 K. Additionally, the burning duration was very short, only about 1.2 ms, the intensity of the flame was very weak, and the burning flame was not captured at an aperture setting of 17. In Figure 6, the flame at 750 K was therefore photographed at an aperture setting of 11, at which the total captured light intensity is approximately twice that of a 17-aperture photograph. Visually, in Figure 6, the flame light intensity photographed at an aperture setting of 17 in the 800 K condition differs little from that in the 750 K condition. When the temperature was lowered to 700 K, no combustion flame was captured inside the constant volume combustion vessel at any aperture. This indicates that the initial temperature of the nitrogen-free gas mixture is critical to the combustion performance of diesel. The specific heat capacity of CO2 is relatively large. As a result, the large amount of CO2 in a nitrogen-free diesel engine can cause the temperature at the end of compression to be relatively low, thus affecting the nitrogen-free combustion performance of the diesel engine. This is also one of the key factors leading to the poor combustion of diesel engines under 50% O2 and 50% CO2 nitrogen-free conditions reported in the literature 29-34. Provided there is sufficient O2 in the nitrogen-free combustion environment, the high-temperature exhaust gases emitted by the diesel engine can be recycled by EGR to raise the intake temperature and to offset part of the O2 and CO2 consumption, thereby effectively reducing the operating cost of the diesel engine.
Figure 6. Influences of different initial temperatures on the diesel combustion flame in a 50% O2 nitrogen-free environment.
Injection Pressure
A large number of studies have shown that the injection pressure directly affects the progress of combustion [34,40]. The same holds for the nitrogen-free combustion optimization experiments in the constant volume combustion vessel. As shown in Figure 7, in the nitrogen-free environment of 50% O2 and 50% CO2, diesel fuel was injected at pressures of 1200, 900, and 600 bar, respectively, and photos of the combustion flame were taken in the constant volume combustion vessel. The first picture of each working condition is the 16th frame after injection, and the subsequent interval is 0.2 ms. From the perspective of flame topography, there is not much difference between the conditions. As the injection pressure decreased, the start of combustion was clearly delayed (1.5, 1.7, and 1.9 ms, respectively), and the combustion duration was extended (2.7, 3.1, and 4.0 ms, respectively). The total light intensity and the flame start position at different times were obtained from analysis of the photographs, as shown in Figure 8. Changing the injection pressure does not affect the flat-topped shape of the total intensity curve, and the rise and fall rates of the total light intensity in the early and late periods are similar. With decreasing injection pressure, the maximum total light intensity was slightly higher, but the change was not pronounced. However, increasing the injection pressure advanced the start of nitrogen-free combustion of diesel, which can significantly improve the ignition performance. The authors of [34] used a Dongfeng 4135 ACa diesel engine whose injection pressure was only about 300 bar, which is one of the key factors that led to poor combustion of that engine under the nitrogen-free working condition of 50% O2 and 50% CO2. The above information implies that the diesel injection pressure is also critical for a diesel engine's nitrogen-free ignition performance. It is recommended to use a higher diesel injection pressure in later studies of nitrogen-free combustion to optimize its ignition and combustion performance.
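To make the reported pressure dependence concrete, the sketch below fits a straight line to the three (pressure, ignition delay) pairs quoted above and extrapolates it to the roughly 300 bar injection pressure of the engine in [34]. The linear form and the extrapolation are illustrative assumptions of this example, not a model proposed by the authors.

```python
import numpy as np

# Injection pressures (bar) and nitrogen-free ignition delays (ms) from the
# text for 50% O2 / 50% CO2; the straight-line fit only visualizes the trend.
pressure = np.array([1200.0, 900.0, 600.0])
delay = np.array([1.5, 1.7, 1.9])

slope, intercept = np.polyfit(pressure, delay, 1)
print(f"delay ≈ {slope:.5f} * p + {intercept:.2f} (ms, p in bar)")
print("extrapolated delay at 300 bar ≈", round(slope * 300 + intercept, 2), "ms")
```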
Conclusions
In this study, low-nitrogen and nitrogen-free combustion tests of diesel in constant volume combustion vessels were carried out. The initial pressure in the bomb was 40 bar, the initial temperature was 800 K, and 16 mg of fuel was injected at a pressure of 1200 bar for the combustion test. The following conclusions can be made after processing and analyzing the photos of the diesel combustion flame that were taken by a high-speed camera: (1) Diesel can ignite in a low-nitrogen environment with a 60% O2 concentration. The flame is bright and slender, in contrast to the dim and large flames of the air condition, and there is no abnormal phenomenon in the combustion. When the O2 concentration reaches 70% or 80%, a blue flame appears. Although this has no effect on the body of the constant volume combustion vessel, it may cause abnormalities in a diesel engine under high-load conditions. The diesel engine is therefore not suitable for operation in a low-nitrogen environment with too high an O2 concentration.
(2) Under the condition of 21% O2 and 79% CO2, the diesel can still ignite; however, the flame is extremely dim, the combustion duration is short, and the combustion process deteriorates markedly.
(3) The increase of the O2 concentration can optimize diesel nitrogen-free combustion. When the O2 concentration reaches 30%, combustion starts earlier than in the air condition and the rise rate of the total flame intensity is similar to the air condition, but the maximum total intensity is still far below that of the air condition. When the O2 concentration reaches 50%, the combustion flame is bright and slender, and the maximum total light intensity is close to that of the air condition and lasts for about 1.8 ms. This shows that under this working condition diesel can burn normally.
(4) In the nitrogen-free working condition of 50% O2 concentration, diesel burns better at an initial temperature of 800 K, while combustion deteriorates severely at an initial temperature of 750 K and no combustion occurs at an initial temperature of 700 K. The initial temperature is thus of great significance for the nitrogen-free combustion of diesel. A reduced temperature at the end of compression of the nitrogen-free intake charge may be the main cause of the abnormal combustion reported for the 50% O2 working condition.
In the nitrogen-free condition of the 50% O 2 concentration, diesel is injected at a pressure of 1200, 900, and 600 bar, respectively, and the combustion time corresponds to 1.5, 1.7, and 1.9 ms, respectively. In addition, the combustion duration corresponds to 2.7, 3.1, and 4.0 ms, respectively. It indicates that the injection pressure is critical to the ignition | 2021-09-27T20:56:02.977Z | 2021-07-17T00:00:00.000 | {
"year": 2021,
"sha1": "4a449a3b2f9503f5ebd3e3b1c1ad9948ed091b9b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/12/7/923/pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "e4ca7146cea25c229d7ddcf01a497c5db18d9317",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
232118205 | pes2o/s2orc | v3-fos-license | Persistent bone impairment despite long-term control of hyperprolactinemia and hypogonadism in men and women with prolactinomas
While prolactinoma patients have high bone turnover, current data are inconclusive when it comes to determining whether correction of hyperprolactinemia and associated hypogonadism improves osteodensitometric data in men and women over the long term. In a large cohort including 40 men and 60 women, we studied the long-term impact of prolactinoma treatment on bone mineral density (BMD) in men versus women, assessed adverse effects of a primary surgical or medical approach, and evaluated data for risk factors for impaired BMD at last follow-up using multivariate regression analyses. Median duration of follow-up was 79 months (range 13–408 months). Our data indicate that the prevalence of impaired BMD remained significantly higher in men (37%) than in women (7%, p < 0.001), despite the fact that hyperprolactinemia and hypogonadism are under control in the majority of men. We found that persistent hyperprolactinemia and male sex were independent risk factors for long-term bone impairment. Currently, osteoporosis prevention and treatment focus primarily on women, yet special attention to bone loss in men with prolactinomas is advised. Bone impairment as “end organ” reflects the full range of the disease and could become a surrogate marker for the severity of long-lasting hyperprolactinemia and associated hypogonadism.
Long-term results. The median long-term follow-up was 79 months (range 13-408) and was not significantly different between the sexes (p = 0.14).
At last follow-up, impaired BMD was recorded in 37% of men and 7% of women (p < 0.001; Fig. 1A). At this time point, 26% of men and 2% of women suffered from osteopenia, whereas 11% of men and 5% of women suffered from osteoporosis ( Table 2).
The prevalence of bone impairment at last follow-up was significantly higher in patients with persistent hyperprolactinemia than in those with normoprolactinemia (42% vs. 15%; p = 0.04), and in hypogonadal compared with eugonadal patients.
Table 1. Patient characteristics at baseline. IQR interquartile range, SEM standard error of the mean, SD standard deviation, yrs years, n number. Bold values are statistically significant (p ≤ 0.05); the significance level was set at 5%.
Total testosterone levels in men significantly increased, namely from 5.9 ± 4.8 nmol/l at baseline to 13.3 ± 3.6 nmol/l in the long term (p = 0.001). Likewise, estradiol levels in women significantly increased, from 62 ± 68 pg/ml at baseline to 161 ± 371 pg/ml in the long term (p = 0.003).
The duration of clinical symptoms reported prior to diagnosis was 18 ± 69 months (± SD). The calculated duration of hyperprolactinemia and hypogonadism was 41 ± 82 months and 38 ± 98 months, respectively.
Figure 1. (A) Prevalence of bone impairment in both sexes. Significantly more men with prolactinomas suffered from bone impairment, both at baseline (28 vs. 2%, p < 0.001) and at last follow-up (37 vs. 7%, p < 0.001), compared to women. (B) Kaplan-Meier estimation of recurrence-free intervals. The median (± SD) recurrence-free intervals were significantly shorter in patients with impaired BMD (179 ± 72 months) than in those with normal BMD (396 ± 117 months; log-rank test, p = 0.04).
In patients with resolution of hyperprolactinemia, the time to performance of bone densitometry was 47 ± 64 months, with a difference between men and women (31 ± 58 vs. 57 ± 67 months) that was not statistically significant (p = 0.06).
PRL levels had normalized in most patients by the long-term follow-up, independent of gender (men vs. women; 82% vs. 89%, p = 0.37). Nevertheless, significantly more men than women required DA agonists for the long-term control of hyperprolactinemia (75% vs. 42%, p = 0.001). Also, PRL levels had normalized independent of the primary treatment approach (surgery vs. DAs; 92% vs. 80%, p = 0.14). We noted that significantly fewer patients in the surgical versus medical cohort required DA agonists over the long term (32% vs. 79%, p < 0.001). Gonadotropin deficiency significantly improved both in men and women (both p < 0.001), as did headache (p < 0.001 and p = 0.02, respectively). Secondary hypothyroidism and secondary adrenal insufficiency improved in both groups, although not significantly. In 41 (68%) premenopausal women with available data confirming amenorrhea at baseline, no significant association between duration of amenorrhea and long-term BMD status was noted (r = 0.20, p = 0.08). In addition, the duration of amenorrhea in women was not a risk factor for impaired BMD at last follow-up (OR 1.0, 95% CI 1.0-1.1; p = 0.32). Furthermore, the amount of time between resolution of hyperprolactinemia and the performance of bone densitometry was not a risk factor for impaired BMD at last follow-up (OR 1.0, 95% CI 1.0-1.1, p = 0.14).
Of the 100 patients assessed with DXA, only one patient with osteopenia at baseline showed a normal BMD over the long term (patient number 1, Table 2). A normal BMD status was noted both at baseline and at last follow-up in 82 patients, while a further seven patients with initially normal BMD demonstrated impaired BMD over the long term. In addition, in eight patients osteopenia was noted both at baseline and over the long term, as was osteoporosis in two further patients. Thus, persistent bone impairment in patients with prolactinomas was common, despite long-term control of hyperprolactinemia and hypogonadism in the majority of them. As a result, of the 12 patients with low bone density at baseline (OP and OO), 11 also had low BMD in the long term, and deterioration was noted in an additional 7 patients.
At last follow-up, recurrence of prolactinoma was observed in 35% of patients with an impaired BMD compared to 22% of patients with a normal BMD (p = 0.35). Specifically, the recurrence rate was 36% in men and 33% in women with an impaired BMD, and 17% in men vs. 24% in women with normal BMD (p = 0.76). In addition, recurrence of a prolactinoma was noted in 20% of patients with upfront surgery compared to 30% of patients treated with DAs (p = 0.25). There was no significant difference in the recurrence-free intervals of prolactinomas between men and women (178 ± 18 months vs. 288 ± 28 months; log-rank test, p = 0.25). However, the median (± SD) recurrence-free intervals were significantly shorter in patients with impaired BMD (179 ± 72 months) than in those with normal BMD (396 ± 117 months; log-rank test, p = 0.04, Fig. 1B).
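The recurrence-free-interval comparison above rests on Kaplan-Meier estimation and a log-rank test. A minimal sketch of such an analysis with the lifelines library is given below; the duration and event arrays are invented placeholders, not the study data, and the group sizes are arbitrary.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Hypothetical follow-up times (months) and recurrence indicators
# (1 = recurrence observed, 0 = censored) for the two BMD groups.
t_impaired = rng.exponential(180, size=30)
e_impaired = rng.integers(0, 2, size=30)
t_normal = rng.exponential(380, size=70)
e_normal = rng.integers(0, 2, size=70)

kmf = KaplanMeierFitter()
kmf.fit(t_impaired, event_observed=e_impaired, label="impaired BMD")
print("median recurrence-free interval:", kmf.median_survival_time_, "months")

result = logrank_test(t_impaired, t_normal,
                      event_observed_A=e_impaired, event_observed_B=e_normal)
print("log-rank p =", result.p_value)
```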
The risk factors associated with long-term bone impairment are summarized in Table 3. Significant risk factors in the univariable analysis were patient age, male sex, persistent hyperprolactinemia including length of hyperprolactinemia, and persistent hypogonadism. The multivariable logistic regression revealed male sex (OR 16.4, 95% CI 2.4-114.3, p = 0.01) and persistent hyperprolactinemia (OR 5.6, 95% CI 1.0-32.5, p = 0.05), but not persistent hypogonadism (OR 3.1, 95% CI 0.8-12.4, p = 0.12) or primary treatment strategy (OR 1.2, 95% CI 0.3-5.2, p = 0.81), as independent risk factors for long-term bone impairment (Table 3). Morbidity and mortality. There was no mortality in either cohort. Postoperative complications in the surgical group consisted of transient rhinoliquorrhea (3%), syndrome of inappropriate antidiuretic hormone (SIADH) secretion (12%), and diabetes insipidus (13%). In the medical group, prolonged nausea occurred in 9% of patients, dopamine agonist-induced impulse control disorders were observed in two men (4%) 23 , and vertigo in 3% of patients, with no difference between men and women.
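The independent risk factors above come from a binary logistic regression. The sketch below shows how odds ratios and confidence intervals of such a model can be obtained with statsmodels; the data frame, the outcome, and the variable names (sex_male, persistent_hyperPRL, persistent_hypogonadism) are invented placeholders, not the study data set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100

# Hypothetical predictors and outcome (impaired BMD at last follow-up, 0/1).
X = pd.DataFrame({
    "age": rng.normal(45, 12, n),
    "sex_male": rng.integers(0, 2, n),
    "persistent_hyperPRL": rng.integers(0, 2, n),
    "persistent_hypogonadism": rng.integers(0, 2, n),
})
y = rng.integers(0, 2, n)

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(fit.params).rename("OR")
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, ci], axis=1))
```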
Discussion
This large prolactinoma cohort study shows that: (1) although both hyperprolactinemia and hypogonadism are under control in the majority of patients at a median follow-up of ≈ 7 years, the prevalence of bone impairment was and continues to be significantly higher in men than in women; (2) persistent hyperprolactinemia and male sex, but not persistent hypogonadism, are independent risk factors for long-term bone impairment in prolactinoma patients; and (3) recurrence-free intervals are significantly shorter in prolactinoma patients with impaired BMD.
Long-term impact of prolactinoma treatment on bone mineral density. Hyperprolactinemia and the associated hypogonadism affect bone turnover in prolactinoma patients 10,14,15 . While age-related bone loss might have contributed to some extent to bone fragility over our study period of almost seven years 24,25 , long-lasting hyperprolactinemia has been found to be a major contributor to bone impairment, even when hyperprolactinemia is brought under control 15, 20 , corroborating our results. Consistently, treatment with DA agonists over 2 years was not found to restore bone impairment in young patients suffering from hyperprolactinemia 12 .
We further noted that significantly more men than women suffered from bone impairment at study entry. While amenorrhea in women is easily detected and investigated, men often do not report the more nonspecific symptoms of hypogonadism, such as loss of libido. Consequently, women probably suffer from hyperprolactinemia and hypogonadism over a much shorter period before diagnosis, and treatment is initiated much earlier than for men 26 . This hypothesis is further supported by the current finding that the age at diagnosis was significantly higher for men than for women. Likewise, macroprolactinomas were more frequently encountered in men than in women, possibly contributing to both the higher baseline PRL levels as well as the subsequent higher prevalence of bone impairment in men compared to women. Namely, initial prolactin levels and the size of the tumor may reflect how long the disease has been present, given that bone loss has been associated with the duration of amenorrhea in women with prolactinomas 8 . Nevertheless, in this study cohort, the duration of therapy or the duration of amenorrhea in women was not a significant risk factor for BMD development. Furthermore, treatment of the prolactinoma might interfere with BMD development. Conversely, while we could not observe a difference in testosterone replacement, vitamin D supplementation, or the use of hydrocortisone in men versus women, it is conceivable that a certain selection bias towards osteoporosis screening in more severely affected men with prolactinoma took place at study entry, given the 3:2 ratio of women to men. This may partly be explained by the fact that the prevalence of prolactinoma is known to be higher in women than in men 27,28 . In addition, although health insurance in Switzerland covers medical investigation and therapy, decisions regarding whether to screen for bone density are not based on financial considerations. Bone measurement and programs for osteoporosis prevention have mainly focused on post-menopausal women 8,9 , while this condition often remains underdiagnosed in men [10][11][12] . Consistently, in a large study cohort, significantly fewer men received evaluation for osteoporosis following a distal radial fracture, with rates of evaluation unacceptably low according to published guidelines 12 .
In the context of prolactinomas, the need for awareness of bone loss might thus have been underestimated in men, with those affected more severely being preferentially assessed. Screening for bone loss should therefore not be neglected in either sex in prolactinoma patients, regardless of the primary treatment chosen (i.e., surgical or medical), as the primary treatment did not seem to influence the prevalence of bone impairment in our cohort.
Recurrence rates of prolactinomas. We noted no differences in the recurrence rates between men and women after DA agonist withdrawal, whereas other authors reported more recurrences in men than women 29 , possibly because men suffer more often from macroprolactinomas than women do 30 . While recurrence-free intervals were not significantly different with regard to adenoma size, patients with impaired BMD had significantly shorter recurrence-free intervals than those with normal BMD. This is an intriguing finding. It is conceivable that the smaller sample size of patients with macroadenomas conceals a true effect 31 . Indeed, macroprolactinomas in men are associated with longer lasting hyperprolactinemia and related hypogonadism, with subsequently impaired BMD 11,32 . Nevertheless, the adenoma size per se might not be the only factor that determines the severity of the disease. In contrast, impaired BMD, which as "end organ" reflects the full range of the disease, including duration of hypogonadism, might thus become a more comprehensive surrogate marker for the severity of long-lasting hyperprolactinemia. Given that osteoporosis prevention has particularly focused on postmenopausal women (with prolactinomas), assessment of BMD in men with prolactinomas might become routine and incorporated into study guidelines. Further studies should be directed at how to improve bone health in prolactinoma patients in general and how to better evaluate patients at risk at the earliest time point possible.
Study limitations
This study suffers from the limitations of any retrospective study, and of the single-center design. In 83 of 100 patients, data were available on the onset of symptoms prior to diagnosis. Thus, the duration of hypogonadism and hyperprolactinemia, or the time period between resolution of hyperprolactinemia and the performance of bone densitometry, could not be retrieved for all patients. In addition, this calculation reflects only an approximate estimate of the duration of both hypogonadism and hyperprolactinemia. Furthermore, a true effect for the association between amenorrhea duration and long-term BMD status might have been concealed given the sample size of premenopausal women with available data confirming amenorrhea at baseline. Given that there was no prospective assessment of DA-induced impulse control disorders, the true number of patients experiencing them might be underestimated. Likewise, although severe personality changes have been reported, these might often not be mentioned by the patients due to feelings of shame 12 .
No treatments with growth hormone (GH) were noted in this cohort, and not all patients were screened for GH deficiency using validated dynamic testing, or for vitamin D concentrations and active smoking status, so it is possible that these parameters influenced the bone health status in some patients [33][34][35][36][37][38] . Patients with osteopenia and osteoporosis have been grouped together as patients with impaired BMD, and statistical uncertainty in this sample size precluded us from discriminating between osteopenia and osteoporosis in both men and women. Numeric BMD values in this patient cohort are missing; thus, quantification of bone improvement following treatment of hyperprolactinemia and hypogonadism was not possible. Allocation into groups (i.e. normal, osteopenia, osteoporosis) indirectly reflects changes in bone impairment. This pooling does not account for the fact that osteopenia is present in about 15% of young, healthy women 39 . Likewise, it was not statistically feasible to use multiple logistic regression analysis to assess additional independent predictors influencing BMD, such as location of BMD measurement, testosterone replacement, vitamin D supplementation, and use of hydrocortisone (see Table 2). In addition, the location used for BMD measurement with DXA was not consistent in all patients examined. Although there is a significant correlation for BMD values between anatomical regions such as the spine, proximal femur and forearm, the validity of DXA measurement in prolactinoma patients favors the spine only, as data show that femoral BMD measurement might mask BMD effects exerted by hyperprolactinemia and associated hypogonadism 8,40 .
Our biochemical definition of persistent hypogonadism (i.e. inadequate gonadotropins in the presence of low estradiol) may have underestimated a true association between persistent hypogonadism and long-term BMD status, as it does not capture those women with sporadically normal estradiol levels at follow-up but ongoing oligomenorrhea.
Conclusions
The prevalence of bone impairment is and continues to be significantly higher in men with prolactinomas than in women. Impaired BMD as "end organ" reflects the full range of the disease and could become a surrogate marker for the severity of long-term hyperprolactinemia and associated hypogonadism.
Methods
This retrospective cohort study included all consecutive prolactinoma patients with osteodensitometric data at study entry and at long-term follow-up (> 12 months) who were treated at our tertiary referral center between 1997 and 2015. All patients fulfilled the diagnostic criteria of a prolactin (PRL)-secreting pituitary adenoma (i.e., PRL levels > 30 µg/L without evidence of pituitary stalk compression, primary hypothyroidism or drug-induced hyperprolactinemia), and had a positive pituitary magnetic resonance imaging (MRI) scan. The indication for first-line pituitary surgery was local complications of the adenomas or the patient's preference to undergo surgery rather than long-term DA agonist therapy, as reported previously 11,41 . Each patient's situation and primary treatment were discussed at the interdisciplinary pituitary tumor board meeting.
BMD was assessed by dual-energy X-ray absorptiometry (DXA, HOLOGIC, Bedford, MA, USA) at the femoral bone and/or spine at baseline and at last follow-up. A T-score ≥ 1 SD was regarded as normal, whereas a T-score of − 1.5 to − 2.5 SD suggested osteopenia, and ≤ −2.5 SD suggested osteoporosis. The Z-score was used in the diagnosis of impaired BMD in premenopausal women and in men aged < 50 years 42,43 . Impaired BMD was considered in patients with osteopenia and/or osteoporosis 11,44 .
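A small helper illustrating the classification rule described above. The osteopenia and osteoporosis cut-offs are taken from the text; the threshold for "normal" (everything above the osteopenia range) is my reading of the passage, and the function name and interface are mine.

```python
def classify_bmd(score):
    """Classify a DXA T-score (or Z-score, where the text says it applies)
    into normal / osteopenia / osteoporosis using the stated cut-offs."""
    if score <= -2.5:
        return "osteoporosis"
    if score <= -1.5:
        return "osteopenia"
    return "normal"

for s in (0.3, -1.8, -2.9):
    print(s, "->", classify_bmd(s))
```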
MRI was performed on a 1.5-T or 3-T system including a Proton/T2-weighted whole-brain study with unenhanced, contrast-enhanced, dynamic contrast-enhanced and post contrast-enhanced overlapping studies in the axial, sagittal and coronal planes over the sellar region 45,46 . A tumor with a diameter of 1-10 mm was defined as a microadenoma, and a tumor > 10 mm in diameter was defined as a macroadenoma. Infiltration of the cavernous sinus was defined as ≥ two-thirds encasement of the internal carotid artery by the adenoma, as previously described 47,48 .
Patient characteristics recorded at study entry included age, body mass index (BMI), co-occurring clinical symptoms such as headache, pituitary axes deficits and radiological findings. Symptoms such as galactorrhea and amenorrhea in women or infertility and/or lack of libido or erectile dysfunction in men were noted separately. Partial hypopituitarism was defined as impaired secretion of one or more pituitary hormones. PRL levels were assessed. These included the immunoradiometric PRL assay with serum dilution in order to overcome the high-dose PRL hook effect [49][50][51][52] . Secondary adrenal insufficiency was noted in the presence of low cortisol levels in the serum or in cases where cortisol level was normal but responses to the adrenocorticotropic hormone (ACTH) stimulation test or insulin tolerance test were inadequate. The diagnosis of secondary hypothyroidism was made in the presence of low-normal thyrotropin (TSH) levels and a low free thyroxin (FT4) level. Hypogonadotropic hypogonadism, or central hypogonadism, leads to secondary amenorrhea or irregular menstrual cycle in female patients and impaired libido in males. Biochemically, inadequately normal-low gonadotropins can be documented, resulting in lack of production of estradiol or testosterone 53,54 .
For men, two fasting measurements of total testosterone concentrations were used to screen for androgen deficiency 55,56 . Blood samples were collected after overnight fasting. Serum concentrations of total testosterone (normal reference range, 9.9-28.0 nmol/L) were measured using the Elecsys system (Roche Diagnostics) 57 . To evaluate the day-to-day variance, total testosterone was measured by the Elecsys system on two different days within one month at 8 am in the fasting state 58 . In order to estimate the duration of hyperprolactinemia and subsequent hypogonadism, we reviewed patients' records to assess the reported onset of clinical symptoms prior to diagnosis (i.e., onset of galactorrhea/amenorrhea in women; loss of libido or erectile dysfunction in men). The estimated duration of hyperprolactinemia and hypogonadism was then calculated from the date of reported onset of symptoms to the date of laboratory correction of hyperprolactinemia or hypogonadism during the follow-up visit.
Pituitary surgery (n = 53) was performed using a transseptal, transsphenoidal microsurgical approach, as described previously 45,59 . Postoperatively, body weight, fluid intake and output, serum electrolytes, and serum and urine osmolality were monitored daily. An antibiotic was administered in the perioperative setting and discontinued after 24 h.
Early follow-up took place about three months after surgery or at the initiation of DA agonist treatment. The dose of the DA agonist was increased if PRL levels were still elevated (> 30 µg/L) in the medical cohort. If patients in the surgical cohort had elevated PRL at pathological levels, DA agonist therapy was initiated.
A standardized protocol was followed for the withdrawal of DA agonists. In the medical cohort, DA agonists were tapered 24 months after initiation of the medical therapy if PRL levels had normalized and tumor reduction of > 50% was attained at the time of radiological follow-up, as defined previously 60,61 . Recurrence was defined as an increase in PRL levels above the normal range (> 25 µg/L for women, > 20 µg/L for men) during the last follow-up period after a previous remission, irrespective of radiological findings 62, 63 . Statistical analysis. Data were analyzed using IBM SPSS statistical software Version 24.0 (IBM Corp., New York, NY, USA) and visualized using GraphPad Prism (V7.03 software, San Diego, CA, USA). Continuous variables were examined for homogeneity of variance and are expressed as mean ± SD except where otherwise noted. PRL levels are presented as median values and interquartile range (IQR, 25th-75th percentile). For comparisons of means between two groups, Student's t-test was used for normally distributed data, and the Mann-Whitney test for nonparametric data. The Wilcoxon signed-rank test was used to evaluate paired differences in PRL, testosterone and estradiol levels before and after treatment 64 . Categorical variables were compared using Pearson's chi-square test or Fisher's exact test, as appropriate 65 . The Kaplan-Meier method was used to analyze recurrence-free intervals during follow-up, and significance was calculated using the log-rank (Mantel-Cox) test. To identify potential associations with impaired BMD at last follow-up, possible risk factors (patient age, sex, primary therapeutic approach, BMI, initial tumor size [i.e. macroadenoma], persistent need for DA agonists, persistent hyperprolactinemia and hypogonadism) were included, and multivariate analysis was performed with a binary logistic regression model. OR and 95% CI were calculated and p values ≤ 0.05 were considered statistically significant 66, 67 . Ethical standards and patient consent. All methods were performed in accordance with the relevant guidelines and regulations of Scientific Reports. The study is a retrospective data project using existing data to evaluate registry data quality, and there was no patient contact for the study; therefore, there was no patient consent process. The Human Research Ethics Committee of Bern (Kantonale Ethikkommision KEK Bern, Bern, Switzerland) approved the project (KEK no. 10-10-2006 and 8-11-2006). The ethics committee waived the need for informed consent for this study as part of the study approval. The study was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.
"year": 2021,
"sha1": "756195e84c0d928c12c0dcb0c12c0ff1e96699fd",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-84606-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b638493313533c84462d5d2160a96706f19d7d1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219633695 | pes2o/s2orc | v3-fos-license | UDC 582.26/.27(477.75) MICROALGAE OF MUD VOLCANO OF THE BULGANAK SOPOCHNOE FIELD ON THE CRIMEAN PENINSULA
Mud volcanoes are among the unique natural phenomena widespread around the world. They can be found in Crimea, including the Bulganak sopochnoe field – the largest cluster of active mud volcanoes on the peninsula (45°25′29.04′′N, 36°27′51.64′′E). The microalgae of mud volcanoes have not been studied so far in Crimea or in other regions of Russia; the need for and urgency of such a study make these volcanoes of scientific interest. First data on the microalgae species composition of active mud volcanoes are presented in this article. Samples collected by O. Yu. Eremin (03.08.2012 and 13.04.2013) in the upper 2–3-cm layer of suspension and in surface water were investigated. The ranges of salinity and water temperature were 27–32 g per L and +28...+31 °C, respectively. Microalgae species composition was determined in water preparations using an Axioskop 40 (Carl Zeiss) light microscope at a magnification of 10×40 with software AxioVision Rel. 4.6. In total, 16 taxa were found: Cyanobacteria (1), Dinophyta (2), Bacillariophyta (6), and Euglenophyta (7). Of these, the cyanobacterium Chamaecalyx swirenkoi (Schirshov) Komárek et Anagnostidis, 1986 was found in the mud volcano in August 2012. Pennate species of diatoms were also identified – solitary living species (of genera Cylindrotheca (Ehrenberg) Reimann & J. C. Lewin, Lyrella Karajeva, and Nitzschia Hassall) and colonial species (of genera Berkeleya Greville and Pseudo-nitzschia H. Peragallo). The brackish-water, benthic, boreal-tropical species Nitzschia thermaloides Hustedt was recorded for the algal flora of Crimea, the Black Sea, and the Sea of Azov for the first time. Euglenophytes were also found in the samples – 5 species of the genus Trachelomonas Ehrenberg and 2 species of the genus Strombomonas Deflandre. Of all the species found in the mud volcano ecotope, 7 species are common for the Black Sea, and 9 species, including 3 euglenophytes, are common for the Sea of Azov. It is shown that by characteristics of halobility, the species found in the mud volcano belong to the freshwater complex (53 %), with a significant share of marine (27 %) and brackish-water (20 %) species. Of the phytogeographic flora elements, boreal species make up 33 %, boreal-tropical – 47 %, and cosmopolites – 20 %. Three species of potentially toxic algae are recorded: the diatom Pseudo-nitzschia prolongatoides (Hasle) Hasle, 1993, as well as the dinophytes Prorocentrum lima (Ehrenberg) Dodge, 1975 and Alexandrium tamiyavanichii Balech, 1994. The last species is marine, boreal-tropical, and new to the algology of Crimea, the Black Sea, and the Sea of Azov. The article also presents our own and literature data on the morphology, ecology, and phytogeography of the species, as well as on their general distribution in different waterbodies of the world. Some microalgae species are indicators of saprobity; they are able to participate in the purification of water from organic substances. Photos of mud volcanoes and micrographs of some species are presented.
Mud volcanoes are among the unique natural phenomena widespread around the world. On the Crimean Peninsula they are found, among other places, in the Bulganak sopochnoe field, which is the largest cluster of active mud volcanoes in Crimea [25]. The term "mud volcano" (in German, Mudevulkan) was proposed by G. Helmersen, who was involved in the study of mud volcanoes, in particular of Altai and of the oil fields of the Taman and Kerch peninsulas, for 60 years. According to academician I. M. Gubkin, one of the founders and creators of oil geology in Russia, gas and oil manifestations and mud volcanism result from the same causes, namely special forms of tectonics: diapir structures (folds and domes arising due to extrusion of highly plastic rocks, salt and clay, from the lower horizons). He was the first to establish their common genetic origin; this was later used in a program for the study of mud volcanoes of the Crimean-Caucasian geological province, Dzherelo [25].
Crimea is one of the areas of mud volcanism; there are 33 volcanoes on the territory of the peninsula [8]. Mud pours out through the craters and spreads along the slopes in the form of streams. Volcano fields of the Bulganak type belong to mud volcano formations for which violent eruptions are not characteristic. They are natural monuments of regional significance, as well as tourist attractions.
So far, the microalgae of mud volcanoes in Crimea have not been studied. Moreover, no information is available about similar research in other regions of Russia. Preliminary studies have shown the presence of microalgae in the surface layer of mud volcano ejections. The relevance of this work follows from the complete lack of data on the microalgae communities of Crimean mud volcanoes, which are of significant scientific interest.
The aim of this work is to describe the species composition of microalgae in biotopes of the mud volcano located in the eastern part of the Crimean Peninsula.
MATERIAL AND METHODS
Material for the study was high-quality samples (sulfur-clay-silty substrate and water), taken by an employee of A. O. Kovalevsky Institute of Biology of the Southern Seas O. Yu. Eremin on the Crimean Peninsula from the area of active volcanoes of the Bulganak sopochnoe field. Volcanoes are scattered over a vast territory there, and their cones are almost flush with the ground or have relatively large sizes (Fig. 1).
Sampling was carried out on August 3, 2012 and April 13, 2013 in the upper 2-3-cm layer of silt suspension with surface water flowing from the mud volcano. Salinity (27-32 g per L) and water temperature (+28…+31°C) were measured using a refractometer and digital thermometer, respectively [22].
RESULTS AND DISCUSSION
A preliminary study of two samples of silty suspension from the mud volcanoes showed the presence of microscopic algae belonging to different high-rank taxonomic groups in these habitats. In total, 16 species of different genera were identified, including the cyanobacterium Chamaecalyx swirenkoi (Shirshov) Komárek et Anagnostidis. Classification of the species identified, their size, ecology, phytogeography, and general distribution are given below.
This species was first described by P. P. Shirshov from the Kodyma River, a tributary of the Bug River (Ukraine) [47]. Ecology, phytogeography, and general distribution. Freshwater and brackish-water species, found in stagnant freshwater bodies as well as in seas; boreal-tropical species. It is recorded in the supralittoral [23] and microphytobenthos of the Kazantip Nature Reserve of the Sea of Azov [21], in Cystoseira epiphyton and on other substrates of the Black and Aegean seas [17], on algae and higher aquatic plants in the Dniester River mouth and the Dniester Estuary in Odessa region [7], in epiphyton of green algae and higher aquatic plants near the water edge in water bodies of Leningrad Region, in the Chikhachev Bay of the Sea of Japan [1], in a lagoon of the Gulf of Finland of the Baltic Sea [34], in Austria, Japan, Mexico, and Western Slovakia, and on Java Island [38].
[17]. Ecology, phytogeography, and general distribution. Marine and brackish-water, boreal and natal species inhabiting mainly the southern European seas, including shallow waters near southern Crimea and the Caucasian coast of the Black Sea, on stones, rocks, and invertebrates' shells [17,21]. The species was first described from phytoplankton and microphytobenthos of the Sea of Azov [3,13].
T. volvocina (Ehrenberg) Ehrenberg, 1834 (= Microglena volvocina Ehrenb.). Found in the mud volcano on April 13, 2013, with a cell diameter of 8-9 μm (Fig. 8). Lodges are spherical, with a diameter of (4)-8-23-(32) μm [14]. Ecology, phytogeography, and general distribution. It is a freshwater species, mainly inhabiting stagnant water and less commonly found in weakly brackish water at pH of (4.4)-5.5-8.4. It is characterized as a β-mesosaprobic-oligosaprobic species and has mixotrophic nutrition. Boreal species. It is recorded in Odessa Region and Crimea [14].
Phylum Euglenophyta, class Euglenophyceae, order Euglenales, family Euglenaceae, genus Strombomonas Deflandre, 1930 (= Trachelomonas Ehrenberg). Lodges of species of this genus are larger and more variable in shape compared to those of the genus Trachelomonas [14]. Strombomonas specimens were found in a living state in the mud volcano. Representatives of this genus were often recorded in the samples, but it was difficult to identify them to species level. Micrographs of some of them are given below.
Conclusion. A preliminary study of microalgae of the mud volcano in the region of the Bulganak sopochnoe field on the Crimean Peninsula showed the diversity of their species composition in watered habitats.
We found the cyanobacterium Chamaecalyx swirenkoi and 15 species of eukaryotic microalgae: 2 dinoflagellate species (of the genera Prorocentrum and Alexandrium), 6 diatom species (one each of the genera Lyrella, Pseudo-nitzschia, Nitzschia, and Cylindrotheca, and two of the genus Berkeleya), as well as 7 species of euglenophytes (5 of the genus Trachelomonas and 2 of the genus Strombomonas). Some of them are widespread in the microphytobenthos of the Sea of Azov and the Black Sea. Of all the algae found in the mud volcano, 7 species are common for the Black Sea, while 9 species, including 3 species of euglenophytes, are common for the Sea of Azov.
Three species considered to be potentially toxic were identified: the diatom P. prolongatoides, as well as the dinophytes Pr. lima and A. tamiyavanichii. The last species is marine, boreal-tropical, and new to the Crimean flora. By halobility, the species found in the mud volcano belong to the freshwater complex (53%), with a significant share of marine (27%) and brackish-water (20%) species. Taking into account phytogeographic features, it can be concluded that boreal species make up 33%, boreal-tropical species 47%, and cosmopolites 20%.
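The percentages above follow from simple counts over the classified species. The short sketch below reproduces them under the assumption, inferred from the figures rather than stated in the text, that 15 species were classified by halobility, with 8 freshwater, 4 marine, and 3 brackish-water.

```python
# Assumed counts (8 freshwater, 4 marine, 3 brackish-water of 15 classified
# species); these reproduce the reported 53 % / 27 % / 20 % shares.
counts = {"freshwater": 8, "marine": 4, "brackish-water": 3}
total = sum(counts.values())

for group, n in counts.items():
    print(f"{group}: {100 * n / total:.0f} %")
```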
The work was carried out within the framework of government research assignment of IBSS RAS "Investigation of the mechanisms of controlling production processes in biotechnological complexes with the aim of developing the scientific foundations for the production of biologically active substances and technical products of marine genesis" (no. АААА-А18-118021350003-6).
Acknowledgment. The article is dedicated to IBSS employee Oleg Yuryevich Eremin who was an enthusiastic participant of many scientific expeditions aimed at studying of hypersaline lakes of Crimea and always helped his colleagues with hydrobiological samplings. He died tragically in 2014 when returning from one of the expeditions. The idea of studying microalgae on the surface of silty substrates in a mud volcano near hypersaline water bodies of Crimea belongs to him. It was he who took samples from the mud volcano and insisted on their analysis. | 2020-05-21T00:10:08.937Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "8b3b863d953b8de4076c9b4378cf32c350c3c63f",
"oa_license": "CCBYNCSA",
"oa_url": "https://mbj.marine-research.org/article/download/221/221",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8026d816dd616ce3f8928cb421c8767b4a0b8255",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
23854466 | pes2o/s2orc | v3-fos-license | Perspective Space as a Model for Distance and Size Perception
In the literature, perspective space has been introduced as a model of visual space. Perspective space is grounded on the perspective nature of visual space during both binocular and monocular vision. A single parameter, that is, the distance of the vanishing point, transforms the geometry of physical space into that of perspective space. The perspective-space model predicts perceived angles, distances, and sizes. The model is compared with other models for distance and size perception. Perspective space predicts that perceived distance and size as a function of physical distance are described by hyperbolic functions. Alternatively, power functions have been widely used to describe perceived distance and size. Comparison of power and hyperbolic functions shows that both functions are equivalent within the range of distances that have been judged in experiments. Two models describing perceived distance on the ground plane appear to be equivalent with the perspective-space model too. The conclusion is that perspective space unifies a number of models of distance and size perception.
Introduction
Physical space can be defined as the boundless three-dimensional extent in which objects have size, form, and position. Physical space is homogeneous and isotropic within the extent of human vision, implying that objects do not change in size or form under translation and rotation. Visual space is the extent that we, that is, human beings, perceive through vision. Visual space differs from physical space, especially at long viewing distances. It is neither homogeneous nor isotropic, implying that objects are perceived to change in size or form under translation and rotation. Perspective space has been proposed as a model of visual space (Erkelens, 2015a;Gilinsky, 1951;Hatfield, 2012). Gilinsky (1951) introduced the model to describe empirical data of distance and size perception. The single parameter of the model, that is, the distance of the vanishing point was inferred to be about 30 m or more. Erkelens (2015a) used perspective space to describe perspective angles, that is, angles perceived between parallel lines in physical space. Distances of the vanishing point inferred from perspective angles were shorter than 6 m. The large difference between the distances of vanishing points in the two studies suggests that the models of Gilinsky and Erkelens have different geometries. An alternative explanation is that perceived distances and angles cannot be described by a single perspective space. The purpose of this study is to investigate properties of perspective space in relation to distance and size perception and to compare the models of Gilinsky and Erkelens with each other and with other models of distance and size perception.
Research of distance and size perception has a long history (for a comprehensive review see Wagner, 2006). The extensive literature on the topic presents a plethora of experimental results, which together do not seem to go well with a specific geometry of visual space. Results depended so heavily on methods, conditions, and instructions that researchers even repudiated the concept of visual space altogether (Cuijpers, Kappers, & Koenderink, 2002). Wagner (2006) championed the idea, less remote from the intuitive notion of a visual space, that we should see visual space as a family of spaces whose individual geometries differ from each other depending on experimental conditions and mental shifts in the meaning of size and distance. This study will show that perspective space is such a family of spaces. Perspective space will prove to be an attractive model for distance and size perception because it fits well to many experimental results and unifies a number of existing models. Another attractive property of perspective space is that it matches both physical space and pictures in a natural and simple way.
Erkelens' Model of Perspective Space
Perspective space is the collective name for spaces that differ from each other by the value of a single parameter, that is, the distance of the vanishing point. Figure 1 shows objects, rings in this example, in physical space (Figure 1(a) and (c)) and how their sizes, distances, and directions are transformed in perspective space (Figure 1(b) and (d)). Object size is independent of distance in physical space (Figure 1(c)) but not in perspective space (Figure 1(d)). Perspective space is defined relative to the position and viewing direction of an observer. The distance of its vanishing point characterizes a certain perspective space. Generally, the distance is finite meaning that perspective space is bounded in depth. The family of perspective spaces includes two spaces whose geometries are equivalent to spaces in the physical world. The analogue spaces are physical space itself and the projection of physical space on a flat surface orthogonal to the viewing direction, that is, the picture plane (Figure 1(a) and (c)), a planar representation of the retinal image. Distance of the vanishing point is infinite for physical space and zero for the picture plane. Positions of objects in perspective space are best expressed in terms of a two-dimensional direction relative to the viewing direction and a one-dimensional depth relative to the position of the observer. In perspective space, depth depends on distance of the vanishing point but direction does not, implying that directions of objects are identical in all perspective spaces, including physical space and the picture plane.
Perspective space is Euclidean, meaning that the metric is the straight-line distance and angles of triangles add up to 180° (Erkelens, 2015c). A property of perspective space is that straight lines in one perspective space constitute straight lines in any other perspective space (Figure 2). As a consequence, line pieces aligned in physical space remain aligned in perspective space (Erkelens, 2015c). This property has been shown experimentally for visual space (Cuijpers et al., 2002). Another property of perspective space is that parallel lines in one space generally transfer to converging or diverging lines in other spaces. Thus, parallelism is not preserved. Parallel lines in frontal planes are the exception. Such lines remain parallel in all perspective spaces. Experimentally, parallelism was found not preserved in visual perception for lines having orientations in depth (Cuijpers, Kappers, & Koenderink, 2000). Figure 2(a) shows that lines in physical space running parallel to the viewing direction converge in perspective space to a vanishing point VP lying in front of the observer. Conversely, lines in perspective space parallel to the viewing direction converge in physical space to a vanishing point VP lying behind the observer (Figure 2(c)). The famous parallel alleys of Hillebrand (1902) and Blumenfeld (1913) are examples of parallel lines in visual space that are described by the model of perspective space (Erkelens, 2015c). The distance alleys of Blumenfeld (1913) are explained by perspective space in combination with the size-distance-invariance hypothesis (Epstein, 1963). Figure 2(b) shows parallel lines in physical space that have an orientation in depth different from the viewing direction.
Figure 1. (a) The observer (half sphere at the right side) fixates the center of the ring (blue). The black line indicates the (binocular or monocular) viewing direction. The ring defines a set of directions (forming the cone with its vertex at the observer). The ring also defines a set of lines parallel to the viewing direction, indicating physical trajectories if the ring would move in the viewing direction. The parallel lines together form a cylinder, projecting a circle the size of the ring in the plane of the observer. The orange plane orthogonal to the viewing direction contains the projection (orange) of the ring on a two-dimensional planar surface. (b) The ring of (a) in perspective space. The directional cone is identical to that in physical space. The parallels in physical space are converted to lines converging to a vanishing point in perspective space, indicating trajectories if the ring would move in the viewing direction. Together, the converging lines form a cone having its vertex at the vanishing point. Intersection between the two cones forms the ring (red) in perspective space. (c) Two identical rings at different distances in physical space. The two rings (blue) project to two concentric rings (orange) in the picture plane, indicating their relative size on the retina. (d) The two rings of (c) in perspective space. Size ratio of the rings (red) depends on the distance of the vanishing point and lies in between size ratios in the picture plane (orange) and physical space (blue) for positive finite vanishing distances.
viewing direction. These lines converge to vanishing points of other perspective spaces. These perspective spaces are defined by viewing directions running parallel to the lines of the grids in physical space.
Distance and Size Perception
The model of perspective space makes predictions for distance and size perception. Since direction and distance behave differently in perspective space, use of a polar coordinate system would be appropriate. However, perceived distances and sizes are usually expressed in meters rather than degrees. Therefore, it is convenient to use a Cartesian coordinate system having its origin at the observer and the z-axis along the viewing direction (Figure 3). Furthermore, Cartesian coordinates are helpful in comparing derived equations directly to equations presented in the literature. Figure 3(a) shows in one graph the right half of the cross-sections along the z-axes of the physical and perspective spaces shown in Figure 1(a) and (b). Since perspective space is used as a model of visual space, the term visual will be used as a replacement of the term perspective.

Figure 2. Transformations between physical and perspective shapes. (a) A two-dimensional grid in physical space (blue) is located at a certain distance from the observer (indicated by arcs at the right side). Each point of the grid defines a direction (the black dotted lines are examples). Each point also defines a line parallel to the viewing direction (blue dotted lines) intersecting with the frontal plane of the observer. The parallels in physical space converge to the vanishing point VP in perspective space (red dotted lines). Intersections between directions and vanishing lines define points of the grid in perspective space. Together the intersections form a deformed grid (red). Transformation from physical to perspective space affects shape, size and distance of the grid. A physical grid is shown whose lines are parallel or orthogonal to the viewing direction. (b) The same physical grid is shown but rotated clockwise by 30° about its center. Note that the computed perspective grids of (a) and (b) are not rotated versions of each other. (c) The parallels in perspective space converge to a vanishing point VP in physical space lying behind the observer.
The relationship between distance in visual and physical space is given by

Z_v = (Z_p · VP) / (Z_p + VP). (1)

Since S_p and S_v define similar triangles relative to the observer, sizes have the same relationship as distances:

S_v = (S_p · VP) / (Z_p + VP). (2)

The two equations show that perceived distance and size depend both on physical distance and on the vanishing distance of visual space. Although the relationship between physical and visual size is simple, it cannot directly be applied to fit experimental results presented in the literature. Usually, observers matched the sizes of two objects. One object, called the standard, was of fixed size and placed at various distances. The other object, called the comparison, was placed at a fixed distance and was adjustable in size. Figure 3(b) shows the geometry for a set of standard and comparison objects. The general relationship between the physical and visual sizes of the two objects is given by

S_vc / S_vs = (S_pc / S_ps) · (Z_ps + VP) / (Z_pc + VP). (3)

Figure 3. Geometries for distance and size judgments. (a) A line in physical space (blue) of size S_p is located at distance Z_p from the observer (arc at the right side). In perspective space, as a model for visual space, the line (red) gets size S_v and distance Z_v if the vanishing point is placed at distance VP. (b) A standard line in physical space of fixed size S_ps is located at a variable distance Z_ps from the observer. A comparison line in physical space of adjustable size S_pc is located at a fixed distance Z_pc from the observer. In visual space, the lines have sizes S_vs and S_vc, respectively, if the vanishing point is at distance VP. The vertical, dotted lines on the right side of the graphs represent the plane of the observer orthogonal to the viewing direction.
Equations (1) and (3) are derived in the Appendix. Matching perceived sizes in a typical experiment means that observers are asked to set S_vc equal to S_vs, reducing Equation (3) to

S_pc / S_ps = (Z_pc + VP) / (Z_ps + VP). (4)

Figure 4(a) shows relationships between visual and physical distances as expressed by Equation (1). The graph shows three classes of relationships. If VP is positive, visual distance is an underestimation of physical distance and becomes equal to VP for objects at infinity. In other words, visual space is a bounded space. If VP is infinite (blue line), visual and physical distances are equal. Then, visual space is unbounded. If VP is negative, observers overestimate physical distances. Negative VPs are associated with inverted perspective (Arnheim, 1972), also called reverse perspective (Derksen, 1999; Wade & Hughes, 1999), or occasionally, Byzantine perspective (Deregowski, Parker, & Massironi, 1994). If VP is negative, parallel lines in physical space are perceived to diverge rather than converge with distance. Negative VPs are discussed later in relation to instructions that have been given to observers in size and distance judgment tasks. Figure 4(b) shows relationships for perceived size as described by Equation (4). The graph shows ratios for two equally large perceived stimulus sizes positioned at different distances. The horizontal line (blue), for which VP is infinite, shows the law of size constancy, indicating that perceived size is independent of distance. Perceived size decreases with distance for positive VPs. This phenomenon is called underconstancy of size. The decrease in size with distance is fully determined by stimulus size in the picture or on the retina, respectively, if VP is zero. A negative VP is associated with overestimation of size with increasing distance.
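To make these relationships concrete, the following minimal sketch evaluates the hyperbolic forms given in Equations (1), (2) and (4) above; the distances and vanishing-point values are illustrative only and are not taken from any reported experiment.

```python
import numpy as np

def visual_distance(Z_p, VP):
    """Equation (1): perceived (visual) distance for physical distance Z_p and
    vanishing-point distance VP; an infinite VP reproduces physical space."""
    Z_p = np.asarray(Z_p, dtype=float)
    if np.isinf(VP):
        return Z_p                       # unbounded visual space, Z_v = Z_p
    return Z_p * VP / (Z_p + VP)

def visual_size(S_p, Z_p, VP):
    """Equation (2): sizes scale like distances (similar triangles)."""
    if np.isinf(VP):
        return S_p
    return S_p * VP / (Z_p + VP)

def matched_size_ratio(Z_ps, Z_pc, VP):
    """Equation (4): ratio S_pc / S_ps of physical sizes perceived as equally
    large, with the standard at Z_ps and the comparison at Z_pc."""
    if np.isinf(VP):
        return 1.0                       # law of size constancy
    return (Z_pc + VP) / (Z_ps + VP)

# The three regimes of Figure 4: bounded (VP > 0), veridical (VP infinite)
# and overestimating / "reverse perspective" (VP < 0).
for VP in (10.0, np.inf, -200.0):
    print(VP, visual_distance([1.0, 10.0, 100.0], VP))
```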
Other Models of Distance and Size Perception
Gilinsky's Model
Equations (1) and (2), describing the relationship between visual and physical distances and sizes, are identical to the relationships derived by Gilinsky (1951). This means that the models of Gilinsky and Erkelens have the same geometry. According to Gilinsky herself, the computations were mainly inspired by Luneburg's theory of a curved visual space (1947, 1950). Gilinsky (1951, p. 460) stated: "the two formulas [for distance and size] are rigorously derived from the basic metric of visual space as established mathematically (for binocular vision) by Luneburg (1947, 1950). Second, the same two formulas are mathematically derived (somewhat less rigorously but without restriction to binocular vision) from the known principles of visual perspective. Finally, the same two formulas are derived by a simple inductive method of mathematical composition for the two boundary laws of size constancy and retinal image (visual angle). All three methods of derivation yield the identical pair of formulas to express a unifying law of visual space perception."
As Fry (1952) pointed out, Gilinsky (1951) made substitutions in Luneburg's equations, which turned Luneburg's essentially non-Euclidean metric into a Euclidean metric. Thus, Gilinsky (1951) used a Euclidean metric and described a flat rather than curved space for the domains of monocular as well as binocular vision. Gilinsky's equation has been very successful in describing considerable amounts of experimental data (Fry, 1952). Nevertheless, Baird and Wagner (1991) dismissed Gilinsky's equation for distance perception because the computed equation could not describe overestimation of distance as was observed in a number of experimental studies.
Equation (4) is mathematically equivalent to the equation Gilinsky (1951) derived for size ratios. However, interpretation and usefulness are different. Gilinsky (1951) computed size ratios by assuming a distance, called the "normal" viewing distance, at which perceived size was equal to physical size, which she called the "true" size. Thus, in size judgments between comparison and standard stimuli, distance of the comparison stimulus was limited to the one believed to be "normal". Equation (4) does not have this limitation because it describes the ratio between two physical sizes, which are perceived as equally large. Both physical sizes can be positioned at any distance from the observer. Equation (4) is a special case of Equation (3). Equation (3), describing the general relationship between physical and perceived sizes, is also valid for conditions in which physical sizes are perceived differently from each other. For instance, it can be used in a task where the size of one object is judged as being twice the size of another.
Ooi and He's Model
Many studies have reported that judged distance is influenced by ground surface information (Bian, Braunstein, & Andersen, 2005; Feria, Braunstein, & Andersen, 2003; He & Ooi, 2000; He, Wu, Ooi, Yarbrough, & Wu, 2004; Madison, Thompson, Kersten, Shirley, & Smits, 2001; Meng & Sedgwick, 2001, 2002; Ni, Braunstein, & Andersen, 2004; Ooi, Wu, & He, 2001, 2006; Philbeck & Loomis, 1997; Sinai, Ooi, & He, 1998). Ooi and He (2007) took errors in perceived slant of the ground surface as the starting point for deriving a distance equation. Their equation, Equation (5) here, expresses perceived distance d in terms of physical distance D, the height H of the eye above the ground and the perceived slant η of the ground surface. The authors showed that their ground-based equation took the same form as Gilinsky's equation if the slant error was small. A difference between slants of planes in physical and visual space is a characteristic property of the perspective-space model if vanishing distances are finite (Erkelens, 2015c). Figure 5(a) shows computed grids in physical space, visual space and the picture plane according to the perspective-space model. The grid on the ground surface in physical space is slanted towards the observer in visual space if the distance of its vanishing point is finite. Perceived slant depends on vanishing distance. At one extreme, the visual grid coincides with the physical grid if the distance is infinite. At the other extreme, the visual grid becomes oriented orthogonal to the viewing direction if the distance is zero. The equation for perceived distance of objects on the ground plane derived by Ooi and He (2007) is almost identical to the one derived from the model of perspective space. The equation is derived here for the geometry presented by Ooi and He (2007), in which the observer views along the z-axis (Figure 5(b)). In the Appendix, the equation is derived for an observer fixating the object on the ground. The equations are identical in the two viewing conditions.

Figure 5. Relationship between perceived and physical distance on the ground plane. (a) Geometry of grids is according to the perspective-space model. The grid (blue) on the ground in physical space has an equivalent grid (red) in visual space, whose distance of the vanishing point is finite. The orange grid represents the observer's proximal image, that is, the projection of the physical grid onto a plane orthogonal to the viewing direction (dashed line). (b) The vertical cross-section along the z-axis of (a) shows the geometry for an observer at height H above the ground, judging the distance from his feet to an object on the ground. Geometry and symbols are identical to those used by Ooi and He (2007). Viewing direction is along the z-axis. The blue and red points indicate associated locations in physical space and visual space.
Because VP = H / sin η, Equation (6) can be rewritten as Equation (7), which is almost identical to Equation (5) proposed by Ooi and He (2007). The term D cos η is different; it replaces the term D in Ooi and He's equation. However, for slants η < 4°, as have been measured by Ooi and He (2007), the two terms differ by less than 0.5%.
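A quick numerical check of that bound is straightforward; the minimal sketch below only verifies that D cos η and D differ by less than 0.5% for slants up to 4°, using η for the perceived slant as above.

```python
import math

for slant_deg in (1, 2, 3, 4):
    slant = math.radians(slant_deg)
    rel_diff = 1.0 - math.cos(slant)   # relative difference between D and D*cos(slant)
    print(f"slant = {slant_deg} deg: relative difference = {100 * rel_diff:.3f}%")
# At 4 degrees the difference is about 0.24%, below the 0.5% bound stated in the text.
```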
Wagner's Models
Several investigators of distance judgments have proposed a power function for the relationship between perceived and physical distance (Baird & Wagner, 1991; Da Silva, 1985; Haber, 1985; Toye, 1986; Wagner, 1985, 2006; Wiest & Bell, 1985). The relationship can be written as d = αD^β. The power function has two parameters, namely the scaling factor α and the exponent β. Wagner (2006) fitted power functions to a great number of data from the literature. Across the board, the fits were very good. Wagner (2006) did not fit hyperbolic functions to the same set of data. Hyperbolic functions have only one parameter, the vanishing distance VP. To investigate differences between the two functions, hyperbolic functions were fitted to power functions presented by Wagner (2006). Fits were made for physical distances between 2 m and 50 m, distances relevant for the reported judgments (Figure 6(a)). Fits were made to the full range of power functions that described the experimental distance judgments. The area between the hyperbolic and power function fits, expressed as a percentage of the area between the power function fit and the x-axis, was used as a measure for the difference between the two fits. Differences were smaller than 2% for hyperbolic functions having VPs larger than 20 m. Differences were larger for hyperbolic functions with smaller VPs, mainly due to poor fits at the very short distances. Considering the variability in distance judgments, hyperbolic functions would have fit the experimental data about equally well as did the power functions. The hyperbolic and power functions become very different from each other at very far distances because perceived distance is bounded for hyperbolic functions but not for power functions. The fact that the perceived distance of extremely far objects, such as the moon, is not infinite implies that visual space has a bounded extent. This property of visual space argues against using power functions for describing perceived distances. An argument in favor of hyperbolic functions is that these functions follow directly from a model of visual space, namely perspective space. There is as yet no model of visual space that predicts power functions.
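The comparison described here can be reproduced in outline along the following lines; this is a sketch, and the power-function parameters are illustrative rather than values taken from a particular data set in Wagner (2006).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def power_fn(D, alpha, beta):
    return alpha * D**beta              # two-parameter power function

def hyperbolic_fn(D, VP):
    return D * VP / (D + VP)            # one-parameter hyperbolic (perspective) function

# Illustrative power-function parameters (not from a specific study).
alpha, beta = 1.1, 0.9
D = np.linspace(2.0, 50.0, 200)         # distance range relevant for the judgments
target = power_fn(D, alpha, beta)

# Fit the one-parameter hyperbolic function to the power function.
(VP_fit,), _ = curve_fit(hyperbolic_fn, D, target, p0=[30.0])

# Area between the two fits as a percentage of the area under the power function.
diff_area, _ = quad(lambda x: abs(power_fn(x, alpha, beta) - hyperbolic_fn(x, VP_fit)), 2.0, 50.0)
total_area, _ = quad(lambda x: power_fn(x, alpha, beta), 2.0, 50.0)
print(f"best-fitting VP = {VP_fit:.1f} m, area difference = {100 * diff_area / total_area:.2f}%")
```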
Another model of Wagner (1985, 2006) describes distances between visible stakes in three-dimensional space. Analysis of judgments of distances between stakes that were randomly placed in two- and three-dimensional spaces showed that physical distances were seen as more than twice as large in frontal orientations as they were in in-depth orientations (Wagner, 1985). This result led to the formulation of the vector contraction model of visual space. According to this model, the component of physical space frontal to the observer is unchanged in visual space, but the in-depth component is contracted. According to the model, frontally oriented sizes obey perfect size constancy. In-depth oriented sizes are contracted linearly as a function of distance. This implies that the model is not compatible with power and hyperbolic functions of distance. Figure 6(b) shows that linear functions only match power and hyperbolic functions for contraction factors near one and, thus, for visual spaces that closely match physical space.
Foley's Model
Foley, Ribeiro-Filho, and Da Silva (2004) also investigated perceived distances between stakes in three-dimensional space, which the authors called perceived extents. Foley et al. (2004) proposed a model in which perceived extent is proportional to the product of magnified image size and perceived distance (Figure 7(a)). The computations of extents were based on three equations with in total four free parameters. Foley et al. (2004) computed perceived extent as S_v = sqrt((R'_1)^2 + (R'_2)^2 − 2 R'_1 R'_2 cos θ''), where the magnified image size θ'' was obtained from the image size by adding a term containing the two free parameters Q and P, and R' = R / (F + G R). Perceived egocentric distance R' was thus obtained from physical distance R with the two parameters F and G. The expression for R' somewhat resembles that for perceived distance in perspective space as described by Equation (1). However, Equation (1) relates perceived to physical distance with the help of only one parameter. Foley et al. (2004) used two parameters (F and G) for relating perceived to physical distances and another two parameters (Q and P) for relating extents to image sizes. To compare fits to data of perceived distances and extents by Foley's model with fits by the model of perspective space, perceived positions (X', Z') according to the model of perspective space were computed from the physical positions (X, Z) of the stakes by applying Equations (1) and (2). Extents S_p and S_v were computed as Euclidean distances (Figure 7(b)). Foley et al. (2004) recorded the physical coordinates of the 14 stakes used in their experiments in a table as (X, Z) coordinates. In another table, the authors recorded all the measured median extents in four groups of 91 data points, namely, separately for binocular and monocular viewing and for viewing at far and near distances. The perceived egocentric distances of the stakes are shown as a function of their physical distances for binocular viewing in Figure 8(a) and for monocular viewing in Figure 8(b). The data were fit by Foley's distance function with the parameters F and G, and by the distance function of perspective space with the parameter VP. The values computed here for F and G and the root mean square errors (RMSE) are identical to those given by Foley et al. (2004). Both models provided good fits to the data. The slightly smaller root mean square errors for Foley's model were to be expected because that model contains two free parameters and the perspective-space model just one. Adjusted R² values, as a goodness-of-fit measure for the two nonlinear models, were hardly different from each other (Foley: 0.996 (binocular), 0.994 (monocular); perspective: 0.994 (binocular), 0.992 (monocular)).
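Such fits can be made in outline as follows; this is a sketch using made-up physical distances and "perceived" distances, since the stake coordinates and median judgments are given in tables of Foley et al. (2004) that are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def foley_distance(R, F, G):
    """Foley et al. (2004): perceived egocentric distance R' = R / (F + G*R)."""
    return R / (F + G * R)

def perspective_distance(R, VP):
    """Perspective space, Equation (1): R' = R * VP / (R + VP)."""
    return R * VP / (R + VP)

def adjusted_r2(y, y_hat, n_params):
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

# Made-up physical distances and noisy "perceived" distances, for illustration only.
R = np.array([2.0, 4.0, 7.0, 10.0, 15.0, 20.0, 25.0])
R_perc = perspective_distance(R, 59.0) + np.random.default_rng(0).normal(0, 0.1, R.size)

p_foley, _ = curve_fit(foley_distance, R, R_perc, p0=[1.0, 0.01])
p_persp, _ = curve_fit(perspective_distance, R, R_perc, p0=[50.0])

for name, fn, p in [("Foley (F, G)", foley_distance, p_foley),
                    ("perspective (VP)", perspective_distance, p_persp)]:
    fit = fn(R, *p)
    rmse = np.sqrt(np.mean((R_perc - fit) ** 2))
    print(name, np.round(p, 3), f"RMSE={rmse:.3f}",
          f"adj R2={adjusted_r2(R_perc, fit, len(p)):.4f}")
```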
Extents computed by the perspective space model were compared with reported extents for a range of distances of the vanishing point (VP). Figure 8(c) to (f) shows the results for values of VP that produced the lowest root mean square errors (RMSEs). On average, the RMSEs are slightly larger than those resulting from Foley's model (Foley et al. 2004). For perceived distance during binocular viewing, fits of the perspective model were best for VP having a distance of 59 m. Optimal VPs were about three times as large for perceived extents under the same viewing condition. During monocular viewing, best fits of the distance data were computed for a VP of 98 m. The optimal VP was about equal for near extents. For far extents, fits of the perspective model were somewhat poorer. Best fits were obtained for VPs indistinguishable from infinity.
Discussion
Perspective space is not a neurobiological model of visual space. It does not explain or even suggest how visual space is constructed from retinal images and neural processes. Instead, perspective space describes how geometric quantities in visual space relate to those in physical space and pictures. The model is based on two assumptions about visual space. One assumption is that visual space is Euclidean, implying that geodesics are straight lines. Looking at a straight railway line, road, or tube oriented in depth shows that preserved straightness is a reasonable assumption for far and central vision (Erkelens, 2015a, 2015b). Visual space being Euclidean also implies that line pieces aligned in physical space remain aligned in visual space (Cuijpers et al., 2002). The second assumption is that visual directions are identical in physical and visual space. Evidence for identical directions comes from aiming devices and eye movements. In the event of differences between physical and visual directions, these would occur as offsets or magnifications of the visual field relative to the physical field. Offsets are highly improbable because all kinds of aiming devices would be useless otherwise. Magnifications are improbable too because voluntary saccadic eye movements made between continuously visible targets are highly accurate relative to the required retinal angles (Collewijn, Erkelens, & Steinman, 1988; Erkelens, Steinman, & Collewijn, 1989). One could argue that eye movements and other motor actions operate on stimuli in physical space and do not affect visual space. However, convincing arguments in the empirical sciences support the view that perception of the external world is scaled by action-specific constraints (Barsalou, 2008; Bourgeois & Coello, 2012; Fajen, 2005; Gallese, 2007; Witt & Proffitt, 2008). Fitting the perspective-space model to perceived angles, distances, and sizes resulted in a wide range of inferred vanishing-point distances. Although data come from different studies and observers, it is hard to imagine that visual space defined by a single vanishing-point distance can describe all the judgments of individual observers. To illustrate this, data from individual observers showed that eye height affects judgments of distance (Ooi & He, 2007) and in-depth oriented angles (Erkelens, 2015a). Comparison of different studies suggests that distances of vanishing points also depend on the attribute that is judged. Distances of vanishing points computed from judgments of in-depth oriented angles are shorter than 6 m (Erkelens, 2015a). Distances computed from the parallel-alley data of Blumenfeld (1913) were even shorter than 1 m, probably because of the extremely small eye height at which the stimuli were viewed. Vanishing distances computed from distance judgments (Foley et al., 2004; Gilinsky, 1951) range from about 30 m to 100 m. Vanishing distances computed from size judgments made in the same studies range from about 100 m to infinity. The vanishing point is a theoretical attribute of perspective space. It is questionable whether observers can judge its distance. The wide range of inferred distances of vanishing points suggests that visual space is best described by a perspective space whose depth depends on condition and attribute. Apparently, observers are insensitive to the fact that different attributes of depth belong to different perspective spaces. The insensitivity is convincingly illustrated by a great number of perspective paintings.
Laymen as well as experts of perspective are not aware of inconsistencies between in-depth oriented angles and distances in many high-quality paintings of famous artists (Erkelens, 2016).
Comparison With Competing Models
The perspective-space model has been compared with five models of distance and size perception. The first model was the mathematical model of Gilinsky (1951). Although based on different principles, the equations for distance and size derived by Gilinsky (1951) are equivalent to those given by the perspective-space model. An advantage of the perspective-space model is its wider applicability and greater simplicity, giving analytical solutions for perceived distances, sizes, and angles. The second model was that of Ooi and He (2007), who proposed their model to describe a particular phenomenon, namely, foreshortening of distance on the ground plane. Ooi and He's model describes perceived distances of objects on the ground relative to the feet of the observer. Computations of perceived distance require estimates of eye height and another perceptual parameter, namely, the perceived inclination of the ground plane. The almost identical equation given by the perspective-space model shows that the experimental results of Ooi and He (2007) may reflect perceived and physical distances of objects (Z_v and Z_p in Figure 9) relative to the viewing point of the observer, that is, the eye or the head. Differences between both models are too small to decide which model best describes the data of Ooi and He (2007). Li and Durgin (2012) proposed an alternative hypothesis, the angular expansion hypothesis. The hypothesis, assuming exaggerations in visual angle, was also used to describe perceived foreshortening of distance on the ground plane measured by Ooi and He (2007). The hypothesis was compared with the hypothesis of Ooi and He (2007), which they called the intrinsic bias hypothesis. Models based on each of the two hypotheses described the data equally well. Li and Durgin (2012), however, claimed more general usefulness for their hypothesis. The current computations show that the models of perspective space, Ooi and He, and Li and Durgin can be regarded as equivalent models for distance perception of objects on the ground plane. The third model was the power-function model for perceived distance proposed by Baird and Wagner (1991) and used in many studies. Differences between power functions and the hyperbolic functions of the perspective-space model were very small over the entire range of distances in which judgments have been made. It is reasonable to conclude that both functions are equivalent in describing perceived distance. The fourth model was the vector-contraction model of visual space (Wagner, 1985). This model was developed to describe judgments of distances between randomly positioned stakes. Comparison with hyperbolic and power functions showed that extending the model to perceived distances along visual directions will give results that are incompatible with all the other models. The conclusion is that the contraction model of visual space may fit a particular purpose but cannot be a generic model of visual space. The fifth model was Foley's model. Foley et al. (2004) proposed a model whose principal assumption was that, in the computation of perceived extent, the physical angle signal undergoes a magnifying transformation (Figure 7(a)). Figure 8 shows that the results of Foley et al. (2004) for egocentric distance and exocentric extent are described by the perspective-space model, distance and extent requiring different distances of the vanishing point. The models of Foley and perspective space have in common that distances and extents are not described by the same parameters.
The perspective-space model is simpler and more generic in that it includes the description of perceived angles.
The Role of Instructions in Distance and Size Perception
In a previous study, I argued that we have representations of both visual and physical space at our disposal (Erkelens, 2015a). For example, we see on the one hand that a road narrows in front of us but on the other hand we are confident that it does not. The same holds for size. We see that an approaching car becomes larger but at the same time are aware that its size stays the same. Our representation of physical space does not result from vision alone but also from other senses and motor interaction with the physical environment. For a yet unknown reason, our representations of visual and physical space do not merge into a single representation. The different representations give human beings the possibility to answer questions about spatial relationships in several ways. The hypothesis deviates from the view of many researchers who assumed that spatial judgments made under different instructions reflect properties of a single space. Famous are the parallel and distance alleys, initially measured by Hillebrand (1902) and Blumenfeld (1913). The alleys led to the concept of curved visual space (Luneburg, 1947, 1950). Results were confirmed and extended by many studies (Battro, di Pierro Netto, & Rozenstraten, 1976; Hardy, Rand, & Rittler, 1951; Indow, Inoue, & Matsushima, 1962; Luneburg, 1950; Roberts & Suppes, 1967; Shipley, 1957; Yamazaki, 1987; Zage, 1980; Zajaczkowska, 1956). The studies concluded that visual space is curved, although a few authors challenged its hyperbolic nature. Later studies reported conflicting results but persevered in constructing curved visual spaces (Cuijpers, Kappers, & Koenderink, 2001; Cuijpers et al., 2000, 2002; Higashiyama, 1984; Indow & Watanabe, 1984a, 1984b; Koenderink, van Doorn, Kappers, Doumen, & Todd, 2008; Koenderink, van Doorn, Kappers, & Todd, 2002; Koenderink, van Doorn, & Lappin, 2000; Musatov, 1976; Schoumans, Kappers, & Koenderink, 2000; Todd, Oomes, Koenderink, & Kappers, 2001; Wagner, 1985). The concept of a curved visual space results from the integration of parallel and distance alleys in one space. The integration may not be allowed because parallelism and equal size may concern different spaces. The parallel alleys are parallel in visual space, not in physical space. The distance alleys are based on the size-distance invariance hypothesis, a mechanism that causes equally large objects positioned at different distances in physical space (Figure 1(c)) to be perceived as equally large, although the objects are of unequal size in visual space (Figure 1(d)). Thus, parallel alleys reflect a special condition in visual space and distance alleys may reflect a special condition in physical space. Carlson (1960) identified initially three and later four (Carlson, 1962) classes of instruction that affect size judgments considerably. The instructions were called objective, perspective, apparent, and projective. Effects of instruction were confirmed in other studies (Epstein, 1963; Gilinsky, 1955; Leibowitz & Harvey, 1967, 1969). Perspective and objective instructions cause overestimation of size with distance. Overestimation may reflect overcompensation of differences between representations of visual and physical space. Negative distances of the vanishing point simulate such overestimations in the perspective-space model (Figure 4). Apparent instructions caused underestimation if the instruction was given first and resulted in almost perfect size estimation if the instruction was given after judgments under perspective and objective instructions (Carlson, 1962). The apparent instructions may cause size judgments to occur in representations of either visual or physical space. The projective instruction causes strong underconstancy, where the size judgments seem governed by retinal size. Similar judgments occur under reduced cue conditions (Thouless, 1931; Holway & Boring, 1941). It may indicate that observers, at least to a certain extent, have access to their retinal images. Retinal access is associated with a type of visual perception called proximal perception (Todorović, 2002). Proximal perception has long been controversial (Hastorf, 1950; Hochberg & Hochberg, 1952; Ittelson, 1951) and still is today. Recent studies question proximal perception in laymen as well as artists (Perdreaux & Cavanagh, 2011, 2013).

Figure 9. Relationship between perceived and physical distance. The observer at height H above the ground judges the distance from his feet to an object on the ground plane. Symbols are identical to those used in Figure 5(b). Viewing is in the direction of the object (blue) on the ground.
Conclusion
Perspective space is a simple, intuitive, and powerful model of visual space. It is simple because a single parameter defines its geometry. It is intuitive because perspective space is a trade-off between physical space and a two-dimensional projection of physical space representing the retinal image. It is powerful because it describes experimental results, explains visual phenomena and unifies a number of models of distance and size perception.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Derivation of Equations (1) and (3)

In Figure 3(a), the line (black, dashed) connecting S_p to the observer is described by x_1 = (S_p / Z_p) z. The perspective line (red, dashed) is described by x_2 = S_p − (S_p / VP) z.
At the intersection, x_1 = x_2 = S_v and z = Z_v, from which it follows that (S_p / Z_p) Z_v = S_p − (S_p / VP) Z_v, which can be simplified to Z_v / Z_p + Z_v / VP = 1. Rearranging the equation gives Equation (1). | 2018-04-03T01:04:48.214Z | 2017-11-01T00:00:00.000 | {
"year": 2017,
"sha1": "0577dc69b802c22fc081cfef42930e80c451d251",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2041669517735541",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0577dc69b802c22fc081cfef42930e80c451d251",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
46995124 | pes2o/s2orc | v3-fos-license | A Hybrid Genetic Algorithm for Multi-Trip Green Capacitated Arc Routing Problem in the Scope of Urban Services
Greenhouse gases (GHG) have been the main cause of global warming during the past decades. On the other hand, establishing a well-structured transportation system yields the least cost and pollution. This paper addresses a novel model for the multi-trip Green Capacitated Arc Routing Problem (G-CARP) with the aim of minimizing total cost, including the cost of generation and emission of greenhouse gases, the cost of vehicle usage and the routing cost. The cost of generation and emission of greenhouse gases is based on the calculation of the amount of carbon dioxide emitted from vehicles, which depends on such factors as the vehicle speed, weather conditions, load on the vehicle and traveled distance. The main applications of this problem are in municipalities for urban waste collection, road surface marking and so forth. Due to the NP-hardness of the problem, a Hybrid Genetic Algorithm (HGA) is developed, wherein a heuristic and a simulated annealing algorithm are applied to generate initial solutions and a Genetic Algorithm (GA) is then used to generate the best possible solution. The obtained numerical results indicate that the proposed algorithm presents desirable performance within a suitable computational run time. Finally, a sensitivity analysis is implemented on the maximum available time of the vehicles in order to determine the optimal policy.
Introduction
Nowadays, the generation of different types of waste and the resulting social, economic and environmental problems have made the collection, transport, processing and disposal of such waste a major challenge for urban service management. Since the main cost of waste management is related to transportation [1], evaluating and optimizing this system plays an important role in reducing the imposed cost and solving the problems of urban service management.
Determining optimal routes is one of the vital operational decisions in urban service organizations, as it reduces transportation costs and improves service quality [2-4]. Transportation also imposes irreparable impacts on the environment. Consumption of resources, land use, toxic effects on the ecosystem and human beings, noise pollution and the emission of greenhouse gases and contaminants are examples of these hazardous impacts. Besides these negative impacts, the emission of greenhouse gases is directly related to people's health and indirectly associated with the destruction of the ozone layer. Attention to this topic is necessary because the greenhouse gases emitted by the transportation sector cause a major portion of pollution in countries around the world [5]. In other words, climate change, particularly global warming, which is largely driven by greenhouse gases (GHG), has attracted a lot of attention around the world in recent years. Carbon dioxide (CO2) is a major part of GHG. According to the Baidu Index [6], CO2 concentration has increased rapidly in recent years, and this trend continues. Therefore, minimizing fossil fuel consumption and CO2 emissions from vehicle transportation by optimizing transportation operations is a very helpful way of controlling global warming [7].
As such, increased concerns about the reduction of such hazardous impacts indicate the necessity of implementing a well-planned program for the transportation sector, for which green routing models based on consumed fuel and air pollution can be helpful.
There are two main categories of routing problems related to urban waste collection [8]. In the first, a set of given nodes is distributed throughout the urban graph network and the objective is to find the best routes that traverse all the nodes. The best-known problem in this category is the Vehicle Routing Problem (VRP). In the second, there are some predefined edges/arcs in the urban graph network and the objective is to find the best routes that traverse all the edges/arcs with positive demand. In fact, the edges/arcs denote the streets or alleys of the urban area along which the waste is distributed. The most applicable problem in the second category is the Capacitated Arc Routing Problem (CARP).
In this research, the problem is modeled as a CARP on an undirected graph and solved accordingly. The reported results in this area show that several real-world activities can be modeled as a CARP, foremost among them waste collection, street sweeping, snow removal and mail collection or delivery. The CARP is a robust problem model; first introduced by Golden and Wong [9], it has been studied by many researchers. Dror [10] presented most applications of CARP variants and of related solution methods. For a further survey, the reader can also see the research done by Assad and Golden [11].
Even though the CARP is a well-known concept in operational research, only limited research and extensions have been studied in this respect. This important routing problem was first introduced by Golden and Wong [9]. The CARP refers to the set of problems wherein a fleet of vehicles originally located in one or more depots delivers services on road networks; the main examples of these services include municipal waste collection, snow removal, spreading salt on snow and road surveying. The roads are represented by edges or arcs across these networks. Each edge contains two arcs with different directions. The services should be delivered in such a way as to minimize the associated cost. Starting from the associated central depot, a vehicle delivers the planned service and then returns to the depot. Each vehicle has a certain capacity and all routes both originate from and terminate at the origin (central depot).
Most of the research works performed in this respect have attempted to achieve economic objectives by focusing on minimizing the traveled distance, the required time or the number of vehicles, while remarkably failing to take environmental objectives and pollution reduction into consideration. The crucial aspects of the present research are therefore listed below:
- Environmental involvement
- Economic transportation system
- Real-world assumptions
- Mathematical model limitations
- Efficient solution methods
We survey the literature in three parts: solution methods and possible extensions of the problem, green aspects of the problem with different existing solution methods, and some novel studies on vehicular technologies and related solution methodologies applicable to routing problems. In the first part, some important research is investigated in terms of different solution methods and different applications of the CARP. Ghiani et al. [12] solved the CARP with intermediate facilities (CARP-IF) by considering capacity and distance constraints using a new Ant Colony Optimization (ACO) in which an auxiliary graph is used. Experimental results indicated that their proposed algorithm was able to make substantial improvements over the known heuristics. Li et al. [13] solved a waste collection problem in Porto Alegre, Brazil, which has a population of over 1.3 million people and consists of 150 districts. They made a truck scheduling operational plan with the objective of minimizing operating costs and fixed costs of trucks. Furthermore, they proposed a heuristic approach to balance the number of trips between facilities. Computational results indicated that they could reduce the average number of required vehicles and the average traveled distance by 27.21% and 25.24%, respectively.
Laporte et al. [14] presented a CARP with stochastic demands, which can cause route failures when the vehicle capacity is exceeded. They solved the problem with a neighborhood search heuristic algorithm. Khosravi et al. [15] presented a periodic CARP (PCARP) with mobile disposal sites specific to urban waste collection. They tested two versions of the Simulated Annealing (SA) algorithm to solve the problem. Their proposed algorithm showed appropriate performance in comparison with CPLEX.
Babaee Tirkolaee et al. [16] investigated a novel mathematical model for the robust CARP. The objective function of their proposed model aimed to minimize the traversed distance considering the demand uncertainty of the edges. To solve the problem, they developed a hybrid metaheuristic algorithm based on an SA algorithm and a heuristic algorithm.
Recently, Tirkolaee et al. [1] developed a Mixed-Integer Linear Programming (MILP) model for the multi-trip CARP in order to minimize total cost in the scope of urban waste collection. In the proposed model, depots and disposal facilities were located in different places, specific to urban waste collection. They proposed a hybrid algorithm using the Taguchi parameter design method based on an Improved Max-Min Ant System (IMMAS) to solve well-known test problems and large-sized instances. They could demonstrate the high efficiency of their proposed algorithm. Hannan et al. [17] proposed a Particle Swarm Optimization (PSO) algorithm to solve a Capacitated VRP (CVRP) with the aim of finding the best waste collection scheme and optimal routes. They could prove the efficiency of their algorithm on different datasets.
Rey et al. [18] developed a hybrid solution method based on ACO heuristics, route-first cluster-second methods and local search improvements to obtain high-quality solutions for the VRP in comparison with other metaheuristic solvers.
Tirkolaee et al. [19] proposed a novel mathematical model for the robust PCARP considering the working time of the vehicles. They developed a hybrid SA algorithm in order to solve the problem approximately. The obtained results showed that their proposed algorithm could generate appropriate robust solutions.
In the second part of the literature, the Green VRP (G-VRP) and its different applications, which deal with optimizing the energy consumption of transportation, are investigated. The G-VRP has mainly been studied since 2006 [19]. Lin et al. [20] presented a review of the field of G-VRP and its past and future trends. Miden et al. [21] investigated a time window-constrained VRP wherein speed was dependent on travel time. They further proposed a heuristic for solving the problem and achieved a 7% saving in CO2 emissions in a case study in England.
Erdoğan and Miller-Hooks [22] formulated a G-VRP and developed solution methods that consider fuel-powered vehicles in order to cope with the limited refueling infrastructure in the problem. They could generate acceptable solutions using the modified Clarke and Wright Savings heuristic and the Density-Based Clustering (DBC) algorithm. Kopfer et al. [23] analyzed the different costs incurred through pollution and environmental impacts. They presented a mathematical model and evaluated it with the CPLEX solver. Tavares et al. [24] studied the effects of road slope and vehicle load on fuel consumption in a waste collection problem; however, they considered only three levels of load: half load (during waste collection), full load (traveling to the disposal site) and no load (when returning to the depot). In their research, the relationship between fuel consumption rate and load was not considered. However, it is obvious that when a vehicle serves a node, it loses some of its load, which translates into lower fuel consumption along the rest of the route. Therefore, it is necessary to consider load-dependent fuel consumption to calculate the total cost more accurately.
Mirmohammadi et al. [5] presented a multi-trip time-dependent periodic G-VRP considering time windows for serving the customers, under the assumption that urban traffic would disrupt timely services. The objective function of the proposed problem was to minimize the total amount of carbon dioxide emissions produced by the vehicles, the earliness and lateness penalty costs and the costs of the used vehicles. They used the CPLEX solver to solve the problem exactly.
The stochastic G-VRP has been investigated in some research in which parameters such as vehicle speed and vehicle breakdown rate are considered stochastic [25,26]. Recently, Poonthalir and Nadarajan [27] presented a bi-objective G-VRP considering various speeds and fuel efficiency. They minimized the traveling cost and fuel consumption using goal programming and Particle Swarm Optimization (PSO). As a recently applied high-efficiency solution method in this field of study, Kulkarni et al. [28] proposed a novel two-stage heuristic based on an inventory formulation for the recreational Vehicle Scheduling Problem (VSP).
As the last part of the literature, Wang et al. [29-31] proposed some mobile-sink-based routing methods for the routing process, which can largely improve network performance measures such as energy consumption and network lifetime. On the other hand, there are some novel technologies that would be applicable to the problem, such as the conversion of CO2 into clean fuels, autonomous vehicle control and so on [32-37].
Uebel et al. [35] studied a novel approach that combines discrete state-space Dynamic Programming and Pontryagin's Maximum Principle for the online optimal control of hybrid electric vehicles (HEV). Besides electric energy storage, the engine state and gear, kinetic energy and travel time are considered states in their formulation. They could demonstrate the high quality of the generated solutions in comparison with a benchmark method. Woźniak and Polap [36] developed a hybrid neuro-heuristic methodology for the intelligent simulation and control of dynamic systems over a time interval, specific to the model of an electric drive engine vehicle.
Alcala et al. [37] presented the control of an autonomous vehicle using a Lyapunov-based technique with an LQR-LMI tuning. They could apply a non-linear control strategy based on Lyapunov theory for solving the autonomous guidance control problem.
After reviewing the literature from different aspects, it can be seen that the research involves a variety of solution methods, each with its own advantages. Therefore, in this research, two of the most widely applied metaheuristic algorithms, namely SA and GA, are combined in order to keep the advantages of each one. Additionally, the applied local search procedures are defined innovatively in line with the structure of the problem's solution space.
Accordingly, this research aims to present a novel model for the multi-trip CARP of urban waste collection, which not only brings about economic benefits (minimizing the fixed cost of the used vehicles) but also reduces the adverse impact of CO2 emissions on air pollution, with benefits for the environment and people's health. Furthermore, a Hybrid Genetic Algorithm (HGA) is developed to solve the problem efficiently.
Therefore, the main novelties of the present paper are briefly as follows: (1) presentation of the multi-trip Green Capacitated Arc Routing Problem (G-CARP), which, to the best of our knowledge, has not yet been introduced in the literature; (2) since this paper is related to municipal solid waste management, where loading/unloading sites and vehicle depots are commonly located in different places, two separate locations are considered for the depot and the unloading site in the model to make it closer to the real world; and (3) development of a customized, efficient solution method.
The remainder of the paper is organized as follows: Section 2 describes the distance-oriented green capacitated arc routing problem studied in this paper. Section 3 presents the proposed algorithm. Section 4 discusses the computational results. Finally, the concluding remarks and outlook of the research are presented in Section 5.
Distance-Oriented Green Capacitated Arc Routing Problem
The assessment of fuel consumption and CO2 emissions for vehicles requires complicated computations that only yield estimates and approximations, owing to the difficulty of determining the values of some fundamental variables such as road slope, driving mode, weather conditions, accidents and so on [38].
Investigations of CO2 emissions are based on either fuel consumption or traveled distance. Based on the Greenhouse Gas Protocol initiative [39], Table 1 lists the criteria required to determine the feasibility of each of these methods [39]. On the one hand, in the fuel-oriented method, fuel consumption is multiplied by the CO2 emission factor for the fuel type. On the other hand, in the distance-oriented method, CO2 emissions can be calculated using distance-oriented emission factors. A fuel-oriented emission factor is developed based on fuel heat values, the fraction of fuel carbon that reacts with oxygen and the carbon content coefficient. The distance-oriented method can be used when data on the distance traveled by the vehicle are available. The decision regarding which of these two methods to use depends on data accessibility. It is clear that, for a theoretical formulation of this problem, the distance-oriented method (wherein CO2 emissions are calculated based on traveled distance and distance-based emission factors) is easier to apply. This requires two main steps: (1) collecting data on the distance traveled by a given vehicle and fuel type (e.g., km or ton-km); and (2) converting the distance estimations to CO2 emissions by multiplying the results from step 1 by the distance-based emission factors.
In addition, the CO2 emission calculations are based on the assumption that the computation depends mainly on two factors: the type of the vehicle and the type and quantity of the consumed fuel. This means that the emission is a function of two factors: the transportation type (the vehicle and its load) and the traveled distance [40]. Therefore, CO2 emission estimates differ depending on the vehicle mass and the transported load, which is an important parameter [41].
As presented in Table 2, the emission estimation goes through the two main steps mentioned earlier. The first step includes estimating a fuel conversion factor using the chemical reaction of fuel combustion (C13H28 + 20 O2 → 13 CO2 + 14 H2O) [42]. Given the molecular masses of diesel (C13H28) and CO2 (184 and 44, respectively) and knowing that 13 CO2 molecules are produced for each diesel molecule, one can simply find that for each kg of diesel, 13 × 44/184 = 3.11 kg of CO2 is produced. Then, using the diesel density (0.84 kg/L), one can calculate the CO2 produced per liter of consumed diesel (3.11 × 0.84 = 2.61 kg). This estimated theoretical conversion factor is very close to that obtained experimentally by Defra (2.63 kg) [43], which provides conversion factors for greenhouse gases so that existing data resources can be converted to equivalent CO2 emission data. Subsequently, having the fuel conversion factor (2.61 kg of CO2 per liter of diesel), the second step is to estimate the emission factor (ε). In this step, a function incorporating data on average consumption depending on load is defined. Table 2 shows the estimated value of this factor for several different capacity scenarios for a truck with a capacity of 10 tons [39]. Accordingly, the presented information is generalized to our problem by considering the impact of the CO2 emission and conversion factors.
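As a concrete illustration of the two steps, the sketch below reproduces the fuel-conversion arithmetic from the text and turns it into a distance-based emission estimate; the load-dependent fuel-consumption value is a placeholder, since the figures of Table 2 are not reproduced here.

```python
# Step 1: fuel conversion factor from the combustion reaction
# C13H28 + 20 O2 -> 13 CO2 + 14 H2O
M_DIESEL = 184.0          # molecular mass of C13H28 (g/mol)
M_CO2 = 44.0              # molecular mass of CO2 (g/mol)
DIESEL_DENSITY = 0.84     # kg per litre

kg_co2_per_kg_diesel = 13 * M_CO2 / M_DIESEL                # ~3.11 kg CO2 per kg diesel
kg_co2_per_litre = kg_co2_per_kg_diesel * DIESEL_DENSITY    # ~2.61 kg CO2 per litre

# Step 2: distance-based emission factor epsilon (kg CO2 per km), obtained from a
# load-dependent fuel-consumption estimate; the consumption value is hypothetical.
def emission_factor(litres_per_km):
    return litres_per_km * kg_co2_per_litre

fuel_use = 0.35                      # hypothetical L/km for a partly loaded truck
print(round(kg_co2_per_litre, 2), "kg CO2 per litre of diesel")
print(round(emission_factor(fuel_use), 3), "kg CO2 per km at", fuel_use, "L/km")
```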
Mathematical Model of G-CARP in the Scope of Municipal Services
The main difference between the VRP and the CARP is that the CARP consists of determining optimal routes that traverse all the edges with positive demands (required edges), whereas the VRP consists of finding optimal routes that traverse all the nodes defined in a graph network [1].
Consider a graph G = (V, E), where V is the set of all nodes constituting the edges and E is the set of all edges defined in the network. The proposed G-CARP involves determining the optimal number of vehicles and the optimal routes for each vehicle so as to minimize an overall objective function involving the cost of using the vehicles and the cost of the total CO2 emission throughout the network, which is directly related to the total traveled distance. The vehicles are originally located at the depot; they then start traveling (their first trip) to serve the required arcs and, once their capacity limit is reached, proceed to the unloading site to empty their loads. If possible, they continue traveling (their second trip) from the unloading site to the operational area. Whether a vehicle performs more than one trip depends directly on the capacity constraint and the maximum available time of the vehicles. When the remaining time of a vehicle becomes zero, it must return to the unloading site, where it is unloaded before returning to the depot.
Node 1 denotes the depot and node n denotes the unloading site in the network graph.
The main steps of the research and of modeling the problem are described in Figure 1 before the model is presented.
Sets
- V: the set of the network nodes
- K: the set of the available vehicles
- P_k: the set containing the p-th trips of the k-th vehicle
- E: the set of all edges defined across the network
- E_R: the set of all required edges defined in the network
- S: an optional subset of the edges defined in the network, together with the set of nodes defined by S
Parameters
- t_ij: the time it takes to traverse the edge (i, j), where (i, j) ∈ E
- d_ij: the demand of the edge (i, j), where (i, j) ∈ E
- c_ij: the distance (length) of the edge (i, j), where (i, j) ∈ E
- e_ij: the CO2 emission along the edge (i, j), where (i, j) ∈ E

Decision variables

The objective function consists of two parts. The first part includes the minimization of the total CO2 emission cost, while the second part attempts to minimize the cost of using (renting) the k-th vehicle. Constraints (5) denote the flow balance for each vehicle, that is, they control the input to and output from each intermediate node constituting two arcs. Constraint (6) ensures that each required edge is served by one of its two constituting arcs. Constraint (7) indicates the capacity constraint of the k-th vehicle. Constraint (8) expresses that a required edge is served by a vehicle traveling through it (while a vehicle may also travel through an edge without serving it). Constraint (9) stipulates that the k-th vehicle can be used only when the associated cost is paid. Constraint (10) represents the maximum time limitation considered for each vehicle. Constraints (11) and (12) ensure that the first trip of each vehicle starts at the depot and ends at the unloading site. Constraints (13) and (14) make sure that, from the second trip onward (if any), the trips start and end at the unloading site. Constraint (15) ensures that no sub-tour is constructed.
The total CO2 emission is based on the environmental matrix (e), which is calculated from the matrix containing the distances between each pair of nodes constituting an edge (i, j) and the respective emission factor (ε).
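The construction of the environmental matrix and the CO2-emission part of the objective can be sketched as follows; the distance matrix, emission factor and cost per kilogram of CO2 below are hypothetical values chosen for illustration only.

```python
import numpy as np

c = np.array([[0.0, 2.0, 5.0],         # hypothetical distance matrix c_ij (km)
              [2.0, 0.0, 3.0],
              [5.0, 3.0, 0.0]])
epsilon = 0.9                           # distance-based emission factor (kg CO2/km), hypothetical
cost_per_kg_co2 = 0.05                  # hypothetical monetary cost per kg of CO2

e = c * epsilon                         # environmental matrix e_ij = c_ij * epsilon

# Emission cost of a set of traversed arcs, e.g. one trip 0 -> 1 -> 2 -> 0.
trip = [(0, 1), (1, 2), (2, 0)]
trip_emission = sum(e[i, j] for i, j in trip)
print(f"trip emission: {trip_emission:.1f} kg CO2, "
      f"cost: {cost_per_kg_co2 * trip_emission:.3f}")
```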
In order to gain a better understanding, an example with four required edges and two available vehicles is demonstrated in Figure 2. Nodes 1 and 8 represent the depot and the unloading site, respectively. In this figure, the numbers indicated on each edge refer to the demand and length of the edge. It is assumed that the lengths of the edges are equal to their traversing times. The required edges are marked by solid lines (e.g., the edge (2, 3)). Vehicle 1 has a capacity of 40 units and a usage cost of $4000. Vehicle 2 has a capacity of 50 units and a usage cost of $5000. The maximum available time for each vehicle is equal to 200 units. In this example, vehicle 1 is used and constructs two trips in order to serve all the required edges. By solving the final proposed model with appropriate input parameter values and considering time periods, the obtained results will be reliable and applicable in an urban area and would lead to considerable cost savings in a real-time application.
Limitations of the Adopted Model
The applicable limitations of the proposed model are listed below: (1) It is applicable only for a specific time period and cannot include a planning horizon. Obviously, the demand of different periods may differ, which would change the obtained results. (2) The exact fuel consumption rate is not accessible, owing to the difficulty of computing the exact effects of road slope, temperature conditions, load volume and so forth.
Hybrid Genetic Algorithm (HGA)
Since the CARP is an NP-hard problem [9] and the proposed problem, as an extended CARP, is highly complex, exact methods can solve only small instances. Therefore, an HGA is proposed to solve medium- and large-sized instances approximately. The proposed HGA is based on the Simulated Annealing (SA) algorithm and the Genetic Algorithm (GA).
The structure of the proposed HGA is depicted in Figure 3. As mentioned, many metaheuristics have been applied to optimization problems similar to the one studied here [44-49]. Since the applicability and robustness of GAs are well established and they have produced good solutions for CARPs in the literature [50-54], GA is chosen as the main algorithm for the current research.
In the following, the mechanism of the proposed algorithm is described.
In order to generate initial solutions for the HGA, a heuristic initial solution generator is implemented. The HGA is composed of three stages. In the first stage, a random solution is generated. In the second stage, the obtained solution is improved by the SA algorithm. In the third stage, the GA is run with the output of the SA. In the HGA, a solution is represented by a chromosome as shown in Table 3.
Solution Representation
In the proposed algorithm, a matrix is used to represent the traversal sequence of the arcs together with the related vehicle and trip numbers; two separators equal to zero are placed between the vehicle number and the trip number and between the trip number and the constructed route. In the example presented in Table 3, two vehicles are activated: the first vehicle has two trips and the second one has only one trip, and together they cover all the arcs with demands.
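A minimal sketch of how such a chromosome could be assembled is given below; since the exact layout of Table 3 is not reproduced here, the concatenation of segments and the variable names are assumptions made for illustration only.

def encode_solution(trips):
    """Encode a solution as a flat chromosome.

    `trips` maps (vehicle_number, trip_number) -> ordered list of visited nodes.
    Following the description above, each segment is written as
    [vehicle, 0, trip, 0, node_1, node_2, ...]; simply concatenating the segments
    is an assumption about details that Table 3 would normally show.
    """
    chromosome = []
    for (vehicle, trip), route in sorted(trips.items()):
        chromosome += [vehicle, 0, trip, 0, *route]
    return chromosome

# Example: vehicle 1 performs two trips and vehicle 2 one trip (node 1 = depot, node 8 = unloading site).
example = {
    (1, 1): [1, 2, 3, 8],
    (1, 2): [8, 4, 5, 8],
    (2, 1): [1, 6, 7, 8],
}
print(encode_solution(example))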
Initial Solution Heuristic Algorithm
In order to generate initial solutions, a constructive heuristic algorithm is employed. The steps of this algorithm are as follows (a code sketch of the construction is given after the list):
1. Select a vehicle randomly. The first trip starts at the depot.
2. Among all of the edges starting at the depot, consider the β edges with the shortest distance to the depot and select one of them randomly. Go to Step 3.
3. Once arrived at the new vertex at the end of the selected edge, go to Step 4 if there exists any required edge; otherwise, go to Step 5.
4. Among all of the required edges, consider the β edges with the highest demands that can be selected given the vehicle capacity and maximum available time constraints. Select one of the β edges and go to Step 3. If no required edge meets both constraints, go to Step 5 (for an edge to meet the maximum available time constraint, the vehicle must be able to travel through the edge and then proceed to the unloading site within the time interval specified for that vehicle).
5. Among the non-required edges at the considered vertex, consider the β edges with the smallest lengths that meet the time constraint and select one of them randomly. If no such edge exists, go to the unloading site and then proceed to Step 6.
6. If all of the required edges are served, go to Step 7; otherwise, update the vehicle capacity. If the vehicle's remaining available time is at least enough to go from the unloading site through the shortest edge and then return to the unloading site, go to Step 3; otherwise, select the next vehicle and go to Step 2.
7. Terminate the algorithm.
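A simplified Python sketch of this construction follows. The β-restricted candidate lists follow Steps 2, 4 and 5, but several details are assumptions made for illustration: candidate edges are restricted to edges incident to the current node, the time check only looks one edge ahead rather than verifying the return to the unloading site, and the data structures (edge dictionary, vehicle list) are not taken from the paper.

import random

def build_initial_solution(edges, required, depot, unload, vehicles, beta=3, seed=0):
    """Greedy randomised construction of an initial solution (simplified sketch).

    edges    : dict mapping frozenset({i, j}) -> (length, demand); edge lengths are
               also used as traversal times, as in the example of Figure 2.
    required : set of frozenset({i, j}) edges that must be served.
    vehicles : list of (capacity, max_time) pairs, one per available vehicle.
    Returns the constructed trips as (vehicle_index, node_sequence) pairs and the set
    of required edges that could not be served (empty if the construction succeeded).
    """
    rng = random.Random(seed)
    unserved = set(required)
    trips = []

    for v_idx, (capacity, max_time) in enumerate(vehicles):
        time_left, load_left = max_time, capacity
        start = depot                              # Step 1: the first trip starts at the depot.
        while unserved and time_left > 0:
            route, node = [start], start
            while True:
                incident = [e for e in edges if node in e]
                # Step 4: prefer up to beta incident required edges with the highest
                # demand that still fit the remaining capacity and time.
                req = [e for e in incident if e in unserved
                       and edges[e][1] <= load_left and edges[e][0] <= time_left]
                if req:
                    edge = rng.choice(sorted(req, key=lambda e: -edges[e][1])[:beta])
                    unserved.discard(edge)
                    load_left -= edges[edge][1]
                else:
                    # Step 5: otherwise move along one of the beta shortest traversable edges.
                    free = [e for e in incident
                            if e not in unserved and edges[e][0] <= time_left]
                    if not free or not unserved:
                        break                      # close the trip at the unloading site
                    edge = rng.choice(sorted(free, key=lambda e: edges[e][0])[:beta])
                time_left -= edges[edge][0]
                node = next(n for n in edge if n != node)
                route.append(node)
            if len(route) == 1:                    # the vehicle cannot move any further
                break
            route.append(unload)
            trips.append((v_idx, route))
            start, load_left = unload, capacity    # later trips start at the unloading site
        if not unserved:
            break                                  # Step 7: all required edges are served.
    return trips, unserved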
Improving the Solution Using SA Algorithm
The SA algorithm is applied to improve the solutions, and each initial solution is improved separately with this algorithm. SA is highly efficient for problems with non-convex or discrete solution spaces [55]. The initial parameters of the algorithm, which are set before the search starts, are the number of iterations at each temperature (M), the initial temperature (TE_0), the temperature reduction rate (α), the final temperature (TE_end) and Boltzmann's constant (Kc). A neighbourhood of the current solution is then generated. If the objective value of the neighbouring solution is better than that of the current solution, the neighbour replaces it; otherwise, a random number between zero and one is generated and compared with the acceptance value defined by the algorithm [45]. If the random number is smaller than this value, the worse solution is accepted. A number of iterations are performed at each temperature before moving to a lower temperature, and the stopping criterion is reaching the final temperature. In the present paper, the parameter values are set by trial and error, wherein three example problems are solved under different scenarios to find the best values of the algorithm parameters:

M = 5, α = 0.98, TE_0 = 200, TE_end = 1, Kc = 0.8 (18)

The local search methods applied in this algorithm are as follows (a sketch of the SA skeleton is given after the list):
1. Swap a trip of one vehicle with a trip of another vehicle at random. A solution is a candidate for selection only if it is feasible.
2. Select two trips at random. If the two selected trips share one or more common edges, select one of these edges randomly and split both trips at that edge. The first part of the first trip is combined with the second part of the second trip, and the first part of the second trip is combined with the second part of the first trip, forming two new trips.
3. Select two trips at random. If they share common edges, change the sequence between the common edges of the two trips.
4. Select one edge along one trip at random and reverse its direction.
5. Select part of a trip at random and reverse its direction.
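A compact Python sketch of the SA skeleton with the parameter values of Eq. (18) is shown below. The Metropolis-style acceptance rule exp(-Δ/(Kc·TE)) is a standard choice and is assumed here, since the exact acceptance equation of reference [45] is not reproduced in the text; the `objective` and `neighbour` callables stand for the problem-specific cost evaluation and the five local search moves listed above.

import math
import random

def simulated_annealing(initial, objective, neighbour,
                        M=5, TE0=200.0, alpha=0.98, TE_end=1.0, Kc=0.8, seed=0):
    """Simulated annealing skeleton using the parameter values reported in Eq. (18)."""
    rng = random.Random(seed)
    current, best = initial, initial
    f_cur = f_best = objective(initial)
    TE = TE0
    while TE > TE_end:
        for _ in range(M):                       # M iterations at each temperature
            cand = neighbour(current, rng)
            f_cand = objective(cand)
            delta = f_cand - f_cur
            if delta < 0 or rng.random() < math.exp(-delta / (Kc * TE)):
                current, f_cur = cand, f_cand    # accept improving or, occasionally, worse moves
                if f_cur < f_best:
                    best, f_best = current, f_cur
        TE *= alpha                              # geometric cooling schedule
    return best, f_best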
Genetic Algorithm
This algorithm is based on producing new generations and selecting the best solutions to produce the next generation [56]. The heuristic approach explained earlier is used to generate the initial solutions, which are then improved by SA before being passed to the proposed GA. For this purpose, the heuristic algorithm generates a specified number of initial solutions (200 solutions). From the generated solutions, a sample whose size equals the initial population size of the GA is taken following the initial solution selection approach explained in the next sub-section. In the next stage, the required number of parents is selected using the two-parent tournament selection method, and the proposed crossover method described below is employed to generate two new solutions. This process is repeated until the required number of solutions is achieved, after which the mutation operator is applied at a particular rate following the method proposed below. Finally, among all of the solutions, a specified number of solutions are selected via the initial solution selection method and transferred to the next generation, while all other solutions are eliminated.
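The overall flow can be sketched as follows in Python; the building blocks are passed in as callables, the values n_initial = 200, n_children = 150 and p_mutation = 0.1 follow the parameter tuning section below, while pop_size, generations and tournament_k are illustrative values that are not stated in the paper.

import random

def hybrid_genetic_algorithm(generate_initial, improve_with_sa, objective,
                             select_solutions, crossover, mutate,
                             n_initial=200, pop_size=50, n_children=150,
                             p_mutation=0.1, generations=100, tournament_k=4, seed=0):
    """High-level sketch of the HGA flow described above (building blocks supplied by the user)."""
    rng = random.Random(seed)

    # Stages 1-2: construct the heuristic solutions and improve each of them with SA.
    pool = [improve_with_sa(generate_initial(rng)) for _ in range(n_initial)]
    population = select_solutions(pool, pop_size)            # quality- and scattering-based selection

    def tournament_pair(pop):
        cands = sorted(rng.sample(pop, tournament_k), key=objective)
        return cands[0], cands[1]                            # the two best of a random sample

    # Stage 3: genetic search.
    for _ in range(generations):
        children = []
        while len(children) < n_children:
            p1, p2 = tournament_pair(population)
            children.extend(crossover(p1, p2, rng))
        children = [mutate(c, rng) if rng.random() < p_mutation else c for c in children]
        population = select_solutions(population + children, pop_size)   # survivor selection
    return min(population, key=objective)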
Initial Solution Selection
Two things must be considered when selecting initial solutions: the quality of the solutions and their scattering. If one focuses on selecting the best solutions only, the search space is reduced and there is a risk of becoming trapped in local minima. In order to address this problem, the present paper proposes an initial solution selection approach that goes through the following steps (a code sketch follows the list):
1. Define two sets, Q and S, which are initially empty.
2. Assign the initially generated solutions to the set Q and sort them by the value of the objective function (ascending).
3. Divide the interval between the best and worst values of the objective function into num intervals of equal length (where num is the number of solutions to be selected).
4. Select the isolated solutions in each interval and transfer them to the set S, removing them from the set Q. If the number of selected solutions is equal to num, terminate the algorithm; otherwise, proceed to Step 5.
5. Assign a value to each solution remaining in the set Q, equal to the inverse of the number of solutions within its interval.
6. Sort the solutions by value (descending) and assign to each solution a cumulative score obtained by summing the scores of the previous solutions and that of the current solution. Each time, generate a random number between zero and the sum of the scores, select the solution whose cumulative score interval contains this number, and transfer it to the set S.
7. If the set S contains enough initial solutions, terminate the algorithm; otherwise, go to Step 6.
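The sketch below implements this selection in Python; the roulette-wheel step is a simplified reading of Steps 5 and 6, and the handling of ties and degenerate cases (e.g. all solutions having the same objective value) is an assumption made here for illustration.

import random

def select_initial_solutions(solutions, num, objective, seed=0):
    """Select `num` solutions balancing quality and scattering (Steps 1-7 above)."""
    rng = random.Random(seed)
    Q = sorted(solutions, key=objective)                    # Step 2: sort ascending (set Q)
    f_best, f_worst = objective(Q[0]), objective(Q[-1])
    width = (f_worst - f_best) / num or 1.0                 # Step 3: num intervals of equal length

    def interval(sol):
        return min(int((objective(sol) - f_best) / width), num - 1)

    counts = {}                                             # how crowded each interval is
    for sol in Q:
        counts[interval(sol)] = counts.get(interval(sol), 0) + 1

    # Step 4: solutions that are alone in their interval go straight to the set S.
    S = [sol for sol in Q if counts[interval(sol)] == 1][:num]
    Q = [sol for sol in Q if sol not in S]

    # Steps 5-7: roulette-wheel selection with scores inverse to interval crowding.
    while len(S) < num and Q:
        scores = [1.0 / counts[interval(sol)] for sol in Q]
        r, acc, chosen = rng.uniform(0, sum(scores)), 0.0, len(Q) - 1
        for idx, score in enumerate(scores):
            acc += score
            if r <= acc:
                chosen = idx
                break
        S.append(Q.pop(chosen))
    return S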
GA Operators
The most important operator in the scope of a GA is the crossover operator. The proposed crossover operations are as follows:
1. Two parents are selected via the tournament method: a particular number of solutions are selected randomly and the two with the best objective function values are chosen. Two trips are then selected at random and swapped between the two parents.
2. Two parents are selected via the tournament method and two trips are selected randomly from them. If the two selected trips share one or more common edges, one of these edges is selected randomly and both trips are split at that edge; the first part of the first trip is combined with the second part of the second trip, and the first part of the second trip is combined with the second part of the first trip, forming two new trips that replace the previous ones.
The mutation operator is applied at a fixed rate Pm to mutate the children produced by the crossover operation. The local search methods used to improve the initial solutions are also used as mutation operators. After applying the operators, the feasibility of each solution is evaluated by checking that the arcs exist in the graph network, that the vehicle capacity constraint holds for each trip, and that the maximum available time is respected for each vehicle. A sketch of the first crossover operator and of this feasibility check is given below.
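In the Python sketch below, a parent is assumed to be a list of trips, each trip a dict with the vehicle index and the ordered list of traversed edges; this representation, the function names and the fact that served and merely traversed edges are not distinguished are all simplifying assumptions made here for illustration.

import copy
import random

def crossover_trip_swap(parent1, parent2, rng):
    """First crossover operator: swap one randomly chosen trip between the two parents."""
    child1, child2 = copy.deepcopy(parent1), copy.deepcopy(parent2)
    i, j = rng.randrange(len(child1)), rng.randrange(len(child2))
    child1[i], child2[j] = child2[j], child1[i]
    return child1, child2

def is_feasible(solution, edges, capacities, max_times):
    """Post-operator feasibility check described above.

    edges      : dict mapping an edge (i, j) to {"demand": ..., "time": ...}.
    capacities : dict vehicle -> capacity; max_times: dict vehicle -> maximum available time.
    """
    time_used = {}
    for trip in solution:
        k = trip["vehicle"]
        if any(e not in edges for e in trip["edges"]):           # every arc must exist in the network
            return False
        if sum(edges[e]["demand"] for e in trip["edges"]) > capacities[k]:
            return False                                          # capacity constraint per trip
        time_used[k] = time_used.get(k, 0.0) + sum(edges[e]["time"] for e in trip["edges"])
    return all(t <= max_times[k] for k, t in time_used.items())  # time limit per vehicle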
Parameter Tuning of the HGA
In order to adjust the parameters of the proposed HGA, a trial-and-error approach is followed, wherein three example problems are solved under different scenarios to find the best values of the algorithm parameters. Accordingly, the number of initial solutions generated by the proposed heuristic algorithm (i.e., the initial population size) is set to 200, the number of solutions obtained from the crossover operator is set to 150 and the mutation rate is set to 0.1. Figure 4 shows the pseudo-code of the proposed GA.
Numerical Results
In this section, in order to validate the proposed mathematical model and to evaluate the performance of the proposed algorithm, 15 random instances of various sizes are generated. After solving the instances with the exact method and analysing the obtained results, the validity of the proposed model is confirmed.
For all of the instances, two vehicle types (types 1 and 2) are considered, with capacities of 5 and 7 tons and activation costs of 400$ and 500$, respectively. Supporting information and the network structure are given in Table 4. The input parameter values are generated randomly from a uniform distribution.
In Table 4, column 1 denotes the instance number, column 2 gives the total number of edges, column 3 gives the number of required edges, and columns 4 and 5 show the numbers of available vehicles of types 1 and 2, respectively, for each instance. The emission factors of the vehicles are described in Table 5. Also, Ψ is equal to 10 in all instances.
The 15 instances are then solved separately using the CPLEX solver of GAMS software, the proposed SA and the proposed HGA, with a run time limit of 3600 s. The aim of investigating SA and HGA separately is to make the impact of applying GA on top of SA more evident; in fact, the HGA is the result of applying GA to SA. The obtained results are shown in Table 6. The solution methods are executed on a laptop equipped with a Core i7 CPU @ 2.60 GHz and 12.00 GB of RAM.
As shown in Table 6, CPLEX is not capable of finding a solution for some problems within the 3600 s run time limit. The results show that the proposed HGA performs well in comparison with SA and CPLEX. SA solves the problems in lower run times than the HGA, but this difference is negligible given the significantly better gap percentage of the HGA (Figure 5). The average gaps obtained by SA and HGA are 2.66% and 1.64%, respectively. In addition, the capability of the proposed solution methods is evaluated by solving P15-P18. CPLEX could report the best found solution only up to the first 15 problems. For P16-P17 there is a significant increase in the run times of SA and HGA, and for P18 none of the algorithms is able to find any solution within the 3600 s run time limit. This indicates that additional modifications may be needed to improve the efficiency of the approach for very large-sized problems. The number of vehicles used in each instance is presented in Table 7 for the different solution methods. As can be seen, there are no significant differences between the numbers of used vehicles of types 1 and 2. As another advantage, the proposed algorithm can solve large-sized problems with a reasonably low run time in comparison with algorithms proposed in the literature, such as the Improved Max-Min Ant System (IMMAS) [1].
Sensitivity Analysis
In order to investigate the effects of changing some parameters on the value of the objective function, a sensitivity analysis is performed on the first fifteen problems using the HGA. In other words, the behaviour of the objective is studied under uncertainty when the assumed value of a parameter changes. Managers are also interested in knowing how much benefit would be gained if they assigned more resources, that is, in the relation between the objective function and the amount of assigned resources. In this research, the effect of the maximum available time of each vehicle (Tmax) on the objective is analysed. Four different values (360, 480, 550 and 600 min) are considered for this parameter while the other parameters are kept constant.
The results of the sensitivity analysis on Tmax are given in Table 8. The most important conclusion drawn from the analysis is that the higher Tmax is, the lower the objective function value and the number of used vehicles are. As is clear in Figure 6, the objective value increases significantly when Tmax is reduced to 360 min; in other words, the worst case is obtained with a Tmax of 360. The difference between the objective values obtained with Tmax values of 550 and 600 is proportionally the smallest. Table 9 shows the cost savings obtained for the different Tmax values.
In order to optimise the associated costs, managers should consider the impact of the maximum vehicle usage time and set an appropriate upper limit to obtain the maximum cost saving. As presented in Table 9, the average cost saving for a Tmax of 360 is not positive, that is, it causes a loss for all instances. The average cost savings for Tmax values of 550 and 600 are equal to 374.00 and 507.80, respectively. Finally, the performed sensitivity analysis can be used as a managerial tool in decision-making processes.
Conclusions
The economic and environmental aspects of urban services are two inseparable parts of decision making in organisations such as municipalities, yet these two factors are usually investigated separately in applied research. Moreover, in many cases finding the shortest routes does not yield the optimal solution when CO2 emission and air pollution minimisation are taken as the objective, because fuel consumption depends on many factors, including vehicle load, speed, road conditions and so forth. In this paper, a multi-trip Green Capacitated Arc Routing Problem is proposed to find routes that minimise total cost and the total emission of greenhouse gases while considering various types of real-world constraints, such as vehicle capacity and the maximum available time of vehicles. In other words, the aim is to reduce the adverse impacts of greenhouse gases and air pollution while also minimising cost, in terms of the minimal activation of the vehicle fleet for waste collection and the optimal routing. In order to solve the proposed model, a hybrid genetic algorithm is developed based on a simulated annealing algorithm and a genetic algorithm. The results indicate the high efficiency of the proposed algorithm, which yields solutions with an average gap of 1.64% for the instances. Moreover, the proposed HGA solves large-sized problems in an appropriate run time in comparison with other algorithms proposed in the literature. Finally, a sensitivity analysis is carried out to study the impact of different maximum available times of the vehicles and to propose optimal policies. For future studies, it is proposed to develop robust optimisation approaches for the problem in order to evaluate the effect of uncertainty. In addition, applying other novel algorithms, such as polar bear optimization and moth-flame optimization, would be useful for testing the proposed algorithm on large-sized problems.
Figure 1. The main steps of the research.
Figure 2. A schematic example and solution generation flow.
Figure 3. The structure of the proposed hybrid genetic algorithm (HGA).
Figure 5. A comparison of computational times between CPLEX solver, SA and HGA.
Figure 6. A comparison between mean values of the objective function for different values of Tmax.
Table 1. Fuel-oriented and distance-oriented methods.
Table 3. The solution representation chromosome of the proposed HGA.
Table 4. Input information of the instances.
Table 5. Estimated emission factors for the 5-ton and 7-ton trucks.
Table 6. The obtained computational results.
Table 7. The number of used vehicles obtained by different solution methods. * The number of used vehicles of type 1; ** the number of used vehicles of type 2.
Table 8. Computational results obtained for different values of Tmax.
Table 9. Cost savings obtained for different values of Tmax.
A Pleasant Ride: Vintage Aesthetics as a Strategy to Deliver Sex Education and Harm Reduction on Instagram
This article discusses the possibilities and limitations of delivering transformative sex education and harm reduction on highly regulated platforms such as Instagram and what are helpful strategies in this process. I use the Brazilian project Sento Mesmo (SENTA) as a case study. SENTA is a multiplatform project that uses vintage aesthetics and deploys explicit language to address its topics. It is an activist project as it challenges current paradigms, particularly the way sex-ed and drugs are hegemonically addressed, and targets groups excluded from public policies. SENTA is explicitly inspired by Freirean pedagogies, and I argue it can be framed as pleasure activism (brown, 2019) as it understands sexuality as more than just a site of oppression and as an indissociable part of our lives. Through a mixed-methods approach that included content analysis and a semi-structured interview with the creator, I analysed SENTA’s content to understand the creative strategies chosen to evade Instagram’s ban or censorship. There are intentional and numerous contrasts: between text (explicit) and images (vintage, evoking the “good old times”), present and past, invisibility and visibility. In sum, visual, textual and engagement concessions have to be made to be able to circulate in such a highly regulated environment, but such concessions can still be filled with meaning. In SENTA’s case, the contrasts create a dialogue whilst trying to balance attractiveness for the readers and being harmless to the algorithms. This constant dialogue between past and present also reaffirms SENTA’s political alignment, including its alignment with historical LGBT actors in the country. I conclude that despite SENTA’s content being very context-specific, its strategies can be applied elsewhere, and become more important as public policies and traditional sex-ed approaches continue to overlook people who do not comply with every single norm.
INTRODUCTION
I was scrolling through my Instagram feed when a friend of mine posted that he would be DJing at a Zoom party organised by Sento Mesmo, or SENTA (here translated as Shamelessly Riding), a project focused on sex education (sex-ed) and harm reduction. The unapologetic name caught my attention, and I was drawn in by the profile's use of vintage images, reminiscent of the 1970s or 80s visuals, but combined here with contemporary fonts, memes, print screens, and, above all, explicit language on sex and drugs.
Building on previous insights on the importance of phrasing sex-ed in a grounded and non-medicalised language (Paiva, 2000), this article explores SENTA's style, to understand the reasons behind their particular linguistical and visual choices and how they contribute to the project's goal of delivering transformative sex-ed.
SENTO MESMO
Created in 2020, Sento Mesmo, or SENTA, is a multiplatform project for sex-ed and harm reduction, although it addresses more than these topics. On its website, there are two 'about us' descriptions, using different language styles. The more informal one defines SENTA as: a place talking about drugs, dirty sex and everything that gives pleasure. In a mocked and real way, it discusses and exchanges ideas in the most democratic and didactic form possible, trying to answer the questions everybody has and [trying] to say what everybody wants to say (QUE PORR* É ESSA, n.d.; My Translation).
In addition to the website, there are Facebook and Twitter pages, a Telegram channel, a podcast and an Instagram account (first @sentomesmo, now @sento.mesmo), which is the project's main platform, gaining over 70,000 followers in just one year 1 . SENTA's posts on Instagram are the focus of this article. The creator is a young man with a background in graphic design and medicine, and he launched SENTA as a project after one of his designs -a chart detailing the effects of mixing different drugs -went viral. Figure 1 shows a sample post, so English-speaking readers can have a gist of the visual and language choices. I suggest that a major contributing factor to SENTA's traction on social media is the use of the visual style in their designs. They frequently feature images from the 70s and the 80s, like stock-like pictures, religious images, and stills from old movies or advertisements. Visuals come in all sorts of colours, but SENTA privileges vivid ones, particularly red. The crucial message is typically incorporated as text in the images, leaving only superfluous information for the captions. This strategy makes sense as the content is published on a platform that privileges the visual and also because, as the creator stated in the interview for this research, he designs the pieces so they can be displayed in public spaces.
In contrast, the text is very explicit and playful; there is no euphemistic language or patronising of the audience. SENTA frequently calls the audience out: for instance, for gathering at the peak of a COVID-19 wave or reminding them that unsolicited sexual touching is harassment regardless of your sexual orientation. But to be (sexually) explicit in an environment as regulated as Instagram requires creativity, and so to circumvent platform surveillance, letters are often replaced by numbers or symbols (Example: sex/S3X or ass/A$$) or excluded. For example, typing C_msh*t, means audiences are still able to identify the word, whilst avoiding getting caught by a platform's regulatory algorithm. Wordplays are another common strategy.
The central presence of the creator is a major SENTA feature, to a point that the project is indissociable from him. He usually shares his own experiences and impressions before explaining a topic in-depth to the followers and, particularly more recently, has relied more on videos of him explaining or reacting to different things. According to him, making the project so personal was intentional to consolidate an approachable and imperfect persona who speaks the same language as the audience. Someone they can contact without fearing being lectured to; someone they can ask what they really want to know.
ANALYSING THE VISUAL
To identify and discuss how SENTA's unique combination of image/text on Instagram is used, this research initially drew on writings on visual critical methodology (Mannay, 2015;Rose, 2016) and visual political communication (Veneti et al., 2019). Rose (2016) differentiates four sites -production, the image itself, circulation and audiencing -and three modalities -technological, compositional and social -of visual analysis. Using her categories, my research question focuses on the production and on the image itself. Although audiencing and circulation are also important, I primarily seek to explore the strategies to openly address issues that are almost 'taboos', or at least not frequently talked about: heterosexual men being penetrated, consciously combining different drugs, condomless anal sex, etc. What are the project's choices? How do they express SENTA's intentions? For this purpose, the research employs a combination of content analysis and an interview with the creator of the content in question.
An online semi-structured interview through Zoom with the creator was conducted in January 2022. Questions focused mainly on the (political and visual) inspirations for the project, the production process, and the feedback from SENTA's community. Collecting Instagram posts for the content analysis posed particular challenges since at the time of this research SENTA's Instagram page (@sentomesmo) had recently been shut down after numerous reports claiming obscene content. Whilst content posted on other platforms were still accessible these were less relevant for the purpose of this research, mainly because they followed different patterns depending on the platform -TikTok, for instance, relies on videos. After the interview, the creator added me on Facebook, where he kept a copy of many Instagram posts, images and captions.
The 175 posts were published on Facebook -in sync with the first Instagram account -from January 4 th , 2021, to exactly one year later, January 4 th , 2022. They were all coded and constituted my sample for the content analysis 2 . At the time of the analysis, there was nothing posted after this date, and there was not much posted before January 4 th , 2021 (and it was all personal content or relying on Facebook-specific features -events, sharing of posts, etc.). Therefore, it was not difficult to draw the line between what was SENTA content and what was not. Confirming what SENTA's creator said in the interview, it was also clear the 175 posts were not primarily planned for Facebook, as they used hashtags (not clickable on Facebook), and other Instagram engagement tools (referring, for instance, to 'stories', 'carousel' or linking to specific profiles).
I relied on the interview to understand the production site and on content analysis to explore the images. As Rose (2016) describes, content analysis is a methodology that privileges this site of analysis (the image itself) over the other three. I applied its principles (selecting, coding and quantitative analysis) to observe general guidelines and main features -in my research, they matter more than the exceptions or deviant images -and then critically analyse the strategies adopted. I coded the 175 posts I have found 3 on the basis of the main topic (sex-ed/harm reduction/other); if they were part of a carousel or not; the use of prevalent colours; the presence/absence of illustrations/pictures; and, if the images were religious or not, contemporary, vintage or both, sexually explicit or not. Furthermore, as I did the coding after the interview, I added the variables 'religious symbol?' and 'sexually explicit?' because the creator explicitly addressed in the interview his intentions to reference religious values and not depict explicit images. I wanted to see how this was put into practice.
I noticed interesting things: at first, SENTA did not rely that much on vintage aesthetics and there was no distinctive visual identity (patterns in the use of colours, fonts and images) yet. As it developed, vintage images became more prevalent, however, memes also remain important. In general, the 'serious' or educational posts are interspersed with humorous ones, which reflect the creator's effort to show his persona as 'normal' in the eyes of the audience, someone who is not their teacher but a peer, a guy you can send a meme to. Most posts also come as an Instagram carousel. Of the 175 posts, 21 directly addressed harm reduction, 60 directly addressed sexual education and 94 addressed other topics. Only 10 publications depicted more sexually explicit images and 9 had explicitly religious images (Jesus, nuns, etc.). However, many more relied on images that evoke 'traditional' Christian values: a happy heterosexual couple or a happy heterosexual family out in the fields, for instance. After analysing the coding results, I chose images that displayed the more prevalent patterns to illustrate this article and images that express the central topics from my interview.
CONCEPTS WORTH REMEMBERING
In this section, I present the main conceptual frameworks for theorising SENTA's work: pleasure activism, Freirean methodologies applied to sex-ed, and intersectionality. I indicate how these concepts are put into practice in the project, also introducing key specificities of Instagram and social media algorithms that help understand SENTA's work. Lewin (2019) and Lewin and Jenzen (forthcoming) define four forms of (queer) visual activist practice -protest, product, and process and partying, -which can all be observed in SENTA's work at some level. The project is connected with product-based LGBTQ+ visual activism as the visual pieces is made for display both digitally and physically. It is also possible to frame SENTA as partying and protest, as it has always been concerned with building a safe space for people to share their experiences and real doubts related to pleasure -doubts which are not usually covered in sex-ed traditional approaches. And, by doing this, SENTA criticises and challenges traditional sex-ed and calls for change at the societal level.
However, SENTA is more strongly and directly framed as process-driven visual activism as it uses 'art to empower or engage with participants' (Lewin and Jenzen, forthcoming). Projects in this category tend to be influenced by the work of Brazilian pedagogue Paulo Freire (2013, 2014). Freire's pedagogies value consciousness-raising through an educational process based on dialogue and mutual learning and transformation. Freedom, autonomy, love, hope, reflection and praxis are key words in Freirean thought, in which Education is not politically neutral: it is aligned with the oppressed.
Although sexuality is not a primary theme in discussions on Freirean pedagogies, applying his methodologies to sex-ed is not unusual (Beserra et al., 2011;Dias, 2015;Demartini and Silva, 2016). Freirean-inspired sexual education values dialogue, and sees education as an intrinsically political, transformative and emancipatory process (Warken and Melo, 2019;Sousa, 2021). It also frames sexuality as an inseparable part of our beings. Sousa (2021) highlights how writings from Freire and bell hooks (1994) potentially lean towards a transgressive sexual education, which favours the autonomy of the self, questions heteronormativity and promotes a language of resistance. Such language of resistance is central to LGBTQ+'s (or sexual dissidents, in her terms) fight for liberation because by appropriating language once demeaning, it is possible to imagine and create the Freirean 'untested feasibilities' (Sousa, 2021:13). This means imagining and building through transformative practice a new reality beyond the current structures 4 . In this sense, Sousa argues, transgressive sexual education is also decolonial, as it encourages people to free themselves from the shadow of the oppressor.
In the interview, SENTA's creator reflected on how the project invites people to be open about sexual and gender diversity and explore their own sexuality beyond heterosexuality -he mentioned, for instance, that people have messaged him saying they now saw themselves as LGBTQ+ after engaging with the page, accepting their (previously unexplored) desires. This process of questioning the status quo -heteronormativity -including at the personal level, resonates with Sousa's definition of transgressive sexual education. In addition, in terms of language, SENTA brings theoretical concepts closer to the audience's reality, a methodology aligned with Freire. This can be observed in Figure 2, where theory and humour are combined to describe SENTA's work and inspirations. It recommends readings by Freire, Marcuse and Lopes Louro, and explains its Freirean alignment, as follows: Education according to Paulo Freire is freeing. It sets you free and it emancipates you. Therefore, sexual education sets you free, allows everyone to freely explore their sexualities and has the power to fight against oppression.
However, it also calls Paulo Freire 'a gorgeous daddy' and argues that 'riding is freeing' and 'blowing is an act of love'. This is a very good representation of SENTA's strategies towards the sexual liberation of his audiences.
Like Freire and his followers, SENTA values autonomy, resistance and liberation. It also values education through dialogue instead of abiding by a 'banking model of education' (Freire, 2014: 82), where there are subjects and objects: one side teaches, the other one learns, one holds the knowledge and speaks while the other passively listens. A model where one part decides and the other one obeys. SENTA, on the other hand: Is not to spread information, but to create a dialogue about it. What I am proposing is that we discuss information (...). It is by discussing that we truly learn. If I just say 'use a condom in this situation' people will quickly forget (Personal Communication, 2022).
One way this intention is put into action is by showing his own face and strengthening his persona, with his own opinions and preferences -not only on sex and drugs but on other mundane topics. By doing this, SENTA's creator is positioning himself horizontally with the readers to facilitate the dialogue. It allows that at the same time he is seen as an authority in that field of knowledge, he is also subject to critiques and disagreements from the audience.
This approach is welcomed considering Brazil's history with sexual education. Under the strong influence of the Catholic church and successive conservative governments, sexual education has been historically repressed in the country (Demartini and Silva, 2016). There was some opening after the AIDS outbreak, and individual responsibility lost some ground to approaches focusing on social and collective vulnerability (Monteiro, 2002;Ude et al., 2020). Yet even then, sexual education was still predominantly delivered vertically, aligned with the 'banking model'. Even now, the focus remains on preventing STIs and pregnancies (Demartini and Silva, 2016), and sex-ed is barely a political concern (Sexuality Policy Watch, 2021; Guimarães, 2022). Sento Mesmo, on the other hand, makes sexual education political and frames it as more than just preventing diseases but also as a site of pleasure.
Such an approach is exemplary of 'pleasure activism' 5 (brown, 2019), a concept inspired by Audre Lorde's 'Uses of the Erotic' (1978, republished in brown, 2019). It is defined as 'the work we do to reclaim our whole, happy, and satisfiable selves from the impacts, delusions, and limitations of oppression and/or supremacy' (brown, 2019: 11). It considers pleasure as coming not only from the erotic realm but from a broad range of sources, and as a natural and safe aspect of life. This strongly resonates with SENTA's approach, as the project does not try to regulate or control how people experience pleasure -it encourages them to do so and tries to help them to do it safely. brown (2019: 11) argues it is possible to 'offer each other tools and education to make sure sex, desire, drugs, connection, and other pleasures aren't life-threatening or harming but life-enriching'. Pleasure activism focuses on moderation, a concern also echoed in SENTA's harm reduction strategies. For instance, SENTA's first post in 2020, which went viral 6 , teaches 'how to use drugs during Carnival'. It is not a tutorial on how to use each drug, but rather a guide on how they safely or dangerously interact with each other. In addition, brown understands moderation as opposite to excess and not to abundance, and excess is classified as a symptom of capitalism's unequal distribution mechanisms. According to her, it 'destroys the spiritual experience of pleasure' (2019: 12-13). In other words, pleasure activism opposes capitalist values and aims for a new system in which pleasure and collectivity are central, regarding pleasure and collectivity not just as the products of this new system but also as the tools to build it. Such a perspective connects brown's ideas with Freirean pedagogies, as both authors value dialogue, collectivity and emphasise not just the goal but the process as fundamental. Such perspectives, reflected in SENTA's work, also stress that respecting people's autonomy is crucial towards liberation from current structures.
Lastly, a few words on intersectionality are required to better understand SENTA's work. brown defines pleasure activism as a black feminist project (2019: 62) both for the need to approach sexuality as more than just a site of oppression and for the centrality of intersectionality in black feminist thought. Coined by Crenshaw in 1989, various schools of thought have worked on intersectionality ever since, and it remains an important topic in feminist theory (Piscitelli, 2008). Crenshaw recently defined it in an interview as: A lens, a prism, for seeing the way in which various forms of inequality often operate together and exacerbate each other. (...) The experience is not just the sum of its parts. (Steinmetz, 2020: n.p.).
The concept matters because, even if the word intersectionality is not used in SENTA's posts, it is not possible to ignore its concern with it. The primary focus on sex-ed and harm reduction does not exclude addressing topics such as racism (Figure 1), misogyny and ableism, amongst others. In his attempt to make his audience explore their sexualities and fight against oppression, SENTA's creator intentionally makes connections with other struggles at stake. In SENTA's approach, sexual education is knowing that people with disabilities have sex too, that gay men can be abusers even if they are also an oppressed group, and that pornography can be racist and violent, and one should choose it wisely to not reinforce racist practices. In sum, that one is free to explore as long as no one else is hurt in the process. These topics are addressed with seriousness despite the jokes and explicit language.
Having connected Sento Mesmo's practices with contemporary theories, the article will now attend to how SENTA navigates social media and how it connects with Brazilian LGBTQ+ activism.
DISCUSSION: THE MEDIUM MATTERS
Interviewing SENTA's creator evidenced -especially considering the account's ban -that the medium matters when it comes to delivering sex-ed. Although one can print and display SENTA's work, the project is rooted in the digital world. Delivering sex-ed online is not exactly new: Oosterhoff et al. emphasise that, just like 'offline' methods, it remains common for online sex-ed to focus mainly on negative aspects, such as risks of STIs. '[Online sexual education] rarely offers any practical suggestions on what young people really want to know: how to give and receive pleasure, and how to engage in sexual relationships in ways that make them happy' (2017: 1-2). Thus, there is a gap when it comes to a more open, realistic and non-judgmental approach. A gap SENTA aims to fill.
Sex-ed can be a sensitive topic, which makes the digital environment a privileged medium for delivering it, as it allows people to remain anonymous when looking for the content they want or need (Waldman and Amazon-Brown, 2017). To properly enjoy this potential, sex-ed projects must share content that not only reflects common doubts and questions but also rightfully adjust their language and tone to their audience. Moreover, at least theoretically, the content is accessible to larger audiences. However, a digital sex-ed project also has to abide by the medium rules. SENTA, like other projects, needs to comply with social media community guidelines and is dependent on the algorithm at some level to deliver its content, as just posting on Instagram does not guarantee it will reach its audience. Rose (2016) notes that social media algorithms now play an important role in visual communication. And although I did not focus on the circulation of content for this article, it is still relevant to reflect on how the medium can influence online sexual education as it directly affects the production of every social media post. As Gillespie (2018) stresses, all platforms engage in processes of content moderation, and although moderation is more noticeable on social media platforms that have algorithmically curated timelines instead of chronological ones, this is a feature of every platform. In fact, content moderation is essential to the constitution of platforms and helps shape the public conversation. However, the process of moderation has to be 'hidden, in part to maintain the illusion of an open platform and in part to avoid legal and cultural responsibility' (Gillespie, 2018: 21). A consequence, Gillespie notes, is that only those culturally privileged at some level can experience this process as if it was invisible or unnoticeable. Or, as Olszanowski (2014: 85) states about Instagram, censorship 'has a consequential role in the way particular subaltern communities are built and maintained'. SENTA is a good example; as a project aiming to subvert current norms and targeting people not usually targeted, battling moderation is a fundamental and inescapable aspect of the work.
In addition, moderation has to respond to the values and interests of each platform -and even Facebook and Instagram differ despite being part of the same company (Leaver et al., 2020). And although there are rules and community guidelines stating what is permitted and not permitted, there is still room for interpretation. Platforms, after all, are made up of various communities, with diverse and sometimes conflicting interests and values. That is when moderation gets trickier. For instance, users can police themselves -through the function of 'community flagging', a commonly used tool. In this configuration, users might flag content because they disagree with it, and not because it violates any rules. And if the content is subversive at some level, like SENTA's, it may be subsequently understood by the platform as a violation, in which case the sanction -exclusion, suspension or ban -is collectively interpreted as a platform's statement on that matter (Gillespie, 2018).
To escape sanctions, creators often engage in strategies to circumvent the platform's automated algorithms. For instance: misspelling words, expressions or hashtags (Cobb, 2017), using synonyms, covering nudity (Olszanowski, 2014) and more. One problem is that even automated moderation has its biases (Noble, 2018), including toward conservative and cisheteronormative norms (Jenzen, 2017). Furthermore, there is still user flagging to deal with, and content creators constantly complain about content removal or a profile/page ban without further explanation (Olszanowski, 2014). Phrasing it differently: users experience first-hand the lack of transparency of social media platforms when it comes to moderation. They don't have clarity of what is allowed or not and why. And, more importantly, they don't feel like they have the space to present their side of the story. Then, they act (and react) on their own account, creating their strategies to navigate in such an environment.
Tactics vary depending on each page's topic and type of content, but the common goal remains to circumvent censorship. In this matter, Olszanowski (2014: 93) summarises it well that 'recognizing the polysemic ontology of censorship while at the same time 'playing' with it is one way to destabilise its repressive power'. In other words, it is a powerful move by subaltern communities. The tricky aspect is that subaltern communities encompass diverse and, in fact, oppositional actors: from sex-ed providers to communities promoting eating disorders (Cobb, 2017). From feminist artists to white supremacists or, important in the Brazilian context, groups trying to undermine democracy. These are all trying to remain active and visible on social media.
Specifically talking about Instagram, the platform is primarily visual, unlike Twitter and Facebook, for instance, and this focus is fundamental to its success. As Leaver et al. (2020, n.p.) argue, Instagram has become so prevalent in everyday life that it is now 'synonymous with the visual zeitgeist'. The platform has been through significant changes, particularly after being purchased by Facebook, but its original focus is of particular relevance: Instagram launched heavily relying on retro and vintage aesthetics. This was expressed in its early iconography, filters and square photos (Leaver et al., 2020) -features that were minimised over time. Maybe because, as the authors argue, 'commercial accounts advertising and selling their products through the platform may have considerably less desire to make their content seem like it was from the 1970s' (2020: n.p.). Now, the everchanging platform offers more possibilities for users, like more editing tools, Instagram Stories, marketplaces and more.
Like other social platforms, Instagram also relies both on automated and manual moderation, both of which are targeted by SENTA's creator in his efforts to not be censored. The platform is particularly strict on banning nudity (Leaver et al., 2020;Olszanowski, 2014), regardless of context, which might explain why SENTA uses so few explicit images. However, there were some changes in the Community Guidelines over time, responding, for instance, to very vocal protests about censoring breastfeeding (Leaver et al., 2020). These changes reaffirm the platform's never-stopping changes in its rules, which force users to constantly adapt.
VINTAGE AND YET SO MODERN: CONTRASTS BETWEEN TEXT AND IMAGE
In Sento Mesmo, the visual elements are a centrally important part of the project -particularly considering the creator's background in graphic design -and are carefully planned to provide the best support for the educational and activist content. In this sense, Instagram is the ideal medium of choice. SENTA's intention is to evoke a popular aesthetic and visually represent a paradox of Brazilian society: a society that is both very conservative, and also very libertarian. This paradox is expressed through contrasts between images and texts, but as the next sections will show, SENTA's graphic choices express more than a contrast between conservativism/libertarianism. They also build a bridge between past and present LGBTQ+ activism in the country and express a social media dilemma: how to be visible and interesting for its audience whilst remaining invisible and ordinary to the algorithms.
In Sento Mesmo, images -with pictures or letterings -must evoke 'traditional' times -in terms of morals and manners -because texts are doing the opposite. SENTA talks to people horizontally and considers what they actually do, instead of what they should be doing -which resonates a lot with Freirean methodologies focusing on the lived reality. To do so on Instagram, the creator has to adopt strategies (like omitting/replacing letters or words) to circumvent moderation. All visual elements must make the posts attractive to readers and also invisible to algorithms, as SENTA is an easy target considering its language and topics addressed.
Sento Mesmo's images reference western culture and include pop singers, decorations, or stock-like pictures of daily activities (Figure 3). Posts often replicate newspapers (Figure 4 and Figure 5) or magazine covers. Religious images (Figure 6) and, particularly, Jehovah's Witnesses' magazines were a visual inspiration for the project, as, in the creator's view, they were a good visual representation of the 'conservative' side of the paradox he was addressing. All these features show how images are carefully chosen to reminisce 'traditional' times and maximise the contradiction between image and text. Figure 4 talks about toxic masculinities; Figure 5, entitled 'How to fuck up a date', addresses which drugs are not to mix in this context. At the top, there is a banner saying, 'Prevention is not the same as [drug] incitement'.
More rarely, there are contemporary pictures (Figure 7) or print screens -either for humour purposes, to depict political figures or to recommend a read. Humour plays a central role in SENTA's content and evidence SENTA's alignment both with pleasure activism and with Freirean sexual education. It is massively used either to call the audience out and/or to create a bond with the followers, emphasising that the creator's persona shares the same culture as them. By heavily using slang and openly talking about common and usually embarrassing situations in relationships or sexual interactions, SENTA shows one use of the 'language of resistance' that Sousa (2021) framed as an element of Freirean sexual education. Additionally, pleasure is celebrated, encouraged and seen as a political act, as can be seen in Figure 8 and Figure 9. Figure 8, for instance, says: WATCH OUT, BOTTOMS! Beware of your lower back! I know you lift your butt like crazy when you're near people you want, but please be careful with hyperlordosis. Today is the International Day of Fighting hyperlordosis, so girl please work out more, stretch your back and just lift your butt when it's time (Figure 8). Similarly, Figure 9 states: 'Masturbating is a political act (...) Uuuuhhh I'm so woke! So obvious LMFAO.' These choices are not obvious for sex-ed and harm reduction projects, despite the accessible language being almost always described as an important feature. Muller et al. (2017), for instance, show that creative strategies, particularly visual ones, are needed to evade governmental or platform censorship. However, quite often the chosen path is precisely to use more 'scientific' and medical language, even if targeting young people (Herbst, 2017), as this would give more credibility and appears to be more 'neutral' and less 'activist'. Other experiences of sexual education through Instagram in the Brazilian context (Castro, 2020;Silva dos Santos, 2021) are not nearly as explicit in their language. For SENTA, on the other hand, explicit, non-judgmental language is a non-negotiable feature. This is because the creator considers there are barely any places for people who do not behave by the book: Brazilian campaigns are always: 'don't do it, don't do it'. It's never: 'people will do this. What can we do for them?' So, these people excluded from Brazilian morality have nowhere to go (...) Even if they are the majority and not the exception (Personal Communication, 2022).
Thus, SENTA's focus is on the gap of knowledge in such communities on how to make safer choices -and it is an urgent concern. The work is, then, a direct critique and a response at the grassroots level to ineffective public policies 7 , which justifies the use of language 'from the ground'. If SENTA's visuals evoke the 'good old times', the text targets those currently excluded.
PAST AND PRESENT: CONNECTIONS WITH HISTORICAL QUEER ACTIVISM
Just as there is a dialogue between visuals and words, there is a conversation between past and present. SENTA's entire visual identity aims for a 'popular appeal', to reach as many people as possible. Some of the visual references are popular Brazilian newspapers from the 70s-80s, as well as resistance newspapers from the same period, which evidences the political and contra-cultural affiliation of Sento Mesmo from the start. Here, I argue that the project's visual choices evidence this alignment with the historical Brazilian LGBTQ+ movement and mark SENTA as a resistance project, preserving and updating the legacy of iconic activism from the past.
When I asked Sento Mesmo's creator what his first visual inspirations were, he told me that at the time he was creating SENTA he was working alongside São Paulo's Diversity Museum on a project to rescue LGBTQ+ Memory in the dictatorship years. The job involved collecting press material from that period, so newspapers from 7 See, for instance, 'Bolsonaro says no to sex': https://latinamericanpost.com/31957-bolsonaro-says-no-to-sex the 70s and 80s were fresh on his mind. Popular, almost sensationalist newspapers, costing the equivalent of £0.05, like Notícias Populares, Super and the right-wing Bundas ('Butts'). He also mentioned Lampião da Esquina, a queer newspaper printed between 1978-81 and considered the first media of its type in the country. Initially, SENTA would be called Lampião, which explicitly confirms the inspiration beyond the visual. More than that, it shows a willingness to somehow preserve the memory and the legacy of Lampião.
Lampião was visually very impactful. It used different fonts, pictures and illustrations, not necessarily abiding by only one graphic identity (Castro and Fonseca, 2021). Stories were told through abundant slang and humour and the first pages were designed to look like street posters (ibid) (see Figure 10) -the same features of SENTA, but in a different medium.
Lampião was a resistance project, traditionally framed as part of the 'alternative press' that acted outside the law, escaping government censorship (Kucinski, 2018). But it was more than that. 'It was a newspaper that disobeyed in various directions' (Trevisan, 2018: 317). Gay journalists that created it felt silenced both in the public arena and amidst left circles (ibid), with no space to discuss and to freely live their sexualities. So Lampião was not only to tell their side of the story and talk about their issues but also a space to collectively experiment. Meetings included group nudity and touching (not necessarily sexual) as a way to build collective trust. Trevisan recalls: We considered fucking as a political act because our political action should be 'filled with the tenderness we have learned in the bed'. We started thinking (timidly at first) of pleasure as a legitimate right of every citizen. Even more in a country of such poverty as Brazil. We wanted to believe that misery didn't neutralise joy (2018: 318, My Translation).
This passage evidences the stylistic and political alignment between the two projects: the founders of both emphasised pleasure as a core value of their political action and as a right of every citizen. Trevisan also stressed the centrality of the collective, who would experience their sexuality and desires together, learning together and from each other, precisely what Freire (2014) states. These connections show that Freirean values and pleasure activism are not only embedded in SENTA: they are also core values of its major inspiration.
In addition, both projects rely on humour to ground their political action, focus (or focused) on building and sustaining a community in the process, and address topics beyond the initial ones: SENTA discusses racism, ableism, COVID-19 (and more); Lampião discussed sexism, racism and ecology as well as 'homosexual' topics and the fight for democracy. In both cases, there is a clear understanding of the interconnectedness of many agendas in advancing toward a new society. In all of this, Sento Mesmo is carrying the legacy of Lampião da Esquina and updating it for the 21st century and new mediums.
BEING RADICAL ON SOCIAL MEDIA: HOW TO DEAL WITH ALGORITHMS
Another factor connects past and present in SENTA's work - and it is also expressed in SENTA's use of image and text: living under authoritarian and non-transparent regimes. Then, a former dictatorship with official press censorship. Now, an authoritarian regime in a weakened democracy (V-DEM Institute, 2022) 8 and, in addition, the omnipresent power of social media algorithms. Both forces - state power and diffuse/omnipresent algorithms - can operate together, as they did when SENTA's original profile was banned: people organised and collectively flagged the page for obscene content until Instagram permanently banned it. What was considered obscene were the topics addressed and the language used by SENTA, its non-negotiable features. This means that to deliver sexual education on Instagram, SENTA has to fight not only authoritarian political regimes and the algorithm's omnipresent power but also the conservative morality present in Instagram's huge user base. This section will address this concern, showing how this struggle is expressed graphically.
Figure 10. Covers of editions n. 16 and n. 20 of Lampião da Esquina. The entire digitalised archive is available at http://www.grupodignidade.org.br/projetos/lampiao-da-esquina/
When I asked about the strategies to circumvent the algorithms, SENTA's creator said: 'I assume I am living under a dictatorship', explaining that this directly influences his graphic choices. And he added, making it more explicit: 'Visually, I'm in the 60s or 70s'. That is, he suggests that using images from 50 years ago might distract the algorithm, and that by visually locating itself in such a period, SENTA makes its content 'safe' for present-day moderation mechanisms. It should not be forgotten that, in the beginning, Instagram itself echoed vintage aesthetics. It was an ode to this style, relying on nostalgia to attract more users. SENTA, on the other hand, uses nostalgia to mislead the algorithm, alluding to what its political opponents consider a safer and better time. That is, SENTA evokes Instagram's early aesthetics to safely navigate its current norms.
Interestingly, the 60s and 70s were precisely the most repressive years of the Brazilian military dictatorship. Thus, by visually positioning the project in these decades, SENTA's creator does not simply create a contrast between text and images; he reinforces the connections between current and past political struggles. Moreover, this reaffirms that SENTA acknowledges the role of the first LGBTQ+ activists in the country, such as Lampião's founders. That is, the contrasts serve different functions and by no means represent a contradiction. Quite the opposite, they enrich the project's message and put different elements into conversation.
With numerous layers of political engagement included in each post, SENTA's battle with the algorithms becomes even more meaningful. In present-day Brazil, creators - including SENTA's - often complain about what is considered offensive on each social media platform or what counts as a violation of community guidelines. This is a common complaint among sex-ed digital providers, even when a project is State-sponsored (Muller et al., 2017; Herbst, 2017). SENTA's creator is particularly critical of Facebook and Instagram, saying their verdicts are quite arbitrary and impossible to follow given their constant changes.
This battle is sometimes addressed, as Figure 11 shows. It says: "If Instagram can put a warning on everything that is COVID-related, why can't it use its technology to flag racist, homophobic and misogynistic posts?". It criticises the automated algorithms' double standards and reaffirms SENTA's political position. Other posts seem to respond to the second step of moderation: human judgement. Examples are posts carrying the disclaimer 'Saving lives is not incitement' or 'Prevention [of overdoses] is not the same as encouraging the use of drugs.' These are SENTA's attempts to defend its content against an accusation of violating the medium's rules. It is as if the creator anticipated the moderator's move and responded to it in advance.
8 President Bolsonaro lost the 2022 elections to former president Lula, by a very tight margin, showing concerning levels of acceptance of Bolsonaro's authoritarian style.
Figure 11. "If Instagram can put a warning on everything that is COVID-related, why can't it use its technology to flag racist, homophobic and misogynistic posts?"
As shown throughout the article, delivering effective and transformative sex-ed on Instagram and other social media requires dynamism, the willingness to adapt, and a constant state of attention. Creators have to pay attention to multiple guidelines as well as to their audience's needs. Visual, textual and engagement concessions have to be made to be able to circulate in such a highly regulated environment, but such concessions can still be filled with meaning. If done effectively, this type of work can fill an important gap with a more realistic dialogue with people who are rarely targeted and even more rarely listened to.
CONCLUSION
The article sought to discuss the possibilities and limitations of delivering transformative sex-ed and harm reduction in the digital environment, using the Brazilian project Sento Mesmo as a case study. SENTA shamelessly and unapologetically talks about practices not usually addressed - condomless anal sex, penis sizes, combining different drugs to increase pleasure, heterosexual men being penetrated. And the project teaches how to enjoy such practices safely, or as safely as possible. It is explicitly inspired by Freirean pedagogies and, as I argue, puts the concept of 'pleasure activism' coined by brown (2019) into practice. It frames pleasure not only as an indissociable part of ourselves but also as something that should be encouraged and that has the potential to disrupt current norms and create new ways of living: a world with more sexual freedom and where labels and identities are not important or definitive. A world where everyone is deserving of care and pleasure. In doing so, it is an activist project as well as an educational one. It fights against current policies and aims for a new collective, not avoiding the political struggles in between.
To accomplish its mission, SENTA relies on contrasts between text/image, modern-new/traditional-old, present/past, and visibility/invisibility. In the end, these strategies all work toward circumventing algorithms' censorship, allowing SENTA to deliver transformative and radical sex-ed on a regulated platform.
The goal is to be as attractive to the reader as possible while being undetectable and innocuous in the eyes of the algorithm. For that, it deploys vintage aesthetics that evoke the 'good old times' when it comes to morals and manners - at least according to the voices who antagonise SENTA. Although the visuals might be reminiscent of conservative values, the written language could not be more contrasting, using abundant slang, curses and sexual terms. Most of such 'dirty' words are, however, replaced, omitted or translated to circumvent the algorithms, usually with humour and irony. And the contrasts are in fact complementary. Visuals and text do not oppose each other; they create dialogue and make the content more engaging. SENTA mixes past and present times to reach its goal while acknowledging what came before. Both sides are in conversation.
The digital landscape has opened new horizons for activist and political action, as well as for educational projects. It is easier to reach larger audiences, who can engage with the content without having to identify themselves. With such delicate topics as sexual practices and drug use, anonymity and privacy play important roles. This is particularly important in a context of strengthening far-right and conservative agendas, which is the case in Brazil in this decade. Such a context only reinforces how important SENTA-like projects are. Currently, sexual education and harm reduction are in no way a priority for public policies. To fill this gap, civil society acts when the government does not, often being targeted by algorithms for doing so. Because of that, it is reasonable to assume that if there were more similar projects, it would be more difficult to censor all of them.
SENTA's creative use of visuals and text is certainly to be credited to the creator's background in graphic design and digital communications. Despite being a very grassroots project, it looks quite professional, dialoguing with the mediums in which it is inserted and drawing on its references to affirm its political alignment. Finally, despite being very context-specific in its references, I argue that SENTA's strategies (particularly regarding image/text choices) can be applied elsewhere. Although this article had its share of translation challenges, I hope that its discussion of SENTA can inspire other initiatives worldwide. People need this inspiration, as they will continue to ride. May they do it shamelessly. | 2023-01-01T16:10:18.937Z | 2022-12-30T00:00:00.000 | {
"year": 2022,
"sha1": "94d61acc6604d8321bc31f937c3257cc3d10f50e",
"oa_license": "CCBY",
"oa_url": "https://www.lectitopublishing.nl/download/a-pleasant-ride-vintage-aesthetics-as-a-strategy-to-deliver-sex-education-and-harm-reduction-on-12758.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bea5ff675a60b51d002516ec028271623bcdbd96",
"s2fieldsofstudy": [
"Sociology",
"Education"
],
"extfieldsofstudy": []
} |
59396842 | pes2o/s2orc | v3-fos-license | Genetic Improvement of Bread Wheat for Stem Rust Resistance in the Central Federal Region of Russia: Results and Prospects
Advanced breeding lines of spring and winter wheat with several effective resistance genes to stem rust, including its aggressive race Ug99, were developed for the first time for the non-Chernozem zone of Russia. Modern wheat varieties cultivated in this region have high productivity and grain quality. However, they are susceptible to fungal diseases and therefore are cultivated using frequent fungicide treatments. The introgression wheat lines with multiple alien translocations ("Arsenal" collection) have been developed in the Moscow Scientific Research Institute of Agriculture "Nemchinovka" by using gamma irradiation of pollen of wild wheat relatives (Aegilops speltoides, Ae. triuncialis, Triticum kiharae, Secale cereale). Initial material with several effective Sr resistance genes for wheat breeding was developed using donors from the "Arsenal" and the VIR collections. The created initial material can compete with modern varieties, as it has resistance to leaf rust and powdery mildew, high productivity and numerous other advantages. On this basis, a new direction in the breeding of spring and winter wheat is being developed for this region, that is, the creation of wheat cultivars with resistance to fungal diseases. This makes it possible to reduce the fungicide load during cultivation, with the goal of producing ecologically clean grain for a healthy diet.
1. Introduction
1.1. Geographical position of the Central Federal District of Russia, achievements in the selection of wheat and the directions of its improvement
The Central Federal District of Russia (CFDR) is an area of more than 650,000 km², which includes 18 oblasts with the capital city of Moscow (Figure 1). The CFDR lies within the Atlantic-continental climatic region of the north temperate zone. It is characterized by a not too cold winter and a warm, but not excessively hot, summer. The lowest temperatures are observed in January: on average from −8 to −12 °C. Summer temperature ranges from 18 to 20 °C. The average duration of the frost-free period is 125-140 days, and the sum of the effective temperatures is 1800-2300, which allows most of the cereals, potatoes, vegetables, fodder grasses and flax to be successfully cultivated in the CFDR. The average annual precipitation is 450-600 mm [1] (http://studopedya.ru/2-68711.html). This economic region includes about 16,000,000 ha of arable land, in which winter and spring wheat are the leading crops. The area under these crops is 3,600,000 and 620,000 ha, respectively. Traditionally, until the late 1960s of the twentieth century, rye was grown in this region, being less demanding of the fertility of sod-podzolic soils. However, rye gave way to wheat due to the efforts of breeders. Breeders P. Lukyanenko (Bezostaya 1, Karlik 1), V. Remeslo (Mironovskaya 808), G. Lapchenko (PPG-1, PPG-186), E. Varenitsa (Zarya), E. Nettevich (Moskovskaya 35, Priokskaya, Lada) and B. Sandukhadze (Inna, Galina, Moskovskaya 39, Nemchinovskaya 17) made a great contribution to the creation of wheat cultivars.
Modern cultivars of bread wheat, derived in the Moscow Scientific Research Institute of Agriculture "Nemchinovka" (Moscow Sc. Res. Inst. of Agr. "Nemchinovka"), are characterized by high winter hardiness, productivity and grain quality. They are cultivated according to intensive technologies with the application of mineral fertilizers up to 150 kg N, 120 kg P2O5 and 150 kg K2O, against the background of manure and annual grasses. Seed treatment before sowing and threefold application of fungicides, herbicides, insecticides and growth regulators per season ensure a yield of spring wheat up to 7 tons/ha and of winter wheat up to 10 tons/ha. However, these cultivars are susceptible to the majority of fungal diseases common in this zone (powdery mildew, leaf rust, stem rust and Septoria leaf spot). To date, only one cultivar of winter wheat, Nemchinovskaya 24, with two resistance genes Lr9 and Lr46, has genetic protection against leaf rust. Therefore, the development of cultivars with increased immunity to fungal diseases is a pressing task for the CFDR. Extensive transport and economic relations in a globalized world do not exclude the importation of quarantine diseases into our territory, for example, the aggressive race of stem rust Ug99. The worsening of the phytopathological situation requires increased research in this area, especially in recent years, when cases of crop damage caused by stem rust have occurred more frequently (2010, 2013, 2016 and 2017). The aim of this study was to identify sources and donors of resistance to stem rust, including the race Ug99, from the VIR and "Arsenal" collections, and to create on their basis initial material of spring and winter wheat with durable resistance to stem rust.
2. The development of the initial material of spring and winter wheat with several Sr genes resistant to Puccinia graminis f. sp. tritici
The modern phytopathological stem rust situation in CFDR and possible threats
The situation in the CFDR reflects the general trend observed in the populations of P. graminis in all areas of pathogen distribution: the fungus actively evolves. Differences concern only the speed and the virulence genes of the pathogen, depending on the geographical location. In the case of Ug99 (TTKSK), the process is very fast (in 18 years, 13 biotypes of the fungus appeared); on the other hand, in the territory of the CFDR, the change of the dominant races took place over 57 years. The phytopathological situation is complicated by the proximity of the CFDR to European countries, where aggressive races of P. graminis have been identified recently. Six races (TKTTF, TKKTF, TKPTF, TKKTP, PKPTF and MMMTF) were retrieved from 48 isolates obtained from the P. graminis population in 2013 in Germany [2]. The detection of the TKKTP race causes concern because of its virulence to the Sr24, SrTmp and Sr1RSAmigo genes, although it has been determined that none of these races belongs to the race group TTKSK (Ug99), and the German isolates of the TKTTF race are phenotypically different from the TKTTF race that caused the plant disease epidemic in Ethiopia in 2013/2014. It is known that 55% of North American and international cultivars and selection lines resistant to the race TTKSK (Ug99) are susceptible to the TKKTP race [2]. On the Italian island of Sicily, a new race of stem rust, TTTTF, hit several thousand hectares of durum wheat in 2016, leading to the largest outbreak of stem rust in Europe in recent decades. TTTTF is a newly identified race of stem rust that may soon spread over long distances along the Mediterranean basin and the Adriatic coast [3] (http://www.fao.org/news/story/en/item/469467/icode/). The analysis of the racial composition of P. graminis f. sp. tritici in the CFDR has been carried out annually since 1960. During this time, significant changes occurred in the composition of the dominant races. In the 1960s-1970s, the population of stem rust included physiological races 21, 17 and 34 according to Stakman's nomenclature [4]. Races 11 and 14 were detected regularly but were not widely distributed. In the 1960s-1970s, only the resistance genes Sr7b and Sr9g were completely ineffective. Virulence to the genes Sr5, Sr21, Sr9e, Sr11, Sr6, Sr8a, Sr36, Sr9b, Sr30 and Sr17 was low or absent [5]. In subsequent years, fungal pathotypes virulent to the resistance genes Sr5, Sr21, Sr6, Sr8a and Sr17 appeared. Races of the pathogen MKCT, MKCK, MKBK, MKBS, MKBT, RKCT and RKBS dominated in the CFDR in 2004 [6]. During this period, the Sr9e, Sr11, Sr36, Sr9b and Sr30 genes were effective. Over the past decade, the structure of the population in terms of virulence has changed toward the predominance of several aggressive virulent races, including races that are virulent to the genes Sr5, Sr21, Sr9e, Sr7b, Sr6, Sr8a, Sr9g, Sr36, Sr30, Sr9a, Sr9d, Sr10 and SrTmp. Among samples from the European part of the Russian Federation, the stem rust races MKBT and MRLT dominated in 2002, and TKNT, TKST and TTNT in 2005 [7]. The race composition of P. graminis f. sp. tritici populations in the CFDR in the period 2000-2009 is presented in the work of Skolotneva et al. [8]. They analyzed 387 isolates of the fungus using the North American set of differentiators. Samples were obtained from cereals (wheat and barley), wild herbs and barberry. As a result of the study, 45 races of P. graminis f. sp. tritici were identified. The predominant races TKNT and TKNTF were isolated. The Ug99 race and its derivatives were not found in the Russian Federation.
According to the data obtained in 2013 [9], when assessing the collection of lines with known Sr genes, the effective genes of resistance to stem rust in the CFDR were Sr2, Sr9e, Sr13, Sr25, Sr26, Sr31, Sr32, Sr36, Sr44 and SrWld, and the gene combinations Sr17 + Sr13 and Sr31 + Sr38. According to the data of Skolotneva et al. [8], the resistance genes Sr9e and Sr36 were ineffective in the Central region of Russia in 2009. Differences in the data are probably due to a change in the composition of the pathogen population. Thus, in the same work of Skolotneva, a change in the percentage of fungal isolates virulent to the Sr17 gene was noted, from 92.5% in 2000 to 0% in 2008, while the Sr31 and Sr24 genes remained effective against all local races of stem rust. Observations of pathogen development in the period 2013-2017, conducted in the All-Russian Research Institute of Phytopathology, showed the annual development of stem rust. The development of the disease on the susceptible genotype Khakasskaya reached 100% in 2017 [10].
Development of bread wheat lines with several resistance Sr genes
The process of creating the initial material of bread wheat with several resistance genes proceeded in several stages. First stage: Isolation of resistance sources at the seedling stage; evaluation of the spring bread wheat line against the background of natural Ug99 infection in Ethiopia. Second stage: Identification of resistance Sr genes using specific molecular markers and isolation of resistance donors. Third stage: Selection of pairs for crossing and hybridization of donors among themselves, obtaining the segregating F2 hybrid population. Fourth stage: Backcrossing to one of the recurrent parents or stepwise hybridization of individual plants with a third parent in the field against an infectious background of leaf rust; subsequent self-pollination or repeated backcrossing, with testing of the progeny against the infectious background of leaf rust. At this stage, work with spring and winter plants was carried out in parallel on different plots and with different planting times. Fifth stage: Selection of individual plants by morphotype with parallel identification of Sr genes. Sixth stage: Testing the progeny of individual spring wheat plants against the infectious background of the North Caucasian and West Siberian populations of stem rust, and of winter wheat plants against the North Caucasian population of stem rust (infectious background) and the natural epidemic development of stem rust in the Moscow Oblast. Seventh stage: Evaluation of the economically valuable traits of selected stable lines under Moscow Oblast conditions in comparison with standard cultivars, and selection of the best genotypes for competitive testing. Schematically, the process of breeding lines with several effective Sr genes is represented in the following sections.
First stage
The identification of sources of resistance to the Ug99 race of stem rust was started by an employee of FSBSI VIZR, Anna Anisimova, together with scientists from the University of Minnesota (USA) in 2010. At the seedling stage, 386 accessions of bread wheat from the VIR collection and the "Arsenal" collection of the Moscow Sc. Res. Inst. of Agr. "Nemchinovka" were evaluated; six accessions of winter wheat and one accession of spring wheat with resistance to this disease were selected (reaction type to pathogen penetration from 0 to 2) [11]. These include the selection line GT 96/90 (hereinafter referred to as line 96) from Bulgaria with genetic material of the species T. timopheevii and the winter wheat cultivar Donskaya polukarlikovaya (hereinafter referred to as D), in the pedigree of which Aegilops tauschii was present (accessions from the VIR collection). From the "Arsenal" collection, lines with translocations from Ae. speltoides were selected: 9/00w (2n = 42); the disomic addition lines of Ae. speltoides 19/95w and 141/97w (2n = 44); and the wheat-Ae. speltoides-rye line 119/4-06rw (2n = 42), hereinafter referred to as line 119. The only stable accession of spring wheat, 113/00i-4 (2n = 42) (in the text, accession 113), obtained from crossing the spring cultivar Rodina with irradiated pollen of the species Ae. triuncialis [12] and then crossed with a line carrying genetic material of T. kiharae, showed immunity to natural infection by stem rust race Ug99 in Ethiopia at the adult plant stage [13].
Second stage
Identification of resistance Sr genes was carried out using molecular markers both for genes effective against the Ug99 race (Sr2, Sr22, Sr24, Sr25, Sr26, Sr32, Sr35, Sr36, Sr39, Sr40, Sr44, Sr47) and for ineffective ones (Sr9a, Sr15, Sr17, Sr19 and Sr31) that still provide resistance to local populations of the pathogen. The list of molecular markers is given in Table 1. For each primer, the optimal PCR conditions were selected, with the conditions given in the original studies taken as the basis. Separation of amplification products was performed by electrophoresis in 2% agarose and 8% polyacrylamide gels stained with ethidium bromide at a voltage of 100 V for 3 h in 0.5× TBE buffer. As molecular weight markers, the 50 bp, 100 bp and 1 kb GeneRuler DNA Ladders from Fermentas were used. The results of gene identification in the new sources are given in Table 2 [32].
Table 1. Specific primers used to identify Sr genes (Sr gene, chromosome, marker, references).
We explain the wide range of genes identified in donors from the "Arsenal" collection by the multiple alien translocations of genetic material from the species Aegilops speltoides, Ae. triuncialis, Triticum kiharae and Secale cereale arising during pollen irradiation, and in the selection line GT 96/90 by the presence of translocations from the species T. timopheevii. The use of such donors, even in paired crosses, can lead to the creation of plant forms with unusual combinations of resistance Sr genes due to gene recombination in meiosis. However, since we were faced with the task of obtaining initial material for the selection process, we had to take into account not only the ploidy level of the donors but also the presence of economically valuable traits. It should be noted that despite the positive identification of the Sr22 gene in the wheat-Aegilops lines (9/00w, 141/97w and 119/4-06rw) using the Xbarc121 and Xcfa2123 markers, the absence of T. monococcum genetic material in the pedigree of these lines leaves doubt about the presence of this gene.
Third stage
We rejected the use of disomic addition lines with chromosomes of Aegilops speltoides when selecting pairs for crossing, since the added alien chromosome with which we associate resistance was rarely conjugated with wheat chromosomes and was lost during division in meiosis. The remaining donors had a euploid number of chromosomes, but differed in morphophysiological and agronomic characteristics (time of ear formation, height, susceptibility to powdery mildew). The D cultivar and the GT 96/90 line had a very short stem (60-70 cm), early ear formation (late May to early June) and were affected by powdery mildew to a high degree (severity 30-50%), while remaining resistant to leaf rust. Donors from the "Arsenal" collection, on the contrary, were characterized by later ear formation and a long stem, but high resistance to powdery mildew and leaf rust. Parent pairs for crossing were selected with alternative expression of traits: (short stem, early ear formation, susceptibility to powdery mildew) × (long stem, later ear formation, resistance to powdery mildew). The first crossings were conducted in 2010 in the greenhouse. The following pairs of direct crossing and backcrossing were successful: (GT 96/90 × 113/00i-4), (119/96rw × GT 96/90) and (113/00i-4 × 119/96rw). In the conditions of the greenhouse, the D cultivar was found to be the earliest ripening, and it was not possible to hybridize with it because of the mismatch of the flowering periods. Later, this cultivar was used in stepwise hybridization. F1 plants were also grown in the greenhouse. The fact that future F2 populations from crosses of winter lines with the spring line 113/00i-4 would segregate into winter and spring genotypes was taken into account when planning crossings. Crossing with the productive wheat-Ae. speltoides line 145/05i (grain weight per ear 1.9 g, 1000-grain weight 49.0 g), which had group resistance to powdery mildew and leaf rust but was susceptible to Ug99, was additionally planned in order to shift the formative process toward the isolation of productive spring plant forms.
Fourth stage
Beginning with F2, the work with spring and winter plant forms was carried out against the infectious background of leaf rust at different sowing times. Half of the seeds were sown in February in a heated plot after snow melting. After the emergence of shoots, the heating of the soil was switched off, and the plants passed vernalization at natural low temperatures under natural snow cover. In this case, spring plants perished, and winter plants formed the ear. The second half of the seeds were sown in the field in spring. Under these conditions, spring plants formed the ear, and winter plants remained in the tillering phase. Backcrossing of plants resistant to leaf rust, beginning with F2, was conducted with the recurrent spring parent or line 145/05i (when working with spring genotypes) and with the winter recurrent parent or the D cultivar (when creating winter wheat lines). The infectious background of leaf rust was created using all races characteristic of the Moscow Oblast. For further hybridization, only plants resistant to leaf rust were selected. The second backcrossing or self-pollination was carried out under greenhouse conditions, the progeny was sown on the appropriate soil background, and the process of backcrossing against the infectious background of leaf rust was repeated. Then self-pollination of the plants was carried out. The scheme of the selection process for obtaining spring and winter lines with several resistance Sr genes is shown in Figure 2.
Fifth stage
In the progeny of self-pollinated plants, which were sown as lines of different generations (BC1F3, BC2F2, BC3F2, F4, F5), individual plants were selected by morphotype with parallel identification of Sr genes by PCR analysis. During the selection, attention was paid to the habitus of the plant (bush form, number of productive shoots), the location of the leaves, the shape of the ear and the presence of marker morphological features (awns and anthocyanin on different parts of the plant such as the stem, ear and anthers); the date of ear formation and the degree of severity of infection by powdery mildew and leaf rust were also taken into account. Preference was given to plants with group resistance to diseases, optimal plant height (80-110 cm), early ripening and an ear with 19-21 developed spikelets; that is, the selection of individual plants was not accidental but aimed at combining economically valuable traits. From such plants, a piece of leaf was taken to isolate DNA and identify Sr genes. In total, 200 spring plants and more than 200 winter plants were selected for PCR analysis. The spectra of identified effective Sr genes in spring and winter plants differed.
In spring plants, the genes Sr2, Sr44, Sr36 and Sr40 were found most often in the homozygous state (71, 89, 78 and 26%, respectively, of the number of plants tested). The Sr22 gene, which was originally identified in the winter donor 119/4-06rw using the two markers Xbarc121 and Xcfa2123, was detected in the progeny of selected spring plants at a frequency of 20% when this donor was used for backcrossing and the resulting progeny was self-pollinated. The Sr39 and Sr47 genes were rare, with frequencies of 4.4 and 1.4%, respectively. After PCR analysis, 137 individual plants with several Sr genes in the homozygous state were selected from the 200 spring plants: 54 plants with two resistance genes, 64 with three, 15 with four and 4 with five genes.
In individual winter plants, selected from the hybrid population represented by families F3, BC1F2, BC1F3, BC2F3 and BC3F2 of different origin, eight genes were identified, which can be ranked by frequency of occurrence in the progeny as Sr2 > Sr44 > Sr32 > Sr36 > Sr22 > Sr31 > Sr47 > Sr39 and Sr40. The combination spectrum of the identified genes in winter wheat plants differed from the spectrum of genes identified in the spring wheat lines. This is connected with the orientation of the backcrossings conducted in winter and spring wheat.
The combinations of Sr genes present in the winter wheat genotypes are more diverse.
The presence of the Sr32, Sr39, Sr40 and Sr44 genes, which are poorly studied with respect to other Pgt races and rarely used in selection programs, together with the adult-plant resistance gene Sr2, which confers a "slow rusting" effect, gives particular value to the selected winter plants. However, the presence of the recessive Sr2 resistance gene in the heterozygous state in most winter wheat plants will require additional efforts to transfer it to a homozygous state. In particular, we have planned experiments on the production of dihaploid lines using the androgenesis method. Individual plants with an identified genotype of resistance to stem rust differed greatly in height (75-145 cm), ear productivity (1.0-2.7 g), 1000-grain weight (36-60 g) and morphological features. For further testing in infectious nurseries of stem and leaf rust, 373 individual winter wheat plants were selected: 199 plants with identified Sr genes and 174 plants selected for a set of other economically valuable traits. From the populations of spring wheat, only 198 spring plants were selected for further testing: 129 plants with identified Sr genes and 69 plants with a set of valuable traits [33]. One hundred and fifty-eight lines of spring wheat (or 81% of the number of studied lines) showed high resistance to infection (0R) by the North Caucasian population of stem rust, and 160 lines were resistant to leaf rust.
Testing of the same set of spring lines in Western Siberia (Omsk), where they were sown at a deliberately late spring date (late crops are more affected than those sown at the optimal time), led to the death of some lines, but from the 167 surviving lines, 111 lines (66.5%) with resistance to stem rust were selected. In the year of testing (2015), a strong epidemic of stem rust was observed in the region. Under these conditions, according to the observations of the researchers, only a small group of genes was effective (Sr2, Sr9e, Sr11, Sr12, Sr13, Sr19, Sr24, Sr25, Sr26, Sr27, Sr30, Sr31, Sr35, Sr37), but none of these genes provided full protection against the disease. The severity on lines with known resistance genes varied from 5 to 30%, in comparison with 50-60% severity on cultivars without effective genes [34].
Stable lines with group resistance to stem and leaf rust, selected under such harsh conditions, are valuable initial material for the selection of spring wheat in this region. Structural analysis performed in comparison with the standard cultivar Omskaya 37 allowed 20 lines with the smallest decrease in productivity under the unfavorable dry conditions of Western Siberia to be selected. In 2016, these lines were involved in crosses with the best adapted varieties cultivated in this region (Shamanin, personal communication).
In the Moscow Oblast in 2015, no development of stem or leaf rust was observed, even on the highly susceptible line Khakasskaya, because of weather conditions unfavorable for the development of these pathogens (low air humidity, lack of dew, strong wind). However, in the Moscow suburbs, the spring lines were evaluated for resistance to powdery mildew. After that, the results of line evaluations at the three geographic locations were combined, and genotypes that showed resistance simultaneously to leaf and stem rust in Krasnodar and Omsk and resistance to powdery mildew in the Moscow Oblast (71 genotypes) were selected. In 2016, under the conditions of epidemic development of stem rust in the Moscow Oblast, after negative selection for resistance to diseases, timing of ear formation, height and the presence of segregation by morphological features, 40 genotypes were retained for further tests.
After the statistical evaluation of the productivity elements (grain yield from 0.3 m², ear productivity, 1000-grain weight), the 25 best genotypes with a set of economically valuable traits were selected (see the seventh stage).
Winter wheat
The progeny of winter wheat plants was tested in two geographical locations: the Moscow Oblast, against the natural but epidemic course of stem rust in 2016, and Krasnodar, against the infectious background of the North Caucasian population of stem rust. In the Moscow Oblast in 2016, favorable conditions for the epidemic development of stem rust arose on wheat crops. The focus of the disease arose on winter wheat in the phase of milk ripeness of the grain and then switched to spring wheat. The disease affected the standard winter wheat cultivar Moskovskaya 39 by 40% with an infection response type of 3-4, and allowed a clear differentiation of the genotypes among the sown source material on the basis of resistance, as well as the evaluation of the spring wheat line collection with known resistance Sr genes for the effectiveness of individual genes in the Moscow Oblast.
When assessing the collection of lines with known Sr genes in 2016, it was found that, compared to 2013, the spectrum of genes effective against this disease had narrowed, which indicates possible mutational processes in the fungus population or various sources of the plant disease epidemic. Whereas in 2013 the following genes were effective: Sr2, Sr9e, Sr13, Sr22, Sr25, Sr26, Sr28kt, Sr30, Sr31, Sr32, Sr36, Sr44, SrWld and the combinations Sr13 + Sr17 and Sr31 + Sr38, in 2016 only lines with the following Sr genes showed high resistance (severity 0) or resistance (up to 1% severity with a reaction type of 1 point): Sr28kt, Sr30, Sr31, Sr32 and SrWld, while lines with the Sr9e, Sr17, Sr25, Sr26, Sr33 and Sr40 genes showed moderate resistance (from 5 to 20% severity with a reaction type of 2 points).
The evaluation of the created winter wheat lines for fungal diseases showed high resistance of most genotypes to the local populations of leaf rust and stem rust and to powdery mildew. Only 14 out of 373 sown lines (about 4% of the genotypes) were susceptible to P. graminis of the Moscow population or segregated for this trait. Even more lines (98.7%) were resistant to P. triticina. In the test material, there were 147 lines resistant to powdery mildew with severity up to 10% (Table 3). One hundred and thirty-six lines with group resistance to all three diseases were selected.
The evaluation of 367 winter wheat lines in the Krasnodar Krai made it possible to isolate 146 immune lines (severity 0) and 22 resistant lines (up to 5% severity, reaction type of 1-2 points), that is, 46% of the genotypes showed resistance to the North Caucasian population of stem rust. By comparing the results obtained in the Moscow Oblast and in the Krasnodar Krai, 50 genotypes that showed stability in both geographically remote locations were selected.
Seventh stage
Evaluation of the economically valuable traits of selected stable lines in the Moscow Oblast conditions in comparison with standard cultivars, selection of the best genotypes for competitive testing.
Spring wheat
During the selection of spring genotypes, we were guided by such characteristics as earlier (43-46 days) or simultaneous ear formation relative to the standard spring cultivar Lada, the optimum plant height (up to 110 cm), the grain mass per ear (1.6-2.6 g) and the 1000-grain weight (45-50 g). During the selection of winter lines, the overwintering of the lines was taken into account, and we also oriented toward the listed characteristics and compared them to the standard winter cultivar Moskovskaya 39. The reliability of the differences in the indices (ear productivity, 1000-grain weight, height) was estimated from the results of a single-factor analysis of variance using the "Agros" statistical analysis software [35]. Protein and gluten content in the grain was determined on a SpectraStar 2400 infrared analyzer for the productive lines with large grain. The gluten content in the flour was analyzed on a Perten Glutomatic device, and the quality of gluten on an IGD-3 M (a measuring instrument of gluten deformation). Other indicators of flour quality (strength and dilution) were determined on an alveograph and a farinograph. The main physiological trait of the selected lines is group resistance to fungal diseases (leaf and stem rust, powdery mildew) and the presence of several identified genes of resistance to stem rust that should provide durable resistance to the Pgt population in the Central Federal District of the Russian Federation and in the territory of Western Siberia. The distinctive morphological feature of the majority of the lines is the presence of anthocyanin in the pericarp of the grain, which causes the grain to acquire different degrees of coloration (from dark red to dark purple). As a rule, lines with purple grain also show anthocyanins on other organs (stems, ears, anthers). As stated earlier, the 25 best genotypes with a set of economically valuable traits were selected among the spring progeny; these were evaluated in 2017 in the control nursery for resistance to diseases, grain harvest from the plot, grain nature and grain quality. The control nursery was laid out in triple replication in the conditions of the Moscow Oblast (area of the registration plot 1.5 m²). All tested lines of spring wheat confirmed their high resistance to stem and leaf rust, but none of the lines exceeded the standard cultivar Lada in grain yield from the plot. Only three accessions (11-17, 21-17 and 23-17) out of 25 produced a crop that was not inferior to this standard. The second standard cultivar, Zlata, was strongly affected by stem and leaf rust (up to 70%) and formed a yield significantly lower than Lada and some of the tested lines (Table 4). Some of the selected lines, when compared with the standard cultivars, look attractive in terms of the number of days to ear formation, which was reduced by 1-2 days, and height (lines 1-…). According to the results of complex assessments, seven genotypes were selected for the evaluation of grain quality (Table 5) and ecological testing in the CFDR (Moscow Oblast, Vladimir Oblast, Tula Oblast). After the results of the environmental test, which is planned for 2018, the best prototype of the cultivar will be sent to the State Test and determination of the cultivation regions.
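For readers who want to reproduce the significance test without the "Agros" package, a minimal one-way analysis of variance can be run in Python with SciPy. The trait values below are purely hypothetical and only illustrate the call; they are not data from this chapter.

```python
from scipy import stats

# Hypothetical 1000-grain weights (g) for two breeding lines and a standard
# cultivar across three replicate plots; the numbers are illustrative only.
line_a   = [47.2, 48.9, 46.5]
line_b   = [52.1, 50.8, 51.6]
standard = [44.0, 45.3, 43.7]

# One-way ANOVA: tests whether the group means differ significantly.
f_stat, p_value = stats.f_oneway(line_a, line_b, standard)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> means differ
```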
The analysis of grain and flour samples in 2017 is presented in Table 5. The grain has a good nature, corresponding to the first class, and a high 1000-grain weight (see Table 4). Almost all the lines have increased grain hardness; by protein content in the grain they correspond to the first class (>14.5%), and by gluten content in the grain to the second class (>28%). This allows us to attribute them to the group of strong wheats and use them in mill grist to improve lower quality grain. The high gluten content in the flour characterizes it as a premium product. However, the quality of the gluten is characterized as satisfactorily weak according to the readings of the gluten deformation measuring instrument (third group). Only one sample (8-17) corresponds to the second group for gluten quality. The strength of the flour (245), determined on the alveograph, allows it to be attributed to the good filler group, and according to the dilution factor of the dough (80), the flour is at the standard level and corresponds to valuable wheat. The results of the baking test show that the volume yield of tin bread of this line exceeds the standard, and the color and porosity of the crumb are not inferior to it. Due to the high content of protein and gluten in the flour of the other lines, one should also consider another purpose for them, for example, making flour confectionery products, where satisfactorily weak gluten is required (GDI > 85).
According to the data available in the literature, grain of cereals with anthocyanin coloration has increased antioxidant activity [36]. However, when premium flour is obtained, the colored shells of the grain go into the bran. An attempt was made to use whole-wheat flour of purple-grain bread wheat lines with a high content of antioxidants (up to 70 mg/100 g) in confectionery technologies. The whole-grain flour had an increased water-holding capacity, far exceeding the control. However, according to their technological properties, the samples were inferior to the standard: the dough formed worse, crumbled and was less amenable to laminating. The best technological properties, closest to the standard, were shown by samples of lines 8-17, 9-17 and 17-17. Sample 17-17 was distinguished by the presence of large bran, which hindered the formation of the dough. The baked sugar cookies were better than the standard in terms of swelling in water (up to 78%) and specific volume (up to 0.76 g/cm³), compared to 58% and 0.62 g/cm³, respectively, for the standard. The structure of the cookies from all the samples was more crumbly and fragile than that of the standard, and the organoleptic properties (taste, smell, appearance, cross-sectional texture) were at the standard level. Using flour from whole grains under industrial conditions will make it possible to obtain pastry with a high yield of products suitable for healthy eating.
Winter wheat
From the 373 winter wheat lines created during the experiment, 137 were selected for further testing in breeding nurseries of the Moscow Oblast. This group also included 49 stable genotypes, which were selected during the study in the Krasnodar Krai. Table 6 shows the diversity of the best winter wheat lines by the identified resistance Sr genes and some economically valuable traits in comparison with the standard cv. Moskovskaya 39 in the Moscow Oblast. Among the winter genotypes, it was possible to select lines that formed the ear 2-8 days earlier than the standard and had a shorter stem than the standard cv. Moskovskaya 39. Both attributes are of selective importance for the Central Federal District of the Russian Federation, and breeders tend to create early ripening, short-stemmed analogues of productive cultivars. This is due to the climatic conditions of the CFDR: abundant rainfall with wind during the ripening of cereals leads to lodging and crop losses. A thick, stiff, short stem provides resistance to lodging. Most of the created lines form large grain with a 1000-grain weight of 46-60 g and ear productivity at the standard level. Several lines (86-16, 48-16) were selected that are superior to the standard cultivar in ear productivity (grain mass per ear of 2.7 and 2.4 g, respectively). Preliminary evaluation of the lines by grain quality (protein and gluten content in the grain) on an infrared analyzer showed increased values of these parameters in comparison with the Moskovskaya 39 cultivar, which is a quality standard in the non-Chernozem zone of the Russian Federation. The protein content in the grain of the isolated lines ranged from 15.2 to 20.2%, and the gluten content from 29.7 to 41.5% (cv. Moskovskaya 39 had 17.6% protein and 31.4% gluten in the grain). An additional assessment of the gluten content in flour, carried out on the Glutomatic device, confirmed such a high gluten content in the selected lines (37-61.3%), but the quality of gluten of most lines corresponded to the third class (instrument GDI units 92-114). Such gluten is characterized as satisfactorily weak. Flour with such indicators is used in the confectionery industry for baking biscuits and cookies.
Selected winter wheat lines will have to undergo tests at the control nursery in the Moscow Oblast, and then environmental testing at three geographical locations, before they receive the status of the prototype of a new cultivar.
Conclusion
During the period 2010-2017, initial material of spring and winter wheat that differs fundamentally from the wheat varieties obtained to date was developed in the Moscow Scientific Research Institute of Agriculture "Nemchinovka". Namely, for the first time, prototypes of cultivars with group resistance to the most widespread fungal diseases in the Central Federal District of Russia (leaf and stem rust and powdery mildew) were developed.
Resistance to stem rust is determined by the presence of 2-4 effective resistance genes, not only to the European but also to the North Caucasian and West Siberian Pgt pathogen populations.
Taking into account the presence of the APR gene Sr2 together with other effective genes Sr22, Sr32, Sr39, Sr40, Sr44 and Sr47, the lines can also have selection value for regions where the rust race Ug99 is common. The genetic diversity of the lines, as far as the spectrum of resistance genes is concerned, differs from that obtained earlier in world practice. The possibility of creating such genotypes in a short time is explained by the availability of original resistance donors having in their genealogy alien genetic material of related species (Aegilops speltoides, Ae. triuncialis, Triticum kiharae, Secale cereale, T. timopheevii, Ae. tauschii) and by the presence of several effective Sr genes in the donors, identified using specific molecular markers. An advantage of the donors used was the presence of other selection-valuable traits such as resistance to leaf rust and powdery mildew, early ripeness and a short stem. As a result of simple, stepwise and backcross crossings with subsequent self-pollination, hybrid populations were obtained from which individual plants were initially selected; on their basis, lines were obtained that were tested for resistance to stem rust at three geographical locations: Moscow, Krasnodar and Omsk. According to the results of progeny testing in breeding nurseries of the Moscow Oblast and the results of genotype resistance to stem rust at the three geographical locations, lines of spring and winter wheat were selected that form a crop at the level of standard cultivars without the use of chemical protection agents during cultivation. This technology makes it possible to obtain environmentally friendly products for a healthy diet. In fact, these are new prototypes of spring and winter bread wheat cultivars for the Central Federal District of Russia, which can also be used as donors of resistance to stem rust when improving wheat in other regions. These lines have some morphophysiological features such as the presence of anthocyanin on the stem, anthers and grain. According to the literature, the presence of anthocyanins gives the grain an increased content of antioxidants and increased resistance to unfavorable environmental factors. Technological evaluation of the grain from the created lines of spring and winter wheat showed an increased content of protein and gluten in the flour, which allows them to be classified in the group of strong wheats and used in mill grist to improve lower quality grain in baking. However, the quality of gluten in the new lines is characterized as satisfactorily weak. An attempt has been undertaken to define a different direction for the use of such grain in the food industry, taking into account grain coloring by anthocyanins, namely in the production of flour confectionery products (sugar cookies). The product from whole-wheat flour exceeded the standard baking in terms of swelling in water, volume, crumbliness and fragility, and in organoleptic indicators was not inferior to the standard. It is concluded that the use of whole-grain flour with increased antioxidant activity for baking confectionery products makes this grain suitable for healthy food (not only because of the lack of residual chemical protection agents, which are not used in cultivating such varieties, but also due to the presence of anthocyanins in the grain and their antioxidant properties). Taking into account the research conducted, a new direction in selection for the Central Federal District of Russia is defined: the development of spring and winter wheat varieties with group resistance to fungal diseases and with grain suitable for healthy nutrition.
Figure 2. Scheme of development of spring and winter wheat lines with several resistance Sr genes.
Figure 3. Identification of the Sr2 gene using the molecular marker Xgwm533 in winter plants 1-36: M - molecular weight marker of 50 bp ("Fermentas"); Sr2 - positive control Pavon76; K - negative control cultivar Saratovskaya 29. The arrow indicates a diagnostic fragment with a molecular weight of 120 bp. The amplification products were separated in 2% agarose gel. "+" - presence of the diagnostic fragment; "−" - absence of the diagnostic fragment; h - heterozygote.
Table 2. Results of the identification of Sr genes in resistance donors to stem rust [32] and their economically useful traits.
Table 3. The results of estimations of winter wheat lines for fungal diseases against the natural background of leaf rust, stem rust and powdery mildew development in the Moscow Oblast (2016).
Table 4. Variety of spring wheat lines from the control nursery for some qualitative and quantitative traits (2017).
Table 5. Indicators for the quality of grain, gluten and test baking of bread in spring wheat lines with different intensity of grain coloring (harvest of 2017).
Table 6. Some economically valuable traits of the winter wheat lines with identified genotype of resistance to Pgt. | 2018-12-26T10:48:33.971Z | 2018-08-16T00:00:00.000 | {
"year": 2018,
"sha1": "1642f8b3eb27de10c356c6b8b6cd750c790ff5db",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/60077",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8769c32554fc38333cbaef1d40ac256624e6d7c0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
239660123 | pes2o/s2orc | v3-fos-license | Secure Rotation Invariant Face Detection System for Authentication
: Biometric applications widely use the face as a component for recognition and automatic detection. Face rotation is a variable component and makes face detection a complex and challenging task at varied angles and rotations. This problem has been investigated, and a novel algorithm, namely RIFDS (Rotation Invariant Face Detection System), has been devised. The objective of the paper is to implement a robust method for detecting faces captured at various angles, and to achieve better results than known face detection algorithms. In RIFDS, the Polar Harmonic Transform (PHT) technique is combined with the Multi-Block Local Binary Pattern (MBLBP) in a hybrid manner. The MBLBP is used to extract texture patterns from the digital image, and the PHT is used to provide rotation-invariant characteristics. In this manner, RIFDS can detect human faces at different rotations and with different facial expressions. The RIFDS performance is validated on different face databases such as the LFW, ORL, CMU, MIT-CBCL and JAFFF face databases, and on Lena images. The results show that the RIFDS algorithm can detect faces at varying angles and at different image resolutions with an accuracy of 99.9%. The RIFDS algorithm outperforms previous methods like Viola-Jones, Multi-Block Local Binary Pattern (MBLBP), and Polar Harmonic Transforms (PHTs). The RIFDS approach has further scope, with a genetic algorithm, to detect faces (approximately) even from shadows.
Introduction
Face recognition is an important process for facial emotion recognition, face tracking, gender classification, multimedia applications, automatic face recognition, and many others [1,2]. Many algorithms have been proposed for face detection, but challenges remain with efficient and fast detection. Moreover, rotated face recognition remains a challenge in practical scenarios [9-11]. The rotation invariant detection capability of various methodologies is summarized in Tabs. 1 and 2. The Multi-Block LBP (MBLBP) [12] and Polar Harmonic Transform (PHT) [1,2] techniques alone are therefore not sufficient for fast detection under rotation. For the picture illustrated in Fig. 1a, the Viola-Jones algorithm [11] is not able to detect the rotated face. LBP [13] and HOG [1,2] features are also utilized to fetch facial features from the image, but they are not rotation invariant and are unable to detect the face in rotated images [14]. To address this problem, the paper proposes a Rotation Invariant Face Detection System (RIFDS) to detect the face at different angles of rotation [15]. RIFDS combines Polar Harmonic Transforms (PHTs) [1,2] with the Multi-Block LBP (MBLBP) [12] technique for fast and accurate detection of rotated faces. MBLBP is used to extract the texture features from different angles of the image, and the PHT [1,2] method is implemented to recognize the face from any angle. MBLBP [12] extracts the features from small blocks, and these features are more precise than the features extracted from a single image as a whole [16]. Thus the features extracted from small blocks of a single image are more detailed, which leads to more accurate results. RIFDS uses binary images to display the selected facial features. When a test image is uploaded, it is converted into a grayscale image because color increases the complexity through multiple color channels (like RGB and CMYK) [9]. RIFDS is tested on the face databases JAFFF, ORL, CMU, MIT-CBCL, and LFW. These databases contain images with different sizes (i.e., resolution), poses (i.e., face direction left, right, up and down), facial expressions (i.e., fear, joy, crying, anger, happiness, sadness, shyness), and rotations (i.e., rotated at different angles). The paper is structured in four main sections: Section 1 introduces the content of the article, Section 2 presents the proposed method, Section 3 validates it experimentally, and lastly, Section 4 concludes the paper.
Binary Images
A binary image uses two colors (black and white) and two pixel values, i.e., 0 and 1. A binary image with m rows and n columns has N = m × n pixels and is given by Eq. (1). Binary images display the extracted edges and other facial features in the Multi-Block LBP. When the LBP operator is applied to a digital image, detected edges are shown with white pixel values, and the rest of the image is the background. Different facial features are extracted from digital images by using the LBP operator, as shown in Fig. 1b, in which the extracted features (i.e., edges) are shown in white and the rest is the background.
Multi-Block Local Binary Pattern (MBLBP)
It detects faces from digital images through the concept of head and face boundary extraction. It can detect faces at a 15° angle (i.e., an image with a pose to the left or right side) and at 360° (i.e., a frontal face) [12]. It is also used to encode the intensity of rectangular regions using a local binary pattern [17]. LBP looks at nine pixels at a time (i.e., a 3 × 3 window of the image = 9 pixel values); comparing the 8 neighbors with the center yields 2^8 = 256 possible codes (see Fig. 2). MBLBP thus allows 256 different binary patterns to be formed for edge detection and face detection in images. The MBLBP operator is computed by comparing the central rectangle's average intensity, k_c, with those of its neighborhood rectangles {k_1, ..., k_8}. In this way, a binary sequence is generated. The MBLBP value is obtained by Eq. (2).
where k_c is the average intensity of the center rectangle and k_i (i = 1, ..., 8) are the average intensities of the neighborhood rectangles.
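To make the comparison in Eq. (2) concrete, the following minimal Python sketch computes a single MBLBP code from the average intensities of a 3 × 3 grid of blocks. The block size, neighbour ordering, and image handling are illustrative assumptions, not the authors' implementation.

import numpy as np

def mblbp_code(gray, x, y, s=3):
    """Compute the MBLBP code at (x, y) from a 3x3 grid of s-by-s blocks.

    The average intensity of each of the 8 neighbouring blocks is compared
    with the average intensity k_c of the central block (cf. Eq. (2)):
    a neighbour contributes 1 if its mean is >= k_c, otherwise 0.
    """
    def block_mean(cx, cy):
        return gray[cy:cy + s, cx:cx + s].mean()

    k_c = block_mean(x, y)
    # Neighbouring block offsets, ordered clockwise from the top-left block (assumed order).
    offsets = [(-s, -s), (0, -s), (s, -s), (s, 0),
               (s, s), (0, s), (-s, s), (-s, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if block_mean(x + dx, y + dy) >= k_c:
            code |= 1 << bit
    return code  # one of 256 possible binary patterns

# Example: MBLBP code of a 9x9 neighbourhood in a random image (scale s = 3).
img = np.random.randint(0, 256, (64, 64)).astype(float)
print(mblbp_code(img, 10, 10, s=3))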
Polar Harmonic Transforms (PHTs)
They are used for feature extraction and generate rotation-invariant features. If f(r, θ) represents a continuous image function on the unit disk D = {(r, θ) : 0 ≤ r ≤ 1, 0 ≤ θ ≤ 2π}, the PHT with repetition m and order n is given by Eq. (4).
The radial part R_n(r) of the transform is given by Eq. (6):
R_n(r) = cos(π n r^2) for the PCT, and R_n(r) = sin(π n r^2) for the PST.
With the help of PHTs, non-frontal faces are detected at different angles of face rotation (i.e., ±30°, ±45°, ±60°, ±90°, ±120°, ±135°, ±150°, 180°, ±210°, ±225°, ±240°, ±270°, ±300°, ±315°, ±330°, and ±360°).
Histogram of oriented gradients (HOG) features can also be used for face recognition under non-restrictive conditions [18]. HOG is a feature descriptor used in image and vision processing for face and object detection. The technique counts occurrences of gradient orientations in localized portions of the test image. It is comparable to edge orientation histograms, scale-invariant feature transform descriptors, and shape contexts. The major difference from other techniques is that it computes on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for better accuracy. Tab. 1 summarizes the ability of various face detection methods to detect rotated faces. It shows that Viola-Jones, HOG features, LBP features, and Multi-Block LBP features are not rotation invariant (i.e., unable to detect rotated faces), whereas Polar Harmonic Transforms (PHTs) are rotation invariant (i.e., able to detect rotated faces). Tab. 2 lists the features supported by the different methods used for the detection of faces.
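As an illustration of the radial kernels in Eq. (6), the following Python sketch evaluates R_n(r) and a single PHT moment on the unit disk. The discretization and the 1/π normalization are common choices assumed here and are not necessarily identical to the authors' implementation.

import numpy as np

def radial_kernel(n, r, kind="PCT"):
    """Radial part R_n(r): cos(pi*n*r^2) for PCT, sin(pi*n*r^2) for PST."""
    if kind == "PCT":
        return np.cos(np.pi * n * r**2)
    return np.sin(np.pi * n * r**2)

def pht_moment(img, n, m, kind="PCT"):
    """Approximate the (n, m) PHT moment of a square grayscale image on the unit disk."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    # Map pixel coordinates to the unit disk.
    xn = (2 * x - w + 1) / (w - 1)
    yn = (2 * y - h + 1) / (h - 1)
    r = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    mask = r <= 1.0
    basis = radial_kernel(n, r, kind) * np.exp(-1j * m * theta)
    # Discrete approximation of the disk integral (1/pi normalization assumed).
    pixel_area = 4.0 / (h * w)
    return (img * basis * mask).sum() * pixel_area / np.pi

# The magnitude of the moment is insensitive to image rotation, which is the
# property RIFDS exploits.
img = np.random.rand(64, 64)
print(abs(pht_moment(img, n=2, m=1)))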
Pre-Processing Framework
The RIFDS system combines two methods, PHT and MBLBP. MBLBP is used to extract texture patterns from the digital image, while PHT provides rotation-invariant characteristics [13]. This process is illustrated in Fig. 3. Here, a query image is selected from the sample data set in order to detect the rotated face. Then, pre-processing operations like morphological operators and classification are performed on the query image for fast processing. The query image is rotated at a 45° angle to make it ready for analysis. The facial entities (i.e., eyes, nose, and mouth) are selected as features from the modified image. Facial features are selected and extracted for training the face recognition system. Face detection is applied to the selected features. The rotated face is generated and finally cropped at a 45° angle. The sample dataset is chosen randomly.
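A minimal sketch of the pre-processing stage described above (grayscale conversion followed by a 45° rotation), written with OpenCV, is given below. The simple morphological opening is only a placeholder assumption for the paper's unspecified morphological operations.

import cv2

def preprocess(path, angle=45.0):
    """Load an image, convert it to grayscale, rotate it by `angle` degrees, and clean it up."""
    img = cv2.imread(path)                       # BGR image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(gray, M, (w, h))
    # Morphological opening as a stand-in for the paper's pre-processing step.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.morphologyEx(rotated, cv2.MORPH_OPEN, kernel)

# Example (hypothetical file name):
# query = preprocess("face.jpg", angle=45.0)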
Face Detection at Different Rotations
The PHT technique is used to detect faces at different rotations. PHT is robust to noise, has minimal information redundancy, and provides fast and accurate face detection at different angles. After selecting the test image and the rotation angle, initial morphological operations are applied to the image. The PHT computation and cascading are then performed. Finally, the detection of faces at various angles is achieved. The steps of the algorithm are shown in Algorithm 1, and the transform defined in Eq. (4) is computed by Eq. (8).
The image is reconstructed using the inverse transform function given in Eq. (10), where min and max are the minimum and maximum values of p and q for the PHT, and G'(x_i, y_k) is the reconstruction of the original image G(x_i, y_k). The mean square error for the image is computed by Eq. (11).
Facial Features Extraction and Detection
The Multi-Block LBP is used for facial feature extraction and detection. Initially, the test image is selected and, after rescaling, is processed by dividing it into blocks. Comparisons are made and binary numbers are obtained. With MBLBP and cascading of the facial extraction, the detection of faces is then performed. The local binary operator is used for the calculation of binary patterns in digital images. Extracted features of the input image are displayed using the binary image. The calculation of the local binary pattern is shown in Fig. 2: each neighboring pixel is compared with the center pixel; if the neighbor pixel value is greater than or equal to the center pixel value, a 1 is assigned, otherwise a 0. The steps to calculate the multi-block local binary pattern for facial feature extraction and detection are given in Algorithm 2. Figs. 7 and 8 show the detection of the face using Multi-Block LBP. In MBLBP, the feature extraction performance also depends on the number of blocks or the scale size used to form the filter from the operator. Its detection process is shown in Fig. 9. In MBLBP, s denotes the scale of the MBLBP operator. The feature extraction is implemented with different scales (3 × 3, 9 × 9, 12 × 12, and 21 × 21). By using different block sizes, it can be observed that a small scale, i.e., 3 × 3, works very effectively but costs more than the others. The medium-size filter (9 × 9) is computed efficiently and works very fast; it also copes better with noise present in the image. Large-size filters are easy to implement and cost less, but a large amount of discriminative information is lost. Tab. 3 shows the performance of MBLBP with different block numbers.
RIFDS Algorithm Description
The RIFDS approach to the face detection system is shown in Algorithm 3 and Fig. 10. It can detect faces at different angles of rotation, starting from ±30°, with high accuracy (see Fig. 18). In Fig. 20, the results on the LENA face dataset with an image resolution of 512 × 512 are shown. Fig. 21 shows the result analysis of the proposed algorithm along with the accuracy and time analysis. The face detection time comparison is shown in Tab. 7. Tab. 8 shows the comparison of RIFDS with PHT. In the PHT face recognition method, feature extraction is done from the complete image; one issue with this approach is that it does not extract the features from the rotated image. In the RIFDS approach, the features are extracted from small blocks of a single image using MBLBP, and PHT is then applied for face recognition. As shown in Fig. 20, the objectives of the paper have been achieved using the RIFDS technique. The algorithm achieves promising, comparable results, with an accuracy of 99.99%. For test images with angles from 30° to 180°, the results show better performance than the known algorithms and techniques.
Conclusions
This paper presents a new algorithm called the Rotation Invariant Face Detection System (RIFDS) to detect the face at different angles of rotation. It aims at fast and accurate detection of rotated faces by combining Polar Harmonic Transforms (PHTs) with the Multi-Block LBP (MBLBP). In the RIFDS approach, texture patterns are extracted from the image using MBLBP, and PHT is used to keep the rotation-invariant characteristics. The proposed face detection system is able to detect faces within a short time and at different angles starting from 30°. Two limitations remain. Firstly, if the scale of MBLBP is 3 × 3, it will not be able to acquire the primary features at a large scale; to solve this issue, the process is generalized to use the neighbors' information. The other is that when the transform is used without Bessel functions, no other radial kernel can be defined explicitly, which can sometimes increase the computational complexity if not defined properly. The technique was also tested for face detection at different image resolutions. It has been tested and verified that the proposed RIFDS technique can detect faces with different angles, facial expressions, and emotions speedily and accurately. The accuracy achieved is 99.99%; the margin of 0.01% is due to noise and external uncontrollable factors such as the calculating ability of the algorithm with respect to the significant figures of any numeric value. A futuristic extension of the algorithm is its use in the domains of automation, machine learning, and deep learning, through genetic algorithms, for face detection from shadows. The applications of the algorithm are in the areas of twin face recognition, object and shape recognition, video or live surveillance, face detection in incarnation, and medical image processing for tumor detection by focusing on the detection of malignant cells. | 2021-10-21T15:51:48.265Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "8a6f1f933611edb52207119181613b16114c5d1b",
"oa_license": "CCBY",
"oa_url": "https://www.techscience.com/cmc/v70n1/44447/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2ffbedf6921adb31446095abd1a85a9611d27c86",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
17455616 | pes2o/s2orc | v3-fos-license | Comparison of Hemoglobin Levels Before and After Hemodialysis and Their Effects on Erythropoietin Dosing and Cost
Background Hemoglobin levels measured after hemodialysis, as compared to hemoglobin levels measured before hemodialysis, are suggested to be a more accurate reflection of the hemoglobin levels between hemodialysis sessions, and to be a better reference point for adjusting erythropoietin dosing. Objectives The aim of this study was to compare the hemoglobin levels before and after hemodialysis, to calculate the required erythropoietin doses based on these levels, and to develop a model to predict effective erythropoietin dosing. Patients and Methods In this cross-sectional study, the hemoglobin levels of 52 patients with end-stage renal disease were measured before and after hemodialysis. The required erythropoietin doses and the differences in cost were calculated based on the hemoglobin levels before and after hemodialysis. A model to predict the adjusted erythropoietin dosages based on post-hemodialysis hemoglobin levels was proposed. Results Hemoglobin levels measured after hemodialysis were significantly higher than the hemoglobin levels before hemodialysis (11.1 ± 1.1 vs. 11.9 ± 1.2 g/dL, P < 0.001, 7% increase). The mean required erythropoietin dose based on post-hemodialysis hemoglobin levels was significantly lower than the corresponding erythropoietin dose based on pre-hemodialysis hemoglobin levels (10947 ± 6820 vs. 12047 ± 7542 U/week, P < 0.001, 9% decrease). The cost of erythropoietin was also significantly lower when post-hemodialysis levels were used (15.96 ± 9.85 vs. 17.57 ± 11.00 dollars/patient/week, P < 0.001). This translated into 83.72 dollars/patient/year in cost reduction. The developed model for predicting the required dosage is: Erythropoietin (U/week) = 43540.8 + (-2734.8) × Post-hemodialysis Hb* (g/dL). [(R2) = 0.221; *P < 0.001]. Conclusions Using post-hemodialysis hemoglobin levels as a reference point for erythropoietin dosing can result in significant dose and cost reduction, and can protect hemodialysis patients from hemoconcentration. The prediction of the erythropoietin adjusted dosage based on post-hemodialysis Hb may also help in avoiding overdosage.
Background
Anemia is a common problem in end-stage renal disease (ESRD), and insufficient production of erythropoietin (EPO) by the kidneys is considered to be one of its major causes (1,2). One routine approach to treating anemia in ESRD is the administration of erythropoiesis-stimulating agents (3,4); however, the high cost of these drugs necessitates their judicious use (5).
Both hemoconcentration caused by excessive use of erythropoiesis-stimulating agents and anemia are associated with some complications in ESRD patients (3). Anemia, especially with hemoglobin (Hb) levels less than 9 g/dL, can lead to symptoms which negatively affect quality of life, including low energy, fatigue, decreased physical functioning, and low exercise capacity. Anemia also increases the need for blood transfusions and further possible complications (3,6). On the other hand, hemoconcentration, especially with Hb > 13 g/dL, is also associated with adverse outcomes, including increased risk for stroke (7,8), hypertension (9), and vascular access thrombosis (10). Thus, it is vital to maintain Hb levels within a conventional target range (10 -11.5 g/dL) in ESRD patients by administering the appropriate amounts of EPO (3).
Most of the studies which have contributed to establishing a target Hb level have focused on pre-hemodialysis Hb and hematocrit (Hct) values (3). However, some other research has focused on post-hemodialysis Hb and Hct values, reporting a significant rise in Hb and Hct concentrations following a hemodialysis (HD) session, especially in the first 24 hours, which is then followed by a gradual decrease during the rest of the interdialysis period (11)(12)(13). In a more recent study, it was found that serum Hb levels measured at 4, 24, and 48 hours after an HD session were still elevated as compared to the pre-hemodialysis Hb level, whereas they did not differ significantly from the immediate post-hemodialysis Hb concentration (14). These findings suggest that in HD patients, the real Hb and Hct values are closer to the post-hemodialysis concentrations than to the pre-hemodialysis levels. Therefore, using post-hemodialysis Hb levels as the reference point for EPO dosage adjustments in HD patients is reasonable, as it results in a reduction of the required EPO dosages and their cost (14). Nevertheless, the amount by which an EPO dosage should be decreased and the resulting cost reduction may not be the same in different centers, and thus need further investigation.
Objectives
In this study, the pre-hemodialysis and post-hemodialysis Hb concentrations of patients on maintenance HD in a dialysis center in Shiraz were measured in order to calculate the decline in EPO dosage prescriptions and the subsequent cost reduction when using post-hemodialysis Hb levels as the reference point. A model was then developed to predict the adjusted EPO dosages according to post-hemodialysis Hb levels.
Patients and Methods
In this cross-sectional study, 52 patients aged 18 years or older undergoing hemodialysis at an outpatient center at Shiraz University of Medical Sciences were enrolled. The research was reviewed and approved by the ethics committee at Shiraz University of Medical Sciences and was performed in accordance with the declaration of Helsinki. All patients provided their informed written consent before enrollment in the study.
Participants were required to be on maintenance HD for more than three months. They underwent bicarbonate hemodialysis three times weekly with polysulfone membranes (1.8 -2 m 2 ). The dialysis time for each HD session was 240 minutes for all of the patients. All of the patients also received EPO therapy.
The exclusion criteria consisted of active infection, including hepatitis B, hepatitis C, or human immunodeficiency virus, active hematologic malignancy, and acute illness requiring hospitalization within three weeks prior to enrollment in the study.
Hb and Hct levels were measured before and after the first HD session of the week using an autoanalyzer. The prescribed EPO dosages were determined and the adjusted doses of EPO were calculated using the pre-hemodialysis Hb levels as the reference point. The hypothetically adjusted EPO dosages using the post-hemodialysis Hb levels as the reference point were also calculated. The reductions in EPO dosage for each patient and week were calculated, the cost reduction was estimated, and a model was then developed to predict the adjusted EPO dosages based on post-hemodialysis Hb levels. Weight was measured for each patient with the same digital scale both before and after the HD session in three consecutive sessions.
The primary outcome measurement of this study was an absolute change in Hb levels both before and after the HD session. Following a cohort pilot study in which mean pre-hemodialysis and post-hemodialysis Hb concentrations were measured (14), the sample size of 52 patients was used to detect the mean difference of pre-hemodialysis and post-hemodialysis Hb levels with a standard deviation of 1.1 g/dL, type I error of 5%, and precision of 0.3 g/dL.
Statistical analysis was performed using the SPSS version 16 (SPSS Inc., www.ibm.com/software/analytics/spss/products/statistics) statistical software package. Results for the quantitative variables are shown as means and standard deviations, and the results for the categorical variables are shown in terms of frequencies and percentages. Changes in the normally distributed parameters before and after the intervention were assessed with a paired t-test. The McNemar-Bowker test was used to assess the changes in the Hb level categories before and after the HD session in 3 × 3 square tables. In order to predict the adjusted EPO dosage, post-hemodialysis Hb and weight loss after the HD session were entered as covariates in a linear regression model, where EPO dosage was considered as the dependent variable. The stepwise method was used to detect the most influential covariates. P < 0.05 was considered to be statistically significant.
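For illustration, the following Python sketch reproduces the type of analysis described above, i.e., a paired t-test of pre- versus post-hemodialysis Hb and a simple linear regression of the EPO dose on post-hemodialysis Hb. The numerical arrays are hypothetical examples, not patient data from this study.

import numpy as np
from scipy import stats

# Hypothetical example data (g/dL); the real per-patient values are not published here.
hb_pre = np.array([10.8, 11.5, 9.9, 11.2, 12.0])
hb_post = np.array([11.6, 12.3, 10.5, 12.1, 12.7])

# Paired t-test for pre- vs. post-hemodialysis hemoglobin.
t_stat, p_value = stats.ttest_rel(hb_pre, hb_post)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Simple linear regression of prescribed EPO dose on post-hemodialysis Hb,
# analogous in spirit to the stepwise model described in the text.
epo_dose = np.array([12000, 9000, 16000, 10000, 8000])  # U/week, illustrative
slope, intercept, r, p, se = stats.linregress(hb_post, epo_dose)
print(f"EPO (U/week) = {intercept:.1f} + ({slope:.1f}) x Hb, R^2 = {r**2:.3f}")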
Results
There were 27 males and 25 females included in the study, with a mean age of 62 ± 15 years (range: 18 to 90 years). The baseline characteristics of the patients are listed in Table 1.
The mean post-hemodialysis Hb level was significantly higher than the mean pre-hemodialysis Hb level (11.9 ± 1.2 vs. 11.1 ± 1.1 g/dL, P < 0.001). The mean intradialytic percent variations (% delta) of the Hb and Hct levels were 7.0 ± 6.0% (range: -7 to 20) and 6.5 ± 5.6% (range: -6 to 19), respectively. The mean weight loss during HD was 2.26 ± 0.89 kg. According to the KDIGO clinical practice guidelines (3), using the pre-hemodialysis Hb concentrations revealed that 27 patients (51.9%) had adequate Hb levels (10 - 11.5 g/dL), while seven patients (13.5%) had low Hb levels (< 10 g/dL) and 18 patients (34.6%) had high Hb levels (> 11.5 g/dL). However, using the post-hemodialysis Hb levels, five out of the seven patients (71%) with low pre-hemodialysis Hb levels had adequate post-hemodialysis Hb concentrations, and 12 out of the 27 patients (44.4%) with pre-hemodialysis Hb concentrations within the KDIGO target had high post-hemodialysis Hb levels (Table 2, P = 0.001). Taking into account the patients who received more than 12,000 U/week of EPO, five out of six patients (83%) with low pre-hemodialysis Hb concentrations had post-hemodialysis Hb levels within the KDIGO target, and three out of 10 patients (30%) with adequate pre-hemodialysis Hb levels had high post-hemodialysis Hb concentrations (Table 3, P = 0.018).
The hypothetically adjusted EPO dosage was calculated using post-hemodialysis Hb levels as the reference point. If this EPO dosage was used, the mean required EPO units in a week would be significantly lower in comparison to the mean EPO dosage prescribed based on pre-hemodialysis Hb concentrations in routine practice (10947 ± 6820 vs. 12047 ± 7542 U, P < 0.001, 9% decrease).
After adjusting for weight, the prescribed EPO dosage could be reduced by 8.8% if the post-hemodialysis Hb was used as the reference point (EPO dosage according to pre-hemodialysis Hb, 204 ± 145 U/kg/week; EPO dosage according to post-hemodialysis Hb, 186 ± 134 U/kg/week). Finally, using post-hemodialysis Hb as the reference point of EPO dosage calculation results in a significant cost reduction: 17.57 ± 11.00 dollars/patient/week for the pre-hemodialysis Hb level vs. 15.96 ± 9.85 dollars/patient/week for the post-hemodialysis Hb level (P < 0.001) (15). Thus, this course of action could bring about savings of 83.72 dollars/patient/year, and for the 52 patients included in our study, this would result in savings of 4,353 dollars/year. Taking into account the at least 12,500 HD patients in Iran (16), the nationwide savings would be on the order of one million dollars per year.
Discussion
These results confirm that using post-hemodialysis Hb levels as the reference point for EPO dosage calculation causes significant reductions in dosages and cost. Reduction in the prescribed EPO dosages could have beneficial effects on HD patients because of its ability to prevent vulnerability to high Hb and Hct levels and complications during the interdialysis period, and its economic efficiency (14). In addition, a simple model has been developed to estimate the adjusted EPO dosage based on post-hemodialysis Hb so that overdosage of EPO can be prevented.
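Applying the published regression model is straightforward, as the short sketch below illustrates. The per-unit EPO price used for the cost estimate is an assumption back-calculated from the reported weekly costs and is not a value stated in the study.

def predicted_epo_dose(hb_post):
    """Adjusted weekly EPO dose (U/week) from the reported model:
    EPO = 43540.8 - 2734.8 * post-hemodialysis Hb (g/dL)."""
    return 43540.8 - 2734.8 * hb_post

# Illustrative unit price, roughly consistent with the reported ~16-18 dollars
# per patient-week for ~11000-12000 U/week (an assumption, not a study value).
PRICE_PER_1000U = 1.46  # dollars

hb_post = 11.9  # g/dL, mean post-hemodialysis level reported in the study
dose = predicted_epo_dose(hb_post)
print(f"predicted dose: {dose:.0f} U/week, "
      f"approx. cost: {dose / 1000 * PRICE_PER_1000U:.2f} dollars/week")
# The predicted ~11000 U/week is close to the mean adjusted dose reported above.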
Vlassopoulos et al. reported a significant rise in Hb and Hct following the HD session, which remained significantly elevated for at least 24 hours (11). Movilli et al. and Bellizzi et al. discussed similar findings in their studies (12,13). Furthermore, Castillo et al. reported increases of 6.1% and 5.8% in the Hb and Hct values, respectively, after the HD session (14). This result is similar to our findings, which indicate 7% and 6.5% rises in the Hb and Hct levels, respectively.
The normal hematocrit cardiac trial (NHCT), a study consisting of 1,200 HD patients with congestive heart failure or ischemic heart disease randomized into two groups with target Hct ranges of 42 ± 3% (the normal Hct group) and 30 ± 3%, was prematurely stopped by the data safety monitoring board because of concerns about the increased risk of cardiovascular disease and mortality in the normal Hct group. Three recent randomized controlled trials, the correction of hemoglobin and outcomes in renal insufficiency (CHOIR) (17), the cardiovascular risk reduction by early anemia treatment with epoetin beta (CREATE) (18), and the trial to reduce cardiovascular events with Aranesp therapy (TREAT) (8), showed that achieving a high versus a low Hb target by administering higher EPO doses was associated with an increased risk of myocardial infarction, stroke, and death in chronic kidney disease patients who had not undergone dialysis. In addition, a meta-analysis on anemic chronic kidney disease patients treated with erythropoietin suggested that a higher Hb target increases the risk of all-cause mortality, arteriovenous access thrombosis, and poorly-controlled hypertension (19).
In accordance with the KDIGO guidelines (3), when using post-hemodialysis measurements, most of the patients with low pre-hemodialysis Hb levels had adequate Hb levels, and some of the patients with a pre-hemodialysis Hb level within the KDIGO target also had a high Hb level. These changes are a result of the slow reequilibration process following the HD session (14) and can potentially lead to hemoconcentration in HD patients. Therefore, by using post-hemodialysis values as the reference point for EPO prescription, hemoconcentration-related complications can be reduced, including the increased risk of stroke (7,8), hypertension (9), vascular access thrombosis (10) and all-cause mortality (19) in a significant number of HD patients.
In conclusion, using post-hemodialysis Hb levels as the reference point for EPO administration can protect hemodialysis patients from hemoconcentration and can result in significant reductions in EPO dosages (8.8% U/kg/week) and cost (83.72 dollars/patient/year). Also, a simple model was presented to estimate the adjusted EPO dosage based on the post-hemodialysis Hb level to avoid EPO overdosage. The main limitations of this study are the small sample size and its cross-sectional design without follow-up. Future multicenter studies with larger sample sizes and longer follow-up durations are needed to examine the outcomes of using post-hemodialysis Hb levels as the reference point for EPO prescription. | 2018-04-03T02:10:33.659Z | 2016-06-29T00:00:00.000 | {
"year": 2016,
"sha1": "597b11ee8fdb13563ea71e354b10d6bff3f8995c",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc5045528?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "597b11ee8fdb13563ea71e354b10d6bff3f8995c",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219053656 | pes2o/s2orc | v3-fos-license | Effect of Insulation Layer Composite and Water Adsorption on Bonding Performance in Heat Barriers
Received: 21 August 2019; Revised: 19 October 2019; Accepted: 23 October 2019; Available online: 26 October 2019
The thermal insulation layer in solid rocket motors is a vital component during the rocket flight. Many factors can affect the performance of this insulation layer. The bonding property between the rocket propellant and the thermal insulation layer is examined in this study. Hydroxyl-terminated polybutadiene (HTPB) with isophorone diisocyanate (IPDI) as a curative was chosen as the most common type of rocket propellant. The effect of two types of polymeric insulation layer, nitrile butadiene rubber (NBR) and ethylene-propylene-diene monomer (EPDM), on the bonding performance at the interface between the (HTPB/IPDI) propellant and the respective insulation layer has been investigated. Results revealed that both types of insulation layer considerably decreased the interfacial bonding performance of the (HTPB/IPDI) propellant. NBR was proven to be more severe in weakening the adhesion strength than EPDM. We further investigated the effects of the thickness and water content of NBR on the bonding performance, and showed that the bonding strength was inversely proportional to the thickness and the water content.
Keywords: solid rocket motor; HTPB propellant; insulation layer; bonding property
Introduction
A rocket motor mainly consists of a shell, a thermal insulating layer, a liner, and the propellant, as shown in Figure 1. The liner is a special adhesive that bonds the thermal insulation layer, the shell, and the propellant. The main function of the liner is to prevent an unexpected increase of the burning surface of the propellant when the motor is working [1]. The manufacturing process of the combustion chamber consists of the following steps: first, mold the insulating layer into shape; second, spray or brush the liner slurry onto the insulation layer; third, cast the propellant slurry when the liner has cured to a certain extent (semi-cured state).
The function of the liner and the manufacturing process of the combustion chamber make the liner interface between the propellant and the insulation layer the weakest interface, where problems arise [4]. It has been found that many factors affect the interfacial bonding property between liner and propellant [5]. Among those factors, the heat insulation layer is the major factor affecting the bonding property [6]. The insulation layer contains active groups, such as hydroxyl groups, that can adsorb water molecules and at the same time can absorb isocyanate compounds [7,8]. Therefore, the effect of the water in the insulating layer on the bonding property has attracted considerable attention. However, other factors influencing the bonding interface, such as the type, thickness, and water removal conditions of the insulation layer, have not been publicly reported in any research study.
Materials
The NBR insulation layer is a kind of rubber filled with precipitated silica and asbestos fiber, which is vulcanized by sulfur. EPDM is a kind of rubber filled with precipitated silica and organic fiber, which is vulcanized by peroxide. The vulcanizing condition of the two kinds of insulation specimens is determined as follows: the molding temperature is 160 ℃, the molding pressure is 10 MPa, and the molding time is 40 min [9]. The solid content of the HTPB propellant (the sum of AP, Al, and RDX) is 88%; the propellant is cured by isophorone diisocyanate (IPDI) [10]. The curing condition is chosen to be 60 ℃ for 8 days. The HTPB liner is reinforced by precipitated silica and cured by polyisocyanate. Its curing condition is the same as that of the HTPB/IPDI propellant.
Specimen and test
The dumbbell-shaped specimens of the propellant were prepared according to the ASTM D412 C standard [11]. The tensile strength test condition was determined as follows: a test temperature of 25 ℃ and a stretching speed of 100 mm/min. The structure of the rectangular bonding specimen containing the insulating layer is shown in Figure 2. The thickness of the insulation was 2 mm. The bonding test condition was determined as follows: a test temperature of 25 ℃ and a stretching speed of 20 mm/min. The tensile strength for the propellant and the rectangular adhesive specimen was tested using a universal material testing machine (INSTRON 4301).
Effect of insulation layers on bonding property
The influence of the insulation layer on the liner/propellant interface bonding property is shown in Table 1. Compared with the specimen without an EPDM or NBR insulation layer, the bonding strength of the specimen with an insulation layer is obviously lower. The failure mode was cohesive failure of the propellant. Both EPDM and NBR contain precipitated silica as a reinforcing agent, and there is a large amount of hydroxyl groups on the surface of precipitated silica [12]. These hydroxyl groups not only can form hydrogen bonds with water molecules, giving the insulation layer a certain water absorption capacity, but can also react with the isocyanate, so that the curing agent at the interface is additionally consumed. The water absorption of EPDM and NBR, both with a thickness of 2 mm, at different humidity conditions is depicted in Figures 3 and 4. The results indicate that as the humidity is raised and the moisture absorption time is extended, the water absorption of the insulation layer increases. Therefore, it can be concluded that the -OH groups in the insulation layer and the absorbed water may consume -NCO groups at the interface, which reduces the effective curing agent ratio of the propellant. As a result, the bonding strength of the steel/insulation layer/liner/propellant specimen is obviously lower than that of the steel/liner/propellant specimen.
Effect of insulation layer types on bonding property
Under uniform conditions (the same propellant, the same liner, the same insulation layer thickness, and the same water removal condition), the effect of EPDM and NBR on the bonding property is shown in Table 2. Compared with EPDM, the weakening of the bonding property by NBR is more obvious. The reason for this experimental observation can be summarized as the difference in the absorption characteristics for IPDI. To verify this conjecture, the IPDI absorption of the insulation layers was studied. As shown in Figure 5, the weight gain rate of NBR (2 mm) immersed in IPDI is obviously higher than that of EPDM (2 mm).
The relationship between the weight gain of the insulation layer immersed in IPDI and the diffusion coefficient is given by Eq. 1, in which:
m: weight of the insulation layer after absorbing IPDI (g)
m0: initial weight of the insulation layer (g)
w1: weight of IPDI absorbed by the insulation layer (g)
A: area of the insulation layer (m^2)
ρ: density of IPDI, 1.06 E+3 kg·m^-3
t1: absorption time (s)
D: diffusion coefficient (m^2·s^-1)
Noting that m0 = ρ_i·A·d_i, where d_i is the thickness of the insulation layer and ρ_i is its density, the weight gain rate (m − m0)/m0 follows. Plotting the weight gain rate of the insulation layer against the square root of time, the relationship between the slope (k) and the diffusion coefficient follows from Eq. 3, and the diffusion coefficient can then be calculated by Eq. 4. The densities of NBR and EPDM were 1.25 E+3 kg·m^-3 and 1.05 E+3 kg·m^-3, respectively [13]. The diffusion coefficients of IPDI in NBR and EPDM were 1.4454 E-12 m^2·s^-1 and 6.537 E-15 m^2·s^-1, respectively; the diffusion coefficient of IPDI in NBR is about 200 times higher than that in EPDM. Assuming that the process follows Fick's second law, the migration of the curing agent can be described by Eq. 5.
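Since Eqs. 1-4 are not reproduced here, the following Python sketch assumes the standard early-time Fickian sorption relation for a sheet exposed on one face. Both the functional form and the synthetic data are assumptions made only to illustrate how D can be extracted from the slope of the weight-gain rate versus the square root of time.

import numpy as np

def diffusion_coefficient(times_s, weight_gain_rate, d_i, rho_i, rho_ipdi=1.06e3):
    """Estimate D from the slope of weight-gain rate vs. sqrt(time).

    Assumes early-time Fickian uptake through one exposed face,
    (m - m0)/m0 = (2*rho_IPDI / (d_i*rho_i)) * sqrt(D*t/pi),
    which is one common reading of Eqs. 1-4; the paper's exact
    expressions may differ.
    """
    k = np.polyfit(np.sqrt(times_s), weight_gain_rate, 1)[0]   # slope vs sqrt(t)
    return np.pi * (k * d_i * rho_i / (2.0 * rho_ipdi)) ** 2

# Illustrative data for a 2 mm NBR sheet (synthetic, not measured values).
t = np.array([3600.0, 14400.0, 36000.0, 86400.0])              # s
gain = 5.75e-4 * np.sqrt(t)                                    # linear in sqrt(t) by construction
D = diffusion_coefficient(t, gain, d_i=2e-3, rho_i=1.25e3)
print(f"D = {D:.3e} m^2/s")   # ~1.4e-12 m^2/s, comparable to the value reported for NBR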
According to Eq. 5, under the same conditions, the slower the curing reaction rate of the propellant and the greater the diffusion coefficient D of the curing agent, the higher the migration loss of the curing agent. Because the HTPB propellant cured with the IPDI curing agent cures slowly (the time to reach the vulcanization point at 60 ℃ is 8 days), during the early curing stage the migration of free IPDI curing agent out of the propellant in the interfacial region leads to a further weakening of the interfacial properties of the propellant. The greater the amount of migration, the weaker the mechanical properties of the interfacial propellant. Compared with EPDM, the diffusion coefficient of IPDI in NBR is greater, which explains why the NBR insulation layer weakens the mechanical properties of the interfacial propellant more strongly.
Effect of thickness of insulation layer
Because the weakening effect of the NBR insulation layer on the interface property is more pronounced than that of the EPDM insulation layer, the NBR insulation layer was chosen to study the effect of the thickness of the insulation layer on the interface bonding property (Table 3). To eliminate the influence of water, the specimens were pre-baked at 80 ℃ for 2 h. As seen in Table 3, the thickness of the NBR insulation layer has a remarkable effect on the interfacial bonding property of steel/insulation layer/liner/propellant; more specifically, the bonding strength decreased with increasing thickness of the NBR insulation layer. Figure 6 depicts how the weight loss rate of the NBR insulation varies with thickness when heated to 80 ℃.
Obviously, under the same conditions, the loss rate decreased with increasing thickness of the NBR insulation, indicating that comparatively more residual water remains in thick insulation under the same heating condition. In addition, as the thickness of the NBR insulating layer increases, the migration loss of the IPDI curing agent from the propellant is enhanced. The combined migration of water and curing agent intensifies the weakening effect of the insulating layer at the interface as the thickness increases.
Effect of water removal condition
During the curing of the propellant, small molecules containing active hydrogen in the insulation layer migrate, driven by the concentration difference, to the heat-insulation layer/liner/propellant interfacial region, where they cause extra consumption of the curing agent in the interfacial propellant and liner. This migration process may thus decrease the interfacial propellant strength and the bonding strength of the liner/propellant interface. Active small molecules such as water in the insulation layer can be driven out by heating to reduce their adverse influence. Therefore, increasing the heating temperature or prolonging the heating time is beneficial for reducing the influence of the active small molecules in the insulation layer on the interfacial bonding property, as shown in Table 4.
Conclusions
This work aimed at studying the bonding property between the rocket propellant and the thermal insulation layer and the factors affecting the performance of the insulation layer in a typical solid rocket motor. Sample characterization was performed using a universal material testing machine (INSTRON 4301) for the tensile strength test and the bonding evaluation. 1) Both the NBR insulation layer and the EPDM insulation layer have a weakening effect on the interfacial bonding property of the HTPB/IPDI propellant, and the influence of the NBR insulation layer is greater.
2) Small molecules containing active hydrogen, such as water, in the heat-insulation layer migrate to the liner/propellant interface, and the curing agent in the interfacial propellant migrates to the heat-insulation layer, which consumes the curing agent near the liner/propellant interface and results in a decrease of the interfacial bonding property. As the thickness of the heat-insulation layer increases, the weakening of the interfacial bonding property becomes more significant. 3) Drying the insulation layer before spraying or brushing the liner can reduce the negative influence of the heat-insulation layer on the bonding property at the interface. | 2020-03-05T10:29:13.781Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "c0dfec83cdbe87a11ed918abddcbc42b42d8bd8f",
"oa_license": null,
"oa_url": "http://www.ajchem-a.com/article_95803_20984a1d4c05d65fe5f06e8257b0a864.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d5ad93d4a83375c99d7439af644879f6ee212074",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
52958581 | pes2o/s2orc | v3-fos-license | Revisiting Imidazolium Based Ionic Liquids: Effect of the Conformation Bias of the [NTf$_{2}$] Anion Studied By Molecular Dynamics Simulations
We study ionic liquids composed of 1-alkyl-3-methylimidazolium cations and bis(trifluoromethyl-sulfonyl)imide anions ([C$_n$MIm][NTf$_2$]) with varying chain-length $n\!=\!2, 4, 6, 8$ by using molecular dynamics simulations. We show that a reparametrization of the dihedral potentials as well as charges of the [NTf$_2$] anion leads to an improvement of the force field model introduced by K\"oddermann {\em et al.} [ChemPhysChem, \textbf{8}, 2464 (2007)] (KPL-force field). A crucial advantage of the new parameter set is that the minimum energy conformations of the anion ({\em trans} and {\em gauche}), as deduced from {\em ab initio} calculations and {\sc Raman} experiments, are now both well represented by our model. In addition, the results for [C$_n$MIm][NTf$_2$] show that this modification leads to an even better agreement between experiment and molecular dynamics simulation as demonstrated for densities, diffusion coefficients, vaporization enthalpies, reorientational correlation times, and viscosities. Even though we focused on a better representation of the anion conformation, the alkyl chain-length dependence of the cation properties also behaves closer to the experiment. We strongly encourage the use of the new NGKPL force field for the [NTf$_2$] anion instead of the earlier KPL parameter set for computer simulations aiming to describe the thermodynamics, dynamics and also structure of imidazolium based ionic liquids.
INTRODUCTION
Having a reliable force field available is one of the most important prerequisites for setting up a molecular dynamics simulation. Hence, a lot of effort has been put into the development of new as well as the improvement of existing force field models. There are essentially two different approaches on how to improve or optimize force fields: One approach is trying to develop a "universal" force field parameter set which can be applied to a broad range of different molecules or ions, such as the force field parameters for ionic liquids introduced by Pádua et al. [1][2][3][4][5][6][7][8][9][10]. These force fields are very popular in the ionic liquids molecular simulation community and yield in general good results in comparison with experimental data.
An alternative, less universal approach is to focus on a specific subset of molecules and ions, and to enhance the quality of the model by fitting the parameters of a system to a set of selected thermodynamical, dynamical and structural properties, which then can be accurately emulated by the force field. The most well-known example for the application of such a strategy is perhaps the water molecule. In 2002 Bertrand Guillot gave a comprehensive overview over (at the time) more than 40 different water models [11], and the number has been increasing since then [12][13][14][15]. Obviously, water is of great scientific interest. As a consequence, there exists a variety of force field models consisting mostly of three (SPC, TIP3P) to five (TIP5P, ST2) interaction sites, with (POL5) or without (SPC/E) polarizability, and even force fields optimized to best represent the solid phases of water (TIP4P/ICE) and their phase transitions.
The second strategy was employed by Köddermann et al. in 2007 to arrive at the KPL (Köddermann, Paschek, Ludwig) force field for a selected class of imidazolium based ionic liquids composed of 1-alkyl-3-methylimidazolium cations and bis(trifluoromethylsulfonyl)imide anions ([C n MIm][NTf 2 ]) [16]. The aim of that work was to further optimize the force field of Pádua et al. to better represent dynamical properties like self-diffusion coefficients, reorientational correlation times, and viscosities. As shown in their original work from 2007 as well as in further works published by different groups, the KPL force field has been proven to yield reliable results for dynamical properties, but also thermodynamical properties, such as the free energies of solvation for light gases in ionic liquids [17,18], and is still used frequently to this date [19][20][21].
Here we want to present our take on further improving the KPL force field by revisiting the conformation-space explored by the [NTf 2 ] anion. Extensive studies of the conformation of the [NTf 2 ] anion using the KPL force field in comparison to experimental data as well as quantum chemical calculations have revealed a significant mismatch of the energetically favored conformations. Therefore we feel the need for presenting a modified version of the force field, removing this conformation-bias. We discuss the implications of this modification for a wealth of thermodynamical, dynamical, and structural quantities.
CONFORMATION-SPACE OF THE ANION
During MD simulations of ionic liquids of the type [C n MIm][NTf 2 ] with the force field of Köddermann et al. it became apparent that the favored [NTf 2 ] anion conformations observed in the simulation differ from what has been shown earlier from quantum chemical calculations [6] as well as from Raman experiments [22] (see Fig. 1 and Fig. 2).
For locating the minimum energy conformations we performed extensive quantum chemical calculations with the Gaussian 09 program [23] following the approach of Pádua et al. [2]. We started by calculating the potential energy surface as a function of the two dihedral angles S1-N-S2-C2 (φ 1 ) and S2-N-S1-C1 (φ 2 ) on the HF level with a small basis set (6-31G*). Subsequent to these optimizations we performed single point calculations on the MP2 level using the cc-pvtz basis set for all HF optimized conformations. In agreement with earlier calculations by Pádua et al. [6], and Raman measurements of Fujii et al. [22], we observe essentially two structurally distinct minimum energy conformations that can be identified as energy minima on the energy-landscape depicted in Fig. 3. The trans conformations of the [NTf 2 ] anion are energetically preferred, followed by the gauche-conformations, which are elevated by about 3 kJ mol −1 (see Fig. 2).
To compare these ab initio calculations with the KPL force field model, we employed the molecular dynamics package Moscito 4.180 and computed the same potential energy surface as a function of the two dihedral angles φ 1 and φ 2 (see Fig. 4 top panel) by fixing the two dihedral angles and optimizing all other degrees of freedom. We would like to add that in the force fieldoptimizations all bond-lengths were kept fixed. It is quite obvious that the KPL force field does not adequately reproduce the potential energy surface obtained from the quantum chemical calculations (compare the top panel in Fig. 4 with FIG 3). The minimum energy conformations of the KPL model reveals essentially two structurally distinct conformations illustrated in Fig. 1. However, both are somewhat similar, being positioned between the trans and gauche conformations favoured in the ab initio calculations. The fact that the energy landscape does not reflect all the symmetry-features of the molecule, however, might be a lesser problem since energy barriers are rather large and the anion could explore similar conformations simply by rotation. However, for arriving at a better representation of the ab initio energy surface, we reparameterized the charges as well as the two distinct independent dihedral potentials (S-N-S-C and F-C-S-N), while keeping the other parameters unchanged. From our quantum chemical calculations we yield the global minimum conformations at φ 1 = φ 2 = 90 • and φ 1 = φ 2 = 270 • . Due to the symmetry of the [NTf 2 ] anion these two minima are conformationally identical. To calculate the parameters for the S-N-S-C dihedral angle, we fixed φ 1 at 90 • and calculated the energy as function of the dihedral angle φ 2 on the MP2 level using a cc-pvtz basis set (as shown in Fig. 5). The same procedure was applied using the KPL force field while switching of the dihedral potential, such that only the nonbonding (nb) interactions matter. We then subtracted the latter energy function from the energies obtained via the QM calculations, and arrive at the dihedral potential for the dihedral angle S-N-S-C, which should be reproduced by the torsion potential in our force field (see Fig. 5
bottom panel).
In contrast to Köddermann et al., we chose to fit a dihedral potential function obeying the conformational symmetry-features of the anion, a cosine series with n = 6 terms and phase angles ψ 0 m = 0 (cf. Table III), to the computed ab initio potential, leading to the proper minimum energy conformations of the [NTf 2 ] anion [2]. Similarly obtained were the parameters for the F-C-S-N dihedral potential of the terminal CF 3 -groups (see Fig. 6). The complete set of new parameters for the NGKPL force field is given in Table III. All charges were computed from the MP2 wavefunction using the method of Merz and Kollman as implemented in the Gaussian 09 program [24]. The refined charges are listed in Table I.
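A minimal sketch of such a fit is given below: the residual torsion profile (ab initio energy minus non-bonded contribution) is fitted by linear least squares to a six-term cosine series with zero phase angles. The exact functional form, the synthetic target profile, and the fitting details are illustrative assumptions; the actual NGKPL parameters are those listed in Table III.

import numpy as np

def cosine_series(psi, k):
    # V(psi) = sum_m k_m * [1 + cos(m*psi)], all phase angles psi_m^0 = 0 (assumed form)
    m = np.arange(1, len(k) + 1)
    return np.sum(k[None, :] * (1.0 + np.cos(np.outer(psi, m))), axis=1)

def fit_dihedral(psi_deg, v_target, n_terms=6):
    # Linear least-squares fit of the k_m coefficients to the target torsion profile
    psi = np.radians(psi_deg)
    m = np.arange(1, n_terms + 1)
    A = 1.0 + np.cos(np.outer(psi, m))
    k, *_ = np.linalg.lstsq(A, v_target, rcond=None)
    return k

# Illustrative target profile in kJ/mol; in practice this is E_QM - E_nonbonded.
psi_deg = np.arange(0.0, 360.0, 10.0)
psi_rad = np.radians(psi_deg)
v_target = 1.5 * (1.0 + np.cos(psi_rad)) + 4.0 * (1.0 + np.cos(2.0 * psi_rad))
k_fit = fit_dihedral(psi_deg, v_target)
rms = np.sqrt(np.mean((cosine_series(psi_rad, k_fit) - v_target) ** 2))
print(np.round(k_fit, 3), f"RMS deviation = {rms:.3f} kJ/mol")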
Finally, employing new refined parameters for the dihedral potentials and partial charges, we re-calculated the energy surface as a function of the two dihedral angles φ 1 and φ 2 (see Fig. 4 bottom panel). The result is in much better agreement with the ab initio calculations and resolves the conformational mismatch issue for the force field of the [NTf 2 ] anion.
All parameters for the new [NTf 2 ] anion force field are listed in Tables I-III, while the parameters for the cations can be found in the publication of Köddermann et al. [16].
MOLECULAR DYNAMICS SIMULATIONS
We performed MD simulations for the two force fields KPL and NGKPL with Gromacs 5.0.6 [25][26][27][28][29] over a temperature range from T = 273 - 483 K to calculate thermodynamical and dynamical properties and compare them with the original KPL force field. All simulations were carried out in the N pT ensemble. However, to compute viscosities, we performed additional N V T simulations using starting configurations sampled along the N pT trajectory. Periodic boundary conditions were applied using cubic simulation boxes containing 512 ion-pairs. We applied smooth particle mesh Ewald summation [30] for the electrostatic interactions with a real space cutoff of 0.9 nm, a mesh spacing of 0.12 nm and 4th order interpolation. The Ewald convergence factor α was set to 3.38 nm −1 (corresponding to a relative accuracy of the Ewald sum of 10 −5 ). All simulations were carried out with a timestep of 2.0 fs, while keeping bond lengths fixed using the LINCS algorithm [31]. An initial equilibration was done for 2 ns at T = 500 K applying the Berendsen thermostat as well as the Berendsen barostat with coupling times τ T = τ p = 0.5 ps [32]. After this, another equilibration was done for 2 ns at each of the desired temperatures. For each of the six temperatures 273 K, 303 K, 343 K, 383 K, 423 K and 483 K we performed production runs of 30 ns, keeping the pressure fixed at 1 bar, applying Nosé-Hoover thermostats [33,34] with τ T = 1 ps and Parrinello-Rahman barostats [35,36] with τ p = 2 ps.
TABLE III. Parameters k dp m and ψ 0 m for the torsion potential V dp κλωτ (a cosine series with n terms).
RESULTS & DISCUSSION
Analogous to the publication of Köddermann et al. from 2007 [16] we will compare densities, self-diffusion coefficients and vaporization enthalpies for [C n MIm][NTf 2 ] as function of temperature and alkyl chain-length as well as viscosities and reorientational correlation times for [C 2 MIm][NTf 2 ] as function of temperature. It is important to keep in mind, that the original force field was optimized to reproduce these properties and yields a good agreement between experiment and simulation. By resolving the mismatch of the favored conformations of the [NTf 2 ] anion we are able to describe these properties as good as the KPL force field or even better.
Structural Features
Here we take a look at structural features of the liquid phase and how they are influenced by changes in the conformation-population of the [NTf 2 ] anion. First we inspect the three distinct center of mass pair distribution functions between the different ions computed for [C n MIm][NTf 2 ] with n = 2 at T = 303 K (shown in Fig. 7). It is quite apparent that these distribution functions are only slightly affected by the alterations in the force field. Most notable are the differences observed in the anion-anion pair distribution function depicted in Fig. 7c, with the first peak being significantly broadened. It is natural to assume that this behavior is related to the more distinct conformational states (trans and gauche) that the reparameterized [NTf 2 ] anion is adopting, as shown in Fig. 2. In the trans state the molecule is more elongated along the molecular axis and more compact perpendicular to it. In addition, the gauche-state is generally more compact than the minimum energy conformations adopted by the original KPL force field model shown in Fig. 1. This leads to an enhanced population of both short and long anion-anion distances. This effect manifests itself also in the slight shift of the maximum of the first peak of the anion-cation pair distribution function towards smaller distances (see Fig. 7a). Another interesting distribution function is the pair distribution function of the anion-oxygens surrounding the C(2)-hydrogen site on the cation. The C(2)-position is deemed to act as a hydrogen-bond donor [37,38]. With changing conformations we expect an effect on the hydrogen bonding situation between the anion and cation. Here we observe that the NGKPL force field promotes hydrogen bonds between anions and cations as indicated by an increased first peak of the O-H pair distribution function shown in Fig. 8. The computed number of hydrogen bonds increases throughout by about 4 %, mostly unaffected by the alkyl chain-length and temperature (not shown). Taking into account the importance of more elongated trans configurations of the anion, it is also not surprising that the second peak is somewhat depleted, while the third peak is again enhanced (see Fig. 8). We further investigate the hydrogen-bond situation by not just looking at the distance between the oxygen and hydrogen, but also at the angular distribution. Therefore we compute the probability density map of the anion-oxygens surrounding the C(2) hydrogen site on the cation. Again we focus on the C(2) hydrogen because its hydrogen-bond interaction with the anion is deemed the strongest and most important. To calculate this map we compute both the O-H distance and the angle between the C-H bond-vector on the cation and the intermolecular C-O vector, where C is the C(2)-position of the cation and O represents the oxygen-sites on the anions. In addition, the computed probabilities are weighted by r −2 OH. It is revealed that the maximum of this probability density map does not quite represent a linear hydrogen bond at a distance of 2.3 Å, but is tilted by about 25°, and is characterized by a rather broad angular distribution (Fig. 9).
Densities & Self-Diffusion Coefficients
To get an idea of how the changing conformation-populations influence the properties of the imidazolium based ionic liquids, we first take a look at the mass density of [C 2 MIm][NTf 2 ]. In molecular simulations the density has always been an important property for evaluating a force field. The enhanced conformational diversity of the [NTf 2 ] anion leads to a slight increase in the density over the whole temperature range (see Fig. 10). This overall increase is in better agreement with the experimental data from Tokuda et al. [39]. For lower temperatures the NGKPL force field even matches the experimental values. The thermal expansivity, however, is significantly overestimated, although at the highest temperatures the difference between experiment and simulation is still within about 5 %. Despite the overall density increase from KPL to NGKPL, the thermal expansivities of both models are practically identical. With this increasing density, also slightly reduced self-diffusion coefficients for the [NTf 2 ] anion are observed (see Fig. 11). We calculated the self-diffusion coefficient using the Einstein relation, D = lim t→∞ ⟨|r(t) − r(0)|²⟩/(6t) (Fig. 11). Taking a look at the alkyl chain-length dependence, we can support the findings for the n = 2 imidazolium ionic liquid. The NGKPL force field is able to reproduce the dependence better, especially for n ≤ 4; for longer chains the KPL force field is closer to the experiment (see Fig. 12). As observed for the temperature dependence, the general trend of the self-diffusion coefficient as a function of the alkyl chain-length is identical for the KPL and NGKPL force fields.
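For reference, a minimal Python sketch of the Einstein-relation analysis is given below; the mean-squared displacement data, fit window, and units are illustrative assumptions.

import numpy as np

def einstein_diffusion(msd_nm2, time_ps, fit_window=(0.2, 0.8)):
    # D = lim_{t->inf} <|r(t) - r(0)|^2> / (6 t), estimated from a linear fit to the MSD
    n = len(time_ps)
    lo, hi = int(fit_window[0] * n), int(fit_window[1] * n)
    slope = np.polyfit(time_ps[lo:hi], msd_nm2[lo:hi], 1)[0]   # nm^2 / ps
    return slope / 6.0 * 1.0e-6                                # 1 nm^2/ps = 1e-6 m^2/s

# Illustrative MSD of a purely diffusive process with D = 1e-11 m^2/s.
t = np.linspace(0.0, 1000.0, 500)       # ps
msd = 6.0e-5 * t                        # nm^2, slope = 6*D in nm^2/ps
print(f"D = {einstein_diffusion(msd, t):.2e} m^2/s")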
Vaporization Enthalpies
The magnitude of the vaporization enthalpy of ionic liquids has been studied extensively over the last few years and has sometimes been discussed quite emotionally [40][41][42][43][44][45][46][47][48]. For the purpose of this study we will compare our results with the more recent QCM data for imidazolium based ILs of type [C n MIm][NTf 2 ] from Verevkin et al. of 2013 [47], as shown in Fig. 13. We would like to point out that an exhaustive overview of the huge amount of vaporization enthalpy data from different experiments as well as molecular simulation studies is provided in the supporting information of Verevkin et al. [47] and in the COSMO-RS study by Schröder and Coutinho [48]. The vaporization enthalpies per mol of [C n MIm][NTf 2 ] were here calculated by assuming ideal gas behavior with ∆ v H = ∆ v U + RT, which is a well justified approximation, given the low vapor pressures of ILs at low temperatures. The energy difference between the liquid and gas phases was computed via ∆ v U = U ′ g − U ′ l , where U ′ l and U ′ g are the internal energies per mol of ion-pairs of the liquid and gas phases, respectively. To determine U ′ g we performed gas phase simulations of individual ion-pairs without periodic boundary conditions. It has been shown in the literature that the gas phase of ionic liquids consists mostly of ion-pairs [47,[49][50][51][52][53][54][55], held together by strong long-range electrostatic forces. Hence, simulating an isolated ion-pair instead of separated ions is the most realistic approximation of the IL gas phase. As is standard practice, during the simulation of both the liquid phase and the isolated ion-pair, the total linear momentum was set to zero, thus eliminating the system's center of mass translational motion. In addition, in the simulations of the isolated ion-pairs also the total angular momentum was set to zero. However, when comparing the internal energy of the gas phase and the liquid phase, we have to correct for differences in the kinetic energy stored in the translational/rotational motion of either system by adding a corresponding correction term per mole of ion-pairs, where N IP = 512 is the number of ion-pairs used in the liquid simulation, and U g and U l are the total energies per ion-pair as computed directly from the MD simulations. With these corrected molar internal energies U ′ g and U ′ l we compute the heat of vaporization ∆ v H using Eq. 3 for a temperature of T = 303 K, shown in Fig. 13 and given in Table V.
[Displaced caption of the density figure (Fig. 10): experimental data of Tokuda et al. [39]; results from our molecular dynamics simulations using the NGKPL (blue dots) and KPL (red squares) force fields were fitted with a linear function, represented by the dashed lines; see also Table IV.]
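The ideal-gas estimate itself reduces to simple arithmetic, as the sketch below illustrates; the internal energies used are placeholder numbers of a plausible magnitude, not results of the present simulations.

R = 8.314462618e-3  # kJ/(mol K)

def vaporization_enthalpy(u_gas, u_liq, temperature):
    """Delta_v H = (U'_g - U'_l) + R*T per mole of ion-pairs (ideal-gas approximation)."""
    return (u_gas - u_liq) + R * temperature

# Illustrative internal energies per mole of ion-pairs (kJ/mol), not actual results.
T = 303.0
print(f"Delta_vH = {vaporization_enthalpy(-350.0, -485.0, T):.1f} kJ/mol")
# -> about 137.5 kJ/mol, of the order of magnitude reported for [C2MIm][NTf2]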
Both the data computed from the KPL and from the NGKPL force field as a function of alkyl chain-length are rather close to the experimental data of Verevkin et al. [47].
However, we would like to point out that the optimized NGKPL force field is in even better agreement with the QCM experiments, particularly for chain-lengths up to n = 4. Not only are the data for n = 2 now in quantitative agreement with the experimental data, but the step from n = 1 to n = 2 is also better captured by the new model, suggesting a significant influence of the enhanced conformational diversity of the [NTf 2 ] anion [56]. Since the exact slope of ∆ v H as a function of the alkyl chain-length has been shown to be controlled by the counterbalance of electrostatic and van der Waals forces [57], the increasing deviation for longer chain-lengths might indicate a slight misrepresentation of the size of the dispersion interaction introduced by increasing the alkyl chain-length.
Viscosities & Reorientational Correlation Times
To further compare dynamical properties of the simulated ionic liquids with experimental data, the temperature dependence of the reorientational correlation times for the C(2)-H vector and the viscosities of the simulated ionic liquids were calculated. To compare with the quadrupolar relaxation experiments of Wulf et al. [58] we computed reorientational correlation functions R(t) of the C(2)-H bond-vector according to R(t) = ⟨P 2 (r CH (0) · r CH (t)/|r CH | 2 )⟩, where P 2 is the second Legendre polynomial, its argument represents the angle-cosine between the CH-bond vector at times "0" and t, and |r CH | is the CH-bond length, which is kept fixed during the simulation. The reorientational correlation times τ c are obtained as the integral over the correlation function, τ c = ∫ 0 ∞ R(t) dt. Here, the long-time behavior is fitted to a stretched exponential function and the total correlation time is determined by numerical integration. Again we find that both force fields are in good agreement with the experimental values, albeit with the original KPL model being slightly closer to the experimental data (see Fig. 14).
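A bare-bones version of this analysis might look as follows; it computes the P2 autocorrelation of stored unit bond vectors with a single time origin and integrates the raw curve, whereas the text fits the long-time tail to a stretched exponential before integrating. Array shapes are illustrative assumptions.

```python
import numpy as np

def p2_correlation(u):
    """u: (n_frames, n_bonds, 3) unit C(2)-H bond vectors.
    Returns R(t) = <P2(u(0).u(t))>, averaged over bonds (single time origin)."""
    cos = np.einsum('fbi,bi->fb', u, u[0])
    return (1.5 * cos ** 2 - 0.5).mean(axis=1)

def correlation_time(R, dt):
    """tau_c as the numerical integral of R(t) over the stored window."""
    return np.trapz(R, dx=dt)

# toy usage with slowly randomized unit vectors
rng = np.random.default_rng(1)
v = np.cumsum(rng.normal(size=(500, 32, 3)) * 0.05, axis=0) + np.array([0.0, 0.0, 1.0])
u = v / np.linalg.norm(v, axis=2, keepdims=True)
print(correlation_time(p2_correlation(u), dt=0.1))
```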
To determine the viscosities we used the approach of Zhang et al. [59] to compute viscosities from equilibrium fluctuations of the off-diagonal elements of the pressure tensor via the Green-Kubo relation η = V/(k B T) ∫ 0 ∞ ⟨P αβ (0) P αβ (t)⟩ dt. For each temperature we performed 15 independent N V T simulations, where the starting configurations were sampled from the earlier N pT simulations with a constant time interval of 2 ns. After a 1 ns equilibration we computed 8 ns long production runs for each of the sampled configurations, storing the pressure tensor data at each time-step. Finally, the correlation function was calculated and integrated over a time-window of 1 ns for each of the 15 simulations. The average of the running integrals was calculated as well as its standard deviation. The average over the running integrals as well as the standard deviation were handled as suggested by Zhang et al. [59], with a fitting cut-off t cut at the point where σ(t) is 40 % of the calculated average viscosity.
[Figure captions: the experimental data of Wulf et al. [58] are shown as green triangles, KPL data as red squares, and NGKPL data as blue dots; experimental viscosities from Ref. [39]; the data are summarized in Table VI.]
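The sketch below shows the structure of such a Green-Kubo estimate: a running integral of the pressure-tensor autocorrelation for each independent run, then the mean and standard deviation across runs, from which a plateau value would be read off (the text uses the fitting procedure of Zhang et al. with a cut-off where σ(t) reaches 40 % of the mean viscosity). Units and array shapes are illustrative assumptions.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

def running_viscosity(pxy_runs, dt, volume, T):
    """pxy_runs: (n_runs, n_steps) samples of one off-diagonal pressure-tensor
    element [Pa]; dt [s]; volume [m^3]; T [K].
    Returns (mean, std) over runs of the running Green-Kubo integrals [Pa s]."""
    etas = []
    for p in pxy_runs:
        n = len(p)
        # autocorrelation <P(0)P(t)>, normalized by the number of overlapping pairs
        acf = np.correlate(p, p, mode='full')[n - 1:] / np.arange(n, 0, -1)
        etas.append(volume / (k_B * T) * np.cumsum(acf) * dt)
    etas = np.asarray(etas)
    return etas.mean(axis=0), etas.std(axis=0)

# toy usage with random noise standing in for pressure-tensor data
rng = np.random.default_rng(2)
mean_eta, std_eta = running_viscosity(
    rng.normal(scale=1e5, size=(15, 4000)), dt=2e-15, volume=1e-25, T=303.0)
print(mean_eta[-1], std_eta[-1])
```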
We find the differences between the KPL and NGKPL models to be rather small; both lie basically within the statistical errors of this method. Moreover, both force field models yield viscosities very close to the experiment (Fig. 15).
CONCLUSIONS
We showed that the reparametrization of the dihedral potentials as well as of the charges of the [NTf 2 ] anion leads to an improvement of the 2007 force field model of Köddermann et al. for imidazolium-based ionic liquids. The most prominent advantage of the new parameter set is that the minimum energy conformations (trans and gauche) of the anion, as demonstrated by ab initio calculations and Raman experiments, are now well reproduced.
The results obtained for [C n MIm][NTf 2 ] show that this correction leads to a slightly better agreement between experiment and molecular dynamics simulation for a variety of properties, such as densities, diffusion coefficients, vaporization enthalpies, reorientational correlation times, and viscosities. Even though we focused on optimizing the anion parameters, the alkyl chain-length dependence is in general also found to be closer to the experiment.
With this work we want to point out that it is important to re-examine established force fields and, if necessary, to improve them. We highly recommend using the new NGKPL force field for the [NTf 2 ] anion instead of the original KPL force field, especially for simulations aiming to describe the thermodynamics, dynamics, and also the structure of imidazolium-based ionic liquids. ACKNOWLEDGEMENTS B.G. is thankful for financial support provided by COST Action CM 1206 (EXIL - Exchange on Ionic Liquids). | 2017-11-10T11:42:51.000Z | 2017-11-10T00:00:00.000 | {
"year": 2017,
"sha1": "40db46ab5f3eb2ef42d9bd5cd17ffb6cb94c5d44",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1711.03779",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "40db46ab5f3eb2ef42d9bd5cd17ffb6cb94c5d44",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine",
"Physics"
]
} |
119486898 | pes2o/s2orc | v3-fos-license | One-loop considerations for coexisting vacua in the CP conserving 2HDM
The Two-Higgs-Doublet model (2HDM) is a simple and viable extension of the Standard Model (SM) with a scalar potential complex enough that two minima may coexist. In this work we investigate if the procedure to identify our vacuum as the global minimum by tree-level formulas carries over to the one-loop corrected potential. In the CP conserving case, we identify two distinct types of coexisting minima --- the regular ones (moderate $\tan\beta$) and the non-regular ones (small or large $\tan\beta$) --- and conclude that the tree-level expectation fails only for the non-regular type of coexisting minima. For the regular type, the sign of $m^2_{12}$ already precisely indicates which minimum is the global one, even at one-loop.
I. INTRODUCTION
After the discovery of the Higgs boson of 125 GeV mass in 2012 [1], all the pieces of the SM were firmly established. The Standard Model (SM) however is far from being a complete theory at the smallest physical scales as, to name a few shortcomings, it does not contain a Dark Matter candidate and cannot explain the matter and antimatter asymmetry of the Universe [2].
Among the various proposals, considering an extended Higgs sector is one of the simplest modifications that we can implement in the SM without extending the hidden fundamental forces of nature and also passing the many stringent collider tests at the LHC. In this work we consider one of such extensions namely the Two-Higgs-doublet model (2HDM) in which just another scalar doublet is added. This model features five physical Higgs bosons, three neutral and two charged, instead of just one in the SM. It has been extensively studied in the literature (see e.g. Ref. [3] for a review) partly because more fundamental theories, for instance the MSSM [4], require a similar extended scalar sector. The model also allows the possibility of other sources of CP violation [5], a feature that gets even richer when more doublet copies are added [6]. Finally, a more complex scalar sector can generate a strong enough EW first-order phase transition [7,8], a property that is lacking in the SM [9] but is necessary to explain the matter-antimatter asymmetry of our universe.
Another feature that a more involved scalar potential encompasses is the possibility of different symmetry breaking patterns and the 2HDM is no exception [10][11][12][13][14]. With additional Higgs doublets, this complexity increases substantially [15]. In general, for a sufficiently complex potential, there may even exist sets of parameters for which many local minima coexist. In this case, identifying which one is the global minimum might be a nontrivial task that involves solving a system of polynomial equations. Usually, when one is sure that only one minimum is present for a given choice of parameters, such a task is bypassed by trading some of the quadratic parameters in favor of the vevs and ensuring the extremum is a local minimum. In the 2HDM, that assumption does not hold in general and for some parameter ranges it is possible that up to two minima coexist for the same potential [11]. So a worrisome possibility arises: we may be living in a metastable vacuum with the possibility to tunnel to the global minimum. That situation was described in Ref. [16] as our vacuum being a panic vacuum. One way of testing that situation is explicitly calculating the depth of the second minimum and comparing it to the depth of the first. However, finding the location of the minima explicitly may be a difficult or, at least, computationally intensive task. Fortunately, in the same work, the authors developed a method capable of distinguishing if our vacuum is a panic vacuum by calculating a discriminant that depends only on the position of our vacuum (see Ref. [17] for a more general test). Although many of the possible scenarios with coexisting minima are not favored by current LHC data [16], we are interested here in studying if the simple use of this discriminant can be carried over to the one-loop corrected effective potential. Already for the inert doublet model [10] it was found that the potential difference of the coexisting minima can change sign when one-loop corrections are taken into account [18]. Therefore, the present work aims to verify the validity of such conclusions for a general CP conserving 2HDM with softly broken Z 2 . We focus on the case of two coexisting normal vacua and study the predictive power of tree-level formulas for the depth of the potential when one-loop corrections are considered.
The outline of the paper is as follows: In Sec. II we review the properties of the 2HDM with softly broken Z 2 at tree-level focusing on the possibility of two coexisting minima. Some results can not be found in previous literature. In Sec. III we review the form of the one-loop effective potential for our case while Sec. IV explains our procedure for ensuring that our vacuum has the correct vacuum expectation value. The procedure to compute and fix the pole mass of the SM Higgs boson to its experimental value is explained in Sec. V. The steps we performed to generate the numerical samples are listed in Sec. VI and the resulting analysis is shown in Sec. VII. Finally, the conclusions can be found in Sec. VIII.
II. COEXISTING NORMAL VACUA AT TREE-LEVEL
The general 2HDM potential at tree-level is We will be considering the real softly broken Z 2 symmetric case where m 2 12 and λ 5 are real while λ 6 = λ 7 = 0. We will also focus on CP conserving vacua where the vacuum expectation values are real and we employ the parametrization These vevs can be further parametrized by modulus and angle as where v = 246 GeV for our vacuum and we use the shorthands c β ≡ cos β, s β ≡ sin β. We call this type of vacuum a normal vacuum and we denote our vacuum by NV [16] with v 1 > 0 and v 2 > 0. By ensuring the existence of one normal vacuum (our vacuum), a scalar potential with fixed parameters cannot simultaneously have another minimum of a different type, namely a charge breaking vacuum or a spontaneously CP breaking vacuum [12]. Just another coexisting normal vacuum NV with vevs (v 1 , v 2 ) may exist and this is the only case where two minima can coexist in the 2HDM potential at tree level [11]: only two minima with the same residual symmetry may coexist. When the coexisting minima exist, we define the potential difference as so that ∆V > 0 indicates that our vacuum is the global minimum. We use this convention for the one-loop potential as well.
To describe the situation of two coexisting normal vacua in more detail, we can write the extremum equations for nonzero v 1 and v 2 : We employ the usual shorthand λ 345 ≡ λ 3 + λ 4 + λ 5 . We will see that there are two types of coexisting normal vacua depending on the Z 2 symmetric limit. The complete solutions of Eq. (5) for m 2 12 = 0 can be easily found: there are two degenerate extrema that spontaneously break Z 2 -ZB + and ZB − -and two extrema that preserve Z 2 -ZP 1 and ZP 2 ; the latter are often denoted as inert or inert-like vacuum (see Ref. [18] and references therein). Only one of the pairs ZP 1,2 or ZB ± may coexist as minima. 1 They are characterized by (v 1 , v 2 ) of the form where we adopt the convention that allv 1 ,v 2 ,ṽ 1 ,ṽ 2 are positive. The specific values of the vevs are given by v 2 for the Z 2 breaking extrema 2 andṽ for the Z 2 preserving minima. The two extrema ZB ± are indeed connected by the spontaneously broken Z 2 symmetry: φ 2 → −φ 2 . We note that simultaneous sign flips of both v 1 , v 2 is a gauge symmetry and do not count as a degeneracy. Hence we adopt the convention that v 1 > 0 while v 2 can attain both signs so that we only analyze the first and fourth quadrant in the (v 1 , v 2 ) plane. As the −m 2 12 v 1 v 2 term is continuously turned on, the Z 2 symmetry is soft but explicitly broken with a negative (positive) contribution in the first (fourth) quadrant when m 2 12 > 0. The opposite is true for negative m 2 12 . The effect of adding the m 2 12 term is different for the two types of coexisting minima which we denote by ZB ± and ZP 1,2 from their m 2 12 → 0 limit. We also denote the ZB ± minima as regular and ZP 1,2 as non-regular simply because it is much more probable to generate models with the former pair than the latter for generic values of tan β and other parameters.
The two degenerate spontaneously breaking minima ZB ± : (v 1 , ±v 2 ) deviate to ZB + : (v 1 , v 2 ) and ZB − : (v 1 , v 2 ), respectively, and the degenerate potential depth, also deviates differently lifting the degeneracy. In first approximation in small m 2 12 and in the deviation of the vevs, the potential depths change respectively by the amount δV ± ≈ ∓m 2 12v1v2 , so that the depth difference of the two minima is See appendix A for the general formula. As we defined our vacuum to be ZB + , we see that indeed it gets deeper as m 2 12 increases from zero while the non-standard vacuum ZB − is pushed up. The deviation for the Z 2 preserving minima are different: the first order perturbation to the potential value vanishes. We need the deviations in the locations of the minima ZP 1,2 which, in first order, read where with m 2 Hi = m 2 ii + 1 2 λ 345ṽ 2 j , (ij) = (2,1) or (1,2), is the second derivative in the v i direction around ZP j . When m 2 12 > 0, the deviations δv i are positive and the two minima enter the first quadrant. Otherwise they move to the fourth quadrant. The potential depth then changes from the Z 2 limit by 2 As long as the solutions for v 2 i give positive solutions.
respectively. The depth difference of the coexisting ZP 1,2 is The first term corresponds to the usual difference between the two inert-like vacua. We conventionally consider ZP 2 to be our vacuum NV corresponding to large tan β. The behaviors described above can be clearly seen in the left panel of Fig. 1 (blue points) for ZB ± where we show the depth difference calculated exactly against m 2 12 , both normalized by the appropriate power of v = 246 GeV (NV). Only potentials with two minima are selected and the free parameters are taken as with m h = 125 GeV, 1 ≤ tan β ≤ 50, {m H + , m A , m H } ranging from 90 GeV to 1 TeV (m H > m h ), −20,000 GeV 2 ≤ m 2 12 ≤ 6000 GeV 2 and α is constrained near alignment, −0.1 ≤ cos(β − α) ≤ 0.1. Simple bounded from below and perturbativity constraints are also imposed [3]. The blue points end around m 2 12 /v 2 ∼ 0.07 because the nonstandard minimum gets pushed up until the point where it disappears. In contrast, the right panel shows the normalized depth difference with respect to the ratio v /v of the values for NV and NV. We can see for the blue points that the vacuum that lies deeper has a larger vacuum expectation value. The method we employed to calculate the location of NV is described in appendix B. We confirm the approximation (10): for ZB ± the sign of m 2 12 discriminates between our vacuum being the global minimum (m 2 12 > 0) or just a local metastable vacuum (m 2 12 < 0). This behavior, however, does not apply for the points (red) that deviate from the inert-like vacua, ZP 1,2 , where the nonstandard vacuum may lie deeper despite m 2 12 being positive. For the generic values of t β as used above the density of non-regular coexisting minima is very low so that only a handful of coexisting non-regular minima is obtained jointly with the regular points. To generate a sufficient number of non-regular points we further produced another sample (most of the red points) by restricting 20 ≤ t β ≤ 50 and positive m 2 12 . To accurately distinguish among the different cases, Ref. [16] constructed a very useful discriminant D that ensures that our vacuum is always the global minimum if D is positive. 3 Since that discriminant was derived assuming that v 1 , v 2 are both positive, we cannot apply it to NV when v 2 < 0. So we rederived the discriminant allowing the vevs to be negative with the result where k ≡ (λ 1 /λ 2 ) 1/4 and we have normalized to obtain a dimensionless quantity. This discriminant is useful because it can be obtained by using only the angle β calculated in one vacuum and cases with only one minimum are 3 For D = 0 but m 2 12 = 0 the discriminant is inconclusive.
automatically taken into account. The discriminating power of D is shown in Fig. 2 where the depth difference is plotted against D calculated using NV. Obviously we could have calculated the discriminant for NV , obtaining a D with sign opposite to D. That implies that the quantity that depends on the vevs, s 2β (t 2 β − k 2 ), must have opposite signs when calculated for NV and NV . Our main goal here is to analyze if the discriminant power of m 2 12 and D carries over to the one-loop effective potential.
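To make the bookkeeping concrete, a minimal sketch of the sign-relevant part of this tree-level discriminant is given below. Only the combination m 2 12 · s 2β · (t 2 β − k 2 ) mentioned in the text is evaluated; the positive normalization that makes D dimensionless is omitted (it does not affect the sign), and this reading of the equation, which is not reproduced above, should be checked against Eq. (17) of the paper before use.

```python
import numpy as np

def discriminant_sign(m12_sq, lam1, lam2, tan_beta):
    """Sign of the tree-level discriminant as described in the text:
    k = (lam1/lam2)^(1/4) and D ~ m12^2 * sin(2*beta) * (tan_beta^2 - k^2),
    up to a positive normalization. D > 0 is read as 'our vacuum is the
    global minimum' at tree level; D = 0 with m12^2 != 0 is inconclusive."""
    k = (lam1 / lam2) ** 0.25
    beta = np.arctan(tan_beta)          # tan_beta may be negative (v2 < 0)
    return np.sign(m12_sq * np.sin(2.0 * beta) * (tan_beta ** 2 - k ** 2))

# illustrative call with hypothetical parameter values
print(discriminant_sign(m12_sq=2000.0, lam1=0.5, lam2=0.4, tan_beta=2.0))
```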
III. EFFECTIVE POTENTIAL AT ONE-LOOP
We can now consider the effective potential with the one-loop contribution V 1l (ϕ i ) = Σ k c k M k 4 (ϕ i )/(64π 2 ) [ln(M k 2 (ϕ i )/µ 2 ) − 3/2]. The masses M 2 k (ϕ i ) correspond to the scalar-field-dependent eigenvalues of the tree-level mass matrices of all particles of the theory while µ is the renormalization scale. We are already assuming a renormalization scheme with minimal subtraction (MS) and, for the gauge sector, the Landau gauge and dimensional reduction (DRED), following the scheme of Ref. [19]. The parameters contained in V 0 are thus the renormalized parameters. The integer coefficients |c k | count all the degrees of freedom for each particle k including color, charge and spin, while the sign of c k is determined by its boson/fermion character: positive for bosons and negative for fermions. For example, for the top quark we have c t = −3 × 2 × 2 corresponding to its 3 colors, 2 particle/antiparticle and 2 spin degrees of freedom. We should note that the effective potential is generically a gauge dependent quantity but its value at an extremum is not [20].
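A compact numerical version of this Coleman-Weinberg sum, written against the conventions just described (signed multiplicities c_k, a common constant of 3/2 in the chosen MS/DRED scheme), might look like the sketch below; it is only an illustration of how the field-dependent masses enter, not the code used by the authors.

```python
import numpy as np

def V_one_loop(masses_sq, c_k, mu):
    """One-loop Coleman-Weinberg contribution.

    masses_sq : field-dependent squared masses M_k^2(phi)
    c_k       : signed degeneracies (positive bosons, negative fermions),
                e.g. -12 for the top quark as in the text
    mu        : renormalization scale
    Non-positive squared masses (which can occur away from minima, where the
    real part of the potential would be taken) are simply skipped here.
    """
    m2 = np.asarray(masses_sq, dtype=float)
    c = np.asarray(c_k, dtype=float)
    keep = m2 > 0.0
    return np.sum(c[keep] * m2[keep] ** 2 / (64 * np.pi ** 2)
                  * (np.log(m2[keep] / mu ** 2) - 1.5))

# hypothetical field point: top, W, Z and two scalars (GeV^2), mu = 300 GeV
print(V_one_loop([170.0**2, 80.4**2, 91.2**2, 125.0**2, 300.0**2],
                 [-12, 6, 3, 1, 1], mu=300.0))
```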
As we will focus on normal vacua, we can consider that the effective potential depends only on the two real values ϕ 1 , ϕ 2 in the real neutral directions 4 : We reserve the symbols v 1 , v 2 in Eq. (2) to values at a minimum. So the field-dependent gauge boson masses retain the same functional form as in the SM with v 2 = v 2 1 + v 2 2 : The fermion masses depend on the type of 2HDM we are considering. We only consider models where FCNC are suppressed due to a Z 2 symmetry. We focus more on the type I model but our results are equally valid for all types II, X and Y because of the dominance of the top Yukawa. For the type I we have where y t,b are the Yukawa couplings of the third family quarks normalized to the SM values and the enhancement factor 1/ s β should be considered as the fixed value at the NV minimum at one-loop. We emphasize that information with the bracket . For the type II, we should replace the M b dependence on ϕ 2 and s β by ϕ 1 and c β respectively. We will see that, as usual, the top correction dominates the fermion loops and the difference between type I or type II is negligible for the one-loop corrections except for excessively large tan β which we do not consider. It is also justified that we only consider the effects of the top and bottom quarks; see Fig. 3 and comments in the text.
For the scalar contribution we need to calculate the eigenvalues of the matrix of second derivatives of V 0 for generic values of ϕ i . These mass matrices are shown in appendix C and their eigenvalues correspond to M 2 S (ϕ i ) of the 8 scalars S ∈ {G ± , H ± , G 0 , A 0 , H 0 , h 0 }. Due to charge and CP conservation, the mass matrices are still separated into three sectors: two charged scalars and its antiparticles, two CP odd scalars and two CP even scalars. We emphasize that e.g. M 2 G 0 (ϕ i ) is nonvanishing at ϕ i away from any tree-level minimum. It is the second derivative of the whole effective potential at one-loop that will vanish in the directions of the Goldstone modes.
IV. PARAMETRIZATION AND MINIMIZATION AT ONE-LOOP
We are interested in surveying the cases where the effective potential at one-loop (18) continues to have two local minima, one of which should be our vacuum with v = v 2 1 + v 2 2 = 246 GeV. The vevs v 1 , v 2 no longer satisfy the tree-level minimization relations in (5) but should now minimize the whole effective potential V 0 + V 1l . We need a convenient parametrization to ensure that one minimum has the appropriate value of v.
To parametrize V 0 , we will use as input the usual 8 quantities where v i satisfy the minimization equations (5). It is clear that these quantities define V 0 unambiguously by fixing the 8 parameters {m 2 11 , m 2 22 , m 2 12 , λ 1 , λ 2 , λ 3 , λ 4 , λ 5 }; see e.g. the first reference in [16]. When we add the one-loop contribution, it is clear that the true minimum will be shifted by a small amount from the position (v 1 , v 2 ) at treelevel. Instead of correcting for that shift, we add the finite counterterms to the potential and adjust the values of δm 2 ii so that v 1 , v 2 continue to be a minimum at one-loop. 5 This means that the one-loop effective potential (18) is now rewritten as We can see that δm 2 ii → 0 and V 1l → 0 in the limit where we turn off all couplings of the scalars to other particles including self-couplings. For small couplings, it is also expected that the physical masses are close to the masses It is possible to use a different renormalization scheme where all the masses and mixing angles at tree level are maintained at one-loop [21]. Our scheme, however, avoids the need to deal with infrared divergences coming from the vanishing Goldstone masses [8,22]. This problem is more severe at higher loop orders [23]. Now the minimization equations at one-loop can be separated into a tree-level part which leads to the tree-level equations written in (5), and a one-loop part that defines δm 2 ii by We can separate the derivative of V 1l into its contribution from scalars (S), vector bosons (V) and fermions (F): for each i = 1, 2. The dimensionless coefficients λ iS are given in appendix D and we use lowercase letters for m W,Z,t,b because they correspond to the actual values in the SM when we use our vacuum NV. For charged particles the contribution of the antiparticle can be taken into account by doubling the contribution of the particle. The fermion part corresponds to the type I model. For the type II model, we must replace s β → c β in the couplings to the b quark. We can see in Fig. 3 that the contributions from scalars are large for δm 2 11 and δm 2 22 while the top contribution is also large and negative for δm 2 22 . The contribution from the bottom quark is negligible for tan β ≤ 50 and there is no appreciable difference between the type I or type II model. Thus for definiteness we consider the type I model. We also note that the scalar masses and coefficients depend on δm 2 11 , δm 2 22 and (27) must be solved self-consistently. The only remaining task is to write M 2 S (v i ) in terms of the input parameters (23). We note that M 2 S (v i ) should be computed from the second derivatives of V 0 + δV . But the part coming from V 0 at ϕ i = v i corresponds to the usual masses at tree-level because (v 1 , v 2 ) still corresponds to a minimum of V 0 . Therefore, these matrices will have the generic form Specifically, the mass matrices for the different sectors read where c β s β −v 2 λ 5 are the masses squared for the charged Higgs and the pseudoscalar at tree-level; see e.g. [3]. Clearly the first and second matrices contain each a vanishing eigenvalue corresponding to the charged (G ± ) and neutral (G 0 ) Goldstone bosons in the limit where δm 2 ii → 0. We use the basis (φ (30), (31) and (32), respectively, from the parametrization where Using the same notation we can find the eigenvalues shifted by δm 2 ii as where the angles β + , β 0 , α are shifted from β, β, α by a small amount due to δm 2 ii : The explicit forms for δβ +,0 and δα can be seen in appendix D.
V. POLE MASSES AT ONE-LOOP
The previous section showed a way of ensuring that one of the minima of the effective potential at one-loop corresponded to our vacuum with v = 246 GeV and that tan β = v 2 /v 1 could be used as input at one-loop. The following task to describe a realistic 2HDM at one-loop is to ensure that the SM higgs boson mass corresponds to the experimentally measured value [24]: It is clear that at one-loop we cannot use this value for the tree-level parameter m h in (23). Instead, we must check that the pole mass corresponds to (37). 6 We follow Ref. [19, b] to calculate the pole massesm S of all the scalars S including the SM higgs boson by computing the self-energies of the theory at one-loop. We focus in this section on the CP even sector which will give rise to the pole masses of h and H restricted to the casem H >m h . The self-energy for the other sectors are given in appendix F.
The scalar self-couplings can be extracted from V 0 . Given a set of real scalars S i that interact through the quartic vertex −ig ijkl and cubic vertex −ig ijk , the self-energy Π ij for S i -S j coming from scalars in the loop with incoming momentum p 2 = s is given by [19] assuming we are in the basis with diagonalized masses (quadratic part of V 0 + δV ). The A and B functions are the Passarino-Veltman functions [26] which, in the notation of Ref. [18], read The A function represents the one-loop graph with one quartic vertex (tadpole) and the B function represents the one-loop graph with two vertices and two internal lines. Hence the factors 1/2 are the symmetry factors that appear in front of the Feynman diagrams for identical fields. These functions appear after renormalizing these diagrams using the MS prescription. Simplifications for vanishing s can be found in Ref. [18]. The contributions coming from gauge bosons and fermions in the loop are also shown in appendix F and all the necessary cubic and quartic couplings of the theory are explicitly shown in appendix E. For example, the SM higgs self-energy is given by where g h 2 h 2 = g h 4 and g hh 2 = g h 3 are the quartic and cubic self-couplings for h and all M S refer to M S (v i ). However, the mixing to the other CP even scalar H in Π Hh cannot be neglected. Due to charge and CP conservation, the self-energy for the different scalars decouple into separate pieces for the CP even, CP odd and charged sectors. The self-energy for the CP even sector is given by the matrix The one-loop pole squared massesm 2 H ,m 2 h for the CP even scalars H, h are the solutions s k for [19] det where we take only the real part of the self-energy because we are not interested in the decay widths. Them 2 h corresponds to the solution that continually approaches M 2 h in the limit where we turn off the interactions. A similar consideration applies to all pole squared masses.
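For orientation, hedged implementations of the finite MS-bar parts of the A and B functions are sketched below in one common convention; the signs and constants must be matched to the exact definitions of Refs. [18, 19], which are not reproduced above. The pole masses would then follow from scanning s for zeros of the determinant condition quoted in Eq. (43), using the real part of the self-energy matrix.

```python
import numpy as np

MU2 = 300.0 ** 2  # renormalization scale squared (GeV^2), as used in the text

def A(m2, mu2=MU2):
    """Finite MS-bar tadpole integral in one common convention:
    A(m^2) = m^2 [ln(m^2/mu^2) - 1]; zero for massless modes."""
    return 0.0 if m2 == 0.0 else m2 * (np.log(m2 / mu2) - 1.0)

def B(s, m2a, m2b, mu2=MU2, n=4000):
    """Real part of the finite MS-bar two-point integral, by Feynman-parameter
    quadrature: B(s; a, b) = -int_0^1 dx ln[(x*a + (1-x)*b - x(1-x)*s)/mu^2]."""
    x = (np.arange(n) + 0.5) / n
    arg = x * m2a + (1.0 - x) * m2b - x * (1.0 - x) * s
    return -np.mean(np.log(np.abs(arg) / mu2))

# quick check against the threshold-free limit B(0; m^2, m^2) = -ln(m^2/mu^2)
print(B(0.0, 200.0**2, 200.0**2), -np.log(200.0**2 / MU2))
```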
VI. NUMERICAL SURVEY
We describe here the procedure we used to survey the models. For definiteness, we adopted the relevant fixed parameters of the SM to be g = 0.6483; g = 0.3587; v = 246.954 GeV; y t = 0.93697; y b = 0.023937 .
The first four values were taken from Refs. [23,27] as the running parameters of the SM at the top pole mass µ = 173.34 GeV and the bottom Yukawa was adapted from its mass value m b = 4.18 GeV [24]. Among these parameters, only the top Yukawa appreciably affects the one-loop effective potential together with the quartic scalar self-couplings.
We use a fixed renormalization scale µ = 300 GeV for all calculations and note that the running of y t from the top mass scale only amounts to a small difference. The running for the rest of the parameters are even less relevant. We remark that any choice of the renormalization scale is allowed, since the difference in depth of the potential at two extrema is a renormalization scale-independent quantity [28]. However, from a practical point of view, it is desirable that the logarithms in the effective potential do not become too "large" so as to lead to numerical instabilities [29].
We have checked our calculation for some different values of the renormalization scale and the value of µ = 300 GeV proved to be a stable choice. Among the input parameters in (23), we fixed the standard vev v as above and took the rest of the parameters randomly in the range shown in the first row of Table I, restricted to m H > m h . After checking for simple perturbativity and bounded from below conditions at tree-level, 7 we picked only the points where the shifted masses squared M 2 S (v i ) were positive and the solutions for the shift δm 2 ii in (27) were real. Then we further selected only the points where the pole massm h calculated as in Sec. V fell in the experimental range of (37). 8 In this way, we generated the sample G with 294437 points among which 4525 had two minima at one-loop, 17 of which were non-regular. To find the second minimum, we explicitly minimized the real part of the effective potential (18) starting from the non-standard minimum at tree-level and then retained only the points where the value of the potential at that minimum was real [30]. Given the small number of non-regular points, we generated another sample denoted as NR focusing only on non-regular points by imposing large t β as in the second row of Table I; other ranges were kept the same, except for m 2 12 which was chosen positive. Sample NR thus contained 185905 points among which 1563 had two minima at one-loop. Hereafter, unless explicitly specified, we will consider only the joint sample of G and NR. However, we would like to emphasize that, even though samples G and NR have a similar number of points, the parameter space scanned by sample NR represents only a small portion of the parameter space probed by sample G. This justifies our definition of non-regular points. After the selections described above, the input parameters m H + , m A , m H get roughly confined to the range [90, 500] and the non-standard higgsses acquire pole masses in a similar range with slightly smaller maximal value form H + . In contrast, the distribution for t β is homogeneous in the range of Table I for the whole sample but, as we select only 7 Since we work with a fixed renormalization scale where the one-loop corrections are not large, we expect the tree level relations to be valid to a good approximation [31]. For bounded from below conditions, this is a conservative choice as one-loop corrections may enlarge the possible parameter space [32]. 8 The exact adopted procedure differs slightly with respect to the range of m h : instead of post-selecting only the values of m h for which the pole mass coincided with the experimental range, we randomly selected the input m h in the range [0, 200] GeV and then later varied only this value searching for the correct pole massm h . If a solution were found, we kept that point. This procedure resulted in the approximately homogeneous distribution of m h in the range shown in Table I. This modification speeded up the generation of points.
the points with two minima at one-loop, it gets separated into two ranges, 1 ≤ t β ≲ 3.8 for the regular minima, and 24 ≲ t β ≤ 50 for the non-regular ones.
To check our numerically implemented formulas, we performed the following consistency checks: 1. Vanishing of the pole masses for the Goldstone bosons G ± , G 0 for zero external momentum for the three cases where we successively add the one-loop scalar, vector boson and fermion contributions.
2. Equality of the pole masses for the CP even higgsses H, h for zero external momentum and the eigenvalues of the explicitly calculated second derivative matrix of the effective potential in the real neutral directions (20) in all three cases of successive addition of the one-loop scalar, vector boson and fermion contributions.
VII. RESULTS
Let us first quantify the shifts δm 2 ii in (27) for the different contributions. The contribution coming from the gauge bosons only depends on the value of v and, for fermions, it depends on v and β. The scalar contribution depends on many parameters coming from the scalar potential. The dependence of the different contributions on tan β is shown in Fig. 3. We can see that the dominant contribution to δm 2 11 comes from the scalars, whereas δm 2 22 also has large positive contributions from scalars but they are partly canceled by the negative contribution from fermions (top). Such a partial cancellation in δm 2 22 can be clearly seen in Fig. 4 where we show the contribution only from scalars (green points) and all the contributions (red points). The orange curve in Fig. 3, which quantifies the bottom contribution to δm 2 11 in the type II model, shows that it is negligible in the range of tan β we are interested in, so our calculation that uses the type I model applies equally well to the type II case. All the different contributions are calculated using Eq. (28); the scalar contribution in particular depends on the δm 2 ii themselves, and these are taken as the total contributions. The remaining contributions of gauge bosons (purple dashed curve in Fig. 3) are much smaller. The contribution from scalars is calculated using the whole sample (G + NR) described in Sec. VI, which also includes the points with only one vacuum.
[FIG. 3 caption: the different contributions to δm 2 ii in (27). The scalar contributions are positive for δm 2 11 (blue points) and mostly positive for δm 2 22 (green points) while the fermions contribute negatively. The fermion contribution to δm 2 22 (red curve) is practically the same for the type I and II models while the contribution to δm 2 11 (orange curve) applies only to the type II case but vanishes for the type I case. The contributions from the gauge bosons are shown as the purple dashed line.]
The deviation of the location of our vacuum when we add δV to V 0 is illustrated in Fig. 5 where we show the ratio of t β for V 0 + δV (t tree+δ β ) to that of V 0 (t β ) against the ratio of the vev for V 0 + δV (v tree+δ ) to that of V 0 (v). We only show the points with two coexisting minima and divide the points between the regular ones (blue) and the non-regular ones (red). We can see that as m 2 ii get shifted by δm 2 ii all points with two minima have their vevs decreased while t β mostly increases for the regular points and mostly decreases for the non-regular points. Note that the non-regular points only consider large t β whereas the regular points only include moderate t β roughly up to 3.8. As the location of (v 1 , v 2 ) for our vacuum is the same for V 0 and the one-loop corrected potential (25) we can also interpret this plot as the modification of the vev location of V 0 + δV compared to V 0 + δV + V 1l . If we had considered all the points including the points with only our vacuum, the majority of points would follow the behavior of the regular points in blue. For the case of coexisting vacua, we can see a clear difference in behavior between the regular points and the non-regular ones in Fig. 6 where we plot the potential difference with respect to m 2 12 . The left panel shows these quantities using the tree-level potential V 0 while the right panel is the same plot for the one-loop potential (25). We can clearly see that the potential difference goes continuously to zero as m 2 12 → 0 for the blue points representing the regular minima. Moreover, for positive m 2 12 our vacuum is guaranteed to be the global minimum in both the tree-level and one-loop potentials. This behavior is opposite for negative m 2 12 . The reason is that only m 2 12 breaks explicitly the Z 2 symmetry of the theory and it controls the degeneracy breaking of the spontaneously breaking minima. This behavior is not followed by the non-regular minima that do not have degenerate minima in the m 2 12 → 0 limit. Some points (green) in which our vacuum is not the global one at tree-level even get inverted and become the global minimum as the one-loop corrections are added. The same conclusion is reached if we had compared the one-loop potential to V 0 + δV instead: some cases where our minimum is not the deepest at tree-level become the global minimum at one-loop.
FIG. 4: Different contributions to δm 2 22 in (27): the green points quantify the scalar contribution only whereas the red points consider all the contributions in the type I or II model.
We can have an idea of the different one-loop contributions in Fig. 7 where we separate the potential difference in Fig. 6 into its different contributions. Regarding the regular points (left plot), it can be seen that the one-loop potential difference is almost entirely due to the tree-level contribution (blue) since the contributions from δV (yellow), fermions (red) and scalars (purple) approximately cancel each other while the contribution from gauge bosons (green) is negligible compared to the others. This behavior justifies our choice for the renormalization scale. For the nonregular points (right plot), no clear pattern emerges. We can also see in Fig. 8 that there are points where the potential difference is raised as well as points where it is lowered by the one-loop corrections for both regular and non-regular points.
Considering that the discriminant (17) test is applicable for all cases where at least one normal vacuum is known, we can investigate if it is still a good predictor for the one-loop potential with two coexisting minima. We can adapt (1) is considered. This plot should be compared with Fig. 1. Right: The full 1-loop effective potential (25) is considered. The green points represent a change in sign of the tree level prediction for the potential difference. the tree-level discriminant to one-loop in the following three ways: The quantity D 1 is the discriminant calculated with V 0 as the whole potential while D 2 is calculated by using V 0 + δV in (25). In the latter case, the quadratic parameters m 2 ii get shifted and the location of the minimum, denoted above as (v (0) , β (0) ) or as tree + δ (superscript/subscript) in Fig. 5, do not coincide with the ones used as input, denoted as (v, β). The last adaptation D 3 considers the shifts in the quadratic parameters but keeps the vevs as (v, β). We will test here if any of these discriminants are capable of distinguishing if our vacuum is the global one at one-loop only using parameters at tree-level. 9 For the regular coexisting minima, Fig. 6 shows that m 2 12 is already a good predictor of the global minimum, but we can test the discriminants in (45). The result of this test is shown in Fig. 9 where the potential difference at one-loop 9 Strictly speaking, some calculation at one-loop is required for some of these quantities depending on how the calculation is set up. Using the splitting of the potential in the form (25), D 1 is the most natural quantity to use and there is no one-loop calculation required. If V 0 + δV is considered as the potential at tree-level, D 2 or D 3 are the natural quantities depending on which minimum is taken.
FIG. 8: One-loop correction to the potential difference relative to the tree level potential difference. The color coding follows Fig. 6.
is plotted against the three discriminants. We can see in the left and middle plots that D 1 and D 2 correctly predict the global minimum of the one-loop effective potential while the right plot shows that D 3 fails for more than 10% of the points. In constrast, the failure of all the discriminants (45) for the non-regular points (of samples G and NR) can be seen in Fig. 10 which shows the potential depth difference as a function of the discriminants, similarly to the regular points in Fig. 9; note that the horizontal scale is very different. The green points in the second and fourth quadrants mark the cases where the discriminant D 1 predicts the opposite behavior at one-loop. We can clearly see that all discriminants fail for a significant portion of points, not necessarily for the same ones. From the property of D 1 , we could have seen its wrong prediction in Fig. 6 as well.
At last, to make sure that the points for which the discriminant test fails include phenomenologically realistic points, we have checked the viability of the red/green points in Fig. 6 by considering phenomenological constraints implemented in the 2HDMC code [33]. We found that in the case of the type I model around half of the green points are allowed by experimental constraints such as the S, T, U precision electroweak parameters and data from colliders implemented in the HiggsBounds and HiggsSignals packages [34,35], while in the type II model they are all excluded. For the type II model we have also included constraints coming from R b measurements [36] as well as B meson decays [37] which prove to be very strong by setting m H + > 480 GeV independently of the value of tan β. For type I, since the red/green points have tan β > 10, these constraints do not impose any further restrictions [38]. We also checked that all points still respect simple bounded from below and perturbativity constraints [3].
VIII. CONCLUSIONS
We have studied the one-loop properties of the real Two-Higgs-doublet model with softly broken Z 2 with respect to the possibility of two coexisting normal vacua. The softly broken nature must remain at one-loop and the case of two coexisting normal minima can be classified into two very distinct types depending on their nature in the vanishing m 2 12 limit: regular minima that spontaneously break the symmetry and the non-regular minima (or minimum) that preserve the symmetry, i.e., they are inert-like in the symmetric limit. Since in the first case the two minima are degenerate in the symmetry limit, even at one-loop, they are connected by the Z 2 symmetry and then they should differ only by the sign of v 2 . After the inclusion of the m 2 12 term, the sign of v 2 continue to be opposite for our vacuum and the non-standard one so that the two regular minima are found in the first and fourth quadrants in the (v 1 , v 2 ) plane. In contrast, the non-regular minima deviate from the inert-like minima and both deviate to the first quadrant when m 2 12 is positive. The vacua that spontaneously break Z 2 in the m 2 12 → 0 limit behave rather regularly and we can distinguish which coexisting minimum is the global one by just examining the sign of m 2 12 : when it is negative our vacuum is only a metastable local one and the opposite is true if it is positive. For this type of coexisting vacua, the discriminant at tree level [D 1,2 in Eq. (45)] is still a good predictor of the nature of the minimum it is calculated with.
For the non-regular coexisting vacua, m 2 12 is positive in the convention that our vacuum has both vevs positive, and it cannot be used as an indicator. At tree level, the discriminant of Ref. [16] is a very convenient way of testing if our vacuum is the global one because only the location of our minimum is required. However, at one-loop, this discriminant is not a precise indicator of which minimum is the global one. We have found realistic cases where our vacuum is not the global minimum at tree-level but it becomes the global one after the addition of the one-loop corrections. A few cases with the opposite behavior were also found. As the discriminant effectively distinguishes the sign of the potential difference between the coexisting minima at tree-level, the latter itself is also not a good indicator for the non-regular minima. This is reminiscent of the exact Z 2 symmetric case (inert model) investigated in Ref. [18]. On the other hand, we were unable to find a discriminant that works for both regular and non-regular minima at one-loop. Finding a simple and precise criterion for the global minimum does not seem to be an easy task, as that was not achieved even in the simpler exact Z 2 limit. We also emphasize that for our parametrization, which enforces our vacuum to be a minimum from the start, and for the chosen generic ranges of parameters (sample G), the occurrence of non-regular minima, as suggested by their name, is much rarer: only 38% correspond to the non-regular cases and, among them, only 0.3% are coexisting minima.
In summary, the soft-breaking term m 2 12 controls the lifting of the degeneracy of the regular coexisting minima (moderate tan β) even at one-loop and can be used as the sole indicator of which minimum is the global one. That is not true when two coexisting non-regular vacua (large or small tan β) exist: the discriminant that is a precise indicator at tree-level is not reliable at one-loop and explicit calculation of the potential depths must be carried out.
Appendix A: Deviation of a potential minimum Take a potential V 0 (ϕ) depending on real scalar fields ϕ i for which we know a minimum (extremum) ϕ i =φ i satisfying We want to quantify how the location of the minimum and the value of the potential deviate when we add a small perturbation U on the potential as We assume V 0 has no flat direction aroundφ. We first quantify the deviation of the location of the minimum as The derivative of the perturbed potential gives Since the first term vanishes due to (A1), we get the deviation to first order where U i ≡ ∂U (φ)/∂ϕ i and M ij = ∂ 2 V0(φ) ∂ϕi∂ϕj is the squared-mass matrix aroundφ. The deviation in the value of the minimum can be equally expanded aroundφ as after using (A5). Generically the third term contributes to deepen the potential. That is the dominant contribution when the perturbation vanishes on the unperturbed minimum: U (φ) = 0. The latter happens for the m 2 12 term when perturbing the inert-like minima while for the spontaneously broken Z 2 minima the dominant term is the second, linear in the perturbation.
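The two formulas of this appendix translate directly into a few lines of linear algebra. The sketch below assumes the standard first-order results δφ = −M⁻¹ ∇U and δV ≈ U(φ̄) − ½ ∇U · M⁻¹ · ∇U, which is my reading of the equations referenced above (they are not reproduced in the text), so the expressions should be checked against (A5)-(A7) before use.

```python
import numpy as np

def perturbed_minimum(phi_bar, hessian, grad_U, U_at_min):
    """First-order response of a nondegenerate minimum of V0 to V0 + U.

    phi_bar  : location of the unperturbed minimum
    hessian  : squared-mass matrix M_ij of V0 at phi_bar
    grad_U   : gradient of the perturbation U at phi_bar
    U_at_min : U(phi_bar)
    Returns the shifted minimum and the change in the potential value.
    """
    Minv_g = np.linalg.solve(hessian, grad_U)
    delta_phi = -Minv_g                          # delta_phi = -M^{-1} grad U
    delta_V = U_at_min - 0.5 * grad_U @ Minv_g   # quadratic term always deepens
    return phi_bar + delta_phi, delta_V

# toy usage: a 2D quadratic bowl perturbed by a small tilt
phi, dV = perturbed_minimum(np.array([1.0, 2.0]),
                            np.array([[4.0, 0.5], [0.5, 3.0]]),
                            np.array([0.1, -0.2]), U_at_min=0.05)
print(phi, dV)
```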
Appendix B: Finding more than one normal vacua Let us rewrite the minimization equations in Eq. (5) as where and ζ ≡ 2m 2 12 /v 1 v 2 depends on the vevs. To find solutions (v 1 , v 2 ) of (B1) for m 2 12 = 0, we first need an equation for ζ. For that end, we can formally write (B1) as X = A −1 ζ M and equate v 2 1 v 2 2 = 4m 4 12 /ζ 2 . We obtain where µ 1 = λ 345 m 2 11 − λ 1 m 2 22 and µ 2 = λ 345 m 2 22 − λ 2 m 2 11 . The m 2 12 → 0 limit is taken as ζ → 0 and m 4 12 ζ 2 → v 2 1 v 2 2 /4. When m 2 12 = 0, we can find the possible values for ζ (extrema) from the quartic equation We know there are at most four real solutions coinciding with the maximal number of extrema in this case [11]. We also can see that the root λ 345 + √ λ 1 λ 2 from the lefthandside is always positive due to bounded below conditions. Possible solutions for (B1) depends on the sign of ζ and we allow both signs for v 1 v 2 . We can see that for the same ζ, flipping the sign of m 2 12 is equivalent to flipping the sign of v 1 v 2 . Once some solution ζ = ζ 0 is found, we can find the vevs from the relation X = A −1 ζ M or, explicitly, After ensuring these expression are positive, we can extract v 1 , v 2 and tan β with the sign convention where v 1 > 0 and sign(v 2 ) = sign(ζm 2 12 ) .
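As a numerical cross-check of this procedure, one can also look for all real solutions of the two stationarity conditions directly, e.g. with a multi-start Newton iteration as sketched below. The explicit stationarity equations used here assume the standard softly broken Z2 potential with real vevs, v_i [m²_ii + ½ λ_i v_i² + ½ λ_345 v_j²] = m²_12 v_j; this is my reading of Eq. (5), which is not reproduced above, so the conventions and normalizations should be checked against the paper before use.

```python
import numpy as np

def stationarity(v, m11sq, m22sq, m12sq, l1, l2, l345):
    """Assumed CP-conserving stationarity conditions for (v1, v2)."""
    v1, v2 = v
    return np.array([
        v1 * (m11sq + 0.5 * l1 * v1**2 + 0.5 * l345 * v2**2) - m12sq * v2,
        v2 * (m22sq + 0.5 * l2 * v2**2 + 0.5 * l345 * v1**2) - m12sq * v1,
    ])

def find_normal_extrema(params, starts, tol=1e-10, eps=1e-6):
    """Multi-start Newton iteration; returns the distinct (v1, v2) solutions."""
    sols = []
    for s in starts:
        v = np.array(s, dtype=float)
        for _ in range(200):
            f = stationarity(v, *params)
            J = np.column_stack([(stationarity(v + eps * e, *params) - f) / eps
                                 for e in np.eye(2)])   # numerical Jacobian
            try:
                step = np.linalg.solve(J, f)
            except np.linalg.LinAlgError:
                break
            v = v - step
            if np.max(np.abs(step)) < tol:
                if not any(np.allclose(v, w, atol=1e-4) for w in sols):
                    sols.append(v.copy())
                break
    return sols

# toy usage with hypothetical parameters (GeV units), keeping the v1 > 0 convention
params = (-9000.0, -12000.0, 2000.0, 0.5, 0.4, 0.3)
starts = [(10, 10), (200, 50), (50, 200), (200, -50), (50, -200)]
print(find_normal_extrema(params, starts))
```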
Appendix C: Matrix of second derivatives The mass matrices at tree-level for the charged, CP-odd and CP-even scalar sectors prior to imposing the minimization conditions are given respectively by The term (quadratic) refers to the quadratic contribution given by The derivatives are with respect to the fields around the values in Eq. (2).
Appendix D: Cubic derivatives
The coefficients λ iS in (28) are defined as where S = {G + , H + , G 0 , A, H, h}, sec ={char, odd, even} refers to the different scalar sectors and U sec diagonalizes M sec . The last subindex SS refers to one of the diagonal entries following the ordering in (35).
Explicit calculation leads to where the mixing angles β + , β 0 , α were defined in (36) with The coefficients λ 2S can be obtained from λ 1S for each scalar S above by using the replacement Q 12 in (G4).
Appendix E: Cubic and quartic couplings
Given that the cubic and quartic couplings do not depend on the quadratic parameters, they correspond to the tree-level ones listed, e.g., in Ref. [39] if we are in the limit α → α, β 0,+ → β. If we want the couplings with these corrections, we can adapt the rotation angles α → α or β → β + or β → β 0 for the couplings that do not mix the charged sector with the CP-odd sector because in the latter there is an ambiguity in distinguishing β + from β 0 . Another ambiguity arises if the couplings are written in terms of (v, β) instead of (v 1 , v 2 ) because then we need to distinguish β coming from the vevs and the β 0,+ coming from the diagonalization of the shifted mass matrices.
We adopt the convention that −ig ABCD is the Feynman rule associated to the vertex ABCD. This is opposite to e.g. Ref. [39]. We also abbreviate e.g. g G 0 G 0 G 0 G 0 as g G 04 . The essential set of quartic couplings is More couplings can be obtained from simple replacements; see table II.
Coupling
Obtainable from Replacement The essential set of cubic couplings is The remaining couplings can be obtained through reparametrization symmetries as shown in table III.
Coupling Obtainable from Replacement Coupling Obtainable from Replacement Finally we note that, although the reparametrization symmetry already allows a huge simplification in the computation of the cubic and quartic couplings needed for our calculation, their use can be error-prone. In our routines we have adopted a different approach, namely we expanded the tree-level scalar potential in terms of physical fields S = {h, H, A, G 0 , G + , H + , G − , H − } and performed derivatives to obtain the desired couplings. For instance, Appendix F: Self-energy for the scalars We show here the different contributions to the self-energy of scalars due to scalars (S), fermions (F) and gauge bosons (V) in the loop.
We first list the contributions from scalars in the loop for the different sectors. For the CP odd sector we have Some couplings are absent because CP is conserved and A, G 0 are CP odd.
In the CP even sector we have The self-energy for the charged sector is Note that couplings such as g G + G − A 0 = 0 due to CP conservation. The fermionic corrections to the propagator of a scalar S to a scalar S are given by the general formula [19]: where N c is the number of colors of the fermion in the loop, the B function was given in (40) We denote the vertex of the scalar S k to the fermionsf and f by −iY k f f and they may contain the γ 5 matrix whilē Y refers to the transformationΓ = γ 0 Γ † γ 0 in spinor space. These Yukawa couplings are listed in table IV where the coefficients C S f depend on the model used as in table V.
The couplings C x SS and C x S can be read from the following tables. | 2017-11-04T19:05:40.000Z | 2017-07-14T00:00:00.000 | {
"year": 2017,
"sha1": "4a66f28239d147c1373229d706dc30d1e88ce9fb",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP11(2017)106.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "4a66f28239d147c1373229d706dc30d1e88ce9fb",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
4668375 | pes2o/s2orc | v3-fos-license | DATA MINING CONSULTING IMPROVE DATA QUALITY
Data are important for making decisions. However, the quality of the data affects the quality of decisions. Data mining as one of the most important sources of knowledge needs high quality data to mine, but there are not enough good quality data in many enterprises. By analyzing the reasons for low data quality systematically, a new method called data mining consulting for improving data quality has been established. It defines data quality in a wider sense from the view of data mining, finds data quality problems, and solves data quality problems by a series of methods. Its application shows that it has good practicality and can increase data quality considerably.
INTRODUCTION
In recent years, data has become more and more important in the information age. Individuals, companies, and organizations usually make decisions based on data. Data accumulate so quickly that data mining technology has to be used for analyzing data. Today data mining as a knowledge resource has been widely accepted especially in large-sized enterprises, government, and financial departments (Han & Micheline, 2006; Shi, 2002). More and more leading-edge organizations are realizing that data mining provides them with the ability to reach their goals in customer relationship management, risk management, fraud and abuse detection, etc. Also, data mining is becoming a key technology to e-business (Noonan, 2000). Data mining could help enterprises establish a knowledge base during the development phase and aid in making the right decisions instead of making mistakes, gaining an early benefit from the informationalization process. Additionally, it could bring added value from data services and new revenue. However, following the rule of "garbage in, garbage out," data mining needs high quality data, while there are often not enough good data in many enterprises for data mining to yield credible conclusions. From PriceWaterhouseCoopers' survey in New York in 2001, 75% of 599 companies had economic losses because of data quality problems (Pierce, 2003).
Previous research on improving data quality was from the view of information systems (IS) (Wang, 1993; Aebi & Perrochon, 1993; Wang, 1995; Missier, 2003; Dasu, 2003; Scannapieco, 2004) or from the view of data warehousing (Rahm, 2000). Their ranges were not large enough for the needs of data mining. Data cleaning and Extraction-Transformation-Loading tools (Hernandez, 1998; Lee, 1999; Galhardas, 2000; Galhardas, 2001; Raman, 2001; Guo & Zhou, 2002; Dasu, 2003) or tolerance algorithms (Zhu & Wu, 2004) have been used to mine low quality data. However, those methods can only improve the current data quality for mining. Low quality data are created every day and will make the data dirty again. Moreover, data cleaning conceals the source of dirty data, so not enough actions are taken to improve the system, therefore forming a vicious circle as shown in Figure 1.
Figure 1. Vicious circle of data cleaning
In order to get credible conclusions, data cleaning and processing accounted for 80-90% of the workload of a data mining project (Johnson, 2003).That makes data mining so difficult that most small and medium businesses cannot afford to do it.Data quality problems have become an important factor in data mining applications (Dasu, etc., 2003).Dasu (2003) gave a good start to data cleaning from the earlier phase by expert systems.
We have done more research on data quality from the view of data mining.The purpose of this paper is to propose a systemic solution for improving data quality.The rest of our paper is organized as follows.In Section 2, we introduce the definition of data mining consulting methodology.Section 3 presents the details of data mining consulting.We give an example of applying our model to a real application leading to a satisfactory answer in Section 4. The paper is summarized in Section 5.
WHAT IS DATA MINING CONSULTING?
Data mining consulting was put forward by Extension Theory, which was established by Prof. Wen Cai in China in 1976.Extension Theory is a discipline that studies the extensibility of things, the laws and methods of exploitation, and the innovation needed to solve all kinds of contradiction problems in the real world with formalized models (Cai, 2005).Extension theory establishes matter-element, affair-element, and relation-element to describe matter, affairs, and relations.From the view of matter-element analysis in theory, matter can be divided into two parts: an imaginary part and a real part from the view point of material nature of matter.Or the division can be into soft parts and hard parts from the view point of systems, which is called the conjugate nature of matter-elements.The theory says that the real part is the base and the imaginary part is what we use (Cai, 1999).
According to Extension Theory, data mining is conjugated. The real part consists of data mining techniques and software tools, and the imaginary part is the idea of data mining and its methodology. As usual, the imaginary part plays a very important role. Based on the integration of the real and imaginary parts, we put forward the following methodology for improving data quality, called the data mining consulting method. Data mining consulting consists of three parts: the principles, the technology of software engineering, and the rules of management, as shown in Figure 2. These three parts interact with each other and form two circles. In the outer circle, the principles of data mining give a new view of data quality management and other related rules; the new management rules then make software engineering more effective, so the software becomes more suitable for data mining. At the same time, a series of new management rules derived from data mining can strengthen the principles; strong principles of data mining can then guide software design and implementation, and good software design and implementation can decrease the workload of management. This forms the small co-adjustment circle seen in Figure 2. Starting from the principles of data mining, the conditions that data mining needs and its standard rules are listed and then traced back to the use of the software, and actions are taken to prevent the creation of dirty data. Throughout the whole cycle, a series of management rules is used to reduce human mistakes. This series of principles, rules, and actions, spanning requirements analysis, database design, and software development through to data integration, cleaning, and mining, is called data mining consulting. Its aim is to improve data quality and to make data mining projects efficient and easy to run.
Framework of data mining consulting
In order to improve the accuracy and integrity of the data, a data mining consulting solution framework is presented in Figure 3.
Figure 3. The framework of data mining consulting
In this framework, the data quality needed for data mining is first specified (given in detail in Section 3.2). We collect the data set and identify the gap between the present data and the objective data from the view of data mining, applying data mining consulting actions including data mining testing, data quality analysis, data structure adjustment, storage and integration, time remaining, etc., until the data meet the standard quality requirements.
By recycling data mining experiments and taking improvement measures, the data gap will be decreased and high-quality data will take the place of poor data. Once the conclusions of data mining begin to benefit a business's decision-making, senior management will pay more attention to data accuracy and take effective measures that boost information system development, such as increasing investment, improving management, and emphasizing data analysis. With the above measures, we can expand the demand for data, integrate more data, deal with the relevant quality issues, and move on to the next phase of data mining consulting and implementation. This kind of spiral-recycling implementation will accelerate the transformation from unready mining data to ready mining data and also enhance the quality of corporate information systems (Li, 2006).
Aebi and Perrochon (1993) give a definition of data quality from the view of an information system: data quality measures the degree to which consistency, instance correctness, completeness, and minimality are achieved in a given system. That is true for information systems (IS) but not sufficient from the view of data mining. Table 1 adds further requirements from the view of data mining, for example:
Consistency - the same entity should carry the same identifier across systems (e.g., a customer's ID in the CRM system is "1100", while in the POS system it is "021233").
Minimality - no repeated records after integration (e.g., a single sales record became 3 records after integration).
Reliability - integration results should be stable regardless of who performs the integration or when (e.g., attributes in the product table changed between two integration runs, resulting in confused sales information).
If the quality of the information system does not meet the needs of data mining, improvements should be made while running the data mining trial, or action should be taken first to improve the data quality. Rahm et al. (2000) classified the major data quality problems that can be solved by data cleaning and data transformation. These problems are closely related and should thus be treated in a uniform way. Data transformations are needed to support changes in the structure, representation, or content of data. Such transformations become necessary in many situations, e.g., to deal with schema evolution, migration of a legacy system to a new information system, or when multiple data sources are to be integrated. The classification of data quality problems (Rahm, 2000) is shown in Figure 4. In this classification, data quality problems are distinguished between single-source and multi-source problems and between schema- and instance-related problems. Schema-level problems are also reflected in the instances; they can be addressed at the schema level by improved schema design (schema evolution), schema translation, and schema integration. Instance-level problems, on the other hand, refer to errors and inconsistencies in the actual data contents that are not visible at the schema level; they are the primary focus of data cleaning. These data quality problems cover the ETL process (extraction, transformation, loading) basically used in data warehousing. To meet the needs of data mining, two more questions must be considered: first, once the data mining objective is selected, can the IS provide all the data for it; and second, even when the single-source, multi-source, schema-related, and instance-related problems have been solved, will the data then meet the needs of mining? To trace back all possible problems, Li (2007) gave a flowchart as shown in Figure 5.
Figure 5. Flowchart for tracing all possible data quality problems
Starting from the mining process, we can trace back through further processes: data cleaning, data integration, software implementation, and software design. Such analysis finds additional problems, as listed in Table 2. Two data quality problems need to be solved systematically: a data set that cannot support objective-oriented mining, and human errors during the data input process. The earlier we discover these problems, the more easily we can solve them.
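To make this kind of tracing concrete, the sketch below shows the sort of automated check that could flag two of the quality dimensions discussed above (minimality and consistency) before mining starts. It is only an illustrative outline: the file names, column names, and pandas-based approach are our own assumptions and are not part of the original framework.

```python
import pandas as pd

# Hypothetical integrated sales table and the two source systems' customer lists.
sales = pd.read_csv("integrated_sales.csv")   # assumed export after integration
crm = pd.read_csv("crm_customers.csv")        # assumed columns: customer_id, name
pos = pd.read_csv("pos_customers.csv")        # assumed columns: customer_id, name

# Minimality: the same sale must not appear more than once after integration.
dup_mask = sales.duplicated(subset=["order_id"], keep=False)
print(f"{dup_mask.sum()} duplicated sales records need investigation")

# Consistency: every customer should map to a single identifier across systems.
merged = crm.merge(pos, on="name", suffixes=("_crm", "_pos"))
mismatched = merged[merged["customer_id_crm"] != merged["customer_id_pos"]]
print(f"{len(mismatched)} customers have different IDs in the CRM and POS systems")
```

Checks like these can be run routinely during the data accumulation phase, so that dirty data are reported to the responsible staff as soon as they appear rather than being silently cleaned away.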
Actions for improving data quality by using an information system
If a data set cannot support objective-oriented mining, we can improve the information system by analyzing the decision objectives against the current data set, identifying the data gap with a data map, and then taking action (Figure 6). The process includes five steps, as follows: Step 1. Defining the business objectives that are to be supported by decision-making, according to corporate strategy and the competitive environment.
Step 2. Based on the data that is required by business objectives, making the data map.
Step 3. Identifying whether the present information system can satisfy the requirements of data mining, based on the business objectives and the data map; if not, choosing and applying a complementary software system. Step 4. Putting the complementary information system into use and accumulating data.
Step 5. Starting the data mining project to gain data mining results for decision-making when the mined data has accumulated to the required amount.
This process can help organizations to improve efficiently their information systems according to their business requirements.In other words, it could have an effect in a very short time.
Actions for improving data quality by management
Concerning human error during the data input process, it is very important to let the persons doing the input know that the data will be used for mining and can produce valuable knowledge for the business, rather than just being stored in the database and possibly deleted. The process for improving data quality divides the daily working process into four main phases: (1) Planning: set data quality objectives and propose key action measures; make IS users understand the goals. The goals and action measures are communicated to workers and tied to rewards.
(2) Responsibility: define the responsibility of each position concerned with data quality and its Key Performance Indicators (KPIs).
(3) Results tracking: set up an inquiry system based on facts and data analysis, periodically inquire about data quality achievement, and find dirty data in earlier phases.Finally propose improved methods to ensure the implementation of data quality objectives through meetings or in other ways.
(4) Performance evaluation: evaluate each employee's achievement or contribution to data quality; then grade the evaluation results and give rewards.
The processes above can also make information system users (often in charge of data input), managers, software designers, and database administrators share their experiences in daily work and form a responsible and cooperative culture among the team.Moreover, a series of well-written documents on how to improve data quality will greatly help.
CASE STUDIES
In web companies, the number of registered customers and ordinary visitors has increased rapidly since 1998. These companies provide richer information and more products, and the accumulated data of each business unit has become more abundant as well. These data become useful if they are analyzed and mined. However, OLAP statistical analyses are quite descriptive and do not illustrate the rules and the business value behind the data. It is therefore very difficult to know the intrinsic relationships among the data, understand the real demands of customers, or forecast future requirements. One reason may be that the information customers supply is insufficient or has low validity and reliability. In this situation, many data mining firms are unlikely to undertake data mining projects with such a company. To discover the characteristics and real requirements of clients and to develop corresponding products as soon as possible under increasingly intense competition, one company cooperated with us to solve its problem. Our team analyzed the operation of the customer data in detail with the help of Extension Theory and rich data mining experience, and proposed using data mining consulting to improve data quality and carry out the project in phases. At present, software based on a multi-objective linear programming method has been used in experimental data mining. Correspondingly, some preliminary conclusions have been deduced and have had a good effect on the application.
CONCLUSIONS
This paper provides an overview of data quality problems and enlarges the concept of data quality from the view of data mining. Through systemic analysis, we find that it is necessary to redefine data quality for the needs of data mining. Based on Extension Theory, data mining consulting provides a novel solution for improving data quality from the earlier phases, such as requirements analysis, software design, software implementation, and data integration. Some small and medium businesses that have poor data quality, or even no information systems, can be supplied with specific solutions so that they can attempt data mining projects. Nonetheless, there are still some limitations on the implementation of data mining consulting, and much work remains to be done.
For instance, teams of data mining consultants need experts skilled both in business and in data mining. The process of keeping data clean from the beginning is challenging and takes time. It is effective for companies that accumulate data over a short time, but it can do less for old data. However, the process is highly practical for data mining in enterprises with low-quality data and can help find data quality problems earlier, making all workers realize the value of data. We believe that our process provides a good basis for collecting high-quality data and can eventually solve data quality problems from end to end.
Figure 2. Structure of data mining consulting
Figure 6. Improving the information system based on data mining consulting
Table 1. Data quality added from the view of data mining
Table 2. Data quality problems from the view of data mining
Table 3. Data mining precision contrast after data mining consulting
This research has been partially supported by grants from the National Natural Science Foundation of China (#70621001, #70531040, #70501030, #10601064, #70472074), the National Natural Science Foundation of Beijing (#9073020), 973 Project #2004CB720103 and National Technology Support Program #2006BAF01A02 of the Ministry of Science and Technology, China, and BHP Billiton Co., Australia. | 2018-03-31T13:05:06.586Z | 2007-10-12T00:00:00.000 | {
"year": 2007,
"sha1": "36655c1583921320a61a284b3a20745c2b37e9ed",
"oa_license": "CCBY",
"oa_url": "http://datascience.codata.org/articles/10.2481/dsj.6.S658/galley/480/download/",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "36655c1583921320a61a284b3a20745c2b37e9ed",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
250694268 | pes2o/s2orc | v3-fos-license | Model for hydrogen-induced Sb atoms reconstruction √3 × √3 → 2 × 1 on Si(111)
We propose a 6-state model to describe the hydrogen-induced √3 × √3 (trimers) → 2 × 1 (zigzag chains) reconstruction in the 1 ML Sb/Si(111) system. In agreement with recent experimental data, we demonstrate that adsorption of a small coverage of hydrogen atoms turns the √3 × √3 phase into a mixed √3 × √3 and 2 × 1 structure. The length and number of zigzag chains grow with increasing hydrogen concentration.
Introduction
Domains of three Sb-formed reconstructions (√3 × √3 [1,2,3,4,5], 2 × 1 [6] and 1 × 1 [5]) have been observed on the Si(111) surface at an Sb saturation coverage of 1 ML. Usually, patches of accompanying phases are found within the main phase, depending on the preparation conditions: initial Sb coverage, its flux rate, and substrate temperature. STM experiments [4] and first-principles ab initio calculations [3,7,8,9] show that in all ordered Sb/Si(111) structures the Sb atoms prefer positions on top of first-layer Si atoms. However, in both the √3 × √3 and 2 × 1 structures the Sb atoms are slightly shifted from the ideal on-top position, as shown in Fig. 1. In the √3 × √3 structure, three neighbouring Sb atoms move towards a mutual centre on top of a second-layer Si atom in the so-called T4 site and form trimers (milk-stool model). In the 2 × 1 phase, Sb atoms form zigzag chains.
The formation energies of the √3 × √3 and 2 × 1 phases are very similar in all the mentioned 1 ML Sb-covered systems. This was recently confirmed by experiment [4], where it was demonstrated that small amounts of atomic hydrogen adsorbed on the √3 × √3-Sb/Si(111) phase partly reconstruct it into H-2 × 1-Sb/Si(111). We propose that this reconstruction can be regarded as a phase transition described by a 6-state model. Here we present this model for the 1 ML Sb/Si(111) system and also demonstrate how small concentrations of hydrogen can induce the experimentally observed 2 × 1 ordering in the √3 × √3 phase.
Model
We denote the Sb atom states (shifts) on Si(111) in the way shown in Fig. 1. The Hamiltonian (per Sb atom) of our 6-state model then takes a form in which δ(p_i, q_j) is the Kronecker delta function, equal to 1 when the combination of states p_i and q_j on the nearest-neighbour Sb sites i and j corresponds either to the √3 × √3 or to the 2 × 1 phase, and zero otherwise. The Hamiltonian (1) consists of two competing parts: the part responsible for the occurrence of the √3 × √3-phase trimers, represented by the attractive pair interaction v_√3×√3 > 0 and the triple interaction v_t > 0, and the part stimulating the occurrence of the zigzag chains of the 2 × 1 phase, represented by the attractive pair interaction v_2×1 > 0. The interaction parameter v_t > 0 contributes only to the √3 × √3 phase. The three pairs of the v_2×1 term reflect the fact that 2 × 1 zigzag chains on the hexagonal lattice can run in three possible directions, possessing the (1,4), (2,5) and (3,6) states. Phase transitions with the energy (1), (2) were also calculated for v_t = 0. For the calculations by the Monte Carlo method (Metropolis algorithm) we use a hexagonal lattice of 48 × 48 Sb sites. The phase transitions from both ordered phases (√3 × √3 and 2 × 1) to the high-temperature disordered phase (with the probability of each state approximately equal to 1/6) are calculated [10] for different values of the ratio v_2×1/v_√3×√3. The phase transition temperature T_c is determined from the peak of the temperature dependence of the specific heat, and the phase diagram is obtained when there are no hydrogen atoms on the surface.
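As a concrete illustration of the simulation procedure just described, the following Python sketch implements a single-site Metropolis update for a generic q-state model on an L × L triangular (hexagonal) lattice with periodic boundary conditions. It is only a minimal outline under stated assumptions: the placeholder energy function simply rewards equal neighbouring states, whereas a faithful implementation would evaluate the trimer (v_√3×√3, v_t) and zigzag (v_2×1) terms of Hamiltonian (1); the temperature value and all names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

L, Q = 48, 6                 # lattice size and number of states, as in the paper
T = 0.5                      # temperature in units of the pair interaction (assumed)
state = rng.integers(1, Q + 1, size=(L, L))   # random initial configuration

# Neighbour offsets of a triangular (hexagonal) lattice stored on a square array.
NEIGHBOURS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, -1), (-1, 1)]

def local_energy(state, i, j, s):
    """Placeholder energy of site (i, j) in state s with its six neighbours.

    A real implementation would return the trimer and zigzag contributions of
    Hamiltonian (1); here we only reward equal neighbouring states so that the
    sketch stays self-contained.
    """
    e = 0.0
    for di, dj in NEIGHBOURS:
        if state[(i + di) % L, (j + dj) % L] == s:
            e -= 1.0
    return e

def metropolis_sweep(state, T):
    """One Monte Carlo sweep: L*L attempted single-site updates."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old, new = state[i, j], rng.integers(1, Q + 1)
        dE = local_energy(state, i, j, new) - local_energy(state, i, j, old)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            state[i, j] = new

for _ in range(1000):
    metropolis_sweep(state, T)

# Crude diagnostic: occupation of each of the six states after the sweeps.
print(np.bincount(state.ravel(), minlength=Q + 1)[1:] / (L * L))
```

Order parameters such as η_2×1 can then be estimated by averaging the corresponding δ indicators over configurations sampled after equilibration, while the specific heat follows from the energy fluctuations used to locate T_c.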
Effect of hydrogen adsorption
Evaporation of 3 ML of Sb at 650-670 °C and subsequent desorption leads to the formation of the 1 ML √3 × √3-Sb/Si(111) structure [3,4,5], which is the most often observed phase on the 1 ML Sb/Si(111) surface. However, large areas of zigzag chains corresponding to the 2 × 1 pattern are found when hydrogen atoms are additionally adsorbed on √3 × √3-Sb/Si(111) [4]. In order to simulate this experiment using the model (1), we distribute hydrogen atoms randomly in the √3 × √3 structure. The interaction parameters are chosen in the √3 × √3 part of the phase diagram, but very close to the boundary between the √3 × √3 and 2 × 1 phases. This choice is justified, since both phases have very similar surface formation energies [7]. We assume that a hydrogen atom, when adsorbed close to an Sb atom, randomly breaks one of its two Sb-Sb bonds (as shown in Fig. 1c) and cancels the interaction corresponding to the broken bond. If the broken bond belongs to a √3 × √3 trimer, the local energy increases by v_√3×√3 + v_t/3, and the two remaining v_√3×√3 interactions of the trimer can readily be substituted by the v_2×1 interaction. In this manner part of the trimers can be replaced by zigzag chains running in one, two or three directions. In our simulations this is demonstrated by the temperature dependences of the order parameters η_√3×√3 = ⟨δ(1,3,5)⟩ and η_2×1 = (1/3)(⟨δ(1,4)⟩ + ⟨δ(2,5)⟩ + ⟨δ(3,6)⟩) (see Fig. 2a). Insertion of hydrogen atoms decreases the T_c of the disordered-to-√3 × √3 transition, weakening and smoothing the anomalies of the functions characterizing this point. The length and number of zigzag chains grow with increasing hydrogen concentration c_H. At low c_H (< 0.05), trimers possessing an H atom lose one bond, but no chains with more than 3 Sb atoms are created. When c_H = 0.1, a large part of the H atoms are located at (or very close to) the ends of small chains, as can be seen in the snapshots of our simulation in Fig. 3. When c_H is around 0.2, part of the H atoms find themselves comfortable even inside the chains as well. In effect, hydrogen atoms force the zigzag chains to "freeze" inside the √3 × √3 environment, keeping the chains "pinned" by the H atoms.
In conclusion, we present the 6-state model to describe hydrogen-induced trimer to zigzag chains reconstruction, √ 3 × √ 3 → 2 × 1, in 1 ML Sb system on Si(111). Our model is perfectly suited to explain the experimental data [4], since similar scheme of hydrogen-induced reconstruction was recently suggested in this paper. | 2022-06-27T23:59:25.414Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "41590e869db5e89d9a552106d9ce86ae2d41f96b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/100/7/072037",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "41590e869db5e89d9a552106d9ce86ae2d41f96b",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
243839505 | pes2o/s2orc | v3-fos-license | Reflection of Buber's Moral Perspective on Teacher’s Professional Ethics: Teacher-Student Communication Area
Teachers in modern societies have a modelling and identity-forming function and a professional and ethical mission to help students achieve perfection. The aim of this research is to deduce guidelines for teachers' professional ethics in the area of teacher-student communication from the moral principles of Buber's point of view, because Buber pays special attention to the issue of ethics in human relations. The research paradigm is qualitative. The data collection method is documentary, and a descriptive-interpretive approach is used to analyze the data. The deductive method is used to provide guidance for each of the eight principles of teachers' professional ethics. The results indicate that, in the light of the unifying ability of the final moral criterion, the basic orientation of the other principles of teachers' professional ethics is correctly determined. In teachers' professional ethics, Buber assigns a real position and weight to each of the poles of teacher and student; in this way, the reflection of Buber's "I-Thou" relationship in educational processes significantly reduces the conflicts between educational principles in the relationship between teacher and student. In the area of teachers' professional ethics, we can therefore emphasize the positive concepts of Buber's thought, namely the "moral virtues".
Introduction
Since teaching has always involved the human factor, i.e. the learner, whose inherent potentials it is responsible for realizing, it is generally categorized among the professions most sensitive to ethics. In fact, teachers' professional ethics does not have a fixed position among the different branches of professional ethics. One reason could simply be the fact that teaching as an occupation does not have the definitiveness of a profession. This could also be attributed to the lack of an officially recognized ethical code (Haydon, 1996). Needless to say, among all human factors in an educational setting, the learner is the most important element on which teaching activities have direct effects. Therefore, the quality of the interaction between teacher and learner is of great significance, and the relationship between them has its roots in ethical teachings. Martin Buber (1878-1965), a prominent Jewish and existentialist philosopher (Seif, 2006), pays considerable attention to ethics. Unlike other existentialist thinkers who focused on the interaction between humans and the world or the relation of humans with God or existence, he puts his emphasis on the interaction among humans and believed that the human does not exist beyond his interaction with other humans (Macquarie, 1998).
Considering Buber's theory on the ethical nature and professional identity of teachers on the one hand, and taking into consideration the social issues of contemporary education that end in ethical conflicts, self-alienation, and collapsed identities of individuals (Gutek, 1997) on the other, rediscovering teachers' professional ethics on the basis of Buber's ethical views could be of great benefit to teachers and their interaction with their students. In order to reach a conceptual framework for teachers' professional ethics, a proper understanding is initially required so that a proper philosophical response can be given based on ethical principles. Table 1 provides a review of the history of teachers' professional ethics (Faramarz Gharamaleki, 2010). In order to make a deduction about teachers' professional ethics, it should be noted that the principles listed by Faramarz Gharamaleki (2010) seem to be more comprehensive. Table 2 gives a systematic definition of teachers' professional ethics, including, among others, the following principles:
3. Analyzing the encounter with objective ethical issues in professional activities - Teachers' professional ethics has to create a framework for analyzing objective ethical issues so that the ethical responsibility of teachers encountering ethical and professional dilemmas is defined.
4. Settling ethical dilemmas - The method by which problematic issues or ethical conflicts are settled has to be clearly defined.
5. Modification and correction of behavior - Procedures to correct moral vices are expected to be presented.
6. Self-purification and personality excellence - Indices have to be presented through which the teacher can purify their soul and realize their personality.
7. Moral models - The teacher's characteristics have to be defined as a moral model.
8. Expressing professional responsibility - The main pillars of the professional responsibilities of a teacher have to be clarified.
Numerous studies (Ametrano, 2014; Amini Mashhadi, 2018; Azizi, 2010; Carr, 2005; Jafari, Abolghasemi, Ghahramani, & Khorasani, 2017; Karimi, 2008; Majedi, Naderi, & Seifnaraghi, 2019; Motallebi Fard, Nawe Ebrahim, & Mohsen Zadeh, 2001; Warnick & Silverman, 2011) revealed that most of the research work in the field focuses on aspects of teachers' professional ethics such as the ethics of teaching, the professional capabilities of a teacher, academic qualifications, and the characteristics of a good teacher. This study, however, aims to conceptualize teachers' professional ethics for all the principles in this domain and offer solutions relevant to them. The dominant view in the field is that, since the emphasis of ethics in existentialism is on personal choice in objective circumstances, it is based on relativism (Holmes, 2003). Nevertheless, Buber, with his belief in the existence of God, tries to give some space to absolute values in human life, while on the other hand strongly supporting the idea of choice and human responsibility in the face of changing life conditions (Friedman, 2003). Thus, Buber's ethical theory has numerous strengths that can be utilized. The present study intends to extract guidelines for teachers' professional ethics in the domain of teacher-student interaction using the ethical principles emphasized by Buber in the realm of human-human interaction.
Material and Methods
The present study is qualitative research. The required data were collected through the documentary method, and the collected data were analyzed through descriptive and interpretive analysis. In this regard, the researcher attempts to extract the real concepts and themes from the data (Mirzaei, 2016). In order to present guidelines for each of the eight principles of teachers' professional ethics, the inferential (deductive) method is used. In this method, the researcher bases their work on a particular philosophical theory and derives its logical consequences for the domain of education (Bagheri, Sajadieh, & Tavasoli, 2010).
In the present study, considering the principles and propositions of "normative ethics, or theorems related to the practical criteria of good and evil (Sharafi & Imani, 2010)," from Buber's point of view, criteria and indices are introduced for each of the principles of teachers' professional ethics.
Principles of Teacher's Professional Ethics in the Domain of Teacher-Student Interaction
1-Ultimate Criterion of Ethics: Establishing "I-Thou" Relation with "Otherness"
According to Buber, two basic modes of interaction can be found in the relation humans have with the other: one is the real and desirable one, known as the "I-Thou" relationship, and the second, which is the dominant relationship in the modern world, is the "I-It" relationship. The I-It relationship is characterized by a lack of existential integrity and by the person being present to the other only by experiencing the other in order to realize material purposes. In the I-Thou relationship, an individual presents his/her self to the other with existential integrity and without any intention to reach a goal (M. Buber, 2016).
Establishing an I-Thou relationship with the student must be the ultimate or original ethical criterion in the teaching profession. Regarding this criterion, the teacher takes the desirable relationship with the student as the guarantee for the realization of existential integrity. Table 3 elaborates on the moral principles based on Buber's point of view and the moral virtues and vices relevant to them.
Unsuspensibility of the moral element
Virtue: Deed based on the concept of the unsuspensibility of the moral element: the teacher obliges himself to observe moral values when encountering students. He believes that, in an interaction with the student, the ethical affair is not suspensible (Buber, 2013).
Vice: Deed based on the possibility of suspending the moral element: the teacher divides his life into public and private domains in which different moral values are permissible (Buber, 2014).
Reciprocity and inclusion in the relationship between human and other and the dialogue between them
Virtue: Observing reciprocity and inclusion in interactions with students: for the teacher, it is not merely his own ideas that matter; students' ideas are respected as well, and the teacher begins a real conversation with his students (Buber, 1947).
Vice: Neglecting reciprocity and inclusion in interactions with students: for the teacher, only his own ideas matter; he neglects his students' ideas and cannot address his students through a real dialogue.
Confirmation and making the other present
Virtue: Trying to prepare students: the teacher does his best to prepare and confirm the student; in other words, the teacher understands the issues and needs of the student and always confirms the student's originality, even when the student disagrees with him (Buber, 1959).
Vice: Refusing to prepare students: the teacher does not prepare the student as an original individual and fails to understand and appreciate the student's issues and personal traits when encountering him (Buber, 1957).
Uniqueness of the human relationship with the Other
Virtue: Forming an exclusive relationship with the student: the teacher identifies the student as a unique creature who has to realize his existence in a unique way (Buber, 2016).
Vice: Refusing to form an exclusive relationship with the student: the teacher takes his students as "it" and takes one and the same approach toward all of them.
Love for the Other
Virtue: Encountering the student with love: the teacher encounters the student, in obedience to the command of God, with genuine love and kindness (Buber, 2013).
Vice: Encountering the student with hatred and cynicism: the teacher, believing in a godless world, encounters students with arrogance and cynicism; this teacher is looking to realize his lying self (Buber, 1959).
Relationship with the student from the side of the Other
Virtue: Trying to see the student: besides keeping his own originality, the teacher confirms the individuality of the student; in other words, he really sees the student (Buber, 1959).
Vice: Avoiding seeing the student: when encountering the student, the teacher rejects the student's originality and sometimes even abandons his own originality as well (Buber, 1959).
Creating confidence in the relationship between human and the Other
Virtue: Trying to create confidence in the student: the teacher creates a sense of confidence between himself and the student (Buber, 1959).
Vice: Distrust in the relationship between student and teacher: the teacher, with cynicism and distrust, fears a comprehensive relationship with students.
Realization of the personality of the Other and overcoming appearance
Virtue: Accepting responsibility for the realization of the student's personality and overcoming appearance: the teacher collaborates in the realization of the student's personality (Buber, 1959) and, in this regard, avoids superficiality (Buber, 1957).
Vice: Superficiality and refusing the responsibility to realize the student's personality: the teacher refuses any responsibility for the realization of the student's personality and, with superficiality, accepts only what matters to himself and helps him reach his own goals (Buber, 1959).
Avoiding individuality and collectiveness
Virtue: Having a real encounter with the student: the teacher removes the mental barriers of individuality and collectiveness and interacts with the student as an individual (Smith, 2006).
Vice: Interacting with the student on the basis of individuality and collectiveness: being influenced by collective values, the teacher avoids identifying the student as an individual; on the other hand, under the influence of individuality, the teacher merely takes his own relationship as basic and fails to encounter the student properly (Herberg, 1963).
3-How to Analyze and Encounter Objective Ethical Issues in Professional Activities
A. First Criterion: Belief in Unsuspensibility of Ethics in Interactions with the Student
Modern humans divide existence into two general and particular domains and oblige themselves to observe dual moral values. As a result of this moral duality, the teacher may commit a deed that is unacceptable in the case of himself or his acquaintances but he may permit it with his students (M. Buber, 2014). The teacher influenced by this moral duality may have proper behavior toward one student and may mistreat another student under racial, religious, gender and age considerations. This injustice however may provide the ground for numerous problems in teacher-student interactions.
B. Second Criterion: Unique Encounter of a Student
The teacher sometimes takes similar approaches to teach moral and academic concepts to students. Such teacher imagines all students as similar and prescribes similar prescriptions for all of them. However, the teacher with proper approach identifies originality of students as a desirable basis and tries to direct them toward salvation via identifying their particular talents.
C. Third Criterion: Focusing on Cooperation in Professional Activities
Innovation instinct is the motive for human to create in the world. Using this instinct, the student tries to build what he desires and reaches a personal achievement that has the potential to influence the world in an objective yet personal way; however, it fails to improve human relationships. In contrast, cooperation instinct helps the teacher interact with the student with all his heart (Martin Buber, 2003). The teacher, on the other hand, has to organize his professional affairs not on the innovation but on the cooperation instinct. One of the strategies that could help the teacher conduct the process of education based on cooperation instinct is using active teaching techniques. In active teaching techniques, the teacher and the student cooperatively participate in learning process, and the student takes the responsibility of building his body of knowledge.
4-How to Resolve Moral Conflicts
A. First Criterion: Utilizing Inclusion Capability
The root for many complex moral issues lies in the unresolved conflicts between student's freedom and teacher's authority. Like traditional approaches, modern educational trends have failed to balance principles of freedom and authority. Traditional views give total authority to the teacher and see no room for students' freedom. However, in modern approaches, the teacher has no authority against unlimited freedoms of the student (Murphy, 1988). In Buber's point of view or the approach of dialogue, there is no conflict between students' freedom and teachers' authority. In fact, for him, authority is the main requirement for realization of the student's freedom. Real freedom of the student is the ultimate goal of education and discipline is an effective strategy to reach this goal. For the teacher possessing educational discipline, freedom is the liberation of instinct to encounter which is the inherent capability of the student to have a dialogue with his own self, the teacher and the other students (Friedman, 2003).
B. Second Criterion: Establishing a Relationship with Student by the Other
The origin of many of moral conflicts in the classroom is the fact that the teacher plans all educational activities according to his personal view and neglects those of students. However, experience-based teaching students from the point of view of Other prevents the teacher from taking unilateral and personal actions in the classroom and base his plans according to inherent potentials of the students.
A point worth mentioning about planned teaching according to the student's experience from the point of view of the Other is that, since the student has not yet personally realized his personality and the teacher's activities have been inclusive, the teacher has to make sure that the student fully understands his approach (Friedman, 2003).
C. Third Criterion: Gaining the Student's Trust
Students nowadays suffer a disruption deep in their existence due to the ups and downs of the modern world, and this internal discord shows itself as distrust in their relationship with their teachers. On the other hand, a teacher with a dominant sense of distrust toward the surrounding world fails to communicate with his students. This distrust is the origin of numerous moral conflicts in the classroom context. If the teacher presents himself to the student, he will be able to create a sustainable trust in the student. This trust will then make the student an integral part of his teaching profession and will bring the teacher and the student closer through meeting the student's needs (Friedman, 2003).
D. Fourth Criterion: Taking the Responsibility to Realize Students' Personality
Among all forces forming student's personality, the teacher is the most distinctive element for they have the power to form the student's personality (Martin Buber, 2003). When a child discovers the world with his liberated activities, he/she attempts to communicate with it as well (Friedman, 2003). Among all the people around, the student realizes his personality through communication and a dialogue with his teacher. Thus, students have to cooperate properly with their teachers and must feel the responsibility for this cooperation so that it creates a framework for teacher-student interaction and prevent any possible moral conflicts.
5-Process of Changing and Correcting Behavior: A. First Approach: Observing Inclusion when encountering the Student:
The root for many undesirable behaviors in the context of education is organization of teacher's activities based on two modern and classic approaches toward education. The teacher influenced by classic approach attempts to take a sculptor's role and form the student according to the image he has in his mind. However, in the modern one, the teacher gives up against his students' will and tries to be a gardener who removes the barriers blocking free development of a child's internal potentials (Murphy, 1988). In these two approaches, the relationship between teacher and student forms in an incomplete manner. The proper replacement for them is the dialogue and inclusive interaction of the teacher and the student. The teacher's inclusion and dialogue based behavior gives the required position to both students and teachers.
B. Second Approach: Avoiding Individuality and Collectiveness and Relieving by the Encounter
The teacher influenced by individuality possesses fake personality; while the one affected by collectiveness rejects any of the requests of his students against his party interests. The teacher making proper encounter with his students could relieve himself from bottlenecks of individuality and collectiveness. Such teacher is aware of the disorders suffered by his students in the modern world and obliges himself to make his students aware of their status, and they could try to evade their existential crises consciously (Martin Buber, 2003).
6-Purification of Soul and Elevation of Teacher's Personality
A. First Index: Realization of True Perfection of Teacher's Personality through Real Encounter with the Student
Perfection of human personality could not be realized merely through "I". However, it could not be realized without "I" as well. "I" needs "Thou" in the process of humanization (M. Buber, 2016). The teacher who has reached personality integrity will not think of domination or instrumental use of his students when encountering them. He does not treat them in a way that the development of their existential potentials are hindered. He appreciates presence of all students unconditionally (Friedman, 2003). This teacher tries to have a dialogue with his students over a moral issue. This dialogue will be based on Socratic questioning techniques (Murphy, 1988).
B. Second Index: Teacher's Avoidance of Inception
A teacher who is trying to reach an elevated personality should avoid inception or forcing his beliefs to his students. In the process of inception occurring unilaterally, the teacher puts his real personality away and encounters the student neglecting real circumstances of the student. However, inclusion capability is the opposing point of inception. In the light of inclusion, the teacher consciously develops his personality, and this helps him understand the presence of his student and his circumstances (Martin Buber, 2003).
7-Teacher as a Moral Model
A. First Index: Using the Power of Love and Avoiding Superficiality
An elite teacher puts arrogance and cynicism induced by the relationship of I-It with the world aside since God's content is of significance to him and builds his relationship with his students based on love and sympathy (M. Buber, 2013). The teacher has to purify his desires with humane motives and reach a higher degree of purification. In other words, he reaches an approach in which his responsibility is not interfering in his student's life; yet, he must do his best to realize his existence through utilizing his inherent potentials especially the power of love (Martin Buber, 2003).
B. Second Index: Manifestation of a Great Character's Traits in Actions
The model of teaching must manifest the image of a "Great Character" in actions and the teacher possessing a great character acts responsibly and uniquely when he faces a critical moment requiring something from him. The teacher could gain insight into the great character in order to overcome the problems existing in personality of students and their resilience in this regard (Martin Buber, 2003).
8-Elaborations on Professional Responsibility: Directing Professional Activities based on Realization of the Student
The ultimate goal of education in any society is to direct students toward perfection; the teacher is the most important element contributing to the realization of students. The teacher has to organize educational situations in a way that the ground for various interactions is provided for students. In the process of education, the teacher's actions should give a sense of commitment and purpose toward society and lead them to serve the humankind. The teacher creates a democratic atmosphere in the classroom due to the fact that he knows for sure that nurturing character in students could only be possible with real freedom. In order to realize the student, the teacher could provide opportunities in which the student experiences skills like self-assessment and self-contemplation.
Conclusion
Considering Buber's ethical principles for interpersonal relationships, the present study aimed to provide strategies for teachers' professional ethics in the domain of teacher-student interactions. In the domain of teachers' professional ethics, Buber's positive concepts, also known as "moral virtues", are emphasized and utilized. In fact, for Buber, moral virtues are the ethical components of a desirable human relationship, and they can be taken as the ethical responsibilities of the teacher. In a world in which educational systems and their curricula encourage depersonalization and self-alienation in teacher-student interactions, a teacher's actions based on Buber's moral virtues can facilitate identity building for students. Moreover, in the light of the unifying capacity of the final or original moral criterion, namely the establishment of an I-Thou relationship with the other (the student), the main direction of the other principles of teachers' professional ethics is properly determined; on the other hand, these principles have to contribute to the realization of this criterion. It is worth mentioning that Buber's conceptualization of teachers' professional ethics possesses another important strength: Buber gives a real position and weight to both teacher and student, so that the reflection of Buber's I-Thou relationship in educational procedures leads to a significant decrease in the conflicts among educational principles. In Buber's point of view, the humane discipline and authority of the teacher are a major requirement for the realization of the student's real freedom; in other words, freedom for a student lies in his cooperation with the teacher and his classmates (Principles of freedom and discipline). Buber also believes that purposeful activities of the student, guiding him toward a more desirable life in the future, play a constructive role (Principles of perfection and activity). Finally, Buber believes that a constructive educational group will achieve its aim only if the students join the group and participate in it freely while preserving their independence (Principles of individuality and society). | 2021-11-08T16:06:07.845Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "5a848d24b2183976f5a4c065a5213b403d001450",
"oa_license": "CCBYNC",
"oa_url": "http://ieepj.hormozgan.ac.ir/files/site1/user_files_739c87/ahmadgholami-A-10-353-1-8e63e44.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "224d41b3e86071a043684884284691385ad28510",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
259268496 | pes2o/s2orc | v3-fos-license | A Genome-Wide Analysis of a Sudden Cardiac Death Cohort: Identifying Novel Target Variants in the Era of Molecular Autopsy
Sudden cardiac death (SCD) is an unexpected natural death due to cardiac causes, usually happening within one hour of symptom manifestation or in individuals in good health up to 24 h before the event. Genomic screening has been increasingly applied as a useful approach to detecting the genetic variants that potentially contribute to SCD and helping the evaluation of SCD cases in the post-mortem setting. Our aim was to identify the genetic markers associated with SCD, which might enable its target screening and prevention. In this scope, a case–control analysis through the post-mortem genome-wide screening of 30 autopsy cases was performed. We identified a high number of novel genetic variants associated with SCD, of which 25 polymorphisms were consistent with a previous link to cardiovascular diseases. We ascertained that many genes have been already linked to cardiovascular system functioning and diseases and that the metabolisms most implicated in SCD are the lipid, cholesterol, arachidonic acid, and drug metabolisms, suggesting their roles as potential risk factors. Overall, the genetic variants pinpointed herein might be useful markers of SCD, but the novelty of these results requires further investigations.
Introduction
Sudden cardiac death (SCD) is an unexpected natural death due to cardiac causes, usually happening within one hour of symptom manifestation. It generally occurs in individuals without any prior pathological condition or in good health up to 24 h before the event [1,2]. The aetiologies of SCD are heterogeneous, involving numerous diseases, and differentiated between age groups. In young people, the most common causes of SCD are complications of cardiomyopathies (i.e., hypertrophic, dilated, or arrhythmogenic cardiomyopathies), channelopathies (i.e., long QT syndrome, short QT syndrome, Brugada syndrome, or catecholaminergic polymorphic ventricular tachycardia), and even cardiac malformations. In contrast, in older people, SCD is mainly due to coronary artery disease (CAD) and, to a lesser extent, cardiomyopathies, myocarditis, and valve diseases [3].
The identification of SCD causes through autopsy examinations is sometimes not trivial and inconclusive. In these cases of sudden unexplained deaths (SUD), genomic screening is a useful approach to detecting the genetic variants that potentially contribute to SCD [4]. Since the first post-mortem genetic testing, also known as a "molecular autopsy", it has been possible to discover the genes implicated in sudden cardiac deaths over last two decades, which is also thanks to massive improvements in sequencing technologies using small amounts of DNA in a cost-effective manner [1,5]. Molecular autopsies mainly include the screening of the genes known to be involved in cardiac arrhythmias, such as KCNQ1, KCNH2, and SCN5A, which are associated with long QT syndrome, and RYR2, which is associated with major catecholaminergic polymorphic ventricular tachycardia (CPVT) [4][5][6]. On the other hand, genome-wide association studies, which test greater numbers of genes, have allowed for the discovery of the novel genetic loci associated with SCD [5,[7][8][9][10][11]. It is of note that SCD might be the result of the combined effect of several genetic polymorphisms and not only from a unique mutated gene [12].
In this study, we performed a case-control analysis using post-mortem genome-wide screening with the purpose of identifying the genetic markers associated with sudden cardiac death, which might enable its target screening and prevention. The hypothesis is that the genetic variants associated with sudden cardiac death are located in the genes known to be involved in cardiovascular diseases, such as the genes mentioned above, and in the genes that are part of the metabolic pathways involved in cardiovascular function and homeostasis.
Study Population and Controls
Thirty autopsy cases were included in this study. The inclusion criteria were: (a) cases submitted to a complete medico-legal investigation, including a full autopsy, cardiopathological examination, and systematic toxicological analysis at the Department of Medical and Surgical Sciences, University of Bologna, between 2018 and 2021; (b) deaths classified as SCD; and (c) a post-mortem interval defined by autopsy as <5 days. The SCD investigations and diagnoses were performed according to the 2017 guidelines of the Association for European Cardiovascular Pathology [13] by a medico-legal examiner after a comprehensive evaluation of all the post-mortem findings, which are detailed in Section 2.2. The autopsy findings were grouped into three categories, as follows:
1. Normal heart, when no macroscopic or microscopic alterations were found;
2. Atherosclerotic coronary artery disease (CAD), when an acute coronary occlusion or a severe atherosclerotic plaque with coronary luminal stenosis of >75%, in the absence of other acute diseases, was found;
3. Other "highly probable" CoDs, such as cardiomyopathies, myocarditis, congenital coronary artery anomalies, and channelopathies.
The following data were collected from the included cases: demographics (age, gender, and ancestral origin), medical history, with a particular focus on cardiac diseases, neurologic diseases, or infectious diseases connected to drug use, e.g., HIV or hepatitis C, cause, and manner of death. For the genetic analyses, the samples were pseudo-anonymized by assigning laboratory coding, followed by progressive numbers.
As a control cohort for the genetic screening, we selected individuals from the Tuscan population (TSI, Italy) of the 1000 Genome Project [14] and the Bergamo population (BERG, Italy) of the Human Diversity Genome Project (HGDP) [15]. We randomly chose twenty individuals belonging to the TSI and all the available individuals of the BERG population, for a total of thirty samples, five of which were females. Whole-genome sequences mapped to the GRCh37 primary reference assembly were recovered from online repositories of the projects.
Descriptive statistics were provided for all the data. The sktest was used to assess the gaussian distribution of the numerical variables. Depending on this, a parametric or non-parametric analysis of variance was used to test the age difference between the groups with different causes of death. The chi-square test was used to explore the association between the categorical variables (ancestry, gender, activity before death, comorbidities, and toxicological results) and CoD. The statistic tests were performed with Stata 15.1 (StataCorp LLC, College Station, TX, USA) and considered to be significant with p-values < 0.05. The figures were realized with Prism (GraphPad Software, LLC, version 9.0.0).
Post-Mortem Examination
A full autopsy was performed according to a shared forensic methodology [16]. The cardio-pathological analysis was performed according to the 2017 guidelines of the Association for European Cardiovascular Pathology by a forensic pathologist and expert cardio-pathologist [13]. When no certain or highly probable causes of death were found, initial genetic testing for 38 genes implicated in cardiac arrhythmia was performed, following an internal protocol for SCD [17]. During the autopsy, samples of urine, bile, peripheral (femoral) blood, or, in the absence of peripheral blood, aortic or heart blood, and other biological matrices, when needed, were collected. The blood specimens were preserved with 2% sodium fluoride. All the specimens were stored at −20 °C immediately following their collection during the autopsy. A general toxicological screening and quantification for alcohol, illicit drugs, and medicinal drugs were performed. The analyses for alcohol were performed using gas chromatography coupled to a Flame Ionization Detector (Shimadzu QP 2010 Plus, Kyoto, Japan). The blood samples were screened for cocaine, cannabinoids, opiates, methadone, and amphetamine-like drugs (amphetamines/methamphetamines/MDMA/MDA) using an immunoassay (ILab 650, Werfen, Barcelona, Spain) [18]. The confirmation analyses for cannabinoids were performed with a Shimadzu GC-2010 Plus gas chromatograph interfaced with a QP 2010 Ultra mass spectrometer (Shimadzu, Kyoto, Japan) [19]. The confirmation analyses for other illicit drugs and screening/confirmations for 68 psychoactive medications (benzodiazepines, Z-drugs, antipsychotics, antidepressants, and medical opioids) were performed with an ACQUITY UPLC® System (Waters Corporation, Milford, MA, USA) equipped with an Acquity UPLC® HSS C18 column (2.1 × 150 mm, 1.8 µm; Waters), following a previously validated method [20].
Genotyping and Data Quality Control
After the DNA extraction from the blood samples of the sudden cardiac death group, the DNA was genotyped for~720,000 genetic markers using the HumanOmniExpress BeadChip (Illumina, San Diego, CA, USA).
The quality control on the sequenced variants was performed using a combination of the PLINK version 1.9 software [21] and Linux-based command line. The following filtering steps were applied:
• Retention of autosomal markers only;
• Removal of duplicate variants;
• Retention of variants with a missingness rate lower than 5% (--geno 0.05);
• Retention of individuals with a missingness rate lower than 5% (--mind 0.05);
• Removal of variants with a Hardy-Weinberg equilibrium test p-value below the threshold of α = 0.01/number of markers, considering the Bonferroni correction for multiple testing (--hwe α);
• Removal of variants with a minor allele frequency (MAF) lower than 0.01 (--maf 0.01).
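For reproducibility, the filtering above can be collected into a single scripted call to the PLINK 1.9 command line. The snippet below is only a hedged sketch of such a wrapper: the input and output file names, the use of --autosome and of a pre-computed exclusion list for the duplicate variants, and the explicit Bonferroni threshold are our own assumptions rather than the study's actual pipeline.

```python
import subprocess

N_MARKERS = 720_000                      # approximate array size reported above
HWE_ALPHA = 0.01 / N_MARKERS             # Bonferroni-corrected HWE threshold

# Hypothetical file names; replace with the real PLINK binary fileset.
plink_cmd = [
    "plink",
    "--bfile", "scd_cases_controls",        # input .bed/.bim/.fam prefix
    "--autosome",                           # keep autosomal markers only
    "--exclude", "duplicate_variants.txt",  # list of duplicate variant IDs
    "--geno", "0.05",                       # drop variants with >5% missingness
    "--mind", "0.05",                       # drop individuals with >5% missingness
    "--hwe", str(HWE_ALPHA),                # drop variants failing HWE at alpha
    "--maf", "0.01",                        # drop variants with MAF < 1%
    "--make-bed",
    "--out", "scd_qc",
]

subprocess.run(plink_cmd, check=True)       # raises if PLINK exits with an error
```

The duplicate-variant list itself could be produced beforehand, for instance from PLINK's --list-duplicate-vars report, before the combined filtering run.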
Bio-Geographical Ancestry
To infer the genetic ancestry of the sudden cardiac death group, we carried out geographical contextualization against a dataset of 737 Italian individuals [22] typed for 550,000 genetic markers. The individuals were collected in 20 locations across the Italian peninsula, as well as in Sicily and Sardinia, using the grandparents' criterion (both parents and all four grandparents must have been born in the same location as the sampled individual) to ensure that the local ancestry had been preserved. A Principal Component analysis (PCA) was performed after merging the SCD cases with the Italian control individuals and applying an extra set of filtering options with the PLINK 1.9 software [21], as indicated in the following list: The PCA was performed by converting the PLINK dataset using the convertf software, followed by a computation of the principal components using the smartpca tool contained in the eigensoft suite of programs for population genetics (version 6.0.1) [23,24].
Data Processing and Statistical Analysis
A chi-square analysis was performed to find out the likely associations of alleles and genotypes with sudden cardiac death. Above all, an χ 2 test with both one and two degrees of freedom was carried out on allele frequencies of almost 46,000 polymorphisms to identify the genetic variants that differed between the cases and control groups, potentially contributing to SCD. The genotypes of the variants with higher chi-square values were then tested for association by using the χ 2 test with two degrees of freedom and the Fisher exact test. The two statistics were performed in a comparison between the subgroups of cases, identified by autopsy examination, and the controls, even comparing the subgroups to each other (intra-autoptic group comparison). A pathway enrichment analysis was carried out to assess the pathways enriched with the genes identified by the statistical analyses. The enrichment analysis was implemented on the R statistical software version 4.2.2, which runs Bioconductor version 3.16, through the R package enrichR version 3.1. The gene sets were retrieved from the KEGG, Gene Ontology, and WikiPathways using the Enrichr tool [25] accessed via enrichR.
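As a minimal sketch of the per-variant association test described above, the snippet below compares the allele counts of a single SNP between cases and controls with a chi-square test and a Fisher exact test using SciPy. The 2 × 2 table is invented purely for illustration; a real analysis would loop over all of the polymorphisms and account for multiple testing.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical allele counts for one SNP: rows = cases/controls,
# columns = counts of the minor and major allele.
table = [[22, 38],    # cases:    minor, major
         [ 9, 51]]    # controls: minor, major

chi2, p_chi2, dof, _ = chi2_contingency(table, correction=False)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square = {chi2:.2f} (df = {dof}), p = {p_chi2:.4f}")
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```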
Post-Mortem Data Collection
In total, 30 autopsy cases identified as SCD that underwent a full post-mortem examination at the University of Bologna were included in this study: 27 males (90%) and 3 females (10%). The age range was from 2 to 76 years old (mean 43.4, SD 19.9, and median 45.5). A total of 26 of the deceased (86.7%) were of European ancestry, 3 subjects were considered to be from the Near Eastern ancestry group (10%), and 1 subject (3.3%) was from South America. The past medical history included alcohol or drug use disorders in 5 cases (16.7%), psychiatric or neurological diseases in 4 cases (13.3%), and cardiovascular risk factors (obesity, hypertension, and diabetes) in 4 cases (13.3%). A negative history was observed in 12 cases (40%), and in 5 cases (16.7%), no clinical histories were available. Pharmacological therapy was present in 4 cases (13.3%) suffering from a neurological/psychiatric disease (1 with antidepressants and 3 with antipsychotics) and in 3 cases (10%) for the treatment of diabetes and/or hypertension. In 12 (40%) cases, no therapy was present, and in 11 cases (36.7%), these data were not available. The toxicological analyses of the blood detected the presence of psychopharmacological therapy in 3 cases (10%, fluphenazine, clonazepam, levomepromazine, and promazine). Alcohol was detected in 1 case, cocaine in 3 cases, and both alcohol and cocaine in 1 case. The toxicological analyses were negative in 22 cases (73.3%). In none of the positive cases were drugs found at toxic/lethal levels and SCD was considered the only cause of death, with a contributory role being identified, on the basis of the multidisciplinary post-mortem analysis, in 4 cases involving cocaine. As for the autopsy findings, a normal heart was found in 12 cases (40%); atherosclerotic coronary artery disease was found in 10 cases (33.3%); and other "highly probable" CoDs were identified in 8 cases (26.7%). The data and descriptive statistics results are summarized in Table 1. The categorical variables were not statistically associated with CoD (p > 0.05). Age did not show a normal distribution and did not differ within the groups of CoD, as demonstrated by the non-parametric analysis of variance. The median age, gender, and ancestry across the groups based on the autopsy findings, as well as the medical history, therapy, and toxicology of the cases, are shown in Figure 1.
Analysis of Data
The chi-square (χ²) test, performed on the allele frequencies of ~46,000 SNPs (single-nucleotide polymorphisms), identified more than 2000 variants with statistically significant differences in frequency (p ≤ 0.05) between the cases and controls, which might be associated with SCD. Among these variants, the 356 SNPs with the strongest statistical evidence (p ≤ 0.001) of a difference between the analyzed populations were selected for further investigation (Supplementary Table S1).
The top SNPs map inside or near 456 genes, in both coding and non-coding regions. The majority of these polymorphisms had not previously been implicated in any phenotype or disease; however, 25 variants had shown a previous association with cardiovascular diseases or with phenotypes that increase the risk of developing these pathologies (Table 2), supporting the hypothesis of this study.
Among the 356 top variants without any prior association with other phenotypes, some were located in genes already implicated in cardiovascular functions and diseases, such as TBXAS1, which encodes thromboxane A synthase, the enzyme that produces thromboxane A2, a promoter of vascular thrombosis [26]. Many of these genes are involved in the development of atherosclerosis and coronary artery disease, which are risk factors for sudden cardiac death. We also found that some of these polymorphisms lie in genes already associated with sudden cardiac death, namely CACNA1C, KCND2, PRKAG2, and SREBF2 [9,[27][28][29]; however, except for a variant in SREBF2, the polymorphisms in CACNA1C, KCND2, and PRKAG2 did not show a previous association with this phenotype. All the genes implicated in cardiovascular diseases are reported in Supplementary Table S2. In addition, by studying the functions of all the genes, we found that many of them are involved in brain functioning, neuropsychiatric disorders, and drug metabolism and dependence.
Pathway Enrichment Analysis
In order to identify the metabolic processes most likely associated with SCD, a pathway enrichment analysis was carried out on the 456 genes inside or near which the 356 top variants mapped. The analysis, performed via the R package enrichR, retrieved 3548 pathways from KEGG, Gene Ontology, and WikiPathways that were enriched in these genes, many of which were equivalent owing to the different identifiers used by the three databases. Among these, 46 pathways with an adjusted p-value lower than 0.05 were selected as the top pathways implicated in SCD (Figure 2, Supplementary Table S3). These top pathways mainly involve lipid, cholesterol/bile, xenobiotic/drug, and arachidonic acid metabolism, which are known to be involved in cardiovascular disease development [60][61][62].
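The over-representation logic behind such an enrichment analysis can be illustrated with a hypergeometric test, as in the minimal sketch below. This is only a stand-in for the Enrichr/enrichR workflow actually used: the gene lists, the pathway set, and the universe size of 20,000 genes are toy placeholders, and multiple-testing adjustment (e.g., Benjamini-Hochberg) would still be applied across all tested pathways.

from scipy.stats import hypergeom

def pathway_enrichment(candidates, pathway_genes, universe_size):
    overlap = len(set(candidates) & set(pathway_genes))
    # P(X >= overlap) for X ~ Hypergeometric(universe_size, |pathway|, |candidates|)
    p_value = hypergeom.sf(overlap - 1, universe_size, len(pathway_genes), len(candidates))
    return overlap, p_value

candidates = ["TBXAS1", "SREBF2", "CACNA1C", "ABCB1"]           # toy candidate genes
pathway = ["TBXAS1", "CYP2C9", "CYP2J2", "ALOX5", "PTGS1"]      # toy "arachidonic acid" gene set
print(pathway_enrichment(candidates, pathway, universe_size=20000))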
Association with Autopsy Findings
The forensic autopsy identified three subgroups based on the autopsy findings, which were coronary artery disease, other known CoDs, and normal heart. We therefore decided to verify whether the three subgroups of sudden cardiac death cases differed in the genotypes of their top variants. The χ² test with two degrees of freedom and the Fisher exact test were applied to the genotype frequencies, comparing the controls with the three autoptic subgroups and the subgroups with each other. Overall, the tests identified the genotypes of 38 genetic variants with statistically significant differences in their frequencies (p-value ≤ 0.05), of which 33 variants were significant in the subgroup-control comparison and 21 in the intra-autoptic group comparison (Table 3). The "other known CoD" subgroup had the greatest number of variants with statistically significant genotypes (32), followed by the "coronary artery disease" subgroup (5) and finally by the "normal heart" subgroup, with only 2 variants. In contrast to the other subgroups, the "normal heart" subset displayed genotypes with statistical significance only in the comparison with the controls. The autoptic subgroups differed from each other (Table 3), except for the rs6746883 variant, which showed significant statistical values in both the "coronary artery disease" and "other known CoD" subgroups with respect to the control group (Table 3). Only the intron variant rs12986742 in the LINC01122 gene, which was significant only in the "other known CoD" group, displayed a previous association with HDL levels (Tables 2 and 3) [46].
Case Control Study
Sudden cardiac death (SCD) is one of the leading causes of mortality in the world, and in Western countries, it amounts to nearly 20% of deaths [63]. The complex task of establishing the exact cause of SCD belongs to pathologists and many SCDs present a clear pathological cause, which can be detected and identified with varying degrees of confidence through a complete post-mortem examination. However, a high percentage of cases remain with an unexplained cause of death, despite careful macroscopic, microscopic, and additional toxicological and molecular analyses [1]. Post-mortem genetic testing, focused on cardiac-disease-associated genes, offers the opportunity to help in investigating cases of unexplained SCD and might improve the identification of the factors associated with arrhythmogenic risks or subtle structural abnormalities, even before the manifestation of pathological structural abnormalities [13]. The inclusion of a higher number of genes through genome-wide analyses allows for the detection of novel genes and variants, expanding the knowledge on SCD and providing biomarkers that are useful for prevention [5,[7][8][9][10][11].
The genome-wide screening performed in this study through a case-control analysis allowed us to pinpoint many genetic variants with statistically significant differences in their frequencies (p ≤ 0.001), potentially contributing to pathogenesis of SCD, most of which showed no previous link with other phenotypes or diseases. Among these variants, 25 SNPs could be considered to be likely pathogenic for SCD in our study, as they were consistent with previous publications showing an association with cardiovascular diseases or other risk factors for the development of these pathologies (Table 2). In particular, the missense variant rs2228314 (Gly595Ala substitution) in the SREBF2 gene, encoding a transcription factor that regulates the expressions of the genes involved in cholesterol biosynthesis [64], has been associated with the pathogenesis of coronary atherosclerosis and an increased risk of SCD, especially in middle-aged males [29]. Importantly, the cases analyzed here displayed a high frequency of the minor allele C (MAF = 0.7), which is the risk allele for SCD [29]. Atherosclerosis is a very impactful cardiovascular disease with a high mortality rate, characterized by chronic vascular inflammation as well as cholesterol accumulation, which highly contributes to its pathogenesis [65]. Atherosclerosis, in turn, leads to CAD [66], which is one of the main causes of SCD [67], and our study allowed for the confirmation of several variants related to CAD, but also to thrombosis and risk factors for atherosclerosis (cholesterol, HDL and LDL levels, intima-media thickness, and hypertension [65]) as useful markers of SCD.
Using a case-control study, many other polymorphisms were also pinpointed as related to SCD, mapping inside or near 456 genes with different functions. Although the majority of the polymorphisms identified herein have not formerly been associated with any phenotype, many of these SNPs were mapped in genes already implicated, to some degree, in cardiovascular system functioning and diseases (mostly atherosclerosis, CAD, and thrombosis, see Supplementary Table S2), indicating a likely relationship with SCD. Moreover, this link with SCD was strengthened by the presence of three genes (CACNA1C, KCND2, and PRKAG2) already associated with sudden cardiac death [9,27,28], as well as SREBF2. However, it is necessary to further explore the roles of these variants in the pathogenesis of SCD, especially considering the lack of previous associations with this phenotype. Furthermore, by deepening the roles of these genes, it was found that the most statistically significant biological pathways (p-adjusted ≤ 0.05) involved in SCD are represented by the lipid, cholesterol, arachidonic acid, and xenobiotics/drugs metabolisms ( Figure 2). Overall, these results would further confirm our initial hypothesis, since these metabolisms have already been related to an increased risk of developing cardiovascular diseases [60][61][62], which may finally result in SCD.
The lipid, cholesterol, and arachidonic acid metabolisms are widely related to cardiovascular diseases. As mentioned, impaired blood levels of lipids and cholesterol are widely known to be risk factors for atherosclerotic plaque formation, the pathogenesis of CAD, myocardial ischemia, and ischemic stroke [61]. Arachidonic acid is a ω-6 polyunsaturated fatty acid that is metabolized in a class of bioactive molecules called eicosanoids (i.e., prostanoids, leukotrienes, epoxyeicosatrienoic, and hydroxyeicosatetraenoic acids), which are implicated in cardiovascular homeostasis, inflammation fostering, and even thrombosis [26,60]. The enhanced cleavage of arachidonic acid from cellular membranes triggered by proinflammatory stimuli and the consequent increased synthesis of eicosanoids, especially of prostanoids, have been associated with atherosclerosis, CAD, myocardial infarction, and thrombosis [26,60]. Among the genes involved in this metabolism, we detected three variants (rs6948035, rs17161326, and rs6962291) in the TBXAS1 gene to be statistically significant in our population (p < 0.001). The thromboxane A2 (TxA2) encoded by TBXAS1 plays a detrimental role in the cardiovascular system because it induces platelet aggregation, vascular dysfunction, vasoconstriction, and even cardiac arrhythmias [26]. Regarding the variants in TBXAS1, the intron variant rs6962291 has been related to aspirin intolerance in asthmatic patients and the minor allele A (MAF = 0.6333 in our study population) seems to reduce the degradation of TxA2 [68], suggesting that it could be a promising biomarker of sudden cardiac death. Overall, this evidence seems to confirm the link between thrombosis and SCD, as displayed by previous data [69].
Some of the cases examined herein (eight in total) were positive for medical or recreational drugs, and it is noteworthy that we detected drug metabolism as one of the most significant metabolic processes implicated in SCD, since there is evidence that many drugs can induce and exacerbate cardiac arrhythmias [62], which are a common cause of SCD, especially in young people [3]. The detected medical drugs (levomepromazine and clozapine) displayed only a weak or moderate association with QT prolongation [70], and no variant was detected in the genes that have a modulatory effect on membrane potentials, allowing us to exclude a synergistic effect of drugs resulting in sudden death.
In the case-control comparison, we identified many genetic variants mapping in genes involved in drug metabolism, such as the intron variant rs1202171 in the ABCB1 gene, which seems to influence the expression of other ABC transporters [71], and the missense variant rs17222723 in the ABCC2 gene, which is related to drug-induced cardiotoxicity [34]. Both genes encode members of the ABC family of transporters, which are involved in drug transport and highly associated with drug resistance [72]. It will therefore be interesting to deepen our understanding of the roles that variants localized in drug-metabolizing genes play in SCD and whether they have a primary causal role, given that other polymorphisms in ABCB1 increase the risk of sudden cardiac death in digoxin users [73]. Unfortunately, the case-control design used herein did not allow us to analyze the effect of the variants within each single case, and thus to further explore the association of such variants with the subset of SCD possibly due to antipsychotic/illicit drugs. Further studies on a larger case series might allow the investigation of this issue. Furthermore, a possible cooperation between the drug and arachidonic acid metabolisms in the pathogenesis of SCD was highlighted, since some genes (for instance CYP2E1, CYP2C9, and CYP2J2) displaying statistically significant variants are involved in both metabolic processes (Supplementary Table S3).
Overall, these results seem to confirm the roles of the lipid, cholesterol, arachidonic acid, and drug metabolisms in the pathogeneses of atherosclerosis, CAD, and thrombosis, in cardiac damage, and ultimately in SCD, suggesting potential additional biomarkers of SCD, which would deserve further study.
Association with the Cause of Death (CoD)
When considering the three subgroups defined based on the autopsy findings (i.e., normal heart, CAD, and other known CoDs, as specified in Supplementary Table S4), some genetic variants were associated with a single category of SCD. The majority of the statistically significant associations were found for the subgroup of "other known CoD", and the implied genes were involved in functions such as the cholesterol and drug metabolisms, but also cellular stress response, apoptosis, inflammation, immune response, and neurodevelopment or degeneration. The wide variability of the involved genes and functions was expected, given the fact that this subgroup of SCD is the most uneven in its composition: specific variants might be pathogenetic of specific cardiac structural modifications or diseases leading to SCD. On the other hand, the cholesterol and xenobiotic metabolisms were associated with both the "other known CoD" and CAD subgroups, suggesting that there might be common pathways involved in different kinds of SCD. Interestingly, cases of SCD with normal heart in the autopsy only demonstrated an association with possible variants when compared to the control cohort and not when looking at the intra-group comparison, but this might be due to the limited sample size. One of the two variants associated with the "normal heart" subgroup is located in the EXOC6 gene and had not formerly been related to other phenotypes. EXOC6, encoding the exocyst complex component 6, is involved in translocation of the GLUT4 glucose transporter in adipocytes [74] and insulin secretion in pancreatic β-cells, increasing the risk of type 2 diabetes [75]. It would be interesting to further study the role of this gene and of glucose metabolism in SCD, even if there is currently no evidence of an association between EXOC6 and cardiovascular diseases such as atherosclerosis or CAD. It is necessary to emphasize that our SCD cohort widely differed from that of another study [76], where sudden arrhythmic death syndrome was identified as the cause of death in a majority of SCD cases. In contrast to Papadakis and colleagues [76], we detected no association between the "normal heart" subgroup and genetic variants located in the genes related to cardiac arrhythmias, such as RYR2, CACNA1C, and SCN5A. However, it is possible that the lack of association with arrhythmogenic genes might be due to the low sample number in our "normal heart" subgroup.
Further in-depth analyses are needed to elucidate the relationship between the genes involved in brain functions and SCD, since the autonomic nervous system contributes to the maintenance of the cardiovascular system's homeostasis [77] and an imbalance in autonomic neural activity and remodeling enhances the risk of pathologies such as arrhythmias and heart failure up to sudden cardiac death [78]. Alzheimer's disease is a neurodegenerative disorder also characterized by intraneural tangles of the tau protein encoded by MAPT [79,80], where the CAD subgroup of cases displayed a significance in a variant localized herein (rs17651507, Table 3). Notably, Alzheimer's disease displayed a correlation with CAD and other cardiac dysfunctions [81], corroborating a likely involvement of this gene and the nervous system in SCD by promoting neural dysfunction. Further support for the implication of the nervous system in SCD was the statistical significance that we found in a variant (rs988748) in the BDNF gene through the case-control comparison; indeed, the neurotrophic factor encoded by this gene is highly related to cardiovascular disease development [82]. In addition, the rs17651507 variant in MAPT has been associated with waist-hip ratio [83], which could be in agreement with data showing a connection between cognitive impairment and obesity [84].
The present study confirmed and strengthened the roles of several genetic variants related to CAD, thrombosis, and risk factors for atherosclerosis in determining SCD. Additional polymorphisms were pinpointed as related to SCD, mainly mapping in genes involved in the pathways of lipid, cholesterol, arachidonic acid, and xenobiotic/drug metabolism. Considering the large number of variants and genes related to SCD reported herein, SCD appears to be a rather polygenic trait, in which the normal and altered activities of many genes contribute to the pathogenesis of the cardiovascular conditions leading to death. Broader and more in-depth studies in forensic cases are required to clarify the significance of the biomarkers suggested herein, which are involved in cardiovascular functions but not yet associated with cardiovascular diseases, in order to investigate their role in the pathogenesis of SCD and their potential use in diagnostic tests. Given that relatives and families of individuals who have died of SCD can be diagnosed with heritable conditions, such as Brugada syndrome [76], these results might also improve individual risk assessments, as well as screening and prevention for family members.
Conclusions
Thanks to the application of a genome-wide scan to sudden cardiac death, many variants whose frequencies differed statistically from the control cohort, and which are likely implicated in SCD, were pinpointed. Some of these polymorphisms had already been detected as risk factors for the development of atherosclerosis, thrombosis, and coronary artery disease, thus strengthening their association with SCD. Even though most of these variants had not previously been associated with any trait, many were found to map in genes involved in cardiovascular functions and pathologies. Furthermore, several biological pathways linked to the same diseases are enriched in these genes, indicating that lipid, cholesterol, arachidonic acid, and drug metabolism are strongly implicated in SCD.
The three subgroups of SCD, as determined by autopsy examinations, were significantly differentiated only in a few genotypes. Despite the small sample number, these results provide the opportunity for further analyses relating to different causes of death.
Finally, the large number of variants and genes related to SCD discovered in this study supports the view that sudden cardiac death is a polygenic trait, in which the normal and altered activities of many genes contribute to the pathogenesis of the cardiovascular conditions leading to death. Nevertheless, since many of these variants have not yet been linked to cardiovascular disease, these polymorphisms must be investigated further and more deeply, with the aim of clearly defining their roles in the pathogenesis of SCD and determining whether they will be useful as diagnostic markers enabling preventive measures.
Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/genes14061265/s1, Table S1: Table containing results of chisquare performed in case-control comparison. All reported variants have p-value ≤ 0.001; Table S2: Table containing all genes already associated with some degree with cardiovascular functions and diseases; Table S3: Table containing results of pathway enrichment analysis, displaying only pathways with p-adjusted ≤ 0.05. Table S4: Breakdown of the other "highly probable" causes of death identified. | 2023-06-29T05:10:21.228Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "bc8de2efcfa0db2929587db06d69ffcfc219f624",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/genes14061265",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc8de2efcfa0db2929587db06d69ffcfc219f624",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118566479 | pes2o/s2orc | v3-fos-license | Spontaneous Quantum Teleportation in a Quenched Spin Lattice
An Ising-inspired numerical model is developed to study spontaneous quantum teleportation in a quenched spin lattice. Quantum teleportation is an operation that can, using entangled pairs of particles, transport a quantum state across arbitrary distances with high fidelity. In doing so, it destroys the state in one location and relocates it to another. In this context, teleportation serves as a long range interaction that randomly introduces correlations and disorder into a lattice. In addition, different Bell state projection and entangled pair swapping models are also explored, as are the effects of decoherence. The results are compared to the standard Ising model in one- and two-dimensions across several thermodynamic parameters versus temperature.
I. INTRODUCTION
Quantum teleportation is an operation that can transport a pure quantum state across arbitrary distances with high fidelity [1]. Normally, the process is performed actively by an experimenter [2][3] [4] and is a standard element of the growing arsenal of quantum information tools being developed for quantum computing, quantum cryptography, and other emerging quantum technologies [5][6] [7]. Here, we explore the effects of spontaneous quantum teleportation using an Ising-inspired model in a quenched spin lattice. That is, the teleportations occur without an active experimental intervention and the quantum correlation process is driven by the natural dynamics of a physical system in thermal equilibrium. We refer to a quenched lattice as one which fixes the location of all particles and isolated spins in space. This is in contrast to an annealed lattice which would allow single spins and entangled particle pairs to diffuse and interact in the lattice versus time.
In addition to a lattice of fixed single particles in z-component spin eigenstates, as used in the Ising model, all of our quantum correlation models require the presence of entangled pairs of particles known as Einstein-Podolsky-Rosen (EPR) pairs [8]. In this context, the spin EPR pairs are treated quantum mechanically in the Bell basis. These pairs can be considered to be correlated impurities in an otherwise thermalized spin system. To introduce the EPR pairs into the spin system, and to emulate the measurement process, we consider several models of a quenched lattice and also consider decoherence effects.
After initializing a randomized spin lattice, the fixed spins and EPR pairs are allowed to interact at some temperature. Like the Ising model, these objects are subjected to a metropolis algorithm that allows their states to fluctuate based on nearest neighbor interactions. However, now there are additional effective interactions that can spontaneously teleport single particle states, teleport entangled pairs, or generate entangled pairs, producing a variety of results.
To isolate the effects of introducing entangled EPR pairs and spontaneous teleportation into the system, three cases are studied: pure Ising; Ising with teleportation; and teleportation alone. The teleportation interaction includes pair swapping, where the teleported state is itself an EPR pair, as well as Bell state projections, which are local single particle measurements that project non-entangled pairs into entangled states. The latter case has the effect of generating an EPR pair from two single-particle spins. All of these spontaneous correlations alter the local spin interactions and can serve as a randomized long range interaction that introduces differing degrees of order and disorder, thus affecting various thermodynamic parameters. These three models are compared with one another in both one and two dimensions.
The decoherence mechanism dilutes the purity of the Bell states, driving them into a mixed quantum configuration due to local interactions over a tunable characteristic time scale. This has the effect of measuring pure two-particle entangled Bell states into mixed uncorrelated single particle states of definite spin. In the limit of short decoherence times, the system approaches the expected Ising quenched lattice results. In the limit of long decoherence times, the entangled pairs retain their quantum mechanical purity indefinitely.
After reviewing the mechanics of quantum teleportation, the Ising-inspired models are discussed in some detail for the one-and two-dimensional lattice. The effects of teleportation on the system are isolated and studied across several thermodynamic observables varied quasi statically versus temperature such as: energy, specific heat, magnetization, and critical temperature. In addition we study the entanglement density of the system. Limitations of the models are discussed as well as possible future directions.
II. THE BELL STATES, QUANTUM TELEPORTATION, AND PAIR SWAPPING
One remarkable result of quantum mechanics is the existence of entangled particles, particles whose quantum states are entangled, correlated, and dependent on one another. Once two particles are entangled, the quantum correlation and entanglement of states persists over distance. For two spin one-half particles, any possible entangled state in which they can reside can be described in terms of the Bell basis, a complete orthonormal basis for a set of two entangled spin one-half particles. The four states, known as the Bell states, that comprise the Bell basis are

|Ψ(−)⟩_12 = (|↑⟩_1|↓⟩_2 − |↓⟩_1|↑⟩_2)/√2, (1a)
|Ψ(+)⟩_12 = (|↑⟩_1|↓⟩_2 + |↓⟩_1|↑⟩_2)/√2, (1b)
|Φ(−)⟩_12 = (|↑⟩_1|↑⟩_2 − |↓⟩_1|↓⟩_2)/√2, (1c)
|Φ(+)⟩_12 = (|↑⟩_1|↑⟩_2 + |↓⟩_1|↓⟩_2)/√2, (1d)

where each subscript refers to the respective particle [1]. The non-local effects of quantum entanglement can be utilized to transmit quantum states, or quantum information, over distances without physically transferring any particles themselves. This transmission process is known as quantum teleportation. The nature of entangled particles and the quantum teleportation process also does not require the sender to have any knowledge of the transmitted state or of the location of the receiver [1]. Simple quantum teleportation can be accomplished using a three particle system. This quantum teleportation process is outlined in Fig. 1. The first particle, particle 1, resides in the unknown state which is to be teleported, while the other two particles, particles 2 and 3, are prepared in a Bell state. The unknown state of particle 1 can be described by

|φ⟩_1 = a|↑⟩_1 + b|↓⟩_1,

where a and b can be complex and satisfy the normalization condition |a|² + |b|² = 1. In order to transmit the unknown state, |φ⟩_1, of particle 1 to another location, particle 1 and one of the entangled particles, particle 2, must be separated from the other entangled particle, particle 3, as shown in Fig. 1. The location of particle 3 will be the location where the quantum state of particle 1 will be transmitted. To initiate the quantum teleportation process, a measurement of particles 1 and 2 must be performed in the Bell operator basis. A measurement of particles 1 and 2 in the Bell operator basis will entangle particles 1 and 2 into one of the four Bell states given by Eqn. 1. Since particles 2 and 3 were previously correlated, the resulting state of particle 3, which is in another location, depends on which Bell state particles 1 and 2 are projected into by the measurement. If particles 2 and 3 initially resided in the singlet state, given by Eqn. 1a, then expressing the wave function of the entire system in terms of the Bell basis of particles 1 and 2, as is done in [1], reveals that the four possible resulting states of particle 3 (up to normalization, ordered by the Bell outcome of particles 1 and 2) are

|φ(1)⟩_3 = −(a|↑⟩_3 + b|↓⟩_3),
|φ(2)⟩_3 = −a|↑⟩_3 + b|↓⟩_3,
|φ(3)⟩_3 = b|↑⟩_3 + a|↓⟩_3,
|φ(4)⟩_3 = −b|↑⟩_3 + a|↓⟩_3.

The resulting |φ(1)⟩_3 state corresponds to a simple phase shift of the original unknown state, |φ⟩_1, while the other three correspond to 180° rotations of |φ⟩_1 about the x, y, and z axes. Since the resulting state of particle 3 is correlated to the Bell basis measurement result of particles 1 and 2, the measurement result can be relayed to the location of particle 3, indicating which unitary operator (if any) the receiver must apply to particle 3 to completely reconstruct the original unknown state |φ⟩_1 and complete the teleportation process [1]. The process of quantum teleportation is more rigorously described in [1].
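This decomposition can be checked numerically. The following sketch (a minimal illustration, not code from the study; the values of a and b are arbitrary) builds the three-particle state, projects particles 1 and 2 onto each Bell state, and prints the outcome probability together with the conditional state of particle 3, reproducing the four states listed above.

import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*vecs):
    out = np.array([1.0])
    for v in vecs:
        out = np.kron(out, v)
    return out

a, b = 0.6, 0.8                                        # arbitrary real amplitudes, a^2 + b^2 = 1
phi1 = a * up + b * dn                                 # unknown state of particle 1
singlet = (kron(up, dn) - kron(dn, up)) / np.sqrt(2)   # particles 2 and 3 in the singlet state

state = np.kron(phi1, singlet)                         # full state, qubit order (1, 2, 3)

bell_12 = {                                            # Bell basis for particles 1 and 2
    "Psi-": (kron(up, dn) - kron(dn, up)) / np.sqrt(2),
    "Psi+": (kron(up, dn) + kron(dn, up)) / np.sqrt(2),
    "Phi-": (kron(up, up) - kron(dn, dn)) / np.sqrt(2),
    "Phi+": (kron(up, up) + kron(dn, dn)) / np.sqrt(2),
}

state = state.reshape(4, 2)                            # rows: particles 1 and 2, columns: particle 3
for name, bell in bell_12.items():
    amp3 = bell @ state                                # unnormalized conditional state of particle 3
    prob = float(amp3 @ amp3)                          # probability of this Bell outcome (always 1/4)
    print(name, prob, amp3 / np.sqrt(prob))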
A special case of quantum teleportation occurs when all particles in the system reside in entangled states. Consider a system of four entangled particles, where particles 1 and 2 are entangled in a Bell state and particles 3 and 4 are entangled in a separate Bell state. If particles 2 and 3 are then entangled and projected into a Bell state, it follows that the particles with which they were previously paired, particles 1 and 4, will also be entangled and projected into a Bell state. It turns out that no matter what combination of Bell states particles 1 and 2 and particles 3 and 4 reside in, if particles 2 and 3 are projected into a Bell state the final state of the system will be one of the following four,

|Ψ(−)⟩_23 |Ψ(−)⟩_14, (2a)
|Ψ(+)⟩_23 |Ψ(+)⟩_14, (2b)
|Φ(−)⟩_23 |Φ(−)⟩_14, (2c)
|Φ(+)⟩_23 |Φ(+)⟩_14, (2d)

where the resulting entangled states of particles 2 and 3 and particles 1 and 4 are the Bell states given in Eqn. 1 [9]. Thus, Eqn. 2 indicates that whichever Bell state particles 2 and 3 are projected into is the same Bell state particles 1 and 4 are projected into. This creation of two new EPR pairs causes all four particles in the system to swap entanglement partners and is therefore referred to as pair, or entanglement, swapping.

FIG. 1: Quantum teleportation schematic. Throughout the teleportation process, particles 1 and 2 are located separately from particle 3. Initially, particles 2 and 3 are entangled in an EPR pair while particle 1 resides in the unknown state which is to be teleported. After a measurement of particles 1 and 2 is performed in the Bell basis, particles 1 and 2 comprise a new EPR pair, and particle 3 resides in a non-entangled state related to the initial state of particle 1. To complete the teleportation process, the result of the Bell basis measurement of particles 1 and 2 is relayed to the receiver at Location 2, via a classical channel, allowing the receiver to determine which unitary operator to apply to completely reconstruct the initial state of particle 1.

The model discussed in this paper investigates the thermodynamical effects of a Bell state projection interaction in a quenched spin lattice. A Bell state projection interaction or measurement is responsible for quantum teleportation and pair swapping as well as the creation of entangled EPR pairs. Normally, the quantum teleportation process requires a classical channel to complete the operation with 100%
fidelity. That is, the appropriate 180° rotation required to finish the process with complete certainty needs to be communicated to the member of the EPR pair prepared to receive the unknown state. In our case, we dispense with the classical channel and allow statistics to determine how the unknown state is transported across the lattice. As described above, in the standard treatment, the unknown state has complex coefficients a and b which characterize the state to be teleported. In our case, to simplify the model for this treatment, a and b are either 0 or 1. That is, the unknown states are always z-component spin eigenstates. As a result, up to an overall phase that does not affect the dynamics of the model, the unknown quantum state is teleported faithfully 50% of the time; otherwise, the opposite spin state is teleported. What makes this interesting, regardless of the fidelity, is that a local interaction in one part of the lattice can force a spin state into another part of the lattice. A random spin state is inserted into the lattice even if it would not normally be energetically favorable to do so.
Future models will incorporate a lattice of spin states with arbitrary complex coefficients. This is equivalent to a lattice of Bloch spheres. By retaining all the phase information, a richer variety of interactions can be explored. Decoherence could then be modeled using a density matrix approach whereby the state vectors not only reside on the surface of the Bloch spheres, but can diffuse into the bulk of the spheres in time due to interactions.
III. THE ISING MODEL OF A FERROMAGNET AND THE METROPOLIS ALGORITHM
The Ising model of a ferromagnet is a simplified model which provides relevant insight into the thermodynamic behavior of a ferromagnet, specifically the spontaneous phase change from a non-magnetic to a magnetic state. The Ising model of a ferromagnet consists of a set of spins, and thus magnetic moments, arranged in a regular lattice, usually linear, square, or cubic depending on dimension. These spins can take one of two values, +1 (spin up) or -1 (spin down). Each spin in the lattice is acted upon by its immediate neighbors, where the force depends on the relative orientation of the neighboring spins. Aligned spins are favored while anti-aligned spins are not. Since each spin has a magnetic moment, all spins can be acted upon by an external magnetic field as well. The total energy of the system is given by

E = −ε Σ_(i,j) s_i s_j − m B_ext Σ_i s_i, (3)

where s_i is the spin state of the ith spin, ε is the magnitude of the interaction energy of two neighboring spins, m is the magnetic moment of a spin, B_ext is the external magnetic field, and the sum over (i, j) indicates a sum over all neighboring spins [10]. The first term in Equation 3 represents the energy from the interaction between neighboring spins, while the second term represents the energy from each spin's interaction with an external magnetic field. Therefore, in the absence of an external magnetic field the energy of the system is

E = −ε Σ_(i,j) s_i s_j, (4)

where the sum is again carried over all neighboring spins. The total magnetization of the system is given by

M = Σ_i s_i, (5)

where in this case the sum is carried over all spins. The Ising model holds for any dimension; however, the one-dimensional Ising model does not result in a phase change.
The two- and three-dimensional Ising models do display phase changes. Several analytical solutions to the two-dimensional Ising model have been completed, two of which can be found in [10] [11]; however, no analytical solution to the three-dimensional Ising model has been found. In addition to analytical solutions, the Ising model can also be investigated through Monte Carlo algorithms. However, since a lattice of any considerable size has an innumerable number of possible states, it is practical to utilize a Monte Carlo algorithm with importance sampling, specifically the Metropolis algorithm [12]. Under the Metropolis algorithm, new random state configurations are generated based on the Boltzmann probability

P = e^(−ΔE/k_B T), (6)

where ΔE is the energy difference, k_B is Boltzmann's constant, and T is the temperature of the system. For the Ising model, the Metropolis algorithm is executed as follows. A random spin within the lattice is chosen and the energy difference, ΔE, that would result from flipping the spin is calculated. If ΔE ≤ 0, i.e., flipping the spin lowers the energy of the system, then the spin is flipped. However, if ΔE > 0, then the spin is flipped with the probability given in Eqn. 6. The thermodynamical quantities, such as energy, for this new configuration can then be calculated. The average of a thermodynamical quantity over all configurations generated at a specific temperature gives that quantity's value for that temperature. The model discussed in this paper is based on the Ising model and is therefore investigated under the Metropolis algorithm. For comparative purposes, this paper presents the results of the Ising model in addition to the modified case.
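As a concrete illustration of the Metropolis update just described, the sketch below performs single-spin-flip sweeps on a two-dimensional ±1 lattice. It is a minimal example rather than the code used for this study; in particular, periodic boundary conditions, the lattice size, and the choice k_B = 1 are assumptions made for the sketch.

import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, temperature, eps=1.0):
    # One sweep of single-spin-flip Metropolis moves (k_B = 1, periodic boundaries).
    n = spins.shape[0]
    for _ in range(spins.size):
        i, j = rng.integers(n, size=2)
        neighbors = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                     + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * eps * spins[i, j] * neighbors      # energy change if spin (i, j) is flipped
        if dE <= 0 or rng.random() < np.exp(-dE / temperature):
            spins[i, j] *= -1
    return spins

lattice = rng.choice([-1, 1], size=(40, 40))
for _ in range(1000):
    metropolis_sweep(lattice, temperature=2.0)
print("magnetization per spin:", lattice.mean())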
IV. ISING-INSPIRED MODEL OF QUANTUM TELEPORTATION IN A SPIN LATTICE
In order to investigate the thermodynamical effects of quantum teleportation in a quenched spin lattice, an Ising-inspired model was developed. By directly modifying the Ising model to encompass entangled Bell states as well as Bell state projection and quantum decoherence interactions, the quantum teleportation of spin states within the lattice could occur. The model is structured as a discrete lattice of stationary particles, just as the Ising model. However, unlike the Ising model, each particle can reside in one of three distinct states, spin up, spin down, or an entangled Bell state. Just as in the Ising model, the spin up state is represented by s = +1 and the spin down state is represented by s = −1. Each particle in an entangled Bell state resides in a perfect 50/50 superposition of a spin up and spin down state, thus the energy between a particle in a Bell state and any neighboring particle is the average of the energies given if the Bell state particle was in either a spin up or spin down state. Since the average of these two energies is zero, the energy between a particle in a Bell state and any neighboring particle is zero. Therefore, any particle residing in a Bell state can be represented by s = 0.
The total energy of the lattice is still defined as it was for the Ising model and is given by Eqn. 4, however the possible values of s i are now +1, -1, and 0 instead of simply +1 or -1. Similarly, the magnetization of the lattice is still defined by Eqn. 5 with the additional s i value of zero. In addition to adding Bell states, a Bell state projection interaction executed via the Metropolis algorithm and a time dependent quantum decoherence interaction were included. Since both of these interactions are dependent of the specific Bell state in which an EPR pair resides, the specific Bell state of all EPR pairs is also determined and tracked throughout the model.
A. Bell State Projection Interaction
The Bell state projection interaction projects two particles within the lattice into an entangled Bell state. The projection of two particles into a Bell state is the mechanism that is responsible for both quantum teleportation and pair swapping as discussed in Section II. Therefore, the Bell state projection interaction may result in the long range teleportation of quantum states within the lattice or in pair swapping. Just as with the basic spin flip interaction in the Ising model, the Bell state projection interaction is executed under the Metropolis algorithm. Under the Metropolis algorithm, the Bell state projection interaction is executed as follows. Two random adjacent particles within the lattice are chosen and the energy difference, ∆E, that would result from projecting the two particles into a Bell state is calculated. For the case in which the teleportation of a quantum state would result from the interaction, the energy change due to the teleportation of the quantum state is not considered in ∆E, since the exact state teleported is random. Also, it is important to note that ∆E = 0 in the case of a pair swap because all particles will remain in Bell states, i.e., s = 0. If ∆E ≤ 0, then the two particles are projected into a Bell state, where the specific Bell state into which the two particles are projected is random and each Bell state is equally probable. If ∆E > 0 then the particles are randomly projected into a Bell state based on the probability given in Equation 6. Again, the specific Bell state into which the particles are projected is random where each Bell state is equally likely. When two particles are projected into a Bell state there are three possible results, where each result depends on the initial states of the involved particles. Each of the three possible resulting cases is outlined below.
The first and simplest case, case 1, is the projection of two non-entangled particles into an entangled Bell state. This occurs if both particles involved in the Bell state projection interaction reside in either a spin up or spin down state. A visual representation of a simple Bell state projection is show in Fig. 2.
The second and most interesting case, case 2, is the teleportation of a quantum state. This occurs when one particle involved in the interaction resides in either a spin up or spin down state and the other resides in one of the four Bell states. Considering all possible combinations of the two spin states and the four Bell states results in eight possible situations described by the following product state wave functions,

|↑⟩_1 |Ψ(±)⟩_23, |↓⟩_1 |Ψ(±)⟩_23, |↑⟩_1 |Φ(±)⟩_23, |↓⟩_1 |Φ(±)⟩_23, (7)

where particles 1 and 2 are involved in the Bell state projection interaction, particle 1 initially resides in either a spin up or spin down state, and particles 2 and 3 are initially entangled in one of the four Bell states. Expanding each of the states in Eqn. 7 in the Bell basis of particles 1 and 2, as in Section II, shows that every spin state-Bell state combination results in a 50% probability that the teleported state, the final state of particle 3, will be spin up and a 50% probability that the teleported state will be spin down. Therefore, since the specific spin state that is teleported is random, the teleported state may actually increase the energy of the lattice even if the projection of particles 1 and 2 into a Bell state lowers the energy. Conversely, the teleported state can also decrease the energy of the lattice even though the projection of particles 1 and 2 into a Bell state raises the energy. A visual representation of a teleportation is shown in Fig. 3.
The third and final case, case 3, is that of a pair swap. This occurs when both particles involved in the Bell state projection interaction are part of separate entangled Bell states. The pair swap will occur in the manner described in Section II and the four possible, and equally likely, resulting wavefunctions are given by Eqn. 2. Since ∆E = 0 for all pair swap cases, a pair swap will always occur under the Metropolis algorithm. The visual representation of a pair swap is shown in Fig. 4.
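One way to organize the Bell state projection move described by the three cases above is sketched below. The bookkeeping choices here (a partner dictionary tracking EPR pairs, periodic boundaries, k_B = 1, and a lattice of values +1, -1, and 0) are implementation assumptions for the illustration, not details prescribed by the model description.

import numpy as np

rng = np.random.default_rng(1)

def local_energy(spins, i, j, eps=1.0):
    # Energy between site (i, j) and its nearest neighbors (periodic boundaries).
    n = spins.shape[0]
    neighbors = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                 + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
    return -eps * spins[i, j] * neighbors

def bell_projection_move(spins, partner, temperature, eps=1.0):
    # spins: 2D array of +1/-1 (definite spins) or 0 (Bell-state members).
    # partner: dict mapping each Bell-state member's site to its entangled partner's site.
    n = spins.shape[0]
    i, j = rng.integers(n, size=2)
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    di, dj = moves[rng.integers(4)]
    k, l = (i + di) % n, (j + dj) % n

    # Energy change from setting both sites to 0; the teleported state is not counted.
    dE = -(local_energy(spins, i, j, eps) + local_energy(spins, k, l, eps))
    if dE > 0 and rng.random() >= np.exp(-dE / temperature):
        return                                            # move rejected

    a_bell, b_bell = spins[i, j] == 0, spins[k, l] == 0
    if a_bell and b_bell:                                 # case 3: pair swap
        pa, pb = partner.pop((i, j)), partner.pop((k, l))
        partner[pa], partner[pb] = pb, pa
    elif a_bell or b_bell:                                # case 2: teleportation
        bell_site = (i, j) if a_bell else (k, l)
        old = partner.pop(bell_site)                      # former partner of the Bell member
        partner.pop(old, None)
        spins[old] = rng.choice([-1, 1])                  # a random spin state lands there
    # case 1 (both definite spins) needs no extra bookkeeping before pairing.
    spins[i, j] = spins[k, l] = 0                         # the two chosen sites form a new EPR pair
    partner[(i, j)], partner[(k, l)] = (k, l), (i, j)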
Finally, it is important to note the inherent tendency of the Bell state projection interaction to project all particles in the lattice into Bell states. Since the interaction only projects particles into Bell states, as time and iteration number progress, the number of particles in Bell states increases. Thus, after many iterations most, if not all, particles will reside in a Bell state, resulting in a near-zero energy. However, this effect can be mitigated by a quantum decoherence interaction, which returns particles in Bell states to the more classical spin states. while the particles labeled B, C, and D represent other separate EPR pairs. After the interaction, the particles labeled C and D are the same particles previously labeled A and B, however, they are labeled differently because they are now entangled in new EPR pairs with different partners.
B. Quantum Decoherence Interaction
The quantum decoherence interaction provides a very simplified model and execution of quantum decoherence. Quantum decoherence [13] is the process by which a quantum system loses its quantum coherence and devolves into a semi-classical or classical state. The loss of quantum information which causes the quantum system to devolve is a result of the interaction between the quantum system itself and the environment, which is also treated as a quantum mechanical system. Quantum correlations between the quantum system and the environment allow quantum information to be dispersed throughout the environment. This dispersion of quantum information increases the entropy of the system.
The quantum decoherence interaction simulates a simplified interaction between a particle in a Bell state and its neighboring particles, i.e., its local environment. The interaction between a particle in a Bell state and its neighbors causes the particle to disperse quantum information and decohere into the more classical spin up or spin down states. The dispersion of quantum information is a result of the neighboring particles performing a measurement on the Bell state particle in a pointer basis determined by the orientations of the neighboring particles' spins. For example, if a majority of the neighboring particles of a Bell state are spin up, then the Bell state particle decohering into a spin up state will be energetically favorable. Thus, the neighboring particles will "perform" an effective measurement on the Bell state particle causing it to decohere into a spin up state. A similar decoherence of the Bell state particle to a spin down state would occur if the majority of the neighboring particles were in spin down states. The random likelihood that a Bell state will decohere follows a time-dependent probability (Eqn. 8), which increases with the time t that the particle has resided in a Bell state, where τ is the characteristic decoherence time.
A visual representation of the quantum decoherence interaction is shown in Fig. 5.
When a particle in an EPR pair decoheres, its partner will also decohere in a correlated manner. This correlated decoherence occurs in a manner consistent with the Bell state in which the two particles were previously entangled. For example, if a particle residing in either the Ψ(+)_12 or Ψ(−)_12 state decoheres into a spin up state, its partner will decohere into a spin down state. In general, when a particle in either the Ψ(+) or Ψ(−) state decoheres into a spin state, its partner will decohere into the opposite spin state. Conversely, when a particle in either the Φ(+) or Φ(−) state decoheres into a spin state, its partner will decohere into the same spin state. In addition to modeling the quantum mechanical interaction between a particle in a Bell state and its local environment, the decoherence interaction also mitigates the Bell state projection interaction's inherent tendency to project all particles in the system into Bell states.
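A possible implementation of this decoherence step is sketched below. The exponential form P = 1 − exp(−t/τ) is an assumed realization of the time-dependent probability of Eqn. 8 (the text specifies only that the probability grows with the time t spent in a Bell state, with characteristic time τ); the age and bell_type dictionaries, the tie-breaking rule for an evenly split neighborhood, and the boundary conditions are likewise assumptions made for the illustration.

import numpy as np

rng = np.random.default_rng(2)

def decoherence_step(spins, partner, age, bell_type, tau):
    # age: iterations each Bell-state member has spent entangled.
    # bell_type: maps a frozenset of the two paired sites to "Psi" (anti-correlated)
    # or "Phi" (correlated), fixing how the partner collapses.
    n = spins.shape[0]
    for site in list(partner):
        if site not in partner:                      # partner already decohered in this pass
            continue
        t = age.get(site, 0)
        if rng.random() >= 1.0 - np.exp(-t / tau):   # assumed form of Eqn. 8
            continue
        mate = partner.pop(site)
        partner.pop(mate, None)
        i, j = site
        neighbors = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                     + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        s = 1 if neighbors > 0 else -1 if neighbors < 0 else rng.choice([-1, 1])
        spins[site] = s                              # pointer basis set by the local neighborhood
        kind = bell_type.pop(frozenset((site, mate)), "Psi")
        spins[mate] = -s if kind == "Psi" else s     # correlated collapse of the partner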
V. NUMERICAL DETAILS
To isolate and understand the thermodynamical effects of a Bell state projection interaction in a quenched spin system, six different interaction models were studied under the Metropolis algorithm, three in one dimension and three in two dimensions. The one-dimensional models and their designations are as follows: the pure one-dimensional Ising model (1D Ising Model); the combination of the Bell state projection interaction and the quantum decoherence interaction (Model 1A); and the combination of the Ising spin flip interaction, Bell state projection interaction, and quantum decoherence interaction (Model 1B). The corresponding two-dimensional models are as follows: the pure two-dimensional Ising model (2D Ising Model); the combination of the Bell state projection and quantum decoherence interactions (Model 2A); and the combination of the Ising spin flip, Bell state projection, and quantum decoherence interactions (Model 2B). A summary of all models by name, dimensionality, and comprising interactions is given in Table I. The three one-dimensional models were studied on a 1×40 linear lattice and the two-dimensional models were studied on a 40×40 square lattice. The 1×40 and 40×40 lattices provided the optimal combination of data variation reduction and computation time. The various models were iterated over varying temperature ranges, which were dependent on the dimensionality and τ value used. Each temperature range was split into a finite number of evenly spaced points so that a data point density of about 306 data points per unit temperature [k_B T/ε] resulted, where ε is the neighboring spin interaction energy given in Eqn. 4. The Metropolis algorithm was executed 50,000 times at each temperature point. The resulting lattice for each iteration and temperature point was used as the initial lattice for the next. To determine the effects of varying τ values, given in Eqn. 8, all interaction models involving the Bell state projection and quantum decoherence interactions (Model 1A, Model 1B, Model 2A, and Model 2B) were conducted with τ values of 10⁻⁷ I_tot, 10⁻⁵ I_tot, 10⁻⁴ I_tot, 10⁻³ I_tot, 10⁻² I_tot, 10⁻¹ I_tot, I_tot, and 10² I_tot, where I_tot is the total number of iterations over all temperature steps. The temperature-dependent thermodynamical quantities of energy, magnetization, specific heat, and entanglement density were determined from each execution.
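The quasi-static sweep itself can be organized as in the following sketch, which reuses a single-move function such as those sketched above and records the observables named here. The burn-in fraction, the fluctuation formula for the specific heat, and the choice k_B = 1 are assumptions of the sketch; the study may have computed the specific heat differently.

import numpy as np

def total_energy(spins, eps=1.0):
    # Nearest-neighbor energy, counting each bond once (periodic boundaries).
    return -eps * np.sum(spins * (np.roll(spins, 1, axis=0) + np.roll(spins, 1, axis=1)))

def temperature_sweep(spins, temperatures, step, n_iter=50_000):
    # `step(spins, T)` is any single-move or sweep function; the final lattice at
    # each temperature point seeds the next one.
    results = []
    for T in temperatures:
        energies = []
        for _ in range(n_iter):
            step(spins, T)
            energies.append(total_energy(spins))
        e = np.asarray(energies[n_iter // 2:])           # discard the first half as burn-in
        results.append({
            "T": T,
            "energy": e.mean(),
            "magnetization": spins.mean(),
            "specific_heat": e.var() / T ** 2,            # fluctuation formula, k_B = 1
            "entanglement_density": float(np.mean(spins == 0)),
        })
    return results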
FIG. 5: Decoherence schematic in a 5 × 5 lattice. White represents spin up, black represents spin down, and gray represents a Bell state. Each EPR pair is separately labeled so that the particles labeled A represent one EPR pair while the particles labeled B represent another separate EPR pair. When a particle in an EPR pair decoheres, its partner will also decohere in a correlated manner consistent with the Bell state in which the particles used to reside.
VI. PRELIMINARY RESULTS
The preliminary thermodynamic results in both one and two dimensions displayed critical behavior differing from that of the Ising model. In both one and two dimensions, the existence and location of a critical temperature depended on the τ parameter used in the decoherence interaction. For small τ values, corresponding to short decoherence times, the one-dimensional results displayed no critical behavior. Thus the low-τ one-dimensional energies and specific heats, as well as those obtained for the one-dimensional Ising model, were fitted to the respective analytical one-dimensional Ising model energy and specific heat. The one-dimensional Ising model results matched the analytical functions almost exactly; however, the other low-τ results displayed some variation. Where critical behavior occurred, the magnetization and specific heat data were fitted to proportional power laws in (T_c − T) for the magnetization (Eqn. 9) and in |T − T_c| for the specific heat (Eqn. 10), where T is the temperature and T_c is the critical temperature. Each respective least squares power law fit of the resulting magnetization and specific heat data was used to determine the critical temperature of each model and its dependence on the τ parameter. However, because of limitations in computational power, the power law fits applied to the two-dimensional Ising model results produced a critical temperature which differed from the analytical critical temperature by 0.061 [k_B T/ε]. Therefore, for all critical temperatures resulting from the power law fits, the difference between the analytical and determined Ising model critical temperatures was added to the errors given by the covariance matrix of the fit. Since the covariance matrix error in the critical temperature was several orders of magnitude smaller than the critical temperature difference error, the latter dominates.
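A least squares power-law fit of the kind described can be set up as in the sketch below, which extracts T_c from magnetization data. The specific fitting form M = A(T_c − T)^β below T_c (and zero above), the synthetic data, and the initial guesses are assumptions made for the example and are not the exact procedure or exponents used in the study.

import numpy as np
from scipy.optimize import curve_fit

def magnetization_law(T, A, Tc, beta):
    # Assumed fitting form: M = A * (Tc - T)^beta below Tc, zero above.
    below = A * np.clip(Tc - T, 1e-9, None) ** beta
    return np.where(T < Tc, below, 0.0)

# Synthetic magnetization data for illustration only.
rng = np.random.default_rng(3)
T = np.linspace(1.0, 3.5, 60)
M = np.where(T < 2.27, (2.27 - T) ** 0.125, 0.0) + 0.01 * rng.normal(size=T.size)

popt, pcov = curve_fit(magnetization_law, T, np.abs(M), p0=[1.0, 2.3, 0.2])
Tc_fit, Tc_err = popt[1], np.sqrt(np.diag(pcov))[1]
print(f"fitted critical temperature: {Tc_fit:.3f} +/- {Tc_err:.3f}")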
A. One Dimensional Results
The thermodynamical results (energy, specific heat, magnetization, and entanglement density) for Model 1A are shown in Fig. 8 and the results for Model 1B are shown in Fig. 9. For low τ values (approximately ≤ 550 iterations), no critical behavior is displayed by either interaction and the energy and specific heat follow a functional form similar to that of the one-dimensional Ising model. In order to determine the variation of each interaction from the 1D Ising Model, the energy and specific heat data of Models 1A and 1B with τ values of 5.5 and 550 iterations, as well as the 1D Ising Model, were fit to the known one-dimensional Ising model energy, E = −Nε tanh(ε/k_B T), and specific heat, C = N k_B (ε/k_B T)² sech²(ε/k_B T), where N is the particle number or lattice length of the system. The varying lattice lengths which resulted from the fits are given in Table II. As expected, the 1D Ising Model fits give the correct particle number of 40. However, the fits of both Model 1A and Model 1B gave incorrect lattice lengths, indicating that even at low τ values the Bell state and decoherence interactions alter the thermodynamics of the system. For larger τ values (τ > 550 iterations), corresponding to longer decoherence times, both Model 1A and Model 1B began to display critical behavior. Thus the magnetization and specific heat data for these higher τ values were fit to the respective power laws given by Eqn. 9 and Eqn. 10. For very large τ values (τ ≥ I_tot), the system becomes completely or almost completely saturated with Bell states, resulting in all thermodynamical quantities remaining relatively constant even at low temperatures. This saturation prevented a critical temperature from being determined for Models 1A and 1B with τ = I_tot. A plot of the critical temperature vs. the natural logarithm of τ for both Model 1A and Model 1B is shown in Fig. 6. As can be seen in Fig. 6, the critical temperature for both models decreases at about the same rate as the τ parameter increases. In addition, as the τ parameter increases, the energy, magnetization, and entanglement density of both models converge to a step function, where the step occurs at the critical temperature. This in turn causes the specific heat to converge functionally to a delta spike.
B. Two Dimensional Results
The thermodynamical results (energy, specific heat, magnetization, and entanglement density) for Model 2A are shown in Fig. 10 and the results for Model 2B are shown in Fig. 11. For low τ values (τ ≤ 750 iterations), corresponding to short decoherence times, Model 2A did not show any apparent critical behavior whereas Model 2B did. Therefore, where critical behavior was apparent, a least-squares fit of the magnetization and specific heat to the respective power laws, Eqn. 9 and 10, was applied to determine the critical temperature. Just as with the one-dimensional models, as the τ parameter increases the energy, magnetization, and entanglement density of both models converge to a step function and the specific heat converges to a delta spike. In addition, large τ values cause the system to become saturated with Bell states, resulting in a loss of critical behavior.
A plot of the critical temperature vs. the natural logarithm of the τ parameter for both Model 2A and Model 2B is shown in Fig. 7. In contrast to the one-dimensional case, T_c vs. log(τ) for Model 2B begins to diverge from that of Model 2A as τ becomes smaller, approaching a constant value. Moreover, this constant value differs from the critical temperature of the pure Ising model.
VII. CONCLUSION/FUTURE WORK
By utilizing an Ising-inspired numerical model, the temperature-dependent effects of spontaneous quantum teleportation and pair swapping on several thermodynamical quantities were examined within one- and two-dimensional spin lattices. Several models were developed by modifying the pure Ising model spin lattice to include a Bell state projection interaction and a quantum decoherence interaction, either instead of or in addition to the typical spin flip of the Ising model. By executing each model via the Metropolis algorithm, the temperature-dependent effects of the interactions on multiple thermodynamical quantities were determined. In addition, the time-dependent decoherence parameter, τ, was varied to determine its effect on the thermodynamical quantities. The resulting thermodynamical quantities of energy, specific heat, and magnetization were compared with those of the pure Ising model in both one and two dimensions.
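For readers unfamiliar with the underlying sampling scheme, the sketch below shows a minimal Metropolis update for the plain Ising part of such a simulation. The Bell-state projection and decoherence interactions described in the paper are not reproduced here, and all names and parameter values are illustrative assumptions.

    import numpy as np

    def metropolis_sweep(spins, beta, J=1.0, rng=np.random.default_rng()):
        """One Metropolis sweep of a 2D Ising lattice with periodic boundaries."""
        L = spins.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # Energy change from flipping spin (i, j): dE = 2*J*s*(sum of neighbors)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * J * spins[i, j] * nn
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
        return spins

    # Example: magnetization at a few temperatures for a small lattice (illustrative only).
    L, sweeps = 12, 1000
    for T in (1.5, 2.27, 3.0):
        spins = np.ones((L, L), dtype=int)
        for _ in range(sweeps):
            metropolis_sweep(spins, beta=1.0 / T)
        print(f"T = {T:.2f}: |m| = {abs(spins.mean()):.3f}")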
In one dimension, for Model 1A and Model 1B, the preliminary results at low τ values show no critical behavior and follow those of the Ising model. As the τ parameter is increased, however, both Model 1A and Model 1B develop critical behavior in the thermodynamical quantities, something not seen in the one-dimensional Ising model. No difference in the τ-dependent critical temperatures between Model 1A and Model 1B was observed.
The preliminary two-dimensional results, Model 2A and Model 2B, displayed variation between themselves as well as with the pure two-dimensional Ising model. Model 2A displayed no apparent critical behavior at low τ values, whereas the 2D Ising model and Model 2B did. In addition, at these low τ values the critical temperatures of Model 2B were higher than that of the pure two-dimensional Ising model. As the τ parameter was increased, both Model 2A and Model 2B exhibited critical behavior. For moderate τ values, the critical temperatures of Model 2A and Model 2B were very near that of the pure two-dimensional Ising model; as τ was increased further, these critical temperatures became much lower than the critical temperature of the pure two-dimensional Ising model. For high τ values, the critical temperatures of Model 2A and Model 2B are nearly identical, whereas as τ is decreased the critical temperatures of the two models diverge, with the critical temperature of Model 2B approaching a constant value.
Future work investigating the thermodynamical effects of spontaneous quantum teleportation in a spin lattice will include working to determine an analytical Hamiltonian and partition function for all models. If an analytical result can be determined, it can be compared with those given by the numerical models. Future work on the numerical model will include the addition of dynamic motion of the spins within the lattice. This dynamic motion should allow more long-range quantum teleportation and quantum decoherence effects to take place. In addition, both models will be expanded to include more general superposed spin states instead of the binary spin states currently used. These superposed spin states can be modeled by the spin vector lying on the surface of a Bloch sphere. Quantum decoherence could then be represented as the decay of the spin vector from the surface of the Bloch sphere.

Figure 8 caption (one-dimensional results, Model 1A): Vertical lines indicate the critical temperatures determined from the fit of both the specific heat and magnetization. As τ increases, the critical temperatures reflected in these thermodynamical results decrease. Fig. 8a: for low τ values the resulting energies resemble that of the 1D Ising model; as the τ parameter increases, the temperature-dependent energy results approach a step function. Fig. 8b: the specific heats functionally resemble those of the 1D Ising model, with the lower τ value resembling the 1D Ising model very closely. Fig. 8c: as τ increases the critical temperature decreases while the specific heat curves become steeper and begin to resemble a delta function. Fig. 8d: as τ increases, the temperature at which the system gains an average non-zero magnetization decreases; no definite critical transition temperature is apparent. Fig. 8e: as τ increases, the temperature at which the system gains an average non-zero magnetization decreases; also, with increasing τ a definite critical transition temperature becomes more apparent, where it was not for lower τ values. Fig. 8f: as τ is increased, the maximum entanglement density of the system increases; also, with increasing τ, the temperature at which the system transitions to a near-zero entanglement density decreases and the temperature-dependent entanglement density approaches a step function.

Figure 10 caption (two-dimensional results, Model 2A): Vertical lines indicate the critical temperatures determined from the fit of both the specific heat and magnetization. As τ increases, the critical temperatures reflected in these thermodynamical results decrease, diverging from the 2D Ising model. Fig. 10a: at low τ values, the energy functions are lower and smoother than the 2D Ising model energy; as τ increases, the energy functions become sharper, surpassing the 2D Ising model energy and approaching a step function. Fig. 10b: at these low τ values the specific heat results are relatively smooth, displaying no apparent critical temperature, in contrast to the 2D Ising model also shown. Fig. 10c: for these τ values the specific heats display critical behavior similar to the 2D Ising model; however, the critical temperatures are lower than that of the 2D Ising model. Fig. 10d: at these higher τ values the critical temperatures displayed by the specific heats are lower than that of the 2D Ising model and become lower with increasing τ; also, as τ increases, the specific heat transitions become sharper and begin to resemble a delta function. Fig. 10e: as the τ parameter increases, the critical temperature of the magnetic transition decreases; in addition, as τ increases, the transition becomes sharper and begins to resemble a step function at very high τ values. Fig. 10f: as τ is increased, the maximum entanglement density of the system increases; also, with increasing τ, the temperature at which the system transitions to a near-zero entanglement density decreases and the temperature-dependent entanglement density approaches a step function.

Figure 11 caption (two-dimensional results, Model 2B): Vertical lines indicate the critical temperatures determined from the fit of both the specific heat and magnetization. As τ increases, the critical temperatures reflected in these thermodynamical results decrease, diverging from the 2D Ising model. Fig. 11a: at low τ values, the energy functions are lower but very similar to the 2D Ising model energy; at mid-range τ values, the energies are very similar and close in magnitude to the 2D Ising model; as τ increases, the energy functions become sharper, surpassing the 2D Ising model energy and approaching a step function. Fig. 11b: for these low τ values, the specific heats display critical behavior similar to the 2D Ising model; however, the critical temperatures are lower than that of the 2D Ising model. Fig. 11c: for these τ values the specific heats resemble the 2D Ising model results; however, the critical temperatures are lower than that of the 2D Ising model, with the critical temperature for τ = 5.5e4 iterations residing very close to the 2D Ising model critical temperature. Fig. 11d: at these higher τ values the critical temperatures displayed by the specific heats are lower than that of the 2D Ising model and become lower with increasing τ; also, as τ increases, the specific heat transitions become sharper and begin to resemble a delta function. Fig. 11e: at lower τ values the magnetizations resemble that of the 2D Ising model; as the τ parameter increases, the critical temperature of the magnetic transition decreases; in addition, as τ increases, the transition becomes sharper and begins to resemble a step function at very high τ values. Fig. 11f: as τ is increased, the maximum entanglement density of the system increases; also, with increasing τ, the temperature at which the system transitions to a near-zero entanglement density decreases and the temperature-dependent entanglement density approaches a step function.
| 2015-11-07T00:34:38.000Z | 2015-11-07T00:00:00.000 | {
"year": 2015,
"sha1": "c55416adedcdad38d4f31900be632578eeffb6f8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c55416adedcdad38d4f31900be632578eeffb6f8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
42552600 | pes2o/s2orc | v3-fos-license | Covalent linkage between proteins of the inter-alpha-inhibitor family and hyaluronic acid is mediated by a factor produced by granulosa cells.
The direct interaction of hyaluronic acid (HA) and proteins of the inter-α-inhibitor family plays a critical role in organization and stabilization of the expanding cumulus extracellular matrix (cECM) following an ovulatory stimulus. Despite similarities in the morphology of cumulus oocyte complexes (COCs) expanding in vivo and in vitro, we find that the cECM of COCs which expand within intact follicles are more elastic and resistant to shear stress than the cECM of those stabilized in vitro. Western blot analysis shows that only the heavy chains of inter-α-inhibitor are incorporated into the cECM and appears to be covalently linked to HA after stabilization in vivo while intact inter-α-inhibitor is bound to the HA-enriched cECM by a non-covalent mechanism in in vitro stabilized COCs. However, purified pre-α-inhibitor and HA can form covalent linkage in the presence of granulosa cells or with granulosa cell-conditioned medium. In addition, COCs resistance to shear stress is also enhanced by coincubation with granulosa cells. Upon formation of the apparent covalent linkage between heavy chains and HA in culture medium, the light chain (bikunin) is concomitantly released into the medium as a complex with chondroitin sulfate moieties of inter-α-inhibitor supporting the possibility that HA may replace the chondroitin sulfate linkage to the heavy chains. We speculate that a factor(s) secreted by granulosa cells within the follicle may catalyze a transesterification reaction resulting in an exchange of chondroitin sulfate with HA at the heavy chain/chondroitin sulfate junction followed by release of chondroitin sulfate-bikunin into the follicular fluid. It is also possible that the consequent further stabilization of the cECM through the covalent interaction of HA and heavy chains of inter-α-inhibitor may play an important role in the process of ovulation.
In most mammalian species (including mouse, rat, and human), cumulus-oocyte complexes (COCs) of pre-ovulatory follicles undergo a dramatic change following an ovulatory stimulus. The tightly packed cumulus cells first disaggregate and then synthesize and secrete large amounts of hyaluronic acid (HA) into their extracellular matrices (ECMs). The ECM, cumulus cells, and oocyte are thus integrally bound within an expanded mucoid complex which is about 20 to 40 times larger in volume, depending upon the species (1). This process of cumulus expansion is required for ovulation and may also facilitate the process of fertilization (2)(3)(4).
We have previously identified a serum factor (proteins of the inter-α-inhibitor family), critical in organizing and stabilizing the expanding cumulus matrix (5). This protein factor appears to be excluded from follicular fluid until the ovulatory gonadotropin surge and then quickly diffuses into the follicular fluid where it becomes integrated within the cumulus ECM (5, 6). Two major forms of this factor, pre-α-inhibitor (PαI) and inter-α-inhibitor (IαI), exist in mammalian species including mouse, bovine, and human (7, 8). They each include a common light chain (about 40 kDa) which has two domains of the Kunitz-type trypsin inhibitor, and so this protein is termed bikunin. PαI is composed of bikunin and a single heavy chain connected by chondroitin sulfate (9-12). IαI consists of bikunin and two heavy chains also joined by chondroitin sulfate. According to a model proposed by Enghild et al. (10, 11), a single chondroitin sulfate chain extends from a glycosylation site at Ser-10 of the bikunin subunit to link with the C-terminal Asp residue of each heavy chain via an ester bond to form a novel carbohydrate linkage. The three different heavy chains are highly homologous and, in fact, the specific heavy chain combinations identified in different species may differ from one another. For example, PαI of human and mouse is composed of heavy chain 3 (HC3) and the light chain, while bovine PαI consists of heavy chain 2 (HC2) and the light chain (13).
Both P␣I and I␣I are almost identical in their ability to stabilize the expanding cumulus ECM in vitro (14) where a direct interaction between proteins of the I␣I family and HA seems to play a critical role in preventing the release of HA into the culture medium. As demonstrated in an earlier in vitro study, this initial interaction appears to be a non-covalent charge-mediated interaction (14). Although COCs which expand in vitro are morphologically indistinguishable from those expanding in vivo, ovulated COCs appear to be more elastic and more resistant to mechanical shear force. It has been reported that proteins of the I␣I family could form covalent interactions with HA in various systems including follicular fluid (15)(16)(17)(18), however, the degree of native protein that forms covalent linkage with HA appears to be very low and the mechanism of the covalent interaction has not been clarified. Nonetheless, it is possible that a covalent interaction between I␣I and HA could result in this observed increased stability of the cumulus ECM.
In this study, we show that the majority of I␣I within the ovulated cumulus ECM is covalently linked with HA and that this covalent interaction can be partially achieved in vitro by incubating purified P␣I and HA with granulosa cells. Like ovulated COCs which expand within the intact follicle, COCs stabilized in medium containing granulosa cells or granulosa cell-conditioned medium, possess greater resistance to shear forces than those stabilized in medium lacking granulosa cells or granulosa cell-conditioned medium. This increased stability may be required for maintenance of integrity of the cumulus mass during extrusion of the COC through the rupture site within the follicular wall.
Methods
Preparation of COCs-Mice were injected with 5 IU of pregnant mare's serum gonadotropin and sacrificed 48 h later. Ovaries were placed in MEM with penicillin-G (100 units/ml) and streptomycin (50 μg/ml). COCs (about 50-80 COCs per animal) for in vitro expansion assays were isolated and incubated in medium containing MEM, 2.5 mM glucosamine, porcine follicle-stimulating hormone (2 μg/ml), and other factors (FBS, purified bovine or mouse PαI as specified in each experiment) at 37°C and 5% CO2 for 16 h as described previously (5). In vivo stabilized ovulated COCs were collected about 12 h after an injection of an ovulatory dose of hCG (5 IU) in animals primed 48 h earlier with pregnant mare's serum gonadotropin (5 IU).
High Performance Liquid Chromatography Coupled ELISA for COCs-Ovulated COCs were washed 3 times in phosphate-buffered saline and then transferred to 500 μl of 6 M guanidine HCl with 8% lauryl sulfobetaine or to 100 μl of phosphate-buffered saline with 2 units of Streptomyces hyaluronidase for 3 h at 37°C and then transferred to 400 μl of 6 M guanidine HCl with 8% lauryl sulfobetaine. About 100 μl of each of these samples was fractionated using a gel-filtration column (TSK-G-4000, Bio-Rad) on a Waters HPLC unit eluted with phosphate-buffered saline at a 6 ml/min flow rate. Fractions were collected from 10 to 34 min. 100 μl of each collected fraction was placed in a microwell plate (Dynatech Labs, Alexandria, VA) overnight at 4°C. The plates were then washed and blocked with 1% dry milk in 10 mM Tris-HCl buffer (pH 8.0) with 0.05% Tween 20 (TBT). Following incubation with anti-human IαI (1:1000 dilution with TBT) for 1 h at room temperature and three consecutive washes with TBT, the plates were then incubated with goat anti-rabbit IgG conjugated to horseradish peroxidase (1:1000 dilution, Bio-Rad). Following three washes with TBT, the plates were developed according to the manufacturer's instructions. The relative absorbance at 450 nm was determined with a Bio-Tek EL 309 Autoreader (Bio-Tek, Winnoski, VT).
Cumulus ECM Shear Resistance Assay-Pasteur pipettes were flamed to an average size of 180 μm (inside diameter; about half the size of fully expanded and stabilized COCs). Individual COCs were sucked fully into the pipette and gently blown out (defined as one cycle). This process was repeated until the outer half of the cumulus mass had been stripped away and the remaining complex had been reduced to the size of the bore of the pipette. Although this parameter is judged subjectively, the release of the outer layer of the expanded cumulus mass is an all-or-nothing event which permits an accurate quantitation of the stability of the expanded cECM. The resistance to shear stress (shear resistance index) is defined as the number of cycles necessary to strip off the outer half of the mass of cumulus cells.
Purification of Mouse and Bovine PαI-Both mouse and bovine PαI were purified from mouse serum or FBS through four consecutive steps including ammonium sulfate precipitation, HPLC using gel filtration, DEAE, and gel filtration as described previously (5, 14). The PαI/IαI used in this experiment is about 99% free of other proteins as judged by Coomassie Blue staining following sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). The cross-contamination of PαI with IαI, however, is about 5%.
Generation of Bikunin Site-Specific Antiserum-A peptide corresponding to human bikunin sequence 116-130 (QGNGNKFYSEKECRE) was synthesized and conjugated to keyhole limpet hemocyanin in the Department of Biochemistry, University of Kentucky (Lexington, KY). New Zealand White rabbits were immunized with the conjugated peptide and antiserum was collected as described previously (19). The specificity of the antiserum was characterized by Western blot as shown in Fig. 4B.
Western Blot Analysis-Protein samples from FBS, mouse serum, ovulated COCs, and various in vitro stabilized COCs or media (see figure legends for details of individual sample preparation) were heated to 100°C in SDS-PAGE sample buffer (62.5 mM Tris-HCl at pH 6.8 and containing 2% SDS, 10% glycerol, and 5% 2-mercaptoethanol) for 90 s. Prior to treatment in this sample buffer, most samples in MEM were divided equally into two parts and one part was treated with Streptomyces hyaluronidase for 3 h at 37°C. The Streptomyces hyaluronidase (Sigma, H1136) is specific for hyaluronic acid and does not have chondroitinase activity (20). As shown in this study, this enzyme will not dissociate the heavy chains and light chain of native PαI/IαI. The reduced samples were then resolved on polyacrylamide gels (Bio-Rad pre-cast 4-15% or 4-20% gradient, or homemade 7 or 10%, as specified in figure legends) and then transferred to nitrocellulose paper. Following incubation with rabbit anti-human IαI IgG (Dako) (1:1000) or anti-bikunin site-specific antiserum (1:500) and alkaline phosphatase-conjugated goat anti-rabbit IgG (1:1000), the blots were developed using substrates according to the manufacturer's instructions (Bio-Rad).
Preparation of Chondroitin Sulfate Radiolabeled IαI and Immunoprecipitation-To radiolabel the chondroitin sulfate component of IαI, a mouse hepatoma cell line was generated from an SV40 large T antigen transgenic mouse provided by Dr. J. S. Butel, Baylor College of Medicine. The cells were maintained and propagated in MEM supplemented with 10% FBS at 37°C under 5% CO2. After achieving confluence in a 75-cm2 flask, the cells were washed 3 times with phosphate-buffered saline and then incubated overnight with 5 ml of labeling medium (sulfate-free MEM containing 2 mCi of carrier-free [35S]sulfuric acid, 0.5% FBS). The medium was centrifuged (1000 × g) for 10 min and passed through a 0.2-μm filter (Millipore) to remove cell debris. The unincorporated radioisotope was removed using ultrafiltration with a molecular mass cut-off of 100 kDa (Centricon-100, Amicon) and the medium concentrated to a final volume of 0.5 ml. About 20 μl of this sulfate-labeled protein mixture was then added to 80 μl of MEM containing granulosa cells under various conditions specified in the figure legends and incubated overnight at 37°C and 5% CO2. These cell-medium mixtures were then centrifuged (at 1000 × g for 10 min) to remove cell debris, 2 μl of rabbit anti-human IαI was added, and the mixture was incubated at room temperature for 2 h, followed by addition of 20 μl of protein A-agarose suspension. The resulting mixture was incubated for another 2 h with gentle agitation and washed 4 times in Tris-HCl buffer (pH 8.0 with 1% bovine serum albumin and 0.05% Tween 20). 20 μl of sample buffer (62.5 mM Tris-HCl, pH 6.8, with 2% SDS, 10% glycerol, and 5% 2-mercaptoethanol) was added and this mixture was heated to 100°C for 90 s. After centrifugation (16,000 × g for 5 min), the supernatant was resolved on a 4-15% gradient SDS-PAGE gel. The gels were then soaked with EN3HANCE (DuPont NEN) and fluorographed as recommended by the manufacturer's manual.
RESULTS AND DISCUSSION
In vitro and in vivo expanded COCs exhibit marked differences in resistance to shear forces. In vivo stabilized, ovulated COCs exhibit shear resistance indices greater than 60 in every ovulated COC tested (n = 12). In fact, trituration of ovulated COCs was arbitrarily terminated at the 60th cycle since every ovulated COC tested in this manner was still intact. In sharp contrast, in vitro stabilized COCs (stabilized in the absence of granulosa cells) exhibit a shear resistance index of 8 ± 2 (n = 26), while those stabilized in vitro in the presence of granulosa cells exhibit a shear resistance index of 20 ± 4 (n = 23). While the number of cumulus cells in either condition was not assessed, incubating COCs with granulosa cell-conditioned medium also resulted in an enhanced shear resistance index of 18 ± 4 (n = 11). This suggests a different cECM stabilization mechanism in vivo and in cultured COCs, despite the similarity in morphology of stabilized COCs of both preparations. The difference may be in part because of the participation of granulosa cells.
Since the molecular mass of ovarian HA is Ͼ2000 kDa (21), gel filtration HPLC should be able to distinguish native P␣I/I␣I from the protein-HA complex. Thus, to assess the interaction of HA and proteins of the I␣I family within the ECM of in vivo ovulated COCs, the ovulated COCs were subjected to 6.0 M guanidine HCl and 8% lauryl sulfobetaine (a reagent which would dissociate most non-covalent interactions (16)), prior to subjecting the sample to gel filtration HPLC. The results are summarized in Fig. 1. Fig. 1, A and B, are controls of native bovine P␣I and P␣I treated with chondroitinase ABC where the bimodal peak in panel B represents the heavy chain and the light chain fraction of the native protein as verified by Western blot (not shown). However, following guanidine HCl-lauryl sulfobetaine treatment, the majority of the I␣I immunopositive fraction was still present in the void volume (Fig. 1C). When COCs were treated with Streptomyces hyaluronidase, the immunopositive fraction shifted to a position as a broader peak that corresponds roughly to the naked heavy chain (panel D). Moreover, these results implied that most of the inter-␣-inhibitor immunopositive material in the in vivo stabilized cumulus ECM was covalently associated with HA.
This conclusion was strengthened by Western blot analysis of the ovulated COCs using antibody against native I␣I (Dako) as shown in Fig. 2. Lane 2 illustrates the relative positions of native mouse I␣I and P␣I on SDS-PAGE. These bands migrate at about 220 and 130 kDa, respectively. Lane 3 shows that this antibody recognizes both the heavy and light chains of I␣I on SDS-PAGE followed by Western blot. These bands migrate at about 100 and 50 kDa, respectively. Both P␣I and I␣I could be dissociated from in vitro stabilized COCs by treatment of the sample with SDS and 2-mercaptoethanol (Fig. 2, lane 4). Prior treatment of the same sample of in vitro stabilized COCs with Streptomyces hyaluronidase, however, did not release any more detectable native I␣I, P␣I, or heavy chain (Fig. 2, lane 5), indicating that the interaction of I␣I/P␣I with HA without granulosa cells is non-covalent in nature. In contrast, major I␣I components of in vivo stabilized COCs could not enter the gel upon treatment of the sample with SDS and 2-mercaptoethanol (lane 6) without prior hyaluronidase treatment. The Coomassie Blue staining of the transferred gel shows similar transfer efficiency in both lanes 6 and 7 (not shown). After Streptomyces hyaluronidase treatment prior to SDS and 2-mercaptoethanol of the in vivo stabilized COCs, however, prominent immunostaining was visible with a major component of I␣I at about 100 kDa (probably the heavy chain) and a minor component migrating at about 200 kDa (probably a double heavy chain; see Fig. 3, below). There is, however, a very small amount of immunopositive material corresponding to the heavy chains and the native P␣I in the sample that is not treated with hyaluronidase (lane 6, compare to lane 7). It may be that a small amount of heavy chain spontaneously falls off during extraction. If all of the immunopositive material shown in lane 7, following hyaluronidase treatment, is covalently linked with HA, the conversion from native protein to the covalently bound form is almost complete. Such a high degree of covalent linkage between heavy chains of the I␣I family and HA in an extracellular matrix is unprecedented.
The time course of incorporation of IαI heavy chains into the expanding HA-enriched cumulus ECM in vivo is shown in Fig. 3. During the time frame from 3 to 12 h after the hCG injection, only trace amounts of intact IαI/PαI become incorporated into the cECM as shown by Western blot (lanes 2-4) without prior treatment with hyaluronidase. However, a large amount of PαI/IαI heavy chains was incorporated into the matrix by about 6 h after injection of hCG, as revealed by treating the sample with Streptomyces hyaluronidase (lanes 5-7). There is no detectable incorporation of native protein or heavy chains of PαI/IαI during the first hour following hCG, regardless of whether the samples are treated with hyaluronidase or not (not shown). In addition, a band was again observed at about 200 kDa, which was suspected to be comprised of double heavy chains, possibly derived from IαI that forms a covalent linkage with a single HA molecule in such a way that it is protected from the action of hyaluronidase. Indeed, upon further treatment of the hyaluronidase-treated sample with NaOH (0.1 M for 10 min at room temperature followed by neutralization with 0.1 M HCl; a method previously shown to dissociate the heavy chains from O-linked carbohydrates as well as the ester bond that links the heavy chain of IαI with chondroitin sulfate (9, 18)), the 200-kDa band disappeared and only the single 100-kDa heavy chain band remained. Moreover, direct treatment of ovulated COCs with NaOH (without pretreatment with hyaluronidase, lane 15) also converts this 200-kDa band to a sharp 100-kDa band that corresponds to the position of the heavy chain of IαI. Taken together, these results support the possibility that a small fraction of the two heavy chains of IαI form a covalent linkage with the same HA molecule in close proximity, in a manner that is resistant to Streptomyces hyaluronidase or chondroitinase but sensitive to NaOH treatment.
The apparent covalent interaction observed in vivo within the ovulated cECM can be partially reproduced in vitro in the presence of granulosa cells or granulosa cell-conditioned medium. This system consisted of HA, purified IαI or PαI, and granulosa cells in MEM or granulosa cell-conditioned medium. As shown in Fig. 4A, only when granulosa cells or granulosa cell-conditioned medium are added into the reaction mixture will the system generate the free light chain of PαI and the heavy chain of PαI, which is released upon treating the sample with Streptomyces hyaluronidase (Fig. 4A, lanes 6-9). Heat-treated granulosa cell-conditioned medium is unable to facilitate the covalent binding of HA with heavy chain (Fig. 4A, lanes 4 and 5). The same experiments illustrated in lanes 6-9 of Fig. 4A were repeated and are illustrated in lanes 2-5 of Fig. 4B, but with a higher concentration of purified PαI and HA. As stated above, Western blot of medium extracts showed two bands corresponding to PαI and bikunin (Fig. 4B, lanes 2 and 4). Treatment of the samples with Streptomyces hyaluronidase prior to SDS and 2-mercaptoethanol treatment, however, revealed a prominent band corresponding to the heavy chain position (Fig. 4B, lanes 3 and 5). Purified bovine PαI displayed the same pattern of interaction with HA when incubated with granulosa cells or granulosa cell-conditioned medium (not shown). The identity of the 50-kDa band as free bikunin (light chain) of PαI was strengthened by using the bikunin site-specific antiserum in the Western blot. In this experiment, a sample of purified PαI was electrophoresed before (Fig. 4B, lanes 7 and 9) or after treatment with chondroitinase ABC to dissociate the light and heavy chains (Fig. 4B, lanes 8 and 10). The commercial anti-human IαI rabbit IgG recognizes the native protein (lane 7) and the dissociated heavy chain and light chain (lane 8). In contrast, the bikunin site-specific antiserum recognizes only native PαI (lane 9) and the light chain (lane 10). Lane 12 is a Western blot illustrating an experiment in which the anti-bikunin site-specific antiserum was used to detect the presence of bikunin-positive epitopes after a sample of PαI was incubated with granulosa cells and HA and then subsequently treated with Streptomyces hyaluronidase. As expected, the Western blot showed only the native protein band and the light chain; no heavy chain was detected. It should be noted that the migration patterns for IαI/PαI and bikunin are somewhat different in different Western blots in this figure because different percentages of polyacrylamide were utilized, as specified in the figure legend.
After treatment of native PαI with chondroitinase ABC, the bikunin (light chain) fraction always appeared as two closely migrating bands on Western blot (e.g. Fig. 2 and Fig. 4B). This pattern may reflect heterogeneity in the length of the chondroitin sulfate linkage in the native protein, which has been shown to vary from 16 to 21 polydisaccharide units (11, 12). Alternatively, it is possible that the polysaccharide unit that connects the heavy chain and light chain may not be composed exclusively of chondroitin sulfate, such that alternative cutting sites for chondroitinase may exist on the polysaccharide. This latter possibility is consistent with the present study showing that, upon forming the covalent linkage with HA, the released light chain-chondroitin sulfate complex migrates as a single band in Western blots (Fig. 4).
The amount of heavy chain binding covalently in our in vitro system is also dependent upon the concentration of exogenous HA (Fig. 5). The amount of native protein apparently binding covalently with HA progressively increases with increasing exogenous HA as judged by the progressive loss of native protein bands (lanes 3-6). In contrast, incubation of purified P␣I and HA with granulosa cells alone (Fig. 5, lane 2) or with exogenous HA alone in MEM (Fig. 5, lanes 7-9), does not generate any detectable heavy chain covalently bound to HA and the intensity of the native protein band is unaltered.
The physiological significance of the covalent interaction of the heavy chains of P␣I/I␣I with HA is not yet clear. However, further stabilization of the cumulus ECM seems to be achieved by this interaction as quantified indirectly by the shear-resistance assay in vitro. It should be pointed out that the efficiency of P␣I/I␣I incorporation into the cECM in vitro is much lower than that in vivo (Fig. 2, compare lanes 5 and 7). Since this low level of P␣I/I␣I incorporation occurs even at high serum concentration (10%), we are currently unable to adequately assess the status of the P␣I-HA complex in COCs stabilized in vitro when coincubated with granulosa cells. It is unlikely that the moderate enhancement of shear resistance of COCs by coincubating with granulosa cells results from recruitment of granulosa cells into the expanding COCs because the addition of granulosa cell-conditioned medium results in almost the same degree of enhancement. However, Salustri et al. (22) have shown that ovulated COCs have about 3 times more cumulus cells than those compact COCs expanded in vitro and that the origin of those extra cumulus cells occurs by recruitment of mural granulosa cells. Thus, the high shear resistance of in vivo stabilized COCs may occur as a consequence of recruitment of granulosa cells by the inner layer of the cumulus. Since granulosa cells possess the ability to catalyze the covalent interaction between heavy chains of P␣I/I␣I and HA, this may also lead to the packing and incorporation of large amounts of heavy chain into the expanding cECM. We speculate that the additional stabilization possibly achieved through the covalent interaction of heavy chains of P␣I/I␣I and HA, may provide elasticity required by the complex to maintain its integrity and to protect the oocyte during its extrusion from the ruptured follicle.
The current study also supports the likelihood that only the heavy chains of IαI and PαI form a covalent linkage with HA, while bikunin is concomitantly released into the culture medium, when they are incubated with granulosa cells or in granulosa cell-conditioned medium. It is plausible to speculate that this process may be catalyzed or assisted by a factor(s) secreted by granulosa cells in response to an ovulatory stimulus. We have postulated that this conversion may involve an enzyme (esterase) synthesized by granulosa cells which catalyzes a transesterification between HA and chondroitin sulfate at the junction between the carboxyl end of the heavy chain of IαI and chondroitin sulfate, where chondroitin sulfate serves as the linker between the heavy chain and light chain (23). Indeed, Huang et al. (16) originally proposed that an enzyme(s) in serum could catalyze the exchange of HA with the chondroitin sulfate moiety of IαI based upon the observation that incubation of serum with HA generates a heavy chain that is covalently linked with HA. In our system, however, overnight incubation of either purified IαI/PαI or serum (both mouse and bovine) with various amounts of HA alone could not generate any detectable heavy chain-HA complex or free bikunin. While the proportion of heavy chains covalently binding with HA after incubation in serum was not reported (16), conversion from a charge-mediated interaction between PαI/IαI and HA to covalent binding of the heavy chain and HA in in vivo matured COCs is virtually complete. Serum may, however, contain a very low level of the hypothetical enzyme capable of catalyzing this charge-mediated to covalent conversion. It will be interesting to determine whether or not long-term incubation of purified IαI and HA can spontaneously generate a low level of heavy chain-HA complexes.

Figure 5 legend: All the samples consisted of 20 μg/ml purified mouse PαI in MEM medium with or without 6 × 10^6/ml granulosa cells. After incubation at 37°C and 5% CO2 overnight, the mixtures were centrifuged at 200 × g to remove granulosa cells. SDS-PAGE sample buffer was then added to the supernatant, followed by Western blot using antibody against human IαI. Lane 1, high molecular weight standards. Lanes 2-6, PαI incubated with granulosa cells and increasing amounts of HA (0, 0.1, 0.5, 1.0, and 2.0 mg/ml). Lanes 7-9, PαI incubated without granulosa cells and with increasing amounts of HA (0.5, 1.0, and 2.0 mg/ml). Note the progressive increase in the apparent HC-HA smear above the native PαI band and the concomitant progressive increase of the light chain at about 50 kDa.
More recently, Zhao et al. (24) have found that I␣I heavy chain-HA complexes isolated from pathological synovial fluid form an ester bond with the C-terminal Asp residues of I␣I heavy chain. This finding and the present study are consistent with the transesterification model of covalent binding of the heavy chain and HA involving exchange of HA and the chondroitin sulfate component of I␣I as depicted in Fig. 6. This model predicts that formation of the covalent linkage with the I␣I heavy chain results in release of the chondroitin sulfate moiety of the native molecule along with bikunin. This was partially confirmed by radiolabeling the chondroitin sulfate moiety of the native I␣I and chasing this component into the "freed" bikunin fraction after formation of the covalent interaction between heavy chains and HA in vitro which occurs in the presence of granulosa cell-conditioned medium (Fig. 7). Equally supportive of the authenticity of this postulated interaction, however, is the finding that the heavy chain is absent (Fig. 7, lane 3) following treatment of the medium with hyaluronidase. This result indicates that the release of chondroitin sulfate is at least a necessary step in the formation of the HA-heavy chain complex. Further experiments are necessary to determine whether heavy chains of I␣I can independently form an equivalent covalent linkage with HA. | 2018-04-03T05:21:58.731Z | 1996-08-09T00:00:00.000 | {
"year": 1996,
"sha1": "f99172f7650dc0bfa548b2375c6bb25bc3aff4fb",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/271/32/19409.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "130d023f68cd2a16bd2cc1dd9c8792b59d63b442",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
26561514 | pes2o/s2orc | v3-fos-license | Specific heat, Electrical resistivity and Electronic band structure properties of noncentrosymmetric Th7Fe3 superconductor
Noncentrosymmetric superconductor Th7Fe3 has been investigated by means of specific heat and electrical resistivity measurements and electronic structure calculations. A sudden drop in the resistivity at 2.05 ± 0.15 K and a specific heat jump at 1.98 ± 0.02 K are observed, signalling the superconducting transition. A model of two BCS-type gaps appears to describe the zero-magnetic-field specific heat better than models based on the isotropic BCS theory or anisotropic gap functions. A positive curvature of the upper critical field H_c2(T_c) and a nonlinear field dependence of the Sommerfeld coefficient at 0.4 K qualitatively support the two-gap scenario, which predicts H_c2(0) = 13 kOe. The theoretical densities of states and electronic band structures (EBS) around the Fermi energy show a mixture of Th 6d- and Fe 3d-electron bands, which are responsible for the superconductivity. Furthermore, the EBS and Fermi surfaces disclose significantly anisotropic splitting associated with asymmetric spin-orbit coupling (ASOC). The ASOC also sets up a multiband structure, which presumably favours multigap superconductivity. The Electron Localization Function reveals the existence of both metallic and covalent bonds, the latter having different strengths depending on whether the region is close to the Fe or the Th atoms. The superconducting and electronic properties and the implications of the asymmetric spin-orbit coupling associated with the noncentrosymmetric structure are discussed.
The specific heat was modelled as the sum of an electronic term C_el(T) and a phonon term C_ph(T) combining Debye and Einstein contributions (a sketch of the standard form is given below), where R is the molar gas constant, n_D and n_E are dimensionless numbers of Debye-type and Einstein-type vibrators, while Θ_D^HT and Θ_E are the high-temperature Debye and Einstein temperatures, respectively. We can justify the presence of optical modes by plotting (C_p − γ_N T)/T^3 vs. T, as depicted in Fig. 1(b), where γ_N = 52.7 mJ/mol K^2 is the normal-state Sommerfeld coefficient (see below). One can see a broad maximum at approximately 12 K, which is surely caused by excess low-frequency vibrations, giving rise to a deviation of the specific heat from the Debye model. The best fit of the experimental data with C_el(T) + C_ph(T) yields γ = 5 mJ/mol K^2, n_D = 8.5, n_E = 1.5, Θ_D^HT = 215 K and Θ_E = 55 K. We must concede, however, that the fit (solid line in Fig. 1(a)) does not reproduce the temperature dependence of C_p/T around 35 K correctly. The discrepancy between the experimental and theoretical data calls for a more complex phonon density of states than that considered here.
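The displayed fitting expression is not reproduced in this text; the standard combined Debye-Einstein form to which the quoted parameters (R, n_D, n_E, Θ_D^HT, Θ_E) presumably refer is

\[
C_{\mathrm{ph}}(T) = 9\, n_D R \left(\frac{T}{\Theta_D^{HT}}\right)^{3} \int_0^{\Theta_D^{HT}/T} \frac{x^{4} e^{x}}{(e^{x}-1)^{2}}\, dx
\;+\; 3\, n_E R \left(\frac{\Theta_E}{T}\right)^{2} \frac{e^{\Theta_E/T}}{\left(e^{\Theta_E/T}-1\right)^{2}},
\qquad C_{\mathrm{el}}(T) = \gamma T .
\]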
In Fig. 1(c) we present the temperature dependence of the electrical resistivity ρ(T) measured at 0 and 9 T. The zero-field resistivity has a value of ρ = 268.3 μΩ cm at room temperature and 62.7 μΩ cm at 2.1 K, resulting in a residual resistivity ratio of 4.28. We have fitted the resistivity in the temperature range 2-40 K using the simple form ρ(T) = ρ_0 + AT^2, composed of the residual resistivity ρ_0 and an electron-electron scattering contribution AT^2. The fitting results, with ρ_0 = 62.7 μΩ cm and A = 0.042 μΩ cm/K^2, are shown by the solid line. In the normal state, ρ(T) of Th7Fe3, in a similar manner to that observed in Th7Co3 [14], exhibits an unusual temperature dependence compared with ordinary metallic alloys. In fact, ρ(T) shows a curved temperature dependence: with increasing temperature the resistivity increases more slowly than predicted by the Bloch-Grüneisen theory [19], which accounts for acoustic phonons. Above 100 K, ρ(T) bends downward, showing a tendency towards saturation. A downward turn of ρ(T) at high temperatures was found previously in different classes of compounds, e.g., A15-type and Chevrel-phase superconductors, 3d and 5d transition metals, high-T_c cuprates [20], and some strongly correlated electron systems (SCES) [21]. Unfortunately, there is no generally accepted theory of resistivity saturation for all materials. According to the considerations of Gunnarsson et al. [20], the resistivity of weakly correlated metals may saturate when the inelastic mean free path approaches the lattice spacing, known as the Ioffe-Regel limit [22]. On the other hand, the saturated resistivity in SCES can be understood on the basis of the Rivier-Zlatic model developed for electron scattering by spin fluctuations at temperatures above the spin-fluctuation temperature T_sf [23]. For Th7Fe3 and Th7Co3 the shape of the ρ(T) curve and the high value of the resistivity at room temperature would be consistent with a strong coupling of conduction electrons to fluctuating d-electron spins.
The resistivity in the temperature range 0.4-3.0 K is displayed in the inset of Fig. 1(c). Evidently, ρ(T) shows a sharp drop at 2.1 K and vanishes at T_c = 1.95 K, revealing the transition into the superconducting state. Using a 50% normal-state resistivity criterion, the critical temperature is estimated as 2.05 K. The transition width ΔT_c, defined as the difference in temperature between 10% and 90% of the normal-state resistivity at the transition, is 0.15 K. We note that our experimental T_c is the same as that in ref. [17] but is a little higher than the previously reported T_c = 1.86 K [16].
A strong proof of bulk superconductivity in Th7Fe3 is the specific heat jump at zero field shown in Fig. 2(a). The critical temperature is taken as the position of the half height of the C_p/T jump, T_c = 1.98 ± 0.02 K. We calculated the specific heat jump ΔC_p(T = T_c) as the difference between C_p at T_c and the normal-state specific heat (illustrated by the dashed line in Fig. 2(a)). The normalized jump ΔC_p/(γ_N T_c) amounts to 1.21, substantially larger than the value of 1.01 in Th7Co3 [14]. We note that the observed specific heat jump in both these compounds is much reduced compared with the BCS value of 1.43 [24]. To estimate the electronic contribution to the specific heat we considered the 10 kOe C_p/T data (closed squares) for T < T_c and the zero-field C_p/T data (open circles) for T > T_c. Least-squares fitting of the experimental C_p data with a sum of an electronic γ_N T and a lattice βT^3 contribution yields the Sommerfeld coefficient γ_N = 52.7(1) mJ/mol K^2 and the Debye constant β = 5.51 mJ/mol K^4. The best fitting result is shown by the solid line in Fig. 2(a). Using the experimental Sommerfeld coefficient γ_N, the band-structure density of states at the Fermi level N(E_F) can be deduced from the relation γ_N = (π^2/3) k_B^2 N_A N(E_F), where k_B is Boltzmann's constant and N_A is Avogadro's number; N(E_F) is found to be 22.36 st./eV f.u. From the β value, and taking into consideration the relation Θ_D^LT = (12π^4 nR/5β)^{1/3}, where n = 10 is the number of atoms per formula unit and R is the molar gas constant, we calculated a low-temperature Debye temperature Θ_D^LT = 152.2 K. Having the critical temperature T_c and Debye temperature Θ_D^LT, the electron-phonon coupling constant λ_el-ph can be evaluated using McMillan's equation [25], which expresses λ_el-ph in terms of T_c, Θ_D and the Coulomb pseudopotential μ*. According to the BCS description of the electronic specific heat, the superconducting energy gap Δ_0 enters through an exponential dependence of the form C_es/(γ_N T_c) = A exp(−Δ_0/k_B T), where A is a constant [26]. In order to check this prediction of the BCS theory, one should plot the normalized electronic specific heat (C_p − βT^3)/(γ_N T_c) on a log scale vs. the inverse temperature, 1/T. For Th7Fe3 such a plot is shown in Fig. 2(b). Apparently, a straight line cannot describe the data between 0.4 K and T_c, and this observation allows us to propose that the superconductivity in Th7Fe3 is not of the classic isotropic s-wave BCS type. Adopting the same treatment of the data as was previously utilized for the closely related Th7Co3 compound, we fitted the specific heat data using two models of non-isotropic gap structure: (a) a two-gap model and (b) an anisotropic gap model. In the two-gap model, the electronic specific heat is assumed to be the sum of two contributions with different gap values (Δ_1, Δ_2) and electronic specific heat coefficients (γ_1, γ_2). The electronic specific heat data of the Th7Fe3 superconductor were fitted with this two-gap expression (Eq. 7; see the fitting sketch below). In the fittings, γ_N = 52.7 mJ/mol K^2 and T_c = 1.98 K were kept constant, and we obtained the best fit with Eq. 7 for the parameters A = 10.82, Δ_1/k_B = 3.22 K, Δ_2/k_B = 0.75 K and x = 0.985. The result of the fit is illustrated by the solid line in Fig. 2(b and c). Within the anisotropic gap scenario, we examined the superconducting-state electronic specific heat employing the same equations as were used for Th7Co3 [14]. However, fitting the experimental data of Th7Fe3 to the anisotropic gap model did not give a satisfactory result.
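Since the displayed form of Eq. 7 is not reproduced in this text, the sketch below illustrates how such a two-gap fit is commonly set up, assuming the usual sum of two BCS-type exponential terms weighted by a fraction x. The function name, the synthetic data, and the exact functional form are illustrative assumptions, not the paper's own code or equation.

    import numpy as np
    from scipy.optimize import curve_fit

    GAMMA_N, TC = 52.7, 1.98  # mJ/(mol K^2), K -- kept fixed as in the text

    def two_gap_ces(T, A, d1, d2, x):
        """Assumed two-gap form: weighted sum of two BCS-type exponentials."""
        return GAMMA_N * TC * A * (x * np.exp(-d1 / T) + (1.0 - x) * np.exp(-d2 / T))

    # Synthetic electronic specific heat standing in for (Cp - beta*T^3), illustrative only.
    T = np.linspace(0.4, 1.9, 40)
    rng = np.random.default_rng(1)
    c_es = two_gap_ces(T, 10.8, 3.2, 0.75, 0.985) * (1 + 0.02 * rng.normal(size=T.size))

    popt, _ = curve_fit(two_gap_ces, T, c_es, p0=[10.0, 3.0, 1.0, 0.9],
                        bounds=([0, 0, 0, 0], [50, 10, 10, 1]))
    A, d1, d2, x = popt
    print(f"A = {A:.2f}, Delta1/kB = {d1:.2f} K, Delta2/kB = {d2:.2f} K, x = {x:.3f}")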
We calculated the thermodynamic critical field H_c(T) from the condensation free energy according to μ_0 H_c^2(T)/2 = ΔF(T)/V, where μ_0 is the magnetic constant, V is the unit cell volume, and ΔF(T) = ΔU(T) − TΔS(T). The variation of the internal energy ΔU(T) can be obtained by integrating the difference between the specific heat in the superconducting state, C_s(T), and in the normal state, C_n(T), ΔU(T) = ∫_T^{T_c} [C_s(T') − C_n(T')] dT', while the variation of the entropy ΔS(T) is obtained via the difference of the entropies in the normal and superconducting states, ΔS(T) = ∫_T^{T_c} [C_s(T') − C_n(T')]/T' dT'. The calculated temperature dependencies of the internal energy ΔU(T), the entropy multiplied by temperature TΔS(T), and the free energy ΔF(T) are shown in Fig. 3(a), while H_c(T) is shown in Fig. 3(b). In order to evaluate H_c(0) at 0 K we used a Taylor expansion of the thermodynamic critical field H_c(T) [27]; the parameters obtained in this way yield a superconducting gap Δ_0/k_B = 2.43 K, which is much smaller than the value Δ_1/k_B = 3.22 K found above. Another noticeable feature of the superconductivity in Th7Fe3 is the behaviour of the deviation function D(t) = H_c(T)/H_c(0) − (1 − t^2), with t = T/T_c. As seen from Fig. 3(c), the deviation function lies below the BCS curve, thus the electron-phonon coupling in the studied compound is weak. Low-temperature specific heat data for Th7Fe3 in several magnetic fields up to 10 kOe are plotted as C_p/T vs. T^2 in Fig. 4(a). An increasing applied field causes a broadening of the superconducting transition and lowers the C_p/T jump at T_c. One recognizes that the suppression of superconductivity is accompanied by a steady increase of the C_p/T ratio at 0.4 K. We determined the dependence of the upper critical field H_c2 on T_c, as illustrated by the dashed line. The obtained H_c2(T_c) and the field dependence of C_p/T at 0.4 K are shown in Fig. 4(b and c), respectively. The slope dH_c2/dT near T_c was found to be approximately −3.96 kOe/K. Using the Werthamer-Helfand-Hohenberg (WHH) formula for a dirty type-II superconductor, H_c2(0) = −0.693 T_c (dH_c2/dT)|_{T_c}, we estimated the zero-temperature upper critical field H_c2(0) = 5.4 kOe. The H_c2(T_c) curve over the whole temperature range 0−T_c can be simulated using the full WHH expression written in terms of digamma functions [28], in which α is the Maki parameter and λ_so is the spin-orbit scattering constant. For α = 0.21 and λ_so = 10 we obtained the dotted line, which represents the best description of the WHH model to the experimental data. Unfortunately, as can be seen in the figure, the WHH model fails to describe the H_c2(T_c) dependence of Th7Fe3. In fact, the theoretical WHH values are very significantly underestimated as compared with the experimental ones.
A greater value of the zero-temperature upper critical field H_c2(0) can be obtained with the help of the Maki theory [29] (Eq. 15), which is deduced from the well-known relation H_c2(0) = Φ_0/(2πξ_GL^2), where Φ_0 is the magnetic flux quantum and ξ_GL is the Ginzburg-Landau coherence length. The fit of Eq. 15 to the experimental data is shown by the solid line in Fig. 4(b). However, the GL model is insufficient to reproduce the upward curvature of the experimental H_c2(T_c) data below 1.2 K. There are several possible reasons for an enhancement and concave-upward behaviour of H_c2(T_c) [31], including twisting of electron orbits by a magnetic field [32], dimensional crossover [33] and a multi-gap structure [34]. The first mechanism was considered by Lebed [32] for low-dimensional organic superconductors, in which the twisting of electron orbits by a magnetic field was assumed to be important. It was noted that the upward curvature of H_c2(T_c) is expected to occur below a characteristic temperature T* < T_c and only for fields applied in the plane. Our measurements were conducted on polycrystalline samples and the studied compound is a 3D material; therefore the low-dimensional effect has nothing to do with the observed anomaly of H_c2(T_c). The mechanism based on a multiple-gap structure has previously been invoked for several classes of superconductors, including FeAs-based ones [31,38]. Assuming that the superconductivity in Th7Fe3 is of a two-band nature, we are able to simulate the H_c2(T_c) dependence (dashed line in Fig. 4(b)) using the two-band formula developed by Gurevich [34] (a sketch of the assumed form is given below), where a_0, a_1 and a_2 are parameters associated with the intraband (λ_11, λ_22) and interband (λ_12, λ_21) couplings, η = D_2/D_1 is the ratio of the band diffusivities and U(x) = ψ(1/2 + x) − ψ(1/2) is a difference of digamma functions. We must admit that, although the agreement between the experimental and theoretical data seems satisfactory, the reliability of the obtained fitting parameters remains questionable since the fit involves a large number of free parameters. Nonetheless, the extrapolated zero-temperature upper critical field H_c2(0) = 13 kOe, corresponding to ξ_GL = 15.9 nm, seems reasonable since ξ_GL has the same order of magnitude as the value found from the dirty-limit expression of Ref. 39, which relates ξ(0) to the volumetric Sommerfeld coefficient γ_V and the normal-state resistivity ρ_n. Using γ_V = 3361.43 erg/(cm^3 K^2) and the normal-state resistivity ρ_n = 62.7 × 10^−6 Ω cm, we obtained ξ(0) = 13.2 nm.
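The displayed Gurevich expression is not reproduced in this text. For orientation, the standard parametrization of the dirty-limit two-band H_c2 equation (Gurevich, 2003), which is presumably the form used here, reads

\[
a_0\bigl[\ln t + U(h)\bigr]\bigl[\ln t + U(\eta h)\bigr]
+ a_1\bigl[\ln t + U(h)\bigr]
+ a_2\bigl[\ln t + U(\eta h)\bigr] = 0,
\qquad
h = \frac{\hbar D_1 H_{c2}}{2\phi_0 k_B T}, \quad t = \frac{T}{T_c},
\]

with U(x) = ψ(1/2 + x) − ψ(1/2) and η = D_2/D_1 as defined in the text; the coefficients a_0, a_1, a_2 encode the intraband and interband coupling constants.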
Yet, we can evaluate the Ginzburg-Landau penetration depth λ_GL from the values of the upper and thermodynamic critical fields, using the relation H_c2 = √2 κ H_c with κ = λ_GL/ξ_GL.

Theoretical results.
In the left-side panel of Fig. 5 we depict the total and interstitial DOS of Th7Fe3. The data were obtained from spin-polarized calculations within the fully relativistic (FR) approximation. In the figure we observe no spin-polarization effect, thus implying a non-magnetic ground state of Th7Fe3, even in the presence of spin-orbit interaction. This finding is in agreement with the experimental data collected down to 0.4 K. The total DOS closely resembles that from the FP-LMTO calculations [15], with respect to both the DOS value at the Fermi energy E_F and the DOS features below E_F. Here, N(E_F) amounts to approximately 20 st./(eV f.u.) and there is a peak structure at around −1 eV, akin to the Van Hove singularity often observed in superconductors. The relative contributions from the muffin-tin spheres and the interstitial region to the total DOS can be evaluated by comparing the calculated values of the total and interstitial DOS. If we focus on the data around E_F, we see a sizeable contribution from the interstitial region, so the overlap of orbitals is expected to be essential. Obviously, the main contribution to the total DOS below 0.5 eV comes from the muffin-tin spheres, where the orbitals around the atoms are atomic-like. In the right-side panel of Fig. 5 we show the partial DOS calculated for one spin direction. We perceive that the contributions of the Fe 3d- and Th 6d-electron orbitals at E_F are almost equal and that they dominate the DOS. The DOS derived from the remaining orbitals is negligible; thus we expect the mixture of 3d and 6d orbitals to play an important role in the superconductivity of Th7Fe3. In Fig. 6(a and b) we compare the electronic band structures (EBS) obtained without and with spin-orbit coupling. Evidently, the calculation without spin-orbit coupling conveys a fairly clear electronic structure with several bands crossing E_F. This feature, together with the fairly flat and closely lying bands in the energy range 1.5-0.5 eV below E_F (not shown here), reflects the behaviour of the N(E) curve (see Fig. 5a). Looking at Fig. 6(a) we can see that band 1 (green) crosses E_F along the directions A − Γ, A − L and A − H. Bands 2 (blue), 3 (red) and 4 (olive) have hole structure at both the A and Γ points. Thus, without spin-orbit coupling, hole bands dominate in the compound. When the spin-orbit coupling is included in the calculations, the electronic band structure becomes more complex, and as many as six bands, with bandwidths of 0.303-0.385 eV, crossing E_F can be recognized. We remark that the splitting into spin-polarized bands is highly anisotropic in momentum space: the band structure along the directions Γ − M, H − K and K − Γ exhibits a very strong splitting, in comparison with that along A − Γ, which completely lacks splitting. This behaviour reflects a combined outcome of relativistic effects and the ASOC. It is worthwhile to highlight that the spin-orbit coupling rearranges the band energy levels. The energies of bands 1-2 and 3-4 (Fig. 6b), as compared with those of bands 1 and 2 (Fig. 6a), respectively, become lowered. On the other hand, bands 5 and 6 (Fig. 6b) are only weakly changed with respect to band 3 (Fig. 6a). In contrast, band 4 (Fig. 6a) is pushed upwards and no longer crosses E_F (Fig. 6b).
To gain insight into the contributions of the 3d- and 6d-electrons, the orbital-projected band structures of the Fe and Th atoms are shown in Fig. 6(c and d), respectively. At first glance, the overall features of the 3d- and 6d-electron band structures are similar. This observation indicates a robust mixture of the 3d and 6d orbitals in the energy range around E F . There are differences between the projected weights, which are distinctly larger for the 3d orbitals and may suggest that the 3d-electrons are more localized. An inspection of Fig. 6(c and d) reveals that three kinds of electronic bands exist near the Fermi level. Two bands, denoted as 1 and 2, are hole-like at the A point but electron-like around the Γ point. These bands account for the metallic nature of the compound. Two other bands crossing E F , denoted as 5 and 6, have hole-like properties around both the A and Γ points. The remaining two bands, denoted as 3 and 4, have both hole- and electron-like character at Γ. Clearly, the ASOC induces two types of carriers by lowering the energy levels of particular bands, in contrast to the dominance of holes in the case without SOC. We believe that the multiband structure induced by the ASOC associated with the lack of inversion symmetry possibly entails multiple-gap superconductivity in the studied material.
Fermi surfaces (FS) in the first Brillouin zone of the six bands crossing the Fermi energy are presented in Fig. 7. The labels a and b correspond to the FS viewed from the top and in 3D form, respectively. We would like to emphasise that the FS's shown in Fig. 7(1), (3) and (5) are similar to those from SR calculations (not shown here), though there are some noticeable differences due to spin-orbit coupling. For example, for the FS in Fig. 7(1), the pocket at the K point becomes split into two pockets around this point. Then, for the FS in Fig. 7(3), the hole-like band around the Γ point in the SR approach turns into an electron-like one in the FR calculation. Finally, for the FS in Fig. 7(5), we see that the six tubes along the A − Γ direction become progressively more slender. Obviously, the FS's shown in Fig. 7(2), (4) and (6) do not appear in the SR calculations; thus, the split FS properties must be associated with SOC. It should be kept in mind that crystal symmetry plays a role in the formation of the FS's; in particular, it may affect the properties of individual FS sheets and the anisotropy of the FS's. For Th 7 Fe 3 , we discern that the FS's viewed from the top are essentially symmetric, while the FS's in all planes containing the A − Γ line are highly anisotropic.
Since information about the Electron Localization Function (ELF) topology is important for understanding the bonding nature of materials 40 , we have calculated the ELF. The crystal unit cell together with the ELF isosurfaces cutting through the Th and Fe atoms is shown in Fig. 8(a). The visualizations of the ELF in the (001)-, (010)- and (110)-planes are depicted in Fig. 8(b-d), respectively. We would like to draw attention to the topological differences between the regions at the Th and at the Fe atoms. The ELF of the Fe atoms is characterized by peaked maxima and is almost spherically symmetric. The high values at these maxima, of about 0.78 in Fig. 8(c) and 0.82 in Fig. 8(d), evidence that electrons around the Fe cores are strongly paired and act as attractors 41 . On the other hand, the ELF of the Th atoms exhibits a broader peak with a relatively low value of about 0.7, but this value still indicates covalent bonding. The observed difference in the ELF values of the Th and Fe core regions certainly manifests the different strengths of the covalent bonds. Surprisingly, the ELF maximum of the Th cores is found inside an external wall; consequently, the ELF around the Th atoms has an anisotropic, extended, volcano-like shape. It is noticed that the ELF values of the external walls are approximately 0.5-0.6, suggesting a region of delocalized electrons. Thus, the distribution of ELF values in Th 7 Fe 3 indicates a change in the bonding properties, from strongly to more weakly covalent, and to metallic character.
Conclusions
In summary, we measured the specific heat and electrical resistivity as well as performed electronic band structure calculations using the FP-LAPW method for the hexagonal, noncentrosymmetric compound Th 7 Fe 3 . The measurements reveal that the studied material is a weakly electron-correlated superconductor with a superconducting phase transition at 1.98 ± 0.05 K. In particular, the anomalous behaviour observed in C el (T)/T, γ(H) and H c2 (T c ) provides evidence for the existence of two superconducting energy gaps. Based on the experimental data we also determined some fundamental thermodynamic parameters, which are gathered in Table 1.
The electronic band structure calculations support a non-magnetic ground state of the superconductor. The theoretical partial DOS at E F imply nearly equal contributions of the Fe 3d-electrons and Th 6d-electrons to the total DOS. The mixture of these d-electrons is conjectured to be responsible for the superconductivity in Th 7 Fe 3 . There are six bands crossing the Fermi level, and the Fermi surfaces are ascribed to two bands that are hole-like at the A point but electron-like around the Γ point, two bands that are hole-like at the A point and both hole- and electron-like at the Γ point, and two hole-like bands around both the A and Γ points. The two observed types of charge carriers are affected by the ASOC through a lowering of the band energies as compared with those without SOC. This suggests that the multiband structure may be closely related to the two-gap superconductivity in the studied material. The distinct differences in both the EBS and the FS's obtained without and with SOC reflect the considerable effect of band splitting. The strongly anisotropic properties of the EBS, FS's and ELF are ascribed to the ASOC associated with the noncentrosymmetric structure. With the aid of the ELF data, we examined the bonding nature of Th 7 Fe 3 . It was found that different ELF values correspond to different characters of bonding. In addition to metallic bonds, strongly covalent bonds were found around the Fe atoms and somewhat weaker ones around the Th atoms. We think that the observed experimental and theoretical properties of Th 7 Fe 3 may be beneficial in the context of comparative investigations of noncentrosymmetric superconductors without strong electron correlation effects.
Methods
A polycrystalline sample of Th 7 Fe 3 was prepared from the pure elements Th (99.8%) and Fe (99.99%). A two-step synthesis was carried out by arc-melting under a Ti-gettered purified argon atmosphere. First, the Th was melted separately, and impurities on the surface of the melted button were removed by mechanical cleaning and nitric acid etching. Next, a mixture of Th and Fe in the stoichiometric ratio 7:3 was remelted several times to ensure homogeneity. The as-cast Th 7 Fe 3 specimen was wrapped in tantalum foil, sealed into an evacuated quartz tube and annealed at 800 °C for two weeks. The quality of the Th 7 Fe 3 sample was checked using powder X-ray diffraction (XRD) at room temperature, utilizing an X′Pert PRO diffractometer with monochromatized CuK α radiation (λ = 1.5406 Å) in the 2θ range of 10-90°. The observed Bragg peaks in the XRD pattern indicate that the studied sample is highly homogeneous and crystallizes in its own hexagonal structure type with the space group P6 3 mc. We were able to index all observed Bragg reflections with the lattice parameters a = b = 0.9849 nm and c = 0.6198 nm, comparable to those previously reported 16,42 . It is recalled that the crystal unit cell can be characterized by three atomic positions for the thorium atoms, with Th 1 and Th 2 located at two (6c) positions and Th 3 at the (2b) position, and one (6c) position for the iron atoms. Specific heat C p (T) and electrical resistivity ρ(T) measurements were carried out in a Quantum Design PPMS with a 3He option in the temperature range 0.4-400 K and in magnetic fields up to 2 T. The C p (T) data were collected using the relaxation-time technique and the two-tau model. The heat capacity of the sample platform with a very small quantity of Apiezon N cryogenic grease was measured prior to the C p measurement. The given values of the specific heat have an uncertainty of less than 5%. The ρ(T) data were measured using the standard ac four-probe method, applying an alternating current of 1 mA with a frequency of 47 Hz. The gold wires used as electrical contacts were bonded with silver paste. The error in the reported resistivity is about 10%, mainly due to the presence of micro-cracks in the sample.
Theoretical results, including electronic band structures, densities of states, Fermi surfaces and the electron localization function, were obtained from density functional theory (DFT) calculations using the all-electron Full-Potential Linearized Augmented Plane Wave (FP-LAPW) method as implemented in the ELK code, available under the GNU General Public License 43 . The parametrization given by Perdew et al. 44,45 was used for the exchange-correlation potential within the Generalized Gradient Approximation (GGA). Muffin-tin radii of 2.918 a.u. and 2.334 a.u. were used for the Th and Fe atoms, respectively. This corresponds to a total number of core states of 1012, a total number of valence states of 566 and a total number of local orbitals of 360. We computed the total energy using an 8 × 8 × 12 Brillouin zone (BZ) mesh, while in the Fermi surface calculations we used a 60 × 60 × 60 mesh. The self-consistent field cycles were iterated until the total energy was stable to within 1 meV. The calculations were conducted using relativistic approaches without and with spin-orbit coupling. For the latter treatment, we also included spin polarization to look for possible spontaneous magnetization. The electronic band structure was calculated along the high-symmetry A − Γ − M − L − A − H − K − Γ lines. | 2018-04-03T01:07:29.439Z | 2017-11-17T00:00:00.000 | {
"year": 2017,
"sha1": "698171a63bbeb2f729b179e3090039579c404b09",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-15410-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a2949ed372a4755eae40126d73ccfa0bb6f630a8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
78530676 | pes2o/s2orc | v3-fos-license | Knowledge and Awareness about Colorectal Cancer and Its Screening Guidelines among Doctors in Al Ahsa, Eastern Province, Kingdom of Saudi Arabia
Introduction: Cancer is a major public health problem. Worldwide, colorectal cancer (CRC) is a leading cause of deaths due to cancer in both men and women. Among Saudi men, CRC is the most common malignancy, while it is the third most common among Saudi women. Over two decades, the incidence of and deaths due to CRC have been steadily increasing in Saudi Arabia. Regular and timely screening has the potential to reduce the incidence and deaths due to colorectal cancer. The present study was conducted to evaluate the knowledge and awareness about colorectal cancer and its screening among doctors. Objectives: To measure the frequency of knowledge and awareness about colorectal cancer and its screening guidelines among doctors in Al-Ahssa. Methods: A questionnaire-based survey of doctors (specialists and residents) working in different hospitals and primary health centers under the Ministry of Health in the Al Ahssa region, Eastern province, KSA. Knowledge and awareness about colorectal cancer and its screening among the doctors were evaluated. Results: Over 80% of the doctors knew that screening reduces deaths due to CRC. Only 60% were aware of the risk factors, and less than 50% knew the clinical features of CRC. About 60% of the doctors agreed that colonoscopy is the gold standard screening test, while less than 60% knew the ideal age to initiate screening and the recommended intervals of screening tests in the standard-risk and high-risk populations. Fewer than 25% of the doctors were aware of the American Cancer Society recommended screening guidelines. The majority of the doctors expressed keen interest in knowing and receiving information about CRC and its screening guidelines. Conclusions: Regular and timely screening reduces deaths due to CRC. There is a need to improve the knowledge and awareness of doctors about CRC and its screening. Awareness among doctors improves the uptake of screening by the general and high-risk populations.
Introduction
Cancer is a major public health problem. Worldwide, colorectal cancer (CRC) is one of the leading cancers in both men and women (Ferley, 2013; Bernard, 2014). In the United States, colorectal cancer is the third most common cancer in both men and women (Seigel, 2014). As per the Saudi cancer registry 2010, CRC is the most common malignancy among Saudi men and the third most common in women (Ministry, 2014; Eid, 2007). Over the past two decades, a steady increase in the incidence and deaths due to CRC has been reported from Saudi Arabia (Mosli, 2012; Ibrahim, 2008). The majority of patients had advanced-stage disease, with a survival rate of 44.6%, which is lower than the survival rates reported elsewhere in the world (Elsamany, 2014). Further, nearly half of the cases were diagnosed in individuals less than 50 years of age, with a mean age of 58 years, which is lower than reported from developed countries (Aljebreen, 2007; Ibrahim, 2008). Colorectal cancer, with its high incidence and a long interval between the appearance of polyps and frank carcinoma, is an ideal tumor for screening. Regular and timely screening helps in the detection and removal of both precancerous lesions and early-stage cancer, thereby reducing both the incidence and deaths due to CRC (Winawer, 1997; Burt, 2010). Several studies from Saudi Arabia and elsewhere have stressed the need for preventive measures and early detection of the disease (Almurshed, 2009; Umer, 2009). Recently, a downward trend in both the incidence and mortality from CRC has been reported from the United States. This positive outcome may be attributed to the widespread implementation of screening programs along with a better understanding of the pathogenesis and advances in treatment (Edwards, 2006). The current ACS guidelines for CRC screening include fecal occult blood testing (FOBT) annually, sigmoidoscopy every 5 years and colonoscopy once every 10 years, all starting at age 50 years (Levin, 2008). The knowledge, awareness and beliefs of doctors about CRC and its screening are major factors motivating the general population to undergo screening (Sheih, 2005; Gennarelli, 2005). Considering the important role of the doctors (specialists and residents) in screening activities, the present study was conducted to prospectively evaluate the knowledge and awareness about CRC and its screening programs among doctors working in different hospitals and primary health care centers in the Al Ahssa region, Eastern province of the Kingdom of Saudi Arabia.
Method
Ours is a cross-sectional study. Between December 2014 and September 2015, a total of 160 doctors employed in different hospitals and primary health centers under the Ministry of Health in the Al Ahssa region, eastern province, KSA, were evaluated. A questionnaire about CRC and its screening guidelines was given to the doctors by hand. The questionnaire was pilot tested on five internists for content validity. The questionnaire was distributed to the doctors after seeking permission from the hospital authorities. A group of medical students involved in the research handed over the questionnaire to the doctors during their free time in the hospital. The survey questionnaire took approximately 15 minutes to complete. Responses to the questions were recorded in the form of agree or disagree. Results of the survey are presented as percentages. The questionnaire has four sections. Section 1: demographic characteristics of the doctors; Section 2: risk factors and symptoms of CRC; Section 3: CRC screening guidelines recommended by the American Cancer Society; and Section 4: ideal screening test, timing and frequency of testing in standard- and high-risk individuals.
Statistical Analysis
Descriptive statistics were performed using the Statistical Package for the Social Sciences version 21.0 (SPSS). Variables are presented as frequencies and percentages.
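Because only frequencies and percentages are reported, the tabulation can be reproduced with a few lines of code; the snippet below is an illustrative sketch (it is not the original SPSS analysis, and the column names and response values are hypothetical).

```python
import pandas as pd

# Hypothetical agree/disagree survey responses, one row per doctor.
df = pd.DataFrame({
    "screening_reduces_deaths": ["agree", "agree", "disagree", "agree"],
    "colonoscopy_gold_standard": ["agree", "disagree", "agree", "agree"],
})

# Frequency and percentage of "agree" responses per questionnaire item.
summary = pd.DataFrame({
    "n_agree": (df == "agree").sum(),
    "percent_agree": (df == "agree").mean() * 100,
})
print(summary)
```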
Results
A total of 160 doctors (specialists 33% and residents 67%) were evaluated using a self-administered questionnaire. Internists were 59% and surgeons 41%. Male doctors were 77% and female 23%. Nearly 85% of the doctors knew that screening reduces deaths due to CRC. While 73% knew the actual benefits of screening (detection and removal of polyps and precancerous lesions in asymptomatic individuals), only 50% of the doctors could define screening correctly (testing of asymptomatic individuals). About 60% of the doctors knew the high-risk factors for CRC, such as a diet rich in calories, fat and red meat, while less than 45% of the doctors knew about the westernized lifestyle and the risk of CRC - physical inactivity, excess weight gain, excess alcohol consumption and smoking. Less than 50% of the doctors knew the symptoms of CRC - bleeding per rectum, constipation, altered bowel habits and abdominal pain. Over 95% of the doctors knew about the hereditary nature of CRC. Moreover, 60% knew that colonoscopy is the gold standard screening test for CRC. Less than 25% (21%) of the doctors were aware of the American Cancer Society recommended screening guidelines and the actual intervals of screening - FOBT annually, flexible sigmoidoscopy every 5 years and colonoscopy once every 10 years. The ideal age to initiate screening in average-risk individuals (50 years) was known to 58% of the doctors, while less than 50% knew the age to initiate screening in high-risk individuals (40 years). More than 85% agreed that knowledge and awareness about CRC and its screening guidelines among doctors would improve the uptake of screening in the general population. In our survey, more than 90% of the doctors expressed their interest in knowing and receiving information about CRC and its screening guidelines.
Discussion
The primary objective of our study was to evaluate the awareness and understanding of CRC and its screening modalities among doctors working in different hospitals under the Ministry of Health in the Al Ahssa region, eastern province, Kingdom of Saudi Arabia. Over 95% of the doctors considered CRC a major public health problem, and they admitted that screening would help in reducing the incidence and deaths due to CRC. Several studies from Saudi Arabia and elsewhere have concluded and highlighted the need for screening to reduce deaths due to CRC (Mosli 2012, Elsamany 2014 and Aljebreen 2007). Over 65% of the doctors agreed on colonoscopy as the gold standard test for screening. Several studies from Saudi Arabia strongly recommend population screening for CRC, as the benefits outweigh the drawbacks (6). Physicians play a critical role in implementing guidelines and achieving public health targets for colorectal cancer screening (Brawarsky 2004). In Saudi Arabia, patients attend primary health centers for their first consult. Very little is known about the practice of CRC screening by primary care physicians, as there are no published data on the practice of colorectal cancer screening from this region. In the United States, the practice of colorectal cancer screening is excellent, with 90-95% of health care providers recommending screening (Task Force 2002). Our study results showed that CRC screening is underutilized, as more than half of the study subjects were not practicing CRC screening in spite of the documented survival benefit from CRC screening (Klabunde 2003, Mandel 2000 and Hardcastle 1996). This can be attributed to a lack of information and knowledge among the doctors about the recommended screening guidelines and poor awareness about the timing of the initiation of screening and its benefits. Seeff et al. concluded that a lack of physician recommendation is one of the most common reasons for patients not undergoing screening (Seef 2004). Knowledge and awareness of doctors about CRC and its screening modalities will have a great impact on population screening, thereby reducing deaths due to CRC (Fedrici 2005). We are aware of the limitations of our study: first, it is a single time point; secondly, it involves only those doctors in the general hospitals and primary health centers under the Ministry of Health in the Al Ahssa region. Our results may not reflect the overall knowledge and awareness of all the doctors from the region, as we have excluded doctors in the private setting, which is assumed to see almost an equal number of patients.
Conclusions
Regular and timely screening reduces deaths due to CRC. Awareness about CRC and its screening among doctors improves the uptake of screening by the average- and high-risk populations. In our present survey, we found that a sizeable percentage of doctors have unclear information about the CRC screening guidelines. We therefore recommend and suggest that there is a need for large-scale educational programs that keep doctors updated with the latest screening guidelines.
Table 1 .
Demographics of the doctors
Table 2 .
About colorectal cancer & screening | 2018-12-05T02:29:09.684Z | 2016-11-30T00:00:00.000 | {
"year": 2016,
"sha1": "a3115459924c31fb51810e514279b3977f3a1791",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/gjhs/article/download/63096/35025",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a3115459924c31fb51810e514279b3977f3a1791",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
82474334 | pes2o/s2orc | v3-fos-license | Multiparameter monitoring for optimal T-cell adoptive therapy
Adoptive transfer of ex vivo-expanded antigen-specific T cells is a promising therapeutic approach for the treatment of cancer and infectious diseases. Clinical studies have proven the feasibility and potency of this procedure but several limitations need to be overcome before this form of immunotherapy reaches its full potential. For instance, the efficacy of the transferred cells is often impeded by terminal effector differentiation and exhaustion acquired during in vitro expansion. This would notably explain the lack of further in vivo proliferation and persistence in many trials. However, the factors that induce T-cell differentiation and functional impairment in culture remain poorly defined. Using the model antigen HA-1, we determined that phenotypic and functional features indicating T-cell exhaustion/dysfunction may not be detected simultaneously and depend on the method of expansion as well as the antigenic repertoire stimulated. Thus, our study has defined critical parameters to monitor in order to optimally differentiate and expand antigen-specific T cells in culture prior to adoptive transfer.
We have recently entered a new age in cancer therapy with the development of numerous effective antitumor immunotherapeutic approaches [1,2] . Pioneered in the 1970's, allogeneic hematopoietic stem cell transplantation (HSCT) was among the first therapies demonstrating that the immune system could cure refractory blood cancers. In most instances, the potent anti-neoplastic effect of allogeneic HSCT hinges on genetically encoded donor-host proteome variations. As such, HLA molecules will present fragments from self-proteins that differ in sequence between donor and host. These peptides are called minor histocompatibility antigens (MiHA) and are the cornerstone of both the graft-versus-leukemia (GVL) effect and graft-versus-host disease (GVHD) in HLA-identical, non-monozygous allogeneic HSCT [3,4] . The adoptive T-cell therapy (ACT) of T cells specific for hematopoietic-restricted MiHAs, which are absent on GVHD target tissues, is an attractive approach to augment GVL effects without risking GVHD [5,6] . While clinical studies have shown encouraging results with such an approach, no long-term evidence of leukemia control or cure was recorded [7,8] . One reason for such a limited effect is the lack of persistence of the transferred T cells [9] . This might be influenced by the environment, repeated antigenic stimulations or culture duration prior to adoptive transfer. A common feature is that in vitro priming and expansion prior to transplantation can lead to terminal T-cell differentiation, thereby limiting further T-cell expansion after transfer and leading to rapid apoptosis [10] . Although terminal T-cell differentiation and exhaustion have been studied at varying degrees of depth in animal models of chronic infections [11][12][13] or in human immunodeficiency virus (HIV)-infected patients [14-16] , the central question of ex vivo culture-driven human
T-cell differentiation has not been fully investigated, and the critical variables influencing late T-cell differentiation and the acquisition of exhaustion features during in vitro expansion are incompletely understood. Thus, it is imperative to decipher the main mechanisms by which T-cell dysfunction occurs in order to properly control it in ex vivo expansion protocols.
Adoptive immunotherapy presents the opportunity to activate and expand T cells outside the tolerizing environment of the host or, in contexts such as HSCT, from healthy donors. A number of strategies have been developed to optimize the cellular product generated. The main idea across the field was that optimal therapeutic effects are achieved when the ex vivo generated T cells maintain features associated with early memory differentiation [17,18] or even T-cell "stemness" (i.e., the capacity to further differentiate into effectors or self-renew as memory cells and persist long-term in the host). Interestingly, evidence is accumulating that therapeutic efficacy might depend more on the antigen-specific T cells' 'early' differentiation phenotype, as well as their ability to proliferate and/or persist in vivo, than on the number of infused cells [19] . Concretely, this means that central memory T cells (Tcm), generally described as expressing the CD45RO, CD62L and CCR7 markers, are predicted to have increased in vivo efficacy relative to effector memory cells (Tem), which show a reduced expression of CD62L and CCR7 with a concomitant loss of proliferative capacity [20][21][22] . However, there is also evidence that a fraction of effector memory T cells has the potential to revert back to a central memory phenotype when transferred into patients and persist, indicating that the acquisition of an effector memory phenotype in culture may not always predict limited functionality in vivo [23] . Thus, the analysis of additional phenotypic markers and functional properties of T cells prior to patient infusion could provide insights into optimal compositions of ACT for therapeutic efficacy [19] .
Our group has determined that the proportion of cells with an effector memory or central memory phenotype does not necessarily correlate with the loss of antigen-specific cells or a decline in their functionality [24] . Thus, we decided to address the question of cell "fitness" for adoptive immunotherapy by finding complementary features that would better characterize specific T-cell lines and attest to their differentiation and functional status. Figure 1 summarizes the system used to prime and expand T-cell lines against HA-1, an HLA-A0201-restricted MiHA. Briefly, donors predicted to generate an anti-HA-1 response were recruited and their T cells were co-cultured with HA-1-pulsed autologous monocyte-derived dendritic cells. Following three rounds of stimulation, the antigen-reactive cells were enriched using a cytokine capture system and further expanded in the presence of the anti-CD3 antibody OKT3 and interleukin (IL)-2. In order to study culture-driven exhaustion/dysfunction, both the co-culture and the expansion phases were prolonged.
In our hands, repeated antigenic stimulation of MiHA-specific T cells has indeed led to terminal differentiation, as evidenced by the upregulation of specific markers such as PD-1 and killer cell lectin-like receptor subfamily G member 1 (KLRG-1) predominantly on antigen-specific cells [24] (and not on accompanying CD8 + T cells present in the same culture), with detection of Tim-3 at a percentage similar to PD-1 (unpublished observations). By contrast, prolonging the expansion phase with OKT3 and IL-2 led to a decline in the antigen-specific population as well as a sharp decline in T-cell proliferation of both antigen-reactive and non-reactive CD8 + T cells. This occurred with little expression of PD-1 or KLRG-1 as well as with the preservation of polyfunctional cytokine secretion and antigen-specific granule exocytosis by the remaining antigen-reactive cells. We also observed these distinguishing features while using the same donors and protocol to expand T cells against the Epstein-Barr virus (EBV)-derived antigen LMP2 426-434 (also presented by HLA-A0201). However, we noted that the expression of KLRG-1 and PD-1 occurred earlier during the co-culture with antigen-pulsed antigen-presenting cells (APC), suggesting that cells derived from a memory repertoire (all donors were EBV-seropositive) might be more susceptible to exhaustion following repeated antigen exposure.
Globally, our findings reveal the importance of context in the acquisition of features indicative of T-cell dysfunction. While co-culture with antigen-loaded dendritic cells in the presence of cytokines leads to PD-1 and KLRG-1 expression on antigen-reactive cells without altering central memory marker expression or proliferation (as indicated by staining for Ki-67, a nuclear protein associated with cellular proliferation) as a function of time, the use of OKT3 and IL-2 leads to another type of impairment. In a context independent of APC and repeated antigen exposure [24] , the prolongation of culture led to proliferation arrest. This suggests that the dysfunction occurring during the in vitro expansion phase, where no APC is required, is influenced by pathways that selectively target cell division without affecting polyfunctionality and cytotoxicity. It may still be controversial that PD-1 is a marker of exhausted cells (as it is expressed on recently activated T cells) whereas Tim-3 and KLRG-1 are markers of terminally differentiated and/or senescent cells, but one has to consider that these categories might be neither mutually exclusive nor inclusive [37] . Furthermore, some markers may act synergistically or additively to mediate T-cell dysfunction according to different kinetics and thus collectively induce pathways that affect T-cell proliferation and survival [32,38,39] .
In our experience, the use of CD45RO/CD62L/CCR7 expression was not helpful in determining the state of T-cell differentiation in culture. Hence, in an attempt to better predict T-cell fitness, a multiparameter approach was essential for a rigorous follow-up of our cultures. The differential regulation of phenotypic and functional T-cell exhaustion according to culture conditions and duration argues for the implementation of a more comprehensive monitoring of in vitro expanded cells to optimally predict the in vivo fitness of an immunotherapeutic product. We determined that the expression of several extracellular as well as intracellular markers, such as PD-1, KLRG-1, IFNγ and Ki-67, provided substantial information about T-cell quality and will hopefully predict the in vivo persistence of cultured specific T cells (Figure 1).
Understanding and curtailing CD8 + T-cell terminal
effector differentiation is a central issue in adoptive immunotherapy using either the natural T-cell receptor repertoire or genetically modified T cells. As such, chimeric antigen receptor (CAR) T cells, another promising approach for immunotherapy, have provided limited success against a broad variety of cancer types while showing spectacular results in others [40] . These cells, genetically engineered to express antibody binding domains fused to T-cell signaling domains, are generally polyclonally expanded with anti-CD3/CD28 antibodies or coated beads, with or without IL-2 [40][41][42] , which may eventually drive them towards an exhaustion state [43] . The persistence of CAR T cells at the tumor site is one of the major principles for effective tumor eradication [44,45] , and poor T-cell homing is thought to be one reason for the reduced efficiency of adoptive immunotherapy based on CAR-engineered T cells in certain cancer types. However, in vivo proliferation of the transferred cells is a key feature that also predicts the success of the therapy, and several studies have shown that the lack of survival of the infused CAR T cells greatly limits the efficiency of adoptive immunotherapy [9,46,47] . To solve this problem, modulating co-stimulatory signaling during T-cell culture or modifying the cytokine environment, which greatly influences human T-cell differentiation processes, has been suggested to improve the persistence of infused T cells [40,43,[48][49][50] . The central issue of T-cell proliferation following transfer might also depend on the exhaustion status of the T cells. Unfortunately, to our knowledge, there are no markers that can reliably attest to cell quality (neither for classical ACT nor for CAR T-cell studies) prior to infusion. Thus, determining the molecular targets leading to exhaustion/dysfunction characteristics could help us enhance T-cell culture for clinical-scale expansion of a healthier, non-exhausted product. Generating strategies to manipulate these targets to limit/reverse exhaustion in a clinical-grade culture setup will be of great interest for T-cell biologists and adoptive immunotherapists.
Figure 1 .
Figure 1. Expression of markers indicative of T-cell exhaustion. The monitoring of classical extracellular markers of Tem/Tcm differentiation, such as CD45RO, CD62L and CCR7, does not allow for the prediction of cell quality over time. However, the concomitant expression of proteins associated with dysfunction, such as PD-1 and KLRG-1, during a priming phase with multiple antigen-specific stimulations shows that cells must be harvested at a specific time, before the appearance of phenotypic exhaustion features. Along similar lines, an expansion phase affects features related to functionality, such as intracellular cytokines like IFNγ and/or the expression of Ki-67, a marker of active proliferation. Monitoring multiple parameters at different stages of the culture may best define T-cell fitness for adoptive therapy. Tcm: central memory T cells; Tem: effector memory T cells; IFNγ: interferon-γ; KLRG-1: killer cell lectin-like receptor subfamily G member 1; PD-1: programmed cell death-1. | 2019-01-08T22:19:47.595Z | 2015-08-06T00:00:00.000 | {
"year": 2015,
"sha1": "ed20c07f2bd69202c9724348b773e1e83b2d2d38",
"oa_license": "CCBY",
"oa_url": "http://www.smartscitech.com/index.php/sp/article/download/913/pdf_5",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ed20c07f2bd69202c9724348b773e1e83b2d2d38",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
1095988 | pes2o/s2orc | v3-fos-license | Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate
Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of compressive sensing (CS) when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring.
Introduction
The recent evolution of sensing devices and the availability of new solutions and techniques in the area of WSNs have increased the expectations placed on WSN applications. Today, WSNs are struggling with issues concerning battery lifetime and power consumption. Recent research advances in the field of data compression have opened the possibility of minimizing the storage and communication payload, with the goal of extending the lifetime of the nodes as much as possible.
Recently, compression algorithms have gained a lot of interest, in particular the ones capable of exploiting the fact that the majority of the signals of interest in WSN applications has sparse representation in terms of some basis [1][2][3]. Compressed sensing (CS) has been used as a new approach to simultaneous sensing and compressing, fostering a potentially large reduction in sampling and computational costs.
CS builds on the works [4][5][6], which demonstrated that if a signal can be compressed using classical transform coding techniques and its representation is sparse in some basis, then a small number of projections on random vectors contain enough information for approximate reconstruction. The compression comes from the fact that the number of these measurements is usually smaller than the number of samples needed if the signal is sampled at the Nyquist frequency. In general, if a signal has a sparse representation in one basis, it can be recovered from a small set of measurements onto a second measurement basis that is incoherent with the first.
While a rich literature has been developed about the mathematical aspects of CS and the reconstruction algorithms used to perform reconstruction (e.g., [7,8]), relatively limited attention has been paid to practical implementation of CS on resource-constrained nodes, such as those typically used in WSN deployments. Distributed CS (DCS) [9,10] is probably the most prominent framework dealing with the sparsity and compressibility of signal ensembles tailored to distributed sensor nodes, where signals are each individually sparse in some basis, but a correlation from sensor to sensor does exist.
Moreover, the great majority of the papers addressing CS and DCS deal with a purely digital implementation of CS, which consists of sampling the signal at a given frequency (e.g., Nyquist or above) and then compressing it using CS with dense encoding matrices. Nevertheless, when natural signals have a relatively low information content, as measured by the sparsity of their spectrum, the theory of CS suggests that randomized low-rate sampling may provide an efficient alternative to high-rate uniform sampling. This technique is usually referred to as analog CS, and it is a novel strategy to sample and process sparse signals at a sub-Nyquist rate [11].
In this paper, we address the problem of energy consumption for sensor nodes performing CS and DCS when both digital and analog CS are considered. Our contribution is: (i) to establish a common energy framework in which a fair comparison can be made by modeling the nodes when real signals are considered for reconstruction and real resource-constrained hardware is used to perform the compression; (ii) to investigate the impact of CS parameters for compression on nodes' lifetime; this was only partially discussed in [12]; (iii) to investigate if low-rate CS (CS with sub-Nyquist sampling) can be exploited to reconstruct environmental signals with good quality; and (iv) to propose design parameters for low-rate CS that are able to achieve a superior reconstruction quality with the minimum energy expenditure, so as to prolong the lifetime of the whole network.
The rest of this paper is organized as follows. Section 2 surveys related works. Section 3 gives a brief introduction about compressive sensing background. In Section 4, the energy consumption modeling for CS is addressed when small COTS WSN nodes are used. The low-rate CS is proposed in Section 5, and the reconstruction analysis is presented in Section 6. In Section 7, we discuss the conclusions.
Related Works
The problem of data gathering and compression using CS is widely developed in the literature. Even though a lot of attention is paid to reconstruction algorithms and mathematical aspects, practical aspects and implementation problems have been gaining a lot of interest lately.
The general problem of using CS in WSNs is investigated in several works, like in [13], where the authors analyze synthetic and real signals against several common transformations to evaluate the reconstruction performance, or in [14], where the measurement matrix is created jointly with routing, trying to preserve a good reconstruction quality. Furthermore, in [15], the authors improve reconstruction by reordering input data to achieve better compressibility. In general, all of these papers address the problem of signal reconstruction, but they lack a realistic assessment of the energy involved in compression. When real hardware is considered, the conclusions about CS have to be revisited.
One of the first papers trying to address the problem of energy consumption for compression, dealing with the problem of generating a good measurement matrix using as little energy as possible, is [3]. In that work, the research is focused on wireless body sensor networks (WBAN) for real-time, energy-efficient ECG compression. Other works that focus on bio-signals and WBANs are [16,17]. This is a quite different research field with respect to WSNs, where the presence of several nodes sensing the same environment makes it possible to exploit the distributed nature of the signals to improve the quality of recovery. However, CS is today applied in several other signal processing fields, from video compression [2] to underwater acoustic OFDM transmission [18] and air quality monitoring [19].
In fact, several works, like [20] or [21], deal with the use of CS when multiple nodes are used in a joint reconstruction. The best known technique used to exploit the existing correlation among several nodes in a WSN is distributed compressed sensing (DCS) [9,10], which permits new distributed coding algorithms for multi-signal ensembles that exploit both intra-and inter-signal correlation structures.
Besides the classical digital implementation of CS used in all of the aforementioned papers, in this paper we also deal with CS when the signals are sampled at a sub-Nyquist frequency. Usually, in the literature, this compression technique is referred to as analog CS. This is because, usually, the subsampling is performed at the ADC level, dropping samples during the acquisition and analog-to-digital conversion stage. For example, in [22], the effects of circuit imperfections in analog compressive sensing architectures are discussed.
While it is common in the literature to find papers like the two aforementioned, addressing the problem of analog CS with a focus on the hardware called analog-to-information converters (AICs), other works investigate the problem from a higher, system-level perspective, where the samples are not discarded by the ADC architecture but by the device performing the sensing. One of the papers dealing with this specific case is [23], where the analysis of energy consumption is totally neglected and the work is strictly tied to the specific application of the pulse oximeter. Differently from environmental signals, the signals obtained by the oximeter present a much higher temporal correlation, showing small variations in their temporal evolution. Furthermore, in [24], the authors use a sparsely generated matrix, adjusting the sampling rate to maintain an acceptable reconstruction performance while minimizing the energy consumption. In that work, the authors use the reconstruction quality to give the node feedback that is used to modify the sampling pattern. Differently from this work, the authors do not address the problem of investigating different reconstruction algorithms, and they just rely on simple BPDN or LASSO for reconstruction. Nor do they try to exploit potential correlations among signals and nodes or training to increase the quality of the recovered signal.
Even in [25], the usage of sparse measurement matrices is investigated, and even though the energy consumption in a WSN is taken into consideration, that paper offers no precise analysis of the energy for compression nor a real trade-off between power consumption and reconstruction quality, as we do in our work.
In [26], the authors use a weighted form of basis pursuit to reconstruct signals gathered using a sparse measurement matrix, addressing also the problem of the energy spent in generating the random projection matrix on the node itself. Nevertheless, the aim of that paper is quite different from ours: the authors in [26] want to detect a specific event characterized by a well-defined frequency, and this makes it easier to train the reconstruction algorithm to detect the specified event; whereas in our approach, we address the reconstruction without any priors about the signal to recover, using temporal or spatial correlation as data training for reconstruction.
Related to this work is also [27], where a sparse matrix is generated considering the energy profile of the node; however, even though a set of environmental signals similar to those reported in this paper is considered, the authors do not try to exploit the inter-signal correlation properties. In [28], the authors introduce random access compressed sensing, a form of low-rate CS, but their focus is on the network architecture, investigating the network design rather than using compressive sensing for data compression.
CS and DCS: A Mathematical Background
For a band-limited signal x(t) of duration T , let x(n), 1 ≤ n ≤ N , be its discrete version. The Nyquist sampling theorem states that in order to perfectly capture the information of the continuous signal x(t) with band-limit B nyq /2 Hz, we must sample the signal at its Nyquist rate of B nyq samples/s. Thus: x(n) = x(nT s ), such that T s ≤ 1/B nyq and N T s ≤ T . Sampled in time, the signal that we want to acquire is represented by an N -dimensional vector of real numbers x.
In the standard CS setting, one is concerned with recovering this finite-dimensional vector x ∈ R N from a limited number of measurements. A typical assumption is that the vector x is sparse. The sparsity of a signal is usually indicated as the ℓ 0 -norm of the signal, where the ℓ p -norm ||·|| p is defined as: ||α|| p = (∑ i |α i | p ) 1/p , with α ∈ R N . Thus, if the signal x is sparse, this means that there exists some N × N basis or dictionary Ψ ∈ R N ×N such that, for any instance of x, there is an N -dimensional vector α such that x = Ψα and ||α|| 0 ≤ K with K ≪ N . CS theory demonstrates that this kind of signal can be compressed using a second, different matrix Φ ∈ R M ×N with M ≪ N . The compression procedure can be written as y = Φx, where y is the M -dimensional measurement vector.
Since Ψ is usually determined by the signal's characteristics and is considered fixed, one seeks to design Φ so that M is much smaller than N .
Having the measurement vector y, the recovery of the original signal x can be obtained by inverting the problem y = Φx = ΦΨα = Θα. In general, this is not an easy task, since the matrix Θ ∈ R M ×N is rectangular with M ≪ N . Fortunately, the fact that x is sparse relaxes the problem a bit, opening the way to the use of optimization-based reconstruction or iterative support-guessing reconstruction.
The most common optimization-based method is the so-called basis pursuit (BP) method, which looks for the "most sparse" solution, i.e., the one for which ||α|| 1 is minimum. In formulas: α̂ = arg min ||α|| 1 subject to y = Θα. CS proves that if the two matrices Φ and Ψ are incoherent (elements of the matrix Φ are not sparsely represented in the basis Ψ) and the original signal x is compressible, then we can recover α with high probability [29].
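As an illustration of how such a reconstruction can be carried out at the gathering center, the following sketch solves the basis pursuit problem with an off-the-shelf linear-programming solver by splitting α into its positive and negative parts. It is a minimal toy example with a randomly generated Θ, not the decoder used in the evaluation of this paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Theta, y):
    """Solve min ||alpha||_1 s.t. Theta @ alpha = y via linear programming.

    alpha is split as alpha = u - v with u, v >= 0, so the linear objective
    sum(u + v) equals the l1 norm at the optimum.
    """
    M, N = Theta.shape
    c = np.ones(2 * N)                    # minimize sum(u) + sum(v)
    A_eq = np.hstack([Theta, -Theta])     # Theta @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

# Toy example: a K-sparse vector recovered from M < N random measurements.
rng = np.random.default_rng(0)
N, M, K = 128, 40, 5
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Theta = rng.standard_normal((M, N)) / np.sqrt(M)
y = Theta @ alpha
alpha_hat = basis_pursuit(Theta, y)
print("max reconstruction error:", np.abs(alpha_hat - alpha).max())
```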
In case the sensors producing the data are close to each other (as is usual in a WSN), the signals can be assumed to be similar and the outputs correlated. We can then expect that the ensemble of these signals has an underlying joint structure (intra- and inter-signal correlation) that can be exploited to further compress the data.
In an ensemble of J signals, we can denote by x j ∈ R N the j-th signal, with j ∈ {1, 2, . . . , J}. As done before for single-signal CS, for each signal x j in the ensemble we can have a sparsifying basis Ψ ∈ R N ×N and a measurement matrix Φ j ∈ R M j ×N , such that y j = Φ j x j with M j ≪ N and x j = Ψα j . Even though the DCS theory proposes three different models [9,10] for jointly-sparse signals, JSM-2 can be considered the most suitable model to describe the ensembles of signals typically gathered by nodes in a WSN.
In the JSM-2 model, all signals share the same sparse set of basis vectors, but with different coefficients. If α j ∈ R N is the coefficient vector for the basis Ψ, which is non-zero only on a common set Ω ⊂ {1, 2, . . . , N } of coefficients, we have |Ω| = K, with Ω being the same for all the signals. The reconstruction can be performed via greedy algorithms, such as simultaneous orthogonal matching pursuit (SOMP) or the more promising DCS-SOMP [30].
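To make the joint-recovery idea concrete, the sketch below implements a simple SOMP-style decoder for the JSM-2 case in which all sensors share the same sensing matrix. The full DCS-SOMP algorithm of [30] additionally handles per-sensor matrices and an orthogonalization step, so this should be read as an illustrative simplification rather than the algorithm used in the evaluation.

```python
import numpy as np

def somp(Theta, Y, K):
    """Greedy JSM-2 recovery: the J columns of Y share one support of size K.

    Theta: (M, N) shared matrix (measurement matrix times sparsifying basis).
    Y:     (M, J) measurements, one column per sensor.
    Returns an (N, J) matrix of estimated sparse coefficients.
    """
    M, N = Theta.shape
    support = []
    residual = Y.copy()
    for _ in range(K):
        # Pick the atom most correlated with the residuals of all J signals.
        scores = np.abs(Theta.T @ residual).sum(axis=1)
        scores[support] = 0.0                 # never pick the same atom twice
        support.append(int(np.argmax(scores)))
        # Joint least-squares re-estimation on the current support.
        sub = Theta[:, support]
        coeffs, *_ = np.linalg.lstsq(sub, Y, rcond=None)
        residual = Y - sub @ coeffs
    A = np.zeros((N, Y.shape[1]))
    A[support, :] = coeffs
    return A

# Toy ensemble: J signals sharing the same 5-element support (JSM-2).
rng = np.random.default_rng(1)
N, M, K, J = 256, 50, 5, 4
omega = rng.choice(N, K, replace=False)
alphas = np.zeros((N, J))
alphas[omega, :] = rng.standard_normal((K, J))
Theta = rng.standard_normal((M, N)) / np.sqrt(M)
Y = Theta @ alphas
print("max error:", np.abs(somp(Theta, Y, K) - alphas).max())
```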
Hardware and Compression
In this subsection, we want to analyze the real potential of CS aiming at low-complexity energy-efficient data compression on resource-constrained WSN platforms.
CS is usually considered a suitable approach for data acquisition and compression in WSNs. It is claimed in [31] to be particularly attractive for energy-constrained devices for at least two reasons: (1) the compression is agnostic of the specific properties of the signal and is performed through a small number of independent linear measurements; and (2) the small number of measurements can be transmitted to a remote gathering center where they can be accurately reconstructed using complex, nonlinear and energy-expensive decoders [12].
Nevertheless, the energy spent in compression is often underestimated in the literature. When implemented in software, data compression goes through several matrix-vector multiplications, as seen in Section 3, that are not negligible, especially when resource-constrained nodes are used for compression and for the generation of the measurement matrix.
The hardware used as a reference in our tests is a wireless node by ST Microelectronics, the STM32W108, which is a fully-integrated SoC with a 2.4-GHz IEEE 802.15.4-compliant transceiver, a 32-bit 24-MHz ARM Cortex-M3 microprocessor, 128 KB of Flash and 8 KB of RAM memory. Two additional sensors, Sensirion SHT21, are considered on the board. The microcontroller has no floating point unit, and it uses software emulation to overcome this limitation. The compiler used for compiling benchmarks is Sourcery CodeBench Lite Edition, and the code is compiled with -O3 optimization. The time measurement is performed using the debug registers in the ARM core, capable of accurately measuring the number of cycles spent in performing a certain operation. Data for the power consumption of the various subsystems are not reported for lack of space. For reference, the reader can refer to the datasheets of the microcontroller [32] and sensors [33]. Our tests and simulations track the reported datasheet values with high fidelity.
Compression using CS can be performed using different kinds of compression matrices Φ. In the literature, it is possible to find a plethora of papers arguing about different kinds of sensing matrices [34]. As seen in Section 3, the only requirement is that the sensing matrix is highly incoherent with the sparsifying basis Ψ. Such a property is practically verified for random matrices, such as random matrices with independent identically distributed (i.i.d.) entries. Interestingly, many efficient sensing matrices with different characteristics can be generated and, hence, they have different memory and power footprints; moreover, they require a different number of bytes for encoding and storage.
In Figure 1, the number of cycles required by a microcontroller to generate the compression matrix and to perform the compression of a single sample is shown for different kinds of measurement matrices. The differences are mainly due to: (1) the computational workload required for generating the random vectors for the compression, since in some cases the generation implies the use of complex and computationally-intensive functions, such as sqrt or log; and (2) the time spent multiplying the vector by the sample, which, especially in the case of floating-point numbers, is not negligible.
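The difference in workload between the two extremes compared later in the paper, a Gaussian matrix generated via the Box-Muller transformation (T2) and a symmetric Bernoulli ±1 matrix (T6), can be appreciated from the sketch below. It mirrors what a node would compute but is written in Python for illustration; the T2/T6 labels simply follow the naming used in the text.

```python
import numpy as np

def gaussian_matrix_box_muller(M, N, rng):
    """T2: i.i.d. Gaussian entries, mean 0 and variance 1/M, via Box-Muller.

    Each entry needs log, sqrt and cos evaluations plus floating-point
    multiplications, which is what makes it expensive on a Cortex-M3
    without a floating point unit.
    """
    u1 = rng.random((M, N))
    u2 = rng.random((M, N))
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2) / np.sqrt(M)

def bernoulli_matrix(M, N, rng):
    """T6: entries +1/-1 with probability 1/2; one random bit per entry, and
    the projection y = Phi @ x reduces to additions and subtractions."""
    return rng.integers(0, 2, size=(M, N)) * 2 - 1

rng = np.random.default_rng(42)
N, M = 512, 100
x = rng.standard_normal(N)             # stand-in for N accumulated samples
y_t2 = gaussian_matrix_box_muller(M, N, rng) @ x
y_t6 = bernoulli_matrix(M, N, rng) @ x
print(y_t2.shape, y_t6.shape)          # both produce M measurements
```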
Power Consumption Model
When CS is used to perform compression in a WSN, the type of compression matrix also strongly affects the power consumption of the other subsystems: (1) the longer the time necessary to compress the data, the longer the node has to stay awake before switching back to sleep mode to save energy; (2) the number of bytes required to encode the compression matrix is not the same for all of the matrices Φ; (3) following from the previous point, the time and space required by the microcontroller to store data in non-volatile memory are different; and (4) the energy spent in transmission differs among measurement matrices.
To evaluate the influence of the choice of measurement matrix and of the other compression parameters, in this subsection we introduce an architecture-level power consumption model for the nodes when compression is performed with different parameters, and we compare the results against the power spent to transmit data without any kind of compression. Using this power model and feeding it with data coming from real hardware, we can easily evaluate how changing the parameters influences the energy consumption of the system, enabling design space exploration.
The hardware taken as the reference (already described in this section) is an STM32W108 node acquiring data from the two on-board sensors. The network is organized as a star, a very common topology for practical WSN deployments [35].
During the simulation involving no compression, the node wakes up, samples data from the two sensors and sends them out to a collector center. Afterwards, it goes back to sleep mode, waiting for the next cycle. The energy spent in each cycle can be written as the sum E sleep + E setup + E sample + E send , where E sleep is the energy spent in sleep mode, E setup is the energy used for waking up and setting up the device, E sample is the energy for sampling each sensor and E send is the energy used to send the acquired data. Expanding each term, we have: T sleep · P sleep + T setup · (P mcu + P soff + P toff ) + T sample · (P sample + P sactive + P toff ) + T trans · (P comm + P soff + P trans ), where T sleep , T setup , T sample and T trans are the durations of the respective phases. P sleep is the power consumed in sleep mode; P soff is the power absorbed by the sensors when sleeping; P toff is the power consumption of the transceiver when the node is in sleep mode; P mcu is the power consumed by the MCU; P sample is the power spent for data acquisition; P sactive is the power consumed by the sensors; P comm is the power consumption for filling the transceiver output buffer; and, finally, P trans is the power for sending data. All of the values for power consumption and timing were actually measured on the hardware. When CS is used to compress data, the compression is performed after the node has acquired N acc samples. Thus, the average energy consumption in each cycle is: E sleep + E setup + E sample + E store + (E nv + E comp + E trans )/N acc , where E store is the energy to store the acquired sample in non-volatile memory, E nv is the energy spent during the recovery of the data from non-volatile memory and E comp is the energy for compression. In detail: T sleep · P sleep + T setup · (P mcu + P soff + P toff ) + T sample · (P sample + P sactive + P toff ) + T store · (P soff + P toff + P store ) + (T nv · (P store + P soff + P toff ) + T comp · (P soff + P toff + P comp ) + T trans · (P comm + P soff + P trans ))/N acc , with self-explanatory meaning of the symbols. In Figure 2, the result of the simulations is reported for N acc = 512, M = 100 and T sleep = 10 s, with an overhead of 10 bytes for each packet sent. The other parameters in Equations (6) and (8) are derived from these values and from the hardware specification data in the datasheets. The two compression matrices used in the simulation when CS is performed are: (T2) a Gaussian matrix generated using a Box-Muller transformation with mean zero and variance 1/M and (T6) the matrix generated from the symmetric Bernoulli distribution P (Φ jk = ±1) = 1/2. According to Figure 1, these two matrices define the energy consumption boundaries for CS. Figure 2. Energy spent in one sampling cycle when CS is used to compress the sample, compared to the energy consumed when the sample is sent without compression (stacked components: E sleep , E setup , E sample , E send , E comp , E nv ). The first bar refers to CS when the measurement matrix is obtained from a Bernoulli distribution (T6), while for the second bar, the compression is performed using a Gaussian matrix (T2) (simulation parameters: N acc = 512, M = 100, T sleep = 10 s).
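A compact way to explore these per-cycle models is to code them directly. The sketch below reproduces their structure; all power and timing numbers are placeholders, since the actual figures come from the STM32W108 and SHT21 datasheets and from measurements that are not listed here.

```python
# Per-cycle energy models, structured as in the text; the numbers below are
# placeholders, not the measured STM32W108/SHT21 values.
P = dict(sleep=1e-6, mcu=24e-3, soff=1e-6, toff=1e-6, sample=3e-3,
         sactive=1e-3, comm=10e-3, trans=30e-3, store=8e-3, comp=24e-3)  # W
T = dict(sleep=10.0, setup=2e-3, sample=15e-3, store=1e-3,
         nv=5e-3, comp=50e-3, trans=4e-3)                                # s

def energy_no_compression():
    """Energy per cycle when every sample is transmitted uncompressed."""
    return (T["sleep"] * P["sleep"]
            + T["setup"] * (P["mcu"] + P["soff"] + P["toff"])
            + T["sample"] * (P["sample"] + P["sactive"] + P["toff"])
            + T["trans"] * (P["comm"] + P["soff"] + P["trans"]))

def energy_cs(n_acc):
    """Average energy per cycle when CS compresses a buffer of n_acc samples."""
    per_cycle = (T["sleep"] * P["sleep"]
                 + T["setup"] * (P["mcu"] + P["soff"] + P["toff"])
                 + T["sample"] * (P["sample"] + P["sactive"] + P["toff"])
                 + T["store"] * (P["soff"] + P["toff"] + P["store"]))
    once_per_buffer = (T["nv"] * (P["store"] + P["soff"] + P["toff"])
                       + T["comp"] * (P["soff"] + P["toff"] + P["comp"])
                       + T["trans"] * (P["comm"] + P["soff"] + P["trans"]))
    return per_cycle + once_per_buffer / n_acc

print(energy_no_compression(), energy_cs(n_acc=512))
```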
The result of the simulation clearly shows that compressing data with CS does not always yield an actual saving in power consumption. In all cases, the energy spent in sleep mode, the energy for sampling, and the energy for setting up the node after sleep are obviously the same; the differences lie in the energy for compression and for sending the data.
Using a complex compression matrix (T2) is expensive in terms of energy; as a result, the overall power consumption is higher with CS than without any compression. In contrast, when a simpler matrix (T6) is used, the energy for compression becomes negligible and the power consumption drops sharply. A large difference between using CS and not using compression also appears in the power for sending data, due to two factors: (1) the number of bytes sent; and (2) better packetization, since the compressed vector is sent at the end of the N_acc cycles, which allows maximizing the number of compressed samples that fit in the packet payload [36].
Low-Rate Compressive Sensing
In this section, we investigate how the energy consumption can be further reduced by means of simpler sparse measurement matrices and by acting on the number of samples gathered by the node.
In classical acquisition systems (as in the digital CS seen before), samples are taken regularly on the time axis at a given rate (usually not less than the Nyquist one). A particular form of CS, called analog CS, relies on random sampling to avoid this regularity and aims to produce a number of measurements that, on average, are less than those produced by Nyquist sampling, while still allowing the reconstruction of the whole signal thanks to sparsity and other priors.
While analog CS is usually performed by means of specialized hardware encoders, we want to study whether analog CS is a suitable technique to be performed on WSN nodes and whether this peculiar form of compression, which we call low-rate CS (LR-CS), is still able to reconstruct the original signals of interest with satisfying quality.
From a mathematical point of view, the problem is still the same as in Equation (3); what differs is the form of the measurement matrix Φ. Let B denote an M-dimensional vector, each element of which contains a unique entry chosen randomly between one and N. In analog CS, the measurement matrix Φ is a sparse M × N matrix, where the i-th row of the matrix is an all-zero vector with a one at the location given by the i-th element of B. This is a very simple measurement matrix, energetically inexpensive to generate and store, and it also permits savings on the number of samples to gather.
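A minimal sketch of how such a sparse measurement matrix can be generated with NumPy is shown below; drawing the entries of B without replacement is our reading of "unique entries", and the helper name is ours.

import numpy as np

def sparse_measurement_matrix(n, m, seed=None):
    # Draw M unique sample locations in {0, ..., N-1} and build the M x N
    # selection matrix whose i-th row has a single one at location B[i].
    rng = np.random.default_rng(seed)
    b = np.sort(rng.choice(n, size=m, replace=False))
    phi = np.zeros((m, n))
    phi[np.arange(m), b] = 1.0
    return phi, b

phi, idx = sparse_measurement_matrix(n=512, m=100, seed=0)
# Applying phi to a length-N signal x is simply sub-sampling at the chosen indices:
# phi @ x  ==  x[idx]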
Practically, using this kind of measurement matrix means that the node is only required to randomly gather M samples with an under-sampling ratio of order ρ = M/N. As done before, the average energy consumption over the N_acc sampling period can be written analogously (Equation (9)). In Figure 3, the comparison between digital and low-rate CS is reported. As inferred from Equations (7) and (9), the energy saving is mainly due to three factors: (1) there is no energy spent in compression for the analog version of CS; (2) the contribution of E_setup, E_sample and E_store is reduced by a factor ρ; and (3) E_nv is decreased, since the number of bytes to store in flash is reduced. In Figure 4, the comparison between the energy spent for low-rate and digital CS is reported, normalizing the energy with respect to the energy spent when no compression is applied. Low-rate CS is always more convenient than digital CS. The plot also shows the influence of the packet overhead on the power consumption, which creates small abrupt increases in energy consumption whenever an additional packet has to be sent. Having verified that the node can save energy using low-rate CS and a sparse measurement matrix, the problem shifts to verifying whether low-rate CS can be used in practice to reconstruct signals gathered by WSN nodes deployed in a real environment.
WSN Data Reconstruction for Low-Rate CS
In this section, we want to investigate the performance of several reconstruction algorithms to check if there is an algorithm that is better able than others to cope with low-rate CS and that can guarantee a good signal recovery. Moreover, we want to address the problem of choosing a suitable sampling pattern for the low-rate CS, since the sampling pattern chosen is strictly related to the quality of the recovered signal during the reconstruction phase.
In our experiments, we consider data coming from the CIMIS [37] dataset, which covers a network of over 120 automated weather stations in the state of California. We take as reference the data collected during the 23rd week of 2012 by seven different weather stations near Monterey (CA). For our simulations, we refer to three different kinds of sensors: temperature, relative humidity, and wind speed, as reported in Figure 5. The ensemble of signals is chosen such that it includes periodic and highly correlated signals (temperature and relative humidity) together with a less correlated signal (wind speed).
In our model, the seven nodes are deployed in the same IEEE 802.15.4 star network. The power consumption for each node adheres to the same model as described in Section 4. In each simulation cycle, each node samples the signal for a certain period, called the acquisition period, collecting a certain number of samples before compressing these samples and sending out the compressed vector toward a central collector. The acquisition period is supposed to be the same for each node, and each node uses the low-rate CS for compressing data. The sparse compression matrix Φ used for compression is locally generated by each node using its own ID and the timestamp as the seed for generation. The compressed vectors are gathered by the central coordinator, and here, the original signals are recovered using different algorithms.
Two different sampling patterns for the generation of the measurement matrix Φ are considered in this section: (1) the uniform sampling (US) pattern; and (2) the non-uniform sampling (NUS) pattern. In the uniform sampling pattern, the inter-measurement intervals are constant, Δk_j = k_{j+1} − k_j = Δk = γΔk_min, where Δk_min is the minimum sampling period of the ADC and γ = N/M, whereas in the non-uniform sampling pattern, the inter-sample period is randomly chosen in [Δk_min, ∞).
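The two patterns can be generated, for example, as follows; realizing the non-uniform pattern by drawing distinct time slots uniformly at random is only one possible way to obtain random inter-sample periods lower-bounded by Δk_min, since the exact generation procedure is not specified here.

import numpy as np

def uniform_pattern(n, m):
    # Constant inter-measurement interval of gamma slots of dk_min
    # (integer approximation of gamma = N/M).
    gamma = n // m
    return np.arange(m) * gamma

def non_uniform_pattern(n, m, seed=None):
    # Random inter-sample periods: here, m distinct slots drawn uniformly at random,
    # which guarantees every gap is at least one dk_min.
    rng = np.random.default_rng(seed)
    return np.sort(rng.choice(n, size=m, replace=False))

us_idx = uniform_pattern(512, 100)        # indices k_j with k_{j+1} - k_j = gamma
nus_idx = non_uniform_pattern(512, 100)   # indices with random gaps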
We carry out the reconstruction using several algorithms, distributed and non-distributed, and evaluate the quality of reconstruction using the SNR expressed in dB:

SNR = 20 log10( ‖x‖_2 / ‖x − x̂‖_2 )

where x is the original signal and x̂ is its recovered version. While BP does not exploit any correlation or a priori information, and DCS-SOMP and JS-BP try to exploit the inter-correlations existing among the different nodes, the GPSR algorithm is well suited for both periodic and correlated signals, since it includes a weighting factor that can be used to give the reconstruction algorithm some hints about the signal to reconstruct.
With the same nomenclature as in the previous section, the problem of signal reconstruction for GPSR can be expressed as a weighted ℓ1-regularized least-squares problem (Equation (11)), where τ is a non-negative parameter providing the relative weight of the ℓ1-norm and ℓ2-norm terms in the cost function, while W is a diagonal matrix with ω_1, …, ω_n on the diagonal, defined in terms of free parameters η_i and a small positive constant introduced to provide stability. In general, the weights η_i are free parameters of the convex relaxation whose values can improve the signal reconstruction. The matrix W can in fact be used to incorporate a priori information about sparsity and can be estimated on-line from inter- or intra-correlation data between sensors and nodes.
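Although the GPSR solver itself is not reproduced here, the weighted ℓ1-regularized least-squares problem it addresses can be prototyped with a basic iterative soft-thresholding (ISTA) loop. The sketch below assumes the standard formulation min_s (1/2)‖y − ΦΨs‖² + τ‖Ws‖_1, which is our reading of Equation (11), and uses a DCT sparsifying basis with a toy sparse signal.

import numpy as np
from scipy.fft import idct

def ista_weighted_l1(y, A, w, tau=0.1, n_iter=500):
    # Solve min_s 0.5*||y - A s||^2 + tau*||diag(w) s||_1 by proximal gradient descent.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ s - y)              # gradient of the quadratic data term
        z = s - g / L
        thr = tau * w / L
        s = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)   # weighted soft threshold
    return s

n, m = 512, 100
rng = np.random.default_rng(1)
psi = idct(np.eye(n), norm="ortho", axis=0)   # columns are DCT basis vectors
idx = np.sort(rng.choice(n, size=m, replace=False))
x_true = psi[:, :5] @ rng.normal(size=5)      # toy signal that is 5-sparse in the DCT domain
y = x_true[idx] + 0.01 * rng.normal(size=m)   # low-rate, noisy measurements
A = psi[idx, :]                               # Phi @ Psi for a selection matrix Phi
w = np.ones(n)                                # uniform weights; priors would go here
x_hat = psi @ ista_weighted_l1(y, A, w, tau=0.05)
snr_db = 20 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_true - x_hat))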
In this section, for each sensor we use the data collected by the same sensor on the day before the one involved in the reconstruction as training information to obtain the W matrix, thereby exploiting the temporal intra-correlation of each node.
In the simulations, the acquisition period before sending the compressed data toward the base station is roughly two days (more precisely, about 42 h). During this period, each sensor of each node is sampled, and M samples are gathered by the node according to the generated Φ matrix. The minimum wake-up time (the minimum inter-sample period) is 5 min, so a maximum of N_acc = 512 samples can be gathered by each node for each sensor in one acquisition period. The sparsifying matrix Ψ is a DCT matrix, which has already been demonstrated to be a good basis for compressible natural signals, as highlighted in [31,39]. Each simulation is performed for 100 trials, and for each run, both the measurement matrix and the sampling pattern for the non-uniform random sampling are randomly generated.
In Figure 6, the reconstruction quality for each kind of signal, averaged over all seven nodes, is reported. The results are plotted against the under-sampling ratio ρ = M/N, defined as the fraction of the samples actually taken with respect to the total number of samples.

Figure 6. Quality of reconstruction vs. the under-sampling ratio for the three kinds of signals taken into consideration. Each signal is reconstructed using all of the algorithms investigated in the paper, varying also the under-sampling pattern.
The results clearly show that BP does not perform well for any of the three signals when low under-sampling ratios are considered, achieving an SNR that is lower than that obtained with all of the other algorithms. Algorithms that exploit the spatial inter-correlation between nodes or the temporal intra-correlation achieve a much better reconstruction quality for all of the signals considered. In general, the best reconstruction quality is obtained using the GPSR algorithm. This much higher SNR is obtained by giving the reconstruction algorithm useful hints about the signal to reconstruct, as seen in Equation (11). For the wind speed, the reconstruction quality guaranteed by GPSR is comparable to that achieved by DCS-SOMP; this is due to the fact that the wind speed, among all of the signals, presents a lower temporal correlation.
The plot also shows that, while for GPSR, the uniform sampling (US) outperforms the non-uniform sampling pattern (NUS), for BP, this is the opposite.
Training Data for GPSR
From the results collected, it follows that the algorithm best able to provide a good reconstruction of the signals is GPSR. In this section, we want to investigate how the training data (the parameters in the form of the W matrix in Equation (11)) can influence the reconstruction. This is particularly significant in WSNs, where spatial and temporal correlations exist both between different nodes and within each node itself.
In our simulations, we investigate four different scenarios, each aimed at exploiting the spatial correlation among nodes or the temporal correlation within the sensor of interest to create suitable training data for the GPSR reconstruction.
As seen in Figure 7, our training data are obtained: (1) by exploiting temporal correlation, using the data of the same sensor on the same node reconstructed in the previous acquisition cycle; (2) by averaging a maximum of 10 signals reconstructed in the previous acquisition cycles; (3) by using a pseudo-signal obtained by combining the raw data gathered by neighbor nodes; and (4) by using a line-powered node, placed near the compressing node, that provides uncompressed data and is taken as the reference. This last case is fictitious and is taken only as a reference, since it is not always possible to have a line-powered node providing a continuous stream of data, but it is useful to evaluate the recovery when spatially-correlated data are used for reconstruction.
The first result inferred from the simulation output is that exploiting the spatial correlation by using the pseudo-signal as training data is not convenient, since the resulting reconstruction quality is lower than that obtained with the other methods.
In general, a better recovery is achieved when data that are temporally correlated with the signal we want to recover are used as training data. This is particularly true for periodic signals, such as the environmental signals of interest. The best results in the compression range of interest are obtained by using, as training data for the GPSR algorithm, data coming from the same sensor and node but gathered in a previous acquisition cycle. This guarantees the maximum temporal (and obviously, spatial) correlation, giving helpful hints to the reconstruction algorithm to correctly recover the signal.

Figure 7. Quality of the reconstruction varying the training data used in the gradient projection-based sparse reconstruction (GPSR) algorithm.
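One simple way to realize this training-based weighting (a sketch only; the weights η_i are left as free parameters, so the particular rule below is just an example) is to make the ℓ1 penalty on each DCT coefficient small where the previous cycle's reconstruction had large magnitude.

import numpy as np

def weights_from_training(s_prev, eps=1e-3):
    # Coefficients that were significant in the previous acquisition cycle get small
    # weights (weak l1 penalty); coefficients that were near zero get weights close to one.
    mag = np.abs(s_prev)
    return eps / (mag + eps)

# s_prev would be the DCT coefficient vector recovered in the previous acquisition
# cycle for the same sensor and node; the resulting vector is passed as the weight
# vector w to the weighted solver sketched earlier.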
Energetically Optimal Reconstruction
In Sections 4 and 5, we have investigated the compression phase, coming to the conclusion that a sparse measurement matrix is the best compression matrix to save energy during compression. Afterwards, in Section 6, we have shown that, among several reconstruction algorithms and using this sparse measurement matrix, GPSR is the algorithm capable of guaranteeing the highest reconstruction quality. In Figure 8, a graphical review of the best choices in the measurement and reconstruction phases is reported.

Figure 8. With the same nomenclature previously introduced, this plot highlights the different choices in the measurement and reconstruction phases that permit achieving the best reconstruction with the minimum energy expenditure.
Since we have investigated both the power consumption during compression and the reconstruction quality using GPSR, it is possible to address the problem of finding the optimal compression parameters that guarantee good reconstruction quality with the minimum energy expenditure.
In Figure 9, the trade-off between the quality of signal recovery and the power consumption is reported by plotting the ratio between the reconstruction quality and the energy spent in compression, varying the under-sampling ratio ρ for low-rate CS and the compression vector size M for digital CS.
Looking at the plot, we can see that the curves for LR-CS are always above the curves for digital CS, meaning that for LR-CS the compression is energetically cheaper. More precisely, this means that each dB of reconstruction quality is obtained using fewer joules of energy during the compression phase.
Moreover, within the same class of curves, we have a range of compression values M and ρ (between M = 100 and M = 200 for digital CS and ρ = 0.2 and ρ = 0.4 for the low-rate CS) for which the curves present a maximum, identifying the best trade-off between reconstruction quality and power consumption for compression.
Comparing these values with the plots in Figure 6, we can see how in this range, the quality of reconstruction is always > 30 dB, which is a very good reconstruction quality for our goals.
Thus, the low-rate CS with an under-sampling ratio 0.2 ≤ ρ ≤ 0.4 when reconstruction is performed using GPSR with temporally correlated data as training data is able to guarantee an optimal reconstruction > 30 dB with minimum energy used for compression.
Conclusions
In this paper, we have investigated the application of CS with real COTS hardware, and using an energy consumption model, we have evaluated the impact of different kinds of measurement matrices on the power consumption. We have verified that huge differences do exist according to the compression matrix used and that it is not always convenient to compress data with CS when expensive matrices are used in compression.
Even though low-rate CS seems an optimal solution to save energy, different reconstruction algorithms exist that do not always guarantee the same recovery quality. Several of these algorithms have been compared against a set of sub-Nyquist-sampled signals taken from a real dataset. Among all of the algorithms considered (each exploiting a different kind of correlation, either among different nodes or within the node itself), GPSR proved to be the best algorithm for data recovery when temporally-correlated signals are used as training data.
Finally, an optimal under-sampling ratio and reconstruction algorithm have been identified to be capable of achieving the best reconstruction at the minimum energy cost for the compression.
For future work, we want to explore the possibility to extend low-rate CS to perform in-network compression using distributed scalable algorithms for data gathering and reconstruction, moving from a star network to more complex mesh networks. | 2015-09-18T23:22:04.000Z | 2015-07-01T00:00:00.000 | {
"year": 2015,
"sha1": "23d3964711776dd8e4f3cb4a42d841aa439246c4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/15/7/16654/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ab15976df31e463b24db5a639156188a988db85",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Medicine",
"Computer Science"
]
} |
239679587 | pes2o/s2orc | v3-fos-license | State and Unknown Terrain Estimation for Planetary Rovers via Interval Observers
Herein, the problem of state and unknown terrain estimation is considered, where the unknown planetary terrain parameters, e.g., terrain stiffness and ground height, are inferred from how it affects rover motion through vehicle‐terrain interaction. In particular, an alternative framework for terrain estimation based on set‐valued or set‐membership estimation is proposed, where the goal is to find set‐valued estimates (in the form of hyper‐rectangles or intervals) of the states and unknown terrain parameters. For this purpose, a state and model interval observer is designed for partially unknown nonlinear systems with bounded noise. By leveraging a combination of nonlinear bounding/decomposition functions, affine abstractions, and a data‐driven function abstraction method (to overestimate the unknown dynamics model from noisy input–output data), the proposed observer is capable of simultaneously estimating the states and learning the unknown dynamics. Further, a tractable sufficient condition is derived for guaranteeing the stability of the designed observer, i.e., such that the sequence of interval estimate widths are uniformly bounded. When applied to the state and unknown terrain estimation problem, the simulation results indicate that our approach can more reliably find the range of possible terrain parameters when compared with the cubature Kalman filter.
Introduction
A recent survey on slippage estimation and compensation for planetary exploration rovers [1] (also see references therein) reveals that the entrapment and embedding of lunar and planetary exploration vehicles are often due to the lack of understanding of the lunar or planetary terrains. Thus, terrain properties estimation plays a critical role in rover mobility on unknown or uncertain terrains, especially in the context of lunar and planetary exploration. [1] By estimating or identifying terrain parameters, e.g., soil cohesion and internal friction angle, rovers (or off-road vehicles) can reduce the uncertainty in terrain understanding and predict longitudinal and lateral wheel slippage. [2] Moreover, the rover/vehicle control systems, e.g., traction control, stability control, assistive braking, and cruise control, as well as the path planning algorithms can adapt to the current/actual terrain properties [1,3] and as a result, improve their performance and reduce power consumption.
Literature Review
To complement efforts on terrain estimation in mobile robotics from exteroceptive sensors, e.g., light detection and ranging (LIDAR), vision and radar, [4][5][6] several recent works have considered terrain estimation using proprioceptive sensors based on vehicle-terrain (or more precisely, wheel-terrain) interactions. [2,3,[7][8][9] In fact, a related problem of wheel-road friction coefficient estimation has been extensively studied in the automotive field and those approaches are already being commercialized. Nonetheless, these methods are often unsuitable for off-road driving and uncertain planetary terrains. [3] To address this issue, several wheel-terrain interaction approaches for off-road and planetary terrain estimation have been proposed in previous studies [2,3,7,8] that are based on linear regression, discrimination analyses, and Kalman filtering. Specifically, methods based on state and parameter estimation via Kalman filtering have been shown to provide the optimal solution to estimation problems in the automotive and mobile robotics fields when the system dynamics are linear and the noise is Gaussian-distributed, [3,10] whereas extended Kalman filters (EKF) and unscented Kalman filters (UKF) have been used when the models are nonlinear for estimating vehicle states and parameters such as side-slip angles. [11][12][13] More recently, methods based on the cubature Kalman filter (CKF) [14] have also been proposed to further improve the terrain estimation performance. [2] However, these approaches often hinge on the idea that the disturbance inputs caused by terrain irregularity can be modeled as stochastic uncertainties with known statistics, e.g., as a zero-mean Gaussian noise.
In contrast, when bounds on the environmental uncertainties or sensor errors are available (instead of distributional information), set-valued and interval observers have been designed for multiple classes of systems, including linear time-invariant (LTI), [15] linear parameter-varying (LPV), [16,17] Metzler and/or partial linearizable, [18,19] cooperative, [18,20] Lipschitz nonlinear, [21] monotone nonlinear, [22,23] and uncertain nonlinear [24] systems. Moreover, the set-valued observer design framework was recently extended to additionally estimate arbitrary unknown disturbance inputs for LTI, [25] LPV, [26] switched linear, [27] and nonlinear [28,29] systems with bounded-norm noise.
The aforementioned state and parameter estimation approaches often assume that a mathematical dynamic model of the system of interest is known. Hence, when the system model is not exactly known, the unknown dynamics needs to be estimated or learned from data, where learning methods such as support vector machines (SVMs), Bayesian networks and deep learning have been used to model and learn terrain properties. [1,30,31] In particular, a learning/identification technique known as nonlinear set membership prediction, Lipschitz interpolation or kinky inference [32][33][34][35][36] that can capture worst-case generalization errors holds great promise for being integrated into the set-valued estimation framework that we consider in this article for terrain estimation.
Contribution
This article presents a novel approach for state and unknown terrain estimation based on wheel-terrain interaction, using the framework of a nonlinear interval observer/estimator for partially known dynamical systems with bounded noise. Specifically, the proposed approach bridges set-membership state estimation approaches, e.g., in the studies by Mazenc and Bernard, Rassi et al., Efimov et al., Khajenejad and Yong, [15,18,21,26,27] and data-driven function approximation methods, e.g., in the studies by Milanese and Novara, Canale et al., Zabinsky et al., Beliakov, Calliess, [32][33][34][35][36] to design interval-valued observers for nonlinear dynamical systems with bounded noise and partially known dynamics, where the state and observation vector fields belong to a fairly general class of nonlinear functions and the partially known system dynamics contains an unknown function, which can represent dynamic terrain uncertainty.
Similar to stochastic filtering, our recursive interval observer approach consists of a state propagation step to construct framers of system states and unknown parameters (i.e., upper and lower bounds that contain/sandwich the true states and parameters) that are consistent with the dynamic model by leveraging techniques from nonlinear decomposition/bounding functions [29,[37][38][39] and affine abstractions, [40] as well as a measurement update step to tighten the framers based on sensor measurements. In addition, as the system dynamics is only partially known, we adopt the data-driven abstraction approach based on the study by Jin et al. [41] to recursively over-approximate the unknown dynamics function from noisy observation data and interval estimates from the update step. Furthermore, we provide a sufficient condition for stability of our observer, which if satisfied, guarantees that the sequence of interval estimate widths are uniformly bounded. Finally, the effectiveness of the proposed observer is demonstrated on the problem of state and unknown terrain estimation, and our approach is observed to outperform methods based on CKF. Note that a preliminary version of the interval observer design appeared in a conference proceeding. [42] The article is organized as follows. Section 2 describes the wheel-terrain interaction model and formulates a general interval observer design problem for discrete-time partially known dynamical systems with bounded noise. Section 3 designs a state and model interval observer (SMIO) and proves the correctness and stability of our design. Simulation results are presented in Section 4 to validate our algorithm and Section 5 concludes the article.
Preliminaries and Problem Formulation
In this section, we first present the wheel-terrain interaction model we will be using for state and unknown terrain estimation. Then, after presenting some background mathematical ideas, we formally formulate the SMIO design problem whose solution will later be applied for terrain estimation in the simulation section.
Wheel-Terrain Interaction Modeling
Similar to the study by Reina et al., [2] the wheel-terrain interaction model we consider in this article is based on the standard quarter-car model, which consists of two mass bodies, i.e., the vehicle sprung mass m_s and unsprung mass m_ns, with lumped stiffness and viscous friction parameters given by k and c, as shown in Figure 1. The vertical displacements of m_s and of m_ns are represented by y_1 and y_2, respectively. Furthermore, we model the wheel-terrain (or tire-soil) interaction as a spring with a combined stiffness k_tot that represents the equivalent stiffness corresponding to the soil deformability k_s and the tire stiffness k_t, given by

k_tot = k_s k_t / (k_s + k_t)    (1)

where the combined stiffness k_tot is unknown, while the parameters m_s, m_ns, k, and c are known. The equations of motion for the quarter-car model are given by

m_s ÿ_1 + c(ẏ_1 − ẏ_2) + k(y_1 − y_2) = 0
m_ns ÿ_2 + c(ẏ_2 − ẏ_1) + k(y_2 − y_1) + k_tot(y_2 − h) = 0    (2)

where h is the terrain elevation profile, whose unknown variation is assumed to satisfy an unknown function f̃ as follows,

ḣ = f̃(h, v, t)    (3)

to capture the variation of the terrain height, where v is the forward velocity, which is assumed to be known or measured. Moreover, as k_tot is unknown but constant, we adopt a common approach in state and parameter estimation and describe the constant k_tot as a state with zero dynamics, i.e., k̇_tot = 0 (4). Then, defining the state vector accordingly, we obtain the resulting equations of motion for the quarter-car and wheel-terrain interaction model. Due to the presence of disturbance/noise signals and unknown dynamics, the states of this system cannot always be measured directly and hence need to be estimated. Consequently, the problem of state and unknown terrain estimation can be considered as a specific instance of the more general problem of designing interval observers for partially unknown bounded-error nonlinear systems in the form of (8), which will be described below.
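As an illustration of the model, the continuous-time equations above can be discretized with a simple forward-Euler step. In the sketch below, the masses, suspension stiffness, and damping are placeholder values (Table 1 is not reproduced here), while the terrain variation f̃(h, v, t) = sin(h) + 3 sin(vt), the soil stiffness k_s, and the series stiffness combination follow the simulation example given later; the tire stiffness value is also a placeholder chosen for illustration.

import numpy as np

def quarter_car_step(y1, v1, y2, v2, h, k_tot, t, dt,
                     m_s=250.0, m_ns=50.0, k=20000.0, c=1500.0, v_fwd=5.0):
    # Forward-Euler discretization of the quarter-car equations of motion (2)-(3).
    a1 = -(c * (v1 - v2) + k * (y1 - y2)) / m_s
    a2 = -(c * (v2 - v1) + k * (y2 - y1) + k_tot * (y2 - h)) / m_ns
    h_dot = np.sin(h) + 3.0 * np.sin(v_fwd * t)      # unknown terrain variation f~(h, v, t)
    return (y1 + dt * v1, v1 + dt * a1,
            y2 + dt * v2, v2 + dt * a2,
            h + dt * h_dot)

k_s, k_t = 651.1e3, 175.0e3                          # k_t is a placeholder value [N/m]
k_tot = k_s * k_t / (k_s + k_t)                      # series combination, Equation (1)
state = (0.0, 0.0, 0.0, 0.0, 0.0)
dt = 0.01
for step in range(1000):
    state = quarter_car_step(*state, k_tot=k_tot, t=step * dt, dt=dt)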
Preliminary Material
Before formulating the problem, we first introduce some notations, definitions, and related results that will be useful throughout the article.
Notation. ℝ^n denotes the n-dimensional Euclidean space and ℝ_{++} the positive real numbers. For vectors v, w ∈ ℝ^n and a matrix M ∈ ℝ^{p×q}, ‖v‖ ≜ √(v^⊤ v) and ‖M‖ denote their (induced) 2-norms, and v ≤ w is an elementwise inequality. Moreover, the transpose, Moore-Penrose pseudoinverse, (i, j)th element, and rank of M are given by M^⊤, M^†, M_{i,j}, and rk(M), whereas M_{(r:s)} is the submatrix of M consisting of its rth through sth rows, and its row support is r = rowsupp(M) ∈ ℝ^p, where r_i = 0 if the ith row of M is zero and r_i = 1 otherwise, ∀i ∈ {1, …, p}. We call M a non-negative matrix, i.e., M ≥ 0, if M_{i,j} ≥ 0, ∀i ∈ {1, …, p}, ∀j ∈ {1, …, q}.
Definition 1 (Interval, Maximal and Minimal Elements, Interval Width). A (multidimensional) interval ℐ ⊂ ℝ^n is the set of all real vectors x ∈ ℝ^n that satisfy s̲ ≤ x ≤ s̄, where s̲, s̄, and ‖s̄ − s̲‖ are called the minimal vector, the maximal vector, and the width of ℐ, respectively.
As a corollary, if A is non-negative and x̲ ≤ x ≤ x̄, then Ax̲ ≤ Ax ≤ Ax̄.
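This bound can be checked numerically; the sketch below also includes the standard positive/negative-part splitting, which is ordinary interval arithmetic (not a statement taken from the text) and reduces to the corollary when A is non-negative.

import numpy as np

def interval_matvec(A, x_lo, x_hi):
    # Componentwise bounds on A @ x for any x with x_lo <= x <= x_hi.
    A_pos, A_neg = np.clip(A, 0, None), np.clip(A, None, 0)
    lo = A_pos @ x_lo + A_neg @ x_hi
    hi = A_pos @ x_hi + A_neg @ x_lo
    return lo, hi   # for a non-negative A this reduces to (A @ x_lo, A @ x_hi)

A = np.array([[1.0, 2.0], [0.5, 3.0]])   # non-negative example
lo, hi = interval_matvec(A, np.array([0.0, -1.0]), np.array([1.0, 1.0]))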
Definition 3 (Mixed-Monotone Mappings and Decomposition Functions). [37, Definition 4] A mapping f
Note that the decomposition function of a vector field is not unique, and a specific one is given in the study by Yang et al. [37, Theorem 2]: if a vector field q = [q_1^⊤ … q_n^⊤]^⊤ : X ⊆ ℝ^n → ℝ^m is differentiable and its partial derivatives are bounded with known bounds a_{i,j}, b_{i,j}, then a decomposition function can be constructed as in [37, (10)-(13)]. Consequently, for x = [x_1 … x_j … x_n]^⊤ and y = [y_1 … y_j … y_n]^⊤, the decomposition function can be evaluated elementwise. In contrast, when the precise lower and upper bounds, a_{i,j}, b_{i,j}, of the partial derivatives are not known or are hard to compute, we can obtain upper and lower approximations of the bounds using Proposition 3 with the slopes set to zero, or by leveraging interval arithmetic. [43]
Problem Formulation
Consider a partially unknown nonlinear discrete-time system of the form (8) and (9) (cf. the terrain estimation example in Section 2.1), where x_k ∈ X ⊂ ℝ^n is the state vector at time k ∈ ℕ, u_k ∈ U ⊂ ℝ^m is a known input vector, y_k ∈ ℝ^l is the measurement vector, and d_k ∈ D ⊂ ℝ^p is a dynamic input vector whose dynamics is governed by an unknown vector field ϕ(⋅). (Note that if the vector field ϕ(⋅) is partially known, i.e., consists of the sum of a known component and an unknown component, we can simply consider d_{k+1} minus the known component as the output data for the model learning procedure, so as to learn a model of the completely unknown component.) It might be worth emphasizing that d_k can be considered as a dynamic exogenous input, whose dynamics/changes are described by the unknown function ϕ(⋅), which is an unknown mapping from the current values of the state x_k, the known input u_k, the exogenous input d_k, and the disturbance w_k to the exogenous input value d_{k+1} at the next time step.
Moreover, we refer to z_k ≜ [x_k^⊤ d_k^⊤]^⊤ as the augmented state. The process noise w_k ∈ ℝ^{n_w} and the measurement noise v_k ∈ ℝ^l are assumed to be bounded, with w̲ ≤ w_k ≤ w̄ and v̲ ≤ v_k ≤ v̄, where w̲, w̄ and v̲, v̄ are the known lower and upper bounds of the process and measurement noise signals, respectively. We also assume that lower and upper bounds, z̲_0 and z̄_0, for the initial augmented state are available, and that each component ϕ_j(⋅), ∀j ∈ {1, …, p}, of the unknown vector field is known to be Lipschitz continuous. For simplicity and without loss of generality, we assume that the Lipschitz constant L_{ϕ_j} is known; otherwise, we can estimate the Lipschitz constants with any desired precision using the approach in the study by Singh et al., [41, Equation (18) and Proposition 3] which is briefly recapped in Section 3.5. Moreover, we assume that the measurements y_k and the known inputs u_k, ∀k ∈ {0, …, ∞}, are available to the observer.
The observer design problem for a general partially unknown nonlinear discrete-time system in the form of (8) can be cast as follows. Problem 1. Given a partially known nonlinear discrete-time system (8) with bounded noise signals and unknown dynamics ϕ(⋅, ⋅, ⋅, ⋅), design a stable observer that simultaneously finds bounded intervals of compatible augmented states z_k = [x_k^⊤ d_k^⊤]^⊤ and learns an unknown dynamics model for ϕ(⋅, ⋅, ⋅, ⋅).
The solution to Problem 1 will later be applied to the quarter-car and wheel-terrain interaction models described in Section 2 for simultaneously estimating the system states and the unknown terrain parameters, where the combined soil-tire stiffness and the terrain elevation variation function are unknown.
Why Set-Valued Observers?
As mentioned in Section 1, many state and parameter estimation approaches via a Kalman filtering framework have been proposed for terrain estimation based on wheel-terrain interaction, e.g., the studies by Reina and coworkers. [2,3,10] Implicit in this Kalman filtering framework are the assumptions that the disturbance signals caused by terrain irregularity can be modeled as stochastic uncertainties with known statistics (e.g., as a zero-mean Gaussian noise, so that the available statistical information on the model uncertainty can be leveraged in the design process) and that the system dynamics is fully known. By contrast, this article considers the setting where these assumptions are not available or justified. Instead, we consider a distribution-free framework that treats disturbances/noise as nondeterministic and bounded signals, and we assume that the system dynamics is only partially known. Under this setting (with a lack of additional statistical assumptions), Kalman filtering approaches are unfortunately not directly applicable. In contrast, set-valued and interval observers are viable and appropriate approaches when only bounds of the disturbance/noise signals are available and also when only distribution-free bounds on the generalization errors of the model learning methods can be derived.
Recursive Interval Observer
In this section, we briefly recap a three-step recursive interval observer that was introduced in our previous work [42] to solve the state and unknown terrain estimation problem described in Section 2.1, modeled as a partially known system (8). The main idea is to combine a data-driven model learning approach to learn the unknown function ϕ(⋅) with a model-based interval observer approach to estimate the augmented states z_k ≜ [x_k^⊤ d_k^⊤]^⊤ (consisting of the state and the exogenous input). Three main constituents are combined to design the observer structure: a state propagation (SP) step, a measurement update (MU) step, and a model learning (ML) step. In the state propagation step, the interval estimate for the augmented states is propagated one time step forward through the nonlinear state equation and the estimated model of the unknown dynamics function obtained in the previous time step. Then, the update step iteratively updates the compatible intervals of the augmented states, given new measurements and the nonlinear observation function, and finally, the upper and lower framer functions (abstractions) for the unknown dynamics function are estimated in the model learning step. Mathematically speaking, the three observer steps can be described in the above form, where ℱ_p and ℱ_u are to-be-designed interval-valued mappings and ℱ_l is a to-be-constructed function over-approximation procedure (abstraction model), whereas ℐ^{z_p}_k and ℐ^z_k are the intervals of compatible propagated and estimated augmented states, respectively. Moreover, {ϕ̲_k(⋅), ϕ̄_k(⋅)} is a data-driven abstraction or over-approximation model for the unknown function ϕ(⋅) at time step k, i.e., ϕ̲_k(ζ) ≤ ϕ(ζ) ≤ ϕ̄_k(ζ) for all ζ ∈ D_ϕ,
with D_ϕ being the domain of ϕ(⋅) and ζ_k ≜ [z_k^⊤ u_k^⊤ w_k^⊤]^⊤. To avoid the computational complexity of optimal observers [44] while retaining the advantages of intervals, [17] the propagation and update steps return interval estimates, so that the estimation is equivalent to finding the maximal and minimal values of ℐ^{z_p}_k and ℐ^z_k, i.e., z̄^p_k, z̲^p_k, z̄_k, z̲_k. Further, given the sequence of interval estimates up to the current time, at the model learning step we apply the data-driven function abstraction/over-approximation approach developed in our previous work [41] to update and refine the learned/estimated model of the unknown dynamics function ϕ(⋅) at the current time step.
In particular, defining the augmented state as above, our proposed interval observer at each time step k ∈ ℕ consists of the state propagation, measurement update, and model learning steps. In the model learning step, the lower framer of each component of the unknown function is computed as

ϕ̲_{k,j}(ζ_k) = max_{t∈{0,…,T−1}} ( d̲_{k−t,j} − L_{ϕ_j} ‖ζ_k − ζ̃_{k−t}‖ − ε_j^{k−t} )    (24)

with the upper framer ϕ̄_{k,j}(ζ_k) given by the corresponding min-based counterpart in (23), where j ∈ {1, …, p}, {ζ̃_{k−t} = (1/2)(ζ̄_{k−t} + ζ̲_{k−t})}_{t=0}^k, and {d̲_{k−t}, d̄_{k−t}}_{t=0}^k form the augmented input-output data set. At each time step k, the estimated framers gathered from the initial to the current time step construct the augmented data set, which is used in the model learning step to recursively derive over-approximations of the unknown function ϕ(⋅), i.e., {ϕ̲_k(⋅), ϕ̄_k(⋅)}, by applying [41, Theorem 1]. In addition, the propagated state framers at time step k are computed from the decomposition function and the learned model, and the sequences of updated framers {z̲^u_{i,k}, z̄^u_{i,k}}_{i=1}^∞ are iteratively computed, where the gains indexed by q ∈ {f, ϕ}, J ∈ {A, W}, i ∈ {1, …, ∞}, and j ∈ {1, …, p} are to-be-designed observer parameters and matrices of appropriate dimensions at time k and iteration i (given in Theorem 1 and the Appendix), whereas f_d(⋅, ⋅, ⋅, ⋅) is the bounding function (based on (10)), with the purpose of achieving the desired observer properties.
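The model-learning framers are essentially a Lipschitz-interpolation (kinky inference) envelope built from the interval-valued data; the sketch below illustrates the construction, with the upper framer written symmetrically to (24), which is our reading of the approach in [41].

import numpy as np

def kinky_envelope(zeta_query, zeta_data, d_lo, d_hi, lip, eps):
    # zeta_data: (T, q) array of midpoints of past augmented inputs
    # d_lo, d_hi: (T,) arrays of lower/upper output data for one component of phi
    # lip: Lipschitz constant; eps: slack accounting for input interval half-widths
    dist = np.linalg.norm(zeta_data - zeta_query, axis=1)
    upper = np.min(d_hi + lip * dist + eps)   # over-approximation of phi_j at zeta_query
    lower = np.max(d_lo - lip * dist - eps)   # under-approximation of phi_j at zeta_query
    return lower, upper

# Toy usage with synthetic data:
zeta_data = np.random.default_rng(0).random((20, 3))
d = np.sin(zeta_data[:, 0])
lower, upper = kinky_envelope(np.array([0.5, 0.5, 0.5]), zeta_data, d - 0.01, d + 0.01,
                              lip=1.0, eps=0.0)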
Note that the measurement update step is done iteratively, because the tightness of the upper and lower bounding functions for the observation function g (cf. Propositions 2 and 3) depends on the a priori interval ℬ (see the proof of Theorem 2 for more explanation). Hence, if tighter updated intervals are obtained starting from the compatible intervals from the propagation step, we can use them as the new ℬ to obtain better abstraction/bounding functions for g, which in turn may lead to even tighter updated intervals. Repeating this process results in a sequence of monotonically tighter updated intervals, which is convergent by the monotone convergence theorem, and its limit is chosen as the final interval estimate at time k. It is worth mentioning that, in practice, the iterations can be terminated when the improvement is less than a user-specified stopping/threshold criterion, without jeopardizing the correctness and stability properties we will prove in the next sections.
Further, in the model learning step, with the history of obtained compatible intervals up to the current time, given {[z̲_s, z̄_s]}_{s=0}^k as the noisy input data and the compatible interval of unknown inputs, [d̲_k, d̄_k], as the noisy output data, and by leveraging our previous result in the study by Jin et al., [41] an abstraction model is recursively constructed for the unknown function ϕ(⋅) that, by construction, satisfies (45). In other words, it is guaranteed that our model estimation is correct (i.e., is guaranteed to frame/bracket the true function) and becomes more precise with time (cf. Lemma 1).
Correctness of the Observer
The objective of this section is to guarantee the framer property [19] of the proposed SMIO observer by designing the observer gains. In other words, we want to make sure that the observer returns correct interval estimates, in the sense that, starting from the initial interval z̲_0 ≤ z_0 ≤ z̄_0, the true augmented states of the dynamic system (8) are guaranteed to be within the estimated intervals given by (18)-(28). We call {z̲_k, z̄_k}_{k=0}^∞ an augmented state framer sequence for system (8) if the observer is correct. Prior to deriving our result on the correctness of the observer, we state a modified version of our previous result in the study by Singh et al., [40, Theorem 1] in a unified manner that enables us to derive parallel global and local affine bounding functions for the known f(⋅), g(⋅) and unknown ϕ(⋅) vector fields.
Proposition 3 (Parallel Affine Abstractions). Let the entire space be defined as X and suppose that Assumption 2 holds. Consider the vector fields q̲(⋅), q̄(⋅): X ⊂ ℝ^{n0} → ℝ^{m0}, where q̲(ζ) ≤ q̄(ζ) for all ζ ∈ X, along with the Linear Program (LP) in (30), where ℬ is an interval with ζ̄, ζ̲ and V_ℬ being its maximal vector, minimal vector, and set of vertices, respectively, 1_m ∈ ℝ^m is a vector of ones, and σ_q is given in the study by Singh et al. [40] Then, the solution of the LP satisfies

A_q ζ + e̲_q ≤ q̲(ζ) ≤ q̄(ζ) ≤ A_q ζ + ē_q, ∀ζ ∈ X    (32)
Using the aforementioned proposition, we first solve (30) on the entire space X, i.e., with ℬ = X (where the constraint (31) is trivially satisfied and is thus redundant), and obtain a tuple (θ_q, A_q, e̲_q, ē_q) that satisfies (32), i.e., we construct a global affine abstraction model for the pair of functions q̲(⋅), q̄(⋅) on the entire space X.
Next, given the (global) tuple (A_q, e̲_q, ē_q) computed as described earlier, we solve (30) on ℬ subject to (31) to obtain a tuple of local parallel affine abstraction matrices for the pair of functions {q̲(⋅), q̄(⋅)} on the interval ℬ, satisfying the corresponding local inequality for all ζ ∈ ℬ. Now, equipped with all the required tools, we state our first main result on the framer property of the SMIO observer.
Theorem 1 (Correctness of the Observer). Consider the system (8) with its augmented state defined as z ≜ [x^⊤ d^⊤]^⊤, along with the SMIO observer in (18)-(24). Suppose that Assumptions 1-2 hold, f_d(⋅) is a decomposition function of f(⋅), and the observer gains and parameters are designed as given in (6.1). Then, the SMIO observer estimates are correct, i.e., the sequences of intervals {z̲_k, z̄_k}_{k=0}^∞ are framers of the augmented state sequence of system (8), satisfying z̲_k ≤ z_k ≤ z̄_k for all k.
Proof. We will prove this by induction. For the base case, by assumption, z̲_0 ≤ z_0 ≤ z̄_0 holds. Now, for the induction step, suppose that z̲_{k−1} ≤ z_{k−1} ≤ z̄_{k−1}. Then, Propositions 1-3 as well as (8), (20)-(25) and the study by Singh et al. [41, Theorem 1] imply that z̲^p_k ≤ z_k ≤ z̄^p_k. Given this, iteratively obtaining upper and lower abstraction matrices for the observation function g(⋅) based on Proposition 3 and applying Proposition 1, we obtain (34), where ᾱ_{i,k}, α̲_{i,k} are given in (29) and A^g_{i,k} is a solution of the LP in (30), i.e., the parallel abstraction slope for the function g(⋅) at iteration i on the corresponding compatible interval [z̲^u_{i−1,k}, z̄^u_{i−1,k}]. Then, multiplying (34) by A^{g†}_{i,k}, applying Proposition 1, and using the fact that z̲^u_{i−1,k}, z̄^u_{i−1,k} are framers for the augmented state z_k at time k, [45] we obtain z̲^u_{i,k} ≤ z_k ≤ z̄^u_{i,k}, with z̲^u_{i,k}, z̄^u_{i,k} given in (27). Now, note that by construction, the sequences of updated upper and lower framers, {z̄^u_{i,k}}_{i=0}^∞ and {z̲^u_{i,k}}_{i=0}^∞ with z̄^u_{0,k} = z̄^p_k and z̲^u_{0,k} = z̲^p_k, are monotonically decreasing and increasing, respectively, and hence are convergent by the monotone convergence theorem. Consequently, their limits z̲_k, z̄_k are the tightest possible framers, i.e., for all i ∈ {1, …, ∞},

z̲^u_{0,k} ≤ … ≤ z̲^u_{i,k} ≤ … ≤ z̲_k ≤ z_k ≤ z̄_k ≤ … ≤ z̄^u_{i,k} ≤ … ≤ z̄^u_{0,k}

where z̲_k, z̄_k are the updated augmented state framers returned by the observer. This completes the proof. Next, through the following lemma, we show that the abstraction model of the unknown dynamics function becomes tighter (i.e., more precise) over time given correct interval estimates. Hence, our model estimate of the unknown dynamics becomes more accurate as time increases. Lemma 1. Consider the system (8) and the SMIO observer in (18)-(28) and suppose that all the assumptions in Theorem 1 hold. Then, the unknown input model estimations/abstractions are correct, i.e., ϕ̲_k(ζ_k) ≤ ϕ(ζ_k) ≤ ϕ̄_k(ζ_k) for all k, and they become more precise or tighter with time.
Proof. It directly follows from the study by Singh et al. [41, Theorem 1] and Theorem 1 that the model estimates are correct, i.e., ∀k ∈ {0, …, ∞}: ϕ̲_k(ζ_k) ≤ ϕ(ζ_k) ≤ ϕ̄_k(ζ_k). Moreover, considering the data-driven abstraction procedure in the model learning step, note that, by construction, the data set used at time step k is a subset of the one used at time k + 1. Hence, from the study by Singh et al., [41, Proposition 2] the abstraction model satisfies monotonicity, i.e., (45) holds.
Observer Stability
This section investigates the stability of the designed observer, which is first formally defined as follows. Definition 4 (Observer Stability). The SMIO observer (18)-(24) is stable if the sequence of interval widths {‖Δ^z_{k−1}‖ ≜ ‖z̄_{k−1} − z̲_{k−1}‖}_{k=1}^∞ is uniformly bounded, and consequently, the sequence of estimation errors {‖z̃_{k−1}‖ ≜ max(‖z̄_{k−1} − z_{k−1}‖, ‖z_{k−1} − z̲_{k−1}‖)}_{k=1}^∞ is also uniformly bounded.
Remark 1. Note that it is straightforward to observe that the above notion of (global) stability implies that the estimation error system is uniformly bounded-input bounded-state (UBIBS), which is a widely used stability notion for distribution-free approaches. [46][47][48][49][50][51] In particular, a dynamic system is UBIBS if bounded initial states x_0 and bounded (disturbance/noise) inputs u produce uniformly bounded trajectories [46, Section 3.2], i.e., there exist two К-functions σ_1 and σ_2 (a function σ: ℝ_+ → ℝ_+ is a К-function if it is continuous, strictly increasing, and σ(0) = 0) that bound the state trajectory in terms of the initial state and the input bounds. Next, a useful property of the decomposition function given in (10) will be derived, which will be helpful in obtaining sufficient conditions for observer stability.
We are now ready to state our next main result on the SMIO observer stability in the following theorem.
Theorem 2 (Observer Stability). Consider the system (8) along with the SMIO observer in (18)-(28). Let D_m be the set of all diagonal matrices in ℝ^{m×m} whose diagonal entries are 0 or 1. Suppose that all the assumptions in Theorem 1 hold and the decomposition function f_d is constructed using (10). Then, the observer is stable if there exist D_1 ∈ D_{n+p}, D_2 ∈ D_l, D_3 ∈ D_n that satisfy D_{1,i,i} = 0 if r(i) = 1, i.e., if there exist matrices as in (40) such that the condition in (41) holds. Proof. Note that we aim to obtain sufficient stability conditions that can be checked a priori instead of at each time step k. In contrast, for the implementation of the update step, we iteratively find new local parallel abstraction slopes A^g_{i,k} by iteratively solving the LP (35) for g on the intervals obtained in the previous iteration (cf. (29)), with the additional constraints given in (36) included in the optimization problems, which guarantees that the iteratively updated local intervals obtained using the local abstraction slopes are inside the global interval, i.e.
z̲^u_k ≤ z̲^u_{0,k} ≤ … ≤ z̲^u_{i,k} ≤ … ≤ z̄^u_{i,k} ≤ … ≤ z̄^u_{0,k} ≤ z̄^u_k, where (D*_1, D*_2, D*_3) is a solution of the corresponding optimization problem. Consequently, the sequence of interval widths {‖Δ^z_k‖}_{k=1}^∞ is uniformly upper bounded by a convergent sequence. Proof. The proof is straightforward by applying Proposition 1, computing (47) iteratively, using the fact that, by Theorem 2, A(D_1, D_2, D_3) is a stable matrix for the tuple (D_1, D_2, D_3) that is a solution of (41), and using the triangle inequality.
Estimation of Lipschitz Constant
In the previous sections, the Lipschitz constants are assumed to be given. When the constants are not known, they can be estimated from the noisy sampled data set D = {(s̃_j, ỹ_{j+1}) | j = n_y, …, N − 1} as in Equation (62), where s̃_k and ỹ_k are the augmented measured/estimated data input and data output vectors of the unknown function ϕ(⋅) at time step k. This can be considered as an extension of the lazy update rule in the study by Calliess [36, Section 4.3.2] to the case where both the input and output data, i.e., s̃_j and ỹ_{j+1} for all j, are corrupted by bounded noise. The expression in (62) simply follows from the definition of Lipschitz continuity and the use of the triangle inequality.
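A sketch of such a pairwise-slope estimate is given below. The exact noise-correction terms of Equation (62) are not reproduced here, so the plain ratio of output to input increments is used for illustration, combined with the fallback rule of Equation (64) that keeps a large prior constant until enough data have been collected.

import numpy as np

def estimate_lipschitz(s, y, n_min, lip_prior):
    # s: (N, q) array of input data, y: (N,) outputs of one scalar component of phi.
    # Returns the pairwise-slope estimate once at least n_min samples are available,
    # otherwise falls back to the (large) prior constant lip_prior.
    n = len(y)
    if n < n_min:
        return lip_prior
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            gap = np.linalg.norm(s[i] - s[j])
            if gap > 1e-9:
                best = max(best, abs(y[i] - y[j]) / gap)
    return best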
To guarantee the accuracy of L̂_p^(i), which is critical for the results in the previous sections, we proceed to quantify the confidence that we obtain the right estimate with high probability. To achieve this goal, we leverage a classical result on probably approximately correct (PAC) learning for linear separators, which is summarized as follows. Definition 5 (Linear Separators). [53] For Γ ⊂ ℝ × ℝ, a linear separator is a pair (a, b) ∈ ℝ² such that ∀(x, y) ∈ Γ: x ≤ ay + b (63). Proposition 4 (PAC Learning). [53] Let ε, δ ∈ ℝ_+. If the number of sampling points is N ≥ (1/ε) ln(1/δ), where the sample points Γ are drawn from a distribution, then, with probability greater than 1 − δ, a linear separator (a, b) has an error of less than ε, where the error of a pair (a, b) is defined as in [53].
From Definition 5, it is easy to verify that our L̂_p^(i) estimate in (62) is a special case of the linear separator in (63) with b = 0. Thus, the estimate L̂_p^(i) obtained using (62) is guaranteed to be close to the true Lipschitz constant of the original unknown function with high probability if we have sufficient data.
However, owing to the nature of Equation (62), the estimated value of the Lipschitz constant L̂_p tends to be smaller than the true value L_p, especially when the available data is limited, which may lead to a violation of the framer property of our proposed interval observer. To overcome this potential concern, we adopt a heuristic approach to estimate the Lipschitz constant. First, we initialize the algorithm with a prior sampled data set D of size n_0, a large prior constant, and a desired data size N_0, which can be determined based on Proposition 4. Then, at each time step, we expand the data set D with newly observed data and estimate L̂_p using (62). The Lipschitz constant is then selected according to Equation (64): if the size of the current data set reaches the desired data size N_0, the estimated Lipschitz constant L̂_p is kept; otherwise, the large prior constant is used for compensation. Our proposed SMIO with the Lipschitz constant estimation heuristic is summarized in Algorithm 1.
Simulation Results
In this section, we apply the proposed SMIO in Section 3 to the quarter-car and wheel-terrain interaction models described in Section 2 for simultaneously estimating system states and unknown terrain parameters, where the combined soil-tire stiffness and the terrain elevation variation function are unknown.
In particular, the estimation performance of the proposed SMIO is compared with the performance of CKFs, which was the estimator of choice in the study by Reina et al., [2] under the same operating conditions and with some additional statistical assumptions to enable the use of the CKF. The function trackingCKF in MATLAB is used to implement the CKF algorithm, and interested readers are referred to the literature, e.g., the studies by Reina et al. and Arasaratnam and Haykin, [2,14] for more details. In addition, we verify that the stability condition of the interval observer in Theorem 2 is indeed satisfied in this example. The parameters of a typical off-road vehicle (representing a planetary rover) that are used in the simulations are shown in Table 1. The sampling time is chosen to be δt = 0.01 s and the true terrain stiffness is selected to be k_s = 651.1 kN m−1; thus, the true unknown combined stiffness is k_tot = k_s k_t/(k_s + k_t) = 137.93 kN m−1. Further, the true unknown terrain elevation variation function is chosen to be ḣ = f̃(h, v, t) = sin(h) + 3 sin(vt) with v = 5 m s−1, which is treated as a Gaussian-distributed noise in the CKF.
We initialize the simulations with an initial unknown function model obtained with 1000 sample points, choose the compensation constant to be 10, and set the initial augmented state bounds of z accordingly. Moreover, the process and measurement noise signals are assumed to be uniformly distributed with the following bounds: |w_{1,k}| ≤ 0.1897, |w_{2,k}| ≤ 0.1897, |w_{3,k}| ≤ 0.0949, |v_{1,k}| ≤ 2.1213, |v_{2,k}| ≤ 2.1213, |v_{3,k}| ≤ 0.001, |v_{4,k}| ≤ 0.001, |v_{5,k}| ≤ 0.001, |v_{6,k}| ≤ 0.001, and |v_{7,k}| ≤ 0.001.
To compare our method with the CKF approach, we relate the noise bounds to the variance matrices by setting |w_{i,k}| = 3σ^w_{i,k} and |v_{i,k}| = 3σ^v_{i,k}, with a corresponding starting guess for the CKF. As shown in Figures 2-7, the state estimates obtained by the proposed SMIO performed better than the CKF in terms of estimation errors, and the difference is particularly notable for the estimate of the combined tire-soil stiffness, where the percentage error is 6.5% for the SMIO and 34% for the CKF, respectively. Moreover, the SMIO can correctly identify bounds/framers for the states, whereas the CKF does not provide such guarantees and appears to be unstable. The poor performance of the CKF method can be a result of treating the unknown dynamics function and the uniformly distributed noise as Gaussian distributions. In Figure 8, the percentage error for the terrain stiffness estimate is 41.1% for the SMIO and 73.7% for the CKF. The error is relatively large because the nonlinear function that outputs k_s amplifies the estimation error of k_tot.
Further, to verify the correctness of our approach, we first obtain finite-valued upper and lower bounds (horizontal abstractions) for the partial derivatives of f(⋅) using Proposition 3 with the abstraction slopes set to zero, and confirm that the resulting interval estimates are correct. This is shown in Figures 2-9, where the true states and unknown inputs as well as the interval estimates are depicted. In addition, solving the optimization problem in Proposition 3 for the global abstraction matrices, we obtain A_ϕ = [1.14 −0.40 …] (Equation (72)).

Figure 7. Actual value, estimates, and estimation errors of k_tot when using our proposed SMIO versus the CKF.

Next, from the study by Yang et al., [37, (10)-(13)] we obtain C_f = 0_{5×6} when using (10). Consequently, (47) is satisfied and so the sufficient condition in Theorem 2 holds. Thus, as expected, we obtain uniformly bounded and convergent interval estimate errors when applying our observer design, as can be seen in Figure 10, where, at each time step, the actual error sequence is upper bounded by the interval widths, which converge to steady-state values. Further, Figure 9 (right) shows the framer intervals of the learned/estimated unknown dynamics model (depicted by the "kinky" red and blue meshes) that frame the actual unknown dynamics function ϕ(⋅), as well as the global abstraction that is computed via Proposition 3 at the initial step. In summary, the simulation example demonstrates that the proposed SMIO is effective for state and unknown terrain estimation and holds great promise for enhancing the mobility of lunar and planetary vehicles.
Conclusion
In this article, we considered the design of a novel terrain estimation approach based on the framework of an SMIO. Specifically, an interval observer was introduced for partially unknown nonlinear systems with bounded noise. By leveraging a combination of nonlinear bounding or decomposition functions, affine abstractions, and a data-driven function abstraction method (to overestimate the unknown dynamics model from noisy input-output data), the proposed observer is capable of estimating the augmented states and learning the unknown dynamics simultaneously. In addition, we derived a tractable sufficient condition for the stability of the designed observer, i.e., for the uniform boundedness of the sequence of interval estimate widths. Finally, the effectiveness of the proposed interval observer was demonstrated using a terrain estimation problem, where we observed that the estimation performance is better than when using a CKF.
Future research on this subject includes the use of more realistic lunar or planetary rover models, e.g., the rocker-bogie system, and the fusion of the estimated parameters with estimates from exteroceptive sensors to further enhance the estimation of lunar and planetary terrain properties. | 2021-10-21T16:27:18.640Z | 2021-09-05T00:00:00.000 | {
"year": 2023,
"sha1": "a6ae973683905101159d53a025a5947d895c322f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/aisy.202100040",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "bf97318d4e95ce981c647f70d50123cc8dfceffa",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
259183736 | pes2o/s2orc | v3-fos-license | Investigation of a Multistate Outbreak of Listeria monocytogenes Infections Linked to Frozen Vegetables Produced at Individually Quick-Frozen Vegetable Manufacturing Facilities
In 2016, the U.S. Food and Drug Administration (FDA), the Centers for Disease Control and Prevention (CDC), and state partners investigated nine Listeria monocytogenes infections linked to frozen vegetables. The investigation began with two environmental L. monocytogenes isolates recovered from Manufacturer A, primarily a processor of frozen onions, that were a match by whole genome sequencing (WGS) to eight clinical isolates and historical onion isolates with limited collection details. Epidemiologic information, product distribution, and laboratory evidence linked suspect food items, including products sourced from Manufacturer B, also a manufacturer of frozen vegetable/fruit products, with an additional illness. The environmental isolates were obtained during investigations at Manufacturers A and B. State and federal partners interviewed ill people, analyzed shopper card data, and collected household and retail samples. Nine ill persons between 2013 and 2016 were reported in four states. Of four ill people with information available, frozen vegetable consumption was reported by three, with shopper cards confirming purchases of Manufacturer B brands. Two identified outbreak strains of L. monocytogenes (Outbreak Strain 1 and Outbreak Strain 2) were a match to environmental isolates from Manufacturer A and/or isolates from frozen vegetables recovered from open and unopened product samples sourced from Manufacturer B; the investigation resulted in extensive voluntary recalls. The close genetic relationship between isolates helped investigators determine the source of the outbreak and take steps to protect public health. This is the first known multistate outbreak of listeriosis in the United States linked to frozen vegetables and highlights the significance of sampling and WGS analyses when there is limited epidemiologic information. Additionally, this investigation emphasizes the need for further research regarding food safety risks associated with frozen foods.
Federal and state partners have made significant progress in efforts to enhance the detection and investigation of listeriosis outbreaks by standardizing surveillance activities, as well as epidemiologic data collection and sharing. Broader implementation of whole genome sequencing (WGS) has enhanced the ability to identify genetic associations between clinical and non-clinical isolates, assisting in hypotheses development around outbreak sources (U.S. Food and Drug Administration, 2018b). Additionally, WGS has led to the identification of matching isolates that are collected and analyzed months to years apart from each other (Pightling et al., 2018). Investigations of clusters of foodborne illness are most often initiated with the detection of clinical isolates determined to be linked based on molecular subtyping and/or epidemiologic data (U.S. Centers for Disease Control and Prevention, 2018c). For listeriosis cases, food exposure histories are routinely obtained using the standardized Listeria Initiative (LI) case report form, with more focused interview questions developed, as needed, to determine a food or ingredient of interest to focus traceback and/or facility investigations conducted by federal and state regulatory partners (Irvin et al., 2021; U.S. Centers for Disease Control and Prevention, 2016a). However, increasing availability of WGS data for environmental and product samples collected outside of active outbreak investigations can shift the expected sequence of events once a possible cluster of illnesses is detected.
Federal and state health and regulatory partners linked this multistate outbreak of L. monocytogenes infections to the processing environment of Manufacturer A and frozen vegetables sourced from Manufacturer B (both frozen vegetable manufacturers).Based on investigational evidence, Manufacturers A and B conducted voluntary recalls of their products.FDA and CDC both issued public communications during the investigation, which included preliminary findings as well as advice for consumers and businesses to prevent additional illnesses (U.S. Centers for Disease Control and Prevention, 2016b; U.S. Food and Drug Administration, 2016a).This is the first known multistate L. monocytogenes outbreak in the U.S. linked to frozen vegetables and provides a powerful example of the impact WGS can have in foodborne illness outbreak investigations, particularly those that may have initially limited case exposure details.
FDA Facility Investigation
During March 8-11 and 14-17, 2016, in accordance with the domestic food safety compliance program, the U.S. Food and Drug Administration (FDA) initiated surveillance inspections at two independently owned frozen vegetable and fruit processing facilities (Manufacturer A and Manufacturer B), both located in Washington state (Fig. 1).Manufacturer A primarily processed frozen onions that were not sold directly to retail, while Manufacturer B processed a variety of frozen vegetable and fruit products sold at retail under several brand names.During each inspection, FDA investigators assessed facility compliance with current Good Manufacturing Practice (cGMP) requirements found in 21 CFR 110, including, but not limited to, plant construction and design; pest control procedures; building maintenance and design; safety of water; condition and cleanliness of food and non-food-contact surfaces; and employee practices.In addition, FDA investigators collected environmental swabs and finished product samples of onions for Listeria species analysis and documents to support their inspectional observations (U.S. Food and Drug Administration, 2022a, 2022c).
Microbiological investigation
Environmental and product samples collected during FDA's investigation of the manufacturing facilities were cultured for Listeria species at FDA laboratories using standard methods (Hitchins et al., 2011). Environmental samples were collected from Zone 1 (i.e., food-contact surfaces), Zone 2 (i.e., areas directly adjacent to food-contact surfaces), and Zone 3 (i.e., areas immediately surrounding Zone 2) locations within the facility environment per standard methods (U.S. Food and Drug Administration, 2019a, 2020a). Product samples were also collected at both manufacturing facilities. FDA performed pulsed-field gel electrophoresis (PFGE) and WGS on L. monocytogenes isolates recovered from any product or environmental samples.
As part of the outbreak response, the California Department of Public Health (CDPH) and Idaho public health partners under the direction of the Idaho Department of Health and Welfare (IDHW) collected leftover product from the homes of ill people, when available, and performed PFGE and/or WGS on L. monocytogenes isolates recovered during product sample analyses.FDA also analyzed state samples, including WGS and/or the most-probable-number (MPN) method to enumerate L. monocytogenes (U.S. Food and Drug Administration, 2020a).Independent of the outbreak investigation, in April 2016, the Ohio Department of Agriculture (ODA) collected and tested two product samples of frozen corn and frozen green peas (consisting of one 10 oz.bag each) produced by Manufacturer B from a retail location in Ohio as part of routine surveillance sampling that included L. monocytogenes testing.WGS of non-clinical isolates was performed at state public health and FDA laboratories, and WGS data were submitted to federal databases (U.S. Food and Drug Administration, 2020b).
Outbreak Detection
In 2013, CDC, FDA, the U.S. Department of Agriculture's Food Safety and Inspection Service (USDA FSIS), and state health departments initiated real-time WGS subtyping on all available clinical, food, and environmental L. monocytogenes isolates. The sequences and associated metadata of all isolates are uploaded to the Listeria PulseNet national database at CDC, and GenomeTrakr, a public genomic reference database of clinical, food, and environmental isolates from foodborne pathogens (Allard et al., 2016; Timme et al., 2019; Zitz et al., 2011). The sequence data and limited metadata are shared through public databases at the National Center for Biotechnology Information (NCBI) (Jackson et al., 2016; National Institute for Biotechnology Information). CDC's PulseNet, the national molecular subtyping network for foodborne disease surveillance, uses the sequencing data to identify clusters of illness (U.S. Centers for Disease Control and Prevention, 2016c). On March 25, 2016, PulseNet detected a possible listeriosis outbreak comprised of ten isolates with a rare PFGE pattern combination; the cluster included isolates from eight ill people from four states, along with two historical onion isolates from 2014 with limited sample collection details (PFGE analysis was not initially performed). Subsequently, two environmental isolates obtained from Manufacturer A were found to be a match to this listeriosis cluster. All isolates were determined to match by WGS using Single Nucleotide Polymorphism (SNP) analysis carried out by the CFSAN SNP Pipeline (Davis et al., 2015).
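The SNP-based matching described here (and the 0-21 SNP ranges reported later for the outbreak clades) can be illustrated by grouping isolates through single-linkage clustering on a pairwise SNP-distance matrix. The sketch below is not the CFSAN SNP Pipeline or PulseNet workflow; isolate names and distances are invented for illustration only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise SNP distances between isolates (symmetric, zero diagonal).
isolates = ["clinical_1", "clinical_2", "env_A1", "env_A2", "corn_ODA", "peas_ODA"]
snp = np.array([
    [ 0,  4,  6,  7, 55, 60],
    [ 4,  0,  5,  8, 54, 61],
    [ 6,  5,  0,  3, 52, 58],
    [ 7,  8,  3,  0, 53, 59],
    [55, 54, 52, 53,  0,  9],
    [60, 61, 58, 59,  9,  0],
], dtype=float)

# Single-linkage clustering: isolates share a clade if they are connected by a
# chain of pairwise distances, each within the SNP threshold.
threshold = 18  # SNPs, mirroring the <=18 SNP range reported for WGS Clade A
labels = fcluster(linkage(squareform(snp), method="single"),
                  t=threshold, criterion="distance")

for clade in sorted(set(labels)):
    members = [name for name, lab in zip(isolates, labels) if lab == clade]
    print(f"clade {clade}: {', '.join(members)}")
```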
Epidemiologic investigation
Ill people or their surrogates were interviewed using CDC's standard LI case report form, which collects information on the course of illness, demographics, and select food exposures during the month before illness began (U.S. Centers for Disease Control and Prevention, 2018a).Based on the close genetic relationship noted between clinical, historical onion, and the two environmental isolates from Manufacturer A, CDC and state partners developed a focused questionnaire that included detailed questions about onions, frozen foods (including frozen vegetables, fruits, and meals), and deli items.Due to the long shelf-life of frozen food products, available shopper card data were collected for the three months before the person's illness.Consent was obtained from ill people to share shopper card data with state and federal partners.
Facility Investigation Findings
The March 8-11, 2016 inspection of Manufacturer A resulted in multiple observations of sources and routes of food contamination, including: failure to clean food-contact surfaces as frequently as necessary to protect against contamination of food; facility construction not preventing condensate from contaminating food-contact surfaces; food-contact surfaces not adequately cleaned and sanitized to minimize accumulation of food particles and other organic matter that could provide conditions allowing growth of microorganisms; failure to maintain physical facilities in a sanitary condition; and facility construction not allowing adequate cleaning of floors and walls (U.S. Food and Drug Administration, 2016b).
During the March 14-17, 2016 inspection of Manufacturer B, FDA investigators observed that the materials and workmanship of equipment and utensils did not allow proper cleaning and maintenance, and therefore could be potential sources and/or routes of food contamination.There were no other significant observations from the inspection of Manufacturer B.
Microbiological Investigation Findings
Facility Environmental and Product Samples-Eighteen percent of the 106 environmental swabs (n = 19) collected by FDA investigators from Manufacturer A's facility on March 8 and 9, 2016, were positive for L. monocytogenes (Table 1). Of these 19 positive environmental swabs, seven were collected from Zone 1 in the facility's processing and packaging rooms. The seven Zone 1 positive environmental swabs were collected from direct food-contact surfaces, including the chiller water and interior wall of the water chiller; a nylon strip in the tunnel discharge chute between the freezer and the finished product packaging room; and the metal arm on the chain conveyor belt between the freezer and packaging room. Two of the remaining 12 positive environmental swabs were collected from Zone 2 and ten from Zone 3 in the processing and packaging rooms that were in areas adjacent to food and non-food-contact surfaces. One Zone 1 and one Zone 3 environmental isolate collected from Manufacturer A were subsequently found to match the PFGE pattern combination of Outbreak Strain 1 and were found to be a match by WGS to the eight clinical isolates included in this outbreak (see Case Definition and Epidemiologic Investigation). The 17 other L. monocytogenes environmental isolates from Manufacturer A were determined to be three different PFGE pattern combinations, with a distinct sequence that did not match any clinical isolates in GenomeTrakr or PulseNet (Table 1). Two product samples consisting of 20 (4 oz.) subsamples of finished frozen diced onions were also collected during the inspection; no Listeria species were isolated from the product samples or the remaining environmental swabs.
Although none of the environmental swabs collected by FDA during the inspection of Manufacturer B's facility on March 15 and 16, 2016 were positive for L. monocytogenes, 5% of 100 environmental swabs (n = 5) were positive for Listeria innocua, which indicates evidence of conditions that could be suitable for L. monocytogenes (Zitz et al., 2011).Of the five L. innocua positive swabs, two swabs were from Zone 1 and one swab from Zone 2 surfaces of the packing line (collected while the facility was repacking frozen corn), and two swabs were from Zone 3 surfaces of the processing line (collected while the facility was processing onions).A single product sample consisting of 20 subsamples of whole, peeled onions was also collected during the inspection; no Listeria species were isolated from the product samples or remaining environmental swabs collected during this inspection.
State Product Samples and Additional Whole Genome Sequencing (WGS) Analyses-Frozen corn and frozen green peas product samples sourced from Manufacturer B were collected from retail stores by ODA in April 2016 and determined to be contaminated with L. monocytogenes (Table 1). Additionally, one intact (5 lbs.) and one opened (5 lbs.) bag of frozen mixed vegetables, both sourced from Manufacturer B and collected by CDPH from the home of an ill person included in the outbreak, were determined to be positive for L. monocytogenes. The remainder of the CDPH product samples were submitted to FDA for analysis, including enumeration, and both samples were found positive for L. monocytogenes and L. innocua based on FDA analysis; enumeration results were below the detectable limits of the analysis (i.e., <0.3 MPN/g). A sample of frozen baby peas from an opened bag sourced from Manufacturer B and included in the subsequent recall was also collected by Idaho public health from a household of six family members with gastroenteritis compatible with noninvasive listeriosis (Ooi & Lorber, 2005). The sample was confirmed positive for L. monocytogenes; the remainder of this sample was submitted to FDA for WGS analysis (Table 1).
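The enumeration figure quoted above (<0.3 MPN/g) comes from a most-probable-number analysis. As a rough illustration only, and not FDA's BAM procedure, the sketch below recovers an MPN point estimate as the maximum-likelihood concentration given the positive-tube pattern across a dilution series; the tube counts and test portions are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mpn_estimate(volumes_g, tubes, positives):
    """Maximum-likelihood most-probable-number estimate (organisms per gram).

    volumes_g : sample mass tested per tube at each dilution (grams)
    tubes     : number of tubes at each dilution
    positives : number of positive tubes at each dilution
    """
    v = np.asarray(volumes_g, dtype=float)
    n = np.asarray(tubes, dtype=float)
    p = np.asarray(positives, dtype=float)

    def neg_log_likelihood(log_lam):
        lam = np.exp(log_lam)
        prob_pos = np.clip(1.0 - np.exp(-lam * v), 1e-12, 1.0)  # P(tube positive)
        return -np.sum(p * np.log(prob_pos) - (n - p) * lam * v)

    res = minimize_scalar(neg_log_likelihood, bounds=(-15, 15), method="bounded")
    return float(np.exp(res.x))

# Hypothetical three-dilution series: 10 g, 1 g, and 0.1 g tested per tube.
print(round(mpn_estimate([10, 1, 0.1], [3, 3, 3], [3, 1, 0]), 2), "MPN/g")
# When all tubes are negative, the maximum-likelihood estimate is zero and the
# result is reported as below a detection limit that depends on the portions
# tested (the investigation reported <0.3 MPN/g).
```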
In addition to the eight clinical isolates matching Outbreak Strain 1, the two environmental isolates collected from Manufacturer A were found to be a match by WGS to 16 product isolates (SNP difference of ≤18 SNPs) (WGS Clade A; Fig. 2) (Davis et al., 2015).These product isolates included the frozen corn isolate collected by ODA, nine isolates from two household samples of frozen mixed vegetables collected by CDPH, and the household sample of frozen baby peas collected by Idaho public health.WGS Clade A also included the 2014 onion isolates; based upon further review of additional genomic data from GenomeTrakr, three isolates recovered from green beans in 2015 were also included.These sequences were submitted to GenomeTrakr by third-party laboratories and had limited sample collection details.
The isolate recovered from the frozen green peas sample also collected by ODA was a match by WGS to a single clinical isolate from 2016, six product isolates, and 32 environmental isolates (SNP difference of ≤21 SNPs) (WGS Clade B; Fig. 3).The product isolates included three isolates from the two household samples of frozen mixed vegetables collected by CDPH.A review of genomic data from GenomeTrakr identified the additional non-clinical isolates that were included in WGS Clade B, namely green beans (one 2015 isolate), potatoes (one 2010 isolate), and environmental isolates (32 isolates from 2015).These sequences were submitted to the GenomeTrakr by third-party laboratories and had limited sample collection details.
The 17 additional L. monocytogenes positive environmental isolates from Manufacturer A were not found to match any clinical isolates in PulseNet or the GenomeTrakr database but were found to be a match to each other by WGS (WGS Clade C; Fig. 4).
Case Definition and Epidemiologic Investigation-An outbreak case was defined as infection with one of two outbreak strains of L. monocytogenes (Outbreak Strain 1 and Outbreak Strain 2), isolated from a normally sterile site from September 1, 2013, to May 15, 2016, and highly related by WGS (within 0-14 SNPs difference). A total of nine confirmed cases were identified in four states, including California (6), Connecticut (1), Maryland (1), and Washington (1). Ill people ranged in age from 56 to 91 years (median 76 years), and 78% were female. No illnesses were pregnancy-associated. All nine ill people were hospitalized, and three deaths were reported; one death was considered attributable to listeriosis based on determination by state and local health officials. Eight ill people were infected with Outbreak Strain 1, and one with Outbreak Strain 2 (Table 1).
Exposure Information-At the start of the investigation, preliminary exposure information was available for six ill people; three were interviewed with the LI case report form, and three were interviewed with state-developed forms.None of these standard case report forms included questions on exposure to onions, frozen fruits and vegetables, or frozen meals.To evaluate these potential exposures, CDC developed an outbreak-specific focused questionnaire.Two ill people who had been interviewed with the LI case report form, and one ill person who had been interviewed with a state-specific case report form, were reinterviewed with this focused questionnaire.A fourth patient was reinterviewed using an open-ended approach where the interviewer asked about food items on the focused questionnaire.Among the four ill people interviewed with (or using content derived from) the focused questionnaire, three reported consuming frozen vegetables, three reported consuming frozen fruit, and two reported consuming fresh onions.One ill person in California reported cooking frozen vegetables on the stove and storing remaining uncooked product in the freezer; preparation information was not available for any other ill people.Shopper card records for foods purchased in the three months before illness onset were obtained for three ill people.Of these ill people, two purchased two brands of frozen vegetables sourced from Manufacturer B, while the third purchased a variety of frozen vegetable brands and products, including one brand sourced from Manufacturer B; two of the three ill people also purchased one of two brands each of frozen fruits associated with Manufacturer B.
An additional cluster of six illnesses in a family with gastroenteritis compatible with noninvasive listeriosis was identified by Idaho public health officials.Illness onset dates of family members ranged from April 24, 2016, to May 9, 2016.Five of the six ill people reported eating frozen baby peas sourced from, and recalled by, Manufacturer B before their illness began.The frozen baby peas were purchased on February 1, 2016, and the family ate them intermittently over a three-month period, either uncooked or cooked.However, the individuals in this household did not meet the case definition and were not included as confirmed cases in this outbreak, as the only clinical sample collected (a less than ideal stool specimen from the person with the earliest onset date) was negative for L. monocytogenes.
Public Health Response Activities-After being informed by FDA and CDC of a cluster of illnesses that were closely related by WGS to FDA environmental isolates collected from their facility, on April 8, 2016 (within two weeks of the initial cluster detection), Manufacturer A promptly notified downstream customers of their voluntary recall of bulk frozen and fresh onion products manufactured between March 8 and April 8, 2016 (Figure 1).Based on the detection of L. monocytogenes in the retail samples collected and analyzed by ODA, Manufacturer B initiated a voluntary recall of 11 frozen products containing corn and green peas on April 22, 2016.This voluntary recall was expanded on May 2, 2016, to include products manufactured or processed at their facility since May 2014.Ultimately, Manufacturer B voluntarily recalled approximately 450 products related to this outbreak; the expansion of Manufacturer B's recall resulted in one of the largest frozen vegetable recalls in the US, with at least 82,000 tons of FDA-regulated and 23,000 tons of USDA-regulated products removed from the market (U.S.Department of Agriculture Food Safety and Inspection Service, 2016, 2017a, 2017b; U.S. Food and Drug Administration, 2022b).FDA, CDC, and state partners informed the public about investigational findings, public health actions that were taken in response to the outbreak, and measures for consumers to protect themselves through three FDA web posts, two CDC web posts, and a web page created on FoodSafety.gov listing the downstream recalls associated with the expanded recall issued by Manufacturer B. After reviewing inspectional information and initial corrective actions reported by the firm, an FDA warning letter was subsequently issued to Manufacturer A on July 15, 2016, which noted the presence of L. monocytogenes in the facility as being indicative of inadequate sanitation efforts to effectively control pathogens in the facility, and on processing equipment specifically, to prevent contamination of food (U.S. Food and Drug Administration, 2016b).
Discussion
This is the first reported multistate outbreak of L. monocytogenes illnesses associated with frozen vegetables in the United States.Routine and outbreak-directed product sampling during the investigation led to the identification of the outbreak strains in a food product that otherwise would have been challenging to identify through epidemiologic investigation alone.Due to faster identification of a food source, swift public health action was able to be taken, likely saving lives and preventing additional illnesses.This investigation also demonstrates the significant role that environmental and/or product sampling during facility inspections can play as part of a comprehensive food safety system to not only help detect outbreaks but understand why they may have occurred and prevent future illness.
Previous outbreak investigations and studies of Listeria contamination of food processing plants suggest that the pathogen can establish itself in facilities with deficiencies in sanitation practices (Leong et al., 2014;U.S. Food and Drug Administration, 2014b;Vitas & Garcia-Jalon, 2004).The FDA investigation of Manufacturer A provided evidence of multiple potential sources and routes of food contamination; most notably, the facility's failure to clean food-contact surfaces to protect against contamination of food, facility construction not preventing condensate from contaminating food-contact surfaces, and an overall inadequacy of their sanitation practices.Although illnesses were not directly linked epidemiologically to Manufacturer A during the investigation, the sanitation deficiencies observed in the facility of Manufacturer A could have led to contamination of retail products sold by downstream customers of this manufacturer.In addition to detection of Outbreak Strain 1 (Clade A) in the environment of Manufacturer A, WGS analysis of 17 environmental swabs from various surfaces in Zones 1, 2, and 3 detected another strain of L. monocytogenes (Clade C), suggesting extensive contamination throughout the facility.Although this strain of L. monocytogenes was not found to match clinical cases, the investigation and subsequent recall of product by Manufacturer A may have prevented illnesses associated with this particular strain from occurring.The investigation of Manufacturer B also revealed sanitation deficiencies, specifically that the materials and workmanship of equipment and utensils did not allow proper cleaning and maintenance, which also could have resulted in potential sources and/or routes of food contamination.Even though the original source of the L. monocytogenes contamination is unknown, failure to control the pathogen in the processing environment of both Manufacturers A and B is believed to have played a role in contaminating food products.During this outbreak, illnesses occurred over the course of three years, further supporting the hypothesis that contamination persisting in the processing environment was likely an important contributing factor to the outbreak.
Following the completion of this investigation, an outbreak of L. monocytogenes infections linked to frozen corn and other frozen vegetables produced in a single facility occurred in five European Union (E. U.) member states, resulting in 53 reported cases and ten deaths during 2015 through 2018 (European Food Safety Authority, 2018;Koutsoumanis et al., 2020).Investigation findings for the E.U. outbreak also suggested that the pathogen persisted in the processing plant and was transferred to the final product, despite standard cleaning and sanitation practices being conducted, in combination with periods of inactivity in the plant and stock rotations (Koutsoumanis et al., 2020).L. monocytogenes can be recovered from processing equipment that may be difficult to clean and disinfect, and some strains have been found to persist for decades in some food processing environments, thus increasing the risk of contaminating the final product (Buchanan et al., 2017;Tompkin, 2002).
As described in this outbreak, investigations can follow a nontraditional sequence of events, with food and/or environmental isolates detected in tandem with, or prior to, the identification of a possible outbreak, providing early hypotheses about possible outbreak sources instead of hypothesis development about the source arising solely from interviews with ill people (Irvin et al., 2021;Jackson et al., 2016).Therefore, early information provided by genetic relationships between clinical and non-clinical isolates could facilitate the progress of an outbreak investigation.Other investigations involving food or environmental isolates detected before or early in the investigation have been recently reported, including Salmonella spp.infections linked to mayonnaise made using raw eggs (U.S. Food and Drug Administration, 2019b), tahini (U.S. Food and Drug Administration, 2018a), and raw cake mix (Ladd-Wilson et al., 2019), as well as L. monocytogenes infections linked to deli ham (U.S. Centers for Disease Control and Prevention, 2018b) and ice cream (U.S. Food and Drug Administration, 2015).In order to detect important linkages between clinical and non-clinical isolates during an outbreak investigation, regulatory inspection and sampling efforts, as well as increasing publicly available sequence data (including contribution of food and environmental isolates by third parties, such as industry and academia) are critical.However, as the outbreak described here demonstrated, the WGS linkages to two distinct manufacturers alone are insufficient to indicate a causal relationship to clinical illness and should always be interpreted in the broader context of epidemiologic, traceback, and investigational data.
The use of WGS in outbreak surveillance and response can result in federal and state partners identifying more clusters of illnesses, in addition to linking seemingly temporally and geographically dispersed illnesses to contaminated food and facilities, due to the ability to determine genetic relatedness with greater certainty than with PFGE analysis alone (Jackson et al., 2016;U.S. Food and Drug Administration, 2016c).The strength of genetic relationships identified between food and/or environmental and clinical isolates has provided clues to inform epidemiological investigations, provided justification for further follow-up, and has occasionally resulted in the initiation of outbreak investigations (Pightling et al., 2018).In addition, recent investigations of several other listeriosis outbreaks supported by WGS analysis have identified novel food vehicles, which, while known to be at risk for contamination by Listeria, have not traditionally been considered risks for outbreaks, such as stone fruit (Jackson et al., 2015), caramel apples (U.S. Food and Drug Administration, 2014a), and enoki mushrooms (U.S. Food and Drug Administration, 2020c).Genetic relationships between historical food and environmental and clinical isolates can also contribute to the refinement of questionnaires used for interviews of ill people (Jackson et al., 2016;U.S. Food and Drug Administration, 2016c).
Because frozen vegetables and fruits may be consumed without cooking, control of L. monocytogenes in frozen vegetable and fruit production environments is an important component of preventing potential product contamination.Once frozen foods, including frozen vegetables, are contaminated, preparation and/or manner of consumption of these foods may further influence whether infections occur.Although specific information indicating whether the implicated frozen products were cooked or not prior to consumption was not available for the 2015-2018 outbreak in the E.U., investigators suspected some ill people may have eaten thawed products without having cooked them properly or at all (Koutsoumanis et al., 2020).Frozen foods do not typically support the growth of L. monocytogenes, but can still contribute to the risk of listeriosis under certain conditions, such as when cooking instructions are not followed, or when frozen fruit or vegetables are consumed after thawing (e.g., added directly to smoothies or salads) (Zoellner et al., 2019).Frozen vegetables, such as green peas and corn, may be thawed and held refrigerated before consumption, and some people may eat them without cooking or heating, as was noted during this investigation based on follow-up with the suspected cases of illness in Idaho.Holding these foods after thawing for extended periods may allow L. monocytogenes to grow to levels that present a public health concern (Kataoka et al., 2017).Based on the quantitative risk assessment model developed by the European Food Safety Authority (EFSA) following the listeriosis outbreak in E.U. member states, the probability of illness per serving of blanched frozen vegetables for females and males aged 65-74 years was found to be up to 3,600 times greater for products consumed uncooked rather than cooked (Koutsoumanis et al., 2020).Risk assessment modeling related to the contamination of frozen vegetables by Zoellner et al. also noted that for low-level L. monocytogenes contamination of frozen foods that typically do not support bacterial growth, quantifying and understanding consumer handling practices becomes critical (Zoellner et al., 2019).In addition to measures to improve the safety of frozen foods, continued public messaging to raise awareness of the risk of listeriosis from foods, such as frozen vegetables, and consumer education on ways they can reduce that risk (e.g., through proper holding and cooking according to the manufacturer's instructions), can further protect public health.
Analytical results of FDA environmental sampling provided a crucial clue to the possible source of the listeriosis illnesses.In addition, although limited during this investigation, the collection of product samples during listeriosis outbreak investigations for research aimed at enumerating the pathogen in product linked to illnesses can further inform risk and prevalence associated with particular commodities.The only product samples available for enumeration during this investigation indicated results below the detectable limits of the analysis (<0.3 MPN/g); there is insufficient information to indicate whether this finding reflects the dose consumed by those who became ill (which may be suggestive of low infective dose causing illness), as only two samples were enumerated, and consumers may have held products under conditions in which bacterial growth could have occurred.Recent research has demonstrated that low-level contamination in food that may not support growth can still cause listeriosis in highly susceptible populations (Datta & Burall, 2018).A recent review of available literature related to L. monocytogenes prevalence in frozen vegetables noted that few studies reported enumeration of L. monocytogenes in frozen vegetables and, in most of the cases, the numbers were below the limit of enumeration of the plate count procedure applied in each of the studies (Koutsoumanis et al., 2020;Willis et al., 2020).In addition, occurrence of positive samples was noted to vary considerably among facilities (from 5.5% to 46.8%), which also supports the need for further research on prevalence and numbers of L. monocytogenes in frozen vegetables and fruit, but also more robust contamination prevention strategies for such foods.
Investigating outbreaks of listeriosis can be challenging because of difficulties in identifying suspected food items and exposures of ill people to these food items (Marshall et al., 2020).Investigators involved in this outbreak faced additional challenges because food items suspected early in the investigation (e.g., onions, frozen fruit and vegetables) were not included on the standard LI case report form or state questionnaire; this required reinterview of ill people, and five were not able to be contacted for reinterview.Second, these suspected food items are commonly consumed, and could also be ingredients in many frozen or readyto-eat dishes; while three of four ill people reported or had purchase records indicating multiple exposures to frozen vegetables (including brands linked to Manufacturer B), it is still possible that some exposures may not have been remembered.Another challenge in this investigation was limited sample collection details for several non-clinical isolates that were sequenced and submitted to GenomeTrakr before the outbreak was detected that were later found to be a match to the outbreak strains.While offering certain clues, limited information about the specific source and/or type (e.g., frozen, fresh-cut) of non-clinical isolates submitted to Genome-Trakr that are found to be a match to cases of illness can introduce additional complexity to outbreak investigations.In this particular investigation, however, while food isolates with limited collection details were identified early on, FDA and state sampling efforts provided the most compelling laboratory evidence that eventually led to product actions.Additional illnesses associated with this outbreak might have continued to occur without the availability of environmental and product isolates suggesting onions and frozen fruit and/or vegetables as suspect foods at the beginning of the investigation.The suspect food was supported by purchase records identifying specific frozen vegetable and/or fruit items purchased and eaten by ill people obtained later in the investigation.
There were important limitations of this investigation.First, investigators were ultimately unable to determine whether frozen fruit, in addition to frozen vegetables, was a source of illness for people linked to this outbreak.Although three ill people reported eating or had purchase records for frozen fruit, including an ill person who denied eating frozen vegetables, no leftover frozen fruit was available for microbiological testing to determine whether it could also have been contaminated with Outbreak Strains 1 and/or 2. Of note, frozen fruit brands reported by ill persons included two sourced from Manufacturer B, and Manufacturer A was known to process frozen blueberries at least once a month annually.Second, information on how ill people prepared and ate frozen vegetables and/or fruit, which could inform consumer-focused prevention strategies, was extremely limited.
In conclusion, the FDA, CDC, and state and local health agencies collaborated successfully to identify and stop the first reported outbreak of listeriosis associated with frozen vegetables in the United States. Based on the findings of the facility inspections, failure to control the pathogen in the processing environment of both Manufacturers A and B is believed to have played a role, highlighting the importance of proper cleaning and sanitization of food-contact and non-food-contact surfaces to prevent the contamination of food and/or establishment of resident pathogens within the facility environment. Although manufacturers may consider frozen vegetables to be not ready-to-eat and provide cooking instructions, they should take steps to ensure these foods are not contaminated from the processing environment, especially since some consumers may use these products without cooking and/or with inadequate cooking. While labeling of frozen food products was not reviewed during this investigation, consumers may need to be informed to follow a manufacturer's cooking instructions and that consuming undercooked or uncooked frozen vegetables could lead to foodborne illness. An assessment by manufacturers of cooking instructions on frozen food product labels, including accessibility, simplicity, and effectiveness, may also be warranted (Farber et al., 2021). Given evidence of low-dose contamination causing listeriosis outbreaks in highly susceptible consumers (Pouillot et al., 2016), further research into the prevalence and risk of L. monocytogenes in food products such as frozen vegetables, including enumeration studies, is warranted. Finally, this investigation highlights how WGS has become an indispensable tool in outbreak investigations, with broader implementation enabling investigators, in some instances, to identify outbreaks that may not have been otherwise detected prior to when WGS was available as a tool; helping identify novel pathogen-commodity pairs; identifying contamination in food production facilities that may be linked to illnesses over a broad timeframe (which could be suggestive of recurrent contamination); and allowing more efficient allocation of state and federal public health resources (Jackson et al., 2016).

[Figure 1 caption] Confirmed human, food, and environmental isolates, by date of isolation, for which information was reported as of July 18, 2016, and timeline of events that took place during the outbreak response, including the outbreak detection, inspections, recalls, and regulatory actions taken.

[Figure 2 caption] Whole Genome Sequencing Clade A: Phylogenetic tree illustrating the analysis of eight clinical isolates included in this outbreak (red), 11 product isolates sourced from Manufacturer B (green), five product isolates from unknown sources (orange), and two environmental isolates from Manufacturer A (blue) matching Outbreak Strain 1. This tree represents isolates that were included in the database and subsequent analysis at the time of the investigation (SNP range 0:18, mean 6.46).

[Figure 3 caption] Whole Genome Sequencing Clade B: Phylogenetic tree illustrating the analysis of a single clinical isolate from 2016 (red), four product isolates sourced from Manufacturer B (green), two product isolates from unknown sources (orange), and 32 environmental isolates from an unknown source (blue) that matched Outbreak Strain 2 (SNP range 0:21, mean 6.97). This tree represents isolates that were included in the database and subsequent analysis at the time of the investigation; isolates in black were not a match to Outbreak Strain 2, hence not determined to be of significance at the time of the investigation.

[Figure 4 caption] Whole Genome Sequencing Clade C: Phylogenetic tree illustrating the analysis of 17 environmental isolates from Manufacturer A (blue) (SNP range 0:21, mean 6.97). This tree represents isolates that were included in the database and subsequent analysis at the time of the investigation. | 2023-06-18T06:17:07.192Z | 2023-06-14T00:00:00.000 | {
"year": 2023,
"sha1": "f80ac35c10d806ffa65097f34299706f8deeb980",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jfp.2023.100117",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37770bef6614f0ce9d8d160ee3c614a7e95f0a7a",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33101845 | pes2o/s2orc | v3-fos-license | Binding of Myomesin to Obscurin-Like-1 at the Muscle M-Band Provides a Strategy for Isoform-Specific Mechanical Protection
Summary The sarcomeric cytoskeleton is a network of modular proteins that integrate mechanical and signaling roles. Obscurin, or its homolog obscurin-like-1, bridges the giant ruler titin and the myosin crosslinker myomesin at the M-band. Yet, the molecular mechanisms underlying the physical obscurin(-like-1):myomesin connection, important for mechanical integrity of the M-band, remained elusive. Here, using a combination of structural, cellular, and single-molecule force spectroscopy techniques, we decode the architectural and functional determinants defining the obscurin(-like-1):myomesin complex. The crystal structure reveals a trans-complementation mechanism whereby an incomplete immunoglobulin-like domain assimilates an isoform-specific myomesin interdomain sequence. Crucially, this unconventional architecture provides mechanical stability up to forces of ∼135 pN. A cellular competition assay in neonatal rat cardiomyocytes validates the complex and provides the rationale for the isoform specificity of the interaction. Altogether, our results reveal a novel binding strategy in sarcomere assembly, which might have implications on muscle nanomechanics and overall M-band organization.
In Brief
Pernigo et al. analyze the myomesin-dependent integration of obscurin/obscurin-like-1 at the muscle M-band. They discover a mechanism of structural trans-complementation whereby an incomplete immunoglobulin-like domain of obscurin-like-1 assimilates an isoform-specific myomesin interdomain sequence providing mechanical stability.

Accession Numbers
5FM4 5FM5 5FM8

INTRODUCTION
Sarcomeres, the basic contractile units of striated muscles, specialize in force generation through cyclic interactions of myosin and actin filaments. This fundamental activity requires the correct positioning of hundreds of proteins assembled in an overall functional architecture that need to respond to mechanical force in a cooperative, orchestrated way, as well as providing key integration of regulatory signals. The Z-disc and M-band sarcomeric regions (Figure 1A), although not directly involved in the actomyosin complex, are hubs where multiple structural and regulatory proteins are linked (Gautel and Djinovic-Carugo, 2016). In particular, the central M-band, where titin filaments entering from opposite half-sarcomeres overlap, has been proposed as a structural safeguard of sarcomere integrity during force-generation cycles (Agarkova et al., 2003).
Myomesin is a 185 kDa modular protein that localizes exclusively at the M-band, where anti-parallel dimers crosslink myosin filaments (Figure 1B). It is expressed in all muscle types and its knockdown by siRNA results in a general failure in M-band assembly and the formation of disordered sarcomeres (Fukuzawa et al., 2008). Long interdomain alpha-helices at the protein's C-terminus have been suggested to act as strain absorbers enabling myomesin to buffer mechanical forces between molecules during muscle work (Pinotsis et al., 2012; Xiao and Grater, 2014). In addition to a mechanical role, myomesin is also needed for the integration of obscurin and its smaller obscurin-like-1 homolog at the M-band (Fukuzawa et al., 2008). Together with titin's C-terminus, a hotspot for disease-related mutations (Carmignac et al., 2007; Pollazzon et al., 2009), myomesin recruits obscurin and obscurin-like-1 N-termini at the myofibril periphery and core, respectively, establishing a ternary complex (Figure 1B).
Obscurin and obscurin-like-1 share a common immunoglobulin (Ig)-rich modular structure, which, in the case of obscurin, is more extended, featuring additional signaling and protein-binding domains absent in obscurin-like-1 (Fukuzawa et al., 2008). The presence of a non-modular C-terminus able to interact with small ankyrin-1 isoform 5 and ankyrin-2 led to the suggestion that obscurin plays a role in establishing the sarcomere-sarcoplasmic reticulum connection (Bagnato et al., 2003; Kontrogianni-Konstantopoulos et al., 2003). The pathophysiological roles of these proteins are only beginning to emerge. Ablation of obscurin in mice results in changes in longitudinal sarcoplasmic reticulum architecture with alterations in several SR-associated proteins (Lange et al., 2012) as well as marked sarcolemma fragility and reduced muscle exercise tolerance (Randazzo et al., 2013), while its depletion in zebrafish leads to disturbances in the extracellular matrix organization during skeletal muscle development (Raeker and Russell, 2011). The founding member of the obscurin family of proteins is UNC-89 in Caenorhabditis elegans (Benian et al., 1996). unc-89 loss-of-function mutant worms display reduced locomotion, disorganized myofibrils, and lack M lines (Small et al., 2004; Waterston et al., 1980). unc-89 mutants show disorganization of myosin thick filaments by immunostaining (Qadota et al., 2008; Wilson et al., 2012). Drosophila expresses a protein more similar to nematode UNC-89 than to vertebrate obscurin. In Drosophila, RNAi experiments indicate that obscurin is needed for the formation of normal symmetrical sarcomeres (Katzemich et al., 2015). However, fundamental differences exist in the domain patterns and likely functions of the signaling domains in vertebrate, insect, and nematode obscurins/unc-89 members. All obscurin/UNC-89 members contain a constitutively expressed Rho-type GDP/GTP exchange factor domain (GEF) with a preceding Src-homology-3 (SH3) domain, which in insect and nematode obscurin/UNC-89 are situated at the N-terminal end of the proteins, while in vertebrate obscurin, the GEF domain is at the C-terminus. In addition, obscurin/UNC-89 isoforms can contain up to two serine/threonine kinase domains (Katzemich et al., 2012; Spooner et al., 2012). In insect and nematode obscurin, these are catalytically inactive pseudokinases that form scaffolds for the interactions with regulators of sarcomere assembly and/or maintenance (Katzemich et al., 2012), while the two differentially spliced kinases in vertebrate obscurin contain all canonical residues required for catalysis (Fukuzawa et al., 2005) and were reported to be catalytically active in vitro (Hu and Kontrogianni-Konstantopoulos, 2013). Analyzing the molecular interactions and signaling functions therefore requires dedicated approaches for each of these presumptive homologs.

[Figure 1B legend] Modular myomesin, titin, and obscurin/obscurin-like-1 proteins form an intricate M-band network with C-terminal myomesin dimers crosslinking myosin filaments. The inset highlights the interaction between myomesin and obscurin/obscurin-like-1, which has been mapped to the linker sequence (L) located between the myomesin fibronectin (Fn-III) domains My4 and My5 and the third immunoglobulin (Ig) domain of obscurin/obscurin-like-1 (O3/OL3, respectively) (Fukuzawa et al., 2008).
From a pathological viewpoint, obscurin polymorphisms have been linked to hypertrophic cardiomyopathy (Arimura et al., 2007) and dilated cardiomyopathy (Marston et al., 2015), while mutations in obscurin-like-1 have been linked to the rare hereditary growth retardation 3-M syndrome (Huber et al., 2009) with a role in the maintenance of cullin-7 levels (Hanson et al., 2009). Understanding obscurin/UNC-89 functions thus also bears relevance to understanding the impact of pathogenic variants in humans.
To advance knowledge on M-band organization and function, we have previously established the molecular basis for titin:obscurin-like-1 (Pernigo et al., 2010) and titin:obscurin (Pernigo et al., 2015) connection at the M-band. Obscurin and obscurin-like-1 use their homologous N-terminal immunoglobulin-like (Ig) domains (O1 and OL1, respectively) to bind titin's most C-terminal Ig domain (M10) in a mutually exclusive manner and in a unique chevron-shaped anti-parallel Ig-Ig architecture (Pernigo et al., 2010, 2015; Sauer et al., 2010). Mechanically, the M-band titin:obscurin(-like-1) junction is labile, as in single-molecule force spectroscopy experiments both M10:O1 and M10:OL1 complexes yield at forces of around 30 pN (Pernigo et al., 2010). An obvious missing piece in the M-band structural puzzle is the molecular architecture of the obscurin(-like-1):myomesin complex, a key elusive element to understand the global geometry and mechanical stability defining the M-band. Using a multidisciplinary approach encompassing structural techniques, in vivo cellular competition assays, and single-molecule force spectroscopy experiments, we investigated here the myomesin-dependent mechanism of obscurin(-like-1) integration at the M-band.
RESULTS
Human Obscurin/Obscurin-like-1:Myomesin Complex for Structural Analysis
Large muscle proteins are typically modular, featuring several Ig and fibronectin-type-III (Fn-III) domains interspersed by linkers of variable length and structural order. Yeast two-hybrid and biochemical analyses have mapped the obscurin/obscurin-like-1:myomesin interaction to the linker region (L) located between the fourth and fifth Fn-III domains of myomesin (My4 and My5, respectively) and the third Ig domain of either obscurin or obscurin-like-1 (O3 and OL3, respectively) (Figure 1B, inset) (Fukuzawa et al., 2008). O3 and OL3 are highly homologous, sharing 47.2% sequence identity. To produce protein complexes for structural analysis, we initially attempted the expression of isolated domains in Escherichia coli, but failed to obtain soluble O3 or OL3. We therefore decided to pursue a co-expression approach and cloned either O3 or OL3 C-terminal to a GST tag in the first expression cassette of a bicistronic vector, where the myomesin region encompassing the fourth and fifth Fn-III domains (My4LMy5) was cloned in the second expression cassette. This strategy readily produced soluble protein for both constructs, and size-exclusion chromatography (SEC) analysis of GST-cleaved complexes is consistent with the formation of obscurin(-like-1):myomesin heterodimers with a 1:1 stoichiometry (Figure S1). Using a matrix microseeding (MMS) approach (D'Arcy et al., 2014), we successfully crystallized the OL3:My4LMy5 complex and solved its structure at 3.1 Å resolution. X-ray data collection and refinement statistics are given in Table 1. The final model is characterized by excellent statistics and R/Rfree (%) values of 21.6/25.9. While the myomesin My4 domain and its C-terminal linker L, as well as obscurin-like-1 OL3, are well defined in the structure, the entire myomesin My5 domain is invisible in electron density maps. Proteolysis during crystallization does not appear to be the reason for this, as SDS-PAGE analysis of dissolved crystals shows both My4LMy5 and OL3 components at the expected molecular weight (Figure S2). Thus, we conclude that the lack of electron density for My5 is due to its positional disorder in the crystal. As My5 is not visible in the structure, we hereafter refer to the crystallographic complex as OL3:My4L.
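The 47.2% sequence identity between O3 and OL3 quoted above is the kind of figure derived from a pairwise alignment. The sketch below only illustrates how percent identity is typically computed once aligned sequences are in hand; the aligned strings are short hypothetical placeholders, not the real O3/OL3 sequences, and conventions for which columns are counted vary between tools.

```python
def percent_identity(aligned_a, aligned_b):
    """Percent identity over aligned, gapped sequences of equal length.

    Identity is counted over columns where both sequences carry a residue;
    other conventions (e.g., dividing by the full alignment length) exist."""
    if len(aligned_a) != len(aligned_b):
        raise ValueError("aligned sequences must have the same length")
    matches = compared = 0
    for a, b in zip(aligned_a, aligned_b):
        if a == "-" or b == "-":
            continue
        compared += 1
        matches += (a == b)
    return 100.0 * matches / compared if compared else 0.0

# Placeholder alignment fragment (hypothetical residues, not O3/OL3).
seq_a = "PKLEVTQ-PLVAKVGS"
seq_b = "PRLEVSQAPLV-KVGE"
print(f"{percent_identity(seq_a, seq_b):.1f}% identity")
```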
Overall Organization
The OL3:My4L complex is present as a (OL3:My4L)2 dimer of heterodimers in the crystal (Figure 2A). Individual OL3:My4L complexes display a bent dumbbell-shaped structure, in which the myomesin L linker extends away from the My4 domain integrating within the OL3 fold. Two OL3:My4L heterodimers then interlock around a non-crystallographic two-fold axis, giving rise to a dimeric assembly with overall dimensions of 105 Å x 48 Å x 26 Å. Large solvent channels running parallel to the molecular dyad axis are observed in the crystallographic packing (Figure S3). These are compatible with the presence of positionally disordered My5 domains. As the OL3:My4LMy5 complex typically elutes from SEC as a monomeric unit during purification (Figure S1), we analyzed its behavior at concentrations similar to that used for crystallization. In the accessible range of 3.0-8.2 mg/mL (0.082-0.225 mM), we observed the formation of complex dimers in a concentration-dependent manner with an approximately 30:70 dimer:monomer ratio at the highest protein concentration (Figure 2B). Thus, the oligomerization state observed in the crystal reflects that of a population in solution promoted by high protein concentration (~15.9 mM in the crystal).
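The concentration-dependent dimerization described above (roughly 30:70 dimer:monomer at 0.225 mM, and mostly monomeric at typical SEC concentrations) is the behavior expected from a simple monomer-dimer equilibrium. The sketch below solves that equilibrium for a few assumed dissociation constants; the Kd values are purely illustrative, since no Kd for the (OL3:My4L)2 assembly is reported here.

```python
import numpy as np

def dimer_fraction(total_mM, kd_mM):
    """Fraction of protomers incorporated into dimers for 2M <-> D.

    Kd = [M]^2 / [D] and total = [M] + 2[D] (in protomer units); the free
    monomer [M] solves the quadratic 2[M]^2 / Kd + [M] - total = 0."""
    m = (-kd_mM + np.sqrt(kd_mM**2 + 8.0 * kd_mM * total_mM)) / 4.0
    d = m**2 / kd_mM
    return 2.0 * d / total_mM

# Concentration range reported for the SEC experiments (0.082-0.225 mM).
for kd in (0.2, 0.5, 1.0, 2.0):  # hypothetical Kd values, in mM
    lo = dimer_fraction(0.082, kd)
    hi = dimer_fraction(0.225, kd)
    print(f"Kd = {kd:.1f} mM: dimer fraction {lo:.0%} at 0.082 mM, {hi:.0%} at 0.225 mM")
```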
The OL3:My4L Heterodimer
Three distinct structural regions that we identify as My4, LH, and OL3A contribute to the architecture of individual OL3:My4L heterodimers (Figures 3A and 3B). The My4 domain displays the typical Fn-III fold made of seven anti-parallel beta-strands organized in two separate beta-sheets (A-B-E and C-C'-F-G) arranged in a beta-sandwich. C' is rather short, while G is broken in two (G' and G'') and interacts with the beginning and end of the long F beta-strand. C-terminal to My4, the LH spacer region encompasses the first 11 amino acids of L and is formed by a 10.6-Å-long alpha-helix (Pro607-Lys614) followed by a short three-amino-acid peptide (Ser615-Pro617). LH leads the C-terminal portion of L, an 18-amino-acid-long extended stretch divided into two beta-strands (LS' and LS'') that integrate within the OL3 Ig fold. Similar to the Fn-III architecture, the Ig fold is also organized into a beta-sandwich formed by two beta-sheets (A'-G-F-C-C' and LS'/LS''-B'/B''-E-D). As OL3 integrates structural elements of L, we refer to this portion of the complex as the augmented OL3 (OL3A) domain. The distinctively bent geometry of the heterodimer is dictated by the principal axes of My4 and OL3A forming a ~100° angle along the longest dimension of the complex. This, coupled with the 18.2-Å-long LH region (Pro607-Pro617, Cα-Cα distance) that acts as a spacer between the domains, allows for the positioning of the second OL3:My4L complex within the tetrameric assembly (Figure 2A). OL3A is an example of fold complementation (Figure 3C), and the isolated OL3 domain is best described as an incomplete Ig of the intermediate-set (I-set) subfamily (Harpaz and Chothia, 1994). This type of Ig is often found in muscle proteins (Otey et al., 2009) and consists of a total of nine strands arranged into two distinct beta-sheets (A-B-E-D and A'-G-F-C-C'), exhibiting the characteristic discontinuous A/A' strand distributed over both beta-sheets (Figure S4A). In OL3, the A beta-strand that is hydrogen-bonded to B is missing and is replaced by myomesin LS' (Ser618-Thr622), thus re-establishing a complete Ig architecture. A second myomesin strand (LS'', Ile626-Glu630) also hydrogen bonds to B'' at a position that is reminiscent of the A' positioning found in a few deviant I-set Ig domains, identified as the I*-set subtype (Pernigo et al., 2015). Members of this subtype feature a relocation of their A' strand, resulting in the formation of an A/A'-B-E-D beta-sheet (Figure S4B). Thus, OL3A is a complex trans-complemented hybrid I/I*-set Ig fold.
Molecular Interfaces
Two sets of molecular interfaces are present in the crystallographic structure. The first one is involved in the formation of the OL3:My4L heterodimer. An additional set of interactions enables its dimerization. As SEC analysis indicates that in solution the formation of the (OL3:My4L) 2 assembly is promoted by high concentration of the complex ( Figure 2B), this implies that homodimerization is hierarchically secondary to the establishment of the OL3:My4L interface.
As highlighted in the contact maps in Figures 3D and 3E, the OL3:My4L heterodimer is held together by Ig-fold complementation. A mixture of hydrogen bonds and hydrophobic interactions stabilizes the heterodimer (Figure 3F). One edge of the mixed L S′/L S″-B′/B″-E-D β-sheet is engendered by the anti-parallel pairing of L S′-B″ and L S″-B′ β-strands mediated by a total of 11 main-chain hydrogen bonds (in cyan in Figure 3F) connecting L S′/L S″ residues to B′/B″ residues. Side chains also stabilize the complex by hydrogen bonding (in pink in Figure 3F). They typically involve hydroxyl groups of Thr and Ser residues (myomesin S618, T628 and OL3 T328, S330) interacting with main-chain carbonyl oxygen atoms. A single salt bridge connects the carboxylate of myomesin E630 to the amine side chain of OL3 K305. A number of hydrophobic residues are buried upon complex formation. For example, myomesin I625 points its aliphatic side chain in a tight cavity lined by OL3 F265, W279, L304, Y317, C319, V332. Together with myomesin P620, T622, and V627, this residue buries more than 90% of its surface in the interaction, representing a critical determinant for binding. Overall, the OL3:My4L interface area is 1,046 Å².
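The interface area quoted above (1,046 Å²) is the kind of quantity that can be recomputed from deposited coordinates as the surface buried upon complex formation. Below is a minimal Python sketch of such a calculation using the freesasa library; freesasa is our choice for illustration rather than a tool named in the paper, and the input file names are hypothetical placeholders.

```python
# Minimal sketch (not the authors' pipeline): recomputing a buried interface
# area from coordinates with the freesasa Python bindings.  The file names
# "complex.pdb", "partner_A.pdb", "partner_B.pdb" are hypothetical; in practice
# they could be obtained by splitting the deposited structure by chain.
import freesasa

def total_sasa(pdb_path):
    """Total solvent-accessible surface area (A^2) of a PDB file."""
    structure = freesasa.Structure(pdb_path)
    result = freesasa.calc(structure)
    return result.totalArea()

sasa_complex = total_sasa("complex.pdb")     # OL3:My4L complex (assumed file)
sasa_a = total_sasa("partner_A.pdb")         # isolated OL3 (assumed file)
sasa_b = total_sasa("partner_B.pdb")         # isolated My4L (assumed file)

# Total area buried upon association; some tools report this number, others
# report the per-partner average (half of it), so conventions must be checked
# before comparing with a published interface area.
buried = sasa_a + sasa_b - sasa_complex
print(f"buried area: {buried:.0f} A^2, per-partner average: {buried / 2:.0f} A^2")
```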
Molecular Basis for Myomesin Isoform Specificity
The myomesin gene family comprises three MYOM genes in humans (Schoenauer et al., 2008). MYOM1 encodes the ubiquitously expressed myomesin protein, while MYOM2 and MYOM3 encode a fast-fiber isoform called M-protein or myomesin-2 and myomesin-3, a recently identified isoform of slow fibers, respectively. The interaction with obscurin/obscurinlike-1 is limited to myomesin, as neither M-protein nor myomesin-3 shows any appreciable binding (Fukuzawa et al., 2008). Our X-ray structure explains the molecular basis for this specificity. Three myomesin residues mapping onto the L linker (T622, I625, and V627) display side chains that are complementary to the OL3 surface ( Figure 4A). These are not conserved in either M-protein or myomesin-3 and occasionally exhibit rather dramatic amino acid substitutions. For example, myomesin T622 is replaced by a lysine in M-protein, while in myomesin-3 a more polar threonine takes the place of myomesin I625 ( Figure 3E).
To validate the interaction between myomesin and obscurin/obscurin-like-1 in the context of the sarcomere, we generated a number of myomesin variants targeting the L linker and tested them for their ability to compete endogenous obscurin from the M-band. A quantitative analysis of these results is summarized in Figure 4B, while immunofluorescence images of representative experiments are shown in Figures 4C-4G and S6. When overexpressed in neonatal rat cardiomyocytes (NRCs), GFP-My4LMy5 targets the M-band, in addition to other diffuse subcellular localizations, displacing endogenous obscurin (first bar in Figures 4B and 4C). In the case of T622, its replacement with an isosteric valine (T622V) does not significantly alter the wild-type behavior (second bar in Figures 4B and 4D). This is consistent with the lack of hydrogen bonding between the side chain of T622 and OL3 residues contributing to the small receptor cavity (Figures 3F and 4A). However, when T622 is replaced by a lysine (T622K) as in M-protein (third bar in Figures 4B and 4E), or alternatively when I625 is replaced by a threonine like in myomesin-3 (fourth bar in Figures 4B and 4F), competition is essentially abrogated. A similar effect is mediated by the V627Y replacement also found in M-protein (fifth bar in Figures 4B and 4G). As expected, control substitutions targeting myomesin regions not involved in the interface have no effect on the ability to compete endogenous obscurin (Figure S6).
The OL3:My4L Heterodimer Is a Flexible Structural Element
The bent dumbbell shape of OL3:My4L observed in the crystal is stabilized by its homodimeric assembly. As SEC analysis indicates that the complex is predominantly monomeric in solution, we explored whether this geometry is representative of the complex in solution using small-angle X-ray scattering (SAXS). The overall molecular parameters derived from scattering data on OL3:My4L and OL3:My4LMy5 are shown in Figure 5A. A comparison of the experimental radius of gyration R g for OL3:My4L (25.2 ± 2 Å) with that calculated from the structure (28.9 Å) indicates that in solution, the complex adopts a less extended conformation than in the crystal. Accordingly, the scattering pattern computed from the crystallographic model yielded a suboptimal fit (χ = 1.91) to the SAXS data (Figure 5B, upper curve, blue line), suggesting differences in the relative domain arrangement. To investigate the structure in solution, we considered the complex composed of three rigid bodies defined by the My4, L H , and OL3 A structural regions (Figure 3A). A good fit to the scattering curve was obtained with a model that is more compact than that seen in the crystal. We then used this structure as a starting template and, following energy minimization, generated >30,000 additional models (a selection shown in Figure 5C) using the tCONCOORD (Seeliger et al., 2007) algorithm, a computationally efficient method for sampling conformational transitions. Within this large pool, we found ~500 models that provide an excellent fit (χ < 1.0) to the experimental curves (Figure 5B, upper panel, red line). These models all display the L H helix resting on the OL3 A domain, resulting in a less extended conformation compared with the dimer-stabilized crystal structure (a selection shown in Figure 5D). Additional SAXS data measured on OL3:My4LMy5 reveal that inclusion of the My5 domain increases the R g value to 31.0 ± 2 Å (Figure 5A). We again used tCONCOORD to sample the conformational space following addition of an additional Ig domain (My5). Several similar models provide an excellent fit (χ < 1.0) to the scattering curve (Figure 5B, lower curve). We find that the OL3:My4L portion of the complex remains largely invariant, with My5 approximately orthogonal to OL3 A (Figure 5F).
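The ensemble-screening step described above (scoring >30,000 tCONCOORD conformers against the experimental curve and keeping those with χ < 1.0) can be illustrated with the short Python sketch below. It is a schematic re-implementation only: the file names, the three-column data format, and the per-model curve stack are assumptions, and the original analysis relied on ATSAS-family tools rather than this code.

```python
# Schematic re-implementation (assumed data formats, not the ATSAS tools used
# in the study): score each candidate model curve against the experimental
# SAXS curve with a reduced chi after fitting a single scale factor, then keep
# models below the chi < 1.0 cutoff quoted in the text.
import numpy as np

def reduced_chi(i_exp, sigma, i_model):
    """Reduced chi between experimental and model intensities on one q-grid."""
    w = 1.0 / sigma ** 2
    scale = np.sum(w * i_exp * i_model) / np.sum(w * i_model ** 2)  # least-squares scale
    chi2 = np.sum(((i_exp - scale * i_model) / sigma) ** 2) / (i_exp.size - 1)
    return np.sqrt(chi2)

# Hypothetical inputs: a three-column experimental curve (q, I, sigma) and a
# stack of model curves, one row per tCONCOORD conformer, on the same q-grid.
exp_curve = np.loadtxt("experimental_curve.dat")
i_exp, sigma = exp_curve[:, 1], exp_curve[:, 2]
model_curves = np.load("model_curves.npy")

chis = np.array([reduced_chi(i_exp, sigma, m) for m in model_curves])
good_models = np.flatnonzero(chis < 1.0)
print(f"{good_models.size} of {len(model_curves)} models fit with chi < 1.0")
```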
The ability of OL3:My4L to transition from the solution conformation to that observed crystallographically suggests that the L H helix might have a degree of flexibility. We explored this by solving the crystal structure of My4L H (myomesin residues 510-618) in two different space groups (data collection and refinement statistics in Table 1). In space group P6 5 (2.05 Å resolution), all four My4L H independent molecules in the a.u. display clear electron density until residue A608, while residues E609-S618 (L H ) cannot be modeled ( Figure S7A). The same applies for four of six My4L H independent molecules in the alternative P2 1 space group (2.80 Å resolution). However, in the latter crystal structure, crystal contacts stabilize the C-terminal region in the remaining two other My4L H molecules. While in one molecule, L H folds into an a-helix as in the My4L:OL3 complex ( Figure S7B), in the other molecule the C-terminus is in a more extended conformation ( Figure S7C). Overall, SAXS and crystallographic analyses support a model in which interdomain freedom allows the transition (see Movie S1) from a relatively compact solution conformation to an open one that can be stabilized by homodimerization.
Mechanical Stability of the Complex
It is enticing to speculate that the physical connection via swapped secondary structure elements might act as the molecular glue necessary for the mechanical stability of the obscurin(-like-1):myomesin assembly. To probe this, we employed single-molecule force spectroscopy using atomic force microscopy (AFM), and guided by the structure, we fused the C-terminus of the myomesin L linker to the N-terminus of OL3 by an unstructured 43 amino acids connector. This single-chain L-(connector) 43 -OL3 complex was then sandwiched between two ubiquitin (Ub) domains that serve as well-characterized handles (Carrion-Vazquez et al., 2003) ( Figure 6A). The engineered polyprotein enables the unambiguous characterization of the forces required to break the molecular interactions that hold the complex together.
When stretched in our AFM setup at the constant velocity of 400 nm s⁻¹ often employed in these types of studies (del Rio et al., 2009; Garcia-Manyes et al., 2012; Perez-Jimenez et al., 2006), the polyprotein unfolded displaying a saw-tooth pattern with peaks of alternating mechanical stability (Figure 6B). At the beginning of the trace, we identified two mechanical events with associated contour length increments of ΔL 1 = 20.2 ± 1.2 nm (n = 66) and ΔL 2 = 31.0 ± 0.9 nm (n = 68), respectively, followed by the unfolding of the two ubiquitin monomers (ΔL Ub ~24.5 nm), which serve as internal molecular calibration fingerprints (Figure 6B). Interestingly, the observed unfolding pattern does not follow the expected hierarchy of mechanical stability. The first event occurs at a force value of 129.4 ± 27.0 pN (n = 66) while the second one only requires 86.6 ± 29.1 pN (n = 68) (Figure 6C). Both mechanical events are followed by the unfolding of the two Ub monomers, occurring at a higher force of ~200 pN (Carrion-Vazquez et al., 2003). Such an unfolding scenario is hence reminiscent of a molecular mechanism whereby a mechanically labile domain is mechanically protected from the pulling force by a more resilient protein structure (Peng and Li, 2009).
The crystal structure shows that a stretch of 15 amino acids belonging to L lies within the OL3 domain. As the engineered protein connector is 43 residues long, the predicted length increase as a result of L detachment from OL3 is ΔL 1 = (15 + 43) residues × 0.36 nm/residue = 20.88 nm. This value is in agreement with the experimental measurement (ΔL 1 = 20.2 ± 1.2 nm, Figure 6D). The second unfolding event (ΔL 2 = 31.0 ± 0.9 nm) corresponds to the unfolding and stretching of OL3 (89 amino acids). Thus, the single-molecule unfolding trajectories support an unfolding scenario whereby the first high-force event corresponds to the removal of the L linker from the OL3 domain, followed by the unfolding of OL3, occurring at a significantly lower force. To further confirm our molecular hypothesis, we constructed a second polyprotein in which the flexible connector length was lengthened to 64 residues. This new construct confirmed forces of 143 ± 29 pN (ΔL 1 = 27.9 ± 1.5 nm) and 81 ± 22 pN (ΔL 2 = 31.7 ± 1.2 nm), for the detachment of the L latch and OL3 unfolding, respectively. As expected, while ΔL 2 is invariant in the two polyproteins, the longer ΔL 1 is fully consistent with the predicted extension of 28.4 nm ((15 + 64) residues × 0.36 nm/residue) for the longer connector (Figure S8). Our single-molecule nanomechanical experiments thus unambiguously support a molecular organization in which the mechanically labile OL3 domain is protected from force by a more resilient architecture afforded by myomesin L complementation (Peng and Li, 2009).
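The predicted contour-length increments above follow from simple arithmetic, (released residues) × 0.36 nm per residue. The Python snippet below merely restates that calculation for the two connector lengths and compares it with the measured values quoted in the text; it introduces no new data.

```python
# Restating the contour-length arithmetic from the text: the released length is
# (buried latch residues + engineered connector residues) x 0.36 nm per residue.
NM_PER_RESIDUE = 0.36
LATCH_RESIDUES = 15          # myomesin L residues buried within OL3

def predicted_delta_L(connector_residues):
    return (LATCH_RESIDUES + connector_residues) * NM_PER_RESIDUE

for connector, measured in ((43, 20.2), (64, 27.9)):
    print(f"{connector}-residue connector: predicted "
          f"{predicted_delta_L(connector):.1f} nm vs measured {measured} nm")
# Prints 20.9 nm vs 20.2 nm and 28.4 nm vs 27.9 nm, matching the values above.
```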
DISCUSSION
The reason why muscle sarcomeres do not self-destruct during contraction lies in the intricate yet poorly understood cytoskeletal protein networks coordinated by titin at the Z-disk and M-band, which link actin and myosin filaments transversally and longitudinally (Horowits et al., 1986). The M-band network at the center of the myosin filaments is believed to play a key role as a mechanical safeguard during force-generating cycles and as a signaling hub (Agarkova et al., 2003). Compared with the Z-disk, there is currently limited knowledge of this sarcomeric region. The reason for this is 2-fold. On the one hand, although the identity of some key M-band proteins is well established, new components are steadily emerging, suggesting that a much richer complement resides either stably or transiently at this region. For example, cardiomyopathy associated 5 protein (Cmya5 or myospryn) has been recently shown to bind to M-band titin and calpain-3 (Capn3) protease (Sarparanta et al., 2010). Mutations in Capn3 lead to limb girdle muscle dystrophy (LGMD) type 2A, and secondary Capn3 deficiency occurs in LGMD type 2J. Also, a novel leucine-rich protein named myomasp (myosin-interacting, M-band-associated stress-responsive protein; LRRC39) has been detected as an interactor of myosin heavy chain (MYH7), and knockdown of the myomasp/LRRC39 ortholog in zebrafish resulted in severely impaired heart function and cardiomyopathy in vivo. On the other hand, even for known M-band protein components, their complexity is such that their detailed molecular organization is still largely unknown. Thus, advances in our understanding of M-band biology need to address its dynamic proteome, and the mechanical and architectural aspects underpinning its function.

In this work, we explored the myomesin-dependent anchoring of obscurin-like-1 to the M-band and found a mechanism that is new in the sarcomere context. The structure of the obscurin-like-1:myomesin complex reveals that the myomesin L linker between its fourth (My4) and fifth (My5) Fn-III domains integrates within the incomplete third Ig domain of obscurin-like-1 (OL3), resulting in a stable protein complex. The mechanism of fold complementation in-trans observed for the My4L:OL3 complex is somewhat reminiscent of that of subunit-subunit and chaperone-subunit interactions in bacterial pili assembled by the chaperone-usher pathway, whereby the binding partner inserts a β-strand into a partial Ig domain, thus restoring its fold (Remaut et al., 2006). In the case of OL3:My4L, this binding mode provides a surprisingly high mechanical stability to the complex (~135 pN), a rupture force significantly higher than that required to unfold OL3 alone (~85 pN) and quantitatively similar to that exhibited by the myomesin C-terminal dimer (~137 pN) (Berkemeier et al., 2011) required for myosin crosslinking. The high force that the complex is able to withstand contrasts with the mechanical lability (~30 pN) measured for the titin:obscurin/obscurin-like-1 complex between M10:OL1/O1 Ig domains (Pernigo et al., 2010). Such mechanically weaker interaction reflects a completely different structural architecture, based on a parallel β-strand augmentation in an Ig:Ig chevron-shaped zipper module (Pernigo et al., 2010, 2015; Sauer et al., 2010). Interestingly, the low rupture force of the latter interaction is on the order of only about six myosin crossbridges, thus stable anchoring of obscurin-like-1 to the M-band appears to be dependent on its binding to myomesin rather than to titin. Given the high sequence similarity between OL3 and obscurin O3, particularly for the residues involved in the molecular interface with myomesin (Figure 3E), we suggest that the same holds true for obscurin anchoring and that the obscurin:myomesin complex recapitulates OL3:My4L in its binding mode.

[Displaced figure legends: Figure 5: (A) molecular parameters calculated from SAXS data (MM, R g , and D max are the molecular mass, radius of gyration, and maximum size; superscripts exp, xt, ai, and tC denote experimental values and crystal, ab initio, and tCONCOORD models; MM calc is the theoretical MM computed from the protein sequence; χ is the discrepancy between experimental data and those computed from models); (B) experimental scattering intensities for OL3:My4L and OL3:My4LMy5 with curves computed from the crystallographic model (blue), the best tCONCOORD models (red), and the ab initio models (dotted green); (C) 20 OL3:My4L tCONCOORD models (out of >30,000) aligned on My4, My4L in slate blue and OL3 in green; (D) the ten models best fitting the SAXS data (0.78 < χ < 0.81); (E and F) ab initio SAXS envelopes with a representative tCONCOORD model for OL3:My4L and OL3:My4LMy5, respectively. Figure 6: (C) histogram of unfolding forces (first peak 129.4 ± 27.0 pN, n = 66; second peak 86.6 ± 29.1 pN, n = 68); (D) histogram of contour length increase (ΔL 1 = 20.2 ± 1.2 nm, ΔL 2 = 31.0 ± 0.9 nm); colored curves are Gaussian fits.]
This closely mirrors the behavior of N-terminal Ig domains OL1 and O1 that interact with titin M10 in a mutually exclusive manner using a common interface. However, as for OL1 and O1, where minor, yet significant, structural differences suggest different specificities for putative additional partners (Pernigo et al., 2015), we cannot exclude a similar unanticipated behavior for OL3/O3 as well. Interestingly, both OL3 and O3 are insoluble in bacteria when expressed in isolation, while co-expression in the presence of the myomesin L region results in biochemically well-behaved complexes. This suggests a chaperone effect by myomesin, effectively enabling the correct folding of the unconventional augmented O(L)3 A Ig domain. Crucially, removal of the L linker from OL3 A results in a semi-folded state with a significantly decreased mechanical stability, requiring only $85 pN to unfold.
A mechanism of β-strand complementation between linkers or non-structured regions with incomplete Ig domains has also been observed, both in-cis and in-trans, in Ig domains of the actin crosslinking protein filamin. Filamin A can interact with the cytoplasmic tail of integrin 3 via its Ig-like domain 21 (FLNa21), but FLNa21 can also bind to the linker between FLNa20 and 21 in an intramolecular complex that competes with integrin. Intriguingly, the removal of the trans-complemented β-strand from FLNa21 unmasks the binding site for integrin, which, when bound to filamin, is held in an inactive state (Heikkinen et al., 2009; Liu et al., 2015). This specific interaction can be opened by mechanical stretch and triggers binding of integrin, filamin's partner in mechanosensing (Chen et al., 2009; Seppala et al., 2015). The intermolecular domain trans-complementation we observe here for the obscurin(-like-1):myomesin complex might therefore also play a role in mechanosensing, by freeing the O3/OL3 domain for binding to an alternative ligand. As obscurin-like-1 has been linked to ubiquitin-mediated turnover, such a mechanosensing pathway around the obscurin/obscurin-like-1:myomesin complex might feed into the turnover of sarcomere-associated structures (Lange et al., 2012). Myomesin crosslinks myosin filaments and therefore must be exposed at least to some of the shear forces developing transversally to the myosin filament axis, but the extent to which myomesin is directly exposed to mechanical force in vivo remains unknown, not least because the exact orientation with respect to the filament axis can currently be only indirectly inferred, and the geometry of force transmission is therefore unclear. It is also yet unclear in which directionality mechanical forces act on the titin-obscurin interface, which might be relevant based on recent molecular dynamics simulations (Caldwell et al., 2015). However, it is reasonable to speculate that the extremely stable anchoring of obscurin(-like-1) to myomesin is not only structurally important, but has also evolved as functionally relevant out of nanomechanical necessity, supporting the notion that the M-band is a key strain sensor in muscle sarcomeres (Agarkova et al., 2003; Pinotsis et al., 2012; Xiao and Grater, 2014).
The MYOM gene family codes for three proteins sharing a similar Ig/Fn-III-rich domain organization. Our OL3:My4L structure offers a clear structural basis for the specificity of obscurin(-like-1) binding to the myomesin-1 isoform that was validated by competition assays in the relevant cellular context of NRCs. Interestingly, the OL3:My4L complex also reveals interdomain flexibility and the ability to dimerize. The dimeric arrangement observed in the crystal and in solution at high protein concentration opens the possibility that this geometry might reflect the local obscurin(-like-1):myomesin organization in the crowded environment of the sarcomere. The M4/M4 0 lines typical of striated muscles define a hexagonal arrangement of myosin filaments in the super-lattices of most vertebrates. Antibody mapping experiments suggested that the N-terminal region of myomesin runs roughly perpendicular to the myosin filament, since My1 and the L loop are only 7 nm apart from the M1 line (Obermann et al., 1996). Thus, it is conceivable that myomesin molecules emanating from neighboring myosin filaments of the hexagonal lattice cross over at the level of the L linker as seen in the OL3:My4L dimer (Figure 7). The intrinsic flexibility of the complex monomer coupled with the presence of the helical spacer at the L N-terminus appears perfectly poised for this. This suggestion is compatible with previous M-band models (Lange et al., 2005) but adds a novel geometric constraint. In summary, our work provides a necessary structural and biomechanical reference to establish the geometrical context and mechanical hierarchies in M-band assembly, which will need to be reconciled with more highly resolved in situ information of this protein network and its response to mechanical stress.
EXPERIMENTAL PROCEDURES
Detailed methods used for cloning, protein expression, and protein purification are given in the Supplemental Experimental Procedures.
Crystallization
An initial vapor-diffusion sparse matrix screening performed using the sittingdrop setup with the aid of Mosquito crystallization robot (TTP LabTech) produced hundreds of OL3:My4LMy5 microcrystals in the presence of 1.1 M ammonium tartrate (pH 7.0) and a 1:2 protein:reservoir ratio. The protein concentration used in the screen was 4.0 mg/mL in storage buffer (20 mM HEPES, 150 mM NaCl, 1 mM DTT [pH 7.5]). A standard pH-precipitant grid optimization allowed us to obtain fewer marginally larger crystals in the presence of 0.8 M ammonium tartrate, 0.1 M sodium acetate (pH 5.5) using a 1:1 protein:reservoir ratio. These crystals, however, proved unsuitable for diffraction experiments. To further improve crystal quality, we employed the random MMS screening approach (D'Arcy et al., 2014). Crystals obtained in the optimization step were harvested and stored in a solution containing 0.9 M ammonium tartrate, 0.1 M sodium acetate (pH 5.5) (hit stock). A new sparse matrix screening was performed using various commercial screens using a hit stock:protein:reservoir ratio of 1:2:1. Few OL3:My4LMy5 single crystals were finally obtained in the presence of 20% PEG8000, 0.1 M Tris-HCl (pH 8.5), 0.2 M MgCl 2 using the protein complex at 3.0 mg/mL. Crystallization of My4L H is described in the Supplemental Experimental Procedures.
X-Ray Data Collection and Structure Determination
Crystals were cryo-protected by soaking them in their reservoir solution supplemented with 20% glycerol. For OL3:My4LMy5 a 3.1 Å resolution dataset was collected in space group C2, while My4L H crystallized in the alternative space groups P6 5 and P2 1 , yielding diffraction data at 2.05 Å and 2.8 Å resolution, respectively. All datasets were collected at the Diamond Light Source synchrotron facility (Didcot, Oxfordshire, UK) and processed with the xia2 expert system (Winter et al., 2013) using the XDS (Kabsch, 2010) and AIMLESS (Evans and Murshudov, 2013) packages. All X-ray structures were solved by the molecular replacement method with the package MOLREP (Vagin and Teplyakov, 2010) and refined using the programs REFMAC5 (Murshudov et al., 2011) and BUSTER (Bricogne et al., 2011). A summary of data collection and refinement statistics is shown in Table 1. Further details on the crystallographic methods are available in the Supplemental Experimental Procedures.
Cellular Competition Assays in NRCs and Ratiometric Analysis
NRC isolation, culture, transfection, and staining were performed essentially as described previously (Pernigo et al., 2010). Briefly, NRCs were transfected with GFP-tagged transiently expressing constructs (pEGFPC2-, Clontech Laboratories) using Escort III (Sigma-Aldrich). After 48 hr culture to promote protein expression, cells were fixed with 4% paraformaldehyde/PBS, permeabilized with 0.1% Triton X-100/PBS, and then stained with the appropriate antibodies. The antibodies used for the current work were as follows: MyB4, a mouse monoclonal antibody to the myomesin domain My12 (Grove et al., 1984); and Ob5859, a rabbit polyclonal antibody to two consecutive Ig domains in obscurin, Ob58 and Ob59 (Fukuzawa et al., 2005;Young et al., 2001). All fluorescent-conjugated secondary antibodies were purchased from Jackson ImmunoResearch. All images for ratiometry analysis were collected on a Zeiss LSM510 confocal microscope as described previously (Fukuzawa et al., 2008). Image analysis was carried out as described in our previous work (Pernigo et al., 2015). Further details are available in the Supplemental Experimental Procedures.
Small-Angle X-Ray Scattering
Synchrotron SAXS data for OL3:My4L and OL3:My4LMy5 were collected at the BM29 BioSAXS beamline (ESRF, Grenoble) using a Pilatus 1M detector (Dectris) (Pernot et al., 2013). All samples were measured at four concentrations (0.5-4.5 mg/mL in 20 mM HEPES [pH 7.5], 500 mM NaCl, 1 mM DTT buffer) in the range of momentum transfer 0.005 < s < 0.608 Å⁻¹ (s = 4π sin θ/λ, where the wavelength λ is 0.9919 Å and 2θ is the scattering angle). All experiments were performed at 18 °C using a sample volume of 30 µL loaded into the flowing measurement cell. Individual frames were processed automatically and independently within the EDNA framework (Brennich et al., 2016). Merging of separate concentrations and further analysis steps were performed using a combination of tools from the ATSAS package (Petoukhov et al., 2012). Initial rigid body modeling of the complex was done with CORAL (Petoukhov et al., 2012) and domain dynamics of the protein complexes was further explored by generating conformational ensembles using the tCONCOORD (Seeliger et al., 2007) method. Further details are available in the Supplemental Experimental Procedures.
Single-Molecule Mechanical Experiments by Atomic Force Microscopy
cDNA was commercially synthesized (Genscript), which allowed the expression of a polyprotein in which the myomesin linker L and the obscurin-like-1 OL3 domain are connected by a flexible 43-amino-acid-long connector sandwiched between two ubiquitin (Ub) domains (Ub-L-connector-OL3-Ub). The synthetic gene was inserted into a pQE80L vector (QIAGEN) using standard molecular biology techniques. A PCR-based approach also allowed the extension of the connector length to 64 amino acids. Single proteins were picked up from the surface and pulled at a constant velocity of 400 nm s⁻¹ (del Rio et al., 2009; Garcia-Manyes et al., 2012; Perez-Jimenez et al., 2006). Further details are available in the Supplemental Experimental Procedures.
ACCESSION NUMBERS
Atomic coordinates for the X-ray structures presented in this article have been deposited with the PDB under accession codes PDB: 5FM4, 5FM5, 5FM8. | 2018-04-03T04:35:46.820Z | 2017-01-03T00:00:00.000 | {
"year": 2017,
"sha1": "ea44dc813f25b8a2b42c0e3ee3c8f598b831b383",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S0969212616303574/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a68c8cf0d9e0149451a177f2edd034bc16f4165",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
251726159 | pes2o/s2orc | v3-fos-license | The first complete genome sequence and pathogenicity characterization of fowl adenovirus serotype 2 with inclusion body hepatitis and hydropericardium in China
Since 2015, fowl adenovirus (FAdV) has been frequently reported worldwide, causing serious economic losses to the poultry industry. In this study, a FAdV-2, namely GX01, was isolated from liver samples of chickens with hepatitis and hydropericardium in Guangxi Province, China. The complete genome sequence of GX01 was determined about 43,663 base pairs (bp) with 53% G+C content. To our knowledge, this is the first FAdV-2 complete genome in China. There was a deleting fragment in ORF25 gene. Phylogenetic analysis based on the hexon loop-1 gene showed that GX01 is most closely related to FAdV-2 strain 685. Pathogenicity experiment of GX01 in 3-day-old and 10-day-old specific-pathogen-free chickens showed that although no mortality was observed within 21 days post infection (dpi), strain GX01 significantly inhibited weight gain of infected chickens. Moreover, FAdV-2 was still detectable in the anal swabs of infected chickens at 21 dpi. Necropsy analysis showed that the main lesions were observed in liver, heart, and spleen. Of note, hepatitis and hydropericardium were observed in the infected chickens. In addition, massive necrosis of lymphocyte was observed in spleen of infected 3-days-old chickens. We concluded that FAdV-2 strain GX01 is capable of causing hepatitis and hydropericardium, which will make serious impact on the growth of chickens. Our research lays a foundation to investigate the molecular epidemiology and etiology of FAdV.
KEYWORDS: fowl adenovirus serotype 2, complete genome, pathogenicity, weight gain, viral shedding

Introduction

Fowl adenoviruses (FAdVs), non-enveloped, linear double-stranded DNA viruses with a genome of 43-45 kbp in size, are common and harmful pathogens in chickens (1). FAdVs belong to the family Aviadenovirus. On the basis of phylogeny, genome organization and the lack of significant cross-neutralization (2, 3), they are currently divided into five species (FAdV-A to FAdV-E) and 12 serotypes (FAdV-1 to FAdV-8a and FAdV-8b to FAdV-11) (4). FAdV-1 belongs to the species FAdV-A; FAdV-5 belongs to the species FAdV-B; FAdV-4 and FAdV-10 belong to the species FAdV-C; FAdV-2, 3, 9, and 11 belong to the species FAdV-D; FAdV-6, 7, 8a, and 8b belong to the species FAdV-E (1, 2, 5).
In our study, we successfully isolated a FAdV-2 strain GX01 from commercial broiler with hepatitis and hydropericardium. The complete genome sequence of GX01 was determined and characterized. The pathogenicity of GX01 was evaluated in 3day-old and 10-days-old Specific Pathogen Free (SPF) chickens. To our knowledge, this work reports the first FAdV-2 complete genome sequence in China. The pathogenicity experiment showed that FAdV-2 strain GX01 is capable of causing hepatitis and hydropericardium. Moreover, GX01 significantly inhibited weight gain and caused viral shedding for a long period of time in the infected chickens. We concluded that the isolated GX01 may have serious impact on the production of the poultry industry, especially the broiler industry.
Sample collection and virus isolation
In July 2020, the liver samples were collected from 20-day-old commercial broilers with hepatitis and hydropericardium in Guangxi Province, China. Total RNA and DNA were extracted using the Tianlong Nucleic Acid Extraction & Purification Kit T180H (Tianlong Technology Co., Ltd., China). The extracted DNA was subjected to polymerase chain reaction (PCR) amplification to detect FAdV (26). Other pathogens, including avian influenza virus (AIV), Newcastle disease virus (NDV), infectious bursal disease virus (IBDV), infectious laryngotracheitis virus (ILTV), chicken infectious anemia virus (CIAV), avian leukosis virus (ALV), Marek's disease virus (MDV), egg drop syndrome virus (EDSV) and avian reovirus (ARV), were detected by PCR or reverse transcription-polymerase chain reaction (RT-PCR) assays. The positive PCR product was sequenced, and the obtained nucleotide sequence was compared against the NCBI database by BLAST. The primers used for detecting the pathogens are shown in Supplementary Table 1 and were synthesized by Sangon Biotech (Guangzhou, China). The liver sample was homogenized in phosphate buffered saline (PBS) to obtain a 10% tissue suspension. After three freeze-thaw cycles, the suspensions were centrifuged at 12,000 × g at 4 °C for 10 min and then filtered through a 0.22-µm pore-size sterile filter (Millipore, Bedford, MA, United States). The filtered solution was inoculated into leghorn male hepatocellular (LMH) cells and propagated for three passages (27, 28). The infected LMH cells and mock LMH cells were sent to Guangzhou Sevier Biotechnology Co., Ltd., China for transmission electron microscopy (TEM) analysis.
TCID 50 and growth curve assay
The 50% tissue culture infective dose (TCID 50 ) assay was performed as previously described (29). Viral cytopathic effect (CPE) was observed for approximately 5 days. The virus titer was calculated as TCID 50 according to the Reed-Muench method (30). The virus growth kinetics were determined as previously described with a few modifications (31). Briefly, the supernatant and infected LMH cells were harvested at 0, 12, 24, 36, 48, 60, 72, 84, and 96 h post infection (hpi). After three freeze-thaw cycles, the virus titers at each time point were determined by TCID 50 assay.
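The Reed-Muench titer calculation cited above interpolates between the two dilutions that bracket 50% infection after cumulating positive and negative wells. A generic Python sketch follows; the dilution scheme and well counts in the example are invented for illustration and are not data from this study.

```python
def reed_muench_log10_tcid50(log10_dilutions, infected, total):
    """Log10 TCID50 per inoculated volume by the Reed-Muench method.

    log10_dilutions: e.g. [-1, -2, -3, ...] ordered from most concentrated to
    most dilute; infected/total: positive wells and wells tested per dilution.
    """
    uninfected = [t - i for i, t in zip(infected, total)]
    # Cumulative infected: this dilution plus all more dilute ones.
    # Cumulative uninfected: this dilution plus all more concentrated ones.
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    cum_uninf = [sum(uninfected[: i + 1]) for i in range(len(uninfected))]
    pct = [ci / (ci + cu) * 100 for ci, cu in zip(cum_inf, cum_uninf)]

    # Find the pair of dilutions bracketing 50% infection and interpolate.
    for i in range(len(pct) - 1):
        if pct[i] >= 50 > pct[i + 1]:
            prop_dist = (pct[i] - 50) / (pct[i] - pct[i + 1])
            endpoint = log10_dilutions[i] + prop_dist * (
                log10_dilutions[i + 1] - log10_dilutions[i]
            )
            return -endpoint  # titer expressed as log10 TCID50
    raise ValueError("50% endpoint not bracketed by the tested dilutions")

# Illustrative (invented) example: six wells per ten-fold dilution.
print(reed_muench_log10_tcid50([-1, -2, -3, -4, -5],
                               [6, 6, 4, 1, 0],
                               [6, 6, 6, 6, 6]))   # ~3.36
```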
Determination of complete genome
Based on published sequences of FAdVs, a set of primers were designed to determine the complete genome sequence of GX01 (Supplementary Table 2). And the primers were synthesized by Sangon Biotech (Guangzhou, China). The fragments of GX01 were amplified by PCR assay. The amplified fragments were determined by sequencing (Shanghai Sangon Biotechnology Co., Ltd., China). Finally, all sequences were assembled using the SeqMan of the DNASTAR software package (version 7.1, Madison, WI, United States). The open reading frames (ORFs) of complete genome sequence were annotated using the Snapgene software (version 2.3.2, United States).
Phylogenetic analysis
Based on the sequences of GX01 complete genome and hexon loop-1 gene and amino acid sequences of hexon, fiber, and DNA polymerase gene, the homology analysis of GX01 was carried out with other FAdV-D strains reference sequences using the MegAlign of the DNASTAR package. Moreover, Phylogenetic trees were constructed based on the sequences of DNA polymerase amino acid and hexon loop-1 gene, respectively, by the maximum likelihood method in MEGA 7.0 software (32). Bootstrap values were determined based on the original data from 1,000 replicates.
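The homology analysis above reduces to pairwise percent identity over aligned sequences (computed in the study with the MegAlign module of DNASTAR). As a schematic stand-in for that step, the Python snippet below scores identity between two pre-aligned sequences; the example strings are placeholders, not FAdV sequences.

```python
# Toy illustration only: percent identity between two pre-aligned sequences.
def percent_identity(aligned_a, aligned_b):
    """Identity over alignment columns where neither sequence has a gap."""
    assert len(aligned_a) == len(aligned_b), "sequences must come from one alignment"
    compared = matches = 0
    for x, y in zip(aligned_a.upper(), aligned_b.upper()):
        if x == "-" or y == "-":
            continue
        compared += 1
        matches += x == y
    return 100.0 * matches / compared

# Invented aligned fragments used purely to exercise the function.
print(f"{percent_identity('ATGGC-ATTACG', 'ATGGCGATTACC'):.1f}% identity")
```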
Animal experiment and ethics statement
Twenty 3-days-old and 20 10-days-old SPF leghorn chickens (Guangdong Dahuanong Animal Health Products Co., Ltd., China) were raised in separated negative-pressure isolators and randomly divided into two groups, respectively. Chickens in the challenge groups were inoculated intravenously with dose of 0.2 ml (1.0×10 7 TCID 50 /ml) virus. The control groups were inoculated intravenously with an equal volume of DMEM/F12 basic medium (Gibco, Australia). All chickens were monitored daily for clinical signs. The anal swabs of all chickens were collected from 0 to 21 dpi. The weight of all chickens was recorded weekly. At 21 dpi, all chickens were humanely euthanized. The liver, heart, spleen, and kidney were collected. And the weight of spleen was recorded. The spleen indexes were calculated by the spleen (milligram, mg) / body weight (gram, g). For histopathological examinations, samples were fixed in 4% paraformaldehyde. The remaining samples were stored at−80 • C. The animal experiment protocol used in this study was approved by and performed under the guidance of the Committee on the Ethics of Animal Experiments of Institute of Animal Health, Guangdong Academy of Agricultural Sciences Experimental Animal Welfare Ethics Committee on 8 November, 2021 (Approve ID: SPF2021027). All efforts were made to minimize animal suffering.
Real-time quantitative PCR
Total DNA of anal swabs and tissue samples were extracted. Quantitative real-time PCR analysis was carried out using SYBR Green master mix (Roche, United states). Absolute quantitative real-time PCR was conducted as described previously with small modifications (33). Briefly, 200 microliter (µl) was taken from anal swab to obtain DNA in 60 µl volume. 200 mg was taken from tissue sample to obtain DNA in 60 µl volume. And then 1 µl was taken to conduct real-time PCR. Primers were designed according to the conserved 52K gene of FAdVs (Supplementary Table 3) (33). A plasmid containing the 52K gene of FAdV-2 strain was used to construct standard curve in each reaction.
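Absolute quantification as described above converts sample Ct values into copy numbers through a standard curve built from known dilutions of the 52K-gene plasmid. The Python sketch below shows the generic conversion; all Ct values and standard-curve points in it are invented for illustration and are not data from this study.

```python
import numpy as np

# Hypothetical standard curve: ten-fold plasmid dilutions with known copies.
std_log10_copies = np.array([7, 6, 5, 4, 3, 2])
std_ct           = np.array([14.1, 17.5, 20.9, 24.3, 27.8, 31.2])

# Fit Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(std_log10_copies, std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency

def copies_from_ct(ct):
    """Convert a sample Ct into estimated template copies per reaction."""
    return 10 ** ((ct - intercept) / slope)

for ct in (22.0, 26.5):                    # invented sample Ct values
    print(f"Ct {ct}: ~{copies_from_ct(ct):.2e} copies per reaction")
print(f"slope {slope:.2f}, efficiency {efficiency:.1%}")
```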
Virus isolation and identification
In July 2020, chickens on a chicken farm in Guangxi Province were suffering from severe hepatitis and mild hydropericardium. The liver samples tested positive for FAdV but negative for AIV, NDV, IBDV, ILTV, CIAV, ALV, MDV, EDSV, and ARV. After inoculating the filtered tissue supernatant into LMH cells, the infected LMH cells became detached, rounded, and strongly refractile at 48 to 72 hpi (Figure 1A). According to the growth curve of strain GX01, the virus titer increased dramatically from 12 hpi to 24 hpi, reached a plateau at 24 hpi, and peaked at 72 hpi at approximately 1.0 × 10 7 TCID 50 /ml (Figure 1B). Moreover, virus particles were observed by TEM, displaying a spherical shape and latticed distribution (Figure 2). These results indicated that the virus was successfully isolated. The isolated virus was designated GX01.
Complete genome analysis
To obtain the complete genome sequence of strain GX01, all the sequences were assembled together. Finally, we obtained a 43,663 bp viral genome sequence of strain GX01, with 53% G + C content. And the complete genome sequence contained 36 ORFs ( Figure 3A). The complete genome sequence is deposited in GenBank under the accession number ON014843. In addition, there was a deleting fragment in ORF25 gene ( Figure 3B). The two coding sequences (CDS) of GX01 ORF25 gene are contiguous. The ORF25 gene of other FAdV-2 strains has a non-coding sequence in the middle of the CDS at both ends.
Phylogenetic analysis and sequence comparisons of strain GX01
According to the species demarcation criteria of the International Committee on Taxonomy of Viruses (ICTV), the species designation of FAdVs depends on the distance matrix analysis of the DNA polymerase amino acid sequence. Phylogenetic tree based on amino acid sequence of DNA polymerase showed that GX01 was clustered within the species FAdV-D ( Figure 4A). In addition, the hexon loop-1 gene is recognized as a gene that distinguishes serotypes of FAdVs (4).
Pairwise comparisons were performed to determine the sequence identities. The results showed that GX01 shared 90.5-97.9% identity in the complete genome sequence with other FAdV-D strains.
Pathogenicity assessment
In order to assess the pathogenicity of GX01, we used it to challenge chickens. All of the chickens in the challenge groups showed clinical signs of depression and anorexia at 1 to 3 dpi, whereas the chickens in the control groups did not show any clinical signs. Although none of the chickens died during the study, the weight gain of infected 3-day-old chickens was significantly lower than that of the control group at 7, 14, and 21 dpi. The average weight of control chickens reached 245 g at 21 days, while the average weight of infected chickens was only 159 g (Figure 5A). The weight gain of infected 10-day-old chickens was significantly lower than that of the control group at 14 and 21 dpi. The average weight of control chickens reached 334 g at 21 days, while the average weight of infected chickens was only 261 g (Figure 5B).
At 21 dpi, necropsy results showed that livers of the challenge groups exhibited obvious petechial hemorrhage (Figures 6A,B), and the pericardium showed yellowish effusion (Figures 6D,E). In addition, the spleen indexes showed that the spleens in the challenge groups were significantly larger than those of the control groups (Figure 5C). Histological examination showed that massive infiltration of inflammatory cells was observed in the liver and kidney of infected chickens (Figure 7A). Typical IBH was observed in the livers of the 3-day-old and 10-day-old challenge groups (Figure 7B). Lesions in the spleen were characterized by massive lymphocyte necrosis in the red and white pulp. Absolute quantification of FAdV-2 in the organs of the euthanized chickens showed that FAdV-2 was detectable in the heart, liver, spleen, and kidneys, with viral loads still present at 21 dpi.
Viral shedding analysis
To determine viral shedding of GX01 in the 3-days-old and 10-days-old SPF chickens, we collected the anal swabs of all chickens from 0 to 21dpi. The virus genome was detected in anal swabs by quantitative real-time PCR assays (Figure 8). At 1 dpi, FAdV-2 in infected 3-days-old and 10-days-old groups has been detected about 1.26 × 10 2 copies and 1.93 × 10 2 copies, respectively. The viral shedding of infected 3-daysold chickens increased sharply at 3dpi and peaked at 5dpi (5.42 × 10 6 copies), then it dramatically declined. The viral shedding of infected 10-days-old chickens increased sharply at 3dpi and peaked at 4dpi (1.82 × 10 6 copies), then it declined slowly. At 21 dpi, FAdV-2 in infected 3-days-old and 10-days-old groups was still detectable approximately 2.64 × 10 2 copies and 1.26 × 10 2 copies, respectively. No FAdV-2 was detected in the negative control chickens throughout the experiment.
Discussion
In recent years, FAdVs have been frequently reported worldwide, causing huge economic loss to the poultry industry (34). Co-infection of multiple serotypes was observed in different regions, such as FAdV-2 and 8b in South Africa (18,19), FAdV-2, 8a, 8b, and 11 in Asia, Europe and North America (2,15,21,(35)(36)(37). In China, FAdVs are transmitted both vertically and horizontally. The prevalence is dominated by serotype 4, 8a, 8b and 11 (23). Co-infection has also been reported in southern China (23). In this work, we successfully isolated a FAdV-2 strain GX01 and obtained the complete genome sequence of the virus.
Up till now, only two complete genome sequences of FAdV-2 are documented in GenBank. They are FAdV-2 strain 685 and FAdV-2 strain SR48. FAdV-2 strain 685 was isolated from United Kingdom. FAdV-2 strain SR48 has already been confirmed to be FAdV-11 (2). Besides, our study confirmed the unassigned species FAdV-D strain GA/1358/1995 to be a FAdV-2, based on phylogenetic analysis of hexon loop-1 gene. The hexon gene of FAdVs was related to the classification of serotypes, especially the hexon loop-1 gene (4,26,35). This gene is sufficiently variable to ensure species identification and additional differentiation of the currently recognized 12 serotypes (38-40). Moreover, the ORFs of GX01 were almost identical to the other reported FAdV-2 strains by alignment with reference sequences, suggesting that the genome sequence of FAdV-2 was relatively conserved. However, there are some differences in ORF25 gene of GX01. The ORF25 gene of other strains in species FAdV-D is composed of two discontinuous CDS (5). In our study, the two CDS of GX01 ORF25 gene were contiguous, indicating that some non-coding sequences of GX01 were deleted in natural evolution. Moreover, the sequence of the first CDS of GX01 ORF25 gene was quite different from the other known FAdV-2 strains. To date, the function of FAdVs ORF25 gene is still unclear (25,41,42). It may be associated with evolution and mutation of the virus.
The pathogenicity of FAdV-2 is not well-understood, especially with regard to the effects on weight gain and viral shedding. In our study, although none of the chickens died throughout the experiment, we found the weight gain of infected chickens was significantly inhibited than that of control groups. It means that the growth of chickens is affected by FAdV-2 strain GX01, which will cause significant economic losses to poultry industry. Moreover, the viral shedding increased sharply, indicating strain GX01 has a strong ability to proliferate in chickens. Furthermore, FAdV-2 was still detectable in anal swabs at 21 dpi, indicating viral shedding lasted for a long period of time after infection, which will produce significant crossinfection in production and endanger healthy flocks.
FAdV-2 has only been reported to cause IBH (16). HHS was related with FAdV-4 and 11 (12,43). Interestingly, in addition to severe hepatitis, hydropericardium was also observed in the infected chickens in our study. Thus, GX01 was able to cause HHS in both clinical and experimental cases. To our knowledge, this is the first report in which hydropericardium were associated with the FAdV-2. The volume of hydropericardium caused by FAdV-4 can reach as much as 15 ml, while the volume of that caused by FAdV-2 strain GX01 and FAdV-11 was 1 to 2 ml (43,44). We suggested that the pathogenicity of FAdV-2 strain GX01 may be diverse.
In recent years, FAdV-4 has been proven to have immunosuppressive potential, which caused structural and functional damage of immune organs via apoptosis along with induction of severe inflammatory responses (45)(46)(47). Other serotypes of FAdVs have also been reported to cause immune system damage in chickens, such as FAdV-8b, and FAdV-11 (48)(49)(50). In our study, not only the spleen indexes of challenge groups were significantly higher than that of the control groups, but also massive necrosis of lymphocyte was observed in spleen of infected 3-days-old chickens. These result revealed that GX01 may also be an immunosuppressive pathogen. The immunosuppression potential of FAdV-2 is required for further investigation.
Conclusions
In our study, a FAdV-2 strain, GX01, was successfully isolated from commercial broilers with HHS in Guangxi Province, China. The first complete genome of FAdV-2 in China was determined and characterized, which not only increased the knowledge of its molecular characteristics, but also enriched the understanding of FAdV-2 diversity. Moreover, the pathogenicity experiment in SPF chickens showed that GX01 significantly inhibited weight gain in infected chickens and caused viral shedding that lasted for at least 21 dpi. Furthermore, FAdV-2 strain GX01 is capable of causing HHS. We concluded that this virus may become a severe threat to the poultry industry. Therefore, further studies of the pathogenic mechanism and vaccine development for FAdV-2 are needed to obtain more insights for the prevention and control of this disease.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.
Ethics statement
The animal study was reviewed and approved by the Committee on the Ethics of Animal Experiments of Institute of Animal Health, Guangdong Academy of Agricultural Sciences Experimental Animal Welfare Ethics Committee on 8 November, 2021 (Approve ID: SPF2021027).
Author contributions
ZX and JZ designed this study and critically revised the manuscript and performed the experiments, data analysis, and drafted the manuscript. ZX, MS, QZ, YH, JD, and LL participated in sample collection, virus isolation, and animal experiment. SH and ML participated in the coordination and manuscript revision. All authors read and approved the final manuscript.
Funding
This work was funded by the Special Fund for Scientific Innovation Strategy-Construction of High Level Academy.
"year": 2022,
"sha1": "dd95d3a348f5ad02d74661716f23b941d208d018",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "dd95d3a348f5ad02d74661716f23b941d208d018",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": []
} |
243789280 | pes2o/s2orc | v3-fos-license | Epithelial-mesenchymal transition is the main driver of intrinsic metabolism in cancer cell lines
A fundamental feature of cancer cells is genomic heterogeneity. It is a main driver of phenotypic differences, including the response to drugs, and therefore a key factor in therapy selection. Motivated by the increasing role attributed to metabolic reprogramming in tumor development, we wondered how genomic heterogeneity affects metabolic phenotype. To this end, we profiled the intracellular metabolome of 180 cancer cell lines grown in similar conditions to exclude environmental factors. For each cell line, we estimate activity for 49 pathways across the whole metabolic network. Upon clustering of activity data, we found a convergence into only two major metabolic types. These were further characterized by 13C-flux analysis, lipidomics, and analysis of sensitivity to perturbations. These experiments revealed differences in lipid, mitochondrial, and carbohydrate metabolism between the two major types. Finally, a thorough integration of our metabolic data with multiple omics data revealed a strong association with markers of epithelial-mesenchymal transition (EMT). Our analysis indicates that in absence of variations imposed by the microenvironment, the metabolism of cancer cell lines falls into only two major classes despite genetic heterogeneity.
Over the past two decades, altered metabolism has re-emerged as a prominent hallmark of cancer 1,2. Beyond the seminal example of aerobic glycolysis 3, multiple examples of dysregulated pathways and novel essential reactions have been presented 4-6 and gave rise to tailored therapeutic opportunities. A key lesson in oncology that also extends to metabolism is that tumors are heterogeneous and, therefore, their sensitivities to drug or genetic treatments can differ greatly. In the context of metabolism, a main driver of heterogeneity is the tumor microenvironment. Previous studies have demonstrated the relevance of oxygenation and cancer-specific nutrient utilization 7,8, which give cancer cells a unique growth advantage. The second, intrinsic driver of heterogeneity is the genetic makeup of tumor cells, which varies between and within tumors. Mutations in coding sequences or regulatory regions and alterations in copy number may affect gene expression and the activity of proteins and, hence, enzymes. Mutations result in granular differences in pathway utilization, some of which provide a fitness advantage for tumor growth.
Because of the large 183 number, we focused on genes that could be related to each other (identified using the String 184 database 29 to find known interactions or shared biological processes among protein coding 185 genes). Two negative associations were identified between type 1 and CDP-diacylglycerol 186 synthase 1 (CDS1) and lysophosphatidic acid phosphatase type 6 (ACP6). Since both target 187 genes which encode for enzymes involved in the biosynthesis of glycerophospholipids, it may 188 suggest a reduction in lipid synthesis in type 1. We identified positive associations between type 189 1 and THBS1 expression. As previously observed, THBS1 was less methylated in type 1 and is 190 more expressed in type 1, hence consolidating its association to type 1. Finally, we found 27 191 proteins significantly associated to some subtrees. Of note, p53 levels were lower in type 1 192 compared to the other types. P and E-cadherin, classical markers of epithelial cells 30 had 193 significantly lower levels in type 1.
We hypothesized that the observed, robust metabolic types might be driven by common regulatory 195 mechanisms. Therefore, we tested whether transcriptional factor activity and signaling pathways 196 are associated with pathway score clusters. For 743 transcription factors (TFs), we assessed 197 whether differential genes were overrepresented in known TF-targets 31 . We identified 115 198 associations between TFs and the clustered metabolic phenotypes. Several interesting hits were 199 linked to the major types 1 and 1B, middle cluster which bears many similarity with type 1: HIF1A, 200 previously associated with aggressive tumor phenotypes, treatment resistance, and poor clinical 201 prognosis 32 ; TP63, known to regulate migration, invasion, and in vivo pancreatic tumor growth 33 ; 202 and SNAI1, involved in EMT induction. Further, we tested the activity of 14 signaling pathways 34 . 203 We identified uniquely a positive association between TGF-beta signaling and TRAIL with type 1 204 and 1B. Recent findings highlighted the potentially aberrant consequence of TRAIL activation in 205 promoting cell motility and metastasis 35 and of TGF-beta in promoting cell migration and tissue 206 invasion 36,37 . 207 Association between EMT and metabolic types 208 Several of the significant associations pointed to an increase of metastasis-related processes, i.e. 209 EMT. To directly test this hypothesis, we used the EMT score proposed by Rajapakse et al. 38 . It 210 is based on gene expression of known EMT markers to quantify the potential of invasiveness and 211 metastasis formation of cancer. A high EMT score is associated with epithelial state and a low 212 EMT score to mesenchymal state. In our dataset, we could confirm that type 1 and 1B were linked 213 with the mesenchymal state, and type 2 with the epithelial state (last line, Figure 3). We validated 214 the putative EMT association experimentally. We selected representative cell lines of the two main 215 metabolic types 1 and 2, and stained the canonical EMT markers vimentin and E-cadherin using 216 immunofluorescence (Supp. Figure 2). In line with the expectations, the mesenchymal marker 217 vimentin was higher in type 1 (p-value < 1 × 10 -3 , Student t-test), and the epithelial marker E-218 cadherin was higher in type 2 (p-value < 1 × 10 -3 , Student t-test). Microscopy analysis also 219 highlighted the expected morphology differences. Type 1 cells featured spindle-like shapes 220 resembling fibroblasts, whereas type 2 portraited rounded regular shapes, consistent with EMT 221 progression. Altogether, gene expression data, immunostaining and morphology substantiate the 222 link between the main observed metabolic types and EMT state. 223 Differences in metabolic pathway activity unraveled by 13 C tracing 224 The association analysis provided novel leads on the regulatory differences that characterize the 225 main metabolic types identified by activity scores but failed to expand our understanding of the 226 metabolic differences. For instance, only sporadic associations were found for enzyme levels or 227 their expression and, therefore, it is not possible to draw robust hypotheses on nutrient utilization 228 or nutrient fluxes. To directly assess differences in pathway usage between the major metabolic 229 types, we used 13 C-labeling experiments. The goal was to assess whether conserved differences 230 in fluxes could be identified between type 1 and 2. 
Given the generic preference of cancer cell lines for glucose and glutamine, we grew nine representative but diverse cell lines of the two types in media enriched with either [U-13C]glucose or [U-13C]glutamine for 48 hours. Upon metabolite extraction from cells, we used mass spectrometry to measure 13C-enrichment in metabolites and, in turn, to quantify their fraction labeling (FL), in short, their differences in 13C labeling. Given the experimental design, in which a single substrate is labeled, the FL of each detectable metabolite informs on the fraction of carbon that originated from either glucose or glutamine. To highlight differences in carbon fluxes between types 1 and 2, we computed the difference in FL between the averages of the two groups (Figure 4A for the example of [U-13C]glucose). This allowed ranking all metabolites according to FL differences. To consolidate the results at the level of biochemical pathways, we tested for enrichment in both tails of the ranked metabolite list. For the example of TCA cycle metabolites, the 11 detected metabolites were mostly ranked towards type 2, resulting in a significant enrichment (q-value < 0.01, hypergeometric test).

On [U-13C]glucose, the vast majority of pathways of primary metabolism exhibited higher 13C-labeling in type 2 cells (Figure 4B). This indicates that more glucose is used to replenish central carbon metabolism, amino acids, nucleotides, and fatty acids. In contrast, type 1 cells showed a slight enrichment in glucose-derived 13C in the pathways related to carbohydrate metabolism and storage, which are often conflated because of the numerous isomers that cannot be resolved analytically. The [U-13C]glutamine tracer revealed fewer differences between the two types, mostly because the measured FL were low in general. This indicates that glutamine-derived carbon is only a minor fraction of the total carbon assimilated for biosynthesis.
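The fraction-labeling comparison and pathway-level tail enrichment described here can be sketched in a few lines; the data layout (cell lines by metabolites tables of FL values), the fixed tail size, and the function names below are assumptions for illustration rather than the authors' implementation.

```python
import pandas as pd
from scipy.stats import hypergeom

def fl_difference(fl_type1: pd.DataFrame, fl_type2: pd.DataFrame) -> pd.Series:
    """Mean fraction-labeling difference (type 2 minus type 1) per metabolite.
    Both inputs: rows = cell lines, columns = metabolites, values = FL."""
    return (fl_type2.mean(axis=0) - fl_type1.mean(axis=0)).sort_values()

def tail_enrichment(ranked: pd.Series, pathway_members: set, tail_size: int = 50) -> float:
    """Hypergeometric p-value for over-representation of a pathway's metabolites
    in the tail of the ranked FL-difference list that is shifted towards type 2."""
    universe = set(ranked.index)
    members = pathway_members & universe
    top = set(ranked.index[-tail_size:])          # most type 2-shifted metabolites
    overlap = len(top & members)
    # population size, successes in population, number of draws
    return hypergeom.sf(overlap - 1, len(universe), len(members), tail_size)
```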
Alterations in lipid metabolism between metabolic types
Multiple lines of evidence suggested that the main metabolic types might differ in lipid metabolism. To validate this finding, we analyzed the lipidome of seven representative cell lines for both main types 1 and 2 by LC-MS/MS. We could detect and quantify 305 lipid species (Figure 5) and found that most lipid classes were slightly but reproducibly more abundant in type 1 cell lines (Figure 5A and B, Supp. Figure 3). Despite their minor contribution to total lipids, we stress that total cardiolipin (p-value < 0.01) and phosphatidylglycerol (p-value < 0.05) contents were higher in type 2 (Figure 5C). These lipids constitute the membrane of mitochondria and, therefore, this suggests an increase of mitochondrial mass in type 2. Conversely, lipids associated with the extracellular membrane such as phosphatidylserines (p-value < 1 × 10⁻³) and sphingomyelins (p-value < 0.01) were higher in type 1, which is consistent with the spindle-like morphology that requires increased membrane surface. The classes of ether phosphatidylcholines and triacylglycerols were characterized by remarkable within-class shifts (Supp. Figure 3). We hypothesized that these rearrangements could affect structural properties of membranes more than the difference in total abundance. When comparing the species by double-bond number and acyl chain length, we found that triacylglycerols had shorter chain lengths in type 1 cell lines (Supp. Figure 4A) and the opposite for ether phosphatidylcholines (Supp. Figure 4B). This could point at differences in synthesis compared to uptake in these two classes. Type 2 cell lines had generally higher levels of lipid unsaturation (Supp. Figure 5). To probe lipid biosynthesis directly, we additionally traced 13C incorporation into lipids from cells grown on [U-13C]glucose or [U-13C]glutamine.

The resulting data revealed striking differences between the two major metabolic types. This is shown exemplarily for TAG 50:1, an abundant member of the triacylglycerols, in the case of [U-13C]glucose (Figure 5D). In the type 1 representative HS578T, 24% of carbon atoms were labeled and the largest isotopologue was M+3, which results from the fusion of a 13C3-glycerol backbone and unlabeled acyl chains. In contrast, the type 2 representative NCI-H460 featured a much higher 13C-enrichment, 70%, with evident incorporation of 13C into the acyl chains. This pattern indicates substantially higher de novo fatty acid biosynthesis in type 2 compared to type 1. If lipogenesis is affected, similar trends should be observable across lipid classes. We extended the same analysis to nine cell lines and found, for 13 out of 15 tested lipids, more 13C-incorporation in type 2 (p-value < 0.05, Student t-test) (Figure 5E). The only exceptions were lysophosphatidylcholines, which are phospholipid derivatives and not made de novo. In the data related to the second tracer, [U-13C]glutamine, the labeling enrichment was lower, in the range of 10% to 20% (Supp. Figure 6). We observed a small but opposite trend with increased 13C in type 1 for 5 out of 15 tested lipids, which could reflect a marginal difference in the fraction of citrate that originates from glutamine and provides acetyl-CoA monomers to lipogenesis. In conclusion, we observed higher activity of de novo lipid synthesis (from the main carbon source glucose) in type 2 cells, which was not coupled to higher lipid content. The remaining fraction of non-labeled lipids resulted from other carbon sources, which could include direct lipid uptake.
Of note, type 2 had higher de novo biosynthesis of unsaturated lipids and higher labeling into ether lipids, both of which are also linked with membrane fluidity [39].
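As a worked illustration of how a labeling fraction such as the 24% quoted for TAG 50:1 can be derived from a mass isotopomer distribution, the sketch below computes the mean 13C enrichment from isotopologue intensities; the numerical values are made up, and natural-abundance correction is deliberately omitted.

```python
import numpy as np

def fraction_labeling(isotopologue_intensities, n_carbons):
    """Mean 13C enrichment from a mass isotopomer distribution.

    isotopologue_intensities: intensities for M+0, M+1, ..., M+k
    n_carbons: number of carbon atoms in the species (e.g., 53 for TAG 50:1,
               counting the glycerol backbone)
    Returns the fraction of carbon atoms that are 13C-labeled.
    """
    m = np.asarray(isotopologue_intensities, dtype=float)
    m = m / m.sum()                      # normalize isotopologues to unity
    weights = np.arange(len(m))          # number of 13C atoms per isotopologue
    return float((weights * m).sum() / n_carbons)

# Illustrative values only (not measurements from the paper):
mid = [0.30, 0.05, 0.05, 0.35, 0.10, 0.15]    # M+0 ... M+5 of a hypothetical lipid
print(fraction_labeling(mid, n_carbons=10))    # 0.235, i.e. ~24% of carbons labeled
```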
To functionally validate the pathway scores, we evaluated whether the inferred metabolic types were associated with differences in sensitivity to genetic or pharmacological inhibition. We used the dependency data from a CRISPR knockout screen of 18,333 genes [40], for which 63 cell lines overlap with the two types. We indeed found that active pathways were more sensitive in one type versus the other. For example, in upper glycolysis we found that PGM3 and PGAM1 deletion had a stronger effect in type 1 (Student t-test p-value < 0.05 and p-value < 0.01, respectively) (Figure 6A and B). Indeed, knockout of PGAM1 (phosphoglycerate mutase 1) has a deleterious effect in both types. However, we observed that the effect was significantly more pronounced for type 1 (-1 vs -0.8 in type 2, Student t-test p-value < 0.01) and close to the median value of essential genes [40]. These two results corroborated, from a functional standpoint, the association of type 1 with higher sugar metabolism activity. Conversely, the sensitivity to gene knockout shifted between the major types in the TCA cycle. For example, knockout of IDH2 or SDHAF4 affected the growth of type 2 cells (p-value < 0.01) more than type 1.

We extended the analysis from single reactions to whole pathways. We ranked all 18,333 genes on their correlation with the two types and performed gene set enrichment analysis to identify pathways that include genes whose knockout leads to differential effects. The top pathways associated with type 1 were linked to sugar metabolism (Figure 6C). More strikingly, the pathways whose knockout frequently caused a growth defect in type 2 were oxidative phosphorylation (q-value = 0, Supp. Figure 7A) and biosynthesis of unsaturated fatty acids (q-value < 0.05, Supp. Figure 7B). These are in line with the higher activity predicted by the pathway scores and verified by 13C experiments in the TCA cycle and de novo lipogenesis (e.g., PC 34:5e in Figure 5E), respectively. The differentiating relevance of unsaturated fatty acids was confirmed by drug sensitivity data [41]. Type 2 cells were indeed more susceptible to inhibition of stearoyl-CoA desaturase (SCD), a major contributor to the biosynthesis of unsaturated fatty acids (Figure 6D). In summary, we demonstrated that the pathway activity scores and the clustering derived from metabolomics data translate into dependencies of cancer cell lines and sensitivity to genetic or pharmacological inhibition. In type 2, the importance of mitochondrial pathways was highlighted by the oxidative phosphorylation dependency, and the sensitivity to unsaturated fatty acid biosynthesis was confirmed as a therapeutic liability for these cancer types.

Therefore, we expect that, through the action of the in vivo cellular environment, additional sensitivities will emerge beyond the ones suggested by this study in rich media and normoxia [50].
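A simplified stand-in for the dependency analysis above (per-gene differential CRISPR effect between types, followed by a rank-based test for a whole pathway) might look as follows; the DepMap-style input layout, the Mann-Whitney test in place of full gene set enrichment analysis, and all names are assumptions.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def differential_dependency(dep: pd.DataFrame, lines_type1, lines_type2) -> pd.Series:
    """Per-gene difference in mean CRISPR dependency score (type 1 minus type 2).
    dep: cell lines x genes matrix; more negative values mean a stronger
    growth defect upon knockout."""
    return dep.loc[lines_type1].mean() - dep.loc[lines_type2].mean()

def pathway_shift(diff: pd.Series, pathway_genes: set):
    """Ask whether a pathway's genes are collectively shifted towards stronger
    dependency in one type, relative to all remaining genes."""
    in_set = diff[diff.index.isin(pathway_genes)]
    out_set = diff[~diff.index.isin(pathway_genes)]
    stat, p = mannwhitneyu(in_set, out_set, alternative="two-sided")
    return p, in_set.median() - out_set.median()
```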
Discussion
What has yet to be explored are the principles that drive these cancer cells to adopt different metabolic types. These could be associated with specific limitations that cancer cells have to bypass to support their development and transformation, such as adaptation to hypoxia, or with whole-cell challenges of efficient energy production or proteome allocation [51]. Because of the overwhelming result linking type 2 to aerobic pathways (TCA cycle, oxidative phosphorylation, unsaturated fatty acids, etc.), we hypothesize that oxygen and its metabolism play a major role in shaping the metabolic phenotype. Even though cells were grown under the same normoxic condition, hysteresis due to past oxygen availability could explain these phenotypes. In fact, type 1 cells are characterized by ubiquitous changes that are characteristic of hypoxia: activation of HIF1 targets, inhibition of mitochondrial pathways, and an increase in lipid uptake, which has been shown to be beneficial against hypoxic stress [52]. Type 2 cells, in contrast, maintain membrane fluidity by producing de novo unsaturated fatty acids fueled by the TCA cycle. Moreover, the differences in the usage of these aerobic pathways could be explained by impaired mitochondria. In fact, type 1 cells had less mitochondrial lipid, less activity in mitochondrial pathways, and were less dependent on respiration. Future work should verify causality, i.e., whether mitochondrial dysfunction, the loss of mitochondrial mass, or stabilization of HIF1 is sufficient to drive a shift of type 2 cells to type 1.

The expectation was that all repeated measurements of either MCF7 or MDAMB321 should be as similar as possible, while the differences between MCF7 and MDAMB321 should be preserved across the seven batches. The reproducibility metrics included the following criteria:

Methods that correct for signal drifts, which occur chronologically during the injection sequence. The drifts might be caused by smooth changes in solvents, the ionization source, ion optics, or the detection process. We used three methods that consider the injection sequence to detect temporal drifts and correct them after smooth interpolation. First, we implemented a method that applies a moving median (window 120 min) to all measured samples to estimate a robust trendline (MovMed). Second, we used a locally weighted regression (LOESS) [57] and its derivative for temporal trends (Robust LOESS) [58], and third, we used the QC-based support vector regression method (QC-SVR) [59]. In the latter case, the MDAMB321 samples were selected as QC samples to be used for correction.

These methods were tested singly and in reasonable combinations (Combo). Given that the three classes tackle different types of problems, we combined methods from the different classes. The choice of methods and the order of combinations were based on their improvement of the quality metric scores. Results and average quality scores can be found in Supp. Table 1.
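The moving-median drift correction (MovMed) described above is straightforward to sketch. The following minimal example assumes per-injection intensities and acquisition timestamps are available for a single metabolite feature; rescaling the corrected trace to the overall median is an assumption about implementation details not given in the text.

```python
import pandas as pd

def movmed_correct(intensities: pd.Series, injection_time: pd.Series,
                   window: str = "120min") -> pd.Series:
    """Correct a single feature for temporal signal drift along the injection sequence.

    intensities: one intensity per injection
    injection_time: acquisition timestamp of each injection
    window: width of the moving-median window (120 min, as in the text)

    A rolling median over acquisition time estimates a smooth trendline;
    dividing by it and rescaling to the overall median removes the drift
    while preserving the average signal level. Returned values are ordered
    by acquisition time.
    """
    s = pd.Series(intensities.values,
                  index=pd.to_datetime(injection_time.values)).sort_index()
    trend = s.rolling(window, min_periods=1).median()   # trailing time window
    return s / trend * s.median()
```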
A common issue when using pathways is that their definition can be arbitrary, i.e., the start and end of a pathway depend on the database. The Kyoto Encyclopedia of Genes and Genomes (KEGG) [60], our chosen database because of its high level of curation, has the disadvantage of containing substantially overlapping pathways. We circumvented this limitation by removing reactions, and their corresponding substrates or products, that were present in multiple metabolic pathways. This curation resulted in smaller pathway definitions with fewer overlapping reactions, and thus metabolites more specific to a pathway. Out of the 1809 putatively annotated ions (according to HMDB), 367 could be linked to KEGG pathways. As in many cases the measurement does not allow distinguishing between structural isomers, an ion could match one or multiple metabolites with the same formula. In total, the 367 deprotonated ions matched 530 metabolites that are part of KEGG metabolic pathways.

For representative pathways, we correlated PC1 scores and measured fluxes (Supp. Figure 1). A strong positive correlation was observed between the glycolytic and pentose phosphate metabolites summarized in PC1 and the fluxes of these pathways. Note that the direction of principal components can be flipped (e.g., in the case of the purine pathway in Supp. Figure 1C).
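The curation step (dropping reactions shared by several KEGG pathways and then matching measured ions to the remaining pathway metabolites by molecular formula) could be organized roughly as below; the dictionary-based data structures are assumptions, and KEGG parsing itself is omitted.

```python
from collections import Counter

def prune_shared_reactions(pathways: dict) -> dict:
    """Remove reactions (and their substrates/products) that occur in more than
    one pathway, keeping only pathway-specific metabolites.

    pathways: {pathway_id: {reaction_id: set_of_metabolite_ids}}
    returns:  {pathway_id: set_of_metabolite_ids} from pathway-specific reactions
    """
    counts = Counter(rxn for rxns in pathways.values() for rxn in rxns)
    pruned = {}
    for pw, rxns in pathways.items():
        metabolites = set()
        for rxn, mets in rxns.items():
            if counts[rxn] == 1:          # reaction is unique to this pathway
                metabolites |= mets
        pruned[pw] = metabolites
    return pruned

def map_ions_to_pathways(ion_formulas: dict, metabolite_formulas: dict, pruned: dict) -> dict:
    """Match measured ions to pathway metabolites by molecular formula.
    An ion may match several isomeric metabolites; all matches are kept."""
    formula_to_mets = {}
    for met, formula in metabolite_formulas.items():
        formula_to_mets.setdefault(formula, set()).add(met)
    mapping = {}
    for pw, mets in pruned.items():
        hits = {ion for ion, f in ion_formulas.items()
                if formula_to_mets.get(f, set()) & mets}
        mapping[pw] = hits
    return mapping
```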
Pathway activity scoring
Overall, the PC1 scores correlated favorably with fluxes in all cases tested.

Inference of pathway score in cancer cell lines

Metabolomics data were mapped onto pathways; pathways with a minimum of four measured metabolites were further analyzed. Regardless of the number of detectable metabolites, the relative pathway score for each cell line replicate was obtained by PCA as outlined above. To isolate robust pathway scores, we analyzed the 1060 injections (after removal of controls) independently. For each cell line, we averaged the 6 independent PC1 scores to obtain the pathway activity score. Final scores were scaled to [-1…1] for comparison across pathways.

Typing and association analysis

The matrix of pathway scores was subjected to hierarchical clustering using Ward's method. The association analysis over the tree was done by iterating through all subtrees with at least 18 cell lines (10% of the total number). For categorical traits (e.g., batch number or genomic data), we calculated enrichments with hypergeometric tests. For continuous variables (e.g., gene expression), we used Student two-tailed t-tests. We assembled all (1 025 576) resulting p-values and corrected in toto for the false discovery rate with the Storey & Tibshirani method to produce q-values [63].

Selection of cell lines for follow ups

We chose 9 cell lines from type 1 and type 2 for further evaluation (Supp. Table 3).
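A minimal sketch of the pathway-score inference and typing described above, assuming a drift-corrected injections-by-ions matrix and an ion-to-pathway mapping from the previous step; z-scoring, scaling by the maximum absolute score, and cutting the Ward tree into a fixed number of clusters are simplifications of the procedure, and all names are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

def pathway_scores(metabolome: pd.DataFrame, pathway_ions: dict, min_ions: int = 4) -> pd.DataFrame:
    """PC1-based pathway activity scores.

    metabolome: injections x ions matrix (log-transformed, drift-corrected)
    pathway_ions: {pathway: set of ion identifiers}
    Returns an injections x pathways matrix of PC1 scores scaled to [-1, 1].
    """
    scores = {}
    for pw, ions in pathway_ions.items():
        cols = [c for c in metabolome.columns if c in ions]
        if len(cols) < min_ions:
            continue
        X = metabolome[cols].values
        X = (X - X.mean(axis=0)) / X.std(axis=0)     # z-score (assumes non-constant ions)
        pc1 = PCA(n_components=1).fit_transform(X)[:, 0]
        scores[pw] = pc1 / np.abs(pc1).max()          # scale to [-1, 1]
    return pd.DataFrame(scores, index=metabolome.index)

def cluster_cell_lines(cellline_scores: pd.DataFrame, n_types: int = 2) -> pd.Series:
    """Ward hierarchical clustering of per-cell-line average pathway scores."""
    Z = linkage(cellline_scores.values, method="ward")
    return pd.Series(fcluster(Z, t=n_types, criterion="maxclust"),
                     index=cellline_scores.index, name="metabolic_type")
```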
For liquid chromatography we used a 30 mm Waters ACQUITY UPLC BEH C18 column (cat. no. 186002352) and a 7 minute gradient from 15% buffer B (90% (v/v) isopropanol, 10% acetonitrile, 10 mM ammonium acetate) and 85% buffer A (60% acetonitrile, 50% water, 10 mM ammonium acetate) to 99% buffer B. Mass spectra were recorded from 150 to 2000 m/z in positive ionization mode, recording MS1 and MS2 (DDA, top 5 ions) spectra. Lipidomics data processing for non-labeled samples was done using Compound Discoverer 3.1 (Thermo, Massachusetts, United States). Lipids were annotated with MS2 information. Lipids from each class were quantified using class-specific internal standards.

For labeled lipids, we adopted a targeted data extraction. The most abundant representatives of each lipid class were selected in naturally labeled samples. All 13C-isotopomer traces were extracted as ion chromatograms from labeled samples based on accurate mass and retention time. Related mass isotopomers were integrated with identical boundaries and normalized to unity to obtain labeling fractions. | 2021-11-06T15:18:42.210Z | 2021-11-04T00:00:00.000 | {
"year": 2021,
"sha1": "e2363374971c2117599d34204974806e4e53fe06",
"oa_license": "CCBYNC",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2021/11/04/2021.11.02.466992.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "e11e811dbf72103b1be11b4fe0e532a7a8e4db29",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
238527566 | pes2o/s2orc | v3-fos-license | Gastrinoma and Zollinger Ellison syndrome: A roadmap for the management between new and old therapies
Zollinger-Ellison syndrome (ZES) associated with pancreatic or duodenal gastrinoma is characterized by gastric acid hypersecretion, which typically leads to gastroesophageal reflux disease, recurrent peptic ulcers, and chronic diarrhea. As symptoms of ZES are nonspecific and overlap with other gastrointestinal disorders, the diagnosis is often delayed with an average time between the onset of symptoms and final diagnosis longer than 5 years. The critical step for the diagnosis of ZES is represented by the initial clinical suspicion. Hypergastrinemia is the hallmark of ZES; however, hypergastrinemia might recognize several causes, which should be ruled out in order to make a final diagnosis. Gastrin levels > 1000 pg/mL and a gastric pH below 2 are considered to be diagnostic for gastrinoma; some specific tests, including esophageal pH-recording and secretin test, might be useful in selected cases, although they are not widely available. Endoscopic ultrasound is very useful for the diagnosis and the local staging of the primary tumor in patients with ZES, particularly in the setting of multiple endocrine neoplasia type 1. Some controversies about the management of these tumors also exist. For the localized stage, the combination of proton pump inhibitory therapy, which usually resolves symptoms, and surgery, whenever feasible, with curative intent represents the hallmark of gastrinoma treatment. The high expression of somatostatin receptors in gastrinomas makes them highly responsive to somatostatin analogs, supporting their use as anti-proliferative agents in patients not amenable to surgical cure. Other medical options for advanced disease are super-imposable to other neuroendocrine neoplasms, and studies specifically focused on gastrinomas only are scant and often limited to case reports or small retrospective series. The multidisciplinary approach remains the cornerstone for the proper management of this composite disease. Herein, we reviewed available literature about gastrinoma-associated ZES with a specific focus on differential diagnosis, providing potential diagnostic and therapeutic algorithms.
INTRODUCTION
Zollinger-Ellison syndrome (ZES) was firstly described in 1955 as associated with a neuroendocrine neoplasm (NEN) capable of ectopic gastrin secretion (namely gastrinoma)[1], resulting in gastric acid hypersecretion, which typically leads to gastroesophageal reflux disease (GERD), recurrent peptic ulcers, and chronic diarrhea. The terms gastrinoma and ZES have been frequently used as synonymous, although gastrinoma refers to the NEN secreting gastrin, whereas ZES refers to the clinical manifestations of the disease. ZES has an incidence of 1-1.5 cases/million per year [2]. Gastrinomas are NENs located in the duodenum (70%), pancreas (25%), and rarely (5%), in other sites, including stomach, liver, ovary, and lung. Gastrinoma is the most frequent functioning duodenal NEN and the second most frequently occurring functional pancreatic NEN (pNEN), following insulinoma; in turn, 15% of functioning pNENs is represented by gastrinoma. It may be sporadic, which is generally diagnosed between the ages of 50 and 70 years with a male to female ratio of 1.5-2:1 [3], whilst 20%-30% of the patients develop ZES in the context of a genetic syndrome known as multiple endocrine neoplasia type 1 (MEN-1) [4].
The diagnosis of ZES is not always straightforward due to both non-specific symptoms and confounding factors including proton pump inhibitor (PPI) therapy, which might temporarily relieve symptoms. Furthermore, as these patients tend to be referred to gastroenterologists because of diarrhea and/or reflux disease disorder, despite a better awareness of the disease, the diagnosis might be challenging for those gastroenterologists with low experience in the neuroendocrine setting as well as for many oncologists who are less used to dealing with diarrhea and reflux disease. As a consequence, the average time between the onset of symptoms and the final diagnosis is often longer than 5 years [5,6], and nearly 25% of patients are metastatic at the first diagnosis and show a worse prognosis when compared to non-metastatic patients in whom the surgical management is associated with a promising 15-year survival rate of > 80% [7].
Furthermore, some controversies about the management of these tumors still exist, particularly regarding the exact role of surgery or medical treatment and the possible role of somatostatin analogs (SSAs) [3]. Given that gastrinoma and ZES need both a proper medical treatment for symptom relief and a surgical procedure whenever feasible, the multidisciplinary approach, with close cooperation between clinicians and surgeons, remains the cornerstone for proper management of this composite disease, which should be always referred to tertiary centers.
Herein, we review from a critical point of view current knowledge about gastrinoma-associated ZES, also providing potential diagnostic and therapeutic algorithms based on both evidence from literature and own personal experience.
METHODOLOGY
Bibliographical searches were performed in PubMed using the following keywords: Gastrinoma; Zollinger Ellison syndrome; neuroendocrine neoplasms; pancreatic neuroendocrine neoplasm; duodenal neuroendocrine neoplasm; diagnosis; therapy; guidelines. We searched for all relevant articles published over the last 10 years. The reference lists from the studies returned by the electronic search were manually searched to identify further relevant reports. The reference lists from all available review articles, primary studies, and proceedings of major meetings were also considered. Articles published as abstracts were included, whereas non-English language papers were excluded.
CLINICAL PRESENTATION
ZES is characterized by gastric acid hypersecretion and consequent hyperchlorhydria resulting in severe acid-related peptic disease and diarrhea. The symptoms usually resolve when gastric acid secretion is controlled pharmacologically with PPIs [8,9]; of note, the disappearance of diarrhea following PPI treatment is typical of ZES and represents one of the factors contributing to the diagnostic delay. According to data from the literature, common symptoms include abdominal pain (75%), diarrhea (73%), heartburn (44%), and weight loss (17%)[6,8,10]. As these symptoms are both not specific and often less severe due to concomitant PPI treatment, the final diagnosis is often delayed and patients are diagnosed with irritable bowel syndrome or reflux disease by gastroenterologists with low or no knowledge of the disease [8,11].
The endoscopic features are also not specific and might include erosions and ulcers [12], however, ZES patients often present with multiple ulcers located at unusual sites, e.g., beyond the first or second portion of the duodenum[8,13]. Furthermore, enlarged gastric folds can be present in more than 90% of patients with ZES[11].
One should keep in mind that approximately 25% of gastrinomas occur in the context of MEN-1, which is characterized by the presence of parathyroid, pancreaticduodenal, and pituitary tumors [14]; thus the occurrence of unexplained hypercalcemia might be a sign for possible MEN-1 syndrome-associated ZES [15,16], also taken into account that primary hyperparathyroidism is generally the presenting feature in the majority of cases of MEN-1 syndrome [8,16,17]. Of note, parathyroidectomy usually improves gastrin levels and basal acid output [16]. Finally, in ZES/MEN-1 patients, type 2 gastric NENs might occur[3].
DIFFERENTIAL DIAGNOSIS
Symptoms of ZES are nonspecific and overlap with other gastrointestinal (GI) disorders, which explains the frequent diagnostic delay.
Chronic diarrhea in ZES is sustained by hyperchlorhydria and by sodium and water malabsorption due to hypergastrinemia [18]. As afore-mentioned, abdominal pain and peptic ulcers are also among the most frequent manifestations of ZES. As well as diarrhea, they are sustained by hyperchlorhydria, which directly damages the GI mucosa, causing ulcers and erosions. Abdominal pain may be associated with peptic ulcers, which, differently from Helicobacter pylori- or non-steroidal anti-inflammatory drug-related ulcers, are multiple, located at unusual sites (e.g., the third part of the duodenum, small bowel), and complicated by bleeding, penetration, perforation, or strictures [8,13,19].
Similar to peptic ulcer disease, chronic GERD is one of the most frequent manifestations of ZES [13]. Heartburn and regurgitation are the most typical symptoms, which are super-imposable to symptoms associated with typical GERD; differently from the typical syndrome, patients with ZES often present with esophageal strictures due to overexposure to acid reflux.
Again, the association between these symptoms and chronic diarrhea, after exclusion of other common GI etiologies, might raise the suspicion of ZES, which requires specific tests in order to get the final diagnosis.
DIAGNOSIS
The diagnosis of ZES is quite challenging, also considering that the critical point is the initial suspicion of ZES. A suggested diagnostic algorithm is represented in Figure 1.
ZES is a clinical syndrome characterized by the following triad: (1) gastric acid hypersecretion, sustained by (2) fasting serum hypergastrinemia causing (3) peptic ulcer disease and diarrhea[1]. Hypergastrinemia is sustained by a gastrinoma, a rare NEN (located primarily in the duodenum or pancreas) that secretes gastrin.
Since ZES symptoms can be explained almost entirely by acid hypersecretion, PPIs, which significantly decrease acid secretion, can mitigate or resolve ZES symptoms, making ZES diagnosis even more challenging than in the past[9,22], but avoiding severe ZES complications.
Hypergastrinemia is the hallmark of ZES; however, hypergastrinemia might recognize several causes, which should be ruled out in order to make a final diagnosis of ZES[23]. In detail, it can be distinguished between (1) appropriate hypergastrinemia, due to atrophic gastritis (with or without pernicious anemia), anti-secretory therapy (PPIs or high-dose histamine H2-receptor antagonist, namely famotidine), chronic renal failure, Helicobacter pylori-related pan-gastritis, vagotomy, and (2) inappropriate hypergastrinemia that can be observed in ZES (sporadic or associated with MEN-1), antral-predominant Helicobacter pylori infection, retained-antrum syndrome, gastric-outlet obstruction, extensive small-bowel resection.
The diagnosis of ZES requires the demonstration of inappropriate gastrin secretion associated with gastric hyperchlorhydria, which corresponds to a gastric pH < 2[5]. Normal fasting gastrin levels are < 100 pg/mL; levels > 300 pg/mL are highly suspicious, and levels > 1000 pg/mL together with a gastric pH below 2 are considered to be diagnostic for gastrinoma [2,9,24]. Naso-gastric tube aspiration has classically been used to estimate gastric pH, but it can be uncomfortable for patients and can underestimate gastric acid output; alternatively, gastric pH can be measured during upper GI endoscopy, by aspiration of gastric juice for pH determination using either pH paper or a pH meter. While endoscopic sampling was shown to overestimate total acid volume, it provided more reproducible results and offered greater patient tolerance than naso-gastric tube placement.
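For illustration only, the gastrin and pH cut-offs quoted above can be arranged into a simple decision sketch; this is not a validated clinical algorithm, the ordering of the checks is an assumption, and it ignores the clinical context (e.g., PPI withdrawal) discussed elsewhere in the text.

```python
def classify_gastrin_workup(fasting_gastrin_pg_ml: float, gastric_ph: float) -> str:
    """Illustrative encoding of the thresholds discussed in the text.
    Not a clinical decision tool."""
    if fasting_gastrin_pg_ml < 100:
        return "normal fasting gastrin - ZES unlikely"
    if fasting_gastrin_pg_ml > 1000 and gastric_ph < 2:
        return "diagnostic for gastrinoma (hypergastrinemia with hyperchlorhydria)"
    if gastric_ph >= 2:
        return "appropriate hypergastrinemia possible (e.g., atrophic gastritis, PPI use) - exclude secondary causes"
    if fasting_gastrin_pg_ml > 300:
        return "highly suspicious - consider secretin provocative test"
    return "equivocal - repeat off PPI if safe and consider secretin provocative test"
```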
Imaging and ultrasound endoscopy (endoscopic ultrasound)
Localization of the primary tumor and its metastases is the first diagnostic step when ZES associated with gastrinoma is suspected. Contrast-enhanced computed tomography (CT) scan is useful to identify primary tumor > 1 cm, pancreatic head tumors, and liver metastases, with a sensitivity between 59% and 78% and a specificity between 95% and 98%, respectively. Conversely, sensitivity decreases for tumor size < 1 cm and extra-pancreatic locations [28,29].
Contrast-enhanced magnetic resonance imaging (MRI) showed high specificity (namely 100%) in detecting small pancreatic tumors and liver metastases, whereas sensitivity is sub-optimal, varying from 25% to 85%. Of note, MRI showed a higher sensitivity for the detection of liver metastases when compared to CT scan [28,30].
In more recent years, somatostatin receptor positron emission tomography (PET) techniques have shown great promise for improving the localization of gastrinomas as well as other NENs[36-39] and for the detection of distant metastases, including bone lesions. The radioisotope 68 Ga can be ligated to peptides that bind to somatostatin receptors found in abundance on the NEN surface [36]. This technique showed a higher sensitivity and specificity (72%-100% and 83%-100%, respectively) when compared to the aforementioned diagnostic techniques in localizing the primary tumor, especially small tumors[36,37,40]. Combining 68 Ga-radiotracers with traditional CT scans (PET/CT) further enhances diagnostic accuracy compared to PET alone, showing a sensitivity of 93% and a specificity of 96% in primary tumor detection [41]. Gallium-68 PET scan should always be included in the diagnostic pathway of all NENs, including gastrinoma, in order to both identify the primary tumor and stage the disease.
Endoscopic ultrasound (EUS) has become an important diagnostic tool to localize gastrinomas, particularly small (i.e. < 2 cm) pancreatic lesions; its sensitivity and specificity are 75%-100% and 95%, respectively, for pancreatic tumors. Unfortunately, its sensitivity dramatically decreases in cases of duodenal localization, ranging from 38% to 63%[28,42]. A further advantage of this technique is the possibility of taking cytologic/histologic samples through a fine needle aspiration/biopsy (FNA/B) to confirm the diagnosis of NEN, even if false-negative results are possible mainly due to poor sampling adequacy. EUS-FNA/B is now considered the primary sampling technique for pancreatic tumors, with a sensitivity ranging between 80% and 90%, specificity at 96%[43], and a sampling adequacy rate of 83%-93% [44].
When used as a screening modality in asymptomatic patients with MEN-1, EUS has been reported to be more accurate than CT scan to detect smaller tumors [45]. Therefore, its diagnostic ability has led experts to recommend it as an annual screening modality for all patients with MEN-1, although recent evidence suggests that the growth rate of small pNENs (i.e. < 2 cm) is low and that EUS screening frequency can likely be extended [14,46].
Esophageal pH-recording
Since one of the most common symptoms of ZES is GERD, it could be argued that esophageal pH-monitoring could be a useful tool to diagnose ZES. Recent BSG guidelines for esophageal manometry and esophageal pH monitoring[47] stated indications to perform esophageal pH-monitoring, also including as an indication GERD symptoms that did not respond to double dose of PPIs. This technique allows to diagnose an increased acid exposure, to evaluate the association between symptoms and acid or non-acid reflux, and to identify different phenotypes of upper symptoms ( i.e. non-erosive reflux disease, hypersensitive esophagus, and functional heartburn).
ZES is not usually included in diagnosis performed by esophageal pH-monitoring, and, consequently, ZES reference standard for esophageal pH-monitoring is lacking. However, evidence of a high number of acidic reflux episodes (i.e. esophageal pH < 4), a high number of long (i.e. > 5 min) reflux episodes, a high percentage of time with esophageal pH < 4, both on a double dose of PPIs and off PPIs, could raise the suspicion of abnormal gastric acid secretion. This hypothesis should be confirmed by prospective studies; however, considering the rarity of this syndrome, it would be very difficult to obtain standard values to use in clinical practice; therefore, despite its potential utility, this test is not currently included in the standard diagnostic workup of gastrinoma.
Secretin provocative test
The secretin provocative test finds its application in ZES diagnosis in controversial cases, that is, patients with suspected ZES and gastric pH < 2 but fasting serum gastrin < 10 × the upper limit of normal [9]. To perform a secretin stimulation test, fasting gastrin levels are obtained before intravenous (IV) administration of secretin and then 2, 5, and 10 min after infusion [25]. Patients with gastrinomas exhibit an inappropriate increase in gastrin production in response to secretin infusion [9]. This mechanism can be explained in part by the fact that secretin receptors are expressed directly on the gastrinoma cell surface [48]. Different cut-offs for positive tests have been proposed, including an absolute increase in gastrin concentration ≥ 110 pg/mL or ≥ 200 pg/mL or a 50% increase in gastrin concentration [49]. However, previous data suggested that a positive secretin provocative test (≥ 120 pg/mL increase) has a sensitivity of 94% and a specificity of 100%[50]. According to data from the literature, a false-negative response can occur in 6% to 20% of patients[51,52], whereas false-positive responses, ranging from 15% to 39% in different studies [52,53], are found in patients with pernicious anemia or chronic PPI use.
In order to reduce the risk of false-positive results, PPI treatment should be withdrawn, but, again, the decision should be discussed in a case-by-case manner to limit the risk of severe complications (e.g., perforation or bleeding). This might partially explain the reason why the secretin test can be difficult to be performed and should be reserved for strictly selected cases when the diagnosis is not straightforward.
In patients with an established diagnosis of gastrinoma-related ZES, MEN-1 syndrome might be present in approximately 25% of the cases. The presence of hypercalcemia due to hyperparathyroidism is one of the first signs. However, the diagnosis might be challenging in this specific setting as ZES does not usually develop in the absence of primary hyperparathyroidism, and hypergastrinemia has also been reported to be associated with hypercalcemia as a confounding factor [15]. Furthermore, parathyroidectomy leads to restoration of normocalcemia and improvement in clinical symptoms and biochemical abnormalities in as many as 20% of MEN-1 patients with ZES[14]. Moreover, staging and localization with CT or MRI is even more challenging in the setting of MEN-1 due to the presence of numerous small tumors < 1 cm in size [27,28]. A high index of suspicion must be maintained if a patient with chronic diarrhea and unexplained peptic ulcer disease presents with primary hyperparathyroidism. The genetic test for MEN-1 syndrome should be performed in a selected subgroup of patients, namely (1) in patients with two or more primary MEN-1-associated endocrine tumors (e.g., parathyroid adenoma, entero-pancreatic tumor, and pituitary adenoma) or hypercalcemia associated with an endocrine tumor; and (2) patients showing MEN-1-related features and being the first-degree relative of a patient with a clinical diagnosis of MEN-1[14].
THERAPY
The management of gastrinoma and ZES includes both a proper medical treatment for symptom's relief and surgery with curative intent whenever feasible. A proposed therapeutic algorithm is represented in Figure 2.
Surgery
The role of surgery in the treatment of gastrinoma has changed completely from the introduction of PPIs in the 1980s. In fact, before the advent of an effective antisecretory therapy, surgery was performed to control acid hypersecretion, mainly removing the target cells of gastrin through total gastrectomy. These operations were, by the way, affected by a high mortality rate due to acid-related complications in the postoperative course. With the use of PPIs, gastric hypersecretion was no longer a problem, and the main determinant of prognosis became the gastrinoma itself because of its malignant potential and surgical excision started to be proposed as a potentially curative therapy. From 1981, the National Institute of Health began a prospective study recruiting patients with ZES for surgical therapy, with a well-designed surgical protocol in order to capture the long-term results of the best available surgical approach. The study reported a 10-year overall survival (OS) and disease-free survival (DFS) of 94% and 34%, respectively[54]. Therefore, surgery has gradually changed its role and gastrinoma resection has started to be increasingly proposed to patients eligible for resection. Currently, across the most important guidelines, surgical excision is generally recommended either for sporadic gastrinoma or for MEN-1 associated gastrinoma if complete tumor removal is possible[2,55-58]. Subsequent studies reported a 20 year OS of 58%-71%, a 20-year disease-related survival of 73%-88%[59], and a 10-year DFS of 25%-50%[60]. Surgery of the primary tumor also demonstrated to reduce the occurrence of liver metastases [61][62][63], which are one of the main determinants of prognosis, and to improve DFS in comparison with non-surgical management [62].
The majority of gastrinomas (from 60% to 90% depending on the series)[42,60] occur in the duodenum, and, since these are often very small lesions (less than 1 cm) and located at the submucosal layer, tumor detection is not so straightforward. Therefore, the surgical technique should follow a stepwise approach to search for the tumor even in case of negative preoperative imaging. In this context, surgery has firstly a diagnostic purpose, which is quite uncommon in modern surgery and, given the peculiarity of this technique and the rarity of the disease, it should be performed by experienced surgeons in tertiary referral centers. After a complete abdominal exploration, the duodenum and the pancreatic head are mobilized (Kocher maneuver) and carefully palpated. Intra-operative ultrasound with a linear probe is then performed on the duodenum and pancreas looking for the primary tumor and on the liver in search for liver metastases. Intra-operative endoscopy is performed thereafter advancing the scope into the duodenum; duodenal gastrinomas may be found through trans-illumination of the bowel wall as non-trans-illuminated spots. If a lesion is identified, it should be marked with a suture and the duodenum opened around it for a full-thickness excision. If the described steps fail to reveal any lesion, a 3 cm longitudinal incision is made on the anterior aspect of the second portion of the duodenum, and the entire duodenal wall is palpated. Suspicious lesions are excised with a fullthickness rim of normal tissue and sent for pathology. The duodenum is then closed transversally, if possible, to minimize the risk of strictures [64,65]. In the hands of an experienced surgeon, lesions could be found in 98% of imaging-negative ZES patients, with a 50% curative rate [59], similar to that of imaging-positive patients. These findings suggest that surgery should be performed as soon as possible in sporadic ZES, despite negative imaging findings. Pancreatic gastrinomas should be enucleated if located 3 mm or farther from the main pancreatic duct. Conversely, lesions that are closer to the pancreatic duct require distal pancreatectomy with or without splenectomy if located in the body or tail of the gland and pancreaticoduodenectomy if located in the head/neck. Pancreaticoduodenectomy or distal pancreatectomy may be necessary also for local recurrence after enucleation[66].
Regional lymph nodes should always be removed because nodal metastases are present in almost half of the patients[54,67] and lymphadenectomy has been associated with increased DFS [68], as reported also for other pNENs [69][70][71]. The presence of primary gastrinoma located in a lymph node is controversial, however, several studies reported long disease-free survivors after resection of only a positive lymph node [72,73], and this supports the role of routine lymphadenectomy.
Since pancreaticoduodenectomy provides complete removal of the regional lymph nodes of the pancreatic head, the results in terms of DFS are better with respect to enucleation because of the higher chance of radicality [54,67]. However, given the high postoperative morbidity and the good prognosis also of patients with small residual disease, pancreaticoduodenectomy is not recommended as the standard operation for these patients[2,55-58]. Generally, the indication for surgery should always follow a thorough risk/benefit assessment within a multidisciplinary tumor board aiming at maximum radicality and minimum morbidity. This is particularly the case for MEN-1 patients; in these patients, who have generally an earlier age of onset, pNENs should be resected in low-risk patients, and surgery is generally recommended for tumors larger than 2 cm[14,58]. However, according to most authorities, as well as all guidelines, surgical resection for an attempted cure should be performed in ZES patients whenever possible[2,27,58]. This is particularly true for functioning duodenal NENs, including gastrinomas, which have been reported to express a high metastatic potential [74], thus a radical surgical approach should be the first choice in this specific setting. However, in highly selected cases (i.e. duodenal lesions ≤ 1 cm, limited to the submucosal layer and without lymph nodal involvement), endoscopic resection might also be considered, although the risk of undetected micro-metastases might represent an issue.
Another controversial issue is laparoscopic surgery; while it is widely adopted for pNENs, its role for gastrinomas is limited to patients in whom preoperative imaging gives an accurate definition of tumor location. Unfortunately, as already mentioned, extensive exploration is often needed for diagnostic purposes. In these cases, laparoscopy is inadequate, and laparotomy is mandatory.
The role of surgical resection in ZES patients with advanced metastatic disease or even with extensive invasive localized disease is not well-defined. In this setting, the possibility of surgical removal of all resectable tumors (cytoreductive surgery, debulking surgery) should be considered, and surgery is generally recommended if ≥ 80% of all disease can be removed (generally feasible in 5%-15% of all metastatic gastrinomas), although only a few reports containing primarily gastrinomas treated with this approach are currently available [10,72].
Finally, in highly selected metastatic gastrinomas, with liver-only metastases and fulfilling strict inclusion criteria, liver transplantation might be considered, even if its use remains controversial and the risk of tumor recurrence represents an issue[58].
Liver-directed therapies
Studies specifically focused on liver-directed therapies in the context of gastrinomas are scant; however, as for other NENs, the embolization approaches in the setting of gastrinoma are generally reserved for patients with metastatic unresectable hepatic metastases either limited to the liver or with a liver-predominant disease, particularly if locally symptomatic[10,58]. Of note, liver-directed therapies are used less frequently in ZES than in other metastatic NENs, because in ZES, the hormone excess-state can be well-controlled medically.
Medical treatment
Among functioning NENs, gastrinoma is the most frequent type. There are two therapeutic goals in the management of patients with gastrinoma: The control of gastric acid hypersecretion and the treatment of the tumor itself.
Antisecretory medications
The therapy for syndrome control is based on PPI (e.g., omeprazole, esomeprazole, lansoprazole, pantoprazole, etc.), which are highly effective drugs and considered the drugs of choice for suppressing acid secretion. PPIs effectively block gastric acid secretion by irreversibly binding to and inhibiting the hydrogen-potassium ATPase pump on the luminal surface of the parietal cell membrane. Theoretically, the choice and titration of anti-secretory therapy should be guided by the parameters of gastric acid secretions such as basal acid output (to reduce it below 10 mEq/h)[75], since using symptoms alone as a signal of efficacy might be misleading, even if in many centers these methods are not available. Therefore, in most cases, PPI therapy is started at an empirical maximized dosage. The recommended initial dose of omeprazole is 60 mg/daily or esomeprazole 120 mg/daily, lansoprazole 45 mg/daily, rabeprazole 60 mg/daily, pantoprazole 120 mg/daily, divided, twice-a-day [75][76][77][78]. The type of PPI used seems not to be of relevance and a systematic review of 12 randomized trials examining the relative effectiveness of different PPI doses and dosing regimens found no consistent differences in symptom resolution and esophagitis healing rates [79]. IV PPIs are indicated in patients with clinically significant upper GI bleeding from a suspected peptic ulcer. Omeprazole, pantoprazole, and esomeprazole are the only PPIs available as an IV formulation. The other patients can be treated with oral preparation. As concerns efficacy, PPIs have significantly decreased the morbidity and mortality resulting from severe ulcer disease [80]. In 60% of patients, ulcer healing occurs within 2 wk; in 90%-100% of patients, healing occurs within 4 wk. PPIs are generally safe, even when used in high doses.
Once an effective clinical control of the peptic disease has been achieved, a gradual dose reduction is generally suggested [81,82]. In a study by Metz et al [83], 37 patients received high-dose omeprazole for almost 2 years, and nearly 50% were able to lower the dose down to 20 mg once daily, with 95% of patients experiencing safe long-term reductions in their medication dose. PPIs are generally well tolerated and can control hypergastrinemia in ZES for > 10 years (although some patients experience low vitamin B12 levels) [84].
No tachyphylaxis has been described. Therefore, the long duration of action, the fewer adverse effects, and the high potency make them superior to H2 blockers.
Regarding the use of H2-receptor antagonists in ZES, the dose usually is 4-8 times higher than the dose administered to patients with peptic ulcer disease. Although a good success rate exists, this treatment has been reported to fail in 50% of patients. Therefore, these drugs are never the first choice.
Only when PPIs are unable to control gastric acid secretion, SSAs can be considered, as they reduce gastrin secretion, even if they do not represent a first-line treatment at least for symptom control.
Even if this is not a treatment currently approved in localized gastrinoma, it is worth mentioning that in animals, the cholecystokinin-2 receptor antagonist YF476 has been shown to inhibit the development of enterochromaffin-like cell-tumors in susceptible animals with induced hypergastrinemia. Therefore, this drug could represent a potential option in ZES, not only to inhibit hypergastrinemia but also to prevent gastric NEN type 2 (e.g., associated with ZES/MEN-1). Furthermore, there continues to be interest in the development of cholecystokinin-2 receptor antagonists as anti-secretory agents [85]. However, strong evidence supporting the role of these molecules in this specific setting is lacking.
Anti-proliferative treatment
Approximately one-third of ZES patients present with metastatic disease to the liver [10,86]. There are several systemic therapeutic options for advanced gastrinoma, not substantially different from the ones for other NENs, however, studies evaluating specific response rates in gastrinomas alone are limited.
SSAs like octreotide and lanreotide are highly effective in controlling the symptoms associated with hormone hypersecretion in all functioning tumors [87,88]; furthermore, they can reduce gastrin levels and their anti-proliferative effect has been demonstrated in PROMID and CLARINET studies [89,90]. However, in these studies only a few cases of gastrinoma were included, and, even if different case reports and case series suggested the role of SSAs in controlling gastrin secretion and symptoms in ZES patients [91][92][93][94], to date only a few studies with a very low number of patients investigated specifically the role of SSAs in ZES[3]. The multitargeted tyrosine kinase inhibitor, sunitinib, has demonstrated an improved progression-free survival from 5.5 mo to 11.4 mo in metastatic pNENs [95]. Moreover, based on the results of two randomized, double-blind, prospective, placebo-controlled studies, the mammalian target of rapamycin-inhibitor everolimus has been approved in advanced both pancreatic [96] and extra-pNENs [96]. However, there are no specific studies on the effects of sunitinib/everolimus in the specific setting of gastrinomas.
Streptozocin, 5-fluorouracil, and doxorubicin have been used, with the response rate reported to be as high as 69% [97]. Despite these reported response rates, the true radiologic response rate is more probably between 10% and 40% [98,99]. More recently, anti-proliferative activity has also been shown for temozolomide. Data came from retrospective studies [100] as well as from a prospective randomized study comparing capecitabine plus temozolomide to temozolomide alone in pNENs, which revealed a median progression-free survival longer in the combination arm (22.7 mo vs 14.4 mo, hazard ratio 0.58, P = 0.023), but satisfactory in both [101]. Moreover, a recent real-world analysis confirmed the combination of capecitabine and temozolomide as an active treatment for metastatic NENs [102]. Because of these studies, the use of capecitabine plus temozolomide has become routine for advanced pNENs, including gastrinomas.
Lastly, peptide receptor radionuclide therapy (PRRT) may be the most promising systemic therapy, and it has been repeatedly reported as particularly useful for symptom relief in functioning forms, even if this aspect might be less important in the setting of gastrinomas due to concomitant PPI treatment which is considered to be the first-line approach for symptoms' control [10]. Two different isotopes have been used in most studies: 90 Yttrium ( 90 Y)-or 177 Lutetium ( 177 Lu)-labeled SSAs [103]. The approval of PRRT treatment comes from the promising results of a double-blinded, control phase 3 trial (NETTER-1) [104] in patients with advanced unresectable, midgut carcinoids and the results of treatment of 510 patients with advanced pNENs and other NENs[105,106]. According to data from the literature, gastrinomas are one of the malignant pNENs that were most responsive to PRRT; however, they also had one of the highest recurrence rates leading to a poorer prognosis [103,105]. In detail, in one study including 11 patients with metastatic ZES[107] treated with either 90 Y-and/or 177 Lu-labeled SSAs, the mean serum gastrin decreased by 81%, complete response occurred in 9%, partial tumor response in 45%, tumor stabilization in 45%, with a persistence of the antitumor effect for a median period of 14 mo in 64% of the cases. Another study[108] involving 30 gastrinoma patients treated with 90 Y-labeled SSAs reported a partial response rate of 33% with a mean OS time of 40 mo.
CONCLUSION
As the diagnosis of ZES is challenging, the maintenance of a high index of suspicion is necessary to reach the final diagnosis. Better disease awareness is useful to reduce the diagnostic delay, particularly that due to the improper referral of patients to physicians with low or no expertise in the neuroendocrine field. The association of typical symptoms including chronic diarrhea, reflux disorder, and recurrent peptic disease, particularly at unusual sites, should raise the suspicion of ZES after exclusion of alternative and more common GI etiologies. The possibility of an underlying MEN-1 syndrome should always be considered, particularly in young patients with concomitant hypercalcemia suggestive of hyperparathyroidism and/or a family history of MEN-1. A fasting gastrin level is generally the first step, and confounding factors such as PPI use need to be considered. Gastric pH, esophageal pH-recording, and possibly a secretin stimulation test might be necessary as well, although the decision to perform them should be tailored to every single patient, considering both the need to withdraw PPI treatment and the limited availability of these tests in routine clinical practice. Tumor localization must be performed, and EUS, with the possibility of obtaining a sample through FNA, is considered to be a more accurate technique than conventional imaging for small lesions. Given the high expression of SSTRs in gastrinomas, gallium-68 PET scan should always be included in the diagnostic pathway of all NENs, including gastrinoma, in order to both identify the primary tumor and to stage the disease.
Regarding the treatment of localized disease, the two milestones are represented by PPIs for symptom control and surgery with curative intent. The role of surgery in the treatment of gastrinoma has changed completely since the introduction of PPIs. In the past, total gastrectomy represented the sole effective treatment for ZES, removing the end-organ target of gastrin. With the use of PPIs, gastric hypersecretion was no longer considered a problem and surgical excision started to be proposed as a potentially curative therapy. Surgical removal of the primary tumor (and possibly its metastases) with curative intent should, indeed, always be performed. Unfortunately, the diagnosis is often made when the disease is too advanced for a surgical approach. The first step, again, is represented by syndrome control, based on PPIs, which are considered to be the drugs of choice for suppressing acid secretion. In order to achieve tumor growth control, SSAs constitute a viable option; studies specifically focused on advanced gastrinomas are scant and often retrospective, however, according to data from the literature, treatments for the advanced disease are super-imposable to those for other NENs and include targeted therapies, chemotherapy, and PRRT. As there is a need for both proper medical treatment for symptom relief and a surgical procedure with curative intent whenever feasible, the multidisciplinary approach, with close cooperation between clinicians and surgeons, remains the cornerstone for proper management of this composite disease. Due to the risk of overlap of ZES with other common GI disorders, referral to tertiary centers with great expertise in the neuroendocrine field is mandatory.
"year": 2021,
"sha1": "5556f0e1a49050e12db734e761e2fbb697b2e7ee",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v27.i35.5890",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "56d2d402273331efd08dff9f7c88826d903cb81d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219499974 | pes2o/s2orc | v3-fos-license | Neither forest herbicides nor ambient temperature exert strong effects on reproductive output of a cavity-nesting songbird
Land management may combine with air temperature to influence the persistence of animal populations, so both must be evaluated in concert to understand how populations respond to increased forest management intensity and projected climate change. We used a large-scale study that experimentally manipulated herbicide application intensity within regenerating forests to test how herbicide-mediated changes in temperature influenced three components of reproductive output in the House Wren (Troglodytes aedon): nest survival, the number of offspring produced, and nestling body condition. We found no evidence for a consistent herbicide treatment effect on any reproductive measure, although our power to detect effects was modest. Relative to unsprayed controls, nest survival was lowest in the light herbicide treatment, and this measure increased on sites that were subjected to greater herbicide application (i.e., at moderate and intensive herbicide treatments), against our predictions. We also detected no evidence of a temperature effect singly or in combination with herbicide application on wren reproductive output. Although herbicide intensity was more influential on reproductive output than was temperature, we found that neither exerted strong effects in regenerating conifer forests. Given the dearth of studies that combine evaluations of temperature and land management impacts on songbird reproductive output, we suggest researchers continue to expand our understanding of the relative influence of both drivers simultaneously to better formulate conservation strategies in light of expected changes in climate and a heightened global demand for wood products.
INTRODUCTION
In response to increased societal demand for wood products (FAO 2016), the application of intensive management practices within forests has become commonplace throughout the world (Carle and Holmgren 2008, Paquette and Messier 2010, Rodriguez et al. 2014). Such practices include shortened rotation length, planting of genetically improved trees, and the use of herbicides to control competing vegetation, among others (de Moraes Goncalves et al. 2014). Of these, the use of herbicides has the greatest potential to reduce the quality of regenerating forest because herbicides typically alter the composition and structure of early seral forest vegetation (Shepard et al. 2004, Balandier et al. 2006, Wagner et al. 2006), and their use can lead to a truncation of the early seral period (Swanson et al. 2014). In turn, a reduction in broadleaf vegetation is thought to impact organisms that depend on early seral forest as critical habitat during the annual cycle (Comet 1996a, Hagar et al. 2007). For example, lepidopteran larvae are an important food source for songbirds during the breeding season (Rodenhouse and Holmes 1992) and are positively associated with broadleaf vegetation (Hammond and Miller 1998); thus, herbicide-mediated reductions in broadleaf cover are thought to reduce the extent and availability of this food source when songbirds rear their offspring. In addition, rates of nest failure can be higher under conditions of reduced deciduous vegetation available for nesting (Easton and Martin 1998, but see Rivers et al. 2019), so herbicide use in early seral forests has the potential to negatively impact songbird reproductive output indirectly through changes in both nesting and foraging habitat.
Increased air temperature is one of the most prominent components of ongoing human-driven climate change (IPCC 2013), and such increases have the potential to alter the demographic response of songbirds that nest in early seral forests where herbicide application is most prevalent. Subtle increases in temperature, for example, may enhance reproductive output by allowing incubating individuals to maintain greater levels of incubation constancy (Ardia et al. 2009), which, in turn, can lead to enhanced offspring body condition (Pérez et al. 2008) and may even enhance long-term survival (Andreasson et al. 2017). Warmer temperatures may also increase food availability for insectivorous birds whose foraging efficiency is reduced during cooler temperatures via higher thermoregulatory costs and reduced food availability (Avery and Krebs 1984, Winkler et al. 2002). In contrast, more pronounced temperature increases may have the opposite effect with negative consequences for organisms through alteration of metabolic rates (Schmidt-Nielsen 1997) and reductions in the survival of adults and their offspring (Selwood et al. 2015). For example, experimental increases in ambient temperature of temperate box-nesting populations of the Tree Swallow (Tachycineta bicolor) had strong consequences for the condition of females and their offspring, even when temperature increases were of short duration (Pérez et al. 2008, Ardia et al. 2009). Furthermore, heat stress can shift energy allocation to thermoregulation, reducing offspring growth (Murphy 1985, Pipoly et al. 2013, Salaberria et al. 2014), which may ultimately reduce postfledgling survival and influence population recruitment (du Plessis et al. 2012, Edwards et al. 2015). Thus, climate can influence temporal variation in vital rates for many species (McCarty 2001, Gallinat et al. 2015, Williams et al. 2015), even for those that have adapted to temperature regimes within temperate regions and select microhabitats to nest where temperature modulation is reduced, so this factor must be considered when evaluating the effects of management practices that alter vegetation composition. Vegetation composition in forests can influence rates of surface cooling through changes in biophysical factors such as albedo and canopy conductance (von Arx et al. 2012, Zhao and Jackson 2014). Therefore, herbicide-mediated changes in vegetation composition may also lead to changes in ambient temperature, and these factors may work together to influence reproductive output (Sieving and Willson 1998, Chase et al. 2005). More broadly, expected changes in climate will combine with land use practices (Brook et al. 2008) to produce novel environmental conditions for many species (Hobbs et al. 2006, Mantyka-Pringle et al. 2012, Jantz et al. 2015, Northrup et al. 2019). Understanding how these pressures combine to affect vital rates is essential to conservation planning as both forest management and human-driven temperature change are expected to increase in extent and intensity in the coming decades (Lambin and Meyfroidt 2011, Seto et al. 2011, Tscharntke et al. 2012, IPCC 2013). Despite this, very few studies have examined the potential for combined effects of changes in vegetation and air temperature on animal vital rates within forest ecosystems (Cox et al. 2013, Becker and Weisberg 2015), and new studies are needed to fill these critical knowledge gaps.
In this study, we tested whether experimental herbicide application and ambient air temperature, both singly and in combination with each other, were linked to songbird reproductive output within intensively managed coniferous forests. Forest herbicides are designed to target plant-specific physiological mechanisms and are not known to directly influence animal populations when used at prescribed levels (Tatum 2004, McComb et al. 2008). It is worth noting that one widespread forest herbicide, glyphosate, has been shown to impact bird health under captive conditions, although investigations of direct effects on birds under field conditions are currently lacking (reviewed in Gill et al. 2018). For our investigation, we focused on evaluating the indirect effects of herbicides on songbird reproductive output in the cavity-nesting House Wren (Troglodytes aedon, hereafter wren) because this species is a long-distance migrant that typically arrives to our study area after spring herbicide application takes place and is therefore most likely to be affected by indirect consequences of forest herbicide application, e.g., changes in vegetation structure and composition. We selected the wren for study because it has experienced a strong, long-term decline in the Pacific Northwest, i.e., 3% per year (Sauer et al. 2015), its abundance is known to initially decline with increases in herbicide application intensity (Betts et al. 2013), and it is strongly affected by climate across the western United States. Thus, decreases in the quality of early seral forest, changes in climate, or both may be linked to long-term wren population declines. We predicted that three components of wren reproductive output-nest survival, the number of offspring produced, and nestling body condition-would decrease with increasing management intensity as a result of herbicide-mediated changes in vegetation. We also predicted that measures of reproductive output would be enhanced by increased maximum daily air temperatures throughout the breeding season up to a threshold (nestlings: > 30 °C; eggs: > 38-40.5 °C; Pipoly et al. 2013, Wada et al. 2015), beyond which reproductive output would decrease, i.e., a quadratic relationship. We focused our assessment on mean daily maximum temperature (hereafter T max ) for two reasons. First, maximum temperatures can be used to index other temperature values that are physiologically relevant to birds, i.e., minimum and mean temperatures, and we found all three of these measures were statistically indistinguishable among treatments in our study system (Jones et al. 2018). In addition, increasing surface temperatures from global warming are likely driving several trends in weather and climate, with the most important being warmer temperature patterns, e.g., frequency of heat waves, warmer days and nights, fewer cold days and nights (IPCC 2013). Additional increases in higher temperatures have the strongest potential to alter demographic responses of birds, particularly through changes in activity during the breeding season, e.g., foraging or nestling provisioning (du Plessis et al. 2012). As the first investigation to evaluate the combined effects of herbicide intensity and air temperature on songbird reproductive output, this study highlights the need for new investigations that help songbird conservation planning within early seral forests under expected increases in human-induced climate change and forest management intensity.
Our study species, the wren, is a small (10-12 g), insectivorous, long-distance migrant passerine that is found throughout much of North America (Johnson 2014). It is a cavity-nesting songbird that readily uses nest boxes (Johnson 2014) and is found in a wide variety of open wooded habitats in Oregon from mid-April to mid-August. Female wrens lay clutches of 4 to 8 eggs and can be double-brooded, with a high rate of hatching (~90%) and fledgling success (~70-90%; Johnson 2014). Females alone incubate the eggs and brood the nestlings; however, both adults feed the nestlings (Johnson 2014). Of note, whole broods within unshaded nest boxes can die of apparent hyperthermia even when temperatures are relatively mild (≥32 °C; Johnson 2014).
Experimental design and herbicide treatments
Our study was undertaken as part of a broader investigation of biodiversity-timber production trade-offs that implemented a randomized complete block study design whereby 32 stands were located in eight separate blocks, with four treatment levels randomly applied to one stand within each block; for this study, we used a subset of 24 stands in six of the eight study blocks (Fig. 1). All blocks were located on intensively managed conifer forest across a 100 km (N-S) section of the northern Oregon Coast Range region, and all stands within each block were located > 5 km from each other (Fig. 1) to ensure spatial independence of treatments and to reduce within-block variation in stand characteristics.

Fig. 1. Location of the eight study blocks used to examine the influence of intensive forest management on early-seral forest biodiversity in the Oregon Coast Range. The six study blocks used in this study to assess the influence of intensive forest management and air temperature on House Wren (Troglodytes aedon) reproductive output are indicated by black rectangles.
All study sites were clear-cut harvested in fall 2009/winter 2010 and replanted in spring 2011 with Douglas-fir, the most common commercial species in our region, at a density of 1100 trees per ha. A suite of herbicides and surfactants typically used in commercial timber management operations (described in Appendix 1) was applied to stands in a manner that created a gradient in management intensity (Betts et al. 2013, Jones et al. 2018). This gradient included light, moderate, and intensive herbicide treatments, in addition to no-spray controls that received no herbicide application at any time during the course of this study; this led to strong differentiation in the amount of broadleaf cover between treatments (Appendix 2). All herbicide applications occurred in the typical time frame in which vegetation control takes place in intensively managed timber operations, and included aerial application of chemicals by helicopter as well as ground-based backpack spraying. Of note, the light and moderate treatments represented the range of herbicide application from landowners in our study region, i.e., state lands and private industrial lands, respectively, and the intensive treatment was implemented to quantify the full range of biological responses to herbicide application.
We placed a total of eight nest boxes/stand (n = 192) that were constructed such that songbirds the size of a wren could access them for nesting, i.e., 3.8-cm diameter entrance hole. We sited nest boxes with several considerations in mind: (1) equal distances between boxes (> 50 m separation), (2) even stand coverage, and (3) sampling logistics; vegetation was not a factor in box siting.
We established nest boxes for secondary cavity-nesting species likely to colonize our study sites (e.g., Tachycineta swallows, Western Bluebird [Sialia mexicana], House Wren), but we found that wrens colonized the great majority of boxes across our study sites. Thus, decisions about box placement were made to provide spatial independence between boxes so that all could be used by a range of species, and were not made based on expected wren territory sizes.
We placed a single iButton temperature logger on the underside of each nest box to quantify ambient air temperature throughout the wren breeding season; the iButton data reported here were part of a related study looking at how intensive forest management influences ambient temperature (Jones et al. 2018). We placed each iButton so it hung freely 5 cm below the box and was covered with a section of white 10 cm diameter PVC tube containing ventilation holes to allow airflow, minimize heat accumulation, and prevent direct exposure of the iButton to solar radiation and moisture. We recorded air temperature outside of nest boxes because doing so allowed us to standardize measurements in a way that was impossible inside nest boxes on account of marked variation in the architecture of individual nests (see McCabe 1965). Our pilot data indicated that temperatures inside nest boxes were warmer than external temperatures throughout the breeding season (mean = 4.4 °C warmer, SE = 0.8; Jones, Rivers, and Betts, unpublished data), so our results provide a conservative measure of the temperatures experienced by eggs and nestlings within nest boxes. Because of logistical constraints, we used two iButton models that varied slightly in their accuracy (± 1.0 °C accuracy, n = 71 boxes: ± 0.5 °C accuracy, n = 89 boxes), with the two models distributed evenly among stands. All iButtons were validated against an independent digital thermometer prior to placement (Omega HH609R, Omega Engineering, Norwalk, Connecticut, USA), and any iButton that deviated by ≥ 0.5 °C during our testing procedure was not used.
We programmed each iButton to record temperature every 15 min throughout each 24-hour period, and we used these data to calculate the mean daily maximum temperature (hereafter T max ) during each observation interval starting at midnight and extending for 24 h. Following previous studies using high-resolution temperature data, we removed measurements that appeared to be erroneous or were caused by instrument malfunction or damage to logging stations by wildlife. Specifically, we considered temperature data to be erroneous when values were > 50 °C or < -10 °C with no temporal or spatial precedence following Baker et al. (2014); this led us to remove ~5% of temperature values.
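As a minimal sketch of this screening and aggregation step (illustrative only: the data frame ibutton and its columns box_id, datetime, and temp_c are assumed names, not part of the original analysis), the daily maxima could be computed in R as:

  # drop implausible readings (> 50 degC or < -10 degC) attributed to logger
  # error or damage, following the screening rule described above
  clean <- subset(ibutton, temp_c <= 50 & temp_c >= -10)

  # collapse the 15-min records to one daily maximum per nest box,
  # with each day running from midnight to midnight
  clean$day <- as.Date(clean$datetime)
  daily <- aggregate(temp_c ~ box_id + day, data = clean, FUN = max)
  names(daily)[names(daily) == "temp_c"] <- "daily_max"

T max for a given nest-check interval would then be taken as the mean of these daily maxima falling within that interval.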
Measures of songbird reproductive output
We monitored nest boxes every 3-4 days throughout the breeding season (late April to early August 2014) to determine the number of eggs and/or nestlings present on each visit and to quantify reproductive output. We considered a nest to be successful if it produced at least one offspring; we considered a nest to have failed if (1) parents were absent and one or more eggs were missing or broken, or (2) all nestlings were dead or missing prior to the expected earliest date of fledging (Martin and Geupel 1993); the few nests that had an uncertain fate were removed from analyses. We note that we detected only eight instances (< 3%) of a nest being abandoned out of the pool of n = 282 nests that were located in this study, with two to three abandoned nests recorded in each treatment. Because nest abandonment was not biased toward any particular treatment(s) and occurred in a very small number of nesting attempts, excluding abandoned nests from analysis should not lead to any changes in our findings.
We quantified the number of nestlings that fledged from nests by taking the number of nestlings present on the last day we could visit nests without causing premature fledging, i.e., nestling day 8/9, where day 0 is the hatch date, and subtracting any nestlings found dead in the nest after the nest had finished (hereafter the number of offspring produced). Thus, for this measure we explicitly restricted our focus to successful nests to quantify how many young fledged from the nest because temperature effects can have negative consequences for nestlings (Murphy 1985, Pipoly et al. 2013. Wren offspring do not reach their growth peak until nestling day 10-13 (Zach 1982), but we were unable to measure nestlings at later developmental stages because of the risk of premature fledging (Rivers and Jones, personal observation). In addition to the number of offspring produced, we also evaluated nestling body condition as a measure of offspring quality. During nest checks on nestling day 8/9 we measured right tarsus length, right wing chord (± 0.5 mm), and body mass (± 0.1 g). Prior to analysis, we calculated the average body mass, tarsus length, and wing chord in each nest because nestlings sharing a nest are not independent in growth. Because of logistical constraints, we restricted our measurements of nestling body condition to four of the six study blocks (16 stands).
Statistical analysis
All models were fit in the R statistical environment (v3.2.0; R Core Team 2015), and we provide a summary of all a priori candidate models describing herbicide treatments and air temperature effects on wren nest survival, offspring production, and nestling body condition in Appendix 3. Models for the number of offspring produced and nestling body condition were fit using the lme function of the nlme package (Pinheiro et al. 2015). Nest survival models were fit using the glmer function of the lme4 package. We constructed three sets of linear mixed models, with each model representing a priori hypotheses about the three distinct measures of reproductive output, i.e., nest survival, the number of offspring produced, and nestling body condition, to separately model the relationship between (1) herbicide treatment and reproductive output, and (2) air temperature (T max ) and reproductive output (using the same measures mentioned previously; see Table 1). We included models with and without a quadratic term for T max because the relationship between reproductive measures and T max could be nonlinear. For example, moderate increases in T max may benefit nestlings by reducing their thermoregulatory costs, but excessive increases in temperature could result in negative consequences via thermoregulatory behaviors, e.g., panting. We included elevation as a covariate in all models to control for differences in elevation between stands, and between nest box locations within stands. We also included three random effects in all models: (1) study block, (2) stand, and (3) nest box. The random effects for block and stand account for potential correlation of nest fates within blocks and stands, whereas the random effect for each nest box accounts for potential correlation of fates between nests occurring in the same nest box. Our final dataset represented n = 282 nests, more than the n = 192 nest boxes that were available, because the majority of boxes experienced multiple nesting attempts (i.e., nest boxes with two attempts: n = 134; nest boxes with three attempts: n = 17); thus, our models contained an abundance of information for estimating variance within nest boxes. This condition, and that none of our models experienced convergence issues, led us to conclude that the mixed effects models we used performed well and were able to provide robust estimates of variance.

Table 1. Model selection results from a priori candidate models describing the effects of herbicide treatment and T max on House Wren (Troglodytes aedon) nest survival, the number of offspring produced, and nestling body condition. Models are ranked in ascending order of the difference between each model and the best model, using Akaike's Information Criterion adjusted for small sample sizes (ΔAICc). The number of parameters (k), the relative likelihood of a model, AICc weights (w i ), and evidence ratio (ER) are given for each model.

Although songbird reproductive output often declines with seasonal advancement (Perrins 1970, Martin 1987), the date of nest initiation in our study was highly correlated (r > 0.8) with T max . Recent work has indicated that model-averaged coefficients based on AIC weights are invalid where there is multicollinearity among predictor variables (Cade 2015), so we did not include a covariate for nest initiation date in our analyses.
We used a second-order Akaike's Information Criterion, AICc, to quantify the relative strength of all competing linear regression models for each response variable (Anderson 2008). Model parameter estimates were averaged across the set of candidate models for each response variable (Anderson 2008). We used the MuMIn package (Barton 2015) to conduct the model selection and averaging process. Before performing model selection, the global model, i.e., candidate model with the most parameters, for the number of offspring produced and nestling body condition response variables was evaluated for compliance with model assumptions (see below regarding nest survival model checking).
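As an illustration of this workflow (the model objects m_null, m_trt, m_tmax, m_tmax2, and m_trt_tmax2 are placeholders for fitted candidate models, not the original script), the MuMIn selection and averaging steps might look like:

  library(MuMIn)

  cand <- list(null = m_null, trt = m_trt, tmax = m_tmax,
               tmax2 = m_tmax2, trt_tmax2 = m_trt_tmax2)

  sel <- model.sel(cand)   # ranks candidates by AICc; reports delta AICc and weights
  avg <- model.avg(sel)    # averages parameter estimates across the candidate set
  summary(avg)
  confint(avg)             # confidence intervals for the averaged coefficients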
We used the logistic exposure method to estimate daily nest survival rate (Shaffer 2004), which allows for the modeling of time-dependent covariates (Grant et al. 2005). Logistic exposure takes into account the fate of a nest during successive observation intervals, i.e., nest checks, by using a logistic-exposure link function that explicitly considers exposure time as measured by the length of observation interval (Shaffer 2004). In all logistic exposure models we included a term for mean nest age, i.e., nest age at the midpoint of each observation interval (see Grant et al. 2005) because nest age can influence nest survival rates (Thompson 2007). All continuous variables were standardized prior to analysis, and subsequently unstandardized to allow for interpretation of estimates (Hosmer and Lemeshow 2013). Currently, no effective methods exist for testing fit of logistic exposure models (Shaffer and Thompson 2007;Shaffer, personal communication); therefore, we used binned residual plots from the R package arm (Gelman and Su 2015) as a qualitative test, and we found no evidence to indicate a lack of fit with regard to normality among model residuals.
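Because the logistic exposure link is not built into base R, it is usually supplied as a custom link function following Shaffer (2004). The sketch below follows a widely circulated implementation of that link and uses illustrative variable names (nests, fate, interval_days) rather than the original code; the vectorized exposure passed to the family must be aligned row-for-row with the data:

  # "logexp" link: survival over an interval of t days is p^t,
  # where p is the daily survival rate modeled on the logit scale
  logexp <- function(exposure = 1) {
    linkfun  <- function(mu) qlogis(mu^(1/exposure))
    linkinv  <- function(eta) plogis(eta)^exposure
    mu.eta   <- function(eta) exposure * plogis(eta)^(exposure - 1) *
                              binomial()$mu.eta(eta)
    valideta <- function(eta) TRUE
    structure(list(linkfun = linkfun, linkinv = linkinv, mu.eta = mu.eta,
                   valideta = valideta, name = "logexp"),
              class = "link-glm")
  }

  library(lme4)
  # fate = 1 if the nest survived the observation interval, 0 if it failed;
  # interval_days is the exposure length for that nest check
  m_surv <- glmer(fate ~ treatment + mean_nest_age + elevation +
                    (1 | block) + (1 | stand) + (1 | box),
                  family = binomial(link = logexp(nests$interval_days)),
                  data = nests)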
To model the number of offspring produced, we fit linear mixed models to our dataset, which was a left-censored Poisson distribution of the response and thus severely underdispersed (dispersion parameter [c-hat] = deviance/degrees of freedom; r < 0.4, P < 0.001). Because the censored data approximately followed a normal distribution, we treated the data distribution as an approximation of the normal distribution for analysis (Greene 2005). We evaluated six models that tested the relative importance of herbicide treatment intensity and T max effects on the number of offspring produced (Table 1). All models contained a term for clutch size, i.e., the maximum number of eggs laid during a nest attempt, to control for any differences in starting brood size. To model nestling body condition, we fit linear mixed models. Although body condition indices such as body mass regression residuals are used widely in the literature (Labocha and Hayes 2012), recent work suggests that simple measures of body mass can outperform body condition indices as a measure of energy stores (Schamber et al. 2009, Labocha and Hayes 2012). Therefore, we focused on body mass and included tarsus length to control for structural size in our analyses (Bowers et al. 2014, Paquette et al. 2014). In addition, we included a covariate for mean nestling age to control for differences in when we measured nestlings for growth.
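The model structure described above might be specified as follows (a sketch only; the data frame broods and its variable names are assumptions, not the original code):

  library(nlme)

  # number of offspring produced per successful nest
  m_offspring <- lme(n_fledged ~ treatment + clutch_size + elevation,
                     random = ~ 1 | block/stand/box,
                     data = broods, method = "ML")

  # nest-mean nestling body mass, controlling for structural size and age
  m_condition <- lme(mean_mass ~ treatment + mean_tarsus + mean_nestling_age +
                       elevation,
                     random = ~ 1 | block/stand/box,
                     data = broods, method = "ML")

Fitting with maximum likelihood (method = "ML") rather than REML keeps the candidate models comparable by AICc in the selection step described above.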
RESULTS
The total number of nesting attempts by wrens was similar among herbicide treatments and the control, with a mean of 11.8 (± 4.4 SD) nests initiated over the eight boxes available on each stand; this reflected a combination of high nest box occupancy and multiple nesting attempts within individual boxes. Of the 282 nests we monitored, 28% failed, and the great majority of these failures were attributed to predation (95%). We found that although all models containing herbicide treatment were better supported than the null model (ΔAICc = 2.69, evidence ratio [ER] = 3.83; Table 1), we did not detect an effect of herbicide treatment on daily nest survival ( Fig. 2A, Table 2). Nevertheless, herbicide treatment did have a negative influence on mean daily survival rate when compared to the control treatment, with the greatest effect in the light treatment, with less pronounced effects in both the intensive and moderate treatments ( Table 2, Fig. 2A). The parameter estimate confidence intervals for herbicide treatment effects on nest survival were large (Table 2), indicating modest statistical power to detect effects.
We also did not detect a relationship between T max and nest survival (Fig. 2B, Table 2). Additionally, we did not find any evidence of a quadratic relationship between T max and mean daily nest survival rate (β = -0.14, 95% CI: -0.28, -0.01; Table 2). We did find support for combined effects of treatment and T max on mean daily survival rate, as the best supported model contained treatment, T max , and its quadratic term (w i = 0.35; Table 1). However, the best supported model containing herbicide treatment and a quadratic effect of T max was only 1.3× more likely than the next best model, which contained treatment only (ΔAICc = 0.53, ER = 1.30; Table 1). We note that the comparable ΔAICc values of all models included in the model set (≤ 4.08) in addition to comparable evidence ratios (Table 1) indicated that all hypotheses were equally plausible (Table 2).
On average, 4.9 (± 1.6 SD) offspring fledged from each successful nest across all treatments and the control. We did not detect an effect of treatment on the mean number of offspring fledged (Fig. 3A, Table 1), nor did we detect an effect of T max on the number of offspring produced per nest ( Table 2). The best-supported model was the null model containing elevation and maximum brood size (w i = 0.62; Table 1). All models containing herbicide treatment had AICc weights < 0.04 and evidence ratios > 16 (Table 1). The mean number of offspring produced generally decreased with increasing T max (Fig. 3B); however, this estimated effect was very small (-0.01 nestlings/1 °C; 95% CI: -0.07, 0.05). We also did not detect evidence of a quadratic effect of T max on the number of offspring produced (0 nestlings/1 °C; 95% CI: -0.01, 0.01). However, comparable ΔAICc values and evidence ratios for T max (ΔAICc = 2.01, ER = 2.73) and its quadratic term (ΔAICc = 3.81, ER = 6.70) indicated that T max is an equally plausible explanation compared to the null model (Table 1). Finally, we did not find support for combined effects of treatment and T max on the mean number of offspring produced (Table 2); both models containing treatment and T max had ΔAICc values > 7 and ER > 47 (Table 1).
On average, nestling mass on day 8/9 averaged 8.86 g (± 0.76 SD) across all treatments and the control. The null model containing elevation, mean tarsus length, and mean nestling age was the best-supported model (w i = 0.36; Table 1). However, the model containing treatment was the second best supported model (ΔAICc = 0.80, ER = 1.49), which suggests some support for an effect of treatment on mean nestling day 8/9 body mass, with somewhat lower body mass in unsprayed controls (Fig. 4A). Similarly, we did not detect an effect of T max on nestling body condition (Table 2, Fig. 4B), nor did we detect evidence of a quadratic effect of T max on mean nestling body condition (Table 2). Last, we did not find support for combined effects of treatment and T max on nestling body condition (Table 2). However, models containing treatment and T max and T max (quadratic) were 4.7× and 7.5× more likely than the null to explain variation in nestling body condition, respectively (Table 1). Thus, all hypotheses were equally plausible at explaining variation in nestling body condition, evidenced by comparable ΔAICc values and ERs of all models in the model set (Table 1).
Fig. 2. Plots depicting the effect of herbicide treatment and T max on House Wren (Troglodytes aedon) nest survival during the breeding season in the Oregon Coast Range, 2014.
(A) Differences in odds ratio estimates of nest survival between the effect of herbicide treatments relative to control stands; odds ratios were averaged over all models in the candidate set. The dashed horizontal line represents odds ratios of the control for comparison with herbicide treatments; 95% CIs that overlap one indicate lack of treatment differences relative to the control treatment. (B) Boxplots depicting T max values from nests that either failed (gray) or fledged offspring (white) across all four treatments. Horizontal bars within boxes represent medians, boxes are interquartile ranges, whiskers are 1.5× interquartile range, and dots are outlying data.
DISCUSSION
Our study found that the use of forest herbicides within intensively managed conifer stands did not have a strong influence on three measures of wren reproductive output: nest survival, the number of offspring produced, and nestling body condition. Furthermore, we also found that air temperature effects on components of wren reproductive output were negligible, with little empirical support for a combined effect of herbicide treatment and air temperature.
Of the factors we assessed, herbicide treatment appeared to have a stronger impact on wren nest survival than did temperature. That nest failure was driven almost entirely by predation suggests that the differences in nest survival between treatments we detected were caused by variation in the predator community, treatment-mediated differences in vegetation, or both. Among herbicide treatments, nest survival differed substantially; the reduction in daily survival rate relative to control sites was 2.9× greater in the light treatment than in the intensive treatment. Wren nest survival is known to decrease with increasing vegetation density at the nest (Belles-Isles and Picman 1986, Finch 1989, Li and Martin 1991, Hane et al. 2012), perhaps via enhanced hiding cover for small predators that can enter small nest cavities and are themselves subject to higher-trophic-level predators, e.g., raptors. However, the notion that greater vegetation cover led to reduced nest survival was not supported in our study because predation rates were lowest in the control stands where vegetation cover was greatest and small predators would be expected to have the greatest amount of hiding cover. Another possibility is that conspecific egg-pecking behavior by wrens, a widespread behavior (Belles-Isles and Picman 1986, Johnson 2014), played a role in driving nest failure and varied across the herbicide treatments. Regardless of whether egg-pecking contributed to nest failure in our study, it is clear that the mortality agent(s) driving nest failure is not straightforward and leads us to conclude that changes in vegetation arising from forest herbicide application have an inconsistent relationship with wren nest survival.
The greater nest survival of wren nests in control sites relative to moderate and intensive herbicide treatments in our study is consistent with another study of the effects of herbicide-driven changes in vegetation composition on songbird nest survival (Easton and Martin 1998). However, the study by Easton and Martin focused on open-cup nesting songbirds and was conducted after a substantially longer period of time had passed following harvest relative to the time since harvest in our study (15-18 y vs. 3 y, respectively). Thus, additional factors influencing nest success are likely to have differed, making broad generalizations challenging. In contrast, results from this study differed from a recent study on reproductive output of the White-crowned Sparrow (Zonotrichia leucophrys). That study, which was conducted on some of the same stands as this investigation of wren reproductive output, found that sparrow nest survival was unrelated to herbicide intensity. Although nest survival rates were lower in sparrows than in wrens, the sparrow is an open-cup nesting species with a broader suite of predators than those capable of depredating nests of cavity-nesting species (Li and Martin 1991) and therefore would be expected to have nests that are more likely to fail due to predation. Given that only a handful of studies have evaluated how forest herbicides influence songbird vital rates and their conclusions have differed, there is no strong consensus at the current time about the role forest herbicides play in influencing songbird reproductive output, making additional investigations of this topic a priority.
In addition to influencing predation rates, a decrease in the cover of broadleaf vegetation may also influence songbird reproductive output by affecting the amount and quality of food available (Comet 1996b, Hagar 2007). For example, lepidopteran larvae, an important source of energy and nutrients for songbird nestlings (Rodenhouse and Holmes 1992), are positively associated with increased abundance of broadleaf vegetation (Miller 1998, Miller et al. 2003). Thus, we expected that food availability would decrease with increasing herbicide treatment intensity, which would in turn result in decreased nestling body mass and/or the number of offspring produced. However, we found that herbicide treatment effects were as likely as our other hypotheses in explaining the number of offspring produced and their body mass, suggesting that herbicide application did not lead to differences in food availability for wrens. This idea is strengthened by data collected from our study stands that indicate stand-level arthropod biomass collected during the time of this study did not differ relative to herbicide treatment (Verschuyl, Rivers, and Betts, unpublished data), including a large-scale assessment of Lepidoptera (Root et al. 2017). This may be due to the generalist foraging approach used by wrens and the broad diet they feed to their nestlings (Johnson 2014; Rivers, unpublished data), which may result in wrens being less sensitive to changes in broadleaf vegetation than other early seral forest songbirds (Betts et al. 2010, Ellis and Betts 2011, Kroll et al. 2017). An alternative and nonmutually exclusive possibility for the lack of differences in the number of offspring produced or nestling body condition relates to compensatory behavioral responses of adults, assuming that realized food availability to wrens did indeed differ between treatments. Food limitation can strongly influence reproductive success and parents can adjust provisioning behaviors to a range of factors (e.g. Martin 1987, Peluc et al. 2008), so wren parents could have altered provisioning rates and/or food loads in a way that equalized offspring production even if food availability differed across treatments. We did not quantify adult provisioning in this study, so such behavioral adjustments by wren parents could help explain why we did not detect differences in reproductive output between treatments. Finally, we note that our assessment of body condition was necessarily limited to the period of chick development by our study design, and that the postfledging period is an especially challenging period in the songbird life cycle that is characterized by low survival rates (reviewed in Cox et al. 2014), including forest birds that use early seral conifer forests in our study area (Rivers et al. 2012). Studies of postfledging survival in songbirds, including wrens, are surprisingly limited (Cox et al. 2014), so it remains unknown if body condition in wren chicks could have had subsequent impacts on postfledging survival.
Our finding of a lack of combined effects of air temperature and herbicide-induced changes in vegetation differs from several previous studies that have found a relationship between air temperature and songbird reproduction (Cox et al. 2013, Becker and Weisberg 2015, Salaberria et al. 2014). One explanation for this disparity is that air temperatures in our study area may have been unlikely to exert negative effects on developing eggs and nestlings (Pipoly et al. 2013, Wada et al. 2015). For example, we found that T max never exceeded 33 °C at any of our study sites across the entire breeding season, which falls within the optimal range for eggs (DuRant et al. 2013), although it should be noted that temperatures > 30 °C have been shown to negatively affect nestling growth (Murphy 1985, Pipoly et al. 2013). Taken together, it would appear that temperature effects had limited influence on wren nestling growth because such high temperatures were relatively infrequent on our study sites, e.g., 18% (340/1869) of all observation periods. Coupled with small effects of T max on nest survival and body condition and the relatively low variability surrounding these estimates, our results suggest that any combined effects of temperature, if they were present, had only a limited effect on wren reproductive output.
Habitat quality elements that may be altered by herbicide use in early seral forests include microclimatic air temperature, as well as vegetation composition and structure (Lehtinen et al. 2003). In our study, we found that patterns of within-season air temperature and herbicide-mediated changes in vegetation did not strongly affect songbird reproductive output in temperate, intensively managed, early seral coniferous forest, either singly or in concert with one another. Nevertheless, it is plausible that projected future increases in climate (IPCC 2013) may lead to combined effects of air temperature and forest cover change that reduce songbird reproductive output in ways that are not currently present. Herbicides are widely used in intensive forest management to control competing vegetation (Wagner et al. 2006), and their use in regenerating forests can alter the structure and function of early seral communities (Flueck and Smith-Flueck 2006), especially in intensively managed conifer forests. Such areas are expected to meet increased demands for wood products in the coming decades (Sloan and Sayer 2015), so as climate change continues (IPCC 2013) researchers should recognize that both pressures must be considered in tandem to better understand the response of animal populations to global change. Furthermore, because land use change and climate can have a synergistic influence on animal populations (Northrup et al. 2019), additional studies that expand our understanding of the relative influence of each factor and their combined effect will be essential for formulating future conservation strategies. | 2020-05-21T09:15:40.630Z | 2020-05-20T00:00:00.000 | {
"year": 2020,
"sha1": "7deb8d42e14c2b1a72820884e23ac09b550cf99d",
"oa_license": "CCBY",
"oa_url": "http://www.ace-eco.org/vol15/iss1/art18/ACE-ECO-2020-1578.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e05d1e3218af9439edf643e21b15b3d0773ff436",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Environmental Science"
]
} |
235899213 | pes2o/s2orc | v3-fos-license | New Candidates for AM Canum Venaticorum Stars among ASAS-SN Transients
We studied Zwicky Transient Facility (ZTF) light curves of 34 dwarf nova candidates discovered by All-Sky Automated Survey for Supernovae (ASAS-SN) between 2020 May 12 and September 9 and found 6 AM CVn-type candidates. All objects showed short outbursts (post-superoutburst rebrightenings) on the fading tail. Two objects (ASASSN-20eq, ASASSN-20la) showed double superoutbursts. Three objects (ASASSN-20jt, ASASSN-20ke, and ASASSN-20lr) showed short superoutbursts (5-6 d). These features in the light curve can be used in discriminating AM CVn-type candidates from hydrogen-rich systems. In contrast to hydrogen-rich systems, some objects did not show a red color excess during the rebrightening or fading tail phase. We interpret that this is due to the higher ionization temperature in helium disks. Two objects had long (likely) supercycles: ASASSN-20gx (8.5 yr) and ASASSN-20lr (7 yr). We provide a scheme for identifying AM CVn-type candidates based on the light curve characteristics.
Introduction
AM CVn stars are a class of cataclysmic variables (CVs) containing a white dwarf (primary) and a mass-transferring helium white dwarf (secondary). [For a review of AM CVn stars, see e.g. Solheim (2010)]. In systems with high mass-transfer rates (M dot ), the accretion disk around the primary becomes thermally stable and no outbursts are observed. In systems with lower M dot , the disk becomes thermally unstable and dwarf nova (DN)-type outbursts occur (Tsugawa and Osaki 1997;Solheim 2010;Kotko et al. 2012). The mass-transfer in AM CVn stars is driven by angular momentum loss due to the gravitational wave radiation and M dot is a strong function of the orbital period (P orb ). Systems with short P orb (less than 22 min) have thermally stable disks and those with longer P orb have thermally unstable disks (Ramsay et al. 2012;Kotko et al. 2012). It is not yet clear whether AM CVn stars with extremely low M dot have thermally unstable disks and show outbursts, but the recent discovery of an outbursting AM CVn star, ZTF20acyxwzf, with P orb =0.0404(3) d (N. Kojiguchi et al. in preparation) suggests that AM CVn stars even with the longest P orb have thermally unstable disks.
AM CVn stars have low mass-ratios (q = M 2 /M 1 ). In systems with low q, the disk becomes tidally unstable due to the 3:1 resonance (Whitehurst 1988;Hirose and Osaki 1990;Lubow 1991) and the precessing eccentric disk whose eccentricity is excited by the 3:1 resonance causes superhumps and superoutbursts (Osaki 1989). In extremely low-q systems, the disk can even hold the radius of the 2:1 resonance and this is believed to be responsible for the WZ Sge-type phenomenon (Osaki and Meyer 2002), which show infrequent large-amplitude superoutbursts and often post-superoutburst rebrightenings (Kato 2015). Post-superoutburst rebrightenings in AM CVn stars are relatively commonly seen (Isogai et al. 2015a;Duffy et al. 2021).
AM CVn stars have recently been receiving special attention and there have been a number of projects in search of AM CVn stars. Anderson et al. (2005) and Rau et al. (2010) used spectra and Carter et al. (2013) used colors in the Sloan Digital Sky Survey (SDSS). Levitan et al. (2015) used the Palomar Transient Factory (PTF) to detect outbursting AM CVn stars. High time-resolution observations also discovered several AM CVn stars [e.g. Burdge et al. (2019) and Burdge et al. (2020) using the Zwicky Transient Facility (ZTF) data]. A number of outbursting AM CVn stars have been identified by time-series observations to search for superhumps (e.g. Kato et al. 2015b;Isogai et al. 2019;Isogai et al. 2015a). Most recently, van Roestel et al. (2021) selected ZTF transients by colors and detected several new AM CVn stars.
In this paper, we present new candidate AM CVn stars from potential dwarf novae (DNe) recently detected by the All-Sky Automated Survey for Supernovae (ASAS-SN) (Shappee et al. 2014;Kochanek et al. 2017), using the public light curves of the ZTF survey. We supplemented the data using the Asteroid Terrestrial-impact Last Alert System (ATLAS) Forced Photometry (Tonry et al. 2018). The list of objects is shown in table 1. The coordinates and variability range are taken from AAVSO VSX. The parallaxes and quiescent magnitudes are taken from Gaia EDR3 (Gaia Collaboration et al. 2021).

Individual Objects
ASASSN-20eq
This object was detected at g=15.6 on 2020 May 12. We noticed that this object showed multiple rebrightenings on a fading tail from ZTF observations. The quiescent color in SDSS is unusual in that the object had a strong ultraviolet excess of u − g=−0.22 (vsnet-alert 25852). By supplying ATLAS and ASAS-SN observations, D. Denisenko (vsnet-alert 25859) and P. Schmeer (vsnet-alert 25861) identified the initial superoutburst.
The combined light curve (figure 1) shows two superoutbursts (JD 2458979-2458984 and JD 2458986-2458991). The overall light curve is very similar to that of the "double superoutburst" of the AM CVn star NSV 1440 (Isogai et al. 2019). Very rapid fading (more than 2 mag d −1 ) of rebrightenings is also characteristic of AM CVn-type outbursts [there is a Bailey relation for hydrogen-rich dwarf novae: the decline rate T decay is proportional to P_orb^0.79 (Warner 1987;Warner 1995) and it has been confirmed to apply to AM CVn stars (Patterson et al. 1997)]. Based on these characteristics of the light curve and a strong ultraviolet excess, we identified this object to be an AM CVn star. The initial superoutburst was most likely characterized by the 2:1 resonance (Isogai et al. 2019;Kato et al. 2014) and the second one almost certainly showed ordinary superhumps. Based on the similarity of the light curve with that of NSV 1440, the P orb of ASASSN-20eq is expected to be around 0.025 d.
ASASSN-20gx
This object was detected at g=15.4 on 2020 June 16 and further brightened to g=14.8 on 2020 June 21. We noticed multiple rebrightenings on a fading tail from ZTF observations as in ASASSN-20eq (vsnet-alert 25853). The light curve (figure 2) suggests that ASAS-SN observations missed the initial superoutburst during observational or seasonal gaps (maximum of a 12 d gap and the observations started just after the seasonal gap).
There were at least five rebrightenings (assuming that there was an unrecorded superoutburst). These rebrightenings showed rapid fading (2 mag d −1 ). Combined with the blue color in quiescence (u − g=+0.15 in SDSS), we consider that this object is also an AM CVn star. Although we cannot completely exclude the complete absence of the initial superoutburst from the available observations, this possibility appears to be low considering the similarity of the light curve of the fading tail with those of other well-observed superoutbursts of AM CVn stars.
ASASSN-20jt
This object was detected at g=17.0 on 2020 August 7. The light curve based on the ZTF data (figure 3) indicates brightening from r=19.27 on 2020 August 5 (JD 2459067) to g=18.00 on 2020 August 6, followed by a dip at g=20.32 on 2020 August 7 (despite the ASAS-SN transient detection, no positive observation was available from the ASAS-SN Sky Patrol). After this, a long outburst lasting at least 4 d was recorded. There were six rebrightenings on a fading tail. The initial short outburst was likely a precursor and the long outburst was likely a superoutburst. Based on the short duration of the main superoutburst and rapid fading (up to 1.7 mag d −1 ), we identified this object as a likely AM CVn star.
There was a similar, but less observed, outburst in 2018 October-December in the ZTF data. The observations only recorded the phase of fading tail and three rebrightenings were detected on it. The supercycle of this object is estimated to be ∼670 d.
ASASSN-20ke
This object was detected at g=16.2 on 2020 August 18. The light curve based on the ZTF and ASAS-SN data (figure 4) indicates the initial long outburst lasting 6 d. There were at least three confirmed post-superoutburst rebrightenings. Although there were several more ASAS-SN detections around g=17.0, they were spurious detections near the detection limit, as confirmed by comparison with ATLAS data. These detections were not plotted on the figure. This object is most likely classified as an AM CVn star based on the short duration of the initial superoutburst and rapid fading (more than 2 mag d −1 ) of rebrightenings.
There was another outburst in 2019 July-August. This outburst was only detected by ZTF and ATLAS and rebrightenings on a long-lasting (at least 70 d) fading tail were recorded. The initial part of this outburst was not recorded due to the long observational gap. The supercycle of this object is estimated to be ∼410 d.
ASASSN-20la
This object was detected at g=16.1 on 2020 August 28. The light curve based on the ZTF, ASAS-SN and ATLAS Forced Photometry data (figure 5) indicates the initial superoutburst lasting 6 d (JD 2459088-2459064) followed by a dip, and the possible second superoutburst (JD 2459098-2459101). Six post-superoutburst rebrightenings were detected on the fading tail. The shortness of the initial superoutburst is incompatible with a hydrogen-rich DN. Likely double superoutburst and the rapid fading rate (more than 2 mag d −1 ) of rebrightenings also support the AM CVn-type classification. No previous outburst was detected in ASAS-SN (since 2013 November) and ZTF (since 2018 June). Even considering the seasonal observational gaps, the lack of previous signature of a fading tail suggests that the supercycle is longer than 900 d.
ASASSN-20lr
This object was detected at g=15.9 on 2020 September 9. The light curve based on the ZTF and ASAS-SN data (figure 6) indicates the initial superoutburst lasting 5 d (JD 2459100-2459105). There were at least three post-superoutburst rebrightenings. As in ASASSN-20la, the short duration of the initial superoutburst and the rapid fading rate (more than 2 mag d −1 ) of rebrightenings support the AM CVn-type classification. Pan-STARRS1 data recorded a fading tail in 2016 and the supercycle of this object is estimated to be ∼7 yr.
Number statistics
A total of six AM CVn candidates discovered within four months is a remarkably high number. They comprised 18% of the 34 newly discovered ASAS-SN DN candidates during the same interval that had ZTF light curves with sufficient temporal coverage and quality to allow type classification. The total number of ASAS-SN DN candidates during the same interval was 111. The ratio of 18% appears too high to reflect the population statistics among DNe, and this high number may have simply been a result of random fluctuation. The estimated parent fraction of AM CVn candidates has a 95 percent confidence interval of [0.068, 0.345]. In fact, in our previous survey of SU UMa-type DNe, we found that 8% of objects showing dwarf nova-type outbursts were AM CVn-type objects (8 out of 105 outbursts; all of them were confirmed either by spectroscopy or by the detection of superhumps or eclipses). The 95 percent confidence interval of the parent fraction of AM CVn stars in this sample was [0.033, 0.145]. This interval overlaps with the estimate in the present study. Combined with the sequence of recent discoveries of AM CVn stars or candidates [ASASSN-21au = ZTF20acyxwzf (Isogai et al. 2021), ASASSN-21eo (vsnet-alert 25635) and ASASSN-21hc (vsnet-alert 25849, 25868)] among ASAS-SN DN candidates, the recent statistics of outbursting objects may suggest a signature of a larger fraction of AM CVn stars among CVs than had previously been thought (e.g. about 1% in RKcat Edition 7.21; Ritter and Kolb 2003). There may have also been selection biases, such as the past detection scheme in ASAS-SN more easily detecting hydrogen-rich DNe than helium ones (e.g. the short duration of superoutbursts in helium DNe would make detections more difficult in low-cadence surveys). Such a potential bias needs to be examined in more detail in discussing the fraction of AM CVn stars among DNe.
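If these are exact (Clopper-Pearson) binomial intervals, which the quoted values are consistent with, they can be reproduced directly in R using the counts given above (shown as an illustration only):

  # 6 AM CVn candidates out of 34 classifiable ASAS-SN DN candidates
  binom.test(6, 34)$conf.int    # approximately [0.068, 0.345]

  # 8 AM CVn objects out of 105 outbursting objects in the earlier survey
  binom.test(8, 105)$conf.int   # approximately [0.033, 0.145]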
Post-superoutburst rebrightenings and fading tail
In hydrogen-rich CVs, multiple post-superoutburst rebrightenings are associated with WZ Sge-type DNe (Kato 2015), but not exclusively [e.g. V1006 Cyg, Kato et al. (2016); ASASSN-14ho, Kato (2020)]. Although the cause of these rebrightenings is still poorly understood, the infrared or red excess observed during the rebrightening phase or during the fading tail in hydrogen-rich systems (Uemura et al. 2008;Matsui et al. 2009;Chochol et al. 2012;Nakagawa et al. 2013;Golysheva and Shugarov 2014;Isogai et al. 2015b) is usually considered to arise from an optically thin region that is located outside the optically thick disk, and this matter in the outer disk would serve as a mass reservoir (Kato et al. 1998;Kato et al. 2004) to enable rebrightenings.
Among the AM CVn-type candidates we studied, ASASSN-20eq and ASASSN-20jt did not show significant red colors (ZTF g − r) during the rebrightening/fading tail phase (we refer to the colors in comparison with those around the outburst peak or in quiescence). This is in apparent contrast to hydrogen-rich systems. This may be a result of the higher ionization temperature of helium compared to hydrogen, such that an optically thin region can still emit bluer light than in hydrogen-rich systems. These instances suggest that the lack of red excess during the rebrightening/fading tail phase can be used for identifying AM CVn stars.
Two objects (ASASSN-20gx and ASASSN-20lr) showed some degree of red color during the same phase. These instances suggest that the outer part of the disk can become cool enough to emit red light even in helium systems.
Implication on transient selection
As introduced in section 1, AM CVn stars have been receiving much attention in recent years. Selecting AM CVn stars from other CVs, however, has been a challenge in most cases. Since the fraction of AM CVn stars among CVs is low, there always arises a serious problem of detecting a small number of objects against the far more numerous non-AM CVn background. This usually causes a large number of false positives (undesired hydrogen-rich systems) if the criterion is loose. With a more stringent criterion, many false negatives (many AM CVn stars classified as ordinary CVs) occur. The small number of AM CVn samples would also make machine learning difficult.
Using one of the criteria (BP−RP < 0.6) in van Roestel et al. (2021), ASASSN-20jt (BP−RP=+0.76) becomes a false negative among the three objects with known Gaia colors. Using their criterion for high priority candidates (−0.6 < BP−RP < 0.3), all three objects with Gaia colors in our sample are not considered as high priority. This indicates the limitation of choosing candidates by colors only (particularly in the presence of a high number of "background" hydrogen-rich objects).
We propose to use the structure of the light curve as a better selection tool of AM CVn-type candidates. We also added some additional features useful for identifying AM CVn-type outbursts.
They are:

1. Rapid fading (more than 1.5 mag d −1 during any part of the light curve).

2. Short duration (usually 5-6 d) of the superoutburst. In hydrogen-rich systems, superoutbursts usually last more than 10 d.

3. Double superoutburst. Double superoutbursts are rare in hydrogen-rich systems, and the initial superoutburst of a double superoutburst lasts longer than 10 d (see e.g. Kato et al. 2013). Rapid fading (item 1) after a ∼5 d-long outburst (item 2) is a strong sign of an AM CVn system, and observations to watch for the second superoutburst and the emergence of superhumps are very desirable.

4. Long fading tail lasting 100-200 d despite the lack of a long (usually more than 20 d in hydrogen-rich systems) superoutburst. The lack of red excess in this stage would also be a signature of an AM CVn system.

5. Outburst amplitudes of long outbursts (4-6 mag) smaller than in hydrogen-rich WZ Sge stars (6-8 mag or even more). This reflects the small disk size in AM CVn-type objects. Potential confusion with outbursts in hydrogen-rich systems with lower amplitudes (such as SU UMa stars or SS Cyg stars) could be avoided by confirming the absence of past outbursts.

6. In the same sense, a faint absolute magnitude (significantly fainter than +4) of a long outburst can be a signature of an AM CVn-type superoutburst, if the parallax is known. For example, the maximum absolute magnitude of ASASSN-20lr is +6.3(4).
In summary, item 1 is probably most useful in practice. If the number of observations is sufficient, item 2 is also very helpful. If the light curve is known long after the event, items 3 and 4 will be helpful. Items 5 and 6 will be helpful if the quiescent counterpart can be identified or Gaia parallax is available.
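As a minimal illustrative sketch (not the supplementary toy code discussed below) of how items 1 and 2 might be screened automatically, assuming a photometric time series lc with columns jd (Julian Date) and mag, and treating the outburst duration simply as the time span of detections within one magnitude of peak:

  is_amcvn_like <- function(lc, fade_thresh = 1.5, max_duration = 6,
                            peak_window = 1.0) {
    lc <- lc[order(lc$jd), ]
    # item 1: rapid fading -- maximum fading rate between consecutive detections
    rate <- diff(lc$mag) / diff(lc$jd)        # mag d^-1 (positive = fading)
    rapid_fade <- any(rate > fade_thresh, na.rm = TRUE)
    # item 2: short superoutburst -- span of detections within 1 mag of peak
    peak <- min(lc$mag, na.rm = TRUE)
    bright <- lc$jd[lc$mag <= peak + peak_window]
    short_outburst <- (max(bright) - min(bright)) <= max_duration
    rapid_fade && short_outburst
  }

The thresholds and the crude duration estimate (which would also count any rebrightening that comes within 1 mag of peak) would need tuning against real light curves, as discussed for the actual toy code below.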
Incorporation of these features into an automated recognition system will certainly increase the success rate of follow-up spectroscopic observations. Upon request by the referee Michael Coughlin, we provide a toy R code to implement items 1 and 2. Using the actual ZTF r data, this code correctly recognized ASASSN-20eq, ASASSN-20jt, ASASSN-20ke (for the 2019 outburst) and ASASSN-20la as AM CVn-type superoutbursts, while the data for the hydrogen-rich WZ Sge star AL Com did not pass this test. The reason why ASASSN-20gx did not pass the test was the observational gaps in the ZTF data, causing an apparent fading rate smaller than 1.5 mag d −1 (if we loosen the criterion to 1.2 mag d −1 , this object is recognized as an AM CVn star). The reason why ASASSN-20lr did not pass the test was the lack of observations immediately after the peak. The second observation by the ZTF was 4 d after the peak and it was impossible to measure the duration of the initial outburst from the ZTF data alone. We hope others will benefit from this toy code and perhaps have ideas to turn it into a better filter. | 2021-07-16T01:15:32.452Z | 2021-07-15T00:00:00.000 | {
"year": 2021,
"sha1": "9a8fb0cfcc8b98992cc9ced6906722e743cae38e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2107.07091",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9a8fb0cfcc8b98992cc9ced6906722e743cae38e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
245634527 | pes2o/s2orc | v3-fos-license | Boosting micromachine studies with Stokesian Dynamics
Artificial microswimmers, nano- and microrobots, are essential in many applications from engineering to biology and medicine. We present a Stokesian Dynamics study of the dynamical properties and efficiency of one of the simplest artificial swimmers, the three linked spheres swimmer (TLS), extensively shown to be an excellent model example of a deformable micromachine. Results for two different swimming strokes are compared with an approximate solution based on point force interactions. While this approximation accurately reproduces the solutions for swimmers with long arms and strokes of small amplitude, it fails when the amplitude of the stroke is such that the spheres come close together, a condition under which the largest efficiencies are in fact obtained. We find that swimmers with a "square stroke cycle" are more efficient than those with a "circular stroke cycle" when the swimmer arms are long compared with the sphere radius, but the differences between the two strokes are smaller when the arms of the swimmers are short. This extended theoretical study of the TLS incorporates a much more precise description of the swimmer hydrodynamics, demonstrating the relevance of considering the finite size of the spheres that constitute the microswimmer. We expect this work to trigger future innovative steps contributing to the design of micro- and nanomachines and their applications.
I. INTRODUCTION
Self-propulsion of microorganisms and artificial swimmers is only possible through motility strategies that are able to overcome the absence of inertia. This condition, implicit in every low Reynolds number regime, allows the success only of swimming strategies that are non-reciprocal, i.e., strategies whose time-reversed motion is not the same as the original one [1]. In the past two decades, there has been a growing interest in understanding the dynamics of self-propelled microorganisms and artificial swimmers. For recent results and reviews see Refs. 2-5, and references therein.
Artificial microswimmers, micromachines and nanorobots are of great present interest for technical and medical applications [6][7][8][9], such as cargo transport [10,11], drug delivery [7,[12][13][14], analytical sensing in biological media [15,16], and waste-water treatment [17]. The propulsion mechanism of these microdevices may be classified into two generic types: external and autonomous [18]. In the first type, an external field is used to propel and direct the swimmer, while in the second, the swimmer itself converts energy to achieve self-propulsion. Deformable microswimmers, which generate propulsion by a non-reciprocal periodic deformation, belong to the latter type. One of the simplest examples is the three-linked-spheres swimmer (TLS), a swimmer built upon three spheres linked by two arms that contract asynchronously, originally proposed by Najafi and Golestanian [19]. The simplicity of this swimmer allows an analytical (within certain approximations) and numerical study of its dynamics, making it an excellent choice to test different numerical approaches. Experimental realizations of the TLS have also been reported [20][21][22][23]. In particular, the analytical and numerical studies concerning the dynamics and optimization of the TLS [19,[24][25][26][27][28][29][30][31][32][33][34][35] are strictly valid in the limit where the distances between the spheres are much larger than the sphere radius, owing to the treatment of the hydrodynamic interactions. The works of Earl et al. [36] and, more recently, Nasouri et al. [37], Pickl and coworkers [38,39] and Lengler model the hydrodynamic interactions in more detail by means of lattice Boltzmann simulations [36,38,39], multiparticle collision dynamics [36], the boundary element method [37] and the method of regularized Stokeslets [40].
In this work, we extend the theoretical study of the TLS, incorporating a much better description of the hydrodynamic interactions between the spheres composing the swimmer. For this purpose, we use Stokesian dynamics simulations [41,42] to study in detail the forces acting on each of the swimmer's components and the power consumption during its motion. Stokesian dynamics simulations provide an accurate method to study the dynamics of the TLS and are computationally less demanding than mesoscale methods like lattice Boltzmann and multiparticle collision dynamics, which consider the suspending liquid explicitly.
We define efficiency as the ratio between the power dissipation and the work needed to produce the same motion by an external force. We find that the most efficient swimmer is the one whose arms contract until the spheres almost come into contact. Interestingly, under these optimum conditions, the analytical predictions based on the point force (PF) approximation of the hydrodynamic equations deviate significantly from those found in our simulations, in which near-field interactions are taken into account. This highlights the importance of considering the finite size of the spheres, as is done by the method implemented here. We believe that the results shown in this work might be very useful for the design of artificial swimmers of this kind.
The article is organized as follows: in Section II, the TLS model is presented, summarizing the point force approximation results and introducing the Stokesian dynamics approach. Section III contains the results from our systematic study of the dynamics, power consumption and efficiency of the TLS. Finally, summary and conclusions are presented in Section IV.
II. THE MODEL
The three linked spheres swimmer (TLS) geometry is shown in Fig. 1. It consists of three equal spheres linked by two virtual arms of lengths L1 and L2. The length of each arm varies between a contracted and a stretched state, with lengths lj − d and lj + d, respectively (j = 1, 2); lj is the arm rest length, and d the amplitude of the relative movement of the spheres. The swimmer stroke may be any closed cycle in the L1−L2 phase space. In this work, we study two particular strokes: the square cycle (SC) and the circular cycle (CC). For the SC, the stroke is defined by a square in the L1−L2 phase space that is traveled by the system at a constant speed, while for the CC the stroke is defined by a circle in the L1−L2 phase space that is traveled at a constant angular velocity (see Fig. 1). The most remarkable difference between these two cycles is that for the SC the arms stretch/contract sequentially and at a constant speed, whereas for the CC, while one arm stretches, the other contracts, in a harmonic way.
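For concreteness, the two stroke cycles can be parametrized as in the short Python sketch below. Only the qualitative shapes follow the text: the phase origin of the circular cycle and the starting corner of the square cycle are illustrative assumptions, not choices stated by the authors.

```python
import numpy as np

def circular_cycle(t, l, d, T):
    """Circular stroke: a circle of radius d in the (L1, L2) plane traversed at
    constant angular velocity w = 2*pi/T. The phase origin is an arbitrary choice."""
    w = 2.0 * np.pi / T
    return l + d * np.cos(w * t), l + d * np.sin(w * t)

def square_cycle(t, l, d, T):
    """Square stroke: the square [l-d, l+d]^2 traversed at constant arm speed
    v_s = 8*d/T, one arm moving at a time (L1 contracts, L2 contracts,
    L1 extends, L2 extends), starting with both arms extended."""
    s = (t % T) * 8.0 * d / T          # arc length traveled along the square
    if s < 2 * d:                      # L1: l+d -> l-d
        return l + d - s, l + d
    elif s < 4 * d:                    # L2: l+d -> l-d
        return l - d, l + d - (s - 2 * d)
    elif s < 6 * d:                    # L1: l-d -> l+d
        return l - d + (s - 4 * d), l - d
    else:                              # L2: l-d -> l+d
        return l + d, l - d + (s - 6 * d)
```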
A. Point Force approximation
Following the work of Golestanian and Ajdari [26], we write the relation between the forces $f_i$ that each sphere of radius $a$ exerts on the fluid and the sphere velocities $v_i$, assuming that the spheres act like point forces on the fluid. Under this assumption, which is a good approximation if $a/(l-d) \ll 1$, we have

$$v_i = \frac{f_i}{6\pi\eta a} + \sum_{j \neq i} \frac{f_j}{4\pi\eta\,|x_i - x_j|} \qquad (1)$$

Here, $\eta$ is the fluid viscosity and $x_i$ denotes the position of sphere $i$ along the swimming axis. Using the self-propulsion condition $f_1 + f_2 + f_3 = 0$ and defining the arm contraction velocities $\dot{L}_1$ and $\dot{L}_2$, the sphere forces can be eliminated in favor of coefficients $A$, $B$ and $C$ that depend only on the instantaneous arm lengths; after defining $D = AC + B^2$, this leads to a closed expression for the swimmer velocity (eqs 2-4). Equations (1) and (4) allow one to calculate any dynamical quantity of interest, provided that $L_1$, $L_2$, $\dot{L}_1$ and $\dot{L}_2$ are known as functions of time $t$, i.e., once the particular stroke cycle is specified. For the circular cycle, the arm deformations are harmonic functions of time with angular velocity $w_c$ and a corresponding period $T_c = 2\pi/w_c$. For the square cycle, the arms stretch and contract one at a time at constant speed; here, $v_s$ is the contraction velocity of the arms, the period of the motion is given by $T_s = 8d/v_s$, and $I_i$ are consecutive intervals of duration $T_s/4$.
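To make the structure of the point-force calculation explicit, the following NumPy sketch solves, at a single instant, the linear system formed by the two prescribed arm velocities and the force-free condition, and returns the resulting sphere forces and swimmer velocity. It is an illustrative implementation of the standard point-force mobility relation quoted in eq (1), not the authors' code; the function name, variable names and unit choices are assumptions.

```python
import numpy as np

def pf_swimmer_velocity(x, Ldot1, Ldot2, a=1.0, eta=1.0):
    """Instantaneous TLS swimmer velocity in the point-force approximation.

    x      : positions of the three collinear sphere centres, with x[0] < x[1] < x[2]
    Ldot1  : prescribed rate of change of L1 = x[1] - x[0]
    Ldot2  : prescribed rate of change of L2 = x[2] - x[1]
    Returns (V, f): mean sphere velocity and the three forces exerted on the fluid.
    """
    x = np.asarray(x, dtype=float)

    # Mobility matrix M: v_i = sum_j M_ij f_j, with the Stokes self-term
    # 1/(6*pi*eta*a) and the collinear point-force term 1/(4*pi*eta*r_ij).
    M = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            M[i, j] = (1.0 / (6.0 * np.pi * eta * a) if i == j
                       else 1.0 / (4.0 * np.pi * eta * abs(x[i] - x[j])))

    # Three linear conditions for the three unknown forces:
    # v2 - v1 = Ldot1,  v3 - v2 = Ldot2,  f1 + f2 + f3 = 0 (self-propulsion).
    A = np.vstack([M[1] - M[0], M[2] - M[1], np.ones(3)])
    b = np.array([Ldot1, Ldot2, 0.0])
    f = np.linalg.solve(A, b)

    v = M @ f                 # individual sphere velocities
    return v.mean(), f
```

Integrating this instantaneous velocity over one stroke (for example, using the cycles sketched earlier) gives the net displacement per cycle in the PF approximation.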
B. Stokesian Dynamics
To take into account the full hydrodynamic interactions between the spheres composing the TLS, one could solve the full three-body problem. This, however, constitutes a formidable task. For this reason, and considering that the interactions between two TLS swimmers, or even a suspension of TLS swimmers, might be of interest, we study the dynamics of the swimmer by implementing Stokesian Dynamics [41] (SD) simulations. This simulation scheme has been extended to treat self-propulsion [42], as long as the swimmer can be approximated by a collection of spheres, which in the particular case of the TLS swimmer is not even an approximation. Stokesian dynamics is a well-established simulation scheme for the study of suspensions that takes into account the many-body hydrodynamic interactions. It has been shown that it can quantitatively reproduce the properties of monodisperse suspensions at high volume fractions [43], and it has been successfully applied to study the dynamics and rheology of colloidal particles [44][45][46][47][48].
Here, we have implemented SD simulations by adapting the code provided in the work of Swan and coworkers [42] to represent the TLS swimmer with the two swimming stroke cycles under consideration, namely, the square cycle and the circular cycle.
For convenience, we use the sphere radius, a, as the unit of length, the cycle period, T, as the unit of time, and ηa²/T as the unit of force, where η represents the fluid viscosity. The time step used in the SD simulations was dt = T/n0, with n0 typically of the order of 50000 for the CC cycle and 400000 for the SC cycle. The SC cycle needs to be solved with a much smaller time step because the spheres come almost into contact at a constant velocity.
III. RESULTS
A. Swimmer Dynamics
The three-linked-spheres swimmer takes advantage of the differences in the drag forces during the different intervals of its swimming cycle to produce its net displacement. If the cycle in Fig. 1 is traveled counterclockwise, the swimmer moves to the right, and if the cycle is traveled clockwise the swimmer moves to the left. In the first part of the SC cycle, starting with both arms extended and going through the cycle counterclockwise, the left arm contracts at a constant speed v_s from l + d to l − d. This contraction moves the center of mass, cm, to the left, as shown by the gray dashed line. During this interval, the other arm is extended, resulting in a larger drag opposing the backward motion of the swimmer. In the next interval, L2 goes from l + d to l − d, producing a cm motion to the right. This forward movement is larger than the previous displacement to the left due to the lower drag exerted by the left arm, which is contracted. Analogously, by analyzing the rest of the cycle, a net displacement (gain) of the swimmer to the right is obtained.
The situation described in the previous paragraph is shown quantitatively in Fig. 2 for three swimmers that differ in their stretched-arm sizes, l + d, and in the amplitude of their arm motion, d. Blue solid lines are results obtained by Stokesian Dynamics, while dashed red lines correspond to the point force approximation. The first column shows results for a swimmer, s1, with l = 8 and d = 4. With these dimensions, the closest distance between sphere centers, l − d = 4, is large enough to allow for a fine estimation of the swimmer dynamics by the point force approximation. The second column corresponds to s2, a swimmer also with l = 8, but with d = 5.9. Note that for this swimmer, the outer spheres come almost into contact (actually, to a surface separation of 0.1) with the central sphere when the respective arm is fully contracted. Finally, the third column displays results for a smaller swimmer, s3, in which the spheres are almost all the time close to contact (l = 2.6, d = 0.5). Figure 3 shows analogous results for the same swimmers, but following a circular cycle.
For the three swimmers, the backward motion during the first interval can be seen both in the time evolution of the center of mass position and in an initially negative center of mass velocity. Looking at the final displacement after one cycle, it is observed that s2 is the swimmer that moves the farthest in one cycle (this is also true for the circular cycle shown in Fig. 3). The difference between the SD results and the PF approximation is significant for all the variables and grows with the compression of the arms, as expected, even showing a curvature inversion for the velocity of swimmers s2 and s3 in the square cycle. This inversion takes place in the regions where the spheres come into close proximity and the need for a complete representation of the hydrodynamic forces is most relevant. To compare SD results with other methods that also include hydrodynamic interactions, like lattice Boltzmann (LB) [49,50] and multiparticle collision dynamics (MPC) [51], we have calculated the one-cycle net displacement, ∆, for the SC swimmer with l + d = 25/3 as a function of the relative sphere displacement amplitude, and compared it with the results obtained by Earl et al. [36]. This comparison, together with the analytical results obtained within the point force approximation, is shown in Figure 4. The LB and MPC data have been taken from the work of Earl et al. [36], where both mesoscale methods are used to study the TLS and other generalizations of it. For details of the implementations of LB and MPC and the parameters used, see Ref. 36. Remarkably, the SD results are in quite good agreement with both methods, and in particular with LB, which is more accurate. Note, however, that the LB implementation in Ref. 36 does not include lubrication corrections [52]; for this reason, near-field interparticle interactions might be underestimated when particles come close together. The great advantage of SD, in comparison with those mesoscale schemes, is that it treats the fluid as a continuum, allowing the simulation of larger time scales as well as larger systems with much less computational effort. Figure 4 also includes the small-deformation limit of the net displacement within the PF approximation, up to second order in 2d/(l + d), Eq. (22) of Ref. 36. As expected, in the limit of small deformation it converges to the full PF approximation.
The instantaneous dissipated power can be obtained directly from the expression $P(t) = \sum_i f_i \cdot v_i$. Note that, with the non-dimensionalization we are using, P(t) is expressed in units of ηa³/T². A remarkable increase of the dissipation is found for swimmers s2 and s3 in the square cycle with respect to s1. This compression-dependent behavior is strongly underestimated by the PF approximation and will have a major influence on the determination of the swimmer efficiency. The fast growth of the dissipation in the square cycle is caused by the spheres approaching each other at an imposed constant speed v_s, working against lubrication forces that grow like 1/(L_i − 2). These sharp peaks are not present for the circular cycle because, in this case, the contraction velocity of the arms goes to zero when the spheres are at their shortest separation.
B. Average quantities
To precisely quantify the differences between the PF approximation and the SD results, we have computed the mean velocity, ⟨v⟩, and the mean dissipated power, ⟨P⟩, averaged over one period. With the selected unit of time, the mean velocity and the mean dissipated power are equivalent to the net displacement, ∆, and the dissipated power per cycle, respectively. Fig. 5 shows these quantities as a function of the minimum distance between the sphere centers, l − d. As can be seen in Fig. 5 for a swimmer with l = 8, the PF approximation overestimates the velocity found by SD, both for SC and CC, and underestimates the average power dissipation (inset of Fig. 5). We have also analyzed the percentage error (difference) between the PF and SD mean velocities, 100(⟨v⟩_PF − ⟨v⟩_SD)/⟨v⟩_SD, in terms of the rest length of the arms, l, and their highest compression, l − d. These results are summarized in Fig. 6, where the corresponding percentage-error map is shown. Note that a value of l > 17 is required for a swimmer to obtain an error smaller than 1% when using the PF approximation. Even for a swimmer with l = 20, an error below 1% is obtained only for extremely weak compressions, which restricts the amplitude to roughly d ≈ 5 or less. On the other hand, already for a minimum separation as large as 4 the error may be larger than 20%. Summarizing, the PF approximation is only good for swimmers with large arm lengths, l, and, simultaneously, large minimum sphere separations, l − d, compared with the sphere radius (roughly, l − d larger than 15 for an error < 1%).

[Figure 5. Average velocity and power dissipation (inset) for swimmers with l = 8 as a function of the contracted-arm length, l − d. The shaded area represents volume exclusion due to the finite size of the spheres. Dotted red and dash-dotted yellow lines correspond to results for the CC swimmer obtained by SD and PF, respectively. Solid blue and dashed green lines correspond to results for the SC swimmer obtained by SD and PF, respectively.]
C. Efficiency
To study the efficiency, ε, of the swimmer, we use a definition first introduced by Lighthill [53], which corresponds to the ratio between the power dissipated when an external force moves the swimmer at a given velocity and the power dissipated by the swimmer to propel itself at the same velocity. In the particular case of the TLS, its shape, and consequently the drag force, varies during a stroke cycle. For this reason, we calculate the efficiency as

$$\varepsilon = \frac{C(l,d)\,\langle v \rangle^{2}}{\langle P \rangle},$$

where C(l, d) is the friction coefficient of a non-deforming TLS swimmer with arm rest length, l, and arm variation amplitude, d, in its most contracted state (L1 = L2 = l − d). With this choice, we calculate a lower limit for the efficiency. Other authors [54] suggest using an average friction coefficient, corresponding to the time evolution of the swimmer shape during the stroke. We prefer to use the less dissipative configuration (the most contracted one) to define the efficiency, since the shape changes are a consequence of the swimming stroke. The coefficient C(l, d) was obtained within the same SD simulation scheme, and takes values between (0.51 ± 0.01) × 18π and 18π in units of ηa, corresponding to the limits of three spheres in contact (l = 2 with d = 0) and infinitely separated, respectively.
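As a small illustration of how ⟨v⟩, ⟨P⟩ and the efficiency defined above can be assembled from one cycle of simulation output, consider the Python sketch below. The array names, and the assumption that the drag coefficient C of the reference configuration is supplied separately, are illustrative choices, not part of the original implementation.

```python
import numpy as np

def cycle_averages_and_efficiency(t, x_cm, power, C):
    """Mean velocity, mean dissipated power and Lighthill-type efficiency
    over one stroke period.

    t     : sample times covering exactly one period
    x_cm  : center-of-mass position at those times
    power : instantaneous dissipated power P(t) = sum_i f_i . v_i
    C     : friction coefficient of the reference (most contracted) configuration
    """
    T = t[-1] - t[0]
    v_mean = (x_cm[-1] - x_cm[0]) / T        # net displacement per period
    p_mean = np.trapz(power, t) / T          # time-averaged dissipation
    efficiency = C * v_mean**2 / p_mean
    return v_mean, p_mean, efficiency
```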
As shown in Fig. 5 for a swimmer with arm length l = 8, both cycle types present an average velocity, ⟨v⟩, and an average dissipated power, ⟨P⟩, that grow when the minimum sphere separation (contracted-arm length), l − d, decreases, reaching a maximum when the spheres can touch. The corresponding point force approximation results are also shown for comparison. The PF starts to overestimate (appreciably) the average velocity and dissipated power for both systems when the contracted-arm length is around 6. The faster growth of ⟨v⟩² in relation to that of ⟨P⟩ also produces an increasing efficiency for larger d, as can be seen in Fig. 7 for both cycles. This growth of the efficiency stops when d is large enough for the outermost spheres to approach the central sphere and almost touch it for a SC swimmer, and when d = l − 2 for a CC swimmer. In the case of the SC swimmer, the lubrication forces produce a sharp increase in power dissipation and a corresponding fall of the efficiency. This behavior of the efficiency cannot be captured by the PF approximation, because it does not account for these forces; for this reason, it produces the most significant overestimation of the efficiency precisely where the swimmers are most efficient. In the inset of Fig. 7 it is possible to see that the lubrication forces affect the efficiency of the two cycles in markedly different ways. For the square cycle, the efficiency reaches a maximum for swimmers that have a contracted-arm length (l − d) around 2.1, i.e., a gap between spheres of approximately 10% of the sphere radius. For the circular cycle, on the other hand, no maximum is observed, and the efficiency grows monotonically as the contracted swimmer size decreases. These different behaviors are produced by the different relative velocities of the spheres when approaching each other. For the square cycle, the approach velocity is constant and equal to v_s, while for the circular cycle it goes to zero as a sine function. The efficiencies as a function of the amplitude, d, for different values of the arm length, l, are shown in Fig. 8 as thin gray lines. Note that for a given value of l, d can take values between 0 and l − 2, and that we have plotted here only the region where the efficiency grows with d, truncating the (gray) lines when they reach the respective maximum efficiency. Connecting the highest-efficiency points for each l, it is possible to build a curve of the maximum efficiency of the swimmer as a function of d (for each d there is a system with l ∼ d + 2 that has the maximum possible efficiency). This curve is interesting because it allows one to visualize how the swimmer size affects its ability to swim. In Fig. 9 a comparison of this function for both SC and CC swimmers is presented. Notably, the two cycles studied present markedly different behaviors. In the case of the circular cycle, the efficiency of the best swimmer with a given l grows almost linearly for small swimmers, then has its maximum at d = 8 and finally decays monotonically for larger swimmers. The square cycle, on the other hand, does not show an optimum size and has an efficiency, ε, that grows monotonically and tends asymptotically to a value slightly over 0.0021. Due to these two distinct behaviors of the most efficient swimmers, it would be preferable to build large swimmers using a square cycle rather than a circular cycle, but in the case of small swimmers the specific shape of the cycle seems to be less important.
IV. CONCLUSIONS
We have systematically studied, by means of SD simulations, the dynamics of the TLS swimmer for a wide range of parameters and two different swimming cycles, and compared the results with the analytic point force approximation. Furthermore, the efficiency of the swimmer was analyzed and the optimum parameters were identified. The point force approximation describes the dynamics reasonably well, as long as the spheres do not come into close proximity, where the strong lubrication forces start to play a dominant role. This has been quantitatively analyzed, and the results are presented in a figure showing the percentage difference in the mean velocity between the point force approximation and the SD solution, as a function of the size of the arms and the minimum separation of the spheres.
Our study has shown that, for a given size of the arms, l, the velocity and the dissipated power grow when the minimum separation, l − d, decreases. Furthermore, the mean velocity and the mean dissipated power in one cycle are larger for the SC cycle for any value of d. This result alone is not sufficient to decide which cycle is more efficient, since a larger velocity has a larger energetic cost.
Considering the most efficient swimmer for a given amplitude, d, we have shown that the two studied swimming cycles have nearly the same efficiency as long as d ≲ 8, and that for larger separations (i.e., also larger swimmers) the SC cycle is more efficient than the CC.
Summarizing, we have shown that the SD simulation scheme is an appropriate tool to study the dynamics and efficiency of artificial swimmers, in particular those constituted by spherical particles. The precise description of the hydrodynamic interactions can be relevant for boosting the study and design of more complex and efficient micro-swimmers or micro-machines whose individual constitutive parts have finite size and can come very close to one another, positioning Stokesian Dynamics as a valuable tool for these purposes. As natural next steps toward these promising applications, we are studying the interaction between two or more TLS swimmers, as well as more complex micro-swimmer models composed of hundreds of spheres.
VI. DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2022-01-03T02:15:19.560Z | 2021-12-30T00:00:00.000 | {
"year": 2021,
"sha1": "eeea89e1c6dba32006a628c30d91cceb9fb6169e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "eeea89e1c6dba32006a628c30d91cceb9fb6169e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
6351156 | pes2o/s2orc | v3-fos-license | The Interaction of Two Saccharomyces cerevisiae Strains Affects Fermentation-Derived Compounds in Wine
Previous winery-based studies showed the strains Lalvin® RC212 (RC212) and Lalvin® ICV-D254 (D254), when present together during fermentation, contributed to >80% relative abundance of the Saccharomyces cerevisiae population in inoculated and spontaneous fermentations. In these studies, D254 appeared to out-compete RC212, even when RC212 was used as the inoculant. In the present study, under controlled conditions, we tested the hypotheses that D254 would out-compete RC212 during fermentation and have a greater impact on key fermentation-derived chemicals. The experiment consisted of four fermentation treatments, each conducted in triplicate: a pure culture control of RC212; a pure culture control of D254; a 1:1 co-inoculation ratio of RC212:D254; and a 4:1 co-inoculation ratio of RC212:D254. Strain abundance was monitored at four stages. Inoculation ratios remained the same throughout fermentation, indicating an absence of competitive exclusion by either strain. The chemical profile of the 1:1 treatment closely resembled pure D254 fermentations, suggesting D254, under laboratory conditions, had a greater influence on the selected sensory compounds than did RC212. Nevertheless, the chemical profile of the 4:1 treatment, in which RC212 dominated, resembled that of pure RC212 fermentations. Our results support the idea that co-inoculation of strains creates a new chemical profile not seen in the pure cultures. These findings may have implications for winemakers looking to control wine aroma and flavor profiles through strain selection.
Introduction
In spontaneous fermentations conducted at commercial wineries, it is common to find more than one Saccharomyces cerevisiae strain fermenting the wine must [1]; however, multiple strains have also been detected even in inoculated fermentations [2,3]. It is well documented that different wine strains of S. cerevisiae affect flavor and aroma properties differently [1]. Although the sensory influence of co-inoculation between non-Saccharomyces and a single S. cerevisiae strain has been widely studied [4][5][6][7][8], fewer studies have reported on the co-inoculation of multiple S. cerevisiae strains [9][10][11][12][13]. The commercial active dry yeast (ADY) strains, Lalvin® Bourgorouge RC212 (RC212) and Lalvin® ICV-D254 (D254), are frequently used to ferment Pinot Noir and Chardonnay musts, respectively. Together, they have been found to dominate operational fermentations, with an overall relative abundance of >80% in both inoculated (where RC212 was used as the sole inoculum and D254 entered as a contaminant) and spontaneous Pinot Noir fermentations [14,15]. Furthermore, D254 was the dominant strain at the end of these fermentations, even when tanks were inoculated with RC212 [3]. These findings suggest, when observing their dynamics during operationally conducted fermentations, that D254 out-competes RC212. Originally, the strain RC212 was selected by the Burgundy Wine Board (BIVB) to extract and protect the polyphenols of Pinot Noir. In the information supplied by the manufacturer, it is claimed that wines fermented by RC212 have good structure with fruity and spicy characteristics (Lallemand Inc., Montreal, QC, Canada). The strain D254 is commonly used in both red and white wines. Red wines fermented with D254 contribute to high fore-mouth volume, smooth tannins, intense fruits and a slightly spicy finish (Lallemand Inc., Montreal, QC, Canada). Nevertheless, there is a lack of information on the sensorial attributes when these two strains co-exist during fermentation. Given that there are many factors that can affect the interactions of these two strains under operational conditions, it is important to determine how these two strains interact and affect key fermentation-derived chemicals under controlled conditions.
The formation of aroma and flavor compounds is dependent on the nutrient availability, the physicochemical properties of the fermentation, and the yeast strains present, especially S. cerevisiae strains. Higher alcohols and esters are usually yeast-derived and can greatly contribute to the aroma and flavor profile of the wine [16]. Many of these flavor compounds are derivatives of amino acids, and it has been shown that amino acid uptake by yeasts is strain-dependent [11,17]. Other wine aroma and flavor compounds include pyrazines, terpenes, lactones, sulfur-containing compounds, phenols, organic acids, and aldehydes, which are usually not strain-dependent. The concentration of these other compounds is strongly influenced by varietal, grape ripeness, non-Saccharomyces organisms, aging, and winemaking practices [16,18]. Several studies have concluded that different strains of S. cerevisiae produce strain-specific metabolites [19,20]. For example, higher alcohols and esters can differ with varying dominance of two or more strains [11,19,20]. At low concentrations, higher alcohols contribute to increased aroma complexity, but at high concentrations (>300 mg/L), their presence can be undesirable [21,22]. At low concentrations (<100 mg/L), ethyl esters, such as ethyl acetate, often contribute fruity aromas, but at high concentrations they can produce undesirable solvent-like aromas and flavors [16,23]. In the present study, we targeted only compounds that are known to be fermentation-derived and are integral to aroma and flavor development.
Knowledge of the competitive interaction between different S. cerevisiae strains and its effect on aroma and flavor compounds will guide winemakers in choosing commercial yeasts, because the final wine composition may be enhanced with the use of the most suitable combination of yeast strains [11]. In addition, we are not aware of any competition or metabolomic studies that have conducted co-fermentations with RC212 and D254 strains in grape must. For our study, competitive exclusion between two strains is defined as occurring when, at the end of a co-inoculated fermentation, one strain has a greater relative abundance than it did when it was inoculated.
The aim of this study was to generate and test hypotheses that were based on observations from operational settings and from the literature. We tested, under controlled conditions, the hypotheses that: (1) D254 will out-compete RC212 when inoculated as a 1:1 or as a 4:1 RC212:D254 ratio; (2) D254 will have a greater impact than RC212 on key fermentation-derived chemicals when the inoculation abundances of the two strains are equal; and (3) D254 will have a greater impact than RC212 on key fermentation-derived chemicals when the inoculation is administered in a 4:1 RC212:D254 ratio. Our results indicate that no competitive exclusion occurred in the co-inoculated treatments, but rather the inoculated ratios remained constant throughout fermentation. Furthermore, we found that the chemical profile of the 1:1 RC212:D254 treatment closely resembled the chemical profile of the pure D254 fermentations, but the 4:1 RC212:D254 treatment more closely resembled the chemical profile of the pure RC212 fermentations. We conclude that although D254 does not appear to competitively exclude RC212 under controlled conditions, it has a relatively larger impact on the sensory profile of the resulting wines than RC212.
Experimental Design
The experiment consisted of four fermentation treatments: a pure culture control of RC212; a pure culture control of D254; a 1:1 co-inoculation ratio of RC212:D254; and a 4:1 co-inoculation ratio of RC212:D254. Each treatment was replicated using three separate fermentation flasks for a total of 12 flasks, with each flask containing 100 mL Pinot Noir juice. Each flask was sampled for strain abundance at the start (180 g/L sugar, 0 h), early (83-102 g/L sugar, 24 h), mid (64-73 g/L, 32 h), and end stages (<2 g/L sugar, 97 h) of the 100 h fermentation. Samples for chemical analysis were taken only at the end stage of fermentation. The co-inoculation treatments represented one situation where the two strains were inoculated in equal abundance (1:1 ratio) and another where RC212 was inoculated at a higher proportion than D254 (4:1 ratio); these two co-inoculation treatments, along with their pure-culture controls, allowed us to adequately test all of our hypotheses.
Juice Preparation
Pinot Noir juice was obtained from WineExpert™ (Port Coquitlam, BC, Canada). The juice was prepared by centrifugation for 45 min at 3500×g and was subsequently filtered through a series of filters with decreasing pore size: 2.7 µm glass fiber filter (GF), 1 µm GF, 0.45 µm mixed cellulose ester membrane filter (MCE), and 0.22 µm MCE and polyvinylidene difluoride filter (PVDF). The filtered juice was adjusted to 180 g/L sugar with sterile Milli-Q water and stored at −20 °C until it was needed for the experiment. We selected this concentration because it was within the typical range (180-220 g/L) at which grape juice fermentation commences [24]. The filtered juice, following the adjustment to 180 g/L sugar, had a pH of 3.8 and its sterility was confirmed by plating 0.1 mL onto yeast extract peptone dextrose (YEPD) media and observing an absence of colonies after 4 days of incubation at 28 °C. The adjusted filtered juice (>2 L) was used as the source to make the RC212 and D254 inoculated solutions, described in the section below.
Inoculation and Fermentations
For both strains, ADY inoculum (~10 mg) was rehydrated in 25 mL liquid YEPD media and shaken for eight hours (120 rpm) at 28 °C. Yeast abundance (cells/mL) in the rehydrated suspension was counted using a hemocytometer. Rehydrated yeasts were added at 1 × 10^6 cells/mL to 100 mL diluted Pinot Noir grape juice (1:1 juice:sterile Milli-Q H2O) for each strain. Once the yeast cell count was determined in each solution, the RC212 and D254 solutions were added separately to 1.2 L and 700 mL of the filtered Pinot Noir juice, respectively, to produce a concentration of 5 × 10^6 cells/mL. The resulting master mixes of each strain were combined in the appropriate ratios to obtain 300 mL of each co-inoculation treatment. Subsequently, for each co-inoculation treatment, the resulting solution was divided into three independent flasks (each containing 100 mL juice). The pure-culture controls were treated the same way; however, the RC212 and D254 solutions were not mixed. For all treatments, the final inoculation concentration was 5 × 10^6 cells/mL.
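The dilution step above amounts to a simple scaling calculation: given the cell density counted in the rehydrated suspension, the volume of suspension needed to bring a given volume of juice to 5 × 10^6 cells/mL follows directly. The numbers in the sketch below are hypothetical examples, not values from the study.

```python
def inoculum_volume_ml(stock_cells_per_ml, target_cells_per_ml, juice_volume_ml):
    """Volume of rehydrated yeast suspension (mL) to add so that the juice reaches
    the target cell concentration (the small added volume is neglected here)."""
    total_cells_needed = target_cells_per_ml * juice_volume_ml
    return total_cells_needed / stock_cells_per_ml

# Hypothetical example: a suspension counted at 2.5e8 cells/mL,
# 1200 mL of juice, target 5e6 cells/mL -> 24 mL of suspension.
print(inoculum_volume_ml(2.5e8, 5e6, 1200.0))
```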
Fermentations (100 mL per flask) were conducted in 250 mL fermentation flasks, which contained sampling ports and air-locks. The flasks were shaken (120 rpm) at 28 °C until the end of the fermentation. To monitor the progression of fermentation and to identify strains, 0.5 mL samples were collected aseptically at the start, early, mid, and end stages of fermentation. At the end of fermentation, all wines contained <2 g/L residual sugar, as indicated with a D-Glucose/D-Fructose sugar assay kit (Megazyme, Bray, Ireland). The wine was clarified by centrifugation (1200×g; 2 min) and filtered (0.45 µm) at the end of fermentation. At the end stage, 40 mL were transferred to glass vials and stored at −80 °C until chemical analysis was performed.
Yeast Strain Identification
Wine must samples from each stage were plated on YEPD agar and incubated at 28 °C for 48 h. Twenty colonies from each plate (960 colonies total) were randomly chosen for DNA analysis. Extraction and amplification of the DNA followed the methods of Lange et al. [15], except that amplification of the isolates was performed with primer sets for the microsatellite loci C11 and SCYOR267c [25]. These two loci were chosen because RC212 is heterozygous and D254 is homozygous at both of these loci, resulting in two fragments for RC212 and one fragment for D254 [14]. Additionally, the size of the two loci was separated by 78 base pairs, which allowed for simultaneous analysis.
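The strain-typing rule described above (two amplified fragments at each locus for the heterozygous RC212, one fragment for the homozygous D254) can be expressed as a simple decision function. The sketch below illustrates that logic only; the function name and the handling of ambiguous colonies are assumptions.

```python
def classify_colony(fragments_c11, fragments_scyor267c):
    """Assign a colony to RC212 or D254 from the number of amplified fragments
    observed at the two microsatellite loci.

    RC212 is heterozygous at both loci (two fragments each);
    D254 is homozygous at both loci (one fragment each).
    """
    counts = (fragments_c11, fragments_scyor267c)
    if counts == (2, 2):
        return "RC212"
    if counts == (1, 1):
        return "D254"
    return "ambiguous"  # unexpected pattern: re-run or exclude the colony

# Example: a colony showing two fragments at both loci is typed as RC212.
print(classify_colony(2, 2))
```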
Chemical Analysis
A total of 11 fermentation-derived compounds were selected based on reports of their importance to Pinot Noir, their importance to flavor and aroma, and whether they were yeast strain-dependent. Four of these compounds (ethyl butyrate, isoamyl acetate, 1-hexanol, and phenethyl alcohol) were quantified with gas chromatography mass spectrometry (GC-MS) at UBC Okanagan. A Varian/Agilent CP-3800 GC equipped with a VF-5MS 30 m × 0.25 mm FactorFour capillary column and with a CP-8400 auto sampler was used for a splitless analysis. The injector was ramped from 40 to 100 °C at 10 °C/min. The oven was ramped from 40 to 240 °C at 10 °C/min and a solvent delay of 2.5 min was used. Samples were extracted with liquid-liquid extraction using a 1:1 ratio of the solvents pentane and diethyl ether. A combination of 5 mL sample, 5 µL of 1.615 mg/L methyl isobutyl carbinol (MIBC), and 5 mL solvent were shaken vigorously in large test tubes. The solution settled for 1 h and the extract was transferred from the top layer to GC-MS vials. The other seven compounds (ethyl acetate, acetaldehyde, methanol, 1-propanol, isobutanol, amyl alcohol, and isoamyl alcohol) were quantified by ETS laboratories (St. Helena, CA, USA), using a gas chromatography flame ionization detector (GC-FID), as per the methods of the American Association for Laboratory Accreditation.
Data Analysis
Strain ratios at the start of the fermentation were compared with expected ratios and with pooled data from subsequent stages by performing a Chi-square goodness of fit test. Relative abundance of strains was compared between treatments and controls by performing a one-way analysis of variance (ANOVA) on data that had fermentation stages pooled, as well as a one-way ANOVA on the end stage of each treatment. Furthermore, the relative abundance of RC212 in the co-inoculated fermentations was compared between fermentation stages of the same treatment by performing one-way ANOVAs. When significance was indicated, a Tukey-Kramer honest significant difference (HSD) post-hoc test was performed. The relationship between the abundance of RC212 and the concentrations of fermentation-derived compounds was determined using regression analysis. Hierarchical cluster analysis, based on Ward's method with Euclidean distance, was used to group treatments [9,26]; these results were visualized using a Principal Component Analysis (PCA). The statistical analyses mentioned above were conducted using JMP® 11.0.1. The hierarchical cluster analysis and PCA employed an R 2.0 platform add-in. The concentrations of fermentation-derived chemical compounds were compared between inoculation treatments by performing one-way ANOVAs. When significant differences were detected, Tukey-Kramer HSD post-hoc tests were performed to determine differences between treatments. Statistical analysis of chemical compounds was performed using the Rcmdr package in RStudio version 3.1.1. All results were considered significant at p < 0.05.
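To illustrate the kind of analysis pipeline described here, the following Python sketch performs a chi-square goodness-of-fit test and a one-way ANOVA with a Tukey HSD post-hoc comparison. The original analysis used JMP and R; this Python equivalent, and all of the numerical values in it, are hypothetical placeholders included only to show the structure of the tests.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical colony counts of RC212 vs D254 in a 1:1 co-inoculation (20 colonies typed)
observed = np.array([11, 9])
expected = np.array([10, 10])
chi2, p_chi2 = stats.chisquare(observed, f_exp=expected)
print(f"Chi-square goodness of fit: X2 = {chi2:.3f}, p = {p_chi2:.3f}")

# Hypothetical compound concentrations (mg/L) for the four treatments, n = 3 each
conc = {
    "RC212": [250.0, 262.0, 255.0],
    "D254":  [150.0, 158.0, 147.0],
    "1to1":  [180.0, 175.0, 186.0],
    "4to1":  [220.0, 228.0, 215.0],
}
groups = list(conc.values())
f_stat, p_anova = stats.f_oneway(*groups)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Tukey-Kramer HSD post-hoc test between treatments
values = np.concatenate(groups)
labels = np.repeat(list(conc.keys()), [len(v) for v in conc.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```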
Results
The starting proportions of RC212 to D254, sampled immediately after co-inoculation, were not different from their expected ratios (1:1 treatment: χ² = 0.563, p = 0.453; 4:1 treatment: χ² = 0.039, p = 0.844) (Table 1). This indicated that our inoculation treatments were accurate, which was important in order to make conclusions about the competition between these two strains and about the specificity of chemical compounds to one strain or the other. The co-fermentation treatments differed significantly in their proportion of yeast strains from both control treatments and from each other, when all fermentation stages were pooled (F = 436.1, p < 0.0001). Furthermore, the yeast ratios at the end stage of fermentation differed significantly between the two co-fermentation treatments (F = 171.4, p < 0.0001), but the yeast proportions of each co-inoculation treatment were constant throughout fermentation (1:1 treatment: F = 0.50, p = 0.70; 4:1 treatment: F = 1.6, p = 0.27). These results confirm that the proportions of RC212 and D254 differed between all co-inoculation and control treatments at both the beginning and throughout fermentation, and that the inoculated yeast ratios remained constant over the course of fermentation for both co-inoculated treatments. There was a positive linear relationship between the abundance of RC212 and the quantity of four compounds present during fermentations. These compounds were acetaldehyde, 1-propanol, isobutanol, and isoamyl alcohol (Table 2). Alternatively, there was a negative linear relationship between the abundance of RC212 and the quantity of ethyl acetate, amyl alcohol, and isoamyl acetate. We considered that a positive relationship indicated specificity towards RC212 and a negative relationship indicated specificity towards D254. The compounds ethyl butyrate and phenethyl alcohol, while detected, were not significantly correlated with the relative abundance of RC212 (Table 2). The compounds 1-hexanol and methanol were not detected in any treatment (Table 3). In our study, RC212 produced significantly higher levels of isobutanol than did D254. Production of this compound by RC212 was also evident in the two co-inoculated treatments, which both contained higher levels of this compound than the pure D254 treatment, but lower levels than the pure RC212 treatment (Table 3). In the pure RC212 cultures, the concentration of this compound approached the sensory threshold of 300 mg/L [18]. For all treatments, isoamyl alcohol was detected at concentrations approaching its bitter sensory threshold of 300 mg/L, although its production was significantly higher in the pure RC212 treatment than the pure D254 treatment (Table 3). Acetaldehyde and 1-propanol concentrations were well below their aroma thresholds of 100-125 mg/L for all treatments [27,28] (Table 3). Ethyl acetate was detected in all treatments at levels above its detection threshold but well below its solvent-like threshold of 100 mg/L, and above the sensory threshold for fruitiness [23]. Unlike previous studies [10,29,30], no strain specificity in the production of ethyl butyrate or phenethyl alcohol was detected (Table 3). A Principal Components Analysis (PCA) showed the chemical profile of the 4:1 RC212:D254 co-inoculation treatment clustering with the chemical profile of the pure RC212 fermentations, while the profiles of the 1:1 ratio co-inoculation treatment clustered with the D254 pure culture (Figure 1). The pure RC212 culture fermentations, as well as the 4:1 RC212:D254 co-inoculated fermentations, were
correlated with the presence of 1-propanol, acetaldehyde, isobutanol, and isoamyl alcohol. The D254 pure culture fermentations, as well as the 1:1 RC212:D254 co-inoculated fermentations, were correlated with the presence of isoamyl acetate, amyl alcohol, and ethyl acetate.
Discussion
The finding that the proportion of RC212 to D254 remained constant throughout fermentation in both co-inoculation treatments suggests that there was a lack of competitive exclusion under controlled conditions between RC212 and D254, which does not support our first hypothesis that D254 would out-compete RC212 even when RC212 was inoculated in a 4:1 RC212:D254 ratio. Our original hypothesis was based on winery-based studies [3,14], where physical, chemical, and microbial conditions likely differ from in-lab fermentations. We are not aware of any other in-lab studies that have followed the interaction of these two strains during co-fermentation. Nevertheless, one study has followed mixtures of different S. cerevisiae strains throughout fermentation and showed both strain exclusion as well as situations where inoculated ratios remained the same throughout fermentation [12]. A second co-inoculation study, using three different commercial strains, observed one strain (Anchor® Vin7) competitively excluding Anchor® Vin13 and Lalvin® QA23 [13].
Production of isobutanol was highest in the pure RC212 treatment and lowest in the pure D254 treatment. Thus, the presence of D254 in the co-inoculated treatments appeared to have an inhibiting effect on the production of isobutanol by RC212, as evidenced by the decrease in isobutanol concentration with increasing relative abundance of D254 in the co-fermentations. Although we are not aware of any study that has worked with these two strains in grape must, one other study has shown levels of both n-butanol and isobutanol to differ between some S. cerevisiae pure cultures and their mixtures, indicating a significant production trend due to strain interactions [10]. In our RC212 pure cultures, the concentration of this compound approached the sensory threshold of 300 mg/L [18], where it could produce bitter flavors; however, solvent-like aromas and flavors probably would not be produced until it neared concentrations of 400 mg/L [18,24]. Although isoamyl alcohol showed the same trend as isobutanol, with respect to pure cultures, the co-inoculations did not result in a significant trend. We are not aware of any studies that have observed the effects of S. cerevisiae strain interactions on isoamyl alcohol. As with isobutanol, isoamyl alcohol could produce bitter flavors at the concentrations we found, but not solvent-like aromas and flavors. Both isobutanol and isoamyl alcohol are derivatives of amino acids, so the high concentrations of these compounds were likely, in part, a reflection of the amino acid content in the initial must [9,17]. We did not find a significant interaction trend for isoamyl acetate in both co-inoculation treatments. Our results were similar to another study where ethyl ester concentrations of strain mixtures were similar or slightly higher than those of pure cultures [10]. Supporting our results, this previously conducted study found ethyl esters, including ethyl acetate, above their sensory thresholds for fruity aromas, but not for solvent-like characteristics [10]. Acetaldehyde and 1-propanol concentrations were well below their aroma thresholds of 100-125 mg/L for all treatments, and thus they did not likely contribute directly to the sensorial characteristics of these wines. Many of the compounds we evaluated were below their detection limits, but it is important to note that our study reports on only a small portion of chemicals that are important in contributing to the sensory profile of wine. A full metabolomics study may reveal other chemicals that are important in the interaction of these two strains.
The results of PCA cluster analysis revealed the RC212 pure culture and the 4:1 co-inoculation treatment shared similar chemical profiles, separate from the D254 pure culture and the 1:1 co-inoculation treatment, which also shared similar chemical profiles. This suggests that when the two strains were equally abundant, D254 had a greater effect on the chemical profile than did RC212. This also suggests that the presence of D254 reduced the chemical profile that was contributed by RC212, as evidenced by the reduction in production of a number of chemicals positively correlated with RC212, including acetaldehyde, isobutanol, and 1-propanol. These results support our second hypothesis that D254 would have a greater impact on the chemical profile than RC212 when the cell numbers of the two strains were equal. Nevertheless, our results did not support our third hypothesis that D254 would have a greater impact than RC212 on key fermentation-derived chemicals when the inoculation was administered in a 4:1 RC212:D254 ratio. In our study, both co-inoculated abundance ratios remained constant throughout the fermentations, and when a 4:1 RC212:D254 ratio was in place, the chemical profile resembled the RC212 pure culture more than the D254 pure culture. Nevertheless, the chemical profiles of the co-inoculations shared some of the characteristics of both pure culture fermentations, which supports the results of other studies showing that chemical profiles differ between co-inoculation and pure culture fermentations [9][10][11][12]. This indicates that the interaction between two or more strains creates a new chemical profile not seen in the pure cultures. The interactions of multiple strains during fermentation can have synergistic or antagonistic effects on the final sensory attributes of wine [9,12,31], which makes strain selection an important consideration for commercial winemakers. Our results, along with those of Saberi et al. [10], suggest that by increasing the number of different strains in a fermentation, a more complex wine, in terms of chemical profile, can be achieved and managed due to multiple interactions between different strains of yeasts. Further research is necessary to determine whether increasing the number of strains in fermentation has an additive effect on the complexity of the wine's chemical profile.
Conclusions
In contrast to our original prediction, RC212 and D254 maintained their original inoculation ratios throughout the bench-top fermentations, suggesting that neither RC212 nor D254 competitively excluded the other strain under controlled conditions. The chemical profiles of both co-inoculated fermentations shared some characteristics of each pure culture fermentation. Nevertheless, when the two strains were equally abundant, D254 had a greater impact on the chemical profile than did RC212; this is in support of our hypothesis that D254 would have a relatively greater impact than RC212 on the chemical profile of wine. This is the first report to show that the co-fermentations of these two commercial strains can result in chemical profiles that are different than what is found when each strain is fermenting in pure culture.
Figure 1. Principal Component Analysis of fermentation-derived compounds detected in each fermentation treatment. The variation (62.9%) among chemical profiles for all treatments can be attributed to a primary principal component (PC1) that differentiates the treatments into two unique chemical groups: (1) D254 pure culture and 1:1 ratio fermentations; and (2) RC212 pure culture and 4:1 ratio fermentations.
Table 1. Percent relative abundance of RC212. Chi-square tests were performed to compare pooled data from the early, mid, and end stage ratios with the start ratio of a given treatment. Statistics were only run on the two co-inoculated treatments and not on the pure culture treatments. Any bolded results indicate significance at p < 0.05.
Table 2. Regression analysis between chemical concentrations and abundances of RC212. Chemicals having a positive linear relationship with RC212 abundance indicate RC212 strain specificity. Chemicals having a negative linear relationship with RC212 abundance indicate D254 specificity. Any bolded results indicate significance at p < 0.05.
Table 3. Summary of fermentation-derived compounds in concentration (mg/L) for all controlled fermentation treatments. Values are means ± S.E. (n = 3). Different superscript letters indicate significant differences between treatments at p < 0.05. Each compound was analyzed separately.
ND: not detected. | 2016-06-10T08:59:46.098Z | 2016-03-30T00:00:00.000 | {
"year": 2016,
"sha1": "1c3683c6987c2b2821f9ffdcea3b4153118bb7a7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2311-5637/2/2/9/pdf?version=1459337213",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1c3683c6987c2b2821f9ffdcea3b4153118bb7a7",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
248241441 | pes2o/s2orc | v3-fos-license | System-Specific Separable Basis Based on Tucker Decomposition: Application to Density Functional Calculations
For fast density functional calculations, a suitable basis that can accurately represent the orbitals within a reasonable number of dimensions is essential. Here, we propose a new type of basis constructed from Tucker decomposition of a finite-difference (FD) Hamiltonian matrix, which is intended to reflect the system information implied in the Hamiltonian matrix and satisfies orthonormality and separability conditions. By introducing the system-specific separable basis, the computation time for FD density functional calculations for seven two- and three-dimensional periodic systems was reduced by a factor of 2–71, while the errors in both the atomization energy per atom and the band gap were limited to less than 0.1 eV. The accuracy and speed of the density functional calculations with the proposed basis can be systematically controlled by adjusting the rank size of the Tucker decomposition.
■ INTRODUCTION
Numerical methods for replacing partial differential equations with finite-dimensional algebraic equations are essential for the electronic structure calculations of molecular or solid systems. 1,2 The atom-centered and plane-wave basis methods are frequently used for nonperiodic and periodic systems, respectively. 3,4 Real-space methods are potentially competitive with the aforementioned methods because of their flexibility and computational simplicity. 5 However, they have not yet been widely adopted for electronic structure calculations. Discretization of the simulation domain results in a large dimension for the Hamiltonian and orbitals. To mitigate the increase in memory usage and computational costs due to the large dimension, tensor decomposition techniques can be applied to real-space methods. 6−9 Tensor decomposition techniques are not limited to real-space methods. 10 They have been actively investigated to accelerate various quantum chemistry methods that require a large amount of computational and memory resources (e.g., a perturbation method, 10,11 coupled cluster theory, 12−14 and full basis representation methods 15 ). For density functional calculations, tensor decomposition techniques using real-space methods have been studied because orbital values on a rectangular grid can be represented as an order-3 tensor. Solala et al. applied tensor decomposition to density functional calculations to minimize memory load. 6 Their results show that the Tucker decomposition method can successfully compress the orbitals represented on a cubic grid from the results of bubbles and the cube numerical framework, which is a variation of real-space methods. Tensor decomposition can be used to compress orbitals on a three-dimensional (3D) grid and build a basis for a self-consistent field (SCF) procedure. Gavini et al. proposed a Tucker tensor basis derived from a separable approximation of the Hamiltonian. It effectively reduces the dimensions of a Kohn−Sham (KS) Hamiltonian matrix originally represented on an equidistant finite-element grid. 8 The separability of the basis is an important property to reduce the computational costs of many operations. A typical separable basis can be obtained from the simple product of three 1D functions (e.g., a Gaussian function). To impose system information on such a basis, a 1D function that reflects the system information on a general polyatomic structure needs to be developed. By contrast, a Hamiltonian matrix that implicitly includes all system information can be easily constructed on a real-space grid. Using the Hamiltonian on a real-space grid is an attainable solution for imposing system information on a separable basis.
Herein, we propose a system-specific separable basis derived from a finite-difference (FD) KS Hamiltonian matrix and investigate its performance for 2D and 3D periodic structures. The resulting basis is constructed by reflecting information on the Hamiltonian of the system and is also separable along the axes of the spatial coordinates. These two features are common to other types of Tucker tensor basis. 8 A key contribution of this work is that the basis is built directly from a finite-difference method, and its nonzero patterns are used in the projection process instead of introducing a separable Hamiltonian. In addition, the convergence of our basis is systematically controlled by increasing the rank size of Tucker decomposition. In the following, we briefly explain the mathematical background of our method, followed by the implementation details. We then discuss the performance of the proposed basis on density functional calculations for 2D and 3D periodic systems and demonstrate its advantages for reducing the computation time of density functional calculations.
■ METHOD
Tucker Representation and Higher-Order Singular-Value Decomposition. Here, we briefly introduce a Tucker representation and a higher-order singular-value decomposition (HOSVD) method for completeness. A more detailed explanation can be found in previous papers. 16,17 The Tucker representation expresses an order-d tensor 𝒜 ∈ ℂ^{N_1×N_2×···×N_d} as a contraction of a small order-d core tensor 𝒞 ∈ ℂ^{r_1×r_2×···×r_d} with d unitary factor matrices U^(n) ∈ ℂ^{N_n×r_n} (n ∈ {1, 2, ..., d}), where N_n and r_n are the dimensions of the nth axis for the original and core tensors, respectively. 16−19 The Tucker decomposition can then be written as

𝒜_{i_1 i_2 ··· i_d} = Σ_{j_1,...,j_d} 𝒞_{j_1 j_2 ··· j_d} U^(1)_{i_1 j_1} U^(2)_{i_2 j_2} ··· U^(d)_{i_d j_d}. (1)

The convergence of eq 1 is mathematically guaranteed as r_n approaches N_n. 10 However, the Tucker representation is frequently used to find a compact representation of a given tensor, which means r_n < N_n. This compact representation can reduce the computational complexity and memory consumption of tensor operations while minimizing the accuracy loss. 17 An HOSVD method is the most common choice to find a set of U^(n) and the corresponding core tensor 𝒞. 10,16 This is one multilinear extension of the matrix singular-value decomposition (SVD). In the HOSVD method, U^(n) is obtained from the left singular vectors of the factor-n flattened tensor 𝒜_(n). Because all U^(n) from HOSVD are unitary, the core tensor 𝒞 in eq 1 can be evaluated from the contraction of the original tensor with the obtained U^(n), as follows:

𝒞_{j_1 j_2 ··· j_d} = Σ_{i_1,...,i_d} 𝒜_{i_1 i_2 ··· i_d} (U^(1)_{i_1 j_1})* (U^(2)_{i_2 j_2})* ··· (U^(d)_{i_d j_d})*, (2)

where (·)* indicates a complex conjugate.
To obtain a compact Tucker representation, the r_n singular vectors of 𝒜_(n) with the largest singular values are collected into Ū^(n) ∈ ℂ^{N_n×r_n}. The compact core tensor 𝒞̃ is computed using consecutive tensor contractions, as shown in eq 2. Although the memory usage for 𝒞̃ and Ū^(n) is much smaller than that for 𝒜, the key patterns of 𝒜 can be recovered by contraction with Ū^(n), as in a typical compact SVD.
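To make the truncated HOSVD described above concrete, the following is a hedged NumPy sketch (it is not the Tucy implementation, and tensor sizes and rank sizes are illustrative choices):

```python
# Truncated HOSVD of an order-3 tensor, following eqs 1-2.
import numpy as np

def hosvd(A, ranks):
    """Return factor matrices U[n] and a core C so that A ~ contraction of C with U[0], U[1], U[2]."""
    U = []
    for n, r in enumerate(ranks):
        # factor-n flattening: move axis n to the front and reshape into a matrix
        A_n = np.moveaxis(A, n, 0).reshape(A.shape[n], -1)
        # keep the left singular vectors belonging to the r largest singular values
        u, _, _ = np.linalg.svd(A_n, full_matrices=False)
        U.append(u[:, :r])
    # core tensor: contract A with the conjugated factor matrices (eq 2)
    C = np.einsum('ijk,ia,jb,kc->abc', A, U[0].conj(), U[1].conj(), U[2].conj())
    return U, C

def reconstruct(U, C):
    # eq 1: contract the core with the factor matrices
    return np.einsum('abc,ia,jb,kc->ijk', C, U[0], U[1], U[2])

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 24, 28))
U, C = hosvd(A, ranks=(20, 24, 28))        # full ranks: exact up to round-off
print(np.allclose(reconstruct(U, C), A))    # True
U, C = hosvd(A, ranks=(10, 12, 14))         # truncated ranks: compact but approximate
print(np.linalg.norm(reconstruct(U, C) - A) / np.linalg.norm(A))
```

The full-rank case illustrates the guaranteed convergence of eq 1, while the truncated case corresponds to the compact representation used in the rest of the paper.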
In the field of quantum chemistry, the Tucker representation has been used to accelerate the calculation of two-electron integrals of atom-centered basis functions or tensor contractions for higher-order methods. 10 In this work, we apply it to the FD KS Hamiltonian: the Hamiltonian matrix H on a rectangular real-space grid is reinterpreted as an order-6 tensor ℋ,

ℋ_{i j k i′ j′ k′} = H_{pq}, (3)

where p and q denote the indices of grid points whose x, y, and z indices are (i, j, k) and (i′, j′, k′), respectively. To ensure that the factor matrices of the Hamiltonian matrix span low-lying orbitals well, we introduce a constant fictitious potential V_fic < 0, which implies that H is replaced by H + V_fic I in the decomposition. This constant potential shifts the eigenspectrum of the original Hamiltonian downward without changing the eigenvectors, so that the factor matrices from the decomposition of a Hamiltonian matrix with a fictitious potential are more likely to span the low-lying orbitals of the Hamiltonian, which are physically meaningful. For simplicity, we do not denote the fictitious potential in this section. A more detailed explanation of the fictitious potential is provided in the Appendix.
Using the HOSVD method, ℋ can be decomposed into a core tensor ℋ̃ and the corresponding factor matrices U^(n):

ℋ_{i j k i′ j′ k′} = Σ ℋ̃_{α β γ α′ β′ γ′} U^(1)_{i α} U^(2)_{j β} U^(3)_{k γ} U^(4)_{i′ α′} U^(5)_{j′ β′} U^(6)_{k′ γ′}, (4)

where α, β, γ, α′, β′, and γ′ are the indices of ℋ̃ ∈ ℂ^{r_1×r_2×r_3×r_4×r_5×r_6}. Owing to the Hermitian property of H, if r_1 = r_4, r_2 = r_5, and r_3 = r_6, the flattened Hamiltonian matrices and factor matrices satisfy the relations H^(n+3) = (H^(n))* and U^(n+3) = (U^(n))* for n = 1, 2, 3. Hereafter, for convenience, we use U_x, U_y, U_z, H^(x), H^(y), and H^(z) instead of U^(1), U^(2), U^(3), H^(1), H^(2), and H^(3), respectively. Similarly, the rank sizes of U_x, U_y, and U_z are denoted as r_x, r_y, and r_z, respectively. We also define the square matrix form H̃ of ℋ̃ as

H̃_{μν} = ℋ̃_{α β γ α′ β′ γ′}, (5)

where μ and ν are the combined indices of (α, β, γ) and (α′, β′, γ′), respectively. From eqs 4 and 5, H̃ can be rewritten as

H̃ = (U_x ⊗ U_y ⊗ U_z)^H H (U_x ⊗ U_y ⊗ U_z), (7)

where ⊗ and (·)^H denote the Kronecker product and the conjugate transpose, respectively. Here, H̃ is the projection of H on the separable basis vectors U (≔ U_x ⊗ U_y ⊗ U_z). As discussed in the previous section, the convergence of U_x, U_y, and U_z to make both sides of eq 7 equal is mathematically guaranteed as r_x, r_y, and r_z reach N_x, N_y, and N_z, respectively. Therefore, it is guaranteed that H̃ becomes equal to H when its dimension r_x × r_y × r_z becomes N_x × N_y × N_z.
Here, U is a set of separable basis vectors that can reduce the dimensions of the Hamiltonian from N_x × N_y × N_z to r_x × r_y × r_z. In addition, U satisfies the orthonormality condition because U_x, U_y, and U_z are orthonormal matrices. If U spans the physically meaningful eigenstates of the original Hamiltonian well, we only need to diagonalize H̃, which has a smaller dimension than that of H. Note that U denotes a set of numerical basis vectors that are never explicitly constructed. Owing to its separability, operations with U can be replaced by operations with three small matrices, U_x, U_y, and U_z. Therefore, the memory requirement for U is not significant. To evaluate H̃, instead of directly evaluating the right-hand side of eq 7, we project the three terms of the Hamiltonian matrix separately: the kinetic energy, local potential, and nonlocal potential terms. Owing to the properties of U and the nonzero patterns of the three terms, the evaluation of H̃ can be performed efficiently. A further explanation of the projection of the Hamiltonian matrix is described in the Appendix.
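The separability claim above can be illustrated with a short, hedged NumPy sketch (grid sizes, rank sizes and the random orthonormal matrices are illustrative, not values from the paper): a coefficient tensor of size r_x × r_y × r_z is mapped to the grid with three small matrices, so the N × R matrix U is never stored.

```python
# Applying U = Ux (x) Uy (x) Uz to a coefficient tensor via three small mode products.
import numpy as np

Nx, Ny, Nz = 40, 40, 40          # grid points per axis
rx, ry, rz = 8, 8, 8             # rank sizes
rng = np.random.default_rng(1)
Ux = np.linalg.qr(rng.standard_normal((Nx, rx)))[0]   # orthonormal columns
Uy = np.linalg.qr(rng.standard_normal((Ny, ry)))[0]
Uz = np.linalg.qr(rng.standard_normal((Nz, rz)))[0]

def to_grid(c):
    """Map coefficients c (rx, ry, rz) to grid values (Nx, Ny, Nz), i.e. phi = U c."""
    return np.einsum('ia,jb,kc,abc->ijk', Ux, Uy, Uz, c)

def to_coeff(phi):
    """Project grid values onto the basis, i.e. c = U^H phi."""
    return np.einsum('ia,jb,kc,ijk->abc', Ux.conj(), Uy.conj(), Uz.conj(), phi)

c = rng.standard_normal((rx, ry, rz))
phi = to_grid(c)
# orthonormality of Ux, Uy, Uz implies U^H U = I, so the coefficients are recovered
print(np.allclose(to_coeff(phi), c))   # True
```

Only the three factor matrices (Nx·rx + Ny·ry + Nz·rz numbers) are kept in memory, instead of the N × R matrix U.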
Unlike typical basis functions that use a predetermined formula, our U reflects the system information (e.g., relative positions of atoms, phase factors, and cell size) because it is constructed from the decomposition of the Hamiltonian matrix that includes system information. Hereafter, we name U the system-specific separable basis vector and investigate its applicability to density functional calculations.
Before we discuss the performance of the system-specific separable basis in a density functional calculation, we plot its overall process in Figure 1. The right and left panels of Figure 1 represent the conventional SCF procedure and the additional process, respectively. In the system-specific basis calculation, the basis vectors U_x, U_y, and U_z are constructed using the eigendecomposition of H^(x)(H^(x))^H, H^(y)(H^(y))^H, and H^(z)(H^(z))^H; their eigenvectors are identical with the left singular vectors of H^(x), H^(y), and H^(z), respectively. To avoid the SVD of a large sparse matrix, we perform this eigendecomposition instead of an SVD.
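The equivalence used here (left singular vectors of a matrix A are eigenvectors of A A^H) can be checked with a small, hedged NumPy example; the matrix below is a random stand-in for a flattened Hamiltonian, not data from the paper:

```python
# Eigendecomposition of A A^H reproduces the left singular vectors of A.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 900))             # stand-in for a flattened H^(x)
u_svd = np.linalg.svd(A, full_matrices=False)[0]

w, u_eig = np.linalg.eigh(A @ A.conj().T)      # eigenvalues in ascending order
u_eig = u_eig[:, ::-1]                         # reorder to match descending singular values

# corresponding columns agree up to a sign (phase); compare column-wise inner products
print(np.allclose(np.abs(np.sum(u_svd.conj() * u_eig, axis=0)), 1.0))   # True
```

Only a small dense Nx × Nx (here 30 × 30) matrix has to be diagonalized, which is the point of avoiding the SVD of the large flattened matrix.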
After basis construction, the original Hamiltonian is projected to the obtained basis space. The eigenvalues and eigenvectors of the projected Hamiltonian are then computed using a typical matrix diagonalization method. The orbital ϕ_p for computing the density ρ is evaluated as ϕ_p = U ϕ̃_p, where ϕ_p and ϕ̃_p are the pth eigenvectors of H and H̃, respectively. We note that ϕ_p satisfies the orthonormality condition because both ϕ̃_p and U are orthonormal. The Hartree and the exchange-correlation (XC) potentials for the obtained ρ are evaluated in the same way as in the ordinary FD calculation. To accelerate the system-specific basis vector calculations, we introduce two approximations. The first approximation is using a fixed basis during the SCF loop. In other words, the basis is constructed in the first step of the SCF loop using the initial Hamiltonian matrix; it is then used in the subsequent SCF steps. Although the construction of the basis set is not computationally heavy, changes in the basis set at every SCF step reduce the speed of SCF convergence. For a fixed basis, only the two local potential terms (Hartree and XC potentials) must be updated at each SCF step. Therefore, only the projection of the updated local potential is performed for each SCF step.
The second approximation is discarding the nonlocal pseudopotential in the basis construction. The errors introduced by the two approximations are plotted in Figure S1. The approximations may induce errors up to 100 meV in both the atomization energy and the band gap; however, these deviations disappear when the rank size sufficiently converges, and the approximations lead to a ∼6× increase in speed in all tested cases. Hereafter, all results are obtained using both approximations.
Implementation and Experiments. The construction of system-specific basis vectors and the projection of the Hamiltonian are performed using the Tucy package, which is written in C++ and has a Python interface. For density functional calculations, our Python package, called the grid-based open-source Python engine for large-scale simulation (GOSPEL), was used. GOSPEL supports FD calculations and system-specific separable basis calculations using Tucy. In GOSPEL, the Hartree potential was evaluated using the interpolation scaling method, as in our previous studies. 23,24 The XC potential is evaluated using the experimental version of libXC. 25 To assess the convergence and performance enhancement, both the reference FD and system-specific separable basis calculations were performed using the same systems. All computational options were used equally in both cases, and all calculations were performed using a single thread of an Intel Xeon Gold 6234 CPU. The PBE 26 functional was used for the XC functional, and optimized norm-conserving Vanderbilt 27 pseudopotentials were used. All 2D and 3D periodic structures were calculated using (4 × 4 × 1) and (4 × 4 × 4) k-point meshes, respectively. For the kinetic energy matrix, a 7-point FD matrix is used. SCF procedures end when the sum of the occupied band energies is converged to within 10^−6 Hartree.
For the iterative diagonalization of both the typical FD and the projected Hamiltonian matrices, we use the LOBPCG functions implemented in scipy, 28 a highly mature and optimized Python package for scientific computing. A compressed sparse row format is used for the FD Hamiltonian instead of a dense matrix format to compute the matrix-vector multiplications. GOSPEL and Tucy are freely available in their online git repositories (https://gitlab.com/jhwoo15/gospel and https://gitlab.com/jhwoo15/tucy, respectively).
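As a hedged illustration of this combination (sparse CSR Hamiltonian plus scipy's LOBPCG), the following small 1D example builds a 3-point FD Hamiltonian with a harmonic model potential and computes its lowest eigenpairs; the grid, potential and number of bands are illustrative and unrelated to the systems studied in the paper:

```python
# Lowest eigenpairs of a sparse 1D FD Hamiltonian with scipy's LOBPCG.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n, h = 120, 0.15                                  # grid points, spacing (bohr)
x = (np.arange(n) - n / 2) * h
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
H = (-0.5 * lap + sp.diags(0.5 * x**2)).tocsr()   # kinetic + harmonic potential, CSR format

rng = np.random.default_rng(3)
X0 = rng.standard_normal((n, 4))                  # request the 4 lowest states
eigvals, eigvecs = lobpcg(H, X0, largest=False, tol=1e-8, maxiter=1000)
print(np.sort(eigvals))                           # close to 0.5, 1.5, 2.5, 3.5 (oscillator levels)
```

The same routine can be applied to the projected Hamiltonian H̃, which is dense but much smaller.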
Because the cell parameters do not exactly match the multiples of a given grid spacing, the actual grid spacing is set to have the closest value of the given spacing within a small difference (up to 0.1 bohr). Here, we denote the given grid spacing instead of the actual grid spacing in the paper for better readability. The actual grid spacing corresponding to each structure is listed in Table S1.
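A hedged one-line illustration of this spacing adjustment (the cell length below is an arbitrary example, not one of the tested structures):

```python
# The number of grid points must be an integer, so the actual spacing is the
# value nearest to the requested one that divides the cell length evenly.
cell_length = 10.26      # bohr, illustrative cubic cell edge
requested_h = 0.3        # bohr
n_points = round(cell_length / requested_h)
actual_h = cell_length / n_points
print(n_points, actual_h)   # 34 points, actual spacing ~0.3018 bohr
```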
The performance of the system-specific separable basis is assessed for seven structures: three cubic diamond structures (C, SiC, and Si), two ABO 3 perovskites (BaTiO 3 and SrTiO 3 ), and two hexagonal 2D materials (hBN and hBCN). The atomic coordinates and cell parameters of the seven systems are presented in the third section of the Supporting Information. We used the atomization energy per atom for a fair comparison between systems with different numbers of atoms. Hereafter, we refer to the atomization energy per atom as the atomization energy.
■ RESULTS AND DISCUSSION
To confirm the dependency of the system-specific separable basis on the system information, we plot the results of the 1D Fourier transform of the first five vectors of the factor matrices constructed from the initial Hamiltonian at the Γ-(blue bars) and X-(red bars) points (see Figure 2). In the lowest panel, only Γ-point data is visible because only one k-point was sampled along the nonperiodic axis of the 2D hexagonal sheet structures.
First, the blue bars are symmetrically distributed in all cases because the basis vectors at the Γ-point are always real regardless of the structure. However, at the X-point, the basis vectors and Hamiltonian matrices are no longer symmetric because of the phase factor; therefore, the basis vectors are no longer symmetric in Fourier space. One interesting point related to the k-space is that the basis vectors at the X-point are not just shifts in the basis vector at the Γ-point. This indicates that the basis vectors at different points in the k-space are not the products of Γ-point basis vectors with the phase factor, and the basis vectors at each k-point are constructed in a way that reflects the overall Hamiltonian matrix. Figure 2 also shows the structural dependency of the basis vectors. The ABO 3 structures have a common pattern. C and Si structures also share a similar pattern. However, the SiC structure has a different shape than those of other diamond structures. SiC is composed of two different elements; therefore, the nature of the covalent bonds in SiC is largely different than those of C and Si. Likewise, the basis vectors of two hexagonal sheet structures show different patterns along the x-axis, whereas the basis vectors along the z-axis show a similar trend. This implies that hBN and hBCN show different characteristics along the periodic axes but not along the nonperiodic axis. Although it is difficult to elucidate which system information changes a specific pattern in the basis vectors that we obtain, we can observe the structural and phase dependencies of the system-specific basis vectors.
We investigated whether the obtained basis vectors can properly span the orbitals from the reference FD calculations. We projected the reference orbitals from the FD calculation of SrTiO 3 onto U and calculated their residuals. Figure 3 plots the sizes of the projection residual on U constructed with different rank sizes (r x , r y , and r z ). A large residual size means that the basis space does not sufficiently span the reference orbital. The tested reference orbitals were obtained from ordinary FD calculations of SrTiO 3 . For good readability, we present the residual sizes of the first 100 orbitals and of the occupied orbitals in Figure 3a and 3b, respectively. For small rank sizes, the basis vectors do not sufficiently cover the reference orbitals, but they span the orbitals better as the rank size increases (see Figure 3a). In addition, it was observed that the residual sizes for the low-lying orbitals do not always decrease first as the rank size increases, but those for high virtual orbitals slowly converge to zero. This indicates that the basis space spans the orbitals sufficiently, especially for low-lying orbitals.
To investigate the effects of system-specific separable basis vectors in the SCF procedure, we performed density functional calculations for the tested structures. The computational details and results are summarized in Tables S3−S9. Figure 4 shows the absolute errors in atomization energies, |ΔE a | (black line), and band gaps, |ΔE g | (blue line), as a function of the number of basis vectors, R = r x × r y × r z . For the cubic diamond structures and ABO 3 structures, we sampled the same rank sizes for all three axes, whereas the rank size of the hexagonal sheet structures is proportional to the cell size along each axis. The same cell parameters and rank sizes were used for the single-atom calculations needed to calculate the atomization energies.
The atomization energy and the band gap do not converge monotonically with respect to R, whereas the monotonic convergence of the total energy is guaranteed by the variational principle, as shown in the 10th column in Tables S3−S9. In the test range of R, the systems show convergence within 0.1−10 meV for both |ΔE a | and |ΔE g |. Elucidating the dependence of the error convergence on the structures is difficult with a few test cases. Nonetheless, we note that the system-specific separable basis converges well, even in systems with transition metals or a nonperiodic axis.
To investigate the speed of SCF calculations with a system-specific separable basis, we plot the elapsed times for the overall calculations and three major bottlenecks for Si calculations as a function of R in Figure 5. To check the dependence of the computation time on the grid spacing h, we also plotted the results with different h values. The blue, orange, and green lines indicate the results with h values of 0.3, 0.25, and 0.2 bohr, respectively. The dashed lines represent the elapsed time for the reference FD calculations. The elapsed time of the total SCF procedure with the system-specific separable basis increases as R increases but does not show a dramatic change with respect to h, as shown in Figure 5a. By contrast, the total elapsed time of the reference FD results increased rapidly as h decreased.
The increase in the elapsed time of the reference calculations originates from diagonalization, which is the primary bottleneck. As shown in Figure 5b, most of the elapsed time for the reference calculations is spent in diagonalization, and its cost is strongly dependent on the h values. For the case of a system-specific separable basis, the elapsed time of the diagonalization is independent of the choice of h and is much smaller than that of the reference cases. This is because the dimensions of the projected Hamiltonian matrix are determined not by the number of grid points, N = N x × N y × N z , but by R, which is much smaller than N.
The system-specific separable basis additionally induces the basis construction and projection processes. Figures 5c and d show the elapsed time for projection and basis construction, respectively. The computational time for basis construction relies on h values because we obtain U_x, U_y, and U_z from the direct diagonalization of the small dense matrices H^(x)(H^(x))^H, H^(y)(H^(y))^H, and H^(z)(H^(z))^H. Despite the strong h dependence, the basis construction time occupies only a small part of the overall time. Contrary to the basis construction time, the cost of the projection depends on both h and R. The detailed computational complexities of the projections and basis construction are explained in the Appendix. The projection time increases as R increases and h decreases. However, the elapsed time of the projection does not differ greatly as a function of h, except for a few small-R cases. Hence, the total computational time for a system-specific basis calculation does not increase significantly as h decreases.
Although system-specific separable basis calculations require additional processes, they show excellent performance in diagonalization; thus, the overall computational costs are reduced in most cases. Here, we discuss only the results of Si, but we observed the same trend for other systems (see the sixth−ninth columns of Tables S3−S9). Figure 6 summarizes the overall performance enhancement by the new basis with respect to the reference FD calculations as a function of R/N, when h = 0.2 bohr. Figure S3 shows the performance enhancement results with other h values. For all systems, smaller R/N values resulted in larger performance enhancement. This is because a smaller R/N implies a larger reduction of the dimension of the Hamiltonian that has to be diagonalized. The intersections of the horizontal and vertical lines represent the smallest R/N case for each system, where both |ΔE a | and |ΔE g | were less than 100 meV. The intersections of the ABO 3 and hexagonal sheet structures were ∼8% and ∼2%, respectively. The diamond structures should have an R/N of ∼5% to achieve a tolerance of 100 meV, except for Si. The N value for the Si system is greater than those of the other diamond structures because the cell volume of Si is larger than that of the others. In addition, the R required for the error convergence was slightly smaller than that of the others. Therefore, the Si system showed significant performance enhancement.
To be more practical, a system-specific separable basis must achieve performance improvements with sufficiently high accuracy. The system-specific separable basis balances speed and accuracy by tuning R. A large R achieves high accuracy but simultaneously reduces the calculation speed, as shown in Figures 4 and 6. Table 1 summarizes the performance enhancements of the tested systems for 100, 50, and 25 meV tolerances for both types of errors. As shown in Figure 6, the use of large R/N to achieve high accuracy reduces the gains in computation speed. However, significant acceleration (2− 14×) was achieved, even within a small tolerance value of 25 meV.
■ CONCLUSION
Here, we proposed a system-specific separable basis derived from Tucker decomposition of a finite-difference Hamiltonian and investigated its performance in density functional calculations. We showed that the new basis can successfully span low-lying orbitals and that its coverage is systematically improved by increasing the rank size of Tucker decomposition. The proposed basis dramatically reduces the dimensions of the Hamiltonian matrix and hence accelerates the diagonalization of the Hamiltonian matrix. We confirmed the properties of the basis vectors and measured the performance enhancements using seven selected systems. The system-specific separable basis achieved a 2−71× increase in computation speed with 100 meV tolerance for the band gap and the atomization energy. Higher accuracy can be achieved for all tested systems with a larger rank size but a lower gain in computation speed (e.g., 2−14× increase with a 25 meV tolerance). Here, we validated the performance of the system-specific separable basis only for density functional calculations. However, we expect it to be useful for higher-order quantum chemical methods because our basis benefits from the advantages of both the real-space method (e.g., fast numerical integrations and derivatives) and the typical basis function expansion method (e.g., low dimension of basis space and separability).
■ APPENDIX Negative Shift of the Hamiltonian Matrix
We introduced a fictitious potential to obtain singular vectors that well span low-lying orbitals. By the application of a large negative constant potential, the eigenspectrum of H is shifted down. This shift avoids a large positive eigenvalue of H, so the singular vectors that mainly span the high virtual orbitals are not selected in the compact HOSVD. In Figure S2, the results of representability tests with different sizes of the fictitious potentials are plotted. If the magnitude of the fictitious potential is sufficiently large, its value does not affect the representability of the basis vectors. We used −500 au as the fictitious potential for all calculations presented in this work.
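The shift argument can be verified directly with a small, hedged NumPy check (the matrix is a random Hermitian stand-in, not an actual Hamiltonian): adding a constant to the diagonal shifts all eigenvalues by that constant and leaves the eigenvectors unchanged.

```python
# A constant diagonal shift moves the spectrum but not the eigenvectors.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2                       # Hermitian stand-in for H
shift = -500.0                          # fictitious constant potential (au)

w0, v0 = np.linalg.eigh(H)
w1, v1 = np.linalg.eigh(H + shift * np.eye(50))
print(np.allclose(w1, w0 + shift))                           # eigenvalues shifted by the constant
print(np.allclose(np.abs(np.sum(v0 * v1, axis=0)), 1.0))     # same eigenvectors up to sign
```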
Projecting the Hamiltonian Matrix on a System-Specific Separable Basis Space
A KS Hamiltonian matrix is composed of three terms: kinetic energy matrix, T, local potential matrix, V local , and nonlocal potential matrix, V NL . Each term has a specific nonzero pattern; therefore, we replace the projection of the Hamiltonian matrix, H̃, with the projection of each matrix using its nonzero pattern.
An FD kinetic energy matrix in order-6 tensor format, 𝒯, can be represented as follows:

𝒯_{i j k i′ j′ k′} = T^x_{i i′} δ_{j j′} δ_{k k′} + δ_{i i′} T^y_{j j′} δ_{k k′} + δ_{i i′} δ_{j j′} T^z_{k k′}, (8)

where T^n (n ∈ {x, y, z}) is the second-order FD matrix along the n-axis and δ is the Kronecker delta. Owing to the separability and orthonormality of U, the projection of the kinetic energy matrix on U, T̃, is obtained by three 1D projections as follows:

T̃ = (U_x^H T^x U_x) ⊗ I_{r_y} ⊗ I_{r_z} + I_{r_x} ⊗ (U_y^H T^y U_y) ⊗ I_{r_z} + I_{r_x} ⊗ I_{r_y} ⊗ (U_z^H T^z U_z). (9)

In eq 9, the projection of the kinetic energy matrix is calculated using the projections of the three 1D kinetic energy matrices. If we assume that r_x = r_y = r_z = R^{1/3} and N_x = N_y = N_z = N^{1/3}, the computational complexity of evaluating each term in eq 9 becomes O(R^{1/3} N^{2/3} + R^{2/3} N^{1/3}). In the FD representation, the local potential matrix, V^local, has nonzero values only on the diagonal, which means that V^local_{pq} = V^local_p δ_{pq}, where p and q are the indices on a 3D grid. If p and q are decomposed into (i, j, k) and (i′, j′, k′), respectively, the projection of the local potential can be written as

Ṽ^local_{α β γ, α′ β′ γ′} = Σ_{i j k} ( (U_x)*_{i α} (U_x)_{i α′} ) ( (U_y)*_{j β} (U_y)_{j β′} ) ( (U_z)*_{k γ} (U_z)_{k γ′} ) V^local_{i j k}. (10)

The first, second, and third parentheses in eq 10 represent the x-, y-, and z-axis projections, respectively. Each projection is performed with the contraction of an order-6 tensor with two matrices; therefore, it requires an 8-nested for-loop. However, using the nonzero pattern of 𝒱^local_{i j k i′ j′ k′}, the x-, y-, and z-axis projections are performed by 5-, 6-, and 7-nested for-loops, respectively. The computational complexities of the three projections are O(R^{2/3} N), O(R^{4/3} N^{2/3}), and O(R^2 N^{1/3}), respectively. In our test, the last projection was a major bottleneck in the local potential projection.
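The kinetic-energy projection of eq 9 can be checked against a brute-force projection on a deliberately tiny grid; the following is a hedged NumPy sketch with illustrative grid sizes, spacing and ranks (not the Tucy code):

```python
# Eq 9: three 1D projections reproduce the full projection U^H T U exactly.
import numpy as np

def fd_kinetic_1d(n, h):
    """3-point FD second-derivative matrix times -1/2 (atomic units)."""
    T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return -0.5 * T / h**2

nx, ny, nz, h = 6, 7, 8, 0.4
rx, ry, rz = 3, 3, 4
rng = np.random.default_rng(4)
Ux = np.linalg.qr(rng.standard_normal((nx, rx)))[0]
Uy = np.linalg.qr(rng.standard_normal((ny, ry)))[0]
Uz = np.linalg.qr(rng.standard_normal((nz, rz)))[0]
Tx, Ty, Tz = fd_kinetic_1d(nx, h), fd_kinetic_1d(ny, h), fd_kinetic_1d(nz, h)

# eq 9: three small 1D projections combined with identity Kronecker factors
T_proj = (np.kron(np.kron(Ux.T @ Tx @ Ux, np.eye(ry)), np.eye(rz))
          + np.kron(np.kron(np.eye(rx), Uy.T @ Ty @ Uy), np.eye(rz))
          + np.kron(np.kron(np.eye(rx), np.eye(ry)), Uz.T @ Tz @ Uz))

# brute-force reference: project the full N x N kinetic matrix with U = Ux (x) Uy (x) Uz
T_full = (np.kron(np.kron(Tx, np.eye(ny)), np.eye(nz))
          + np.kron(np.kron(np.eye(nx), Ty), np.eye(nz))
          + np.kron(np.kron(np.eye(nx), np.eye(ny)), Tz))
U = np.kron(np.kron(Ux, Uy), Uz)
print(np.allclose(T_proj, U.T @ T_full @ U))   # True
```

In the actual method only the three small projected matrices are ever formed, which is where the complexity estimate after eq 9 comes from.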
Because all electron−electron interactions are represented by the local potential under the KS density functional theory, a nonlocal potential matrix, V NL , is only from the pseudopotential. The most frequently used pseudopotential is the Kleinman−Bylander (KB) 29 | 2022-04-20T06:25:14.910Z | 2022-04-18T00:00:00.000 | {
"year": 2022,
"sha1": "4a5c0c57eced8ff8be7e85d55f221899db60605e",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c79f25cdd1a5a4cfedc70908748a718acb1a3b12",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
57434880 | pes2o/s2orc | v3-fos-license | Atlantoaxial Fixation in a Patient with Bilateral Persistent First Intersegmental Vertebral Artery Anomaly Using an O-arm Navigation System: A Case Report
Several variants of the vertebral artery (VA) have been reported, including those in the extraosseous and intraosseous regions of the craniovertebral junction (CVJ) 1) . The extraosseous variants include persistent first intersegmental vertebral artery (PFIA) and fenestration. Atlantoaxial fixation for PFIA or fenestration is challenging because C1/2 transarticular screws (TAS) and screw rod constructs with C1 lateral mass screws (LMS) and C2 pedicle screws require exposure of the bony landmarks, including the C1/2 joint, C2 pedicle, and C2 isthmus, where the anomalous VA is located [2][3][4] . Yamazaki et al. performed fusion in patients with an anomalous VA in the extraosseous region of the CVJ 1) . Although occipitocervical fusion provides rigid and secure fixation for various kinds of lesions around the craniocervical junction, it decreases the range of motion of the cervical spine and may cause dyspnea and/or dysphagia postoperatively. 5,6) Here, we describe a patient with bilateral PFIA and atlantoaxial subluxation who was successfully treated with TAS using an O-arm navigation system (O-arm).
A 53-year-old woman presented with severe pain in the left occipital region. She had been treated for rheumatoid arthritis for 10 years. Results of neurological examination were normal. Preoperative radiographs revealed an increased atlas-dens interval (Fig. 1a). MRI showed no spinal stenosis or high-intensity signal on T2-weighted images at C1/2 ( Fig. 1b). Preoperative three-dimensional computed tomography (CT) angiography revealed bilateral PFIA but no congenital skeletal anomaly (Fig. 2).
Surgery was planned because of the severe left occipital pain. Fixation could not be performed using screw rod constructs because the PFIA ran across the entry points of the LMS at C1. Furthermore, conventional TAS with exposure of the C2 isthmus and C1/2 joints 7) was not possible because of the presence of PFIA. Therefore, we performed TAS fixation and Brooks' procedure using an O-arm without exposing the C2 isthmus and C1/2 joints.
After induction of general anesthesia, the patient was placed in the prone position. A midline incision was made to expose the C1 posterior arch, C2 lamina, and cranial side of the C3 lamina. Next, 3-mm Nesplon tapes (Alfresa Pharma, Osaka, Japan) were passed under the C1 posterior arch and C2 lamina. A Doppler echo probe was used to confirm the position of the VA during these steps. A Nesplon tape was tightened for the temporary fixation of the C1/2 joints. The starting point of the TAS was confirmed using an O-arm, and screw holes were made with a navigated drill guide, through which 4-mm screws were inserted (Fig. 3a, b). The bone graft was harvested from the iliac crest and placed using Brooks' procedure with Nesplon tape.
Severe pain in the left occipital region disappeared just after surgery. The patient wore a cervical collar until bone union was confirmed with CT three months after surgery.
Here, we describe a patient with PFIA in whom we performed atlantoaxial fixation using an O-arm. Atlantoaxial fixation using an O-arm has previously been reported 8,9) . Wada et al. reported insertion of LMS at C1 caudally from the C2 nerve root using an O-arm, with no screw malpositioning observed on postoperative CT 8) . Hitti et al. reported less blood loss when fixation of the upper cervical spine was performed with navigation rather than without 9) . They also reported that the use of an O-arm avoided the need to expose any bony landmarks when placing TAS for atlantoaxial fixation. We, therefore, applied an O-arm for this case, as it could minimize the exposure of the bony landmarks where the PFIA was located. However, screw malposition in cervical spine with an O-arm has been reported 10) . Thus, we need to recognize the potential risks of using an O-arm.
Conflicts of Interest:
The authors declare that there are no relevant conflicts of interest.
Author Contributions: Hideaki Kashiro wrote and prepared the manuscript. All authors participated in the study design. All authors have read, reviewed, and approved the article. | 2019-08-23T13:03:46.228Z | 2018-11-10T00:00:00.000 | {
"year": 2018,
"sha1": "8f04cca60fef6dbeb5d4627ef74aa855ed9783e8",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/ssrr/3/2/3_2018-0065/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f04cca60fef6dbeb5d4627ef74aa855ed9783e8",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220514363 | pes2o/s2orc | v3-fos-license | Legendrian contact homology for attaching links in higher dimensional subcritical Weinstein manifolds
Let $\Lambda$ be a link of Legendrian spheres in the boundary of a subcritical Weinstein manifold $X$. We show that the computation of the Legendrian contact homology of $\Lambda$ can be reduced to a computation of Legendrian contact homology in 1--jet spaces. Since the Legendrian contact homology in 1-jet spaces is well studied, this gives a simplified way to compute the Legendrian contact homology of $\Lambda$. We restrict to the case when the attaching spheres of the subcritical handles of $X$ do not interact with each other, and we will only consider mod 2 coefficients for now. The more general situation will be addressed in a forthcoming paper. As an application we compute the homology of the free loop space of $\mathbb{CP}^2$.
Introduction
A Weinstein manifold is the symplectic counterpart of a Stein manifold in complex geometry. More precisely, any Weinstein manifold X of dimension 2n can be given a handle decomposition into symplectic handles of index at most n. The handles are attached along isotropic spheres in the contact boundary of X, and after having attached the handles of index less than n we get a subcritical Weinstein manifold. The top index handles are then attached along Legendrian spheres in the contact boundary of the subcritical part of X. It has been shown that much of the symplectic topology of X is encoded in the Legendrian attaching spheres. Indeed, the symplectic homology of any subcritical Weinstein manifold vanishes, and by [BEE12] it follows that the symplectic homology of X is isomorphic to the Hochschild homology of the Chekanov-Eliashberg DGA of the attaching link.
The Chekanov-Eliashberg DGA A(V, Λ) of a Legendrian Λ in a contact manifold (V, λ) is freely generated by the Reeb chords of Λ. These are solution curves to the Reeb vector field R λ associated to λ, defined by λ(R λ ) = 1, dλ(R λ , ·) = 0, and the Reeb chords should have start and end point on Λ. The grading is given by a Maslov-type index, and the differential counts certain pseudo-holomorphic curves. In the case when (V, λ) is the contact boundary of a Weinstein manifold X, this is given by a count of pseudo-holomorphic disks in R × V , capped off with pseudo-holomorphic planes in X. Such disks are called anchored in X. See Section 2.3. Another special case is when V is the 1-jet space J 1 (M ) of a smooth manifold M . Then one counts pseudo-holomorphic disks either in the symplectization of J 1 (M ) or in the Lagrangian projection T * M . This case is rather well-studied, and there are a number of computational tools available, even in higher dimensions. See e.g. [EES05,EES07,Ekh07,Kar16]. 1 In this paper we describe a setup where the Chekanov-Eliashberg DGA of the attaching spheres in the boundary V of a subcritical Weinstein manifold X with c 1 (X) = 0 can be computed from Legendrians in 1-jet spaces. In this way we don't have to consider pseudo-holomorphic disks anchored in X.
This is a generalization of the work in [EN15], where the Chekanov-Eliashberg DGA is computed in the boundary of subcritical Weinstein 4-manifolds. We will assume that n > 2, and we focus on a simplified situation where the attaching spheres of the subcritical handles do not interact with each other. We also restrict to Z 2 -coefficients. The more general situation will be dealt with in a forthcoming paper, together with a careful treatment of signs so that we can compute the Legendrian contact homology of the attaching link over Z.
To obtain our result, we need Λ to satisfy some assumptions when passing through the subcritical handles. Namely, if Λ passes through a handle of index k < n we assume it to be of the form D k × Λ sub in the handle. Here D k is the core of the subcritical handle and Λ sub ⊂ S 2n−2k−1 is a Legendrian submanifold with respect to the standard contact structure. We also assume that Λ is contained in a 1-jet neighborhood of Λ st = D k × Λ st,sub when passing through the handle, where Λ st,sub ⊂ S 2n−2k−1 = {z ∈ C n−k ; |z| = 1} is the standard Legendrian unknot, given by the real part of S 2n−2k−1 . In addition we assume that the part of Λ outside the sub-critical handles is contained in a Darboux ball D a ⊂ S 2n−1 , which we then identify with a ball in J 1 (R n−1 ).
In this way we cover Λ with charts of Legendrians in 1-jet spaces. This is in general not enough to be able to compute the Chekanov-Eliashberg DGA of Λ in 1-jet spaces, since we might have pseudo-holomorphic disks that leave the Darboux ball and the 1-jet neighborhood of Λ st . To remedy this problem, we Legendrian isotope Λ in a neighborhood of the attaching regions for the subcritical handles, by performing a high-dimensional analogue of the dipping procedure in [Sab06]. As a result, we are able to split the Chekanov-Eliashberg DGA of Λ into different parts: (1) For each handle of index k < n − 1 we get a sub-DGA A(J 1 (R n−k−1 ), Λ sub ).
(2) For each handle of index k = n − 1 we get a sub-DGA which can be explicitly described and is similar to the handle-DGA in [EN15].
(3) For each sub-critical handle we also get a sub-DGA which can be computed in J 1 (Λ st ) and where the Legendrian is a dipped version of D k × Λ sub . (4) Finally, we get a sub-DGA A(J 1 (R n−1 ), Λ ∩ D a ). We will prove the following in the case when there is only one sub-critical handle of X.
As an application we describe a Weinstein handle decomposition of T * CP 2 and compute the Chekanov-Eliashberg DGA of the index 4 attaching sphere. Using the relation between the Legendrian contact homology of the attaching spheres and the symplectic homology of the resulting Weinstein manifold [BEE12] together with the results of [AS06,Vit18,SW06], which relate the symplectic homology of T * M with the singular homology of the free loop space of M , this gives a description of the singular homology of the free loop space of CP 2 .
From [BEE12] it also follows that there is a relation between the Chekanov-Eliashberg DGA of the Legendrian attaching spheres and the wrapped Fukaya category of the cocores of the critical handles. By recent results in [GPS17,CDGG17] these cocores generate the wrapped Fukaya category of the resulting Weinstein manifold. In [CM19] the authors use this together with the formula in [EN15] to give examples of mirror manifolds in homological mirror symmetry. Similar calculations are performed in [ACG + 20]. We hope that such computations can be made in higher dimensions with the help of our work. We also hope that one can use our results to perform higher-dimensional analogues of the computations in [EL17,EL19], where the authors use Koszul duality together with the work in [EN15] to compute the wrapped Fukaya category for 4-dimensional plumbings.
Outline. In Section 2 we fix notation, give a brief introduction to Weinstein manifolds and define Legendrian contact homology in contact manifolds which are Weinstein fillable. We also describe the easier case when the contact manifold is a 1-jet space of a smooth manifold. In Section 3 we explain the assumptions needed for us to show that Legendrian contact homology in the boundary of a subcritical Weinstein manifold reduces to a computation of Legendrian contact homology in some different 1-jet spaces. We also describe the dipping procedure and give a more careful statement of Theorem 1.1. In Section 4 we give proofs of the results in Section 3. In Section 5 we use a Weinstein handle decomposition of T * CP 2 to compute the singular homology of the free loop space of CP 2 .
Background
A Weinstein manifold is a symplectic manifold (X, ω) equipped with a Liouville vector field Z and a Morse function which is gradient-like for Z. Along the boundary V of X we get an induced contact structure with contact form λ = ι Z ω. The Morse function allows us to give a handle decomposition of X into Weinstein handles, defined below, where the handles are attached along isotropic spheres in the contact boundary. If dim X = 2n, these handles are of index at most n, and if X only has handles of index less than n we say that X is subcritical. A contact manifold (V, λ) that occurs as the boundary of some Weinstein manifold as above is called Weinstein fillable.
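As a brief reminder of the standard definitions behind this (they are not specific to the present paper): a Liouville vector field $Z$ for $(X,\omega)$ satisfies
$$\mathcal{L}_Z\,\omega = \omega,$$
which by Cartan's formula is equivalent to $d(\iota_Z\omega) = \omega$, so $\lambda = \iota_Z\omega$ is a primitive of $\omega$ and restricts to a contact form on any hypersurface transverse to $Z$, such as the boundary $V$.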
2.1. Notation. For u = (u 1 , . . . , u l ) ∈ R l we write |u| for its Euclidean norm. On the 1-jet space J 1 (M ) = T * M × R we use coordinates (u, v, r), where u are the coordinates on M , v are the cotangent coordinates and r is the coordinate in the R-direction.

2.2. Geometry. Let X be a subcritical Weinstein manifold of dimension 2n, and let V be its contact boundary. Assume that X admits a Weinstein handle decomposition in which a ball B 2n has Weinstein handles H k 1 , . . . , H k k l of index k attached. We will use a standard model of such a handle, H k ⊂ R 2n with coordinates (x, y, p, q), depending on a small constant δ > 0.
2.2. Geometry. Let X be a subcritical Weinstein manifold of dimension 2n, let V be its contact boundary. Assume that X admits a Weinstein handle decomposition where H k 1 , . . . , H k k l are Weinstein handles of index k. We will use the following model of such a handle: where δ > 0 is some small constant. The handle has a Liouville vector field with respect to the standard symplectic form ω st = dx ∧ dy + dp ∧ dq, and this vector field is transverse to the boundary and points out of H k along H k + and into H k along H k − . If now X is a Weinstein manifold with contact boundary V , and if Υ ⊂ V is an isotropic sphere of dimension k − 1 then one can use the Liouville vector field of X along a neighborhood of Υ in V and the Liouville vector field of H k along H k − to attach H k to X to get a new Weinstein manifold X = X ∪ H k . See [Wei91]. The boundary of X will again be a contact manifold, where the contact form on H k + is given by 2p j dq j + q j dp j .
Remark 2.1. If (V, λ) is the contact boundary of a subcritical Weinstein manifold then we will identify it with standard contact S 2n−1 with handles attached.

2.3. LCH in fillable contact manifolds. Here we give a brief overview of the definition of Legendrian contact homology in the boundary of a Weinstein manifold. We refer to [EGH00, BEE12, Ekh19] for a more thorough treatment. Let (V, λ) be the contact boundary of a Weinstein manifold X, and let Λ ⊂ V be a Legendrian submanifold. Assume that c 1 (X) = 0. Then we define the Legendrian contact homology (LCH) of Λ to be the homology of the differential graded algebra (DGA) A(V, Λ) which is defined as follows.
The algebra is freely generated over Z 2 [H 2 (X, Λ)] by the Reeb chords of Λ, which are solution curves of the Reeb vector field R λ having start and end point on Λ. The chords are graded by a Maslov type index, called the Conley-Zehnder index µ cz . That is, if c is a Reeb chord of Λ then the grading of c is given by |c| = µ cz (γ c ) − 1, where γ c is a closed path in V from the end point c + of c in Λ, going through Λ to the starting point c − of c and then follows c to the end point c + in the case when c − and c + belong to the same component of Λ. In the case when the start and end point belong to different components of Λ the path γ c also contains a path from the component of Λ containing c + to the component of Λ containing c − . The Conley-Zehnder index measures how much the contact distribution ξ = Ker λ rotates along this path. The differential is defined by a count of anchored pseudo-holomorphic curves in X, as follows. Let J be an almost complex structure on X which is compatible with the symplectic form and which is cylindrical in a neighborhood R t × V of the boundary V , meaning that it is invariant under translations in the R-factor, gives a complex structure on Ker λ and satisfies that J(∂ t ) = R λ .
An anchored pseudo-holomorphic disk is a two-level J-holomorphic building, where the top level is given by a J-holomorphic map u : (D m , ∂D m ) → (R × V, R × Λ), where D m is the unit disk in C with m punctures p 0 , p 1 , . . . , p m−1 along its boundary, where p 0 is distinguished and located at 1. This puncture is called positive and the punctures p 1 , . . . , p m−1 are called negative. Near the puncture p 0 the map u is required to be asymptotic to a Reeb chord a at +∞, and near the negative puncture p i it should be asymptotic to a Reeb chord b i at −∞ for i = 1, . . . , m − 1. The disk D m is also allowed to have interior punctures z 1 , . . . , z l , so that near z i the map u is asymptotic to a cylinder over a Reeb orbit γ i at −∞, i = 1, . . . , l.
The lower level consists of J-holomorphic maps v i : C → X, i = 1, . . . , l, defined on the once-punctured sphere, where v i maps a neighborhood of the puncture asymptotically to the cylinder over the Reeb orbit γ i at +∞. Let M R×V ;X A (a, b), b = b 1 · · · b m , denote the moduli space of such buildings, where A denotes the homology class of the building. Then the differential of A(V, Λ) is defined on generators by

∂a = Σ_{A, b} |M R×V ;X A (a, b)| A b,

where |M R×V ;X A (a, b)| is the mod 2 count of R-components in the moduli space, and the differential is extended to the whole of A(V, Λ) by the Leibniz rule. For proofs that the homology of this DGA gives a Legendrian invariant we refer to [EGH00, BEE12, Ekh19].

2.4. LCH in 1-jet spaces. Now let M be a smooth manifold and consider the 1-jet space J 1 (M ) = T * M × R with its standard contact structure. In this case one can use the Lagrangian projection Π C : J 1 (M ) → T * M to study Legendrian submanifolds. The Legendrians are projected to exact, immersed Lagrangians of T * M under this projection, and the double points of Π C (Λ) correspond to Reeb chords of Λ. Moreover, after a small Legendrian isotopy of Λ we may assume that it is chord generic, meaning that Π C (Λ) is an immersion with transverse double points as the only intersections. This implies that if Λ is closed, then the number of Reeb chords of Λ is finite.
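In one common coordinate model (the sign conventions here are the standard ones and may differ from the conventions fixed later in the paper): with coordinates $(u, v, r)$ on $J^1(M) = T^*M \times \mathbb{R}$ and contact form $\lambda = dr - v\,du$, one has
$$\Pi_{\mathbb{C}}(u, v, r) = (u, v), \qquad R_\lambda = \partial_r,$$
so a Reeb chord of $\Lambda$ is a vertical segment joining two points of $\Lambda$ with the same image under $\Pi_{\mathbb{C}}$, which is exactly a double point of $\Pi_{\mathbb{C}}(\Lambda)$.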
2.4.1. Grading. The grading of a Reeb chord of Λ can be explicitly described as follows.
Consider the front projection Π F : J 1 (M ) → M × R and the base projection Π : J 1 (M ) → M. We will assume that Λ is front generic. We refer to [EES05, Section 3.2] for a definition of this, but briefly this means that Π| Λ is an immersion outside a codimension 1 singular set Σ ⊂ Λ, and that there is a subset Σ′ ⊂ Σ of codimension 1 so that the points in Σ \ Σ′ belong to a standard cusp singularity of the front projection. The points in Σ \ Σ′ will be called the cusp edge points of the front of Λ. Let Λ 1 , . . . , Λ s be the connected components of Λ. For each component Λ j fix a point q j ∈ Λ j so that q j does not project to a singularity under the front projection and so that it does not coincide with a Reeb chord start or end point.
For each pair Λ i , Λ j such that there is a Reeb chord between them, pick one such chord c ij , which we will call a connecting chord. Let c ij,± be the start and end point of c ij so that z(c ij,+ ) > z(c ij,− ). Suppose that c ij,+ ∈ Λ j , c ij,− ∈ Λ i . Then there are locally defined functions f i , f j : U → R, where U ⊂ M is such that Π(c ij ) ∈ U , so that a neighborhood of c ij,− in Λ i and of c ij,+ in Λ j is given by the 1-jet graph of f i and of f j , respectively, and c ij corresponds to a non-degenerate critical point of f j − f i . If now c is a Reeb chord of Λ with c ± ∈ Λ i± , pick admissible paths γ c,± ⊂ Λ i± from c ± to q i± . These paths are called capping paths for c.
Definition 2.4. The grading of a Reeb chord c of Λ is given by
2.4.2. Differential. The differential of A(J 1 (M ), Λ) is defined by counting pseudoholomorphic disks of the Legendrian Λ. This can be done in two different ways, either by counting disks in the cotangent bundle T * M , with the disks having boundary on the Lagrangian Π C (Λ), or by counting disks in the symplectization of J 1 (M ), with the disks having boundary on the Lagrangian R × Λ. In [DR16] it is proven that for certain choices of almost complex structures these two different set-ups give the same count of elements mod 2, and in [Kar20] this is proven to hold also with Z-coefficients.
We give the definition of the count in the cotangent bundle, and refer to [Ekh08,DR16,Kar20] for the definition of the count in the symplectization.
Let J be an almost complex structure of T * M , compatible with the standard symplectic structure. Let D m+1 denote the punctured unit disk in C with m + 1 punctures p 0 , . . . , p m cyclically ordered along the boundary in the counterclockwise direction, starting at p 0 = 1. Let a, b 1 , . . . , b m be Lagrangian projections of Reeb chords.
Definition 2.5. We say that u : (D m+1 , ∂D m+1 ) → (T * M, Π C (Λ)) is a J-holomorphic disk of Λ with positive puncture a and negative punctures b 1 , . . . , b m if u is J-holomorphic, ũ denotes the lift of u| ∂D m+1 to Λ, and
• u(p 0 ) = a, and ũ makes a jump from lower to higher z-coordinate when passing through p 0 in the counterclockwise direction,
• u(p i ) = b i , i = 1, . . . , m, and ũ makes a jump from higher to lower z-coordinate when passing through p i in the counterclockwise direction.
We let M(a, b) = M Π C (Λ) (a, b) denote the moduli space of J-holomorphic disks of Λ with positive puncture a and negative punctures b. We consider two disks in the moduli space to be equal if they only differ by a biholomorphic reparametrization of the domain.
We define the differential of A(J 1 (M ), Λ) on generators a by

∂a = Σ_b |M(a, b)| R b,

and extend it to the whole of A(Λ) by the Leibniz rule. Here |M(a, b)| R ∈ R is the algebraic count of elements in the moduli space, which in the case of R = Z 2 is given by the modulo 2 count. In [EES07] it is proven that the homology of A(V, Λ) is a well-defined Legendrian invariant, that is, ∂ 2 = 0 and the homology is invariant under Legendrian isotopies.
Morse flow trees.
Instead of using pseudo-holomorphic disks to define the differential, one can as well use Morse flow trees. These are defined as follows.
Let Λ ⊂ J 1 (M ) be a chord generic Legendrian submanifold with simple front singularities, meaning that the codimension 2 subset Σ ⊂ Λ where the singularities of the base projection does not consist of cusp singularities is empty. If dim Λ = 2 we may also allow swallow tail singularities, see [ [Ekh07], Section 2.2.A].
Away from the singular set Σ, the pre-image of an open set U ⊂ M under the base projection Π is given by the multi-1-jet lift of locally defined functions f i : U → R. These locally defining functions of Λ are used to build the Morse flow trees. More precisely, these trees are defined as follows.
Definition 2.6. A Morse flow tree is an immersed tree Γ in M satisfying the following conditions.
• The tree is rooted and oriented away from the root. The root is 1-or 2-valent.
• Each edge γ ij of Γ is a solution curve (flow line) of some local function difference f i − f j , where f i > f j are locally defining functions of Λ. • The edge γ ij is given the orientation of −∇(f i − f j ).
• The cotangent lift of Γ gives an oriented closed curve in Π C (Λ), in the following way. Each edge γ ij has two cotangent lifts, γ̂ ij,i and γ̂ ij,j , given by the images of γ ij under df i and df j , respectively. If we give γ̂ ij,i the orientation of γ ij , and γ̂ ij,j the negative orientation of γ ij , then the union of all the lifted edges of Γ is required to patch together to give a closed curve in Π C (Λ) ⊂ T * M .
• The vertices of Γ have valence at most 3, and are of the following form:
- 1-valent punctures, which are critical points of the corresponding local function difference,
- 2-valent punctures, which are critical points of the corresponding local function difference.
Since a puncture p is a critical point of a local function difference, we have stable and unstable manifolds associated to p. These we denote by W s (p) and W u (p), respectively.
The dimension of a Morse flow tree Γ with positive puncture a and negative punctures b 1 , . . . , b m can be computed from data of the tree, namely from the gradings of its punctures and the numbers e(Γ), s(Γ), and Y 1 (Γ) of end-, switch- and Y 1 -vertices of Γ.
Definition 2.7. A rigid Morse flow tree of Λ is a Morse flow tree of dimension 0 which is transversely cut out from the space of flow trees.
In [Ekh07] it is proven that one can define the differential by counting rigid Morse flow trees of Λ instead of counting rigid pseudo-holomorphic disks. In some situations this gives an easier way to understand A(J 1 (M ), Λ), since this avoids solving the ∂̄-equation, which is a non-linear PDE.
The Chekanov-Eliashberg DGA in a subcritical Weinstein manifold
Let Λ be a Legendrian submanifold of a contact manifold V which is the boundary of a subcritical Weinstein manifold X of dimension 2n, n > 2. Assume that c 1 (X) = 0. In this section we describe the Chekanov-Eliashberg DGA of Λ, A(V, Λ), in terms of sub-DGAs which can be computed from Legendrians in 1-jet spaces. To do this we need some additional assumptions on V and Λ.
3.1. Preliminary assumptions. To simplify notation we will assume that X only has one subcritical handle H k attached, and that this handle is attached along an isotropic sphere Υ ⊂ S 2n−1 = ∂B 2n . This easily generalizes to the case of having several subcritical handles attached along isotropic spheres, provided that no attaching sphere of a subcritical handle passes through any other subcritical handle.
We will assume that there is a Darboux ball B A ⊂ S 2n−1 of radius A containing the attaching region N (Υ) of the handle. This means that we have a contactomorphism φ from (B A , α S 2n−1 ) to a ball D A ⊂ J 1 (R n−1 ), where α S 2n−1 is the standard contact structure on S 2n−1 . Thus we can consider the handle attachment as being performed in D A , where φ is extended by the identity over the handle.
Assuming that V and Λ satisfy the requirements of this lemma we will consider Λ as a subset of D H A from now on, dropping the map φ to simplify notation. We will need some further assumptions on Λ to be able to describe A(D H A , Λ) in terms of sub-DGAs of Legendrians in 1-jet spaces. First, we need to assume that there is an a < A such that the attaching sphere of the handle is contained in D A \ D a and that for ρ 1 , ρ 2 , ρ 3 sufficiently small, which maps Υ to the zero-section of be the core of the handle and let be the cocore. Assume that where Λ sub ⊂ S 2n−2k−1 is a Legendrian submanifold with respect to the standard contact structure. We also assume that Λ ∩ H k + is of the form is a Morse function with exactly one critical point, located at the origin and of index 0. We also assume that Λ∩H k + is contained in a 1-jet neighborhood of the standard Legendrian cylinder in H k + , given by By identifying H k + with H k − using the Liouville flow and then identifying a region of H k − with the attaching region, we see that we might assume that the projection of ψ(Λ ∩ N (Υ)) to T * ρ 1 (S k−1 ) × I ρ 2 coincides with the zero section of T * ρ 1 (S k−1 ) and that the projection of ψ for some ρ 3 > 0. See Section 4. From these assumptions it follows that we can cover Λ by charts given by Legendrians in D a ⊂ J 1 (R n−1 ) and in J 1 (S n−k−1 × D k ). In Section 3.2.1 we will describe an isotopy of Λ allowing us to describe A(D H A , Λ) in terms of subalgebras, where each subalgebra can be computed in one of the 1-jet spaces just described.
3.2. The differential of A(V, Λ). The differential of A(V, Λ) is a priori given by a count of pseudo-holomorphic curves anchored in X as in Section 2.3. However, by similar arguments as in [EN15] and also by the work in [Ekh19] it follows that it is enough to consider pseudo-holomorphic disks in the symplectization of V . In this subsection we will investigate these disks further.
Recall that the Chekanov-Eliashberg algebra A(V, Λ) is generated by the Reeb cords of Λ. By Lemma 3.1 and Lemma 4.2, 4.5, 4.7 it is enough to consider the following Reeb chords of Λ, where we get different cases depending on the index k of the subcritical handle. be the copies of these chords located at Λ sub × {0, 0} ⊂ H k + . Handle chords, k = n − 1: Note that Λ sub is a collection of s points for some integer s . In this case we cannot make use of Lemma 3.1, but we also have to take long chords of Λ sub into account. Therefore, we will have infinitely many chords of Λ sub × {0, 0} ⊂ H k + , labeled by We would like to be able to make a similar partition of the pseudo-holomorphic curves which contribute to the differential. To be able to do this, we isotope Λ in the attaching region, to introduce a high-dimensional counterpart of the dippings from [Sab06].
Recall that we assume Λ to be of the form (3.2) and (3.3) in H k + and N (Υ), respectively. In Section 4 we prove that we have a sub-algebra A(J 1 (R n−k−1 ), Λ sub ) at the minimum q = 0 in the handle in the case when k < n − 1, and a subalgebra generated by the chords in (3.4) and (3.5) and with differential explicitly described in Lemma 4.7 in the case when k = n − 1. However, we might have pseudo-holomorphic disks with positive punctures at diagram chords traveling into the handles. The dipping procedure will help us to get control over these disks.
If k > 1 this gives us a Morse-Bott situation where we for each Reeb chord of Λ sub get one S k−1 -family of Reeb chords for ρ = p 1 and another S k−1 -family for ρ = p 2 . To avoid this situation let g : S k−1 → R be a positive Morse function with one maximum at σ 1 ∈ S k−1 , one minimum at σ 2 ∈ S k−1 and no other critical points. Legendrian isotope Λ ∩ N (Υ) to the Legendrian where χ : [ρ 3 , ρ 3 ] → R is a bump function as in Figure 1. We continue to denote the isotoped Legendrian by Λ.
By choosing the height h of the bump function χ small enough we can ensure that we get exactly four critical points for the function (1 + χ(ρ)g(σ))f (ρ) on S k−1 × [ρ 3 , ρ 3 ], as in Figure 2. That is, we get critical points If k = n − 1 this algebra has generators c 0 , 1 ≤ i < j ≤ s , using similar notation as in the 1 < k < n − 1 case. Again, these Reeb chords are the dipping chords of Λ.
Hence we get generators
, using similar notation as in the k > 1 case. We will assume that the dipping region intersects D a along ρ = 2 , where 2 ∈ (p 1 , p 2 ), so that the critical points m 2 , s 2 are contained in D a , but not the points m 1 , s 1 .
3.2.2.
Gradings. If k < n − 1 we define gradings of the Reeb chords b 1 , . . . , b m of Λ sub ⊂ J 1 (R n−k−1 ) as in Section 2.4. That is, for Λ 1 , . . . , Λ s the connected components of Λ sub we choose marked points q i ⊂ Λ i i = 1, . . . , s and connecting chords c ij from Λ i to Λ j , i = j, as described in that section, together with admissible pathsγ c ij,i 1, and the chords a 1 , . . . , am, , c ij , 1 ≤ i, j ≤ s , p > 0, when k = n − 1, we proceed as follows. Let Λ 1 , . . . , Λ s be the connected components of Λ. For each component Λ j fix a point q j ∈ Λ j ∩ D a so that q j does not project to a singularity under the front projection and so that it does not coincide with a Reeb chord start or end point. Also, for each pair Λ i , Λ j such that there is a Reeb chord between them, pick one such chord c ij ∈ D a as connecting chord. (Note that this is possible since we assume the dipping region to intersect D a .)
Definition 3.2. We say that a path γ ⊂ Λ is handle admissible if γ ∩ D a is admissible and γ has constant projection to
is computed as in Definition 2.3.
Now choose handle admissible paths as follows.
• For each connecting chord c ij choose pathsγ c ij,j ⊂ Λ j from p j to c ij,+ and γ c ij,i ⊂ Λ i from p i to c ij,− . • For each diagram chord a = a 1 , . . . , am with a ± ∈ Λ l± choose capping paths γ a± ⊂ Λ l± from a ± to q l± .
With these choices it is now possible to define gradings of a 1 , . . . , a m as in Definition 2.4.
To define gradings of the handle and dipping chords we use the following results.
Proof. To choose the capping paths, we first pick handle admissible paths as follows.
• For each component Λ j of Λ sub let l be such that Λ j ⊂ Λ l and letγ jl ⊂ Λ l be a path from {m 2 } × {q j } ∈ A d ∩ D a to q l . • For each connecting Reeb chord c ij of Λ sub , let l± be such that Λ i ⊂ Λ l− , Λ j ⊂ Λ l+ , and choose pathsγ il−,con ⊂ Λ l− ,γ jl+,con ⊂ Λ l+ from {m 2 } × {c ij,− } to q l− and from {m 2 } × {c ij,+ } to q l+ , respectively. Let γ 1 * γ 2 be the concatenation of the paths γ 1 and γ 2 . To simplify notation, if γ ⊂ Λ sub is a path we continue to write γ for the copy To define the capping path γ b± for the Reeb chord b = b i [m 2 ], i = 1, . . . , m, assume that b i,± ∈ Λ i± ⊂ Λ l± . We get the following cases, see Figures 4, 5 and 6.
where I l−l+ is given by (2.5) and (2.6) and computed with respect to the connecting chords in Λ. Comparing with Figures 4, 5 and 6 it is clear that the lemma follows.
Figure 3. The choice of admissible paths for Λ sub .
Proof. For c 0 ij and its copies this is similar to the proof of Lemma 3.4. For c p ij , p > 0 this follows from the formula for |c 0 ij | together with Lemma 4.6 and the fact that the That is, in D a ⊂ J 1 (R n−1 ) C n−1 × R z we choose the standard complex structure on C n−1 and then extend it to the whole space by requiring that J(∂ t ) = ∂ z .
In the handle we choose an almost complex structure which is standard in the 1-jet neighborhood of Λ st where we assume that Λ ∩ H k is contained. This means that for J 1 (Λ st ) T * Λ st × R z we assume it to be given as in [Ekh07, Section 4.3] in T * Λ st , mapping the vertical subbundle of T * Λ st to the horizontal subbundle, where these subbundles are defined using some metric connection. Again we extend it to a cylindrical almost complex structure by setting J(∂ t ) = ∂ z . Extend this to a cylindrical almost complex structure in the rest of the handle.
Assuming that Λ st ∩D a ⊂ N (Υ) is contained in a real plane (R n−1 ×{y 0 }×{z 0 })∩D a , the almost complex structure defined in the handle will coincide with the almost complex structure in R×D a in the 1-jet neighborhood of Λ st , assuming this is small enough, and we can interpolate the almost complex structures outside this neighborhood to give a cylindrical almost complex structure defined over the whole of R × D H A .
3.2.4. Description of the differential. We are now ready to state the main result of this paper. Recall that Λ now represents the dipped version of the Legendrian attaching spheres.
Proposition 3.6. Assume that k < n − 1. Then the DGA A(V, Λ) is quasi-isomorphic to a DGA that splits into subalgebras
Proposition 3.7. Assume that k = n − 1. Then the DGA A(V, Λ) is quasi-isomorphic to a DGA that splits into subalgebras A(J 1 (R n−1 ), Λ ∩ D a ), A(J 1 (Λ st ), Λ ∩ H k + ), and the DGA with generators c 0 ij , 1 ≤ i < j ≤ s , c p ij , 1 ≤ i, j ≤ s , p ≥ 1, graded by Lemma 3.5, and with differential given by
Proof of Propositions 3.6 and 3.7. Similar to [EN15] the differential can be given by a count of pseudo-holomorphic disks in R × D H a with boundary on R × Λ. After having introduced the dipping region this count reduces to the following. In Section 4 we prove that we only have to consider disks with negative punctures at handle chords and that we get subalgebras as in the statements above. Dipping disks: These are pseudo-holomorphic disks of Λ having positive punctures. The only way for these disks to leave the dipping region is to enter the handle. If the parameters for the dipping function f are chosen sufficiently small it follows by action reasons that these disks cannot leave a 1-jet neighborhood of Λ st ⊂ H k + and hence we can use the techniques from Section 2.4 to find the pseudo-holomorphic disks. Note that we may identify the whole dipping area with a subset of a 1-jet neighborhood of Λ st ⊂ H k + if the dipping parameters are small enough.
Remark 3.8. It is possible to give a more explicit description of the algebra in the dipping region using the techniques of broken Morse flow trees from [EK08].
It follows that the Legendrian contact homology of Λ in V can be computed using 1-jet space techniques together with the explicit description of the differential in the handle when k = n − 1. It also follows that the coefficients reduce from Z 2 [H 2 (X, Λ)] to Z 2 since no disk passes through a handle.
The sub-DGA in the handle
In this section we prove that we get a sub-DGA of A(D H A , Λ) generated by the Reeb chords in the cocore of the handle. To do this, we will modify the model of the handles from Section 2.2 slightly, to simplify the Reeb dynamics. 4.1. Geometry of a 2n-dimensional symplectic handle of index k. Let a 1 , . . . , a n−k ∈ R be some positive constants that are linearly independent over Q and define The handle still has Liouville vector field which is transverse to the boundary The Liouville vector field induces contact forms α ±δ on H k ± : with Reeb vector field given by NR, where It follows that the differential equation for the Reeb flow is given bẏ Hence we get that the time t Reeb flow Φ t R = (x(t), y(t), p(t), q(t)) is given by Let us now consider some special cases. 4.2. Index 1 handles. In this case the attaching region is given by two disjoint balls of dimension 2n − 1. We describe models for the attaching of the handle H = H 1 along these balls. Let equipped with the contact structure α b = dz + 1 2 (udv − vdu). We will often omit the dimension and only write B ρ . Identify this with a ball in (R 2n−1 , dz − ydx), centered at a = (x, y, z 0 ) via the contact embedding F a : B ρ → R 2n−1 , F a (u, v, z) = (x + u, y + v, z + z 0 + yu + 1 2 uv).
Denote the image F a (B ρ ) the standard contact ball of radius ρ centered at a.
and since the Reeb vector field is transverse to A ± ρ (δ) and G(A ± ρ (δ)), respectively, its flow can be used to extend G to a contactomorphism from a neighborhood of A ± ρ (δ) to B ρ . Denote this neighborhood by B ± ρ (δ) ⊂ H − , and note that this neighborhood is identified with a neighborhood of a ± via the composition a 1x 1 , 0, . . . , 2(δ 2 +q 2 ) a n−1x n−1 , 0, 0, q ∈ Λ st then T (a) = T (q) and T (q) decreases when q increases (e −T (q) increases with q).
When q 2 = δ 2 this equation has solution close to u = 3/2. Moreover, assuming that u > 1 we might rewrite this as and since the function u+1 u 3 −1 is strictly increasing to ∞ as u decreases from 2 to 1 , the second statement follows.
This means that the image of Λ st in H − under the negative Liouville flow is given by e − 1 2 T (q) 2(δ 2 + q 2 ) a 1x 1 , 0, . . . , e − 1 2 T (q) 2(δ 2 + q 2 ) a n−1x n−1 , 0, 0, e T (q) q , and if we view this in B ρ using the map G we get that Λ st = e − 1 2 T (q) 2(δ 2 + q 2 )x 1 , 0, . . . , e − 1 2 T (q) 2(δ 2 + q 2 )x n−1 , 0, 0, 0 ⊂ B ρ , that is, this is nothing but a cone on S n−2 with radius decreasing into B ρ . Let us examine this in more detail. We have that which is a contact submanifold of B 2n−1 ρ . Moreover, after scaling if necessarily, we get that which then is a Legendrian in S 2n−3 , which we denote by Λ sub . Assume that it does not intersect any coordinate subspaces {(u i , v i ) = 0} for i = 1, . . . , n − 1. Further assume that Λ ∩ B ρ is a cone on Λ sub , meaning that if then there is some positive parameter r which is strictly increasing with the radius of rṽ, 0). Hence 1ũ1 , a 1ṽ1 , . . . , a n−1ũn−1 , a n−1ṽn−1 , 0, 0) ⊂ H − and similar to the proof of Lemma 4.1 we see that we might assume that Λ is a cone on Λ sub in the handle, that is Λ ∩ H + = (ra 1ũ1 ,ra 1ṽ1 , . . . ,ra n−1ũn−1 ,ra n−1ṽn−1 , 0, q), wherer : D 1 → R ≥0 is a Morse function with exactly one critical point, located at q = 0 and of index 0. .
Let ν ≪ δ and let D ν be a Darboux ball contained in {x 2 + y 2 + p 2 + q 2 ≤ ν}.
Proof. The first statement follows from Lemma 4.3, and since Λ is conic in H + with a minimum at q = 0 it follows that A(H + , Λ ∩ H + ) is quasi-isomorphic to A (E δ (a 1 , . . . , a n−1 ), G −1 (Λ sub )) with a degree shift given as in Lemma 3.4, which in turn is quasi-isomorphic to A(J 1 (R n−2 ), Λ sub ) by Lemma 3.1.
4.3. Index k handles, k ∈ {2, . . . , n − 1}. This is similar to the case of index 1 handles, so we just describe the main differences.
To describe the attaching map in the case when n > 2, recall the contact identification from Section 3.1. Let ρ = (ρ 1 , ρ 2 , ρ 3 ) and let (4.14) Then G * α N = α −δ | Aρ(δ) and since the Reeb vector fields are transverse to A ρ (δ) and G(A ρ (δ)), respectively, we can extend G to a contactomorphism from a neighborhood of A ρ (δ) in H − to a neighborhood of G(A ρ (δ)). We now consider two different cases, k < n − 1 and k = n − 1.
4.3.1. k < n − 1. As in the case of index 1 handles we might assume that Λ is of the form (4.15) Λ ∩ H + = (r(q)G −1 (Λ sub ), 0, q) Λ sub × D k ⊂ H + where Λ sub ⊂ S 2n−2k−1 is a Legendrian submanifold andr : D k → R ≥0 is a Morse function with exactly one critical point, located at q = 0 and of index 0. Moreover, we assume that Λ ∩ H + is contained in a small 1-jet neighborhood of the perturbed standard Legendrian cylinder in H Proof. This is similar to the case of index 1 handles.
4.3.2.
Index n − 1 handles. First of all let us assume that a 1 = 1 and let H = H n+1 to simplify notation. From the Reeb vector field formulas (4.4) -(4.7) we see that there is exactly one geometric Reeb orbit γ in H + , given by γ = {p = q = 0, x 2 1 + y 2 1 = 2δ 2 }, and this orbit intersects in the point x 1 = √ 2δ, y 1 = p = q = 0. Pick a trivialization of the contact structure ker α + given by (v 1 , iv 1 , . . . , v n−1 , iv n−1 ), where R(j) is a linear combination of vectors ∂ q i , ∂ p i with coefficients given by constants times exactly one of the coordinate functions q l , p l , i, l = j. This trivialization corresponds to the trivialization of the contact planes in D a ⊂ J 1 (R n−1 ) C n−1 ×R induced by the Lagrangian projection form the choice of trivialization ∂ t 1 , i∂ t 1 , . . . , Proof. First we compute the Conley-Zehnder index with respect to the trivialization (∂ p 1 , ∂ q 1 , . . . , ∂ q n−1 , ∂ p n−1 ). Since the linearized flow is hyperbolic in these directions we get that the index equals 0. Next we need to calculate the Maslov index of the path of matrices that express (∂ p 1 , ∂ q 1 , . . . , ∂ q n−1 , ∂ p n−1 ) in the basis (v 1 , iv 1 , . . . , v n−1 , iv n−1 ). We notice that After renormalizing we might assume that x 2 1 + y 2 1 = 1, that γ has period 2π and that we start at the point x 1 = 1, y 1 = 0. Then we get Since the matrix has crossings only when t = π for t ∈ [0, 2π], anḋ we get that the crossing form has signature −2 when restricted the (∂ p j , ∂ q j )-plane.
Since the problem splits into these n − 1 planes we get a total index of −2(n − 1) for each iterate of γ.
Then the Reeb chords from Λ i to Λ j are given by a) i < j : For each integer w ≥ 0 there is a Reeb chord c w ij of length 2πwδ +(j −i) , b) i ≥ j : For each integer w > 0 there is a Reeb chord c w ij of length 2πwδ −(i−j) . In all cases the Reeb chords follow the orbit γ. where p ≥ 2, δ ij is the Kronecker delta and where we extend it to the rest of the algebra by the Leibniz rule.
Proof. That the Reeb chords c 0 ij , 1 ≤ i < j ≤ s , c p ij , 1 ≤ i, j ≤ s , p ≥ 1 generate a sub-DGA follows similarly to the proof of Lemma 4.3. The statement about the grading follows from our choice of capping paths in Section 3.2.2 together with Lemma 4.6.
The statement about the differential follows by similar arguments as in the proof of [EN15, Lemma 5.17], which states that the only moduli spaces of rigid pseudoholomorphic disks are of the form M(c m ij ; c m 1 lj , c m 2 il ) with m = m 1 + m 2 , and that these moduli spaces consist of one point each, up to translation and reparametrization.
5. The singular homology of the free loop space of CP 2
In this section we give a Weinstein handle decomposition of T * CP 2 and compute the Chekanov-Eliashberg DGA of the Legendrian attaching sphere. By the results of [AS06, Vit18, SW06] and [BEE12] this gives a description of the singular homology of the free loop space of CP 2 . 5.1. Weinstein handle decomposition of T * CP 2 . Recall that CP 2 is obtained from B 4 by attaching a 2-handle along a knot Υ ⊂ S 3 with framing 1, and then attaching a 4-handle along the boundary of B 4 ∪Υ H 2 . Thus, to give a Weinstein handle decomposition of T * CP 2 we should attach one subcritical handle to ∂B 8 along an isotropic S 1 with the correct framing and then attach the critical handle along a Legendrian S 3 which goes through the subcritical handle along the standard Legendrian cylinder.
Another way of seeing this is to consider the Legendriañ Λ = {x 2 1 + x 2 2 + x 2 3 + x 2 4 = 1} ⊂ S 7 = {z = x + iy ∈ C 4 ; |z| = 1} with the isotropic attaching sphere Υ of the subcritical handle as being a subset ofΛ, and then let Λ be the Legendrian submanifold we get by replacing a neighborhood of Υ with the standard Legendrian cylinder in H 2 . If we viewΛ as a Legendrian submanifold of J 1 (R 3 ) ∂B 8 \ {pt}, then it can be given by the 1-jet lift of the locally defined functions f ± : R 3 → R, f ± (u 1 , u 2 , u 3 ) = f ± (|u|) withf ± as in Figure 7. (Recall that the 1-jet lift of f : R n → R is given by {(u, df (u), f (u)); u ∈ R n } ⊂ J 1 (R n ).)f We see thatΛ has exactly one Reeb chord a of grading 3, located at the origin, and that there is an S 2 -family of Morse flow trees going from a to the cusp edge S 2 .
IdentifyingΛ \ {(0, 0, 1)} with R 3 via stereographic projection (mapping (0, 0, −1) to the origin) we get that the lifts of the flow trees ofΛ go radially from ∞ to the origin. Assume that the unit sphere S 2 ⊂ R 3 represents the cusp edge singularity ofΛ. Now we describe our choice of attaching sphere Υ for the subcritical handle in this picture. To that end, pick a point p ∈ S 2 ⊂ R 3 . Then a neighborhood of p in R 3 can be identified with J 1 (I) T * I × R where I ⊂ S 2 is some interval and T * I D is a disk contained in S 2 , and the R-direction corresponds to the radial direction in R 3 . Let Υ be given by the standard Legendrian unknot as in Figure 8.
It follows that the lifted flow trees of Λ intersect Υ either 0, 1 or 2 times, in 2-dimensional, 1-dimensional and 0-dimensional families, respectively. Moreover, there is exactly one tree that intersects Υ in 2 points, namely the tree which goes through the point p. This tree gives rise to a rigid flow tree Γ of Λ, and this will be the only rigid flow tree. Since Λ coincides with Λ st in the subcritical handle we have that Λ sub | 2020-07-15T01:01:33.010Z | 2020-07-14T00:00:00.000 | {
"year": 2020,
"sha1": "2c6ee7c0d0731f5a00aa1a79880515462ae78333",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2c6ee7c0d0731f5a00aa1a79880515462ae78333",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
250048673 | pes2o/s2orc | v3-fos-license | The external photoevaporation of planet-forming discs
Planet-forming disc evolution is not independent of the star formation and feedback process in giant molecular clouds. In particular, OB stars emit UV radiation that heats and disperses discs in a process called 'external photoevaporation'. This process is understood to be the dominant environmental influence acting on planet-forming discs in typical star forming regions. Our best studied discs are nearby, in sparse stellar groups where external photoevaporation is less effective. However the majority of discs are expected to reside in much stronger UV environments. Understanding external photoevaporation is therefore key to understanding how most discs evolve, and hence how most planets form. Here we review our theoretical and observational understanding of external photoevaporation. We also lay out key developments for the future to address existing unknowns and establish the full role of external photoevaporation in the disc evolution and planet formation process.
Introduction
Planets form in the discs of dust and gas known as 'planet-forming' or 'protoplanetary' discs. These discs are found around most young stellar objects (YSOs) in star forming regions up to ages of ∼ 3−10 Myr (e.g. Haisch et al., 2001; Ribas et al., 2015). Since 2015, the Atacama Large Millimetre/sub-Millimetre Array (ALMA) has instigated a revolution in our understanding of these objects. In particular, the high resolution sub-mm continuum images, tracing the dust content of protoplanetary discs, not only offer an insight into the dust masses and radii (e.g. Ansdell et al., 2016, 2017; Eisner et al., 2018; van Terwisga et al., 2020; Ansdell et al., 2020), but also a wealth of internal sub-structures which serve as a window into planet formation processes (e.g. ALMA Partnership et al., 2015; Andrews et al., 2018; Benisty et al., 2021). The nearby (distance D ≲ 150 pc) discs in low-mass star forming regions such as Taurus and Lupus can be sufficiently resolved to expose rings, gaps and spirals in dust, as well as kinematic signatures in gas (Teague et al., 2018; Pinte et al., 2019; Teague et al., 2019; Pinte et al., 2022; Casassus et al., 2022) which may either be signatures of planets or the processes that govern their formation.
Prior to ALMA, some of the first images of discs were obtained around stars in the Orion Nebula cluster (ONC) at a distance of D ∼ 400 pc away, much further than the famous HL Tau (Sargent and Beckwith, 1987;ALMA Partnership et al., 2015). These discs exhibit non-thermal emission in the radio (Churchwell et al., 1987;Felli et al., 1993a,b;Forbrich et al., 2021), with resolved ionisation fronts and disc silhouettes seen at optical wavelengths (O'dell et al., 1993;O'dell and Wen, 1994;Prosser et al., 1994;McCaughrean and O'dell, 1996). Identified as 'solar-system sized condensations' by Churchwell et al. (1987), they were estimated to be undergoing mass loss at a rate ofṀ ∼ 10 −6 M yr −1 . O'dell et al. (1993) confirmed them to be irradiated protoplanetary discs, dubbing them with the contraction 'proplyds'.
While this term has now been dropped for the broader class of protoplanetary disc, it is still used in relation to discs with a bright, resolved ionisation front owing to the external UV radiation field. These objects are of great importance in unravelling what now appears to be an important process in understanding planet formation: external photoevaporation.
The process of external photoevaporation is distinguished from the internal photoevaporation (or more commonly just 'photoevaporation') of protoplanetary discs via the source that is responsible for driving the outflows. Both processes involve heating of the gas by photons, leading to thermal winds that deplete the discs. However, internal photoevaporation is driven by some combination of FUV (far-ultraviolet), EUV (extreme-ultraviolet) and X-ray photons originating from the central host star (e.g. Ercolano et al., 2008; Picogna et al., 2019; Sellek et al., 2022), depleting the disc from inside-out (e.g. Clarke et al., 2001; Owen et al., 2010, 2012; Jennings et al., 2018; Coleman and Haworth, 2022). External photoevaporation, by contrast, is driven by an OB star external to the host star-disc system. The high FUV and EUV luminosities of OB stars, combined with the heating of outer disc material that experiences weaker gravity, can result in extremely vigorous outside-in depletion. Indeed, as inferred by Churchwell et al. (1987), the brightest proplyds in the ONC exhibit wind-driven mass loss rates of up to Ṁ ext ∼ 10 −6 M yr −1 (see also e.g. Henney and Arthur, 1998; Henney and O'Dell, 1999).
At present, external photoevaporation remains an under-studied component of the planet formation puzzle. It has long been thought that the majority of star formation occurs in clusters or associations which include OB stars (Miller and Scalo, 1978) and that mass loss due to external photoevaporation can be more rapid than theṀ acc/int ∼ 10 −8 M yr −1 typical of stellar accretion (e.g. Manara et al., 2012) and internal photoevaporation (e.g. Owen et al., 2010). Thus it is reasonable to ask why external photoevaporation has, until now, not featured heavily in models for planet formation. While there is no one answer to this question, two important historical considerations are compelling in this context: -The first is a selection bias. Low mass star forming regions (SFRs) are more numerous, so they are also the closest to us (although nearby star formation is probably also affected by the expansion of a local supernova bubble -Zucker et al., 2022). Surveys with, for example, ALMA, have rightly prioritised bright and nearby protoplanetary discs in these low mass SFRs. However, when accounting for the relative number of stars, these regions are not typical birth environments for stars and their planetary systems (see Section 5.2). This has motivated recent studies of more typical birthplaces that experience strong external irradiation from neighbouring massive stars (e.g. Ansdell et al., 2017;Eisner et al., 2018;van Terwisga et al., 2019). -The second is the well-known 'proplyd lifetime problem' in the ONC (e.g. Störzer and Hollenbach, 1999). The presence or absence of a near infrared (NIR) excess, indicating inner material, was one of the earliest available probes for studying disc physics. The high fraction (∼ 80 percent) of stars with a NIR excess in the ONC was an apparent paradox that appeared to undermine the efficacy of external photoevaporation in influencing disc populations. We discuss this problem in Section 5.4.
Given the apparently high fraction of planet forming discs that undergo at least some degree of external photoevaporation, this process has particular relevance for modern efforts to connect observed disc populations to exoplanets (e.g. Mordasini, 2018). Now is therefore an opportune time to take stock of the findings from the past three decades.
In this review, we first summarise the theory of external photoevaporation in Section 2, including both analytic estimates that provide an intuition for the problem as well as state-of-the-art simulations and microphysics. We address the observational signatures of disc winds in Section 3, including inferences of individual mass loss rates. We consider the role of external photoevaporation for disc evolution and planet formation in Section 4, including evidence from disc surveys. In Section 5 we contextualise these studies in terms of the physics and demographics of star forming environments. Finally, we summarise the current understandings and open questions in Section 6.
2 Theory of externally driven disc winds
2.1 The most basic picture
We begin by reviewing our theoretical understanding of external photoevaporation. At the very basic level one can determine whether material will be unbound from a potential by comparing the mean thermal velocity of particles (i.e. the sound speed) in the gas with the local escape velocity. In an isothermal system, equating the sound speed and escape velocity yields the gravitational radius, beyond which material is unbound,
R g = GM * /c 2 s , (1)
where M * is the point source potential mass, and c s is the sound speed. Therefore if an isothermal disc were to extend beyond R g , then material would be lost in a wind.
Fig. 1: A schematic of the basic picture of external photoevaporation in terms of the gravitational radius. The gravitational radius is that beyond which mean thermal motions (propagating at the sound speed) exceed the escape velocity and are unbound. In this basic picture, a disc smaller than the gravitational radius will hence not lose mass. External UV irradiation heats the disc, leading to faster mean thermal motions (a higher sound speed), driving the gravitational radius to smaller radii and unbinding material in the disc.
Fig. 2: The flow structure of an externally irradiated disc, following Johnstone et al. (1998). In the EUV driven wind, the flow from the disc edge at radius R d travels at a subsonic velocity through the thin photodissociation region (PDR) of thickness xR d with x ≲ 1.5 before reaching the ionisation front (IF). Mass loss is therefore determined by the thermal pressure at the IF. For x ≳ 1.5, the wind is launched from the disc edge at a supersonic velocity, producing a shock front that is reached before the IF. In this case, the mass loss rate is determined by the thermal conditions in the PDR.
Consider now an isothermal disc that is entirely interior to the
gravitational radius and is hence fully bound (upper panel of Figure 1). If this disc is then externally irradiated, the temperature and hence the sound speed increase, driving down the gravitational radius to smaller values, potentially moving interior to the edge of the disc and unbinding its outer parts (lower panel of Figure 1). The details of external photoevaporation do get substantially more complicated than the above picture, for example with pressure gradients helping to launch winds interior to R g . However, this picture provides a neat basic insight into how external photoevaporation can instigate mass loss in otherwise bound circumstellar planet-forming discs.
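As a rough quantitative illustration of this picture, the sketch below (Python) evaluates the gravitational radius for sound speeds representative of FUV- and EUV-heated gas and checks whether a disc of a given outer radius would start to lose mass. The sound speeds and disc radius are illustrative values taken from the discussion in this section, not fitted results.

```python
import numpy as np

G = 6.674e-8          # cm^3 g^-1 s^-2
M_SUN = 1.989e33      # g
AU = 1.496e13         # cm

def gravitational_radius(m_star_msun, c_s_kms):
    """R_g = G M_* / c_s^2, returned in au."""
    c_s = c_s_kms * 1e5  # km/s -> cm/s
    return G * m_star_msun * M_SUN / c_s**2 / AU

# Sound speeds quoted in the text: ~3 km/s in the FUV-heated PDR,
# ~10 km/s in the EUV-ionised gas.
for label, c_s in [("FUV-heated (PDR)", 3.0), ("EUV-ionised", 10.0)]:
    print(f"{label}: R_g ~ {gravitational_radius(1.0, c_s):.0f} au for a 1 Msun star")

# A 100 au disc comfortably exceeds the EUV-heated R_g (~9 au) but is only
# comparable to the FUV-heated R_g (~100 au), which is why pressure
# gradients (allowing launching down to ~0.1-0.2 R_g) matter.
r_d = 100.0
print("disc extends beyond EUV R_g:", r_d > gravitational_radius(1.0, 10.0))
```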
Flux units
Before considering the physics of EUV and FUV driven winds, a note on the units canonically used to measure UV fluxes is necessary. While the ionising flux is usually measured in photon counts per square centimetre per second, FUV flux is normally expressed in terms of the Habing unit, written G 0 (Habing, 1968). This is the flux integral over 912 − 2400Å, normalized to the value in the solar neighbourhood, i.e.
G 0 = ∫ 912Å 2400Å F λ dλ / 1.6 × 10 −3 erg s −1 cm −2 . (2)
Another measure of the FUV field strength is the Draine unit (Draine, 1978), which is a factor 1.71 larger than the Habing unit. Hence 10 3 G 0 ≈ 585 Draines. We highlight both because two similar units that vary by a factor of order unity can and do lead to confusion. For reference, the UV environments that discs are exposed to in star forming regions range from ∼ 1 (i.e. embedded discs) to ∼ 10 7 G 0 (discussed further in Section 5). For the sake of clarity, we will hereafter consider low FUV environments to be those with F FUV ≲ 100 G 0 , intermediate environments greater than this up to F FUV ≈ 5000 G 0 , and high FUV environments for any FUV fluxes F FUV ≳ 5000 G 0 .
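The small conversions above are easy to get wrong in practice, so the following snippet simply encodes the Habing/Draine conversion and the environment classification adopted in this review (the 100 and 5000 G 0 thresholds are the ones defined in the preceding paragraph, not universal standards).

```python
HABING_CGS = 1.6e-3       # erg s^-1 cm^-2, the Habing (1968) normalisation
DRAINE_PER_HABING = 1.71  # 1 Draine unit = 1.71 Habing units

def habing_to_draine(g0):
    return g0 / DRAINE_PER_HABING

def classify_fuv_environment(g0):
    """Classification thresholds as adopted in this review."""
    if g0 < 100.0:
        return "low"
    elif g0 < 5000.0:
        return "intermediate"
    return "high"

for g0 in (1.0, 1e3, 3e4, 1e7):
    print(f"{g0:10.0f} G0 = {habing_to_draine(g0):10.0f} Draine -> "
          f"{classify_fuv_environment(g0)} FUV environment")
```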
Flow geometry
The basic physical picture of an externally photoevaporating protoplanetary disc was laid out by Johnstone et al. (1998), and has remained largely the same since. We summarise that picture here because it is useful for what follows, but refer interested readers to the more detailed discussion in that original work as well as that of Störzer and Hollenbach (1999).
The heating mechanism that launches the thermal wind may be driven by ionising EUV photons (energies hν > 13.6 eV), heating gas to temperatures T ∼ 10 4 K, or photodissociation FUV photons (6 eV < hν < 13.6 eV), yielding temperatures of T ∼ 100−1000 K (Tielens and Hollenbach, 1985;Hollenbach and Tielens, 1997). The EUV photons penetrate down to the ionisation front, at radius R IF = (1 + x)R d from the disc-hosting star, with disc outer radius R d . The atomic gas outside of R IF is optically thin for the FUV photons, which penetrate down to R d producing a neutral PDR of thickness xR d . Whichever photons drive the photoevaporative wind, the flow geometry far from the disc surface is approximately spherically symmetric, since it is accelerated by the radial pressure gradient (cf. the Parker wind). We will start with this approximation of spherical geometry, which guides the following analytic intuition.
EUV driven winds
For an EUV driven wind, the gravitational radius is R g ∼ 10 au for a solar mass star (and scales linearly with stellar mass), such that we are generally in a regime in the outer disc where R d > R g , and a wind can be launched. If the EUV flux is sufficiently strong, the ionisation front (IF) sits close to the disc surface making the PDR thin (x ≲ 1.5; see Section 2.2.4). In this case, the basic geometry of the system is shown in Figure 2a. The thermal pressure at the disc surface is determined by the ionisation rate, with the flow proceeding subsonically through the PDR. If we assume isothermal conditions in the PDR, then the density n I is constant: n I = n 0 = N D /xR d , where N D is the column density of the PDR, which is the column density required to produce an optical depth τ FUV ∼ 1. This column density is dominated by the base of the wind N D ≈ n 0 R d , but is dependent on the microphysics and dust content (see Section 2.3). For our purposes, we will simply adopt N D ∼ 10 21 cm −2 (although see the direct calculations of Störzer and Hollenbach, 1999, for example).
Since the density in the PDR is constant, the velocity in the flow v I ∝ r −2 to maintain a constant mass flux. In order to conserve mass and momentum flux, the velocity at the ionisation front must be v IF = c 2 s,I /2c s,II ∼ 0.5 km s −1 , where c s,I ≈ 3 km s −1 is the sound speed in the PDR and c s,II ≈ 10 km s −1 beyond the IF. We can write the mass loss rate Ṁ EUV (equation 3), where F is a geometric factor and m I is the mean molecular mass in the PDR, and as before x is the relative thickness of the PDR with respect to the disc radius R d . Since n I ∝ 1/x this mass loss rate appears to diverge in the limit of a thin PDR. However, x must satisfy the recombination condition (equation 4), where Φ is the EUV photon count of the ionising source at distance d, f r is the fraction of photons unattenuated by the interstellar medium (ISM), and α B = 2.6 × 10 −13 cm 3 s −1 is the recombination coefficient for hydrogen with temperature 10 4 K (e.g. Osterbrock, 1989). Substituting R IF = (1 + x)R d into equation 4 and adopting typical values, we can write a defining equation for x that can be solved numerically. However, in the limit of x ≪ 1 we can also simply estimate the mass loss rate (equation 5), which is the solution for an ionised globule with no PDR physics (Bertoldi and McKee, 1990). Here, ε EUV ∼ 1 is a correction factor that absorbs uncertainties in the PDR physics. We notice that the EUV driven wind is super-linearly dependent on R d , but only scales with the square root of the EUV photon count.
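To make the stated scalings concrete, the sketch below encodes only the dependence described above, Ṁ EUV ∝ (Φ/d 2 ) 1/2 R d 3/2 . The absolute normalisation (mdot_ref, and the reference Φ, d and R d at which it applies) is an assumed, purely illustrative value and not a published fit; only the relative scalings are meaningful here.

```python
import numpy as np

def mdot_euv_relative(phi, d_pc, r_d_au,
                      phi_ref=1e49, d_ref_pc=0.1, r_ref_au=50.0,
                      mdot_ref=1e-7):
    """
    Scaling of the EUV-driven mass loss rate described in the text:
        Mdot_EUV  propto  sqrt(Phi / d^2) * R_d^(3/2).
    `mdot_ref` (Msun/yr at the reference Phi, d, R_d) is an assumed,
    illustrative normalisation, NOT a quoted result.
    """
    scale = (np.sqrt((phi / phi_ref) * (d_ref_pc / d_pc) ** 2)
             * (r_d_au / r_ref_au) ** 1.5)
    return mdot_ref * scale

# Doubling the disc radius raises Mdot by 2^1.5 ~ 2.8x,
# while doubling the ionising output only gives sqrt(2) ~ 1.4x.
print(mdot_euv_relative(1e49, 0.1, 100.0) / mdot_euv_relative(1e49, 0.1, 50.0))
print(mdot_euv_relative(2e49, 0.1, 50.0) / mdot_euv_relative(1e49, 0.1, 50.0))
```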
FUV driven winds
We will here proceed under the assumption that the outer radius and FUV flux are sufficient to produce a supersonic, FUV heated wind. This means that R d ≳ R g , where R g is the gravitational radius in the PDR. In this case, where the wind mass loss is determined by FUV heating, the neutral wind must launch at (constant) supersonic velocity v I = v 0 > c s,I , where v 0 is the launching velocity from the disc surface with number density n 0 , while v I is the wind velocity in the PDR as before. To conserve mass, the density in the PDR drops as n I ∝ r −2 . The wind travels faster than at the IF (v IF = c 2 s,I /2c s,II , as before) and therefore must eventually meet a shock front at radius R shock . Assuming this shock is isothermal, the density n I increases by a factor ∼ M 2 , where M = v 0 /c s,I is the Mach number. If the region between the shock and ionisation fronts is isothermal, then n I is constant and v I ∝ r −2 to conserve mass. This geometry is shown in Figure 2b.
Solving mass and momentum conservation requirements for the flow, we have an expression for R shock (equation 6), which immediately puts a minimum distance below which EUV mass loss dominates over FUV. If R shock < R d then the shock front is inside the disc, the flow is subsonic at the base, and we are back in the EUV driven wind regime. Given that v 0 /2c s,II ∼ 0.4−0.6 (with v 0 ∼ 3−6 km s −1 ), then we require R IF ≳ 2.5R d for an FUV driven wind to be launched. Coupled with equation 4 this gives the minimum distance required for the launching of FUV driven winds. Conversely, a maximum distance exists from the requirement that the gas can be sufficiently heated to escape the host star, meaning that FUV driven winds only occur at intermediate separations from a UV source.
Under the assumption that the FUV flux is sufficient to launch a wind and R IF ≳ 2.5R d , then the overall mass loss in the flow does not care about what is going on outside of the base of the FUV-launched wind. The mass loss rate in the FUV dominated case is then given by equation 7, where F is the geometric correction and m I is the mean molecular mass in the PDR as before. All of the difficult physics is contained within a convenient correction factor ε FUV . In reality, this expression is only helpful in the limit of a very extended disc, and computing the mass loss rate in general requires a more detailed treatment that we will discuss in Section 2.3. Nonetheless, we gain some insight from the estimate from equation 7. First, we see that the mass loss rate is not dependent on the FUV flux F FUV . In reality, there is some dependence due to the increased temperature in the PDR with increasing F FUV , but this dependence is weak for F FUV ≳ 10 4 G 0 (Tielens and Hollenbach, 1985). From equation 5, we also see that the mass loss rate Ṁ FUV scales less steeply with R d than Ṁ EUV does. This means that once a disc has been sufficiently truncated by these external winds, the FUV dictates the mass loss rate. Since the time-scale for this depletion can be short (see Section 4.1), we expect FUV flux to dominate mass loss over the disc lifetime for reasonable EUV fluxes.
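The order of magnitude implied by this estimate can be sketched as follows. Here we assume the simple form Ṁ FUV ≈ 4π F ε FUV N D m I c s,I R d , i.e. the mass flux through the solid angle subtended by the disc edge with the wind-base column fixed at N D ; this assumed form, and all the numbers below, are our illustrative choices rather than quoted values.

```python
import numpy as np

M_H = 1.67e-24    # g
M_SUN = 1.989e33  # g
AU = 1.496e13     # cm
YR = 3.156e7      # s

def mdot_fuv_estimate(r_d_au, n_d=1e21, mu=1.3, c_s_kms=3.0,
                      f_geom=0.3, eps_fuv=1.0):
    """
    Order-of-magnitude FUV-driven mass loss rate, assuming
        Mdot ~ 4 pi F eps_FUV N_D m_I c_s,I R_d       [returned in Msun/yr]
    with N_D ~ 1e21 cm^-2 the wind-base column, m_I = mu * m_H the mean
    particle mass in the PDR and c_s,I ~ 3 km/s its sound speed.
    f_geom (the factor F) and eps_fuv absorb geometry and PDR physics.
    """
    mdot = (4.0 * np.pi * f_geom * eps_fuv * n_d * mu * M_H
            * c_s_kms * 1e5 * r_d_au * AU)            # g/s
    return mdot * YR / M_SUN

# Note the linear dependence on R_d and the absence of any explicit
# dependence on the FUV flux, as discussed in the text.
for r_d in (50.0, 100.0, 200.0):
    print(f"R_d = {r_d:5.0f} au  ->  Mdot ~ {mdot_fuv_estimate(r_d):.1e} Msun/yr")
```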
While this picture is useful to gain some insight into the physics of external photoevaporation, accurately computing the mass loss rate in the wind requires more detailed numerical modelling of the PDR physics. We consider efforts in this direction to date as follows.
Microphysics of external photoevaporation
One of the biggest challenges surrounding external photoevaporation is that it depends upon a wide array of complicated microphysics. The wind is launched primarily by the FUV radiation field and determining the temperature in this launching region, which is critical, requires solving photodissociation region (PDR) microphysics which in itself can consist of many hundreds of species and reactions and complicated thermal processes (e.g. Tielens and Hollenbach, 1985;Hollenbach and Tielens, 1997). In particular, as we will discuss below, the line cooling is difficult to estimate in 3D, meaning most PDR codes are limited to 1D (e.g. Röllig et al., 2007). The dust and PAH abundance in externally driven wind also play key roles in determining the mass loss rate, but may differ substantially from the abundances in the ISM (e.g. Vicente et al., 2013;Facchini et al., 2016). In addition to the complicated PDR-dynamics, EUV photons establish an ionisation front downstream in the wind which affects the observational characteristics. Here we introduce some of the key aspects of the microphysics of external photoevaporation in more detail.
The theory of dust grain entrainment in external photoevaporative winds
We begin by considering the dust microphysics of external photoevaporation. First it is necessary to provide some context by discussing briefly how dust evolves in the disc itself. Canonically, in the ISM the dust-to-gas mass ratio is 10 −2 and grains typically follow a size distribution of the form n(a) ∝ a −q (Mathis et al., 1977), with grain sizes spanning a s ∼ 10 −3 − 1 µm (Weingartner and Draine, 2001) and q ≈ 3.5. In protoplanetary discs the dust grains grow to larger sizes, which eventually (when the Stokes number is of order unity) dynamically decouples them from the gas, leading to radial drift inwards to the inner disc. This growth proceeds more quickly in the inner disc (e.g. Birnstiel et al., 2012) and so there is a growth/drift front that proceeds from the inner disc outwards. It is not yet clear how satisfactory our basic models of this process are, particularly in terms of the timescale on which it operates, since if left unhindered by pressure bumps in the disc it quickly results in most of the larger drifting dust being deposited onto the central star (see e.g. Birnstiel et al., 2012; Andrews, 2015; Sellek et al., 2020a). However, for our purposes the key point is that the abundance of smaller (∼ µm) grains in the disc ends up depleted relative to the ISM due to grain growth. The nature of dust in the external photoevaporative wind is important for three key reasons:
1. The dust in the wind sets the extinction in the wind and hence has a significant impact on the mass loss rate.
2. The extraction of dust in the wind could have implications for the mass reservoir in solids for terrestrial planet formation and/or the cores of gas giants.
3. The entrainment of dust in winds could provide observational diagnostics of the external photoevaporation process (we will discuss this further in section 3).
The key questions are therefore what size, and how much, dust is entrained in an external photoevaporative wind. This problem was addressed in the semi-analytic work of Facchini et al. (2016). They solved the flow structure semi-analytically (we discuss semi-analytic modelling of external photoevaporative winds further in section 2.4) and calculated the maximum entrained grain size. The efficiency of dust entrainment in the wind is dependent on the balance of the Epstein (1924) drag exerted on a dust grain of internal density ρ s by the outflowing gas of velocity v th versus the gravity from the host star. This balance sets the condition for a dust grain to be lost to external photoevaporation, where 4πF is the solid angle subtended by the wind (see Adams et al., 2004); a sketch of the resulting maximum entrained grain size is given below. The main outcome of the above is that only small grains are entrained in the wind and the mean cross section is reduced. Therefore, when grain growth proceeds to the disc outer edge, the dust-to-gas mass ratio, mean cross section, and hence extinction in the wind drop substantially. This makes external photoevaporation more effective than previously considered when the dust in the wind was treated as ISM-like. This lower cross section in the wind is now accounted for in numerical models of external photoevaporation, assuming some constant low value (e.g. the FRIED grid of mass loss rates; Haworth et al., 2018a, discussed more in Section 2.5). However, what is still missed in models is that the cross section in the wind is actually a function of the mass loss rate and so needs to be solved iteratively with the dynamics.
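The following sketch works through the drag-versus-gravity argument above. It balances Epstein drag against stellar gravity and uses ρ gas v wind = Ṁ/(4πF r 2 ), which makes the radial dependence cancel. This is our schematic reconstruction of the entrainment condition (cf. Facchini et al. 2016); order-unity prefactors are not tracked and the default parameter values are illustrative.

```python
import numpy as np

G = 6.674e-8      # cm^3 g^-1 s^-2
M_SUN = 1.989e33  # g
YR = 3.156e7      # s

def max_entrained_grain_size(mdot_msun_yr, m_star_msun,
                             v_th_kms=3.0, rho_grain=1.0, f_geom=0.3):
    """
    Balance Epstein drag, F_drag ~ (4/3) pi a^2 rho_gas v_th v_wind,
    against stellar gravity on a grain of internal density rho_grain.
    With rho_gas * v_wind = Mdot / (4 pi F r^2) the r-dependence cancels,
    giving approximately
        a_max ~ v_th * Mdot / (4 pi F G M_* rho_grain).
    Returned in micron; prefactors of order unity are ignored.
    """
    mdot = mdot_msun_yr * M_SUN / YR            # g/s
    a_max = (v_th_kms * 1e5 * mdot
             / (4.0 * np.pi * f_geom * G * m_star_msun * M_SUN * rho_grain))
    return a_max * 1e4                          # cm -> micron

# Only small (micron-scale) grains are entrained at typical mass loss rates.
for mdot in (1e-9, 1e-8, 1e-7):
    print(f"Mdot = {mdot:.0e} Msun/yr -> a_max ~ "
          f"{max_entrained_grain_size(mdot, 1.0):.2f} micron")
```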
Photodissociation region physics for external photoevaporation
The FUV excited photodissociation region (PDR) microphysics determines the composition, temperature and therefore the dynamics of the inner parts of external photoevaporative winds. As discussed above, this FUV/PDR part of the inner wind can determine the mass loss rate from the disc. This is not a review on PDR physics (for further information see e.g. Tielens and Hollenbach, 1985;Hollenbach and Tielens, 1997;Tielens, 2008) but given its importance for setting the temperature, and therefore the dynamics, we provide a brief overview of some relevant processes.
We focus primarily on the main heating and cooling contributions. These are summarised as a function of extinction for an external FUV field of 300 G 0 in Figure 3, which is taken from Facchini et al. (2016). Note that, as we will discuss below, the exact form of these plots depends on the FUV field strength and the assumed composition, e.g. the metallicity, dust grain properties and polycyclic aromatic hydrocarbon (PAH) abundance.
The heating mechanism that is anticipated to be most important for external photoevaporation is photoelectric heating (see the left hand panel of Figure 3) that occurs when PAHs lose electrons following photon absorption, increasing the gas kinetic energy (Tielens, 2008). The impact this can have on the mass loss rate is illustrated in Figure 4, which shows the results of numerical models of an externally photoevaporating 100 au disc around a 1 M star in a 1000 G 0 FUV environment as a function of metallicity. Each coloured set of points connected by a line represents a different PAH-to-dust ratio. Reducing the PAH-to-dust ratio has a much larger impact on the mass loss rate than changing the overall metallicity. These models are previously unpublished extensions of the 1D PDRdynamical calculations of Haworth et al. (2018a), which are discussed further in 2.5. When the metallicity is reduced the PAH abundance and heating is also lowered, but so is the line cooling. Changes in metallicity therefore only lead to relatively small changes to the mass loss rate as the heating and cooling changes compensate. Conversely, changing only the PAH-to-dust ratio can lead to dramatic changes in the mass loss rate.
A key issue for the study of external photoevaporation is that the PAH abundance in the outer parts of discs and in winds is very poorly constrained. For the proplyd HST 10 in the ONC, Vicente et al. (2013) inferred a PAH abundance relative to gas around a factor 50 lower than the ISM and a factor 90 lower than NGC 7023 (Berné and Tielens, 2012). Note that the models in Figure 4 use a dust-to-gas mass ratio of 3 × 10 −4 so an f PAH of 0.1 in Figure 4 corresponds to a PAH-to-dust ratio of 1/330. PAH detections around T Tauri stars are generally relatively rare (Geers et al., 2006, 2007), which leads us to expect that the PAH abundance is depleted in discs irrespective of external photoevaporation. This lower PAH abundance would mean less heating due to external photoevaporation, resulting in lower external photoevaporative mass loss rates. Conversely, Lange et al. (2021) demonstrated that PAH emission from the inner disc could be suppressed when PAHs aggregate into clumps, which also crucially would not suppress the heating contribution from PAHs (Lange, priv. comm.). However, it is unclear if that same model for PAH clustering applies at larger radii in the disc, let alone in the wind itself and so this is to be addressed in future work (Lange, priv. comm.).
Given its potential role as the dominant heating mechanism, determining the PAH abundance in the outer regions of discs is vital for understanding the magnitude of mass loss rates and so should be considered a top priority in the study of external photoevaporation. The James Webb Space Telescope (JWST) should be able to constrain abundances by searching for features such as the 15-20 µm emission lines (e.g. Boulanger et al., 1998; Tielens et al., 1999; Moutou et al., 2000; Boersma et al., 2010; Rieke et al., 2015; Jakobsen et al., 2022). Ercolano et al. (2022) also demonstrated with synthetic observations that the upcoming Twinkle (Edwards et al., 2019) and Ariel (Tinetti et al., 2018, 2021) missions should be able to detect the PAH 3.3 µm feature towards discs, at least out to 140 pc. Even if detections with Twinkle/Ariel would not succeed in high UV environments because of the larger distances to those targets, constraining PAH abundances of the outer disc regions in lower UV nearby regions would also provide valuable constraints.
Fig. 4: External photoevaporative mass loss rate as a function of metallicity (Z/Z ⊙ ) for a 100 au disc around a 1 M ⊙ star irradiated by a 1000 G 0 radiation field. Each coloured line represents a different value of the base PAH-to-dust mass ratio scaling f PAH . These are extensions of the FRIED PDR-dynamical models of Haworth et al. (2018a). When the overall metallicity is scaled, there are changes to both the heating and cooling contributions that broadly cancel out. Conversely, varying the PAH-to-dust ratio (which is very uncertain) can lead to large changes in the mass loss rate. Note that these calculations have a floor value of 10 −11 M ⊙ yr −1 .
These
will be an important step for calibrating models, refining the mass loss rate estimates and hence our understanding of external photoevaporation.
Often the dominant cooling in PDRs is the escape of line photons from species such as CO, C, O and C + . Evaluating this is the most challenging component of PDR calculations, since to estimate the degree of line cooling, line radiative transfer in principle needs to sample all directions (4π steradians) from every single point in the calculation. For this reason, most PDR studies to date (even without dynamics) have been 1D, where it is assumed that exciting UV radiation and cooling radiation can only propagate along a single path, with all other trajectories infinitely optically thick (e.g. Kaufman et al., 1999;Le Petit et al., 2006;Bell et al., 2006;Röllig et al., 2007). Most dynamical models of external photoevaporation with detailed PDR microphysics have also therefore been 1D. For example Adams et al. (2004) used the Kaufman et al. (1999) 1D PDR code to pre-tabulate PDR temperatures as inputs for 1D semi-analytic dynamical models (we will discuss these in more detail in section 2.4). Note that 2D models of other features of discs have circumvented this issue by assuming a dominant cooling direction, for example vertically through the disc (e.g. Woitke et al., 2016), or radially in the case of internal photoevaporation calculations (Wang and Goodman, 2017). This approach is not applicable in multidimensional simulations of an externally driven wind, where there is no obvious or universally applicable dominant cooling direction.
The 3d-pdr code developed by Bisbas et al. (2012) and based on the ucl-pdr code (Bell et al., 2006) was the first code (and to our knowledge remains the only code) able to treat PDR models in 3D. It utilises a healpix scheme (Górski et al., 2005) to estimate the line cooling in 3D without assuming preferred escape directions. healpix breaks the sky into samples of equal solid angle at various levels of refinement. For applications to external photoevaporation, 3d-pdr was coupled with the Monte Carlo radiative transfer and hydrodynamics code torus in the torus-3dpdr code (Bisbas et al., 2015), making 2D and 3D calculations possible in principle, which we will discuss more in section 2.5. However, doing 3D ray tracing from every cell in a simulation iteratively with a hydrodynamics calculation is prohibitively expensive. Finding ways to emulate the correct temperature without solving the full PDR chemistry may offer a way to alleviate this problem (e.g. Holdship et al., 2021).
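To illustrate the idea of equal-solid-angle sky sampling, the minimal sketch below generates HEALPix pixel-centre directions using the healpy package (assumed to be installed). It shows only the general concept of sampling 4π steradians uniformly; it is not code taken from 3d-pdr or torus-3dpdr.

```python
import numpy as np
import healpy as hp  # assumes the healpy package is available

def ray_directions(level):
    """
    Unit vectors towards the centres of HEALPix pixels at a given
    refinement level (nside = 2**level). Each pixel subtends the same
    solid angle, so the 4*pi steradian sky is sampled uniformly -- the
    same idea used to estimate line-photon escape in 3D PDR codes.
    """
    nside = 2 ** level
    npix = hp.nside2npix(nside)          # = 12 * nside^2 directions
    x, y, z = hp.pix2vec(nside, np.arange(npix))
    return np.column_stack((x, y, z))

dirs = ray_directions(level=1)           # 48 rays over the full sky
print(dirs.shape, "solid angle per ray:", 4 * np.pi / len(dirs), "sr")
```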
1D Semi-analytic models of the external photoevaporative wind flow structure
In Section 2.3 we discussed the importance of PDR microphysics for determining the temperature structure and hence the flow structure of externally irradiated discs. We also noted that PDR calculations are computationally expensive and are usually limited to 1D geometries. Until recently, calculations of the mass loss rate that utilise full PDR physics have also been confined to 1D and solved semi-analytically. Here we briefly review those approaches.
Fig. 5: Schematic of the 1D semi-analytic model structure and how it is used to estimate total mass loss rates. The flow solution is solved along the disc mid-plane with appropriate boundary conditions, e.g. at the disc outer edge and at some critical point in the wind. This mid-plane flow is then assumed to apply over the entire solid angle subtended by the disc outer edge.
First we describe the 1D approach to models of external photoevaporation and the justification for such a geometry. 1D models essentially follow the structure radially outwards along the disc mid-plane into the wind, as illustrated in Figure 5. The grid is spherical, but the assumption is that the flow only applies over the solid angle subtended by the disc outer edge at R d . That fraction F of 4π steradians is set by the disc scale height H d at the outer edge (again see Figure 5). The mass loss rate at a point in the flow at R with velocity Ṙ and density ρ is then Ṁ = 4πR 2 FρṘ.
The 1D geometry is justified based on the expectation that material is predominantly lost from the disc outer edge. That expectation arises because:
1. Material towards the disc outer edge is the least gravitationally bound.
2. The vertical scale height is much smaller than the radial, which results in a higher density at the radial sonic point than the vertical one (Adams et al., 2004).
This is demonstrated analytically in the case of compact discs in the appendix of Adams et al. (2004), who show the ratio of mass lost from the disc surface to that lost from the disc edge, where Ṁ surface and Ṁ edge are the mass loss rates from the disc upper layers and from the outer edge respectively. As before, R d and R g are the disc outer radius and gravitational radius respectively. That is, for larger, more strongly heated discs there is a more significant contribution from the disc surface. This has also been tested and validated in 2D radiation hydrodynamic simulations by Haworth and Clarke (2019) (that we will discuss fully in Section 2.5) who showed that, at least in a 10 3 G 0 environment, the majority of the mass loss comes from the disc outer edge and the rest from the outer 20 percent of the disc surface. Mass loss rates in 2D and analogous 1D models were also determined to be similar to within a factor two, with the 1D mass loss rates being the lower values. Mass loss rates computed in one dimension are therefore expected to be somewhat conservative but reasonable approximations. Adams et al. (2004) took a semi-analytic approach to solving for the flow structure by using pre-tabulated PDR temperatures from the code of Kaufman et al. (1999) and using those in the flow equations. They found that the flow structure is analogous to a Parker (1965) wind, but non-isothermal and with centrifugal effects. At each point in the flow, the pre-tabulated PDR temperatures are interpolated as a function of local density, incident FUV and extinction. The boundary conditions used were the conditions at the disc outer edge and the sonic point in the flow. They demonstrated both that FUV driven winds are dominant for setting the mass loss rate and that winds could be driven interior to the gravitational radius, down to ∼ 0.1 − 0.2 R g (see also e.g. Woods et al., 1996; Liffman, 2003). Facchini et al. (2016) took a similar approach to 1D semi-analytic models with pre-tabulated PDR temperatures from Bisbas et al. (2012). As already discussed above, their main focus was on dust entrainment and the impact of grain growth in the disc on the dust properties in the wind. They found that the entrainment of only small grains, coupled with grain growth in the disc, reduces the extinction in the wind and can enhance the mass loss rate. In addition, they used a different approach to the outer boundary condition, finding a critical point in the modified Parker wind solution and taking into account deviations from isothermality at that point. They then integrated from that critical point, inwards to the disc. Thanks to this different approach Facchini et al. (2016) were able to compute solutions over a wider parameter space than before, particularly down to low FUV field strengths.
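As a point of reference for the flow solutions described above, the sketch below solves the classical isothermal Parker wind (no rotation, no PDR temperature structure), of which the semi-analytic wind solutions are non-isothermal, centrifugally modified generalisations. The sound speed and stellar mass are illustrative values only.

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-8      # cm^3 g^-1 s^-2
M_SUN = 1.989e33  # g
AU = 1.496e13     # cm

def parker_wind_velocity(r, m_star_msun=1.0, c_s_kms=3.0):
    """
    Transonic isothermal Parker wind speed v(r), returned in km/s.
    The sonic (critical) radius is r_c = G M_* / (2 c_s^2), and the
    transonic branch satisfies
        (v/c_s)^2 - ln[(v/c_s)^2] = 4 ln(r/r_c) + 4 r_c/r - 3.
    """
    c_s = c_s_kms * 1e5
    r_c = G * m_star_msun * M_SUN / (2.0 * c_s ** 2)
    rhs = 4.0 * np.log(r / r_c) + 4.0 * r_c / r - 3.0
    f = lambda w: w ** 2 - np.log(w ** 2) - rhs    # w = v / c_s
    if r < r_c:                                    # subsonic branch
        w = brentq(f, 1e-8, 1.0)
    else:                                          # supersonic branch
        w = brentq(f, 1.0, 50.0)
    return w * c_s_kms

for r_au in (30, 100, 200, 400):
    print(r_au, "au ->", round(parker_wind_velocity(r_au * AU), 2), "km/s")
```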
Semi-analytic models have offered a powerful and efficient tool for estimating mass loss rates in different regimes. However, there are still regions of parameter space where solutions are not possible and semi-analytic models are still limited to 1D. To alleviate those issues we require radiation hydrodynamic simulations.
Radiation hydrodynamic models of external photoevaporation
Above we discussed semi-analytic calculations of the external wind flow structure. Those have the advantage that they are quick to calculate. However they are limited by being restricted to 1D solutions and by solutions not always being calculable. This leaves a demand for radiation hydrodynamic calculations capable of solving the necessary radiative transfer and PDR temperature structure in conjunction with hydrodynamics. Such calculations can solve for the flow structure in 2D/3D and in any scenario.
The radiation hydrodynamics of external photoevaporation is one of the more challenging problems in numerical astrophysics because the key heating responsible for launching the wind is described by PDR physics. That is, we are required to iteratively solve a chemical network that is sensitive to the temperature with a thermal balance that is sensitive to the chemistry. To make matters worse, the cooling in PDRs is non-local, being dependent on the escape probability of line photons into 4 π steradians from any point in the simulation (see Section 2.3.2). In other scenarios this does not cause significant issues if there is a clear dominant escape direction. For example, within a protoplanetary disc the main cooling can quite reasonably be assumed to occur vertically through the disc, since other trajectories have a longer path length (and column) through the disc ("1+1D" models, e.g. Gorti and Hollenbach, 2008;Woitke et al., 2016). Similarly, for internal winds there are models with radiation hydrodynamics and PDR chemistry, where the line cooling is evaluated along single paths radially on a spherical grid (Wang and Goodman, 2017;Nakatani et al., 2018a,b;Wang et al., 2019). In the complicated structure of an external photoevaporative wind, however, this sort of geometric argument cannot be applied and so multiple samplings of the sky (4π steradians) are required from every point in a simulation.
Although 3D cooling is ideally required, simulations have been performed using approximations to the cooling in high UV radiation fields in particular, where the PDR is small. For example Richling and Yorke (2000) ran 2D axisymmetric simulations of discs irradiated face-on. In their calculations the optical depth and cooling is estimated using a single ray from the cell to the irradiating UV source (this same path is used for calculating the exciting UV and cooling radiation). They also employed a more simple PDR microphysics model compared to work at the time such as Johnstone et al. (1998), which enabled the move from 1D to 2D-axisymmetry. Richling and Yorke (2000) studied the mass loss of proplyds as well as the observational characteristics using intensity maps derived from their dynamical models, some of which are illustrated in Figure 6. They found, in the first geometrically realistic EUV+FUV irradiation models, that rapid disc dispersal gives morphologies in various lines similar to those observed in the ONC.
The torus-3dpdr code (Bisbas et al., 2015) is a key recent development in the direct radiation hydrodynamic modelling of external photoevaporation. It was constructed by merging components of the first fully 3D photodissociation region code 3d-pdr (Bisbas et al., 2012) with the torus Monte Carlo radiative transfer and hydrodynamics code. 3d-pdr (and hence torus-3dpdr) addresses the 3D line cooling issue using a healpix scheme, which breaks the sky into regions of equal solid angle. torus-3dpdr has been used to run a range of 1D studies of external photoevaporation and has been shown to be consistent with semi-analytic calculations. It was used to study external photoevaporation in the case of very low mass stars, with a focus on Trappist-1 (Haworth et al., 2018b). The approach was to run a grid of models providing the mass loss rate as a function of the UV field strength and disc mass/radius for a 0.08 M⊙ star, and to interpolate over that grid in conjunction with a disc viscous evolutionary code based on that of Clarke (2007) to evolve the disc. The usefulness of such a grid led to the FRIED (FUV Radiation Induced Evaporation of Discs) grid of mass loss rates, which has since been employed in a wide range of disc evolutionary calculations by various groups (e.g. Winter et al., 2019a; Concha-Ramírez et al., 2019; Sellek et al., 2020b, and others). Note, given the discussion on the importance of PAHs above, that the FRIED models use a dust-to-gas ratio a factor 33 lower than the canonical 10^−2 for the ISM and a PAH-to-dust mass ratio a further factor of 10 lower than ISM-like. This is conservative (i.e. a PAH-to-gas ratio of 1/330 that in the ISM) compared to the factor 50 or so PAH depletion measured by Vicente et al. (2013). So the models predict that the PDR heating is still capable of driving significant mass loss, even when the PAH abundance is heavily depleted compared to the ISM.
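Because the FRIED grid is used by interpolation inside disc evolution codes, it is worth sketching what that workflow looks like in practice. The example below is a minimal Python illustration: the grid axes and tabulated values are invented placeholders standing in for the published tables, and interpolating in the logarithm of the mass loss rate is a common convenience rather than a requirement of any specific code.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder grid axes (log10 values); a real application would read the
# published FRIED tables rather than inventing values like these.
log_fuv  = np.array([1.0, 2.0, 3.0, 4.0])        # log10(F_FUV / G0)
log_mass = np.array([-4.0, -3.0, -2.0, -1.0])    # log10(M_disc / Msun)
log_rout = np.array([1.0, 1.5, 2.0, 2.5])        # log10(R_out / au)

# Placeholder log10 mass loss rates [Msun/yr], loosely increasing with the
# FUV field, disc mass and disc radius, purely for illustration.
log_mdot = (-10.0
            + 0.7 * (log_fuv[:, None, None] - 1.0)
            + 0.5 * (log_mass[None, :, None] + 4.0)
            + 1.0 * (log_rout[None, None, :] - 1.0))

interp = RegularGridInterpolator((log_fuv, log_mass, log_rout), log_mdot)

def external_mass_loss_rate(fuv_G0, m_disc_msun, r_out_au):
    """Interpolated external mass loss rate in Msun/yr."""
    point = [np.log10(fuv_G0), np.log10(m_disc_msun), np.log10(r_out_au)]
    return 10.0 ** interp(point).item()

# Example call, of the kind a disc evolution code would make every timestep
print(external_mass_loss_rate(fuv_G0=1.0e3, m_disc_msun=5.0e-3, r_out_au=80.0))
```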
These applications were all still 1D though, and so there is a growing theoretical framework that is based on that geometric simplification. Haworth and Clarke (2019) ran 2D-axisymmetric external photoevaporation calculations (with 3D line cooling, utilising the symmetry of the problem) and found that 1D calculations are, if anything, conservative, since the 2D mass loss rates were slightly higher. True 3D calculations with 3D line cooling, and with the disc not irradiated isotropically or face-on, are yet to be performed. Though in principle torus-3dpdr is capable of this, in practice the 3D ray tracing of the healpix scheme makes such calculations computationally expensive.

Fig. 6: A gallery of intensity maps (log10 erg s^−1 cm^−2 sr^−1) resulting from the 2D-axisymmetric radiation hydrodynamic simulations of proplyds by Richling and Yorke (2000). The simulations utilised a simplified microphysics approach which enabled them to model proplyds in 2D axisymmetry, with UV radiation incident from the top. These multi-wavelength synthetic line emission maps were compared with, and found to be consistent with, the properties of ONC proplyds.
The disc-wind transition
Most of the models discussed so far impose a disc as a boundary condition from which a wind solution is derived. In reality the wind is not launched from some arbitrarily imposed point in an irradiated disc. Owen and Altaf (2021) recently implemented a model without an imposed inner boundary and a smooth transition from disc to wind using a slim disc approach (Abramowicz et al., 1988). In this first approach they assume an isothermal disc, but demonstrate that while the transition from disc to wind is narrow, it is not negligibly thin. They also introduced dust of different sizes into their model, which predicts a radial gradient in grain size in the outer disc/inner wind. Although the fixed inner boundary models are valid for computing steady state mass loss rates for disc evolutionary models, a worthwhile future development will be to include detailed microphysics in a slim-disc approach like that of Owen and Altaf (2021).
2.6 Summary and open questions for the theory of externally driven disc winds

1. Numerical models of external photoevaporation require some of the most challenging radiation hydrodynamics models in astrophysics. This is primarily because it is necessary to include 3D PDR microphysics, including line cooling with no obvious dominant cooling direction.
2. Limited by the above, 1D models of external photoevaporation are now well established and are used to estimate disc mass loss rates. But 2D and 3D simulations are still limited.
Some of the many open questions and necessary improvements to current models are:
1. What is the PAH abundance in external photoevaporative winds and the outer regions of discs? This is key to setting the wind temperature and mass loss rate.
2. Including mass loss rate dependent dust-to-gas mass ratios and maximum grain sizes (and hence extinction) in numerical models of external photoevaporation. At present a single representative cross section is assumed, irrespective of the mass loss rate.
3. Can accurate temperatures from PDR microphysics be computed at vastly reduced computational expense (e.g. via emulators)?
4. What is the interplay between internal and external winds?
5. 3D simulations of external photoevaporation with full PDR-dynamics.
6. Non-isothermal slim-disc models of externally photoevaporating discs.
Observational properties of externally photoevaporating discs
Here we discuss observations to date of individual externally photoevaporating discs. We discuss the diversity in their properties, such as UV environment and age, and summarise key diagnostics.

Fig. 7: Globulettes (Grenman and Gahm, 2014). Globulettes have radii from hundreds to thousands of au and can also take on a cometary morphology when externally irradiated, though in many cases they do not contain any YSOs, which by our definition would mean that they are not proplyds (note that Grenman and Gahm (2014) never referred to them as such, rather using the term globulette).
Defining the term proplyd
The term "proplyd" was originally used to describe any disc directly imaged in Orion with HST in the mid-90s (e.g. O'dell et al., 1993;Johnstone et al., 1998) as a portmanteau of "protoplanetary disc". Since then, use of the term has adapted to only refer to cometary objects resulting from the external photoevaporation of compact systems. However this use of the term is ambiguous, since a cometary morphology can result from both externally irradiated protoplanetary discs and externally irradiated globules which may be host no embedded star/disc, such as many of the compact globulettes as illustrated in Figure 7 (Gahm et al., 2007;Grenman and Gahm, 2014). We therefore propose to define a proplyd as follows
Proplyd:
A circumstellar disc with an externally driven photoevaporative wind composed of a photodissociation region and an exterior ionisation front with a cometary morphology.
We have chosen this definition such that it makes no distinction as to whether EUV or FUV photons drive the wind, but specifically requires that, for an object to be a proplyd, the wind must be launched from a circumstellar disc. In the absence of a disc it is a globule or globulette (also sometimes referred to as an evaporating gas globule, or EGG, e.g. Mesa-Delgado et al., 2016). To further clarify, a globule or globulette with an embedded YSO (identified through a jet, for example) and a cometary morphology would also not be defined as a proplyd, since it is the ambient material being stripped rather than the disc.
Where and what kind of proplyds have been detected?
Proplyds were originally discovered in the ONC and there are now over 150 known in the region (e.g. O'Dell et al., 1993; Johnstone et al., 1998; Smith et al., 2005; Ricci et al., 2008). The ONC is around 1-3 Myr old, with the primary UV source the O6V star θ 1 C. In general, the surface brightness of a proplyd ionisation front scales with the local EUV exposure, so proplyds can be harder to detect where the EUV field is weaker. For example, the Hα surface brightness scales with the incident ionising flux Φ/(4π d²), for a source emitting Φ ionising photons per second at distance d, with the fraction of recombinations producing an Hα photon set by the ratio of the recombination coefficients α_eff,Hα = 1.2 × 10^−13 cm³ s^−1 and α_B = 2.6 × 10^−13 cm³ s^−1 at temperature T = 10^4 K (O'Dell, 1998; e.g. Osterbrock, 1989). Nonetheless, in recent years there have been important detections of proplyds in lower EUV environments. Proplyds can be found in the ONC at separations of up to ∼ 1 pc from θ 1 C (Vicente and Alves, 2005; Forbrich et al., 2021; Vargas-González et al., 2021). Meanwhile, Kim et al. (2016) found proplyds in the vicinity of the B1V star 42 Ori in NGC 1977, demonstrating that B stars can drive external photoevaporative winds. Haworth et al. (2021) also presented the discovery of proplyds in NGC 2024. In that region it appears that both an O8V star and a B star are driving external photoevaporation. The main significance of the proplyds there is the ∼ 0.2 − 0.5 Myr age of the subset of the region where they have been discovered. This is important since it implies that external photoevaporation can even be in competition with our earliest stage evidence for planet formation (e.g. Segura-Cox et al., 2020).
In these regions proplyds have been detected with host star masses from around solar down to almost planetary masses (< 15 M_Jup; Robberto et al., 2010; Kim et al., 2016; Fang et al., 2016; Haworth et al., 2022). The UV fields incident upon these proplyds range from > 10^5 G_0 down to possibly around 100 G_0 (Miotello et al., 2012). Mass loss rates are estimated to regularly be greater than 10^−7 M⊙ yr^−1 and sometimes greater than 10^−6 M⊙ yr^−1 (e.g. Henney and Arthur, 1998; Henney and O'Dell, 1999; Henney et al., 2002; Haworth et al., 2021). Examples of binary proplyds have also been discovered (Graham et al., 2002). For sufficiently close separation binaries the disc winds merge to form a so-called interproplyd shell, which was studied by Henney (2002). These nearby regions, the ONC, NGC 1977 and NGC 2024, show the clearest evidence for external photoevaporation.
Due to resolution and sensitivity issues, unambiguous evidence for external photoevaporation is more difficult to obtain in more distant star forming regions than the D ∼ 400 pc of Orion. Smith et al. (2010) identified candidate proplyds in Trumpler 14, at a distance of D ∼ 2.8 kpc, and Mesa-Delgado et al. (2016) subsequently detected discs towards those candidates with ALMA. Although many of those candidates are large evaporating globules (in some cases, with embedded discs detected), some are much smaller and so could be bona fide proplyds.
There are other regions where "proplyd-like" objects have been discovered, including Cygnus OB2 (Guarcello et al., 2014), W5 (Koenig et al., 2008), NGC 3603 (Brandner et al., 2000), NGC 2244, IC 1396 and NGC 2264 (Balog et al., 2006). However, our evaluation of those systems so far is that they are all much larger than ONC proplyds and are likely evaporating globules. Given the high UV environments of those regions and the identification of evaporating globules, we do expect external photoevaporation to be significant there. However, it remains unclear for many of these objects whether the winds are launched from an (embedded) star-disc system. Future higher resolution observations (e.g. extremely large class telescopes should resolve ONC-like proplyds out to Carina) and/or new diagnostics of external photoevaporation that do not require spatially resolving the proplyd (for example line ratios, or tracers that show a wind definitely emanates from a disc) are required in these regions.
We provide a further discussion of proplyd demographics and particularly the demographics of discs in irradiated star forming regions in Section 4.3.
Estimating the mass loss rates from proplyds
As discussed above, proplyds have a cometary morphology with a "cusp" pointing towards the UV source responsible for driving the wind and an elongated tail on the far side, pointing away from the UV source. The leading hemisphere that is directly irradiated by the UV source is referred to as the cometary cusp. On the far side of the proplyd is a trailing cometary tail.
The extent of the cometary cusp is set by the point beyond which all of the incident ionising flux is required to keep the gas ionised under ionisation equilibrium. A higher mass loss rate, and hence denser flow, increases the recombination rate in the wind and moves the ionisation front to larger radii. Conversely, increasing the ionising flux pushes the ionisation front to smaller radii. As a result, the ionisation front radius R_if (i.e. the radius of the cometary cusp) is related to the ionising flux incident upon the proplyd and the mass loss rate Ṁ_ext. This is independent of the actual wind driving mechanism, being enforced simply by photoionisation equilibrium downstream of the launching region of the flow. This provides a means to estimate the mass loss rate from the disc,

Ṁ_ext ≈ 4π R_if² μ m_H c_HII ( Φ / (4π α_B d² R_if) )^{1/2},    (14)

where Φ is the number of ionising photons per second emitted by the source, at distance d, responsible for setting the ionisation front; the bracketed term is the ionisation-equilibrium number density at the front, c_HII is the sound speed of the ionised gas, α_B the case B recombination coefficient and μ m_H the mean particle mass. This has been applied to estimating mass loss rates in NGC 2024 and NGC 1977. Note that this neglects extinction between the UV source and the proplyd, and that projected separations underestimate the true separation between the UV source and the proplyd. Both of these effects mean the true ionising flux incident upon the proplyd is lower than assumed, so equation 14 provides an upper limit on the mass loss rate.
Generally, the mass loss rate could alternatively be inferred if one knows the density and velocity through a surface in the wind enclosing the disc. However, the ionisation front is very sharp, making it an ideal surface through which to estimate the mass loss rate.
Other more sophisticated model fits to proplyds have been made, such as Henney et al. (2002), who use photoionisation, hydrodynamics and radiative transfer calculations to model the proplyd LV 2, but equation 14 provides a quick estimate that gives mass loss rates comparable to those more complex estimates.
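As a concrete illustration of equation 14, the sketch below evaluates the mass loss rate for an assumed cusp radius, ionising output and separation. The ionised gas sound speed, mean molecular weight and example numbers are illustrative assumptions; published estimates use calibrated versions of this argument, so treat the output as order-of-magnitude only.

```python
import numpy as np

# constants (cgs)
m_H     = 1.67e-24     # g
alpha_B = 2.6e-13      # case B recombination coefficient at 1e4 K [cm^3/s]
pc, au  = 3.086e18, 1.496e13
yr, M_sun = 3.156e7, 1.989e33

def proplyd_mass_loss(R_if_au, Phi, d_pc, c_HII=1.0e6, mu=1.35):
    """Equation 14: mass loss rate [Msun/yr] from the ionisation front radius.

    Assumptions: ionisation equilibrium sets the density at the front,
    n_if ~ sqrt(Phi / (4 pi alpha_B d^2 R_if)); the flow leaves the front at
    roughly the ionised sound speed c_HII (assumed 10 km/s here).
    """
    R_if, d = R_if_au * au, d_pc * pc
    n_if = np.sqrt(Phi / (4.0 * np.pi * alpha_B * d**2 * R_if))
    mdot = 4.0 * np.pi * R_if**2 * mu * m_H * n_if * c_HII      # g/s
    return mdot * yr / M_sun

# Example: a 100 au cusp, 0.1 pc from a source emitting 1e49 ionising photons/s
print(f"Mdot_ext ~ {proplyd_mass_loss(100.0, 1e49, 0.1):.1e} Msun/yr")
```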
Multi-wavelength anatomy of proplyds
Here we provide a brief overview of some key observational tracers of proplyds. We also highlight possible alternative tracers that might prove useful to identify external photoevaporation when proplyds cannot be inferred based on their cometary morphology (i.e. in weaker UV environments and distant massive clusters). A schematic of the overall anatomy and location of various tracers of a proplyd is shown in Figure 8.
Ionised gas tracers in absorption and emission
Proplyds are observed in two ways using ionised gas emission lines. Proplyds are typically found in H ii regions, which are associated with widespread ionised emission lines. Proplyds on the side of the H ii region closest to the observer can therefore appear in absorption against those ionised gas emission lines. In addition, the ionisation front at the rim of the proplyd cusp is itself a source of ionised emission lines that can be directly detected. Ionised gas tracers in emission probe the region close to or outside of the Hi-H ii ionisation front (e.g. Henney and O'Dell, 1999). These ionised gas tracers detected in emission are valuable for estimating mass loss rates using the procedure discussed in Section 3.3.
Disc tracers
CO

Thanks to its brightness, CO is one of the most common line observations towards protoplanetary discs. Facchini et al. (2016) pointed out that, because material in the wind approximately conserves its specific angular momentum rather than following the Keplerian rotation profile, non-Keplerian (sub-Keplerian) rotation is a signature of external photoevaporation. Although this deviation grows with distance into the wind, it does not get a chance to do so to detectable levels before CO is dissociated (Haworth and Owen, 2020). CO is therefore not expected to kinematically provide a good probe of external winds for proplyds (though it may be useful for more extended discs with slow external winds in low UV environments, as we will discuss in Section 3.5). However, this does not preclude CO line ratios or intensities showing evidence of external heating in spatially resolved observations.
Dust continuum
External photoevaporation influences the dust by i) entraining small dust grains in the wind and ii) heating the dust in the disc. Directly observing evidence for grain entrainment would provide a key test of theoretical models both of external photoevaporation and of dust-gas dynamics. In addition to the prediction that small grains are entrained, Owen and Altaf (2021) predict a radial gradient in the grain size distribution. Evidence for such a radial gradient in grain size was inferred in the ONC 114-426 disc by Miotello et al. (2012) (see also Throop et al., 2001). This disc is the largest in the central ONC, on the near side of the H ii region. Although the UV field incident upon it is expected to be of order 10^2 G_0, it is clearly evaporating, with the wind resulting in an extended diffuse foot-like structure (though no clear cometary proplyd morphology). Miotello et al. (2012) mapped the absorption properties of the translucent outer parts of the disc, finding evidence for a radially decreasing maximum grain size, which would be consistent with theoretical expectations. Revisiting 114-426 to obtain further constraints on grain entrainment would be valuable, as would searching for the phenomenon in other systems. JWST will offer the capability to similarly study the dust in the outer parts of discs seen in silhouette in the ONC, comparing JWST Paschen α absorption with HST Hα absorption (as part of PID GTO 1256; McCaughrean, 2017). The dust in discs is also influenced by radiative heating from the environment. If a proplyd is sufficiently close to a very luminous external source, the grain heating can be comparable to, or in some parts of the disc exceed, the heating from the disc's central star. If this is not accounted for when estimating the dust mass in a proplyd (i.e. if one assumes some constant characteristic temperature, typically T = 20 K), the mass ends up increasingly overestimated in closer proximity to the luminous external source (Haworth, 2021), which may suppress apparent spatial gradients in disc masses at distances within around 0.1 pc of an O star like θ 1 C (e.g. Eisner et al., 2018; Otter et al., 2021).
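To give a feel for the size of this bias, the sketch below compares the optically thin dust mass one would infer assuming T_dust = 20 K with that obtained using the blackbody equilibrium temperature set by an assumed external source; the source luminosity, separation and observing frequency are placeholder values and the grains are treated as perfect absorbers and emitters, so the numbers are indicative only.

```python
import numpy as np

# constants (cgs)
h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10
sigma_sb  = 5.670e-5
L_sun, pc = 3.828e33, 3.086e18

def planck(nu, T):
    """Planck function B_nu in cgs units."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

def external_equilibrium_temperature(L_star, d):
    """Blackbody grain temperature at distance d (cm) from a source of
    luminosity L_star (erg/s), ignoring heating by the disc's own star."""
    return (L_star / (16.0 * np.pi * sigma_sb * d**2)) ** 0.25

# Placeholder numbers: a theta 1 C-like luminosity ~2e5 L_sun, a disc 0.05 pc
# away, observed at 345 GHz
nu = 345e9
T_ext = external_equilibrium_temperature(2.0e5 * L_sun, 0.05 * pc)

# Optically thin masses scale as 1/B_nu(T_dust), so assuming 20 K when the
# dust is really at T_ext overestimates the mass by B_nu(T_ext)/B_nu(20 K)
factor = planck(nu, T_ext) / planck(nu, 20.0)
print(f"T_ext ~ {T_ext:.0f} K, mass overestimated by ~{factor:.1f}x")
```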
Photodissociation region tracers
Photodissociation region (PDR) tracers are valuable because they trace the inner regions of the flow. In particular, as discussed in 2.3.2, it is the FUV/dissociation region that determines the mass loss rate. PDR tracers in the wind are also valuable because they are what we will rely on to identify externally photoevaporating discs that are non proplyds (discussed further in section 3.5). PDR tracers of external photoevaporation have received a lot of attention in recent years and so are explored somewhat more thoroughly than photoionised gas tracers here.
[OI] 6300Å

Störzer and Hollenbach (1998) modelled the [OI] 6300Å emission from proplyds, motivated by the Bally et al. (1998) observations of the ONC proplyd 1822-413. They found that in the case of external photoevaporation the line is emitted following the photodissociation of OH, with the resulting excited oxygen having a roughly 50 percent chance of de-exciting through the emission of a [OI] 6300Å photon. For the density/temperature believed to be representative of the wind, this dissociation channel is expected to dominate over OH being removed by other reaction pathways. This model approximately reproduced the flux of 1822-413. Ballabio et al. (in preparation) generalised this, studying how the [OI] 6300Å line strength varies with UV field strength and star/disc parameters. They found that the line is a poor kinematic tracer of external winds, because the velocity is too low to distinguish it from [OI] emission from internal winds (though spectro-astrometry, e.g. Whelan et al., 2021, and future instrumentation may solve this issue by spatially distinguishing the internal and external winds). However, the [OI] luminosity increases significantly with UV field strength. The ratio of [OI] 6300Å luminosity to accretion luminosity is therefore expected to be unusually high in strong UV environments. The ratio usually has a spread of about ±1 order of magnitude in low UV environments (Nisini et al., 2018), but the models predict that the ratio increases by an additional order of magnitude above the usual upper limit in a 10^5 G_0 environment. This could make [OI] a valuable tool for identifying external photoevaporation at large distances where proplyds cannot be spatially resolved.
There are some observational challenges associated with utilising this diagnostic though. One is that the targeted YSO's emission has to be distinguished from emission from the wider star forming region, for example [OI] emission from a background PDR. Furthermore, estimating the accretion luminosity of proplyds appears to have only been attempted in a handful of cases, with tracers like Hα also possibly originating from the proplyd wind.
C I

As we discussed above, the most commonly used disc gas tracer, CO, is dissociated in the wind. This means that it is ineffective for detecting the deviations from Keplerian rotation that are expected in external photoevaporative winds. C I primarily resides in a layer outside of the CO abundant zone, until it is ionised at larger distances in the wind, and so could trace the deviation from Keplerian rotation. Haworth and Owen (2020) therefore proposed that C I offers a means of probing the inner wind kinematically. A key utility of this would be its possible use as an identifier of winds in systems where there is no obvious proplyd morphology. Haworth et al. (2022) used APEX to try and identify the C I 1-0 line in NGC 1977 proplyds, which are known evaporating discs in an intermediate (F_FUV ∼ 3000 G_0) UV environment. However, they obtained no detections, which they explain in terms of those proplyd discs being heavily depleted in mass. An alternative explanation would be if the discs were simply depleted in carbon; distinguishing these scenarios requires independent constraints on the masses of those discs. Overall the utility of C I remains to be proven and should be tested on higher mass evaporating discs. Based on the expected flux levels (see also Kama et al., 2016) it seems unlikely that C I will be suitable for mass surveys searching for the more subtle externally driven winds when there is no ionisation front, though it could be used for targeted studies of extended systems that are suspected to be in intermediate UV environments.

Far-infrared PDR cooling lines observed towards proplyds have been modelled by Champion et al. (2017), who compared the line fluxes with uniform density 1D PDR models using the Meudon code (Le Petit et al., 2006) to constrain parameters such as the mean flow density, which is supported by ALMA observations. They suggested that the proplyd PDR self-regulates to maintain the H-H2 transition close to the disc surface and maintain a flow at ∼ 1000 K in the supercritical regime (R_d > R_g). Their models also pointed towards a number of heating contributions being comparably (or more) important than PAH heating. However, those calculations also assumed uniform density and ISM-like dust, whereas we now know the dust is depleted in the wind. Overall this highlights the need for further detail in PDR-dynamical models.
Clearly these PDR tracers do have enormous utility for understanding the conditions in the inner part of external photoevaporative winds. The main limitation to using these far-infrared lines now is the lack of facilities to observe them, with Herschel out of commission and SOFIA (e.g. Young et al., 2012) due to end soon. We are unaware of any short term concepts to alleviate this, but in the longer term there are at least two relevant far-infrared probe class mission concepts being prepared, FIRSST and PRIMA, which would address this shortfall. However, these missions would not launch until the 2040s.
External photoevaporation of discs without an observed ionisation front
Proplyds are most easily identified because of their cometary morphology. However, in weaker UV fields it is still possible to drive a significant wind that is essentially all launched from close to the disc outer edge (where material is most weakly bound). Recent years have seen the discovery of possible external winds from very extended discs in very weak UV environments, down to F_FUV ≲ 10 G_0. The first example of this was the extremely large disc IM Lup, which has CO emission out to ∼ 1000 au. IM Lup had previously been demonstrated to have an unusual break in the surface density profile at around 400 au in Submillimeter Array (SMA) observations by Panić et al. (2009). Cleeves et al. (2016) then observed IM Lup in the continuum and CO isotopologues with ALMA, similarly finding that the CO intensity profile could not be simultaneously reproduced at all radii by sophisticated chemical models, and inferring a diffuse outer halo of CO. Using Hipparcos to map the 3D locations of the main UV sources within 150 pc of IM Lup and geometrically diluting their UV output with various assumptions on the extinction, Cleeves et al. (2016) determined that the UV field incident upon IM Lup is only F_FUV ∼ 4 G_0, which would not be expected to be sufficient to drive an external wind. Haworth et al. (2017) demonstrated using 1D radiation hydrodynamic models that the CO surface density profile as a disc/halo could be explained by a slow external photoevaporative wind launched from around 400 au by an extremely low FUV environment, F_FUV ∼ 4 G_0. This is possible because the disc is so extended that the outer parts are only weakly gravitationally bound, so even modest heating can drive a slow molecular wind. However, 2D models are required to give a more robust geometric comparison between simulations and observations to verify an outer wind in IM Lup.
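The geometric dilution estimate described above is simple to reproduce in outline: sum the FUV luminosities of the candidate sources diluted over their separations and normalise by the Habing field. The sketch below does this with an invented source list and no extinction, so it only illustrates the bookkeeping, not the actual inputs or result of Cleeves et al. (2016).

```python
import numpy as np

L_sun = 3.828e33        # erg/s
pc = 3.086e18           # cm
habing = 1.6e-3         # erg s^-1 cm^-2; an FUV flux of 1 G0

def fuv_field_G0(sources):
    """Unattenuated FUV field, in Habing units, at the target position.

    `sources` is a list of (L_FUV in L_sun, distance in pc) tuples; extinction
    between each source and the target is ignored in this sketch.
    """
    flux = sum(L * L_sun / (4.0 * np.pi * (d * pc) ** 2) for L, d in sources)
    return flux / habing

# Invented neighbourhood: an early B star ~10 pc away plus a weaker OB source
neighbours = [(3.0e4, 10.0), (5.0e3, 25.0)]
print(f"F_FUV ~ {fuv_field_G0(neighbours):.1f} G0")
```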
Another candidate external wind in a F_FUV < 10 G_0 UV environment was identified in HD 163296 by Teague et al. (2019) and Teague et al. (2021). They developed a framework in which the 3D CO emitting surface of the disc is traced, which can then be translated into a map of the velocity as a function of radius and height in the disc, as illustrated in Figure 9. Their main focus was the meridional flows identified at smaller radii in the disc, but they serendipitously discovered evidence for an outer wind launched from ∼ 350 − 400 au (see the right hand blue box of Figure 9). This is yet to be interpreted with any numerical models or synthetic observations, which will be necessary to support the interpretation that external photoevaporation is responsible. Teague et al. (2021) also carried out a similar analysis of the disc MWC 480 but found no evidence of an outer wind, despite a similar radial extent and environment. Whether this is a consequence of the more face-on orientation (∼ 33° compared to ∼ 45° for HD 163296) or because there is no outer wind remains to be determined. Indeed, the HD 163296 kinematic feature that appears to be an outflow may also be due to some other mechanism. Furthermore, a similar approach is yet to be applied to IM Lup to search for kinematic traces of an outer wind.

Fig. 9: The azimuthally averaged velocity (vectors) at the height of CO emission as a function of radius in HD 163296 by Teague et al. (2021). In addition to meridional flows, there is a detection of a possible outer wind at ∼ 320 − 400 au, highlighted by the blue dashed box on the right.
Looking ahead, determining whether external irradiation can really launch winds from discs down to FUV fluxes F FUV < 10 G 0 is important for understanding just how pervasive external photoevaporation is.
Summary and open questions for observational properties of externally photoevaporating discs
1. External photoevaporation has been directly observed (e.g. proplyds) for almost 30 years. The vast majority of proplyd observations are in the ONC.
2. In recent years direct evidence for external photoevaporation is being identified in more regions, and B stars are now also known to facilitate it. However, the range of environments in which it has been observed is still very limited.
Some of the many open questions are:
1. What are robust diagnostics and signposts of external winds in weak UV environments?
2. What are diagnostics of external photoevaporation in distant clusters where a cometary proplyd morphology is spatially unresolved?
3. How widespread and significant are external winds from discs in weak UV environments?
Impact on disc evolution and planet formation
In this section we consider how a protoplanetary disc evolves when exposed to strong external UV fields, and the consequences for planet formation. We will only briefly describe some relevant processes for planet formation in isolated star-disc systems. This is a vast topic with several existing review articles on both protoplanetary discs (e.g. Armitage, 2011;Williams and Cieza, 2011;Andrews, 2020;Lesur et al., 2022) and the consequences for forming planets (e.g. Kley and Nelson, 2012;Baruteau et al., 2014;Zhu and Dong, 2021;Drazkowska et al., 2022) to which we refer the reader. Adams (2010) reviewed general influences on planet formation (see also Parker, 2020), with a focus on the Solar System. Here we focus on a detailed look on how external photoevaporation affects the formation of planetary systems.
Governing equations
The gas dominates over dust by mass in the interstellar medium (ISM) by a factor ρ_g/ρ_d ∼ 100 (Bohlin et al., 1978; Tricco et al., 2017). This ratio is usually assumed to be similar in (young) protoplanetary discs, although CO emission suggests that the gas may be somewhat depleted with respect to the dust. Nonetheless, gas is a necessary ingredient in instigating the growth of planetesimals by the streaming instability (Youdin and Goodman, 2005) and represents the mass budget for assembling the giant planets (Mizuno et al., 1978; Bodenheimer and Pollack, 1986). Thus, how the gas evolves in the protoplanetary disc is one of the first considerations in planet formation physics.
Despite its importance, the gas evolution of the disc remains uncertain. The observed accretion rates of Ṁ_acc ∼ 10^−10 − 10^−7 M⊙ yr^−1 onto young stars (Muzerolle et al., 1998; Manara et al., 2012), which depend on the stellar mass (Herczeg and Hillenbrand, 2008; Manara et al., 2017), imply radial angular momentum transport. For several decades, this angular momentum transport has widely been assumed to be driven by an effective viscosity, mediated by turbulence that may originate from the magnetorotational instability (Balbus and Hawley, 1998). In the absence of perturbations, the surface density Σ_g of the gaseous disc as a function of cylindrical radius r then follows (Lynden-Bell and Pringle, 1974)

∂Σ_g/∂t = (3/r) ∂/∂r [ r^{1/2} ∂/∂r ( ν Σ_g r^{1/2} ) ] − Σ̇_int − Σ̇_ext.    (15)

The loss rates Σ̇_int and Σ̇_ext are the rates of surface density change due to the internally and externally driven winds respectively. The kinematic viscosity ν is usually parametrised by an α parameter (Shakura and Sunyaev, 1973) such that ν(r) = α c_s²/Ω_K, for sound speed c_s and Keplerian frequency Ω_K. In a disc with a mid-plane temperature T ∝ r^−1/2, this yields ν ∝ r, since c_s² ∝ T ∝ r^−1/2 and Ω_K ∝ r^−3/2.
In the following discussion, we will assume angular momentum transport is viscous. However, several recent empirical studies of discs have suggested a low α ∼ 10^−4 − 10^−3 (e.g. Flaherty et al., 2017; Trapman et al., 2020). This is difficult to reconcile with observed accretion rates. Alternative candidates, particularly magnetohydrodynamic (MHD) winds, have been suggested to drive angular momentum transport (e.g. Bai and Stone, 2013). In this case, angular momentum is not conserved but extracted from the gas disc, with consequences for the disc evolution (Lesur, 2021; Tabone et al., 2022). In the following we will assume a standard viscous α disc model, with the caveat that future simulations may offer different predictions by coupling the externally driven photoevaporative wind with MHD mediated angular momentum removal.
Implementing wind driven mass loss
In order to integrate equation 15, we must define the form ofΣ int andΣ ext . The internal wind may be driven by MHD effects (e.g. Bai and Stone, 2013;Lesur et al., 2014) or thermally due to a combination of EUV (e.g. Hollenbach et al., 1994;Alexander et al., 2006), X-ray (e.g. Ercolano et al., 2009;Owen et al., 2010Owen et al., , 2011 and FUV (e.g. Gorti et al., 2009) photons, or probably a combination of the two (e.g. Bai et al., 2016;Bai, 2017;Ballabio et al., 2020) . We do not focus on the internally driven wind in this review, but note that the driving mechanism influences the radial profile ofΣ int (see Ercolano and Pascucci, 2017, for a review).
Several authors have included the external wind in models of (viscous) disc evolution (e.g. Clarke, 2007;Anderson et al., 2013;Rosotti et al., 2017;Sellek et al., 2020b;Concha-Ramírez et al., 2021;Coleman and Haworth, 2022). In general, these studies follow a method similar to that of Clarke (2007) in removing mass from the outer edge, because winds are driven far more efficiently where the disc is weakly gravitationally bound to the host star. We discuss the theoretical mass loss rates in Section 2; in brief, the analytic expressions by Johnstone et al. (1998) are applied to compute the mass loss rate in the EUV driven wind, while early studies applied the expressions by Adams et al. (2004) for an FUV driven wind. The latter has now been improved upon using more detailed models by Haworth et al. (2018a), such that one can interpolate over the FRIED grid to find an instantaneous mass loss rate. For typical EUV fluxes, once the disc is (rapidly) sheared down to a smaller size, any severe externally driven mass loss is expected to be driven by FUV rather than EUV photons (see Section 2.2). For this reason, the EUV mass loss is often neglected in studies of disc evolution.
Since the mass loss rate is sensitive to the outer radius R out , care must be taken when implementing a numerical scheme that the value of R out is sensibly chosen. In practice, a sharp outer radius quickly develops for a disc with initial mass loss rate higher than the rate of viscous mass flux (accretion). For a viscous disc with ν ∝ r, the surface density Σ ∝ r −1 in the steady state, which is the same profile as adopted for the numerical models in the FRIED grid. One can then interpolate using the total disc mass and outer radius. The latter is evolved each time-step by considering the rate of wind-driven depletion versus viscous re-expansion (e.g. Clarke, 2007;Winter et al., 2018). However, the physically correct way to define the outer radius is to find the optically thin-thick transition, since the flow in the optically thin region is set by the wind launched from inside this radius. Mass loss scales linearly with surface density in the optically thin limit , such that one can define R out to be the value of r that gives the maximal mass loss rate for the corresponding Σ(r) in the disc evolution model (see discussion by Sellek et al., 2020b). Under the assumption of a viscous disc with ν ∝ r both approaches yield similar outcomes, but the approach of Sellek et al. (2020b) should be adopted generally. For example, this prescription would be particularly important in the case of a disc model incorporating angular momentum removal via MHD winds (Tabone et al., 2022).
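A minimal numerical sketch of this kind of scheme is given below: explicit integration of the viscous term in equation 15 on a fixed radial grid, with the externally photoevaporated mass removed from the outermost non-empty cells each timestep. The grid, initial condition, toy viscosity normalisation and the constant placeholder mass loss rate are all illustrative; a production code would interpolate a FRIED-like grid for Ṁ_ext and define R_out more carefully, as discussed above.

```python
import numpy as np

# --- illustrative setup (radii in au, times in yr, masses in Msun) ---
n_r = 200
r   = np.logspace(np.log10(10.0), np.log10(400.0), n_r)
dr  = np.gradient(r)

# Toy viscosity nu ~ 1e-5 r [au^2/yr]: roughly alpha ~ 3e-3 around a solar
# mass star with T ~ r^(-1/2); a placeholder, not a calibrated disc model.
nu = 1.0e-5 * r

# Initial condition: Sigma ~ 1/r truncated at 200 au, scaled to ~0.1 Msun
sigma = np.where(r < 200.0, 1.0 / r, 1e-12)
sigma *= 0.1 / np.sum(2.0 * np.pi * r * sigma * dr)

mdot_ext = 1.0e-7    # placeholder external mass loss rate [Msun/yr]
dt       = 10.0      # yr; comfortably inside the explicit stability limit here

def step(sigma):
    """One explicit update of equation 15 (no internal wind), then removal of
    mdot_ext * dt from the outermost non-empty cells (outside-in)."""
    g = nu * sigma * np.sqrt(r)
    dsigma_dt = (3.0 / r) * np.gradient(np.sqrt(r) * np.gradient(g, r), r)
    sigma = np.maximum(sigma + dt * dsigma_dt, 1e-12)

    to_remove = mdot_ext * dt
    for i in range(n_r - 1, -1, -1):           # strip mass from the outer edge
        cell_mass = 2.0 * np.pi * r[i] * sigma[i] * dr[i]
        dm = min(cell_mass, to_remove)
        sigma[i] -= dm / (2.0 * np.pi * r[i] * dr[i])
        to_remove -= dm
        if to_remove <= 0.0:
            break
    return sigma

for _ in range(1000):                          # evolve for 1e4 yr
    sigma = step(sigma)
print("disc mass after 1e4 yr:", np.sum(2.0 * np.pi * r * sigma * dr), "Msun")
```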
Viscous disc evolution with external depletion of gas
In Figure 10 we show examples of the evolution of the disc radius that contains 90 percent of the mass, R_90, and the corresponding externally driven mass loss rates (from Haworth et al., 2018a). To illustrate the variation in mass loss and radius, we choose an initial scale radius R_s = 100 au and mass M_disc = 0.1 M⊙, truncated outside of 200 au, and with a viscous α = 3 × 10^−3. For simplicity, we ignore internal winds (Σ̇_int = 0 everywhere), in order to highlight the main consequences of the externally driven photoevaporative wind in isolation.
From such simple models, we gain some insights into how we expect disc evolution to be affected by externally driven mass loss. In the first instance, the efficiency of these winds at large r leads to extremely rapid shrinking of the disc. In a viscous disc evolution model, this shrinking continues until the outwards mass flux due to angular momentum transport balances the mass loss due to the wind such that the accretion rateṀ acc ∼Ṁ ext (Winter et al., 2020b;Hasegawa et al., 2022). This offers a potential discriminant for disc evolution models: whileṀ acc ∼Ṁ ext for a viscously evolving disc in an irradiated environment, this need not be the case if angular momentum is removed from the disc via MHD winds or similar. In either case, the initial rapid mass loss rates ofṀ ext ∼ 10 −7 M yr −1 are only sustained for relatively short time-scales of a few 10 5 yr. Because the mass loss rate is related to the spatial extent of proplyds (equation 14), this implies easily resolvable proplyds should be short-lived and therefore rare.
This rapid shrinking of the disc has consequences for the apparent viscous depletion time-scale of the disc, leading Rosotti et al. (2017) to expound the usefulness of the dimensionless accretion parameter

η ≡ τ_age Ṁ_acc / M_disc,

where τ_age and M_disc are the age and mass of the disc respectively, while Ṁ_acc is the stellar accretion rate. For disc evolution driven by viscosity, and where the disc can reach a quasi steady-state, we expect η ≈ 1. Indeed, η ≈ 1 for discs in low mass local SFRs when using the dust mass M_dust (or, more precisely, the sub-mm flux) as a proxy for the total disc mass, M_disc = 100 · M_dust. While numerous processes can interrupt accretion and yield η < 1, only an outside-in depletion process can yield η > 1. Rosotti et al. (2017) showed that this applies to a number of discs in the ONC, hinting that external photoevaporation has sculpted this population.
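Evaluating this diagnostic from observable quantities is straightforward; the sketch below uses the common M_disc ≈ 100 M_dust proxy quoted above, and the example numbers are invented purely for illustration.

```python
def accretion_parameter(age_yr, m_dust_msun, mdot_acc_msun_yr, gas_to_dust=100.0):
    """Dimensionless accretion parameter eta = age * Mdot_acc / M_disc, using
    M_disc ~ gas_to_dust * M_dust as a proxy for the total disc mass."""
    return age_yr * mdot_acc_msun_yr / (gas_to_dust * m_dust_msun)

# Invented example: a 2 Myr old disc with ~10 Earth masses of dust (~3e-5 Msun)
# accreting at 3e-9 Msun/yr gives eta ~ 2, suggestive of outside-in depletion
print(accretion_parameter(age_yr=2.0e6, m_dust_msun=3.0e-5, mdot_acc_msun_yr=3.0e-9))
```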
With the inclusion of internal disc winds, a number of disc evolution scenarios become possible for an externally irradiated disc. The internal wind drives mass loss outside of some launching radius R_launch ≈ 0.2 R_g, which is inside the gravitational radius due to hydrodynamic effects (Liffman, 2003). Once the (viscous) mass flux through the disc becomes sufficiently small (Ṁ_acc ≲ Ṁ_int, the mass loss rate in the internal wind), the internal wind depletes material at r ≈ R_launch faster than it is replenished, leading to gap opening. Subsequently, the inner disc is rapidly drained and inside-out disc depletion proceeds (Skrutskie et al., 1990; Clarke et al., 2001). Coleman and Haworth (2022) discussed how the balance of internal and external photoevaporation can alter this evolutionary picture. In the limit of a vigorous externally driven wind, the outer disc may be depleted down to R_launch. In this case, the internal wind no longer drives inside-out depletion, and the disc is dispersed outside-in. In the intermediate FUV case (F_FUV = 500 G_0), external disc depletion erodes the outer disc without reaching R_launch. In models where outer disc material is eventually transported inwards, outer disc depletion should still shorten the lifetime to some degree. In Figure 10 we see that only the disc exposed to F_FUV = 5000 G_0 (above which we consider 'high FUV environments' by our definitions in Section 2.2.1) is sheared down to R_out < 20 au before the inner surface density becomes small (lower than the FRIED grid range). Thus, if angular momentum transport is inefficient beyond this radius, then sustained exposure to high FUV fluxes F_FUV ≳ 5000 G_0 should be required to shorten the disc lifetime. In this case, we may expect inside-out depletion for discs exposed to more moderate F_FUV, although the external depletion still reduces their overall mass.

Fig. 11: Panel (a) shows the maximum grain size, relative to an isolated disc (F_FUV = 0 G_0), as a function of time and radius in an irradiated disc with viscous α = 10^−3, exposed to FUV flux F_FUV = 1000 G_0. Panel (b) shows the total fraction of dust removed by the wind over the lifetime of a disc as a function of the viscous α and F_FUV.
Solid evolution
We now consider how the evolution of solids within the gas disc may be influenced by externally driven winds. The growth of ISM-like dust grains to larger aggregates and eventually planets is the result of numerous inter-related physical processes, covering a huge range of scales. We do not review these processes in detail here, but refer the reader to the recent reviews by Lesur et al. (2022, with a focus on dust-gas dynamics) and Drazkowska et al. (2022, with a focus on planet formation). Due to the complexity of the topic, and the fact that the primary empirical evidence comes from local, low mass SFRs without OB stars, most studies to date have focused on dust growth in isolated protoplanetary discs. We here consider the results of the few investigations focused on dust evolution specifically in irradiated discs.
Dust evolution
Sellek et al. (2020b) investigated the drift and depletion of dust in a viscously evolving, externally irradiated protoplanetary disc. Dust is subject to radial drift, wherein dust moves towards pressure maxima (i.e. inwards in the absence of local pressure traps - Weidenschilling, 1977), as well as to grain growth dependent on the local sticking, bouncing and fragmentation properties (see Birnstiel et al., 2012, and references therein). The drift velocity is determined by the Stokes number, which in this context is the ratio of the stopping time of the dust grain to the largest eddy timescale ∼ Ω_K^−1. Near the midplane and in the Epstein regime, this can be approximated as

St ≈ (π/2) a_s ρ_s / Σ_g,

where a_s and ρ_s are the grain size and internal density respectively. The draining of large dust grains can be understood in terms of the balance of viscous mass flux and radial drift for the standard α disc model with ν ∝ r. For St > St_eq, dust drifts inwards regardless of the viscosity. At the outer edge of the disc |d ln Σ_g/d ln r| becomes large, so that St_eq → 3α. Hence dust in the outer disc that grows above St > 3α will migrate rapidly inwards. External photoevaporation acts to increase |d ln Σ_g/d ln r| in the outer disc, clearing this region of large dust grains. Figure 11a, adapted from Sellek et al. (2020b), shows how the external wind can rapidly evacuate the outer disc of large grains, dependent on the value of α. Given that this occurs on short time-scales compared to the disc lifetime, it has consequences for planet formation and the observational properties of discs, possibly explaining why the sub-mm flux-radius relationship seen in low mass SFRs (Tripathi et al., 2017) does not hold in the ONC.
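Inverting the Stokes number expression above at the St ≈ 3α retention threshold gives the largest grain size that avoids rapid inward drift from the steep outer edge. The sketch below evaluates this for a few illustrative outer-disc surface densities, with an assumed grain internal density.

```python
import numpy as np

def stokes_number(a_cm, sigma_g, rho_s=1.0):
    """Midplane Epstein-regime Stokes number: St = pi * a * rho_s / (2 * Sigma_g).
    a_cm: grain size [cm]; sigma_g: gas surface density [g/cm^2];
    rho_s: grain internal density [g/cm^3] (assumed)."""
    return np.pi * a_cm * rho_s / (2.0 * sigma_g)

def max_retained_grain_size(sigma_g, alpha, rho_s=1.0):
    """Largest grain size [cm] with St < 3*alpha, i.e. not rapidly lost to
    inward drift at the wind-steepened outer edge."""
    return 6.0 * alpha * sigma_g / (np.pi * rho_s)

# Illustrative outer-disc surface densities for alpha = 1e-3: as external
# photoevaporation lowers Sigma_g, only ever smaller grains are retained
for sigma_g in [1.0, 0.1, 0.01]:
    a_max = max_retained_grain_size(sigma_g, alpha=1e-3)
    print(f"Sigma_g = {sigma_g:5.2f} g/cm^2 -> a_max ~ {a_max * 1e4:.1f} micron")
```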
The clearing of large grains from the outer disc also has consequences for the quantity of solid material that can be lost in the wind. Sellek et al. (2020b) use the entrainment constraints given by equation 9 to estimate an entrainment fraction f_ent for a given grain size distribution. The fraction of dust removed in their global externally irradiated disc model is shown in Figure 11b. When the viscosity is large (α ≳ 10^−2), this fraction can exceed 50 percent for F_FUV ≳ 10^3 G_0. This trend of higher dust depletion with higher α is due both to the faster replenishment of disc material in the outer regions where the wind is launched, and to the less efficient inward drift of large grains (higher St_eq). In general, however, depletion of dust is not efficient, with typically less than half of the dust mass being lost. Moreover, in the models of Sellek et al. (2020b) this does not result in an enhancement of the dust-to-gas ratio, due to the enhanced loss of the large dust grains via rapid inwards drift.
A big caveat of the above discussion is that it does not consider the role of local pressure traps in halting the radial drift (e.g. Pinilla et al., 2012; Rosotti et al., 2020). This local accumulation of solids can also lead to a mutual aerodynamic interaction that drives locally unstable density growth, which can seed planet formation (Youdin and Goodman, 2005). If sufficient quantities of dust remain when the gas component is depleted by external photoevaporation, then this may serve to initiate planetesimal formation in the outer disc (Throop and Bally, 2005), perhaps leading to low mass or rocky planets rather than gas giants. Future studies may consider how the efficiency of dust trapping in the outer disc affects this picture.
Planet formation and evolution
As discussed in the introduction, the influence of external photoevaporation is still rarely included in models for planet formation, despite its apparent prevalence. However, Ndugu et al. (2018) studied how increases in the disc temperature due to external irradiation alter the formation process. By increasing the disc scale height (thus decreasing midplane density), this heating reduces the efficiency of pebble accretion and giant planet formation in the outer disc. In this framework, giant planets that do form at high temperatures more frequently orbit with short periods (hot/warm Jupiters) because their planet cores need to form early or in the inner disc. Temperature also has an influence after the formation of a low mass planet core. Low mass planets that have not opened up a gap in the gas disc undergo type I migration, which is due to torques associated with a number of local resonances (Paardekooper et al., 2011). These torques are sensitive to thermal diffusion, such that they also depend on local temperature and associated opacity. This may lead to complex migration behaviour for the low mass planets (Coleman and Nelson, 2014), which would also be influenced by increasing the disc temperature due to external irradiation (Haworth, 2021).
Perhaps more directly, where there is sufficient mass loss from the disc due to an externally driven wind, this also reduces the time and mass budget available for (giant) planet formation and migration. Internal winds have long been suggested to play an important role in stopping inward planet migration (Matsuyama et al., 2003). Armitage et al. (2002) and Veras and Armitage (2004) investigated how type II migration of giant planets (within a gap) can be severely altered by the loss of disc material in a photoevaporative wind, even leading to outwards migration if the outer disc material is removed. Since then, a number of authors have investigated how giant planet migration can be halted by internally driven disc winds (e.g. Alexander and Armitage, 2009; Jennings et al., 2018; Monsch et al., 2019). Planet population synthesis models have now started to implement prescriptions for mass loss due to an external wind (such as in the Bern synthesis models - Emsenhuber et al., 2021). However, this is presently based on a single (typical) estimated Σ̇_ext that is constant in time and with radius r, rather than the more physical outside-in, radially dependent depletion models discussed in Section 4.1. Recently, Winter et al. (2022) looked at how growth and migration are altered by external FUV flux exposure, and we show the outcomes of some of these models in Figure 12. The outside-in mass loss prescription leads to more dramatic consequences than those of Emsenhuber et al. (2021), curtailing growth and migration early. As a result, low FUV fluxes F_FUV ≲ 100 G_0 produce planets that are massive, M_p ≳ 100 M⊕, and on short orbital periods (P_orb ≲ 10^4 days), similar to those that are most frequently discovered by radial velocity (RV) surveys (e.g. Mayor et al., 2011). The typical FUV fluxes in the solar neighbourhood are F_FUV ∼ 10^3 − 10^4 G_0 (see Section 5.2), which yield typical planet masses and orbital periods closer to those of the massive planets in the Solar System. Such relatively low mass planets fall close to, or below, typical RV detection limits, and the anti-correlation between planet mass and semi-major axis may therefore contribute to the inferred dearth of detected planets with periods P_orb ≳ 10^3 days (Fernandes et al., 2019). Testing the role of external photoevaporation for populations of planets further requires coupling these prescriptions with population synthesis efforts.

Fig. 12: Final planet masses and orbital periods in the models of Winter et al. (2022). The circular points, connected by faint dotted green lines for fixed starting semi-major axis a_p,0, represent the final location of the planet in P_orb − M_p space. Points are coloured by the external FUV flux experienced in that model, with the same colour shown in a Voronoi cell to emphasise trends. Red crosses show the locations of HARPS radial velocity (RV) planet discoveries presented by Mayor et al. (2011). The 50 percent detection efficiency of the HARPS survey is shown by the red contour. We show the orbital period beyond which Fernandes et al. (2019) infer a dearth of planets (dotted black line) and the limit above which planets may no longer form by core accretion (black dashed line - Schlaufman, 2018). We also show the planets in the Solar System (green triangles).
While we can try to identify the role of external photoevaporation via correlations in disc populations, more direct ways to connect environment with the present day stellar (and exoplanet) population would be useful. For an example of how this might work in practice, Roquette et al. (2021) have highlighted that premature disc dispersal may leave an imprint on the rotation period distribution of stars. The rotation of a star is regulated by 'disc-locking' during the early pre-main-sequence phase, and once the disc disperses the contracting star spins up. Thus, by shortening the inner disc lifetime (see Section 4.3.2), external photoevaporation may leave an imprint on the stellar population up to several 100 Myr after formation. This may explain, for example, the increased number of fast rotating stars in the ONC compared to Taurus (Clarke and Bouvier, 2000). In future, models and rotation rates may be used to infer the disc dispersal time-scales of main sequence stars, which may complement investigations into planet populations in stellar clusters (e.g. Gilliland et al., 2000; Brucalassi et al., 2016; Mann et al., 2017).
Disc surveys of local star forming regions
So far in this section we have focused on the theoretical influence of external photoevaporation on planet formation. However, the most important evidence for or against the influence of external photoevaporation on forming planets must be found statistically in surveys of local protoplanetary disc populations. Such surveys, and more detailed observations of individual discs, offer the most direct way to probe the physics of planet formation. Here we report the evidence in the literature for the role of external photoevaporation in sculpting disc populations. We first consider the observational approaches and the challenges in inferring disc properties in Section 4.3.1. We then consider the evidence for variations in (inner) disc survival fractions (Section 4.3.2) and outer disc properties (Section 4.3.3) with FUV flux exposure.
Inner disc lifetimes
Photometric censuses of young stars in varying SFRs can yield insights into the survival time of discs in regions of similar ages. Because young stars exhibit luminous X-ray emission due to their magnetically confined coronal plasma (Pallavicini et al., 1981), X-ray surveys with telescopes such as Chandra offer the basis for constructing a catalogue of young members of a SFR. These can be coupled with photometric surveys to infer the existence or absence of a NIR excess. Comparison of the fractions of stars retaining discs, either as a function of incident FUV flux within the same SFR or between different regions with similar ages, allows one to identify regions where discs have shorter lifetimes than the ∼ 3 Myr typical of low mass SFRs.
While this principle appears simple, in fact several steps required in achieving this comparison bear with them numerous pitfalls. One issue is reliable membership criteria, which were historically photometric or spectroscopic (e.g. Blaauw, 1956;de Zeeuw et al., 1999), improved recently through proper motions (and to a lesser extent, parallax measurements) from Gaia DR2 (Arenou et al., 2018).
Uncertainty and heterogeneity in age determination for young SFRs also represent a significant challenge, particularly when comparing across different SFRs (e.g. Bell et al., 2013). Michel et al. (2021) point out that, considering only low-mass nearby regions, the isolated disc life-time might be a factor 2−3 longer than the canonical 3 Myr obtained by aggregating across many star forming regions with a wide range of properties. Further, since disc life-times are shorter around high mass stars (Ribas et al., 2015), care must be taken when comparing across samples of discs that may have different sensitivity limits.
Binning young stars by incident FUV flux also carries complications. The three dimensional geometry (projection effects) and dynamical mixing in stellar aggregates may also hide correlations between historic FUV exposure and present day disc properties (e.g. Parker et al., 2021). Even empirically quantifying the luminosity and spectral type of neighbour OB stars can be challenging. Massive stars are often found in multiple systems (e.g. Sana et al., 2009), which can lead to mistaken characterisation (e.g. Maíz Apellániz et al., 2007), while these massive stars are also expected to go through transitions in the Hertzsprung-Russell diagram in combination with rapid mass loss (see Vink, 2021, for a review). Statistically, any studies attempting to measure correlations in individual regions must choose binning procedures for apparent FUV flux with care. For example, the number of stars per bin must be sufficient such that uncertainties are not prohibitively large and binning should be performed by projected UV flux rather than separation alone (i.e. controlling for the luminosity of the closest O star).
Finally, studies of NIR excess probe the presence of inner disc material, which represents the part of the disc expected to be least affected by external photoevaporation (see discussion at the end of Section 4.1.3). Therefore, inner disc survival fractions should be interpreted as the most conservative metric by which to measure the role of external disc depletion.
Outer disc properties
The outer regions of the protoplanetary disc are more weakly bound to the host star than the inner disc, and are therefore much more easily unbound by externally driven photoevaporative winds. Probing disc mass and radius also offers much more information than simply the presence or absence of circumstellar material. Outer disc properties are frequently inferred by probing the dust content, then assuming a canonical dust-to-gas ratio of 10^−2 to infer a total mass, although this may be significantly higher in some cases. In surveys of protoplanetary discs, the dust mass is usually inferred by making the assumption that the disc is optically thin, such that

M_dust = D² F_ν,dust / ( κ_ν,dust B_ν(T_dust) ),    (19)

where D is the distance to the source, F_ν,dust is the flux from the cool dust at frequency ν, T_dust = 20 K is the assumed dust temperature, B_ν is the Planck function and κ_ν,dust ≈ 2 cm² g^−1 is the assumed opacity (Beckwith et al., 1990). Even for studies of discs in low mass star forming regions, the estimate from equation 19 comes with several assumptions, including the fixed dust temperature and opacity. In the context of discs in irradiated environments, this can be further complicated by the heating of the dust by neighbouring stars (Haworth, 2021). Further still, in some regions background free-free emission may contribute to the continuum flux, and must therefore be subtracted by extrapolating from cm-wavelength observations (see Eisner et al., 2018, and references therein); since free-free emission may be variable, this is another source of uncertainty. Beyond dust masses, estimates of the outer disc extents can be made using spatially resolved continuum observations, by either fitting a surface density profile or defining an effective radius enclosing a fraction of the total disc emission (see discussion by Tripathi et al., 2017; Tazzari et al., 2021). This comes with the significant caveat that the size-dependent inward drift of dust grains means that the inferred radii may not trace the physical disc radius, with long integration times required to trace the small grains that remain well coupled to the gas (Rosotti et al., 2019).

Probing the gas content of discs cannot be achieved by directly measuring the hydrogen: molecular hydrogen is a symmetric rotator and thus has no electric dipole moment, and its quadrupole transitions are not excited at the typical temperature of the bulk of the gas in protoplanetary discs (e.g. Thi et al., 2001). Instead CO isotopologues, and sometimes HD (Trapman et al., 2017), are commonly used to infer disc masses (Williams and Best, 2014), which requires some assumption for the carbon and oxygen abundances. Outer disc radii might be inferred from spatially resolved moment 0 maps using a similar approach as discussed for resolved dust observations (e.g. Ansdell et al., 2018), or by assuming Keplerian rotation and fitting a model to the gas kinematics (e.g. Czekala et al., 2015). All of these methods rely on some assumptions about disc abundances and chemistry, which should be applied with caution in the case of irradiated discs. In particular, the heating, the winds/sub-Keplerian rotation and the photodissociation of molecules close to the outer edge of the disc will all influence the observed molecular line emission (Haworth and Owen, 2020).

Table: Star forming regions showing evidence for external photoevaporation of discs. This compilation is not complete, nor necessarily unbiased. We include only SFRs that exhibit strong evidence of external photoevaporation of the discs, as well as well-studied properties in terms of mass and age. Columns from left to right are: name of the SFR, heliocentric distance, total stellar mass, central density, total FUV luminosity, age, type of evidence for external disc depletion and references. FUV luminosity is calculated using the luminosity of the most massive members at an age of 1 Myr. The evidence for external dispersal in each region is listed as proplyds/winds (P), dust/outer disc depletion (DD) or shortened inner disc lifetime (IDL).
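For completeness, equation 19 is trivial to evaluate numerically; the sketch below does so for an invented flux at roughly the distance of Orion, with the standard T_dust = 20 K and κ = 2 cm² g^−1 assumptions quoted above and no correction for environmental heating or free-free contamination.

```python
import numpy as np

# constants (cgs)
h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10
pc, mJy   = 3.086e18, 1.0e-26        # 1 mJy = 1e-26 erg/s/cm^2/Hz
M_earth   = 5.972e27                 # g

def planck(nu, T):
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

def dust_mass(flux_mjy, d_pc, nu=345e9, T_dust=20.0, kappa=2.0):
    """Optically thin dust mass (equation 19), returned in Earth masses."""
    D, F = d_pc * pc, flux_mjy * mJy
    return D**2 * F / (kappa * planck(nu, T_dust)) / M_earth

# Invented example: a 2 mJy source at ~400 pc observed at 345 GHz
print(f"M_dust ~ {dust_mass(2.0, 400.0):.1f} M_earth")
```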
Proplyd definitions
Proplyds are discussed in detail in Section 3, and we do not revisit them in detail here. As discussed in that section, we distinguish between photoevaporating globules and proplyds, the latter of which are typically smaller than a few 100 au and are synonymous with an ionised star-disc system. For this reason, we here consider the examples in Trumpler 14, Pismis 24, NGC 3603 (Brandner et al., 2000) and Cygnus OB2 as candidate proplyds only. These objects range from ∼ 1000 au to several 10^4 au in size, and are generally undetected in X-rays. We therefore consider these large photoevaporating objects as candidate proplyds or globules, rather than confirmed proplyds. This does not necessarily mean that true proplyds do not exist in these regions, as they are challenging to resolve at such large distances.
Disc survival fractions
Despite the numerous difficulties, many studies have reported correlations between disc lifetimes and local FUV flux. One of the earliest was Stolte et al. (2004), who found a disc fraction of 20 ± 3 percent in the central 0.6 pc of NGC 3603, increasing to 30 ± 10 percent at separations of ∼ 1 pc from the centre. Later, Balog et al. (2007) presented a Spitzer survey of NGC 2244 (of age ∼ 2 Myr - Hensberge et al., 2000; Park and Sung, 2002) and found a marginal drop-off in the fraction of disc-hosting stars at separations d < 0.5 pc from an O star (27 ± 11 percent) versus those at greater separations (45 ± 6 percent). Guarcello et al. (2007) obtained similar results for NGC 6611 (of age ∼ 1 Myr), finding 31.1 ± 4 percent survival in their lowest bolometric flux bin versus 16 ± 3 percent in their highest (see also Guarcello et al., 2009). Similar evidence of shortened inner disc lifetimes has been reported in the Arches cluster, Trumpler 14, 15 and 16, NGC 6357 (or Pismis 24 - Fang et al., 2012) and the comparatively low density OB association Cygnus OB2. We highlight that correlations between location and inner disc fraction are not found ubiquitously - for example, Roccatagliata et al. (2011) do not recover evidence of depleted discs in the central 0.6 pc of IC 1795. We return to discuss some of these cases in greater detail below.
Interestingly, the study by Fang et al. (2012) of Pismis 24 revealed not only a correlation of disc fraction with FUV flux, but also a stellar mass dependent effect wherein disc fractions are found to be lower for higher mass stars. While this is similar to the case of non-irradiated discs (Ribas et al., 2015), it is the opposite of what might be expected from the dependence of the external mass loss rate Ṁ_ext on the stellar mass; lower mass stars are more vulnerable to externally driven winds due to the weaker gravitational binding of disc material (e.g. Haworth et al., 2018a). This finding, if generally true for irradiated disc populations, would therefore put constraints on how other processes in discs scale with stellar host mass. For example, more rapid (viscous) angular momentum transport for discs around higher mass stars would sustain higher mass loss rates for longer, due to the mass flux balance in the outer disc (see discussion in Section 4.1). However, although accretion rates correlate with stellar mass (Herczeg and Hillenbrand, 2008; Manara et al., 2017), this does not necessarily imply faster viscous evolution for discs with high mass host stars (Somigliana et al., 2022); thus the physical reason for the Fang et al. (2012) findings remains unclear. Whether discs around high mass stars are generally depleted faster than those around low mass stars may be further tested by the upcoming JWST GTO programme of Guarcello et al. (2021), which aims to map out disc fractions and properties in Westerlund 1 for stars down to brown dwarf masses. This large dataset should make it possible to control for both stellar mass and location in the starburst cluster.
All of the regions mentioned above, in which evidence of inner disc dispersal has been inferred, share the property that they are sufficiently massive to host many O stars. A notable example for which shortened disc lifetimes are not observed is the ONC. Despite rapid mass loss rates for the central proplyds, a high disc fraction of ∼ 80 percent has been inferred. This may be due to a large spread in ages (typically estimated at ∼ 1−3 Myr - e.g. Hillenbrand, 1997; Palla and Stahler, 1999; Beccari et al., 2017), such that the resultant dynamical evolution (Kroupa et al., 2018; Winter et al., 2019b) leads to the central concentration of young stars (Hillenbrand, 1997; Getman et al., 2014a; Beccari et al., 2017), as discussed in Section 5.4. However, as always, interpreting the luminosity spread as a physical spread in ages is not so simple. Luminosity spreads may result from disc orientation, accretion, multiplicity and extinction, and the latter two effects may contribute to systematic gradients across the cluster. High survival rates may also reflect the inefficiency of inner disc clearing by photoevaporative winds in intermediate F_FUV environments, which would hint at inefficient angular momentum transport at large radii (due to dead zones, for example).
Some other studies have also obtained null results when trying to find evidence of spatial gradients in the fraction of stars with NIR excess within SFRs. Studies searching for spatial correlations in NIR excess fraction are useful because they are not subject to the same large uncertainties in the stellar ages. However, for studies of this kind the question of membership criteria is of utmost importance: for a given surface density of foreground/background contaminants, the outer regions of a SFR (where the member density is lower) are more affected, presumably suppressing the apparent disc fraction there. One example of a search for these correlations is that of Richert et al. (2015), who found an absence of any correlation in the MYStIX catalogue (Povich et al., 2013) of NIR sources across several O star hosting SFRs. The methodology of Richert et al. (2015) differed from other studies in that they adopted the metric of the aggregate density of disc-hosting or disc-free stars around O stars, rather than the distances of these lower mass stars to their nearest O star. Similarly, Mendigutía et al. (2022) used Gaia photometry and astrometry to compare relative disc fractions in a sample of SFRs, finding no spatial correlations. This study did not use FUV flux/O star proximity, but binned stars into the inner 2 pc versus the outer 2−20 pc for each region. Neither study attempted an absolute measure of disc survival fractions, instead comparing relative numbers. They highlight that environmental depletion of inner discs is not necessarily ubiquitous in high mass SFRs. Physical considerations such as age and dynamics may also play an important role in determining whether correlations in disc survival fractions can be uncovered.
In Figure 13 we show the disc survival fraction versus SFR age for a composite sample, including a number of the massive regions discussed in this section. This figure is subject to the caveats discussed above, particularly that observed samples of stars in massive, distant regions may be biased towards higher stellar masses than those in nearby regions. Setting aside the caveats, Figure 13 appears to demonstrate a shortening of disc lifetimes across numerous massive SFRs. Low mass SFRs typically have τ_life ≈ 3 Myr, while the most massive SFRs may have τ_life ≲ 1 Myr. This shortening of disc lifetimes has so far only been found in regions with a total FUV luminosity L_FUV ≳ 10^39 erg s^-1 (see Table 1). In the case of many of these massive regions, the shortening of disc lifetimes can also be seen in a local gradient, whereby stars that are far from an O star have a greater probability of hosting a disc. This is more difficult to explain by variations in the stellar masses of surveyed stars, since dynamical mass segregation occurs on much longer time-scales (e.g. Bonnell and Davies, 1998). This suggests that external photoevaporation can shorten inner disc lifetimes. If this is the case, it implies that if gas giants form in these environments, they must do so early (as suggested by some recent results - e.g. Segura-Cox et al., 2020; Tychoniec et al., 2020).
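To make the τ_life curves in Figure 13 concrete, the minimal Python sketch below evaluates the exponential model f_discs = exp(−t_age/τ_life) quoted in the figure caption, and inverts it to estimate a characteristic lifetime from a single disc fraction. The numerical inputs (ages and the 30 percent fraction) are illustrative only, not values taken from any specific survey.

```python
import numpy as np

def disc_fraction(t_age_myr, tau_life_myr):
    """Fraction of stars retaining a disc: f = exp(-t_age / tau_life)."""
    return np.exp(-np.asarray(t_age_myr, dtype=float) / tau_life_myr)

def tau_from_fraction(t_age_myr, f_disc):
    """Invert the exponential model to estimate tau_life for one region."""
    return -t_age_myr / np.log(f_disc)

ages = np.array([1.0, 2.0, 3.0, 5.0])               # Myr
print("f(tau = 3 Myr):", np.round(disc_fraction(ages, 3.0), 2))
print("f(tau = 1 Myr):", np.round(disc_fraction(ages, 1.0), 2))
# e.g. a hypothetical region of age ~2 Myr with a 30 percent disc fraction
print(f"implied tau_life ~ {tau_from_fraction(2.0, 0.30):.2f} Myr")
```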
Outer disc properties
In recent times, ALMA has become the principal instrument for surveying outer disc properties in local star forming regions. Nonetheless, prior to this revolution a number of studies had already demonstrated, by degrees, the role of external photoevaporation in sculpting the outer regions of protoplanetary discs. For example, in the ONC, Henney and Arthur (1998) and Henney and O'Dell (1999) used HST images and Keck spectroscopy to demonstrate the rapid mass loss rates of up to ∼ 10^-6 M_⊙ yr^-1 of several proplyds in the core of the ONC, and their concentration around the O star θ^1 C. Later, Mann et al. (2014) used SMA data to show a statistically significant dearth of discs with high dust masses close to θ^1 C (see also Mann and Williams, 2010). This result has since been confirmed via ALMA observations towards the core and outskirts (van Terwisga et al., 2019) of the region. This is a clear demonstration that external photoevaporation depletes the dust content, although it remains unclear whether this occurs via instigating rapid radial drift (Sellek et al., 2020b) or via entrainment in the wind (Miotello et al., 2012). Boyden and Eisner (2020) also find that the gas component of the discs is more compact than in other SFRs, and correlated with distance to θ^1 C, suggestive of truncation by external photoevaporation.
Dust depletion has also been inferred in σ Orionis. Here the dominant UV source is the σ Ori multiple system, for which the most massive component has a mass 17 M (Schaefer et al., 2016). From Herschel spectral energy distributions of discs in this region, Maucó et al. (2016) found evidence of external depletion in the abundance of compact discs (R out < 80 au) consistent with the models of Anderson et al. (2013). The authors also found evidence of ongoing photoevaporation in one disc exhibiting forbidden [Ne ii] line emission (see also Rigliaco et al., 2009). Later, Ansdell et al. (2017) presented an ALMA survey that uncovered an absence of discs with high dust masses at distances 0.5 pc from σ Ori, consistent with models of dynamical evolution and disc depletion (Winter et al., 2020a). In NGC 2024, an early survey of the discs with the SMA presented by Mann et al. (2015) revealed a more extended tail of high mass discs than those of other SFRs. The authors suggested this is indicative of a young population of discs that had not (yet) been depleted by external photoevaporation. However, van Terwisga et al. (2020) presented an ALMA survey of the region and found evidence of two distinct disc populations. The eastern population is very young (∼ 0.2−0.5 Myr old) and still embedded close to a dense molecular cloud that may shield them from irradiation. The western population is slightly older, exposed to UV photons from the nearby stars IRS 1 and IRS 2b. The western discs are lower mass than those in the east and those in similar age SFRs, which is probably due to external depletion. This conclusion is supported by the subsequent discovery of proplyd (candidates) in the region .
We have focused here on regions where evidence of external depletion appears to be present. It is challenging to convincingly demonstrate the converse by counter example, particularly considering the many potential issues discussed above (e.g. accurate stellar aging, projection effects and dynamical histories). As an example, in an ALMA survey of discs in λ Orionis, Ansdell et al. (2020) reported no correlation between disc dust mass and separation from the OB star λ Ori. As discussed by the authors, a number of explanations for this are possible. One explanation is that, given the older age (∼ 5 Myr) of the region, the discs may have all reached a similar state of depletion, with spatial correlations washed out by dynamical mixing (e.g. Parker et al., 2021). Meanwhile, in a survey of non-irradiated discs, van Terwisga et al. (2022) demonstrated that discs in SFRs of similar ages appear to have similar dust masses, suggesting that any observed depletion in higher mass SFRs is probably the result of an environmental effect.
Summary and omissions
For quick reference, in Table 1 we summarise the properties of some local SFRs in which some observational evidence of external photoevaporation has been uncovered, and note some references for further reading. However, we emphasise that this is not a complete or representative list; we have chosen SFRs that exhibit strong evidence of external photoevaporation and have well studied stellar populations. Some notable exclusions include Trumpler 37/IC 1396A (the Elephant's Trunk nebula) and IC 1805 (the Heart nebula). Trumpler 37 contains an O6.5V and O9V central binary (Tokovinin, 1997), and the young stellar objects exhibit marginal spatial gradients in the disc survival fraction. However, given the abundance of class I discs that indicate ongoing star formation, this may also originate from heterogeneous ages (Silverberg et al., 2021). Similarly, in the 2−3.5 Myr old cluster IC 1805, with mass M_tot ≈ 2700 M_⊙ (L_FUV ≈ 10^39.4 erg s^-1), any gradient in the surviving disc fraction is marginal (Sung et al., 2017). A complex dynamical evolution, involving collapse and ejection of members (Lim et al., 2020), may confuse signatures of external disc depletion.
[Fig. 13 caption, displaced in extraction: disc fractions are compiled from Richert et al. (2018) and Mamajek (2009). We further show some specific SFRs discussed in the text and listed in Table 1. Massive and dense 'starburst' clusters are shown as star symbols. Where a local gradient in the disc fraction has been reported in the literature, we show the outer regions as a square symbol. We show the fraction of discs following f_discs = exp(−t_age/τ_life) for τ_life = 1 and 3 Myr (dotted and dashed lines respectively). An important caveat for this figure is that disc fractions in distant SFRs are based on samples that may include fewer low mass stars, which typically exhibit longer disc lifetimes.]
Missing aspects of disc evolution and planet formation in irradiated environments
Notably, we have not discussed chemistry in this section. The chemistry of discs (e.g. Bruderer et al., 2012; Kamp et al., 2017) and the planets they produce (e.g. Cridland et al., 2016, 2019; Bitsch and Battistini, 2020) is a complex function of the stellar metallicity and local disc temperature. For irradiated discs, temperatures increase in the outer regions and disc surface layers with respect to isolated discs, altering the chemistry in these regions, although not necessarily in the disc mid-plane (e.g. Nguyen et al., 2002; Walsh et al., 2013). However, how this alters planet formation and chemistry may depend on disc evolution, and this remains a matter for future investigation. This issue may also soon be addressed empirically via JWST observations, with the GTO programme of Ramirez-Tannus et al. (2021) aiming to probe the chemistry of discs of similar age but varying FUV flux histories in NGC 6357.
Apart from chemistry, this section has highlighted the many gaps in the understanding of the role of external photoevaporation in shaping the evolution of the disc and the formation of planets. From both theory and observations, we understand that gas disc lifetimes are shortened by external photoevaporation. This shortened lifetime of the gas disc alone should be enough to influence the formation and disc induced migration of giant planets. Meanwhile, how the aggregation of dust produces planets in irradiated discs has only just started to be addressed (e.g. Ndugu et al., 2018;Sellek et al., 2020b). When coupled with the apparent prevalence of external photoevaporation in shaping the overall disc population, the role of star formation environment must be considered as a matter of urgency for planet population synthesis efforts (e.g. Alibert et al., 2005;Ida and Lin, 2005;Mordasini et al., 2009;Emsenhuber et al., 2021). Meanwhile, the connection between stellar rotation periods and premature disc dispersal via external photoevaporation may in future offer a window into the birth environments of mature exoplanetary systems up to ∼ 1 Gyr after their formation (Roquette et al., 2021).
Summary and open questions for irradiated protoplanetary disc evolution
While many aspects of disc evolution in irradiated environments remain uncertain, we can be relatively confident in the following conclusions:
1. External photoevaporation depletes (dust) masses and truncates the outer disc radii in regions of high FUV flux, at least for F_FUV ≳ 10^3 G_0.
2. Inner disc lifetimes appear to be shortened for discs in regions where the total FUV luminosity L_FUV ≳ 10^39 erg s^-1, while they do not appear to be strongly affected in regions with lower L_FUV.
3. Dust evolution, low mass planet formation and giant planet formation all have the potential to be strongly influenced by external photoevaporation through changes in the temperature, mass budget, and time available for planet formation.
Some of the many open questions in this area include:
1. Do the externally driven mass loss rates in photoevaporating discs balance with accretion rates, as expected from viscous angular momentum transport?
2. If angular momentum is extracted via MHD winds rather than viscously transported, how does this influence the efficiency of external photoevaporation?
3. Is the lack of correlation between shortened disc lifetime and FUV flux in intermediate mass environments related to a dead zone or similar suppression of angular momentum transport?
4. Is the observed dust depletion in discs near to O stars due to entrainment in the wind or to rapid inward drift?
5. In relation to this, how do external winds influence dust trapping and the onset of the streaming instability?
6. Conversely, how does the trapping of dust influence the dust depletion and dust-to-gas ratio in discs?
7. How does disc chemistry vary between isolated and externally FUV irradiated discs?
8. How does the local distribution of FUV environments influence the planet population as a whole?
Demographics of star forming regions
In this section, we consider the properties of star forming regions and the discs they host from an observational and theoretical perspective. We are interested in understanding: 'what is the role of external photoevaporation of a typical planet-forming disc in the host star's birth environment?' The degree to which protoplanetary discs are influenced by external irradiation depends on exposure to EUV and FUV photons. Hence the overall prevalence of external photoevaporation for discs (and probably the planets that form within them), depends critically on the physics of star formation and the demographics of star forming regions.
OB star formation
OB stars have spectral types earlier than B3, and are high mass stars with luminosities > 10^3 L_⊙ and masses > 5 M_⊙. These stars are responsible for shaping a wide range of physical phenomena, from molecular cloud-scale feedback on Myr timescales (Krumholz et al., 2014; Dale, 2015) to galactic nucleosynthesis over cosmic time (Nomoto et al., 2013). The radiation feedback of these stars on their surroundings already plays a significant role during their formation stage, where stars greater than a few ×10 M_⊙ in mass must overcome the UV radiation pressure problem (Wolfire and Cassinelli, 1987). Forming OB stars may do this through monolithic turbulent collapse (e.g. McKee and Tan, 2003), early core mergers, or competitive accretion (e.g. Bonnell et al., 2001) - see Krumholz (2015) for a review.
Two questions regarding OB stars are important with respect to local circumstellar disc evolution: 'when do they form?' and 'how many of them are there?'. The former determines when the local discs are first exposed to strong external irradiation. The latter determines the strength and, importantly, the spatial uniformity of the UV field (we discuss this point further in Section 5.4).
The question of the timescale for the formation of massive stars is empirically tied to the frequency of ultra-compact HII (UCHII) regions. These regions are the small (diameters ≲ 0.1 pc) and dense (HII densities ≳ 10^4 cm^-3) precursors to massive stars. During the main sequence lifetime of the central massive star, the associated HII region will evolve from the embedded ultra-compact state to a diffuse nebula. In the context of the surrounding circumstellar discs, the lifetime of a UCHII region represents the length of time for which the surroundings of massive stars are efficiently shielded from UV photons. These lifetimes can be inferred by comparing the number of main-sequence stars to the number of observed UCHII regions, yielding timescales of a few ×10^5 yr (Wood and Churchwell, 1989; Mottram et al., 2011). The star-less phase prior to excitation of the UCHII region is short (∼ 10^4 yr - Motte et al., 2007; Tigé et al., 2017), hence the UCHII lifetime represents the effective formation timescale for massive stars in young clusters and associations. This formation timescale is much shorter than the typical lifetime of protoplanetary discs evolving in isolation (∼ 3 Myr - e.g. Haisch et al., 2001; Ribas et al., 2015). Therefore, in regions with many OB stars we expect the local discs to be irradiated throughout this lifetime. However, this need not be the case when the number of OB stars is ∼ 1, in which case the time at which an OB star forms may be statistically determined by the spread of stellar ages in the SFR. We discuss this point further in Section 5.4.
[Fig. 14 caption, displaced in extraction: The total luminosity is calculated using the effective temperature and total luminosity from the model grids of Schaller et al. (1992), with metallicity Z = 0.02 and the output time closest to 1 Myr. We apply the atmosphere models of Castelli and Kurucz (2003) to give the wavelength-dependent luminosity. Right panel: the relative contribution of stars of a given mass to the total luminosity averaged across the IMF. Here we use a log-normal Chabrier (2003) sub-solar IMF, and a power law with Γ = 1.35 for m_* > 1 M_⊙.]
The relative contribution of stars of a given mass to the local UV radiation field depends on the relative numbers of stars with this mass, i.e. the initial mass function (IMF), dn/d log m_*. We can write the mean total luminosity L_SFR of a star forming region (SFR) with N members as

L_SFR = N [∫ L(m_*) (dn/d log m_*) d log m_*] / [∫ (dn/d log m_*) d log m_*],     (20)

where L is the luminosity of a single star. Hence, to understand the contribution of massive stars to the UV luminosity, we are interested in the shape of the IMF. The IMF in local SFRs exhibits a peak at stellar masses m_* ∼ 0.1−1 M_⊙ and a steep power law dn/d log m_* ∝ m_*^-Γ with Γ ≈ 1.35 at higher masses (see Bastian et al., 2010, for a review). This power law appears reasonable at least up to m_* ∼ 100 M_⊙ (Massey and Hunter, 1998; Espinoza et al., 2009).
We combine the mass-dependent luminosity for young stars, as in the left panel of Figure 14, with the Chabrier (2003) IMF to produce the right panel of Figure 14. Here we multiply the stellar mass dependent luminosity by the IMF, as in the integrand on the RHS of equation 20, to yield the average contribution of stars of a certain mass to the FUV and EUV luminosity of a SFR. Note that this is only accurate for a very massive SFR, where the IMF is well sampled. However, the figure demonstrates that for low mass SFRs - which here means those where the IMF is not well sampled for stellar masses m_* ≳ 30 M_⊙ - both EUV and FUV luminosities are dominated by individual massive stars, rather than by many low mass stars. In the following, we consider this in the context of the properties of local star forming regions.
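The calculation behind the right panel of Figure 14 can be sketched numerically, as below. The snippet weights a stellar luminosity by a Chabrier-like IMF (the integrand of equation 20). Note that the mass-luminosity relation used here (L ∝ m^3.5, capped at 10^6 L_⊙) is a crude stand-in for the Schaller et al. (1992) and Castelli and Kurucz (2003) models used in the review, and the IMF parameters (m_c = 0.08 M_⊙, σ = 0.69) are assumed values; only the qualitative conclusion, that the high-mass end dominates, should be taken from it.

```python
import numpy as np

M_C, SIGMA, GAMMA = 0.08, 0.69, 1.35   # assumed Chabrier-like IMF parameters

def imf_dn_dlogm(m):
    """dn/dlog(m): log-normal below 1 Msun, power law with slope -Gamma above,
    with the power-law amplitude matched to the log-normal at 1 Msun."""
    m = np.asarray(m, dtype=float)
    lognorm = np.exp(-(np.log10(m) - np.log10(M_C))**2 / (2.0 * SIGMA**2))
    amp_at_1 = np.exp(-(0.0 - np.log10(M_C))**2 / (2.0 * SIGMA**2))
    return np.where(m <= 1.0, lognorm, amp_at_1 * m**(-GAMMA))

def lum_lsun(m):
    """Crude stand-in for the model grids: L/Lsun ~ m^3.5, capped at 1e6 Lsun."""
    return np.minimum(np.asarray(m, dtype=float)**3.5, 1.0e6)

logm = np.linspace(np.log10(0.05), np.log10(100.0), 500)
m = 10.0**logm
contrib = lum_lsun(m) * imf_dn_dlogm(m)      # integrand of equation 20 per dlog(m)

high = m > 30.0
frac = contrib[high].sum() / contrib.sum()   # uniform log-grid, so sums suffice
print(f"stars above 30 Msun contribute ~{100.0 * frac:.0f}% of the IMF-averaged luminosity")
```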
Demographics of star forming regions
We now consider the FUV flux experienced by protoplanetary discs in typical SFRs. We will focus on the FUV since, as discussed in Section 4.1, the FUV is expected to dominate disc evolution over the lifetime of the disc. The word 'typical' in the context of SFRs is in fact dependent on the local properties within a galaxy, and we here refer exclusively to the solar neighbourhood (at distances ≲ 2 kpc from the sun). A number of studies have approached this problem in differing ways (Fatuzzo and Adams, 2008; Thompson, 2013; Winter et al., 2020b; Lee and Hopkins, 2020); however, all such efforts require estimating the statistical distribution dn_SFR/dN in the number of members N of SFRs, which dictates how many OB stars there are and therefore the local UV luminosity. For example, Fatuzzo and Adams (2008) used two different approaches to this problem. One was to take the distribution directly from the list of nearby SFRs compiled by Lada and Lada (2003). While this is the most direct, it suffers from small number statistics for massive SFRs. The other approach of Fatuzzo and Adams (2008) is to assume a log-uniform distribution in the number of stars in a SFR with N members, truncated outside a minimum N_min = 40 and maximum N_max = 10^5. Alternatively, one can produce a similar distribution using a smooth Schechter (1976) function (see e.g. Gieles et al., 2006a):

dn_SFR/dN ∝ N^-β exp(−N/N_max),     (21)

where β ≈ 2 is expected from the hierarchical collapse of molecular clouds (Elmegreen and Falgarone, 1996) and is consistent with empirical constraints (e.g. Gieles et al., 2006a; Fall and Chandar, 2012; Chandar et al., 2015), as well as with the log-uniform distribution adopted by Fatuzzo and Adams (2008; n.b. that to obtain the fraction of stars, one must multiply the mass function described by equation 21 by a factor N).
In order to interpret equation 21, we further need to estimate N min and N max . The upper limit N max can be inferred empirically (Gieles et al., 2006a,b), or by appealing to the theoretical limits. As elucidated by Toomre (1964), the origin of the maximum mass of a SFR can be understood in terms of the maximum size of a region that can overcome the galactic shear (Toomre length) and local kinetic pressure (Jeans length). This length scale, the Toomre length, may then be coupled to a local surface density of a flattened disc to obtain a maximum mass for a SFR. However, in the outer regions of the Milky Way, the stellar feedback in massive SFRs can interrupt star formation and further reduce N max . Adopting the simple model presented by Reina-Campos and Kruijssen (2017) for this limit with a typical stellar mass m * ∼ 0.5 M , this yields a maximum N max ∼ 7 × 10 4 for all SFRs (not necessarily gravitationally bound). This is broadly consistent with the most massive local young stellar clusters and associations .
Meanwhile, for the minimum number of members in a SFR, Lamers et al. (2005) used the sample of Kharchenko et al. (2005) to infer a dearth of SFRs with N < N min ∼ 280. Such a lower limit might be understood as the point at which the lower mass end of the hierarchical molecular cloud complex merges due to high star formation efficiency and long formation timescales (Trujillo-Gomez et al., 2019). However, obtaining constraints on N min in general is made difficult by the lack of completeness in surveys of low mass stellar aggregates, and it remains empirically unclear how N min varies with galactic environment. Note that generally SFR demographics may depend on galactic environment, and therefore also vary over cosmic time (see Adamo et al., 2020, for a review).
With the above considerations, we can now generate the distribution of UV luminosities experienced by stars in their stellar birth environment. We adopt N min = 280 and N max = 7×10 4 and couple equation 21 with the stellar mass dependent luminosity discussed in Section 5.1. The resultant FUV luminosity distribution when drawing 6 × 10 4 SFRs, each with N members drawn from equation 21, is shown in Figure 15. For context, we estimate the FUV luminosity in Taurus using the census of members by Luhman (2018), assuming the local luminosity is dominated by four B9 and three A0 stars in that sample of 438 members. We also add the ONC, whose total UV luminosity is dominated by the O7-5.5 type star θ 1 C, with a mass of ∼ 35 M (e.g. Kraus et al., 2009;Balega et al., 2014), and Westerlund 1 which has a total mass of ∼ 6 × 10 4 M (Mengel and Tacconi-Garman, 2007;Lim et al., 2013) and thus a well sampled IMF. Regions like Taurus with only a few hundred members are common, and therefore we expect to find such regions nearby. However, each one hosts ∼ 1/1000 as many stars as a starburst cluster like Westerlund 1. Taurus thus represents an uncommon stellar birth environment in terms of the FUV luminosity. The most common type of birth environment for stars lies somewhere between the ONC and Westerlund 1, with FUV luminosity ∼ 10 40 erg s −1 (at an age of 1 Myr).
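A minimal Monte Carlo sketch of this exercise is given below. It draws SFR memberships N from a β = 2 power law truncated at N_min = 280 and N_max = 7 × 10^4 (a simplified stand-in for equation 21, without the smooth exponential cutoff), stochastically populates each region with OB stars, and assigns each a rough FUV luminosity. The OB fraction per star, the IMF tail slope, and the mass-luminosity scaling are all assumed, illustrative values, so the output should only be read as an order-of-magnitude reproduction of the star-weighted distribution in Figure 15.

```python
import numpy as np

rng = np.random.default_rng(42)
N_MIN, N_MAX = 280, 7.0e4                 # membership limits adopted in the text
F_OB, M_LO, M_HI, ALPHA = 3e-3, 5.0, 100.0, 2.35   # assumed OB fraction and IMF tail
L_SUN = 3.83e33                           # erg/s

def sample_N(size):
    """Draw N from dn/dN ~ N^-2 truncated to [N_MIN, N_MAX] (inverse CDF)."""
    u = rng.random(size)
    return 1.0 / (1.0 / N_MIN - u * (1.0 / N_MIN - 1.0 / N_MAX))

def sample_masses(n):
    """Draw OB masses from dn/dm ~ m^-ALPHA on [M_LO, M_HI]."""
    u, a = rng.random(n), 1.0 - ALPHA
    return (M_LO**a + u * (M_HI**a - M_LO**a))**(1.0 / a)

def l_fuv(m):
    """Very rough FUV luminosity: ~half of a capped m^3.5 bolometric luminosity."""
    return 0.5 * np.minimum(m**3.5, 1.0e6) * L_SUN

N = sample_N(60000)
L_sfr = np.zeros_like(N)
for i, n in enumerate(N):
    n_ob = rng.poisson(F_OB * n)          # stochastically sampled number of OB stars
    if n_ob:
        L_sfr[i] = l_fuv(sample_masses(n_ob)).sum()

# Median FUV luminosity of the birth region of a *star* (regions weighted by N)
order = np.argsort(L_sfr)
cum = np.cumsum(N[order]) / N.sum()
print(f"star-weighted median L_FUV ~ {L_sfr[order][np.searchsorted(cum, 0.5)]:.1e} erg/s")
```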
In order to understand the typical flux experienced by a circumstellar disc, we further need to consider the typical distance between it and nearby OB stars - i.e. the geometry of the SFR. To this end, Fatuzzo and Adams (2008, see also Adams et al. 2006) and Lee and Hopkins (2020) use the Larson (1981) relation that giant molecular clouds and young, embedded SFRs have a size-independent surface density (see also Solomon et al., 1987); hence the radius R of the SFR scales as R ∝ √N (Adams et al., 2006; Allen et al., 2007). However, this relationship does not hold for very young massive clusters (see Portegies Zwart et al., for a review), with several exhibiting comparatively high densities (e.g. Westerlund 1 - Mengel and Tacconi-Garman, 2007), while older clusters follow a shallower mass-radius relationship R ∝ N^α with α ∼ 0.2−0.3 (Krumholz et al., 2019). Instead of the Larson relation, Winter et al. (2020b) attempt to relate SFR demographics to galactic-scale physics by using the lognormal density distribution of turbulent, high Mach number flows combined with a theoretical star formation efficiency. This yields the stellar density distribution in a given galactic environment. In massive local SFRs, the local FUV flux is F_FUV ≈ 1400 (ρ_*/10^3 M_⊙ pc^-3)^(1/2) G_0 for stellar density ρ_*, due to their radial density profiles (Winter et al., 2018). Using these two ingredients, it is possible to estimate the distribution of FUV fluxes experienced by stars in the solar neighbourhood from a mass function for SFRs. Despite the differences in the two approaches, both Fatuzzo and Adams (2008) and Winter et al. (2020b) find that, neglecting extinction, young stars in typical stellar birth environments experience external FUV fluxes in the range F_FUV ∼ 10^3−10^4 G_0.
[Fig. 15 caption, displaced in extraction: The colour bar shows an estimate of the relative density of stars (i.e. the density of SFRs multiplied by the number N of members) in logarithmic space, using a Gaussian kernel density estimate with a bandwidth of 0.1 dex. We estimate the numbers of stars and total luminosity of three local SFRs: Taurus (green), the ONC (purple) and Westerlund 1 (orange). Right panel: the corresponding distribution in the number of stars (red) and SFRs (black) per logarithmic bin in total FUV luminosity space.]
While these efforts produce some intuition as to the typical FUV radiation fields, this is not the end of the story for understanding how discs in SFRs are exposed to UV photons. In Sections 5.3 and 5.4 we will discuss the role of interstellar extinction and the dynamical evolution of stars in SFRs.
Extinction due to the inter-stellar medium
After stars begin to form, residual gas and, more importantly, dust in the inter-stellar medium (ISM) attenuates FUV photons, such that circumstellar discs may be shielded from external photoevaporation. The column density required for one magnitude of extinction at visual wavelengths is N_H/A_V = 1.8 × 10^21 cm^-2 mag^-1 (Predehl and Schmitt, 1995), and this can be multiplied by a factor A_FUV/A_V ≈ 2.7 (Cardelli et al., 1989) to yield the corresponding FUV extinction. The main problem comes in estimating the column density between OB stars and the cluster members. Both Fatuzzo and Adams (2008) and Winter et al. (2020b) approach this problem by adopting smooth, spherically symmetric density profiles with some assumed star formation efficiency to estimate this column density. However, even during the embedded phase this approach overestimates the role of extinction, because a more realistic, clumpy geometry of the gas makes the attenuation inefficient (e.g. Bethell et al., 2007). In addition, stellar feedback from massive stars acts to drive gas outflows from the SFR even before the first local supernovae (e.g. Bania and Lyon, 1980; Matzner, 2002; Jeffreson et al., 2021), reducing the quantity of attenuating matter.
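These two numbers combine into a simple transmission estimate, sketched below; the column densities in the example loop are arbitrary illustrative values, and real attenuation additionally depends on the clumpiness of the gas discussed above.

```python
NH_PER_AV = 1.8e21    # cm^-2 per magnitude of visual extinction (Predehl & Schmitt 1995)
AFUV_PER_AV = 2.7     # FUV-to-visual extinction ratio (Cardelli et al. 1989)

def fuv_transmission(N_H):
    """Fraction of FUV flux transmitted through a hydrogen column N_H [cm^-2]."""
    A_V = N_H / NH_PER_AV          # visual extinction [mag]
    A_FUV = AFUV_PER_AV * A_V      # FUV extinction [mag]
    return 10.0**(-0.4 * A_FUV)

for N_H in (1e20, 1e21, 1e22):
    print(f"N_H = {N_H:.0e} cm^-2 -> transmitted fraction = {fuv_transmission(N_H):.2e}")
```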
Due to the above concerns, understanding the influence of extinction on shielding the young disc population requires direct simulation of feedback in the molecular cloud. To approach this problem, Ali and Harries (2019) simulated feedback from a single 34 M_⊙ star in a molecular cloud of mass 10^4 M_⊙, similar to the conditions in the ONC. The authors find that many discs are efficiently shielded for the first ∼ 0.5 Myr of evolution, while the discs may experience short drops in UV exposure at later stages. This result is somewhat dependent on cloud metallicity, since lower metallicity increases the efficiency of feedback by lengthening the cooling time (Ali, 2021).
If more than one massive star forms in a SFR, this further increases the feedback efficiency and geometrically reduces the efficiency of extinction. Qiao et al. (2022) investigated the role of feedback in the simulation of a molecular cloud of mass 10^5 M_⊙ by Ali (2021). The resultant FUV flux experienced by the stellar population is shown in Figure 16 [caption fragment, displaced in extraction: FUV flux experienced by the stellar population in the simulations of Qiao et al. (2022); comparatively few stars are born in the early stages of the cluster formation, where attenuation of FUV photons is efficient]. For such a massive region with several O stars, lower mass disc-hosting stars typically experience extremely strong irradiation by the time they reach an age of ∼ 0.5 Myr. Although extinction may be efficient for the stars that form early, the majority that form later are quickly exposed to F_FUV ∼ 10^5 G_0. Hence the embedded phase of evolution in massive SFRs can only shield discs early on. While the FUV flux experienced by discs in real SFRs also depends on the density of the regions, inefficient shielding may partially explain why disc lifetimes appear to be shortened in SFRs with several O stars (see Table 1). Any giant planet formation in such a region must therefore initiate early and may be strongly influenced by the environment (see Section 4.2.2).
The role of stellar dynamics
In some instances, the FUV exposure history of stars may strongly depend on the dynamical evolution of the SFR. This is particularly true when only one dominant OB star is present. For example, the difference in F_FUV for a disc at a separation d = 0.05 pc from a single massive star (as for some of the brightest proplyds in the ONC) and one at d = 2 pc (a typical distance for discs in the ONC) is a factor of 1600. Hence the dynamical evolution of the star and the SFR can play a major role in the historic UV exposure of any observed star-disc system.
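For orientation, the geometric dilution behind these numbers is simple to sketch: the snippet below converts an FUV luminosity and a separation into a flux in Habing units, taking G_0 = 1.6 × 10^-3 erg s^-1 cm^-2 as the normalisation. The 10^39 erg s^-1 source luminosity is an assumed, ONC-like value, and real exposures additionally depend on extinction and on the true three-dimensional separation.

```python
import numpy as np

G0 = 1.6e-3        # Habing field normalisation [erg s^-1 cm^-2] (assumed)
PC = 3.086e18      # parsec in cm

def fuv_flux_g0(L_fuv, d_pc):
    """Unattenuated FUV flux in Habing units at distance d_pc [pc] from a source
    of FUV luminosity L_fuv [erg/s], assuming pure 1/d^2 geometric dilution."""
    return L_fuv / (4.0 * np.pi * (d_pc * PC)**2) / G0

L_source = 1.0e39  # assumed ONC-like total FUV luminosity [erg/s]
for d in (0.05, 2.0):
    print(f"d = {d:>4} pc -> F_FUV ~ {fuv_flux_g0(L_source, d):.1e} G0")
print("flux ratio between 0.05 pc and 2 pc:", (2.0 / 0.05)**2)
```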
Many studies have performed N-body simulations of SFRs to track the exposure of star-disc systems, either with the aim of quantifying general trends (Holden et al., 2011; Nicholson et al., 2019; Concha-Ramírez et al., 2019; Parker et al., 2021) or of modelling specific regions (Scally and Clarke, 2001; Winter et al., 2019b, 2020a). These studies generally adopt the FUV driven mass loss rates given by the FRIED grid (Haworth et al., 2018a), with an external flux computed directly from the N-body simulations. In general, these studies find that SFRs with typical stellar densities ρ_* ≳ 100 M_⊙ pc^-3 are sufficient to rapidly deplete protoplanetary discs (e.g. Nicholson et al., 2019; Concha-Ramírez et al., 2019). Similarly, discs are more rapidly depleted in sub-virial initial stellar populations that undergo cold collapse, as a result of the higher densities and therefore higher UV flux exposure (Nicholson et al., 2019). However, initial substructure has a minimal effect on external photoevaporation, since UV fields are generally dominated by the most massive local stars and not necessarily by nearest neighbours (Nicholson et al., 2019; Parker et al., 2021). Thus it is volume-averaged rather than star-averaged density measures in a SFR that act as a proxy for the typical external UV flux. This is not necessarily true for the bolometric flux, which may still be dominated by the closest neighbours in a highly structured SFR (Lee and Hopkins, 2020).
For an example of how the dynamics in SFRs may alter disc properties, we need only look at the closest intermediate mass SFR to the sun: the ONC. This region has historically dominated the study of externally photoevaporating protoplanetary discs since the discovery of proplyds (e.g. O'dell and Wen, 1994). Surprisingly, up to ∼ 80 percent of stars exhibit a NIR excess, indicating inner-disc retention. This finding is apparently inconsistent with the ∼ 1−3 Myr age (Hillenbrand, 1997; Palla and Stahler, 1999) when accounting for reasonable stellar dynamics, mass loss rates and initial disc masses (e.g. Churchwell et al., 1987; Scally and Clarke, 2001). The ONC contains one O star, θ^1 C, that dominates the local UV luminosity, tying the FUV history of the local circumstellar discs precariously to the history of this single star. Indeed, for this reason intermediate mass star forming regions may be subject to multiple bursts of star formation due to the periodic ejection of individual massive stars (Kroupa et al., 2018). This possibility seems to be supported by the clumped age distribution of stars in the ONC, hinting at multiple populations or phases of star formation (Jerabkova et al., 2019). This possibility bears the usual caveat that luminosity spreads do not necessarily imply age spreads, as discussed in Section 4.3.2. However, Winter et al. (2019b) showed how, when considering the gravitational potential of residual gas, such episodic star formation can yield inward migration of young stars and outward migration of older stars. This yields a stellar age gradient as observed in the ONC (Hillenbrand, 1997). As a result, it is the much younger discs that experience the strongest UV fluxes, such that inner disc survival over their lifetime is feasible. Similar core-halo age gradients have been reported in other star forming regions (e.g. Getman et al., 2014a), highlighting the importance of interpreting disc properties through the lens of the dynamical processes in SFRs.
Unlike in regions with one O star, dynamical evolution may not be such an important consideration for discs in massive SFRs. For example, tracking the UV fluxes experienced by stars in the neighbourhood of numerous OB stars, Qiao et al. (2022) find that the vast majority of stars experience F_FUV ∼ 10^5 G_0 at an age of 1 Myr. The relatively uniform flux in regions with multiple OB stars may go some way towards explaining the absence of a correlation between disc mass and location found in some simulations (Parker et al., 2021). By contrast, Winter et al. (2020a) reproduce a disc mass-separation correlation in the relatively low mass σ Orionis region, which is dominated by a single massive system (see Section 4.3). In either case, it is clear that the physics of star formation cannot be ignored when interpreting the properties of protoplanetary discs, and probably the resultant planetary systems.
Summary and open questions for the demographics of star forming regions
Based on the previous discussion, the following conclusions represent the current understanding of the demographics of SFRs in the context of external disc irradiation:
1. When accounting for a standard IMF, the total FUV and EUV luminosity of a SFR is dominated by stars of mass ≳ 30 M_⊙.
2. Although most SFRs are low mass and have few OB stars, the majority of stars form in regions with a total FUV luminosity greater than that of the ONC (≳ 10^39 erg s^-1).
3. In the absence of interstellar extinction, typical FUV fluxes experienced by star-disc systems in the solar neighbourhood are F_FUV ∼ 10^3−10^4 G_0. This is enough to shorten their lifetime with respect to the typical ∼ 3 Myr for isolated discs.
4. Extinction due to residual gas in SFRs can shield some young circumstellar discs, with ages ≲ 0.5 Myr. However, at later times extinction is inefficient at shielding discs, and unattenuated FUV photons may reach protoplanetary discs.
Some of the open questions that remain in quantifying the typical external environments for planet formation are:
1. How important are the dynamics of SFRs in determining FUV exposure for populations of discs, and how much does this vary between SFRs?
2. How do FUV exposure and external photoevaporation of typical protoplanetary discs vary with cosmic time?
3. Do particular types of planetary system preferentially form in certain galactic environments?
Summary and future prospects
In this review, we have discussed many aspects of the process of external photoevaporation, covering basic physics, observational signatures, consequences for planet formation and prevalence across typical star forming regions.
Fig. 17: A cartoon of how protoplanetary disc evolution and planet formation proceed in weakly-irradiated, low mass SFRs (top) and strongly-irradiated, high mass SFRs (bottom). We consider two identical, initially large discs with outer radii of ∼ 100 au. In the high mass SFR, any neighbouring massive stars may initially be extincted by a UCHII region for ∼ 10^5 yr. However, before the star-planet system is 0.5 Myr old, extinction in the SFR will typically become inefficient and the disc is irradiated by a strong UV field. This produces a bright, extended ionisation front (IF) and rapidly truncates the disc down to a few ×10 au. This may also interrupt the early giant planet formation that proceeds in the outer regions of an isolated disc in a low FUV environment. The irradiated disc is now small, which results in slow mass loss rates and a smaller IF. Given its compact size, the time-scale for viscous draining of the irradiated disc becomes short compared to its isolated counterpart. This can lead to premature clearing of disc material, earlier than the typical ∼ 3−10 Myr for which isolated discs survive.
While numerous open questions remain, the current understanding allows us to make an educated guess at how protoplanetary discs evolve in different star forming regions. In Figure 17 we show a cartoon of how disc evolution proceeds in low and high FUV flux environments. The early period of efficient extinction in SFRs typically lasts less than 0.5 Myr of a disc's lifetime. After this, studies of individual externally irradiated discs in proplyd form have demonstrated mass loss rates up to ∼ 10^-7−10^-6 M_⊙ yr^-1 (Weidenschilling, 1977; O'dell and Wen, 1994; Henney and O'Dell, 1999). This bright and extended proplyd state is short-lived, but the disc is rapidly eroded from the outside in during this period. The influence of the external wind on the outer disc can result in dust loss via entrainment in the wind (Miotello et al., 2012) or via rapid inward migration of large grains (Sellek et al., 2020b). It can also interrupt any giant planet formation that occurs on this time-scale in the outer disc (e.g. Segura-Cox et al., 2020; Tychoniec et al., 2020). Given the shorter viscous time-scale of the compact disc, this can lead to a rapid clearing of the disc material, similar to that expected after gap opening in internally photoevaporating discs. This is corroborated by the observed shortening of inner disc lifetimes in several massive local SFRs (e.g. Preibisch et al., 2011a; Stolte et al., 2015; Guarcello et al., 2016), although only in those that host several O stars and a total FUV luminosity L_FUV ≳ 10^39 erg s^-1. In these extreme environments, where high FUV fluxes are sustained over the disc lifetimes, external photoevaporation presumably shuts off accretion onto growing planets and curtails inward migration, possibly leaving behind relatively low mass outer planets. Within this framework several questions arise, requiring both theoretical and empirical future study. As an example, models for the chemical evolution of planetary discs and planets in irradiated environments, and their expected observational signatures, are urgently needed. Upcoming JWST investigations of high UV environments may shed some light on disc chemistry from an observational perspective (e.g. Ramirez-Tannus et al., 2021). Inferring mass loss rates for moderately photoevaporating discs that do not exhibit bright ionisation fronts is also a crucial test of photodissociation region models. Meanwhile, perhaps the biggest problem for planet population synthesis efforts is determining how the solid content evolves differently in photoevaporating discs; when and where do solid cores form in discs in high mass versus low mass SFRs? These are just some of the numerous questions that remain open towards the goal of understanding external photoevaporation.
In conclusion, the process of external photoevaporation appears to be an important aspect of planet formation, although the full extent of its influence remains uncertain. Future efforts in both theory and observations are required to determine how it alters the evolution of a protoplanetary disc, and the consequences for the observed (exo)planet population. | 2022-06-27T01:16:11.297Z | 2022-06-23T00:00:00.000 | {
"year": 2022,
"sha1": "61bcd7ca47b0394ce29e872be9bc356d32500e2b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "13c024903c2d7f3aa8b6f69edff3e2202c6e3f15",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
264401935 | pes2o/s2orc | v3-fos-license | A Novel ASCT2 Inhibitor, C118P, Blocks Glutamine Transport and Exhibits Antitumour Efficacy in Breast Cancer
Simple Summary ASCT2 is an attractive tumour metabolism target based on its critical role in cancer cell growth. The potential mechanisms of microtubule protein inhibitor C118P in breast cancer remain unknown. Identification of the potential target of C118P is essential. We evaluated the inhibitory effects of C118P on breast cancer. C118P restrained the tumour growth of MDA-MB-231 cells by inducing apoptosis, G2/M phase arrest, and autophagy. Furthermore, ASCT2 was confirmed to be a target of C118P. This is the first report to provide evidence that ASCT2 might be a candidate target of C118P in breast cancer treatment. Remarkably, this study will provide an opportunity for ASCT2 inhibition as a therapeutic strategy. Abstract Background: The microtubule protein inhibitor C118P shows excellent anti-breast cancer effects. However, the potential targets and mechanisms of C118P in breast cancer remain unknown. Methods: Real-time cellular analysis (RTCA) was used to detect cell viability. Apoptosis and the cell cycle were detected by flow cytometry. Computer docking simulations, surface plasmon resonance (SPR) technology, and microscale thermophoresis (MST) were conducted to study the interaction between C118P and alanine-serine-cysteine transporter 2 (ASCT2). Seahorse XF technology was used to measure the basal oxygen consumption rate (OCR). The effect of C118P in the adipose microenvironment was explored using a co-culture model of adipocytes and breast cancer cells and mouse cytokine chip. Results: C118P inhibited proliferation, potentiated apoptosis, and induced G2/M cell cycle arrest in breast cancer cells. Notably, ASCT2 was validated as a C118P target through reverse docking, SPR, and MST. C118P suppressed glutamine metabolism and mediated autophagy via ASCT2. Similar results were obtained in the adipocyte–breast cancer microenvironment. Adipose-derived interleukin-6 (IL-6) promoted the proliferation of breast cancer cells by enhancing glutamine metabolism via ASCT2. C118P inhibited the upregulation of ASCT2 by inhibiting the effect of IL-6 in co-cultures. Conclusion: C118P exerts an antitumour effect against breast cancer via the glutamine transporter ASCT2.
Introduction
Metabolic reprogramming, which fuels tumour cell growth, is considered an emerging hallmark of cancer [1].Cancer cell metabolic remodelling is characterised by the aberrant metabolism of glucose, amino acids, and lipids [2].In addition to their dependency on aerobic glycolysis, cancer cells exhibit other metabolic adaptations, such as increased fatty acid synthesis and "glutamine addiction" [3].Glutamine metabolic reprogramming is mainly associated with several glutamine metabolic enzymes, such as glutaminase (GLS), glutamine synthetase (GLUL), and the glutamine transporter family (solute carrier (SLC) family).Among SLC family members, alanine-serine-cysteine transporter 2 (ASCT2, encoded by the SLC1A5) is recognised as the principal glutamine transporter and is critical for glutamine uptake in tumour cells.Compared with its expression in healthy tissues, ASCT2 is overexpressed in many tumours, such as non-small cell lung cancer (NSCLC) [4], breast cancer [5], and hepatocellular carcinoma [6].Previous studies have mainly focused on the pro-proliferative effect of ASCT2 on tumours, such as breast cancer [7], prostate cancer [8], melanoma [9], NSCLC, colon cancer, and endometrial cancer [10].A recent study revealed the critical role of ASCT2-mediated amino acid metabolism in promoting leukaemia development and progression [11].Thus, these studies indicate the importance of ASCT2 in tumour progression.
Since glutamine plays a critical role in cancer cell growth, new therapies targeting glutamine metabolism have attracted attention. One agent targeting GLS, CB-839, is currently being evaluated in phase II clinical trials. However, a limitation of targeting GLS is that these treatments may induce RAS-independent activation of MAPK signalling [12]. In addition to agents targeting GLS, anti-ASCT2 agents have been developed as potential antitumour drugs and have shown promise. In addition to the drug GPNA [13], H. Charles Manning and his team discovered a small-molecule inhibitor of ASCT2 (V-9302) [14]. However, V-9302 may not be a specific inhibitor of ASCT2, as it also targets SNAT2 (sodium-coupled neutral amino acid transporter 2, SLC38A2) and LAT1 (large neutral amino acid transporter 1, SLC7A5) [15]. In addition to small-molecule inhibitors, ASCT2 monoclonal antibodies are currently under investigation, but they do not appear to show selectivity between patients with low and high ASCT2 expression, which will limit their successful application [16].
Cancer-associated adipocytes (CAAs) in the tumour microenvironment promote metabolic remodelling in breast cancer. However, our understanding of the interplay between breast cancer cells and adipocytes in glutamine metabolism is incomplete. We therefore focused on the effect of the adipocyte-breast cancer microenvironment on glutamine metabolism.
Therefore, we report the identification of a potential target of C118P, a new class 1 drug for which a phase I clinical trial has been approved. To recruit the appropriate patients for testing the efficacy of C118P, we aimed to identify biomarkers of the drug, and we identified ASCT2 as a potential target of C118P via reverse docking. In this study, blockade of ASCT2 with C118P resulted in attenuated cancer cell growth and proliferation, increased cell apoptosis, and G2/M cell cycle arrest, which collectively contributed to the antitumour response of C118P in vitro and in vivo. In summary, investigating the effect and mechanism of C118P in inhibiting the proliferation of breast cancer cells is expected to provide guidance for the treatment of breast cancer in the clinic and to promote the development of new drugs targeting metabolism.
Cell Viability Assay
The effects of C118P on breast cancer cells (MDA-MB-231, MDA-MB-468, BT-549, MCF-7, T47D, and BT-474 cells) were determined using the MTT assay. Cell suspensions were prepared, and 1800 cells of each type were seeded into a 96-well plate. After incubation for 24 h, the cells were treated with C118P for another 72 h. Subsequently, 20 µL of MTT solution (0.5 mg/mL) was added and incubated for another 4 h, and the medium was then replaced with 150 µL of DMSO to dissolve the formazan precipitates. The absorbance at 570 nm was detected using a universal microplate reader (Infinite M100, Tecan, Crailsheim, Germany). Inhibition rates were calculated with the following formula: inhibition rate (%) = (1 − absorbance of the treated group/absorbance of the control group) × 100.
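As a minimal sketch of the inhibition-rate calculation defined above, the snippet below averages replicate A570 readings and applies the stated formula; the absorbance values are hypothetical numbers used only to illustrate the arithmetic, not experimental data from this study.

```python
import numpy as np

def inhibition_rate(abs_treated, abs_control):
    """Inhibition rate (%) = (1 - A570_treated / A570_control) x 100,
    computed from the mean of replicate absorbance readings."""
    return (1.0 - np.mean(abs_treated) / np.mean(abs_control)) * 100.0

# Hypothetical A570 readings for illustration only
control = [0.82, 0.79, 0.85]
treated = [0.41, 0.38, 0.44]
print(f"inhibition rate = {inhibition_rate(treated, control):.1f}%")
```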
Real-Time Cellular Analysis (RTCA)
The xCELLigence system is a novel approach developed by Roche Applied Science (Penzberg, Germany) to investigate cell growth, adhesion, and morphology in real time in a label-independent manner. A change in impedance is recorded as the cell index, which indicates cell number, cellular attachment, and morphology. Cells were seeded at a density of 8000 cells (MDA-MB-231) or 10,000 cells (MDA-MB-468) per well, placed on a rotating plate and incubated for 30 min, and subsequently placed in the xCELLigence system, which was linked to a 37 °C incubator with a humidified atmosphere containing 5% CO2 [17]. After incubation for 24 h, cells were treated with C118P and observed for 144 h.
Colony Formation Assay
The effect of C118P treatment on cell proliferation was also assessed with colony formation assays. A total of 2000 cells were seeded into a 6-well plate and incubated for 24 h. Subsequently, the cells were treated with 0.025, 0.05, and 0.1 µM C118P for 14 days. The cells were then fixed with 4% formaldehyde and stained with 0.5% crystal violet. Colonies were then counted macroscopically.
Apoptosis Detection and Cell Cycle Analysis
Cells were collected with EDTA-free trypsin and washed with ice-cold PBS.Subsequently, the cells were suspended in 500 µL of binding buffer and stained with 5 µL of PI and 5 µL of FITC-conjugated Annexin V for 15 min.Apoptotic cells were analysed with a FACSCalibur flow cytometer (BD Biosciences, San Jose, CA, USA).
The cell cycle distribution was detected by PI staining. Cells were collected and fixed in 75% ethanol overnight after drug treatment. Then, the cells were washed once with ice-cold PBS and stained with PI for 30 min at 37 °C. Cell cycle analysis was performed using a FACSCalibur flow cytometer (BD Biosciences).
SPR Analysis of Recombinant Proteins
SPR measurements were performed using a Biacore T200 instrument (GE Healthcare, Chicago, IL, USA). The ASCT2 protein (SL5-H5149) was purchased from ACRO Biosystems. C118P at different concentrations (0.15625 µM to 10 µM) was run over the SPR instrument with a CM5 chip (GE, Chicago, IL, USA) using running buffer containing 1.8 mM KH2PO4, 10 mM Na2HPO4, 137 mM NaCl, 2.7 mM KCl, and 0.005% Tween-20 (pH 7.8). The binding and dissociation rates were measured at a flow rate of 25 µL/min. Ligand injection was performed over 1.5 min, followed by flow with ligand-free buffer to analyse dissociation for 2.5 min. Curves were corrected for nonspecific ligand binding by subtracting the signal obtained for the negative control flow cell. The equilibrium KD was derived from a simple 1:1 interaction model using Reichert data evaluation software (version 1.7.1).
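For readers unfamiliar with how an equilibrium KD emerges from a 1:1 interaction model, the sketch below fits the steady-state form of that model, R_eq = Rmax·C/(KD + C), to equilibrium responses at the analyte concentrations listed above. This is a simplified, steady-state alternative to the full kinetic fit performed by the evaluation software, and the response values are hypothetical numbers for illustration only, not the measured C118P/ASCT2 data.

```python
import numpy as np
from scipy.optimize import curve_fit

def steady_state_1to1(conc, Rmax, KD):
    """Equilibrium SPR response for a 1:1 interaction: R_eq = Rmax*C/(KD + C)."""
    return Rmax * conc / (KD + conc)

# Analyte concentrations (uM) as in the dilution series above; responses (RU)
# are hypothetical values used only to demonstrate the fit.
conc = np.array([0.15625, 0.3125, 0.625, 1.25, 2.5, 5.0, 10.0])
R_eq = np.array([4.1, 7.6, 13.0, 20.5, 28.0, 34.5, 39.0])

popt, _ = curve_fit(steady_state_1to1, conc, R_eq, p0=[45.0, 2.0])
Rmax, KD = popt
print(f"Rmax ~ {Rmax:.1f} RU, equilibrium KD ~ {KD:.2f} uM")
```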
Microscale Thermophoresis (MST) Analysis of Recombinant Proteins
The Monolith Protein Labeling Kit RED-NHS (L001) was purchased from NanoTemper Technologies (Watertown, MA, USA). For NT.115 NanoTemper measurements, an infrared (IR) laser beam coupled to the light path (i.e., fluorescence excitation and emission) with a dichroic mirror is focused into the fluid sample through the same optical element used for fluorescence imaging. The IR laser is absorbed by the aqueous solution in the capillary and locally heats the sample with a 1/e^2 diameter of 25 µm. Up to 24 mW of laser power was used to heat the sample without damaging the biomolecules. Thermophoresis of the protein in the presence of C118P at varying concentrations (0.15625 µM to 10 µM) was analysed for 30 s. Measurements were performed at room temperature, and the S.D. was calculated from three independent experiments. Data were normalised to either ∆Fnorm [‰] (10 × (Fnorm(bound) − Fnorm(unbound))) or the bound fraction (∆Fnorm [‰]/amplitude).
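The normalisation described above can be sketched as follows: ∆Fnorm is computed from the per-capillary Fnorm values, converted to a bound fraction using the response amplitude, and fitted with a simple binding isotherm to obtain an apparent KD. The Fnorm readings below are hypothetical illustrative values, the amplitude is crudely estimated as the span of the data, and the hyperbolic isotherm assumes the labelled target concentration is well below the KD.

```python
import numpy as np
from scipy.optimize import curve_fit

def delta_fnorm(fnorm, fnorm_unbound):
    """Delta Fnorm [per mil] = 10 * (Fnorm(bound) - Fnorm(unbound)), as defined above."""
    return 10.0 * (np.asarray(fnorm, dtype=float) - fnorm_unbound)

def hyperbolic(conc, KD):
    """Simple binding isotherm for the bound fraction (assumes target << KD)."""
    return conc / (conc + KD)

# Hypothetical thermophoresis readings at the C118P concentrations used above (uM)
conc = np.array([0.15625, 0.3125, 0.625, 1.25, 2.5, 5.0, 10.0])
fnorm = np.array([880.2, 880.6, 881.3, 882.3, 883.4, 884.3, 884.9])

dF = delta_fnorm(fnorm, fnorm_unbound=880.0)
amplitude = dF.max() - dF.min()            # crude estimate of the response amplitude
frac_bound = (dF - dF.min()) / amplitude

KD = curve_fit(hyperbolic, conc, frac_bound, p0=[1.0])[0][0]
print(f"apparent KD ~ {KD:.2f} uM")
```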
Detection of ATP, Glutamine, Glucose, and Lactate Levels
After transduction or treatment with C118P at various concentrations (0.025, 0.05, 0.1 µM) for 48 h, ATP was detected with an ATP assay kit (Beyotime Biotechnology).Glutamine was detected with a glutamine assay kit (Sigma, St. Louis, MO, USA), glucose was detected with a glucose assay kit (Whitman Biotech, Nanjing, China), and lactate production was detected with a lactate production kit (Sunshine Biotechnology Ltd., Thatoom, Thailand).
Oxygen Consumption Rate (OCR) Measurements
The OCR was measured using an XF96 analyser (Seahorse Bioscience, North Billerica, MA, USA).Cells were seeded in 96-well XF96 cell culture plates at a density of 20,000 cells/well.After incubation for 48 h, the cells were treated with C118P (0.025, 0.05, 0.1 µM).The media were then removed, and the wells were washed in XF-modified DMEM (Seahorse Bioscience) at pH 7.4 supplemented with 1 mM glutamine (glycolysis and mitochondrial stress tests), 2.5 mM glucose, 1 mM sodium pyruvate, 0.5 mM carnitine, and 1 mM palmitate in complex with 0.2 mM BSA (mitochondrial stress tests) and incubated for 1 h at 37 • C without CO 2 .The OCR was measured in the basal state (1 mM palmitate in complex with 0.2 mM BSA) or after the injection of 5 µM oligomycin, 1 µM 2-[2-[4-(trifluoromethoxy) phenyl] hydrazinylidene]-propanedinitrile (FCCP), and rotenone with antimycin A (both at 0.5 µM).After the Seahorse Bioscience experiments, the proteins were quantified to normalise the results.
MDC Staining and LysoTracker Red Staining
Monodansylcadaverine (MDC, KeyGEN BioTECH, Nanjing, China) is an autofluorescent dye commonly used as a specific stain to detect autophagosome formation. LysoTracker Red (Beyotime Biotechnology, Shanghai, China) is commonly used as a specific stain to detect lysosomes. After pretreatment with C118P (0.025, 0.05, or 0.1 µM) for 48 h, cells were incubated with MDC for 30 min or with LysoTracker Red for 60 min. The cells were then mounted with anti-fluorescence quenching mounting medium and photographed under a fluorescence microscope.
Adipose-Breast Cancer Cell Co-Culture Model
Breast cancer cells (2.0-3.0 × 10 5 ) were seeded in the upper well of the Corning Transwell co-culture chambers (Corning, NY, USA).When the cells adhered, the induced mature adipocytes were seeded in the lower chamber.The supernatant from the upper and lower chambers was replaced with DMEM/F12 containing 2% FBS.Subsequent experiments were performed after three days of co-culture.
Three-Dimensional Culture
Breast cancer cells were cultured in a mixture of 3.8 mL of complete medium, 1 mL of methylcellulose solution, and 50 µL of Matrigel (Corning). Distinct white dots were visible at the bottom of the well after a 72 h incubation in the cell incubator. Next, 96-well plates were coated with 50 µL of Matrigel, and 200 µL of complete medium was added to each well. Single-cell spheres were seeded in the 96-well plates. The spheres were observed and photographed at 0 h and then daily for 7 consecutive days.
Detection of the Mouse Microarray
The sandwich antibody microarray (RayBiotech, Peachtree Corners, GA, USA) detects each analyte with a pair of antibodies. The experiments were performed by drying and blocking the chips, incubating them with the samples, and then analysing the fluorescence signal.
Nude Mouse Xenograft Study
Female BALB/c athymic nude mice (5-6 weeks old) with body weights of 18 to 22 g were purchased from the Model Animal Research Center of Nanjing University (Nanjing, China). A total of 2 × 10^6 MDA-MB-231 cells transfected with shControl or shSLC1A5#1 were injected into the subcutaneous tissue of the armpit. Tumours were grown until their volume reached 300 to 500 mm^3, resected, and cut into small pieces. Subsequently, the tissue pieces were subcutaneously implanted into each of the nude mice. The mice were randomly divided into groups of six. C118P was administered by tail vein injection at a dose of 50 mg/kg; the negative control group was given an equal volume of normal saline. At 21 days after administration, the mice were euthanised, and the tumour tissues were resected and assessed. Tumour volume (TV) was calculated with the formula TV (mm^3) = A/2 × B^2, where A represents the longest diameter of the tumour and B represents the shortest diameter. Relative tumour volume (RTV) was calculated as RTV = V_t/V_0, where V_t represents the TV on day t and V_0 represents the TV on day 0. The animal care and surgical procedures were conducted under the guidance of the Animal Care and Control Committee of China Pharmaceutical University.
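The tumour-size bookkeeping in this study reduces to the two formulas above, TV = A/2 × B^2 and RTV = V_t/V_0; the short sketch below applies them to hypothetical caliper readings so the calculation is unambiguous.

```python
# Assumption: the caliper measurements below are illustrative, not study data.
def tumour_volume(a_mm: float, b_mm: float) -> float:
    """TV (mm^3) = A/2 * B^2, with A = longest and B = shortest diameter (mm)."""
    return (a_mm / 2.0) * b_mm ** 2

v0 = tumour_volume(8.0, 6.5)    # day 0 (first dose)
vt = tumour_volume(12.4, 9.8)   # day t
rtv = vt / v0                   # relative tumour volume
print(f"V0 = {v0:.0f} mm^3, Vt = {vt:.0f} mm^3, RTV = {rtv:.2f}")
```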
Targeted Metabolomics Analysis
Metabolomics studies of mouse tumour tissue samples from different experimental groups were performed using LC-MS as the analytical method.Experiments were conducted by collecting biological samples, detecting samples with the instrument, and analysing the data, as previously described [20].
Plasmids and ASCT2 Expression and Purification
The SLC1A5 was cloned into the pET-28b (GenScript, Nanjing, China) expression plasmid to produce recombinant ASCT2 with a histidine tag.E. coli strain BL21 (DE3) obtained from Tiangen Biotech Co., Ltd.(Beijing, China) was transformed with the plasmid and cultured on a selective antibiotic LB agar plate.After 16 h, a single colony was picked and cultured in 10 mL of LB medium containing 50 µg/mL kanamycin with vigorous shaking at 37 • C for 10 h.Then, 10 mL cultures were added to 250 mL of medium and cultured for 2 h.Next, protein expression was induced by the addition of IPTG to a final concentration of 0.5 mM.The cells were left to grow overnight at 16 • C and then harvested by centrifugation.Protein extraction and purification were performed using a Ni-NTA Fast Start Kit (QIAGEN, Hilden, Germany) and an AKTA system, respectively.Then, the purified protein was concentrated by centrifugal filter devices (Millipore, Burlington, MA, USA), mixed with glycerol to a final concentration of 20%, and stored at −80 • C until use.
Gene Expression Analysis
The GEPIA2 (Gene Expression Profiling Interactive Analysis, version 2) web server (http://gepia2.cancer-pku.cn/#analysis, accessed on 20 July 2022) was used to determine the difference in ASCT2 expression between breast cancer tissues and the corresponding normal tissues from the TCGA database [21]. Violin plots of ASCT2 expression in different pathological stages of breast cancer were constructed using GEPIA2. The Human Protein Atlas (https://www.proteinatlas.org, accessed on 20 July 2022) was used to obtain the expression of the ASCT2 protein in human tissues [22]. GEPIA2 was used to determine the significance of the association of OS (overall survival) with ASCT2 expression in breast cancer.
Statistical Analyses
All data in this study are expressed as the mean ± S.D. and were analysed using Student's t-test (* p < 0.05, ** p < 0.01, *** p < 0.001, and N.S. represents no significant change).
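For readers who want to reproduce the comparison, a minimal example of an unpaired two-sample Student's t-test on triplicate measurements is given below (the values are invented; this is not the study's dataset).

```python
from scipy import stats

control = [1.00, 0.97, 1.03]     # e.g. relative ATP production, vehicle (example values)
treated = [0.62, 0.58, 0.66]     # e.g. relative ATP production, C118P (example values)

t_stat, p_value = stats.ttest_ind(control, treated)   # two-sided Student's t-test
stars = ("***" if p_value < 0.001 else
         "**" if p_value < 0.01 else
         "*" if p_value < 0.05 else "N.S.")
print(f"t = {t_stat:.2f}, p = {p_value:.4g} ({stars})")
```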
C118P Potently Inhibited the Proliferation of Breast Cancer Cell Lines In Vitro
We compared the viability of several breast cancer cell lines upon C118P treatment. In all of these cell lines (MDA-MB-231, MDA-MB-468, BT-549, MCF-7, T47D, and BT-474), C118P inhibited cell proliferation (Figure 1a), with IC50 values ranging from 9.35 to 325 nM. RTCA then showed that treatment with 0.025, 0.05, and 0.1 µM C118P potently inhibited breast cancer cell proliferation for more than 3 days after administration (Figure 1b). In addition, C118P significantly inhibited colony formation in the MDA-MB-231 and MDA-MB-468 cell lines (Figure 1c), with an inhibitory effect of more than 50% at 100 nM. As reported previously [23], mTORC1 is a central regulator of cell growth, mRNA translation, and metabolism. Our data show that C118P treatment markedly decreased phosphorylated p70S6K and phosphorylated S6 levels (Figure 1d). Thus, we demonstrated that C118P inhibits breast cancer progression in vitro.
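IC50 values such as those summarised in Figure 1a are typically obtained by fitting a sigmoidal dose-response curve to viability data; the snippet below shows one common way to do this with a four-parameter logistic model. The concentrations and viabilities are synthetic and are only meant to illustrate the fit, not to reproduce the reported values.

```python
# Assumption: synthetic dose-response data fitted with a four-parameter logistic (Hill) model.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc_nM, bottom, top, ic50_nM, hill):
    """Viability (%) as a decreasing function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc_nM / ic50_nM) ** hill)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)      # nM
viability = np.array([98, 95, 82, 55, 28, 12, 6], dtype=float)     # % of control

popt, _ = curve_fit(four_pl, conc, viability, p0=[5.0, 100.0, 30.0, 1.0])
print(f"Fitted IC50 = {popt[2]:.1f} nM (Hill slope {popt[3]:.2f})")
```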
C118P Substantially Potentiated Apoptosis and Cell Cycle Arrest in Breast Cancer Cells
We performed Annexin-V/PI staining to detect apoptosis.The results indicated that the apoptosis rate increased in both the MDA-MB-231 and MDA-MB-468 cell lines following treatment with C118P (0.025 µM, 0.05 µM, or 0.1 µM) for 48 h compared to the corresponding controls (Figure 2a,b, p < 0.01).Bcl-2, Bcl-xl, and MCL-1 [24] have been identified as specific apoptosis markers [25].Apoptosis induction by C118P was further shown by the decreased expression of Bcl-xl and MCL-1 upon C118P treatment (Figure 2e).In addition to apoptosis, cell cycle arrest was detected.As shown in Figure 2c,d, C118P induced cell cycle arrest at the G2/M phase in the MDA-MB-231 and MDA-MB-468 cell lines.Furthermore, the transition from the G2 phase requires activation of the Cyclin B1/CDK1 checkpoint complex.Here, we found that Cyclin B1 accumulated and that p-CDK1 was dramatically reduced after treatment with C118P (Figure 2f).Collectively, these results suggest that C118P inhibits proliferation by inducing cell cycle arrest at the G2/M phase and apoptosis in breast cancer cells.
Validation of ASCT2 As a Target of C118P through Reverse Docking, SPR, and MST Analyses
To identify the target of C118P, proteins in the PDB database were docked with C118P.Then, ASCT2 was selected from a reverse docking library as a potential target of C118P (Figure 3a).Additionally, AutoDock Vina software (version 1.0.8) was used to dock ASCT2 (PDB: 5llm) with C118P, and the conformation with the lowest binding energy (−7.6 kcal/mol) was then selected, and all of the amino acids less than 1 Å from C118P were shown with PyMOL software (version 1.5.3).As shown in Figure 3b, the data indicated that ASN182, SER195, MET221, and ASN222 comprise the C118P-binding pocket.Among these residues, ASN182 and ASN222 form hydrogen bonds with C118P.As ASCT2 is a transmembrane protein, obtaining full-length ASCT2 was very difficult.Therefore, we obtained an active fragment of ASCT2, as described in the Methods section.ASCT2 was then expressed, purified, and identified using Western blot analysis and Coomassie staining (Figure S1a,e).We next applied SPR analysis to investigate whether and how C118P directly interacts with ASCT2.The biochemical parameters for the binding of C118P and ASCT2 were then measured.Our results showed that the equilibrium dissociation constant (KD) of C118P toward ASCT2 is 2.358 µM (Figure 3c).The MST analysis showed that the KD of C118P toward ASCT2 is 309 nM (Figures 3d and S1f).This finding indicated that ASCT2 is a potential target of C118P.
C118P Inhibits Glutamine Metabolism and Mediates Autophagy in Breast Cancer Cells
ASCT2, the primary glutamine transporter, is overexpressed in different cancers [5].An analysis of the TCGA database revealed that ASCT2 is expressed at high levels in basal-like breast cancer, and distinct associations exist between ASCT2 expression and the prognosis of patients with breast cancer (Figure S2a-d).Thus, we hypothesised that C118P inhibits glutamine metabolism by targeting ASCT2.As expected, at concentrations of 0.025 µM, 0.05 µM, and 0.1 µM, C118P reduced ATP production (Figure 4a).We also investigated the dependency of MDA-MB-231/MDA-MB-468 cells on oxidative phosphorylation (OXPHOS) by measuring the oxygen consumption rate (OCR) with a real-time metabolite analyser.The basal OCR was much lower after treatment with C118P, implying that C118P inhibits the metabolism of breast cancer cells (Figure 4b).Then, a glutamine uptake assay showed that C118P inhibits the uptake of glutamine (Figure 4c).In addition to ASCT2, several key metabolic enzymes, such as GLS1, GLUL, and GDH, are involved in glutamine metabolism.Interestingly, we found that ASCT2 protein expression, but not mRNA expression, was reduced (Figures 4d and S3a).As shown in Figure 4e, after treatment with cycloheximide (CHX, 10 µg/mL), ASCT2 was still degraded.This result indicates that C118P mediates the degradation of ASCT2.Proteolysis in eukaryotic cells is mainly mediated by the ubiquitin (Ub)-proteasome system (UPS) [26] and autophagosome system [27].As shown in Figure 4f,g, only the lysosome inhibitor chloroquine (CQ), but not the proteasome inhibitor MG-132, reversed the reduction in ASCT2 levels caused by C118P.This means that ASCT2 is degraded through the lysosomal pathway.There are three types of biomarkers in the process of autophagy: markers of vesicle formation, lysosomal markers, and markers of autophagic substrates.During vesicle formation, LC3, which exists in the forms LC3-I and LC3-II as a result of LC3-I transformation, is involved in the formation of autophagosome membranes [28].Lysosome-associated membrane protein type 1 (LAMP-1) is a lysosomal marker [29].As shown in our study, C118P induced autophagy in breast cancer cells and increased expression of the autophagy markers LAMP-1, LC3-II, and Beclin1.Meanwhile, the expression of p62 was decreased (Figures 4h and S3b-f).Previous reports have suggested that MDC accumulates in mature autophagic vacuoles, such as autophagolysosomes, and autophagic vacuoles stained with MDC appear as distinct dot-like structures distributed in the cytoplasm [30].Data obtained from MDC staining and LysoTracker Red staining showed that autophagosomes emerged after C118P treatment (Figures 4i,j and S3g,h).
C118P Inhibits Breast Cancer Metabolism via ASCT2
Since we found that C118P reduces the production of ATP and uptake of glutamine, we investigated whether C118P inhibits breast cancer metabolism via ASCT2.First, we constructed stable ASCT2-knockdown and ASCT2-overexpressing cell lines (Figure S4a-d).
Then, the metabolic responses of these cells to pretreatment with C118P were investigated. The data showed that after treatment with C118P, the metabolic response of cells overexpressing ASCT2 was significantly different from that of cells in which ASCT2 had been knocked down. After treatment with C118P, the inhibition of ATP production ranged from 50% to 60% in ASCT2-knockdown cells, compared to 135% to 145% in ASCT2-overexpressing cells (Figure 5a,b). The inhibition of glutamine uptake ranged from 46% to 63% in ASCT2-knockdown cells, compared to 130% to 145% in ASCT2-overexpressing cells (Figure 5c,d). The promotion of glucose uptake and lactate production by C118P was reversed in the ASCT2-knockdown group (Figure 5e-h). The growth curve results showed that although ASCT2 overexpression slightly rescued the survival of breast cancer cells after C118P treatment, at the same C118P concentration the inhibitory effect on ASCT2-overexpressing cells was still stronger than that on negative control cells, whereas the inhibitory effect on ASCT2-knockdown cells was the weakest (Figure 5i,j).
In general, these results indicate that the SLC1A5-overexpressing group was more susceptible to C118P pretreatment than the shControl-transfected group.However, the SLC1A5-knockdown group was insensitive to C118P.In conclusion, C118P inhibits breast cancer metabolism via ASCT2.
Adipocytes Upregulated ASCT2 Expression in Breast Cancer Cells through IL-6
We constructed a co-culture model to investigate the interaction between breast cancer cells and adipocytes. A well-known differentiation "cocktail" was used to induce the differentiation of 3T3-L1 cells into adipocytes. After incubating confluent cells with the differentiation medium for up to 14 days, a large number of lipid droplets formed and were observed under a bright-field microscope. Bodipy staining and oil red O staining were then used to confirm the differentiation efficiency, as shown in Figure 6a. These differentiated mature adipocytes were seeded in the lower chamber, and the triple-negative breast cancer cell lines MDA-MB-231 and MDA-MB-468 were seeded into the upper chamber. In this way, a co-culture model system mimicking the adipocyte-rich breast cancer microenvironment was constructed. Co-culture with adipocytes upregulated ASCT2 expression, increased ATP levels, and promoted breast cancer cell proliferation (Figures 6b-f and S5a). ASCT2 knockdown reversed these changes. These data verified that adipocytes promote glutamine metabolism via the glutamine transporter ASCT2 and suggested that ASCT2 might play an important role in the development of breast cancer. The microarray analysis showed that the levels of many cytokines in the supernatant changed significantly after co-culture; IL-6 was the main secreted protein and the cytokine detected at the highest levels (Figure 6g). The expression of the ASCT2 protein was significantly upregulated after IL-6 stimulation, whereas when ASCT2 was knocked down, the effect of IL-6 was reversed (Figure 6h). Based on these results, IL-6 secreted by the adipocytes promotes breast cancer cell growth and metabolism by upregulating ASCT2 protein expression.
C118P Exerted Antitumour Effects on the Co-Culture of Breast Cancer Cells and Adipocytes via ASCT2
We investigated whether C118P inhibited breast cancer metabolism via ASCT2 in the co-culture system.C118P inhibited glutamine uptake and ATP generation, inhibited breast cancer cell proliferation, and downregulated ASCT2 expression in the co-culture system (Figures 7a-f and S5b).The SLC1A5 knockdown group was insensitive to C118P.
The effect of C118P on ATP production, glucose uptake, and lactate production in the co-culture system was weaker in the knockdown cells than in the control group (Figure S6a-c). The expression of the IL-6 receptor gp130 was upregulated after IL-6 stimulation, and the levels of p-STAT3 and p-ERK1/2 were increased. However, C118P treatment combined with IL-6 stimulation inhibited the upregulation of gp130 (Figure 7g,h).
C118P Exerted Antitumour Effects via ASCT2 In Vivo
The data above indicated that C118P represses the proliferation of breast cancer cells in vitro; thus, we further investigated whether C118P exerts anti-breast cancer effects via ASCT2 in vivo. In MDA-MB-231 nude mouse xenografts, C118P (50 mg/kg, i.v.) significantly inhibited tumour growth (p < 0.01) (Figure 8a-c). Since ASCT2 was confirmed as a target of C118P (Figure 3), we next examined the effect of C118P when ASCT2 was knocked down. Interestingly, in this setting, C118P (50 mg/kg, i.v.) did not exert a clear anti-breast cancer effect on MDA-MB-231 xenografts in nude mice (Figure 8g-i). Additionally, H&E staining and immunohistochemical staining were used to detect SLC1A5 and Ki67 (Figures 8d,j and S7a-c). The metabolomic analysis showed that C118P significantly inhibited amino acid metabolism and lipid metabolism in tumour tissues (Figure 8e,f). These results indicate that ASCT2 is a target of C118P in vivo (Figure 8k).
Taken together, the study findings show that C118P significantly exerted antitumour effects via ASCT2 in vitro and in vivo, providing evidence to confirm that ASCT2 might be an effective target in the clinical therapy of breast cancer.
Discussion
Targeting metabolic abnormalities is a research direction that has attracted much attention in the field of novel antitumour drug development. Recently, glutamine metabolism has become a hotspot in tumour metabolism research. As a glutamine transporter, ASCT2 is an attractive tumour metabolism target, based on its role and increased activity in cancer [31]. Inhibitors targeting ASCT2 mainly consist of small molecules and antibodies, such as MEDI7247, GPNA, and V-9302. Despite the availability of these ASCT2 inhibitors, few patients have so far benefited from ASCT2 inhibitor treatment strategies. In our study, C118P bound to ASCT2 and decreased its expression; therefore, ASCT2 is recognised as a target of C118P. Compared with other ASCT2 inhibitors, C118P has the advantages of high efficiency and low toxicity.
The ASCT2 inhibitor MEDI7247 is a novel pyrrolobenzodiazepine antibody-drug conjugate (ADC) monoclonal antibody.MEDI7247 showed potent activity in vitro and in vivo in a spectrum of haematological cancers and solid tumours.GPNA is widely used as a drug to inhibit ASCT2 in basic research (IC 50 ~1000 µM) [13].However, as an ASCT2 inhibitor, GPNA exhibits poor potency and selectivity in human cells.V-9302 was reported to be the first specific and potent small-molecule inhibitor of ASCT2.V-9302 was shown to significantly inhibit ASCT2-mediated glutamine uptake with an IC 50 of 9.6 µM [14].Suppression of ASCT2 by V-9302 resulted in attenuated proliferation of cancer cells and increased oxidative stress, which, collectively, contributed to antitumour responses in vitro and in vivo.Nevertheless, V-9302 has shortcomings, such as low selectivity and high toxicity.Research showed [14] that the response to V-9302 did not correlate with the level of ASCT2 expression in tumours.This means that the results of studies in which patients with high expression of the target ASCT2 are enrolled will be unreliable.
Here, we report the anti-breast cancer effects of C118P.C118P inhibited the proliferation of breast cancer cell lines with an IC 50 of 9.35 to 325 nM and exhibited an improvement in potency over GPNA and V-9302.Moreover, C118P exposure resulted in decreased mTOR activity, which is consistent with reduced amino acid transport and metabolism.SLC1A5 silencing was reported to inhibit oesophageal cancer growth by inducing cell cycle arrest and apoptosis [32].C118P potentiated cell apoptosis and G2/M cell cycle arrest in breast cancer cells, and these effects might be mediated by ASCT2.
In this study, we identified ASCT2 as a target of C118P. Because of the structural complexity of membrane proteins, Holst and colleagues [33] constructed a homology model of ASCT2 for virtual screening [34]. Benefitting from the resolved crystal structure of ASCT2 [35], we obtained possible binding sites between C118P and ASCT2 via virtual screening, which are worthy of further study. Experimental validation of the specific binding sites of C118P on ASCT2, such as point mutation experiments and molecular dynamics simulations, may be necessary. C118P is currently being investigated in a phase I clinical trial in China. We hope that our results will guide the use of ASCT2 as a target of C118P in subsequent phase II clinical trials.
We found that the metabolic regulatory effect of C118P may not be closely related to the glutamine metabolic enzymes.Meanwhile, C118P may affect protein stability rather than protein synthesis after binding to ASCT2.Further research found that ASCT2 may be degraded through the autophagy-lysosomal pathway.Elevated autophagy is another notable characteristic of the observed C118P-mediated response.However, more research on this topic needs to be undertaken before the mechanism of ASCT2 downregulation can be more clearly understood.C118P probably exerts a dual effect on ASCT2.C118P binds directly to ASCT2, thus inhibiting the transport of amino acids, such as glutamine, and promoting ASCT2 degradation.Moreover, in the tumour microenvironment, C118P may indirectly affect ASCT2 expression through IL-6 and receptor gp130.Further experiments are still needed to verify this conclusion.
Despite these promising results, questions remain.Proteins were docked with C118P to identify the targets.Our results showed that the antitumour effect of C118P was partly caused by ASCT2-mediated metabolic alterations.However, other targets of C118P are not excluded.C118P showed antitumour effects against melanoma via BUB1B or against HCC via tumour vasculature [36,37].In addition, our other studies have shown that C118P inhibited breast cancer metastasis through ASCT2.Whether C118P exerts antitumour effects through other targets in breast cancer still needs further investigation.Another significant issue is that specific antagonism of ASCT2 will also block the ASCT2-mediated transport of other neutral amino acids beyond glutamine.It cannot be excluded that the observed efficacy may be partly due to simultaneous blockade of multiple ASCT2 substrates.
In terms of experimental design, we used a cell-derived xenograft (CDX) model to evaluate the in vivo efficacy of C118P.However, because the CDX model cannot maintain the heterogeneity of tumour tissue, its biological characteristics and drug efficacy evaluation results are less similar to clinical characteristics.The patient-derived xenograft (PDX) model retains tumour heterogeneity, is more consistent with clinical tumour characteristics, and has better clinical predictability.We will continue to further evaluate the in vivo efficacy of C118P in the PDX model to provide a basis for clinical research.Moreover, expanding the sample size in future studies will allow us to more fully evaluate the efficacy and safety of C118P.
Targeting ASCT2 provides a new option for tumour therapy, but because of tumour metabolic heterogeneity, not all patients can benefit from treatment strategies based on ASCT2 inhibition and interference with glutamine metabolism. Finding sensitive indications for the clinical application of ASCT2 inhibitors is a common problem faced by metabolism-targeted antitumour drugs and has become an urgent issue. In future work, in view of these problems, the metabolic heterogeneity of breast cancer and its glutamine dependence will be analysed to identify the differences in sensitivity to ASCT2 inhibitors and their causes.
Conclusions
In summary, we found that C118P regulates glutamine metabolism by inhibiting glutamine uptake through ASCT2, resulting in antitumour effects. Our findings suggest that ASCT2 is a candidate target of C118P in breast cancer treatment. ASCT2 may be a promising therapeutic target for tumours, especially glutamine-sensitive tumours.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers15205082/s1. Supplementary material associated with this article can be found in the online version. Figure S5: C118P inhibited breast cancer cell proliferation in the co-culture system. (a) ASCT2-knockdown cells were cultured in a 3D system and photos were captured for 7 consecutive days. (b) Breast cancer cells were treated with 0.025, 0.05, or 0.1 µM C118P, cultured in a 3D model, and photos were captured for 7 consecutive days. Figure S6: C118P inhibited the cell metabolism of human breast cancer cells in the co-culture system. Human breast cancer cells and ASCT2-knockdown cells were co-cultured for three days in the presence or absence of adipocytes. Different doses of C118P were added to the culture medium for 48 h. ATP production (a), glucose uptake (b), and lactate production (c) were detected after treatment with C118P for 48 h in human breast cancer cells. Quantitative results are representative of three independent experiments and are shown as the mean ± SD. Data were analysed using Student's t-test; ** p < 0.01, *** p < 0.001 vs. shControl group. Figure S7: The statistical analysis of Ki67-positive and ASCT2-positive cells related to Figure 8. (a-c) The statistical analysis of Ki67, ASCT2, and Ki67, respectively. Data were analysed using Student's t-test; *** p < 0.001 vs. shControl group.
Figure 1 .
Figure 1. C118P potently inhibited the proliferation of breast cancer cells in vitro. Six breast cancer cell lines (MDA-MB-231, MDA-MB-468, BT-549, MCF-7, T47D, and BT-474) were treated with C118P for 72 h. The IC50 values of C118P are summarised (a). The cell index was measured by RTCA after MDA-MB-231 and MDA-MB-468 cells were treated with C118P at three concentrations (0.025, 0.05, 0.1 µM) or 0.5 µM taxol for 144 h (b). Colony formation was assessed for 2 weeks after MDA-MB-231 and MDA-MB-468 cells were treated with C118P (c). n = 3. The data are presented as the means ± S.D. of triplicate measurements and were analysed using Student's t-test. *** p < 0.001, and ns represents no significant change vs. control group. The activation of proliferation-related proteins was tested by detecting mTOR, p-mTOR, p70S6K, and p-p70S6K in MDA-MB-231 and MDA-MB-468 cells (d) at 48 h, with β-actin serving as a loading control. The uncropped blots are shown in the Supplementary Materials.
Figure 3 .
Figure 3.The structure of C118P and its binding affinity with ASCT2.(a) Chemical structure of C118P.By docking C118P with 100,000 protein crystal structures in the PDB database and comprehensive evaluation of drug target docking scores, ASCT2 was selected as a potential target for C118P.Then, ASCT2 (PDB: 5llm) was docked with C118P, the conformation with the lowest binding energy (−7.6 kcal/mol) was selected (b), and all amino acids within 1 Å of C118P were displayed with PyMOL software.(c,d) We further verified the binding affinity between C118P and ASCT2 by performing SPR and MST to detect the KD.
Figure 6 .
Figure 6. Adipocytes upregulated ASCT2 expression in breast cancer cells through IL-6. Bodipy staining and oil red O staining were performed to assess the differentiation efficiency of adipocytes (a). Scale bars, 50 µm. After co-culture with adipocytes, the expression of ASCT2 was detected using Western blotting. GAPDH served as a loading control (b). The effects of C118P on ATP production (c), glucose uptake (d), and lactate production (e) were detected in ASCT2-knockdown cell lines co-cultured with or without adipocytes. The data are presented as the means ± S.D. of triplicate measurements and were analysed using Student's t-test. * p < 0.05, *** p < 0.001, and ns represents no significant change vs. control group. # p < 0.05 and ## p < 0.01 vs. adipocyte group. n = 3. ASCT2-knockdown cells were cultured in a 3D system, and photos were captured for 7 consecutive days (f). Scale bars, 200 µm. Cytokine contents in adipocytes and co-culture models were determined using a mouse microarray (g). After stimulation with IL-6 (5 ng/mL), ASCT2 expression was detected in ASCT2-knockdown cell lines using Western blotting. GAPDH served as a loading control (h). The uncropped blots are shown in the Supplementary Materials.
Figure 7 .
Figure 7. C118P exerted antitumour effects on the co-culture system of breast cancer cells and adipocytes through ASCT2. Breast cancer cells were treated with 0.025, 0.05, or 0.1 µM C118P or 3 µM V-9302 for 48 h. Relative glutamine uptake (a), relative ATP production (b), relative glucose uptake (c), and lactate production (d) were detected in cells co-cultured with or without adipocytes; the data are presented as the means ± S.D. of triplicate measurements and were analysed using Student's t-test. * p < 0.05 and *** p < 0.001 vs. control group. # p < 0.05, ## p < 0.01, ### p < 0.001 vs. adipocyte group, and ns represents no significant change. n = 3. Breast cancer cells were treated with 0.025, 0.05, or 0.1 µM C118P or 3 µM V-9302, cultured in a 3D model, and photos were captured for 7 consecutive days (e). In the co-culture system, ASCT2 expression was detected using Western blotting after treatment with 0.025, 0.05, or 0.1 µM C118P or 3 µM V-9302 for 48 h. GAPDH served as a loading control (f). After stimulation with IL-6 (5 ng/mL) and treatment with 0.025, 0.05, or 0.1 µM C118P, the levels of gp130, STAT3, p-STAT3, ERK1/2, and p-ERK1/2 were detected in breast cancer cells using Western blotting. GAPDH served as a loading control (g). Adipocytes upregulate ASCT2 expression in breast cancer cells via secreted IL-6, which subsequently promotes tumour growth. The red arrow indicates upregulation (h). The uncropped blots are shown in the Supplementary Materials.
Figure 8 .
Figure 8. C118P suppressed tumour growth via ASCT2 in MDA-MB-231 xenograft nude mice.In the MDA-MB-231 xenograft nude mouse model, when tumours grew to 100-200 mm 3 , mice were administered 50 mg/kg C118P (i.v., q.o.d., 4 weeks).Images of the tumours (a) are shown.Statistical analyses of the tumour volume (b) and tumour weight (c) were performed.MDA-MB-231 cells transfected with shSLC1A5 were used as a positive control.The data are expressed as the mean ± S.D. and were analysed using Student's t-test.* p < 0.05, ** p < 0.01, and *** p < 0.001.H&E staining and immunohistochemical staining for SLC1A5 and Ki67 (d).Targeted metabolomics analysis.Enrichment analysis of the differentially altered metabolites and pathways (e,f).In the MDA-MB-231 shSLC1A5 xenograft nude mouse model, when tumours grew to 100-200 mm 3 , mice were administered 50 mg/kg C118P (i.v., q.o.d., 3 weeks).Magnified views of the tumour (g) are shown.Statistical analyses of tumour volume (h) and tumour weight (i) were performed.H&E staining and immunohistochemical staining for SLC1A5 are shown (j).Scale bars, 50 µm.(k) The mechanism by which C118P suppresses breast cancer proliferation via ASCT2.Schematic representation of the relationship between C118P and ASCT2.
Figure S1: ASCT2 was expressed in E. coli and purified.The ASCT2 protein was validated by western blotting and Coomassie staining.(a,b) SDS-PAGE and Western Blot analysis of the expression of ASCT2 proteins with the change of IPTG concentration.(c-e) Purification of ASCT2 through Ni+ column and AKTA pure.(f) C118P-ASCT2 interactions were measured by MST. Figure S2: Expression level of ASCT2 in breast cancer and survival analysis.(a,c) The expression of the SLC1A5 in different cancers or specific cancer subtypes was analysed using GEPIA2 and The Human Protein Atlas.(b) Based on TCGA data, the expression level of the SLC1A5 was analysed in the main pathological stages (stage I, stage II, stage III, stage IV, and stage V) of BRCA.Log2 (TPM + 1) was applied for log transformation.(d) We used GEPIA2 to analyse the OS of patients with breast cancer in TCGA stratified by SLC1A5 expression.
Figure S3: C118P inhibited glutamine metabolism and induced autophagy in vitro. (a) ASCT2, GLS1, GLUL, and GDH mRNA levels were measured by qRT-PCR in MDA-MB-231 and MDA-MB-468 cells treated with 0.05 µM C118P for 48 h. (b-f) LAMP-1, MAP1LC3A, MAP1LC3B, Beclin1, and p62 mRNA levels were measured by qRT-PCR in MDA-MB-231 and MDA-MB-468 cells treated with 0.025 µM, 0.05 µM, and 0.1 µM C118P for 48 h, respectively. (g,h) The autophagosome density and the lysosome density were analysed in MDA-MB-231 and MDA-MB-468 cells treated with 0.025 µM, 0.05 µM, and 0.1 µM C118P for 48 h, respectively. n = 3. The data are presented as the means ± S.D. of triplicate measurements and were analysed using Student's t-test. * p < 0.05, ** p < 0.01, *** p < 0.001, and ns represents no significant change vs. control group. Figure S4: The transfection efficiency for ASCT2 knockdown and overexpression. Western blot analysis was performed to verify the transfection efficiency for ASCT2 knockdown (a) and overexpression (b) in the MDA-MB-231 and MDA-MB-468 cell lines. The lower panel shows the statistical data. The ASCT2 mRNA level upon ASCT2 knockdown (c) and overexpression (d) was determined in the MDA-MB-231 and MDA-MB-468 cell lines. n = 3. The data are presented as the means ± S.D. of triplicate measurements and were analysed using Student's t-test. *** p < 0.001 vs. shControl group. | 2023-10-22T15:04:58.145Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "79ab56d750101ee8b06c2c5b3b98243837e8b4b1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/15/20/5082/pdf?version=1697800583",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8c7b0b0a206d52363ed39af692e331d1b1fc7f0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236697325 | pes2o/s2orc | v3-fos-license | FMCW Radar with Multiple Initial Frequencies for Robust Source Number Estimation ∗
We propose a novel method to estimate the source number used in a Direction Of Arrival (DOA) estimator for Frequency Modulated Continuous Wave – Multiple-Input Multiple-Output (FMCW-MIMO) radar. The main principle is that the phase of the intermediate frequency (IF) signals can be modified by the initial frequency of the transmitted signals. We confirmed that our method can distinguish true and spurious sources when the programmed number of sources is larger than the number of true sources.
Introduction
Direction Of Arrival (DOA) estimation is one of the key tools for high-speed wireless communication. It is well known that most accurate DOA estimation algorithms require the source number in advance [1]. The Akaike Information Criterion (AIC) [2] and Minimum Description Length (MDL) [3] are classical and accurate source number estimation algorithms; however, they cannot give accurate estimates when sources are highly correlated. Therefore, they cannot be directly applied to the Intermediate Frequency (IF) signals of Frequency Modulated Continuous Wave – Multiple-Input Multiple-Output (FMCW-MIMO) radar, because such signals are highly correlated. In such cases, we often set the programmed (pre-determined) source number to be larger than the true source number. Once DOAs are estimated, we should distinguish which of them are true and which are spurious sources.
In this paper, we propose a novel method to estimate accurate source number using the principle of IF signals that their phases can be changed by the initial frequency of transmitted (Tx) chirp signals. We use Annihilating Filter (AF) [4] which can estimate DOAs of highly correlated signals, and also use multiple Tx chirps with different initial frequencies. The estimated DOAs by AF are classified into true and spurious sources using the phase variation of IF signals. The true source number can be accurately estimated by calculating the correlation between the initial frequency pattern and the phase fluctuation pattern of an estimated signal. Performance of the proposed method is evaluated through computer simulation.
Signal Model
We introduce the principle of controlling the phase of the IF signal by changing the initial frequency of the Tx chirp in FMCW radar. The IF signal can be obtained by mixing down the Tx signal with the received (Rx) signal. The IF signal y(t) is therefore represented as in (1), where A_t and A_r are the amplitudes of the Tx and Rx signals, µ is the chirp gradient, R is the distance between the target and the radar, c is the speed of light, and f_min is the initial frequency of the Tx signal. Note that the third term of the exponent in (1) becomes a function of f_min.
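Because Eq. (1) is not reproduced in this extracted text, the short numerical sketch below uses a standard dechirped-signal approximation (our own, not necessarily the paper's exact expression) to show that the constant phase term of the IF tone shifts when f_min changes, while the beat frequency set by the range stays the same. The bandwidth, chirp duration, and range values are assumed for illustration.

```python
# Assumption: textbook FMCW dechirp model, phase(t) = 2*pi*(mu*tau*t + f_min*tau - mu*tau^2/2),
# with tau = 2R/c; all parameter values below are illustrative, not taken from the paper.
import numpy as np

c = 3e8                      # speed of light [m/s]
R = 1.0                      # target range [m]
tau = 2 * R / c              # round-trip delay [s]
mu = 4e9 / 46.5e-6           # chirp gradient [Hz/s] (4 GHz swept in ~46.5 us, assumed)

def if_phase(f_min_hz, t_s):
    """Phase of the dechirped (IF) signal for a single point target."""
    return 2 * np.pi * (mu * tau * t_s + f_min_hz * tau - 0.5 * mu * tau ** 2)

for f_min in (77.000e9, 77.000e9 + 50e6):        # two different initial frequencies
    phi0 = if_phase(f_min, 0.0) % (2 * np.pi)     # constant phase of the IF tone
    print(f"f_min = {f_min/1e9:.3f} GHz -> IF phase offset = {phi0:.3f} rad")
```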
DOA and Amplitude Estimation
The principles of estimating DOAs and complex amplitude using AF [4] are also briefly described. Assume an M -element Uniform Linear Array (ULA) and L(< M ) far-field sources under an Additive White Gaussian Noise (AWGN) environment, where the source number L is equivalent to the number of objects in radar systems.
Let X_m(t) be the array input signal at the m-th element. The AF method [4] estimates DOAs by solving a polynomial equation, where K is the programmed (pre-determined) source number, which satisfies L ≤ K < M and is assigned in advance. The parameter X_m(f) denotes the Fourier coefficient at frequency f at the m-th element. Note that the value of K is set large enough so as not to miss true sources.
Then the complex amplitude of each source can be retrieved by the linear regression scheme [4], where z_k and a_k (k = 1, ..., K) are the phase difference and the complex amplitude of the k-th source, respectively. Finally, K source candidates are obtained, which means that more candidates than the number of true sources are estimated. Once DOAs are estimated, we should distinguish which of them are true and which are spurious sources.
Spurious Elimination Using FMCW Signal with Different Initial Frequencies
Our method distinguishes true and spurious sources using the relationship between the phases of the Tx and Rx signals. Fig. 1 shows a brief overview of the proposed method. Let f_min^(n) be the minimum frequency of the n-th chirp signal; in classical FMCW radar it is fixed for all n. In the proposed method, the minimum frequencies {f_min^(n)} (n = 0, ..., N−1) are changed along a given frequency pattern, as in the right figure of Fig. 1, where N denotes the number of chirp signals. That is, we set a binary code that represents the relation between neighboring minimum frequencies, as in (5). Note that spurious signals might have similar DOA values among the different chirp signals. However, we can distinguish true sources from spurious sources because the phases of true sources follow (1) but those of spurious signals do not. Therefore, the true source candidates can be classified by grouping the estimated phases that have similar DOA values among the Tx signals. In a similar manner to the minimum frequency relation in (5), we encode the phase sequence p_k^(n) of the k-th candidate (k = 1, 2, ..., K) into a binary code, as in (6). The behavior of the minimum frequency code in (5) will be similar to that of the phase sequence code in (6) if the candidate corresponds to a true source, and will differ if the candidate is spurious. The k-th candidate can then be judged by evaluating whether the stacked and normalized correlation in (7) becomes larger than a certain threshold, where the parameter ε determines whether it is a true source or a spurious source. Finally, the estimated source number is determined by the number of candidates whose stacked correlation in (7) exceeds ε.
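The sketch below illustrates one plausible reading of the encoding and correlation steps in (5)-(7): the minimum-frequency pattern and each candidate's phase sequence are reduced to binary up/down codes, their normalised correlation is computed, and candidates exceeding the threshold ε are kept as true sources. The sign-based encoding and the example sequences are our assumptions for illustration; the paper's exact definitions may differ.

```python
# Assumption: sign-of-difference binary encoding and example sequences are illustrative only.
import numpy as np

def encode(seq):
    """Binary code of the relation between neighbouring values (1 = increase)."""
    return (np.diff(np.asarray(seq, dtype=float)) > 0).astype(float)

def stacked_correlation(freq_pattern, phase_seq):
    """Normalised correlation between the f_min code and a candidate's phase code."""
    a, b = encode(freq_pattern), encode(phase_seq)
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

f_min_pattern = [0, 1, 0, 0, 1, 1, 0, 1]                              # chirp frequency offsets
candidates = {
    "candidate 1 (true)": [0.1, 0.9, 0.2, 0.15, 0.95, 1.0, 0.1, 0.9],
    "candidate 2 (spurious)": [0.8, 0.7, 0.75, 0.2, 0.3, 0.9, 0.85, 0.1],
}

eps = 0.7
for name, phases in candidates.items():
    r = stacked_correlation(f_min_pattern, phases)
    verdict = "true source" if r > eps else "rejected as spurious"
    print(f"{name}: correlation = {r:.2f} -> {verdict}")
```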
Simulation
The performance of the proposed scheme is evaluated through simulation. We assume a 3 × 4 FMCW-MIMO radar system whose center RF frequency and bandwidth are 79 GHz and 4 GHz, respectively. Three Tx and four Rx antenna elements are placed on a line with an interval of 2.0λ between Tx elements and 0.5λ between Rx elements, which is equivalent to a 12-element ULA. The simulation parameters are as follows: the IF sampling frequency is 2.75 MHz, the number of time samples per chirp signal is 128, the SNR is 20 dB, and the distance between the targets and the radar system is 1.0 m.
Behavior of Phase Characteristics
First, we evaluated the phase characteristics in the case where the true and programmed source numbers are given as L = 2 and K = 6, respectively, and the true DOAs are set to −32 and 38 deg. Fig. 2 shows the behavior of the estimated phase characteristics p_k^(n) for the K = 6 candidates when Tx transmits the chirp pattern in Fig. 1. The signals whose DOAs are close to −32 and 38 deg correspond to "candidate 1" and "candidate 2", respectively.
We see from Fig. 2 that the phase characteristics of the first and second candidates are almost the same as the initial frequency pattern in the right figure of Fig. 1, while the other four phase characteristics are very different from it. Therefore, we can expect that the stacked correlation in (7) will become large for k = 1, 2 and small for k = 3, 4, 5, and 6.
Source Number Estimation by Stacked Correlation
We also evaluated the distribution of the stacked correlation in a different scenario. Fig. 3 shows the stacked histogram of the correlation coefficients in (7) calculated for each candidate over 1,000 Monte-Carlo trials, with five true sources (L = 5) and six programmed sources (K = 6). One of the five true sources has SNR = 0 dB with a DOA of −44 deg, and the other four sources have SNR = 10 dB with DOAs of −25, −8, 14, and 26 deg. We see from Fig. 3 that the coefficients corresponding to the true signals become almost one. In contrast, the correlation coefficients calculated from the spurious source corresponding to "Candidate 6" are distributed around zero. In this situation, the success rate of our method is 94% when we set the threshold ε to 0.7. By contrast, the classical methods AIC and MDL cannot estimate the number of sources at all (their success rate is 0%). This is because the five true sources are highly correlated, owing to the same distance of 1.0 m from the radar system, so the sample covariance matrix of the array input signal becomes rank-deficient. In principle, AIC and MDL can estimate the number of correlated sources, as mentioned in [2], [3]; however, the conditions here are so severe that those methods do not work at all. Indeed, AIC and MDL estimated the source number as 1 and 11 (equal to 12 elements minus one), respectively, for all 1,000 trials.
Conclusion
In this paper, we proposed a novel source number estimation method with multiple chirp signals that have different initial frequencies. We confirmed that the source number can be estimated more accurately than with conventional methods by exploiting the stacked correlation between the Tx initial frequency pattern and the phase information of the estimated signals. | 2021-08-03T00:06:31.512Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "dd417dc4b40730ea386e6e69e80344378deb1c0d",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/comex/10/9/10_2021SPL0031/_pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fe90b120d3282653c687924bb7e14517bd43886e",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
244345565 | pes2o/s2orc | v3-fos-license | Spontaneous Regression of Metastatic Lesions of Adenocarcinoma of the Gastro-Esophageal Junction
Spontaneous regression of cancer is a rarely recognized entity in modern medicine. Historically, this was recognized and hypothesized that an infection causes immune activation, indirectly stimulating the body to destroy tumor cells. Similarly, immune-oncology has now become a major modality in the treatment of solid and some liquid malignancies. However, now with improved therapeutic modalities in the oncology world, one does not get to appreciate our own immune system’s ability to fight cancer. We present a patient who had spontaneous regression of metastatic adenocarcinoma of the gastroesophageal junction (GEJ). The patient is a 58-year-old female who had presented with early satiety and dysphagia for which she underwent esophagogastroduodenoscopy which showed an esophageal mass and endoscopic ultrasounds (EUSs) confirmed adenocarcinoma of the GEJ with metastasis to the regional lymph nodes and left supraclavicular lymph nodes. The patient had refused to undergo any surgical, medical oncological, or holistic treatments. Interim disease monitoring positron emission tomography-computed tomography (PET-CT) showed resolution of the metastatic sites of gastroesophageal cancer with clinical improvement of her symptoms. She continues to have this distant regression of metastatic gastroesophageal cancer six months after the initial diagnosis. In literature, spontaneous cancer regression has been reported in melanoma, renal cell carcinoma, and basal cell carcinoma. To our knowledge, this is the first case reported of spontaneous regression of metastatic lesions involving adenocarcinoma of the GEJ with no medical or surgical intervention.
Introduction
Spontaneous regression of cancer is defined as the partial or complete disappearance of primary tumor tissue or its metastases in a patient who never received cancer-directed treatment. There has been a significant increase in the available treatments for cancer. One of them is the development of immunotherapy which essentially uses the ability to fight off cancerous cells which tends to otherwise escape the immune-directed killing [1][2]. The immune phenomenon is presumed to be one of the explanations for the spontaneous regression of cancer. It has been postulated that the stimulation of the immune system leads to the destruction of cancer cells, in turn causing tumor regression [3]. Spontaneous regression of cancer is an uncommon phenomenon but has been observed for hundreds of years [4]. Spontaneous regression of esophageal cancer is even more rare, with only a few cases having been reported in the literature [5][6]. We present a 58-year-old female with poorly differentiated adenocarcinoma of the gastroesophageal junction (GEJ) with multiple lymph node metastases, who had resolution of the lymph node metastases without regression of the primary lesion four months later without any cancer-directed therapy. This phenomenon is characterized as clinical category two of the Everson categorization of the spontaneous regression of the pathologically proven distant metastasis. She continues to have this distant regression on positron emission tomography-computed tomography (PET-CT) seven months after the initial diagnosis, with continued clinical stability nine months after her initial diagnosis. The patient is encouraged to pursue oncological treatment on every visit, however, she continues to refuse any oncological intervention for the primary cancerous lesion.
Case Presentation
Our patient was a 58-year-old female who initially presented in November 2020 with a history of dysphagia and early satiety. Her past medical history was significant for gastric bypass done in 2001, multiple ventral hernia repairs, B12 deficiency, and depression. She underwent esophagogastroduodenoscopy (EGD) and was found to have a one-centimeter polyp in the distal esophagus. Biopsy of this polyp revealed a poorly differentiated adenocarcinoma of the GEJ with no evidence of deficient mismatch repair (low probability of MSI-H), and it was HER-2 positive. Initial PET-CT showed 1.6 cm fluorodeoxyglucose (FDG)-avid lateral wall thickening of the distal end of the esophagus with SUV 3.1, with another focus at the GEJ with SUV 2.4; also noted were a 2 cm x 1.6 cm left supraclavicular lymph node with SUV 7.2, 1 cm precarinal lymph nodes, and other nodes including retrocrural, gastrohepatic, periaortic, and aortocaval lymph nodes. EUS showed a medium-sized ulcerating mass measuring two centimeters at the GEJ extending to the gastric pouch. There was also invasion of the muscularis propria. Five malignant-appearing lymph nodes were visualized in the lower paraesophageal mediastinum, celiac, and peri-aortic regions.
Fine needle aspiration of the para-aortic lymph node was positive for metastatic adenocarcinoma. Given these findings, the patient was staged as T3N2M1a. Her combined positive score (CPS) was 20, and she was offered palliative chemotherapy and immunotherapy. The patient expressed the wish not to pursue any cancer-directed therapies and made no changes to her diet or other holistic measures. A PET-CT repeated four months later showed findings consistent with the known primary malignancy but anatomic and metabolic resolution of the metastases to the left supraclavicular lymph node, the intraabdominal lymph nodes, and the previously seen hypermetabolic focus at the GEJ (Figure 1). The supraclavicular lymph node and the other gastric and para-aortic lymph nodes significantly decreased in size and were no longer metabolically active, although the primary lesion had increased in size. The patient had another PET-CT almost seven months after her initial diagnosis, which showed findings very similar to the PET-CT shown below, performed about five months from her initial diagnosis. She did not receive any treatments or medical care at any time between diagnosis and the repeat PET-CT. Nine months after her initial presentation, the patient reported symptomatic resolution of dysphagia. The patient is encouraged to obtain oncological treatment for her persistent primary cancerous lesion at the GEJ; however, she refuses any medical or surgical treatment options at this time.
Discussion
Esophageal cancer has two main subtypes: squamous-cell carcinoma and adenocarcinoma. While squamous-cell carcinoma makes up the majority of esophageal cancer worldwide, adenocarcinoma has become the predominant subtype in the western world. The incidence of esophageal adenocarcinoma in 1975 was 0.4 per 100,000 compared to 2009, which was 2.6 per 100,000 [7]. The spike in cases has been attributed to the increase in obesity and gastroesophageal reflux disease. A high BMI index increases the risk of esophageal adenocarcinoma by a factor of 2.2 [8]. Adenocarcinoma is more prevalent in males than in females.
For esophageal carcinoma, it is crucial to determine the extent of lymphatic involvement. Our patient had extensive lymph node involvement, with PET-CT showing six regions involved and EUS confirming five malignant-appearing nodes. The presence and extent of lymph node involvement have prognostic value, with involvement of more than four nodes carrying a poorer prognosis [9]. Two lymphatic plexuses are present in the esophagus, and lymphatic fluid may move upward, downward, or bidirectionally. Thus, fluid may move to any nodal bed and region of the thorax. Studies have shown that esophageal carcinoma metastasizes to cervical, thoracic, and abdominal lymph node stations, regardless of the primary tumor location [10]. One particular study showed that in GEJ tumors, abdominal lymph nodes were positive in all cases, thoracic lymph nodes were positive in 40%, and cervical lymph nodes were positive in 20% [11]. A possible explanation for the involvement of cervical lymph nodes in GEJ tumors could be the presence of an extensive lymphatic network in the submucosa and even in the lamina propria of the esophagus, with both intramural and longitudinal lymphatic drainage [10]. Another interesting phenomenon seen in esophageal carcinoma is skip metastasis: distant lymph nodes that bypass the first lymph node and are involved directly at the second or third node. Patients with skip metastasis had a significantly better five-year survival rate than patients with continuous metastasis [12].
Spontaneous regression of cancer is defined as the partial or complete disappearance of primary tumor tissue or its metastases in a patient who never received cancer-directed treatment. There are four clinical categories as defined by Everson: 1) regression in the primary tumor, 2) regression of metastatic tumor (confirmed via histology), 3) regression of metastatic tumor (no pathological confirmation), and 4) regression of presumptive metastases by radiography [13]. Our patient could be classified as category two since she had fine needle aspiration (FNA) positive for adenocarcinoma. Spontaneous tumor regression is an uncommon phenomenon but one that has been observed for hundreds of years. It was initially known as St. Peregrine tumor, being named after Peregrine Laziosi, who developed a bone tumor of his tibia that spontaneously disappeared. In 1966 Everson and Cole wrote about 176 cases of spontaneous regression from 1900 to 1964. Cole went on to publish additional works in the Journal of Surgical Oncology to address spontaneous regression. He speculated that there could be antigens in our body that stimulate our immune system, causing regression of cancer [13]. Factors associated with spontaneous regression primarily include apoptosis, immune system, and conditions in the tumor microenvironment, particularly the presence of inhibitors of metalloproteinases and angiogenic factors and decreased epithelial cadherin proteins [14]. Common infections have also been implicated in the role of spontaneous regression. Coley, in 1891, wrote about the inoculation of his patients with erysipelas and its curative effect [15]. Since this, many other studies have been done to find the underlying mechanism. It has been postulated that the stimulation of the immune system activates resting dendritic cells, lymphocytes, and natural killer cells, increasing the immunorecognition of tumor cells and leading to the destruction of cancer cells, in turn causing tumor regression [16].
Spontaneous tumor regression is much less reported in esophageal cancer. Cancers most likely to undergo spontaneous tumor regression include melanoma and cutaneous basal cell carcinoma, testicular germ cell, and renal cell carcinoma. Of the reported cases, the subtypes include squamous cell carcinoma and small cell carcinoma [5][6]17]. Our case is unique in that spontaneous regression occurred in our patient who has esophageal adenocarcinoma. To our knowledge, this is the first case of esophageal adenocarcinoma, which had spontaneous regression in its metastatic lymph nodes without any surgical resection of the primary.
Conclusions
Spontaneous regression of cancer is an uncommon phenomenon in which either the primary or the metastatic lesions disappear without oncological intervention. We attribute this phenomenon to immune-directed cancer cell destruction. We presented a 58-year-old female who had biopsy-proven poorly differentiated adenocarcinoma of the GEJ with multiple lymph node metastases and had resolution of the metastatic sites without any oncological treatment. We encourage this entity to be recognized. However, we do not recommend watchful waiting as an alternative to definitive therapeutic modalities. If a patient does not wish for treatment at the time of diagnosis, we recommend surveillance without any immunosuppressive therapies. If regression of the tumor does occur, a different treatment strategy could then be proposed, and the patient should be given the choice of all available modalities.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2021-10-16T15:18:29.885Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "e4fde8d3521817db04b9e71ef7a9233e3aa89401",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/69048-spontaneous-regression-of-metastatic-lesions-of-adenocarcinoma-of-the-gastro-esophageal-junction.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f568f3b4036f4aaab01bc62db1bfe430c34e1831",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202078058 | pes2o/s2orc | v3-fos-license | Detoxification of oil-contaminated soils by using humic acids
This paper studies the effect of peat mechanoactivation on the structure and on the biostimulating and detoxifying properties of the isolated humic acids (HA) in oil-contaminated soil. It is shown that mechanical activation of peat increases the HA yield by a factor of 2-3 and changes the fragment composition of HA: the percentage of aromatic and oxidized alkyl fragments increases, while the amount of oxygen-containing functional groups decreases. The binding of petroleum organic compounds occurs due to the presence of an aromatic skeleton in the structure of HA. The increase in the proportion of aromatic fragments in the structure of mechanically activated HA increases their affinity for hydrophobic oil compounds, thereby providing a detoxifying ability. The maximum detoxifying ability of mechanically activated HA determines their biostimulating properties in the wheat seed germination experiment. The biostimulating effect of the mechanically activated HA samples manifests itself mainly as an increase in the height of the sprout stem and the dry mass of the aerial part of the plant. The processes of oil biodegradation in the soil in the presence of HA are investigated. It is shown that the amount of bitumoids extracted from the soil in the presence of HA is reduced by 30%. The content of hydrocarbons in bitumoids decreases by 50% due to the biodegradation of low molecular weight alkanes. At the same time, the degree of branching of hydrocarbon chains increases, which suggests microbiological activity. An increase in alcohol-benzene resins in the composition of bitumoids indicates the stimulation of hydrocarbon-oxidizing bacteria by humic acids. According to IR spectroscopy data, the content of paraffin hydrocarbons decreased during the destruction of oil by the soil microflora.
Introduction
Oil and petroleum products are widespread pollutants in nature. Hence, detoxification, cleaning, and restoration of the properties and fertility of soils polluted by oil and oil products is one of the most important and urgent tasks for oil-producing and oil-refining enterprises. In the environment, soil self-purification is extremely slow and needs to be stimulated. To remediate oil-polluted soils, a combination of mechanical, chemical and biological methods is used. When microbiological methods are used, complex problems arise in the interaction between the populations introduced into the soil and the natural microflora. The microflora includes hydrocarbon-oxidizing microorganisms, which are permanent components of soil biocenoses. The catabolic activity of these microorganisms can be increased by introducing peat humic acid-based preparations into the contaminated soil.
Humic substances make up 50-80% of the organic matter of soils and solid fossil fuels. The presence of a wide spectrum of functional groups in molecules of humic acids (HA), in combination with aromatic fragments, determines their ability to enter into ionic and donor-acceptor interactions and to form hydrogen bonds [1][2][3][4][5][6][7].
The complex composition of humic acid (HA) macromolecules and their stochastic nature require establishing the relationship between the sorption and detoxifying properties of HA and their structure. One method for the structural modification of HA is mechanical activation of solid caustobioliths. It was previously found that mechanical activation of solid caustobioliths changes the structural parameters of the isolated HA and increases their efficiency in interacting with ecotoxicants and heavy metals [8][9][10].
The purpose of this work was to study the detoxifying and biostimulating ability of humic acids in relation to oil.
Experimental part
The objects of the study were HA of sphagnum peat, HA1 of the mechanically activated peat without additives, and HA2 of the mechanically activated peat with the addition of solid alkali (3% NaOH of analytical grade). Mechanical activation of peat was carried out under the following experimental conditions: rotational speed of the drums 1820 rpm and centrifugal acceleration 600 m/s². The grinding bodies were steel balls of 8-10 mm diameter. The weight of balls per drum loading was 0.2-0.5 kg, the sample weight was 15-20 g, and the treatment time was 2 minutes. Humic acids were isolated from peat by treating it with 0.1 N NaOH at a temperature of 90 °C, in an amount of 150 ml of solution per 1 g of sample, for 1 hour. The alkaline extraction was repeated three times. Humic acids in the alkaline solution were precipitated with 4% HCl to pH 2. The brown amorphous HA precipitate was separated by centrifugation, washed with distilled water to pH 7, and dried in a Petri dish in a vacuum oven to constant weight.
The elemental composition of HA was determined using a Carlo Erba Strumentazione 1106 analyzer (Italy). The fragment composition was analyzed by ¹³C NMR spectroscopy using a Bruker 300 radiospectrometer (Germany) at an operating frequency of 100 MHz with the fast Fourier transform accumulation method. The spectral sweep width was about 26,000 Hz, the registration time of the free induction decay (FID) signal was 0.6 s, the interval between pulses (Td) was 8 seconds at a pulse width of 90°, and the duration of spectrum accumulation was 24 hours. A 50-70 mg portion of each preparation was dissolved in 0.7 cm³ of 0.3 M NaOD.
To assess the detoxifying properties of the humic preparations, laboratory vegetation experiments were conducted on soil contaminated with crude oil. A universal, uniform soil from Einheitserde Werkverband E0 (Germany) was used as a model soil. To simulate pollution, crude oil at a concentration of 1.5 wt% was used. The dose of nutrient substrate applied per 100 g of soil was 2 ml of HA solution with a concentration of 0.5 g/l. For this purpose, the calculated amount of HA was dissolved in a small volume of alkali (0.1 N NaOH), neutralized with 1 M HCl solution to pH 7.0, and made up to one liter with distilled water. The HA solutions were applied to the soil on the 3rd and 5th days of cultivation. Wheat of the genetically unmodified cultivar Samurai served as the test object. The duration of the experiment was 20 days at a temperature of 23-27 °C with natural light and daily watering or irrigation with the appropriate HA solutions. The physiological activity of the humic preparations was assessed by their effect on the height of the seedling stem and its biomass using the standard procedure described in [11].
After the experiment, the content of residual oil in the soil was determined by extraction with chloroform. To determine the group composition of crude and residual oil, column chromatography was used. Destructive changes in the oil hydrocarbons were analyzed by IR spectroscopy using a Nicolet 5700 spectrophotometer (FT-IR) in KBr tablets at a ratio of 1:300 within the range from 400 to 4000 cm⁻¹. Based on the IR spectroscopy results, spectral coefficients were calculated. They are ratios of the normalized optical densities of the following absorption bands: D1610/D725 (coefficient of aromatization), D720/D1380 (content of paraffin structures), D750/D720 (content of condensed aromatics), D725/D1465 (content of paraffin structures), and D1380/D1465 (branching coefficient).
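To make the calculation concrete, the short Python sketch below computes these coefficients as simple ratios of baseline-corrected optical densities; the absorbance values are illustrative placeholders rather than measured data from this study.

```python
# Sketch: IR spectral coefficients as ratios of baseline-corrected
# absorbances (optical densities) at characteristic wavenumbers.
# The absorbance values below are placeholders, not measured data.

bands = {1610: 0.42, 1465: 0.55, 1380: 0.38, 750: 0.12, 725: 0.20, 720: 0.18}

def coeff(numerator_cm, denominator_cm, spectrum=bands):
    """Return the ratio D_numerator / D_denominator of optical densities."""
    return spectrum[numerator_cm] / spectrum[denominator_cm]

coefficients = {
    "C1 (aromatization, D1610/D725)": coeff(1610, 725),
    "C2 (condensed aromatics, D750/D720)": coeff(750, 720),
    "C3 (paraffin structures, D720/D1380)": coeff(720, 1380),
    "C4 (paraffin structures, D725/D1465)": coeff(725, 1465),
    "C5 (branching, D1380/D1465)": coeff(1380, 1465),
}

for name, value in coefficients.items():
    print(f"{name}: {value:.2f}")
```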
Results and discussion
Table 1 presents the results of studies of the effect of peat mechanoactivation (MA) on the yield of HA and on their elemental and fragment composition. Preliminary mechanical activation of peat increases the yield of HA by a factor of 2-3 compared with the initial peat. The highest amount of HA is isolated from peat mechanically activated with 3% NaOH. The increase in the yield of HA is possibly due to the destruction of hardly hydrolyzable substances. The elemental analysis data show that the H/C and O/C atomic ratios are decreased in HA from the mechanically activated samples, which suggests an increase in the degree of aromaticity. The structural parameters calculated from the ¹³C NMR spectroscopy results are also shown in Table 1. The data indicate a change in the distribution of carbon atoms among the structural fragments of HA depending on the conditions of mechanochemical activation. The percentage of carbon in aromatic and oxidized aliphatic fragments in the HA1 and HA2 molecules from mechanically activated samples is higher than in HA of the initial peat, while the number of oxygen-containing groups has decreased.
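As an aside, the H/C and O/C atomic ratios quoted in Table 1 follow from the elemental analysis by dividing each element's weight percentage by its atomic mass; the sketch below illustrates the arithmetic with placeholder weight percentages, not the values measured here.

```python
# Sketch: converting elemental analysis (wt%) into H/C and O/C atomic ratios,
# as used to gauge the degree of aromaticity of humic acids.
# The weight percentages below are illustrative placeholders.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def atomic_ratios(wt_c, wt_h, wt_o):
    """Return (H/C, O/C) atomic ratios from weight percentages."""
    n_c = wt_c / ATOMIC_MASS["C"]
    n_h = wt_h / ATOMIC_MASS["H"]
    n_o = wt_o / ATOMIC_MASS["O"]
    return n_h / n_c, n_o / n_c

h_c, o_c = atomic_ratios(wt_c=55.0, wt_h=5.0, wt_o=35.0)
print(f"H/C = {h_c:.2f}, O/C = {o_c:.2f}")  # lower values suggest higher aromaticity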
An analysis of the chemical composition of the oil-contaminated soil one month after the introduction of HA from the initial peat showed a 30% decrease in the amount of extractable bitumoids (Table 2). The amount of extracted bitumoids decreased by 42-47% after the introduction of the mechanically activated samples HA1 and HA2 into the soil. A decrease in the percentage of paraffin-naphthenic fractions, in particular n-alkanes, was observed in the composition of the bitumoids (Table 2). At the same time, the content of resins, in particular acidic alcohol-benzene resins, increased due to the new formation of oxygen-containing compounds. This suggests destructive oxidative activity of microorganisms stimulated by the introduction of HA into the soil.
Based on the analysis of the hydrocarbon composition of the bitumoids extracted from the soil, biodegradation coefficients were calculated (Table 3). A decrease in the n-C17/n-C27 ratio in bitumoids extracted from the soil with added HA indicates a decrease in the percentage of low molecular weight n-alkanes, which are subject to biodegradation to a greater degree. The index i-C19/i-C17, which characterizes the degree of branching of hydrocarbon molecules, increases in samples with HA additives and confirms the occurrence of more intensive oil biodegradation. The index 2C29/(C28+C30) reflects the degree of destruction of high molecular weight n-alkanes; its values are similar for hydrocarbons isolated from soils containing HA. Table 4 presents the spectral coefficients characterizing the processes of oil biodegradation by the soil microflora in the presence of HA additives. As follows from Table 4, the oil biodegraded by the soil microflora stimulated by the addition of HA2 is characterized by increased coefficients C1 and C2, which indicates an increase in the fraction of aromatic structures in the oil. Biodegradation of paraffin hydrocarbons is confirmed by a decrease in the C3 and C4 coefficients and an increase in the branching coefficient C5. The biostimulating and detoxifying effect of the HA solutions was manifested in an increase in the height of the seedling stem and the dry weight of the aerial parts of the plants compared to the control (Figure 1). The greatest effect was obtained when the soil was treated with solutions of the mechanically activated samples HA1 and HA2. Thus, it has been found that HA exhibit biostimulating and detoxifying properties in oil-contaminated soil. The greatest effect in this case was provided by HA of peat mechanically activated in the presence of alkali. The presence of the aromatic framework provides the ability of HA to bind organic compounds; therefore, as the contribution of the aromatic framework to the HA structure increases, their affinity for hydrophobic organic compounds increases. Maximum aromaticity is characteristic of these HA, which determines their high binding and detoxifying ability with respect to oil under the conditions of the vegetation experiment.
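For readers who wish to reproduce such indices from chromatographic data, a minimal Python sketch is given below; the peak areas are invented for illustration, and only the ratio definitions (n-C17/n-C27, i-C19/i-C17, 2C29/(C28+C30)) are taken from the text above.

```python
# Sketch: biodegradation indices computed from GC peak areas of individual
# hydrocarbons in the bitumoid fraction. Peak areas are placeholders.

peak_area = {
    "n-C17": 120.0, "n-C27": 95.0,      # low- vs high-molecular-weight n-alkanes
    "i-C19": 40.0, "i-C17": 33.0,       # branched (iso-) alkanes
    "n-C28": 50.0, "n-C29": 60.0, "n-C30": 48.0,
}

ratio_light_heavy = peak_area["n-C17"] / peak_area["n-C27"]   # drops as light n-alkanes degrade
branching_index = peak_area["i-C19"] / peak_area["i-C17"]     # rises with microbial reworking
high_mw_index = 2 * peak_area["n-C29"] / (peak_area["n-C28"] + peak_area["n-C30"])  # 2C29/(C28+C30)

print(f"n-C17/n-C27 = {ratio_light_heavy:.2f}")
print(f"i-C19/i-C17 = {branching_index:.2f}")
print(f"2C29/(C28+C30) = {high_mw_index:.2f}")
```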
The processes of oil biodegradation in the presence of HA are accompanied by a decrease in the amount of bitumoids and low-molecular-weight hydrocarbons, and by the formation of new high-molecular-weight hydrocarbons. The increase in alcohol-benzene resins in the composition of the bitumoids indicates a stimulating effect of humic acids on hydrocarbon-oxidizing microorganisms. | 2019-09-10T00:27:58.447Z | 2019-08-23T00:00:00.000 | {
"year": 2019,
"sha1": "4e12230b476e0fe49f5a0e28b62993907cd5b202",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/597/1/012020",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "12f402ddee03c162b60a23f46cd578000bd794d6",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
} |
11825172 | pes2o/s2orc | v3-fos-license | Responsible Editor: M. Parnham.
Objective The aim was to create pathological changes in mice relevant to human smoke exposure that can be used to further understand the mechanisms and pathology of smoke-induced inflammatory disease. Methods Mice were exposed to tobacco smoke or lipopolysaccharide (LPS) to generate an inflammatory infiltrate within the lungs. Results Tobacco smoke exposure over a 4 day period led to neutrophilia in the lungs of BALB/c mice. Within the inflammatory exudates, significant changes were also seen in protein levels of IL-1β, IL-6, MIP-2, KC (IL-8) and TIMP-1 as measured by ELISA. Further protein changes, as measured via multiplex analysis, revealed increased levels of MMP-9, MDC, LIF and MCP-1, amongst other mediators. Major changes in whole lung tissue gene expression patterns were observed. The neutrophilia seen after smoke exposure was steroid-insensitive, relative to doses of steroid needed to reduce LPS-driven neutrophilia in controls. This exposes pathological switches that are changed upon exposure to tobacco smoke, rendering steroids less effective under these conditions. Challenge of C-C chemokine receptor type 1 (CCR1) KO mice in the tobacco smoke model showed that lack of this gene protected the mice from smoke-induced inflammation. Conclusions This suggests the CCR1 receptor has a key role in the pathogenesis of smoke-induced inflammation.
Introduction
Chronic obstructive pulmonary disease (COPD) is a major cause of death and morbidity in the Western world [1]. It is becoming increasingly common and will soon be the third leading cause of death in the world. COPD is increasing in developing countries [2]. The main causative agent in COPD is cigarette smoke [3]. In addition, a genetic component is involved in the development of COPD [4]. Treatments for COPD include inhaled corticosteroids (ICS), beta-2 agonists, muscarinic receptor antagonists and combinations of these agents [5].
We have established a mouse model of inflammation which is triggered by tobacco smoke. This model is being used to understand the link between pathological events in the inflammatory model and human disease. The model can be used to test target mechanisms for use as potential new anti-inflammatory treatments and to understand the pharmacology of the inhibitors of such mechanisms. We have assessed the activity of steroids in the model and found steroids to be relatively inactive in comparison to steroid activity in a separate lipopolysaccharide (LPS) driven inflammatory model. Finally, we show the effects of CCR1 transgenic (KO) mice on the inflammatory response.
The model we describe is a new, robust model that, for an in vivo application, is of reasonably high throughput. We hope the model can serve as a platform for testing other mechanisms and hypotheses to further understand the pathobiology of cigarette smoke-induced inflammation. This could also include the addition of other components to the protocol; such studies have already been carried out with the addition of bacteria to a 4 day smoke exposure [6].
Animals
Female BALB/c mice (weighing 18-22 g) were purchased from Taconic Europe A/S (Denmark), housed 4-6 per cage and allowed to acclimatize for 1 week before experiments. Animals were provided with food, R70 pellets (Lantmännen, Sweden), and water ad libitum. Their body weights were measured prior to the first exposure and at termination. CCR1 KO mice (weighing 18-22 g) backcrossed to C57BL/6 for ten generations were bought from the Children's Medical Center Corporation, Boston, MA, USA. Age-matched C57BL/6 mice were used as controls. Female mice were used in all experiments.
Animals were handled in conformity with standards established by Council of Europe ETS123 AppA, Swedish legislation and AstraZeneca global internal standards.
Tobacco smoke exposure

Mice were exposed to cigarette smoke using a whole body smoke exposure unit SIU48 (PromechLab AB, Vintrie, Sweden). Up to 64 mice were placed in a sealed exposure chamber, divided into separate compartments, and exposed to mainstream tobacco smoke or room air (control mice). Smoke was generated from 1R3F research cigarettes (Tobacco and Health Research Institute, University of Kentucky) with the filters removed.
Smoke was drawn into the box with a vacuum flow that was aligned to 16 puffs per cigarette. Air was drawn into the box after each puff, and the control unit automatically cycled the opening and closing of air and smoke inlets (5 sec air and 10 sec smoke). Side stream smoke was removed via exhaust. Animals were exposed to 12 cigarettes, twice daily, for 1, 2, 3, 4, or 9 days. There was a 5 h interval between the exposures. The SIU48 allows for monitoring of smoke input in real time to ensure consistency between exposures. Terminations and analysis were carried out in each case 16 h after the last smoke exposure.
The exposure system and methods are identical to those already described and published in Gashler et al. [6]. Cotinine levels were measured by ELISA (Bio-Quant, San Diego, CA, USA) in the blood of mice after 4 days of smoke exposure. In all mice, cotinine levels were approximately 350 ng/ml. In addition, carboxyhaemoglobin levels were measured by spectrometry and were approximately 7% after smoke exposure.
LPS exposure
Female BALB/c mice (weighing 18-22 g) were used. Non-anaesthetised mice were placed into a ventilated chamber that was directly connected to a Pari LC® JetStar nebuliser (PARI Respiratory Equipment, Inc., Midlothian, VA, USA), and exposed either to 1 mg/ml P. aeruginosa-derived LPS (L9143, Sigma-Aldrich) or to the vehicle (0.9% NaCl) for 10 min.
Bronchoalveolar lavage
Mice were anaesthetised by an intraperitoneal (i.p.) injection of sodium pentobarbital (60 mg/kg) either 16 h after the last smoke exposure or 24 h after LPS challenge. After semi-excision of the trachea, a plastic catheter was inserted, and the bronchoalveolar lavage (BAL) fluid was collected by passive/gravity flow of phosphate buffered saline (PBS) (2 ml × 2 times; a 2 ml syringe placed at 23 cm height) into the lungs. The retrieved fluid from both washes, kept on ice at 4°C, was pooled and centrifuged (1,200 rpm × 10 min, 4°C). The supernatant was aliquoted and stored at -70°C for mediator analysis; the cell pellet was resuspended in 0.25 ml PBS and maintained at 4°C until cell determination.
Cell count and differentiation
Total cell number in BAL was determined using a semi-automatic cell counter (Sysmex F-800). Differential cell counts were performed using standard morphological criteria on May-Grünwald and Giemsa (Merck) stained cytospins (50,000 cells/slide, 500 rpm × 3 min, Shandon CytoSpin 3 cytocentrifuge). At least 200 cells were counted per cytospin. Leukocyte numbers were determined by multiplying the percentage of each leukocyte subpopulation by the total number of cells for each sample and are expressed as cell number/sample.
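A minimal sketch of this conversion, assuming illustrative total and differential counts rather than actual study data, is shown below.

```python
# Sketch: converting differential counts (percentages from a cytospin) into
# absolute leukocyte numbers per BAL sample. Values are illustrative.

def absolute_counts(total_cells, differential_percent):
    """Multiply each subpopulation percentage by the total BAL cell count."""
    return {cell: total_cells * pct / 100.0 for cell, pct in differential_percent.items()}

differential = {"neutrophils": 62.0, "macrophages": 35.0, "lymphocytes": 3.0}  # % of >=200 counted cells
counts = absolute_counts(total_cells=4.5e5, differential_percent=differential)

for cell_type, n in counts.items():
    print(f"{cell_type}: {n:.0f} cells/sample")
```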
Mediator measurements and multiplex analysis
Mediators (KC, MIP-1α, MIP-2, IL-1β, IL-6, TIMP-1) were determined using ELISA kits (R&D Systems, USA) according to the manufacturer's instructions. In some studies, BAL fluid samples were sent to Rules Based Medicine (Texas, USA) for multiplex analysis.
Treatment with compounds
The mice were anaesthetised with an isoflurane mixture (N₂O:O₂, 1.2:1.4, and 4% isoflurane), placed in a supine position at a 30°-40° angle and intratracheally (i.t.) instilled with either vehicle or fluticasone propionate (FP) in a volume of 1 ml/kg, 1 h prior to LPS or tobacco smoke exposure. The topical instillations were performed using a modified metal cannula with a bulb-formed top.
Statistical analysis
Data are expressed as mean ± SEM. Statistical analysis was performed using Student's t test for samples with unequal variances. P values <0.05 were considered statistically significant.
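For illustration, this comparison corresponds to Welch's unequal-variance t test; the sketch below shows how it could be computed in Python with SciPy on made-up group values.

```python
# Sketch: the t test for samples with unequal variances (Welch's t test),
# as applied here to compare two treatment groups. Data are illustrative.
from scipy import stats

air = [1.2, 0.9, 1.4, 1.1, 1.0, 1.3, 0.8, 1.2]     # e.g. BAL neutrophils (x10^4), air-exposed
smoke = [3.8, 4.5, 2.9, 5.1, 4.2, 3.6, 4.9, 3.3]    # e.g. BAL neutrophils (x10^4), TS-exposed

t_stat, p_value = stats.ttest_ind(air, smoke, equal_var=False)  # Welch correction
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant = {p_value < 0.05}")
```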
Gene expression analysis
Expression levels of more than 39,000 transcripts were measured in whole lung tissue from mice exposed to fresh air (n = 10) or cigarette smoke (n = 10) using Affymetrix GeneChip® Mouse Expression Arrays 430A and 430B (Affymetrix, Santa Clara, CA, USA). Whole lung tissue was homogenized using a dismembrator (Mikro-Dismembrator U, B. Braun Biotech International, PA, USA) and RNA isolated according to the TRIzol® protocol (Invitrogen, Carlsbad, CA, USA) with an additional purification using RNeasy columns (Qiagen, Hilden, Germany). Total RNA was measured spectrophotometrically and the quality assessed by visualization of ribosomal RNA bands on 1% agarose gels. Ten micrograms of total RNA from each sample was used to generate hybridisation probes according to the Affymetrix GeneChip Expression Analysis Technical Manual (Affymetrix). Double-stranded cDNA was synthesised (Invitrogen) and subsequently in vitro transcribed in the presence of biotinylated nucleotides using an Enzo BioArray™ HighYield™ RNA Transcript Labelling Kit (Enzo Life Sciences, Inc., Farmingdale, NY, USA). Fifteen micrograms of fragmented cRNA were hybridised to each array. Hybridisation, washing and staining of the arrays were performed according to the instructions provided by Affymetrix. The GCOS software from Affymetrix was used to generate transcript expression level values, signals, and corresponding present call values as measures of the certainty of detection. Group comparisons between the fresh air and cigarette smoke-treated groups were made using a modified t test, Samroc [7], and overall differences between samples analysed with Principal Component Analysis in SIMCA-P+ (Umetrics AB, Umeå, Sweden) and visualised in Spotfire® DecisionSite® (TIBCO Spotfire Inc., Gothenburg, Sweden).
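As a rough sketch of this analysis workflow, the Python example below runs per-probe-set group comparisons and a principal component analysis on a mock expression matrix; it uses a plain t statistic rather than the Samroc-modified statistic and the SIMCA/Spotfire tools used in the study, and all values are randomly generated placeholders.

```python
# Sketch: group comparison and overview of expression data, loosely mirroring
# the workflow described above (per-probe-set statistics plus PCA across samples).
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 1000))        # 20 samples (10 air, 10 smoke) x 1000 probe sets
groups = np.array([0] * 10 + [1] * 10)    # 0 = fresh air, 1 = cigarette smoke

# Per-probe-set t statistics between the two exposure groups
t_stats, p_values = stats.ttest_ind(expr[groups == 0], expr[groups == 1], axis=0)

# Principal component analysis to visualise overall differences between samples
scores = PCA(n_components=2).fit_transform(expr)
print("PC1 range:", scores[:, 0].min(), "to", scores[:, 0].max())
print("Probe sets with P < 0.05:", int((p_values < 0.05).sum()))
```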
Tobacco smoke exposure results in an inflammatory cell influx
The whole body smoke exposure system was developed in collaboration with ProMech (Malmö, Sweden) and enables the exposure of up to 60 animals at a time. The smoke exposure was carried out within a sealed unit where the animals were placed and were free to move. The smoke was allowed to flow through the cabinet and there was a continual flow of air through the exposure area. The smoke exposure system is identical to the one used in a previous publication [6]. The same system was used for air-exposed control animals but no tobacco smoke was in the chamber.
Exposure for 50 min two times a day produced a robust cell influx as measured in the BAL (Fig. 1). Cell influx peaked at day 4, although inflammation was also observed when animals were exposed for 9 days. The inflammatory cell influx was dominated by neutrophils. Other cell types were seen in the BAL in smaller numbers, including macrophages. Experiments where animals were exposed only once per day produced little inflammation (data not shown).
Airway inflammation is characterised by a range of inflammatory mediators
Cytokine mediators were measured in the BAL and these were correlated with the inflammatory cell influx (Fig. 2).
The levels of KC peaked before the peak of inflammatory neutrophils. Further temporal analysis of mediator levels versus neutrophilia also showed the presence of IL-1β, IL-6, MIP-1α, MIP-2, and TIMP-1 with respect to time. IL-1β levels and MIP-1α levels peaked at the same time as neutrophil levels (Fig. 2). Using Rules Based Medicine's analysis, we further characterised the protein infiltrate and saw that a number of mediators increased in the BAL following TS exposure. Proteins showing the greatest increase in the BAL included MMP-9, TIMP-1, MCP-1 and MIP-1 (Table 1). At no time point could any TNFα levels be detected within the BAL.
After TS exposure substantial changes in the expression of genes within the lung are observed

Gene expression changes in the lung were determined using Affymetrix GeneChip® microarrays. TS treatment caused an increase of 685 probe sets and a decrease of 586 probe sets. Changes in BAL fluid protein levels for IL-1β, KC, MCP-1, MMP-9 and TIMP-1 were also seen at the transcriptional level in whole lungs (Table 1).

Fig. 1 Cellular infiltrate in BAL following TS exposure. BALB/c mice (groups of 8) were exposed to either air or TS twice a day for 50 min for 1, 2, 3, 4, or 9 days, then were terminated and cells counted from BAL. Total cells are shown (black columns), neutrophils (grey columns) and macrophages (clear columns). Statistically significant differences relative to air controls are shown as *P < 0.05, **P < 0.01, ***P < 0.001. The data for the air control groups for each day have been pooled and shown as "air".
Response to TS is insensitive to steroids relative to an LPS model
We wished to compare the sensitivity of the inflammatory response to TS with another inflammatory model, so we also used an LPS mouse model to generate BAL-specific neutrophilia (Fig. 3b). We included groups that had been exposed to LPS and treated with different concentrations of a steroid, FP. It can be seen that FP was able to significantly reduce the LPS-driven inflammation. We carried out an analogous experiment in the TS-driven inflammatory model (Fig. 3a). Here, even though the same doses of FP were used, no significant inhibition of the TS-driven neutrophilia could be observed. No other cell types were significantly inhibited by treatment with FP (Fig. 3a). In addition, it can be seen that the levels of neutrophilia in the BAL were lower in the TS-driven system than in the LPS-driven model, and still no significant inhibition with FP could be observed.
Fig. 3 Sensitivity of the tobacco smoke model to steroids relative to an analogous LPS model. a Sensitivity of the TS-induced inflammation to steroids. Four groups of BALB/c mice (8 mice per group) were exposed to air, tobacco smoke, tobacco smoke plus 30 µg/kg of FP, or tobacco smoke plus 300 µg/kg FP. The cellular response in BAL is shown, reflecting total cells, macrophages and neutrophils. Statistically significant differences between groups are shown as *P < 0.05, **P < 0.01, ***P < 0.001. b Sensitivity of LPS-induced inflammation to steroids. Four groups of BALB/c mice (8 mice per group) were exposed to air, LPS, LPS plus 30 µg/kg of FP, or LPS plus 300 µg/kg FP. The cellular response in BAL is shown, reflecting total cells, macrophages and neutrophils. Statistically significant differences between groups are shown as *P < 0.05, **P < 0.01, ***P < 0.001.

Mice lacking the CCR1 receptor have a reduced inflammatory response to TS

We wished to further characterise the acute tobacco smoke model. Given that we observed the CCR1 ligand MIP-1α in the BAL of animals after tobacco smoke exposure, we investigated the role of the MIP-1α receptor, CCR1, in the tobacco smoke model. We carried out a 4 day TS exposure on mice lacking the CCR1 gene (CCR1 knockout) and compared this to wild type mice. It can be seen that the inflammatory response to TS is dramatically reduced in the CCR1 KO mice. This is observed as a reduction in neutrophils in the BAL (Fig. 4) as well as a reduction in IL-1β and MIP-1α in the BAL (Fig. 4). The levels of KC in the CCR1 KO animals were not reduced but were, in fact, elevated in the BAL of the transgenic mice.
Discussion
Novel anti-inflammatories are sought for the treatment of COPD. Mouse models are being increasingly used to understand smoke-induced inflammation [8] and are used within the pharmaceutical industry to support drug discovery programs. Many tobacco smoke models are run as chronic models, often with changes in lung structure measured by mean linear intercept of airspaces (for reviews, see [9,10]). There are far fewer acute models of cigarette exposure, and it was our intention to develop a new acute model of TS-driven inflammation with some characteristics of smoke-induced inflammation in humans. We also think that the steroid insensitivity of the model is key and that compounds showing greater efficacy in the tobacco smoke model than steroids have a greater chance of efficacy in human clinical trials. The model is also of reasonable throughput and can be used for the medium throughput evaluation of candidate drugs for novel anti-inflammatories. In addition, we see it as a valuable platform for adding in extra factors to augment the effects of TS and for further understanding the pathobiology of smoke-induced inflammation.
Fig. 4 Effects of deletion of the gene for the CCR1 receptor on TS-induced inflammation. Wild type female C57BL/6 mice (8 per group) or female CCR1 transgenic C57BL/6 mice (8 per group) were exposed to TS for 4 days for 50 min twice a day. Control groups were included that were exposed only to air. After 4 days the animals were terminated and the following determined: a total cells within the BAL, b macrophages within the BAL, c neutrophil levels, d KC levels, e MIP-1 levels and f IL-1 beta levels. Air controls are shown as clear columns and smoke exposed groups as red columns. Statistically significant differences between groups are shown as *P < 0.05, **P < 0.01, ***P < 0.001.

Leclerc et al. [11] describe an acute TS model in which mice were exposed to TS for 3 days and a neutrophil-dominated response was seen, although this was also accompanied by significant eosinophilia. A full characterisation of the inflammatory response was not done. However, KC, MIP-1 and MIP-2 were shown to be upregulated in the BAL. The model also showed a degree of steroid insensitivity; however, the steroid tested was dexamethasone. In the present study, we used the much more potent steroid FP (versus the glucocorticoid receptor) [12]. In addition, at the highest concentration of dexamethasone tested, inflammation was reduced in the model described by Leclerc [11]. Medicherla et al. [13] have also described a tobacco smoke model. There, mice were exposed for 5 or 11 days and, in stark contrast to both our model and the model of Leclerc et al., an inflammatory influx was seen that was dominated by macrophages. Again, the inflammation seen was resistant to dexamethasone, but dose levels of only up to 0.3 mg/kg were used.
The difference in inflammatory cell population between the model described by Medicherla et al. and the one reported here is interesting, but may merely reflect that in Medicherla et al., A/J mice were used, while we used BALB/c mice. A/J mice lack part of the complement system [14] and perhaps would not be able to promote a full anti-inflammatory response to the TS challenge. In fact, many other studies assessing acute TS exposure in mice with fully intact complement systems show that acute TS exposure leads to an inflammatory infiltrate dominated by neutrophils [15,16] and not by macrophages. Although COPD is a complex disease with a range of cellular and molecular pathologies, neutrophils are also common in COPD [17,18]. In addition, a number of mediators observed specifically in the BAL after TS exposure are in common with mediators seen in humans exposed to cigarette smoke. MCP-1 is involved in the recruitment of monocytes, lymphocytes and basophils and is a member of the CC chemokine family [19]. Elevated expression of MCP-1 is seen in COPD [20]. MCP-1 levels also correlate with a decline in lung function in COPD [21]. Reduction of MCP-1 levels with a therapeutic antibody in COPD patients also serves to reduce neutrophil numbers [9]. IL-8, the human equivalent of KC, is also increased in COPD [22] and this is reflected in the BAL samples from TS-exposed animals. From the Rules Based Medicine analysis we can see that MMP-9 is one of the proteins highly up-regulated in TS-exposed animals compared to air-exposed animals. MMP-9 has been shown to be present within COPD lungs [23] and also to increase within a COPD exacerbation [24]. This could reflect the beginning of a protease/anti-protease imbalance within the model. The protease/anti-protease balance is one of the core themes around the pathology of COPD [25] and its imbalance may also contribute to further damage during a COPD exacerbation [24].
One interesting omission from the array of mediators present in the model is the lack of TNFα. TNFα is seen in the BAL of COPD patients [22]. The lack of TNFα may reflect the mild nature and acute exposure times of the model. Other acute TS models also have not reported seeing TNFα after acute TS exposure [13,16], or report no increase after TS exposure [15].
Gene expression patterns in whole lung tissue change dramatically following TS exposure. Many of the changes seen are related to oxidative stress, which has also been implicated in COPD [26]. The expression of genes related to oxidative stress in COPD has been studied in epithelial cells [27]. Genes involved in proteolytic processing were also increased. Specific genes such as Noxo1 were upregulated. Noxo1 is a regulator of NADPH oxidase and, presumably, its up-regulation indicates a response to TS in trying to control damage via the production of reactive oxygen species [28], but may subsequently also increase inflammation via the activation of NF-κB and MAP kinase pathways [28]. CXCL5 was up-regulated in the TS-exposed mice after 4 days and has also been shown to be up-regulated in COPD [29]. MMP-12 is also up-regulated within COPD patients [30].
Overall, the characterisation of the TS-exposed mice reveals a number of similarities to tobacco smoke exposure in humans, even after acute exposure. Pharmacological intervention studies will enable resolution as to which markers are affected by treatment and thus allow comparison between different anti-inflammatory mechanisms.
We feel a key feature of the TS model is that the inflammation observed, as measured by the influx of neutrophils to the lung, is relatively steroid insensitive compared to that seen in an LPS-driven mouse inflammatory model (Fig. 3). Although we used Pseudomonas LPS, the same effects and steroid sensitivity were seen in experiments with Escherichia coli LPS (data not shown). We do not think that this insensitivity to steroids is caused by the "volume" of the inflammation. The levels of neutrophilia driven by LPS are higher than those driven by TS (Fig. 3). We hypothesise that the TS exposure triggers molecular switches in vivo. This switching affects the pharmacological intervention by a steroid in the TS model.
Steroids work in COPD but the inflammation is not as well controlled as in asthma [31]. FP is ineffective at controlling neutrophils in COPD [32] and therefore there is a pharmacological link between the TS exposure model and human disease. Interestingly, FP is able to reduce LPS-driven neutrophilia in humans 6 h after LPS challenge [33], which again correlates with the ability of FP to reduce LPS-driven neutrophilia in mice. We are currently assessing other differences in endpoints between LPS- and TS-driven models to assess whether further insights can be gained into steroid efficacy in reducing smoke- or LPS-induced inflammation. We will also use the TS exposure model in the future to assess other pharmacological mechanisms with greater potential than steroids for reducing inflammation provoked by tobacco smoke.
Having established the TS exposure model and characterised it with known agents in respiratory disease, we have begun to use it to test the rationale for other mechanisms as potential targets within COPD. One such hypothesis concerned the role of CCR1 in regulating TS-induced inflammation; the CCR1 ligand MIP-1α was observed in the BAL of smoke-exposed animals. The chemokine receptor CCR1 belongs to the seven-transmembrane G protein-coupled receptor (GPCR) superfamily and is expressed primarily by T lymphocytes, monocytes/macrophages, basophils, dendritic cells, eosinophils and neutrophils. CCR1-deficient mice have been generated; they show normal development and leukocyte profiles, but significantly reduced leukocyte mobilization in response to different inflammatory stimuli [34]. A number of compounds against this receptor are in development for the treatment of autoimmune/inflammatory diseases [35].
Exposure of mice lacking the CCR1 receptor to TS showed a significantly reduced inflammatory response compared to wild type mice. As well as showing a reduced neutrophil response, the levels of IL-1 and MIP-1α in the BAL following TS exposure were lower in CCR1-deficient mice than in wild type controls. This suggests that blockade of the CCR1 receptor may be a therapeutic strategy for the treatment of tobacco smoke-driven inflammation. There is some evidence from human studies to support this statement. Increased frequency of MIP-1α-positive epithelial cells in the bronchial mucosa was observed in COPD patients, and the frequency of MIP-1α-expressing cells correlated with reduced lung function (FEV1) [36]. Key CCR1 ligands (MIP-1α, RANTES and MCP-3) have been found to be expressed in COPD lung tissue and are present in BAL or sputum from COPD patients [37].
In the CCR1-deficient mouse study, it was interesting that after TS exposure the levels of KC, the mouse homologue of IL-8, were not reduced in the same fashion as other markers, but in fact were increased compared to wild type controls. Perhaps this reflects a compensatory response, with more KC (IL-8) being produced as a chemotactic factor to try to draw neutrophils into the lung. We can also postulate that there may be other factors, independent of KC (IL-8), that are responsible for the recruitment of neutrophils into the lung. This has been suggested for COPD, specifically that neutrophils in COPD patients show reduced levels of migration and chemotaxis in response to IL-8 [38].
Mice deficient in CCR1 have been analysed previously in other model systems; the same transgenic animals have been used in studies published elsewhere by others. Pancreatitis-associated lung injury is CCR1-dependent [39]. In addition, leukocyte recruitment is dependent on CCR1 in allergic encephalomyelitis models [40]. Deletion of CCR1 reduces responses following RSV infection [41].
In summary, we have developed an acute tobacco smoke-driven mouse model of inflammation. It has many similarities to the inflammation seen in humans exposed to tobacco smoke. We intend to use the model to further explore the pharmacology of TS-induced inflammation, to attempt to improve the ability of the model to predict the efficacy of anti-inflammatories in humans, and to assess the mechanisms responsible for steroid insensitivity in vivo. As the model is relatively simple, robust and of reasonable throughput, we also feel it can be a platform for the exploration of other factors involved in smoke-induced inflammation, for example, the addition of virus or bacteria to TS-induced inflammation [6]. We have explored the role of the CCR1 receptor in TS-induced inflammation and find that ablation of this gene significantly reduces TS-driven neutrophilia. | 2014-10-01T00:00:00.000Z | 2010-04-13T00:00:00.000 | {
"year": 2010,
"sha1": "bcb8734173198827f5729dada67f53f374cfafe2",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00011-010-0193-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "bcb8734173198827f5729dada67f53f374cfafe2",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
53012934 | pes2o/s2orc | v3-fos-license | Noncanonical farnesoid X receptor signaling inhibits apoptosis and impedes liver fibrosis
Background The hepatocyte is particularly vulnerable to apoptosis, a hallmark of many liver diseases. Although pro-apoptotic mechanisms have been extensively explored, less is known about hepatocyte-specific anti-apoptotic molecular events, and effective approaches to combat hepatocyte apoptosis are lacking. We investigated the anti-apoptotic effect and mechanism of farnesoid X receptor (FXR), and strategies for targeting FXR to inhibit apoptosis implicated in liver fibrosis. Methods Sensitivity to apoptosis was compared between wild type and Fxr−/− mice and in cultured cells. Cell-based and cell-free assays were employed to identify the binding protein of FXR and to uncover the mechanism of its anti-apoptotic effect. Overexpression of FXR by adenovirus-FXR was employed to determine its anti-fibrotic effect in CCl4-treated mice. Specimens from fibrotic patients were collected to validate the relevance of FXR to apoptosis/fibrosis. Findings FXR deficiency sensitizes hepatocytes to death receptors (DRs)-engaged apoptosis. FXR overexpression, but not FXR ligands, inhibits apoptosis both in vitro and in vivo. Apoptotic stimuli lead to drastic reduction of FXR protein levels, a prerequisite for DRs-engaged apoptosis. Mechanistically, FXR interacts with caspase 8 (CASP8) in the cytoplasm, thus preventing the formation of the death-inducing signaling complex (DISC) and activation of CASP8. Adenovirus-FXR transfection impedes liver fibrosis in CCl4-treated mice. Specimens from fibrotic patients are characterized by reduced FXR expression and compromised FXR/CASP8 colocalization. Interpretation FXR represents an intrinsic apoptosis inhibitor in hepatocytes and can be targeted via restoring its expression or strengthening FXR/CASP8 interaction for inhibiting hepatocyte apoptosis in liver fibrosis. Fund National Natural Science Foundation of China.
Introduction
The liver is an organ of immense complexity and is functionally indispensable for its essential roles in controlling endogenous metabolic homeostasis, xenobiotic metabolism and innate immunity. Due to its unique function and environment, the hepatocyte, the predominant liver cell, is continuously exposed to high levels of toxic endobiotics, xenobiotics, viruses, and inflammatory triggers [1,2]. To maintain homeostasis, the liver has developed a sophisticated system ensuring efficient removal of damaged or virus-infected hepatocytes, mainly via death receptors (DRs)-engaged apoptosis. However, excessive apoptosis and massive loss of hepatocytes may result in irreversible liver damage.
Indeed, accumulating evidence indicates that excessive hepatocyte apoptosis represents a hallmark of the pathogenesis of many liver diseases [3,4]. Enhanced hepatocyte apoptosis amplifies inflammatory damage and promotes the development of fibrosis and ultimately liver cancer [5,6]. Thus, inhibition of hepatocellular apoptosis has been suggested to be a plausible therapeutic strategy for liver injury, especially liver fibrosis.
In contrast to the extensive knowledge of signals triggering cell death, the molecular events underlying how hepatocytes survive in such a hostile environment remain poorly understood. To combat cell death from various damaging factors, the liver must have developed an efficient protective system to maintain homeostasis. Previously identified anti-apoptotic proteins, mainly including XIAP, Bcl-XL, Mcl-1, and receptor interacting protein 1 (RIP1) [7][8][9][10][11], are ubiquitously expressed in cells other than hepatocytes. Given that excessive apoptosis of hepatocytes is implicated in many forms of liver disease that are extremely short of effective therapeutic treatments, it is urgent to uncover liver-specific and/or dominant molecular signals that balance pro-apoptotic and anti-apoptotic events.
Farnesoid X receptor (FXR), highly expressed in hepatocytes, is conventionally recognized as a member of the nuclear receptor (NR) superfamily of ligand-activated transcription factors that controls the metabolism of bile acids, lipids, glucose, and amino acids [12,13]. Accordingly, FXR is increasingly regarded as a potential drug target for treatment of a number of diseases, including obesity [14], cholestasis [15,16] and septic shock [17,18]. Moreover, FXR was shown to influence viral hepatitis, alcohol-induced liver disease, nonalcoholic steatohepatitis (NASH), cholestasis, ischemia/reperfusion injury, and even hepatocellular carcinoma [19], all of which are characterized by enhanced hepatocyte apoptosis. Thus, besides its well-documented roles in metabolic control, FXR may also act as a cell protector for hepatocytes [20], although the molecular basis underlying how FXR protects against liver injury remains elusive. Previous efforts mainly focused on how FXR functions as a ligand-dependent transcription factor in controlling the metabolic homeostasis of bile acids and lipids and its potential benefit in the therapy of various liver diseases. These findings indicated FXR agonism to be a promising therapeutic strategy for liver diseases [21]. Numerous efforts in the identification of various kinds of FXR agonists culminated in the successful launch of obeticholic acid (OCA) as a clinical treatment for primary biliary cholangitis (PBC) [19,22], and it is now in phase 3 clinical trials for NASH. However, in addition to its side effects including pruritus and increased serum lipids [23], recent findings indicated that OCA is not efficacious against liver fibrosis in PBC patients [24,25], despite improvement in NASH patients [23,26]. These results indicate that FXR agonists alone might not be sufficiently effective against liver fibrosis, at least in the case of PBC.
Here, we report that cytosolic FXR is an intrinsic apoptosis inhibitor in hepatocytes via physically interacting with caspase 8 (CASP8). Under unliganded conditions, FXR naturally associates with CASP8 in the cytoplasm and precludes its recruitment to the DISC for activation, thereby inhibiting apoptotic signal transduction. Activation of DRs reduces cytosolic FXR levels and facilitates CASP8-mediated apoptotic cell death. Forced overexpression of FXR, but not FXR agonists, attenuates both acute liver injury and chronic liver fibrosis in mice. Moreover, we show that the identification of cytosolic FXR as an apoptosis inhibitor via interacting with CASP8 is clinically translatable, since fibrotic patients are characterized by reduced FXR levels and compromised FXR/CASP8 colocalization. Since apoptosis is a hallmark of most forms of liver injury, this study provides a mechanistic rationale for restoring cytosolic FXR levels or strengthening FXR/CASP8 interaction, but not FXR agonism, as a promising approach to inhibit hepatocyte apoptosis for the therapy of diverse liver diseases.
Animals
Specific pathogen-free male C57BL/6J mice (8 weeks old, 18-22 g) were obtained from the Comparative Medicine Centre of Yangzhou University, China. The animal studies were approved by the Animal Ethics Committee of China Pharmaceutical University. FXR knockout (Fxr−/−) mice and wild-type (WT) mice on a C57BL/6J genetic background were raised and maintained at the National Cancer Institute, and mouse handling was in accordance with an animal study protocol approved by the National Cancer Institute Animal Care and Use Committee. All mice were kept at a temperature of 25 ± 2°C and a relative humidity of 50 ± 10% with 12-hour light/dark cycles for 1 week before experiments and allowed water and standard chow ad libitum.
LPS-induced fulminant hepatic failure
Fulminant hepatic failure was induced using a previously described method [27] with minor modifications. Briefly, after an 8-h fast, mice were given an intraperitoneal (i.p.) injection of D-galactosamine (GalN, Sigma, 800 μg/g), followed by an i.p. injection of lipopolysaccharide (LPS, Sigma, 100 ng/g). Mice were killed 5 h after LPS injection. To determine the effects of FXR on hepatocyte apoptosis, mice were pretreated with vehicle, GW4064 (Medchem Express, 30 mg/kg daily, i.p.) or CDCA (Sigma, 50 mg/kg daily, i.p.) for 5 days before the injection of GalN/LPS. Separately, mice were injected intravenously with 10⁹ pfu of adenoviruses (amplified by Biowit Technologies) Ad-Ctrl or Ad-FXR daily for 3 consecutive days [28] and subjected to GalN/LPS treatment 3 days after the final virus delivery. To compare the apoptotic sensitivity between WT and Fxr−/− mice, they were injected with GalN (400 μg/g) and LPS (50 ng/g) as described above. Mice were fasted for 8 h before sacrifice.
CCl4-induced liver fibrosis
To investigate the effect of FXR overexpression in liver fibrosis, mice were injected with CCl4 (0.1 ml/kg, i.p.) twice a week for 6 weeks [29]. From the 3rd week, mice were intravenously administered Ad-Ctrl or Ad-FXR (10⁹ pfu/mouse) twice per week for 4 weeks. Mice were fasted for 8 h before sacrifice.
To investigate the effect of FXR depletion on apoptosis, HepG2 cells were transfected with FXR-specific siRNA (Dharmacon) or negative control siRNA (Santa Cruz) using Lipofectamine RNAiMAX (Invitrogen).
Research in context
Evidence before this study

Hepatocyte apoptosis represents a hallmark of the pathogenesis of many liver diseases, and inhibition of hepatocellular apoptosis has been suggested to be a plausible therapeutic strategy for liver diseases. As a nuclear transcription receptor, ligand-bound FXR translocates into the nucleus to elicit its canonical role in gene transcription. FXR plays pivotal roles in maintaining the homeostasis of bile acids, lipids and glucose, and thus is generally regarded as a therapeutic target for metabolic diseases.
Added value of this study
Cytosolic FXR is an intrinsic apoptosis inhibitor that physically interacts with CASP8, thus inhibiting DRs-engaged apoptosis. Reduction of cytosolic FXR is a prerequisite for initiating the apoptotic cascade. Forced overexpression of FXR, but not FXR agonists, impedes hepatocyte apoptosis and liver fibrosis.
Implications of all the available evidence
Restoring the expression of FXR by preventing its degradation, as well as strengthening the FXR/CASP8 interaction, would be attractive strategies for the treatment of liver fibrosis.
To investigate the effect of FXR overexpression on apoptosis, cells were transfected with Ad-Ctrl or Ad-FXR (20 MOI) in the presence or absence of Z-guggulsterone (GS, Santa Cruz, 10 μM), an FXR antagonist [31].
Human specimens
112 patients who were pathologically diagnosed with liver fibrosis were enrolled in this study. Their ages ranged from 22 to 66, with a median age of 41. The available clinical characteristics of these patients are summarized in Supplementary Table S1. Percutaneous liver biopsies were performed using a biopsy gun with a 16 G needle (Bard-Magnum Biopsy Instrument, Covington, GA, USA). Histological scoring was performed by experienced hepato-pathologists according to the Guideline of Prevention and Treatment for Chronic Hepatitis B (2nd Version). Blood was obtained from each patient at the time of liver biopsy, processed to plasma and stored frozen at −80°C. In addition, 17 healthy age-matched controls (F0) from blood bank donors without clinical signs or symptoms of liver disease, and no history of chronic illnesses, were analyzed. The study was approved by The Ethics Committee of The First Affiliated Hospital of Anhui Medical University (PJ2016-10-11) and all patients gave written informed consent prior to participation.
Statistical analysis
Data were analyzed using GraphPad Prism (Graphpad Software, Inc., San Diego, CA, USA) and are presented as the mean ± standard error of mean (SEM). A two-tailed Student's t-test was applied for comparison of two groups and a one-way ANOVA with Tukey post hoc analysis was applied for comparison of multiple groups. P values below 0.05 were considered statistically significant.
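By way of illustration, the sketch below reproduces the two analyses described here (a two-tailed t test for two groups and a one-way ANOVA with Tukey post hoc comparison for multiple groups) in Python with SciPy and statsmodels rather than GraphPad Prism; all group values are invented placeholders.

```python
# Sketch: the two statistical comparisons described above, applied to mock data.
# Group labels and values are illustrative placeholders only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

wt = [310, 280, 295, 330, 305]
ko = [520, 610, 570, 640, 555]

# Two-tailed Student's t test for comparison of two groups
t_stat, p_two = stats.ttest_ind(wt, ko)
print(f"t test: t = {t_stat:.2f}, P = {p_two:.4f}")

# One-way ANOVA with Tukey post hoc analysis for comparison of multiple groups
vehicle, gw4064, cdca = [1.0, 1.2, 0.9, 1.1], [1.1, 1.0, 1.3, 1.2], [0.8, 1.0, 0.9, 1.1]
f_stat, p_anova = stats.f_oneway(vehicle, gw4064, cdca)
values = np.array(vehicle + gw4064 + cdca)
labels = np.array(["vehicle"] * 4 + ["GW4064"] * 4 + ["CDCA"] * 4)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```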
FXR deficiency sensitizes hepatocytes to apoptosis
To dissect the role of FXR in apoptosis, Fxr−/− mice were compared with wild-type (WT) littermates after treatment with D-galactosamine (GalN)/lipopolysaccharide (LPS). Serum ALT and AST levels were much higher in Fxr−/− mice than in the WT group (Fig. 1A). H&E and TUNEL staining confirmed that GalN/LPS treatment induced more severe apoptosis of hepatocytes in Fxr−/− mice (Fig. 1B and C). In line with the histological findings, increased cleavage of BID and PARP and higher enzymatic activities of CASP3, CASP8, and CASP9 were observed in Fxr−/− mice than in WT littermates (Supplementary Fig. S1A and B). To confirm this finding, adeno-associated virus (AAV) carrying Fxr shRNA was injected via the tail vein to directly knock down the hepatic expression of FXR. Liver-specific knockdown of FXR by AAV shRNA resulted in increased apoptotic cell death of hepatocytes, as supported by the analysis of serum aminotransferases, caspase activities, and liver histology (Supplementary Fig. S1D-G).
To further validate the role of FXR in apoptosis, primary hepatocytes from WT and Fxr−/− mice were isolated and treated with actinomycin D/tumor necrosis factor alpha (ActD/TNFα). Primary hepatocytes from Fxr−/− mice were more sensitive to ActD/TNFα-induced apoptosis than those from WT mice (Fig. 1D). Likewise, HepG2 cells transfected with FXR short interfering RNA (siRNA) exhibited exacerbated apoptosis and lower cell viability than those with control siRNA upon ActD/TNFα treatment (Fig. 1E and F), which was also supported by the assessment of cleavage of BID and PARP and caspase activities (Supplementary Fig. S1H-J). In addition to the TNFα receptor, engagement of other DRs, such as the Fas and TRAIL receptors, is also dominant in triggering hepatocyte apoptosis. Thus, we explored the effect of FXR on apoptosis triggered by ligands of the Fas and TRAIL receptors. The results showed that FXR deficiency also rendered hepatocytes more susceptible to FasL- and TRAIL-induced apoptotic cell death (Fig. 1G). Together, these results indicate that FXR is important in protecting against DRs-engaged apoptotic cell death.
FXR overexpression but not FXR agonists inhibits apoptosis
Because FXR is conventionally recognized as a ligand-activated transcription factor, we asked whether FXR agonists could protect against apoptosis of hepatocytes. To this end, the synthetic FXR agonist GW4064 and the natural endogenous FXR agonist chenodeoxycholic acid (CDCA) were used to test their effects on apoptosis. However, neither GW4064 nor CDCA showed protective effects against apoptotic cell death of hepatocytes, either in vivo or in vitro. Additionally, no anti-apoptotic effect was observed for other FXR agonists, including OCA, Px-102, Tropifexor and WAY-362450 (Supplementary Fig. S2). We next tested whether forced overexpression of FXR could protect hepatocytes against apoptosis. Mice transfected with Ad-FXR showed reduced serum ALT and AST levels upon GalN/LPS treatment (Fig. 2A). Histological assessment by H&E and TUNEL staining indicated that the number of apoptotic cells was significantly lower in Ad-FXR-transfected mice than in the Ad-Ctrl-infected mice (Fig. 2B and C). In agreement, lower caspase activities and less cleavage of BID and PARP were observed in Ad-FXR-transfected mice (Supplementary Fig. S3A-C). The anti-apoptotic effect of FXR was also observed in cultured HepG2 cells with Ad-FXR infection (Fig. 2E). The finding that forced overexpression of FXR, but not agonist treatment, protected against hepatocellular apoptosis hints that FXR may protect against apoptotic cell death in a transcription-independent manner. Transfection of Ad-FXR resulted in obviously enhanced expression of FXR protein in both the cytoplasm and nucleus of HepG2 cells (Supplementary Fig. S4A), and increased transcriptional activity was observed (Supplementary Fig. S4B). Since Ad-FXR transfection also induced a slight transcriptional activation of FXR, an FXR antagonist was employed to exclude the possible involvement of FXR transactivation. GS treatment significantly suppressed FXR transactivation induced by Ad-FXR transfection, but had negligible influence on its anti-apoptotic effect (Fig. 2F, G, and Supplementary Fig. S4B-C). These results demonstrate that the FXR protein level, but not FXR transcriptional activity, is responsible for its anti-apoptotic effect.
FXR physically interacts with CASP8
While FXR is characterized as a ligand-activated NR, the current results indicate that FXR may protect against hepatocyte apoptosis independent of its transcriptional activity. Apoptosis of hepatocytes induced by TNFα, FasL and TRAIL is initiated by the formation of the DISC, which is required for activation of pro-caspase 8. Activated CASP8 can cleave multiple intracellular substrates, such as the downstream effectors CASP3, CASP9 and BID, to execute apoptosis [32,33]. We tested whether FXR may interrupt the formation of the DISC and the activation of CASP8. FXR overexpression significantly repressed CASP8 activation (Supplementary Fig. S3A and E) but had little effect on the mRNA and protein levels of DISC components (Supplementary Fig. S5A-B), further supporting the view that FXR protection against apoptosis is independent of transcription. We assumed that FXR protein may directly bind to the members of the DISC and interfere with DISC assembly.
Co-immunoprecipitation (Co-IP) experiments were performed to investigate the interaction of FXR with members of the DISC complex, including Fas-associated protein with death domain (FADD), RIP1, and CASP8. In contrast to normal IgG, immunoprecipitation of FXR demonstrated binding of FXR to DISC members in hepatocyte lysates. Silencing of CASP8, but not of FADD, impaired these interactions, suggesting that FXR may directly interact with CASP8 (Supplementary Fig. S5C). To validate the direct interaction of FXR with CASP8, Co-IP assays, confocal microscopy analysis and GST pull-down analysis were employed. All the results support a physical interaction between FXR and CASP8 under resting conditions (Fig. 3A-C). In particular, the confocal analysis clearly showed that under resting conditions FXR is abundantly localized in the cytoplasm, where it physically interacts with CASP8 (Fig. 3B). The direct interaction between FXR and CASP8 was further validated by cell-free assays, including biolayer interferometry (BLI) analysis (Fig. 3D) and microscale thermophoresis (MST) analysis (Supplementary Fig. S5D), using recombinant proteins. Moreover, the Co-IP analysis supported a physical, endogenous interaction of FXR with CASP8 in primary mouse hepatocytes, mouse livers and, more importantly, in human liver biopsies from patients with liver fibrosis (Fig. 3E).
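To clarify how cell-free binding readouts such as BLI or MST are usually converted into an affinity estimate, here is a minimal, hypothetical Python sketch fitting a 1:1 steady-state binding isotherm; the concentrations, responses and resulting Kd are illustrative assumptions, not values from Fig. 3D or Supplementary Fig. S5D.

import numpy as np
from scipy.optimize import curve_fit

def one_site_binding(conc, rmax, kd):
    """Steady-state 1:1 binding isotherm: R = Rmax * [L] / (Kd + [L])."""
    return rmax * conc / (kd + conc)

# Hypothetical titration of recombinant FXR against immobilized CASP8 (nM, response units).
conc_nM  = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500, 1000])
response = np.array([0.11, 0.21, 0.37, 0.58, 0.82, 1.01, 1.15, 1.22])

(rmax, kd), cov = curve_fit(one_site_binding, conc_nM, response, p0=[1.3, 100.0])
kd_err = np.sqrt(np.diag(cov))[1]
print(f"Rmax = {rmax:.2f} RU, Kd = {kd:.0f} +/- {kd_err:.0f} nM")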
FXR inhibits apoptosis via interaction with CASP8
To prove that FXR elicits its anti-apoptotic effect via interaction with CASP8, a CASP8 siRNA and a CASP8 inhibitor were used to determine their influence on the anti-apoptotic effect of FXR. In HepG2 cells, treatment with CASP8 siRNA or with the inhibitor largely abrogated the anti-apoptotic effect of FXR (Supplementary Fig. S6A-E), supporting the conclusion that FXR inhibits apoptosis via CASP8. We next tested whether the association with FXR directly inhibits CASP8 activity. Unexpectedly, the association of FXR with CASP8 had little influence on the enzymatic activity of recombinant CASP8 (Supplementary Fig. S6F), suggesting that FXR does not directly inhibit CASP8 activity upon binding. Because CASP8 activation depends on DISC assembly, on which pro-caspase 8 is cleaved into its activated form, we asked whether the interaction with FXR prevents pro-caspase 8 recruitment to the DISC, thereby inhibiting its activation. CASP8 contains a C-terminal catalytic protease domain (CPD) and an N-terminal tandem death effector domain (DED), which is recruited to FADD upon apoptotic stimulation [34]. Homology modeling and molecular docking analysis (Supplementary Fig. S6G) showed that the DED of CASP8 may interact with the FXR ligand binding domain (LBD), which was confirmed by GST pull-down analysis (Supplementary Fig. S6H). Thus, it is likely that FXR occupies the DED of CASP8. Forced expression of FXR by Ad-FXR transfection restored the association between FXR and CASP8, and thus precluded the recruitment of CASP8 to FADD and suppressed the activation of CASP8 (Supplementary Fig. S6I-K).
Molecular docking (Supplementary Fig. S7A) and cross-linking mass spectrometry (Supplementary Fig. S7B) were employed to predict and validate the exact sites through which FXR binds CASP8. The results indicated that D363, E364, S371, K374, R440, and E443 of the FXR LBD interact with the DED of CASP8. To further validate these findings, we constructed mutant FXR recombinant proteins carrying the D363A, E364A, S371A, K374A, R440A, and E443A substitutions. Mutation of these sites impaired not only the binding of FXR to CASP8, as demonstrated by GST pull-down, BLI (Fig. 4A and B) and Co-IP assays (Supplementary Fig. S7C and D), but also the protective role of FXR against apoptosis, as demonstrated by cell viability and cell apoptosis analysis (Fig. 4C and D). Further results demonstrated that FXR mutated at these sites failed to prevent the assembly of the DISC and the activation of CASP8 (Fig. 4E and F). Together, these results support that FXR physically interacts via its LBD with the DED of CASP8, thereby preventing the recruitment of pro-CASP8 to FADD and inhibiting apoptosis signal transduction. Of note, mutation of these sites has little influence on the transcriptional activity of FXR, as supported by its nearly identical activity, compared to WT FXR, in upregulating SHP and BSEP and downregulating CYP7A1 (Supplementary Fig. S7E). Together, these results strongly support that FXR in the cytoplasm functions as an intrinsic apoptosis inhibitor via interaction with CASP8 and that this function is independent of its canonical transcriptional activity.
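As a hedged sketch of how candidate contact residues can be enumerated from a docked FXR-CASP8 model such as the one in Supplementary Fig. S7A, the Python snippet below uses Biopython; the file name, chain assignment and distance cutoff are all hypothetical choices, not details taken from the study.

from Bio.PDB import PDBParser, NeighborSearch

CUTOFF = 5.0  # Angstroms; heavy-atom contact cutoff (an arbitrary, commonly used value)

parser = PDBParser(QUIET=True)
model = parser.get_structure("complex", "fxr_casp8_docked.pdb")[0]  # hypothetical file

fxr_chain, casp8_chain = model["A"], model["B"]  # assumed chain IDs

# Index all CASP8 atoms once, then ask which FXR residues have any atom nearby.
search = NeighborSearch([atom for atom in casp8_chain.get_atoms()])

contact_residues = set()
for residue in fxr_chain:
    for atom in residue:
        if search.search(atom.coord, CUTOFF):
            contact_residues.add((residue.get_resname(), residue.id[1]))
            break

for resname, resnum in sorted(contact_residues, key=lambda r: r[1]):
    print(f"FXR {resname}{resnum} lies within {CUTOFF} A of CASP8")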
FXR reduction is a prerequisite for DISC assembly
Since FXR physically interacts with CASP8 in the cytoplasm under resting conditions, we asked what would happen when hepatocytes are challenged by an apoptotic stimulus. Interestingly, upon apoptotic stimulation, FXR protein levels were dramatically decreased (Fig. 5A and B), resulting in impaired FXR-CASP8 binding and enhanced FADD-CASP8 binding (Fig. 5C and D). Association between FXR and CASP8 inhibited the recruitment of CASP8 to FADD/RIP1 and disrupted DISC assembly and CASP8 activation. Collectively, these results indicate that cytosolic FXR represents an intrinsic apoptosis-inhibitory signal; upon apoptotic stimulation, downregulation of FXR is a prerequisite step permitting DISC assembly and ultimately activating the apoptotic signal in hepatocytes.
FXR overexpression protects against chronic liver fibrosis
Apoptosis of hepatocytes is a hallmark of both acute liver damage and chronic liver diseases. Additionally, hepatocyte apoptosis is regarded as an important causal factor in fibrosis [35], and anti-apoptotic therapy is regarded as a plausible treatment for liver fibrosis [36]. More importantly, hepatocellular CASP8 has been demonstrated to be an essential modulator of fibrosis [37]. Since our results indicated that FXR could attenuate apoptosis via interaction with CASP8, it is reasonable to predict that hepatic levels of FXR would be a key determinant in the pathological development of liver fibrosis. To test this hypothesis, mice were injected with AAV-ctrl shRNA or AAV-Fxr shRNA to knock down hepatic FXR and then subjected to CCl4-induced liver fibrosis. Compared to mice injected with AAV-ctrl shRNA, mice injected with AAV-Fxr shRNA exhibited aggravated hepatic apoptosis and injury, as evidenced by serum aminotransferase levels, histological analysis, and caspase activities (Supplementary Fig. S8A-C). Moreover, the mRNA levels of Acta2 (encoding αSMA), Col1a1, Col1a2, Timp1 and Timp2 indicated enhanced fibrosis in AAV8-Fxr shRNA-injected mice (Supplementary Fig. S8D). These results indicated that reduction of hepatic FXR levels may aggravate the pathological development of liver fibrosis by sensitizing hepatocytes to apoptosis. To further validate this point, we tested whether forced overexpression of FXR would hamper the progression of liver fibrosis. To this end, mice were treated with CCl4 to induce fibrosis and transfected with Ad-ctrl or Ad-FXR. As expected, Ad-FXR transfection significantly reduced serum aminotransferases (Fig. 6A) and apoptosis (Fig. 6B-D). As revealed by Masson-trichrome staining and Sirius red staining of liver sections, marked fibrosis was observed in CCl4-treated mice, whereas fibrosis was alleviated in Ad-FXR-transfected mice (Fig. 6C and Supplementary Fig. S9A). In accordance with the histological evidence, the mRNA expression levels of Acta2, Col1a1, Col1a2, Timp1 and Timp2 were increased in the livers of CCl4-treated mice, and all these increases were significantly reversed by transfection with Ad-FXR (Fig. 6E), confirming the protective role of FXR against liver fibrosis. Apoptotic bodies released from apoptotic hepatocytes represent an important causal factor in activating hepatic stellate cells (HSCs) for fibrotic development. Indeed, co-culture with apoptosis-triggered hepatocytes dramatically increased fibrotic biomarkers in HSCs. In contrast, enforced overexpression of FXR in hepatocytes largely abolished this effect (Supplementary Fig. S9C), supporting the idea that FXR may impede hepatic fibrosis by protecting against hepatocyte apoptosis.
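The fibrosis-marker transcript comparisons (Acta2, Col1a1, and related genes) rest on relative qPCR quantification; the sketch below shows the standard 2^-ΔΔCt (Livak) calculation on made-up Ct values, purely as an illustration of the method rather than a reproduction of the study's data. The housekeeping gene and all numbers are assumptions.

import numpy as np

# Hypothetical mean Ct values: (target gene, housekeeping gene) for each group.
ct = {
    #               Acta2   Gapdh
    "oil_ctrl":    (26.4,   18.1),
    "ccl4_adfxr":  (25.9,   18.0),
    "ccl4_adctrl": (23.7,   18.2),
}

def fold_change(sample, reference="oil_ctrl"):
    """Relative expression by the Livak method: 2**-(dCt_sample - dCt_reference)."""
    d_sample = ct[sample][0] - ct[sample][1]
    d_ref    = ct[reference][0] - ct[reference][1]
    return 2 ** -(d_sample - d_ref)

for group in ("ccl4_adctrl", "ccl4_adfxr"):
    print(f"Acta2 in {group}: {fold_change(group):.1f}-fold vs oil control")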
To provide a translational link to humans, we collected serum and liver biopsy samples from patients with liver fibrosis. The serum levels of TNFα, FasL, and TRAIL were all increased in patients with liver fibrosis in comparison with healthy controls (Fig. 7A), suggesting that hepatocytes in fibrotic livers are likely subjected to DR-engaged apoptotic stress. Indeed, apparent apoptotic cell death in fibrotic livers was evidenced by increased levels of the serum apoptosis biomarker CK-18 and by positive TUNEL staining of fibrotic liver biopsies (Fig. 7B and C). Immunohistochemical staining of fibrotic livers indicated a clear trend of gradual loss of cytoplasmic FXR from stage-1 to stage-4 liver fibrosis (Fig. 7D), as well as decreased co-localization of FXR with CASP8 in the cytoplasm (Fig. 7E). These results indicate that, during the development of hepatic fibrosis, cytosolic hepatic FXR may be gradually reduced as fibrosis progresses, rendering hepatocytes more susceptible to DR-engaged apoptotic cell death.
Discussion
Hepatocytes are continuously exposed to high apoptotic stress, including toxic endobiotics and xenobiotics, viruses, inflammatory triggers, and high expression levels of DRs [1,2]. It is reasonable to expect that hepatocytes have developed a powerful apoptosis-inhibitory system to counteract DR-engaged apoptotic challenge and thereby maintain the functional homeostasis of the liver. The present study shows that FXR, which is highly expressed in hepatocytes, acts as an intrinsic apoptosis inhibitor that counteracts DR-engaged apoptosis. Under unliganded conditions, FXR physically interacts with CASP8, precluding its recruitment to the DISC and thereby blocking the apoptotic signaling cascade. Upon excessive apoptotic challenge, FXR is rapidly reduced, allowing CASP8 recruitment to the DISC for activation and initiation of apoptotic signaling. Forced FXR overexpression attenuates both acute liver injury and chronic liver fibrosis by inhibiting hepatocyte apoptosis. Surprisingly, FXR agonists are not effective against DR-engaged apoptotic cell death, supporting the conclusion that FXR counteracts apoptosis in a non-genomic manner (Fig. 8).
DR-engaged apoptosis is the major cause of hepatocyte loss in many forms of liver disease [3]. Upon binding their cognate ligands, DRs, including Fas, TRAIL-R1/2, and TNFR1, activate the same extrinsic apoptotic signaling pathway, in which FADD interacts with CASP8 to form a protein complex defined as the DISC, which facilitates CASP8 activation [38]. In hepatocytes, activated CASP8 cleaves the BH3-only protein Bid, generating truncated Bid (t-Bid), which translocates to mitochondria and, in concert with active Bax and Bak, results in mitochondrial outer membrane permeabilization (MOMP) [39]. Two anti-apoptotic proteins, Bcl-xL and Mcl-1, have been identified that inhibit the activation of Bax and Bak, thereby blocking MOMP-mediated intrinsic apoptotic signaling. XIAP can inhibit caspase 9 and the effector caspases 3, 6 and 7 [40,41], and depletion of XIAP switches hepatocytes to CASP8-dependent but Bid-independent apoptosis, as seen in type I cells [39]. Altogether, these findings indicate that CASP8 activation is a pivotal step in initiating the apoptotic signaling pathway in hepatocytes. In this regard, CASP8 activation must be precisely regulated to maintain the balance between apoptosis and anti-apoptosis. Cellular CASP8 (FLICE)-inhibitory protein (cFLIP), which is also recruited to the DISC, is the best-defined regulator controlling CASP8 activation. Because cFLIP, together with Bcl-xL and Mcl-1, is ubiquitously expressed in many cell types, these proteins might not be sufficient for the precise regulation of the apoptotic balance in hepatocytes, which exist in a unique environment with high apoptotic stress. The present identification of cytosolic FXR as an inhibitor of apoptosis in hepatocytes, via a natural interaction with CASP8, may thus shed new light on the molecular events of the intrinsic hepatoprotective system. Under unliganded conditions, FXR interacts via its LBD with the DED of CASP8, the same domain through which CASP8 interacts with FADD. Thus, FXR competitively inhibits the association of FADD with CASP8 to preclude DISC assembly. Unlike cFLIP, FXR cannot directly inhibit the catalytic activity of CASP8, but it retards CASP8 autoactivation on the DISC platform. Therefore, cytosolic FXR may coordinate with cFLIP to limit CASP8 overactivation and thus raise the apoptotic threshold of hepatocytes. Of note, high levels of TNFα, TRAIL and FasL induced a rapid reduction of cytosolic FXR before the initiation of the apoptotic signaling cascade, while forced overexpression of FXR could largely inhibit DR-engaged apoptosis. These facts indicate that reduction of cytosolic FXR levels is a prerequisite for activation of the apoptotic cascade and that the balance between FXR and DRs could be an important determinant of hepatocyte cell fate. Because FXR is predominantly expressed in hepatocytes, the identification of FXR as an intrinsic apoptosis inhibitor is important for understanding how hepatocytes survive in this uniquely hostile environment.
FXR is conventionally recognized as a ligand-activated NR. Upon binding ligands, NRs translocate from the cytoplasm to the nucleus and to target sites in the genome, exerting genomic actions by regulating their target genes. In this study, we uncovered that FXR in the cytoplasm of hepatocytes functions as an intrinsic apoptosis inhibitor via interaction with CASP8. Forced overexpression, but not FXR ligands, protects against DR-engaged apoptosis, supporting the conclusion that the anti-apoptotic effect of FXR is independent of its canonical transcriptional activity. In line with our study, no significant improvement in caspase-cleaved keratin-18 levels was observed in NASH patients after treatment with OCA [26]. Previous studies demonstrated that FXR is important in the pathological development of many liver diseases, including alcoholic hepatitis, NASH, virus-induced hepatitis, cholestatic liver diseases, and liver cancer. However, the exact molecular mechanisms by which FXR protects against liver injury induced by diverse pathological factors have not been fully addressed and, in most cases, have presumptively been ascribed to the transcriptional activity of FXR in regulating bile acid and lipid homeostasis. Of interest, it was previously shown that FXR agonists were effective against intrinsic apoptosis induced by serum deprivation and fasting [42]. However, we found that FXR agonists cannot protect cells against DR-engaged extrinsic apoptosis. Thus, it is reasonable to presume that the genomic and non-genomic functions of FXR coordinate with each other to protect against liver injury induced by diverse factors. More work is needed to delineate how the pleiotropic functions of FXR are tuned to maintain the homeostasis of hepatocytes. Notably, non-genomic functions of some NRs have been documented. Nur77 was reported to interact with Bcl-2 and induce a conformational change of Bcl-2, resulting in the conversion of Bcl-2 from a protector to a killer [43]. ERβ elicits its anti-apoptotic and anti-inflammasome activity via interaction with a protein network in the cytoplasm [44]. The interaction between the vitamin D receptor (VDR)/RXR and p62 was demonstrated to be a crucial determinant of HSC activation and fibrosis [45]. A non-canonical, transcription-independent function of NOTCH1 in regulating adherens junctions and vascular barrier function was revealed recently [46]. Together, all these findings indicate that, although NRs are usually considered to be mainly located in the nucleus to elicit their canonical transcriptional activity, some NRs may, under unliganded conditions, also localize and function in the cytoplasm in a noncanonical manner via protein-protein interactions (PPI).
Increased serum levels of TNFα, FasL and TRAIL, enhanced apoptotic cell death, reduced FXR levels and a compromised FXR/CASP8 interaction were found in fibrotic patients, suggesting that our findings are clinically relevant. Excessive DR-engaged apoptotic death of hepatocytes, which is aggravated by the gradual loss of FXR, may represent a hallmark facilitating fibrotic development. In both GalN/LPS-induced acute hepatic failure and CCl4-induced chronic fibrosis, a drastic reduction of hepatic FXR levels was noted. In support of this, previous studies also showed that the age-dependent decline in FXR expression and activity is a major factor in the development of the fatty liver observed in aging mice [47]. Together, these findings indicate that reduced levels of FXR may render hepatocytes more susceptible to apoptotic stresses, and it is therefore reasonable to expect that restoring FXR levels would be a promising therapeutic strategy for many liver diseases. Indeed, we found that forced overexpression of FXR protected against both acute liver injury and chronic liver fibrosis. Owing to the explicit knowledge of its genomic actions, previous efforts mainly focused on the development of FXR agonists, among which OCA was approved by the FDA and EMEA as a breakthrough drug. Although OCA may be effective against liver fibrosis in NASH [23,26], it had little effect on liver fibrosis in PBC patients [24,25], which may be partially explained by our finding that FXR agonists are not effective against DR-engaged hepatocyte apoptosis, a key event in the pathological development of liver fibrosis. In contrast to FXR agonists, forced overexpression of FXR counteracts hepatocyte apoptosis both in vitro and in vivo. We thus propose that, in addition to the development of more efficient FXR agonists, future efforts can be directed to designing drug candidates that restore FXR protein levels by preventing its degradation, or that strengthen the FXR/CASP8 interaction, for the therapy of liver diseases in which apoptosis of hepatocytes is a causal event. Future studies delineating the exact mechanism of FXR degradation under conditions of liver fibrosis are thus warranted, in order to exploit the strategy of restoring FXR protein levels for combating apoptosis-triggered fibrotic events [19]. Indeed, both high-throughput screening and rational design of compounds targeting FXR degradation and the FXR/CASP8 interaction are now underway in our laboratory.
Fig. 8. Proposed mechanism for cytosolic FXR inhibiting death receptor-engaged apoptosis. FXR physically interacts with CASP8 in the cytoplasm. Apoptotic stimulation leads to rapid FXR downregulation and DISC formation. Enhanced association between FXR and CASP8 precludes DISC formation and CASP8 activation, thereafter preventing apoptosis.
In summary, this study uncovers cytosolic FXR as an apoptosis inhibitor that precludes CASP8 recruitment to the DISC in hepatocytes (Fig. 8), shedding light on the molecular events involved in controlling the anti-apoptotic and pro-apoptotic homeostasis of hepatocytes. Moreover, it suggests a mechanistic basis for targeting cytosolic FXR protein levels and the FXR/CASP8 interaction as a promising strategy for the therapy of liver diseases. | 2018-11-09T20:33:53.886Z | 2018-10-15T00:00:00.000 | {
"year": 2018,
"sha1": "b8086ce522b2088b34651887b9025bf4ea460874",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thelancet.com/article/S2352396418304468/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8086ce522b2088b34651887b9025bf4ea460874",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
227162063 | pes2o/s2orc | v3-fos-license | What can we learn from brain autopsies in COVID-19?
Highlights
• Chronic neurological disease and acute abnormalities are present in COVID-19 brain autopsies.
• Acute hypoxic injury, hemorrhage, and minimal inflammation are frequently observed.
• Low levels of viral SARS-CoV-2 RNA are present; the cellular source remains unknown.
Manuscript
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), an enveloped, single-stranded positive-sense RNA betacoronavirus, is the causative agent of coronavirus disease 2019 (COVID-19), for which there have been over 50 million confirmed cases and 1.2 million deaths worldwide as of November 8, 2020 [1,2]. Morbidity and mortality are more common in older individuals and those with comorbidities, including cardiovascular disease, hypertension, obesity, and diabetes, although young people with no comorbidities are also at risk for critical illness [3][4][5]. While many SARS-CoV-2-infected individuals are asymptomatic or experience predominantly respiratory symptoms, extrapulmonary manifestations, including neurological symptoms and conditions, are increasingly recognized [6][7][8]. The majority of current studies on neurological manifestations are case reports or retrospective series focused on hospitalized patients, based on the extraction of medical record data, which have described disorders of consciousness, delirium, and neuromuscular and cerebrovascular complications [7][8][9][10]. Smell and taste disturbances in the absence of nasal obstruction are particularly characteristic of COVID-19, leading to speculation regarding the olfactory nerve as a possible route of central nervous system entry [11,12]. Other neurological findings include headache, myalgia, rhabdomyolysis, Guillain-Barré syndrome, encephalopathy, and myelopathy, with rare cases of encephalitis based on imaging or cerebrospinal fluid [8,[13][14][15][16][17][18]. SARS-CoV-2 has not been detected in cerebrospinal fluid in the majority of patients tested [8,19], highlighting the need for studies of autopsy brain tissue to understand COVID-19 neuropathogenesis and to develop treatment strategies that preserve neurocognitive function.
Autopsies provide a wealth of information about the decedents, regardless of whether a likely cause of death was identified pre-mortem [20,21]. Due to initial uncertainties regarding the infectious properties of SARS-CoV-2 and limitations in personnel and personal protective equipment availability, autopsies for COVID-19 patients have been limited, although an increasing number of studies are now being published (reviewed in [22][23][24]). Reports of detailed neuropathological examinations have lagged behind general autopsy series, in part due to the initial focus on lung pathology combined with the longer (2-3 weeks) formalin fixation time preferred by most neuropathologists before cutting brains. Additional factors include the reluctance of some institutions to perform brain removal in COVID-19 cases due to concerns over aerosols generated by electric bone saws, which can be effectively contained through the use of vacuum filters or hand saws [25,26]. Included in this review are peer-reviewed studies of autopsy findings published in English between January 1, 2020, and November 5, 2020. Two different databases (PubMed, Google Scholar) were searched for key terms, including COVID-19, nCoV-2019, and SARS-CoV-2, crossed with additional search terms.
[Table fragment: ... 2020 [32], 11 full autopsies with brain findings; Schurink et al. 2020 [38], 11 full autopsies with brain findings: no specific gross findings; hypoxic changes, activation/clustering of microglia, astrogliosis, and perivascular cuffing of T cells, most prominent in the olfactory bulbs and medulla (n = 11); neutrophilic plugs (n = 3); viral nucleocapsid IHC negative in 11 cases; N.A.]
While additional COVID-19 autopsy series continue to be published, the overall picture of acute hypoxic injury, hemorrhage, and mild to moderate non-specific inflammation is unlikely to change significantly. Evidence of direct viral involvement in the brain or olfactory nerve is limited to the detection of low levels of viral RNA and rare viral antigen in cranial nerves and scattered brainstem cells. Diagnosis of coronavirus particles by electron microscopy is challenging due to similar-appearing normal cellular structures, which has created significant controversy in the literature [42,43]. Due to the inherent bias of autopsy studies toward severe, fatal disease, and to additional institutional restrictions on which cases include brain evaluation, the frequency and extent of neuropathological findings are likely to be overestimated relative to the average COVID-19 patient. At the time of this review, pediatric autopsies, including of individuals with multisystem inflammatory syndrome in children (MIS-C), remain extremely limited. While pediatric COVID-19 cases account for <2 % of all cases [44], data obtained from brain tissue in this age group can help address the unique pathophysiology of SARS-CoV-2 infection, including age-dependent immune responses, hypercoagulability, and the degree of hypoxic-ischemic injury.
Additional areas of interest include characterizing the effects on brain tissue of remdesivir and other potential antiviral therapeutics, immunomodulatory medications including dexamethasone, anti-IL-6 or other monoclonal antibodies, and anticoagulants. Given that the therapeutic approach to COVID-19 differs vastly between institutions, it remains a challenge to understand how therapeutic choices during acute hospitalization contribute to the variability in observed neurological manifestations and neuropathological findings. Also, while not surprising this early in the pandemic, long-term neuropathological sequelae in COVID-19 survivors remain unstudied. There is evidence that neurological symptoms, including fatigue and headaches, linger for weeks to months in a subset of affected patients [45,46], and studies determining the mechanisms of persistent neurological symptoms are needed.
There have been several efforts to share COVID-19 brain tissue, including the International Society of Neuropathology (ISN) Collaborative Efforts [47] and the COVID-19 Virtual Biobank at the University of Nebraska Medical Center [48]. To address many of the remaining unanswered questions regarding the neuropathological effects of COVID-19, large-scale integrated studies from multiple institutions with relevant clinical metadata will be crucial. The ongoing collection of neurological tissue will be critical to inform best-practice management guidance and to direct research priorities as they relate to neurological morbidity from COVID-19. | 2020-11-26T05:09:50.836Z | 2020-11-25T00:00:00.000 | {
"year": 2020,
"sha1": "2edfba9113a901557ca9a391af46c059e292beee",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.neulet.2020.135528",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "2edfba9113a901557ca9a391af46c059e292beee",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
117771822 | pes2o/s2orc | v3-fos-license | On automorphic sheaves on Bun_G
Let X be a smooth projective connected curve over an algebraically closed field k of positive characteristic. Let G be a reductive group over k, \gamma be a dominant coweight for G, and E be an \ell-adic \check{G}-local system on X, where \check{G} denotes the Langlands dual group. Let \Bun_G be the moduli stack of G-bundles on X. Under some conditions on the triple (G,\gamma,E) we propose a conjectural construction of a distinguished E-Hecke automorphic sheaf on \Bun_G. We are motivated by a construction of automorphic forms suggested by Ginzburg, Rallis and Soudry in [6,7]. We also generalize Laumon's theorem ([10], Theorem 4.1) for our setting. Finally, we formulate an analog of the Vanishing Conjecture of Frenkel, Gaitsgory and Vilonen for Levi subgroups of G.
Introduction
Let F be a number field and A its ring of adeles. If G is one of the groups SO_{2n+1}, Sp_{2n} or SO_{2n}, consider the standard representation Ǧ → Ȟ of the Langlands dual group Ǧ; here H is GL_{2n}, GL_{2n+1} or GL_{2n}, respectively. For an irreducible, automorphic, cuspidal representation τ of H(A) satisfying some additional conditions, D. Ginzburg, S. Rallis and D. Soudry have proposed a conjectural construction of an irreducible, automorphic cuspidal representation σ of G(A) which lifts to τ (cf. [6,7]).
For example, consider G = SO_{2n+1}. Let X be a smooth projective absolutely irreducible curve over F_q. Consider the Langlands dual group Ǧ = Sp_{2n} over Q̄_ℓ. Let H = GL_{2n} over F_q and Ȟ = GL_{2n} over Q̄_ℓ. The standard representation V of Ǧ is a map Ǧ → Ȟ = GL(V).
Let E be an ℓ-adic Ǧ-local system on X, and assume that V_E is irreducible. According to [8], Theorem VII.6, irreducibility of V_E implies that End(V_E) is pure of weight zero. It follows that for each closed point x ∈ X the local L-function L(E_x, ǧ, s) is regular at s = 1, and the corresponding irreducible unramified representation of G(F_x) is generic. Here F_x denotes the completion of F_q(X) at x, and ǧ = Lie Ǧ. An analog of D. Ginzburg, S. Rallis and D. Soudry's conjecture for function fields is the prediction that in the L-packet of automorphic forms corresponding to E there exists a unique nonramified cuspidal generic form ϕ_E : Bun_G(F_q) → Q̄_ℓ (cf. [7], Conjecture on p. 809 and [6]).
In loc. cit. an additional condition is required: the L-function L(E, ∧^2 V, s) has a pole of order exactly one at s = 1. This condition is satisfied in our situation. Indeed, ∧^2 V = V′ ⊕ Q̄_ℓ, where V′ is an irreducible representation of Ǧ. Since V is self-dual, H^0(X ⊗ F̄_q, V′_E) = 0 and the L-function L(E, V′, s) is a polynomial in q^{-s}. The purity argument shows that L(E, V′, 1) ≠ 0.
In this paper we consider the problem of constructing a geometric counterpart of ϕ_E. Given a reductive group G, a dominant coweight γ and a Ǧ-local system E on X, we impose on these data some conditions similar to the above. Then we propose a conjectural construction of a distinguished E-Hecke eigensheaf on the moduli stack Bun_G of G-bundles on X. Our approach applies to the root systems A_n, B_n, C_n for all n, D_n for odd n, and also E_6 and E_7. For GL_n our method reduces to the one proposed by Laumon in [9].
The construction is presented in Sections 2 and 3. In Section 4 we study the additional structure on Levi subgroups induced by γ and prove a generalization of Laumon's theorem ([10], Theorem 4.1) in our setting. We discuss its applications to cuspidality and formulate an analog of the Vanishing Conjecture of Frenkel, Gaitsgory and Vilonen for Levi subgroups of G.
Statements and conjectures
2.1 Notation
Throughout, k will denote an algebraically closed field of characteristic p > 0. Let X be a smooth projective connected curve over k. Fix a prime ℓ ≠ p. For a k-scheme (or k-stack) S write D(S) for the bounded derived category of ℓ-adic étale sheaves on S.
Let G be a connected reductive group over k. Fix a Borel subgroup B ⊂ G. Let N ⊂ B be its unipotent radical and T = B/N the "abstract" Cartan. Let Λ denote the coweight lattice. The weight lattice is denoted by Λ̌. The semigroup of dominant coweights (resp., weights) is denoted Λ^+ (resp., Λ̌^+). The set of vertices of the Dynkin diagram of G is denoted by I. To each i ∈ I there corresponds a simple root α̌_i and a simple coroot α_i. By ρ̌ ∈ Λ̌ we denote the half-sum of the positive roots of G, and by w_0 the longest element of the Weyl group W.
For λ ∈ Λ^+ write V_λ for the irreducible representation of Ǧ of highest weight λ.
The trivial G-bundle on a scheme is denoted by F^0_G. Recall that for any finite subfield k′ ⊂ k and any non-trivial character ψ : k′ → Q̄_ℓ one can construct the Artin-Schreier sheaf L_ψ on G_{a,k}. The intersection cohomology sheaves are normalized to be pure of weight zero.
Additional data and assumptions
We say that γ ∈ Λ^+ is minuscule if γ is a minimal element of Λ^+ and γ ≠ 0. If γ ∈ Λ^+ is minuscule then, by (Lemma 1.1, [14]), for any root α̌ we have ⟨γ, α̌⟩ ∈ {0, ±1}, and the set of weights of V_γ coincides with the W-orbit of γ. For example, if γ ≠ 0 is orthogonal to all roots then γ is minuscule. One checks that the natural map from the set of minuscule dominant coweights to π_1(G) is injective.
Definition 1. We say that {γ} is a 1-admissible datum if the following conditions hold:
• the center Z(G) is a connected 1-dimensional torus;
• π_1(G) ≃ Z;
• γ ∈ Λ^+ is a minuscule dominant coweight whose image θ in π_1(G) generates π_1(G);
• V_γ is a faithful representation of Ǧ.
The following lemma is straightforward.
is simply connected. Note that for μ̌ ∈ Λ̌^+ and λ̌ ∈ Λ̌^+_S, the condition μ̌ ≤ λ̌ implies μ̌ ∈ Λ̌^+_S. Note that {−w_0(γ)} is also a 1-admissible datum. Since V_γ is faithful, the weights of V_γ generate Λ, and for each i ∈ I we have ⟨γ, ω̌_i⟩ > 0. For each maximal positive root α̌ we have ⟨γ, α̌⟩ = 1. In particular, if the root system of G is irreducible (so, nonempty) then γ is a fundamental coweight corresponding to some simple root.
Some examples of 1-admissible data are given in the appendix.
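As a quick illustrative check of Definition 1 ahead of the appendix, the standard GL_n case (the one mentioned in Remark 5) can be written out as follows; this is only a verification in the notation above, with the pairing written as $\langle\,,\rangle$.

For $G=\mathrm{GL}_n$ take $\gamma=(1,0,\dots,0)\in\Lambda^{+}$. Then:
(i) $Z(G)\simeq\mathbb{G}_m$ is a connected one-dimensional torus;
(ii) $\pi_1(G)=\Lambda/(\text{coroot lattice})\simeq\mathbb{Z}$, and the image of $\gamma$ is a generator;
(iii) $\gamma$ is minuscule: $\langle\gamma,\check\alpha\rangle\in\{0,\pm 1\}$ for every root $\check\alpha=\check\epsilon_i-\check\epsilon_j$, and the weights of $V_\gamma$ form the single $W$-orbit $\{\epsilon_1,\dots,\epsilon_n\}$ of $\gamma$;
(iv) $V_\gamma$ is the standard $n$-dimensional representation of $\check G=\mathrm{GL}_n$, which is faithful.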
Consider the formal disk D = Spf(k[[t]]
). Recall that the Affine Grassmanian Gr G is the ind-scheme classifying pairs (F G , β), where F G is a G-bundle on D and β : F G →F 0 G is a trivialization over the punctured disk D * = Spec k((t)). Define the positive part Gr + G ⊂ Gr G of Gr G as a closed subscheme given by the following condition: ). Recall that for µ ∈ Λ + one has the closed subscheme Gr µ G ⊂ Gr G (cf. [2], sect. 3.2). One checks that Gr µ G ⊂ Gr + G iff µ ∈ Λ + G,S . Let π + 1 (G) ⊂ π 1 (G) be the image of Λ + G,S under the projection Λ → π 1 (G).
For ν ∈ π 1 (G) the connected component Gr ν G of Gr G is given by the condition: 2.3 Denote by Bun G the moduli stack of G-bundles on X. Let H + G be the corresponding positive part of the Hecke stack, it classifies collections: extends to an inclusion of coherent sheaves on X, and Vω 0
2.4
Version of Laumon's sheaf Given a local system W on X and d ≥ 0, one defines a sheaf L d W on H +,d G as follows. LetH +,d G be the stack of collections: where supp (resp., p) sends the above collection to (x 1 , . . . , x d ) (resp., to ( be the open subscheme classifying reduced divisors. Over rss X (d) , this diagram is cartesian.
Proposition 1. The map p is representable, proper, surjective and small.
This is a perverse sheaf, the Goresky-MacPherson extension from supp −1 ( rss X (d) ). It is equiped with a canonical action of S d . Define where p (resp., q) sends (F G , F ′ G , β) to F G (resp., F ′ G ). By (property 3, sect. 5.1.2 [2]), the sheaf L d W is ULA with respect to both projections p and q. Let r = dim V γ . For a partition µ = (µ 1 ≥ . . . ≥ µ r ≥ 0) of d define the polynomial functor W → W µ of aQ ℓ -vector space W by where U µ stands for the irreducible representation of S d corresponding to µ. For d > 0 let ℓ(µ) be the greatest index i ≤ r such that µ i = 0. For d = 0 let ℓ(µ) = 0. If ℓ(µ) is less or equal to dim W then W µ is the irreducible representation of GL(W ) with h.w. µ, otherwise it vanishes.
For ν ∈ Λ + let A ν denote the IC-sheaf on Gr ν . Recall that the category Sph(Gr G ) of spherical perverse sheaves on Gr G consists of direct sums of A ν , as ν ranges over the set of dominant coweights. We have the Satake equivalence of tensor categories Loc : Rep(Ǧ) → Sph(Gr G ) (cf. Theorem 3.2.8, [2]). In particular, we have Loc The Satake equivalence yields the following description.
Proposition 2. For any local system W on X the restriction of L d W to the fibre (1) of supp ×q identifies with the exterior product the sum being taken over the set of partitions of d k of length ≤ r.
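To make the partition functors $W^{\mu}$ defined above concrete, here is a small worked example for $d=2$, assuming the usual Schur-functor convention $W^{\mu}\cong\mathrm{Hom}_{S_d}(U_{\mu},W^{\otimes d})$ (the precise displayed formula is an assumption on our part):
$$W^{(2)}\cong\operatorname{Sym}^{2}W,\qquad W^{(1,1)}\cong\wedge^{2}W,\qquad W\otimes W\cong\operatorname{Sym}^{2}W\oplus\wedge^{2}W.$$
In particular $\ell((1,1))=2$, so $W^{(1,1)}$ vanishes exactly when $\dim W=1$, matching the rule that $W^{\mu}=0$ once $\ell(\mu)>\dim W$.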
2.5 Given a local system W on X, for d ≥ 0 define a functor Av d W : Let also Av −d W : D(Bun G ) → D(Bun G ) be given by The functors Av d W and Av −d W * are both left and right adjoint to each other. As in (Proposition 9.5, [4]) one proves Proposition 3. Let K be a Hecke eigensheaf on Bun G with respect to aǦ-local system E. Then for the diagram → Bun G and any local system W on X we have Consider the stack of pairs (F T ,ω), where F T is a T -torsor on X andω is a trivial conductor, that is,ω is a collection of isomorphismsω i : Lα i F T → Ω for each i ∈ I. The exact sequence 1 → Z(G) → T → i∈I G m → 1, where the second map is i∈Iα i , shows that this stack is noncanonically isomorphic to Bun Z(G) (recall that by our assumption Z(G) is connected).
Fix a section T → B. Then for each pair (F T ,ω) we have the evaluation map evω : Bun F T N → A 1 (cf. [3], section 4.1.1). Fix a T -torsor on X with a trivial conductor (F T ,ω).
another T -torsor with trivial conductor on X then there exists a Z(G)torsor F Z(G) on X and an isomorphism F T ⊗ F Z(G) →F ′ T with the following property. Let where Z(G) acts diagonally. Then the diagram commutes . So, for eachλ ∈Λ + S we have an embedding of coherent sheaves Given a local system W on X, define the sheaf P d W,ψ on Y d as follows. Consider the open immersion j : where d N = dim Bun F T N . By (Theorem 2, [3]), this is a perverse sheaf and D(P 0 It is easy to see thatQ ℓ ⊠ L d W is ULA with respect to the projection Y 0 × Bun G H d,+ G → Y 0 . So, by (property 5, sect. 5.1.2 [2]), Proposition 4. For any local system W on X, P d W,ψ is a perverse sheaf on Y d , the Goresky-MacPherson extension from rss Y d . If W is irreducible then P d W,ψ is irreducible.
The proof is found in Section 3.2.
2.7 Let π 0 : Bun F T N → Bun G be the projection.
By Remark 1, this property does not depend (up to a tensoring K by a 1-dimensional vector space) on our choice of the pair (F T ,ω).
be the projections. From Proposition 3 one derives Corolary 1. Let K be a generic normalized E-Hecke eigensheaf on Bun G . Let W be any local system on X. Then for each d ≥ 0 one has For aǦ-local system E pick aǦ-local system E * such that Vλ E * → (Vλ E ) * for allλ ∈Λ + . Let K be a E-Hecke eigensheaf then DK is a E * -Hecke eigensheaf. Assume that DK is generic normalized then from Corolary 1 we get an isomorphism By adjunction, it yields a nonzero map Dualizing, we see that this is equivalent to providing a nonzero map Conjecture 1 (geometric Langlands). Let E be aǦ-local system on X. Assume that W = V γ E is irreducible and satisfies the condition Then there exists N > 0 and for each d ≥ N a nonempty open substack U d ⊂ Bun d G with the following property. There exists a E-Hecke eigensheaf K on Bun G such that • both K and DK are generic normalized; • K is an irreducible perverse sheaf over each Bun d G , which does not vanish over U d .
Remarks . i) The sheaf K from Conjecture 1 is unique up to an isomorphism if it exists. ii) For any local system W on X we have π ! P d W,ψ → Av d W (π ! P 0 ψ ) naturally.
Informal motivation
If the ground field k was finite then according to Langlands' spectral decomposition theorem ( [13]), each function from L 2 (Bun G (k)) would be written as linear combination (more precisely, a direct integral) of Hecke eigenfunctions. Conjecturally, some version of spectral decomposition should exist for the derived category D(Bun G ) itself. We also have an analog of the scalar product of two objects K 1 , K 2 ∈ D(Bun G ), which is the cohomology RΓ c (Bun G , K 1 ⊗ D(K 2 )) (we ignore here all convergence questions).
Let E be aǦ-local system on X satisfying the assumptions of Conjecture 1. One may hope that to E is associated a E-Hecke eigensheaf K, which is unique in appropriate sense.
Since K is expected to be generic normalized, the "scalar product" of π ! P 0 ψ and K should equal "one". That is, K should appear in the spectral decomposition of π ! P 0 ψ with multiplicity one. By Proposition 3, the functor Av d W applied to π ! P 0 ψ with d large enough, will kill all the terms in the spectral decomposition of π ! P 0 ψ except K itself. So, roughly speaking, Av d W (π ! P 0 ψ ) should equal K tensored by some constant complex.
2.9
Stratifications For µ ∈ Λ pos denote by X µ the moduli scheme of Λ pos -valued divisors of degree µ. If µ = i∈I a i α i then X µ = i∈I X (a i ) .
are regular everywhere and maximal. We have a projection µ Bun 2.10 For any local system W on X letP d W,ψ be the complex obtained by replacing in the definition of P d W,ψ Laumon's sheaf by Springer's sheaf for some sheafW d,µ on X d,µ . HereW d,µ is placed in usual degree zero. Let W d,µ denote the corresponding sheaves for P d W,ψ .
the inside sum being taken over partitions ν of d k of length ≤ r.
The proof is found in Section 3.1. Proposition 5 together with Corolary 1 suggest the following conjecture.
. Let ev λ : BunF T N → A 1 be the evaluation map given by the conductor data. Let E be aǦ-local system on X, K be a generic normalzed E-Hecke eigensheaf on Bun G . Then as follows. Consider the diagram This yields a morphism and, by adjunction, the desired map (4).
is placed in perverse degrees ≤ 0.
2) For any µ ∈ Λ pos , the restriction of (5) to µ Y d ×X is supported by µ Y + d ×X and is isomorphic to the tensor product of The proof is given in Section 3.3 Remarks . i) One may show that for any µ ∈ Λ pos the restriction of (4) to µ Y d × X comes from a morphism of sheaves W d,µ The latter map is an isomorphism over the open substack of X d,µ × X classifying triples (D, D pos , x) such that x does not appear in D. Therefore, (4) is an isomorphism over the locus of (F, κ, D, x) ∈ Y d × X such that x does not appear in D. ii) In the situation of Conjecture 1 we expect that the map (4) yields the Hecke property of K corresponding to the coweight γ. Moreover, it should also yield the Hecke properties corresponding to all λ ∈ Λ + G,S (as it indeed happens for GL n ). Define the Hecke functor H Y : Note that the cohomological shifts in the definition of the Hecke functor H : D(Bun d+1 G ) → D(Bun d G ×X) corresponding to γ and H Y differ by one! So, the Hecke property of K can not be simply the push-forward of (4) with respect to π : Y d → Bun G .
2.12
Let ω be a generator of the group of coweights orthogonal to all roots. Since the image of ω in π 1 (G) is not zero, we assume that this image equals d ω θ for some and κ ′λ is the composition for allλ ∈Λ + S . Let E be aǦ-local system on X and set W = V γ E . Then there is a natural map This is not an isomorphism in general, and one may show that the LHS of (6) is placed in perverse degrees ≤ 0.
2.13 Whittaker sheaves Let K(Y d ) denote the Grothendieck ring of the triangulated category D(Y d ).
To eachǦ-local system E on X and d ≥ 0 we attach the Whittaker sheaf W d E,ψ ∈ K(Y d ) defined as follows.
Let µ ∈ Λ pos be such that dγ + w 0 (µ) is dominant. Let τ be a partition of dγ + w 0 (µ), that is, a way to write dγ + w 0 (µ) = i n i λ i with λ i ∈ Λ + G,S pairwise different and n i > 0. Let τ X ⊂ i X (n i ) be the complement to all diagonals. We consider τ X ⊂ X d,µ as the locally-closed subscheme classifying divisors k λ k x k of degree dγ + w 0 (µ) with x k ∈ X pairwise different. Let to be the (unique) complex with the following properties. Its The sheaf W d E,ψ should satisfy the Hecke property, in particular we suggest Conjecture 3. Recall the diagram (cf. Sect. 2.11) There is a canonical isomorphism in the Grothendieck ring K(Y d × X) 2.13.1 We don't know if Λ + G,S is a free semigroup in general, however this is the case for our examples GL n , GSp 2n , GSpin 2n+1 (cf. appendix).
Assuming that Λ + G,S is a free semigroup, we can describe W d E,ψ more precisely, namely "glue" the pieces on the strata τ Y to get a sheaf on µ Y + d . To do so, we will glue the sheaves τ E to get a constructible sheaf AE d,µ on X d,µ (here 'A' stands for 'automorphic').
Let λ 1 , . . . , λ m be free generators of Λ + G,S thus yielding Λ + G,S → (Z + ) m . Given d ≥ 0 and µ ∈ Λ pos with dγ + w 0 (µ) = m i=1 a i λ i dominant, we get on X d,µ . Let D = k ν k x k be a k-point of X d,µ , where x k are pairwise different and ν k ∈ Λ + G,S . Write ν k = m i=1 a i,k λ i for each k. The fibre of (7) at D is There is (a unique up to a nonzero multiple) inclusion ofǦ-modules The following is borrowed from [12].
Proposition 7. Assume that Λ + G,S is a free semigroup. There is a unique constructible subsheaf whose fibre at any k-point D = k ν k x k of X d,µ is the image of (8).
Some proofs
3.1 Recall that Gr G is stratified by locally closed ind-subschemes S µ indexed by all coweights µ ∈ Λ. Informally, S µ is the N (K)-orbit of the point µ(t) ∈ Gr G , whereK = k((t)). We refer the reader to [3], Section 7.1 for the precise definition. Recall the following notion from loc.cit., section 7.1.
. LetΩ denote the completed module of relative differentials ofÔ over k (so,Ω is a freeÔ-module generated by dt). Given a coweight η ∈ Λ and isomorphisms for each i ∈ I, one defines an admissible character χ η : N (K) → G a of conductor η as the sum The restriction of Spr d W to this fibre is the tensor product of (⊗ k (W ⊗d k where the inside sum is taken over dominant coweights ν ∈ Λ + such that ν ≤ d k γ, and V ν,k are some vector spaces. Let such that x k are pairwise different (and some of d k may be zero). Let K y denote the fibre at y of The fibre of 0 q Y over y identifies with An equivariance argument (as in [3], Lemma 6.2.8) shows that K y vanishes unless all ν k are dominant. By (Lemma 7.2.7(2), [3]) the restriction of the map In the latter case, it is canonicallyQ ℓ [ ν k , 2ρ ]( ν k ,ρ ).
The above equivariance argument shows also that the restriction ofP d W,ψ to µ Y + d , after tensoring by ev * µ L ψ −1 , descends with respect to the projection µ Y + d → X d,µ . Combining this with Proposition 2, one finishes the proof of Proposition 5.
The above proof combined with (Proposition 3.2.6, [2]) also gives the following
Given a partition τ of dγ + ω 0 (µ), consider the locally closed subscheme τ X ⊂ X d,µ , which is the moduli scheme of divisors k (d k γ + ω 0 (µ k ))x k with x k pairwise different. Given τ , if k runs through the set consisting of m elements then dim τ X = m. Clearly, the schemes τ X form a stratification of X d,µ .
Suppose that τ is of length m, that is, dim τ X = m. Then from (Lemma 7.2.4, [3]) it follows that In particular, we have dim Y d = d + d N + dγ, 2ρ . Now from Proposition 5, we learn that the restriction ofP d W,ψ to τ Y is placed in perverse degrees ≤ 0. Moreover, the inequality is strict unless µ = 0 and m = d. SinceP d W,ψ is self-dual (up to replacing W by W * and ψ by ψ −1 ), our assertion follows.
Remarks . i) As a corolary, note that the restriction of P d W,ψ to 0 Y d identifies canonically with is a finite morphism. Let W and W ′ be any local systems on X. Then the complex is placed in usual cohomological degree −2d. This is seen by calculating this direct image with respect to the stratification of Y d by µ Y d . For G = GL n the complex (9) is a Rankin-Selberg integral considered in [11].
Proof This follows from the description of S wλ ∩ Gr λ given in ( [14], Lemma 5.2).
Consider a k-point of µ Y d given by (F G , κ, D, D pos ).
We conclude that the restriction of (5) to µ Y d × X vanishes outside the closed substack µ Y + d × X, and is isomorphic to for some sheaf F on µ Y + d × X placed in usual degree zero. An equivariance argument (as in the proof of Proposition 5) assures that F ⊗ ev * µ L ψ −1 descends with respect to the projection for any µ ∈ Λ pos , the complex (5) is placed in perverse degrees ≤ 0. Proposition 6 is proved. The following identities are straightforward.
Lemma 3. We have
. It may be shown that Gr +,µ M is connected for each µ ∈ π + 1 (M ). Recall the following definition from ( [2], sect. 4.3.1). For µ ∈ Λ G,P let S µ P ⊂ Gr G denote the locally closed subscheme classifying (F G , β : has niether pole nor zero over D for everyλ ∈Λ G,P ∩Λ + . For each ν ∈ π 1 (G) the component Gr ν G is stratified by S µ P indexed by those µ ∈ Λ G,P whose image in π 1 (G) is ν. Moreover, we have a natural map t µ S : S µ P → Gr µ M .
Lemma 4. For each µ ∈ Λ G,P the map t µ S : Gr + G ∩S µ P → Gr µ M factors through Gr +,µ M ֒→ Gr µ M , and the induced map t +,µ There is unique (F P , β : F 0 P | D * → F P | D * ) ∈ Gr P that induces (F G , β), and t µ S sends (F G , β) to F M = F P × P M . Since for anyλ ∈Λ + S the maps βλ : Vλ F P ֒→ Vλ F 0 P are regular, the first assertion is reduced to the next sublemma. ]. Since t +,µ S is M (Ô)-invariant, it suffices to show that ν(t) ∈ Gr +,µ M lies in the image of t +,µ S . We know that there exists w ∈ W with wν ∈ Λ + G,S . Therefore, ν(t)G(Ô) defines a point of Gr + G ∩S µ P which is sent by t +,µ S to ν(t) ∈ Gr +,µ M . (Lemma 4) Note as a consequence that for each ν ∈ π + 1 (G) the scheme Gr +,ν G is stratified by locally closed subschemes Gr +,ν G ∩S µ P indexed by those µ ∈ π + 1 (M ) whose image in π 1 (G) is ν.
where F i G is a G-bundle on D, and β i : Pick a k-point (11) whose image under the convolution map There exist a unique collection Lemma 6. 1) Each λ ∈ Λ +,θ M,S is a minuscule dominant coweight for M . 2) The natural map Λ +,θ M,S → π θ 1 (M ) is bijective.
For d ≥ 0 consider the stack
where we used the projection q : H +,d G → Bun G in the fibred product. For µ ∈ Λ G,P let H +,µ P be the locally closed substack of (12) classifying for which there exists a Λ G,P -valued divisor D µ on X of degree µ with the property: for alľ λ ∈Λ G,P ∩Λ + the meromorphic maps is non empty iff µ ∈ π + 1 (M ) and actually D µ is a π + 1 (M )-valued divisor on X. So, for each d ≥ 0 the stack (12) is stratified by locally closed substacks H +,µ P indexed by those µ ∈ π + 1 (M ) whose image in π 1 (G) is dθ. For µ ∈ π + 1 (M ) let d = µ,ω 0 and d i = µ,ω i for i ∈ I − I M and let X µ M denote the scheme image of the projection We will think of X µ M as the moduli scheme of π + 1 (M )-valued divisors on X of degree µ. As we will see, X µ M need not be irreducible. For µ ∈ π + 1 (M ) we have a commutative diagram where we have denoted by supp P and s M the natural projections. For µ ∈ π + 1 (M ) whose image in π 1 (G) is dθ consider the diagram where we used q M : H +,µ M → Bun M in the fibred product, f M is the natural map, and q M sends where now we used p M : H +,µ M → Bun M in the fibred product, and p M sends (F P , Here is a generalization of Laumon's theorem ( [10], Theorem 4.1).
The proof is given in Sections 4.4-4.5.
4.4 Let J = {i ∈ I | γ,α i = 0}. Let W J ⊂ W be the subgroup generated by the reflection corresponding to i ∈ J. Using Bruhat decomposition, one checks that the map W/W J → W γ sending w to wγ is a bijection. Fix a section T → B. Let P γ denote the parabolic of G generated by T and Uα for all rootsα such that γ,α ≤ 0. So, P γ contains the opposite Borel. We have a bijection Λ +,θ M,S → W M \W/W J sending wγ ∈ Λ +,θ M,S to the coset W M wW J .
The map G/P γ → Gr γ G sending g ∈ G(k) ⊂ G(Ô) to gγ(t)G(Ô) is an isomorphism. The scheme Gr γ G is stratified by Gr γ G ∩S λ P indexed by λ ∈ Λ +,θ M,S . The above isomorphism transforms this stratification into the stratification of G/P γ by P -orbits. We have a disjoint decomposition So, we have Gr γ G ∩S λ P → P wP γ /P γ → P/P ∩ wP γ w −1 Similarly, for λ ∈ Λ +,θ M,S let P λ (M ) be the parabolic of M generated by T and Uα, whereα runs through those roots of M for which λ,α ≤ 0. Then the map M/P λ (M ) → Gr λ M sending m ∈ M (k) to mλ(t)M (Ô) is an isomorphism. So, the map is nothing else but the map P/P ∩ wP γ w −1 → M/P wγ (M ) sending p to p mod M . The correctness is due to Lemma 8. i) The map convμ is representable, proper and small over its image. Besides, the perverse sheaf is the Goresky-MacPherson extension from rss H +,µ M . Here a = dim H +,μ M . ii) The d-tupleμ gives rise to A(µ) in general position, say µ = k n k ν k . The group k S n k acts naturally on (15), and the sheaf of k S n k -invariants is canonically isomorphic to the direct summand of L µ W corresponding to A(µ).
Remark 3. From Lemma 8 it follows that for any µ ∈ π + 1 (M ) the complex L µ W is ULA with respect to both projections p M , q M : H +,µ M → Bun M .
Proof of Proposition 8 1) Consider the diagram
where λ k ∈ Λ +,θ M,S maps to µ k ∈ π θ 1 (M ). The restriction of f * M L d W to q * M (U × Bun M Bun P ) comes from rss X (d) . So, over U × Bun M Bun P , we get the desired isomorphism. Now it suffices to show that, up to a shift, q M ! f * M Spr d W is a perverse sheaf, the Goresky-MacPherson extension from rss H +,µ M × Bun M Bun P . For a d-tupleμ = (µ 1 , . . . , µ d ) with µ = µ 1 + . . . + µ d and µ i ∈ π θ 1 (M ), let H +,μ P be the stack of collections where x i ∈ X and (F i P , F i+1 P , x i , β i ) ∈ H +,µ i P for i = 1, . . . , d. The stack is stratified by locally closed substacks H +,μ Proof The functor Av µ W = ⊕ Av is a direct sum of functors indexed by A(µ) in general position. In the notation of Proposition 9, we have Av A(µ) Here d = n k , and k runs through the finite set π θ 1 (M ). For d large enough at least one of n k will satisfy n k > r(2g − 2) dim U λ k , and the RHS of (17) will vanish.
Generalizing the Vanishing Conjecture of Frenkel, Gaitsgory and Vilonen ( [4]), we suggest Conjecture 4. Let W be an irreducible local system on X of rank r = dim V γ . Assume that P is a standard proper parabolic of G. Then for all µ ∈ π + 1 (M ) whose image in π 1 (G) equals dθ with d > c(P ), the functor Av µ W vanishes identically.
Consider the diagram Bun
where α P and β P are natural maps. The constant term functor CT P : D(Bun G ) → D(Bun M ) is defined by CT P (K) = β P ! α * P (K). The following is a generalization of Lemma 9.8, [4].
Lemma 9. Let W be any local system on X. For any K ∈ D(Bun G ) and d ≥ 0 the complex CT P • Av d W (K) ∈ D(Bun M ) has a canonical filtration by complexes indexed by those µ ∈ π + 1 (M ) whose image in π 1 (G) is dθ.
Proof Consider the stack H +,d G × Bun G Bun P , where we used q : H +,d G → Bun G in the fibred product. The complex CT P • Av d W (K) is the direct image with respect to the natural map Recall that H +,d G × Bun G Bun P is stratified by locally closed substacks H +,µ P indexed by those µ ∈ π + 1 (M ) whose image in π 1 (G) is dθ. This gives a filtration on CT P • Av d W (K Corolary 4. Assume that Conjecture 4 holds. Then 1) Let d satisfy d > c(P ) for any standard proper parabolic of G. Then for any K ∈ D(Bun G ) and any irreducible local system W on X of rank r = dim V γ the complex Av d W (K) is cuspidal. 2) Let E beǦ-local system on X and K be a E-Hecke eigensheaf on Bun G . If V γ E is irreducible then K is cuspidal.
2) The argument given in ( [4], Theorem 9.2) applies in our setting. Namely, pick d such that d > c(P ) for any standard proper parabolic of G. Set W = (V γ E ) * . By Proposition 3, The LHS vanishes by Lemma 9. Since RΓ( is not zero, CT P (K) = 0. Remark 4. For G = GL n Conjecture 4 is proved by D. Gaitsgory ([5]). For G = GSp 4 (example 2 in the appendix) Conjecture 4 also holds, it is easily reduced to the result of loc.cit. for GL 2 . So, for G = GSp 4 Corolary 4 is unconditional.
Appendix. 1-Admissible groups Definition 8. Let H be a connected, semi-simple and simply-connected group (over k). Assume that the center Z(H) is cyclic of order h and fix an isomorphism z : µ h →Z(H). Assume that the characteristic of k does not divide h. Denote by G the quotient of H × G m by the diagonally embedded µ h . Call a reductive group G over k 1-admissible, if it is obtained in this way.
Let H be a connected, semi-simple and simply-connected group (over k). Let T H be a maximal torus of H. WriteΛ H (resp., Λ H ) for the weight (resp., coweight) lattice of T H . Leť Q H ⊂Λ H be the root lattice. Set It is understood that the pairing Λ×Λ → Z sends (λ, b), (λ, a) to λ,λ + ab. The map (λ, b) → b yields an isomorphism π 1 (G) → Λ/Λ H → 1 h Z. Note also that π 1 (Ǧ) →Λ/Q H → Z. The next result follows from definitions. • the irreducible representation V γ H of (H ad )ˇis faithful.
Examples of 1-admissible data
The examples below are produced using Lemma 10.
Remark 5. This particular choice of γ H yields a construction of an automorphic sheaf proposed by Laumon in [9]. However, all fundamental coweights for H ad = PSL n are minuscule, and a choice of γ H here is equivalent to a choice of a generator of the cyclic group π 1 (H ad ). If γ H is a fundamental coweight corresponding to a simple root which is not one of two edges of the Dynkin diagram A n−1 then the corresponding 1-admissible group G is not isomorphic to GL n .
2. The case G = GSp_{2n}, n ≥ 1. The group G is a quotient of G_m × Sp_{2n} by the diagonally embedded {±1}. Realise G as the subgroup of GL(k^{2n}) preserving, up to a scalar, the standard symplectic bilinear form, given by the block matrix with zero diagonal blocks and off-diagonal blocks ±E_n, where E_n is the unit matrix of GL_n. The maximal torus T of G is {(y_1, …, y_{2n}) | y_i y_{n+i} does not depend on i}. Let ε̌_i ∈ Λ̌ be the character that sends a point of T to y_i. The roots are Ř = {±α̌_{ij} (i < j ∈ 1, …, n), ±β̌_{ij} (i ≤ j ∈ 1, …, n)}, where α̌_{ij} = ε̌_i − ε̌_j and β̌_{ij} = ε̌_i − ε̌_{n+j}.
5.
The case E 6 . So, H is the simply-connected group corresponding to E 6 root system. There are two possible choices for γ H , namely ω 1 or ω 6 , the fundamental coweights corresponding to the simple rootsα 1 andα 6 in the Bourbaki 6. The case E 7 . So, H is the simply-connected group corresponding to E 7 root system. Take γ H to be the fundamental coweight ω 7 corresponding to the rootα 7 in the Bourbaki | 2019-04-12T09:11:31.669Z | 2002-11-04T00:00:00.000 | {
"year": 2002,
"sha1": "834b3f44251a5a39e909d937ed0868e908096dc8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "834b3f44251a5a39e909d937ed0868e908096dc8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
253704222 | pes2o/s2orc | v3-fos-license | Preparation of Chitosan-Composite-Film-Supported Copper Nanoparticles and Their Application in 1,6-Hydroboration Reactions of p-Quinone Methides
Here, we describe the preparation of copper nanoparticles that are stabilized on a chitosan composite film (CP@Cu). This material could catalyze the 1,6-hydroboration reactions of p-quinone methides with B2pin2 as a boron source under mild conditions. This reaction exhibited very good functional group compatibility, and the organoboron compounds that were formed could easily be converted into the corresponding hydroxyl products with good to excellent yields. This newly developed methodology provides an efficient and sequential pathway for the synthesis of gem-disubstituted methanols.
Introduction
Not only do organoboron compounds exist in a wide range of active molecules, natural products, and materials [1][2][3], but they are also key intermediates in the synthesis of many functional chemicals [4,5]. For these reasons, in recent years, a series of methodologies-especially transition metal catalysis-have been developed for the synthesis of organoboron compounds [6][7][8], which have become more and more important [9][10][11][12]. Copper catalysts are increasingly favored by organic chemists due to their low cost, low toxicity, and solid performance [13,14]. In previous work, copper-catalyzed hydroboration reactions of unsaturated compounds were widely studied, and this remains a common method for constructing C-B bonds [15][16][17][18]. However, the reported examples involved the use of strong bases and specifically designed ligands, which reduced the reaction economy. Thus, it is necessary to explore alternative highly active and sustainable copper-catalyzed hydroboration reactions of unsaturated compounds.
Metallic nanoparticles have been widely used in various reactions with the continuous development of organic synthetic chemistry over recent decades [19][20][21][22]. However, to the best of our knowledge, though they are some of the most widely used metallic nanoparticles, copper nanoparticles are rarely used to catalyze the hydroboration of unsaturated compounds [23,24]. In particular, with p-quinone methides, as a class of intermediates with a wide range of applications in organic synthesis [25][26][27][28], the 1,6-hydroboration products can be well transformed into gem-disubstituted methanols under certain conditions; these are widely spread throughout nature and are the core skeletons of many biologically active molecules and natural products [29][30][31][32][33]. So far, there have only been a few reports in the literature on the copper-catalyzed 1,6-boron addition reaction of p-quinone methides, and these reports were mainly focused on Cu(I)-catalyzed reactions [34,35]. In our previous work, we used Cu(OH)2 as a catalyst to investigate the 1,6-hydroboration reaction of p-quinone methides, and good functional group compatibilities and reaction yields were obtained (Scheme 1a) [36]. Based on the results of our previous research, we found that when chitosan-supported copper nanoparticles were used as a heterogeneous catalyst, C-B bonds [37] and C-Si bonds [38] could be constructed with high efficiency. Therefore, in this work, we hope to use chitosan-supported copper nanoparticles as a heterogeneous catalyst and p-quinone methides as substrates to study the 1,6-hydroboration reactions. In comparison with previous work, the biggest advantage of this work is the avoidance of the participation of bases in the reactions and the recycling of the catalyst to increase its utilization rate (Scheme 1b).
Scheme 1. Cu(II)-catalyzed 1,6-hydroboration of p-quinone methides.
Results and Discussion
The initial experiments commenced with p-quinone methide 1a as a model substrate. CP@Cu (6 mol%) was used as a catalyst by using B 2 Pin 2 (1.2 equiv.) as the boron source in the reactions. Firstly, the various organic solvents were investigated, and considering the role of protons, the whole reaction was started with ethanol (2.5 mL) as the solvent. However, no reaction happened ( Table 1, Entry 1). When DCM and THF were used as solvents and MeOH (2 equiv.) was used as an additive, reactions were still not observed ( Table 1, Entries 2-3). To our delight, when MeCN was used as the solvent and MeOH (2 equiv.) was used as an additive, the reaction was able to take place, and the desired product 2a was smoothly obtained. It was confirmed that further oxidation of 2a gave the corresponding gem-disubstituted methanol product 3a by using NaBO 3 ·4H 2 O as an oxidant. (Table 1, Entry 4). Continuing to use acetone as the solvent and MeOH (2 equiv.) as an additive further promoted the occurrence of the reaction, and the target product 3a could be obtained with 70% yield (Table 1, Entry 5). Since water was a green solvent in this reaction, we added water (2 equiv.) to the reaction as an additive; unfortunately, the reaction did not happen (Table 1, Entry 6). As far as we know, in organic synthesis reactions, the use of mixed solvents could sometimes greatly improve the efficiency of the whole reaction. Therefore, in order to further improve the yield, we considered using acetone and H 2 O as mixed solvents to carry out the reaction, and the ratios of the solvents were screened (Table 1, Entries 7-10). When the ratio of acetone to H 2 O was 4:1, the reaction had the highest rate of conversion, and it occurred almost completely; the final target product could be obtained with 98% yield (Table 1, Entry 7). When the weight of H 2 O in the mixed solvents was continually increased, it was found that the conversion rate of the reaction decreased as the proportion of water increased; even when the ratio of acetone to H 2 O was reversed to 1:4, the reaction hardly occurred, and only trace amounts of product could be detected (Table 1, Entry 10). In order to verify the importance of the CP@Cu in the reactions, we performed a control experiment, and no reaction occurred without any CP@Cu, which proved that the catalyst was indispensable in these reactions (Table 1, Entry 11). Finally, the reaction time was also investigated. Even if the reaction time was shortened to 1 h, the reaction still occurred efficiently and produced the target product with 93% yield ( Table 1, Entry 12). Thus, through a series of optimizations of the conditions, the optimal conditions in this research were found to be 6 mol% of CP@Cu as a catalyst and 1.2 equiv. of B 2 Pin 2 as a boron source, and the whole reaction was conducted in 2.5 mL of mixed solvents (acetone:H 2 O = 4:1) at room temperature for 2 h (Table 1, Entry 7). With the optimal conditions in hand, we continued to examine the universality of the reaction, and the results are summarized in Figure 1. Firstly, the effects of substituents at the ortho-position of the benzene ring on the reaction were investigated. For electron-donating substituents, such as methyl and methoxy, the desired target product could be obtained with excellent reaction yields (3b-3c, 96-98% yields). 
The whole reaction could still proceed smoothly and achieve the corresponding product with a satisfactory yield when the more conjugated 2-substituted naphthyl was selected as the substituent instead of phenyl (3d, 92% yield). Although the electron-withdrawing substituents at the ortho-position had a certain effect on the reaction, the desired product was still obtained with a good yield (3e, 74% yield).
Next, we investigated the reactivity of the substituents at the meta-position of the benzene ring. From the reaction results summarized in Figure 1, the electron-donating substituents had a good effect on the reaction, and the desired products could be obtained with an almost equivalent yield (3f-3g, 96-97% yields). However, when an electron-withdrawing substituent was used, such as fluorine, the reaction yield was reduced to some extent (3h, 77% yield). To our delight, when the benzene ring had multiple substituents, such as naphthyl, dimethoxy, or even trimethoxy, the reaction could still occur well, and good to excellent reaction yields could be obtained (3i-3k, 80-96% yields). We also investigated the reactivity of para-substituents on the benzene ring; both electron-donating substituents (methyl, isopropyl, tert-butyl, methoxy, and benzyloxy) and electron-withdrawing substituents (fluorine, chlorine, bromine) had little effect on the reaction (3l-3s, 91-96% yields). Finally, we investigated thiophene, and although the target product could be obtained only with a moderate yield, the reaction still proved that the catalyst had good functional group compatibility (3t, 58% yield).
Considering that this could be a heterogeneous catalyst in this reaction, it is necessary to identify the reusability and stability of the catalyst. It was demonstrated that when the reaction was completed, the CP@Cu catalyst could be easily recycled with a simple operation. The catalytic activity stayed almost the same after experimenting with recycling the catalyst six times, and the yield was still up to 96% even in the sixth experiment, so the catalyst has the advantage of being recyclable (Figure 2).

The Cu nanoparticles supported on a chitosan-PVA composite are shown in Figure 3. As observed, the dark spherical particles in the red circles are the CuNPs, which are uniformly dispersed in the CP matrix. The particle sizes ranged from 2 to 4 nm, showing that the Cu nanoparticles were uniformly distributed on the chitosan-PVA composite. In addition, no aggregation of CuNPs was noticed, which confirmed that the CP matrix is a good stabilizing agent for the synthesis of CuNPs. The good dispersion of CuNPs into the CP matrix enhanced their performance during the catalytic process [39].

The full-scan XPS spectrum showed that the major elements of the Cu nanoparticles supported on the chitosan-PVA composite were O, C, and N (Figure 4a). This is consistent with the chemical structures of chitosan and PVA, which are rich in the functional groups of -NH-C=O, -NH2, and -OH. The presence of a Cu 2p peak in the Cu-loaded PVA-CS nanofiber membrane proved the adsorption of Cu(II) onto the adsorbent. The spectrum of the adsorbent after copper adsorption showed peaks at 932.67, 933.85, and 934.74 eV, which corresponded to Cu 2p (Figure 4b). The C 1s spectra of the Cu nanoparticles supported on the chitosan-PVA composite showed peaks at 284.48, 285.88, and 287.81 eV after the adsorption of Cu(II), indicating the involvement of the functional groups in the adsorption of Cu(II) onto the adsorbent (Figure 4c) [40,41].
Analytical Methods
Nuclear magnetic resonance (NMR) spectra were recorded on a Bruker Avance III 400 MHz spectrometer (Karlsruhe, Germany) operating at 400 MHz for 1H and 100 MHz for 13C. An X-ray photoelectron spectroscopy (XPS) analysis was performed with a Thermo Fisher ESCALAB250Xi spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) by using monochromatized Al-Kα radiation at a detection angle of 30°. The photon energy was 1486.6 eV. A pass energy of 30 eV was used for high-resolution scans in a valence band analysis. The test area size was 500 µm. The binding energy of all spectra was determined by using binding energy correction with respect to adventitious carbon (C 1s, 284.6 or 284.8 eV). The spectra were collected over a range of 0-1486.6 eV, and the high-resolution spectra of the C 1s and Cu 2p regions were provided. The Shirley background and Gaussian/Lorentzian functions were used to fit the peaks.
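As a worked illustration of the peak-fitting step just described (a background plus Gaussian/Lorentzian line shapes), the short Python sketch below fits one pseudo-Voigt component to synthetic Cu 2p data. It is only a schematic: the energy grid, peak parameters, noise level, and the linear stand-in used here instead of a true iterative Shirley background are assumptions for illustration, not the settings used in this work.

import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(E, amp, center, width, eta):
    # Weighted sum of Gaussian and Lorentzian line shapes (shared FWHM 'width').
    gauss = np.exp(-4 * np.log(2) * ((E - center) / width) ** 2)
    lorentz = 1.0 / (1.0 + 4 * ((E - center) / width) ** 2)
    return amp * (eta * lorentz + (1 - eta) * gauss)

def model(E, a, b, amp, center, width, eta):
    # One peak on top of a linear background (a crude stand-in for a Shirley background).
    return a + b * (E - E.min()) + pseudo_voigt(E, amp, center, width, eta)

# Hypothetical Cu 2p3/2 region: binding energies (eV) and synthetic counts.
E = np.linspace(930.0, 938.0, 200)
rng = np.random.default_rng(0)
counts = model(E, 50.0, 2.0, 1200.0, 933.8, 1.6, 0.5) + rng.normal(0.0, 20.0, E.size)

# Initial guesses: background offset/slope, amplitude, center, FWHM, Gaussian/Lorentzian mix.
p0 = [40.0, 1.0, 1000.0, 933.5, 1.5, 0.5]
popt, _ = curve_fit(model, E, counts, p0=p0)
print("fitted peak center (eV):", round(popt[3], 2))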
Transmission electron microscopy (TEM) was used to observe the morphology on a Jeol 2100f instrument (JEOL, Tokyo, Japan). Samples were prepared for TEM analysis by placing a drop of the particle suspension on a copper grid and quickly wicking away the solution with filter paper.
General Procedure for the Preparation of CP@Cu NPs
According to a report in the literature [39], 200 mg of chitosan powder was dissolved in 10 mL of acetic acid solution (2%, v/v) and stirred at room temperature for 5 h. At the same time, 400 mg of poly(vinyl alcohol) was dissolved in 10 mL of water and stirred at 80 °C for 12 h. The two solutions obtained were mixed and stirred at room temperature for another 0.5 h; then, 32 µL of glutaraldehyde solution (25%, w/w) was added, and stirring was continued for 5 min. In order to form the chitosan/poly(vinyl alcohol) composite film, the mixed solution described above was transferred to a Petri dish and dried at 40 °C for 12 h. After completion of this procedure, 0.1 mol/L NaOH solution was added to the above composite film and allowed to soak for 5 min; the film was then washed with water until neutral and dried for 12 h at 40 °C. After immersing the composite film in 0.2 mol/L CuCl2 solution for 2.5 h, the excess Cu2+ and Cl− were removed by washing with water, and then drying took place at 40 °C for 12 h. Finally, the chitosan/poly(vinyl alcohol)-composite-film-supported copper nanoparticles (CP@Cu NPs) were obtained by reduction with 0.05 mol/L NaBH4 solution, and they were then submitted for ICP analysis. The copper loading of the CP@Cu NPs was found to be 1.78 mmol/g.
Recycling and Reuse of CP@Cu NPs
To demonstrate the recyclability of the CP@Cu NPs, the boron addition reaction was repeated six times with the same composite film. The initial amount of catalyst was 5 mg (6 mol% Cu loading). Reactions were carried out under standard conditions. After the completion of the reaction, the catalyst was filtered off, washed with acetone, and then dried at 50 °C before the next run.
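Because each recycling run starts from a fixed mass of the film, the relation between catalyst mass, copper loading, and mol% can be checked with a few lines of arithmetic. The Python snippet below only back-calculates the reaction scale implied by the stated figures (5 mg of CP@Cu, 1.78 mmol/g Cu loading, 6 mol% Cu); the resulting ~0.15 mmol substrate scale is an inference for illustration, not a value quoted above.

# Back-calculate the implied substrate scale from the recycling conditions.
catalyst_mass_g = 5e-3          # 5 mg of CP@Cu film per run
cu_loading_mmol_per_g = 1.78    # Cu loading from ICP analysis
cu_mol_percent = 6.0            # catalyst loading relative to the substrate

cu_mmol = catalyst_mass_g * cu_loading_mmol_per_g
substrate_mmol = cu_mmol / (cu_mol_percent / 100.0)

print(f"Cu in 5 mg of film: {cu_mmol * 1000:.1f} umol")         # about 8.9 umol
print(f"Implied substrate scale: {substrate_mmol:.3f} mmol")    # about 0.148 mmol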
Conclusions
In conclusion, we have reported the preparation of copper nanoparticles stabilized on a chitosan composite film (CP@Cu) and their application for catalyzing the 1,6-hydroboration reaction of p-quinone methides with B2pin2 as a boron source. The conditions of the whole reaction were very mild, and no additional bases were needed. This newly developed methodology showed very good functional group compatibility and reactivity (20 examples, up to 98% yield). The organoboron products that were formed could be easily and directly oxidized to the corresponding hydroxyl products with good to excellent yields. In addition, the recycling experiments evidenced that this catalyst still showed good reactivity after being recycled six times (>96% yield), which proved that the catalyst had good reusability and stability. | 2022-11-20T16:28:08.341Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "6d50d50b819b6de04e0797bdd13ed27ce8d9b93c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/22/7962/pdf?version=1668735733",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9a1c74c78b804344661a5f6cf54cfa5e3a7dfe00",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259303008 | pes2o/s2orc | v3-fos-license | Breast cancer: miRNAs monitoring chemoresistance and systemic therapy
With a high mortality rate that accounts for millions of cancer-related deaths each year, breast cancer is the second most common malignancy in women. Chemotherapy has significant potential in the prevention and spreading of breast cancer; however, drug resistance often hinders therapy in breast cancer patients. The identification and the use of novel molecular biomarkers, which can predict response to chemotherapy, might lead to tailoring breast cancer treatment. In this context, accumulating research has reported microRNAs (miRNAs) as potential biomarkers for early cancer detection, and are conducive to designing a more specific treatment plan by helping analyze drug resistance and sensitivity in breast cancer treatment. In this review, miRNAs are discussed in two alternative ways-as tumor suppressors to be used in miRNA replacement therapy to reduce oncogenesis and as oncomirs to lessen the translation of the target miRNA. Different miRNAs like miR-638, miR-17, miR-20b, miR-342, miR-484, miR-21, miR-24, miR-27, miR-23 and miR-200 are involved in the regulation of chemoresistance through diverse genetic targets. For instance, tumor-suppressing miRNAs like miR-342, miR-16, miR-214, and miR-128 and tumor-promoting miRNAs like miR101 and miR-106-25 cluster regulate the cell cycle, apoptosis, epithelial to mesenchymal transition and other pathways to impart breast cancer drug resistance. Hence, in this review, we have discussed the significance of miRNA biomarkers that could assist in providing novel therapeutic targets to overcome potential chemotherapy resistance to systemic therapy and further facilitate the design of tailored therapy for enhanced efficacy against breast cancer.
Introduction
The most common malignancy worldwide is breast cancer (BC). According to the status update on the GLOBOCAN 2020 projections of cancer incidence and mortality, BC is the primary cause of cancer death in women and accounts for 1 in 4 cancer diagnoses among females (1). An estimated 684,996 people died from breast cancer in 2020, with low-resource areas accounting for a disproportionate share of these deaths. According to statistics, the prevalence of BC ranges from 2-6% in Western nations to 10-20% in Asian nations (2), indicating that BC is becoming a global health concern, even in nations with sizable young populations like India. Although breast cancer diagnoses have increased recently (3), the prognosis for the disease has significantly improved, with expected 5-year survival rates rising from 40% to approximately 90% over the last 50 years. With few exceptions, en-bloc radical resections in the form of Halstead mastectomy and axillary clearing were formerly thought to be essential for managing BC (4). Recent advancements in clinical trials have been brought about by a greater understanding of the molecular processes associated with the heterogeneity of breast tumors. This understanding has allowed for more conservative surgical procedures and the personalization of treatment plans to maximize sensitization to the tumor while minimizing unneeded morbidity to the patient. This includes the era of cancer diagnostics, which has recognized BC as a diverse disease and routinely subcategorized these cancers into four genetically distinct, integral subgroups -luminal A breast cancer (LABC), luminal B breast cancer (LBBC), human epidermal growth factor receptor-2 enriched breast cancer (HER2+) and triplenegative breast cancer (TNBC). These subgroups have different clinical behavior, prognosis, treatment approaches, and clinical outcomes in the known treatments (5).
For BC patients, chemotherapy is regarded as the most successful and crucial therapeutic approach. Anthracyclins, Tamoxifen, Taxane, 5-FU and trastuzumab are the major chemotherapeutic drugs which are administered to BC patients (6)(7)(8)(9)(10)(11)(12). Doxorubicin, daunorubicin, and epirubicin are some of the anthracycline antibiotics that are frequently used. Anthracyclines can be administered at all BC stages and have demonstrated a crucial function in treating BC (8). Tamoxifen is specifically used for the oestrogen receptor (ER) positive subtype of BC (9). Anthracyclines and taxanes are used as the predominant treatment for TNBC (10). The two most widely used taxanes arepaclitaxel and docetaxel, causing acute hypersensitivity responses (HSRs) in 5% to 10% of patients (11). The human epidermal growth factor receptor 2 (HER-2) is frequently used to categorize BC patients based on its overexpression (also known as HER-2 positive) or lack of expression (also known as HER-2 negative) (13). The likelihood of BC metastases and poor prognosis are strongly correlated with HER-2 overexpression (13). A targeted therapy for HER-2 is trastuzumab (TRS), a humanized monoclonal antibody (12). Despite our efforts to categorize tumors into prognostic categories, tumor behavior and prognosis remains unpredictable, which makes it challenging to develop strategies that would improve disease control while minimizing toxicities to patients. Although, a better understanding of the disease has led to advancements in treatment over the past few decades, but drug resistance remains a challenge and the underlying molecular causes are still largely undefined (14). Drug-resistant cancer cells multiply rapidly and grow more hostile, increasing the likelihood that the tumor may aggressively spread to other organs. Drug resistance can be categorized in two different ways. One is internal resistance or inherited resistance, which occurs when tumors are resistant to treatment even before receiving it, meaning that even early detection and treatment are ineffective. Another form of resistance is received resistance or acquired resistance which occurs following an initial positive response to the therapy (15). Here, the targets and processes associated are a focus of significant research, and the mechanisms of such drug resistance are largely still under investigation (16). For instance, Martz et al. (2014) demonstrated that stimulation of the Notch-1, mitogen-activated protein kinase (RAS-MAPK), phosphoinositide 3-kinase (PI3K) and mammalian target of rapamycin (mTOR), PI3K/AKT and estrogen receptor (ER) signaling pathways resulted in resistance to a variety of drugs (17). It was observed that when Notch-1 is activated, BRAF (V600E) melanoma cells develop acquired resistance to MAPK inhibitors and breast cancer cells also exhibit resistance to tamoxifen (17). Hence, the research group used a Notch-1 inhibitor to restore sensitivity, indicating that Notch-1 knockdown could be a therapeutic strategy in melanomas and drugresistant breast malignancies (17). Likewise, it seems, resistance to chemotherapy is also related to the epidermal growth factor receptor (EGFR) pathway. Genetically modified murine model (GEMM), human cell lines, and a clinically applicable model of KRAS-mutant colorectal cancer (CRC) have all been used to study EGFR and PI3K/mTOR (18). According to the evidence, PI3K/ mTOR and EGFR inhibition boost drug sensitivity and are increasingly used in cancer therapy to combat drug resistance. 
Additionally, the use of systemic drugs as neoadjuvant enables the production of in-vivo data on tumor sensitivity, which has been shown to have predictive importance for disease survival and recurrence. These contemporary aspects of traditional breast cancer management shed light on the potential value of emerging biomarkers in advancing the current treatment model. There are currently few biomarkers that may reliably predict response and resistance to systemic and targeted therapy and attempts to use non-invasive approaches to collect such biomarkers have largely been ineffective (19). This highlights how important it is for researchers to find new biomarkers that can assess patient response to therapy, predict the prognosis of breast cancer patients, and offer clinicians cutting-edge oncogenesis-targeting therapeutic approaches. In the context of BC chemoresistance monitoring and systemic therapy, this study focuses on the function of microRNA (miRNA) as new clinical biomarkers.
miRNAs are small non-coding RNAs ranging from 19 to 25 nucleotides in size and are involved in a variety of biological activities, including cell cycle, apoptosis, survival, and gene control (20). miRNAs primarily bind to the 3′ or 5′ untranslated region (UTR) of their target mRNAs and, depending on the degree of binding, participate in controlling the translation of proteins or destruction of the mRNA itself (21). A single miRNA may target several mRNAs, while many miRNAs may target single mRNA with varying degrees of efficiency (22). Therefore, changes in miRNA expression levels and gene expression silencing by miRNAs have a significant impact on human health and the emergence of diseases such as cancer, diabetes, neurological disease, and cardiovascular disorders (23-25). In the context of cancer, miRNAs can function as both tumor suppressors and oncogenes/oncomirs (26). In contrast to their counterparts in normal tissue, many miRNAs are reported to be up-or down-regulated in cancer tissues. For instance, practically all cancer types have increased miR-21 expression (27). Numerous B-cell malignancies have been shown to express miR-155 at high levels (28). One of the first miRNAs to be found was let-7 which is essentially missing throughout embryonic stages or tissues, although it is highly expressed in the majority of differentiated tissues (29). Similar to the fall in let-7 expression during development, the decline in let-7 expression in malignancies is more pronounced in cancer cells that are more advanced, less differentiated and have mesenchymal features (29). The generation, biology and function of miRNAs in cancer have been discussed in detail in further sections.
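Since target recognition rests largely on complementarity between the miRNA "seed" region (roughly nucleotides 2-8) and sites in the 3' UTR, the idea can be illustrated with a minimal sequence-scanning sketch in Python. The sequences below are placeholders chosen only for illustration, and real target-prediction tools weigh many additional features (site context, conservation, pairing outside the seed), so this is a toy model of the binding rule rather than a prediction method.

def seed_match_sites(mirna, utr, seed_start=1, seed_end=8):
    # Return 0-based positions in a 3' UTR that are perfectly complementary to the
    # miRNA seed (nucleotides 2-8, i.e. indices 1..7 of the mature miRNA, 5'->3').
    complement = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[seed_start:seed_end]
    # A seed-matched site pairs antiparallel with the seed, read 5'->3' on the mRNA.
    site = "".join(complement[nt] for nt in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

# Placeholder sequences in the RNA alphabet (illustrative only).
mirna_example = "UAGCUUAUCAGACUGAUGUUGA"
utr_example = "AAGCUAAUAAGCUAAGCAAUAAGCUAACCAUAAGCUAUAA"
print(seed_match_sites(mirna_example, utr_example))   # positions of candidate seed sites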
This review focuses primarily on the latest findings about the involvement of miRNAs in breast tumor resistance to chemotherapeutic agents and in the development of systemic therapy. Targeting miRNAs-either reducing or raising their expression-seems to be an attractive approach for designing novel, more effective, and personalized treatments for BC. Boosting drug efficacy by examining the downstream targets/ pathways influenced by miRNA targeting and predicting patient response to various therapies can lead to better treatment outcomes for BC patients.
Breast cancer chemotherapy
Breast cancer accounted for 7% of the total number of cancer-related deaths in 2020. To date, many strategies have been adopted to combat this disease. Complete surgical removal has usually enabled efficient breast cancer disease management (30). Regardless of the severity of the disease, William Halstead's radical mastectomy (which required significant removal of all the breast parenchyma, local lymph nodes, and pectoralis major muscle) used to be the cornerstone of breast cancer treatment (4). Cyclophosphamide, methotrexate, and 5-fluorouracil (CMF), the first chemotherapeutic treatment prescribed by Bonadonna et al. in 1976 with the intention of curing breast cancer, significantly decreased breast cancer relapse (94.7% of 207 patients administered chemotherapy vs 76.0% of 179 patients not administered chemotherapy) (31). However, since the 1950s, Bernard Fisher and the National Surgical Adjuvant Breast and Bowel Project (NSABP) have hypothesized that aggressive surgery for breast cancer has only limited scientific and biomolecular justification because it is frequently insufficient to achieve complete disease control (32). Fisher's theory that all breast cancer patients needed systemic therapy (especially with chemotherapy) has, however, been thoroughly refuted.
However, the inherent advantage of treating cancer patients with chemotherapy in the neoadjuvant setting has now been recognized as an oncological practice. Neoadjuvant chemotherapy (NAC) benefits comprised tumor downstaging, expanding patient suitability for breast conservation surgery (BCS), and producing in vivo data related to tumor resistance, which has been shown to hold predictive value for cancer recurrence and overall survival (OS) (33, 34). The Early Breast Cancer Triallist's Collaborative Group (EBCTCG) recently published data from a meta-analysis of randomized clinical trials showing that locoregional recurrence (LRR) rates are higher following neoadjuvant therapy (21.4% vs. 15.9%), despite the fact that disease-free survival (DFS) and overall survival (OS) results are parallel with those treated in the adjuvant setting (35). Additionally, there is growing proof that people who have a pathological complete response (pCR) with NAC have a higher chance of living longer than those who have a latent disease (34, 36). Nevertheless, the clinical usefulness of NAC has been integrated into best-practice guidelines for HER2+ and Triplenegative breast cancer (TNBC). HER2+ malignancies should be treated with NAC and trastuzumab, with the exception of T1a-T1b N0 disease (37). High-risk LN (lymph node)-patients and those with LN positivity should receive anthracycline-and taxane-based chemotherapy along with trastuzumab (37). Further, until TNBC is identified with cancer stages T1a-T1b N0, patients with TNBC should always be provided with an anthracycline and taxane-based treatment (37). American Society of Clinical Oncology (ASCO) also supports the inclusion of platinum-based chemotherapy in TNBC based on the results of a recent meta-analysis because of a higher likelihood to obtain pCR (52.1% versus 37.0%) (38). Pembrolizumab and NAC significantly increased the pCR rates in the KEYNOTE522 trial's preliminary findings (pembrolizumab and NAC: 64.8% versus placebo and NAC: 51.2%) (39).
Need for miRNA-based therapy
Improving pCR rates, simplifying the de-escalation of adjuvant therapy post pCR, and minimising treatment-related toxicities for patients receiving these neoadjuvant medicines are the main directions for translational research efforts in the future (40). Therefore, numerous clinical trials have focused on practices that improve pCR rates (40). To further improve pCR, the idea of manipulating treatment with miRNA-based therapies to boost pCR rates to NAC in breast cancer is now popular, and the same has been discussed in depth in this review.
Chemoresistance in breast cancer
Various molecular aspects are known to be involved in inducing chemoresistance in cancer cells (Figure 1). Some of them have also been summarized below: ♦ Resistant genes:
Twist
Twist is a key player in the invasion and metastasis of tumors because it regulates the epithelial-mesenchymal transition (EMT) (41). It has been reported that NF-κB up-regulation of twist-1 is a factor in chemoresistance (42). Through the downregulation of estrogen receptor alpha (ERα) activity, twist overexpression can also contribute to hormone resistance in breast tumors (43).
♦ Efflux proteins: Another mechanism of resistance to chemotherapy is mediated by ATP-dependent efflux pumps, which decrease the intracellular concentration of drugs. By using the energy from ATP hydrolysis, or the MDR phenomenon, the ATP-dependent efflux transporters in cancer cells can actively transport a range of substrates outside the cell membrane (46).
Figure 1. The major chemoresistance mechanisms of cancer cells.
(68) and glutathione S-transferases (GST) are some of the key enzymes that contribute to MDR in cancer cells. These agents have the potential to enhance the transformation and catabolism of anti-neoplastic drugs, shorten the duration of effective concentrations of chemotherapeutic drugs in tumor cells, decrease drug accumulation in target areas, and ultimately limit drug efficacy (69). For example, GST-π can be employed as a separate index to direct a clinical treatment against BC, as its expression in breast cancer patients was associated with the histological grade, the number of lymphatic metastases, and the age of the patients (70).
MicroRNAs
MicroRNAs are a class of small noncoding RNAs (ncRNAs) which function in the post-transcriptional regulation of gene expression, are powerful regulators of various cellular activities, and have been linked to many diseases (20). RNA polymerase II (Pol II), which produces the primary transcripts (pri-miRNAs), participates in several stages of microRNA synthesis. The pri-miRNAs are split up into precursor miRNAs (pre-miRNAs) by the RNase III Drosha (71). The pre-miRNAs are subsequently moved from the nucleus into the cytoplasm by Exportin-5 (Exp5), where they are further split by Dicer into a mature single-stranded miRNA. The miRNA is induced to either degrade or suppress the translation of mRNA targets when the mature miRNA is removed from the pre-miRNA hairpin and attached to the RNA-induced silencing complex (RISC) (72).
Figure 2. MiRNA expression and function. The RNA polymerase II enzyme in the nucleus transcribes the miRNA-encoding genes, forming the hairpin-shaped "pri-miRNA" molecule. DROSHA and DGCR8 molecules work together to transform the "pri-miRNA" molecule into the "pre-miRNA" precursor molecule. The pre-miRNA then travels to the nuclear export receptor Exportin-5 and reaches the cytoplasm. This precursor is cleaved by the Dicer complex in the cytoplasm to create a double-stranded molecule called the "miRNA duplex". One of these two strands is left active after this process, and it has the ability to suppress or even activate the target downstream genes at the transcriptional or translational level.
Besides miRNAs, other ncRNAs include long noncoding RNAs (lncRNAs), piRNAs, and circular RNAs (circRNAs), which make up just 1% of the whole genome's RNA (73).
The molecular revolution enables us to design strategies to maximize patient outcomes, reduce toxicity, and control disease with less strenuous and more focused therapies. The development of chemotherapeutic response biomarkers is necessary in the future to speed up the removal of tumors and reduce the need for extended and excessive treatments. The utility of detecting miRNA expression (both in tumors and in the blood) is now being discussed in the scientific community. Doing so may help doctors prescribe medicines that are suitably targeted, address early relapse, or even enable miRNA-directed therapies. In general, miRNAs can be either tumor suppressors (tumor suppressor miRNA) or oncomirs, and they can affect the development of cancers in either way. Numerous miRNAs, along with their downstream targets, have been shown to be differentially expressed in breast cancer patients when compared to healthy controls (either circulating or in tumors) ( Table 1).
miRNAs in BC subtyping
The three main subtypes of breast cancer are: (1) ER- and PR-positive; (2) HER-2 positive; and (3) triple negative (ER, PR, and HER-2 negative). However, this subtyping is expanded to a more precise one using the microarray approach for identifying miRNA profiles, including: (1) Luminal A (ER-positive with low grade); (2) Luminal B (ER-positive with high grade); (3) HER-2 positive; and (4) Basal-like (almost equivalent to the triple-negative condition).
There are several miRNAs that are used for the breast cancer subgrouping as shown in Table 2 (87). Currently, it is possible to use miRNA profiling for the subgrouping of breast tumors. This capability can therefore aid in the selection of cancer patients who will get adjuvant therapy. In addition, miRNA profiling can be successful in identifying new therapeutic targets by revealing the genetic underpinnings of distinct subgroups of breast cancer.
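In computational terms, using miRNA profiles for subgrouping is a supervised classification problem on expression vectors. The Python sketch below shows one minimal way such a classifier could be set up; the expression matrix is randomly generated, the labels and panel size are placeholders, and no claim is made about which miRNAs or which algorithm the profiling studies cited here actually used.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Hypothetical data: 120 tumours x 20 miRNA expression values (e.g. log2-normalised),
# each labelled with one of the four intrinsic subtypes.
X = rng.normal(size=(120, 20))
subtype = np.array(["LumA", "LumB", "HER2", "Basal"] * 30)

# A small random forest as an illustrative multi-class model; nearest-centroid or
# logistic regression could be swapped in without changing the overall workflow.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, subtype, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))  # near chance on random data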
Role of miRNA in BC drug resistance
It is well known that miRNAs can regulate drug resistance to traditional chemotherapeutic medicines, endocrine hormone treatments, and radiotherapies in cancer cells (90)(91)(92)(93). It has been shown that miRNA expression can influence a breast cancer patient's ability to respond to or resist systemic treatment, as shown in Table 3. The miRNAs, along with other ncRNAs, significantly reverse BC cell drug resistance by suppressing signaling pathways such as the Wnt/β-catenin, Hippo, AKT, TGF-β, or mTOR signaling pathways. A summary of molecules which can participate in target diversity in miRNA interactions leading to drug resistance using different chemotherapeutic drugs is mentioned in Table 3. According to various reports, there are scientific explanations and processes for chemotherapeutic resistance, including altered drug-target interactions, lower active drug doses, and increased tumor tissue survival (143). Numerous miRNA expression profiles have been linked to the prediction of response to chemotherapy (150). The role of miRNAs in BC chemoresistance has been attributed to some of the following molecular mechanisms (Figure 3):
♦ miRNAs and cell cycle: Cell cycle deregulation is an established hallmark of cancer, and it has been linked to both drug resistance and poor prognosis when it is aberrantly activated. Various miRNAs have been reported to target genes linked to cell cycle regulation, resulting in either drug sensitivity or resistance. For example, miR-93, involved in G1/S phase arrest, was reported to be downregulated in paclitaxel-resistant BC samples compared to responder patients (Figure 3A) (151). Direct targets of this miRNA (142) were discovered to include CCND1 and the E2F transcription factor 1 (E2F1), which upon downregulation resulted in cell cycle arrest in the G1 phase and increased apoptosis via inhibiting AKT phosphorylation (p-AKT) and BCL-2 expression and increasing the expression levels of BCL-2-associated X, apoptosis regulator (BAX), which could increase paclitaxel sensitivity.
♦ miRNAs and DNA repair machinery: As mentioned above, most chemotherapy drugs used today to treat breast cancer cause either direct or indirect DNA damage. To counteract DNA damage, however, cancer stem cells (CSCs) activate DNA damage response (DDR) pathways, explaining why chemotherapy that destroys DNA could result in drug resistance (Figure 3B). One such DDR pathway involves BRCA1, which is engaged in different cellular processes that maintain genomic stability like DNA damage repair, DNA damage-induced cell cycle checkpoint activation, chromatin remodelling, protein ubiquitination, transcriptional control, and cell death [43]. Drug sensitivity is thus impacted by its miRNA-influenced control; for instance, miR-182 inhibits BRCA1 expression to induce drug sensitization. Furthermore, it has been shown that overexpressing miR-182 makes BC cells more susceptible to PARPi (poly(ADP-ribose) polymerase 1 inhibitors). In contrast, miR-182 suppression raises BRCA1 levels and results in PARPi resistance (152).
♦ miRNAs and cell death: The interests of investigators are growing in drug-miRNA combination anticancer therapy since miRNAs can influence cell death ( Figure 3C). Examples include miR-125b, which confers paclitaxel resistance by inhibiting the expression of BAK1 (BAX and/or BCL-2 Antagonist/Killer 1), which causes the release of cytochrome C from mitochondria to the cytoplasm, where it binds Apoptotic peptidase Activating Factor 1 (APAF-1) and triggers caspase activation (76). Similar findings were also made from miR-149-5p, whose overexpression was shown to boost BAX expression (153), and from miR-663b, which imparts tamoxifen resistance by indirectly upregulating BAX (154).
♦ miRNAs, CSCs and epithelial to mesenchymal transition (EMT): Breast cancer stem cells (BCSCs) are a small population of cells with a high ability for tumorigenesis and are involved in therapy resistance (155). The modulation of the BCSCs' phenotype is mediated by several molecular mechanisms, the most significant of which is EMT. This process takes place as cancer develops, and it involves a decrease in the expression of molecules associated with epithelial growth, such as E-cadherin, and a rise in molecules associated with mesenchymal development, such as N-cadherin, vimentin (VIM), and fibronectin (FN1) (156). Thus, the cells become more capable of invasion and migration (155) and can nest in various tissues, where they can multiply and create new tumors through a process called metastasis (155). In this context, miRNAs play a significant role in controlling stemness and EMT by targeting a few genes implicated in these two pathways (Figure 3D). Among those engaged in the control of EMT, the miR-200 family has received the greatest research attention. Five different miRNAs make up this family: miR-141, miR-200a, miR-200b, miR-200c, and miR-429 (157), which can inhibit the expression of ZEB1 and ZEB2 (zinc finger E-box-binding homeobox genes) (157). As a result, it has been demonstrated that overexpressing miR-200 in many cancer cell lines can reverse EMT (158). Another factor contributing to stemness in BC is the Wnt/β-catenin signaling pathway. It has been shown that several miRNAs, including miR-105 and miR-93-3p, regulate this pathway. The Wnt/β-catenin signaling pathway suppressor Secreted Frizzled-Related Protein 1 (SFRP1) is the target of those miRNAs. This led Li et al. to show that those miRNAs encourage cisplatin resistance (159).
Four circulating miRNA patterns linked to pCR were recently identified using profiling of circulating miRNA (ct miRNA detected in plasma) to categorize NAC responders (from non-responders) in Her2+ patients (160). These results demonstrate the potential of miRNA signatures as prognostic and predictive biomarkers that could individualize breast cancer treatments and enhance patient sampling techniques for current therapies, including traditional cytotoxic chemotherapies. The following is a discussion of a few of them: ♦ miR-638 -miR-638 was shown to be downregulated in cases with BC chemoresistance in a microarray analysis (161). A minimal patient-derived xenograft (MiniPDXTM) was also developed by the researchers to assess the chemosensitivity of various drugs. The results of this study demonstrated that in patients, who received 5-FU, miR-638 levels were relatively low in the 5-FU-resistant group compared to the 5-FU-sensitive group. So, according to the MiniPDX™ model, MDA-MB-231 BC cells overexpressing miR-638 were more susceptible to 5-FU treatment in vivo.
♦ miR-17/20 - The serine-threonine kinase Akt1 has been linked to the regulation of cellular homeostasis, proliferation, and growth, as well as hyperactivation in human malignancies (162).
miRNAs in neoadjuvant chemotherapies: predicting response
As already said, breast oncology research has advanced recently to realize that treating patients with chemotherapy in the neoadjuvant setting is both rational and beneficial (173,174). Although conventional clinicopathological traits have been shown to correlate with response to NAC (33), it is still difficult for oncologists to identify patients who are likely to experience such reactions since success rates are frequently unpredictable. The latest research has linked miRNA expression profiles with breast cancer patients' responses to NAC therapy. Table 4 shows systematic trials examining the function of miRNAs in determining how patients would respond to neoadjuvant therapy and lists the miRNAs that are important in this setting (160,176,178,179,183,184,(186)(187)(188)(189)(190). Using miRNA expression profiles to assess response to adjuvant chemotherapy is substantially more difficult. It is quite challenging to measure whether medication improved oncological outcomes for patients who were most likely to succumb to recurrence, estimate the timing of miRNA sampling, and analyze treatment response rates in a crude way. Therefore, it is not surprising that most research evaluates miRNA expression patterns using metrics that indicate response to NAC rather than adjuvant chemotherapy (e.g., RECIST, Miller-Payne grade, Sataloff score, etc.).
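Analytically, predicting response to NAC from pre-treatment miRNA levels reduces to a binary classification task, usually summarised with an ROC curve and AUC. The Python sketch below runs that workflow on simulated values only; the panel size, effect size, and number of patients are assumptions made for illustration and do not reproduce any study listed in Table 4.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_patients = 200  # hypothetical NAC-treated cohort

# Simulated circulating miRNA panel (4 markers); pCR probability is tied to the
# first marker only so the toy data set contains a detectable signal.
mirna_levels = rng.normal(size=(n_patients, 4))
logit = 1.5 * mirna_levels[:, 0] - 0.5
pcr = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # 1 = pathological complete response

model = LogisticRegression()
prob = cross_val_predict(model, mirna_levels, pcr, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(pcr, prob), 2))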
MicroRNAs for therapeutic use in breast cancer
The use of miRNAs for the development of novel treatment approaches has been made easier by the current molecular technology. These entail the administration of carefully chosen miRNAs into the tumor microenvironment for therapeutic purposes or to improve the efficacy of currently available therapeutic modalities employed in standard clinical practice, such as systemic chemotherapy (143,191). miRNAs can act as tumor suppressors or oncomirs, so there are two possible methods for using them as therapeutics (1): miRNA replacement therapy, which involves inducing and overexpressing specific miRNA to reduce oncogenesis or increasing sensitivity to systemic treatment, or (2) oncomir inhibition, which involves lowering targeted miRNA expression characteristics (i.e., miRNA silencing) by incorporating inhibitory miRNA to lessen the translation of the target miRNA ( Figure 4).
♦ miRNA Replacement Therapy -By inhibiting oncogenes and the genes that regulate cell proliferation and death, tumor suppressor miRNAs can prevent the development of cancer (192). MiRNA replacement treatment includes reintroducing tumor-suppressing miRNA (or mimics) into the tumor microenvironment in order to inhibit tumor growth and restrain the spread of malignancy (193). They might be delivered into the cytoplasm of cancer cells through a variety of transporters, such as chemicals, electroporation, and modelling of the endogenous miRNA (192). (198).
9.1 miRNA delivery strategies used for cancer therapy

miRNAs can be introduced therapeutically into cancer cells through a variety of methods. These approaches are typically divided into the two categories of local and systemic delivery, which are thoroughly discussed below and in Figure 5:
Local delivery of miRNAs: Target gene suppression with less toxicity may result from the local delivery of miRNAs as opposed to the systemic delivery of miRNAs. According to Møller et al., 2013, the aforementioned strategy has been examined mainly for primary tumors including melanoma, breast, and cervical cancers (203). Recently, different local delivery techniques, such as the direct injection of miRNA vectors into the tumor site and nanoparticle (NP) formulations with surface modifications, have been devised. For instance, glioblastoma multiforme was treated using the intracranial miRNA delivery approach (203). In a study by Trang et al., 2010, let-7 was introduced into non-small-cell lung cancer using viral vectors, which inhibited the growth of KRAS-dependent tumors (204). The topical distribution approach is an additional technique for treating skin conditions. The target region is more accessible with fewer adverse effects when topical administration is used (205). Moreover, the local delivery system makes use of modified miRNAs. For instance, astrocyte elevated gene-1 (AEG-1) was the target of intratumoral miR-375 mimics in cholesterol-conjugated 2′-O-methyl modified form, which significantly suppressed tumor growth in in vivo hepatoma xenograft models (206).
However, as the local delivery system employs direct injection or local application of miRNAs with or without carriers, it cannot be recommended as a good strategy for treating late-stage metastatic disease. Therefore, developing a systemic delivery strategy is essential to provide efficient miRNA cancer therapy.
Systemic delivery of miRNAs: The systemic miRNA delivery technique represents a significant advancement in the effort to increase the effectiveness of cancer therapy and get over the drawback of miRNA delivery in vivo. Different systemic delivery strategies have been devised up until this point. A few of them are covered below: ♦ Viral delivery of miRNAs -miRNAs can be transmitted by being encoded in several types of vectors, including viral and non-viral vectors. In this regard, viral delivery is an advantageous strategy. One of its benefits include low offtarget rate, resulting from given miRNAs being translated by tumor cells. Lentiviruses, adenoviruses, and adenoassociated viruses (AAVs) are among the viruses that have been identified as delivery vectors for miRNAs. As a result, targeting components were added to the viral capsid to strengthen the affinity between viral vectors and cancer-specific receptors, allowing for better transportation into tumors (209). However, due to the immunological reaction they cause and the difficulty of scaling up the production process in comparison to nonviral delivery systems, there are still some significant challenges to overcome. Additionally, the potential for a virus with replication competence may raise the risk of the pathogenic condition. For instance, some retroviruses can cause the start of a CNS illness as a result of their active reproduction (210).
♦ Non-viral delivery of miRNAs - The use of non-viral vectors is a beneficial strategy for miRNA delivery. In this approach, site-specific delivery, system optimization, or polyethylene glycol (PEG) molecule augmentation could be employed to attach targeting ligands or lengthen circulation times. Additionally, nanocarriers are produced in a secure and straightforward manner, and they are distinguished by their affordability, minimal immunogenicity, and adaptability. Non-viral delivery vectors can be divided into three primary categories: inorganic materials, lipid-based carriers, and polymeric carriers (211, 212).
However, non-viral-based approaches to miRNA delivery have their own shortcomings such as lower loading efficiency, lack of cargo protection, lower endosomal escape, nonspecific interaction with target cells and nucleic acids, etc (193).
Discussion
Considering that drug resistance continues to be a major obstacle in the clinical context, causing relapse and metastatic spread in many cancer types, novel treatment approaches are of the utmost importance. The discovery of miRNAs has provided a novel perspective on the molecular processes behind cancer, increasing the possibility of creating novel and more potent therapeutic approaches. This review is centered on new findings pertaining to the significance of miRNAs in breast cancer chemoresistance. miRNAs regulate numerous signaling pathways and regulatory networks; therefore, even small changes in miRNA expression can have a big impact on the development and progression of the disease. Targeting miRNAs-either reducing or enhancing their expression-seems promising to develop novel, more effective, and customized treatments, boost therapeutic efficacy, and predict patient response to various treatments. However, to fully explain all the miRNAs that are altered in tumors based on profiling data would be beyond the scope of this review. Numerous organizations are exploring the use of microRNAs as potential therapeutics. In vivo and translational investigations are currently the focus of increased research. Evidence exists that points to miRNAs as possible therapeutic agents, particularly when used in conjunction with anti-cancer chemotherapeutics. This could take the form of mimics that support miRNA function and expression or antagonists that block miRNA expression. By affecting the expression of endogenous microRNAs in cancer cells, miRNA mimics or anti-miRNAs can potentially change chemotherapy's efficacy (213).
Figure 5. Two types of microRNA delivery techniques employed in cancer therapy.
Another phase 1 clinical trial including individuals with liver cancer or metastatic cancer with liver problems is MRX34 (a mimic of the tumor suppressor miR-34). Healthy volunteers and patients with advanced or metastatic liver cancer (hepatocellular carcinoma) are being tested for the safety and effectiveness of MRX34 in this study (214). Future possibilities for these novel medicines are promising given the encouraging preliminary findings from both trials.
Challenges in the field of miRNA therapy - As mentioned above, miRNAs can be delivered by either local or systemic approaches. Local delivery might not be a suitable strategy for advanced cancer. However, miRNA cancer therapy works well with systemic delivery. Figure 6 summarizes the various constraints to miRNA delivery. For instance, poor miRNA penetration is caused by the leaky nature of aberrant tumor vasculature (215). The rapid cleavage of naked miRNAs by serum nucleases of the RNase A type poses another challenge (216). Additionally, there is rapid renal clearance, notably for naked miRNA (217). When utilizing big NPs (>100 nm), reticuloendothelial system (RES) clearance would rise in the liver, spleen, lung, and bone marrow, leading to nonspecific absorption by innate immune cells such as monocytes and macrophages (218).
Additionally, systemic miRNA distribution triggers the innate immune system, as with other nucleic acid types, which can result in undesired toxicities. Immune system activation includes the release of inflammatory cytokines and Type I IFNs via Toll-like receptors (TLRs) (219). Anti-inflammatory miRNA treatment, however, may prevent the activation of inflammatory pathways (220). On the other hand, some miRNAs work through TLRs to trigger neurodegeneration. For instance, Lehmann et al. (2012) demonstrated that the miRNA let-7b can cause neurotoxicity by activating TLR7 signaling in neurons (221). Therefore, a significant issue for systemic miRNA cancer therapy is the incidence of miRNA-related neurotoxicity. Additionally, achieving sufficient miRNA uptake in cancer cells is a problem, and methods to address this issue include increasing endosomal escape and releasing miRNA payloads into the cytoplasm.
Off-target effects brought on by the miRNA mode of action are yet another challenge for miRNA delivery systems. These compounds may have undesirable side effects because they can bind to the 3′-UTRs of a number of genes and decrease their expression (222). One method developed to lessen these adverse effects is the use of multifunctional co-delivery systems (223). Furthermore, it has been demonstrated that under specific circumstances, such as hypoxia, the activity of miRNA-processing enzymes such as RISC is reduced, which lowers the expression of tumor suppressor miRNAs (224). De Carvalho Vicentini et al. (2013) suggest that altering the expression or activity of these enzymes is another route by which miRNA activity can be suppressed (225).
Conclusion
The discovery, development, and enhancement of miRNAs as potential medicines for the treatment of breast cancer patients have received significant funding, yet this branch of translational research is still in its infancy. Numerous attempts have been made to tailor cancer therapies using miRNA, but little progress has been made in improving clinico-oncological outcomes through miRNA targeting. miRNA therapies now face several developmental obstacles. This study is constrained by the fact that most of the research done thus far provides information from in vitro studies, with very few studies coming from sources other than animal models or breast cancer cell lines. Clinical trials assessing clinical effectiveness, risk profiles, and benefit are necessary, in addition to the generally accepted scientific method, to support the initial findings of these recent investigations. An in-depth discussion of how clinical trial research has transformed BC patient care over the last four decades is provided in the current review. This research has produced novel, individualized therapeutic approaches, minimally invasive surgical techniques for the breast and axilla, and improved clinico-oncological results for patients who might otherwise have died from their disease in earlier times. The personalization of BC patient care appears to be closer than ever thanks to ongoing trials evaluating cutting-edge targeted therapies such as immune checkpoint modulation (39, 226) and the use of poly(adenosine diphosphate-ribose) polymerase (PARP) inhibitors in the treatment of early-stage breast cancer in BRCA mutation carriers (227).
Hence, before we can use miRNAs in the therapeutic setting, numerous obstacles remain to be overcome. The delivery method is the key impediment. We might be able to get over this obstacle with the use of chemical alterations, viral vectors, or nanoparticles. Despite these delivery issues, it is possible that miRNAs will play a significant role in cancer therapy, including BC, in the future. A novel approach to treating breast cancer that combines miRNA therapies with conventional chemotherapeutic techniques and drug targets is possible, but further study is needed before this promising paradigm can be implemented in the clinic. Thus, this review emphasizes how important it is to prioritize clinical trials and therapeutic interventions to advance the precision oncology movement's goal of "curing" breast cancer. | 2023-07-02T05:09:15.228Z | 2023-06-16T00:00:00.000 | {
"year": 2023,
"sha1": "b7d1a1def3233578bf3ca8bbe3a2c22ad5ca0893",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fonc.2023.1155254",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7d1a1def3233578bf3ca8bbe3a2c22ad5ca0893",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9386431 | pes2o/s2orc | v3-fos-license | Role of immune recognition in latent allotype induction and clearance. Evidence for an allotypic network.
The role of allotype recognition in the regulation of the expression of latent allotypes has been investigated in two series of experiments. The first experiments were designed to investigate the apparent instability of latent allotypes in circulation. In these experiments, clearance rates of IgG preparations bearing allotypes matched and unmatched to the recipient were examined. In all cases, iodinated IgG matched in allotype to the recipient was cleared at a normal rate from the serum. However, in several cases, iodinated IgG of an unmatched allotype was cleared at a rate and in a manner suggesting prior sensitization of the recipient to IgG of that allotype. Such apparent sensitization correlated with the presence of the foreign allotypes as a latent allotype in several bleedings taken both before and after the clearance experiment. In the second series of experiments, designed to test the ability of antiallotype antibodies to affect the expression of latent allotypes, five rabbits were immunized first with purified antiallotype antibodies and then after 3-4 mo, with streptococcal vaccine. Examination of the antistreptococcal antibodies for latent allotype revealed, in all cases, that the allotype against which the antiallotype antibodies were directed was present in levels 8- to 20- fold greater than were observed before the antiallotype injections. These results indicate that recognition of allotypic determinants is an important element in the control of latent allotype expression and suggest the existence of a regulatory network involving antiallotype antibodies.
were reduced from three per week to one per week because of adverse reactions to the immunization.
Determination of Allotypes. Group a and b allotypes of rabbits used in this study were determined by an inhibition of binding radioimmunoassay (RIA) using published methodology (13). Latent allotypes were determined by both RIA and hemagglutination as previously described (7).
Isolation and Radioiodination of IgG. IgG fractions were obtained from nonimmune serum by ion exchange chromatography on DEAE-cellulose in 0.02 M potassium phosphate buffer, pH 7.0. Samples to be radioiodinated were dialyzed into glycine-NaOH buffer, pH 8.5, and labeled with either 125I or 131I by the ICl method of McFarlane (14), as follows: labeling solution was prepared by equilibrating ICl in dilute HCl with freshly received 125I or 131I (carrier-free; Amersham Corp., Arlington Heights, Ill.). Labeling solution was added dropwise with vortexing to the IgG solution in an amount sufficient to give equimolar amounts of IgG and ICl.
Labeled samples were immediately desalted on columns of Sephadex G-10 equilibrated in 0.01 M phosphate-buffered saline, pH 7.0, and then ultracentrifuged for 2 h at 35,000 rpm using a Beckman 50 Ti rotor (Beckman Instruments, Inc., Fullerton, Calif.) The uppermost two-thirds of the centrifuged preparation was transferred to a new tube and centrifuged again under the same conditions. The uppermost two-thirds of the solution after the second spin was used immediately for the clearance studies. The absence of high molecular weight complexes in the final supernate was verified by gel permeation chromatography. For each radiolabeled preparation, only a single (7S) peak was obtained after chromatography on Sephadex G-200.
Determination of Rates of Clearance for Radioiodinated IgG. Rabbits to be used in the clearance studies were housed in metabolic cages in a room maintained at 70°F and given drinking water containing NaI for 1 wk before and throughout the course of the experiment. All animals included in this report were in good health throughout the experiment as judged by general appearance, appropriate consumption of food and water, constant weight, and normal values for urinary output, hematocrit, and total serum protein concentration. An air-conditioning failure occurred during experiment 1, resulting in fluctuating temperatures of 80-90°F during the experiment. The stress of these conditions produced only transient changes in the physical condition of most of the rabbits, but was reflected in a decreased rate of clearance (Results). Rabbits that were more seriously affected were removed from the study.
Each rabbit received 125I-IgG of one allotype and 131I-IgG of another allotype. One of the IgG preparations was fully matched to the allotypes of the recipient whereas the other was unmatched in either the group a or group b allotype, but not in both. In experiment 1, four ala3 heterozygous rabbits received paired al and a3 IgG preparations as a control.
For each experiment, the radiolabeled IgG preparations were mixed, and an amount containing 500 pg of each preparation was injected into each recipient via the left marginal ear vein. A small bleeding (1-2 ml) was obtained 10 min later from the right ear to determine the zero-time level of radioactivity. Subsequent bleedings were obtained every 48 h for 2 wk. Serum was isolated from the blood samples, and 125I and 131I were determined in a Beckman Gamma 9000 spectrometer (Beckman Instruments, Inc.). Appropriate background, 125I-131I overlap, and isotope half-life corrections were made to obtain normalized values for 125I and 131I from which rates of clearance could be determined. All clearance curves showed an initial rapid fall in radioactivity, caused largely by equilibration of the IgG between intravascular and extravascular fluid (15). From day 6 (day 4 in many rabbits) through day 14, clearance data could be fit to a first-order exponential decay curve with a high correlation coefficient. The half-life of this decay, which is the value reported in Table I, was determined by least squares analysis of the linear portion of the plot (Fig. 1) of log percent radioiodine remaining vs. time.
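As an illustration of the half-life calculation described above, the short sketch below fits a first-order decay line to log percent-remaining values. The time points and percentages are invented placeholders, not data from Table I, and the paired-label, background, and isotope-overlap corrections are assumed to have been applied already.

```python
import numpy as np

# Illustrative (made-up) clearance data for the linear portion of the curve:
# days post-injection and percent of zero-time radioactivity remaining.
days = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
percent_remaining = np.array([62.0, 50.5, 41.0, 33.5, 27.0, 22.0])

# First-order decay: ln(percent) = ln(percent_0) - k * t,
# so a straight line is fitted to ln(percent) versus time.
slope, intercept = np.polyfit(days, np.log(percent_remaining), 1)
k = -slope                     # elimination rate constant (per day)
half_life = np.log(2) / k      # half-life of the decay, in days

# Correlation coefficient of the linear portion, as a goodness-of-fit check.
r = np.corrcoef(days, np.log(percent_remaining))[0, 1]
print(f"k = {k:.3f} per day, half-life = {half_life:.1f} d, r = {r:.3f}")
```

With the placeholder values above, the fitted half-life comes out at roughly 6-7 d, in the range reported below for the self allotypes in experiments 2 and 3.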
Immunization with Antiallotype Antibodies. Antiallotype antibodies were isolated by immunoadsorbent chromatography on IgG columns of the appropriate allotype. Bound antibodies were eluted with 3 M NH4SCN in 0.01 M phosphate-buffered saline, pH 7.0, and desalted on columns of Sephadex G-10. Isolated antibodies were lightly cross-linked with glutaraldehyde, and 1 mg of cross-linked antibody was injected subscapularly each month for 3 consecutive mo.
Streptococcal Immunization after Immunization with Antiallotype Antibodies. Rabbits were rested 3 mo after the last antiallotype antibody injection, then immunized with group C streptococcal vaccine. Rabbits 4805, 4806, and 4809 received normal injections (three per week) whereas rabbits 5455 and 5456 received 1 ml of vaccine once a week for 3 wk.
Antistreptococcal antibodies were isolated from immune sera on an immunoadsorbent containing p-aminophenyl-N-acetylgalactosamine coupled to Sepharose 2B. Bound antibodies were eluted with 0.5% N-acetylgalactosamine in 2 M NaCl, desalted, and passed through a column of IgG containing all allotypes except the one to be determined in the latent allotype assay.
Results
The rate of clearance of IgG was determined using freshly isolated IgG preparations lightly iodinated by the ICl method and freed from aggregated material by high-speed centrifugation. Preliminary experiments demonstrated considerable variation in rates of clearance among rabbits tested, so allotype-related clearance was examined by the paired label technique, using 125I-labeled IgG of one allotype and 131I-labeled IgG of another allotype. The three types of results obtained are shown in Fig. 1. When rabbits heterozygous for the allotypes in question were examined (panel A), the clearance rates of the two IgG preparations were invariably identical, within the limits of experimental error. Similarly, most homozygotes examined showed identical rates of clearance (panel B) for IgG of the self allotype and IgG of the foreign allotype. However, certain homozygous rabbits gave results that deviated from this pattern of equal clearance rates (panel C). In all cases of differential clearance, the foreign allotype was cleared more rapidly than the self allotype. Beyond 4 d, clearance curves (for normal or accelerated clearance) always followed first-order kinetics, with no indication of non-first-order clearance processes such as primary sensitization.
A summary of data from the three clearance experiments performed is shown in Table I. Mean half-lives of clearance, calculated on the basis of the self allotypes only, were 9.1, 6.5, and 6.9 d, for the three experiments respectively. The high value for experiment 1 was obtained under unfavorable environmental conditions (Materials and Methods). 3 of 10 rabbits in experiment 1 and 3 of 9 rabbits in experiment 2 cleared IgG of the foreign allotype at a rate significantly higher than that of the paired control sample. In these experiments the variable marker was the group a allotype. In experiment 3, two rabbits (5 and 23) from experiment 1 that cleared the foreign group a allotype rapidly, were retested with new preparations of the same allotypes. These rabbits again cleared the foreign allotype more rapidly. Four rabbits that had previously cleared the foreign group a allotype rapidly (5432, 5436, 5441, 5453) and one that did not (5454) were also tested in experiment 3 for clearance of a self and foreign group b allotype. Three of the five showed abnormally rapid clearance of the foreign group b allotype.
All rabbits that showed evidence of allotype-associated rapid clearance were tested in order to determine whether the rapidly cleared foreign allotype were present as a latent allotype in serum. Three rabbits that did not show allotype-related rapid clearance were tested as controls. The results of these assays are shown in Table II. Four random bleedings, taken within a 6-mo period before the clearance experiments, and six weekly bleedings, taken starting 1 mo after the end of the clearance experiments, were tested. A high percentage of bleedings (24/36 for preclearance bleedings and 36/54 for postclearance bleedings) from rabbits showing rapid clearance of a foreign allotype had readily measurable levels of that (latent) allotype in the serum. From three control rabbits similarly monitored, only one of 24 bleedings contained any latent allotype.
Latent Allotype Induction by Immunization with Antiallotype Antibodies. In view of the striking correlation between latent allotype expression and abnormal clearance of the allotype, the effect of immunization with antiallotype antibodies on latent allotype expression was investigated in five rabbits. Three received anti-a2 antibody and two received anti-b6 antibody. The antiallotype antibodies were allotype matched to the recipients. Each rabbit was immunized three times at monthly intervals with 1-mg doses of affinity-purified antiallotype antibody. After a 3-mo rest period, the rabbits were immunized with streptococcal vaccine. Latent allotype levels were measured in bleedings taken before and after immunization as well as in affinity-purified antistreptococcal antibody fractions from immune bleedings. The allotype against which the immunizing antiallotype antibodies were directed was present in high levels in the immunized animals, particularly in the antistreptococcal antibody fractions (Tables III and IV). Because anti-antiallotype antibodies would mimic allotype in the latent allotype assay, the contribution of such antibodies to measured latent allotype levels was determined by preadsorption of serum with insolubilized antiallotype reagents directed against the nominal allotypes. These would remove anti-antiallotype antibodies and other Ig with nominal allotypes without affecting latent allotypes. In this way, it could be determined that anti-antiallotype antibodies were present neither in bleedings taken before immunization with antiallotype antibodies nor in purified antistreptococcal antibody fractions, but were present in variable amounts in normal bleedings taken after antiallotype immunization.
Discussion
The present report documents a phenomenon of allotype-associated rapid clearance of radiolabeled IgG from serum. It was shown that although the majority of rabbits tested cleared lightly iodinated IgG bearing a foreign allotype at the same rate as they clear labeled IgG bearing a self allotype, certain rabbits showed enhanced clearance of the foreign allotype. No cases were observed in which the self allotype was cleared at an accelerated rate. In each case in which accelerated clearance was documented, the rapidly cleared allotype appeared as a latent allotype in the majority of serum samples obtained both before and after the clearance experiment.
In an extension of these experiments, the effect on latent allotype expression of immunization with purified antiallotype antibodies directed against the latent allotype was examined. It was shown that such immunization in each case greatly enhanced the serum levels of the (latent) allotype against which the antiallotype antibodies were directed.
Before discussing possible interpretations for these results certain details of the experimental procedures will be reviewed. The validity of the results depends strongly on proper experimental design, particularly in the determination of clearance rates for IgG and in the measurement of latent allotypes. The serologic detection of latent allotypes requires careful attention to details in the preparation of antisera and samples to be tested. The protocol used has been described thoroughly in another publication (7), and all of the experimental considerations have been reviewed recently (6). Clearance experiments are subject to a number of potential artifacts (15), and considerable attention was devoted to eliminating all such problems. First, the use of paired labels eliminates the problem of variability in the animal population because each animal is simultaneously tested for clearance of allotype-matched and allotype-unmatched IgG. Second, the iodinated IgG samples injected were prepared and given in a manner appropriate to eliminate artifacts caused by denaturation and aggregation. Iodinations were done so as to introduce an average of less than one atom of iodine per molecule of IgG. All iodinated IgG preparations were ultracentrifuged twice immediately before administration to eliminate any molecules greater than 7S in size. Sephadex G-200 chromatography verified the absence of macroglobulins or aggregated material. The dose injected was kept low to minimize the possibility of sensitization and to simulate observed levels of latent allotypes. Rabbits were given NaI in the drinking water to prevent iodine scavenging and reutilization.
Under these conditions, the average clearance rate for iodinated IgG with self allotypes was 6.6 d in experiments 2 and 3, with a range of 5.0-8.1 d. This number compares well with published values for IgG clearance in the rabbit, which range from 5.7 to about 8 d (16)(17)(18)(19)(20). The clearance rate observed in experiment 1 (mean 9.1, range 7.7-11.4 d) is clearly unusual and appeared to be caused by temporary, unfavorable environmental conditions in the animal room, as discussed in Materials and Methods. The unusual environmental conditions, however, altered only the absolute clearance rates, without distorting the relative clearance rates of the paired labels.
Analysis of the results of the clearance experiments suggests first that the increased rates of clearance observed in some rabbits is causally related to recognition of foreign allotypic determinants and, second, that the recognition reflects previous sensitization of either the cellular or humoral immune system by autologous Ig bearing latent allotypes. A number of points of evidence support each of these conclusions. That accelerated clearance is allotype mediated is most immediately suggested by the fact that it was observed only for IgG of a foreign allotype, never a self allotype. For every rabbit that cleared an IgG preparation at an unusually rapid rate, there were several rabbits that cleared the identical preparation at a normal rate. This is sufficient proof to rule out degradation, improper radioiodination, or other sorts of denaturation as the cause of the abnormal clearance. Each rabbit cleared the self allotype at a normal rate, so a nonspecific process can be ruled out. The possibility that unknown idiotypic, subclass, or other nonallotypic differences might underlie the accelerated clearance is unlikely based on all that is known about rabbit IgG, but it is almost completely excluded by the repeat determination done with rabbits 5 and 23. These rabbits showed the same allotype-specific clearance in two experiments with two IgG preparations obtained from unrelated donors. It is of further note that those rabbits that rapidly cleared IgG of a given allotype invariably had an unusually high incidence of the cleared allotype as a latent allotype in serum samples obtained both before and after the experiment.
That the allotype-specific clearance observed is caused by a prior autosensitization rather than by primary sensitization during the experiment is established by several independent considerations. First, the relatively low dose, the lack of aggregates, and the route of administration make the test preparation an unlikely immunogen. Second, if sensitization were occurring, it would be expected to occur in a higher percentage of recipients. Third, primary sensitization leads to non-first-order kinetics, with a sharp increase in the rate of clearance at 7-10 d (21). Fourth, the repeat experiment with rabbits 5 and 23 showed no increase in relative clearance of self and foreign allotypes.
The mechanism of the accelerated clearance could not be determined; it may have been mediated by antiallotype antibodies or by allotype-specific cells. Hemagglutination assays for serum antiallotype antibodies were uniformly negative, but this does not exclude their involvement. In a preliminary experiment, an a2a3 rabbit was immunized once with a 1-mg dose of a1 IgG in complete Freund's adjuvant. At a time after the immunization when anti-a1 antibodies were barely detectable, a1 IgG was cleared so rapidly that the half-life could not be accurately measured. Thus, the modest rates of clearance observed in the present study would, if mediated by antiallotype antibodies, be compatible with undetectable levels of these antibodies.
In summary, the rabbits that showed accelerated clearance appear to have been previously sensitized to mount a cellular and/or humoral immune response to the allotype that was rapidly cleared. Because the rabbits were raised in the laboratory and had no previous experimental exposure to protein antigens, nor had any of them been bred, it seems most likely that autologous latent allotypes were the source of sensitization.
If latent allotypes are immunogenic, then serum latent allotypes may, in fact, constitute the "tip of an iceberg" (22), because active allotype-specific suppression may reduce the levels of many genetically possible latent allotypes to undetectable levels. The results obtained after immunization with antiallotype antibodies suggest very strongly that this is true--that rabbits have the genetic information required to synthesize most, if not all, allotypic specificities.
The network concepts of Jerne (23) and experiments on network interactions in the control of idiotype expression in the rabbit provide a framework for interpreting the results obtained after immunizing rabbits with purified, allotype-matched antiallotype antibodies. It is clear that antiallotype antibodies are restricted in heterogeneity (24), and they have recently been shown to be idiotypically restricted as well (25,26). Thus, the immunization would reasonably be expected to lead to antiidiotypic antibodies against the antiallotypic antibodies. The observed sharp increase in latent allotype levels attendant upon the immunization suggests that the antiidiotype response relieves an antiallotype-mediated suppression, which is important in suppressing latent allotype expression. Such a network of interactions has been shown by idiotypic analysis of antibodies of a variety of specificities by Urbain et al. (27) and Yarmush and Kindt (28).
Given the ever increasing mass of data supporting a functional network of idiotypes in the immune system, it is only a modest extension to suggest that network interactions control expression of latent allotypes. Latent allotypes have in common with idiotypes that they are immunologically recognizable self constituents, which are not routinely expressed and thus circumvent the usual mechanisms for induction of tolerance of self. What is surprising, if this speculative interpretation of our data is valid, is that it suggests a wider potential for latent allotype production than has been implied by analysis of allotypes in serum. All five rabbits immunized with antiallotype antibodies produced relatively large amounts of IgG bearing a randomly chosen latent allotype. It is particularly noteworthy that latent b6 appeared in both rabbits injected with anti-b6 antibodies, because latent b6 is the least frequent latent allotype in our colony.
The potentially widespread, if not universal, ability to make latent allotypes is also suggested by the work of McCartney-Francis and Mandy (29), who have reported induction of latent allotypes in vitro by treating spleen cell cultures with lipopolysaccharide and antiallotype serum directed against a nominal allotype. Under these conditions, expression of the nominal allotype was suppressed, and plaque-forming cells (PFC) were induced with the allotype of the antiallotype serum. Thus, treatment of a spleen culture from a b4 rabbit with b5 anti-b4 suppressed b4 PFC but led to the appearance of large numbers of b5 PFC.
Further work will be necessary to establish whether latent allotypes are under cellular or humoral antiallotype control and to what extent that control can be overcome. The immunization procedure reported here offers a means for the routine induction of latent allotypes. Such a method would permit a rapid resolution of questions concerning the distribution, control of expression, and genetic significance of latent allotypes.
Summary
The role of allotype recognition in the regulation of the expression of latent allotypes has been investigated in two series of experiments. The first experiments were designed to investigate the apparent instability of latent allotypes in circulation.
In these experiments, clearance rates of IgG preparations bearing allotypes matched and unmatched to the recipient were examined. In all cases, iodinated IgG matched in allotype to the recipient was cleared at a normal rate from the serum. However, in several cases, iodinated IgG of an unmatched allotype was cleared at a rate and in a manner suggesting prior sensitization of the recipient to IgG of that allotype. Such apparent sensitization correlated with the presence of the foreign allotypes as a latent allotype in several bleedings taken both before and after the clearance experiment.
In the second series of experiments, designed to test the ability of antiallotype antibodies to affect the expression of latent allotypes, five rabbits were immunized first with purified antiallotype antibodies and then after 3-4 mo, with streptococcal vaccine. Examination of the antistreptococcal antibodies for latent allotype revealed, in all cases, that the allotype against which the antiallotype antibodies were directed was present in levels 8-to 20-fold greater than were observed before the antiallotype injections.
These results indicate that recognition of allotypic determinants is an important element in the control of latent allotype expression and suggest the existence of a regulatory network involving antiallotype antibodies. | 2014-10-01T00:00:00.000Z | 1981-01-01T00:00:00.000 | {
"year": 1981,
"sha1": "e19fc40b49a0bc4f070f23ffa66a346236e0b04e",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/153/1/196.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "e19fc40b49a0bc4f070f23ffa66a346236e0b04e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
269408537 | pes2o/s2orc | v3-fos-license | A randomized controlled trial comparison of PTEBL and traditional teaching methods in “Stop the Bleed” training
Background The Stop the Bleed (STB) training program was launched by the White House to minimize hemorrhagic deaths. Few studies focused on STB have been reported outside the United States. This study aimed to evaluate the effectiveness of a problem-, team- and evidence-based learning (PTEBL) approach to teaching, compared to traditional teaching methods currently employed in STB courses in China. Methods This study was a parallel group, unmasked, randomised controlled trial. We included third-year medical students of a five-year training program from the Xiangya School of Medicine, Central South University, who voluntarily participated in the trial. One hundred fifty-three medical students were randomized (1:1) into the PTEBL group (n = 77) or the traditional group (n = 76). Each group was led by a single instructor. The instructor in the PTEBL group had experience in educational reform, whereas the instructor in the traditional group followed a traditional teaching mode. The teaching courses for both student groups had the same duration of four hours. Questionnaires were conducted to assess teaching quality before and after the course. The trial was registered at Central South University (No. 2021JY188). Results In the PTEBL group, students reported mastery of the three fundamental STB skills of Direct Finger Compression (61/77, 79.2%), Packing (72/77, 93.8%), and Tourniquet Placement (71/77, 92.2%), compared with 76.3% (58/76), 89.5% (68/76), and 88.2% (67/76) of students in the traditional group (P > 0.05 for each pairwise comparison). 96.1% (74/77) of students in the PTEBL group felt prepared to help in an emergency, compared with 90.8% (69/76) of students in the traditional group (P > 0.05). 94.8% (73/77) of students reported improved teamwork skills after the PTEBL course, in contrast with 81.6% (62/76) of students in the traditional course (P = 0.011). Furthermore, a positive correlation was observed between improved clinical thinking skills and improved teamwork skills (R = 0.82, 95% CI: 0.74–0.88; P < 0.001). Conclusions Compared with the traditional teaching method, the PTEBL method was superior in teaching teamwork skills and equally effective in teaching hemostasis techniques for the emergency setting. The PTEBL method can be introduced into STB training in China. Supplementary Information The online version contains supplementary material available at 10.1186/s12909-024-05457-4.
Introduction
According to the World Health Organization, mass traumatic injuries result in over five million deaths annually [1]. In the United States, increasing shooting incidents have contributed to this high mortality rate [2]. Due to rapid development in China, more than 700,000 motor vehicle accidents occur annually, leading to approximately 1.3 million injuries and 80,000 to 100,000 deaths [3,4]. Traumatic hemorrhage remains a significant cause of death for all ages regardless of the form of trauma [5]. It is estimated that 57% of deaths could be avoided with proper control of bleeding [6][7][8]. In 2015, the White House launched the Stop the Bleed (STB) training program to minimize preventable deaths from trauma [9,10]. Bleeding control techniques of both medical professionals and the general public have indeed improved through this campaign, with a 63% decrease in deaths from uncontrolled bleeding [11,12]. However, only one STB course, with a small sample size and equipped with Caesar (a trauma patient simulator), has been reported in China [13], and the trauma simulator is relatively expensive and difficult to obtain. It is crucial to introduce STB skills courses utilizing proper teaching methods to general Chinese medical students without expensive equipment.
Medical students are the primary target population of STB training courses. Education in the course traditionally includes demonstrations, lectures, and hands-on teaching sessions [14][15][16]. Although students' skills can be enhanced through these traditional teaching methods, training in teamwork skills is often neglected. It can be difficult for medical students to manage complex clinical scenarios in a real-life trauma setting after completing a course that emphasizes single-skill training and de-emphasizes teamwork-based training. In a traumatic event, responding students are required to make comprehensive decisions in real time, including asking for help, diagnosing injuries, assigning tasks, transferring the patient, implementing clinical interventions, and more [17]. Furthermore, medical training is about acquiring clinical skills and cultivating a state of mind that will allow students to embrace the sacrifice and love for humanity embedded in the Hippocratic oath [18]. These comprehensive abilities should be enhanced through teamwork-based training. To facilitate this learning, a novel problem-, team- and evidence-based learning (PTEBL) approach to teaching may compensate for the weaknesses of traditional teaching methods [19]. We conducted a cluster randomised controlled trial to compare the PTEBL teaching approach (intervention) to a traditional course (control) among medical students of a five-year training program from the Xiangya School of Medicine, Central South University. This research aimed to evaluate the effectiveness of PTEBL, a novel teaching method, via comparison between an experimental PTEBL group and a control traditional teaching group. It was hypothesized that implementing a PTEBL teaching approach in the STB course could contribute to better teamwork skills and noninferior hemorrhage-control skills compared with the traditional teaching method.
Study design
This is a parallel group, unmasked, randomized clinical trial (RCT) using online surveys completed before and after the STB course. STB was launched in response to the increasing number of gunshot injuries in the United States, whereas traffic accident injuries may be more common in China. Traffic accident injuries usually involve complex injury processes and therefore may require more teamwork. We applied the PTEBL teaching approach to fit this new situation. Students in this study were randomized into either an experimental group utilizing the PTEBL teaching approach (n = 77) or a control group utilizing the traditional teaching approach (n = 76), using a 1:1 allocation ratio. Random grouping was achieved through random numbers. Each group was led by a single instructor. The instructor in the PTEBL group had experience in educational reform and has published related articles on the PTEBL teaching method [19] and STB training [13]. This instructor had also completed a "Stop the Bleed" training certificate. The instructor in the traditional group was trained in China and followed a traditional teaching mode. Both teachers were provided with scripts to follow and prepared the lessons before each class. Each instructor also engaged students by asking questions to ensure students were learning the technique correctly. In addition, the teaching courses for both student groups had the same teaching duration of four hours on Jun 14, 2022. A 15-min break was provided for every 45 min of class. All courses were completed in the laboratory of the teaching building of Xiangya School of Medicine. Each instructor taught 16 to 17 students per class (teacher to student ratio: 1:16-17). In the questionnaire [20], students were queried about their mastery of STB skills, their willingness to apply these skills during a traumatic medical emergency, and related items. The questionnaires also included items to assess the students' attitudes and willingness to provide aid to a bleeding patient. The outcomes of the questionnaire were analyzed to assess the effectiveness of the PTEBL teaching approach. (Appendices 1 and 2) The trial was registered at Central South University (No. 2021JY188). No incentives / reimbursements were provided to participants.
Learner attendance, the materials and educational strategies used in the educational intervention, and the duration of the educational intervention were assessed by raters. The raters were two doctoral-level students trained by senior staff.
Participants
All participants were third-year medical students of a five-year training program from the Xiangya School of Medicine, Central South University. STB is a course for all people regardless of medical background; however, considering that our medical students still have gaps in hemostatic skills, we intended to incorporate this advanced skill into our training program for medical students. We released recruitment information on Apr 30, 2022, and included medical students who voluntarily participated in the trial. We excluded students who had already received systematic hemostatic training through other opportunities. One hundred fifty-three participants were randomized into two study groups (Fig. 1). We generated random numbers using IBM SPSS Statistics v26.0 statistical software. The demographic data of participants in age and sex are shown in Table 1. Informed consent was obtained from all participants enrolled in the study.
Study protocol
Participants completed an anonymous pre-training questionnaire about their prior experiences with hemorrhage control techniques before the course and a post-training questionnaire about their confidence in applying these techniques after completion of the course. (See Appendix 1 Pre-Questionnaire and Appendix 2 Post-Questionnaire [12,21].)
Fig. 1 Enrollment, randomization, and protocol of participants
For the traditional teaching method, the instructor demonstrated three fundamental skills for obtaining hemostasis (Direct Finger Compression, Packing, and Tourniquet Placement) while describing each step and explaining techniques in detail according to the standard STB curriculum (two hours in this part). Students then practiced these three skills for stopping bleeding (two hours in this part). At the end of the course, instructors evaluated and scored each participant's skill level (Fig. 1).
For the PTEBL teaching approach implemented in the experimental group, classes included three sessions: 1) problem-based learning (PBL) (1.5 h in this part), 2) team-based learning (TBL) (two hours in this part), and 3) evidence-based learning (EBL) (0.5 h in this part). The PTEBL teaching approach emphasized four steps in the EBM process: a) developing an answerable question, b) finding the best available evidence, c) evaluating the evidence, and d) applying the evidence to a patient care decision.
The first session presented theoretical knowledge and posed questions to students. Students read a scenario of traumatic bleeding adapted from a medical TV series. Instructors then posed four questions about the operation of pre-hospital emergency medical services: (Q1) How can bleeding be stopped effectively? (Q2) When should cardiopulmonary resuscitation be initiated? (Q3) Which actions were performed well? (Q4) Which actions were not performed well? After learning Direct Finger Compression, Packing, and Tourniquet Placement academic knowledge using interactive multimedia, students completed a 3-item knowledge quiz (see Appendix 3 Theoretical Test) to gauge the efficacy of theoretical teaching and the students' comprehension.
In the second session, participants were divided into small groups to practice hands-on bleeding control skills and to provide critiques to their team members. After instruction with tourniquet placement, where each student had an opportunity to perform at least one placement, each team member played different roles in the scenario simulation: the injured victim, the injured victim's friend, the primary rescuer, and the rescuer's colleague. The simulation involved a disabled individual sustaining an active brachial artery injury after a ground level fall. After direct finger compression, packing, and tourniquet placement were implemented by the team, bleeding control was achieved. During the simulation, team members made comprehensive decisions through collaboration, including assigning tasks, transferring patients, and implementing emergency medical services. After this scenario, participants described their experiences acting in different roles. Trained STB instructors observing the scenario evaluated their operation and provided participants with feedback on proper hemorrhage control techniques.
In the last session, instructors contributed to establishing competencies for medical students by adhering to expert consensus standards on emergency tourniquet application derived from current International Medical Association guidelines [22][23][24][25]. The consensus presented an outline of international guidelines and practices in emergency medicine [26,27].
Statistical analysis
Statistical analysis was performed using IBM SPSS Statistics v26.0 statistical software. Continuous variables were expressed as the mean with standard deviation. Categorical variables were expressed as frequencies and compared using a paired χ² test. The Wilcoxon signed-rank test was used for the ordered Likert scale variables. Spearman's correlation coefficient (CC) was applied to analyze the correlation between variables, and the results were presented as a correlation heatmap. The greater the absolute value of the CC, the stronger the correlation: when the absolute value of the CC is between 0.9 and 1, variables are highly correlated, and when it is between 0.7 and 0.9, variables are strongly correlated [21]. A P-value of < 0.05 was considered statistically significant. The "strongly agree" (5) and "agree" (4) components of the Likert scale were recoded as one, and the remaining three components were recoded as zero, converting the variables into dichotomous variables for statistical analysis. To estimate the number of samples, an a priori power analysis was performed using G*Power v3.1 (UCLA Statistical Consulting Group, Los Angeles, CA) based on repeated measures within χ² tests, with a hypothesized effect size of 0.3, an α error of 0.05, and a power of 0.95, which resulted in a required sample size of n = 145; our total sample size of 153 meets this requirement.
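For readers without SPSS or G*Power, the sketch below reproduces the a priori sample-size calculation and shows open-source equivalents of two of the tests named above; the effect size, α, and power are taken from the text, while the small response arrays are placeholders rather than study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import GofChisquarePower

# A priori sample size for a chi-squared test with df = 1 (n_bins = 2),
# effect size w = 0.3, alpha = 0.05, power = 0.95 -> roughly n = 145.
n_required = GofChisquarePower().solve_power(
    effect_size=0.3, alpha=0.05, power=0.95, n_bins=2)
print(f"required sample size ~ {int(np.ceil(n_required))}")

# Wilcoxon signed-rank test on paired pre/post Likert responses (placeholders).
pre = np.array([2, 3, 2, 4, 3, 2, 3, 1])
post = np.array([4, 4, 3, 5, 4, 4, 4, 3])
print(stats.wilcoxon(pre, post))

# Spearman correlation between two dichotomised questionnaire items (placeholders).
item_a = np.array([1, 1, 0, 1, 0, 1, 1, 0])
item_b = np.array([1, 0, 0, 1, 0, 1, 1, 1])
print(stats.spearmanr(item_a, item_b))
```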
Characteristics of participants
A total of 153 participants took part in the study. All participants independently completed the two questionnaires before and after the STB course. There were no statistically significant differences (P > 0.05) in demographic characteristics between the two groups (Table 1).
Mastery level of hemostasis skills
Proficiency in hemostasis skills (compression via direct finger compression, packing, and tourniquet placement) before and after the PTEBL and traditional methods is presented in Table 2. Both the PTEBL method and the traditional method showed statistically significant differences (P < 0.001) in reported proficiency before versus after the course. However, there were no statistically significant differences between the PTEBL method and the traditional method in proficiency in the fundamental hemostatic skills (P = 0.243, 0.645, and 0.280, respectively). No record of any modifications made during the course of the educational intervention was retained.
Rescue attitude
The numbers of participants who felt prepared to help and who would refuse to provide assistance in a trauma event, pre- and post-course, are shown in Table 3. There were statistically significant pre-course versus post-course differences (P < 0.001) in the PTEBL group and in the traditional group, but no statistically significant differences (P > 0.05) between the PTEBL group and the traditional group.
Effectiveness evaluation of the PTEBL method and the traditional method.
The evaluation of the effectiveness of the PTEBL method and the traditional method, based on five indicators, is presented in Fig. 2. 94.8% (73/77) of the PTEBL course participants believed their teamwork skills were improved, compared with 81.6% (62/76) of the traditional course participants, a statistically significant difference (P < 0.05). There were no statistically significant differences between the two methods on the remaining indicators of teaching effectiveness (P > 0.05).
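As a quick check of the headline comparison, the teamwork-improvement proportions (73/77 vs. 62/76) can be compared with a chi-squared test on the 2 × 2 table; without a continuity correction this gives P ≈ 0.011, consistent with the value reported in the abstract. This is a plausible reconstruction of the calculation, not necessarily the exact SPSS procedure used by the authors.

```python
from scipy.stats import chi2_contingency

# Rows: PTEBL group, traditional group; columns: improved, not improved.
table = [[73, 77 - 73],
         [62, 76 - 62]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.3f}")  # P is about 0.011
```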
Performance assessment of the PTEBL method and the traditional method.
There was no statistically significant difference (P > 0.05) between the assessment scores after the PTEBL method (92.9 ± 2.8) and those after the traditional method (92.9 ± 2.1).
Correlation heatmap of relevant independent variables.
The Spearman CC heatmap is shown in Fig. 3. The highest positive correlation among specific skills was observed between pre-course confidence with compression via packing and pre-course confidence with compression via tourniquet placement (R = 0.88; 95% CI: 0.81-0.93; P < 0.001). The second-highest positive correlation was observed between reported improved clinical thinking and reported improved teamwork skills on the post-course questionnaire (R = 0.82; 95% CI: 0.74-0.88; P < 0.001). There were three pairs of variables with positive correlations of R values greater than 0.7 and less than 0.8: pre-course confidence with compression via direct finger pressure and pre-course confidence with compression via packing (R = 0.76; 95% CI: 0.66-0.84; P < 0.001), pre-course confidence with compression via direct finger pressure and pre-course confidence with compression via tourniquet placement (R = 0.75; 95% CI: 0.66-0.82; P < 0.001), and post-course confidence with compression via packing and post-course confidence with compression via tourniquet placement (R = 0.74; 95% CI: 0.63-0.83; P < 0.001).
Other correlations are indicated in Fig. 3.
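scipy reports Spearman coefficients and P-values but not the confidence intervals quoted above; one common way to obtain such intervals is a percentile bootstrap, sketched below with simulated placeholder responses. This is an assumption about how the intervals might be derived, not the authors' documented procedure.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Placeholder Likert responses for two questionnaire items (n = 153).
x = rng.integers(1, 6, size=153)
y = np.clip(x + rng.integers(-1, 2, size=153), 1, 5)

rho, _ = spearmanr(x, y)

# Percentile bootstrap for a 95% confidence interval of the coefficient.
n = len(x)
boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[b] = spearmanr(x[idx], y[idx])[0]
low, high = np.percentile(boot, [2.5, 97.5])
print(f"rho = {rho:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```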
Discussion
In summary, our initial hypothesis was confirmed: the application of PTEBL in STB courses contributes to better teamwork. Furthermore, results of our pre-post evaluation demonstrated an increase in bleeding control knowledge, skills, and willingness to be first responders regardless of the teaching method, which indicates that the PTEBL method could be applied in STB courses in China.
These observations are consistent with the results of some prior studies. In a study by Goralnick et al., hemorrhage-control training consisting of a lecture followed by hands-on skills training (87.7% proven to be effective) was found to be the most effective method to enable laypersons to control hemorrhage using a tourniquet [10]. Kaori et al. also suggested that STB training lectures with a practical session improved tourniquet knowledge and prepared Japanese citizens for mass casualty events [28]. Generally, the teaching method of "demonstration-practice-examination", a single-skill operation with little teamwork-based teaching, does have remarkable effectiveness, as shown by its high utilization in traditional hemostasis training and widespread use in different countries [14,15]. However, the best form of education for the STB course is still a source of debate [2]. Although individual hemorrhage-control skills are enhanced through this training, it is also important to note that teamwork, cooperation, and the comprehensive ability to respond to emergencies are also important in a trauma scenario.
The novel PTEBL teaching method was first applied in an STB course in China by our group [13]. In the present study, conducted without Caesar, students felt more inclined to express their opinions based on problems occurring during a trauma response, and a team-based approach encouraged collaborative thinking. Their abilities to analyze issues independently and think critically were also improved effectively. Furthermore, students worked in teams to practice and simulate clinical scenarios in which different emergency tasks were assigned to every individual. PTEBL achieved an overall improvement in personal and group development and improved the ability of students to integrate skills, especially in terms of communication skills, critical thinking, and evidence-based thinking, and successfully prepared students for future clinical work [19]. Findings of Orlas et al. have previously shown that, via STB course lectures and hands-on skills practice, 92.1% of all participants from different groups felt confident in being able to apply a tourniquet correctly [1]. Our study found that 92.2% of participants in the PTEBL course and 88.2% of participants in the traditional course could successfully apply a tourniquet after training. We suspect that this difference may be due to initial problem-based learning allowing for multiple practice opportunities and real-time feedback to correct mistakes or address overconfidence in some medical students [11].
Given that no statistically significant difference was seen in five other areas besides improved teamwork skills, including clinical thinking, problem analysis, learning effect, performance assessment, and pre-course guideline distribution, we cannot conclude that our new PTEBL teaching method performs markedly better than the traditional method overall. However, we observed that students' team cooperation ability was significantly improved in the PTEBL group compared with the traditional group, owing to the team-based simulation scenarios. Mastery of bleeding control knowledge and skills and willingness to be first responders also increased after the PTEBL course, on the basis of questionnaire data built on many references [21]. Although the quality control of such data may be affected to some extent, students frequently self-assess the relevant skills and have a reasonably accurate grasp of their skill level and self-confidence, reducing the research error. The heatmap demonstrated that improved problem analysis correlated with pre-course guideline distribution, improved learning effect, and improved teamwork skills. Improved clinical thinking was also associated with enhanced learning effects, improved problem analysis, and teamwork skills. These comprehensive abilities may change reciprocally as a result of STB training, which suggests that these items influence one another. In addition, confidence in the hemostasis skills of compression via direct finger pressure, packing, and tourniquet placement also correlated significantly with each other pre- and post-course, suggesting that the same principles and techniques of hemostasis were conveyed. Based on our overall results, we believe that PTEBL would be beneficial for developing comprehensive emergency response competence, and teamwork skills in particular, and would be superior to traditional methods of teaching STB courses.
A study by Dhillon et al. evaluated all participants of an American College of Surgeons STB course and reported a high likelihood (95.5%-97.9%) of utilizing hemorrhage control skills upon completion of the class [29]. Moreover, the STB protocol has been well received in Italy and has rendered good results among civilian health professionals and medical students [30]. In the Middle East, lay members of the public have contributed to a positive response to trauma emergencies after STB training [31]. However, such standardized bleeding control curricula rely heavily on the acquisition and accessibility of specific equipment and materials, including tourniquets. Thus, the cost and limited accessibility of equipment meant that only a few participants obtained the materials necessary to mount an appropriate trauma response and to sustain the practical education needed for long-term use. This suggests that professional tourniquets should be readily available in public areas or commercially in stores, especially in China. Today, AEDs are designed to be simple enough to be used by any individual regardless of training [32]. If we wish for STB courses to have a significant impact on reducing the risk of death in a trauma setting, we must create an environment where people can obtain hemostasis tools and materials in case of an emergency. Otherwise, the STB course would be a waste of time and cost, and of limited usefulness.
This study has several limitations. Firstly, hemorrhage-control ability was measured using self-report questionnaires, which may not accurately reflect practical competence [31]. Secondly, our study analyzed data from a small sample, making it difficult to generalize comprehensively to other populations [33]. The STB project is designed for public education, for which the effect should be evaluated in larger public groups, including all medical personnel and laypersons [5]. The third limitation is that our study only focused on the results of the current course and did not demonstrate retention of STB skills. However, nearly all STB research has been limited by the use of pre- and post-course assessment models [14,34]. The ideal outcome measures would cover both immediate and long-term retention after this educational intervention, which would allow an improved ability to assess the effectiveness of novel teaching methods. The fourth limitation is that the injured victim was always played by group participants rather than real traumatic bleeding patients in our trial, which may reduce the accuracy and rigor of the evaluation of student performance. Considering the lack of medical permission and suitable patients, we plan to test the students in their final year (two years after the course) to reveal whether long-term retention after this educational intervention would translate into an improved ability to treat real patients. Moreover, building a team in real life is quite difficult; a single practice session cannot fully reflect the effectiveness of the PTEBL teaching method, and the improvement in teamwork may not necessarily benefit every STB action.
Fig. 2 Effectiveness evaluation of the two groups
Fig. 3 Correlation heatmap of relevant independent variables
Table 1 Demographic data of participants. a All students were considered to have no good training experience in hemostasis techniques
Table 2 Students' proficiency in different hemostatic skills
Table 3
Rescue attitude in trauma scene | 2024-04-28T06:17:03.437Z | 2024-04-26T00:00:00.000 | {
"year": 2024,
"sha1": "6a6159512449b3c71c90d5c861c011e579df86d2",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "19f1cbba096f3be212cc5efe355668b180266834",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15274336 | pes2o/s2orc | v3-fos-license | How events determine spreading patterns: information transmission via internal and external influences on social networks
Recently, information transmission models motivated by the classical epidemic propagation, have been applied to a wide-range of social systems, generally assume that information mainly transmits among individuals via peer-to-peer interactions on social networks. In this paper, we consider one more approach for users to get information: the out-of-social-network influence. Empirical analyzes of eight typical events’ diffusion on a very large micro-blogging system, Sina Weibo, show that the external influence has significant impact on information spreading along with social activities. In addition, we propose a theoretical model to interpret the spreading process via both internal and external channels, considering three essential properties: (i) memory effect; (ii) role of spreaders; and (iii) non-redundancy of contacts. Experimental and mathematical results indicate that the information indeed spreads much quicker and broader with mutual effects of the internal and external influences. More importantly, the present model reveals that the event characteristic would highly determine the essential spreading patterns once the network structure is established. The results may shed some light on the in-depth understanding of the underlying dynamics of information transmission on real social networks.
I. INTRODUCTION
How social networks affect information transmission or information spreading is a pressing problem. Among the spreading phenomena studied in recent years are news [1] and rumor spreading [2,3], innovation diffusion [4,5], human behaviors [6,7], and culture transmission [8,9]. The structure of a network is crucial in determining the spreading pattern and is thus widely studied [10,11], with the critical phenomenon on network topology [12,13], the identification of influential spreaders [14][15][16], and spreading dynamics on adaptive networks [17,18] being the main focuses. With the increasing availability of real and good-quality data for analysis, propagation paths [19,20], patterns of human activity [21,22], and locating the source of spreading [23,24] have also become hot spots in the study of spreading dynamics.
Theoretical studies on information spreading are mostly carried out within the framework of epidemic spreading [12], where the propagation is regarded as a sequence of social interactions between infected and susceptible individuals [25,26]. Simulation results from such models, however, are very different from those observed in empirical analyses of real data [27], as information spreading carries its own special features. Normally, an online individual is unlikely to forward the same piece of news to his friends repeatedly, but s/he could infect a friend with (or be infected by a friend with) the same disease more than once [28]. The memory [29] and temporal effects [30] are also significantly different, with previous behaviors having grave implications for the information spreading process. In addition, the information content [31] and timeliness [32] generate spreading patterns that are very different from epidemic propagation.
The spreading channel also plays an important role in information spreading. Generally, there are two ways for an individual to access information: (i) peer-to-peer communications via a social network; and (ii) an external influence from outside of the network. Many previous studies traced the information spreading process by focusing on the interactions among individuals [28,33], but spreading through the external channel was also found to be important [27,34,35]. In Twitter, for example, about 71% of information by volume can be attributed to internal diffusion within the network, and 29% to external influence [36]. In innovation diffusion, Kocsis and Kun [37] found a power law with a crossover in the cluster size distribution, where the global effect due to the external channel determines the cluster's core and the local effect due to the internal channel governs its growth. There have also been studies on the effects of an external channel in epidemics, with transmission through a medium, e.g. mosquitoes, playing the role of an external channel, showing that enhanced infection results from having multiple routes [38,39].
Although external influence can apparently enhance the information diffusion [27], it remains unclear how the interplay between external influence and peer-to-peer interactions affects information transmission in social networks [36].
In this paper, we analyze internal and external influences on information spreading by tracking how events diffuse on the largest micro-blogging system in China, Sina Weibo (http://www.weibo.com/). Empirical results show that external influence plays a significant role, especially for events that readily attract the media's attention at their immediate outbreaks. We then propose a diffusion model that incorporates both social interactions and media effects [27] so as to illustrate the inter-relationship between the external and internal spreading channels. Both simulation and mathematical results of the model reveal that the spreading pattern is largely determined by the event's characteristics, as found in the empirical analyses.
II. EMPIRICAL REGULARITIES
As in other micro-blogging systems (e.g. Twitter), users of Sina Weibo can post short messages, namely tweets, in a variety of formats. When an event occurs, there are basically two ways to learn about it. Through the peer-to-peer interactions in a social network, referred to as internal influence, users automatically receive the content posted by other users whom they follow. Alternatively, users become aware of an event via an external influence outside the social network, e.g. via media broadcasts.
Figure 1 shows the spreading dynamics of some selected events from Sina Weibo in the first 100 days after their outbreak. Details on the data are given in the Supplementary Materials. Each topic carries at least 10^4 new tweets or 10^5 retweets, taken as measures of the external and internal influences respectively. The basic statistics in Table I show that the average retweet number is much larger than the number of new tweets, indicating that information on Sina Weibo diffuses mainly through the internal channel, which is consistent with the results on Twitter [36]. Although all the events spread rapidly in the first ten days (shaded blue), the details of the spreading patterns are different. In Fig. 1 we plot p_r(t) = n_#(t)/n_#(T), where n_#(t) represents the cumulative number of messages posted through channel # (internal or external) up to time t and T is the end of the observation window. Take the event labelled Yao Ming Retires (Fig. 1b) for example. Because Yao Ming is an internationally famous basketball star from China, people learned the news from the media's coverage. The external influence led to a quicker outbreak of new tweets than retweets as the news propagated and was discussed (p_r for the external channel is higher than for the internal one). Another type of event can be observed in the example labelled the Guo Meimei Event (Fig. 1g). It started when an ordinary lady showed off her wealthy lifestyle online, and it did not draw the media's attention initially. Many users gossiped when her account was revealed as that of a key official of the Chinese Red Cross. It became a hot topic quickly and eventually attracted the media's attention. This strong internal influence led to a quicker outbreak of retweets as the item propagated and was discussed (p_r for the internal channel is higher). Figure 1b and Fig. 1g can be taken as typical of externally and internally initiated events, respectively, which are the event characteristics mainly discussed in this work. We further analyze the diffusion network of each tweet [40,41]. It is a directed network with an edge i→j indicating information transmission from user i to user j. A tweet can be traced from its origin through the retweeting path until the spreading terminates, showing the cascade due to the tweet. The network consists entirely of internal channels and may be divided into several unconnected communities due to the effect of information blind areas [42]. For each event, the cascade size of each tweet can be found. Figure 2 shows the spreading cascade size distribution for each event. Each distribution exhibits a power law with a slope around −2.0, similar to other systems [27], suggesting spreading dynamics via a few very large-scale cascades and many small ones. The details, however, are different for internally and externally initiated events. For the Death of Wangyue (Fig. 2f), Guo Meimei (Fig. 2g) and Qian Yunhui events (Fig. 2h), the distribution exponents are less negative (smaller than 2 in magnitude), indicating that events with stronger peer-to-peer interactions lead to more large-size cascades. Furthermore, the average cascade size is also larger (see the metric N_r in Table I). These events were initiated within the social network (see Fig. 1f-1h) until the media picked them up, and the discussions among peers gave rise to the large cascades. In contrast, the other events caught the media's attention quickly. The stronger external influence led to more message sources and smaller cascades (see Fig. 2a-2e), and thus a more negative exponent (larger than 2 in magnitude).
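As a concrete illustration of how such exponents can be estimated (the caption of Fig. 2 states that the least-squares method was used), the following sketch fits a straight line to a log-binned cascade-size distribution on log-log axes. The cascade sizes below are synthetic placeholders, not the Sina Weibo data, and the binning choices are assumptions.

# Hedged sketch of a least-squares power-law exponent fit on synthetic cascades.
import numpy as np

rng = np.random.default_rng(0)
sizes = rng.pareto(1.0, 50_000) + 1           # toy cascade sizes with density ~ x^(-2)

counts, edges = np.histogram(sizes, bins=np.logspace(0, 3, 30), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers
mask = counts > 0
slope, intercept = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
print(f"estimated power-law exponent: {slope:.2f}")   # close to -2.0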
III. MODEL ANALYSIS
A. Model Description
We propose a theoretical model of information spreading that incorporates both internal and external influence. Figure 3 illustrates the model schematically. Two types of agents, ordinary individuals and media-agents, are included in the network. An agent receives information from another agent if s/he follows that agent, as indicated by the arrows (solid lines) for information flow. A tiny fraction of agents, the media-agents, can broadcast information to the public, represented by a group of agents (dashed lines), without being followed, in addition to forwarding information to their followers. We aim to incorporate (a) memory effects [29]; (b) external influences [36,37]; and (c) non-redundancy of contacts [28]. As an event propagates, every agent takes on one of four states at any time: (a) unaware: has not received information on the event yet; (b) aware: has received information but hesitates to accept the content; (c) accepted: has accepted the content and is ready to transmit it; (d) removed: knows of the content but will not transmit it any more. Therefore, an agent goes through the sequence unaware→aware→accepted→removed, analogous to the SIR epidemic model.
The information diffusion process can be described as follows: • To initiate an event, an agent is chosen randomly as a seed (coloured red in Fig. 3) to spread the first piece of information, with its state set to accepted. All other agents are in the unaware state.
• At a time step t, every agent who turned into the accepted state at time step (t − 1) posts the information and becomes removed. An ordinary agent forwards the information to her/his followers as a retweet. A media-agent broadcasts the information as a new tweet to a fraction of randomly chosen agents, mimicking those who gather information from the media, in addition to forwarding it as retweets to the followers.
• At a time step t, all other agents check for information arrival. Unaware agents become aware and evaluate a time-dependent acceptance probability p_a upon receipt of information, according to the source (see Eq. (1)). Aware agents update p_a if information arrives. These agents then switch to the accepted state at time t with probability p_a. Those that change to the accepted state are recorded.
• The steps are repeated until the information is spread to all accessible agents in the network.
There is a small fraction (0.1% in this paper) of media-agents, and each of them makes the same impact through broadcasting to 0.1% of all agents. The acceptance probability p_a increases as one receives the same information repeatedly. For an ordinary agent i at time t, p_a(i, t) is proportional to the amount of information C(i, t) received so far, which is updated according to

C(i, t) = C(i, t − 1) + Σ_{j ∈ Γ_i^{t−1}} w_ji + β · 1_{i ∈ M_t},  (1)

where Γ_i^{t−1} is the set of agents whom i follows and who switched to the accepted state at time step (t − 1), and who thus forward the information to i at time t; w_ji measures the internal influence due to the interaction j → i (w_ji = w is set for all pairs in the network); β measures the external influence due to the media; 1_{i ∈ M_t} is an indicator function; and the set M_t contains the agents who received broadcast information at time t.
For the acceptance probability p_a^(m) of the media-agents, we consider two extreme cases. For events initiated via gossip (labelled II, for internally initiated, such as the Guo Meimei event) that the media are not eager to report, p_a^(m) = p_a as in Eq. (1) and thus follows the same updating rule. To mimic externally initiated events (labelled EI, such as Yao Ming Retires) that the media rush to report, we set p_a^(m) = 1 so that media-agents accept the news immediately after they become aware of it. Note that Eq. (1) incorporates the memory effect. Obviously, considering the external influence can enhance the information diffusion effect (see Supplementary Fig. S1).
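To make the above rules concrete, a minimal agent-based sketch of the model is given below. It is not the authors' implementation: the additive accumulator C with cap p_a = min(1, C), the random followship graph, and the parameter values are illustrative assumptions; only the EI/II distinction (p_a^(m) = 1 versus p_a^(m) = p_a) and the 0.1% media-agent fraction follow the text directly.

# Minimal, self-contained agent-based sketch of the spreading model (assumed details noted above).
import random

def simulate(n=10_000, k_out=10, media_frac=0.001, w=0.1, beta=0.01, event="EI"):
    rng = random.Random(42)
    followers = [[rng.randrange(n) for _ in range(k_out)] for _ in range(n)]  # i -> followers of i
    media = set(rng.sample(range(n), max(1, int(media_frac * n))))
    reach = max(1, int(0.001 * n))           # each media-agent broadcasts to 0.1% of all agents
    C = [0.0] * n                            # accumulated "information", drives p_a
    state = ["unaware"] * n
    state[rng.randrange(n)] = "accepted"     # random seed agent
    retweets = new_tweets = 0

    while "accepted" in state:
        posting = [i for i in range(n) if state[i] == "accepted"]
        arrivals = {}                        # agent -> increment of C this step
        for i in posting:
            state[i] = "removed"
            for j in followers[i]:           # internal channel (retweet)
                arrivals[j] = arrivals.get(j, 0.0) + w
            if i in media:                   # external channel (new tweet)
                new_tweets += 1
                for j in rng.sample(range(n), reach):
                    arrivals[j] = arrivals.get(j, 0.0) + beta
            else:
                retweets += 1
        for j, inc in arrivals.items():
            if state[j] in ("unaware", "aware"):
                state[j] = "aware"
                C[j] += inc
                p_a = 1.0 if (event == "EI" and j in media) else min(1.0, C[j])
                if rng.random() < p_a:
                    state[j] = "accepted"
    return retweets, new_tweets

print(simulate(event="EI"), simulate(event="II"))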
B. Simulation Results
The model is implemented on the who-follows-whom online social network, i.e., the followship network, extracted from Sina Weibo data. The directed links give the direction of information flow, i.e., i→j when agent i is followed by j. The basic statistics are given in Fig. 4 (see inset). The network reciprocity [43] is about 15%. Fig. 4 shows the in-degree and out-degree distributions, excluding agents of degree zero. The distribution of k_out is much broader than that of k_in, due to the two different social relationships in Sina Weibo: following someone and being followed. Agents tend not to follow too many people due to their limited attention [44]. However, some targeted users, e.g. movie stars, are followed by a large number of agents without their consent. The resulting mean degrees give k_out ≫ k_in, suggesting that Sina Weibo has developed into a structure highly suitable for information flow (see Supplementary Fig. S2).
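The network statistics quoted above (in-/out-degree distributions and reciprocity) can be computed from a followship edge list in a few lines; the sketch below uses networkx on a three-edge toy graph standing in for the Sina Weibo data.

# Sketch of computing degree distributions and reciprocity from an edge list (toy data only).
import networkx as nx
from collections import Counter

G = nx.DiGraph()
G.add_edges_from([(1, 2), (2, 1), (3, 2)])     # i -> j: information flows from i to j

k_in = Counter(d for _, d in G.in_degree())     # distribution of k_in
k_out = Counter(d for _, d in G.out_degree())   # distribution of k_out
print("reciprocity:", nx.reciprocity(G))        # fraction of mutual links (~15% reported for Sina Weibo)
print("P(k_in):", k_in, " P(k_out):", k_out)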
We study both internally (II) and externally initiated (EI) events. As in the empirical analysis, the fractions of followee-follower retweets and broadcasts (new tweets) are recorded as functions of time as the information spreads. Figure 5 shows the results in terms of the cumulative fractions of removed agents due to the two processes for EI (Fig. 5a) and II events (Fig. 5c). Tracing the propagation paths of many events, Fig. 5b and Fig. 5d give the corresponding cascade size distributions. Evidently, the model reproduces the key features of retweets and new tweets for EI events (compare Fig. 5a with Fig. 1a-1e and Fig. 5b with Fig. 2a-2e), with p_r(t) for new tweets higher than p_r(t) for retweets and a more negative exponent in the cascade size distribution. Similarly, the key features of II events are also reproduced (compare Fig. 5c with Fig. 1f-1h and Fig. 5d with Fig. 2f-2h), with p_r(t) for retweets higher than p_r(t) for new tweets and a less negative exponent in the cascade size distribution.
In order to further understand the effect of media-agents quantitatively, we examine the sensitivity of the proposed model to the ratio of media-agents. Figure 6 shows the dynamics of the removed individuals through the two different channels for various media-agent ratios for the EI events. Intriguingly, the spreading pattern is clearly affected by the ratio of media-agents: the burst of attention shifts from the external channel for relatively large fractions of media-agents (Fig. 6a) to the internal channel for small fractions (Fig. 6f). Thus, the external channel plays the determining role in shaping the spreading patterns of EI events only when there are enough media-agents in the system (e.g. 0.06%, shown in Fig. 6d). In this way, a small number of media-agents cannot supersede the influence of gossip, although they respond promptly to EI events. This suggests that the information spreading patterns of EI events could be partially controlled by regulating the media-agents in real social networks, e.g. persuading "stars" not to forward the target message. However, different from EI events, information spread through the internal channel always bursts first for II events (see Supplementary Fig. S3). For such events, the media-agents can only influence the outbreak size and are unable to change the spreading patterns, no matter how strongly they dominate the network.
C. Mathematical Analysis
In this section, we give a mathematical analysis to illustrate the information diffusion patterns of the proposed model. We use the superscripts *^n and *^m to denote ordinary individuals and media-agents, respectively. Denote S(t), I(t) and R(t) as the densities of unaware-/aware-state, accepted-state, and removed-state individuals, respectively. Adopting the mean-field approach [12,45,46], we can obtain the differential equations, Eq. (2), describing the time evolution of the densities in each population, where l is the average out-degree of ordinary individuals, o is the number of agents that can receive the information through the broadcasting of each media-agent, and p_a(t) and p′_a(t) are the acceptance probabilities for ordinary individuals and media-agents at time t, respectively. According to Eq. (1), the average p_a is proportional to the number of removed-state individuals in the system [47]. Therefore, we hereby assume that the dynamics of p_a(t) follow the sigmoid function (also known as the Fermi function in classical physics [48]), p_a(t) ∼ c/(1 + e^(−at+b)) (Supplementary Fig. S1 and Fig. S2 show the plausibility of this hypothesis).
As illustrated in the Model Description, for the diffusion of EI events the media-agents respond to the event promptly, so that p′_a = 1 at all times, whereas II events are less attractive to media-agents when they occur, so that p′_a(t) for the media-agents is identical to that of ordinary individuals, i.e., p′_a(t) = p_a(t). In addition, only a small fraction of media-agents (0.1%) is involved in the initial spreading process, resulting in p′_a(t) → 0 at early times. We can therefore obtain the numerical results for Eq. (2) shown in Fig. 7, which share a similar pattern with the simulation and empirical results. That is to say, spreading via the external channel is always ahead of that through the internal channel for the diffusion of EI events (see Fig. 7a), and vice versa for II events (see Fig. 7b). A further detailed analysis of the outbreak threshold of the proposed model is presented in the Supplementary Materials; considering the external influence can diminish the information outbreak threshold significantly (see Supplementary Fig. S4).
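Since Eq. (2) itself is not reproduced here, the following sketch integrates a simplified stand-in mean-field system built only from the stated ingredients (average out-degree l, broadcast reach o, sigmoid p_a(t), and p′_a = 1 for EI versus p′_a = p_a for II). The right-hand sides and all parameter values are assumptions made for illustration, not the paper's equations.

# Illustrative numerical integration of an assumed two-population SIR-like mean-field system.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.5, 5.0, 1.0          # sigmoid parameters (assumed)
l, o = 10, 100                   # avg. out-degree / broadcast reach (assumed)
f_m = 0.001                      # fraction of media-agents

def p_a(t):
    return c / (1.0 + np.exp(-a * t + b))

def rhs(t, y, event):
    Sn, In, Rn, Sm, Im, Rm = y
    force = l * In + o * Im                      # total infective pressure (assumed form)
    pm = 1.0 if event == "EI" else p_a(t)        # media respond promptly only for EI events
    dSn = -p_a(t) * Sn * force
    dSm = -pm * Sm * force
    return [dSn, -dSn - In, In, dSm, -dSm - Im, Im]

y0 = [1 - f_m - 1e-4, 1e-4, 0, f_m, 0, 0]        # tiny seed among ordinary agents
sol = solve_ivp(rhs, (0, 60), y0, args=("EI",), dense_output=True)
print("final removed fractions (ordinary, media):", sol.y[2, -1], sol.y[5, -1])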
IV. CONCLUSIONS & DISCUSSION
In this paper, we have studied the internal and external influences on information transmission on social networks. Empirical analyses of a wide range of incidents on China's largest micro-blogging platform, Sina Weibo, show that there are apparent differences between EI and II events. EI events, which attract more attention from media-agents, result in broad and diverse popularity and correspondingly large exponents of the cascade size distribution. Comparatively, II events, driven mainly by social communication, show the opposite phenomenon. Therefore, the present findings demonstrate that the combination of out-of-network broadcasting and peer-to-peer interactions plays a significant role in facilitating the emergence of different information transmission patterns.
In order to understand how information transmits with both peer-to-peer interactions and media effects, we have proposed an information spreading model based on the classical SIR model, considering three representative characteristics: (i) memory effect; (ii) role of spreaders; and (iii) non-redundancy of contacts, which are all essential properties of information diffusion and make it quite different from the basic models of biological epidemics. Therein, a small fraction of randomly selected individuals act as media-agents, through which information can transmit outside the fixed structure of the social network; this is referred to as the external influence. Both simulation and mathematical results show that, though information diffusion depends largely on the strength of the peer-to-peer interactions, the spreading pattern is essentially determined by the event's attributes once the observed network structure is established, which agrees well with the empirical analyses.
In the proposed model, individuals receive information via two approaches: internal (peer-to-peer contacts) and external (media) influences. The role of the external influence can be interpreted in two aspects: (i) the depth effect, considered as the media's credibility: the amount of information received by aware- and unaware-state individuals, represented by the parameter β in the model; and (ii) the breadth effect, considered as the media-agent's influence range, which brings in more active unaccepted individuals via media broadcasting. Besides, the internal and external influences also promote the effects of each other. On the one hand, the breadth effect of the external influence arouses more active individuals to become aware of the information and transmit it to all their followers. On the other hand, events spreading through the internal channel attract more media to report them, which additionally enlarges the external influence on the event's diffusion. As a consequence, information spreads more quickly and broadly in social systems through the mutual reinforcement of external and internal influences (see Supplementary Fig. S1). Furthermore, we additionally observe the impact of network structure by investigating different media-agent ratios (see Fig. 6 and Supplementary Fig. S3). It reveals that the population informed from both external and internal channels increases as the ratio of media-agents expands. In addition, the ratio of media-agents largely influences the spreading patterns for EI events. Therefore, strategists and policy makers should pay more attention to engaging with the media-agents as an effective way to manage information diffusion.
The findings of this work may have various applications in studying how information spreads on social networks. (i) Rumor spreading and detection are both active and serious topics for purifying the environment of public opinion; (ii) the field of information filtering confronts a huge challenge in dealing with tremendously increasing data every day, and the present results may partially inspire the design of more effective algorithms for providing relevant information to users through timely recommendations. The present work provides only a starting point for the preliminary study of internal and external influences; a more comprehensive and in-depth understanding of multi-channel effects still needs further efforts.
FIG. 1 .
FIG.1.The spreading dynamics versus time of eight selected events on Sina Weibo.Blue areas represent the spreading range within ten days after the corresponding events have occurred.Red and black curves represent the spreading affected by internal (retweets) and external influence (new tweets), respectively.
FIG. 2 .
FIG.2.The cascade size for diffusion of eight selected events.The distribution exponent is obtained by the Least Square Method.
FIG. 3 .
FIG.3.Illustration of information spreading model with both internal and external influence.The agents with loudspeakers represent the media-agents (external influence), which can spread information to the other agents with the same probability (dash arrows).Other gray agents represent ordinary individuals (the red agent is randomly selected to represent the information seed in the model), which can only deliver messages via peer-to-peer interactions based on existing social structure (solid arrows).All arrows indicate the direction of information flow.
FIG. 4 .
FIG.4.The degree distribution of the social network of Sina Weibo.kin and kout represent the number of followers and followees for the corresponding user, respectively.The inset is the basic statistics of the original social network of Sina Weibo.N node and N edge are the number of nodes and directed links, respectively.k, kin and kout represent the average degree, average indegree and average outdegree, respectively.The nodes with zero indegree or outdegree are not counted.
FIG. 5 .
FIG. 5. Simulation process of the information spreading via two different channels.a and c: cumulative fraction of removed individuals as a function of time; b and d: the cascade size distribution represented by the proposed model.The parameters are set as: a and b: w = 0.1 and β = 0.01 for the EI events; c and d: w = 0.1 and β = 0.01 for the II events.
FIG. 7 .
FIG. 7. Cumulative fraction of removed individuals versus time steps in the numerical analysis.a for the EI events; b for the II events.The parameters are set as: w = 0.1 and β = 0.01.
TABLE I .
Basic statistics of the eight representative events. day represents the date when the corresponding event happened, Nm represents the number of new tweets discussing the corresponding event, Nr represents the total number of new tweets and retweets about the event, and ⟨Nr⟩ represents the average retweet number of each tweet.
FIG. 6. Different patterns of information spreading via the two channels for various media-agent ratios for the EI events. a-f represent the results for different ratios of media-agents (#%): a 0.09%, b 0.08%, …, f 0.04%, respectively. g represents the fraction of removed individuals through different channels for various media-agent ratios. The average value and the corresponding standard deviation are obtained by averaging over 100 independent realizations. | 2015-03-25T22:51:37.000Z | 2015-03-25T00:00:00.000 | {
"year": 2015,
"sha1": "c52fa9038b8f55d926168e76058cb956aed81d2d",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1367-2630/17/11/113045/pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "c52fa9038b8f55d926168e76058cb956aed81d2d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
255988294 | pes2o/s2orc | v3-fos-license | On the journey from nematode to human, scientists dive by the zebrafish cell lineage tree
Three recent single-cell papers use novel CRISPR-Cas9-sgRNA genome editing methods to shed light on the zebrafish cell lineage tree.
no further information on the nature of the cells, and hence rather uninformative. To label the tree with cell types, transcriptomic (or other) analysis of each cell is needed in addition to its genomic analysis. While single-cell transcriptomics is progressing in leaps and bounds and is now the cornerstone technology of the international Human Cell Atlas project, integrated single-cell genome and transcriptome analysis is still in its infancy [2].
Fortunately, a new idea has recently emerged. It is possible to use CRISPR-Cas9-sgRNA genome editing to address these two problems simultaneously. In accordance with the multiple discovery theory, the idea is presented in three independent, almost simultaneous, publications, all applying it to the discovery of the zebrafish cell lineage tree [3][4][5].
Uncovering zebrafish cell lineages by scarring its genome, waiting, then fishing the scars
The method uses CRISPR-Cas9 to inflict random edits to the cell's genome, called genomic scars, at specifically chosen locations guided by single guide RNAs (sgRNAs). Such scars are, in fact, induced somatic mutations heritable via cell division and can be used, with the help of phylogenetic analysis tools, to reconstruct lineage relationships among the organism's scarred cells. As the putative locations of these scars within the genome are known, they can be recovered by targeted sequencing, eschewing the need for high-coverage single-cell whole-genome sequencing. To eliminate the need for simultaneous genomic and transcriptomic analysis of individual cells, these scars are inflicted in expressed genomic loci. Thus, single-cell RNA sequencing can recover both a cell's type and its expressed genomic scars. To ensure the scars do not affect organism development, they are applied only to a nonfunctional transgene such as GFP, which is incorporated in a sufficient number of copies in the genome to support ample scarring. Three variations of this combined concept, termed ScarTrace [3], scGESTALT [5], and LINNAEUS [4], have been applied by the three teams to analyze various aspects of the zebrafish cell lineage tree, focusing on early development [4], the brain [5] and the entire organism, with focus on the immune system and eye [3]. Highlights of their research findings include showing that a subpopulation of resident macrophages in the fin has a different origin than monocytes in the marrow [3]; that erythrocytes generated by primitive hematopoiesis have a distinct origin from those generated by definitive hematopoiesis [4]; and that the heart harbors two seemingly very similar endocardial/endothelial cell types which have very different origins [4].
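As a toy illustration of the reconstruction step, cells can be grouped by how many scars they share: cells carrying the same early scar are inferred to descend from the same ancestor. The scar matrix and the simple average-linkage clustering below are placeholders for the dedicated phylogenetic pipelines used by ScarTrace, scGESTALT and LINNAEUS.

# Toy lineage reconstruction from heritable genomic "scars" (illustrative data and method only).
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# rows: cells, columns: scar sites (1 = scar present, 0 = absent)
scars = np.array([
    [1, 1, 0, 0],   # cell A
    [1, 1, 1, 0],   # cell B  (shares an early scar pattern with A)
    [0, 0, 0, 1],   # cell C  (different early lineage)
])
tree = linkage(pdist(scars, metric="hamming"), method="average")
print(dendrogram(tree, no_plot=True, labels=["A", "B", "C"])["ivl"])  # leaf order of the inferred tree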
Diving deeper into the zebrafish cell lineage tree
The research milestone reached by these three papers is worth celebrating, as it offers a completely new way to peer into complex organism development. Yet, it is a small step in a long journey. Even within the realm of zebrafish, many limitations have yet to be overcome.
First, the number of cells analyzed by these papers is measured in the tens of thousands, a far cry from the estimated 100,000,000 cells of the adult zebrafish. Significant scaling of the method in all dimensions, as well as drastic declines in sequencing costs, is needed to reconstruct the full zebrafish cell lineage tree.
Second, unlike natural somatic mutations, which occur continuously during normal cell division, the methods described inflicted CRISPR-Cas9 scarring only once or twice during the organism's lifespan. Continuous scarring is needed for full cell lineage tree reconstruction.
Third, while phylogenetic analysis tools have been improving for decades, phylogenetic cell lineage reconstruction has specific needs, notably coping with noisy, partial, or missing single-cell genomic data, and reconstructing ever-increasing lineage trees, orders of magnitude larger than what has been previously attempted. Novel and better algorithms have to be developed to cope with these challenges.
Fourth, while cell type and lineage are useful information, without cell location the resulting picture would still be rather partial. Methods for in situ RNA sequencing which could incorporate genome scarring to uncover simultaneously cell location, cell type, and cell lineage would give a more complete picture of organism development.
Fifth, while the number of branches between a cell and the root measures the number of cell divisions it underwent since the zygote, it does not measure time. There could be parts of the tree that extend slowly throughout adult life and parts that progress quickly during early life and then stop. The timing of cell division, differentiation, and renewal is a major question of fundamental biological importance. While the timestamps of the root and leaves of an organismal cell lineage tree are determined by the actual experiment that generated it, timestamps of internal nodes can only be inferred retrospectively, like type and location information, with the aid of yet-unavailable mathematical methods applied to snapshots taken at different time points.
Sixth, a fundamental limitation of any retrospective method, including this one, is that it cannot peer into the past, only speculate about it. Specifically, single-cell RNA-sequencing can provide information only on extant cells, namely the leaves of the cell lineage tree. Any knowledge on past internal tree nodes can only be inferred. Conversely, analysis of an organism at cellular resolution using current methods requires its sacrifice, obviously preventing further organism development, so peering into its future is also impossible. If organism development is deterministic, as in C. elegans, internal nodes can be analyzed by freezing development of individuals at different time points for analysis, and then coalescing the resulting partial lineage trees into a unified lineage tree. However, complex organisms may not be deterministic, in which case simple coalescence of cell lineage trees, even of clones, might not be possible. Snapshots at cellular resolution of different individual organisms at different stages of development would be needed and helpful of course, but they cannot be simply coalesced. Yet-unavailable mathematical and computational methods have to be developed to make sound inferences of the type and location of internal nodes from information on the cells at the leaves of a cell lineage tree of a complex organism.
From zebrafish to mouse and, ultimately, to the human cell lineage tree
Climbing up the model organism hierarchy, the mouse is an obvious next target of this method, as a lot of cell lineage knowledge exists as a backdrop to verify the method, as well as to improve upon. The mouse can also be a stepping stone for human cell lineage reconstruction. A key hurdle for any human cell lineage reconstruction method is the lack of a ground truth to measure against. While a cell lineage tree can be easily scribbled, verifying its relationship to the actual developmental history of an organism is far from trivial. If and when genome scarring proves a reliable method for mouse cell lineage reconstruction, it can serve as a ground truth for testing, in mouse, retrospective cell lineage reconstruction using naturally occurring somatic mutations. Due to ethics considerations, this may be the only viable method for uncovering the human cell lineage tree.
To conclude, let's ask: why bother? What will we gain at the end of this journey, if we know the human cell lineage tree? The answers are nothing short of dramatic. I can fairly say that truthful human cell lineage trees, fully labeled with type, temporal, and spatial information, would provide long-sought answers to the most profound open questions in human biology and medicine. Here are three examples: First, the human cell lineage tree can summarize the answers to all open questions on human development, at cellular, if not molecular, resolution. Second, such a tree would end the fierce controversies regarding regeneration during adulthood, which rage in every human-organ research community I know. For example, do beta cells renew [6]? The heart [7]? Neurons [8,9]? Oocytes [10]? The answers will be found in the human cell lineage tree. Third, it would also be able to explain disease dynamics and answer questions such as: where do metastases come from? Which cells initiate relapse after treatment? The answers lie in the patients' cell lineage trees [2].
Obtaining knowledge of the human cell lineage tree in development, aging, and disease on par with our current knowledge of the human genome will take decades. But this is a journey worth taking, and a journey science must take. | 2023-01-19T21:52:58.950Z | 2018-05-29T00:00:00.000 | {
"year": 2018,
"sha1": "07d54d03da0eb4eb90d54e7346cbff174b2c3ade",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13059-018-1453-x",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "07d54d03da0eb4eb90d54e7346cbff174b2c3ade",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
34374036 | pes2o/s2orc | v3-fos-license | Treatment of primary cutaneous apocrine carcinoma of the scalp-case and review of the literature
Primary cutaneous apocrine carcinoma (PCAC) of the scalp is a rare type of sweat gland neoplasm, with pathological features that make it difficult to distinguish from metastatic breast cancer. A review of the literature identified only 17 similar cases, which are reviewed here with respect to the treatment used and survival outcomes. A 42-year-old woman with such a case is described to supplement the available literature. The patient presented with a 3 x 2 cm mass on the scalp and was treated surgically with no additional adjuvant treatment, owing to the limited literature suggesting substantial benefit of adjuvant chemotherapy or radiation for these neoplasms. Correspondence to: Dr. Teresa Petrella, Odette Cancer Centre, Sunnybrook Health Sciences Centre, 2075 Bayview Ave., M4N 3M5, Toronto, ON, Canada, Tel: (416) 480 5248; E-mail: teresa.petrella@sunnybrook.ca
Introduction
Primary cutaneous apocrine carcinoma (PCAC) is a rare type of sweat gland neoplasm with incidence rates estimated to range from 0.0049-0.0173 per 100,000 patients per year [1]. Approximately 200 total cases are reported in the literature. PCAC occurs in areas with large numbers of apocrine glands [2]. The scalp is one of the rarest sites of occurrence, while the axilla appears most commonly [3]. Among the 186 cases reviewed by Hollowell, equal distribution was present in both males and females, with 76% of the sample population of Caucasian ethnicity. Median age was calculated at 67 years for this sample, which is the largest cohort studied to date [1].
PCAC can develop in the dermal and subcutaneous layers of the skin, occasionally infiltrating the epidermal layer and resulting in ulceration. PCAC has a diverse presentation, occurring as both uni- and multi-nodular growths of varying colour [2]. Often these neoplasms are indurated, painless masses and can be associated with benign lesions [2], including a nevus sebaceous, most commonly seen with scalp lesions [4][5][6]. Development of these lesions typically occurs within a year before diagnosis [7]; however, several cases have reported longer durations with a period of rapid growth [4,5,8,9]. PCAC is often quite difficult to differentiate from metastases of adenocarcinoma of the breast, for two reasons. First, PCAC has a morphological profile almost indistinguishable from that of metastatic carcinoma of the breast [10], which may be attributed to the fact that the mammary glands are defined as a form of modified sweat gland [11]. Second, no widely agreed-upon immunohistochemical profile has been developed to differentiate between the two. This often leaves the diagnosis to be determined by clinical history and a thorough examination in search of a primary site [10].
Of the approximately 200 cases of PCAC, very few have reported detailed accounts of scalp primaries. We report the case of a 42-year-old woman who presented with a 3 × 2 cm mass on the scalp that was treated with surgery, and we review cases reported in the literature on the prognosis, treatment and outcomes of PCAC.
recurrence. No other suspicious skin lesions of the scalp, face or neck were found. The breast exam did not show any nodules or masses, no suspicious skin lesions and no axillary lymphadenopathy. The rest of the exam was unremarkable. Bilateral mammograms and bilateral breast MRI were conducted and showed no evidence of malignancy. CT scan of the head and neck did not show any evidence of pathologic lymph nodes or masses. Staging with CT scan of chest, abdomen and pelvis was also negative for metastatic disease.
Her case and pathology were further reviewed at the multidisciplinary case rounds, and it was concluded that, given the histopathological features and the absence of a primary breast carcinoma, this was a primary apocrine carcinoma of the scalp. The patient had an adequate excision initially and no further excision was done. It was felt that there was not enough evidence to treat the area with adjuvant radiation, and the patient remained on surveillance. The patient is currently on surveillance and remains free of disease 39 months post surgery.
Literature review results
We conducted a literature review to assess treatment options for patients with PCAC of the scalp. Our review identified 17 cases which had detailed reports, with the first four cases documented by Domingo and Helwig in 1979 [5]. Of the 17 reported cases, 10 females (58.8%) and 7 males (41.2%) made up the cohort. Race was only reported in 3 of the manuscripts. The mean age of the cohort at time of diagnosis was 57.8 years with a range from 20 to 85 years of age.
Of the 17 cases in our review, 12 manuscripts provided information regarding disease status upon presentation (standardized staging is not defined for this population). Ten cases (58.8%) had reported locally defined neoplasms, with no report of malignancy in the lymph nodes, while 2 cases (11.8%) had reported node positive disease (Table 1). Metastatic disease was not present at diagnosis in any of the reported cases and staging was not defined in 5 of the cases [12].
The size of the scalp masses varied among the cohort, with maximal measurements ranging from 0.5 to 7.5 cm and an average of 3.1 cm. There were 7 non-metastatic lesions, all of which were 4 cm or less, with an average length of 2.2 cm at presentation. The average size of the metastatic lesions was 5.9 cm, with 4 of the 6 measuring 4 cm or greater and 2 unreported (Table 2).
The PCAC of the scalp exhibited variable growth patterns among the 9 cases that had reported these details. Several cases reported long periods of evolution associated with benign lesions, often from birth, followed by a short period of rapid growth of the tumour mass [4][5][6]. Other cases demonstrated more spontaneous development from a range of several weeks to 6 months [12,13]. Due to limited information however, statistical analysis of these patterns in relation to prognosis could not be determined with accuracy.
Standard primary treatment for PCAC lesions of the scalp appears to be surgical excision, performed in all but one of the cases analysed (94.1%) (Table 3). In one instance, 6 palliative chemotherapy cycles were given instead, due to an initial misdiagnosis. Despite this, the patient showed an excellent response and was disease free for 7 years following treatment [14]. For localized masses the most common surgical intervention was local excision (of undefined margins), used in 87.5% of the surgical treatments. Those treated with local excision had outcomes ranging from 4 months of disease remission to six years with no evidence of disease (Table 2). Radical or wide excision (with 2 cm margins) of the scalp lesion and regional lymph node dissection were used in the presence of regional disease on two occasions (12.5%) [3,15]. The patient who underwent radical excision also received radiation therapy and adjuvant treatment with cisplatin and 5-fluorouracil for node-positive disease. Unfortunately this patient had disease progression 10 months following initial treatment. The patient receiving only wide excision of the scalp and lymph nodes was free of disease for 4 years, at which time the disease had spread to the lungs (Table 2).
Information regarding recurrence, disease progression and metastases was available for 14 cases. Among the cohort, 4 cases (22.2%) had no disease recurrence, while 3 (17.6%) had shown recurrence to local or regional sites (three involving lymph nodes). Six cases (35.3%) had developed metastatic disease to lymph nodes, bone, lungs and brain and other cutaneous regions. Information was unreported or patients were lost to follow-up in 4 of the patients ( Table 2).
For local recurrences without the involvement of regional lymph nodes, excision of the tumour appears to be sufficient. In the one instance where node-negative local recurrence occurred, the patient was free of disease, with no evidence of malignancy, at a one-year follow-up [14]. Upon the involvement of regional lymph nodes, such as those in the cervical, preauricular and postauricular regions, the addition of a lymphadenectomy along with excision of the primary lesion is common. Surgical intervention alone following node-positive disease provided mixed responses, from 2 months to 1 year [3,5]. On a separate account, in addition to excision of the primary and lymphadenectomy, adjuvant chemotherapy (5-fluorouracil and cisplatin) and radiotherapy were administered and provided the patient with 9 months of disease-free status [2].
For the patients that had developed metastatic disease, 5 of the 6 cases provide details regarding further treatment. These included combinations of chemotherapy, radiotherapy and/or surgical intervention. Radiotherapy was commonly used for palliation of both bone and brain metastases ( Table 2). Common sites of disease progression occurred in distant lymph nodes (axillary, subclavicular), cutaneous tissue, bone, brain and the lungs. From the time of metastatic diagnosis, survival ranged from approximately one to four years, with an average of 2.25 years [2,3,5,13,15].
Commonly used chemotherapy agents among this cohort included combinations of anthracyclines, taxanes and platinum drugs. Four courses of adriamycin and etoposide with docetaxel proved effective to stabilize lung metastases for 4 years in one patient [15]. Paclitaxel and carboplatin were also used in another patient and administered every 21 days for metastatic disease to bone and lung regions. This patient remained disease free for 16 months [13]. Second line therapies that have been used include the combination of cisplatin and 5-fluorouracil, as well as methotrexate and bleomycin with short lived results [2,3].
Discussion
PCAC of the scalp is a rare neoplasm most often reported in the literature as case reports or small case series. To date, limited work has been done analyzing the prognosis, outcomes and treatment options available for the various stages of this disease. In our review we identified 17 cases of scalp primaries. Most cases had localized disease at initial presentation, while regional lymph node metastases were less prevalent. Primary treatment is most often local excision of the primary tumour. The use of radiotherapy and chemotherapy is not common outside of palliation; however, radiation may be beneficial in treating lymph node metastases. Prognosis is difficult to quantify accurately due to the limited number of cases available. The data suggest that localized disease is typically treatable without aggressive therapy; however, survival seems to diminish following lymph node involvement. The data also suggest that larger primaries at the initial visit may indicate a poor prognosis, due to their tendency to metastasize; outcomes are often fatal upon the diagnosis of metastatic disease.
Our data appear to be consistent with previous demographic and prognostic findings for other PCAC primaries. A review of 186 cases analysing several PCAC primary sites showed that, similar to scalp lesions, patients most often present with localized disease, while metastasis to the lymph nodes and distant regions is less common. The data also show that most cases have a good prognosis, with an expected and observed 5-year survival of 85.4% and 76.1%, respectively, for all site primaries and an overall median survival of 51.5 months. Median survival diminished significantly following lymph node involvement and metastatic disease, to 33 and 14.5 months, respectively [1]. Conclusions regarding the prognosis of scalp cases are limited due to the small sample size; however, survival seems to correlate with data from these various primaries. In both cases localized disease has been shown to be manageable with surgical treatment; however, prognosis appears to worsen following the spread of disease into the lymph nodes and other distant regions, often requiring additional treatment.
For the treatment of localized PCAC the current consensus tends to support the use of surgical resection, wide excision being the recommended procedure. It has been shown that surgical excision provides a significant prognostic benefit compared with patients who have not undergone surgery [1]. Due to insufficient data, surgical margins have not been standardized; however, margins of 1 to 2 cm may provide sufficient eradication of tumour cells. These values have been suggested based on validated standards for similar cutaneous lesions [16]. At this point in time there is a lack of evidence to support the use of chemotherapy and radiotherapy in the primary treatment of localized disease. Some reports have suggested the use of radiation in the treatment of masses exceeding 5 cm. Given the aggressive nature of these larger masses seen among scalp cases, this recommendation may be warranted; however, it has not been prospectively evaluated [16]. These findings seem to be acceptable for all PCAC of the skin, including the axilla, head and neck and thoracic regions. More delicate regions, including the eye or eyelid and the anogenital region, may require more specific treatment regimens.
Given the significance of lymph node involvement for the prognosis of PCAC, Hollowell has recommended the use of sentinel lymph node biopsy (SLNB) to guide treatment planning [1]. Unfortunately, due to the low incidence rate of PCAC, SLNB has not undergone prospective evaluation, but it has been shown to provide prognostic value in other cutaneous neoplasms, including melanoma [17], squamous cell carcinoma [18] and Merkel cell carcinoma [19]. Although this may suggest SLNB to be useful in this population, treatment should be determined on a case-by-case basis, as lymph node dissection is not standard practice for PCAC [16].
In patients who present with positive regional lymph nodes, lymphadenectomy, in addition to removal of the primary, is common treatment in the PCAC literature [2,3,15,[20][21][22][23]. Radiotherapy may also provide additional benefit for those with regional lymph node metastases; however, data are limited. Among non-melanocytic skin cancers with lymph node metastases, radiotherapy in addition to surgical excision has also been shown to increase survival rates compared with surgery alone [24,25]. Given the high percentage of metastatic occurrence among PCAC scalp primaries, clinicians should consider the possible benefit of radiotherapy within this population. Chemotherapy should be reserved for treating advanced disease that often proves to be fatal, and initiation of palliative care in these circumstances is inevitable [3].
In review of these findings, our patient had a wide excision with no other therapy. Surgical excision has demonstrated to be a standard option for patients with localized disease. In our case, our patient remains disease free at 39 months post surgery. These results are consistent with previous case reports, and may provide interest to those developing treatment plans for similar cases.
Conclusion
Following an in-depth review of the literature on PCAC, it can be concluded that the recommendation of surgical removal with clear margins seems to be appropriate for patients with local, node-negative disease. Surgical margins of one to two centimeters are generally accepted standards. No evidence is currently available to show a benefit of adjuvant treatment for PCAC.
Patients with additional regional lymph node involvement have lower median survival rates and may benefit from lymphadenectomy and additional radiotherapy. Metastases to regional lymph nodes and distant organs appear to be more common among scalp lesions, affecting approximately one third of the scalp cases, thus suggesting the need for further treatment in this group. The use of chemotherapy and radiotherapy may also be considered in patients with advanced and distant disease, as well as chronic recurrence, but should be decided on a case-by-case basis.
Disclaimer
The opinions expressed by authors contributing to manuscript do not necessarily reflect the opinions of the Sunnybrook Health Sciences Centre or the Odette Cancer Centre with which the authors are affiliated. | 2019-03-16T13:06:37.932Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "5fdefb94fc087ad07349ddd7e2d2f4a37c6438eb",
"oa_license": "CCBY",
"oa_url": "https://www.oatext.com/pdf/GOD-3-188.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8a7c2c224a1923d3a9d46ef7d9b4b8971bbf12c0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232358561 | pes2o/s2orc | v3-fos-license | Effect of Foliar Application of Various Nitrogen Forms on Starch Accumulation and Grain Filling of Wheat (Triticum aestivum L.) Under Drought Stress
Foliar nitrogen (N) fertilizer application at later stages of wheat (Triticum aestivum L.) growth is an effective method of attenuating drought stress and improving grain filling. The influences and modes of action of foliar application of various nitrogen forms on wheat growth and grain filling need further research. The objective of this study was to examine the regulatory effects of various forms of foliar nitrogen [NO3–, NH4+, and CO(NH2)2] on wheat grain filling under drought stress and to elucidate their underlying mechanisms. The relative effects of each nitrogen source differed in promoting grain filling. Foliar NH4+-N application notably prolonged the grain filling period. In contrast, foliar application of CO(NH2)2 and NO3–-N accelerated the grain filling rate and regulated levels of abscisic acid (ABA), zeatin riboside (ZR), and ethylene (ETH) in wheat grains. Analysis of gene expression revealed that CO(NH2)2 and NO3–-N upregulated the genes involved in the sucrose–starch conversion pathway, promoting the remobilization of carbohydrates and starch synthesis in the grains. In addition, activities of superoxide dismutase (SOD), peroxidase (POD), and catalase (CAT) were increased, whereas the content of malondialdehyde (MDA) declined under foliar nitrogen application (especially NH4+-N). Under drought stress, enhancement of carbohydrate remobilization and sink strength became key factors in grain filling, and the relative differences in the effects of the three N forms became more evident. In conclusion, NH4+-N application improved the antioxidant enzyme system and delayed photoassimilate transportation. On the other hand, foliar applications of NO3–-N and CO(NH2)2 enhanced sink capacity and alleviated drought stress injury in wheat.
INTRODUCTION
Drought is a severe source of abiotic stress that threatens wheat (Triticum aestivum L.) growth and yield worldwide and is exacerbated by climate change (Wang et al., 2016;Zhang et al., 2017;Alam et al., 2020). Northern China, a major global wheatproducing region, is frequently afflicted with drought during wheat growth, particularly when precipitation is relatively low and evapotranspiration is high in the spring (Deng et al., 2018;Jiang et al., 2020;Li et al., 2020). This leads to significant adverse effects on anthesis and grain filling (Li et al., 2000;Liu et al., 2014;Asif et al., 2016). As wheat production is vital to food security in China, there is an urgent need to mitigate the effects of drought stress on this crop.
As the core structural element of proteins, including enzymes and those involved in photosynthetic systems, nitrogen (N) is a key factor in crop growth and productivity (Beier et al., 2018;Evans and Clarke, 2018). N and water interact in significant ways that affect wheat growth. Water deficit impedes plant N uptake, causing a nitrogen deficiency which aggravates the damage caused by drought stress (Gonzalez-Dugo et al., 2010;Alam et al., 2020;Nazar et al., 2020). Efficacious N fertilizer management attenuates the effects of drought on wheat by maintaining normal crop physiology and scavenging reactive oxygen species (ROS) formed in response to drought stress (Agami et al., 2018;Guo et al., 2019). Wheat absorbs N in the form of ammonium (NH4+) and nitrate, or as organic N such as urea (Woolfolk et al., 2002;Carlisle et al., 2012). The latter is widely used as a nitrogen fertilizer in crop production in China (Tao et al., 2018). Previous studies have elucidated the relative effects of different forms of N fertilizer. Carlisle et al. (2012) reported that plants supplied with NH4+-N allocated comparatively more nutrients and biomass to the shoots, whereas those provided with NO3−-N distributed relatively more nutrients to the roots. NO3−-N stimulated root growth in rice with high nitrogen use efficiency (NUE). In this way, biomass accumulation and NUE were enhanced at later growth stages (Song et al., 2011). Other studies have reported that various forms of N have different effects on plant stress resistance. In rice, NH4+-N supplementation mitigates cadmium stress by inhibiting the uptake and transport of the metal. NH4+-N also increased rice and maize drought tolerance (Smiciklas and Below, 1992;Gao et al., 2009;Wu et al., 2018;Zhu et al., 2018). In contrast to the numerous studies focusing on the effects of various forms of exogenous N on root growth and nutrient element transport, research on the effects of various forms of N applied under abiotic stress on wheat grain filling is relatively sparse.
Many investigations have explored the effect of nutrients on wheat grain filling through foliar application, as leaves are more efficient at absorbing nutrients during the later stages of wheat growth compared with senescent roots (Kutman et al., 2010;Uscola et al., 2014;Visioli et al., 2018). Nevertheless, research on the effect of foliar application of various forms of nitrogen on wheat grain filling is sparse. Thus, the present study aimed to investigate the regulatory effects of foliar NO3−, NH4+, and CO(NH2)2 on wheat grain filling and starch accumulation under drought stress. For this purpose, foliar N application was conducted at anthesis, and drought stress was induced from anthesis to maturity. To elucidate the underlying mechanism, we evaluated how different N sources regulated endogenous phytohormones, the antioxidant enzyme system in flag leaves, and the expression of genes involved in grain starch biosynthesis. Results of this work could provide insight relevant to (i) the regulatory effects of different N sources on grain filling and starch accumulation; (ii) the response of starch biosynthesis, phytohormone levels, and plant senescence to different N sources; and (iii) the alleviative effect of different N sources on drought stress of wheat during the grain filling period. This research might be valuable for wheat production under climate change.
Study Site and Treatment Descriptions
A pot experiment using a split-plot design was conducted in large waterproof sheds at the experimental station of the Agricultural Crop Specimen Area of Northwest A&F University, Shaanxi Province, China (elevation: 466.7 m; mean annual temperature: 12.9 °C). Before being placed into pots, the soil was crushed and sifted to remove plant residue, thereby preventing soil hardening and facilitating nutrient absorption. Each pot had a diameter and height of 24.25 and 26.4 cm, respectively, and was filled with 25 kg of soil. The readily available levels of N, P, and K were 51.23, 20.01, and 105.37 mg kg−1, respectively. A total of 240 pots were used in the experiment. Before planting, 3.89 g of urea and 1.29 g of monopotassium phosphate were applied to the soil in each pot. The wheat (Triticum aestivum L.) cultivar Xinong979, a cultivar currently grown in the Huanghuai wheat production region of China, was planted in each pot on October 9, 2015. The seeds were pre-washed with 3% (v/v) H2O2 and soaked in water for 24 h. A total of eight treatment regimens were applied, each comprising 30 pots with 15 seeds placed near the center of each pot. The soil was irrigated until flowering in order to maintain normal water potential in the 15-cm soil layer (−20 ± 5 kPa).
From anthesis to maturity, two soil moisture levels were set up by controlling the irrigation rates and volumes. For the well-watered (WW) and soil-dried (SD) treatments, the water potentials at the 15-cm soil layer were maintained at −20 ± 5 and −60 ± 5 kPa, respectively. Water potential was monitored and recorded between 11:00 and 12:00 daily using tensiometers (SWP-100; Soil Science Research Institute, China Academy of Sciences, Nanjing, China) installed in each pot. Under each treatment at anthesis, aqueous solutions of urea [CO(NH 2 ) 2 ], NaNO 3 (NO 3 − ), or NH 4 Cl (NH 4 + ) were sprayed on the leaves at the rate of 750 kg ha −1 for 3 d, equivalent to 3.46 g pot −1 based on the pot's surface area. The concentrations of urea, NaNO 3 , and NH 4 Cl were 3.0, 8.5, and 5.3%, respectively. These concentrations ensured that the total N content was equal for all three types of N fertilizer; 0.01% (v/v) Tween-20 was added to each solution. Equal volumes of deionized water were applied to the control plants (CK1, CK2) under the well-watered (WW) and soil-dried (SD) treatments, respectively.
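As a quick consistency check on the statement that the three spray solutions deliver the same total N, the mass fraction of N in each compound can be multiplied by the stated solution concentration. A minimal sketch of this arithmetic (standard molar masses, concentrations from the text) is:

```python
# Check that 3.0% urea, 8.5% NaNO3 and 5.3% NH4Cl supply (nearly) the same total N.
# Molar masses (g/mol) are standard values; solution concentrations are from the text.
compounds = {
    "CO(NH2)2": {"conc_pct": 3.0, "molar_mass": 60.06, "n_atoms": 2},
    "NaNO3":    {"conc_pct": 8.5, "molar_mass": 84.99, "n_atoms": 1},
    "NH4Cl":    {"conc_pct": 5.3, "molar_mass": 53.49, "n_atoms": 1},
}

for name, p in compounds.items():
    n_fraction = p["n_atoms"] * 14.007 / p["molar_mass"]   # mass fraction of N in the compound
    n_in_solution = p["conc_pct"] * n_fraction              # % (w/v) of N in the spray solution
    print(f"{name:9s}: {n_in_solution:.2f} % N")
# All three come out at ~1.4 % N, i.e. the same total N per volume sprayed.
```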
Sampling and Measurement
Wheat spikes that flowered on the same day were labeled for sampling. Twenty spikes were sampled at 4-d intervals from anthesis to maturity, and all of the grains on each spike were removed. Half of the grain samples were frozen in liquid N; of these, the samples used to measure phytohormone levels were stored at −40 °C, and the samples used to quantify the expression levels of the genes were stored at −80 °C. The other half of the grain samples was dried to constant weight at 70 °C and used to determine grain weight and the sucrose and starch contents. On the same days, 20 flag leaves were sampled from each pot, frozen in liquid nitrogen, stored at −40 °C, and used to measure superoxide dismutase (SOD, EC 1.15.1.1), peroxidase (POD, EC 1.11.1.7), and catalase (CAT, EC 1.11.1.6) activities and malondialdehyde (MDA) content.
Grain-Filling Process
The grain filling process was fitted to Richards's (1959) growth equation:

W = A / (1 + Be^(−kt))^(1/N)    (1)

The grain filling rate (G) was calculated as its time derivative:

G = dW/dt = AkBe^(−kt) / [N(1 + Be^(−kt))^((N+1)/N)]    (2)

where W is the grain weight (mg); A is the final weight (mg); t is the time after anthesis (d); and B, k, and N are coefficients determined by regression.
The active grain filling period is that in which W is between 5% (t 1 ) and 95% (t 2 ) of A. The average grain filling rate during this period was calculated for the interval between t 1 and t 2 .
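For readers who want to reproduce the fitted parameters (A, B, k, N) and the derived quantities (maximum rate, mean rate, and active filling period), a minimal sketch of the Richards fit using standard tools is given below; the data arrays and starting values are illustrative placeholders, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, B, k, N):
    """Richards growth equation: grain weight W as a function of days after anthesis t."""
    return A / (1.0 + B * np.exp(-k * t)) ** (1.0 / N)

# Illustrative data: days after anthesis and mean grain dry weight (mg); replace with measured values.
t_obs = np.array([4, 8, 12, 16, 20, 24, 28, 32], dtype=float)
w_obs = np.array([3.1, 8.5, 17.2, 27.8, 35.6, 40.1, 42.3, 43.0])

popt, _ = curve_fit(richards, t_obs, w_obs, p0=[45.0, 20.0, 0.25, 1.0],
                    bounds=([1.0, 0.01, 0.01, 0.1], [100.0, 500.0, 2.0, 10.0]))
A, B, k, N = popt

# Grain-filling rate G = dW/dt, evaluated on a fine grid.
t = np.linspace(0, 40, 2001)
G = (A * k * B * np.exp(-k * t)) / (N * (1.0 + B * np.exp(-k * t)) ** ((N + 1.0) / N))

# Active grain-filling period: W between 5% (t1) and 95% (t2) of A.
w = richards(t, *popt)
t1 = t[np.argmin(np.abs(w - 0.05 * A))]
t2 = t[np.argmin(np.abs(w - 0.95 * A))]
g_mean = (0.95 * A - 0.05 * A) / (t2 - t1)   # average filling rate over the active period
print(f"A = {A:.1f} mg, Gmax = {G.max():.2f} mg/d, D = {t2 - t1:.1f} d, Gmean = {g_mean:.2f} mg/d")
```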
Sucrose, Amylose, and Amylopectin Content
Dried grain samples were pulverized for analysis of sucrose, amylose, and amylopectin levels. Then, 0.2 g of the powder was extracted with 6 mL of 80% (v/v) ethanol for 30 min in a water bath at 80 °C. The suspension was centrifuged at 5,000 × g for 10 min at 25 °C, and the supernatant was collected. The extraction was performed in triplicate. The three supernatants were pooled and diluted with 80% (v/v) ethanol to a volume of 25 mL for sucrose measurement. The sucrose content was determined by the resorcinol method: absorbance was read at 480 nm, and the sucrose content was interpolated from a standard curve. Sample powder (0.1 g) was stirred in a water bath with 10 mL of 0.5 M KOH for 30 min at 90 °C. After the solution was diluted to a volume of 50 mL with distilled water, 2.5 mL was transferred to a fresh tube containing 20 mL distilled water. The pH was adjusted to 3.5 with 0.1 M HCl; 0.5 mL I 2 -KI reagent was added, and the solution was diluted to 50 mL with distilled water. After 20 min, the absorbance of the solution was measured at wavelengths of 631, 480, 554, and 754 nm. The amylose and amylopectin levels in the wheat grains were determined according to the method described by Jiang et al. (2003). The total starch content was calculated as the sum of the amylose and amylopectin contents.
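The sucrose determination ultimately reduces to reading concentrations off a linear standard curve. A minimal sketch of that calibration and back-calculation step is shown below; the standard-curve values are placeholders and the helper sucrose_content() is introduced only for illustration.

```python
import numpy as np

# Standard curve: absorbance at 480 nm for known sucrose standards (placeholder values).
std_conc = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # microgram/mL sucrose
std_abs  = np.array([0.02, 0.11, 0.21, 0.30, 0.41, 0.50])

slope, intercept = np.polyfit(std_abs, std_conc, 1)          # linear calibration

def sucrose_content(a480, extract_volume_ml, sample_mass_g, dilution=1.0):
    """Return sucrose content in mg per g sample from an absorbance reading at 480 nm."""
    conc = (slope * a480 + intercept) * dilution              # microgram/mL in the read cuvette
    return conc * extract_volume_ml / 1000.0 / sample_mass_g  # mg per g sample

print(f"{sucrose_content(0.27, extract_volume_ml=25.0, sample_mass_g=0.2):.2f} mg/g")
```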
Phytohormones
Extraction and purification of Z+ZR and ABA were carried out according to previously described methods. Samples of ∼0.5 g were ground in a mortar on ice and homogenized with 5 mL of 80% (v/v) methanol containing 1 mM butylated hydroxytoluene (BHT) as an antioxidant. The extracts were incubated for 4 h at 4 °C and centrifuged at 10,000 × g and 4 °C for 15 min. The supernatants were passed through a Chromosep C18 column (C18 Sep-Pak Cartridge; Waters Corp, Milford, MA, United States) in order to purify the extracts. The fractions were vacuum-dried at 40 °C and dissolved in 1 mL phosphate-buffered saline (PBS) containing 0.1% (w/v) gelatin (pH 7.5) and 0.1% (v/v) Tween 20 for an enzyme-linked immunosorbent assay (ELISA; Phytohormones Research Institute, China Agricultural University, Beijing, China). Levels of Z+ZR and ABA were measured by ELISA using previously described methods.
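ELISA readouts are commonly converted to hormone concentrations via a four-parameter logistic (4PL) standard curve. The sketch below assumes this conventional form (the kit's prescribed analysis may differ) and uses placeholder standards, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: absorbance as a function of standard concentration x."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Placeholder ELISA standards (ng/mL) and their mean absorbances.
std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
std_abs  = np.array([1.85, 1.60, 1.20, 0.78, 0.42, 0.20])

popt, _ = curve_fit(four_pl, std_conc, std_abs, p0=[2.0, 1.0, 2.0, 0.1], maxfev=10000)

def conc_from_abs(y, a, b, c, d):
    """Invert the 4PL curve to get a concentration from a sample absorbance."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(f"ABA in sample: {conc_from_abs(0.95, *popt):.2f} ng/mL (before dilution correction)")
```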
Ethylene (ETH) evolution from wheat grains was measured as described previously. The ETH was assayed by gas chromatography (GC) (Trace GC Ultra TM ; Thermo Fisher Scientific, Waltham, MA, United States) according to a previous study (Lv X. K. et al., 2017).
SOD, POD, and CAT Activities and MDA Content in Flag Leaves
One half gram of fresh flag leaves was ground in a mortar filled with 5 mL extraction buffer comprising 100 mM potassium phosphate buffer (pH 7.0), 1% (w/v) polyvinylpyrrolidone (PVPP), and 1 mM ethylenediaminetetraacetic acid (EDTA). After centrifugation at 20,000 rpm and 4 • C for 20 min, the supernatant was collected for antioxidant enzyme analysis. The activity of superoxide dismutase (SOD; EC 1.15.1.1) was assayed by the inhibition of nitro blue tetrazolium (NBT) photoreduction as previously described (Wang et al., 2021). The optical density of the product was measured at 560 nm. One unit of SOD corresponded to the amount of enzyme inhibiting 50% of the NBT photoreduction. Catalase (CAT; EC 1.11.1.6) activity was determined from H 2 O 2 decomposition over a 3-min interval. Absorbance of the product was measured at 240 nm as previously reported (Wang et al., 2021). Guaiacol peroxidase (POD; EC 1.11.1.7) activity was evaluated by guaiacol oxidation. Absorbance of the product was read at 470 nm according to the method described by Wang and Huang (2000).
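Given the stated unit definition (one unit inhibits 50% of NBT photoreduction), SOD activity per gram fresh weight is usually computed from the inhibition relative to an enzyme-free control. A minimal sketch, with assumed aliquot and extract volumes, is:

```python
def sod_activity(a560_control, a560_sample, enzyme_volume_ml,
                 total_extract_ml=5.0, fresh_weight_g=0.5):
    """SOD activity (U per g fresh weight); one unit = 50% inhibition of NBT photoreduction."""
    inhibition = (a560_control - a560_sample) / a560_control
    units_in_assay = inhibition / 0.5                        # units present in the assayed aliquot
    return units_in_assay * total_extract_ml / enzyme_volume_ml / fresh_weight_g

# Example: control (no enzyme) A560 = 0.62, a 0.05 mL enzyme aliquot gives A560 = 0.40.
print(f"SOD = {sod_activity(0.62, 0.40, enzyme_volume_ml=0.05):.1f} U/g FW")
```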
Based on a previous study (Lv X. K. et al., 2017), 0.5 g of leaf tissue was homogenized in 5 mL of 0.1% (w/v) trichloroacetic acid (TCA) and centrifuged at 20,000 × g and 4 °C for 20 min. The MDA content was then determined following the method of Wang and Huang (2000).
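The equation referred to as Eq. (3) could not be recovered from the extracted text. The sketch below therefore uses a commonly cited formula for the thiobarbituric-acid MDA assay (turbidity corrected at 600 nm, soluble sugars at 450 nm) purely as an assumption, not necessarily the authors' exact expression.

```python
def mda_content(a532, a600, a450, extract_volume_ml=5.0, fresh_weight_g=0.5):
    """MDA content in micromol per g fresh weight.

    Uses the commonly cited TBA-assay relation (assumed form, not verified from the paper):
    c(MDA, micromol/L) = 6.45*(A532 - A600) - 0.56*A450
    """
    c_umol_per_l = 6.45 * (a532 - a600) - 0.56 * a450
    return c_umol_per_l * (extract_volume_ml / 1000.0) / fresh_weight_g

print(f"MDA = {mda_content(0.45, 0.05, 0.20):.3f} micromol/g FW")
```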
RT-PCR of the Genes Encoding the Enzymes Involved in Starch Synthesis
A reverse-transcriptase polymerase chain reaction (RT-PCR) was performed to evaluate the relative expression levels of the genes encoding the enzymes participating in starch biosynthesis, including ADP-glucose pyrophosphorylase (AGPP), granule-bound starch synthase (GBSS), soluble starch synthase (SSS), and starch branching enzyme (SBE). Total RNA from wheat grain was isolated with an E.Z.N.A. plant RNA kit (Omega Bio-Tek Inc., Norcross, GA, United States) according to the manufacturer's instructions. RNA concentration and quality were measured with a NanoDrop TM 2000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, United States). RNA was reverse transcribed with a PrimeScript TM RT reagent kit (TaKaRa Bio Inc., Shiga, Japan). The conserved regions of the gene sequences of AGPP-L, GBSS-I, SSS-I, SSS-II, SSS-III, SBE-I, SBE-IIa, and SBE-IIb were obtained from wheat and used to design primers for the detection of gene expression in wheat grain. The gene-specific primers and the base pair (bp) sizes of the fragments generated are listed in Supplementary Table 1. The transcript levels of the selected genes were measured by real-time PCR (QuantStudio 3; Applied Biosystems, Foster City, CA, United States) using a two-step method and a SYBR premix Ex Taq II kit (TaKaRa Bio Inc., Shiga, Japan). Each reaction consisted of 25 µL SYBR premix Ex Taq TM (2X), 4 µL diluted cDNA, 2 µL forward primer, 2 µL reverse primer, 1 µL Rox Reference Dye II, and 16 µL ddH 2 O in a total volume of 50 µL. The relative transcription levels of the starch synthesis-related enzyme genes were calculated using the 2 −∆∆Ct method (Han et al., 2006). Wheat β-actin (GenBank Accession No. AB181991) was used as the internal control. The data were analyzed for variance in SPSS v. 16.0 for Windows (IBM Corp., Armonk, NY, United States). Means were compared by the least significant difference method. P < 0.05 was considered statistically significant.
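The 2^−ΔΔCt normalization against β-actin and a calibrator sample amounts to a few lines of arithmetic; a minimal sketch with placeholder Ct values is:

```python
def relative_expression(ct_target, ct_actin, ct_target_cal, ct_actin_cal):
    """Relative expression by the 2^(-ddCt) method, normalized to beta-actin
    and to a calibrator sample (e.g. the well-watered control, CK1)."""
    d_ct_sample = ct_target - ct_actin            # dCt of the treated sample
    d_ct_cal = ct_target_cal - ct_actin_cal       # dCt of the calibrator
    return 2.0 ** (-(d_ct_sample - d_ct_cal))

# Placeholder Ct values: a starch-synthesis gene in an NO3-treated sample vs. the CK1 calibrator.
print(f"Fold change = {relative_expression(22.1, 18.4, 23.6, 18.5):.2f}")
```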
Grain Filling
Under the WW treatment, foliar application of all N sources enhanced grain filling and increased the final grain weight. However, the various foliar N applications had different effects on the grain filling characteristics (Figure 1 and Table 1). Compared with CK1, the CO(NH 2 ) 2 application significantly increased the maximum and mean grain filling rates and extended the grain filling period. Application of NO 3 − -N did not influence the duration of the grain filling period, but resulted in the highest maximum and mean grain filling rates; the final grain weight under NO 3 − -N was similar to that under the CO(NH 2 ) 2 application. Application of NH 4 + -N did not affect the maximum or mean grain filling rates, but prolonged the active grain filling period compared with CK1 and NO 3 − -N. The final grain weight under the NH 4 + -N application was greater than that of CK1. The effect of NH 4 + -N on the maximum and mean grain filling rates was similar to that obtained with CO(NH 2 ) 2 .
The soil-drying treatment markedly inhibited grain filling and resulted in a loss of grain weight (Figure 1 and Table 1). Under drought stress, the NO 3 − -N treatment achieved the highest maximum and mean grain filling rates and the highest final grain weight. The CO(NH 2 ) 2 treatment increased the maximum and mean grain filling rates and extended the active grain filling period, though not to a significant extent; the final grain weight under CO(NH 2 ) 2 was lower than that under the NO 3 − -N treatment. Only the NH 4 + -N treatment extended the active grain filling period compared with the other treatments and the control, but the increase in grain weight under the NH 4 + -N treatment was smaller than under the CO(NH 2 ) 2 and NO 3 − -N treatments. Moreover, under drought stress the NH 4 + -N treatment decreased the mean grain filling rate and had no significant effect on the maximum grain filling rate compared with the control (CK2).
Sucrose, Amylose, Amylopectin, and Total Starch Content
Under the WW treatment, sucrose levels increased in the early grain filling stages and peaked at 8 d post-anthesis (Figure 2). Compared with CK1, the foliar CO(NH 2 ) 2 and NO 3 − -N applications enhanced sucrose accumulation during the early grain filling stage and reduced the sucrose content at the late grain filling stage. At 8 d post anthesis, the sucrose content in grains under the NO 3 − -N application was the highest. The sucrose content under the NO 3 − -N application was lower than that under the CO(NH 2 ) 2 application from 12 d to 24 d post anthesis. The sucrose content under the NH 4 + -N treatment was higher than that under the CO(NH 2 ) 2 and NO 3 − -N treatments from 16 d post anthesis.
FIGURE 1 | Effect of N forms in foliar fertilizer on grain weights [(A) well-watered treatment, WW; (B) soil-dried treatment, SD] and grain filling rates [(C) well-watered treatment, WW; (D) soil-dried treatment, SD]. Vertical bars represent ± the standard deviation of the mean (n = 3). CO(NH 2 ) 2 , NO 3 − , and NH 4 + represent CO(NH 2 ) 2 , NaNO 3 , and NH 4 Cl, whose concentrations are described in the Materials and Methods section. CK1 and CK2 mean that deionized water was sprayed on leaves at anthesis under the well-watered and soil-dried treatments, respectively.
The trends in grain sucrose content were similar for WW and SD, whereas the sucrose content in grains approached the final level earlier under SD. The effects of the different N forms on the changes in sucrose content under SD were similar to those under WW, while the difference between the foliar CO(NH 2 ) 2 and NO 3 − -N applications was smaller; there was no significant difference between the foliar CO(NH 2 ) 2 and NO 3 − -N applications from 16 d to 28 d post anthesis. Compared with CK2, the NH 4 + -N treatment did not change the sucrose levels significantly from 20 d to 28 d post anthesis.
Total starch, amylose, and amylopectin accumulation increased during grain filling (Figure 3). Starch and amylopectin levels were significantly lower under the SD treatment than the WW treatment; no significant differences were found in amylose levels (Figures 3A,B). Amylopectin was the main component of starch in grains. In this experiment amylopectin and total starch had similar trends. The effects of foliar N application on total starch and amylopectin content resembled those observed for grain filling (Figures 3B-F). Under WW, the total starch and amylopectin levels in grains sprayed with CO(NH 2 ) 2 were lower than those in plants sprayed with NO 3 − -N at the early and middle grain filling stages. At the late grain filling stage, however, total starch and amylopectin were higher in CO(NH 2 ) 2 -sprayed plants than they were in NO 3 − -N-treated plants. Under SD, the total starch and amylopectin levels were highest in the plants sprayed with NO 3 − -N.
Changes in Phytohormone Level
The rate of ETH evolution steadily declined in grains during the filling stage (Figures 4A,B). The SD treatment significantly enhanced ETH evolution even at the late grain filling stage.
TABLE 1 notes: Values within a column and for the same soil moisture followed by the same letters are not significantly different at P < 0.05. WW means well-watered treatment and SD means soil-dried treatment. CK1 and CK2 mean that deionized water was sprayed on leaves at anthesis under the well-watered and soil-dried treatments, respectively. CO(NH 2 ) 2 , NO 3 − , and NH 4 + represent CO(NH 2 ) 2 , NaNO 3 , and NH 4 Cl, whose concentrations are described in the Materials and Methods section. Wmax, Gmax, Gmean, and D denote the final grain weight, the maximum grain-filling rate, the mean grain-filling rate, and the active grain-filling period, respectively.
The relative effects of the various foliar N applications on ETH evolution were similar under both water regimes. However, the ETH evolution rates were significantly lower for the NO 3 − -N and CO(NH 2 ) 2 treatments than the others. At the early and middle grain filling stages, the NH 4 + -N treatment had no significant influence on ETH evolution. At the late grain filling stage, addition of NH 4 + -N decreased ETH evolution compared to CK. Changes in grain ABA and Z+ZR content followed similar trends and peaked by the middle grain filling stage. Drought stress decreased ABA and Z+ZR in grains. Under CK2, ABA and Z+ZR were lowered by 18.38 and 43.61%, respectively, compared to CK1 at the middle grain filling stage (Figures 4C-F). Foliar NH 4 + -N fertilizer did not significantly influence the ABA and Z+ZR levels at the early and late grain filling stages except for Z+ZR under the SD treatment. The NO 3 − -N and CO(NH 2 ) 2 treatments significantly increased ABA and Z+ZR except at the late grain filling stage. Levels of ABA and Z+ZR were relatively higher under the SD treatment at the middle grain filling stage.
Relative Expression of Genes Encoding Enzymes Involved in Starch Synthesis
Analysis of the relative expression levels of the genes encoding enzymes involved in starch synthesis revealed that drought stress downregulated nearly all of these genes and blocked starch biosynthesis (Figures 5, 6). Foliar application of the various N fertilizers upregulated the AGPP-L and GBSSI genes. However, their expression levels did not significantly differ from each other except for AGPP-L under the WW treatment during the middle grain filling stage (Figures 5A,B). The SSSI and SSSII genes, both of which encode SSS, were expressed at low levels during the early and late grain filling stages (Figures 5C-F). At the middle grain filling stage, the foliar NO 3 − -N and CO(NH 2 ) 2 treatments upregulated SSSI and SSSII; treatment with NO 3 − -N was comparatively more effective at raising the expression of these genes than the other N forms. Expression of SSSIII was maximal during the middle grain filling stage. The effects of the various forms of N on relative SSSIII expression were similar to those for SSSI and SSSII (Figures 6A,B).
Changes in expression level differed among the three genes encoding SBE. The relative expression of SBEI continued to increase over the whole grain filling period, whereas SBEIIa and SBEIIb were expressed at their maximum levels during the middle grain filling stage and declined thereafter. The NH 4 + -N treatment did not significantly affect the relative SBEI expression, but upregulated SBEIIa and SBEIIb during the middle grain filling stage (Figures 6C,D). The expression levels of SBEIIa and SBEIIb under the NO 3 − -N and CO(NH 2 ) 2 treatments were higher than those under CK. Maximum expression of these genes occurred during the middle grain filling period.
FIGURE 2 | Effect of N forms in foliar fertilizer on the sucrose content in the wheat grains [(A) well-watered treatment, WW; (B) soil-dried treatment, SD]. Vertical bars represent ± the standard deviation of the mean (n = 3). CO(NH 2 ) 2 , NO 3 − , and NH 4 + represent CO(NH 2 ) 2 , NaNO 3 , and NH 4 Cl, whose concentrations are described in the Materials and Methods section. CK1 and CK2 mean that deionized water was sprayed on leaves at anthesis under the well-watered and soil-dried treatments, respectively.
Antioxidant Enzymes and MDA
During the grain filling stage, the CAT activity in the flag leaves steadily decreased (Figures 7A,B). In contrast, the POD and SOD activity levels in the flag leaves increased until the middle grain filling stage, peaking at 12 d and 8 d post-anthesis, respectively, and decreased thereafter (Figures 7C-F). Drought stress decreased the activities of CAT, POD, and SOD. All three forms of N application had positive effects on the activities of the antioxidant enzymes, while the NO 3 − -N treatment had relatively less influence on these enzymes during most of the grain filling process. The effects of the CO(NH 2 ) 2 treatment on these antioxidant enzymes were slightly stronger than those under the NO 3 − -N treatment. The activities of CAT, POD, and SOD were the highest under the NH 4 + -N treatment during most of the grain filling period.
The MDA content of the flag leaves continuously increased during the grain filling period (Figures 7G,H). Drought stress promoted MDA accumulation, whereas foliar N application decreased the MDA content. The differences among the three N forms were significant and were amplified under SD. The MDA levels in the plants treated with foliar NH 4 + -N were the lowest among all treatments throughout grain filling, and the MDA levels under the CO(NH 2 ) 2 treatment were substantially lower than those under the NO 3 − -N treatment during most of the grain filling period.
DISCUSSION
As an economical and efficient method widely practiced in China, foliar application of fertilizer can supply nutrients to crops more quickly than soil application, especially during the late growth period when root activity keeps decreasing (Fageria et al., 2009;Zhang et al., 2010). Previous studies showed that foliar application of urea is beneficial for wheat growth (Blandino et al., 2015;Wang et al., 2021). We thus used this method of fertilization to determine the specific effects of various forms of N on grain filling under well-watered and drought stress conditions. In addition to damaging roots and above-ground biomass (Djanaguiraman et al., 2018), severe soil drought also impedes the grain filling process, thereby adversely affecting grain yield (Barnabas et al., 2008). In line with most previous reports (Farooq et al., 2014;Liu et al., 2016), our results showed that soil drought stress sharply decreased the final grain weight. One study that produced contradictory results (Agami et al., 2018) found that the grain filling capacity increased under water deficit treatment. One possible reason is that only moderate drought stress, which is known to contribute to increased grain weight if properly controlled, was induced in that study.
Grain filling rate and duration both contribute to the final grain weight in wheat. All types of N fertilizer tested in this study significantly increased grain weight relative to CK. Based on our observation that these treatments had different effects on the grain filling characteristics, we posit that grain weight was increased via a different mechanism for each. Although there is limited information available on the effects of various forms of N fertilizer on grain filling, our results indicate that foliar NO 3 − -N application may improve grain filling and alleviate drought stress damage by accelerating the maximum and mean grain filling rates, whereas foliar NH 4 + -N application may extend the grain filling period. Foliar CO(NH 2 ) 2 application may have yielded the largest final grain weight under well-watered conditions by affecting the grain filling rate and period in a coordinated way. However, this coordination appears to be weakened by drought, leading to a lower grain weight for the foliar CO(NH 2 ) 2 application than for NO 3 − -N. This observation is supported by studies reporting that the benefits of an accelerated grain filling rate outweighed those of an extended grain filling period for final grain weight. This explanation is also consistent with the lower grain weight produced under NH 4 + -N than under the other two forms of N.
Comprising > 65% of the grain weight, starch is the main determinant of both grain weight and yield (Kumar et al., 2018). Biosynthesis of starch from sucrose is the main contributor to grain filling (Yang et al., 2004a;Wang Z. et al., 2014). Our results concur with previous studies, which report that the starch content and grain weight vary proportionally during grain filling (Dai et al., 2009;Zi et al., 2018). Earlier studies have also reported substantial interactions between N and carbon content during crop production (Henriksen and Breland, 1999;Cheng et al., 2010;Ko et al., 2010;Zi et al., 2018). These findings allow us to reasonably conclude that N application, particularly NO 3 − -N and CO(NH 2 ) 2, positively affected the total starch content in this study. Because none of the foliar N fertilizers significantly influenced the amylose content, we conclude that all three types of fertilizer promoted starch biosynthesis by increasing the amylopectin content. Our finding that grain sucrose levels were similar under both water conditions appears to contradict the observation that the SD treatment severely inhibited grain filling and starch accumulation. Based on pertinent sources (Ahmadi and Baker, 2001;Yang et al., 2014;Xu et al., 2016), we speculate that the sucrose content did not decrease under the SD treatment because severe drought stress hinders grain filling, decreases grain phytohormone levels, represses starch biosynthesis, and causes unconverted sucrose to accumulate.
To establish the effects of N fertilizer type on the expression levels of the genes encoding the enzymes involved in starch biosynthesis, we selected genes known to be regulated in wheat endosperm during grain filling (Yang et al., 2004b;Tetlow, 2006;Dai et al., 2008;Zhao et al., 2008;Cao et al., 2012;Kumar et al., 2018). The genes AGPP-L, GBSSI, SSSI, SSSII, SSSIII, SBEI, SBEIIa, and SBEIIb have various functions in starch biosynthesis and are expressed at different levels throughout grain filling. As the substrate for the formation of starch, ADP-glucose is synthesized by the action of the AGPP enzyme encoded by AGPP-L in wheat (Jin et al., 2018). The gene GBSSI is reported to encode GBSS, which is involved in the formation of amylose, while SSSI, SSSII, SSSIII, SBEI, SBEIIa, and SBEIIb all play active roles in amylopectin synthesis by encoding SSS and SBE in wheat grain, respectively (Sarka and Dvoracek, 2017;Tetlow and Emes, 2017). Our results show that these genes were markedly upregulated by the N fertilizer treatments at specific periods of grain filling, although the expression levels of SSSIII and SBEI did not significantly differ between the NH 4 + -N treatment and the control. The relative changes in the expression levels of these genes were consistent with the observed changes in amylose, amylopectin, and total starch content. We surmise that the various types of N application may improve starch synthesis by regulating the expression of the relevant genes, even under drought stress.
N fertilizer application influences endogenous phytohormone levels and regulates grain filling (Yang J. C. et al., 2001;Garnica et al., 2010;Kumar et al., 2018). In the present study, foliar applications of CO(NH 2 ) 2 and NO 3 − -N significantly increased Z+ZR and ABA and decreased ETH evolution during the early and middle grain filling stages. Zhang and Yang (2004) suggested that several hormones collectively regulate grain filling and starch biosynthesis. High grain ZR levels induce endosperm cell cleavage and are positively correlated with maximum grain weight and mean grain filling rate (Zhang et al., 2009). ABA regulates grain filling and starch biosynthesis by participating in the sugar-signaling pathway and enhancing the transport of stored assimilates, while ethylene inhibits grain filling in wheat and rice by promoting premature senescence (Zhu et al., 2011;Kumar et al., 2018). Lower ETH and higher ABA levels contribute to grain filling (Lv X. K. et al., 2017). It has also been suggested that the ratio of ABA/ETH is closely related to the grain filling rate and sink capacity of wheat. We suggest that the Z+ZR induced by foliar CO(NH 2 ) 2 and NO 3 − -N may have increased grain sink capacity by inducing endosperm cell division, and that the antagonism between ABA and ETH can be attenuated by CO(NH 2 ) 2 and NO 3 − -N application so that grain sink activity is increased. Furthermore, we observed that the genes upregulated by NO 3 − -N and CO(NH 2 ) 2 were positively correlated with ABA and ZR content and negatively correlated with the ETH evolution rate (Table 2). Previous studies have reported correlations between phytohormone levels and starch biosynthesis: ABA and other endogenous plant growth regulators control starch biosynthesis in wheat grains (Xie et al., 2003;Wang Z. et al., 2015;Xu et al., 2016;Kumar et al., 2018). Our results and previous reports indicate that, under the foliar NO 3 − -N and CO(NH 2 ) 2 treatments, the expression of genes encoding enzymes involved in starch synthesis can be regulated by endogenous phytohormone balances. This promotes wheat grain filling, and the underlying mechanism may be the activation of carbohydrate transport, endosperm cell cleavage, and starch biosynthesis signaling (Xie et al., 2003;Albacete et al., 2014).
Our research demonstrated that the duration of the grain filling period and the grain weight are positively correlated with the activities of SOD, POD, and CAT, and negatively correlated with the MDA content (R = 0.9403**, 0.9153**, 0.9447**, and -0.8359**, respectively). The duration of grain filling is closely related to plant senescence (Zhao et al., 2007). Plant senescence is a programmed process occurring late in the wheat growth period. It is hastened by abiotic stress, which induces the accumulation of reactive oxygen species (ROS) and MDA and consequently shortens the duration of the grain filling period (Wang et al., 2013). ROS toxicity can be neutralized by antioxidant enzymes such as SOD, POD, and CAT, which protect nucleic acids, membrane proteins, and lipids in cells and retard senescence in plants (Gregersen et al., 2008;Zhao et al., 2011). Our results indicate that foliar N application (especially NH 4 + -N and CO(NH 2 ) 2 ) can reduce ROS accumulation and protect cells by upregulating the activities of SOD, POD, and CAT and lowering the MDA content in the flag leaves. When plants senesce, chlorophyll is lost, foliar photosynthesis declines, and structural chemicals degrade (Wang et al., 2019). At the same time, carbohydrate remobilization from the source (leaves) to the sink (grains) increases (Gregersen et al., 2013). Based on our observations, we conclude that drought stress induces premature senescence in plants by severely impairing biological activity through a restricted photosynthate supply, accelerated carbohydrate remobilization to the grains, and a shortened grain filling period. We suggest that foliar N application (especially NH 4 + -N) at anthesis retards senescence by increasing the photosynthate supply and delaying carbohydrate transformation. No significant differences were found among the N applications (except NH 4 + -N) and the control in terms of the grain filling period under drought stress, although the N treatments attenuated senescence. When wheat is subjected to water stress, the grain yield potential becomes limited, so that remobilization of nutrients reserved in stems and leaves is critical to increasing grain yield. We suggest that, when severe drought stress restrains the photosynthate supply after anthesis, the NO 3 − -N and CO(NH 2 ) 2 treatments promote remobilization of carbohydrates from source organs to grains for starch synthesis, which increases the grain filling rate and thereby accelerates plant senescence and shortens the grain filling period.
Overall, we found that drought stress severely suppresses wheat growth, restricts grain filling, and induces premature senescence (Djanaguiraman et al., 2018). Based on the foregoing analysis of grain phytohormones, starch biosynthesis, and leaf senescence, we propose that all three forms of foliar N fertilizer application mitigate drought stress-induced grain filling damage; however, the specific mechanisms involved differ for each N source. As drought stress seriously limits the source supply by impairing photosynthetic capacity, carbohydrates reserved before anthesis must be rapidly transported to the grain. The gains from enhanced remobilization of the "reserve pool" and the accelerated grain filling rate may outweigh the reductions in photosynthesis and grain filling period duration, resulting in a net increase in grain yield (Schnyder, 1993). This is supported by our observation that foliar application of NO 3 − -N produced the largest grain weight under drought stress. The differences in the regulatory effects of the three foliar N fertilizers may be explained by their different functional characteristics and metabolic pathways. It is known that nitrate and ammonium are absorbed into plant cells by transporters of the NRT and AMT families, respectively. Nitrate and urea both must be converted to ammonia in order to be assimilated into amino acids (Andrews et al., 2013). According to Patterson et al. (2010), ∼40% of N-regulated genes are differentially expressed by either nitrate or ammonium in Arabidopsis thaliana plants. Nitrate itself can serve as a signal that regulates the response of cytokinins and the metabolism of N and carbon at the mRNA and protein levels (Yoneyama and Suzuki, 2018;Mu and Luo, 2019). The ammonium-specific pattern of gene expression plays key roles in modulating extracellular acidification and downstream metabolites in the pathway of ammonium assimilation (Patterson et al., 2010). Omics technologies may be useful tools for further elucidating the regulatory pathways by which various N sources affect grain filling, including hormone levels, starch synthesis, and plant senescence. In addition, soil fertilizer levels also strongly influence wheat grain filling (Yan et al., 2019). Future field studies are needed to evaluate the effects of various foliar N fertilizer sources in conjunction with soil water in order to develop practical wheat crop management protocols.
CONCLUSION
The present study showed that all three forms of foliar N application mitigated the deleterious effects of drought stress on wheat grain filling, although their modes of action differed. Foliar applications of CO(NH 2 ) 2 and NO 3 − -N regulated endogenous phytohormone activity in the grains, upregulated the genes involved in the sucrose-starch conversion pathway, and promoted remobilization of carbohydrates from the source organs (leaves), thereby improving sink strength and the grain filling rate. In contrast, foliar NH 4 + -N application notably upregulated the antioxidant enzyme system and delayed senescence, and hence the grain filling period and photoassimilate supply were extended. Under water deficit, the relative differences among the three forms of nitrogen fertilizer in terms of their effects on grain filling were magnified, and improvement of carbohydrate transport together with an increase in sink strength were the key factors for the increase in grain weight. Therefore, foliar applications of CO(NH 2 ) 2 and NO 3 − -N were comparatively more efficacious at enhancing grain filling and the formation of grain weight under drought stress. Along with the current research, molecular investigations are required for a better understanding of the functional characteristics and metabolic pathways of the various nitrogen sources in relation to grain filling, which will be helpful for optimizing wheat grain yield and quality under climate change.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
XW, YL, and XL conceived and designed the study. XL, YD, and ML performed the experiment. XL, WL, and XG collected and analyzed the data. XL wrote the manuscript. XW and YL revised the manuscript. All authors read and approved the final manuscript. | 2021-03-26T13:26:41.730Z | 2021-03-25T00:00:00.000 | {
"year": 2021,
"sha1": "2c9593810ff886290e3f68ae727f2a2856ac59bc",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2021.645379/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c9593810ff886290e3f68ae727f2a2856ac59bc",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256697112 | pes2o/s2orc | v3-fos-license | Superconductivity and magnetic and transport properties of single-crystalline CaK(Fe$_{1-x}$Cr$_{x}$)$_{4}$As$_{4}$
Members of the CaK(Fe$_{1-x}$Cr$_{x}$)$_{4}$As$_{4}$ series have been synthesized by high-temperature solution growth in single crystalline form and characterized by X-ray diffraction, elemental analysis, and magnetic and transport measurements. The effects of Cr substitution on the superconducting and magnetic ground states of CaKFe$_4$As$_4$ ($T_c$ = 35 K) have been studied. These measurements show that the superconducting transition temperature decreases monotonically and is finally suppressed below 1.8 K as $x$ is increased from 0 to 0.038. The magnetic transition temperature increases in a roughly linear manner as the Cr substitution increases. A temperature-composition (\textit{T}-\textit{x}) phase diagram is constructed, revealing a half-dome of superconductivity with the magnetic transition temperature, $T^*$, appearing near 22~K for $x$ $\sim$ 0.017 and rising slowly up to 60~K for $x$ $\sim$ 0.077. The $T$-$x$ phase diagrams for CaK(Fe$_{1-x}$$T$$_{x}$)$_4$As$_4$ for $T$ = Cr and Mn are essentially the same despite the nominally different band filling; this is in marked contrast to the $T$ = Co and Ni series, for which the $T$-$x$ diagrams scale by a factor of two, consistent with the different changes in band filling Co and Ni would produce when replacing Fe. Superconductivity of CaK(Fe$_{1-x}$Cr$_{x}$)$_{4}$As$_{4}$ is also studied as a function of magnetic field. A clear change in $H^\prime_{c2}$($T$)/$T_c$, where $H^\prime_{c2}$($T$) is d$H_{c2}$($T$)/d$T$, at $x$ $\sim$ 0.012 is observed and is probably related to a change of the Fermi surface due to magnetic order. Coherence lengths and London penetration depths are also calculated based on $H_{c1}$ and $H_{c2}$ data. Coherence lengths as a function of $x$ also show changes near $x$ = 0.012, again consistent with Fermi surface changes associated with the magnetic ordering seen for higher $x$-values.
On one hand, since the phase diagrams of Co and Ni substitutions of CaKFe 4 As 4 scaled almost exactly as a function of band filling change, the comparison between CaKFe 4 As 4 and Ba 0.5 K 0.5 Fe 2 As 2 based on their similar nominal electron counts seems justified. On the other hand, given that CaK(Fe 1−x T x ) 4 As 4 allows for the study of how nominal hole-doping with Mn and Cr can affect the superconducting and magnetic properties of this system, it is very important to see how their T -x phase diagrams compare with each other as well as with those for T = Co and Ni.
We have recently found that for CaK(Fe 1−x T x ) 4 As 4 , T = Mn, Mn is a far more local-moment-like impurity than T = Co or Ni. We also found that the substitution level of CaK(Fe 1−x Mn x ) 4 As 4 can only go up to x = 0.036; beyond that level, the 1144 phase is not stabilized under similar synthesis conditions. This limited the exploration of the hole-doped 1144 phase diagram and of the evolution of the h-SVC type antiferromagnetic transition. Cr offers twice the amount of nominal hole-doping per x and, like Mn, can sometimes manifest local-moment-like properties in intermetallic samples. As such, Cr substitution offers a great opportunity to further our understanding of the behavior of h-SVC type antiferromagnetism in the 1144 system.
In this paper, we detail the synthesis and characterization of CaK(Fe 1−x Cr x ) 4 As 4 single crystals. A temperature-composition (T -x) phase diagram is constructed from elemental analysis and magnetic and transport measurements. In addition to creating the T -x phase diagram, coherence lengths and London penetration depths are also calculated based on the H c1 and H c2 data obtained from these measurements. The data for Cr-substituted 1144 are added to the λ −2 versus σ T 2 c plot and compared with the Mn substitution. Finally, a temperature versus change of electron count, |∆e − |, phase diagram for CaK(Fe 1−x T x ) 4 As 4 single crystals, T = Cr, Mn, Ni and Co, is also presented and discussed. By comparing all four T = Cr, Mn, Ni and Co substitutions we find that, whereas for T = Ni and Co the CaK(Fe 1−x T x ) 4 As 4 temperature-substitution phase diagrams scale with the number of additional electrons (in much the same way that the Ba(Fe 1−x T M x ) 2 As 2 phase diagrams do for T = Ni and Co), for T = Cr and Mn the temperature-substitution phase diagrams are essentially identical when plotted more simply as T -x diagrams, suggesting that for Cr and Mn there may be other variables or mechanisms at play.
II. CRYSTAL GROWTH AND EXPERIMENTAL METHOD
Single crystalline CaK(Fe 1−x Cr x ) 4 As 4 samples were grown by high-temperature solution growth [22] out of FeAs flux in a manner similar to CaK(Fe 1−x Mn x ) 4 As 4 [19]. Lumps of potassium metal (Alfa Aesar 99.95%), distilled calcium metal pieces (Ames Laboratory, Materials Preparation Center (MPC), 99.9%) and Fe 0.512 As 0.488 and Cr 0.512 As 0.488 precursor powders were loaded into a 1.7 ml fritted alumina Canfield Crucible Set [23] (LSP Industrial Ceramics, Inc.) in an argon filled glove-box. The ratio of K:Ca:(Fe 0.512 As 0.488 + Cr 0.512 As 0.488 ) was 1.2:0.8:20. A 1.3 cm outer diameter and 6.4 cm long tantalum tube, used to protect the silica ampoule from reactive vapors, was welded shut around the crucible set under a partial argon atmosphere. The sealed Ta tube was then itself sealed into a silica ampoule and the ampoule was placed inside a box furnace. The furnace was held for 2 hours at 650 ℃ before increasing to 1180 ℃ and held there for 5 hours to make sure the precursor was fully melted. The furnace was then fast cooled from 1180 ℃ to 980 ℃ in 1.5 hours. Crystals were grown during a slow cool-down from 980 ℃ to 915 ℃ over 100-150 hours, depending on the substitution level. After 1-2 hours at 915 ℃, the ampoule was inverted into a centrifuge and spun to separate the remaining liquid from the grown crystals. Metallic, plate-like crystals were obtained. The average size and thickness decreased by a factor of 2-4 as x increased; the largest crystals are about a centimeter in size, as shown in figure 1. Single crystals of CaK(Fe 1−x Cr x ) 4 As 4 are soft and malleable, like CaKFe 4 As 4 , and are difficult to grind for powder X-ray diffraction measurements. Diffraction measurements were therefore carried out on single crystal samples, which were cleaved along the (001) plane, using a Rigaku MiniFlex II powder diffractometer in Bragg-Brentano geometry with Cu Kα radiation (λ = 1.5406 Å) [24].
The Cr substitution levels (x) of the CaK(Fe 1−x Cr x ) 4 As 4 crystals were determined by energy dispersive spectroscopy (EDS) quantitative chemical analysis using an EDS detector (Thermo NORAN Microanalysis System, model C10001) attached to a JEOL scanning-electron microscope. The compositions of platelike crystals were measured at three separate positions on each crystal's face (parallel to the crystallographic ab-plane) after cleaving them. An acceleration voltage of 16 kV, working distance of 10 mm and take off angle of 35 • were used for measuring all standards and crystals with unknown composition. Pure CaKFe 4 As 4 was used as a standard for Ca, K, Fe and As quantification. LaCrGe 3 and YCr 6 Ge 6 were used as standards for Cr, both leading to consistent results without significant difference within the experimental error (∼ 0.001). The spectra were fitted using NIST-DTSA II Microscopium 2020-06-26 software [25]. Different measurements on the same sample reveal good homogeneity in each crystal and the average compositions and error bars were obtained from these data, accounting for both inhomogeneity and goodness of fit of each spectra.
Temperature- and magnetic-field-dependent magnetization and resistance measurements were carried out using Quantum Design (QD) Magnetic Property Measurement Systems (MPMS and MPMS3) and Physical Property Measurement Systems (PPMS). Temperature- and magnetic-field-dependent magnetization measurements were taken for H || ab by placing the plate-like sample between two collapsed plastic straws, with a third, uncollapsed, straw providing support as a sheath on the outside, or by use of a quartz sample holder. The single crystal samples of CaK(Fe 1−x Cr x ) 4 As 4 measured in the MPMS and MPMS3 have plate-like morphology with length and width from 3 mm to 10 mm and thickness (c axis) of 50-200 µm. The approximate effective demagnetizing factor N ranges from 0.007 to 0.077 with the magnetic field applied parallel to the crystallographic ab plane [26]. AC electrical resistance measurements were performed in a standard four-contact geometry using the ACT option of the PPMS, with a 3 mA excitation and a frequency of 17 Hz. 50 µm diameter Pt wires were bonded to the samples with silver paint (DuPont 4929N), with contact resistance values of about 2-3 Ohms. The magnetic field, up to 90 kOe, was applied along the c or ab directions, perpendicular to the current, with the current flowing in the ab plane in both cases.
Contacts for inter-plane resistivity measurements were soldered using tin. The top and bottom surfaces of the samples were covered with Sn solder [27,28] and 50 µm silver wires were attached to enable measurements in a pseudo-four-probe configuration. Soldering produced contacts with resistance typically in the 10 µΩ range. Inter-plane resistivity was measured using a two-probe technique with currents in the 1 to 10 mA range (depending on sample resistance, which is typically 1 mΩ). A four-probe scheme was used down to the sample to measure the series-connected sample resistance, R s , and contact resistance, R c . Taking into account that R s ≫ R c , the contact resistance represents a minor correction of the order of 1 to 5%. The details of the measurement procedure can be found in Refs. [29-31]. The results of the measurements are in good agreement with similar measurements on pure CaKFe 4 As 4 [6]. Measurements with current along the c-axis suffer strongly from inter-layer connectivity issues due to the micaceous nature of the single crystals. To ascertain reproducibility, we performed measurements of ρ c on two to five samples and obtained qualitatively similar temperature dependencies of the electrical resistivity, as represented by the ratio of resistivities at room and low temperatures, ρ c (0)/ρ c (300). The resistivity ρ c (300K) was in the range 1-2 mΩcm, corresponding to an anisotropy ratio ρ c /ρ a ≈ 3 to 6 at 300 K.
III. CaK(Fe1−xCrx)4As4 STRUCTURE AND COMPOSITION

Figure 1 presents single crystal diffraction data for CaK(Fe 1−x Cr x ) 4 As 4 with x EDS = 0.077, which is the largest substitution level obtained. Attempts to grow crystals with x EDS > 0.077 failed to yield mm-sized or larger samples that could be identified as Cr-doped 1144. From the figure, we can see that all (00l), l ≤ 12, peaks are detected. The h+k+l = odd peaks, which are forbidden for the I 4/mmm structure [5], can be clearly found. This indicates that the sample has the anticipated P 4/mmm structure associated with the CaKFe 4 As 4 structure [5,6,16].
The Cr substitution level, x EDS , determined by EDS is shown in figure 2a as a function of the nominal Cr fraction, x nominal , that was originally used for the growth. Error bars account for both possible inhomogeneity of the substitution and the goodness of fit of each EDS spectrum. A clear correlation can be seen between the nominal and the measured substitution levels, with a proportionality factor of 0.47 ± 0.02.
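One reasonable way to obtain such a proportionality factor is an error-weighted straight-line fit through the origin of x_EDS versus x_nominal; a minimal sketch of that fit is given below, with placeholder compositions rather than the measured data.

```python
import numpy as np

# Placeholder (nominal, measured, uncertainty) Cr fractions; replace with the EDS results.
x_nominal = np.array([0.01, 0.025, 0.05, 0.08, 0.12, 0.16])
x_eds     = np.array([0.005, 0.012, 0.025, 0.038, 0.058, 0.077])
sigma     = np.array([0.001, 0.001, 0.002, 0.002, 0.003, 0.003])

# Error-weighted least-squares slope for a line through the origin: x_eds = s * x_nominal.
w = 1.0 / sigma**2
slope = np.sum(w * x_nominal * x_eds) / np.sum(w * x_nominal**2)
slope_err = np.sqrt(1.0 / np.sum(w * x_nominal**2))
print(f"x_EDS / x_nominal = {slope:.2f} +/- {slope_err:.2f}")
```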
For comparison, the ratios of measured to nominal Mn, Ni and Co fractions in the corresponding CaK(Fe 1−x T x ) 4 As 4 series are 0.60, 0.64 and 0.79, respectively [16,19]. From this point onward, when the substitution level x is referred to, it will be the EDS value of x. Figure 2b presents the c lattice parameter as a function of x. The c lattice parameter increases monotonically as the Cr substitution level increases, which is consistent with the larger radius of Cr compared with Fe. The c lattice parameter values are calculated from the single-crystalline plate X-ray diffraction data [24]. In Mn-1144 (CaK(Fe 1−x Mn x ) 4 As 4 ), the evolution of the c lattice parameter is difficult to determine due to the small difference in radius between Fe and Mn and the low substitution levels; the highest x in Mn-1144 is 0.036, which is smaller than 0.039, the lowest substitution level of Co-1144. Figure 3 shows the low temperature (1.8 K - 45 K), zero-field-cooled-warming (ZFCW) magnetization for CaK(Fe 1−x Cr x ) 4 As 4 single crystals for H ||ab = 50 Oe (ZFCW magnetization and field-cooled (FC) data for an x = 0.017 sample can be found in figure 19 in the Appendix). M is the volumetric magnetization in this figure and is calculated using the density of CaKFe 4 As 4 , which is determined to be 5.22 g/cm 3 from the lattice parameters at room temperature [5]. A magnetic field of 50 Oe was applied parallel to the ab plane (i.e. parallel to the surface of the plate-like crystal). The superconducting transitions (T c ) are clearly seen in this graph except for the substitution value x = 0.038. As the value of the Cr substitution, x, increases, the superconducting transition temperature decreases. For x = 0.025, full magnetic shielding is not reached by 1.8 K. Figure 4 shows the low temperature (5 K - 150 K) M (T )/H data for CaK(Fe 1−x Cr x ) 4 As 4 single crystals with a 10 kOe field applied parallel to the crystallographic ab plane. The appearance of first a Curie-Weiss tail and later a kink-like feature upon adding Cr is similar to what is observed in Mn-substituted 1144.
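The Curie-Weiss-like tail in these M(T)/H data, discussed further in the next section, can be parameterized by χ(T) = C/(T + θ) + χ0 and converted to an effective moment per Cr. A minimal sketch of such a fit, on synthetic placeholder data assumed to be normalized per mole of Cr, is:

```python
import numpy as np
from scipy.optimize import curve_fit

def chi_model(T, C, theta, chi0):
    """Curie-Weiss term plus a temperature-independent background: chi = C/(T+theta) + chi0."""
    return C / (T + theta) + chi0

# Synthetic placeholder M/H data (emu per mol-Cr) above the magnetic and superconducting transitions.
rng = np.random.default_rng(0)
T = np.linspace(80.0, 300.0, 50)
chi = chi_model(T, C=2.0, theta=25.0, chi0=4e-4) + rng.normal(0, 5e-6, T.size)

(C, theta, chi0), _ = curve_fit(chi_model, T, chi, p0=[1.0, 10.0, 1e-4])

# Effective moment per Cr from the Curie constant (C in emu K / mol-Cr): mu_eff ~ sqrt(8C) mu_B.
mu_eff = np.sqrt(8.0 * C)
print(f"C = {C:.2f} emu K/mol-Cr, theta = {theta:.1f} K, mu_eff = {mu_eff:.2f} mu_B per Cr")
```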
IV. DATA ANALYSIS AND PHASE DIAGRAM
Kink-like features are found above 20 K for x > 0.012, which very likely indicate an antiferromagnetic transition. Similar kink-like features were correlated with AFM order in CaK(Fe 1−x Mn x ) 4 As 4 [19,32]. The inset shows M (T )/H of a CaK(Fe 0.983 Cr 0.017 ) 4 As 4 single crystal over a wider temperature range. As Cr is added, the Curie-tail-like feature grows. The M (T ) data above the transitions can be fitted by a C/(T +θ) + χ 0 function as long as the Cr doping level is larger than 0.005 (x > 0.005). The effective moment versus x data are shown in figure 18 in the Appendix; µ ef f , calculated per Cr, is found to be ∼ 4 µB. For x > 0.012, a kink-like feature can be seen at a temperature T * . As x increases from 0.017 to 0.077, the temperature T * increases from ∼ 20 K to ∼ 60 K. The criterion for determining T * and more discussion about the Curie-tail are given in the Appendix. Figure 5 presents the temperature dependent, normalized, electrical resistance of CaK(Fe 1−x Cr x ) 4 As 4 single crystals. RRR (the ratio of the 300 K and low temperature resistance just above T c ) decreases as the Cr substitution increases, which is consistent with increasing disorder. The superconducting transition temperatures decrease as Cr is added to the system. When x = 0.038, there is no signature of a superconducting transition detectable above 1.8 K. With increasing Cr content, a kink appears for x > 0.025 and rises to about 60 K for x = 0.077, and the features become more clearly resolved with increasing substitution. A similar feature also appeared in Mn-, Ni- and Co-substituted CaKFe 4 As 4 electrical resistance measurements [16,19]. The criterion for determining the transition temperature, T * , associated with this kink is shown in the Appendix in figure 21c, where R(T ) and dR(T )/dT are both shown. Figures 6 and 7 compare the normalized electrical resistivity of CaK(Fe 1−x Cr x ) 4 As 4 single crystals for electrical currents along the a-axis in the tetragonal plane (black lines in the main panels) with those along the tetragonal c-axis (red curves in the main panels). Samples with x = 0.017, figure 6, are in the range of SC and AFM coexistence; samples with x = 0.038, figure 7, are in the range where superconductivity is suppressed (see figure 8, below). The inter-plane resistivity of the samples with x = 0.017 shows a broad cross-over close to room temperature; a much milder feature is found in the in-plane transport. This is very similar to the results on the parent compound, x = 0 [6]. For samples with x = 0.038, figure 7, the inter-plane resistivity (red curve, main panel) reveals a clearly non-monotonic dependence. The cross-over transforms into a clear maximum above 200 K, followed by a second maximum centered at about 35 K, close to the temperature of long-range magnetic ordering.
Insets in the figures compare the derivatives of the normalized resistivities for the two current directions with d(M T /H)/dT (blue lines, right scale) [33]. For the sample with x = 0.017 in figure 6, no clear features are observed in the resistivity derivatives, although some flattening is observed for the c-axis resistivity. For samples with x = 0.038, the features associated with the magnetic ordering are more apparent, particularly in the inter-plane transport. In some cases the features associated with magnetic ordering in the FeAs-based superconductors are clearer for current flow along the c-axis as opposed to current flow in the basal ab-plane [31]. This is believed to be due to an alternating arrangement of the magnetic moments along the c-axis direction, providing partial gapping of the Fermi surface that affects inter-plane transport more strongly. The clarity of the features increases with x in CaK(Fe 1−x Cr x ) 4 As 4 , making them clearly visible in the raw resistivity data for x = 0.077 (see Appendix figure 23). Figure 8 summarizes the transition temperature results inferred from magnetization and resistance measurements, plots the superconducting and magnetic transitions as a function of substitution and constructs the T -x phase diagram for the CaK(Fe 1−x Cr x ) 4 As 4 system. As depicted in this phase diagram, increasing Cr substitution (i) suppresses T c monotonically, with it extrapolating to 0 K by x ∼ 0.03, and (ii) stabilizes a new transition, presumably an antiferromagnetic one, for x ≳ 0.017, with the transition temperature rising from ∼ 20 K for x = 0.017 to ∼ 60 K for x = 0.077. Each phase line is made out of data points inferred from R(T ) and M (T ) measurements, illustrating the good agreement between our criteria for inferring T c and T * from magnetization and resistivity data. The CaK(Fe 1−x T x ) 4 As 4 series, T = Mn, Co and Ni, have qualitatively similar phase diagrams, with the quantitative differences being associated with the substitution levels necessary to induce the magnetic phase and to suppress superconductivity. We were not able to infer the behavior of T * once it drops below T c , but if it is similar to other T substitutions [32,34], T * should be suppressed very quickly in the superconducting state. Further comparison of the CaK(Fe 1−x Cr x ) 4 As 4 phase diagram to the phase diagrams of the CaK(Fe 1−x T x ) 4 As 4 series will be made in the discussion section below. Given that the R(T ) data were taken in zero applied field whereas the M/H(T ) data shown in figure 4 were taken in 10 kOe, it is prudent to examine the field dependence of the transition associated with T * . In figure 9 we show the d(M T /H)/dT data [35] for the x = 0.077 sample for H || ab = 10, 30 and 50 kOe. As is commonly seen for antiferromagnetic phase transitions, an increasing magnetic field leads to a monotonic suppression of T * .
The inset to figure 9 shows that the extrapolated, H = 0, T * value would be 57.4 K, as compared to the value of 57.2 K for 10 kOe. This further confirms that there should be (and is) good agreement between the T * values inferred from the 10 kOe magnetization data and the T * values inferred from the resistance data in figure 8. In addition, these data suggest that a magnetic field could be used to fine tune the value of T * , if needed.
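The T* criterion based on d(MT/H)/dT and the zero-field extrapolation in the inset of figure 9 reduce to a numerical derivative and a linear fit; a minimal sketch with synthetic placeholder data is:

```python
import numpy as np

def t_star_from_chi(T, chi):
    """Locate T* as the peak in d(chi*T)/dT, a standard criterion for an AFM transition."""
    d = np.gradient(chi * T, T)
    return T[np.argmax(d)]

# Synthetic placeholder chi(T) with a kink near 57 K on a weak background.
T = np.linspace(20.0, 120.0, 500)
chi = 5e-3 / (T + 25.0) + 4e-4 + 1.5e-3 / (1.0 + np.exp(-(T - 57.0) / 1.5))
print(f"T* ~ {t_star_from_chi(T, chi):.1f} K")

# Linear extrapolation of T*(H) to zero field, as in the inset of figure 9 (placeholder values).
H = np.array([10.0, 30.0, 50.0])           # kOe
T_star = np.array([57.2, 56.6, 56.0])       # K
slope, intercept = np.polyfit(H, T_star, 1)
print(f"T*(H = 0) ~ {intercept:.1f} K")
```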
V. SUPERCONDUCTING CRITICAL FIELDS AND ANISOTROPY
Superconductivity can be studied as a function of field (in addition to temperature and doping). Before we present our H c2 (T ) results, based on R(T, H) data, it is useful to check the M (H) data. We start with M (H) data for x = 0.012, T c = 21.3 K, taken over a wide field range. The 2 K M (H) data shown in figure 10 are classically non-linear, showing a local minimum near H ∼ 2.5 kOe. For T = 2 K < T c , the H c2 value is clearly higher than the 65 kOe maximum field we applied (see discussion and figures below). H c1 can be inferred from the lower field M (H) data.
In order to better estimate the H c1 values we performed low field M (H) sweeps at base temperature; the samples were zero-field cooled (ZFC) to 2 K, with the magnet demagnetized at 60 K before cooling to minimize the remnant magnetic field. In figure 11a we show the M (H) data for 0 ≤ x ≤ 0.025 for H ≤ 100 Oe. As x increases, the deviation from the fully shielded, linear behavior, which occurs at H c1 , appears at lower and lower fields. As shown in the inset of figure 11a, ∆M is determined by subtracting the linear, lowest-field behavior of 4πM from H. Given the finite thickness of the samples and the field direction applied in the ab plane, there is a small demagnetizing factor (N < 0.077); therefore, H c1 is taken as the field at which vortices start to enter the sample and is determined as the point where the M (H) data deviate from the linear, lowest-field behavior. The non-zero value at H = 0 is due to the remnant field of the MPMS. The standard error of H c1 comes from measurements on at least four different samples. Figure 11b shows H c1 at 2 K for the different substitution levels; x = 0.025 is not shown in this plot since its T c ∼ 7 K is close to 2 K. The data shown in figure 11b are roughly linear in x, but there may be a hint of a change in behavior near x ∼ 0.01, where T c drops below T * in figure 8. This will be discussed further when we examine the London penetration depth below.
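A minimal sketch of the H_c1 criterion described above (the field at which M(H) first deviates from the initial linear shielding line) is given below; the deviation threshold and the data are illustrative choices, not the values used for the published analysis.

```python
import numpy as np

def estimate_hc1(H, M, linear_window_oe=10.0, threshold=0.02):
    """Estimate Hc1 (Oe) as the field where 4*pi*M first deviates from the low-field Meissner line.

    H in Oe, M as volume magnetization; `threshold` is the fractional deviation used to flag vortex entry."""
    low = H <= linear_window_oe
    p = np.polyfit(H[low], 4.0 * np.pi * M[low], 1)          # initial (full-shielding) line
    baseline = np.polyval(p, H)
    deviation = np.abs(4.0 * np.pi * M - baseline)
    flagged = np.where(deviation > threshold * np.abs(baseline))[0]
    return H[flagged[0]] if flagged.size else None

# Placeholder sweep: linear shielding up to ~40 Oe, then flux entry.
H = np.linspace(0.5, 100, 200)
M = np.where(H < 40, -H / (4 * np.pi), -(40 + 0.3 * (H - 40)) / (4 * np.pi))
print(f"Hc1 ~ {estimate_hc1(H, M):.1f} Oe")
```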
Black circles and the dashed line in figure 13 show the data for pure CaKFe4As4 [6]. The anisotropies of the coherence length and the penetration depth are expected to be the same close to Tc, but can have opposite temperature dependences upon cooling below that [37]. However, almost no temperature dependence of γ is seen in the temperature range measured. Based on that, the average value of γ can be taken as a good estimate of both anisotropies at low temperature as well. Given that we have determined Hc2(T) for temperatures close to Tc, we can evaluate H'c2/Tc close to Tc, where H'c2(T) = dHc2(T)/dT, specifically seeing how it changes as Tc drops below T* with increasing x. The error in H'c2/Tc comes from the linear fit of Hc2(T) near Tc. In the case of other Fe-based systems [19,38-41], clear changes in H'c2/Tc were associated with changes in the magnetic sublattice coexisting with superconductivity (i.e. ordered or disordered). In figure 14 we can see that there is a change in the x-dependence of H'c2/Tc for x > 0.012, the substitution level beyond which Tc is suppressed below T*. Comparison with the slope change of Hc2 in the pressure-temperature phase diagram of CaK(Fe1−xNix)4As4 [38] further suggests that this is probably related to changes in the Fermi surface caused by the onset of the new periodicity associated with the AFM order.
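The quantity H'c2/Tc plotted in figure 14 is simply the slope of a linear fit to the near-Tc Hc2(T) points, normalized by the zero-field Tc. A minimal sketch, with assumed array names and file layout, is:

    import numpy as np

    def hc2_slope_over_tc(T, Hc2):
        # T   : transition temperatures (K) determined at each applied field
        # Hc2 : the corresponding applied fields (kOe), restricted to the
        #       near-Tc region where Hc2(T) is approximately linear
        slope, intercept = np.polyfit(T, Hc2, 1)   # slope = dHc2/dT (negative)
        Tc = -intercept / slope                    # zero-field Tc from the same fit
        return slope / Tc, Tc

    # Hypothetical usage for one field orientation:
    # T_onset, fields = np.loadtxt("Hc2_ab_x0017.dat", unpack=True)
    # print(hc2_slope_over_tc(T_onset, fields))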
VI. DISCUSSION AND SUMMARY
The T-x phase diagram for CaK(Fe1−xCrx)4As4 (figure 8) is qualitatively similar to those found for Co-, Ni- and Mn-substituted CaKFe4As4: there is a clear suppression of Tc with increasing Cr substitution as well as the onset of what is likely to be AFM ordering for x > 0.012.
In figures 11 and 14 we presented measurements and analysis of Hc1 and Hc2 data. Whereas we see only a subtle effect, if any, of the onset of AFM ordering on Hc1 (figure 11b), there is a clear effect on Hc2 (figure 14). Using our Hc1 and Hc2 data we can also extract information about the superconducting coherence length and the London penetration depth. Figure 15a shows the coherence length, ξ, of CaK(Fe1−xCrx)4As4 as a function of x. ξ is estimated using the relation Hc2(0) ≈ 0.69 |dHc2/dT|_Tc Tc together with the anisotropic Ginzburg-Landau expressions H_c2^c(0) = Φ0/(2πξ_ab^2) and H_c2^ab(0) = Φ0/(2πξ_ab ξ_c) [42]. Figure 15b shows the London penetration depth, λ_ab, as a function of x. Since, according to figure 13, the γ of Hc2 does not change much as the temperature decreases below Tc, the anisotropy of the penetration depth is estimated as the average of the anisotropy of Hc2 at low temperature. λ_ab is obtained using λ_c/λ_ab = ξ_ab/ξ_c = H_c2^ab(T)/H_c2^c(T) = γ = 1/ε, with ε being the angle-dependent anisotropy parameter, and κ_ab = λ_ab λ_c/ξ_ab ξ_c [43-45]. Coherence lengths and penetration depths increase as the substitution level increases, as expected given that ξ depends on dHc2/dT and the penetration depth depends on Hc1 and ξ. Both figures 15a and 15b show breaks in behavior near x ∼ 0.01, the substitution level at which T* emerges from below Tc. Figure 16a shows λ^-2 versus σTc^2 for Cr- and Mn-substituted CaKFe4As4, where σ is the normal-state conductivity measured just above Tc (the resistivity data are shown in Appendix figure 20). The CaK(Fe1−xMnx)4As4 data roughly follow the behavior associated with Homes-type scaling in the presence of pair breaking [46-48], and the CaK(Fe1−xCrx)4As4 data also follow the Homes scaling, with a slightly different slope. Figure 16b shows the Homes scaling of superconductors on a log-log scale [46]. The other pnictides are Ba(Fe0.92Co0.08)2As2 and Ba(Fe0.95Ni0.05)2As2; the cuprates are YBa2Cu3O6+y. On this log-log scale, both the CaK(Fe1−xCrx)4As4 and CaK(Fe1−xMnx)4As4 data sets agree rather well with the other data, although they are shifted up somewhat. It should be noted that the other data were determined from optical measurements [46], and differences in criteria as well as measurement techniques may be responsible for the offset.
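As an illustration of how ξ and λ_ab can be extracted from the measured Hc2 slopes and Hc1 values, here is a hedged numerical sketch using the standard WHH and anisotropic Ginzburg-Landau relations quoted above; the example input numbers, the κ definition with the square root, and the iteration scheme are our assumptions, not the authors' actual procedure.

    import numpy as np

    PHI0 = 2.07e-7   # flux quantum in G*cm^2 (cgs)

    def coherence_lengths(dHc2c_dT, dHc2ab_dT, Tc):
        # WHH estimate Hc2(0) ~ 0.69*|dHc2/dT|_Tc * Tc, fields in Oe (= G)
        Hc2c0 = 0.69 * abs(dHc2c_dT) * Tc
        Hc2ab0 = 0.69 * abs(dHc2ab_dT) * Tc
        xi_ab = np.sqrt(PHI0 / (2 * np.pi * Hc2c0))    # Hc2^c  = Phi0/(2*pi*xi_ab^2)
        xi_c = PHI0 / (2 * np.pi * Hc2ab0 * xi_ab)     # Hc2^ab = Phi0/(2*pi*xi_ab*xi_c)
        return xi_ab, xi_c

    def penetration_depth_ab(Hc1_ab, xi_ab, xi_c, gamma):
        # Invert Hc1^ab = Phi0/(4*pi*lam_ab*lam_c)*(ln(kappa_ab)+0.5) with
        # lam_c = gamma*lam_ab; kappa_ab = sqrt(lam_ab*lam_c/(xi_ab*xi_c)) is
        # the common anisotropic-GL convention (assumed here).
        lam_ab = 1.0e-5                                # initial guess, 100 nm
        for _ in range(200):                           # weak (logarithmic) dependence converges quickly
            kappa = np.sqrt(gamma) * lam_ab / np.sqrt(xi_ab * xi_c)
            lam_ab = np.sqrt(PHI0 * (np.log(kappa) + 0.5) /
                             (4 * np.pi * Hc1_ab * gamma))
        return lam_ab, gamma * lam_ab

    # Illustrative (not measured) inputs: slopes in Oe/K, Tc in K, Hc1 in Oe
    # xi_ab, xi_c = coherence_lengths(-40e3, -100e3, 21.3)
    # print(penetration_depth_ab(150.0, xi_ab, xi_c, 2.5))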
The T-x phase diagrams of the T = Co and Ni substitutions in CaK(Fe1−xTx)4As4 scaled almost exactly onto one another when plotted as a function of the number of added electrons [16,21]. This led to the conclusion that, for electron doping of CaKFe4As4, the number of electrons added was the control parameter for both the stabilization of magnetic ordering and the suppression of superconductivity. This scaling did not seem to work for the case of Mn substitution [19], but with only one "hole-like" transition metal substitution it was hard to draw clear conclusions.
When Cr is substituted into CaKFe4As4, though, there is a qualitatively similar suppression of superconductivity as well as stabilization of magnetic order, as was found for T = Mn, Co and Ni. (Figure 16: (a) λ^-2 versus σTc^2 (Homes scaling) for Cr (black) and Mn (red) substituted CaKFe4As4; (b) the Homes scaling with other superconductors [46]. Apart from the CaK(Fe1−xCrx)4As4 and CaK(Fe1−xMnx)4As4 points, the Homes scaling is given by 1/λ_s^2 ∝ Tc σ_dc, where σ_dc is the DC conductivity and the data points are obtained from optical spectroscopy; the red dash-dot ellipse marks the Mn- and Cr-substituted 1144 data points.) However, there is a clear and important difference on a quantitative level, as shown in figure 17a. The T-x phase diagram of CaK(Fe1−xCrx)4As4 is essentially identical to that of CaK(Fe1−xMnx)4As4. This is very different from the behavior shown in figure 17b for the electron-doped 1144 compounds, where the electron count seemed to be the key variable.
In figure 17c the CaK(Fe1−xTx)4As4 phase diagrams for T = Cr, Mn, Co and Ni are plotted on the same T and ∆e− axes. Comparison of figures 17a, b and c reveals a clear and striking difference between the hole- and electron-doped CaK(Fe1−xTx)4As4 systems. Whereas for the electron-doped systems (T = Co, Ni) there is very clear scaling with the number of added electrons, for the hole-doped systems (T = Mn, Cr) there is clear scaling with the number of substituted atoms, x. These striking differences in the phase diagrams raise the question of what is different between the two types of substitution. When there were only data on Mn substitution to compare with the Co and Ni substitutions, one possible explanation could be based on an asymmetric density of electronic states on either side of E_F. Given that the Cr- and Mn-substituted phase diagrams scale with x rather than e−, this is no longer a possibility.
A different approach to these data is to note that there is one other clear difference between the Mn and Cr substitutions as compared to the Co and Ni ones. Mn and Cr clearly bring local-moment-like behavior, as manifested by their conspicuous, high-temperature Curie tails that grow with increasing x. This behavior is absent for the Co and Ni substitutions. The effective moments inferred from the Mn and Cr Curie tails (∼ 5 µB [19] and ∼ 4 µB, respectively) are consistent with Mn3+ and Cr3+ valencies. The Cr and Mn thus appear to behave like local-moment impurities. As such, it is not surprising that they lead to a stronger suppression of Tc (via Abrikosov-Gor'kov pair breaking [49]). In a similar manner, it is not surprising that the addition of relatively large local-moment impurities to an itinerant, relatively small-moment system helps to stabilize magnetic order. Given that the sizes of the local moments are similar, it is also not surprising that we find that the T-x phase diagrams scale well. The fact that CaKFe4As4 manifests rather bi-modal responses to T substitution for T = Co and Ni versus T = Mn and Cr is consistent with the growing understanding of many of the Fe-based superconductor families [13,50] as manifesting properties in between those of a wide-band metal (which would support rigid-band shifting) and a more ionic- or Zintl-like compound (which would support valence-counting-like behavior).
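The quoted moments can be checked against the spin-only estimate µ_eff = g√(S(S+1)) µB. This quick consistency check is not part of the paper's analysis, and the assumption of fully quenched orbital moments is ours.

    from math import sqrt

    def spin_only_moment(S, g=2.0):
        # Spin-only effective moment in Bohr magnetons
        return g * sqrt(S * (S + 1))

    # Cr3+ (3d^3, S = 3/2) and Mn3+ (3d^4, S = 2), assuming quenched orbital moments
    print(f"Cr3+: {spin_only_moment(1.5):.2f} muB")   # ~3.87 muB, close to the ~4 muB Curie tail
    print(f"Mn3+: {spin_only_moment(2.0):.2f} muB")   # ~4.90 muB, close to the ~5 muB Curie tail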
In summary, we have been able to grow and study the CaK(Fe1−xCrx)4As4 system. Based on magnetic and transport measurements, we assemble a T-x phase diagram that clearly shows the suppression of the superconducting Tc with the addition of Cr, with Tc dropping from 35 K for x = 0 to zero for x ∼ 0.03, as well as the stabilization of magnetic order for x > 0.012, with 22 K ≤ T* ≤ 60 K. As x becomes greater than 0.012 and Tc becomes less than T*, a clear change in the behavior of H'c2/Tc and of the associated superconducting coherence length, ξ, can be seen. These changes are associated with the probable changes in the Fermi surface that accompany the AFM ordering at T*. Comparable features in Hc1 or the London penetration depth are not clearly resolvable.
Figure 18: (a) Curie-Weiss temperature, θ, and effective moment, µ_eff, obtained from a Curie-Weiss fit to the magnetization difference (∆M) between CaKFe4As4 and CaK(Fe1−xCrx)4As4 single crystals as a function of temperature, with a field of 10 kOe applied parallel to the crystallographic ab plane. (b) Temperature-independent susceptibility, χ0.
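The Curie-Weiss analysis summarized in figure 18, and described in the following paragraph, amounts to fitting ∆M/H with C/(T + θ) + χ0 and converting the Curie constant to an effective moment. The sketch below is illustrative only; the file name, fit window and starting values are assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def curie_weiss(T, C, theta, chi0):
        # chi(T) = C/(T + theta) + chi0, the form fit to the Cr-induced tail
        return C / (T + theta) + chi0

    def effective_moment(C_per_Cr):
        # mu_eff in Bohr magnetons from a Curie constant in emu*K/(mol Cr) (cgs)
        return np.sqrt(8.0 * C_per_Cr)

    # Hypothetical usage: T in K, delta_chi = (M_x - M_0)/H in emu/(mol Cr)
    # T, delta_chi = np.loadtxt("delta_chi_x0077.dat", unpack=True)
    # window = (T > Tc + 20) & (T < 250)          # fit range quoted in the text
    # (C, theta, chi0), _ = curve_fit(curie_weiss, T[window], delta_chi[window],
    #                                 p0=(1.0, 10.0, 1e-4))
    # print(f"mu_eff ~ {effective_moment(C):.2f} muB, theta ~ {theta:.1f} K")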
The magnetization plots shown in figure 4 have the appearance of Curie-Weiss tails, potentially associated with the Cr substitution. We fit the magnetization difference (∆M) between CaK(Fe1−xCrx)4As4 and CaKFe4As4 single crystals as a function of temperature, from 20 K above Tc to 250 K, with a field of 10 kOe applied parallel to the crystallographic ab plane, by a C/(T + θ) + χ0 function, assuming that the tail behavior is due only to Cr. Figure 18 shows the resulting fit parameters. Figure 19 shows the zero-field-cooled-warming (ZFCW) and field-cooled-warming (FCW) low-temperature magnetization as a function of temperature for CaK(Fe0.983Cr0.017)4As4 single crystals with a field of 50 Oe applied parallel to the ab plane. The large difference between ZFCW and FCW is consistent with the large pinning found even in pure CaKFe4As4 [51]. Figure 20 shows the temperature dependence of the resistivity, ρ, of CaK(Fe1−xCrx)4As4 single crystals with x < 0.38. The thickness is estimated from the density of pure CaKFe4As4 together with the mass and area of the plate-like samples. The superconducting transition temperature is suppressed, and the resistivity just above Tc is increased, by increasing substitution. Given the inevitably large geometric errors associated with the precise determination of the length between voltage contacts as well as the sample thickness, we consider the uncertainty in our resistivity values to be on the order of 20% and, as such, we plot the data as R(T)/R(300 K) in the main text. The criteria for inferring Tc and T* are shown in figure 21. For Tc (figure 21a) we use an onset criterion for the M(T) data and an offset criterion for the R(T) data. As is often the case, these criteria agree well, especially in the low-field limit. The error bar of Tc is taken as half the difference between the onset and offset values. Since, according to [33], d(χT)/dT and dρ/dT behave like Cp, which places the transition temperature between the onset and offset points, we use the average of the onset and offset values of d(χT)/dT and dρ/dT as T*. For T*, although the feature is much clearer for Cr substitution than it was for the Mn, Ni or Co substitutions [16,19], the features in M(T) and R(T) are still somewhat subtle at low substitution levels. We infer T* as the average of the onset and offset values and use half the difference between onset and offset as the error. (Figure 22: the T* anomaly appears clearly as a step in both d(M T /H)/dT and the resistance derivative, dR/dT; only the data above Tc are plotted, and the rhombus symbols mark the T* transition temperature, with error bars from the criteria introduced above.) (Figure caption, partial: CaK(Fe1−xCrx)4As4 data using the onset criterion (solid) and the offset criterion (hollow) inferred from the temperature-dependent electrical resistance data.) | 2023-02-10T06:42:44.265Z | 2023-02-09T00:00:00.000 | {
"year": 2023,
"sha1": "a041486ab31aae1971dc7eea787d32bead04ed1a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a041486ab31aae1971dc7eea787d32bead04ed1a",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |